As it was originally written, Thrust is purely a host side abstraction. It cannot be used inside kernels. You can pass the device memory encapsulated inside a thrust::device_vector to your own kernel like this:

    thrust::device_vector<Foo> fooVector;
    // ... do something thrust-y with fooVector ...

    // Extract the raw device pointer from the vector
    Foo* fooArray = thrust::raw_pointer_cast( fooVector.data() );

    // Launch a kernel with the raw pointer and the element count
    // (the launch configuration here is a placeholder)
    someKernelCall<<< gridSize, blockSize >>>( fooArray, fooVector.size() );

And you can also use device memory not allocated by Thrust within Thrust algorithms by instantiating a thrust::device_ptr with the bare CUDA device memory pointer.

Edited four and a half years later to add that, as per the answer, Thrust 1.8 adds a sequential execution policy, which means you can run single-threaded versions of Thrust's algorithms on the device. Note that it still isn't possible to directly pass a Thrust device vector to a kernel, and device vectors can't be directly used in device code.
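The reverse direction mentioned above can be sketched as follows: wrapping a bare cudaMalloc'd pointer in a thrust::device_ptr so that Thrust algorithms can operate on memory Thrust did not allocate. The array size and values here are illustrative assumptions, not from the original answer.

```cuda
#include <thrust/device_ptr.h>
#include <thrust/fill.h>
#include <thrust/reduce.h>
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    const int n = 1024;                 // illustrative size
    int* raw = nullptr;
    cudaMalloc(&raw, n * sizeof(int));  // memory NOT allocated by Thrust

    // Wrap the bare device pointer so Thrust algorithms accept it
    thrust::device_ptr<int> dev(raw);

    thrust::fill(dev, dev + n, 1);              // runs on the device
    int sum = thrust::reduce(dev, dev + n, 0);  // expected to equal n

    printf("sum = %d\n", sum);
    cudaFree(raw);
    return 0;
}
```

This is the standard interop path: thrust::device_ptr tags the pointer as device memory, so the algorithm dispatches to the CUDA backend rather than treating it as a host range.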
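The Thrust 1.8 sequential execution policy mentioned in the edit can be sketched as below: calling a Thrust algorithm with thrust::seq from inside a kernel, so each thread runs its own single-threaded instance of the algorithm. The kernel name, slice layout, and launch configuration are illustrative assumptions.

```cuda
#include <thrust/execution_policy.h>
#include <thrust/reduce.h>
#include <cuda_runtime.h>
#include <cstdio>

// Each thread reduces its own slice of the input sequentially (Thrust >= 1.8)
__global__ void perThreadSum(const int* data, int sliceLen, int* out)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    const int* begin = data + tid * sliceLen;
    // thrust::seq runs the algorithm single-threaded within this thread
    out[tid] = thrust::reduce(thrust::seq, begin, begin + sliceLen, 0);
}

int main()
{
    const int threads = 4, sliceLen = 8;
    int *d_in, *d_out;
    cudaMalloc(&d_in, threads * sliceLen * sizeof(int));
    cudaMalloc(&d_out, threads * sizeof(int));
    cudaMemset(d_in, 0, threads * sliceLen * sizeof(int));

    perThreadSum<<<1, threads>>>(d_in, sliceLen, d_out);
    cudaDeviceSynchronize();

    int h_out[threads];
    cudaMemcpy(h_out, d_out, sizeof(h_out), cudaMemcpyDeviceToHost);
    for (int i = 0; i < threads; ++i)
        printf("thread %d sum = %d\n", i, h_out[i]);

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```

Note this does not contradict the closing caveat: the kernel still receives raw pointers, not a device_vector; thrust::seq only lets the algorithm body itself execute in device code.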