
The documentation for cudaHostAlloc() says that with the cudaHostAllocMapped flag it allocates pinned memory on the host and "Maps the allocation into the CUDA address space". Does this mean that a mirror buffer is also allocated on the device, either during the cudaHostAlloc() or the cudaHostGetDevicePointer() call? Or does the device communicate with host memory on each access to the pointer returned by cudaHostGetDevicePointer()?

This question is different from "When to use cudaHostRegister() and cudaHostAlloc()? What is the meaning of 'Pinned or page-locked' memory? Which are the equivalent in OpenCL?" because I am not asking what the APIs are, when to use them, or what pinned memory is. I am asking specifically whether a mirror buffer is allocated on the GPU or not.

Serge Rogatch
  • Possible duplicate of [When to use cudaHostRegister() and cudaHostAlloc()? What is the meaning of "Pinned or page-locked" memory? Which are the equivalent in OpenCL?](http://stackoverflow.com/questions/39454465/when-to-use-cudahostregister-and-cudahostalloc-what-is-the-meaning-of-pinn) – Leos313 Sep 27 '16 at 07:32
  • @Leos313, there is nothing in that question or answers about the mirror buffer. I already knew the answers for that question. I am interested in the details on how GPU accesses CPU memory. – Serge Rogatch Sep 27 '16 at 07:50
  • 2
    No memory is allocated on the device, all accesses go to host memory. [This documentation](http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#mapped-memory) may be better suited for your needs. – tera Sep 27 '16 at 08:56

1 Answer


No "mirror" buffer is allocated.

When device code dereferences a pointer that refers to mapped host memory, each device read or write through that pointer generates PCIe traffic to or from host memory in order to service that access.
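To illustrate, here is a minimal sketch (not from the original answer; the kernel and variable names are made up for the example) showing the typical pattern: allocate mapped pinned host memory, obtain a device-side pointer to it, and have a kernel read and write it directly. No cudaMalloc() is ever called, so no device buffer exists; every access from the kernel reaches host RAM over the bus.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Increments every element in place. Because `data` points at mapped host
// memory, the read and the write below each travel over PCIe to host RAM.
__global__ void incrementAll(int *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] += 1;
}

int main()
{
    const int n = 256;
    int *h_ptr = nullptr, *d_ptr = nullptr;

    // Pinned host allocation, mapped into the CUDA address space.
    cudaHostAlloc(&h_ptr, n * sizeof(int), cudaHostAllocMapped);
    for (int i = 0; i < n; ++i) h_ptr[i] = i;

    // Device-side alias for the same physical host memory -- no copy is made.
    cudaHostGetDevicePointer(&d_ptr, h_ptr, 0);

    incrementAll<<<(n + 127) / 128, 128>>>(d_ptr, n);
    cudaDeviceSynchronize();  // kernel writes are visible to the host after this

    printf("h_ptr[10] = %d\n", h_ptr[10]);  // the kernel modified host memory in place
    cudaFreeHost(h_ptr);
    return 0;
}
```

On platforms with unified virtual addressing (UVA), d_ptr and h_ptr are in fact the same address, which underlines that there is only one allocation: the pinned one on the host.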

Robert Crovella