
[rdma-core,3/5] pyverbs: Add dma-buf based MR support

Message ID 1606153984-104583-4-git-send-email-jianxin.xiong@intel.com (mailing list archive)
State New, archived
Series Add user space dma-buf support

Commit Message

Xiong, Jianxin Nov. 23, 2020, 5:53 p.m. UTC
Define a new sub-class of 'MR' that uses a dma-buf object for the memory
region. Define a new class 'DmaBuf' for dma-buf object allocation.

Signed-off-by: Jianxin Xiong <jianxin.xiong@intel.com>
---
 pyverbs/CMakeLists.txt |  2 ++
 pyverbs/dmabuf.pxd     | 13 +++++++++
 pyverbs/dmabuf.pyx     | 58 +++++++++++++++++++++++++++++++++++++
 pyverbs/libibverbs.pxd |  2 ++
 pyverbs/mr.pxd         |  5 ++++
 pyverbs/mr.pyx         | 77 ++++++++++++++++++++++++++++++++++++++++++++++++--
 6 files changed, 155 insertions(+), 2 deletions(-)
 create mode 100644 pyverbs/dmabuf.pxd
 create mode 100644 pyverbs/dmabuf.pyx

Comments

Jason Gunthorpe Nov. 23, 2020, 6:05 p.m. UTC | #1
On Mon, Nov 23, 2020 at 09:53:02AM -0800, Jianxin Xiong wrote:

> +cdef class DmaBuf:
> +    def __init__(self, size, unit=0):
> +        """
> +        Allocate DmaBuf object from a GPU device. This is done through the
> +        DRI device interface (/dev/dri/card*). Usually this requires the
> +        effective user id being root or being a member of the 'video' group.
> +        :param size: The size (in number of bytes) of the buffer.
> +        :param unit: The unit number of the GPU to allocate the buffer from.
> +        :return: The newly created DmaBuf object on success.
> +        """
> +        self.dmabuf_mrs = weakref.WeakSet()
> +        self.dri_fd = open('/dev/dri/card'+str(unit), O_RDWR)
> +
> +        args = bytearray(32)
> +        pack_into('=iiiiiiq', args, 0, 1, size, 8, 0, 0, 0, 0)
> +        ioctl(self.dri_fd, DRM_IOCTL_MODE_CREATE_DUMB, args)
> +        a, b, c, d, self.handle, e, self.size = unpack('=iiiiiiq', args)
> +
> +        args = bytearray(12)
> +        pack_into('=iii', args, 0, self.handle, O_RDWR, 0)
> +        ioctl(self.dri_fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, args)
> +        a, b, self.fd = unpack('=iii', args)
> +
> +        args = bytearray(16)
> +        pack_into('=iiq', args, 0, self.handle, 0, 0)
> +        ioctl(self.dri_fd, DRM_IOCTL_MODE_MAP_DUMB, args);
> +        a, b, self.map_offset = unpack('=iiq', args);

Wow, OK

Is it worth using ctypes here instead? Can you at least add a comment
before each pack specifying the 'struct XXX' this is following?

Does this work with normal Intel GPUs, like in a Laptop? AMD too?

Christian, I would be very happy to hear from you that this entire
work is good for AMD as well

Edward should look through this, but I'm glad to see something like
this

Thanks,
Jason
Xiong, Jianxin Nov. 23, 2020, 7:48 p.m. UTC | #2
> -----Original Message-----
> From: Jason Gunthorpe <jgg@ziepe.ca>
> Sent: Monday, November 23, 2020 10:05 AM
> To: Xiong, Jianxin <jianxin.xiong@intel.com>
> Cc: linux-rdma@vger.kernel.org; dri-devel@lists.freedesktop.org; Doug Ledford <dledford@redhat.com>; Leon Romanovsky
> <leon@kernel.org>; Sumit Semwal <sumit.semwal@linaro.org>; Christian Koenig <christian.koenig@amd.com>; Vetter, Daniel
> <daniel.vetter@intel.com>
> Subject: Re: [PATCH rdma-core 3/5] pyverbs: Add dma-buf based MR support
> 
> On Mon, Nov 23, 2020 at 09:53:02AM -0800, Jianxin Xiong wrote:
> 
> > +cdef class DmaBuf:
> > +    def __init__(self, size, unit=0):
> > +        """
> > +        Allocate DmaBuf object from a GPU device. This is done through the
> > +        DRI device interface (/dev/dri/card*). Usually this requires the
> > +        effective user id being root or being a member of the 'video' group.
> > +        :param size: The size (in number of bytes) of the buffer.
> > +        :param unit: The unit number of the GPU to allocate the buffer from.
> > +        :return: The newly created DmaBuf object on success.
> > +        """
> > +        self.dmabuf_mrs = weakref.WeakSet()
> > +        self.dri_fd = open('/dev/dri/card'+str(unit), O_RDWR)
> > +
> > +        args = bytearray(32)
> > +        pack_into('=iiiiiiq', args, 0, 1, size, 8, 0, 0, 0, 0)
> > +        ioctl(self.dri_fd, DRM_IOCTL_MODE_CREATE_DUMB, args)
> > +        a, b, c, d, self.handle, e, self.size = unpack('=iiiiiiq',
> > + args)
> > +
> > +        args = bytearray(12)
> > +        pack_into('=iii', args, 0, self.handle, O_RDWR, 0)
> > +        ioctl(self.dri_fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, args)
> > +        a, b, self.fd = unpack('=iii', args)
> > +
> > +        args = bytearray(16)
> > +        pack_into('=iiq', args, 0, self.handle, 0, 0)
> > +        ioctl(self.dri_fd, DRM_IOCTL_MODE_MAP_DUMB, args);
> > +        a, b, self.map_offset = unpack('=iiq', args);
> 
> Wow, OK
> 
> Is it worth using ctypes here instead? Can you at least add a comment before each pack specifying the 'struct XXX' this is following?
> 

The ioctl call only accepts a bytearray; I'm not sure how to use ctypes here. I will add
comments with the actual layout of the parameter structure.
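
For reference, the three ioctls follow these uapi structs; a sketch of what the
annotated code could look like (illustration only, not the final patch):

    # struct drm_mode_create_dumb {
    #     __u32 height;   -> 1
    #     __u32 width;    -> size  (bpp is 8, so one "pixel" per byte)
    #     __u32 bpp;      -> 8
    #     __u32 flags;    -> 0
    #     __u32 handle;   -> returned by the kernel
    #     __u32 pitch;    -> returned by the kernel
    #     __u64 size;     -> returned by the kernel
    # };
    args = bytearray(32)
    pack_into('=iiiiiiq', args, 0, 1, size, 8, 0, 0, 0, 0)
    ioctl(self.dri_fd, DRM_IOCTL_MODE_CREATE_DUMB, args)
    height, width, bpp, flags, self.handle, pitch, self.size = unpack('=iiiiiiq', args)

    # struct drm_prime_handle {
    #     __u32 handle;   -> the GEM handle from above
    #     __u32 flags;    -> O_RDWR
    #     __s32 fd;       -> returned dma-buf fd
    # };
    args = bytearray(12)
    pack_into('=iii', args, 0, self.handle, O_RDWR, 0)
    ioctl(self.dri_fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, args)
    handle, flags, self.fd = unpack('=iii', args)

    # struct drm_mode_map_dumb {
    #     __u32 handle;   -> the GEM handle from above
    #     __u32 pad;
    #     __u64 offset;   -> returned fake offset to pass to mmap()
    # };
    args = bytearray(16)
    pack_into('=iiq', args, 0, self.handle, 0, 0)
    ioctl(self.dri_fd, DRM_IOCTL_MODE_MAP_DUMB, args)
    handle, pad, self.map_offset = unpack('=iiq', args)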

> Does this work with normal Intel GPUs, like in a Laptop? AMD too?
> 

Yes, the interface is generic and works with most GPUs. Works with AMD, too.

> Christian, I would be very happy to hear from you that this entire work is good for AMD as well
> 
> Edward should look through this, but I'm glad to see something like this
> 
> Thanks,
> Jason
Daniel Vetter Nov. 24, 2020, 3:16 p.m. UTC | #3
On Mon, Nov 23, 2020 at 02:05:04PM -0400, Jason Gunthorpe wrote:
> On Mon, Nov 23, 2020 at 09:53:02AM -0800, Jianxin Xiong wrote:
> 
> > +cdef class DmaBuf:
> > +    def __init__(self, size, unit=0):
> > +        """
> > +        Allocate DmaBuf object from a GPU device. This is done through the
> > +        DRI device interface (/dev/dri/card*). Usually this requires the

Please use /dev/dri/renderD* instead. That's the interface meant for
unprivileged rendering access. card* is the legacy interface with
backwards compat galore, don't use.

Specifically if you do this on a gpu which also has display (maybe some
testing on a local developer machine, no idea ...) then you mess with
compositors and stuff.

Also wherever you copied this from, please also educate those teams that
using /dev/dri/card* for rendering stuff is a Bad Idea (tm)

> > +        effective user id being root or being a member of the 'video' group.
> > +        :param size: The size (in number of bytes) of the buffer.
> > +        :param unit: The unit number of the GPU to allocate the buffer from.
> > +        :return: The newly created DmaBuf object on success.
> > +        """
> > +        self.dmabuf_mrs = weakref.WeakSet()
> > +        self.dri_fd = open('/dev/dri/card'+str(unit), O_RDWR)
> > +
> > +        args = bytearray(32)
> > +        pack_into('=iiiiiiq', args, 0, 1, size, 8, 0, 0, 0, 0)
> > +        ioctl(self.dri_fd, DRM_IOCTL_MODE_CREATE_DUMB, args)
> > +        a, b, c, d, self.handle, e, self.size = unpack('=iiiiiiq', args)

Yeah no, don't allocate render buffers with create_dumb. Every time this
comes up I'm wondering whether we should just completely disable dma-buf
operations on these. Dumb buffers are explicitly only for software
rendering for display purposes when the gpu userspace stack isn't fully
running yet, aka boot splash.

And yes I know there's endless amounts of abuse of that stuff floating
around, especially on arm-soc/android systems.

> > +
> > +        args = bytearray(12)
> > +        pack_into('=iii', args, 0, self.handle, O_RDWR, 0)
> > +        ioctl(self.dri_fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, args)
> > +        a, b, self.fd = unpack('=iii', args)
> > +
> > +        args = bytearray(16)
> > +        pack_into('=iiq', args, 0, self.handle, 0, 0)
> > +        ioctl(self.dri_fd, DRM_IOCTL_MODE_MAP_DUMB, args);
> > +        a, b, self.map_offset = unpack('=iiq', args);
> 
> Wow, OK
> 
> Is it worth using ctypes here instead? Can you at least add a comment
> before each pack specifying the 'struct XXX' this is following?
> 
> Does this work with normal Intel GPUs, like in a Laptop? AMD too?
> 
> Christian, I would be very happy to hear from you that this entire
> work is good for AMD as well

I think the smallest generic interface for allocating gpu buffers which
are more useful than the stuff you get from CREATE_DUMB is gbm. That's
used by compositors to get bare metal opengl going on linux. Ofc Android
has gralloc for the same purpose, and cros has minigbm (which isn't the
same as gbm at all). So not so cool.

The other generic option is using vulkan, which works directly on bare
metal (without a compositor or anything running), and is cross vendor. So
cool, except not used for compute, which is generally the thing you want
if you have an rdma card.

Both gbm-egl/opengl and vulkan have extensions to hand you a dma-buf back,
properly.

Compute is the worst, because opencl is widely considered a mistake (maybe
opencl 3 is better, but nvidia is stuck on 1.2). The actually used stuff is
cuda (nvidia-only), rocm (amd-only) and now with intel also playing we
have xe (intel-only).

It's pretty glorious :-/

Also I think we discussed this already, but for actual p2p the intel
patches aren't in upstream yet. We have some internally, but with very
broken locking (in the process of getting fixed up, but it's taking time).

Cheers, Daniel

> Edward should look through this, but I'm glad to see something like
> this
> 
> Thanks,
> Jason
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
Jason Gunthorpe Nov. 24, 2020, 3:36 p.m. UTC | #4
On Tue, Nov 24, 2020 at 04:16:58PM +0100, Daniel Vetter wrote:

> Compute is the worst, because opencl is widely considered a mistake (maybe
> opencl 3 is better, but nvidia is stuck on 1.2). The actually used stuff is
> cuda (nvidia-only), rocm (amd-only) and now with intel also playing we
> have xe (intel-only).

> It's pretty glorious :-/

I enjoyed how the Intel version of CUDA is called "OneAPI" not "Third
API" ;)

Hopefully xe compute won't leave a lot of half-finished abandoned
kernel code like Xeon Phi did :(

> Also I think we discussed this already, but for actual p2p the intel
> patches aren't in upstream yet. We have some internally, but with very
> broken locking (in the process of getting fixed up, but it's taking time).

Someone needs to say this test works on a real system with an
unpatched upstream driver.

I thought AMD had the needed parts merged?

Jason
Xiong, Jianxin Nov. 24, 2020, 4:21 p.m. UTC | #5
> -----Original Message-----
> From: Jason Gunthorpe <jgg@ziepe.ca>
> Sent: Tuesday, November 24, 2020 7:36 AM
> To: Daniel Vetter <daniel@ffwll.ch>
> Cc: Xiong, Jianxin <jianxin.xiong@intel.com>; Leon Romanovsky <leon@kernel.org>; linux-rdma@vger.kernel.org; dri-
> devel@lists.freedesktop.org; Doug Ledford <dledford@redhat.com>; Vetter, Daniel <daniel.vetter@intel.com>; Christian Koenig
> <christian.koenig@amd.com>
> Subject: Re: [PATCH rdma-core 3/5] pyverbs: Add dma-buf based MR support
> 
> On Tue, Nov 24, 2020 at 04:16:58PM +0100, Daniel Vetter wrote:
> 
> > Compute is the worst, because opencl is widely considered a mistake
> > (maybe opencl 3 is better, but nvidia is stuck on 1.2). The actually
> > used stuff is cuda (nvidia-only), rocm (amd-only) and now with intel
> > also playing we have xe (intel-only).
> 
> > It's pretty glorious :-/
> 
> I enjoyed how the Intel version of CUDA is called "OneAPI" not "Third API" ;)
> 
> Hopefully xe compute won't leave a lot of half-finished abandoned kernel code like Xeon Phi did :(
> 
> > Also I think we discussed this already, but for actual p2p the intel
> > patches aren't in upstream yet. We have some internally, but with very
> > broken locking (in the process of getting fixed up, but it's taking time).
> 
> Someone needs to say this test works on a real system with an unpatched upstream driver.
> 
> I thought AMD had the needed parts merged?

Yes, I have tested these with AMD GPU.

> 
> Jason
Xiong, Jianxin Nov. 24, 2020, 6:45 p.m. UTC | #6
> -----Original Message-----
> From: Daniel Vetter <daniel@ffwll.ch>
> Sent: Tuesday, November 24, 2020 7:17 AM
> To: Jason Gunthorpe <jgg@ziepe.ca>
> Cc: Xiong, Jianxin <jianxin.xiong@intel.com>; Leon Romanovsky <leon@kernel.org>; linux-rdma@vger.kernel.org; dri-
> devel@lists.freedesktop.org; Doug Ledford <dledford@redhat.com>; Vetter, Daniel <daniel.vetter@intel.com>; Christian Koenig
> <christian.koenig@amd.com>
> Subject: Re: [PATCH rdma-core 3/5] pyverbs: Add dma-buf based MR support
> 
> On Mon, Nov 23, 2020 at 02:05:04PM -0400, Jason Gunthorpe wrote:
> > On Mon, Nov 23, 2020 at 09:53:02AM -0800, Jianxin Xiong wrote:
> >
> > > +cdef class DmaBuf:
> > > +    def __init__(self, size, unit=0):
> > > +        """
> > > +        Allocate DmaBuf object from a GPU device. This is done through the
> > > +        DRI device interface (/dev/dri/card*). Usually this
> > > +requires the
> 
> Please use /dev/dri/renderD* instead. That's the interface meant for unprivileged rendering access. card* is the legacy interface with
> backwards compat galore, don't use.
> 
> Specifically if you do this on a gpu which also has display (maybe some testing on a local developer machine, no idea ...) then you mess with
> compositors and stuff.
> 
> Also wherever you copied this from, please also educate those teams that using /dev/dri/card* for rendering stuff is a Bad Idea (tm)

/dev/dri/renderD* is not always available (e.g. for many iGPUs) and doesn't support
mode setting commands (including dumb_buf). The original intention here is to
have something to support the new tests added, not for general compute. 

> 
> > > +        effective user id being root or being a member of the 'video' group.
> > > +        :param size: The size (in number of bytes) of the buffer.
> > > +        :param unit: The unit number of the GPU to allocate the buffer from.
> > > +        :return: The newly created DmaBuf object on success.
> > > +        """
> > > +        self.dmabuf_mrs = weakref.WeakSet()
> > > +        self.dri_fd = open('/dev/dri/card'+str(unit), O_RDWR)
> > > +
> > > +        args = bytearray(32)
> > > +        pack_into('=iiiiiiq', args, 0, 1, size, 8, 0, 0, 0, 0)
> > > +        ioctl(self.dri_fd, DRM_IOCTL_MODE_CREATE_DUMB, args)
> > > +        a, b, c, d, self.handle, e, self.size = unpack('=iiiiiiq',
> > > + args)
> 
> Yeah no, don't allocate render buffers with create_dumb. Every time this comes up I'm wondering whether we should just completely
> disable dma-buf operations on these. Dumb buffers are explicitly only for software rendering for display purposes when the gpu userspace
> stack isn't fully running yet, aka boot splash.
> 
> And yes I know there's endless amounts of abuse of that stuff floating around, especially on arm-soc/android systems.

One alternative is to use the GEM_CREATE method which can be done via the renderD*
device, but the command is vendor specific, so the logic is a little bit more complex. 
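
For i915, for instance, it would look roughly like this (untested sketch; the
struct layout follows drm_i915_gem_create from include/uapi/drm/i915_drm.h and
the ioctl number is derived with the standard asm-generic encoding, so it should
be double-checked against the headers; the PRIME_HANDLE_TO_FD step stays the
same as with the dumb buffer):

    from os import open, O_RDWR
    from fcntl import ioctl
    from struct import pack_into, unpack

    def _IOWR(type_chr, nr, size):
        # asm-generic ioctl encoding: dir(2) | size(14) | type(8) | nr(8)
        return (3 << 30) | (size << 16) | (ord(type_chr) << 8) | nr

    DRM_COMMAND_BASE = 0x40
    DRM_I915_GEM_CREATE = 0x1b
    # struct drm_i915_gem_create { __u64 size; __u32 handle; __u32 pad; };
    DRM_IOCTL_I915_GEM_CREATE = _IOWR('d', DRM_COMMAND_BASE + DRM_I915_GEM_CREATE, 16)

    def i915_gem_create(render_fd, size):
        args = bytearray(16)
        pack_into('=qii', args, 0, size, 0, 0)
        ioctl(render_fd, DRM_IOCTL_I915_GEM_CREATE, args)
        bo_size, handle, pad = unpack('=qii', args)
        return handle, bo_size

    # render_fd = open('/dev/dri/renderD128', O_RDWR)
    # handle, size = i915_gem_create(render_fd, 1 << 20)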

> 
> > > +
> > > +        args = bytearray(12)
> > > +        pack_into('=iii', args, 0, self.handle, O_RDWR, 0)
> > > +        ioctl(self.dri_fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, args)
> > > +        a, b, self.fd = unpack('=iii', args)
> > > +
> > > +        args = bytearray(16)
> > > +        pack_into('=iiq', args, 0, self.handle, 0, 0)
> > > +        ioctl(self.dri_fd, DRM_IOCTL_MODE_MAP_DUMB, args);
> > > +        a, b, self.map_offset = unpack('=iiq', args);
> >
> > Wow, OK
> >
> > Is it worth using ctypes here instead? Can you at least add a comment
> > before each pack specifying the 'struct XXX' this is following?
> >
> > Does this work with normal Intel GPUs, like in a Laptop? AMD too?
> >
> > Christian, I would be very happy to hear from you that this entire
> > work is good for AMD as well
> 
> I think the smallest generic interface for allocating gpu buffers which are more useful than the stuff you get from CREATE_DUMB is gbm.
> That's used by compositors to get bare metal opengl going on linux. Ofc Android has gralloc for the same purpose, and cros has minigbm
> (which isn't the same as gbm at all). So not so cool.

Again, would the "renderD* + GEM_CREATE" combination be an acceptable alternative? 
That would be much simpler than going with gbm, with fewer dependencies in setting up
the testing environment.

> 
> The other generic option is using vulkan, which works directly on bare metal (without a compositor or anything running), and is cross vendor.
> So cool, except not used for compute, which is generally the thing you want if you have an rdma card.
> 
> Both gbm-egl/opengl and vulkan have extensions to hand you a dma-buf back, properly.
> 
> Compute is the worst, because opencl is widely considered a mistake (maybe opencl 3 is better, but nvidia is stuck on 1.2). The actually used
> stuff is cuda (nvidia-only), rocm (amd-only) and now with intel also playing we have xe (intel-only).
> 
> It's pretty glorious :-/
> 
> Also I think we discussed this already, but for actual p2p the intel patches aren't in upstream yet. We have some internally, but with very
> broken locking (in the process of getting fixed up, but it's taking time).
> 
> Cheers, Daniel
> 
> > Edward should look through this, but I'm glad to see something like
> > this
> >
> > Thanks,
> > Jason
> > _______________________________________________
> > dri-devel mailing list
> > dri-devel@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/dri-devel
> 
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
Daniel Vetter Nov. 25, 2020, 10:50 a.m. UTC | #7
On Tue, Nov 24, 2020 at 06:45:06PM +0000, Xiong, Jianxin wrote:
> > -----Original Message-----
> > From: Daniel Vetter <daniel@ffwll.ch>
> > Sent: Tuesday, November 24, 2020 7:17 AM
> > To: Jason Gunthorpe <jgg@ziepe.ca>
> > Cc: Xiong, Jianxin <jianxin.xiong@intel.com>; Leon Romanovsky <leon@kernel.org>; linux-rdma@vger.kernel.org; dri-
> > devel@lists.freedesktop.org; Doug Ledford <dledford@redhat.com>; Vetter, Daniel <daniel.vetter@intel.com>; Christian Koenig
> > <christian.koenig@amd.com>
> > Subject: Re: [PATCH rdma-core 3/5] pyverbs: Add dma-buf based MR support
> > 
> > On Mon, Nov 23, 2020 at 02:05:04PM -0400, Jason Gunthorpe wrote:
> > > On Mon, Nov 23, 2020 at 09:53:02AM -0800, Jianxin Xiong wrote:
> > >
> > > > +cdef class DmaBuf:
> > > > +    def __init__(self, size, unit=0):
> > > > +        """
> > > > +        Allocate DmaBuf object from a GPU device. This is done through the
> > > > +        DRI device interface (/dev/dri/card*). Usually this
> > > > +requires the
> > 
> > Please use /dev/dri/renderD* instead. That's the interface meant for unprivileged rendering access. card* is the legacy interface with
> > backwards compat galore, don't use.
> > 
> > Specifically if you do this on a gpu which also has display (maybe some testing on a local developer machine, no idea ...) then you mess with
> > compositors and stuff.
> > 
> > Also wherever you copied this from, please also educate those teams that using /dev/dri/card* for rendering stuff is a Bad Idea (tm)
> 
> /dev/dri/renderD* is not always available (e.g. for many iGPUs) and doesn't support
> mode setting commands (including dumb_buf). The original intention here is to
> have something to support the new tests added, not for general compute. 

Not having dumb_buf available is a feature. So even more reasons to use
that.

Also note that amdgpu has killed card* access pretty much, it's for
modesetting only.

> > > > +        effective user id being root or being a member of the 'video' group.
> > > > +        :param size: The size (in number of bytes) of the buffer.
> > > > +        :param unit: The unit number of the GPU to allocate the buffer from.
> > > > +        :return: The newly created DmaBuf object on success.
> > > > +        """
> > > > +        self.dmabuf_mrs = weakref.WeakSet()
> > > > +        self.dri_fd = open('/dev/dri/card'+str(unit), O_RDWR)
> > > > +
> > > > +        args = bytearray(32)
> > > > +        pack_into('=iiiiiiq', args, 0, 1, size, 8, 0, 0, 0, 0)
> > > > +        ioctl(self.dri_fd, DRM_IOCTL_MODE_CREATE_DUMB, args)
> > > > +        a, b, c, d, self.handle, e, self.size = unpack('=iiiiiiq',
> > > > + args)
> > 
> > Yeah no, don't allocate render buffers with create_dumb. Every time this comes up I'm wondering whether we should just completely
> > disable dma-buf operations on these. Dumb buffers are explicitly only for software rendering for display purposes when the gpu userspace
> > stack isn't fully running yet, aka boot splash.
> > 
> > And yes I know there's endless amounts of abuse of that stuff floating around, especially on arm-soc/android systems.
> 
> One alternative is to use the GEM_CREATE method which can be done via the renderD*
> device, but the command is vendor specific, so the logic is a little bit more complex. 

Yup. I guess the most minimal thing is to have a per-vendor (you can ask
drm for the driver name to match the right one) callback here to allocate
buffers correctly. Might be less churn than trying to pull in vulkan or
something like that.

It's at least what we're doing in igt for testing drm drivers (although
most of the generic igt tests are for display, so the dumb_buffer fallback is
available).

DRM_IOCTL_VERSION is the thing you'd need here, struct drm_version.name
has the field for figuring out which driver it is.
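
Untested sketch of that lookup, for reference (struct drm_version mirrored with
ctypes per include/uapi/drm/drm.h; same two-call dance drmGetVersion() does,
first call returns the lengths, second call fills the strings):

    import ctypes, os
    from fcntl import ioctl

    class drm_version(ctypes.Structure):
        _fields_ = [('version_major', ctypes.c_int),
                    ('version_minor', ctypes.c_int),
                    ('version_patchlevel', ctypes.c_int),
                    ('name_len', ctypes.c_size_t),
                    ('name', ctypes.c_char_p),
                    ('date_len', ctypes.c_size_t),
                    ('date', ctypes.c_char_p),
                    ('desc_len', ctypes.c_size_t),
                    ('desc', ctypes.c_char_p)]

    def _IOWR(type_chr, nr, size):
        return (3 << 30) | (size << 16) | (ord(type_chr) << 8) | nr

    DRM_IOCTL_VERSION = _IOWR('d', 0x00, ctypes.sizeof(drm_version))

    def drm_driver_name(path='/dev/dri/renderD128'):
        fd = os.open(path, os.O_RDWR)
        try:
            v = drm_version()
            ioctl(fd, DRM_IOCTL_VERSION, v)        # first call: lengths only
            name = ctypes.create_string_buffer(v.name_len + 1)
            v.name = ctypes.cast(name, ctypes.c_char_p)
            ioctl(fd, DRM_IOCTL_VERSION, v)        # second call: fills the name
            return name.value.decode()             # e.g. 'i915', 'amdgpu'
        finally:
            os.close(fd)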

Also drivers without render node support won't ever be in the same system
as an rdma card and actually be useful (because, well, they're either very old
or display-only). So not an issue I think.

> > > > +
> > > > +        args = bytearray(12)
> > > > +        pack_into('=iii', args, 0, self.handle, O_RDWR, 0)
> > > > +        ioctl(self.dri_fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, args)
> > > > +        a, b, self.fd = unpack('=iii', args)
> > > > +
> > > > +        args = bytearray(16)
> > > > +        pack_into('=iiq', args, 0, self.handle, 0, 0)
> > > > +        ioctl(self.dri_fd, DRM_IOCTL_MODE_MAP_DUMB, args);
> > > > +        a, b, self.map_offset = unpack('=iiq', args);
> > >
> > > Wow, OK
> > >
> > > Is it worth using ctypes here instead? Can you at least add a comment
> > > before each pack specifying the 'struct XXX' this is following?
> > >
> > > Does this work with normal Intel GPUs, like in a Laptop? AMD too?
> > >
> > > Christian, I would be very happy to hear from you that this entire
> > > work is good for AMD as well
> > 
> > I think the smallest generic interface for allocating gpu buffers which are more useful than the stuff you get from CREATE_DUMB is gbm.
> > That's used by compositors to get bare metal opengl going on linux. Ofc Android has gralloc for the same purpose, and cros has minigbm
> > (which isn't the same as gbm at all). So not so cool.
> 
> Again, would the "renderD* + GEM_CREATE" combination be an acceptable alternative? 
> That would be much simpler than going with gbm, with fewer dependencies in setting up
> the testing environment.

Yeah imo makes sense. It's a bunch more code for you to make it work on
i915 and amd, but it's not terrible. And avoids the dependencies, and also
avoids the abuse of card* and dumb buffers. Plus not really more complex,
you just need a table or something to match from the drm driver name to
the driver-specific buffer create function. Everything else stays the
same.
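
Roughly like this (everything here is a placeholder, nothing that exists in
pyverbs today; drm_driver_name() would be the DRM_IOCTL_VERSION query from the
previous mail, and the per-driver allocators the GEM_CREATE sketches elsewhere
in this thread):

    # hypothetical dispatch from drm driver name to buffer-create function
    GEM_CREATE_FUNCS = {
        'i915':   i915_gem_create,
        'amdgpu': amdgpu_gem_create,
    }

    def alloc_gpu_buffer(render_fd, driver_name, size):
        try:
            return GEM_CREATE_FUNCS[driver_name](render_fd, size)
        except KeyError:
            raise NotImplementedError('no GEM allocator for drm driver ' + driver_name)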

Also this opens up the door to force-test stuff like p2p in the future,
since at least on i915 you'll be able to ensure that a buffer is in vram
only.

Would be good if we also have a trick for amdgpu to make sure the buffer
stays in vram. I think there's some flags you can pass to the amdgpu
buffer create function. So maybe you want 2 testcases here, one allocates
the buffer in system memory, the other in vram for testing p2p
functionality. That kind of stuff isn't possible with dumb buffers.
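
That would be the 'domains' field of the amdgpu GEM create args (sketch based on
struct drm_amdgpu_gem_create_in from include/uapi/drm/amdgpu_drm.h; the domain
values are copied from that header and the ioctl number is again derived with
the generic encoding, so worth re-checking against the real header):

    from fcntl import ioctl
    from struct import pack_into, unpack

    def _IOWR(type_chr, nr, size):
        return (3 << 30) | (size << 16) | (ord(type_chr) << 8) | nr

    AMDGPU_GEM_DOMAIN_CPU  = 0x1
    AMDGPU_GEM_DOMAIN_GTT  = 0x2    # system memory the GPU can reach
    AMDGPU_GEM_DOMAIN_VRAM = 0x4    # device-local memory, what p2p testing wants

    DRM_COMMAND_BASE = 0x40
    DRM_AMDGPU_GEM_CREATE = 0x00
    # union drm_amdgpu_gem_create, 32 bytes:
    #   in:  __u64 bo_size; __u64 alignment; __u64 domains; __u64 domain_flags;
    #   out: __u32 handle;  __u32 _pad;
    DRM_IOCTL_AMDGPU_GEM_CREATE = _IOWR('d', DRM_COMMAND_BASE + DRM_AMDGPU_GEM_CREATE, 32)

    def amdgpu_gem_create(render_fd, size, domain=AMDGPU_GEM_DOMAIN_VRAM):
        args = bytearray(32)
        pack_into('=qqqq', args, 0, size, 4096, domain, 0)   # 4096-byte alignment
        ioctl(render_fd, DRM_IOCTL_AMDGPU_GEM_CREATE, args)
        handle, _pad = unpack('=ii', args[:8])
        return handle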
-Daniel




> > 
> > The other generic option is using vulkan, which works directly on bare metal (without a compositor or anything running), and is cross vendor.
> > So cool, except not used for compute, which is generally the thing you want if you have an rdma card.
> > 
> > Both gbm-egl/opengl and vulkan have extensions to hand you a dma-buf back, properly.
> > 
> > Compute is the worst, because opencl is widely considered a mistake (maybe opencl 3 is better, but nvidia is stuck on 1.2). The actually used
> > stuff is cuda (nvidia-only), rocm (amd-only) and now with intel also playing we have xe (intel-only).
> > 
> > It's pretty glorious :-/
> > 
> > Also I think we discussed this already, but for actual p2p the intel patches aren't in upstream yet. We have some internally, but with very
> > broken locking (in the process of getting fixed up, but it's taking time).
> > 
> > Cheers, Daniel
> > 
> > > Edward should look through this, but I'm glad to see something like
> > > this
> > >
> > > Thanks,
> > > Jason
> > > _______________________________________________
> > > dri-devel mailing list
> > > dri-devel@lists.freedesktop.org
> > > https://lists.freedesktop.org/mailman/listinfo/dri-devel
> > 
> > --
> > Daniel Vetter
> > Software Engineer, Intel Corporation
> > http://blog.ffwll.ch
Jason Gunthorpe Nov. 25, 2020, 12:14 p.m. UTC | #8
On Wed, Nov 25, 2020 at 11:50:41AM +0100, Daniel Vetter wrote:

> Yeah imo makes sense. It's a bunch more code for you to make it work on
> i915 and amd, but it's not terrible. And avoids the dependencies, and also
> avoids the abuse of card* and dumb buffers. Plus not really more complex,
> you just need a table or something to match from the drm driver name to
> the driver-specific buffer create function. Everything else stays the
> same.

If it is going to get more complicated please write it in C then. We
haven't done it yet, but you can link a C function through cython to
the python test script

If you struggle here I can probably work out the build system bits,
but it should not be too terrible

Jason
Xiong, Jianxin Nov. 25, 2020, 7:27 p.m. UTC | #9
> -----Original Message-----
> From: Jason Gunthorpe <jgg@ziepe.ca>
> Sent: Wednesday, November 25, 2020 4:15 AM
> To: Daniel Vetter <daniel@ffwll.ch>
> Cc: Xiong, Jianxin <jianxin.xiong@intel.com>; Leon Romanovsky <leon@kernel.org>; linux-rdma@vger.kernel.org; dri-
> devel@lists.freedesktop.org; Doug Ledford <dledford@redhat.com>; Vetter, Daniel <daniel.vetter@intel.com>; Christian Koenig
> <christian.koenig@amd.com>
> Subject: Re: [PATCH rdma-core 3/5] pyverbs: Add dma-buf based MR support
> 
> On Wed, Nov 25, 2020 at 11:50:41AM +0100, Daniel Vetter wrote:
> 
> > Yeah imo makes sense. It's a bunch more code for you to make it work
> > on
> > i915 and amd, but it's not terrible. And avoids the dependencies, and
> > also avoids the abuse of card* and dumb buffers. Plus not really more
> > complex, you just need a table or something to match from the drm
> > driver name to the driver-specific buffer create function. Everything
> > else stays the same.
> 
> If it is going to get more complicated please write it in C then. We haven't done it yet, but you can link a C function through cython to the
> python test script
> 
> If you struggle here I can probably work out the build system bits, but it should not be too terrible

Thanks Daniel and Jason. I have started working in this direction. There should be no
technical obstacle here.
Jason Gunthorpe Nov. 26, 2020, midnight UTC | #10
On Wed, Nov 25, 2020 at 07:27:07PM +0000, Xiong, Jianxin wrote:
> > From: Jason Gunthorpe <jgg@ziepe.ca>
> > Sent: Wednesday, November 25, 2020 4:15 AM
> > To: Daniel Vetter <daniel@ffwll.ch>
> > Cc: Xiong, Jianxin <jianxin.xiong@intel.com>; Leon Romanovsky <leon@kernel.org>; linux-rdma@vger.kernel.org; dri-
> > devel@lists.freedesktop.org; Doug Ledford <dledford@redhat.com>; Vetter, Daniel <daniel.vetter@intel.com>; Christian Koenig
> > <christian.koenig@amd.com>
> > Subject: Re: [PATCH rdma-core 3/5] pyverbs: Add dma-buf based MR support
> > 
> > On Wed, Nov 25, 2020 at 11:50:41AM +0100, Daniel Vetter wrote:
> > 
> > > Yeah imo makes sense. It's a bunch more code for you to make it work
> > > on
> > > i915 and amd, but it's not terrible. And avoids the dependencies, and
> > > also avoids the abuse of card* and dumb buffers. Plus not really more
> > > complex, you just need a table or something to match from the drm
> > > driver name to the driver-specific buffer create function. Everything
> > > else stays the same.
> > 
> > If it is going to get more complicated please write it in C then. We haven't done it yet, but you can link a C function through cython to the
> > python test script
> > 
> > If you struggle here I can probably work out the build system bits, but it should not be too terrible
> 
> Thanks Daniel and Jason. I have started working in this direction. There should be no
> technical obstacle here. 

Just to be clear I mean write some 'get dma buf fd' function in C, not
the whole test

Jason
Xiong, Jianxin Nov. 26, 2020, 12:43 a.m. UTC | #11
> -----Original Message-----
> From: Jason Gunthorpe <jgg@ziepe.ca>
> Sent: Wednesday, November 25, 2020 4:00 PM
> To: Xiong, Jianxin <jianxin.xiong@intel.com>
> Cc: Daniel Vetter <daniel@ffwll.ch>; Leon Romanovsky <leon@kernel.org>; linux-rdma@vger.kernel.org; dri-devel@lists.freedesktop.org;
> Doug Ledford <dledford@redhat.com>; Vetter, Daniel <daniel.vetter@intel.com>; Christian Koenig <christian.koenig@amd.com>
> Subject: Re: [PATCH rdma-core 3/5] pyverbs: Add dma-buf based MR support
> 
> On Wed, Nov 25, 2020 at 07:27:07PM +0000, Xiong, Jianxin wrote:
> > > From: Jason Gunthorpe <jgg@ziepe.ca>
> > > Sent: Wednesday, November 25, 2020 4:15 AM
> > > To: Daniel Vetter <daniel@ffwll.ch>
> > > Cc: Xiong, Jianxin <jianxin.xiong@intel.com>; Leon Romanovsky
> > > <leon@kernel.org>; linux-rdma@vger.kernel.org; dri-
> > > devel@lists.freedesktop.org; Doug Ledford <dledford@redhat.com>;
> > > Vetter, Daniel <daniel.vetter@intel.com>; Christian Koenig
> > > <christian.koenig@amd.com>
> > > Subject: Re: [PATCH rdma-core 3/5] pyverbs: Add dma-buf based MR
> > > support
> > >
> > > On Wed, Nov 25, 2020 at 11:50:41AM +0100, Daniel Vetter wrote:
> > >
> > > > Yeah imo makes sense. It's a bunch more code for you to make it
> > > > work on
> > > > i915 and amd, but it's not terrible. And avoids the dependencies,
> > > > and also avoids the abuse of card* and dumb buffers. Plus not
> > > > really more complex, you just need a table or something to match
> > > > from the drm driver name to the driver-specific buffer create
> > > > function. Everything else stays the same.
> > >
> > > If it is going to get more complicated please write it in C then. We
> > > haven't done it yet, but you can link a C function through cython to
> > > the python test script
> > >
> > > If you struggle here I can probably work out the build system bits,
> > > but it should not be too terrible
> >
> > Thanks Daniel and Jason. I have started working in this direction.
> > There should be no technical obstacle here.
> 
> Just to be clear I mean write some 'get dma buf fd' function in C, not the whole test
> 
Yes, that's my understanding.

Patch

diff --git a/pyverbs/CMakeLists.txt b/pyverbs/CMakeLists.txt
index 9542c4b..5aee02b 100644
--- a/pyverbs/CMakeLists.txt
+++ b/pyverbs/CMakeLists.txt
@@ -1,5 +1,6 @@ 
 # SPDX-License-Identifier: (GPL-2.0 OR Linux-OpenIB)
 # Copyright (c) 2019, Mellanox Technologies. All rights reserved. See COPYING file
+# Copyright (c) 2020, Intel Corporation. All rights reserved.
 
 rdma_cython_module(pyverbs ""
   addr.pyx
@@ -16,6 +17,7 @@  rdma_cython_module(pyverbs ""
   wr.pyx
   xrcd.pyx
   srq.pyx
+  dmabuf.pyx
   )
 
 rdma_python_module(pyverbs
diff --git a/pyverbs/dmabuf.pxd b/pyverbs/dmabuf.pxd
new file mode 100644
index 0000000..040db4b
--- /dev/null
+++ b/pyverbs/dmabuf.pxd
@@ -0,0 +1,13 @@ 
+# SPDX-License-Identifier: (GPL-2.0 OR Linux-OpenIB)
+# Copyright (c) 2020, Intel Corporation. All rights reserved.
+
+#cython: language_level=3
+
+cdef class DmaBuf:
+    cdef int dri_fd
+    cdef int handle
+    cdef int fd
+    cdef unsigned long size
+    cdef unsigned long map_offset
+    cdef object dmabuf_mrs
+    cdef add_ref(self, obj)
diff --git a/pyverbs/dmabuf.pyx b/pyverbs/dmabuf.pyx
new file mode 100644
index 0000000..6c7622d
--- /dev/null
+++ b/pyverbs/dmabuf.pyx
@@ -0,0 +1,58 @@ 
+# SPDX-License-Identifier: (GPL-2.0 OR Linux-OpenIB)
+# Copyright (c) 2020, Intel Corporation. All rights reserved.
+
+#cython: language_level=3
+
+import weakref
+
+from os import open, close, O_RDWR
+from fcntl import ioctl
+from struct import pack_into, unpack
+from pyverbs.base cimport close_weakrefs
+from pyverbs.mr cimport DmaBufMR
+
+cdef extern from "drm/drm.h":
+    cdef int DRM_IOCTL_MODE_CREATE_DUMB
+    cdef int DRM_IOCTL_MODE_MAP_DUMB
+    cdef int DRM_IOCTL_MODE_DESTROY_DUMB
+    cdef int DRM_IOCTL_PRIME_HANDLE_TO_FD
+
+cdef class DmaBuf:
+    def __init__(self, size, unit=0):
+        """
+        Allocate DmaBuf object from a GPU device. This is done through the
+        DRI device interface (/dev/dri/card*). Usually this requires the
+        effective user id being root or being a member of the 'video' group.
+        :param size: The size (in number of bytes) of the buffer.
+        :param unit: The unit number of the GPU to allocate the buffer from.
+        :return: The newly created DmaBuf object on success.
+        """
+        self.dmabuf_mrs = weakref.WeakSet()
+        self.dri_fd = open('/dev/dri/card'+str(unit), O_RDWR)
+
+        args = bytearray(32)
+        pack_into('=iiiiiiq', args, 0, 1, size, 8, 0, 0, 0, 0)
+        ioctl(self.dri_fd, DRM_IOCTL_MODE_CREATE_DUMB, args)
+        a, b, c, d, self.handle, e, self.size = unpack('=iiiiiiq', args)
+
+        args = bytearray(12)
+        pack_into('=iii', args, 0, self.handle, O_RDWR, 0)
+        ioctl(self.dri_fd, DRM_IOCTL_PRIME_HANDLE_TO_FD, args)
+        a, b, self.fd = unpack('=iii', args)
+
+        args = bytearray(16)
+        pack_into('=iiq', args, 0, self.handle, 0, 0)
+        ioctl(self.dri_fd, DRM_IOCTL_MODE_MAP_DUMB, args);
+        a, b, self.map_offset = unpack('=iiq', args);
+
+    def __dealloc__(self):
+        close_weakrefs([self.dmabuf_mrs])
+        args = bytearray(4)
+        pack_into('=i', args, 0, self.handle)
+        ioctl(self.dri_fd, DRM_IOCTL_MODE_DESTROY_DUMB, args)
+        close(self.dri_fd)
+
+    cdef add_ref(self, obj):
+        if isinstance(obj, DmaBufMR):
+            self.dmabuf_mrs.add(obj)
+
diff --git a/pyverbs/libibverbs.pxd b/pyverbs/libibverbs.pxd
index 6fbba54..95e51e1 100644
--- a/pyverbs/libibverbs.pxd
+++ b/pyverbs/libibverbs.pxd
@@ -507,6 +507,8 @@  cdef extern from 'infiniband/verbs.h':
     ibv_pd *ibv_alloc_pd(ibv_context *context)
     int ibv_dealloc_pd(ibv_pd *pd)
     ibv_mr *ibv_reg_mr(ibv_pd *pd, void *addr, size_t length, int access)
+    ibv_mr *ibv_reg_dmabuf_mr(ibv_pd *pd, uint64_t offset, size_t length,
+			      int fd, int access)
     int ibv_dereg_mr(ibv_mr *mr)
     int ibv_advise_mr(ibv_pd *pd, uint32_t advice, uint32_t flags,
                       ibv_sge *sg_list, uint32_t num_sge)
diff --git a/pyverbs/mr.pxd b/pyverbs/mr.pxd
index ebe8ada..b89cf02 100644
--- a/pyverbs/mr.pxd
+++ b/pyverbs/mr.pxd
@@ -1,5 +1,6 @@ 
 # SPDX-License-Identifier: (GPL-2.0 OR Linux-OpenIB)
 # Copyright (c) 2019, Mellanox Technologies. All rights reserved. See COPYING file
+# Copyright (c) 2020, Intel Corporation. All rights reserved.
 
 #cython: language_level=3
 
@@ -33,3 +34,7 @@  cdef class MW(PyverbsCM):
 
 cdef class DMMR(MR):
     cdef object dm
+
+cdef class DmaBufMR(MR):
+    cdef object dmabuf
+    cdef unsigned long offset
diff --git a/pyverbs/mr.pyx b/pyverbs/mr.pyx
index 7011da1..4102d3c 100644
--- a/pyverbs/mr.pyx
+++ b/pyverbs/mr.pyx
@@ -1,11 +1,12 @@ 
 # SPDX-License-Identifier: (GPL-2.0 OR Linux-OpenIB)
 # Copyright (c) 2019, Mellanox Technologies. All rights reserved. See COPYING file
+# Copyright (c) 2020, Intel Corporation. All rights reserved.
 
 import resource
 import logging
 
 from posix.mman cimport mmap, munmap, MAP_PRIVATE, PROT_READ, PROT_WRITE, \
-    MAP_ANONYMOUS, MAP_HUGETLB
+    MAP_ANONYMOUS, MAP_HUGETLB, MAP_SHARED
 from pyverbs.pyverbs_error import PyverbsError, PyverbsRDMAError, \
     PyverbsUserError
 from libc.stdint cimport uintptr_t, SIZE_MAX
@@ -14,9 +15,10 @@  from posix.stdlib cimport posix_memalign
 from libc.string cimport memcpy, memset
 cimport pyverbs.libibverbs_enums as e
 from pyverbs.device cimport DM
-from libc.stdlib cimport free
+from libc.stdlib cimport free, malloc
 from .cmid cimport CMID
 from .pd cimport PD
+from .dmabuf cimport DmaBuf
 
 cdef extern from 'sys/mman.h':
     cdef void* MAP_FAILED
@@ -348,6 +350,77 @@  cdef class DMMR(MR):
     cpdef read(self, length, offset):
         return self.dm.copy_from_dm(offset, length)
 
+cdef class DmaBufMR(MR):
+    def __init__(self, PD pd not None, length=0, access=0, DmaBuf dmabuf=None, offset=0):
+        """
+        Initializes a DmaBufMR (DMA-BUF Memory Region) of the given length
+        and access flags using the given PD and DmaBuf objects.
+        :param pd: A PD object
+        :param length: Length in bytes
+        :param access: Access flags, see ibv_access_flags enum
+        :param dmabuf: A DmaBuf object
+        :param offset: Byte offset from the beginning of the dma-buf
+        :return: The newly created DmaBufMR object on success
+        """
+        self.logger = logging.getLogger(self.__class__.__name__)
+        if dmabuf is None:
+            dmabuf = DmaBuf(length + offset)
+        self.mr = v.ibv_reg_dmabuf_mr(pd.pd, offset, length, dmabuf.fd, access)
+        if self.mr == NULL:
+            raise PyverbsRDMAErrno('Failed to register a dma-buf MR. length: {len}, access flags: {flags}'.
+                                   format(len=length, flags=access,))
+        super().__init__(pd, length, access)
+        self.pd = pd
+        self.dmabuf = dmabuf
+        self.offset = offset
+        pd.add_ref(self)
+        dmabuf.add_ref(self)
+        self.logger.debug('Registered dma-buf ibv_mr. Length: {len}, access flags {flags}'.
+                          format(len=length, flags=access))
+
+    def write(self, data, length, offset=0):
+        """
+        Write user data to the dma-buf backing the MR
+        :param data: User data to write
+        :param length: Length of the data to write
+        :param offset: Writing offset
+        :return: None
+        """
+        if isinstance(data, str):
+            data = data.encode()
+        # can't access the attributes if self.dmabuf is used w/o cast
+        dmabuf = <DmaBuf>self.dmabuf
+        cdef int off = offset + self.offset
+        cdef void *buf = mmap(NULL, length + off, PROT_READ | PROT_WRITE,
+                              MAP_SHARED, dmabuf.dri_fd, dmabuf.map_offset)
+        if buf == MAP_FAILED:
+            raise PyverbsError('Failed to map dma-buf of size {l}'.
+                               format(l=length))
+        memcpy(<char*>(buf + off), <char *>data, length)
+        munmap(buf, length + off)
+
+    cpdef read(self, length, offset):
+        """
+        Reads data from the dma-buf backing the MR
+        :param length: Length of data to read
+        :param offset: Reading offset
+        :return: The data on the buffer in the requested offset
+        """
+        # can't access the attributes if self.dmabuf is used w/o cast
+        dmabuf = <DmaBuf>self.dmabuf
+        cdef int off = offset + self.offset
+        cdef void *buf = mmap(NULL, length + off, PROT_READ | PROT_WRITE,
+                              MAP_SHARED, dmabuf.dri_fd, dmabuf.map_offset)
+        if buf == MAP_FAILED:
+            raise PyverbsError('Failed to map dma-buf of size {l}'.
+                               format(l=length))
+        cdef char *data = <char*>malloc(length)
+        memset(data, 0, length)
+        memcpy(data, <char*>(buf + off), length)
+        munmap(buf, length + off)
+        res = data[:length]
+        free(data)
+        return res
 
 def mwtype2str(mw_type):
     mw_types = {1:'IBV_MW_TYPE_1', 2:'IBV_MW_TYPE_2'}