[RFC,0/8] *** Refactor struct virtgpu_object ***

Message ID 20200227002601.745-1-gurchetansingh@chromium.org

Message

Gurchetan Singh Feb. 27, 2020, 12:25 a.m. UTC
The main motivation behind this is eventually to have something like this:

struct virtio_gpu_shmem {
    struct drm_gem_shmem_object base;
    uint32_t hw_res_handle;
    struct sg_table *pages;
    (...)
};

struct virtio_gpu_vram {
    struct drm_gem_object base;  // or *drm_gem_vram_object
    uint32_t hw_res_handle;
    {offset, range};
    (...)
};
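
A helper like virtio_gpu_get_handle(..) (patch 3) would then hide the
object type from common paths.  Rough sketch -- the cast helpers and
exact signatures below are guesses, not code from the patches:

static uint32_t virtio_gpu_get_handle(struct drm_gem_object *obj)
{
    /* to_virtio_gpu_shmem / to_virtio_gpu_vram are hypothetical
     * container_of-style downcasts */
    if (virtio_gpu_is_shmem(obj))
        return to_virtio_gpu_shmem(obj)->hw_res_handle;

    return to_virtio_gpu_vram(obj)->hw_res_handle;
}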

Sending this out to solicit feedback on this approach.  Whichever approach
we decide on, landing incremental changes to internal structures reduces
rebasing costs and avoids mega-changes.

Gurchetan Singh (8):
  drm/virtio: make mmap callback consistent with callbacks
  drm/virtio: add virtio_gpu_is_shmem helper
  drm/virtio: add virtio_gpu_get_handle function
  drm/virtio: make RESOURCE_UNREF use virtio_gpu_get_handle(..)
  drm/virtio: make {ATTACH_RESOURCE, DETACH_RESOURCE} use
    virtio_gpu_get_handle(..)
  drm/virtio: rename virtio_gpu_object_array to virtio_gpu_gem_array
  drm/virtio: rename virtio_gpu_object_params to
    virtio_gpu_create_params
  drm/virtio: rename virtio_gpu_object to virtio_gpu_shmem

 drivers/gpu/drm/virtio/virtgpu_drv.h    |  72 ++++++------
 drivers/gpu/drm/virtio/virtgpu_gem.c    | 132 ++++++++++-----------
 drivers/gpu/drm/virtio/virtgpu_ioctl.c  |  50 ++++----
 drivers/gpu/drm/virtio/virtgpu_object.c | 148 +++++++++++++-----------
 drivers/gpu/drm/virtio/virtgpu_plane.c  |  52 ++++-----
 drivers/gpu/drm/virtio/virtgpu_vq.c     | 113 +++++++++---------
 6 files changed, 298 insertions(+), 269 deletions(-)

Comments

Gerd Hoffmann Feb. 27, 2020, 7:23 a.m. UTC | #1
On Wed, Feb 26, 2020 at 04:25:53PM -0800, Gurchetan Singh wrote:
> The main motivation behind this is eventually to have something like this:
> 
> struct virtio_gpu_shmem {
>     struct drm_gem_shmem_object base;
>     uint32_t hw_res_handle;
>     struct sg_table *pages;
>     (...)
> };
> 
> struct virtio_gpu_vram {
>     struct drm_gem_object base;  // or *drm_gem_vram_object
>     uint32_t hw_res_handle;
>     {offset, range};
>     (...)
> };

Given that we probably will not use drm_gem_vram_object and
drm_gem_shmem_object->base is drm_gem_object I think we can
go this route:

struct virtgpu_object {
	struct drm_gem_shmem_object base;
	enum object_type type;
	uint32_t hw_res_handle;
	[ ... ]
};

struct virtgpu_object_shmem {
	struct virtgpu_object base;
	struct sg_table *pages;
	[ ... ]
};

struct virtgpu_object_hostmem {
	struct virtgpu_object base;
	{offset, range};
	(...)
};

Then have helpers to get virtgpu_object_hostmem / virtgpu_object_shmem
from virtgpu_object via container_of, which also sanity-check
object_type (maybe we can check drm_gem_object->ops for that instead of
adding a new field).
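
For example (untested sketch; the enum values and helper names are
only illustrative):

enum object_type {
	VIRTGPU_OBJECT_SHMEM,
	VIRTGPU_OBJECT_HOSTMEM,
};

static inline struct virtgpu_object_shmem *
to_virtgpu_shmem(struct virtgpu_object *bo)
{
	/* sanity-check the type before downcasting */
	WARN_ON(bo->type != VIRTGPU_OBJECT_SHMEM);
	return container_of(bo, struct virtgpu_object_shmem, base);
}

static inline struct virtgpu_object_hostmem *
to_virtgpu_hostmem(struct virtgpu_object *bo)
{
	WARN_ON(bo->type != VIRTGPU_OBJECT_HOSTMEM);
	return container_of(bo, struct virtgpu_object_hostmem, base);
}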

> Sending this out to solicit feedback on this approach.  Whichever approach
> we decide on, landing incremental changes to internal structures reduces
> rebasing costs and avoids mega-changes.

That certainly makes sense.

cheers,
  Gerd
Gerd Hoffmann Feb. 27, 2020, 10:59 a.m. UTC | #2
On Wed, Feb 26, 2020 at 04:25:53PM -0800, Gurchetan Singh wrote:
> The main motivation behind this is eventually to have something like this:

patches 1+2 cherry-picked and pushed to -next.

thanks,
  Gerd
Gurchetan Singh Feb. 28, 2020, 2:03 a.m. UTC | #3
On Wed, Feb 26, 2020 at 11:23 PM Gerd Hoffmann <kraxel@redhat.com> wrote:
>
> On Wed, Feb 26, 2020 at 04:25:53PM -0800, Gurchetan Singh wrote:
> > The main motivation behind this is eventually to have something like this:
> >
> > struct virtio_gpu_shmem {
> >     struct drm_gem_shmem_object base;
> >     uint32_t hw_res_handle;
> >     struct sg_table *pages;
> >     (...)
> > };
> >
> > struct virtio_gpu_vram {
> >     struct drm_gem_object base;  // or *drm_gem_vram_object
> >     uint32_t hw_res_handle;
> >     {offset, range};
> >     (...)
> > };
>
> Given that we probably will not use drm_gem_vram_object

Makes sense not to use drm_gem_vram_object for now.

> and
> drm_gem_shmem_object->base is drm_gem_object I think we can
> go this route:
>
> struct virtgpu_object {

Yeah, using "virtgpu_" rather than "virtio_gpu" makes sense.  A bit
less wordy, though the current code is based on "virtio_gpu".

>         struct drm_gem_shmem_object base;
>         enum object_type type;
>         uint32_t hw_res_handle;
>         [ ... ]
> };
>
> struct virtgpu_object_shmem {
>         struct virtgpu_object base;
>         struct sg_table *pages;
>         [ ... ]
> };
>
> struct virtgpu_object_hostmem {
>         struct virtgpu_object base;
>         {offset, range};
>         (...)

I'm a kernel newbie, so it's not obvious to me why struct
drm_gem_shmem_object would be a base class for struct
virtgpu_object_hostmem.

The core utility of drm_gem_shmem_helper seems to be getting, pinning,
and releasing pages.  But with host-mem, we won't have an array of
pages, just an (offset, length) -- so which drm_gem_shmem_helper
function is useful here?

Side question: is drm_gem_object_funcs.vmap(..) /
drm_gem_object_funcs.vunmap(..) even possible for hostmem?

P.S:

The proof of concept hostmem implementation is on Gitlab [1][2].  Some notes:

- io_remap_pfn_range to get a userspace mapping
- calls drm_gem_private_object_init(..) rather than
drm_gem_object_init(..) [which would set up a shmemfs backing store].

[1] https://gitlab.freedesktop.org/virgl/drm-misc-next/-/blob/virtio-gpu-next/drivers/gpu/drm/virtio/virtgpu_drv.h#L80
[2] https://gitlab.freedesktop.org/virgl/drm-misc-next/-/blob/virtio-gpu-next/drivers/gpu/drm/virtio/virtgpu_hostmem.c#L179
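
For reference, the PoC's mmap path boils down to roughly this
(simplified sketch; error handling trimmed, and to_hostmem() /
bar_pfn are approximations of the Gitlab code, not exact names):

static int virtio_gpu_hostmem_mmap(struct drm_gem_object *obj,
                                   struct vm_area_struct *vma)
{
        struct virtio_gpu_hostmem *hostmem = to_hostmem(obj);
        unsigned long vm_size = vma->vm_end - vma->vm_start;

        if (vm_size > hostmem->range)
                return -EINVAL;

        vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
        /* no shmemfs pages here: map PCI BAR pages straight to userspace */
        return io_remap_pfn_range(vma, vma->vm_start,
                                  hostmem->bar_pfn + vma->vm_pgoff,
                                  vm_size, vma->vm_page_prot);
}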

> };
>
> Then have helpers to get virtgpu_object_hostmem / virtgpu_object_shmem
> from virtgpu_object via container_of which also sanity-check
> object_type (maybe we can check drm_gem_object->ops for that instead of
> adding a new field).
>
> > Sending this out to solicit feedback on this approach.  Whichever approach
> > we decide on, landing incremental changes to internal structures reduces
> > rebasing costs and avoids mega-changes.
>
> That certainly makes sense.
>
> cheers,
>   Gerd
>
Gerd Hoffmann Feb. 28, 2020, 10:28 a.m. UTC | #4
Hi,

> > struct virtgpu_object {
> 
> Yeah, using "virtgpu_" rather than "virtio_gpu" makes sense.

It wasn't my intention to suggest a rename.  It's just that the kernel
is a bit inconsistent here and I picked the wrong name.  Most places
use virtio_gpu but some use virtgpu (file names, ioctl API).

> > struct virtgpu_object_hostmem {
> >         struct virtgpu_object base;
> >         {offset, range};
> >         (...)
> 
> I'm a kernel newbie, so it's not obvious to me why struct
> drm_gem_shmem_object would be a base class for struct
> virtgpu_object_hostmem?

I think it is easier to just continue using virtio_gpu_object in most
places and cast to virtio_gpu_object_{shmem,hostmem} only if needed.
Makes it easier to deal with common fields like hw_res_handle.

In the hostmem case we would simply not use the drm_gem_shmem_object
fields except for drm_gem_shmem_object.base (which is drm_gem_object).

> Side question: is drm_gem_object_funcs.vmap(..) /
> drm_gem_object_funcs.vunmap(..) even possible for hostmem?

Sure.  Using ioremap should work, after asking the host to map the
object at some location in the PCI BAR.
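
Roughly like this (sketch only; bar_addr and range are hypothetical
fields tracking where the host placed the object in the BAR, and the
__iomem annotation is dropped for brevity):

static void *virtio_gpu_hostmem_vmap(struct drm_gem_object *obj)
{
        struct virtgpu_object_hostmem *hostmem = to_virtgpu_hostmem(obj);

        /* map the object's slice of the host-visible PCI BAR */
        return ioremap(hostmem->bar_addr, hostmem->range);
}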

cheers,
  Gerd