
drm/virtio: delay pinning the pages till first use

Message ID 20211102113139.154140-1-maksym.wezdecki@collabora.com (mailing list archive)
State New, archived
Series drm/virtio: delay pinning the pages till first use

Commit Message

Maksym Wezdecki Nov. 2, 2021, 11:31 a.m. UTC
From: mwezdeck <maksym.wezdecki@collabora.co.uk>

The idea behind the commit:
  1. do not pin the pages during the resource_create ioctl
  2. pin the pages on first use, during:
     - the transfer_*_host ioctls
     - the map ioctl
  3. introduce a new ioctl for pinning pages on demand

This change has no impact on user space.

Signed-off-by: mwezdeck <maksym.wezdecki@collabora.co.uk>
---
 drivers/gpu/drm/virtio/virtgpu_drv.h    |  6 ++-
 drivers/gpu/drm/virtio/virtgpu_ioctl.c  | 65 +++++++++++++++++++++++++
 drivers/gpu/drm/virtio/virtgpu_object.c | 42 ++++++++++++----
 include/uapi/drm/virtgpu_drm.h          | 10 ++++
 4 files changed, 113 insertions(+), 10 deletions(-)

Comments

Gerd Hoffmann Nov. 2, 2021, 1:03 p.m. UTC | #1
On Tue, Nov 02, 2021 at 12:31:39PM +0100, Maksym Wezdecki wrote:
> From: mwezdeck <maksym.wezdecki@collabora.co.uk>
> 
> The idea behind the commit:
>   1. not pin the pages during resource_create ioctl
>   2. pin the pages on the first use during:
> 	- transfer_*_host ioctl
>         - map ioctl

i.e. basically lazy pinning.  Approach looks sane to me.

>   3. introduce new ioctl for pinning pages on demand

What is the use case for this ioctl?
In any case this should be a separate patch.

> +	struct virtio_gpu_object_array *objs;
> +	struct virtio_gpu_object *bo;
> +	struct virtio_gpu_object_shmem *shmem;
> +
> +	objs = virtio_gpu_array_from_handles(file, &virtio_gpu_map->handle, 1);
> +	if (objs == NULL)
> +		return -ENOENT;
> +
> +	bo = gem_to_virtio_gpu_obj(objs->objs[0]);
> +	if (bo == NULL)
> +		return -ENOENT;
> +	
> +	shmem = to_virtio_gpu_shmem(bo);
> +	if (shmem == NULL)
> +		return -ENOENT;
> +
> +	if (!shmem->pages) {
> +		virtio_gpu_object_pin(vgdev, objs, 1);
> +	}

Move this into virtio_gpu_object_pin(),
or create a helper function for it ...

> +	objs = virtio_gpu_array_from_handles(file, &virtio_gpu_pin->handle, 1);
> +	if (objs == NULL)
> +		return -ENOENT;
> +
> +	bo = gem_to_virtio_gpu_obj(objs->objs[0]);
> +	if (bo == NULL)
> +		return -ENOENT;
> +	
> +	shmem = to_virtio_gpu_shmem(bo);
> +	if (shmem == NULL)
> +		return -ENOENT;
> +
> +	if (!shmem->pages) {
> +		return virtio_gpu_object_pin(vgdev, objs, 1);
> +	}

... to avoid this code duplication?
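
For illustration, one shape such a helper could take; the function name is invented, but every call it makes already appears elsewhere in this patch. It also drops the object-array reference when it is done, which the posted map/pin hunks do not appear to do:

/* Sketch only: look up one handle and pin its backing pages if they
 * are not pinned yet; the map, transfer and pin ioctls could all call
 * this instead of open-coding the lookup and check. */
static int virtio_gpu_gem_pin_handle(struct virtio_gpu_device *vgdev,
				     struct drm_file *file, u32 handle)
{
	struct virtio_gpu_object_array *objs;
	struct virtio_gpu_object_shmem *shmem;
	int ret = 0;

	objs = virtio_gpu_array_from_handles(file, &handle, 1);
	if (!objs)
		return -ENOENT;

	shmem = to_virtio_gpu_shmem(gem_to_virtio_gpu_obj(objs->objs[0]));
	if (!shmem->pages)
		ret = virtio_gpu_object_pin(vgdev, objs, 1);

	virtio_gpu_array_put_free(objs);
	return ret;
}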

> +int virtio_gpu_object_pin(struct virtio_gpu_device *vgdev,
> +			  struct virtio_gpu_object_array *objs,
> +			  int num_gem_objects)
> +{
> +	int i, ret;
> +
> +	for (i = 0; i < num_gem_objects; i++) {

> +		ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents);
> +		if (ret != 0) {
> +			return -EFAULT;
> +		}
> +
> +		virtio_gpu_object_attach(vgdev, bo, ents, nents);

I think it is enough to do the virtio_gpu_object_attach() call lazily.
virtio_gpu_object_shmem_init() should not actually allocate pages; that
only happens when virtio_gpu_object_attach() goes and asks for a scatter
list.

take care,
  Gerd
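
A rough sketch of the split described above, purely as an illustration: the ents/nents/attached state stashed on the shmem object is an assumption made for the example and exists neither in this patch nor in the current driver.

/* Sketch only: assumes create time built the mem entries and stored
 * them on the shmem object, so first use merely attaches the backing
 * pages to the host. */
static void virtio_gpu_object_attach_lazy(struct virtio_gpu_device *vgdev,
					  struct virtio_gpu_object *bo)
{
	struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);

	if (shmem->attached)
		return;

	virtio_gpu_object_attach(vgdev, bo, shmem->ents, shmem->nents);
	shmem->attached = true;
}
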
Chia-I Wu Nov. 2, 2021, 3:58 p.m. UTC | #2
On Tue, Nov 2, 2021 at 6:07 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
>
> On Tue, Nov 02, 2021 at 12:31:39PM +0100, Maksym Wezdecki wrote:
> > From: mwezdeck <maksym.wezdecki@collabora.co.uk>
> >
> > The idea behind the commit:
> >   1. not pin the pages during resource_create ioctl
> >   2. pin the pages on the first use during:
> >       - transfer_*_host ioctl
> >         - map ioctl
>
> i.e. basically lazy pinning.  Approach looks sane to me.
>
> >   3. introduce new ioctl for pinning pages on demand
>
> What is the use case for this ioctl?
> In any case this should be a separate patch.

Lazy pinning can be a nice optimization that userspace does not
necessarily need to know about.  This patch, however, skips pinning for
the execbuffer ioctl and introduces a new pin ioctl instead.  That is a
red flag.
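
One way to avoid the new ioctl, sketched only as an illustration of the alternative hinted at above: pin lazily from the execbuffer path as well, over the object array execbuffer already builds from bo_handles. The helper name is a placeholder.

/* Sketch only: walk an object array (e.g. execbuffer's buflist) and
 * pin whatever is not pinned yet, mirroring virtio_gpu_object_pin()
 * from this patch but skipping already-pinned objects. */
static int virtio_gpu_pin_unpinned(struct virtio_gpu_device *vgdev,
				   struct virtio_gpu_object_array *objs,
				   u32 nr_objs)
{
	u32 i;

	for (i = 0; i < nr_objs; i++) {
		struct virtio_gpu_object *bo =
			gem_to_virtio_gpu_obj(objs->objs[i]);
		struct virtio_gpu_object_shmem *shmem = to_virtio_gpu_shmem(bo);
		struct virtio_gpu_mem_entry *ents;
		unsigned int nents;
		int ret;

		if (shmem->pages)
			continue;

		ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents);
		if (ret)
			return ret;

		virtio_gpu_object_attach(vgdev, bo, ents, nents);
	}

	return 0;
}
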
Gerd Hoffmann Nov. 3, 2021, 7:30 a.m. UTC | #3
On Tue, Nov 02, 2021 at 08:58:55AM -0700, Chia-I Wu wrote:
> On Tue, Nov 2, 2021 at 6:07 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
> >
> > On Tue, Nov 02, 2021 at 12:31:39PM +0100, Maksym Wezdecki wrote:
> > > From: mwezdeck <maksym.wezdecki@collabora.co.uk>
> > >
> > > The idea behind the commit:
> > >   1. not pin the pages during resource_create ioctl
> > >   2. pin the pages on the first use during:
> > >       - transfer_*_host ioctl
> > >         - map ioctl
> >
> > i.e. basically lazy pinning.  Approach looks sane to me.
> >
> > >   3. introduce new ioctl for pinning pages on demand
> >
> > What is the use case for this ioctl?
> > In any case this should be a separate patch.
> 
> Lazy pinning can be a nice optimization that userspace does not
> necessarily need to know about.  This patch however skips pinning for
> execbuffer ioctl and introduces a new pin ioctl instead.  That is a
> red flag.

Ah, so the pin ioctl is for buffers which need a pin for execbuffer.

Yep, that isn't going to fly that way; it'll break old userspace.

Lazy pinning must be opt-in, so new userspace which knows about
the pin ioctl can enable lazy pinning.  One possible way would
be to add a flag for the VIRTGPU_RESOURCE_CREATE ioctl, so lazy
pinning can be enabled per resource.

take care,
  Gerd
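
To illustrate that opt-in, a minimal sketch follows; the flag name and bit value are placeholders that exist nowhere in the uapi, and it assumes the flags field of drm_virtgpu_resource_create would carry it.

/* Hypothetical uapi bit for drm_virtgpu_resource_create::flags. */
#define VIRTGPU_RESOURCE_FLAG_PIN_ON_DEMAND (1u << 31)

/* Kernel side: only defer pinning when userspace explicitly asked for
 * it at create time, so old userspace keeps the eager-pin behaviour. */
static bool virtio_gpu_wants_lazy_pin(u32 rc_flags)
{
	return rc_flags & VIRTGPU_RESOURCE_FLAG_PIN_ON_DEMAND;
}
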
kernel test robot Nov. 4, 2021, 12:44 p.m. UTC | #4
Hi Maksym,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on drm/drm-next]
[also build test ERROR on next-20211104]
[cannot apply to v5.15]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Maksym-Wezdecki/drm-virtio-delay-pinning-the-pages-till-first-use/20211102-193430
base:   git://anongit.freedesktop.org/drm/drm drm-next
config: i386-buildonly-randconfig-r002-20211101 (attached as .config)
compiler: clang version 14.0.0 (https://github.com/llvm/llvm-project 264d3b6d4e08401c5b50a85bd76e80b3461d77e6)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/1795d2fd78a334a37a02dba76ac1e314cf122467
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Maksym-Wezdecki/drm-virtio-delay-pinning-the-pages-till-first-use/20211102-193430
        git checkout 1795d2fd78a334a37a02dba76ac1e314cf122467
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 ARCH=i386 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

>> drivers/gpu/drm/virtio/virtgpu_object.c:254:11: error: variable 'ents' is uninitialized when used here [-Werror,-Wuninitialized]
                                                       ents, nents);
                                                       ^~~~
   drivers/gpu/drm/virtio/virtgpu_object.c:219:35: note: initialize the variable 'ents' to silence this warning
           struct virtio_gpu_mem_entry *ents;
                                            ^
                                             = NULL
>> drivers/gpu/drm/virtio/virtgpu_object.c:254:17: error: variable 'nents' is uninitialized when used here [-Werror,-Wuninitialized]
                                                       ents, nents);
                                                             ^~~~~
   drivers/gpu/drm/virtio/virtgpu_object.c:220:20: note: initialize the variable 'nents' to silence this warning
           unsigned int nents;
                             ^
                              = 0
   2 errors generated.


vim +/ents +254 drivers/gpu/drm/virtio/virtgpu_object.c

2f2aa13724d568 Gerd Hoffmann   2020-02-07  210  
dc5698e80cf724 Dave Airlie     2013-09-09  211  int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
4441235f9566e6 Gerd Hoffmann   2019-03-18  212  			     struct virtio_gpu_object_params *params,
530b28426a94b8 Gerd Hoffmann   2019-03-18  213  			     struct virtio_gpu_object **bo_ptr,
530b28426a94b8 Gerd Hoffmann   2019-03-18  214  			     struct virtio_gpu_fence *fence)
dc5698e80cf724 Dave Airlie     2013-09-09  215  {
e2324300f427ff Gerd Hoffmann   2019-08-29  216  	struct virtio_gpu_object_array *objs = NULL;
c66df701e783bc Gerd Hoffmann   2019-08-29  217  	struct drm_gem_shmem_object *shmem_obj;
dc5698e80cf724 Dave Airlie     2013-09-09  218  	struct virtio_gpu_object *bo;
2f2aa13724d568 Gerd Hoffmann   2020-02-07  219  	struct virtio_gpu_mem_entry *ents;
2f2aa13724d568 Gerd Hoffmann   2020-02-07  220  	unsigned int nents;
dc5698e80cf724 Dave Airlie     2013-09-09  221  	int ret;
dc5698e80cf724 Dave Airlie     2013-09-09  222  
dc5698e80cf724 Dave Airlie     2013-09-09  223  	*bo_ptr = NULL;
dc5698e80cf724 Dave Airlie     2013-09-09  224  
c66df701e783bc Gerd Hoffmann   2019-08-29  225  	params->size = roundup(params->size, PAGE_SIZE);
c66df701e783bc Gerd Hoffmann   2019-08-29  226  	shmem_obj = drm_gem_shmem_create(vgdev->ddev, params->size);
c66df701e783bc Gerd Hoffmann   2019-08-29  227  	if (IS_ERR(shmem_obj))
c66df701e783bc Gerd Hoffmann   2019-08-29  228  		return PTR_ERR(shmem_obj);
c66df701e783bc Gerd Hoffmann   2019-08-29  229  	bo = gem_to_virtio_gpu_obj(&shmem_obj->base);
dc5698e80cf724 Dave Airlie     2013-09-09  230  
556c62e85f9b97 Matthew Wilcox  2018-10-30  231  	ret = virtio_gpu_resource_id_get(vgdev, &bo->hw_res_handle);
e2324300f427ff Gerd Hoffmann   2019-08-29  232  	if (ret < 0)
e2324300f427ff Gerd Hoffmann   2019-08-29  233  		goto err_free_gem;
e2324300f427ff Gerd Hoffmann   2019-08-29  234  
530b28426a94b8 Gerd Hoffmann   2019-03-18  235  	bo->dumb = params->dumb;
530b28426a94b8 Gerd Hoffmann   2019-03-18  236  
e2324300f427ff Gerd Hoffmann   2019-08-29  237  	if (fence) {
e2324300f427ff Gerd Hoffmann   2019-08-29  238  		ret = -ENOMEM;
e2324300f427ff Gerd Hoffmann   2019-08-29  239  		objs = virtio_gpu_array_alloc(1);
e2324300f427ff Gerd Hoffmann   2019-08-29  240  		if (!objs)
e2324300f427ff Gerd Hoffmann   2019-08-29  241  			goto err_put_id;
c66df701e783bc Gerd Hoffmann   2019-08-29  242  		virtio_gpu_array_add_obj(objs, &bo->base.base);
e2324300f427ff Gerd Hoffmann   2019-08-29  243  
e2324300f427ff Gerd Hoffmann   2019-08-29  244  		ret = virtio_gpu_array_lock_resv(objs);
e2324300f427ff Gerd Hoffmann   2019-08-29  245  		if (ret != 0)
e2324300f427ff Gerd Hoffmann   2019-08-29  246  			goto err_put_objs;
e2324300f427ff Gerd Hoffmann   2019-08-29  247  	}
e2324300f427ff Gerd Hoffmann   2019-08-29  248  
897b4d1acaf563 Gerd Hoffmann   2020-09-23  249  	if (params->blob) {
3389082bb98296 Vivek Kasireddy 2021-04-12  250  		if (params->blob_mem == VIRTGPU_BLOB_MEM_GUEST)
3389082bb98296 Vivek Kasireddy 2021-04-12  251  			bo->guest_blob = true;
3389082bb98296 Vivek Kasireddy 2021-04-12  252  
897b4d1acaf563 Gerd Hoffmann   2020-09-23  253  		virtio_gpu_cmd_resource_create_blob(vgdev, bo, params,
897b4d1acaf563 Gerd Hoffmann   2020-09-23 @254  						    ents, nents);
897b4d1acaf563 Gerd Hoffmann   2020-09-23  255  	} else if (params->virgl) {
30172efbfb842c Gurchetan Singh 2020-09-23  256  		virtio_gpu_cmd_resource_create_3d(vgdev, bo, params,
30172efbfb842c Gurchetan Singh 2020-09-23  257  						  objs, fence);
30172efbfb842c Gurchetan Singh 2020-09-23  258  	} else {
1795d2fd78a334 mwezdeck        2021-11-02  259  		ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents);
1795d2fd78a334 mwezdeck        2021-11-02  260  		if (ret != 0) {
1795d2fd78a334 mwezdeck        2021-11-02  261  			virtio_gpu_array_put_free(objs);
1795d2fd78a334 mwezdeck        2021-11-02  262  			virtio_gpu_free_object(&shmem_obj->base);
1795d2fd78a334 mwezdeck        2021-11-02  263  			return ret;
1795d2fd78a334 mwezdeck        2021-11-02  264  		}
30172efbfb842c Gurchetan Singh 2020-09-23  265  		virtio_gpu_cmd_create_resource(vgdev, bo, params,
30172efbfb842c Gurchetan Singh 2020-09-23  266  					       objs, fence);
c76d4ab764adae Gurchetan Singh 2020-04-01  267  		virtio_gpu_object_attach(vgdev, bo, ents, nents);
30172efbfb842c Gurchetan Singh 2020-09-23  268  	}
dc5698e80cf724 Dave Airlie     2013-09-09  269  
dc5698e80cf724 Dave Airlie     2013-09-09  270  	*bo_ptr = bo;
dc5698e80cf724 Dave Airlie     2013-09-09  271  	return 0;
e2324300f427ff Gerd Hoffmann   2019-08-29  272  
e2324300f427ff Gerd Hoffmann   2019-08-29  273  err_put_objs:
e2324300f427ff Gerd Hoffmann   2019-08-29  274  	virtio_gpu_array_put_free(objs);
e2324300f427ff Gerd Hoffmann   2019-08-29  275  err_put_id:
e2324300f427ff Gerd Hoffmann   2019-08-29  276  	virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle);
e2324300f427ff Gerd Hoffmann   2019-08-29  277  err_free_gem:
c66df701e783bc Gerd Hoffmann   2019-08-29  278  	drm_gem_shmem_free_object(&shmem_obj->base);
e2324300f427ff Gerd Hoffmann   2019-08-29  279  	return ret;
dc5698e80cf724 Dave Airlie     2013-09-09  280  }
1795d2fd78a334 mwezdeck        2021-11-02  281  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org
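
For reference, the compiler's suggestion (initializing ents/nents) only silences the warning; guest-backed blobs presumably still need real entries. One possible shape of the blob branch in virtio_gpu_object_create(), offered as an assumption rather than a verified fix, with the virgl/else branches left as in the patch:

	if (params->blob) {
		if (params->blob_mem == VIRTGPU_BLOB_MEM_GUEST) {
			bo->guest_blob = true;
			/* Guest-backed blobs describe guest pages, so the
			 * mem entries are still needed here. */
			ret = virtio_gpu_object_shmem_init(vgdev, bo,
							   &ents, &nents);
			if (ret != 0) {
				virtio_gpu_array_put_free(objs);
				virtio_gpu_free_object(&shmem_obj->base);
				return ret;
			}
		} else {
			/* Host-only blobs have no guest backing to describe. */
			ents = NULL;
			nents = 0;
		}
		virtio_gpu_cmd_resource_create_blob(vgdev, bo, params,
						    ents, nents);
	}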

Patch

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
index e0265fe74aa5..cf2cad663575 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -278,7 +278,7 @@  struct virtio_gpu_fpriv {
 };
 
 /* virtgpu_ioctl.c */
-#define DRM_VIRTIO_NUM_IOCTLS 12
+#define DRM_VIRTIO_NUM_IOCTLS 13
 extern struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS];
 void virtio_gpu_create_context(struct drm_device *dev, struct drm_file *file);
 
@@ -455,6 +455,10 @@  int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 			     struct virtio_gpu_object **bo_ptr,
 			     struct virtio_gpu_fence *fence);
 
+int virtio_gpu_object_pin(struct virtio_gpu_device *vgdev,
+			  struct virtio_gpu_object_array *objs,
+			  int num_gem_objects);
+
 bool virtio_gpu_is_shmem(struct virtio_gpu_object *bo);
 
 int virtio_gpu_resource_id_get(struct virtio_gpu_device *vgdev,
diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index 5618a1d5879c..49bf53f358b5 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -102,6 +102,25 @@  static int virtio_gpu_map_ioctl(struct drm_device *dev, void *data,
 {
 	struct virtio_gpu_device *vgdev = dev->dev_private;
 	struct drm_virtgpu_map *virtio_gpu_map = data;
+	struct virtio_gpu_object_array *objs;
+	struct virtio_gpu_object *bo;
+	struct virtio_gpu_object_shmem *shmem;
+
+	objs = virtio_gpu_array_from_handles(file, &virtio_gpu_map->handle, 1);
+	if (objs == NULL)
+		return -ENOENT;
+
+	bo = gem_to_virtio_gpu_obj(objs->objs[0]);
+	if (bo == NULL)
+		return -ENOENT;
+	
+	shmem = to_virtio_gpu_shmem(bo);
+	if (shmem == NULL)
+		return -ENOENT;
+
+	if (!shmem->pages) {
+		virtio_gpu_object_pin(vgdev, objs, 1);
+	}
 
 	return virtio_gpu_mode_dumb_mmap(file, vgdev->ddev,
 					 virtio_gpu_map->handle,
@@ -292,6 +311,9 @@  static int virtio_gpu_getparam_ioctl(struct drm_device *dev, void *data,
 	case VIRTGPU_PARAM_SUPPORTED_CAPSET_IDs:
 		value = vgdev->capset_id_mask;
 		break;
+	case VIRTGPU_PARAM_PIN_ON_DEMAND:
+		value = 1;
+		break;
 	default:
 		return -EINVAL;
 	}
@@ -397,6 +419,7 @@  static int virtio_gpu_transfer_from_host_ioctl(struct drm_device *dev,
 	struct virtio_gpu_object *bo;
 	struct virtio_gpu_object_array *objs;
 	struct virtio_gpu_fence *fence;
+	struct virtio_gpu_object_shmem *shmem;
 	int ret;
 	u32 offset = args->offset;
 
@@ -414,6 +437,11 @@  static int virtio_gpu_transfer_from_host_ioctl(struct drm_device *dev,
 		goto err_put_free;
 	}
 
+	shmem = to_virtio_gpu_shmem(bo);
+	if (!shmem->pages) {
+		virtio_gpu_object_pin(vgdev, objs, 1);
+	}
+
 	if (!bo->host3d_blob && (args->stride || args->layer_stride)) {
 		ret = -EINVAL;
 		goto err_put_free;
@@ -451,6 +479,7 @@  static int virtio_gpu_transfer_to_host_ioctl(struct drm_device *dev, void *data,
 	struct drm_virtgpu_3d_transfer_to_host *args = data;
 	struct virtio_gpu_object *bo;
 	struct virtio_gpu_object_array *objs;
+	struct virtio_gpu_object_shmem *shmem;
 	struct virtio_gpu_fence *fence;
 	int ret;
 	u32 offset = args->offset;
@@ -465,6 +494,11 @@  static int virtio_gpu_transfer_to_host_ioctl(struct drm_device *dev, void *data,
 		goto err_put_free;
 	}
 
+	shmem = to_virtio_gpu_shmem(bo);
+	if (!shmem->pages) {
+		virtio_gpu_object_pin(vgdev, objs, 1);
+	}
+
 	if (!vgdev->has_virgl_3d) {
 		virtio_gpu_cmd_transfer_to_host_2d
 			(vgdev, offset,
@@ -836,6 +870,34 @@  static int virtio_gpu_context_init_ioctl(struct drm_device *dev,
 	return ret;
 }
 
+static int virtio_gpu_pin_ioctl(struct drm_device *dev, void *data,
+				struct drm_file *file)
+{
+	struct virtio_gpu_device *vgdev = dev->dev_private;
+	struct drm_virtgpu_pin *virtio_gpu_pin = data;
+	struct virtio_gpu_object_array *objs;
+	struct virtio_gpu_object *bo;
+	struct virtio_gpu_object_shmem *shmem;
+
+	objs = virtio_gpu_array_from_handles(file, &virtio_gpu_pin->handle, 1);
+	if (objs == NULL)
+		return -ENOENT;
+
+	bo = gem_to_virtio_gpu_obj(objs->objs[0]);
+	if (bo == NULL)
+		return -ENOENT;
+	
+	shmem = to_virtio_gpu_shmem(bo);
+	if (shmem == NULL)
+		return -ENOENT;
+
+	if (!shmem->pages) {
+		return virtio_gpu_object_pin(vgdev, objs, 1);
+	}
+
+	return 0;
+}
+
 struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS] = {
 	DRM_IOCTL_DEF_DRV(VIRTGPU_MAP, virtio_gpu_map_ioctl,
 			  DRM_RENDER_ALLOW),
@@ -875,4 +937,7 @@  struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS] = {
 
 	DRM_IOCTL_DEF_DRV(VIRTGPU_CONTEXT_INIT, virtio_gpu_context_init_ioctl,
 			  DRM_RENDER_ALLOW),
+
+	DRM_IOCTL_DEF_DRV(VIRTGPU_PIN, virtio_gpu_pin_ioctl,
+			  DRM_RENDER_ALLOW),
 };
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index f648b0e24447..ae569003d7bc 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -80,9 +80,9 @@  void virtio_gpu_cleanup_object(struct virtio_gpu_object *bo)
 			kfree(shmem->pages);
 			shmem->pages = NULL;
 			drm_gem_shmem_unpin(&bo->base.base);
+			drm_gem_shmem_free_object(&bo->base.base);
 		}
 
-		drm_gem_shmem_free_object(&bo->base.base);
 	} else if (virtio_gpu_is_vram(bo)) {
 		struct virtio_gpu_object_vram *vram = to_virtio_gpu_vram(bo);
 
@@ -246,13 +246,6 @@  int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 			goto err_put_objs;
 	}
 
-	ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents);
-	if (ret != 0) {
-		virtio_gpu_array_put_free(objs);
-		virtio_gpu_free_object(&shmem_obj->base);
-		return ret;
-	}
-
 	if (params->blob) {
 		if (params->blob_mem == VIRTGPU_BLOB_MEM_GUEST)
 			bo->guest_blob = true;
@@ -262,8 +255,13 @@  int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 	} else if (params->virgl) {
 		virtio_gpu_cmd_resource_create_3d(vgdev, bo, params,
 						  objs, fence);
-		virtio_gpu_object_attach(vgdev, bo, ents, nents);
 	} else {
+		ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents);
+		if (ret != 0) {
+			virtio_gpu_array_put_free(objs);
+			virtio_gpu_free_object(&shmem_obj->base);
+			return ret;
+		}
 		virtio_gpu_cmd_create_resource(vgdev, bo, params,
 					       objs, fence);
 		virtio_gpu_object_attach(vgdev, bo, ents, nents);
@@ -280,3 +278,29 @@  int virtio_gpu_object_create(struct virtio_gpu_device *vgdev,
 	drm_gem_shmem_free_object(&shmem_obj->base);
 	return ret;
 }
+
+int virtio_gpu_object_pin(struct virtio_gpu_device *vgdev,
+			  struct virtio_gpu_object_array *objs,
+			  int num_gem_objects)
+{
+	int i, ret;
+
+	for (i = 0; i < num_gem_objects; i++) {
+		struct virtio_gpu_mem_entry *ents;
+		unsigned int nents;
+
+		struct virtio_gpu_object *bo =
+			gem_to_virtio_gpu_obj(objs->objs[i]);
+		if (!bo) {
+			return -EFAULT;
+		}
+
+		ret = virtio_gpu_object_shmem_init(vgdev, bo, &ents, &nents);
+		if (ret != 0) {
+			return -EFAULT;
+		}
+
+		virtio_gpu_object_attach(vgdev, bo, ents, nents);
+	}
+	return 0;
+}
diff --git a/include/uapi/drm/virtgpu_drm.h b/include/uapi/drm/virtgpu_drm.h
index a13e20cc66b4..be6e67f1bb7f 100644
--- a/include/uapi/drm/virtgpu_drm.h
+++ b/include/uapi/drm/virtgpu_drm.h
@@ -48,6 +48,7 @@  extern "C" {
 #define DRM_VIRTGPU_GET_CAPS  0x09
 #define DRM_VIRTGPU_RESOURCE_CREATE_BLOB 0x0a
 #define DRM_VIRTGPU_CONTEXT_INIT 0x0b
+#define DRM_VIRTGPU_PIN 0x0c
 
 #define VIRTGPU_EXECBUF_FENCE_FD_IN	0x01
 #define VIRTGPU_EXECBUF_FENCE_FD_OUT	0x02
@@ -82,6 +83,7 @@  struct drm_virtgpu_execbuffer {
 #define VIRTGPU_PARAM_CROSS_DEVICE 5 /* Cross virtio-device resource sharing  */
 #define VIRTGPU_PARAM_CONTEXT_INIT 6 /* DRM_VIRTGPU_CONTEXT_INIT */
 #define VIRTGPU_PARAM_SUPPORTED_CAPSET_IDs 7 /* Bitmask of supported capability set ids */
+#define VIRTGPU_PARAM_PIN_ON_DEMAND 8 /* is pinning on demand available? */
 
 struct drm_virtgpu_getparam {
 	__u64 param;
@@ -196,6 +198,10 @@  struct drm_virtgpu_context_init {
 	__u64 ctx_set_params;
 };
 
+struct drm_virtgpu_pin {
+	__u32 handle;
+};
+
 #define DRM_IOCTL_VIRTGPU_MAP \
 	DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_MAP, struct drm_virtgpu_map)
 
@@ -239,6 +245,10 @@  struct drm_virtgpu_context_init {
 	DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_CONTEXT_INIT,		\
 		struct drm_virtgpu_context_init)
 
+#define DRM_IOCTL_VIRTGPU_PIN					\
+	DRM_IOWR(DRM_COMMAND_BASE + DRM_VIRTGPU_PIN,		\
+		struct drm_virtgpu_pin)
+
 #if defined(__cplusplus)
 }
 #endif
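
To close, a sketch of how userspace could exercise the new ioctl if the uapi above were merged as posted; the DRM fd and handle plumbing around it is assumed, and the header must be the patched virtgpu_drm.h.

#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>

#include <drm/virtgpu_drm.h>

/* Sketch only: pin the pages backing an already-created resource
 * before submitting work that depends on them. */
static int virtgpu_pin_bo(int drm_fd, uint32_t bo_handle)
{
	struct drm_virtgpu_pin pin = { .handle = bo_handle };

	if (ioctl(drm_fd, DRM_IOCTL_VIRTGPU_PIN, &pin) < 0) {
		perror("DRM_IOCTL_VIRTGPU_PIN");
		return -1;
	}
	return 0;
}

In a real client, libdrm's drmIoctl() would be the more idiomatic wrapper, since it restarts the call on EINTR.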