Message ID: 20231223182932.27683-4-jason-jh.lin@mediatek.com (mailing list archive)
State: New, archived
Series: Add mediate-drm secure flow for SVP
On Sun, Dec 24, 2023 at 02:29:24AM +0800, Jason-JH.Lin wrote:
> Add secure buffer control flow to mtk_drm_gem.
>
> When user space takes DRM_MTK_GEM_CREATE_ENCRYPTED flag and size
> to create a mtk_drm_gem object, mtk_drm_gem will find a matched size
> dma buffer from secure dma-heap and bind it to mtk_drm_gem object.
>
> Signed-off-by: Jason-JH.Lin <jason-jh.lin@mediatek.com>

Apologies for jumping in rather late, but last year was a mess here.

There's the fundamental issue that this is new uapi, and it needs open
userspace, and I haven't seen that.

What's more, this is a pure kms api, so there's no precedent at all for
adding special ioctls to those - all the existing support for
protected/restricted content buffers in upstream has used render nodes and
EGL_EXT_protected_content in mesa3d to enable this feature on the drm/kms
side. So I'm not exactly sure what your plan here is, but you need one,
and it needs to be more than a testcase/demo.

The other issue, and the reason I've looked into the mtk code, is that the
dma-buf implementation breaks the dma-buf api. So that needs to be
changed.

Finally, I think the primary way to display a protected content buffer on
a pure kms driver should be by using prime fd2handle buffer importing.
Because you're adding a dma-buf heap, it's already possible for userspace
to use this path (or at least try), and so we have to make this path work
anyway.

Once we have the prime import path working correctly for protected content
buffers (which should shake out all the dma-api issues I've explained in
the dma-buf heaps thread), it should be possible to implement the direct
allocation function in a generic helper:

struct drm_gem_object *drm_gem_create_object_from_heap(struct drm_device *dev,
						       struct drm_file *file,
						       struct drm_buf_heap *heap);

Which does roughly:

- allocate a dma-buf from the desired heap
- import that dma-buf into the device using prime for the drm_file
- use the already implemented driver import code for special cases like
  protected content buffers

There should be no need to hand-roll all this code here, and especially
not to have any special-casing for the heap string name or things like
that. That all must be handled in the dma-buf prime import code.

Cheers, Sima

> [...]
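The three steps Sima lists could be sketched, very roughly, like the following. This is a hypothetical illustration, not an existing DRM helper: drm_gem_create_object_from_heap() does not exist upstream; the sketch takes a heap name and size instead of the struct drm_buf_heap pointer from the mail, and it assumes the in-kernel dma_heap_find()/dma_heap_buffer_alloc()/dma_heap_put() accessors that this series depends on. Only drm_gem_prime_import() is an existing API.

```c
/*
 * Hypothetical sketch of the generic helper suggested above; not a real
 * DRM API. All protected-content special-casing is delegated to the
 * driver's existing prime import path, so it lives in exactly one place.
 */
struct drm_gem_object *
drm_gem_create_object_from_heap(struct drm_device *dev, struct drm_file *file,
				const char *heap_name, size_t size)
{
	struct dma_heap *heap;
	struct dma_buf *dmabuf;
	struct drm_gem_object *obj;

	/* 1) allocate a dma-buf from the desired heap */
	heap = dma_heap_find(heap_name);
	if (!heap)
		return ERR_PTR(-ENOENT);
	dmabuf = dma_heap_buffer_alloc(heap, size, O_RDWR | O_CLOEXEC, 0);
	dma_heap_put(heap);
	if (IS_ERR(dmabuf))
		return ERR_CAST(dmabuf);

	/*
	 * 2) import the dma-buf into the device through prime, which
	 * 3) reuses the driver's import code for special cases such as
	 *    protected content buffers.
	 */
	obj = drm_gem_prime_import(dev, dmabuf);
	dma_buf_put(dmabuf);	/* import holds its own reference on success */
	return obj;
}
```

Note there is no strcmp() on heap names anywhere in the sketch: whether a buffer is protected is decided by the import path, not by which string the caller passed in.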
On Fri, Jan 12, 2024 at 5:13 AM Daniel Vetter <daniel@ffwll.ch> wrote:
>
> On Sun, Dec 24, 2023 at 02:29:24AM +0800, Jason-JH.Lin wrote:
> > Add secure buffer control flow to mtk_drm_gem.
> >
> > When user space takes DRM_MTK_GEM_CREATE_ENCRYPTED flag and size
> > to create a mtk_drm_gem object, mtk_drm_gem will find a matched size
> > dma buffer from secure dma-heap and bind it to mtk_drm_gem object.
> >
> > Signed-off-by: Jason-JH.Lin <jason-jh.lin@mediatek.com>
>
> Apologies for jumping rather late, but last year was a mess here.
>
> There's the fundamental issue that this is new uapi, and it needs open
> userspace, and I haven't seen that.

The open userspace for it is currently in minigbm for ChromeOS here:
https://chromium.googlesource.com/chromiumos/platform/minigbm/+/main/mediatek.c#322

Does that satisfy that particular requirement?

> What's more, this is a pure kms api so there's no precedent at all for
> adding special ioctl to those - all the existing support for
> protected/restricted content buffers in upstream has used render nodes and
> EGL_EXT_protected_content in mesa3d to enable this feature on the drm/kms
> side. So I'm not exactly sure what your plan here is, but you need one,
> and it needs to be more than a testcase/demo.

I'm probably not understanding something here, but for the Intel
protected content allocation there was a specific structure
(drm_i915_gem_create_ext_protected_content) added to the
DRM_IOCTL_I915_GEM_CREATE_EXT ioctl in order to enable allocation of
protected buffers. So wouldn't that be precedent for using an ioctl like
this to allocate a GEM object?

> The other issue, and the reason I've looked into the mtk code, is that the
> dma-buf implementation breaks the dma-buf api. So that needs to be
> changed.

Yeah, agreed here... I do see now that attaching dma_bufs for content
that is completely inaccessible by the kernel will be a no-go.

> Finally I think the primary way to display a protected content buffer on a
> pure kms driver should be by using prime fd2handle buffer importing.
> Because you're adding a dma-buf heap it's already possible for userspace
> to use this path (or at least try), and so we have to make this path work
> anyway.

Is what you're getting at here having MTK implement their own
gem_prime_import in order to work around having to do the dma_buf attach
operation? (From looking at the code, this appears to be the only place
in non-vendor-specific code that dma_buf_attach is called.)

> Once we have the prime import path working correctly for protected content
> buffers (which should shake out all the dma-api issues I've explained in
> the dma-buf heaps thread), it should be possible to implement the direct
> allocation function in a generic helper:
>
> struct drm_gem_object *drm_gem_create_object_from_heap(struct drm_device *dev,
>						         struct drm_file *file,
>						         struct drm_buf_heap *heap);
>
> Which does roughly:
>
> - allocate a dma-buf from the desired heap
> - import that dma-buf into the device using prime for the drm_file
> - using the already implemented driver import code for special cases like
>   protected content buffers
>
> There should be no need to hand-roll all this code here, and especially
> not have any special-casing for the heap string name or things like that.
> That all must be handled in the dma-buf prime import code.
>
> Cheers, Sima
>
> > [...]
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
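If the reading in the reply above is right, the shape of such a driver-specific import hook would presumably be something like this. This is an entirely hypothetical sketch, not code from the patch: mtk_dmabuf_is_secure() and mtk_gem_import_secure() are made-up names standing in for whatever mechanism the driver would use to recognise a secure exporter and wrap its buffer without attempting any CPU access; only drm_gem_prime_import() is the real generic fallback.

```c
/*
 * Hypothetical driver prime-import hook (not from the patch): secure
 * dma-bufs are recognised and wrapped without vmap or CPU-visible
 * mapping; everything else takes the generic import path.
 */
static struct drm_gem_object *
mtk_gem_prime_import(struct drm_device *dev, struct dma_buf *dma_buf)
{
	/* mtk_dmabuf_is_secure(): assumed helper, e.g. an exporter-ops check */
	if (mtk_dmabuf_is_secure(dma_buf))
		return mtk_gem_import_secure(dev, dma_buf); /* assumed helper */

	return drm_gem_prime_import(dev, dma_buf);
}
```

The point of routing everything through this hook is that both direct allocation and fd2handle import then share one code path for the secure case, which is what the review asks for.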
diff --git a/drivers/gpu/drm/mediatek/mtk_drm_gem.c b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
index 30e347adcbe9..858f34a735f8 100644
--- a/drivers/gpu/drm/mediatek/mtk_drm_gem.c
+++ b/drivers/gpu/drm/mediatek/mtk_drm_gem.c
@@ -4,6 +4,8 @@
  */
 
 #include <linux/dma-buf.h>
+#include <linux/dma-heap.h>
+#include <uapi/linux/dma-heap.h>
 #include <drm/mediatek_drm.h>
 
 #include <drm/drm.h>
@@ -55,6 +57,81 @@ static struct mtk_drm_gem_obj *mtk_drm_gem_init(struct drm_device *dev,
 	return mtk_gem_obj;
 }
 
+struct mtk_drm_gem_obj *mtk_drm_gem_create_from_heap(struct drm_device *dev,
+						     const char *heap, size_t size)
+{
+	struct mtk_drm_private *priv = dev->dev_private;
+	struct mtk_drm_gem_obj *mtk_gem;
+	struct drm_gem_object *obj;
+	struct dma_heap *dma_heap;
+	struct dma_buf *dma_buf;
+	struct dma_buf_attachment *attach;
+	struct sg_table *sgt;
+	struct iosys_map map = {};
+	int ret;
+
+	mtk_gem = mtk_drm_gem_init(dev, size);
+	if (IS_ERR(mtk_gem))
+		return ERR_CAST(mtk_gem);
+
+	obj = &mtk_gem->base;
+
+	dma_heap = dma_heap_find(heap);
+	if (!dma_heap) {
+		DRM_ERROR("heap find fail\n");
+		goto err_gem_free;
+	}
+	dma_buf = dma_heap_buffer_alloc(dma_heap, size,
+					O_RDWR | O_CLOEXEC, DMA_HEAP_VALID_HEAP_FLAGS);
+	if (IS_ERR(dma_buf)) {
+		DRM_ERROR("buffer alloc fail\n");
+		dma_heap_put(dma_heap);
+		goto err_gem_free;
+	}
+	dma_heap_put(dma_heap);
+
+	attach = dma_buf_attach(dma_buf, priv->dma_dev);
+	if (IS_ERR(attach)) {
+		DRM_ERROR("attach fail, return\n");
+		dma_buf_put(dma_buf);
+		goto err_gem_free;
+	}
+
+	sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
+	if (IS_ERR(sgt)) {
+		DRM_ERROR("map failed, detach and return\n");
+		dma_buf_detach(dma_buf, attach);
+		dma_buf_put(dma_buf);
+		goto err_gem_free;
+	}
+	obj->import_attach = attach;
+	mtk_gem->dma_addr = sg_dma_address(sgt->sgl);
+	mtk_gem->sg = sgt;
+	mtk_gem->size = dma_buf->size;
+
+	if (!strcmp(heap, "mtk_svp") || !strcmp(heap, "mtk_svp_cma")) {
+		/* secure buffer can not be mapped */
+		mtk_gem->secure = true;
+	} else {
+		ret = dma_buf_vmap(dma_buf, &map);
+		mtk_gem->kvaddr = map.vaddr;
+		if (ret) {
+			DRM_ERROR("map failed, ret=%d\n", ret);
+			dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
+			dma_buf_detach(dma_buf, attach);
+			dma_buf_put(dma_buf);
+			mtk_gem->kvaddr = NULL;
+		}
+	}
+
+	return mtk_gem;
+
+err_gem_free:
+	drm_gem_object_release(obj);
+	kfree(mtk_gem);
+	return ERR_PTR(-ENOMEM);
+}
+
 struct mtk_drm_gem_obj *mtk_drm_gem_create(struct drm_device *dev,
 					   size_t size, bool alloc_kmap)
 {
@@ -225,7 +302,9 @@ struct drm_gem_object *mtk_gem_prime_import_sg_table(struct drm_device *dev,
 	if (IS_ERR(mtk_gem))
 		return ERR_CAST(mtk_gem);
 
+	mtk_gem->secure = !sg_page(sg->sgl);
 	mtk_gem->dma_addr = sg_dma_address(sg->sgl);
+	mtk_gem->size = attach->dmabuf->size;
 	mtk_gem->sg = sg;
 
 	return &mtk_gem->base;
@@ -301,7 +380,11 @@ int mtk_gem_create_ioctl(struct drm_device *dev, void *data,
 	struct drm_mtk_gem_create *args = data;
 	int ret;
 
-	mtk_gem = mtk_drm_gem_create(dev, args->size, false);
+	if (args->flags & DRM_MTK_GEM_CREATE_ENCRYPTED)
+		mtk_gem = mtk_drm_gem_create_from_heap(dev, "mtk_svp_cma", args->size);
+	else
+		mtk_gem = mtk_drm_gem_create(dev, args->size, false);
+
 	if (IS_ERR(mtk_gem))
 		return PTR_ERR(mtk_gem);
 
diff --git a/drivers/gpu/drm/mediatek/mtk_drm_gem.h b/drivers/gpu/drm/mediatek/mtk_drm_gem.h
index 90f3d2916ec5..8fd5ce827d4f 100644
--- a/drivers/gpu/drm/mediatek/mtk_drm_gem.h
+++ b/drivers/gpu/drm/mediatek/mtk_drm_gem.h
@@ -27,9 +27,11 @@ struct mtk_drm_gem_obj {
 	void *cookie;
 	void *kvaddr;
 	dma_addr_t dma_addr;
+	size_t size;
 	unsigned long dma_attrs;
 	struct sg_table *sg;
 	struct page **pages;
+	bool secure;
 };
 
 #define to_mtk_gem_obj(x) container_of(x, struct mtk_drm_gem_obj, base)
@@ -37,6 +39,8 @@ struct mtk_drm_gem_obj {
 
 void mtk_drm_gem_free_object(struct drm_gem_object *gem);
 struct mtk_drm_gem_obj *mtk_drm_gem_create(struct drm_device *dev, size_t size,
					    bool alloc_kmap);
+struct mtk_drm_gem_obj *mtk_drm_gem_create_from_heap(struct drm_device *dev,
+						     const char *heap, size_t size);
 int mtk_drm_gem_dumb_create(struct drm_file *file_priv, struct drm_device *dev,
			     struct drm_mode_create_dumb *args);
 struct sg_table *mtk_gem_prime_get_sg_table(struct drm_gem_object *obj);
Add secure buffer control flow to mtk_drm_gem.

When user space takes DRM_MTK_GEM_CREATE_ENCRYPTED flag and size
to create a mtk_drm_gem object, mtk_drm_gem will find a matched size
dma buffer from secure dma-heap and bind it to mtk_drm_gem object.

Signed-off-by: Jason-JH.Lin <jason-jh.lin@mediatek.com>
---
 drivers/gpu/drm/mediatek/mtk_drm_gem.c | 85 +++++++++++++++++++++++++-
 drivers/gpu/drm/mediatek/mtk_drm_gem.h | 4 ++
 2 files changed, 88 insertions(+), 1 deletion(-)
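From the userspace side, the uapi proposed in this patch would be exercised roughly as below. This is a hypothetical sketch: the exact layout of struct drm_mtk_gem_create, the ioctl number, and the handle field are assumed from the context of this series and have not been checked against the actual mediatek_drm.h uapi header; error handling is trimmed.

```c
/*
 * Hypothetical userspace usage of the proposed DRM_MTK_GEM_CREATE_ENCRYPTED
 * flag; struct layout and ioctl name assumed from this series' uapi.
 */
#include <stdint.h>
#include <xf86drm.h>
#include <drm/mediatek_drm.h>

int alloc_secure_buffer(int drm_fd, uint64_t size, uint32_t *handle)
{
	struct drm_mtk_gem_create args = {
		.size = size,
		.flags = DRM_MTK_GEM_CREATE_ENCRYPTED,	/* routed to the "mtk_svp_cma" heap */
	};
	int ret;

	ret = drmIoctl(drm_fd, DRM_IOCTL_MTK_GEM_CREATE, &args);
	if (ret)
		return ret;

	*handle = args.handle;	/* GEM handle, usable with e.g. drmModeAddFB2() */
	return 0;
}
```

This is the flow the review pushes back on: the same buffer could instead be allocated from the secure dma-heap in userspace and brought in through the generic prime fd2handle path, which has to work anyway.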