From patchwork Tue Aug 7 18:11:01 2018
X-Patchwork-Submitter: Chris Wilson
X-Patchwork-Id: 10559061
From: Chris Wilson
To: intel-gfx@lists.freedesktop.org
Date: Tue, 7 Aug 2018 19:11:01 +0100
Message-Id: <20180807181101.14696-2-chris@chris-wilson.co.uk>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20180807181101.14696-1-chris@chris-wilson.co.uk>
References: <20180807181101.14696-1-chris@chris-wilson.co.uk>
Subject: [Intel-gfx] [PATCH 2/2] drm/i915: Mark "page-backed" dmabuf as being shrinkable

We currently assume that if we import a dmabuf, it is not backed by
pages (we assume it exists in video memory on a foreign device).
However, some dmabufs will be backed by ordinary struct pages (e.g.
vgem) and as such may be shrinkable under direct reclaim.

Since commit 09ea0dfbf972 ("dma-buf: make map_atomic and map function
pointers optional"), drivers do not need to supply a kmap() vfunc if
they have no convenient access to the physical backing pages. We can
use that information to differentiate dmabufs that are likely to be
backed by struct pages, and so suitable for inclusion in our
shrinkable set.
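The heuristic described above can be sketched in isolation. Everything
here (fake_dma_buf, fake_dma_buf_ops, fake_kmap, dmabuf_is_shrinkable)
is a hypothetical, simplified stand-in for the real kernel types, not
the actual dma-buf API; only the presence test on the kmap-style ->map
hook mirrors what the patch does:

```c
#include <stddef.h>

/* Illustrative stand-ins for the kernel structures involved. */
struct fake_dma_buf_ops {
	/* kmap-style hook; optional since commit 09ea0dfbf972 */
	void *(*map)(void *dmabuf, unsigned long page_num);
};

struct fake_dma_buf {
	const struct fake_dma_buf_ops *ops;
};

/* An exporter that can hand out kernel mappings of its backing pages. */
static void *fake_kmap(void *dmabuf, unsigned long page_num)
{
	(void)dmabuf;
	(void)page_num;
	return NULL;	/* the actual mapping is elided in this sketch */
}

/* Exporter with a ->map hook: likely struct-page backed (e.g. vgem). */
static const struct fake_dma_buf_ops page_backed_ops = { .map = fake_kmap };
static const struct fake_dma_buf page_backed = { .ops = &page_backed_ops };

/* Exporter without ->map: e.g. memory resident on a foreign device. */
static const struct fake_dma_buf_ops device_ops = { .map = NULL };
static const struct fake_dma_buf device_local = { .ops = &device_ops };

/*
 * The decision applied at import time: only treat the buffer as a
 * candidate for the shrinker when the exporter supplied a ->map vfunc,
 * i.e. when it is likely backed by ordinary struct pages.
 */
static int dmabuf_is_shrinkable(const struct fake_dma_buf *dma_buf)
{
	return dma_buf->ops->map != NULL;
}
```

The real patch encodes the same decision by selecting between two
drm_i915_gem_object_ops tables, one carrying
I915_GEM_OBJECT_IS_SHRINKABLE and one not.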
Signed-off-by: Chris Wilson
Cc: Tvrtko Ursulin
Cc: Daniel Vetter
---
 drivers/gpu/drm/i915/i915_gem_dmabuf.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/i915_gem_dmabuf.c
index 82e2ca17a441..8bc4030059f8 100644
--- a/drivers/gpu/drm/i915/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/i915_gem_dmabuf.c
@@ -269,7 +269,15 @@ static void i915_gem_object_put_pages_dmabuf(struct drm_i915_gem_object *obj,
 				 DMA_BIDIRECTIONAL);
 }
 
-static const struct drm_i915_gem_object_ops i915_gem_object_dmabuf_ops = {
+static const struct drm_i915_gem_object_ops
+i915_gem_object_dmabuf_ops = {
+	.get_pages = i915_gem_object_get_pages_dmabuf,
+	.put_pages = i915_gem_object_put_pages_dmabuf,
+};
+
+static const struct drm_i915_gem_object_ops
+i915_gem_object_dmabuf_ops__shrinkable = {
+	.flags = I915_GEM_OBJECT_IS_SHRINKABLE,
 	.get_pages = i915_gem_object_get_pages_dmabuf,
 	.put_pages = i915_gem_object_put_pages_dmabuf,
 };
@@ -308,7 +316,10 @@ struct drm_gem_object *i915_gem_prime_import(struct drm_device *dev,
 	}
 
 	drm_gem_private_object_init(dev, &obj->base, dma_buf->size);
-	i915_gem_object_init(obj, &i915_gem_object_dmabuf_ops);
+	i915_gem_object_init(obj,
+			     dma_buf->ops->map ?
+			     &i915_gem_object_dmabuf_ops__shrinkable :
+			     &i915_gem_object_dmabuf_ops);
 	obj->base.import_attach = attach;
 	obj->resv = dma_buf->resv;