From patchwork Thu Jun 15 13:20:50 2017
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 9788843
X-Patchwork-Delegate: agross@codeaurora.org
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: linux-arm-msm@vger.kernel.org, freedreno@lists.freedesktop.org,
	Chris Wilson, Sushmita Susheelendra, Rob Clark
Subject: [PATCH] fixup!
 drm/msm: Separate locking of buffer resources from struct_mutex
Date: Thu, 15 Jun 2017 09:20:50 -0400
Message-Id: <20170615132050.1196-1-robdclark@gmail.com>
X-Mailer: git-send-email 2.9.4
In-Reply-To: <1497394374-19982-1-git-send-email-ssusheel@codeaurora.org>
References: <1497394374-19982-1-git-send-email-ssusheel@codeaurora.org>

---
This is roughly based on Chris's suggestion, in particular the part
about using mutex_lock_nested().  It's not *exactly* the same:
msm_obj->lock protects a bit more than just the backing store, and we
don't currently track a pin_count (instead we keep pages pinned until
the object is purged or freed).

Rather than making msm_obj->lock cover only the backing store, it is
easier to split out madv, which remains protected by struct_mutex.
The shrinker still holds struct_mutex, so it does not need to grab
msm_obj->lock until it actually purges an object.  We avoid going down
any path that could trigger the shrinker by first checking that
msm_obj->madv == MSM_MADV_WILLNEED.  To synchronize access,
msm_obj->madv is also protected by msm_obj->lock, taken inside
struct_mutex.

This keeps lockdep happy in my testing so far.

 drivers/gpu/drm/msm/msm_gem.c          | 54 ++++++++++++++++++++++++++++++++--
 drivers/gpu/drm/msm/msm_gem.h          |  1 +
 drivers/gpu/drm/msm/msm_gem_shrinker.c | 12 ++++++++
 3 files changed, 65 insertions(+), 2 deletions(-)
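[Editorial aside: the lockdep-subclass trick this patch relies on can be
sketched in isolation.  Everything below is hypothetical -- demo_obj and
demo_purge_other() are invented names, not part of the patch -- and only
illustrates why the nested acquisition needs mutex_lock_nested(): every
object's lock belongs to the same lock class, so without a distinct
subclass annotation lockdep would flag the inner acquisition as a
potential self-deadlock.]

	#include <linux/mutex.h>

	enum {
		OBJ_LOCK_NORMAL,	/* subclass 0, what plain mutex_lock() uses */
		OBJ_LOCK_SHRINKER,	/* distinct subclass for the shrinker path */
	};

	struct demo_obj {
		struct mutex lock;	/* every demo_obj lock shares one lock class */
	};

	/* Hold obj_a's lock while purging obj_b, as the shrinker does.
	 * The subclass annotation tells lockdep the inner lock sits at a
	 * different nesting level, so no false deadlock is reported.
	 */
	static void demo_purge_other(struct demo_obj *obj_a,
			struct demo_obj *obj_b)
	{
		mutex_lock(&obj_a->lock);	/* implicit OBJ_LOCK_NORMAL */
		mutex_lock_nested(&obj_b->lock, OBJ_LOCK_SHRINKER);

		/* ... release obj_b's backing store here ... */

		mutex_unlock(&obj_b->lock);
		mutex_unlock(&obj_a->lock);
	}

[With lockdep enabled, mutex_lock() expands to mutex_lock_nested(lock, 0),
which is why OBJ_LOCK_NORMAL (== 0) is "implicit" in the comment the patch
adds below.]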
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index e132548..f5d1f84 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -26,6 +26,22 @@
 #include "msm_gpu.h"
 #include "msm_mmu.h"
 
+/* The shrinker can be triggered while we hold objA->lock, and we need
+ * to grab objB->lock to purge it.  Lockdep just sees these as a single
+ * class of lock, so we use subclasses to teach it the difference.
+ *
+ * OBJ_LOCK_NORMAL is implicit (ie. a normal mutex_lock() call), and
+ * OBJ_LOCK_SHRINKER is used in msm_gem_purge().
+ *
+ * It is *essential* that we never go down paths that could trigger the
+ * shrinker for a purgeable object.  This is ensured by checking that
+ * msm_obj->madv == MSM_MADV_WILLNEED.
+ */
+enum {
+	OBJ_LOCK_NORMAL,
+	OBJ_LOCK_SHRINKER,
+};
+
 static dma_addr_t physaddr(struct drm_gem_object *obj)
 {
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
@@ -150,6 +166,12 @@ struct page **msm_gem_get_pages(struct drm_gem_object *obj)
 	struct page **p;
 
 	mutex_lock(&msm_obj->lock);
+
+	if (WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED)) {
+		mutex_unlock(&msm_obj->lock);
+		return ERR_PTR(-EBUSY);
+	}
+
 	p = get_pages(obj);
 	mutex_unlock(&msm_obj->lock);
 	return p;
@@ -220,6 +242,11 @@ int msm_gem_fault(struct vm_fault *vmf)
 	if (ret)
 		goto out;
 
+	if (WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED)) {
+		mutex_unlock(&msm_obj->lock);
+		return VM_FAULT_SIGBUS;
+	}
+
 	/* make sure we have pages attached now */
 	pages = get_pages(obj);
 	if (IS_ERR(pages)) {
@@ -358,6 +385,11 @@ int msm_gem_get_iova(struct drm_gem_object *obj,
 
 	mutex_lock(&msm_obj->lock);
 
+	if (WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED)) {
+		mutex_unlock(&msm_obj->lock);
+		return -EBUSY;
+	}
+
 	vma = lookup_vma(obj, aspace);
 
 	if (!vma) {
@@ -454,6 +486,12 @@ void *msm_gem_get_vaddr(struct drm_gem_object *obj)
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 
 	mutex_lock(&msm_obj->lock);
+
+	if (WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED)) {
+		mutex_unlock(&msm_obj->lock);
+		return ERR_PTR(-EBUSY);
+	}
+
 	if (!msm_obj->vaddr) {
 		struct page **pages = get_pages(obj);
 		if (IS_ERR(pages)) {
@@ -489,12 +527,18 @@ int msm_gem_madvise(struct drm_gem_object *obj, unsigned madv)
 {
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 
+	mutex_lock(&msm_obj->lock);
+
 	WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex));
 
 	if (msm_obj->madv != __MSM_MADV_PURGED)
 		msm_obj->madv = madv;
 
-	return (msm_obj->madv != __MSM_MADV_PURGED);
+	madv = msm_obj->madv;
+
+	mutex_unlock(&msm_obj->lock);
+
+	return (madv != __MSM_MADV_PURGED);
 }
 
 void msm_gem_purge(struct drm_gem_object *obj)
@@ -506,6 +550,8 @@ void msm_gem_purge(struct drm_gem_object *obj)
 	WARN_ON(!is_purgeable(msm_obj));
 	WARN_ON(obj->import_attach);
 
+	mutex_lock_nested(&msm_obj->lock, OBJ_LOCK_SHRINKER);
+
 	put_iova(obj);
 
 	msm_gem_vunmap(obj);
@@ -526,6 +572,8 @@ void msm_gem_purge(struct drm_gem_object *obj)
 
 	invalidate_mapping_pages(file_inode(obj->filp)->i_mapping,
 			0, (loff_t)-1);
+
+	mutex_unlock(&msm_obj->lock);
 }
 
 void msm_gem_vunmap(struct drm_gem_object *obj)
@@ -660,7 +708,7 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m)
 	uint64_t off = drm_vma_node_start(&obj->vma_node);
 	const char *madv;
 
-	WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex));
+	mutex_lock(&msm_obj->lock);
 
 	switch (msm_obj->madv) {
 	case __MSM_MADV_PURGED:
@@ -701,6 +749,8 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m)
 	if (fence)
 		describe_fence(fence, "Exclusive", m);
 	rcu_read_unlock();
+
+	mutex_unlock(&msm_obj->lock);
 }
 
 void msm_gem_describe_objects(struct list_head *list, struct seq_file *m)
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 9ad5ba4c..2b9b8e9 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -101,6 +101,7 @@ static inline bool is_active(struct msm_gem_object *msm_obj)
 
 static inline bool is_purgeable(struct msm_gem_object *msm_obj)
 {
+	WARN_ON(!mutex_is_locked(&msm_obj->base.dev->struct_mutex));
 	return (msm_obj->madv == MSM_MADV_DONTNEED) && msm_obj->sgt &&
 			!msm_obj->base.dma_buf && !msm_obj->base.import_attach;
 }
diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c
index ab1dd02..e1db4ad 100644
--- a/drivers/gpu/drm/msm/msm_gem_shrinker.c
+++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c
@@ -20,6 +20,18 @@
 static bool msm_gem_shrinker_lock(struct drm_device *dev, bool *unlock)
 {
+	/* NOTE: we are *closer* to being able to get rid of
+	 * mutex_trylock_recursive()..  The msm_gem code itself does
+	 * not need struct_mutex, although code paths that can trigger
+	 * the shrinker are still called while holding the
+	 * struct_mutex.
+	 *
+	 * Also, msm_obj->madv is protected by struct_mutex.
+	 *
+	 * The next step is probably to split out a separate lock for
+	 * protecting inactive_list, so that the shrinker does not need
+	 * struct_mutex.
+	 */
 	switch (mutex_trylock_recursive(&dev->struct_mutex)) {
 	case MUTEX_TRYLOCK_FAILED:
 		return false;
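[Editorial aside: the archive copy truncates this final hunk after the
MUTEX_TRYLOCK_FAILED case.  For context only, below is a hedged sketch of
the usual three-way pattern for consuming mutex_trylock_recursive(), which
at the time returned MUTEX_TRYLOCK_FAILED, MUTEX_TRYLOCK_SUCCESS, or
MUTEX_TRYLOCK_RECURSIVE.  shrinker_lock_sketch() is an invented name; this
is not the remainder of the patch.]

	#include <linux/mutex.h>

	/* The caller learns via *unlock whether it took the lock itself
	 * (and so must drop it afterwards) or was already holding it.
	 */
	static bool shrinker_lock_sketch(struct mutex *struct_mutex,
			bool *unlock)
	{
		switch (mutex_trylock_recursive(struct_mutex)) {
		case MUTEX_TRYLOCK_FAILED:
			/* another task holds it: skip shrinking this round */
			return false;
		case MUTEX_TRYLOCK_SUCCESS:
			/* we acquired it, so the caller must unlock later */
			*unlock = true;
			return true;
		case MUTEX_TRYLOCK_RECURSIVE:
			/* we were already holding it: do not unlock on exit */
			*unlock = false;
			return true;
		}
		return false;
	}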