From patchwork Mon Apr 5 17:45:24 2021
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 12183417
Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net.
[73.25.156.94]) by smtp.gmail.com with ESMTPSA id t12sm16731786pga.85.2021.04.05.10.42.04 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 05 Apr 2021 10:42:04 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Jordan Crouse , Rob Clark , Rob Clark , Sean Paul , David Airlie , Daniel Vetter , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH 1/8] drm/msm: ratelimit GEM related WARN_ON()s Date: Mon, 5 Apr 2021 10:45:24 -0700 Message-Id: <20210405174532.1441497-2-robdclark@gmail.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20210405174532.1441497-1-robdclark@gmail.com> References: <20210405174532.1441497-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark If you mess something up, you don't really need to see the same warn on splat 4000 times pumped out a slow debug UART port.. Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 66 +++++++++++++++++------------------ drivers/gpu/drm/msm/msm_gem.h | 19 ++++++---- 2 files changed, 45 insertions(+), 40 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 4e91b095ab77..d5abe8aa9978 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -96,7 +96,7 @@ static struct page **get_pages(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); - WARN_ON(!msm_gem_is_locked(obj)); + GEM_WARN_ON(!msm_gem_is_locked(obj)); if (!msm_obj->pages) { struct drm_device *dev = obj->dev; @@ -180,7 +180,7 @@ struct page **msm_gem_get_pages(struct drm_gem_object *obj) msm_gem_lock(obj); - if (WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED)) { + if (GEM_WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED)) { msm_gem_unlock(obj); return ERR_PTR(-EBUSY); } @@ -256,7 +256,7 @@ static vm_fault_t msm_gem_fault(struct vm_fault *vmf) goto out; } - if (WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED)) { + if (GEM_WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED)) { msm_gem_unlock(obj); return VM_FAULT_SIGBUS; } @@ -289,7 +289,7 @@ static uint64_t mmap_offset(struct drm_gem_object *obj) struct drm_device *dev = obj->dev; int ret; - WARN_ON(!msm_gem_is_locked(obj)); + GEM_WARN_ON(!msm_gem_is_locked(obj)); /* Make it mmapable */ ret = drm_gem_create_mmap_offset(obj); @@ -318,7 +318,7 @@ static struct msm_gem_vma *add_vma(struct drm_gem_object *obj, struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma; - WARN_ON(!msm_gem_is_locked(obj)); + GEM_WARN_ON(!msm_gem_is_locked(obj)); vma = kzalloc(sizeof(*vma), GFP_KERNEL); if (!vma) @@ -337,7 +337,7 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma; - WARN_ON(!msm_gem_is_locked(obj)); + GEM_WARN_ON(!msm_gem_is_locked(obj)); list_for_each_entry(vma, &msm_obj->vmas, list) { if (vma->aspace == aspace) @@ -363,7 +363,7 @@ put_iova_spaces(struct drm_gem_object *obj) struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma; - WARN_ON(!msm_gem_is_locked(obj)); + GEM_WARN_ON(!msm_gem_is_locked(obj)); list_for_each_entry(vma, &msm_obj->vmas, list) { if (vma->aspace) { @@ -380,7 +380,7 @@ put_iova_vmas(struct drm_gem_object *obj) struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma, *tmp; - WARN_ON(!msm_gem_is_locked(obj)); + 
GEM_WARN_ON(!msm_gem_is_locked(obj)); list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) { del_vma(vma); @@ -394,7 +394,7 @@ static int get_iova_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma; int ret = 0; - WARN_ON(!msm_gem_is_locked(obj)); + GEM_WARN_ON(!msm_gem_is_locked(obj)); vma = lookup_vma(obj, aspace); @@ -429,13 +429,13 @@ static int msm_gem_pin_iova(struct drm_gem_object *obj, if (msm_obj->flags & MSM_BO_MAP_PRIV) prot |= IOMMU_PRIV; - WARN_ON(!msm_gem_is_locked(obj)); + GEM_WARN_ON(!msm_gem_is_locked(obj)); - if (WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED)) + if (GEM_WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED)) return -EBUSY; vma = lookup_vma(obj, aspace); - if (WARN_ON(!vma)) + if (GEM_WARN_ON(!vma)) return -EINVAL; pages = get_pages(obj); @@ -453,7 +453,7 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, u64 local; int ret; - WARN_ON(!msm_gem_is_locked(obj)); + GEM_WARN_ON(!msm_gem_is_locked(obj)); ret = get_iova_locked(obj, aspace, &local, range_start, range_end); @@ -524,7 +524,7 @@ uint64_t msm_gem_iova(struct drm_gem_object *obj, msm_gem_lock(obj); vma = lookup_vma(obj, aspace); msm_gem_unlock(obj); - WARN_ON(!vma); + GEM_WARN_ON(!vma); return vma ? vma->iova : 0; } @@ -537,11 +537,11 @@ void msm_gem_unpin_iova_locked(struct drm_gem_object *obj, { struct msm_gem_vma *vma; - WARN_ON(!msm_gem_is_locked(obj)); + GEM_WARN_ON(!msm_gem_is_locked(obj)); vma = lookup_vma(obj, aspace); - if (!WARN_ON(!vma)) + if (!GEM_WARN_ON(!vma)) msm_gem_unmap_vma(aspace, vma); } @@ -593,12 +593,12 @@ static void *get_vaddr(struct drm_gem_object *obj, unsigned madv) struct msm_gem_object *msm_obj = to_msm_bo(obj); int ret = 0; - WARN_ON(!msm_gem_is_locked(obj)); + GEM_WARN_ON(!msm_gem_is_locked(obj)); if (obj->import_attach) return ERR_PTR(-ENODEV); - if (WARN_ON(msm_obj->madv > madv)) { + if (GEM_WARN_ON(msm_obj->madv > madv)) { DRM_DEV_ERROR(obj->dev->dev, "Invalid madv state: %u vs %u\n", msm_obj->madv, madv); return ERR_PTR(-EBUSY); @@ -664,8 +664,8 @@ void msm_gem_put_vaddr_locked(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); - WARN_ON(!msm_gem_is_locked(obj)); - WARN_ON(msm_obj->vmap_count < 1); + GEM_WARN_ON(!msm_gem_is_locked(obj)); + GEM_WARN_ON(msm_obj->vmap_count < 1); msm_obj->vmap_count--; } @@ -707,8 +707,8 @@ void msm_gem_purge(struct drm_gem_object *obj) struct drm_device *dev = obj->dev; struct msm_gem_object *msm_obj = to_msm_bo(obj); - WARN_ON(!is_purgeable(msm_obj)); - WARN_ON(obj->import_attach); + GEM_WARN_ON(!is_purgeable(msm_obj)); + GEM_WARN_ON(obj->import_attach); put_iova_spaces(obj); @@ -739,9 +739,9 @@ void msm_gem_vunmap(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); - WARN_ON(!msm_gem_is_locked(obj)); + GEM_WARN_ON(!msm_gem_is_locked(obj)); - if (!msm_obj->vaddr || WARN_ON(!is_vunmapable(msm_obj))) + if (!msm_obj->vaddr || GEM_WARN_ON(!is_vunmapable(msm_obj))) return; vunmap(msm_obj->vaddr); @@ -789,9 +789,9 @@ void msm_gem_active_get(struct drm_gem_object *obj, struct msm_gpu *gpu) struct msm_drm_private *priv = obj->dev->dev_private; might_sleep(); - WARN_ON(!msm_gem_is_locked(obj)); - WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED); - WARN_ON(msm_obj->dontneed); + GEM_WARN_ON(!msm_gem_is_locked(obj)); + GEM_WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED); + GEM_WARN_ON(msm_obj->dontneed); if (msm_obj->active_count++ == 0) { mutex_lock(&priv->mm_lock); @@ -806,7 +806,7 @@ void msm_gem_active_put(struct drm_gem_object *obj) struct msm_gem_object *msm_obj = 
to_msm_bo(obj); might_sleep(); - WARN_ON(!msm_gem_is_locked(obj)); + GEM_WARN_ON(!msm_gem_is_locked(obj)); if (--msm_obj->active_count == 0) { update_inactive(msm_obj); @@ -818,7 +818,7 @@ static void update_inactive(struct msm_gem_object *msm_obj) struct msm_drm_private *priv = msm_obj->base.dev->dev_private; mutex_lock(&priv->mm_lock); - WARN_ON(msm_obj->active_count != 0); + GEM_WARN_ON(msm_obj->active_count != 0); if (msm_obj->dontneed) mark_unpurgable(msm_obj); @@ -830,7 +830,7 @@ static void update_inactive(struct msm_gem_object *msm_obj) list_add_tail(&msm_obj->mm_list, &priv->inactive_dontneed); mark_purgable(msm_obj); } else { - WARN_ON(msm_obj->madv != __MSM_MADV_PURGED); + GEM_WARN_ON(msm_obj->madv != __MSM_MADV_PURGED); list_add_tail(&msm_obj->mm_list, &priv->inactive_purged); } @@ -1010,12 +1010,12 @@ void msm_gem_free_object(struct drm_gem_object *obj) msm_gem_lock(obj); /* object should not be on active list: */ - WARN_ON(is_active(msm_obj)); + GEM_WARN_ON(is_active(msm_obj)); put_iova_spaces(obj); if (obj->import_attach) { - WARN_ON(msm_obj->vaddr); + GEM_WARN_ON(msm_obj->vaddr); /* Don't drop the pages for imported dmabuf, as they are not * ours, just free the array we allocated: @@ -1131,7 +1131,7 @@ static struct drm_gem_object *_msm_gem_new(struct drm_device *dev, else if ((flags & (MSM_BO_STOLEN | MSM_BO_SCANOUT)) && priv->vram.size) use_vram = true; - if (WARN_ON(use_vram && !priv->vram.size)) + if (GEM_WARN_ON(use_vram && !priv->vram.size)) return ERR_PTR(-EINVAL); /* Disallow zero sized objects as they make the underlying diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 7c7d54bad189..917af526a5c5 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -11,6 +11,11 @@ #include #include "msm_drv.h" +/* Make all GEM related WARN_ON()s ratelimited.. when things go wrong they + * tend to go wrong 1000s of times in a short timespan. 
+ */ +#define GEM_WARN_ON(x) WARN_RATELIMIT(x, "%s", __stringify(x)) + /* Additional internal-use only BO flags: */ #define MSM_BO_STOLEN 0x10000000 /* try to use stolen/splash memory */ #define MSM_BO_MAP_PRIV 0x20000000 /* use IOMMU_PRIV when mapping */ @@ -203,7 +208,7 @@ msm_gem_is_locked(struct drm_gem_object *obj) static inline bool is_active(struct msm_gem_object *msm_obj) { - WARN_ON(!msm_gem_is_locked(&msm_obj->base)); + GEM_WARN_ON(!msm_gem_is_locked(&msm_obj->base)); return msm_obj->active_count; } @@ -221,7 +226,7 @@ static inline bool is_purgeable(struct msm_gem_object *msm_obj) static inline bool is_vunmapable(struct msm_gem_object *msm_obj) { - WARN_ON(!msm_gem_is_locked(&msm_obj->base)); + GEM_WARN_ON(!msm_gem_is_locked(&msm_obj->base)); return (msm_obj->vmap_count == 0) && msm_obj->vaddr; } @@ -229,12 +234,12 @@ static inline void mark_purgable(struct msm_gem_object *msm_obj) { struct msm_drm_private *priv = msm_obj->base.dev->dev_private; - WARN_ON(!mutex_is_locked(&priv->mm_lock)); + GEM_WARN_ON(!mutex_is_locked(&priv->mm_lock)); if (is_unpurgable(msm_obj)) return; - if (WARN_ON(msm_obj->dontneed)) + if (GEM_WARN_ON(msm_obj->dontneed)) return; priv->shrinkable_count += msm_obj->base.size >> PAGE_SHIFT; @@ -245,16 +250,16 @@ static inline void mark_unpurgable(struct msm_gem_object *msm_obj) { struct msm_drm_private *priv = msm_obj->base.dev->dev_private; - WARN_ON(!mutex_is_locked(&priv->mm_lock)); + GEM_WARN_ON(!mutex_is_locked(&priv->mm_lock)); if (is_unpurgable(msm_obj)) return; - if (WARN_ON(!msm_obj->dontneed)) + if (GEM_WARN_ON(!msm_obj->dontneed)) return; priv->shrinkable_count -= msm_obj->base.size >> PAGE_SHIFT; - WARN_ON(priv->shrinkable_count < 0); + GEM_WARN_ON(priv->shrinkable_count < 0); msm_obj->dontneed = false; } From patchwork Mon Apr 5 17:45:25 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 12183415 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 55CA3C43462 for ; Mon, 5 Apr 2021 17:42:10 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 2FF4461394 for ; Mon, 5 Apr 2021 17:42:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239109AbhDERmP (ORCPT ); Mon, 5 Apr 2021 13:42:15 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54534 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239072AbhDERmO (ORCPT ); Mon, 5 Apr 2021 13:42:14 -0400 Received: from mail-pj1-x1036.google.com (mail-pj1-x1036.google.com [IPv6:2607:f8b0:4864:20::1036]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BC4B5C061788; Mon, 5 Apr 2021 10:42:07 -0700 (PDT) Received: by mail-pj1-x1036.google.com with SMTP id ep1-20020a17090ae641b029014d48811e37so966913pjb.4; Mon, 05 Apr 2021 10:42:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; 
h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=zBaXsiA2oySpJPeze7wcHI2Mi9Lcz4C4TmcNMn/xtO0=; b=uyJQWPueiEltFhEvEjzWwifN3QRFcQlsspa6NenC8r9nT5T9uoPwQEutg98y6hu+Uo VNI6dFxAqMx8SIXZ6NAqnVcIhuDEyUPv37+CRfOfbaE47cUV361W4q/f3NmtmdDGj6sz Q0SMFX1bZm8EqRpQpyvro7gXmVKQ5SvLKOSgTzrA7gXNUilf4NqydojM4RwOFIh6PpQM uOXX4Ftx836e/6eZdAzctmaQkkIZ4JpX7NmmPoCmBdz5l61oEuAOe+kKx6XV0uGZInJ9 ZdKqXOi4Vo4n7y8RBl5meClP4yjZjg6S9BHjtAwourLAULV5pWz6wwR66La+ZJXRTQP7 oLmA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=zBaXsiA2oySpJPeze7wcHI2Mi9Lcz4C4TmcNMn/xtO0=; b=s0q/UPUW/wRtkYY44FjW+n9qKILuJVzgoq/Ik0xJk/RhSmX5KoliMNoD9YYvV8GLSb nMMTHJoMGBmEjleFw2ZY7equFqNNLO61CgMgvXrS5NIxM2lZ1fykkkYZ12UfvCJEPCgt Oel0C1lnQLYCkrf6QPNRaehHnHRsPPWod1rTNmFm3qkayl8rk7odgAiItOSJWDWWvlPU ySmqOWhV9rPkiE1xVV7DtcNo80OMqmlM/xdGO9EciS8bzY4H4SfIyKUj0xnno/CkPpGr ZV0+W13CGt5Fvd9a3i2JGU7DKoLJcsEeDGa7Nu6aShacC8t7PzPhlPuxxjxUXRuketS4 iEOg== X-Gm-Message-State: AOAM533fEdSKlwbt8MzAuddXeXqxfC5MQSzhuwXYhpiiZ87GhQ+HW1iP o05QcKPjpouaZj7xSxanlnI= X-Google-Smtp-Source: ABdhPJwcR3M+lNIiugmaI3FIWyaKHmXcTa2wHQNoSe+k8kDZvIMsaKZHBNGk0/pKEKZomkqJaqRxsg== X-Received: by 2002:a17:90a:b794:: with SMTP id m20mr255489pjr.152.1617644527380; Mon, 05 Apr 2021 10:42:07 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id a144sm16268582pfd.200.2021.04.05.10.42.06 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 05 Apr 2021 10:42:06 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Jordan Crouse , Rob Clark , Rob Clark , Sean Paul , David Airlie , Daniel Vetter , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH 2/8] drm/msm: Reorganize msm_gem_shrinker_scan() Date: Mon, 5 Apr 2021 10:45:25 -0700 Message-Id: <20210405174532.1441497-3-robdclark@gmail.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20210405174532.1441497-1-robdclark@gmail.com> References: <20210405174532.1441497-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark So we don't have to duplicate the boilerplate for eviction. This also lets us re-use the main scan loop for vmap shrinker. 
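The reorganization boils down to factoring the scan loop out into a helper that takes a per-object callback, so the purge path and the vmap-reclaim path share the same list walking, trylock and re-queueing logic. In condensed form (locking and the still_in_list bookkeeping are elided here; the full version is in the diff below):

static unsigned long
scan(struct msm_drm_private *priv, unsigned nr_to_scan, struct list_head *list,
		bool (*shrink)(struct msm_gem_object *msm_obj));

static bool
purge(struct msm_gem_object *msm_obj)
{
	if (!is_purgeable(msm_obj))
		return false;
	msm_gem_purge(&msm_obj->base);		/* drops the backing pages */
	return true;
}

static bool
vmap_shrink(struct msm_gem_object *msm_obj)
{
	if (!is_vunmapable(msm_obj))
		return false;
	msm_gem_vunmap(&msm_obj->base);		/* drops only the kernel vmap */
	return true;
}

/* Call sites (see the diff):
 *   msm_gem_shrinker_scan():  freed = scan(priv, sc->nr_to_scan, &priv->inactive_dontneed, purge);
 *   msm_gem_shrinker_vmap():  unmapped += scan(priv, vmap_shrink_limit - unmapped, mm_lists[idx], vmap_shrink);
 */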
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem_shrinker.c | 94 +++++++++++++------------- 1 file changed, 46 insertions(+), 48 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c index 33a49641ef30..38bf919f8508 100644 --- a/drivers/gpu/drm/msm/msm_gem_shrinker.c +++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c @@ -17,21 +17,35 @@ msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc) return priv->shrinkable_count; } +static bool +purge(struct msm_gem_object *msm_obj) +{ + if (!is_purgeable(msm_obj)) + return false; + + /* + * This will move the obj out of still_in_list to + * the purged list + */ + msm_gem_purge(&msm_obj->base); + + return true; +} + static unsigned long -msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc) +scan(struct msm_drm_private *priv, unsigned nr_to_scan, struct list_head *list, + bool (*shrink)(struct msm_gem_object *msm_obj)) { - struct msm_drm_private *priv = - container_of(shrinker, struct msm_drm_private, shrinker); + unsigned freed = 0; struct list_head still_in_list; - unsigned long freed = 0; INIT_LIST_HEAD(&still_in_list); mutex_lock(&priv->mm_lock); - while (freed < sc->nr_to_scan) { + while (freed < nr_to_scan) { struct msm_gem_object *msm_obj = list_first_entry_or_null( - &priv->inactive_dontneed, typeof(*msm_obj), mm_list); + list, typeof(*msm_obj), mm_list); if (!msm_obj) break; @@ -62,14 +76,9 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc) if (!msm_gem_trylock(&msm_obj->base)) goto tail; - if (is_purgeable(msm_obj)) { - /* - * This will move the obj out of still_in_list to - * the purged list - */ - msm_gem_purge(&msm_obj->base); + if (shrink(msm_obj)) freed += msm_obj->base.size >> PAGE_SHIFT; - } + msm_gem_unlock(&msm_obj->base); tail: @@ -77,16 +86,25 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc) mutex_lock(&priv->mm_lock); } - list_splice_tail(&still_in_list, &priv->inactive_dontneed); + list_splice_tail(&still_in_list, list); mutex_unlock(&priv->mm_lock); - if (freed > 0) { + return freed; +} + +static unsigned long +msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc) +{ + struct msm_drm_private *priv = + container_of(shrinker, struct msm_drm_private, shrinker); + unsigned long freed; + + freed = scan(priv, sc->nr_to_scan, &priv->inactive_dontneed, purge); + + if (freed > 0) trace_msm_gem_purge(freed << PAGE_SHIFT); - } else { - return SHRINK_STOP; - } - return freed; + return (freed > 0) ? 
freed : SHRINK_STOP; } /* since we don't know any better, lets bail after a few @@ -95,29 +113,15 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc) */ static const int vmap_shrink_limit = 15; -static unsigned -vmap_shrink(struct list_head *mm_list) +static bool +vmap_shrink(struct msm_gem_object *msm_obj) { - struct msm_gem_object *msm_obj; - unsigned unmapped = 0; + if (!is_vunmapable(msm_obj)) + return false; - list_for_each_entry(msm_obj, mm_list, mm_list) { - /* Use trylock, because we cannot block on a obj that - * might be trying to acquire mm_lock - */ - if (!msm_gem_trylock(&msm_obj->base)) - continue; - if (is_vunmapable(msm_obj)) { - msm_gem_vunmap(&msm_obj->base); - unmapped++; - } - msm_gem_unlock(&msm_obj->base); + msm_gem_vunmap(&msm_obj->base); - if (++unmapped >= vmap_shrink_limit) - break; - } - - return unmapped; + return true; } static int @@ -133,17 +137,11 @@ msm_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr) }; unsigned idx, unmapped = 0; - mutex_lock(&priv->mm_lock); - - for (idx = 0; mm_lists[idx]; idx++) { - unmapped += vmap_shrink(mm_lists[idx]); - - if (unmapped >= vmap_shrink_limit) - break; + for (idx = 0; mm_lists[idx] && unmapped < vmap_shrink_limit; idx++) { + unmapped += scan(priv, vmap_shrink_limit - unmapped, + mm_lists[idx], vmap_shrink); } - mutex_unlock(&priv->mm_lock); - *(unsigned long *)ptr += unmapped; if (unmapped > 0) From patchwork Mon Apr 5 17:45:26 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 12183419 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 27A2FC43460 for ; Mon, 5 Apr 2021 17:42:14 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D215F613CB for ; Mon, 5 Apr 2021 17:42:13 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239169AbhDERmT (ORCPT ); Mon, 5 Apr 2021 13:42:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54542 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239180AbhDERmR (ORCPT ); Mon, 5 Apr 2021 13:42:17 -0400 Received: from mail-pg1-x533.google.com (mail-pg1-x533.google.com [IPv6:2607:f8b0:4864:20::533]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C94CCC061756; Mon, 5 Apr 2021 10:42:09 -0700 (PDT) Received: by mail-pg1-x533.google.com with SMTP id l76so8560183pga.6; Mon, 05 Apr 2021 10:42:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=KzM5ykbDnSnt9IUX0aCvytu48YxBoDL+JwZURK1Yqus=; b=Z0xSUkgI/WnrPAodBPqs+x45wcrk9D0f0oyWOT/WBVmpPS2gFwzXSmuqpv9fgM+CCK bOBmmjpJteJ4x2fRDhs/jGyki8WpaRuMhQV5O2QwlCyETHmPgm/q6NaaM7t5nUl5oFP9 QyEzWAsMLPd3i36askiG7v8AbWGkTHS16PKij6lgBQN6yXcLlcHnYZcZ5zGTucCKqtfq 
brZLla4UxnvNcmosZ1INhprIw8wlgvprJyzwcSWJdbushazMjhtWxHrlwZ1h4yrP12XX XNsOcF2N74c7FeyccETZN8JespLGqUPuPjE/K9I3Olf8fqEXlQQHi9SIYRFCYTWhbWuC GZgg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=KzM5ykbDnSnt9IUX0aCvytu48YxBoDL+JwZURK1Yqus=; b=Hw8tpY6OQQwNTZtuyVE6jBXx51CUmsEXJf/3BFg7P86VZ1VjiVV9A3wXdTAl3WjmJO LopzNQFTH6CXMr7wbs7L4afEu9ndGqu9zfEQlew+y9lL0xxLn7qOscRht/GSWoBMH8zY aSU/LslC3HsMNbAQViGvnzO3WDScCaMnn/FBvj9I+LtkOl0R2e6T9lkoypxsG+H+l59m AUUg/JJlQ86x5K098lRHDcg5Zka8BnOBeKCTfBhge8NN0aiohGHA+ahAYC3QgphZI2CS yodv+Lpb6srJey1aUaPfFzC4JTl6fA3L/Xv2mLt3pMAMM3YfMn6mncUH0dh82rrbe1V6 FhVg== X-Gm-Message-State: AOAM532DY3i38U8EWw0DcsrkLXl4ACZnfFjHkFOhDP+6hEbnT/xphsvU e5WY0D/PF2Q/3eqXK7ZaCw3KnWSG1G1fOQ== X-Google-Smtp-Source: ABdhPJzQJZ7Z8aP6IlzpwDtucp4npu08KOLRLatwk+FA1xci/wuxVncqz9GxQBRoFBJhqtMILWq4pA== X-Received: by 2002:a65:5b47:: with SMTP id y7mr24029635pgr.119.1617644529404; Mon, 05 Apr 2021 10:42:09 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id b84sm16959829pfb.162.2021.04.05.10.42.08 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 05 Apr 2021 10:42:08 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Jordan Crouse , Rob Clark , Rob Clark , Sean Paul , David Airlie , Daniel Vetter , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH 3/8] drm/msm: Clear msm_obj->sgt in put_pages() Date: Mon, 5 Apr 2021 10:45:26 -0700 Message-Id: <20210405174532.1441497-4-robdclark@gmail.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20210405174532.1441497-1-robdclark@gmail.com> References: <20210405174532.1441497-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Currently this doesn't matter since we keep the pages pinned until the object is destroyed. But when we start unpinning pages to allow objects to be evicted to swap, it will. 
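To make the concern concrete, here is a condensed sketch of put_pages() (not new code from this patch -- the patch itself is only the msm_obj->sgt = NULL line, and the surrounding checks are assumed to look as in the hunk context). Once pages can be unpinned and later re-created by get_pages(), leaving the freed sg_table pointer behind would hand a dangling pointer to the next path that tests msm_obj->sgt:

static void put_pages(struct drm_gem_object *obj)
{
	struct msm_gem_object *msm_obj = to_msm_bo(obj);

	if (msm_obj->pages) {
		if (msm_obj->sgt) {
			sg_free_table(msm_obj->sgt);
			kfree(msm_obj->sgt);
			msm_obj->sgt = NULL;	/* the fix: don't leave a stale pointer */
		}
		/* ... the pages themselves are released further down ... */
	}
}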
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index d5abe8aa9978..71530a89b675 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -162,6 +162,7 @@ static void put_pages(struct drm_gem_object *obj) sg_free_table(msm_obj->sgt); kfree(msm_obj->sgt); + msm_obj->sgt = NULL; } if (use_pages(obj)) From patchwork Mon Apr 5 17:45:27 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 12183421 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id A8706C43603 for ; Mon, 5 Apr 2021 17:42:14 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 81419613B7 for ; Mon, 5 Apr 2021 17:42:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239260AbhDERmU (ORCPT ); Mon, 5 Apr 2021 13:42:20 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54554 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239227AbhDERmT (ORCPT ); Mon, 5 Apr 2021 13:42:19 -0400 Received: from mail-pf1-x42a.google.com (mail-pf1-x42a.google.com [IPv6:2607:f8b0:4864:20::42a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1B20CC06178C; Mon, 5 Apr 2021 10:42:12 -0700 (PDT) Received: by mail-pf1-x42a.google.com with SMTP id m11so5984225pfc.11; Mon, 05 Apr 2021 10:42:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=hft/qShxHo0bK33jERj2ZhfMZ7O/VHjF9RLnRkSOuTI=; b=c/hVy6Rjga0/F4Ynn4GXA9wFndHn3akAXUMMcmBNsZpKtr6LAj3nHXVoeFvS69vkWO 5UiVrkG9JCNVpzh7PWcbbOCWfqF8K1h484iNhoageOyvJvQ8Wla3cFCPyuhtlkVxRCKY jkqm+gQ0ttd3i2jG6+4syBmu6mxKbE0XmoQawe4hLcyt50Zzg1t8DmjtBRcLyEsY5uvQ cH+Jsb78Eh6yC/rTCy90iDi/rxacodWfYInSYvply95Jjopy8ExZeUXtvfzBt2fiSjT2 1xaoIJhGH2of4TJTmlcGhiUDo3/DWLCMPDDMpGFaLTv+5YJwFtRvBIzWtS38D6pHctWW TMQA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=hft/qShxHo0bK33jERj2ZhfMZ7O/VHjF9RLnRkSOuTI=; b=fqcKx0yuGyaP21dllOHkgG9vd1QAXJnQIgA60bSLvQ/00i2TnuvNQE6A+IWaE3rX/v p1zqwguZ3NqPU5V7zmAL0C4sfOdFoS+44A7NXQbiqPeEM/BBgzKM1McDRn4Ag+Lk8mBj QNWgXFIkLglkB7lfkSVULxzC5qfK7bDzrHG9vwjpAH2YOfkowDNNBwmDKKq7+H4nVgMC cXbWNY4pjHpWFzlw28ERuM7pqjHdG3Wu+NmDyg0zoR2P6Ln5Ug1yc/9Rn+OM9Ul06jhx 9xx53SLwbvssn5loG3omMGFwOHmUwAmi/hZUmexkgNVcCRBC4fwvwoBaH5MwqGo6iBqd wkvw== X-Gm-Message-State: AOAM532bZjQ0MlP8NzPFe7Js2i+tkrOkyYWzUYB00hhjpfe7qVvQ2y2n L+Pi4I0COBKx6q8FhHxr+8Q= X-Google-Smtp-Source: ABdhPJzmZZbLJeQ4J4wHDZid4oFT6Yl/7zQkym+z79mPochrvWMslUalo4Qv8GYrkFzvSmdKAPe0Ow== X-Received: by 2002:a63:ff4d:: with SMTP id s13mr23749361pgk.310.1617644531620; Mon, 
05 Apr 2021 10:42:11 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id l25sm17411741pgu.72.2021.04.05.10.42.10 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 05 Apr 2021 10:42:10 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Jordan Crouse , Rob Clark , Rob Clark , Sean Paul , David Airlie , Daniel Vetter , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH 4/8] drm/msm: Split iova purge and close Date: Mon, 5 Apr 2021 10:45:27 -0700 Message-Id: <20210405174532.1441497-5-robdclark@gmail.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20210405174532.1441497-1-robdclark@gmail.com> References: <20210405174532.1441497-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Currently these always go together, either when we purge MADV_WONTNEED objects or when the object is freed. But for unpin, we want to be able to purge (unmap from iommu) the vma, while keeping the iova range allocated (so we can remap back to the same GPU virtual address when the object is re-pinned. Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 17 ++++++++++++----- 1 file changed, 12 insertions(+), 5 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 71530a89b675..5f0647adc29d 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -357,9 +357,14 @@ static void del_vma(struct msm_gem_vma *vma) kfree(vma); } -/* Called with msm_obj locked */ +/** + * If close is true, this also closes the VMA (releasing the allocated + * iova range) in addition to removing the iommu mapping. In the eviction + * case (!close), we keep the iova allocated, but only remove the iommu + * mapping. 
+ */ static void -put_iova_spaces(struct drm_gem_object *obj) +put_iova_spaces(struct drm_gem_object *obj, bool close) { struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma; @@ -369,7 +374,8 @@ put_iova_spaces(struct drm_gem_object *obj) list_for_each_entry(vma, &msm_obj->vmas, list) { if (vma->aspace) { msm_gem_purge_vma(vma->aspace, vma); - msm_gem_close_vma(vma->aspace, vma); + if (close) + msm_gem_close_vma(vma->aspace, vma); } } } @@ -711,7 +717,8 @@ void msm_gem_purge(struct drm_gem_object *obj) GEM_WARN_ON(!is_purgeable(msm_obj)); GEM_WARN_ON(obj->import_attach); - put_iova_spaces(obj); + /* Get rid of any iommu mapping(s): */ + put_iova_spaces(obj, true); msm_gem_vunmap(obj); @@ -1013,7 +1020,7 @@ void msm_gem_free_object(struct drm_gem_object *obj) /* object should not be on active list: */ GEM_WARN_ON(is_active(msm_obj)); - put_iova_spaces(obj); + put_iova_spaces(obj, true); if (obj->import_attach) { GEM_WARN_ON(msm_obj->vaddr); From patchwork Mon Apr 5 17:45:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 12183423 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C8160C43611 for ; Mon, 5 Apr 2021 17:42:15 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A5895613CB for ; Mon, 5 Apr 2021 17:42:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239283AbhDERmV (ORCPT ); Mon, 5 Apr 2021 13:42:21 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54572 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239291AbhDERmU (ORCPT ); Mon, 5 Apr 2021 13:42:20 -0400 Received: from mail-pf1-x434.google.com (mail-pf1-x434.google.com [IPv6:2607:f8b0:4864:20::434]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 733B7C061788; Mon, 5 Apr 2021 10:42:14 -0700 (PDT) Received: by mail-pf1-x434.google.com with SMTP id c204so4862303pfc.4; Mon, 05 Apr 2021 10:42:14 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Tgo9D0hje3g+WF3fE8utkvq2AhTGNUmeDw54h5VPmLM=; b=rj7smkgnlpLC/bTHiifDMhJrEeJ89AG1HOeRjIFbjIs7Lv6uu6et9szKj5V9gH3DuJ LUU2GeBZfoq3BnTAUZXy0cOvMNCHLB2Dh7XysYM+46tfZfFbPuqXdP2CkdhxVrxVRNf7 LYWiSptv6sucT44AMh0VnEffS8tnz5xVTnCKDgybWHNti8+U4yzFMIQEn4bW91xaNNUQ z1vPOVM/hgffT6ZaLaI8iNiPYHAuGDh4fdt4qHsFiWY/C5PmEQP/rSm/CD+gY6bEkDDA Vnizln7kX3+PD8ISt/LtfG/oMlLNPbIJ2tZyshxwd0aL1NsOZkmUNjIaTY2jx9FqjTFR bz/A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Tgo9D0hje3g+WF3fE8utkvq2AhTGNUmeDw54h5VPmLM=; b=Q3ZMwTtMC3kQfnP93kb4Iks2ekpD25qLDEDlTb4RaT27G8kUtNCYS1zdIXv6JMRh9C 
kI+dT2QeHrPvNqqbZ2tk0uPBjePbFMW2NFGyzI5azo4dRbtZ3J9dplQtFk9OTn8rqooI tJvl5PFBfM/wVu3uFxpt/WjViB3GIGB0frAO5Ojrt44zHRFrVP0FcL3d69x2+8Gmfa6A Izu5140RlJq858naEDhNy1MkKr7+JVdy5FQCIbdEI6ubbTkHxEYO3Y3TMimEqKZrt6re YDtP/dwyezRZXZ00fMSsWbNT1TyogbSU70/ncZUfbWh3118XISbmHKFwG9yfrMhc9Sye WmyQ== X-Gm-Message-State: AOAM530wSPp4KsFanP/5lmkuloJuLOO9J44twcJptXbciyaXFl/Yc3ci kq3pT8BU38FrK7kEKZH36OM= X-Google-Smtp-Source: ABdhPJz6fKeO/l5QlHXYUTnnn6zgKzUbJ7cRbP+y9uy9gEdCqgJHFfHw4BPwA5IULkeOGOVPN4Bxiw== X-Received: by 2002:a63:2e47:: with SMTP id u68mr24203990pgu.6.1617644534048; Mon, 05 Apr 2021 10:42:14 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id hi21sm92676pjb.36.2021.04.05.10.42.12 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 05 Apr 2021 10:42:13 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Jordan Crouse , Rob Clark , Rob Clark , Sean Paul , David Airlie , Daniel Vetter , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH 5/8] drm/msm: Add $debugfs/gem stats on resident objects Date: Mon, 5 Apr 2021 10:45:28 -0700 Message-Id: <20210405174532.1441497-6-robdclark@gmail.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20210405174532.1441497-1-robdclark@gmail.com> References: <20210405174532.1441497-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Currently nearly everything, other than newly allocated objects which are not yet backed by pages, is pinned and resident in RAM. But it will be nice to have some stats on what is unpinned once that is supported. 
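Concretely, the new bucket counts any object that currently has backing pages, independent of its active/purgeable state. Condensed from the diff below:

struct msm_gem_stats {
	struct {
		unsigned count;
		size_t size;
	} all, active, resident, purgable, purged;	/* 'resident' is the new bucket */
};

/* in msm_gem_describe(): an object is resident iff it has pages */
if (msm_obj->pages) {
	stats->resident.count++;
	stats->resident.size += obj->size;
}

Once eviction is wired up later in the series, the gap between the All and Resident totals shows how much memory is currently unpinned.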
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 7 +++++++ drivers/gpu/drm/msm/msm_gem.h | 4 ++-- 2 files changed, 9 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 5f0647adc29d..9ff37904ec2b 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -902,6 +902,11 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m, stats->active.size += obj->size; } + if (msm_obj->pages) { + stats->resident.count++; + stats->resident.size += obj->size; + } + switch (msm_obj->madv) { case __MSM_MADV_PURGED: stats->purged.count++; @@ -991,6 +996,8 @@ void msm_gem_describe_objects(struct list_head *list, struct seq_file *m) stats.all.count, stats.all.size); seq_printf(m, "Active: %4d objects, %9zu bytes\n", stats.active.count, stats.active.size); + seq_printf(m, "Resident: %4d objects, %9zu bytes\n", + stats.resident.count, stats.resident.size); seq_printf(m, "Purgable: %4d objects, %9zu bytes\n", stats.purgable.count, stats.purgable.size); seq_printf(m, "Purged: %4d objects, %9zu bytes\n", diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 917af526a5c5..e13a9301b616 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -162,13 +162,13 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev, struct dma_buf *dmabuf, struct sg_table *sgt); __printf(2, 3) void msm_gem_object_set_name(struct drm_gem_object *bo, const char *fmt, ...); -#ifdef CONFIG_DEBUG_FS +#ifdef CONFIG_DEBUG_FS struct msm_gem_stats { struct { unsigned count; size_t size; - } all, active, purgable, purged; + } all, active, resident, purgable, purged; }; void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m, From patchwork Mon Apr 5 17:45:29 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 12183425 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id AB33DC433B4 for ; Mon, 5 Apr 2021 17:42:19 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 8476061394 for ; Mon, 5 Apr 2021 17:42:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239429AbhDERmY (ORCPT ); Mon, 5 Apr 2021 13:42:24 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54580 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239424AbhDERmY (ORCPT ); Mon, 5 Apr 2021 13:42:24 -0400 Received: from mail-pg1-x529.google.com (mail-pg1-x529.google.com [IPv6:2607:f8b0:4864:20::529]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1D53DC061756; Mon, 5 Apr 2021 10:42:17 -0700 (PDT) Received: by mail-pg1-x529.google.com with SMTP id z16so1173716pga.1; Mon, 05 Apr 2021 10:42:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references 
:mime-version:content-transfer-encoding; bh=sHChho7zMUiHOnTsFblEmDhAGhcQTocBf3TEMdJd+tY=; b=K3oGoYSRmlEfhUMGRq3WGqZGqilwkwmulVgJ92meU6+ihbOeYaJJhXJpgjcdzRvazl Lga9QxQfvx/FyHrKJRrpjuV/d0psQu6xMZDuJ/2OnNOlaDwbU74ByGiGMlmVOHrb79Hf KQ6Kbsmkk32THBDqakI8ZVIW995Sz9TeKtVTD0hz7aWK4IqI62j3GYGeQVJXHMmXNCOE 1qTfWpIPq+n+5EnZYDdbquw0u3mZp2xdgJNc+SsOlxlfo/5NgR5Y8U4h5UKE4lMBwq0C UAxtYa+oYqu8D68QBjuUiuYrNdlUINA1NJcSXZCncFKdsXIfyZ5E0PugSyII5i3iLy5G IYQw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=sHChho7zMUiHOnTsFblEmDhAGhcQTocBf3TEMdJd+tY=; b=gQQPdz15NrVEeRz4f46u9BB9x4sb+LviiZF+DMzr93oIKKKk62MtJcSn+VYpzXyyGs 17YMk9uERIaspQmN8XGyFmIKCOV/MKxeYl2Re+lbQg3YjyYM4y7j+yHkqLKm013wuWJO UaAvzXHPVmYLu6exzg+y5qxbwcpRRYsU2ZUpxDkxR7KZpWCAGX1u3EYk96y+wzex38tM RDpM0A4jN+/k07kXEuu7OT81VelvVWOEi+9SLYR49EvX0MT5URc+dp8SySzgNyGhgXXl CLpEe77gVGonLt1kon9uupxIq/zMBrWM4lxl+yobT0QrwiHI3zIJHdojak4dzPU73pCt 6ntw== X-Gm-Message-State: AOAM533Yso4x9HfGbPACOkTD+UPqpOViovlmdNdzTBhugfVcyU0gY99t dbSI7uMgIpffB+8POqdAWuw= X-Google-Smtp-Source: ABdhPJyrMISSr0sdcuXCm4ayrqtIlCTOJMOezpXviaCFdDfrnlFkeK1xFXqgU2PbrREfywv1AlBu7Q== X-Received: by 2002:a63:aa48:: with SMTP id x8mr23969338pgo.246.1617644536531; Mon, 05 Apr 2021 10:42:16 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id gz12sm92813pjb.33.2021.04.05.10.42.15 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 05 Apr 2021 10:42:15 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Jordan Crouse , Rob Clark , Rob Clark , Sean Paul , David Airlie , Daniel Vetter , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH 6/8] drm/msm: Track potentially evictable objects Date: Mon, 5 Apr 2021 10:45:29 -0700 Message-Id: <20210405174532.1441497-7-robdclark@gmail.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20210405174532.1441497-1-robdclark@gmail.com> References: <20210405174532.1441497-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Objects that are potential for swapping out are (1) willneed (ie. if they are purgable/MADV_WONTNEED we can just free the pages without them having to land in swap), (2) not on an active list, (3) not dma-buf imported or exported, and (4) not vmap'd. This repurposes the purged list for objects that do not have backing pages (either because they have not been pinned for the first time yet, or in a later patch because they have been unpinned/evicted. 
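Conditions (1) and (2) are handled by which LRU the object sits on -- only the inactive_willneed list is scanned for eviction -- while the per-object conditions (not imported/exported, not vmap'd, and, new here, not pinned) are captured by the helper this patch adds to msm_gem.h, shown condensed here; the mark_evictable()/mark_unevictable() accounting under mm_lock is in the hunk below:

/* An object is a candidate for eviction unless any of these hold: */
static inline bool is_unevictable(struct msm_gem_object *msm_obj)
{
	return is_unpurgable(msm_obj) ||	/* e.g. dma-buf imported/exported */
	       msm_obj->pin_count ||		/* currently pinned (scanout, ringbuffer, ...) */
	       msm_obj->vaddr;			/* vmap'd in the kernel */
}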
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_drv.c | 2 +- drivers/gpu/drm/msm/msm_drv.h | 13 ++++++---- drivers/gpu/drm/msm/msm_gem.c | 44 ++++++++++++++++++++++++++-------- drivers/gpu/drm/msm/msm_gem.h | 45 +++++++++++++++++++++++++++++++++++ 4 files changed, 89 insertions(+), 15 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index e12d5fbd0a34..d3d6c743b7af 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -451,7 +451,7 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv) INIT_LIST_HEAD(&priv->inactive_willneed); INIT_LIST_HEAD(&priv->inactive_dontneed); - INIT_LIST_HEAD(&priv->inactive_purged); + INIT_LIST_HEAD(&priv->inactive_unpinned); mutex_init(&priv->mm_lock); /* Teach lockdep about lock ordering wrt. shrinker: */ diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index 6a42cdf4cf7e..2668941df529 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -182,11 +182,15 @@ struct msm_drm_private { struct mutex obj_lock; /** - * Lists of inactive GEM objects. Every bo is either in one of the + * LRUs of inactive GEM objects. Every bo is either in one of the * inactive lists (depending on whether or not it is shrinkable) or * gpu->active_list (for the gpu it is active on[1]), or transiently * on a temporary list as the shrinker is running. * + * Note that inactive_willneed also contains pinned and vmap'd bos, + * but the number of pinned-but-not-active objects is small (scanout + * buffers, ringbuffer, etc). + * * These lists are protected by mm_lock (which should be acquired * before per GEM object lock). One should *not* hold mm_lock in * get_pages()/vmap()/etc paths, as they can trigger the shrinker. 
@@ -194,10 +198,11 @@ struct msm_drm_private { * [1] if someone ever added support for the old 2d cores, there could be * more than one gpu object */ - struct list_head inactive_willneed; /* inactive + !shrinkable */ - struct list_head inactive_dontneed; /* inactive + shrinkable */ - struct list_head inactive_purged; /* inactive + purged */ + struct list_head inactive_willneed; /* inactive + potentially unpin/evictable */ + struct list_head inactive_dontneed; /* inactive + shrinkable */ + struct list_head inactive_unpinned; /* inactive + purged or unpinned */ long shrinkable_count; /* write access under mm_lock */ + long evictable_count; /* write access under mm_lock */ struct mutex mm_lock; struct workqueue_struct *wq; diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 9ff37904ec2b..9ac89951080c 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -130,6 +130,9 @@ static struct page **get_pages(struct drm_gem_object *obj) */ if (msm_obj->flags & (MSM_BO_WC|MSM_BO_UNCACHED)) sync_for_device(msm_obj); + + GEM_WARN_ON(msm_obj->active_count); + update_inactive(msm_obj); } return msm_obj->pages; @@ -428,7 +431,7 @@ static int msm_gem_pin_iova(struct drm_gem_object *obj, struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma; struct page **pages; - int prot = IOMMU_READ; + int ret, prot = IOMMU_READ; if (!(msm_obj->flags & MSM_BO_GPU_READONLY)) prot |= IOMMU_WRITE; @@ -449,8 +452,13 @@ static int msm_gem_pin_iova(struct drm_gem_object *obj, if (IS_ERR(pages)) return PTR_ERR(pages); - return msm_gem_map_vma(aspace, vma, prot, + ret = msm_gem_map_vma(aspace, vma, prot, msm_obj->sgt, obj->size >> PAGE_SHIFT); + + if (!ret) + msm_obj->pin_count++; + + return ret; } static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, @@ -542,14 +550,21 @@ uint64_t msm_gem_iova(struct drm_gem_object *obj, void msm_gem_unpin_iova_locked(struct drm_gem_object *obj, struct msm_gem_address_space *aspace) { + struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma; GEM_WARN_ON(!msm_gem_is_locked(obj)); vma = lookup_vma(obj, aspace); - if (!GEM_WARN_ON(!vma)) + if (!GEM_WARN_ON(!vma)) { msm_gem_unmap_vma(aspace, vma); + + msm_obj->pin_count--; + GEM_WARN_ON(msm_obj->pin_count < 0); + + update_inactive(msm_obj); + } } /* @@ -800,9 +815,12 @@ void msm_gem_active_get(struct drm_gem_object *obj, struct msm_gpu *gpu) GEM_WARN_ON(!msm_gem_is_locked(obj)); GEM_WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED); GEM_WARN_ON(msm_obj->dontneed); + GEM_WARN_ON(!msm_obj->sgt); if (msm_obj->active_count++ == 0) { mutex_lock(&priv->mm_lock); + if (msm_obj->evictable) + mark_unevictable(msm_obj); list_del(&msm_obj->mm_list); list_add_tail(&msm_obj->mm_list, &gpu->active_list); mutex_unlock(&priv->mm_lock); @@ -825,21 +843,28 @@ static void update_inactive(struct msm_gem_object *msm_obj) { struct msm_drm_private *priv = msm_obj->base.dev->dev_private; + GEM_WARN_ON(!msm_gem_is_locked(&msm_obj->base)); + + if (msm_obj->active_count != 0) + return; + mutex_lock(&priv->mm_lock); - GEM_WARN_ON(msm_obj->active_count != 0); if (msm_obj->dontneed) mark_unpurgable(msm_obj); + if (msm_obj->evictable) + mark_unevictable(msm_obj); list_del(&msm_obj->mm_list); - if (msm_obj->madv == MSM_MADV_WILLNEED) { + if ((msm_obj->madv == MSM_MADV_WILLNEED) && msm_obj->sgt) { list_add_tail(&msm_obj->mm_list, &priv->inactive_willneed); + mark_evictable(msm_obj); } else if (msm_obj->madv == MSM_MADV_DONTNEED) { list_add_tail(&msm_obj->mm_list, 
&priv->inactive_dontneed); mark_purgable(msm_obj); } else { - GEM_WARN_ON(msm_obj->madv != __MSM_MADV_PURGED); - list_add_tail(&msm_obj->mm_list, &priv->inactive_purged); + GEM_WARN_ON((msm_obj->madv != __MSM_MADV_PURGED) && msm_obj->sgt); + list_add_tail(&msm_obj->mm_list, &priv->inactive_unpinned); } mutex_unlock(&priv->mm_lock); @@ -1201,8 +1226,7 @@ static struct drm_gem_object *_msm_gem_new(struct drm_device *dev, } mutex_lock(&priv->mm_lock); - /* Initially obj is idle, obj->madv == WILLNEED: */ - list_add_tail(&msm_obj->mm_list, &priv->inactive_willneed); + list_add_tail(&msm_obj->mm_list, &priv->inactive_unpinned); mutex_unlock(&priv->mm_lock); mutex_lock(&priv->obj_lock); @@ -1276,7 +1300,7 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev, msm_gem_unlock(obj); mutex_lock(&priv->mm_lock); - list_add_tail(&msm_obj->mm_list, &priv->inactive_willneed); + list_add_tail(&msm_obj->mm_list, &priv->inactive_unpinned); mutex_unlock(&priv->mm_lock); mutex_lock(&priv->obj_lock); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index e13a9301b616..39b2e5584f97 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -60,6 +60,11 @@ struct msm_gem_object { */ bool dontneed : 1; + /** + * Is object evictable (ie. counted in priv->evictable_count)? + */ + bool evictable : 1; + /** * count of active vmap'ing */ @@ -103,6 +108,7 @@ struct msm_gem_object { char name[32]; /* Identifier to print for the debugfs files */ int active_count; + int pin_count; }; #define to_msm_bo(x) container_of(x, struct msm_gem_object, base) @@ -263,7 +269,46 @@ static inline void mark_unpurgable(struct msm_gem_object *msm_obj) msm_obj->dontneed = false; } +static inline bool is_unevictable(struct msm_gem_object *msm_obj) +{ + return is_unpurgable(msm_obj) || msm_obj->pin_count || msm_obj->vaddr; +} + +static inline void mark_evictable(struct msm_gem_object *msm_obj) +{ + struct msm_drm_private *priv = msm_obj->base.dev->dev_private; + + WARN_ON(!mutex_is_locked(&priv->mm_lock)); + + if (is_unevictable(msm_obj)) + return; + + if (WARN_ON(msm_obj->evictable)) + return; + + priv->evictable_count += msm_obj->base.size >> PAGE_SHIFT; + msm_obj->evictable = true; +} + +static inline void mark_unevictable(struct msm_gem_object *msm_obj) +{ + struct msm_drm_private *priv = msm_obj->base.dev->dev_private; + + WARN_ON(!mutex_is_locked(&priv->mm_lock)); + + if (is_unevictable(msm_obj)) + return; + + if (WARN_ON(!msm_obj->evictable)) + return; + + priv->evictable_count -= msm_obj->base.size >> PAGE_SHIFT; + WARN_ON(priv->evictable_count < 0); + msm_obj->evictable = false; +} + void msm_gem_purge(struct drm_gem_object *obj); +void msm_gem_evict(struct drm_gem_object *obj); void msm_gem_vunmap(struct drm_gem_object *obj); /* Created per submit-ioctl, to track bo's and cmdstream bufs, etc, From patchwork Mon Apr 5 17:45:30 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 12183427 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org 
[198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 49A3AC4360C for ; Mon, 5 Apr 2021 17:42:24 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 2D7B961399 for ; Mon, 5 Apr 2021 17:42:24 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239525AbhDERm2 (ORCPT ); Mon, 5 Apr 2021 13:42:28 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54592 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239419AbhDERm1 (ORCPT ); Mon, 5 Apr 2021 13:42:27 -0400 Received: from mail-pj1-x102f.google.com (mail-pj1-x102f.google.com [IPv6:2607:f8b0:4864:20::102f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1ED5AC061756; Mon, 5 Apr 2021 10:42:19 -0700 (PDT) Received: by mail-pj1-x102f.google.com with SMTP id nh5so4220236pjb.5; Mon, 05 Apr 2021 10:42:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=lNUTj4cH2m+cKbQ1c8oBegHz15SkUdeTuXeC8QriD2w=; b=FYbnGNI+71Ugjz8aN913ncSWFoObUE+d7Ay9FWTkF4UYUw7/NkIgMd4KpR/ccYHhxM GxqwieReZYObur7q27lHWLPL2G9aaxsK1rehNeGDK1/d/JpamGzTu9gcgWL8F2vajDIY XRiponVvQngtXWAbtodxFlMPMnKEkLeBHiuiPa/h6ao34MVE/ml9ZTCGy22IkSmigeSb Huzyk7xgj5ADAU3WoXzIkc68tX7UUMgTCoE0ffi7aWb/i4g865LwbTCZlJdjGHQA7UPm b4aov0nhZAAasvRCk81CiEEMUcFlVh3atghT3b10VAe1gD4ORUOHVxzJLLVJZHOIk9CB mHgg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=lNUTj4cH2m+cKbQ1c8oBegHz15SkUdeTuXeC8QriD2w=; b=DR+MTxioSUnw8DqnOvh+hvA/wgyqQvXWfWLKRo7kS5V+2ubdMrHtV2St8sQv3DfH3F TR/8DotRXINiEO8ztoqH/rsu7SBhlVUJ9WGAqheLIhPy5EnIrTGBDiq6f4/2FnXJTzzl kft41gxDkuUWCB9l6gd3+Sa7ieVoiRU/XGX6286FpOgkMPL+Q0un/xZ3vDwM53mIsYAC +kVgbxc+TSs4PBBKa2zxtDxQDCeKAOzyR2xad1k9DAnvW1z7Wx9eyGtuKrSClu+Ey0VA uLkfqpSl69CD6oGse8RCnYXSq+d34TCSSSau5nFu0jTnXcRhUaNgJq0Q1Fx5MqjGmbwR VSkQ== X-Gm-Message-State: AOAM530Lm5K9T6bD2LAFr3lHHwgjHwe19Ow92KTSRjLzdOhDXS0EgS9a Sc8bfVU237pJgPLh10iDCww= X-Google-Smtp-Source: ABdhPJxdRO85RG3m2k3YtzCW0Xv3vu3+xlX8l5aNSXS4owe3vLWNMG9v/9vzzZfQ/AsXSkrK6IUCzw== X-Received: by 2002:a17:903:31ca:b029:e6:65f:ca87 with SMTP id v10-20020a17090331cab02900e6065fca87mr24984947ple.85.1617644538682; Mon, 05 Apr 2021 10:42:18 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id l25sm17411896pgu.72.2021.04.05.10.42.17 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 05 Apr 2021 10:42:17 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Jordan Crouse , Rob Clark , Rob Clark , Sean Paul , David Airlie , Daniel Vetter , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH 7/8] drm/msm: Small msm_gem_purge() fix Date: Mon, 5 Apr 2021 10:45:30 -0700 Message-Id: <20210405174532.1441497-8-robdclark@gmail.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: <20210405174532.1441497-1-robdclark@gmail.com> References: <20210405174532.1441497-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Shoot down any mmap's *first* before put_pages(). 
Also add a WARN_ON that the object is locked (to make it clear that this doesn't race with msm_gem_fault()) and remove a redundant WARN_ON (since is_purgable() already covers that case). Fixes: 68209390f116 ("drm/msm: shrinker support") Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 9ac89951080c..163a1d30b5c9 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -729,14 +729,16 @@ void msm_gem_purge(struct drm_gem_object *obj) struct drm_device *dev = obj->dev; struct msm_gem_object *msm_obj = to_msm_bo(obj); + GEM_WARN_ON(!msm_gem_is_locked(obj)); GEM_WARN_ON(!is_purgeable(msm_obj)); - GEM_WARN_ON(obj->import_attach); /* Get rid of any iommu mapping(s): */ put_iova_spaces(obj, true); msm_gem_vunmap(obj); + drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping); + put_pages(obj); put_iova_vmas(obj); @@ -744,7 +746,6 @@ void msm_gem_purge(struct drm_gem_object *obj) msm_obj->madv = __MSM_MADV_PURGED; update_inactive(msm_obj); - drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping); drm_gem_free_mmap_offset(obj); /* Our goal here is to return as much of the memory as From patchwork Mon Apr 5 17:45:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 12183429 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED,USER_AGENT_GIT autolearn=unavailable autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D3E55C43460 for ; Mon, 5 Apr 2021 17:42:35 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id ACE8B61396 for ; Mon, 5 Apr 2021 17:42:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239291AbhDERmj (ORCPT ); Mon, 5 Apr 2021 13:42:39 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54604 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239474AbhDERm3 (ORCPT ); Mon, 5 Apr 2021 13:42:29 -0400 Received: from mail-pg1-x52c.google.com (mail-pg1-x52c.google.com [IPv6:2607:f8b0:4864:20::52c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 93B3DC061794; Mon, 5 Apr 2021 10:42:21 -0700 (PDT) Received: by mail-pg1-x52c.google.com with SMTP id y32so5382693pga.11; Mon, 05 Apr 2021 10:42:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=7MWEztFYREaj4EaJg4EzCQLWGTSCEwCslvK3LYOeVzE=; b=HRjGZS1QILkvNMmvQmZn3S40Na/kBwIbA6WVXDpnPh/VerF7PjPRJvpl7/XN6W4A6M a6HWcvDmChgM4WL36FgQdBqTKbrpTyboqYE4Xmyqz2Pn6p/U2nwbYpYx4imr9/9Vt9Jo zGbSNQyGZlO4ootd5yQAv1i/dW4Z6+ljPnr9sODOK3Mw/0gywldqHIhPcpIMg74ctZA5 SMR/IzvOskt2Jmcru/JsesJCc+kr7QGaxjRDXPGDMqcdYYygeTWfjbQtl/44xeaUv4mr yPI8790q3HzS5SHk29h+3MdFMPyGngH1C+67AN53DJzLb9x14igyzbSZX+k6UQ+hRhbT Ha5w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; 
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 163a1d30b5c9..2b731cf42294 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -759,6 +759,29 @@ void msm_gem_purge(struct drm_gem_object *obj)
 			0, (loff_t)-1);
 }
 
+/**
+ * Unpin the backing pages and make them available to be swapped out.
+ */
+void msm_gem_evict(struct drm_gem_object *obj)
+{
+	struct drm_device *dev = obj->dev;
+	struct msm_gem_object *msm_obj = to_msm_bo(obj);
+
+	GEM_WARN_ON(!msm_gem_is_locked(obj));
+	GEM_WARN_ON(is_unevictable(msm_obj));
+	GEM_WARN_ON(!msm_obj->evictable);
+	GEM_WARN_ON(msm_obj->active_count);
+
+	/* Get rid of any iommu mapping(s): */
+	put_iova_spaces(obj, false);
+
+	drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping);
+
+	put_pages(obj);
+
+	update_inactive(msm_obj);
+}
+
 void msm_gem_vunmap(struct drm_gem_object *obj)
 {
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c
index 38bf919f8508..52828028b9d4 100644
--- a/drivers/gpu/drm/msm/msm_gem_shrinker.c
+++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c
@@ -9,12 +9,26 @@
 #include "msm_gpu.h"
 #include "msm_gpu_trace.h"
 
+bool enable_swap = true;
+MODULE_PARM_DESC(enable_swap, "Enable swappable GEM buffers");
+module_param(enable_swap, bool, 0600);
+
+static bool can_swap(void)
+{
+	return enable_swap && get_nr_swap_pages() > 0;
+}
+
 static unsigned long
 msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
 {
 	struct msm_drm_private *priv = container_of(shrinker, struct msm_drm_private, shrinker);
 
-	return priv->shrinkable_count;
+	unsigned count = priv->shrinkable_count;
+
+	if (can_swap())
+		count += priv->evictable_count;
+
+	return count;
 }
 
 static bool
@@ -32,6 +46,17 @@ purge(struct msm_gem_object *msm_obj)
 	return true;
 }
 
+static bool
+evict(struct msm_gem_object *msm_obj)
+{
+	if (is_unevictable(msm_obj))
+		return false;
+
+	msm_gem_evict(&msm_obj->base);
+
+	return true;
+}
+
 static unsigned long
 scan(struct msm_drm_private *priv, unsigned nr_to_scan, struct list_head *list,
 		bool (*shrink)(struct msm_gem_object *msm_obj))
@@ -104,6 +129,16 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
 	if (freed > 0)
 		trace_msm_gem_purge(freed << PAGE_SHIFT);
 
+	if (can_swap() && freed < sc->nr_to_scan) {
+		int evicted = scan(priv, sc->nr_to_scan - freed,
+				&priv->inactive_willneed, evict);
+
+		if (evicted > 0)
+			trace_msm_gem_evict(evicted << PAGE_SHIFT);
+
+		freed += evicted;
+	}
+
 	return (freed > 0) ? freed : SHRINK_STOP;
 }
 
diff --git a/drivers/gpu/drm/msm/msm_gpu_trace.h b/drivers/gpu/drm/msm/msm_gpu_trace.h
index 03e0c2536b94..ca0b08d7875b 100644
--- a/drivers/gpu/drm/msm/msm_gpu_trace.h
+++ b/drivers/gpu/drm/msm/msm_gpu_trace.h
@@ -128,6 +128,19 @@ TRACE_EVENT(msm_gem_purge,
 );
 
 
+TRACE_EVENT(msm_gem_evict,
+		TP_PROTO(u32 bytes),
+		TP_ARGS(bytes),
+		TP_STRUCT__entry(
+			__field(u32, bytes)
+			),
+		TP_fast_assign(
+			__entry->bytes = bytes;
+			),
+		TP_printk("Evicting %u bytes", __entry->bytes)
+);
+
+
 TRACE_EVENT(msm_gem_purge_vmaps,
 		TP_PROTO(u32 unmapped),
 		TP_ARGS(unmapped),