From patchwork Sat Jun 25 22:54:36 2022
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 12895485
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, David Airlie,
    Daniel Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH 01/15] drm/msm: Switch to pfn mappings
Date: Sat, 25 Jun 2022 15:54:36 -0700
Message-Id: <20220625225454.81039-2-robdclark@gmail.com>
In-Reply-To: <20220625225454.81039-1-robdclark@gmail.com>
References: <20220625225454.81039-1-robdclark@gmail.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org

From: Rob Clark

I'm not entirely sure why we were using VM_MIXEDMAP.  These are never
CoW mappings.  Let's switch to be more consistent with what other
drivers and the GEM shmem helpers do.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index ad7da2ca35ab..8ddbd2e001d4 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -259,7 +259,8 @@ static vm_fault_t msm_gem_fault(struct vm_fault *vmf)
     VERB("Inserting %p pfn %lx, pa %lx", (void *)vmf->address,
             pfn, pfn << PAGE_SHIFT);
 
-    ret = vmf_insert_mixed(vma, vmf->address, __pfn_to_pfn_t(pfn, PFN_DEV));
+    ret = vmf_insert_pfn(vma, vmf->address, pfn);
+
 out_unlock:
     msm_gem_unlock(obj);
 out:
@@ -1051,7 +1052,7 @@ static int msm_gem_object_mmap(struct drm_gem_object *obj, struct vm_area_struct
 {
     struct msm_gem_object *msm_obj = to_msm_bo(obj);
 
-    vma->vm_flags |= VM_IO | VM_MIXEDMAP | VM_DONTEXPAND | VM_DONTDUMP;
+    vma->vm_flags |= VM_PFNMAP | VM_DONTEXPAND | VM_DONTDUMP;
     vma->vm_page_prot = msm_gem_pgprot(msm_obj, vm_get_page_prot(vma->vm_flags));
 
     return 0;
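For reference, a pfn-based fault handler under VM_PFNMAP generally takes the
shape of the minimal sketch below. This is an illustrative stand-alone example,
not code from this series; my_gem_fault() and my_gem_lookup_pfn() are
hypothetical names.

    #include <linux/mm.h>

    /* Hypothetical helper: resolve the backing pfn for the faulting address. */
    extern unsigned long my_gem_lookup_pfn(struct vm_area_struct *vma,
                                           unsigned long addr);

    static vm_fault_t my_gem_fault(struct vm_fault *vmf)
    {
        struct vm_area_struct *vma = vmf->vma;
        unsigned long pfn = my_gem_lookup_pfn(vma, vmf->address);

        /*
         * With VM_PFNMAP the core mm never installs struct-page based
         * mappings behind the driver's back, so a plain pfn insert is
         * enough (no pfn_t/PFN_DEV wrapping as with vmf_insert_mixed()).
         */
        return vmf_insert_pfn(vma, vmf->address, pfn);
    }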
From patchwork Sat Jun 25 22:54:37 2022
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 12895486
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, David Airlie,
    Daniel Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH 02/15] drm/msm: Make enable_eviction flag static
Date: Sat, 25 Jun 2022 15:54:37 -0700
Message-Id: <20220625225454.81039-3-robdclark@gmail.com>
In-Reply-To: <20220625225454.81039-1-robdclark@gmail.com>
References: <20220625225454.81039-1-robdclark@gmail.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org

From: Rob Clark

No need for it to be visible outside of this one src file.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem_shrinker.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c
index 086dacf2f26a..6e39d959b9f0 100644
--- a/drivers/gpu/drm/msm/msm_gem_shrinker.c
+++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c
@@ -15,7 +15,7 @@
 /* Default disabled for now until it has some more testing on the different
  * iommu combinations that can be paired with the driver:
  */
-bool enable_eviction = false;
+static bool enable_eviction = false;
 MODULE_PARM_DESC(enable_eviction, "Enable swappable GEM buffers");
 module_param(enable_eviction, bool, 0600);
 
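For context, module_param() works on file-local symbols just as well, since it
only needs the symbol visible in the translation unit where it is declared. A
minimal sketch of the pattern with generic names (not the driver's actual
code):

    #include <linux/module.h>

    /* static is enough: module_param() only references the symbol here. */
    static bool enable_feature = false;
    MODULE_PARM_DESC(enable_feature, "Enable the experimental feature");
    module_param(enable_feature, bool, 0600);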
From patchwork Sat Jun 25 22:54:38 2022
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 12895487
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, David Airlie,
    Daniel Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH 03/15] drm/msm: Reorder lock vs submit alloc
Date: Sat, 25 Jun 2022 15:54:38 -0700
Message-Id: <20220625225454.81039-4-robdclark@gmail.com>
In-Reply-To: <20220625225454.81039-1-robdclark@gmail.com>
References: <20220625225454.81039-1-robdclark@gmail.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org

From: Rob Clark

This lets us drop the NORETRY.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem_submit.c | 24 ++++++++++--------------
 1 file changed, 10 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index c9e4aeb14f4a..b7c61a99d274 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -36,7 +36,7 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev,
     if (sz > SIZE_MAX)
         return ERR_PTR(-ENOMEM);
 
-    submit = kzalloc(sz, GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY);
+    submit = kzalloc(sz, GFP_KERNEL);
     if (!submit)
         return ERR_PTR(-ENOMEM);
 
@@ -771,25 +771,21 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
     trace_msm_gpu_submit(pid_nr(pid), ring->id, submitid,
             args->nr_bos, args->nr_cmds);
 
-    ret = mutex_lock_interruptible(&queue->lock);
-    if (ret)
-        goto out_post_unlock;
-
     if (args->flags & MSM_SUBMIT_FENCE_FD_OUT) {
         out_fence_fd = get_unused_fd_flags(O_CLOEXEC);
         if (out_fence_fd < 0) {
             ret = out_fence_fd;
-            goto out_unlock;
+            return ret;
         }
     }
 
-    submit = submit_create(dev, gpu, queue, args->nr_bos,
-            args->nr_cmds);
-    if (IS_ERR(submit)) {
-        ret = PTR_ERR(submit);
-        submit = NULL;
-        goto out_unlock;
-    }
+    submit = submit_create(dev, gpu, queue, args->nr_bos, args->nr_cmds);
+    if (IS_ERR(submit))
+        return PTR_ERR(submit);
+
+    ret = mutex_lock_interruptible(&queue->lock);
+    if (ret)
+        goto out_post_unlock;
 
     submit->pid = pid;
     submit->ident = submitid;
@@ -965,9 +961,9 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
     if (ret && (out_fence_fd >= 0))
         put_unused_fd(out_fence_fd);
     mutex_unlock(&queue->lock);
+out_post_unlock:
     if (submit)
         msm_gem_submit_put(submit);
-out_post_unlock:
     if (!IS_ERR_OR_NULL(post_deps)) {
         for (i = 0; i < args->nr_out_syncobjs; ++i) {
             kfree(post_deps[i].chain);
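The general shape of this reordering: do the potentially-blocking allocation
before taking the queue lock, so the allocation no longer needs __GFP_NORETRY
to avoid stalling other submitters while holding the lock. A minimal sketch of
the pattern, with hypothetical types/names and error handling trimmed:

    #include <linux/mutex.h>
    #include <linux/slab.h>

    struct my_queue  { struct mutex lock; };
    struct my_submit { int placeholder; };

    static int submit_ioctl_pattern(struct my_queue *queue, size_t sz)
    {
        struct my_submit *submit;
        int ret;

        /* Allocate first: may reclaim/sleep, but no lock is held yet. */
        submit = kzalloc(sz, GFP_KERNEL);
        if (!submit)
            return -ENOMEM;

        /* Only now serialize against other submits on this queue. */
        ret = mutex_lock_interruptible(&queue->lock);
        if (ret) {
            kfree(submit);
            return ret;
        }

        /* ... build and queue the submit ... */

        mutex_unlock(&queue->lock);
        return 0;
    }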
From patchwork Sat Jun 25 22:54:39 2022
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 12895488
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, David Airlie,
    Daniel Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH 04/15] drm/msm: Small submit cleanup
Date: Sat, 25 Jun 2022 15:54:39 -0700
Message-Id: <20220625225454.81039-5-robdclark@gmail.com>
In-Reply-To: <20220625225454.81039-1-robdclark@gmail.com>
References: <20220625225454.81039-1-robdclark@gmail.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org

From: Rob Clark

Move more initialization into submit_create().

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem_submit.c | 20 +++++++++-----------
 1 file changed, 9 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index b7c61a99d274..c7819781879c 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -26,6 +26,7 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev,
         struct msm_gpu_submitqueue *queue, uint32_t nr_bos,
         uint32_t nr_cmds)
 {
+    static atomic_t ident = ATOMIC_INIT(0);
     struct msm_gem_submit *submit;
     uint64_t sz;
     int ret;
@@ -52,9 +53,13 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev,
     submit->gpu = gpu;
     submit->cmd = (void *)&submit->bos[nr_bos];
     submit->queue = queue;
+    submit->pid = get_pid(task_pid(current));
     submit->ring = gpu->rb[queue->ring_nr];
     submit->fault_dumped = false;
 
+    /* Get a unique identifier for the submission for logging purposes */
+    submit->ident = atomic_inc_return(&ident) - 1;
+
     INIT_LIST_HEAD(&submit->node);
 
     return submit;
@@ -718,7 +723,6 @@ static void msm_process_post_deps(struct msm_submit_post_dep *post_deps,
 int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
         struct drm_file *file)
 {
-    static atomic_t ident = ATOMIC_INIT(0);
     struct msm_drm_private *priv = dev->dev_private;
     struct drm_msm_gem_submit *args = data;
     struct msm_file_private *ctx = file->driver_priv;
@@ -729,10 +733,9 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
     struct msm_submit_post_dep *post_deps = NULL;
     struct drm_syncobj **syncobjs_to_reset = NULL;
     int out_fence_fd = -1;
-    struct pid *pid = get_pid(task_pid(current));
     bool has_ww_ticket = false;
     unsigned i;
-    int ret, submitid;
+    int ret;
 
     if (!gpu)
         return -ENXIO;
@@ -764,12 +767,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
     if (!queue)
         return -ENOENT;
 
-    /* Get a unique identifier for the submission for logging purposes */
-    submitid = atomic_inc_return(&ident) - 1;
-
     ring = gpu->rb[queue->ring_nr];
-    trace_msm_gpu_submit(pid_nr(pid), ring->id, submitid,
-            args->nr_bos, args->nr_cmds);
 
     if (args->flags & MSM_SUBMIT_FENCE_FD_OUT) {
         out_fence_fd = get_unused_fd_flags(O_CLOEXEC);
@@ -783,13 +781,13 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
     if (IS_ERR(submit))
         return PTR_ERR(submit);
 
+    trace_msm_gpu_submit(pid_nr(submit->pid), ring->id, submit->ident,
+        args->nr_bos, args->nr_cmds);
+
     ret = mutex_lock_interruptible(&queue->lock);
     if (ret)
         goto out_post_unlock;
 
-    submit->pid = pid;
-    submit->ident = submitid;
-
     if (args->flags & MSM_SUBMIT_SUDO)
         submit->in_rb = true;
 
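The unique-ident part of this moves a function-local static counter into
submit_create(); the underlying pattern is just an atomic counter whose
storage is private to the allocating function. A small stand-alone sketch
(hypothetical name, not the driver's code):

    #include <linux/atomic.h>
    #include <linux/types.h>

    /* Returns a unique, monotonically increasing id (0, 1, 2, ...). */
    static u32 next_submit_ident(void)
    {
        static atomic_t ident = ATOMIC_INIT(0);

        return atomic_inc_return(&ident) - 1;
    }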
From patchwork Sat Jun 25 22:54:40 2022
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 12895489
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, David Airlie,
    Daniel Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH 05/15] drm/msm: Split out idr_lock
Date: Sat, 25 Jun 2022 15:54:40 -0700
Message-Id: <20220625225454.81039-6-robdclark@gmail.com>
In-Reply-To: <20220625225454.81039-1-robdclark@gmail.com>
References: <20220625225454.81039-1-robdclark@gmail.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org

From: Rob Clark

Otherwise if we hit reclaim pinning objects in the submit path, we'll
be blocking retire_worker trying to free a submit.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_drv.c         |  4 ++--
 drivers/gpu/drm/msm/msm_gem_submit.c  | 10 ++++++++--
 drivers/gpu/drm/msm/msm_gpu.h         |  4 +++-
 drivers/gpu/drm/msm/msm_submitqueue.c |  1 +
 4 files changed, 14 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index acc940d32ab4..ace91ead2caf 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -838,13 +838,13 @@ static int wait_fence(struct msm_gpu_submitqueue *queue, uint32_t fence_id,
      * retired, so if the fence is not found it means there is nothing
      * to wait for
      */
-    ret = mutex_lock_interruptible(&queue->lock);
+    ret = mutex_lock_interruptible(&queue->idr_lock);
     if (ret)
         return ret;
     fence = idr_find(&queue->fence_idr, fence_id);
     if (fence)
         fence = dma_fence_get_rcu(fence);
-    mutex_unlock(&queue->lock);
+    mutex_unlock(&queue->idr_lock);
 
     if (!fence)
         return 0;
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index c7819781879c..16c662808522 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -72,9 +72,9 @@ void __msm_gem_submit_destroy(struct kref *kref)
     unsigned i;
 
     if (submit->fence_id) {
-        mutex_lock(&submit->queue->lock);
+        mutex_lock(&submit->queue->idr_lock);
         idr_remove(&submit->queue->fence_idr, submit->fence_id);
-        mutex_unlock(&submit->queue->lock);
+        mutex_unlock(&submit->queue->idr_lock);
     }
 
     dma_fence_put(submit->user_fence);
@@ -881,6 +881,8 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 
     submit->nr_cmds = i;
 
+    mutex_lock(&queue->idr_lock);
+
     /*
      * If using userspace provided seqno fence, validate that the id
      * is available before arming sched job.  Since access to fence_idr
@@ -889,6 +891,7 @@
      */
     if ((args->flags & MSM_SUBMIT_FENCE_SN_IN) &&
             idr_find(&queue->fence_idr, args->fence)) {
+        mutex_unlock(&queue->idr_lock);
         ret = -EINVAL;
         goto out;
     }
@@ -921,6 +924,9 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
                 submit->user_fence, 1,
                 INT_MAX, GFP_KERNEL);
     }
+
+    mutex_unlock(&queue->idr_lock);
+
     if (submit->fence_id < 0) {
         ret = submit->fence_id;
         submit->fence_id = 0;
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 4911943ba53b..4ca56d96344a 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -457,7 +457,8 @@ static inline int msm_gpu_convert_priority(struct msm_gpu *gpu, int prio,
  * @node:      node in the context's list of submitqueues
  * @fence_idr: maps fence-id to dma_fence for userspace visible fence
  *             seqno, protected by submitqueue lock
- * @lock:      submitqueue lock
+ * @idr_lock:  for serializing access to fence_idr
+ * @lock:      submitqueue lock for serializing submits on a queue
  * @ref:       reference count
  * @entity:    the submit job-queue
  */
@@ -470,6 +471,7 @@ struct msm_gpu_submitqueue {
     struct msm_file_private *ctx;
     struct list_head node;
     struct idr fence_idr;
+    struct mutex idr_lock;
     struct mutex lock;
     struct kref ref;
     struct drm_sched_entity *entity;
diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c
index f486a3cd4e55..c6929e205b51 100644
--- a/drivers/gpu/drm/msm/msm_submitqueue.c
+++ b/drivers/gpu/drm/msm/msm_submitqueue.c
@@ -200,6 +200,7 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx,
     *id = queue->id;
 
     idr_init(&queue->fence_idr);
+    mutex_init(&queue->idr_lock);
     mutex_init(&queue->lock);
 
     list_add_tail(&queue->node, &ctx->submitqueues);
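The effect of the split is that fence-id lookup and removal only take the
narrow idr_lock, so they can proceed while a submit holds queue->lock (which
may block in reclaim while pinning buffers). A rough sketch of the resulting
locking shape, using a hypothetical reduced queue type that mirrors the fields
added here:

    #include <linux/dma-fence.h>
    #include <linux/idr.h>
    #include <linux/mutex.h>
    #include <linux/types.h>

    struct q {
        struct mutex lock;      /* serializes submits; may block in reclaim */
        struct mutex idr_lock;  /* only protects fence_idr */
        struct idr fence_idr;
    };

    static struct dma_fence *q_find_fence(struct q *queue, u32 fence_id)
    {
        struct dma_fence *fence;

        /* Narrow lock: never held across pinning/reclaim. */
        mutex_lock(&queue->idr_lock);
        fence = idr_find(&queue->fence_idr, fence_id);
        if (fence)
            fence = dma_fence_get_rcu(fence);
        mutex_unlock(&queue->idr_lock);

        return fence;
    }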
From patchwork Sat Jun 25 22:54:41 2022
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 12895490
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, David Airlie,
    Daniel Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH 06/15] drm/msm/gem: Check for active in shrinker path
Date: Sat, 25 Jun 2022 15:54:41 -0700
Message-Id: <20220625225454.81039-7-robdclark@gmail.com>
In-Reply-To: <20220625225454.81039-1-robdclark@gmail.com>
References: <20220625225454.81039-1-robdclark@gmail.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org

From: Rob Clark

Currently in our shrinker path we shouldn't be encountering anything
that is active, but this will change in subsequent patches.  So check
if there are unsignaled fences.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.c          | 10 ++++++++++
 drivers/gpu/drm/msm/msm_gem.h          |  1 +
 drivers/gpu/drm/msm/msm_gem_shrinker.c |  6 ++++++
 3 files changed, 17 insertions(+)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 8ddbd2e001d4..b55d252aef17 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -870,6 +870,16 @@ static void update_inactive(struct msm_gem_object *msm_obj)
     mutex_unlock(&priv->mm_lock);
 }
 
+bool msm_gem_active(struct drm_gem_object *obj)
+{
+    GEM_WARN_ON(!msm_gem_is_locked(obj));
+
+    if (to_msm_bo(obj)->pin_count)
+        return true;
+
+    return !dma_resv_test_signaled(obj->resv, dma_resv_usage_rw(true));
+}
+
 int msm_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, ktime_t *timeout)
 {
     bool write = !!(op & MSM_PREP_WRITE);
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 432032ad4aed..0ab0dc4f8c25 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -173,6 +173,7 @@ void msm_gem_put_vaddr(struct drm_gem_object *obj);
 int msm_gem_madvise(struct drm_gem_object *obj, unsigned madv);
 void msm_gem_active_get(struct drm_gem_object *obj, struct msm_gpu *gpu);
 void msm_gem_active_put(struct drm_gem_object *obj);
+bool msm_gem_active(struct drm_gem_object *obj);
 int msm_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, ktime_t *timeout);
 int msm_gem_cpu_fini(struct drm_gem_object *obj);
 int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file,
diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c
index 6e39d959b9f0..ea8ed74982c1 100644
--- a/drivers/gpu/drm/msm/msm_gem_shrinker.c
+++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c
@@ -43,6 +43,9 @@ purge(struct msm_gem_object *msm_obj)
     if (!is_purgeable(msm_obj))
         return false;
 
+    if (msm_gem_active(&msm_obj->base))
+        return false;
+
     /*
      * This will move the obj out of still_in_list to
      * the purged list
@@ -58,6 +61,9 @@ evict(struct msm_gem_object *msm_obj)
     if (is_unevictable(msm_obj))
         return false;
 
+    if (msm_gem_active(&msm_obj->base))
+        return false;
+
     msm_gem_evict(&msm_obj->base);
 
     return true;
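The new msm_gem_active() helper treats an object as busy if it is either
pinned or still has unsignaled read/write fences on its reservation object.
A minimal driver-neutral sketch of just the fence check (hypothetical wrapper
name; assumes a kernel with the dma_resv_usage API, as this series does):

    #include <linux/dma-resv.h>
    #include <drm/drm_gem.h>

    /* True if any read or write fence on the GEM object is still unsignaled. */
    static bool gem_has_unsignaled_fences(struct drm_gem_object *obj)
    {
        /* dma_resv_usage_rw(true) covers both readers and writers. */
        return !dma_resv_test_signaled(obj->resv, dma_resv_usage_rw(true));
    }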
From patchwork Sat Jun 25 22:54:42 2022
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 12895491
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, David Airlie,
    Daniel Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH 07/15] drm/msm/gem: Rename update_inactive
Date: Sat, 25 Jun 2022 15:54:42 -0700
Message-Id: <20220625225454.81039-8-robdclark@gmail.com>
In-Reply-To: <20220625225454.81039-1-robdclark@gmail.com>
References: <20220625225454.81039-1-robdclark@gmail.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org

From: Rob Clark

Really what this is doing is updating various LRU lists.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.c | 25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index b55d252aef17..97467364dc0a 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -19,7 +19,7 @@
 #include "msm_gpu.h"
 #include "msm_mmu.h"
 
-static void update_inactive(struct msm_gem_object *msm_obj);
+static void update_lru(struct drm_gem_object *obj);
 
 static dma_addr_t physaddr(struct drm_gem_object *obj)
 {
@@ -132,7 +132,7 @@ static struct page **get_pages(struct drm_gem_object *obj)
         if (msm_obj->flags & MSM_BO_WC)
             sync_for_device(msm_obj);
 
-        update_inactive(msm_obj);
+        update_lru(obj);
     }
 
     return msm_obj->pages;
@@ -193,7 +193,7 @@ struct page **msm_gem_get_pages(struct drm_gem_object *obj)
 
     if (!IS_ERR(p)) {
         msm_obj->pin_count++;
-        update_inactive(msm_obj);
+        update_lru(obj);
     }
 
     msm_gem_unlock(obj);
@@ -207,7 +207,7 @@ void msm_gem_put_pages(struct drm_gem_object *obj)
     msm_gem_lock(obj);
     msm_obj->pin_count--;
     GEM_WARN_ON(msm_obj->pin_count < 0);
-    update_inactive(msm_obj);
+    update_lru(obj);
     msm_gem_unlock(obj);
 }
 
@@ -449,7 +449,7 @@ void msm_gem_unpin_locked(struct drm_gem_object *obj)
     msm_obj->pin_count--;
     GEM_WARN_ON(msm_obj->pin_count < 0);
 
-    update_inactive(msm_obj);
+    update_lru(obj);
 }
 
 struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj,
@@ -658,7 +658,7 @@ static void *get_vaddr(struct drm_gem_object *obj, unsigned madv)
             goto fail;
         }
 
-        update_inactive(msm_obj);
+        update_lru(obj);
     }
 
     return msm_obj->vaddr;
@@ -730,7 +730,7 @@ int msm_gem_madvise(struct drm_gem_object *obj, unsigned madv)
      * between inactive lists
      */
     if (msm_obj->active_count == 0)
-        update_inactive(msm_obj);
+        update_lru(obj);
 
     msm_gem_unlock(obj);
 
@@ -757,7 +757,7 @@ void msm_gem_purge(struct drm_gem_object *obj)
     put_iova_vmas(obj);
 
     msm_obj->madv = __MSM_MADV_PURGED;
-    update_inactive(msm_obj);
+    update_lru(obj);
 
     drm_gem_free_mmap_offset(obj);
 
@@ -792,7 +792,7 @@ void msm_gem_evict(struct drm_gem_object *obj)
 
     put_pages(obj);
 
-    update_inactive(msm_obj);
+    update_lru(obj);
 }
 
 void msm_gem_vunmap(struct drm_gem_object *obj)
@@ -835,13 +835,14 @@ void msm_gem_active_put(struct drm_gem_object *obj)
     GEM_WARN_ON(!msm_gem_is_locked(obj));
 
     if (--msm_obj->active_count == 0) {
-        update_inactive(msm_obj);
+        update_lru(obj);
     }
 }
 
-static void update_inactive(struct msm_gem_object *msm_obj)
+static void update_lru(struct drm_gem_object *obj)
 {
-    struct msm_drm_private *priv = msm_obj->base.dev->dev_private;
+    struct msm_drm_private *priv = obj->dev->dev_private;
+    struct msm_gem_object *msm_obj = to_msm_bo(obj);
 
     GEM_WARN_ON(!msm_gem_is_locked(&msm_obj->base));
 
From patchwork Sat Jun 25 22:54:43 2022
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 12895492
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, David Airlie,
    Daniel Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH 08/15] drm/msm/gem: Rename to pin/unpin_pages
Date: Sat, 25 Jun 2022 15:54:43 -0700
Message-Id: <20220625225454.81039-9-robdclark@gmail.com>
In-Reply-To: <20220625225454.81039-1-robdclark@gmail.com>
References: <20220625225454.81039-1-robdclark@gmail.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org

From: Rob Clark

Since that is what these functions actually do: they are getting
*pinned* pages (as opposed to cases where we need pages, but don't
need them pinned, like CPU mappings).

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.c       | 18 +++++++++++++-----
 drivers/gpu/drm/msm/msm_gem.h       |  4 ++--
 drivers/gpu/drm/msm/msm_gem_prime.c |  4 ++--
 3 files changed, 17 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 97467364dc0a..3da64c7f65a2 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -177,30 +177,38 @@ static void put_pages(struct drm_gem_object *obj)
     }
 }
 
-struct page **msm_gem_get_pages(struct drm_gem_object *obj)
+static struct page **msm_gem_pin_pages_locked(struct drm_gem_object *obj)
 {
     struct msm_gem_object *msm_obj = to_msm_bo(obj);
     struct page **p;
 
-    msm_gem_lock(obj);
+    GEM_WARN_ON(!msm_gem_is_locked(obj));
 
     if (GEM_WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED)) {
-        msm_gem_unlock(obj);
         return ERR_PTR(-EBUSY);
     }
 
     p = get_pages(obj);
-
     if (!IS_ERR(p)) {
         msm_obj->pin_count++;
         update_lru(obj);
     }
 
+    return p;
+}
+
+struct page **msm_gem_pin_pages(struct drm_gem_object *obj)
+{
+    struct page **p;
+
+    msm_gem_lock(obj);
+    p = msm_gem_pin_pages_locked(obj);
     msm_gem_unlock(obj);
+
     return p;
 }
 
-void msm_gem_put_pages(struct drm_gem_object *obj)
+void msm_gem_unpin_pages(struct drm_gem_object *obj)
 {
     struct msm_gem_object *msm_obj = to_msm_bo(obj);
 
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 0ab0dc4f8c25..6fe521ccda45 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -159,8 +159,8 @@ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj,
         struct msm_gem_address_space *aspace, uint64_t *iova);
 void msm_gem_unpin_iova(struct drm_gem_object *obj,
         struct msm_gem_address_space *aspace);
-struct page **msm_gem_get_pages(struct drm_gem_object *obj);
-void msm_gem_put_pages(struct drm_gem_object *obj);
+struct page **msm_gem_pin_pages(struct drm_gem_object *obj);
+void msm_gem_unpin_pages(struct drm_gem_object *obj);
 int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev,
         struct drm_mode_create_dumb *args);
 int msm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev,
diff --git a/drivers/gpu/drm/msm/msm_gem_prime.c b/drivers/gpu/drm/msm/msm_gem_prime.c
index dcc8a573bc76..c1d91863df05 100644
--- a/drivers/gpu/drm/msm/msm_gem_prime.c
+++ b/drivers/gpu/drm/msm/msm_gem_prime.c
@@ -63,12 +63,12 @@ struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev,
 int msm_gem_prime_pin(struct drm_gem_object *obj)
 {
     if (!obj->import_attach)
-        msm_gem_get_pages(obj);
+        msm_gem_pin_pages(obj);
 
     return 0;
 }
 
 void msm_gem_prime_unpin(struct drm_gem_object *obj)
 {
     if (!obj->import_attach)
-        msm_gem_put_pages(obj);
+        msm_gem_unpin_pages(obj);
 }
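Besides the rename, the refactor follows the common "_locked" wrapper pattern:
the real work assumes the object lock is already held, and an unlocked wrapper
just brackets it with lock/unlock. Schematically, with hypothetical generic
names rather than the driver's actual helpers:

    #include <linux/mutex.h>

    struct buf { struct mutex lock; struct page **pages; };

    /* Does the pinning; caller must already hold buf->lock. */
    static struct page **pin_pages_locked(struct buf *b);

    /* Convenience wrapper for callers that do not hold the lock. */
    static struct page **pin_pages(struct buf *b)
    {
        struct page **p;

        mutex_lock(&b->lock);
        p = pin_pages_locked(b);
        mutex_unlock(&b->lock);

        return p;
    }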
From patchwork Sat Jun 25 22:54:44 2022
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 12895493
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, David Airlie,
    Daniel Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH 09/15] drm/msm/gem: Consolidate pin/unpin paths
Date: Sat, 25 Jun 2022 15:54:44 -0700
Message-Id: <20220625225454.81039-10-robdclark@gmail.com>
In-Reply-To: <20220625225454.81039-1-robdclark@gmail.com>
References: <20220625225454.81039-1-robdclark@gmail.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org

From: Rob Clark

Avoid having multiple spots where we increment/decrement pin_count
(and the associated LRU updating).

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.c | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 3da64c7f65a2..407b18a24dc4 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -190,7 +190,7 @@ static struct page **msm_gem_pin_pages_locked(struct drm_gem_object *obj)
 
     p = get_pages(obj);
     if (!IS_ERR(p)) {
-        msm_obj->pin_count++;
+        to_msm_bo(obj)->pin_count++;
         update_lru(obj);
     }
 
@@ -213,9 +213,7 @@ void msm_gem_unpin_pages(struct drm_gem_object *obj)
     struct msm_gem_object *msm_obj = to_msm_bo(obj);
 
     msm_gem_lock(obj);
-    msm_obj->pin_count--;
-    GEM_WARN_ON(msm_obj->pin_count < 0);
-    update_lru(obj);
+    msm_gem_unpin_locked(obj);
     msm_gem_unlock(obj);
 }
 
@@ -436,14 +434,13 @@ int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma)
     if (GEM_WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED))
         return -EBUSY;
 
-    pages = get_pages(obj);
+    pages = msm_gem_pin_pages_locked(obj);
     if (IS_ERR(pages))
         return PTR_ERR(pages);
 
     ret = msm_gem_map_vma(vma->aspace, vma, prot, msm_obj->sgt, obj->size);
-
-    if (!ret)
-        msm_obj->pin_count++;
+    if (ret)
+        msm_gem_unpin_locked(obj);
 
     return ret;
 }
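The consolidation means every successful pin goes through one helper that
bumps pin_count and updates the LRU, and every failure path undoes it through
the matching unpin helper, so the two can never drift apart. The shape of the
error handling, in a hypothetical reduced form (all types and helpers below
are stand-ins, not the driver's API):

    #include <linux/err.h>

    struct obj;
    struct vma;

    extern struct page **pin_pages_locked(struct obj *o); /* pin_count++ here */
    extern void unpin_locked(struct obj *o);              /* pin_count-- here */
    extern int map_vma(struct vma *v, struct page **pages);

    static int pin_vma_locked(struct obj *o, struct vma *v)
    {
        struct page **pages;
        int ret;

        pages = pin_pages_locked(o);
        if (IS_ERR(pages))
            return PTR_ERR(pages);

        ret = map_vma(v, pages);
        if (ret)
            unpin_locked(o);   /* roll back through the single unpin path */

        return ret;
    }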
From patchwork Sat Jun 25 22:54:45 2022
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 12895494
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, David Airlie,
    Daniel Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH 10/15] drm/msm/gem: Remove active refcnt
Date: Sat, 25 Jun 2022 15:54:45 -0700
Message-Id: <20220625225454.81039-11-robdclark@gmail.com>
In-Reply-To: <20220625225454.81039-1-robdclark@gmail.com>
References: <20220625225454.81039-1-robdclark@gmail.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org

From: Rob Clark

At this point the pinned refcnt is sufficient, and the shrinker is
already prepared to encounter objects which are still active according
to fences attached to the resv.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.c        | 45 ++--------------------------
 drivers/gpu/drm/msm/msm_gem.h        | 14 ++--------
 drivers/gpu/drm/msm/msm_gem_submit.c | 22 ++-------------
 3 files changed, 8 insertions(+), 73 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 407b18a24dc4..209438744bab 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -734,8 +734,7 @@ int msm_gem_madvise(struct drm_gem_object *obj, unsigned madv)
     /* If the obj is inactive, we might need to move it
      * between inactive lists
      */
-    if (msm_obj->active_count == 0)
-        update_lru(obj);
+    update_lru(obj);
 
     msm_gem_unlock(obj);
 
@@ -788,7 +787,6 @@ void msm_gem_evict(struct drm_gem_object *obj)
     GEM_WARN_ON(!msm_gem_is_locked(obj));
     GEM_WARN_ON(is_unevictable(msm_obj));
     GEM_WARN_ON(!msm_obj->evictable);
-    GEM_WARN_ON(msm_obj->active_count);
 
     /* Get rid of any iommu mapping(s): */
     put_iova_spaces(obj, false);
@@ -813,37 +811,6 @@ void msm_gem_vunmap(struct drm_gem_object *obj)
     msm_obj->vaddr = NULL;
 }
 
-void msm_gem_active_get(struct drm_gem_object *obj, struct msm_gpu *gpu)
-{
-    struct msm_gem_object *msm_obj = to_msm_bo(obj);
-    struct msm_drm_private *priv = obj->dev->dev_private;
-
-    might_sleep();
-    GEM_WARN_ON(!msm_gem_is_locked(obj));
-    GEM_WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED);
-    GEM_WARN_ON(msm_obj->dontneed);
-
-    if (msm_obj->active_count++ == 0) {
-        mutex_lock(&priv->mm_lock);
-        if (msm_obj->evictable)
-            mark_unevictable(msm_obj);
-        list_move_tail(&msm_obj->mm_list, &gpu->active_list);
-        mutex_unlock(&priv->mm_lock);
-    }
-}
-
-void msm_gem_active_put(struct drm_gem_object *obj)
-{
-    struct msm_gem_object *msm_obj = to_msm_bo(obj);
-
-    might_sleep();
-    GEM_WARN_ON(!msm_gem_is_locked(obj));
-
-    if (--msm_obj->active_count == 0) {
-        update_lru(obj);
-    }
-}
-
 static void update_lru(struct drm_gem_object *obj)
 {
     struct msm_drm_private *priv = obj->dev->dev_private;
@@ -851,9 +818,6 @@ static void update_lru(struct drm_gem_object *obj)
 
     GEM_WARN_ON(!msm_gem_is_locked(&msm_obj->base));
 
-    if (msm_obj->active_count != 0)
-        return;
-
     mutex_lock(&priv->mm_lock);
 
     if (msm_obj->dontneed)
@@ -926,7 +890,7 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m,
     stats->all.count++;
     stats->all.size += obj->size;
 
-    if (is_active(msm_obj)) {
+    if (msm_gem_active(obj)) {
         stats->active.count++;
         stats->active.size += obj->size;
     }
@@ -954,7 +918,7 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m,
     }
 
     seq_printf(m, "%08x: %c %2d (%2d) %08llx %p",
-            msm_obj->flags, is_active(msm_obj) ? 'A' : 'I',
+            msm_obj->flags, msm_gem_active(obj) ? 'A' : 'I',
             obj->name, kref_read(&obj->refcount),
             off, msm_obj->vaddr);
@@ -1037,9 +1001,6 @@ static void msm_gem_free_object(struct drm_gem_object *obj)
     list_del(&msm_obj->mm_list);
     mutex_unlock(&priv->mm_lock);
 
-    /* object should not be on active list: */
-    GEM_WARN_ON(is_active(msm_obj));
-
     put_iova_spaces(obj, true);
 
     if (obj->import_attach) {
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 6fe521ccda45..420ba49bf21a 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -138,7 +138,6 @@ struct msm_gem_object {
 
     char name[32]; /* Identifier to print for the debugfs files */
 
-    int active_count;
     int pin_count;
 };
 #define to_msm_bo(x) container_of(x, struct msm_gem_object, base)
@@ -171,8 +170,6 @@ void *msm_gem_get_vaddr_active(struct drm_gem_object *obj);
 void msm_gem_put_vaddr_locked(struct drm_gem_object *obj);
 void msm_gem_put_vaddr(struct drm_gem_object *obj);
 int msm_gem_madvise(struct drm_gem_object *obj, unsigned madv);
-void msm_gem_active_get(struct drm_gem_object *obj, struct msm_gpu *gpu);
-void msm_gem_active_put(struct drm_gem_object *obj);
 bool msm_gem_active(struct drm_gem_object *obj);
 int msm_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, ktime_t *timeout);
 int msm_gem_cpu_fini(struct drm_gem_object *obj);
@@ -245,12 +242,6 @@ msm_gem_is_locked(struct drm_gem_object *obj)
     return dma_resv_is_locked(obj->resv) || (kref_read(&obj->refcount) == 0);
 }
 
-static inline bool is_active(struct msm_gem_object *msm_obj)
-{
-    GEM_WARN_ON(!msm_gem_is_locked(&msm_obj->base));
-    return msm_obj->active_count;
-}
-
 /* imported/exported objects are not purgeable: */
 static inline bool is_unpurgeable(struct msm_gem_object *msm_obj)
 {
@@ -391,9 +382,8 @@ struct msm_gem_submit {
 /* make sure these don't conflict w/ MSM_SUBMIT_BO_x */
 #define BO_VALID       0x8000  /* is current addr in cmdstream correct/valid? */
 #define BO_LOCKED      0x4000  /* obj lock is held */
-#define BO_ACTIVE      0x2000  /* active refcnt is held */
-#define BO_OBJ_PINNED  0x1000  /* obj (pages) is pinned and on active list */
-#define BO_VMA_PINNED  0x0800  /* vma (virtual address) is pinned */
+#define BO_OBJ_PINNED  0x2000  /* obj (pages) is pinned and on active list */
+#define BO_VMA_PINNED  0x1000  /* vma (virtual address) is pinned */
     uint32_t flags;
     union {
         struct msm_gem_object *obj;
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index 16c662808522..adf358fb8e9d 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -243,17 +243,13 @@ static void submit_cleanup_bo(struct msm_gem_submit *submit, int i,
     if (flags & BO_OBJ_PINNED)
         msm_gem_unpin_locked(obj);
 
-    if (flags & BO_ACTIVE)
-        msm_gem_active_put(obj);
-
     if (flags & BO_LOCKED)
         dma_resv_unlock(obj->resv);
 }
 
 static void submit_unlock_unpin_bo(struct msm_gem_submit *submit, int i)
 {
-    unsigned cleanup_flags = BO_VMA_PINNED | BO_OBJ_PINNED |
-                             BO_ACTIVE | BO_LOCKED;
+    unsigned cleanup_flags = BO_VMA_PINNED | BO_OBJ_PINNED | BO_LOCKED;
     submit_cleanup_bo(submit, i, cleanup_flags);
 
     if (!(submit->bos[i].flags & BO_VALID))
@@ -358,18 +354,6 @@ static int submit_pin_objects(struct msm_gem_submit *submit)
 
     submit->valid = true;
 
-    /*
-     * Increment active_count first, so if under memory pressure, we
-     * don't inadvertently evict a bo needed by the submit in order
-     * to pin an earlier bo in the same submit.
-     */
-    for (i = 0; i < submit->nr_bos; i++) {
-        struct drm_gem_object *obj = &submit->bos[i].obj->base;
-
-        msm_gem_active_get(obj, submit->gpu);
-        submit->bos[i].flags |= BO_ACTIVE;
-    }
-
     for (i = 0; i < submit->nr_bos; i++) {
         struct drm_gem_object *obj = &submit->bos[i].obj->base;
         struct msm_gem_vma *vma;
@@ -521,7 +505,7 @@ static void submit_cleanup(struct msm_gem_submit *submit, bool error)
     unsigned i;
 
     if (error)
-        cleanup_flags |= BO_VMA_PINNED | BO_OBJ_PINNED | BO_ACTIVE;
+        cleanup_flags |= BO_VMA_PINNED | BO_OBJ_PINNED;
 
     for (i = 0; i < submit->nr_bos; i++) {
         struct msm_gem_object *msm_obj = submit->bos[i].obj;
@@ -540,7 +524,7 @@ void msm_submit_retire(struct msm_gem_submit *submit)
 
         msm_gem_lock(obj);
         /* Note, VMA already fence-unpinned before submit: */
-        submit_cleanup_bo(submit, i, BO_OBJ_PINNED | BO_ACTIVE);
+        submit_cleanup_bo(submit, i, BO_OBJ_PINNED);
         msm_gem_unlock(obj);
         drm_gem_object_put(obj);
     }
- */ - for (i = 0; i < submit->nr_bos; i++) { - struct drm_gem_object *obj = &submit->bos[i].obj->base; - - msm_gem_active_get(obj, submit->gpu); - submit->bos[i].flags |= BO_ACTIVE; - } - for (i = 0; i < submit->nr_bos; i++) { struct drm_gem_object *obj = &submit->bos[i].obj->base; struct msm_gem_vma *vma; @@ -521,7 +505,7 @@ static void submit_cleanup(struct msm_gem_submit *submit, bool error) unsigned i; if (error) - cleanup_flags |= BO_VMA_PINNED | BO_OBJ_PINNED | BO_ACTIVE; + cleanup_flags |= BO_VMA_PINNED | BO_OBJ_PINNED; for (i = 0; i < submit->nr_bos; i++) { struct msm_gem_object *msm_obj = submit->bos[i].obj; @@ -540,7 +524,7 @@ void msm_submit_retire(struct msm_gem_submit *submit) msm_gem_lock(obj); /* Note, VMA already fence-unpinned before submit: */ - submit_cleanup_bo(submit, i, BO_OBJ_PINNED | BO_ACTIVE); + submit_cleanup_bo(submit, i, BO_OBJ_PINNED); msm_gem_unlock(obj); drm_gem_object_put(obj); } From patchwork Sat Jun 25 22:54:46 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 12895495 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 420D3C43334 for ; Sat, 25 Jun 2022 22:55:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233659AbiFYWzn (ORCPT ); Sat, 25 Jun 2022 18:55:43 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60692 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233664AbiFYWz2 (ORCPT ); Sat, 25 Jun 2022 18:55:28 -0400 Received: from mail-pl1-x62a.google.com (mail-pl1-x62a.google.com [IPv6:2607:f8b0:4864:20::62a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A6D4C13F44; Sat, 25 Jun 2022 15:55:27 -0700 (PDT) Received: by mail-pl1-x62a.google.com with SMTP id m14so5112989plg.5; Sat, 25 Jun 2022 15:55:27 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=AeMYgqPNjsyamzteMz4JJUbia+3HcpDgXI4+3iGYjqA=; b=XWwFhgsw0jy62ylwdGasi1YuRV0M1drEnIN3jNSmD9WAonjLsfbEA7hU4DteE1FAeW Ie+6qGqPtOARntKNsgRJstxzfWVM5MQyk4m8nh6cpAVhH2w7H2xiE7T/pzZpxCkUYcwu HEpM19RcB2tvIpKM1F8yvJqbid6mn8GN+sDQw/8i43aIqe28cYZzM8dSEvOYylmhX39/ t++myIfjITFWjSFJnrXu9VlSpUSHJ6JAYcpCxCSmLwwacVXTR4WUtVQF1ach9Fszoz02 tDq2kecCKTSUmI7ttzeMQoNIHRGkig8MqjZWHv6voqVmloRrqV3tyQ2VEHIMFDEfJ6+b ck1A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=AeMYgqPNjsyamzteMz4JJUbia+3HcpDgXI4+3iGYjqA=; b=33KnYIMh2Kq+U2XvPSiWUh/4l2HnJXHnDwmqtN3y2sttgvHWKSByHhJ4HM+6T38iZR kgZD2cX0P8lXKeMe+IWmZ1Ane2ib4KpWyWQ+QX/ULQ3uWz1LtUakv1UUFLg3vNlSiVCT SQ3FtXmAJCBUwX5pCp1H4m/BqpT+ktTw5vu8bldLqcC46EneHmGt+qL8YhW6silUecEl ROCDBIYUzrWxrlIpDLg5B+RSOPmcdy//2Prvfm8GqQlIZZlbvnwhp3VEdRD8Erp1riyd SU+iFY8+SO0u8fekAdonTQBEDZWqYL417BrnreduVcziyfABqOIezUewGPOzKeb2yRcq yfVg== X-Gm-Message-State: AJIora8sxrwMiRe5TnYec8gm1etnJqwr1bETO7qfg3kiXz3AP211ov1b bUJ1XQKVadYBjyzsBUmj0fE= X-Google-Smtp-Source: AGRyM1uvN7jJWa72gLy8ORBxW7G7eO/yFia+/uzSAqKPQMxlZ0yrLk+uyqWxPtfpepGB0y9z20z9ug== X-Received: by 2002:a17:902:d2d1:b0:16a:1dd9:4d3d with SMTP id 
n17-20020a170902d2d100b0016a1dd94d3dmr6372390plc.18.1656197727296; Sat, 25 Jun 2022 15:55:27 -0700 (PDT) Received: from localhost ([2601:1c0:5200:a6:307:a401:7b76:c6e5]) by smtp.gmail.com with ESMTPSA id 9-20020a170902c20900b0015e8d4eb1dfsm4205062pll.41.2022.06.25.15.55.25 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sat, 25 Jun 2022 15:55:25 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark , Daniel Vetter , Thomas Zimmermann , Dmitry Osipenko , Maarten Lankhorst , Maxime Ripard , David Airlie , linux-kernel@vger.kernel.org (open list) Subject: [PATCH 11/15] drm/gem: Add LRU/shrinker helper Date: Sat, 25 Jun 2022 15:54:46 -0700 Message-Id: <20220625225454.81039-12-robdclark@gmail.com> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220625225454.81039-1-robdclark@gmail.com> References: <20220625225454.81039-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Add a simple LRU helper to assist with driver's shrinker implementation. It handles tracking the number of backing pages associated with a given LRU, and provides a helper to implement shrinker_scan. A driver can use multiple LRU instances to track objects in various states, for example a dontneed LRU for purgeable objects, a willneed LRU for evictable objects, and an unpinned LRU for objects without backing pages. All LRUs that the object can be moved between must share a single lock. Cc: Daniel Vetter Cc: Thomas Zimmermann Cc: Dmitry Osipenko Signed-off-by: Rob Clark --- drivers/gpu/drm/drm_gem.c | 183 ++++++++++++++++++++++++++++++++++++++ include/drm/drm_gem.h | 56 ++++++++++++ 2 files changed, 239 insertions(+) diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c index eb0c2d041f13..684db28cc71c 100644 --- a/drivers/gpu/drm/drm_gem.c +++ b/drivers/gpu/drm/drm_gem.c @@ -165,6 +165,7 @@ void drm_gem_private_object_init(struct drm_device *dev, obj->resv = &obj->_resv; drm_vma_node_reset(&obj->vma_node); + INIT_LIST_HEAD(&obj->lru_node); } EXPORT_SYMBOL(drm_gem_private_object_init); @@ -951,6 +952,7 @@ drm_gem_object_release(struct drm_gem_object *obj) dma_resv_fini(&obj->_resv); drm_gem_free_mmap_offset(obj); + drm_gem_lru_remove(obj); } EXPORT_SYMBOL(drm_gem_object_release); @@ -1274,3 +1276,184 @@ drm_gem_unlock_reservations(struct drm_gem_object **objs, int count, ww_acquire_fini(acquire_ctx); } EXPORT_SYMBOL(drm_gem_unlock_reservations); + +/** + * drm_gem_lru_init - initialize a LRU + * + * @lru: The LRU to initialize + * @lock: The lock protecting the LRU + */ +void +drm_gem_lru_init(struct drm_gem_lru *lru, struct mutex *lock) +{ + lru->lock = lock; + lru->count = 0; + INIT_LIST_HEAD(&lru->list); +} +EXPORT_SYMBOL(drm_gem_lru_init); + +static void +lru_remove(struct drm_gem_object *obj) +{ + obj->lru->count -= obj->size >> PAGE_SHIFT; + WARN_ON(obj->lru->count < 0); + list_del(&obj->lru_node); + obj->lru = NULL; +} + +/** + * drm_gem_lru_remove - remove object from whatever LRU it is in + * + * If the object is currently in any LRU, remove it. 
+ * + * @obj: The GEM object to remove from current LRU + */ +void +drm_gem_lru_remove(struct drm_gem_object *obj) +{ + struct drm_gem_lru *lru = obj->lru; + + if (!lru) + return; + + mutex_lock(lru->lock); + lru_remove(obj); + mutex_unlock(lru->lock); +} +EXPORT_SYMBOL(drm_gem_lru_remove); + +/** + * drm_gem_lru_move_tail - move the object to the tail of the LRU + * + * If the object is already in this LRU it will be moved to the + * tail. Otherwise it will be removed from whichever other LRU + * it is in (if any) and moved into this LRU. + * + * @lru: The LRU to move the object into. + * @obj: The GEM object to move into this LRU + */ +void +drm_gem_lru_move_tail(struct drm_gem_lru *lru, struct drm_gem_object *obj) +{ + mutex_lock(lru->lock); + drm_gem_lru_move_tail_locked(lru, obj); + mutex_unlock(lru->lock); +} +EXPORT_SYMBOL(drm_gem_lru_move_tail); + +/** + * drm_gem_lru_move_tail_locked - move the object to the tail of the LRU + * + * If the object is already in this LRU it will be moved to the + * tail. Otherwise it will be removed from whichever other LRU + * it is in (if any) and moved into this LRU. + * + * Call with LRU lock held. + * + * @lru: The LRU to move the object into. + * @obj: The GEM object to move into this LRU + */ +void +drm_gem_lru_move_tail_locked(struct drm_gem_lru *lru, struct drm_gem_object *obj) +{ + WARN_ON(!mutex_is_locked(lru->lock)); + + if (obj->lru) + lru_remove(obj); + + lru->count += obj->size >> PAGE_SHIFT; + list_add_tail(&obj->lru_node, &lru->list); + obj->lru = lru; +} +EXPORT_SYMBOL(drm_gem_lru_move_tail_locked); + +/** + * drm_gem_lru_scan - helper to implement shrinker.scan_objects + * + * If the shrink callback succeeds, it is expected that the driver + * move the object out of this LRU. + * + * If the LRU possibly contain active buffers, it is the responsibility + * of the shrink callback to check for this (ie. dma_resv_test_signaled()) + * or if necessary block until the buffer becomes idle. + * + * @lru: The LRU to scan + * @nr_to_scan: The number of pages to try to reclaim + * @shrink: Callback to try to shrink/reclaim the object. + */ +unsigned long +drm_gem_lru_scan(struct drm_gem_lru *lru, unsigned nr_to_scan, + bool (*shrink)(struct drm_gem_object *obj)) +{ + struct drm_gem_lru still_in_lru; + struct drm_gem_object *obj; + unsigned freed = 0; + + drm_gem_lru_init(&still_in_lru, lru->lock); + + mutex_lock(lru->lock); + + while (freed < nr_to_scan) { + obj = list_first_entry_or_null(&lru->list, typeof(*obj), lru_node); + + if (!obj) + break; + + drm_gem_lru_move_tail_locked(&still_in_lru, obj); + + /* + * If it's in the process of being freed, gem_object->free() + * may be blocked on lock waiting to remove it. So just + * skip it. + */ + if (!kref_get_unless_zero(&obj->refcount)) + continue; + + /* + * Now that we own a reference, we can drop the lock for the + * rest of the loop body, to reduce contention with other + * code paths that need the LRU lock + */ + mutex_unlock(lru->lock); + + /* + * Note that this still needs to be trylock, since we can + * hit shrinker in response to trying to get backing pages + * for this obj (ie. 
while it's lock is already held) + */ + if (!dma_resv_trylock(obj->resv)) + goto tail; + + if (shrink(obj)) { + freed += obj->size >> PAGE_SHIFT; + + /* + * If we succeeded in releasing the object's backing + * pages, we expect the driver to have moved the object + * out of this LRU + */ + WARN_ON(obj->lru == &still_in_lru); + WARN_ON(obj->lru == lru); + } + + dma_resv_unlock(obj->resv); + +tail: + drm_gem_object_put(obj); + mutex_lock(lru->lock); + } + + /* + * Move objects we've skipped over out of the temporary still_in_lru + * back into this LRU + */ + list_for_each_entry (obj, &still_in_lru.list, lru_node) + obj->lru = lru; + list_splice_tail(&still_in_lru.list, &lru->list); + lru->count += still_in_lru.count; + + mutex_unlock(lru->lock); + + return freed; +} +EXPORT_SYMBOL(drm_gem_lru_scan); diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h index 87cffc9efa85..f13a9080af37 100644 --- a/include/drm/drm_gem.h +++ b/include/drm/drm_gem.h @@ -174,6 +174,41 @@ struct drm_gem_object_funcs { const struct vm_operations_struct *vm_ops; }; +/** + * struct drm_gem_lru - A simple LRU helper + * + * A helper for tracking GEM objects in a given state, to aid in + * driver's shrinker implementation. Tracks the count of pages + * for lockless &shrinker.count_objects, and provides + * &drm_gem_lru_scan for driver's &shrinker.scan_objects + * implementation. + */ +struct drm_gem_lru { + /** + * @lock: + * + * Lock protecting movement of GEM objects between LRUs. All + * LRUs that the object can move between should be protected + * by the same lock. + */ + struct mutex *lock; + + /** + * @count: + * + * The total number of backing pages of the GEM objects in + * this LRU. + */ + long count; + + /** + * @list: + * + * The LRU list. + */ + struct list_head list; +}; + /** * struct drm_gem_object - GEM buffer object * @@ -312,6 +347,20 @@ struct drm_gem_object { * */ const struct drm_gem_object_funcs *funcs; + + /** + * @lru_node: + * + * List node in a &drm_gem_lru. + */ + struct list_head lru_node; + + /** + * @lru: + * + * The current LRU list that the GEM object is on. 
+ */ + struct drm_gem_lru *lru; }; /** @@ -420,4 +469,11 @@ void drm_gem_unlock_reservations(struct drm_gem_object **objs, int count, int drm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev, u32 handle, u64 *offset); +void drm_gem_lru_init(struct drm_gem_lru *lru, struct mutex *lock); +void drm_gem_lru_remove(struct drm_gem_object *obj); +void drm_gem_lru_move_tail(struct drm_gem_lru *lru, struct drm_gem_object *obj); +void drm_gem_lru_move_tail_locked(struct drm_gem_lru *lru, struct drm_gem_object *obj); +unsigned long drm_gem_lru_scan(struct drm_gem_lru *lru, unsigned nr_to_scan, + bool (*shrink)(struct drm_gem_object *obj)); + #endif /* __DRM_GEM_H__ */ From patchwork Sat Jun 25 22:54:47 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 12895498 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 71696C43334 for ; Sat, 25 Jun 2022 22:55:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233660AbiFYWzp (ORCPT ); Sat, 25 Jun 2022 18:55:45 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60946 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233656AbiFYWzl (ORCPT ); Sat, 25 Jun 2022 18:55:41 -0400 Received: from mail-pj1-x1033.google.com (mail-pj1-x1033.google.com [IPv6:2607:f8b0:4864:20::1033]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7FB4313F3E; Sat, 25 Jun 2022 15:55:31 -0700 (PDT) Received: by mail-pj1-x1033.google.com with SMTP id x1-20020a17090abc8100b001ec7f8a51f5so8977571pjr.0; Sat, 25 Jun 2022 15:55:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=+vUPui8qf63lLBPoNgClq/bXD6rcWRw8Zck+xFPdacg=; b=MMfWEGCRHdsodGPUK/gKuOE51LwJ3inOlCSp3nUazHjCH7+QbUPnqLoz1AFrWXx8Cn EXVRU4go29YNDeab09jUM9zpxx4/+pfiYi+5t4tsgI6+o6TDCy3TwMzjTdQ6vau0qvro GDW+1c+2RVhjv993PAhBRftDMAPmUBm+KTWbsiqZfD4aQvWRQdowTHO45qgckmHRcFN4 jrAn0RagJQPCh5PJ2OUsoRHerz1OEC5en33g8twk9H6PHNHfL8LqYotYd/cYcat7XZx/ rp6nK8Jw2g9OEmqSLo7948qBgCaJIxP2lLnTfhSrMwtZbSapbcg5Cil0XB0KKqkg5Htr iEhg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=+vUPui8qf63lLBPoNgClq/bXD6rcWRw8Zck+xFPdacg=; b=xxduEGHe45PelYXViDNWakaNs0FcC1C4l9BvyHA0eQdxTqbtIZ/XVrGvG/VhuzNYca Fcp3GZ/Ay/Sd3OPt1y0QbhBk586zovVh0t8fHIVt+8SjTRfXNDzxpGhliPCVsngWsRF9 UHaOtCct13LOJenXU9bY6YB/aWeS6qR4LWGapZUd7JgWMkDmV+pKR72vzzdEiRSEFTX0 G2npEyUi9+/jl2X6WPSgFWhYCf6gNayiBTWRblTpnbc1Dtet9AUn4p6PXZBqXFmy0KEa JipS7T0CYxKIbexB/eu0WXWBlPQWdDfvHGOWwXl2qdJZ4AamhbX3gRkDkJOMTESsQ+42 4HEQ== X-Gm-Message-State: AJIora+F/M8pAkkkLviIwqpJjlEi4ArlVtQMy7UXkb2jskxigw+L3qWn jueCBFiK1gYwbdKnYnlNKTI= X-Google-Smtp-Source: AGRyM1uJYDbj1kLF9QFVS+yMKuOxb1UsGNgBUY8XS92dFkgAngSc19CwFz6/m/5CNP8b2OCjVN/vkg== X-Received: by 2002:a17:902:e891:b0:16a:6c64:aa59 with SMTP id w17-20020a170902e89100b0016a6c64aa59mr6338874plg.62.1656197730596; Sat, 25 Jun 2022 15:55:30 -0700 (PDT) Received: from localhost ([2601:1c0:5200:a6:307:a401:7b76:c6e5]) by smtp.gmail.com with ESMTPSA id 
p6-20020a170902780600b001620960f1dfsm4168692pll.198.2022.06.25.15.55.28 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sat, 25 Jun 2022 15:55:29 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark , Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , David Airlie , Daniel Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH 12/15] drm/msm/gem: Convert to using drm_gem_lru Date: Sat, 25 Jun 2022 15:54:47 -0700 Message-Id: <20220625225454.81039-13-robdclark@gmail.com> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220625225454.81039-1-robdclark@gmail.com> References: <20220625225454.81039-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark This converts over to use the shared GEM LRU/shrinker helpers. Note that it means we are no longer tracking purgeable or willneed buffers that are active separately. But the most recently pinned buffers should be at the tail of the various LRUs, and the shrinker is already prepared to encounter objects which are still active. Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_drv.c | 14 +-- drivers/gpu/drm/msm/msm_drv.h | 70 +++++++++++---- drivers/gpu/drm/msm/msm_gem.c | 58 ++++-------- drivers/gpu/drm/msm/msm_gem.h | 93 -------------------- drivers/gpu/drm/msm/msm_gem_shrinker.c | 117 ++++++------------------- drivers/gpu/drm/msm/msm_gpu.c | 3 - drivers/gpu/drm/msm/msm_gpu.h | 6 -- 7 files changed, 104 insertions(+), 257 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index ace91ead2caf..46afdb4ac96e 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -373,14 +373,18 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv) INIT_LIST_HEAD(&priv->objects); mutex_init(&priv->obj_lock); - INIT_LIST_HEAD(&priv->inactive_willneed); - INIT_LIST_HEAD(&priv->inactive_dontneed); - INIT_LIST_HEAD(&priv->inactive_unpinned); - mutex_init(&priv->mm_lock); + /* + * Initialize the LRUs: + */ + mutex_init(&priv->lru.lock); + drm_gem_lru_init(&priv->lru.unbacked, &priv->lru.lock); + drm_gem_lru_init(&priv->lru.pinned, &priv->lru.lock); + drm_gem_lru_init(&priv->lru.willneed, &priv->lru.lock); + drm_gem_lru_init(&priv->lru.dontneed, &priv->lru.lock); /* Teach lockdep about lock ordering wrt. shrinker: */ fs_reclaim_acquire(GFP_KERNEL); - might_lock(&priv->mm_lock); + might_lock(&priv->lru.lock); fs_reclaim_release(GFP_KERNEL); drm_mode_config_init(ddev); diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index 099a67d10c3a..b5c789777e01 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -152,28 +152,60 @@ struct msm_drm_private { struct mutex obj_lock; /** - * LRUs of inactive GEM objects. Every bo is either in one of the - * inactive lists (depending on whether or not it is shrinkable) or - * gpu->active_list (for the gpu it is active on[1]), or transiently - * on a temporary list as the shrinker is running. + * lru: * - * Note that inactive_willneed also contains pinned and vmap'd bos, - * but the number of pinned-but-not-active objects is small (scanout - * buffers, ringbuffer, etc). + * The various LRU's that a GEM object is in at various stages of + * it's lifetime. Objects start out in the unbacked LRU. 
When + * pinned (for scannout or permanently mapped GPU buffers, like + * ringbuffer, memptr, fw, etc) it moves to the pinned LRU. When + * unpinned, it moves into willneed or dontneed LRU depending on + * madvise state. When backing pages are evicted (willneed) or + * purged (dontneed) it moves back into the unbacked LRU. * - * These lists are protected by mm_lock (which should be acquired - * before per GEM object lock). One should *not* hold mm_lock in - * get_pages()/vmap()/etc paths, as they can trigger the shrinker. - * - * [1] if someone ever added support for the old 2d cores, there could be - * more than one gpu object + * The dontneed LRU is considered by the shrinker for objects + * that are candidate for purging, and the willneed LRU is + * considered for objects that could be evicted. */ - struct list_head inactive_willneed; /* inactive + potentially unpin/evictable */ - struct list_head inactive_dontneed; /* inactive + shrinkable */ - struct list_head inactive_unpinned; /* inactive + purged or unpinned */ - long shrinkable_count; /* write access under mm_lock */ - long evictable_count; /* write access under mm_lock */ - struct mutex mm_lock; + struct { + /** + * unbacked: + * + * The LRU for GEM objects without backing pages allocated. + * This mostly exists so that objects are always is one + * LRU. + */ + struct drm_gem_lru unbacked; + + /** + * pinned: + * + * The LRU for pinned GEM objects + */ + struct drm_gem_lru pinned; + + /** + * willneed: + * + * The LRU for unpinned GEM objects which are in madvise + * WILLNEED state (ie. can be evicted) + */ + struct drm_gem_lru willneed; + + /** + * dontneed: + * + * The LRU for unpinned GEM objects which are in madvise + * DONTNEED state (ie. can be purged) + */ + struct drm_gem_lru dontneed; + + /** + * lock: + * + * Protects manipulation of all of the LRUs. 
+ */ + struct mutex lock; + } lru; struct workqueue_struct *wq; diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 209438744bab..d4e8af46f4ef 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -174,6 +174,7 @@ static void put_pages(struct drm_gem_object *obj) put_pages_vram(obj); msm_obj->pages = NULL; + update_lru(obj); } } @@ -210,8 +211,6 @@ struct page **msm_gem_pin_pages(struct drm_gem_object *obj) void msm_gem_unpin_pages(struct drm_gem_object *obj) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); - msm_gem_lock(obj); msm_gem_unpin_locked(obj); msm_gem_unlock(obj); @@ -761,7 +760,6 @@ void msm_gem_purge(struct drm_gem_object *obj) put_iova_vmas(obj); msm_obj->madv = __MSM_MADV_PURGED; - update_lru(obj); drm_gem_free_mmap_offset(obj); @@ -786,7 +784,6 @@ void msm_gem_evict(struct drm_gem_object *obj) GEM_WARN_ON(!msm_gem_is_locked(obj)); GEM_WARN_ON(is_unevictable(msm_obj)); - GEM_WARN_ON(!msm_obj->evictable); /* Get rid of any iommu mapping(s): */ put_iova_spaces(obj, false); @@ -794,8 +791,6 @@ void msm_gem_evict(struct drm_gem_object *obj) drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping); put_pages(obj); - - update_lru(obj); } void msm_gem_vunmap(struct drm_gem_object *obj) @@ -818,26 +813,20 @@ static void update_lru(struct drm_gem_object *obj) GEM_WARN_ON(!msm_gem_is_locked(&msm_obj->base)); - mutex_lock(&priv->mm_lock); - - if (msm_obj->dontneed) - mark_unpurgeable(msm_obj); - if (msm_obj->evictable) - mark_unevictable(msm_obj); - - list_del(&msm_obj->mm_list); - if ((msm_obj->madv == MSM_MADV_WILLNEED) && msm_obj->sgt) { - list_add_tail(&msm_obj->mm_list, &priv->inactive_willneed); - mark_evictable(msm_obj); - } else if (msm_obj->madv == MSM_MADV_DONTNEED) { - list_add_tail(&msm_obj->mm_list, &priv->inactive_dontneed); - mark_purgeable(msm_obj); + if (!msm_obj->pages) { + GEM_WARN_ON(msm_obj->pin_count); + GEM_WARN_ON(msm_obj->vmap_count); + + drm_gem_lru_move_tail(&priv->lru.unbacked, obj); + } else if (msm_obj->pin_count || msm_obj->vmap_count) { + drm_gem_lru_move_tail(&priv->lru.pinned, obj); + } else if (msm_obj->madv == MSM_MADV_WILLNEED) { + drm_gem_lru_move_tail(&priv->lru.willneed, obj); } else { - GEM_WARN_ON((msm_obj->madv != __MSM_MADV_PURGED) && msm_obj->sgt); - list_add_tail(&msm_obj->mm_list, &priv->inactive_unpinned); - } + GEM_WARN_ON(msm_obj->madv != MSM_MADV_DONTNEED); - mutex_unlock(&priv->mm_lock); + drm_gem_lru_move_tail(&priv->lru.dontneed, obj); + } } bool msm_gem_active(struct drm_gem_object *obj) @@ -995,12 +984,6 @@ static void msm_gem_free_object(struct drm_gem_object *obj) list_del(&msm_obj->node); mutex_unlock(&priv->obj_lock); - mutex_lock(&priv->mm_lock); - if (msm_obj->dontneed) - mark_unpurgeable(msm_obj); - list_del(&msm_obj->mm_list); - mutex_unlock(&priv->mm_lock); - put_iova_spaces(obj, true); if (obj->import_attach) { @@ -1160,13 +1143,6 @@ struct drm_gem_object *msm_gem_new(struct drm_device *dev, uint32_t size, uint32 to_msm_bo(obj)->vram_node = &vma->node; - /* Call chain get_pages() -> update_inactive() tries to - * access msm_obj->mm_list, but it is not initialized yet. - * To avoid NULL pointer dereference error, initialize - * mm_list to be empty. 
- */ - INIT_LIST_HEAD(&msm_obj->mm_list); - msm_gem_lock(obj); pages = get_pages(obj); msm_gem_unlock(obj); @@ -1189,9 +1165,7 @@ struct drm_gem_object *msm_gem_new(struct drm_device *dev, uint32_t size, uint32 mapping_set_gfp_mask(obj->filp->f_mapping, GFP_HIGHUSER); } - mutex_lock(&priv->mm_lock); - list_add_tail(&msm_obj->mm_list, &priv->inactive_unpinned); - mutex_unlock(&priv->mm_lock); + drm_gem_lru_move_tail(&priv->lru.unbacked, obj); mutex_lock(&priv->obj_lock); list_add_tail(&msm_obj->node, &priv->objects); @@ -1247,9 +1221,7 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev, msm_gem_unlock(obj); - mutex_lock(&priv->mm_lock); - list_add_tail(&msm_obj->mm_list, &priv->inactive_unpinned); - mutex_unlock(&priv->mm_lock); + drm_gem_lru_move_tail(&priv->lru.pinned, obj); mutex_lock(&priv->obj_lock); list_add_tail(&msm_obj->node, &priv->objects); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 420ba49bf21a..0403b27ff779 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -93,16 +93,6 @@ struct msm_gem_object { */ uint8_t madv; - /** - * Is object on inactive_dontneed list (ie. counted in priv->shrinkable_count)? - */ - bool dontneed : 1; - - /** - * Is object evictable (ie. counted in priv->evictable_count)? - */ - bool evictable : 1; - /** * count of active vmap'ing */ @@ -114,17 +104,6 @@ struct msm_gem_object { */ struct list_head node; - /** - * An object is either: - * inactive - on priv->inactive_dontneed or priv->inactive_willneed - * (depending on purgeability status) - * active - on one one of the gpu's active_list.. well, at - * least for now we don't have (I don't think) hw sync between - * 2d and 3d one devices which have both, meaning we need to - * block on submit if a bo is already on other ring - */ - struct list_head mm_list; - struct page **pages; struct sg_table *sgt; void *vaddr; @@ -206,12 +185,6 @@ msm_gem_lock(struct drm_gem_object *obj) dma_resv_lock(obj->resv, NULL); } -static inline bool __must_check -msm_gem_trylock(struct drm_gem_object *obj) -{ - return dma_resv_trylock(obj->resv); -} - static inline int msm_gem_lock_interruptible(struct drm_gem_object *obj) { @@ -260,77 +233,11 @@ static inline bool is_vunmapable(struct msm_gem_object *msm_obj) return (msm_obj->vmap_count == 0) && msm_obj->vaddr; } -static inline void mark_purgeable(struct msm_gem_object *msm_obj) -{ - struct msm_drm_private *priv = msm_obj->base.dev->dev_private; - - GEM_WARN_ON(!mutex_is_locked(&priv->mm_lock)); - - if (is_unpurgeable(msm_obj)) - return; - - if (GEM_WARN_ON(msm_obj->dontneed)) - return; - - priv->shrinkable_count += msm_obj->base.size >> PAGE_SHIFT; - msm_obj->dontneed = true; -} - -static inline void mark_unpurgeable(struct msm_gem_object *msm_obj) -{ - struct msm_drm_private *priv = msm_obj->base.dev->dev_private; - - GEM_WARN_ON(!mutex_is_locked(&priv->mm_lock)); - - if (is_unpurgeable(msm_obj)) - return; - - if (GEM_WARN_ON(!msm_obj->dontneed)) - return; - - priv->shrinkable_count -= msm_obj->base.size >> PAGE_SHIFT; - GEM_WARN_ON(priv->shrinkable_count < 0); - msm_obj->dontneed = false; -} - static inline bool is_unevictable(struct msm_gem_object *msm_obj) { return is_unpurgeable(msm_obj) || msm_obj->vaddr; } -static inline void mark_evictable(struct msm_gem_object *msm_obj) -{ - struct msm_drm_private *priv = msm_obj->base.dev->dev_private; - - WARN_ON(!mutex_is_locked(&priv->mm_lock)); - - if (is_unevictable(msm_obj)) - return; - - if (WARN_ON(msm_obj->evictable)) - return; - - 
priv->evictable_count += msm_obj->base.size >> PAGE_SHIFT; - msm_obj->evictable = true; -} - -static inline void mark_unevictable(struct msm_gem_object *msm_obj) -{ - struct msm_drm_private *priv = msm_obj->base.dev->dev_private; - - WARN_ON(!mutex_is_locked(&priv->mm_lock)); - - if (is_unevictable(msm_obj)) - return; - - if (WARN_ON(!msm_obj->evictable)) - return; - - priv->evictable_count -= msm_obj->base.size >> PAGE_SHIFT; - WARN_ON(priv->evictable_count < 0); - msm_obj->evictable = false; -} - void msm_gem_purge(struct drm_gem_object *obj); void msm_gem_evict(struct drm_gem_object *obj); void msm_gem_vunmap(struct drm_gem_object *obj); diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c index ea8ed74982c1..530b1102b46d 100644 --- a/drivers/gpu/drm/msm/msm_gem_shrinker.c +++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c @@ -29,121 +29,61 @@ msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc) { struct msm_drm_private *priv = container_of(shrinker, struct msm_drm_private, shrinker); - unsigned count = priv->shrinkable_count; + unsigned count = priv->lru.dontneed.count; if (can_swap()) - count += priv->evictable_count; + count += priv->lru.willneed.count; return count; } static bool -purge(struct msm_gem_object *msm_obj) +purge(struct drm_gem_object *obj) { - if (!is_purgeable(msm_obj)) + if (!is_purgeable(to_msm_bo(obj))) return false; - if (msm_gem_active(&msm_obj->base)) + if (msm_gem_active(obj)) return false; - /* - * This will move the obj out of still_in_list to - * the purged list - */ - msm_gem_purge(&msm_obj->base); + msm_gem_purge(obj); return true; } static bool -evict(struct msm_gem_object *msm_obj) +evict(struct drm_gem_object *obj) { - if (is_unevictable(msm_obj)) + if (is_unevictable(to_msm_bo(obj))) return false; - if (msm_gem_active(&msm_obj->base)) + if (msm_gem_active(obj)) return false; - msm_gem_evict(&msm_obj->base); + msm_gem_evict(obj); return true; } -static unsigned long -scan(struct msm_drm_private *priv, unsigned nr_to_scan, struct list_head *list, - bool (*shrink)(struct msm_gem_object *msm_obj)) -{ - unsigned freed = 0; - struct list_head still_in_list; - - INIT_LIST_HEAD(&still_in_list); - - mutex_lock(&priv->mm_lock); - - while (freed < nr_to_scan) { - struct msm_gem_object *msm_obj = list_first_entry_or_null( - list, typeof(*msm_obj), mm_list); - - if (!msm_obj) - break; - - list_move_tail(&msm_obj->mm_list, &still_in_list); - - /* - * If it is in the process of being freed, msm_gem_free_object - * can be blocked on mm_lock waiting to remove it. So just - * skip it. - */ - if (!kref_get_unless_zero(&msm_obj->base.refcount)) - continue; - - /* - * Now that we own a reference, we can drop mm_lock for the - * rest of the loop body, to reduce contention with the - * retire_submit path (which could make more objects purgeable) - */ - - mutex_unlock(&priv->mm_lock); - - /* - * Note that this still needs to be trylock, since we can - * hit shrinker in response to trying to get backing pages - * for this obj (ie. 
while it's lock is already held) - */ - if (!msm_gem_trylock(&msm_obj->base)) - goto tail; - - if (shrink(msm_obj)) - freed += msm_obj->base.size >> PAGE_SHIFT; - - msm_gem_unlock(&msm_obj->base); - -tail: - drm_gem_object_put(&msm_obj->base); - mutex_lock(&priv->mm_lock); - } - - list_splice_tail(&still_in_list, list); - mutex_unlock(&priv->mm_lock); - - return freed; -} - static unsigned long msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc) { struct msm_drm_private *priv = container_of(shrinker, struct msm_drm_private, shrinker); + long nr = sc->nr_to_scan; unsigned long freed; - freed = scan(priv, sc->nr_to_scan, &priv->inactive_dontneed, purge); + freed = drm_gem_lru_scan(&priv->lru.dontneed, nr, purge); + nr -= freed; if (freed > 0) trace_msm_gem_purge(freed << PAGE_SHIFT); - if (can_swap() && freed < sc->nr_to_scan) { - int evicted = scan(priv, sc->nr_to_scan - freed, - &priv->inactive_willneed, evict); + if (can_swap() && nr > 0) { + unsigned long evicted; + + evicted = drm_gem_lru_scan(&priv->lru.willneed, nr, evict); + nr -= evicted; if (evicted > 0) trace_msm_gem_evict(evicted << PAGE_SHIFT); @@ -179,12 +119,12 @@ msm_gem_shrinker_shrink(struct drm_device *dev, unsigned long nr_to_scan) static const int vmap_shrink_limit = 15; static bool -vmap_shrink(struct msm_gem_object *msm_obj) +vmap_shrink(struct drm_gem_object *obj) { - if (!is_vunmapable(msm_obj)) + if (!is_vunmapable(to_msm_bo(obj))) return false; - msm_gem_vunmap(&msm_obj->base); + msm_gem_vunmap(obj); return true; } @@ -194,17 +134,18 @@ msm_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr) { struct msm_drm_private *priv = container_of(nb, struct msm_drm_private, vmap_notifier); - struct list_head *mm_lists[] = { - &priv->inactive_dontneed, - &priv->inactive_willneed, - priv->gpu ? &priv->gpu->active_list : NULL, + struct drm_gem_lru *lrus[] = { + &priv->lru.dontneed, + &priv->lru.willneed, + &priv->lru.pinned, NULL, }; unsigned idx, unmapped = 0; - for (idx = 0; mm_lists[idx] && unmapped < vmap_shrink_limit; idx++) { - unmapped += scan(priv, vmap_shrink_limit - unmapped, - mm_lists[idx], vmap_shrink); + for (idx = 0; lrus[idx] && unmapped < vmap_shrink_limit; idx++) { + unmapped += drm_gem_lru_scan(lrus[idx], + vmap_shrink_limit - unmapped, + vmap_shrink); } *(unsigned long *)ptr += unmapped; diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index 8c00f9187c03..bdee6ea51b73 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -862,7 +862,6 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, sched_set_fifo_low(gpu->worker->task); - INIT_LIST_HEAD(&gpu->active_list); mutex_init(&gpu->active_lock); mutex_init(&gpu->lock); init_waitqueue_head(&gpu->retire_event); @@ -990,8 +989,6 @@ void msm_gpu_cleanup(struct msm_gpu *gpu) DBG("%s", gpu->name); - WARN_ON(!list_empty(&gpu->active_list)); - for (i = 0; i < ARRAY_SIZE(gpu->rb); i++) { msm_ringbuffer_destroy(gpu->rb[i]); gpu->rb[i] = NULL; diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index 4ca56d96344a..b837785cdb04 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -178,12 +178,6 @@ struct msm_gpu { */ int cur_ctx_seqno; - /* - * List of GEM active objects on this gpu. 
Protected by - * msm_drm_private::mm_lock - */ - struct list_head active_list; - /** * lock: * From patchwork Sat Jun 25 22:54:48 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 12895496 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7CBA3CCA480 for ; Sat, 25 Jun 2022 22:55:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233626AbiFYWzp (ORCPT ); Sat, 25 Jun 2022 18:55:45 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60720 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233633AbiFYWzl (ORCPT ); Sat, 25 Jun 2022 18:55:41 -0400 Received: from mail-pg1-x529.google.com (mail-pg1-x529.google.com [IPv6:2607:f8b0:4864:20::529]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 30C6713F70; Sat, 25 Jun 2022 15:55:33 -0700 (PDT) Received: by mail-pg1-x529.google.com with SMTP id e63so5664768pgc.5; Sat, 25 Jun 2022 15:55:33 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=FzvQD1BdX2RVx+fHXlbnajcgEgfcaBeelT02bqIAfkE=; b=KOon/w2LGkob1wGHBBHJ89ou2eb0xLTun9FBY+3NZx1eWwTZfXTXIZMhvRPRLmKDF8 Dwcda1Jw30GegUsGq5yykO+9LaY0RQMTKOnYMfyANn2a3+wdtlv9dtjcONuRm0GcVBgA 3NtzNN7r099vZD9WVBGppQ5Lg4tknfuXe6UWFAk00weyuLfuLdo3YKh3dsJwT5jnH9hz gsgkoC0W0L+kjPHRjXkc+EzynBBIR4R0S+Rp8DRGu6WKIfxLtVnuRk06VDkwWsV9R1q1 0mxtU8/2yxnVxUA8PrhSm4XyZX9ZDNMbykvOViY/ldaB3UAeRLHbSYMkhzp8lGDDsH2A yftQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=FzvQD1BdX2RVx+fHXlbnajcgEgfcaBeelT02bqIAfkE=; b=jmpe3wY28YdtwIaiY6+GHvBgHs9tN0qlrsTRikhEQCdGb1EU9GyctqecSBEYsj6k9B 7OIH93iyxypPqAY5edVxlhYu/qS2E+KldmnPUttfI1KLJdEO/N7gqU3UwQioHCOpFOqH Bq2CCIzfMb+wMRwGzDUrAvQa7v1EY5IME7hAX0haa2rnlmgkLpH+mQoDQXr82fIJc16j we2dWAUOaKzzCAxH8qEKINzOFUi7Ce+brkBtMhPfvCa9jy8RFatvzWgsqPmkvrRRNLel xP4F80+es9F5P7LYX0RLNrI381HxtW6AmOmZkTsVp1zT63m4MAlWApz4eRBNshMubMPq Jmqg== X-Gm-Message-State: AJIora9/fOZtqDP23Iswke6+5SCNdMM8OepY1bU7wmOnJbJdMPwzWzUF 5gJWqvtlDIT0EcG5SW7CQUc= X-Google-Smtp-Source: AGRyM1vj3PKBYedlufVclRfXmk3uILPeCT+e1vBK/3Ya1177jHlJLFqiGL7Ag79Bl3wRNNifT2aS0g== X-Received: by 2002:a63:b94a:0:b0:40c:e843:a1dc with SMTP id v10-20020a63b94a000000b0040ce843a1dcmr5370006pgo.441.1656197733393; Sat, 25 Jun 2022 15:55:33 -0700 (PDT) Received: from localhost ([2601:1c0:5200:a6:307:a401:7b76:c6e5]) by smtp.gmail.com with ESMTPSA id jb20-20020a170903259400b0016a11b9aeb2sm4171689plb.187.2022.06.25.15.55.31 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sat, 25 Jun 2022 15:55:32 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark , Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , David Airlie , Daniel Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH 13/15] drm/msm/gem: Unpin buffers earlier Date: Sat, 25 Jun 2022 15:54:48 -0700 Message-Id: <20220625225454.81039-14-robdclark@gmail.com> X-Mailer: git-send-email 2.36.1 In-Reply-To: 
<20220625225454.81039-1-robdclark@gmail.com> References: <20220625225454.81039-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark We've already attached the fences, so obj->resv (which shrinker checks) tells us whether they are still active. So we can unpin sooner, before we drop the queue lock. This also avoids the need to grab the obj lock in the retire path, avoiding potential for lock contention between submit and retire. Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem_submit.c | 8 ++------ 1 file changed, 2 insertions(+), 6 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index adf358fb8e9d..5599d93ec0d2 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -501,11 +501,11 @@ static int submit_reloc(struct msm_gem_submit *submit, struct msm_gem_object *ob */ static void submit_cleanup(struct msm_gem_submit *submit, bool error) { - unsigned cleanup_flags = BO_LOCKED; + unsigned cleanup_flags = BO_LOCKED | BO_OBJ_PINNED; unsigned i; if (error) - cleanup_flags |= BO_VMA_PINNED | BO_OBJ_PINNED; + cleanup_flags |= BO_VMA_PINNED; for (i = 0; i < submit->nr_bos; i++) { struct msm_gem_object *msm_obj = submit->bos[i].obj; @@ -522,10 +522,6 @@ void msm_submit_retire(struct msm_gem_submit *submit) for (i = 0; i < submit->nr_bos; i++) { struct drm_gem_object *obj = &submit->bos[i].obj->base; - msm_gem_lock(obj); - /* Note, VMA already fence-unpinned before submit: */ - submit_cleanup_bo(submit, i, BO_OBJ_PINNED); - msm_gem_unlock(obj); drm_gem_object_put(obj); } } From patchwork Sat Jun 25 22:54:49 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 12895497 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D3F37C433EF for ; Sat, 25 Jun 2022 22:55:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233683AbiFYWzq (ORCPT ); Sat, 25 Jun 2022 18:55:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60962 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233685AbiFYWzl (ORCPT ); Sat, 25 Jun 2022 18:55:41 -0400 Received: from mail-pl1-x634.google.com (mail-pl1-x634.google.com [IPv6:2607:f8b0:4864:20::634]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8854213F8C; Sat, 25 Jun 2022 15:55:36 -0700 (PDT) Received: by mail-pl1-x634.google.com with SMTP id k7so5102604plg.7; Sat, 25 Jun 2022 15:55:36 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=CHnA8EA1zYWAltmKrCqgpnviThEUlN7Pg3SwIn6d3WM=; b=EkQwqPjzd+LGHdTgSo2dAIAIRpSAC6Nv6RHe+yrSIMYp9IvQfajAKlCp7AcHut07Tp DzVol4+DsflE2sydgmeqfi/uE27GbpyLDLtKIWntALoTTjjZujCeng2dckMbBySlgQVO IxkahcVYhQmkr46T5tY/NpBYobQNJlUfHnr/JcrN/hkmvjmRv+Z/S/fyDEYwS71K7jA9 cLS9+sl/7tojk+f1SwEz/059TKhLFFPpg/98lHJsqOWrYxvecCGJbTTCAV6UE+Sa/ATr G0QZmAOm9ZYERsmS0G/OLbz3y8XWw1vrAWjfw/KEuM50O0AbEFCc2yeE+e5MOziTAahA WPxA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to 
:references:mime-version:content-transfer-encoding; bh=CHnA8EA1zYWAltmKrCqgpnviThEUlN7Pg3SwIn6d3WM=; b=rgTr7jsaVqwb9447qGGJgMUjtEHEpc80AMjHr1V2xijPrnHvUxHjzwwzdfPI/b7DMJ 9jeVFLmkj6gwb6q/T5SBJR+rAdDt2/X2IOlEq/rPvCsFGl9IH5k21dV9gxvrJwKU7Q1S cex+Z+jbgVsTDCuU82ijIGr7QkfC9s9U+C47VN4ZEBKqj711mzLt5xc1hc5zr3/+zlY3 5kdbwpCDZLgQo6b0evaCHrlFdOlenz8yoJbjJ2/rvjOfT32DD2+CRZmlLqB8VyKQPY7P 7P1aTJhUVG4zWG8ga4LMnp2ftO36kTGYO4yeEWA027mu6xW9LjJSb7bmHci4bZVR1YsN o9Qw== X-Gm-Message-State: AJIora96sPURlXW7vNZUpYUQZupwrEhFCpSNYsbMolzJckdfQzvUeFus y9IHh3AKJ55QwkPskTIWFsg= X-Google-Smtp-Source: AGRyM1samwSo1WxvAFzhw8/vJ684m46AebmiyTYk6zs1V/HkEWtasanv8OtFQnUi/1Bnxeqd8QlG+Q== X-Received: by 2002:a17:902:e552:b0:163:6a5e:4e08 with SMTP id n18-20020a170902e55200b001636a5e4e08mr6494175plf.130.1656197736053; Sat, 25 Jun 2022 15:55:36 -0700 (PDT) Received: from localhost ([2601:1c0:5200:a6:307:a401:7b76:c6e5]) by smtp.gmail.com with ESMTPSA id z26-20020a634c1a000000b0040dd052ab11sm520830pga.58.2022.06.25.15.55.34 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sat, 25 Jun 2022 15:55:35 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark , Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , David Airlie , Daniel Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH 14/15] drm/msm/gem: Consolidate shrinker trace Date: Sat, 25 Jun 2022 15:54:49 -0700 Message-Id: <20220625225454.81039-15-robdclark@gmail.com> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220625225454.81039-1-robdclark@gmail.com> References: <20220625225454.81039-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Combine separate trace events for purge vs evict into one. When we add support for purging/evicting active buffers we'll just add more info into this one trace event, rather than adding a bunch more events. Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem_shrinker.c | 19 ++++++--------- drivers/gpu/drm/msm/msm_gpu_trace.h | 32 +++++++++++--------------- 2 files changed, 20 insertions(+), 31 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c index 530b1102b46d..5cc05d669a08 100644 --- a/drivers/gpu/drm/msm/msm_gem_shrinker.c +++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c @@ -71,25 +71,20 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc) struct msm_drm_private *priv = container_of(shrinker, struct msm_drm_private, shrinker); long nr = sc->nr_to_scan; - unsigned long freed; + unsigned long freed, purged, evicted = 0; - freed = drm_gem_lru_scan(&priv->lru.dontneed, nr, purge); - nr -= freed; - - if (freed > 0) - trace_msm_gem_purge(freed << PAGE_SHIFT); + purged = drm_gem_lru_scan(&priv->lru.dontneed, nr, purge); + nr -= purged; if (can_swap() && nr > 0) { - unsigned long evicted; - evicted = drm_gem_lru_scan(&priv->lru.willneed, nr, evict); nr -= evicted; + } - if (evicted > 0) - trace_msm_gem_evict(evicted << PAGE_SHIFT); + freed = purged + evicted; - freed += evicted; - } + if (freed) + trace_msm_gem_shrink(sc->nr_to_scan, purged, evicted); return (freed > 0) ? 
freed : SHRINK_STOP; } diff --git a/drivers/gpu/drm/msm/msm_gpu_trace.h b/drivers/gpu/drm/msm/msm_gpu_trace.h index ca0b08d7875b..8867fa0a0306 100644 --- a/drivers/gpu/drm/msm/msm_gpu_trace.h +++ b/drivers/gpu/drm/msm/msm_gpu_trace.h @@ -115,29 +115,23 @@ TRACE_EVENT(msm_gmu_freq_change, ); -TRACE_EVENT(msm_gem_purge, - TP_PROTO(u32 bytes), - TP_ARGS(bytes), +TRACE_EVENT(msm_gem_shrink, + TP_PROTO(u32 nr_to_scan, u32 purged, u32 evicted), + TP_ARGS(nr_to_scan, purged, evicted), TP_STRUCT__entry( - __field(u32, bytes) + __field(u32, nr_to_scan) + __field(u32, purged) + __field(u32, evicted) ), TP_fast_assign( - __entry->bytes = bytes; + __entry->nr_to_scan = nr_to_scan; + __entry->purged = purged; + __entry->evicted = evicted; ), - TP_printk("Purging %u bytes", __entry->bytes) -); - - -TRACE_EVENT(msm_gem_evict, - TP_PROTO(u32 bytes), - TP_ARGS(bytes), - TP_STRUCT__entry( - __field(u32, bytes) - ), - TP_fast_assign( - __entry->bytes = bytes; - ), - TP_printk("Evicting %u bytes", __entry->bytes) + TP_printk("nr_to_scan=%u pages, purged=%u pages, evicted=%u pages", + __entry->nr_to_scan, + __entry->purged, + __entry->evicted) ); From patchwork Sat Jun 25 22:54:50 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 12895499 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7B24BCCA480 for ; Sat, 25 Jun 2022 22:55:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233664AbiFYWzr (ORCPT ); Sat, 25 Jun 2022 18:55:47 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60688 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233688AbiFYWzm (ORCPT ); Sat, 25 Jun 2022 18:55:42 -0400 Received: from mail-pf1-x42b.google.com (mail-pf1-x42b.google.com [IPv6:2607:f8b0:4864:20::42b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 71ECB13F78; Sat, 25 Jun 2022 15:55:39 -0700 (PDT) Received: by mail-pf1-x42b.google.com with SMTP id i64so5654865pfc.8; Sat, 25 Jun 2022 15:55:39 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=HjlEl5/qplLjRvIbiPqkIivzE1Th3OTP/5G9Lv59zyE=; b=E72ZEUw2YWzfnLk/27KVEoTDLtge0CRcNEoC/mqAONxPi0JsNZ9fiBOe0IY5kajcqH g6FKr24mvvmRaE4vOTYx0IioJJV9qhGmbxjgSFu4ERDCpHUIMLoviOT1269r6jNHkKVa ESbrTtC4Uh9t1QvGXKl7tZmIv2VKBNlg17fsS9drKz4J2nc8c9KZIueB50wz9KFLEXo1 gXFEiHWWvdXmAQ9nOjKG2Cq3WHO/VMmr5HvQxJLnemyqDsA7KNigLXrf9zqWw8ovAF5v DPp+AeWSifVAyylX5Vs7XVxF36vWDJePqPIe6csHg8E4istzhhrxWncSQUZ6vvXIxs5v r+hQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=HjlEl5/qplLjRvIbiPqkIivzE1Th3OTP/5G9Lv59zyE=; b=Qir/qNhGotdXaRdOrUuIa6NdZw0iaP3bvfZzZTsApd7VK+yrOtPUvIfqOp5WV4aOa3 EDZPnd05+lplXo2AoGnGz3Sc3TbOyLu/pFCbRPy1yzgD6c8pX24tbgLb6j5zUrBrv7DS JyVaqrM0oCGqiDPFNSXvdsQJ+0yZAXrIctzLSB5JX/x0HszMsHlTb8prepV8m5o8xKbr 9v5qSKQ6d0iHtJc1W6DHm5AxgXAbTUx7BVtt0MAMXqcbhxDm7E/7EzRum3hQ5OVUazwc A1ucjfQd7OQvKuo9dwwYbF5nifhcoa4Lp41onKxVlHMmBXDgIe1uh59Qd0W7gQb9flYh GJHA== X-Gm-Message-State: AJIora9oKLhqZErEY8+a3CqQPg8GPwOX/tM/u7tZggt52bwQtyT/7mVV 
xdpQA3bC86wg/1hv2H5M1kU= X-Google-Smtp-Source: AGRyM1vCt6HT5LWm6qTqal3dUZABGCG9BONuwExyWOu7jF0JXgiSJ8MGot8F0KRFX0PO9YNBj6xmiw== X-Received: by 2002:a63:1d04:0:b0:40d:a3e5:aa3a with SMTP id d4-20020a631d04000000b0040da3e5aa3amr5320398pgd.248.1656197738487; Sat, 25 Jun 2022 15:55:38 -0700 (PDT) Received: from localhost ([2601:1c0:5200:a6:307:a401:7b76:c6e5]) by smtp.gmail.com with ESMTPSA id c19-20020a62e813000000b005252adb89b3sm4137123pfi.32.2022.06.25.15.55.37 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sat, 25 Jun 2022 15:55:37 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark , Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , David Airlie , Daniel Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH 15/15] drm/msm/gem: Evict active GEM objects when necessary Date: Sat, 25 Jun 2022 15:54:50 -0700 Message-Id: <20220625225454.81039-16-robdclark@gmail.com> X-Mailer: git-send-email 2.36.1 In-Reply-To: <20220625225454.81039-1-robdclark@gmail.com> References: <20220625225454.81039-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark If we are under enough memory pressure, we should stall waiting for active buffers to become idle in order to evict. Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem_shrinker.c | 68 +++++++++++++++++++++----- drivers/gpu/drm/msm/msm_gpu_trace.h | 16 +++--- 2 files changed, 66 insertions(+), 18 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c index 5cc05d669a08..b0bee040432a 100644 --- a/drivers/gpu/drm/msm/msm_gem_shrinker.c +++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c @@ -24,6 +24,11 @@ static bool can_swap(void) return enable_eviction && get_nr_swap_pages() > 0; } +static bool can_block(struct shrink_control *sc) +{ + return current_is_kswapd() || (sc->gfp_mask & __GFP_RECLAIM); +} + static unsigned long msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc) { @@ -65,26 +70,65 @@ evict(struct drm_gem_object *obj) return true; } +static bool +wait_for_idle(struct drm_gem_object *obj) +{ + enum dma_resv_usage usage = dma_resv_usage_rw(true); + return dma_resv_wait_timeout(obj->resv, usage, false, 1000) > 0; +} + +static bool +active_purge(struct drm_gem_object *obj) +{ + if (!wait_for_idle(obj)) + return false; + + return purge(obj); +} + +static bool +active_evict(struct drm_gem_object *obj) +{ + if (!wait_for_idle(obj)) + return false; + + return evict(obj); +} + static unsigned long msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc) { struct msm_drm_private *priv = container_of(shrinker, struct msm_drm_private, shrinker); + struct { + struct drm_gem_lru *lru; + bool (*shrink)(struct drm_gem_object *obj); + bool cond; + unsigned long freed; + } stages[] = { + /* Stages of progressively more aggressive/expensive reclaim: */ + { &priv->lru.dontneed, purge, true }, + { &priv->lru.willneed, evict, can_swap() }, + { &priv->lru.dontneed, active_purge, can_block(sc) }, + { &priv->lru.willneed, active_evict, can_swap() && can_block(sc) }, + }; long nr = sc->nr_to_scan; - unsigned long freed, purged, evicted = 0; - - purged = drm_gem_lru_scan(&priv->lru.dontneed, nr, purge); - nr -= purged; - - if (can_swap() && nr > 0) { - evicted = drm_gem_lru_scan(&priv->lru.willneed, nr, evict); - nr -= evicted; + unsigned long freed = 0; + + for (unsigned i = 0; (nr > 0) 
&& (i < ARRAY_SIZE(stages)); i++) { + if (!stages[i].cond) + continue; + stages[i].freed = + drm_gem_lru_scan(stages[i].lru, nr, stages[i].shrink); + nr -= stages[i].freed; + freed += stages[i].freed; } - freed = purged + evicted; - - if (freed) - trace_msm_gem_shrink(sc->nr_to_scan, purged, evicted); + if (freed) { + trace_msm_gem_shrink(sc->nr_to_scan, stages[0].freed, + stages[1].freed, stages[2].freed, + stages[3].freed); + } return (freed > 0) ? freed : SHRINK_STOP; } diff --git a/drivers/gpu/drm/msm/msm_gpu_trace.h b/drivers/gpu/drm/msm/msm_gpu_trace.h index 8867fa0a0306..ac40d857bc45 100644 --- a/drivers/gpu/drm/msm/msm_gpu_trace.h +++ b/drivers/gpu/drm/msm/msm_gpu_trace.h @@ -116,22 +116,26 @@ TRACE_EVENT(msm_gmu_freq_change, TRACE_EVENT(msm_gem_shrink, - TP_PROTO(u32 nr_to_scan, u32 purged, u32 evicted), - TP_ARGS(nr_to_scan, purged, evicted), + TP_PROTO(u32 nr_to_scan, u32 purged, u32 evicted, + u32 active_purged, u32 active_evicted), + TP_ARGS(nr_to_scan, purged, evicted, active_purged, active_evicted), TP_STRUCT__entry( __field(u32, nr_to_scan) __field(u32, purged) __field(u32, evicted) + __field(u32, active_purged) + __field(u32, active_evicted) ), TP_fast_assign( __entry->nr_to_scan = nr_to_scan; __entry->purged = purged; __entry->evicted = evicted; + __entry->active_purged = active_purged; + __entry->active_evicted = active_evicted; ), - TP_printk("nr_to_scan=%u pages, purged=%u pages, evicted=%u pages", - __entry->nr_to_scan, - __entry->purged, - __entry->evicted) + TP_printk("nr_to_scan=%u pg, purged=%u pg, evicted=%u pg, active_purged=%u pg, active_evicted=%u pg", + __entry->nr_to_scan, __entry->purged, __entry->evicted, + __entry->active_purged, __entry->active_evicted) );
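
For reference, below is a minimal sketch of how a driver other than msm might wire the drm_gem_lru helpers added in patch 11 into its shrinker, following the same pattern patches 12 and 15 apply to msm. Only the drm_gem_lru_*() calls, the lru.count field, and the callback contract of drm_gem_lru_scan() come from this series; every foo_* name (including foo_gem_release_pages()) is hypothetical, and shrinker registration is omitted because its signature differs between kernel versions. This is a sketch under those assumptions, not part of the series.

#include <linux/dma-resv.h>
#include <linux/mutex.h>
#include <linux/shrinker.h>

#include <drm/drm_device.h>
#include <drm/drm_gem.h>

/* hypothetical driver hook that drops an object's backing pages */
static void foo_gem_release_pages(struct drm_gem_object *obj);

struct foo_drm_private {
	struct mutex lru_lock;
	struct drm_gem_lru pinned;	/* objects that cannot be reclaimed */
	struct drm_gem_lru idle;	/* unpinned objects, reclaim candidates */
	struct shrinker shrinker;
};

static void foo_lru_init(struct foo_drm_private *priv)
{
	mutex_init(&priv->lru_lock);
	/* all LRUs an object can move between must share a single lock */
	drm_gem_lru_init(&priv->pinned, &priv->lru_lock);
	drm_gem_lru_init(&priv->idle, &priv->lru_lock);
}

/* called from the driver's (hypothetical) pin/unpin paths */
static void foo_gem_update_lru(struct drm_gem_object *obj, bool pinned)
{
	struct foo_drm_private *priv = obj->dev->dev_private;

	drm_gem_lru_move_tail(pinned ? &priv->pinned : &priv->idle, obj);
}

/*
 * Per-object callback handed to drm_gem_lru_scan().  The helper has
 * already taken a reference on the object and holds obj->resv via
 * dma_resv_trylock() when this runs.
 */
static bool foo_gem_shrink(struct drm_gem_object *obj)
{
	/* skip objects that still have unsignaled fences attached */
	if (!dma_resv_test_signaled(obj->resv, dma_resv_usage_rw(true)))
		return false;

	foo_gem_release_pages(obj);	/* hypothetical: drop backing pages */
	drm_gem_lru_remove(obj);	/* satisfy the "moved out of this LRU" rule */

	return true;
}

static unsigned long
foo_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
{
	struct foo_drm_private *priv =
		container_of(shrinker, struct foo_drm_private, shrinker);

	/* lru.count is maintained by the helpers, so no lock is needed here */
	return priv->idle.count;
}

static unsigned long
foo_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
{
	struct foo_drm_private *priv =
		container_of(shrinker, struct foo_drm_private, shrinker);
	unsigned long freed;

	freed = drm_gem_lru_scan(&priv->idle, sc->nr_to_scan, foo_gem_shrink);

	return freed ? freed : SHRINK_STOP;
}

Sharing one mutex across all of a driver's LRUs is what allows count_objects to read lru.count locklessly and keeps drm_gem_lru_move_tail() a single lock/list operation even when an object hops between LRUs, which is the design choice the helper's kerneldoc and patch 12's conversion rely on.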