From patchwork Sun Oct 4 19:21:33 2020
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 11815883
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: Rob Clark, Sean Paul, David Airlie, Daniel Vetter,
 linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU),
 freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU),
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH 01/14] drm/msm: Use correct drm_gem_object_put() in fail case
Date: Sun, 4 Oct 2020 12:21:33 -0700
Message-Id: <20201004192152.3298573-2-robdclark@gmail.com>
In-Reply-To: <20201004192152.3298573-1-robdclark@gmail.com>
References: <20201004192152.3298573-1-robdclark@gmail.com>

From: Rob Clark

We only want to use the _unlocked() variant in the unlocked case.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 14e14caf90f9..a870b3ad129d 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -1115,7 +1115,11 @@ static struct drm_gem_object *_msm_gem_new(struct drm_device *dev,
 	return obj;
 
 fail:
-	drm_gem_object_put(obj);
+	if (struct_mutex_locked) {
+		drm_gem_object_put_locked(obj);
+	} else {
+		drm_gem_object_put(obj);
+	}
 	return ERR_PTR(ret);
 }
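For readers skimming the series: drm_gem_object_put_locked() is the variant for
callers that already hold dev->struct_mutex, while drm_gem_object_put() handles
its own locking. A minimal sketch of the calling convention, with a hypothetical
example_release() helper that is not part of this patch:

/* Illustrative only; example_release() is hypothetical. */
static void example_release(struct drm_device *dev, struct drm_gem_object *obj,
		bool struct_mutex_locked)
{
	if (struct_mutex_locked)
		drm_gem_object_put_locked(obj);	/* caller holds dev->struct_mutex */
	else
		drm_gem_object_put(obj);	/* no struct_mutex held */
}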
From patchwork Sun Oct 4 19:21:34 2020
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 11815897
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: Rob Clark, Sean Paul, David Airlie, Daniel Vetter,
 linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU),
 freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU),
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH 02/14] drm/msm: Drop chatty trace
Date: Sun, 4 Oct 2020 12:21:34 -0700
Message-Id: <20201004192152.3298573-3-robdclark@gmail.com>
In-Reply-To: <20201004192152.3298573-1-robdclark@gmail.com>
References: <20201004192152.3298573-1-robdclark@gmail.com>

From: Rob Clark

It is somewhat redundant with the gpu tracepoints, and anyway not useful
enough to justify spamming the log when debug traces are enabled.

Signed-off-by: Rob Clark
Reviewed-by: Jordan Crouse
---
 drivers/gpu/drm/msm/msm_gpu.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 55d16489d0f3..31fce3ac0cdc 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -535,7 +535,6 @@ static void recover_worker(struct work_struct *work)
 
 static void hangcheck_timer_reset(struct msm_gpu *gpu)
 {
-	DBG("%s", gpu->name);
 	mod_timer(&gpu->hangcheck_timer,
 			round_jiffies_up(jiffies + DRM_MSM_HANGCHECK_JIFFIES));
 }
From patchwork Sun Oct 4 19:21:35 2020
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 11815893
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: Rob Clark, Sean Paul, David Airlie, Daniel Vetter,
 linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU),
 freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU),
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH 03/14] drm/msm: Move update_fences()
Date: Sun, 4 Oct 2020 12:21:35 -0700
Message-Id: <20201004192152.3298573-4-robdclark@gmail.com>
In-Reply-To: <20201004192152.3298573-1-robdclark@gmail.com>
References: <20201004192152.3298573-1-robdclark@gmail.com>

From: Rob Clark

Small cleanup, update_fences() is used in the hangcheck path, but also
in the normal retire path.

Signed-off-by: Rob Clark
Reviewed-by: Jordan Crouse
---
 drivers/gpu/drm/msm/msm_gpu.c | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 31fce3ac0cdc..ca8c95b32c8b 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -265,6 +265,20 @@ int msm_gpu_hw_init(struct msm_gpu *gpu)
 	return ret;
 }
 
+static void update_fences(struct msm_gpu *gpu, struct msm_ringbuffer *ring,
+		uint32_t fence)
+{
+	struct msm_gem_submit *submit;
+
+	list_for_each_entry(submit, &ring->submits, node) {
+		if (submit->seqno > fence)
+			break;
+
+		msm_update_fence(submit->ring->fctx,
+			submit->fence->seqno);
+	}
+}
+
 #ifdef CONFIG_DEV_COREDUMP
 static ssize_t msm_gpu_devcoredump_read(char *buffer, loff_t offset,
 		size_t count, void *data, size_t datalen)
@@ -411,20 +425,6 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu,
  * Hangcheck detection for locked gpu:
  */
 
-static void update_fences(struct msm_gpu *gpu, struct msm_ringbuffer *ring,
-		uint32_t fence)
-{
-	struct msm_gem_submit *submit;
-
-	list_for_each_entry(submit, &ring->submits, node) {
-		if (submit->seqno > fence)
-			break;
-
-		msm_update_fence(submit->ring->fctx,
-			submit->fence->seqno);
-	}
-}
-
 static struct msm_gem_submit *
 find_submit(struct msm_ringbuffer *ring, uint32_t fence)
 {
From patchwork Sun Oct 4 19:21:36 2020
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 11815891
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: Rob Clark, Sean Paul, David Airlie, Daniel Vetter,
 linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU),
 freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU),
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH 04/14] drm/msm: Add priv->mm_lock to protect active/inactive lists
Date: Sun, 4 Oct 2020 12:21:36 -0700
Message-Id: <20201004192152.3298573-5-robdclark@gmail.com>
In-Reply-To: <20201004192152.3298573-1-robdclark@gmail.com>
References: <20201004192152.3298573-1-robdclark@gmail.com>

From: Rob Clark

Rather than relying on the big dev->struct_mutex hammer, introduce a
more specific lock for protecting the bo lists.

Signed-off-by: Rob Clark
Reviewed-by: Jordan Crouse
---
 drivers/gpu/drm/msm/msm_debugfs.c      |  7 +++++++
 drivers/gpu/drm/msm/msm_drv.c          |  1 +
 drivers/gpu/drm/msm/msm_drv.h          | 13 +++++++++++-
 drivers/gpu/drm/msm/msm_gem.c          | 28 +++++++++++++++-----------
 drivers/gpu/drm/msm/msm_gem_shrinker.c | 12 +++++++++++
 drivers/gpu/drm/msm/msm_gpu.h          |  5 ++++-
 6 files changed, 52 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_debugfs.c b/drivers/gpu/drm/msm/msm_debugfs.c
index ee2e270f464c..64afbed89821 100644
--- a/drivers/gpu/drm/msm/msm_debugfs.c
+++ b/drivers/gpu/drm/msm/msm_debugfs.c
@@ -112,6 +112,11 @@ static int msm_gem_show(struct drm_device *dev, struct seq_file *m)
 {
 	struct msm_drm_private *priv = dev->dev_private;
 	struct msm_gpu *gpu = priv->gpu;
+	int ret;
+
+	ret = mutex_lock_interruptible(&priv->mm_lock);
+	if (ret)
+		return ret;
 
 	if (gpu) {
 		seq_printf(m, "Active Objects (%s):\n", gpu->name);
@@ -121,6 +126,8 @@ static int msm_gem_show(struct drm_device *dev, struct seq_file *m)
 	seq_printf(m, "Inactive Objects:\n");
 	msm_gem_describe_objects(&priv->inactive_list, m);
 
+	mutex_unlock(&priv->mm_lock);
+
 	return 0;
 }
 
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 49685571dc0e..dc6efc089285 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -441,6 +441,7 @@ static int msm_drm_init(struct device *dev, struct drm_driver *drv)
 	init_llist_head(&priv->free_list);
 
 	INIT_LIST_HEAD(&priv->inactive_list);
+	mutex_init(&priv->mm_lock);
 
 	drm_mode_config_init(ddev);
 
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index b9dd8f8f4887..50978e5db376 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -174,8 +174,19 @@ struct msm_drm_private {
 	struct msm_rd_state *hangrd;   /* debugfs to dump hanging submits */
 	struct msm_perf_state *perf;
 
-	/* list of GEM objects: */
+	/*
+	 * List of inactive GEM objects.  Every bo is either in the inactive_list
+	 * or gpu->active_list (for the gpu it is active on[1])
+	 *
+	 * These lists are protected by mm_lock.  If struct_mutex is involved, it
+	 * should be acquired prior to mm_lock.  One should *not* hold mm_lock in
+	 * get_pages()/vmap()/etc paths, as they can trigger the shrinker.
+	 *
+	 * [1] if someone ever added support for the old 2d cores, there could be
+	 *     more than one gpu object
+	 */
 	struct list_head inactive_list;
+	struct mutex mm_lock;
 
 	/* worker for delayed free of objects: */
 	struct work_struct free_work;
 
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index a870b3ad129d..b04ed8b52f9d 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -746,13 +746,17 @@ int msm_gem_sync_object(struct drm_gem_object *obj,
 void msm_gem_active_get(struct drm_gem_object *obj, struct msm_gpu *gpu)
 {
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
-	WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex));
+	struct msm_drm_private *priv = obj->dev->dev_private;
+
+	might_sleep();
 	WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED);
 
 	if (!atomic_fetch_inc(&msm_obj->active_count)) {
+		mutex_lock(&priv->mm_lock);
 		msm_obj->gpu = gpu;
 		list_del_init(&msm_obj->mm_list);
 		list_add_tail(&msm_obj->mm_list, &gpu->active_list);
+		mutex_unlock(&priv->mm_lock);
 	}
 }
 
@@ -761,12 +765,14 @@ void msm_gem_active_put(struct drm_gem_object *obj)
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 	struct msm_drm_private *priv = obj->dev->dev_private;
 
-	WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex));
+	might_sleep();
 
 	if (!atomic_dec_return(&msm_obj->active_count)) {
+		mutex_lock(&priv->mm_lock);
 		msm_obj->gpu = NULL;
 		list_del_init(&msm_obj->mm_list);
 		list_add_tail(&msm_obj->mm_list, &priv->inactive_list);
+		mutex_unlock(&priv->mm_lock);
 	}
 }
 
@@ -921,13 +927,16 @@ static void free_object(struct msm_gem_object *msm_obj)
 {
 	struct drm_gem_object *obj = &msm_obj->base;
 	struct drm_device *dev = obj->dev;
+	struct msm_drm_private *priv = dev->dev_private;
 
 	WARN_ON(!mutex_is_locked(&dev->struct_mutex));
 
 	/* object should not be on active list: */
 	WARN_ON(is_active(msm_obj));
 
+	mutex_lock(&priv->mm_lock);
 	list_del(&msm_obj->mm_list);
+	mutex_unlock(&priv->mm_lock);
 
 	mutex_lock(&msm_obj->lock);
 
@@ -1103,14 +1112,9 @@ static struct drm_gem_object *_msm_gem_new(struct drm_device *dev,
 		mapping_set_gfp_mask(obj->filp->f_mapping, GFP_HIGHUSER);
 	}
 
-	if (struct_mutex_locked) {
-		WARN_ON(!mutex_is_locked(&dev->struct_mutex));
-		list_add_tail(&msm_obj->mm_list, &priv->inactive_list);
-	} else {
-		mutex_lock(&dev->struct_mutex);
-		list_add_tail(&msm_obj->mm_list, &priv->inactive_list);
-		mutex_unlock(&dev->struct_mutex);
-	}
+	mutex_lock(&priv->mm_lock);
+	list_add_tail(&msm_obj->mm_list, &priv->inactive_list);
+	mutex_unlock(&priv->mm_lock);
 
 	return obj;
 
@@ -1178,9 +1182,9 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev,
 
 	mutex_unlock(&msm_obj->lock);
 
-	mutex_lock(&dev->struct_mutex);
+	mutex_lock(&priv->mm_lock);
 	list_add_tail(&msm_obj->mm_list, &priv->inactive_list);
-	mutex_unlock(&dev->struct_mutex);
+	mutex_unlock(&priv->mm_lock);
 
 	return obj;
 
diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c
index 482576d7a39a..c41b84a3a484 100644
--- a/drivers/gpu/drm/msm/msm_gem_shrinker.c
+++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c
@@ -51,11 +51,15 @@ msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
 	if (!msm_gem_shrinker_lock(dev, &unlock))
 		return 0;
 
+	mutex_lock(&priv->mm_lock);
+
 	list_for_each_entry(msm_obj, &priv->inactive_list, mm_list) {
 		if (is_purgeable(msm_obj))
 			count += msm_obj->base.size >> PAGE_SHIFT;
 	}
 
+	mutex_unlock(&priv->mm_lock);
+
 	if (unlock)
 		mutex_unlock(&dev->struct_mutex);
 
@@ -75,6 +79,8 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
 	if (!msm_gem_shrinker_lock(dev, &unlock))
 		return SHRINK_STOP;
 
+	mutex_lock(&priv->mm_lock);
+
 	list_for_each_entry(msm_obj, &priv->inactive_list, mm_list) {
 		if (freed >= sc->nr_to_scan)
 			break;
@@ -84,6 +90,8 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
 		}
 	}
 
+	mutex_unlock(&priv->mm_lock);
+
 	if (unlock)
 		mutex_unlock(&dev->struct_mutex);
 
@@ -106,6 +114,8 @@ msm_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr)
 	if (!msm_gem_shrinker_lock(dev, &unlock))
 		return NOTIFY_DONE;
 
+	mutex_lock(&priv->mm_lock);
+
 	list_for_each_entry(msm_obj, &priv->inactive_list, mm_list) {
 		if (is_vunmapable(msm_obj)) {
 			msm_gem_vunmap(&msm_obj->base, OBJ_LOCK_SHRINKER);
@@ -118,6 +128,8 @@ msm_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr)
 		}
 	}
 
+	mutex_unlock(&priv->mm_lock);
+
 	if (unlock)
 		mutex_unlock(&dev->struct_mutex);
 
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index 6c9e1fdc1a76..1806e87600c0 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -94,7 +94,10 @@ struct msm_gpu {
 	struct msm_ringbuffer *rb[MSM_GPU_MAX_RINGS];
 	int nr_rings;
 
-	/* list of GEM active objects: */
+	/*
+	 * List of GEM active objects on this gpu.  Protected by
+	 * msm_drm_private::mm_lock
+	 */
 	struct list_head active_list;
 
 	/* does gpu need hw_init? */
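The comment added to msm_drv.h is the key design rule of this patch: mm_lock only
protects the two list heads, struct_mutex (where it is still involved) is taken
before mm_lock, and mm_lock must never be held across anything that can allocate,
since reclaim can re-enter the shrinker which also takes mm_lock. A hedged sketch
of that ordering, using a hypothetical example_move_to_inactive() helper that is
not part of this series:

/* Illustrative only; assumes the msm_drm_private fields added above. */
static void example_move_to_inactive(struct drm_device *dev,
		struct msm_gem_object *msm_obj)
{
	struct msm_drm_private *priv = dev->dev_private;

	/* If struct_mutex is involved at all, it is acquired first... */
	mutex_lock(&dev->struct_mutex);

	/* ...and mm_lock is held only around the list manipulation. */
	mutex_lock(&priv->mm_lock);
	list_del_init(&msm_obj->mm_list);
	list_add_tail(&msm_obj->mm_list, &priv->inactive_list);
	mutex_unlock(&priv->mm_lock);

	/*
	 * Anything that can allocate (get_pages()/vmap()/etc) happens outside
	 * mm_lock, because reclaim can call back into the shrinker, which
	 * takes mm_lock itself.
	 */
	mutex_unlock(&dev->struct_mutex);
}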
From patchwork Sun Oct 4 19:21:37 2020
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 11815887
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: Rob Clark, Sean Paul, David Airlie, Daniel Vetter, Jordan Crouse,
 Eric Anholt, Emil Velikov, AngeloGioacchino Del Regno, Ben Dooks,
 Jonathan Marek, Akhil P Oommen, Sharat Masetty,
 linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU),
 freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU),
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH 05/14] drm/msm: Document and rename preempt_lock
Date: Sun, 4 Oct 2020 12:21:37 -0700
Message-Id: <20201004192152.3298573-6-robdclark@gmail.com>
In-Reply-To: <20201004192152.3298573-1-robdclark@gmail.com>
References: <20201004192152.3298573-1-robdclark@gmail.com>

From: Rob Clark

Before adding another lock, give ring->lock a more descriptive name.

Signed-off-by: Rob Clark
Reviewed-by: Jordan Crouse
---
 drivers/gpu/drm/msm/adreno/a5xx_gpu.c     |  4 ++--
 drivers/gpu/drm/msm/adreno/a5xx_preempt.c | 12 ++++++------
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c     |  4 ++--
 drivers/gpu/drm/msm/msm_ringbuffer.c      |  2 +-
 drivers/gpu/drm/msm/msm_ringbuffer.h      |  7 ++++++-
 5 files changed, 17 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
index c941c8138f25..543437a2186e 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
@@ -36,7 +36,7 @@ void a5xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring,
 		OUT_RING(ring, upper_32_bits(shadowptr(a5xx_gpu, ring)));
 	}
 
-	spin_lock_irqsave(&ring->lock, flags);
+	spin_lock_irqsave(&ring->preempt_lock, flags);
 
 	/* Copy the shadow to the actual register */
 	ring->cur = ring->next;
@@ -44,7 +44,7 @@ void a5xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring,
 	/* Make sure to wrap wptr if we need to */
 	wptr = get_wptr(ring);
 
-	spin_unlock_irqrestore(&ring->lock, flags);
+	spin_unlock_irqrestore(&ring->preempt_lock, flags);
 
 	/* Make sure everything is posted before making a decision */
 	mb();
 
diff --git a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
index 7e04509c4e1f..183de1139eeb 100644
--- a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
+++ b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c
@@ -45,9 +45,9 @@ static inline void update_wptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
 	if (!ring)
 		return;
 
-	spin_lock_irqsave(&ring->lock, flags);
+	spin_lock_irqsave(&ring->preempt_lock, flags);
 	wptr = get_wptr(ring);
-	spin_unlock_irqrestore(&ring->lock, flags);
+	spin_unlock_irqrestore(&ring->preempt_lock, flags);
 
 	gpu_write(gpu, REG_A5XX_CP_RB_WPTR, wptr);
 }
@@ -62,9 +62,9 @@ static struct msm_ringbuffer *get_next_ring(struct msm_gpu *gpu)
 		bool empty;
 		struct msm_ringbuffer *ring = gpu->rb[i];
 
-		spin_lock_irqsave(&ring->lock, flags);
+		spin_lock_irqsave(&ring->preempt_lock, flags);
 		empty = (get_wptr(ring) == ring->memptrs->rptr);
-		spin_unlock_irqrestore(&ring->lock, flags);
+		spin_unlock_irqrestore(&ring->preempt_lock, flags);
 
 		if (!empty)
 			return ring;
@@ -132,9 +132,9 @@ void a5xx_preempt_trigger(struct msm_gpu *gpu)
 	}
 
 	/* Make sure the wptr doesn't update while we're in motion */
-	spin_lock_irqsave(&ring->lock, flags);
+	spin_lock_irqsave(&ring->preempt_lock, flags);
 	a5xx_gpu->preempt[ring->id]->wptr = get_wptr(ring);
-	spin_unlock_irqrestore(&ring->lock, flags);
+	spin_unlock_irqrestore(&ring->preempt_lock, flags);
 
 	/* Set the address of the incoming preemption record */
 	gpu_write64(gpu, REG_A5XX_CP_CONTEXT_SWITCH_RESTORE_ADDR_LO,
 
diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 8915882e4444..fc85f008d69d 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -65,7 +65,7 @@ static void a6xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
 		OUT_RING(ring, upper_32_bits(shadowptr(a6xx_gpu, ring)));
 	}
 
-	spin_lock_irqsave(&ring->lock, flags);
+	spin_lock_irqsave(&ring->preempt_lock, flags);
 
 	/* Copy the shadow to the actual register */
 	ring->cur = ring->next;
@@ -73,7 +73,7 @@ static void a6xx_flush(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
 	/* Make sure to wrap wptr if we need to */
 	wptr = get_wptr(ring);
 
-	spin_unlock_irqrestore(&ring->lock, flags);
+	spin_unlock_irqrestore(&ring->preempt_lock, flags);
 
 	/* Make sure everything is posted before making a decision */
 	mb();
 
diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c
index 935bf9b1d941..1b6958e908dc 100644
--- a/drivers/gpu/drm/msm/msm_ringbuffer.c
+++ b/drivers/gpu/drm/msm/msm_ringbuffer.c
@@ -46,7 +46,7 @@ struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id,
 	ring->memptrs_iova = memptrs_iova;
 
 	INIT_LIST_HEAD(&ring->submits);
-	spin_lock_init(&ring->lock);
+	spin_lock_init(&ring->preempt_lock);
 
 	snprintf(name, sizeof(name), "gpu-ring-%d", ring->id);
 
diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.h b/drivers/gpu/drm/msm/msm_ringbuffer.h
index 0987d6bf848c..4956d1bc5d0e 100644
--- a/drivers/gpu/drm/msm/msm_ringbuffer.h
+++ b/drivers/gpu/drm/msm/msm_ringbuffer.h
@@ -46,7 +46,12 @@ struct msm_ringbuffer {
 	struct msm_rbmemptrs *memptrs;
 	uint64_t memptrs_iova;
 	struct msm_fence_context *fctx;
-	spinlock_t lock;
+
+	/*
+	 * preempt_lock protects preemption and serializes wptr updates against
+	 * preemption.  Can be acquired from irq context.
+	 */
+	spinlock_t preempt_lock;
 };
 
 struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id,
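As the new comment notes, preempt_lock can be taken from irq context (the
preemption-complete interrupt), which is why the process-context users above use
spin_lock_irqsave() rather than a mutex. A small illustrative sketch of the two
sides, with hypothetical function names that are not part of this series:

/* Process context (e.g. the flush path): disable irqs while holding the lock. */
static void example_flush_side(struct msm_ringbuffer *ring)
{
	unsigned long flags;

	spin_lock_irqsave(&ring->preempt_lock, flags);
	/* ... copy ring->next to ring->cur, compute the new wptr ... */
	spin_unlock_irqrestore(&ring->preempt_lock, flags);
}

/* Hard-irq context (e.g. preempt-complete handler): irqs are already off. */
static void example_irq_side(struct msm_ringbuffer *ring)
{
	spin_lock(&ring->preempt_lock);
	/* ... record the wptr for the incoming ring ... */
	spin_unlock(&ring->preempt_lock);
}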
From patchwork Sun Oct 4 19:21:38 2020
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 11815925
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: Rob Clark, Sean Paul, David Airlie, Daniel Vetter,
 linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU),
 freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU),
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH 06/14] drm/msm: Protect ring->submits with its own lock
Date: Sun, 4 Oct 2020 12:21:38 -0700
Message-Id: <20201004192152.3298573-7-robdclark@gmail.com>
In-Reply-To: <20201004192152.3298573-1-robdclark@gmail.com>
References: <20201004192152.3298573-1-robdclark@gmail.com>

From: Rob Clark

One less place to rely on dev->struct_mutex.

Signed-off-by: Rob Clark
Reviewed-by: Jordan Crouse
---
 drivers/gpu/drm/msm/msm_gem_submit.c |  2 ++
 drivers/gpu/drm/msm/msm_gpu.c        | 37 ++++++++++++++++++++++------
 drivers/gpu/drm/msm/msm_ringbuffer.c |  1 +
 drivers/gpu/drm/msm/msm_ringbuffer.h |  6 +++++
 4 files changed, 39 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index aa5c60a7132d..e1d1f005b3d4 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -63,7 +63,9 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev,
 void msm_gem_submit_free(struct msm_gem_submit *submit)
 {
 	dma_fence_put(submit->fence);
+	spin_lock(&submit->ring->submit_lock);
 	list_del(&submit->node);
+	spin_unlock(&submit->ring->submit_lock);
 	put_pid(submit->pid);
 	msm_submitqueue_put(submit->queue);
 
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index ca8c95b32c8b..8d1e254f964a 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -270,6 +270,7 @@ static void update_fences(struct msm_gpu *gpu, struct msm_ringbuffer *ring,
 {
 	struct msm_gem_submit *submit;
 
+	spin_lock(&ring->submit_lock);
 	list_for_each_entry(submit, &ring->submits, node) {
 		if (submit->seqno > fence)
 			break;
@@ -277,6 +278,7 @@ static void update_fences(struct msm_gpu *gpu, struct msm_ringbuffer *ring,
 		msm_update_fence(submit->ring->fctx,
 			submit->fence->seqno);
 	}
+	spin_unlock(&ring->submit_lock);
 }
 
 #ifdef CONFIG_DEV_COREDUMP
@@ -430,11 +432,14 @@ find_submit(struct msm_ringbuffer *ring, uint32_t fence)
 {
 	struct msm_gem_submit *submit;
 
-	WARN_ON(!mutex_is_locked(&ring->gpu->dev->struct_mutex));
-
-	list_for_each_entry(submit, &ring->submits, node)
-		if (submit->seqno == fence)
+	spin_lock(&ring->submit_lock);
+	list_for_each_entry(submit, &ring->submits, node) {
+		if (submit->seqno == fence) {
+			spin_unlock(&ring->submit_lock);
 			return submit;
+		}
+	}
+	spin_unlock(&ring->submit_lock);
 
 	return NULL;
 }
@@ -523,8 +528,10 @@ static void recover_worker(struct work_struct *work)
 		for (i = 0; i < gpu->nr_rings; i++) {
 			struct msm_ringbuffer *ring = gpu->rb[i];
 
+			spin_lock(&ring->submit_lock);
 			list_for_each_entry(submit, &ring->submits, node)
 				gpu->funcs->submit(gpu, submit);
+			spin_unlock(&ring->submit_lock);
 		}
 	}
 
@@ -711,7 +718,6 @@ static void retire_submit(struct msm_gpu *gpu, struct msm_ringbuffer *ring,
 static void retire_submits(struct msm_gpu *gpu)
 {
 	struct drm_device *dev = gpu->dev;
-	struct msm_gem_submit *submit, *tmp;
 	int i;
 
 	WARN_ON(!mutex_is_locked(&dev->struct_mutex));
@@ -720,9 +726,24 @@ static void retire_submits(struct msm_gpu *gpu)
 	for (i = 0; i < gpu->nr_rings; i++) {
 		struct msm_ringbuffer *ring = gpu->rb[i];
 
-		list_for_each_entry_safe(submit, tmp, &ring->submits, node) {
-			if (dma_fence_is_signaled(submit->fence))
+		while (true) {
+			struct msm_gem_submit *submit = NULL;
+
+			spin_lock(&ring->submit_lock);
+			submit = list_first_entry_or_null(&ring->submits,
+					struct msm_gem_submit, node);
+			spin_unlock(&ring->submit_lock);
+
+			/*
+			 * If no submit, we are done.  If submit->fence hasn't
+			 * been signalled, then later submits are not signalled
+			 * either, so we are also done.
+			 */
+			if (submit && dma_fence_is_signaled(submit->fence)) {
 				retire_submit(gpu, ring, submit);
+			} else {
+				break;
+			}
 		}
 	}
 }
@@ -765,7 +786,9 @@ void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
 
 	submit->seqno = ++ring->seqno;
 
+	spin_lock(&ring->submit_lock);
 	list_add_tail(&submit->node, &ring->submits);
+	spin_unlock(&ring->submit_lock);
 
 	msm_rd_dump_submit(priv->rd, submit, NULL);
 
diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c
index 1b6958e908dc..4d2a2a4abef8 100644
--- a/drivers/gpu/drm/msm/msm_ringbuffer.c
+++ b/drivers/gpu/drm/msm/msm_ringbuffer.c
@@ -46,6 +46,7 @@ struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id,
 	ring->memptrs_iova = memptrs_iova;
 
 	INIT_LIST_HEAD(&ring->submits);
+	spin_lock_init(&ring->submit_lock);
 	spin_lock_init(&ring->preempt_lock);
 
 	snprintf(name, sizeof(name), "gpu-ring-%d", ring->id);
 
diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.h b/drivers/gpu/drm/msm/msm_ringbuffer.h
index 4956d1bc5d0e..fe55d4a1aa16 100644
--- a/drivers/gpu/drm/msm/msm_ringbuffer.h
+++ b/drivers/gpu/drm/msm/msm_ringbuffer.h
@@ -39,7 +39,13 @@ struct msm_ringbuffer {
 	int id;
 	struct drm_gem_object *bo;
 	uint32_t *start, *end, *cur, *next;
+
+	/*
+	 * List of in-flight submits on this ring.  Protected by submit_lock.
+	 */
 	struct list_head submits;
+	spinlock_t submit_lock;
+
 	uint64_t iova;
 	uint32_t seqno;
 	uint32_t hangcheck_fence;
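The reworked retire_submits() above is the subtle part of this change: it only
peeks at the head of ring->submits under submit_lock and drops the lock before
doing the actual retire work, since retiring can sleep and removes the node from
the list. A hedged sketch of that pattern, with a hypothetical example_retire_one()
standing in for the real retire path (illustrative only, not driver code):

static void example_retire_ring(struct msm_ringbuffer *ring)
{
	while (true) {
		struct msm_gem_submit *submit;

		/* Peek under the spinlock; never sleep while holding it. */
		spin_lock(&ring->submit_lock);
		submit = list_first_entry_or_null(&ring->submits,
				struct msm_gem_submit, node);
		spin_unlock(&ring->submit_lock);

		/* Submits complete in order, so an unsignalled head means done. */
		if (!submit || !dma_fence_is_signaled(submit->fence))
			break;

		/* May sleep, and drops submit from ring->submits as it is freed. */
		example_retire_one(ring, submit);
	}
}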
From patchwork Sun Oct 4 19:21:39 2020
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 11815931
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: Rob Clark, Sean Paul, David Airlie, Daniel Vetter,
 linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU),
 freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU),
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH 07/14] drm/msm: Refcount submits
Date: Sun, 4 Oct 2020 12:21:39 -0700
Message-Id: <20201004192152.3298573-8-robdclark@gmail.com>
In-Reply-To: <20201004192152.3298573-1-robdclark@gmail.com>
References: <20201004192152.3298573-1-robdclark@gmail.com>

From: Rob Clark

Before we remove dev->struct_mutex from the retire path, we have to deal
with the situation of a submit retiring before the submit ioctl returns.

To deal with this, ring->submits will hold a reference to the submit,
which is dropped when the submit is retired.  And the submit ioctl path
holds its own ref, which it drops when it is done with the submit.

Also, add to the submit list *after* getting/pinning bo's, to prevent
badness in case the completed fence is corrupted, and retire_worker
mistakenly believes the submit is done too early.

Signed-off-by: Rob Clark
Reviewed-by: Jordan Crouse
---
 drivers/gpu/drm/msm/msm_drv.h        |  1 -
 drivers/gpu/drm/msm/msm_gem.h        | 13 +++++++++++++
 drivers/gpu/drm/msm/msm_gem_submit.c | 12 ++++++------
 drivers/gpu/drm/msm/msm_gpu.c        | 21 ++++++++++++++++-----
 4 files changed, 35 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index 50978e5db376..535f9e718e2d 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -277,7 +277,6 @@ void msm_unregister_mmu(struct drm_device *dev, struct msm_mmu *mmu);
 
 bool msm_use_mmu(struct drm_device *dev);
 
-void msm_gem_submit_free(struct msm_gem_submit *submit);
 int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 		struct drm_file *file);
 
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index a1bf741b9b89..e05b1530aca6 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -136,6 +136,7 @@ void msm_gem_free_work(struct work_struct *work);
  * lasts for the duration of the submit-ioctl.
  */
 struct msm_gem_submit {
+	struct kref ref;
 	struct drm_device *dev;
 	struct msm_gpu *gpu;
 	struct msm_gem_address_space *aspace;
@@ -169,6 +170,18 @@ struct msm_gem_submit {
 	} bos[];
 };
 
+void __msm_gem_submit_destroy(struct kref *kref);
+
+static inline void msm_gem_submit_get(struct msm_gem_submit *submit)
+{
+	kref_get(&submit->ref);
+}
+
+static inline void msm_gem_submit_put(struct msm_gem_submit *submit)
+{
+	kref_put(&submit->ref, __msm_gem_submit_destroy);
+}
+
 /* helper to determine of a buffer in submit should be dumped, used for both
  * devcoredump and debugfs cmdstream dumping:
  */
 
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index e1d1f005b3d4..7d653bdc92dc 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -42,6 +42,7 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev,
 	if (!submit)
 		return NULL;
 
+	kref_init(&submit->ref);
 	submit->dev = dev;
 	submit->aspace = queue->ctx->aspace;
 	submit->gpu = gpu;
@@ -60,12 +61,12 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev,
 	return submit;
 }
 
-void msm_gem_submit_free(struct msm_gem_submit *submit)
+void __msm_gem_submit_destroy(struct kref *kref)
 {
+	struct msm_gem_submit *submit =
+			container_of(kref, struct msm_gem_submit, ref);
+
 	dma_fence_put(submit->fence);
-	spin_lock(&submit->ring->submit_lock);
-	list_del(&submit->node);
-	spin_unlock(&submit->ring->submit_lock);
 	put_pid(submit->pid);
 	msm_submitqueue_put(submit->queue);
 
@@ -805,8 +806,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 	submit_cleanup(submit);
 	if (has_ww_ticket)
 		ww_acquire_fini(&submit->ticket);
-	if (ret)
-		msm_gem_submit_free(submit);
+	msm_gem_submit_put(submit);
 out_unlock:
 	if (ret && (out_fence_fd >= 0))
 		put_unused_fd(out_fence_fd);
 
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 8d1e254f964a..fd3fc6f36ab1 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -712,7 +712,12 @@ static void retire_submit(struct msm_gpu *gpu, struct msm_ringbuffer *ring,
 
 	pm_runtime_mark_last_busy(&gpu->pdev->dev);
 	pm_runtime_put_autosuspend(&gpu->pdev->dev);
-	msm_gem_submit_free(submit);
+
+	spin_lock(&ring->submit_lock);
+	list_del(&submit->node);
+	spin_unlock(&ring->submit_lock);
+
+	msm_gem_submit_put(submit);
 }
 
 static void retire_submits(struct msm_gpu *gpu)
@@ -786,10 +791,6 @@ void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
 
 	submit->seqno = ++ring->seqno;
 
-	spin_lock(&ring->submit_lock);
-	list_add_tail(&submit->node, &ring->submits);
-	spin_unlock(&ring->submit_lock);
-
 	msm_rd_dump_submit(priv->rd, submit, NULL);
 
 	update_sw_cntrs(gpu);
@@ -816,6 +817,16 @@ void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
 		msm_gem_active_get(drm_obj, gpu);
 	}
 
+	/*
+	 * ring->submits holds a ref to the submit, to deal with the case
+	 * that a submit completes before msm_ioctl_gem_submit() returns.
+	 */
+	msm_gem_submit_get(submit);
+
+	spin_lock(&ring->submit_lock);
+	list_add_tail(&submit->node, &ring->submits);
+	spin_unlock(&ring->submit_lock);
+
 	gpu->funcs->submit(gpu, submit);
 
 	priv->lastctx = submit->queue->ctx;
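For readers less familiar with struct kref, a small self-contained sketch of the
reference pattern this patch adopts (hypothetical object and function names, not
the driver's code): every holder takes a reference, and the release callback runs
exactly once, when the last kref_put() drops the count to zero.

#include <linux/kref.h>
#include <linux/slab.h>

struct example_submit {
	struct kref ref;
	/* ... payload ... */
};

static void example_submit_release(struct kref *kref)
{
	struct example_submit *s = container_of(kref, struct example_submit, ref);

	kfree(s);	/* runs once, when the last reference is dropped */
}

static struct example_submit *example_submit_create(void)
{
	struct example_submit *s = kzalloc(sizeof(*s), GFP_KERNEL);

	if (s)
		kref_init(&s->ref);	/* refcount starts at 1 (the creator's ref) */
	return s;
}

/* Each holder (e.g. the ring's submit list, the ioctl path) does: */
static void example_take(struct example_submit *s)
{
	kref_get(&s->ref);
}

static void example_drop(struct example_submit *s)
{
	kref_put(&s->ref, example_submit_release);
}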
From patchwork Sun Oct 4 19:21:40 2020
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 11815899
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: Rob Clark, Sean Paul, David Airlie, Daniel Vetter,
 linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU),
 freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU),
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH 08/14] drm/msm: Remove obj->gpu
Date: Sun, 4 Oct 2020 12:21:40 -0700
Message-Id: <20201004192152.3298573-9-robdclark@gmail.com>
In-Reply-To: <20201004192152.3298573-1-robdclark@gmail.com>
References: <20201004192152.3298573-1-robdclark@gmail.com>

From: Rob Clark

It cannot be atomically updated with obj->active_count, and the only
purpose is a useless WARN_ON() (which becomes a buggy WARN_ON() once
retire_submits() is not serialized with incoming submits via
struct_mutex).

Signed-off-by: Rob Clark
Reviewed-by: Jordan Crouse
---
 drivers/gpu/drm/msm/msm_gem.c | 2 --
 drivers/gpu/drm/msm/msm_gem.h | 1 -
 drivers/gpu/drm/msm/msm_gpu.c | 5 -----
 3 files changed, 8 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index b04ed8b52f9d..c52a3969e60b 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -753,7 +753,6 @@ void msm_gem_active_get(struct drm_gem_object *obj, struct msm_gpu *gpu)
 
 	if (!atomic_fetch_inc(&msm_obj->active_count)) {
 		mutex_lock(&priv->mm_lock);
-		msm_obj->gpu = gpu;
 		list_del_init(&msm_obj->mm_list);
 		list_add_tail(&msm_obj->mm_list, &gpu->active_list);
 		mutex_unlock(&priv->mm_lock);
@@ -769,7 +768,6 @@ void msm_gem_active_put(struct drm_gem_object *obj)
 
 	if (!atomic_dec_return(&msm_obj->active_count)) {
 		mutex_lock(&priv->mm_lock);
-		msm_obj->gpu = NULL;
 		list_del_init(&msm_obj->mm_list);
 		list_add_tail(&msm_obj->mm_list, &priv->inactive_list);
 		mutex_unlock(&priv->mm_lock);
 
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index e05b1530aca6..61147bd96b06 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -64,7 +64,6 @@ struct msm_gem_object {
 	 *
 	 */
 	struct list_head mm_list;
-	struct msm_gpu *gpu;     /* non-null if active */
 
 	/* Transiently in the process of submit ioctl, objects associated
 	 * with the submit are on submit->bo_list.. this only lasts for
 
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index fd3fc6f36ab1..c9ff19a75169 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -800,11 +800,6 @@ void msm_gpu_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
 		struct drm_gem_object *drm_obj = &msm_obj->base;
 		uint64_t iova;
 
-		/* can't happen yet.. but when we add 2d support we'll have
-		 * to deal w/ cross-ring synchronization:
-		 */
-		WARN_ON(is_active(msm_obj) && (msm_obj->gpu != gpu));
-
 		/* submit takes a reference to the bo and iova until retired: */
 		drm_gem_object_get(&msm_obj->base);
 		msm_gem_get_and_pin_iova(&msm_obj->base, submit->aspace, &iova);
[73.25.156.94]) by smtp.gmail.com with ESMTPSA id ie13sm8103315pjb.5.2020.10.04.12.21.27 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Oct 2020 12:21:27 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Rob Clark , Rob Clark , Sean Paul , David Airlie , Daniel Vetter , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH 09/14] drm/msm: Drop struct_mutex from the retire path Date: Sun, 4 Oct 2020 12:21:41 -0700 Message-Id: <20201004192152.3298573-10-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201004192152.3298573-1-robdclark@gmail.com> References: <20201004192152.3298573-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Now that we are not relying on dev->struct_mutex to protect the ring->submits lists, drop the struct_mutex lock. Signed-off-by: Rob Clark Reviewed-by: Jordan Crouse --- drivers/gpu/drm/msm/msm_gpu.c | 8 +------- 1 file changed, 1 insertion(+), 7 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index c9ff19a75169..5e351d1c00e9 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -707,7 +707,7 @@ static void retire_submit(struct msm_gpu *gpu, struct msm_ringbuffer *ring, msm_gem_active_put(&msm_obj->base); msm_gem_unpin_iova(&msm_obj->base, submit->aspace); - drm_gem_object_put_locked(&msm_obj->base); + drm_gem_object_put(&msm_obj->base); } pm_runtime_mark_last_busy(&gpu->pdev->dev); @@ -722,11 +722,8 @@ static void retire_submit(struct msm_gpu *gpu, struct msm_ringbuffer *ring, static void retire_submits(struct msm_gpu *gpu) { - struct drm_device *dev = gpu->dev; int i; - WARN_ON(!mutex_is_locked(&dev->struct_mutex)); - /* Retire the commits starting with highest priority */ for (i = 0; i < gpu->nr_rings; i++) { struct msm_ringbuffer *ring = gpu->rb[i]; @@ -756,15 +753,12 @@ static void retire_submits(struct msm_gpu *gpu) static void retire_worker(struct work_struct *work) { struct msm_gpu *gpu = container_of(work, struct msm_gpu, retire_work); - struct drm_device *dev = gpu->dev; int i; for (i = 0; i < gpu->nr_rings; i++) update_fences(gpu, gpu->rb[i], gpu->rb[i]->memptrs->fence); - mutex_lock(&dev->struct_mutex); retire_submits(gpu); - mutex_unlock(&dev->struct_mutex); } /* call from irq handler to schedule work to retire bo's */ From patchwork Sun Oct 4 19:21:42 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 11815921 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id BA55A112E for ; Sun, 4 Oct 2020 19:21:49 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 9EF9620637 for ; Sun, 4 Oct 2020 19:21:49 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="nAEMC704" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726688AbgJDTVc (ORCPT ); Sun, 4 Oct 2020 15:21:32 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60504 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726683AbgJDTVc (ORCPT 
); Sun, 4 Oct 2020 15:21:32 -0400 Received: from mail-pf1-x444.google.com (mail-pf1-x444.google.com [IPv6:2607:f8b0:4864:20::444]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E003DC0613CF; Sun, 4 Oct 2020 12:21:30 -0700 (PDT) Received: by mail-pf1-x444.google.com with SMTP id a200so260618pfa.10; Sun, 04 Oct 2020 12:21:30 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=xYPy68YgElKqp/ZhwxT4I2bIBmneh2iMyGRUpgP2Kzg=; b=nAEMC704Ssavz0xGEbyrm9K3jWYyc1bgQ8dkA/ZspvJj4to7+Qa/6sZC/5TF/Opmem a9nqKcM2G9xVuXLm8cbAQG8SzHpUnGGjNwB+ghB64XjnG1eTbQCMxqStvSZZJO5eeZUk LydYUfqZEQVU+7FyjFh6dMVBLk/nY2VDGRCxfzab2hTnkBZWzh5zyq/aNdHVAd66dnZz fGRG/HstoIwx+oeBbasuKNtclyI8BMIKy1MpRe/vSqJvMBGH2YVDRXKmyebjf26dv3Q4 zvYdMozFYF1+7VnS2O4Lz+nu29Ux+JnkNAYgoOtf87f0lNPohyynlHzkoXl3s8Xb32w/ rE4g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=xYPy68YgElKqp/ZhwxT4I2bIBmneh2iMyGRUpgP2Kzg=; b=Y8ahQbEY4yewLJOcml4VHz+fuMNy9FFrs8MQqJSVlX0ulcXFpS0RCvHROUxyk47lA9 evply72JsESXmQViMl7doIw9gokMcYvIJgFxkTZO7nupnn1cJeVyZQKiZGTsxxv9PyVE 5+bmSIPNKB6WpI4Y4WgUYutLcYpeUwuq41lyxgkod+wyN7VUGhcPXVfD0YRy7n2fpfJm AcLBuJMIHLko2Fe27UdQseu6ck1fj7LWJigqTYf5Khmvn0s5ATm78RFGi3zrmLvvYL9Y zYb8r528Kxiue+4I9hpkBZphIx5YNUo+8mk9zAw6gVG09e5n1qbcDgfI05mtgTphah8w uw0g== X-Gm-Message-State: AOAM533arq1S/9FD9775ZbQaT4h51ONBI7WKgcEX0uegjMcniPO461XF bf1oDvP3ciMvP55E/KLVsuk= X-Google-Smtp-Source: ABdhPJz9b6MrGzYddugbdJh3IUMYkbMj+m/4vj4/TIpTfnWuCI/9BDLXB513Fc0erhJoaG/QFDw4ng== X-Received: by 2002:a63:2022:: with SMTP id g34mr10627104pgg.378.1601839290364; Sun, 04 Oct 2020 12:21:30 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id z63sm9337766pfz.187.2020.10.04.12.21.29 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Oct 2020 12:21:29 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Rob Clark , Rob Clark , Sean Paul , David Airlie , Daniel Vetter , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH 10/14] drm/msm: Drop struct_mutex in free_object() path Date: Sun, 4 Oct 2020 12:21:42 -0700 Message-Id: <20201004192152.3298573-11-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201004192152.3298573-1-robdclark@gmail.com> References: <20201004192152.3298573-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Now that active_list/inactive_list is protected by mm_lock, we no longer need dev->struct_mutex in the free_object() path. 
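To make the new rule concrete: list membership (mm_list on either the inactive_list or a gpu's active_list) is serialized only by priv->mm_lock, so freeing an object never has to take dev->struct_mutex. A minimal sketch of that pattern follows; it is illustrative rather than the driver's literal free path, and the example_ name is made up:

/* Sketch: freeing an idle GEM object under mm_lock only (illustrative) */
static void example_free_idle_object(struct msm_gem_object *msm_obj)
{
        struct msm_drm_private *priv = msm_obj->base.dev->dev_private;

        /* the object must already be idle; active objects hold a reference */
        WARN_ON(is_active(msm_obj));

        mutex_lock(&priv->mm_lock);
        list_del(&msm_obj->mm_list);    /* drop from the inactive_list */
        mutex_unlock(&priv->mm_lock);

        /* ...then release iova, pages/vaddr and free msm_obj as before... */
}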
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 8 -------- 1 file changed, 8 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index c52a3969e60b..126d92fd21cd 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -927,8 +927,6 @@ static void free_object(struct msm_gem_object *msm_obj) struct drm_device *dev = obj->dev; struct msm_drm_private *priv = dev->dev_private; - WARN_ON(!mutex_is_locked(&dev->struct_mutex)); - /* object should not be on active list: */ WARN_ON(is_active(msm_obj)); @@ -965,20 +963,14 @@ void msm_gem_free_work(struct work_struct *work) { struct msm_drm_private *priv = container_of(work, struct msm_drm_private, free_work); - struct drm_device *dev = priv->dev; struct llist_node *freed; struct msm_gem_object *msm_obj, *next; while ((freed = llist_del_all(&priv->free_list))) { - - mutex_lock(&dev->struct_mutex); - llist_for_each_entry_safe(msm_obj, next, freed, freed) free_object(msm_obj); - mutex_unlock(&dev->struct_mutex); - if (need_resched()) break; } From patchwork Sun Oct 4 19:21:43 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 11815923 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 24A6992C for ; Sun, 4 Oct 2020 19:21:53 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 041EA20637 for ; Sun, 4 Oct 2020 19:21:53 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="cQPXQz/V" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726713AbgJDTVt (ORCPT ); Sun, 4 Oct 2020 15:21:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60508 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726693AbgJDTVc (ORCPT ); Sun, 4 Oct 2020 15:21:32 -0400 Received: from mail-pf1-x442.google.com (mail-pf1-x442.google.com [IPv6:2607:f8b0:4864:20::442]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D6299C0613CE; Sun, 4 Oct 2020 12:21:32 -0700 (PDT) Received: by mail-pf1-x442.google.com with SMTP id 144so5091998pfb.4; Sun, 04 Oct 2020 12:21:32 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=AJNtfpRr6GjvvhCQZLjZdRuI9RXAVVAdif0FCISlc84=; b=cQPXQz/V1xLNwvkyD+rrh90BAeDPYIcM1txf7bY4IhIAOz7Z5zf3vyzQ5CtXaNc84k i+nnHwBl8hzwPK0wphlGSem1j/4yrGzaaRFZ+qSeZ5XFpKbfqmyV0x3WfD40o+9obrVv Jj1DGx6gcv120q7Rl90cLkr79FzLqCNducrChVH9IOrnSGjCuAypkc6s9yt+Y0lwNLVn 3mxSRzBEr13nvcG3QZoKsjTvDLn5Lj91MgiZ9WlkDMmr4XzYW6XEaFuwAU9mBzem90bX U04BsFNUvVHcNGobfOvhbzTCuoj4a49HViR3z3TJQPFZYbTVC9PqvLKzZi/Jin1okV+w ztYg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=AJNtfpRr6GjvvhCQZLjZdRuI9RXAVVAdif0FCISlc84=; b=SIQxkDfLCy4wv86ZoRi1/jrdhzWuBXRffyZO0EY/CDYOGQVUVRS9soqWLWnf5f639w 7YgEWbPQiOnn0z2o4cUrqNjhzXxt6J3mJtSsqfZSd+f802hjB6FPpRdRjMbqZ3Y/FTXX EP3gVjHX8yTskCZ0w7GaOmlaVNxBGc7oA4Om8MWhSC5Bn+1PTNA5H8VEe7LC7LujHAei X8BtvxoClQdh6KMpqBfdnvXfE07eQNEWOnoVGfldRBdbDh9xW98i3WGDK01nmSbNRmUe 
EvXYfIvPPwfMwHXijNiDtT0DoGWJPb+AeHU7c06zFqqLWHt2Dkd+TvBtqd6Fqo5cFdj/ 574A== X-Gm-Message-State: AOAM531btAl6R61uq3P8FZQJ9S1+gzhhlO/odwjg9j/vccSP6d+u0ePZ uXRWnqsLi1keYlxj1whnLUY= X-Google-Smtp-Source: ABdhPJwaiIMf4cs5Smkw+nl4kzB43RwYY0830r9RKkDpRdQT6Dggw3kDBvUj6CKba91twMAD/RWdmA== X-Received: by 2002:a63:b21e:: with SMTP id x30mr3994962pge.396.1601839292322; Sun, 04 Oct 2020 12:21:32 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id me14sm1596859pjb.23.2020.10.04.12.21.31 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Oct 2020 12:21:31 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Rob Clark , Rob Clark , Sean Paul , David Airlie , Daniel Vetter , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH 11/14] drm/msm: remove msm_gem_free_work Date: Sun, 4 Oct 2020 12:21:43 -0700 Message-Id: <20201004192152.3298573-12-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201004192152.3298573-1-robdclark@gmail.com> References: <20201004192152.3298573-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Now that we don't need struct_mutex in the free path, we can get rid of the asynchronous free altogether. Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_drv.c | 3 --- drivers/gpu/drm/msm/msm_drv.h | 5 ----- drivers/gpu/drm/msm/msm_gem.c | 27 --------------------------- drivers/gpu/drm/msm/msm_gem.h | 1 - 4 files changed, 36 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index dc6efc089285..e766c1f45045 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -437,9 +437,6 @@ static int msm_drm_init(struct device *dev, struct drm_driver *drv) priv->wq = alloc_ordered_workqueue("msm", 0); - INIT_WORK(&priv->free_work, msm_gem_free_work); - init_llist_head(&priv->free_list); - INIT_LIST_HEAD(&priv->inactive_list); mutex_init(&priv->mm_lock); diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index 535f9e718e2d..96f8009e247c 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -188,10 +188,6 @@ struct msm_drm_private { struct list_head inactive_list; struct mutex mm_lock; - /* worker for delayed free of objects: */ - struct work_struct free_work; - struct llist_head free_list; - struct workqueue_struct *wq; unsigned int num_planes; @@ -340,7 +336,6 @@ void msm_gem_kernel_put(struct drm_gem_object *bo, struct msm_gem_address_space *aspace, bool locked); struct drm_gem_object *msm_gem_import(struct drm_device *dev, struct dma_buf *dmabuf, struct sg_table *sgt); -void msm_gem_free_work(struct work_struct *work); __printf(2, 3) void msm_gem_object_set_name(struct drm_gem_object *bo, const char *fmt, ...); diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 126d92fd21cd..5e75d644ce41 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -917,16 +917,6 @@ void msm_gem_free_object(struct drm_gem_object *obj) struct drm_device *dev = obj->dev; struct msm_drm_private *priv = dev->dev_private; - if (llist_add(&msm_obj->freed, &priv->free_list)) - queue_work(priv->wq, &priv->free_work); -} - -static void free_object(struct msm_gem_object *msm_obj) -{ - struct drm_gem_object *obj = 
&msm_obj->base; - struct drm_device *dev = obj->dev; - struct msm_drm_private *priv = dev->dev_private; - /* object should not be on active list: */ WARN_ON(is_active(msm_obj)); @@ -959,23 +949,6 @@ static void free_object(struct msm_gem_object *msm_obj) kfree(msm_obj); } -void msm_gem_free_work(struct work_struct *work) -{ - struct msm_drm_private *priv = - container_of(work, struct msm_drm_private, free_work); - struct llist_node *freed; - struct msm_gem_object *msm_obj, *next; - - while ((freed = llist_del_all(&priv->free_list))) { - llist_for_each_entry_safe(msm_obj, next, - freed, freed) - free_object(msm_obj); - - if (need_resched()) - break; - } -} - /* convenience method to construct a GEM buffer object, and userspace handle */ int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file, uint32_t size, uint32_t flags, uint32_t *handle, diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 61147bd96b06..e98a8004813b 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -127,7 +127,6 @@ enum msm_gem_lock { void msm_gem_purge(struct drm_gem_object *obj, enum msm_gem_lock subclass); void msm_gem_vunmap(struct drm_gem_object *obj, enum msm_gem_lock subclass); -void msm_gem_free_work(struct work_struct *work); /* Created per submit-ioctl, to track bo's and cmdstream bufs, etc, * associated with the cmdstream submission for synchronization (and From patchwork Sun Oct 4 19:21:44 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 11815919 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0997792C for ; Sun, 4 Oct 2020 19:21:48 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id DC16020637 for ; Sun, 4 Oct 2020 19:21:47 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="ofZAEfvf" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726754AbgJDTVo (ORCPT ); Sun, 4 Oct 2020 15:21:44 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60514 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726713AbgJDTVf (ORCPT ); Sun, 4 Oct 2020 15:21:35 -0400 Received: from mail-pg1-x542.google.com (mail-pg1-x542.google.com [IPv6:2607:f8b0:4864:20::542]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3DE7EC0613CE; Sun, 4 Oct 2020 12:21:35 -0700 (PDT) Received: by mail-pg1-x542.google.com with SMTP id m34so4330936pgl.9; Sun, 04 Oct 2020 12:21:35 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=bvgK4u82bAyvmLcb1xV4zW9ZRosaPzo7Ro9rB88uZpw=; b=ofZAEfvfh8t1vznoy1eMEzfJNtT51uN1eqHEdUNZZF2XCyOISMqnMSt9iGAymBkYJ6 OReDh6nmpyaSeoep3kHHJHmvTuVVu/cT4A/Ig81/FTn9aY78ns7Af8iuMfQkvopF0Wsg dYzRG8fNOWbOQQoURIBzhVdMhp+ur3eGpbwlqgIRW1EesJz90TDCM8cwpoZ/KKyFmxPi DJnTQ+qRG0DpWj5DXS1YLfXmWxGMyB2JDh0G9ca3MCooBUBbLA5JcpQKp88956t44m6a M7xuso8lV3k2U61P0RypNKvWj7ihmkcIB/dLvASEkMl/axDNPx4heJF53YNzhyygy/mk qcvA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to 
:references:mime-version:content-transfer-encoding; bh=bvgK4u82bAyvmLcb1xV4zW9ZRosaPzo7Ro9rB88uZpw=; b=De7f6ox4fLbqmsBLlZE6c0/oyKgYfBIeqBxupAw9be5pMp+30fBzAi3qguaLSKdtBE ltDRmqbX6QlayUJGssvp+35byj04Z9P716E+rvP08XaGBkJVcQcfnCACkObPLVOsLR1X ZY8pGb/BEFZBk285sHvuBFuEmdf01Xxo4CuAVzjp3rBZEG7SCYlaokKIc19fwvEMPZOc v7VmXj9zCfLNZtn+Yzjh65jSby62qGbCUdbLuJ3Wu9p5pj+k6vCZiI2pHkKF8Zhzae6W gsy7JH7XQ3z0XEPpEtSTIbuPyHwAqVwdVr9SwL4VOR7dex4UXNBHmGWytbDolL/ZU5yR 0mEQ== X-Gm-Message-State: AOAM533hV0NZw1rararqL0Ey1jokqU0iC9lvRuBbRXB4Dgo2fAHXjEaU JQLSYi5aNf9sAOD5b9LXago= X-Google-Smtp-Source: ABdhPJz09/i+2Wvqqf8XiOcUyAb17zQupk0PEwwUc6Pj2OeVvWLdUTNDay5FX1WDau4Gd0Ws7QGjBA== X-Received: by 2002:a63:d242:: with SMTP id t2mr11411995pgi.47.1601839294703; Sun, 04 Oct 2020 12:21:34 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id ie13sm8103444pjb.5.2020.10.04.12.21.33 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Oct 2020 12:21:33 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Rob Clark , Rob Clark , Sean Paul , David Airlie , Daniel Vetter , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list), linux-media@vger.kernel.org (open list:DMA BUFFER SHARING FRAMEWORK), linaro-mm-sig@lists.linaro.org (moderated list:DMA BUFFER SHARING FRAMEWORK) Subject: [PATCH 12/14] drm/msm: drop struct_mutex in madvise path Date: Sun, 4 Oct 2020 12:21:44 -0700 Message-Id: <20201004192152.3298573-13-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201004192152.3298573-1-robdclark@gmail.com> References: <20201004192152.3298573-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark The obj->lock is sufficient for what we need. This *does* have the implication that userspace can try to shoot themselves in the foot by racing madvise(DONTNEED) with submit. But the result will be about the same if they did madvise(DONTNEED) before the submit ioctl, ie. they might not get what they want if they race with the shrinker. But iova fault handling is robust enough, and userspace is only shooting its own foot.
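Put differently, the ioctl path now takes no device-global lock at all; the per-object lock acquired inside msm_gem_madvise() is enough. A simplified sketch of the resulting flow, with error handling trimmed and the example_ helper name used purely for illustration:

/* Sketch: madvise without struct_mutex (illustrative, not the exact ioctl body) */
static int example_gem_madvise(struct drm_file *file, u32 handle, u32 madv)
{
        struct drm_gem_object *obj;
        int retained;

        obj = drm_gem_object_lookup(file, handle);      /* no struct_mutex held */
        if (!obj)
                return -ENOENT;

        /* msm_gem_madvise() takes msm_obj->lock internally and reports
         * whether the backing pages are still retained; the real ioctl
         * copies that result back to userspace
         */
        retained = msm_gem_madvise(obj, madv);

        drm_gem_object_put(obj);        /* the unlocked put is now the correct one */

        return retained;
}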
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_drv.c | 11 ++------ drivers/gpu/drm/msm/msm_gem.c | 6 ++-- drivers/gpu/drm/msm/msm_gem.h | 38 ++++++++++++++++++-------- drivers/gpu/drm/msm/msm_gem_shrinker.c | 4 +-- 4 files changed, 32 insertions(+), 27 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index e766c1f45045..d2488816ce48 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -906,14 +906,9 @@ static int msm_ioctl_gem_madvise(struct drm_device *dev, void *data, return -EINVAL; } - ret = mutex_lock_interruptible(&dev->struct_mutex); - if (ret) - return ret; - obj = drm_gem_object_lookup(file, args->handle); if (!obj) { - ret = -ENOENT; - goto unlock; + return -ENOENT; } ret = msm_gem_madvise(obj, args->madv); @@ -922,10 +917,8 @@ static int msm_ioctl_gem_madvise(struct drm_device *dev, void *data, ret = 0; } - drm_gem_object_put_locked(obj); + drm_gem_object_put(obj); -unlock: - mutex_unlock(&dev->struct_mutex); return ret; } diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 5e75d644ce41..9cdac4f7228c 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -639,8 +639,6 @@ int msm_gem_madvise(struct drm_gem_object *obj, unsigned madv) mutex_lock(&msm_obj->lock); - WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex)); - if (msm_obj->madv != __MSM_MADV_PURGED) msm_obj->madv = madv; @@ -657,7 +655,7 @@ void msm_gem_purge(struct drm_gem_object *obj, enum msm_gem_lock subclass) struct msm_gem_object *msm_obj = to_msm_bo(obj); WARN_ON(!mutex_is_locked(&dev->struct_mutex)); - WARN_ON(!is_purgeable(msm_obj)); + WARN_ON(!is_purgeable(msm_obj, subclass)); WARN_ON(obj->import_attach); mutex_lock_nested(&msm_obj->lock, subclass); @@ -749,7 +747,7 @@ void msm_gem_active_get(struct drm_gem_object *obj, struct msm_gpu *gpu) struct msm_drm_private *priv = obj->dev->dev_private; might_sleep(); - WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED); + WARN_ON(msm_gem_madv(msm_obj, OBJ_LOCK_NORMAL) != MSM_MADV_WILLNEED); if (!atomic_fetch_inc(&msm_obj->active_count)) { mutex_lock(&priv->mm_lock); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index e98a8004813b..bb8aa6b1b254 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -97,18 +97,6 @@ static inline bool is_active(struct msm_gem_object *msm_obj) return atomic_read(&msm_obj->active_count); } -static inline bool is_purgeable(struct msm_gem_object *msm_obj) -{ - WARN_ON(!mutex_is_locked(&msm_obj->base.dev->struct_mutex)); - return (msm_obj->madv == MSM_MADV_DONTNEED) && msm_obj->sgt && - !msm_obj->base.dma_buf && !msm_obj->base.import_attach; -} - -static inline bool is_vunmapable(struct msm_gem_object *msm_obj) -{ - return (msm_obj->vmap_count == 0) && msm_obj->vaddr; -} - /* The shrinker can be triggered while we hold objA->lock, and need * to grab objB->lock to purge it. Lockdep just sees these as a single * class of lock, so we use subclasses to teach it the difference. 
@@ -125,6 +113,32 @@ enum msm_gem_lock { OBJ_LOCK_SHRINKER, }; +/* Use this helper to read msm_obj->madv when msm_obj->lock not held: */ +static inline unsigned +msm_gem_madv(struct msm_gem_object *msm_obj, enum msm_gem_lock subclass) +{ + unsigned madv; + + mutex_lock_nested(&msm_obj->lock, subclass); + madv = msm_obj->madv; + mutex_unlock(&msm_obj->lock); + + return madv; +} + +static inline bool +is_purgeable(struct msm_gem_object *msm_obj, enum msm_gem_lock subclass) +{ + return (msm_gem_madv(msm_obj, subclass) == MSM_MADV_DONTNEED) && + msm_obj->sgt && !msm_obj->base.dma_buf && + !msm_obj->base.import_attach; +} + +static inline bool is_vunmapable(struct msm_gem_object *msm_obj) +{ + return (msm_obj->vmap_count == 0) && msm_obj->vaddr; +} + void msm_gem_purge(struct drm_gem_object *obj, enum msm_gem_lock subclass); void msm_gem_vunmap(struct drm_gem_object *obj, enum msm_gem_lock subclass); diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c index c41b84a3a484..39a1b5327267 100644 --- a/drivers/gpu/drm/msm/msm_gem_shrinker.c +++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c @@ -54,7 +54,7 @@ msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc) mutex_lock(&priv->mm_lock); list_for_each_entry(msm_obj, &priv->inactive_list, mm_list) { - if (is_purgeable(msm_obj)) + if (is_purgeable(msm_obj, OBJ_LOCK_SHRINKER)) count += msm_obj->base.size >> PAGE_SHIFT; } @@ -84,7 +84,7 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc) list_for_each_entry(msm_obj, &priv->inactive_list, mm_list) { if (freed >= sc->nr_to_scan) break; - if (is_purgeable(msm_obj)) { + if (is_purgeable(msm_obj, OBJ_LOCK_SHRINKER)) { msm_gem_purge(&msm_obj->base, OBJ_LOCK_SHRINKER); freed += msm_obj->base.size >> PAGE_SHIFT; } From patchwork Sun Oct 4 19:21:45 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 11815927 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id BC38392C for ; Sun, 4 Oct 2020 19:22:00 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 9AB332068E for ; Sun, 4 Oct 2020 19:22:00 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="bXOlJJgI" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726781AbgJDTVx (ORCPT ); Sun, 4 Oct 2020 15:21:53 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60526 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726736AbgJDTVh (ORCPT ); Sun, 4 Oct 2020 15:21:37 -0400 Received: from mail-pf1-x441.google.com (mail-pf1-x441.google.com [IPv6:2607:f8b0:4864:20::441]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 737F0C0613CE; Sun, 4 Oct 2020 12:21:37 -0700 (PDT) Received: by mail-pf1-x441.google.com with SMTP id l126so5088059pfd.5; Sun, 04 Oct 2020 12:21:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=myFIFL6NJ2uK/QlsJsQ3/XorFsY0JTujNY+1tdB1S6w=; b=bXOlJJgIK0A3SfjsBUJbXQnzmQx7EVLfv4zyLT2NRAXZytpisRzrfHpSLxJQCc70gL tG3UczCTtX9V4iZsYLNKHMLfDlqeIWIFjM2tkCw2MUQalX2ONoMmUwKdJLaD8vEAAvu1 
Yruky35HY5JMNNTze4rqfStJOnV52J/0zsq06JXY4c0CrIezelAMmsbNzZ/4E1tl5hQo qjZ42E/CbsjpXrKEuh36uMaxExMNC8IyoIZ9/7oCzniN2klkVB7SsQiyguFU6clC52w7 opgpHd2ZsaW9LmAeFzAmDBKHFsN8eAauclhnABHpUN1R7hsg4Qp7Iswem4aqajGYEUNu H7Pg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=myFIFL6NJ2uK/QlsJsQ3/XorFsY0JTujNY+1tdB1S6w=; b=ZNIZ9p/8qGEbD8sviTSIPWVaBOOU20noLuWevu4YuWuAOelyeirjTAWUeXQN1diV99 AIGETfmgCF+P1HeCEZUHhZeUKGW2riFZVVIKepv9EXaCyrfUImERtR5D+P7FUaUoHEUg sPuf5KmvUxaBUp+UucZdjoO6g7R3TsUmOeLlVhBru62OBPDgYFtJbG3NucfQDYVZCNtM ns9XdHGeXhMGrnhLCy/0FkEGLYnRDP3yFyZy3+LxVvD0sLh3pdQs5ZmDGjweW6xVYOU7 M2EbZHezWqrZmG2uygMtPxplLkhs9Hedt7QvvqAExoacl68VRWdiqJq74w1MrlZP+FUi Ozcg== X-Gm-Message-State: AOAM5337csCpHaMOckVNHldGY+yP80EEqx7qWMr5t4Qjqm1pN/DAWMP5 DHajS5jqWkZpA28bbiV7R8o= X-Google-Smtp-Source: ABdhPJyxQljzF6SU8gU/5ONFqeUxHtVTnbYKItuy/9NTU/uxFnge0zRQuDPqrTiEmDHaMrFUAtKEyw== X-Received: by 2002:a62:178c:0:b029:152:3610:836d with SMTP id 134-20020a62178c0000b02901523610836dmr11763973pfx.57.1601839296937; Sun, 04 Oct 2020 12:21:36 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id c7sm8952626pfj.84.2020.10.04.12.21.35 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Oct 2020 12:21:36 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Rob Clark , Rob Clark , Sean Paul , David Airlie , Daniel Vetter , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH 13/14] drm/msm: Drop struct_mutex in shrinker path Date: Sun, 4 Oct 2020 12:21:45 -0700 Message-Id: <20201004192152.3298573-14-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201004192152.3298573-1-robdclark@gmail.com> References: <20201004192152.3298573-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Now that the inactive_list is protected by mm_lock, and everything else on per-obj basis is protected by obj->lock, we no longer depend on struct_mutex. Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 1 - drivers/gpu/drm/msm/msm_gem_shrinker.c | 54 -------------------------- 2 files changed, 55 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 9cdac4f7228c..e749a1c6f4e0 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -654,7 +654,6 @@ void msm_gem_purge(struct drm_gem_object *obj, enum msm_gem_lock subclass) struct drm_device *dev = obj->dev; struct msm_gem_object *msm_obj = to_msm_bo(obj); - WARN_ON(!mutex_is_locked(&dev->struct_mutex)); WARN_ON(!is_purgeable(msm_obj, subclass)); WARN_ON(obj->import_attach); diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c index 39a1b5327267..2c7bda1e2bf9 100644 --- a/drivers/gpu/drm/msm/msm_gem_shrinker.c +++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c @@ -8,48 +8,13 @@ #include "msm_gem.h" #include "msm_gpu_trace.h" -static bool msm_gem_shrinker_lock(struct drm_device *dev, bool *unlock) -{ - /* NOTE: we are *closer* to being able to get rid of - * mutex_trylock_recursive().. 
the msm_gem code itself does - * not need struct_mutex, although codepaths that can trigger - * shrinker are still called in code-paths that hold the - * struct_mutex. - * - * Also, msm_obj->madv is protected by struct_mutex. - * - * The next step is probably split out a seperate lock for - * protecting inactive_list, so that shrinker does not need - * struct_mutex. - */ - switch (mutex_trylock_recursive(&dev->struct_mutex)) { - case MUTEX_TRYLOCK_FAILED: - return false; - - case MUTEX_TRYLOCK_SUCCESS: - *unlock = true; - return true; - - case MUTEX_TRYLOCK_RECURSIVE: - *unlock = false; - return true; - } - - BUG(); -} - static unsigned long msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc) { struct msm_drm_private *priv = container_of(shrinker, struct msm_drm_private, shrinker); - struct drm_device *dev = priv->dev; struct msm_gem_object *msm_obj; unsigned long count = 0; - bool unlock; - - if (!msm_gem_shrinker_lock(dev, &unlock)) - return 0; mutex_lock(&priv->mm_lock); @@ -60,9 +25,6 @@ msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc) mutex_unlock(&priv->mm_lock); - if (unlock) - mutex_unlock(&dev->struct_mutex); - return count; } @@ -71,13 +33,8 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc) { struct msm_drm_private *priv = container_of(shrinker, struct msm_drm_private, shrinker); - struct drm_device *dev = priv->dev; struct msm_gem_object *msm_obj; unsigned long freed = 0; - bool unlock; - - if (!msm_gem_shrinker_lock(dev, &unlock)) - return SHRINK_STOP; mutex_lock(&priv->mm_lock); @@ -92,9 +49,6 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc) mutex_unlock(&priv->mm_lock); - if (unlock) - mutex_unlock(&dev->struct_mutex); - if (freed > 0) trace_msm_gem_purge(freed << PAGE_SHIFT); @@ -106,13 +60,8 @@ msm_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr) { struct msm_drm_private *priv = container_of(nb, struct msm_drm_private, vmap_notifier); - struct drm_device *dev = priv->dev; struct msm_gem_object *msm_obj; unsigned unmapped = 0; - bool unlock; - - if (!msm_gem_shrinker_lock(dev, &unlock)) - return NOTIFY_DONE; mutex_lock(&priv->mm_lock); @@ -130,9 +79,6 @@ msm_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr) mutex_unlock(&priv->mm_lock); - if (unlock) - mutex_unlock(&dev->struct_mutex); - *(unsigned long *)ptr += unmapped; if (unmapped > 0) From patchwork Sun Oct 4 19:21:46 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 11815929 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 2601913B2 for ; Sun, 4 Oct 2020 19:22:01 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 09CAA20637 for ; Sun, 4 Oct 2020 19:22:01 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com header.b="h0DfyK2y" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726295AbgJDTVx (ORCPT ); Sun, 4 Oct 2020 15:21:53 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60532 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726752AbgJDTVk (ORCPT ); Sun, 4 Oct 2020 15:21:40 -0400 Received: from mail-pj1-x1043.google.com 
(mail-pj1-x1043.google.com [IPv6:2607:f8b0:4864:20::1043]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B6139C0613CF; Sun, 4 Oct 2020 12:21:39 -0700 (PDT) Received: by mail-pj1-x1043.google.com with SMTP id u3so3954473pjr.3; Sun, 04 Oct 2020 12:21:39 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Jz4QzEAhEdO6lBCU1QoQ4yhqxZSfud3TeRKgBoZBDXA=; b=h0DfyK2y1SjSYpaHFHMtPvB92RnRZePuaoTdB5xxz0thjvfqQP7Siztchi+qNfr4kh XMePXwcGcvYZ+5xcurEpjvcLycdlS3PkbdwN3GgdoTeg5o2zHEqBoKFBjwaam79OQGXg 3YoqTN7w5QcQlTdvJyKGPEx2vewYpRKJpHdYKgYk5qXv2dLNGdQN5Ug8dVe1Vf09jaWa NctNaw/rVfTU56JuSIPI8q6JSzSqM17jP4t3Fpdxje/RKaFKiBrXrXzEuykq0DVxP2l3 GMla73CBHlbIm/VNXUwZuVh1Zx19xRbr6TLqFSD9lxsm9VBldSRUEQJziY11Qj8hC0OK /iLw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Jz4QzEAhEdO6lBCU1QoQ4yhqxZSfud3TeRKgBoZBDXA=; b=bHKldv1UBs3kSEqQ8LPCOxUjNmBCnnHNqZ13zA94EqfM42Ny+6xKDYgXfLd+PT7Wx+ Tp10A9RGxtRWyFARuYnJr5ZpGIY2HDOkuURbG7rjgU830xTOTIH6KLQRMb6l9XUcQ9bl xP3sgsuSmVuBYYKD4t6trwo2asiKX0urW5olUlB6ZDU29PDJ5+uNLvrFCXU4NltBD2ht +PjbedtMR502K+YYfOY54pBhwZncIFMskg64Nkc/2Fc3tGFKdXGCeqnMiGO/JhcOPuVh Oj+hnj/zaLhUdWXIiXYucgFlOPBRYaTEYbPZV56DL9tFc3p9Ocr6qdqMPN2M0lkm5VdT evmg== X-Gm-Message-State: AOAM5303vjyy4xT1n8w/BdGqQYB2fasoI5GR9redeLiRl+9TRy+hfMF4 QAYXj9jJo2niPmFN3jKGGjE= X-Google-Smtp-Source: ABdhPJzwbgfNqafM3y1Sgaak5gfe9bX33DwTRFupIpYNwim2i/Q2fWIh4YBVZQi0LAKdXb4VGj5dHw== X-Received: by 2002:a17:90a:c501:: with SMTP id k1mr13719865pjt.170.1601839299184; Sun, 04 Oct 2020 12:21:39 -0700 (PDT) Received: from localhost (c-73-25-156-94.hsd1.or.comcast.net. [73.25.156.94]) by smtp.gmail.com with ESMTPSA id z28sm9952148pfq.81.2020.10.04.12.21.37 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Sun, 04 Oct 2020 12:21:38 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Rob Clark , Rob Clark , Sean Paul , David Airlie , Daniel Vetter , linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU), freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU), linux-kernel@vger.kernel.org (open list) Subject: [PATCH 14/14] drm/msm: Don't implicit-sync if only a single ring Date: Sun, 4 Oct 2020 12:21:46 -0700 Message-Id: <20201004192152.3298573-15-robdclark@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201004192152.3298573-1-robdclark@gmail.com> References: <20201004192152.3298573-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Any cross-device sync use-cases *must* use explicit sync. And if there is only a single ring (no-preemption), everything is FIFO order and there is no need to implicit-sync. Mesa should probably just always use MSM_SUBMIT_NO_IMPLICIT, as behavior is undefined when fences are not used to synchronize buffer usage across contexts (which is the only case where multiple different priority rings could come into play). 
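From userspace, opting out of implicit sync is just a flag on the submit ioctl. A minimal sketch using libdrm's generic command wrapper; the example_ function and the include paths are illustrative, while MSM_SUBMIT_NO_IMPLICIT, DRM_MSM_GEM_SUBMIT and drmCommandWriteRead() come from the kernel uapi and libdrm:

/* Sketch: a userspace submit that relies on explicit fencing only */
#include <xf86drm.h>
#include <libdrm/msm_drm.h>

static int example_submit_no_implicit(int fd, struct drm_msm_gem_submit *req)
{
        /* when cross-context or cross-device ordering is needed, explicit
         * fences (MSM_SUBMIT_FENCE_FD_IN/OUT) should be used instead
         */
        req->flags |= MSM_SUBMIT_NO_IMPLICIT;

        return drmCommandWriteRead(fd, DRM_MSM_GEM_SUBMIT, req, sizeof(*req));
}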
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem_submit.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index 7d653bdc92dc..b9b68153b7b2 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -219,7 +219,7 @@ static int submit_lock_objects(struct msm_gem_submit *submit) return ret; } -static int submit_fence_sync(struct msm_gem_submit *submit, bool no_implicit) +static int submit_fence_sync(struct msm_gem_submit *submit, bool implicit_sync) { int i, ret = 0; @@ -239,7 +239,7 @@ static int submit_fence_sync(struct msm_gem_submit *submit, bool no_implicit) return ret; } - if (no_implicit) + if (!implicit_sync) continue; ret = msm_gem_sync_object(&msm_obj->base, submit->ring->fctx, @@ -704,7 +704,8 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, if (ret) goto out; - ret = submit_fence_sync(submit, !!(args->flags & MSM_SUBMIT_NO_IMPLICIT)); + ret = submit_fence_sync(submit, (gpu->nr_rings > 1) && + !(args->flags & MSM_SUBMIT_NO_IMPLICIT)); if (ret) goto out;
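Since the parameter was inverted from no_implicit to implicit_sync, the new condition reads more easily restated on its own. The helper below is only an equivalent restatement of the expression passed to submit_fence_sync() above, not code from the patch:

/* Equivalent restatement of when implicit sync is still performed */
static bool example_want_implicit_sync(struct msm_gpu *gpu, u32 flags)
{
        /* a single ring is strictly FIFO, so there is nothing to order
         * against; and userspace may always opt out explicitly
         */
        return (gpu->nr_rings > 1) && !(flags & MSM_SUBMIT_NO_IMPLICIT);
}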