From patchwork Mon Mar 20 14:43:23 2023
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 13181341
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: Daniel Vetter, freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, David Airlie, Sumit Semwal, Christian König, linux-kernel@vger.kernel.org (open list), linux-media@vger.kernel.org (open list:DMA BUFFER SHARING FRAMEWORK), linaro-mm-sig@lists.linaro.org (moderated list:DMA BUFFER SHARING FRAMEWORK)
Subject: [PATCH v2 01/23] drm/msm: Pre-allocate hw_fence
Date: Mon, 20 Mar 2023 07:43:23 -0700
Message-Id: <20230320144356.803762-2-robdclark@gmail.com>
In-Reply-To: <20230320144356.803762-1-robdclark@gmail.com>
References: <20230320144356.803762-1-robdclark@gmail.com>

From: Rob Clark

Avoid allocating memory in job_run() by pre-allocating the hw_fence.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_fence.c      | 12 +++++++++---
 drivers/gpu/drm/msm/msm_fence.h      |  3 ++-
 drivers/gpu/drm/msm/msm_gem_submit.c |  7 +++++++
 drivers/gpu/drm/msm/msm_ringbuffer.c |  2 +-
 4 files changed, 19 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_fence.c b/drivers/gpu/drm/msm/msm_fence.c index 56641408ea74..bab3d84f1686 100644 --- a/drivers/gpu/drm/msm/msm_fence.c +++ b/drivers/gpu/drm/msm/msm_fence.c @@ -99,7 +99,7 @@ static const struct dma_fence_ops msm_fence_ops = { }; struct dma_fence * -msm_fence_alloc(struct msm_fence_context *fctx) +msm_fence_alloc(void) { struct msm_fence *f; @@ -107,10 +107,16 @@ msm_fence_alloc(struct msm_fence_context *fctx) if (!f) return ERR_PTR(-ENOMEM); + return &f->base; +} + +void +msm_fence_init(struct dma_fence *fence, struct msm_fence_context *fctx) +{ + struct msm_fence *f = to_msm_fence(fence); + f->fctx = fctx; dma_fence_init(&f->base, &msm_fence_ops, &fctx->spinlock, fctx->context, ++fctx->last_fence); - - return &f->base; }
diff --git a/drivers/gpu/drm/msm/msm_fence.h b/drivers/gpu/drm/msm/msm_fence.h index 7f1798c54cd1..f913fa22d8fe 100644 --- a/drivers/gpu/drm/msm/msm_fence.h +++ b/drivers/gpu/drm/msm/msm_fence.h @@ -61,7 +61,8 @@ void msm_fence_context_free(struct msm_fence_context *fctx); bool msm_fence_completed(struct msm_fence_context *fctx, uint32_t fence); void msm_update_fence(struct msm_fence_context *fctx, uint32_t fence); -struct dma_fence * msm_fence_alloc(struct msm_fence_context *fctx); +struct dma_fence * msm_fence_alloc(void); +void msm_fence_init(struct dma_fence *fence, struct msm_fence_context *fctx); static inline bool fence_before(uint32_t a, uint32_t b)
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index be4bf77103cd..2570c018b0cb 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -41,6 +41,13 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev, if (!submit) return ERR_PTR(-ENOMEM); + submit->hw_fence = msm_fence_alloc(); + if (IS_ERR(submit->hw_fence)) { + ret = PTR_ERR(submit->hw_fence); + kfree(submit); + return ERR_PTR(ret); + } + ret = drm_sched_job_init(&submit->base, queue->entity, queue); if (ret) { kfree(submit);
diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c index 57a8e9564540..a62b45e5a8c3 100644 --- a/drivers/gpu/drm/msm/msm_ringbuffer.c +++ b/drivers/gpu/drm/msm/msm_ringbuffer.c @@ -18,7 +18,7 @@ static struct dma_fence *msm_job_run(struct drm_sched_job *job) struct msm_gpu *gpu = submit->gpu; int i; - submit->hw_fence = msm_fence_alloc(fctx); + msm_fence_init(submit->hw_fence, fctx); for (i = 0; i < submit->nr_bos; i++) { struct drm_gem_object *obj = &submit->bos[i].obj->base;
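The idea of the patch above is to move the memory allocation, which can block on reclaim, out of the fence-signaling path (job_run()) and into submit creation, leaving only a no-allocation init step in the run path. Below is a minimal stand-alone C model of that allocate-early/initialize-late split; it is illustrative only, the struct and function names are invented for the sketch, and it is not the driver code (the real driver uses msm_fence_alloc()/msm_fence_init() as in the diff above).

#include <stdlib.h>
#include <stdio.h>

struct fence  { unsigned context, seqno; };
struct submit { struct fence *hw_fence; };

/* Submit-creation path (ioctl): allocation is allowed to sleep or enter reclaim here. */
static struct submit *submit_create(void)
{
	struct submit *s = calloc(1, sizeof(*s));
	if (!s)
		return NULL;
	s->hw_fence = calloc(1, sizeof(*s->hw_fence));	/* pre-allocate the fence */
	if (!s->hw_fence) {
		free(s);
		return NULL;
	}
	return s;
}

/* Scheduler run path: must not allocate, only fill in the pre-allocated fence. */
static void job_run(struct submit *s, unsigned context, unsigned *last_seqno)
{
	s->hw_fence->context = context;
	s->hw_fence->seqno = ++(*last_seqno);
}

int main(void)
{
	unsigned seqno = 0;
	struct submit *s = submit_create();
	if (!s)
		return 1;
	job_run(s, 1, &seqno);
	printf("fence %u:%u\n", s->hw_fence->context, s->hw_fence->seqno);
	free(s->hw_fence);
	free(s);
	return 0;
}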
From patchwork Mon Mar 20 14:43:24 2023
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 13181342
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: Daniel Vetter, freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, David Airlie, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 02/23] drm/msm: Move submit bo flags update from obj lock
Date: Mon, 20 Mar 2023 07:43:24 -0700
Message-Id: <20230320144356.803762-3-robdclark@gmail.com>
In-Reply-To: <20230320144356.803762-1-robdclark@gmail.com>
References: <20230320144356.803762-1-robdclark@gmail.com>

From: Rob Clark

The
flags are only accessed (1) when submit is constructed, before enqueuing to gpu sched (ie. when still visible to only the task calling the submit ioctl), (2) here, where we own a reference to the submit and are serialized on the gpu sched thread, and (3) after the submit is retired and last reference is dropped, which is serialized on the submit's reference count. Hence locking is unneeded here. Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_ringbuffer.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c index a62b45e5a8c3..a80447c8764e 100644 --- a/drivers/gpu/drm/msm/msm_ringbuffer.c +++ b/drivers/gpu/drm/msm/msm_ringbuffer.c @@ -26,8 +26,8 @@ static struct dma_fence *msm_job_run(struct drm_sched_job *job) msm_gem_lock(obj); msm_gem_unpin_vma_fenced(submit->bos[i].vma, fctx); msm_gem_unpin_locked(obj); - submit->bos[i].flags &= ~(BO_VMA_PINNED | BO_OBJ_PINNED); msm_gem_unlock(obj); + submit->bos[i].flags &= ~(BO_VMA_PINNED | BO_OBJ_PINNED); } /* TODO move submit path over to using a per-ring lock.. */ From patchwork Mon Mar 20 14:43:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 13181343 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1CC7CC6FD1D for ; Mon, 20 Mar 2023 14:44:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230218AbjCTOom (ORCPT ); Mon, 20 Mar 2023 10:44:42 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50448 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231852AbjCTOoj (ORCPT ); Mon, 20 Mar 2023 10:44:39 -0400 Received: from mail-pj1-x102c.google.com (mail-pj1-x102c.google.com [IPv6:2607:f8b0:4864:20::102c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D2F17C163; Mon, 20 Mar 2023 07:44:36 -0700 (PDT) Received: by mail-pj1-x102c.google.com with SMTP id e15-20020a17090ac20f00b0023d1b009f52so16743826pjt.2; Mon, 20 Mar 2023 07:44:36 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; t=1679323476; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=hf3gRKGEzXkyWVAG2dEyFqi1/XxKRGGVjOETTiZLoYA=; b=i7p9jOLfKKDLeTfpyYWy7C7cs5eDcLLglS6iEXPl+/oEKtAHSdTCq+sXrgoFVTgnBh yO/LJxxjlc4i4j4Gpfo7b5bV6E8IV+ozXRsE2nuOP72UKaer0gQmS0XjbLEukYy9sedX SaYoXVQRm1mKpUEp+Rp+oOHqLAUTRvy75LyJpqxPjX1E4yGI659Z5jbdr1AhUqIg8ZGY 9m8+Ebx7hi28bYL7sY3q3N76MYvrrm0wctvT0G6ckOGnllmePPey2ZsBmbAkDYhbm9B/ 0R/LZDOxfzN895K5eve2Y/x4ONk1eKB6TlASR1TM531IYadPx91JYD3h7Mz0jK6kyCNF uJAA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1679323476; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=hf3gRKGEzXkyWVAG2dEyFqi1/XxKRGGVjOETTiZLoYA=; b=f/JdjNE1ctlCbvLQioRQbFu/nNu/wWiElDyhapm8cx7GPnM3RI+XSnHUKmqqc4GKeR AVEPXnTTijMq44fTsqHsYpHbzD/srl2lEy7dtj9evrTa8kgb78UGodH8GnTO0frwxZrH QXtYAOQQRoESdv8j6pk4Ma5ZJDk4J4T9Dcs4/vgfBL55hIZ52meadJ2Ox0k/vEv9a1gg S4nMGoeb/ZLJ7yJyYNgb+1hBPVjb4CSEAgW2gOsdIwTB83Rq5aZVTYfI2vjm3ZoZYBK+ 
d03NmL3V1Lxvj5dqI54b6tx39dlj7ZPRHYRuxQtvux1mvim0q4LZbXizKMKoA8U11c0X vaeA== X-Gm-Message-State: AO0yUKWIciUSPSRCaSjdwW05XOsozs99Ro0U3khcrL2FIcIs/8iDgpoi 3c0XoP/7fNeGgVAuOlnjA9M= X-Google-Smtp-Source: AK7set98LfakjnQuCwWj87usLvTm+iSmEV2T6zF1D7b1lOmHRdqRxvvEGo8C9by30PXh5dsVOb6A1Q== X-Received: by 2002:a17:903:28c3:b0:19e:ecaf:c4b4 with SMTP id kv3-20020a17090328c300b0019eecafc4b4mr14765234plb.4.1679323476396; Mon, 20 Mar 2023 07:44:36 -0700 (PDT) Received: from localhost ([2a00:79e1:abd:4a00:61b:48ed:72ab:435b]) by smtp.gmail.com with ESMTPSA id n20-20020a170902d0d400b0019c901b35ecsm6822551pln.106.2023.03.20.07.44.35 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 20 Mar 2023 07:44:36 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark , Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , David Airlie , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 03/23] drm/msm/gem: Tidy up VMA API Date: Mon, 20 Mar 2023 07:43:25 -0700 Message-Id: <20230320144356.803762-4-robdclark@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230320144356.803762-1-robdclark@gmail.com> References: <20230320144356.803762-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Stop open coding VMA construction, which will be needed in the next commit. And since the VMA already has a ptr to the adress space, stop passing that around everywhere. (Also, an aspace always has an mmu so we can drop a couple pointless NULL checks.) Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 18 +++++----- drivers/gpu/drm/msm/msm_gem.h | 18 ++++------ drivers/gpu/drm/msm/msm_gem_submit.c | 2 +- drivers/gpu/drm/msm/msm_gem_vma.c | 51 ++++++++++++++++++---------- drivers/gpu/drm/msm/msm_ringbuffer.c | 2 +- 5 files changed, 51 insertions(+), 40 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 1dee0d18abbb..6734aecf0703 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -309,12 +309,10 @@ static struct msm_gem_vma *add_vma(struct drm_gem_object *obj, msm_gem_assert_locked(obj); - vma = kzalloc(sizeof(*vma), GFP_KERNEL); + vma = msm_gem_vma_new(aspace); if (!vma) return ERR_PTR(-ENOMEM); - vma->aspace = aspace; - list_add_tail(&vma->list, &msm_obj->vmas); return vma; @@ -361,9 +359,9 @@ put_iova_spaces(struct drm_gem_object *obj, bool close) list_for_each_entry(vma, &msm_obj->vmas, list) { if (vma->aspace) { - msm_gem_purge_vma(vma->aspace, vma); + msm_gem_vma_purge(vma); if (close) - msm_gem_close_vma(vma->aspace, vma); + msm_gem_vma_close(vma); } } } @@ -399,7 +397,7 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj, if (IS_ERR(vma)) return vma; - ret = msm_gem_init_vma(aspace, vma, obj->size, + ret = msm_gem_vma_init(vma, obj->size, range_start, range_end); if (ret) { del_vma(vma); @@ -437,7 +435,7 @@ int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma) if (IS_ERR(pages)) return PTR_ERR(pages); - ret = msm_gem_map_vma(vma->aspace, vma, prot, msm_obj->sgt, obj->size); + ret = msm_gem_vma_map(vma, prot, msm_obj->sgt, obj->size); if (ret) msm_gem_unpin_locked(obj); @@ -539,8 +537,8 @@ static int clear_iova(struct drm_gem_object *obj, if (msm_gem_vma_inuse(vma)) return -EBUSY; - msm_gem_purge_vma(vma->aspace, vma); - msm_gem_close_vma(vma->aspace, vma); + msm_gem_vma_purge(vma); + 
msm_gem_vma_close(vma); del_vma(vma); return 0; @@ -589,7 +587,7 @@ void msm_gem_unpin_iova(struct drm_gem_object *obj, msm_gem_lock(obj); vma = lookup_vma(obj, aspace); if (!GEM_WARN_ON(!vma)) { - msm_gem_unpin_vma(vma); + msm_gem_vma_unpin(vma); msm_gem_unpin_locked(obj); } msm_gem_unlock(obj); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index c4844cf3a585..d3219c523034 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -69,19 +69,15 @@ struct msm_gem_vma { struct msm_fence_context *fctx[MSM_GPU_MAX_RINGS]; }; -int msm_gem_init_vma(struct msm_gem_address_space *aspace, - struct msm_gem_vma *vma, int size, +struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_address_space *aspace); +int msm_gem_vma_init(struct msm_gem_vma *vma, int size, u64 range_start, u64 range_end); bool msm_gem_vma_inuse(struct msm_gem_vma *vma); -void msm_gem_purge_vma(struct msm_gem_address_space *aspace, - struct msm_gem_vma *vma); -void msm_gem_unpin_vma(struct msm_gem_vma *vma); -void msm_gem_unpin_vma_fenced(struct msm_gem_vma *vma, struct msm_fence_context *fctx); -int msm_gem_map_vma(struct msm_gem_address_space *aspace, - struct msm_gem_vma *vma, int prot, - struct sg_table *sgt, int size); -void msm_gem_close_vma(struct msm_gem_address_space *aspace, - struct msm_gem_vma *vma); +void msm_gem_vma_purge(struct msm_gem_vma *vma); +void msm_gem_vma_unpin(struct msm_gem_vma *vma); +void msm_gem_vma_unpin_fenced(struct msm_gem_vma *vma, struct msm_fence_context *fctx); +int msm_gem_vma_map(struct msm_gem_vma *vma, int prot, struct sg_table *sgt, int size); +void msm_gem_vma_close(struct msm_gem_vma *vma); struct msm_gem_object { struct drm_gem_object base; diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index 2570c018b0cb..1d8e7c2a8024 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -249,7 +249,7 @@ static void submit_cleanup_bo(struct msm_gem_submit *submit, int i, submit->bos[i].flags &= ~cleanup_flags; if (flags & BO_VMA_PINNED) - msm_gem_unpin_vma(submit->bos[i].vma); + msm_gem_vma_unpin(submit->bos[i].vma); if (flags & BO_OBJ_PINNED) msm_gem_unpin_locked(obj); diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index c471aebcdbab..2827679dc39a 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -56,9 +56,9 @@ bool msm_gem_vma_inuse(struct msm_gem_vma *vma) } /* Actually unmap memory for the vma */ -void msm_gem_purge_vma(struct msm_gem_address_space *aspace, - struct msm_gem_vma *vma) +void msm_gem_vma_purge(struct msm_gem_vma *vma) { + struct msm_gem_address_space *aspace = vma->aspace; unsigned size = vma->node.size; /* Print a message if we try to purge a vma in use */ @@ -68,14 +68,13 @@ void msm_gem_purge_vma(struct msm_gem_address_space *aspace, if (!vma->mapped) return; - if (aspace->mmu) - aspace->mmu->funcs->unmap(aspace->mmu, vma->iova, size); + aspace->mmu->funcs->unmap(aspace->mmu, vma->iova, size); vma->mapped = false; } /* Remove reference counts for the mapping */ -void msm_gem_unpin_vma(struct msm_gem_vma *vma) +void msm_gem_vma_unpin(struct msm_gem_vma *vma) { if (GEM_WARN_ON(!vma->inuse)) return; @@ -84,21 +83,21 @@ void msm_gem_unpin_vma(struct msm_gem_vma *vma) } /* Replace pin reference with fence: */ -void msm_gem_unpin_vma_fenced(struct msm_gem_vma *vma, struct msm_fence_context *fctx) +void msm_gem_vma_unpin_fenced(struct msm_gem_vma *vma, struct msm_fence_context 
*fctx) { vma->fctx[fctx->index] = fctx; vma->fence[fctx->index] = fctx->last_fence; vma->fence_mask |= BIT(fctx->index); - msm_gem_unpin_vma(vma); + msm_gem_vma_unpin(vma); } /* Map and pin vma: */ int -msm_gem_map_vma(struct msm_gem_address_space *aspace, - struct msm_gem_vma *vma, int prot, +msm_gem_vma_map(struct msm_gem_vma *vma, int prot, struct sg_table *sgt, int size) { - int ret = 0; + struct msm_gem_address_space *aspace = vma->aspace; + int ret; if (GEM_WARN_ON(!vma->iova)) return -EINVAL; @@ -111,9 +110,10 @@ msm_gem_map_vma(struct msm_gem_address_space *aspace, vma->mapped = true; - if (aspace && aspace->mmu) - ret = aspace->mmu->funcs->map(aspace->mmu, vma->iova, sgt, - size, prot); + if (!aspace) + return 0; + + ret = aspace->mmu->funcs->map(aspace->mmu, vma->iova, sgt, size, prot); if (ret) { vma->mapped = false; @@ -124,9 +124,10 @@ msm_gem_map_vma(struct msm_gem_address_space *aspace, } /* Close an iova. Warn if it is still in use */ -void msm_gem_close_vma(struct msm_gem_address_space *aspace, - struct msm_gem_vma *vma) +void msm_gem_vma_close(struct msm_gem_vma *vma) { + struct msm_gem_address_space *aspace = vma->aspace; + GEM_WARN_ON(msm_gem_vma_inuse(vma) || vma->mapped); spin_lock(&aspace->lock); @@ -139,13 +140,29 @@ void msm_gem_close_vma(struct msm_gem_address_space *aspace, msm_gem_address_space_put(aspace); } +struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_address_space *aspace) +{ + struct msm_gem_vma *vma; + + vma = kzalloc(sizeof(*vma), GFP_KERNEL); + if (!vma) + return NULL; + + vma->aspace = aspace; + + return vma; +} + /* Initialize a new vma and allocate an iova for it */ -int msm_gem_init_vma(struct msm_gem_address_space *aspace, - struct msm_gem_vma *vma, int size, +int msm_gem_vma_init(struct msm_gem_vma *vma, int size, u64 range_start, u64 range_end) { + struct msm_gem_address_space *aspace = vma->aspace; int ret; + if (GEM_WARN_ON(!aspace)) + return -EINVAL; + if (GEM_WARN_ON(vma->iova)) return -EBUSY; diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c index a80447c8764e..44a22b283730 100644 --- a/drivers/gpu/drm/msm/msm_ringbuffer.c +++ b/drivers/gpu/drm/msm/msm_ringbuffer.c @@ -24,7 +24,7 @@ static struct dma_fence *msm_job_run(struct drm_sched_job *job) struct drm_gem_object *obj = &submit->bos[i].obj->base; msm_gem_lock(obj); - msm_gem_unpin_vma_fenced(submit->bos[i].vma, fctx); + msm_gem_vma_unpin_fenced(submit->bos[i].vma, fctx); msm_gem_unpin_locked(obj); msm_gem_unlock(obj); submit->bos[i].flags &= ~(BO_VMA_PINNED | BO_OBJ_PINNED); From patchwork Mon Mar 20 14:43:26 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 13181344 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 77583C7618A for ; Mon, 20 Mar 2023 14:44:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231851AbjCTOon (ORCPT ); Mon, 20 Mar 2023 10:44:43 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50478 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231857AbjCTOok (ORCPT ); Mon, 20 Mar 2023 10:44:40 -0400 Received: from mail-pl1-x62b.google.com (mail-pl1-x62b.google.com [IPv6:2607:f8b0:4864:20::62b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B0B2DCA1A; Mon, 
20 Mar 2023 07:44:38 -0700 (PDT) Received: by mail-pl1-x62b.google.com with SMTP id u5so12652623plq.7; Mon, 20 Mar 2023 07:44:38 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; t=1679323478; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=xALpMXlLTSMvOfS7mQNvHfzG9+8QM3cmx9b7Fz6Q2XU=; b=cxu8fcc47sheJUocs3NFK/ej2kra9Ypkl7pCx1H1P5vOekErq1/GNU3/ZBsh7fug/U LK5SzMY1CdLJhSlItGMRB+oN2kKcpG9ZezdZTN4n8Us/IaCGxAzVEcp9zoIzcbsHXM8l Qze0Vuq2K4bx7XB8PZONqv84P/6H9hG3kl7ovKb5I6Z0JnHdaUA2grKNAlEuYnpDmN+B Y9CLZCVJ6ye3i6QM7sDrald94pji9aWZ+2cy0EOHNDHwDIaL6b6wQ2WpqdFMnc1TBHeR qnuRIQoDVOVCXCBRpQ2sohXgh4ltYqi2IkT24bat3p9pR9f1dZscLLR8ov0SK99wX8qm HqPA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1679323478; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=xALpMXlLTSMvOfS7mQNvHfzG9+8QM3cmx9b7Fz6Q2XU=; b=sNva6z4hfKo7CcVhs4Ogj8oEn1utR9+RmVVISSiwsyw6v8wHdyiFLjzmje9aCvHVR9 uEV3FvdYvpKRZuRr/mp/1Z5cNl8HjsFq2HGIePxF7F9uzPQ3e9G/ZYw4dEuWl3dEMQOD 5MpUHWolH/sXuD93CU3A9OpC7CnIP7DEIV2JCKuc1+9QMcjNY0qDECxDgB0pPyLr91/K ij1+jFWMKoBLpxxcdNOars3/oEq7C3X/PItbVNpo5LC8eVcaudvtDXn7irWgNoJZEnSz U955qNJb/CoLE+zh9GpeIhx/FTcY7MqmK0z4vm6/GNlUwn3Ddoca0ksgRpFCkxDtX4N4 Lq3Q== X-Gm-Message-State: AO0yUKVVxLXhK86X99fHkVGr1nayN+mhDOhSUbIfKmEAmvpW+4cDfrQ7 EEJrWS0jgyqLESi+D+1VEa0= X-Google-Smtp-Source: AK7set/Og7pcuw4IzsENyS7Re6THWUo/30QuP9+MbL2JloNr5T9/L0e5eyWIxk080vd7BjvX4FSQ2A== X-Received: by 2002:a05:6a20:7b05:b0:da:39a5:6e66 with SMTP id s5-20020a056a207b0500b000da39a56e66mr920491pzh.18.1679323478303; Mon, 20 Mar 2023 07:44:38 -0700 (PDT) Received: from localhost ([2a00:79e1:abd:4a00:61b:48ed:72ab:435b]) by smtp.gmail.com with ESMTPSA id r13-20020a63e50d000000b004fb26a80875sm6389111pgh.22.2023.03.20.07.44.37 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 20 Mar 2023 07:44:37 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark , Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , David Airlie , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 04/23] drm/msm: Decouple vma tracking from obj lock Date: Mon, 20 Mar 2023 07:43:26 -0700 Message-Id: <20230320144356.803762-5-robdclark@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230320144356.803762-1-robdclark@gmail.com> References: <20230320144356.803762-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark We need to use the inuse count to track that a BO is pinned until we have the hw_fence. But we want to remove the obj lock from the job_run() path as this could deadlock against reclaim/shrinker (because it is blocking the hw_fence from eventually being signaled). So split that tracking out into a per-vma lock with narrower scope. 
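To make the narrower locking concrete, here is a small stand-alone C model of the per-VMA lock described above, using a pthread spinlock in place of the kernel spinlock; the names and types are invented for the sketch and this is not the driver code. The point is that the unpin bookkeeping in the run path only ever takes the tiny per-vma lock, never the object lock that reclaim may be holding.

#include <pthread.h>
#include <assert.h>

struct vma {
	pthread_spinlock_t lock;	/* stands in for the per-vma spinlock */
	int inuse;			/* pin count for the mapping */
	unsigned fence;			/* fence that replaces the pin on unpin_fenced */
};

static void vma_init(struct vma *v)
{
	pthread_spin_init(&v->lock, PTHREAD_PROCESS_PRIVATE);
	v->inuse = 0;
	v->fence = 0;
}

static void vma_pin(struct vma *v)
{
	pthread_spin_lock(&v->lock);
	v->inuse++;
	pthread_spin_unlock(&v->lock);
}

/* run-path unpin: swap the pin reference for a fence, under the vma lock only */
static void vma_unpin_fenced(struct vma *v, unsigned fence)
{
	pthread_spin_lock(&v->lock);
	assert(v->inuse > 0);
	v->fence = fence;
	v->inuse--;
	pthread_spin_unlock(&v->lock);
}

int main(void)
{
	struct vma v;

	vma_init(&v);
	vma_pin(&v);
	vma_unpin_fenced(&v, 42);
	return v.inuse;		/* 0 on success */
}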
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.h | 1 + drivers/gpu/drm/msm/msm_gem_vma.c | 44 ++++++++++++++++++++++++---- drivers/gpu/drm/msm/msm_ringbuffer.c | 2 +- 3 files changed, 40 insertions(+), 7 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index d3219c523034..1929f09c5b0d 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -59,6 +59,7 @@ struct msm_fence_context; struct msm_gem_vma { struct drm_mm_node node; + spinlock_t lock; uint64_t iova; struct msm_gem_address_space *aspace; struct list_head list; /* node in msm_gem_object::vmas */ diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index 2827679dc39a..98287ed99960 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -40,19 +40,28 @@ msm_gem_address_space_get(struct msm_gem_address_space *aspace) bool msm_gem_vma_inuse(struct msm_gem_vma *vma) { + bool ret = true; + + spin_lock(&vma->lock); + if (vma->inuse > 0) - return true; + goto out; while (vma->fence_mask) { unsigned idx = ffs(vma->fence_mask) - 1; if (!msm_fence_completed(vma->fctx[idx], vma->fence[idx])) - return true; + goto out; vma->fence_mask &= ~BIT(idx); } - return false; + ret = false; + +out: + spin_unlock(&vma->lock); + + return ret; } /* Actually unmap memory for the vma */ @@ -73,8 +82,7 @@ void msm_gem_vma_purge(struct msm_gem_vma *vma) vma->mapped = false; } -/* Remove reference counts for the mapping */ -void msm_gem_vma_unpin(struct msm_gem_vma *vma) +static void vma_unpin_locked(struct msm_gem_vma *vma) { if (GEM_WARN_ON(!vma->inuse)) return; @@ -82,13 +90,23 @@ void msm_gem_vma_unpin(struct msm_gem_vma *vma) vma->inuse--; } +/* Remove reference counts for the mapping */ +void msm_gem_vma_unpin(struct msm_gem_vma *vma) +{ + spin_lock(&vma->lock); + vma_unpin_locked(vma); + spin_unlock(&vma->lock); +} + /* Replace pin reference with fence: */ void msm_gem_vma_unpin_fenced(struct msm_gem_vma *vma, struct msm_fence_context *fctx) { + spin_lock(&vma->lock); vma->fctx[fctx->index] = fctx; vma->fence[fctx->index] = fctx->last_fence; vma->fence_mask |= BIT(fctx->index); - msm_gem_vma_unpin(vma); + vma_unpin_locked(vma); + spin_unlock(&vma->lock); } /* Map and pin vma: */ @@ -103,7 +121,9 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, return -EINVAL; /* Increase the usage counter */ + spin_lock(&vma->lock); vma->inuse++; + spin_unlock(&vma->lock); if (vma->mapped) return 0; @@ -113,11 +133,22 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, if (!aspace) return 0; + /* + * NOTE: iommu/io-pgtable can allocate pages, so we cannot hold + * a lock across map/unmap which is also used in the job_run() + * path, as this can cause deadlock in job_run() vs shrinker/ + * reclaim. + * + * Revisit this if we can come up with a scheme to pre-alloc pages + * for the pgtable in map/unmap ops. 
+ */ ret = aspace->mmu->funcs->map(aspace->mmu, vma->iova, sgt, size, prot); if (ret) { vma->mapped = false; + spin_lock(&vma->lock); vma->inuse--; + spin_unlock(&vma->lock); } return ret; @@ -148,6 +179,7 @@ struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_address_space *aspace) if (!vma) return NULL; + spin_lock_init(&vma->lock); vma->aspace = aspace; return vma; diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c index 44a22b283730..31b4fbf96c36 100644 --- a/drivers/gpu/drm/msm/msm_ringbuffer.c +++ b/drivers/gpu/drm/msm/msm_ringbuffer.c @@ -23,8 +23,8 @@ static struct dma_fence *msm_job_run(struct drm_sched_job *job) for (i = 0; i < submit->nr_bos; i++) { struct drm_gem_object *obj = &submit->bos[i].obj->base; - msm_gem_lock(obj); msm_gem_vma_unpin_fenced(submit->bos[i].vma, fctx); + msm_gem_lock(obj); msm_gem_unpin_locked(obj); msm_gem_unlock(obj); submit->bos[i].flags &= ~(BO_VMA_PINNED | BO_OBJ_PINNED); From patchwork Mon Mar 20 14:43:27 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 13181345 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 60EBFC7618A for ; Mon, 20 Mar 2023 14:45:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231559AbjCTOpT (ORCPT ); Mon, 20 Mar 2023 10:45:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50440 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231855AbjCTOon (ORCPT ); Mon, 20 Mar 2023 10:44:43 -0400 Received: from mail-pj1-x102a.google.com (mail-pj1-x102a.google.com [IPv6:2607:f8b0:4864:20::102a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D7C4EEF87; Mon, 20 Mar 2023 07:44:40 -0700 (PDT) Received: by mail-pj1-x102a.google.com with SMTP id j3-20020a17090adc8300b0023d09aea4a6so16726690pjv.5; Mon, 20 Mar 2023 07:44:40 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; t=1679323480; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=qmpOBRevcWufl9+ZPvYmrN7HY3DEyxJ7D+2z8txKnGY=; b=QJDUUDwrP9t1D0nGIJK5gSCB/5iP/M2PNO7hkTIuenIDr5wOAdLaLWvmdhSrAoP/Zn B0qM1MmtBAJr1CyCWUOKfjFjWfXiPohphc2rf4b+HHRa3dnAPQaj2dN5SytZOLW1aBjm wU7c8/Rm1GCxHMXyEgPQUNkZ0f7Qm514ozSU9oM0jiEtQd0nParCwFrrGGCOhuKTXOEv qbGIpvuvl0cREN8lGQyEnOoCu+t6B0413wo/TgUKr2RJMEJhpIoCyG/4vn19q+rnv8IP ybxZ27mbTr37mHchmTQe59dNGfiGprzmckYERDAUkWEsIp34IcYsjDydR3Yyvk6NWB/X +qxw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1679323480; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=qmpOBRevcWufl9+ZPvYmrN7HY3DEyxJ7D+2z8txKnGY=; b=G4RfVGTfD8f9DRuuIJzOE5V+1MDZa+8+MudPXrctX6fVELajKiuzN1GG9kI2cAVlKE n7602BOJQdG367k78zi5vfSQ2d/d4y+UpgB6xkIp3id3+REr/drbKN4b+68jZrCvS0ZY wb5Fjqyk09MD2kWiI3edw5bEGaO63yJ32uE5zu8V3rNjVT1K/DP5O1IvxBjzS6WWPE/V 7kXxBvHFtrT5vRAetbDwlAZR7AGvP+4bgbN9XmL2c7VD7ufBEi+2q8RIpV3gz9pwQEEA oCRUB+ShqQo4B+0R4hjr2K2J1jaiVC5KGZ7KrRKMOnX6Sx2F6FMqllt0c8xdOKRFXDUs kvRA== X-Gm-Message-State: AO0yUKUlRlrYN67tBOvQeFAPF/P1zpvzKx4ePtE1CIxEnJtZK/J4QAD6 HCRHwFLuXmNLXaCS1hWmxVw= 
X-Google-Smtp-Source: AK7set8xl+EwAB385uyueRZZJelsCmcMVEJo9Hgr+BjmAy9/CkKpfcsmJ83hqOSj+Zn0Gh1/n/okSA== X-Received: by 2002:a05:6a20:7a81:b0:d9:240c:acdd with SMTP id u1-20020a056a207a8100b000d9240cacddmr5912063pzh.40.1679323479922; Mon, 20 Mar 2023 07:44:39 -0700 (PDT) Received: from localhost ([2a00:79e1:abd:4a00:61b:48ed:72ab:435b]) by smtp.gmail.com with ESMTPSA id d22-20020a63fd16000000b00502f9fba637sm6180061pgh.68.2023.03.20.07.44.39 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 20 Mar 2023 07:44:39 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark , Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , David Airlie , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 05/23] drm/msm/gem: Simplify vmap vs LRU tracking Date: Mon, 20 Mar 2023 07:43:27 -0700 Message-Id: <20230320144356.803762-6-robdclark@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230320144356.803762-1-robdclark@gmail.com> References: <20230320144356.803762-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark vmap'ing is just pinning in disguise. So treat it as such and simplify the LRU tracking. Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 17 ++++++++--------- 1 file changed, 8 insertions(+), 9 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 6734aecf0703..009a34b3a49b 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -626,6 +626,7 @@ int msm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev, static void *get_vaddr(struct drm_gem_object *obj, unsigned madv) { struct msm_gem_object *msm_obj = to_msm_bo(obj); + struct page **pages; int ret = 0; msm_gem_assert_locked(obj); @@ -639,6 +640,10 @@ static void *get_vaddr(struct drm_gem_object *obj, unsigned madv) return ERR_PTR(-EBUSY); } + pages = msm_gem_pin_pages_locked(obj); + if (IS_ERR(pages)) + return ERR_CAST(pages); + /* increment vmap_count *before* vmap() call, so shrinker can * check vmap_count (is_vunmapable()) outside of msm_obj lock. 
* This guarantees that we won't try to msm_gem_vunmap() this @@ -648,25 +653,19 @@ static void *get_vaddr(struct drm_gem_object *obj, unsigned madv) msm_obj->vmap_count++; if (!msm_obj->vaddr) { - struct page **pages = get_pages(obj); - if (IS_ERR(pages)) { - ret = PTR_ERR(pages); - goto fail; - } msm_obj->vaddr = vmap(pages, obj->size >> PAGE_SHIFT, VM_MAP, msm_gem_pgprot(msm_obj, PAGE_KERNEL)); if (msm_obj->vaddr == NULL) { ret = -ENOMEM; goto fail; } - - update_lru(obj); } return msm_obj->vaddr; fail: msm_obj->vmap_count--; + msm_gem_unpin_locked(obj); return ERR_PTR(ret); } @@ -705,6 +704,7 @@ void msm_gem_put_vaddr_locked(struct drm_gem_object *obj) GEM_WARN_ON(msm_obj->vmap_count < 1); msm_obj->vmap_count--; + msm_gem_unpin_locked(obj); } void msm_gem_put_vaddr(struct drm_gem_object *obj) @@ -813,10 +813,9 @@ static void update_lru(struct drm_gem_object *obj) if (!msm_obj->pages) { GEM_WARN_ON(msm_obj->pin_count); - GEM_WARN_ON(msm_obj->vmap_count); drm_gem_lru_move_tail(&priv->lru.unbacked, obj); - } else if (msm_obj->pin_count || msm_obj->vmap_count) { + } else if (msm_obj->pin_count) { drm_gem_lru_move_tail(&priv->lru.pinned, obj); } else if (msm_obj->madv == MSM_MADV_WILLNEED) { drm_gem_lru_move_tail(&priv->lru.willneed, obj); From patchwork Mon Mar 20 14:43:28 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 13181349 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0A282C7619A for ; Mon, 20 Mar 2023 14:45:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231876AbjCTOp0 (ORCPT ); Mon, 20 Mar 2023 10:45:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50638 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231905AbjCTOou (ORCPT ); Mon, 20 Mar 2023 10:44:50 -0400 Received: from mail-pj1-x102a.google.com (mail-pj1-x102a.google.com [IPv6:2607:f8b0:4864:20::102a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0BBAAE1B0; Mon, 20 Mar 2023 07:44:43 -0700 (PDT) Received: by mail-pj1-x102a.google.com with SMTP id d13so12380068pjh.0; Mon, 20 Mar 2023 07:44:43 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; t=1679323481; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=gpnnpJi0jMpuVIirJnKhht3Br4nZk/nWgbLPyeNVKoM=; b=NHg2rsAQuZBbVNpWJoQ7SMYcg1sSGv6708+Yi51sSumVAb7P4W0Sgz+ZApLe1OBqQ9 X4zeyzHi7Wf4djrzl44O+b73TLhdaiCP0bG11wMhDpw/6pxLwiYlL/DiH+WLDYQCYKpY 73ETis2vl6NE7mVLY6bltE4zQhwR94ZiaarsDBky65Sl+A2ZdOAvgjAWoL1RhZ71BZXM GWmAn7CXAaMoxvvx/9+TgN7MBQEYTlO/uGlA3CtgeEGwsBHV1q00mTLF2m0rEJI2Aw2O dK8VeGnQI4ASznl7F14ZJRe5gCd+knY0S0CGUEO6h7v/Dr3YeYiuQDFMBNJClq6nWmGR 5Y8g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1679323481; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=gpnnpJi0jMpuVIirJnKhht3Br4nZk/nWgbLPyeNVKoM=; b=TgMoD8l+oc2aMLbp8r8buP1Roj3tRWm3La7/HTgj7N3cHSOUf6UUymlWE2ZtVygch5 aDTHHySkfqSzdfKXZ/Z6ZLBZHmEfKZoUPnC88jz4RasGGhxiqTGwqZaM5xYdwfTS+xg1 R/myBN3gITQsX0tuFoGoV9t3GUX19cqRXcCrTjNE3PznFrtv+QQDiyiGXsdxdK46Mgdo 
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: Daniel Vetter, freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, David Airlie, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v2 06/23] drm/gem: Export drm_gem_lru_move_tail_locked()
Date: Mon, 20 Mar 2023 07:43:28 -0700
Message-Id: <20230320144356.803762-7-robdclark@gmail.com>
In-Reply-To: <20230320144356.803762-1-robdclark@gmail.com>
References: <20230320144356.803762-1-robdclark@gmail.com>

From: Rob Clark

Export the locked version of the LRU's move_tail(), so callers that already hold the LRU lock can use it.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/drm_gem.c | 11 ++++++++++-
 include/drm/drm_gem.h     |  1 +
 2 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c index 59a0bb5ebd85..693f7f35a7bd 100644 --- a/drivers/gpu/drm/drm_gem.c +++ b/drivers/gpu/drm/drm_gem.c @@ -1344,7 +1344,15 @@ drm_gem_lru_remove(struct drm_gem_object *obj) } EXPORT_SYMBOL(drm_gem_lru_remove); -static void +/** + * drm_gem_lru_move_tail_locked - move the object to the tail of the LRU + * + * Like &drm_gem_lru_move_tail but lru lock must be held + * + * @lru: The LRU to move the object into.
+ * @obj: The GEM object to move into this LRU + */ +void drm_gem_lru_move_tail_locked(struct drm_gem_lru *lru, struct drm_gem_object *obj) { lockdep_assert_held_once(lru->lock); @@ -1356,6 +1364,7 @@ drm_gem_lru_move_tail_locked(struct drm_gem_lru *lru, struct drm_gem_object *obj list_add_tail(&obj->lru_node, &lru->list); obj->lru = lru; } +EXPORT_SYMBOL(drm_gem_lru_move_tail_locked); /** * drm_gem_lru_move_tail - move the object to the tail of the LRU diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h index 772a4adf5287..a811c7e346ec 100644 --- a/include/drm/drm_gem.h +++ b/include/drm/drm_gem.h @@ -475,6 +475,7 @@ int drm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev, void drm_gem_lru_init(struct drm_gem_lru *lru, struct mutex *lock); void drm_gem_lru_remove(struct drm_gem_object *obj); +void drm_gem_lru_move_tail_locked(struct drm_gem_lru *lru, struct drm_gem_object *obj); void drm_gem_lru_move_tail(struct drm_gem_lru *lru, struct drm_gem_object *obj); unsigned long drm_gem_lru_scan(struct drm_gem_lru *lru, unsigned nr_to_scan, bool (*shrink)(struct drm_gem_object *obj)); From patchwork Mon Mar 20 14:43:29 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 13181347 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id DA8E4C6FD1D for ; Mon, 20 Mar 2023 14:45:25 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231866AbjCTOpW (ORCPT ); Mon, 20 Mar 2023 10:45:22 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51004 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231912AbjCTOov (ORCPT ); Mon, 20 Mar 2023 10:44:51 -0400 Received: from mail-pl1-x633.google.com (mail-pl1-x633.google.com [IPv6:2607:f8b0:4864:20::633]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 24E8EDBE6; Mon, 20 Mar 2023 07:44:44 -0700 (PDT) Received: by mail-pl1-x633.google.com with SMTP id iw3so12663328plb.6; Mon, 20 Mar 2023 07:44:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; t=1679323483; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=YjYkvhqut592WuSff3dyR3ovhFafDwjiBPIXBsiPTHA=; b=cOHoQlJ0xxnowDvRyQrlnClAHszOToY1+vnIF0VIIDgXA0RnF2Cl/GALLiqwt6XHAT y/tzmqUYHn53FusOSShO8LGtYTuDrh8DpRq9qd/Nl4y1IlQwOqIna913gt16RmzfHQZ9 dZokgnhA/xss3xcfz35c3fKoCxs4ER9SNrjg6JXx1uXkxJNtRp7vYXmZSgx70JronGD6 x0eXU6uMwh8V1hK9cpM87TFtDBhc/dbqaNahA2gbQYZ2ZhqNsT7TEC/ZfzmbYa2PqgOn GR8DiN6vcuIOnfYIrTW8fveCVpMVqlYu3eAeVKXbW3jBGGD4yoY5LJnyDt7t6GQF/mqx peUw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1679323483; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=YjYkvhqut592WuSff3dyR3ovhFafDwjiBPIXBsiPTHA=; b=t+KP6zp9Lg1lF6b90wAYzUoNWR4q5WXdo29VpU+5vXjULDFBe/Fn8qPBnrd1+Iaa0T +z4DGricViep9YmHnlWbTmeQq3fUmJio8P2YN2/00ihZJOAMKvnKfoxteccnqxw9A1A1 H8gthmY910ixicZTz1Inhd3SQI6d+AuWJNfuAYL9yMEObjJavDEvoNB50kvZ5XlDiNRi he0ijBwMD/98IpnZXD4UP4Hr+IjUrPjZQcySKjC3bKckcgzhJiMYGsHEdas4GysmJk+H dck9Qotm7ypxCyB98zDxO4T8UhOZkdHYbQFwH7uOe3GnrVhJG+LC7mYP5mdNgJULFVPv 
9lkA== X-Gm-Message-State: AO0yUKUIEFBe5nVqwbXMd8hEAxy+wf+P5OwQH5rg2LuvNjCVjMihxB7Y mxLC/6/x9R6mFMeDtPgL/H0= X-Google-Smtp-Source: AK7set8QgTzNPHFdn6ePXThhc0O7gvvVmy/zOlqo83fcmxzg8pOx366lz1ZPhithPcnrmPwO40UgtQ== X-Received: by 2002:a17:903:186:b0:19e:6d83:8277 with SMTP id z6-20020a170903018600b0019e6d838277mr21841308plg.51.1679323483402; Mon, 20 Mar 2023 07:44:43 -0700 (PDT) Received: from localhost ([2a00:79e1:abd:4a00:61b:48ed:72ab:435b]) by smtp.gmail.com with ESMTPSA id d3-20020a170902728300b0019c32968271sm6809626pll.11.2023.03.20.07.44.42 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 20 Mar 2023 07:44:43 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark , Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , David Airlie , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 07/23] drm/msm/gem: Move update_lru() Date: Mon, 20 Mar 2023 07:43:29 -0700 Message-Id: <20230320144356.803762-8-robdclark@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230320144356.803762-1-robdclark@gmail.com> References: <20230320144356.803762-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Just code-motion. Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 46 +++++++++++++++++------------------ 1 file changed, 22 insertions(+), 24 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 009a34b3a49b..c97dddf3e2f2 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -19,8 +19,6 @@ #include "msm_gpu.h" #include "msm_mmu.h" -static void update_lru(struct drm_gem_object *obj); - static dma_addr_t physaddr(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); @@ -63,6 +61,28 @@ static void sync_for_cpu(struct msm_gem_object *msm_obj) dma_unmap_sgtable(dev, msm_obj->sgt, DMA_BIDIRECTIONAL, 0); } +static void update_lru(struct drm_gem_object *obj) +{ + struct msm_drm_private *priv = obj->dev->dev_private; + struct msm_gem_object *msm_obj = to_msm_bo(obj); + + msm_gem_assert_locked(&msm_obj->base); + + if (!msm_obj->pages) { + GEM_WARN_ON(msm_obj->pin_count); + + drm_gem_lru_move_tail(&priv->lru.unbacked, obj); + } else if (msm_obj->pin_count) { + drm_gem_lru_move_tail(&priv->lru.pinned, obj); + } else if (msm_obj->madv == MSM_MADV_WILLNEED) { + drm_gem_lru_move_tail(&priv->lru.willneed, obj); + } else { + GEM_WARN_ON(msm_obj->madv != MSM_MADV_DONTNEED); + + drm_gem_lru_move_tail(&priv->lru.dontneed, obj); + } +} + /* allocate pages from VRAM carveout, used when no IOMMU: */ static struct page **get_pages_vram(struct drm_gem_object *obj, int npages) { @@ -804,28 +824,6 @@ void msm_gem_vunmap(struct drm_gem_object *obj) msm_obj->vaddr = NULL; } -static void update_lru(struct drm_gem_object *obj) -{ - struct msm_drm_private *priv = obj->dev->dev_private; - struct msm_gem_object *msm_obj = to_msm_bo(obj); - - msm_gem_assert_locked(&msm_obj->base); - - if (!msm_obj->pages) { - GEM_WARN_ON(msm_obj->pin_count); - - drm_gem_lru_move_tail(&priv->lru.unbacked, obj); - } else if (msm_obj->pin_count) { - drm_gem_lru_move_tail(&priv->lru.pinned, obj); - } else if (msm_obj->madv == MSM_MADV_WILLNEED) { - drm_gem_lru_move_tail(&priv->lru.willneed, obj); - } else { - GEM_WARN_ON(msm_obj->madv != MSM_MADV_DONTNEED); - - drm_gem_lru_move_tail(&priv->lru.dontneed, obj); - } -} - bool msm_gem_active(struct 
drm_gem_object *obj) { msm_gem_assert_locked(obj); From patchwork Mon Mar 20 14:43:30 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 13181351 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id BC08FC6FD1D for ; Mon, 20 Mar 2023 14:45:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231893AbjCTOp2 (ORCPT ); Mon, 20 Mar 2023 10:45:28 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50702 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231969AbjCTOpC (ORCPT ); Mon, 20 Mar 2023 10:45:02 -0400 Received: from mail-pj1-x1032.google.com (mail-pj1-x1032.google.com [IPv6:2607:f8b0:4864:20::1032]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E9F5910265; Mon, 20 Mar 2023 07:44:46 -0700 (PDT) Received: by mail-pj1-x1032.google.com with SMTP id lr16-20020a17090b4b9000b0023f187954acso12639449pjb.2; Mon, 20 Mar 2023 07:44:46 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; t=1679323485; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=yvDnw/hu64TENUPK8m8D4yt/NQNmhsgmGrwCJ9nCNhY=; b=jlcDy40aN1C5W1PLwaX4PT68HHflsCb69nfPT5Mi2J1S7Qq1tRG0GAOAQHkiHtdcCy iNug8vLQ8xaQ3qf7jD3h5se9ru7MEdY5XzpzPfu30BNzsr+kCLJIMC2qgCWksB3TfULr C0HH4ohgE/qt4zDYn/gl50baSa9FCCVioEa/RKjXuglOUQAHyvveIbecf1DyIvWWBr7h rb38OKulT20czvObsf/dBJ6ULSgo5Y2RK4REckQlYRFciYHUUMWxI+eEX05p21E9RW/b y9sbREQwwWG0DhPeFmwNvi4ujTSVJ2KOn64Q4QXMFkj3NwOJf/Oa0w//tMSz/dwdwEFh Lviw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1679323485; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=yvDnw/hu64TENUPK8m8D4yt/NQNmhsgmGrwCJ9nCNhY=; b=gJqSAAcs3NtT7gSuVFrgFpgbOC5PIew/iYfvrpyw9Dlnb+ZzZGGJrvIK9tBVe+24BL 79whdlZZqvahoQ0oZrmgZYqshJvdu9A05QTAoUoUBEi0QA4RaXPSNO7qqjlHkvDdZq38 W+a/V+4VimFBFYZx52/NanLt0sHtOcC/ySlPJzaXzXXjLStmAk7ZFXzBNML6SpVgytsj t77E198epkXsSZYKwTHV6K1tu8UX3fZiYaajFUVaOcV6vNZJbqvCMQk2RJZg1y/9ak/0 WAzhVMEvH+tWk8hKV31SzbZy08q8nC5WciZkyiu73S2C0gOhzWy6mbre6Ib7cGfrOT6I czzg== X-Gm-Message-State: AO0yUKW1BUXBbS4H7wQkUiDy0a8m2Y3fPVThmaPHJ+PAV8eObZ761kaP uANymiRN9i4tGb7FwqAwUHk= X-Google-Smtp-Source: AK7set+ey4SfRsKMXYlfJLMQ2jDvdVx2AebAv6ewBzlhcdkWDA/UUjxaowJxpOGwVjOUPrOQI3xzKw== X-Received: by 2002:a05:6a20:1b05:b0:da:5e10:799b with SMTP id ch5-20020a056a201b0500b000da5e10799bmr103078pzb.10.1679323485207; Mon, 20 Mar 2023 07:44:45 -0700 (PDT) Received: from localhost ([2a00:79e1:abd:4a00:61b:48ed:72ab:435b]) by smtp.gmail.com with ESMTPSA id k23-20020aa790d7000000b006247123adf1sm6626822pfk.143.2023.03.20.07.44.44 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 20 Mar 2023 07:44:44 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark , Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , David Airlie , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 08/23] drm/msm/gem: Protect pin_count/madv by LRU lock Date: Mon, 20 Mar 2023 07:43:30 -0700 
Message-Id: <20230320144356.803762-9-robdclark@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230320144356.803762-1-robdclark@gmail.com> References: <20230320144356.803762-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Since the LRU lock is already acquired when moving an obj between LRUs, we can use it to protect pin_count and madv, without any significant change in locking (ie. it just expands the scope of the lock by a hand- ful of instructions). This prepares the way to decrement the pin_count in the job_run() path without needing to hold the obj lock, to avoid a potential deadlock (or rather stall) caused by the fence-signaling path (job_run()) blocking on shrinker/reclaim. (Only a stall because the wait for fence signaling wait_for_idle() is not infinite.) Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 48 ++++++++++++++++++++++++++--------- drivers/gpu/drm/msm/msm_gem.h | 9 ++++++- 2 files changed, 44 insertions(+), 13 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index c97dddf3e2f2..d0ac3e704b66 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -61,7 +61,7 @@ static void sync_for_cpu(struct msm_gem_object *msm_obj) dma_unmap_sgtable(dev, msm_obj->sgt, DMA_BIDIRECTIONAL, 0); } -static void update_lru(struct drm_gem_object *obj) +static void update_lru_locked(struct drm_gem_object *obj) { struct msm_drm_private *priv = obj->dev->dev_private; struct msm_gem_object *msm_obj = to_msm_bo(obj); @@ -71,18 +71,27 @@ static void update_lru(struct drm_gem_object *obj) if (!msm_obj->pages) { GEM_WARN_ON(msm_obj->pin_count); - drm_gem_lru_move_tail(&priv->lru.unbacked, obj); + drm_gem_lru_move_tail_locked(&priv->lru.unbacked, obj); } else if (msm_obj->pin_count) { - drm_gem_lru_move_tail(&priv->lru.pinned, obj); + drm_gem_lru_move_tail_locked(&priv->lru.pinned, obj); } else if (msm_obj->madv == MSM_MADV_WILLNEED) { - drm_gem_lru_move_tail(&priv->lru.willneed, obj); + drm_gem_lru_move_tail_locked(&priv->lru.willneed, obj); } else { GEM_WARN_ON(msm_obj->madv != MSM_MADV_DONTNEED); - drm_gem_lru_move_tail(&priv->lru.dontneed, obj); + drm_gem_lru_move_tail_locked(&priv->lru.dontneed, obj); } } +static void update_lru(struct drm_gem_object *obj) +{ + struct msm_drm_private *priv = obj->dev->dev_private; + + mutex_lock(&priv->lru.lock); + update_lru_locked(obj); + mutex_unlock(&priv->lru.lock); +} + /* allocate pages from VRAM carveout, used when no IOMMU: */ static struct page **get_pages_vram(struct drm_gem_object *obj, int npages) { @@ -200,6 +209,7 @@ static void put_pages(struct drm_gem_object *obj) static struct page **msm_gem_pin_pages_locked(struct drm_gem_object *obj) { + struct msm_drm_private *priv = obj->dev->dev_private; struct msm_gem_object *msm_obj = to_msm_bo(obj); struct page **p; @@ -210,10 +220,13 @@ static struct page **msm_gem_pin_pages_locked(struct drm_gem_object *obj) } p = get_pages(obj); - if (!IS_ERR(p)) { - to_msm_bo(obj)->pin_count++; - update_lru(obj); - } + if (IS_ERR(p)) + return p; + + mutex_lock(&priv->lru.lock); + msm_obj->pin_count++; + update_lru_locked(obj); + mutex_unlock(&priv->lru.lock); return p; } @@ -464,14 +477,16 @@ int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma) void msm_gem_unpin_locked(struct drm_gem_object *obj) { + struct msm_drm_private *priv = obj->dev->dev_private; struct msm_gem_object *msm_obj = to_msm_bo(obj); 
msm_gem_assert_locked(obj); + mutex_lock(&priv->lru.lock); msm_obj->pin_count--; GEM_WARN_ON(msm_obj->pin_count < 0); - - update_lru(obj); + update_lru_locked(obj); + mutex_unlock(&priv->lru.lock); } struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj, @@ -739,10 +754,13 @@ void msm_gem_put_vaddr(struct drm_gem_object *obj) */ int msm_gem_madvise(struct drm_gem_object *obj, unsigned madv) { + struct msm_drm_private *priv = obj->dev->dev_private; struct msm_gem_object *msm_obj = to_msm_bo(obj); msm_gem_lock(obj); + mutex_lock(&priv->lru.lock); + if (msm_obj->madv != __MSM_MADV_PURGED) msm_obj->madv = madv; @@ -751,7 +769,9 @@ int msm_gem_madvise(struct drm_gem_object *obj, unsigned madv) /* If the obj is inactive, we might need to move it * between inactive lists */ - update_lru(obj); + update_lru_locked(obj); + + mutex_unlock(&priv->lru.lock); msm_gem_unlock(obj); @@ -761,6 +781,7 @@ int msm_gem_madvise(struct drm_gem_object *obj, unsigned madv) void msm_gem_purge(struct drm_gem_object *obj) { struct drm_device *dev = obj->dev; + struct msm_drm_private *priv = obj->dev->dev_private; struct msm_gem_object *msm_obj = to_msm_bo(obj); msm_gem_assert_locked(obj); @@ -777,7 +798,10 @@ void msm_gem_purge(struct drm_gem_object *obj) put_iova_vmas(obj); + mutex_lock(&priv->lru.lock); + /* A one-way transition: */ msm_obj->madv = __MSM_MADV_PURGED; + mutex_unlock(&priv->lru.lock); drm_gem_free_mmap_offset(obj); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 1929f09c5b0d..0057e8e8fa13 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -86,7 +86,9 @@ struct msm_gem_object { uint32_t flags; /** - * Advice: are the backing pages purgeable? + * madv: are the backing pages purgeable? + * + * Protected by obj lock and LRU lock */ uint8_t madv; @@ -114,6 +116,11 @@ struct msm_gem_object { char name[32]; /* Identifier to print for the debugfs files */ + /** + * pin_count: Number of times the pages are pinned + * + * Protected by LRU lock. 
+ */ int pin_count; }; #define to_msm_bo(x) container_of(x, struct msm_gem_object, base) From patchwork Mon Mar 20 14:43:31 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 13181348 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CE9E9C761AF for ; Mon, 20 Mar 2023 14:45:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231418AbjCTOpY (ORCPT ); Mon, 20 Mar 2023 10:45:24 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50940 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231987AbjCTOpG (ORCPT ); Mon, 20 Mar 2023 10:45:06 -0400 Received: from mail-pj1-x1031.google.com (mail-pj1-x1031.google.com [IPv6:2607:f8b0:4864:20::1031]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D06FECA1E; Mon, 20 Mar 2023 07:44:48 -0700 (PDT) Received: by mail-pj1-x1031.google.com with SMTP id gp15-20020a17090adf0f00b0023d1bbd9f9eso16772440pjb.0; Mon, 20 Mar 2023 07:44:47 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; t=1679323487; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=weYL3fqvUQCe/MGsBiphMuUm7IiUfAgwOkX0hsG5azI=; b=XoB4+mM4oASSRnco8nkZwARRG9zcXJcuiyLTtkB1/Em3LEQ2A1GcFBX4c9QpNQQF1j NODZTXwBb1ZsRgqzzncmT68xS+JsVOMXu4/yL4tqZacnngCN5gnrIyIgLpg67GCQaWu2 3RGsxmBtiA8aCU5oOdtb6AigAsxMRRyYT/P3Xh3WrVDvz6jvLttylpc7USgWWQ44GVqe rYzg55l+AaLbimu//o2q7VRZZJ2UA2rlR+D7WRutg8GbWIEPrBcIPRitI3OqhFoohqaA 20Pha7DZD60SGbvPicWbg+l+r5k+WAzhnRYWrhfrBafJ9eHbu4hwWd94ZwQX/0whZg/7 88jA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1679323487; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=weYL3fqvUQCe/MGsBiphMuUm7IiUfAgwOkX0hsG5azI=; b=4SU08Y7DjTy3Hb9zVvuTKqLxJNA1kKe3NBIh8EW3M7yWFGFxSyvv9LSJqlSZnsw2Ma u9ecKaMa4RsrSkF0xwRPfaXIhzB8o738x+IoxG418NJ0iV6H+G000/Qy8XLF564F8F5k rZcfEU3puAxbiKe3/rYjqBBsKNPNl1w/oL1rno52+5oHCCOHHGdGVcvvNQ8T7is65qsr r5JWdydfTatPMCzzwLQoCbNws34vJ9DR+coGfaVGpjvpd3mbchvQLoXS43ECAjubx1N8 /9edCEaa1bAZqpI6jgbfk6vllktxLKFzEYXjmsxd9+VaK6/ZJLUS86+hP6MrTOEwntr+ S8Lw== X-Gm-Message-State: AO0yUKUfUmMwyAowKOjglmlPB9/pXIOyYJTY0uwhF9TVv9chD3SttSJW qw/ivGhFtuG2DVJOXuh7A/s= X-Google-Smtp-Source: AK7set/q6mxFL2SNpoyCJqlGoIsp6zXx5THF6Tar0/8ALnqXrSzyWxWyrmITe8Cih+IgIxzET11mjA== X-Received: by 2002:a17:902:d2ca:b0:19d:1c6e:d31e with SMTP id n10-20020a170902d2ca00b0019d1c6ed31emr21226848plc.60.1679323487099; Mon, 20 Mar 2023 07:44:47 -0700 (PDT) Received: from localhost ([2a00:79e1:abd:4a00:61b:48ed:72ab:435b]) by smtp.gmail.com with ESMTPSA id g7-20020a170902934700b0019d397b0f18sm6777259plp.214.2023.03.20.07.44.46 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 20 Mar 2023 07:44:46 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark , Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , David Airlie , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , linux-kernel@vger.kernel.org (open list), linux-media@vger.kernel.org 
(open list:DMA BUFFER SHARING FRAMEWORK), linaro-mm-sig@lists.linaro.org (moderated list:DMA BUFFER SHARING FRAMEWORK) Subject: [PATCH v2 09/23] drm/msm/gem: Avoid obj lock in job_run() Date: Mon, 20 Mar 2023 07:43:31 -0700 Message-Id: <20230320144356.803762-10-robdclark@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230320144356.803762-1-robdclark@gmail.com> References: <20230320144356.803762-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Now that everything that controls which LRU an obj lives in *except* the backing pages is protected by the LRU lock, add a special path to unpin in the job_run() path, where we are assured that we already have backing pages and will not be racing against eviction (because the GEM object's dma_resv contains the fence that will be signaled when the submit/job completes). Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 44 +++++++++++++++++++++++----- drivers/gpu/drm/msm/msm_gem.h | 1 + drivers/gpu/drm/msm/msm_ringbuffer.c | 4 +-- 3 files changed, 39 insertions(+), 10 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index d0ac3e704b66..9628e8d8dd02 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -61,18 +61,14 @@ static void sync_for_cpu(struct msm_gem_object *msm_obj) dma_unmap_sgtable(dev, msm_obj->sgt, DMA_BIDIRECTIONAL, 0); } -static void update_lru_locked(struct drm_gem_object *obj) +static void update_lru_active(struct drm_gem_object *obj) { struct msm_drm_private *priv = obj->dev->dev_private; struct msm_gem_object *msm_obj = to_msm_bo(obj); - msm_gem_assert_locked(&msm_obj->base); - - if (!msm_obj->pages) { - GEM_WARN_ON(msm_obj->pin_count); + GEM_WARN_ON(!msm_obj->pages); - drm_gem_lru_move_tail_locked(&priv->lru.unbacked, obj); - } else if (msm_obj->pin_count) { + if (msm_obj->pin_count) { drm_gem_lru_move_tail_locked(&priv->lru.pinned, obj); } else if (msm_obj->madv == MSM_MADV_WILLNEED) { drm_gem_lru_move_tail_locked(&priv->lru.willneed, obj); @@ -83,6 +79,22 @@ static void update_lru_locked(struct drm_gem_object *obj) } } +static void update_lru_locked(struct drm_gem_object *obj) +{ + struct msm_drm_private *priv = obj->dev->dev_private; + struct msm_gem_object *msm_obj = to_msm_bo(obj); + + msm_gem_assert_locked(&msm_obj->base); + + if (!msm_obj->pages) { + GEM_WARN_ON(msm_obj->pin_count); + + drm_gem_lru_move_tail_locked(&priv->lru.unbacked, obj); + } else { + update_lru_active(obj); + } +} + static void update_lru(struct drm_gem_object *obj) { struct msm_drm_private *priv = obj->dev->dev_private; @@ -489,6 +501,24 @@ void msm_gem_unpin_locked(struct drm_gem_object *obj) mutex_unlock(&priv->lru.lock); } +/* Special unpin path for use in fence-signaling path, avoiding the need + * to hold the obj lock by only depending on things that are protected by + * the LRU lock. In particular we know that we already have backing + * and that the object's dma_resv has the fence for the current + * submit/job which will prevent us racing against page eviction.
+ */ +void msm_gem_unpin_active(struct drm_gem_object *obj) +{ + struct msm_drm_private *priv = obj->dev->dev_private; + struct msm_gem_object *msm_obj = to_msm_bo(obj); + + mutex_lock(&priv->lru.lock); + msm_obj->pin_count--; + GEM_WARN_ON(msm_obj->pin_count < 0); + update_lru_active(obj); + mutex_unlock(&priv->lru.lock); +} + struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj, struct msm_gem_address_space *aspace) { diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 0057e8e8fa13..2bd6846c83a9 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -128,6 +128,7 @@ struct msm_gem_object { uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj); int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma); void msm_gem_unpin_locked(struct drm_gem_object *obj); +void msm_gem_unpin_active(struct drm_gem_object *obj); struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj, struct msm_gem_address_space *aspace); int msm_gem_get_iova(struct drm_gem_object *obj, diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c index 31b4fbf96c36..b60199184409 100644 --- a/drivers/gpu/drm/msm/msm_ringbuffer.c +++ b/drivers/gpu/drm/msm/msm_ringbuffer.c @@ -24,9 +24,7 @@ static struct dma_fence *msm_job_run(struct drm_sched_job *job) struct drm_gem_object *obj = &submit->bos[i].obj->base; msm_gem_vma_unpin_fenced(submit->bos[i].vma, fctx); - msm_gem_lock(obj); - msm_gem_unpin_locked(obj); - msm_gem_unlock(obj); + msm_gem_unpin_active(obj); submit->bos[i].flags &= ~(BO_VMA_PINNED | BO_OBJ_PINNED); } From patchwork Mon Mar 20 14:43:32 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 13181350 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2AD35C76195 for ; Mon, 20 Mar 2023 14:45:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231883AbjCTOp1 (ORCPT ); Mon, 20 Mar 2023 10:45:27 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51522 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231984AbjCTOpG (ORCPT ); Mon, 20 Mar 2023 10:45:06 -0400 Received: from mail-pj1-x1032.google.com (mail-pj1-x1032.google.com [IPv6:2607:f8b0:4864:20::1032]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 50481AD2A; Mon, 20 Mar 2023 07:44:50 -0700 (PDT) Received: by mail-pj1-x1032.google.com with SMTP id j13so12352669pjd.1; Mon, 20 Mar 2023 07:44:50 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; t=1679323489; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=y74KF7H2nPAn1GkAkMf3ALBkHME3mvlc137TFEwpI5M=; b=KDNXAyKxSPxmqCM1goz2ei4OMneXayGq4rDfE6Qgq1VIyE6guRUWAO1mZzPWwgIP26 JYt+JAaoGORRqU5CpZfLxhpnaTqa15rJPVrROZd2PP5/Wrv+L91VA5PRNhE4DIVA3ZzB OFZzSBuKsPmYl+P7Ixx+dBJft1d8cUSoVWPBMVB/TZ2/y4+0WwyogGQnGsgpycK2T+HC 4AeAWwpByzuLtol49nW1nmtPSxhPBjJAzk4GYtsHsq6Xqba8j2nOOgS7lDgcUfg6PO4d 9Babeqe7rFqMFE/ieZ97Gbuj0CUvKp6/l1e4DET+Plb6nHmPrw2kqMuYyB3AqKl+6N11 miDQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1679323489; 
h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=y74KF7H2nPAn1GkAkMf3ALBkHME3mvlc137TFEwpI5M=; b=sq3IHapkhrPVovMiDukpUvwyomSC4duPHeo1n081YL97mpu5si+sNgRcOsTrIvOIST PZrdqBCbqiCmZT0NoUMqcB98RBKGNpfuLZSrXE/gBXR8wmqHMAhZVaXju+F06UC4ffZR FhQ+0nevaXMSLWxlEqnOjCuf4PwHNFLxbDPN7YFVidxHo3FXXKZza6vJ2CSUyzIYFPN7 sIarBEzLAzgILIqQI2vyHTuts8omwayEPQtr+3kFWLSqEX25kIPbNfKPbviCAvog4B85 CRV0SklwNateH/CFda7Zu+4NUSnyynA9toohccnDq0655HqDOnFPPQmQ25IGczhpuHX5 +LTw== X-Gm-Message-State: AO0yUKVZ8VwqgciSVukiAqf9K4f+mVoPYiIicJ7D05IbWQazxWrdkV9Q ySBaTfiAh0UK1QN7ttr93A8= X-Google-Smtp-Source: AK7set9ZUleUp5mEZ/nimYP6FtanqkhrK9z8SGzJ4pCwCjW9CpDQ3yDpgBO1I4IgNMmovbqGg0F/ng== X-Received: by 2002:a17:902:c40c:b0:19a:a815:2868 with SMTP id k12-20020a170902c40c00b0019aa8152868mr21846958plk.44.1679323488934; Mon, 20 Mar 2023 07:44:48 -0700 (PDT) Received: from localhost ([2a00:79e1:abd:4a00:61b:48ed:72ab:435b]) by smtp.gmail.com with ESMTPSA id j3-20020a170902c3c300b001a072be70desm6828123plj.41.2023.03.20.07.44.48 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 20 Mar 2023 07:44:48 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark , Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , David Airlie , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 10/23] drm/msm: Switch idr_lock to spinlock Date: Mon, 20 Mar 2023 07:43:32 -0700 Message-Id: <20230320144356.803762-11-robdclark@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230320144356.803762-1-robdclark@gmail.com> References: <20230320144356.803762-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Needed to idr_preload() which returns with preemption disabled. 
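For illustration, the pattern this prepares for looks roughly like the sketch below (hypothetical names, not the actual driver code). The key constraint is that idr_preload() returns with preemption disabled and only re-enables it in idr_preload_end(), so any lock taken in between must never sleep; a spinlock qualifies, a mutex does not.

	#include <linux/gfp.h>
	#include <linux/idr.h>
	#include <linux/spinlock.h>

	static DEFINE_IDR(fence_idr);
	static DEFINE_SPINLOCK(idr_lock);   /* was a struct mutex; mutex_lock() may sleep */

	static int assign_fence_id(void *fence)
	{
		int id;

		idr_preload(GFP_KERNEL);    /* may sleep; returns with preemption disabled */
		spin_lock(&idr_lock);       /* ok: spinlocks never sleep */
		id = idr_alloc(&fence_idr, fence, 1, 0, GFP_NOWAIT);
		spin_unlock(&idr_lock);
		idr_preload_end();          /* re-enables preemption */

		return id;
	}
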
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_drv.c | 6 ++---- drivers/gpu/drm/msm/msm_gem_submit.c | 10 +++++----- drivers/gpu/drm/msm/msm_gpu.h | 2 +- drivers/gpu/drm/msm/msm_submitqueue.c | 2 +- 4 files changed, 9 insertions(+), 11 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index aca48c868c14..ce1a77b607d1 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -918,13 +918,11 @@ static int wait_fence(struct msm_gpu_submitqueue *queue, uint32_t fence_id, * retired, so if the fence is not found it means there is nothing * to wait for */ - ret = mutex_lock_interruptible(&queue->idr_lock); - if (ret) - return ret; + spin_lock(&queue->idr_lock); fence = idr_find(&queue->fence_idr, fence_id); if (fence) fence = dma_fence_get_rcu(fence); - mutex_unlock(&queue->idr_lock); + spin_unlock(&queue->idr_lock); if (!fence) return 0; diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index 1d8e7c2a8024..b9d81e5acb42 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -79,9 +79,9 @@ void __msm_gem_submit_destroy(struct kref *kref) unsigned i; if (submit->fence_id) { - mutex_lock(&submit->queue->idr_lock); + spin_lock(&submit->queue->idr_lock); idr_remove(&submit->queue->fence_idr, submit->fence_id); - mutex_unlock(&submit->queue->idr_lock); + spin_unlock(&submit->queue->idr_lock); } dma_fence_put(submit->user_fence); @@ -882,7 +882,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, submit->nr_cmds = i; - mutex_lock(&queue->idr_lock); + spin_lock(&queue->idr_lock); /* * If using userspace provided seqno fence, validate that the id @@ -892,7 +892,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, */ if ((args->flags & MSM_SUBMIT_FENCE_SN_IN) && idr_find(&queue->fence_idr, args->fence)) { - mutex_unlock(&queue->idr_lock); + spin_unlock(&queue->idr_lock); ret = -EINVAL; goto out; } @@ -926,7 +926,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, INT_MAX, GFP_KERNEL); } - mutex_unlock(&queue->idr_lock); + spin_unlock(&queue->idr_lock); if (submit->fence_id < 0) { ret = submit->fence_id; diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index fc1c0d8611a8..5929ecaa1fcd 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -499,7 +499,7 @@ struct msm_gpu_submitqueue { struct msm_file_private *ctx; struct list_head node; struct idr fence_idr; - struct mutex idr_lock; + struct spinlock idr_lock; struct mutex lock; struct kref ref; struct drm_sched_entity *entity; diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c index c6929e205b51..0e803125a325 100644 --- a/drivers/gpu/drm/msm/msm_submitqueue.c +++ b/drivers/gpu/drm/msm/msm_submitqueue.c @@ -200,7 +200,7 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx, *id = queue->id; idr_init(&queue->fence_idr); - mutex_init(&queue->idr_lock); + spin_lock_init(&queue->idr_lock); mutex_init(&queue->lock); list_add_tail(&queue->node, &ctx->submitqueues); From patchwork Mon Mar 20 14:43:33 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 13181346 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org 
(Postfix) with ESMTP id AA5A4C7619A for ; Mon, 20 Mar 2023 14:45:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231861AbjCTOpU (ORCPT ); Mon, 20 Mar 2023 10:45:20 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51546 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231995AbjCTOpH (ORCPT ); Mon, 20 Mar 2023 10:45:07 -0400 Received: from mail-pg1-x52b.google.com (mail-pg1-x52b.google.com [IPv6:2607:f8b0:4864:20::52b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E734D10405; Mon, 20 Mar 2023 07:44:52 -0700 (PDT) Received: by mail-pg1-x52b.google.com with SMTP id z18so6709253pgj.13; Mon, 20 Mar 2023 07:44:52 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; t=1679323490; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=O4Xvkkg7F6CjyOF2RZo8Gtbu7JFTSy/bgLszOXTif/c=; b=pdh6pDbj0hMlQMJiqcjb8EcHHB++Bz4iZs1fWn8cufOZ+Nlx3Cw+6HTaCNHQc1riJo sNuYVoJZKGIWIeMkxx9NXtGIedo/FV+PKDgDHWaf9G0Teyo5+mk7ELb+tQFBYD1G0S21 0PtNuq9gBlinRoOXPX9RrFMMstWiCfJbX6w0UiLfyeU6VyzeB4va8oEX0kp4G/PqgrZ4 J8x7zIf81il9FWDLwkwoPzsnDUwEcI479IVGKAZjOeVZVZk4yNQNiDf7cUZrYqgllCFZ rRGRxpH+E0sP+A0ffVoJH4/hiO7565ux39p3UDwlawoDk1SaGvLvjUCn4t/eGacpTtyc c0Uw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1679323490; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=O4Xvkkg7F6CjyOF2RZo8Gtbu7JFTSy/bgLszOXTif/c=; b=7+Qhj1vSKnOh9qzyJAuKL3aSa4SXi12ShCNe1fQKfCAOdCj0TzckLX37agv7svEKna Fj8SANYxyCkcCtS5mpO52l+f1hdlSBVsT/PEFai8K0yZKsk2lFq/1o6lSzDlIA+ZNhY/ iAYpW7A+y8pT98r19jyNBXDxoZD9NWvkzoski3gtJoxz+BX8sH/Zrx4gHEkSoX9PmHUx IwD96l2dh8Xrdl73SB0dDLfIMQ6ppJov1gTiP4sKkSAj5mkXeS+Y5JAaSD09r6zWfbSh Uv5bqBEcIsGsOgYZSGjYB+x2igwFkBizf693qUMBLPgoElggdwIx8qzpiNQruoxu9SEV onQA== X-Gm-Message-State: AO0yUKWSEnuaTUa6DdHsztDNp4P0r0CqEUKI2HYgbuvt+irUnVAbQBUE iVuIoxZb4357KQvUOjxKUAI= X-Google-Smtp-Source: AK7set8KWfMWUfN/rUSfAdvgv538dmuC8YG9xJe+nc38O3y7lCz5M7md+3TF5MvUHCrG2tMwgVLAcQ== X-Received: by 2002:aa7:9f91:0:b0:627:fc31:1de with SMTP id z17-20020aa79f91000000b00627fc3101demr3121841pfr.7.1679323490696; Mon, 20 Mar 2023 07:44:50 -0700 (PDT) Received: from localhost ([2a00:79e1:abd:4a00:61b:48ed:72ab:435b]) by smtp.gmail.com with ESMTPSA id v22-20020a62a516000000b0058bc7453285sm6389779pfm.217.2023.03.20.07.44.50 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 20 Mar 2023 07:44:50 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark , Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , David Airlie , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 11/23] drm/msm: Use idr_preload() Date: Mon, 20 Mar 2023 07:43:33 -0700 Message-Id: <20230320144356.803762-12-robdclark@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230320144356.803762-1-robdclark@gmail.com> References: <20230320144356.803762-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Avoid allocation under idr_lock, to prevent deadlock against the job_free() path (which runs on same thread as job_run(), which makes it also part of the fence-signaling path. 
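To make the hazard concrete: the free path on the scheduler thread drops the fence id under the same lock, roughly like this simplified sketch of __msm_gem_submit_destroy() (paraphrased, not the literal code):

	static void drop_fence_id(struct msm_gpu_submitqueue *queue, int fence_id)
	{
		spin_lock(&queue->idr_lock);
		idr_remove(&queue->fence_idr, fence_id);
		spin_unlock(&queue->idr_lock);
	}

If the submit path could sleep in reclaim while holding idr_lock, and reclaim ended up waiting on a fence that only signals once this thread makes progress, the two would wait on each other. Pre-loading with idr_preload() outside the lock and allocating with GFP_NOWAIT inside it keeps idr_lock out of that chain.
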
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem_submit.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index b9d81e5acb42..0ab62cb4ed69 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -882,6 +882,8 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, submit->nr_cmds = i; + idr_preload(GFP_KERNEL); + spin_lock(&queue->idr_lock); /* @@ -893,6 +895,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, if ((args->flags & MSM_SUBMIT_FENCE_SN_IN) && idr_find(&queue->fence_idr, args->fence)) { spin_unlock(&queue->idr_lock); + idr_preload_end(); ret = -EINVAL; goto out; } @@ -910,7 +913,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, submit->fence_id = args->fence; ret = idr_alloc_u32(&queue->fence_idr, submit->user_fence, &submit->fence_id, submit->fence_id, - GFP_KERNEL); + GFP_NOWAIT); /* * We've already validated that the fence_id slot is valid, * so if idr_alloc_u32 failed, it is a kernel bug @@ -923,10 +926,11 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, */ submit->fence_id = idr_alloc_cyclic(&queue->fence_idr, submit->user_fence, 1, - INT_MAX, GFP_KERNEL); + INT_MAX, GFP_NOWAIT); } spin_unlock(&queue->idr_lock); + idr_preload_end(); if (submit->fence_id < 0) { ret = submit->fence_id; From patchwork Mon Mar 20 14:43:34 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 13181352 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 35213C7618D for ; Mon, 20 Mar 2023 14:45:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231398AbjCTOp3 (ORCPT ); Mon, 20 Mar 2023 10:45:29 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51792 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232029AbjCTOpQ (ORCPT ); Mon, 20 Mar 2023 10:45:16 -0400 Received: from mail-pl1-x636.google.com (mail-pl1-x636.google.com [IPv6:2607:f8b0:4864:20::636]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7180710273; Mon, 20 Mar 2023 07:44:55 -0700 (PDT) Received: by mail-pl1-x636.google.com with SMTP id kq3so265939plb.13; Mon, 20 Mar 2023 07:44:55 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; t=1679323493; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=ldFupt5wrZbRxkmdbQJ7PqzJnkF6w4MSZ/CZLw9J9jU=; b=nLqncyImNrlz2RYRgSTcGk81CIP+FxeIhwn8gsHuejhfPlvxGvZoYbDo0a811aEXK9 V1LlsnULXd0sVZPzFucY4idd1TfEuzWSGRr8oLl/nGNtw0w61RogcCNsUN5LYoaVQgy8 A9Zjp7rXg0KtTQarA5j7PNz52fbJgT60Emcaw3O4Td7B/lgNjsxPkom0BwnCRmFdnQOz ay5YoqOYuFncYBGa78Ckg/Fc8t6MF4OaZhtzlc9O16bNsgB0fKDDkrlbvblGBXx62ZqL kkIj3+KSXWEXR+AMbc3PB0j7Fhhv2yHH3MBrVP4JTcpba0E8hu6lf6FUt8KEpiKFKbzS Nr0w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1679323493; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=ldFupt5wrZbRxkmdbQJ7PqzJnkF6w4MSZ/CZLw9J9jU=; 
b=4Ijn8dGLsYTuXuJN1KE8BnX95ftX4uYEwdWyzrbL1iFJatug5Mdr7SSOCqcFx+YGV8 Aqr09L4EpiXJxnW2JhK+3ZdG5BvRQ/kppHDkZT73XfwXgi8zY5R8Cochq/5u9/KutYLf N9lv5c/kpTkIUjcUkEfaC8PZF8BRyphYykkW5UZoPFF8j9eI1p1qFLZGQ8TL5NPK1w/7 OnaM5474dNWIiDTRiUOUrkYBPIZtbbLlqMn26z0L9pQnA6owu3f7ibpLr0hix5s2MOrT yScQZp2L80CQf4WqL5Tg02DqoHRAY9lGbBqWyIBucAKb+csgf8kLpSckSi/jOOgqaCrB SZmQ== X-Gm-Message-State: AO0yUKUBD+mWInhycCZqPa4ZV0Lj3Pj3mu59AuLSBBuqP9bs+cp5NgLS dBuS3crU8CWc5vB+qwX4Hxo= X-Google-Smtp-Source: AK7set8ban8EVPY/GlOSyVWpuGWIu9Ktpe8j0t2q1ieP66G7xMsPkz9aTzAfar0y6DwailYRK/6igQ== X-Received: by 2002:a05:6a20:bc9e:b0:d9:f086:e756 with SMTP id fx30-20020a056a20bc9e00b000d9f086e756mr1931648pzb.39.1679323493697; Mon, 20 Mar 2023 07:44:53 -0700 (PDT) Received: from localhost ([2a00:79e1:abd:4a00:61b:48ed:72ab:435b]) by smtp.gmail.com with ESMTPSA id j8-20020aa78dc8000000b00571f66721aesm6454473pfr.42.2023.03.20.07.44.53 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 20 Mar 2023 07:44:53 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark , Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , David Airlie , Akhil P Oommen , Chia-I Wu , Dmitry Osipenko , Luca Weiss , Maximilian Luz , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 12/23] drm/msm/gpu: Move fw loading out of hw_init() path Date: Mon, 20 Mar 2023 07:43:34 -0700 Message-Id: <20230320144356.803762-13-robdclark@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230320144356.803762-1-robdclark@gmail.com> References: <20230320144356.803762-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark It is already a no-op, since we've already loaded the fw from adreno_load_gpu(), so drop the redundant call. 
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/adreno/adreno_gpu.c | 9 +-------- 1 file changed, 1 insertion(+), 8 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c index 817599766329..28cc5685ba96 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c @@ -503,16 +503,9 @@ struct drm_gem_object *adreno_fw_create_bo(struct msm_gpu *gpu, int adreno_hw_init(struct msm_gpu *gpu) { - struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); - int ret, i; - VERB("%s", gpu->name); - ret = adreno_load_fw(adreno_gpu); - if (ret) - return ret; - - for (i = 0; i < gpu->nr_rings; i++) { + for (int i = 0; i < gpu->nr_rings; i++) { struct msm_ringbuffer *ring = gpu->rb[i]; if (!ring) From patchwork Mon Mar 20 14:43:35 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 13181353 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id ABE48C7618D for ; Mon, 20 Mar 2023 14:45:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231889AbjCTOp4 (ORCPT ); Mon, 20 Mar 2023 10:45:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52088 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231873AbjCTOpZ (ORCPT ); Mon, 20 Mar 2023 10:45:25 -0400 Received: from mail-pj1-x1034.google.com (mail-pj1-x1034.google.com [IPv6:2607:f8b0:4864:20::1034]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8A8DDC160; Mon, 20 Mar 2023 07:45:00 -0700 (PDT) Received: by mail-pj1-x1034.google.com with SMTP id om3-20020a17090b3a8300b0023efab0e3bfso16599694pjb.3; Mon, 20 Mar 2023 07:45:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; t=1679323499; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=ZVlbUahPT9dLpoRTmNSYJvHTK7teqvjnC8Je3CP6Kxc=; b=NGWAfuutYzG2ND7yw5eCEsdM74IarWmxSaNS/cW9vVQlHnyZladHRE3ga26R0gj21T R4KpZPJqtoHD5+agUZmY0uO0nVdb9qpmIeRbmvwC+bKxjqUKinwXE16/Tcvs8DM9fQdr Av9B51dpHq/I96SJqFg+P0oUv7Cu1JJs8VyvBJ+/nVeiTjbmanwYjElJ0CsgZ7GiZDHF t2RQ231vKONzgMQzTcfdLpaRKaTjk5gbqRklsH7ny/KxYnYU62IyGqnahsqIz3fwFvsb nnWx98n11XLoFhaaJo4hqsKhsh2zNGfwa7G2//JgQqUkPYzqGLCqHhAy8fJRbkxs0GsS Zj0g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1679323499; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=ZVlbUahPT9dLpoRTmNSYJvHTK7teqvjnC8Je3CP6Kxc=; b=QQBk02X45YDYgtRzVOwF0bja2rXkUbs6qpC78GvhYTVflZykF4oe7raXXxOJQAkd6o px+KhQPaSS/6sLKxBC+9kfTuJ0917Kosau4gHm39k12LFlSy7BFYzu7YmqG5ilm6gnct Qn+DDsQcdPi5BNgIQGOQoFUDVdE211huLb4HDs5+P2a9g/w+Un/LmoIuaOgCVuvfTdcM gBAWHafHQOjeU3+8DJV4Dno+2V7MK+iTCxUfNsv11/EcqJMY6ujhW5sPCkVsoc9EwzGC 3Xxdt9TngeGh6JWDWBeX3XRCstNCcjm97nxcJwXW7bPQ6iasYF5miLHNHnVsNztpsDnU QAXQ== X-Gm-Message-State: AO0yUKUG3NLXw/qy5XTr8iLNaRPxv5KAnSvnKAjjrK23LYYgu5a7AGTr MaFqxunAhVXZv3ouW3BoN6E= X-Google-Smtp-Source: AK7set+nq/17+WMSgvQ9N76UARnEk4i1nIlQZyx1JbgYmEs1u1mjNQQXgECoC/jbIp+HuK50N9Mzaw== X-Received: by 2002:a17:90b:1c8d:b0:233:f98a:8513 with SMTP id 
oo13-20020a17090b1c8d00b00233f98a8513mr19657496pjb.8.1679323499152; Mon, 20 Mar 2023 07:44:59 -0700 (PDT) Received: from localhost ([2a00:79e1:abd:4a00:61b:48ed:72ab:435b]) by smtp.gmail.com with ESMTPSA id mv10-20020a17090b198a00b0023efa52d2b6sm6246591pjb.34.2023.03.20.07.44.58 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 20 Mar 2023 07:44:58 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark , Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , David Airlie , Douglas Anderson , Akhil P Oommen , Chia-I Wu , Konrad Dybcio , Nathan Chancellor , "Joel Fernandes (Google)" , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 13/23] drm/msm/gpu: Move BO allocation out of hw_init Date: Mon, 20 Mar 2023 07:43:35 -0700 Message-Id: <20230320144356.803762-14-robdclark@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230320144356.803762-1-robdclark@gmail.com> References: <20230320144356.803762-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark These allocations are only done the first (successful) time through hw_init() so they won't actually happen in the job_run() path. But lockdep doesn't know this. So dis-entangle them from the hw_init() path. Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 48 +++++++++++----------- drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 46 ++++++++++----------- drivers/gpu/drm/msm/adreno/adreno_device.c | 6 +++ drivers/gpu/drm/msm/msm_gpu.h | 6 +++ 4 files changed, 57 insertions(+), 49 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c index 660ba0db8900..f8e278d46dcf 100644 --- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c @@ -567,7 +567,7 @@ static void a5xx_ucode_check_version(struct a5xx_gpu *a5xx_gpu, msm_gem_put_vaddr(obj); } -static int a5xx_ucode_init(struct msm_gpu *gpu) +static int a5xx_ucode_load(struct msm_gpu *gpu) { struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); struct a5xx_gpu *a5xx_gpu = to_a5xx_gpu(adreno_gpu); @@ -605,9 +605,24 @@ static int a5xx_ucode_init(struct msm_gpu *gpu) a5xx_ucode_check_version(a5xx_gpu, a5xx_gpu->pfp_bo); } - gpu_write64(gpu, REG_A5XX_CP_ME_INSTR_BASE_LO, a5xx_gpu->pm4_iova); + if (a5xx_gpu->has_whereami) { + if (!a5xx_gpu->shadow_bo) { + a5xx_gpu->shadow = msm_gem_kernel_new(gpu->dev, + sizeof(u32) * gpu->nr_rings, + MSM_BO_WC | MSM_BO_MAP_PRIV, + gpu->aspace, &a5xx_gpu->shadow_bo, + &a5xx_gpu->shadow_iova); - gpu_write64(gpu, REG_A5XX_CP_PFP_INSTR_BASE_LO, a5xx_gpu->pfp_iova); + if (IS_ERR(a5xx_gpu->shadow)) + return PTR_ERR(a5xx_gpu->shadow); + + msm_gem_object_set_name(a5xx_gpu->shadow_bo, "shadow"); + } + } else if (gpu->nr_rings > 1) { + /* Disable preemption if WHERE_AM_I isn't available */ + a5xx_preempt_fini(gpu); + gpu->nr_rings = 1; + } return 0; } @@ -900,9 +915,8 @@ static int a5xx_hw_init(struct msm_gpu *gpu) if (adreno_is_a530(adreno_gpu) || adreno_is_a540(adreno_gpu)) a5xx_gpmu_ucode_init(gpu); - ret = a5xx_ucode_init(gpu); - if (ret) - return ret; + gpu_write64(gpu, REG_A5XX_CP_ME_INSTR_BASE_LO, a5xx_gpu->pm4_iova); + gpu_write64(gpu, REG_A5XX_CP_PFP_INSTR_BASE_LO, a5xx_gpu->pfp_iova); /* Set the ringbuffer address */ gpu_write64(gpu, REG_A5XX_CP_RB_BASE, gpu->rb[0]->iova); @@ -916,27 +930,10 @@ static int a5xx_hw_init(struct msm_gpu *gpu) gpu_write(gpu, REG_A5XX_CP_RB_CNTL, 
MSM_GPU_RB_CNTL_DEFAULT | AXXX_CP_RB_CNTL_NO_UPDATE); - /* Create a privileged buffer for the RPTR shadow */ - if (a5xx_gpu->has_whereami) { - if (!a5xx_gpu->shadow_bo) { - a5xx_gpu->shadow = msm_gem_kernel_new(gpu->dev, - sizeof(u32) * gpu->nr_rings, - MSM_BO_WC | MSM_BO_MAP_PRIV, - gpu->aspace, &a5xx_gpu->shadow_bo, - &a5xx_gpu->shadow_iova); - - if (IS_ERR(a5xx_gpu->shadow)) - return PTR_ERR(a5xx_gpu->shadow); - - msm_gem_object_set_name(a5xx_gpu->shadow_bo, "shadow"); - } - + /* Configure the RPTR shadow if needed: */ + if (a5xx_gpu->shadow_bo) { gpu_write64(gpu, REG_A5XX_CP_RB_RPTR_ADDR, shadowptr(a5xx_gpu, gpu->rb[0])); - } else if (gpu->nr_rings > 1) { - /* Disable preemption if WHERE_AM_I isn't available */ - a5xx_preempt_fini(gpu); - gpu->nr_rings = 1; } a5xx_preempt_hw_init(gpu); @@ -1682,6 +1679,7 @@ static const struct adreno_gpu_funcs funcs = { .get_param = adreno_get_param, .set_param = adreno_set_param, .hw_init = a5xx_hw_init, + .ucode_load = a5xx_ucode_load, .pm_suspend = a5xx_pm_suspend, .pm_resume = a5xx_pm_resume, .recover = a5xx_recover, diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c index aae60cbd9164..89049094a242 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -917,7 +917,7 @@ static bool a6xx_ucode_check_version(struct a6xx_gpu *a6xx_gpu, return ret; } -static int a6xx_ucode_init(struct msm_gpu *gpu) +static int a6xx_ucode_load(struct msm_gpu *gpu) { struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); @@ -946,7 +946,23 @@ static int a6xx_ucode_init(struct msm_gpu *gpu) } } - gpu_write64(gpu, REG_A6XX_CP_SQE_INSTR_BASE, a6xx_gpu->sqe_iova); + /* + * Expanded APRIV and targets that support WHERE_AM_I both need a + * privileged buffer to store the RPTR shadow + */ + if ((adreno_gpu->base.hw_apriv || a6xx_gpu->has_whereami) && + !a6xx_gpu->shadow_bo) { + a6xx_gpu->shadow = msm_gem_kernel_new(gpu->dev, + sizeof(u32) * gpu->nr_rings, + MSM_BO_WC | MSM_BO_MAP_PRIV, + gpu->aspace, &a6xx_gpu->shadow_bo, + &a6xx_gpu->shadow_iova); + + if (IS_ERR(a6xx_gpu->shadow)) + return PTR_ERR(a6xx_gpu->shadow); + + msm_gem_object_set_name(a6xx_gpu->shadow_bo, "shadow"); + } return 0; } @@ -1135,9 +1151,7 @@ static int hw_init(struct msm_gpu *gpu) if (ret) goto out; - ret = a6xx_ucode_init(gpu); - if (ret) - goto out; + gpu_write64(gpu, REG_A6XX_CP_SQE_INSTR_BASE, a6xx_gpu->sqe_iova); /* Set the ringbuffer address */ gpu_write64(gpu, REG_A6XX_CP_RB_BASE, gpu->rb[0]->iova); @@ -1152,25 +1166,8 @@ static int hw_init(struct msm_gpu *gpu) gpu_write(gpu, REG_A6XX_CP_RB_CNTL, MSM_GPU_RB_CNTL_DEFAULT | AXXX_CP_RB_CNTL_NO_UPDATE); - /* - * Expanded APRIV and targets that support WHERE_AM_I both need a - * privileged buffer to store the RPTR shadow - */ - - if (adreno_gpu->base.hw_apriv || a6xx_gpu->has_whereami) { - if (!a6xx_gpu->shadow_bo) { - a6xx_gpu->shadow = msm_gem_kernel_new(gpu->dev, - sizeof(u32) * gpu->nr_rings, - MSM_BO_WC | MSM_BO_MAP_PRIV, - gpu->aspace, &a6xx_gpu->shadow_bo, - &a6xx_gpu->shadow_iova); - - if (IS_ERR(a6xx_gpu->shadow)) - return PTR_ERR(a6xx_gpu->shadow); - - msm_gem_object_set_name(a6xx_gpu->shadow_bo, "shadow"); - } - + /* Configure the RPTR shadow if needed: */ + if (a6xx_gpu->shadow_bo) { gpu_write64(gpu, REG_A6XX_CP_RB_RPTR_ADDR_LO, shadowptr(a6xx_gpu, gpu->rb[0])); } @@ -1952,6 +1949,7 @@ static const struct adreno_gpu_funcs funcs = { .get_param = adreno_get_param, .set_param = adreno_set_param, .hw_init = 
a6xx_hw_init, + .ucode_load = a6xx_ucode_load, .pm_suspend = a6xx_pm_suspend, .pm_resume = a6xx_pm_resume, .recover = a6xx_recover, diff --git a/drivers/gpu/drm/msm/adreno/adreno_device.c b/drivers/gpu/drm/msm/adreno/adreno_device.c index 36f062c7582f..83d89b8d93e4 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_device.c +++ b/drivers/gpu/drm/msm/adreno/adreno_device.c @@ -432,6 +432,12 @@ struct msm_gpu *adreno_load_gpu(struct drm_device *dev) if (ret) return NULL; + if (gpu->funcs->ucode_load) { + ret = gpu->funcs->ucode_load(gpu); + if (ret) + return NULL; + } + /* * Now that we have firmware loaded, and are ready to begin * booting the gpu, go ahead and enable runpm: diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index 5929ecaa1fcd..f84de0e8afac 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -50,6 +50,12 @@ struct msm_gpu_funcs { int (*set_param)(struct msm_gpu *gpu, struct msm_file_private *ctx, uint32_t param, uint64_t value, uint32_t len); int (*hw_init)(struct msm_gpu *gpu); + + /** + * @ucode_load: Optional hook to upload fw to GEM objs + */ + int (*ucode_load)(struct msm_gpu *gpu); + int (*pm_suspend)(struct msm_gpu *gpu); int (*pm_resume)(struct msm_gpu *gpu); void (*submit)(struct msm_gpu *gpu, struct msm_gem_submit *submit); From patchwork Mon Mar 20 14:43:36 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 13181356 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id EA960C7618A for ; Mon, 20 Mar 2023 14:46:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231954AbjCTOqS (ORCPT ); Mon, 20 Mar 2023 10:46:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51858 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231904AbjCTOpu (ORCPT ); Mon, 20 Mar 2023 10:45:50 -0400 Received: from mail-pl1-x634.google.com (mail-pl1-x634.google.com [IPv6:2607:f8b0:4864:20::634]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 98488B459; Mon, 20 Mar 2023 07:45:04 -0700 (PDT) Received: by mail-pl1-x634.google.com with SMTP id w4so4587895plg.9; Mon, 20 Mar 2023 07:45:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; t=1679323502; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=iRudoYKJ/9RBEgJ+2j3GsDNIAjaLf24fOR0Git+DXzM=; b=FSiCBrQqb/DttiCvh4W10nxnr4IwYmxqhvZrOxgMDQkWUirVLq87/u2vJrBad/daDh qw++XRKpNrVXUMtZedrPPKhpDSyZukdY/4ufBj5U2makZ08JuIbMHnJSOiC4xm1bzrjS HzPLQJOPxkS0Er/frUYuRF9+wzmCTAdEXPRxpH9m4E1e4+TSidc5nWZdbGAXgA9UJObW LQ8eG98up+TO8k88KCpl5YaCgqinGhFAOq+NXZiPa6fHGsmnWtfhPJltAZ0vlG4R9Kn2 hJvQqOIM1WcqtU43N+oX6hFcHJ9/IKX54VNK/yLNmLJEvxRuIOe1TvBoZvweJE5xupP+ mKsg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1679323502; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=iRudoYKJ/9RBEgJ+2j3GsDNIAjaLf24fOR0Git+DXzM=; b=0OJdm5aZeCUW885rLQNB4S0YOREn1Bvqq50NBa7CMwF4xgxHmCZO68vSC5f6jFfZyW F/RT2OwtjMNrOvSwOQI9kPpjbFBxdc++/7xDL0D31Bs04aItgRgO85shf6vDt17Cc+fx 
9UrcRNzHjy4bcP+KL7VKBygoQQxTisjfr4jZ0m2C+WjM3seM39E6BkNEKBvNyv8nXMU6 6UUD3TP5rj6aYwZL6xcmYqAXvUO9Ut8AFMlQ089XP91uE9KnlhuyyZa/kuo4cwIBtt87 CrES5JU/7IJDN+NiJH0UaxYFFPg7HQlWUmFbIAH2M9fsmukCQwR909bcacD09Y8kfo1X cjUg== X-Gm-Message-State: AO0yUKU5v1IqvoBWPlf/+jBHMsd2KzlwAsEjrL5mdLGOhCHzbhv0XX3V eQMiO4U1BoISrZcNODwTKOg= X-Google-Smtp-Source: AK7set9ulfPXo9jK6KpDY96RxgLMtzdpXoEn31dcDtWIB8HP/mch1ngRHsDe2j2Y+tM8AsYX9Ahm6g== X-Received: by 2002:a17:902:e5c3:b0:19e:7d67:84e6 with SMTP id u3-20020a170902e5c300b0019e7d6784e6mr21119604plf.0.1679323502456; Mon, 20 Mar 2023 07:45:02 -0700 (PDT) Received: from localhost ([2a00:79e1:abd:4a00:61b:48ed:72ab:435b]) by smtp.gmail.com with ESMTPSA id az8-20020a170902a58800b0019aa5528a5csm6793088plb.144.2023.03.20.07.45.01 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 20 Mar 2023 07:45:01 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark , Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , David Airlie , Akhil P Oommen , Konrad Dybcio , Geert Uytterhoeven , Douglas Anderson , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 14/23] drm/msm/a6xx: Move ioremap out of hw_init path Date: Mon, 20 Mar 2023 07:43:36 -0700 Message-Id: <20230320144356.803762-15-robdclark@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230320144356.803762-1-robdclark@gmail.com> References: <20230320144356.803762-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Move the one-time RPMh setup to a6xx_gmu_init(). To get rid of the hack for one-time init vs start, add in an extra a6xx_rpmh_stop() at the end of the init sequence. 
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 18 ++++++++---------- 1 file changed, 8 insertions(+), 10 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c index f3c9600221d4..30a1bf39ea83 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c @@ -621,6 +621,8 @@ static void a6xx_gmu_rpmh_init(struct a6xx_gmu *gmu) /* ensure no writes happen before the uCode is fully written */ wmb(); + a6xx_rpmh_stop(gmu); + err: if (!IS_ERR_OR_NULL(pdcptr)) iounmap(pdcptr); @@ -753,7 +755,6 @@ static int a6xx_gmu_fw_load(struct a6xx_gmu *gmu) static int a6xx_gmu_fw_start(struct a6xx_gmu *gmu, unsigned int state) { - static bool rpmh_init; struct a6xx_gpu *a6xx_gpu = container_of(gmu, struct a6xx_gpu, gmu); struct adreno_gpu *adreno_gpu = &a6xx_gpu->base; int ret; @@ -776,15 +777,9 @@ static int a6xx_gmu_fw_start(struct a6xx_gmu *gmu, unsigned int state) /* Turn on register retention */ gmu_write(gmu, REG_A6XX_GMU_GENERAL_7, 1); - /* We only need to load the RPMh microcode once */ - if (!rpmh_init) { - a6xx_gmu_rpmh_init(gmu); - rpmh_init = true; - } else { - ret = a6xx_rpmh_start(gmu); - if (ret) - return ret; - } + ret = a6xx_rpmh_start(gmu); + if (ret) + return ret; ret = a6xx_gmu_fw_load(gmu); if (ret) @@ -1633,6 +1628,9 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node) /* Set up the HFI queues */ a6xx_hfi_init(gmu); + /* Initialize RPMh */ + a6xx_gmu_rpmh_init(gmu); + gmu->initialized = true; return 0; From patchwork Mon Mar 20 14:43:37 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 13181354 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A00D0C6FD1D for ; Mon, 20 Mar 2023 14:46:19 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231942AbjCTOqR (ORCPT ); Mon, 20 Mar 2023 10:46:17 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51850 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231825AbjCTOpu (ORCPT ); Mon, 20 Mar 2023 10:45:50 -0400 Received: from mail-pf1-x42b.google.com (mail-pf1-x42b.google.com [IPv6:2607:f8b0:4864:20::42b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DB78D1024A; Mon, 20 Mar 2023 07:45:05 -0700 (PDT) Received: by mail-pf1-x42b.google.com with SMTP id fd25so7071697pfb.1; Mon, 20 Mar 2023 07:45:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; t=1679323504; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Qu7alhJ6wWy9WzDcFrk58GfmlRJKvb1Npis5WJImQUk=; b=e/FNCZpTNWAV+E5t5npeOCsIakIkV+ragj5nuKWTyQ0ksvH7Wj0f/MjaAqvkOI3bxN kYww4vhk4UrVxjm8CqRtiK7B32r1TSYWhFbxIvO52TOAJVcilv0wjdmq+8V0WVhlEqJ+ 5L89sNZ4JEHFpnBDbudoUjmsIlFDaVzEgB4MSpsl4K0iZWnLddZ6epdjRw33YKTV96Lo VUOaLxw6oOD5rwbZkxcbQE9bjykyh2CfLHh3qeLlNO0+40+2IkwbKSXKfuz4GfS2hBBQ WNh1RbhlF9XONK9T6JHNdlW27qhvTVJTCY3UcsZIUBcKFPSuEuAmhvrT7xU9Wr/s2qw7 A7Bg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1679323504; h=content-transfer-encoding:mime-version:references:in-reply-to 
:message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=Qu7alhJ6wWy9WzDcFrk58GfmlRJKvb1Npis5WJImQUk=; b=u4X4pSMYjkYM6HMpYipMKje5e3MBfiwineCYlJTC4zClC4dbhh9EXHGg839xRplztW alvfniAcaxHcUh3eM7CcuB/etWtHrgzXzYej0kzdeJcq3XItMyB/4aUtbashUgF7cShh NY340Z6AtVD651D0JGgMoxDJ1Xk1WxUGR/iFH6C1J03Rzo7t7JPYNqrNWe4JuEoqGp13 ISWiMNbtJXsKMpufmPmL2hQfilOlMEE5b24AfCPilplaZD5dGMpHI9US4aKY0xQhvzzw girjiFxXRaqRqcp3bzrBPPS3MmYPTAiyp2hjIS22UtE9U6qQXszFhPOT9Sp8jz2JRI6P AHEA== X-Gm-Message-State: AO0yUKUuGX2mGPLml/IIgtPJL9/XMp9Y1xwyRcJC4XwuBxY59LuZ8QRF XM6Oe0HoBS2NyDpC9hQURdE7SJMKMvI= X-Google-Smtp-Source: AK7set//Qk9cXDTXMYexJ0g5upjPAMn6/IlbJvD3ot8I7O6eFbD1nrN/RI0I2HcbbYaxb7YioZLATw== X-Received: by 2002:a62:7b95:0:b0:625:e051:e462 with SMTP id w143-20020a627b95000000b00625e051e462mr15333738pfc.15.1679323504322; Mon, 20 Mar 2023 07:45:04 -0700 (PDT) Received: from localhost ([2a00:79e1:abd:4a00:61b:48ed:72ab:435b]) by smtp.gmail.com with ESMTPSA id r14-20020a62e40e000000b00627ee6dcb84sm3045993pfh.203.2023.03.20.07.45.03 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 20 Mar 2023 07:45:03 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark , MyungJoo Ham , Kyungmin Park , Chanwoo Choi , linux-pm@vger.kernel.org (open list:DEVICE FREQUENCY (DEVFREQ)), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 15/23] PM / devfreq: Drop unneed locking to appease lockdep Date: Mon, 20 Mar 2023 07:43:37 -0700 Message-Id: <20230320144356.803762-16-robdclark@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230320144356.803762-1-robdclark@gmail.com> References: <20230320144356.803762-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark In the process of adding lockdep annotation for GPU job_run() path to catch potential deadlocks against the shrinker/reclaim path, I turned up this lockdep splat: ====================================================== WARNING: possible circular locking dependency detected 6.2.0-rc8-debug+ #556 Not tainted ------------------------------------------------------ ring0/123 is trying to acquire lock: ffffff8087219078 (&devfreq->lock){+.+.}-{3:3}, at: devfreq_monitor_resume+0x3c/0xf0 but task is already holding lock: ffffffd6f64e57e8 (dma_fence_map){++++}-{0:0}, at: msm_job_run+0x68/0x150 which lock already depends on the new lock. 
the existing dependency chain (in reverse order) is: -> #3 (dma_fence_map){++++}-{0:0}: __dma_fence_might_wait+0x74/0xc0 dma_resv_lockdep+0x1f4/0x2f4 do_one_initcall+0x104/0x2bc kernel_init_freeable+0x344/0x34c kernel_init+0x30/0x134 ret_from_fork+0x10/0x20 -> #2 (mmu_notifier_invalidate_range_start){+.+.}-{0:0}: fs_reclaim_acquire+0x80/0xa8 slab_pre_alloc_hook.constprop.0+0x40/0x25c __kmem_cache_alloc_node+0x60/0x1cc __kmalloc+0xd8/0x100 topology_parse_cpu_capacity+0x8c/0x178 get_cpu_for_node+0x88/0xc4 parse_cluster+0x1b0/0x28c parse_cluster+0x8c/0x28c init_cpu_topology+0x168/0x188 smp_prepare_cpus+0x24/0xf8 kernel_init_freeable+0x18c/0x34c kernel_init+0x30/0x134 ret_from_fork+0x10/0x20 -> #1 (fs_reclaim){+.+.}-{0:0}: __fs_reclaim_acquire+0x3c/0x48 fs_reclaim_acquire+0x54/0xa8 slab_pre_alloc_hook.constprop.0+0x40/0x25c __kmem_cache_alloc_node+0x60/0x1cc __kmalloc_node_track_caller+0xb8/0xe0 kstrdup+0x70/0x90 kstrdup_const+0x38/0x48 kvasprintf_const+0x48/0xbc kobject_set_name_vargs+0x40/0xb0 dev_set_name+0x64/0x8c devfreq_add_device+0x31c/0x55c devm_devfreq_add_device+0x6c/0xb8 msm_devfreq_init+0xa8/0x16c msm_gpu_init+0x38c/0x570 adreno_gpu_init+0x1b4/0x2b4 a6xx_gpu_init+0x15c/0x3e4 adreno_bind+0x218/0x254 component_bind_all+0x114/0x1ec msm_drm_bind+0x2b8/0x608 try_to_bring_up_aggregate_device+0x88/0x1a4 __component_add+0xec/0x13c component_add+0x1c/0x28 dsi_dev_attach+0x28/0x34 dsi_host_attach+0xdc/0x124 mipi_dsi_attach+0x30/0x44 devm_mipi_dsi_attach+0x2c/0x70 ti_sn_bridge_probe+0x298/0x2c4 auxiliary_bus_probe+0x7c/0x94 really_probe+0x158/0x290 __driver_probe_device+0xc8/0xe0 driver_probe_device+0x44/0x100 __device_attach_driver+0x64/0xdc bus_for_each_drv+0xa0/0xc8 __device_attach+0xd8/0x168 device_initial_probe+0x1c/0x28 bus_probe_device+0x38/0xa0 deferred_probe_work_func+0xc8/0xe0 process_one_work+0x2d8/0x478 process_scheduled_works+0x4c/0x50 worker_thread+0x218/0x274 kthread+0xf0/0x100 ret_from_fork+0x10/0x20 -> #0 (&devfreq->lock){+.+.}-{3:3}: __lock_acquire+0xe00/0x1060 lock_acquire+0x1e0/0x2f8 __mutex_lock+0xcc/0x3c8 mutex_lock_nested+0x30/0x44 devfreq_monitor_resume+0x3c/0xf0 devfreq_simple_ondemand_handler+0x54/0x7c devfreq_resume_device+0xa4/0xe8 msm_devfreq_resume+0x78/0xa8 a6xx_pm_resume+0x110/0x234 adreno_runtime_resume+0x2c/0x38 pm_generic_runtime_resume+0x30/0x44 __rpm_callback+0x15c/0x174 rpm_callback+0x78/0x7c rpm_resume+0x318/0x524 __pm_runtime_resume+0x78/0xbc pm_runtime_get_sync.isra.0+0x14/0x20 msm_gpu_submit+0x58/0x178 msm_job_run+0x78/0x150 drm_sched_main+0x290/0x370 kthread+0xf0/0x100 ret_from_fork+0x10/0x20 other info that might help us debug this: Chain exists of: &devfreq->lock --> mmu_notifier_invalidate_range_start --> dma_fence_map Possible unsafe locking scenario: CPU0 CPU1 ---- ---- lock(dma_fence_map); lock(mmu_notifier_invalidate_range_start); lock(dma_fence_map); lock(&devfreq->lock); *** DEADLOCK *** 2 locks held by ring0/123: #0: ffffff8087201170 (&gpu->lock){+.+.}-{3:3}, at: msm_job_run+0x64/0x150 #1: ffffffd6f64e57e8 (dma_fence_map){++++}-{0:0}, at: msm_job_run+0x68/0x150 stack backtrace: CPU: 6 PID: 123 Comm: ring0 Not tainted 6.2.0-rc8-debug+ #556 Hardware name: Google Lazor (rev1 - 2) with LTE (DT) Call trace: dump_backtrace.part.0+0xb4/0xf8 show_stack+0x20/0x38 dump_stack_lvl+0x9c/0xd0 dump_stack+0x18/0x34 print_circular_bug+0x1b4/0x1f0 check_noncircular+0x78/0xac __lock_acquire+0xe00/0x1060 lock_acquire+0x1e0/0x2f8 __mutex_lock+0xcc/0x3c8 mutex_lock_nested+0x30/0x44 devfreq_monitor_resume+0x3c/0xf0 devfreq_simple_ondemand_handler+0x54/0x7c 
devfreq_resume_device+0xa4/0xe8 msm_devfreq_resume+0x78/0xa8 a6xx_pm_resume+0x110/0x234 adreno_runtime_resume+0x2c/0x38 pm_generic_runtime_resume+0x30/0x44 __rpm_callback+0x15c/0x174 rpm_callback+0x78/0x7c rpm_resume+0x318/0x524 __pm_runtime_resume+0x78/0xbc pm_runtime_get_sync.isra.0+0x14/0x20 msm_gpu_submit+0x58/0x178 msm_job_run+0x78/0x150 drm_sched_main+0x290/0x370 kthread+0xf0/0x100 ret_from_fork+0x10/0x20 The issue is that we cannot be holding any lock while doing memory allocations that is also needed in the job_run (and in the case of devfreq, this means runpm_resume()) because lockdep sees this as a potential dependency. Fortunately there is really no reason to hold the devfreq lock when we are creating the devfreq device, as it is not yet visible to any other task. The only reason it was needed was for a lockdep assert in devfreq_get_freq_range(). Instead, split this up into an internal fxn that is used in the devfreq_add_device() (where the lock is not required). Signed-off-by: Rob Clark --- drivers/devfreq/devfreq.c | 46 ++++++++++++++++++--------------------- 1 file changed, 21 insertions(+), 25 deletions(-) diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c index 817c71da391a..11b774048bd2 100644 --- a/drivers/devfreq/devfreq.c +++ b/drivers/devfreq/devfreq.c @@ -111,23 +111,13 @@ static unsigned long find_available_max_freq(struct devfreq *devfreq) return max_freq; } -/** - * devfreq_get_freq_range() - Get the current freq range - * @devfreq: the devfreq instance - * @min_freq: the min frequency - * @max_freq: the max frequency - * - * This takes into consideration all constraints. - */ -void devfreq_get_freq_range(struct devfreq *devfreq, - unsigned long *min_freq, - unsigned long *max_freq) +static void __get_freq_range(struct devfreq *devfreq, + unsigned long *min_freq, + unsigned long *max_freq) { unsigned long *freq_table = devfreq->freq_table; s32 qos_min_freq, qos_max_freq; - lockdep_assert_held(&devfreq->lock); - /* * Initialize minimum/maximum frequency from freq table. * The devfreq drivers can initialize this in either ascending or @@ -158,6 +148,23 @@ void devfreq_get_freq_range(struct devfreq *devfreq, if (*min_freq > *max_freq) *min_freq = *max_freq; } + +/** + * devfreq_get_freq_range() - Get the current freq range + * @devfreq: the devfreq instance + * @min_freq: the min frequency + * @max_freq: the max frequency + * + * This takes into consideration all constraints. 
+ */ +void devfreq_get_freq_range(struct devfreq *devfreq, + unsigned long *min_freq, + unsigned long *max_freq) +{ + lockdep_assert_held(&devfreq->lock); + + __get_freq_range(devfreq, min_freq, max_freq); +} EXPORT_SYMBOL(devfreq_get_freq_range); /** @@ -810,7 +817,6 @@ struct devfreq *devfreq_add_device(struct device *dev, } mutex_init(&devfreq->lock); - mutex_lock(&devfreq->lock); devfreq->dev.parent = dev; devfreq->dev.class = devfreq_class; devfreq->dev.release = devfreq_dev_release; @@ -823,17 +829,14 @@ struct devfreq *devfreq_add_device(struct device *dev, if (devfreq->profile->timer < 0 || devfreq->profile->timer >= DEVFREQ_TIMER_NUM) { - mutex_unlock(&devfreq->lock); err = -EINVAL; goto err_dev; } if (!devfreq->profile->max_state || !devfreq->profile->freq_table) { - mutex_unlock(&devfreq->lock); err = set_freq_table(devfreq); if (err < 0) goto err_dev; - mutex_lock(&devfreq->lock); } else { devfreq->freq_table = devfreq->profile->freq_table; devfreq->max_state = devfreq->profile->max_state; @@ -841,19 +844,17 @@ struct devfreq *devfreq_add_device(struct device *dev, devfreq->scaling_min_freq = find_available_min_freq(devfreq); if (!devfreq->scaling_min_freq) { - mutex_unlock(&devfreq->lock); err = -EINVAL; goto err_dev; } devfreq->scaling_max_freq = find_available_max_freq(devfreq); if (!devfreq->scaling_max_freq) { - mutex_unlock(&devfreq->lock); err = -EINVAL; goto err_dev; } - devfreq_get_freq_range(devfreq, &min_freq, &max_freq); + __get_freq_range(devfreq, &min_freq, &max_freq); devfreq->suspend_freq = dev_pm_opp_get_suspend_opp_freq(dev); devfreq->opp_table = dev_pm_opp_get_opp_table(dev); @@ -865,7 +866,6 @@ struct devfreq *devfreq_add_device(struct device *dev, dev_set_name(&devfreq->dev, "%s", dev_name(dev)); err = device_register(&devfreq->dev); if (err) { - mutex_unlock(&devfreq->lock); put_device(&devfreq->dev); goto err_out; } @@ -876,7 +876,6 @@ struct devfreq *devfreq_add_device(struct device *dev, devfreq->max_state), GFP_KERNEL); if (!devfreq->stats.trans_table) { - mutex_unlock(&devfreq->lock); err = -ENOMEM; goto err_devfreq; } @@ -886,7 +885,6 @@ struct devfreq *devfreq_add_device(struct device *dev, sizeof(*devfreq->stats.time_in_state), GFP_KERNEL); if (!devfreq->stats.time_in_state) { - mutex_unlock(&devfreq->lock); err = -ENOMEM; goto err_devfreq; } @@ -896,8 +894,6 @@ struct devfreq *devfreq_add_device(struct device *dev, srcu_init_notifier_head(&devfreq->transition_notifier_list); - mutex_unlock(&devfreq->lock); - err = dev_pm_qos_add_request(dev, &devfreq->user_min_freq_req, DEV_PM_QOS_MIN_FREQUENCY, 0); if (err < 0) From patchwork Mon Mar 20 14:43:38 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 13181355 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A7894C761AF for ; Mon, 20 Mar 2023 14:46:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231961AbjCTOqT (ORCPT ); Mon, 20 Mar 2023 10:46:19 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52070 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231913AbjCTOpy (ORCPT ); Mon, 20 Mar 2023 10:45:54 -0400 Received: from mail-pj1-x1034.google.com (mail-pj1-x1034.google.com [IPv6:2607:f8b0:4864:20::1034]) by lindbergh.monkeyblade.net 
(Postfix) with ESMTPS id C8F301CBC0; Mon, 20 Mar 2023 07:45:06 -0700 (PDT) Received: by mail-pj1-x1034.google.com with SMTP id e15-20020a17090ac20f00b0023d1b009f52so16745718pjt.2; Mon, 20 Mar 2023 07:45:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; t=1679323506; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=ETBxoAbrNslCbg8f3yN7x2XxTKk/U7inb50D/YGBCKI=; b=AhmyuHGG/8/igp1P9vAPy7ZPfETRY0TeHFg6mYwFSEp4TLgeIwiFdiaf87Ez+cq6wm C0AW68YtTAXKI9AjPeHGpqoOKDOJViRoc8P16XiAr8I0ETNoavUX0aN2VHu+L3I9EpAx 94JZm4mKBNIp2ZQQhCmhmIZ27ex95EPXwwPo+NBFflXpuSJp8pFQ/0voEc1oCOpHxoni 8ULQEqZIlQ/lAaBa+oSmDtiumvwxw3FoD1C5KzD46aL5yq6V/jfmsoA6q9c/g0UnjK9Q +ppWuOEEO16lI8FMgRLUeqiUz80fjx84GrCxTDpQrCeJtaFq7ZgM+N/oZ2y/H2BMqU2+ UzSg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1679323506; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=ETBxoAbrNslCbg8f3yN7x2XxTKk/U7inb50D/YGBCKI=; b=KEvqarqcrvlBApjWBE72fLKpENpHDpvn2MVPvp2Hi+64Swsd19Cg6oCfmBSMSUYJ52 OS3p266Ztq/vE0tFzr0gQlw1/9pbyi1dqHRR4OSprxH9gdL+fSdi4YSq+Y2MZtqb3X5j wRMAudID0EUcGzqQar3CBTov7Gh3Vd2Zk+r4ukh2gLDkw7jxoL419I0g8qzS+QCIMBhk c6a5vP6C1R6yyuDojzruDyp+mP4gG/DO1K/TCt5ZzNeRNPLNyvexnOyWKDl5mpOGb7m1 4q4NSAef2LzHe//Qdz7M8iHVMqwrnl9XcZ2NrQFE/YhMXtwipek6dHWG+D3xni/dvaPe ocSQ== X-Gm-Message-State: AO0yUKUa7ihIEq3B9haywr+Bxc9m336hX4xygtAb0AeB7e6iITPNQPxd omzTZKNLKwg92Eil4VNJLiE= X-Google-Smtp-Source: AK7set9nBsw0YboqA8HGtkhw/1efSi+yfRsWBIzz4ldlha0ygO0WznYR00bgmoDEmsF5J6m1TLVXgQ== X-Received: by 2002:a17:903:280c:b0:1a1:adb0:ed72 with SMTP id kp12-20020a170903280c00b001a1adb0ed72mr9170835plb.4.1679323506157; Mon, 20 Mar 2023 07:45:06 -0700 (PDT) Received: from localhost ([2a00:79e1:abd:4a00:61b:48ed:72ab:435b]) by smtp.gmail.com with ESMTPSA id 17-20020a170902ee5100b0019339f3368asm6853516plo.3.2023.03.20.07.45.05 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 20 Mar 2023 07:45:05 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark , MyungJoo Ham , Kyungmin Park , Chanwoo Choi , linux-pm@vger.kernel.org (open list:DEVICE FREQUENCY (DEVFREQ)), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 16/23] PM / devfreq: Teach lockdep about locking order Date: Mon, 20 Mar 2023 07:43:38 -0700 Message-Id: <20230320144356.803762-17-robdclark@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230320144356.803762-1-robdclark@gmail.com> References: <20230320144356.803762-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark This will make it easier to catch places doing allocations that can trigger reclaim under devfreq->lock. Because devfreq->lock is held over various devfreq_dev_profile callbacks, there might be some fallout if those callbacks do allocations that can trigger reclaim, but I've looked through the various callback implementations and don't see anything obvious. If it does trigger any lockdep splats, those should be fixed. 
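Concretely, the priming added below records the dependency up front, roughly:

	/* lockdep now believes we are inside reclaim ... */
	fs_reclaim_acquire(GFP_KERNEL);
	/* ... and learns that devfreq->lock can be taken in that context,
	 * i.e. it records the dependency  fs_reclaim -> devfreq->lock
	 */
	might_lock(&devfreq->lock);
	fs_reclaim_release(GFP_KERNEL);

After that, any allocation that can enter reclaim while devfreq->lock is held would close the cycle devfreq->lock -> fs_reclaim -> devfreq->lock, so lockdep complains immediately rather than only when reclaim happens to run at the worst moment.
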
Signed-off-by: Rob Clark --- drivers/devfreq/devfreq.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c index 11b774048bd2..5ce3bf9b59e7 100644 --- a/drivers/devfreq/devfreq.c +++ b/drivers/devfreq/devfreq.c @@ -817,6 +817,12 @@ struct devfreq *devfreq_add_device(struct device *dev, } mutex_init(&devfreq->lock); + + /* Teach lockdep about lock ordering wrt. shrinker: */ + fs_reclaim_acquire(GFP_KERNEL); + might_lock(&devfreq->lock); + fs_reclaim_release(GFP_KERNEL); + devfreq->dev.parent = dev; devfreq->dev.class = devfreq_class; devfreq->dev.release = devfreq_dev_release; From patchwork Mon Mar 20 14:43:39 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 13181359 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 61602C7618D for ; Mon, 20 Mar 2023 14:46:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231615AbjCTOqt (ORCPT ); Mon, 20 Mar 2023 10:46:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51906 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231200AbjCTOqP (ORCPT ); Mon, 20 Mar 2023 10:46:15 -0400 Received: from mail-pl1-x62f.google.com (mail-pl1-x62f.google.com [IPv6:2607:f8b0:4864:20::62f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 179F0AD2A; Mon, 20 Mar 2023 07:45:09 -0700 (PDT) Received: by mail-pl1-x62f.google.com with SMTP id w4so4588182plg.9; Mon, 20 Mar 2023 07:45:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; t=1679323508; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=xsq0RBbnJAmMfF8oeFhuiCUGc8EVlJtKASulhInC9Fo=; b=p9GBRAEl+Zz6ySuBggEsfElfaKfonm9GPtTYpwm/8fiqVvksH6NvISc12mzFYPoN9F +q9Gkcm1rOIPfG7hscKR6HQGh3cNRbaqHfMoXRq1OAzQAoImadzMO6xzoJ24931sLeP0 XMLomMkb6/tLUozr48+DwOxKtvsR6yPJ3YCvj/3k5ER7Mwrot6152NM+2ATfnVS5NyWG 6NKoUjUvIQT66ERX0lytM5yjdvodMixEz02YNTLV/kHYe+B2g+hLIToa6+mZv9iDCeC6 LIYyG2lv1209STn622E34Y0uCSi0hxg0BQ7YwO+PX7O9/L+Z6eVygpl+oVAM3x1BouSi peOQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1679323508; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=xsq0RBbnJAmMfF8oeFhuiCUGc8EVlJtKASulhInC9Fo=; b=Rctm4Vf2qJWlYHIOre0zp7SKKEow5UkQEucCUghfvHuvGfqvmMkzP99HgTD51p25Ja IkBvZ9uuMO6Ao6z7wmXlM4UziTMwHz3sPzgxnAEmI0DqL4Kd1psEfGJXM18rm9dkiLRk HpA7LNNeuTDqjhCyAhSObc4Xs7JMoeU5wDVwe3E2BAPz4qwUlzGmYAEyl69ukvkAycbF zdnZ7H8Fm9Vd++/kk+iEiK5CG8njBuO/X1tzcZPOtmdRS0IRUCc0OH94GpcV7rnnp7RA wf8FeDOQRR1jIb9JWeWju/ir909uWlpEIgk1X64jSdSd/2uaYUd9GDQmlx4+5MvFjnm/ p5XA== X-Gm-Message-State: AO0yUKW7Po4gZOS4/x7ZJiye7TrceUZ6N6UVCoMLzYNiNN8rpdlc11r1 +daADB9j5/wB9Ufp9T2SDFM= X-Google-Smtp-Source: AK7set+UDt95ULOdbK+W700UNozPXuHXfJsTpTogb+ie1dDwZe+HRPn9R6g0sX63vBd6BHd8dj2PBw== X-Received: by 2002:a05:6a20:6709:b0:da:5bcc:5b6c with SMTP id q9-20020a056a20670900b000da5bcc5b6cmr176291pzh.49.1679323507798; Mon, 20 Mar 2023 07:45:07 -0700 (PDT) Received: from localhost ([2a00:79e1:abd:4a00:61b:48ed:72ab:435b]) by smtp.gmail.com with ESMTPSA id 
c8-20020a62e808000000b005cdbd9c8825sm6417680pfi.195.2023.03.20.07.45.07 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 20 Mar 2023 07:45:07 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark , "Rafael J. Wysocki" , Pavel Machek , Len Brown , Greg Kroah-Hartman , linux-pm@vger.kernel.org (open list:HIBERNATION (aka Software Suspend, aka swsusp)), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 17/23] PM / QoS: Fix constraints alloc vs reclaim locking Date: Mon, 20 Mar 2023 07:43:39 -0700 Message-Id: <20230320144356.803762-18-robdclark@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230320144356.803762-1-robdclark@gmail.com> References: <20230320144356.803762-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark In the process of adding lockdep annotation for drm GPU scheduler's job_run() to detect potential deadlock against shrinker/reclaim, I hit this lockdep splat: ====================================================== WARNING: possible circular locking dependency detected 6.2.0-rc8-debug+ #558 Tainted: G W ------------------------------------------------------ ring0/125 is trying to acquire lock: ffffffd6d6ce0f28 (dev_pm_qos_mtx){+.+.}-{3:3}, at: dev_pm_qos_update_request+0x38/0x68 but task is already holding lock: ffffff8087239208 (&gpu->active_lock){+.+.}-{3:3}, at: msm_gpu_submit+0xec/0x178 which lock already depends on the new lock. the existing dependency chain (in reverse order) is: -> #4 (&gpu->active_lock){+.+.}-{3:3}: __mutex_lock+0xcc/0x3c8 mutex_lock_nested+0x30/0x44 msm_gpu_submit+0xec/0x178 msm_job_run+0x78/0x150 drm_sched_main+0x290/0x370 kthread+0xf0/0x100 ret_from_fork+0x10/0x20 -> #3 (dma_fence_map){++++}-{0:0}: __dma_fence_might_wait+0x74/0xc0 dma_resv_lockdep+0x1f4/0x2f4 do_one_initcall+0x104/0x2bc kernel_init_freeable+0x344/0x34c kernel_init+0x30/0x134 ret_from_fork+0x10/0x20 -> #2 (mmu_notifier_invalidate_range_start){+.+.}-{0:0}: fs_reclaim_acquire+0x80/0xa8 slab_pre_alloc_hook.constprop.0+0x40/0x25c __kmem_cache_alloc_node+0x60/0x1cc __kmalloc+0xd8/0x100 topology_parse_cpu_capacity+0x8c/0x178 get_cpu_for_node+0x88/0xc4 parse_cluster+0x1b0/0x28c parse_cluster+0x8c/0x28c init_cpu_topology+0x168/0x188 smp_prepare_cpus+0x24/0xf8 kernel_init_freeable+0x18c/0x34c kernel_init+0x30/0x134 ret_from_fork+0x10/0x20 -> #1 (fs_reclaim){+.+.}-{0:0}: __fs_reclaim_acquire+0x3c/0x48 fs_reclaim_acquire+0x54/0xa8 slab_pre_alloc_hook.constprop.0+0x40/0x25c __kmem_cache_alloc_node+0x60/0x1cc kmalloc_trace+0x50/0xa8 dev_pm_qos_constraints_allocate+0x38/0x100 __dev_pm_qos_add_request+0xb0/0x1e8 dev_pm_qos_add_request+0x58/0x80 dev_pm_qos_expose_latency_limit+0x60/0x13c register_cpu+0x12c/0x130 topology_init+0xac/0xbc do_one_initcall+0x104/0x2bc kernel_init_freeable+0x344/0x34c kernel_init+0x30/0x134 ret_from_fork+0x10/0x20 -> #0 (dev_pm_qos_mtx){+.+.}-{3:3}: __lock_acquire+0xe00/0x1060 lock_acquire+0x1e0/0x2f8 __mutex_lock+0xcc/0x3c8 mutex_lock_nested+0x30/0x44 dev_pm_qos_update_request+0x38/0x68 msm_devfreq_boost+0x40/0x70 msm_devfreq_active+0xc0/0xf0 msm_gpu_submit+0x10c/0x178 msm_job_run+0x78/0x150 drm_sched_main+0x290/0x370 kthread+0xf0/0x100 ret_from_fork+0x10/0x20 other info that might help us debug this: Chain exists of: dev_pm_qos_mtx --> dma_fence_map --> &gpu->active_lock Possible unsafe locking scenario: CPU0 CPU1 ---- ---- lock(&gpu->active_lock); 
lock(dma_fence_map); lock(&gpu->active_lock); lock(dev_pm_qos_mtx); *** DEADLOCK *** 3 locks held by ring0/123: #0: ffffff8087251170 (&gpu->lock){+.+.}-{3:3}, at: msm_job_run+0x64/0x150 #1: ffffffd00b0e57e8 (dma_fence_map){++++}-{0:0}, at: msm_job_run+0x68/0x150 #2: ffffff8087251208 (&gpu->active_lock){+.+.}-{3:3}, at: msm_gpu_submit+0xec/0x178 stack backtrace: CPU: 6 PID: 123 Comm: ring0 Not tainted 6.2.0-rc8-debug+ #559 Hardware name: Google Lazor (rev1 - 2) with LTE (DT) Call trace: dump_backtrace.part.0+0xb4/0xf8 show_stack+0x20/0x38 dump_stack_lvl+0x9c/0xd0 dump_stack+0x18/0x34 print_circular_bug+0x1b4/0x1f0 check_noncircular+0x78/0xac __lock_acquire+0xe00/0x1060 lock_acquire+0x1e0/0x2f8 __mutex_lock+0xcc/0x3c8 mutex_lock_nested+0x30/0x44 dev_pm_qos_update_request+0x38/0x68 msm_devfreq_boost+0x40/0x70 msm_devfreq_active+0xc0/0xf0 msm_gpu_submit+0x10c/0x178 msm_job_run+0x78/0x150 drm_sched_main+0x290/0x370 kthread+0xf0/0x100 ret_from_fork+0x10/0x20 The issue is that dev_pm_qos_mtx is held in the runpm suspend/resume (or freq change) path, but it is also held across allocations that could recurse into shrinker. Solve this by changing dev_pm_qos_constraints_allocate() into a function that can be called unconditionally before the device qos object is needed and before aquiring dev_pm_qos_mtx. This way the allocations can be done without holding the mutex. In the case that we raced with another thread to allocate the qos object, detect this *after* acquiring the dev_pm_qos_mtx and simply free the redundant allocations. Signed-off-by: Rob Clark --- drivers/base/power/qos.c | 60 +++++++++++++++++++++++++++------------- 1 file changed, 41 insertions(+), 19 deletions(-) diff --git a/drivers/base/power/qos.c b/drivers/base/power/qos.c index 8e93167f1783..f3e0c6b65635 100644 --- a/drivers/base/power/qos.c +++ b/drivers/base/power/qos.c @@ -185,18 +185,24 @@ static int apply_constraint(struct dev_pm_qos_request *req, } /* - * dev_pm_qos_constraints_allocate + * dev_pm_qos_constraints_ensure_allocated * @dev: device to allocate data for * - * Called at the first call to add_request, for constraint data allocation - * Must be called with the dev_pm_qos_mtx mutex held + * Called to ensure that devices qos is allocated, before acquiring + * dev_pm_qos_mtx. */ -static int dev_pm_qos_constraints_allocate(struct device *dev) +static int dev_pm_qos_constraints_ensure_allocated(struct device *dev) { struct dev_pm_qos *qos; struct pm_qos_constraints *c; struct blocking_notifier_head *n; + if (!dev) + return -ENODEV; + + if (!IS_ERR_OR_NULL(dev->power.qos)) + return 0; + qos = kzalloc(sizeof(*qos), GFP_KERNEL); if (!qos) return -ENOMEM; @@ -227,10 +233,26 @@ static int dev_pm_qos_constraints_allocate(struct device *dev) INIT_LIST_HEAD(&qos->flags.list); + mutex_lock(&dev_pm_qos_mtx); + + if (!IS_ERR_OR_NULL(dev->power.qos)) { + /* + * We have raced with another task to create the qos. + * No biggie, just free the resources we've allocated + * outside of dev_pm_qos_mtx and move on with life. 
+ */ + kfree(n); + kfree(qos); + goto unlock; + } + spin_lock_irq(&dev->power.lock); dev->power.qos = qos; spin_unlock_irq(&dev->power.lock); +unlock: + mutex_unlock(&dev_pm_qos_mtx); + return 0; } @@ -331,17 +353,15 @@ static int __dev_pm_qos_add_request(struct device *dev, { int ret = 0; - if (!dev || !req || dev_pm_qos_invalid_req_type(dev, type)) + if (!req || dev_pm_qos_invalid_req_type(dev, type)) return -EINVAL; if (WARN(dev_pm_qos_request_active(req), "%s() called for already added request\n", __func__)) return -EINVAL; - if (IS_ERR(dev->power.qos)) + if (IS_ERR_OR_NULL(dev->power.qos)) ret = -ENODEV; - else if (!dev->power.qos) - ret = dev_pm_qos_constraints_allocate(dev); trace_dev_pm_qos_add_request(dev_name(dev), type, value); if (ret) @@ -390,6 +410,10 @@ int dev_pm_qos_add_request(struct device *dev, struct dev_pm_qos_request *req, { int ret; + ret = dev_pm_qos_constraints_ensure_allocated(dev); + if (ret) + return ret; + mutex_lock(&dev_pm_qos_mtx); ret = __dev_pm_qos_add_request(dev, req, type, value); mutex_unlock(&dev_pm_qos_mtx); @@ -537,15 +561,11 @@ int dev_pm_qos_add_notifier(struct device *dev, struct notifier_block *notifier, { int ret = 0; - mutex_lock(&dev_pm_qos_mtx); - - if (IS_ERR(dev->power.qos)) - ret = -ENODEV; - else if (!dev->power.qos) - ret = dev_pm_qos_constraints_allocate(dev); - + ret = dev_pm_qos_constraints_ensure_allocated(dev); if (ret) - goto unlock; + return ret; + + mutex_lock(&dev_pm_qos_mtx); switch (type) { case DEV_PM_QOS_RESUME_LATENCY: @@ -565,7 +585,6 @@ int dev_pm_qos_add_notifier(struct device *dev, struct notifier_block *notifier, ret = -EINVAL; } -unlock: mutex_unlock(&dev_pm_qos_mtx); return ret; } @@ -905,10 +924,13 @@ int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val) { int ret; + ret = dev_pm_qos_constraints_ensure_allocated(dev); + if (ret) + return ret; + mutex_lock(&dev_pm_qos_mtx); - if (IS_ERR_OR_NULL(dev->power.qos) - || !dev->power.qos->latency_tolerance_req) { + if (!dev->power.qos->latency_tolerance_req) { struct dev_pm_qos_request *req; if (val < 0) { From patchwork Mon Mar 20 14:43:40 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 13181360 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C4472C6FD1D for ; Mon, 20 Mar 2023 14:46:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232009AbjCTOqw (ORCPT ); Mon, 20 Mar 2023 10:46:52 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52748 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231899AbjCTOqR (ORCPT ); Mon, 20 Mar 2023 10:46:17 -0400 Received: from mail-pj1-x1030.google.com (mail-pj1-x1030.google.com [IPv6:2607:f8b0:4864:20::1030]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F3C79D302; Mon, 20 Mar 2023 07:45:10 -0700 (PDT) Received: by mail-pj1-x1030.google.com with SMTP id p3-20020a17090a74c300b0023f69bc7a68so8146731pjl.4; Mon, 20 Mar 2023 07:45:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; t=1679323509; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=4LG2BAueGKKjRd5oOzkHW01KAXz7hJtO0YTeAZX7ebQ=; 
b=JYMfuqbtpiJILVkp0uoqa+oBsFme/4zOTzeVWYi3j3wR1ZQJWuMq7gV27NVroQ4Cof DvrYwJDRdr8vjUe7AoOTjec4wKPvjzpSGWLEwjaFkMJH0s8z+PQvoCMmcpojocLE7ybO oo+/Hl2TVSIoCQa+s4zQECFi1GZGH4VbTC86Yqlf91CDneoARc4TOJ4Z2guCXmZDIpSu /196tgzmV3xQEvnInxfSyu1LNUDP3vSbz96LAF3Eqc21Kvo8zow4uMY7aNpcZ4+NT4Yg MDg9DNeVPHKmg+hxMcvtXfab3OuON64cpe+DAyLDIvSqDWJdJRf7qPJfP/BVFmTravPm rQuw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1679323509; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=4LG2BAueGKKjRd5oOzkHW01KAXz7hJtO0YTeAZX7ebQ=; b=UrNvbjjWzYc5ua1kh5fogAC/kGvNkeEO/mH/8s/RbZZnP5042J0uXChEEoknNxGfyL UH0zwFqJX5eCROH24/M7uqIub0l6hBYTbUytYna8balEm6JVHnQ11PrV41dFdzSDCIY8 R0ra9MXwCNmEKUHi231Ff/MMMmnYXA+w2Ohl9AtMfi+iYijV0rdbnsbLxlnJonftPWBi vNQy1B4vI7N7juWymju54QnKUyavLrZlede47FenT4ZvGXRY5U0MPlL2eMJLgFKBylXX pyBojVIwStxyzXRLswXcxcsIJ/XrVDiYBLfNkJTLVGORmOWVQmZBb6I7HPruWKnV38Jb IesA== X-Gm-Message-State: AO0yUKWBbmQ1kD6WIYSIHf1CX+ZiTOVHuCM010AlE4lYEfCYVXjrExMj dF4i+Z6WxaAITxPJYwZc1Uw= X-Google-Smtp-Source: AK7set8ZMssF9rVaP9Q2tyqSeIvwgvafCYLFB4nuIVnqISbFD6NVpbGZbgq3RCYIRz5Qz1iGPiJ1mg== X-Received: by 2002:a17:902:c40a:b0:1a1:b9e0:fa1c with SMTP id k10-20020a170902c40a00b001a1b9e0fa1cmr10839593plk.0.1679323509573; Mon, 20 Mar 2023 07:45:09 -0700 (PDT) Received: from localhost ([2a00:79e1:abd:4a00:61b:48ed:72ab:435b]) by smtp.gmail.com with ESMTPSA id io7-20020a17090312c700b001a1ca6dc38csm2920651plb.118.2023.03.20.07.45.08 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 20 Mar 2023 07:45:09 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark , "Rafael J. Wysocki" , Len Brown , Pavel Machek , Greg Kroah-Hartman , linux-pm@vger.kernel.org (open list:POWER MANAGEMENT CORE), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 18/23] PM / QoS: Decouple request alloc from dev_pm_qos_mtx Date: Mon, 20 Mar 2023 07:43:40 -0700 Message-Id: <20230320144356.803762-19-robdclark@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230320144356.803762-1-robdclark@gmail.com> References: <20230320144356.803762-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Similar to the previous patch, move the allocation out from under dev_pm_qos_mtx, by speculatively doing the allocation and handle any race after acquiring dev_pm_qos_mtx by freeing the redundant allocation. 
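The shape of that pattern, reduced to a minimal sketch with made-up names (struct obj, obj_lock and ensure_obj() are illustrative only, not part of this patch): the allocation is done with no locks held, so it may freely enter reclaim, and a race with a concurrent caller is resolved after taking the mutex by discarding the redundant copy:

#include <linux/mutex.h>
#include <linux/slab.h>

struct obj {
        int val;
};

static DEFINE_MUTEX(obj_lock);
static struct obj *the_obj;

static int ensure_obj(void)
{
        /* Speculative allocation, done before taking obj_lock. */
        struct obj *p = kzalloc(sizeof(*p), GFP_KERNEL);

        if (!p)
                return -ENOMEM;

        mutex_lock(&obj_lock);
        if (the_obj) {
                /* Lost the race (or it already existed): drop our copy. */
                kfree(p);
        } else {
                the_obj = p;
        }
        mutex_unlock(&obj_lock);

        return 0;
}

Since kzalloc(GFP_KERNEL) can block in reclaim, keeping it outside dev_pm_qos_mtx is what breaks the mutex-vs-reclaim dependency shown in the previous patch's lockdep splat.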
Signed-off-by: Rob Clark --- drivers/base/power/qos.c | 12 +++++++++++- 1 file changed, 11 insertions(+), 1 deletion(-) diff --git a/drivers/base/power/qos.c b/drivers/base/power/qos.c index f3e0c6b65635..9cba334b3729 100644 --- a/drivers/base/power/qos.c +++ b/drivers/base/power/qos.c @@ -922,12 +922,16 @@ s32 dev_pm_qos_get_user_latency_tolerance(struct device *dev) */ int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val) { + struct dev_pm_qos_request *req = NULL; int ret; ret = dev_pm_qos_constraints_ensure_allocated(dev); if (ret) return ret; + if (!dev->power.qos->latency_tolerance_req) + req = kzalloc(sizeof(*req), GFP_KERNEL); + mutex_lock(&dev_pm_qos_mtx); if (!dev->power.qos->latency_tolerance_req) { @@ -940,7 +944,6 @@ int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val) ret = -EINVAL; goto out; } - req = kzalloc(sizeof(*req), GFP_KERNEL); if (!req) { ret = -ENOMEM; goto out; @@ -952,6 +955,13 @@ int dev_pm_qos_update_user_latency_tolerance(struct device *dev, s32 val) } dev->power.qos->latency_tolerance_req = req; } else { + /* + * If we raced with another thread to allocate the request, + * simply free the redundant allocation and move on. + */ + if (req) + kfree(req); + if (val < 0) { __dev_pm_qos_drop_user_request(dev, DEV_PM_QOS_LATENCY_TOLERANCE); ret = 0; From patchwork Mon Mar 20 14:43:41 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 13181357 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 749C3C7618A for ; Mon, 20 Mar 2023 14:46:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231825AbjCTOqZ (ORCPT ); Mon, 20 Mar 2023 10:46:25 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52172 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231886AbjCTOp4 (ORCPT ); Mon, 20 Mar 2023 10:45:56 -0400 Received: from mail-pl1-x62b.google.com (mail-pl1-x62b.google.com [IPv6:2607:f8b0:4864:20::62b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 19F32EB73; Mon, 20 Mar 2023 07:45:11 -0700 (PDT) Received: by mail-pl1-x62b.google.com with SMTP id o11so12681606ple.1; Mon, 20 Mar 2023 07:45:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; t=1679323511; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=HW0myZZejO7uNMRvPlac+lbEnpvRrjfPGVZeXhkTeDU=; b=PhP6+MY2gm+VfsWU6J4UMRMOHIYbVs8+vizsPZv168wBOIB9PALpYVfd6ZivPoEngY t/VUJFnA/t4Ix6GsI7xwhIXMkAYpKhhRp3xvngTue6Yr2fFXbtmHtWpAQB/s+naJOTYw IRTZeCf/1OLyWnQopg4LLMR+VeOJW0SWboQWMJmtAUiYKPEyPfrp5RIa7I8htQgDrv7u Id7p9NjiUuVcsJhDDDBEShANjbF6FYb4DklqnEW405YxDUdx+2496YftV81PkBEMAJVb 1iddGsKJ7Zt798qVPBcNgXNRq0rx7b3ZiQPIaKcFfUSv7FPK9Xd9DR2vPRCvRDOQgqRi 4D2A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1679323511; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=HW0myZZejO7uNMRvPlac+lbEnpvRrjfPGVZeXhkTeDU=; b=i3CyqmY8vAPd+7VRJnahofMHzed8s099n1auEgtIfB4Uyk6R8naGFiG9YxGv8bUV8d 
EWZuE9kiZafCyNQQEbV/LsrnTmayd/Iy7P4pcYlKDkTUvpGOUzUJM/05uQhTNdCtsOOC TQ8GGnGW7o8vFhCLeQGV6lqnG3GYwIpSMyXxyex1Y5y9/q4vIHzLfr11FJ1LBEdLD13D ORoSwwwhNeK9puSXWADn06E5tnyoHSK5sl7AlULPQP/6cB8PQYyi5fwqK3amIFtPrUDW 2pmlYgosfbBmRMEpf79NZBikEObeXjkEkFJKSHa1Z/fuY+3JFg5g/d53rnMk24fIKQds d3Qw== X-Gm-Message-State: AO0yUKUs0PIKSDjE9NPz5e98vFbcgisCeSdjKyh/uGaamXgNb147/Nvm 91t4hty+BQqEVOoa1bDV7ZM= X-Google-Smtp-Source: AK7set8dqJQ+HWWTlRCoB9bVOq+YmX/QlpDsIah5XuQdk5bfZ+gnkND2WAlzEoISE8F1IrVQp/Nnmw== X-Received: by 2002:a05:6a20:be09:b0:d6:9e5e:f240 with SMTP id ge9-20020a056a20be0900b000d69e5ef240mr13319951pzb.4.1679323511174; Mon, 20 Mar 2023 07:45:11 -0700 (PDT) Received: from localhost ([2a00:79e1:abd:4a00:61b:48ed:72ab:435b]) by smtp.gmail.com with ESMTPSA id u15-20020a62ed0f000000b005809d382016sm6433907pfh.74.2023.03.20.07.45.10 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 20 Mar 2023 07:45:10 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark , "Rafael J. Wysocki" , Pavel Machek , Len Brown , Greg Kroah-Hartman , linux-pm@vger.kernel.org (open list:HIBERNATION (aka Software Suspend, aka swsusp)), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 19/23] PM / QoS: Teach lockdep about dev_pm_qos_mtx locking order Date: Mon, 20 Mar 2023 07:43:41 -0700 Message-Id: <20230320144356.803762-20-robdclark@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230320144356.803762-1-robdclark@gmail.com> References: <20230320144356.803762-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Annotate dev_pm_qos_mtx to teach lockdep to scream about allocations that could trigger reclaim under dev_pm_qos_mtx. Signed-off-by: Rob Clark --- drivers/base/power/qos.c | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/drivers/base/power/qos.c b/drivers/base/power/qos.c index 9cba334b3729..d4addda3944a 100644 --- a/drivers/base/power/qos.c +++ b/drivers/base/power/qos.c @@ -1012,3 +1012,14 @@ void dev_pm_qos_hide_latency_tolerance(struct device *dev) pm_runtime_put(dev); } EXPORT_SYMBOL_GPL(dev_pm_qos_hide_latency_tolerance); + +static int __init dev_pm_qos_init(void) +{ + /* Teach lockdep about lock ordering wrt. 
shrinker: */ + fs_reclaim_acquire(GFP_KERNEL); + might_lock(&dev_pm_qos_mtx); + fs_reclaim_release(GFP_KERNEL); + + return 0; +} +early_initcall(dev_pm_qos_init); From patchwork Mon Mar 20 14:43:42 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 13181358 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 85E61C7618D for ; Mon, 20 Mar 2023 14:46:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231934AbjCTOqh (ORCPT ); Mon, 20 Mar 2023 10:46:37 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51900 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230238AbjCTOqO (ORCPT ); Mon, 20 Mar 2023 10:46:14 -0400 Received: from mail-pj1-x1032.google.com (mail-pj1-x1032.google.com [IPv6:2607:f8b0:4864:20::1032]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4C980B763; Mon, 20 Mar 2023 07:45:13 -0700 (PDT) Received: by mail-pj1-x1032.google.com with SMTP id e15-20020a17090ac20f00b0023d1b009f52so16746144pjt.2; Mon, 20 Mar 2023 07:45:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; t=1679323513; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=QfAHpBWoMxTHMa2AnmzhjkXruv6iv5Hi+uoFH6JuoB0=; b=dvZaXVTGYKr1InrytzOA1sJ4NQK/BXq3NKHKzdudDkqaq0XSBSXf9z3k/Luykto2iP c+ZneJL4VSdLkTAQNEKxqGKTXzz+oLy7ySCL//+bGiDqqS8Kos40r9xWpaOKAqeVAX1I 8ud9G7quF6etshvv3cC+Cfw4AzbUl/5lJl5hQSMV5zMZN1hbdM58wMEWfU4eTlKS0D+h 8Km10S+bHJjWVXYJC7OKKLW+B6GYh+XLF9XGIOetq1OYyvj5k7FAok2JG+bMRN1OYN8H qUwQuvMHK6z5XutxhtjFHU09oWcsffFuyKjkKHs5HxjpqzlLs1cvryGa/JrKx16Z+bb/ eArw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1679323513; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=QfAHpBWoMxTHMa2AnmzhjkXruv6iv5Hi+uoFH6JuoB0=; b=XnmPL5XTe1//zrDlUU4S2MPP1SDN/vVFqhA4pu4MyQi9HWggeyEN60cUNZW/wWwQp7 WcDs6p2KNRFpbnnm1LeQBj1bLmVa3/4fYhxcYdXqPbQ2SEnrftK3pYQMouWPHOz1/Dp5 19txSRBXPI1v5PgpeXNkthgkJLdTIDuacSG6Oput7ybzfPBDDGGM6KiTEth8CtYaiFh8 5YZWSE2HLqA4JRdR3x+gmKKA/LFCmWY67ubM7ymJtwULpq7VzRLeeqgWz7geiKOcKcFa CZAHeKZ/k/9w7/9/NotHqk6pCjiYDqwOgl2oMMc40GtXDQN9onRfJmCNDlNFnn3xa69j WGTA== X-Gm-Message-State: AO0yUKW5V0N4AdMn32iNAjPhiiFqXKNhXC26dJBCbrHKqaGkocLMxexw 1EP8G+qev7mqLiGlAR8E5fI= X-Google-Smtp-Source: AK7set/kgpnfgYQ39cq6tSqCbmQJlYOTfrb5Xz1qqT6kTEByPM9h/PbOYeIY23ga+5pEL0JwAhoBsw== X-Received: by 2002:a17:902:f9cf:b0:19c:dedd:2ace with SMTP id kz15-20020a170902f9cf00b0019cdedd2acemr15267590plb.18.1679323512781; Mon, 20 Mar 2023 07:45:12 -0700 (PDT) Received: from localhost ([2a00:79e1:abd:4a00:61b:48ed:72ab:435b]) by smtp.gmail.com with ESMTPSA id jc7-20020a17090325c700b001a0667822c8sm6818324plb.94.2023.03.20.07.45.12 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 20 Mar 2023 07:45:12 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark , Andy Gross , Bjorn Andersson , Konrad Dybcio , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 20/23] soc: 
qcom: smd-rpm: Use GFP_ATOMIC in write path Date: Mon, 20 Mar 2023 07:43:42 -0700 Message-Id: <20230320144356.803762-21-robdclark@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230320144356.803762-1-robdclark@gmail.com> References: <20230320144356.803762-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Preparing for better lockdep annotations for things that happen in runpm suspend/resume path vs shrinker/reclaim in the following patches, we need to avoid allocations that can trigger reclaim in the icc_set_bw() path. In the RPMh case, rpmh_write_batch() already uses GFP_ATOMIC, so it should be reasonable to use in the smd-rpm case as well. Alternatively, 256bytes is small enough for a function that isn't called recursively to allocate on-stack. Signed-off-by: Rob Clark --- drivers/soc/qcom/smd-rpm.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/soc/qcom/smd-rpm.c b/drivers/soc/qcom/smd-rpm.c index 7e3b6a7ea34c..478da981d9fb 100644 --- a/drivers/soc/qcom/smd-rpm.c +++ b/drivers/soc/qcom/smd-rpm.c @@ -113,7 +113,7 @@ int qcom_rpm_smd_write(struct qcom_smd_rpm *rpm, if (WARN_ON(size >= 256)) return -EINVAL; - pkt = kmalloc(size, GFP_KERNEL); + pkt = kmalloc(size, GFP_ATOMIC); if (!pkt) return -ENOMEM; From patchwork Mon Mar 20 14:43:43 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 13181361 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E99BBC7618A for ; Mon, 20 Mar 2023 14:46:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232022AbjCTOqy (ORCPT ); Mon, 20 Mar 2023 10:46:54 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:54250 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231946AbjCTOqR (ORCPT ); Mon, 20 Mar 2023 10:46:17 -0400 Received: from mail-pg1-x529.google.com (mail-pg1-x529.google.com [IPv6:2607:f8b0:4864:20::529]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F132BE3A4; Mon, 20 Mar 2023 07:45:14 -0700 (PDT) Received: by mail-pg1-x529.google.com with SMTP id z10so6717574pgr.8; Mon, 20 Mar 2023 07:45:14 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; t=1679323514; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=59Gn1iYcZ4C/Zt+HL1YxXBv746B3FiPq6mLTMzKnFqI=; b=VFHWbDT/va9ssb48/3WXbEoLQZ8/z5USHfV2w7WirkvR1cl6pry06SKeK54TFyjeKF u752DiUfJuleI6wI6QvuD1FGoBZGJu6G5fLyYnL7LAM7ko0rlw1i/4KZ6yu7ypC0/4oT /DiWwzURYVUFHb0l2ag9mAd9rTEK5fzfKLzN1Bc6k2U7L1qMhzp5oioWUp8T0IFE36dR nEnEItQyfSGQuoNXk7/81UHtNjF8Y4EW0DuSXpmx4OgvAsrclAjCcXrqxBtlNX1M3Hf6 6JObTD4VV/sPObGHUQ/Lr9eVtE/UvF9tMvqHXfKP3fqJwgF8QSVdQ8ZVdJamj2DbCLHQ 2jFg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1679323514; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=59Gn1iYcZ4C/Zt+HL1YxXBv746B3FiPq6mLTMzKnFqI=; b=HzZ0FwpzxhXKFph1pzeeDFlcSO/GFmEGrkGktjf5fVIS5r1U6yEGxQQWIEd4gONtb+ 
ogQxTR6a22tzVmTZVLfVQXj+Tq9ypKX52x0nD6iDE9ItIUkXY8WkP0PGVajH72nNtKnS ReHn+Rvcd1So5Tz+HttFMw+VE/nh4/eeo7AmUdtfjoUrvDyWBpbYGCMtonRxyjlmTTEm +PZpNAqS9aOQ2YjsRBdNDqJOvJSh0wG4MvrszS6FXHPqK+d28wMrk8FZ8owvsfh43NHK pxKs3l3b5oOW7iWjkaoocHt+fTTT4PIpd+SgJKasUvqIBmWbvb46HtZZKxjHzaQ81owJ 7cew== X-Gm-Message-State: AO0yUKV0VbEnjd3OKQNFVByErUEyZ7+xehJW61qrEKEt78kjoYemUswG 1KjROhXN0/rDONKSMEtr9GQ= X-Google-Smtp-Source: AK7set+slgDWMjc1bZxbFjmWezaBDy57xF31ml0zQbkl/Y9eQb25jYgJZAiCNEnSgoZqURs65tvgtw== X-Received: by 2002:aa7:949c:0:b0:627:e1a5:27b4 with SMTP id z28-20020aa7949c000000b00627e1a527b4mr6902903pfk.33.1679323514351; Mon, 20 Mar 2023 07:45:14 -0700 (PDT) Received: from localhost ([2a00:79e1:abd:4a00:61b:48ed:72ab:435b]) by smtp.gmail.com with ESMTPSA id j10-20020a63fc0a000000b00503000f0492sm6145458pgi.14.2023.03.20.07.45.13 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 20 Mar 2023 07:45:14 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark , Georgi Djakov , linux-pm@vger.kernel.org (open list:INTERCONNECT API), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 21/23] interconnect: Fix locking for runpm vs reclaim Date: Mon, 20 Mar 2023 07:43:43 -0700 Message-Id: <20230320144356.803762-22-robdclark@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230320144356.803762-1-robdclark@gmail.com> References: <20230320144356.803762-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark For cases where icc_bw_set() can be called in callbaths that could deadlock against shrinker/reclaim, such as runpm resume, we need to decouple the icc locking. Introduce a new icc_bw_lock for cases where we need to serialize bw aggregation and update to decouple that from paths that require memory allocation such as node/link creation/ destruction. Fixes this lockdep splat: ====================================================== WARNING: possible circular locking dependency detected 6.2.0-rc8-debug+ #554 Not tainted ------------------------------------------------------ ring0/132 is trying to acquire lock: ffffff80871916d0 (&gmu->lock){+.+.}-{3:3}, at: a6xx_pm_resume+0xf0/0x234 but task is already holding lock: ffffffdb5aee57e8 (dma_fence_map){++++}-{0:0}, at: msm_job_run+0x68/0x150 which lock already depends on the new lock. 
the existing dependency chain (in reverse order) is: -> #4 (dma_fence_map){++++}-{0:0}: __dma_fence_might_wait+0x74/0xc0 dma_resv_lockdep+0x1f4/0x2f4 do_one_initcall+0x104/0x2bc kernel_init_freeable+0x344/0x34c kernel_init+0x30/0x134 ret_from_fork+0x10/0x20 -> #3 (mmu_notifier_invalidate_range_start){+.+.}-{0:0}: fs_reclaim_acquire+0x80/0xa8 slab_pre_alloc_hook.constprop.0+0x40/0x25c __kmem_cache_alloc_node+0x60/0x1cc __kmalloc+0xd8/0x100 topology_parse_cpu_capacity+0x8c/0x178 get_cpu_for_node+0x88/0xc4 parse_cluster+0x1b0/0x28c parse_cluster+0x8c/0x28c init_cpu_topology+0x168/0x188 smp_prepare_cpus+0x24/0xf8 kernel_init_freeable+0x18c/0x34c kernel_init+0x30/0x134 ret_from_fork+0x10/0x20 -> #2 (fs_reclaim){+.+.}-{0:0}: __fs_reclaim_acquire+0x3c/0x48 fs_reclaim_acquire+0x54/0xa8 slab_pre_alloc_hook.constprop.0+0x40/0x25c __kmem_cache_alloc_node+0x60/0x1cc __kmalloc+0xd8/0x100 kzalloc.constprop.0+0x14/0x20 icc_node_create_nolock+0x4c/0xc4 icc_node_create+0x38/0x58 qcom_icc_rpmh_probe+0x1b8/0x248 platform_probe+0x70/0xc4 really_probe+0x158/0x290 __driver_probe_device+0xc8/0xe0 driver_probe_device+0x44/0x100 __driver_attach+0xf8/0x108 bus_for_each_dev+0x78/0xc4 driver_attach+0x2c/0x38 bus_add_driver+0xd0/0x1d8 driver_register+0xbc/0xf8 __platform_driver_register+0x30/0x3c qnoc_driver_init+0x24/0x30 do_one_initcall+0x104/0x2bc kernel_init_freeable+0x344/0x34c kernel_init+0x30/0x134 ret_from_fork+0x10/0x20 -> #1 (icc_lock){+.+.}-{3:3}: __mutex_lock+0xcc/0x3c8 mutex_lock_nested+0x30/0x44 icc_set_bw+0x88/0x2b4 _set_opp_bw+0x8c/0xd8 _set_opp+0x19c/0x300 dev_pm_opp_set_opp+0x84/0x94 a6xx_gmu_resume+0x18c/0x804 a6xx_pm_resume+0xf8/0x234 adreno_runtime_resume+0x2c/0x38 pm_generic_runtime_resume+0x30/0x44 __rpm_callback+0x15c/0x174 rpm_callback+0x78/0x7c rpm_resume+0x318/0x524 __pm_runtime_resume+0x78/0xbc adreno_load_gpu+0xc4/0x17c msm_open+0x50/0x120 drm_file_alloc+0x17c/0x228 drm_open_helper+0x74/0x118 drm_open+0xa0/0x144 drm_stub_open+0xd4/0xe4 chrdev_open+0x1b8/0x1e4 do_dentry_open+0x2f8/0x38c vfs_open+0x34/0x40 path_openat+0x64c/0x7b4 do_filp_open+0x54/0xc4 do_sys_openat2+0x9c/0x100 do_sys_open+0x50/0x7c __arm64_sys_openat+0x28/0x34 invoke_syscall+0x8c/0x128 el0_svc_common.constprop.0+0xa0/0x11c do_el0_svc+0xac/0xbc el0_svc+0x48/0xa0 el0t_64_sync_handler+0xac/0x13c el0t_64_sync+0x190/0x194 -> #0 (&gmu->lock){+.+.}-{3:3}: __lock_acquire+0xe00/0x1060 lock_acquire+0x1e0/0x2f8 __mutex_lock+0xcc/0x3c8 mutex_lock_nested+0x30/0x44 a6xx_pm_resume+0xf0/0x234 adreno_runtime_resume+0x2c/0x38 pm_generic_runtime_resume+0x30/0x44 __rpm_callback+0x15c/0x174 rpm_callback+0x78/0x7c rpm_resume+0x318/0x524 __pm_runtime_resume+0x78/0xbc pm_runtime_get_sync.isra.0+0x14/0x20 msm_gpu_submit+0x58/0x178 msm_job_run+0x78/0x150 drm_sched_main+0x290/0x370 kthread+0xf0/0x100 ret_from_fork+0x10/0x20 other info that might help us debug this: Chain exists of: &gmu->lock --> mmu_notifier_invalidate_range_start --> dma_fence_map Possible unsafe locking scenario: CPU0 CPU1 ---- ---- lock(dma_fence_map); lock(mmu_notifier_invalidate_range_start); lock(dma_fence_map); lock(&gmu->lock); *** DEADLOCK *** 2 locks held by ring0/132: #0: ffffff8087191170 (&gpu->lock){+.+.}-{3:3}, at: msm_job_run+0x64/0x150 #1: ffffffdb5aee57e8 (dma_fence_map){++++}-{0:0}, at: msm_job_run+0x68/0x150 stack backtrace: CPU: 7 PID: 132 Comm: ring0 Not tainted 6.2.0-rc8-debug+ #554 Hardware name: Google Lazor (rev1 - 2) with LTE (DT) Call trace: dump_backtrace.part.0+0xb4/0xf8 show_stack+0x20/0x38 dump_stack_lvl+0x9c/0xd0 dump_stack+0x18/0x34 
print_circular_bug+0x1b4/0x1f0 check_noncircular+0x78/0xac __lock_acquire+0xe00/0x1060 lock_acquire+0x1e0/0x2f8 __mutex_lock+0xcc/0x3c8 mutex_lock_nested+0x30/0x44 a6xx_pm_resume+0xf0/0x234 adreno_runtime_resume+0x2c/0x38 pm_generic_runtime_resume+0x30/0x44 __rpm_callback+0x15c/0x174 rpm_callback+0x78/0x7c rpm_resume+0x318/0x524 __pm_runtime_resume+0x78/0xbc pm_runtime_get_sync.isra.0+0x14/0x20 msm_gpu_submit+0x58/0x178 msm_job_run+0x78/0x150 drm_sched_main+0x290/0x370 kthread+0xf0/0x100 ret_from_fork+0x10/0x20 Signed-off-by: Rob Clark --- drivers/interconnect/core.c | 8 ++++++-- 1 file changed, 6 insertions(+), 2 deletions(-) diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c index 25debded65a8..f7251784765f 100644 --- a/drivers/interconnect/core.c +++ b/drivers/interconnect/core.c @@ -29,6 +29,7 @@ static LIST_HEAD(icc_providers); static int providers_count; static bool synced_state; static DEFINE_MUTEX(icc_lock); +static DEFINE_MUTEX(icc_bw_lock); static struct dentry *icc_debugfs_dir; static void icc_summary_show_one(struct seq_file *s, struct icc_node *n) @@ -632,7 +633,7 @@ int icc_set_bw(struct icc_path *path, u32 avg_bw, u32 peak_bw) if (WARN_ON(IS_ERR(path) || !path->num_nodes)) return -EINVAL; - mutex_lock(&icc_lock); + mutex_lock(&icc_bw_lock); old_avg = path->reqs[0].avg_bw; old_peak = path->reqs[0].peak_bw; @@ -664,7 +665,7 @@ int icc_set_bw(struct icc_path *path, u32 avg_bw, u32 peak_bw) apply_constraints(path); } - mutex_unlock(&icc_lock); + mutex_unlock(&icc_bw_lock); trace_icc_set_bw_end(path, ret); @@ -963,6 +964,7 @@ void icc_node_add(struct icc_node *node, struct icc_provider *provider) return; mutex_lock(&icc_lock); + mutex_lock(&icc_bw_lock); node->provider = provider; list_add_tail(&node->node_list, &provider->nodes); @@ -988,6 +990,7 @@ void icc_node_add(struct icc_node *node, struct icc_provider *provider) node->avg_bw = 0; node->peak_bw = 0; + mutex_unlock(&icc_bw_lock); mutex_unlock(&icc_lock); } EXPORT_SYMBOL_GPL(icc_node_add); @@ -1111,6 +1114,7 @@ void icc_sync_state(struct device *dev) return; mutex_lock(&icc_lock); + mutex_lock(&icc_bw_lock); synced_state = true; list_for_each_entry(p, &icc_providers, provider_list) { dev_dbg(p->dev, "interconnect provider is in synced state\n"); From patchwork Mon Mar 20 14:43:44 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 13181362 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 45FDFC6FD1D for ; Mon, 20 Mar 2023 14:46:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231909AbjCTOq4 (ORCPT ); Mon, 20 Mar 2023 10:46:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52012 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231950AbjCTOqS (ORCPT ); Mon, 20 Mar 2023 10:46:18 -0400 Received: from mail-pf1-x429.google.com (mail-pf1-x429.google.com [IPv6:2607:f8b0:4864:20::429]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A47C5FF1F; Mon, 20 Mar 2023 07:45:16 -0700 (PDT) Received: by mail-pf1-x429.google.com with SMTP id fd25so7072120pfb.1; Mon, 20 Mar 2023 07:45:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; t=1679323516; 
h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=ZTj6lDaB879i6vS4ZDrKb4b2c9OODf1NGI0zVIt2OIQ=; b=Bj+oH3CQChclMtHQOjFJOIB+Z7POqoHz3M05DrKcE49Bnf7ohSE3pDsx71PYs3ECvr oTHseaJiODm/MyHSlBMBASGteA5HzjOenIVagUXFeZlt7Q+3YFMZoD7rUejgRISy+aVY 2odaelScwYvdePOWDQYmQtu9g1I2XR+9uN9ba8aIcs04yVVbk2zqIX1fxAN1D/wbDGbE GKnk4Dos7F1ERtbzl7QoDAY8Tv0VU4CBkNuZAfbMf6wRdovcBhxA0rKvl1ZusIAFOTqG Ji6tDLvUkL6ZHoI/zPauDcA1VF/2Q7lzDyCG6Rg7kTodWzwhyU4JJcSU+8FUk5u7Mgzx O7Ag== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1679323516; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=ZTj6lDaB879i6vS4ZDrKb4b2c9OODf1NGI0zVIt2OIQ=; b=fvJT19qqknclywQ8N5iPBs4hvjycnTKC7F5IriZQ581YZxPMea6PxZ/g2thCQJZ4iC cUd/DVF2jQTB3+Bt4xAPX3lYGnBaRX5gpPJYXA6oyEK0iU1OFY2FN2gY6j4fOXtnlQDU 7yj2/w1Qe2k67XRV2sua+WD1NoFK4nAOEaFyy+dNC44kUuoxszekUKIO7rP707kWZUCu 9vmkcwsspFNHKf/c7cVQvTXOB9xxsQWJwRyt5Ts2I2VAgVAjxlc1/abbV6SSGNJOSoUK /r51+/3Eao5L0xO+0lzJP2DhO1GhwYcmqJngWoAzNmBDLDT2mlaybGwVTQTWYkPwgHLz h5Rw== X-Gm-Message-State: AO0yUKUUHwpPEzNxKViOLlcC4vgvNTz2Dz3lPlcgNDBonC8oX7N5hSXN o8RaDMMcT1ekio0oaBr7dufZ6TfKbp4= X-Google-Smtp-Source: AK7set+YZdGvBmz/yJ5ACJx//DaOCsv2sHayWOfLM7STeuIIzbdJFihoX6ELSY7Jku8dzTLjKv9Llg== X-Received: by 2002:a62:7946:0:b0:625:2636:9cd2 with SMTP id u67-20020a627946000000b0062526369cd2mr19429597pfc.18.1679323516022; Mon, 20 Mar 2023 07:45:16 -0700 (PDT) Received: from localhost ([2a00:79e1:abd:4a00:61b:48ed:72ab:435b]) by smtp.gmail.com with ESMTPSA id x48-20020a056a000bf000b005a79596c795sm6428405pfu.29.2023.03.20.07.45.15 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 20 Mar 2023 07:45:15 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark , Georgi Djakov , linux-pm@vger.kernel.org (open list:INTERCONNECT API), linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 22/23] interconnect: Teach lockdep about icc_bw_lock order Date: Mon, 20 Mar 2023 07:43:44 -0700 Message-Id: <20230320144356.803762-23-robdclark@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230320144356.803762-1-robdclark@gmail.com> References: <20230320144356.803762-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Teach lockdep that icc_bw_lock is needed in code paths that could deadlock if they trigger reclaim. Signed-off-by: Rob Clark --- drivers/interconnect/core.c | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/drivers/interconnect/core.c b/drivers/interconnect/core.c index f7251784765f..5619963ee85c 100644 --- a/drivers/interconnect/core.c +++ b/drivers/interconnect/core.c @@ -1127,13 +1127,21 @@ void icc_sync_state(struct device *dev) } } } + mutex_unlock(&icc_bw_lock); mutex_unlock(&icc_lock); } EXPORT_SYMBOL_GPL(icc_sync_state); static int __init icc_init(void) { - struct device_node *root = of_find_node_by_path("/"); + struct device_node *root; + + /* Teach lockdep about lock ordering wrt. 
shrinker: */ + fs_reclaim_acquire(GFP_KERNEL); + might_lock(&icc_bw_lock); + fs_reclaim_release(GFP_KERNEL); + + root = of_find_node_by_path("/"); providers_count = of_count_icc_providers(root); of_node_put(root); From patchwork Mon Mar 20 14:43:45 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 13181363 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5B01DC7619A for ; Mon, 20 Mar 2023 14:46:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231899AbjCTOq5 (ORCPT ); Mon, 20 Mar 2023 10:46:57 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52088 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231912AbjCTOqT (ORCPT ); Mon, 20 Mar 2023 10:46:19 -0400 Received: from mail-pf1-x42b.google.com (mail-pf1-x42b.google.com [IPv6:2607:f8b0:4864:20::42b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 97A79233E0; Mon, 20 Mar 2023 07:45:18 -0700 (PDT) Received: by mail-pf1-x42b.google.com with SMTP id fd25so7072177pfb.1; Mon, 20 Mar 2023 07:45:18 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20210112; t=1679323518; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=oauBr8L/4q4Zcgdj0lHO6PXlptkvS9AYRCFUFpLRYaw=; b=W0rr2Cqq8hYbPCTku8RNPwzAfVaC1WEIRqg/Jo/IaQ3jvSFgpYQ8herChIJi7syMJ5 YIfI8bLz+etpsyzlOr+ptGN7aE4yv3qQncLdwK43FM6YBwMvLv2nGuzW75q86R0a+YAr /BdXDMgQTi9jJFtm5pTOw4gXpkvXIjjdQ3m0/xGK4graIJcbepoylaU8+L3vUi3eQAwo mtSrja7NmvnQcnDLmqflMFRZ/y0KW5yPJuyn1li/3tS0CJzypeUbWgn9VKnIYHVcT7zk uSs37Ij8K4IFmzBssfj5JXRk9yXHeRHiK+Cxe3xgHa7l3E0jYsSkcDMjuztlg2g3IcnX 4Yqg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; t=1679323518; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=oauBr8L/4q4Zcgdj0lHO6PXlptkvS9AYRCFUFpLRYaw=; b=IRzK7tS7hr2DDEYXtjq0C5mRD6mdDoJr07+USdEarq+pbYCOadEwV1KrWzjrxJ/iQ0 2laO0OPjeoibtjxCaZyusZ84QIDqsjSuwlgRfI8qMonfoDCa5781cFo2OwZCPi1u9zr/ pG0tJ+pYzhMt2GYjBj4PxziALyDqAuTNdRSXIr6R0VT+gB5pxvmhCsttt0IaLUAhK2yZ pgz6ZwToKcBjJ+bW2jziwFTpRzuc3VJlvCP1UOYUAmlXisTeKNZxzXj+DbPh68OhZx7J s84jVLchyQAPWWGZ0MWLzd/zW1qhbF0O1slTePEhY9nkeHT47HEpTk28I9C6eRC/LaNF n66g== X-Gm-Message-State: AO0yUKWdSANXwGV6zqpzGFedVUlAYmpZ0j6fojWjfC+tIdTcTiJemBJk qWtIMy3xnYXJTDlK9SpoSpU= X-Google-Smtp-Source: AK7set9yi2RN2vxR93mdgHSZKeyGB3sxPREYIvUl6fxuZOWrBiOHFnCQYgj8gj4DknW3GheF00DOIw== X-Received: by 2002:a62:1a05:0:b0:5a8:9858:750a with SMTP id a5-20020a621a05000000b005a89858750amr13197023pfa.13.1679323517814; Mon, 20 Mar 2023 07:45:17 -0700 (PDT) Received: from localhost ([2a00:79e1:abd:4a00:61b:48ed:72ab:435b]) by smtp.gmail.com with ESMTPSA id j24-20020aa78018000000b006245e034059sm6618112pfi.178.2023.03.20.07.45.17 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 20 Mar 2023 07:45:17 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: Daniel Vetter , freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Rob Clark , Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , David Airlie , Luben Tuikov , 
linux-kernel@vger.kernel.org (open list) Subject: [PATCH v2 23/23] drm/sched: Add (optional) fence signaling annotation Date: Mon, 20 Mar 2023 07:43:45 -0700 Message-Id: <20230320144356.803762-24-robdclark@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230320144356.803762-1-robdclark@gmail.com> References: <20230320144356.803762-1-robdclark@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-arm-msm@vger.kernel.org From: Rob Clark Based on https://lore.kernel.org/dri-devel/20200604081224.863494-10-daniel.vetter@ffwll.ch/ but made to be optional. Signed-off-by: Rob Clark Reviewed-by: Luben Tuikov --- drivers/gpu/drm/msm/msm_ringbuffer.c | 1 + drivers/gpu/drm/scheduler/sched_main.c | 9 +++++++++ include/drm/gpu_scheduler.h | 2 ++ 3 files changed, 12 insertions(+) diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c index b60199184409..7e42baf16cd0 100644 --- a/drivers/gpu/drm/msm/msm_ringbuffer.c +++ b/drivers/gpu/drm/msm/msm_ringbuffer.c @@ -93,6 +93,7 @@ struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id, /* currently managing hangcheck ourselves: */ sched_timeout = MAX_SCHEDULE_TIMEOUT; + ring->sched.fence_signaling = true; ret = drm_sched_init(&ring->sched, &msm_sched_ops, num_hw_submissions, 0, sched_timeout, NULL, NULL, to_msm_bo(ring->bo)->name, gpu->dev->dev); diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c index 4e6ad6e122bc..c2ee44d6224b 100644 --- a/drivers/gpu/drm/scheduler/sched_main.c +++ b/drivers/gpu/drm/scheduler/sched_main.c @@ -978,10 +978,15 @@ static bool drm_sched_blocked(struct drm_gpu_scheduler *sched) static int drm_sched_main(void *param) { struct drm_gpu_scheduler *sched = (struct drm_gpu_scheduler *)param; + const bool fence_signaling = sched->fence_signaling; + bool fence_cookie; int r; sched_set_fifo_low(current); + if (fence_signaling) + fence_cookie = dma_fence_begin_signalling(); + while (!kthread_should_stop()) { struct drm_sched_entity *entity = NULL; struct drm_sched_fence *s_fence; @@ -1039,6 +1044,10 @@ static int drm_sched_main(void *param) wake_up(&sched->job_scheduled); } + + if (fence_signaling) + dma_fence_end_signalling(fence_cookie); + return 0; } diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h index 9db9e5e504ee..8f23ea522e22 100644 --- a/include/drm/gpu_scheduler.h +++ b/include/drm/gpu_scheduler.h @@ -483,6 +483,7 @@ struct drm_sched_backend_ops { * @ready: marks if the underlying HW is ready to work * @free_guilty: A hit to time out handler to free the guilty job. * @dev: system &struct device + * @fence_signaling: Opt in to fence signaling annotations * * One scheduler is implemented for each hardware ring. */ @@ -507,6 +508,7 @@ struct drm_gpu_scheduler { bool ready; bool free_guilty; struct device *dev; + bool fence_signaling; }; int drm_sched_init(struct drm_gpu_scheduler *sched,