From patchwork Wed Jun 9 21:29:55 2021
From: Jason Ekstrand
To: dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org
Cc: Daniel Vetter, Jon Bloomfield, Matthew Auld, Jason Ekstrand
Subject: [PATCH 1/5] drm/i915: Move intel_engine_free_request_pool to i915_request.c
Date: Wed, 9 Jun 2021 16:29:55 -0500
Message-Id: <20210609212959.471209-2-jason@jlekstrand.net>
In-Reply-To: <20210609212959.471209-1-jason@jlekstrand.net>
References: <20210609212959.471209-1-jason@jlekstrand.net>

This appears to break encapsulation by moving an intel_engine_cs
function into an i915_request file.  However, this function is
intrinsically tied to the lifetime rules and allocation scheme of
i915_request, and having it in intel_engine_cs.c leaks details of
i915_request.  We have an abstraction leak either way.  Since
i915_request's allocation scheme is far more subtle than the simple
pointer that is intel_engine_cs.request_pool, it's probably better to
keep i915_request's details to itself.

Signed-off-by: Jason Ekstrand
Cc: Jon Bloomfield
Cc: Daniel Vetter
Cc: Matthew Auld
Cc: Maarten Lankhorst
---
 drivers/gpu/drm/i915/gt/intel_engine_cs.c | 8 --------
 drivers/gpu/drm/i915/i915_request.c       | 7 +++++--
 drivers/gpu/drm/i915/i915_request.h       | 2 --
 3 files changed, 5 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index 9ceddfbb1687d..df6b80ec84199 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -422,14 +422,6 @@ void intel_engines_release(struct intel_gt *gt)
 	}
 }
 
-void intel_engine_free_request_pool(struct intel_engine_cs *engine)
-{
-	if (!engine->request_pool)
-		return;
-
-	kmem_cache_free(i915_request_slab_cache(), engine->request_pool);
-}
-
 void intel_engines_free(struct intel_gt *gt)
 {
 	struct intel_engine_cs *engine;
diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 1014c71cf7f52..48c5f8527854b 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -106,9 +106,12 @@ static signed long i915_fence_wait(struct dma_fence *fence,
 				   timeout);
 }
 
-struct kmem_cache *i915_request_slab_cache(void)
+void intel_engine_free_request_pool(struct intel_engine_cs *engine)
 {
-	return global.slab_requests;
+	if (!engine->request_pool)
+		return;
+
+	kmem_cache_free(global.slab_requests, engine->request_pool);
 }
 
 static void i915_fence_release(struct dma_fence *fence)
diff --git a/drivers/gpu/drm/i915/i915_request.h b/drivers/gpu/drm/i915/i915_request.h
index 270f6cd37650c..f84c38d29f988 100644
--- a/drivers/gpu/drm/i915/i915_request.h
+++ b/drivers/gpu/drm/i915/i915_request.h
@@ -300,8 +300,6 @@ static inline bool dma_fence_is_i915(const struct dma_fence *fence)
 	return fence->ops == &i915_fence_ops;
 }
 
-struct kmem_cache *i915_request_slab_cache(void);
-
 struct i915_request * __must_check
 __i915_request_create(struct intel_context *ce, gfp_t gfp);
 struct i915_request * __must_check
From patchwork Wed Jun 9 21:29:56 2021
From: Jason Ekstrand
To: dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org
Cc: Daniel Vetter, Jon Bloomfield, Matthew Auld, Jason Ekstrand
Subject: [PATCH 2/5] drm/i915: Use a simpler scheme for caching i915_request
Date: Wed, 9 Jun 2021 16:29:56 -0500
Message-Id: <20210609212959.471209-3-jason@jlekstrand.net>
In-Reply-To: <20210609212959.471209-1-jason@jlekstrand.net>
References: <20210609212959.471209-1-jason@jlekstrand.net>

Instead of attempting to recycle a request into the cache when it
retires, stuff a new one into the cache every time we allocate a
request for some other reason.

Signed-off-by: Jason Ekstrand
Cc: Jon Bloomfield
Cc: Daniel Vetter
Cc: Matthew Auld
Cc: Maarten Lankhorst
---
 drivers/gpu/drm/i915/i915_request.c | 66 ++++++++++++++---------------
 1 file changed, 31 insertions(+), 35 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 48c5f8527854b..e531c74f0b0e2 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -128,41 +128,6 @@ static void i915_fence_release(struct dma_fence *fence)
 	i915_sw_fence_fini(&rq->submit);
 	i915_sw_fence_fini(&rq->semaphore);
 
-	/*
-	 * Keep one request on each engine for reserved use under mempressure
-	 *
-	 * We do not hold a reference to the engine here and so have to be
-	 * very careful in what rq->engine we poke. The virtual engine is
-	 * referenced via the rq->context and we released that ref during
-	 * i915_request_retire(), ergo we must not dereference a virtual
-	 * engine here. Not that we would want to, as the only consumer of
-	 * the reserved engine->request_pool is the power management parking,
-	 * which must-not-fail, and that is only run on the physical engines.
-	 *
-	 * Since the request must have been executed to be have completed,
-	 * we know that it will have been processed by the HW and will
-	 * not be unsubmitted again, so rq->engine and rq->execution_mask
-	 * at this point is stable. rq->execution_mask will be a single
-	 * bit if the last and _only_ engine it could execution on was a
-	 * physical engine, if it's multiple bits then it started on and
-	 * could still be on a virtual engine. Thus if the mask is not a
-	 * power-of-two we assume that rq->engine may still be a virtual
-	 * engine and so a dangling invalid pointer that we cannot dereference
-	 *
-	 * For example, consider the flow of a bonded request through a virtual
-	 * engine. The request is created with a wide engine mask (all engines
-	 * that we might execute on). On processing the bond, the request mask
-	 * is reduced to one or more engines. If the request is subsequently
-	 * bound to a single engine, it will then be constrained to only
-	 * execute on that engine and never returned to the virtual engine
-	 * after timeslicing away, see __unwind_incomplete_requests(). Thus we
-	 * know that if the rq->execution_mask is a single bit, rq->engine
-	 * can be a physical engine with the exact corresponding mask.
-	 */
-	if (is_power_of_2(rq->execution_mask) &&
-	    !cmpxchg(&rq->engine->request_pool, NULL, rq))
-		return;
-
 	kmem_cache_free(global.slab_requests, rq);
 }
 
@@ -869,6 +834,29 @@ static void retire_requests(struct intel_timeline *tl)
 			break;
 }
 
+static void
+ensure_cached_request(struct i915_request **rsvd, gfp_t gfp)
+{
+	struct i915_request *rq;
+
+	/* Don't try to add to the cache if we don't allow blocking.  That
+	 * just increases the chance that the actual allocation will fail.
+	 */
+	if (!gfpflags_allow_blocking(gfp))
+		return;
+
+	if (READ_ONCE(*rsvd))
+		return;
+
+	rq = kmem_cache_alloc(global.slab_requests,
+			      gfp | __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
+	if (!rq)
+		return; /* Oops but nothing we can do */
+
+	if (cmpxchg(rsvd, NULL, rq))
+		kmem_cache_free(global.slab_requests, rq);
+}
+
 static noinline struct i915_request *
 request_alloc_slow(struct intel_timeline *tl,
 		   struct i915_request **rsvd,
@@ -937,6 +925,14 @@ __i915_request_create(struct intel_context *ce, gfp_t gfp)
 	/* Check that the caller provided an already pinned context */
 	__intel_context_pin(ce);
 
+	/* Before we do anything, try to make sure we have at least one
+	 * request in the engine's cache.  If we get here with GFP_NOWAIT
+	 * (this can happen when switching to a kernel context), we want
+	 * to try very hard to not fail and we fall back to this cache.
+	 * Top it off with a fresh request whenever it's empty.
+	 */
+	ensure_cached_request(&ce->engine->request_pool, gfp);
+
 	/*
 	 * Beware: Dragons be flying overhead.
 	 *
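The one-slot cache above is easy to model outside the kernel. Below is a
minimal userspace sketch of the same fill/consume pattern, assuming C11
atomics; slot_fill(), slot_take(), struct obj, and the plain malloc() are
illustrative stand-ins for the i915 slab calls, not kernel APIs.

#include <stdatomic.h>
#include <stdlib.h>

struct obj { int payload; };

/* Analogue of engine->request_pool: a single reserved object. */
static _Atomic(struct obj *) slot;

/* Opportunistically top up the slot, as ensure_cached_request() does. */
static void slot_fill(void)
{
	struct obj *spare, *expected = NULL;

	if (atomic_load(&slot))
		return;			/* already stocked */

	spare = malloc(sizeof(*spare));
	if (!spare)
		return;			/* best effort only */

	/* Losing the race is harmless: the loser frees its spare. */
	if (!atomic_compare_exchange_strong(&slot, &expected, spare))
		free(spare);
}

/* Consume the reserve when the normal allocation path fails. */
static struct obj *slot_take(void)
{
	return atomic_exchange(&slot, NULL);
}

As in the patch, the cmpxchg makes the fill path safe against concurrent
fillers without a lock, and the consumer simply swaps the slot out.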
From patchwork Wed Jun 9 21:29:57 2021
From: Jason Ekstrand
To: dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org
Cc: Daniel Vetter, Jon Bloomfield, Matthew Auld, Jason Ekstrand, Dave Airlie, Christian König
Subject: [PATCH 3/5] drm/i915: Stop using SLAB_TYPESAFE_BY_RCU for i915_request
Date: Wed, 9 Jun 2021 16:29:57 -0500
Message-Id: <20210609212959.471209-4-jason@jlekstrand.net>
In-Reply-To: <20210609212959.471209-1-jason@jlekstrand.net>
References: <20210609212959.471209-1-jason@jlekstrand.net>

Ever since 0eafec6d3244 ("drm/i915: Enable lockless lookup of request
tracking via RCU"), the i915 driver has used SLAB_TYPESAFE_BY_RCU (it
was called SLAB_DESTROY_BY_RCU at the time) in order to allow RCU on
i915_request.  As nifty as SLAB_TYPESAFE_BY_RCU may be, it comes with
some serious disclaimers.  In particular, objects can get recycled while
RCU readers are still in-flight.  This can be ok if everyone who touches
these objects knows about the disclaimers and is careful.  However,
because we've chosen to use SLAB_TYPESAFE_BY_RCU for i915_request and
because i915_request contains a dma_fence, we've leaked
SLAB_TYPESAFE_BY_RCU and its whole pile of disclaimers to every driver
in the kernel which may consume a dma_fence.

We've tried to keep it somewhat contained by doing most of the hard work
to prevent access of recycled objects via dma_fence_get_rcu_safe().
However, a quick grep of kernel sources says that, of the 30 instances
of dma_fence_get_rcu*, only 11 of them use dma_fence_get_rcu_safe().
It's likely there are bear traps in DRM and related subsystems just
waiting for someone to accidentally step in them.

This commit stops us from using SLAB_TYPESAFE_BY_RCU for i915_request
and, instead, does an RCU-safe slab free via call_rcu().  This should
let us keep most of the perf benefits of slab allocation while avoiding
the bear traps inherent in SLAB_TYPESAFE_BY_RCU.

Signed-off-by: Jason Ekstrand
Cc: Jon Bloomfield
Cc: Daniel Vetter
Cc: Christian König
Cc: Dave Airlie
Cc: Matthew Auld
Cc: Maarten Lankhorst
---
 drivers/gpu/drm/i915/i915_request.c | 76 ++++++++++++++++-------------
 1 file changed, 43 insertions(+), 33 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index e531c74f0b0e2..55fa938126100 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -111,9 +111,44 @@ void intel_engine_free_request_pool(struct intel_engine_cs *engine)
 	if (!engine->request_pool)
 		return;
 
+	/*
+	 * It's safe to free this right away because we always put a fresh
+	 * i915_request in the cache that's never been touched by an RCU
+	 * reader.
+	 */
 	kmem_cache_free(global.slab_requests, engine->request_pool);
 }
 
+static void __i915_request_free(struct rcu_head *head)
+{
+	struct i915_request *rq = container_of(head, typeof(*rq), fence.rcu);
+
+	kmem_cache_free(global.slab_requests, rq);
+}
+
+static void i915_request_free_rcu(struct i915_request *rq)
+{
+	/*
+	 * Because we're on a slab allocator, memory may be re-used the
+	 * moment we free it.  There is no kfree_rcu() equivalent for
+	 * slabs.  Instead, we hand-roll it here with call_rcu().  This
+	 * gives us all the perf benefits of slab allocation while ensuring
+	 * that we never release a request back to the slab until there are
+	 * no more readers.
+	 *
+	 * We do have to be careful, though, when calling kmem_cache_destroy()
+	 * as there may be outstanding free requests.  This is solved by
+	 * inserting an rcu_barrier() before kmem_cache_destroy().  An RCU
+	 * barrier is sufficient and we don't need synchronize_rcu()
+	 * because the call_rcu() here will wait on any outstanding RCU
+	 * readers and the rcu_barrier() will wait on any outstanding
+	 * call_rcu() callbacks.  So, if there are any readers who once had
+	 * valid references to a request, rcu_barrier() will end up waiting
+	 * on them by transitivity.
+	 */
+	call_rcu(&rq->fence.rcu, __i915_request_free);
+}
+
 static void i915_fence_release(struct dma_fence *fence)
 {
 	struct i915_request *rq = to_request(fence);
@@ -127,8 +162,7 @@ static void i915_fence_release(struct dma_fence *fence)
 	 */
 	i915_sw_fence_fini(&rq->submit);
 	i915_sw_fence_fini(&rq->semaphore);
-
-	kmem_cache_free(global.slab_requests, rq);
+	i915_request_free_rcu(rq);
 }
 
 const struct dma_fence_ops i915_fence_ops = {
@@ -933,35 +967,6 @@ __i915_request_create(struct intel_context *ce, gfp_t gfp)
 	 */
 	ensure_cached_request(&ce->engine->request_pool, gfp);
 
-	/*
-	 * Beware: Dragons be flying overhead.
-	 *
-	 * We use RCU to look up requests in flight. The lookups may
-	 * race with the request being allocated from the slab freelist.
-	 * That is the request we are writing to here, may be in the process
-	 * of being read by __i915_active_request_get_rcu(). As such,
-	 * we have to be very careful when overwriting the contents. During
-	 * the RCU lookup, we change chase the request->engine pointer,
-	 * read the request->global_seqno and increment the reference count.
-	 *
-	 * The reference count is incremented atomically. If it is zero,
-	 * the lookup knows the request is unallocated and complete. Otherwise,
-	 * it is either still in use, or has been reallocated and reset
-	 * with dma_fence_init(). This increment is safe for release as we
-	 * check that the request we have a reference to and matches the active
-	 * request.
-	 *
-	 * Before we increment the refcount, we chase the request->engine
-	 * pointer. We must not call kmem_cache_zalloc() or else we set
-	 * that pointer to NULL and cause a crash during the lookup. If
-	 * we see the request is completed (based on the value of the
-	 * old engine and seqno), the lookup is complete and reports NULL.
-	 * If we decide the request is not completed (new engine or seqno),
-	 * then we grab a reference and double check that it is still the
-	 * active request - which it won't be and restart the lookup.
-	 *
-	 * Do not use kmem_cache_zalloc() here!
-	 */
 	rq = kmem_cache_alloc(global.slab_requests,
 			      gfp | __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
 	if (unlikely(!rq)) {
@@ -2116,6 +2121,12 @@ static void i915_global_request_shrink(void)
 
 static void i915_global_request_exit(void)
 {
+	/*
+	 * We need to rcu_barrier() before destroying slab_requests.  See
+	 * i915_request_free_rcu() for more details.
+	 */
+	rcu_barrier();
+
 	kmem_cache_destroy(global.slab_execute_cbs);
 	kmem_cache_destroy(global.slab_requests);
 }
@@ -2132,8 +2143,7 @@ int __init i915_global_request_init(void)
 			sizeof(struct i915_request),
 			__alignof__(struct i915_request),
 			SLAB_HWCACHE_ALIGN |
-			SLAB_RECLAIM_ACCOUNT |
-			SLAB_TYPESAFE_BY_RCU,
+			SLAB_RECLAIM_ACCOUNT,
 			__i915_request_ctor);
 	if (!global.slab_requests)
 		return -ENOMEM;
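The call_rcu() arrangement above generalizes to any slab-backed object
with an embedded rcu_head. A minimal kernel-style sketch, assuming a
made-up 'widget' object and cache (the RCU and slab calls themselves are
real kernel APIs):

#include <linux/slab.h>
#include <linux/rcupdate.h>

struct widget {
	struct rcu_head rcu;
	/* ... payload ... */
};

static struct kmem_cache *widget_cache;

static void widget_free_cb(struct rcu_head *head)
{
	struct widget *w = container_of(head, struct widget, rcu);

	kmem_cache_free(widget_cache, w);
}

/* RCU readers may still hold w, so defer the real free by a grace period. */
static void widget_free(struct widget *w)
{
	call_rcu(&w->rcu, widget_free_cb);
}

static void widget_cache_exit(void)
{
	/* Flush any pending widget_free_cb() before killing the cache. */
	rcu_barrier();
	kmem_cache_destroy(widget_cache);
}

The rcu_barrier() in the exit path mirrors the one added to
i915_global_request_exit() above: call_rcu() callbacks may still be
queued at teardown, and they must run before the cache is destroyed.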
From patchwork Wed Jun 9 21:29:58 2021
From: Jason Ekstrand
To: dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org
Cc: Daniel Vetter, Christian König, Jason Ekstrand, Matthew Auld
Subject: [PATCH 4/5] dma-buf: Stop using SLAB_TYPESAFE_BY_RCU in selftests
Date: Wed, 9 Jun 2021 16:29:58 -0500
Message-Id: <20210609212959.471209-5-jason@jlekstrand.net>
In-Reply-To: <20210609212959.471209-1-jason@jlekstrand.net>
References: <20210609212959.471209-1-jason@jlekstrand.net>

The only real-world user of SLAB_TYPESAFE_BY_RCU was i915, and it no
longer uses it, so there's no need to keep testing it in the selftests.
Signed-off-by: Jason Ekstrand
Cc: Daniel Vetter
Cc: Christian König
Cc: Matthew Auld
Cc: Maarten Lankhorst
Reported-by: kernel test robot
---
 drivers/dma-buf/st-dma-fence-chain.c | 24 ++++--------------------
 drivers/dma-buf/st-dma-fence.c       | 27 +++++----------------------
 2 files changed, 9 insertions(+), 42 deletions(-)

diff --git a/drivers/dma-buf/st-dma-fence-chain.c b/drivers/dma-buf/st-dma-fence-chain.c
index 9525f7f561194..73010184559fe 100644
--- a/drivers/dma-buf/st-dma-fence-chain.c
+++ b/drivers/dma-buf/st-dma-fence-chain.c
@@ -19,36 +19,27 @@
 
 #define CHAIN_SZ (4 << 10)
 
-static struct kmem_cache *slab_fences;
-
-static inline struct mock_fence {
+struct mock_fence {
 	struct dma_fence base;
 	spinlock_t lock;
-} *to_mock_fence(struct dma_fence *f) {
-	return container_of(f, struct mock_fence, base);
-}
+};
 
 static const char *mock_name(struct dma_fence *f)
 {
 	return "mock";
 }
 
-static void mock_fence_release(struct dma_fence *f)
-{
-	kmem_cache_free(slab_fences, to_mock_fence(f));
-}
-
 static const struct dma_fence_ops mock_ops = {
 	.get_driver_name = mock_name,
 	.get_timeline_name = mock_name,
-	.release = mock_fence_release,
+	.release = dma_fence_free,
 };
 
 static struct dma_fence *mock_fence(void)
 {
 	struct mock_fence *f;
 
-	f = kmem_cache_alloc(slab_fences, GFP_KERNEL);
+	f = kmalloc(sizeof(*f), GFP_KERNEL);
 	if (!f)
 		return NULL;
 
@@ -701,14 +692,7 @@ int dma_fence_chain(void)
 	pr_info("sizeof(dma_fence_chain)=%zu\n",
 		sizeof(struct dma_fence_chain));
 
-	slab_fences = KMEM_CACHE(mock_fence,
-				 SLAB_TYPESAFE_BY_RCU |
-				 SLAB_HWCACHE_ALIGN);
-	if (!slab_fences)
-		return -ENOMEM;
-
 	ret = subtests(tests, NULL);
 
-	kmem_cache_destroy(slab_fences);
 	return ret;
 }
diff --git a/drivers/dma-buf/st-dma-fence.c b/drivers/dma-buf/st-dma-fence.c
index c8a12d7ad71ab..ca98cb0b9525b 100644
--- a/drivers/dma-buf/st-dma-fence.c
+++ b/drivers/dma-buf/st-dma-fence.c
@@ -14,25 +14,16 @@
 
 #include "selftest.h"
 
-static struct kmem_cache *slab_fences;
-
-static struct mock_fence {
+struct mock_fence {
 	struct dma_fence base;
 	struct spinlock lock;
-} *to_mock_fence(struct dma_fence *f) {
-	return container_of(f, struct mock_fence, base);
-}
+};
 
 static const char *mock_name(struct dma_fence *f)
 {
 	return "mock";
 }
 
-static void mock_fence_release(struct dma_fence *f)
-{
-	kmem_cache_free(slab_fences, to_mock_fence(f));
-}
-
 struct wait_cb {
 	struct dma_fence_cb cb;
 	struct task_struct *task;
@@ -77,14 +68,14 @@ static const struct dma_fence_ops mock_ops = {
 	.get_driver_name = mock_name,
 	.get_timeline_name = mock_name,
 	.wait = mock_wait,
-	.release = mock_fence_release,
+	.release = dma_fence_free,
 };
 
 static struct dma_fence *mock_fence(void)
 {
 	struct mock_fence *f;
 
-	f = kmem_cache_alloc(slab_fences, GFP_KERNEL);
+	f = kmalloc(sizeof(*f), GFP_KERNEL);
 	if (!f)
 		return NULL;
 
@@ -463,7 +454,7 @@ static int thread_signal_callback(void *arg)
 
 		rcu_read_lock();
 		do {
-			f2 = dma_fence_get_rcu_safe(&t->fences[!t->id]);
+			f2 = dma_fence_get_rcu(t->fences[!t->id]);
 		} while (!f2 && !kthread_should_stop());
 		rcu_read_unlock();
 
@@ -563,15 +554,7 @@ int dma_fence(void)
 	pr_info("sizeof(dma_fence)=%zu\n", sizeof(struct dma_fence));
 
-	slab_fences = KMEM_CACHE(mock_fence,
-				 SLAB_TYPESAFE_BY_RCU |
-				 SLAB_HWCACHE_ALIGN);
-	if (!slab_fences)
-		return -ENOMEM;
-
 	ret = subtests(tests, NULL);
 
-	kmem_cache_destroy(slab_fences);
-
 	return ret;
 }
From patchwork Wed Jun 9 21:29:59 2021
From: Jason Ekstrand
To: dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org
Cc: Daniel Vetter, Christian König, Jason Ekstrand, Matthew Auld
Subject: [PATCH 5/5] DONOTMERGE: dma-buf: Get rid of dma_fence_get_rcu_safe
Date: Wed, 9 Jun 2021 16:29:59 -0500
Message-Id: <20210609212959.471209-6-jason@jlekstrand.net>
In-Reply-To: <20210609212959.471209-1-jason@jlekstrand.net>
References: <20210609212959.471209-1-jason@jlekstrand.net>

This helper existed to handle the weird corner cases caused by using
SLAB_TYPESAFE_BY_RCU for backing dma_fence.  Now that no one is using
that anymore (i915 was the only real user), dma_fence_get_rcu is
sufficient.  The one slightly annoying thing we have to deal with here
is that dma_fence_get_rcu_safe did an rcu_dereference as well as a
SLAB_TYPESAFE_BY_RCU-safe dma_fence_get_rcu.  This means each call site
ends up being three lines instead of one.

Signed-off-by: Jason Ekstrand
Cc: Daniel Vetter
Cc: Christian König
Cc: Matthew Auld
Cc: Maarten Lankhorst
Reported-by: kernel test robot
---
 drivers/dma-buf/dma-fence-chain.c         |  8 ++--
 drivers/dma-buf/dma-resv.c                |  4 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c |  4 +-
 drivers/gpu/drm/i915/i915_active.h        |  4 +-
 drivers/gpu/drm/i915/i915_vma.c           |  4 +-
 include/drm/drm_syncobj.h                 |  4 +-
 include/linux/dma-fence.h                 | 50 -----------------------
 include/linux/dma-resv.h                  |  4 +-
 8 files changed, 23 insertions(+), 59 deletions(-)

diff --git a/drivers/dma-buf/dma-fence-chain.c b/drivers/dma-buf/dma-fence-chain.c
index 7d129e68ac701..46dfc7d94d8ed 100644
--- a/drivers/dma-buf/dma-fence-chain.c
+++ b/drivers/dma-buf/dma-fence-chain.c
@@ -15,15 +15,17 @@ static bool dma_fence_chain_enable_signaling(struct dma_fence *fence);
  * dma_fence_chain_get_prev - use RCU to get a reference to the previous fence
  * @chain: chain node to get the previous node from
  *
- * Use dma_fence_get_rcu_safe to get a reference to the previous fence of the
- * chain node.
+ * Use rcu_dereference and dma_fence_get_rcu to get a reference to the
+ * previous fence of the chain node.
  */
 static struct dma_fence *dma_fence_chain_get_prev(struct dma_fence_chain *chain)
 {
 	struct dma_fence *prev;
 
 	rcu_read_lock();
-	prev = dma_fence_get_rcu_safe(&chain->prev);
+	prev = rcu_dereference(chain->prev);
+	if (prev)
+		prev = dma_fence_get_rcu(prev);
 	rcu_read_unlock();
 	return prev;
 }
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index f26c71747d43a..cfe0db3cca292 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -376,7 +376,9 @@ int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src)
 		dst_list = NULL;
 	}
 
-	new = dma_fence_get_rcu_safe(&src->fence_excl);
+	new = rcu_dereference(src->fence_excl);
+	if (new)
+		new = dma_fence_get_rcu(new);
 	rcu_read_unlock();
 
 	src_list = dma_resv_shared_list(dst);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
index 72d9b92b17547..0aeb6117f3893 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -161,7 +161,9 @@ int amdgpu_fence_emit(struct amdgpu_ring *ring, struct dma_fence **f,
 		struct dma_fence *old;
 
 		rcu_read_lock();
-		old = dma_fence_get_rcu_safe(ptr);
+		old = rcu_dereference(*ptr);
+		if (old)
+			old = dma_fence_get_rcu(old);
 		rcu_read_unlock();
 
 		if (old) {
diff --git a/drivers/gpu/drm/i915/i915_active.h b/drivers/gpu/drm/i915/i915_active.h
index d0feda68b874f..bd89cfc806ca5 100644
--- a/drivers/gpu/drm/i915/i915_active.h
+++ b/drivers/gpu/drm/i915/i915_active.h
@@ -103,7 +103,9 @@ i915_active_fence_get(struct i915_active_fence *active)
 	struct dma_fence *fence;
 
 	rcu_read_lock();
-	fence = dma_fence_get_rcu_safe(&active->fence);
+	fence = rcu_dereference(active->fence);
+	if (fence)
+		fence = dma_fence_get_rcu(fence);
 	rcu_read_unlock();
 
 	return fence;
diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
index 0f227f28b2802..ed0388d99197e 100644
--- a/drivers/gpu/drm/i915/i915_vma.c
+++ b/drivers/gpu/drm/i915/i915_vma.c
@@ -351,7 +351,9 @@ int i915_vma_wait_for_bind(struct i915_vma *vma)
 		struct dma_fence *fence;
 
 		rcu_read_lock();
-		fence = dma_fence_get_rcu_safe(&vma->active.excl.fence);
+		fence = rcu_dereference(vma->active.excl.fence);
+		if (fence)
+			fence = dma_fence_get_rcu(fence);
 		rcu_read_unlock();
 		if (fence) {
 			err = dma_fence_wait(fence, MAX_SCHEDULE_TIMEOUT);
diff --git a/include/drm/drm_syncobj.h b/include/drm/drm_syncobj.h
index 6cf7243a1dc5e..6c45d52988bcc 100644
--- a/include/drm/drm_syncobj.h
+++ b/include/drm/drm_syncobj.h
@@ -105,7 +105,9 @@ drm_syncobj_fence_get(struct drm_syncobj *syncobj)
 	struct dma_fence *fence;
 
 	rcu_read_lock();
-	fence = dma_fence_get_rcu_safe(&syncobj->fence);
+	fence = rcu_dereference(syncobj->fence);
+	if (fence)
+		fence = dma_fence_get_rcu(fence);
 	rcu_read_unlock();
 
 	return fence;
diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
index 6ffb4b2c63715..f4a2ab2b1ae46 100644
--- a/include/linux/dma-fence.h
+++ b/include/linux/dma-fence.h
@@ -307,56 +307,6 @@ static inline struct dma_fence *dma_fence_get_rcu(struct dma_fence *fence)
 		return NULL;
 }
 
-/**
- * dma_fence_get_rcu_safe - acquire a reference to an RCU tracked fence
- * @fencep: pointer to fence to increase refcount of
- *
- * Function returns NULL if no refcount could be obtained, or the fence.
- * This function handles acquiring a reference to a fence that may be
- * reallocated within the RCU grace period (such as with SLAB_TYPESAFE_BY_RCU),
- * so long as the caller is using RCU on the pointer to the fence.
- *
- * An alternative mechanism is to employ a seqlock to protect a bunch of
- * fences, such as used by struct dma_resv. When using a seqlock,
- * the seqlock must be taken before and checked after a reference to the
- * fence is acquired (as shown here).
- *
- * The caller is required to hold the RCU read lock.
- */
-static inline struct dma_fence *
-dma_fence_get_rcu_safe(struct dma_fence __rcu **fencep)
-{
-	do {
-		struct dma_fence *fence;
-
-		fence = rcu_dereference(*fencep);
-		if (!fence)
-			return NULL;
-
-		if (!dma_fence_get_rcu(fence))
-			continue;
-
-		/* The atomic_inc_not_zero() inside dma_fence_get_rcu()
-		 * provides a full memory barrier upon success (such as now).
-		 * This is paired with the write barrier from assigning
-		 * to the __rcu protected fence pointer so that if that
-		 * pointer still matches the current fence, we know we
-		 * have successfully acquire a reference to it. If it no
-		 * longer matches, we are holding a reference to some other
-		 * reallocated pointer. This is possible if the allocator
-		 * is using a freelist like SLAB_TYPESAFE_BY_RCU where the
-		 * fence remains valid for the RCU grace period, but it
-		 * may be reallocated. When using such allocators, we are
-		 * responsible for ensuring the reference we get is to
-		 * the right fence, as below.
-		 */
-		if (fence == rcu_access_pointer(*fencep))
-			return rcu_pointer_handoff(fence);
-
-		dma_fence_put(fence);
-	} while (1);
-}
-
 #ifdef CONFIG_LOCKDEP
 bool dma_fence_begin_signalling(void);
 void dma_fence_end_signalling(bool cookie);
diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
index 562b885cf9c3d..a38c021f379af 100644
--- a/include/linux/dma-resv.h
+++ b/include/linux/dma-resv.h
@@ -248,7 +248,9 @@ dma_resv_get_excl_unlocked(struct dma_resv *obj)
 		return NULL;
 
 	rcu_read_lock();
-	fence = dma_fence_get_rcu_safe(&obj->fence_excl);
+	fence = rcu_dereference(obj->fence_excl);
+	if (fence)
+		fence = dma_fence_get_rcu(fence);
 	rcu_read_unlock();
 
 	return fence;
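Since every converted call site repeats the same three lines, the pattern
is worth seeing in isolation. A sketch under the same assumptions as the
patch; fence_get_unlocked() is not a dma-buf API, just an illustration of
the open-coded sequence:

#include <linux/dma-fence.h>
#include <linux/rcupdate.h>

static inline struct dma_fence *
fence_get_unlocked(struct dma_fence __rcu **fencep)
{
	struct dma_fence *fence;

	rcu_read_lock();
	fence = rcu_dereference(*fencep);
	if (fence)
		/* Returns NULL if the refcount already hit zero. */
		fence = dma_fence_get_rcu(fence);
	rcu_read_unlock();

	return fence;
}

Unlike dma_fence_get_rcu_safe(), there is no re-check of *fencep after
the reference is taken: without SLAB_TYPESAFE_BY_RCU, a fence cannot be
freed and reallocated while the RCU read lock is held, so a successful
dma_fence_get_rcu() is guaranteed to reference the fence that was read.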