From patchwork Tue May 25 21:17:47 2021
X-Patchwork-Submitter: Jason Ekstrand
X-Patchwork-Id: 12280155
From: Jason Ekstrand
To: dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org
Subject: [PATCH 1/7] dma-buf: Add dma_fence_array_for_each (v2)
Date: Tue, 25 May 2021 16:17:47 -0500
Message-Id: <20210525211753.1086069-2-jason@jlekstrand.net>
In-Reply-To: <20210525211753.1086069-1-jason@jlekstrand.net>
Cc: Christian König, Jason Ekstrand, Daniel Vetter

From: Christian König

Add a helper to iterate over all fences in a dma_fence_array object.

v2 (Jason Ekstrand):
  - Return NULL from dma_fence_array_first if head == NULL.  This matches
    the iterator behavior of dma_fence_chain_for_each in that it iterates
    zero times if head == NULL.
  - Return NULL from dma_fence_array_next if index >= array->num_fences.

Signed-off-by: Jason Ekstrand
Reviewed-by: Jason Ekstrand
Reviewed-by: Christian König
Cc: Daniel Vetter
Cc: Maarten Lankhorst
---
 drivers/dma-buf/dma-fence-array.c | 27 +++++++++++++++++++++++++++
 include/linux/dma-fence-array.h   | 17 +++++++++++++++++
 2 files changed, 44 insertions(+)

diff --git a/drivers/dma-buf/dma-fence-array.c b/drivers/dma-buf/dma-fence-array.c
index d3fbd950be944..2ac1afc697d0f 100644
--- a/drivers/dma-buf/dma-fence-array.c
+++ b/drivers/dma-buf/dma-fence-array.c
@@ -201,3 +201,30 @@ bool dma_fence_match_context(struct dma_fence *fence, u64 context)
 	return true;
 }
 EXPORT_SYMBOL(dma_fence_match_context);
+
+struct dma_fence *dma_fence_array_first(struct dma_fence *head)
+{
+	struct dma_fence_array *array;
+
+	if (!head)
+		return NULL;
+
+	array = to_dma_fence_array(head);
+	if (!array)
+		return head;
+
+	return array->fences[0];
+}
+EXPORT_SYMBOL(dma_fence_array_first);
+
+struct dma_fence *dma_fence_array_next(struct dma_fence *head,
+				       unsigned int index)
+{
+	struct dma_fence_array *array = to_dma_fence_array(head);
+
+	if (!array || index >= array->num_fences)
+		return NULL;
+
+	return array->fences[index];
+}
+EXPORT_SYMBOL(dma_fence_array_next);
diff --git a/include/linux/dma-fence-array.h b/include/linux/dma-fence-array.h
index 303dd712220fd..588ac8089dd61 100644
--- a/include/linux/dma-fence-array.h
+++ b/include/linux/dma-fence-array.h
@@ -74,6 +74,19 @@ to_dma_fence_array(struct dma_fence *fence)
 	return container_of(fence, struct dma_fence_array, base);
 }
 
+/**
+ * dma_fence_array_for_each - iterate over all fences in array
+ * @fence: current fence
+ * @index: index into the array
+ * @head: potential dma_fence_array object
+ *
+ * Test if @head is a dma_fence_array object and if yes iterate over all
+ * fences in the array.  If not, just iterate over the single fence in
+ * @head itself.
+ */
+#define dma_fence_array_for_each(fence, index, head)			\
+	for (index = 0, fence = dma_fence_array_first(head); fence;	\
+	     ++(index), fence = dma_fence_array_next(head, index))
+
 struct dma_fence_array *dma_fence_array_create(int num_fences,
					       struct dma_fence **fences,
					       u64 context, unsigned seqno,
@@ -81,4 +94,8 @@ struct dma_fence_array *dma_fence_array_create(int num_fences,
 
 bool dma_fence_match_context(struct dma_fence *fence, u64 context);
 
+struct dma_fence *dma_fence_array_first(struct dma_fence *head);
+struct dma_fence *dma_fence_array_next(struct dma_fence *head,
+				       unsigned int index);
+
 #endif /* __LINUX_DMA_FENCE_ARRAY_H */
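For reference, a minimal usage sketch of the new iterator; the counting helper below is illustrative only and not part of this patch:

/* Illustrative only: count the unsignaled fences behind @head, which may
 * be a single dma_fence, a dma_fence_array, or NULL.  In the NULL case
 * dma_fence_array_first() returns NULL and the loop body never runs.
 */
static unsigned int count_unsignaled_fences(struct dma_fence *head)
{
	struct dma_fence *fence;
	unsigned int index;
	unsigned int count = 0;

	dma_fence_array_for_each(fence, index, head) {
		if (!dma_fence_is_signaled(fence))
			count++;
	}

	return count;
}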
From patchwork Tue May 25 21:17:48 2021
X-Patchwork-Submitter: Jason Ekstrand
X-Patchwork-Id: 12280153
From: Jason Ekstrand
To: dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org
Subject: [PATCH 2/7] dma-buf: Rename dma_resv helpers from _rcu to _unlocked (v2)
Date: Tue, 25 May 2021 16:17:48 -0500
Message-Id: <20210525211753.1086069-3-jason@jlekstrand.net>
In-Reply-To: <20210525211753.1086069-1-jason@jlekstrand.net>
Cc: Thomas Zimmermann, Daniel Vetter, Huang Rui, VMware Graphics, Gerd Hoffmann, Jason Ekstrand, Sean Paul, Christian König

None of these helpers actually leak any RCU details to the caller.  They
all assume you have a genuine reference, take the RCU read lock, and
retry if needed.  Naming them with an _rcu is likely to cause callers
more panic than needed.
v2 (Jason Ekstrand):
  - Fix function argument indentation

Signed-off-by: Jason Ekstrand
Suggested-by: Daniel Vetter
Cc: Christian König
Cc: Maarten Lankhorst
Cc: Maxime Ripard
Cc: Thomas Zimmermann
Cc: Lucas Stach
Cc: Rob Clark
Cc: Sean Paul
Cc: Huang Rui
Cc: Gerd Hoffmann
Cc: VMware Graphics
---
 drivers/dma-buf/dma-buf.c                     |  4 +--
 drivers/dma-buf/dma-resv.c                    | 28 +++++++++----------
 drivers/gpu/drm/amd/amdgpu/amdgpu_display.c   |  6 ++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c   |  2 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c       |  4 +--
 drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c       |  6 ++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c        |  4 +--
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.c    |  4 +--
 drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c       |  6 ++--
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c        | 14 +++++-----
 .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c |  6 ++--
 drivers/gpu/drm/drm_gem.c                     | 10 +++----
 drivers/gpu/drm/drm_gem_atomic_helper.c       |  2 +-
 drivers/gpu/drm/etnaviv/etnaviv_gem.c         |  7 ++---
 drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c  |  8 +++---
 drivers/gpu/drm/i915/display/intel_display.c  |  2 +-
 drivers/gpu/drm/i915/dma_resv_utils.c         |  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_busy.c      |  2 +-
 .../gpu/drm/i915/gem/i915_gem_execbuffer.c    |  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_object.h    |  2 +-
 drivers/gpu/drm/i915/gem/i915_gem_userptr.c   |  4 +--
 drivers/gpu/drm/i915/gem/i915_gem_wait.c      | 10 +++----
 drivers/gpu/drm/i915/i915_request.c           |  6 ++--
 drivers/gpu/drm/i915/i915_sw_fence.c          |  4 +--
 drivers/gpu/drm/msm/msm_gem.c                 |  3 +-
 drivers/gpu/drm/nouveau/dispnv50/wndw.c       |  2 +-
 drivers/gpu/drm/nouveau/nouveau_gem.c         |  4 +--
 drivers/gpu/drm/panfrost/panfrost_drv.c       |  4 +--
 drivers/gpu/drm/panfrost/panfrost_job.c       |  2 +-
 drivers/gpu/drm/radeon/radeon_gem.c           |  6 ++--
 drivers/gpu/drm/radeon/radeon_mn.c            |  4 +--
 drivers/gpu/drm/ttm/ttm_bo.c                  | 18 ++++++------
 drivers/gpu/drm/vgem/vgem_fence.c             |  4 +--
 drivers/gpu/drm/virtio/virtgpu_ioctl.c        |  6 ++--
 drivers/gpu/drm/vmwgfx/vmwgfx_bo.c            |  2 +-
 include/linux/dma-resv.h                      | 18 ++++++------
 36 files changed, 108 insertions(+), 110 deletions(-)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index f264b70c383eb..ed6451d55d663 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -1147,8 +1147,8 @@ static int __dma_buf_begin_cpu_access(struct dma_buf *dmabuf,
 	long ret;
 
 	/* Wait on any implicit rendering fences */
-	ret = dma_resv_wait_timeout_rcu(resv, write, true,
-					MAX_SCHEDULE_TIMEOUT);
+	ret = dma_resv_wait_timeout_unlocked(resv, write, true,
+					     MAX_SCHEDULE_TIMEOUT);
 	if (ret < 0)
 		return ret;
 
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index 6ddbeb5dfbf65..d6f1ed4cd4d55 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -417,7 +417,7 @@ int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src)
 EXPORT_SYMBOL(dma_resv_copy_fences);
 
 /**
- * dma_resv_get_fences_rcu - Get an object's shared and exclusive
+ * dma_resv_get_fences_unlocked - Get an object's shared and exclusive
  * fences without update side lock held
  * @obj: the reservation object
 * @pfence_excl: the returned exclusive fence (or NULL)
@@ -429,10 +429,10 @@ EXPORT_SYMBOL(dma_resv_copy_fences);
 * exclusive fence is not specified the fence is put into the array of the
 * shared fences as well. Returns either zero or -ENOMEM.
 */
-int dma_resv_get_fences_rcu(struct dma_resv *obj,
-			    struct dma_fence **pfence_excl,
-			    unsigned *pshared_count,
-			    struct dma_fence ***pshared)
+int dma_resv_get_fences_unlocked(struct dma_resv *obj,
+				 struct dma_fence **pfence_excl,
+				 unsigned *pshared_count,
+				 struct dma_fence ***pshared)
 {
 	struct dma_fence **shared = NULL;
 	struct dma_fence *fence_excl;
@@ -515,10 +515,10 @@ int dma_resv_get_fences_rcu(struct dma_resv *obj,
 	*pshared = shared;
 	return ret;
 }
-EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
+EXPORT_SYMBOL_GPL(dma_resv_get_fences_unlocked);
 
 /**
- * dma_resv_wait_timeout_rcu - Wait on reservation's objects
+ * dma_resv_wait_timeout_unlocked - Wait on reservation's objects
  * shared and/or exclusive fences.
  * @obj: the reservation object
 * @wait_all: if true, wait on all fences, else wait on just exclusive fence
@@ -529,9 +529,9 @@ EXPORT_SYMBOL_GPL(dma_resv_get_fences_rcu);
 * Returns -ERESTARTSYS if interrupted, 0 if the wait timed out, or
 * greater than zero on success.
 */
-long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
-			       bool wait_all, bool intr,
-			       unsigned long timeout)
+long dma_resv_wait_timeout_unlocked(struct dma_resv *obj,
+				    bool wait_all, bool intr,
+				    unsigned long timeout)
 {
 	struct dma_fence *fence;
 	unsigned seq, shared_count;
@@ -602,7 +602,7 @@ long dma_resv_wait_timeout_rcu(struct dma_resv *obj,
 	rcu_read_unlock();
 	goto retry;
 }
-EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_rcu);
+EXPORT_SYMBOL_GPL(dma_resv_wait_timeout_unlocked);
 
 static inline int
 dma_resv_test_signaled_single(struct dma_fence *passed_fence)
@@ -622,7 +622,7 @@ static inline int dma_resv_test_signaled_single(struct dma_fence *passed_fence)
 }
 
 /**
- * dma_resv_test_signaled_rcu - Test if a reservation object's
+ * dma_resv_test_signaled_unlocked - Test if a reservation object's
  * fences have been signaled.
  * @obj: the reservation object
 * @test_all: if true, test all fences, otherwise only test the exclusive
 * fence
 *
 * RETURNS
 * true if all fences signaled, else false
 */
-bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
+bool dma_resv_test_signaled_unlocked(struct dma_resv *obj, bool test_all)
 {
 	unsigned seq, shared_count;
 	int ret;
@@ -680,4 +680,4 @@ bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all)
 	rcu_read_unlock();
 	return ret;
 }
-EXPORT_SYMBOL_GPL(dma_resv_test_signaled_rcu);
+EXPORT_SYMBOL_GPL(dma_resv_test_signaled_unlocked);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
index 8a1fb8b6606e5..b8e24f199be9a 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c
@@ -203,9 +203,9 @@ int amdgpu_display_crtc_page_flip_target(struct drm_crtc *crtc,
 		goto unpin;
 	}
 
-	r = dma_resv_get_fences_rcu(new_abo->tbo.base.resv, &work->excl,
-				    &work->shared_count,
-				    &work->shared);
+	r = dma_resv_get_fences_unlocked(new_abo->tbo.base.resv, &work->excl,
+					 &work->shared_count,
+					 &work->shared);
 	if (unlikely(r != 0)) {
 		DRM_ERROR("failed to get fences for buffer\n");
 		goto unpin;
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
index baa980a477d94..0d0319bc51577 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
@@ -98,7 +98,7 @@ __dma_resv_make_exclusive(struct dma_resv *obj)
 	if (!dma_resv_get_list(obj)) /* no shared fences to convert */
 		return 0;
 
-	r = dma_resv_get_fences_rcu(obj, NULL, &count, &fences);
+	r = dma_resv_get_fences_unlocked(obj, NULL, &count, &fences);
 	if (r)
 		return r;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index 18974bd081f00..8e2996d6ba3ad 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -471,8 +471,8 @@ int amdgpu_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
 		return -ENOENT;
 	}
 	robj = gem_to_amdgpu_bo(gobj);
-	ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true,
-					timeout);
+	ret = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true,
+					     timeout);
 
 	/* ret == 0 means not signaled,
 	 * ret > 0 means signaled
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
index b4971e90b98cf..38e1b32dd2cef 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ids.c
@@ -112,7 +112,7 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
 	unsigned count;
 	int r;
 
-	r = dma_resv_get_fences_rcu(resv, NULL, &count, &fences);
+	r = dma_resv_get_fences_unlocked(resv, NULL, &count, &fences);
 	if (r)
 		goto fallback;
 
@@ -156,8 +156,8 @@ void amdgpu_pasid_free_delayed(struct dma_resv *resv,
 	/* Not enough memory for the delayed delete, as last resort
 	 * block for all the fences to complete.
 	 */
-	dma_resv_wait_timeout_rcu(resv, true, false,
-				  MAX_SCHEDULE_TIMEOUT);
+	dma_resv_wait_timeout_unlocked(resv, true, false,
+				       MAX_SCHEDULE_TIMEOUT);
 	amdgpu_pasid_free(pasid);
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
index 828b5167ff128..0319c8b547c48 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
@@ -75,8 +75,8 @@ static bool amdgpu_mn_invalidate_gfx(struct mmu_interval_notifier *mni,
 
 	mmu_interval_set_seq(mni, cur_seq);
 
-	r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
-				      MAX_SCHEDULE_TIMEOUT);
+	r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, true, false,
+					   MAX_SCHEDULE_TIMEOUT);
 	mutex_unlock(&adev->notifier_lock);
 	if (r <= 0)
 		DRM_ERROR("(%ld) failed to wait for user bo\n", r);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
index 0adffcace3263..de1c7c5501683 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
@@ -791,8 +791,8 @@ int amdgpu_bo_kmap(struct amdgpu_bo *bo, void **ptr)
 		return 0;
 	}
 
-	r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, false, false,
-				      MAX_SCHEDULE_TIMEOUT);
+	r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, false, false,
+					   MAX_SCHEDULE_TIMEOUT);
 	if (r < 0)
 		return r;
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
index c6dbc08016045..4a2196404fb69 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
@@ -1115,9 +1115,9 @@ static int amdgpu_uvd_send_msg(struct amdgpu_ring *ring, struct amdgpu_bo *bo,
 	ib->length_dw = 16;
 
 	if (direct) {
-		r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv,
-					      true, false,
-					      msecs_to_jiffies(10));
+		r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv,
+						   true, false,
+						   msecs_to_jiffies(10));
 		if (r == 0)
 			r = -ETIMEDOUT;
 		if (r < 0)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index 4a3e3f72e1277..7ba1c537d6584 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -2007,14 +2007,14 @@ static void amdgpu_vm_prt_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
 	unsigned i, shared_count;
 	int r;
 
-	r = dma_resv_get_fences_rcu(resv, &excl,
-				    &shared_count, &shared);
+	r = dma_resv_get_fences_unlocked(resv, &excl,
+					 &shared_count, &shared);
 	if (r) {
 		/* Not enough memory to grab the fence list, as last resort
 		 * block for all the fences to complete.
 		 */
-		dma_resv_wait_timeout_rcu(resv, true, false,
-					  MAX_SCHEDULE_TIMEOUT);
+		dma_resv_wait_timeout_unlocked(resv, true, false,
+					       MAX_SCHEDULE_TIMEOUT);
 		return;
 	}
 
@@ -2625,7 +2625,7 @@ bool amdgpu_vm_evictable(struct amdgpu_bo *bo)
 		return true;
 
 	/* Don't evict VM page tables while they are busy */
-	if (!dma_resv_test_signaled_rcu(bo->tbo.base.resv, true))
+	if (!dma_resv_test_signaled_unlocked(bo->tbo.base.resv, true))
 		return false;
 
 	/* Try to block ongoing updates */
@@ -2805,8 +2805,8 @@ void amdgpu_vm_adjust_size(struct amdgpu_device *adev, uint32_t min_vm_size,
  */
 long amdgpu_vm_wait_idle(struct amdgpu_vm *vm, long timeout)
 {
-	timeout = dma_resv_wait_timeout_rcu(vm->root.base.bo->tbo.base.resv,
-					    true, true, timeout);
+	timeout = dma_resv_wait_timeout_unlocked(vm->root.base.bo->tbo.base.resv,
+						 true, true, timeout);
 	if (timeout <= 0)
 		return timeout;
 
diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
index 9ca517b658546..0121d2817fa26 100644
--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
@@ -8276,9 +8276,9 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
 		 * deadlock during GPU reset when this fence will not signal
 		 * but we hold reservation lock for the BO.
 		 */
-		r = dma_resv_wait_timeout_rcu(abo->tbo.base.resv, true,
-					      false,
-					      msecs_to_jiffies(5000));
+		r = dma_resv_wait_timeout_unlocked(abo->tbo.base.resv, true,
+						   false,
+						   msecs_to_jiffies(5000));
 		if (unlikely(r <= 0))
 			DRM_ERROR("Waiting for fences timed out!");
 
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 9989425e9875a..1241a421b9e81 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -770,8 +770,8 @@ long drm_gem_dma_resv_wait(struct drm_file *filep, u32 handle,
 		return -EINVAL;
 	}
 
-	ret = dma_resv_wait_timeout_rcu(obj->resv, wait_all,
-					true, timeout);
+	ret = dma_resv_wait_timeout_unlocked(obj->resv, wait_all,
+					     true, timeout);
 	if (ret == 0)
 		ret = -ETIME;
 	else if (ret > 0)
@@ -1375,13 +1375,13 @@ int drm_gem_fence_array_add_implicit(struct xarray *fence_array,
 
 	if (!write) {
 		struct dma_fence *fence =
-			dma_resv_get_excl_rcu(obj->resv);
+			dma_resv_get_excl_unlocked(obj->resv);
 
 		return drm_gem_fence_array_add(fence_array, fence);
 	}
 
-	ret = dma_resv_get_fences_rcu(obj->resv, NULL,
-				      &fence_count, &fences);
+	ret = dma_resv_get_fences_unlocked(obj->resv, NULL,
+					   &fence_count, &fences);
 	if (ret || !fence_count)
 		return ret;
 
diff --git a/drivers/gpu/drm/drm_gem_atomic_helper.c b/drivers/gpu/drm/drm_gem_atomic_helper.c
index a005c5a0ba46a..a27135084ae5c 100644
--- a/drivers/gpu/drm/drm_gem_atomic_helper.c
+++ b/drivers/gpu/drm/drm_gem_atomic_helper.c
@@ -147,7 +147,7 @@ int drm_gem_plane_helper_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state)
 		return 0;
 
 	obj = drm_gem_fb_get_obj(state->fb, 0);
-	fence = dma_resv_get_excl_rcu(obj->resv);
+	fence = dma_resv_get_excl_unlocked(obj->resv);
 	drm_atomic_set_fence_for_plane(state, fence);
 
 	return 0;
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
index db69f19ab5bca..4e6f5346e84e4 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
@@ -390,14 +390,13 @@ int etnaviv_gem_cpu_prep(struct drm_gem_object *obj, u32 op,
 	}
 
 	if (op & ETNA_PREP_NOSYNC) {
-		if (!dma_resv_test_signaled_rcu(obj->resv,
-						write))
+		if (!dma_resv_test_signaled_unlocked(obj->resv, write))
 			return -EBUSY;
 	} else {
 		unsigned long remain =
			etnaviv_timeout_to_jiffies(timeout);
 
-		ret = dma_resv_wait_timeout_rcu(obj->resv,
-						write, true, remain);
+		ret = dma_resv_wait_timeout_unlocked(obj->resv,
+						     write, true, remain);
 		if (ret <= 0)
 			return ret == 0 ? -ETIMEDOUT : ret;
 	}
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
index d05c359945799..6617fada4595d 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
@@ -189,13 +189,13 @@ static int submit_fence_sync(struct etnaviv_gem_submit *submit)
 			continue;
 
 		if (bo->flags & ETNA_SUBMIT_BO_WRITE) {
-			ret = dma_resv_get_fences_rcu(robj, &bo->excl,
-						      &bo->nr_shared,
-						      &bo->shared);
+			ret = dma_resv_get_fences_unlocked(robj, &bo->excl,
+							   &bo->nr_shared,
+							   &bo->shared);
 			if (ret)
 				return ret;
 		} else {
-			bo->excl = dma_resv_get_excl_rcu(robj);
+			bo->excl = dma_resv_get_excl_unlocked(robj);
 		}
 	}
diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
index 422b59ebf6dce..5f0b85a102159 100644
--- a/drivers/gpu/drm/i915/display/intel_display.c
+++ b/drivers/gpu/drm/i915/display/intel_display.c
@@ -11040,7 +11040,7 @@ intel_prepare_plane_fb(struct drm_plane *_plane,
 		if (ret < 0)
 			goto unpin_fb;
 
-		fence = dma_resv_get_excl_rcu(obj->base.resv);
+		fence = dma_resv_get_excl_unlocked(obj->base.resv);
 		if (fence) {
 			add_rps_boost_after_vblank(new_plane_state->hw.crtc,
 						   fence);
diff --git a/drivers/gpu/drm/i915/dma_resv_utils.c b/drivers/gpu/drm/i915/dma_resv_utils.c
index 9e508e7d4629f..bdfc6bf16a4e9 100644
--- a/drivers/gpu/drm/i915/dma_resv_utils.c
+++ b/drivers/gpu/drm/i915/dma_resv_utils.c
@@ -10,7 +10,7 @@
 void dma_resv_prune(struct dma_resv *resv)
 {
 	if (dma_resv_trylock(resv)) {
-		if (dma_resv_test_signaled_rcu(resv, true))
+		if (dma_resv_test_signaled_unlocked(resv, true))
 			dma_resv_add_excl_fence(resv, NULL);
 		dma_resv_unlock(resv);
 	}
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_busy.c b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
index 25235ef630c10..754ad6d1bace9 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_busy.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_busy.c
@@ -105,7 +105,7 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
	 * Alternatively, we can trade that extra information on read/write
	 * activity with
	 *	args->busy =
-	 *		!dma_resv_test_signaled_rcu(obj->resv, true);
+	 *		!dma_resv_test_signaled_unlocked(obj->resv, true);
	 * to report the overall busyness. This is what the wait-ioctl does.
	 *
	 */
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
index 297143511f99b..e8f323564e57b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
@@ -1481,7 +1481,7 @@ static inline bool use_reloc_gpu(struct i915_vma *vma)
 	if (DBG_FORCE_RELOC)
 		return false;
 
-	return !dma_resv_test_signaled_rcu(vma->resv, true);
+	return !dma_resv_test_signaled_unlocked(vma->resv, true);
 }
 
 static unsigned long vma_phys_addr(struct i915_vma *vma, u32 offset)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index 2ebd79537aea9..7c0eb425cb3b3 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -500,7 +500,7 @@ i915_gem_object_last_write_engine(struct drm_i915_gem_object *obj)
 	struct dma_fence *fence;
 
 	rcu_read_lock();
-	fence = dma_resv_get_excl_rcu(obj->base.resv);
+	fence = dma_resv_get_excl_unlocked(obj->base.resv);
 	rcu_read_unlock();
 
 	if (fence && dma_fence_is_i915(fence) && !dma_fence_is_signaled(fence))
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
index a657b99ec7606..44df18dc9669f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
@@ -85,8 +85,8 @@ static bool i915_gem_userptr_invalidate(struct mmu_interval_notifier *mni,
 		return true;
 
 	/* we will unbind on next submission, still have userptr pins */
-	r = dma_resv_wait_timeout_rcu(obj->base.resv, true, false,
-				      MAX_SCHEDULE_TIMEOUT);
+	r = dma_resv_wait_timeout_unlocked(obj->base.resv, true, false,
+					   MAX_SCHEDULE_TIMEOUT);
 	if (r <= 0)
 		drm_err(&i915->drm, "(%ld) failed to wait for idle\n", r);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_wait.c b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
index 4b9856d5ba14f..5b6c52659ad4d 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_wait.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_wait.c
@@ -45,7 +45,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
 	unsigned int count, i;
 	int ret;
 
-	ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
+	ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
 	if (ret)
 		return ret;
 
@@ -73,7 +73,7 @@ i915_gem_object_wait_reservation(struct dma_resv *resv,
 		 */
 		prune_fences = count && timeout >= 0;
 	} else {
-		excl = dma_resv_get_excl_rcu(resv);
+		excl = dma_resv_get_excl_unlocked(resv);
 	}
 
 	if (excl && timeout >= 0)
@@ -158,8 +158,8 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
 		unsigned int count, i;
 		int ret;
 
-		ret = dma_resv_get_fences_rcu(obj->base.resv,
-					      &excl, &count, &shared);
+		ret = dma_resv_get_fences_unlocked(obj->base.resv,
+						   &excl, &count, &shared);
 		if (ret)
 			return ret;
 
@@ -170,7 +170,7 @@ i915_gem_object_wait_priority(struct drm_i915_gem_object *obj,
 		kfree(shared);
 	} else {
-		excl = dma_resv_get_excl_rcu(obj->base.resv);
+		excl = dma_resv_get_excl_unlocked(obj->base.resv);
 	}
 
 	if (excl) {
diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
index 970d8f4986bbe..f1ed03ced7dd1 100644
--- a/drivers/gpu/drm/i915/i915_request.c
+++ b/drivers/gpu/drm/i915/i915_request.c
@@ -1594,8 +1594,8 @@ i915_request_await_object(struct i915_request *to,
 		struct dma_fence **shared;
 		unsigned int count, i;
 
-		ret = dma_resv_get_fences_rcu(obj->base.resv,
-					      &excl, &count, &shared);
+		ret = dma_resv_get_fences_unlocked(obj->base.resv,
+						   &excl, &count, &shared);
 		if (ret)
 			return ret;
 
@@ -1611,7 +1611,7 @@ i915_request_await_object(struct i915_request *to,
 			dma_fence_put(shared[i]);
 		kfree(shared);
 	} else {
-		excl = dma_resv_get_excl_rcu(obj->base.resv);
+		excl = dma_resv_get_excl_unlocked(obj->base.resv);
 	}
 
 	if (excl) {
diff --git a/drivers/gpu/drm/i915/i915_sw_fence.c b/drivers/gpu/drm/i915/i915_sw_fence.c
index 2744558f30507..0bcb7ea44201e 100644
--- a/drivers/gpu/drm/i915/i915_sw_fence.c
+++ b/drivers/gpu/drm/i915/i915_sw_fence.c
@@ -582,7 +582,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
 		struct dma_fence **shared;
 		unsigned int count, i;
 
-		ret = dma_resv_get_fences_rcu(resv, &excl, &count, &shared);
+		ret = dma_resv_get_fences_unlocked(resv, &excl, &count, &shared);
 		if (ret)
 			return ret;
 
@@ -606,7 +606,7 @@ int i915_sw_fence_await_reservation(struct i915_sw_fence *fence,
 			dma_fence_put(shared[i]);
 		kfree(shared);
 	} else {
-		excl = dma_resv_get_excl_rcu(resv);
+		excl = dma_resv_get_excl_unlocked(resv);
 	}
 
 	if (ret >= 0 && excl && excl->ops != exclude) {
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 56df86e5f7400..1aca60507bb14 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -915,8 +915,7 @@ int msm_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, ktime_t *timeout)
 		op & MSM_PREP_NOSYNC ? 0 : timeout_to_jiffies(timeout);
 	long ret;
 
-	ret = dma_resv_wait_timeout_rcu(obj->resv, write,
-					true, remain);
+	ret = dma_resv_wait_timeout_unlocked(obj->resv, write, true, remain);
 	if (ret == 0)
 		return remain == 0 ? -EBUSY : -ETIMEDOUT;
 	else if (ret < 0)
diff --git a/drivers/gpu/drm/nouveau/dispnv50/wndw.c b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
index 0cb1f9d848d3e..8d048bacd6f02 100644
--- a/drivers/gpu/drm/nouveau/dispnv50/wndw.c
+++ b/drivers/gpu/drm/nouveau/dispnv50/wndw.c
@@ -561,7 +561,7 @@ nv50_wndw_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state)
 		asyw->image.handle[0] = ctxdma->object.handle;
 	}
 
-	asyw->state.fence = dma_resv_get_excl_rcu(nvbo->bo.base.resv);
+	asyw->state.fence = dma_resv_get_excl_unlocked(nvbo->bo.base.resv);
 	asyw->image.offset[0] = nvbo->offset;
 
 	if (wndw->func->prepare) {
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
index a70e82413fa75..bc6b09ee9b552 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
@@ -928,8 +928,8 @@ nouveau_gem_ioctl_cpu_prep(struct drm_device *dev, void *data,
 		return -ENOENT;
 	nvbo = nouveau_gem_object(gem);
 
-	lret = dma_resv_wait_timeout_rcu(nvbo->bo.base.resv, write, true,
-					 no_wait ? 0 : 30 * HZ);
+	lret = dma_resv_wait_timeout_unlocked(nvbo->bo.base.resv, write, true,
+					      no_wait ? 0 : 30 * HZ);
 	if (!lret)
 		ret = -EBUSY;
 	else if (lret > 0)
diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c b/drivers/gpu/drm/panfrost/panfrost_drv.c
index ca07098a61419..eef5b632ee0ce 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -311,8 +311,8 @@ panfrost_ioctl_wait_bo(struct drm_device *dev, void *data,
 	if (!gem_obj)
 		return -ENOENT;
 
-	ret = dma_resv_wait_timeout_rcu(gem_obj->resv, true,
-					true, timeout);
+	ret = dma_resv_wait_timeout_unlocked(gem_obj->resv, true,
+					     true, timeout);
 	if (!ret)
 		ret = timeout ? -ETIMEDOUT : -EBUSY;
diff --git a/drivers/gpu/drm/panfrost/panfrost_job.c b/drivers/gpu/drm/panfrost/panfrost_job.c
index 6003cfeb13221..2df3e999a38d0 100644
--- a/drivers/gpu/drm/panfrost/panfrost_job.c
+++ b/drivers/gpu/drm/panfrost/panfrost_job.c
@@ -203,7 +203,7 @@ static void panfrost_acquire_object_fences(struct drm_gem_object **bos,
 	int i;
 
 	for (i = 0; i < bo_count; i++)
-		implicit_fences[i] = dma_resv_get_excl_rcu(bos[i]->resv);
+		implicit_fences[i] = dma_resv_get_excl_unlocked(bos[i]->resv);
 }
 
 static void panfrost_attach_object_fences(struct drm_gem_object **bos,
diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
index 05ea2f39f6261..1a38b0bf36d11 100644
--- a/drivers/gpu/drm/radeon/radeon_gem.c
+++ b/drivers/gpu/drm/radeon/radeon_gem.c
@@ -125,7 +125,7 @@ static int radeon_gem_set_domain(struct drm_gem_object *gobj,
 	}
 	if (domain == RADEON_GEM_DOMAIN_CPU) {
 		/* Asking for cpu access wait for object idle */
-		r = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
+		r = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
 		if (!r)
 			r = -EBUSY;
 
@@ -474,7 +474,7 @@ int radeon_gem_busy_ioctl(struct drm_device *dev, void *data,
 	}
 	robj = gem_to_radeon_bo(gobj);
-	r = dma_resv_test_signaled_rcu(robj->tbo.base.resv, true);
+	r = dma_resv_test_signaled_unlocked(robj->tbo.base.resv, true);
 	if (r == 0)
 		r = -EBUSY;
 	else
@@ -503,7 +503,7 @@ int radeon_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
 	}
 	robj = gem_to_radeon_bo(gobj);
-	ret = dma_resv_wait_timeout_rcu(robj->tbo.base.resv, true, true, 30 * HZ);
+	ret = dma_resv_wait_timeout_unlocked(robj->tbo.base.resv, true, true, 30 * HZ);
 	if (ret == 0)
 		r = -EBUSY;
 	else if (ret < 0)
diff --git a/drivers/gpu/drm/radeon/radeon_mn.c b/drivers/gpu/drm/radeon/radeon_mn.c
index e37c9a57a7c36..a19be3f8a218c 100644
--- a/drivers/gpu/drm/radeon/radeon_mn.c
+++ b/drivers/gpu/drm/radeon/radeon_mn.c
@@ -66,8 +66,8 @@ static bool radeon_mn_invalidate(struct mmu_interval_notifier *mn,
 		return true;
 	}
 
-	r = dma_resv_wait_timeout_rcu(bo->tbo.base.resv, true, false,
-				      MAX_SCHEDULE_TIMEOUT);
+	r = dma_resv_wait_timeout_unlocked(bo->tbo.base.resv, true, false,
+					   MAX_SCHEDULE_TIMEOUT);
 	if (r <= 0)
 		DRM_ERROR("(%ld) failed to wait for user bo\n", r);
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index ca1b098b6a561..215cad3149621 100644
--- a/drivers/gpu/drm/ttm/ttm_bo.c
+++ b/drivers/gpu/drm/ttm/ttm_bo.c
@@ -294,7 +294,7 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
 	struct dma_resv *resv = &bo->base._resv;
 	int ret;
 
-	if (dma_resv_test_signaled_rcu(resv, true))
+	if (dma_resv_test_signaled_unlocked(resv, true))
 		ret = 0;
 	else
 		ret = -EBUSY;
@@ -306,8 +306,8 @@ static int ttm_bo_cleanup_refs(struct ttm_buffer_object *bo,
 			dma_resv_unlock(bo->base.resv);
 		spin_unlock(&bo->bdev->lru_lock);
 
-		lret = dma_resv_wait_timeout_rcu(resv, true, interruptible,
-						 30 * HZ);
+		lret = dma_resv_wait_timeout_unlocked(resv, true, interruptible,
+						      30 * HZ);
 
 		if (lret < 0)
 			return lret;
@@ -409,8 +409,8 @@ static void ttm_bo_release(struct kref *kref)
 			/* Last resort, if we fail to allocate memory for the
 			 * fences block for the BO to become idle
 			 */
-			dma_resv_wait_timeout_rcu(bo->base.resv, true, false,
-						  30 * HZ);
+			dma_resv_wait_timeout_unlocked(bo->base.resv, true, false,
+						       30 * HZ);
 		}
 
 		if (bo->bdev->funcs->release_notify)
@@ -420,7 +420,7 @@ static void ttm_bo_release(struct kref *kref)
 		ttm_mem_io_free(bdev, &bo->mem);
 	}
 
-	if (!dma_resv_test_signaled_rcu(bo->base.resv, true) ||
+	if (!dma_resv_test_signaled_unlocked(bo->base.resv, true) ||
 	    !dma_resv_trylock(bo->base.resv)) {
 		/* The BO is not idle, resurrect it for delayed destroy */
 		ttm_bo_flush_all_fences(bo);
@@ -1116,14 +1116,14 @@ int ttm_bo_wait(struct ttm_buffer_object *bo,
 	long timeout = 15 * HZ;
 
 	if (no_wait) {
-		if (dma_resv_test_signaled_rcu(bo->base.resv, true))
+		if (dma_resv_test_signaled_unlocked(bo->base.resv, true))
 			return 0;
 		else
 			return -EBUSY;
 	}
 
-	timeout = dma_resv_wait_timeout_rcu(bo->base.resv, true,
-					    interruptible, timeout);
+	timeout = dma_resv_wait_timeout_unlocked(bo->base.resv, true,
+						 interruptible, timeout);
 	if (timeout < 0)
 		return timeout;
 
diff --git a/drivers/gpu/drm/vgem/vgem_fence.c b/drivers/gpu/drm/vgem/vgem_fence.c
index 2902dc6e64faf..010a82405e374 100644
--- a/drivers/gpu/drm/vgem/vgem_fence.c
+++ b/drivers/gpu/drm/vgem/vgem_fence.c
@@ -151,8 +151,8 @@ int vgem_fence_attach_ioctl(struct drm_device *dev,
 
 	/* Check for a conflicting fence */
 	resv = obj->resv;
-	if (!dma_resv_test_signaled_rcu(resv,
-					arg->flags & VGEM_FENCE_WRITE)) {
+	if (!dma_resv_test_signaled_unlocked(resv,
+					     arg->flags & VGEM_FENCE_WRITE)) {
 		ret = -EBUSY;
 		goto err_fence;
 	}
diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index 669f2ee395154..ab010c8e32816 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -451,10 +451,10 @@ static int virtio_gpu_wait_ioctl(struct drm_device *dev, void *data,
 		return -ENOENT;
 
 	if (args->flags & VIRTGPU_WAIT_NOWAIT) {
-		ret = dma_resv_test_signaled_rcu(obj->resv, true);
+		ret = dma_resv_test_signaled_unlocked(obj->resv, true);
 	} else {
-		ret = dma_resv_wait_timeout_rcu(obj->resv, true, true,
-						timeout);
+		ret = dma_resv_wait_timeout_unlocked(obj->resv, true, true,
+						     timeout);
 	}
 	if (ret == 0)
 		ret = -EBUSY;
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
index 04dd49c4c2572..19e1ce23842a9 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
@@ -743,7 +743,7 @@ static int vmw_user_bo_synccpu_grab(struct vmw_user_buffer_object *user_bo,
 	if (flags & drm_vmw_synccpu_allow_cs) {
 		long lret;
 
-		lret = dma_resv_wait_timeout_rcu
+		lret = dma_resv_wait_timeout_unlocked
 			(bo->base.resv, true, true,
 			 nonblock ? 0 : MAX_SCHEDULE_TIMEOUT);
 		if (!lret)
diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
index d44a77e8a7e34..99cfb7af966b8 100644
--- a/include/linux/dma-resv.h
+++ b/include/linux/dma-resv.h
@@ -246,7 +246,7 @@ dma_resv_get_excl(struct dma_resv *obj)
 }
 
 /**
- * dma_resv_get_excl_rcu - get the reservation object's
+ * dma_resv_get_excl_unlocked - get the reservation object's
  * exclusive fence, without lock held.
  * @obj: the reservation object
  *
@@ -257,7 +257,7 @@ dma_resv_get_excl(struct dma_resv *obj)
 * The exclusive fence or NULL if none
 */
 static inline struct dma_fence *
-dma_resv_get_excl_rcu(struct dma_resv *obj)
+dma_resv_get_excl_unlocked(struct dma_resv *obj)
 {
 	struct dma_fence *fence;
 
@@ -278,16 +278,16 @@ void dma_resv_add_shared_fence(struct dma_resv *obj, struct dma_fence *fence);
 
 void dma_resv_add_excl_fence(struct dma_resv *obj, struct dma_fence *fence);
 
-int dma_resv_get_fences_rcu(struct dma_resv *obj,
-			    struct dma_fence **pfence_excl,
-			    unsigned *pshared_count,
-			    struct dma_fence ***pshared);
+int dma_resv_get_fences_unlocked(struct dma_resv *obj,
+				 struct dma_fence **pfence_excl,
+				 unsigned *pshared_count,
+				 struct dma_fence ***pshared);
 
 int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
 
-long dma_resv_wait_timeout_rcu(struct dma_resv *obj, bool wait_all, bool intr,
-			       unsigned long timeout);
+long dma_resv_wait_timeout_unlocked(struct dma_resv *obj, bool wait_all, bool intr,
+				    unsigned long timeout);
 
-bool dma_resv_test_signaled_rcu(struct dma_resv *obj, bool test_all);
+bool dma_resv_test_signaled_unlocked(struct dma_resv *obj, bool test_all);
 
 #endif /* _LINUX_RESERVATION_H */
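For reference, a minimal caller-side sketch of the renamed helpers; the wrapper name and error handling below are illustrative, not part of this series.  The point of the rename is that callers need only a reference to the object, not the dma_resv lock or an RCU read lock of their own:

/* Illustrative only: wait for all implicit fences on a buffer object
 * without taking the dma_resv lock.  The helper takes the RCU read lock
 * and retries internally, which is exactly why the _rcu suffix was
 * misleading.
 */
static int wait_for_idle_unlocked(struct dma_resv *resv, bool intr)
{
	long ret;

	ret = dma_resv_wait_timeout_unlocked(resv, true /* wait_all */, intr,
					     MAX_SCHEDULE_TIMEOUT);
	if (ret < 0)
		return ret;	/* e.g. -ERESTARTSYS if interrupted */

	return 0;		/* all fences signaled */
}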
From patchwork Tue May 25 21:17:49 2021
X-Patchwork-Submitter: Jason Ekstrand
X-Patchwork-Id: 12280157
From: Jason Ekstrand
To: dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org
Subject: [PATCH 3/7] dma-buf: Add dma_resv_get_singleton_unlocked (v5)
Date: Tue, 25 May 2021 16:17:49 -0500
Message-Id: <20210525211753.1086069-4-jason@jlekstrand.net>
In-Reply-To: <20210525211753.1086069-1-jason@jlekstrand.net>
Cc: Daniel Vetter, Christian König, Jason Ekstrand

Add a helper function to get a single fence representing all fences in a
dma_resv object.  This fence is either the only one in the object or all
not-signaled fences of the object flattened out into a dma_fence_array.

v2 (Jason Ekstrand):
  - Take reference of fences both for creating the dma_fence_array and in
    the case where we return one fence.
  - Handle the case where dma_resv_get_list() returns NULL

v3 (Jason Ekstrand):
  - Add an _rcu suffix because it is read-only
  - Rewrite to use dma_resv_get_fences_rcu so it's RCU-safe
  - Add an EXPORT_SYMBOL_GPL declaration
  - Re-author the patch to Jason since very little is left of
    Christian König's original patch
  - Remove the extra fence argument

v4 (Jason Ekstrand):
  - Restore the extra fence argument

v5 (Daniel Vetter):
  - Rename from _rcu to _unlocked since it doesn't leak RCU details to
    the caller
  - Fix docs
  - Use ERR_PTR for error handling rather than an output dma_fence**

v5 (Jason Ekstrand):
  - Drop the extra fence param and leave that to a separate patch

Signed-off-by: Jason Ekstrand
Reviewed-by: Daniel Vetter
Cc: Christian König
Cc: Maarten Lankhorst
---
 drivers/dma-buf/dma-resv.c | 92 ++++++++++++++++++++++++++++++++++++++
 include/linux/dma-resv.h   |  2 +
 2 files changed, 94 insertions(+)

diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c
index d6f1ed4cd4d55..23db2181c8ad8 100644
--- a/drivers/dma-buf/dma-resv.c
+++ b/drivers/dma-buf/dma-resv.c
@@ -33,6 +33,8 @@
  */
 
 #include <linux/dma-resv.h>
+#include <linux/dma-fence-array.h>
+#include <linux/dma-fence-chain.h>
 #include <linux/export.h>
 #include <linux/mm.h>
 #include <linux/sched/mm.h>
@@ -49,6 +51,11 @@
 * write-side updates.
 */
 
+/* deep dive into the fence containers */
+#define dma_fence_deep_dive_for_each(fence, chain, index, head)	\
+	dma_fence_chain_for_each(chain, head)				\
+		dma_fence_array_for_each(fence, index, chain)
+
 DEFINE_WD_CLASS(reservation_ww_class);
 EXPORT_SYMBOL(reservation_ww_class);
 
@@ -517,6 +524,91 @@ int dma_resv_get_fences_unlocked(struct dma_resv *obj,
 }
 EXPORT_SYMBOL_GPL(dma_resv_get_fences_unlocked);
 
+/**
+ * dma_resv_get_singleton_unlocked - get a single fence for the dma_resv object
+ * @obj: the reservation object
+ *
+ * Get a single fence representing all unsignaled fences in the dma_resv
+ * object.  If we got only one fence return a new reference to that,
+ * otherwise return a dma_fence_array object.
+ *
+ * RETURNS
+ * The singleton dma_fence on success or an ERR_PTR on failure
+ */
+struct dma_fence *dma_resv_get_singleton_unlocked(struct dma_resv *obj)
+{
+	struct dma_fence *result, **resv_fences, *fence, *chain, **fences;
+	struct dma_fence_array *array;
+	unsigned int num_resv_fences, num_fences;
+	unsigned int i, j;
+	int err;
+
+	err = dma_resv_get_fences_unlocked(obj, NULL, &num_resv_fences,
+					   &resv_fences);
+	if (err)
+		return ERR_PTR(err);
+
+	if (num_resv_fences == 0)
+		return NULL;
+
+	num_fences = 0;
+	result = NULL;
+
+	for (i = 0; i < num_resv_fences; ++i) {
+		dma_fence_deep_dive_for_each(fence, chain, j, resv_fences[i]) {
+			if (dma_fence_is_signaled(fence))
+				continue;
+
+			result = fence;
+			++num_fences;
+		}
+	}
+
+	if (num_fences <= 1) {
+		result = dma_fence_get(result);
+		goto put_resv_fences;
+	}
+
+	fences = kmalloc_array(num_fences, sizeof(struct dma_fence *),
+			       GFP_KERNEL);
+	if (!fences) {
+		result = ERR_PTR(-ENOMEM);
+		goto put_resv_fences;
+	}
+
+	num_fences = 0;
+	for (i = 0; i < num_resv_fences; ++i) {
+		dma_fence_deep_dive_for_each(fence, chain, j, resv_fences[i]) {
+			if (!dma_fence_is_signaled(fence))
+				fences[num_fences++] = dma_fence_get(fence);
+		}
+	}
+
+	if (num_fences <= 1) {
+		result = num_fences ? fences[0] : NULL;
+		kfree(fences);
+		goto put_resv_fences;
+	}
+
+	array = dma_fence_array_create(num_fences, fences,
+				       dma_fence_context_alloc(1),
+				       1, false);
+	if (array) {
+		result = &array->base;
+	} else {
+		result = ERR_PTR(-ENOMEM);
+		while (num_fences--)
+			dma_fence_put(fences[num_fences]);
+		kfree(fences);
+	}
+
+put_resv_fences:
+	while (num_resv_fences--)
+		dma_fence_put(resv_fences[num_resv_fences]);
+	kfree(resv_fences);
+
+	return result;
+}
+EXPORT_SYMBOL_GPL(dma_resv_get_singleton_unlocked);
+
 /**
 * dma_resv_wait_timeout_unlocked - Wait on reservation's objects
 * shared and/or exclusive fences.
diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
index 99cfb7af966b8..c5fa09555eca5 100644
--- a/include/linux/dma-resv.h
+++ b/include/linux/dma-resv.h
@@ -285,6 +285,8 @@ int dma_resv_get_fences_unlocked(struct dma_resv *obj,
 
 int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src);
 
+struct dma_fence *dma_resv_get_singleton_unlocked(struct dma_resv *obj);
+
 long dma_resv_wait_timeout_unlocked(struct dma_resv *obj, bool wait_all, bool intr,
 				    unsigned long timeout);
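As a usage sketch (the wrapper below is illustrative, not part of the patch), a driver can collapse a buffer's implicit-sync state into one fence and wait on it later:

/* Illustrative only: snapshot all unsignaled fences of @resv into a
 * single fence and wait for it.  NULL means there was nothing to wait
 * on; ERR_PTR follows the convention documented above.
 */
static long wait_resv_singleton(struct dma_resv *resv)
{
	struct dma_fence *fence;
	long ret;

	fence = dma_resv_get_singleton_unlocked(resv);
	if (IS_ERR(fence))
		return PTR_ERR(fence);
	if (!fence)
		return 0;	/* no unsignaled fences */

	ret = dma_fence_wait(fence, true);
	dma_fence_put(fence);
	return ret;
}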
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=khdZVEEK1rP9kUwKpWiHIvVsVSC2tlfvCOHGJLhYPfw=; b=B8z4r8hyjsE4UXOXVxY5TD6fpRkVjmNMniaOngQidaqUz7wo3I7w5VjP1yk7g0fcBL v5uRYGjX1MiXRmOiBVTkGhS2ibv4Igy4kiPu5Np0u0gw1RIvcAcsoD3e8e0eN4BrR2sJ WmrECymb2pIVjDE0vmWls5X1ghUcKhevSgC/0qpw2xNK6UyrTDoYRaw4eB8LLrIOdxYj P8ZlWg1dIepUU/nOSi8kKXBnnsQas8Xkbk8A+86JkP77HChNS1u/bWm31kdCx4CHIc9V +fpGgzX6B/ruKjB/DnaPTkYz9P3T7V59itZGNefXzD1SghbSnCxSvTsfwdbIqzUJZaM1 RMkA== X-Gm-Message-State: AOAM530dR9o6l8hX4bN5UA4aP12zib05Or13yLbmABHmRb5NZ2pQz+w6 IHw0VjpfXIa6okqTbupxObEpfdU++sRseg== X-Google-Smtp-Source: ABdhPJxsCexXt0qdANF6TJ1maa4rnZI7NhaWP+hRNDQJOMuYwwT9ml9eZwq+RosYMOi4/ddeowJU6w== X-Received: by 2002:a17:90b:14cc:: with SMTP id jz12mr33137764pjb.210.1621977487956; Tue, 25 May 2021 14:18:07 -0700 (PDT) Received: from omlet.lan ([134.134.139.83]) by smtp.gmail.com with ESMTPSA id e186sm14342278pfa.145.2021.05.25.14.18.06 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 25 May 2021 14:18:07 -0700 (PDT) From: Jason Ekstrand To: dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org Subject: [PATCH 4/7] dma-buf: Document DMA_BUF_IOCTL_SYNC Date: Tue, 25 May 2021 16:17:50 -0500 Message-Id: <20210525211753.1086069-5-jason@jlekstrand.net> X-Mailer: git-send-email 2.31.1 In-Reply-To: <20210525211753.1086069-1-jason@jlekstrand.net> References: <20210525211753.1086069-1-jason@jlekstrand.net> MIME-Version: 1.0 X-BeenThere: dri-devel@lists.freedesktop.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Direct Rendering Infrastructure - Development List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Daniel Vetter , =?utf-8?q?Christian_K=C3=B6nig?= , Jason Ekstrand Errors-To: dri-devel-bounces@lists.freedesktop.org Sender: "dri-devel" This adds a new "DMA Buffer ioctls" section to the dma-buf docs and adds documentation for DMA_BUF_IOCTL_SYNC. Signed-off-by: Jason Ekstrand Cc: Daniel Vetter Cc: Christian König Cc: Sumit Semwal --- Documentation/driver-api/dma-buf.rst | 8 +++++++ include/uapi/linux/dma-buf.h | 32 +++++++++++++++++++++++++++- 2 files changed, 39 insertions(+), 1 deletion(-) diff --git a/Documentation/driver-api/dma-buf.rst b/Documentation/driver-api/dma-buf.rst index 7f37ec30d9fd7..784f84fe50a5e 100644 --- a/Documentation/driver-api/dma-buf.rst +++ b/Documentation/driver-api/dma-buf.rst @@ -88,6 +88,9 @@ consider though: - The DMA buffer FD is also pollable, see `Implicit Fence Poll Support`_ below for details. +- The DMA buffer FD also supports a few dma-buf-specific ioctls, see + `DMA Buffer ioctls`_ below for details. + Basic Operation and Device DMA Access ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ @@ -106,6 +109,11 @@ Implicit Fence Poll Support .. kernel-doc:: drivers/dma-buf/dma-buf.c :doc: implicit fence polling +DMA Buffer ioctls +~~~~~~~~~~~~~~~~~ + +.. kernel-doc:: include/uapi/linux/dma-buf.h + Kernel Functions and Structures Reference ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h index 7f30393b92c3b..1f67ced853b14 100644 --- a/include/uapi/linux/dma-buf.h +++ b/include/uapi/linux/dma-buf.h @@ -22,8 +22,38 @@ #include -/* begin/end dma-buf functions used for userspace mmap. */ +/** + * struct dma_buf_sync - Synchronize with CPU access. 
+ * + * When a DMA buffer is accessed from the CPU via mmap, it is not always + * possible to guarantee coherency between the CPU-visible map and underlying + * memory. To manage coherency, DMA_BUF_IOCTL_SYNC must be used to bracket + * any CPU access to give the kernel the chance to shuffle memory around if + * needed. + * + * Prior to accessing the map, the client should call DMA_BUF_IOCTL_SYNC + * with DMA_BUF_SYNC_START and the appropriate read/write flags. Once the + * access is complete, the client should call DMA_BUF_IOCTL_SYNC with + * DMA_BUF_SYNC_END and the same read/write flags. + */ struct dma_buf_sync { + /** + * @flags: Set of access flags + * + * - DMA_BUF_SYNC_START: Indicates the start of a map access + * session. + * + * - DMA_BUF_SYNC_END: Indicates the end of a map access session. + * + * - DMA_BUF_SYNC_READ: Indicates that the mapped DMA buffer will + * be read by the client via the CPU map. + * + * - DMA_BUF_SYNC_WRITE: Indicates that the mapped DMA buffer will + * be written by the client via the CPU map. + * + * - DMA_BUF_SYNC_RW: An alias for DMA_BUF_SYNC_READ | + * DMA_BUF_SYNC_WRITE. + */ __u64 flags; };
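To make the bracketing concrete, a minimal userspace sketch follows; dmabuf_fd, map, data, and size are assumed to come from the application, and the helper name write_to_dmabuf is ours (error handling trimmed):

    #include <linux/dma-buf.h>
    #include <string.h>
    #include <sys/ioctl.h>

    /* Write "size" bytes from "data" into a CPU mapping of a dma-buf,
     * bracketing the access with DMA_BUF_IOCTL_SYNC as described above. */
    static int write_to_dmabuf(int dmabuf_fd, void *map,
                               const void *data, size_t size)
    {
            struct dma_buf_sync sync = {
                    .flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_WRITE,
            };

            if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync) < 0)
                    return -1;

            memcpy(map, data, size); /* CPU access through the mmap */

            sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_WRITE;
            return ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);
    }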
From patchwork Tue May 25 21:17:51 2021
X-Patchwork-Submitter: Jason Ekstrand X-Patchwork-Id: 12280159
From: Jason Ekstrand To: dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org Subject: [PATCH 5/7] dma-buf: Add an API for exporting sync files (v11) Date: Tue, 25 May 2021 16:17:51 -0500 Message-Id: <20210525211753.1086069-6-jason@jlekstrand.net> In-Reply-To: <20210525211753.1086069-1-jason@jlekstrand.net> References: <20210525211753.1086069-1-jason@jlekstrand.net> Cc: Christian König, Jason Ekstrand, Daniel Vetter
Modern userspace APIs like Vulkan are built on an explicit synchronization model. This doesn't always play nicely with the implicit synchronization used in the kernel and assumed by X11 and Wayland. The client -> compositor half of the synchronization isn't too bad, at least on Intel, because we can control whether or not i915 synchronizes on the buffer and whether or not it's considered written. The harder part is the compositor -> client synchronization when we get the buffer back from the compositor. We're required to be able to provide the client with a VkSemaphore and VkFence representing the point in time where the window system (compositor and/or display) finished using the buffer. With current APIs, it's very hard to do this in such a way that we don't get confused by the Vulkan driver's access of the buffer. In particular, once we tell the kernel that we're rendering to the buffer again, any CPU waits on the buffer or GPU dependencies will wait on some of the client rendering and not just the compositor. This new IOCTL solves this problem by allowing us to get a snapshot of the implicit synchronization state of a given dma-buf in the form of a sync file. It's effectively the same as a poll() or I915_GEM_WAIT except that, instead of waiting on the CPU directly, it encapsulates the wait operation, at the current moment in time, in a sync_file so we can check/wait on it later. As long as the Vulkan driver does the sync_file export from the dma-buf before we re-introduce it for rendering, it will only contain fences from the compositor or display.
This allows us to accurately turn it into a VkFence or VkSemaphore without any over-synchronization.
v2 (Jason Ekstrand): - Use a wrapper dma_fence_array of all fences including the new one when importing an exclusive fence. v3 (Jason Ekstrand): - Lock around setting shared fences as well as exclusive - Mark SIGNAL_SYNC_FILE as a read-write ioctl. - Initialize ret to 0 in dma_buf_wait_sync_file v4 (Jason Ekstrand): - Use the new dma_resv_get_singleton helper v5 (Jason Ekstrand): - Rename the IOCTLs to import/export rather than wait/signal - Drop the WRITE flag and always get/set the exclusive fence v6 (Jason Ekstrand): - Drop the sync_file import as it was all-around sketchy and not nearly as useful as export. - Re-introduce READ/WRITE flag support for export - Rework the commit message v7 (Jason Ekstrand): - Require at least one sync flag - Fix a refcounting bug: dma_resv_get_excl() doesn't take a reference - Use _rcu helpers since we're accessing the dma_resv read-only v8 (Jason Ekstrand): - Return -ENOMEM if the sync_file_create fails - Predicate support on IS_ENABLED(CONFIG_SYNC_FILE) v9 (Jason Ekstrand): - Add documentation for the new ioctl v10 (Jason Ekstrand): - Go back to dma_buf_sync_file as the ioctl struct name v11 (Daniel Vetter): - Go back to dma_buf_export_sync_file as the ioctl struct name - Better kerneldoc describing what the read/write flags do
Signed-off-by: Jason Ekstrand Acked-by: Simon Ser Acked-by: Christian König Reviewed-by: Daniel Vetter Cc: Sumit Semwal Cc: Maarten Lankhorst
--- drivers/dma-buf/dma-buf.c | 67 ++++++++++++++++++++++++++++++++++++ include/uapi/linux/dma-buf.h | 35 +++++++++++++++++++ 2 files changed, 102 insertions(+)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index ed6451d55d663..65a9574ee04ed 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -20,6 +20,7 @@ #include #include #include +#include #include #include #include @@ -191,6 +192,9 @@ static loff_t dma_buf_llseek(struct file *file, loff_t offset, int whence) * Note that this only signals the completion of the respective fences, i.e. the * DMA transfers are complete. Cache flushing and any other necessary * preparations before CPU access can begin still need to happen. + * + * As an alternative to poll(), the set of fences on a DMA buffer can be + * exported as a &sync_file using &dma_buf_export_sync_file.
*/ static void dma_buf_poll_cb(struct dma_fence *fence, struct dma_fence_cb *cb) @@ -362,6 +366,64 @@ static long dma_buf_set_name(struct dma_buf *dmabuf, const char __user *buf) return ret; } +#if IS_ENABLED(CONFIG_SYNC_FILE) +static long dma_buf_export_sync_file(struct dma_buf *dmabuf, + void __user *user_data) +{ + struct dma_buf_export_sync_file arg; + struct dma_fence *fence = NULL; + struct sync_file *sync_file; + int fd, ret; + + if (copy_from_user(&arg, user_data, sizeof(arg))) + return -EFAULT; + + if (arg.flags & ~DMA_BUF_SYNC_RW) + return -EINVAL; + + if ((arg.flags & DMA_BUF_SYNC_RW) == 0) + return -EINVAL; + + fd = get_unused_fd_flags(O_CLOEXEC); + if (fd < 0) + return fd; + + if (arg.flags & DMA_BUF_SYNC_WRITE) { + fence = dma_resv_get_singleton_unlocked(dmabuf->resv); + if (IS_ERR(fence)) { + ret = PTR_ERR(fence); + goto err_put_fd; + } + } else if (arg.flags & DMA_BUF_SYNC_READ) { + fence = dma_resv_get_excl_unlocked(dmabuf->resv); + } + + if (!fence) + fence = dma_fence_get_stub(); + + sync_file = sync_file_create(fence); + + dma_fence_put(fence); + + if (!sync_file) { + ret = -ENOMEM; + goto err_put_fd; + } + + fd_install(fd, sync_file->file); + + arg.fd = fd; + if (copy_to_user(user_data, &arg, sizeof(arg))) + return -EFAULT; + + return 0; + +err_put_fd: + put_unused_fd(fd); + return ret; +} +#endif + static long dma_buf_ioctl(struct file *file, unsigned int cmd, unsigned long arg) { @@ -405,6 +467,11 @@ static long dma_buf_ioctl(struct file *file, case DMA_BUF_SET_NAME_B: return dma_buf_set_name(dmabuf, (const char __user *)arg); +#if IS_ENABLED(CONFIG_SYNC_FILE) + case DMA_BUF_IOCTL_EXPORT_SYNC_FILE: + return dma_buf_export_sync_file(dmabuf, (void __user *)arg); +#endif + default: return -ENOTTY; } diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h index 1f67ced853b14..aeba45180b028 100644 --- a/include/uapi/linux/dma-buf.h +++ b/include/uapi/linux/dma-buf.h @@ -67,6 +67,40 @@ struct dma_buf_sync { #define DMA_BUF_NAME_LEN 32 +/** + * struct dma_buf_export_sync_file - Get a sync_file from a dma-buf + * + * Userspace can perform a DMA_BUF_IOCTL_EXPORT_SYNC_FILE to retrieve the + * current set of fences on a dma-buf file descriptor as a sync_file. CPU + * waits via poll() or other driver-specific mechanisms typically wait on + * whatever fences are on the dma-buf at the time the wait begins. This + * is similar except that it takes a snapshot of the current fences on the + * dma-buf for waiting later instead of waiting immediately. This is + * useful for modern graphics APIs such as Vulkan which assume an explicit + * synchronization model but still need to inter-operate with dma-buf. + */ +struct dma_buf_export_sync_file { + /** + * @flags: Read/write flags + * + * Must be DMA_BUF_SYNC_READ, DMA_BUF_SYNC_WRITE, or both. + * + * If DMA_BUF_SYNC_READ is set and DMA_BUF_SYNC_WRITE is not set, + * the returned sync file waits on any writers of the dma-buf to + * complete. Waiting on the returned sync file is equivalent to + * poll() with POLLIN. + * + * If DMA_BUF_SYNC_WRITE is set, the returned sync file waits on + * any users of the dma-buf (read or write) to complete. Waiting + * on the returned sync file is equivalent to poll() with POLLOUT. + * If both DMA_BUF_SYNC_WRITE and DMA_BUF_SYNC_READ are set, this + * is equivalent to just DMA_BUF_SYNC_WRITE. 
+ */ + __u32 flags; + /** @fd: Returned sync file descriptor */ + __s32 fd; +}; + #define DMA_BUF_BASE 'b' #define DMA_BUF_IOCTL_SYNC _IOW(DMA_BUF_BASE, 0, struct dma_buf_sync) @@ -76,5 +110,6 @@ struct dma_buf_sync { #define DMA_BUF_SET_NAME _IOW(DMA_BUF_BASE, 1, const char *) #define DMA_BUF_SET_NAME_A _IOW(DMA_BUF_BASE, 1, u32) #define DMA_BUF_SET_NAME_B _IOW(DMA_BUF_BASE, 1, u64) +#define DMA_BUF_IOCTL_EXPORT_SYNC_FILE _IOWR(DMA_BUF_BASE, 2, struct dma_buf_export_sync_file) #endif
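As a usage sketch (assuming a kernel with this patch applied; the helper name export_dmabuf_fence is ours), the snapshot described above reduces to a single ioctl:

    #include <linux/dma-buf.h>
    #include <sys/ioctl.h>

    /* Snapshot the fences currently on a dma-buf as a sync_file fd.
     * With DMA_BUF_SYNC_READ, the returned fd signals once all current
     * writers of the buffer (e.g. the compositor) have finished. */
    static int export_dmabuf_fence(int dmabuf_fd)
    {
            struct dma_buf_export_sync_file args = {
                    .flags = DMA_BUF_SYNC_READ,
                    .fd = -1,
            };

            if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &args) < 0)
                    return -1;

            return args.fd; /* caller close()s this when done */
    }

The returned fd can then be handed to a Vulkan driver as the payload for a VkSemaphore or VkFence.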
From patchwork Tue May 25 21:17:52 2021
X-Patchwork-Submitter: Jason Ekstrand X-Patchwork-Id: 12280163
From: Jason Ekstrand To: dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org Subject: [PATCH 6/7] RFC: dma-buf: Add an extra fence to dma_resv_get_singleton_unlocked Date: Tue, 25 May 2021 16:17:52 -0500 Message-Id: <20210525211753.1086069-7-jason@jlekstrand.net> In-Reply-To: <20210525211753.1086069-1-jason@jlekstrand.net> References: <20210525211753.1086069-1-jason@jlekstrand.net> Cc: Daniel Vetter, Christian König, Jason Ekstrand
For dma-buf sync_file import, we want to get all the fences on a dma_resv plus one more. We could wrap the fence we get back in an array fence or we could make dma_resv_get_singleton_unlocked take "one more" to make this case easier.
Signed-off-by: Jason Ekstrand Reviewed-by: Daniel Vetter Cc: Christian König Cc: Maarten Lankhorst
--- drivers/dma-buf/dma-buf.c | 2 +- drivers/dma-buf/dma-resv.c | 23 +++++++++++++++++++++-- include/linux/dma-resv.h | 3 ++- 3 files changed, 24 insertions(+), 4 deletions(-)
diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index 65a9574ee04ed..ea117de962903 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -389,7 +389,7 @@ static long dma_buf_export_sync_file(struct dma_buf *dmabuf, return fd; if (arg.flags & DMA_BUF_SYNC_WRITE) { - fence = dma_resv_get_singleton_unlocked(dmabuf->resv); + fence = dma_resv_get_singleton_unlocked(dmabuf->resv, NULL); if (IS_ERR(fence)) { ret = PTR_ERR(fence); goto err_put_fd;
diff --git a/drivers/dma-buf/dma-resv.c b/drivers/dma-buf/dma-resv.c index 23db2181c8ad8..5a5e13a01e516 100644 --- a/drivers/dma-buf/dma-resv.c +++ b/drivers/dma-buf/dma-resv.c @@ -527,6 +527,7 @@ EXPORT_SYMBOL_GPL(dma_resv_get_fences_unlocked); /** * dma_resv_get_singleton_unlocked - get a single fence for the dma_resv object * @obj: the reservation object + * @extra: extra fence to add to the resulting array * * Get a single fence representing all unsignaled fences in the dma_resv object * plus the given extra fence.
If we got only one fence return a new reference to that, otherwise return a dma_fence_array object. @@ -535,7 +536,8 @@ EXPORT_SYMBOL_GPL(dma_resv_get_fences_unlocked); * RETURNS * The singleton dma_fence on success or an ERR_PTR on failure */ -struct dma_fence *dma_resv_get_singleton_unlocked(struct dma_resv *obj) +struct dma_fence *dma_resv_get_singleton_unlocked(struct dma_resv *obj, + struct dma_fence *extra) { struct dma_fence *result, **resv_fences, *fence, *chain, **fences; struct dma_fence_array *array; @@ -546,7 +548,7 @@ struct dma_fence *dma_resv_get_singleton_unlocked(struct dma_resv *obj) if (err) return ERR_PTR(err); - if (num_resv_fences == 0) + if (num_resv_fences == 0 && !extra) return NULL; num_fences = 0; @@ -562,6 +564,16 @@ } } + if (extra) { + dma_fence_deep_dive_for_each(fence, chain, j, extra) { + if (dma_fence_is_signaled(fence)) + continue; + + result = fence; + ++num_fences; + } + } + if (num_fences <= 1) { result = dma_fence_get(result); goto put_resv_fences; @@ -582,6 +594,13 @@ } } + if (extra) { + dma_fence_deep_dive_for_each(fence, chain, j, extra) { + if (!dma_fence_is_signaled(fence)) + fences[num_fences++] = dma_fence_get(fence); + } + } + if (num_fences <= 1) { result = num_fences ? fences[0] : NULL; kfree(fences);
diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h index c5fa09555eca5..4b1dabfa7017d 100644 --- a/include/linux/dma-resv.h +++ b/include/linux/dma-resv.h @@ -285,7 +285,8 @@ int dma_resv_get_fences_unlocked(struct dma_resv *obj, int dma_resv_copy_fences(struct dma_resv *dst, struct dma_resv *src); -struct dma_fence *dma_resv_get_singleton_unlocked(struct dma_resv *obj); +struct dma_fence *dma_resv_get_singleton_unlocked(struct dma_resv *obj, + struct dma_fence *extra); long dma_resv_wait_timeout_unlocked(struct dma_resv *obj, bool wait_all, bool intr, unsigned long timeout);
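For illustration, a hypothetical caller could use the new extra argument like this (obj and new_fence are assumptions, and the reservation lock is assumed held since dma_resv_add_excl_fence() requires it):

    /* Sketch: fold one additional fence into the singleton and install
     * it as the new exclusive fence, mirroring the sync_file import in
     * the next patch. */
    struct dma_fence *singleton;

    singleton = dma_resv_get_singleton_unlocked(obj, new_fence);
    if (IS_ERR(singleton))
            return PTR_ERR(singleton);

    if (singleton) {
            dma_resv_add_excl_fence(obj, singleton); /* takes its own ref */
            dma_fence_put(singleton);
    }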
From patchwork Tue May 25 21:17:53 2021
X-Patchwork-Submitter: Jason Ekstrand X-Patchwork-Id: 12280165
From: Jason Ekstrand To: dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org Subject: [PATCH 7/7] RFC: dma-buf: Add an API for importing sync files (v7) Date: Tue, 25 May 2021 16:17:53 -0500 Message-Id: <20210525211753.1086069-8-jason@jlekstrand.net> In-Reply-To: <20210525211753.1086069-1-jason@jlekstrand.net> References: <20210525211753.1086069-1-jason@jlekstrand.net> Cc: Daniel Vetter, Christian König, Jason Ekstrand
This patch is analogous to the previous sync file export patch in that it allows you to import a sync_file into a dma-buf. Unlike the previous patch, however, this does add genuinely new functionality to dma-buf. Without this, the only way to attach a sync_file to a dma-buf is to submit a batch to your driver of choice which waits on the sync_file and claims to write to the dma-buf. Even if said batch is a no-op, a submit is typically way more overhead than just attaching a fence. A submit may also imply extra synchronization with other work because it happens on a hardware queue. In the Vulkan world, this is useful for dealing with the out-fence from vkQueuePresent. Current Linux window systems (X11, Wayland, etc.) all rely on dma-buf implicit sync.
Since Vulkan is an explicit sync API, we get a set of fences (VkSemaphores) in vkQueuePresent and have to stash those as an exclusive (write) fence on the dma-buf. We handle it in Mesa today with the above-mentioned dummy submit trick. This ioctl would allow us to set it directly without the dummy submit. This may also open up possibilities for GPU drivers to move away from implicit sync for their kernel driver uAPI and instead provide sync files and rely on dma-buf import/export for communicating with other implicit sync clients. We make the explicit choice here to only allow setting RW fences, which translates to an exclusive fence on the dma_resv. There's no use for read-only fences for communicating with other implicit sync userspace and any such attempts are likely to be racy at best. When we go to insert the RW fence, the actual fence we set as the new exclusive fence is a combination of the sync_file provided by the user and all the other fences on the dma_resv. This ensures that the newly added exclusive fence will never signal before the old one would have and ensures that we don't break any dma_resv contracts. We require userspace to specify RW in the flags for symmetry with the export ioctl and in case we ever want to support read fences in the future. There is one downside here that's worth documenting: if two clients writing to the same dma-buf using this API race with each other, their actions on the dma-buf may happen in parallel or in an undefined order. Both with and without this API, the pattern is the same: collect all the fences on the dma-buf, submit work which depends on said fences, and then set a new exclusive (write) fence on the dma-buf which depends on said work. The difference is that, when it's all handled by the GPU driver's submit ioctl, the three operations happen atomically under the dma_resv lock. If two userspace submits race, one will happen before the other. You aren't guaranteed which, but you are guaranteed that they're strictly ordered. If userspace manages the fences itself, then these three operations happen separately and the two render operations may happen genuinely in parallel or get interleaved. However, this is a case of userspace racing with itself. As long as we ensure userspace can't back the kernel into a corner, it should be fine.
v2 (Jason Ekstrand): - Use a wrapper dma_fence_array of all fences including the new one when importing an exclusive fence. v3 (Jason Ekstrand): - Lock around setting shared fences as well as exclusive - Mark SIGNAL_SYNC_FILE as a read-write ioctl.
- Initialize ret to 0 in dma_buf_wait_sync_file v4 (Jason Ekstrand): - Use the new dma_resv_get_singleton helper v5 (Jason Ekstrand): - Rename the IOCTLs to import/export rather than wait/signal - Drop the WRITE flag and always get/set the exclusive fence v6 (Jason Ekstrand): - Split import and export into separate patches - New commit message v7 (Daniel Vetter): - Fix the uapi header to use the right struct in the ioctl - Use a separate dma_buf_import_sync_file struct - Add kerneldoc for dma_buf_import_sync_file Signed-off-by: Jason Ekstrand Cc: Christian König Cc: Daniel Vetter Cc: Sumit Semwal Cc: Maarten Lankhorst --- drivers/dma-buf/dma-buf.c | 36 ++++++++++++++++++++++++++++++++++++ include/uapi/linux/dma-buf.h | 22 ++++++++++++++++++++++ 2 files changed, 58 insertions(+) diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c index ea117de962903..098340222662b 100644 --- a/drivers/dma-buf/dma-buf.c +++ b/drivers/dma-buf/dma-buf.c @@ -422,6 +422,40 @@ static long dma_buf_export_sync_file(struct dma_buf *dmabuf, put_unused_fd(fd); return ret; } + +static long dma_buf_import_sync_file(struct dma_buf *dmabuf, + const void __user *user_data) +{ + struct dma_buf_import_sync_file arg; + struct dma_fence *fence, *singleton = NULL; + int ret = 0; + + if (copy_from_user(&arg, user_data, sizeof(arg))) + return -EFAULT; + + if (arg.flags != DMA_BUF_SYNC_RW) + return -EINVAL; + + fence = sync_file_get_fence(arg.fd); + if (!fence) + return -EINVAL; + + dma_resv_lock(dmabuf->resv, NULL); + + singleton = dma_resv_get_singleton_unlocked(dmabuf->resv, fence); + if (IS_ERR(singleton)) { + ret = PTR_ERR(singleton); + } else if (singleton) { + dma_resv_add_excl_fence(dmabuf->resv, singleton); + dma_resv_add_shared_fence(dmabuf->resv, singleton); + } + + dma_resv_unlock(dmabuf->resv); + + dma_fence_put(fence); + + return ret; +} #endif static long dma_buf_ioctl(struct file *file, @@ -470,6 +504,8 @@ static long dma_buf_ioctl(struct file *file, #if IS_ENABLED(CONFIG_SYNC_FILE) case DMA_BUF_IOCTL_EXPORT_SYNC_FILE: return dma_buf_export_sync_file(dmabuf, (void __user *)arg); + case DMA_BUF_IOCTL_IMPORT_SYNC_FILE: + return dma_buf_import_sync_file(dmabuf, (const void __user *)arg); #endif default: diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h index aeba45180b028..af53987db24be 100644 --- a/include/uapi/linux/dma-buf.h +++ b/include/uapi/linux/dma-buf.h @@ -101,6 +101,27 @@ struct dma_buf_export_sync_file { __s32 fd; }; +/** + * struct dma_buf_import_sync_file - Insert a sync_file into a dma-buf + * + * Userspace can perform a DMA_BUF_IOCTL_IMPORT_SYNC_FILE to insert a + * sync_file into a dma-buf for the purposes of implicit synchronization + * with other dma-buf consumers. This allows clients using explicitly + * synchronized APIs such as Vulkan to inter-op with dma-buf consumers + * which expect implicit synchronization such as OpenGL or most media + * drivers/video. + */ +struct dma_buf_import_sync_file { + /** + * @flags: Read/write flags + * + * Must be DMA_BUF_SYNC_RW. 
+ */ + __u32 flags; + /** @fd: Sync file descriptor */ + __s32 fd; +}; + #define DMA_BUF_BASE 'b' #define DMA_BUF_IOCTL_SYNC _IOW(DMA_BUF_BASE, 0, struct dma_buf_sync) @@ -111,5 +132,6 @@ struct dma_buf_export_sync_file { #define DMA_BUF_SET_NAME_A _IOW(DMA_BUF_BASE, 1, u32) #define DMA_BUF_SET_NAME_B _IOW(DMA_BUF_BASE, 1, u64) #define DMA_BUF_IOCTL_EXPORT_SYNC_FILE _IOWR(DMA_BUF_BASE, 2, struct dma_buf_export_sync_file) +#define DMA_BUF_IOCTL_IMPORT_SYNC_FILE _IOW(DMA_BUF_BASE, 3, struct dma_buf_import_sync_file) #endif
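And a matching userspace sketch for the import side (assuming a kernel with this series applied; the helper name import_dmabuf_fence is ours), e.g. for attaching a vkQueuePresent out-fence that the driver has exported as a sync_file:

    #include <linux/dma-buf.h>
    #include <sys/ioctl.h>

    /* Attach a sync_file's fence to a dma-buf as its new exclusive
     * (write) fence so that implicit-sync consumers such as GL
     * compositors or media drivers will wait on it. */
    static int import_dmabuf_fence(int dmabuf_fd, int sync_file_fd)
    {
            struct dma_buf_import_sync_file args = {
                    .flags = DMA_BUF_SYNC_RW, /* must be RW, per the kerneldoc */
                    .fd = sync_file_fd,
            };

            return ioctl(dmabuf_fd, DMA_BUF_IOCTL_IMPORT_SYNC_FILE, &args);
    }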