From patchwork Thu Oct 4 13:12:43 2018
X-Patchwork-Submitter: Christian König
X-Patchwork-Id: 10626143
From: Christian König
To: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Subject: [PATCH 1/8] dma-buf: remove shared fence staging in reservation object
Date: Thu, 4 Oct 2018 15:12:43 +0200
Message-Id: <20181004131250.2373-1-christian.koenig@amd.com>
X-Mailer: git-send-email 2.14.1

No need for the staging any more. Just replace the list when there
isn't enough room for the additional fence.

Signed-off-by: Christian König
Reviewed-by: Michel Dänzer
Acked-by: Junwei Zhang
Reviewed-by: Junwei Zhang
---
 drivers/dma-buf/reservation.c | 178 ++++++++++++++----------------------------
 include/linux/reservation.h  |   4 -
 2 files changed, 58 insertions(+), 124 deletions(-)

diff --git a/drivers/dma-buf/reservation.c b/drivers/dma-buf/reservation.c
index 6c95f61a32e7..5825fc336a13 100644
--- a/drivers/dma-buf/reservation.c
+++ b/drivers/dma-buf/reservation.c
@@ -68,105 +68,23 @@ EXPORT_SYMBOL(reservation_seqcount_string);
  */
 int reservation_object_reserve_shared(struct reservation_object *obj)
 {
-	struct reservation_object_list *fobj, *old;
-	u32 max;
+	struct reservation_object_list *old, *new;
+	unsigned int i, j, k, max;

 	old = reservation_object_get_list(obj);

 	if (old && old->shared_max) {
-		if (old->shared_count < old->shared_max) {
-			/* perform an in-place update */
-			kfree(obj->staged);
-			obj->staged = NULL;
+		if (old->shared_count < old->shared_max)
 			return 0;
-		} else
+		else
 			max = old->shared_max * 2;
-	} else
-		max = 4;
-
-	/*
-	 * resize obj->staged or allocate if it doesn't exist,
-	 * noop if already correct size
-	 */
-	fobj = krealloc(obj->staged, offsetof(typeof(*fobj), shared[max]),
-			GFP_KERNEL);
-	if (!fobj)
-		return -ENOMEM;
-
-	obj->staged = fobj;
-	fobj->shared_max = max;
-	return 0;
-}
-EXPORT_SYMBOL(reservation_object_reserve_shared);
-
-static void
-reservation_object_add_shared_inplace(struct reservation_object *obj,
-				      struct reservation_object_list *fobj,
-				      struct dma_fence *fence)
-{
-	struct dma_fence *signaled = NULL;
-	u32 i, signaled_idx;
-
-	dma_fence_get(fence);
-
-	preempt_disable();
-	write_seqcount_begin(&obj->seq);
-
-	for (i = 0; i < fobj->shared_count; ++i) {
-		struct dma_fence *old_fence;
-
-		old_fence = rcu_dereference_protected(fobj->shared[i],
-						reservation_object_held(obj));
-
-		if (old_fence->context == fence->context) {
-			/* memory barrier is added by write_seqcount_begin */
-			RCU_INIT_POINTER(fobj->shared[i], fence);
-			write_seqcount_end(&obj->seq);
-			preempt_enable();
-
-			dma_fence_put(old_fence);
-			return;
-		}
-
-		if (!signaled && dma_fence_is_signaled(old_fence)) {
-			signaled = old_fence;
-			signaled_idx = i;
-		}
-	}
-
-	/*
-	 * memory barrier is added by write_seqcount_begin,
-	 * fobj->shared_count is protected by this lock too
-	 */
-	if (signaled) {
-		RCU_INIT_POINTER(fobj->shared[signaled_idx], fence);
 	} else {
-		BUG_ON(fobj->shared_count >= fobj->shared_max);
-		RCU_INIT_POINTER(fobj->shared[fobj->shared_count], fence);
-		fobj->shared_count++;
+		max = 4;
 	}

-	write_seqcount_end(&obj->seq);
-	preempt_enable();
-
-	dma_fence_put(signaled);
-}
-
-static void
-reservation_object_add_shared_replace(struct reservation_object *obj,
-				      struct reservation_object_list *old,
-				      struct reservation_object_list *fobj,
-				      struct dma_fence *fence)
-{
-	unsigned i, j, k;
-
-	dma_fence_get(fence);
-
-	if (!old) {
-		RCU_INIT_POINTER(fobj->shared[0], fence);
-		fobj->shared_count = 1;
-		goto done;
-	}
+	new = kmalloc(offsetof(typeof(*new), shared[max]), GFP_KERNEL);
+	if (!new)
+		return -ENOMEM;

 	/*
 	 * no need to bump fence refcounts, rcu_read access
@@ -174,46 +92,45 @@ reservation_object_add_shared_replace(struct reservation_object *obj,
 	 * references from the old struct are carried over to
 	 * the new.
 	 */
-	for (i = 0, j = 0, k = fobj->shared_max; i < old->shared_count; ++i) {
-		struct dma_fence *check;
+	for (i = 0, j = 0, k = max; i < (old ? old->shared_count : 0); ++i) {
+		struct dma_fence *fence;

-		check = rcu_dereference_protected(old->shared[i],
-						reservation_object_held(obj));
-
-		if (check->context == fence->context ||
-		    dma_fence_is_signaled(check))
-			RCU_INIT_POINTER(fobj->shared[--k], check);
+		fence = rcu_dereference_protected(old->shared[i],
+						  reservation_object_held(obj));
+		if (dma_fence_is_signaled(fence))
+			RCU_INIT_POINTER(new->shared[--k], fence);
 		else
-			RCU_INIT_POINTER(fobj->shared[j++], check);
+			RCU_INIT_POINTER(new->shared[j++], fence);
 	}
-	fobj->shared_count = j;
-	RCU_INIT_POINTER(fobj->shared[fobj->shared_count], fence);
-	fobj->shared_count++;
+	new->shared_count = j;
+	new->shared_max = max;

-done:
 	preempt_disable();
 	write_seqcount_begin(&obj->seq);
 	/*
 	 * RCU_INIT_POINTER can be used here,
 	 * seqcount provides the necessary barriers
 	 */
-	RCU_INIT_POINTER(obj->fence, fobj);
+	RCU_INIT_POINTER(obj->fence, new);
 	write_seqcount_end(&obj->seq);
 	preempt_enable();

 	if (!old)
-		return;
+		return 0;

 	/* Drop the references to the signaled fences */
-	for (i = k; i < fobj->shared_max; ++i) {
-		struct dma_fence *f;
+	for (i = k; i < new->shared_max; ++i) {
+		struct dma_fence *fence;

-		f = rcu_dereference_protected(fobj->shared[i],
-					      reservation_object_held(obj));
-		dma_fence_put(f);
+		fence = rcu_dereference_protected(new->shared[i],
+						  reservation_object_held(obj));
+		dma_fence_put(fence);
 	}
 	kfree_rcu(old, rcu);
+
+	return 0;
 }
+EXPORT_SYMBOL(reservation_object_reserve_shared);

 /**
  * reservation_object_add_shared_fence - Add a fence to a shared slot
@@ -226,15 +143,39 @@ reservation_object_add_shared_replace(struct reservation_object *obj,
 void reservation_object_add_shared_fence(struct reservation_object *obj,
 					 struct dma_fence *fence)
 {
-	struct reservation_object_list *old, *fobj = obj->staged;
+	struct reservation_object_list *fobj;
+	unsigned int i;

-	old = reservation_object_get_list(obj);
-	obj->staged = NULL;
+	dma_fence_get(fence);
+
+	fobj = reservation_object_get_list(obj);

-	if (!fobj)
-		reservation_object_add_shared_inplace(obj, old, fence);
-	else
-		reservation_object_add_shared_replace(obj, old, fobj, fence);
+	preempt_disable();
+	write_seqcount_begin(&obj->seq);
+
+	for (i = 0; i < fobj->shared_count; ++i) {
+		struct dma_fence *old_fence;
+
+		old_fence = rcu_dereference_protected(fobj->shared[i],
+						      reservation_object_held(obj));
+		if (old_fence->context == fence->context ||
+		    dma_fence_is_signaled(old_fence)) {
+			dma_fence_put(old_fence);
+			goto replace;
+		}
+	}
+
+	BUG_ON(fobj->shared_count >= fobj->shared_max);
+	fobj->shared_count++;
+
+replace:
+	/*
+	 * memory barrier is added by write_seqcount_begin,
+	 * fobj->shared_count is protected by this lock too
+	 */
+	RCU_INIT_POINTER(fobj->shared[i], fence);
+	write_seqcount_end(&obj->seq);
+	preempt_enable();
 }
 EXPORT_SYMBOL(reservation_object_add_shared_fence);

@@ -343,9 +284,6 @@ int reservation_object_copy_fences(struct reservation_object *dst,
 	new = dma_fence_get_rcu_safe(&src->fence_excl);
 	rcu_read_unlock();

-	kfree(dst->staged);
-	dst->staged = NULL;
-
 	src_list = reservation_object_get_list(dst);
 	old = reservation_object_get_excl(dst);

diff --git a/include/linux/reservation.h b/include/linux/reservation.h
index 02166e815afb..54cf6773a14c 100644
--- a/include/linux/reservation.h
+++ b/include/linux/reservation.h
@@ -68,7 +68,6 @@ struct reservation_object_list {
  * @seq: sequence count for managing RCU read-side synchronization
  * @fence_excl: the exclusive fence, if there is one currently
  * @fence: list of current shared fences
- * @staged: staged copy of shared fences for RCU updates
  */
 struct reservation_object {
 	struct ww_mutex lock;
@@ -76,7 +75,6 @@ struct reservation_object {

 	struct dma_fence __rcu *fence_excl;
 	struct reservation_object_list __rcu *fence;
-	struct reservation_object_list *staged;
 };

 #define reservation_object_held(obj) lockdep_is_held(&(obj)->lock.base)
@@ -95,7 +93,6 @@ reservation_object_init(struct reservation_object *obj)
 	__seqcount_init(&obj->seq, reservation_seqcount_string,
 			&reservation_seqcount_class);
 	RCU_INIT_POINTER(obj->fence, NULL);
 	RCU_INIT_POINTER(obj->fence_excl, NULL);
-	obj->staged = NULL;
 }

 /**
@@ -124,7 +121,6 @@ reservation_object_fini(struct reservation_object *obj)
 		kfree(fobj);
 	}

-	kfree(obj->staged);
 	ww_mutex_destroy(&obj->lock);
 }
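
For readers less familiar with the reservation object API, the caller-side
pattern this patch feeds into is sketched below. This is an illustrative
example, not part of the patch: the helper name example_attach_shared_fence
and its error handling are assumptions, while reservation_object_lock()/
unlock(), reservation_object_reserve_shared() and
reservation_object_add_shared_fence() are the existing interfaces touched
by this series.

	/*
	 * Illustrative sketch only (not part of this patch): how a driver
	 * typically publishes a shared fence on a reservation object.
	 */
	#include <linux/reservation.h>
	#include <linux/dma-fence.h>

	static int example_attach_shared_fence(struct reservation_object *resv,
					       struct dma_fence *fence)
	{
		int ret;

		/* The reservation lock must be held across reserve + add. */
		ret = reservation_object_lock(resv, NULL);
		if (ret)
			return ret;

		/*
		 * With this patch the reserve step (re)allocates the shared
		 * fence list itself instead of preparing obj->staged.
		 */
		ret = reservation_object_reserve_shared(resv);
		if (!ret)
			reservation_object_add_shared_fence(resv, fence);

		reservation_object_unlock(resv);
		return ret;
	}

The practical effect of the change is that reservation_object_reserve_shared()
now does all allocation up front, so reservation_object_add_shared_fence()
cannot fail and only swaps fence pointers under the seqcount write side.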