From patchwork Mon May 30 14:22:32 2022
X-Patchwork-Submitter: Sergey Senozhatsky
X-Patchwork-Id: 12864713
From: Sergey Senozhatsky
To: Sumit Semwal, Gustavo Padovan, Christian König
Cc: Tomasz Figa, Ricardo Ribalda, Christoph Hellwig,
    linux-media@vger.kernel.org, dri-devel@lists.freedesktop.org,
    linaro-mm-sig@lists.linaro.org, linux-kernel@vger.kernel.org,
    Sergey Senozhatsky
Subject: [PATCH] dma-fence: allow dma fences to have their own lock
Date: Mon, 30 May 2022 23:22:32 +0900
Message-Id: <20220530142232.2871634-1-senozhatsky@chromium.org>
X-Mailer: git-send-email 2.36.1.124.g0e6072fb45-goog
X-Mailing-List: linux-media@vger.kernel.org

RFC

I don't have a good name for this yet and I did not spend any time on
documentation (for that reason).

We create fences (out fences) as part of operation execution. These are
short-lived objects: we want to release all of their memory as soon as the
operation completes or gets cancelled/deleted via ioctl(). This creates a bit
of a problem. DMA fences are refcounted objects and the exporter never knows
when an importer takes or puts a fence reference, so the exporter never knows
when the fence will be destroyed. That should not be a problem for refcounted
objects, but here comes the twist:

  operation A - creates and exports out fence X
  ...
  user-space imports fence X
  operation A - finishes execution, signals fence X
  kfree operation A, put dma_fence

DMA fences are designed to borrow a spinlock from the exporter, which the
fence then uses to protect struct dma_fence members:

  struct dma_fence {
          spinlock_t *lock;
          const struct dma_fence_ops *ops;
          .....
  };

  void dma_fence_init(struct dma_fence *fence,
                      const struct dma_fence_ops *ops,
                      spinlock_t *lock, u64 context, u64 seqno);

So `lock` must have at least the same lifespan as the DMA fence that borrows
it, which is impossible to guarantee in our case. When we kfree the operation
A struct we also kfree the ->lock that the operation lends to the DMA fence,
and the fence outlives operation A (depending on what fence importers do and
when they drop the imported fence refcount).

This patch adds a new member to struct dma_fence: __lock_inplace. This is a
lock that the DMA fence uses to protect its own data when it cannot reliably
borrow a lock from an outside object.

I also had a patch that puts the inplace and borrowed locks into an unnamed
union and adds one more dma_fence_flag_bits entry to distinguish between
fences with borrowed and inplace locks:

  struct dma_fence {
          union {
                  spinlock_t *lock;
                  spinlock_t __lock_inplace;
          };
          ...
  };

Instead of locking/unlocking ->lock directly we would then use
dma_fence_lock_irqsave()/dma_fence_unlock_irqrestore() macros, which would
check the fence flags and take either the borrowed or the inplace lock. But
after seeing how often drivers access fence ->lock directly I decided to
scratch that approach and just add the extra spinlock member.
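For reference, here is roughly what those macros could have looked like with
the union layout above. This is only a sketch to illustrate the scrapped
approach; the DMA_FENCE_FLAG_INPLACE_LOCK_BIT name, its value and the macro
bodies are placeholders, not code from an actual patch:

  /* Placeholder bit marking fences that use the inplace lock; a real
   * patch would have to pick a bit that does not clash with
   * DMA_FENCE_FLAG_USER_BITS. */
  #define DMA_FENCE_FLAG_INPLACE_LOCK_BIT	3

  #define dma_fence_lock_irqsave(fence, irqflags)			\
  do {									\
  	if (test_bit(DMA_FENCE_FLAG_INPLACE_LOCK_BIT, &(fence)->flags))	\
  		spin_lock_irqsave(&(fence)->__lock_inplace, irqflags);	\
  	else								\
  		spin_lock_irqsave((fence)->lock, irqflags);		\
  } while (0)

  #define dma_fence_unlock_irqrestore(fence, irqflags)			\
  do {									\
  	if (test_bit(DMA_FENCE_FLAG_INPLACE_LOCK_BIT, &(fence)->flags))	\
  		spin_unlock_irqrestore(&(fence)->__lock_inplace, irqflags); \
  	else								\
  		spin_unlock_irqrestore((fence)->lock, irqflags);	\
  } while (0)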
Not-Yet-Signed-off-by: Sergey Senozhatsky
---
 drivers/dma-buf/dma-fence.c | 10 ++++++++++
 include/linux/dma-fence.h   |  6 ++++++
 2 files changed, 16 insertions(+)

diff --git a/drivers/dma-buf/dma-fence.c b/drivers/dma-buf/dma-fence.c
index 066400ed8841..7ae40b8adb73 100644
--- a/drivers/dma-buf/dma-fence.c
+++ b/drivers/dma-buf/dma-fence.c
@@ -958,3 +958,13 @@ dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
 	trace_dma_fence_init(fence);
 }
 EXPORT_SYMBOL(dma_fence_init);
+
+void dma_fence_inplace_lock_init(struct dma_fence *fence,
+				 const struct dma_fence_ops *ops,
+				 u64 context, u64 seqno)
+{
+	spin_lock_init(&fence->__lock_inplace);
+
+	dma_fence_init(fence, ops, &fence->__lock_inplace, context, seqno);
+}
+EXPORT_SYMBOL(dma_fence_inplace_lock_init);
diff --git a/include/linux/dma-fence.h b/include/linux/dma-fence.h
index 1ea691753bd3..6b15a0d2eccf 100644
--- a/include/linux/dma-fence.h
+++ b/include/linux/dma-fence.h
@@ -64,6 +64,8 @@ struct dma_fence_cb;
  */
 struct dma_fence {
 	spinlock_t *lock;
+	spinlock_t __lock_inplace;
+
 	const struct dma_fence_ops *ops;
 	/*
 	 * We clear the callback list on kref_put so that by the time we
@@ -262,6 +264,10 @@ struct dma_fence_ops {
 void dma_fence_init(struct dma_fence *fence, const struct dma_fence_ops *ops,
 		    spinlock_t *lock, u64 context, u64 seqno);
 
+void dma_fence_inplace_lock_init(struct dma_fence *fence,
+				 const struct dma_fence_ops *ops,
+				 u64 context, u64 seqno);
+
 void dma_fence_release(struct kref *kref);
 void dma_fence_free(struct dma_fence *fence);
 void dma_fence_describe(struct dma_fence *fence, struct seq_file *seq);
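
For completeness, this is roughly how an exporter with short-lived operation
objects would use the new helper. Again only a sketch: my_fence_ops, the
callback names and the context/seqno plumbing are made-up placeholders:

  static const char *my_get_driver_name(struct dma_fence *fence)
  {
  	return "my-driver";
  }

  static const char *my_get_timeline_name(struct dma_fence *fence)
  {
  	return "my-timeline";
  }

  static const struct dma_fence_ops my_fence_ops = {
  	.get_driver_name   = my_get_driver_name,
  	.get_timeline_name = my_get_timeline_name,
  };

  /*
   * The out fence is allocated on its own and protects itself with
   * __lock_inplace, so the operation that created it can be freed as
   * soon as it completes, no matter how long importers keep their
   * fence references.
   */
  static struct dma_fence *my_op_create_out_fence(u64 fence_ctx, u64 seqno)
  {
  	struct dma_fence *fence;

  	fence = kzalloc(sizeof(*fence), GFP_KERNEL);
  	if (!fence)
  		return NULL;

  	dma_fence_inplace_lock_init(fence, &my_fence_ops, fence_ctx, seqno);
  	return fence;
  }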