From patchwork Fri Feb 10 20:25:11 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Lukas Wunner
X-Patchwork-Id: 13136419
X-Patchwork-Delegate: bhelgaas@google.com
X-Mailbox-Line: From ad46bbc593d4b7f1c9c5cbafbb51d89533edd4a7 Mon Sep 17 00:00:00 2001
From: Lukas Wunner
Date: Fri, 10 Feb 2023 21:25:11 +0100
Subject: [PATCH v3 11/16] PCI/DOE: Allow mailbox creation without devres management
To: Bjorn Helgaas, linux-pci@vger.kernel.org
Cc: Gregory Price, Ira Weiny, Jonathan Cameron, Dan Williams,
    Alison Schofield, Vishal Verma, Dave Jiang, "Li, Ming", Hillf Danton,
    Ben Widawsky, linuxarm@huawei.com, linux-cxl@vger.kernel.org
X-Mailing-List: linux-pci@vger.kernel.org

DOE mailbox creation is currently only possible through a devres-managed
API. The lifetime of mailboxes thus ends with driver unbinding.

An upcoming commit will create DOE mailboxes upon device enumeration by
the PCI core. Their lifetime shall not be limited by a driver.

Therefore rework pcim_doe_create_mb() into the non-devres-managed
pci_doe_create_mb(). Add pci_doe_destroy_mb() for mailbox destruction on
device removal.

Provide a devres-managed wrapper under the existing pcim_doe_create_mb()
name.

The error path of pcim_doe_create_mb() previously called xa_destroy() if
alloc_ordered_workqueue() failed. That's unnecessary because the xarray
is still empty at that point. It doesn't need to be destroyed until it's
been populated by pci_doe_cache_protocols(). Arrange the error path of
the new pci_doe_create_mb() accordingly.

pci_doe_cancel_tasks() is no longer used as a callback for
devm_add_action(), so refactor it to accept a struct pci_doe_mb pointer
instead of a generic void pointer.
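
To make the shape of the rework easier to see, here is a minimal, hedged
sketch of the devres-wrapper idiom this patch moves to: a plain
constructor/destructor pair whose lifetime the caller controls, plus a thin
devm_add_action_or_reset() wrapper for driver callers. The foo/foom names
below are generic placeholders for illustration only, not the actual DOE API.

	#include <linux/device.h>
	#include <linux/err.h>
	#include <linux/pci.h>
	#include <linux/slab.h>

	struct foo {
		struct pci_dev *pdev;
	};

	/* Plain constructor: no devres, the caller owns the object's lifetime. */
	static struct foo *foo_create(struct pci_dev *pdev)
	{
		struct foo *f = kzalloc(sizeof(*f), GFP_KERNEL);

		if (!f)
			return ERR_PTR(-ENOMEM);

		f->pdev = pdev;
		/* ... set up further resources; on failure unwind and kfree(f) ... */

		return f;
	}

	/* Plain destructor; void * signature so it can double as a devres action. */
	static void foo_destroy(void *ptr)
	{
		struct foo *f = ptr;

		/* ... tear down resources in reverse order of creation ... */
		kfree(f);
	}

	/* Devres-managed wrapper: destruction is tied to driver unbinding. */
	struct foo *foom_create(struct pci_dev *pdev)
	{
		struct foo *f = foo_create(pdev);
		int rc;

		if (IS_ERR(f))
			return f;

		rc = devm_add_action_or_reset(&pdev->dev, foo_destroy, f);
		if (rc)
			return ERR_PTR(rc);	/* foo_destroy() already ran */

		return f;
	}

The same split is what the diff below performs for the DOE mailbox: keeping
the destructor's void * signature lets one function serve both the explicit
teardown path and devm_add_action_or_reset().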
Tested-by: Ira Weiny
Signed-off-by: Lukas Wunner
Reviewed-by: Jonathan Cameron
Reviewed-by: Ming Li
---
Changes v2 -> v3:
* Call pci_doe_flush_mb() from pci_doe_destroy_mb() to simplify
  error paths (Jonathan)
* Explain in commit message why xa_destroy() is reordered in error path
  of the new pci_doe_create_mb() (Jonathan)

 drivers/pci/doe.c | 103 +++++++++++++++++++++++++++++-----------------
 1 file changed, 66 insertions(+), 37 deletions(-)

diff --git a/drivers/pci/doe.c b/drivers/pci/doe.c
index 291cd7a46a39..2bc202b64b6a 100644
--- a/drivers/pci/doe.c
+++ b/drivers/pci/doe.c
@@ -37,7 +37,7 @@
  *
  * This state is used to manage a single DOE mailbox capability. All fields
  * should be considered opaque to the consumers and the structure passed into
- * the helpers below after being created by devm_pci_doe_create()
+ * the helpers below after being created by pci_doe_create_mb().
  *
  * @pdev: PCI device this mailbox belongs to
  * @cap_offset: Capability offset
@@ -415,24 +415,8 @@ static int pci_doe_cache_protocols(struct pci_doe_mb *doe_mb)
 	return 0;
 }
 
-static void pci_doe_xa_destroy(void *mb)
+static void pci_doe_cancel_tasks(struct pci_doe_mb *doe_mb)
 {
-	struct pci_doe_mb *doe_mb = mb;
-
-	xa_destroy(&doe_mb->prots);
-}
-
-static void pci_doe_destroy_workqueue(void *mb)
-{
-	struct pci_doe_mb *doe_mb = mb;
-
-	destroy_workqueue(doe_mb->work_queue);
-}
-
-static void pci_doe_cancel_tasks(void *mb)
-{
-	struct pci_doe_mb *doe_mb = mb;
-
 	/* Stop all pending work items from starting */
 	set_bit(PCI_DOE_FLAG_DEAD, &doe_mb->flags);
 
@@ -442,7 +426,7 @@ static void pci_doe_cancel_tasks(void *mb)
 }
 
 /**
- * pcim_doe_create_mb() - Create a DOE mailbox object
+ * pci_doe_create_mb() - Create a DOE mailbox object
  *
  * @pdev: PCI device to create the DOE mailbox for
  * @cap_offset: Offset of the DOE mailbox
@@ -453,24 +437,20 @@ static void pci_doe_cancel_tasks(void *mb)
  * RETURNS: created mailbox object on success
  *	    ERR_PTR(-errno) on failure
  */
-struct pci_doe_mb *pcim_doe_create_mb(struct pci_dev *pdev, u16 cap_offset)
+static struct pci_doe_mb *pci_doe_create_mb(struct pci_dev *pdev,
+					    u16 cap_offset)
 {
 	struct pci_doe_mb *doe_mb;
-	struct device *dev = &pdev->dev;
 	int rc;
 
-	doe_mb = devm_kzalloc(dev, sizeof(*doe_mb), GFP_KERNEL);
+	doe_mb = kzalloc(sizeof(*doe_mb), GFP_KERNEL);
 	if (!doe_mb)
 		return ERR_PTR(-ENOMEM);
 	doe_mb->pdev = pdev;
 	doe_mb->cap_offset = cap_offset;
 	init_waitqueue_head(&doe_mb->wq);
-
 	xa_init(&doe_mb->prots);
-	rc = devm_add_action(dev, pci_doe_xa_destroy, doe_mb);
-	if (rc)
-		return ERR_PTR(rc);
 
 	doe_mb->work_queue = alloc_ordered_workqueue("%s %s DOE [%x]", 0,
						dev_driver_string(&pdev->dev),
@@ -479,36 +459,85 @@ struct pci_doe_mb *pcim_doe_create_mb(struct pci_dev *pdev, u16 cap_offset)
 	if (!doe_mb->work_queue) {
 		pci_err(pdev, "[%x] failed to allocate work queue\n",
			doe_mb->cap_offset);
-		return ERR_PTR(-ENOMEM);
+		rc = -ENOMEM;
+		goto err_free;
 	}
-	rc = devm_add_action_or_reset(dev, pci_doe_destroy_workqueue, doe_mb);
-	if (rc)
-		return ERR_PTR(rc);
 
 	/* Reset the mailbox by issuing an abort */
 	rc = pci_doe_abort(doe_mb);
 	if (rc) {
 		pci_err(pdev, "[%x] failed to reset mailbox with abort command : %d\n",
			doe_mb->cap_offset, rc);
-		return ERR_PTR(rc);
+		goto err_destroy_wq;
 	}
 
 	/*
 	 * The state machine and the mailbox should be in sync now;
-	 * Set up cancel tasks prior to using the mailbox to query protocols.
+	 * Use the mailbox to query protocols.
 	 */
-	rc = devm_add_action_or_reset(dev, pci_doe_cancel_tasks, doe_mb);
-	if (rc)
-		return ERR_PTR(rc);
-
 	rc = pci_doe_cache_protocols(doe_mb);
 	if (rc) {
 		pci_err(pdev, "[%x] failed to cache protocols : %d\n",
			doe_mb->cap_offset, rc);
-		return ERR_PTR(rc);
+		goto err_cancel;
 	}
 
 	return doe_mb;
+
+err_cancel:
+	pci_doe_cancel_tasks(doe_mb);
+	xa_destroy(&doe_mb->prots);
+err_destroy_wq:
+	destroy_workqueue(doe_mb->work_queue);
+err_free:
+	kfree(doe_mb);
+	return ERR_PTR(rc);
+}
+
+/**
+ * pci_doe_destroy_mb() - Destroy a DOE mailbox object
+ *
+ * @ptr: Pointer to DOE mailbox
+ *
+ * Destroy all internal data structures created for the DOE mailbox.
+ */
+static void pci_doe_destroy_mb(void *ptr)
+{
+	struct pci_doe_mb *doe_mb = ptr;
+
+	pci_doe_cancel_tasks(doe_mb);
+	xa_destroy(&doe_mb->prots);
+	destroy_workqueue(doe_mb->work_queue);
+	kfree(doe_mb);
+}
+
+/**
+ * pcim_doe_create_mb() - Create a DOE mailbox object
+ *
+ * @pdev: PCI device to create the DOE mailbox for
+ * @cap_offset: Offset of the DOE mailbox
+ *
+ * Create a single mailbox object to manage the mailbox protocol at the
+ * cap_offset specified. The mailbox will automatically be destroyed on
+ * driver unbinding from @pdev.
+ *
+ * RETURNS: created mailbox object on success
+ *	    ERR_PTR(-errno) on failure
+ */
+struct pci_doe_mb *pcim_doe_create_mb(struct pci_dev *pdev, u16 cap_offset)
+{
+	struct pci_doe_mb *doe_mb;
+	int rc;
+
+	doe_mb = pci_doe_create_mb(pdev, cap_offset);
+	if (IS_ERR(doe_mb))
+		return doe_mb;
+
+	rc = devm_add_action_or_reset(&pdev->dev, pci_doe_destroy_mb, doe_mb);
+	if (rc)
+		return ERR_PTR(rc);
+
+	return doe_mb;
 }
 EXPORT_SYMBOL_GPL(pcim_doe_create_mb);
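
For context beyond this patch: the commit message says an upcoming commit will
create DOE mailboxes during device enumeration by the PCI core. A rough,
hypothetical sketch of what such a caller of the new non-devres API could look
like is below. pci_doe_create_mb() and pci_doe_destroy_mb() are still static
to drivers/pci/doe.c in this patch, and the doe_mbs xarray member of struct
pci_dev as well as the function names are assumptions for illustration, not
something this patch introduces.

	/*
	 * Hypothetical sketch only -- not part of this patch.  Shows how code in
	 * drivers/pci/doe.c could bind mailbox lifetime to the pci_dev instead of
	 * a driver, once the PCI core creates mailboxes during enumeration.
	 * The "doe_mbs" xarray in struct pci_dev is assumed for illustration.
	 */
	#include <linux/err.h>
	#include <linux/pci.h>
	#include <linux/xarray.h>

	static void pci_doe_init_one(struct pci_dev *pdev, u16 offset)
	{
		struct pci_doe_mb *doe_mb;

		doe_mb = pci_doe_create_mb(pdev, offset);
		if (IS_ERR(doe_mb)) {
			pci_err(pdev, "[%x] failed to create DOE mailbox\n", offset);
			return;
		}

		/* Lifetime is bound to the pci_dev, not to a driver binding. */
		if (xa_insert(&pdev->doe_mbs, offset, doe_mb, GFP_KERNEL))
			pci_doe_destroy_mb(doe_mb);
	}

	static void pci_doe_remove_all(struct pci_dev *pdev)
	{
		struct pci_doe_mb *doe_mb;
		unsigned long index;

		/* Called on device removal, independent of any driver unbind. */
		xa_for_each(&pdev->doe_mbs, index, doe_mb)
			pci_doe_destroy_mb(doe_mb);
		xa_destroy(&pdev->doe_mbs);
	}

Drivers, by contrast, keep calling pcim_doe_create_mb() exactly as before and
still get teardown for free on unbind.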