Message ID: 20220715030424.462963-4-ira.weiny@intel.com
State: Superseded
Series: CXL: Read CDAT
On Thu, 14 Jul 2022 20:04:20 -0700 ira.weiny@intel.com wrote: > From: Jonathan Cameron <Jonathan.Cameron@huawei.com> > > Introduced in a PCIe r6.0, sec 6.30, DOE provides a config space based > mailbox with standard protocol discovery. Each mailbox is accessed > through a DOE Extended Capability. > > Each DOE mailbox must support the DOE discovery protocol in addition to > any number of additional protocols. > > Define core PCIe functionality to manage a single PCIe DOE mailbox at a > defined config space offset. Functionality includes iterating, > creating, query of supported protocol, and task submission. Destruction > of the mailboxes is device managed. > > Cc: "Li, Ming" <ming4.li@intel.com> > Cc: Bjorn Helgaas <helgaas@kernel.org> > Cc: Matthew Wilcox <willy@infradead.org> > Acked-by: Bjorn Helgaas <helgaas@kernel.org> > Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> > Co-developed-by: Ira Weiny <ira.weiny@intel.com> > Signed-off-by: Ira Weiny <ira.weiny@intel.com> Hi Ira, Thanks for persisting with this! So, I think this works, but there is at least one 'sleep' I can't see a purpose for. I think it's just a left over from refactoring. A few other more trivial things inline. 
Thanks, Jonathan > >> # Endpoint library must be initialized before its users > obj-$(CONFIG_PCI_ENDPOINT) += endpoint/ > diff --git a/drivers/pci/doe.c b/drivers/pci/doe.c > new file mode 100644 > index 000000000000..12c3762be22f > --- /dev/null > +++ b/drivers/pci/doe.c > @@ -0,0 +1,546 @@ > +// SPDX-License-Identifier: GPL-2.0 > +/* > + * Data Object Exchange > + * PCIe r6.0, sec 6.30 DOE > + * > + * Copyright (C) 2021 Huawei > + * Jonathan Cameron <Jonathan.Cameron@huawei.com> > + * > + * Copyright (C) 2022 Intel Corporation > + * Ira Weiny <ira.weiny@intel.com> > + */ > + > +#define dev_fmt(fmt) "DOE: " fmt > + > +#include <linux/bitfield.h> > +#include <linux/delay.h> > +#include <linux/jiffies.h> > +#include <linux/mutex.h> > +#include <linux/pci.h> > +#include <linux/pci-doe.h> > +#include <linux/workqueue.h> > + > +#define PCI_DOE_PROTOCOL_DISCOVERY 0 > + > +#define PCI_DOE_BUSY_MAX_RETRIES 16 Left over from removed code. > +static int pci_doe_wait(struct pci_doe_mb *doe_mb, unsigned long timeout) This feels misnamed to me now. It's not waiting for the DOE, it's just sleeping unless we cancel. The actual poll / wait is handled outside this. pci_doe_sleep_unless_cancel() maybe? > +{ > + if (wait_event_timeout(doe_mb->wq, > + test_bit(PCI_DOE_FLAG_CANCEL, &doe_mb->flags), > + timeout)) > + return -EIO; > + return 0; > +} > +static int pci_doe_send_req(struct pci_doe_mb *doe_mb, > + struct pci_doe_task *task) > +{ > + struct pci_dev *pdev = doe_mb->pdev; > + int offset = doe_mb->cap_offset; > + u32 val; > + int i; > + > + /* > + * Check the DOE busy bit is not set. If it is set, this could indicate > + * someone other than Linux (e.g. firmware) is using the mailbox. Note > + * it is expected that firmware and OS will negotiate access rights via > + * an, as yet to be defined method. trivial: an, as yet to be defined, method. 
> + */ > + pci_read_config_dword(pdev, offset + PCI_DOE_STATUS, &val); > + if (FIELD_GET(PCI_DOE_STATUS_BUSY, val)) > + return -EBUSY; > + > + if (FIELD_GET(PCI_DOE_STATUS_ERROR, val)) > + return -EIO; > + > + /* Write DOE Header */ > + val = FIELD_PREP(PCI_DOE_DATA_OBJECT_HEADER_1_VID, task->prot.vid) | > + FIELD_PREP(PCI_DOE_DATA_OBJECT_HEADER_1_TYPE, task->prot.type); > + pci_write_config_dword(pdev, offset + PCI_DOE_WRITE, val); > + /* Length is 2 DW of header + length of payload in DW */ > + pci_write_config_dword(pdev, offset + PCI_DOE_WRITE, > + FIELD_PREP(PCI_DOE_DATA_OBJECT_HEADER_2_LENGTH, > + 2 + task->request_pl_sz / > + sizeof(u32))); > + for (i = 0; i < task->request_pl_sz / sizeof(u32); i++) > + pci_write_config_dword(pdev, offset + PCI_DOE_WRITE, > + task->request_pl[i]); > + > + pci_doe_write_ctrl(doe_mb, PCI_DOE_CTRL_GO); > + > + /* Request is sent - now wait for poll or IRQ */ Could drop the IRQ given not currently handling. Though I suppose it's correct documentation for this function, so can leave it if preferred. More than possible we'll get a follow up patch dropping it though from someone doing cleanup. > + return 0; > +} > + ... > + > +static void doe_statemachine_work(struct work_struct *work) > +{ > + struct pci_doe_task *task = container_of(work, struct pci_doe_task, > + work); > + struct pci_doe_mb *doe_mb = task->doe_mb; > + struct pci_dev *pdev = doe_mb->pdev; > + int offset = doe_mb->cap_offset; > + unsigned long timeout_jiffies; > + u32 val; > + int rc; > + > + if (test_bit(PCI_DOE_FLAG_DEAD, &doe_mb->flags)) { > + signal_task_complete(task, -EIO); > + return; > + } > + > + /* Send request */ > + rc = pci_doe_send_req(doe_mb, task); > + nitpick, but blank line separating call from error handling reduces readability in my opinion. > + if (rc) { > + /* > + * The specification does not provide any guidance on how to > + * resolve conflicting requests from other entities. 
> + * Furthermore, it is likely that busy will not be detected > + * most of the time. Flag any detection of status busy with an > + * error. > + */ > + if (rc == -EBUSY) > + dev_err_ratelimited(&pdev->dev, "[%x] busy detected; another entity is sending conflicting requests\n", > + offset); > + signal_task_abort(task, rc); > + return; > + } > + > + timeout_jiffies = jiffies + PCI_DOE_TIMEOUT; > + rc = pci_doe_wait(doe_mb, PCI_DOE_POLL_INTERVAL); What's this particular wait for? I think you can just move directly to checking if the response is ready. > + if (rc) { > + signal_task_abort(task, rc); > + return; > + } > + > + /* Poll for response */ > +retry_resp: > + pci_read_config_dword(pdev, offset + PCI_DOE_STATUS, &val); > + if (FIELD_GET(PCI_DOE_STATUS_ERROR, val)) { > + signal_task_abort(task, -EIO); > + return; > + } > + > + if (!FIELD_GET(PCI_DOE_STATUS_DATA_OBJECT_READY, val)) { > + if (time_after(jiffies, timeout_jiffies)) { > + signal_task_abort(task, -EIO); > + return; > + } > + rc = pci_doe_wait(doe_mb, PCI_DOE_POLL_INTERVAL); > + if (rc) { > + signal_task_abort(task, rc); > + return; > + } > + goto retry_resp; > + } > + > + rc = pci_doe_recv_resp(doe_mb, task); > + if (rc < 0) { > + signal_task_abort(task, rc); > + return; > + } > + > + signal_task_complete(task, rc); > +} > + > +/** > + * pci_doe_supports_prot() - Return if the DOE instance supports the given > + * protocol > + * @doe_mb: DOE mailbox capability to query > + * @vid: Protocol Vendor ID > + * @type: Protocol type > + * > + * RETURNS: True if the DOE mailbox supports the protocol specified > + */ > +bool pci_doe_supports_prot(struct pci_doe_mb *doe_mb, u16 vid, u8 type) > +{ > + unsigned long index; > + void *entry; > + > + /* The discovery protocol must always be supported */ > + if (vid == PCI_VENDOR_ID_PCI_SIG && type == PCI_DOE_PROTOCOL_DISCOVERY) > + return true; Given how cheap this look up is now it's all in xarray, we could drop this 'optimization'. 
I'm fairly sure the discovery protocol will always be discovered (spec says
it must be returned when calling itself as the first protocol).

> +
> +	xa_for_each(&doe_mb->prots, index, entry)
> +		if (entry == pci_doe_xa_prot_entry(vid, type))
> +			return true;
> +
> +	return false;
> +}
> +EXPORT_SYMBOL_GPL(pci_doe_supports_prot);

> +EXPORT_SYMBOL_GPL(pci_doe_submit_task);
> diff --git a/include/linux/pci-doe.h b/include/linux/pci-doe.h
> new file mode 100644
> index 000000000000..c77f6258c996
> --- /dev/null
> +++ b/include/linux/pci-doe.h
> @@ -0,0 +1,79 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Data Object Exchange
> + * PCIe r6.0, sec 6.30 DOE
> + *
> + * Copyright (C) 2021 Huawei
> + *	Jonathan Cameron <Jonathan.Cameron@huawei.com>
> + *
> + * Copyright (C) 2022 Intel Corporation
> + *	Ira Weiny <ira.weiny@intel.com>
> + */
> +
> +#ifndef LINUX_PCI_DOE_H
> +#define LINUX_PCI_DOE_H
> +
> +#include <linux/completion.h>

include not needed any more.

Thanks,

Jonathan

> +
> +struct pci_doe_protocol {
> +	u16 vid;
> +	u8 type;
> +};
> +
> +struct pci_doe_mb;
> +
> +/**
> + * struct pci_doe_task - represents a single query/response
> + *
> + * @prot: DOE Protocol
> + * @request_pl: The request payload
> + * @request_pl_sz: Size of the request payload (bytes)
> + * @response_pl: The response payload
> + * @response_pl_sz: Size of the response payload (bytes)
> + * @rv: Return value. Length of received response or error (bytes)
> + * @complete: Called when task is complete
> + * @private: Private data for the consumer
> + * @work: Used internally by the mailbox
> + * @doe_mb: Used internally by the mailbox
> + *
> + * The payload sizes and rv are specified in bytes with the following
> + * restrictions concerning the protocol.
> + *
> + * 1) The request_pl_sz must be a multiple of double words (4 bytes)
> + * 2) The response_pl_sz must be >= a single double word (4 bytes)
> + * 3) rv is returned as bytes but it will be a multiple of double words
> + *
> + * NOTE there is no need for the caller to initialize work or doe_mb.
> + */
> +struct pci_doe_task {
> +	struct pci_doe_protocol prot;
> +	u32 *request_pl;
> +	size_t request_pl_sz;
> +	u32 *response_pl;
> +	size_t response_pl_sz;
> +	int rv;
> +	void (*complete)(struct pci_doe_task *task);
> +	void *private;
> +
> +	/* No need for the user to initialize these fields */
> +	struct work_struct work;
> +	struct pci_doe_mb *doe_mb;
> +};
> +
> +/**
> + * pci_doe_for_each_off - Iterate each DOE capability
> + * @pdev: struct pci_dev to iterate
> + * @off: u16 of config space offset of each mailbox capability found
> + */
> +#define pci_doe_for_each_off(pdev, off) \
> +	for (off = pci_find_next_ext_capability(pdev, off, \
> +					PCI_EXT_CAP_ID_DOE); \
> +		off > 0; \
> +		off = pci_find_next_ext_capability(pdev, off, \
> +					PCI_EXT_CAP_ID_DOE))
> +
> +struct pci_doe_mb *pcim_doe_create_mb(struct pci_dev *pdev, u16 cap_offset);
> +bool pci_doe_supports_prot(struct pci_doe_mb *doe_mb, u16 vid, u8 type);
> +int pci_doe_submit_task(struct pci_doe_mb *doe_mb, struct pci_doe_task *task);
> +
> +#endif
On Tue, Jul 19, 2022 at 05:35:53PM +0100, Jonathan Cameron wrote: > On Thu, 14 Jul 2022 20:04:20 -0700 > ira.weiny@intel.com wrote: > > > From: Jonathan Cameron <Jonathan.Cameron@huawei.com> > > > > Introduced in a PCIe r6.0, sec 6.30, DOE provides a config space based > > mailbox with standard protocol discovery. Each mailbox is accessed > > through a DOE Extended Capability. > > > > Each DOE mailbox must support the DOE discovery protocol in addition to > > any number of additional protocols. > > > > Define core PCIe functionality to manage a single PCIe DOE mailbox at a > > defined config space offset. Functionality includes iterating, > > creating, query of supported protocol, and task submission. Destruction > > of the mailboxes is device managed. > > > > Cc: "Li, Ming" <ming4.li@intel.com> > > Cc: Bjorn Helgaas <helgaas@kernel.org> > > Cc: Matthew Wilcox <willy@infradead.org> > > Acked-by: Bjorn Helgaas <helgaas@kernel.org> > > Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> > > Co-developed-by: Ira Weiny <ira.weiny@intel.com> > > Signed-off-by: Ira Weiny <ira.weiny@intel.com> > Hi Ira, > > Thanks for persisting with this! > > So, I think this works, but there is at least one 'sleep' I can't > see a purpose for. I think it's just a left over from refactoring. > > A few other more trivial things inline. 
> > Thanks, > > Jonathan > > > > > >> # Endpoint library must be initialized before its users > > obj-$(CONFIG_PCI_ENDPOINT) += endpoint/ > > diff --git a/drivers/pci/doe.c b/drivers/pci/doe.c > > new file mode 100644 > > index 000000000000..12c3762be22f > > --- /dev/null > > +++ b/drivers/pci/doe.c > > @@ -0,0 +1,546 @@ > > +// SPDX-License-Identifier: GPL-2.0 > > +/* > > + * Data Object Exchange > > + * PCIe r6.0, sec 6.30 DOE > > + * > > + * Copyright (C) 2021 Huawei > > + * Jonathan Cameron <Jonathan.Cameron@huawei.com> > > + * > > + * Copyright (C) 2022 Intel Corporation > > + * Ira Weiny <ira.weiny@intel.com> > > + */ > > + > > +#define dev_fmt(fmt) "DOE: " fmt > > + > > +#include <linux/bitfield.h> > > +#include <linux/delay.h> > > +#include <linux/jiffies.h> > > +#include <linux/mutex.h> > > +#include <linux/pci.h> > > +#include <linux/pci-doe.h> > > +#include <linux/workqueue.h> > > + > > +#define PCI_DOE_PROTOCOL_DISCOVERY 0 > > + > > +#define PCI_DOE_BUSY_MAX_RETRIES 16 > Left over from removed code. I think Dan may have taken these. If so I'll send a clean up. If not I can spin. Let me check. > > > +static int pci_doe_wait(struct pci_doe_mb *doe_mb, unsigned long timeout) > > This feels misnamed to me now. It's not waiting for the DOE, it's > just sleeping unless we cancel. The actual poll / wait is handled > outside this. pci_doe_sleep_unless_cancel() maybe? It is waiting a timeout period _or_ checking if we are canceled. So I think it is correct for the common case. > > > +{ > > + if (wait_event_timeout(doe_mb->wq, > > + test_bit(PCI_DOE_FLAG_CANCEL, &doe_mb->flags), > > + timeout)) > > + return -EIO; > > + return 0; > > +} > > > +static int pci_doe_send_req(struct pci_doe_mb *doe_mb, > > + struct pci_doe_task *task) > > +{ > > + struct pci_dev *pdev = doe_mb->pdev; > > + int offset = doe_mb->cap_offset; > > + u32 val; > > + int i; > > + > > + /* > > + * Check the DOE busy bit is not set. 
If it is set, this could indicate
> > +	 * someone other than Linux (e.g. firmware) is using the mailbox. Note
> > +	 * it is expected that firmware and OS will negotiate access rights via
> > +	 * an, as yet to be defined method.
>
> trivial: an, as yet to be defined, method.

I'll send a follow-on patch for this.

>
> > +	 */
> > +	pci_read_config_dword(pdev, offset + PCI_DOE_STATUS, &val);
> > +	if (FIELD_GET(PCI_DOE_STATUS_BUSY, val))
> > +		return -EBUSY;
> > +
> > +	if (FIELD_GET(PCI_DOE_STATUS_ERROR, val))
> > +		return -EIO;
> > +
> > +	/* Write DOE Header */
> > +	val = FIELD_PREP(PCI_DOE_DATA_OBJECT_HEADER_1_VID, task->prot.vid) |
> > +		FIELD_PREP(PCI_DOE_DATA_OBJECT_HEADER_1_TYPE, task->prot.type);
> > +	pci_write_config_dword(pdev, offset + PCI_DOE_WRITE, val);
> > +	/* Length is 2 DW of header + length of payload in DW */
> > +	pci_write_config_dword(pdev, offset + PCI_DOE_WRITE,
> > +			       FIELD_PREP(PCI_DOE_DATA_OBJECT_HEADER_2_LENGTH,
> > +					  2 + task->request_pl_sz /
> > +						sizeof(u32)));
> > +	for (i = 0; i < task->request_pl_sz / sizeof(u32); i++)
> > +		pci_write_config_dword(pdev, offset + PCI_DOE_WRITE,
> > +				       task->request_pl[i]);
> > +
> > +	pci_doe_write_ctrl(doe_mb, PCI_DOE_CTRL_GO);
> > +
> > +	/* Request is sent - now wait for poll or IRQ */
>
> Could drop the IRQ given not currently handling. Though I suppose it's correct
> documentation for this function, so can leave it if preferred.
> More than possible we'll get a follow up patch dropping it though from
> someone doing cleanup.

I'll send a follow-on cleanup patch for this as well. I suspect IRQ support may
never make it in, but it certainly will not this cycle. Best to have the
comment correct for the final 5.20.

>
> > +	return 0;
> > +}
> > +
>
> ...
> > > + > > +static void doe_statemachine_work(struct work_struct *work) > > +{ > > + struct pci_doe_task *task = container_of(work, struct pci_doe_task, > > + work); > > + struct pci_doe_mb *doe_mb = task->doe_mb; > > + struct pci_dev *pdev = doe_mb->pdev; > > + int offset = doe_mb->cap_offset; > > + unsigned long timeout_jiffies; > > + u32 val; > > + int rc; > > + > > + if (test_bit(PCI_DOE_FLAG_DEAD, &doe_mb->flags)) { > > + signal_task_complete(task, -EIO); > > + return; > > + } > > + > > + /* Send request */ > > + rc = pci_doe_send_req(doe_mb, task); > > + > nitpick, but blank line separating call from error handling reduces > readability in my opinion. :-/ I'm not sure why I did this. More clean ups. > > > + if (rc) { > > + /* > > + * The specification does not provide any guidance on how to > > + * resolve conflicting requests from other entities. > > + * Furthermore, it is likely that busy will not be detected > > + * most of the time. Flag any detection of status busy with an > > + * error. > > + */ > > + if (rc == -EBUSY) > > + dev_err_ratelimited(&pdev->dev, "[%x] busy detected; another entity is sending conflicting requests\n", > > + offset); > > + signal_task_abort(task, rc); > > + return; > > + } > > + > > + timeout_jiffies = jiffies + PCI_DOE_TIMEOUT; > > + rc = pci_doe_wait(doe_mb, PCI_DOE_POLL_INTERVAL); > > What's this particular wait for? I think you can just move directly to checking > if the response is ready. We could but I assume it will take at least some time to process the request. So it seemed best to wait and then check. But of course we all know that also used to wait for an IRQ as an option. :-/ I'm really on the fence here because I don't think it really matters. We are sleeping so it does not really affect the system much and this is not a performance path. If we were spinning I would agree with you. 
> > > + if (rc) { > > + signal_task_abort(task, rc); > > + return; > > + } > > + > > + /* Poll for response */ > > +retry_resp: > > + pci_read_config_dword(pdev, offset + PCI_DOE_STATUS, &val); > > + if (FIELD_GET(PCI_DOE_STATUS_ERROR, val)) { > > + signal_task_abort(task, -EIO); > > + return; > > + } > > + > > + if (!FIELD_GET(PCI_DOE_STATUS_DATA_OBJECT_READY, val)) { > > + if (time_after(jiffies, timeout_jiffies)) { > > + signal_task_abort(task, -EIO); > > + return; > > + } > > + rc = pci_doe_wait(doe_mb, PCI_DOE_POLL_INTERVAL); > > + if (rc) { > > + signal_task_abort(task, rc); > > + return; > > + } > > + goto retry_resp; > > + } > > + > > + rc = pci_doe_recv_resp(doe_mb, task); > > + if (rc < 0) { > > + signal_task_abort(task, rc); > > + return; > > + } > > + > > + signal_task_complete(task, rc); > > +} > > + > > > +/** > > + * pci_doe_supports_prot() - Return if the DOE instance supports the given > > + * protocol > > + * @doe_mb: DOE mailbox capability to query > > + * @vid: Protocol Vendor ID > > + * @type: Protocol type > > + * > > + * RETURNS: True if the DOE mailbox supports the protocol specified > > + */ > > +bool pci_doe_supports_prot(struct pci_doe_mb *doe_mb, u16 vid, u8 type) > > +{ > > + unsigned long index; > > + void *entry; > > + > > + /* The discovery protocol must always be supported */ > > + if (vid == PCI_VENDOR_ID_PCI_SIG && type == PCI_DOE_PROTOCOL_DISCOVERY) > > + return true; > > Given how cheap this look up is now it's all in xarray, we could drop this > 'optimization'. I'm fairly sure the discovery protocol will always be > discovered (spec says it must be returned when calling itself as the fist > protocol). No we can't because this is called before the xarray is populated with the discovery protocol. This was actually added not as an optimization but to allow the discovery protocol to run through the common query path. 
> > > + > > + xa_for_each(&doe_mb->prots, index, entry) > > + if (entry == pci_doe_xa_prot_entry(vid, type)) > > + return true; > > + > > + return false; > > +} > > +EXPORT_SYMBOL_GPL(pci_doe_supports_prot); > > > > +EXPORT_SYMBOL_GPL(pci_doe_submit_task); > > diff --git a/include/linux/pci-doe.h b/include/linux/pci-doe.h > > new file mode 100644 > > index 000000000000..c77f6258c996 > > --- /dev/null > > +++ b/include/linux/pci-doe.h > > @@ -0,0 +1,79 @@ > > +/* SPDX-License-Identifier: GPL-2.0 */ > > +/* > > + * Data Object Exchange > > + * PCIe r6.0, sec 6.30 DOE > > + * > > + * Copyright (C) 2021 Huawei > > + * Jonathan Cameron <Jonathan.Cameron@huawei.com> > > + * > > + * Copyright (C) 2022 Intel Corporation > > + * Ira Weiny <ira.weiny@intel.com> > > + */ > > + > > +#ifndef LINUX_PCI_DOE_H > > +#define LINUX_PCI_DOE_H > > + > > +#include <linux/completion.h> > > include not needed any more. Will fix, Ira > > Thanks, > > Jonathan > > > + > > +struct pci_doe_protocol { > > + u16 vid; > > + u8 type; > > +}; > > + > > +struct pci_doe_mb; > > + > > +/** > > + * struct pci_doe_task - represents a single query/response > > + * > > + * @prot: DOE Protocol > > + * @request_pl: The request payload > > + * @request_pl_sz: Size of the request payload (bytes) > > + * @response_pl: The response payload > > + * @response_pl_sz: Size of the response payload (bytes) > > + * @rv: Return value. Length of received response or error (bytes) > > + * @complete: Called when task is complete > > + * @private: Private data for the consumer > > + * @work: Used internally by the mailbox > > + * @doe_mb: Used internally by the mailbox > > + * > > + * The payload sizes and rv are specified in bytes with the following > > + * restrictions concerning the protocol. 
> > + * > > + * 1) The request_pl_sz must be a multiple of double words (4 bytes) > > + * 2) The response_pl_sz must be >= a single double word (4 bytes) > > + * 3) rv is returned as bytes but it will be a multiple of double words > > + * > > + * NOTE there is no need for the caller to initialize work or doe_mb. > > + */ > > +struct pci_doe_task { > > + struct pci_doe_protocol prot; > > + u32 *request_pl; > > + size_t request_pl_sz; > > + u32 *response_pl; > > + size_t response_pl_sz; > > + int rv; > > + void (*complete)(struct pci_doe_task *task); > > + void *private; > > + > > + /* No need for the user to initialize these fields */ > > + struct work_struct work; > > + struct pci_doe_mb *doe_mb; > > +}; > > + > > +/** > > + * pci_doe_for_each_off - Iterate each DOE capability > > + * @pdev: struct pci_dev to iterate > > + * @off: u16 of config space offset of each mailbox capability found > > + */ > > +#define pci_doe_for_each_off(pdev, off) \ > > + for (off = pci_find_next_ext_capability(pdev, off, \ > > + PCI_EXT_CAP_ID_DOE); \ > > + off > 0; \ > > + off = pci_find_next_ext_capability(pdev, off, \ > > + PCI_EXT_CAP_ID_DOE)) > > + > > +struct pci_doe_mb *pcim_doe_create_mb(struct pci_dev *pdev, u16 cap_offset); > > +bool pci_doe_supports_prot(struct pci_doe_mb *doe_mb, u16 vid, u8 type); > > +int pci_doe_submit_task(struct pci_doe_mb *doe_mb, struct pci_doe_task *task); > > + > > +#endif >
On Tue, Jul 19, 2022 at 12:16:06PM -0700, Ira wrote: > On Tue, Jul 19, 2022 at 05:35:53PM +0100, Jonathan Cameron wrote: [snip] > > Hi Ira, > > > > Thanks for persisting with this! > > > > So, I think this works, but there is at least one 'sleep' I can't > > see a purpose for. I think it's just a left over from refactoring. > > > > A few other more trivial things inline. [snip] > > > + > > > +#define PCI_DOE_BUSY_MAX_RETRIES 16 > > Left over from removed code. > > I think Dan may have taken these. If so I'll send a clean up. If not I can > spin. Let me check. I'm spinning a v15 of this patch. [snip] > > > > > > + if (rc) { > > > + /* > > > + * The specification does not provide any guidance on how to > > > + * resolve conflicting requests from other entities. > > > + * Furthermore, it is likely that busy will not be detected > > > + * most of the time. Flag any detection of status busy with an > > > + * error. > > > + */ > > > + if (rc == -EBUSY) > > > + dev_err_ratelimited(&pdev->dev, "[%x] busy detected; another entity is sending conflicting requests\n", > > > + offset); > > > + signal_task_abort(task, rc); > > > + return; > > > + } > > > + > > > + timeout_jiffies = jiffies + PCI_DOE_TIMEOUT; > > > + rc = pci_doe_wait(doe_mb, PCI_DOE_POLL_INTERVAL); > > > > What's this particular wait for? I think you can just move directly to checking > > if the response is ready. > > We could but I assume it will take at least some time to process the request. > So it seemed best to wait and then check. > > But of course we all know that also used to wait for an IRQ as an option. :-/ > > I'm really on the fence here because I don't think it really matters. We are > sleeping so it does not really affect the system much and this is not a > performance path. If we were spinning I would agree with you. I've deferred to your expertise here and removed the extra wait. Ira
On Tue, 19 Jul 2022 12:16:06 -0700 Ira Weiny <ira.weiny@intel.com> wrote: > On Tue, Jul 19, 2022 at 05:35:53PM +0100, Jonathan Cameron wrote: > > On Thu, 14 Jul 2022 20:04:20 -0700 > > ira.weiny@intel.com wrote: > > > > > From: Jonathan Cameron <Jonathan.Cameron@huawei.com> > > > > > > Introduced in a PCIe r6.0, sec 6.30, DOE provides a config space based > > > mailbox with standard protocol discovery. Each mailbox is accessed > > > through a DOE Extended Capability. > > > > > > Each DOE mailbox must support the DOE discovery protocol in addition to > > > any number of additional protocols. > > > > > > Define core PCIe functionality to manage a single PCIe DOE mailbox at a > > > defined config space offset. Functionality includes iterating, > > > creating, query of supported protocol, and task submission. Destruction > > > of the mailboxes is device managed. > > > > > > Cc: "Li, Ming" <ming4.li@intel.com> > > > Cc: Bjorn Helgaas <helgaas@kernel.org> > > > Cc: Matthew Wilcox <willy@infradead.org> > > > Acked-by: Bjorn Helgaas <helgaas@kernel.org> > > > Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> > > > Co-developed-by: Ira Weiny <ira.weiny@intel.com> > > > Signed-off-by: Ira Weiny <ira.weiny@intel.com> > > Hi Ira, > > > > Thanks for persisting with this! > > > > So, I think this works, but there is at least one 'sleep' I can't > > see a purpose for. I think it's just a left over from refactoring. > > > > A few other more trivial things inline. 
> > > > Thanks, > > > > Jonathan > > > > > > > > > >> # Endpoint library must be initialized before its users > > > obj-$(CONFIG_PCI_ENDPOINT) += endpoint/ > > > diff --git a/drivers/pci/doe.c b/drivers/pci/doe.c > > > new file mode 100644 > > > index 000000000000..12c3762be22f > > > --- /dev/null > > > +++ b/drivers/pci/doe.c > > > @@ -0,0 +1,546 @@ > > > +// SPDX-License-Identifier: GPL-2.0 > > > +/* > > > + * Data Object Exchange > > > + * PCIe r6.0, sec 6.30 DOE > > > + * > > > + * Copyright (C) 2021 Huawei > > > + * Jonathan Cameron <Jonathan.Cameron@huawei.com> > > > + * > > > + * Copyright (C) 2022 Intel Corporation > > > + * Ira Weiny <ira.weiny@intel.com> > > > + */ > > > + > > > +#define dev_fmt(fmt) "DOE: " fmt > > > + > > > +#include <linux/bitfield.h> > > > +#include <linux/delay.h> > > > +#include <linux/jiffies.h> > > > +#include <linux/mutex.h> > > > +#include <linux/pci.h> > > > +#include <linux/pci-doe.h> > > > +#include <linux/workqueue.h> > > > + > > > +#define PCI_DOE_PROTOCOL_DISCOVERY 0 > > > + > > > +#define PCI_DOE_BUSY_MAX_RETRIES 16 > > Left over from removed code. > > I think Dan may have taken these. If so I'll send a clean up. If not I can > spin. Let me check. > Absolutely. All tiny improvements, so fine to go in next cycle given late timing. 
> > > +/** > > > + * pci_doe_supports_prot() - Return if the DOE instance supports the given > > > + * protocol > > > + * @doe_mb: DOE mailbox capability to query > > > + * @vid: Protocol Vendor ID > > > + * @type: Protocol type > > > + * > > > + * RETURNS: True if the DOE mailbox supports the protocol specified > > > + */ > > > +bool pci_doe_supports_prot(struct pci_doe_mb *doe_mb, u16 vid, u8 type) > > > +{ > > > + unsigned long index; > > > + void *entry; > > > + > > > + /* The discovery protocol must always be supported */ > > > + if (vid == PCI_VENDOR_ID_PCI_SIG && type == PCI_DOE_PROTOCOL_DISCOVERY) > > > + return true; > > > > Given how cheap this look up is now it's all in xarray, we could drop this > > 'optimization'. I'm fairly sure the discovery protocol will always be > > discovered (spec says it must be returned when calling itself as the fist > > protocol). > > No we can't because this is called before the xarray is populated with the > discovery protocol. This was actually added not as an optimization but to > allow the discovery protocol to run through the common query path. > Ah. I was too lazy to check if that was the case :) Jonathan
diff --git a/.clang-format b/.clang-format
index 9b87ea1fc16e..1247d54f9e49 100644
--- a/.clang-format
+++ b/.clang-format
@@ -516,6 +516,7 @@ ForEachMacros:
   - 'of_property_for_each_string'
   - 'of_property_for_each_u32'
   - 'pci_bus_for_each_resource'
+  - 'pci_doe_for_each_off'
   - 'pcl_for_each_chunk'
   - 'pcl_for_each_segment'
   - 'pcm_for_each_format'
diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig
index 133c73207782..b2f2e588a817 100644
--- a/drivers/pci/Kconfig
+++ b/drivers/pci/Kconfig
@@ -121,6 +121,9 @@ config XEN_PCIDEV_FRONTEND
 config PCI_ATS
 	bool
 
+config PCI_DOE
+	bool
+
 config PCI_ECAM
 	bool
 
diff --git a/drivers/pci/Makefile b/drivers/pci/Makefile
index 0da6b1ebc694..2680e4c92f0a 100644
--- a/drivers/pci/Makefile
+++ b/drivers/pci/Makefile
@@ -31,6 +31,7 @@ obj-$(CONFIG_PCI_ECAM)	+= ecam.o
 obj-$(CONFIG_PCI_P2PDMA)	+= p2pdma.o
 obj-$(CONFIG_XEN_PCIDEV_FRONTEND) += xen-pcifront.o
 obj-$(CONFIG_VGA_ARB)		+= vgaarb.o
+obj-$(CONFIG_PCI_DOE)		+= doe.o
 
 # Endpoint library must be initialized before its users
 obj-$(CONFIG_PCI_ENDPOINT) += endpoint/
diff --git a/drivers/pci/doe.c b/drivers/pci/doe.c
new file mode 100644
index 000000000000..12c3762be22f
--- /dev/null
+++ b/drivers/pci/doe.c
@@ -0,0 +1,546 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Data Object Exchange
+ * PCIe r6.0, sec 6.30 DOE
+ *
+ * Copyright (C) 2021 Huawei
+ *	Jonathan Cameron <Jonathan.Cameron@huawei.com>
+ *
+ * Copyright (C) 2022 Intel Corporation
+ *	Ira Weiny <ira.weiny@intel.com>
+ */
+
+#define dev_fmt(fmt) "DOE: " fmt
+
+#include <linux/bitfield.h>
+#include <linux/delay.h>
+#include <linux/jiffies.h>
+#include <linux/mutex.h>
+#include <linux/pci.h>
+#include <linux/pci-doe.h>
+#include <linux/workqueue.h>
+
+#define PCI_DOE_PROTOCOL_DISCOVERY 0
+
+#define PCI_DOE_BUSY_MAX_RETRIES 16
+
+/* Timeout of 1 second from 6.30.2 Operation, PCI Spec r6.0 */
+#define PCI_DOE_TIMEOUT HZ
+#define PCI_DOE_POLL_INTERVAL	(PCI_DOE_TIMEOUT / 128)
+
+#define PCI_DOE_FLAG_CANCEL	0
+#define PCI_DOE_FLAG_DEAD	1
+
+/**
+ * struct pci_doe_mb - State for a single DOE mailbox
+ *
+ * This state is used to manage a single DOE mailbox capability. All fields
+ * should be considered opaque to the consumers and the structure passed into
+ * the helpers below after being created by devm_pci_doe_create()
+ *
+ * @pdev: PCI device this mailbox belongs to
+ * @cap_offset: Capability offset
+ * @prots: Array of protocols supported (encoded as long values)
+ * @wq: Wait queue for work item
+ * @work_queue: Queue of pci_doe_work items
+ * @flags: Bit array of PCI_DOE_FLAG_* flags
+ */
+struct pci_doe_mb {
+	struct pci_dev *pdev;
+	u16 cap_offset;
+	struct xarray prots;
+
+	wait_queue_head_t wq;
+	struct workqueue_struct *work_queue;
+	unsigned long flags;
+};
+
+static int pci_doe_wait(struct pci_doe_mb *doe_mb, unsigned long timeout)
+{
+	if (wait_event_timeout(doe_mb->wq,
+			       test_bit(PCI_DOE_FLAG_CANCEL, &doe_mb->flags),
+			       timeout))
+		return -EIO;
+	return 0;
+}
+
+static void pci_doe_write_ctrl(struct pci_doe_mb *doe_mb, u32 val)
+{
+	struct pci_dev *pdev = doe_mb->pdev;
+	int offset = doe_mb->cap_offset;
+
+	pci_write_config_dword(pdev, offset + PCI_DOE_CTRL, val);
+}
+
+static int pci_doe_abort(struct pci_doe_mb *doe_mb)
+{
+	struct pci_dev *pdev = doe_mb->pdev;
+	int offset = doe_mb->cap_offset;
+	unsigned long timeout_jiffies;
+
+	pci_dbg(pdev, "[%x] Issuing Abort\n", offset);
+
+	timeout_jiffies = jiffies + PCI_DOE_TIMEOUT;
+	pci_doe_write_ctrl(doe_mb, PCI_DOE_CTRL_ABORT);
+
+	do {
+		int rc;
+		u32 val;
+
+		rc = pci_doe_wait(doe_mb, PCI_DOE_POLL_INTERVAL);
+		if (rc)
+			return rc;
+		pci_read_config_dword(pdev, offset + PCI_DOE_STATUS, &val);
+
+		/* Abort success! */
+		if (!FIELD_GET(PCI_DOE_STATUS_ERROR, val) &&
+		    !FIELD_GET(PCI_DOE_STATUS_BUSY, val))
+			return 0;
+
+	} while (!time_after(jiffies, timeout_jiffies));
+
+	/* Abort has timed out and the MB is dead */
+	pci_err(pdev, "[%x] ABORT timed out\n", offset);
+	return -EIO;
+}
+
+static int pci_doe_send_req(struct pci_doe_mb *doe_mb,
+			    struct pci_doe_task *task)
+{
+	struct pci_dev *pdev = doe_mb->pdev;
+	int offset = doe_mb->cap_offset;
+	u32 val;
+	int i;
+
+	/*
+	 * Check the DOE busy bit is not set. If it is set, this could indicate
+	 * someone other than Linux (e.g. firmware) is using the mailbox. Note
+	 * it is expected that firmware and OS will negotiate access rights via
+	 * an, as yet to be defined method.
+	 */
+	pci_read_config_dword(pdev, offset + PCI_DOE_STATUS, &val);
+	if (FIELD_GET(PCI_DOE_STATUS_BUSY, val))
+		return -EBUSY;
+
+	if (FIELD_GET(PCI_DOE_STATUS_ERROR, val))
+		return -EIO;
+
+	/* Write DOE Header */
+	val = FIELD_PREP(PCI_DOE_DATA_OBJECT_HEADER_1_VID, task->prot.vid) |
+		FIELD_PREP(PCI_DOE_DATA_OBJECT_HEADER_1_TYPE, task->prot.type);
+	pci_write_config_dword(pdev, offset + PCI_DOE_WRITE, val);
+	/* Length is 2 DW of header + length of payload in DW */
+	pci_write_config_dword(pdev, offset + PCI_DOE_WRITE,
+			       FIELD_PREP(PCI_DOE_DATA_OBJECT_HEADER_2_LENGTH,
+					  2 + task->request_pl_sz /
+						sizeof(u32)));
+	for (i = 0; i < task->request_pl_sz / sizeof(u32); i++)
+		pci_write_config_dword(pdev, offset + PCI_DOE_WRITE,
+				       task->request_pl[i]);
+
+	pci_doe_write_ctrl(doe_mb, PCI_DOE_CTRL_GO);
+
+	/* Request is sent - now wait for poll or IRQ */
+	return 0;
+}
+
+static bool pci_doe_data_obj_ready(struct pci_doe_mb *doe_mb)
+{
+	struct pci_dev *pdev = doe_mb->pdev;
+	int offset = doe_mb->cap_offset;
+	u32 val;
+
+	pci_read_config_dword(pdev, offset + PCI_DOE_STATUS, &val);
+	if (FIELD_GET(PCI_DOE_STATUS_DATA_OBJECT_READY, val))
+		return true;
+	return false;
+}
+
+static int pci_doe_recv_resp(struct pci_doe_mb *doe_mb, struct pci_doe_task *task)
+{
+	struct pci_dev *pdev = doe_mb->pdev;
+	int offset = doe_mb->cap_offset;
+	size_t length, payload_length;
+	u32 val;
+	int i;
+
+	/* Read the first dword to get the protocol */
+	pci_read_config_dword(pdev, offset + PCI_DOE_READ, &val);
+	if ((FIELD_GET(PCI_DOE_DATA_OBJECT_HEADER_1_VID, val) != task->prot.vid) ||
+	    (FIELD_GET(PCI_DOE_DATA_OBJECT_HEADER_1_TYPE, val) != task->prot.type)) {
+		dev_err_ratelimited(&pdev->dev, "[%x] expected [VID, Protocol] = [%04x, %02x], got [%04x, %02x]\n",
+				    doe_mb->cap_offset, task->prot.vid, task->prot.type,
+				    FIELD_GET(PCI_DOE_DATA_OBJECT_HEADER_1_VID, val),
+				    FIELD_GET(PCI_DOE_DATA_OBJECT_HEADER_1_TYPE, val));
+		return -EIO;
+	}
+
+	pci_write_config_dword(pdev, offset + PCI_DOE_READ, 0);
+	/* Read the second dword to get the length */
+	pci_read_config_dword(pdev, offset + PCI_DOE_READ, &val);
+	pci_write_config_dword(pdev, offset + PCI_DOE_READ, 0);
+
+	length = FIELD_GET(PCI_DOE_DATA_OBJECT_HEADER_2_LENGTH, val);
+	if (length > SZ_1M || length < 2)
+		return -EIO;
+
+	/* First 2 dwords have already been read */
+	length -= 2;
+	payload_length = min(length, task->response_pl_sz / sizeof(u32));
+	/* Read the rest of the response payload */
+	for (i = 0; i < payload_length; i++) {
+		pci_read_config_dword(pdev, offset + PCI_DOE_READ,
+				      &task->response_pl[i]);
+		/* Prior to the last ack, ensure Data Object Ready */
+		if (i == (payload_length - 1) && !pci_doe_data_obj_ready(doe_mb))
+			return -EIO;
+		pci_write_config_dword(pdev, offset + PCI_DOE_READ, 0);
+	}
+
+	/* Flush excess length */
+	for (; i < length; i++) {
+		pci_read_config_dword(pdev, offset + PCI_DOE_READ, &val);
+		pci_write_config_dword(pdev, offset + PCI_DOE_READ, 0);
+	}
+
+	/* Final error check to pick up on any since Data Object Ready */
+	pci_read_config_dword(pdev, offset + PCI_DOE_STATUS, &val);
+	if (FIELD_GET(PCI_DOE_STATUS_ERROR, val))
+		return -EIO;
+
+	return min(length, task->response_pl_sz / sizeof(u32)) * sizeof(u32);
+}
+
+static void signal_task_complete(struct pci_doe_task *task, int rv)
+{
+	task->rv = rv;
+	task->complete(task);
+}
+
+static void signal_task_abort(struct pci_doe_task *task, int rv)
+{
+	struct pci_doe_mb *doe_mb = task->doe_mb;
+	struct pci_dev *pdev = doe_mb->pdev;
+
+	if (pci_doe_abort(doe_mb)) {
+		/*
+		 * If the device can't process an abort; set the mailbox dead
+		 *	- no more submissions
+		 */
+		pci_err(pdev, "[%x] Abort failed marking mailbox dead\n",
+			doe_mb->cap_offset);
+		set_bit(PCI_DOE_FLAG_DEAD, &doe_mb->flags);
+	}
+	signal_task_complete(task, rv);
+}
+
+static void doe_statemachine_work(struct work_struct *work)
+{
+	struct pci_doe_task *task = container_of(work, struct pci_doe_task,
+						 work);
+	struct pci_doe_mb *doe_mb = task->doe_mb;
+	struct pci_dev *pdev = doe_mb->pdev;
+	int offset = doe_mb->cap_offset;
+	unsigned long timeout_jiffies;
+	u32 val;
+	int rc;
+
+	if (test_bit(PCI_DOE_FLAG_DEAD, &doe_mb->flags)) {
+		signal_task_complete(task, -EIO);
+		return;
+	}
+
+	/* Send request */
+	rc = pci_doe_send_req(doe_mb, task);
+
+	if (rc) {
+		/*
+		 * The specification does not provide any guidance on how to
+		 * resolve conflicting requests from other entities.
+		 * Furthermore, it is likely that busy will not be detected
+		 * most of the time. Flag any detection of status busy with an
+		 * error.
+ */ + if (rc == -EBUSY) + dev_err_ratelimited(&pdev->dev, "[%x] busy detected; another entity is sending conflicting requests\n", + offset); + signal_task_abort(task, rc); + return; + } + + timeout_jiffies = jiffies + PCI_DOE_TIMEOUT; + rc = pci_doe_wait(doe_mb, PCI_DOE_POLL_INTERVAL); + if (rc) { + signal_task_abort(task, rc); + return; + } + + /* Poll for response */ +retry_resp: + pci_read_config_dword(pdev, offset + PCI_DOE_STATUS, &val); + if (FIELD_GET(PCI_DOE_STATUS_ERROR, val)) { + signal_task_abort(task, -EIO); + return; + } + + if (!FIELD_GET(PCI_DOE_STATUS_DATA_OBJECT_READY, val)) { + if (time_after(jiffies, timeout_jiffies)) { + signal_task_abort(task, -EIO); + return; + } + rc = pci_doe_wait(doe_mb, PCI_DOE_POLL_INTERVAL); + if (rc) { + signal_task_abort(task, rc); + return; + } + goto retry_resp; + } + + rc = pci_doe_recv_resp(doe_mb, task); + if (rc < 0) { + signal_task_abort(task, rc); + return; + } + + signal_task_complete(task, rc); +} + +static void pci_doe_task_complete(struct pci_doe_task *task) +{ + complete(task->private); +} + +static int pci_doe_discovery(struct pci_doe_mb *doe_mb, u8 *index, u16 *vid, + u8 *protocol) +{ + u32 request_pl = FIELD_PREP(PCI_DOE_DATA_OBJECT_DISC_REQ_3_INDEX, + *index); + u32 response_pl; + DECLARE_COMPLETION_ONSTACK(c); + struct pci_doe_task task = { + .prot.vid = PCI_VENDOR_ID_PCI_SIG, + .prot.type = PCI_DOE_PROTOCOL_DISCOVERY, + .request_pl = &request_pl, + .request_pl_sz = sizeof(request_pl), + .response_pl = &response_pl, + .response_pl_sz = sizeof(response_pl), + .complete = pci_doe_task_complete, + .private = &c, + }; + int rc; + + rc = pci_doe_submit_task(doe_mb, &task); + if (rc < 0) + return rc; + + wait_for_completion(&c); + + if (task.rv != sizeof(response_pl)) + return -EIO; + + *vid = FIELD_GET(PCI_DOE_DATA_OBJECT_DISC_RSP_3_VID, response_pl); + *protocol = FIELD_GET(PCI_DOE_DATA_OBJECT_DISC_RSP_3_PROTOCOL, + response_pl); + *index = FIELD_GET(PCI_DOE_DATA_OBJECT_DISC_RSP_3_NEXT_INDEX, + 
response_pl); + + return 0; +} + +static void *pci_doe_xa_prot_entry(u16 vid, u8 prot) +{ + return xa_mk_value((vid << 8) | prot); +} + +static int pci_doe_cache_protocols(struct pci_doe_mb *doe_mb) +{ + u8 index = 0; + u8 xa_idx = 0; + + do { + int rc; + u16 vid; + u8 prot; + + rc = pci_doe_discovery(doe_mb, &index, &vid, &prot); + if (rc) + return rc; + + pci_dbg(doe_mb->pdev, + "[%x] Found protocol %d vid: %x prot: %x\n", + doe_mb->cap_offset, xa_idx, vid, prot); + + rc = xa_insert(&doe_mb->prots, xa_idx++, + pci_doe_xa_prot_entry(vid, prot), GFP_KERNEL); + if (rc) + return rc; + } while (index); + + return 0; +} + +static void pci_doe_xa_destroy(void *mb) +{ + struct pci_doe_mb *doe_mb = mb; + + xa_destroy(&doe_mb->prots); +} + +static void pci_doe_destroy_workqueue(void *mb) +{ + struct pci_doe_mb *doe_mb = mb; + + destroy_workqueue(doe_mb->work_queue); +} + +static void pci_doe_flush_mb(void *mb) +{ + struct pci_doe_mb *doe_mb = mb; + + /* Stop all pending work items from starting */ + set_bit(PCI_DOE_FLAG_DEAD, &doe_mb->flags); + + /* Cancel an in progress work item, if necessary */ + set_bit(PCI_DOE_FLAG_CANCEL, &doe_mb->flags); + wake_up(&doe_mb->wq); + + /* Flush all work items */ + flush_workqueue(doe_mb->work_queue); +} + +/** + * pcim_doe_create_mb() - Create a DOE mailbox object + * + * @pdev: PCI device to create the DOE mailbox for + * @cap_offset: Offset of the DOE mailbox + * + * Create a single mailbox object to manage the mailbox protocol at the + * cap_offset specified. 
+ * + * RETURNS: created mailbox object on success + * ERR_PTR(-errno) on failure + */ +struct pci_doe_mb *pcim_doe_create_mb(struct pci_dev *pdev, u16 cap_offset) +{ + struct pci_doe_mb *doe_mb; + struct device *dev = &pdev->dev; + int rc; + + doe_mb = devm_kzalloc(dev, sizeof(*doe_mb), GFP_KERNEL); + if (!doe_mb) + return ERR_PTR(-ENOMEM); + + doe_mb->pdev = pdev; + doe_mb->cap_offset = cap_offset; + init_waitqueue_head(&doe_mb->wq); + + xa_init(&doe_mb->prots); + rc = devm_add_action(dev, pci_doe_xa_destroy, doe_mb); + if (rc) + return ERR_PTR(rc); + + doe_mb->work_queue = alloc_ordered_workqueue("%s %s DOE [%x]", 0, + dev_driver_string(&pdev->dev), + pci_name(pdev), + doe_mb->cap_offset); + if (!doe_mb->work_queue) { + pci_err(pdev, "[%x] failed to allocate work queue\n", + doe_mb->cap_offset); + return ERR_PTR(-ENOMEM); + } + rc = devm_add_action_or_reset(dev, pci_doe_destroy_workqueue, doe_mb); + if (rc) + return ERR_PTR(rc); + + /* Reset the mailbox by issuing an abort */ + rc = pci_doe_abort(doe_mb); + if (rc) { + pci_err(pdev, "[%x] failed to reset mailbox with abort command : %d\n", + doe_mb->cap_offset, rc); + return ERR_PTR(rc); + } + + /* + * The state machine and the mailbox should be in sync now; + * Set up mailbox flush prior to using the mailbox to query protocols. 
+ */ + rc = devm_add_action_or_reset(dev, pci_doe_flush_mb, doe_mb); + if (rc) + return ERR_PTR(rc); + + rc = pci_doe_cache_protocols(doe_mb); + if (rc) { + pci_err(pdev, "[%x] failed to cache protocols : %d\n", + doe_mb->cap_offset, rc); + return ERR_PTR(rc); + } + + return doe_mb; +} +EXPORT_SYMBOL_GPL(pcim_doe_create_mb); + +/** + * pci_doe_supports_prot() - Return if the DOE instance supports the given + * protocol + * @doe_mb: DOE mailbox capability to query + * @vid: Protocol Vendor ID + * @type: Protocol type + * + * RETURNS: True if the DOE mailbox supports the protocol specified + */ +bool pci_doe_supports_prot(struct pci_doe_mb *doe_mb, u16 vid, u8 type) +{ + unsigned long index; + void *entry; + + /* The discovery protocol must always be supported */ + if (vid == PCI_VENDOR_ID_PCI_SIG && type == PCI_DOE_PROTOCOL_DISCOVERY) + return true; + + xa_for_each(&doe_mb->prots, index, entry) + if (entry == pci_doe_xa_prot_entry(vid, type)) + return true; + + return false; +} +EXPORT_SYMBOL_GPL(pci_doe_supports_prot); + +/** + * pci_doe_submit_task() - Submit a task to be processed by the state machine + * + * @doe_mb: DOE mailbox capability to submit to + * @task: task to be queued + * + * Submit a DOE task (request/response) to the DOE mailbox to be processed. + * Returns upon queueing the task object. If the queue is full this function + * will sleep until there is room in the queue. + * + * task->complete will be called when the state machine is done processing this + * task. + * + * Excess data will be discarded. 
+ * + * RETURNS: 0 when task has been successfully queued, -ERRNO on error + */ +int pci_doe_submit_task(struct pci_doe_mb *doe_mb, struct pci_doe_task *task) +{ + if (!pci_doe_supports_prot(doe_mb, task->prot.vid, task->prot.type)) + return -EINVAL; + + /* + * DOE requests must be a whole number of DW and the response needs to + * be big enough for at least 1 DW + */ + if (task->request_pl_sz % sizeof(u32) || + task->response_pl_sz < sizeof(u32)) + return -EINVAL; + + if (test_bit(PCI_DOE_FLAG_DEAD, &doe_mb->flags)) + return -EIO; + + task->doe_mb = doe_mb; + INIT_WORK(&task->work, doe_statemachine_work); + queue_work(doe_mb->work_queue, &task->work); + return 0; +} +EXPORT_SYMBOL_GPL(pci_doe_submit_task); diff --git a/include/linux/pci-doe.h b/include/linux/pci-doe.h new file mode 100644 index 000000000000..c77f6258c996 --- /dev/null +++ b/include/linux/pci-doe.h @@ -0,0 +1,79 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Data Object Exchange + * PCIe r6.0, sec 6.30 DOE + * + * Copyright (C) 2021 Huawei + * Jonathan Cameron <Jonathan.Cameron@huawei.com> + * + * Copyright (C) 2022 Intel Corporation + * Ira Weiny <ira.weiny@intel.com> + */ + +#ifndef LINUX_PCI_DOE_H +#define LINUX_PCI_DOE_H + +#include <linux/completion.h> + +struct pci_doe_protocol { + u16 vid; + u8 type; +}; + +struct pci_doe_mb; + +/** + * struct pci_doe_task - represents a single query/response + * + * @prot: DOE Protocol + * @request_pl: The request payload + * @request_pl_sz: Size of the request payload (bytes) + * @response_pl: The response payload + * @response_pl_sz: Size of the response payload (bytes) + * @rv: Return value. Length of received response or error (bytes) + * @complete: Called when task is complete + * @private: Private data for the consumer + * @work: Used internally by the mailbox + * @doe_mb: Used internally by the mailbox + * + * The payload sizes and rv are specified in bytes with the following + * restrictions concerning the protocol. 
+ * + * 1) The request_pl_sz must be a multiple of double words (4 bytes) + * 2) The response_pl_sz must be >= a single double word (4 bytes) + * 3) rv is returned as bytes but it will be a multiple of double words + * + * NOTE there is no need for the caller to initialize work or doe_mb. + */ +struct pci_doe_task { + struct pci_doe_protocol prot; + u32 *request_pl; + size_t request_pl_sz; + u32 *response_pl; + size_t response_pl_sz; + int rv; + void (*complete)(struct pci_doe_task *task); + void *private; + + /* No need for the user to initialize these fields */ + struct work_struct work; + struct pci_doe_mb *doe_mb; +}; + +/** + * pci_doe_for_each_off - Iterate each DOE capability + * @pdev: struct pci_dev to iterate + * @off: u16 of config space offset of each mailbox capability found + */ +#define pci_doe_for_each_off(pdev, off) \ + for (off = pci_find_next_ext_capability(pdev, off, \ + PCI_EXT_CAP_ID_DOE); \ + off > 0; \ + off = pci_find_next_ext_capability(pdev, off, \ + PCI_EXT_CAP_ID_DOE)) + +struct pci_doe_mb *pcim_doe_create_mb(struct pci_dev *pdev, u16 cap_offset); +bool pci_doe_supports_prot(struct pci_doe_mb *doe_mb, u16 vid, u8 type); +int pci_doe_submit_task(struct pci_doe_mb *doe_mb, struct pci_doe_task *task); + +#endif diff --git a/include/uapi/linux/pci_regs.h b/include/uapi/linux/pci_regs.h index 108f8523fa04..57b8e2ffb1dd 100644 --- a/include/uapi/linux/pci_regs.h +++ b/include/uapi/linux/pci_regs.h @@ -737,7 +737,8 @@ #define PCI_EXT_CAP_ID_DVSEC 0x23 /* Designated Vendor-Specific */ #define PCI_EXT_CAP_ID_DLF 0x25 /* Data Link Feature */ #define PCI_EXT_CAP_ID_PL_16GT 0x26 /* Physical Layer 16.0 GT/s */ -#define PCI_EXT_CAP_ID_MAX PCI_EXT_CAP_ID_PL_16GT +#define PCI_EXT_CAP_ID_DOE 0x2E /* Data Object Exchange */ +#define PCI_EXT_CAP_ID_MAX PCI_EXT_CAP_ID_DOE #define PCI_EXT_CAP_DSN_SIZEOF 12 #define PCI_EXT_CAP_MCAST_ENDPOINT_SIZEOF 40 @@ -1103,4 +1104,30 @@ #define PCI_PL_16GT_LE_CTRL_USP_TX_PRESET_MASK 0x000000F0 #define 
PCI_PL_16GT_LE_CTRL_USP_TX_PRESET_SHIFT 4 +/* Data Object Exchange */ +#define PCI_DOE_CAP 0x04 /* DOE Capabilities Register */ +#define PCI_DOE_CAP_INT_SUP 0x00000001 /* Interrupt Support */ +#define PCI_DOE_CAP_INT_MSG_NUM 0x00000ffe /* Interrupt Message Number */ +#define PCI_DOE_CTRL 0x08 /* DOE Control Register */ +#define PCI_DOE_CTRL_ABORT 0x00000001 /* DOE Abort */ +#define PCI_DOE_CTRL_INT_EN 0x00000002 /* DOE Interrupt Enable */ +#define PCI_DOE_CTRL_GO 0x80000000 /* DOE Go */ +#define PCI_DOE_STATUS 0x0c /* DOE Status Register */ +#define PCI_DOE_STATUS_BUSY 0x00000001 /* DOE Busy */ +#define PCI_DOE_STATUS_INT_STATUS 0x00000002 /* DOE Interrupt Status */ +#define PCI_DOE_STATUS_ERROR 0x00000004 /* DOE Error */ +#define PCI_DOE_STATUS_DATA_OBJECT_READY 0x80000000 /* Data Object Ready */ +#define PCI_DOE_WRITE 0x10 /* DOE Write Data Mailbox Register */ +#define PCI_DOE_READ 0x14 /* DOE Read Data Mailbox Register */ + +/* DOE Data Object - note not actually registers */ +#define PCI_DOE_DATA_OBJECT_HEADER_1_VID 0x0000ffff +#define PCI_DOE_DATA_OBJECT_HEADER_1_TYPE 0x00ff0000 +#define PCI_DOE_DATA_OBJECT_HEADER_2_LENGTH 0x0003ffff + +#define PCI_DOE_DATA_OBJECT_DISC_REQ_3_INDEX 0x000000ff +#define PCI_DOE_DATA_OBJECT_DISC_RSP_3_VID 0x0000ffff +#define PCI_DOE_DATA_OBJECT_DISC_RSP_3_PROTOCOL 0x00ff0000 +#define PCI_DOE_DATA_OBJECT_DISC_RSP_3_NEXT_INDEX 0xff000000 + #endif /* LINUX_PCI_REGS_H */