From patchwork Mon Sep 26 21:57:10 2022
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 12989474
From: ira.weiny@intel.com
To: Dan Williams
Cc: Ira Weiny, Bjorn Helgaas, Greg Kroah-Hartman, Jonathan Cameron,
    Alison Schofield, Vishal Verma, Ben Widawsky,
    linux-cxl@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-pci@vger.kernel.org
Subject: [PATCH V3 1/2] PCI: Allow drivers to request exclusive config regions
Date: Mon, 26 Sep 2022 14:57:10 -0700
Message-Id: <20220926215711.2893286-2-ira.weiny@intel.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20220926215711.2893286-1-ira.weiny@intel.com>
References: <20220926215711.2893286-1-ira.weiny@intel.com>

From: Ira Weiny

PCI config space access from user space has traditionally been
unrestricted, with writes being an understood risk for device
operation.

Unfortunately, device breakage or odd behavior caused by config writes
leaves no indicator behind, which can leave driver writers confused
when evaluating failures.  This is especially true with the new PCIe
Data Object Exchange (DOE) mailbox protocol, where backdoor shenanigans
from user space, through things such as vendor defined protocols, may
affect device operation without complete breakage.

A prior proposal restricted reads and writes completely. [1]

Greg and Bjorn pointed out that that proposal is flawed for a couple of
reasons.
First, lspci should always be allowed and should not interfere with any
device operation.  Second, setpci is a valuable tool that is sometimes
necessary, and it should not be completely restricted. [2]  Finally,
methods already exist to fully lock out device access if that is
required.

Even though access should not be restricted, it would be nice for
driver writers to be able to flag critical parts of config space such
that interference from user space can be detected.

Introduce pci_request_config_region_exclusive() to mark exclusive
config regions.  Such regions trigger a warning and taint the kernel
when written from user space.  Create pci_warn_once() to keep repeated
writes from spamming the log.

[1] https://lore.kernel.org/all/161663543465.1867664.5674061943008380442.stgit@dwillia2-desk3.amr.corp.intel.com/
[2] https://lore.kernel.org/all/YF8NGeGv9vYcMfTV@kroah.com/

Cc: Bjorn Helgaas
Cc: Greg Kroah-Hartman
Reviewed-by: Jonathan Cameron
Suggested-by: Dan Williams
Signed-off-by: Ira Weiny
Acked-by: Greg Kroah-Hartman
Acked-by: Bjorn Helgaas

---
Changes from V2:
	Greg: s/config_resource/driver_exclusive_resource/
	Jonathan: don't reformat the pci_*() messages

Changes from V1:
	Greg and Dan: Create and use pci_warn_once() to keep the user
		from spamming the log
	Dan: Clarify the warn message

Changes from [1]:
	Change the name to pci_request_config_region_exclusive()
	Don't flag reads at all; allow writes with a warn and a taint
		of the kernel
	Update the commit message
	Forward port to the latest tree
---
 drivers/pci/pci-sysfs.c |  7 +++++++
 drivers/pci/probe.c     |  6 ++++++
 include/linux/ioport.h  |  2 ++
 include/linux/pci.h     | 17 +++++++++++++++++
 kernel/resource.c       | 13 ++++++++-----
 5 files changed, 40 insertions(+), 5 deletions(-)

diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c
index fc804e08e3cb..7354e135e646 100644
--- a/drivers/pci/pci-sysfs.c
+++ b/drivers/pci/pci-sysfs.c
@@ -755,6 +755,13 @@ static ssize_t pci_write_config(struct file *filp, struct kobject *kobj,
 	if (ret)
 		return ret;
 
+	if (resource_is_exclusive(&dev->driver_exclusive_resource, off,
+				  count)) {
+		pci_warn_once(dev, "%s: Unexpected write to kernel-exclusive config offset %llx",
+			      current->comm, off);
+		add_taint(TAINT_USER, LOCKDEP_STILL_OK);
+	}
+
 	if (off > dev->cfg_size)
 		return 0;
 	if (off + count > dev->cfg_size) {

diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
index c5286b027f00..e16ce452cc1e 100644
--- a/drivers/pci/probe.c
+++ b/drivers/pci/probe.c
@@ -2306,6 +2306,12 @@ struct pci_dev *pci_alloc_dev(struct pci_bus *bus)
 	INIT_LIST_HEAD(&dev->bus_list);
 	dev->dev.type = &pci_dev_type;
 	dev->bus = pci_bus_get(bus);
+	dev->driver_exclusive_resource = (struct resource) {
+		.name = "PCI Exclusive",
+		.start = 0,
+		.end = -1,
+	};
+
 #ifdef CONFIG_PCI_MSI
 	raw_spin_lock_init(&dev->msi_lock);
 #endif

diff --git a/include/linux/ioport.h b/include/linux/ioport.h
index 616b683563a9..cf1de55d14da 100644
--- a/include/linux/ioport.h
+++ b/include/linux/ioport.h
@@ -312,6 +312,8 @@ extern void __devm_release_region(struct device *dev, struct resource *parent,
 				  resource_size_t start, resource_size_t n);
 extern int iomem_map_sanity_check(resource_size_t addr, unsigned long size);
 extern bool iomem_is_exclusive(u64 addr);
+extern bool resource_is_exclusive(struct resource *resource, u64 addr,
+				  resource_size_t size);
 
 extern int
 walk_system_ram_range(unsigned long start_pfn, unsigned long nr_pages,

diff --git a/include/linux/pci.h b/include/linux/pci.h
index 060af91bafcd..d75347114307 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -409,6 +409,7 @@ struct pci_dev {
 	 */
 	unsigned int	irq;
 	struct resource resource[DEVICE_COUNT_RESOURCE]; /* I/O and memory regions + expansion ROMs */
+	struct resource driver_exclusive_resource;	 /* driver exclusive resource ranges */
 
 	bool		match_driver;		/* Skip attaching driver */
 
@@ -1406,6 +1407,21 @@ int pci_request_selected_regions(struct pci_dev *, int, const char *);
 int pci_request_selected_regions_exclusive(struct pci_dev *, int, const char *);
 void pci_release_selected_regions(struct pci_dev *, int);
 
+static inline __must_check struct resource *
+pci_request_config_region_exclusive(struct pci_dev *pdev, unsigned int offset,
+				    unsigned int len, const char *name)
+{
+	return __request_region(&pdev->driver_exclusive_resource, offset, len,
+				name, IORESOURCE_EXCLUSIVE);
+}
+
+static inline void pci_release_config_region(struct pci_dev *pdev,
+					     unsigned int offset,
+					     unsigned int len)
+{
+	__release_region(&pdev->driver_exclusive_resource, offset, len);
+}
+
 /* drivers/pci/bus.c */
 void pci_add_resource(struct list_head *resources, struct resource *res);
 void pci_add_resource_offset(struct list_head *resources, struct resource *res,
@@ -2481,6 +2497,7 @@ void pci_uevent_ers(struct pci_dev *pdev, enum  pci_ers_result err_type);
 #define pci_crit(pdev, fmt, arg...)	dev_crit(&(pdev)->dev, fmt, ##arg)
 #define pci_err(pdev, fmt, arg...)	dev_err(&(pdev)->dev, fmt, ##arg)
 #define pci_warn(pdev, fmt, arg...)	dev_warn(&(pdev)->dev, fmt, ##arg)
+#define pci_warn_once(pdev, fmt, arg...)	dev_warn_once(&(pdev)->dev, fmt, ##arg)
 #define pci_notice(pdev, fmt, arg...)	dev_notice(&(pdev)->dev, fmt, ##arg)
 #define pci_info(pdev, fmt, arg...)	dev_info(&(pdev)->dev, fmt, ##arg)
 #define pci_dbg(pdev, fmt, arg...)	dev_dbg(&(pdev)->dev, fmt, ##arg)
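[Editor's illustration, not part of the patch: to make the calling
convention of the two helpers above concrete, here is a minimal,
hypothetical driver sketch.  The capability offset and length defines
are invented for the example.]

	/*
	 * Hypothetical usage sketch: a driver that owns a capability
	 * block at EXAMPLE_CAP_OFF marks it kernel-exclusive at probe
	 * time so user-space config writes to it warn and taint, and
	 * releases the range again on remove.
	 */
	#define EXAMPLE_CAP_OFF	0x100	/* invented capability offset */
	#define EXAMPLE_CAP_LEN	0x18	/* invented register block size */

	static int example_probe(struct pci_dev *pdev,
				 const struct pci_device_id *id)
	{
		if (!pci_request_config_region_exclusive(pdev,
							 EXAMPLE_CAP_OFF,
							 EXAMPLE_CAP_LEN,
							 dev_name(&pdev->dev)))
			pci_warn(pdev, "could not mark config region exclusive\n");

		/* ... the rest of probe ... */
		return 0;
	}

	static void example_remove(struct pci_dev *pdev)
	{
		pci_release_config_region(pdev, EXAMPLE_CAP_OFF,
					  EXAMPLE_CAP_LEN);
	}

[Note the __must_check annotation: __request_region() returns NULL when
the range is already claimed, which here is only worth a warning since
the reservation is advisory rather than access-blocking.]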
diff --git a/kernel/resource.c b/kernel/resource.c
index 4c5e80b92f2f..82ed54cd1f0d 100644
--- a/kernel/resource.c
+++ b/kernel/resource.c
@@ -1707,18 +1707,15 @@ static int strict_iomem_checks;
  *
  * Returns true if exclusive to the kernel, otherwise returns false.
  */
-bool iomem_is_exclusive(u64 addr)
+bool resource_is_exclusive(struct resource *root, u64 addr, resource_size_t size)
 {
 	const unsigned int exclusive_system_ram = IORESOURCE_SYSTEM_RAM |
 						  IORESOURCE_EXCLUSIVE;
 	bool skip_children = false, err = false;
-	int size = PAGE_SIZE;
 	struct resource *p;
 
-	addr = addr & PAGE_MASK;
-
 	read_lock(&resource_lock);
-	for_each_resource(&iomem_resource, p, skip_children) {
+	for_each_resource(root, p, skip_children) {
 		if (p->start >= addr + size)
 			break;
 		if (p->end < addr) {
@@ -1757,6 +1754,12 @@ bool iomem_is_exclusive(u64 addr)
 	return err;
 }
 
+bool iomem_is_exclusive(u64 addr)
+{
+	return resource_is_exclusive(&iomem_resource, addr & PAGE_MASK,
+				     PAGE_SIZE);
+}
+
 struct resource_entry *resource_list_create_entry(struct resource *res,
 						  size_t extra_size)
 {
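[Editor's illustration, not part of the patch: the device address and
offset below are hypothetical.  With a region reserved as above, a
user-space write such as

	# setpci -s 0000:35:00.0 0x100.l=0x0

still goes through, but the first such write emits a one-time warning
built from the pci_warn_once() format string in the pci-sysfs.c hunk,
roughly

	cxl_pci 0000:35:00.0: setpci: Unexpected write to kernel-exclusive config offset 100

and taints the kernel with TAINT_USER, leaving a breadcrumb for driver
writers triaging odd device behavior.]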
From patchwork Mon Sep 26 21:57:11 2022
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 12989475
From: ira.weiny@intel.com
To: Dan Williams
Cc: Ira Weiny, Alison Schofield, Vishal Verma, Ben Widawsky,
    linux-cxl@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-pci@vger.kernel.org
Subject: [PATCH V3 2/2] cxl/doe: Request exclusive DOE access
Date: Mon, 26 Sep 2022 14:57:11 -0700
Message-Id: <20220926215711.2893286-3-ira.weiny@intel.com>
X-Mailer: git-send-email 2.37.2
In-Reply-To: <20220926215711.2893286-1-ira.weiny@intel.com>
References: <20220926215711.2893286-1-ira.weiny@intel.com>

From: Ira Weiny

The PCIe Data Object Exchange (DOE) mailbox is a protocol run over
configuration cycles.  It assumes a single initiator at a time.  While
the kernel has control of the mailbox, writes from user space could
interfere with the kernel's access.

Mark each DOE mailbox's config space exclusive as the mailboxes are
enumerated by the CXL driver.

Signed-off-by: Ira Weiny
Reviewed-by: Jonathan Cameron

---
Changes from V2:
	Jonathan: s/PCI_DOE_CAP_SIZE/PCI_DOE_CAP_SIZEOF
		  Set PCI_DOE_CAP_SIZEOF directly
---
 drivers/cxl/pci.c             | 5 +++++
 include/uapi/linux/pci_regs.h | 1 +
 2 files changed, 6 insertions(+)

diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
index faeb5d9d7a7a..621a0522b554 100644
--- a/drivers/cxl/pci.c
+++ b/drivers/cxl/pci.c
@@ -418,6 +418,11 @@ static void devm_cxl_pci_create_doe(struct cxl_dev_state *cxlds)
 			continue;
 		}
 
+		if (!pci_request_config_region_exclusive(pdev, off,
+							 PCI_DOE_CAP_SIZEOF,
+							 dev_name(dev)))
+			pci_err(pdev, "Failed to exclude DOE registers\n");
+
 		if (xa_insert(&cxlds->doe_mbs, off, doe_mb, GFP_KERNEL)) {
 			dev_err(dev, "xa_insert failed to insert MB @ %x\n",
 				off);

diff --git a/include/uapi/linux/pci_regs.h b/include/uapi/linux/pci_regs.h
index 57b8e2ffb1dd..82a03ea954af 100644
--- a/include/uapi/linux/pci_regs.h
+++ b/include/uapi/linux/pci_regs.h
@@ -1119,6 +1119,7 @@
 #define  PCI_DOE_STATUS_DATA_OBJECT_READY	0x80000000  /* Data Object Ready */
 #define PCI_DOE_WRITE		0x10	/* DOE Write Data Mailbox Register */
 #define PCI_DOE_READ		0x14	/* DOE Read Data Mailbox Register */
+#define PCI_DOE_CAP_SIZEOF	0x18	/* Size of DOE register block */
 
 /* DOE Data Object - note not actually registers */
 #define  PCI_DOE_DATA_OBJECT_HEADER_1_VID		0x0000ffff
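[Editor's illustration, not part of the series: the same pattern
generalizes to any driver that enumerates DOE mailboxes.  This sketch
mirrors the devm_cxl_pci_create_doe() hunk above and assumes the
existing PCI_EXT_CAP_ID_DOE define from pci_regs.h.]

	/*
	 * Illustrative sketch: walk every DOE extended capability on a
	 * device and mark its register block kernel-exclusive, as the
	 * CXL driver does for each mailbox it enumerates.
	 */
	static void example_reserve_doe_regions(struct pci_dev *pdev)
	{
		u16 off = 0;

		while ((off = pci_find_next_ext_capability(pdev, off,
							   PCI_EXT_CAP_ID_DOE)))
			if (!pci_request_config_region_exclusive(pdev, off,
								 PCI_DOE_CAP_SIZEOF,
								 dev_name(&pdev->dev)))
				pci_err(pdev, "Failed to exclude DOE registers\n");
	}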