From patchwork Fri Apr 11 10:54:07 2025
X-Patchwork-Submitter: Roger Pau Monné
X-Patchwork-Id: 14048086
From: Roger Pau Monne
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper
Subject: [PATCH 1/5] x86/mm: account for the offset when performing subpage r/o MMIO access
Date: Fri, 11 Apr 2025 12:54:07 +0200
Message-ID: <20250411105411.22334-2-roger.pau@citrix.com>
In-Reply-To: <20250411105411.22334-1-roger.pau@citrix.com>
References: <20250411105411.22334-1-roger.pau@citrix.com>

The current logic in subpage_mmio_write_emulate() doesn't take the page
offset into account, and always performs the write at offset 0 (the start
of the page).  Fix this by adding the offset before performing the write.

Fixes: 8847d6e23f97 ('x86/mm: add API for marking only part of a MMIO page read only')
Signed-off-by: Roger Pau Monné
Reviewed-by: Andrew Cooper
---
 xen/arch/x86/mm.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 4fecd37aeca0..1cf236516789 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5131,6 +5131,7 @@ static void subpage_mmio_write_emulate(
         return;
     }
 
+    addr += offset;
     switch ( len )
     {
     case 1:
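For illustration, a minimal standalone sketch (plain C, not Xen code; the
function and parameter names are made up) of what the fix changes: the
emulated write has to land at the mapped page base plus the intra-page
offset, rather than at the start of the page.

#include <stdint.h>

/* Simplified model of the subpage write path. */
static void model_subpage_write(volatile uint8_t *mapped_page,
                                unsigned int offset, uint64_t data,
                                unsigned int len)
{
    /* The fix: target base + offset, not the page base. */
    volatile uint8_t *addr = mapped_page + offset;

    switch ( len )
    {
    case 1: *(volatile uint8_t  *)addr = data; break;
    case 2: *(volatile uint16_t *)addr = data; break;
    case 4: *(volatile uint32_t *)addr = data; break;
    case 8: *(volatile uint64_t *)addr = data; break;
    }
}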
From patchwork Fri Apr 11 10:54:08 2025
X-Patchwork-Submitter: Roger Pau Monné
X-Patchwork-Id: 14048088
From: Roger Pau Monne
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper, Anthony PERARD, Michal Orzel, Julien Grall, Stefano Stabellini
Subject: [PATCH 2/5] xen/io: provide helpers for multi size MMIO accesses
Date: Fri, 11 Apr 2025 12:54:08 +0200
Message-ID: <20250411105411.22334-3-roger.pau@citrix.com>
In-Reply-To: <20250411105411.22334-1-roger.pau@citrix.com>
References: <20250411105411.22334-1-roger.pau@citrix.com>

Several handlers need to access an MMIO region using 1, 2, 4 or 8 byte
accesses.  So far this has been open-coded in each function.  Instead
provide helpers that encapsulate such accesses.

Since the added helpers are not architecture specific, introduce a new
generic io.h header.

No functional change intended.
Signed-off-by: Roger Pau Monné
---
 xen/arch/x86/hvm/vmsi.c | 47 ++----------------------------
 xen/arch/x86/mm.c       | 32 +++++----------------
 xen/drivers/vpci/msix.c | 47 ++----------------------------
 xen/include/xen/io.h    | 63 +++++++++++++++++++++++++++++++++++++++++
 4 files changed, 76 insertions(+), 113 deletions(-)
 create mode 100644 xen/include/xen/io.h

diff --git a/xen/arch/x86/hvm/vmsi.c b/xen/arch/x86/hvm/vmsi.c
index fd83abb929ec..61b89834d97d 100644
--- a/xen/arch/x86/hvm/vmsi.c
+++ b/xen/arch/x86/hvm/vmsi.c
@@ -24,6 +24,7 @@
  * Will be merged it with virtual IOAPIC logic, since most is the same
  */
 
+#include <xen/io.h>
 #include
 #include
 #include
@@ -304,28 +305,7 @@ static void adjacent_read(
 
     hwaddr = fix_to_virt(fixmap_idx) + PAGE_OFFSET(address);
 
-    switch ( len )
-    {
-    case 1:
-        *pval = readb(hwaddr);
-        break;
-
-    case 2:
-        *pval = readw(hwaddr);
-        break;
-
-    case 4:
-        *pval = readl(hwaddr);
-        break;
-
-    case 8:
-        *pval = readq(hwaddr);
-        break;
-
-    default:
-        ASSERT_UNREACHABLE();
-        break;
-    }
+    *pval = read_mmio(hwaddr, len);
 }
 
 static void adjacent_write(
@@ -344,28 +324,7 @@ static void adjacent_write(
 
     hwaddr = fix_to_virt(fixmap_idx) + PAGE_OFFSET(address);
 
-    switch ( len )
-    {
-    case 1:
-        writeb(val, hwaddr);
-        break;
-
-    case 2:
-        writew(val, hwaddr);
-        break;
-
-    case 4:
-        writel(val, hwaddr);
-        break;
-
-    case 8:
-        writeq(val, hwaddr);
-        break;
-
-    default:
-        ASSERT_UNREACHABLE();
-        break;
-    }
+    write_mmio(hwaddr, val, len);
 }
 
 static int cf_check msixtbl_read(
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 1cf236516789..989e62e7ce6f 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -95,6 +95,7 @@
 #include
 #include
 #include
+#include <xen/io.h>
 #include
 #include
 #include
@@ -116,7 +117,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
@@ -5102,7 +5102,7 @@ static void __iomem *subpage_mmio_map_page(
 static void subpage_mmio_write_emulate(
     mfn_t mfn,
     unsigned int offset,
-    const void *data,
+    unsigned long data,
     unsigned int len)
 {
     struct subpage_ro_range *entry;
@@ -5115,7 +5115,6 @@ static void subpage_mmio_write_emulate(
 
     if ( test_bit(offset / MMIO_RO_SUBPAGE_GRAN, entry->ro_elems) )
     {
- write_ignored:
         gprintk(XENLOG_WARNING,
                 "ignoring write to R/O MMIO 0x%"PRI_mfn"%03x len %u\n",
                 mfn_x(mfn), offset, len);
@@ -5131,26 +5130,7 @@ static void subpage_mmio_write_emulate(
         return;
     }
 
-    addr += offset;
-    switch ( len )
-    {
-    case 1:
-        writeb(*(const uint8_t*)data, addr);
-        break;
-    case 2:
-        writew(*(const uint16_t*)data, addr);
-        break;
-    case 4:
-        writel(*(const uint32_t*)data, addr);
-        break;
-    case 8:
-        writeq(*(const uint64_t*)data, addr);
-        break;
-    default:
-        /* mmio_ro_emulated_write() already validated the size */
-        ASSERT_UNREACHABLE();
-        goto write_ignored;
-    }
+    write_mmio(addr + offset, data, len);
 }
 
 #ifdef CONFIG_HVM
@@ -5185,18 +5165,20 @@ int cf_check mmio_ro_emulated_write(
     struct x86_emulate_ctxt *ctxt)
 {
     struct mmio_ro_emulate_ctxt *mmio_ro_ctxt = ctxt->data;
+    unsigned long data = 0;
 
     /* Only allow naturally-aligned stores at the original %cr2 address. */
     if ( ((bytes | offset) & (bytes - 1)) || !bytes ||
-         offset != mmio_ro_ctxt->cr2 )
+         offset != mmio_ro_ctxt->cr2 || bytes > sizeof(data) )
     {
         gdprintk(XENLOG_WARNING, "bad access (cr2=%lx, addr=%lx, bytes=%u)\n",
                  mmio_ro_ctxt->cr2, offset, bytes);
         return X86EMUL_UNHANDLEABLE;
     }
 
+    memcpy(&data, p_data, bytes);
     subpage_mmio_write_emulate(mmio_ro_ctxt->mfn, PAGE_OFFSET(offset),
-                               p_data, bytes);
+                               data, bytes);
 
     return X86EMUL_OKAY;
 }
diff --git a/xen/drivers/vpci/msix.c b/xen/drivers/vpci/msix.c
index 6bd8c55bb48e..6455f2a03a01 100644
--- a/xen/drivers/vpci/msix.c
+++ b/xen/drivers/vpci/msix.c
@@ -17,6 +17,7 @@
  * License along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <xen/io.h>
 #include
 #include
 
@@ -344,28 +345,7 @@ static int adjacent_read(const struct domain *d, const struct vpci_msix *msix,
         return X86EMUL_OKAY;
     }
 
-    switch ( len )
-    {
-    case 1:
-        *data = readb(mem + PAGE_OFFSET(addr));
-        break;
-
-    case 2:
-        *data = readw(mem + PAGE_OFFSET(addr));
-        break;
-
-    case 4:
-        *data = readl(mem + PAGE_OFFSET(addr));
-        break;
-
-    case 8:
-        *data = readq(mem + PAGE_OFFSET(addr));
-        break;
-
-    default:
-        ASSERT_UNREACHABLE();
-        break;
-    }
+    *data = read_mmio(mem + PAGE_OFFSET(addr), len);
 
     spin_unlock(&vpci->lock);
 
     return X86EMUL_OKAY;
@@ -493,28 +473,7 @@ static int adjacent_write(const struct domain *d, const struct vpci_msix *msix,
         return X86EMUL_OKAY;
     }
 
-    switch ( len )
-    {
-    case 1:
-        writeb(data, mem + PAGE_OFFSET(addr));
-        break;
-
-    case 2:
-        writew(data, mem + PAGE_OFFSET(addr));
-        break;
-
-    case 4:
-        writel(data, mem + PAGE_OFFSET(addr));
-        break;
-
-    case 8:
-        writeq(data, mem + PAGE_OFFSET(addr));
-        break;
-
-    default:
-        ASSERT_UNREACHABLE();
-        break;
-    }
+    write_mmio(mem + PAGE_OFFSET(addr), data, len);
 
     spin_unlock(&vpci->lock);
 
     return X86EMUL_OKAY;
diff --git a/xen/include/xen/io.h b/xen/include/xen/io.h
new file mode 100644
index 000000000000..5c360ce9dee2
--- /dev/null
+++ b/xen/include/xen/io.h
@@ -0,0 +1,63 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Generic helpers for doing MMIO accesses.
+ *
+ * Copyright (c) 2025 Cloud Software Group
+ */
+#ifndef XEN_IO_H
+#define XEN_IO_H
+
+#include
+
+#include
+
+static inline uint64_t read_mmio(const volatile void __iomem *mem,
+                                 unsigned int size)
+{
+    switch ( size )
+    {
+    case 1:
+        return readb(mem);
+
+    case 2:
+        return readw(mem);
+
+    case 4:
+        return readl(mem);
+
+    case 8:
+        return readq(mem);
+    }
+
+    ASSERT_UNREACHABLE();
+    return ~0UL;
+}
+
+static inline void write_mmio(volatile void __iomem *mem, uint64_t data,
+                              unsigned int size)
+{
+    switch ( size )
+    {
+    case 1:
+        writeb(data, mem);
+        break;
+
+    case 2:
+        writew(data, mem);
+        break;
+
+    case 4:
+        writel(data, mem);
+        break;
+
+    case 8:
+        writeq(data, mem);
+        break;
+
+    default:
+        ASSERT_UNREACHABLE();
+        break;
+    }
+}
+
+#endif /* XEN_IO_H */
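As a usage sketch, a handler converted to the new helpers collapses its
per-size switch into a single call, as the call sites touched by this patch
do.  Only read_mmio()/write_mmio() below come from the new header; the
handler itself is illustrative and not part of the patch.

#include <xen/types.h>
#include <xen/io.h>

/* Illustrative only: a hypothetical accessor built on the new helpers. */
static void example_mmio_access(volatile void __iomem *base,
                                unsigned int offset, unsigned int len,
                                uint64_t *val)
{
    /* One call replaces the open-coded readb/readw/readl/readq switch. */
    *val = read_mmio(base + offset, len);

    /* Likewise for the write direction. */
    write_mmio(base + offset, *val, len);
}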
From patchwork Fri Apr 11 10:54:09 2025
X-Patchwork-Submitter: Roger Pau Monné
X-Patchwork-Id: 14048085
From: Roger Pau Monne
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper
Subject: [PATCH 3/5] x86/hvm: fix handling of accesses to partial r/o MMIO pages
Date: Fri, 11 Apr 2025 12:54:09 +0200
Message-ID: <20250411105411.22334-4-roger.pau@citrix.com>
In-Reply-To: <20250411105411.22334-1-roger.pau@citrix.com>
References: <20250411105411.22334-1-roger.pau@citrix.com>

The current logic to handle accesses to partially read-only MMIO pages is
based on the (now removed) logic used to handle accesses to the r/o MMCFG
region(s) for PVH v1 dom0.  However that has issues when running on AMD
hardware, as in that case the guest linear address that triggered the fault
is not provided as part of the VM exit.  This caused
mmio_ro_emulated_write() to always fail before calling
subpage_mmio_write_emulate() when running on AMD and called from an HVM
context.

Take a different approach and convert the handling of partial read-only
MMIO page accesses into an HVM MMIO ops handler, as that's the more natural
way to handle this kind of emulation for HVM domains.

This allows getting rid of hvm_emulate_one_mmio() and its single call site
in hvm_hap_nested_page_fault().

Note a small adjustment is needed to the `pf-fixup` dom0 PVH logic: avoid
attempting to fix up faults resulting from accesses to read-only MMIO
regions, as handling of those accesses is now done by handle_mmio().
Fixes: 33c19df9a5a0 ('x86/PCI: intercept accesses to RO MMIO from dom0s in HVM containers')
Signed-off-by: Roger Pau Monné
---
The fixes tag is maybe a bit wonky, it's either this or:

8847d6e23f97 ('x86/mm: add API for marking only part of a MMIO page read only')

However the addition of subpage r/o access handling to the existing
mmio_ro_emulated_write() function was done based on the assumption that the
current code was working - which turned out to not be the case for AMD,
hence my preference for blaming the commit that actually introduced the
broken logic.
---
 xen/arch/x86/hvm/emulate.c             | 47 +-------------
 xen/arch/x86/hvm/hvm.c                 | 89 +++++++++++++++++++++++---
 xen/arch/x86/include/asm/hvm/emulate.h |  1 -
 xen/arch/x86/include/asm/mm.h          | 12 ++++
 xen/arch/x86/mm.c                      | 37 +----------
 5 files changed, 96 insertions(+), 90 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 9fff1b82f7c6..ed888f0b49d3 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -370,7 +370,8 @@ static int hvmemul_do_io(
         /* If there is no suitable backing DM, just ignore accesses */
         if ( !s )
         {
-            if ( is_mmio && is_hardware_domain(currd) )
+            if ( is_mmio && is_hardware_domain(currd) &&
+                 !rangeset_contains_singleton(mmio_ro_ranges, PFN_DOWN(addr)) )
             {
                 /*
                  * PVH dom0 is likely missing MMIO mappings on the p2m, due to
@@ -2856,50 +2857,6 @@ int hvm_emulate_one(
     return _hvm_emulate_one(hvmemul_ctxt, &hvm_emulate_ops, completion);
 }
 
-int hvm_emulate_one_mmio(unsigned long mfn, unsigned long gla)
-{
-    static const struct x86_emulate_ops hvm_ro_emulate_ops_mmio = {
-        .read = x86emul_unhandleable_rw,
-        .insn_fetch = hvmemul_insn_fetch,
-        .write = mmio_ro_emulated_write,
-        .validate = hvmemul_validate,
-    };
-    struct mmio_ro_emulate_ctxt mmio_ro_ctxt = { .cr2 = gla, .mfn = _mfn(mfn) };
-    struct hvm_emulate_ctxt ctxt;
-    unsigned int seg, bdf;
-    int rc;
-
-    if ( pci_ro_mmcfg_decode(mfn, &seg, &bdf) )
-    {
-        /* Should be always handled by vPCI for PVH dom0. */
-        gdprintk(XENLOG_ERR, "unhandled MMCFG access for %pp\n",
-                 &PCI_SBDF(seg, bdf));
-        ASSERT_UNREACHABLE();
-        return X86EMUL_UNHANDLEABLE;
-    }
-
-    hvm_emulate_init_once(&ctxt, x86_insn_is_mem_write,
-                          guest_cpu_user_regs());
-    ctxt.ctxt.data = &mmio_ro_ctxt;
-
-    switch ( rc = _hvm_emulate_one(&ctxt, &hvm_ro_emulate_ops_mmio,
-                                   VIO_no_completion) )
-    {
-    case X86EMUL_UNHANDLEABLE:
-    case X86EMUL_UNIMPLEMENTED:
-        hvm_dump_emulation_state(XENLOG_G_WARNING, "r/o MMIO", &ctxt, rc);
-        break;
-    case X86EMUL_EXCEPTION:
-        hvm_inject_event(&ctxt.ctxt.event);
-        /* fallthrough */
-    default:
-        hvm_emulate_writeback(&ctxt);
-        break;
-    }
-
-    return rc;
-}
-
 void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
                               unsigned int errcode)
 {
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 6f1174c5127e..21f005b0947c 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -8,6 +8,7 @@
  */
 
 #include
+#include <xen/io.h>
 #include
 #include
 #include
@@ -35,7 +36,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
@@ -585,9 +585,81 @@ static int cf_check hvm_print_line(
     return X86EMUL_OKAY;
 }
 
+static int cf_check subpage_mmio_accept(struct vcpu *v, unsigned long addr)
+{
+    p2m_type_t t;
+    mfn_t mfn = get_gfn_query_unlocked(v->domain, addr, &t);
+
+    return !mfn_eq(mfn, INVALID_MFN) && t == p2m_mmio_direct &&
+           !!subpage_mmio_find_page(mfn);
+}
+
+static int cf_check subpage_mmio_read(
+    struct vcpu *v, unsigned long addr, unsigned int len, unsigned long *data)
+{
+    struct domain *d = v->domain;
+    p2m_type_t t;
+    mfn_t mfn = get_gfn_query(d, addr, &t);
+    struct subpage_ro_range *entry;
+    volatile void __iomem *mem;
+
+    *data = ~0UL;
+
+    if ( mfn_eq(mfn, INVALID_MFN) || t != p2m_mmio_direct )
+    {
+        put_gfn(d, addr);
+        return X86EMUL_RETRY;
+    }
+
+    entry = subpage_mmio_find_page(mfn);
+    if ( !entry )
+    {
+        put_gfn(d, addr);
+        return X86EMUL_RETRY;
+    }
+
+    mem = subpage_mmio_map_page(entry);
+    if ( !mem )
+    {
+        put_gfn(d, addr);
+        gprintk(XENLOG_ERR, "Failed to map page for MMIO read at %#lx\n",
+                mfn_to_maddr(mfn) + PAGE_OFFSET(addr));
+        return X86EMUL_OKAY;
+    }
+
+    *data = read_mmio(mem + PAGE_OFFSET(addr), len);
+
+    put_gfn(d, addr);
+    return X86EMUL_OKAY;
+}
+
+static int cf_check subpage_mmio_write(
+    struct vcpu *v, unsigned long addr, unsigned int len, unsigned long data)
+{
+    struct domain *d = v->domain;
+    p2m_type_t t;
+    mfn_t mfn = get_gfn_query(d, addr, &t);
+
+    if ( mfn_eq(mfn, INVALID_MFN) || t != p2m_mmio_direct )
+    {
+        put_gfn(d, addr);
+        return X86EMUL_RETRY;
+    }
+
+    subpage_mmio_write_emulate(mfn, PAGE_OFFSET(addr), data, len);
+
+    put_gfn(d, addr);
+    return X86EMUL_OKAY;
+}
+
 int hvm_domain_initialise(struct domain *d,
                           const struct xen_domctl_createdomain *config)
 {
+    static const struct hvm_mmio_ops subpage_mmio_ops = {
+        .check = subpage_mmio_accept,
+        .read = subpage_mmio_read,
+        .write = subpage_mmio_write,
+    };
     unsigned int nr_gsis;
     int rc;
 
@@ -692,6 +764,9 @@ int hvm_domain_initialise(struct domain *d,
 
     register_portio_handler(d, XEN_HVM_DEBUGCONS_IOPORT, 1, hvm_print_line);
 
+    /* Handler for r/o MMIO subpage accesses. */
+    register_mmio_handler(d, &subpage_mmio_ops);
+
     if ( hvm_tsc_scaling_supported )
         d->arch.hvm.tsc_scaling_ratio = hvm_default_tsc_scaling_ratio;
 
@@ -1981,7 +2056,9 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
      */
    if ( (p2mt == p2m_mmio_dm) ||
         (npfec.write_access &&
-         (p2m_is_discard_write(p2mt) || (p2mt == p2m_ioreq_server))) )
+         (p2m_is_discard_write(p2mt) || (p2mt == p2m_ioreq_server) ||
+          /* MMIO entries can be r/o if the target mfn is in mmio_ro_ranges. */
+          (p2mt == p2m_mmio_direct))) )
    {
        if ( !handle_mmio_with_translation(gla, gfn, npfec) )
            hvm_inject_hw_exception(X86_EXC_GP, 0);
@@ -2033,14 +2110,6 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
        goto out_put_gfn;
    }
 
-    if ( (p2mt == p2m_mmio_direct) && npfec.write_access && npfec.present &&
-         (is_hardware_domain(currd) || subpage_mmio_write_accept(mfn, gla)) &&
-         (hvm_emulate_one_mmio(mfn_x(mfn), gla) == X86EMUL_OKAY) )
-    {
-        rc = 1;
-        goto out_put_gfn;
-    }
-
    /* If we fell through, the vcpu will retry now that access restrictions have
     * been removed. It may fault again if the p2m entry type still requires so.
     * Otherwise, this is an error condition. */
diff --git a/xen/arch/x86/include/asm/hvm/emulate.h b/xen/arch/x86/include/asm/hvm/emulate.h
index c7a2d2a5be4e..178ac32e151f 100644
--- a/xen/arch/x86/include/asm/hvm/emulate.h
+++ b/xen/arch/x86/include/asm/hvm/emulate.h
@@ -86,7 +86,6 @@ void hvmemul_cancel(struct vcpu *v);
 struct segment_register *hvmemul_get_seg_reg(
     enum x86_segment seg,
     struct hvm_emulate_ctxt *hvmemul_ctxt);
-int hvm_emulate_one_mmio(unsigned long mfn, unsigned long gla);
 
 static inline bool handle_mmio(void)
 {
diff --git a/xen/arch/x86/include/asm/mm.h b/xen/arch/x86/include/asm/mm.h
index a1bc8cc27451..c2e9ef6e5023 100644
--- a/xen/arch/x86/include/asm/mm.h
+++ b/xen/arch/x86/include/asm/mm.h
@@ -554,6 +554,18 @@ int cf_check mmio_ro_emulated_write(
     enum x86_segment seg, unsigned long offset, void *p_data,
     unsigned int bytes, struct x86_emulate_ctxt *ctxt);
 
+/* r/o MMIO subpage access handlers. */
+struct subpage_ro_range {
+    struct list_head list;
+    mfn_t mfn;
+    void __iomem *mapped;
+    DECLARE_BITMAP(ro_elems, PAGE_SIZE / MMIO_RO_SUBPAGE_GRAN);
+};
+struct subpage_ro_range *subpage_mmio_find_page(mfn_t mfn);
+void __iomem *subpage_mmio_map_page(struct subpage_ro_range *entry);
+void subpage_mmio_write_emulate(
+    mfn_t mfn, unsigned int offset, unsigned long data, unsigned int len);
+
 int audit_adjust_pgtables(struct domain *d, int dir, int noisy);
 
 extern int pagefault_by_memadd(unsigned long addr, struct cpu_user_regs *regs);
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 989e62e7ce6f..f59c7816fba5 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -157,13 +157,6 @@ struct rangeset *__read_mostly mmio_ro_ranges;
 static uint32_t __ro_after_init base_disallow_mask;
 
 /* Handling sub-page read-only MMIO regions */
-struct subpage_ro_range {
-    struct list_head list;
-    mfn_t mfn;
-    void __iomem *mapped;
-    DECLARE_BITMAP(ro_elems, PAGE_SIZE / MMIO_RO_SUBPAGE_GRAN);
-};
-
 static LIST_HEAD_RO_AFTER_INIT(subpage_ro_ranges);
 static DEFINE_SPINLOCK(subpage_ro_lock);
 
@@ -4929,7 +4922,7 @@ long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     return rc;
 }
 
-static struct subpage_ro_range *subpage_mmio_find_page(mfn_t mfn)
+struct subpage_ro_range *subpage_mmio_find_page(mfn_t mfn)
 {
     struct subpage_ro_range *entry;
 
@@ -5074,7 +5067,7 @@ int __init subpage_mmio_ro_add(
     return rc;
 }
 
-static void __iomem *subpage_mmio_map_page(
+void __iomem *subpage_mmio_map_page(
     struct subpage_ro_range *entry)
 {
     void __iomem *mapped_page;
@@ -5099,7 +5092,7 @@ static void __iomem *subpage_mmio_map_page(
     return entry->mapped;
 }
 
-static void subpage_mmio_write_emulate(
+void subpage_mmio_write_emulate(
     mfn_t mfn,
     unsigned int offset,
     unsigned long data,
@@ -5133,30 +5126,6 @@ void subpage_mmio_write_emulate(
     write_mmio(addr + offset, data, len);
 }
 
-#ifdef CONFIG_HVM
-bool subpage_mmio_write_accept(mfn_t mfn, unsigned long gla)
-{
-    unsigned int offset = PAGE_OFFSET(gla);
-    const struct subpage_ro_range *entry;
-
-    entry = subpage_mmio_find_page(mfn);
-    if ( !entry )
-        return false;
-
-    if ( !test_bit(offset / MMIO_RO_SUBPAGE_GRAN, entry->ro_elems) )
-    {
-        /*
-         * We don't know the write size at this point yet, so it could be
-         * an unaligned write, but accept it here anyway and deal with it
-         * later.
-         */
-        return true;
-    }
-
-    return false;
-}
-#endif
-
 int cf_check mmio_ro_emulated_write(
     enum x86_segment seg,
     unsigned long offset,
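To make the AMD failure mode described above concrete, here is a small
standalone model (plain C, not Xen code) of the guard at the top of
mmio_ro_emulated_write(): the store must be naturally aligned and must hit
the linear address recorded at fault time.  When the VM exit provides no
guest linear address, as on AMD NPT faults, the cr2 comparison can never
succeed, so the old path always bailed out before reaching
subpage_mmio_write_emulate().

#include <stdbool.h>
#include <stddef.h>

/* Model of the acceptance check; 'cr2' is the linear address of the fault. */
static bool ro_mmio_write_acceptable(unsigned long cr2, unsigned long offset,
                                     unsigned int bytes)
{
    return bytes && bytes <= sizeof(unsigned long) &&
           !((bytes | offset) & (bytes - 1)) &&  /* power-of-two size, aligned */
           offset == cr2;                        /* must match the faulting VA */
}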
From patchwork Fri Apr 11 10:54:10 2025
X-Patchwork-Submitter: Roger Pau Monné
X-Patchwork-Id: 14048087
From: Roger Pau Monne
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper
Subject: [PATCH 4/5] x86/hvm: only register the r/o subpage ops when needed
Date: Fri, 11 Apr 2025 12:54:10 +0200
Message-ID: <20250411105411.22334-5-roger.pau@citrix.com>
In-Reply-To: <20250411105411.22334-1-roger.pau@citrix.com>
References: <20250411105411.22334-1-roger.pau@citrix.com>

MMIO operation handlers can be expensive to process, hence attempt to
register only those that will be needed by the domain.

Subpage r/o MMIO regions are added exclusively at boot; further limit their
addition to strictly before the initial domain gets created, so that by the
time initial domain creation happens Xen knows whether subpage handling is
required or not.  This allows registering the MMIO handler only when there
are subpage regions to handle.

Signed-off-by: Roger Pau Monné
Reviewed-by: Jan Beulich
---
Could possibly be part of the previous patch, but strictly speaking is an
improvement, as even before the previous patch the subpage r/o logic was
always called even when no subpage regions are registered.
---
 xen/arch/x86/hvm/hvm.c        |  5 +++--
 xen/arch/x86/include/asm/mm.h |  1 +
 xen/arch/x86/mm.c             | 16 ++++++++++++++++
 3 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 21f005b0947c..1a5dfc07813d 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -764,8 +764,9 @@ int hvm_domain_initialise(struct domain *d,
 
     register_portio_handler(d, XEN_HVM_DEBUGCONS_IOPORT, 1, hvm_print_line);
 
-    /* Handler for r/o MMIO subpage accesses. */
-    register_mmio_handler(d, &subpage_mmio_ops);
+    if ( subpage_ro_active() )
+        /* Handler for r/o MMIO subpage accesses. */
+        register_mmio_handler(d, &subpage_mmio_ops);
 
     if ( hvm_tsc_scaling_supported )
         d->arch.hvm.tsc_scaling_ratio = hvm_default_tsc_scaling_ratio;
 
diff --git a/xen/arch/x86/include/asm/mm.h b/xen/arch/x86/include/asm/mm.h
index c2e9ef6e5023..aeb8ebcf2d56 100644
--- a/xen/arch/x86/include/asm/mm.h
+++ b/xen/arch/x86/include/asm/mm.h
@@ -561,6 +561,7 @@ struct subpage_ro_range {
     void __iomem *mapped;
     DECLARE_BITMAP(ro_elems, PAGE_SIZE / MMIO_RO_SUBPAGE_GRAN);
 };
+bool subpage_ro_active(void);
 struct subpage_ro_range *subpage_mmio_find_page(mfn_t mfn);
 void __iomem *subpage_mmio_map_page(struct subpage_ro_range *entry);
 void subpage_mmio_write_emulate(
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index f59c7816fba5..3bc6304d831c 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4922,6 +4922,11 @@ long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     return rc;
 }
 
+bool subpage_ro_active(void)
+{
+    return !list_empty(&subpage_ro_ranges);
+}
+
 struct subpage_ro_range *subpage_mmio_find_page(mfn_t mfn)
 {
     struct subpage_ro_range *entry;
 
@@ -5011,6 +5016,17 @@ int __init subpage_mmio_ro_add(
          !IS_ALIGNED(size, MMIO_RO_SUBPAGE_GRAN) )
         return -EINVAL;
 
+    /*
+     * Force all r/o subregions to be registered before initial domain
+     * creation, so that the emulation handlers can be added only when there
+     * are pages registered.
+     */
+    if ( system_state >= SYS_STATE_smp_boot )
+    {
+        ASSERT_UNREACHABLE();
+        return -EILSEQ;
+    }
+
     if ( !size )
         return 0;
 
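For context on the new ordering requirement, a hedged sketch of a boot-time
caller follows; only subpage_mmio_ro_add(), PAGE_SIZE and the SYS_STATE
check come from the tree, while the "driver" and the address are
hypothetical.

#include <xen/init.h>
#include <asm/mm.h>

/* Hypothetical device whose first half page must be read-only for guests. */
static int __init example_protect_regs(void)
{
    paddr_t regs = 0xfed40000;  /* illustrative MMIO page address */

    /*
     * Must run before the initial domain is created (i.e. before
     * SYS_STATE_smp_boot), otherwise the check added above rejects the
     * registration with -EILSEQ.
     */
    return subpage_mmio_ro_add(regs, PAGE_SIZE / 2);
}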
From patchwork Fri Apr 11 10:54:11 2025
X-Patchwork-Submitter: Roger Pau Monné
X-Patchwork-Id: 14048084
From: Roger Pau Monne
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne, Jan Beulich, Andrew Cooper
Subject: [PATCH 5/5] x86/mm: move mmio_ro_emulated_write() to PV only file
Date: Fri, 11 Apr 2025 12:54:11 +0200
Message-ID: <20250411105411.22334-6-roger.pau@citrix.com>
In-Reply-To: <20250411105411.22334-1-roger.pau@citrix.com>
References: <20250411105411.22334-1-roger.pau@citrix.com>

mmio_ro_emulated_write() is only used in pv/ro-page-fault.c: move the
function to that file and make it static.

No functional change intended.
Signed-off-by: Roger Pau Monné
Reviewed-by: Jan Beulich
---
 xen/arch/x86/include/asm/mm.h   | 12 ------------
 xen/arch/x86/mm.c               | 26 -------------------------
 xen/arch/x86/pv/ro-page-fault.c | 34 +++++++++++++++++++++++++++++++++
 3 files changed, 34 insertions(+), 38 deletions(-)

diff --git a/xen/arch/x86/include/asm/mm.h b/xen/arch/x86/include/asm/mm.h
index aeb8ebcf2d56..2665daa6f74f 100644
--- a/xen/arch/x86/include/asm/mm.h
+++ b/xen/arch/x86/include/asm/mm.h
@@ -542,18 +542,6 @@ void memguard_unguard_stack(void *p);
 int subpage_mmio_ro_add(paddr_t start, size_t size);
 bool subpage_mmio_write_accept(mfn_t mfn, unsigned long gla);
 
-struct mmio_ro_emulate_ctxt {
-    unsigned long cr2;
-    /* Used only for mmcfg case */
-    unsigned int seg, bdf;
-    /* Used only for non-mmcfg case */
-    mfn_t mfn;
-};
-
-int cf_check mmio_ro_emulated_write(
-    enum x86_segment seg, unsigned long offset, void *p_data,
-    unsigned int bytes, struct x86_emulate_ctxt *ctxt);
-
 /* r/o MMIO subpage access handlers. */
 struct subpage_ro_range {
     struct list_head list;
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 3bc6304d831c..7c6a5fde5ebd 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5142,32 +5142,6 @@ void subpage_mmio_write_emulate(
     write_mmio(addr + offset, data, len);
 }
 
-int cf_check mmio_ro_emulated_write(
-    enum x86_segment seg,
-    unsigned long offset,
-    void *p_data,
-    unsigned int bytes,
-    struct x86_emulate_ctxt *ctxt)
-{
-    struct mmio_ro_emulate_ctxt *mmio_ro_ctxt = ctxt->data;
-    unsigned long data = 0;
-
-    /* Only allow naturally-aligned stores at the original %cr2 address. */
-    if ( ((bytes | offset) & (bytes - 1)) || !bytes ||
-         offset != mmio_ro_ctxt->cr2 || bytes > sizeof(data) )
-    {
-        gdprintk(XENLOG_WARNING, "bad access (cr2=%lx, addr=%lx, bytes=%u)\n",
-                 mmio_ro_ctxt->cr2, offset, bytes);
-        return X86EMUL_UNHANDLEABLE;
-    }
-
-    memcpy(&data, p_data, bytes);
-    subpage_mmio_write_emulate(mmio_ro_ctxt->mfn, PAGE_OFFSET(offset),
-                               data, bytes);
-
-    return X86EMUL_OKAY;
-}
-
 /*
  * For these PTE APIs, the caller must follow the alloc-map-unmap-free
  * lifecycle, which means explicitly mapping the PTE pages before accessing
diff --git a/xen/arch/x86/pv/ro-page-fault.c b/xen/arch/x86/pv/ro-page-fault.c
index 11b01c479e43..8b1c25e60c17 100644
--- a/xen/arch/x86/pv/ro-page-fault.c
+++ b/xen/arch/x86/pv/ro-page-fault.c
@@ -298,6 +298,14 @@ static int ptwr_do_page_fault(struct x86_emulate_ctxt *ctxt,
  * fault handling for read-only MMIO pages
  */
 
+struct mmio_ro_emulate_ctxt {
+    unsigned long cr2;
+    /* Used only for mmcfg case */
+    unsigned int seg, bdf;
+    /* Used only for non-mmcfg case */
+    mfn_t mfn;
+};
+
 static int cf_check mmcfg_intercept_write(
     enum x86_segment seg,
     unsigned long offset,
@@ -329,6 +337,32 @@ static int cf_check mmcfg_intercept_write(
     return X86EMUL_OKAY;
 }
 
+static int cf_check mmio_ro_emulated_write(
+    enum x86_segment seg,
+    unsigned long offset,
+    void *p_data,
+    unsigned int bytes,
+    struct x86_emulate_ctxt *ctxt)
+{
+    struct mmio_ro_emulate_ctxt *mmio_ro_ctxt = ctxt->data;
+    unsigned long data = 0;
+
+    /* Only allow naturally-aligned stores at the original %cr2 address. */
+    if ( ((bytes | offset) & (bytes - 1)) || !bytes ||
+         offset != mmio_ro_ctxt->cr2 || bytes > sizeof(data) )
+    {
+        gdprintk(XENLOG_WARNING, "bad access (cr2=%lx, addr=%lx, bytes=%u)\n",
+                 mmio_ro_ctxt->cr2, offset, bytes);
+        return X86EMUL_UNHANDLEABLE;
+    }
+
+    memcpy(&data, p_data, bytes);
+    subpage_mmio_write_emulate(mmio_ro_ctxt->mfn, PAGE_OFFSET(offset),
+                               data, bytes);
+
+    return X86EMUL_OKAY;
+}
+
 static const struct x86_emulate_ops mmio_ro_emulate_ops = {
     .read = x86emul_unhandleable_rw,
     .insn_fetch = ptwr_emulated_insn_fetch,