
[V4,3/5] hyperv/IOMMU: Enable swiotlb bounce buffer for Isolation VM

Message ID 20211205081815.129276-4-ltykernel@gmail.com (mailing list archive)
State Superseded
Series x86/Hyper-V: Add Hyper-V Isolation VM support (Second part)

Commit Message

Tianyu Lan Dec. 5, 2021, 8:18 a.m. UTC
From: Tianyu Lan <Tianyu.Lan@microsoft.com>

Hyper-V Isolation VMs require bounce buffer support to copy
data from/to encrypted memory, so enable swiotlb force
mode to use the swiotlb bounce buffer for DMA transactions.

In an Isolation VM with AMD SEV, the bounce buffer needs to be
accessed via an extra address space above shared_gpa_boundary
(e.g. the 39-bit address line) reported by the Hyper-V CPUID
ISOLATION_CONFIG leaf. The accessed physical address is the
original physical address + shared_gpa_boundary. In the AMD
SEV-SNP spec, shared_gpa_boundary is called the virtual top of
memory (vTOM). Memory addresses below vTOM are automatically
treated as private while memory above vTOM is treated as shared.
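
As a minimal sketch of the address adjustment described above (the
hv_swiotlb_shared_pa() helper is hypothetical and only for
illustration; ms_hyperv.shared_gpa_boundary is the real field used by
this series):

/*
 * Sketch only: in an SNP Isolation VM, accesses to a bounce buffer
 * page must go through the shared alias above vTOM, i.e. the original
 * physical address plus shared_gpa_boundary.
 */
static inline phys_addr_t hv_swiotlb_shared_pa(phys_addr_t pa)
{
	return pa + ms_hyperv.shared_gpa_boundary;
}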

Hyper-V initializes its own swiotlb bounce buffer and the default
swiotlb needs to be disabled. pci_swiotlb_detect_override() and
pci_swiotlb_detect_4gb() enable the default one. To override
that setting, hyperv_swiotlb_detect() needs to run before these
detect functions, which depend on pci_xen_swiotlb_detect(). Make
pci_xen_swiotlb_detect() depend on hyperv_swiotlb_detect() to
keep the order.
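
For reference, x86 IOMMU init table entries have the form
IOMMU_INIT_FINISH(detect, depend, init, late_init), where "depend"
names another entry's detect routine that must run first. A rough
sketch of the resulting chain (the pci-swiotlb.c entry is quoted from
memory and may differ in detail; the pci-swiotlb-xen.c entry is the
one this patch modifies):

/* arch/x86/kernel/pci-swiotlb.c (existing, roughly): the default
 * swiotlb detect runs only after the Xen detect. */
IOMMU_INIT_FINISH(pci_swiotlb_detect_override,
		  pci_xen_swiotlb_detect,
		  pci_swiotlb_init,
		  pci_swiotlb_late_init);

/* arch/x86/xen/pci-swiotlb-xen.c (this patch): the Xen detect now
 * runs only after hyperv_swiotlb_detect(), so the Hyper-V callback
 * gets to force swiotlb before the default detect functions run. */
IOMMU_INIT_FINISH(pci_xen_swiotlb_detect,
		  hyperv_swiotlb_detect,
		  pci_xen_swiotlb_init,
		  NULL);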

The swiotlb bounce buffer code calls set_memory_decrypted()
to mark the bounce buffer visible to the host and maps it in the
extra address space via memremap(). Populate the shared_gpa_boundary
(vTOM) via the swiotlb_unencrypted_base variable.
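
As a rough sketch of that idea (not the exact swiotlb implementation;
swiotlb_remap_shared() is a hypothetical helper for illustration):

/*
 * Sketch, assuming swiotlb_unencrypted_base has been set to
 * shared_gpa_boundary: access the decrypted bounce buffer through a
 * remapping of its shared (above-vTOM) alias instead of the direct map.
 */
static void *swiotlb_remap_shared(phys_addr_t start, size_t bytes)
{
	if (swiotlb_unencrypted_base)
		return memremap(start + swiotlb_unencrypted_base,
				bytes, MEMREMAP_WB);
	return phys_to_virt(start);
}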

The mapping function memremap() can't work at the early stage in
hyperv_iommu_swiotlb_init(), so call swiotlb_update_mem_attributes()
in hyperv_iommu_swiotlb_later_init().

Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
---
Changes since v3:
	* Add a comment in pci-swiotlb-xen.c to explain why a
	  dependency is added between hyperv_swiotlb_detect() and
	  pci_xen_swiotlb_detect().
	* Return directly when allocation of the Hyper-V swiotlb
	  buffer fails in hyperv_iommu_swiotlb_init().
---
 arch/x86/xen/pci-swiotlb-xen.c | 12 ++++++-
 drivers/hv/vmbus_drv.c         |  3 ++
 drivers/iommu/hyperv-iommu.c   | 58 ++++++++++++++++++++++++++++++++++
 include/linux/hyperv.h         |  8 +++++
 4 files changed, 80 insertions(+), 1 deletion(-)

Comments

Jürgen Groß Dec. 5, 2021, 8:34 a.m. UTC | #1
On 05.12.21 09:18, Tianyu Lan wrote:
> From: Tianyu Lan <Tianyu.Lan@microsoft.com>
> 
> hyperv Isolation VM requires bounce buffer support to copy
> data from/to encrypted memory and so enable swiotlb force
> mode to use swiotlb bounce buffer for DMA transaction.
> 
> In Isolation VM with AMD SEV, the bounce buffer needs to be
> accessed via extra address space which is above shared_gpa_boundary
> (E.G 39 bit address line) reported by Hyper-V CPUID ISOLATION_CONFIG.
> The access physical address will be original physical address +
> shared_gpa_boundary. The shared_gpa_boundary in the AMD SEV SNP
> spec is called virtual top of memory(vTOM). Memory addresses below
> vTOM are automatically treated as private while memory above
> vTOM is treated as shared.
> 
> Hyper-V initalizes swiotlb bounce buffer and default swiotlb
> needs to be disabled. pci_swiotlb_detect_override() and
> pci_swiotlb_detect_4gb() enable the default one. To override
> the setting, hyperv_swiotlb_detect() needs to run before
> these detect functions which depends on the pci_xen_swiotlb_
> init(). Make pci_xen_swiotlb_init() depends on the hyperv_swiotlb
> _detect() to keep the order.

Why? Does Hyper-V plan to support Xen PV guests? If not, I don't see
the need for adding this change.


Juergen
Tianyu Lan Dec. 5, 2021, 8:48 a.m. UTC | #2
On 12/5/2021 4:34 PM, Juergen Gross wrote:
> On 05.12.21 09:18, Tianyu Lan wrote:
>> From: Tianyu Lan <Tianyu.Lan@microsoft.com>
>>
>> hyperv Isolation VM requires bounce buffer support to copy
>> data from/to encrypted memory and so enable swiotlb force
>> mode to use swiotlb bounce buffer for DMA transaction.
>>
>> In Isolation VM with AMD SEV, the bounce buffer needs to be
>> accessed via extra address space which is above shared_gpa_boundary
>> (E.G 39 bit address line) reported by Hyper-V CPUID ISOLATION_CONFIG.
>> The access physical address will be original physical address +
>> shared_gpa_boundary. The shared_gpa_boundary in the AMD SEV SNP
>> spec is called virtual top of memory(vTOM). Memory addresses below
>> vTOM are automatically treated as private while memory above
>> vTOM is treated as shared.
>>
>> Hyper-V initalizes swiotlb bounce buffer and default swiotlb
>> needs to be disabled. pci_swiotlb_detect_override() and
>> pci_swiotlb_detect_4gb() enable the default one. To override
>> the setting, hyperv_swiotlb_detect() needs to run before
>> these detect functions which depends on the pci_xen_swiotlb_
>> init(). Make pci_xen_swiotlb_init() depends on the hyperv_swiotlb
>> _detect() to keep the order.
> 
> Why? Does Hyper-V plan to support Xen PV guests? If not, I don't see
> the need for adding this change.
> 

This is to keep the detect function calling order: the Hyper-V detect
callback needs to be called before pci_swiotlb_detect_override() and
pci_swiotlb_detect_4gb(). This is the same reason why
pci_swiotlb_detect_override() needs to depend on
pci_xen_swiotlb_detect(). Hyper-V has the same requirement, so make
the Xen detect callback depend on the Hyper-V one.
Jürgen Groß Dec. 5, 2021, 10:31 a.m. UTC | #3
On 05.12.21 09:48, Tianyu Lan wrote:
> 
> 
> On 12/5/2021 4:34 PM, Juergen Gross wrote:
>> On 05.12.21 09:18, Tianyu Lan wrote:
>>> From: Tianyu Lan <Tianyu.Lan@microsoft.com>
>>>
>>> hyperv Isolation VM requires bounce buffer support to copy
>>> data from/to encrypted memory and so enable swiotlb force
>>> mode to use swiotlb bounce buffer for DMA transaction.
>>>
>>> In Isolation VM with AMD SEV, the bounce buffer needs to be
>>> accessed via extra address space which is above shared_gpa_boundary
>>> (E.G 39 bit address line) reported by Hyper-V CPUID ISOLATION_CONFIG.
>>> The access physical address will be original physical address +
>>> shared_gpa_boundary. The shared_gpa_boundary in the AMD SEV SNP
>>> spec is called virtual top of memory(vTOM). Memory addresses below
>>> vTOM are automatically treated as private while memory above
>>> vTOM is treated as shared.
>>>
>>> Hyper-V initalizes swiotlb bounce buffer and default swiotlb
>>> needs to be disabled. pci_swiotlb_detect_override() and
>>> pci_swiotlb_detect_4gb() enable the default one. To override
>>> the setting, hyperv_swiotlb_detect() needs to run before
>>> these detect functions which depends on the pci_xen_swiotlb_
>>> init(). Make pci_xen_swiotlb_init() depends on the hyperv_swiotlb
>>> _detect() to keep the order.
>>
>> Why? Does Hyper-V plan to support Xen PV guests? If not, I don't see
>> the need for adding this change.
>>
> 
> This is to keep detect function calling order that Hyper-V detect 
> callback needs to call before pci_swiotlb_detect_override() and 
> pci_swiotlb_detect_4gb(). This is the same for why
> pci_swiotlb_detect_override() needs to depend on the 
> pci_xen_swiotlb_detect(). Hyper-V also has such request and so make xen 
> detect callback depends on Hyper-V one.

And does this even work without CONFIG_SWIOTLB_XEN, i.e. without
pci_xen_swiotlb_detect() being in the system?


Juergen
Tianyu Lan Dec. 6, 2021, 2:19 p.m. UTC | #4
On 12/5/2021 6:31 PM, Juergen Gross wrote:
> On 05.12.21 09:48, Tianyu Lan wrote:
>>
>>
>> On 12/5/2021 4:34 PM, Juergen Gross wrote:
>>> On 05.12.21 09:18, Tianyu Lan wrote:
>>>> From: Tianyu Lan <Tianyu.Lan@microsoft.com>
>>>>
>>>> hyperv Isolation VM requires bounce buffer support to copy
>>>> data from/to encrypted memory and so enable swiotlb force
>>>> mode to use swiotlb bounce buffer for DMA transaction.
>>>>
>>>> In Isolation VM with AMD SEV, the bounce buffer needs to be
>>>> accessed via extra address space which is above shared_gpa_boundary
>>>> (E.G 39 bit address line) reported by Hyper-V CPUID ISOLATION_CONFIG.
>>>> The access physical address will be original physical address +
>>>> shared_gpa_boundary. The shared_gpa_boundary in the AMD SEV SNP
>>>> spec is called virtual top of memory(vTOM). Memory addresses below
>>>> vTOM are automatically treated as private while memory above
>>>> vTOM is treated as shared.
>>>>
>>>> Hyper-V initalizes swiotlb bounce buffer and default swiotlb
>>>> needs to be disabled. pci_swiotlb_detect_override() and
>>>> pci_swiotlb_detect_4gb() enable the default one. To override
>>>> the setting, hyperv_swiotlb_detect() needs to run before
>>>> these detect functions which depends on the pci_xen_swiotlb_
>>>> init(). Make pci_xen_swiotlb_init() depends on the hyperv_swiotlb
>>>> _detect() to keep the order.
>>>
>>> Why? Does Hyper-V plan to support Xen PV guests? If not, I don't see
>>> the need for adding this change.
>>>
>>
>> This is to keep detect function calling order that Hyper-V detect 
>> callback needs to call before pci_swiotlb_detect_override() and 
>> pci_swiotlb_detect_4gb(). This is the same for why
>> pci_swiotlb_detect_override() needs to depend on the 
>> pci_xen_swiotlb_detect(). Hyper-V also has such request and so make 
>> xen detect callback depends on Hyper-V one.
> 
> And does this even work without CONFIG_SWIOTLB_XEN, i.e. without
> pci_xen_swiotlb_detect() being in the system?
> 
Hi Juergen:
	Thanks for your review. This is an issue, and I just sent out a v5
which decouples the dependency between Xen and Hyper-V.

Patch

diff --git a/arch/x86/xen/pci-swiotlb-xen.c b/arch/x86/xen/pci-swiotlb-xen.c
index 46df59aeaa06..8e2ee3ce6374 100644
--- a/arch/x86/xen/pci-swiotlb-xen.c
+++ b/arch/x86/xen/pci-swiotlb-xen.c
@@ -4,6 +4,7 @@ 
 
 #include <linux/dma-map-ops.h>
 #include <linux/pci.h>
+#include <linux/hyperv.h>
 #include <xen/swiotlb-xen.h>
 
 #include <asm/xen/hypervisor.h>
@@ -90,7 +91,16 @@  int pci_xen_swiotlb_init_late(void)
 }
 EXPORT_SYMBOL_GPL(pci_xen_swiotlb_init_late);
 
+/*
+ * Hyper-V initializes its own swiotlb bounce buffer and the default
+ * swiotlb needs to be disabled. pci_swiotlb_detect_override() and
+ * pci_swiotlb_detect_4gb() enable the default one. To override
+ * that setting, hyperv_swiotlb_detect() needs to run before these
+ * detect functions, which depend on pci_xen_swiotlb_detect(). Make
+ * pci_xen_swiotlb_detect() depend on hyperv_swiotlb_detect() to
+ * keep the order.
+ */
 IOMMU_INIT_FINISH(pci_xen_swiotlb_detect,
-		  NULL,
+		  hyperv_swiotlb_detect,
 		  pci_xen_swiotlb_init,
 		  NULL);
diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
index 392c1ac4f819..0a64ccfafb8b 100644
--- a/drivers/hv/vmbus_drv.c
+++ b/drivers/hv/vmbus_drv.c
@@ -33,6 +33,7 @@ 
 #include <linux/random.h>
 #include <linux/kernel.h>
 #include <linux/syscore_ops.h>
+#include <linux/dma-map-ops.h>
 #include <clocksource/hyperv_timer.h>
 #include "hyperv_vmbus.h"
 
@@ -2078,6 +2079,7 @@  struct hv_device *vmbus_device_create(const guid_t *type,
 	return child_device_obj;
 }
 
+static u64 vmbus_dma_mask = DMA_BIT_MASK(64);
 /*
  * vmbus_device_register - Register the child device
  */
@@ -2118,6 +2120,7 @@  int vmbus_device_register(struct hv_device *child_device_obj)
 	}
 	hv_debug_add_dev_dir(child_device_obj);
 
+	child_device_obj->device.dma_mask = &vmbus_dma_mask;
 	return 0;
 
 err_kset_unregister:
diff --git a/drivers/iommu/hyperv-iommu.c b/drivers/iommu/hyperv-iommu.c
index e285a220c913..44ba24d9e06c 100644
--- a/drivers/iommu/hyperv-iommu.c
+++ b/drivers/iommu/hyperv-iommu.c
@@ -13,14 +13,20 @@ 
 #include <linux/irq.h>
 #include <linux/iommu.h>
 #include <linux/module.h>
+#include <linux/hyperv.h>
+#include <linux/io.h>
 
 #include <asm/apic.h>
 #include <asm/cpu.h>
 #include <asm/hw_irq.h>
 #include <asm/io_apic.h>
+#include <asm/iommu.h>
+#include <asm/iommu_table.h>
 #include <asm/irq_remapping.h>
 #include <asm/hypervisor.h>
 #include <asm/mshyperv.h>
+#include <asm/swiotlb.h>
+#include <linux/dma-direct.h>
 
 #include "irq_remapping.h"
 
@@ -337,4 +343,56 @@  static const struct irq_domain_ops hyperv_root_ir_domain_ops = {
 	.free = hyperv_root_irq_remapping_free,
 };
 
+static void __init hyperv_iommu_swiotlb_init(void)
+{
+	unsigned long hyperv_io_tlb_size;
+	void *hyperv_io_tlb_start;
+
+	/*
+	 * Allocate Hyper-V swiotlb bounce buffer at early place
+	 * to reserve large contiguous memory.
+	 */
+	hyperv_io_tlb_size = swiotlb_size_or_default();
+	hyperv_io_tlb_start = memblock_alloc(hyperv_io_tlb_size, PAGE_SIZE);
+
+	if (!hyperv_io_tlb_start) {
+		pr_warn("Fail to allocate Hyper-V swiotlb buffer.\n");
+		return;
+	}
+
+	swiotlb_init_with_tbl(hyperv_io_tlb_start,
+			      hyperv_io_tlb_size >> IO_TLB_SHIFT, true);
+}
+
+int __init hyperv_swiotlb_detect(void)
+{
+	if (!hypervisor_is_type(X86_HYPER_MS_HYPERV))
+		return 0;
+
+	if (!hv_is_isolation_supported())
+		return 0;
+
+	/*
+	 * Enable swiotlb force mode in Isolation VM to
+	 * use swiotlb bounce buffer for dma transaction.
+	 */
+	if (hv_isolation_type_snp())
+		swiotlb_unencrypted_base = ms_hyperv.shared_gpa_boundary;
+	swiotlb_force = SWIOTLB_FORCE;
+	return 1;
+}
+
+static void __init hyperv_iommu_swiotlb_later_init(void)
+{
+	/*
+	 * Swiotlb bounce buffer needs to be mapped in extra address
+	 * space. Map function doesn't work in the early place and so
+	 * call swiotlb_update_mem_attributes() here.
+	 */
+	swiotlb_update_mem_attributes();
+}
+
+IOMMU_INIT_FINISH(hyperv_swiotlb_detect,
+		  NULL, hyperv_iommu_swiotlb_init,
+		  hyperv_iommu_swiotlb_later_init);
 #endif
diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
index b823311eac79..1f037e114dc8 100644
--- a/include/linux/hyperv.h
+++ b/include/linux/hyperv.h
@@ -1726,6 +1726,14 @@  int hyperv_write_cfg_blk(struct pci_dev *dev, void *buf, unsigned int len,
 int hyperv_reg_block_invalidate(struct pci_dev *dev, void *context,
 				void (*block_invalidate)(void *context,
 							 u64 block_mask));
+#if IS_ENABLED(CONFIG_HYPERV)
+int __init hyperv_swiotlb_detect(void);
+#else
+static inline int __init hyperv_swiotlb_detect(void)
+{
+	return 0;
+}
+#endif
 
 struct hyperv_pci_block_ops {
 	int (*read_block)(struct pci_dev *dev, void *buf, unsigned int buf_len,