Message ID | 20210530150628.2063957-2-ltykernel@gmail.com (mailing list archive)
---|---
State | Not Applicable |
Series | x86/Hyper-V: Add Hyper-V Isolation VM support
On Sun, May 30, 2021 at 11:06:18AM -0400, Tianyu Lan wrote:
> +	if (ms_hyperv.ghcb_base) {
> +		rdmsrl(MSR_AMD64_SEV_ES_GHCB, ghcb_gpa);
> +
> +		ghcb_va = ioremap_cache(ghcb_gpa, HV_HYP_PAGE_SIZE);
> +		if (!ghcb_va)
> +			return -ENOMEM;

Can you explain this a bit more? We've very much deprecated
ioremap_cache in favor of memremap. Why do you need a __iomem address
here? Why do we need the remap here at all? Does the data structure
at this address not have any types that we could use a struct for?

> +
> +	rdmsrl(MSR_AMD64_SEV_ES_GHCB, ghcb_gpa);
> +	ghcb_va = ioremap_cache(ghcb_gpa, HV_HYP_PAGE_SIZE);
> +	if (!ghcb_va) {

This seems to duplicate the above code.

> +bool hv_isolation_type_snp(void)
> +{
> +	return static_branch_unlikely(&isolation_type_snp);
> +}
> +EXPORT_SYMBOL_GPL(hv_isolation_type_snp);

This probably wants a kerneldoc comment explaining when it should be used.
Hi Christoph:
	Thanks for your review.

On 6/7/2021 2:41 PM, Christoph Hellwig wrote:
> On Sun, May 30, 2021 at 11:06:18AM -0400, Tianyu Lan wrote:
>> +	if (ms_hyperv.ghcb_base) {
>> +		rdmsrl(MSR_AMD64_SEV_ES_GHCB, ghcb_gpa);
>> +
>> +		ghcb_va = ioremap_cache(ghcb_gpa, HV_HYP_PAGE_SIZE);
>> +		if (!ghcb_va)
>> +			return -ENOMEM;
>
> Can you explain this a bit more? We've very much deprecated
> ioremap_cache in favor of memremap. Why do you need a __iomem address
> here? Why do we need the remap here at all?

The GHCB physical address is in the extra address space above the shared
gpa boundary reported by the Hyper-V CPUID. Addresses below the shared
gpa boundary are treated as encrypted, and addresses above it are treated
as decrypted. System memory is remapped into the extra address space,
starting at the boundary, and memory shared with the host must be accessed
via that extra address space (pa + shared_gpa_boundary) in the Linux guest.
The mapping here provides the GHCB page used for communication with the
hypervisor (e.g. hypercalls and MSR read/write) via the GHCB. memremap()
walks the iomem_resource list, and addresses in the extra address space are
not in that list, so I used ioremap_cache(). I will use memremap() instead
of ioremap() here.

> Does the data structure at this address not have any types that we
> could use a struct for?

The struct will be added in the following patch. I will refresh the
following patch and use struct hv_ghcb for the mapped pointer.

>
>> +
>> +	rdmsrl(MSR_AMD64_SEV_ES_GHCB, ghcb_gpa);
>> +	ghcb_va = ioremap_cache(ghcb_gpa, HV_HYP_PAGE_SIZE);
>> +	if (!ghcb_va) {
>
> This seems to duplicate the above code.

The code above maps the GHCB for the BSP, and this does the same thing for
the APs. I will add a new function to avoid the duplication.

>
>> +bool hv_isolation_type_snp(void)
>> +{
>> +	return static_branch_unlikely(&isolation_type_snp);
>> +}
>> +EXPORT_SYMBOL_GPL(hv_isolation_type_snp);
>
> This probably wants a kerneldoc comment explaining when it should be used.
>

OK. I will add it.
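For illustration only, here is a minimal sketch of how the two mapping sites could be folded into one helper built on memremap(), as discussed in the exchange above. The helper name hv_map_ghcb() is hypothetical and does not appear in the posted patch; it assumes the hv_init.c context (ms_hyperv.ghcb_base, MSR_AMD64_SEV_ES_GHCB, HV_HYP_PAGE_SIZE) introduced by this series.

/*
 * Hypothetical helper (not part of the posted patch): map the GHCB page
 * advertised in MSR_AMD64_SEV_ES_GHCB for the current CPU and store the
 * mapping in the per-cpu ghcb_base slot.  memremap(MEMREMAP_WB) replaces
 * the deprecated ioremap_cache() and yields a plain pointer rather than
 * an __iomem one, assuming the extra address space can be remapped this
 * way as described above.
 */
static int hv_map_ghcb(void)
{
	u64 ghcb_gpa;
	void *ghcb_va;
	void **ghcb_base;

	if (!ms_hyperv.ghcb_base)
		return 0;

	rdmsrl(MSR_AMD64_SEV_ES_GHCB, ghcb_gpa);
	ghcb_va = memremap(ghcb_gpa, HV_HYP_PAGE_SIZE, MEMREMAP_WB);
	if (!ghcb_va)
		return -ENOMEM;

	ghcb_base = (void **)this_cpu_ptr(ms_hyperv.ghcb_base);
	*ghcb_base = ghcb_va;
	return 0;
}

Both hv_cpu_init() (for the APs) and hyperv_init() (for the BSP) could then call such a helper instead of open-coding the mapping, which would address the duplication noted above.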
On Sun, May 30, 2021 at 11:06:18AM -0400, Tianyu Lan wrote:
> From: Tianyu Lan <Tianyu.Lan@microsoft.com>
>
> Hyper-V exposes the GHCB page via the SEV-ES GHCB MSR for SNP guests
> to communicate with the hypervisor. Map the GHCB page for all cpus
> to read/write MSR registers and submit hvcall requests via the GHCB.
>
> Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
> ---
>  arch/x86/hyperv/hv_init.c       | 60 ++++++++++++++++++++++++++++++---
>  arch/x86/include/asm/mshyperv.h |  2 ++
>  include/asm-generic/mshyperv.h  |  2 ++
>  3 files changed, 60 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
> index bb0ae4b5c00f..dc74d01cb859 100644
> --- a/arch/x86/hyperv/hv_init.c
> +++ b/arch/x86/hyperv/hv_init.c
> @@ -60,6 +60,9 @@ static int hv_cpu_init(unsigned int cpu)
>  	struct hv_vp_assist_page **hvp = &hv_vp_assist_page[smp_processor_id()];
>  	void **input_arg;
>  	struct page *pg;
> +	u64 ghcb_gpa;
> +	void *ghcb_va;
> +	void **ghcb_base;

Any reason you can't reuse the SEV-ES support code in the Linux kernel?
It already has code to set up GHCBs for all vCPUs.

I see that you don't need #VC handling in your SNP VMs because of the
paravisor running underneath, but just re-using the GHCB setup code
shouldn't be too hard.

Regards,

	Joerg
Hi Joerg:
	Thanks for your review.

On 6/9/2021 8:38 PM, Joerg Roedel wrote:
> On Sun, May 30, 2021 at 11:06:18AM -0400, Tianyu Lan wrote:
>> From: Tianyu Lan <Tianyu.Lan@microsoft.com>
>>
>> Hyper-V exposes the GHCB page via the SEV-ES GHCB MSR for SNP guests
>> to communicate with the hypervisor. Map the GHCB page for all cpus
>> to read/write MSR registers and submit hvcall requests via the GHCB.
>>
>> Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
>> ---
>>  arch/x86/hyperv/hv_init.c       | 60 ++++++++++++++++++++++++++++++---
>>  arch/x86/include/asm/mshyperv.h |  2 ++
>>  include/asm-generic/mshyperv.h  |  2 ++
>>  3 files changed, 60 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
>> index bb0ae4b5c00f..dc74d01cb859 100644
>> --- a/arch/x86/hyperv/hv_init.c
>> +++ b/arch/x86/hyperv/hv_init.c
>> @@ -60,6 +60,9 @@ static int hv_cpu_init(unsigned int cpu)
>>  	struct hv_vp_assist_page **hvp = &hv_vp_assist_page[smp_processor_id()];
>>  	void **input_arg;
>>  	struct page *pg;
>> +	u64 ghcb_gpa;
>> +	void *ghcb_va;
>> +	void **ghcb_base;
>
> Any reason you can't reuse the SEV-ES support code in the Linux kernel?
> It already has code to set up GHCBs for all vCPUs.
>
> I see that you don't need #VC handling in your SNP VMs because of the
> paravisor running underneath, but just re-using the GHCB setup code
> shouldn't be too hard.
>

Thanks for your suggestion. I will try to reuse the SEV-ES code.
diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
index bb0ae4b5c00f..dc74d01cb859 100644
--- a/arch/x86/hyperv/hv_init.c
+++ b/arch/x86/hyperv/hv_init.c
@@ -60,6 +60,9 @@ static int hv_cpu_init(unsigned int cpu)
 	struct hv_vp_assist_page **hvp = &hv_vp_assist_page[smp_processor_id()];
 	void **input_arg;
 	struct page *pg;
+	u64 ghcb_gpa;
+	void *ghcb_va;
+	void **ghcb_base;
 
 	/* hv_cpu_init() can be called with IRQs disabled from hv_resume() */
 	pg = alloc_pages(irqs_disabled() ? GFP_ATOMIC : GFP_KERNEL, hv_root_partition ? 1 : 0);
@@ -106,6 +109,17 @@ static int hv_cpu_init(unsigned int cpu)
 		wrmsrl(HV_X64_MSR_VP_ASSIST_PAGE, val);
 	}
 
+	if (ms_hyperv.ghcb_base) {
+		rdmsrl(MSR_AMD64_SEV_ES_GHCB, ghcb_gpa);
+
+		ghcb_va = ioremap_cache(ghcb_gpa, HV_HYP_PAGE_SIZE);
+		if (!ghcb_va)
+			return -ENOMEM;
+
+		ghcb_base = (void **)this_cpu_ptr(ms_hyperv.ghcb_base);
+		*ghcb_base = ghcb_va;
+	}
+
 	return 0;
 }
 
@@ -201,6 +215,7 @@ static int hv_cpu_die(unsigned int cpu)
 	unsigned long flags;
 	void **input_arg;
 	void *pg;
+	void **ghcb_va = NULL;
 
 	local_irq_save(flags);
 	input_arg = (void **)this_cpu_ptr(hyperv_pcpu_input_arg);
@@ -214,6 +229,13 @@ static int hv_cpu_die(unsigned int cpu)
 		*output_arg = NULL;
 	}
 
+	if (ms_hyperv.ghcb_base) {
+		ghcb_va = (void **)this_cpu_ptr(ms_hyperv.ghcb_base);
+		if (*ghcb_va)
+			iounmap(*ghcb_va);
+		*ghcb_va = NULL;
+	}
+
 	local_irq_restore(flags);
 
 	free_pages((unsigned long)pg, hv_root_partition ? 1 : 0);
@@ -369,6 +391,9 @@ void __init hyperv_init(void)
 	u64 guest_id, required_msrs;
 	union hv_x64_msr_hypercall_contents hypercall_msr;
 	int cpuhp, i;
+	u64 ghcb_gpa;
+	void *ghcb_va;
+	void **ghcb_base;
 
 	if (x86_hyper_type != X86_HYPER_MS_HYPERV)
 		return;
@@ -429,9 +454,24 @@ void __init hyperv_init(void)
 			VMALLOC_END, GFP_KERNEL, PAGE_KERNEL_ROX,
 			VM_FLUSH_RESET_PERMS, NUMA_NO_NODE,
 			__builtin_return_address(0));
-	if (hv_hypercall_pg == NULL) {
-		wrmsrl(HV_X64_MSR_GUEST_OS_ID, 0);
-		goto remove_cpuhp_state;
+	if (hv_hypercall_pg == NULL)
+		goto clean_guest_os_id;
+
+	if (hv_isolation_type_snp()) {
+		ms_hyperv.ghcb_base = alloc_percpu(void *);
+		if (!ms_hyperv.ghcb_base)
+			goto clean_guest_os_id;
+
+		rdmsrl(MSR_AMD64_SEV_ES_GHCB, ghcb_gpa);
+		ghcb_va = ioremap_cache(ghcb_gpa, HV_HYP_PAGE_SIZE);
+		if (!ghcb_va) {
+			free_percpu(ms_hyperv.ghcb_base);
+			ms_hyperv.ghcb_base = NULL;
+			goto clean_guest_os_id;
+		}
+
+		ghcb_base = (void **)this_cpu_ptr(ms_hyperv.ghcb_base);
+		*ghcb_base = ghcb_va;
 	}
 
 	rdmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
@@ -502,7 +542,8 @@ void __init hyperv_init(void)
 	hv_query_ext_cap(0);
 	return;
 
-remove_cpuhp_state:
+clean_guest_os_id:
+	wrmsrl(HV_X64_MSR_GUEST_OS_ID, 0);
 	cpuhp_remove_state(cpuhp);
 free_vp_assist_page:
 	kfree(hv_vp_assist_page);
@@ -531,6 +572,9 @@ void hyperv_cleanup(void)
 	 */
 	hv_hypercall_pg = NULL;
 
+	if (ms_hyperv.ghcb_base)
+		free_percpu(ms_hyperv.ghcb_base);
+
 	/* Reset the hypercall page */
 	hypercall_msr.as_uint64 = 0;
 	wrmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
@@ -615,6 +659,14 @@ bool hv_is_isolation_supported(void)
 }
 EXPORT_SYMBOL_GPL(hv_is_isolation_supported);
 
+DEFINE_STATIC_KEY_FALSE(isolation_type_snp);
+
+bool hv_isolation_type_snp(void)
+{
+	return static_branch_unlikely(&isolation_type_snp);
+}
+EXPORT_SYMBOL_GPL(hv_isolation_type_snp);
+
 /* Bit mask of the extended capability to query: see HV_EXT_CAPABILITY_xxx */
 bool hv_query_ext_cap(u64 cap_query)
 {
diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
index 67ff0d637e55..aeacca7c4da8 100644
--- a/arch/x86/include/asm/mshyperv.h
+++ b/arch/x86/include/asm/mshyperv.h
@@ -11,6 +11,8 @@
 #include <asm/paravirt.h>
 #include <asm/mshyperv.h>
 
+DECLARE_STATIC_KEY_FALSE(isolation_type_snp);
+
 typedef int (*hyperv_fill_flush_list_func)(
 		struct hv_guest_mapping_flush_list *flush,
 		void *data);
diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
index 9a000ba2bb75..3ae56a29594f 100644
--- a/include/asm-generic/mshyperv.h
+++ b/include/asm-generic/mshyperv.h
@@ -35,6 +35,7 @@ struct ms_hyperv_info {
 	u32 max_lp_index;
 	u32 isolation_config_a;
 	u32 isolation_config_b;
+	void __percpu **ghcb_base;
 };
 
 extern struct ms_hyperv_info ms_hyperv;
 
@@ -224,6 +225,7 @@ bool hv_is_hyperv_initialized(void);
 bool hv_is_hibernation_supported(void);
 enum hv_isolation_type hv_get_isolation_type(void);
 bool hv_is_isolation_supported(void);
+bool hv_isolation_type_snp(void);
 void hyperv_cleanup(void);
 bool hv_query_ext_cap(u64 cap_query);
 #else /* CONFIG_HYPERV */
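As a rough usage sketch of the pieces added by this patch, the snippet below shows how later code might retrieve the per-cpu GHCB mapping before talking to the hypervisor on an SNP guest. The helper name hv_get_ghcb_va() is hypothetical; only hv_isolation_type_snp(), ms_hyperv.ghcb_base and the per-cpu layout come from the patch itself.

/*
 * Illustrative only: return the GHCB virtual address that hv_cpu_init()/
 * hyperv_init() mapped for the current CPU, or NULL when not running in
 * an SNP isolation VM.  Callers would use the returned page for GHCB-based
 * MSR accesses and hypercall requests.
 */
static void *hv_get_ghcb_va(void)
{
	void **ghcb_base;

	if (!hv_isolation_type_snp() || !ms_hyperv.ghcb_base)
		return NULL;

	ghcb_base = (void **)this_cpu_ptr(ms_hyperv.ghcb_base);
	return *ghcb_base;
}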