Message ID: 20200213213036.207625-1-olvaffe@gmail.com (mailing list archive)
Series: KVM: x86: honor guest memory type
On 13/02/20 22:30, Chia-I Wu wrote:
> Hi,
>
> Host GPU drivers like to give userspace WC mapping. When the userspace makes
> the mapping available to a guest, it also tells the guest to create a WC
> mapping. However, even when the guest kernel picks the correct memory type,
> it gets ignored because of VMX_EPT_IPAT_BIT on Intel.
>
> This series adds a new flag to KVM_SET_USER_MEMORY_REGION, which tells the
> host kernel to honor the guest memory type for the memslot. An alternative
> fix is for KVM to unconditionally honor the guest memory type (unless it is
> MMIO, to avoid MCEs on Intel). I believe the alternative fix is how things
> are on ARM, and probably also how things are on AMD.
>
> I am new to KVM and HW virtualization technologies. This series is meant as
> an RFC.
>

When we tried to do this in the past, we got machine checks everywhere
unfortunately due to the same address being mapped with different memory
types. Unfortunately I cannot find the entry anymore in bugzilla, but
this was not fixed as far as I know.

Paolo
On Thu, Feb 13, 2020 at 1:41 PM Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 13/02/20 22:30, Chia-I Wu wrote:
> > Hi,
> >
> > Host GPU drivers like to give userspace WC mapping. When the userspace makes
> > the mapping available to a guest, it also tells the guest to create a WC
> > mapping. However, even when the guest kernel picks the correct memory type,
> > it gets ignored because of VMX_EPT_IPAT_BIT on Intel.
> >
> > This series adds a new flag to KVM_SET_USER_MEMORY_REGION, which tells the
> > host kernel to honor the guest memory type for the memslot. An alternative
> > fix is for KVM to unconditionally honor the guest memory type (unless it is
> > MMIO, to avoid MCEs on Intel). I believe the alternative fix is how things
> > are on ARM, and probably also how things are on AMD.
> >
> > I am new to KVM and HW virtualization technologies. This series is meant as
> > an RFC.
> >
>
> When we tried to do this in the past, we got machine checks everywhere
> unfortunately due to the same address being mapped with different memory
> types. Unfortunately I cannot find the entry anymore in bugzilla, but
> this was not fixed as far as I know.

Yeah, I did a bit of history digging here

https://gitlab.freedesktop.org/virgl/virglrenderer/issues/151#note_372594

The bug you mentioned was probably this one

https://bugzilla.kernel.org/show_bug.cgi?id=104091

which was caused by

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=fd717f11015f673487ffc826e59b2bad69d20fe5

From what I can tell, the commit allowed the guests to create cached
mappings to MMIO regions and caused MCEs. That is different than what
I need, which is to allow guests to create uncached mappings to system
ram (i.e., !kvm_is_mmio_pfn) when the host userspace also has uncached
mappings. But it is true that this still allows the userspace & guest
kernel to create conflicting memory types.

Implementation-wise, the current implementation uses
kvm_arch_register_noncoherent_dma. It essentially treats a memslot with
the new flag as a non-coherent vfio device as far as mmu is concerned.

> Paolo
>
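A rough, hypothetical sketch of the wiring described above (the RFC patches themselves are not shown in this archive): a memslot carrying the proposed KVM_MEM_DMA flag is counted like a non-coherent VFIO device, so the EPT code stops forcing WB. The flag value and the helper below are assumptions; only kvm_arch_register_noncoherent_dma()/kvm_arch_unregister_noncoherent_dma() are existing KVM functions.

#include <linux/kvm_host.h>

/* Proposed uapi flag from this series; the value is a placeholder. */
#define KVM_MEM_DMA	(1UL << 2)

/* Hypothetical helper called when a memslot is created, changed or deleted. */
static void kvm_dma_memslot_update(struct kvm *kvm,
				   const struct kvm_memory_slot *old,
				   const struct kvm_memory_slot *new)
{
	bool was_dma = old && (old->flags & KVM_MEM_DMA);
	bool is_dma  = new && (new->flags & KVM_MEM_DMA);

	if (!was_dma && is_dma)
		/* Bump noncoherent_dma_count so vmx_get_mt_mask() honors guest PAT. */
		kvm_arch_register_noncoherent_dma(kvm);
	else if (was_dma && !is_dma)
		kvm_arch_unregister_noncoherent_dma(kvm);
}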
On 13/02/20 23:18, Chia-I Wu wrote:
>
> The bug you mentioned was probably this one
>
> https://bugzilla.kernel.org/show_bug.cgi?id=104091

Yes, indeed.

> From what I can tell, the commit allowed the guests to create cached
> mappings to MMIO regions and caused MCEs. That is different than what
> I need, which is to allow guests to create uncached mappings to system
> ram (i.e., !kvm_is_mmio_pfn) when the host userspace also has uncached
> mappings. But it is true that this still allows the userspace & guest
> kernel to create conflicting memory types.

Right, the question is whether the MCEs were tied to MMIO regions
specifically and if so why.

An interesting remark is in the footnote of table 11-7 in the SDM.
There, for the MTRR (EPT for us) memory type UC you can read:

The UC attribute comes from the MTRRs and the processors are not
required to snoop their caches since the data could never have
been cached. This attribute is preferred for performance reasons.

There are two possibilities:

1) the footnote doesn't apply to UC mode coming from EPT page tables.
That would make your change safe.

2) the footnote also applies when the UC attribute comes from the EPT
page tables rather than the MTRRs. In that case, the host should use
UC as the EPT page attribute if and only if it's consistent with the host
MTRRs; it would be more or less impossible to honor UC in the guest MTRRs.
In that case, something like the patch below would be needed.

It is not clear from the manual why the footnote would not apply to WC; that
is, the manual doesn't say explicitly that the processor does not do snooping
for accesses to WC memory. But I guess that must be the case, which is why I
used MTRR_TYPE_WRCOMB in the patch below.

Either way, we would have an explanation of why creating cached mappings to
MMIO regions would cause MCEs, and why in practice we're not seeing MCEs for
guest RAM (the guest would have set WB for that memory in its MTRRs, not UC).

One thing you didn't say: how would userspace use KVM_MEM_DMA? On which
regions would it be set?

Thanks,

Paolo

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index dc331fb06495..2be6f7effa1d 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6920,8 +6920,16 @@ static u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
 	}
 
 	cache = kvm_mtrr_get_guest_memory_type(vcpu, gfn);
-
 exit:
+	if (cache == MTRR_TYPE_UNCACHABLE && !is_mmio) {
+		/*
+		 * We cannot set UC in the EPT page tables as it can cause
+		 * machine check exceptions (??). Hopefully the guest is
+		 * using PAT.
+		 */
+		cache = MTRR_TYPE_WRCOMB;
+	}
+
 	return (cache << VMX_EPT_MT_EPTE_SHIFT) | ipat;
 }
On Fri, Feb 14, 2020 at 11:26:06AM +0100, Paolo Bonzini wrote:
> On 13/02/20 23:18, Chia-I Wu wrote:
> >
> > The bug you mentioned was probably this one
> >
> > https://bugzilla.kernel.org/show_bug.cgi?id=104091
>
> Yes, indeed.
>
> > From what I can tell, the commit allowed the guests to create cached
> > mappings to MMIO regions and caused MCEs. That is different than what
> > I need, which is to allow guests to create uncached mappings to system
> > ram (i.e., !kvm_is_mmio_pfn) when the host userspace also has uncached
> > mappings. But it is true that this still allows the userspace & guest
> > kernel to create conflicting memory types.

This is ok.

> Right, the question is whether the MCEs were tied to MMIO regions
> specifically and if so why.

99.99999% likelihood the answer is "yes". Cacheable accesses to non-existent
memory and most (all?) MMIO regions will cause a #MC. This includes
speculative accesses.

Commit fd717f11015f ("KVM: x86: apply guest MTRR virtualization on host
reserved pages") explicitly had a comment "1. MMIO: trust guest MTRR",
which is basically a direct avenue to generating #MCs.

IIRC, WC accesses to non-existent memory will also cause #MC, but KVM has
bigger problems if it has PRESENT EPTEs pointing at garbage.

> An interesting remark is in the footnote of table 11-7 in the SDM.
> There, for the MTRR (EPT for us) memory type UC you can read:
>
> The UC attribute comes from the MTRRs and the processors are not
> required to snoop their caches since the data could never have
> been cached. This attribute is preferred for performance reasons.
>
> There are two possibilities:
>
> 1) the footnote doesn't apply to UC mode coming from EPT page tables.
> That would make your change safe.
>
> 2) the footnote also applies when the UC attribute comes from the EPT
> page tables rather than the MTRRs. In that case, the host should use
> UC as the EPT page attribute if and only if it's consistent with the host
> MTRRs; it would be more or less impossible to honor UC in the guest MTRRs.
> In that case, something like the patch below would be needed.

(2), the EPTs effectively replace the MTRRs. The expectation being that
the VMM will always use EPT memtypes consistent with the MTRRs.

> It is not clear from the manual why the footnote would not apply to WC; that
> is, the manual doesn't say explicitly that the processor does not do snooping
> for accesses to WC memory. But I guess that must be the case, which is why I
> used MTRR_TYPE_WRCOMB in the patch below.

A few paragraphs below table 11-12 states:

In particular, a WC page must never be aliased to a cacheable page because
WC writes may not check the processor caches.

> Either way, we would have an explanation of why creating cached mappings to
> MMIO regions would cause MCEs, and why in practice we're not seeing MCEs for
> guest RAM (the guest would have set WB for that memory in its MTRRs, not UC).

Aliasing (physical) RAM with different memtypes won't cause #MC, just
memory corruption.

> One thing you didn't say: how would userspace use KVM_MEM_DMA? On which
> regions would it be set?
>
> Thanks,
>
> Paolo
>
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index dc331fb06495..2be6f7effa1d 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -6920,8 +6920,16 @@ static u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
>  	}
>
>  	cache = kvm_mtrr_get_guest_memory_type(vcpu, gfn);
> -
>  exit:
> +	if (cache == MTRR_TYPE_UNCACHABLE && !is_mmio) {
> +		/*
> +		 * We cannot set UC in the EPT page tables as it can cause
> +		 * machine check exceptions (??). Hopefully the guest is
> +		 * using PAT.
> +		 */
> +		cache = MTRR_TYPE_WRCOMB;

This is unnecessary. Setting UC in the EPT tables will never directly lead
to #MC. Forcing WC is likely more dangerous.

> +	}
> +
>  	return (cache << VMX_EPT_MT_EPTE_SHIFT) | ipat;
> }
>
>
On Fri, Feb 14, 2020 at 2:26 AM Paolo Bonzini <pbonzini@redhat.com> wrote: > > On 13/02/20 23:18, Chia-I Wu wrote: > > > > The bug you mentioned was probably this one > > > > https://bugzilla.kernel.org/show_bug.cgi?id=104091 > > Yes, indeed. > > > From what I can tell, the commit allowed the guests to create cached > > mappings to MMIO regions and caused MCEs. That is different than what > > I need, which is to allow guests to create uncached mappings to system > > ram (i.e., !kvm_is_mmio_pfn) when the host userspace also has uncached > > mappings. But it is true that this still allows the userspace & guest > > kernel to create conflicting memory types. > > Right, the question is whether the MCEs were tied to MMIO regions > specifically and if so why. > > An interesting remark is in the footnote of table 11-7 in the SDM. > There, for the MTRR (EPT for us) memory type UC you can read: > > The UC attribute comes from the MTRRs and the processors are not > required to snoop their caches since the data could never have > been cached. This attribute is preferred for performance reasons. > > There are two possibilities: > > 1) the footnote doesn't apply to UC mode coming from EPT page tables. > That would make your change safe. > > 2) the footnote also applies when the UC attribute comes from the EPT > page tables rather than the MTRRs. In that case, the host should use > UC as the EPT page attribute if and only if it's consistent with the host > MTRRs; it would be more or less impossible to honor UC in the guest MTRRs. > In that case, something like the patch below would be needed. > > It is not clear from the manual why the footnote would not apply to WC; that > is, the manual doesn't say explicitly that the processor does not do snooping > for accesses to WC memory. But I guess that must be the case, which is why I > used MTRR_TYPE_WRCOMB in the patch below. > > Either way, we would have an explanation of why creating cached mapping to > MMIO regions would, and why in practice we're not seeing MCEs for guest RAM > (the guest would have set WB for that memory in its MTRRs, not UC). > > One thing you didn't say: how would userspace use KVM_MEM_DMA? On which > regions would it be set? It will be set for shmems that are mapped WC. GPU/DRM drivers allocate shmems as DMA-able gpu buffers and allow the userspace to map them cached or WC (I915_MMAP_WC or AMDGPU_GEM_CREATE_CPU_GTT_USWC for example). When a shmem is mapped WC and is made available to the guest, we would like the ability to map the region WC in the guest. > Thanks, > > Paolo > > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c > index dc331fb06495..2be6f7effa1d 100644 > --- a/arch/x86/kvm/vmx/vmx.c > +++ b/arch/x86/kvm/vmx/vmx.c > @@ -6920,8 +6920,16 @@ static u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio) > } > > cache = kvm_mtrr_get_guest_memory_type(vcpu, gfn); > - > exit: > + if (cache == MTRR_TYPE_UNCACHABLE && !is_mmio) { > + /* > + * We cannot set UC in the EPT page tables as it can cause > + * machine check exceptions (??). Hopefully the guest is > + * using PAT. > + */ > + cache = MTRR_TYPE_WRCOMB; > + } > + > return (cache << VMX_EPT_MT_EPTE_SHIFT) | ipat; > } > >
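For concreteness, a minimal userspace sketch of the flow described above, under the assumption that the proposed KVM_MEM_DMA flag exists: the VMM takes a WC CPU mapping it already obtained from the GPU driver (e.g. an i915 WC mmap) and registers it as a guest memory slot. The flag value, slot number and guest physical address below are placeholders.

#include <linux/kvm.h>
#include <sys/ioctl.h>
#include <stdint.h>

/* Proposed in this series; not part of the upstream uapi. */
#ifndef KVM_MEM_DMA
#define KVM_MEM_DMA (1u << 2)
#endif

/*
 * wc_ptr is assumed to be a page-aligned WC mapping of a GPU buffer handed
 * out by the host GPU driver; size is its page-aligned length.
 */
static int expose_wc_buffer(int vm_fd, void *wc_ptr, uint64_t size,
			    uint32_t slot, uint64_t gpa)
{
	struct kvm_userspace_memory_region region = {
		.slot = slot,
		.flags = KVM_MEM_DMA,	/* ask KVM to honor the guest memory type */
		.guest_phys_addr = gpa,
		.memory_size = size,
		.userspace_addr = (uintptr_t)wc_ptr,
	};

	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}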
On Fri, Feb 14, 2020 at 11:52 AM Sean Christopherson <sean.j.christopherson@intel.com> wrote: > > On Fri, Feb 14, 2020 at 11:26:06AM +0100, Paolo Bonzini wrote: > > On 13/02/20 23:18, Chia-I Wu wrote: > > > > > > The bug you mentioned was probably this one > > > > > > https://bugzilla.kernel.org/show_bug.cgi?id=104091 > > > > Yes, indeed. > > > > > From what I can tell, the commit allowed the guests to create cached > > > mappings to MMIO regions and caused MCEs. That is different than what > > > I need, which is to allow guests to create uncached mappings to system > > > ram (i.e., !kvm_is_mmio_pfn) when the host userspace also has uncached > > > mappings. But it is true that this still allows the userspace & guest > > > kernel to create conflicting memory types. > > This is ok. > > > Right, the question is whether the MCEs were tied to MMIO regions > > specifically and if so why. > > 99.99999% likelihood the answer is "yes". Cacheable accesses to non-existent > memory and most (all?) MMIO regions will cause a #MC. This includes > speculative accesses. > > Commit fd717f11015f ("KVM: x86: apply guest MTRR virtualization on host > reserved pages") explicitly had a comment "1. MMIO: trust guest MTRR", > which is basically a direct avenue to generating #MCs. > > IIRC, WC accesses to non-existent memory will also cause #MC, but KVM has > bigger problems if it has PRESENT EPTEs pointing at garbage. > > > An interesting remark is in the footnote of table 11-7 in the SDM. > > There, for the MTRR (EPT for us) memory type UC you can read: > > > > The UC attribute comes from the MTRRs and the processors are not > > required to snoop their caches since the data could never have > > been cached. This attribute is preferred for performance reasons. > > > > There are two possibilities: > > > > 1) the footnote doesn't apply to UC mode coming from EPT page tables. > > That would make your change safe. > > > > 2) the footnote also applies when the UC attribute comes from the EPT > > page tables rather than the MTRRs. In that case, the host should use > > UC as the EPT page attribute if and only if it's consistent with the host > > MTRRs; it would be more or less impossible to honor UC in the guest MTRRs. > > In that case, something like the patch below would be needed. > > (2), the EPTs effectively replace the MTRRs. The expectation being that > the VMM will use always use EPT memtypes consistent with the MTRRs. This is my understanding as well. > > It is not clear from the manual why the footnote would not apply to WC; that > > is, the manual doesn't say explicitly that the processor does not do snooping > > for accesses to WC memory. But I guess that must be the case, which is why I > > used MTRR_TYPE_WRCOMB in the patch below. > > A few paragraphs below table 11-12 states: > > In particular, a WC page must never be aliased to a cacheable page because > WC writes may not check the processor caches. > > > Either way, we would have an explanation of why creating cached mapping to > > MMIO regions would, and why in practice we're not seeing MCEs for guest RAM > > (the guest would have set WB for that memory in its MTRRs, not UC). > > Aliasing (physical) RAM with different memtypes won't cause #MC, just > memory corruption. What we need potentially gives the userspace (the guest kernel, to be exact) the ability to create conflicting memory types. If we can be sure that the worst scenario is for a guest to corrupt its own memory, by only allowing aliases on physical ram, that seems alright. 
AFAICT, it is currently allowed on ARM (verified) and AMD (not
verified, but svm_get_mt_mask returns 0 which supposedly means the NPT
does not restrict what the guest PAT can do). This diff would do the
trick for Intel without needing any uapi change:

diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index e3394c839dea..88af11cc551a 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -6905,12 +6905,6 @@ static u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
 		goto exit;
 	}
 
-	if (!kvm_arch_has_noncoherent_dma(vcpu->kvm)) {
-		ipat = VMX_EPT_IPAT_BIT;
-		cache = MTRR_TYPE_WRBACK;
-		goto exit;
-	}
-
 	if (kvm_read_cr0(vcpu) & X86_CR0_CD) {
 		ipat = VMX_EPT_IPAT_BIT;
 		if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED))

> > One thing you didn't say: how would userspace use KVM_MEM_DMA? On which
> > regions would it be set?
> >
> > Thanks,
> >
> > Paolo
> >
> > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > index dc331fb06495..2be6f7effa1d 100644
> > --- a/arch/x86/kvm/vmx/vmx.c
> > +++ b/arch/x86/kvm/vmx/vmx.c
> > @@ -6920,8 +6920,16 @@ static u64 vmx_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
> >  	}
> >
> >  	cache = kvm_mtrr_get_guest_memory_type(vcpu, gfn);
> > -
> >  exit:
> > +	if (cache == MTRR_TYPE_UNCACHABLE && !is_mmio) {
> > +		/*
> > +		 * We cannot set UC in the EPT page tables as it can cause
> > +		 * machine check exceptions (??). Hopefully the guest is
> > +		 * using PAT.
> > +		 */
> > +		cache = MTRR_TYPE_WRCOMB;
>
> This is unnecessary. Setting UC in the EPT tables will never directly lead
> to #MC. Forcing WC is likely more dangerous.
>
> > +	}
> > +
> >  	return (cache << VMX_EPT_MT_EPTE_SHIFT) | ipat;
> > }
> >
> >
On Fri, Feb 14, 2020 at 1:47 PM Chia-I Wu <olvaffe@gmail.com> wrote: > > On Fri, Feb 14, 2020 at 11:52 AM Sean Christopherson > <sean.j.christopherson@intel.com> wrote: > > > > On Fri, Feb 14, 2020 at 11:26:06AM +0100, Paolo Bonzini wrote: > > > On 13/02/20 23:18, Chia-I Wu wrote: > > > > > > > > The bug you mentioned was probably this one > > > > > > > > https://bugzilla.kernel.org/show_bug.cgi?id=104091 > > > > > > Yes, indeed. > > > > > > > From what I can tell, the commit allowed the guests to create cached > > > > mappings to MMIO regions and caused MCEs. That is different than what > > > > I need, which is to allow guests to create uncached mappings to system > > > > ram (i.e., !kvm_is_mmio_pfn) when the host userspace also has uncached > > > > mappings. But it is true that this still allows the userspace & guest > > > > kernel to create conflicting memory types. > > > > This is ok. > > > > > Right, the question is whether the MCEs were tied to MMIO regions > > > specifically and if so why. > > > > 99.99999% likelihood the answer is "yes". Cacheable accesses to non-existent > > memory and most (all?) MMIO regions will cause a #MC. This includes > > speculative accesses. > > > > Commit fd717f11015f ("KVM: x86: apply guest MTRR virtualization on host > > reserved pages") explicitly had a comment "1. MMIO: trust guest MTRR", > > which is basically a direct avenue to generating #MCs. > > > > IIRC, WC accesses to non-existent memory will also cause #MC, but KVM has > > bigger problems if it has PRESENT EPTEs pointing at garbage. > > > > > An interesting remark is in the footnote of table 11-7 in the SDM. > > > There, for the MTRR (EPT for us) memory type UC you can read: > > > > > > The UC attribute comes from the MTRRs and the processors are not > > > required to snoop their caches since the data could never have > > > been cached. This attribute is preferred for performance reasons. > > > > > > There are two possibilities: > > > > > > 1) the footnote doesn't apply to UC mode coming from EPT page tables. > > > That would make your change safe. > > > > > > 2) the footnote also applies when the UC attribute comes from the EPT > > > page tables rather than the MTRRs. In that case, the host should use > > > UC as the EPT page attribute if and only if it's consistent with the host > > > MTRRs; it would be more or less impossible to honor UC in the guest MTRRs. > > > In that case, something like the patch below would be needed. > > > > (2), the EPTs effectively replace the MTRRs. The expectation being that > > the VMM will use always use EPT memtypes consistent with the MTRRs. > This is my understanding as well. > > > > It is not clear from the manual why the footnote would not apply to WC; that > > > is, the manual doesn't say explicitly that the processor does not do snooping > > > for accesses to WC memory. But I guess that must be the case, which is why I > > > used MTRR_TYPE_WRCOMB in the patch below. > > > > A few paragraphs below table 11-12 states: > > > > In particular, a WC page must never be aliased to a cacheable page because > > WC writes may not check the processor caches. > > > > > Either way, we would have an explanation of why creating cached mapping to > > > MMIO regions would, and why in practice we're not seeing MCEs for guest RAM > > > (the guest would have set WB for that memory in its MTRRs, not UC). > > > > Aliasing (physical) RAM with different memtypes won't cause #MC, just > > memory corruption. 
>
> What we need potentially gives the userspace (the guest kernel, to be
> exact) the ability to create conflicting memory types. If we can be
> sure that the worst scenario is for a guest to corrupt its own memory,
> by only allowing aliases on physical ram, that seems alright.
>
> AFAICT, it is currently allowed on ARM (verified) and AMD (not
> verified, but svm_get_mt_mask returns 0 which supposedly means the NPT
> does not restrict what the guest PAT can do). This diff would do the
> trick for Intel without needing any uapi change:

I would be concerned about Intel CPU errata such as SKX40 and SKX59.
On Fri, Feb 14, 2020 at 01:56:48PM -0800, Jim Mattson wrote:
> On Fri, Feb 14, 2020 at 1:47 PM Chia-I Wu <olvaffe@gmail.com> wrote:
> > AFAICT, it is currently allowed on ARM (verified) and AMD (not
> > verified, but svm_get_mt_mask returns 0 which supposedly means the NPT
> > does not restrict what the guest PAT can do). This diff would do the
> > trick for Intel without needing any uapi change:
>
> I would be concerned about Intel CPU errata such as SKX40 and SKX59.

The part KVM cares about, #MC, is already addressed by forcing UC for MMIO.
The data corruption issue is on the guest kernel to correctly use WC
and/or non-temporal writes.
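For reference, "non-temporal writes" here means streaming stores that bypass the CPU caches; a small illustration (not from the thread) using SSE2 intrinsics, assuming a 16-byte-aligned destination and a multiple-of-16 length:

#include <emmintrin.h>	/* _mm_loadu_si128, _mm_stream_si128, _mm_sfence */
#include <stddef.h>

static void nt_copy(void *dst, const void *src, size_t len)
{
	__m128i *d = (__m128i *)dst;
	const __m128i *s = (const __m128i *)src;

	/* Streaming stores go around the cache, which is what WC memory wants. */
	for (size_t i = 0; i < len / 16; i++)
		_mm_stream_si128(&d[i], _mm_loadu_si128(&s[i]));

	_mm_sfence();	/* drain the write-combining buffers / order the stores */
}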
On 14/02/20 23:03, Sean Christopherson wrote:
>> On Fri, Feb 14, 2020 at 1:47 PM Chia-I Wu <olvaffe@gmail.com> wrote:
>>> AFAICT, it is currently allowed on ARM (verified) and AMD (not
>>> verified, but svm_get_mt_mask returns 0 which supposedly means the NPT
>>> does not restrict what the guest PAT can do). This diff would do the
>>> trick for Intel without needing any uapi change:
>> I would be concerned about Intel CPU errata such as SKX40 and SKX59.
> The part KVM cares about, #MC, is already addressed by forcing UC for MMIO.
> The data corruption issue is on the guest kernel to correctly use WC
> and/or non-temporal writes.

What about coherency across live migration? The userspace process would
use cached accesses, and also a WBINVD could potentially corrupt guest
memory.

Paolo
On Tue, Feb 18, 2020 at 05:28:51PM +0100, Paolo Bonzini wrote:
> On 14/02/20 23:03, Sean Christopherson wrote:
> >> On Fri, Feb 14, 2020 at 1:47 PM Chia-I Wu <olvaffe@gmail.com> wrote:
> >>> AFAICT, it is currently allowed on ARM (verified) and AMD (not
> >>> verified, but svm_get_mt_mask returns 0 which supposedly means the NPT
> >>> does not restrict what the guest PAT can do). This diff would do the
> >>> trick for Intel without needing any uapi change:
> >> I would be concerned about Intel CPU errata such as SKX40 and SKX59.
> > The part KVM cares about, #MC, is already addressed by forcing UC for MMIO.
> > The data corruption issue is on the guest kernel to correctly use WC
> > and/or non-temporal writes.
>
> What about coherency across live migration? The userspace process would
> use cached accesses, and also a WBINVD could potentially corrupt guest
> memory.

Haven't given it an iota of thought. My comments were purely in the context
of the SKX40 and SKX59 errata, which are silly bugs that no real world
software will ever hit, e.g. no sane software is going to split a MASKMOV
instruction across address spaces. Hitting SKX59 would further require the
kernel to map MMIO as WB directly adjacent to RAM, which is beyond crazy.
> From: Paolo Bonzini > Sent: Wednesday, February 19, 2020 12:29 AM > > On 14/02/20 23:03, Sean Christopherson wrote: > >> On Fri, Feb 14, 2020 at 1:47 PM Chia-I Wu <olvaffe@gmail.com> wrote: > >>> AFAICT, it is currently allowed on ARM (verified) and AMD (not > >>> verified, but svm_get_mt_mask returns 0 which supposedly means the > NPT > >>> does not restrict what the guest PAT can do). This diff would do the > >>> trick for Intel without needing any uapi change: > >> I would be concerned about Intel CPU errata such as SKX40 and SKX59. > > The part KVM cares about, #MC, is already addressed by forcing UC for > MMIO. > > The data corruption issue is on the guest kernel to correctly use WC > > and/or non-temporal writes. > > What about coherency across live migration? The userspace process would > use cached accesses, and also a WBINVD could potentially corrupt guest > memory. > In such case the userspace process possibly should conservatively use UC mapping, as if for MMIO regions on a passthrough device. However there remains a problem. the definition of KVM_MEM_DMA implies favoring guest setting, which could be whatever type in concept. Then assuming UC is also problematic. I'm not sure whether inventing another interface to query effective memory type from KVM is a good idea. There is no guarantee that the guest will use same type for every page in the same slot, then such interface might be messy. Alternatively, maybe we could just have an interface for KVM userspace to force memory type for a given slot, if it is mainly used in para-virtualized scenarios (e.g. virtio-gpu) where the guest is enlightened to use a forced type (e.g. WC)? Thanks Kevin
> From: Chia-I Wu > Sent: Saturday, February 15, 2020 5:15 AM > > On Fri, Feb 14, 2020 at 2:26 AM Paolo Bonzini <pbonzini@redhat.com> wrote: > > > > On 13/02/20 23:18, Chia-I Wu wrote: > > > > > > The bug you mentioned was probably this one > > > > > > https://bugzilla.kernel.org/show_bug.cgi?id=104091 > > > > Yes, indeed. > > > > > From what I can tell, the commit allowed the guests to create cached > > > mappings to MMIO regions and caused MCEs. That is different than what > > > I need, which is to allow guests to create uncached mappings to system > > > ram (i.e., !kvm_is_mmio_pfn) when the host userspace also has > uncached > > > mappings. But it is true that this still allows the userspace & guest > > > kernel to create conflicting memory types. > > > > Right, the question is whether the MCEs were tied to MMIO regions > > specifically and if so why. > > > > An interesting remark is in the footnote of table 11-7 in the SDM. > > There, for the MTRR (EPT for us) memory type UC you can read: > > > > The UC attribute comes from the MTRRs and the processors are not > > required to snoop their caches since the data could never have > > been cached. This attribute is preferred for performance reasons. > > > > There are two possibilities: > > > > 1) the footnote doesn't apply to UC mode coming from EPT page tables. > > That would make your change safe. > > > > 2) the footnote also applies when the UC attribute comes from the EPT > > page tables rather than the MTRRs. In that case, the host should use > > UC as the EPT page attribute if and only if it's consistent with the host > > MTRRs; it would be more or less impossible to honor UC in the guest > MTRRs. > > In that case, something like the patch below would be needed. > > > > It is not clear from the manual why the footnote would not apply to WC; > that > > is, the manual doesn't say explicitly that the processor does not do > snooping > > for accesses to WC memory. But I guess that must be the case, which is > why I > > used MTRR_TYPE_WRCOMB in the patch below. > > > > Either way, we would have an explanation of why creating cached mapping > to > > MMIO regions would, and why in practice we're not seeing MCEs for guest > RAM > > (the guest would have set WB for that memory in its MTRRs, not UC). > > > > One thing you didn't say: how would userspace use KVM_MEM_DMA? On > which > > regions would it be set? > It will be set for shmems that are mapped WC. > > GPU/DRM drivers allocate shmems as DMA-able gpu buffers and allow the > userspace to map them cached or WC (I915_MMAP_WC or > AMDGPU_GEM_CREATE_CPU_GTT_USWC for example). When a shmem is > mapped > WC and is made available to the guest, we would like the ability to > map the region WC in the guest. Curious... How is such slot exposed to the guest? A reserved memory region? Is it static or might be dynamically added? Thanks Kevin
On Wed, Feb 19, 2020 at 2:00 AM Tian, Kevin <kevin.tian@intel.com> wrote: > > > From: Chia-I Wu > > Sent: Saturday, February 15, 2020 5:15 AM > > > > On Fri, Feb 14, 2020 at 2:26 AM Paolo Bonzini <pbonzini@redhat.com> wrote: > > > > > > On 13/02/20 23:18, Chia-I Wu wrote: > > > > > > > > The bug you mentioned was probably this one > > > > > > > > https://bugzilla.kernel.org/show_bug.cgi?id=104091 > > > > > > Yes, indeed. > > > > > > > From what I can tell, the commit allowed the guests to create cached > > > > mappings to MMIO regions and caused MCEs. That is different than what > > > > I need, which is to allow guests to create uncached mappings to system > > > > ram (i.e., !kvm_is_mmio_pfn) when the host userspace also has > > uncached > > > > mappings. But it is true that this still allows the userspace & guest > > > > kernel to create conflicting memory types. > > > > > > Right, the question is whether the MCEs were tied to MMIO regions > > > specifically and if so why. > > > > > > An interesting remark is in the footnote of table 11-7 in the SDM. > > > There, for the MTRR (EPT for us) memory type UC you can read: > > > > > > The UC attribute comes from the MTRRs and the processors are not > > > required to snoop their caches since the data could never have > > > been cached. This attribute is preferred for performance reasons. > > > > > > There are two possibilities: > > > > > > 1) the footnote doesn't apply to UC mode coming from EPT page tables. > > > That would make your change safe. > > > > > > 2) the footnote also applies when the UC attribute comes from the EPT > > > page tables rather than the MTRRs. In that case, the host should use > > > UC as the EPT page attribute if and only if it's consistent with the host > > > MTRRs; it would be more or less impossible to honor UC in the guest > > MTRRs. > > > In that case, something like the patch below would be needed. > > > > > > It is not clear from the manual why the footnote would not apply to WC; > > that > > > is, the manual doesn't say explicitly that the processor does not do > > snooping > > > for accesses to WC memory. But I guess that must be the case, which is > > why I > > > used MTRR_TYPE_WRCOMB in the patch below. > > > > > > Either way, we would have an explanation of why creating cached mapping > > to > > > MMIO regions would, and why in practice we're not seeing MCEs for guest > > RAM > > > (the guest would have set WB for that memory in its MTRRs, not UC). > > > > > > One thing you didn't say: how would userspace use KVM_MEM_DMA? On > > which > > > regions would it be set? > > It will be set for shmems that are mapped WC. > > > > GPU/DRM drivers allocate shmems as DMA-able gpu buffers and allow the > > userspace to map them cached or WC (I915_MMAP_WC or > > AMDGPU_GEM_CREATE_CPU_GTT_USWC for example). When a shmem is > > mapped > > WC and is made available to the guest, we would like the ability to > > map the region WC in the guest. > > Curious... How is such slot exposed to the guest? A reserved memory > region? Is it static or might be dynamically added? The plan is for virtio-gpu device to reserve a huge memory region in the guest. Memslots may be added dynamically or statically to back the region. Dynamic: the host adds a 16MB GPU allocation as a memslot at a time. The guest kernel suballocates from the 16MB pool. Static: the host creates a huge PROT_NONE memfd and adds it as a memslot. GPU allocations are mremap()ed into the memfd region to provide the real mapping. 
These options are considered because the number of memslots is limited: 32 on
ARM and 509 on x86. If the number of memslots could be made larger (4096 or
more), we would also consider adding each individual GPU allocation as a
memslot.

These are actually questions on which we need feedback. Besides, GPU
allocations can be assumed to be kernel dma-bufs in this context. I wonder
if it makes sense to have a variation of KVM_SET_USER_MEMORY_REGION that
takes dma-bufs.

>
> Thanks
> Kevin
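A minimal sketch of the "static" option described above, assuming the VMM chooses the region size and per-buffer offsets. The message mentions a PROT_NONE memfd; for illustration the sketch simply reserves anonymous address space with PROT_NONE and then mremap()s each GPU buffer mapping into it, which serves the same purpose.

#define _GNU_SOURCE
#include <sys/mman.h>
#include <stdint.h>
#include <stddef.h>

/* Reserve a large, inaccessible region that backs one big memslot. */
static void *reserve_region(size_t region_size)
{
	return mmap(NULL, region_size, PROT_NONE,
		    MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
}

/*
 * Move an existing CPU mapping of a GPU buffer (e.g. a dma-buf or GEM mmap)
 * into the reserved region; MREMAP_FIXED replaces the PROT_NONE pages at the
 * destination with the real mapping.
 */
static int place_gpu_buffer(void *region_base, size_t offset,
			    void *gpu_map, size_t gpu_map_size)
{
	void *dst = (uint8_t *)region_base + offset;
	void *ret = mremap(gpu_map, gpu_map_size, gpu_map_size,
			   MREMAP_MAYMOVE | MREMAP_FIXED, dst);

	return ret == MAP_FAILED ? -1 : 0;
}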
On Wed, Feb 19, 2020 at 1:52 AM Tian, Kevin <kevin.tian@intel.com> wrote: > > > From: Paolo Bonzini > > Sent: Wednesday, February 19, 2020 12:29 AM > > > > On 14/02/20 23:03, Sean Christopherson wrote: > > >> On Fri, Feb 14, 2020 at 1:47 PM Chia-I Wu <olvaffe@gmail.com> wrote: > > >>> AFAICT, it is currently allowed on ARM (verified) and AMD (not > > >>> verified, but svm_get_mt_mask returns 0 which supposedly means the > > NPT > > >>> does not restrict what the guest PAT can do). This diff would do the > > >>> trick for Intel without needing any uapi change: > > >> I would be concerned about Intel CPU errata such as SKX40 and SKX59. > > > The part KVM cares about, #MC, is already addressed by forcing UC for > > MMIO. > > > The data corruption issue is on the guest kernel to correctly use WC > > > and/or non-temporal writes. > > > > What about coherency across live migration? The userspace process would > > use cached accesses, and also a WBINVD could potentially corrupt guest > > memory. > > > > In such case the userspace process possibly should conservatively use > UC mapping, as if for MMIO regions on a passthrough device. However > there remains a problem. the definition of KVM_MEM_DMA implies > favoring guest setting, which could be whatever type in concept. Then > assuming UC is also problematic. I'm not sure whether inventing another > interface to query effective memory type from KVM is a good idea. There > is no guarantee that the guest will use same type for every page in the > same slot, then such interface might be messy. Alternatively, maybe > we could just have an interface for KVM userspace to force memory type > for a given slot, if it is mainly used in para-virtualized scenarios (e.g. > virtio-gpu) where the guest is enlightened to use a forced type (e.g. WC)? KVM forcing the memory type for a given slot should work too. But the ignore-guest-pat bit seems to be Intel-specific. We will need to define how the second-level page attributes combine with the guest page attributes somehow. KVM should in theory be able to tell that the userspace region is mapped with a certain memory type and can force the same memory type onto the guest. The userspace does not need to be involved. But that sounds very slow? This may be a dumb question, but would it help to add KVM_SET_DMA_BUF and let KVM negotiate the memory type with the in-kernel GPU drivers? > > Thanks > Kevin
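Purely to make the KVM_SET_DMA_BUF idea concrete: no such ioctl exists, and every name below is hypothetical. The point would be that the VMM hands KVM a dma-buf fd plus a guest-physical range, and KVM can then ask the exporting driver which memory type is required instead of trusting either side.

#include <linux/types.h>
#include <linux/kvm.h>		/* KVMIO */

/* Hypothetical uapi; neither the struct nor the ioctl number is real. */
struct kvm_dma_buf_region {
	__u32 slot;		/* memslot (or subregion of one) to bind */
	__u32 dma_buf_fd;	/* dma-buf exported by the GPU driver */
	__u64 guest_phys_addr;
	__u64 size;
	__u64 offset;		/* offset into the dma-buf */
};

#define KVM_SET_DMA_BUF _IOW(KVMIO, 0xff, struct kvm_dma_buf_region)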
> From: Chia-I Wu <olvaffe@gmail.com> > Sent: Thursday, February 20, 2020 3:37 AM > > On Wed, Feb 19, 2020 at 1:52 AM Tian, Kevin <kevin.tian@intel.com> wrote: > > > > > From: Paolo Bonzini > > > Sent: Wednesday, February 19, 2020 12:29 AM > > > > > > On 14/02/20 23:03, Sean Christopherson wrote: > > > >> On Fri, Feb 14, 2020 at 1:47 PM Chia-I Wu <olvaffe@gmail.com> wrote: > > > >>> AFAICT, it is currently allowed on ARM (verified) and AMD (not > > > >>> verified, but svm_get_mt_mask returns 0 which supposedly means > the > > > NPT > > > >>> does not restrict what the guest PAT can do). This diff would do the > > > >>> trick for Intel without needing any uapi change: > > > >> I would be concerned about Intel CPU errata such as SKX40 and SKX59. > > > > The part KVM cares about, #MC, is already addressed by forcing UC for > > > MMIO. > > > > The data corruption issue is on the guest kernel to correctly use WC > > > > and/or non-temporal writes. > > > > > > What about coherency across live migration? The userspace process > would > > > use cached accesses, and also a WBINVD could potentially corrupt guest > > > memory. > > > > > > > In such case the userspace process possibly should conservatively use > > UC mapping, as if for MMIO regions on a passthrough device. However > > there remains a problem. the definition of KVM_MEM_DMA implies > > favoring guest setting, which could be whatever type in concept. Then > > assuming UC is also problematic. I'm not sure whether inventing another > > interface to query effective memory type from KVM is a good idea. There > > is no guarantee that the guest will use same type for every page in the > > same slot, then such interface might be messy. Alternatively, maybe > > we could just have an interface for KVM userspace to force memory type > > for a given slot, if it is mainly used in para-virtualized scenarios (e.g. > > virtio-gpu) where the guest is enlightened to use a forced type (e.g. WC)? > KVM forcing the memory type for a given slot should work too. But the > ignore-guest-pat bit seems to be Intel-specific. We will need to > define how the second-level page attributes combine with the guest > page attributes somehow. oh, I'm not aware of that difference. without an ipat-equivalent capability, I'm not sure how to forcing random type here. If you look at table 11-7 in Intel SDM, none of MTRR (EPT) memory type can lead to consistent effective type when combining with random PAT value. So it is definitely a dead end. > > KVM should in theory be able to tell that the userspace region is > mapped with a certain memory type and can force the same memory type > onto the guest. The userspace does not need to be involved. But that > sounds very slow? This may be a dumb question, but would it help to > add KVM_SET_DMA_BUF and let KVM negotiate the memory type with the > in-kernel GPU drivers? > > KVM_SET_DMA_BUF looks more reasonable. But I guess we don't need KVM to be aware of such negotiation. We can continue your original proposal to have KVM simply favor guest memory type (maybe still call KVM_MEM_DMA). On the other hand, Qemu should just mmap on the fd handle of the dmabuf passed from the virtio-gpu device backend, e.g. to conduct migration. That way the mmap request is finally served by DRM and underlying GPU drivers, with proper type enforced automatically. Thanks Kevin
> From: Chia-I Wu <olvaffe@gmail.com> > Sent: Thursday, February 20, 2020 3:18 AM > > On Wed, Feb 19, 2020 at 2:00 AM Tian, Kevin <kevin.tian@intel.com> wrote: > > > > > From: Chia-I Wu > > > Sent: Saturday, February 15, 2020 5:15 AM > > > > > > On Fri, Feb 14, 2020 at 2:26 AM Paolo Bonzini <pbonzini@redhat.com> > wrote: > > > > > > > > On 13/02/20 23:18, Chia-I Wu wrote: > > > > > > > > > > The bug you mentioned was probably this one > > > > > > > > > > https://bugzilla.kernel.org/show_bug.cgi?id=104091 > > > > > > > > Yes, indeed. > > > > > > > > > From what I can tell, the commit allowed the guests to create cached > > > > > mappings to MMIO regions and caused MCEs. That is different than > what > > > > > I need, which is to allow guests to create uncached mappings to > system > > > > > ram (i.e., !kvm_is_mmio_pfn) when the host userspace also has > > > uncached > > > > > mappings. But it is true that this still allows the userspace & guest > > > > > kernel to create conflicting memory types. > > > > > > > > Right, the question is whether the MCEs were tied to MMIO regions > > > > specifically and if so why. > > > > > > > > An interesting remark is in the footnote of table 11-7 in the SDM. > > > > There, for the MTRR (EPT for us) memory type UC you can read: > > > > > > > > The UC attribute comes from the MTRRs and the processors are not > > > > required to snoop their caches since the data could never have > > > > been cached. This attribute is preferred for performance reasons. > > > > > > > > There are two possibilities: > > > > > > > > 1) the footnote doesn't apply to UC mode coming from EPT page tables. > > > > That would make your change safe. > > > > > > > > 2) the footnote also applies when the UC attribute comes from the EPT > > > > page tables rather than the MTRRs. In that case, the host should use > > > > UC as the EPT page attribute if and only if it's consistent with the host > > > > MTRRs; it would be more or less impossible to honor UC in the guest > > > MTRRs. > > > > In that case, something like the patch below would be needed. > > > > > > > > It is not clear from the manual why the footnote would not apply to WC; > > > that > > > > is, the manual doesn't say explicitly that the processor does not do > > > snooping > > > > for accesses to WC memory. But I guess that must be the case, which is > > > why I > > > > used MTRR_TYPE_WRCOMB in the patch below. > > > > > > > > Either way, we would have an explanation of why creating cached > mapping > > > to > > > > MMIO regions would, and why in practice we're not seeing MCEs for > guest > > > RAM > > > > (the guest would have set WB for that memory in its MTRRs, not UC). > > > > > > > > One thing you didn't say: how would userspace use KVM_MEM_DMA? > On > > > which > > > > regions would it be set? > > > It will be set for shmems that are mapped WC. > > > > > > GPU/DRM drivers allocate shmems as DMA-able gpu buffers and allow > the > > > userspace to map them cached or WC (I915_MMAP_WC or > > > AMDGPU_GEM_CREATE_CPU_GTT_USWC for example). When a shmem > is > > > mapped > > > WC and is made available to the guest, we would like the ability to > > > map the region WC in the guest. > > > > Curious... How is such slot exposed to the guest? A reserved memory > > region? Is it static or might be dynamically added? > The plan is for virtio-gpu device to reserve a huge memory region in > the guest. Memslots may be added dynamically or statically to back > the region. 
so the region is marked as E820_RESERVED to prevent guest kernel from using it for other purpose and then virtio-gpu device will report virtio-gpu driver of the exact location of the region through its own interface? > > Dynamic: the host adds a 16MB GPU allocation as a memslot at a time. > The guest kernel suballocates from the 16MB pool. > > Static: the host creates a huge PROT_NONE memfd and adds it as a > memslot. GPU allocations are mremap()ed into the memfd region to > provide the real mapping. > > These options are considered because the number of memslots are > limited: 32 on ARM and 509 on x86. If the number of memslots could be > made larger (4096 or more), we would also consider adding each > individual GPU allocation as a memslot. > > These are actually questions we need feedback. Besides, GPU > allocations can be assumed to be kernel dma-bufs in this context. I > wonder if it makes sense to have a variation of > KVM_SET_USER_MEMORY_REGION that takes dma-bufs. I feel it makes more sense. Thanks Kevin
> From: Tian, Kevin > Sent: Thursday, February 20, 2020 10:05 AM > > > From: Chia-I Wu <olvaffe@gmail.com> > > Sent: Thursday, February 20, 2020 3:37 AM > > > > On Wed, Feb 19, 2020 at 1:52 AM Tian, Kevin <kevin.tian@intel.com> wrote: > > > > > > > From: Paolo Bonzini > > > > Sent: Wednesday, February 19, 2020 12:29 AM > > > > > > > > On 14/02/20 23:03, Sean Christopherson wrote: > > > > >> On Fri, Feb 14, 2020 at 1:47 PM Chia-I Wu <olvaffe@gmail.com> > wrote: > > > > >>> AFAICT, it is currently allowed on ARM (verified) and AMD (not > > > > >>> verified, but svm_get_mt_mask returns 0 which supposedly means > > the > > > > NPT > > > > >>> does not restrict what the guest PAT can do). This diff would do the > > > > >>> trick for Intel without needing any uapi change: > > > > >> I would be concerned about Intel CPU errata such as SKX40 and > SKX59. > > > > > The part KVM cares about, #MC, is already addressed by forcing UC > for > > > > MMIO. > > > > > The data corruption issue is on the guest kernel to correctly use WC > > > > > and/or non-temporal writes. > > > > > > > > What about coherency across live migration? The userspace process > > would > > > > use cached accesses, and also a WBINVD could potentially corrupt guest > > > > memory. > > > > > > > > > > In such case the userspace process possibly should conservatively use > > > UC mapping, as if for MMIO regions on a passthrough device. However > > > there remains a problem. the definition of KVM_MEM_DMA implies > > > favoring guest setting, which could be whatever type in concept. Then > > > assuming UC is also problematic. I'm not sure whether inventing another > > > interface to query effective memory type from KVM is a good idea. There > > > is no guarantee that the guest will use same type for every page in the > > > same slot, then such interface might be messy. Alternatively, maybe > > > we could just have an interface for KVM userspace to force memory type > > > for a given slot, if it is mainly used in para-virtualized scenarios (e.g. > > > virtio-gpu) where the guest is enlightened to use a forced type (e.g. WC)? > > KVM forcing the memory type for a given slot should work too. But the > > ignore-guest-pat bit seems to be Intel-specific. We will need to > > define how the second-level page attributes combine with the guest > > page attributes somehow. > > oh, I'm not aware of that difference. without an ipat-equivalent > capability, I'm not sure how to forcing random type here. If you look at > table 11-7 in Intel SDM, none of MTRR (EPT) memory type can lead to > consistent effective type when combining with random PAT value. So > it is definitely a dead end. > > > > > KVM should in theory be able to tell that the userspace region is > > mapped with a certain memory type and can force the same memory type > > onto the guest. The userspace does not need to be involved. But that > > sounds very slow? This may be a dumb question, but would it help to > > add KVM_SET_DMA_BUF and let KVM negotiate the memory type with the > > in-kernel GPU drivers? > > > > > > KVM_SET_DMA_BUF looks more reasonable. But I guess we don't need > KVM to be aware of such negotiation. We can continue your original > proposal to have KVM simply favor guest memory type (maybe still call > KVM_MEM_DMA). On the other hand, Qemu should just mmap on the > fd handle of the dmabuf passed from the virtio-gpu device backend, e.g. > to conduct migration. 
That way the mmap request is finally served by > DRM and underlying GPU drivers, with proper type enforced automatically. > Thinking more possibly we don't need introduce new interface to KVM. As long as Qemu uses dmabuf interface to mmap the specific region, KVM can simply check memory type in host page table given hva of a memslot. If the type is UC or WC, it implies that userspace wants a non-coherent mapping which should be reflected in the guest side too. In such case, KVM can go to non-cohenrent DMA path and favor guest memory type automatically. Thanks Kevin
On Wed, Feb 19, 2020 at 6:38 PM Tian, Kevin <kevin.tian@intel.com> wrote: > > > From: Tian, Kevin > > Sent: Thursday, February 20, 2020 10:05 AM > > > > > From: Chia-I Wu <olvaffe@gmail.com> > > > Sent: Thursday, February 20, 2020 3:37 AM > > > > > > On Wed, Feb 19, 2020 at 1:52 AM Tian, Kevin <kevin.tian@intel.com> wrote: > > > > > > > > > From: Paolo Bonzini > > > > > Sent: Wednesday, February 19, 2020 12:29 AM > > > > > > > > > > On 14/02/20 23:03, Sean Christopherson wrote: > > > > > >> On Fri, Feb 14, 2020 at 1:47 PM Chia-I Wu <olvaffe@gmail.com> > > wrote: > > > > > >>> AFAICT, it is currently allowed on ARM (verified) and AMD (not > > > > > >>> verified, but svm_get_mt_mask returns 0 which supposedly means > > > the > > > > > NPT > > > > > >>> does not restrict what the guest PAT can do). This diff would do the > > > > > >>> trick for Intel without needing any uapi change: > > > > > >> I would be concerned about Intel CPU errata such as SKX40 and > > SKX59. > > > > > > The part KVM cares about, #MC, is already addressed by forcing UC > > for > > > > > MMIO. > > > > > > The data corruption issue is on the guest kernel to correctly use WC > > > > > > and/or non-temporal writes. > > > > > > > > > > What about coherency across live migration? The userspace process > > > would > > > > > use cached accesses, and also a WBINVD could potentially corrupt guest > > > > > memory. > > > > > > > > > > > > > In such case the userspace process possibly should conservatively use > > > > UC mapping, as if for MMIO regions on a passthrough device. However > > > > there remains a problem. the definition of KVM_MEM_DMA implies > > > > favoring guest setting, which could be whatever type in concept. Then > > > > assuming UC is also problematic. I'm not sure whether inventing another > > > > interface to query effective memory type from KVM is a good idea. There > > > > is no guarantee that the guest will use same type for every page in the > > > > same slot, then such interface might be messy. Alternatively, maybe > > > > we could just have an interface for KVM userspace to force memory type > > > > for a given slot, if it is mainly used in para-virtualized scenarios (e.g. > > > > virtio-gpu) where the guest is enlightened to use a forced type (e.g. WC)? > > > KVM forcing the memory type for a given slot should work too. But the > > > ignore-guest-pat bit seems to be Intel-specific. We will need to > > > define how the second-level page attributes combine with the guest > > > page attributes somehow. > > > > oh, I'm not aware of that difference. without an ipat-equivalent > > capability, I'm not sure how to forcing random type here. If you look at > > table 11-7 in Intel SDM, none of MTRR (EPT) memory type can lead to > > consistent effective type when combining with random PAT value. So > > it is definitely a dead end. > > > > > > > > KVM should in theory be able to tell that the userspace region is > > > mapped with a certain memory type and can force the same memory type > > > onto the guest. The userspace does not need to be involved. But that > > > sounds very slow? This may be a dumb question, but would it help to > > > add KVM_SET_DMA_BUF and let KVM negotiate the memory type with the > > > in-kernel GPU drivers? > > > > > > > > > > KVM_SET_DMA_BUF looks more reasonable. But I guess we don't need > > KVM to be aware of such negotiation. We can continue your original > > proposal to have KVM simply favor guest memory type (maybe still call > > KVM_MEM_DMA). 
On the other hand, Qemu should just mmap on the > > fd handle of the dmabuf passed from the virtio-gpu device backend, e.g. > > to conduct migration. That way the mmap request is finally served by > > DRM and underlying GPU drivers, with proper type enforced automatically. > > > > Thinking more possibly we don't need introduce new interface to KVM. > As long as Qemu uses dmabuf interface to mmap the specific region, > KVM can simply check memory type in host page table given hva of a > memslot. If the type is UC or WC, it implies that userspace wants a > non-coherent mapping which should be reflected in the guest side too. > In such case, KVM can go to non-cohenrent DMA path and favor guest > memory type automatically. Sorry, I mixed two things together. Userspace access to dmabuf mmap must be guarded by DMA_BUF_SYNC_{START,END} ioctls. It is possible that the GPU driver always picks a WB mapping and let the ioctls flush/invalidate CPU caches. We actually want the guest memory type to match vkMapMemory's memory type, which can be different from dmabuf mmap's memory type. It is not enough for KVM to inspect the hva's memory type. KVM_SET_DMA_BUF, if supported, is a signal to KVM that the guest memory type should be honored (or forced if there is a new op in dma_buf_ops that tells KVM which memory type to force). KVM_MEM_DMA flag in this RFC sends the same signal. Unless KVM_SET_DMA_BUF gives the userspace other features such as setting unlimited number of dmabufs to subregions of a memslot, it is not very useful. If uapi change is to be avoided, it is the easiest that guest memory type is always honored unless it causes #MC (i.e.,is_mmio==true). > > Thanks > Kevin
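The DMA_BUF_SYNC bracketing mentioned above uses the existing dma-buf uapi; a short illustration, where the dma-buf fd and its mmap are assumed to come from the GPU driver:

#include <linux/dma-buf.h>
#include <sys/ioctl.h>
#include <stddef.h>
#include <string.h>

static void fill_buffer(int dmabuf_fd, void *map, size_t size)
{
	struct dma_buf_sync sync = {
		.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_WRITE,
	};

	/* The exporter may flush/invalidate CPU caches here if the mmap is WB. */
	ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);

	memset(map, 0, size);		/* CPU writes to the dma-buf mapping */

	sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_WRITE;
	ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);
}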
On Wed, Feb 19, 2020 at 6:13 PM Tian, Kevin <kevin.tian@intel.com> wrote:
> > > Curious... How is such slot exposed to the guest? A reserved memory
> > > region? Is it static or might be dynamically added?
> > The plan is for virtio-gpu device to reserve a huge memory region in
> > the guest. Memslots may be added dynamically or statically to back
> > the region.
>
> so the region is marked as E820_RESERVED to prevent guest kernel
> from using it for other purpose and then virtio-gpu device will report
> virtio-gpu driver of the exact location of the region through its own
> interface?

The current plan is that the virtio-gpu device will have a BAR for the
region, which is like the vram aperture on real GPUs. The virtio-gpu driver
manages the region like it manages vram.

When the guest userspace allocates from vram, the guest kernel reserves an
unused range from the region and tells the host the offset. The host
allocates a real GPU buffer, maps the buffer, and adds a memslot with
gpa == bar_base + offset (or mremap). When the guest userspace mmaps, the
guest kernel does an io_remap_pfn_range.
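A rough guest-side sketch of the mmap path described above; apart from pgprot_writecombine() and io_remap_pfn_range(), the function name and the way the driver tracks the BAR base and per-object offset are assumptions.

#include <linux/mm.h>
#include <linux/io.h>

static int gpu_object_mmap(struct vm_area_struct *vma,
			   phys_addr_t bar_base, unsigned long offset)
{
	unsigned long size = vma->vm_end - vma->vm_start;
	unsigned long pfn = (bar_base + offset) >> PAGE_SHIFT;

	/* Guest PTEs get WC; the proposal is for the EPT to stop overriding it. */
	vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);

	return io_remap_pfn_range(vma, vma->vm_start, pfn, size,
				  vma->vm_page_prot);
}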
> From: Chia-I Wu <olvaffe@gmail.com> > Sent: Friday, February 21, 2020 6:24 AM > > On Wed, Feb 19, 2020 at 6:38 PM Tian, Kevin <kevin.tian@intel.com> wrote: > > > > > From: Tian, Kevin > > > Sent: Thursday, February 20, 2020 10:05 AM > > > > > > > From: Chia-I Wu <olvaffe@gmail.com> > > > > Sent: Thursday, February 20, 2020 3:37 AM > > > > > > > > On Wed, Feb 19, 2020 at 1:52 AM Tian, Kevin <kevin.tian@intel.com> > wrote: > > > > > > > > > > > From: Paolo Bonzini > > > > > > Sent: Wednesday, February 19, 2020 12:29 AM > > > > > > > > > > > > On 14/02/20 23:03, Sean Christopherson wrote: > > > > > > >> On Fri, Feb 14, 2020 at 1:47 PM Chia-I Wu <olvaffe@gmail.com> > > > wrote: > > > > > > >>> AFAICT, it is currently allowed on ARM (verified) and AMD (not > > > > > > >>> verified, but svm_get_mt_mask returns 0 which supposedly > means > > > > the > > > > > > NPT > > > > > > >>> does not restrict what the guest PAT can do). This diff would do > the > > > > > > >>> trick for Intel without needing any uapi change: > > > > > > >> I would be concerned about Intel CPU errata such as SKX40 and > > > SKX59. > > > > > > > The part KVM cares about, #MC, is already addressed by forcing > UC > > > for > > > > > > MMIO. > > > > > > > The data corruption issue is on the guest kernel to correctly use > WC > > > > > > > and/or non-temporal writes. > > > > > > > > > > > > What about coherency across live migration? The userspace > process > > > > would > > > > > > use cached accesses, and also a WBINVD could potentially corrupt > guest > > > > > > memory. > > > > > > > > > > > > > > > > In such case the userspace process possibly should conservatively use > > > > > UC mapping, as if for MMIO regions on a passthrough device. > However > > > > > there remains a problem. the definition of KVM_MEM_DMA implies > > > > > favoring guest setting, which could be whatever type in concept. Then > > > > > assuming UC is also problematic. I'm not sure whether inventing > another > > > > > interface to query effective memory type from KVM is a good idea. > There > > > > > is no guarantee that the guest will use same type for every page in the > > > > > same slot, then such interface might be messy. Alternatively, maybe > > > > > we could just have an interface for KVM userspace to force memory > type > > > > > for a given slot, if it is mainly used in para-virtualized scenarios (e.g. > > > > > virtio-gpu) where the guest is enlightened to use a forced type (e.g. > WC)? > > > > KVM forcing the memory type for a given slot should work too. But the > > > > ignore-guest-pat bit seems to be Intel-specific. We will need to > > > > define how the second-level page attributes combine with the guest > > > > page attributes somehow. > > > > > > oh, I'm not aware of that difference. without an ipat-equivalent > > > capability, I'm not sure how to forcing random type here. If you look at > > > table 11-7 in Intel SDM, none of MTRR (EPT) memory type can lead to > > > consistent effective type when combining with random PAT value. So > > > it is definitely a dead end. > > > > > > > > > > > KVM should in theory be able to tell that the userspace region is > > > > mapped with a certain memory type and can force the same memory > type > > > > onto the guest. The userspace does not need to be involved. But that > > > > sounds very slow? This may be a dumb question, but would it help to > > > > add KVM_SET_DMA_BUF and let KVM negotiate the memory type with > the > > > > in-kernel GPU drivers? 
> > > > > > > > > > > > > > KVM_SET_DMA_BUF looks more reasonable. But I guess we don't need > > > KVM to be aware of such negotiation. We can continue your original > > > proposal to have KVM simply favor guest memory type (maybe still call > > > KVM_MEM_DMA). On the other hand, Qemu should just mmap on the > > > fd handle of the dmabuf passed from the virtio-gpu device backend, e.g. > > > to conduct migration. That way the mmap request is finally served by > > > DRM and underlying GPU drivers, with proper type enforced > automatically. > > > > > > > Thinking more possibly we don't need introduce new interface to KVM. > > As long as Qemu uses dmabuf interface to mmap the specific region, > > KVM can simply check memory type in host page table given hva of a > > memslot. If the type is UC or WC, it implies that userspace wants a > > non-coherent mapping which should be reflected in the guest side too. > > In such case, KVM can go to non-cohenrent DMA path and favor guest > > memory type automatically. > Sorry, I mixed two things together. > > Userspace access to dmabuf mmap must be guarded by > DMA_BUF_SYNC_{START,END} ioctls. It is possible that the GPU driver > always picks a WB mapping and let the ioctls flush/invalidate CPU > caches. We actually want the guest memory type to match vkMapMemory's > memory type, which can be different from dmabuf mmap's memory type. > It is not enough for KVM to inspect the hva's memory type. I'm not familiar with dmabuf and what is the difference between vkMapMemory and mmap. Just a simple thought that whatever memory type/synchronization enforced on the host userspace should ideally be applied to guest userspace too. e.g. in above example we possibly want the guest to use WB and issue flush/invalidate hypercalls to guard with other potential parallel operations in the host side. otherwise I cannot see how synchronization can be done when one use WB with sync primitives while the other simply use WC w/o such primitives. > > KVM_SET_DMA_BUF, if supported, is a signal to KVM that the guest > memory type should be honored (or forced if there is a new op in > dma_buf_ops that tells KVM which memory type to force). KVM_MEM_DMA > flag in this RFC sends the same signal. Unless KVM_SET_DMA_BUF gives > the userspace other features such as setting unlimited number of > dmabufs to subregions of a memslot, it is not very useful. the good part of a new interface is its simplicity, but only in slot granularity. instead having KVM to inspect hva can support page granularity, but adding run-time overhead. Let's see how Paolo thinks.
(resend because gmail did not format to plain text...) On Thu, Feb 20, 2020 at 8:45 PM Chia-I Wu <olvaffe@gmail.com> wrote: > > > > On Thu, Feb 20, 2020 at 4:23 PM Tian, Kevin <kevin.tian@intel.com> wrote: >> >> > From: Chia-I Wu <olvaffe@gmail.com> >> > Sent: Friday, February 21, 2020 6:24 AM >> > >> > On Wed, Feb 19, 2020 at 6:38 PM Tian, Kevin <kevin.tian@intel.com> wrote: >> > > >> > > > From: Tian, Kevin >> > > > Sent: Thursday, February 20, 2020 10:05 AM >> > > > >> > > > > From: Chia-I Wu <olvaffe@gmail.com> >> > > > > Sent: Thursday, February 20, 2020 3:37 AM >> > > > > >> > > > > On Wed, Feb 19, 2020 at 1:52 AM Tian, Kevin <kevin.tian@intel.com> >> > wrote: >> > > > > > >> > > > > > > From: Paolo Bonzini >> > > > > > > Sent: Wednesday, February 19, 2020 12:29 AM >> > > > > > > >> > > > > > > On 14/02/20 23:03, Sean Christopherson wrote: >> > > > > > > >> On Fri, Feb 14, 2020 at 1:47 PM Chia-I Wu <olvaffe@gmail.com> >> > > > wrote: >> > > > > > > >>> AFAICT, it is currently allowed on ARM (verified) and AMD (not >> > > > > > > >>> verified, but svm_get_mt_mask returns 0 which supposedly >> > means >> > > > > the >> > > > > > > NPT >> > > > > > > >>> does not restrict what the guest PAT can do). This diff would do >> > the >> > > > > > > >>> trick for Intel without needing any uapi change: >> > > > > > > >> I would be concerned about Intel CPU errata such as SKX40 and >> > > > SKX59. >> > > > > > > > The part KVM cares about, #MC, is already addressed by forcing >> > UC >> > > > for >> > > > > > > MMIO. >> > > > > > > > The data corruption issue is on the guest kernel to correctly use >> > WC >> > > > > > > > and/or non-temporal writes. >> > > > > > > >> > > > > > > What about coherency across live migration? The userspace >> > process >> > > > > would >> > > > > > > use cached accesses, and also a WBINVD could potentially corrupt >> > guest >> > > > > > > memory. >> > > > > > > >> > > > > > >> > > > > > In such case the userspace process possibly should conservatively use >> > > > > > UC mapping, as if for MMIO regions on a passthrough device. >> > However >> > > > > > there remains a problem. the definition of KVM_MEM_DMA implies >> > > > > > favoring guest setting, which could be whatever type in concept. Then >> > > > > > assuming UC is also problematic. I'm not sure whether inventing >> > another >> > > > > > interface to query effective memory type from KVM is a good idea. >> > There >> > > > > > is no guarantee that the guest will use same type for every page in the >> > > > > > same slot, then such interface might be messy. Alternatively, maybe >> > > > > > we could just have an interface for KVM userspace to force memory >> > type >> > > > > > for a given slot, if it is mainly used in para-virtualized scenarios (e.g. >> > > > > > virtio-gpu) where the guest is enlightened to use a forced type (e.g. >> > WC)? >> > > > > KVM forcing the memory type for a given slot should work too. But the >> > > > > ignore-guest-pat bit seems to be Intel-specific. We will need to >> > > > > define how the second-level page attributes combine with the guest >> > > > > page attributes somehow. >> > > > >> > > > oh, I'm not aware of that difference. without an ipat-equivalent >> > > > capability, I'm not sure how to forcing random type here. If you look at >> > > > table 11-7 in Intel SDM, none of MTRR (EPT) memory type can lead to >> > > > consistent effective type when combining with random PAT value. So >> > > > it is definitely a dead end. 
>> > > > >> > > > > >> > > > > KVM should in theory be able to tell that the userspace region is >> > > > > mapped with a certain memory type and can force the same memory >> > type >> > > > > onto the guest. The userspace does not need to be involved. But that >> > > > > sounds very slow? This may be a dumb question, but would it help to >> > > > > add KVM_SET_DMA_BUF and let KVM negotiate the memory type with >> > the >> > > > > in-kernel GPU drivers? >> > > > > >> > > > > >> > > > >> > > > KVM_SET_DMA_BUF looks more reasonable. But I guess we don't need >> > > > KVM to be aware of such negotiation. We can continue your original >> > > > proposal to have KVM simply favor guest memory type (maybe still call >> > > > KVM_MEM_DMA). On the other hand, Qemu should just mmap on the >> > > > fd handle of the dmabuf passed from the virtio-gpu device backend, e.g. >> > > > to conduct migration. That way the mmap request is finally served by >> > > > DRM and underlying GPU drivers, with proper type enforced >> > automatically. >> > > > >> > > >> > > Thinking more possibly we don't need introduce new interface to KVM. >> > > As long as Qemu uses dmabuf interface to mmap the specific region, >> > > KVM can simply check memory type in host page table given hva of a >> > > memslot. If the type is UC or WC, it implies that userspace wants a >> > > non-coherent mapping which should be reflected in the guest side too. >> > > In such case, KVM can go to non-cohenrent DMA path and favor guest >> > > memory type automatically. >> > Sorry, I mixed two things together. >> > >> > Userspace access to dmabuf mmap must be guarded by >> > DMA_BUF_SYNC_{START,END} ioctls. It is possible that the GPU driver >> > always picks a WB mapping and let the ioctls flush/invalidate CPU >> > caches. We actually want the guest memory type to match vkMapMemory's >> > memory type, which can be different from dmabuf mmap's memory type. >> > It is not enough for KVM to inspect the hva's memory type. >> >> I'm not familiar with dmabuf and what is the difference between >> vkMapMemory and mmap. Just a simple thought that whatever >> memory type/synchronization enforced on the host userspace should >> ideally be applied to guest userspace too. e.g. in above example we >> possibly want the guest to use WB and issue flush/invalidate hypercalls >> to guard with other potential parallel operations in the host side. >> otherwise I cannot see how synchronization can be done when one >> use WB with sync primitives while the other simply use WC w/o such >> primitives. > I am reasonably familiar with the GPU stacks, but I am not familiar with KVM :) When allocating a GPU memory, the userspace can specify whether it wants a coherent one or an incoherent one. vkMapMemory returns a coherent or a incoherent mapping respectively. Indeed we also want the guest userspace to have a coherent or a incoherent mapping respectively. The GPU memory can be exported as a dmabuf to share with another device or process. For security, we allocate the GPU memory in a GPU process and we export the dmabuf to the hypervisor. mmap of dmabuf semantically returns an incoherent mapping. As a result, the guest will set up a mapping that has the same memory type as the vkMapMemory mapping does, but the hva in KVM_SET_USER_MEMORY_REGION points to the dmabuf's incoherent mapping. If you think it is the best for KVM to inspect hva to determine the memory type with page granularity, that is reasonable and should work for us too. 
The userspace can do something (e.g., add a GPU driver dependency to the hypervisor such that the dma-buf is imported as a GPU memory and mapped using vkMapMemory) or I can work with dma-buf maintainers to see if dma-buf's semantics can be changed. > >> >> > >> > KVM_SET_DMA_BUF, if supported, is a signal to KVM that the guest >> > memory type should be honored (or forced if there is a new op in >> > dma_buf_ops that tells KVM which memory type to force). KVM_MEM_DMA >> > flag in this RFC sends the same signal. Unless KVM_SET_DMA_BUF gives >> > the userspace other features such as setting unlimited number of >> > dmabufs to subregions of a memslot, it is not very useful. >> >> the good part of a new interface is its simplicity, but only in slot >> granularity. instead having KVM to inspect hva can support page >> granularity, but adding run-time overhead. Let's see how Paolo >> thinks. >> >> > >> > If uapi change is to be avoided, it is the easiest that guest memory >> > type is always honored unless it causes #MC (i.e.,is_mmio==true). >> > >> >> I feel this goes too far... >> >> Thanks >> Kevin
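To make the vkMapMemory side concrete, a simplified sketch of how the host
user of such an allocation handles the coherent vs. incoherent cases
(standard Vulkan API, error handling omitted):

#include <string.h>
#include <vulkan/vulkan.h>

/* 'coherent' reflects whether the memory type had HOST_COHERENT_BIT set */
static void upload(VkDevice dev, VkDeviceMemory mem, VkDeviceSize size,
                   const void *src, int coherent)
{
    void *map;

    vkMapMemory(dev, mem, 0, size, 0, &map);
    memcpy(map, src, size);
    if (!coherent) {
        /* incoherent mapping: CPU writes must be flushed explicitly */
        VkMappedMemoryRange range = {
            .sType = VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE,
            .memory = mem,
            .offset = 0,
            .size = VK_WHOLE_SIZE,
        };
        vkFlushMappedMemoryRanges(dev, 1, &range);
    }
    vkUnmapMemory(dev, mem);
}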
> From: Chia-I Wu <olvaffe@gmail.com> > Sent: Friday, February 21, 2020 12:51 PM > > (resend because gmail did not format to plain text...) > > On Thu, Feb 20, 2020 at 8:45 PM Chia-I Wu <olvaffe@gmail.com> wrote: > > > > > > > > On Thu, Feb 20, 2020 at 4:23 PM Tian, Kevin <kevin.tian@intel.com> wrote: > >> > >> > From: Chia-I Wu <olvaffe@gmail.com> > >> > Sent: Friday, February 21, 2020 6:24 AM > >> > > >> > On Wed, Feb 19, 2020 at 6:38 PM Tian, Kevin <kevin.tian@intel.com> > wrote: > >> > > > >> > > > From: Tian, Kevin > >> > > > Sent: Thursday, February 20, 2020 10:05 AM > >> > > > > >> > > > > From: Chia-I Wu <olvaffe@gmail.com> > >> > > > > Sent: Thursday, February 20, 2020 3:37 AM > >> > > > > > >> > > > > On Wed, Feb 19, 2020 at 1:52 AM Tian, Kevin > <kevin.tian@intel.com> > >> > wrote: > >> > > > > > > >> > > > > > > From: Paolo Bonzini > >> > > > > > > Sent: Wednesday, February 19, 2020 12:29 AM > >> > > > > > > > >> > > > > > > On 14/02/20 23:03, Sean Christopherson wrote: > >> > > > > > > >> On Fri, Feb 14, 2020 at 1:47 PM Chia-I Wu > <olvaffe@gmail.com> > >> > > > wrote: > >> > > > > > > >>> AFAICT, it is currently allowed on ARM (verified) and AMD > (not > >> > > > > > > >>> verified, but svm_get_mt_mask returns 0 which supposedly > >> > means > >> > > > > the > >> > > > > > > NPT > >> > > > > > > >>> does not restrict what the guest PAT can do). This diff > would do > >> > the > >> > > > > > > >>> trick for Intel without needing any uapi change: > >> > > > > > > >> I would be concerned about Intel CPU errata such as SKX40 > and > >> > > > SKX59. > >> > > > > > > > The part KVM cares about, #MC, is already addressed by > forcing > >> > UC > >> > > > for > >> > > > > > > MMIO. > >> > > > > > > > The data corruption issue is on the guest kernel to correctly > use > >> > WC > >> > > > > > > > and/or non-temporal writes. > >> > > > > > > > >> > > > > > > What about coherency across live migration? The userspace > >> > process > >> > > > > would > >> > > > > > > use cached accesses, and also a WBINVD could potentially > corrupt > >> > guest > >> > > > > > > memory. > >> > > > > > > > >> > > > > > > >> > > > > > In such case the userspace process possibly should conservatively > use > >> > > > > > UC mapping, as if for MMIO regions on a passthrough device. > >> > However > >> > > > > > there remains a problem. the definition of KVM_MEM_DMA > implies > >> > > > > > favoring guest setting, which could be whatever type in concept. > Then > >> > > > > > assuming UC is also problematic. I'm not sure whether inventing > >> > another > >> > > > > > interface to query effective memory type from KVM is a good > idea. > >> > There > >> > > > > > is no guarantee that the guest will use same type for every page > in the > >> > > > > > same slot, then such interface might be messy. Alternatively, > maybe > >> > > > > > we could just have an interface for KVM userspace to force > memory > >> > type > >> > > > > > for a given slot, if it is mainly used in para-virtualized scenarios > (e.g. > >> > > > > > virtio-gpu) where the guest is enlightened to use a forced type > (e.g. > >> > WC)? > >> > > > > KVM forcing the memory type for a given slot should work too. But > the > >> > > > > ignore-guest-pat bit seems to be Intel-specific. We will need to > >> > > > > define how the second-level page attributes combine with the > guest > >> > > > > page attributes somehow. > >> > > > > >> > > > oh, I'm not aware of that difference. 
without an ipat-equivalent > >> > > > capability, I'm not sure how to forcing random type here. If you look > at > >> > > > table 11-7 in Intel SDM, none of MTRR (EPT) memory type can lead > to > >> > > > consistent effective type when combining with random PAT value. So > >> > > > it is definitely a dead end. > >> > > > > >> > > > > > >> > > > > KVM should in theory be able to tell that the userspace region is > >> > > > > mapped with a certain memory type and can force the same > memory > >> > type > >> > > > > onto the guest. The userspace does not need to be involved. But > that > >> > > > > sounds very slow? This may be a dumb question, but would it help > to > >> > > > > add KVM_SET_DMA_BUF and let KVM negotiate the memory type > with > >> > the > >> > > > > in-kernel GPU drivers? > >> > > > > > >> > > > > > >> > > > > >> > > > KVM_SET_DMA_BUF looks more reasonable. But I guess we don't > need > >> > > > KVM to be aware of such negotiation. We can continue your original > >> > > > proposal to have KVM simply favor guest memory type (maybe still > call > >> > > > KVM_MEM_DMA). On the other hand, Qemu should just mmap on > the > >> > > > fd handle of the dmabuf passed from the virtio-gpu device backend, > e.g. > >> > > > to conduct migration. That way the mmap request is finally served by > >> > > > DRM and underlying GPU drivers, with proper type enforced > >> > automatically. > >> > > > > >> > > > >> > > Thinking more possibly we don't need introduce new interface to KVM. > >> > > As long as Qemu uses dmabuf interface to mmap the specific region, > >> > > KVM can simply check memory type in host page table given hva of a > >> > > memslot. If the type is UC or WC, it implies that userspace wants a > >> > > non-coherent mapping which should be reflected in the guest side too. > >> > > In such case, KVM can go to non-cohenrent DMA path and favor guest > >> > > memory type automatically. > >> > Sorry, I mixed two things together. > >> > > >> > Userspace access to dmabuf mmap must be guarded by > >> > DMA_BUF_SYNC_{START,END} ioctls. It is possible that the GPU driver > >> > always picks a WB mapping and let the ioctls flush/invalidate CPU > >> > caches. We actually want the guest memory type to match > vkMapMemory's > >> > memory type, which can be different from dmabuf mmap's memory > type. > >> > It is not enough for KVM to inspect the hva's memory type. > >> > >> I'm not familiar with dmabuf and what is the difference between > >> vkMapMemory and mmap. Just a simple thought that whatever > >> memory type/synchronization enforced on the host userspace should > >> ideally be applied to guest userspace too. e.g. in above example we > >> possibly want the guest to use WB and issue flush/invalidate hypercalls > >> to guard with other potential parallel operations in the host side. > >> otherwise I cannot see how synchronization can be done when one > >> use WB with sync primitives while the other simply use WC w/o such > >> primitives. > > > I am reasonably familiar with the GPU stacks, but I am not familiar with KVM > :) > > When allocating a GPU memory, the userspace can specify whether it wants > a > coherent one or an incoherent one. vkMapMemory returns a coherent or a > incoherent mapping respectively. Indeed we also want the guest userspace > to > have a coherent or a incoherent mapping respectively. > > The GPU memory can be exported as a dmabuf to share with another device > or > process. 
For security, we allocate the GPU memory in a GPU process and we > export the dmabuf to the hypervisor. mmap of dmabuf semantically returns > an > incoherent mapping. As a result, the guest will set up a mapping that has the > same memory type as the vkMapMemory mapping does, but the hva in > KVM_SET_USER_MEMORY_REGION points to the dmabuf's incoherent > mapping. > > If you think it is the best for KVM to inspect hva to determine the memory > type with page granularity, that is reasonable and should work for us too. > The userspace can do something (e.g., add a GPU driver dependency to the > hypervisor such that the dma-buf is imported as a GPU memory and mapped > using > vkMapMemory) or I can work with dma-buf maintainers to see if dma-buf's > semantics can be changed. I think you need consider the live migration requirement as Paolo pointed out. The migration thread needs to read/write the region, then it must use the same type as GPU process and guest to read/write the region. In such case, the hva mapped by Qemu should have the desired type as the guest. However, adding GPU driver dependency to Qemu might trigger some concern. I'm not sure whether there is generic mechanism though, to share dmabuf fd between GPU process and Qemu while allowing Qemu to follow the desired type w/o using vkMapMemory... Note this is orthogonal to whether introducing a new uapi or implicitly checking hva to favor guest memory type. It's purely about Qemu itself. Ideally anyone with the desire to access a dma-buf object should follow the expected semantics. It's interesting that dma-buf sub-system doesn't provide a centralized synchronization about memory type between multiple mmap paths. > > > > >> > >> > > >> > KVM_SET_DMA_BUF, if supported, is a signal to KVM that the guest > >> > memory type should be honored (or forced if there is a new op in > >> > dma_buf_ops that tells KVM which memory type to force). > KVM_MEM_DMA > >> > flag in this RFC sends the same signal. Unless KVM_SET_DMA_BUF gives > >> > the userspace other features such as setting unlimited number of > >> > dmabufs to subregions of a memslot, it is not very useful. > >> > >> the good part of a new interface is its simplicity, but only in slot > >> granularity. instead having KVM to inspect hva can support page > >> granularity, but adding run-time overhead. Let's see how Paolo > >> thinks. > >> > >> > > >> > If uapi change is to be avoided, it is the easiest that guest memory > >> > type is always honored unless it causes #MC (i.e.,is_mmio==true). > >> > > >> > >> I feel this goes too far... > >> > >> Thanks > >> Kevin
On Thu, Feb 20, 2020 at 09:39:05PM -0800, Tian, Kevin wrote:
> > From: Chia-I Wu <olvaffe@gmail.com>
> > Sent: Friday, February 21, 2020 12:51 PM
> > If you think it is the best for KVM to inspect hva to determine the memory
> > type with page granularity, that is reasonable and should work for us too.
> > The userspace can do something (e.g., add a GPU driver dependency to the
> > hypervisor such that the dma-buf is imported as a GPU memory and mapped
> > using
> > vkMapMemory) or I can work with dma-buf maintainers to see if dma-buf's
> > semantics can be changed.
>
> I think you need consider the live migration requirement as Paolo pointed out.
> The migration thread needs to read/write the region, then it must use the
> same type as GPU process and guest to read/write the region. In such case,
> the hva mapped by Qemu should have the desired type as the guest. However,
> adding GPU driver dependency to Qemu might trigger some concern. I'm not
> sure whether there is generic mechanism though, to share dmabuf fd between GPU
> process and Qemu while allowing Qemu to follow the desired type w/o using
> vkMapMemory...

Alternatively, KVM could make KVM_MEM_DMA and KVM_MEM_LOG_DIRTY_PAGES
mutually exclusive, i.e. force a transition to WB memtype for the guest
(with appropriate zapping) when migration is activated. I think that
would work?

> Note this is orthogonal to whether introducing a new uapi or implicitly checking
> hva to favor guest memory type. It's purely about Qemu itself. Ideally anyone
> with the desire to access a dma-buf object should follow the expected semantics.
> It's interesting that dma-buf sub-system doesn't provide a centralized
> synchronization about memory type between multiple mmap paths.
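The static half of that mutual exclusion could look like the sketch below in
kvm's flag validation, assuming the KVM_MEM_DMA flag from this series; the
dynamic transition to WB and the zapping are not shown, and the placement in
check_memory_region_flags() is only illustrative:

static int check_memory_region_flags(const struct kvm_userspace_memory_region *mem)
{
    u32 valid_flags = KVM_MEM_LOG_DIRTY_PAGES | KVM_MEM_READONLY | KVM_MEM_DMA;

    if (mem->flags & ~valid_flags)
        return -EINVAL;

    /*
     * Honoring the guest memory type and dirty logging don't mix:
     * migration would access the pages with WB from the host side.
     */
    if ((mem->flags & KVM_MEM_DMA) && (mem->flags & KVM_MEM_LOG_DIRTY_PAGES))
        return -EINVAL;

    return 0;
}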
On Fri, Feb 21, 2020 at 7:59 AM Sean Christopherson <sean.j.christopherson@intel.com> wrote: > > On Thu, Feb 20, 2020 at 09:39:05PM -0800, Tian, Kevin wrote: > > > From: Chia-I Wu <olvaffe@gmail.com> > > > Sent: Friday, February 21, 2020 12:51 PM > > > If you think it is the best for KVM to inspect hva to determine the memory > > > type with page granularity, that is reasonable and should work for us too. > > > The userspace can do something (e.g., add a GPU driver dependency to the > > > hypervisor such that the dma-buf is imported as a GPU memory and mapped > > > using > > > vkMapMemory) or I can work with dma-buf maintainers to see if dma-buf's > > > semantics can be changed. > > > > I think you need consider the live migration requirement as Paolo pointed out. > > The migration thread needs to read/write the region, then it must use the > > same type as GPU process and guest to read/write the region. In such case, > > the hva mapped by Qemu should have the desired type as the guest. However, > > adding GPU driver dependency to Qemu might trigger some concern. I'm not > > sure whether there is generic mechanism though, to share dmabuf fd between GPU > > process and Qemu while allowing Qemu to follow the desired type w/o using > > vkMapMemory... > > Alternatively, KVM could make KVM_MEM_DMA and KVM_MEM_LOG_DIRTY_PAGES > mutually exclusive, i.e. force a transition to WB memtype for the guest > (with appropriate zapping) when migration is activated. I think that > would work? Hm, virtio-gpu does not allow live migration when the 3D function (virgl=on) is enabled. This is the relevant code in qemu: if (virtio_gpu_virgl_enabled(g->conf)) { error_setg(&g->migration_blocker, "virgl is not yet migratable"); Although we (virtio-gpu and virglrenderer projects) plan to make host GPU buffers available to the guest via memslots, those buffers should be considered a part of the "GPU state". The migration thread should work with virglrenderer and let virglrenderer save/restore them, if live migration is to be supported. QEMU depends on GPU drivers already when configured with --enable-virglrenderer. There is vhost-user-gpu that can move the dependency to a GPU process. But there are still going to be cases (e.g., nVidia's proprietary driver does not support dma-buf) where QEMU cannot avoid GPU driver dependency. > > Note this is orthogonal to whether introducing a new uapi or implicitly checking > > hva to favor guest memory type. It's purely about Qemu itself. Ideally anyone > > with the desire to access a dma-buf object should follow the expected semantics. > > It's interesting that dma-buf sub-system doesn't provide a centralized > > synchronization about memory type between multiple mmap paths.
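For context, such a blocker is registered with qemu's migration core roughly
as below; this is a simplified sketch around the snippet quoted above, not an
exact copy of virtio-gpu.c, and the helper name is made up:

#include "qapi/error.h"
#include "migration/blocker.h"
#include "hw/virtio/virtio-gpu.h"

static void virtio_gpu_block_migration(VirtIOGPU *g, Error **errp)
{
    if (virtio_gpu_virgl_enabled(g->conf)) {
        error_setg(&g->migration_blocker, "virgl is not yet migratable");
        /* migration attempts now fail until the blocker is deleted */
        if (migrate_add_blocker(g->migration_blocker, errp) < 0) {
            error_free(g->migration_blocker);
            g->migration_blocker = NULL;
        }
    }
}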
Hi,

> > The plan is for virtio-gpu device to reserve a huge memory region in
> > the guest. Memslots may be added dynamically or statically to back
> > the region.
>
> so the region is marked as E820_RESERVED to prevent guest kernel
> from using it for other purpose and then virtio-gpu device will report
> virtio-gpu driver of the exact location of the region through its own
> interface?

It's a large pci bar, to reserve address space, using the (recently added)
virtio shared memory support. dma-bufs are mapped dynamically as
sub-regions into that pci bar. At the kvm level that'll end up as one
memory slot per dma-buf.

cheers,
  Gerd
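Expressed with the uapi proposed in this series, registering one such dma-buf
sub-region as a memory slot could look like the sketch below from the VMM
side (the slot number, bar base and the KVM_MEM_DMA flag are taken as given;
this is not qemu code):

#include <linux/kvm.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

/* map the dma-buf and expose it at bar_base + offset with the proposed flag */
static int add_dmabuf_memslot(int vm_fd, uint32_t slot, int dmabuf_fd,
                              uint64_t bar_base, uint64_t offset, uint64_t size)
{
    void *hva = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
                     dmabuf_fd, 0);
    struct kvm_userspace_memory_region region = {
        .slot = slot,
        .flags = KVM_MEM_DMA,              /* flag proposed in this RFC */
        .guest_phys_addr = bar_base + offset,
        .memory_size = size,
        .userspace_addr = (uint64_t)(uintptr_t)hva,
    };

    if (hva == MAP_FAILED)
        return -1;

    return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}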
> From: Chia-I Wu <olvaffe@gmail.com> > Sent: Saturday, February 22, 2020 2:21 AM > > On Fri, Feb 21, 2020 at 7:59 AM Sean Christopherson > <sean.j.christopherson@intel.com> wrote: > > > > On Thu, Feb 20, 2020 at 09:39:05PM -0800, Tian, Kevin wrote: > > > > From: Chia-I Wu <olvaffe@gmail.com> > > > > Sent: Friday, February 21, 2020 12:51 PM > > > > If you think it is the best for KVM to inspect hva to determine the > memory > > > > type with page granularity, that is reasonable and should work for us > too. > > > > The userspace can do something (e.g., add a GPU driver dependency to > the > > > > hypervisor such that the dma-buf is imported as a GPU memory and > mapped > > > > using > > > > vkMapMemory) or I can work with dma-buf maintainers to see if dma- > buf's > > > > semantics can be changed. > > > > > > I think you need consider the live migration requirement as Paolo pointed > out. > > > The migration thread needs to read/write the region, then it must use the > > > same type as GPU process and guest to read/write the region. In such > case, > > > the hva mapped by Qemu should have the desired type as the guest. > However, > > > adding GPU driver dependency to Qemu might trigger some concern. I'm > not > > > sure whether there is generic mechanism though, to share dmabuf fd > between GPU > > > process and Qemu while allowing Qemu to follow the desired type w/o > using > > > vkMapMemory... > > > > Alternatively, KVM could make KVM_MEM_DMA and > KVM_MEM_LOG_DIRTY_PAGES > > mutually exclusive, i.e. force a transition to WB memtype for the guest > > (with appropriate zapping) when migration is activated. I think that > > would work? > Hm, virtio-gpu does not allow live migration when the 3D function > (virgl=on) is enabled. This is the relevant code in qemu: > > if (virtio_gpu_virgl_enabled(g->conf)) { > error_setg(&g->migration_blocker, "virgl is not yet migratable"); > > Although we (virtio-gpu and virglrenderer projects) plan to make host > GPU buffers available to the guest via memslots, those buffers should > be considered a part of the "GPU state". The migration thread should > work with virglrenderer and let virglrenderer save/restore them, if > live migration is to be supported. Thanks for your explanation. Your RFC makes more sense now. One remaining open is, although for live migration we can explicitly state that migration thread itself should not access the dma-buf region, how can we warn other usages which may potentially simply walk every memslot and access the content through the mmap-ed virtual address? Possibly we may need a flag to indicate a memslot which is mmaped only for KVM to retrieve its page table mapping but not for direct access in Qemu. > > QEMU depends on GPU drivers already when configured with > --enable-virglrenderer. There is vhost-user-gpu that can move the > dependency to a GPU process. But there are still going to be cases > (e.g., nVidia's proprietary driver does not support dma-buf) where > QEMU cannot avoid GPU driver dependency. > > > > > > > Note this is orthogonal to whether introducing a new uapi or implicitly > checking > > > hva to favor guest memory type. It's purely about Qemu itself. Ideally > anyone > > > with the desire to access a dma-buf object should follow the expected > semantics. > > > It's interesting that dma-buf sub-system doesn't provide a centralized > > > synchronization about memory type between multiple mmap paths.
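A minimal sketch of the guard Kevin describes, on the VMM side: anything that
walks memslots and touches guest memory through the mmap-ed address would
refuse slots carrying the flag. The vmm_memslot bookkeeping structure here is
hypothetical; only KVM_MEM_DMA comes from this series:

#include <errno.h>
#include <linux/kvm.h>
#include <stdint.h>
#include <string.h>

/* hypothetical VMM-side record of what was registered with KVM */
struct vmm_memslot {
    uint64_t gpa;
    uint64_t size;
    void *hva;
    uint32_t flags;       /* mirrors kvm_userspace_memory_region.flags */
};

static int vmm_read_guest(const struct vmm_memslot *slot, uint64_t gpa,
                          void *dst, size_t len)
{
    /* mmap-ed only for KVM's page tables, not for direct VMM access */
    if (slot->flags & KVM_MEM_DMA)
        return -EPERM;
    if (gpa < slot->gpa || gpa + len > slot->gpa + slot->size)
        return -EINVAL;

    memcpy(dst, (const uint8_t *)slot->hva + (gpa - slot->gpa), len);
    return 0;
}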