Message ID | 20210807134936.3083984-2-pbonzini@redhat.com (mailing list archive)
---|---
State | New, archived |
Series | KVM: x86: pass arguments on the page fault path via struct kvm_page_fault
On Sat, Aug 07, 2021, Paolo Bonzini wrote:
> Do not bother removing the low bits of the gpa. This masking dates back
> to the very first commit of KVM but it is unnecessary---or even
> problematic, because the gpa is later used to fill in the MMIO page cache.

I don't disagree with the code change, but I don't see how stripping the
offset can be problematic for the MMIO page cache. I assume you're referring
to handle_abnormal_pfn() -> vcpu_cache_mmio_info(). The "gva" there is masked
with PAGE_MASK, i.e. the offset is stripped anyway. And fundamentally, that
cache is tied to the granularity of the memslots, so tracking the offset
would be wrong.
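For reference, a minimal sketch of the cache fill Sean is pointing at,
paraphrased from vcpu_cache_mmio_info() in arch/x86/kvm/x86.h (simplified;
the memslot-generation update check is elided, so treat this as illustrative
rather than a verbatim copy of the kernel source):

static inline void vcpu_cache_mmio_info(struct kvm_vcpu *vcpu,
					gva_t gva, gfn_t gfn,
					unsigned access)
{
	/*
	 * For nested shadow paging the "gva" is actually an nGPA, so it
	 * is not cached at all.  Otherwise the page offset is stripped
	 * here regardless of what the caller passed in, so an unmasked
	 * gpa from the page fault path cannot leak low bits into the
	 * cache.
	 */
	vcpu->arch.mmio_gva = mmu_is_nested(vcpu) ? 0 : gva & PAGE_MASK;
	vcpu->arch.mmio_access = access;
	vcpu->arch.mmio_gfn = gfn;
	vcpu->arch.mmio_gen = kvm_memslots(vcpu->kvm)->generation;
}

Because mmio_gva is masked and mmio_gfn is a page frame number, the cache
only ever operates at page granularity, which is why the masking removed by
the patch is redundant rather than harmful.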
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 964c797dcc46..7477f340d318 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3950,7 +3950,7 @@ static int nonpaging_page_fault(struct kvm_vcpu *vcpu, gpa_t gpa,
 	pgprintk("%s: gva %lx error %x\n", __func__, gpa, error_code);
 
 	/* This path builds a PAE pagetable, we can map 2mb pages at maximum. */
-	return direct_page_fault(vcpu, gpa & PAGE_MASK, error_code, prefault,
+	return direct_page_fault(vcpu, gpa, error_code, prefault,
 				 PG_LEVEL_2M, false);
 }
Do not bother removing the low bits of the gpa. This masking dates back
to the very first commit of KVM but it is unnecessary---or even
problematic, because the gpa is later used to fill in the MMIO page cache.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/mmu/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)