| Message ID | 50AAC7F9.7050305@linux.vnet.ibm.com (mailing list archive) |
|---|---|
| State | New, archived |
On Tue, Nov 20, 2012 at 07:59:53AM +0800, Xiao Guangrong wrote:
> The current reexecute_instruction can not well detect the failed instruction
> emulation. It allows guest to retry all the instructions except it accesses
> on error pfn.
>
> For example, some cases are nested-write-protect - if the page we want to
> write is used as PDE but it chains to itself. Under this case, we should
> stop the emulation and report the case to userspace.
>
> Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
> ---
>  arch/x86/include/asm/kvm_host.h |    2 +
>  arch/x86/kvm/paging_tmpl.h      |    2 +
>  arch/x86/kvm/x86.c              |   54 ++++++++++++++++++++++++++++-----------
>  3 files changed, 43 insertions(+), 15 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index b2e11f4..c5eb52f 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -566,6 +566,8 @@ struct kvm_arch {
>  	u64 hv_guest_os_id;
>  	u64 hv_hypercall;
>
> +	/* synchronizing reexecute_instruction and page fault path. */
> +	u64 page_fault_count;
>  #ifdef CONFIG_KVM_MMU_AUDIT
>  	int audit_point;
>  #endif
> diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
> index 891eb6d..d55ad89 100644
> --- a/arch/x86/kvm/paging_tmpl.h
> +++ b/arch/x86/kvm/paging_tmpl.h
> @@ -568,6 +568,8 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
>  	if (mmu_notifier_retry(vcpu->kvm, mmu_seq))
>  		goto out_unlock;
>
> +	vcpu->kvm->arch.page_fault_count++;
> +
>  	kvm_mmu_audit(vcpu, AUDIT_PRE_PAGE_FAULT);
>  	kvm_mmu_free_some_pages(vcpu);
>  	if (!force_pt_level)
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 5fe72cc..2fe484b 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -4473,37 +4473,61 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, unsigned long cr2)
>  {
>  	gpa_t gpa = cr2;
>  	pfn_t pfn;
> -
> -	if (!ACCESS_ONCE(vcpu->kvm->arch.indirect_shadow_pages))
> -		return false;
> +	u64 page_fault_count;
> +	int emulate;
>
>  	if (!vcpu->arch.mmu.direct_map) {
>  		gpa = kvm_mmu_gva_to_gpa_read(vcpu, cr2, NULL);
> +		/*
> +		 * If the mapping is invalid in guest, let cpu retry
> +		 * it to generate fault.
> +		 */
>  		if (gpa == UNMAPPED_GVA)
> -			return true; /* let cpu generate fault */
> +			return true;
>  	}
>
>  	/*
> -	 * if emulation was due to access to shadowed page table
> -	 * and it failed try to unshadow page and re-enter the
> -	 * guest to let CPU execute the instruction.
> -	 */
> -	if (kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa)))
> -		return true;
> -
> -	/*
>  	 * Do not retry the unhandleable instruction if it faults on the
>  	 * readonly host memory, otherwise it will goto a infinite loop:
>  	 * retry instruction -> write #PF -> emulation fail -> retry
>  	 * instruction -> ...
>  	 */
>  	pfn = gfn_to_pfn(vcpu->kvm, gpa_to_gfn(gpa));
> -	if (!is_error_noslot_pfn(pfn)) {
> -		kvm_release_pfn_clean(pfn);
> +
> +	/*
> +	 * If the instruction failed on the error pfn, it can not be fixed,
> +	 * report the error to userspace.
> +	 */
> +	if (is_error_noslot_pfn(pfn))
> +		return false;
> +
> +	kvm_release_pfn_clean(pfn);
> +
> +	/* The instructions are well-emulated on direct mmu. */
> +	if (vcpu->arch.mmu.direct_map) {
> +		if (ACCESS_ONCE(vcpu->kvm->arch.indirect_shadow_pages))
> +			kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa));
> +
> +		return true;
>  	}
>
> -	return false;
> +again:
> +	page_fault_count = ACCESS_ONCE(vcpu->kvm->arch.page_fault_count);
> +
> +	/*
> +	 * if emulation was due to access to shadowed page table
> +	 * and it failed try to unshadow page and re-enter the
> +	 * guest to let CPU execute the instruction.
> +	 */
> +	kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa));
> +	emulate = vcpu->arch.mmu.page_fault(vcpu, cr3, PFERR_WRITE_MASK, false);

Can you explain what is the objective here?

> +	/* The page fault path called above can increase the count. */
> +	if (page_fault_count + 1 !=
> +	      ACCESS_ONCE(vcpu->kvm->arch.page_fault_count))
> +		goto again;
> +
> +	return !emulate;
>  }
>
>  static bool retry_instruction(struct x86_emulate_ctxt *ctxt,

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On 11/27/2012 06:41 AM, Marcelo Tosatti wrote:
>>
>> -	return false;
>> +again:
>> +	page_fault_count = ACCESS_ONCE(vcpu->kvm->arch.page_fault_count);
>> +
>> +	/*
>> +	 * if emulation was due to access to shadowed page table
>> +	 * and it failed try to unshadow page and re-enter the
>> +	 * guest to let CPU execute the instruction.
>> +	 */
>> +	kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa));
>> +	emulate = vcpu->arch.mmu.page_fault(vcpu, cr3, PFERR_WRITE_MASK, false);
>
> Can you explain what is the objective here?
>

Sure. :)

The instruction emulation is caused by fault access on cr3. After unprotect
the target page, we call vcpu->arch.mmu.page_fault to fix the mapping of cr3.
if it return 1, mmu can not fix the mapping, we should report the error,
otherwise it is good to return to guest and let it re-execute the instruction
again.

page_fault_count is used to avoid the race on other vcpus, since after we
unprotect the target page, other cpu can enter page fault path and let the
page be write-protected again.

This way can help us to detect all the case that mmu can not be fixed.
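[Editor's note] The counter scheme described above can be modeled outside the kernel. The sketch below is a toy user-space model, not KVM code; `mock_page_fault`, `concurrent_faults`, and the globals are invented stand-ins. It shows the core idea: snapshot the fault counter, try to fix the mapping, and retry whenever some other fault slipped in between, since that fault may have re-write-protected the page.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

static uint64_t page_fault_count; /* bumped by every fault path        */
static int concurrent_faults;     /* faults injected by "other vcpus"  */

/* Stand-in for vcpu->arch.mmu.page_fault(): returns 0 on success.
 * It may also let a racing fault from another vcpu run. */
static int mock_page_fault(void)
{
	page_fault_count++;
	if (concurrent_faults > 0) {
		/* another vcpu raced in and took a fault too */
		concurrent_faults--;
		page_fault_count++;
	}
	return 0;
}

/* Mirrors the "again:" loop in reexecute_instruction(): the result is
 * only trusted if exactly one fault (ours) happened in the window. */
static bool fix_mapping_with_retry(void)
{
	uint64_t snap;
	int emulate;

	do {
		snap = page_fault_count;
		emulate = mock_page_fault();
	} while (snap + 1 != page_fault_count);

	return !emulate;
}
```

In the toy model the loop terminates once the injected racing faults run out; the kernel version relies on the same "counter advanced by exactly one" check to know no other vcpu interfered.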
On Tue, Nov 27, 2012 at 11:30:24AM +0800, Xiao Guangrong wrote:
> On 11/27/2012 06:41 AM, Marcelo Tosatti wrote:
>
> >>
> >> -	return false;
> >> +again:
> >> +	page_fault_count = ACCESS_ONCE(vcpu->kvm->arch.page_fault_count);
> >> +
> >> +	/*
> >> +	 * if emulation was due to access to shadowed page table
> >> +	 * and it failed try to unshadow page and re-enter the
> >> +	 * guest to let CPU execute the instruction.
> >> +	 */
> >> +	kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa));
> >> +	emulate = vcpu->arch.mmu.page_fault(vcpu, cr3, PFERR_WRITE_MASK, false);
> >
> > Can you explain what is the objective here?
> >
>
> Sure. :)
>
> The instruction emulation is caused by fault access on cr3. After unprotect
> the target page, we call vcpu->arch.mmu.page_fault to fix the mapping of cr3.
> if it return 1, mmu can not fix the mapping, we should report the error,
> otherwise it is good to return to guest and let it re-execute the instruction
> again.
>
> page_fault_count is used to avoid the race on other vcpus, since after we
> unprotect the target page, other cpu can enter page fault path and let the
> page be write-protected again.
>
> This way can help us to detect all the case that mmu can not be fixed.

How about recording the gfn number for shadow pages that have been
shadowed in the current pagefault run? (which is cheap, compared to
shadowing these pages).

If failed instruction emulation is write to one of these gfns, then
fail.
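[Editor's note] Marcelo's alternative could look roughly like the following sketch. All names here are invented for illustration, not KVM API: the fault path records each gfn it shadows during the current run, and a failed write emulation is reported to userspace only if it targeted one of those gfns.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t gfn_t;

#define MAX_CACHED_GFNS 4

/* Per-fault-run cache of gfns that were shadowed as page tables. */
struct fault_ctx {
	gfn_t cached[MAX_CACHED_GFNS];
	int nr;
};

/* Called from the (hypothetical) fault path for each shadowed table. */
static void record_shadowed_gfn(struct fault_ctx *ctx, gfn_t gfn)
{
	if (ctx->nr < MAX_CACHED_GFNS)
		ctx->cached[ctx->nr++] = gfn;
}

/* On emulation failure: true = the write hit a freshly shadowed page
 * table (genuine nested-write-protect, report it), false = retry. */
static bool write_targets_shadowed_table(const struct fault_ctx *ctx,
					 gfn_t gfn)
{
	for (int i = 0; i < ctx->nr; i++)
		if (ctx->cached[i] == gfn)
			return true;
	return false;
}
```

The membership test makes the detection precise (no counter race to re-check), at the cost of threading the cache through the fault path, which is the complexity Xiao objects to below.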
On 11/28/2012 07:42 AM, Marcelo Tosatti wrote:
> On Tue, Nov 27, 2012 at 11:30:24AM +0800, Xiao Guangrong wrote:
>> On 11/27/2012 06:41 AM, Marcelo Tosatti wrote:
>>
>>>>
>>>> -	return false;
>>>> +again:
>>>> +	page_fault_count = ACCESS_ONCE(vcpu->kvm->arch.page_fault_count);
>>>> +
>>>> +	/*
>>>> +	 * if emulation was due to access to shadowed page table
>>>> +	 * and it failed try to unshadow page and re-enter the
>>>> +	 * guest to let CPU execute the instruction.
>>>> +	 */
>>>> +	kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa));
>>>> +	emulate = vcpu->arch.mmu.page_fault(vcpu, cr3, PFERR_WRITE_MASK, false);
>>>
>>> Can you explain what is the objective here?
>>>
>>
>> Sure. :)
>>
>> The instruction emulation is caused by fault access on cr3. After unprotect
>> the target page, we call vcpu->arch.mmu.page_fault to fix the mapping of cr3.
>> if it return 1, mmu can not fix the mapping, we should report the error,
>> otherwise it is good to return to guest and let it re-execute the instruction
>> again.
>>
>> page_fault_count is used to avoid the race on other vcpus, since after we
>> unprotect the target page, other cpu can enter page fault path and let the
>> page be write-protected again.
>>
>> This way can help us to detect all the case that mmu can not be fixed.
>
> How about recording the gfn number for shadow pages that have been
> shadowed in the current pagefault run? (which is cheap, compared to
> shadowing these pages).
>

Marcelo,

Thanks for your idea!

If we use this way, we should cache the gfns in the vcpu struct. Actually, I
have considered an approach like yours, that is, getting all page tables of
the guest and then checking whether the target gfn is one of the page-table
gfns. But we would need to change mmu->gva_to_pfn or introduce a new method
to get the page tables of the guest.

But reexecute_instruction is really the unlikely path; both of these ways
can make the mmu code more complex and/or introduce unnecessary overhead for
the common cases. It looks like the way used in this patch is the simplest
and not harmful to the core code.
On Tue, Nov 27, 2012 at 11:30:24AM +0800, Xiao Guangrong wrote:
> On 11/27/2012 06:41 AM, Marcelo Tosatti wrote:
>
> >>
> >> -	return false;
> >> +again:
> >> +	page_fault_count = ACCESS_ONCE(vcpu->kvm->arch.page_fault_count);
> >> +
> >> +	/*
> >> +	 * if emulation was due to access to shadowed page table
> >> +	 * and it failed try to unshadow page and re-enter the
> >> +	 * guest to let CPU execute the instruction.
> >> +	 */
> >> +	kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa));
> >> +	emulate = vcpu->arch.mmu.page_fault(vcpu, cr3, PFERR_WRITE_MASK, false);
> >
> > Can you explain what is the objective here?
> >
>
> Sure. :)
>
> The instruction emulation is caused by fault access on cr3. After unprotect
> the target page, we call vcpu->arch.mmu.page_fault to fix the mapping of cr3.
> if it return 1, mmu can not fix the mapping, we should report the error,
> otherwise it is good to return to guest and let it re-execute the instruction
> again.
>
> page_fault_count is used to avoid the race on other vcpus, since after we
> unprotect the target page, other cpu can enter page fault path and let the
> page be write-protected again.
>
> This way can help us to detect all the case that mmu can not be fixed.

Can you write this in a comment above vcpu->arch.mmu.page_fault()?

--
			Gleb.
On 11/28/2012 10:12 PM, Gleb Natapov wrote:
> On Tue, Nov 27, 2012 at 11:30:24AM +0800, Xiao Guangrong wrote:
>> On 11/27/2012 06:41 AM, Marcelo Tosatti wrote:
>>
>>>>
>>>> -	return false;
>>>> +again:
>>>> +	page_fault_count = ACCESS_ONCE(vcpu->kvm->arch.page_fault_count);
>>>> +
>>>> +	/*
>>>> +	 * if emulation was due to access to shadowed page table
>>>> +	 * and it failed try to unshadow page and re-enter the
>>>> +	 * guest to let CPU execute the instruction.
>>>> +	 */
>>>> +	kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa));
>>>> +	emulate = vcpu->arch.mmu.page_fault(vcpu, cr3, PFERR_WRITE_MASK, false);
>>>
>>> Can you explain what is the objective here?
>>>
>>
>> Sure. :)
>>
>> The instruction emulation is caused by fault access on cr3. After unprotect
>> the target page, we call vcpu->arch.mmu.page_fault to fix the mapping of cr3.
>> if it return 1, mmu can not fix the mapping, we should report the error,
>> otherwise it is good to return to guest and let it re-execute the instruction
>> again.
>>
>> page_fault_count is used to avoid the race on other vcpus, since after we
>> unprotect the target page, other cpu can enter page fault path and let the
>> page be write-protected again.
>>
>> This way can help us to detect all the case that mmu can not be fixed.
>>
> Can you write this in a comment above vcpu->arch.mmu.page_fault()?

Okay, if Marcelo does not object this way. :)
On Wed, Nov 28, 2012 at 10:59:35PM +0800, Xiao Guangrong wrote:
> On 11/28/2012 10:12 PM, Gleb Natapov wrote:
> > On Tue, Nov 27, 2012 at 11:30:24AM +0800, Xiao Guangrong wrote:
> >> On 11/27/2012 06:41 AM, Marcelo Tosatti wrote:
> >>
> >>>>
> >>>> -	return false;
> >>>> +again:
> >>>> +	page_fault_count = ACCESS_ONCE(vcpu->kvm->arch.page_fault_count);
> >>>> +
> >>>> +	/*
> >>>> +	 * if emulation was due to access to shadowed page table
> >>>> +	 * and it failed try to unshadow page and re-enter the
> >>>> +	 * guest to let CPU execute the instruction.
> >>>> +	 */
> >>>> +	kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa));
> >>>> +	emulate = vcpu->arch.mmu.page_fault(vcpu, cr3, PFERR_WRITE_MASK, false);
> >>>
> >>> Can you explain what is the objective here?
> >>>
> >>
> >> Sure. :)
> >>
> >> The instruction emulation is caused by fault access on cr3. After unprotect
> >> the target page, we call vcpu->arch.mmu.page_fault to fix the mapping of cr3.
> >> if it return 1, mmu can not fix the mapping, we should report the error,
> >> otherwise it is good to return to guest and let it re-execute the instruction
> >> again.
> >>
> >> page_fault_count is used to avoid the race on other vcpus, since after we
> >> unprotect the target page, other cpu can enter page fault path and let the
> >> page be write-protected again.
> >>
> >> This way can help us to detect all the case that mmu can not be fixed.
> >>
> > Can you write this in a comment above vcpu->arch.mmu.page_fault()?
>
> Okay, if Marcelo does not object this way. :)

I do object, since it is possible to detect precisely the condition by
storing which gfns have been cached.

Then, Xiao, you need a way to handle large read-only sptes.
On 11/29/2012 05:57 AM, Marcelo Tosatti wrote:
> On Wed, Nov 28, 2012 at 10:59:35PM +0800, Xiao Guangrong wrote:
>> On 11/28/2012 10:12 PM, Gleb Natapov wrote:
>>> On Tue, Nov 27, 2012 at 11:30:24AM +0800, Xiao Guangrong wrote:
>>>> On 11/27/2012 06:41 AM, Marcelo Tosatti wrote:
>>>>
>>>>>>
>>>>>> -	return false;
>>>>>> +again:
>>>>>> +	page_fault_count = ACCESS_ONCE(vcpu->kvm->arch.page_fault_count);
>>>>>> +
>>>>>> +	/*
>>>>>> +	 * if emulation was due to access to shadowed page table
>>>>>> +	 * and it failed try to unshadow page and re-enter the
>>>>>> +	 * guest to let CPU execute the instruction.
>>>>>> +	 */
>>>>>> +	kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa));
>>>>>> +	emulate = vcpu->arch.mmu.page_fault(vcpu, cr3, PFERR_WRITE_MASK, false);
>>>>>
>>>>> Can you explain what is the objective here?
>>>>>
>>>>
>>>> Sure. :)
>>>>
>>>> The instruction emulation is caused by fault access on cr3. After unprotect
>>>> the target page, we call vcpu->arch.mmu.page_fault to fix the mapping of cr3.
>>>> if it return 1, mmu can not fix the mapping, we should report the error,
>>>> otherwise it is good to return to guest and let it re-execute the instruction
>>>> again.
>>>>
>>>> page_fault_count is used to avoid the race on other vcpus, since after we
>>>> unprotect the target page, other cpu can enter page fault path and let the
>>>> page be write-protected again.
>>>>
>>>> This way can help us to detect all the case that mmu can not be fixed.
>>>>
>>> Can you write this in a comment above vcpu->arch.mmu.page_fault()?
>>
>> Okay, if Marcelo does not object this way. :)
>
> I do object, since it is possible to detect precisely the condition by
> storing which gfns have been cached.
>
> Then, Xiao, you need a way to handle large read-only sptes.

Sorry, Marcelo, i am still confused why read-only sptes can not work
under this patch?

The code after read-only large spte is is:

+	if ((level > PT_PAGE_TABLE_LEVEL &&
+	     has_wrprotected_page(vcpu->kvm, gfn, level)) ||
+	     mmu_need_write_protect(vcpu, gfn, can_unsync)) {
 		pgprintk("%s: found shadow page for %llx, marking ro\n",
 			 __func__, gfn);
 		ret = 1;

It return 1, then reexecute_instruction return 0. It is the same as without
readonly large-spte.
On 11/29/2012 06:40 AM, Xiao Guangrong wrote:
> On 11/29/2012 05:57 AM, Marcelo Tosatti wrote:
>> On Wed, Nov 28, 2012 at 10:59:35PM +0800, Xiao Guangrong wrote:
>>> On 11/28/2012 10:12 PM, Gleb Natapov wrote:
>>>> On Tue, Nov 27, 2012 at 11:30:24AM +0800, Xiao Guangrong wrote:
>>>>> On 11/27/2012 06:41 AM, Marcelo Tosatti wrote:
>>>>>
>>>>>>>
>>>>>>> -	return false;
>>>>>>> +again:
>>>>>>> +	page_fault_count = ACCESS_ONCE(vcpu->kvm->arch.page_fault_count);
>>>>>>> +
>>>>>>> +	/*
>>>>>>> +	 * if emulation was due to access to shadowed page table
>>>>>>> +	 * and it failed try to unshadow page and re-enter the
>>>>>>> +	 * guest to let CPU execute the instruction.
>>>>>>> +	 */
>>>>>>> +	kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa));
>>>>>>> +	emulate = vcpu->arch.mmu.page_fault(vcpu, cr3, PFERR_WRITE_MASK, false);
>>>>>>
>>>>>> Can you explain what is the objective here?
>>>>>>
>>>>>
>>>>> Sure. :)
>>>>>
>>>>> The instruction emulation is caused by fault access on cr3. After unprotect
>>>>> the target page, we call vcpu->arch.mmu.page_fault to fix the mapping of cr3.
>>>>> if it return 1, mmu can not fix the mapping, we should report the error,
>>>>> otherwise it is good to return to guest and let it re-execute the instruction
>>>>> again.
>>>>>
>>>>> page_fault_count is used to avoid the race on other vcpus, since after we
>>>>> unprotect the target page, other cpu can enter page fault path and let the
>>>>> page be write-protected again.
>>>>>
>>>>> This way can help us to detect all the case that mmu can not be fixed.
>>>>>
>>>> Can you write this in a comment above vcpu->arch.mmu.page_fault()?
>>>
>>> Okay, if Marcelo does not object this way. :)
>>
>> I do object, since it is possible to detect precisely the condition by
>> storing which gfns have been cached.
>>
>> Then, Xiao, you need a way to handle large read-only sptes.
>
> Sorry, Marcelo, i am still confused why read-only sptes can not work
> under this patch?
>
> The code after read-only large spte is is:
>
> +	if ((level > PT_PAGE_TABLE_LEVEL &&
> +	     has_wrprotected_page(vcpu->kvm, gfn, level)) ||
> +	     mmu_need_write_protect(vcpu, gfn, can_unsync)) {
>  		pgprintk("%s: found shadow page for %llx, marking ro\n",
>  			 __func__, gfn);
>  		ret = 1;
>
> It return 1, then reexecute_instruction return 0. It is the same as without
> readonly large-spte.

Ah, wait, There is a case, the large page located at 0-2M, the 0-4K is used
as a page-table (e.g. PDE), and the guest want to write the memory located
at 5K which should be freely written. This patch can return 0 for both
current code and readonly large spte.

I need to think it more.
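[Editor's note] The 0-2M case can be made concrete with a little gfn arithmetic. The sketch below uses the usual x86 constants (4K pages, 512 pages per 2M mapping) for illustration; it is not KVM code. With a read-only 2M spte covering the range, every write in it faults, but only a write that hits the shadowed page-table gfn itself (gfn 0 here) is a real nested-write-protect; a write at 5K lands in gfn 1 and should be freely written.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT  12
#define LARGE_PAGES 512 /* 4K pages covered by one 2M mapping */

typedef uint64_t gpa_t;
typedef uint64_t gfn_t;

static gfn_t gpa_to_gfn(gpa_t gpa)
{
	return gpa >> PAGE_SHIFT;
}

/*
 * true  = the write is inside the large mapping but does not touch the
 *         page-table page itself, so it should succeed without emulation;
 * false = the write is outside the range, or hits the shadowed table.
 */
static bool write_should_be_free(gfn_t write_gfn, gfn_t table_gfn,
				 gfn_t large_base)
{
	return write_gfn >= large_base &&
	       write_gfn < large_base + LARGE_PAGES &&
	       write_gfn != table_gfn;
}
```

This is exactly the situation where unprotecting `gpa_to_gfn(gpa)` alone is not enough: the read-only large spte spanning the whole 2M range also has to go.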
On Thu, Nov 29, 2012 at 06:40:51AM +0800, Xiao Guangrong wrote:
> On 11/29/2012 05:57 AM, Marcelo Tosatti wrote:
> > On Wed, Nov 28, 2012 at 10:59:35PM +0800, Xiao Guangrong wrote:
> >> On 11/28/2012 10:12 PM, Gleb Natapov wrote:
> >>> On Tue, Nov 27, 2012 at 11:30:24AM +0800, Xiao Guangrong wrote:
> >>>> On 11/27/2012 06:41 AM, Marcelo Tosatti wrote:
> >>>>
> >>>>>>
> >>>>>> -	return false;
> >>>>>> +again:
> >>>>>> +	page_fault_count = ACCESS_ONCE(vcpu->kvm->arch.page_fault_count);
> >>>>>> +
> >>>>>> +	/*
> >>>>>> +	 * if emulation was due to access to shadowed page table
> >>>>>> +	 * and it failed try to unshadow page and re-enter the
> >>>>>> +	 * guest to let CPU execute the instruction.
> >>>>>> +	 */
> >>>>>> +	kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa));
> >>>>>> +	emulate = vcpu->arch.mmu.page_fault(vcpu, cr3, PFERR_WRITE_MASK, false);
> >>>>>
> >>>>> Can you explain what is the objective here?
> >>>>>
> >>>>
> >>>> Sure. :)
> >>>>
> >>>> The instruction emulation is caused by fault access on cr3. After unprotect
> >>>> the target page, we call vcpu->arch.mmu.page_fault to fix the mapping of cr3.
> >>>> if it return 1, mmu can not fix the mapping, we should report the error,
> >>>> otherwise it is good to return to guest and let it re-execute the instruction
> >>>> again.
> >>>>
> >>>> page_fault_count is used to avoid the race on other vcpus, since after we
> >>>> unprotect the target page, other cpu can enter page fault path and let the
> >>>> page be write-protected again.
> >>>>
> >>>> This way can help us to detect all the case that mmu can not be fixed.
> >>>>
> >>> Can you write this in a comment above vcpu->arch.mmu.page_fault()?
> >>
> >> Okay, if Marcelo does not object this way. :)
> >
> > I do object, since it is possible to detect precisely the condition by
> > storing which gfns have been cached.
> >
> > Then, Xiao, you need a way to handle large read-only sptes.
>
> Sorry, Marcelo, i am still confused why read-only sptes can not work
> under this patch?
>
> The code after read-only large spte is is:
>
> +	if ((level > PT_PAGE_TABLE_LEVEL &&
> +	     has_wrprotected_page(vcpu->kvm, gfn, level)) ||
> +	     mmu_need_write_protect(vcpu, gfn, can_unsync)) {
>  		pgprintk("%s: found shadow page for %llx, marking ro\n",
>  			 __func__, gfn);
>  		ret = 1;
>
> It return 1, then reexecute_instruction return 0. It is the same as without
> readonly large-spte.

https://lkml.org/lkml/2012/11/17/75

Does unshadowing work with large sptes at reexecute_instruction? That
is, do we nuke any large read-only sptes that might be causing a certain
gfn to be read-only?

That is, following the sequence there, is the large read-only spte
properly destroyed?
On Thu, Nov 29, 2012 at 07:16:50AM +0800, Xiao Guangrong wrote:
> On 11/29/2012 06:40 AM, Xiao Guangrong wrote:
> > On 11/29/2012 05:57 AM, Marcelo Tosatti wrote:
> >> On Wed, Nov 28, 2012 at 10:59:35PM +0800, Xiao Guangrong wrote:
> >>> On 11/28/2012 10:12 PM, Gleb Natapov wrote:
> >>>> On Tue, Nov 27, 2012 at 11:30:24AM +0800, Xiao Guangrong wrote:
> >>>>> On 11/27/2012 06:41 AM, Marcelo Tosatti wrote:
> >>>>>
> >>>>>>>
> >>>>>>> -	return false;
> >>>>>>> +again:
> >>>>>>> +	page_fault_count = ACCESS_ONCE(vcpu->kvm->arch.page_fault_count);
> >>>>>>> +
> >>>>>>> +	/*
> >>>>>>> +	 * if emulation was due to access to shadowed page table
> >>>>>>> +	 * and it failed try to unshadow page and re-enter the
> >>>>>>> +	 * guest to let CPU execute the instruction.
> >>>>>>> +	 */
> >>>>>>> +	kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa));
> >>>>>>> +	emulate = vcpu->arch.mmu.page_fault(vcpu, cr3, PFERR_WRITE_MASK, false);
> >>>>>>
> >>>>>> Can you explain what is the objective here?
> >>>>>>
> >>>>>
> >>>>> Sure. :)
> >>>>>
> >>>>> The instruction emulation is caused by fault access on cr3. After unprotect
> >>>>> the target page, we call vcpu->arch.mmu.page_fault to fix the mapping of cr3.
> >>>>> if it return 1, mmu can not fix the mapping, we should report the error,
> >>>>> otherwise it is good to return to guest and let it re-execute the instruction
> >>>>> again.
> >>>>>
> >>>>> page_fault_count is used to avoid the race on other vcpus, since after we
> >>>>> unprotect the target page, other cpu can enter page fault path and let the
> >>>>> page be write-protected again.
> >>>>>
> >>>>> This way can help us to detect all the case that mmu can not be fixed.
> >>>>>
> >>>> Can you write this in a comment above vcpu->arch.mmu.page_fault()?
> >>>
> >>> Okay, if Marcelo does not object this way. :)
> >>
> >> I do object, since it is possible to detect precisely the condition by
> >> storing which gfns have been cached.
> >>
> >> Then, Xiao, you need a way to handle large read-only sptes.
> >
> > Sorry, Marcelo, i am still confused why read-only sptes can not work
> > under this patch?
> >
> > The code after read-only large spte is is:
> >
> > +	if ((level > PT_PAGE_TABLE_LEVEL &&
> > +	     has_wrprotected_page(vcpu->kvm, gfn, level)) ||
> > +	     mmu_need_write_protect(vcpu, gfn, can_unsync)) {
> >  		pgprintk("%s: found shadow page for %llx, marking ro\n",
> >  			 __func__, gfn);
> >  		ret = 1;
> >
> > It return 1, then reexecute_instruction return 0. It is the same as without
> > readonly large-spte.
>
> Ah, wait, There is a case, the large page located at 0-2M, the 0-4K is used
> as a page-table (e.g. PDE), and the guest want to write the memory located
> at 5K which should be freely written. This patch can return 0 for both
> current code and readonly large spte.

Yes, should remove the read-only large spte if any write to 0-2M fails
(said 'unshadow' in the previous email but the correct is 'remove large
spte in range').

> I need to think it more.
Hi Marcelo,

Thanks for your patience. I was reading your reply over and over again, i
would like to argue it more :). Please see below.

On 11/29/2012 08:21 AM, Marcelo Tosatti wrote:
>
> https://lkml.org/lkml/2012/11/17/75
>
> Does unshadowing work with large sptes at reexecute_instruction? That
> is, do we nuke any large read-only sptes that might be causing a certain
> gfn to be read-only?
>
> That is, following the sequence there, is the large read-only spte
> properly destroyed?

Actually, sptes can not prevent gfn becoming writable, that means the gfn
can become writable *even if* it exists a spte which maps to the gfn.

The condition that can prevent gfn becoming writable is, the gfn has been
shadowed, that means the gfn can not become writable if it exists a sp
with sp.gfn = gfn. Note, drop_spte does not remove any sp.

So, either destroying spte or keeping spte doest not have any affect for
gfn write-protection.

If luck enough, my point is right, the current code can do some
optimizations as below:

 	if (level > PT_PAGE_TABLE_LEVEL &&
-	    has_wrprotected_page(vcpu->kvm, gfn, level)) {
-		ret = 1;
-		drop_spte(vcpu->kvm, sptep);
-		goto done;
-	}
+	    has_wrprotected_page(vcpu->kvm, gfn, level))
+		return 0;

1): we can return 0 instead of 1 to avoid unnecessary emulation. vcpu will
refault again then kvm will use small page.

2): need not do any change on the spte.
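[Editor's note] The proposed "return 0 and refault" behavior can be modeled as a tiny state machine (toy code, not KVM internals; the level numbering loosely follows KVM's, where level 1 is a 4K mapping): the first fault refuses to install a large mapping over a range containing a write-protected gfn, the vcpu refaults, and the refault maps at the small page size.

```c
#include <assert.h>
#include <stdbool.h>

#define PT_PAGE_TABLE_LEVEL 1 /* 4K mapping */

/* Returns 0 to mean "do not emulate, let the vcpu refault",
 * non-zero to mean "spte installed at this level". */
static int set_spte_level(int level, bool range_has_wrprotected_page)
{
	if (level > PT_PAGE_TABLE_LEVEL && range_has_wrprotected_page)
		return 0; /* refuse large mapping, force a refault */
	return 1;
}

/* Fault handler: try a large mapping first; on the refault caused by
 * the 0-return above, retry with level forced down to 4K. */
static int handle_fault(bool range_has_wrprotected_page)
{
	int level = 3; /* 1G/2M-style large mapping attempt */

	for (;;) {
		if (set_spte_level(level, range_has_wrprotected_page))
			return level; /* mapped at this level */
		level = PT_PAGE_TABLE_LEVEL; /* refault path: small pages */
	}
}
```

So a range with a shadowed page table ends up mapped with 4K sptes and no emulation is forced, which is the point of returning 0 instead of 1.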
On Mon, Dec 03, 2012 at 04:33:01PM +0800, Xiao Guangrong wrote:
> Hi Marcelo,
>
> Thanks for your patience. I was reading your reply over and over again, i
> would like to argue it more :). Please see below.
>
> On 11/29/2012 08:21 AM, Marcelo Tosatti wrote:
> >
> > https://lkml.org/lkml/2012/11/17/75
> >
> > Does unshadowing work with large sptes at reexecute_instruction? That
> > is, do we nuke any large read-only sptes that might be causing a certain
> > gfn to be read-only?
> >
> > That is, following the sequence there, is the large read-only spte
> > properly destroyed?
>
> Actually, sptes can not prevent gfn becoming writable, that means the gfn
> can become writable *even if* it exists a spte which maps to the gfn.
>
> The condition that can prevent gfn becoming writable is, the gfn has been
> shadowed, that means the gfn can not become writable if it exists a sp
> with sp.gfn = gfn. Note, drop_spte does not remove any sp.
>
> So, either destroying spte or keeping spte doest not have any affect for
> gfn write-protection.
>
> If luck enough, my point is right, the current code can do some
> optimizations as below:
>
>  	if (level > PT_PAGE_TABLE_LEVEL &&
> -	    has_wrprotected_page(vcpu->kvm, gfn, level)) {
> -		ret = 1;
> -		drop_spte(vcpu->kvm, sptep);
> -		goto done;
> -	}
> +	    has_wrprotected_page(vcpu->kvm, gfn, level))
> +		return 0;
>
> 1): we can return 0 instead of 1 to avoid unnecessary emulation. vcpu will
> refault again then kvm will use small page.

So on refault the large spte is nuked. That works, yes.

> 2): need not do any change on the spte.
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index b2e11f4..c5eb52f 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -566,6 +566,8 @@ struct kvm_arch {
 	u64 hv_guest_os_id;
 	u64 hv_hypercall;
 
+	/* synchronizing reexecute_instruction and page fault path. */
+	u64 page_fault_count;
 #ifdef CONFIG_KVM_MMU_AUDIT
 	int audit_point;
 #endif
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 891eb6d..d55ad89 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -568,6 +568,8 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
 	if (mmu_notifier_retry(vcpu->kvm, mmu_seq))
 		goto out_unlock;
 
+	vcpu->kvm->arch.page_fault_count++;
+
 	kvm_mmu_audit(vcpu, AUDIT_PRE_PAGE_FAULT);
 	kvm_mmu_free_some_pages(vcpu);
 	if (!force_pt_level)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 5fe72cc..2fe484b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -4473,37 +4473,61 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, unsigned long cr2)
 {
 	gpa_t gpa = cr2;
 	pfn_t pfn;
-
-	if (!ACCESS_ONCE(vcpu->kvm->arch.indirect_shadow_pages))
-		return false;
+	u64 page_fault_count;
+	int emulate;
 
 	if (!vcpu->arch.mmu.direct_map) {
 		gpa = kvm_mmu_gva_to_gpa_read(vcpu, cr2, NULL);
+		/*
+		 * If the mapping is invalid in guest, let cpu retry
+		 * it to generate fault.
+		 */
 		if (gpa == UNMAPPED_GVA)
-			return true; /* let cpu generate fault */
+			return true;
 	}
 
 	/*
-	 * if emulation was due to access to shadowed page table
-	 * and it failed try to unshadow page and re-enter the
-	 * guest to let CPU execute the instruction.
-	 */
-	if (kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa)))
-		return true;
-
-	/*
 	 * Do not retry the unhandleable instruction if it faults on the
 	 * readonly host memory, otherwise it will goto a infinite loop:
 	 * retry instruction -> write #PF -> emulation fail -> retry
 	 * instruction -> ...
 	 */
 	pfn = gfn_to_pfn(vcpu->kvm, gpa_to_gfn(gpa));
-	if (!is_error_noslot_pfn(pfn)) {
-		kvm_release_pfn_clean(pfn);
+
+	/*
+	 * If the instruction failed on the error pfn, it can not be fixed,
+	 * report the error to userspace.
+	 */
+	if (is_error_noslot_pfn(pfn))
+		return false;
+
+	kvm_release_pfn_clean(pfn);
+
+	/* The instructions are well-emulated on direct mmu. */
+	if (vcpu->arch.mmu.direct_map) {
+		if (ACCESS_ONCE(vcpu->kvm->arch.indirect_shadow_pages))
+			kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa));
+
+		return true;
 	}
 
-	return false;
+again:
+	page_fault_count = ACCESS_ONCE(vcpu->kvm->arch.page_fault_count);
+
+	/*
+	 * if emulation was due to access to shadowed page table
+	 * and it failed try to unshadow page and re-enter the
+	 * guest to let CPU execute the instruction.
+	 */
+	kvm_mmu_unprotect_page(vcpu->kvm, gpa_to_gfn(gpa));
+	emulate = vcpu->arch.mmu.page_fault(vcpu, cr2, PFERR_WRITE_MASK, false);
+
+	/* The page fault path called above can increase the count. */
+	if (page_fault_count + 1 !=
+	      ACCESS_ONCE(vcpu->kvm->arch.page_fault_count))
+		goto again;
+
+	return !emulate;
 }
 
 static bool retry_instruction(struct x86_emulate_ctxt *ctxt,
The current reexecute_instruction can not well detect the failed instruction
emulation. It allows guest to retry all the instructions except it accesses
on error pfn.

For example, some cases are nested-write-protect - if the page we want to
write is used as PDE but it chains to itself. Under this case, we should
stop the emulation and report the case to userspace.

Signed-off-by: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
---
 arch/x86/include/asm/kvm_host.h |    2 +
 arch/x86/kvm/paging_tmpl.h      |    2 +
 arch/x86/kvm/x86.c              |   54 ++++++++++++++++++++++++++++-----------
 3 files changed, 43 insertions(+), 15 deletions(-)