Message ID | 702419686d5700373123f6ea84e7a946c2cad8b4.1625186503.git.isaku.yamahata@intel.com (mailing list archive)
---|---
State | New, archived
Series | KVM: X86: TDX support
On 03/07/21 00:04, isaku.yamahata@intel.com wrote:
> From: Sean Christopherson <sean.j.christopherson@intel.com>
>
> Employ a 'continue' to reduce the indentation for linking a new shadow
> page during __direct_map() in preparation for linking private pages.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@intel.com>
> ---
>  arch/x86/kvm/mmu/mmu.c | 19 +++++++++----------
>  1 file changed, 9 insertions(+), 10 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 1c40dfd05979..0259781cee6a 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -2910,16 +2910,15 @@ static int __direct_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
>  			break;
>
>  		drop_large_spte(vcpu, it.sptep);
> -		if (!is_shadow_present_pte(*it.sptep)) {
> -			sp = __kvm_mmu_get_page(vcpu, base_gfn,
> -						gfn_stolen_bits, it.addr,
> -						it.level - 1, true, ACC_ALL);
> -
> -			link_shadow_page(vcpu, it.sptep, sp);
> -			if (is_tdp && huge_page_disallowed &&
> -			    req_level >= it.level)
> -				account_huge_nx_page(vcpu->kvm, sp);
> -		}
> +		if (is_shadow_present_pte(*it.sptep))
> +			continue;
> +
> +		sp = __kvm_mmu_get_page(vcpu, base_gfn, gfn_stolen_bits,
> +					it.addr, it.level - 1, true, ACC_ALL);
> +
> +		link_shadow_page(vcpu, it.sptep, sp);
> +		if (is_tdp && huge_page_disallowed && req_level >= it.level)
> +			account_huge_nx_page(vcpu->kvm, sp);
>  	}
>
>  	ret = mmu_set_spte(vcpu, it.sptep, ACC_ALL,

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 1c40dfd05979..0259781cee6a 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2910,16 +2910,15 @@ static int __direct_map(struct kvm_vcpu *vcpu, gpa_t gpa, u32 error_code,
 			break;

 		drop_large_spte(vcpu, it.sptep);
-		if (!is_shadow_present_pte(*it.sptep)) {
-			sp = __kvm_mmu_get_page(vcpu, base_gfn,
-						gfn_stolen_bits, it.addr,
-						it.level - 1, true, ACC_ALL);
-
-			link_shadow_page(vcpu, it.sptep, sp);
-			if (is_tdp && huge_page_disallowed &&
-			    req_level >= it.level)
-				account_huge_nx_page(vcpu->kvm, sp);
-		}
+		if (is_shadow_present_pte(*it.sptep))
+			continue;
+
+		sp = __kvm_mmu_get_page(vcpu, base_gfn, gfn_stolen_bits,
+					it.addr, it.level - 1, true, ACC_ALL);
+
+		link_shadow_page(vcpu, it.sptep, sp);
+		if (is_tdp && huge_page_disallowed && req_level >= it.level)
+			account_huge_nx_page(vcpu->kvm, sp);
 	}

 	ret = mmu_set_spte(vcpu, it.sptep, ACC_ALL,
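The refactor is a plain guard-clause inversion: test the condition that makes the rest of the loop body unnecessary and `continue` early, instead of nesting the real work inside an `if`. Behavior is unchanged; the payoff is one less indentation level for the nontrivial path, which the commit message says is in preparation for linking private pages in the same block. A minimal standalone sketch of the same shape, assuming nothing about KVM internals (walk_table(), entry_present(), and the populate logic below are hypothetical stand-ins, not kernel code):

#include <stdbool.h>
#include <stdio.h>

static bool entry_present(int e)
{
	/* Hypothetical stand-in for is_shadow_present_pte(). */
	return e != 0;
}

static void walk_table(int *table, int n)
{
	for (int i = 0; i < n; i++) {
		/*
		 * Before: if (!entry_present(table[i])) { ...link... }
		 * After: bail out early, keeping the work un-nested.
		 */
		if (entry_present(table[i]))
			continue;

		/* Stand-in for __kvm_mmu_get_page() + link_shadow_page(). */
		table[i] = i + 1;
		printf("linked entry %d\n", i);
	}
}

int main(void)
{
	int table[4] = { 0, 7, 0, 9 };

	walk_table(table, 4);
	return 0;
}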