| Message ID | 20181105064503.36285-2-richard.weiyang@gmail.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | [1/2] KVM: use same boundary check for async_pf |
2018-11-05 14:45+0800, Wei Yang:
> This patch fix some typo in comment and merge two lines to one within 80
> characters.
>
> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
> ---
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> @@ -4240,7 +4240,7 @@ static inline bool is_last_gpte(struct kvm_mmu *mmu,
>                                  unsigned level, unsigned gpte)
>  {
>         /*
> -        * The RHS has bit 7 set iff level < mmu->last_nonleaf_level.
> +        * The RHS has bit 7 set if level < mmu->last_nonleaf_level.
>          * If it is clear, there are no large pages at this level, so clear
>          * PT_PAGE_SIZE_MASK in gpte if that is the case.
>          */
> @@ -4248,7 +4248,7 @@ static inline bool is_last_gpte(struct kvm_mmu *mmu,
>
>         /*
>          * PT_PAGE_TABLE_LEVEL always terminates.  The RHS has bit 7 set
> -        * iff level <= PT_PAGE_TABLE_LEVEL, which for our purpose means

Both "iff" look intended as it stands for "if and only if".

> +        * if level <= PT_PAGE_TABLE_LEVEL, which for our purpose means
>          * level == PT_PAGE_TABLE_LEVEL; set PT_PAGE_SIZE_MASK in gpte then.
>          */
>         gpte |= level - PT_PAGE_TABLE_LEVEL - 1;
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> @@ -9531,7 +9531,7 @@ static void kvm_del_async_pf_gfn(struct kvm_vcpu *vcpu, gfn_t gfn)
>                 return;
>         k = kvm_async_pf_hash_fn(vcpu->arch.apf.gfns[j]);
>         /*
> -        * k lies cyclically in ]i,j]
> +        * k lies cyclically in [i,j]

And here, I think the notation is to hint the two cases where the range
is either (i,k] or [0,j] ∪ (i,max].  I don't think it contains enough
information to distinguish ( and [ with any useful interpretation, so
rewriting it sounds better.  [i,j] is even worse, though.

I don't really see how to do that without being verbose and maybe
someone understands that notation, so I'll keep it for now.

>          * | i.k.j |
>          * |....j i.k.| or |.k..j i...|
>          */
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> @@ -1569,8 +1569,7 @@ kvm_pfn_t __gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn,
>                 writable = NULL;
>         }
>
> -       return hva_to_pfn(addr, atomic, async, write_fault,
> -                         writable);
> +       return hva_to_pfn(addr, atomic, async, write_fault, writable);

I generally prefer patches to do only one thing.  In this case fixing
typos so I also dropped this hunk while queueing,

thanks.
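The "iff" comments under discussion describe an unsigned-subtraction trick: because both operands are small page-table levels, bit 7 of `level - mmu->last_nonleaf_level` is set exactly when `level < mmu->last_nonleaf_level`, and that is the same bit PT_PAGE_SIZE_MASK tests. The standalone C sketch below only illustrates that property; it is not kernel code, and the level range and the local PT_PAGE_SIZE_MASK definition are assumptions made for the demonstration.

```c
/*
 * Standalone illustration of the bit-7 trick behind the "iff" comments
 * in is_last_gpte().  Not kernel code: the level range and the local
 * PT_PAGE_SIZE_MASK definition are assumptions for the demo.
 */
#include <assert.h>
#include <stdio.h>

#define PT_PAGE_SIZE_MASK (1u << 7)	/* the large-page bit tested by is_last_gpte() */

int main(void)
{
	for (unsigned int level = 1; level <= 4; level++) {
		for (unsigned int last = 1; last <= 4; last++) {
			/* Unsigned subtraction wraps, so a "negative" result has bit 7 set. */
			unsigned int rhs = level - last;

			/* Bit 7 of the RHS is set iff level < last_nonleaf_level. */
			assert(!!(rhs & PT_PAGE_SIZE_MASK) == (level < last));
		}
	}
	printf("bit 7 of (level - last_nonleaf_level) is set iff level < last_nonleaf_level\n");
	return 0;
}
```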
On Thu, Dec 20, 2018 at 09:25:34PM +0100, Radim Krčmář wrote:
>2018-11-05 14:45+0800, Wei Yang:
>> This patch fix some typo in comment and merge two lines to one within 80
>> characters.
>>
>> Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
>> ---
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> @@ -4240,7 +4240,7 @@ static inline bool is_last_gpte(struct kvm_mmu *mmu,
>>                                  unsigned level, unsigned gpte)
>>  {
>>         /*
>> -        * The RHS has bit 7 set iff level < mmu->last_nonleaf_level.
>> +        * The RHS has bit 7 set if level < mmu->last_nonleaf_level.
>>          * If it is clear, there are no large pages at this level, so clear
>>          * PT_PAGE_SIZE_MASK in gpte if that is the case.
>>          */
>> @@ -4248,7 +4248,7 @@ static inline bool is_last_gpte(struct kvm_mmu *mmu,
>>
>>         /*
>>          * PT_PAGE_TABLE_LEVEL always terminates.  The RHS has bit 7 set
>> -        * iff level <= PT_PAGE_TABLE_LEVEL, which for our purpose means
>
>Both "iff" look intended as it stands for "if and only if".
>

Thanks :-)

>> +        * if level <= PT_PAGE_TABLE_LEVEL, which for our purpose means
>>          * level == PT_PAGE_TABLE_LEVEL; set PT_PAGE_SIZE_MASK in gpte then.
>>          */
>>         gpte |= level - PT_PAGE_TABLE_LEVEL - 1;
>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> @@ -9531,7 +9531,7 @@ static void kvm_del_async_pf_gfn(struct kvm_vcpu *vcpu, gfn_t gfn)
>>                 return;
>>         k = kvm_async_pf_hash_fn(vcpu->arch.apf.gfns[j]);
>>         /*
>> -        * k lies cyclically in ]i,j]
>> +        * k lies cyclically in [i,j]
>
>And here, I think the notation is to hint the two cases where the range
>is either (i,k] or [0,j] ∪ (i,max].  I don't think it contains enough

The two cases are:

    j > i:

    +------------------------+
    |    i          j        |
    +------------------------+

    i > j:

    +------------------------+
    |    j          i        |
    +------------------------+

So the range we want to search is i...j

>information to distinguish ( and [ with any useful interpretation, so
>rewriting it sounds better.  [i,j] is even worse, though.

Yep, maybe we need a better notation.

>
>I don't really see how to do that without being verbose and maybe
>someone understands that notation, so I'll keep it for now.
>
>>          * | i.k.j |
>>          * |....j i.k.| or |.k..j i...|
>>          */
>> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
>> @@ -1569,8 +1569,7 @@ kvm_pfn_t __gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn,
>>                 writable = NULL;
>>         }
>>
>> -       return hva_to_pfn(addr, atomic, async, write_fault,
>> -                         writable);
>> +       return hva_to_pfn(addr, atomic, async, write_fault, writable);
>
>I generally prefer patches to do only one thing.  In this case fixing
>typos so I also dropped this hunk while queueing,

Got it, thanks.

>
>thanks.
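The interval the comment tries to capture can also be written out as a predicate. Below is a minimal sketch, assuming the half-open cyclic range (i, j] over hash-table indices; in_cyclic_range() is an illustrative helper name, not the kernel's kvm_del_async_pf_gfn() logic.

```c
/*
 * Minimal sketch of "k lies cyclically in (i, j]" as discussed above.
 * Illustrative only; in_cyclic_range() is not a kernel helper.
 */
#include <assert.h>
#include <stdbool.h>

/*
 * True if walking forward from i (exclusive) to j (inclusive), wrapping
 * around the end of the table if needed, passes through k.
 */
static bool in_cyclic_range(unsigned int i, unsigned int j, unsigned int k)
{
	if (i <= j)			/* no wrap:  | i...k...j |           */
		return i < k && k <= j;
	/* wrapped:  |....j   i.k.|  or  |.k..j   i...| */
	return k > i || k <= j;
}

int main(void)
{
	assert(in_cyclic_range(10, 20, 15));	/* plain case               */
	assert(!in_cyclic_range(10, 20, 10));	/* i itself is excluded     */
	assert(in_cyclic_range(60, 5, 63));	/* wraps past the table end */
	assert(in_cyclic_range(60, 5, 3));	/* wraps into [0, j]        */
	assert(!in_cyclic_range(60, 5, 30));	/* between j and i          */
	return 0;
}
```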
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 3730d482d2fd..bb587e7e8771 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4240,7 +4240,7 @@ static inline bool is_last_gpte(struct kvm_mmu *mmu,
                                 unsigned level, unsigned gpte)
 {
        /*
-        * The RHS has bit 7 set iff level < mmu->last_nonleaf_level.
+        * The RHS has bit 7 set if level < mmu->last_nonleaf_level.
         * If it is clear, there are no large pages at this level, so clear
         * PT_PAGE_SIZE_MASK in gpte if that is the case.
         */
@@ -4248,7 +4248,7 @@ static inline bool is_last_gpte(struct kvm_mmu *mmu,
 
        /*
         * PT_PAGE_TABLE_LEVEL always terminates.  The RHS has bit 7 set
-        * iff level <= PT_PAGE_TABLE_LEVEL, which for our purpose means
+        * if level <= PT_PAGE_TABLE_LEVEL, which for our purpose means
         * level == PT_PAGE_TABLE_LEVEL; set PT_PAGE_SIZE_MASK in gpte then.
         */
        gpte |= level - PT_PAGE_TABLE_LEVEL - 1;
@@ -5646,7 +5646,7 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
         * spte from present to present (changing the spte from present
         * to nonpresent will flush all the TLBs immediately), in other
         * words, the only case we care is mmu_spte_update() where we
-        * haved checked SPTE_HOST_WRITEABLE | SPTE_MMU_WRITEABLE
+        * have checked SPTE_HOST_WRITEABLE | SPTE_MMU_WRITEABLE
         * instead of PT_WRITABLE_MASK, that means it does not depend
         * on PT_WRITABLE_MASK anymore.
         */
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 76f8d46be05d..71bd9e0aa540 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9278,7 +9278,7 @@ static void kvm_mmu_slot_apply_flags(struct kvm *kvm,
         * with dirty logging disabled in order to eliminate unnecessary GPA
         * logging in PML buffer (and potential PML buffer full VMEXT).  This
         * guarantees leaving PML enabled during guest's lifetime won't have
-        * any additonal overhead from PML when guest is running with dirty
+        * any additional overhead from PML when guest is running with dirty
         * logging disabled for memory slots.
         *
         * kvm_x86_ops->slot_enable_log_dirty is called when switching new slot
@@ -9531,7 +9531,7 @@ static void kvm_del_async_pf_gfn(struct kvm_vcpu *vcpu, gfn_t gfn)
                return;
        k = kvm_async_pf_hash_fn(vcpu->arch.apf.gfns[j]);
        /*
-        * k lies cyclically in ]i,j]
+        * k lies cyclically in [i,j]
         * | i.k.j |
         * |....j i.k.| or |.k..j i...|
         */
diff --git a/virt/kvm/async_pf.c b/virt/kvm/async_pf.c
index 4f4f6eac88a4..eca271767899 100644
--- a/virt/kvm/async_pf.c
+++ b/virt/kvm/async_pf.c
@@ -82,7 +82,7 @@ static void async_pf_execute(struct work_struct *work)
        might_sleep();
 
        /*
-        * This work is run asynchromously to the task which owns
+        * This work is run asynchronously to the task which owns
         * mm and might be done in another context, so we must
         * access remotely.
         */
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 2679e476b6c3..c9cad4f22960 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1569,8 +1569,7 @@ kvm_pfn_t __gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn,
                writable = NULL;
        }
 
-       return hva_to_pfn(addr, atomic, async, write_fault,
-                         writable);
+       return hva_to_pfn(addr, atomic, async, write_fault, writable);
 }
 EXPORT_SYMBOL_GPL(__gfn_to_pfn_memslot);
This patch fix some typo in comment and merge two lines to one within 80
characters.

Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
---
 arch/x86/kvm/mmu.c  | 6 +++---
 arch/x86/kvm/x86.c  | 4 ++--
 virt/kvm/async_pf.c | 2 +-
 virt/kvm/kvm_main.c | 3 +--
 4 files changed, 7 insertions(+), 8 deletions(-)