Message ID | 20220203010051.2813563-11-dmatlack@google.com
---|---
State | New, archived
Series | Extend Eager Page Splitting to the shadow MMU
On Wed, Feb 2, 2022 at 5:02 PM David Matlack <dmatlack@google.com> wrote:
>
> rmap_add() only uses the slot to call gfn_to_rmap() which takes a const
> memslot.
>
> No functional change intended.
>

Reviewed-by: Ben Gardon <bgardon@google.com>

> Signed-off-by: David Matlack <dmatlack@google.com>
> ---
>  arch/x86/kvm/mmu/mmu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 48ebf2bebb90..a5e3bb632542 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -1607,7 +1607,7 @@ static bool kvm_test_age_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
>
>  #define RMAP_RECYCLE_THRESHOLD 1000
>
> -static void rmap_add(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
> +static void rmap_add(struct kvm_vcpu *vcpu, const struct kvm_memory_slot *slot,
>  		     u64 *spte, gfn_t gfn)
> {
>  	struct kvm_mmu_page *sp;
> --
> 2.35.0.rc2.247.g8bbb082509-goog
>
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 48ebf2bebb90..a5e3bb632542 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1607,7 +1607,7 @@ static bool kvm_test_age_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,

 #define RMAP_RECYCLE_THRESHOLD 1000

-static void rmap_add(struct kvm_vcpu *vcpu, struct kvm_memory_slot *slot,
+static void rmap_add(struct kvm_vcpu *vcpu, const struct kvm_memory_slot *slot,
 		     u64 *spte, gfn_t gfn)
 {
 	struct kvm_mmu_page *sp;
rmap_add() only uses the slot to call gfn_to_rmap() which takes a const
memslot.

No functional change intended.

Signed-off-by: David Matlack <dmatlack@google.com>
---
 arch/x86/kvm/mmu/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
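For readers less familiar with const propagation in C, below is a minimal, self-contained sketch of the pattern the patch applies: when a function does nothing with a pointer except pass it to a callee that already accepts a const pointer, the function's own parameter can be const-qualified too. The struct layout and the gfn_to_rmap_index() helper are simplified placeholders invented for illustration only; they are not KVM's actual definitions.

```c
#include <stdio.h>

typedef unsigned long long gfn_t;

/* Simplified stand-in for KVM's memslot; real struct has many more fields. */
struct kvm_memory_slot {
	gfn_t base_gfn;
	unsigned long npages;
};

/* Hypothetical callee that already takes a const slot: it only reads it. */
static unsigned long gfn_to_rmap_index(gfn_t gfn,
				       const struct kvm_memory_slot *slot)
{
	return (unsigned long)(gfn - slot->base_gfn);
}

/*
 * Because this caller only forwards the slot to a const-taking callee,
 * its own parameter can be const as well. That documents that the
 * memslot is never modified and lets callers that hold only a const
 * pointer call it without a cast.
 */
static void rmap_add_sketch(const struct kvm_memory_slot *slot, gfn_t gfn)
{
	unsigned long idx = gfn_to_rmap_index(gfn, slot);

	printf("gfn %llu maps to rmap index %lu\n", gfn, idx);
}

int main(void)
{
	const struct kvm_memory_slot slot = { .base_gfn = 0x1000, .npages = 512 };

	rmap_add_sketch(&slot, 0x1005);
	return 0;
}
```

The same reasoning drives the one-line change above: gfn_to_rmap() already accepts a const memslot, so constifying rmap_add()'s slot parameter is purely a type-level cleanup with no behavioral effect.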