diff mbox series

[v2,07/16] KVM: Store gfn_to_pfn_cache length as an immutable property

Message ID 20221013211234.1318131-8-seanjc@google.com (mailing list archive)
State New, archived
Headers show
Series KVM: x86: gfn_to_pfn_cache fixes and cleanups | expand

Commit Message

Sean Christopherson Oct. 13, 2022, 9:12 p.m. UTC
From: Michal Luczaj <mhal@rbox.co>

Make the length of a gfn=>pfn cache an immutable property of the cache
to cleanup the APIs and avoid potential bugs, e.g calling check() with a
larger size than refresh() could put KVM into an infinite loop.

All current (and anticipated future) users access the cache with a
predetermined size, which isn't a coincidence as using a dedicated cache
really only make sense when the access pattern is "fixed".

Add a WARN in kvm_setup_guest_pvclock() to assert that the offset+size
matches the length of the cache, both to make it more obvious that the
length really is immutable in that case, and to detect future bugs.

No functional change intended.

Signed-off-by: Michal Luczaj <mhal@rbox.co>
Co-developed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
---
 arch/x86/kvm/x86.c        | 14 ++++++------
 arch/x86/kvm/xen.c        | 46 ++++++++++++++++-----------------------
 include/linux/kvm_host.h  | 14 +++++-------
 include/linux/kvm_types.h |  1 +
 virt/kvm/pfncache.c       | 18 +++++++--------
 5 files changed, 42 insertions(+), 51 deletions(-)

Comments

David Woodhouse Nov. 21, 2022, 2:26 p.m. UTC | #1
On Thu, 2022-10-13 at 21:12 +0000, Sean Christopherson wrote:
> From: Michal Luczaj <mhal@rbox.co>
> 
> Make the length of a gfn=>pfn cache an immutable property of the cache
> to cleanup the APIs and avoid potential bugs, e.g calling check() with a
> larger size than refresh() could put KVM into an infinite loop.

Hm, that's a strange hypothetical bug to be worried about, given the
pattern is usually to have the check() and refresh() within a few lines
of each other with just atomicity/locking stuff in between them.

I won't fight for it, but I quite liked the idea that each user of a
GPC would know how much space *it* is going to access, and provide that
length as a required parameter. I do note you've added a WARN_ON to one
such user, and that's great — but overall, this patch makes that
checking *optional* instead of mandatory.

> All current (and anticipated future) users access the cache with a
> predetermined size, which isn't a coincidence as using a dedicated cache
> really only make sense when the access pattern is "fixed".

In fixing up the runstate area, I've made that not true. Not only does
the runstate area change size at runtime if the guest changes between
32-bit and 64-bit mode, but it also now uses *two* GPCs to cope with a
region that crosses a page boundary, and the size of the first
therefore changes according to how much fits on the tail of the page.

> Add a WARN in kvm_setup_guest_pvclock() to assert that the offset+size
> matches the length of the cache, both to make it more obvious that the
> length really is immutable in that case, and to detect future bugs.
...
> @@ -3031,13 +3030,13 @@ static void kvm_setup_guest_pvclock(struct kvm_vcpu *v,
>  	struct pvclock_vcpu_time_info *guest_hv_clock;
>  	unsigned long flags;
>  
> +	WARN_ON_ONCE(gpc->len != offset + sizeof(*guest_hv_clock));
> +

That ought to be 'gpc->len < offset + sizeof(*guest_hv_clock)' I think?

In the case where we are writing a clock *within* a mapped Xen
vcpu_info structure, it doesn't have to be at the *end* of that
structure. I think the xen_shinfo_test should have caught that?
Sean Christopherson Nov. 21, 2022, 7:11 p.m. UTC | #2
On Mon, Nov 21, 2022, David Woodhouse wrote:
> On Thu, 2022-10-13 at 21:12 +0000, Sean Christopherson wrote:
> > From: Michal Luczaj <mhal@rbox.co>
> > 
> > Make the length of a gfn=>pfn cache an immutable property of the cache
> > to cleanup the APIs and avoid potential bugs, e.g calling check() with a
> > larger size than refresh() could put KVM into an infinite loop.
> 
> Hm, that's a strange hypothetical bug to be worried about, given the
> pattern is usually to have the check() and refresh() within a few lines
> of each other with just atomicity/locking stuff in between them.

Why do you say it's strange to be worried about?  The GPC and Xen code is all quite
complex and has had multiple bugs, several of which are not exactly edge cases.
I don't think it's at all strange to want to make it difficult to introduce a bug
that would in many ways be worse than panicking the kernel.

But as Paolo said, the APIs themselves are to blame[*], check() and refresh()
shouldn't be split for the common case, i.e. this particular concern should largely
be a non-issue in the long run.

[*] https://lore.kernel.org/all/c61f6089-57b7-e00f-d5ed-68e62237eab0@redhat.com

> I won't fight for it, but I quite liked the idea that each user of a
> GPC would know how much space *it* is going to access, and provide that
> length as a required parameter. I do note you've added a WARN_ON to one
> such user, and that's great — but overall, this patch makes that
> checking *optional* instead of mandatory.

I honestly don't see a meaningful difference in this case.  The only practical
difference is that shoving @len into the cache makes the check a one-time thing.
The "mandatory" check at use time still relies on a human to not make a mistake.
If the check were derived from the actual access, a la get_user(), then I would
feel differently.

Case in point, the mandatory check didn't prevent KVM from screwing up bounds
checking in kvm_xen_schedop_poll().  The PAGE_SIZE passed in for @len is in no
way tied to actual access that's being performed, the code is simply regurgitating
the size of the cache.

> > All current (and anticipated future) users access the cache with a
> > predetermined size, which isn't a coincidence as using a dedicated cache
> > really only make sense when the access pattern is "fixed".
> 
> In fixing up the runstate area, I've made that not true. Not only does
> the runstate area change size at runtime if the guest changes between
> 32-bit and 64-bit mode, but it also now uses *two* GPCs to cope with a
> region that crosses a page boundary, and the size of the first
> therefore changes according to how much fits on the tail of the page.
> 
> > Add a WARN in kvm_setup_guest_pvclock() to assert that the offset+size
> > matches the length of the cache, both to make it more obvious that the
> > length really is immutable in that case, and to detect future bugs.
> ...
> > @@ -3031,13 +3030,13 @@ static void kvm_setup_guest_pvclock(struct kvm_vcpu *v,
> >  	struct pvclock_vcpu_time_info *guest_hv_clock;
> >  	unsigned long flags;
> >  
> > +	WARN_ON_ONCE(gpc->len != offset + sizeof(*guest_hv_clock));
> > +
> 
> That ought to be 'gpc->len < offset + sizeof(*guest_hv_clock)' I think?
> 
> In the case where we are writing a clock *within* a mapped Xen
> vcpu_info structure, it doesn't have to be at the *end* of that
> structure. I think the xen_shinfo_test should have caught that?

The WARN doesn't get false positive because "struct pvclock_vcpu_time_info" is
placed at the end of "struct vcpu_info" and "struct compat_vcpu_info".
 
I don't have a strong opinion on whether it's "!=" of "<", my goal in adding the
WARN was primarily to show that @len really is immutable in this case.  Guarding
against future overrun bugs was a bonus.
David Woodhouse Nov. 21, 2022, 8:02 p.m. UTC | #3
On Mon, 2022-11-21 at 19:11 +0000, Sean Christopherson wrote:
> On Mon, Nov 21, 2022, David Woodhouse wrote:
> > On Thu, 2022-10-13 at 21:12 +0000, Sean Christopherson wrote:
> > > From: Michal Luczaj <mhal@rbox.co>
> > > 
> > > Make the length of a gfn=>pfn cache an immutable property of the cache
> > > to cleanup the APIs and avoid potential bugs, e.g calling check() with a
> > > larger size than refresh() could put KVM into an infinite loop.
> > 
> > Hm, that's a strange hypothetical bug to be worried about, given the
> > pattern is usually to have the check() and refresh() within a few lines
> > of each other with just atomicity/locking stuff in between them.
> 
> Why do you say it's strange to be worried about?  The GPC and Xen code is all quite
> complex and has had multiple bugs, several of which are not exactly edge cases.
> I don't think it's at all strange to want to make it difficult to introduce a bug
> that would in many ways be worse than panicking the kernel.

The check() and refresh() calls are within a few lines of each other,
and it'd be really strange for them to have a *different* idea about
what the length is, surely?

> But as Paolo said, the APIs themselves are to blame[*], check() and refresh()
> shouldn't be split for the common case, i.e. this particular concern should largely
> be a non-issue in the long run.
> 
> [*] https://lore.kernel.org/all/c61f6089-57b7-e00f-d5ed-68e62237eab0@redhat.com

Yeah. As I said to Paul, I've been tempted by that. I've so far not
done it because although they look broadly similar, a bunch of the
sites do end up with *different* code between the check() and the
refresh(), for various locking and atomicity reasons.

> > I won't fight for it, but I quite liked the idea that each user of a
> > GPC would know how much space *it* is going to access, and provide that
> > length as a required parameter. I do note you've added a WARN_ON to one
> > such user, and that's great — but overall, this patch makes that
> > checking *optional* instead of mandatory.
> 
> I honestly don't see a meaningful difference in this case.  The only practical
> difference is that shoving @len into the cache makes the check a one-time thing.
> The "mandatory" check at use time still relies on a human to not make a mistake.
> If the check were derived from the actual access, a la get_user(), then I would
> feel differently.
>
> Case in point, the mandatory check didn't prevent KVM from screwing up bounds
> checking in kvm_xen_schedop_poll().  The PAGE_SIZE passed in for @len is in no
> way tied to actual access that's being performed, the code is simply regurgitating
> the size of the cache.

True, but that's a different class of bug, and the human needs to make
a more *egregious* mistake.

If the function itself writes outside the size that *it* thinks *it's*
going to write, right there and then in that function, that's utterly
hosed (and the SCHEDOP_poll thing was indeed so hosed).

The mandatory check *did* save us from configuring a 32-bit runstate
area at the end of a page, then *writing* to it in 64-bit mode (where
it's larger) and running off the end of the page.

It saved us from "knowing", a few seconds ago under different
circumstances, what the size of the runstate area was... and then it
actually being different when it's written.

But that's not the common case, so again, I won't fight for it.

I've reworked the unapplied parts of this series on top of the poll and
runstate fixes in my tree, *except* for this one making the length
immutable, and I'm running some tests.

https://git.infradead.org/users/dwmw2/linux.git/shortlog/refs/heads/gpc-fixes

I'm happy to reinstate the immutable length thing in some form on top.

Given that the runstate code already calculates for itself how many
bytes it can fit onto the first page, it really doesn't care about the
length field in the GPC. As a nasty hack, the runstate code could
probably even get away with setting 'len' to zero. That's kind of
awful, but maybe we could introduce a __kvm_gpc_activate() which does
take a new length, leaving kvm_gpc_activate() without it?

> > > All current (and anticipated future) users access the cache with a
> > > predetermined size, which isn't a coincidence as using a dedicated cache
> > > really only make sense when the access pattern is "fixed".
> > 
> > In fixing up the runstate area, I've made that not true. Not only does
> > the runstate area change size at runtime if the guest changes between
> > 32-bit and 64-bit mode, but it also now uses *two* GPCs to cope with a
> > region that crosses a page boundary, and the size of the first
> > therefore changes according to how much fits on the tail of the page.
> > 
> > > Add a WARN in kvm_setup_guest_pvclock() to assert that the offset+size
> > > matches the length of the cache, both to make it more obvious that the
> > > length really is immutable in that case, and to detect future bugs.
> > 
> > ...
> > > @@ -3031,13 +3030,13 @@ static void kvm_setup_guest_pvclock(struct kvm_vcpu *v,
> > >  	struct pvclock_vcpu_time_info *guest_hv_clock;
> > >  	unsigned long flags;
> > >  
> > > +	WARN_ON_ONCE(gpc->len != offset + sizeof(*guest_hv_clock));
> > > +
> > 
> > That ought to be 'gpc->len < offset + sizeof(*guest_hv_clock)' I think?
> > 
> > In the case where we are writing a clock *within* a mapped Xen
> > vcpu_info structure, it doesn't have to be at the *end* of that
> > structure. I think the xen_shinfo_test should have caught that?
> 
> The WARN doesn't get false positive because "struct pvclock_vcpu_time_info" is
> placed at the end of "struct vcpu_info" and "struct compat_vcpu_info".
>  
> I don't have a strong opinion on whether it's "!=" of "<", my goal in adding the
> WARN was primarily to show that @len really is immutable in this case.  Guarding
> against future overrun bugs was a bonus.

Ah right, I think I was looking at the pvclock_wall_clock field in the
shared_info, not the time field in the vcpu_info.
Sean Christopherson Nov. 22, 2022, 6:59 p.m. UTC | #4
On Mon, Nov 21, 2022, David Woodhouse wrote:
> On Mon, 2022-11-21 at 19:11 +0000, Sean Christopherson wrote:
> > On Mon, Nov 21, 2022, David Woodhouse wrote:
> > > I won't fight for it, but I quite liked the idea that each user of a
> > > GPC would know how much space *it* is going to access, and provide that
> > > length as a required parameter. I do note you've added a WARN_ON to one
> > > such user, and that's great — but overall, this patch makes that
> > > checking *optional* instead of mandatory.
> > 
> > I honestly don't see a meaningful difference in this case.  The only practical
> > difference is that shoving @len into the cache makes the check a one-time thing.
> > The "mandatory" check at use time still relies on a human to not make a mistake.
> > If the check were derived from the actual access, a la get_user(), then I would
> > feel differently.
> >
> > Case in point, the mandatory check didn't prevent KVM from screwing up bounds
> > checking in kvm_xen_schedop_poll().  The PAGE_SIZE passed in for @len is in no
> > way tied to actual access that's being performed, the code is simply regurgitating
> > the size of the cache.
> 
> True, but that's a different class of bug, and the human needs to make
> a more *egregious* mistake.
> 
> If the function itself writes outside the size that *it* thinks *it's*
> going to write, right there and then in that function, that's utterly
> hosed (and the SCHEDOP_poll thing was indeed so hosed).

Yes, such mistakes are more egregious in the sense they are harder to find and
have more severe consequences, but I don't think the mistakes are necessarily
harder to make.  Bugs in simple usage patterns are easy to spot, but at the same
time they're also less likely to be buggy because they're simpler.

> The mandatory check *did* save us from configuring a 32-bit runstate
> area at the end of a page, then *writing* to it in 64-bit mode (where
> it's larger) and running off the end of the page.

Only because the length/capacity wasn't immutable, i.e. that particilar bug couldn't
have been introduced in the first place if kvm_gpc_activate() was the only "public"
API that allowed "changing" the length.

That's really what I dislike.  I have no objection to adding a sanity check, what
I think is broken and dangerous is allowing a gpc->gpa to effectively become valid
by refreshing with a smaller length.

The gfn_to_hva_cache APIs have the same problem, but they get away with it because
they don't support concurrent usage and don't have to deal with invalidation events.

Lastly, if we keep "length" then we also need to keep "gpa", otherwise the resulting
API is all kinds of funky.

E.g. I'd be totally ok with something like this that would allow users to opt-in
to sanity checking their usage.

int __kvm_gpc_lock(struct gfn_to_pfn_cache *gpc)
{
	int r;

	read_lock_irqsave(&gpc->lock, gpc->flags);

	while (kvm_gpc_check(gpc)) {
                 read_unlock_irqrestore(&gpc->lock, gpc->flags);

                 r = kvm_gpc_refresh(gpc);
		 if (r)
                         return r;

                 read_lock_irqsave(&gpc->lock, gpc->flags);
	}

	return 0;
}

int kvm_gpc_lock(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long len)
{
	if (WARN_ON_ONCE(gpa < gpc->gpa || (gpa + len > PAGE_SIZE) ||
			 ((gpa & PAGE_MASK) != (gpc->gpa & PAGE_MASK)))
		return -EINVAL;	

	return __kvm_gpc_lock(gpc);
}
diff mbox series

Patch

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9c68050672de..0b4fa3455f3a 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -2315,8 +2315,7 @@  static void kvm_write_system_time(struct kvm_vcpu *vcpu, gpa_t system_time,
 
 	/* we verify if the enable bit is set... */
 	if (system_time & 1)
-		kvm_gpc_activate(&vcpu->arch.pv_time, system_time & ~1ULL,
-				 sizeof(struct pvclock_vcpu_time_info));
+		kvm_gpc_activate(&vcpu->arch.pv_time, system_time & ~1ULL);
 	else
 		kvm_gpc_deactivate(&vcpu->arch.pv_time);
 
@@ -3031,13 +3030,13 @@  static void kvm_setup_guest_pvclock(struct kvm_vcpu *v,
 	struct pvclock_vcpu_time_info *guest_hv_clock;
 	unsigned long flags;
 
+	WARN_ON_ONCE(gpc->len != offset + sizeof(*guest_hv_clock));
+
 	read_lock_irqsave(&gpc->lock, flags);
-	while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa,
-			      offset + sizeof(*guest_hv_clock))) {
+	while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa)) {
 		read_unlock_irqrestore(&gpc->lock, flags);
 
-		if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa,
-				    offset + sizeof(*guest_hv_clock)))
+		if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa))
 			return;
 
 		read_lock_irqsave(&gpc->lock, flags);
@@ -11755,7 +11754,8 @@  int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 	vcpu->arch.regs_avail = ~0;
 	vcpu->arch.regs_dirty = ~0;
 
-	kvm_gpc_init(&vcpu->arch.pv_time, vcpu->kvm, vcpu, KVM_HOST_USES_PFN);
+	kvm_gpc_init(&vcpu->arch.pv_time, vcpu->kvm, vcpu, KVM_HOST_USES_PFN,
+		     sizeof(struct pvclock_vcpu_time_info));
 
 	if (!irqchip_in_kernel(vcpu->kvm) || kvm_vcpu_is_reset_bsp(vcpu))
 		vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
diff --git a/arch/x86/kvm/xen.c b/arch/x86/kvm/xen.c
index 55b7195d69d6..6f5a5507392e 100644
--- a/arch/x86/kvm/xen.c
+++ b/arch/x86/kvm/xen.c
@@ -47,7 +47,7 @@  static int kvm_xen_shared_info_init(struct kvm *kvm, gfn_t gfn)
 	}
 
 	do {
-		ret = kvm_gpc_activate(gpc, gpa, PAGE_SIZE);
+		ret = kvm_gpc_activate(gpc, gpa);
 		if (ret)
 			goto out;
 
@@ -203,7 +203,6 @@  void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, int state)
 	struct gfn_to_pfn_cache *gpc = &vx->runstate_cache;
 	uint64_t *user_times;
 	unsigned long flags;
-	size_t user_len;
 	int *user_state;
 
 	kvm_xen_update_runstate(v, state);
@@ -211,17 +210,15 @@  void kvm_xen_update_runstate_guest(struct kvm_vcpu *v, int state)
 	if (!vx->runstate_cache.active)
 		return;
 
-	user_len = sizeof(struct vcpu_runstate_info);
-
 	read_lock_irqsave(&gpc->lock, flags);
-	while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa, user_len)) {
+	while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa)) {
 		read_unlock_irqrestore(&gpc->lock, flags);
 
 		/* When invoked from kvm_sched_out() we cannot sleep */
 		if (state == RUNSTATE_runnable)
 			return;
 
-		if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa, user_len))
+		if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa))
 			return;
 
 		read_lock_irqsave(&gpc->lock, flags);
@@ -347,12 +344,10 @@  void kvm_xen_inject_pending_events(struct kvm_vcpu *v)
 	 * little more honest about it.
 	 */
 	read_lock_irqsave(&gpc->lock, flags);
-	while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa,
-			      sizeof(struct vcpu_info))) {
+	while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa)) {
 		read_unlock_irqrestore(&gpc->lock, flags);
 
-		if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa,
-				    sizeof(struct vcpu_info)))
+		if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa))
 			return;
 
 		read_lock_irqsave(&gpc->lock, flags);
@@ -412,8 +407,7 @@  int __kvm_xen_has_interrupt(struct kvm_vcpu *v)
 		     sizeof_field(struct compat_vcpu_info, evtchn_upcall_pending));
 
 	read_lock_irqsave(&gpc->lock, flags);
-	while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa,
-			      sizeof(struct vcpu_info))) {
+	while (!kvm_gpc_check(v->kvm, gpc, gpc->gpa)) {
 		read_unlock_irqrestore(&gpc->lock, flags);
 
 		/*
@@ -427,8 +421,7 @@  int __kvm_xen_has_interrupt(struct kvm_vcpu *v)
 		if (in_atomic() || !task_is_running(current))
 			return 1;
 
-		if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa,
-				    sizeof(struct vcpu_info))) {
+		if (kvm_gpc_refresh(v->kvm, gpc, gpc->gpa)) {
 			/*
 			 * If this failed, userspace has screwed up the
 			 * vcpu_info mapping. No interrupts for you.
@@ -555,7 +548,7 @@  int kvm_xen_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data)
 		}
 
 		r = kvm_gpc_activate(&vcpu->arch.xen.vcpu_info_cache,
-				     data->u.gpa, sizeof(struct vcpu_info));
+				     data->u.gpa);
 		if (!r)
 			kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
 
@@ -569,8 +562,7 @@  int kvm_xen_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data)
 		}
 
 		r = kvm_gpc_activate(&vcpu->arch.xen.vcpu_time_info_cache,
-				     data->u.gpa,
-				     sizeof(struct pvclock_vcpu_time_info));
+				     data->u.gpa);
 		if (!r)
 			kvm_make_request(KVM_REQ_CLOCK_UPDATE, vcpu);
 		break;
@@ -587,8 +579,7 @@  int kvm_xen_vcpu_set_attr(struct kvm_vcpu *vcpu, struct kvm_xen_vcpu_attr *data)
 		}
 
 		r = kvm_gpc_activate(&vcpu->arch.xen.runstate_cache,
-				     data->u.gpa,
-				     sizeof(struct vcpu_runstate_info));
+				     data->u.gpa);
 		break;
 
 	case KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_CURRENT:
@@ -956,7 +947,7 @@  static bool wait_pending_event(struct kvm_vcpu *vcpu, int nr_ports,
 
 	read_lock_irqsave(&gpc->lock, flags);
 	idx = srcu_read_lock(&kvm->srcu);
-	if (!kvm_gpc_check(kvm, gpc, gpc->gpa, PAGE_SIZE))
+	if (!kvm_gpc_check(kvm, gpc, gpc->gpa))
 		goto out_rcu;
 
 	ret = false;
@@ -1347,7 +1338,7 @@  int kvm_xen_set_evtchn_fast(struct kvm_xen_evtchn *xe, struct kvm *kvm)
 	idx = srcu_read_lock(&kvm->srcu);
 
 	read_lock_irqsave(&gpc->lock, flags);
-	if (!kvm_gpc_check(kvm, gpc, gpc->gpa, PAGE_SIZE))
+	if (!kvm_gpc_check(kvm, gpc, gpc->gpa))
 		goto out_rcu;
 
 	if (IS_ENABLED(CONFIG_64BIT) && kvm->arch.xen.long_mode) {
@@ -1381,7 +1372,7 @@  int kvm_xen_set_evtchn_fast(struct kvm_xen_evtchn *xe, struct kvm *kvm)
 		gpc = &vcpu->arch.xen.vcpu_info_cache;
 
 		read_lock_irqsave(&gpc->lock, flags);
-		if (!kvm_gpc_check(kvm, gpc, gpc->gpa, sizeof(struct vcpu_info))) {
+		if (!kvm_gpc_check(kvm, gpc, gpc->gpa)) {
 			/*
 			 * Could not access the vcpu_info. Set the bit in-kernel
 			 * and prod the vCPU to deliver it for itself.
@@ -1479,7 +1470,7 @@  static int kvm_xen_set_evtchn(struct kvm_xen_evtchn *xe, struct kvm *kvm)
 			break;
 
 		idx = srcu_read_lock(&kvm->srcu);
-		rc = kvm_gpc_refresh(kvm, gpc, gpc->gpa, PAGE_SIZE);
+		rc = kvm_gpc_refresh(kvm, gpc, gpc->gpa);
 		srcu_read_unlock(&kvm->srcu, idx);
 	} while(!rc);
 
@@ -1809,11 +1800,11 @@  void kvm_xen_init_vcpu(struct kvm_vcpu *vcpu)
 	timer_setup(&vcpu->arch.xen.poll_timer, cancel_evtchn_poll, 0);
 
 	kvm_gpc_init(&vcpu->arch.xen.runstate_cache, vcpu->kvm, NULL,
-		     KVM_HOST_USES_PFN);
+		     KVM_HOST_USES_PFN, sizeof(struct vcpu_runstate_info));
 	kvm_gpc_init(&vcpu->arch.xen.vcpu_info_cache, vcpu->kvm, NULL,
-		     KVM_HOST_USES_PFN);
+		     KVM_HOST_USES_PFN, sizeof(struct vcpu_info));
 	kvm_gpc_init(&vcpu->arch.xen.vcpu_time_info_cache, vcpu->kvm, NULL,
-		     KVM_HOST_USES_PFN);
+		     KVM_HOST_USES_PFN, sizeof(struct pvclock_vcpu_time_info));
 }
 
 void kvm_xen_destroy_vcpu(struct kvm_vcpu *vcpu)
@@ -1831,7 +1822,8 @@  void kvm_xen_destroy_vcpu(struct kvm_vcpu *vcpu)
 void kvm_xen_init_vm(struct kvm *kvm)
 {
 	idr_init(&kvm->arch.xen.evtchn_ports);
-	kvm_gpc_init(&kvm->arch.xen.shinfo_cache, kvm, NULL, KVM_HOST_USES_PFN);
+	kvm_gpc_init(&kvm->arch.xen.shinfo_cache, kvm, NULL, KVM_HOST_USES_PFN,
+		     PAGE_SIZE);
 }
 
 void kvm_xen_destroy_vm(struct kvm *kvm)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index e5e70607a5ef..c79f2e122ac8 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1252,13 +1252,15 @@  void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn);
  *		   changes!---will also force @vcpu to exit the guest and
  *		   refresh the cache); and/or if the PFN used directly
  *		   by KVM (and thus needs a kernel virtual mapping).
+ * @len:	   sanity check; the range being access must fit a single page.
  *
  * This sets up a gfn_to_pfn_cache by initializing locks and assigning the
  * immutable attributes.  Note, the cache must be zero-allocated (or zeroed by
  * the caller before init).
  */
 void kvm_gpc_init(struct gfn_to_pfn_cache *gpc, struct kvm *kvm,
-		  struct kvm_vcpu *vcpu, enum pfn_cache_usage usage);
+		  struct kvm_vcpu *vcpu, enum pfn_cache_usage usage,
+		  unsigned long len);
 
 /**
  * kvm_gpc_activate - prepare a cached kernel mapping and HPA for a given guest
@@ -1266,7 +1268,6 @@  void kvm_gpc_init(struct gfn_to_pfn_cache *gpc, struct kvm *kvm,
  *
  * @gpc:	   struct gfn_to_pfn_cache object.
  * @gpa:	   guest physical address to map.
- * @len:	   sanity check; the range being access must fit a single page.
  *
  * @return:	   0 for success.
  *		   -EINVAL for a mapping which would cross a page boundary.
@@ -1276,7 +1277,7 @@  void kvm_gpc_init(struct gfn_to_pfn_cache *gpc, struct kvm *kvm,
  * invalidations to be processed.  Callers are required to use kvm_gpc_check()
  * to ensure that the cache is valid before accessing the target page.
  */
-int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long len);
+int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa);
 
 /**
  * kvm_gpc_check - check validity of a gfn_to_pfn_cache.
@@ -1284,7 +1285,6 @@  int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long len)
  * @kvm:	   pointer to kvm instance.
  * @gpc:	   struct gfn_to_pfn_cache object.
  * @gpa:	   current guest physical address to map.
- * @len:	   sanity check; the range being access must fit a single page.
  *
  * @return:	   %true if the cache is still valid and the address matches.
  *		   %false if the cache is not valid.
@@ -1296,8 +1296,7 @@  int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long len)
  * Callers in IN_GUEST_MODE may do so without locking, although they should
  * still hold a read lock on kvm->scru for the memslot checks.
  */
-bool kvm_gpc_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa,
-		   unsigned long len);
+bool kvm_gpc_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa);
 
 /**
  * kvm_gpc_refresh - update a previously initialized cache.
@@ -1317,8 +1316,7 @@  bool kvm_gpc_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa,
  * still lock and check the cache status, as this function does not return
  * with the lock still held to permit access.
  */
-int kvm_gpc_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa,
-		    unsigned long len);
+int kvm_gpc_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa);
 
 /**
  * kvm_gpc_unmap - temporarily unmap a gfn_to_pfn_cache.
diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
index 76de36e56cdf..d66b276d29e0 100644
--- a/include/linux/kvm_types.h
+++ b/include/linux/kvm_types.h
@@ -74,6 +74,7 @@  struct gfn_to_pfn_cache {
 	struct mutex refresh_lock;
 	void *khva;
 	kvm_pfn_t pfn;
+	unsigned long len;
 	enum pfn_cache_usage usage;
 	bool active;
 	bool valid;
diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index 6756dfa60d5a..34883ad12536 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -76,15 +76,14 @@  void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm, unsigned long start,
 	}
 }
 
-bool kvm_gpc_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa,
-		   unsigned long len)
+bool kvm_gpc_check(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa)
 {
 	struct kvm_memslots *slots = kvm_memslots(kvm);
 
 	if (!gpc->active)
 		return false;
 
-	if ((gpa & ~PAGE_MASK) + len > PAGE_SIZE)
+	if ((gpa & ~PAGE_MASK) + gpc->len > PAGE_SIZE)
 		return false;
 
 	if (gpc->gpa != gpa || gpc->generation != slots->generation ||
@@ -238,8 +237,7 @@  static kvm_pfn_t hva_to_pfn_retry(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
 	return -EFAULT;
 }
 
-int kvm_gpc_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa,
-		    unsigned long len)
+int kvm_gpc_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa)
 {
 	struct kvm_memslots *slots = kvm_memslots(kvm);
 	unsigned long page_offset = gpa & ~PAGE_MASK;
@@ -253,7 +251,7 @@  int kvm_gpc_refresh(struct kvm *kvm, struct gfn_to_pfn_cache *gpc, gpa_t gpa,
 	 * If must fit within a single page. The 'len' argument is
 	 * only to enforce that.
 	 */
-	if (page_offset + len > PAGE_SIZE)
+	if (page_offset + gpc->len > PAGE_SIZE)
 		return -EINVAL;
 
 	/*
@@ -358,7 +356,8 @@  void kvm_gpc_unmap(struct kvm *kvm, struct gfn_to_pfn_cache *gpc)
 EXPORT_SYMBOL_GPL(kvm_gpc_unmap);
 
 void kvm_gpc_init(struct gfn_to_pfn_cache *gpc, struct kvm *kvm,
-		  struct kvm_vcpu *vcpu, enum pfn_cache_usage usage)
+		  struct kvm_vcpu *vcpu, enum pfn_cache_usage usage,
+		  unsigned long len)
 {
 	WARN_ON_ONCE(!usage || (usage & KVM_GUEST_AND_HOST_USE_PFN) != usage);
 	WARN_ON_ONCE((usage & KVM_GUEST_USES_PFN) && !vcpu);
@@ -369,10 +368,11 @@  void kvm_gpc_init(struct gfn_to_pfn_cache *gpc, struct kvm *kvm,
 	gpc->kvm = kvm;
 	gpc->vcpu = vcpu;
 	gpc->usage = usage;
+	gpc->len = len;
 }
 EXPORT_SYMBOL_GPL(kvm_gpc_init);
 
-int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long len)
+int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa)
 {
 	struct kvm *kvm = gpc->kvm;
 
@@ -395,7 +395,7 @@  int kvm_gpc_activate(struct gfn_to_pfn_cache *gpc, gpa_t gpa, unsigned long len)
 		gpc->active = true;
 		write_unlock_irq(&gpc->lock);
 	}
-	return kvm_gpc_refresh(kvm, gpc, gpa, len);
+	return kvm_gpc_refresh(kvm, gpc, gpa);
 }
 EXPORT_SYMBOL_GPL(kvm_gpc_activate);