Message ID | 20230113172523.2063867-2-maz@kernel.org (mailing list archive) |
---|---|
State | New, archived |
Headers | show |
Series | KVM: arm64: Drop support for VPIPT i-cache policy | expand |
On Fri, Jan 13, 2023 at 05:25:22PM +0000, Marc Zyngier wrote:
> Systems with a VMID-tagged PIPT i-cache have been supported for
> a while by Linux and KVM. However, these systems never appeared
> on our side of the multiverse.
>
> Refuse to initialise KVM on such a machine, should they ever appear.
> Following changes will drop the support from the hypervisor.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/kvm/arm.c | 5 +++++
>  1 file changed, 5 insertions(+)
>
> diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> index 9c5573bc4614..508deed213a2 100644
> --- a/arch/arm64/kvm/arm.c
> +++ b/arch/arm64/kvm/arm.c
> @@ -2195,6 +2195,11 @@ int kvm_arch_init(void *opaque)
> 	int err;
> 	bool in_hyp_mode;
>
> +	if (icache_is_vpipt()) {
> +		kvm_info("Incompatible VPIPT I-Cache policy\n");
> +		return -ENODEV;
> +	}

Hmm, does this work properly with late CPU onlining? For example, if my set
of boot CPUs are all friendly PIPT and KVM initialises happily, but then I
late online a CPU with a horrible VPIPT policy, I worry that we'll quietly
do the wrong thing wrt maintenance.

If that's the case, then arguably we already have a bug in the cases where
we trap and emulate accesses to CTR_EL0 from userspace, because I _think_
we'll change the L1Ip field at runtime after userspace could've already read
it.

Is there something that stops us from ending up in this situation?

Will
On Fri, 20 Jan 2023 10:14:16 +0000,
Will Deacon <will@kernel.org> wrote:
>
> On Fri, Jan 13, 2023 at 05:25:22PM +0000, Marc Zyngier wrote:
> > Systems with a VMID-tagged PIPT i-cache have been supported for
> > a while by Linux and KVM. However, these systems never appeared
> > on our side of the multiverse.
> >
> > Refuse to initialise KVM on such a machine, should they ever appear.
> > Following changes will drop the support from the hypervisor.
> >
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/kvm/arm.c | 5 +++++
> >  1 file changed, 5 insertions(+)
> >
> > diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
> > index 9c5573bc4614..508deed213a2 100644
> > --- a/arch/arm64/kvm/arm.c
> > +++ b/arch/arm64/kvm/arm.c
> > @@ -2195,6 +2195,11 @@ int kvm_arch_init(void *opaque)
> > 	int err;
> > 	bool in_hyp_mode;
> >
> > +	if (icache_is_vpipt()) {
> > +		kvm_info("Incompatible VPIPT I-Cache policy\n");
> > +		return -ENODEV;
> > +	}
>
> Hmm, does this work properly with late CPU onlining? For example, if my set
> of boot CPUs are all friendly PIPT and KVM initialises happily, but then I
> late online a CPU with a horrible VPIPT policy, I worry that we'll quietly
> do the wrong thing wrt maintenance.

Yup. The problem is what do we do in that case? Apart from preventing
the late onlining itself?

> If that's the case, then arguably we already have a bug in the cases where
> we trap and emulate accesses to CTR_EL0 from userspace, because I _think_
> we'll change the L1Ip field at runtime after userspace could've already read
> it.
>
> Is there something that stops us from ending up in this situation?

Probably not. Userspace will observe the wrong thing, and this applies
to *any* late onlining with a more restrictive cache topology (such as
PIPT -> VIPT). Unclear how the trapping will be engaged on the *other*
CPUs as well...

I've tried to reverse-engineer the cpufeature arrays again, and failed
to find a good solution for this.
Suzuki, what do you think?

	M.
```diff
diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
index 9c5573bc4614..508deed213a2 100644
--- a/arch/arm64/kvm/arm.c
+++ b/arch/arm64/kvm/arm.c
@@ -2195,6 +2195,11 @@ int kvm_arch_init(void *opaque)
 	int err;
 	bool in_hyp_mode;
 
+	if (icache_is_vpipt()) {
+		kvm_info("Incompatible VPIPT I-Cache policy\n");
+		return -ENODEV;
+	}
+
 	if (!is_hyp_mode_available()) {
 		kvm_info("HYP mode not available\n");
 		return -ENODEV;
```
Systems with a VMID-tagged PIPT i-cache have been supported for
a while by Linux and KVM. However, these systems never appeared
on our side of the multiverse.

Refuse to initialise KVM on such a machine, should they ever appear.
Following changes will drop the support from the hypervisor.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/arm.c | 5 +++++
 1 file changed, 5 insertions(+)