Message ID | 20240917151904.74314-2-nrb@linux.ibm.com (mailing list archive)
---|---
State | New, archived
Series | KVM: s390: fix diag258 virtual-physical confusion
On Tue, Sep 17, 2024 at 05:18:33PM +0200, Nico Boehr wrote:
> Previously, access_guest_page() did not check whether the given guest
> address is inside of a memslot. This is not a problem, since
> kvm_write_guest_page/kvm_read_guest_page return -EFAULT in this case.
...
> To be able to distinguish these two cases, return PGM_ADDRESSING in
> access_guest_page() when the guest address is outside guest memory. In
> access_guest_real(), populate vcpu->arch.pgm.code such that
> kvm_s390_inject_prog_cond() can be used in the caller for injecting into
> the guest (if applicable).
>
> Since this adds a new return value to access_guest_page(), we need to make
> sure that other callers are not confused by the new positive return value.
>
> There are the following users of access_guest_page():
> - access_guest_with_key() does the checking itself (in
>   guest_range_to_gpas()), so this case should never happen. Even if, the
>   handling is set up properly.
> - access_guest_real() just passes the return code to its callers, which
>   are:
>   - read_guest_real() - see below
>   - write_guest_real() - see below
>
> There are the following users of read_guest_real():
> - ar_translation() in gaccess.c which already returns PGM_*

With this patch you actually fix a bug in ar_translation(), where two
read_guest_real() invocations might have returned -EFAULT instead of the
correct PGM_ADDRESSING. Looks like the author assumed read_guest_real()
would do the right thing. See commit 664b49735370 ("KVM: s390: Add access
register mode").

> Fixes: 2293897805c2 ("KVM: s390: add architecture compliant guest access functions")
> Cc: stable@vger.kernel.org
> Signed-off-by: Nico Boehr <nrb@linux.ibm.com>
> ---
>  arch/s390/kvm/gaccess.c |  7 +++++++
>  arch/s390/kvm/gaccess.h | 14 ++++++++------
>  2 files changed, 15 insertions(+), 6 deletions(-)
>
> diff --git a/arch/s390/kvm/gaccess.c b/arch/s390/kvm/gaccess.c
> index e65f597e3044..004047578729 100644
> --- a/arch/s390/kvm/gaccess.c
> +++ b/arch/s390/kvm/gaccess.c
> @@ -828,6 +828,9 @@ static int access_guest_page(struct kvm *kvm, enum gacc_mode mode, gpa_t gpa,
>  	const gfn_t gfn = gpa_to_gfn(gpa);
>  	int rc;
>
> +	if (!gfn_to_memslot(kvm, gfn))
> +		return PGM_ADDRESSING;
> +
>  	if (mode == GACC_STORE)
>  		rc = kvm_write_guest_page(kvm, gfn, data, offset, len);

It would be nice to not add random empty lines to stay consistent with
the existing coding style.

> 	}
> +
> +	if (rc > 0)
> +		vcpu->arch.pgm.code = rc;
> +
> 	return rc;
> }

Same here. But whoever applies this can change this, or not. In any case:

Reviewed-by: Heiko Carstens <hca@linux.ibm.com>
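As an aside, the ar_translation() issue mentioned above boils down to the following propagation shape. This is a minimal illustrative sketch, not the actual gaccess.c code; the helper name, address, and entry type are placeholders:

```c
/*
 * Illustrative sketch only - not the real ar_translation() code.
 * Translation helpers hand read_guest_real()'s return value straight
 * back to callers that expect 0 or a PGM_* code; before this patch an
 * access outside a memslot surfaced here as -EFAULT instead of
 * PGM_ADDRESSING.
 */
static int fetch_table_entry(struct kvm_vcpu *vcpu, unsigned long gra, u64 *entry)
{
	int rc;

	rc = read_guest_real(vcpu, gra, entry, sizeof(*entry));
	if (rc)
		return rc;	/* now 0, a PGM_* code, or -EFAULT on copy failure */

	/* ... continue the translation using *entry ... */
	return 0;
}
```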
On 17.09.24 at 17:18, Nico Boehr wrote:
> @@ -985,6 +988,10 @@ int access_guest_real(struct kvm_vcpu *vcpu, unsigned long gra,
>  		gra += fragment_len;
>  		data += fragment_len;
>  	}
> +
> +	if (rc > 0)
> +		vcpu->arch.pgm.code = rc;

This will work, but using trans_exc() might be more future proof, I guess?

Otherwise this looks good with the nits fixed.
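For reference, a rough sketch of what the suggested trans_exc() variant could look like at the tail of access_guest_real(). trans_exc() is the existing gaccess.c helper that populates vcpu->arch.pgm, but its exact argument list, the prot value, and which address to report are assumptions here and would need to be checked against the tree:

```c
	/*
	 * Sketch only: the arguments to trans_exc() are assumptions, not
	 * taken from the tree. The idea is to let the existing helper
	 * populate vcpu->arch.pgm instead of open-coding pgm.code.
	 */
	if (rc > 0)
		rc = trans_exc(vcpu, rc, gra, 0, mode, PROT_TYPE_LA);

	return rc;
```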
diff --git a/arch/s390/kvm/gaccess.c b/arch/s390/kvm/gaccess.c
index e65f597e3044..004047578729 100644
--- a/arch/s390/kvm/gaccess.c
+++ b/arch/s390/kvm/gaccess.c
@@ -828,6 +828,9 @@ static int access_guest_page(struct kvm *kvm, enum gacc_mode mode, gpa_t gpa,
 	const gfn_t gfn = gpa_to_gfn(gpa);
 	int rc;
 
+	if (!gfn_to_memslot(kvm, gfn))
+		return PGM_ADDRESSING;
+
 	if (mode == GACC_STORE)
 		rc = kvm_write_guest_page(kvm, gfn, data, offset, len);
 	else
@@ -985,6 +988,10 @@ int access_guest_real(struct kvm_vcpu *vcpu, unsigned long gra,
 		gra += fragment_len;
 		data += fragment_len;
 	}
+
+	if (rc > 0)
+		vcpu->arch.pgm.code = rc;
+
 	return rc;
 }
 
diff --git a/arch/s390/kvm/gaccess.h b/arch/s390/kvm/gaccess.h
index b320d12aa049..3fde45a151f2 100644
--- a/arch/s390/kvm/gaccess.h
+++ b/arch/s390/kvm/gaccess.h
@@ -405,11 +405,12 @@ int read_guest_abs(struct kvm_vcpu *vcpu, unsigned long gpa, void *data,
  * @len: number of bytes to copy
  *
  * Copy @len bytes from @data (kernel space) to @gra (guest real address).
- * It is up to the caller to ensure that the entire guest memory range is
- * valid memory before calling this function.
  * Guest low address and key protection are not checked.
  *
- * Returns zero on success or -EFAULT on error.
+ * Returns zero on success, -EFAULT when copying from @data failed, or
+ * PGM_ADRESSING in case @gra is outside a memslot. In this case, pgm check info
+ * is also stored to allow injecting into the guest (if applicable) using
+ * kvm_s390_inject_prog_cond().
  *
  * If an error occurs data may have been copied partially to guest memory.
  */
@@ -428,11 +429,12 @@ int write_guest_real(struct kvm_vcpu *vcpu, unsigned long gra, void *data,
  * @len: number of bytes to copy
  *
  * Copy @len bytes from @gra (guest real address) to @data (kernel space).
- * It is up to the caller to ensure that the entire guest memory range is
- * valid memory before calling this function.
  * Guest key protection is not checked.
  *
- * Returns zero on success or -EFAULT on error.
+ * Returns zero on success, -EFAULT when copying to @data failed, or
+ * PGM_ADRESSING in case @gra is outside a memslot. In this case, pgm check info
+ * is also stored to allow injecting into the guest (if applicable) using
+ * kvm_s390_inject_prog_cond().
  *
  * If an error occurs data may have been copied partially to kernel space.
  */
Previously, access_guest_page() did not check whether the given guest
address is inside of a memslot. This is not a problem, since
kvm_write_guest_page/kvm_read_guest_page return -EFAULT in this case.

However, -EFAULT is also returned when copy_to/from_user fails.

When emulating a guest instruction, the address being outside a memslot
usually means that an addressing exception should be injected into the
guest. Failure in copy_to/from_user however indicates that something is
wrong in userspace and hence should be handled there.

To be able to distinguish these two cases, return PGM_ADDRESSING in
access_guest_page() when the guest address is outside guest memory. In
access_guest_real(), populate vcpu->arch.pgm.code such that
kvm_s390_inject_prog_cond() can be used in the caller for injecting into
the guest (if applicable).

Since this adds a new return value to access_guest_page(), we need to make
sure that other callers are not confused by the new positive return value.

There are the following users of access_guest_page():
- access_guest_with_key() does the checking itself (in
  guest_range_to_gpas()), so this case should never happen. Even if, the
  handling is set up properly.
- access_guest_real() just passes the return code to its callers, which
  are:
  - read_guest_real() - see below
  - write_guest_real() - see below

There are the following users of read_guest_real():
- ar_translation() in gaccess.c which already returns PGM_*
- setup_apcb10(), setup_apcb00(), setup_apcb11() in vsie.c which always
  return -EFAULT on read_guest_real() nonzero return - no change
- shadow_crycb(), handle_stfle() always present this as validity, this
  could be handled better but doesn't change current behaviour - no change

There are the following users of write_guest_real():
- kvm_s390_store_status_unloaded() always returns -EFAULT on
  write_guest_real() failure.

Fixes: 2293897805c2 ("KVM: s390: add architecture compliant guest access functions")
Cc: stable@vger.kernel.org
Signed-off-by: Nico Boehr <nrb@linux.ibm.com>
---
 arch/s390/kvm/gaccess.c |  7 +++++++
 arch/s390/kvm/gaccess.h | 14 ++++++++------
 2 files changed, 15 insertions(+), 6 deletions(-)
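For completeness, a minimal sketch of the caller-side pattern the commit message refers to, following the established kvm_s390_inject_prog_cond() convention; the buffer and address variables are illustrative placeholders:

```c
	/*
	 * Illustrative caller pattern (placeholder names): with this change
	 * a positive rc means vcpu->arch.pgm has been populated and the
	 * addressing exception can be injected into the guest, while a
	 * negative rc (-EFAULT) is still reported to userspace.
	 */
	rc = read_guest_real(vcpu, gra, &buf, sizeof(buf));
	if (rc)
		return kvm_s390_inject_prog_cond(vcpu, rc);
```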