From patchwork Tue Mar 15 14:11:31 2022
X-Patchwork-Submitter: Christian Borntraeger
X-Patchwork-Id: 12781482
2022 14:11:38 +0000 (GMT) Received: by tuxmaker.boeblingen.de.ibm.com (Postfix, from userid 25651) id 43ABFE127A; Tue, 15 Mar 2022 15:11:38 +0100 (CET) From: Christian Borntraeger To: Paolo Bonzini Cc: KVM , Janosch Frank , Claudio Imbrenda , David Hildenbrand , linux-s390 , Christian Borntraeger , Heiko Carstens , Vasily Gorbik , Alexander Gordeev , Janis Schoetterl-Glausch , Thomas Huth Subject: [GIT PULL 1/7] KVM: s390: pv: make use of ultravisor AIV support Date: Tue, 15 Mar 2022 15:11:31 +0100 Message-Id: <20220315141137.357923-2-borntraeger@linux.ibm.com> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220315141137.357923-1-borntraeger@linux.ibm.com> References: <20220315141137.357923-1-borntraeger@linux.ibm.com> X-TM-AS-GCONF: 00 X-Proofpoint-GUID: gKjHUStmX51uPySKu5FI7UMFefG6NDXT X-Proofpoint-ORIG-GUID: JaFRnC_p-jiPViZWJaAooyhfCOB37AdB X-Proofpoint-UnRewURL: 0 URL was un-rewritten MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.205,Aquarius:18.0.850,Hydra:6.0.425,FMLib:17.11.64.514 definitions=2022-03-15_03,2022-03-15_01,2022-02-23_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 impostorscore=0 suspectscore=0 bulkscore=0 phishscore=0 priorityscore=1501 spamscore=0 lowpriorityscore=0 malwarescore=0 mlxlogscore=885 adultscore=0 mlxscore=0 clxscore=1011 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2202240000 definitions=main-2203150092 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Michael Mueller This patch enables the ultravisor adapter interruption vitualization support indicated by UV feature BIT_UV_FEAT_AIV. This allows ISC interruption injection directly into the GISA IPM for PV kvm guests. Hardware that does not support this feature will continue to use the UV interruption interception method to deliver ISC interruptions to PV kvm guests. For this purpose, the ECA_AIV bit for all guest cpus will be cleared and the GISA will be disabled during PV CPU setup. In addition a check in __inject_io() has been removed. That reduces the required instructions for interruption handling for PV and traditional kvm guests. Signed-off-by: Michael Mueller Reviewed-by: Janosch Frank Link: https://lore.kernel.org/r/20220209152217.1793281-2-mimu@linux.ibm.com Reviewed-by: Christian Borntraeger Signed-off-by: Christian Borntraeger --- arch/s390/include/asm/uv.h | 1 + arch/s390/kvm/interrupt.c | 54 +++++++++++++++++++++++++++++++++----- arch/s390/kvm/kvm-s390.c | 11 +++++--- arch/s390/kvm/kvm-s390.h | 11 ++++++++ 4 files changed, 68 insertions(+), 9 deletions(-) diff --git a/arch/s390/include/asm/uv.h b/arch/s390/include/asm/uv.h index 86218382d29c..a2d376b8bce3 100644 --- a/arch/s390/include/asm/uv.h +++ b/arch/s390/include/asm/uv.h @@ -80,6 +80,7 @@ enum uv_cmds_inst { enum uv_feat_ind { BIT_UV_FEAT_MISC = 0, + BIT_UV_FEAT_AIV = 1, }; struct uv_cb_header { diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c index db933c252dbc..9b30beac904d 100644 --- a/arch/s390/kvm/interrupt.c +++ b/arch/s390/kvm/interrupt.c @@ -1901,13 +1901,12 @@ static int __inject_io(struct kvm *kvm, struct kvm_s390_interrupt_info *inti) isc = int_word_to_isc(inti->io.io_int_word); /* - * Do not make use of gisa in protected mode. We do not use the lock - * checking variant as this is just a performance optimization and we - * do not hold the lock here. This is ok as the code will pick - * interrupts from both "lists" for delivery. 
+ * We do not use the lock checking variant as this is just a + * performance optimization and we do not hold the lock here. + * This is ok as the code will pick interrupts from both "lists" + * for delivery. */ - if (!kvm_s390_pv_get_handle(kvm) && - gi->origin && inti->type & KVM_S390_INT_IO_AI_MASK) { + if (gi->origin && inti->type & KVM_S390_INT_IO_AI_MASK) { VM_EVENT(kvm, 4, "%s isc %1u", "inject: I/O (AI/gisa)", isc); gisa_set_ipm_gisc(gi->origin, isc); kfree(inti); @@ -3171,9 +3170,33 @@ void kvm_s390_gisa_init(struct kvm *kvm) VM_EVENT(kvm, 3, "gisa 0x%pK initialized", gi->origin); } +void kvm_s390_gisa_enable(struct kvm *kvm) +{ + struct kvm_s390_gisa_interrupt *gi = &kvm->arch.gisa_int; + struct kvm_vcpu *vcpu; + unsigned long i; + u32 gisa_desc; + + if (gi->origin) + return; + kvm_s390_gisa_init(kvm); + gisa_desc = kvm_s390_get_gisa_desc(kvm); + if (!gisa_desc) + return; + kvm_for_each_vcpu(i, vcpu, kvm) { + mutex_lock(&vcpu->mutex); + vcpu->arch.sie_block->gd = gisa_desc; + vcpu->arch.sie_block->eca |= ECA_AIV; + VCPU_EVENT(vcpu, 3, "AIV gisa format-%u enabled for cpu %03u", + vcpu->arch.sie_block->gd & 0x3, vcpu->vcpu_id); + mutex_unlock(&vcpu->mutex); + } +} + void kvm_s390_gisa_destroy(struct kvm *kvm) { struct kvm_s390_gisa_interrupt *gi = &kvm->arch.gisa_int; + struct kvm_s390_gisa *gisa = gi->origin; if (!gi->origin) return; @@ -3184,6 +3207,25 @@ void kvm_s390_gisa_destroy(struct kvm *kvm) cpu_relax(); hrtimer_cancel(&gi->timer); gi->origin = NULL; + VM_EVENT(kvm, 3, "gisa 0x%pK destroyed", gisa); +} + +void kvm_s390_gisa_disable(struct kvm *kvm) +{ + struct kvm_s390_gisa_interrupt *gi = &kvm->arch.gisa_int; + struct kvm_vcpu *vcpu; + unsigned long i; + + if (!gi->origin) + return; + kvm_for_each_vcpu(i, vcpu, kvm) { + mutex_lock(&vcpu->mutex); + vcpu->arch.sie_block->eca &= ~ECA_AIV; + vcpu->arch.sie_block->gd = 0U; + mutex_unlock(&vcpu->mutex); + VCPU_EVENT(vcpu, 3, "AIV disabled for cpu %03u", vcpu->vcpu_id); + } + kvm_s390_gisa_destroy(kvm); } /** diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c index e056ad86ccd2..b5ea95cf8686 100644 --- a/arch/s390/kvm/kvm-s390.c +++ b/arch/s390/kvm/kvm-s390.c @@ -2195,6 +2195,9 @@ static int kvm_s390_cpus_from_pv(struct kvm *kvm, u16 *rcp, u16 *rrcp) } mutex_unlock(&vcpu->mutex); } + /* Ensure that we re-enable gisa if the non-PV guest used it but the PV guest did not. */ + if (use_gisa) + kvm_s390_gisa_enable(kvm); return ret; } @@ -2206,6 +2209,10 @@ static int kvm_s390_cpus_to_pv(struct kvm *kvm, u16 *rc, u16 *rrc) struct kvm_vcpu *vcpu; + /* Disable the GISA if the ultravisor does not support AIV. 
*/ + if (!test_bit_inv(BIT_UV_FEAT_AIV, &uv_info.uv_feature_indications)) + kvm_s390_gisa_disable(kvm); + kvm_for_each_vcpu(i, vcpu, kvm) { mutex_lock(&vcpu->mutex); r = kvm_s390_pv_create_cpu(vcpu, rc, rrc); @@ -3350,9 +3357,7 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu) vcpu->arch.sie_block->icpua = vcpu->vcpu_id; spin_lock_init(&vcpu->arch.local_int.lock); - vcpu->arch.sie_block->gd = (u32)(u64)vcpu->kvm->arch.gisa_int.origin; - if (vcpu->arch.sie_block->gd && sclp.has_gisaf) - vcpu->arch.sie_block->gd |= GISA_FORMAT1; + vcpu->arch.sie_block->gd = kvm_s390_get_gisa_desc(vcpu->kvm); seqcount_init(&vcpu->arch.cputm_seqcount); vcpu->arch.pfault_token = KVM_S390_PFAULT_TOKEN_INVALID; diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h index 098831e815e6..4ba8fc30d87a 100644 --- a/arch/s390/kvm/kvm-s390.h +++ b/arch/s390/kvm/kvm-s390.h @@ -231,6 +231,15 @@ static inline unsigned long kvm_s390_get_gfn_end(struct kvm_memslots *slots) return ms->base_gfn + ms->npages; } +static inline u32 kvm_s390_get_gisa_desc(struct kvm *kvm) +{ + u32 gd = (u32)(u64)kvm->arch.gisa_int.origin; + + if (gd && sclp.has_gisaf) + gd |= GISA_FORMAT1; + return gd; +} + /* implemented in pv.c */ int kvm_s390_pv_destroy_cpu(struct kvm_vcpu *vcpu, u16 *rc, u16 *rrc); int kvm_s390_pv_create_cpu(struct kvm_vcpu *vcpu, u16 *rc, u16 *rrc); @@ -450,6 +459,8 @@ int kvm_s390_get_irq_state(struct kvm_vcpu *vcpu, void kvm_s390_gisa_init(struct kvm *kvm); void kvm_s390_gisa_clear(struct kvm *kvm); void kvm_s390_gisa_destroy(struct kvm *kvm); +void kvm_s390_gisa_disable(struct kvm *kvm); +void kvm_s390_gisa_enable(struct kvm *kvm); int kvm_s390_gib_init(u8 nisc); void kvm_s390_gib_destroy(void); From patchwork Tue Mar 15 14:11:32 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christian Borntraeger X-Patchwork-Id: 12781480 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C6FC2C433F5 for ; Tue, 15 Mar 2022 14:11:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1349034AbiCOONI (ORCPT ); Tue, 15 Mar 2022 10:13:08 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56180 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1349037AbiCOOM7 (ORCPT ); Tue, 15 Mar 2022 10:12:59 -0400 Received: from mx0a-001b2d01.pphosted.com (mx0b-001b2d01.pphosted.com [148.163.158.5]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 08E3954BC2; Tue, 15 Mar 2022 07:11:45 -0700 (PDT) Received: from pps.filterd (m0098419.ppops.net [127.0.0.1]) by mx0b-001b2d01.pphosted.com (8.16.1.2/8.16.1.2) with SMTP id 22FCiX6C010791; Tue, 15 Mar 2022 14:11:45 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ibm.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : content-transfer-encoding : mime-version; s=pp1; bh=FZ9Ihi1kXsB14+EdFBOQEH0lWlzEWGpkJH0MBanFw1g=; b=WgiVtNHXsbYhqlafoH9PSXEL6eyHaT94d5cXI10Tf7EEwrjks5/JuuHa1KEsZ1q/eqOw XVXBWtumRZ9G8p+rVeNo1ijCZewHUBWxSGFMmGZajBPqeMQAW/TqXD8MnKhNqKAQmgpk eCgciwQ2e1chvqf4B6Uv6JX/A2VhxTc6iRFwcDBdGhymoxYgos9CK+YsnYKOq/FVwRVx jvEo9q8IyqjzrMRj851HdEdBCIuaE9KsYXYoJfFF+O56X3M8GMuZx1G//jSaocA7eJ2X qSicGYuulzEmlyl3CEiJLzhvzd6P5993BnfnuG3Df0qfCW3gFkvpABJn+sZXyPlfmU3l /Q== Received: from pps.reinject (localhost [127.0.0.1]) by 
From: Christian Borntraeger
To: Paolo Bonzini
Cc: KVM, Janosch Frank, Claudio Imbrenda, David Hildenbrand, linux-s390, Christian Borntraeger, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Janis Schoetterl-Glausch, Thomas Huth
Subject: [GIT PULL 2/7] KVM: s390x: fix SCK locking
Date: Tue, 15 Mar 2022 15:11:32 +0100
Message-Id: <20220315141137.357923-3-borntraeger@linux.ibm.com>
In-Reply-To: <20220315141137.357923-1-borntraeger@linux.ibm.com>
References: <20220315141137.357923-1-borntraeger@linux.ibm.com>

From: Claudio Imbrenda

When handling the SCK instruction, the kvm lock is taken, even though the vcpu lock is already being held. The normal locking order is kvm lock first and then vcpu lock. This can (and in some circumstances does) lead to deadlocks.

The function kvm_s390_set_tod_clock is called both by the SCK handler and by some IOCTLs to set the clock.
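Schematically, the two paths reach that helper with opposite lock state (a simplified sketch of the problem only, using the usual kvm->lock / vcpu->mutex names; not the actual kernel code):

	/* vm ioctl path that sets the clock: no vcpu lock is held */
	mutex_lock(&kvm->lock);
	/* ... update the guest TOD clock for all vcpus ... */
	mutex_unlock(&kvm->lock);

	/* SCK intercept path: vcpu->mutex is already held by the vcpu run loop */
	mutex_lock(&kvm->lock);	/* takes kvm->lock under vcpu->mutex -> lock order inversion */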
The IOCTLs will not hold the vcpu lock, so they can safely take the kvm lock. The SCK handler holds the vcpu lock, but will also somehow need to acquire the kvm lock without relinquishing the vcpu lock. The solution is to factor out the code to set the clock, and provide two wrappers. One is called like the original function and does the locking, the other is called kvm_s390_try_set_tod_clock and uses trylock to try to acquire the kvm lock. This new wrapper is then used in the SCK handler. If locking fails, -EAGAIN is returned, which is eventually propagated to userspace, thus also freeing the vcpu lock and allowing for forward progress. This is not the most efficient or elegant way to solve this issue, but the SCK instruction is deprecated and its performance is not critical. The goal of this patch is just to provide a simple but correct way to fix the bug. Fixes: 6a3f95a6b04c ("KVM: s390: Intercept SCK instruction") Signed-off-by: Claudio Imbrenda Reviewed-by: Christian Borntraeger Reviewed-by: Janis Schoetterl-Glausch Link: https://lore.kernel.org/r/20220301143340.111129-1-imbrenda@linux.ibm.com Cc: stable@vger.kernel.org Signed-off-by: Christian Borntraeger --- arch/s390/kvm/kvm-s390.c | 19 ++++++++++++++++--- arch/s390/kvm/kvm-s390.h | 4 ++-- arch/s390/kvm/priv.c | 15 ++++++++++++++- 3 files changed, 32 insertions(+), 6 deletions(-) diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c index b5ea95cf8686..b53ff693b66e 100644 --- a/arch/s390/kvm/kvm-s390.c +++ b/arch/s390/kvm/kvm-s390.c @@ -3961,14 +3961,12 @@ static int kvm_s390_handle_requests(struct kvm_vcpu *vcpu) return 0; } -void kvm_s390_set_tod_clock(struct kvm *kvm, - const struct kvm_s390_vm_tod_clock *gtod) +static void __kvm_s390_set_tod_clock(struct kvm *kvm, const struct kvm_s390_vm_tod_clock *gtod) { struct kvm_vcpu *vcpu; union tod_clock clk; unsigned long i; - mutex_lock(&kvm->lock); preempt_disable(); store_tod_clock_ext(&clk); @@ -3989,9 +3987,24 @@ void kvm_s390_set_tod_clock(struct kvm *kvm, kvm_s390_vcpu_unblock_all(kvm); preempt_enable(); +} + +void kvm_s390_set_tod_clock(struct kvm *kvm, const struct kvm_s390_vm_tod_clock *gtod) +{ + mutex_lock(&kvm->lock); + __kvm_s390_set_tod_clock(kvm, gtod); mutex_unlock(&kvm->lock); } +int kvm_s390_try_set_tod_clock(struct kvm *kvm, const struct kvm_s390_vm_tod_clock *gtod) +{ + if (!mutex_trylock(&kvm->lock)) + return 0; + __kvm_s390_set_tod_clock(kvm, gtod); + mutex_unlock(&kvm->lock); + return 1; +} + /** * kvm_arch_fault_in_page - fault-in guest page if necessary * @vcpu: The corresponding virtual cpu diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h index 4ba8fc30d87a..798955b62fa3 100644 --- a/arch/s390/kvm/kvm-s390.h +++ b/arch/s390/kvm/kvm-s390.h @@ -358,8 +358,8 @@ int kvm_s390_handle_sigp(struct kvm_vcpu *vcpu); int kvm_s390_handle_sigp_pei(struct kvm_vcpu *vcpu); /* implemented in kvm-s390.c */ -void kvm_s390_set_tod_clock(struct kvm *kvm, - const struct kvm_s390_vm_tod_clock *gtod); +void kvm_s390_set_tod_clock(struct kvm *kvm, const struct kvm_s390_vm_tod_clock *gtod); +int kvm_s390_try_set_tod_clock(struct kvm *kvm, const struct kvm_s390_vm_tod_clock *gtod); long kvm_arch_fault_in_page(struct kvm_vcpu *vcpu, gpa_t gpa, int writable); int kvm_s390_store_status_unloaded(struct kvm_vcpu *vcpu, unsigned long addr); int kvm_s390_vcpu_store_status(struct kvm_vcpu *vcpu, unsigned long addr); diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c index 30b24c42ef99..5beb7a4a11b3 100644 --- a/arch/s390/kvm/priv.c +++ b/arch/s390/kvm/priv.c 
@@ -102,7 +102,20 @@ static int handle_set_clock(struct kvm_vcpu *vcpu) return kvm_s390_inject_prog_cond(vcpu, rc); VCPU_EVENT(vcpu, 3, "SCK: setting guest TOD to 0x%llx", gtod.tod); - kvm_s390_set_tod_clock(vcpu->kvm, >od); + /* + * To set the TOD clock the kvm lock must be taken, but the vcpu lock + * is already held in handle_set_clock. The usual lock order is the + * opposite. As SCK is deprecated and should not be used in several + * cases, for example when the multiple epoch facility or TOD clock + * steering facility is installed (see Principles of Operation), a + * slow path can be used. If the lock can not be taken via try_lock, + * the instruction will be retried via -EAGAIN at a later point in + * time. + */ + if (!kvm_s390_try_set_tod_clock(vcpu->kvm, >od)) { + kvm_s390_retry_instr(vcpu); + return -EAGAIN; + } kvm_s390_set_psw_cc(vcpu, 0); return 0; From patchwork Tue Mar 15 14:11:33 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christian Borntraeger X-Patchwork-Id: 12781483 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id EABA7C433EF for ; Tue, 15 Mar 2022 14:12:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1349049AbiCOONM (ORCPT ); Tue, 15 Mar 2022 10:13:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56158 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1349030AbiCOOM6 (ORCPT ); Tue, 15 Mar 2022 10:12:58 -0400 Received: from mx0a-001b2d01.pphosted.com (mx0b-001b2d01.pphosted.com [148.163.158.5]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 93122546B5; Tue, 15 Mar 2022 07:11:45 -0700 (PDT) Received: from pps.filterd (m0098413.ppops.net [127.0.0.1]) by mx0b-001b2d01.pphosted.com (8.16.1.2/8.16.1.2) with SMTP id 22FD12mc030252; Tue, 15 Mar 2022 14:11:44 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ibm.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : content-transfer-encoding : mime-version; s=pp1; bh=6reW27b8peKc+iPmYYpFkz+i2kB2DjdsIbCcupx2b+o=; b=K0u1sCQS8FTVpcaRW6+jHLvgpUA42RsMEVS6cRRHSxWphDIjzyuWPQEll4PtoQYLHhcP kT6huJMZWs0eEn6cSQvfipM+vrai0/4idMkCr1i7P8VAImnnZHMpeSZSefR193UHfWF4 mKXl3ln1mUN2KLlUotzTEc1x7T3KmwvYQwakeoBAEVrT90Wbpy0cNQ/BAPws7bbnvmdM wKhro6ekxL3tkh2pvqbB1gFNHZHAeAE9Vuj6VaOWoYTDkEpHSyYo+c4jkJpWtqe+a0PU Xp99by+1cXofVDaR7qoGKPeXfAFZ4363FPSoCYiF7F3ppB2qRIsvqUJuk0y0QjR/Palz tQ== Received: from pps.reinject (localhost [127.0.0.1]) by mx0b-001b2d01.pphosted.com with ESMTP id 3etuajhquy-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 15 Mar 2022 14:11:44 +0000 Received: from m0098413.ppops.net (m0098413.ppops.net [127.0.0.1]) by pps.reinject (8.16.0.43/8.16.0.43) with SMTP id 22FDObHB022639; Tue, 15 Mar 2022 14:11:44 GMT Received: from ppma03ams.nl.ibm.com (62.31.33a9.ip4.static.sl-reverse.com [169.51.49.98]) by mx0b-001b2d01.pphosted.com with ESMTP id 3etuajhqub-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 15 Mar 2022 14:11:44 +0000 Received: from pps.filterd (ppma03ams.nl.ibm.com [127.0.0.1]) by ppma03ams.nl.ibm.com (8.16.1.2/8.16.1.2) with SMTP id 22FE8xcN007656; Tue, 15 Mar 2022 14:11:42 GMT Received: from b06cxnps4074.portsmouth.uk.ibm.com (d06relay11.portsmouth.uk.ibm.com 
[9.149.109.196]) by ppma03ams.nl.ibm.com with ESMTP id 3et95wt42f-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 15 Mar 2022 14:11:42 +0000 Received: from d06av26.portsmouth.uk.ibm.com (d06av26.portsmouth.uk.ibm.com [9.149.105.62]) by b06cxnps4074.portsmouth.uk.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id 22FEBdGF58196444 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Tue, 15 Mar 2022 14:11:39 GMT Received: from d06av26.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id 4CF2CAE051; Tue, 15 Mar 2022 14:11:39 +0000 (GMT) Received: from d06av26.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id 2BCB3AE045; Tue, 15 Mar 2022 14:11:39 +0000 (GMT) Received: from tuxmaker.boeblingen.de.ibm.com (unknown [9.152.85.9]) by d06av26.portsmouth.uk.ibm.com (Postfix) with ESMTPS; Tue, 15 Mar 2022 14:11:39 +0000 (GMT) Received: by tuxmaker.boeblingen.de.ibm.com (Postfix, from userid 25651) id DD5FFE11F3; Tue, 15 Mar 2022 15:11:38 +0100 (CET) From: Christian Borntraeger To: Paolo Bonzini Cc: KVM , Janosch Frank , Claudio Imbrenda , David Hildenbrand , linux-s390 , Christian Borntraeger , Heiko Carstens , Vasily Gorbik , Alexander Gordeev , Janis Schoetterl-Glausch , Thomas Huth Subject: [GIT PULL 3/7] KVM: s390: selftests: Split memop tests Date: Tue, 15 Mar 2022 15:11:33 +0100 Message-Id: <20220315141137.357923-4-borntraeger@linux.ibm.com> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220315141137.357923-1-borntraeger@linux.ibm.com> References: <20220315141137.357923-1-borntraeger@linux.ibm.com> X-TM-AS-GCONF: 00 X-Proofpoint-ORIG-GUID: -Akhs_Wb9qkJzTjrJ8IDGTxXhD6F0O9P X-Proofpoint-GUID: oFFcuwG9qfNihz8IRRLxGAuE9BBtVgVZ X-Proofpoint-UnRewURL: 0 URL was un-rewritten MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.205,Aquarius:18.0.850,Hydra:6.0.425,FMLib:17.11.64.514 definitions=2022-03-15_03,2022-03-15_01,2022-02-23_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 phishscore=0 bulkscore=0 malwarescore=0 suspectscore=0 mlxlogscore=999 impostorscore=0 adultscore=0 clxscore=1015 mlxscore=0 lowpriorityscore=0 priorityscore=1501 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2202240000 definitions=main-2203150092 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Janis Schoetterl-Glausch Split success case/copy test from error test, making them independent. This means they do not share state and are easier to understand. Also, new test can be added in the same manner without affecting the old ones. In order to make that simpler, introduce functionality for the setup of commonly used variables. 
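For illustration, a further test can now be added as its own self-contained function along these lines (a hypothetical sketch only; test_default_init, guest_idle and kvm_vm_free are the helpers used in the diff below):

	static void test_something_new(void)
	{
		struct test_default t = test_default_init(guest_idle);

		/* drive KVM_S390_MEM_OP ioctls against t.kvm_vm / VCPU_ID here */

		kvm_vm_free(t.kvm_vm);
	}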
Signed-off-by: Janis Schoetterl-Glausch Link: https://lore.kernel.org/r/20220308125841.3271721-2-scgl@linux.ibm.com Signed-off-by: Christian Borntraeger --- tools/testing/selftests/kvm/s390x/memop.c | 139 +++++++++++++--------- 1 file changed, 83 insertions(+), 56 deletions(-) diff --git a/tools/testing/selftests/kvm/s390x/memop.c b/tools/testing/selftests/kvm/s390x/memop.c index d19c3ffdea3f..b9b673acb766 100644 --- a/tools/testing/selftests/kvm/s390x/memop.c +++ b/tools/testing/selftests/kvm/s390x/memop.c @@ -18,71 +18,82 @@ static uint8_t mem1[65536]; static uint8_t mem2[65536]; -static void guest_code(void) -{ - int i; +struct test_default { + struct kvm_vm *kvm_vm; + struct kvm_run *run; + int size; +}; - for (;;) { - for (i = 0; i < sizeof(mem2); i++) - mem2[i] = mem1[i]; - GUEST_SYNC(0); - } +static struct test_default test_default_init(void *guest_code) +{ + struct test_default t; + + t.size = min((size_t)kvm_check_cap(KVM_CAP_S390_MEM_OP), sizeof(mem1)); + t.kvm_vm = vm_create_default(VCPU_ID, 0, guest_code); + t.run = vcpu_state(t.kvm_vm, VCPU_ID); + return t; } -int main(int argc, char *argv[]) +static void guest_copy(void) { - struct kvm_vm *vm; - struct kvm_run *run; + memcpy(&mem2, &mem1, sizeof(mem2)); + GUEST_SYNC(0); +} + +static void test_copy(void) +{ + struct test_default t = test_default_init(guest_copy); struct kvm_s390_mem_op ksmo; - int rv, i, maxsize; - - setbuf(stdout, NULL); /* Tell stdout not to buffer its content */ - - maxsize = kvm_check_cap(KVM_CAP_S390_MEM_OP); - if (!maxsize) { - print_skip("CAP_S390_MEM_OP not supported"); - exit(KSFT_SKIP); - } - if (maxsize > sizeof(mem1)) - maxsize = sizeof(mem1); - - /* Create VM */ - vm = vm_create_default(VCPU_ID, 0, guest_code); - run = vcpu_state(vm, VCPU_ID); + int i; for (i = 0; i < sizeof(mem1); i++) mem1[i] = i * i + i; /* Set the first array */ - ksmo.gaddr = addr_gva2gpa(vm, (uintptr_t)mem1); + ksmo.gaddr = addr_gva2gpa(t.kvm_vm, (uintptr_t)mem1); ksmo.flags = 0; - ksmo.size = maxsize; + ksmo.size = t.size; ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE; ksmo.buf = (uintptr_t)mem1; ksmo.ar = 0; - vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo); + vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo); /* Let the guest code copy the first array to the second */ - vcpu_run(vm, VCPU_ID); - TEST_ASSERT(run->exit_reason == KVM_EXIT_S390_SIEIC, + vcpu_run(t.kvm_vm, VCPU_ID); + TEST_ASSERT(t.run->exit_reason == KVM_EXIT_S390_SIEIC, "Unexpected exit reason: %u (%s)\n", - run->exit_reason, - exit_reason_str(run->exit_reason)); + t.run->exit_reason, + exit_reason_str(t.run->exit_reason)); memset(mem2, 0xaa, sizeof(mem2)); /* Get the second array */ ksmo.gaddr = (uintptr_t)mem2; ksmo.flags = 0; - ksmo.size = maxsize; + ksmo.size = t.size; ksmo.op = KVM_S390_MEMOP_LOGICAL_READ; ksmo.buf = (uintptr_t)mem2; ksmo.ar = 0; - vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo); + vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo); - TEST_ASSERT(!memcmp(mem1, mem2, maxsize), + TEST_ASSERT(!memcmp(mem1, mem2, t.size), "Memory contents do not match!"); + kvm_vm_free(t.kvm_vm); +} + +static void guest_idle(void) +{ + for (;;) + GUEST_SYNC(0); +} + +static void test_errors(void) +{ + struct test_default t = test_default_init(guest_idle); + struct kvm_s390_mem_op ksmo; + int rv; + /* Check error conditions - first bad size: */ ksmo.gaddr = (uintptr_t)mem1; ksmo.flags = 0; @@ -90,7 +101,7 @@ int main(int argc, char *argv[]) ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE; ksmo.buf = (uintptr_t)mem1; ksmo.ar = 0; - rv = _vcpu_ioctl(vm, VCPU_ID, 
KVM_S390_MEM_OP, &ksmo); + rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo); TEST_ASSERT(rv == -1 && errno == E2BIG, "ioctl allows insane sizes"); /* Zero size: */ @@ -100,65 +111,65 @@ int main(int argc, char *argv[]) ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE; ksmo.buf = (uintptr_t)mem1; ksmo.ar = 0; - rv = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo); + rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo); TEST_ASSERT(rv == -1 && (errno == EINVAL || errno == ENOMEM), "ioctl allows 0 as size"); /* Bad flags: */ ksmo.gaddr = (uintptr_t)mem1; ksmo.flags = -1; - ksmo.size = maxsize; + ksmo.size = t.size; ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE; ksmo.buf = (uintptr_t)mem1; ksmo.ar = 0; - rv = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo); + rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo); TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl allows all flags"); /* Bad operation: */ ksmo.gaddr = (uintptr_t)mem1; ksmo.flags = 0; - ksmo.size = maxsize; + ksmo.size = t.size; ksmo.op = -1; ksmo.buf = (uintptr_t)mem1; ksmo.ar = 0; - rv = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo); + rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo); TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl allows bad operations"); /* Bad guest address: */ ksmo.gaddr = ~0xfffUL; ksmo.flags = KVM_S390_MEMOP_F_CHECK_ONLY; - ksmo.size = maxsize; + ksmo.size = t.size; ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE; ksmo.buf = (uintptr_t)mem1; ksmo.ar = 0; - rv = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo); + rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo); TEST_ASSERT(rv > 0, "ioctl does not report bad guest memory access"); /* Bad host address: */ ksmo.gaddr = (uintptr_t)mem1; ksmo.flags = 0; - ksmo.size = maxsize; + ksmo.size = t.size; ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE; ksmo.buf = 0; ksmo.ar = 0; - rv = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo); + rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo); TEST_ASSERT(rv == -1 && errno == EFAULT, "ioctl does not report bad host memory address"); /* Bad access register: */ - run->psw_mask &= ~(3UL << (63 - 17)); - run->psw_mask |= 1UL << (63 - 17); /* Enable AR mode */ - vcpu_run(vm, VCPU_ID); /* To sync new state to SIE block */ + t.run->psw_mask &= ~(3UL << (63 - 17)); + t.run->psw_mask |= 1UL << (63 - 17); /* Enable AR mode */ + vcpu_run(t.kvm_vm, VCPU_ID); /* To sync new state to SIE block */ ksmo.gaddr = (uintptr_t)mem1; ksmo.flags = 0; - ksmo.size = maxsize; + ksmo.size = t.size; ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE; ksmo.buf = (uintptr_t)mem1; ksmo.ar = 17; - rv = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo); + rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo); TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl allows ARs > 15"); - run->psw_mask &= ~(3UL << (63 - 17)); /* Disable AR mode */ - vcpu_run(vm, VCPU_ID); /* Run to sync new state */ + t.run->psw_mask &= ~(3UL << (63 - 17)); /* Disable AR mode */ + vcpu_run(t.kvm_vm, VCPU_ID); /* Run to sync new state */ /* Check that the SIDA calls are rejected for non-protected guests */ ksmo.gaddr = 0; @@ -167,15 +178,31 @@ int main(int argc, char *argv[]) ksmo.op = KVM_S390_MEMOP_SIDA_READ; ksmo.buf = (uintptr_t)mem1; ksmo.sida_offset = 0x1c0; - rv = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo); + rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo); TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl does not reject SIDA_READ in non-protected mode"); ksmo.op = KVM_S390_MEMOP_SIDA_WRITE; - rv = _vcpu_ioctl(vm, VCPU_ID, KVM_S390_MEM_OP, 
&ksmo); + rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo); TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl does not reject SIDA_WRITE in non-protected mode"); - kvm_vm_free(vm); + kvm_vm_free(t.kvm_vm); +} + +int main(int argc, char *argv[]) +{ + int memop_cap; + + setbuf(stdout, NULL); /* Tell stdout not to buffer its content */ + + memop_cap = kvm_check_cap(KVM_CAP_S390_MEM_OP); + if (!memop_cap) { + print_skip("CAP_S390_MEM_OP not supported"); + exit(KSFT_SKIP); + } + + test_copy(); + test_errors(); return 0; } From patchwork Tue Mar 15 14:11:34 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christian Borntraeger X-Patchwork-Id: 12781485 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D65F2C433EF for ; Tue, 15 Mar 2022 14:12:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1349057AbiCOONO (ORCPT ); Tue, 15 Mar 2022 10:13:14 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56184 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1349039AbiCOOM7 (ORCPT ); Tue, 15 Mar 2022 10:12:59 -0400 Received: from mx0b-001b2d01.pphosted.com (mx0b-001b2d01.pphosted.com [148.163.158.5]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5637054BCE; Tue, 15 Mar 2022 07:11:46 -0700 (PDT) Received: from pps.filterd (m0098421.ppops.net [127.0.0.1]) by mx0a-001b2d01.pphosted.com (8.16.1.2/8.16.1.2) with SMTP id 22FDCnln009278; Tue, 15 Mar 2022 14:11:45 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ibm.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : content-transfer-encoding : mime-version; s=pp1; bh=aEvSLMRIYlzAlHbiUvOq6136GDR7Udpl5RLN+SYTLA0=; b=dstsL2TXfnaXTj25Zsw7doFqtsXaYOFhmZBL4LTrxPJeDvI/sUOjg5tfmW3CnG0FcqKB zwRv+0ubzxt+EkIAoyAF8h4J+DGVCbPi0FqZ6IUp9r65Mqtd6pS4soERMIDdXpdU74SM u1qxR/c/MDKOQ20Giwv6dGNM+lIvWppTfGOShmqj2AVymrItUcJ7wcUTuxLo0QzSVcXv G/ew2cKyfm8YzNJNa0M13mofwFzeFMKGZIr3qzOZGR6nhooPqsN1ds4STFjNoLMNUyOf 4Z3xmEy0nMNGfoaL8YZiAQ9N9NAf0rqzfFXqbZ+CtgVQa5Hpw4d+WP4Ie+HBSIgr/JG4 Zw== Received: from pps.reinject (localhost [127.0.0.1]) by mx0a-001b2d01.pphosted.com with ESMTP id 3etufuhew4-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 15 Mar 2022 14:11:45 +0000 Received: from m0098421.ppops.net (m0098421.ppops.net [127.0.0.1]) by pps.reinject (8.16.0.43/8.16.0.43) with SMTP id 22FDGfRR023400; Tue, 15 Mar 2022 14:11:44 GMT Received: from ppma03ams.nl.ibm.com (62.31.33a9.ip4.static.sl-reverse.com [169.51.49.98]) by mx0a-001b2d01.pphosted.com with ESMTP id 3etufuhevg-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 15 Mar 2022 14:11:44 +0000 Received: from pps.filterd (ppma03ams.nl.ibm.com [127.0.0.1]) by ppma03ams.nl.ibm.com (8.16.1.2/8.16.1.2) with SMTP id 22FE8xcO007656; Tue, 15 Mar 2022 14:11:43 GMT Received: from b06avi18878370.portsmouth.uk.ibm.com (b06avi18878370.portsmouth.uk.ibm.com [9.149.26.194]) by ppma03ams.nl.ibm.com with ESMTP id 3et95wt42g-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 15 Mar 2022 14:11:43 +0000 Received: from d06av24.portsmouth.uk.ibm.com (mk.ibm.com [9.149.105.60]) by b06avi18878370.portsmouth.uk.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id 22FEBfph44761344 
From: Christian Borntraeger
To: Paolo Bonzini
Cc: KVM, Janosch Frank, Claudio Imbrenda, David Hildenbrand, linux-s390, Christian Borntraeger, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Janis Schoetterl-Glausch, Thomas Huth
Subject: [GIT PULL 4/7] KVM: s390: selftests: Add macro as abstraction for MEM_OP
Date: Tue, 15 Mar 2022 15:11:34 +0100
Message-Id: <20220315141137.357923-5-borntraeger@linux.ibm.com>
In-Reply-To: <20220315141137.357923-1-borntraeger@linux.ibm.com>
References: <20220315141137.357923-1-borntraeger@linux.ibm.com>

From: Janis Schoetterl-Glausch

In order to achieve good test coverage we need to be able to invoke the MEM_OP ioctl with all possible parametrizations. However, for a given test, we want to be concise and not specify a long list of default values for parameters not relevant for the test, so the reader's attention is not needlessly diverted.

Add a macro that enables this and convert the existing test to use it. The macro emulates named arguments and hides some of the ioctl's redundancy, e.g. it sets the key flag if an access key is specified.
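As a quick illustration of the resulting call style (hypothetical call sites; MOP/ERR_MOP and the named-argument helpers such as GADDR_V, SET_FLAGS and SIDA_OFFSET are defined in the diff below, and t is the struct test_default used by the tests):

	/* only the parameters a test cares about are spelled out */
	MOP(t.vcpu, LOGICAL, WRITE, mem1, t.size, GADDR_V(mem1));
	MOP(t.vcpu, LOGICAL, READ, mem2, t.size, GADDR_V(mem2));

	/* ERR_MOP returns the ioctl result instead of asserting success */
	rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, mem1, t.size, GADDR_V(mem1), SET_FLAGS(-1));
	rv = ERR_MOP(t.vcpu, SIDA, READ, mem1, 8, GADDR(0), SIDA_OFFSET(0x1c0));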
Signed-off-by: Janis Schoetterl-Glausch Link: https://lore.kernel.org/r/20220308125841.3271721-3-scgl@linux.ibm.com Signed-off-by: Christian Borntraeger --- tools/testing/selftests/kvm/s390x/memop.c | 272 ++++++++++++++++------ 1 file changed, 197 insertions(+), 75 deletions(-) diff --git a/tools/testing/selftests/kvm/s390x/memop.c b/tools/testing/selftests/kvm/s390x/memop.c index b9b673acb766..e2ad3d70bae4 100644 --- a/tools/testing/selftests/kvm/s390x/memop.c +++ b/tools/testing/selftests/kvm/s390x/memop.c @@ -13,6 +13,188 @@ #include "test_util.h" #include "kvm_util.h" +enum mop_target { + LOGICAL, + SIDA, + ABSOLUTE, + INVALID, +}; + +enum mop_access_mode { + READ, + WRITE, +}; + +struct mop_desc { + uintptr_t gaddr; + uintptr_t gaddr_v; + uint64_t set_flags; + unsigned int f_check : 1; + unsigned int f_inject : 1; + unsigned int f_key : 1; + unsigned int _gaddr_v : 1; + unsigned int _set_flags : 1; + unsigned int _sida_offset : 1; + unsigned int _ar : 1; + uint32_t size; + enum mop_target target; + enum mop_access_mode mode; + void *buf; + uint32_t sida_offset; + uint8_t ar; + uint8_t key; +}; + +static struct kvm_s390_mem_op ksmo_from_desc(struct mop_desc desc) +{ + struct kvm_s390_mem_op ksmo = { + .gaddr = (uintptr_t)desc.gaddr, + .size = desc.size, + .buf = ((uintptr_t)desc.buf), + .reserved = "ignored_ignored_ignored_ignored" + }; + + switch (desc.target) { + case LOGICAL: + if (desc.mode == READ) + ksmo.op = KVM_S390_MEMOP_LOGICAL_READ; + if (desc.mode == WRITE) + ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE; + break; + case SIDA: + if (desc.mode == READ) + ksmo.op = KVM_S390_MEMOP_SIDA_READ; + if (desc.mode == WRITE) + ksmo.op = KVM_S390_MEMOP_SIDA_WRITE; + break; + case ABSOLUTE: + if (desc.mode == READ) + ksmo.op = KVM_S390_MEMOP_ABSOLUTE_READ; + if (desc.mode == WRITE) + ksmo.op = KVM_S390_MEMOP_ABSOLUTE_WRITE; + break; + case INVALID: + ksmo.op = -1; + } + if (desc.f_check) + ksmo.flags |= KVM_S390_MEMOP_F_CHECK_ONLY; + if (desc.f_inject) + ksmo.flags |= KVM_S390_MEMOP_F_INJECT_EXCEPTION; + if (desc._set_flags) + ksmo.flags = desc.set_flags; + if (desc.f_key) { + ksmo.flags |= KVM_S390_MEMOP_F_SKEY_PROTECTION; + ksmo.key = desc.key; + } + if (desc._ar) + ksmo.ar = desc.ar; + else + ksmo.ar = 0; + if (desc._sida_offset) + ksmo.sida_offset = desc.sida_offset; + + return ksmo; +} + +/* vcpu dummy id signifying that vm instead of vcpu ioctl is to occur */ +const uint32_t VM_VCPU_ID = (uint32_t)-1; + +struct test_vcpu { + struct kvm_vm *vm; + uint32_t id; +}; + +#define PRINT_MEMOP false +static void print_memop(uint32_t vcpu_id, const struct kvm_s390_mem_op *ksmo) +{ + if (!PRINT_MEMOP) + return; + + if (vcpu_id == VM_VCPU_ID) + printf("vm memop("); + else + printf("vcpu memop("); + switch (ksmo->op) { + case KVM_S390_MEMOP_LOGICAL_READ: + printf("LOGICAL, READ, "); + break; + case KVM_S390_MEMOP_LOGICAL_WRITE: + printf("LOGICAL, WRITE, "); + break; + case KVM_S390_MEMOP_SIDA_READ: + printf("SIDA, READ, "); + break; + case KVM_S390_MEMOP_SIDA_WRITE: + printf("SIDA, WRITE, "); + break; + case KVM_S390_MEMOP_ABSOLUTE_READ: + printf("ABSOLUTE, READ, "); + break; + case KVM_S390_MEMOP_ABSOLUTE_WRITE: + printf("ABSOLUTE, WRITE, "); + break; + } + printf("gaddr=%llu, size=%u, buf=%llu, ar=%u, key=%u", + ksmo->gaddr, ksmo->size, ksmo->buf, ksmo->ar, ksmo->key); + if (ksmo->flags & KVM_S390_MEMOP_F_CHECK_ONLY) + printf(", CHECK_ONLY"); + if (ksmo->flags & KVM_S390_MEMOP_F_INJECT_EXCEPTION) + printf(", INJECT_EXCEPTION"); + if (ksmo->flags & KVM_S390_MEMOP_F_SKEY_PROTECTION) + printf(", 
SKEY_PROTECTION"); + puts(")"); +} + +static void memop_ioctl(struct test_vcpu vcpu, struct kvm_s390_mem_op *ksmo) +{ + if (vcpu.id == VM_VCPU_ID) + vm_ioctl(vcpu.vm, KVM_S390_MEM_OP, ksmo); + else + vcpu_ioctl(vcpu.vm, vcpu.id, KVM_S390_MEM_OP, ksmo); +} + +static int err_memop_ioctl(struct test_vcpu vcpu, struct kvm_s390_mem_op *ksmo) +{ + if (vcpu.id == VM_VCPU_ID) + return _vm_ioctl(vcpu.vm, KVM_S390_MEM_OP, ksmo); + else + return _vcpu_ioctl(vcpu.vm, vcpu.id, KVM_S390_MEM_OP, ksmo); +} + +#define MEMOP(err, vcpu_p, mop_target_p, access_mode_p, buf_p, size_p, ...) \ +({ \ + struct test_vcpu __vcpu = (vcpu_p); \ + struct mop_desc __desc = { \ + .target = (mop_target_p), \ + .mode = (access_mode_p), \ + .buf = (buf_p), \ + .size = (size_p), \ + __VA_ARGS__ \ + }; \ + struct kvm_s390_mem_op __ksmo; \ + \ + if (__desc._gaddr_v) { \ + if (__desc.target == ABSOLUTE) \ + __desc.gaddr = addr_gva2gpa(__vcpu.vm, __desc.gaddr_v); \ + else \ + __desc.gaddr = __desc.gaddr_v; \ + } \ + __ksmo = ksmo_from_desc(__desc); \ + print_memop(__vcpu.id, &__ksmo); \ + err##memop_ioctl(__vcpu, &__ksmo); \ +}) + +#define MOP(...) MEMOP(, __VA_ARGS__) +#define ERR_MOP(...) MEMOP(err_, __VA_ARGS__) + +#define GADDR(a) .gaddr = ((uintptr_t)a) +#define GADDR_V(v) ._gaddr_v = 1, .gaddr_v = ((uintptr_t)v) +#define CHECK_ONLY .f_check = 1 +#define SET_FLAGS(f) ._set_flags = 1, .set_flags = (f) +#define SIDA_OFFSET(o) ._sida_offset = 1, .sida_offset = (o) +#define AR(a) ._ar = 1, .ar = (a) +#define KEY(a) .f_key = 1, .key = (a) + #define VCPU_ID 1 static uint8_t mem1[65536]; @@ -20,6 +202,7 @@ static uint8_t mem2[65536]; struct test_default { struct kvm_vm *kvm_vm; + struct test_vcpu vcpu; struct kvm_run *run; int size; }; @@ -30,6 +213,7 @@ static struct test_default test_default_init(void *guest_code) t.size = min((size_t)kvm_check_cap(KVM_CAP_S390_MEM_OP), sizeof(mem1)); t.kvm_vm = vm_create_default(VCPU_ID, 0, guest_code); + t.vcpu = (struct test_vcpu) { t.kvm_vm, VCPU_ID }; t.run = vcpu_state(t.kvm_vm, VCPU_ID); return t; } @@ -43,20 +227,14 @@ static void guest_copy(void) static void test_copy(void) { struct test_default t = test_default_init(guest_copy); - struct kvm_s390_mem_op ksmo; int i; for (i = 0; i < sizeof(mem1); i++) mem1[i] = i * i + i; /* Set the first array */ - ksmo.gaddr = addr_gva2gpa(t.kvm_vm, (uintptr_t)mem1); - ksmo.flags = 0; - ksmo.size = t.size; - ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE; - ksmo.buf = (uintptr_t)mem1; - ksmo.ar = 0; - vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo); + MOP(t.vcpu, LOGICAL, WRITE, mem1, t.size, + GADDR(addr_gva2gpa(t.kvm_vm, (uintptr_t)mem1))); /* Let the guest code copy the first array to the second */ vcpu_run(t.kvm_vm, VCPU_ID); @@ -68,13 +246,7 @@ static void test_copy(void) memset(mem2, 0xaa, sizeof(mem2)); /* Get the second array */ - ksmo.gaddr = (uintptr_t)mem2; - ksmo.flags = 0; - ksmo.size = t.size; - ksmo.op = KVM_S390_MEMOP_LOGICAL_READ; - ksmo.buf = (uintptr_t)mem2; - ksmo.ar = 0; - vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo); + MOP(t.vcpu, LOGICAL, READ, mem2, t.size, GADDR_V(mem2)); TEST_ASSERT(!memcmp(mem1, mem2, t.size), "Memory contents do not match!"); @@ -91,68 +263,31 @@ static void guest_idle(void) static void test_errors(void) { struct test_default t = test_default_init(guest_idle); - struct kvm_s390_mem_op ksmo; int rv; - /* Check error conditions - first bad size: */ - ksmo.gaddr = (uintptr_t)mem1; - ksmo.flags = 0; - ksmo.size = -1; - ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE; - ksmo.buf = (uintptr_t)mem1; - ksmo.ar = 0; - 
rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo); + /* Bad size: */ + rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, mem1, -1, GADDR_V(mem1)); TEST_ASSERT(rv == -1 && errno == E2BIG, "ioctl allows insane sizes"); /* Zero size: */ - ksmo.gaddr = (uintptr_t)mem1; - ksmo.flags = 0; - ksmo.size = 0; - ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE; - ksmo.buf = (uintptr_t)mem1; - ksmo.ar = 0; - rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo); + rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, mem1, 0, GADDR_V(mem1)); TEST_ASSERT(rv == -1 && (errno == EINVAL || errno == ENOMEM), "ioctl allows 0 as size"); /* Bad flags: */ - ksmo.gaddr = (uintptr_t)mem1; - ksmo.flags = -1; - ksmo.size = t.size; - ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE; - ksmo.buf = (uintptr_t)mem1; - ksmo.ar = 0; - rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo); + rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, mem1, t.size, GADDR_V(mem1), SET_FLAGS(-1)); TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl allows all flags"); /* Bad operation: */ - ksmo.gaddr = (uintptr_t)mem1; - ksmo.flags = 0; - ksmo.size = t.size; - ksmo.op = -1; - ksmo.buf = (uintptr_t)mem1; - ksmo.ar = 0; - rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo); + rv = ERR_MOP(t.vcpu, INVALID, WRITE, mem1, t.size, GADDR_V(mem1)); TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl allows bad operations"); /* Bad guest address: */ - ksmo.gaddr = ~0xfffUL; - ksmo.flags = KVM_S390_MEMOP_F_CHECK_ONLY; - ksmo.size = t.size; - ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE; - ksmo.buf = (uintptr_t)mem1; - ksmo.ar = 0; - rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo); + rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, mem1, t.size, GADDR((void *)~0xfffUL), CHECK_ONLY); TEST_ASSERT(rv > 0, "ioctl does not report bad guest memory access"); /* Bad host address: */ - ksmo.gaddr = (uintptr_t)mem1; - ksmo.flags = 0; - ksmo.size = t.size; - ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE; - ksmo.buf = 0; - ksmo.ar = 0; - rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo); + rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, 0, t.size, GADDR_V(mem1)); TEST_ASSERT(rv == -1 && errno == EFAULT, "ioctl does not report bad host memory address"); @@ -160,29 +295,16 @@ static void test_errors(void) t.run->psw_mask &= ~(3UL << (63 - 17)); t.run->psw_mask |= 1UL << (63 - 17); /* Enable AR mode */ vcpu_run(t.kvm_vm, VCPU_ID); /* To sync new state to SIE block */ - ksmo.gaddr = (uintptr_t)mem1; - ksmo.flags = 0; - ksmo.size = t.size; - ksmo.op = KVM_S390_MEMOP_LOGICAL_WRITE; - ksmo.buf = (uintptr_t)mem1; - ksmo.ar = 17; - rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo); + rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, mem1, t.size, GADDR_V(mem1), AR(17)); TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl allows ARs > 15"); t.run->psw_mask &= ~(3UL << (63 - 17)); /* Disable AR mode */ vcpu_run(t.kvm_vm, VCPU_ID); /* Run to sync new state */ /* Check that the SIDA calls are rejected for non-protected guests */ - ksmo.gaddr = 0; - ksmo.flags = 0; - ksmo.size = 8; - ksmo.op = KVM_S390_MEMOP_SIDA_READ; - ksmo.buf = (uintptr_t)mem1; - ksmo.sida_offset = 0x1c0; - rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo); + rv = ERR_MOP(t.vcpu, SIDA, READ, mem1, 8, GADDR(0), SIDA_OFFSET(0x1c0)); TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl does not reject SIDA_READ in non-protected mode"); - ksmo.op = KVM_S390_MEMOP_SIDA_WRITE; - rv = _vcpu_ioctl(t.kvm_vm, VCPU_ID, KVM_S390_MEM_OP, &ksmo); + rv = ERR_MOP(t.vcpu, SIDA, WRITE, mem1, 8, GADDR(0), SIDA_OFFSET(0x1c0)); TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl 
does not reject SIDA_WRITE in non-protected mode"); From patchwork Tue Mar 15 14:11:35 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christian Borntraeger X-Patchwork-Id: 12781486 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B133FC4332F for ; Tue, 15 Mar 2022 14:12:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S237940AbiCOONr (ORCPT ); Tue, 15 Mar 2022 10:13:47 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56182 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1349038AbiCOOM7 (ORCPT ); Tue, 15 Mar 2022 10:12:59 -0400 Received: from mx0a-001b2d01.pphosted.com (mx0b-001b2d01.pphosted.com [148.163.158.5]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4654D54BCD; Tue, 15 Mar 2022 07:11:46 -0700 (PDT) Received: from pps.filterd (m0098414.ppops.net [127.0.0.1]) by mx0b-001b2d01.pphosted.com (8.16.1.2/8.16.1.2) with SMTP id 22FDt0Pl002090; Tue, 15 Mar 2022 14:11:45 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ibm.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : content-transfer-encoding : mime-version; s=pp1; bh=Cky0JAwB0FgiuTE1ZIray99YAgYqwPDLl4P5F6cVtEI=; b=NtIsWLuZexB+2tDxLwDtBuMzNa6cEkgw2Ew5nSIFihOAV+2ZH3/QH5W3cd1RmOxRMN24 c5r5cb5PmOCY9JNlV0vU09aoBhhEajYjwToh67aVvC8ZDFq8uBv4rrk6no4rs/73zWXZ k0fVrD3TheCdidw2GATnG5dQUd9mAcxXISYPsEbYWKMyLu34L8lFAgqiN4+6bBp4NwF5 Zgtvmt6wslNorX151QjMR8/NoXr2Xv9idq/YG/2hNqQPDfxYu2SNQXWKqdl/v6YYBMCW PiGVtzntb6T061Mbx2vi08hgBAM8J+m71gopqTWAh9HRAC4dgS+jZsgF3WouFmlg5G+0 nA== Received: from pps.reinject (localhost [127.0.0.1]) by mx0b-001b2d01.pphosted.com with ESMTP id 3etv3vgdyv-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 15 Mar 2022 14:11:45 +0000 Received: from m0098414.ppops.net (m0098414.ppops.net [127.0.0.1]) by pps.reinject (8.16.0.43/8.16.0.43) with SMTP id 22FDv1kD011216; Tue, 15 Mar 2022 14:11:44 GMT Received: from ppma03ams.nl.ibm.com (62.31.33a9.ip4.static.sl-reverse.com [169.51.49.98]) by mx0b-001b2d01.pphosted.com with ESMTP id 3etv3vgdy3-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 15 Mar 2022 14:11:44 +0000 Received: from pps.filterd (ppma03ams.nl.ibm.com [127.0.0.1]) by ppma03ams.nl.ibm.com (8.16.1.2/8.16.1.2) with SMTP id 22FE99CR008054; Tue, 15 Mar 2022 14:11:43 GMT Received: from b06cxnps4076.portsmouth.uk.ibm.com (d06relay13.portsmouth.uk.ibm.com [9.149.109.198]) by ppma03ams.nl.ibm.com with ESMTP id 3et95wt42h-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 15 Mar 2022 14:11:43 +0000 Received: from d06av23.portsmouth.uk.ibm.com (d06av23.portsmouth.uk.ibm.com [9.149.105.59]) by b06cxnps4076.portsmouth.uk.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id 22FEBeau51839320 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Tue, 15 Mar 2022 14:11:40 GMT Received: from d06av23.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id 08D20A4051; Tue, 15 Mar 2022 14:11:40 +0000 (GMT) Received: from d06av23.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id E7F60A4040; Tue, 15 Mar 2022 14:11:39 +0000 (GMT) Received: from tuxmaker.boeblingen.de.ibm.com (unknown [9.152.85.9]) by 
From: Christian Borntraeger
To: Paolo Bonzini
Cc: KVM, Janosch Frank, Claudio Imbrenda, David Hildenbrand, linux-s390, Christian Borntraeger, Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Janis Schoetterl-Glausch, Thomas Huth
Subject: [GIT PULL 5/7] KVM: s390: selftests: Add named stages for memop test
Date: Tue, 15 Mar 2022 15:11:35 +0100
Message-Id: <20220315141137.357923-6-borntraeger@linux.ibm.com>
In-Reply-To: <20220315141137.357923-1-borntraeger@linux.ibm.com>
References: <20220315141137.357923-1-borntraeger@linux.ibm.com>

From: Janis Schoetterl-Glausch

The stages synchronize guest and host execution. This helps the reader and constrains the execution of the test -- if the observed stage differs from the expected one, the test fails.

Signed-off-by: Janis Schoetterl-Glausch
Link: https://lore.kernel.org/r/20220308125841.3271721-4-scgl@linux.ibm.com
Signed-off-by: Christian Borntraeger
---
 tools/testing/selftests/kvm/s390x/memop.c | 44 +++++++++++++++++------
 1 file changed, 33 insertions(+), 11 deletions(-)

diff --git a/tools/testing/selftests/kvm/s390x/memop.c b/tools/testing/selftests/kvm/s390x/memop.c
index e2ad3d70bae4..88ce2d670ed5 100644
--- a/tools/testing/selftests/kvm/s390x/memop.c
+++ b/tools/testing/selftests/kvm/s390x/memop.c
@@ -218,10 +218,32 @@ static struct test_default test_default_init(void *guest_code)
 	return t;
 }

+enum stage {
+	/* Synced state set by host, e.g.
DAT */ + STAGE_INITED, + /* Guest did nothing */ + STAGE_IDLED, + /* Guest copied memory (locations up to test case) */ + STAGE_COPIED, +}; + +#define HOST_SYNC(vcpu_p, stage) \ +({ \ + struct test_vcpu __vcpu = (vcpu_p); \ + struct ucall uc; \ + int __stage = (stage); \ + \ + vcpu_run(__vcpu.vm, __vcpu.id); \ + get_ucall(__vcpu.vm, __vcpu.id, &uc); \ + ASSERT_EQ(uc.cmd, UCALL_SYNC); \ + ASSERT_EQ(uc.args[1], __stage); \ +}) \ + static void guest_copy(void) { + GUEST_SYNC(STAGE_INITED); memcpy(&mem2, &mem1, sizeof(mem2)); - GUEST_SYNC(0); + GUEST_SYNC(STAGE_COPIED); } static void test_copy(void) @@ -232,16 +254,13 @@ static void test_copy(void) for (i = 0; i < sizeof(mem1); i++) mem1[i] = i * i + i; + HOST_SYNC(t.vcpu, STAGE_INITED); + /* Set the first array */ - MOP(t.vcpu, LOGICAL, WRITE, mem1, t.size, - GADDR(addr_gva2gpa(t.kvm_vm, (uintptr_t)mem1))); + MOP(t.vcpu, LOGICAL, WRITE, mem1, t.size, GADDR_V(mem1)); /* Let the guest code copy the first array to the second */ - vcpu_run(t.kvm_vm, VCPU_ID); - TEST_ASSERT(t.run->exit_reason == KVM_EXIT_S390_SIEIC, - "Unexpected exit reason: %u (%s)\n", - t.run->exit_reason, - exit_reason_str(t.run->exit_reason)); + HOST_SYNC(t.vcpu, STAGE_COPIED); memset(mem2, 0xaa, sizeof(mem2)); @@ -256,8 +275,9 @@ static void test_copy(void) static void guest_idle(void) { + GUEST_SYNC(STAGE_INITED); /* for consistency's sake */ for (;;) - GUEST_SYNC(0); + GUEST_SYNC(STAGE_IDLED); } static void test_errors(void) @@ -265,6 +285,8 @@ static void test_errors(void) struct test_default t = test_default_init(guest_idle); int rv; + HOST_SYNC(t.vcpu, STAGE_INITED); + /* Bad size: */ rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, mem1, -1, GADDR_V(mem1)); TEST_ASSERT(rv == -1 && errno == E2BIG, "ioctl allows insane sizes"); @@ -294,11 +316,11 @@ static void test_errors(void) /* Bad access register: */ t.run->psw_mask &= ~(3UL << (63 - 17)); t.run->psw_mask |= 1UL << (63 - 17); /* Enable AR mode */ - vcpu_run(t.kvm_vm, VCPU_ID); /* To sync new state to SIE block */ + HOST_SYNC(t.vcpu, STAGE_IDLED); /* To sync new state to SIE block */ rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, mem1, t.size, GADDR_V(mem1), AR(17)); TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl allows ARs > 15"); t.run->psw_mask &= ~(3UL << (63 - 17)); /* Disable AR mode */ - vcpu_run(t.kvm_vm, VCPU_ID); /* Run to sync new state */ + HOST_SYNC(t.vcpu, STAGE_IDLED); /* Run to sync new state */ /* Check that the SIDA calls are rejected for non-protected guests */ rv = ERR_MOP(t.vcpu, SIDA, READ, mem1, 8, GADDR(0), SIDA_OFFSET(0x1c0)); From patchwork Tue Mar 15 14:11:36 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christian Borntraeger X-Patchwork-Id: 12781479 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 84512C433EF for ; Tue, 15 Mar 2022 14:11:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1348996AbiCOONH (ORCPT ); Tue, 15 Mar 2022 10:13:07 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56186 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1349040AbiCOOM7 (ORCPT ); Tue, 15 Mar 2022 10:12:59 -0400 Received: from mx0a-001b2d01.pphosted.com (mx0a-001b2d01.pphosted.com [148.163.156.1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F2DFA546B9; Tue, 15 Mar 2022 
07:11:46 -0700 (PDT) Received: from pps.filterd (m0098409.ppops.net [127.0.0.1]) by mx0a-001b2d01.pphosted.com (8.16.1.2/8.16.1.2) with SMTP id 22FDa7N6012359; Tue, 15 Mar 2022 14:11:46 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ibm.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : content-transfer-encoding : mime-version; s=pp1; bh=fUJ0PW6+HLr6dIHZzShpkZb7Nkxj/ZfwczrCVkP5M3M=; b=tD7WtjFkm9EaRENeRhcRvqt1pAAFQ9WHN9VgbG7OvIhFkScXB9t2essYV5HmR7Z7Gxji rYxWWxoIysZaDMWvzQfp8rB23y5OW2IHzGF1DOhQ+lwztREFaVyp0YEjEGFrcQOkIEKH lmTG8I3F+IkDy8cIytBkQTlQ6aYqk0Xw8QVna27I27JlxdwG8Wt3ZsA+pEH6GZ64AodX JNWNldC369o1prwmxgAGeE69e3iSjJTTh1zQBgwtS89bLvSTCK5itQRRmTaiCBXKNULI VWO/wHrUfZU1Qhn77HvHnu7A+zvM6llL8RJc05ShonR4NutwLjLGuP//s1jBbd6J7dOL 8g== Received: from pps.reinject (localhost [127.0.0.1]) by mx0a-001b2d01.pphosted.com with ESMTP id 3etscymh70-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 15 Mar 2022 14:11:46 +0000 Received: from m0098409.ppops.net (m0098409.ppops.net [127.0.0.1]) by pps.reinject (8.16.0.43/8.16.0.43) with SMTP id 22FEADIM011826; Tue, 15 Mar 2022 14:11:46 GMT Received: from ppma02fra.de.ibm.com (47.49.7a9f.ip4.static.sl-reverse.com [159.122.73.71]) by mx0a-001b2d01.pphosted.com with ESMTP id 3etscymh64-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 15 Mar 2022 14:11:45 +0000 Received: from pps.filterd (ppma02fra.de.ibm.com [127.0.0.1]) by ppma02fra.de.ibm.com (8.16.1.2/8.16.1.2) with SMTP id 22FE83ia028989; Tue, 15 Mar 2022 14:11:43 GMT Received: from b06cxnps4075.portsmouth.uk.ibm.com (d06relay12.portsmouth.uk.ibm.com [9.149.109.197]) by ppma02fra.de.ibm.com with ESMTP id 3erk58nt76-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 15 Mar 2022 14:11:43 +0000 Received: from d06av22.portsmouth.uk.ibm.com (d06av22.portsmouth.uk.ibm.com [9.149.105.58]) by b06cxnps4075.portsmouth.uk.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id 22FEBe9V55181600 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Tue, 15 Mar 2022 14:11:40 GMT Received: from d06av22.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id 59E954C04A; Tue, 15 Mar 2022 14:11:40 +0000 (GMT) Received: from d06av22.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id 380BE4C046; Tue, 15 Mar 2022 14:11:40 +0000 (GMT) Received: from tuxmaker.boeblingen.de.ibm.com (unknown [9.152.85.9]) by d06av22.portsmouth.uk.ibm.com (Postfix) with ESMTPS; Tue, 15 Mar 2022 14:11:40 +0000 (GMT) Received: by tuxmaker.boeblingen.de.ibm.com (Postfix, from userid 25651) id E6EA6E127A; Tue, 15 Mar 2022 15:11:39 +0100 (CET) From: Christian Borntraeger To: Paolo Bonzini Cc: KVM , Janosch Frank , Claudio Imbrenda , David Hildenbrand , linux-s390 , Christian Borntraeger , Heiko Carstens , Vasily Gorbik , Alexander Gordeev , Janis Schoetterl-Glausch , Thomas Huth Subject: [GIT PULL 6/7] KVM: s390: selftests: Add more copy memop tests Date: Tue, 15 Mar 2022 15:11:36 +0100 Message-Id: <20220315141137.357923-7-borntraeger@linux.ibm.com> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220315141137.357923-1-borntraeger@linux.ibm.com> References: <20220315141137.357923-1-borntraeger@linux.ibm.com> X-TM-AS-GCONF: 00 X-Proofpoint-GUID: RaZBhQ7cf73SslW9Gwkp_Ji_TWVdFTeB X-Proofpoint-ORIG-GUID: ky8-D_akuMLZEWv5GZ62zoUb4RleoKhP X-Proofpoint-UnRewURL: 0 URL was un-rewritten MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=baseguard 
engine=ICAP:2.0.205,Aquarius:18.0.850,Hydra:6.0.425,FMLib:17.11.64.514 definitions=2022-03-15_03,2022-03-15_01,2022-02-23_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 priorityscore=1501 malwarescore=0 impostorscore=0 mlxlogscore=999 mlxscore=0 clxscore=1015 spamscore=0 bulkscore=0 adultscore=0 lowpriorityscore=0 suspectscore=0 phishscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2202240000 definitions=main-2203150092 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Janis Schoetterl-Glausch Do not just test the actual copy, but also that success is indicated when using the check only flag. Add copy test with storage key checking enabled, including tests for storage and fetch protection override. These tests cover both logical vcpu ioctls as well as absolute vm ioctls. Signed-off-by: Janis Schoetterl-Glausch Link: https://lore.kernel.org/r/20220308125841.3271721-5-scgl@linux.ibm.com Signed-off-by: Christian Borntraeger --- tools/testing/selftests/kvm/s390x/memop.c | 247 ++++++++++++++++++++-- 1 file changed, 232 insertions(+), 15 deletions(-) diff --git a/tools/testing/selftests/kvm/s390x/memop.c b/tools/testing/selftests/kvm/s390x/memop.c index 88ce2d670ed5..42282663b38b 100644 --- a/tools/testing/selftests/kvm/s390x/memop.c +++ b/tools/testing/selftests/kvm/s390x/memop.c @@ -195,13 +195,21 @@ static int err_memop_ioctl(struct test_vcpu vcpu, struct kvm_s390_mem_op *ksmo) #define AR(a) ._ar = 1, .ar = (a) #define KEY(a) .f_key = 1, .key = (a) +#define CHECK_N_DO(f, ...) ({ f(__VA_ARGS__, CHECK_ONLY); f(__VA_ARGS__); }) + #define VCPU_ID 1 +#define PAGE_SHIFT 12 +#define PAGE_SIZE (1ULL << PAGE_SHIFT) +#define PAGE_MASK (~(PAGE_SIZE - 1)) +#define CR0_FETCH_PROTECTION_OVERRIDE (1UL << (63 - 38)) +#define CR0_STORAGE_PROTECTION_OVERRIDE (1UL << (63 - 39)) static uint8_t mem1[65536]; static uint8_t mem2[65536]; struct test_default { struct kvm_vm *kvm_vm; + struct test_vcpu vm; struct test_vcpu vcpu; struct kvm_run *run; int size; @@ -213,6 +221,7 @@ static struct test_default test_default_init(void *guest_code) t.size = min((size_t)kvm_check_cap(KVM_CAP_S390_MEM_OP), sizeof(mem1)); t.kvm_vm = vm_create_default(VCPU_ID, 0, guest_code); + t.vm = (struct test_vcpu) { t.kvm_vm, VM_VCPU_ID }; t.vcpu = (struct test_vcpu) { t.kvm_vm, VCPU_ID }; t.run = vcpu_state(t.kvm_vm, VCPU_ID); return t; @@ -223,6 +232,8 @@ enum stage { STAGE_INITED, /* Guest did nothing */ STAGE_IDLED, + /* Guest set storage keys (specifics up to test case) */ + STAGE_SKEYS_SET, /* Guest copied memory (locations up to test case) */ STAGE_COPIED, }; @@ -239,6 +250,47 @@ enum stage { ASSERT_EQ(uc.args[1], __stage); \ }) \ +static void prepare_mem12(void) +{ + int i; + + for (i = 0; i < sizeof(mem1); i++) + mem1[i] = rand(); + memset(mem2, 0xaa, sizeof(mem2)); +} + +#define ASSERT_MEM_EQ(p1, p2, size) \ + TEST_ASSERT(!memcmp(p1, p2, size), "Memory contents do not match!") + +#define DEFAULT_WRITE_READ(copy_cpu, mop_cpu, mop_target_p, size, ...) \ +({ \ + struct test_vcpu __copy_cpu = (copy_cpu), __mop_cpu = (mop_cpu); \ + enum mop_target __target = (mop_target_p); \ + uint32_t __size = (size); \ + \ + prepare_mem12(); \ + CHECK_N_DO(MOP, __mop_cpu, __target, WRITE, mem1, __size, \ + GADDR_V(mem1), ##__VA_ARGS__); \ + HOST_SYNC(__copy_cpu, STAGE_COPIED); \ + CHECK_N_DO(MOP, __mop_cpu, __target, READ, mem2, __size, \ + GADDR_V(mem2), ##__VA_ARGS__); \ + ASSERT_MEM_EQ(mem1, mem2, __size); \ +}) + +#define DEFAULT_READ(copy_cpu, mop_cpu, mop_target_p, size, ...)
\ +({ \ + struct test_vcpu __copy_cpu = (copy_cpu), __mop_cpu = (mop_cpu); \ + enum mop_target __target = (mop_target_p); \ + uint32_t __size = (size); \ + \ + prepare_mem12(); \ + CHECK_N_DO(MOP, __mop_cpu, __target, WRITE, mem1, __size, \ + GADDR_V(mem1)); \ + HOST_SYNC(__copy_cpu, STAGE_COPIED); \ + CHECK_N_DO(MOP, __mop_cpu, __target, READ, mem2, __size, ##__VA_ARGS__);\ + ASSERT_MEM_EQ(mem1, mem2, __size); \ +}) + static void guest_copy(void) { GUEST_SYNC(STAGE_INITED); @@ -249,27 +301,183 @@ static void guest_copy(void) static void test_copy(void) { struct test_default t = test_default_init(guest_copy); - int i; - - for (i = 0; i < sizeof(mem1); i++) - mem1[i] = i * i + i; HOST_SYNC(t.vcpu, STAGE_INITED); - /* Set the first array */ - MOP(t.vcpu, LOGICAL, WRITE, mem1, t.size, GADDR_V(mem1)); + DEFAULT_WRITE_READ(t.vcpu, t.vcpu, LOGICAL, t.size); - /* Let the guest code copy the first array to the second */ + kvm_vm_free(t.kvm_vm); +} + +static void set_storage_key_range(void *addr, size_t len, uint8_t key) +{ + uintptr_t _addr, abs, i; + int not_mapped = 0; + + _addr = (uintptr_t)addr; + for (i = _addr & PAGE_MASK; i < _addr + len; i += PAGE_SIZE) { + abs = i; + asm volatile ( + "lra %[abs], 0(0,%[abs])\n" + " jz 0f\n" + " llill %[not_mapped],1\n" + " j 1f\n" + "0: sske %[key], %[abs]\n" + "1:" + : [abs] "+&a" (abs), [not_mapped] "+r" (not_mapped) + : [key] "r" (key) + : "cc" + ); + GUEST_ASSERT_EQ(not_mapped, 0); + } +} + +static void guest_copy_key(void) +{ + set_storage_key_range(mem1, sizeof(mem1), 0x90); + set_storage_key_range(mem2, sizeof(mem2), 0x90); + GUEST_SYNC(STAGE_SKEYS_SET); + + for (;;) { + memcpy(&mem2, &mem1, sizeof(mem2)); + GUEST_SYNC(STAGE_COPIED); + } +} + +static void test_copy_key(void) +{ + struct test_default t = test_default_init(guest_copy_key); + + HOST_SYNC(t.vcpu, STAGE_SKEYS_SET); + + /* vm, no key */ + DEFAULT_WRITE_READ(t.vcpu, t.vm, ABSOLUTE, t.size); + + /* vm/vcpu, matching key or key 0 */ + DEFAULT_WRITE_READ(t.vcpu, t.vcpu, LOGICAL, t.size, KEY(0)); + DEFAULT_WRITE_READ(t.vcpu, t.vcpu, LOGICAL, t.size, KEY(9)); + DEFAULT_WRITE_READ(t.vcpu, t.vm, ABSOLUTE, t.size, KEY(0)); + DEFAULT_WRITE_READ(t.vcpu, t.vm, ABSOLUTE, t.size, KEY(9)); + /* + * There used to be different code paths for key handling depending on + * if the region crossed a page boundary. + * There currently are not, but the more tests the merrier. + */ + DEFAULT_WRITE_READ(t.vcpu, t.vcpu, LOGICAL, 1, KEY(0)); + DEFAULT_WRITE_READ(t.vcpu, t.vcpu, LOGICAL, 1, KEY(9)); + DEFAULT_WRITE_READ(t.vcpu, t.vm, ABSOLUTE, 1, KEY(0)); + DEFAULT_WRITE_READ(t.vcpu, t.vm, ABSOLUTE, 1, KEY(9)); + + /* vm/vcpu, mismatching keys on read, but no fetch protection */ + DEFAULT_READ(t.vcpu, t.vcpu, LOGICAL, t.size, GADDR_V(mem2), KEY(2)); + DEFAULT_READ(t.vcpu, t.vm, ABSOLUTE, t.size, GADDR_V(mem1), KEY(2)); + + kvm_vm_free(t.kvm_vm); +} + +static void guest_copy_key_fetch_prot(void) +{ + /* + * For some reason combining the first sync with override enablement + * results in an exception when calling HOST_SYNC. + */ + GUEST_SYNC(STAGE_INITED); + /* Storage protection override applies to both store and fetch.
*/ + set_storage_key_range(mem1, sizeof(mem1), 0x98); + set_storage_key_range(mem2, sizeof(mem2), 0x98); + GUEST_SYNC(STAGE_SKEYS_SET); + + for (;;) { + memcpy(&mem2, &mem1, sizeof(mem2)); + GUEST_SYNC(STAGE_COPIED); + } +} + +static void test_copy_key_storage_prot_override(void) +{ + struct test_default t = test_default_init(guest_copy_key_fetch_prot); + + HOST_SYNC(t.vcpu, STAGE_INITED); + t.run->s.regs.crs[0] |= CR0_STORAGE_PROTECTION_OVERRIDE; + t.run->kvm_dirty_regs = KVM_SYNC_CRS; + HOST_SYNC(t.vcpu, STAGE_SKEYS_SET); + + /* vcpu, mismatching keys, storage protection override in effect */ + DEFAULT_WRITE_READ(t.vcpu, t.vcpu, LOGICAL, t.size, KEY(2)); + + kvm_vm_free(t.kvm_vm); +} + +static void test_copy_key_fetch_prot(void) +{ + struct test_default t = test_default_init(guest_copy_key_fetch_prot); + + HOST_SYNC(t.vcpu, STAGE_INITED); + HOST_SYNC(t.vcpu, STAGE_SKEYS_SET); + + /* vm/vcpu, matching key, fetch protection in effect */ + DEFAULT_READ(t.vcpu, t.vcpu, LOGICAL, t.size, GADDR_V(mem2), KEY(9)); + DEFAULT_READ(t.vcpu, t.vm, ABSOLUTE, t.size, GADDR_V(mem2), KEY(9)); + + kvm_vm_free(t.kvm_vm); +} + +const uint64_t last_page_addr = -PAGE_SIZE; + +static void guest_copy_key_fetch_prot_override(void) +{ + int i; + char *page_0 = 0; + + GUEST_SYNC(STAGE_INITED); + set_storage_key_range(0, PAGE_SIZE, 0x18); + set_storage_key_range((void *)last_page_addr, PAGE_SIZE, 0x0); + asm volatile ("sske %[key],%[addr]\n" :: [addr] "r"(0), [key] "r"(0x18) : "cc"); + GUEST_SYNC(STAGE_SKEYS_SET); + + for (;;) { + for (i = 0; i < PAGE_SIZE; i++) + page_0[i] = mem1[i]; + GUEST_SYNC(STAGE_COPIED); + } +} + +static void test_copy_key_fetch_prot_override(void) +{ + struct test_default t = test_default_init(guest_copy_key_fetch_prot_override); + vm_vaddr_t guest_0_page, guest_last_page; + + guest_0_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, 0); + guest_last_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, last_page_addr); + if (guest_0_page != 0 || guest_last_page != last_page_addr) { + print_skip("did not allocate guest pages at required positions"); + goto out; + } + + HOST_SYNC(t.vcpu, STAGE_INITED); + t.run->s.regs.crs[0] |= CR0_FETCH_PROTECTION_OVERRIDE; + t.run->kvm_dirty_regs = KVM_SYNC_CRS; + HOST_SYNC(t.vcpu, STAGE_SKEYS_SET); + + /* vcpu, mismatching keys on fetch, fetch protection override applies */ + prepare_mem12(); + MOP(t.vcpu, LOGICAL, WRITE, mem1, PAGE_SIZE, GADDR_V(mem1)); HOST_SYNC(t.vcpu, STAGE_COPIED); + CHECK_N_DO(MOP, t.vcpu, LOGICAL, READ, mem2, 2048, GADDR_V(guest_0_page), KEY(2)); + ASSERT_MEM_EQ(mem1, mem2, 2048); - memset(mem2, 0xaa, sizeof(mem2)); - - /* Get the second array */ - MOP(t.vcpu, LOGICAL, READ, mem2, t.size, GADDR_V(mem2)); - - TEST_ASSERT(!memcmp(mem1, mem2, t.size), - "Memory contents do not match!"); + /* + * vcpu, mismatching keys on fetch, fetch protection override applies, + * wraparound + */ + prepare_mem12(); + MOP(t.vcpu, LOGICAL, WRITE, mem1, 2 * PAGE_SIZE, GADDR_V(guest_last_page)); + HOST_SYNC(t.vcpu, STAGE_COPIED); + CHECK_N_DO(MOP, t.vcpu, LOGICAL, READ, mem2, PAGE_SIZE + 2048, + GADDR_V(guest_last_page), KEY(2)); + ASSERT_MEM_EQ(mem1, mem2, 2048); +out: kvm_vm_free(t.kvm_vm); } @@ -335,17 +543,26 @@ static void test_errors(void) int main(int argc, char *argv[]) { - int memop_cap; + int memop_cap, extension_cap; setbuf(stdout, NULL); /* Tell stdout not to buffer its content */ memop_cap = kvm_check_cap(KVM_CAP_S390_MEM_OP); + extension_cap = kvm_check_cap(KVM_CAP_S390_MEM_OP_EXTENSION); if (!memop_cap) { print_skip("CAP_S390_MEM_OP not supported"); 
exit(KSFT_SKIP); } test_copy(); + if (extension_cap > 0) { + test_copy_key(); + test_copy_key_storage_prot_override(); + test_copy_key_fetch_prot(); + test_copy_key_fetch_prot_override(); + } else { + print_skip("storage key memop extension not supported"); + } test_errors(); return 0; From patchwork Tue Mar 15 14:11:37 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Christian Borntraeger X-Patchwork-Id: 12781484 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9D72BC433F5 for ; Tue, 15 Mar 2022 14:12:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S241113AbiCOONN (ORCPT ); Tue, 15 Mar 2022 10:13:13 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:56120 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1349044AbiCOONA (ORCPT ); Tue, 15 Mar 2022 10:13:00 -0400 Received: from mx0b-001b2d01.pphosted.com (mx0b-001b2d01.pphosted.com [148.163.158.5]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 55B5C54BD1; Tue, 15 Mar 2022 07:11:47 -0700 (PDT) Received: from pps.filterd (m0127361.ppops.net [127.0.0.1]) by mx0a-001b2d01.pphosted.com (8.16.1.2/8.16.1.2) with SMTP id 22FDTXfn001163; Tue, 15 Mar 2022 14:11:46 GMT DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ibm.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : content-transfer-encoding : mime-version; s=pp1; bh=lgUcleidPgeSbZdOb4sTvjU1fm+T1hemCinHIHwKwXU=; b=BIyzRLoxCEgFrpnQuWKJGbqJk5BzqrjN+3TXys0jnrMGIhzFbzhnRR9sCxLRWnkdk7tj zz+xFW1iVM9unsBNJvsi1QIAtoV8Nwr0orjtucciGMLLhMACKK5r62T4rlppGdCl9TQv LUIZHAZdb8HqpbZhkvfXDPRM4/dWliMRBDOR+NFNH0atx/Qf7HTppuF/y4LusF83gdn/ lFSenAx4gYbDhD4YVc1JwRXp3dtfYnxEUiUkQYu8Nx0scWhc5b6VMnoY3Q0zw8rfmUMA nxCiJHQ3lE2RZurJbcogyrB7P5NjcJIksJ9rIblK31RONgFtF4mggCkLsqzZYNeOywAK jA== Received: from pps.reinject (localhost [127.0.0.1]) by mx0a-001b2d01.pphosted.com with ESMTP id 3etuqvs1pg-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 15 Mar 2022 14:11:46 +0000 Received: from m0127361.ppops.net (m0127361.ppops.net [127.0.0.1]) by pps.reinject (8.16.0.43/8.16.0.43) with SMTP id 22FDpkpH022474; Tue, 15 Mar 2022 14:11:46 GMT Received: from ppma04ams.nl.ibm.com (63.31.33a9.ip4.static.sl-reverse.com [169.51.49.99]) by mx0a-001b2d01.pphosted.com with ESMTP id 3etuqvs1p0-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 15 Mar 2022 14:11:45 +0000 Received: from pps.filterd (ppma04ams.nl.ibm.com [127.0.0.1]) by ppma04ams.nl.ibm.com (8.16.1.2/8.16.1.2) with SMTP id 22FE9JJF013277; Tue, 15 Mar 2022 14:11:44 GMT Received: from b06cxnps4075.portsmouth.uk.ibm.com (d06relay12.portsmouth.uk.ibm.com [9.149.109.197]) by ppma04ams.nl.ibm.com with ESMTP id 3erk58xsw2-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 15 Mar 2022 14:11:43 +0000 Received: from d06av22.portsmouth.uk.ibm.com (d06av22.portsmouth.uk.ibm.com [9.149.105.58]) by b06cxnps4075.portsmouth.uk.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id 22FEBe8156295826 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK); Tue, 15 Mar 2022 14:11:40 GMT Received: from d06av22.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id A8F994C044; Tue, 15 Mar 2022 
14:11:40 +0000 (GMT) Received: from d06av22.portsmouth.uk.ibm.com (unknown [127.0.0.1]) by IMSVA (Postfix) with ESMTP id 875884C040; Tue, 15 Mar 2022 14:11:40 +0000 (GMT) Received: from tuxmaker.boeblingen.de.ibm.com (unknown [9.152.85.9]) by d06av22.portsmouth.uk.ibm.com (Postfix) with ESMTPS; Tue, 15 Mar 2022 14:11:40 +0000 (GMT) Received: by tuxmaker.boeblingen.de.ibm.com (Postfix, from userid 25651) id 42584E11F3; Tue, 15 Mar 2022 15:11:40 +0100 (CET) From: Christian Borntraeger To: Paolo Bonzini Cc: KVM , Janosch Frank , Claudio Imbrenda , David Hildenbrand , linux-s390 , Christian Borntraeger , Heiko Carstens , Vasily Gorbik , Alexander Gordeev , Janis Schoetterl-Glausch , Thomas Huth Subject: [GIT PULL 7/7] KVM: s390: selftests: Add error memop tests Date: Tue, 15 Mar 2022 15:11:37 +0100 Message-Id: <20220315141137.357923-8-borntraeger@linux.ibm.com> X-Mailer: git-send-email 2.35.1 In-Reply-To: <20220315141137.357923-1-borntraeger@linux.ibm.com> References: <20220315141137.357923-1-borntraeger@linux.ibm.com> X-TM-AS-GCONF: 00 X-Proofpoint-GUID: LwtSvwYXdTKwdnKmmwyekFDbq0lzS9HH X-Proofpoint-ORIG-GUID: 7Jot3k7dfHkPD_S3HnthXZp2ipbTmDPW X-Proofpoint-UnRewURL: 0 URL was un-rewritten MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.205,Aquarius:18.0.850,Hydra:6.0.425,FMLib:17.11.64.514 definitions=2022-03-15_03,2022-03-15_01,2022-02-23_01 X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 spamscore=0 bulkscore=0 phishscore=0 malwarescore=0 mlxlogscore=999 impostorscore=0 lowpriorityscore=0 clxscore=1015 priorityscore=1501 mlxscore=0 suspectscore=0 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2202240000 definitions=main-2203150092 Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org From: Janis Schoetterl-Glausch Test that errors occur if key protection disallows access, including tests for storage and fetch protection override. Perform tests for both logical vcpu and absolute vm ioctls. Also extend the existing tests to the vm ioctl. Signed-off-by: Janis Schoetterl-Glausch Link: https://lore.kernel.org/r/20220308125841.3271721-6-scgl@linux.ibm.com Signed-off-by: Christian Borntraeger --- tools/testing/selftests/kvm/s390x/memop.c | 153 +++++++++++++++++++--- 1 file changed, 132 insertions(+), 21 deletions(-) diff --git a/tools/testing/selftests/kvm/s390x/memop.c b/tools/testing/selftests/kvm/s390x/memop.c index 42282663b38b..b04c2c1b3c30 100644 --- a/tools/testing/selftests/kvm/s390x/memop.c +++ b/tools/testing/selftests/kvm/s390x/memop.c @@ -422,6 +422,46 @@ static void test_copy_key_fetch_prot(void) kvm_vm_free(t.kvm_vm); } +#define ERR_PROT_MOP(...) 
\ +({ \ + int rv; \ + \ + rv = ERR_MOP(__VA_ARGS__); \ + TEST_ASSERT(rv == 4, "Should result in protection exception"); \ +}) + +static void test_errors_key(void) +{ + struct test_default t = test_default_init(guest_copy_key_fetch_prot); + + HOST_SYNC(t.vcpu, STAGE_INITED); + HOST_SYNC(t.vcpu, STAGE_SKEYS_SET); + + /* vm/vcpu, mismatching keys, fetch protection in effect */ + CHECK_N_DO(ERR_PROT_MOP, t.vcpu, LOGICAL, WRITE, mem1, t.size, GADDR_V(mem1), KEY(2)); + CHECK_N_DO(ERR_PROT_MOP, t.vcpu, LOGICAL, READ, mem2, t.size, GADDR_V(mem2), KEY(2)); + CHECK_N_DO(ERR_PROT_MOP, t.vm, ABSOLUTE, WRITE, mem1, t.size, GADDR_V(mem1), KEY(2)); + CHECK_N_DO(ERR_PROT_MOP, t.vm, ABSOLUTE, READ, mem2, t.size, GADDR_V(mem2), KEY(2)); + + kvm_vm_free(t.kvm_vm); +} + +static void test_errors_key_storage_prot_override(void) +{ + struct test_default t = test_default_init(guest_copy_key_fetch_prot); + + HOST_SYNC(t.vcpu, STAGE_INITED); + t.run->s.regs.crs[0] |= CR0_STORAGE_PROTECTION_OVERRIDE; + t.run->kvm_dirty_regs = KVM_SYNC_CRS; + HOST_SYNC(t.vcpu, STAGE_SKEYS_SET); + + /* vm, mismatching keys, storage protection override not applicable to vm */ + CHECK_N_DO(ERR_PROT_MOP, t.vm, ABSOLUTE, WRITE, mem1, t.size, GADDR_V(mem1), KEY(2)); + CHECK_N_DO(ERR_PROT_MOP, t.vm, ABSOLUTE, READ, mem2, t.size, GADDR_V(mem2), KEY(2)); + + kvm_vm_free(t.kvm_vm); +} + const uint64_t last_page_addr = -PAGE_SIZE; static void guest_copy_key_fetch_prot_override(void) @@ -481,6 +521,58 @@ static void test_copy_key_fetch_prot_override(void) kvm_vm_free(t.kvm_vm); } +static void test_errors_key_fetch_prot_override_not_enabled(void) +{ + struct test_default t = test_default_init(guest_copy_key_fetch_prot_override); + vm_vaddr_t guest_0_page, guest_last_page; + + guest_0_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, 0); + guest_last_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, last_page_addr); + if (guest_0_page != 0 || guest_last_page != last_page_addr) { + print_skip("did not allocate guest pages at required positions"); + goto out; + } + HOST_SYNC(t.vcpu, STAGE_INITED); + HOST_SYNC(t.vcpu, STAGE_SKEYS_SET); + + /* vcpu, mismatching keys on fetch, fetch protection override not enabled */ + CHECK_N_DO(ERR_PROT_MOP, t.vcpu, LOGICAL, READ, mem2, 2048, GADDR_V(0), KEY(2)); + +out: + kvm_vm_free(t.kvm_vm); +} + +static void test_errors_key_fetch_prot_override_enabled(void) +{ + struct test_default t = test_default_init(guest_copy_key_fetch_prot_override); + vm_vaddr_t guest_0_page, guest_last_page; + + guest_0_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, 0); + guest_last_page = vm_vaddr_alloc(t.kvm_vm, PAGE_SIZE, last_page_addr); + if (guest_0_page != 0 || guest_last_page != last_page_addr) { + print_skip("did not allocate guest pages at required positions"); + goto out; + } + HOST_SYNC(t.vcpu, STAGE_INITED); + t.run->s.regs.crs[0] |= CR0_FETCH_PROTECTION_OVERRIDE; + t.run->kvm_dirty_regs = KVM_SYNC_CRS; + HOST_SYNC(t.vcpu, STAGE_SKEYS_SET); + + /* + * vcpu, mismatching keys on fetch, + * fetch protection override does not apply because memory range exceeded + */ + CHECK_N_DO(ERR_PROT_MOP, t.vcpu, LOGICAL, READ, mem2, 2048 + 1, GADDR_V(0), KEY(2)); + CHECK_N_DO(ERR_PROT_MOP, t.vcpu, LOGICAL, READ, mem2, PAGE_SIZE + 2048 + 1, + GADDR_V(guest_last_page), KEY(2)); + /* vm, fetch protection override does not apply */ + CHECK_N_DO(ERR_PROT_MOP, t.vm, ABSOLUTE, READ, mem2, 2048, GADDR(0), KEY(2)); + CHECK_N_DO(ERR_PROT_MOP, t.vm, ABSOLUTE, READ, mem2, 2048, GADDR_V(guest_0_page), KEY(2)); + +out: + kvm_vm_free(t.kvm_vm); +} + static void
guest_idle(void) { GUEST_SYNC(STAGE_INITED); /* for consistency's sake */ @@ -488,6 +580,37 @@ static void guest_idle(void) GUEST_SYNC(STAGE_IDLED); } +static void _test_errors_common(struct test_vcpu vcpu, enum mop_target target, int size) +{ + int rv; + + /* Bad size: */ + rv = ERR_MOP(vcpu, target, WRITE, mem1, -1, GADDR_V(mem1)); + TEST_ASSERT(rv == -1 && errno == E2BIG, "ioctl allows insane sizes"); + + /* Zero size: */ + rv = ERR_MOP(vcpu, target, WRITE, mem1, 0, GADDR_V(mem1)); + TEST_ASSERT(rv == -1 && (errno == EINVAL || errno == ENOMEM), + "ioctl allows 0 as size"); + + /* Bad flags: */ + rv = ERR_MOP(vcpu, target, WRITE, mem1, size, GADDR_V(mem1), SET_FLAGS(-1)); + TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl allows all flags"); + + /* Bad guest address: */ + rv = ERR_MOP(vcpu, target, WRITE, mem1, size, GADDR((void *)~0xfffUL), CHECK_ONLY); + TEST_ASSERT(rv > 0, "ioctl does not report bad guest memory access"); + + /* Bad host address: */ + rv = ERR_MOP(vcpu, target, WRITE, 0, size, GADDR_V(mem1)); + TEST_ASSERT(rv == -1 && errno == EFAULT, + "ioctl does not report bad host memory address"); + + /* Bad key: */ + rv = ERR_MOP(vcpu, target, WRITE, mem1, size, GADDR_V(mem1), KEY(17)); + TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl allows invalid key"); +} + static void test_errors(void) { struct test_default t = test_default_init(guest_idle); @@ -495,31 +618,15 @@ static void test_errors(void) HOST_SYNC(t.vcpu, STAGE_INITED); - /* Bad size: */ - rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, mem1, -1, GADDR_V(mem1)); - TEST_ASSERT(rv == -1 && errno == E2BIG, "ioctl allows insane sizes"); - - /* Zero size: */ - rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, mem1, 0, GADDR_V(mem1)); - TEST_ASSERT(rv == -1 && (errno == EINVAL || errno == ENOMEM), - "ioctl allows 0 as size"); - - /* Bad flags: */ - rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, mem1, t.size, GADDR_V(mem1), SET_FLAGS(-1)); - TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl allows all flags"); + _test_errors_common(t.vcpu, LOGICAL, t.size); + _test_errors_common(t.vm, ABSOLUTE, t.size); /* Bad operation: */ rv = ERR_MOP(t.vcpu, INVALID, WRITE, mem1, t.size, GADDR_V(mem1)); TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl allows bad operations"); - - /* Bad guest address: */ - rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, mem1, t.size, GADDR((void *)~0xfffUL), CHECK_ONLY); - TEST_ASSERT(rv > 0, "ioctl does not report bad guest memory access"); - - /* Bad host address: */ - rv = ERR_MOP(t.vcpu, LOGICAL, WRITE, 0, t.size, GADDR_V(mem1)); - TEST_ASSERT(rv == -1 && errno == EFAULT, - "ioctl does not report bad host memory address"); + /* virtual addresses are not translated when passing INVALID */ + rv = ERR_MOP(t.vm, INVALID, WRITE, mem1, PAGE_SIZE, GADDR(0)); + TEST_ASSERT(rv == -1 && errno == EINVAL, "ioctl allows bad operations"); /* Bad access register: */ t.run->psw_mask &= ~(3UL << (63 - 17)); @@ -560,6 +667,10 @@ int main(int argc, char *argv[]) test_copy_key_storage_prot_override(); test_copy_key_fetch_prot(); test_copy_key_fetch_prot_override(); + test_errors_key(); + test_errors_key_storage_prot_override(); + test_errors_key_fetch_prot_override_not_enabled(); + test_errors_key_fetch_prot_override_enabled(); } else { print_skip("storage key memop extension not supported"); }
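[Editor's aside, not part of the series] For readers following the MOP/ERR_MOP and CHECK_N_DO wrappers exercised by these tests, the sketch below shows roughly what one key-checked memop boils down to once the selftest macros are expanded: a KVM_S390_MEM_OP ioctl issued against a vCPU file descriptor, first as a dry run with the check-only flag and then for real. This is an illustrative sketch under assumptions, not code from the patches: the vcpu_fd and the helper name read_with_key are hypothetical, and the .key field plus the KVM_S390_MEMOP_F_SKEY_PROTECTION flag are the additions provided by the storage key extension (KVM_CAP_S390_MEM_OP_EXTENSION) that the tests probe for.

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Sketch only: a key-checked logical read done the way CHECK_N_DO(MOP, ...)
 * does it -- one dry run with CHECK_ONLY, then the real access.
 * 'vcpu_fd' is a hypothetical, already set up KVM vCPU file descriptor.
 * A return value > 0 is the program interruption code (4 = protection),
 * which is what ERR_PROT_MOP asserts on above.
 */
static int read_with_key(int vcpu_fd, uint64_t gaddr, void *buf,
			 uint32_t size, uint8_t key)
{
	struct kvm_s390_mem_op ksmo = {
		.gaddr = gaddr,
		.size  = size,
		.op    = KVM_S390_MEMOP_LOGICAL_READ,
		.buf   = (uint64_t)(uintptr_t)buf,
		.key   = key,
		.flags = KVM_S390_MEMOP_F_SKEY_PROTECTION |
			 KVM_S390_MEMOP_F_CHECK_ONLY,
	};
	int rv;

	rv = ioctl(vcpu_fd, KVM_S390_MEM_OP, &ksmo);	/* check only */
	if (rv)
		return rv;

	ksmo.flags &= ~KVM_S390_MEMOP_F_CHECK_ONLY;	/* now actually copy */
	return ioctl(vcpu_fd, KVM_S390_MEM_OP, &ksmo);
}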