From patchwork Fri Apr 12 16:28:10 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Dave Martin
X-Patchwork-Id: 10898793
From: Dave Martin
To: kvmarm@lists.cs.columbia.edu
Subject: [PATCH 06/14] KVM: arm64/sve: Miscellaneous tidyups in guest.c
Date: Fri, 12 Apr 2019 17:28:10 +0100
Message-Id: <1555086498-26691-7-git-send-email-Dave.Martin@arm.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1555086498-26691-1-git-send-email-Dave.Martin@arm.com>
References: <1555086498-26691-1-git-send-email-Dave.Martin@arm.com>
Cc: Peter Maydell, Okamoto Takayuki, Christoffer Dall, Ard Biesheuvel,
 Marc Zyngier, Catalin Marinas, Will Deacon, Andrew Jones, Zhang Lei,
 Julien Grall, Alex Bennée, linux-arm-kernel@lists.infradead.org

* Remove a few redundant blank lines that are stylistically inconsistent
  with code already in guest.c and are just taking up space.

* Delete a couple of pointless empty default cases from switch statements
  whose behaviour is otherwise obvious anyway.

* Fix some typos and consolidate some redundantly duplicated comments.

* Respell the slice index check in sve_reg_to_region() as "> 0" to be
  more consistent with what is logically being checked here (i.e., "is
  the slice index too large"), even though we don't try to cope with
  multiple slices yet.

No functional change.

Suggested-by: Andrew Jones
Signed-off-by: Dave Martin
---
 arch/arm64/kvm/guest.c | 18 +++++-------------
 1 file changed, 5 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 2e449e1..f5ff7ae 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -290,9 +290,10 @@ static int set_sve_vls(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 #define KVM_SVE_PREG_SIZE KVM_REG_SIZE(KVM_REG_ARM64_SVE_PREG(0, 0))
 
 /*
- * number of register slices required to cover each whole SVE register on vcpu
- * NOTE: If you are tempted to modify this, you must also to rework
- * sve_reg_to_region() to match:
+ * Number of register slices required to cover each whole SVE register.
+ * NOTE: Only the first slice ever exists, for now.
+ * If you are tempted to modify this, you must also rework sve_reg_to_region()
+ * to match:
  */
 #define vcpu_sve_slices(vcpu) 1
 
@@ -334,8 +335,7 @@ static int sve_reg_to_region(struct sve_state_reg_region *region,
         /* Verify that we match the UAPI header: */
         BUILD_BUG_ON(SVE_NUM_SLICES != KVM_ARM64_SVE_MAX_SLICES);
 
-        /* Only the first slice ever exists, for now: */
-        if ((reg->id & SVE_REG_SLICE_MASK) != 0)
+        if ((reg->id & SVE_REG_SLICE_MASK) > 0)
                 return -ENOENT;
 
         vq = sve_vq_from_vl(vcpu->arch.sve_max_vl);
@@ -520,7 +520,6 @@ static int get_timer_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 
 static unsigned long num_sve_regs(const struct kvm_vcpu *vcpu)
 {
-        /* Only the first slice ever exists, for now */
         const unsigned int slices = vcpu_sve_slices(vcpu);
 
         if (!vcpu_has_sve(vcpu))
@@ -536,7 +535,6 @@ static unsigned long num_sve_regs(const struct kvm_vcpu *vcpu)
 static int copy_sve_reg_indices(const struct kvm_vcpu *vcpu,
                                 u64 __user *uindices)
 {
-        /* Only the first slice ever exists, for now */
         const unsigned int slices = vcpu_sve_slices(vcpu);
         u64 reg;
         unsigned int i, n;
@@ -555,7 +553,6 @@ static int copy_sve_reg_indices(const struct kvm_vcpu *vcpu,
         reg = KVM_REG_ARM64_SVE_VLS;
         if (put_user(reg, uindices++))
                 return -EFAULT;
-
         ++num_regs;
 
         for (i = 0; i < slices; i++) {
@@ -563,7 +560,6 @@
                         reg = KVM_REG_ARM64_SVE_ZREG(n, i);
                         if (put_user(reg, uindices++))
                                 return -EFAULT;
-
                         num_regs++;
                 }
 
@@ -571,14 +567,12 @@
                         reg = KVM_REG_ARM64_SVE_PREG(n, i);
                         if (put_user(reg, uindices++))
                                 return -EFAULT;
-
                         num_regs++;
                 }
 
                 reg = KVM_REG_ARM64_SVE_FFR(i);
                 if (put_user(reg, uindices++))
                         return -EFAULT;
-
                 num_regs++;
         }
 
@@ -645,7 +639,6 @@ int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
         case KVM_REG_ARM_CORE: return get_core_reg(vcpu, reg);
         case KVM_REG_ARM_FW: return kvm_arm_get_fw_reg(vcpu, reg);
         case KVM_REG_ARM64_SVE: return get_sve_reg(vcpu, reg);
-        default: break; /* fall through */
         }
 
         if (is_timer_reg(reg->id))
@@ -664,7 +657,6 @@ int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
         case KVM_REG_ARM_CORE: return set_core_reg(vcpu, reg);
         case KVM_REG_ARM_FW: return kvm_arm_set_fw_reg(vcpu, reg);
         case KVM_REG_ARM64_SVE: return set_sve_reg(vcpu, reg);
-        default: break; /* fall through */
         }
 
         if (is_timer_reg(reg->id))
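
A side note on the "> 0" respelling for anyone reading along: since reg->id is a
u64, the masked slice field is unsigned, so "> 0" and "!= 0" accept and reject
exactly the same register IDs; the change only makes the test read as "is the
slice index too large". The short user-space sketch below illustrates this. The
bit layout it assumes (slice index in bits [4:0] of the register ID) mirrors the
SVE_REG_SLICE_* definitions in guest.c, but the constants and the check_slice()
helper here are illustrative stand-ins, not part of the patch.

/*
 * Illustrative user-space sketch only (not kernel code): shows that
 * "(id & SVE_REG_SLICE_MASK) > 0" behaves exactly like "!= 0" because
 * the masked value is unsigned.  The field layout is an assumption
 * mirroring the SVE_REG_SLICE_* macros in guest.c.
 */
#include <stdint.h>
#include <stdio.h>

#define SVE_REG_SLICE_SHIFT 0
#define SVE_REG_SLICE_BITS  5
#define SVE_REG_SLICE_MASK \
        (((UINT64_C(1) << SVE_REG_SLICE_BITS) - 1) << SVE_REG_SLICE_SHIFT)

/* Reject any register ID whose slice index is nonzero (i.e. too large). */
static int check_slice(uint64_t id)
{
        if ((id & SVE_REG_SLICE_MASK) > 0)      /* unsigned: same as != 0 */
                return -1;                      /* stand-in for -ENOENT */
        return 0;
}

int main(void)
{
        /* Hypothetical register IDs differing only in the slice field. */
        printf("slice 0  -> %d\n", check_slice(UINT64_C(0x00)));  /* accepted */
        printf("slice 1  -> %d\n", check_slice(UINT64_C(0x01)));  /* rejected */
        printf("slice 31 -> %d\n", check_slice(UINT64_C(0x1f)));  /* rejected */
        return 0;
}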