From patchwork Thu Oct 12 10:41:26 2017
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 10001543
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Cc: kvm@vger.kernel.org, Marc Zyngier, Shih-Wei Li, Christoffer Dall
Subject: [PATCH 22/37] KVM: arm64: Change 32-bit handling of VM system registers
Date: Thu, 12 Oct 2017 12:41:26 +0200
Message-Id: <20171012104141.26902-23-christoffer.dall@linaro.org>
In-Reply-To: <20171012104141.26902-1-christoffer.dall@linaro.org>
References: <20171012104141.26902-1-christoffer.dall@linaro.org>

We currently handle 32-bit accesses to trapped VM system registers using
the 32-bit index into the coproc array on the vcpu structure, which is a
union of the coproc array and the sysreg array.

Since all the 32-bit coproc indices are created to correspond to the
architectural mapping between 64-bit system registers and 32-bit
coprocessor registers, and because the AArch64 system registers are
double the size of the AArch32 coprocessor registers, we can always find
the system register entry that we must update by dividing the 32-bit
coproc index by 2.

This is going to make our lives much easier when we have to start
accessing system registers that use deferred save/restore and might have
to be read directly from the physical CPU.

Signed-off-by: Christoffer Dall
Reviewed-by: Andrew Jones
---
 arch/arm64/include/asm/kvm_host.h |  8 --------
 arch/arm64/kvm/sys_regs.c         | 20 +++++++++++++++-----
 2 files changed, 15 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 5e09eb9..9f5761f 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -289,14 +289,6 @@ struct kvm_vcpu_arch {
 #define vcpu_cp14(v,r)		((v)->arch.ctxt.copro[(r)])
 #define vcpu_cp15(v,r)		((v)->arch.ctxt.copro[(r)])
 
-#ifdef CONFIG_CPU_BIG_ENDIAN
-#define vcpu_cp15_64_high(v,r)	vcpu_cp15((v),(r))
-#define vcpu_cp15_64_low(v,r)	vcpu_cp15((v),(r) + 1)
-#else
-#define vcpu_cp15_64_high(v,r)	vcpu_cp15((v),(r) + 1)
-#define vcpu_cp15_64_low(v,r)	vcpu_cp15((v),(r))
-#endif
-
 struct kvm_vm_stat {
 	ulong remote_tlb_flush;
 };
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index bb0e41b..dbe35fd 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -120,16 +120,26 @@ static bool access_vm_reg(struct kvm_vcpu *vcpu,
 			  const struct sys_reg_desc *r)
 {
 	bool was_enabled = vcpu_has_cache_enabled(vcpu);
+	u64 val;
+	int reg = r->reg;
 
 	BUG_ON(!p->is_write);
 
-	if (!p->is_aarch32) {
-		vcpu_sys_reg(vcpu, r->reg) = p->regval;
+	/* See the 32bit mapping in kvm_host.h */
+	if (p->is_aarch32)
+		reg = r->reg / 2;
+
+	if (!p->is_aarch32 || !p->is_32bit) {
+		val = p->regval;
 	} else {
-		if (!p->is_32bit)
-			vcpu_cp15_64_high(vcpu, r->reg) = upper_32_bits(p->regval);
-		vcpu_cp15_64_low(vcpu, r->reg) = lower_32_bits(p->regval);
+		val = vcpu_sys_reg(vcpu, reg);
+		if (r->reg % 2)
+			val = (p->regval << 32) | (u64)lower_32_bits(val);
+		else
+			val = ((u64)upper_32_bits(val) << 32) |
+				(u64)lower_32_bits(p->regval);
 	}
+	vcpu_sys_reg(vcpu, reg) = val;
 
 	kvm_toggle_cache(vcpu, was_enabled);
 	return true;
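
Not part of the patch: a minimal standalone sketch of the index arithmetic
the new access_vm_reg() code relies on, assuming each 64-bit system register
backs two 32-bit coprocessor slots, so coproc index / 2 selects the sysreg
entry and the low bit of the coproc index selects which half a 32-bit write
replaces. All names below (sys_regs, write_cp15_32) are hypothetical
stand-ins for the vcpu context, not kernel APIs.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for the vcpu's 64-bit system register array. */
static uint64_t sys_regs[16];

/*
 * Merge a 32-bit coprocessor write into the 64-bit system register that
 * backs it, mirroring the arithmetic in the patched access_vm_reg():
 * the sysreg index is cp_idx / 2, and cp_idx's low bit picks the half.
 */
static void write_cp15_32(int cp_idx, uint32_t regval)
{
	int reg = cp_idx / 2;
	uint64_t val = sys_regs[reg];

	if (cp_idx % 2)		/* odd slot: new value becomes the upper half */
		val = ((uint64_t)regval << 32) | (uint32_t)val;
	else			/* even slot: new value becomes the lower half */
		val = (val & 0xffffffff00000000ULL) | regval;

	sys_regs[reg] = val;
}

int main(void)
{
	sys_regs[3] = 0x1111111122222222ULL;
	write_cp15_32(6, 0xdeadbeef);	/* even coproc index 6 -> sys_regs[3], lower half */
	write_cp15_32(7, 0xcafef00d);	/* odd coproc index 7  -> sys_regs[3], upper half */
	printf("sys_regs[3] = %016llx\n", (unsigned long long)sys_regs[3]);
	/* prints: sys_regs[3] = cafef00ddeadbeef */
	return 0;
}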