From patchwork Thu Dec  7 17:06:16 2017
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 10100183
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Cc: kvm@vger.kernel.org, Marc Zyngier, Shih-Wei Li, Andrew Jones, Christoffer Dall
Subject: [PATCH v2 22/36] KVM: arm64: Prepare to handle traps on deferred VM sysregs
Date: Thu, 7 Dec 2017 18:06:16 +0100
Message-Id: <20171207170630.592-23-christoffer.dall@linaro.org>
X-Mailer: git-send-email 2.14.2
In-Reply-To: <20171207170630.592-1-christoffer.dall@linaro.org>
References: <20171207170630.592-1-christoffer.dall@linaro.org>
X-Mailing-List: kvm@vger.kernel.org

When we defer the save/restore of system registers to vcpu_load and
vcpu_put, we need to take care of the emulation code that handles traps
to these registers, since simply reading the memory array will return
stale data.
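The staleness problem described above can be illustrated with a minimal
userspace model (hypothetical code, not the kernel implementation: the
struct, field names, and single-register array are stand-ins for the real
vcpu context). Once the register is "loaded on the CPU", the memory-backed
shadow copy no longer tracks the live value, so emulation must dispatch on
the loaded flag:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical model: one emulated register backed both by an
 * in-memory array slot and by a "physical CPU" copy that is live
 * once the vcpu's sysregs have been loaded. */
struct vcpu_model {
	uint64_t sysreg_array[1];   /* memory-backed shadow copy */
	uint64_t hw_reg;            /* stands in for the real EL1 register */
	bool sysregs_loaded_on_cpu; /* mirrors vcpu->arch.sysregs_loaded_on_cpu */
};

/* Read the register from wherever the current value actually lives. */
static uint64_t model_read(struct vcpu_model *v)
{
	if (v->sysregs_loaded_on_cpu)
		return v->hw_reg;        /* the array would be stale here */
	return v->sysreg_array[0];
}

/* Write to wherever the value will be picked up from. */
static void model_write(struct vcpu_model *v, uint64_t val)
{
	if (v->sysregs_loaded_on_cpu)
		v->hw_reg = val;         /* writing the array would be lost */
	else
		v->sysreg_array[0] = val;
}
```

With the flag set, a write followed by a read round-trips through the
"hardware" copy while the shadow array keeps its old, stale value.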
Therefore, introduce two functions to directly read/write the registers
from the physical CPU when we're on a VHE system that has loaded the
system registers onto the physical CPU.

Signed-off-by: Christoffer Dall
---

Notes:
    Changes since v1:

     - Removed spurious white space

 arch/arm64/include/asm/kvm_host.h |  4 +++
 arch/arm64/kvm/sys_regs.c         | 53 +++++++++++++++++++++++++++++++++++++--
 2 files changed, 55 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index de0d55b30b61..f6afe685a280 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -279,6 +279,10 @@ struct kvm_vcpu_arch {

 	/* Detect first run of a vcpu */
 	bool has_run_once;
+
+	/* True when deferrable sysregs are loaded on the physical CPU,
+	 * see kvm_vcpu_load_sysregs and kvm_vcpu_put_sysregs. */
+	bool sysregs_loaded_on_cpu;
 };

 #define vcpu_gp_regs(v)		(&(v)->arch.ctxt.gp_regs)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 62c12ab9e6c4..80adbec933de 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -35,6 +35,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -111,6 +112,54 @@ static bool access_dcsw(struct kvm_vcpu *vcpu,
 	return true;
 }

+static u64 read_deferrable_vm_reg(struct kvm_vcpu *vcpu, int reg)
+{
+	if (vcpu->arch.sysregs_loaded_on_cpu) {
+		switch (reg) {
+		case SCTLR_EL1:		return read_sysreg_el1(sctlr);
+		case TTBR0_EL1:		return read_sysreg_el1(ttbr0);
+		case TTBR1_EL1:		return read_sysreg_el1(ttbr1);
+		case TCR_EL1:		return read_sysreg_el1(tcr);
+		case ESR_EL1:		return read_sysreg_el1(esr);
+		case FAR_EL1:		return read_sysreg_el1(far);
+		case AFSR0_EL1:		return read_sysreg_el1(afsr0);
+		case AFSR1_EL1:		return read_sysreg_el1(afsr1);
+		case MAIR_EL1:		return read_sysreg_el1(mair);
+		case AMAIR_EL1:		return read_sysreg_el1(amair);
+		case CONTEXTIDR_EL1:	return read_sysreg_el1(contextidr);
+		case DACR32_EL2:	return read_sysreg(dacr32_el2);
+		case IFSR32_EL2:	return read_sysreg(ifsr32_el2);
+		default:		BUG();
+		}
+	}
+
+	return vcpu_sys_reg(vcpu, reg);
+}
+
+static void write_deferrable_vm_reg(struct kvm_vcpu *vcpu, int reg, u64 val)
+{
+	if (vcpu->arch.sysregs_loaded_on_cpu) {
+		switch (reg) {
+		case SCTLR_EL1:		write_sysreg_el1(val, sctlr);	return;
+		case TTBR0_EL1:		write_sysreg_el1(val, ttbr0);	return;
+		case TTBR1_EL1:		write_sysreg_el1(val, ttbr1);	return;
+		case TCR_EL1:		write_sysreg_el1(val, tcr);	return;
+		case ESR_EL1:		write_sysreg_el1(val, esr);	return;
+		case FAR_EL1:		write_sysreg_el1(val, far);	return;
+		case AFSR0_EL1:		write_sysreg_el1(val, afsr0);	return;
+		case AFSR1_EL1:		write_sysreg_el1(val, afsr1);	return;
+		case MAIR_EL1:		write_sysreg_el1(val, mair);	return;
+		case AMAIR_EL1:		write_sysreg_el1(val, amair);	return;
+		case CONTEXTIDR_EL1:	write_sysreg_el1(val, contextidr); return;
+		case DACR32_EL2:	write_sysreg(val, dacr32_el2);	return;
+		case IFSR32_EL2:	write_sysreg(val, ifsr32_el2);	return;
+		default:		BUG();
+		}
+	}
+
+	vcpu_sys_reg(vcpu, reg) = val;
+}
+
 /*
  * Generic accessor for VM registers. Only called as long as HCR_TVM
  * is set. If the guest enables the MMU, we stop trapping the VM
@@ -133,14 +182,14 @@ static bool access_vm_reg(struct kvm_vcpu *vcpu,
 	if (!p->is_aarch32 || !p->is_32bit) {
 		val = p->regval;
 	} else {
-		val = vcpu_sys_reg(vcpu, reg);
+		val = read_deferrable_vm_reg(vcpu, reg);
 		if (r->reg % 2)
 			val = (p->regval << 32) | (u64)lower_32_bits(val);
 		else
 			val = ((u64)upper_32_bits(val) << 32) |
 				(u64)lower_32_bits(p->regval);
 	}
-	vcpu_sys_reg(vcpu, reg) = val;
+	write_deferrable_vm_reg(vcpu, reg, val);

 	kvm_toggle_cache(vcpu, was_enabled);

 	return true;
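The access_vm_reg hunk at the end merges a 32-bit AArch32 write into one
half of the backing 64-bit EL1 register: odd-numbered registers select the
upper 32 bits, even-numbered ones the lower 32 bits. A minimal standalone
model of just that merge logic (hypothetical helper name; `cur` stands in
for the value read back by read_deferrable_vm_reg and `regval` for
p->regval):

```c
#include <assert.h>
#include <stdint.h>

/* Merge a 32-bit AArch32 guest write (regval) into the 64-bit EL1
 * register whose current value is cur.  Odd register numbers map to
 * the upper half, even numbers to the lower half, mirroring the
 * r->reg % 2 test in access_vm_reg. */
static uint64_t merge_aarch32_write(uint64_t cur, uint32_t regval, int reg)
{
	if (reg % 2)
		/* odd: new value becomes the upper 32 bits, keep lower */
		return ((uint64_t)regval << 32) | (uint32_t)cur;
	/* even: keep upper 32 bits, new value becomes the lower */
	return (cur & 0xffffffff00000000ULL) | regval;
}
```

This mirrors why the patch must read the live value first: the untouched
half of the 64-bit register has to come from the current (possibly
hardware-resident) contents, not from a stale memory copy.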