From patchwork Thu Oct 12 10:41:27 2017
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 10001633
From: Christoffer Dall <christoffer.dall@linaro.org>
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Cc: Marc Zyngier, Christoffer Dall, Shih-Wei Li, kvm@vger.kernel.org
Subject: [PATCH 23/37] KVM: arm64: Prepare to handle traps on deferred VM sysregs
Date: Thu, 12 Oct 2017 12:41:27 +0200
Message-Id: <20171012104141.26902-24-christoffer.dall@linaro.org>
In-Reply-To: <20171012104141.26902-1-christoffer.dall@linaro.org>
References: <20171012104141.26902-1-christoffer.dall@linaro.org>

When we defer the save/restore of system registers to vcpu_load and
vcpu_put, we need to take care of the emulation code that handles traps
to these registers, since simply reading the memory array will return
stale data.

Therefore, introduce two functions to directly read/write the registers
from the physical CPU when we're on a VHE system that has loaded the
system registers onto the physical CPU.
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
---
 arch/arm64/include/asm/kvm_host.h |  4 +++
 arch/arm64/kvm/sys_regs.c         | 54 ++++++++++++++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 56 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 9f5761f..dcded44 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -278,6 +278,10 @@ struct kvm_vcpu_arch {
 
 	/* Detect first run of a vcpu */
 	bool has_run_once;
+
+	/* True when deferrable sysregs are loaded on the physical CPU,
+	 * see kvm_vcpu_load_sysregs and kvm_vcpu_put_sysregs. */
+	bool sysregs_loaded_on_cpu;
 };
 
 #define vcpu_gp_regs(v)		(&(v)->arch.ctxt.gp_regs)
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index dbe35fd..f7887dd 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -34,6 +34,7 @@
 #include <asm/kvm_coproc.h>
 #include <asm/kvm_emulate.h>
 #include <asm/kvm_host.h>
+#include <asm/kvm_hyp.h>
 #include <asm/kvm_mmu.h>
 #include <asm/perf_event.h>
 #include <asm/sysreg.h>
@@ -110,8 +111,57 @@ static bool access_dcsw(struct kvm_vcpu *vcpu,
 	return true;
 }
 
+static u64 read_deferrable_vm_reg(struct kvm_vcpu *vcpu, int reg)
+{
+	if (vcpu->arch.sysregs_loaded_on_cpu) {
+		switch (reg) {
+		case SCTLR_EL1:		return read_sysreg_el1(sctlr);
+		case TTBR0_EL1:		return read_sysreg_el1(ttbr0);
+		case TTBR1_EL1:		return read_sysreg_el1(ttbr1);
+		case TCR_EL1:		return read_sysreg_el1(tcr);
+		case ESR_EL1:		return read_sysreg_el1(esr);
+		case FAR_EL1:		return read_sysreg_el1(far);
+		case AFSR0_EL1:		return read_sysreg_el1(afsr0);
+		case AFSR1_EL1:		return read_sysreg_el1(afsr1);
+		case MAIR_EL1:		return read_sysreg_el1(mair);
+		case AMAIR_EL1:		return read_sysreg_el1(amair);
+		case CONTEXTIDR_EL1:	return read_sysreg_el1(contextidr);
+		case DACR32_EL2:	return read_sysreg(dacr32_el2);
+		case IFSR32_EL2:	return read_sysreg(ifsr32_el2);
+		default:		BUG();
+		}
+	}
+
+	return vcpu_sys_reg(vcpu, reg);
+}
+
+static void write_deferrable_vm_reg(struct kvm_vcpu *vcpu, int reg, u64 val)
+{
+	if (vcpu->arch.sysregs_loaded_on_cpu) {
+		switch (reg) {
+		case SCTLR_EL1:		write_sysreg_el1(val, sctlr);	return;
+		case TTBR0_EL1:		write_sysreg_el1(val, ttbr0);	return;
+		case TTBR1_EL1:		write_sysreg_el1(val, ttbr1);	return;
+		case TCR_EL1:		write_sysreg_el1(val, tcr);	return;
+		case ESR_EL1:		write_sysreg_el1(val, esr);	return;
+		case FAR_EL1:		write_sysreg_el1(val, far);	return;
+		case AFSR0_EL1:		write_sysreg_el1(val, afsr0);	return;
+		case AFSR1_EL1:		write_sysreg_el1(val, afsr1);	return;
+		case MAIR_EL1:		write_sysreg_el1(val, mair);	return;
+		case AMAIR_EL1:		write_sysreg_el1(val, amair);	return;
+		case CONTEXTIDR_EL1:	write_sysreg_el1(val, contextidr); return;
+		case DACR32_EL2:	write_sysreg(val, dacr32_el2);	return;
+		case IFSR32_EL2:	write_sysreg(val, ifsr32_el2);	return;
+		default:		BUG();
+		}
+	}
+
+	vcpu_sys_reg(vcpu, reg) = val;
+}
+
 /*
  * Generic accessor for VM registers. Only called as long as HCR_TVM
+ *
  * is set. If the guest enables the MMU, we stop trapping the VM
  * sys_regs and leave it in complete control of the caches.
  */
@@ -132,14 +182,14 @@ static bool access_vm_reg(struct kvm_vcpu *vcpu,
 	if (!p->is_aarch32 || !p->is_32bit) {
 		val = p->regval;
 	} else {
-		val = vcpu_sys_reg(vcpu, reg);
+		val = read_deferrable_vm_reg(vcpu, reg);
 		if (r->reg % 2)
 			val = (p->regval << 32) | (u64)lower_32_bits(val);
 		else
 			val = ((u64)upper_32_bits(val) << 32) |
 				(u64)lower_32_bits(p->regval);
 	}
-	vcpu_sys_reg(vcpu, reg) = val;
+	write_deferrable_vm_reg(vcpu, reg, val);
 	kvm_toggle_cache(vcpu, was_enabled);
 
 	return true;