From patchwork Fri Jan 12 12:07:33 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 10160455
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Cc: kvm@vger.kernel.org, Marc Zyngier, Shih-Wei Li, Andrew Jones,
 Christoffer Dall
Subject: [PATCH v3 27/41] KVM: arm/arm64: Prepare to handle deferred
 save/restore of SPSR_EL1
Date: Fri, 12 Jan 2018 13:07:33 +0100
Message-Id: <20180112120747.27999-28-christoffer.dall@linaro.org>
In-Reply-To: <20180112120747.27999-1-christoffer.dall@linaro.org>
References: <20180112120747.27999-1-christoffer.dall@linaro.org>
X-Mailing-List: kvm@vger.kernel.org

SPSR_EL1 is not used by a VHE host kernel and can be deferred, but we
need to rework the accesses to this register to access the latest value
depending on whether or not guest system registers are loaded on
the CPU or only reside in memory.

The handling of accessing the various banked SPSRs for 32-bit VMs is a
bit clunky, but this will be improved in following patches which will
first prepare and subsequently implement deferred save/restore of the
32-bit registers, including the 32-bit SPSRs.

Signed-off-by: Christoffer Dall
---
 arch/arm/include/asm/kvm_emulate.h   | 12 ++++++++++-
 arch/arm/kvm/emulate.c               |  2 +-
 arch/arm64/include/asm/kvm_emulate.h | 41 +++++++++++++++++++++++++++++++-----
 arch/arm64/kvm/inject_fault.c        |  4 ++--
 virt/kvm/arm/aarch32.c               |  2 +-
 5 files changed, 51 insertions(+), 10 deletions(-)

diff --git a/arch/arm/include/asm/kvm_emulate.h b/arch/arm/include/asm/kvm_emulate.h
index d5e1b8bf6422..db8a09e9a16f 100644
--- a/arch/arm/include/asm/kvm_emulate.h
+++ b/arch/arm/include/asm/kvm_emulate.h
@@ -41,7 +41,17 @@ static inline unsigned long *vcpu_reg32(struct kvm_vcpu *vcpu, u8 reg_num)
 	return vcpu_reg(vcpu, reg_num);
 }
 
-unsigned long *vcpu_spsr(struct kvm_vcpu *vcpu);
+unsigned long *__vcpu_spsr(struct kvm_vcpu *vcpu);
+
+static inline unsigned long vcpu_read_spsr(struct kvm_vcpu *vcpu)
+{
+	return *__vcpu_spsr(vcpu);
+}
+
+static inline void vcpu_write_spsr(struct kvm_vcpu *vcpu, unsigned long v)
+{
+	*__vcpu_spsr(vcpu) = v;
+}
 
 static inline unsigned long vcpu_get_reg(struct kvm_vcpu *vcpu, u8 reg_num)
diff --git a/arch/arm/kvm/emulate.c b/arch/arm/kvm/emulate.c
index fa501bf437f3..9046b53d87c1 100644
--- a/arch/arm/kvm/emulate.c
+++ b/arch/arm/kvm/emulate.c
@@ -142,7 +142,7 @@ unsigned long *vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num)
 /*
  * Return the SPSR for the current mode of the virtual CPU.
  */
-unsigned long *vcpu_spsr(struct kvm_vcpu *vcpu)
+unsigned long *__vcpu_spsr(struct kvm_vcpu *vcpu)
 {
 	unsigned long mode = *vcpu_cpsr(vcpu) & MODE_MASK;
 	switch (mode) {
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index df1cb146750d..f0bc7c096fdc 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -26,6 +26,7 @@
 #include
 #include
+#include
 #include
 #include
 #include
@@ -131,13 +132,43 @@ static inline void vcpu_set_reg(struct kvm_vcpu *vcpu, u8 reg_num,
 	vcpu_gp_regs(vcpu)->regs.regs[reg_num] = val;
 }
 
-/* Get vcpu SPSR for current mode */
-static inline unsigned long *vcpu_spsr(const struct kvm_vcpu *vcpu)
+static inline unsigned long vcpu_read_spsr(const struct kvm_vcpu *vcpu)
 {
-	if (vcpu_mode_is_32bit(vcpu))
-		return vcpu_spsr32(vcpu);
+	unsigned long *p = (unsigned long *)&vcpu_gp_regs(vcpu)->spsr[KVM_SPSR_EL1];
+
+	if (vcpu_mode_is_32bit(vcpu)) {
+		unsigned long *p_32bit = vcpu_spsr32(vcpu);
+
+		/* KVM_SPSR_SVC aliases KVM_SPSR_EL1 */
+		if (p_32bit != (unsigned long *)p)
+			return *p_32bit;
+	}
+
+	if (vcpu->arch.sysregs_loaded_on_cpu)
+		return read_sysreg_el1(spsr);
+	else
+		return *p;
+}
 
-	return (unsigned long *)&vcpu_gp_regs(vcpu)->spsr[KVM_SPSR_EL1];
+static inline void vcpu_write_spsr(const struct kvm_vcpu *vcpu, unsigned long v)
+{
+	unsigned long *p = (unsigned long *)&vcpu_gp_regs(vcpu)->spsr[KVM_SPSR_EL1];
+
+	/* KVM_SPSR_SVC aliases KVM_SPSR_EL1 */
+	if (vcpu_mode_is_32bit(vcpu)) {
+		unsigned long *p_32bit = vcpu_spsr32(vcpu);
+
+		/* KVM_SPSR_SVC aliases KVM_SPSR_EL1 */
+		if (p_32bit != (unsigned long *)p) {
+			*p_32bit = v;
+			return;
+		}
+	}
+
+	if (vcpu->arch.sysregs_loaded_on_cpu)
+		write_sysreg_el1(v, spsr);
+	else
+		*p = v;
 }
 
 static inline bool vcpu_mode_priv(const struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
index 1e070943e7a6..c638593d305d 100644
--- a/arch/arm64/kvm/inject_fault.c
+++ b/arch/arm64/kvm/inject_fault.c
@@ -71,7 +71,7 @@ static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr
 
 	*vcpu_pc(vcpu) = get_except_vector(vcpu, except_type_sync);
 	*vcpu_cpsr(vcpu) = PSTATE_FAULT_BITS_64;
-	*vcpu_spsr(vcpu) = cpsr;
+	vcpu_write_spsr(vcpu, cpsr);
 
 	vcpu_write_sys_reg(vcpu, FAR_EL1, addr);
 
@@ -106,7 +106,7 @@ static void inject_undef64(struct kvm_vcpu *vcpu)
 
 	*vcpu_pc(vcpu) = get_except_vector(vcpu, except_type_sync);
 	*vcpu_cpsr(vcpu) = PSTATE_FAULT_BITS_64;
-	*vcpu_spsr(vcpu) = cpsr;
+	vcpu_write_spsr(vcpu, cpsr);
 
 	/*
 	 * Build an unknown exception, depending on the instruction
diff --git a/virt/kvm/arm/aarch32.c b/virt/kvm/arm/aarch32.c
index 8bc479fa37e6..efc84cbe8277 100644
--- a/virt/kvm/arm/aarch32.c
+++ b/virt/kvm/arm/aarch32.c
@@ -178,7 +178,7 @@ static void prepare_fault32(struct kvm_vcpu *vcpu, u32 mode, u32 vect_offset)
 	*vcpu_cpsr(vcpu) = cpsr;
 
 	/* Note: These now point to the banked copies */
-	*vcpu_spsr(vcpu) = new_spsr_value;
+	vcpu_write_spsr(vcpu, new_spsr_value);
 	*vcpu_reg32(vcpu, 14) = *vcpu_pc(vcpu) + return_offset;
 
 	/* Branch to exception vector */