From patchwork Fri Jan 12 12:07:32 2018
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 10160415
From: Christoffer Dall
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Cc: kvm@vger.kernel.org, Marc Zyngier, Shih-Wei Li, Andrew Jones,
    Christoffer Dall
Subject: [PATCH v3 26/41] KVM: arm64: Introduce framework for accessing deferred sysregs
Date: Fri, 12 Jan 2018 13:07:32 +0100
Message-Id: <20180112120747.27999-27-christoffer.dall@linaro.org>
X-Mailer: git-send-email 2.14.2
In-Reply-To: <20180112120747.27999-1-christoffer.dall@linaro.org>
References: <20180112120747.27999-1-christoffer.dall@linaro.org>
X-Mailing-List: kvm@vger.kernel.org

We are about to defer saving and restoring some groups of system
registers to vcpu_put and vcpu_load on supported systems.  This means
that we need some infrastructure to access system registers which
supports either accessing the memory backing of the register or
accessing the system register directly, depending on the state of the
system when we access the register.

We do this by defining a set of read/write accessors for each system
register, and letting each system register be defined as "immediate"
or "deferrable".  Immediate registers are always saved/restored in the
world-switch path, but deferrable registers are only saved/restored in
vcpu_put/vcpu_load when supported, and sysregs_loaded_on_cpu will be
set in that case.

Note that we don't use the deferred mechanism yet in this patch; we
only introduce the infrastructure.  This makes the subsequent patches,
where it becomes clear which registers are deferred, easier to review.

[ Most of this logic was contributed by Marc Zyngier ]

Signed-off-by: Marc Zyngier
Signed-off-by: Christoffer Dall
Reviewed-by: Julien Thierry
---
 arch/arm64/include/asm/kvm_host.h |   8 +-
 arch/arm64/kvm/sys_regs.c         | 160 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 166 insertions(+), 2 deletions(-)
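As a sketch of how the new accessors are meant to be used (not taken
from this patch; SCTLR_EL1 and SCTLR_ELx_EE serve only as example
register and bit here), a caller that previously poked the backing
store through the old macros now goes through the accessor table,
which transparently picks between the in-memory copy and the live
hardware register:

	/* hypothetical caller, e.g. in a trap/emulation handler */
	u64 val = vcpu_read_sys_reg(vcpu, SCTLR_EL1);
	vcpu_write_sys_reg(vcpu, SCTLR_EL1, val | SCTLR_ELx_EE);
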
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 91272c35cc36..4b5ef82f6bdb 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -281,6 +281,10 @@ struct kvm_vcpu_arch {
 
 	/* Detect first run of a vcpu */
 	bool has_run_once;
+
+	/* True when deferrable sysregs are loaded on the physical CPU,
+	 * see kvm_vcpu_load_sysregs and kvm_vcpu_put_sysregs. */
+	bool sysregs_loaded_on_cpu;
 };
 
 #define vcpu_gp_regs(v)		(&(v)->arch.ctxt.gp_regs)
@@ -293,8 +297,8 @@ struct kvm_vcpu_arch {
  */
 #define __vcpu_sys_reg(v,r)	((v)->arch.ctxt.sys_regs[(r)])
 
-#define vcpu_read_sys_reg(v,r)	__vcpu_sys_reg(v,r)
-#define vcpu_write_sys_reg(v,r,n)	do { __vcpu_sys_reg(v,r) = n; } while (0)
+u64 vcpu_read_sys_reg(struct kvm_vcpu *vcpu, int reg);
+void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, int reg, u64 val);
 
 /*
  * CP14 and CP15 live in the same array, as they are backed by the
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 96398d53b462..9d353a6a55c9 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -35,6 +35,7 @@
 #include <...>
 #include <...>
 #include <...>
+#include <...>
 #include <...>
 #include <...>
 #include <...>
@@ -76,6 +77,165 @@ static bool write_to_read_only(struct kvm_vcpu *vcpu,
 	return false;
 }
 
+struct sys_reg_accessor {
+	u64 (*rdsr)(struct kvm_vcpu *, int);
+	void (*wrsr)(struct kvm_vcpu *, int, u64);
+};
+
+#define DECLARE_IMMEDIATE_SR(i)						\
+	static u64 __##i##_read(struct kvm_vcpu *vcpu, int r)		\
+	{								\
+		return __vcpu_sys_reg(vcpu, r);				\
+	}								\
+									\
+	static void __##i##_write(struct kvm_vcpu *vcpu, int r, u64 v)	\
+	{								\
+		__vcpu_sys_reg(vcpu, r) = v;				\
+	}								\
+
+#define DECLARE_DEFERRABLE_SR(i, s)					\
+	static u64 __##i##_read(struct kvm_vcpu *vcpu, int r)		\
+	{								\
+		if (vcpu->arch.sysregs_loaded_on_cpu) {			\
+			WARN_ON(kvm_arm_get_running_vcpu() != vcpu);	\
+			return read_sysreg_s((s));			\
+		}							\
+		return __vcpu_sys_reg(vcpu, r);				\
+	}								\
+									\
+	static void __##i##_write(struct kvm_vcpu *vcpu, int r, u64 v)	\
+	{								\
+		if (vcpu->arch.sysregs_loaded_on_cpu) {			\
+			WARN_ON(kvm_arm_get_running_vcpu() != vcpu);	\
+			write_sysreg_s(v, (s));				\
+		} else {						\
+			__vcpu_sys_reg(vcpu, r) = v;			\
+		}							\
+	}								\
+
+
+#define SR_HANDLER_RANGE(i,e)						\
+	[i ... e] = (struct sys_reg_accessor) {				\
+		.rdsr = __##i##_read,					\
+		.wrsr = __##i##_write,					\
+	}
+
+#define SR_HANDLER(i)	SR_HANDLER_RANGE(i, i)
+
+static void bad_sys_reg(int reg)
+{
+	WARN_ONCE(1, "Bad system register access %d\n", reg);
+}
+
+static u64 __default_read_sys_reg(struct kvm_vcpu *vcpu, int reg)
+{
+	bad_sys_reg(reg);
+	return 0;
+}
+
+static void __default_write_sys_reg(struct kvm_vcpu *vcpu, int reg, u64 val)
+{
+	bad_sys_reg(reg);
+}
+
+/* Ordered as in enum vcpu_sysreg */
+DECLARE_IMMEDIATE_SR(MPIDR_EL1);
+DECLARE_IMMEDIATE_SR(CSSELR_EL1);
+DECLARE_IMMEDIATE_SR(SCTLR_EL1);
+DECLARE_IMMEDIATE_SR(ACTLR_EL1);
+DECLARE_IMMEDIATE_SR(CPACR_EL1);
+DECLARE_IMMEDIATE_SR(TTBR0_EL1);
+DECLARE_IMMEDIATE_SR(TTBR1_EL1);
+DECLARE_IMMEDIATE_SR(TCR_EL1);
+DECLARE_IMMEDIATE_SR(ESR_EL1);
+DECLARE_IMMEDIATE_SR(AFSR0_EL1);
+DECLARE_IMMEDIATE_SR(AFSR1_EL1);
+DECLARE_IMMEDIATE_SR(FAR_EL1);
+DECLARE_IMMEDIATE_SR(MAIR_EL1);
+DECLARE_IMMEDIATE_SR(VBAR_EL1);
+DECLARE_IMMEDIATE_SR(CONTEXTIDR_EL1);
+DECLARE_IMMEDIATE_SR(TPIDR_EL0);
+DECLARE_IMMEDIATE_SR(TPIDRRO_EL0);
+DECLARE_IMMEDIATE_SR(TPIDR_EL1);
+DECLARE_IMMEDIATE_SR(AMAIR_EL1);
+DECLARE_IMMEDIATE_SR(CNTKCTL_EL1);
+DECLARE_IMMEDIATE_SR(PAR_EL1);
+DECLARE_IMMEDIATE_SR(MDSCR_EL1);
+DECLARE_IMMEDIATE_SR(MDCCINT_EL1);
+DECLARE_IMMEDIATE_SR(PMCR_EL0);
+DECLARE_IMMEDIATE_SR(PMSELR_EL0);
+DECLARE_IMMEDIATE_SR(PMEVCNTR0_EL0);
+/* PMEVCNTR30_EL0 */
+DECLARE_IMMEDIATE_SR(PMCCNTR_EL0);
+DECLARE_IMMEDIATE_SR(PMEVTYPER0_EL0);
+/* PMEVTYPER30_EL0 */
+DECLARE_IMMEDIATE_SR(PMCCFILTR_EL0);
+DECLARE_IMMEDIATE_SR(PMCNTENSET_EL0);
+DECLARE_IMMEDIATE_SR(PMINTENSET_EL1);
+DECLARE_IMMEDIATE_SR(PMOVSSET_EL0);
+DECLARE_IMMEDIATE_SR(PMSWINC_EL0);
+DECLARE_IMMEDIATE_SR(PMUSERENR_EL0);
+DECLARE_IMMEDIATE_SR(DACR32_EL2);
+DECLARE_IMMEDIATE_SR(IFSR32_EL2);
+DECLARE_IMMEDIATE_SR(FPEXC32_EL2);
+DECLARE_IMMEDIATE_SR(DBGVCR32_EL2);
+
+static const struct sys_reg_accessor sys_reg_accessors[NR_SYS_REGS] = {
+	[0 ... NR_SYS_REGS - 1] = {
+		.rdsr = __default_read_sys_reg,
+		.wrsr = __default_write_sys_reg,
+	},
+
+	SR_HANDLER(MPIDR_EL1),
+	SR_HANDLER(CSSELR_EL1),
+	SR_HANDLER(SCTLR_EL1),
+	SR_HANDLER(ACTLR_EL1),
+	SR_HANDLER(CPACR_EL1),
+	SR_HANDLER(TTBR0_EL1),
+	SR_HANDLER(TTBR1_EL1),
+	SR_HANDLER(TCR_EL1),
+	SR_HANDLER(ESR_EL1),
+	SR_HANDLER(AFSR0_EL1),
+	SR_HANDLER(AFSR1_EL1),
+	SR_HANDLER(FAR_EL1),
+	SR_HANDLER(MAIR_EL1),
+	SR_HANDLER(VBAR_EL1),
+	SR_HANDLER(CONTEXTIDR_EL1),
+	SR_HANDLER(TPIDR_EL0),
+	SR_HANDLER(TPIDRRO_EL0),
+	SR_HANDLER(TPIDR_EL1),
+	SR_HANDLER(AMAIR_EL1),
+	SR_HANDLER(CNTKCTL_EL1),
+	SR_HANDLER(PAR_EL1),
+	SR_HANDLER(MDSCR_EL1),
+	SR_HANDLER(MDCCINT_EL1),
+	SR_HANDLER(PMCR_EL0),
+	SR_HANDLER(PMSELR_EL0),
+	SR_HANDLER_RANGE(PMEVCNTR0_EL0, PMEVCNTR30_EL0),
+	SR_HANDLER(PMCCNTR_EL0),
+	SR_HANDLER_RANGE(PMEVTYPER0_EL0, PMEVTYPER30_EL0),
+	SR_HANDLER(PMCCFILTR_EL0),
+	SR_HANDLER(PMCNTENSET_EL0),
+	SR_HANDLER(PMINTENSET_EL1),
+	SR_HANDLER(PMOVSSET_EL0),
+	SR_HANDLER(PMSWINC_EL0),
+	SR_HANDLER(PMUSERENR_EL0),
+	SR_HANDLER(DACR32_EL2),
+	SR_HANDLER(IFSR32_EL2),
+	SR_HANDLER(FPEXC32_EL2),
+	SR_HANDLER(DBGVCR32_EL2),
+};
+
+u64 vcpu_read_sys_reg(struct kvm_vcpu *vcpu, int reg)
+{
+	return sys_reg_accessors[reg].rdsr(vcpu, reg);
+}
+
+void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, int reg, u64 val)
+{
+	sys_reg_accessors[reg].wrsr(vcpu, reg, val);
+}
+
 /* 3 bits per cache level, as per CLIDR, but non-existent caches always 0 */
 static u32 cache_levels;
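
The default-then-override pattern in sys_reg_accessors relies on the
GNU C range designator ([first ... last] = value) used by
SR_HANDLER_RANGE, together with the rule that a later initializer for
the same element wins; that is why the catch-all
[0 ... NR_SYS_REGS - 1] default entry must come first.  A small
standalone sketch of just that mechanism (not taken from the kernel;
build with gcc -std=gnu11):

	#include <stdio.h>

	int main(void)
	{
		int tbl[8] = {
			[0 ... 7] = -1,	/* default for every element */
			[2 ... 4] = 42,	/* overrides the default for 2..4 */
		};

		/* prints -1 everywhere except indices 2, 3 and 4 */
		for (int i = 0; i < 8; i++)
			printf("tbl[%d] = %d\n", i, tbl[i]);

		return 0;
	}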