From patchwork Mon Mar 20 07:55:34 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Dongjiu Geng
X-Patchwork-Id: 9633105
From: Dongjiu Geng
Subject: [PATCH] kvm: pass the virtual SEI syndrome to guest OS
Date: Mon, 20 Mar 2017 15:55:34 +0800
Message-ID: <1489996534-8270-1-git-send-email-gengdongjiu@huawei.com>
X-Mailer: git-send-email 1.7.7
X-BeenThere: linux-arm-kernel@lists.infradead.org
Cc: wuquanming@huawei.com, wangxiongfeng2@huawei.com, xiexiuqi@huawei.com
Sender: "linux-arm-kernel"
In the RAS implementation, the hardware passes the virtual SEI syndrome
to the guest through VSESR_EL2, so set the virtual SEI syndrome from the
physical SEI syndrome (esr_el2) before it is delivered to the guest OS.

Signed-off-by: Dongjiu Geng
Signed-off-by: Quanming wu
---
 arch/arm64/Kconfig                   |  8 ++++++++
 arch/arm64/include/asm/esr.h         |  1 +
 arch/arm64/include/asm/kvm_emulate.h | 12 ++++++++++++
 arch/arm64/include/asm/kvm_host.h    |  4 ++++
 arch/arm64/kvm/hyp/switch.c          | 15 ++++++++++++++-
 arch/arm64/kvm/inject_fault.c        | 10 ++++++++++
 6 files changed, 49 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 8c7c244247b6..ea62170a3b75 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -908,6 +908,14 @@ endmenu
 
 menu "ARMv8.2 architectural features"
 
+config HAS_RAS_EXTENSION
+	bool "Support arm64 RAS extension"
+	default n
+	help
+	  Reliability, Availability, Serviceability (RAS; part of the ARMv8.2 Extensions).
+
+	  Selecting this option, the OS will try to recover errors detected by the RAS hardware nodes.
+
 config ARM64_UAO
 	bool "Enable support for User Access Override (UAO)"
 	default y
diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index d14c478976d0..e38d32b2bdad 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -111,6 +111,7 @@
 #define ESR_ELx_COND_MASK	(UL(0xF) << ESR_ELx_COND_SHIFT)
 #define ESR_ELx_WFx_ISS_WFE	(UL(1) << 0)
 #define ESR_ELx_xVC_IMM_MASK	((1UL << 16) - 1)
+#define VSESR_ELx_IDS_ISS_MASK	((1UL << 25) - 1)
 
 /* ESR value templates for specific events */
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index f5ea0ba70f07..20d4da7f5dce 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -148,6 +148,18 @@ static inline u32 kvm_vcpu_get_hsr(const struct kvm_vcpu *vcpu)
 	return vcpu->arch.fault.esr_el2;
 }
 
+#ifdef CONFIG_HAS_RAS_EXTENSION
+static inline u32 kvm_vcpu_get_vsesr(const struct kvm_vcpu *vcpu)
+{
+	return vcpu->arch.fault.vsesr_el2;
+}
+
+static inline void kvm_vcpu_set_vsesr(struct kvm_vcpu *vcpu, unsigned long val)
+{
+	vcpu->arch.fault.vsesr_el2 = val;
+}
+#endif
+
 static inline int kvm_vcpu_get_condition(const struct kvm_vcpu *vcpu)
 {
 	u32 esr = kvm_vcpu_get_hsr(vcpu);
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e7705e7bb07b..f9e3bb57c461 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -83,6 +83,10 @@ struct kvm_mmu_memory_cache {
 };
 
 struct kvm_vcpu_fault_info {
+#ifdef CONFIG_HAS_RAS_EXTENSION
+	/* Virtual SError Exception Syndrome Register */
+	u32 vsesr_el2;
+#endif
 	u32 esr_el2;		/* Hyp Syndrom Register */
 	u64 far_el2;		/* Hyp Fault Address Register */
 	u64 hpfar_el2;		/* Hyp IPA Fault Address Register */
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index aede1658aeda..770a153fb6ba 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -86,6 +86,13 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
 		isb();
 	}
 	write_sysreg(val, hcr_el2);
+#ifdef CONFIG_HAS_RAS_EXTENSION
+	/* If a virtual System Error or Asynchronous Abort is pending, set
+	 * the virtual exception syndrome information
+	 */
+	if (vcpu->arch.hcr_el2 & HCR_VSE)
+		write_sysreg(vcpu->arch.fault.vsesr_el2, vsesr_el2);
+#endif
 	/* Trap on AArch32 cp15 c15 accesses (EL1 or EL0) */
 	write_sysreg(1 << 15, hstr_el2);
 	/*
@@ -139,8 +146,14 @@ static void __hyp_text __deactivate_traps(struct kvm_vcpu *vcpu)
 	 * the crucial bit is "On taking a vSError interrupt,
 	 * HCR_EL2.VSE is cleared to 0."
 	 */
-	if (vcpu->arch.hcr_el2 & HCR_VSE)
+	if (vcpu->arch.hcr_el2 & HCR_VSE) {
 		vcpu->arch.hcr_el2 = read_sysreg(hcr_el2);
+#ifdef CONFIG_HAS_RAS_EXTENSION
+		/* set vsesr_el2[24:0] with esr_el2[24:0] */
+		kvm_vcpu_set_vsesr(vcpu, read_sysreg_el2(esr)
+				   & VSESR_ELx_IDS_ISS_MASK);
+#endif
+	}
 
 	__deactivate_traps_arch()();
 	write_sysreg(0, hstr_el2);
diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
index da6a8cfa54a0..08a13dfe28a8 100644
--- a/arch/arm64/kvm/inject_fault.c
+++ b/arch/arm64/kvm/inject_fault.c
@@ -242,4 +242,14 @@ void kvm_inject_undefined(struct kvm_vcpu *vcpu)
 void kvm_inject_vabt(struct kvm_vcpu *vcpu)
 {
 	vcpu_set_hcr(vcpu, vcpu_get_hcr(vcpu) | HCR_VSE);
+#ifdef CONFIG_HAS_RAS_EXTENSION
+	/* If a virtual System Error or Asynchronous Abort is set, set
+	 * the virtual exception syndrome information
+	 */
+	kvm_vcpu_set_vsesr(vcpu, ((kvm_vcpu_get_vsesr(vcpu)
+				   & (~VSESR_ELx_IDS_ISS_MASK))
+				  | (kvm_vcpu_get_hsr(vcpu)
+				     & VSESR_ELx_IDS_ISS_MASK)));
+#endif
 }