From patchwork Mon Aug 13 14:57:19 2018
X-Patchwork-Submitter: Marc Zyngier
X-Patchwork-Id: 10564315
From: Marc Zyngier
To: Paolo Bonzini, Radim Krčmář
Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org, Andre Przywara, Andrew Jones,
 Catalin Marinas, Christoffer Dall, Dongjiu Geng, Eric Auger,
 "Gustavo A. R. Silva", James Morse, Jia He, Kees Cook, Mark Rutland,
 Punit Agrawal, Suzuki Poulose
Subject: [PATCH 01/37] arm64: KVM: Add support for Stage-2 control of memory types and cacheability
Date: Mon, 13 Aug 2018 15:57:19 +0100
Message-Id: <20180813145755.16566-2-marc.zyngier@arm.com>
In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com>
References: <20180813145755.16566-1-marc.zyngier@arm.com>

Up to ARMv8.3, the combination of Stage-1 and Stage-2 attributes results
in the strongest attribute of the two stages. This means that the
hypervisor has to perform quite a lot of cache maintenance just in case
the guest has some non-cacheable mappings around.

ARMv8.4 solves this problem by offering a different mode (FWB) where
Stage-2 has total control over the memory attribute (this is limited to
systems where both I/O and instruction fetches are coherent with the
dcache). This is achieved by having a different set of memory attributes
in the page tables, and a new bit set in HCR_EL2. On such a system, we
can then safely sidestep any form of dcache management.
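To make the effect concrete, the fault-handling side boils down to keying
cache maintenance off the new capability. A minimal sketch built from the
helpers this patch touches (the real logic lives in
__clean_dcache_guest_page() in the diff below):

/*
 * Sketch only: once ARM64_HAS_STAGE2_FWB is detected (from
 * ID_AA64MMFR2_EL1.FWB, shift 40, added below), Stage-2 forces
 * Normal-WB memory and the clean to PoC can be skipped entirely.
 */
static void sketch_clean_guest_page(void *va, unsigned long size)
{
	if (cpus_have_const_cap(ARM64_HAS_STAGE2_FWB))
		return;		/* Stage-2 attributes win, nothing to clean */

	kvm_flush_dcache_to_poc(va, size);	/* pre-FWB behaviour */
}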
Acked-by: Catalin Marinas Reviewed-by: Christoffer Dall Signed-off-by: Marc Zyngier --- arch/arm64/include/asm/cpucaps.h | 3 ++- arch/arm64/include/asm/kvm_arm.h | 1 + arch/arm64/include/asm/kvm_emulate.h | 2 ++ arch/arm64/include/asm/kvm_mmu.h | 27 +++++++++++++++++++++------ arch/arm64/include/asm/memory.h | 7 +++++++ arch/arm64/include/asm/pgtable-prot.h | 14 ++++++++++++-- arch/arm64/include/asm/sysreg.h | 1 + arch/arm64/kernel/cpufeature.c | 20 ++++++++++++++++++++ virt/kvm/arm/mmu.c | 4 ++++ 9 files changed, 70 insertions(+), 9 deletions(-) diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h index 8a699c708fc9..ed84d6536830 100644 --- a/arch/arm64/include/asm/cpucaps.h +++ b/arch/arm64/include/asm/cpucaps.h @@ -49,7 +49,8 @@ #define ARM64_HAS_CACHE_DIC 28 #define ARM64_HW_DBM 29 #define ARM64_SSBD 30 +#define ARM64_HAS_STAGE2_FWB 31 -#define ARM64_NCAPS 31 +#define ARM64_NCAPS 32 #endif /* __ASM_CPUCAPS_H */ diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h index 6dd285e979c9..aa45df752a16 100644 --- a/arch/arm64/include/asm/kvm_arm.h +++ b/arch/arm64/include/asm/kvm_arm.h @@ -23,6 +23,7 @@ #include /* Hyp Configuration Register (HCR) bits */ +#define HCR_FWB (UL(1) << 46) #define HCR_TEA (UL(1) << 37) #define HCR_TERR (UL(1) << 36) #define HCR_TLOR (UL(1) << 35) diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h index 1dab3a984608..dd98fdf33d99 100644 --- a/arch/arm64/include/asm/kvm_emulate.h +++ b/arch/arm64/include/asm/kvm_emulate.h @@ -63,6 +63,8 @@ static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu) /* trap error record accesses */ vcpu->arch.hcr_el2 |= HCR_TERR; } + if (cpus_have_const_cap(ARM64_HAS_STAGE2_FWB)) + vcpu->arch.hcr_el2 |= HCR_FWB; if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features)) vcpu->arch.hcr_el2 &= ~HCR_RW; diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h index fb9a7127bb75..bac9f016736b 100644 --- a/arch/arm64/include/asm/kvm_mmu.h +++ b/arch/arm64/include/asm/kvm_mmu.h @@ -267,6 +267,15 @@ static inline void __clean_dcache_guest_page(kvm_pfn_t pfn, unsigned long size) { void *va = page_address(pfn_to_page(pfn)); + /* + * With FWB, we ensure that the guest always accesses memory using + * cacheable attributes, and we don't have to clean to PoC when + * faulting in pages. Furthermore, FWB implies IDC, so cleaning to + * PoU is not required either in this case. 
+ */ + if (cpus_have_const_cap(ARM64_HAS_STAGE2_FWB)) + return; + kvm_flush_dcache_to_poc(va, size); } @@ -287,20 +296,26 @@ static inline void __invalidate_icache_guest_page(kvm_pfn_t pfn, static inline void __kvm_flush_dcache_pte(pte_t pte) { - struct page *page = pte_page(pte); - kvm_flush_dcache_to_poc(page_address(page), PAGE_SIZE); + if (!cpus_have_const_cap(ARM64_HAS_STAGE2_FWB)) { + struct page *page = pte_page(pte); + kvm_flush_dcache_to_poc(page_address(page), PAGE_SIZE); + } } static inline void __kvm_flush_dcache_pmd(pmd_t pmd) { - struct page *page = pmd_page(pmd); - kvm_flush_dcache_to_poc(page_address(page), PMD_SIZE); + if (!cpus_have_const_cap(ARM64_HAS_STAGE2_FWB)) { + struct page *page = pmd_page(pmd); + kvm_flush_dcache_to_poc(page_address(page), PMD_SIZE); + } } static inline void __kvm_flush_dcache_pud(pud_t pud) { - struct page *page = pud_page(pud); - kvm_flush_dcache_to_poc(page_address(page), PUD_SIZE); + if (!cpus_have_const_cap(ARM64_HAS_STAGE2_FWB)) { + struct page *page = pud_page(pud); + kvm_flush_dcache_to_poc(page_address(page), PUD_SIZE); + } } #define kvm_virt_to_phys(x) __pa_symbol(x) diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h index 49d99214f43c..b96442960aea 100644 --- a/arch/arm64/include/asm/memory.h +++ b/arch/arm64/include/asm/memory.h @@ -155,6 +155,13 @@ #define MT_S2_NORMAL 0xf #define MT_S2_DEVICE_nGnRE 0x1 +/* + * Memory types for Stage-2 translation when ID_AA64MMFR2_EL1.FWB is 0001 + * Stage-2 enforces Normal-WB and Device-nGnRE + */ +#define MT_S2_FWB_NORMAL 6 +#define MT_S2_FWB_DEVICE_nGnRE 1 + #ifdef CONFIG_ARM64_4K_PAGES #define IOREMAP_MAX_ORDER (PUD_SHIFT) #else diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h index 108ecad7acc5..c66c3047400e 100644 --- a/arch/arm64/include/asm/pgtable-prot.h +++ b/arch/arm64/include/asm/pgtable-prot.h @@ -67,8 +67,18 @@ #define PAGE_HYP_RO __pgprot(_HYP_PAGE_DEFAULT | PTE_HYP | PTE_RDONLY | PTE_HYP_XN) #define PAGE_HYP_DEVICE __pgprot(PROT_DEVICE_nGnRE | PTE_HYP) -#define PAGE_S2 __pgprot(_PROT_DEFAULT | PTE_S2_MEMATTR(MT_S2_NORMAL) | PTE_S2_RDONLY | PTE_S2_XN) -#define PAGE_S2_DEVICE __pgprot(_PROT_DEFAULT | PTE_S2_MEMATTR(MT_S2_DEVICE_nGnRE) | PTE_S2_RDONLY | PTE_S2_XN) +#define PAGE_S2_MEMATTR(attr) \ + ({ \ + u64 __val; \ + if (cpus_have_const_cap(ARM64_HAS_STAGE2_FWB)) \ + __val = PTE_S2_MEMATTR(MT_S2_FWB_ ## attr); \ + else \ + __val = PTE_S2_MEMATTR(MT_S2_ ## attr); \ + __val; \ + }) + +#define PAGE_S2 __pgprot(_PROT_DEFAULT | PAGE_S2_MEMATTR(NORMAL) | PTE_S2_RDONLY | PTE_S2_XN) +#define PAGE_S2_DEVICE __pgprot(_PROT_DEFAULT | PAGE_S2_MEMATTR(DEVICE_nGnRE) | PTE_S2_RDONLY | PTE_S2_XN) #define PAGE_NONE __pgprot(((_PAGE_DEFAULT) & ~PTE_VALID) | PTE_PROT_NONE | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN) #define PAGE_SHARED __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_UXN | PTE_WRITE) diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h index a8f84812c6e8..98af0b37fb31 100644 --- a/arch/arm64/include/asm/sysreg.h +++ b/arch/arm64/include/asm/sysreg.h @@ -576,6 +576,7 @@ #define ID_AA64MMFR1_VMIDBITS_16 2 /* id_aa64mmfr2 */ +#define ID_AA64MMFR2_FWB_SHIFT 40 #define ID_AA64MMFR2_AT_SHIFT 32 #define ID_AA64MMFR2_LVA_SHIFT 16 #define ID_AA64MMFR2_IESB_SHIFT 12 diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c index f24892a40d2c..d58d1f0abe16 100644 --- a/arch/arm64/kernel/cpufeature.c +++ b/arch/arm64/kernel/cpufeature.c @@ -192,6 +192,7 @@ static const struct 
arm64_ftr_bits ftr_id_aa64mmfr1[] = { }; static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = { + ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_FWB_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_AT_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LVA_SHIFT, 4, 0), ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_IESB_SHIFT, 4, 0), @@ -1026,6 +1027,14 @@ static void cpu_copy_el2regs(const struct arm64_cpu_capabilities *__unused) } #endif +static void cpu_has_fwb(const struct arm64_cpu_capabilities *__unused) +{ + u64 val = read_sysreg_s(SYS_CLIDR_EL1); + + /* Check that CLIDR_EL1.LOU{U,IS} are both 0 */ + WARN_ON(val & (7 << 27 | 7 << 21)); +} + static const struct arm64_cpu_capabilities arm64_features[] = { { .desc = "GIC system register CPU interface", @@ -1182,6 +1191,17 @@ static const struct arm64_cpu_capabilities arm64_features[] = { .type = ARM64_CPUCAP_SYSTEM_FEATURE, .matches = has_cache_dic, }, + { + .desc = "Stage-2 Force Write-Back", + .type = ARM64_CPUCAP_SYSTEM_FEATURE, + .capability = ARM64_HAS_STAGE2_FWB, + .sys_reg = SYS_ID_AA64MMFR2_EL1, + .sign = FTR_UNSIGNED, + .field_pos = ID_AA64MMFR2_FWB_SHIFT, + .min_field_value = 1, + .matches = has_cpuid_feature, + .cpu_enable = cpu_has_fwb, + }, #ifdef CONFIG_ARM64_HW_AFDBM { /* diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c index 1d90d79706bd..ea7314296ad1 100644 --- a/virt/kvm/arm/mmu.c +++ b/virt/kvm/arm/mmu.c @@ -196,6 +196,10 @@ static void clear_stage2_pmd_entry(struct kvm *kvm, pmd_t *pmd, phys_addr_t addr * This is why right after unmapping a page/section and invalidating * the corresponding TLBs, we call kvm_flush_dcache_p*() to make sure * the IO subsystem will never hit in the cache. + * + * This is all avoided on systems that have ARM64_HAS_STAGE2_FWB, as + * we then fully enforce cacheability of RAM, no matter what the guest + * does. 
*/ static void unmap_stage2_ptes(struct kvm *kvm, pmd_t *pmd, phys_addr_t addr, phys_addr_t end)

From patchwork Mon Aug 13 14:57:20 2018
X-Patchwork-Submitter: Marc Zyngier
X-Patchwork-Id: 10564317
From: Marc Zyngier
To: Paolo Bonzini, Radim Krčmář
Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org, Andre Przywara, Andrew Jones,
 Catalin Marinas, Christoffer Dall, Dongjiu Geng, Eric Auger,
 "Gustavo A. R. Silva", James Morse, Jia He, Kees Cook, Mark Rutland,
 Punit Agrawal, Suzuki Poulose
Subject: [PATCH 02/37] arm64: KVM: Handle Set/Way CMOs as NOPs if FWB is present
Date: Mon, 13 Aug 2018 15:57:20 +0100
Message-Id: <20180813145755.16566-3-marc.zyngier@arm.com>
In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com>
References: <20180813145755.16566-1-marc.zyngier@arm.com>

Set/Way handling is one of the ugliest corners of KVM. We shouldn't have
to handle that, but better safe than sorry.

Thankfully, FWB fixes this for us by not requiring any maintenance (the
guest is forced to use cacheable memory, no matter what it says, and the
whole system is guaranteed to be cache coherent), which means we don't
have to emulate S/W CMOs, and don't have to track VM ops either.

We still have to trap S/W though, if only to prevent the guest from
doing something bad.
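The trap itself is kept, but its only remaining side effect is the legacy
flush, and even that disappears with FWB. A simplified sketch of the
resulting handler shape (the real change is in access_dcsw() in the diff
below):

/*
 * Sketch: S/W cache ops are still trapped, but with FWB they become
 * complete NOPs; without it, KVM keeps the old flush-on-next-run
 * emulation via kvm_set_way_flush().
 */
static bool sketch_access_dcsw(struct kvm_vcpu *vcpu)
{
	if (!cpus_have_const_cap(ARM64_HAS_STAGE2_FWB))
		kvm_set_way_flush(vcpu);

	return true;	/* the trapped access is always "handled" */
}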
Reviewed-by: Christoffer Dall Signed-off-by: Marc Zyngier --- arch/arm64/kvm/sys_regs.c | 11 ++++++++++- 1 file changed, 10 insertions(+), 1 deletion(-) diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index a4363735d3f8..774d72155904 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -194,7 +194,16 @@ static bool access_dcsw(struct kvm_vcpu *vcpu, if (!p->is_write) return read_from_write_only(vcpu, p, r); - kvm_set_way_flush(vcpu); + /* + * Only track S/W ops if we don't have FWB. It still indicates + * that the guest is a bit broken (S/W operations should only + * be done by firmware, knowing that there is only a single + * CPU left in the system, and certainly not from non-secure + * software). + */ + if (!cpus_have_const_cap(ARM64_HAS_STAGE2_FWB)) + kvm_set_way_flush(vcpu); + return true; } From patchwork Mon Aug 13 14:57:21 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10564319 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4B42C14E2 for ; Mon, 13 Aug 2018 14:59:01 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 3C3122941C for ; Mon, 13 Aug 2018 14:59:01 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 306A229425; Mon, 13 Aug 2018 14:59:01 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id CBE412941C for ; Mon, 13 Aug 2018 14:59:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729296AbeHMRlg (ORCPT ); Mon, 13 Aug 2018 13:41:36 -0400 Received: from foss.arm.com ([217.140.101.70]:59812 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729215AbeHMRlf (ORCPT ); Mon, 13 Aug 2018 13:41:35 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 56F9515B2; Mon, 13 Aug 2018 07:58:59 -0700 (PDT) Received: from approximate.Emea.Arm.com (approximate.Emea.Arm.com [10.4.13.119]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 663CE3F5D0; Mon, 13 Aug 2018 07:58:56 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Andre Przywara , Andrew Jones , Catalin Marinas , Christoffer Dall , Dongjiu Geng , Eric Auger , "Gustavo A . R . 
Silva" , James Morse , Jia He , Kees Cook , Mark Rutland , Punit Agrawal , Suzuki Poulose Subject: [PATCH 03/37] arm64: KVM: Avoid marking pages as XN in Stage-2 if CTR_EL0.DIC is set Date: Mon, 13 Aug 2018 15:57:21 +0100 Message-Id: <20180813145755.16566-4-marc.zyngier@arm.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com> References: <20180813145755.16566-1-marc.zyngier@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP On systems where CTR_EL0.DIC is set, we don't need to perform icache invalidation to guarantee that we'll fetch the right instruction stream. This also means that taking a permission fault to invalidate the icache is an unnecessary overhead. On such systems, we can safely leave the page as being executable. Acked-by: Catalin Marinas Acked-by: Mark Rutland Acked-by: Christoffer Dall Signed-off-by: Marc Zyngier --- arch/arm64/include/asm/pgtable-prot.h | 14 ++++++++++++-- 1 file changed, 12 insertions(+), 2 deletions(-) diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h index c66c3047400e..78b942c1bea4 100644 --- a/arch/arm64/include/asm/pgtable-prot.h +++ b/arch/arm64/include/asm/pgtable-prot.h @@ -77,8 +77,18 @@ __val; \ }) -#define PAGE_S2 __pgprot(_PROT_DEFAULT | PAGE_S2_MEMATTR(NORMAL) | PTE_S2_RDONLY | PTE_S2_XN) -#define PAGE_S2_DEVICE __pgprot(_PROT_DEFAULT | PAGE_S2_MEMATTR(DEVICE_nGnRE) | PTE_S2_RDONLY | PTE_S2_XN) +#define PAGE_S2_XN \ + ({ \ + u64 __val; \ + if (cpus_have_const_cap(ARM64_HAS_CACHE_DIC)) \ + __val = 0; \ + else \ + __val = PTE_S2_XN; \ + __val; \ + }) + +#define PAGE_S2 __pgprot(_PROT_DEFAULT | PAGE_S2_MEMATTR(NORMAL) | PTE_S2_RDONLY | PAGE_S2_XN) +#define PAGE_S2_DEVICE __pgprot(_PROT_DEFAULT | PAGE_S2_MEMATTR(DEVICE_nGnRE) | PTE_S2_RDONLY | PAGE_S2_XN) #define PAGE_NONE __pgprot(((_PAGE_DEFAULT) & ~PTE_VALID) | PTE_PROT_NONE | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN) #define PAGE_SHARED __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_UXN | PTE_WRITE) From patchwork Mon Aug 13 14:57:22 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10564321 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D574B17E1 for ; Mon, 13 Aug 2018 14:59:04 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id C544B2941C for ; Mon, 13 Aug 2018 14:59:04 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id B91AD29425; Mon, 13 Aug 2018 14:59:04 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 5CC8F2941C for ; Mon, 13 Aug 2018 14:59:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729369AbeHMRlj (ORCPT ); Mon, 13 Aug 2018 13:41:39 -0400 Received: from foss.arm.com ([217.140.101.70]:59836 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729335AbeHMRlj (ORCPT ); Mon, 13 Aug 2018 
13:41:39 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 89D9B15BE; Mon, 13 Aug 2018 07:59:02 -0700 (PDT) Received: from approximate.Emea.Arm.com (approximate.Emea.Arm.com [10.4.13.119]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 973563F5D0; Mon, 13 Aug 2018 07:58:59 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Andre Przywara , Andrew Jones , Catalin Marinas , Christoffer Dall , Dongjiu Geng , Eric Auger , "Gustavo A . R . Silva" , James Morse , Jia He , Kees Cook , Mark Rutland , Punit Agrawal , Suzuki Poulose Subject: [PATCH 04/37] KVM: arm/arm64: Consolidate page-table accessors Date: Mon, 13 Aug 2018 15:57:22 +0100 Message-Id: <20180813145755.16566-5-marc.zyngier@arm.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com> References: <20180813145755.16566-1-marc.zyngier@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP The arm and arm64 KVM page tables accessors are pointlessly different between the two architectures, and likely both wrong one way or another: arm64 lacks a dsb(), and arm doesn't use WRITE_ONCE. Let's unify them. Acked-by: Mark Rutland Acked-by: Christoffer Dall Signed-off-by: Marc Zyngier --- arch/arm/include/asm/kvm_mmu.h | 12 ------------ arch/arm64/include/asm/kvm_mmu.h | 3 --- virt/kvm/arm/mmu.c | 12 ++++++++++++ 3 files changed, 12 insertions(+), 15 deletions(-) diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h index 8553d68b7c8a..b2feaea1434c 100644 --- a/arch/arm/include/asm/kvm_mmu.h +++ b/arch/arm/include/asm/kvm_mmu.h @@ -75,18 +75,6 @@ phys_addr_t kvm_get_idmap_vector(void); int kvm_mmu_init(void); void kvm_clear_hyp_idmap(void); -static inline void kvm_set_pmd(pmd_t *pmd, pmd_t new_pmd) -{ - *pmd = new_pmd; - dsb(ishst); -} - -static inline void kvm_set_pte(pte_t *pte, pte_t new_pte) -{ - *pte = new_pte; - dsb(ishst); -} - static inline pte_t kvm_s2pte_mkwrite(pte_t pte) { pte_val(pte) |= L_PTE_S2_RDWR; diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h index bac9f016736b..ea000fb47ec0 100644 --- a/arch/arm64/include/asm/kvm_mmu.h +++ b/arch/arm64/include/asm/kvm_mmu.h @@ -169,9 +169,6 @@ phys_addr_t kvm_get_idmap_vector(void); int kvm_mmu_init(void); void kvm_clear_hyp_idmap(void); -#define kvm_set_pte(ptep, pte) set_pte(ptep, pte) -#define kvm_set_pmd(pmdp, pmd) set_pmd(pmdp, pmd) - static inline pte_t kvm_s2pte_mkwrite(pte_t pte) { pte_val(pte) |= PTE_S2_RDWR; diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c index ea7314296ad1..a6bdbed1903a 100644 --- a/virt/kvm/arm/mmu.c +++ b/virt/kvm/arm/mmu.c @@ -177,6 +177,18 @@ static void clear_stage2_pmd_entry(struct kvm *kvm, pmd_t *pmd, phys_addr_t addr put_page(virt_to_page(pmd)); } +static inline void kvm_set_pte(pte_t *ptep, pte_t new_pte) +{ + WRITE_ONCE(*ptep, new_pte); + dsb(ishst); +} + +static inline void kvm_set_pmd(pmd_t *pmdp, pmd_t new_pmd) +{ + WRITE_ONCE(*pmdp, new_pmd); + dsb(ishst); +} + /* * Unmapping vs dcache management: * From patchwork Mon Aug 13 14:57:23 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10564325 Return-Path: Received: from mail.wl.linuxfoundation.org 
From: Marc Zyngier
To: Paolo Bonzini, Radim Krčmář
Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org, Andre Przywara, Andrew Jones,
 Catalin Marinas, Christoffer Dall, Dongjiu Geng, Eric Auger,
 "Gustavo A. R. Silva", James Morse, Jia He, Kees Cook, Mark Rutland,
 Punit Agrawal, Suzuki Poulose
Subject: [PATCH 05/37] KVM: arm/arm64: Stop using the kernel's {pmd,pud,pgd}_populate helpers
Date: Mon, 13 Aug 2018 15:57:23 +0100
Message-Id: <20180813145755.16566-6-marc.zyngier@arm.com>
In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com>
References: <20180813145755.16566-1-marc.zyngier@arm.com>

The usage of the {pmd,pud,pgd}_populate accessors has always been a bit
weird in KVM. We don't have a struct mm to pass (and neither does the
kernel most of the time, but still...), and the 32bit code has all kinds
of cache maintenance that doesn't make sense on ARMv7+ when MP extensions
are mandatory (which is the case when the virtualization extensions are
present).

Let's bite the bullet and provide our own implementations. The only bit
of architectural code left has to do with building the table entry itself
(arm64 having up to 52bit PA, arm lacking PUD level).
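The replacement helpers end up being tiny; in essence the arm64 side of
the change reduces to the following (sketch of what the diff below adds,
with the PUD and PGD levels following the same pattern):

/*
 * Build the table descriptor by hand and publish it through
 * kvm_set_pmd() (WRITE_ONCE + dsb(ishst), introduced one patch earlier),
 * instead of going through pmd_populate_kernel().
 */
#define kvm_mk_pmd(ptep) \
	__pmd(__phys_to_pmd_val(__pa(ptep)) | PMD_TYPE_TABLE)

static inline void kvm_pmd_populate(pmd_t *pmdp, pte_t *ptep)
{
	kvm_set_pmd(pmdp, kvm_mk_pmd(ptep));
}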
Acked-by: Mark Rutland Acked-by: Christoffer Dall Signed-off-by: Marc Zyngier --- arch/arm/include/asm/kvm_mmu.h | 4 ++++ arch/arm64/include/asm/kvm_mmu.h | 7 +++++++ virt/kvm/arm/mmu.c | 25 +++++++++++++++++++++---- 3 files changed, 32 insertions(+), 4 deletions(-) diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h index b2feaea1434c..265ea9cf7df7 100644 --- a/arch/arm/include/asm/kvm_mmu.h +++ b/arch/arm/include/asm/kvm_mmu.h @@ -75,6 +75,10 @@ phys_addr_t kvm_get_idmap_vector(void); int kvm_mmu_init(void); void kvm_clear_hyp_idmap(void); +#define kvm_mk_pmd(ptep) __pmd(__pa(ptep) | PMD_TYPE_TABLE) +#define kvm_mk_pud(pmdp) __pud(__pa(pmdp) | PMD_TYPE_TABLE) +#define kvm_mk_pgd(pudp) ({ BUILD_BUG(); 0; }) + static inline pte_t kvm_s2pte_mkwrite(pte_t pte) { pte_val(pte) |= L_PTE_S2_RDWR; diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h index ea000fb47ec0..d6fff7de5539 100644 --- a/arch/arm64/include/asm/kvm_mmu.h +++ b/arch/arm64/include/asm/kvm_mmu.h @@ -169,6 +169,13 @@ phys_addr_t kvm_get_idmap_vector(void); int kvm_mmu_init(void); void kvm_clear_hyp_idmap(void); +#define kvm_mk_pmd(ptep) \ + __pmd(__phys_to_pmd_val(__pa(ptep)) | PMD_TYPE_TABLE) +#define kvm_mk_pud(pmdp) \ + __pud(__phys_to_pud_val(__pa(pmdp)) | PMD_TYPE_TABLE) +#define kvm_mk_pgd(pudp) \ + __pgd(__phys_to_pgd_val(__pa(pudp)) | PUD_TYPE_TABLE) + static inline pte_t kvm_s2pte_mkwrite(pte_t pte) { pte_val(pte) |= PTE_S2_RDWR; diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c index a6bdbed1903a..eade30caaa3c 100644 --- a/virt/kvm/arm/mmu.c +++ b/virt/kvm/arm/mmu.c @@ -189,6 +189,23 @@ static inline void kvm_set_pmd(pmd_t *pmdp, pmd_t new_pmd) dsb(ishst); } +static inline void kvm_pmd_populate(pmd_t *pmdp, pte_t *ptep) +{ + kvm_set_pmd(pmdp, kvm_mk_pmd(ptep)); +} + +static inline void kvm_pud_populate(pud_t *pudp, pmd_t *pmdp) +{ + WRITE_ONCE(*pudp, kvm_mk_pud(pmdp)); + dsb(ishst); +} + +static inline void kvm_pgd_populate(pgd_t *pgdp, pud_t *pudp) +{ + WRITE_ONCE(*pgdp, kvm_mk_pgd(pudp)); + dsb(ishst); +} + /* * Unmapping vs dcache management: * @@ -617,7 +634,7 @@ static int create_hyp_pmd_mappings(pud_t *pud, unsigned long start, kvm_err("Cannot allocate Hyp pte\n"); return -ENOMEM; } - pmd_populate_kernel(NULL, pmd, pte); + kvm_pmd_populate(pmd, pte); get_page(virt_to_page(pmd)); kvm_flush_dcache_to_poc(pmd, sizeof(*pmd)); } @@ -650,7 +667,7 @@ static int create_hyp_pud_mappings(pgd_t *pgd, unsigned long start, kvm_err("Cannot allocate Hyp pmd\n"); return -ENOMEM; } - pud_populate(NULL, pud, pmd); + kvm_pud_populate(pud, pmd); get_page(virt_to_page(pud)); kvm_flush_dcache_to_poc(pud, sizeof(*pud)); } @@ -687,7 +704,7 @@ static int __create_hyp_mappings(pgd_t *pgdp, unsigned long ptrs_per_pgd, err = -ENOMEM; goto out; } - pgd_populate(NULL, pgd, pud); + kvm_pgd_populate(pgd, pud); get_page(virt_to_page(pgd)); kvm_flush_dcache_to_poc(pgd, sizeof(*pgd)); } @@ -1106,7 +1123,7 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache, if (!cache) return 0; /* ignore calls from kvm_set_spte_hva */ pte = mmu_memory_cache_alloc(cache); - pmd_populate_kernel(NULL, pmd, pte); + kvm_pmd_populate(pmd, pte); get_page(virt_to_page(pmd)); } From patchwork Mon Aug 13 14:57:24 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10564327 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by 
pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 3C40914E2 for ; Mon, 13 Aug 2018 14:59:11 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 2D6BE2941C for ; Mon, 13 Aug 2018 14:59:11 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 20C6B29425; Mon, 13 Aug 2018 14:59:11 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id C08EF2941C for ; Mon, 13 Aug 2018 14:59:10 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729638AbeHMRlp (ORCPT ); Mon, 13 Aug 2018 13:41:45 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:59896 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729596AbeHMRlp (ORCPT ); Mon, 13 Aug 2018 13:41:45 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id E854B1596; Mon, 13 Aug 2018 07:59:08 -0700 (PDT) Received: from approximate.Emea.Arm.com (approximate.Emea.Arm.com [10.4.13.119]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 037DF3F5D0; Mon, 13 Aug 2018 07:59:05 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Andre Przywara , Andrew Jones , Catalin Marinas , Christoffer Dall , Dongjiu Geng , Eric Auger , "Gustavo A . R . Silva" , James Morse , Jia He , Kees Cook , Mark Rutland , Punit Agrawal , Suzuki Poulose Subject: [PATCH 06/37] KVM: arm/arm64: Remove unnecessary CMOs when creating HYP page tables Date: Mon, 13 Aug 2018 15:57:24 +0100 Message-Id: <20180813145755.16566-7-marc.zyngier@arm.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com> References: <20180813145755.16566-1-marc.zyngier@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP There is no need to perform cache maintenance operations when creating the HYP page tables if we have the multiprocessing extensions. ARMv7 mandates them with the virtualization support, and ARMv8 just mandates them unconditionally. Let's remove these operations. 
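Concretely, each HYP table-building path simply loses its trailing clean;
a sketch of the resulting shape (the helper name here is only for
illustration, the actual change is in the diff below):

/*
 * Sketch: populate a HYP table level without any dcache maintenance,
 * relying on the architecturally coherent table walker.
 */
static void sketch_hyp_populate_pmd(pmd_t *pmd, pte_t *pte)
{
	kvm_pmd_populate(pmd, pte);
	get_page(virt_to_page(pmd));
	/* kvm_flush_dcache_to_poc(pmd, sizeof(*pmd)) is no longer needed */
}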
Acked-by: Mark Rutland Acked-by: Christoffer Dall Signed-off-by: Marc Zyngier --- virt/kvm/arm/mmu.c | 4 ---- 1 file changed, 4 deletions(-) diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c index eade30caaa3c..97d27cd9c654 100644 --- a/virt/kvm/arm/mmu.c +++ b/virt/kvm/arm/mmu.c @@ -609,7 +609,6 @@ static void create_hyp_pte_mappings(pmd_t *pmd, unsigned long start, pte = pte_offset_kernel(pmd, addr); kvm_set_pte(pte, pfn_pte(pfn, prot)); get_page(virt_to_page(pte)); - kvm_flush_dcache_to_poc(pte, sizeof(*pte)); pfn++; } while (addr += PAGE_SIZE, addr != end); } @@ -636,7 +635,6 @@ static int create_hyp_pmd_mappings(pud_t *pud, unsigned long start, } kvm_pmd_populate(pmd, pte); get_page(virt_to_page(pmd)); - kvm_flush_dcache_to_poc(pmd, sizeof(*pmd)); } next = pmd_addr_end(addr, end); @@ -669,7 +667,6 @@ static int create_hyp_pud_mappings(pgd_t *pgd, unsigned long start, } kvm_pud_populate(pud, pmd); get_page(virt_to_page(pud)); - kvm_flush_dcache_to_poc(pud, sizeof(*pud)); } next = pud_addr_end(addr, end); @@ -706,7 +703,6 @@ static int __create_hyp_mappings(pgd_t *pgdp, unsigned long ptrs_per_pgd, } kvm_pgd_populate(pgd, pud); get_page(virt_to_page(pgd)); - kvm_flush_dcache_to_poc(pgd, sizeof(*pgd)); } next = pgd_addr_end(addr, end); From patchwork Mon Aug 13 14:57:25 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10564329 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 983F814E2 for ; Mon, 13 Aug 2018 14:59:14 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 87FF62941C for ; Mon, 13 Aug 2018 14:59:14 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 7C08C29425; Mon, 13 Aug 2018 14:59:14 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 06BB22941C for ; Mon, 13 Aug 2018 14:59:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729697AbeHMRlt (ORCPT ); Mon, 13 Aug 2018 13:41:49 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:59924 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729584AbeHMRlt (ORCPT ); Mon, 13 Aug 2018 13:41:49 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 2372115B2; Mon, 13 Aug 2018 07:59:12 -0700 (PDT) Received: from approximate.Emea.Arm.com (approximate.Emea.Arm.com [10.4.13.119]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 3269E3F5D0; Mon, 13 Aug 2018 07:59:09 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Andre Przywara , Andrew Jones , Catalin Marinas , Christoffer Dall , Dongjiu Geng , Eric Auger , "Gustavo A . R . 
Silva" , James Morse , Jia He , Kees Cook , Mark Rutland , Punit Agrawal , Suzuki Poulose Subject: [PATCH 07/37] KVM: arm/arm64: Enable adaptative WFE trapping Date: Mon, 13 Aug 2018 15:57:25 +0100 Message-Id: <20180813145755.16566-8-marc.zyngier@arm.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com> References: <20180813145755.16566-1-marc.zyngier@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Trapping blocking WFE is extremely beneficial in situations where the system is oversubscribed, as it allows another thread to run while being blocked. In a non-oversubscribed environment, this is the complete opposite, and trapping WFE is just unnecessary overhead. Let's only enable WFE trapping if the CPU has more than a single task to run (that is, more than just the vcpu thread). Reviewed-by: Christoffer Dall Signed-off-by: Marc Zyngier --- arch/arm/include/asm/kvm_emulate.h | 10 ++++++++++ arch/arm64/include/asm/kvm_emulate.h | 10 ++++++++++ virt/kvm/arm/arm.c | 6 ++++++ 3 files changed, 26 insertions(+) diff --git a/arch/arm/include/asm/kvm_emulate.h b/arch/arm/include/asm/kvm_emulate.h index 6493bd479ddc..b50fe8380868 100644 --- a/arch/arm/include/asm/kvm_emulate.h +++ b/arch/arm/include/asm/kvm_emulate.h @@ -107,6 +107,16 @@ static inline unsigned long *vcpu_hcr(const struct kvm_vcpu *vcpu) return (unsigned long *)&vcpu->arch.hcr; } +static inline void vcpu_clear_wfe_traps(struct kvm_vcpu *vcpu) +{ + vcpu->arch.hcr &= ~HCR_TWE; +} + +static inline void vcpu_set_wfe_traps(struct kvm_vcpu *vcpu) +{ + vcpu->arch.hcr |= HCR_TWE; +} + static inline bool vcpu_mode_is_32bit(const struct kvm_vcpu *vcpu) { return 1; diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h index dd98fdf33d99..bfefdd9a72eb 100644 --- a/arch/arm64/include/asm/kvm_emulate.h +++ b/arch/arm64/include/asm/kvm_emulate.h @@ -83,6 +83,16 @@ static inline unsigned long *vcpu_hcr(struct kvm_vcpu *vcpu) return (unsigned long *)&vcpu->arch.hcr_el2; } +static inline void vcpu_clear_wfe_traps(struct kvm_vcpu *vcpu) +{ + vcpu->arch.hcr_el2 &= ~HCR_TWE; +} + +static inline void vcpu_set_wfe_traps(struct kvm_vcpu *vcpu) +{ + vcpu->arch.hcr_el2 |= HCR_TWE; +} + static inline void vcpu_set_vsesr(struct kvm_vcpu *vcpu, u64 vsesr) { vcpu->arch.vsesr_el2 = vsesr; diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c index 04e554cae3a2..8e66b89a3db2 100644 --- a/virt/kvm/arm/arm.c +++ b/virt/kvm/arm/arm.c @@ -30,6 +30,7 @@ #include #include #include +#include #include #include #include @@ -380,6 +381,11 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) kvm_timer_vcpu_load(vcpu); kvm_vcpu_load_sysregs(vcpu); kvm_arch_vcpu_load_fp(vcpu); + + if (single_task_running()) + vcpu_clear_wfe_traps(vcpu); + else + vcpu_set_wfe_traps(vcpu); } void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu) From patchwork Mon Aug 13 14:57:26 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10564331 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 53D4914E2 for ; Mon, 13 Aug 2018 14:59:17 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 43A0D2941C for ; Mon, 13 Aug 2018 14:59:17 +0000 (UTC) 
Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 37B4429425; Mon, 13 Aug 2018 14:59:17 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id DC7B32941C for ; Mon, 13 Aug 2018 14:59:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729707AbeHMRlw (ORCPT ); Mon, 13 Aug 2018 13:41:52 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:59948 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729406AbeHMRlv (ORCPT ); Mon, 13 Aug 2018 13:41:51 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 52CB215BF; Mon, 13 Aug 2018 07:59:15 -0700 (PDT) Received: from approximate.Emea.Arm.com (approximate.Emea.Arm.com [10.4.13.119]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 61C2E3F5D0; Mon, 13 Aug 2018 07:59:12 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Andre Przywara , Andrew Jones , Catalin Marinas , Christoffer Dall , Dongjiu Geng , Eric Auger , "Gustavo A . R . Silva" , James Morse , Jia He , Kees Cook , Mark Rutland , Punit Agrawal , Suzuki Poulose Subject: [PATCH 08/37] KVM: arm/arm64: Fix vgic init race Date: Mon, 13 Aug 2018 15:57:26 +0100 Message-Id: <20180813145755.16566-9-marc.zyngier@arm.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com> References: <20180813145755.16566-1-marc.zyngier@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Christoffer Dall The vgic_init function can race with kvm_arch_vcpu_create() which does not hold kvm_lock() and we therefore have no synchronization primitives to ensure we're doing the right thing. As the user is trying to initialize or run the VM while at the same time creating more VCPUs, we just have to refuse to initialize the VGIC in this case rather than silently failing with a broken VCPU. Reviewed-by: Eric Auger Signed-off-by: Christoffer Dall Signed-off-by: Marc Zyngier --- virt/kvm/arm/vgic/vgic-init.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/virt/kvm/arm/vgic/vgic-init.c b/virt/kvm/arm/vgic/vgic-init.c index 2673efce65f3..b71417913741 100644 --- a/virt/kvm/arm/vgic/vgic-init.c +++ b/virt/kvm/arm/vgic/vgic-init.c @@ -271,6 +271,10 @@ int vgic_init(struct kvm *kvm) if (vgic_initialized(kvm)) return 0; + /* Are we also in the middle of creating a VCPU? 
*/ + if (kvm->created_vcpus != atomic_read(&kvm->online_vcpus)) + return -EBUSY; + /* freeze the number of spis */ if (!dist->nr_spis) dist->nr_spis = VGIC_NR_IRQS_LEGACY - VGIC_NR_PRIVATE_IRQS; From patchwork Mon Aug 13 14:57:27 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10564335 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id AEB8B17E1 for ; Mon, 13 Aug 2018 14:59:21 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 9EB582942A for ; Mon, 13 Aug 2018 14:59:21 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 9256A29434; Mon, 13 Aug 2018 14:59:21 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 5C4262942A for ; Mon, 13 Aug 2018 14:59:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729725AbeHMRlz (ORCPT ); Mon, 13 Aug 2018 13:41:55 -0400 Received: from foss.arm.com ([217.140.101.70]:59980 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729709AbeHMRlz (ORCPT ); Mon, 13 Aug 2018 13:41:55 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 82E8F15BE; Mon, 13 Aug 2018 07:59:18 -0700 (PDT) Received: from approximate.Emea.Arm.com (approximate.Emea.Arm.com [10.4.13.119]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 9218F3F5D0; Mon, 13 Aug 2018 07:59:15 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Andre Przywara , Andrew Jones , Catalin Marinas , Christoffer Dall , Dongjiu Geng , Eric Auger , "Gustavo A . R . Silva" , James Morse , Jia He , Kees Cook , Mark Rutland , Punit Agrawal , Suzuki Poulose Subject: [PATCH 09/37] KVM: arm64: vgic-its: Remove VLA usage Date: Mon, 13 Aug 2018 15:57:27 +0100 Message-Id: <20180813145755.16566-10-marc.zyngier@arm.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com> References: <20180813145755.16566-1-marc.zyngier@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Kees Cook In the quest to remove all stack VLA usage from the kernel[1], this switches to using a maximum size and adds sanity checks. Additionally cleans up some of the int-vs-u32 usage and adds additional bounds checking. As it currently stands, this will always be 8 bytes until the ABI changes. 
[1] https://lkml.kernel.org/r/CA+55aFzCG-zNmZwX4A2FQpadafLfEzK6CC=qPXydAacU1RqZWA@mail.gmail.com Cc: Christoffer Dall Cc: Eric Auger Cc: Andre Przywara Cc: linux-arm-kernel@lists.infradead.org Cc: kvmarm@lists.cs.columbia.edu Signed-off-by: Kees Cook [maz: dropped WARN_ONs] Signed-off-by: Marc Zyngier --- virt/kvm/arm/vgic/vgic-its.c | 14 ++++++++++---- 1 file changed, 10 insertions(+), 4 deletions(-) diff --git a/virt/kvm/arm/vgic/vgic-its.c b/virt/kvm/arm/vgic/vgic-its.c index 4ed79c939fb4..1d88010f6b61 100644 --- a/virt/kvm/arm/vgic/vgic-its.c +++ b/virt/kvm/arm/vgic/vgic-its.c @@ -168,8 +168,14 @@ struct vgic_its_abi { int (*commit)(struct vgic_its *its); }; +#define ABI_0_ESZ 8 +#define ESZ_MAX ABI_0_ESZ + static const struct vgic_its_abi its_table_abi_versions[] = { - [0] = {.cte_esz = 8, .dte_esz = 8, .ite_esz = 8, + [0] = { + .cte_esz = ABI_0_ESZ, + .dte_esz = ABI_0_ESZ, + .ite_esz = ABI_0_ESZ, .save_tables = vgic_its_save_tables_v0, .restore_tables = vgic_its_restore_tables_v0, .commit = vgic_its_commit_v0, @@ -183,7 +189,7 @@ inline const struct vgic_its_abi *vgic_its_get_abi(struct vgic_its *its) return &its_table_abi_versions[its->abi_rev]; } -int vgic_its_set_abi(struct vgic_its *its, int rev) +static int vgic_its_set_abi(struct vgic_its *its, u32 rev) { const struct vgic_its_abi *abi; @@ -1881,14 +1887,14 @@ typedef int (*entry_fn_t)(struct vgic_its *its, u32 id, void *entry, * Return: < 0 on error, 0 if last element was identified, 1 otherwise * (the last element may not be found on second level tables) */ -static int scan_its_table(struct vgic_its *its, gpa_t base, int size, int esz, +static int scan_its_table(struct vgic_its *its, gpa_t base, int size, u32 esz, int start_id, entry_fn_t fn, void *opaque) { struct kvm *kvm = its->dev->kvm; unsigned long len = size; int id = start_id; gpa_t gpa = base; - char entry[esz]; + char entry[ESZ_MAX]; int ret; memset(entry, 0, esz); From patchwork Mon Aug 13 14:57:28 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10564337 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 7604D14E2 for ; Mon, 13 Aug 2018 14:59:24 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 65CC729425 for ; Mon, 13 Aug 2018 14:59:24 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 596102942B; Mon, 13 Aug 2018 14:59:24 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id C221929425 for ; Mon, 13 Aug 2018 14:59:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729596AbeHMRl6 (ORCPT ); Mon, 13 Aug 2018 13:41:58 -0400 Received: from foss.arm.com ([217.140.101.70]:60010 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729584AbeHMRl6 (ORCPT ); Mon, 13 Aug 2018 13:41:58 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id B389580D; Mon, 13 Aug 2018 07:59:21 -0700 (PDT) Received: 
from approximate.Emea.Arm.com (approximate.Emea.Arm.com [10.4.13.119]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id C26F53F5D0; Mon, 13 Aug 2018 07:59:18 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Andre Przywara , Andrew Jones , Catalin Marinas , Christoffer Dall , Dongjiu Geng , Eric Auger , "Gustavo A . R . Silva" , James Morse , Jia He , Kees Cook , Mark Rutland , Punit Agrawal , Suzuki Poulose Subject: [PATCH 10/37] KVM: arm/arm64: vgic-debug: Show LPI status Date: Mon, 13 Aug 2018 15:57:28 +0100 Message-Id: <20180813145755.16566-11-marc.zyngier@arm.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com> References: <20180813145755.16566-1-marc.zyngier@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP The vgic debugfs file only knows about SGI/PPI/SPI interrupts, and completely ignores LPIs. Let's fix that. Signed-off-by: Marc Zyngier --- virt/kvm/arm/vgic/vgic-debug.c | 42 ++++++++++++++++++++++++++-------- virt/kvm/arm/vgic/vgic-its.c | 12 +++++----- virt/kvm/arm/vgic/vgic.h | 1 + 3 files changed, 39 insertions(+), 16 deletions(-) diff --git a/virt/kvm/arm/vgic/vgic-debug.c b/virt/kvm/arm/vgic/vgic-debug.c index c589d4c2b478..9279e35fefb1 100644 --- a/virt/kvm/arm/vgic/vgic-debug.c +++ b/virt/kvm/arm/vgic/vgic-debug.c @@ -36,9 +36,12 @@ struct vgic_state_iter { int nr_cpus; int nr_spis; + int nr_lpis; int dist_id; int vcpu_id; int intid; + int lpi_idx; + u32 *lpi_array; }; static void iter_next(struct vgic_state_iter *iter) @@ -52,6 +55,12 @@ static void iter_next(struct vgic_state_iter *iter) if (iter->intid == VGIC_NR_PRIVATE_IRQS && ++iter->vcpu_id < iter->nr_cpus) iter->intid = 0; + + if (iter->intid >= (iter->nr_spis + VGIC_NR_PRIVATE_IRQS)) { + if (iter->lpi_idx < iter->nr_lpis) + iter->intid = iter->lpi_array[iter->lpi_idx]; + iter->lpi_idx++; + } } static void iter_init(struct kvm *kvm, struct vgic_state_iter *iter, @@ -63,6 +72,11 @@ static void iter_init(struct kvm *kvm, struct vgic_state_iter *iter, iter->nr_cpus = nr_cpus; iter->nr_spis = kvm->arch.vgic.nr_spis; + if (kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) { + iter->nr_lpis = vgic_copy_lpi_list(kvm, NULL, &iter->lpi_array); + if (iter->nr_lpis < 0) + iter->nr_lpis = 0; + } /* Fast forward to the right position if needed */ while (pos--) @@ -73,7 +87,8 @@ static bool end_of_vgic(struct vgic_state_iter *iter) { return iter->dist_id > 0 && iter->vcpu_id == iter->nr_cpus && - (iter->intid - VGIC_NR_PRIVATE_IRQS) == iter->nr_spis; + iter->intid >= (iter->nr_spis + VGIC_NR_PRIVATE_IRQS) && + iter->lpi_idx > iter->nr_lpis; } static void *vgic_debug_start(struct seq_file *s, loff_t *pos) @@ -130,6 +145,7 @@ static void vgic_debug_stop(struct seq_file *s, void *v) mutex_lock(&kvm->lock); iter = kvm->arch.vgic.iter; + kfree(iter->lpi_array); kfree(iter); kvm->arch.vgic.iter = NULL; mutex_unlock(&kvm->lock); @@ -137,12 +153,14 @@ static void vgic_debug_stop(struct seq_file *s, void *v) static void print_dist_state(struct seq_file *s, struct vgic_dist *dist) { + bool v3 = dist->vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3; + seq_printf(s, "Distributor\n"); seq_printf(s, "===========\n"); - seq_printf(s, "vgic_model:\t%s\n", - (dist->vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) ? - "GICv3" : "GICv2"); + seq_printf(s, "vgic_model:\t%s\n", v3 ? 
"GICv3" : "GICv2"); seq_printf(s, "nr_spis:\t%d\n", dist->nr_spis); + if (v3) + seq_printf(s, "nr_lpis:\t%d\n", dist->lpi_list_count); seq_printf(s, "enabled:\t%d\n", dist->enabled); seq_printf(s, "\n"); @@ -174,8 +192,10 @@ static void print_irq_state(struct seq_file *s, struct vgic_irq *irq, type = "SGI"; else if (irq->intid < VGIC_NR_PRIVATE_IRQS) type = "PPI"; - else + else if (irq->intid < VGIC_MAX_SPI) type = "SPI"; + else + type = "LPI"; if (irq->intid ==0 || irq->intid == VGIC_NR_PRIVATE_IRQS) print_header(s, irq, vcpu); @@ -202,7 +222,6 @@ static void print_irq_state(struct seq_file *s, struct vgic_irq *irq, irq->source, irq->priority, (irq->vcpu) ? irq->vcpu->vcpu_id : -1); - } static int vgic_debug_show(struct seq_file *s, void *v) @@ -221,17 +240,20 @@ static int vgic_debug_show(struct seq_file *s, void *v) if (!kvm->arch.vgic.initialized) return 0; - if (iter->vcpu_id < iter->nr_cpus) { + if (iter->vcpu_id < iter->nr_cpus) vcpu = kvm_get_vcpu(kvm, iter->vcpu_id); - irq = &vcpu->arch.vgic_cpu.private_irqs[iter->intid]; - } else { - irq = &kvm->arch.vgic.spis[iter->intid - VGIC_NR_PRIVATE_IRQS]; + + irq = vgic_get_irq(kvm, vcpu, iter->intid); + if (!irq) { + seq_printf(s, " LPI %4d freed\n", iter->intid); + return 0; } spin_lock_irqsave(&irq->irq_lock, flags); print_irq_state(s, irq, vcpu); spin_unlock_irqrestore(&irq->irq_lock, flags); + vgic_put_irq(kvm, irq); return 0; } diff --git a/virt/kvm/arm/vgic/vgic-its.c b/virt/kvm/arm/vgic/vgic-its.c index 1d88010f6b61..cee2c3c5519c 100644 --- a/virt/kvm/arm/vgic/vgic-its.c +++ b/virt/kvm/arm/vgic/vgic-its.c @@ -318,9 +318,9 @@ static int update_lpi_config(struct kvm *kvm, struct vgic_irq *irq, * enumerate those LPIs without holding any lock. * Returns their number and puts the kmalloc'ed array into intid_ptr. */ -static int vgic_copy_lpi_list(struct kvm_vcpu *vcpu, u32 **intid_ptr) +int vgic_copy_lpi_list(struct kvm *kvm, struct kvm_vcpu *vcpu, u32 **intid_ptr) { - struct vgic_dist *dist = &vcpu->kvm->arch.vgic; + struct vgic_dist *dist = &kvm->arch.vgic; struct vgic_irq *irq; unsigned long flags; u32 *intids; @@ -343,7 +343,7 @@ static int vgic_copy_lpi_list(struct kvm_vcpu *vcpu, u32 **intid_ptr) if (i == irq_count) break; /* We don't need to "get" the IRQ, as we hold the list lock. 
*/ - if (irq->target_vcpu != vcpu) + if (vcpu && irq->target_vcpu != vcpu) continue; intids[i++] = irq->intid; } @@ -435,7 +435,7 @@ static int its_sync_lpi_pending_table(struct kvm_vcpu *vcpu) unsigned long flags; u8 pendmask; - nr_irqs = vgic_copy_lpi_list(vcpu, &intids); + nr_irqs = vgic_copy_lpi_list(vcpu->kvm, vcpu, &intids); if (nr_irqs < 0) return nr_irqs; @@ -1160,7 +1160,7 @@ static int vgic_its_cmd_handle_invall(struct kvm *kvm, struct vgic_its *its, vcpu = kvm_get_vcpu(kvm, collection->target_addr); - irq_count = vgic_copy_lpi_list(vcpu, &intids); + irq_count = vgic_copy_lpi_list(kvm, vcpu, &intids); if (irq_count < 0) return irq_count; @@ -1208,7 +1208,7 @@ static int vgic_its_cmd_handle_movall(struct kvm *kvm, struct vgic_its *its, vcpu1 = kvm_get_vcpu(kvm, target1_addr); vcpu2 = kvm_get_vcpu(kvm, target2_addr); - irq_count = vgic_copy_lpi_list(vcpu1, &intids); + irq_count = vgic_copy_lpi_list(kvm, vcpu1, &intids); if (irq_count < 0) return irq_count; diff --git a/virt/kvm/arm/vgic/vgic.h b/virt/kvm/arm/vgic/vgic.h index ead00b2072b2..ed9d9d9e2fc5 100644 --- a/virt/kvm/arm/vgic/vgic.h +++ b/virt/kvm/arm/vgic/vgic.h @@ -305,6 +305,7 @@ static inline bool vgic_dist_overlap(struct kvm *kvm, gpa_t base, size_t size) (base < d->vgic_dist_base + KVM_VGIC_V3_DIST_SIZE); } +int vgic_copy_lpi_list(struct kvm *kvm, struct kvm_vcpu *vcpu, u32 **intid_ptr); int vgic_its_resolve_lpi(struct kvm *kvm, struct vgic_its *its, u32 devid, u32 eventid, struct vgic_irq **irq); struct vgic_its *vgic_msi_to_its(struct kvm *kvm, struct kvm_msi *msi); From patchwork Mon Aug 13 14:57:29 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10564339 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 178B217E1 for ; Mon, 13 Aug 2018 14:59:27 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 08CD42942B for ; Mon, 13 Aug 2018 14:59:27 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id F0A7C29432; Mon, 13 Aug 2018 14:59:26 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 76CF02942B for ; Mon, 13 Aug 2018 14:59:26 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729730AbeHMRmB (ORCPT ); Mon, 13 Aug 2018 13:42:01 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:60038 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729709AbeHMRmB (ORCPT ); Mon, 13 Aug 2018 13:42:01 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id E260DED1; Mon, 13 Aug 2018 07:59:24 -0700 (PDT) Received: from approximate.Emea.Arm.com (approximate.Emea.Arm.com [10.4.13.119]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id F1C603F5D0; Mon, 13 Aug 2018 07:59:21 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, 
linux-arm-kernel@lists.infradead.org, Andre Przywara , Andrew Jones , Catalin Marinas , Christoffer Dall , Dongjiu Geng , Eric Auger , "Gustavo A . R . Silva" , James Morse , Jia He , Kees Cook , Mark Rutland , Punit Agrawal , Suzuki Poulose Subject: [PATCH 11/37] arm64: KVM: Cleanup tpidr_el2 init on non-VHE Date: Mon, 13 Aug 2018 15:57:29 +0100 Message-Id: <20180813145755.16566-12-marc.zyngier@arm.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com> References: <20180813145755.16566-1-marc.zyngier@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP When running on a non-VHE system, we initialize tpidr_el2 to contain the per-CPU offset required to reach per-cpu variables. Actually, we initialize it twice: the first time as part of the EL2 initialization, by copying tpidr_el1 into its el2 counterpart, and another time by calling into __kvm_set_tpidr_el2. It turns out that the first part is wrong, as it includes the distance between the kernel mapping and the linear mapping, while EL2 only cares about the linear mapping. This was the last vestige of the first per-cpu use of tpidr_el2 that came in with SDEI. The only caller then was hyp_panic(), and it's now using the pc-relative get_host_ctxt() stuff, instead of kimage addresses from the literal pool. It is not a big deal, as we override it straight away, but it is slightly confusing. In order to clear said confusion, let's set this directly as part of the hyp-init code, and drop the ad-hoc HYP helper. Reviewed-by: James Morse Acked-by: Christoffer Dall Signed-off-by: Marc Zyngier --- arch/arm64/include/asm/kvm_host.h | 21 ++++++++------------- arch/arm64/kvm/hyp-init.S | 6 +++--- arch/arm64/kvm/hyp/sysreg-sr.c | 5 ----- 3 files changed, 11 insertions(+), 21 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index fe8777b12f86..268619ce0154 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -380,14 +380,19 @@ int kvm_perf_teardown(void); struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr); -void __kvm_set_tpidr_el2(u64 tpidr_el2); DECLARE_PER_CPU(kvm_cpu_context_t, kvm_host_cpu_state); static inline void __cpu_init_hyp_mode(phys_addr_t pgd_ptr, unsigned long hyp_stack_ptr, unsigned long vector_ptr) { - u64 tpidr_el2; + /* + * Calculate the raw per-cpu offset without a translation from the + * kernel's mapping to the linear mapping, and store it in tpidr_el2 + * so that we can use adr_l to access per-cpu variables in EL2. + */ + u64 tpidr_el2 = ((u64)this_cpu_ptr(&kvm_host_cpu_state) - + (u64)kvm_ksym_ref(kvm_host_cpu_state)); /* * Call initialization code, and switch to the full blown HYP code. @@ -396,17 +401,7 @@ static inline void __cpu_init_hyp_mode(phys_addr_t pgd_ptr, * cpus_have_const_cap() wrapper. */ BUG_ON(!static_branch_likely(&arm64_const_caps_ready)); - __kvm_call_hyp((void *)pgd_ptr, hyp_stack_ptr, vector_ptr); - - /* - * Calculate the raw per-cpu offset without a translation from the - * kernel's mapping to the linear mapping, and store it in tpidr_el2 - * so that we can use adr_l to access per-cpu variables in EL2.
- */ - tpidr_el2 = (u64)this_cpu_ptr(&kvm_host_cpu_state) - - (u64)kvm_ksym_ref(kvm_host_cpu_state); - - kvm_call_hyp(__kvm_set_tpidr_el2, tpidr_el2); + __kvm_call_hyp((void *)pgd_ptr, hyp_stack_ptr, vector_ptr, tpidr_el2); } static inline bool kvm_arch_check_sve_has_vhe(void) diff --git a/arch/arm64/kvm/hyp-init.S b/arch/arm64/kvm/hyp-init.S index 6fd91b31a131..ea9225160786 100644 --- a/arch/arm64/kvm/hyp-init.S +++ b/arch/arm64/kvm/hyp-init.S @@ -57,6 +57,7 @@ __invalid: * x0: HYP pgd * x1: HYP stack * x2: HYP vectors + * x3: per-CPU offset */ __do_hyp_init: /* Check for a stub HVC call */ @@ -119,9 +120,8 @@ CPU_BE( orr x4, x4, #SCTLR_ELx_EE) mov sp, x1 msr vbar_el2, x2 - /* copy tpidr_el1 into tpidr_el2 for use by HYP */ - mrs x1, tpidr_el1 - msr tpidr_el2, x1 + /* Set tpidr_el2 for use by HYP */ + msr tpidr_el2, x3 /* Hello, World! */ eret diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c index 35bc16832efe..9ce223944983 100644 --- a/arch/arm64/kvm/hyp/sysreg-sr.c +++ b/arch/arm64/kvm/hyp/sysreg-sr.c @@ -288,8 +288,3 @@ void kvm_vcpu_put_sysregs(struct kvm_vcpu *vcpu) vcpu->arch.sysregs_loaded_on_cpu = false; } - -void __hyp_text __kvm_set_tpidr_el2(u64 tpidr_el2) -{ - asm("msr tpidr_el2, %0": : "r" (tpidr_el2)); -} From patchwork Mon Aug 13 14:57:30 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10564341 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4ECE517E1 for ; Mon, 13 Aug 2018 14:59:30 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 3FF582942B for ; Mon, 13 Aug 2018 14:59:30 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 3407529434; Mon, 13 Aug 2018 14:59:30 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id BBC7B2942B for ; Mon, 13 Aug 2018 14:59:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729709AbeHMRmF (ORCPT ); Mon, 13 Aug 2018 13:42:05 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:60060 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729364AbeHMRmE (ORCPT ); Mon, 13 Aug 2018 13:42:04 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 1DAA51596; Mon, 13 Aug 2018 07:59:28 -0700 (PDT) Received: from approximate.Emea.Arm.com (approximate.Emea.Arm.com [10.4.13.119]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 2C9353F5D0; Mon, 13 Aug 2018 07:59:25 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Andre Przywara , Andrew Jones , Catalin Marinas , Christoffer Dall , Dongjiu Geng , Eric Auger , "Gustavo A . R . 
Silva" , James Morse , Jia He , Kees Cook , Mark Rutland , Punit Agrawal , Suzuki Poulose Subject: [PATCH 12/37] KVM: arm/arm64: vgic: Define GICD_IIDR fields for GICv2 and GICv3 Date: Mon, 13 Aug 2018 15:57:30 +0100 Message-Id: <20180813145755.16566-13-marc.zyngier@arm.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com> References: <20180813145755.16566-1-marc.zyngier@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Christoffer Dall Instead of hardcoding the shifts and masks in the GICD_IIDR register emulation, let's add the definition of these fields to the GIC header files and use them. This will make things more obvious when we bump the revision in the IIDR to reflect guest-visible changes to the implementation. Reviewed-by: Andrew Jones Signed-off-by: Christoffer Dall Signed-off-by: Marc Zyngier --- include/linux/irqchip/arm-gic-v3.h | 10 ++++++++++ include/linux/irqchip/arm-gic.h | 10 ++++++++++ virt/kvm/arm/vgic/vgic-mmio-v2.c | 3 ++- virt/kvm/arm/vgic/vgic-mmio-v3.c | 3 ++- 4 files changed, 24 insertions(+), 2 deletions(-) diff --git a/include/linux/irqchip/arm-gic-v3.h b/include/linux/irqchip/arm-gic-v3.h index cbb872c1b607..b22f9dfa61af 100644 --- a/include/linux/irqchip/arm-gic-v3.h +++ b/include/linux/irqchip/arm-gic-v3.h @@ -61,6 +61,16 @@ #define GICD_CTLR_ENABLE_G1A (1U << 1) #define GICD_CTLR_ENABLE_G1 (1U << 0) +#define GICD_IIDR_IMPLEMENTER_SHIFT 0 +#define GICD_IIDR_IMPLEMENTER_MASK (0xfff << GICD_IIDR_IMPLEMENTER_SHIFT) +#define GICD_IIDR_REVISION_SHIFT 12 +#define GICD_IIDR_REVISION_MASK (0xf << GICD_IIDR_REVISION_SHIFT) +#define GICD_IIDR_VARIANT_SHIFT 16 +#define GICD_IIDR_VARIANT_MASK (0xf << GICD_IIDR_VARIANT_SHIFT) +#define GICD_IIDR_PRODUCT_ID_SHIFT 24 +#define GICD_IIDR_PRODUCT_ID_MASK (0xff << GICD_IIDR_PRODUCT_ID_SHIFT) + + /* * In systems with a single security state (what we emulate in KVM) * the meaning of the interrupt group enable bits is slightly different diff --git a/include/linux/irqchip/arm-gic.h b/include/linux/irqchip/arm-gic.h index 68d8b1f73682..484f5bfa9f3d 100644 --- a/include/linux/irqchip/arm-gic.h +++ b/include/linux/irqchip/arm-gic.h @@ -71,6 +71,16 @@ (GICD_INT_DEF_PRI << 8) |\ GICD_INT_DEF_PRI) +#define GICD_IIDR_IMPLEMENTER_SHIFT 0 +#define GICD_IIDR_IMPLEMENTER_MASK (0xfff << GICD_IIDR_IMPLEMENTER_SHIFT) +#define GICD_IIDR_REVISION_SHIFT 12 +#define GICD_IIDR_REVISION_MASK (0xf << GICD_IIDR_REVISION_SHIFT) +#define GICD_IIDR_VARIANT_SHIFT 16 +#define GICD_IIDR_VARIANT_MASK (0xf << GICD_IIDR_VARIANT_SHIFT) +#define GICD_IIDR_PRODUCT_ID_SHIFT 24 +#define GICD_IIDR_PRODUCT_ID_MASK (0xff << GICD_IIDR_PRODUCT_ID_SHIFT) + + #define GICH_HCR 0x0 #define GICH_VTR 0x4 #define GICH_VMCR 0x8 diff --git a/virt/kvm/arm/vgic/vgic-mmio-v2.c b/virt/kvm/arm/vgic/vgic-mmio-v2.c index ffc587bf4742..af44e569373a 100644 --- a/virt/kvm/arm/vgic/vgic-mmio-v2.c +++ b/virt/kvm/arm/vgic/vgic-mmio-v2.c @@ -37,7 +37,8 @@ static unsigned long vgic_mmio_read_v2_misc(struct kvm_vcpu *vcpu, value |= (atomic_read(&vcpu->kvm->online_vcpus) - 1) << 5; break; case GIC_DIST_IIDR: - value = (PRODUCT_ID_KVM << 24) | (IMPLEMENTER_ARM << 0); + value = (PRODUCT_ID_KVM << GICD_IIDR_PRODUCT_ID_SHIFT) | + (IMPLEMENTER_ARM << GICD_IIDR_IMPLEMENTER_SHIFT); break; default: return 0; diff --git a/virt/kvm/arm/vgic/vgic-mmio-v3.c b/virt/kvm/arm/vgic/vgic-mmio-v3.c index 287784095b5b..c03f42409b98 100644 ---
a/virt/kvm/arm/vgic/vgic-mmio-v3.c +++ b/virt/kvm/arm/vgic/vgic-mmio-v3.c @@ -81,7 +81,8 @@ static unsigned long vgic_mmio_read_v3_misc(struct kvm_vcpu *vcpu, } break; case GICD_IIDR: - value = (PRODUCT_ID_KVM << 24) | (IMPLEMENTER_ARM << 0); + value = (PRODUCT_ID_KVM << GICD_IIDR_PRODUCT_ID_SHIFT) | + (IMPLEMENTER_ARM << GICD_IIDR_IMPLEMENTER_SHIFT); break; default: return 0; From patchwork Mon Aug 13 14:57:31 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10564343 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id AAD0214E2 for ; Mon, 13 Aug 2018 14:59:33 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 9A4EE29425 for ; Mon, 13 Aug 2018 14:59:33 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 8E9BA2942B; Mon, 13 Aug 2018 14:59:33 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 208A029425 for ; Mon, 13 Aug 2018 14:59:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729740AbeHMRmI (ORCPT ); Mon, 13 Aug 2018 13:42:08 -0400 Received: from foss.arm.com ([217.140.101.70]:60084 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729406AbeHMRmI (ORCPT ); Mon, 13 Aug 2018 13:42:08 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 4F55415B2; Mon, 13 Aug 2018 07:59:31 -0700 (PDT) Received: from approximate.Emea.Arm.com (approximate.Emea.Arm.com [10.4.13.119]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 5E01E3F5D0; Mon, 13 Aug 2018 07:59:28 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Andre Przywara , Andrew Jones , Catalin Marinas , Christoffer Dall , Dongjiu Geng , Eric Auger , "Gustavo A . R . Silva" , James Morse , Jia He , Kees Cook , Mark Rutland , Punit Agrawal , Suzuki Poulose Subject: [PATCH 13/37] KVM: arm/arm64: vgic: Keep track of implementation revision Date: Mon, 13 Aug 2018 15:57:31 +0100 Message-Id: <20180813145755.16566-14-marc.zyngier@arm.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com> References: <20180813145755.16566-1-marc.zyngier@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Christoffer Dall As we are about to tweak implementation aspects of the VGIC emulation, while still preserving some level of backwards compatibility support, add a field to keep track of the implementation revision field which is reported to the VM and to userspace. 
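For reference, the revision tracked here is folded into the 32-bit GICD_IIDR value the guest reads back, using the field layout added in the previous patch: ProductID in bits [31:24], Variant in [19:16], Revision in [15:12] and Implementer in [11:0]. The userspace sketch below (not part of the patch; it assumes the mainline values PRODUCT_ID_KVM == 0x4b, ASCII 'K', and IMPLEMENTER_ARM == 0x43b from vgic.h) shows how such a value decomposes:

#include <stdint.h>
#include <stdio.h>

/* Decode a GICD_IIDR value using the same field layout as the
 * GICD_IIDR_* definitions introduced in the header files. */
static void decode_gicd_iidr(uint32_t iidr)
{
	printf("ProductID  : 0x%02x\n", (iidr >> 24) & 0xffU);
	printf("Variant    : 0x%x\n",  (iidr >> 16) & 0xfU);
	printf("Revision   : %u\n",    (iidr >> 12) & 0xfU);
	printf("Implementer: 0x%03x\n", iidr & 0xfffU);
}

int main(void)
{
	/* Hypothetical value as read from KVM's emulated distributor
	 * before any revision bump: ProductID 'K', Revision 0,
	 * Implementer 0x43b. */
	decode_gicd_iidr(0x4b00043bU);
	return 0;
}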
Reviewed-by: Andrew Jones Signed-off-by: Christoffer Dall Signed-off-by: Marc Zyngier --- include/kvm/arm_vgic.h | 3 +++ virt/kvm/arm/vgic/vgic-init.c | 1 + virt/kvm/arm/vgic/vgic-mmio-v2.c | 6 ++++-- virt/kvm/arm/vgic/vgic-mmio-v3.c | 6 ++++-- 4 files changed, 12 insertions(+), 4 deletions(-) diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h index cfdd2484cc42..7e64c463ab4d 100644 --- a/include/kvm/arm_vgic.h +++ b/include/kvm/arm_vgic.h @@ -217,6 +217,9 @@ struct vgic_dist { /* vGIC model the kernel emulates for the guest (GICv2 or GICv3) */ u32 vgic_model; + /* Implementation revision as reported in the GICD_IIDR */ + u32 implementation_rev; + /* Do injected MSIs require an additional device ID? */ bool msis_require_devid; diff --git a/virt/kvm/arm/vgic/vgic-init.c b/virt/kvm/arm/vgic/vgic-init.c index b71417913741..8b6fc45c42fe 100644 --- a/virt/kvm/arm/vgic/vgic-init.c +++ b/virt/kvm/arm/vgic/vgic-init.c @@ -298,6 +298,7 @@ int vgic_init(struct kvm *kvm) vgic_debug_init(kvm); + dist->implementation_rev = 0; dist->initialized = true; out: diff --git a/virt/kvm/arm/vgic/vgic-mmio-v2.c b/virt/kvm/arm/vgic/vgic-mmio-v2.c index af44e569373a..f0c5351805b6 100644 --- a/virt/kvm/arm/vgic/vgic-mmio-v2.c +++ b/virt/kvm/arm/vgic/vgic-mmio-v2.c @@ -25,19 +25,21 @@ static unsigned long vgic_mmio_read_v2_misc(struct kvm_vcpu *vcpu, gpa_t addr, unsigned int len) { + struct vgic_dist *vgic = &vcpu->kvm->arch.vgic; u32 value; switch (addr & 0x0c) { case GIC_DIST_CTRL: - value = vcpu->kvm->arch.vgic.enabled ? GICD_ENABLE : 0; + value = vgic->enabled ? GICD_ENABLE : 0; break; case GIC_DIST_CTR: - value = vcpu->kvm->arch.vgic.nr_spis + VGIC_NR_PRIVATE_IRQS; + value = vgic->nr_spis + VGIC_NR_PRIVATE_IRQS; value = (value >> 5) - 1; value |= (atomic_read(&vcpu->kvm->online_vcpus) - 1) << 5; break; case GIC_DIST_IIDR: value = (PRODUCT_ID_KVM << GICD_IIDR_PRODUCT_ID_SHIFT) | + (vgic->implementation_rev << GICD_IIDR_REVISION_SHIFT) | (IMPLEMENTER_ARM << GICD_IIDR_IMPLEMENTER_SHIFT); break; default: diff --git a/virt/kvm/arm/vgic/vgic-mmio-v3.c b/virt/kvm/arm/vgic/vgic-mmio-v3.c index c03f42409b98..ebe10a015bfb 100644 --- a/virt/kvm/arm/vgic/vgic-mmio-v3.c +++ b/virt/kvm/arm/vgic/vgic-mmio-v3.c @@ -62,16 +62,17 @@ bool vgic_supports_direct_msis(struct kvm *kvm) static unsigned long vgic_mmio_read_v3_misc(struct kvm_vcpu *vcpu, gpa_t addr, unsigned int len) { + struct vgic_dist *vgic = &vcpu->kvm->arch.vgic; u32 value = 0; switch (addr & 0x0c) { case GICD_CTLR: - if (vcpu->kvm->arch.vgic.enabled) + if (vgic->enabled) value |= GICD_CTLR_ENABLE_SS_G1; value |= GICD_CTLR_ARE_NS | GICD_CTLR_DS; break; case GICD_TYPER: - value = vcpu->kvm->arch.vgic.nr_spis + VGIC_NR_PRIVATE_IRQS; + value = vgic->nr_spis + VGIC_NR_PRIVATE_IRQS; value = (value >> 5) - 1; if (vgic_has_its(vcpu->kvm)) { value |= (INTERRUPT_ID_BITS_ITS - 1) << 19; @@ -82,6 +83,7 @@ static unsigned long vgic_mmio_read_v3_misc(struct kvm_vcpu *vcpu, break; case GICD_IIDR: value = (PRODUCT_ID_KVM << GICD_IIDR_PRODUCT_ID_SHIFT) | + (vgic->implementation_rev << GICD_IIDR_REVISION_SHIFT) | (IMPLEMENTER_ARM << GICD_IIDR_IMPLEMENTER_SHIFT); break; default: From patchwork Mon Aug 13 14:57:32 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10564345 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id BA56B14E2 for ; Mon, 13 Aug 2018 
14:59:36 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id A9F6729425 for ; Mon, 13 Aug 2018 14:59:36 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 943642942B; Mon, 13 Aug 2018 14:59:36 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 2E3F929425 for ; Mon, 13 Aug 2018 14:59:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729748AbeHMRmL (ORCPT ); Mon, 13 Aug 2018 13:42:11 -0400 Received: from foss.arm.com ([217.140.101.70]:60104 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729364AbeHMRmL (ORCPT ); Mon, 13 Aug 2018 13:42:11 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 7FCBB15BE; Mon, 13 Aug 2018 07:59:34 -0700 (PDT) Received: from approximate.Emea.Arm.com (approximate.Emea.Arm.com [10.4.13.119]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 8E9903F5D0; Mon, 13 Aug 2018 07:59:31 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Andre Przywara , Andrew Jones , Catalin Marinas , Christoffer Dall , Dongjiu Geng , Eric Auger , "Gustavo A . R . Silva" , James Morse , Jia He , Kees Cook , Mark Rutland , Punit Agrawal , Suzuki Poulose Subject: [PATCH 14/37] KVM: arm/arm64: vgic: GICv2 IGROUPR should read as zero Date: Mon, 13 Aug 2018 15:57:32 +0100 Message-Id: <20180813145755.16566-15-marc.zyngier@arm.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com> References: <20180813145755.16566-1-marc.zyngier@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Christoffer Dall We currently don't support grouping in the emulated VGIC, which is a known defect on KVM (not hurting any currently used guests as far as we're aware). This is currently handled by treating all interrupts as group 0 interrupts for an emulated GICv2 and always signaling interrupts as group 0 to the virtual CPU interface. However, when reading which group interrupts belong to in the guest from the emulated VGIC, the VGIC currently reports group 1 instead of group 0, which is misleading. Fix this temporarily before introducing full group support by changing the handler to _raz instead of _rao.
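For context, the _raz and _rao suffixes refer to the generic "read as zero" and "read as one" MMIO read handlers in virt/kvm/arm/vgic/vgic-mmio.c; roughly (a sketch of their shape, not a verbatim copy of the in-tree code):

unsigned long vgic_mmio_read_raz(struct kvm_vcpu *vcpu,
				 gpa_t addr, unsigned int len)
{
	return 0;	/* every readable bit is 0 */
}

unsigned long vgic_mmio_read_rao(struct kvm_vcpu *vcpu,
				 gpa_t addr, unsigned int len)
{
	return -1UL;	/* every readable bit is 1; the caller truncates to the access width */
}

Switching GIC_DIST_IGROUP from vgic_mmio_read_rao to vgic_mmio_read_raz therefore makes the register report all interrupts as group 0, matching how they are actually signaled to the virtual CPU interface.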
Fixes: fb848db39661a "KVM: arm/arm64: vgic-new: Add GICv2 MMIO handling framework" Reviewed-by: Andrew Jones Signed-off-by: Christoffer Dall Signed-off-by: Marc Zyngier --- virt/kvm/arm/vgic/vgic-init.c | 2 +- virt/kvm/arm/vgic/vgic-mmio-v2.c | 8 +++++++- 2 files changed, 8 insertions(+), 2 deletions(-) diff --git a/virt/kvm/arm/vgic/vgic-init.c b/virt/kvm/arm/vgic/vgic-init.c index 8b6fc45c42fe..230c9221fe70 100644 --- a/virt/kvm/arm/vgic/vgic-init.c +++ b/virt/kvm/arm/vgic/vgic-init.c @@ -298,7 +298,7 @@ int vgic_init(struct kvm *kvm) vgic_debug_init(kvm); - dist->implementation_rev = 0; + dist->implementation_rev = 1; dist->initialized = true; out: diff --git a/virt/kvm/arm/vgic/vgic-mmio-v2.c b/virt/kvm/arm/vgic/vgic-mmio-v2.c index f0c5351805b6..db646f140e7d 100644 --- a/virt/kvm/arm/vgic/vgic-mmio-v2.c +++ b/virt/kvm/arm/vgic/vgic-mmio-v2.c @@ -22,6 +22,12 @@ #include "vgic.h" #include "vgic-mmio.h" +/* + * The Revision field in the IIDR have the following meanings: + * + * Revision 1: Report GICv2 interrupts as group 0 instead of group 1 + */ + static unsigned long vgic_mmio_read_v2_misc(struct kvm_vcpu *vcpu, gpa_t addr, unsigned int len) { @@ -365,7 +371,7 @@ static const struct vgic_register_region vgic_v2_dist_registers[] = { vgic_mmio_read_v2_misc, vgic_mmio_write_v2_misc, 12, VGIC_ACCESS_32bit), REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_IGROUP, - vgic_mmio_read_rao, vgic_mmio_write_wi, NULL, NULL, 1, + vgic_mmio_read_raz, vgic_mmio_write_wi, NULL, NULL, 1, VGIC_ACCESS_32bit), REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_ENABLE_SET, vgic_mmio_read_enable, vgic_mmio_write_senable, NULL, NULL, 1, From patchwork Mon Aug 13 14:57:33 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10564347 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 2A88714E2 for ; Mon, 13 Aug 2018 14:59:40 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 1ADC829425 for ; Mon, 13 Aug 2018 14:59:40 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 0E4BC2942B; Mon, 13 Aug 2018 14:59:40 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 8B98929425 for ; Mon, 13 Aug 2018 14:59:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729803AbeHMRmO (ORCPT ); Mon, 13 Aug 2018 13:42:14 -0400 Received: from foss.arm.com ([217.140.101.70]:60124 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729716AbeHMRmO (ORCPT ); Mon, 13 Aug 2018 13:42:14 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id B2354ED1; Mon, 13 Aug 2018 07:59:37 -0700 (PDT) Received: from approximate.Emea.Arm.com (approximate.Emea.Arm.com [10.4.13.119]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id C08273F5D0; Mon, 13 Aug 2018 07:59:34 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, 
kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Andre Przywara , Andrew Jones , Catalin Marinas , Christoffer Dall , Dongjiu Geng , Eric Auger , "Gustavo A . R . Silva" , James Morse , Jia He , Kees Cook , Mark Rutland , Punit Agrawal , Suzuki Poulose Subject: [PATCH 15/37] KVM: arm/arm64: vgic: Add group field to struct irq Date: Mon, 13 Aug 2018 15:57:33 +0100 Message-Id: <20180813145755.16566-16-marc.zyngier@arm.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com> References: <20180813145755.16566-1-marc.zyngier@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Christoffer Dall In preparation for proper group 0 and group 1 support in the vgic, we add a field in the struct irq to store the group of all interrupts. We initialize the group to group 0 when emulating GICv2 and to group 1 when emulating GICv3, just like we treat them today. LPIs are always group 1. We also continue to ignore writes from the guest, preserving existing functionality, for now. Finally, we also add this field to the vgic debug logic to show the group for all interrupts. Reviewed-by: Andrew Jones Signed-off-by: Christoffer Dall Signed-off-by: Marc Zyngier --- include/kvm/arm_vgic.h | 1 + virt/kvm/arm/vgic/vgic-debug.c | 8 +++++--- virt/kvm/arm/vgic/vgic-init.c | 19 +++++++++++++++++-- virt/kvm/arm/vgic/vgic-its.c | 1 + 4 files changed, 24 insertions(+), 5 deletions(-) diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h index 7e64c463ab4d..c661d0ee6628 100644 --- a/include/kvm/arm_vgic.h +++ b/include/kvm/arm_vgic.h @@ -133,6 +133,7 @@ struct vgic_irq { u8 source; /* GICv2 SGIs only */ u8 active_source; /* GICv2 SGIs only */ u8 priority; + u8 group; /* 0 == group 0, 1 == group 1 */ enum vgic_irq_config config; /* Level or edge */ /* diff --git a/virt/kvm/arm/vgic/vgic-debug.c b/virt/kvm/arm/vgic/vgic-debug.c index 9279e35fefb1..07aa900bac56 100644 --- a/virt/kvm/arm/vgic/vgic-debug.c +++ b/virt/kvm/arm/vgic/vgic-debug.c @@ -166,6 +166,7 @@ static void print_dist_state(struct seq_file *s, struct vgic_dist *dist) seq_printf(s, "P=pending_latch, L=line_level, A=active\n"); seq_printf(s, "E=enabled, H=hw, C=config (level=1, edge=0)\n"); + seq_printf(s, "G=group\n"); } static void print_header(struct seq_file *s, struct vgic_irq *irq, @@ -180,8 +181,8 @@ static void print_header(struct seq_file *s, struct vgic_irq *irq, } seq_printf(s, "\n"); - seq_printf(s, "%s%2d TYP ID TGT_ID PLAEHC HWID TARGET SRC PRI VCPU_ID\n", hdr, id); - seq_printf(s, "---------------------------------------------------------------\n"); + seq_printf(s, "%s%2d TYP ID TGT_ID PLAEHCG HWID TARGET SRC PRI VCPU_ID\n", hdr, id); + seq_printf(s, "----------------------------------------------------------------\n"); } static void print_irq_state(struct seq_file *s, struct vgic_irq *irq, @@ -202,7 +203,7 @@ static void print_irq_state(struct seq_file *s, struct vgic_irq *irq, seq_printf(s, " %s %4d " " %2d " - "%d%d%d%d%d%d " + "%d%d%d%d%d%d%d " "%8d " "%8x " " %2x " @@ -217,6 +218,7 @@ static void print_irq_state(struct seq_file *s, struct vgic_irq *irq, irq->enabled, irq->hw, irq->config == VGIC_CONFIG_LEVEL, + irq->group, irq->hwintid, irq->mpidr, irq->source, diff --git a/virt/kvm/arm/vgic/vgic-init.c b/virt/kvm/arm/vgic/vgic-init.c index 230c9221fe70..a7c19cda5835 100644 --- a/virt/kvm/arm/vgic/vgic-init.c +++ b/virt/kvm/arm/vgic/vgic-init.c @@ -175,10 +175,13 @@ static int 
kvm_vgic_dist_init(struct kvm *kvm, unsigned int nr_spis) irq->vcpu = NULL; irq->target_vcpu = vcpu0; kref_init(&irq->refcount); - if (dist->vgic_model == KVM_DEV_TYPE_ARM_VGIC_V2) + if (dist->vgic_model == KVM_DEV_TYPE_ARM_VGIC_V2) { irq->targets = 0; - else + irq->group = 0; + } else { irq->mpidr = 0; + irq->group = 1; + } } return 0; } @@ -227,6 +230,18 @@ int kvm_vgic_vcpu_init(struct kvm_vcpu *vcpu) /* PPIs */ irq->config = VGIC_CONFIG_LEVEL; } + + /* + * GICv3 can only be created via the KVM_DEVICE_CREATE API and + * so we always know the emulation type at this point as it's + * either explicitly configured as GICv3, or explicitly + * configured as GICv2, or not configured yet which also + * implies GICv2. + */ + if (dist->vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3) + irq->group = 1; + else + irq->group = 0; } if (!irqchip_in_kernel(vcpu->kvm)) diff --git a/virt/kvm/arm/vgic/vgic-its.c b/virt/kvm/arm/vgic/vgic-its.c index cee2c3c5519c..12502251727e 100644 --- a/virt/kvm/arm/vgic/vgic-its.c +++ b/virt/kvm/arm/vgic/vgic-its.c @@ -71,6 +71,7 @@ static struct vgic_irq *vgic_add_lpi(struct kvm *kvm, u32 intid, kref_init(&irq->refcount); irq->intid = intid; irq->target_vcpu = vcpu; + irq->group = 1; spin_lock_irqsave(&dist->lpi_list_lock, flags); From patchwork Mon Aug 13 14:57:34 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10564349 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 1831E14E2 for ; Mon, 13 Aug 2018 14:59:43 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 062272942A for ; Mon, 13 Aug 2018 14:59:43 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id EC96029432; Mon, 13 Aug 2018 14:59:42 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 8B1932942A for ; Mon, 13 Aug 2018 14:59:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729785AbeHMRmR (ORCPT ); Mon, 13 Aug 2018 13:42:17 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:60154 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729584AbeHMRmR (ORCPT ); Mon, 13 Aug 2018 13:42:17 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id E36911688; Mon, 13 Aug 2018 07:59:40 -0700 (PDT) Received: from approximate.Emea.Arm.com (approximate.Emea.Arm.com [10.4.13.119]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id F12FD3F5D0; Mon, 13 Aug 2018 07:59:37 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Andre Przywara , Andrew Jones , Catalin Marinas , Christoffer Dall , Dongjiu Geng , Eric Auger , "Gustavo A . R . 
Silva" , James Morse , Jia He , Kees Cook , Mark Rutland , Punit Agrawal , Suzuki Poulose Subject: [PATCH 16/37] KVM: arm/arm64: vgic: Signal IRQs using their configured group Date: Mon, 13 Aug 2018 15:57:34 +0100 Message-Id: <20180813145755.16566-17-marc.zyngier@arm.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com> References: <20180813145755.16566-1-marc.zyngier@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Christoffer Dall Now when we have a group configuration on the struct IRQ, use this state when populating the LR and signaling interrupts as either group 0 or group 1 to the VM. Depending on the model of the emulated GIC, and the guest's configuration of the VMCR, interrupts may be signaled as IRQs or FIQs to the VM. Reviewed-by: Andrew Jones Signed-off-by: Christoffer Dall Signed-off-by: Marc Zyngier --- include/linux/irqchip/arm-gic.h | 1 + virt/kvm/arm/vgic/vgic-v2.c | 3 +++ virt/kvm/arm/vgic/vgic-v3.c | 6 +----- 3 files changed, 5 insertions(+), 5 deletions(-) diff --git a/include/linux/irqchip/arm-gic.h b/include/linux/irqchip/arm-gic.h index 484f5bfa9f3d..6c4aaf04046c 100644 --- a/include/linux/irqchip/arm-gic.h +++ b/include/linux/irqchip/arm-gic.h @@ -104,6 +104,7 @@ #define GICH_LR_PENDING_BIT (1 << 28) #define GICH_LR_ACTIVE_BIT (1 << 29) #define GICH_LR_EOI (1 << 19) +#define GICH_LR_GROUP1 (1 << 30) #define GICH_LR_HW (1 << 31) #define GICH_VMCR_ENABLE_GRP0_SHIFT 0 diff --git a/virt/kvm/arm/vgic/vgic-v2.c b/virt/kvm/arm/vgic/vgic-v2.c index a5f2e44f1c33..df5e6a6e3186 100644 --- a/virt/kvm/arm/vgic/vgic-v2.c +++ b/virt/kvm/arm/vgic/vgic-v2.c @@ -159,6 +159,9 @@ void vgic_v2_populate_lr(struct kvm_vcpu *vcpu, struct vgic_irq *irq, int lr) } } + if (irq->group) + val |= GICH_LR_GROUP1; + if (irq->hw) { val |= GICH_LR_HW; val |= irq->hwintid << GICH_LR_PHYSID_CPUID_SHIFT; diff --git a/virt/kvm/arm/vgic/vgic-v3.c b/virt/kvm/arm/vgic/vgic-v3.c index cdce653e3c47..530b8491c892 100644 --- a/virt/kvm/arm/vgic/vgic-v3.c +++ b/virt/kvm/arm/vgic/vgic-v3.c @@ -197,11 +197,7 @@ void vgic_v3_populate_lr(struct kvm_vcpu *vcpu, struct vgic_irq *irq, int lr) if (vgic_irq_is_mapped_level(irq) && (val & ICH_LR_PENDING_BIT)) irq->line_level = false; - /* - * We currently only support Group1 interrupts, which is a - * known defect. This needs to be addressed at some point. 
- */ - if (model == KVM_DEV_TYPE_ARM_VGIC_V3) + if (irq->group) val |= ICH_LR_GROUP; val |= (u64)irq->priority << ICH_LR_PRIORITY_SHIFT; From patchwork Mon Aug 13 14:57:35 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10564353 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id A408917E1 for ; Mon, 13 Aug 2018 14:59:46 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 9431429425 for ; Mon, 13 Aug 2018 14:59:46 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 87BBA2942B; Mon, 13 Aug 2018 14:59:46 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 02E5C29425 for ; Mon, 13 Aug 2018 14:59:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729815AbeHMRmV (ORCPT ); Mon, 13 Aug 2018 13:42:21 -0400 Received: from foss.arm.com ([217.140.101.70]:60186 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729406AbeHMRmV (ORCPT ); Mon, 13 Aug 2018 13:42:21 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 208A180D; Mon, 13 Aug 2018 07:59:44 -0700 (PDT) Received: from approximate.Emea.Arm.com (approximate.Emea.Arm.com [10.4.13.119]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 2F7073F5D0; Mon, 13 Aug 2018 07:59:41 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Andre Przywara , Andrew Jones , Catalin Marinas , Christoffer Dall , Dongjiu Geng , Eric Auger , "Gustavo A . R . Silva" , James Morse , Jia He , Kees Cook , Mark Rutland , Punit Agrawal , Suzuki Poulose Subject: [PATCH 17/37] KVM: arm/arm64: vgic: Permit uaccess writes to return errors Date: Mon, 13 Aug 2018 15:57:35 +0100 Message-Id: <20180813145755.16566-18-marc.zyngier@arm.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com> References: <20180813145755.16566-1-marc.zyngier@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Christoffer Dall Currently we do not allow any vgic mmio write operations to fail, which makes sense from mmio traps from the guest. However, we should be able to report failures to userspace, if userspace writes incompatible values to read-only registers. Rework the internal interface to allow errors to be returned on the write side for userspace writes. 
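Concretely, guest-facing write callbacks keep a void return type (a trapped MMIO write has nowhere to report a failure to), while the uaccess_write callbacks reached through the KVM device-attribute interface now return int. A hypothetical handler for a read-only register could then look like the sketch below (illustration only; demo_read_ro_reg stands in for whatever read accessor the register uses):

static int demo_uaccess_write_ro_reg(struct kvm_vcpu *vcpu, gpa_t addr,
				     unsigned int len, unsigned long val)
{
	/* Only accept a value identical to what the register reads back;
	 * anything else is an incompatible restore and is rejected. */
	if (val != demo_read_ro_reg(vcpu, addr, len))
		return -EINVAL;	/* propagated back to the ioctl caller */

	return 0;
}

This is the pattern the next patch applies to GICD_IIDR.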
Reviewed-by: Andrew Jones Signed-off-by: Christoffer Dall Signed-off-by: Marc Zyngier --- virt/kvm/arm/vgic/vgic-mmio-v3.c | 12 +++++++----- virt/kvm/arm/vgic/vgic-mmio.c | 18 +++++++++++++----- virt/kvm/arm/vgic/vgic-mmio.h | 19 +++++++++++-------- 3 files changed, 31 insertions(+), 18 deletions(-) diff --git a/virt/kvm/arm/vgic/vgic-mmio-v3.c b/virt/kvm/arm/vgic/vgic-mmio-v3.c index ebe10a015bfb..ef57a1aa0b14 100644 --- a/virt/kvm/arm/vgic/vgic-mmio-v3.c +++ b/virt/kvm/arm/vgic/vgic-mmio-v3.c @@ -249,9 +249,9 @@ static unsigned long vgic_v3_uaccess_read_pending(struct kvm_vcpu *vcpu, return value; } -static void vgic_v3_uaccess_write_pending(struct kvm_vcpu *vcpu, - gpa_t addr, unsigned int len, - unsigned long val) +static int vgic_v3_uaccess_write_pending(struct kvm_vcpu *vcpu, + gpa_t addr, unsigned int len, + unsigned long val) { u32 intid = VGIC_ADDR_TO_INTID(addr, 1); int i; @@ -276,6 +276,8 @@ static void vgic_v3_uaccess_write_pending(struct kvm_vcpu *vcpu, vgic_put_irq(vcpu->kvm, irq); } + + return 0; } /* We want to avoid outer shareable. */ @@ -468,7 +470,7 @@ static const struct vgic_register_region vgic_v3_dist_registers[] = { VGIC_ACCESS_32bit), REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_ICPENDR, vgic_mmio_read_pending, vgic_mmio_write_cpending, - vgic_mmio_read_raz, vgic_mmio_write_wi, 1, + vgic_mmio_read_raz, vgic_mmio_uaccess_write_wi, 1, VGIC_ACCESS_32bit), REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_ISACTIVER, vgic_mmio_read_active, vgic_mmio_write_sactive, @@ -541,7 +543,7 @@ static const struct vgic_register_region vgic_v3_sgibase_registers[] = { VGIC_ACCESS_32bit), REGISTER_DESC_WITH_LENGTH_UACCESS(GICR_ICPENDR0, vgic_mmio_read_pending, vgic_mmio_write_cpending, - vgic_mmio_read_raz, vgic_mmio_write_wi, 4, + vgic_mmio_read_raz, vgic_mmio_uaccess_write_wi, 4, VGIC_ACCESS_32bit), REGISTER_DESC_WITH_LENGTH_UACCESS(GICR_ISACTIVER0, vgic_mmio_read_active, vgic_mmio_write_sactive, diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c index ff9655cfeb2f..e1e79989d473 100644 --- a/virt/kvm/arm/vgic/vgic-mmio.c +++ b/virt/kvm/arm/vgic/vgic-mmio.c @@ -40,6 +40,13 @@ void vgic_mmio_write_wi(struct kvm_vcpu *vcpu, gpa_t addr, /* Ignore */ } +int vgic_mmio_uaccess_write_wi(struct kvm_vcpu *vcpu, gpa_t addr, + unsigned int len, unsigned long val) +{ + /* Ignore */ + return 0; +} + /* * Read accesses to both GICD_ICENABLER and GICD_ISENABLER return the value * of the enabled bit, so there is only one function for both here. @@ -363,11 +370,12 @@ void vgic_mmio_write_cactive(struct kvm_vcpu *vcpu, mutex_unlock(&vcpu->kvm->lock); } -void vgic_mmio_uaccess_write_cactive(struct kvm_vcpu *vcpu, +int vgic_mmio_uaccess_write_cactive(struct kvm_vcpu *vcpu, gpa_t addr, unsigned int len, unsigned long val) { __vgic_mmio_write_cactive(vcpu, addr, len, val); + return 0; } static void __vgic_mmio_write_sactive(struct kvm_vcpu *vcpu, @@ -399,11 +407,12 @@ void vgic_mmio_write_sactive(struct kvm_vcpu *vcpu, mutex_unlock(&vcpu->kvm->lock); } -void vgic_mmio_uaccess_write_sactive(struct kvm_vcpu *vcpu, +int vgic_mmio_uaccess_write_sactive(struct kvm_vcpu *vcpu, gpa_t addr, unsigned int len, unsigned long val) { __vgic_mmio_write_sactive(vcpu, addr, len, val); + return 0; } unsigned long vgic_mmio_read_priority(struct kvm_vcpu *vcpu, @@ -735,10 +744,9 @@ static int vgic_uaccess_write(struct kvm_vcpu *vcpu, struct kvm_io_device *dev, r_vcpu = iodev->redist_vcpu ? 
iodev->redist_vcpu : vcpu; if (region->uaccess_write) - region->uaccess_write(r_vcpu, addr, sizeof(u32), *val); - else - region->write(r_vcpu, addr, sizeof(u32), *val); + return region->uaccess_write(r_vcpu, addr, sizeof(u32), *val); + region->write(r_vcpu, addr, sizeof(u32), *val); return 0; } diff --git a/virt/kvm/arm/vgic/vgic-mmio.h b/virt/kvm/arm/vgic/vgic-mmio.h index 5693f6df45ec..9c3d6d358014 100644 --- a/virt/kvm/arm/vgic/vgic-mmio.h +++ b/virt/kvm/arm/vgic/vgic-mmio.h @@ -37,8 +37,8 @@ struct vgic_register_region { unsigned long (*uaccess_read)(struct kvm_vcpu *vcpu, gpa_t addr, unsigned int len); union { - void (*uaccess_write)(struct kvm_vcpu *vcpu, gpa_t addr, - unsigned int len, unsigned long val); + int (*uaccess_write)(struct kvm_vcpu *vcpu, gpa_t addr, + unsigned int len, unsigned long val); int (*uaccess_its_write)(struct kvm *kvm, struct vgic_its *its, gpa_t addr, unsigned int len, unsigned long val); @@ -134,6 +134,9 @@ unsigned long vgic_mmio_read_rao(struct kvm_vcpu *vcpu, void vgic_mmio_write_wi(struct kvm_vcpu *vcpu, gpa_t addr, unsigned int len, unsigned long val); +int vgic_mmio_uaccess_write_wi(struct kvm_vcpu *vcpu, gpa_t addr, + unsigned int len, unsigned long val); + unsigned long vgic_mmio_read_enable(struct kvm_vcpu *vcpu, gpa_t addr, unsigned int len); @@ -167,13 +170,13 @@ void vgic_mmio_write_sactive(struct kvm_vcpu *vcpu, gpa_t addr, unsigned int len, unsigned long val); -void vgic_mmio_uaccess_write_cactive(struct kvm_vcpu *vcpu, - gpa_t addr, unsigned int len, - unsigned long val); +int vgic_mmio_uaccess_write_cactive(struct kvm_vcpu *vcpu, + gpa_t addr, unsigned int len, + unsigned long val); -void vgic_mmio_uaccess_write_sactive(struct kvm_vcpu *vcpu, - gpa_t addr, unsigned int len, - unsigned long val); +int vgic_mmio_uaccess_write_sactive(struct kvm_vcpu *vcpu, + gpa_t addr, unsigned int len, + unsigned long val); unsigned long vgic_mmio_read_priority(struct kvm_vcpu *vcpu, gpa_t addr, unsigned int len); From patchwork Mon Aug 13 14:57:36 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10564355 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 63E1D14E2 for ; Mon, 13 Aug 2018 14:59:49 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 53C5A29425 for ; Mon, 13 Aug 2018 14:59:49 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 483D82942B; Mon, 13 Aug 2018 14:59:49 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id D8F9529425 for ; Mon, 13 Aug 2018 14:59:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729835AbeHMRmY (ORCPT ); Mon, 13 Aug 2018 13:42:24 -0400 Received: from foss.arm.com ([217.140.101.70]:60220 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729364AbeHMRmY (ORCPT ); Mon, 13 Aug 2018 13:42:24 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 
501E515BF; Mon, 13 Aug 2018 07:59:47 -0700 (PDT) Received: from approximate.Emea.Arm.com (approximate.Emea.Arm.com [10.4.13.119]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 5F1953F5D0; Mon, 13 Aug 2018 07:59:44 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Andre Przywara , Andrew Jones , Catalin Marinas , Christoffer Dall , Dongjiu Geng , Eric Auger , "Gustavo A . R . Silva" , James Morse , Jia He , Kees Cook , Mark Rutland , Punit Agrawal , Suzuki Poulose Subject: [PATCH 18/37] KVM: arm/arm64: vgic: Return error on incompatible uaccess GICD_IIDR writes Date: Mon, 13 Aug 2018 15:57:36 +0100 Message-Id: <20180813145755.16566-19-marc.zyngier@arm.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com> References: <20180813145755.16566-1-marc.zyngier@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Christoffer Dall If userspace attempts to write a GICD_IIDR that does not match the kernel version, return an error to userspace. The intention is to allow implementation changes inside KVM while avoiding silently breaking migration resulting in guests not running without any clear indication of what went wrong. Reviewed-by: Andrew Jones Signed-off-by: Christoffer Dall Signed-off-by: Marc Zyngier --- virt/kvm/arm/vgic/vgic-mmio-v2.c | 21 ++++++++++++++++++--- virt/kvm/arm/vgic/vgic-mmio-v3.c | 21 ++++++++++++++++++--- 2 files changed, 36 insertions(+), 6 deletions(-) diff --git a/virt/kvm/arm/vgic/vgic-mmio-v2.c b/virt/kvm/arm/vgic/vgic-mmio-v2.c index db646f140e7d..4f0f2c4165ec 100644 --- a/virt/kvm/arm/vgic/vgic-mmio-v2.c +++ b/virt/kvm/arm/vgic/vgic-mmio-v2.c @@ -75,6 +75,20 @@ static void vgic_mmio_write_v2_misc(struct kvm_vcpu *vcpu, } } +static int vgic_mmio_uaccess_write_v2_misc(struct kvm_vcpu *vcpu, + gpa_t addr, unsigned int len, + unsigned long val) +{ + switch (addr & 0x0c) { + case GIC_DIST_IIDR: + if (val != vgic_mmio_read_v2_misc(vcpu, addr, len)) + return -EINVAL; + } + + vgic_mmio_write_v2_misc(vcpu, addr, len, val); + return 0; +} + static void vgic_mmio_write_sgir(struct kvm_vcpu *source_vcpu, gpa_t addr, unsigned int len, unsigned long val) @@ -367,9 +381,10 @@ static void vgic_mmio_write_apr(struct kvm_vcpu *vcpu, } static const struct vgic_register_region vgic_v2_dist_registers[] = { - REGISTER_DESC_WITH_LENGTH(GIC_DIST_CTRL, - vgic_mmio_read_v2_misc, vgic_mmio_write_v2_misc, 12, - VGIC_ACCESS_32bit), + REGISTER_DESC_WITH_LENGTH_UACCESS(GIC_DIST_CTRL, + vgic_mmio_read_v2_misc, vgic_mmio_write_v2_misc, + NULL, vgic_mmio_uaccess_write_v2_misc, + 12, VGIC_ACCESS_32bit), REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_IGROUP, vgic_mmio_read_raz, vgic_mmio_write_wi, NULL, NULL, 1, VGIC_ACCESS_32bit), diff --git a/virt/kvm/arm/vgic/vgic-mmio-v3.c b/virt/kvm/arm/vgic/vgic-mmio-v3.c index ef57a1aa0b14..abdb0ec77b38 100644 --- a/virt/kvm/arm/vgic/vgic-mmio-v3.c +++ b/virt/kvm/arm/vgic/vgic-mmio-v3.c @@ -113,6 +113,20 @@ static void vgic_mmio_write_v3_misc(struct kvm_vcpu *vcpu, } } +static int vgic_mmio_uaccess_write_v3_misc(struct kvm_vcpu *vcpu, + gpa_t addr, unsigned int len, + unsigned long val) +{ + switch (addr & 0x0c) { + case GICD_IIDR: + if (val != vgic_mmio_read_v3_misc(vcpu, addr, len)) + return -EINVAL; + } + + vgic_mmio_write_v3_misc(vcpu, addr, len, val); + return 0; +} + static unsigned long 
vgic_mmio_read_irouter(struct kvm_vcpu *vcpu, gpa_t addr, unsigned int len) { @@ -449,9 +463,10 @@ static void vgic_mmio_write_pendbase(struct kvm_vcpu *vcpu, } static const struct vgic_register_region vgic_v3_dist_registers[] = { - REGISTER_DESC_WITH_LENGTH(GICD_CTLR, - vgic_mmio_read_v3_misc, vgic_mmio_write_v3_misc, 16, - VGIC_ACCESS_32bit), + REGISTER_DESC_WITH_LENGTH_UACCESS(GICD_CTLR, + vgic_mmio_read_v3_misc, vgic_mmio_write_v3_misc, + NULL, vgic_mmio_uaccess_write_v3_misc, + 16, VGIC_ACCESS_32bit), REGISTER_DESC_WITH_LENGTH(GICD_STATUSR, vgic_mmio_read_rao, vgic_mmio_write_wi, 4, VGIC_ACCESS_32bit), From patchwork Mon Aug 13 14:57:37 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10564359 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id EA6C914E2 for ; Mon, 13 Aug 2018 14:59:52 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id DA49029425 for ; Mon, 13 Aug 2018 14:59:52 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id CEC7F2942B; Mon, 13 Aug 2018 14:59:52 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 45AAB29425 for ; Mon, 13 Aug 2018 14:59:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729860AbeHMRm1 (ORCPT ); Mon, 13 Aug 2018 13:42:27 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:60250 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729846AbeHMRm1 (ORCPT ); Mon, 13 Aug 2018 13:42:27 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 8175A168F; Mon, 13 Aug 2018 07:59:50 -0700 (PDT) Received: from approximate.Emea.Arm.com (approximate.Emea.Arm.com [10.4.13.119]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 900213F5D0; Mon, 13 Aug 2018 07:59:47 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Andre Przywara , Andrew Jones , Catalin Marinas , Christoffer Dall , Dongjiu Geng , Eric Auger , "Gustavo A . R . Silva" , James Morse , Jia He , Kees Cook , Mark Rutland , Punit Agrawal , Suzuki Poulose Subject: [PATCH 19/37] KVM: arm/arm64: vgic: Allow configuration of interrupt groups Date: Mon, 13 Aug 2018 15:57:37 +0100 Message-Id: <20180813145755.16566-20-marc.zyngier@arm.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com> References: <20180813145755.16566-1-marc.zyngier@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Christoffer Dall Implement the required MMIO accessors for GICv2 and GICv3 for the IGROUPR distributor and redistributor registers. 
This can allow guests to change behavior compared to running on previous versions of KVM, but only to align with the architecture and hardware implementations. This also allows userspace to configure the interrupts groups for GICv3. We don't allow userspace to write the groups on GICv2 just yet, because that would result in GICv2 guests not receiving interrupts after migrating from an older kernel that exposes GICv2 interrupts as group 1. Reviewed-by: Andrew Jones Signed-off-by: Christoffer Dall Signed-off-by: Marc Zyngier --- virt/kvm/arm/vgic/vgic-init.c | 2 +- virt/kvm/arm/vgic/vgic-mmio-v2.c | 13 ++++++++++- virt/kvm/arm/vgic/vgic-mmio-v3.c | 11 +++++++-- virt/kvm/arm/vgic/vgic-mmio.c | 38 ++++++++++++++++++++++++++++++++ virt/kvm/arm/vgic/vgic-mmio.h | 6 +++++ 5 files changed, 66 insertions(+), 4 deletions(-) diff --git a/virt/kvm/arm/vgic/vgic-init.c b/virt/kvm/arm/vgic/vgic-init.c index a7c19cda5835..c0c0b88af1d5 100644 --- a/virt/kvm/arm/vgic/vgic-init.c +++ b/virt/kvm/arm/vgic/vgic-init.c @@ -313,7 +313,7 @@ int vgic_init(struct kvm *kvm) vgic_debug_init(kvm); - dist->implementation_rev = 1; + dist->implementation_rev = 2; dist->initialized = true; out: diff --git a/virt/kvm/arm/vgic/vgic-mmio-v2.c b/virt/kvm/arm/vgic/vgic-mmio-v2.c index 4f0f2c4165ec..ee164f831401 100644 --- a/virt/kvm/arm/vgic/vgic-mmio-v2.c +++ b/virt/kvm/arm/vgic/vgic-mmio-v2.c @@ -26,6 +26,8 @@ * The Revision field in the IIDR have the following meanings: * * Revision 1: Report GICv2 interrupts as group 0 instead of group 1 + * Revision 2: Interrupt groups are guest-configurable and signaled using + * their configured groups. */ static unsigned long vgic_mmio_read_v2_misc(struct kvm_vcpu *vcpu, @@ -89,6 +91,14 @@ static int vgic_mmio_uaccess_write_v2_misc(struct kvm_vcpu *vcpu, return 0; } +static int vgic_mmio_uaccess_write_v2_group(struct kvm_vcpu *vcpu, + gpa_t addr, unsigned int len, + unsigned long val) +{ + /* Ignore writes from userspace */ + return 0; +} + static void vgic_mmio_write_sgir(struct kvm_vcpu *source_vcpu, gpa_t addr, unsigned int len, unsigned long val) @@ -386,7 +396,8 @@ static const struct vgic_register_region vgic_v2_dist_registers[] = { NULL, vgic_mmio_uaccess_write_v2_misc, 12, VGIC_ACCESS_32bit), REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_IGROUP, - vgic_mmio_read_raz, vgic_mmio_write_wi, NULL, NULL, 1, + vgic_mmio_read_group, vgic_mmio_write_group, + NULL, vgic_mmio_uaccess_write_v2_group, 1, VGIC_ACCESS_32bit), REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_ENABLE_SET, vgic_mmio_read_enable, vgic_mmio_write_senable, NULL, NULL, 1, diff --git a/virt/kvm/arm/vgic/vgic-mmio-v3.c b/virt/kvm/arm/vgic/vgic-mmio-v3.c index abdb0ec77b38..88e78b582139 100644 --- a/virt/kvm/arm/vgic/vgic-mmio-v3.c +++ b/virt/kvm/arm/vgic/vgic-mmio-v3.c @@ -59,6 +59,13 @@ bool vgic_supports_direct_msis(struct kvm *kvm) return kvm_vgic_global_state.has_gicv4 && vgic_has_its(kvm); } +/* + * The Revision field in the IIDR have the following meanings: + * + * Revision 2: Interrupt groups are guest-configurable and signaled using + * their configured groups. 
+ */ + static unsigned long vgic_mmio_read_v3_misc(struct kvm_vcpu *vcpu, gpa_t addr, unsigned int len) { @@ -471,7 +478,7 @@ static const struct vgic_register_region vgic_v3_dist_registers[] = { vgic_mmio_read_rao, vgic_mmio_write_wi, 4, VGIC_ACCESS_32bit), REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_IGROUPR, - vgic_mmio_read_rao, vgic_mmio_write_wi, NULL, NULL, 1, + vgic_mmio_read_group, vgic_mmio_write_group, NULL, NULL, 1, VGIC_ACCESS_32bit), REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_ISENABLER, vgic_mmio_read_enable, vgic_mmio_write_senable, NULL, NULL, 1, @@ -544,7 +551,7 @@ static const struct vgic_register_region vgic_v3_rdbase_registers[] = { static const struct vgic_register_region vgic_v3_sgibase_registers[] = { REGISTER_DESC_WITH_LENGTH(GICR_IGROUPR0, - vgic_mmio_read_rao, vgic_mmio_write_wi, 4, + vgic_mmio_read_group, vgic_mmio_write_group, 4, VGIC_ACCESS_32bit), REGISTER_DESC_WITH_LENGTH(GICR_ISENABLER0, vgic_mmio_read_enable, vgic_mmio_write_senable, 4, diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c index e1e79989d473..f56ff1cf52ec 100644 --- a/virt/kvm/arm/vgic/vgic-mmio.c +++ b/virt/kvm/arm/vgic/vgic-mmio.c @@ -47,6 +47,44 @@ int vgic_mmio_uaccess_write_wi(struct kvm_vcpu *vcpu, gpa_t addr, return 0; } +unsigned long vgic_mmio_read_group(struct kvm_vcpu *vcpu, + gpa_t addr, unsigned int len) +{ + u32 intid = VGIC_ADDR_TO_INTID(addr, 1); + u32 value = 0; + int i; + + /* Loop over all IRQs affected by this read */ + for (i = 0; i < len * 8; i++) { + struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i); + + if (irq->group) + value |= BIT(i); + + vgic_put_irq(vcpu->kvm, irq); + } + + return value; +} + +void vgic_mmio_write_group(struct kvm_vcpu *vcpu, gpa_t addr, + unsigned int len, unsigned long val) +{ + u32 intid = VGIC_ADDR_TO_INTID(addr, 1); + int i; + unsigned long flags; + + for (i = 0; i < len * 8; i++) { + struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i); + + spin_lock_irqsave(&irq->irq_lock, flags); + irq->group = !!(val & BIT(i)); + vgic_queue_irq_unlock(vcpu->kvm, irq, flags); + + vgic_put_irq(vcpu->kvm, irq); + } +} + /* * Read accesses to both GICD_ICENABLER and GICD_ISENABLER return the value * of the enabled bit, so there is only one function for both here. 
diff --git a/virt/kvm/arm/vgic/vgic-mmio.h b/virt/kvm/arm/vgic/vgic-mmio.h index 9c3d6d358014..a07f90acdaec 100644 --- a/virt/kvm/arm/vgic/vgic-mmio.h +++ b/virt/kvm/arm/vgic/vgic-mmio.h @@ -137,6 +137,12 @@ void vgic_mmio_write_wi(struct kvm_vcpu *vcpu, gpa_t addr, int vgic_mmio_uaccess_write_wi(struct kvm_vcpu *vcpu, gpa_t addr, unsigned int len, unsigned long val); +unsigned long vgic_mmio_read_group(struct kvm_vcpu *vcpu, gpa_t addr, + unsigned int len); + +void vgic_mmio_write_group(struct kvm_vcpu *vcpu, gpa_t addr, + unsigned int len, unsigned long val); + unsigned long vgic_mmio_read_enable(struct kvm_vcpu *vcpu, gpa_t addr, unsigned int len); From patchwork Mon Aug 13 14:57:38 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10564361 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id B4A7214E2 for ; Mon, 13 Aug 2018 14:59:55 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id A456E29425 for ; Mon, 13 Aug 2018 14:59:55 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 989762942B; Mon, 13 Aug 2018 14:59:55 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 308A129425 for ; Mon, 13 Aug 2018 14:59:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729882AbeHMRma (ORCPT ); Mon, 13 Aug 2018 13:42:30 -0400 Received: from foss.arm.com ([217.140.101.70]:60280 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729852AbeHMRma (ORCPT ); Mon, 13 Aug 2018 13:42:30 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 8D92EED1; Mon, 13 Aug 2018 07:59:53 -0700 (PDT) Received: from approximate.Emea.Arm.com (approximate.Emea.Arm.com [10.4.13.119]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id BFF443F5D0; Mon, 13 Aug 2018 07:59:50 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Andre Przywara , Andrew Jones , Catalin Marinas , Christoffer Dall , Dongjiu Geng , Eric Auger , "Gustavo A . R . Silva" , James Morse , Jia He , Kees Cook , Mark Rutland , Punit Agrawal , Suzuki Poulose Subject: [PATCH 20/37] KVM: arm/arm64: vgic: Let userspace opt-in to writable v2 IGROUPR Date: Mon, 13 Aug 2018 15:57:38 +0100 Message-Id: <20180813145755.16566-21-marc.zyngier@arm.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com> References: <20180813145755.16566-1-marc.zyngier@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Christoffer Dall Simply letting IGROUPR be writable from userspace would break migration from old kernels to newer kernels, because old kernels incorrectly report interrupt groups as group 1. 
This would not be a big problem if userspace wrote GICD_IIDR as read from the kernel, because we could detect the incompatibility and return an error to userspace. Unfortunately, this is not the case with current userspace implementations and simply letting IGROUPR be writable from userspace for an emulated GICv2 silently breaks migration and causes the destination VM to no longer run after migration. We now encourage userspace to write the read and expected value of GICD_IIDR as the first part of a GIC register restore, and if we observe a write to GICD_IIDR we know that userspace has been updated and has had a chance to cope with older kernels (VGICv2 IIDR.Revision == 0) incorrectly reporting interrupts as group 1, and therefore we now allow groups to be user writable. Reviewed-by: Andrew Jones Signed-off-by: Christoffer Dall Signed-off-by: Marc Zyngier --- include/kvm/arm_vgic.h | 3 +++ virt/kvm/arm/vgic/vgic-mmio-v2.c | 16 +++++++++++++++- 2 files changed, 18 insertions(+), 1 deletion(-) diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h index c661d0ee6628..c134790be32c 100644 --- a/include/kvm/arm_vgic.h +++ b/include/kvm/arm_vgic.h @@ -221,6 +221,9 @@ struct vgic_dist { /* Implementation revision as reported in the GICD_IIDR */ u32 implementation_rev; + /* Userspace can write to GICv2 IGROUPR */ + bool v2_groups_user_writable; + /* Do injected MSIs require an additional device ID? */ bool msis_require_devid; diff --git a/virt/kvm/arm/vgic/vgic-mmio-v2.c b/virt/kvm/arm/vgic/vgic-mmio-v2.c index ee164f831401..26654f4140ed 100644 --- a/virt/kvm/arm/vgic/vgic-mmio-v2.c +++ b/virt/kvm/arm/vgic/vgic-mmio-v2.c @@ -85,6 +85,18 @@ static int vgic_mmio_uaccess_write_v2_misc(struct kvm_vcpu *vcpu, case GIC_DIST_IIDR: if (val != vgic_mmio_read_v2_misc(vcpu, addr, len)) return -EINVAL; + + /* + * If we observe a write to GICD_IIDR we know that userspace + * has been updated and has had a chance to cope with older + * kernels (VGICv2 IIDR.Revision == 0) incorrectly reporting + * interrupts as group 1, and therefore we now allow groups to + * be user writable. Doing this by default would break + * migration from old kernels to new kernels with legacy + * userspace. 
+ */ + vcpu->kvm->arch.vgic.v2_groups_user_writable = true; + return 0; } vgic_mmio_write_v2_misc(vcpu, addr, len, val); @@ -95,7 +107,9 @@ static int vgic_mmio_uaccess_write_v2_group(struct kvm_vcpu *vcpu, gpa_t addr, unsigned int len, unsigned long val) { - /* Ignore writes from userspace */ + if (vcpu->kvm->arch.vgic.v2_groups_user_writable) + vgic_mmio_write_group(vcpu, addr, len, val); + return 0; } From patchwork Mon Aug 13 14:57:39 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10564363 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 3CB4C17E1 for ; Mon, 13 Aug 2018 14:59:58 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 2CB6729425 for ; Mon, 13 Aug 2018 14:59:58 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 2123D2942B; Mon, 13 Aug 2018 14:59:58 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id A99EE29425 for ; Mon, 13 Aug 2018 14:59:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729915AbeHMRmd (ORCPT ); Mon, 13 Aug 2018 13:42:33 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:60306 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729257AbeHMRmd (ORCPT ); Mon, 13 Aug 2018 13:42:33 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id BD04080D; Mon, 13 Aug 2018 07:59:56 -0700 (PDT) Received: from approximate.Emea.Arm.com (approximate.Emea.Arm.com [10.4.13.119]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id CC0393F5D0; Mon, 13 Aug 2018 07:59:53 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Andre Przywara , Andrew Jones , Catalin Marinas , Christoffer Dall , Dongjiu Geng , Eric Auger , "Gustavo A . R . Silva" , James Morse , Jia He , Kees Cook , Mark Rutland , Punit Agrawal , Suzuki Poulose Subject: [PATCH 21/37] KVM: arm/arm64: vgic: Update documentation of the GIC devices wrt IIDR Date: Mon, 13 Aug 2018 15:57:39 +0100 Message-Id: <20180813145755.16566-22-marc.zyngier@arm.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com> References: <20180813145755.16566-1-marc.zyngier@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Christoffer Dall Update the documentation to reflect the ordering requirements of restoring the GICD_IIDR register before any other registers and the effects this has on restoring the interrupt groups for an emulated GICv2 instance. Also remove some outdated limitations in the documentation while we're at it. 
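For illustration, the ordering requirement being documented boils down to the following userspace sequence (a rough sketch only; dist_write32() is a made-up helper standing in for whatever KVM_DEV_ARM_VGIC_GRP_DIST_REGS accessor a VMM already has, and is not part of this series):

#include <stdint.h>

#define GICD_IIDR_OFFSET 0x0008 /* architectural offset of GICD_IIDR */

/* Assumed helper: writes one 32-bit distributor register through
 * KVM_DEV_ARM_VGIC_GRP_DIST_REGS.  Not defined by this series. */
extern int dist_write32(int gic_fd, uint32_t offset, uint32_t val);

/* offsets[]/saved[] hold the distributor state captured on the source,
 * with the GICD_IIDR value at index 0. */
static int restore_gicv2_dist_regs(int gic_fd, const uint32_t *offsets,
                                   const uint32_t *saved, int nr_regs)
{
        int i, ret;

        /*
         * Write GICD_IIDR first.  A destination kernel whose emulation is
         * incompatible with the saved revision rejects this with -EINVAL,
         * so the restore fails visibly instead of producing a broken guest.
         * For an emulated GICv2 this write is also what makes later
         * GICD_IGROUPR writes take effect rather than being ignored.
         */
        ret = dist_write32(gic_fd, GICD_IIDR_OFFSET, saved[0]);
        if (ret)
                return ret;

        /* Only then restore the remaining registers. */
        for (i = 1; i < nr_regs; i++) {
                ret = dist_write32(gic_fd, offsets[i], saved[i]);
                if (ret)
                        return ret;
        }

        return 0;
}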
Reviewed-by: Andrew Jones Signed-off-by: Christoffer Dall Signed-off-by: Marc Zyngier --- Documentation/virtual/kvm/devices/arm-vgic-v3.txt | 8 ++++++++ Documentation/virtual/kvm/devices/arm-vgic.txt | 15 +++++++++------ 2 files changed, 17 insertions(+), 6 deletions(-) diff --git a/Documentation/virtual/kvm/devices/arm-vgic-v3.txt b/Documentation/virtual/kvm/devices/arm-vgic-v3.txt index 2408ab720ef7..ff290b43c8e5 100644 --- a/Documentation/virtual/kvm/devices/arm-vgic-v3.txt +++ b/Documentation/virtual/kvm/devices/arm-vgic-v3.txt @@ -100,6 +100,14 @@ Groups: Note that distributor fields are not banked, but return the same value regardless of the mpidr used to access the register. + GICD_IIDR.Revision is updated when the KVM implementation is changed in a + way directly observable by the guest or userspace. Userspace should read + GICD_IIDR from KVM and write back the read value to confirm its expected + behavior is aligned with the KVM implementation. Userspace should set + GICD_IIDR before setting any other registers to ensure the expected + behavior. + + The GICD_STATUSR and GICR_STATUSR registers are architecturally defined such that a write of a clear bit has no effect, whereas a write with a set bit clears that value. To allow userspace to freely set the values of these two diff --git a/Documentation/virtual/kvm/devices/arm-vgic.txt b/Documentation/virtual/kvm/devices/arm-vgic.txt index b3ce12643553..97b6518148f8 100644 --- a/Documentation/virtual/kvm/devices/arm-vgic.txt +++ b/Documentation/virtual/kvm/devices/arm-vgic.txt @@ -49,9 +49,15 @@ Groups: index is specified with the vcpu_index field. Note that most distributor fields are not banked, but return the same value regardless of the vcpu_index used to access the register. - Limitations: - - Priorities are not implemented, and registers are RAZ/WI - - Currently only implemented for KVM_DEV_TYPE_ARM_VGIC_V2. + + GICD_IIDR.Revision is updated when the KVM implementation of an emulated + GICv2 is changed in a way directly observable by the guest or userspace. + Userspace should read GICD_IIDR from KVM and write back the read value to + confirm its expected behavior is aligned with the KVM implementation. + Userspace should set GICD_IIDR before setting any other registers (both + KVM_DEV_ARM_VGIC_GRP_DIST_REGS and KVM_DEV_ARM_VGIC_GRP_CPU_REGS) to ensure + the expected behavior. Unless GICD_IIDR has been set from userspace, writes + to the interrupt group registers (GICD_IGROUPR) are ignored. Errors: -ENXIO: Getting or setting this register is not yet supported -EBUSY: One or more VCPUs are running @@ -94,9 +100,6 @@ Groups: use the lower 5 bits to communicate with the KVM device and must shift the value left by 3 places to obtain the actual priority mask level. - Limitations: - - Priorities are not implemented, and registers are RAZ/WI - - Currently only implemented for KVM_DEV_TYPE_ARM_VGIC_V2. 
Errors: -ENXIO: Getting or setting this register is not yet supported -EBUSY: One or more VCPUs are running
From patchwork Mon Aug 13 14:57:40 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10564365 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 1B8371057 for ; Mon, 13 Aug 2018 15:00:03 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 09D8629434 for ; Mon, 13 Aug 2018 15:00:03 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id F163629437; Mon, 13 Aug 2018 15:00:02 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 1578C29434 for ; Mon, 13 Aug 2018 15:00:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729852AbeHMRmh (ORCPT ); Mon, 13 Aug 2018 13:42:37 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:60328 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729702AbeHMRmh (ORCPT ); Mon, 13 Aug 2018 13:42:37 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id EF8D015B2; Mon, 13 Aug 2018 07:59:59 -0700 (PDT) Received: from approximate.Emea.Arm.com (approximate.Emea.Arm.com [10.4.13.119]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 08DFF3F5D0; Mon, 13 Aug 2018 07:59:56 -0700 (PDT)
From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Andre Przywara , Andrew Jones , Catalin Marinas , Christoffer Dall , Dongjiu Geng , Eric Auger , "Gustavo A . R . Silva" , James Morse , Jia He , Kees Cook , Mark Rutland , Punit Agrawal , Suzuki Poulose
Subject: [PATCH 22/37] arm/arm64: KVM: Add KVM_GET/SET_VCPU_EVENTS Date: Mon, 13 Aug 2018 15:57:40 +0100 Message-Id: <20180813145755.16566-23-marc.zyngier@arm.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com> References: <20180813145755.16566-1-marc.zyngier@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP
From: Dongjiu Geng

When migrating a VM, user space may need to know the exception state. For example, if KVM has made an SError pending on machine A, an SError must also be pending on machine B after the migration. This new IOCTL exports the user-invisible state related to SError. Together with appropriate user space changes, user space can get/set the SError exception state for migration, snapshot and suspend.
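As a rough illustration (not part of the patch, and with error handling omitted), a VMM could use the new interface on its save and restore paths roughly as follows, using only the structure and ioctls added here:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Save the SError 'event' state of a stopped VCPU on the source host. */
static int save_serror_state(int vcpu_fd, struct kvm_vcpu_events *saved)
{
        return ioctl(vcpu_fd, KVM_GET_VCPU_EVENTS, saved);
}

/* Make the same virtual SError (if any) pending on the destination VCPU. */
static int restore_serror_state(int vcpu_fd,
                                const struct kvm_vcpu_events *saved)
{
        struct kvm_vcpu_events events;

        memset(&events, 0, sizeof(events)); /* pad and reserved must be zero */
        events.exception.serror_pending = saved->exception.serror_pending;
        events.exception.serror_has_esr = saved->exception.serror_has_esr;
        events.exception.serror_esr = saved->exception.serror_esr;

        return ioctl(vcpu_fd, KVM_SET_VCPU_EVENTS, &events);
}

A real VMM would additionally have to cope with the destination rejecting serror_has_esr (for example on a host without the RAS extension), as described in the documentation below.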
Signed-off-by: Dongjiu Geng Reviewed-by: James Morse [expanded documentation wording] Signed-off-by: James Morse Signed-off-by: Marc Zyngier --- Documentation/virtual/kvm/api.txt | 54 +++++++++++++++++++++++++--- arch/arm64/include/asm/kvm_emulate.h | 5 +++ arch/arm64/include/asm/kvm_host.h | 7 ++++ arch/arm64/include/uapi/asm/kvm.h | 13 +++++++ arch/arm64/kvm/guest.c | 46 ++++++++++++++++++++++++ arch/arm64/kvm/inject_fault.c | 6 ++-- arch/arm64/kvm/reset.c | 1 + virt/kvm/arm/arm.c | 21 +++++++++++ 8 files changed, 146 insertions(+), 7 deletions(-) diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt index d10944e619d3..284d36e72f28 100644 --- a/Documentation/virtual/kvm/api.txt +++ b/Documentation/virtual/kvm/api.txt @@ -835,11 +835,13 @@ struct kvm_clock_data { Capability: KVM_CAP_VCPU_EVENTS Extended by: KVM_CAP_INTR_SHADOW -Architectures: x86 -Type: vm ioctl +Architectures: x86, arm64 +Type: vcpu ioctl Parameters: struct kvm_vcpu_event (out) Returns: 0 on success, -1 on error +X86: + Gets currently pending exceptions, interrupts, and NMIs as well as related states of the vcpu. @@ -881,15 +883,52 @@ Only two fields are defined in the flags field: - KVM_VCPUEVENT_VALID_SMM may be set in the flags field to signal that smi contains a valid state. +ARM64: + +If the guest accesses a device that is being emulated by the host kernel in +such a way that a real device would generate a physical SError, KVM may make +a virtual SError pending for that VCPU. This system error interrupt remains +pending until the guest takes the exception by unmasking PSTATE.A. + +Running the VCPU may cause it to take a pending SError, or make an access that +causes an SError to become pending. The event's description is only valid while +the VPCU is not running. + +This API provides a way to read and write the pending 'event' state that is not +visible to the guest. To save, restore or migrate a VCPU the struct representing +the state can be read then written using this GET/SET API, along with the other +guest-visible registers. It is not possible to 'cancel' an SError that has been +made pending. + +A device being emulated in user-space may also wish to generate an SError. To do +this the events structure can be populated by user-space. The current state +should be read first, to ensure no existing SError is pending. If an existing +SError is pending, the architecture's 'Multiple SError interrupts' rules should +be followed. (2.5.3 of DDI0587.a "ARM Reliability, Availability, and +Serviceability (RAS) Specification"). + +struct kvm_vcpu_events { + struct { + __u8 serror_pending; + __u8 serror_has_esr; + /* Align it to 8 bytes */ + __u8 pad[6]; + __u64 serror_esr; + } exception; + __u32 reserved[12]; +}; + 4.32 KVM_SET_VCPU_EVENTS Capability: KVM_CAP_VCPU_EVENTS Extended by: KVM_CAP_INTR_SHADOW -Architectures: x86 -Type: vm ioctl +Architectures: x86, arm64 +Type: vcpu ioctl Parameters: struct kvm_vcpu_event (in) Returns: 0 on success, -1 on error +X86: + Set pending exceptions, interrupts, and NMIs as well as related states of the vcpu. @@ -910,6 +949,13 @@ shall be written into the VCPU. KVM_VCPUEVENT_VALID_SMM can only be set if KVM_CAP_X86_SMM is available. +ARM64: + +Set the pending SError exception state for this VCPU. It is not possible to +'cancel' an Serror that has been made pending. + +See KVM_GET_VCPU_EVENTS for the data structure. 
+ 4.33 KVM_GET_DEBUGREGS diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h index bfefdd9a72eb..3f8ab8da9a2b 100644 --- a/arch/arm64/include/asm/kvm_emulate.h +++ b/arch/arm64/include/asm/kvm_emulate.h @@ -93,6 +93,11 @@ static inline void vcpu_set_wfe_traps(struct kvm_vcpu *vcpu) vcpu->arch.hcr_el2 |= HCR_TWE; } +static inline unsigned long vcpu_get_vsesr(struct kvm_vcpu *vcpu) +{ + return vcpu->arch.vsesr_el2; +} + static inline void vcpu_set_vsesr(struct kvm_vcpu *vcpu, u64 vsesr) { vcpu->arch.vsesr_el2 = vsesr; diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index 268619ce0154..bc244cc6e451 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -350,6 +350,11 @@ unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu); int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *indices); int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg); int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg); +int kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu, + struct kvm_vcpu_events *events); + +int kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu, + struct kvm_vcpu_events *events); #define KVM_ARCH_WANT_MMU_NOTIFIER int kvm_unmap_hva(struct kvm *kvm, unsigned long hva); @@ -378,6 +383,8 @@ void handle_exit_early(struct kvm_vcpu *vcpu, struct kvm_run *run, int kvm_perf_init(void); int kvm_perf_teardown(void); +void kvm_set_sei_esr(struct kvm_vcpu *vcpu, u64 syndrome); + struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr); DECLARE_PER_CPU(kvm_cpu_context_t, kvm_host_cpu_state); diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h index 4e76630dd655..97c3478ee6e7 100644 --- a/arch/arm64/include/uapi/asm/kvm.h +++ b/arch/arm64/include/uapi/asm/kvm.h @@ -39,6 +39,7 @@ #define __KVM_HAVE_GUEST_DEBUG #define __KVM_HAVE_IRQ_LINE #define __KVM_HAVE_READONLY_MEM +#define __KVM_HAVE_VCPU_EVENTS #define KVM_COALESCED_MMIO_PAGE_OFFSET 1 @@ -154,6 +155,18 @@ struct kvm_sync_regs { struct kvm_arch_memory_slot { }; +/* for KVM_GET/SET_VCPU_EVENTS */ +struct kvm_vcpu_events { + struct { + __u8 serror_pending; + __u8 serror_has_esr; + /* Align it to 8 bytes */ + __u8 pad[6]; + __u64 serror_esr; + } exception; + __u32 reserved[12]; +}; + /* If you need to interpret the index values, here is the key: */ #define KVM_REG_ARM_COPROC_MASK 0x000000000FFF0000 #define KVM_REG_ARM_COPROC_SHIFT 16 diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index 56a0260ceb11..dd05be96d981 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -289,6 +289,52 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu, return -EINVAL; } +int kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu, + struct kvm_vcpu_events *events) +{ + memset(events, 0, sizeof(*events)); + + events->exception.serror_pending = !!(vcpu->arch.hcr_el2 & HCR_VSE); + events->exception.serror_has_esr = cpus_have_const_cap(ARM64_HAS_RAS_EXTN); + + if (events->exception.serror_pending && events->exception.serror_has_esr) + events->exception.serror_esr = vcpu_get_vsesr(vcpu); + + return 0; +} + +int kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu, + struct kvm_vcpu_events *events) +{ + int i; + bool serror_pending = events->exception.serror_pending; + bool has_esr = events->exception.serror_has_esr; + + /* check whether the reserved field is zero */ + for (i = 0; i < ARRAY_SIZE(events->reserved); i++) + if (events->reserved[i]) + return -EINVAL; + + /* check 
whether the pad field is zero */ + for (i = 0; i < ARRAY_SIZE(events->exception.pad); i++) + if (events->exception.pad[i]) + return -EINVAL; + + if (serror_pending && has_esr) { + if (!cpus_have_const_cap(ARM64_HAS_RAS_EXTN)) + return -EINVAL; + + if (!((events->exception.serror_esr) & ~ESR_ELx_ISS_MASK)) + kvm_set_sei_esr(vcpu, events->exception.serror_esr); + else + return -EINVAL; + } else if (serror_pending) { + kvm_inject_vabt(vcpu); + } + + return 0; +} + int __attribute_const__ kvm_target_cpu(void) { unsigned long implementor = read_cpuid_implementor(); diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c index d8e71659ba7e..a55e91dfcf8f 100644 --- a/arch/arm64/kvm/inject_fault.c +++ b/arch/arm64/kvm/inject_fault.c @@ -164,9 +164,9 @@ void kvm_inject_undefined(struct kvm_vcpu *vcpu) inject_undef64(vcpu); } -static void pend_guest_serror(struct kvm_vcpu *vcpu, u64 esr) +void kvm_set_sei_esr(struct kvm_vcpu *vcpu, u64 esr) { - vcpu_set_vsesr(vcpu, esr); + vcpu_set_vsesr(vcpu, esr & ESR_ELx_ISS_MASK); *vcpu_hcr(vcpu) |= HCR_VSE; } @@ -184,5 +184,5 @@ static void pend_guest_serror(struct kvm_vcpu *vcpu, u64 esr) */ void kvm_inject_vabt(struct kvm_vcpu *vcpu) { - pend_guest_serror(vcpu, ESR_ELx_ISV); + kvm_set_sei_esr(vcpu, ESR_ELx_ISV); } diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c index a74311beda35..a3db01a28062 100644 --- a/arch/arm64/kvm/reset.c +++ b/arch/arm64/kvm/reset.c @@ -79,6 +79,7 @@ int kvm_arch_dev_ioctl_check_extension(struct kvm *kvm, long ext) break; case KVM_CAP_SET_GUEST_DEBUG: case KVM_CAP_VCPU_ATTRIBUTES: + case KVM_CAP_VCPU_EVENTS: r = 1; break; default: diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c index 8e66b89a3db2..1c72247aeb1d 100644 --- a/virt/kvm/arm/arm.c +++ b/virt/kvm/arm/arm.c @@ -1130,6 +1130,27 @@ long kvm_arch_vcpu_ioctl(struct file *filp, r = kvm_arm_vcpu_has_attr(vcpu, &attr); break; } +#ifdef __KVM_HAVE_VCPU_EVENTS + case KVM_GET_VCPU_EVENTS: { + struct kvm_vcpu_events events; + + if (kvm_arm_vcpu_get_events(vcpu, &events)) + return -EINVAL; + + if (copy_to_user(argp, &events, sizeof(events))) + return -EFAULT; + + return 0; + } + case KVM_SET_VCPU_EVENTS: { + struct kvm_vcpu_events events; + + if (copy_from_user(&events, argp, sizeof(events))) + return -EFAULT; + + return kvm_arm_vcpu_set_events(vcpu, &events); + } +#endif default: r = -EINVAL; } From patchwork Mon Aug 13 14:57:41 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10564367 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4E3AD1057 for ; Mon, 13 Aug 2018 15:00:05 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 39BDB29446 for ; Mon, 13 Aug 2018 15:00:05 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 374902944B; Mon, 13 Aug 2018 15:00:05 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id A69C629446 for ; Mon, 13 Aug 2018 15:00:04 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org 
via listexpand id S1729919AbeHMRmk (ORCPT ); Mon, 13 Aug 2018 13:42:40 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:60354 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729874AbeHMRmk (ORCPT ); Mon, 13 Aug 2018 13:42:40 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 2C4D61596; Mon, 13 Aug 2018 08:00:03 -0700 (PDT) Received: from approximate.Emea.Arm.com (approximate.Emea.Arm.com [10.4.13.119]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 3A3583F5D0; Mon, 13 Aug 2018 08:00:00 -0700 (PDT)
From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Andre Przywara , Andrew Jones , Catalin Marinas , Christoffer Dall , Dongjiu Geng , Eric Auger , "Gustavo A . R . Silva" , James Morse , Jia He , Kees Cook , Mark Rutland , Punit Agrawal , Suzuki Poulose
Subject: [PATCH 23/37] arm64: KVM: export the capability to set guest SError syndrome Date: Mon, 13 Aug 2018 15:57:41 +0100 Message-Id: <20180813145755.16566-24-marc.zyngier@arm.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com> References: <20180813145755.16566-1-marc.zyngier@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP
From: Dongjiu Geng

With the arm64 RAS Extension, user space can inject a virtual SError with a specified ESR. User space therefore needs to know whether KVM can inject such an SError; this patch adds a capability for user space to query. KVM checks whether the system supports the RAS Extension and reports the capability as available if so, and as unavailable otherwise.

Signed-off-by: Dongjiu Geng Reviewed-by: James Morse [expanded documentation wording] Signed-off-by: James Morse Signed-off-by: Marc Zyngier --- Documentation/virtual/kvm/api.txt | 26 ++++++++++++++++++++++++++ arch/arm64/kvm/reset.c | 3 +++ include/uapi/linux/kvm.h | 1 + 3 files changed, 30 insertions(+) diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt index 284d36e72f28..dbbb95d5798a 100644 --- a/Documentation/virtual/kvm/api.txt +++ b/Documentation/virtual/kvm/api.txt @@ -907,6 +907,18 @@ SError is pending, the architecture's 'Multiple SError interrupts' rules should be followed. (2.5.3 of DDI0587.a "ARM Reliability, Availability, and Serviceability (RAS) Specification"). +SError exceptions always have an ESR value. Some CPUs have the ability to +specify what the virtual SError's ESR value should be. These systems will +advertise KVM_CAP_ARM_SET_SERROR_ESR. In this case exception.has_esr will +always have a non-zero value when read, and the agent making an SError pending +should specify the ISS field in the lower 24 bits of exception.serror_esr. If +the system supports KVM_CAP_ARM_SET_SERROR_ESR, but user-space sets the events +with exception.has_esr as zero, KVM will choose an ESR. + +Specifying exception.has_esr on a system that does not support it will return +-EINVAL. Setting anything other than the lower 24bits of exception.serror_esr +will return -EINVAL. + struct kvm_vcpu_events { struct { __u8 serror_pending; @@ -4664,3 +4676,17 @@ This capability indicates that KVM supports paravirtualized Hyper-V TLB Flush hypercalls: HvFlushVirtualAddressSpace, HvFlushVirtualAddressSpaceEx, HvFlushVirtualAddressList, HvFlushVirtualAddressListEx.
+ +8.19 KVM_CAP_ARM_SET_SERROR_ESR + +Architectures: arm, arm64 + +This capability indicates that userspace can specify (via the +KVM_SET_VCPU_EVENTS ioctl) the syndrome value reported to the guest when it +takes a virtual SError interrupt exception. +If KVM advertises this capability, userspace can only specify the ISS field for +the ESR syndrome. Other parts of the ESR, such as the EC are generated by the +CPU when the exception is taken. If this virtual SError is taken to EL1 using +AArch64, this value will be reported in the ISS field of ESR_ELx. + +See KVM_CAP_VCPU_EVENTS for more details. diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c index a3db01a28062..067c6ba969bd 100644 --- a/arch/arm64/kvm/reset.c +++ b/arch/arm64/kvm/reset.c @@ -77,6 +77,9 @@ int kvm_arch_dev_ioctl_check_extension(struct kvm *kvm, long ext) case KVM_CAP_ARM_PMU_V3: r = kvm_arm_support_pmu_v3(); break; + case KVM_CAP_ARM_INJECT_SERROR_ESR: + r = cpus_have_const_cap(ARM64_HAS_RAS_EXTN); + break; case KVM_CAP_SET_GUEST_DEBUG: case KVM_CAP_VCPU_ATTRIBUTES: case KVM_CAP_VCPU_EVENTS: diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index b6270a3b38e9..a7d9bc4e4068 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -949,6 +949,7 @@ struct kvm_ppc_resize_hpt { #define KVM_CAP_GET_MSR_FEATURES 153 #define KVM_CAP_HYPERV_EVENTFD 154 #define KVM_CAP_HYPERV_TLBFLUSH 155 +#define KVM_CAP_ARM_INJECT_SERROR_ESR 156 #ifdef KVM_CAP_IRQ_ROUTING From patchwork Mon Aug 13 14:57:42 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10564371 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id B52B514E2 for ; Mon, 13 Aug 2018 15:00:08 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id A2E2129437 for ; Mon, 13 Aug 2018 15:00:08 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id A00BA29497; Mon, 13 Aug 2018 15:00:08 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 20D5729482 for ; Mon, 13 Aug 2018 15:00:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729928AbeHMRmn (ORCPT ); Mon, 13 Aug 2018 13:42:43 -0400 Received: from foss.arm.com ([217.140.101.70]:60384 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729874AbeHMRmn (ORCPT ); Mon, 13 Aug 2018 13:42:43 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 5C879ED1; Mon, 13 Aug 2018 08:00:06 -0700 (PDT) Received: from approximate.Emea.Arm.com (approximate.Emea.Arm.com [10.4.13.119]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 6B2573F5D0; Mon, 13 Aug 2018 08:00:03 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Andre Przywara , Andrew Jones , Catalin Marinas , Christoffer Dall , Dongjiu Geng , Eric Auger , 
"Gustavo A . R . Silva" , James Morse , Jia He , Kees Cook , Mark Rutland , Punit Agrawal , Suzuki Poulose Subject: [PATCH 24/37] KVM: arm64: Share the parts of get/set events useful to 32bit Date: Mon, 13 Aug 2018 15:57:42 +0100 Message-Id: <20180813145755.16566-25-marc.zyngier@arm.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com> References: <20180813145755.16566-1-marc.zyngier@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: James Morse The get/set events helpers to do some work to check reserved and padding fields are zero. This is useful on 32bit too. Move this code into virt/kvm/arm/arm.c, and give the arch code some underscores. This is temporarily hidden behind __KVM_HAVE_VCPU_EVENTS until 32bit is wired up. Signed-off-by: James Morse Reviewed-by: Dongjiu Geng Signed-off-by: Marc Zyngier --- arch/arm64/include/asm/kvm_host.h | 8 ++++---- arch/arm64/kvm/guest.c | 21 ++++----------------- virt/kvm/arm/arm.c | 28 ++++++++++++++++++++++++++++ 3 files changed, 36 insertions(+), 21 deletions(-) diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h index bc244cc6e451..f26055f2306e 100644 --- a/arch/arm64/include/asm/kvm_host.h +++ b/arch/arm64/include/asm/kvm_host.h @@ -350,11 +350,11 @@ unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu); int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *indices); int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg); int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg); -int kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu, - struct kvm_vcpu_events *events); +int __kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu, + struct kvm_vcpu_events *events); -int kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu, - struct kvm_vcpu_events *events); +int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu, + struct kvm_vcpu_events *events); #define KVM_ARCH_WANT_MMU_NOTIFIER int kvm_unmap_hva(struct kvm *kvm, unsigned long hva); diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c index dd05be96d981..725c7545e91a 100644 --- a/arch/arm64/kvm/guest.c +++ b/arch/arm64/kvm/guest.c @@ -289,11 +289,9 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu, return -EINVAL; } -int kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu, - struct kvm_vcpu_events *events) +int __kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu, + struct kvm_vcpu_events *events) { - memset(events, 0, sizeof(*events)); - events->exception.serror_pending = !!(vcpu->arch.hcr_el2 & HCR_VSE); events->exception.serror_has_esr = cpus_have_const_cap(ARM64_HAS_RAS_EXTN); @@ -303,23 +301,12 @@ int kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu, return 0; } -int kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu, - struct kvm_vcpu_events *events) +int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu, + struct kvm_vcpu_events *events) { - int i; bool serror_pending = events->exception.serror_pending; bool has_esr = events->exception.serror_has_esr; - /* check whether the reserved field is zero */ - for (i = 0; i < ARRAY_SIZE(events->reserved); i++) - if (events->reserved[i]) - return -EINVAL; - - /* check whether the pad field is zero */ - for (i = 0; i < ARRAY_SIZE(events->exception.pad); i++) - if (events->exception.pad[i]) - return -EINVAL; - if (serror_pending && has_esr) { if (!cpus_have_const_cap(ARM64_HAS_RAS_EXTN)) return -EINVAL; diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c index 
1c72247aeb1d..14f8fad1c7ae 100644 --- a/virt/kvm/arm/arm.c +++ b/virt/kvm/arm/arm.c @@ -1050,6 +1050,34 @@ static int kvm_arm_vcpu_has_attr(struct kvm_vcpu *vcpu, return ret; } +#ifdef __KVM_HAVE_VCPU_EVENTS /* temporary: until 32bit is wired up */ +static int kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu, + struct kvm_vcpu_events *events) +{ + memset(events, 0, sizeof(*events)); + + return __kvm_arm_vcpu_get_events(vcpu, events); +} + +static int kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu, + struct kvm_vcpu_events *events) +{ + int i; + + /* check whether the reserved field is zero */ + for (i = 0; i < ARRAY_SIZE(events->reserved); i++) + if (events->reserved[i]) + return -EINVAL; + + /* check whether the pad field is zero */ + for (i = 0; i < ARRAY_SIZE(events->exception.pad); i++) + if (events->exception.pad[i]) + return -EINVAL; + + return __kvm_arm_vcpu_set_events(vcpu, events); +} +#endif /* __KVM_HAVE_VCPU_EVENTS */ + long kvm_arch_vcpu_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg) { From patchwork Mon Aug 13 14:57:43 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10564373 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id CA3B01057 for ; Mon, 13 Aug 2018 15:00:11 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id B927F29450 for ; Mon, 13 Aug 2018 15:00:11 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id B744F29490; Mon, 13 Aug 2018 15:00:11 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 2ED0629472 for ; Mon, 13 Aug 2018 15:00:11 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729940AbeHMRmq (ORCPT ); Mon, 13 Aug 2018 13:42:46 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:60410 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729855AbeHMRmq (ORCPT ); Mon, 13 Aug 2018 13:42:46 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 8E00980D; Mon, 13 Aug 2018 08:00:09 -0700 (PDT) Received: from approximate.Emea.Arm.com (approximate.Emea.Arm.com [10.4.13.119]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 9C8843F5D0; Mon, 13 Aug 2018 08:00:06 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Andre Przywara , Andrew Jones , Catalin Marinas , Christoffer Dall , Dongjiu Geng , Eric Auger , "Gustavo A . R . 
Silva" , James Morse , Jia He , Kees Cook , Mark Rutland , Punit Agrawal , Suzuki Poulose Subject: [PATCH 25/37] KVM: arm: Add 32bit get/set events support Date: Mon, 13 Aug 2018 15:57:43 +0100 Message-Id: <20180813145755.16566-26-marc.zyngier@arm.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com> References: <20180813145755.16566-1-marc.zyngier@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: James Morse arm64's new use of KVMs get_events/set_events API calls isn't just or RAS, it allows an SError that has been made pending by KVM as part of its device emulation to be migrated. Wire this up for 32bit too. We only need to read/write the HCR_VA bit, and check that no esr has been provided, as we don't yet support VDFSR. Signed-off-by: James Morse Reviewed-by: Dongjiu Geng Signed-off-by: Marc Zyngier --- Documentation/virtual/kvm/api.txt | 8 ++++---- arch/arm/include/asm/kvm_host.h | 5 +++++ arch/arm/include/uapi/asm/kvm.h | 13 +++++++++++++ arch/arm/kvm/guest.c | 23 +++++++++++++++++++++++ virt/kvm/arm/arm.c | 4 ---- 5 files changed, 45 insertions(+), 8 deletions(-) diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt index dbbb95d5798a..4d93fd65bbbd 100644 --- a/Documentation/virtual/kvm/api.txt +++ b/Documentation/virtual/kvm/api.txt @@ -835,7 +835,7 @@ struct kvm_clock_data { Capability: KVM_CAP_VCPU_EVENTS Extended by: KVM_CAP_INTR_SHADOW -Architectures: x86, arm64 +Architectures: x86, arm, arm64 Type: vcpu ioctl Parameters: struct kvm_vcpu_event (out) Returns: 0 on success, -1 on error @@ -883,7 +883,7 @@ Only two fields are defined in the flags field: - KVM_VCPUEVENT_VALID_SMM may be set in the flags field to signal that smi contains a valid state. -ARM64: +ARM/ARM64: If the guest accesses a device that is being emulated by the host kernel in such a way that a real device would generate a physical SError, KVM may make @@ -934,7 +934,7 @@ struct kvm_vcpu_events { Capability: KVM_CAP_VCPU_EVENTS Extended by: KVM_CAP_INTR_SHADOW -Architectures: x86, arm64 +Architectures: x86, arm, arm64 Type: vcpu ioctl Parameters: struct kvm_vcpu_event (in) Returns: 0 on success, -1 on error @@ -961,7 +961,7 @@ shall be written into the VCPU. KVM_VCPUEVENT_VALID_SMM can only be set if KVM_CAP_X86_SMM is available. -ARM64: +ARM/ARM64: Set the pending SError exception state for this VCPU. It is not possible to 'cancel' an Serror that has been made pending. 
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h index 1f1fe4109b02..79906cecb091 100644 --- a/arch/arm/include/asm/kvm_host.h +++ b/arch/arm/include/asm/kvm_host.h @@ -216,6 +216,11 @@ int kvm_arm_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg); int kvm_arm_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg); unsigned long kvm_call_hyp(void *hypfn, ...); void force_vm_exit(const cpumask_t *mask); +int __kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu, + struct kvm_vcpu_events *events); + +int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu, + struct kvm_vcpu_events *events); #define KVM_ARCH_WANT_MMU_NOTIFIER int kvm_unmap_hva(struct kvm *kvm, unsigned long hva); diff --git a/arch/arm/include/uapi/asm/kvm.h b/arch/arm/include/uapi/asm/kvm.h index 16e006f708ca..4602464ebdfb 100644 --- a/arch/arm/include/uapi/asm/kvm.h +++ b/arch/arm/include/uapi/asm/kvm.h @@ -27,6 +27,7 @@ #define __KVM_HAVE_GUEST_DEBUG #define __KVM_HAVE_IRQ_LINE #define __KVM_HAVE_READONLY_MEM +#define __KVM_HAVE_VCPU_EVENTS #define KVM_COALESCED_MMIO_PAGE_OFFSET 1 @@ -125,6 +126,18 @@ struct kvm_sync_regs { struct kvm_arch_memory_slot { }; +/* for KVM_GET/SET_VCPU_EVENTS */ +struct kvm_vcpu_events { + struct { + __u8 serror_pending; + __u8 serror_has_esr; + /* Align it to 8 bytes */ + __u8 pad[6]; + __u64 serror_esr; + } exception; + __u32 reserved[12]; +}; + /* If you need to interpret the index values, here is the key: */ #define KVM_REG_ARM_COPROC_MASK 0x000000000FFF0000 #define KVM_REG_ARM_COPROC_SHIFT 16 diff --git a/arch/arm/kvm/guest.c b/arch/arm/kvm/guest.c index a18f33edc471..2b8de885b2bf 100644 --- a/arch/arm/kvm/guest.c +++ b/arch/arm/kvm/guest.c @@ -261,6 +261,29 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu, return -EINVAL; } + +int __kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu, + struct kvm_vcpu_events *events) +{ + events->exception.serror_pending = !!(*vcpu_hcr(vcpu) & HCR_VA); + + return 0; +} + +int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu, + struct kvm_vcpu_events *events) +{ + bool serror_pending = events->exception.serror_pending; + bool has_esr = events->exception.serror_has_esr; + + if (serror_pending && has_esr) + return -EINVAL; + else if (serror_pending) + kvm_inject_vabt(vcpu); + + return 0; +} + int __attribute_const__ kvm_target_cpu(void) { switch (read_cpuid_part()) { diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c index 14f8fad1c7ae..ac658bd63196 100644 --- a/virt/kvm/arm/arm.c +++ b/virt/kvm/arm/arm.c @@ -1050,7 +1050,6 @@ static int kvm_arm_vcpu_has_attr(struct kvm_vcpu *vcpu, return ret; } -#ifdef __KVM_HAVE_VCPU_EVENTS /* temporary: until 32bit is wired up */ static int kvm_arm_vcpu_get_events(struct kvm_vcpu *vcpu, struct kvm_vcpu_events *events) { @@ -1076,7 +1075,6 @@ static int kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu, return __kvm_arm_vcpu_set_events(vcpu, events); } -#endif /* __KVM_HAVE_VCPU_EVENTS */ long kvm_arch_vcpu_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg) @@ -1158,7 +1156,6 @@ long kvm_arch_vcpu_ioctl(struct file *filp, r = kvm_arm_vcpu_has_attr(vcpu, &attr); break; } -#ifdef __KVM_HAVE_VCPU_EVENTS case KVM_GET_VCPU_EVENTS: { struct kvm_vcpu_events events; @@ -1178,7 +1175,6 @@ long kvm_arch_vcpu_ioctl(struct file *filp, return kvm_arm_vcpu_set_events(vcpu, &events); } -#endif default: r = -EINVAL; } From patchwork Mon Aug 13 14:57:44 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier 
X-Patchwork-Id: 10564375 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id CAE2714E2 for ; Mon, 13 Aug 2018 15:00:14 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id BB57429481 for ; Mon, 13 Aug 2018 15:00:14 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id B82C529493; Mon, 13 Aug 2018 15:00:14 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 4F87229482 for ; Mon, 13 Aug 2018 15:00:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729948AbeHMRmt (ORCPT ); Mon, 13 Aug 2018 13:42:49 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:60438 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729702AbeHMRmt (ORCPT ); Mon, 13 Aug 2018 13:42:49 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C041015B2; Mon, 13 Aug 2018 08:00:12 -0700 (PDT) Received: from approximate.Emea.Arm.com (approximate.Emea.Arm.com [10.4.13.119]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id CE1013F5D0; Mon, 13 Aug 2018 08:00:09 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Andre Przywara , Andrew Jones , Catalin Marinas , Christoffer Dall , Dongjiu Geng , Eric Auger , "Gustavo A . R . Silva" , James Morse , Jia He , Kees Cook , Mark Rutland , Punit Agrawal , Suzuki Poulose Subject: [PATCH 26/37] KVM: arm/arm64: vgic: Fix possible spectre-v1 write in vgic_mmio_write_apr() Date: Mon, 13 Aug 2018 15:57:44 +0100 Message-Id: <20180813145755.16566-27-marc.zyngier@arm.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com> References: <20180813145755.16566-1-marc.zyngier@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Mark Rutland It's possible for userspace to control n. Sanitize n when using it as an array index, to inhibit the potential spectre-v1 write gadget. Note that while it appears that n must be bound to the interval [0,3] due to the way it is extracted from addr, we cannot guarantee that compiler transformations (and/or future refactoring) will ensure this is the case, and given this is a slow path it's better to always perform the masking. Found by smatch. 
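For reference, the general shape of this mitigation (an illustrative sketch in kernel style, not the vgic code itself) is: bounds-check first, then clamp the index with array_index_nospec() before using it:

#include <linux/nospec.h>
#include <linux/types.h>

/*
 * 'idx' comes from an untrusted source (here it is derived from a
 * guest/userspace-controlled address).  The bounds check alone does not
 * stop a mis-speculated out-of-bounds store, so the index is additionally
 * clamped with array_index_nospec(), which guarantees a value within
 * [0, nr_entries) even under speculation.
 */
static void update_entry(u64 *table, unsigned int nr_entries,
                         unsigned int idx, u64 val)
{
        if (idx >= nr_entries)
                return;

        idx = array_index_nospec(idx, nr_entries);
        table[idx] = val;
}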
Signed-off-by: Mark Rutland Cc: Christoffer Dall Cc: Marc Zyngier Cc: kvmarm@lists.cs.columbia.edu Signed-off-by: Marc Zyngier --- virt/kvm/arm/vgic/vgic-mmio-v2.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/virt/kvm/arm/vgic/vgic-mmio-v2.c b/virt/kvm/arm/vgic/vgic-mmio-v2.c index 26654f4140ed..738b65d2d0e7 100644 --- a/virt/kvm/arm/vgic/vgic-mmio-v2.c +++ b/virt/kvm/arm/vgic/vgic-mmio-v2.c @@ -399,6 +399,9 @@ static void vgic_mmio_write_apr(struct kvm_vcpu *vcpu, if (n > vgic_v3_max_apr_idx(vcpu)) return; + + n = array_index_nospec(n, 4); + /* GICv3 only uses ICH_AP1Rn for memory mapped (GICv2) guests */ vgicv3->vgic_ap1r[n] = val; } From patchwork Mon Aug 13 14:57:45 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10564379 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 41B2A1057 for ; Mon, 13 Aug 2018 15:00:18 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 3052C29491 for ; Mon, 13 Aug 2018 15:00:18 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 2D4902949B; Mon, 13 Aug 2018 15:00:18 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id BA10529499 for ; Mon, 13 Aug 2018 15:00:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729954AbeHMRmx (ORCPT ); Mon, 13 Aug 2018 13:42:53 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:60462 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729364AbeHMRmx (ORCPT ); Mon, 13 Aug 2018 13:42:53 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id F199E1596; Mon, 13 Aug 2018 08:00:15 -0700 (PDT) Received: from approximate.Emea.Arm.com (approximate.Emea.Arm.com [10.4.13.119]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 0C2123F5D0; Mon, 13 Aug 2018 08:00:12 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Andre Przywara , Andrew Jones , Catalin Marinas , Christoffer Dall , Dongjiu Geng , Eric Auger , "Gustavo A . R . Silva" , James Morse , Jia He , Kees Cook , Mark Rutland , Punit Agrawal , Suzuki Poulose Subject: [PATCH 27/37] KVM: arm/arm64: Fix potential loss of ptimer interrupts Date: Mon, 13 Aug 2018 15:57:45 +0100 Message-Id: <20180813145755.16566-28-marc.zyngier@arm.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com> References: <20180813145755.16566-1-marc.zyngier@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Christoffer Dall kvm_timer_update_state() is called when changing the phys timer configuration registers, either via vcpu reset, as a result of a trap from the guest, or when userspace programs the registers. 
phys_timer_emulate() is in turn called by kvm_timer_update_state() to either cancel an existing software timer, or program a new software timer, to emulate the behavior of a real phys timer, based on the change in configuration registers. Unfortunately, the interaction between these two functions left a small race; if the conceptual emulated phys timer should actually fire, but the soft timer hasn't executed its callback yet, we cancel the timer in phys_timer_emulate without injecting an irq. This only happens if the check in kvm_timer_update_state is called before the timer should fire, which is relatively unlikely, but possible. The solution is to update the state of the phys timer after calling phys_timer_emulate, which will pick up the pending timer state and update the interrupt value. Note that this leaves the opportunity of raising the interrupt twice, once in the just-programmed soft timer, and once in kvm_timer_update_state. Since this always happens synchronously with the VCPU execution, there is no harm in this, and the guest ever only sees a single timer interrupt. Cc: Stable # 4.15+ Signed-off-by: Christoffer Dall Signed-off-by: Marc Zyngier --- virt/kvm/arm/arch_timer.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c index bd3d57f40f1b..18ff6203079d 100644 --- a/virt/kvm/arm/arch_timer.c +++ b/virt/kvm/arm/arch_timer.c @@ -295,9 +295,9 @@ static void phys_timer_emulate(struct kvm_vcpu *vcpu) struct arch_timer_context *ptimer = vcpu_ptimer(vcpu); /* - * If the timer can fire now we have just raised the IRQ line and we - * don't need to have a soft timer scheduled for the future. If the - * timer cannot fire at all, then we also don't need a soft timer. + * If the timer can fire now, we don't need to have a soft timer + * scheduled for the future. If the timer cannot fire at all, + * then we also don't need a soft timer. 
*/ if (kvm_timer_should_fire(ptimer) || !kvm_timer_irq_can_fire(ptimer)) { soft_timer_cancel(&timer->phys_timer, NULL); @@ -332,10 +332,10 @@ static void kvm_timer_update_state(struct kvm_vcpu *vcpu) level = kvm_timer_should_fire(vtimer); kvm_timer_update_irq(vcpu, level, vtimer); + phys_timer_emulate(vcpu); + if (kvm_timer_should_fire(ptimer) != ptimer->irq.level) kvm_timer_update_irq(vcpu, !ptimer->irq.level, ptimer); - - phys_timer_emulate(vcpu); } static void vtimer_save_state(struct kvm_vcpu *vcpu) From patchwork Mon Aug 13 14:57:46 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10564381 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 2B7801057 for ; Mon, 13 Aug 2018 15:00:21 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 1D0032943F for ; Mon, 13 Aug 2018 15:00:21 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 1B47629499; Mon, 13 Aug 2018 15:00:21 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id B713A2943F for ; Mon, 13 Aug 2018 15:00:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729976AbeHMRm4 (ORCPT ); Mon, 13 Aug 2018 13:42:56 -0400 Received: from foss.arm.com ([217.140.101.70]:60480 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729920AbeHMRm4 (ORCPT ); Mon, 13 Aug 2018 13:42:56 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 2CD5915BE; Mon, 13 Aug 2018 08:00:19 -0700 (PDT) Received: from approximate.Emea.Arm.com (approximate.Emea.Arm.com [10.4.13.119]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 3BCAA3F5D0; Mon, 13 Aug 2018 08:00:16 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Andre Przywara , Andrew Jones , Catalin Marinas , Christoffer Dall , Dongjiu Geng , Eric Auger , "Gustavo A . R . Silva" , James Morse , Jia He , Kees Cook , Mark Rutland , Punit Agrawal , Suzuki Poulose Subject: [PATCH 28/37] KVM: arm/arm64: Fix lost IRQs from emulated physcial timer when blocked Date: Mon, 13 Aug 2018 15:57:46 +0100 Message-Id: <20180813145755.16566-29-marc.zyngier@arm.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com> References: <20180813145755.16566-1-marc.zyngier@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Christoffer Dall When the VCPU is blocked (for example from WFI) we don't inject the physical timer interrupt if it should fire while the CPU is blocked, but instead we just wake up the VCPU and expect kvm_timer_vcpu_load to take care of injecting the interrupt. 
Unfortunately, kvm_timer_vcpu_load() doesn't actually do that, it only has support to schedule a soft timer if the emulated phys timer is expected to fire in the future. Follow the same pattern as kvm_timer_update_state() and update the irq state after potentially scheduling a soft timer. Reported-by: Andre Przywara Cc: Stable # 4.15+ Fixes: bbdd52cfcba29 ("KVM: arm/arm64: Avoid phys timer emulation in vcpu entry/exit") Signed-off-by: Christoffer Dall Signed-off-by: Marc Zyngier --- virt/kvm/arm/arch_timer.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c index 18ff6203079d..17cecc96f735 100644 --- a/virt/kvm/arm/arch_timer.c +++ b/virt/kvm/arm/arch_timer.c @@ -487,6 +487,7 @@ void kvm_timer_vcpu_load(struct kvm_vcpu *vcpu) { struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu; struct arch_timer_context *vtimer = vcpu_vtimer(vcpu); + struct arch_timer_context *ptimer = vcpu_ptimer(vcpu); if (unlikely(!timer->enabled)) return; @@ -502,6 +503,10 @@ void kvm_timer_vcpu_load(struct kvm_vcpu *vcpu) /* Set the background timer for the physical timer emulation. */ phys_timer_emulate(vcpu); + + /* If the timer fired while we weren't running, inject it now */ + if (kvm_timer_should_fire(ptimer) != ptimer->irq.level) + kvm_timer_update_irq(vcpu, !ptimer->irq.level, ptimer); } bool kvm_timer_should_notify_user(struct kvm_vcpu *vcpu) From patchwork Mon Aug 13 14:57:47 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10564383 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 567621057 for ; Mon, 13 Aug 2018 15:00:24 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 487B12948A for ; Mon, 13 Aug 2018 15:00:24 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 4495E2944B; Mon, 13 Aug 2018 15:00:24 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id E030429489 for ; Mon, 13 Aug 2018 15:00:23 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1729987AbeHMRm7 (ORCPT ); Mon, 13 Aug 2018 13:42:59 -0400 Received: from foss.arm.com ([217.140.101.70]:60506 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729874AbeHMRm7 (ORCPT ); Mon, 13 Aug 2018 13:42:59 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 5DDC180D; Mon, 13 Aug 2018 08:00:22 -0700 (PDT) Received: from approximate.Emea.Arm.com (approximate.Emea.Arm.com [10.4.13.119]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 6BEC33F73C; Mon, 13 Aug 2018 08:00:19 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Andre Przywara , Andrew Jones , Catalin Marinas , Christoffer Dall , Dongjiu Geng , Eric Auger , "Gustavo A . R . 
Silva" , James Morse , Jia He , Kees Cook , Mark Rutland , Punit Agrawal , Suzuki Poulose Subject: [PATCH 29/37] KVM: arm64: Remove non-existent AArch32 ICC_SGI1R encoding Date: Mon, 13 Aug 2018 15:57:47 +0100 Message-Id: <20180813145755.16566-30-marc.zyngier@arm.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com> References: <20180813145755.16566-1-marc.zyngier@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP ICC_SGI1R is a 64bit system register, even on AArch32. It is thus pointless to have such an encoding in the 32bit cp15 array. Let's drop it. Reviewed-by: Eric Auger Acked-by: Christoffer Dall Signed-off-by: Marc Zyngier --- arch/arm64/kvm/sys_regs.c | 2 -- 1 file changed, 2 deletions(-) diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index 774d72155904..e04aacb2a24c 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -1622,8 +1622,6 @@ static const struct sys_reg_desc cp14_64_regs[] = { * register). */ static const struct sys_reg_desc cp15_regs[] = { - { Op1( 0), CRn( 0), CRm(12), Op2( 0), access_gic_sgi }, - { Op1( 0), CRn( 1), CRm( 0), Op2( 0), access_vm_reg, NULL, c1_SCTLR }, { Op1( 0), CRn( 2), CRm( 0), Op2( 0), access_vm_reg, NULL, c2_TTBR0 }, { Op1( 0), CRn( 2), CRm( 0), Op2( 1), access_vm_reg, NULL, c2_TTBR1 }, From patchwork Mon Aug 13 14:57:48 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10564385 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 3E24C1057 for ; Mon, 13 Aug 2018 15:00:28 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 2D00A29446 for ; Mon, 13 Aug 2018 15:00:28 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 2AB6329496; Mon, 13 Aug 2018 15:00:28 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id A5A5129446 for ; Mon, 13 Aug 2018 15:00:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730000AbeHMRnD (ORCPT ); Mon, 13 Aug 2018 13:43:03 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:60538 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729995AbeHMRnD (ORCPT ); Mon, 13 Aug 2018 13:43:03 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 8EBD81596; Mon, 13 Aug 2018 08:00:25 -0700 (PDT) Received: from approximate.Emea.Arm.com (approximate.Emea.Arm.com [10.4.13.119]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 9E2133F5D0; Mon, 13 Aug 2018 08:00:22 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Andre Przywara , Andrew Jones , Catalin Marinas , Christoffer Dall , Dongjiu Geng , Eric Auger , "Gustavo A . R . 
Silva" , James Morse , Jia He , Kees Cook , Mark Rutland , Punit Agrawal , Suzuki Poulose Subject: [PATCH 30/37] KVM: arm/arm64: vgic-v3: Add core support for Group0 SGIs Date: Mon, 13 Aug 2018 15:57:48 +0100 Message-Id: <20180813145755.16566-31-marc.zyngier@arm.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com> References: <20180813145755.16566-1-marc.zyngier@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP Although vgic-v3 now supports Group0 interrupts, it still doesn't deal with Group0 SGIs. As usual with the GIC, nothing is simple: - ICC_SGI1R can signal SGIs of both groups, since GICD_CTLR.DS==1 with KVM (as per 8.1.10, Non-secure EL1 access) - ICC_SGI0R can only generate Group0 SGIs - ICC_ASGI1R sees its scope refocussed to generate only Group0 SGIs (as per the note at the bottom of Table 8-14) We only support Group1 SGIs so far, so no material change. Reviewed-by: Eric Auger Reviewed-by: Christoffer Dall Signed-off-by: Marc Zyngier --- arch/arm/kvm/coproc.c | 2 +- arch/arm64/kvm/sys_regs.c | 2 +- include/kvm/arm_vgic.h | 2 +- virt/kvm/arm/vgic/vgic-mmio-v3.c | 19 +++++++++++++++---- 4 files changed, 18 insertions(+), 7 deletions(-) diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c index 3a02e76699a6..b17c52608a19 100644 --- a/arch/arm/kvm/coproc.c +++ b/arch/arm/kvm/coproc.c @@ -253,7 +253,7 @@ static bool access_gic_sgi(struct kvm_vcpu *vcpu, reg = (u64)*vcpu_reg(vcpu, p->Rt2) << 32; reg |= *vcpu_reg(vcpu, p->Rt1) ; - vgic_v3_dispatch_sgi(vcpu, reg); + vgic_v3_dispatch_sgi(vcpu, reg, true); return true; } diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index e04aacb2a24c..aba6755c816d 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -255,7 +255,7 @@ static bool access_gic_sgi(struct kvm_vcpu *vcpu, if (!p->is_write) return read_from_write_only(vcpu, p, r); - vgic_v3_dispatch_sgi(vcpu, p->regval); + vgic_v3_dispatch_sgi(vcpu, p->regval, true); return true; } diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h index c134790be32c..4f31f96bbfab 100644 --- a/include/kvm/arm_vgic.h +++ b/include/kvm/arm_vgic.h @@ -373,7 +373,7 @@ void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu); void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu); void kvm_vgic_reset_mapped_irq(struct kvm_vcpu *vcpu, u32 vintid); -void vgic_v3_dispatch_sgi(struct kvm_vcpu *vcpu, u64 reg); +void vgic_v3_dispatch_sgi(struct kvm_vcpu *vcpu, u64 reg, bool allow_group1); /** * kvm_vgic_get_max_vcpus - Get the maximum number of VCPUs allowed by HW diff --git a/virt/kvm/arm/vgic/vgic-mmio-v3.c b/virt/kvm/arm/vgic/vgic-mmio-v3.c index 88e78b582139..a2a175b08b17 100644 --- a/virt/kvm/arm/vgic/vgic-mmio-v3.c +++ b/virt/kvm/arm/vgic/vgic-mmio-v3.c @@ -900,7 +900,8 @@ static int match_mpidr(u64 sgi_aff, u16 sgi_cpu_mask, struct kvm_vcpu *vcpu) /** * vgic_v3_dispatch_sgi - handle SGI requests from VCPUs * @vcpu: The VCPU requesting a SGI - * @reg: The value written into the ICC_SGI1R_EL1 register by that VCPU + * @reg: The value written into ICC_{ASGI1,SGI0,SGI1}R by that VCPU + * @allow_group1: Does the sysreg access allow generation of G1 SGIs * * With GICv3 (and ARE=1) CPUs trigger SGIs by writing to a system register. * This will trap in sys_regs.c and call this function. @@ -910,7 +911,7 @@ static int match_mpidr(u64 sgi_aff, u16 sgi_cpu_mask, struct kvm_vcpu *vcpu) * check for matching ones. 
If this bit is set, we signal all, but not the * calling VCPU. */ -void vgic_v3_dispatch_sgi(struct kvm_vcpu *vcpu, u64 reg) +void vgic_v3_dispatch_sgi(struct kvm_vcpu *vcpu, u64 reg, bool allow_group1) { struct kvm *kvm = vcpu->kvm; struct kvm_vcpu *c_vcpu; @@ -959,9 +960,19 @@ void vgic_v3_dispatch_sgi(struct kvm_vcpu *vcpu, u64 reg) irq = vgic_get_irq(vcpu->kvm, c_vcpu, sgi); spin_lock_irqsave(&irq->irq_lock, flags); - irq->pending_latch = true; - vgic_queue_irq_unlock(vcpu->kvm, irq, flags); + /* + * An access targetting Group0 SGIs can only generate + * those, while an access targetting Group1 SGIs can + * generate interrupts of either group. + */ + if (!irq->group || allow_group1) { + irq->pending_latch = true; + vgic_queue_irq_unlock(vcpu->kvm, irq, flags); + } else { + spin_unlock_irqrestore(&irq->irq_lock, flags); + } + vgic_put_irq(vcpu->kvm, irq); } } From patchwork Mon Aug 13 14:57:49 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10564387 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D2E171057 for ; Mon, 13 Aug 2018 15:00:30 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id BF7BD2947A for ; Mon, 13 Aug 2018 15:00:30 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id B2DC0294B2; Mon, 13 Aug 2018 15:00:30 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 3831C294B3 for ; Mon, 13 Aug 2018 15:00:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730006AbeHMRnF (ORCPT ); Mon, 13 Aug 2018 13:43:05 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:60556 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729994AbeHMRnF (ORCPT ); Mon, 13 Aug 2018 13:43:05 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id BE24280D; Mon, 13 Aug 2018 08:00:28 -0700 (PDT) Received: from approximate.Emea.Arm.com (approximate.Emea.Arm.com [10.4.13.119]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id CD01C3F5D0; Mon, 13 Aug 2018 08:00:25 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Andre Przywara , Andrew Jones , Catalin Marinas , Christoffer Dall , Dongjiu Geng , Eric Auger , "Gustavo A . R . 
Silva" , James Morse , Jia He , Kees Cook , Mark Rutland , Punit Agrawal , Suzuki Poulose Subject: [PATCH 31/37] KVM: arm64: vgic-v3: Add support for ICC_SGI0R_EL1 and ICC_ASGI1R_EL1 accesses Date: Mon, 13 Aug 2018 15:57:49 +0100 Message-Id: <20180813145755.16566-32-marc.zyngier@arm.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com> References: <20180813145755.16566-1-marc.zyngier@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP In order to generate Group0 SGIs, let's add some decoding logic to access_gic_sgi(), and pass the generating group accordingly. Reviewed-by: Eric Auger Reviewed-by: Christoffer Dall Signed-off-by: Marc Zyngier --- arch/arm64/include/asm/sysreg.h | 2 ++ arch/arm64/kvm/sys_regs.c | 41 +++++++++++++++++++++++++++++++-- 2 files changed, 41 insertions(+), 2 deletions(-) diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h index 98af0b37fb31..b0d2a52a71a3 100644 --- a/arch/arm64/include/asm/sysreg.h +++ b/arch/arm64/include/asm/sysreg.h @@ -314,6 +314,8 @@ #define SYS_ICC_DIR_EL1 sys_reg(3, 0, 12, 11, 1) #define SYS_ICC_RPR_EL1 sys_reg(3, 0, 12, 11, 3) #define SYS_ICC_SGI1R_EL1 sys_reg(3, 0, 12, 11, 5) +#define SYS_ICC_ASGI1R_EL1 sys_reg(3, 0, 12, 11, 6) +#define SYS_ICC_SGI0R_EL1 sys_reg(3, 0, 12, 11, 7) #define SYS_ICC_IAR1_EL1 sys_reg(3, 0, 12, 12, 0) #define SYS_ICC_EOIR1_EL1 sys_reg(3, 0, 12, 12, 1) #define SYS_ICC_HPPIR1_EL1 sys_reg(3, 0, 12, 12, 2) diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c index aba6755c816d..22fbbdbece3c 100644 --- a/arch/arm64/kvm/sys_regs.c +++ b/arch/arm64/kvm/sys_regs.c @@ -252,10 +252,43 @@ static bool access_gic_sgi(struct kvm_vcpu *vcpu, struct sys_reg_params *p, const struct sys_reg_desc *r) { + bool g1; + if (!p->is_write) return read_from_write_only(vcpu, p, r); - vgic_v3_dispatch_sgi(vcpu, p->regval, true); + /* + * In a system where GICD_CTLR.DS=1, a ICC_SGI0R_EL1 access generates + * Group0 SGIs only, while ICC_SGI1R_EL1 can generate either group, + * depending on the SGI configuration. ICC_ASGI1R_EL1 is effectively + * equivalent to ICC_SGI0R_EL1, as there is no "alternative" secure + * group. 
+ */ + if (p->is_aarch32) { + switch (p->Op1) { + default: /* Keep GCC quiet */ + case 0: /* ICC_SGI1R */ + g1 = true; + break; + case 1: /* ICC_ASGI1R */ + case 2: /* ICC_SGI0R */ + g1 = false; + break; + } + } else { + switch (p->Op2) { + default: /* Keep GCC quiet */ + case 5: /* ICC_SGI1R_EL1 */ + g1 = true; + break; + case 6: /* ICC_ASGI1R_EL1 */ + case 7: /* ICC_SGI0R_EL1 */ + g1 = false; + break; + } + } + + vgic_v3_dispatch_sgi(vcpu, p->regval, g1); return true; } @@ -1312,6 +1345,8 @@ static const struct sys_reg_desc sys_reg_descs[] = { { SYS_DESC(SYS_ICC_DIR_EL1), read_from_write_only }, { SYS_DESC(SYS_ICC_RPR_EL1), write_to_read_only }, { SYS_DESC(SYS_ICC_SGI1R_EL1), access_gic_sgi }, + { SYS_DESC(SYS_ICC_ASGI1R_EL1), access_gic_sgi }, + { SYS_DESC(SYS_ICC_SGI0R_EL1), access_gic_sgi }, { SYS_DESC(SYS_ICC_IAR1_EL1), write_to_read_only }, { SYS_DESC(SYS_ICC_EOIR1_EL1), read_from_write_only }, { SYS_DESC(SYS_ICC_HPPIR1_EL1), write_to_read_only }, @@ -1744,8 +1779,10 @@ static const struct sys_reg_desc cp15_regs[] = { static const struct sys_reg_desc cp15_64_regs[] = { { Op1( 0), CRn( 0), CRm( 2), Op2( 0), access_vm_reg, NULL, c2_TTBR0 }, { Op1( 0), CRn( 0), CRm( 9), Op2( 0), access_pmu_evcntr }, - { Op1( 0), CRn( 0), CRm(12), Op2( 0), access_gic_sgi }, + { Op1( 0), CRn( 0), CRm(12), Op2( 0), access_gic_sgi }, /* ICC_SGI1R */ { Op1( 1), CRn( 0), CRm( 2), Op2( 0), access_vm_reg, NULL, c2_TTBR1 }, + { Op1( 1), CRn( 0), CRm(12), Op2( 0), access_gic_sgi }, /* ICC_ASGI1R */ + { Op1( 2), CRn( 0), CRm(12), Op2( 0), access_gic_sgi }, /* ICC_SGI0R */ { Op1( 2), CRn( 0), CRm(14), Op2( 0), access_cntp_cval }, }; From patchwork Mon Aug 13 14:57:50 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10564391 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id DB93E14E2 for ; Mon, 13 Aug 2018 15:00:33 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id CD84529497 for ; Mon, 13 Aug 2018 15:00:33 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id CBDC329444; Mon, 13 Aug 2018 15:00:33 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 71F08294A3 for ; Mon, 13 Aug 2018 15:00:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730007AbeHMRnJ (ORCPT ); Mon, 13 Aug 2018 13:43:09 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:60584 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729702AbeHMRnJ (ORCPT ); Mon, 13 Aug 2018 13:43:09 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id EF85F15BE; Mon, 13 Aug 2018 08:00:31 -0700 (PDT) Received: from approximate.Emea.Arm.com (approximate.Emea.Arm.com [10.4.13.119]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 097823F5D0; Mon, 13 Aug 2018 08:00:28 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: 
kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Andre Przywara , Andrew Jones , Catalin Marinas , Christoffer Dall , Dongjiu Geng , Eric Auger , "Gustavo A . R . Silva" , James Morse , Jia He , Kees Cook , Mark Rutland , Punit Agrawal , Suzuki Poulose Subject: [PATCH 32/37] KVM: arm: vgic-v3: Add support for ICC_SGI0R and ICC_ASGI1R accesses Date: Mon, 13 Aug 2018 15:57:50 +0100 Message-Id: <20180813145755.16566-33-marc.zyngier@arm.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com> References: <20180813145755.16566-1-marc.zyngier@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP In order to generate Group0 SGIs, let's add some decoding logic to access_gic_sgi(), and pass the generating group accordingly. Reviewed-by: Eric Auger Reviewed-by: Christoffer Dall Signed-off-by: Marc Zyngier --- arch/arm/kvm/coproc.c | 25 ++++++++++++++++++++++++- 1 file changed, 24 insertions(+), 1 deletion(-) diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c index b17c52608a19..450c7a4fbc8a 100644 --- a/arch/arm/kvm/coproc.c +++ b/arch/arm/kvm/coproc.c @@ -246,6 +246,7 @@ static bool access_gic_sgi(struct kvm_vcpu *vcpu, const struct coproc_reg *r) { u64 reg; + bool g1; if (!p->is_write) return read_from_write_only(vcpu, p); @@ -253,7 +254,25 @@ static bool access_gic_sgi(struct kvm_vcpu *vcpu, reg = (u64)*vcpu_reg(vcpu, p->Rt2) << 32; reg |= *vcpu_reg(vcpu, p->Rt1) ; - vgic_v3_dispatch_sgi(vcpu, reg, true); + /* + * In a system where GICD_CTLR.DS=1, a ICC_SGI0R access generates + * Group0 SGIs only, while ICC_SGI1R can generate either group, + * depending on the SGI configuration. ICC_ASGI1R is effectively + * equivalent to ICC_SGI0R, as there is no "alternative" secure + * group. + */ + switch (p->Op1) { + default: /* Keep GCC quiet */ + case 0: /* ICC_SGI1R */ + g1 = true; + break; + case 1: /* ICC_ASGI1R */ + case 2: /* ICC_SGI0R */ + g1 = false; + break; + } + + vgic_v3_dispatch_sgi(vcpu, reg, g1); return true; } @@ -459,6 +478,10 @@ static const struct coproc_reg cp15_regs[] = { /* ICC_SGI1R */ { CRm64(12), Op1( 0), is64, access_gic_sgi}, + /* ICC_ASGI1R */ + { CRm64(12), Op1( 1), is64, access_gic_sgi}, + /* ICC_SGI0R */ + { CRm64(12), Op1( 2), is64, access_gic_sgi}, /* VBAR: swapped by interrupt.S. 
*/ { CRn(12), CRm( 0), Op1( 0), Op2( 0), is32, From patchwork Mon Aug 13 14:57:51 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10564393 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id EB7461057 for ; Mon, 13 Aug 2018 15:00:37 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id DA40529490 for ; Mon, 13 Aug 2018 15:00:37 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id D792F29487; Mon, 13 Aug 2018 15:00:37 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 1959A29490 for ; Mon, 13 Aug 2018 15:00:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730014AbeHMRnM (ORCPT ); Mon, 13 Aug 2018 13:43:12 -0400 Received: from foss.arm.com ([217.140.101.70]:60614 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729874AbeHMRnM (ORCPT ); Mon, 13 Aug 2018 13:43:12 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 4EEEE15BF; Mon, 13 Aug 2018 08:00:35 -0700 (PDT) Received: from approximate.Emea.Arm.com (approximate.Emea.Arm.com [10.4.13.119]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 3A76D3F5D0; Mon, 13 Aug 2018 08:00:32 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Andre Przywara , Andrew Jones , Catalin Marinas , Christoffer Dall , Dongjiu Geng , Eric Auger , "Gustavo A . R . 
Silva" , James Morse , Jia He , Kees Cook , Mark Rutland , Punit Agrawal , Suzuki Poulose , Jia He Subject: [PATCH 33/37] KVM: arm/arm64: vgic: Move DEBUG_SPINLOCK_BUG_ON to vgic.h Date: Mon, 13 Aug 2018 15:57:51 +0100 Message-Id: <20180813145755.16566-34-marc.zyngier@arm.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com> References: <20180813145755.16566-1-marc.zyngier@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Jia He DEBUG_SPINLOCK_BUG_ON can be used with both vgic-v2 and vgic-v3, so let's move it to vgic.h Signed-off-by: Jia He [maz: commit message tidy-up] Signed-off-by: Marc Zyngier --- virt/kvm/arm/vgic/vgic.c | 6 ------ virt/kvm/arm/vgic/vgic.h | 6 ++++++ 2 files changed, 6 insertions(+), 6 deletions(-) diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c index 33c8325c8f35..c22cea678a66 100644 --- a/virt/kvm/arm/vgic/vgic.c +++ b/virt/kvm/arm/vgic/vgic.c @@ -28,12 +28,6 @@ #define CREATE_TRACE_POINTS #include "trace.h" -#ifdef CONFIG_DEBUG_SPINLOCK -#define DEBUG_SPINLOCK_BUG_ON(p) BUG_ON(p) -#else -#define DEBUG_SPINLOCK_BUG_ON(p) -#endif - struct vgic_global kvm_vgic_global_state __ro_after_init = { .gicv3_cpuif = STATIC_KEY_FALSE_INIT, }; diff --git a/virt/kvm/arm/vgic/vgic.h b/virt/kvm/arm/vgic/vgic.h index ed9d9d9e2fc5..a90024718ca4 100644 --- a/virt/kvm/arm/vgic/vgic.h +++ b/virt/kvm/arm/vgic/vgic.h @@ -103,6 +103,12 @@ #define KVM_VGIC_V3_RDIST_COUNT_MASK GENMASK_ULL(63, 52) #define KVM_VGIC_V3_RDIST_COUNT_SHIFT 52 +#ifdef CONFIG_DEBUG_SPINLOCK +#define DEBUG_SPINLOCK_BUG_ON(p) BUG_ON(p) +#else +#define DEBUG_SPINLOCK_BUG_ON(p) +#endif + /* Requires the irq_lock to be held by the caller. 
*/ static inline bool irq_is_pending(struct vgic_irq *irq) { From patchwork Mon Aug 13 14:57:52 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10564395 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4D2EA14E2 for ; Mon, 13 Aug 2018 15:00:41 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 3D91029478 for ; Mon, 13 Aug 2018 15:00:41 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 3B9FB294B6; Mon, 13 Aug 2018 15:00:41 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 2FC8729478 for ; Mon, 13 Aug 2018 15:00:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730016AbeHMRnP (ORCPT ); Mon, 13 Aug 2018 13:43:15 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:60648 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729874AbeHMRnP (ORCPT ); Mon, 13 Aug 2018 13:43:15 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id A34781596; Mon, 13 Aug 2018 08:00:38 -0700 (PDT) Received: from approximate.Emea.Arm.com (approximate.Emea.Arm.com [10.4.13.119]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 8E4343F5D0; Mon, 13 Aug 2018 08:00:35 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Andre Przywara , Andrew Jones , Catalin Marinas , Christoffer Dall , Dongjiu Geng , Eric Auger , "Gustavo A . R . Silva" , James Morse , Jia He , Kees Cook , Mark Rutland , Punit Agrawal , Suzuki Poulose , Jia He Subject: [PATCH 34/37] KVM: arm/arm64: vgic: Do not use spin_lock_irqsave/restore with irq disabled Date: Mon, 13 Aug 2018 15:57:52 +0100 Message-Id: <20180813145755.16566-35-marc.zyngier@arm.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com> References: <20180813145755.16566-1-marc.zyngier@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Jia He kvm_vgic_sync_hwstate is only called with IRQs disabled. There is thus no need to call spin_lock_irqsave/restore in vgic_fold_lr_state and vgic_prune_ap_list. This patch replaces them with the non-irq-safe versions.
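As a rough sketch of the locking pattern this patch applies (illustration only, not part of the patch: the demo_ctx structure and the demo_* functions are invented, and DEBUG_SPINLOCK_BUG_ON is the macro that the previous patch moved into vgic.h), a caller that may run in any context has to save and restore the interrupt state, whereas a path that is guaranteed to run with interrupts disabled can take the plain lock and assert that guarantee instead:

#include <linux/bug.h>
#include <linux/irqflags.h>
#include <linux/spinlock.h>

#ifndef DEBUG_SPINLOCK_BUG_ON
#ifdef CONFIG_DEBUG_SPINLOCK
#define DEBUG_SPINLOCK_BUG_ON(p) BUG_ON(p)
#else
#define DEBUG_SPINLOCK_BUG_ON(p)
#endif
#endif

struct demo_ctx {
	spinlock_t lock;
	bool pending;
};

/* May be called from any context: the interrupt state must be saved and restored. */
static void demo_update_any_context(struct demo_ctx *ctx)
{
	unsigned long flags;

	spin_lock_irqsave(&ctx->lock, flags);
	ctx->pending = true;
	spin_unlock_irqrestore(&ctx->lock, flags);
}

/*
 * Only ever called with interrupts already disabled (the situation the
 * commit message describes for kvm_vgic_sync_hwstate): the cheaper
 * spin_lock() is sufficient, and the calling convention is made
 * explicit by the debug assertion.
 */
static void demo_update_irqs_off(struct demo_ctx *ctx)
{
	DEBUG_SPINLOCK_BUG_ON(!irqs_disabled());

	spin_lock(&ctx->lock);
	ctx->pending = true;
	spin_unlock(&ctx->lock);
}

Dropping the irqsave/irqrestore pair on these hot paths saves the redundant flag save/restore, at the cost of relying on (and asserting) the caller's context.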
Signed-off-by: Jia He Acked-by: Christoffer Dall [maz: commit message tidy-up] Signed-off-by: Marc Zyngier --- virt/kvm/arm/vgic/vgic-v2.c | 7 ++++--- virt/kvm/arm/vgic/vgic-v3.c | 7 ++++--- virt/kvm/arm/vgic/vgic.c | 13 +++++++------ 3 files changed, 15 insertions(+), 12 deletions(-) diff --git a/virt/kvm/arm/vgic/vgic-v2.c b/virt/kvm/arm/vgic/vgic-v2.c index df5e6a6e3186..69b892abd7dc 100644 --- a/virt/kvm/arm/vgic/vgic-v2.c +++ b/virt/kvm/arm/vgic/vgic-v2.c @@ -62,7 +62,8 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu) struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu; struct vgic_v2_cpu_if *cpuif = &vgic_cpu->vgic_v2; int lr; - unsigned long flags; + + DEBUG_SPINLOCK_BUG_ON(!irqs_disabled()); cpuif->vgic_hcr &= ~GICH_HCR_UIE; @@ -83,7 +84,7 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu) irq = vgic_get_irq(vcpu->kvm, vcpu, intid); - spin_lock_irqsave(&irq->irq_lock, flags); + spin_lock(&irq->irq_lock); /* Always preserve the active bit */ irq->active = !!(val & GICH_LR_ACTIVE_BIT); @@ -126,7 +127,7 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu) vgic_irq_set_phys_active(irq, false); } - spin_unlock_irqrestore(&irq->irq_lock, flags); + spin_unlock(&irq->irq_lock); vgic_put_irq(vcpu->kvm, irq); } diff --git a/virt/kvm/arm/vgic/vgic-v3.c b/virt/kvm/arm/vgic/vgic-v3.c index 530b8491c892..9c0dd234ebe8 100644 --- a/virt/kvm/arm/vgic/vgic-v3.c +++ b/virt/kvm/arm/vgic/vgic-v3.c @@ -46,7 +46,8 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu) struct vgic_v3_cpu_if *cpuif = &vgic_cpu->vgic_v3; u32 model = vcpu->kvm->arch.vgic.vgic_model; int lr; - unsigned long flags; + + DEBUG_SPINLOCK_BUG_ON(!irqs_disabled()); cpuif->vgic_hcr &= ~ICH_HCR_UIE; @@ -75,7 +76,7 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu) if (!irq) /* An LPI could have been unmapped. */ continue; - spin_lock_irqsave(&irq->irq_lock, flags); + spin_lock(&irq->irq_lock); /* Always preserve the active bit */ irq->active = !!(val & ICH_LR_ACTIVE_BIT); @@ -118,7 +119,7 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu) vgic_irq_set_phys_active(irq, false); } - spin_unlock_irqrestore(&irq->irq_lock, flags); + spin_unlock(&irq->irq_lock); vgic_put_irq(vcpu->kvm, irq); } diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c index c22cea678a66..7cfdfbc910e0 100644 --- a/virt/kvm/arm/vgic/vgic.c +++ b/virt/kvm/arm/vgic/vgic.c @@ -593,10 +593,11 @@ static void vgic_prune_ap_list(struct kvm_vcpu *vcpu) { struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu; struct vgic_irq *irq, *tmp; - unsigned long flags; + + DEBUG_SPINLOCK_BUG_ON(!irqs_disabled()); retry: - spin_lock_irqsave(&vgic_cpu->ap_list_lock, flags); + spin_lock(&vgic_cpu->ap_list_lock); list_for_each_entry_safe(irq, tmp, &vgic_cpu->ap_list_head, ap_list) { struct kvm_vcpu *target_vcpu, *vcpuA, *vcpuB; @@ -637,7 +638,7 @@ static void vgic_prune_ap_list(struct kvm_vcpu *vcpu) /* This interrupt looks like it has to be migrated. 
*/ spin_unlock(&irq->irq_lock); - spin_unlock_irqrestore(&vgic_cpu->ap_list_lock, flags); + spin_unlock(&vgic_cpu->ap_list_lock); /* * Ensure locking order by always locking the smallest @@ -651,7 +652,7 @@ static void vgic_prune_ap_list(struct kvm_vcpu *vcpu) vcpuB = vcpu; } - spin_lock_irqsave(&vcpuA->arch.vgic_cpu.ap_list_lock, flags); + spin_lock(&vcpuA->arch.vgic_cpu.ap_list_lock); spin_lock_nested(&vcpuB->arch.vgic_cpu.ap_list_lock, SINGLE_DEPTH_NESTING); spin_lock(&irq->irq_lock); @@ -676,7 +677,7 @@ static void vgic_prune_ap_list(struct kvm_vcpu *vcpu) spin_unlock(&irq->irq_lock); spin_unlock(&vcpuB->arch.vgic_cpu.ap_list_lock); - spin_unlock_irqrestore(&vcpuA->arch.vgic_cpu.ap_list_lock, flags); + spin_unlock(&vcpuA->arch.vgic_cpu.ap_list_lock); if (target_vcpu_needs_kick) { kvm_make_request(KVM_REQ_IRQ_PENDING, target_vcpu); @@ -686,7 +687,7 @@ static void vgic_prune_ap_list(struct kvm_vcpu *vcpu) goto retry; } - spin_unlock_irqrestore(&vgic_cpu->ap_list_lock, flags); + spin_unlock(&vgic_cpu->ap_list_lock); } static inline void vgic_fold_lr_state(struct kvm_vcpu *vcpu) From patchwork Mon Aug 13 14:57:53 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10564399 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 6D5541057 for ; Mon, 13 Aug 2018 15:00:44 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 5E3A32945F for ; Mon, 13 Aug 2018 15:00:44 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 5A5EF294BC; Mon, 13 Aug 2018 15:00:44 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 45E762945F for ; Mon, 13 Aug 2018 15:00:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730023AbeHMRnT (ORCPT ); Mon, 13 Aug 2018 13:43:19 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:60670 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730020AbeHMRnS (ORCPT ); Mon, 13 Aug 2018 13:43:18 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D34BD1688; Mon, 13 Aug 2018 08:00:41 -0700 (PDT) Received: from approximate.Emea.Arm.com (approximate.Emea.Arm.com [10.4.13.119]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id E1B483F5D0; Mon, 13 Aug 2018 08:00:38 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Andre Przywara , Andrew Jones , Catalin Marinas , Christoffer Dall , Dongjiu Geng , Eric Auger , "Gustavo A . R . 
Silva" , James Morse , Jia He , Kees Cook , Mark Rutland , Punit Agrawal , Suzuki Poulose Subject: [PATCH 35/37] KVM: arm: Use true and false for boolean values Date: Mon, 13 Aug 2018 15:57:53 +0100 Message-Id: <20180813145755.16566-36-marc.zyngier@arm.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com> References: <20180813145755.16566-1-marc.zyngier@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: "Gustavo A. R. Silva" Return statements in functions returning bool should use true or false instead of an integer value. This code was detected with the help of Coccinelle. Signed-off-by: Gustavo A. R. Silva Signed-off-by: Marc Zyngier --- arch/arm/include/asm/kvm_emulate.h | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm/include/asm/kvm_emulate.h b/arch/arm/include/asm/kvm_emulate.h index b50fe8380868..2b7eed21b20e 100644 --- a/arch/arm/include/asm/kvm_emulate.h +++ b/arch/arm/include/asm/kvm_emulate.h @@ -119,7 +119,7 @@ static inline void vcpu_set_wfe_traps(struct kvm_vcpu *vcpu) static inline bool vcpu_mode_is_32bit(const struct kvm_vcpu *vcpu) { - return 1; + return true; } static inline unsigned long *vcpu_pc(struct kvm_vcpu *vcpu) From patchwork Mon Aug 13 14:57:54 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10564401 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 7EFF114E2 for ; Mon, 13 Aug 2018 15:00:47 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 6E40F2949F for ; Mon, 13 Aug 2018 15:00:47 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 6C989294C3; Mon, 13 Aug 2018 15:00:47 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id D29E12949F for ; Mon, 13 Aug 2018 15:00:46 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730032AbeHMRnW (ORCPT ); Mon, 13 Aug 2018 13:43:22 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:60698 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729925AbeHMRnW (ORCPT ); Mon, 13 Aug 2018 13:43:22 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 10BBB168F; Mon, 13 Aug 2018 08:00:45 -0700 (PDT) Received: from approximate.Emea.Arm.com (approximate.Emea.Arm.com [10.4.13.119]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 1EA543F5D0; Mon, 13 Aug 2018 08:00:41 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Andre Przywara , Andrew Jones , Catalin Marinas , Christoffer Dall , Dongjiu Geng , Eric Auger , "Gustavo A . R . 
Silva" , James Morse , Jia He , Kees Cook , Mark Rutland , Punit Agrawal , Suzuki Poulose Subject: [PATCH 36/37] KVM: arm/arm64: Skip updating PMD entry if no change Date: Mon, 13 Aug 2018 15:57:54 +0100 Message-Id: <20180813145755.16566-37-marc.zyngier@arm.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com> References: <20180813145755.16566-1-marc.zyngier@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Punit Agrawal Contention on updating a PMD entry by a large number of vcpus can lead to duplicate work when handling stage 2 page faults. As the page table update follows the break-before-make requirement of the architecture, it can lead to repeated refaults due to clearing the entry and flushing the tlbs. This problem is more likely when - * there are a large number of vcpus * the mapping is a large block mapping such as when using PMD hugepages (512MB) with 64k pages. Fix this by skipping the page table update if there is no change in the entry being updated. Cc: stable@vger.kernel.org Fixes: ad361f093c1e ("KVM: ARM: Support hugetlbfs backed huge pages") Reviewed-by: Suzuki Poulose Acked-by: Christoffer Dall Signed-off-by: Punit Agrawal Signed-off-by: Marc Zyngier --- virt/kvm/arm/mmu.c | 38 +++++++++++++++++++++++++++----------- 1 file changed, 27 insertions(+), 11 deletions(-) diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c index 97d27cd9c654..13dfe36501aa 100644 --- a/virt/kvm/arm/mmu.c +++ b/virt/kvm/arm/mmu.c @@ -1044,19 +1044,35 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache pmd = stage2_get_pmd(kvm, cache, addr); VM_BUG_ON(!pmd); - /* - * Mapping in huge pages should only happen through a fault. If a - * page is merged into a transparent huge page, the individual - * subpages of that huge page should be unmapped through MMU - * notifiers before we get here. - * - * Merging of CompoundPages is not supported; they should become - * splitting first, unmapped, merged, and mapped back in on-demand. - */ - VM_BUG_ON(pmd_present(*pmd) && pmd_pfn(*pmd) != pmd_pfn(*new_pmd)); - old_pmd = *pmd; if (pmd_present(old_pmd)) { + /* + * Multiple vcpus faulting on the same PMD entry, can + * lead to them sequentially updating the PMD with the + * same value. Following the break-before-make + * (pmd_clear() followed by tlb_flush()) process can + * hinder forward progress due to refaults generated + * on missing translations. + * + * Skip updating the page table if the entry is + * unchanged. + */ + if (pmd_val(old_pmd) == pmd_val(*new_pmd)) + return 0; + + /* + * Mapping in huge pages should only happen through a + * fault. If a page is merged into a transparent huge + * page, the individual subpages of that huge page + * should be unmapped through MMU notifiers before we + * get here. + * + * Merging of CompoundPages is not supported; they + * should become splitting first, unmapped, merged, 
+ */ + VM_BUG_ON(pmd_pfn(old_pmd) != pmd_pfn(*new_pmd)); + pmd_clear(pmd); kvm_tlb_flush_vmid_ipa(kvm, addr); } else { From patchwork Mon Aug 13 14:57:55 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marc Zyngier X-Patchwork-Id: 10564403 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 55E3114E2 for ; Mon, 13 Aug 2018 15:00:50 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 4631129493 for ; Mon, 13 Aug 2018 15:00:50 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 4460E2949B; Mon, 13 Aug 2018 15:00:50 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-7.9 required=2.0 tests=BAYES_00,MAILING_LIST_MULTI, RCVD_IN_DNSWL_HI autolearn=ham version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id E0A5A29493 for ; Mon, 13 Aug 2018 15:00:49 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1730025AbeHMRnZ (ORCPT ); Mon, 13 Aug 2018 13:43:25 -0400 Received: from usa-sjc-mx-foss1.foss.arm.com ([217.140.101.70]:60730 "EHLO foss.arm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1729794AbeHMRnZ (ORCPT ); Mon, 13 Aug 2018 13:43:25 -0400 Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.72.51.249]) by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 41DE880D; Mon, 13 Aug 2018 08:00:48 -0700 (PDT) Received: from approximate.Emea.Arm.com (approximate.Emea.Arm.com [10.4.13.119]) by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 4FF8D3F5D0; Mon, 13 Aug 2018 08:00:45 -0700 (PDT) From: Marc Zyngier To: Paolo Bonzini , =?utf-8?b?UmFkaW0gS3LEjW3DocWZ?= Cc: kvm@vger.kernel.org, kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org, Andre Przywara , Andrew Jones , Catalin Marinas , Christoffer Dall , Dongjiu Geng , Eric Auger , "Gustavo A . R . Silva" , James Morse , Jia He , Kees Cook , Mark Rutland , Punit Agrawal , Suzuki Poulose Subject: [PATCH 37/37] KVM: arm/arm64: Skip updating PTE entry if no change Date: Mon, 13 Aug 2018 15:57:55 +0100 Message-Id: <20180813145755.16566-38-marc.zyngier@arm.com> X-Mailer: git-send-email 2.18.0 In-Reply-To: <20180813145755.16566-1-marc.zyngier@arm.com> References: <20180813145755.16566-1-marc.zyngier@arm.com> Sender: kvm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: kvm@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Punit Agrawal When there is contention on faulting in a particular page table entry at stage 2, the break-before-make requirement of the architecture can lead to additional refaulting due to TLB invalidation. Avoid this by skipping a page table update if the new value of the PTE matches the previous value. 
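To make the shape of the fix easier to see before reading the diff below, here is a condensed, illustrative sketch (stage2_demo_set_pte() and its exact argument list are invented; pte_present(), pte_val(), kvm_set_pte(), __pte() and kvm_tlb_flush_vmid_ipa() are the helpers used by the real stage2_set_pte()):

#include <linux/kvm_host.h>
#include <asm/kvm_mmu.h>

/* Illustrative only: the real logic lives in stage2_set_pte() in virt/kvm/arm/mmu.c. */
static int stage2_demo_set_pte(struct kvm *kvm, pte_t *pte, pte_t *new_pte,
			       phys_addr_t addr)
{
	pte_t old_pte = *pte;

	if (pte_present(old_pte)) {
		/* Skip the break-before-make cycle if nothing would change. */
		if (pte_val(old_pte) == pte_val(*new_pte))
			return 0;

		/* Break-before-make: clear the entry, then invalidate the TLBs. */
		kvm_set_pte(pte, __pte(0));
		kvm_tlb_flush_vmid_ipa(kvm, addr);
	}

	kvm_set_pte(pte, *new_pte);
	return 0;
}

The previous patch adds the equivalent pmd_val() comparison to stage2_set_pmd_huge() for PMD-level block mappings.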
Cc: stable@vger.kernel.org Fixes: d5d8184d35c9 ("KVM: ARM: Memory virtualization setup") Reviewed-by: Suzuki Poulose Acked-by: Christoffer Dall Signed-off-by: Punit Agrawal Signed-off-by: Marc Zyngier --- virt/kvm/arm/mmu.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c index 13dfe36501aa..91aaf73b00df 100644 --- a/virt/kvm/arm/mmu.c +++ b/virt/kvm/arm/mmu.c @@ -1147,6 +1147,10 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache, /* Create 2nd stage page table mapping - Level 3 */ old_pte = *pte; if (pte_present(old_pte)) { + /* Skip page table update if there is no change */ + if (pte_val(old_pte) == pte_val(*new_pte)) + return 0; + kvm_set_pte(pte, __pte(0)); kvm_tlb_flush_vmid_ipa(kvm, addr); } else {