From patchwork Wed Jan 31 09:34:53 2018
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 10193683
From: Christoffer Dall
To: Paolo Bonzini, Radim Krčmář
Cc: linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    kvm@vger.kernel.org, Marc Zyngier, Christoffer Dall
Subject: [PULL 14/28] KVM: arm/arm64: Split dcache/icache flushing
Date: Wed, 31 Jan 2018 10:34:53 +0100
Message-Id: <20180131093507.22219-15-christoffer.dall@linaro.org>
X-Mailer: git-send-email 2.14.2
In-Reply-To: <20180131093507.22219-1-christoffer.dall@linaro.org>
References: <20180131093507.22219-1-christoffer.dall@linaro.org>

From: Marc Zyngier

As we're about to introduce opportunistic invalidation of the icache,
let's split dcache and icache flushing.

Acked-by: Christoffer Dall
Signed-off-by: Marc Zyngier
Signed-off-by: Christoffer Dall
---
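One way to read the transformation: the old combined helper is equivalent
to the two new helpers called back to back, which is exactly how the
mmu.c call sites below are converted. A minimal sketch of that
equivalence (the wrapper itself is illustrative only, not part of this
patch):

/*
 * Illustrative only -- not in this patch. The old combined helper
 * can be expressed in terms of the two new ones. Ordering matters:
 * the dcache is cleaned before the icache is invalidated, so that
 * subsequent instruction fetches observe the newly written page.
 */
static inline void __coherent_cache_guest_page(struct kvm_vcpu *vcpu,
					       kvm_pfn_t pfn,
					       unsigned long size)
{
	__clean_dcache_guest_page(vcpu, pfn, size);
	__invalidate_icache_guest_page(vcpu, pfn, size);
}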
 arch/arm/include/asm/kvm_mmu.h   | 60 ++++++++++++++++++++++++++++------------
 arch/arm64/include/asm/kvm_mmu.h | 13 +++++++--
 virt/kvm/arm/mmu.c               | 20 ++++++++++----
 3 files changed, 67 insertions(+), 26 deletions(-)

diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index fa6f2174276b..9fa4b2520974 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -126,21 +126,12 @@ static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
 	return (vcpu_cp15(vcpu, c1_SCTLR) & 0b101) == 0b101;
 }
 
-static inline void __coherent_cache_guest_page(struct kvm_vcpu *vcpu,
-					       kvm_pfn_t pfn,
-					       unsigned long size)
+static inline void __clean_dcache_guest_page(struct kvm_vcpu *vcpu,
+					     kvm_pfn_t pfn,
+					     unsigned long size)
 {
 	/*
-	 * If we are going to insert an instruction page and the icache is
-	 * either VIPT or PIPT, there is a potential problem where the host
-	 * (or another VM) may have used the same page as this guest, and we
-	 * read incorrect data from the icache. If we're using a PIPT cache,
-	 * we can invalidate just that page, but if we are using a VIPT cache
-	 * we need to invalidate the entire icache - damn shame - as written
-	 * in the ARM ARM (DDI 0406C.b - Page B3-1393).
-	 *
-	 * VIVT caches are tagged using both the ASID and the VMID and doesn't
-	 * need any kind of flushing (DDI 0406C.b - Page B3-1392).
+	 * Clean the dcache to the Point of Coherency.
 	 *
 	 * We need to do this through a kernel mapping (using the
 	 * user-space mapping has proved to be the wrong
@@ -155,19 +146,52 @@ static inline void __coherent_cache_guest_page(struct kvm_vcpu *vcpu,
 
 		kvm_flush_dcache_to_poc(va, PAGE_SIZE);
 
-		if (icache_is_pipt())
-			__cpuc_coherent_user_range((unsigned long)va,
-						   (unsigned long)va + PAGE_SIZE);
-
 		size -= PAGE_SIZE;
 		pfn++;
 
 		kunmap_atomic(va);
 	}
+}
 
-	if (!icache_is_pipt() && !icache_is_vivt_asid_tagged()) {
+static inline void __invalidate_icache_guest_page(struct kvm_vcpu *vcpu,
+						  kvm_pfn_t pfn,
+						  unsigned long size)
+{
+	/*
+	 * If we are going to insert an instruction page and the icache is
+	 * either VIPT or PIPT, there is a potential problem where the host
+	 * (or another VM) may have used the same page as this guest, and we
+	 * read incorrect data from the icache. If we're using a PIPT cache,
+	 * we can invalidate just that page, but if we are using a VIPT cache
+	 * we need to invalidate the entire icache - damn shame - as written
+	 * in the ARM ARM (DDI 0406C.b - Page B3-1393).
+	 *
+	 * VIVT caches are tagged using both the ASID and the VMID and doesn't
+	 * need any kind of flushing (DDI 0406C.b - Page B3-1392).
+	 */
+
+	VM_BUG_ON(size & ~PAGE_MASK);
+
+	if (icache_is_vivt_asid_tagged())
+		return;
+
+	if (!icache_is_pipt()) {
+		/* any kind of VIPT cache */
 		__flush_icache_all();
+		return;
+	}
+
+	/* PIPT cache. As for the d-side, use a temporary kernel mapping. */
+	while (size) {
+		void *va = kmap_atomic_pfn(pfn);
+
+		__cpuc_coherent_user_range((unsigned long)va,
+					   (unsigned long)va + PAGE_SIZE);
+
+		size -= PAGE_SIZE;
+		pfn++;
+
+		kunmap_atomic(va);
 	}
 }
 
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 672c8684d5c2..8034b96fb3a4 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -230,19 +230,26 @@ static inline bool vcpu_has_cache_enabled(struct kvm_vcpu *vcpu)
 	return (vcpu_sys_reg(vcpu, SCTLR_EL1) & 0b101) == 0b101;
 }
 
-static inline void __coherent_cache_guest_page(struct kvm_vcpu *vcpu,
-					       kvm_pfn_t pfn,
-					       unsigned long size)
+static inline void __clean_dcache_guest_page(struct kvm_vcpu *vcpu,
+					     kvm_pfn_t pfn,
+					     unsigned long size)
 {
 	void *va = page_address(pfn_to_page(pfn));
 
 	kvm_flush_dcache_to_poc(va, size);
+}
 
+static inline void __invalidate_icache_guest_page(struct kvm_vcpu *vcpu,
+						  kvm_pfn_t pfn,
+						  unsigned long size)
+{
 	if (icache_is_aliasing()) {
 		/* any kind of VIPT cache */
 		__flush_icache_all();
 	} else if (is_kernel_in_hyp_mode() || !icache_is_vpipt()) {
 		/* PIPT or VPIPT at EL2 (see comment in __kvm_tlb_flush_vmid_ipa) */
+		void *va = page_address(pfn_to_page(pfn));
+
 		flush_icache_range((unsigned long)va,
 				   (unsigned long)va + size);
 	}
 
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index b36945d49986..2174244f6317 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1257,10 +1257,16 @@ void kvm_arch_mmu_enable_log_dirty_pt_masked(struct kvm *kvm,
 	kvm_mmu_write_protect_pt_masked(kvm, slot, gfn_offset, mask);
 }
 
-static void coherent_cache_guest_page(struct kvm_vcpu *vcpu, kvm_pfn_t pfn,
-				      unsigned long size)
+static void clean_dcache_guest_page(struct kvm_vcpu *vcpu, kvm_pfn_t pfn,
+				    unsigned long size)
 {
-	__coherent_cache_guest_page(vcpu, pfn, size);
+	__clean_dcache_guest_page(vcpu, pfn, size);
+}
+
+static void invalidate_icache_guest_page(struct kvm_vcpu *vcpu, kvm_pfn_t pfn,
+					 unsigned long size)
+{
+	__invalidate_icache_guest_page(vcpu, pfn, size);
 }
 
 static void kvm_send_hwpoison_signal(unsigned long address,
@@ -1391,7 +1397,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			new_pmd = kvm_s2pmd_mkwrite(new_pmd);
 			kvm_set_pfn_dirty(pfn);
 		}
-		coherent_cache_guest_page(vcpu, pfn, PMD_SIZE);
+		clean_dcache_guest_page(vcpu, pfn, PMD_SIZE);
+		invalidate_icache_guest_page(vcpu, pfn, PMD_SIZE);
+
 		ret = stage2_set_pmd_huge(kvm, memcache, fault_ipa, &new_pmd);
 	} else {
 		pte_t new_pte = pfn_pte(pfn, mem_type);
@@ -1401,7 +1409,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			kvm_set_pfn_dirty(pfn);
 			mark_page_dirty(kvm, gfn);
 		}
-		coherent_cache_guest_page(vcpu, pfn, PAGE_SIZE);
+		clean_dcache_guest_page(vcpu, pfn, PAGE_SIZE);
+		invalidate_icache_guest_page(vcpu, pfn, PAGE_SIZE);
+
 		ret = stage2_set_pte(kvm, memcache, fault_ipa, &new_pte, flags);
 	}
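
Looking ahead: the commit message says this split prepares for
opportunistic invalidation of the icache, i.e. call sites should
eventually be able to skip the i-side work when a fault does not map
code. A hypothetical caller-side shape under that assumption
(exec_fault is invented here for illustration; in this patch both call
sites still invalidate unconditionally):

	/*
	 * Hypothetical sketch -- exec_fault does not exist in this
	 * patch. With the flush split in two, only faults that map
	 * executable pages would need to touch the icache:
	 */
	clean_dcache_guest_page(vcpu, pfn, PAGE_SIZE);
	if (exec_fault)
		invalidate_icache_guest_page(vcpu, pfn, PAGE_SIZE);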