From patchwork Sat Sep 18 00:56:27 2021
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 12503359
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Lai Jiangshan, Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov,
 Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
 Borislav Petkov, x86@kernel.org, "H. Peter Anvin", Xiao Guangrong,
 Marcelo Tosatti, kvm@vger.kernel.org
Subject: [PATCH V2 01/10] KVM: X86: Fix missed remote tlb flush in rmap_write_protect()
Date: Sat, 18 Sep 2021 08:56:27 +0800
Message-Id: <20210918005636.3675-2-jiangshanlai@gmail.com>
In-Reply-To: <20210918005636.3675-1-jiangshanlai@gmail.com>
List-ID: kvm@vger.kernel.org

When kvm->tlbs_dirty > 0, some rmaps might have been deleted without
flushing the tlb remotely after kvm_sync_page(). If @gfn was writable
before and its rmaps were deleted in kvm_sync_page(), and if the tlb
entry is still present in a remote running vCPU, the @gfn is not safely
protected.

To fix the problem, kvm_sync_page() now does the remote flush when
needed.

Fixes: a4ee1ca4a36e ("KVM: MMU: delay flush all tlbs on sync_page path")
Signed-off-by: Lai Jiangshan
---
Changed from V1:
	Force remote flush timely instead of increasing tlbs_dirty.

 arch/x86/kvm/mmu/paging_tmpl.h | 23 ++---------------------
 1 file changed, 2 insertions(+), 21 deletions(-)

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 72f358613786..5962d4f8a72e 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -1038,14 +1038,6 @@ static gpa_t FNAME(gva_to_gpa_nested)(struct kvm_vcpu *vcpu, gpa_t vaddr,
  * Using the cached information from sp->gfns is safe because:
  * - The spte has a reference to the struct page, so the pfn for a given gfn
  *   can't change unless all sptes pointing to it are nuked first.
- *
- * Note:
- *   We should flush all tlbs if spte is dropped even though guest is
- *   responsible for it. Since if we don't, kvm_mmu_notifier_invalidate_page
- *   and kvm_mmu_notifier_invalidate_range_start detect the mapping page isn't
- *   used by guest then tlbs are not flushed, so guest is allowed to access the
- *   freed pages.
- *   And we increase kvm->tlbs_dirty to delay tlbs flush in this case.
  */
 static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 {
@@ -1098,13 +1090,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 			return 0;
 
 		if (FNAME(prefetch_invalid_gpte)(vcpu, sp, &sp->spt[i], gpte)) {
-			/*
-			 * Update spte before increasing tlbs_dirty to make
-			 * sure no tlb flush is lost after spte is zapped; see
-			 * the comments in kvm_flush_remote_tlbs().
-			 */
-			smp_wmb();
-			vcpu->kvm->tlbs_dirty++;
+			set_spte_ret |= SET_SPTE_NEED_REMOTE_TLB_FLUSH;
 			continue;
 		}
@@ -1119,12 +1105,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 
 		if (gfn != sp->gfns[i]) {
 			drop_spte(vcpu->kvm, &sp->spt[i]);
-			/*
-			 * The same as above where we are doing
-			 * prefetch_invalid_gpte().
-			 */
-			smp_wmb();
-			vcpu->kvm->tlbs_dirty++;
+			set_spte_ret |= SET_SPTE_NEED_REMOTE_TLB_FLUSH;
 			continue;
 		}
From patchwork Sat Sep 18 00:56:28 2021
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 12503361
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Lai Jiangshan, Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov,
 Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
 Borislav Petkov, x86@kernel.org, "H. Peter Anvin", Marcelo Tosatti,
 Avi Kivity, kvm@vger.kernel.org
Subject: [PATCH V2 02/10] KVM: X86: Synchronize the shadow pagetable before link it
Date: Sat, 18 Sep 2021 08:56:28 +0800
Message-Id: <20210918005636.3675-3-jiangshanlai@gmail.com>

If a gpte is changed from non-present to present, the guest doesn't
need to flush the tlb per SDM. So the host must synchronize the sp
before linking it. Otherwise the guest might use a wrong mapping.

For example: the guest first changes a level-1 pagetable, and then
links its parent to a new place where the original gpte is non-present.
Finally the guest can access the remapped area without flushing the
tlb. The guest's behavior should be allowed per SDM, but the host kvm
mmu gets it wrong.

Fixes: 4731d4c7a077 ("KVM: MMU: out of sync shadow core")
Signed-off-by: Lai Jiangshan
---
Changed from V1:
	Don't loop, but just return when it needs to break.

 arch/x86/kvm/mmu/mmu.c         | 15 ++++++++-------
 arch/x86/kvm/mmu/paging_tmpl.h | 22 ++++++++++++++++++++++
 2 files changed, 30 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 26f6bd238a77..3c1b069a7bcf 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2024,8 +2024,8 @@ static void mmu_pages_clear_parents(struct mmu_page_path *parents)
 	} while (!sp->unsync_children);
 }
 
-static void mmu_sync_children(struct kvm_vcpu *vcpu,
-			      struct kvm_mmu_page *parent)
+static int mmu_sync_children(struct kvm_vcpu *vcpu,
+			     struct kvm_mmu_page *parent, bool can_yield)
 {
 	int i;
 	struct kvm_mmu_page *sp;
@@ -2052,12 +2052,16 @@ static void mmu_sync_children(struct kvm_vcpu *vcpu,
 		}
 		if (need_resched() || rwlock_needbreak(&vcpu->kvm->mmu_lock)) {
 			kvm_mmu_flush_or_zap(vcpu, &invalid_list, false, flush);
+			if (!can_yield)
+				return -EINTR;
+
 			cond_resched_rwlock_write(&vcpu->kvm->mmu_lock);
 			flush = false;
 		}
 	}
 
 	kvm_mmu_flush_or_zap(vcpu, &invalid_list, false, flush);
+	return 0;
 }
 
 static void __clear_sp_write_flooding_count(struct kvm_mmu_page *sp)
@@ -2143,9 +2147,6 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 			kvm_make_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu);
 		}
 
-		if (sp->unsync_children)
-			kvm_make_request(KVM_REQ_MMU_SYNC, vcpu);
-
 		__clear_sp_write_flooding_count(sp);
 
 trace_get_page:
@@ -3642,7 +3643,7 @@ void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu)
 		write_lock(&vcpu->kvm->mmu_lock);
 		kvm_mmu_audit(vcpu, AUDIT_PRE_SYNC);
-		mmu_sync_children(vcpu, sp);
+		mmu_sync_children(vcpu, sp, true);
 		kvm_mmu_audit(vcpu, AUDIT_POST_SYNC);
 		write_unlock(&vcpu->kvm->mmu_lock);
@@ -3658,7 +3659,7 @@ void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu)
 		if (IS_VALID_PAE_ROOT(root)) {
 			root &= PT64_BASE_ADDR_MASK;
 			sp = to_shadow_page(root);
-			mmu_sync_children(vcpu, sp);
+			mmu_sync_children(vcpu, sp, true);
 		}
 	}
 
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 5962d4f8a72e..87374cfd82be 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -704,6 +704,28 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
 		access = gw->pt_access[it.level - 2];
 		sp = kvm_mmu_get_page(vcpu, table_gfn, fault->addr,
 				      it.level-1, false, access);
+		/*
+		 * We must synchronize the pagetable before linking it
+		 * because the guest doesn't need to flush tlb when a
+		 * gpte is changed from non-present to present.
+		 * Otherwise, the guest may use a wrong mapping.
+		 *
+		 * For PG_LEVEL_4K, kvm_mmu_get_page() has already
+		 * synchronized it transiently via kvm_sync_page().
+		 *
+		 * For a higher level pagetable, we synchronize it via
+		 * the slower mmu_sync_children(). If it needs to
+		 * break, it returns RET_PF_RETRY and will retry on
+		 * the next #PF; it has already made some progress.
+		 *
+		 * It also makes a KVM_REQ_MMU_SYNC request in case the
+		 * @sp is linked at a different addr, to expedite it.
+		 */
+		if (sp->unsync_children &&
+		    mmu_sync_children(vcpu, sp, false)) {
+			kvm_make_request(KVM_REQ_MMU_SYNC, vcpu);
+			return RET_PF_RETRY;
+		}
 	}
From patchwork Sat Sep 18 00:56:29 2021
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 12503363
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Lai Jiangshan, Paolo Bonzini, kvm@vger.kernel.org
Subject: [PATCH V2 03/10] KVM: Remove tlbs_dirty
Date: Sat, 18 Sep 2021 08:56:29 +0800
Message-Id: <20210918005636.3675-4-jiangshanlai@gmail.com>

There is no user of tlbs_dirty.

Signed-off-by: Lai Jiangshan
---
 include/linux/kvm_host.h | 1 -
 virt/kvm/kvm_main.c      | 9 +--------
 2 files changed, 1 insertion(+), 9 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index e4d712e9f760..3b7846cd0637 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -608,7 +608,6 @@ struct kvm {
 	unsigned long mmu_notifier_range_start;
 	unsigned long mmu_notifier_range_end;
 #endif
-	long tlbs_dirty;
 	struct list_head devices;
 	u64 manual_dirty_log_protect;
 	struct dentry *debugfs_dentry;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 3e67c93ca403..6d6be42ec78d 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -312,12 +312,6 @@ EXPORT_SYMBOL_GPL(kvm_make_all_cpus_request);
 #ifndef CONFIG_HAVE_KVM_ARCH_TLB_FLUSH_ALL
 void kvm_flush_remote_tlbs(struct kvm *kvm)
 {
-	/*
-	 * Read tlbs_dirty before setting KVM_REQ_TLB_FLUSH in
-	 * kvm_make_all_cpus_request.
-	 */
-	long dirty_count = smp_load_acquire(&kvm->tlbs_dirty);
-
 	/*
 	 * We want to publish modifications to the page tables before reading
 	 * mode. Pairs with a memory barrier in arch-specific code.
@@ -332,7 +326,6 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
 	if (!kvm_arch_flush_remote_tlb(kvm)
 	    || kvm_make_all_cpus_request(kvm, KVM_REQ_TLB_FLUSH))
 		++kvm->stat.generic.remote_tlb_flush;
-	cmpxchg(&kvm->tlbs_dirty, dirty_count, 0);
 }
 EXPORT_SYMBOL_GPL(kvm_flush_remote_tlbs);
 #endif
@@ -537,7 +530,7 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
 		}
 	}
 
-	if (range->flush_on_ret && (ret || kvm->tlbs_dirty))
+	if (range->flush_on_ret && ret)
 		kvm_flush_remote_tlbs(kvm);
 
 	if (locked)
From patchwork Sat Sep 18 00:56:30 2021
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 12503365
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Lai Jiangshan, Paolo Bonzini, Sean Christopherson, Vitaly Kuznetsov,
 Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
 Borislav Petkov, x86@kernel.org, "H. Peter Anvin", kvm@vger.kernel.org
Subject: [PATCH V2 04/10] KVM: X86: Don't flush current tlb on shadow page modification
Date: Sat, 18 Sep 2021 08:56:30 +0800
Message-Id: <20210918005636.3675-5-jiangshanlai@gmail.com>

After any shadow page modification, flushing the tlb only on the
current vCPU is wrong, since other vCPUs' tlbs might still be stale.
In other words, if any tlb flushing is mandatory after a shadow page
modification, SET_SPTE_NEED_REMOTE_TLB_FLUSH or remote_flush should be
set and the tlbs of all vCPUs should be flushed. There is no point in
flushing only the current tlb except when the request comes from the
vCPU's or pCPU's own activities.

If there were a bug where a mandatory tlb flush was required but
SET_SPTE_NEED_REMOTE_TLB_FLUSH/remote_flush failed to be set, this
patch would expose the bug in a more destructive way. The related code
paths have been checked and no missing SET_SPTE_NEED_REMOTE_TLB_FLUSH
has been found yet.

Currently, there is no optional tlb flushing left after the sync-page
related code was changed to flush the tlb timely, so the local flushing
code can simply be removed.

Signed-off-by: Lai Jiangshan
---
 arch/x86/kvm/mmu/mmu.c     | 5 -----
 arch/x86/kvm/mmu/tdp_mmu.c | 1 -
 2 files changed, 6 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 3c1b069a7bcf..f40087ee2704 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1934,9 +1934,6 @@ static void kvm_mmu_flush_or_zap(struct kvm_vcpu *vcpu,
 {
 	if (kvm_mmu_remote_flush_or_zap(vcpu->kvm, invalid_list, remote_flush))
 		return;
-
-	if (local_flush)
-		kvm_make_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu);
 }
 
 #ifdef CONFIG_KVM_MMU_AUDIT
@@ -2144,7 +2141,6 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 			break;
 
 		WARN_ON(!list_empty(&invalid_list));
-		kvm_make_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu);
 	}
 
 	__clear_sp_write_flooding_count(sp);
@@ -2751,7 +2747,6 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
 	if (set_spte_ret & SET_SPTE_WRITE_PROTECTED_PT) {
 		if (write_fault)
 			ret = RET_PF_EMULATE;
-		kvm_make_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu);
 	}
 
 	if (set_spte_ret & SET_SPTE_NEED_REMOTE_TLB_FLUSH || flush)
diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 548393c2bfe9..d5339fee6f2d 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -958,7 +958,6 @@ static int tdp_mmu_map_handle_target_level(struct kvm_vcpu *vcpu,
 	if (make_spte_ret & SET_SPTE_WRITE_PROTECTED_PT) {
 		if (fault->write)
 			ret = RET_PF_EMULATE;
-		kvm_make_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu);
 	}
 
 	/* If a MMIO SPTE is installed, the MMIO will need to be emulated. */
From: Lai Jiangshan
Subject: [PATCH V2 05/10] KVM: X86: Remove kvm_mmu_flush_or_zap()
Date: Sat, 18 Sep 2021 08:56:31 +0800
Message-Id: <20210918005636.3675-6-jiangshanlai@gmail.com>
In-Reply-To: <20210918005636.3675-1-jiangshanlai@gmail.com>

Because the local_flush argument is useless, kvm_mmu_flush_or_zap() can be
removed and kvm_mmu_remote_flush_or_zap() used instead.

Signed-off-by: Lai Jiangshan
---
 arch/x86/kvm/mmu/mmu.c | 26 ++++++--------------------
 1 file changed, 6 insertions(+), 20 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f40087ee2704..9aba5d93a747 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1928,14 +1928,6 @@ static bool kvm_mmu_remote_flush_or_zap(struct kvm *kvm,
     return true;
 }
 
-static void kvm_mmu_flush_or_zap(struct kvm_vcpu *vcpu,
-                                 struct list_head *invalid_list,
-                                 bool remote_flush, bool local_flush)
-{
-    if (kvm_mmu_remote_flush_or_zap(vcpu->kvm, invalid_list, remote_flush))
-        return;
-}
-
 #ifdef CONFIG_KVM_MMU_AUDIT
 #include "mmu_audit.c"
 #else
@@ -2029,7 +2021,6 @@ static int mmu_sync_children(struct kvm_vcpu *vcpu,
     struct mmu_page_path parents;
     struct kvm_mmu_pages pages;
     LIST_HEAD(invalid_list);
-    bool flush = false;
 
     while (mmu_unsync_walk(parent, &pages)) {
         bool protected = false;
@@ -2039,25 +2030,23 @@ static int mmu_sync_children(struct kvm_vcpu *vcpu,
 
         if (protected) {
             kvm_flush_remote_tlbs(vcpu->kvm);
-            flush = false;
         }
 
         for_each_sp(pages, sp, parents, i) {
             kvm_unlink_unsync_page(vcpu->kvm, sp);
-            flush |= kvm_sync_page(vcpu, sp, &invalid_list);
+            kvm_sync_page(vcpu, sp, &invalid_list);
             mmu_pages_clear_parents(&parents);
         }
         if (need_resched() || rwlock_needbreak(&vcpu->kvm->mmu_lock)) {
-            kvm_mmu_flush_or_zap(vcpu, &invalid_list, false, flush);
+            kvm_mmu_remote_flush_or_zap(vcpu->kvm, &invalid_list, false);
             if (!can_yield)
                 return -EINTR;
 
             cond_resched_rwlock_write(&vcpu->kvm->mmu_lock);
-            flush = false;
         }
     }
 
-    kvm_mmu_flush_or_zap(vcpu, &invalid_list, false, flush);
+    kvm_mmu_remote_flush_or_zap(vcpu->kvm, &invalid_list, false);
     return 0;
 }
 
@@ -5146,7 +5135,7 @@ static void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
     LIST_HEAD(invalid_list);
     u64 entry, gentry, *spte;
     int npte;
-    bool remote_flush, local_flush;
+    bool flush = false;
 
     /*
      * If we don't have indirect shadow pages, it means no page is
@@ -5155,8 +5144,6 @@ static void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
     if (!READ_ONCE(vcpu->kvm->arch.indirect_shadow_pages))
         return;
 
-    remote_flush = local_flush = false;
-
     pgprintk("%s: gpa %llx bytes %d\n", __func__, gpa, bytes);
 
     /*
@@ -5185,18 +5172,17 @@ static void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
         if (!spte)
             continue;
 
-        local_flush = true;
         while (npte--) {
             entry = *spte;
             mmu_page_zap_pte(vcpu->kvm, sp, spte, NULL);
             if (gentry && sp->role.level != PG_LEVEL_4K)
                 ++vcpu->kvm->stat.mmu_pde_zapped;
             if (need_remote_flush(entry, *spte))
-                remote_flush = true;
+                flush = true;
             ++spte;
         }
     }
 
-    kvm_mmu_flush_or_zap(vcpu, &invalid_list, remote_flush, local_flush);
+    kvm_mmu_remote_flush_or_zap(vcpu->kvm, &invalid_list, flush);
     kvm_mmu_audit(vcpu, AUDIT_POST_PTE_WRITE);
     write_unlock(&vcpu->kvm->mmu_lock);
 }
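[Editor's aside, not part of the patch: the flush-or-zap decision that the series funnels through kvm_mmu_remote_flush_or_zap() can be sketched as a freestanding model. struct kvm_model and its counters are hypothetical stand-ins for the real invalid_list and remote-TLB-flush machinery, so this only illustrates the control flow, not the kernel's implementation.]

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the KVM primitives. */
struct kvm_model {
    int invalid_pages;  /* models the length of invalid_list      */
    int remote_flushes; /* counts remote TLB flushes performed    */
};

/* Committing zapped pages also performs the remote flush. */
static void commit_zap_page(struct kvm_model *kvm)
{
    kvm->invalid_pages = 0;
    kvm->remote_flushes++;
}

/* Mirrors the decision kvm_mmu_remote_flush_or_zap() makes: zapping
 * the invalid list subsumes the flush; otherwise flush only when the
 * caller asked for one. Returns whether anything was done. */
static bool remote_flush_or_zap(struct kvm_model *kvm, bool remote_flush)
{
    if (kvm->invalid_pages) {
        commit_zap_page(kvm);
        return true;
    }
    if (remote_flush) {
        kvm->remote_flushes++;
        return true;
    }
    return false;
}
```

The point of the shape is that a non-empty invalid list never triggers two flushes: the zap path and the flush path are mutually exclusive.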
From patchwork Sat Sep 18 00:56:32 2021
From: Lai Jiangshan
Subject: [PATCH V2 06/10] KVM: X86: Change kvm_sync_page() to return true
 when remote flush is needed
Date: Sat, 18 Sep 2021 08:56:32 +0800
Message-Id: <20210918005636.3675-7-jiangshanlai@gmail.com>
In-Reply-To: <20210918005636.3675-1-jiangshanlai@gmail.com>

Currently kvm_sync_page() returns true when there is any present spte, but
the return value is ignored in its callers.

Changing kvm_sync_page() to return true when a remote flush is needed, and
changing mmu->sync_page() not to flush directly, allows remote flush
requests to be combined and reduced.
Signed-off-by: Lai Jiangshan
---
 arch/x86/kvm/mmu/mmu.c         | 21 +++++++++++++--------
 arch/x86/kvm/mmu/paging_tmpl.h | 21 ++++++++++-----------
 2 files changed, 23 insertions(+), 19 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 9aba5d93a747..2f3f47dc96b0 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1792,7 +1792,7 @@ static void mark_unsync(u64 *spte)
 static int nonpaging_sync_page(struct kvm_vcpu *vcpu,
                                struct kvm_mmu_page *sp)
 {
-    return 0;
+    return -1;
 }
 
 #define KVM_PAGE_ARRAY_NR 16
@@ -1906,12 +1906,14 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 static bool kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
                           struct list_head *invalid_list)
 {
-    if (vcpu->arch.mmu->sync_page(vcpu, sp) == 0) {
+    int ret = vcpu->arch.mmu->sync_page(vcpu, sp);
+
+    if (ret < 0) {
         kvm_mmu_prepare_zap_page(vcpu->kvm, sp, invalid_list);
         return false;
     }
 
-    return true;
+    return !!ret;
 }
 
 static bool kvm_mmu_remote_flush_or_zap(struct kvm *kvm,
@@ -2021,6 +2023,7 @@ static int mmu_sync_children(struct kvm_vcpu *vcpu,
     struct mmu_page_path parents;
     struct kvm_mmu_pages pages;
     LIST_HEAD(invalid_list);
+    bool flush = false;
 
     while (mmu_unsync_walk(parent, &pages)) {
         bool protected = false;
@@ -2030,23 +2033,25 @@ static int mmu_sync_children(struct kvm_vcpu *vcpu,
 
         if (protected) {
             kvm_flush_remote_tlbs(vcpu->kvm);
+            flush = false;
         }
 
         for_each_sp(pages, sp, parents, i) {
             kvm_unlink_unsync_page(vcpu->kvm, sp);
-            kvm_sync_page(vcpu, sp, &invalid_list);
+            flush |= kvm_sync_page(vcpu, sp, &invalid_list);
             mmu_pages_clear_parents(&parents);
         }
         if (need_resched() || rwlock_needbreak(&vcpu->kvm->mmu_lock)) {
-            kvm_mmu_remote_flush_or_zap(vcpu->kvm, &invalid_list, false);
+            kvm_mmu_remote_flush_or_zap(vcpu->kvm, &invalid_list, flush);
             if (!can_yield)
                 return -EINTR;
 
             cond_resched_rwlock_write(&vcpu->kvm->mmu_lock);
+            flush = false;
         }
     }
 
-    kvm_mmu_remote_flush_or_zap(vcpu->kvm, &invalid_list, false);
+    kvm_mmu_remote_flush_or_zap(vcpu->kvm, &invalid_list, flush);
     return 0;
 }
 
@@ -2130,6 +2135,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
             break;
 
         WARN_ON(!list_empty(&invalid_list));
+        kvm_flush_remote_tlbs(vcpu->kvm);
     }
 
     __clear_sp_write_flooding_count(sp);
@@ -4128,7 +4134,7 @@ static unsigned long get_cr3(struct kvm_vcpu *vcpu)
 }
 
 static bool sync_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, gfn_t gfn,
-                           unsigned int access, int *nr_present)
+                           unsigned int access)
 {
     if (unlikely(is_mmio_spte(*sptep))) {
         if (gfn != get_mmio_spte_gfn(*sptep)) {
@@ -4136,7 +4142,6 @@ static bool sync_mmio_spte(struct kvm_vcpu *vcpu, u64 *sptep, gfn_t gfn,
             return true;
         }
 
-        (*nr_present)++;
         mark_mmio_spte(vcpu, sptep, gfn, access);
         return true;
     }
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 87374cfd82be..c3edbc0f06b3 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -1060,11 +1060,16 @@ static gpa_t FNAME(gva_to_gpa_nested)(struct kvm_vcpu *vcpu, gpa_t vaddr,
  * Using the cached information from sp->gfns is safe because:
  * - The spte has a reference to the struct page, so the pfn for a given gfn
  *   can't change unless all sptes pointing to it are nuked first.
+ *
+ * Returns
+ * < 0: the sp should be zapped
+ *   0: the sp is synced and no tlb flushing is required
+ * > 0: the sp is synced and tlb flushing is required
  */
 static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 {
     union kvm_mmu_page_role mmu_role = vcpu->arch.mmu->mmu_role.base;
-    int i, nr_present = 0;
+    int i;
     bool host_writable;
     gpa_t first_pte_gpa;
     int set_spte_ret = 0;
@@ -1092,7 +1097,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
      */
     if (WARN_ON_ONCE(sp->role.direct ||
                      (sp->role.word ^ mmu_role.word) & ~sync_role_ign.word))
-        return 0;
+        return -1;
 
     first_pte_gpa = FNAME(get_level1_sp_gpa)(sp);
 
@@ -1109,7 +1114,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
         if (kvm_vcpu_read_guest_atomic(vcpu, pte_gpa, &gpte,
                                        sizeof(pt_element_t)))
-            return 0;
+            return -1;
 
         if (FNAME(prefetch_invalid_gpte)(vcpu, sp, &sp->spt[i], gpte)) {
             set_spte_ret |= SET_SPTE_NEED_REMOTE_TLB_FLUSH;
@@ -1121,8 +1126,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
         pte_access &= FNAME(gpte_access)(gpte);
         FNAME(protect_clean_gpte)(vcpu->arch.mmu, &pte_access, gpte);
 
-        if (sync_mmio_spte(vcpu, &sp->spt[i], gfn, pte_access,
-                           &nr_present))
+        if (sync_mmio_spte(vcpu, &sp->spt[i], gfn, pte_access))
             continue;
 
         if (gfn != sp->gfns[i]) {
@@ -1131,8 +1135,6 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
             continue;
         }
 
-        nr_present++;
-
         host_writable = sp->spt[i] & shadow_host_writable_mask;
 
         set_spte_ret |= set_spte(vcpu, &sp->spt[i],
@@ -1141,10 +1143,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
                                  true, false, host_writable);
     }
 
-    if (set_spte_ret & SET_SPTE_NEED_REMOTE_TLB_FLUSH)
-        kvm_flush_remote_tlbs(vcpu->kvm);
-
-    return nr_present;
+    return set_spte_ret & SET_SPTE_NEED_REMOTE_TLB_FLUSH;
 }
 
 #undef pt_element_t
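[Editor's aside, not part of the patch: the tri-state return contract that this patch documents for mmu->sync_page(), and the way kvm_sync_page() folds it into a flush request, can be modelled in isolation. The function and variable names below are illustrative, not the kernel's.]

```c
#include <assert.h>
#include <stdbool.h>

/* Models the new mmu->sync_page() contract:
 *   < 0  the shadow page is inconsistent and must be zapped,
 *     0  synced, no TLB flush required,
 *   > 0  synced, a remote TLB flush is required. */

/* Mirrors how kvm_sync_page() folds that tri-state result into a
 * "caller owes a remote flush" bool; *zap reports the < 0 case
 * (kvm_mmu_prepare_zap_page() in the real code). */
static bool sync_result_needs_flush(int sync_ret, bool *zap)
{
    if (sync_ret < 0) {
        *zap = true;   /* zapping will flush as part of the commit */
        return false;
    }
    *zap = false;
    return sync_ret > 0;
}
```

Callers such as mmu_sync_children() can then OR these booleans together across many pages and issue a single remote flush at the end, which is the combining the commit message describes.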
From patchwork Sat Sep 18 00:56:33 2021
From: Lai Jiangshan
Subject: [PATCH V2 07/10] KVM: X86: Zap the invalid list after remote tlb flushing
Date: Sat, 18 Sep 2021 08:56:33 +0800
Message-Id: <20210918005636.3675-8-jiangshanlai@gmail.com>
In-Reply-To: <20210918005636.3675-1-jiangshanlai@gmail.com>

In mmu_sync_children(), the invalid list can be zapped as part of the
remote TLB flush. Emptying the invalid list as soon as possible may save
a remote TLB flush in some cases.

Signed-off-by: Lai Jiangshan
---
 arch/x86/kvm/mmu/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 2f3f47dc96b0..ff52e8354148 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2032,7 +2032,7 @@ static int mmu_sync_children(struct kvm_vcpu *vcpu,
         protected |= rmap_write_protect(vcpu, sp->gfn);
 
         if (protected) {
-            kvm_flush_remote_tlbs(vcpu->kvm);
+            kvm_mmu_remote_flush_or_zap(vcpu->kvm, &invalid_list, true);
             flush = false;
         }
From patchwork Sat Sep 18 00:56:34 2021
From: Lai Jiangshan
Subject: [PATCH V2 08/10] KVM: X86: Remove FNAME(update_pte)
Date: Sat, 18 Sep 2021 08:56:34 +0800
Message-Id: <20210918005636.3675-9-jiangshanlai@gmail.com>
In-Reply-To: <20210918005636.3675-1-jiangshanlai@gmail.com>

Its sole caller is changed to use FNAME(prefetch_gpte) directly.

Signed-off-by: Lai Jiangshan
Reviewed-by: Maxim Levitsky
---
 arch/x86/kvm/mmu/paging_tmpl.h | 10 +---------
 1 file changed, 1 insertion(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index c3edbc0f06b3..ca5fdd07cfa2 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -589,14 +589,6 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
     return true;
 }
 
-static void FNAME(update_pte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
-                              u64 *spte, const void *pte)
-{
-    pt_element_t gpte = *(const pt_element_t *)pte;
-
-    FNAME(prefetch_gpte)(vcpu, sp, spte, gpte, false);
-}
-
 static bool FNAME(gpte_changed)(struct kvm_vcpu *vcpu,
                                 struct guest_walker *gw, int level)
 {
@@ -1001,7 +993,7 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa)
                                             sizeof(pt_element_t)))
             break;
 
-        FNAME(update_pte)(vcpu, sp, sptep, &gpte);
+        FNAME(prefetch_gpte)(vcpu, sp, sptep, gpte, false);
     }
 
     if (!sp->unsync_children)
From patchwork Sat Sep 18 00:56:35 2021
From: Lai Jiangshan
Subject: [PATCH V2 09/10] KVM: X86: Don't unsync pagetables when speculative
Date: Sat, 18 Sep 2021 08:56:35 +0800
Message-Id: <20210918005636.3675-10-jiangshanlai@gmail.com>
In-Reply-To: <20210918005636.3675-1-jiangshanlai@gmail.com>

Only unsync a pagetable when there was a real write fault on a level-1
pagetable; a speculative access should not unsync it.

Signed-off-by: Lai Jiangshan
---
 arch/x86/kvm/mmu/mmu.c          | 6 +++++-
 arch/x86/kvm/mmu/mmu_internal.h | 3 ++-
 arch/x86/kvm/mmu/spte.c         | 2 +-
 3 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index ff52e8354148..e3213d804647 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2577,7 +2577,8 @@ static void kvm_unsync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
  * were marked unsync (or if there is no shadow page), -EPERM if the SPTE must
  * be write-protected.
  */
-int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, gfn_t gfn, bool can_unsync)
+int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, gfn_t gfn, bool can_unsync,
+                            bool speculative)
 {
     struct kvm_mmu_page *sp;
     bool locked = false;
@@ -2603,6 +2604,9 @@ int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, gfn_t gfn, bool can_unsync)
         if (sp->unsync)
             continue;
 
+        if (speculative)
+            return -EEXIST;
+
         /*
          * TDP MMU page faults require an additional spinlock as they
          * run with mmu_lock held for read, not write, and the unsync
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index 658d8d228d43..f5d8be787993 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -116,7 +116,8 @@ static inline bool kvm_vcpu_ad_need_write_protect(struct kvm_vcpu *vcpu)
            kvm_x86_ops.cpu_dirty_log_size;
 }
 
-int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, gfn_t gfn, bool can_unsync);
+int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, gfn_t gfn, bool can_unsync,
+                            bool speculative);
 void kvm_mmu_gfn_disallow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn);
 void kvm_mmu_gfn_allow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn);
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index 3e97cdb13eb7..b68a580f3510 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -159,7 +159,7 @@ int make_spte(struct kvm_vcpu *vcpu, unsigned int pte_access, int level,
      * e.g. it's write-tracked (upper-level SPs) or has one or more
      * shadow pages and unsync'ing pages is not allowed.
      */
-    if (mmu_try_to_unsync_pages(vcpu, gfn, can_unsync)) {
+    if (mmu_try_to_unsync_pages(vcpu, gfn, can_unsync, speculative)) {
         pgprintk("%s: found shadow page for %llx, marking ro\n",
                  __func__, gfn);
         ret |= SET_SPTE_WRITE_PROTECTED_PT;
From patchwork Sat Sep 18 00:56:36 2021
From: Lai Jiangshan
Subject: [PATCH V2 10/10] KVM: X86: Don't check unsync if the original spte is writable
Date: Sat, 18 Sep 2021 08:56:36 +0800
Message-Id: <20210918005636.3675-11-jiangshanlai@gmail.com>
In-Reply-To: <20210918005636.3675-1-jiangshanlai@gmail.com>

If the original spte is writable, the target gfn cannot be the gfn of a
synchronized shadow page, so it can continue to be writable.

When !can_unsync, speculative must be false, so once the "!can_unsync"
check is removed, the "out" label needs to be moved up.

Signed-off-by: Lai Jiangshan
---
 arch/x86/kvm/mmu/spte.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index b68a580f3510..a33c581aabd6 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -150,7 +150,7 @@ int make_spte(struct kvm_vcpu *vcpu, unsigned int pte_access, int level,
      * is responsibility of kvm_mmu_get_page / kvm_mmu_sync_roots.
      * Same reasoning can be applied to dirty page accounting.
      */
-    if (!can_unsync && is_writable_pte(old_spte))
+    if (is_writable_pte(old_spte))
         goto out;
 
     /*
@@ -171,10 +171,10 @@ int make_spte(struct kvm_vcpu *vcpu, unsigned int pte_access, int level,
     if (pte_access & ACC_WRITE_MASK)
         spte |= spte_shadow_dirty_mask(spte);
 
+out:
     if (speculative)
         spte = mark_spte_for_access_track(spte);
 
-out:
     WARN_ONCE(is_rsvd_spte(&vcpu->arch.mmu->shadow_zero_check, spte, level),
               "spte = 0x%llx, level = %d, rsvd bits = 0x%llx", spte, level,
               get_rsvd_bits(&vcpu->arch.mmu->shadow_zero_check, spte, level));
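[Editor's aside, not part of the series: the write-permission decision that patches 09 and 10 leave in make_spte() can be reduced to a toy model. This deliberately ignores the can_unsync/-EPERM write-tracking cases; the function name and return convention are illustrative only.]

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of make_spte()'s write grant after patches 09 and 10:
 * an spte that was already writable stays writable without touching
 * the unsync machinery (the "goto out" fast path), and a speculative
 * access never unsyncs a page -- it stays write-protected and will
 * fault again on a real write. */
static bool spte_grant_write(bool old_writable, bool speculative)
{
    if (old_writable)
        return true;   /* fast path: gfn cannot be a synced shadow page */
    if (speculative)
        return false;  /* -EEXIST in patch 09: keep write-protected */
    return true;       /* real write fault: unsync the page and allow */
}
```

The model shows why the two patches compose: the fast path never needs the speculative check, and the speculative check only matters on the slow path.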