From patchwork Tue Aug 24 07:55:17 2021
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 12455693
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Lai Jiangshan, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org, "H.
Peter Anvin", Marcelo Tosatti, Avi Kivity, kvm@vger.kernel.org
Subject: [PATCH 1/7] KVM: X86: Fix missed remote tlb flush in rmap_write_protect()
Date: Tue, 24 Aug 2021 15:55:17 +0800
Message-Id: <20210824075524.3354-2-jiangshanlai@gmail.com>
In-Reply-To: <20210824075524.3354-1-jiangshanlai@gmail.com>
References: <20210824075524.3354-1-jiangshanlai@gmail.com>
X-Mailing-List: kvm@vger.kernel.org

From: Lai Jiangshan

When kvm->tlbs_dirty > 0, some rmaps might have been deleted without
flushing the TLB remotely after kvm_sync_page(). If @gfn was writable
before and its rmaps were deleted in kvm_sync_page(), we need to flush
the TLB even if __rmap_write_protect() doesn't request it.

Fixes: 4731d4c7a077 ("KVM: MMU: out of sync shadow core")
Signed-off-by: Lai Jiangshan
---
 arch/x86/kvm/mmu/mmu.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 4853c033e6ce..313918df1a10 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1420,6 +1420,14 @@ bool kvm_mmu_slot_gfn_write_protect(struct kvm *kvm,
 			rmap_head = gfn_to_rmap(gfn, i, slot);
 			write_protected |= __rmap_write_protect(kvm, rmap_head, true);
 		}
+		/*
+		 * When kvm->tlbs_dirty > 0, some rmaps might have been deleted
+		 * without flushing the TLB remotely after kvm_sync_page(). If
+		 * @gfn was writable before and its rmaps were deleted in
+		 * kvm_sync_page(), we need to flush the TLB too.
+		 */
+		if (min_level == PG_LEVEL_4K && kvm->tlbs_dirty)
+			write_protected = true;
 	}

 	if (is_tdp_mmu_enabled(kvm))
@@ -5733,6 +5741,14 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
 		flush = slot_handle_level(kvm, memslot, slot_rmap_write_protect,
 					  start_level, KVM_MAX_HUGEPAGE_LEVEL, false);
+		/*
+		 * When kvm->tlbs_dirty > 0, some rmaps might have been deleted
+		 * without flushing the TLB remotely after kvm_sync_page(). If
+		 * @gfn was writable before and its rmaps were deleted in
+		 * kvm_sync_page(), we need to flush the TLB too.
+		 */
+		if (start_level == PG_LEVEL_4K && kvm->tlbs_dirty)
+			flush = true;
 		write_unlock(&kvm->mmu_lock);
 	}

From patchwork Tue Aug 24 07:55:18 2021
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 12455695
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Lai Jiangshan, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org, "H. Peter Anvin", Marcelo Tosatti, Avi Kivity, kvm@vger.kernel.org
Subject: [PATCH 2/7] KVM: X86: Synchronize the shadow pagetable before linking it
Date: Tue, 24 Aug 2021 15:55:18 +0800
Message-Id: <20210824075524.3354-3-jiangshanlai@gmail.com>
In-Reply-To: <20210824075524.3354-1-jiangshanlai@gmail.com>
References: <20210824075524.3354-1-jiangshanlai@gmail.com>
X-Mailing-List: kvm@vger.kernel.org

From: Lai Jiangshan

If a gpte is changed from non-present to present, the guest doesn't
need to flush the TLB per the SDM. So the host must synchronize the
sp before linking it. Otherwise the guest might use a wrong mapping.

For example: the guest first changes a level-1 pagetable, and then
links its parent to a new place where the original gpte is non-present.
Finally the guest can access the remapped area without flushing the
TLB. The guest's behavior should be allowed per the SDM, but the host
kvm mmu makes it wrong.

Fixes: 4731d4c7a077 ("KVM: MMU: out of sync shadow core")
Signed-off-by: Lai Jiangshan
---
 arch/x86/kvm/mmu/mmu.c         | 21 ++++++++++++++-------
 arch/x86/kvm/mmu/paging_tmpl.h | 28 +++++++++++++++++++++++++---
 2 files changed, 39 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 313918df1a10..987953a901d2 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2032,8 +2032,9 @@ static void mmu_pages_clear_parents(struct mmu_page_path *parents)
 	} while (!sp->unsync_children);
 }

-static void mmu_sync_children(struct kvm_vcpu *vcpu,
-			      struct kvm_mmu_page *parent)
+static bool mmu_sync_children(struct kvm_vcpu *vcpu,
+			      struct kvm_mmu_page *parent,
+			      bool root)
 {
 	int i;
 	struct kvm_mmu_page *sp;
@@ -2061,11 +2062,20 @@ static void mmu_sync_children(struct kvm_vcpu *vcpu,
 		if (need_resched() || rwlock_needbreak(&vcpu->kvm->mmu_lock)) {
 			kvm_mmu_flush_or_zap(vcpu, &invalid_list, false, flush);
 			cond_resched_rwlock_write(&vcpu->kvm->mmu_lock);
+			/*
+			 * If @parent is not a root, the caller doesn't have
+			 * any reference to it, and we couldn't access
+			 * @parent and continue synchronizing after the
+			 * mmu_lock was once released.
+			 */
+			if (!root)
+				return false;
 			flush = false;
 		}
 	}

 	kvm_mmu_flush_or_zap(vcpu, &invalid_list, false, flush);
+	return true;
 }

 static void __clear_sp_write_flooding_count(struct kvm_mmu_page *sp)
@@ -2151,9 +2161,6 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 		kvm_make_request(KVM_REQ_TLB_FLUSH_CURRENT, vcpu);
 	}

-	if (sp->unsync_children)
-		kvm_make_request(KVM_REQ_MMU_SYNC, vcpu);
-
 	__clear_sp_write_flooding_count(sp);

 trace_get_page:
@@ -3650,7 +3657,7 @@ void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu)
 		write_lock(&vcpu->kvm->mmu_lock);
 		kvm_mmu_audit(vcpu, AUDIT_PRE_SYNC);
-		mmu_sync_children(vcpu, sp);
+		mmu_sync_children(vcpu, sp, true);
 		kvm_mmu_audit(vcpu, AUDIT_POST_SYNC);
 		write_unlock(&vcpu->kvm->mmu_lock);
@@ -3666,7 +3673,7 @@ void kvm_mmu_sync_roots(struct kvm_vcpu *vcpu)
 		if (IS_VALID_PAE_ROOT(root)) {
 			root &= PT64_BASE_ADDR_MASK;
 			sp = to_shadow_page(root);
-			mmu_sync_children(vcpu, sp);
+			mmu_sync_children(vcpu, sp, true);
 		}
 	}

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 50ade6450ace..48c7fe1b2d50 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -664,7 +664,7 @@ static void FNAME(pte_prefetch)(struct kvm_vcpu *vcpu, struct guest_walker *gw,
  * emulate this operation, return 1 to indicate this case.
  */
 static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
-			struct guest_walker *gw)
+			struct guest_walker *gw, unsigned long mmu_seq)
 {
 	struct kvm_mmu_page *sp = NULL;
 	struct kvm_shadow_walk_iterator it;
@@ -678,6 +678,8 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
 	top_level = vcpu->arch.mmu->root_level;
 	if (top_level == PT32E_ROOT_LEVEL)
 		top_level = PT32_ROOT_LEVEL;
+
+again:
 	/*
 	 * Verify that the top-level gpte is still there. Since the page
 	 * is a root page, it is either write protected (and cannot be
@@ -713,8 +715,28 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
 		if (FNAME(gpte_changed)(vcpu, gw, it.level - 1))
 			goto out_gpte_changed;

-		if (sp)
+		if (sp) {
+			/*
+			 * We must synchronize the pagetable before linking it
+			 * because the guest doesn't need to flush the TLB when
+			 * a gpte is changed from non-present to present.
+			 * Otherwise, the guest may use the wrong mapping.
+			 *
+			 * For PG_LEVEL_4K, kvm_mmu_get_page() has already
+			 * synchronized it transiently via kvm_sync_page().
+			 *
+			 * For a higher-level pagetable, we synchronize it
+			 * via the slower mmu_sync_children(). If it once
+			 * released the mmu_lock, we need to restart from
+			 * the root since we don't have a reference to @sp.
+			 */
+			if (sp->unsync_children && !mmu_sync_children(vcpu, sp, false)) {
+				if (mmu_notifier_retry_hva(vcpu->kvm, mmu_seq, fault->hva))
+					goto out_gpte_changed;
+				goto again;
+			}
 			link_shadow_page(vcpu, it.sptep, sp);
+		}
 	}

 	kvm_mmu_hugepage_adjust(vcpu, fault);
@@ -905,7 +927,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	r = make_mmu_pages_available(vcpu);
 	if (r)
 		goto out_unlock;
-	r = FNAME(fetch)(vcpu, fault, &walker);
+	r = FNAME(fetch)(vcpu, fault, &walker, mmu_seq);
 	kvm_mmu_audit(vcpu, AUDIT_POST_PAGE_FAULT);

 out_unlock:

From patchwork Tue Aug 24 07:55:19 2021
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 12455697
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Lai Jiangshan, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org, "H. Peter Anvin", kvm@vger.kernel.org
Subject: [PATCH 3/7] KVM: X86: Zap the invalid list after remote tlb flushing
Date: Tue, 24 Aug 2021 15:55:19 +0800
Message-Id: <20210824075524.3354-4-jiangshanlai@gmail.com>
In-Reply-To: <20210824075524.3354-1-jiangshanlai@gmail.com>
References: <20210824075524.3354-1-jiangshanlai@gmail.com>
X-Mailing-List: kvm@vger.kernel.org

From: Lai Jiangshan

In mmu_sync_children(), the invalid list can be zapped after the
remote TLB flush. Emptying the invalid list as soon as possible may
save a remote TLB flush in some cases.
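The saving comes from folding the zap of the invalid list into a remote TLB flush that is required anyway. The user-space toy model below is not KVM code; the counter and function names are invented for illustration, with kvm_mmu_flush_or_zap() standing behind `flush_or_zap()`:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy model: a remote TLB flush is expensive (an IPI to every vCPU),
 * and zapping invalid shadow pages also requires a flush before the
 * pages may be freed.
 */
static int remote_flushes;

static void flush_remote_tlbs(void)
{
	remote_flushes++;
}

/*
 * Old behavior: flush for write protection now, then flush again
 * later when the invalid list is finally zapped.
 */
static void flush_then_zap_later(bool pending_invalid)
{
	flush_remote_tlbs();		/* flush for write protection */
	if (pending_invalid)
		flush_remote_tlbs();	/* second flush to free zapped pages */
}

/*
 * New behavior, modeling kvm_mmu_flush_or_zap(): do the zap at the
 * point where a flush is already required, so one IPI covers both.
 */
static void flush_or_zap(bool pending_invalid, bool flush_needed)
{
	if (pending_invalid || flush_needed)
		flush_remote_tlbs();	/* one flush covers both reasons */
}
```

Under this model, write-protecting with a pending invalid list costs two IPIs on the old path and one on the combined path.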
Signed-off-by: Lai Jiangshan
---
 arch/x86/kvm/mmu/mmu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 987953a901d2..a165eb8713bc 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2050,7 +2050,7 @@ static bool mmu_sync_children(struct kvm_vcpu *vcpu,
 			protected |= rmap_write_protect(vcpu, sp->gfn);

 		if (protected) {
-			kvm_flush_remote_tlbs(vcpu->kvm);
+			kvm_mmu_flush_or_zap(vcpu, &invalid_list, true, flush);
 			flush = false;
 		}

From patchwork Tue Aug 24 07:55:20 2021
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 12455699
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Lai Jiangshan, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org, "H. Peter Anvin", kvm@vger.kernel.org
Subject: [PATCH 4/7] KVM: X86: Remove FNAME(update_pte)
Date: Tue, 24 Aug 2021 15:55:20 +0800
Message-Id: <20210824075524.3354-5-jiangshanlai@gmail.com>
In-Reply-To: <20210824075524.3354-1-jiangshanlai@gmail.com>
References: <20210824075524.3354-1-jiangshanlai@gmail.com>
X-Mailing-List: kvm@vger.kernel.org

From: Lai Jiangshan

Its sole caller is changed to use FNAME(prefetch_gpte) directly.

Signed-off-by: Lai Jiangshan
Reviewed-by: Maxim Levitsky
---
 arch/x86/kvm/mmu/paging_tmpl.h | 10 +---------
 1 file changed, 1 insertion(+), 9 deletions(-)

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index 48c7fe1b2d50..6b2e248f2f4c 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -589,14 +589,6 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	return true;
 }

-static void FNAME(update_pte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
-			      u64 *spte, const void *pte)
-{
-	pt_element_t gpte = *(const pt_element_t *)pte;
-
-	FNAME(prefetch_gpte)(vcpu, sp, spte, gpte, false);
-}
-
 static bool FNAME(gpte_changed)(struct kvm_vcpu *vcpu, struct guest_walker *gw,
 				int level)
 {
@@ -998,7 +990,7 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa)
 				  sizeof(pt_element_t)))
 			break;

-		FNAME(update_pte)(vcpu, sp, sptep, &gpte);
+		FNAME(prefetch_gpte)(vcpu, sp, sptep, gpte, false);
 	}

 	if (!is_shadow_present_pte(*sptep) || !sp->unsync_children)

From patchwork Tue Aug 24 07:55:21 2021
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 12455701
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Lai Jiangshan, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov, x86@kernel.org, "H. Peter Anvin", kvm@vger.kernel.org
Subject: [PATCH 5/7] KVM: X86: Don't unsync pagetables when speculative
Date: Tue, 24 Aug 2021 15:55:21 +0800
Message-Id: <20210824075524.3354-6-jiangshanlai@gmail.com>
In-Reply-To: <20210824075524.3354-1-jiangshanlai@gmail.com>
References: <20210824075524.3354-1-jiangshanlai@gmail.com>
X-Mailing-List: kvm@vger.kernel.org

From: Lai Jiangshan

We should only unsync a pagetable when there was a real write fault
on a level-1 pagetable.
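To make the new `speculative` argument concrete before reading the diff, here is a minimal user-space model of the decision order in mmu_try_to_unsync_pages() after this change. The struct and function names below are invented for illustration and are not the kernel's:

```c
#include <errno.h>
#include <stdbool.h>

/*
 * Minimal model of a shadow page that may be marked "unsync" (its
 * shadow PTEs are allowed to lag behind the guest PTEs until the next
 * sync point).
 */
struct shadow_page_model {
	bool unsync;
};

/*
 * Sketch of the decision order with the new speculative argument: a
 * speculative access (e.g. a prefetch) may reuse an already-unsync
 * page, but must never newly unsync one; only a real write fault on a
 * level-1 pagetable may do that.
 */
static int try_to_unsync(struct shadow_page_model *sp, bool can_unsync,
			 bool speculative)
{
	if (!can_unsync)
		return -EPERM;		/* SPTE must stay write-protected */

	if (sp->unsync)
		return 0;		/* already unsync: nothing to do */

	if (speculative)
		return -EEXIST;		/* don't unsync on a speculative access */

	sp->unsync = true;		/* real write fault: unsync it */
	return 0;
}
```

The key property is that a speculative caller observes -EEXIST and leaves the page write-protected, while the same page is still usable by speculative callers once a real write fault has unsynced it.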
Signed-off-by: Lai Jiangshan --- arch/x86/kvm/mmu/mmu.c | 6 +++++- arch/x86/kvm/mmu/mmu_internal.h | 3 ++- arch/x86/kvm/mmu/spte.c | 2 +- 3 files changed, 8 insertions(+), 3 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index a165eb8713bc..e5932af6f11c 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -2600,7 +2600,8 @@ static void kvm_unsync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp) * were marked unsync (or if there is no shadow page), -EPERM if the SPTE must * be write-protected. */ -int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, gfn_t gfn, bool can_unsync) +int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, gfn_t gfn, bool can_unsync, + bool speculative) { struct kvm_mmu_page *sp; bool locked = false; @@ -2626,6 +2627,9 @@ int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, gfn_t gfn, bool can_unsync) if (sp->unsync) continue; + if (speculative) + return -EEXIST; + /* * TDP MMU page faults require an additional spinlock as they * run with mmu_lock held for read, not write, and the unsync diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index 658d8d228d43..f5d8be787993 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -116,7 +116,8 @@ static inline bool kvm_vcpu_ad_need_write_protect(struct kvm_vcpu *vcpu) kvm_x86_ops.cpu_dirty_log_size; } -int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, gfn_t gfn, bool can_unsync); +int mmu_try_to_unsync_pages(struct kvm_vcpu *vcpu, gfn_t gfn, bool can_unsync, + bool speculative); void kvm_mmu_gfn_disallow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn); void kvm_mmu_gfn_allow_lpage(const struct kvm_memory_slot *slot, gfn_t gfn); diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c index 3e97cdb13eb7..b68a580f3510 100644 --- a/arch/x86/kvm/mmu/spte.c +++ b/arch/x86/kvm/mmu/spte.c @@ -159,7 +159,7 @@ int make_spte(struct kvm_vcpu *vcpu, unsigned int pte_access, int level, * e.g. 
it's write-tracked (upper-level SPs) or has one or more * shadow pages and unsync'ing pages is not allowed. */ - if (mmu_try_to_unsync_pages(vcpu, gfn, can_unsync)) { + if (mmu_try_to_unsync_pages(vcpu, gfn, can_unsync, speculative)) { pgprintk("%s: found shadow page for %llx, marking ro\n", __func__, gfn); ret |= SET_SPTE_WRITE_PROTECTED_PT; From patchwork Tue Aug 24 07:55:22 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lai Jiangshan X-Patchwork-Id: 12455703 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-14.2 required=3.0 tests=BAYES_00,DATE_IN_PAST_06_12, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 18518C4320A for ; Tue, 24 Aug 2021 17:59:49 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id F36DB6108F for ; Tue, 24 Aug 2021 17:59:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S239254AbhHXSAc (ORCPT ); Tue, 24 Aug 2021 14:00:32 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41528 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S239998AbhHXSAG (ORCPT ); Tue, 24 Aug 2021 14:00:06 -0400 Received: from mail-pg1-x52b.google.com (mail-pg1-x52b.google.com [IPv6:2607:f8b0:4864:20::52b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E2F22C081B04; Tue, 24 Aug 2021 10:41:03 -0700 (PDT) Received: by mail-pg1-x52b.google.com with SMTP id g184so1939718pgc.6; Tue, 24 Aug 2021 10:41:03 -0700 (PDT) 
From: Lai Jiangshan
To: linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Lai Jiangshan, Sean Christopherson, Vitaly Kuznetsov,
 Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
 Borislav Petkov, x86@kernel.org, "H. Peter Anvin", kvm@vger.kernel.org
Subject: [PATCH 6/7] KVM: X86: Don't check unsync if the original spte is writable
Date: Tue, 24 Aug 2021 15:55:22 +0800
Message-Id: <20210824075524.3354-7-jiangshanlai@gmail.com>
In-Reply-To: <20210824075524.3354-1-jiangshanlai@gmail.com>
References: <20210824075524.3354-1-jiangshanlai@gmail.com>

From: Lai Jiangshan

If the original spte is writable, the target gfn cannot be the gfn of a
synchronized shadow page, so the spte can stay writable without checking
for unsync shadow pages.

When !can_unsync, speculative must be false, so once the "!can_unsync"
check is removed, the "out" label needs to be moved up.

Signed-off-by: Lai Jiangshan
---
 arch/x86/kvm/mmu/spte.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index b68a580f3510..a33c581aabd6 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -150,7 +150,7 @@ int make_spte(struct kvm_vcpu *vcpu, unsigned int pte_access, int level,
	 * is responsibility of kvm_mmu_get_page / kvm_mmu_sync_roots.
	 * Same reasoning can be applied to dirty page accounting.
	 */
-	if (!can_unsync && is_writable_pte(old_spte))
+	if (is_writable_pte(old_spte))
		goto out;

	/*
@@ -171,10 +171,10 @@ int make_spte(struct kvm_vcpu *vcpu, unsigned int pte_access, int level,
	if (pte_access & ACC_WRITE_MASK)
		spte |= spte_shadow_dirty_mask(spte);

+out:
	if (speculative)
		spte = mark_spte_for_access_track(spte);

-out:
	WARN_ONCE(is_rsvd_spte(&vcpu->arch.mmu->shadow_zero_check, spte, level),
		  "spte = 0x%llx, level = %d, rsvd bits = 0x%llx", spte, level,
		  get_rsvd_bits(&vcpu->arch.mmu->shadow_zero_check, spte, level));

From patchwork Tue Aug 24 07:55:23 2021
X-Patchwork-Submitter: Lai Jiangshan
X-Patchwork-Id: 12455705
From: Lai Jiangshan
To:
linux-kernel@vger.kernel.org
Cc: Paolo Bonzini, Lai Jiangshan, Sean Christopherson, Vitaly Kuznetsov,
 Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
 Borislav Petkov, x86@kernel.org, "H. Peter Anvin", kvm@vger.kernel.org
Subject: [PATCH 7/7] KVM: X86: Also prefetch the last range in __direct_pte_prefetch().
Date: Tue, 24 Aug 2021 15:55:23 +0800
Message-Id: <20210824075524.3354-8-jiangshanlai@gmail.com>
In-Reply-To: <20210824075524.3354-1-jiangshanlai@gmail.com>
References: <20210824075524.3354-1-jiangshanlai@gmail.com>

From: Lai Jiangshan

__direct_pte_prefetch() currently skips prefetching the last range.
That last range is often the whole range after the faulted spte when
the guest is touching huge-page-mapped (in the guest's view) memory in
a forward direction, so prefetching it can reduce page faults.

Signed-off-by: Lai Jiangshan
---
 arch/x86/kvm/mmu/mmu.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e5932af6f11c..ac260e01e9d8 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2847,8 +2847,9 @@ static void __direct_pte_prefetch(struct kvm_vcpu *vcpu,
	i = (sptep - sp->spt) & ~(PTE_PREFETCH_NUM - 1);
	spte = sp->spt + i;

-	for (i = 0; i < PTE_PREFETCH_NUM; i++, spte++) {
-		if (is_shadow_present_pte(*spte) || spte == sptep) {
+	for (i = 0; i <= PTE_PREFETCH_NUM; i++, spte++) {
+		if (i == PTE_PREFETCH_NUM ||
+		    is_shadow_present_pte(*spte) || spte == sptep) {
			if (!start)
				continue;
			if (direct_pte_prefetch_many(vcpu, sp, start, spte) < 0)