From patchwork Wed May 3 10:52:19 2017
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 9709353
From: guangrong.xiao@gmail.com
X-Google-Original-From: xiaoguangrong@tencent.com
To: pbonzini@redhat.com, mtosatti@redhat.com, avi.kivity@gmail.com,
    rkrcmar@redhat.com
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, qemu-devel@nongnu.org,
    Xiao Guangrong
Subject: [PATCH 2/7] KVM: MMU: introduce possible_writable_spte_bitmap
Date: Wed, 3 May 2017 18:52:19 +0800
Message-Id: <20170503105224.19049-3-xiaoguangrong@tencent.com>
X-Mailer: git-send-email 2.9.3
In-Reply-To: <20170503105224.19049-1-xiaoguangrong@tencent.com>
References: <20170503105224.19049-1-xiaoguangrong@tencent.com>
X-Mailing-List: kvm@vger.kernel.org

From: Xiao Guangrong

Introduce a bitmap that tracks the possible writable sptes of a shadow
page: a bit is set for each spte that is either already writable or can
be locklessly updated to writable on the fast_page_fault path. A counter
of the possible writable sptes is introduced as well, to speed up
walking the bitmap.

A later patch uses this bitmap and counter to quickly find the writable
sptes and write protect them, which improves performance.

Signed-off-by: Xiao Guangrong
---
 arch/x86/include/asm/kvm_host.h |  6 ++++-
 arch/x86/kvm/mmu.c              | 53 ++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 57 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 84c8489..4872ae7 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -114,6 +114,7 @@ static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level)
 #define KVM_MIN_ALLOC_MMU_PAGES 64
 #define KVM_MMU_HASH_SHIFT 12
 #define KVM_NUM_MMU_PAGES (1 << KVM_MMU_HASH_SHIFT)
+#define KVM_MMU_SP_ENTRY_NR 512
 #define KVM_MIN_FREE_MMU_PAGES 5
 #define KVM_REFILL_PAGES 25
 #define KVM_MAX_CPUID_ENTRIES 80
@@ -287,12 +288,15 @@ struct kvm_mmu_page {
 	bool unsync;
 	int root_count;          /* Currently serving as active root */
 	unsigned int unsync_children;
+	unsigned int possible_writable_sptes;
 	struct kvm_rmap_head parent_ptes; /* rmap pointers to parent sptes */
 
 	/* The page is obsolete if mmu_valid_gen != kvm->arch.mmu_valid_gen.  */
 	unsigned long mmu_valid_gen;
 
-	DECLARE_BITMAP(unsync_child_bitmap, 512);
+	DECLARE_BITMAP(unsync_child_bitmap, KVM_MMU_SP_ENTRY_NR);
+
+	DECLARE_BITMAP(possible_writable_spte_bitmap, KVM_MMU_SP_ENTRY_NR);
 
 #ifdef CONFIG_X86_32
 	/*
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index ba8e7af..8a20e4f 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -570,6 +570,49 @@ static bool is_dirty_spte(u64 spte)
 		       : spte & PT_WRITABLE_MASK;
 }
 
+static bool is_possible_writable_spte(u64 spte)
+{
+	if (!is_shadow_present_pte(spte))
+		return false;
+
+	if (is_writable_pte(spte))
+		return true;
+
+	if (spte_can_locklessly_be_made_writable(spte))
+		return true;
+
+	/*
+	 * Although is_access_track_spte() sptes can be updated out of
+	 * mmu-lock, we need not take them into account, as access_track
+	 * drops the writable bit for them.
+	 */
+	return false;
+}
+
+static void
+mmu_log_possible_writable_spte(u64 *sptep, u64 old_spte, u64 new_spte)
+{
+	struct kvm_mmu_page *sp = page_header(__pa(sptep));
+	bool old_state, new_state;
+
+	old_state = is_possible_writable_spte(old_spte);
+	new_state = is_possible_writable_spte(new_spte);
+
+	if (old_state == new_state)
+		return;
+
+	/* a possible writable spte is dropped */
+	if (old_state) {
+		sp->possible_writable_sptes--;
+		__clear_bit(sptep - sp->spt, sp->possible_writable_spte_bitmap);
+		return;
+	}
+
+	/* a new possible writable spte is set */
+	sp->possible_writable_sptes++;
+	__set_bit(sptep - sp->spt, sp->possible_writable_spte_bitmap);
+}
+
 /* Rules for using mmu_spte_set:
  * Set the sptep from nonpresent to present.
  * Note: the sptep being assigned *must* be either not present
@@ -580,6 +623,7 @@ static void mmu_spte_set(u64 *sptep, u64 new_spte)
 {
 	WARN_ON(is_shadow_present_pte(*sptep));
 	__set_spte(sptep, new_spte);
+	mmu_log_possible_writable_spte(sptep, 0ull, new_spte);
 }
 
 /*
@@ -598,6 +642,7 @@ static void mmu_spte_update_no_track(u64 *sptep, u64 new_spte)
 	}
 
 	__update_clear_spte_fast(sptep, new_spte);
+	mmu_log_possible_writable_spte(sptep, old_spte, new_spte);
 }
 
 /*
@@ -623,6 +668,7 @@ static u64 mmu_spte_update_track(u64 *sptep, u64 new_spte)
 
 	WARN_ON(spte_to_pfn(old_spte) != spte_to_pfn(new_spte));
 
+	mmu_log_possible_writable_spte(sptep, old_spte, new_spte);
 	return old_spte;
 }
 
@@ -688,6 +734,8 @@ static int mmu_spte_clear_track_bits(u64 *sptep)
 	else
 		old_spte = __update_clear_spte_slow(sptep, 0ull);
 
+	mmu_log_possible_writable_spte(sptep, old_spte, 0ull);
+
 	if (!is_shadow_present_pte(old_spte))
 		return 0;
 
@@ -716,7 +764,10 @@
  */
 static void mmu_spte_clear_no_track(u64 *sptep)
 {
+	u64 old_spte = *sptep;
+
 	__update_clear_spte_fast(sptep, 0ull);
+	mmu_log_possible_writable_spte(sptep, old_spte, 0ull);
 }
 
 static u64 mmu_spte_get_lockless(u64 *sptep)
@@ -1988,7 +2039,7 @@ static int __mmu_unsync_walk(struct kvm_mmu_page *sp,
 {
 	int i, ret, nr_unsync_leaf = 0;
 
-	for_each_set_bit(i, sp->unsync_child_bitmap, 512) {
+	for_each_set_bit(i, sp->unsync_child_bitmap, KVM_MMU_SP_ENTRY_NR) {
 		struct kvm_mmu_page *child;
 		u64 ent = sp->spt[i];
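
---

For readers outside the KVM MMU code, the sketch below is a minimal,
self-contained user-space model of the bookkeeping this patch adds; it
is not kernel code. The simplified spte encoding (PRESENT/WRITABLE
bits), the demo_sp structure and the write_protect_all() walk are
invented for illustration only; the update rule in
log_possible_writable() mirrors what mmu_log_possible_writable_spte()
does, and the counter-based early exit shows why the counter speeds up
walking the bitmap.

	/* Illustrative sketch only; build with: cc -std=c99 demo.c */
	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	#define SP_ENTRY_NR   512                  /* sptes per shadow page */
	#define BITS_PER_LONG (8 * sizeof(unsigned long))

	/* toy spte encoding: bit 0 = present, bit 1 = writable */
	#define PRESENT  0x1ull
	#define WRITABLE 0x2ull

	struct demo_sp {
		uint64_t spt[SP_ENTRY_NR];
		unsigned long possible_writable_spte_bitmap[SP_ENTRY_NR / BITS_PER_LONG];
		unsigned int possible_writable_sptes;  /* the counter */
	};

	static bool is_possible_writable(uint64_t spte)
	{
		/* the real code also counts lockless-updatable sptes */
		return (spte & PRESENT) && (spte & WRITABLE);
	}

	/* Keep bitmap and counter in sync on every spte update. */
	static void log_possible_writable(struct demo_sp *sp, int i, uint64_t new_spte)
	{
		bool old_state = is_possible_writable(sp->spt[i]);
		bool new_state = is_possible_writable(new_spte);

		sp->spt[i] = new_spte;
		if (old_state == new_state)
			return;

		if (old_state) {
			/* a possible writable spte is dropped */
			sp->possible_writable_sptes--;
			sp->possible_writable_spte_bitmap[i / BITS_PER_LONG] &=
				~(1ul << (i % BITS_PER_LONG));
		} else {
			/* a new possible writable spte is set */
			sp->possible_writable_sptes++;
			sp->possible_writable_spte_bitmap[i / BITS_PER_LONG] |=
				1ul << (i % BITS_PER_LONG);
		}
	}

	/* Write protect every candidate; the counter lets the walk stop as
	 * soon as the last possible writable spte has been handled. */
	static void write_protect_all(struct demo_sp *sp)
	{
		unsigned int left = sp->possible_writable_sptes;

		for (int i = 0; i < SP_ENTRY_NR && left; i++) {
			unsigned long mask = 1ul << (i % BITS_PER_LONG);

			if (sp->possible_writable_spte_bitmap[i / BITS_PER_LONG] & mask) {
				log_possible_writable(sp, i, sp->spt[i] & ~WRITABLE);
				left--;
			}
		}
	}

	int main(void)
	{
		struct demo_sp sp;

		memset(&sp, 0, sizeof(sp));
		log_possible_writable(&sp, 3, PRESENT | WRITABLE);
		log_possible_writable(&sp, 400, PRESENT | WRITABLE);
		printf("possible writable sptes: %u\n", sp.possible_writable_sptes); /* 2 */

		write_protect_all(&sp);
		printf("after write protect: %u\n", sp.possible_writable_sptes);    /* 0 */
		return 0;
	}

The counter is what makes the bitmap cheap in the common case: when a
shadow page has few (or no) possible writable sptes, the walk ends after
the last candidate instead of always scanning all 512 bits.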