From patchwork Sat Sep 22 01:56:38 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Wei Yang
X-Patchwork-Id: 10611313
From: Wei Yang
To: pbonzini@redhat.com, rkrcmar@redhat.com, tglx@linutronix.de
Cc: x86@kernel.org, kvm@vger.kernel.org, Wei Yang
Subject: [PATCH 1/2] KVM: x86: replace KVM_PAGES_PER_HPAGE with KVM_HPAGE_GFN_SIZE
Date: Sat, 22 Sep 2018 09:56:38 +0800
Message-Id: <20180922015639.12666-1-richard.weiyang@gmail.com>
X-Mailer: git-send-email 2.15.1
X-Mailing-List: kvm@vger.kernel.org

KVM_PAGES_PER_HPAGE is computed by shifting left by (KVM_HPAGE_GFN_SHIFT +
PAGE_SHIFT) and then dividing by PAGE_SIZE, which simplifies to a single
left shift by KVM_HPAGE_GFN_SHIFT. In addition, in almost 40% of the places
where KVM_PAGES_PER_HPAGE is used, what is actually needed is a gfn mask
rather than the number of pages.

This patch replaces KVM_PAGES_PER_HPAGE with KVM_HPAGE_GFN_SIZE and
introduces KVM_HPAGE_GFN_MASK to make the code a little easier to read.

Signed-off-by: Wei Yang
---
 arch/x86/include/asm/kvm_host.h |  3 ++-
 arch/x86/kvm/mmu.c              | 10 +++++-----
 arch/x86/kvm/paging_tmpl.h      |  6 +++---
 arch/x86/kvm/x86.c              |  6 +++---
 4 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index f1a4e520ef5c..c5e7bb811b1e 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -104,10 +104,11 @@
 /* KVM Hugepage definitions for x86 */
 #define KVM_NR_PAGE_SIZES	3
 #define KVM_HPAGE_GFN_SHIFT(x)	(((x) - 1) * 9)
+#define KVM_HPAGE_GFN_SIZE(x)	(1UL << KVM_HPAGE_GFN_SHIFT(x))
+#define KVM_HPAGE_GFN_MASK(x)	(~(KVM_HPAGE_GFN_SIZE(x) - 1))
 #define KVM_HPAGE_SHIFT(x)	(PAGE_SHIFT + KVM_HPAGE_GFN_SHIFT(x))
 #define KVM_HPAGE_SIZE(x)	(1UL << KVM_HPAGE_SHIFT(x))
 #define KVM_HPAGE_MASK(x)	(~(KVM_HPAGE_SIZE(x) - 1))
-#define KVM_PAGES_PER_HPAGE(x)	(KVM_HPAGE_SIZE(x) / PAGE_SIZE)
 
 static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level)
 {
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 0caaaa25e88b..897614414311 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3170,7 +3170,7 @@ static void transparent_hugepage_adjust(struct kvm_vcpu *vcpu,
 		 * head.
 		 */
 		*levelp = level = PT_DIRECTORY_LEVEL;
-		mask = KVM_PAGES_PER_HPAGE(level) - 1;
+		mask = KVM_HPAGE_GFN_SIZE(level) - 1;
 		VM_BUG_ON((gfn & mask) != (pfn & mask));
 		if (pfn & mask) {
 			gfn &= ~mask;
@@ -3416,7 +3416,7 @@ static int nonpaging_map(struct kvm_vcpu *vcpu, gva_t v, u32 error_code,
 		if (level > PT_DIRECTORY_LEVEL)
 			level = PT_DIRECTORY_LEVEL;
 
-		gfn &= ~(KVM_PAGES_PER_HPAGE(level) - 1);
+		gfn &= KVM_HPAGE_GFN_MASK(level);
 	}
 
 	if (fast_page_fault(vcpu, v, level, error_code))
@@ -4018,9 +4018,9 @@ EXPORT_SYMBOL_GPL(kvm_handle_page_fault);
 static bool check_hugepage_cache_consistency(struct kvm_vcpu *vcpu, gfn_t gfn,
 					     int level)
 {
-	int page_num = KVM_PAGES_PER_HPAGE(level);
+	int page_num = KVM_HPAGE_GFN_SIZE(level);
 
-	gfn &= ~(page_num - 1);
+	gfn &= KVM_HPAGE_GFN_MASK(level);
 
 	return kvm_mtrr_check_gfn_range_consistency(vcpu, gfn, page_num);
 }
@@ -4053,7 +4053,7 @@ static int tdp_page_fault(struct kvm_vcpu *vcpu, gva_t gpa, u32 error_code,
 		if (level > PT_DIRECTORY_LEVEL &&
 		    !check_hugepage_cache_consistency(vcpu, gfn, level))
 			level = PT_DIRECTORY_LEVEL;
-		gfn &= ~(KVM_PAGES_PER_HPAGE(level) - 1);
+		gfn &= KVM_HPAGE_GFN_MASK(level);
 	}
 
 	if (fast_page_fault(vcpu, gpa, level, error_code))
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 14ffd973df54..c8a242715cbb 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -658,7 +658,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
 		if (is_shadow_present_pte(*it.sptep))
 			continue;
 
-		direct_gfn = gw->gfn & ~(KVM_PAGES_PER_HPAGE(it.level) - 1);
+		direct_gfn = gw->gfn & KVM_HPAGE_GFN_MASK(it.level);
 
 		sp = kvm_mmu_get_page(vcpu, direct_gfn, addr, it.level-1,
 				      true, direct_access);
@@ -700,7 +700,7 @@ FNAME(is_self_change_mapping)(struct kvm_vcpu *vcpu,
 			      bool *write_fault_to_shadow_pgtable)
 {
 	int level;
-	gfn_t mask = ~(KVM_PAGES_PER_HPAGE(walker->level) - 1);
+	gfn_t mask = KVM_HPAGE_GFN_MASK(walker->level);
 	bool self_changed = false;
 
 	if (!(walker->pte_access & ACC_WRITE_MASK ||
@@ -786,7 +786,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code,
 		level = mapping_level(vcpu, walker.gfn, &force_pt_level);
 		if (likely(!force_pt_level)) {
 			level = min(walker.level, level);
-			walker.gfn = walker.gfn & ~(KVM_PAGES_PER_HPAGE(level) - 1);
+			walker.gfn = walker.gfn & KVM_HPAGE_GFN_MASK(level);
 		}
 	} else
 		force_pt_level = true;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index f7dff0457846..70b4e5e74f7d 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9021,9 +9021,9 @@ int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
 
 		slot->arch.lpage_info[i - 1] = linfo;
 
-		if (slot->base_gfn & (KVM_PAGES_PER_HPAGE(level) - 1))
+		if (slot->base_gfn & (KVM_HPAGE_GFN_SIZE(level) - 1))
 			linfo[0].disallow_lpage = 1;
-		if ((slot->base_gfn + npages) & (KVM_PAGES_PER_HPAGE(level) - 1))
+		if ((slot->base_gfn + npages) & (KVM_HPAGE_GFN_SIZE(level) - 1))
 			linfo[lpages - 1].disallow_lpage = 1;
 		ugfn = slot->userspace_addr >> PAGE_SHIFT;
 		/*
@@ -9031,7 +9031,7 @@ int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
 		 * other, or if explicitly asked to, disable large page
 		 * support for this slot
 		 */
-		if ((slot->base_gfn ^ ugfn) & (KVM_PAGES_PER_HPAGE(level) - 1) ||
+		if ((slot->base_gfn ^ ugfn) & (KVM_HPAGE_GFN_SIZE(level) - 1) ||
 		    !kvm_largepages_enabled()) {
 			unsigned long j;
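
The equivalence the conversion relies on can be sanity-checked outside the
kernel. The program below is only an illustrative user-space sketch, not part
of the patch: PAGE_SHIFT is hard-coded to the x86 value of 12 and gfn is
modelled as a plain unsigned long. It checks that KVM_HPAGE_GFN_SIZE(x) equals
the old KVM_PAGES_PER_HPAGE(x) and that KVM_HPAGE_GFN_MASK(x) matches the
open-coded ~(KVM_PAGES_PER_HPAGE(x) - 1) masks being replaced above.

/*
 * Stand-alone sketch (not from the patch). PAGE_SHIFT/PAGE_SIZE are the
 * x86 values, and gfn is reduced to unsigned long for the demonstration.
 */
#include <assert.h>
#include <stdio.h>

#define PAGE_SHIFT		12
#define PAGE_SIZE		(1UL << PAGE_SHIFT)

#define KVM_HPAGE_GFN_SHIFT(x)	(((x) - 1) * 9)
#define KVM_HPAGE_GFN_SIZE(x)	(1UL << KVM_HPAGE_GFN_SHIFT(x))
#define KVM_HPAGE_GFN_MASK(x)	(~(KVM_HPAGE_GFN_SIZE(x) - 1))
#define KVM_HPAGE_SHIFT(x)	(PAGE_SHIFT + KVM_HPAGE_GFN_SHIFT(x))
#define KVM_HPAGE_SIZE(x)	(1UL << KVM_HPAGE_SHIFT(x))
/* the old definition being removed */
#define KVM_PAGES_PER_HPAGE(x)	(KVM_HPAGE_SIZE(x) / PAGE_SIZE)

int main(void)
{
	unsigned long gfn = 0x12345;	/* arbitrary example gfn */
	int level;

	for (level = 1; level <= 3; level++) {	/* 4K, 2M, 1G */
		/* shift-then-divide equals the plain gfn shift */
		assert(KVM_PAGES_PER_HPAGE(level) == KVM_HPAGE_GFN_SIZE(level));
		/* open-coded mask equals the new named mask */
		assert((gfn & ~(KVM_PAGES_PER_HPAGE(level) - 1)) ==
		       (gfn & KVM_HPAGE_GFN_MASK(level)));
		printf("level %d: gfns per hpage = %lu, aligned gfn = 0x%lx\n",
		       level, KVM_HPAGE_GFN_SIZE(level),
		       gfn & KVM_HPAGE_GFN_MASK(level));
	}
	return 0;
}

Both assertions should hold for all three page-size levels, which is exactly
the identity the replacements in mmu.c, paging_tmpl.h and x86.c depend on.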