From patchwork Wed Nov 14 21:51:54 2018
X-Patchwork-Submitter: Barret Rhoden
X-Patchwork-Id: 10683147
Date: Wed, 14 Nov 2018 16:51:54 -0500
In-Reply-To: <20181114215155.259978-1-brho@google.com>
Message-Id: <20181114215155.259978-3-brho@google.com>
References: <20181109203921.178363-1-brho@google.com>
 <20181114215155.259978-1-brho@google.com>
Subject: [PATCH v2 2/3] kvm: Use huge pages for DAX-backed files
From: Barret Rhoden
To: Dan Williams, Dave Jiang, Ross Zwisler, Vishal Verma, Paolo Bonzini,
 Radim Krčmář, Thomas Gleixner, Ingo Molnar, Borislav Petkov
Cc: kvm@vger.kernel.org, yu.c.zhang@intel.com, linux-nvdimm@lists.01.org,
 x86@kernel.org, linux-kernel@vger.kernel.org, "H. Peter Anvin",
 yi.z.zhang@intel.com

This change allows KVM to map DAX-backed files made of huge pages with
huge mappings in the EPT/TDP.

DAX pages are not PageTransCompound. The existing check determines
whether the mapping for the pfn is a huge mapping. For non-DAX maps,
e.g. hugetlbfs, that means checking PageTransCompound. For DAX, we can
check the page table itself.

Note that KVM already faulted in the page (or huge page) in the host's
page table, and we hold the KVM mmu spinlock. We grabbed that lock in
kvm_mmu_notifier_invalidate_range_end, before checking the mmu seq.

Signed-off-by: Barret Rhoden
---
- removed map_shift local variable
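
For reference, the fault-path ordering that the last paragraph of the
changelog relies on looks roughly like the sketch below. It is paraphrased
from the existing tdp_page_fault()/try_async_pf() flow rather than quoted
verbatim, so treat the exact names and error handling as approximate:

	mmu_seq = vcpu->kvm->mmu_notifier_seq;
	smp_rmb();

	/* Fault the pfn into the host page table (possibly as a huge page). */
	if (try_async_pf(vcpu, prefault, gfn, gpa, &pfn, write, &map_writable))
		return RET_PF_RETRY;

	spin_lock(&vcpu->kvm->mmu_lock);
	/* Bail out if an invalidation ran since mmu_seq was sampled. */
	if (mmu_notifier_retry(vcpu->kvm, mmu_seq))
		goto out_unlock;

	/*
	 * Only here is it safe to inspect the host page table, which is
	 * what pfn_is_huge_mapped() does for DAX pages.
	 */
	transparent_hugepage_adjust(vcpu, &gfn, &pfn, &level);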
 arch/x86/kvm/mmu.c | 33 +++++++++++++++++++++++++++++++--
 1 file changed, 31 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index cf5f572f2305..6914989d1e3d 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3152,6 +3152,35 @@ static int kvm_handle_bad_page(struct kvm_vcpu *vcpu, gfn_t gfn, kvm_pfn_t pfn)
 	return -EFAULT;
 }
 
+static bool pfn_is_huge_mapped(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn)
+{
+	struct page *page = pfn_to_page(pfn);
+	unsigned long hva;
+
+	if (!is_zone_device_page(page))
+		return PageTransCompoundMap(page);
+
+	/*
+	 * DAX pages do not use compound pages.  The page should have already
+	 * been mapped into the host-side page table during try_async_pf(), so
+	 * we can check the page tables directly.
+	 */
+	hva = gfn_to_hva(kvm, gfn);
+	if (kvm_is_error_hva(hva))
+		return false;
+
+	/*
+	 * Our caller grabbed the KVM mmu_lock with a successful
+	 * mmu_notifier_retry, so we're safe to walk the page table.
+	 */
+	switch (dev_pagemap_mapping_shift(hva, current->mm)) {
+	case PMD_SHIFT:
+	case PUD_SHIFT:
+		return true;
+	}
+	return false;
+}
+
 static void transparent_hugepage_adjust(struct kvm_vcpu *vcpu,
 					gfn_t *gfnp, kvm_pfn_t *pfnp,
 					int *levelp)
@@ -3168,7 +3197,7 @@ static void transparent_hugepage_adjust(struct kvm_vcpu *vcpu,
 	 */
 	if (!is_error_noslot_pfn(pfn) && !kvm_is_reserved_pfn(pfn) &&
 	    level == PT_PAGE_TABLE_LEVEL &&
-	    PageTransCompoundMap(pfn_to_page(pfn)) &&
+	    pfn_is_huge_mapped(vcpu->kvm, gfn, pfn) &&
 	    !mmu_gfn_lpage_is_disallowed(vcpu, gfn, PT_DIRECTORY_LEVEL)) {
 		unsigned long mask;
 		/*
@@ -5678,7 +5707,7 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
 	 */
 	if (sp->role.direct &&
 		!kvm_is_reserved_pfn(pfn) &&
-		PageTransCompoundMap(pfn_to_page(pfn))) {
+		pfn_is_huge_mapped(kvm, sp->gfn, pfn)) {
 		pte_list_remove(rmap_head, sptep);
 		need_tlb_flush = 1;
 		goto restart;
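
For readers unfamiliar with dev_pagemap_mapping_shift(): the helper walks the
host page table for an address and reports the level at which that address is
actually mapped, which is what pfn_is_huge_mapped() keys off of above. The
sketch below is a simplified illustration of that walk, not the actual
mm/memory-failure.c code, and the name mapping_shift_sketch() is made up for
the example:

	/*
	 * Simplified sketch of the page-table walk behind
	 * dev_pagemap_mapping_shift() (illustrative only).  A PMD_SHIFT or
	 * PUD_SHIFT result means the address is covered by a huge mapping.
	 */
	static unsigned long mapping_shift_sketch(unsigned long address,
						  struct mm_struct *mm)
	{
		pgd_t *pgd = pgd_offset(mm, address);
		p4d_t *p4d;
		pud_t *pud;
		pmd_t *pmd;

		if (!pgd_present(*pgd))
			return 0;
		p4d = p4d_offset(pgd, address);
		if (!p4d_present(*p4d))
			return 0;
		pud = pud_offset(p4d, address);
		if (!pud_present(*pud))
			return 0;
		if (pud_devmap(*pud))
			return PUD_SHIFT;	/* 1 GiB DAX mapping */
		pmd = pmd_offset(pud, address);
		if (!pmd_present(*pmd))
			return 0;
		if (pmd_devmap(*pmd))
			return PMD_SHIFT;	/* 2 MiB DAX mapping */
		return PAGE_SHIFT;		/* mapped with a base page */
	}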