From patchwork Wed Dec 11 21:32:07 2019
Date: Wed, 11 Dec 2019 16:32:07 -0500
In-Reply-To: <20191211213207.215936-1-brho@google.com>
Message-Id: <20191211213207.215936-3-brho@google.com>
References: <20191211213207.215936-1-brho@google.com>
Subject: [PATCH v4 2/2] kvm: Use huge pages for DAX-backed files
From: Barret Rhoden
To: Paolo Bonzini, Dan Williams, David Hildenbrand, Dave Jiang,
 Alexander Duyck
Cc: linux-nvdimm@lists.01.org, x86@kernel.org, kvm@vger.kernel.org,
 linux-kernel@vger.kernel.org, jason.zeng@intel.com

This change allows KVM to map DAX-backed files made of huge pages with
huge mappings in the EPT/TDP.

DAX pages are not PageTransCompound. The existing check tries to
determine whether the mapping for the pfn is a huge mapping. For
non-DAX maps, e.g. hugetlbfs, that means checking PageTransCompound.
For DAX, we can check the page table itself.

Note that KVM already faulted in the page (or huge page) in the host's
page table, and we hold the KVM mmu spinlock. We grabbed that lock in
kvm_mmu_notifier_invalidate_range_end, before checking the mmu seq.

Signed-off-by: Barret Rhoden
---
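As context for review: dev_pagemap_mapping_shift() (made available
outside of mm/memory-failure.c by patch 1/2 of this series) walks the
host page table for an address and returns the shift of the level at
which the mapping terminates. A simplified sketch of that walk, assuming
the (address, mm) signature used by this series; the real helper also
checks the PTE level explicitly:

unsigned long dev_pagemap_mapping_shift(unsigned long address,
					struct mm_struct *mm)
{
	pgd_t *pgd;
	p4d_t *p4d;
	pud_t *pud;
	pmd_t *pmd;

	pgd = pgd_offset(mm, address);
	if (!pgd_present(*pgd))
		return 0;
	p4d = p4d_offset(pgd, address);
	if (!p4d_present(*p4d))
		return 0;
	pud = pud_offset(p4d, address);
	if (!pud_present(*pud))
		return 0;
	if (pud_devmap(*pud))
		/* 1GiB leaf: PUD_SHIFT is 30 on x86_64 */
		return PUD_SHIFT;
	pmd = pmd_offset(pud, address);
	if (!pmd_present(*pmd))
		return 0;
	if (pmd_devmap(*pmd))
		/* 2MiB leaf: PMD_SHIFT is 21 on x86_64 */
		return PMD_SHIFT;
	/* mapped (if at all) with a base page */
	return PAGE_SHIFT;
}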
 arch/x86/kvm/mmu/mmu.c | 36 ++++++++++++++++++++++++++++++++----
 1 file changed, 32 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 6f92b40d798c..cd07bc4e595f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3384,6 +3384,35 @@ static int kvm_handle_bad_page(struct kvm_vcpu *vcpu, gfn_t gfn, kvm_pfn_t pfn)
 	return -EFAULT;
 }
 
+static bool pfn_is_huge_mapped(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn)
+{
+	struct page *page = pfn_to_page(pfn);
+	unsigned long hva;
+
+	if (!is_zone_device_page(page))
+		return PageTransCompoundMap(page);
+
+	/*
+	 * DAX pages do not use compound pages. The page should have already
+	 * been mapped into the host-side page table during try_async_pf(), so
+	 * we can check the page tables directly.
+	 */
+	hva = gfn_to_hva(kvm, gfn);
+	if (kvm_is_error_hva(hva))
+		return false;
+
+	/*
+	 * Our caller grabbed the KVM mmu_lock with a successful
+	 * mmu_notifier_retry, so we're safe to walk the page table.
+	 */
+	switch (dev_pagemap_mapping_shift(hva, current->mm)) {
+	case PMD_SHIFT:
+	case PUD_SHIFT:
+		return true;
+	}
+	return false;
+}
+
 static void transparent_hugepage_adjust(struct kvm_vcpu *vcpu,
 					gfn_t gfn, kvm_pfn_t *pfnp,
 					int *levelp)
@@ -3398,8 +3427,8 @@ static void transparent_hugepage_adjust(struct kvm_vcpu *vcpu,
 	 * here.
 	 */
 	if (!is_error_noslot_pfn(pfn) && !kvm_is_reserved_pfn(pfn) &&
-	    !kvm_is_zone_device_pfn(pfn) && level == PT_PAGE_TABLE_LEVEL &&
-	    PageTransCompoundMap(pfn_to_page(pfn)) &&
+	    level == PT_PAGE_TABLE_LEVEL &&
+	    pfn_is_huge_mapped(vcpu->kvm, gfn, pfn) &&
 	    !mmu_gfn_lpage_is_disallowed(vcpu, gfn, PT_DIRECTORY_LEVEL)) {
 		unsigned long mask;
 		/*
@@ -6015,8 +6044,7 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
 		 * mapping if the indirect sp has level = 1.
 		 */
 		if (sp->role.direct && !kvm_is_reserved_pfn(pfn) &&
-		    !kvm_is_zone_device_pfn(pfn) &&
-		    PageTransCompoundMap(pfn_to_page(pfn))) {
+		    pfn_is_huge_mapped(kvm, sp->gfn, pfn)) {
 			pte_list_remove(rmap_head, sptep);
 
 			if (kvm_available_flush_tlb_with_range())
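The shifts reported by that walk line up with KVM's x86 MMU levels
roughly as sketched below. This helper is purely illustrative and not
part of the patch; note that transparent_hugepage_adjust() only ever
promotes a mapping to PT_DIRECTORY_LEVEL, even when the host mapping is
PUD-sized.

static int shift_to_kvm_level(unsigned long shift)
{
	switch (shift) {
	case PUD_SHIFT:		/* 1GiB host mapping */
		return PT_PDPE_LEVEL;
	case PMD_SHIFT:		/* 2MiB host mapping */
		return PT_DIRECTORY_LEVEL;
	default:		/* 4KiB or unmapped */
		return PT_PAGE_TABLE_LEVEL;
	}
}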