From patchwork Thu Apr  4 20:23:44 2019
X-Patchwork-Submitter: Barret Rhoden
X-Patchwork-Id: 10886267
Date: Thu,  4 Apr 2019 16:23:44 -0400
In-Reply-To: <20190404202345.133553-1-brho@google.com>
Message-Id: <20190404202345.133553-3-brho@google.com>
References: <20190404202345.133553-1-brho@google.com>
Subject: [PATCH v3 2/3] kvm: Use huge pages for DAX-backed files
From: Barret Rhoden <brho@google.com>
To: Paolo Bonzini, Dan Williams, David Hildenbrand, Dave Jiang,
    Alexander Duyck
Cc: linux-nvdimm@lists.01.org, x86@kernel.org, kvm@vger.kernel.org,
    linux-kernel@vger.kernel.org, yu.c.zhang@intel.com, yi.z.zhang@intel.com
X-Mailing-List: kvm@vger.kernel.org

This change allows KVM to map DAX-backed files made of huge pages with
huge mappings in the EPT/TDP.

DAX pages are not PageTransCompound.
The existing check determines whether the mapping for the pfn is a huge
mapping. For non-DAX mappings, e.g. hugetlbfs, that means checking
PageTransCompound. For DAX, we can check the page table itself.

Note that KVM already faulted in the page (or huge page) in the host's
page table, and we hold the KVM mmu spinlock. We grabbed that lock in
kvm_mmu_notifier_invalidate_range_end, before checking the mmu seq.

Signed-off-by: Barret Rhoden <brho@google.com>
---
 arch/x86/kvm/mmu.c | 33 +++++++++++++++++++++++++++++++--
 1 file changed, 31 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index eee455a8a612..48cbf5553688 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3207,6 +3207,35 @@ static int kvm_handle_bad_page(struct kvm_vcpu *vcpu, gfn_t gfn, kvm_pfn_t pfn)
 	return -EFAULT;
 }
 
+static bool pfn_is_huge_mapped(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn)
+{
+	struct page *page = pfn_to_page(pfn);
+	unsigned long hva;
+
+	if (!is_zone_device_page(page))
+		return PageTransCompoundMap(page);
+
+	/*
+	 * DAX pages do not use compound pages.  The page should have already
+	 * been mapped into the host-side page table during try_async_pf(), so
+	 * we can check the page tables directly.
+	 */
+	hva = gfn_to_hva(kvm, gfn);
+	if (kvm_is_error_hva(hva))
+		return false;
+
+	/*
+	 * Our caller grabbed the KVM mmu_lock with a successful
+	 * mmu_notifier_retry, so we're safe to walk the page table.
+	 */
+	switch (dev_pagemap_mapping_shift(hva, current->mm)) {
+	case PMD_SHIFT:
+	case PUD_SHIFT:
+		return true;
+	}
+	return false;
+}
+
 static void transparent_hugepage_adjust(struct kvm_vcpu *vcpu,
 					gfn_t *gfnp, kvm_pfn_t *pfnp,
 					int *levelp)
@@ -3223,7 +3252,7 @@ static void transparent_hugepage_adjust(struct kvm_vcpu *vcpu,
 	 */
 	if (!is_error_noslot_pfn(pfn) && !kvm_is_reserved_pfn(pfn) &&
 	    level == PT_PAGE_TABLE_LEVEL &&
-	    PageTransCompoundMap(pfn_to_page(pfn)) &&
+	    pfn_is_huge_mapped(vcpu->kvm, gfn, pfn) &&
 	    !mmu_gfn_lpage_is_disallowed(vcpu, gfn, PT_DIRECTORY_LEVEL)) {
 		unsigned long mask;
 		/*
@@ -5774,7 +5803,7 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
 	 */
 	if (sp->role.direct &&
 	    !kvm_is_reserved_pfn(pfn) &&
-	    PageTransCompoundMap(pfn_to_page(pfn))) {
+	    pfn_is_huge_mapped(kvm, sp->gfn, pfn)) {
 		pte_list_remove(rmap_head, sptep);
 
 		if (kvm_available_flush_tlb_with_range())