From patchwork Mon Nov 29 03:43:14 2021
X-Patchwork-Submitter: David Stevens
X-Patchwork-Id: 12643535
From: David Stevens
To: Marc Zyngier, Paolo Bonzini
Cc: James Morse, Alexandru Elisei, Suzuki K Poulose, Will Deacon,
    Sean Christopherson, Wanpeng Li, Jim Mattson, Joerg Roedel,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org, David Stevens
Subject: [PATCH v5 1/4] KVM: mmu: introduce new gfn_to_pfn_page functions
Date: Mon, 29 Nov 2021 12:43:14 +0900
Message-Id: <20211129034317.2964790-2-stevensd@google.com>
In-Reply-To: <20211129034317.2964790-1-stevensd@google.com>
References: <20211129034317.2964790-1-stevensd@google.com>

From: David Stevens

Introduce new gfn_to_pfn_page functions that parallel the existing
gfn_to_pfn functions. The new functions are identical except that they
take an additional out parameter that is used to return the struct page
if the hva was resolved by gup. This allows callers to differentiate
the gup and follow_pte cases, which in turn allows callers to only
touch the page refcount when necessitated by gup.

The old gfn_to_pfn functions are deprecated, and all callers should be
migrated to the new gfn_to_pfn_page functions. In the interim, the
gfn_to_pfn functions are reimplemented as wrappers of the corresponding
gfn_to_pfn_page functions. The wrappers take a reference to the pfn's
page that had previously been taken in hva_to_pfn_remapped.

Signed-off-by: David Stevens
---
 include/linux/kvm_host.h |  17 ++++
 virt/kvm/kvm_main.c      | 196 +++++++++++++++++++++++++++++----------
 2 files changed, 162 insertions(+), 51 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 60a35d9fe259..24b49c7aacf9 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -861,6 +861,19 @@ kvm_pfn_t __gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn,
 			       bool atomic, bool *async, bool write_fault,
 			       bool *writable, hva_t *hva);
 
+kvm_pfn_t gfn_to_pfn_page(struct kvm *kvm, gfn_t gfn, struct page **page);
+kvm_pfn_t gfn_to_pfn_page_prot(struct kvm *kvm, gfn_t gfn,
+			       bool write_fault, bool *writable,
+			       struct page **page);
+kvm_pfn_t gfn_to_pfn_page_memslot(struct kvm_memory_slot *slot,
+				  gfn_t gfn, struct page **page);
+kvm_pfn_t gfn_to_pfn_page_memslot_atomic(struct kvm_memory_slot *slot,
+					 gfn_t gfn, struct page **page);
+kvm_pfn_t __gfn_to_pfn_page_memslot(struct kvm_memory_slot *slot,
+				    gfn_t gfn, bool atomic, bool *async,
+				    bool write_fault, bool *writable,
+				    hva_t *hva, struct page **page);
+
 void kvm_release_pfn_clean(kvm_pfn_t pfn);
 void kvm_release_pfn_dirty(kvm_pfn_t pfn);
 void kvm_set_pfn_dirty(kvm_pfn_t pfn);
@@ -941,6 +954,10 @@ struct kvm_memslots *kvm_vcpu_memslots(struct kvm_vcpu *vcpu);
 struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn);
 kvm_pfn_t kvm_vcpu_gfn_to_pfn_atomic(struct kvm_vcpu *vcpu, gfn_t gfn);
 kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn);
+kvm_pfn_t kvm_vcpu_gfn_to_pfn_page_atomic(struct kvm_vcpu *vcpu, gfn_t gfn,
+					  struct page **page);
+kvm_pfn_t kvm_vcpu_gfn_to_pfn_page(struct kvm_vcpu *vcpu, gfn_t gfn,
+				   struct page **page);
 int kvm_vcpu_map(struct kvm_vcpu *vcpu, gpa_t gpa, struct kvm_host_map *map);
 int kvm_map_gfn(struct kvm_vcpu *vcpu, gfn_t gfn, struct kvm_host_map *map,
 		struct gfn_to_pfn_cache *cache, bool atomic);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 3f6d450355f0..16a8a71f20bf 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2230,9 +2230,9 @@ static inline int check_user_page_hwpoison(unsigned long addr)
  * only part that runs if we can in atomic context.
  */
 static bool hva_to_pfn_fast(unsigned long addr, bool write_fault,
-			    bool *writable, kvm_pfn_t *pfn)
+			    bool *writable, kvm_pfn_t *pfn,
+			    struct page **page)
 {
-	struct page *page[1];
-
 	/*
 	 * Fast pin a writable pfn only if it is a write fault request
 	 * and it is a writable fault or we don't have write permission
 	 */
@@ -2243,7 +2243,7 @@ static bool hva_to_pfn_fast(unsigned long addr, bool write_fault,
 		return false;
 
 	if (get_user_page_fast_only(addr, FOLL_WRITE, page)) {
-		*pfn = page_to_pfn(page[0]);
+		*pfn = page_to_pfn(*page);
 
 		if (writable)
 			*writable = true;
@@ -2258,10 +2258,9 @@ static bool hva_to_pfn_fast(unsigned long addr, bool write_fault,
  * 1 indicates success, -errno is returned if error is detected.
  */
 static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault,
-			   bool *writable, kvm_pfn_t *pfn)
+			   bool *writable, kvm_pfn_t *pfn, struct page **page)
 {
 	unsigned int flags = FOLL_HWPOISON;
-	struct page *page;
 	int npages = 0;
 
 	might_sleep();
@@ -2274,7 +2273,7 @@ static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault,
 	if (async)
 		flags |= FOLL_NOWAIT;
 
-	npages = get_user_pages_unlocked(addr, 1, &page, flags);
+	npages = get_user_pages_unlocked(addr, 1, page, flags);
 	if (npages != 1)
 		return npages;
 
@@ -2284,11 +2283,11 @@ static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault,
 
 		if (get_user_page_fast_only(addr, FOLL_WRITE, &wpage)) {
 			*writable = true;
-			put_page(page);
-			page = wpage;
+			put_page(*page);
+			*page = wpage;
 		}
 	}
-	*pfn = page_to_pfn(page);
+	*pfn = page_to_pfn(*page);
 	return npages;
 }
 
@@ -2303,13 +2302,6 @@ static bool vma_is_valid(struct vm_area_struct *vma, bool write_fault)
 	return true;
 }
 
-static int kvm_try_get_pfn(kvm_pfn_t pfn)
-{
-	if (kvm_is_reserved_pfn(pfn))
-		return 1;
-	return get_page_unless_zero(pfn_to_page(pfn));
-}
-
 static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 			       unsigned long addr, bool *async,
 			       bool write_fault, bool *writable,
@@ -2349,26 +2341,6 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
 		*writable = pte_write(*ptep);
 	pfn = pte_pfn(*ptep);
 
-	/*
-	 * Get a reference here because callers of *hva_to_pfn* and
-	 * *gfn_to_pfn* ultimately call kvm_release_pfn_clean on the
-	 * returned pfn. This is only needed if the VMA has VM_MIXEDMAP
-	 * set, but the kvm_try_get_pfn/kvm_release_pfn_clean pair will
-	 * simply do nothing for reserved pfns.
-	 *
-	 * Whoever called remap_pfn_range is also going to call e.g.
-	 * unmap_mapping_range before the underlying pages are freed,
-	 * causing a call to our MMU notifier.
-	 *
-	 * Certain IO or PFNMAP mappings can be backed with valid
-	 * struct pages, but be allocated without refcounting e.g.,
-	 * tail pages of non-compound higher order allocations, which
-	 * would then underflow the refcount when the caller does the
-	 * required put_page. Don't allow those pages here.
-	 */
-	if (!kvm_try_get_pfn(pfn))
-		r = -EFAULT;
-
 out:
 	pte_unmap_unlock(ptep, ptl);
 	*p_pfn = pfn;
@@ -2390,8 +2362,9 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma,
  * 2): @write_fault = false && @writable, @writable will tell the caller
  *     whether the mapping is writable.
  */
-static kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async,
-			    bool write_fault, bool *writable)
+static kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic,
+			    bool *async, bool write_fault, bool *writable,
+			    struct page **page)
 {
 	struct vm_area_struct *vma;
 	kvm_pfn_t pfn = 0;
@@ -2400,13 +2373,14 @@ static kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async,
 	/* we can do it either atomically or asynchronously, not both */
 	BUG_ON(atomic && async);
 
-	if (hva_to_pfn_fast(addr, write_fault, writable, &pfn))
+	if (hva_to_pfn_fast(addr, write_fault, writable, &pfn, page))
 		return pfn;
 
 	if (atomic)
 		return KVM_PFN_ERR_FAULT;
 
-	npages = hva_to_pfn_slow(addr, async, write_fault, writable, &pfn);
+	npages = hva_to_pfn_slow(addr, async, write_fault, writable,
+				 &pfn, page);
 	if (npages == 1)
 		return pfn;
 
@@ -2438,12 +2412,14 @@ static kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async,
 	return pfn;
 }
 
-kvm_pfn_t __gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn,
-			       bool atomic, bool *async, bool write_fault,
-			       bool *writable, hva_t *hva)
+kvm_pfn_t __gfn_to_pfn_page_memslot(struct kvm_memory_slot *slot,
+				    gfn_t gfn, bool atomic, bool *async,
+				    bool write_fault, bool *writable,
+				    hva_t *hva, struct page **page)
 {
 	unsigned long addr = __gfn_to_hva_many(slot, gfn, NULL, write_fault);
 
+	*page = NULL;
 	if (hva)
 		*hva = addr;
 
@@ -2466,45 +2442,163 @@ kvm_pfn_t __gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn,
 	}
 
 	return hva_to_pfn(addr, atomic, async, write_fault,
-			  writable);
+			  writable, page);
+}
+EXPORT_SYMBOL_GPL(__gfn_to_pfn_page_memslot);
+
+kvm_pfn_t gfn_to_pfn_page_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
+			       bool *writable, struct page **page)
+{
+	return __gfn_to_pfn_page_memslot(gfn_to_memslot(kvm, gfn), gfn, false,
+					 NULL, write_fault, writable, NULL,
+					 page);
+}
+EXPORT_SYMBOL_GPL(gfn_to_pfn_page_prot);
+
+kvm_pfn_t gfn_to_pfn_page_memslot(struct kvm_memory_slot *slot, gfn_t gfn,
+				  struct page **page)
+{
+	return __gfn_to_pfn_page_memslot(slot, gfn, false, NULL, true,
+					 NULL, NULL, page);
+}
+EXPORT_SYMBOL_GPL(gfn_to_pfn_page_memslot);
+
+kvm_pfn_t gfn_to_pfn_page_memslot_atomic(struct kvm_memory_slot *slot,
+					 gfn_t gfn, struct page **page)
+{
+	return __gfn_to_pfn_page_memslot(slot, gfn, true, NULL, true, NULL,
+					 NULL, page);
+}
+EXPORT_SYMBOL_GPL(gfn_to_pfn_page_memslot_atomic);
+
+kvm_pfn_t kvm_vcpu_gfn_to_pfn_page_atomic(struct kvm_vcpu *vcpu, gfn_t gfn,
+					  struct page **page)
+{
+	return gfn_to_pfn_page_memslot_atomic(
+			kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn, page);
+}
+EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_pfn_page_atomic);
+
+kvm_pfn_t gfn_to_pfn_page(struct kvm *kvm, gfn_t gfn, struct page **page)
+{
+	return gfn_to_pfn_page_memslot(gfn_to_memslot(kvm, gfn), gfn, page);
+}
+EXPORT_SYMBOL_GPL(gfn_to_pfn_page);
+
+kvm_pfn_t kvm_vcpu_gfn_to_pfn_page(struct kvm_vcpu *vcpu, gfn_t gfn,
+				   struct page **page)
+{
+	return gfn_to_pfn_page_memslot(kvm_vcpu_gfn_to_memslot(vcpu, gfn),
+				       gfn, page);
+}
+EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_pfn_page);
+
+static kvm_pfn_t ensure_pfn_ref(struct page *page, kvm_pfn_t pfn)
+{
+	if (page || is_error_pfn(pfn))
+		return pfn;
+
+	/*
+	 * If we're here, a pfn resolved by hva_to_pfn_remapped is
+	 * going to be returned to something that ultimately calls
+	 * kvm_release_pfn_clean, so the refcount needs to be bumped if
+	 * the pfn isn't a reserved pfn.
+	 *
+	 * Whoever called remap_pfn_range is also going to call e.g.
+	 * unmap_mapping_range before the underlying pages are freed,
+	 * causing a call to our MMU notifier.
+	 *
+	 * Certain IO or PFNMAP mappings can be backed with valid
+	 * struct pages, but be allocated without refcounting e.g.,
+	 * tail pages of non-compound higher order allocations, which
+	 * would then underflow the refcount when the caller does the
+	 * required put_page. Don't allow those pages here.
+	 */
+	if (kvm_is_reserved_pfn(pfn) ||
+	    get_page_unless_zero(pfn_to_page(pfn)))
+		return pfn;
+
+	return KVM_PFN_ERR_FAULT;
+}
+
+kvm_pfn_t __gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn,
+			       bool atomic, bool *async, bool write_fault,
+			       bool *writable, hva_t *hva)
+{
+	struct page *page;
+	kvm_pfn_t pfn;
+
+	pfn = __gfn_to_pfn_page_memslot(slot, gfn, atomic, async,
+					write_fault, writable, hva, &page);
+
+	return ensure_pfn_ref(page, pfn);
 }
 EXPORT_SYMBOL_GPL(__gfn_to_pfn_memslot);
 
 kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault,
 			  bool *writable)
 {
-	return __gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn, false, NULL,
-				    write_fault, writable, NULL);
+	struct page *page;
+	kvm_pfn_t pfn;
+
+	pfn = gfn_to_pfn_page_prot(kvm, gfn, write_fault, writable, &page);
+
+	return ensure_pfn_ref(page, pfn);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_prot);
 
 kvm_pfn_t gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn)
 {
-	return __gfn_to_pfn_memslot(slot, gfn, false, NULL, true, NULL, NULL);
+	struct page *page;
+	kvm_pfn_t pfn;
+
+	pfn = gfn_to_pfn_page_memslot(slot, gfn, &page);
+
+	return ensure_pfn_ref(page, pfn);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot);
 
 kvm_pfn_t gfn_to_pfn_memslot_atomic(struct kvm_memory_slot *slot, gfn_t gfn)
 {
-	return __gfn_to_pfn_memslot(slot, gfn, true, NULL, true, NULL, NULL);
+	struct page *page;
+	kvm_pfn_t pfn;
+
+	pfn = gfn_to_pfn_page_memslot_atomic(slot, gfn, &page);
+
+	return ensure_pfn_ref(page, pfn);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot_atomic);
 
 kvm_pfn_t kvm_vcpu_gfn_to_pfn_atomic(struct kvm_vcpu *vcpu, gfn_t gfn)
 {
-	return gfn_to_pfn_memslot_atomic(kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn);
+	struct page *page;
+	kvm_pfn_t pfn;
+
+	pfn = kvm_vcpu_gfn_to_pfn_page_atomic(vcpu, gfn, &page);
+
+	return ensure_pfn_ref(page, pfn);
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_pfn_atomic);
 
 kvm_pfn_t gfn_to_pfn(struct kvm *kvm, gfn_t gfn)
 {
-	return gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn);
+	struct page *page;
+	kvm_pfn_t pfn;
+
+	pfn = gfn_to_pfn_page(kvm, gfn, &page);
+
+	return ensure_pfn_ref(page, pfn);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn);
 
 kvm_pfn_t kvm_vcpu_gfn_to_pfn(struct kvm_vcpu *vcpu, gfn_t gfn)
 {
-	return gfn_to_pfn_memslot(kvm_vcpu_gfn_to_memslot(vcpu, gfn), gfn);
+	struct page *page;
+	kvm_pfn_t pfn;
+
+	pfn = kvm_vcpu_gfn_to_pfn_page(vcpu, gfn, &page);
+
+	return ensure_pfn_ref(page, pfn);
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_gfn_to_pfn);
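
To make the new contract concrete, here is a minimal caller-side sketch. It is
illustrative only and not part of the series; example_access_gfn and the
surrounding usage are assumptions. The ownership rule it shows: a non-NULL page
means the pfn was resolved by gup and the caller owns a page reference, while a
NULL page means the pfn came from follow_pte and its refcount must not be
touched.

#include <linux/kvm_host.h>

/* Illustrative only: migrating one caller from gfn_to_pfn() to gfn_to_pfn_page(). */
static void example_access_gfn(struct kvm *kvm, gfn_t gfn)
{
	struct page *page;
	kvm_pfn_t pfn;

	pfn = gfn_to_pfn_page(kvm, gfn, &page);
	if (is_error_noslot_pfn(pfn))
		return;

	/* ... access the memory backing pfn ... */

	if (page)
		put_page(page);	/* gup-backed: drop the reference gup took */
	/* page == NULL: follow_pte-backed, no reference was taken */
}

Under the deprecated API the tail of such a caller would have been an
unconditional kvm_release_pfn_clean(pfn).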

From patchwork Mon Nov 29 03:43:15 2021
X-Patchwork-Submitter: David Stevens
X-Patchwork-Id: 12643537
From: David Stevens
To: Marc Zyngier, Paolo Bonzini
Cc: James Morse, Alexandru Elisei, Suzuki K Poulose, Will Deacon,
    Sean Christopherson, Wanpeng Li, Jim Mattson, Joerg Roedel,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org, David Stevens
Subject: [PATCH v5 2/4] KVM: x86/mmu: use gfn_to_pfn_page
Date: Mon, 29 Nov 2021 12:43:15 +0900
Message-Id: <20211129034317.2964790-3-stevensd@google.com>
In-Reply-To: <20211129034317.2964790-1-stevensd@google.com>
References: <20211129034317.2964790-1-stevensd@google.com>

From: David Stevens

Convert usages of the deprecated gfn_to_pfn functions to the new
gfn_to_pfn_page functions.

Signed-off-by: David Stevens
---
 arch/x86/kvm/mmu.h             |  1 +
 arch/x86/kvm/mmu/mmu.c         | 18 +++++++++++-------
 arch/x86/kvm/mmu/paging_tmpl.h |  9 ++++++---
 arch/x86/kvm/x86.c             |  6 ++++--
 4 files changed, 22 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 9ae6168d381e..97d94a9612b6 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -164,6 +164,7 @@ struct kvm_page_fault {
 	/* Outputs of kvm_faultin_pfn. */
 	kvm_pfn_t pfn;
 	hva_t hva;
+	struct page *page;
 	bool map_writable;
 };
 
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 04c00c34517e..0626395ff1d9 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -2891,6 +2891,9 @@ void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	if (unlikely(fault->max_level == PG_LEVEL_4K))
 		return;
 
+	if (!fault->page)
+		return;
+
 	if (is_error_noslot_pfn(fault->pfn) || kvm_is_reserved_pfn(fault->pfn))
 		return;
 
@@ -3950,9 +3953,9 @@ static bool kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
 	}
 
 	async = false;
-	fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, &async,
-					  fault->write, &fault->map_writable,
-					  &fault->hva);
+	fault->pfn = __gfn_to_pfn_page_memslot(slot, fault->gfn, false, &async,
+					       fault->write, &fault->map_writable,
+					       &fault->hva, &fault->page);
 	if (!async)
 		return false; /* *pfn has correct page already */
 
@@ -3966,9 +3969,9 @@ static bool kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
 			goto out_retry;
 	}
 
-	fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, NULL,
-					  fault->write, &fault->map_writable,
-					  &fault->hva);
+	fault->pfn = __gfn_to_pfn_page_memslot(slot, fault->gfn, false, NULL,
+					       fault->write, &fault->map_writable,
+					       &fault->hva, &fault->page);
 	return false;
 
 out_retry:
@@ -4029,7 +4032,8 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 		read_unlock(&vcpu->kvm->mmu_lock);
 	else
 		write_unlock(&vcpu->kvm->mmu_lock);
-	kvm_release_pfn_clean(fault->pfn);
+	if (fault->page)
+		put_page(fault->page);
 	return r;
 }
 
diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index f87d36898c44..370d52f252a8 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -565,6 +565,7 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	unsigned pte_access;
 	gfn_t gfn;
 	kvm_pfn_t pfn;
+	struct page *page;
 
 	if (FNAME(prefetch_invalid_gpte)(vcpu, sp, spte, gpte))
 		return false;
@@ -580,12 +581,13 @@ FNAME(prefetch_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	if (!slot)
 		return false;
 
-	pfn = gfn_to_pfn_memslot_atomic(slot, gfn);
+	pfn = gfn_to_pfn_page_memslot_atomic(slot, gfn, &page);
 	if (is_error_pfn(pfn))
 		return false;
 
 	mmu_set_spte(vcpu, slot, spte, pte_access, gfn, pfn, NULL);
-	kvm_release_pfn_clean(pfn);
+	if (page)
+		put_page(page);
 	return true;
 }
 
@@ -923,7 +925,8 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 
 out_unlock:
 	write_unlock(&vcpu->kvm->mmu_lock);
-	kvm_release_pfn_clean(fault->pfn);
+	if (fault->page)
+		put_page(fault->page);
 	return r;
 }
 
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 5c479ae57693..95f56ec43e0b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7820,6 +7820,7 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 {
 	gpa_t gpa = cr2_or_gpa;
 	kvm_pfn_t pfn;
+	struct page *page;
 
 	if (!(emulation_type & EMULTYPE_ALLOW_RETRY_PF))
 		return false;
@@ -7849,7 +7850,7 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 	 * retry instruction -> write #PF -> emulation fail -> retry
 	 * instruction -> ...
 	 */
-	pfn = gfn_to_pfn(vcpu->kvm, gpa_to_gfn(gpa));
+	pfn = gfn_to_pfn_page(vcpu->kvm, gpa_to_gfn(gpa), &page);
 
 	/*
 	 * If the instruction failed on the error pfn, it can not be fixed,
@@ -7858,7 +7859,8 @@ static bool reexecute_instruction(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 	if (is_error_noslot_pfn(pfn))
 		return false;
 
-	kvm_release_pfn_clean(pfn);
+	if (page)
+		put_page(page);
 
 	/* The instructions are well-emulated on direct mmu. */
 	if (vcpu->arch.mmu->direct_map) {
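
The x86 conversion above applies one pattern in every fault and prefetch path.
The fragment below restates it in isolation as a hypothetical helper
(example_finish_fault is not in the patch), assuming the struct kvm_page_fault
extension introduced here.

/*
 * Hypothetical summary of the release pattern used throughout this
 * patch: only a gup-resolved fault (fault->page != NULL) holds a page
 * reference, so only that case drops one.
 */
static void example_finish_fault(struct kvm_page_fault *fault)
{
	if (fault->page)
		put_page(fault->page);
	/* fault->page == NULL: pfn came from follow_pte; nothing to release */
}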

From patchwork Mon Nov 29 03:43:16 2021
X-Patchwork-Submitter: David Stevens
X-Patchwork-Id: 12643541
From: David Stevens
To: Marc Zyngier, Paolo Bonzini
Cc: James Morse, Alexandru Elisei, Suzuki K Poulose, Will Deacon,
    Sean Christopherson, Wanpeng Li, Jim Mattson, Joerg Roedel,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org, David Stevens
Subject: [PATCH v5 3/4] KVM: arm64/mmu: use gfn_to_pfn_page
Date: Mon, 29 Nov 2021 12:43:16 +0900
Message-Id: <20211129034317.2964790-4-stevensd@google.com>
In-Reply-To: <20211129034317.2964790-1-stevensd@google.com>
References: <20211129034317.2964790-1-stevensd@google.com>

From: David Stevens

Convert usages of the deprecated gfn_to_pfn functions to the new
gfn_to_pfn_page functions.

Signed-off-by: David Stevens
Signed-off-by: Sean Christopherson
---
 arch/arm64/kvm/mmu.c | 27 +++++++++++++++++----------
 1 file changed, 17 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 326cdfec74a1..197fb8afbb94 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -829,7 +829,7 @@ static bool fault_supports_stage2_huge_mapping(struct kvm_memory_slot *memslot,
 static unsigned long
 transparent_hugepage_adjust(struct kvm *kvm, struct kvm_memory_slot *memslot,
 			    unsigned long hva, kvm_pfn_t *pfnp,
-			    phys_addr_t *ipap)
+			    struct page **page, phys_addr_t *ipap)
 {
 	kvm_pfn_t pfn = *pfnp;
 
@@ -838,7 +838,8 @@ transparent_hugepage_adjust(struct kvm *kvm, struct kvm_memory_slot *memslot,
 	 * sure that the HVA and IPA are sufficiently aligned and that the
 	 * block map is contained within the memslot.
 	 */
-	if (fault_supports_stage2_huge_mapping(memslot, hva, PMD_SIZE) &&
+	if (*page &&
+	    fault_supports_stage2_huge_mapping(memslot, hva, PMD_SIZE) &&
 	    get_user_mapping_size(kvm, hva) >= PMD_SIZE) {
 		/*
 		 * The address we faulted on is backed by a transparent huge
@@ -859,10 +860,11 @@ transparent_hugepage_adjust(struct kvm *kvm, struct kvm_memory_slot *memslot,
 		 * page accordingly.
 		 */
 		*ipap &= PMD_MASK;
-		kvm_release_pfn_clean(pfn);
+		put_page(*page);
 		pfn &= ~(PTRS_PER_PMD - 1);
-		get_page(pfn_to_page(pfn));
 		*pfnp = pfn;
+		*page = pfn_to_page(pfn);
+		get_page(*page);
 
 		return PMD_SIZE;
 	}
@@ -955,6 +957,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	short vma_shift;
 	gfn_t gfn;
 	kvm_pfn_t pfn;
+	struct page *page;
 	bool logging_active = memslot_is_logging(memslot);
 	unsigned long fault_level = kvm_vcpu_trap_get_fault_level(vcpu);
 	unsigned long vma_pagesize, fault_granule;
@@ -1056,8 +1059,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	 */
 	smp_rmb();
 
-	pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
-				   write_fault, &writable, NULL);
+	pfn = __gfn_to_pfn_page_memslot(memslot, gfn, false, NULL,
+					write_fault, &writable, NULL, &page);
 	if (pfn == KVM_PFN_ERR_HWPOISON) {
 		kvm_send_hwpoison_signal(hva, vma_shift);
 		return 0;
@@ -1102,7 +1105,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			vma_pagesize = fault_granule;
 		else
 			vma_pagesize = transparent_hugepage_adjust(kvm, memslot,
-								   hva, &pfn,
+								   hva,
+								   &pfn, &page,
 								   &fault_ipa);
 	}
 
@@ -1142,14 +1146,17 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 
 	/* Mark the page dirty only if the fault is handled successfully */
 	if (writable && !ret) {
-		kvm_set_pfn_dirty(pfn);
+		if (page)
+			kvm_set_pfn_dirty(pfn);
 		mark_page_dirty_in_slot(kvm, memslot, gfn);
 	}
 
 out_unlock:
 	spin_unlock(&kvm->mmu_lock);
-	kvm_set_pfn_accessed(pfn);
-	kvm_release_pfn_clean(pfn);
+	if (page) {
+		kvm_set_pfn_accessed(pfn);
+		put_page(page);
+	}
 	return ret != -EAGAIN ? ret : 0;
 }
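
The reference hand-off in transparent_hugepage_adjust() above can be easier to
read in isolation. The sketch below is illustrative only
(example_move_ref_to_block_base is not in the patch) and assumes a gup-backed
pfn that lies somewhere inside a PMD-sized block.

/*
 * Illustrative only: when a fault is collapsed to a PMD-sized block,
 * the gup reference is moved from the small page that faulted to the
 * block-aligned base page, so the caller's eventual put_page() drops
 * the reference on the page that was actually recorded.
 */
static void example_move_ref_to_block_base(kvm_pfn_t *pfnp, struct page **page)
{
	put_page(*page);			/* drop the reference on the faulting page */
	*pfnp &= ~(PTRS_PER_PMD - 1);		/* align the pfn down to the block base */
	*page = pfn_to_page(*pfnp);
	get_page(*page);			/* and hold the base page instead */
}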

From patchwork Mon Nov 29 03:43:17 2021
X-Patchwork-Submitter: David Stevens
X-Patchwork-Id: 12643539
From: David Stevens
To: Marc Zyngier, Paolo Bonzini
Cc: James Morse, Alexandru Elisei, Suzuki K Poulose, Will Deacon,
    Sean Christopherson, Wanpeng Li, Jim Mattson, Joerg Roedel,
    linux-arm-kernel@lists.infradead.org, kvmarm@lists.cs.columbia.edu,
    linux-kernel@vger.kernel.org, kvm@vger.kernel.org, David Stevens
Subject: [PATCH v5 4/4] KVM: mmu: remove over-aggressive warnings
Date: Mon, 29 Nov 2021 12:43:17 +0900
Message-Id: <20211129034317.2964790-5-stevensd@google.com>
In-Reply-To: <20211129034317.2964790-1-stevensd@google.com>
References: <20211129034317.2964790-1-stevensd@google.com>

From: David Stevens

Remove two warnings that require ref counts for pages to be non-zero,
as mapped pfns from follow_pfn may not have an initialized ref count.

Signed-off-by: David Stevens
---
 arch/x86/kvm/mmu/mmu.c | 7 -------
 virt/kvm/kvm_main.c    | 2 +-
 2 files changed, 1 insertion(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 0626395ff1d9..7c4c7fededf0 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -621,13 +621,6 @@ static int mmu_spte_clear_track_bits(struct kvm *kvm, u64 *sptep)
 
 	pfn = spte_to_pfn(old_spte);
 
-	/*
-	 * KVM does not hold the refcount of the page used by
-	 * kvm mmu, before reclaiming the page, we should
-	 * unmap it from mmu first.
-	 */
-	WARN_ON(!kvm_is_reserved_pfn(pfn) && !page_count(pfn_to_page(pfn)));
-
 	if (is_accessed_spte(old_spte))
 		kvm_set_pfn_accessed(pfn);
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 16a8a71f20bf..d81edcb3e107 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -170,7 +170,7 @@ bool kvm_is_zone_device_pfn(kvm_pfn_t pfn)
 	 * the device has been pinned, e.g. by get_user_pages(). WARN if the
 	 * page_count() is zero to help detect bad usage of this helper.
 	 */
-	if (!pfn_valid(pfn) || WARN_ON_ONCE(!page_count(pfn_to_page(pfn))))
+	if (!pfn_valid(pfn) || !page_count(pfn_to_page(pfn)))
 		return false;
 
 	return is_zone_device_page(pfn_to_page(pfn));