From patchwork Fri Dec 13 16:48:02 2024
X-Patchwork-Submitter: Fuad Tabba <tabba@google.com>
X-Patchwork-Id: 13907464
Date: Fri, 13 Dec 2024 16:48:02 +0000
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au,
 anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
 aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk,
 brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org,
 xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com,
 jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com,
 yu.c.zhang@linux.intel.com, isaku.yamahata@intel.com, mic@digikod.net,
 vbabka@suse.cz, vannapurve@google.com, ackerleytng@google.com,
 mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com,
 wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com,
 kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com, steven.price@arm.com,
 quic_eberman@quicinc.com, quic_mnalajal@quicinc.com, quic_tsoni@quicinc.com,
 quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com,
 quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com,
 yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org, will@kernel.org,
 qperret@google.com, keirf@google.com, roypat@amazon.co.uk, shuah@kernel.org,
 hch@infradead.org, jgg@nvidia.com, rientjes@google.com, jhubbard@nvidia.com,
 fvdl@google.com, hughd@google.com, jthoughton@google.com, tabba@google.com
Subject: [RFC PATCH v4 06/14] KVM: guest_memfd: Handle final folio_put() of guestmem pages
Message-ID: <20241213164811.2006197-7-tabba@google.com>
In-Reply-To: <20241213164811.2006197-1-tabba@google.com>
References: <20241213164811.2006197-1-tabba@google.com>
MIME-Version: 1.0
X-Mailer: git-send-email 2.47.1.613.gc27f4b7a9f-goog

Before transitioning a guest_memfd folio to unshared, thereby disallowing
access by the host and allowing the hypervisor to transition its view of the
guest page to private, we need to be sure that the host doesn't have any
references to the folio.

Introduce a new page type for guest_memfd folios, and use it to register a
callback that informs the guest_memfd subsystem when the last reference to the
folio is dropped, so that it knows no host references remain.

Signed-off-by: Fuad Tabba <tabba@google.com>

---

The function kvm_slot_gmem_register_callback() isn't used in this series. It
will be used later in code that performs unsharing of memory. I have tested it
with pKVM, based on downstream code [*]. It's included in this RFC since it
demonstrates the plan for handling unsharing of private folios; a sketch of
the intended calling pattern is included below.

[*] https://android-kvm.googlesource.com/linux/+/refs/heads/tabba/guestmem-6.13-v4-pkvm
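To make the intended contract concrete, here is a rough sketch of how a later
unshare path might consume the return value of
kvm_slot_gmem_register_callback(). The caller below is hypothetical and not
part of this series; only the return-value contract is taken from the patch:

/*
 * Hypothetical caller, for illustration only.
 *
 * kvm_slot_gmem_register_callback() returns:
 *   0       - the host held no extra references; the folio is already
 *             guest-mappable and unsharing can proceed immediately.
 *   -EAGAIN - host references remain; the __folio_put() callback has been
 *             registered, and the transition completes asynchronously in
 *             kvm_gmem_handle_folio_put() once the last reference is dropped.
 */
static int example_unshare_gfn(struct kvm_memory_slot *slot, gfn_t gfn)
{
	int r = kvm_slot_gmem_register_callback(slot, gfn);

	if (r == -EAGAIN)
		return 0;	/* completion deferred to the callback */

	return r;		/* 0 on immediate success, or a real error */
}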
---
 include/linux/kvm_host.h   |  11 +++
 include/linux/page-flags.h |   7 ++
 mm/debug.c                 |   1 +
 mm/swap.c                  |   4 +
 virt/kvm/guest_memfd.c     | 145 +++++++++++++++++++++++++++++++++++++
 5 files changed, 168 insertions(+)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 84aa7908a5dd..7ada5f78ded4 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2574,6 +2574,8 @@ int kvm_slot_gmem_clear_mappable(struct kvm_memory_slot *slot, gfn_t start,
 				 gfn_t end);
 bool kvm_slot_gmem_is_mappable(struct kvm_memory_slot *slot, gfn_t gfn);
 bool kvm_slot_gmem_is_guest_mappable(struct kvm_memory_slot *slot, gfn_t gfn);
+int kvm_slot_gmem_register_callback(struct kvm_memory_slot *slot, gfn_t gfn);
+void kvm_gmem_handle_folio_put(struct folio *folio);
 #else
 static inline bool kvm_gmem_is_mappable(struct kvm *kvm, gfn_t gfn, gfn_t end)
 {
@@ -2615,6 +2617,15 @@ static inline bool kvm_slot_gmem_is_guest_mappable(struct kvm_memory_slot *slot,
 	WARN_ON_ONCE(1);
 	return false;
 }
+static inline int kvm_slot_gmem_register_callback(struct kvm_memory_slot *slot, gfn_t gfn)
+{
+	WARN_ON_ONCE(1);
+	return -EINVAL;
+}
+static inline void kvm_gmem_handle_folio_put(struct folio *folio)
+{
+	WARN_ON_ONCE(1);
+}
 #endif /* CONFIG_KVM_GMEM_MAPPABLE */
 
 #endif
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index aca57802d7c7..b0e8e43de77c 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -950,6 +950,7 @@ enum pagetype {
 	PGTY_slab		= 0xf5,
 	PGTY_zsmalloc		= 0xf6,
 	PGTY_unaccepted		= 0xf7,
+	PGTY_guestmem		= 0xf8,
 
 	PGTY_mapcount_underflow = 0xff
 };
@@ -1099,6 +1100,12 @@ FOLIO_TYPE_OPS(hugetlb, hugetlb)
 FOLIO_TEST_FLAG_FALSE(hugetlb)
 #endif
 
+#ifdef CONFIG_KVM_GMEM_MAPPABLE
+FOLIO_TYPE_OPS(guestmem, guestmem)
+#else
+FOLIO_TEST_FLAG_FALSE(guestmem)
+#endif
+
 PAGE_TYPE_OPS(Zsmalloc, zsmalloc, zsmalloc)
 
 /*
diff --git a/mm/debug.c b/mm/debug.c
index 95b6ab809c0e..db93be385ed9 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -56,6 +56,7 @@ static const char *page_type_names[] = {
 	DEF_PAGETYPE_NAME(table),
 	DEF_PAGETYPE_NAME(buddy),
 	DEF_PAGETYPE_NAME(unaccepted),
+	DEF_PAGETYPE_NAME(guestmem),
 };
 
 static const char *page_type_name(unsigned int page_type)
diff --git a/mm/swap.c b/mm/swap.c
index 6f01b56bce13..15220eaabc86 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -37,6 +37,7 @@
 #include 
 #include 
 #include 
+#include <linux/kvm_host.h>
 
 #include "internal.h"
 
@@ -103,6 +104,9 @@ static void free_typed_folio(struct folio *folio)
 	case PGTY_offline:
 		/* Nothing to do, it's offline. */
 		return;
+	case PGTY_guestmem:
+		kvm_gmem_handle_folio_put(folio);
+		return;
 	default:
 		WARN_ON_ONCE(1);
 	}
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index d1c192927cf7..5ecaa5dfcd00 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -387,6 +387,28 @@ enum folio_mappability {
 	KVM_GMEM_NONE_MAPPABLE	= 0b11, /* Not mappable, transient state. */
 };
 
+/*
+ * Unregisters the __folio_put() callback from the folio.
+ *
+ * Restores a folio's refcount after all pending references have been released,
+ * and removes the folio type, thereby removing the callback. Now the folio can
+ * be freed normally once all actual references have been dropped.
+ *
+ * Must be called with the filemap (inode->i_mapping) invalidate_lock held.
+ * Must also have exclusive access to the folio: folio must be either locked,
+ * or gmem holds the only reference.
+ */
+static void __kvm_gmem_restore_pending_folio(struct folio *folio)
+{
+	if (WARN_ON_ONCE(folio_mapped(folio) || !folio_test_guestmem(folio)))
+		return;
+
+	WARN_ON_ONCE(!folio_test_locked(folio) || folio_ref_count(folio) > 1);
+
+	__folio_clear_guestmem(folio);
+	folio_ref_add(folio, folio_nr_pages(folio));
+}
+
 /*
  * Marks the range [start, end) as mappable by both the host and the guest.
  * Usually called when guest shares memory with the host.
@@ -400,7 +422,31 @@ static int gmem_set_mappable(struct inode *inode, pgoff_t start, pgoff_t end)
 
 	filemap_invalidate_lock(inode->i_mapping);
 	for (i = start; i < end; i++) {
+		struct folio *folio = NULL;
+
+		/*
+		 * If the folio is NONE_MAPPABLE, it indicates that it is
+		 * transitioning to private (GUEST_MAPPABLE). Transition it to
+		 * shared (ALL_MAPPABLE) immediately, and remove the callback.
+		 */
+		if (xa_to_value(xa_load(mappable_offsets, i)) == KVM_GMEM_NONE_MAPPABLE) {
+			folio = filemap_lock_folio(inode->i_mapping, i);
+			if (WARN_ON_ONCE(IS_ERR(folio))) {
+				r = PTR_ERR(folio);
+				break;
+			}
+
+			if (folio_test_guestmem(folio))
+				__kvm_gmem_restore_pending_folio(folio);
+		}
+
 		r = xa_err(xa_store(mappable_offsets, i, xval, GFP_KERNEL));
+
+		if (folio) {
+			folio_unlock(folio);
+			folio_put(folio);
+		}
+
 		if (r)
 			break;
 	}
@@ -473,6 +519,105 @@ static int gmem_clear_mappable(struct inode *inode, pgoff_t start, pgoff_t end)
 	return r;
 }
 
+/*
+ * Registers a callback to __folio_put(), so that gmem knows that the host does
+ * not have any references to the folio. It does that by setting the folio type
+ * to guestmem.
+ *
+ * Returns 0 if the host doesn't have any references, or -EAGAIN if the host
+ * has references, and the callback has been registered.
+ *
+ * Must be called with the following locks held:
+ * - filemap (inode->i_mapping) invalidate_lock
+ * - folio lock
+ */
+static int __gmem_register_callback(struct folio *folio, struct inode *inode, pgoff_t idx)
+{
+	struct xarray *mappable_offsets = &kvm_gmem_private(inode)->mappable_offsets;
+	void *xval_guest = xa_mk_value(KVM_GMEM_GUEST_MAPPABLE);
+	int refcount;
+
+	rwsem_assert_held_write_nolockdep(&inode->i_mapping->invalidate_lock);
+	WARN_ON_ONCE(!folio_test_locked(folio));
+
+	if (folio_mapped(folio) || folio_test_guestmem(folio))
+		return -EAGAIN;
+
+	/* Register a callback first. */
+	__folio_set_guestmem(folio);
+
+	/*
+	 * Check for references after setting the type to guestmem, to guard
+	 * against potential races with the refcount being decremented later.
+	 *
+	 * At least one reference is expected because the folio is locked.
+	 */
+
+	refcount = folio_ref_sub_return(folio, folio_nr_pages(folio));
+	if (refcount == 1) {
+		int r;
+
+		/* refcount isn't elevated, it's now faultable by the guest. */
+		r = WARN_ON_ONCE(xa_err(xa_store(mappable_offsets, idx, xval_guest, GFP_KERNEL)));
+		if (!r)
+			__kvm_gmem_restore_pending_folio(folio);
+
+		return r;
+	}
+
+	return -EAGAIN;
+}
+
+int kvm_slot_gmem_register_callback(struct kvm_memory_slot *slot, gfn_t gfn)
+{
+	unsigned long pgoff = slot->gmem.pgoff + gfn - slot->base_gfn;
+	struct inode *inode = file_inode(slot->gmem.file);
+	struct folio *folio;
+	int r;
+
+	filemap_invalidate_lock(inode->i_mapping);
+
+	folio = filemap_lock_folio(inode->i_mapping, pgoff);
+	if (WARN_ON_ONCE(IS_ERR(folio))) {
+		r = PTR_ERR(folio);
+		goto out;
+	}
+
+	r = __gmem_register_callback(folio, inode, pgoff);
+
+	folio_unlock(folio);
+	folio_put(folio);
+out:
+	filemap_invalidate_unlock(inode->i_mapping);
+
+	return r;
+}
+
+/*
+ * Callback function for __folio_put(), i.e., called when all references by the
+ * host to the folio have been dropped. This allows gmem to transition the
+ * state of the folio to mappable by the guest, and allows the hypervisor to
+ * continue transitioning its state to private, since the host cannot attempt
+ * to access it anymore.
+ */
+void kvm_gmem_handle_folio_put(struct folio *folio)
+{
+	struct xarray *mappable_offsets;
+	struct inode *inode;
+	pgoff_t index;
+	void *xval;
+
+	inode = folio->mapping->host;
+	index = folio->index;
+	mappable_offsets = &kvm_gmem_private(inode)->mappable_offsets;
+	xval = xa_mk_value(KVM_GMEM_GUEST_MAPPABLE);
+
+	filemap_invalidate_lock(inode->i_mapping);
+	__kvm_gmem_restore_pending_folio(folio);
+	WARN_ON_ONCE(xa_err(xa_store(mappable_offsets, index, xval, GFP_KERNEL)));
+	filemap_invalidate_unlock(inode->i_mapping);
+}
+
 static bool gmem_is_mappable(struct inode *inode, pgoff_t pgoff)
 {
 	struct xarray *mappable_offsets = &kvm_gmem_private(inode)->mappable_offsets;
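For intuition, the refcount dance above can be modeled in ordinary userspace C.
This is a simulation, not kernel code: mock_folio and its fields stand in for
struct folio, its refcount, and the PGTY_guestmem page type, and the concrete
numbers are illustrative only.

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

#define MOCK_EAGAIN 11	/* stand-in for the kernel's EAGAIN */

/* Userspace stand-in for a folio: a refcount plus a "guestmem" type bit. */
struct mock_folio {
	int refcount;	/* lookup reference + gmem page refs + host pins */
	int nr_pages;
	bool guestmem;	/* models PGTY_guestmem being set */
};

/* Models __gmem_register_callback(): arm the hook, then drop gmem's refs. */
static int register_callback(struct mock_folio *f)
{
	f->guestmem = true;			/* arm the __folio_put() hook */
	f->refcount -= f->nr_pages;		/* drop gmem's per-page refs */
	if (f->refcount == 1) {			/* only the lookup ref is left */
		f->guestmem = false;		/* no host refs: disarm ... */
		f->refcount += f->nr_pages;	/* ... and restore the refs */
		return 0;
	}
	return -MOCK_EAGAIN;			/* host refs remain */
}

/* Models kvm_gmem_handle_folio_put(): fires when the refcount hits zero. */
static void handle_folio_put(struct mock_folio *f)
{
	f->guestmem = false;			/* clear the type ... */
	f->refcount += f->nr_pages;		/* ... and restore gmem's refs */
}

int main(void)
{
	/* 4-page folio: 1 lookup ref + 4 gmem page refs + 1 host pin. */
	struct mock_folio f = { .refcount = 6, .nr_pages = 4 };

	assert(register_callback(&f) == -MOCK_EAGAIN);	/* callback armed */
	f.refcount--;			/* gmem puts its lookup reference */
	f.refcount--;			/* host drops its last pin: zero ... */
	handle_folio_put(&f);		/* ... which triggers the callback */
	assert(!f.guestmem && f.refcount == 4);
	printf("folio is now guest-mappable\n");
	return 0;
}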