From patchwork Fri Dec 13 16:48:01 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Fuad Tabba <tabba@google.com>
X-Patchwork-Id: 13907463
Date: Fri, 13 Dec 2024 16:48:01 +0000
In-Reply-To: <20241213164811.2006197-1-tabba@google.com>
Mime-Version: 1.0
References: <20241213164811.2006197-1-tabba@google.com>
X-Mailer: git-send-email 2.47.1.613.gc27f4b7a9f-goog
Message-ID: <20241213164811.2006197-6-tabba@google.com>
Subject: [RFC PATCH v4 05/14] KVM: guest_memfd: Folio mappability states and functions that manage their transition
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au,
 anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
 aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk,
 brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org,
 xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com,
 jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com,
 yu.c.zhang@linux.intel.com, isaku.yamahata@intel.com, mic@digikod.net,
 vbabka@suse.cz, vannapurve@google.com, ackerleytng@google.com,
 mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com,
 wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com,
 kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com, steven.price@arm.com,
 quic_eberman@quicinc.com, quic_mnalajal@quicinc.com, quic_tsoni@quicinc.com,
 quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com,
 quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com,
 yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org, will@kernel.org,
 qperret@google.com, keirf@google.com, roypat@amazon.co.uk, shuah@kernel.org,
 hch@infradead.org, jgg@nvidia.com, rientjes@google.com, jhubbard@nvidia.com,
 fvdl@google.com, hughd@google.com, jthoughton@google.com, tabba@google.com
To allow restricted mapping of guest_memfd folios by the host, guest_memfd
needs to track whether they can be mapped and by whom, since the mapping
will only be allowed under conditions where it is safe to access these
folios. These conditions depend on the folios being explicitly shared with
the host, or not yet exposed to the guest (e.g., at initialization).

This patch introduces the states that determine whether the host and the
guest can fault in the folios, as well as the functions that manage
transitioning between those states.
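For illustration only, and not part of this patch: a hypothetical caller
reacting to a guest share/unshare request could drive the new helpers
roughly as follows (handle_guest_share() is a made-up name; the actual
callers are introduced separately):

  static int handle_guest_share(struct kvm *kvm, gfn_t start, gfn_t end,
                                bool shared)
  {
          /*
           * Sketch only: sharing makes the range mappable by both host and
           * guest; unsharing revokes host mappability and, once any host
           * references are dropped, leaves the range guest-mappable.
           */
          if (shared)
                  return kvm_gmem_set_mappable(kvm, start, end);

          return kvm_gmem_clear_mappable(kvm, start, end);
  }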
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 include/linux/kvm_host.h |  53 ++++++++++++++
 virt/kvm/guest_memfd.c   | 153 +++++++++++++++++++++++++++++++++++++++
 virt/kvm/kvm_main.c      |  92 +++++++++++++++++++++++
 3 files changed, 298 insertions(+)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index cda3ed4c3c27..84aa7908a5dd 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2564,4 +2564,57 @@ long kvm_arch_vcpu_pre_fault_memory(struct kvm_vcpu *vcpu,
                                     struct kvm_pre_fault_memory *range);
 #endif
 
+#ifdef CONFIG_KVM_GMEM_MAPPABLE
+bool kvm_gmem_is_mappable(struct kvm *kvm, gfn_t gfn, gfn_t end);
+int kvm_gmem_set_mappable(struct kvm *kvm, gfn_t start, gfn_t end);
+int kvm_gmem_clear_mappable(struct kvm *kvm, gfn_t start, gfn_t end);
+int kvm_slot_gmem_set_mappable(struct kvm_memory_slot *slot, gfn_t start,
+                               gfn_t end);
+int kvm_slot_gmem_clear_mappable(struct kvm_memory_slot *slot, gfn_t start,
+                                 gfn_t end);
+bool kvm_slot_gmem_is_mappable(struct kvm_memory_slot *slot, gfn_t gfn);
+bool kvm_slot_gmem_is_guest_mappable(struct kvm_memory_slot *slot, gfn_t gfn);
+#else
+static inline bool kvm_gmem_is_mappable(struct kvm *kvm, gfn_t gfn, gfn_t end)
+{
+        WARN_ON_ONCE(1);
+        return false;
+}
+static inline int kvm_gmem_set_mappable(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+        WARN_ON_ONCE(1);
+        return -EINVAL;
+}
+static inline int kvm_gmem_clear_mappable(struct kvm *kvm, gfn_t start,
+                                          gfn_t end)
+{
+        WARN_ON_ONCE(1);
+        return -EINVAL;
+}
+static inline int kvm_slot_gmem_set_mappable(struct kvm_memory_slot *slot,
+                                             gfn_t start, gfn_t end)
+{
+        WARN_ON_ONCE(1);
+        return -EINVAL;
+}
+static inline int kvm_slot_gmem_clear_mappable(struct kvm_memory_slot *slot,
+                                               gfn_t start, gfn_t end)
+{
+        WARN_ON_ONCE(1);
+        return -EINVAL;
+}
+static inline bool kvm_slot_gmem_is_mappable(struct kvm_memory_slot *slot,
+                                             gfn_t gfn)
+{
+        WARN_ON_ONCE(1);
+        return false;
+}
+static inline bool kvm_slot_gmem_is_guest_mappable(struct kvm_memory_slot *slot,
+                                                   gfn_t gfn)
+{
+        WARN_ON_ONCE(1);
+        return false;
+}
+#endif /* CONFIG_KVM_GMEM_MAPPABLE */
+
 #endif
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 0a7b6cf8bd8f..d1c192927cf7 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -375,6 +375,159 @@ static void kvm_gmem_init_mount(void)
         kvm_gmem_mnt->mnt_flags |= MNT_NOEXEC;
 }
 
+#ifdef CONFIG_KVM_GMEM_MAPPABLE
+/*
+ * An enum of the valid states that describe who can map a folio.
+ * Bit 0: if set guest cannot map the page
+ * Bit 1: if set host cannot map the page
+ */
+enum folio_mappability {
+        KVM_GMEM_ALL_MAPPABLE   = 0b00, /* Mappable by host and guest. */
+        KVM_GMEM_GUEST_MAPPABLE = 0b10, /* Mappable only by guest. */
+        KVM_GMEM_NONE_MAPPABLE  = 0b11, /* Not mappable, transient state. */
+};
+
+/*
+ * Marks the range [start, end) as mappable by both the host and the guest.
+ * Usually called when guest shares memory with the host.
+ */
+static int gmem_set_mappable(struct inode *inode, pgoff_t start, pgoff_t end)
+{
+        struct xarray *mappable_offsets = &kvm_gmem_private(inode)->mappable_offsets;
+        void *xval = xa_mk_value(KVM_GMEM_ALL_MAPPABLE);
+        pgoff_t i;
+        int r = 0;
+
+        filemap_invalidate_lock(inode->i_mapping);
+        for (i = start; i < end; i++) {
+                r = xa_err(xa_store(mappable_offsets, i, xval, GFP_KERNEL));
+                if (r)
+                        break;
+        }
+        filemap_invalidate_unlock(inode->i_mapping);
+
+        return r;
+}
+
+/*
+ * Marks the range [start, end) as not mappable by the host. If the host doesn't
+ * have any references to a particular folio, then that folio is marked as
+ * mappable by the guest.
+ *
+ * However, if the host still has references to the folio, then the folio is
+ * marked as not mappable by anyone. Marking it as not mappable allows it to
+ * drain all references from the host, and ensures that the hypervisor does
+ * not transition the folio to private, since the host still might access it.
+ *
+ * Usually called when guest unshares memory with the host.
+ */
+static int gmem_clear_mappable(struct inode *inode, pgoff_t start, pgoff_t end)
+{
+        struct xarray *mappable_offsets = &kvm_gmem_private(inode)->mappable_offsets;
+        void *xval_guest = xa_mk_value(KVM_GMEM_GUEST_MAPPABLE);
+        void *xval_none = xa_mk_value(KVM_GMEM_NONE_MAPPABLE);
+        pgoff_t i;
+        int r = 0;
+
+        filemap_invalidate_lock(inode->i_mapping);
+        for (i = start; i < end; i++) {
+                struct folio *folio;
+                int refcount = 0;
+
+                folio = filemap_lock_folio(inode->i_mapping, i);
+                if (!IS_ERR(folio)) {
+                        refcount = folio_ref_count(folio);
+                } else {
+                        r = PTR_ERR(folio);
+                        if (WARN_ON_ONCE(r != -ENOENT))
+                                break;
+
+                        folio = NULL;
+                }
+
+                /* +1 references are expected because of filemap_lock_folio(). */
+                if (folio && refcount > folio_nr_pages(folio) + 1) {
+                        /*
+                         * Outstanding references, the folio cannot be faulted
+                         * in by anyone until they're dropped.
+                         */
+                        r = xa_err(xa_store(mappable_offsets, i, xval_none, GFP_KERNEL));
+                } else {
+                        /*
+                         * No outstanding references. Transition the folio to
+                         * guest mappable immediately.
+                         */
+                        r = xa_err(xa_store(mappable_offsets, i, xval_guest, GFP_KERNEL));
+                }
+
+                if (folio) {
+                        folio_unlock(folio);
+                        folio_put(folio);
+                }
+
+                if (WARN_ON_ONCE(r))
+                        break;
+        }
+        filemap_invalidate_unlock(inode->i_mapping);
+
+        return r;
+}
+
+static bool gmem_is_mappable(struct inode *inode, pgoff_t pgoff)
+{
+        struct xarray *mappable_offsets = &kvm_gmem_private(inode)->mappable_offsets;
+        unsigned long r;
+
+        r = xa_to_value(xa_load(mappable_offsets, pgoff));
+
+        return (r == KVM_GMEM_ALL_MAPPABLE);
+}
+
+static bool gmem_is_guest_mappable(struct inode *inode, pgoff_t pgoff)
+{
+        struct xarray *mappable_offsets = &kvm_gmem_private(inode)->mappable_offsets;
+        unsigned long r;
+
+        r = xa_to_value(xa_load(mappable_offsets, pgoff));
+
+        return (r == KVM_GMEM_ALL_MAPPABLE || r == KVM_GMEM_GUEST_MAPPABLE);
+}
+
+int kvm_slot_gmem_set_mappable(struct kvm_memory_slot *slot, gfn_t start, gfn_t end)
+{
+        struct inode *inode = file_inode(slot->gmem.file);
+        pgoff_t start_off = slot->gmem.pgoff + start - slot->base_gfn;
+        pgoff_t end_off = start_off + end - start;
+
+        return gmem_set_mappable(inode, start_off, end_off);
+}
+
+int kvm_slot_gmem_clear_mappable(struct kvm_memory_slot *slot, gfn_t start, gfn_t end)
+{
+        struct inode *inode = file_inode(slot->gmem.file);
+        pgoff_t start_off = slot->gmem.pgoff + start - slot->base_gfn;
+        pgoff_t end_off = start_off + end - start;
+
+        return gmem_clear_mappable(inode, start_off, end_off);
+}
+
+bool kvm_slot_gmem_is_mappable(struct kvm_memory_slot *slot, gfn_t gfn)
+{
+        struct inode *inode = file_inode(slot->gmem.file);
+        unsigned long pgoff = slot->gmem.pgoff + gfn - slot->base_gfn;
+
+        return gmem_is_mappable(inode, pgoff);
+}
+
+bool kvm_slot_gmem_is_guest_mappable(struct kvm_memory_slot *slot, gfn_t gfn)
+{
+        struct inode *inode = file_inode(slot->gmem.file);
+        unsigned long pgoff = slot->gmem.pgoff + gfn - slot->base_gfn;
+
+        return gmem_is_guest_mappable(inode, pgoff);
+}
+#endif /* CONFIG_KVM_GMEM_MAPPABLE */
+
 static struct file_operations kvm_gmem_fops = {
         .open = generic_file_open,
         .release = kvm_gmem_release,
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index de2c11dae231..fffff01cebe7 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3094,6 +3094,98 @@ static int next_segment(unsigned long len, int offset)
                 return len;
 }
 
+#ifdef CONFIG_KVM_GMEM_MAPPABLE
+bool kvm_gmem_is_mappable(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+        struct kvm_memslot_iter iter;
+        bool r = true;
+
+        mutex_lock(&kvm->slots_lock);
+
+        kvm_for_each_memslot_in_gfn_range(&iter, kvm_memslots(kvm), start, end) {
+                struct kvm_memory_slot *memslot = iter.slot;
+                gfn_t gfn_start, gfn_end, i;
+
+                if (!kvm_slot_can_be_private(memslot))
+                        continue;
+
+                gfn_start = max(start, memslot->base_gfn);
+                gfn_end = min(end, memslot->base_gfn + memslot->npages);
+                if (WARN_ON_ONCE(gfn_start >= gfn_end))
+                        continue;
+
+                for (i = gfn_start; i < gfn_end; i++) {
+                        r = kvm_slot_gmem_is_mappable(memslot, i);
+                        if (r)
+                                goto out;
+                }
+        }
+out:
+        mutex_unlock(&kvm->slots_lock);
+
+        return r;
+}
+
+int kvm_gmem_set_mappable(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+        struct kvm_memslot_iter iter;
+        int r = 0;
+
+        mutex_lock(&kvm->slots_lock);
+
+        kvm_for_each_memslot_in_gfn_range(&iter, kvm_memslots(kvm), start, end) {
+                struct kvm_memory_slot *memslot = iter.slot;
+                gfn_t gfn_start, gfn_end;
+
+                if (!kvm_slot_can_be_private(memslot))
+                        continue;
+
+                gfn_start = max(start, memslot->base_gfn);
+                gfn_end = min(end, memslot->base_gfn + memslot->npages);
+                if (WARN_ON_ONCE(gfn_start >= gfn_end))
+                        continue;
+
+                r = kvm_slot_gmem_set_mappable(memslot, gfn_start, gfn_end);
+                if (WARN_ON_ONCE(r))
+                        break;
+        }
+
+        mutex_unlock(&kvm->slots_lock);
+
+        return r;
+}
+
+int kvm_gmem_clear_mappable(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+        struct kvm_memslot_iter iter;
+        int r = 0;
+
+        mutex_lock(&kvm->slots_lock);
+
+        kvm_for_each_memslot_in_gfn_range(&iter, kvm_memslots(kvm), start, end) {
+                struct kvm_memory_slot *memslot = iter.slot;
+                gfn_t gfn_start, gfn_end;
+
+                if (!kvm_slot_can_be_private(memslot))
+                        continue;
+
+                gfn_start = max(start, memslot->base_gfn);
+                gfn_end = min(end, memslot->base_gfn + memslot->npages);
+                if (WARN_ON_ONCE(gfn_start >= gfn_end))
+                        continue;
+
+                r = kvm_slot_gmem_clear_mappable(memslot, gfn_start, gfn_end);
+                if (WARN_ON_ONCE(r))
+                        break;
+        }
+
+        mutex_unlock(&kvm->slots_lock);
+
+        return r;
+}
+
+#endif /* CONFIG_KVM_GMEM_MAPPABLE */
+
 /* Copy @len bytes from guest memory at '(@gfn * PAGE_SIZE) + @offset' to @data */
 static int __kvm_read_guest_page(struct kvm_memory_slot *slot, gfn_t gfn,
                                  void *data, int offset, int len)
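As a rough summary of the transitions implemented above (an illustrative
sketch only, not part of the patch; it ignores locking and the xarray
bookkeeping): sharing always moves an offset to KVM_GMEM_ALL_MAPPABLE,
while unsharing moves it to KVM_GMEM_NONE_MAPPABLE if the host still holds
extra references to the folio, and to KVM_GMEM_GUEST_MAPPABLE otherwise:

  /* Illustrative model of the per-offset state transition; not part of the patch. */
  static enum folio_mappability next_state(bool share, bool host_has_extra_refs)
  {
          if (share)
                  return KVM_GMEM_ALL_MAPPABLE;

          return host_has_extra_refs ? KVM_GMEM_NONE_MAPPABLE :
                                       KVM_GMEM_GUEST_MAPPABLE;
  }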