From patchwork Thu Aug 1 09:01:09 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Fuad Tabba <tabba@google.com>
X-Patchwork-Id: 13749993
Date: Thu, 1 Aug 2024 10:01:09 +0100
In-Reply-To: <20240801090117.3841080-1-tabba@google.com>
References: <20240801090117.3841080-1-tabba@google.com>
X-Mailer: git-send-email 2.46.0.rc1.232.g9752f9e123-goog
Message-ID: <20240801090117.3841080-3-tabba@google.com>
Subject: [RFC PATCH v2 02/10] KVM: Add restricted support for mapping guestmem by the host
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au,
 anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
 aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk,
 brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org,
 xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com,
 jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com,
 yu.c.zhang@linux.intel.com, isaku.yamahata@intel.com, mic@digikod.net,
 vbabka@suse.cz, vannapurve@google.com, ackerleytng@google.com,
 mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com,
 wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com,
 kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com, steven.price@arm.com,
 quic_eberman@quicinc.com, quic_mnalajal@quicinc.com, quic_tsoni@quicinc.com,
 quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com,
 quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com,
 yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org, will@kernel.org,
 qperret@google.com, keirf@google.com, roypat@amazon.co.uk, shuah@kernel.org,
 hch@infradead.org, jgg@nvidia.com, rientjes@google.com, jhubbard@nvidia.com,
 fvdl@google.com, hughd@google.com, tabba@google.com
Add support for mmap() and fault() for guest_memfd in the host. The
ability to fault in a guest page is contingent on that page being
shared with the host.

To track this, add a new xarray to each guest_memfd object, which
tracks the mappability of guest frames.

The guest_memfd PRIVATE memory attribute is not used for two reasons.
First, it reflects the userspace expectation for that memory location,
and can therefore be toggled by userspace. Second, although each
guest_memfd file currently has a 1:1 binding with a KVM instance, the
plan is to allow multiple files per inode, e.g. to allow intra-host
migration to a new KVM instance without destroying the guest_memfd.

This new feature is gated by a new configuration option,
CONFIG_KVM_PRIVATE_MEM_MAPPABLE.
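To illustrate the intended userspace flow (an illustrative sketch only,
not part of this patch; the helper name and size are made up), a VMM
that has obtained a guest_memfd file descriptor via
KVM_CREATE_GUEST_MEMFD could map it with MAP_SHARED and touch a page
that the guest has shared with the host. Faulting in a page that is not
mappable is expected to raise SIGBUS:

  #include <stddef.h>
  #include <stdint.h>
  #include <sys/mman.h>

  /* Hypothetical helper: write to the first byte of a guest_memfd mapping. */
  static int touch_gmem_page(int gmem_fd, size_t size)
  {
          uint8_t *mem;

          /* The mapping must be shared; private mappings are rejected. */
          mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, gmem_fd, 0);
          if (mem == MAP_FAILED)
                  return -1;

          /* Faults in the page; delivers SIGBUS if the gfn is not mappable. */
          mem[0] = 1;

          munmap(mem, size);
          return 0;
  }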
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 include/linux/kvm_host.h |  61 ++++++++++++++++++++
 virt/kvm/Kconfig         |   4 ++
 virt/kvm/guest_memfd.c   | 110 +++++++++++++++++++++++++++++++++++
 virt/kvm/kvm_main.c      | 122 +++++++++++++++++++++++++++++++++++++++
 4 files changed, 297 insertions(+)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 43a157f8171a..ab1344327e57 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2452,4 +2452,65 @@ static inline int kvm_gmem_get_pfn_locked(struct kvm *kvm,
 }
 #endif /* CONFIG_KVM_PRIVATE_MEM */
 
+#ifdef CONFIG_KVM_PRIVATE_MEM_MAPPABLE
+bool kvm_gmem_is_mappable(struct kvm *kvm, gfn_t gfn, gfn_t end);
+bool kvm_gmem_is_mapped(struct kvm *kvm, gfn_t start, gfn_t end);
+int kvm_gmem_set_mappable(struct kvm *kvm, gfn_t start, gfn_t end);
+int kvm_gmem_clear_mappable(struct kvm *kvm, gfn_t start, gfn_t end);
+int kvm_slot_gmem_toggle_mappable(struct kvm_memory_slot *slot, gfn_t start,
+				  gfn_t end, bool is_mappable);
+int kvm_slot_gmem_set_mappable(struct kvm_memory_slot *slot, gfn_t start,
+			       gfn_t end);
+int kvm_slot_gmem_clear_mappable(struct kvm_memory_slot *slot, gfn_t start,
+				 gfn_t end);
+bool kvm_slot_gmem_is_mappable(struct kvm_memory_slot *slot, gfn_t gfn);
+#else
+static inline bool kvm_gmem_is_mappable(struct kvm *kvm, gfn_t gfn, gfn_t end)
+{
+	WARN_ON_ONCE(1);
+	return false;
+}
+static inline bool kvm_gmem_is_mapped(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+	WARN_ON_ONCE(1);
+	return false;
+}
+static inline int kvm_gmem_set_mappable(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+	WARN_ON_ONCE(1);
+	return -EINVAL;
+}
+static inline int kvm_gmem_clear_mappable(struct kvm *kvm, gfn_t start,
+					  gfn_t end)
+{
+	WARN_ON_ONCE(1);
+	return -EINVAL;
+}
+static inline int kvm_slot_gmem_toggle_mappable(struct kvm_memory_slot *slot,
+						gfn_t start, gfn_t end,
+						bool is_mappable)
+{
+	WARN_ON_ONCE(1);
+	return -EINVAL;
+}
+static inline int kvm_slot_gmem_set_mappable(struct kvm_memory_slot *slot,
+					     gfn_t start, gfn_t end)
+{
+	WARN_ON_ONCE(1);
+	return -EINVAL;
+}
+static inline int kvm_slot_gmem_clear_mappable(struct kvm_memory_slot *slot,
+					       gfn_t start, gfn_t end)
+{
+	WARN_ON_ONCE(1);
+	return -EINVAL;
+}
+static inline bool kvm_slot_gmem_is_mappable(struct kvm_memory_slot *slot,
+					     gfn_t gfn)
+{
+	WARN_ON_ONCE(1);
+	return false;
+}
+#endif /* CONFIG_KVM_PRIVATE_MEM_MAPPABLE */
+
 #endif
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index 29b73eedfe74..a3970c5eca7b 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -109,3 +109,7 @@ config KVM_GENERIC_PRIVATE_MEM
 	select KVM_GENERIC_MEMORY_ATTRIBUTES
 	select KVM_PRIVATE_MEM
 	bool
+
+config KVM_PRIVATE_MEM_MAPPABLE
+	select KVM_PRIVATE_MEM
+	bool
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index f3f4334a9ccb..0a1f266a16f9 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -11,6 +11,9 @@ struct kvm_gmem {
 	struct kvm *kvm;
 	struct xarray bindings;
 	struct list_head entry;
+#ifdef CONFIG_KVM_PRIVATE_MEM_MAPPABLE
+	struct xarray unmappable_gfns;
+#endif
 };
 
 static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index)
@@ -230,6 +233,11 @@ static int kvm_gmem_release(struct inode *inode, struct file *file)
 	mutex_unlock(&kvm->slots_lock);
 
 	xa_destroy(&gmem->bindings);
+
+#ifdef CONFIG_KVM_PRIVATE_MEM_MAPPABLE
+	xa_destroy(&gmem->unmappable_gfns);
+#endif
+
 	kfree(gmem);
 
 	kvm_put_kvm(kvm);
@@ -248,7 +256,105 @@ static inline struct file *kvm_gmem_get_file(struct kvm_memory_slot *slot)
 	return get_file_active(&slot->gmem.file);
 }
 
+#ifdef CONFIG_KVM_PRIVATE_MEM_MAPPABLE
+int kvm_slot_gmem_toggle_mappable(struct kvm_memory_slot *slot, gfn_t start,
+				  gfn_t end, bool is_mappable)
+{
+	struct kvm_gmem *gmem = slot->gmem.file->private_data;
+	void *xval = is_mappable ? NULL : xa_mk_value(true);
+	void *r;
+
+	r = xa_store_range(&gmem->unmappable_gfns, start, end - 1, xval, GFP_KERNEL);
+
+	return xa_err(r);
+}
+
+int kvm_slot_gmem_set_mappable(struct kvm_memory_slot *slot, gfn_t start, gfn_t end)
+{
+	return kvm_slot_gmem_toggle_mappable(slot, start, end, true);
+}
+
+int kvm_slot_gmem_clear_mappable(struct kvm_memory_slot *slot, gfn_t start, gfn_t end)
+{
+	return kvm_slot_gmem_toggle_mappable(slot, start, end, false);
+}
+
+bool kvm_slot_gmem_is_mappable(struct kvm_memory_slot *slot, gfn_t gfn)
+{
+	struct kvm_gmem *gmem = slot->gmem.file->private_data;
+	unsigned long _gfn = gfn;
+
+	return !xa_find(&gmem->unmappable_gfns, &_gfn, ULONG_MAX, XA_PRESENT);
+}
+
+static bool kvm_gmem_isfaultable(struct vm_fault *vmf)
+{
+	struct kvm_gmem *gmem = vmf->vma->vm_file->private_data;
+	struct inode *inode = file_inode(vmf->vma->vm_file);
+	pgoff_t pgoff = vmf->pgoff;
+	struct kvm_memory_slot *slot;
+	unsigned long index;
+	bool r = true;
+
+	filemap_invalidate_lock(inode->i_mapping);
+
+	xa_for_each_range(&gmem->bindings, index, slot, pgoff, pgoff) {
+		pgoff_t base_gfn = slot->base_gfn;
+		pgoff_t gfn_pgoff = slot->gmem.pgoff;
+		pgoff_t gfn = base_gfn + max(gfn_pgoff, pgoff) - gfn_pgoff;
+
+		if (!kvm_slot_gmem_is_mappable(slot, gfn)) {
+			r = false;
+			break;
+		}
+	}
+
+	filemap_invalidate_unlock(inode->i_mapping);
+
+	return r;
+}
+
+static vm_fault_t kvm_gmem_fault(struct vm_fault *vmf)
+{
+	struct folio *folio;
+
+	folio = kvm_gmem_get_folio(file_inode(vmf->vma->vm_file), vmf->pgoff);
+	if (!folio)
+		return VM_FAULT_SIGBUS;
+
+	if (!kvm_gmem_isfaultable(vmf)) {
+		folio_unlock(folio);
+		folio_put(folio);
+		return VM_FAULT_SIGBUS;
+	}
+
+	vmf->page = folio_file_page(folio, vmf->pgoff);
+	return VM_FAULT_LOCKED;
+}
+
+static const struct vm_operations_struct kvm_gmem_vm_ops = {
+	.fault = kvm_gmem_fault,
+};
+
+static int kvm_gmem_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) !=
+	    (VM_SHARED | VM_MAYSHARE)) {
+		return -EINVAL;
+	}
+
+	file_accessed(file);
+	vm_flags_set(vma, VM_DONTDUMP);
+	vma->vm_ops = &kvm_gmem_vm_ops;
+
+	return 0;
+}
+#else
+#define kvm_gmem_mmap NULL
+#endif /* CONFIG_KVM_PRIVATE_MEM_MAPPABLE */
+
 static struct file_operations kvm_gmem_fops = {
+	.mmap		= kvm_gmem_mmap,
 	.open		= generic_file_open,
 	.release	= kvm_gmem_release,
 	.fallocate	= kvm_gmem_fallocate,
@@ -369,6 +475,10 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
 	xa_init(&gmem->bindings);
 	list_add(&gmem->entry, &inode->i_mapping->i_private_list);
 
+#ifdef CONFIG_KVM_PRIVATE_MEM_MAPPABLE
+	xa_init(&gmem->unmappable_gfns);
+#endif
+
 	fd_install(fd, file);
 
 	return fd;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 1192942aef91..f4b4498d4de6 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -3265,6 +3265,128 @@ static int next_segment(unsigned long len, int offset)
 	return len;
 }
 
+#ifdef CONFIG_KVM_PRIVATE_MEM_MAPPABLE
+static bool __kvm_gmem_is_mappable(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+	struct kvm_memslot_iter iter;
+
+	lockdep_assert_held(&kvm->slots_lock);
+
+	kvm_for_each_memslot_in_gfn_range(&iter, kvm_memslots(kvm), start, end) {
+		struct kvm_memory_slot *memslot = iter.slot;
+		gfn_t gfn_start, gfn_end, i;
+
+		gfn_start = max(start, memslot->base_gfn);
+		gfn_end = min(end, memslot->base_gfn + memslot->npages);
+		if (WARN_ON_ONCE(gfn_start >= gfn_end))
+			continue;
+
+		for (i = gfn_start; i < gfn_end; i++) {
+			if (!kvm_slot_gmem_is_mappable(memslot, i))
+				return false;
+		}
+	}
+
+	return true;
+}
+
+bool kvm_gmem_is_mappable(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+	bool r;
+
+	mutex_lock(&kvm->slots_lock);
+	r = __kvm_gmem_is_mappable(kvm, start, end);
+	mutex_unlock(&kvm->slots_lock);
+
+	return r;
+}
+
+static bool __kvm_gmem_is_mapped(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+	struct kvm_memslot_iter iter;
+
+	lockdep_assert_held(&kvm->slots_lock);
+
+	kvm_for_each_memslot_in_gfn_range(&iter, kvm_memslots(kvm), start, end) {
+		struct kvm_memory_slot *memslot = iter.slot;
+		gfn_t gfn_start, gfn_end, i;
+
+		gfn_start = max(start, memslot->base_gfn);
+		gfn_end = min(end, memslot->base_gfn + memslot->npages);
+		if (WARN_ON_ONCE(gfn_start >= gfn_end))
+			continue;
+
+		for (i = gfn_start; i < gfn_end; i++) {
+			struct page *page;
+			bool is_mapped;
+			kvm_pfn_t pfn;
+
+			if (WARN_ON_ONCE(kvm_gmem_get_pfn_locked(kvm, memslot, i, &pfn, NULL)))
+				continue;
+
+			page = pfn_to_page(pfn);
+			is_mapped = page_mapped(page) || page_maybe_dma_pinned(page);
+			unlock_page(page);
+			put_page(page);
+
+			if (is_mapped)
+				return true;
+		}
+	}
+
+	return false;
+}
+
+bool kvm_gmem_is_mapped(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+	bool r;
+
+	mutex_lock(&kvm->slots_lock);
+	r = __kvm_gmem_is_mapped(kvm, start, end);
+	mutex_unlock(&kvm->slots_lock);
+
+	return r;
+}
+
+static int kvm_gmem_toggle_mappable(struct kvm *kvm, gfn_t start, gfn_t end,
+				    bool is_mappable)
+{
+	struct kvm_memslot_iter iter;
+	int r = 0;
+
+	mutex_lock(&kvm->slots_lock);
+
+	kvm_for_each_memslot_in_gfn_range(&iter, kvm_memslots(kvm), start, end) {
+		struct kvm_memory_slot *memslot = iter.slot;
+		gfn_t gfn_start, gfn_end;
+
+		gfn_start = max(start, memslot->base_gfn);
+		gfn_end = min(end, memslot->base_gfn + memslot->npages);
+		if (WARN_ON_ONCE(start >= end))
+			continue;
+
+		r = kvm_slot_gmem_toggle_mappable(memslot, gfn_start, gfn_end, is_mappable);
+		if (WARN_ON_ONCE(r))
+			break;
+	}
+
+	mutex_unlock(&kvm->slots_lock);
+
+	return r;
+}
+
+int kvm_gmem_set_mappable(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+	return kvm_gmem_toggle_mappable(kvm, start, end, true);
+}
+
+int kvm_gmem_clear_mappable(struct kvm *kvm, gfn_t start, gfn_t end)
+{
+	return kvm_gmem_toggle_mappable(kvm, start, end, false);
+}
+
+#endif /* CONFIG_KVM_PRIVATE_MEM_MAPPABLE */
+
 /* Copy @len bytes from guest memory at '(@gfn * PAGE_SIZE) + @offset' to @data */
 static int __kvm_read_guest_page(struct kvm_memory_slot *slot, gfn_t gfn,
 				 void *data, int offset, int len)
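
This patch adds the mappability API but no callers; later patches in the
series are expected to wire it up to architecture code. As a rough,
illustrative sketch only (the hook names below are hypothetical), an
architecture could mark a range mappable when the guest shares it with
the host, and refuse to make it unmappable again while the host still
has it mapped or pinned:

  #include <linux/kvm_host.h>

  /* Hypothetical arch-side consumers of the new mappability API. */
  static int handle_guest_share(struct kvm *kvm, gfn_t start, gfn_t end)
  {
  	/* Allow the host to mmap() and fault in these gfns via guest_memfd. */
  	return kvm_gmem_set_mappable(kvm, start, end);
  }

  static int handle_guest_unshare(struct kvm *kvm, gfn_t start, gfn_t end)
  {
  	/* Refuse to unshare while the host still maps or pins the pages. */
  	if (kvm_gmem_is_mapped(kvm, start, end))
  		return -EBUSY;

  	return kvm_gmem_clear_mappable(kvm, start, end);
  }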