From patchwork Mon Aug 5 18:34:47 2024
X-Patchwork-Submitter: Elliot Berman
X-Patchwork-Id: 13753955
From: Elliot Berman <quic_eberman@quicinc.com>
Date: Mon, 5 Aug 2024 11:34:47 -0700
Subject: [PATCH RFC 1/4] mm: Introduce guest_memfd
Message-ID: <20240805-guest-memfd-lib-v1-1-e5a29a4ff5d7@quicinc.com>
References: <20240805-guest-memfd-lib-v1-0-e5a29a4ff5d7@quicinc.com>
In-Reply-To: <20240805-guest-memfd-lib-v1-0-e5a29a4ff5d7@quicinc.com>
To: Andrew Morton, Paolo Bonzini, Sean Christopherson, Fuad Tabba,
    David Hildenbrand, Patrick Roy, Ackerley Tng
CC: Elliot Berman
In preparation for adding more features to KVM's guest_memfd, refactor
and introduce a library which abstracts some of the core-mm decisions
about managing folios associated with the file. The refactor serves two
purposes:

 1. Provide an easier way to reason about memory in guest_memfd. With
    KVM supporting multiple confidentiality models (TDX, SEV-SNP, pKVM,
    ARM CCA) and coming support for allowing kernel and userspace to
    access this memory, it seems necessary to create a stronger
    abstraction between core-mm concerns and hypervisor concerns.

 2. Provide a common implementation for other hypervisors (Gunyah) to
    use.

Signed-off-by: Elliot Berman
---
 include/linux/guest_memfd.h |  44 +++++++
 mm/Kconfig                  |   3 +
 mm/Makefile                 |   1 +
 mm/guest_memfd.c            | 285 ++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 333 insertions(+)

diff --git a/include/linux/guest_memfd.h b/include/linux/guest_memfd.h
new file mode 100644
index 000000000000..be56d9d53067
--- /dev/null
+++ b/include/linux/guest_memfd.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2024 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#ifndef _LINUX_GUEST_MEMFD_H
+#define _LINUX_GUEST_MEMFD_H
+
+#include
+
+/**
+ * struct guest_memfd_operations - ops provided by owner to manage folios
+ * @invalidate_begin: called when folios should be unmapped from guest.
+ *                    May fail if folios couldn't be unmapped from guest.
+ *                    Required.
+ * @invalidate_end: called after invalidate_begin returns success. Optional.
+ * @prepare: called before a folio is mapped into the guest address space.
+ *           Optional.
+ * @release: called when releasing the guest_memfd file. Required.
+ */
+struct guest_memfd_operations {
+        int (*invalidate_begin)(struct inode *inode, pgoff_t offset, unsigned long nr);
+        void (*invalidate_end)(struct inode *inode, pgoff_t offset, unsigned long nr);
+        int (*prepare)(struct inode *inode, pgoff_t offset, struct folio *folio);
+        int (*release)(struct inode *inode);
+};
+
+/**
+ * @GUEST_MEMFD_GRAB_UPTODATE: Ensure pages are zeroed/up to date.
+ *                             If trusted hyp will do it, can omit this flag.
+ * @GUEST_MEMFD_PREPARE: Call the ->prepare() op, if present.
+ */
+enum {
+        GUEST_MEMFD_GRAB_UPTODATE = BIT(0),
+        GUEST_MEMFD_PREPARE = BIT(1),
+};
+
+struct folio *guest_memfd_grab_folio(struct file *file, pgoff_t index, u32 flags);
+struct file *guest_memfd_alloc(const char *name,
+                               const struct guest_memfd_operations *ops,
+                               loff_t size, unsigned long flags);
+bool is_guest_memfd(struct file *file, const struct guest_memfd_operations *ops);
+
+#endif
diff --git a/mm/Kconfig b/mm/Kconfig
index b72e7d040f78..333f46525695 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1168,6 +1168,9 @@ config SECRETMEM
          memory areas visible only in the context of the owning process and
          not mapped to other processes and other kernel page tables.
+config GUEST_MEMFD
+        tristate
+
 config ANON_VMA_NAME
        bool "Anonymous VMA name support"
        depends on PROC_FS && ADVISE_SYSCALLS && MMU
diff --git a/mm/Makefile b/mm/Makefile
index d2915f8c9dc0..e15a95ebeac5 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -122,6 +122,7 @@ obj-$(CONFIG_PAGE_EXTENSION) += page_ext.o
 obj-$(CONFIG_PAGE_TABLE_CHECK) += page_table_check.o
 obj-$(CONFIG_CMA_DEBUGFS) += cma_debug.o
 obj-$(CONFIG_SECRETMEM) += secretmem.o
+obj-$(CONFIG_GUEST_MEMFD) += guest_memfd.o
 obj-$(CONFIG_CMA_SYSFS) += cma_sysfs.o
 obj-$(CONFIG_USERFAULTFD) += userfaultfd.o
 obj-$(CONFIG_IDLE_PAGE_TRACKING) += page_idle.o
diff --git a/mm/guest_memfd.c b/mm/guest_memfd.c
new file mode 100644
index 000000000000..580138b0f9d4
--- /dev/null
+++ b/mm/guest_memfd.c
@@ -0,0 +1,285 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2024 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#include
+#include
+#include
+#include
+
+struct folio *guest_memfd_grab_folio(struct file *file, pgoff_t index, u32 flags)
+{
+        struct inode *inode = file_inode(file);
+        struct guest_memfd_operations *ops = inode->i_private;
+        struct folio *folio;
+        int r;
+
+        /* TODO: Support huge pages. */
+        folio = filemap_grab_folio(inode->i_mapping, index);
+        if (IS_ERR(folio))
+                return folio;
+
+        /*
+         * Use the up-to-date flag to track whether or not the memory has been
+         * zeroed before being handed off to the guest. There is no backing
+         * storage for the memory, so the folio will remain up-to-date until
+         * it's removed.
+         */
+        if ((flags & GUEST_MEMFD_GRAB_UPTODATE) &&
+            !folio_test_uptodate(folio)) {
+                unsigned long nr_pages = folio_nr_pages(folio);
+                unsigned long i;
+
+                for (i = 0; i < nr_pages; i++)
+                        clear_highpage(folio_page(folio, i));
+
+                folio_mark_uptodate(folio);
+        }
+
+        if (flags & GUEST_MEMFD_PREPARE && ops->prepare) {
+                r = ops->prepare(inode, index, folio);
+                if (r < 0)
+                        goto out_err;
+        }
+
+        /*
+         * Ignore accessed, referenced, and dirty flags. The memory is
+         * unevictable and there is no storage to write back to.
+         */
+        return folio;
+out_err:
+        folio_unlock(folio);
+        folio_put(folio);
+        return ERR_PTR(r);
+}
+EXPORT_SYMBOL_GPL(guest_memfd_grab_folio);
+
+static long gmem_punch_hole(struct file *file, loff_t offset, loff_t len)
+{
+        struct inode *inode = file_inode(file);
+        const struct guest_memfd_operations *ops = inode->i_private;
+        pgoff_t start = offset >> PAGE_SHIFT;
+        unsigned long nr = len >> PAGE_SHIFT;
+        long ret;
+
+        /*
+         * Bindings must be stable across invalidation to ensure the start+end
+         * are balanced.
+         */
+        filemap_invalidate_lock(inode->i_mapping);
+
+        ret = ops->invalidate_begin(inode, start, nr);
+        if (ret)
+                goto out;
+
+        truncate_inode_pages_range(inode->i_mapping, offset, offset + len - 1);
+
+        if (ops->invalidate_end)
+                ops->invalidate_end(inode, start, nr);
+
+out:
+        filemap_invalidate_unlock(inode->i_mapping);
+
+        return ret;
+}
+
+static long gmem_allocate(struct file *file, loff_t offset, loff_t len)
+{
+        struct inode *inode = file_inode(file);
+        struct address_space *mapping = inode->i_mapping;
+        pgoff_t start, index, end;
+        int r;
+
+        /* Dedicated guest is immutable by default. */
+        if (offset + len > i_size_read(inode))
+                return -EINVAL;
+
+        filemap_invalidate_lock_shared(mapping);
+
+        start = offset >> PAGE_SHIFT;
+        end = (offset + len) >> PAGE_SHIFT;
+
+        r = 0;
+        for (index = start; index < end;) {
+                struct folio *folio;
+
+                if (signal_pending(current)) {
+                        r = -EINTR;
+                        break;
+                }
+
+                folio = guest_memfd_grab_folio(file, index,
+                                               GUEST_MEMFD_GRAB_UPTODATE |
+                                               GUEST_MEMFD_PREPARE);
+                if (IS_ERR(folio)) {
+                        r = PTR_ERR(folio);
+                        break;
+                }
+
+                index = folio_next_index(folio);
+
+                folio_unlock(folio);
+                folio_put(folio);
+
+                /* 64-bit only, wrapping the index should be impossible. */
+                if (WARN_ON_ONCE(!index))
+                        break;
+
+                cond_resched();
+        }
+
+        filemap_invalidate_unlock_shared(mapping);
+
+        return r;
+}
+
+static long gmem_fallocate(struct file *file, int mode, loff_t offset,
+                           loff_t len)
+{
+        int ret;
+
+        if (!(mode & FALLOC_FL_KEEP_SIZE))
+                return -EOPNOTSUPP;
+
+        if (mode & ~(FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE))
+                return -EOPNOTSUPP;
+
+        if (!PAGE_ALIGNED(offset) || !PAGE_ALIGNED(len))
+                return -EINVAL;
+
+        if (mode & FALLOC_FL_PUNCH_HOLE)
+                ret = gmem_punch_hole(file, offset, len);
+        else
+                ret = gmem_allocate(file, offset, len);
+
+        if (!ret)
+                file_modified(file);
+        return ret;
+}
+
+static int gmem_release(struct inode *inode, struct file *file)
+{
+        struct guest_memfd_operations *ops = inode->i_private;
+
+        return ops->release(inode);
+}
+
+static struct file_operations gmem_fops = {
+        .open = generic_file_open,
+        .llseek = generic_file_llseek,
+        .release = gmem_release,
+        .fallocate = gmem_fallocate,
+        .owner = THIS_MODULE,
+};
+
+static int gmem_migrate_folio(struct address_space *mapping, struct folio *dst,
+                              struct folio *src, enum migrate_mode mode)
+{
+        WARN_ON_ONCE(1);
+        return -EINVAL;
+}
+
+static int gmem_error_folio(struct address_space *mapping, struct folio *folio)
+{
+        struct inode *inode = mapping->host;
+        struct guest_memfd_operations *ops = inode->i_private;
+        off_t offset = folio->index;
+        size_t nr = folio_nr_pages(folio);
+        int ret;
+
+        filemap_invalidate_lock_shared(mapping);
+
+        ret = ops->invalidate_begin(inode, offset, nr);
+        if (!ret && ops->invalidate_end)
+                ops->invalidate_end(inode, offset, nr);
+
+        filemap_invalidate_unlock_shared(mapping);
+
+        return ret;
+}
+
+static bool gmem_release_folio(struct folio *folio, gfp_t gfp)
+{
+        struct inode *inode = folio_inode(folio);
+        struct guest_memfd_operations *ops = inode->i_private;
+        off_t offset = folio->index;
+        size_t nr = folio_nr_pages(folio);
+        int ret;
+
+        ret = ops->invalidate_begin(inode, offset, nr);
+        if (ret)
+                return false;
+        if (ops->invalidate_end)
+                ops->invalidate_end(inode, offset, nr);
+
+        return true;
+}
+
+static const struct address_space_operations gmem_aops = {
+        .dirty_folio = noop_dirty_folio,
+        .migrate_folio = gmem_migrate_folio,
+        .error_remove_folio = gmem_error_folio,
+        .release_folio = gmem_release_folio,
+};
+
+static inline bool guest_memfd_check_ops(const struct guest_memfd_operations *ops)
+{
+        return ops->invalidate_begin && ops->release;
+}
+
+struct file *guest_memfd_alloc(const char *name,
+                               const struct guest_memfd_operations *ops,
+                               loff_t size, unsigned long flags)
+{
+        struct inode *inode;
+        struct file *file;
+
+        if (size <= 0 || !PAGE_ALIGNED(size))
+                return ERR_PTR(-EINVAL);
+
+        if (!guest_memfd_check_ops(ops))
+                return ERR_PTR(-EINVAL);
+
+        if (flags)
+                return ERR_PTR(-EINVAL);
+
+        /*
+         * Use the so called "secure" variant, which creates a unique inode
+         * instead of reusing a single inode.
+         * Each guest_memfd instance needs its own inode to track the size,
+         * flags, etc.
+         */
+        file = anon_inode_create_getfile(name, &gmem_fops, (void *)flags,
+                                         O_RDWR, NULL);
+        if (IS_ERR(file))
+                return file;
+
+        file->f_flags |= O_LARGEFILE;
+
+        inode = file_inode(file);
+        WARN_ON(file->f_mapping != inode->i_mapping);
+
+        inode->i_private = (void *)ops; /* discards const qualifier */
+        inode->i_mapping->a_ops = &gmem_aops;
+        inode->i_mode |= S_IFREG;
+        inode->i_size = size;
+        mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
+        mapping_set_inaccessible(inode->i_mapping);
+        /* Unmovable mappings are supposed to be marked unevictable as well. */
+        WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));
+
+        return file;
+}
+EXPORT_SYMBOL_GPL(guest_memfd_alloc);
+
+bool is_guest_memfd(struct file *file, const struct guest_memfd_operations *ops)
+{
+        if (file->f_op != &gmem_fops)
+                return false;
+
+        struct inode *inode = file_inode(file);
+        struct guest_memfd_operations *gops = inode->i_private;
+
+        return ops == gops;
+}
+EXPORT_SYMBOL_GPL(is_guest_memfd);
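As a usage illustration (not part of the patch), here is a minimal sketch of
how a hypervisor backend might consume this library. All my_hyp_* names are
hypothetical; only ->invalidate_begin and ->release are required by
guest_memfd_check_ops().

```
/* Hypothetical backend; my_hyp_* names are illustrative only. */
#include <linux/guest_memfd.h>
#include <linux/pagemap.h>

static int my_hyp_invalidate_begin(struct inode *inode, pgoff_t offset,
                                   unsigned long nr)
{
        /* Unmap [offset, offset + nr) from the guest's stage-2 tables. */
        return 0;
}

static int my_hyp_release(struct inode *inode)
{
        /* Tear down hypervisor state tied to this guest_memfd. */
        return 0;
}

static const struct guest_memfd_operations my_hyp_gmem_ops = {
        .invalidate_begin = my_hyp_invalidate_begin,
        .release = my_hyp_release,
        /* ->invalidate_end and ->prepare are optional and omitted here. */
};

static struct file *my_hyp_create_gmem(loff_t size)
{
        /* Size must be non-zero and page-aligned; flags must be zero so far. */
        return guest_memfd_alloc("my_hyp_gmem", &my_hyp_gmem_ops, size, 0);
}

static int my_hyp_map_page(struct file *file, pgoff_t index)
{
        struct folio *folio;

        if (!is_guest_memfd(file, &my_hyp_gmem_ops))
                return -EINVAL;

        /* Zero the folio and run ->prepare() before handing it to the guest. */
        folio = guest_memfd_grab_folio(file, index,
                                       GUEST_MEMFD_GRAB_UPTODATE |
                                       GUEST_MEMFD_PREPARE);
        if (IS_ERR(folio))
                return PTR_ERR(folio);

        /* ... donate the folio to the guest here ... */

        folio_unlock(folio);
        folio_put(folio);
        return 0;
}
```

The folio comes back locked with an elevated refcount (filemap_grab_folio
semantics), so the caller drops both once the page has been handed off.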
From patchwork Mon Aug 5 18:34:48 2024
X-Patchwork-Submitter: Elliot Berman
X-Patchwork-Id: 13753953
From: Elliot Berman <quic_eberman@quicinc.com>
Date: Mon, 5 Aug 2024 11:34:48 -0700
Subject: [PATCH RFC 2/4] kvm: Convert to use mm/guest_memfd
Message-ID: <20240805-guest-memfd-lib-v1-2-e5a29a4ff5d7@quicinc.com>
References: <20240805-guest-memfd-lib-v1-0-e5a29a4ff5d7@quicinc.com>
In-Reply-To: <20240805-guest-memfd-lib-v1-0-e5a29a4ff5d7@quicinc.com>
To: Andrew Morton, Paolo Bonzini, Sean Christopherson, Fuad Tabba,
    David Hildenbrand, Patrick Roy, Ackerley Tng
CC: Elliot Berman
Use the recently created mm/guest_memfd implementation. No functional
change intended.

Signed-off-by: Elliot Berman
---
 virt/kvm/Kconfig       |   1 +
 virt/kvm/guest_memfd.c | 299 ++++++++----------------------------------------
 virt/kvm/kvm_main.c    |   2 -
 virt/kvm/kvm_mm.h      |   6 -
 4 files changed, 49 insertions(+), 259 deletions(-)

diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index b14e14cdbfb9..1147e004fcbd 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -106,6 +106,7 @@ config KVM_GENERIC_MEMORY_ATTRIBUTES
 
 config KVM_PRIVATE_MEM
        select XARRAY_MULTI
+       select GUEST_MEMFD
        bool
 
 config KVM_GENERIC_PRIVATE_MEM
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 1c509c351261..2fe3ff9e5793 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -1,9 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0
-#include
-#include
+#include
 #include
 #include
-#include
 
 #include "kvm_mm.h"
 
@@ -13,9 +11,16 @@ struct kvm_gmem {
        struct list_head entry;
 };
 
-static int kvm_gmem_prepare_folio(struct inode *inode, pgoff_t index, struct folio *folio)
+static inline struct kvm_gmem *inode_to_kvm_gmem(struct inode *inode)
 {
+       struct list_head *gmem_list = &inode->i_mapping->i_private_list;
+
+       return list_first_entry_or_null(gmem_list, struct kvm_gmem, entry);
+}
+
 #ifdef CONFIG_HAVE_KVM_GMEM_PREPARE
+static int kvm_gmem_prepare_folio(struct inode *inode, pgoff_t index, struct folio *folio)
+{
        struct list_head *gmem_list = &inode->i_mapping->i_private_list;
        struct kvm_gmem *gmem;
 
@@ -45,57 +50,14 @@ static int kvm_gmem_prepare_folio(struct inode *inode, pgoff_t index, struct fol
                }
        }
 
-#endif
        return 0;
 }
+#endif
 
-static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index, bool prepare)
-{
-       struct folio *folio;
-
-       /* TODO: Support huge pages. */
-       folio = filemap_grab_folio(inode->i_mapping, index);
-       if (IS_ERR(folio))
-               return folio;
-
-       /*
-        * Use the up-to-date flag to track whether or not the memory has been
-        * zeroed before being handed off to the guest. There is no backing
-        * storage for the memory, so the folio will remain up-to-date until
-        * it's removed.
-        *
-        * TODO: Skip clearing pages when trusted firmware will do it when
-        * assigning memory to the guest.
-        */
-       if (!folio_test_uptodate(folio)) {
-               unsigned long nr_pages = folio_nr_pages(folio);
-               unsigned long i;
-
-               for (i = 0; i < nr_pages; i++)
-                       clear_highpage(folio_page(folio, i));
-
-               folio_mark_uptodate(folio);
-       }
-
-       if (prepare) {
-               int r = kvm_gmem_prepare_folio(inode, index, folio);
-               if (r < 0) {
-                       folio_unlock(folio);
-                       folio_put(folio);
-                       return ERR_PTR(r);
-               }
-       }
-
-       /*
-        * Ignore accessed, referenced, and dirty flags. The memory is
-        * unevictable and there is no storage to write back to.
-        */
-       return folio;
-}
-
-static void kvm_gmem_invalidate_begin(struct kvm_gmem *gmem, pgoff_t start,
+static int kvm_gmem_invalidate_begin(struct inode *inode, pgoff_t start,
                                      pgoff_t end)
 {
+       struct kvm_gmem *gmem = inode_to_kvm_gmem(inode);
        bool flush = false, found_memslot = false;
        struct kvm_memory_slot *slot;
        struct kvm *kvm = gmem->kvm;
@@ -126,11 +88,14 @@ static void kvm_gmem_invalidate_begin(struct kvm_gmem *gmem, pgoff_t start,
 
        if (found_memslot)
                KVM_MMU_UNLOCK(kvm);
+
+       return 0;
 }
 
-static void kvm_gmem_invalidate_end(struct kvm_gmem *gmem, pgoff_t start,
+static void kvm_gmem_invalidate_end(struct inode *inode, pgoff_t start,
                                    pgoff_t end)
 {
+       struct kvm_gmem *gmem = inode_to_kvm_gmem(inode);
        struct kvm *kvm = gmem->kvm;
 
        if (xa_find(&gmem->bindings, &start, end - 1, XA_PRESENT)) {
@@ -140,106 +105,9 @@ static void kvm_gmem_invalidate_end(struct kvm_gmem *gmem, pgoff_t start,
        }
 }
 
-static long kvm_gmem_punch_hole(struct inode *inode, loff_t offset, loff_t len)
-{
-       struct list_head *gmem_list = &inode->i_mapping->i_private_list;
-       pgoff_t start = offset >> PAGE_SHIFT;
-       pgoff_t end = (offset + len) >> PAGE_SHIFT;
-       struct kvm_gmem *gmem;
-
-       /*
-        * Bindings must be stable across invalidation to ensure the start+end
-        * are balanced.
-        */
-       filemap_invalidate_lock(inode->i_mapping);
-
-       list_for_each_entry(gmem, gmem_list, entry)
-               kvm_gmem_invalidate_begin(gmem, start, end);
-
-       truncate_inode_pages_range(inode->i_mapping, offset, offset + len - 1);
-
-       list_for_each_entry(gmem, gmem_list, entry)
-               kvm_gmem_invalidate_end(gmem, start, end);
-
-       filemap_invalidate_unlock(inode->i_mapping);
-
-       return 0;
-}
-
-static long kvm_gmem_allocate(struct inode *inode, loff_t offset, loff_t len)
+static int kvm_gmem_release(struct inode *inode)
 {
-       struct address_space *mapping = inode->i_mapping;
-       pgoff_t start, index, end;
-       int r;
-
-       /* Dedicated guest is immutable by default. */
-       if (offset + len > i_size_read(inode))
-               return -EINVAL;
-
-       filemap_invalidate_lock_shared(mapping);
-
-       start = offset >> PAGE_SHIFT;
-       end = (offset + len) >> PAGE_SHIFT;
-
-       r = 0;
-       for (index = start; index < end; ) {
-               struct folio *folio;
-
-               if (signal_pending(current)) {
-                       r = -EINTR;
-                       break;
-               }
-
-               folio = kvm_gmem_get_folio(inode, index, true);
-               if (IS_ERR(folio)) {
-                       r = PTR_ERR(folio);
-                       break;
-               }
-
-               index = folio_next_index(folio);
-
-               folio_unlock(folio);
-               folio_put(folio);
-
-               /* 64-bit only, wrapping the index should be impossible. */
-               if (WARN_ON_ONCE(!index))
-                       break;
-
-               cond_resched();
-       }
-
-       filemap_invalidate_unlock_shared(mapping);
-
-       return r;
-}
-
-static long kvm_gmem_fallocate(struct file *file, int mode, loff_t offset,
-                              loff_t len)
-{
-       int ret;
-
-       if (!(mode & FALLOC_FL_KEEP_SIZE))
-               return -EOPNOTSUPP;
-
-       if (mode & ~(FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE))
-               return -EOPNOTSUPP;
-
-       if (!PAGE_ALIGNED(offset) || !PAGE_ALIGNED(len))
-               return -EINVAL;
-
-       if (mode & FALLOC_FL_PUNCH_HOLE)
-               ret = kvm_gmem_punch_hole(file_inode(file), offset, len);
-       else
-               ret = kvm_gmem_allocate(file_inode(file), offset, len);
-
-       if (!ret)
-               file_modified(file);
-       return ret;
-}
-
-static int kvm_gmem_release(struct inode *inode, struct file *file)
-{
-       struct kvm_gmem *gmem = file->private_data;
+       struct kvm_gmem *gmem = inode_to_kvm_gmem(inode);
        struct kvm_memory_slot *slot;
        struct kvm *kvm = gmem->kvm;
        unsigned long index;
@@ -265,8 +133,8 @@ static int kvm_gmem_release(struct inode *inode, struct file *file)
         * Zap all SPTEs pointed at by this file. Do not free the backing
         * memory, as its lifetime is associated with the inode, not the file.
         */
-       kvm_gmem_invalidate_begin(gmem, 0, -1ul);
-       kvm_gmem_invalidate_end(gmem, 0, -1ul);
+       kvm_gmem_invalidate_begin(inode, 0, -1ul);
+       kvm_gmem_invalidate_end(inode, 0, -1ul);
 
        list_del(&gmem->entry);
 
@@ -293,56 +161,6 @@ static inline struct file *kvm_gmem_get_file(struct kvm_memory_slot *slot)
        return get_file_active(&slot->gmem.file);
 }
 
-static struct file_operations kvm_gmem_fops = {
-       .open = generic_file_open,
-       .release = kvm_gmem_release,
-       .fallocate = kvm_gmem_fallocate,
-};
-
-void kvm_gmem_init(struct module *module)
-{
-       kvm_gmem_fops.owner = module;
-}
-
-static int kvm_gmem_migrate_folio(struct address_space *mapping,
-                                 struct folio *dst, struct folio *src,
-                                 enum migrate_mode mode)
-{
-       WARN_ON_ONCE(1);
-       return -EINVAL;
-}
-
-static int kvm_gmem_error_folio(struct address_space *mapping, struct folio *folio)
-{
-       struct list_head *gmem_list = &mapping->i_private_list;
-       struct kvm_gmem *gmem;
-       pgoff_t start, end;
-
-       filemap_invalidate_lock_shared(mapping);
-
-       start = folio->index;
-       end = start + folio_nr_pages(folio);
-
-       list_for_each_entry(gmem, gmem_list, entry)
-               kvm_gmem_invalidate_begin(gmem, start, end);
-
-       /*
-        * Do not truncate the range, what action is taken in response to the
-        * error is userspace's decision (assuming the architecture supports
-        * gracefully handling memory errors). If/when the guest attempts to
-        * access a poisoned page, kvm_gmem_get_pfn() will return -EHWPOISON,
-        * at which point KVM can either terminate the VM or propagate the
-        * error to userspace.
-        */
-
-       list_for_each_entry(gmem, gmem_list, entry)
-               kvm_gmem_invalidate_end(gmem, start, end);
-
-       filemap_invalidate_unlock_shared(mapping);
-
-       return MF_DELAYED;
-}
-
 #ifdef CONFIG_HAVE_KVM_GMEM_INVALIDATE
 static void kvm_gmem_free_folio(struct folio *folio)
 {
@@ -354,34 +172,25 @@
 }
 #endif
 
-static const struct address_space_operations kvm_gmem_aops = {
-       .dirty_folio = noop_dirty_folio,
-       .migrate_folio = kvm_gmem_migrate_folio,
-       .error_remove_folio = kvm_gmem_error_folio,
+static const struct guest_memfd_operations kvm_gmem_ops = {
+       .invalidate_begin = kvm_gmem_invalidate_begin,
+       .invalidate_end = kvm_gmem_invalidate_end,
+#ifdef CONFIG_HAVE_KVM_GMEM_PREPARE
+       .prepare = kvm_gmem_prepare_folio,
+#endif
 #ifdef CONFIG_HAVE_KVM_GMEM_INVALIDATE
        .free_folio = kvm_gmem_free_folio,
 #endif
+       .release = kvm_gmem_release,
 };
 
-static int kvm_gmem_getattr(struct mnt_idmap *idmap, const struct path *path,
-                           struct kstat *stat, u32 request_mask,
-                           unsigned int query_flags)
+static inline struct kvm_gmem *file_to_kvm_gmem(struct file *file)
 {
-       struct inode *inode = path->dentry->d_inode;
-
-       generic_fillattr(idmap, request_mask, inode, stat);
-       return 0;
-}
+       if (!is_guest_memfd(file, &kvm_gmem_ops))
+               return NULL;
 
-static int kvm_gmem_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
-                           struct iattr *attr)
-{
-       return -EINVAL;
+       return inode_to_kvm_gmem(file_inode(file));
 }
-static const struct inode_operations kvm_gmem_iops = {
-       .getattr = kvm_gmem_getattr,
-       .setattr = kvm_gmem_setattr,
-};
 
 static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
 {
@@ -401,31 +210,16 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
                goto err_fd;
        }
 
-       file = anon_inode_create_getfile(anon_name, &kvm_gmem_fops, gmem,
-                                        O_RDWR, NULL);
+       file = guest_memfd_alloc(anon_name, &kvm_gmem_ops, size, 0);
        if (IS_ERR(file)) {
                err = PTR_ERR(file);
                goto err_gmem;
        }
 
-       file->f_flags |= O_LARGEFILE;
-
-       inode = file->f_inode;
-       WARN_ON(file->f_mapping != inode->i_mapping);
-
-       inode->i_private = (void *)(unsigned long)flags;
-       inode->i_op = &kvm_gmem_iops;
-       inode->i_mapping->a_ops = &kvm_gmem_aops;
-       inode->i_mode |= S_IFREG;
-       inode->i_size = size;
-       mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
-       mapping_set_inaccessible(inode->i_mapping);
-       /* Unmovable mappings are supposed to be marked unevictable as well. */
-       WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));
-
        kvm_get_kvm(kvm);
        gmem->kvm = kvm;
        xa_init(&gmem->bindings);
+       inode = file_inode(file);
        list_add(&gmem->entry, &inode->i_mapping->i_private_list);
 
        fd_install(fd, file);
@@ -469,11 +263,8 @@ int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
        if (!file)
                return -EBADF;
 
-       if (file->f_op != &kvm_gmem_fops)
-               goto err;
-
-       gmem = file->private_data;
-       if (gmem->kvm != kvm)
+       gmem = file_to_kvm_gmem(file);
+       if (!gmem || gmem->kvm != kvm)
                goto err;
 
        inode = file_inode(file);
@@ -530,7 +321,9 @@ void kvm_gmem_unbind(struct kvm_memory_slot *slot)
        if (!file)
                return;
 
-       gmem = file->private_data;
+       gmem = file_to_kvm_gmem(file);
+       if (!gmem)
+               return;
 
        filemap_invalidate_lock(file->f_mapping);
        xa_store_range(&gmem->bindings, start, end - 1, NULL, GFP_KERNEL);
@@ -548,20 +341,24 @@ static int __kvm_gmem_get_pfn(struct file *file, struct kvm_memory_slot *slot,
        struct kvm_gmem *gmem = file->private_data;
        struct folio *folio;
        struct page *page;
-       int r;
+       int r, flags = GUEST_MEMFD_GRAB_UPTODATE;
 
        if (file != slot->gmem.file) {
                WARN_ON_ONCE(slot->gmem.file);
                return -EFAULT;
        }
 
-       gmem = file->private_data;
-       if (xa_load(&gmem->bindings, index) != slot) {
-               WARN_ON_ONCE(xa_load(&gmem->bindings, index));
+       gmem = file_to_kvm_gmem(file);
+       if (WARN_ON_ONCE(!gmem))
+               return -EINVAL;
+
+       if (WARN_ON_ONCE(xa_load(&gmem->bindings, index) != slot))
                return -EIO;
-       }
 
-       folio = kvm_gmem_get_folio(file_inode(file), index, prepare);
+       if (prepare)
+               flags |= GUEST_MEMFD_PREPARE;
+
+       folio = guest_memfd_grab_folio(file, index, flags);
        if (IS_ERR(folio))
                return PTR_ERR(folio);
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d0788d0a72cc..cad798f4e135 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -6516,8 +6516,6 @@ int kvm_init(unsigned vcpu_size, unsigned vcpu_align, struct module *module)
        if (WARN_ON_ONCE(r))
                goto err_vfio;
 
-       kvm_gmem_init(module);
-
        /*
         * Registration _must_ be the very last thing done, as this exposes
         * /dev/kvm to userspace, i.e. all infrastructure must be setup!
diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h
index 715f19669d01..6336a4fdcf50 100644
--- a/virt/kvm/kvm_mm.h
+++ b/virt/kvm/kvm_mm.h
@@ -36,17 +36,11 @@ static inline void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm,
 #endif /* HAVE_KVM_PFNCACHE */
 
 #ifdef CONFIG_KVM_PRIVATE_MEM
-void kvm_gmem_init(struct module *module);
 int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args);
 int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
                  unsigned int fd, loff_t offset);
 void kvm_gmem_unbind(struct kvm_memory_slot *slot);
 #else
-static inline void kvm_gmem_init(struct module *module)
-{
-
-}
-
 static inline int kvm_gmem_bind(struct kvm *kvm,
                                struct kvm_memory_slot *slot,
                                unsigned int fd, loff_t offset)
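For reference, a condensed sketch (not from the patch) of the creation flow
after this conversion, mirroring __kvm_gmem_create() above; fd installation,
the anon inode name, and error unwinding are trimmed, and the example_* name
is illustrative.

```
#include <linux/guest_memfd.h>
#include <linux/slab.h>

static struct file *example_kvm_gmem_create(struct kvm *kvm, loff_t size)
{
        struct kvm_gmem *gmem;
        struct file *file;

        gmem = kzalloc(sizeof(*gmem), GFP_KERNEL);
        if (!gmem)
                return ERR_PTR(-ENOMEM);

        /* The library now owns inode setup (a_ops, i_size, gfp mask, ...). */
        file = guest_memfd_alloc("kvm-gmem", &kvm_gmem_ops, size, 0);
        if (IS_ERR(file)) {
                kfree(gmem);
                return file;
        }

        kvm_get_kvm(kvm);
        gmem->kvm = kvm;
        xa_init(&gmem->bindings);
        /* invalidate/release callbacks recover gmem from this list. */
        list_add(&gmem->entry, &file_inode(file)->i_mapping->i_private_list);

        return file;
}
```

The design point is that KVM no longer touches the inode directly; it only
attaches its kvm_gmem state to the mapping's private list, which is where
inode_to_kvm_gmem() finds it from the library callbacks.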
From patchwork Mon Aug 5 18:34:49 2024
X-Patchwork-Submitter: Elliot Berman
X-Patchwork-Id: 13753956
From: Elliot Berman <quic_eberman@quicinc.com>
Date: Mon, 5 Aug 2024 11:34:49 -0700
Subject: [PATCH RFC 3/4] mm: guest_memfd: Add option to remove guest private
 memory from direct map
Message-ID: <20240805-guest-memfd-lib-v1-3-e5a29a4ff5d7@quicinc.com>
References: <20240805-guest-memfd-lib-v1-0-e5a29a4ff5d7@quicinc.com>
In-Reply-To: <20240805-guest-memfd-lib-v1-0-e5a29a4ff5d7@quicinc.com>
To: Andrew Morton, Paolo Bonzini, Sean Christopherson, Fuad Tabba,
    David Hildenbrand, Patrick Roy, Ackerley Tng
CC: Elliot Berman
This patch was reworked from Patrick's patch:
https://lore.kernel.org/all/20240709132041.3625501-6-roypat@amazon.co.uk/

While guest_memfd is not available to be mapped by userspace, it is
still accessible through the kernel's direct map. This means that in
scenarios where guest-private memory is not hardware protected, it can
be speculatively read and its contents potentially leaked through
hardware side-channels. Removing guest-private memory from the direct
map thus mitigates a large class of speculative execution issues
[1, Table 1].

Direct map removal does not reuse the `.prepare` machinery, since
`prepare` can be called multiple times, and it is the responsibility of
the preparation routine not to "prepare" the same folio twice [2].
Thus, instead, explicitly check if `filemap_grab_folio` allocated a new
folio, and remove the returned folio from the direct map only if this
was the case.

The patch uses release_folio instead of free_folio to reinsert pages
back into the direct map because, by the time free_folio is called,
folio->mapping can already be NULL. This means that a call to
folio_inode inside free_folio might dereference a NULL pointer, leaving
no way to access the inode which stores the flags that allow
determining whether the page was removed from the direct map in the
first place.
[1]: https://download.vusec.net/papers/quarantine_raid23.pdf

Cc: Patrick Roy
Signed-off-by: Elliot Berman
Signed-off-by: Patrick Roy
---
 include/linux/guest_memfd.h |  8 ++++++
 mm/guest_memfd.c            | 65 ++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 72 insertions(+), 1 deletion(-)

diff --git a/include/linux/guest_memfd.h b/include/linux/guest_memfd.h
index be56d9d53067..f9e4a27aed67 100644
--- a/include/linux/guest_memfd.h
+++ b/include/linux/guest_memfd.h
@@ -25,6 +25,14 @@ struct guest_memfd_operations {
        int (*release)(struct inode *inode);
 };
 
+/**
+ * @GUEST_MEMFD_FLAG_NO_DIRECT_MAP: When making folios inaccessible by host,
+ *                                  also remove them from the kernel's direct
+ *                                  map.
+ */
+enum {
+       GUEST_MEMFD_FLAG_NO_DIRECT_MAP = BIT(0),
+};
+
 /**
  * @GUEST_MEMFD_GRAB_UPTODATE: Ensure pages are zeroed/up to date.
  *                             If trusted hyp will do it, can omit this flag
diff --git a/mm/guest_memfd.c b/mm/guest_memfd.c
index 580138b0f9d4..e9d8cab72b28 100644
--- a/mm/guest_memfd.c
+++ b/mm/guest_memfd.c
@@ -7,9 +7,55 @@
 #include
 #include
 #include
+#include
+
+static inline int guest_memfd_folio_private(struct folio *folio)
+{
+       unsigned long nr_pages = folio_nr_pages(folio);
+       unsigned long i;
+       int r;
+
+       for (i = 0; i < nr_pages; i++) {
+               struct page *page = folio_page(folio, i);
+
+               r = set_direct_map_invalid_noflush(page);
+               if (r < 0)
+                       goto out_remap;
+       }
+
+       folio_set_private(folio);
+       return 0;
+out_remap:
+       for (; i > 0; i--) {
+               struct page *page = folio_page(folio, i - 1);
+
+               BUG_ON(set_direct_map_default_noflush(page));
+       }
+       return r;
+}
+
+static inline void guest_memfd_folio_clear_private(struct folio *folio)
+{
+       unsigned long start = (unsigned long)folio_address(folio);
+       unsigned long nr = folio_nr_pages(folio);
+       unsigned long i;
+
+       if (!folio_test_private(folio))
+               return;
+
+       for (i = 0; i < nr; i++) {
+               struct page *page = folio_page(folio, i);
+
+               BUG_ON(set_direct_map_default_noflush(page));
+       }
+       flush_tlb_kernel_range(start, start + folio_size(folio));
+
+       folio_clear_private(folio);
+}
 
 struct folio *guest_memfd_grab_folio(struct file *file, pgoff_t index, u32 flags)
 {
+       unsigned long gmem_flags = (unsigned long)file->private_data;
        struct inode *inode = file_inode(file);
        struct guest_memfd_operations *ops = inode->i_private;
        struct folio *folio;
@@ -43,6 +89,12 @@ struct folio *guest_memfd_grab_folio(struct file *file, pgoff_t index, u32 flags
                goto out_err;
        }
 
+       if (gmem_flags & GUEST_MEMFD_FLAG_NO_DIRECT_MAP) {
+               r = guest_memfd_folio_private(folio);
+               if (r)
+                       goto out_err;
+       }
+
        /*
         * Ignore accessed, referenced, and dirty flags. The memory is
         * unevictable and there is no storage to write back to.
@@ -213,14 +265,25 @@ static bool gmem_release_folio(struct folio *folio, gfp_t gfp)
        if (ops->invalidate_end)
                ops->invalidate_end(inode, offset, nr);
 
+       guest_memfd_folio_clear_private(folio);
+
        return true;
 }
 
+static void gmem_invalidate_folio(struct folio *folio, size_t offset, size_t len)
+{
+       /* not yet supported */
+       BUG_ON(offset || len != folio_size(folio));
+
+       BUG_ON(!gmem_release_folio(folio, 0));
+}
+
 static const struct address_space_operations gmem_aops = {
        .dirty_folio = noop_dirty_folio,
        .migrate_folio = gmem_migrate_folio,
        .error_remove_folio = gmem_error_folio,
        .release_folio = gmem_release_folio,
+       .invalidate_folio = gmem_invalidate_folio,
 };
 
 static inline bool guest_memfd_check_ops(const struct guest_memfd_operations *ops)
@@ -241,7 +304,7 @@ struct file *guest_memfd_alloc(const char *name,
        if (!guest_memfd_check_ops(ops))
                return ERR_PTR(-EINVAL);
 
-       if (flags)
+       if (flags & ~GUEST_MEMFD_FLAG_NO_DIRECT_MAP)
                return ERR_PTR(-EINVAL);
 
        /*
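To show how a backend would opt in to the new flag, a hypothetical sketch
reusing the my_hyp_* names from the patch-1 example; none of these helpers
exist in the patch itself.

```
static struct file *my_hyp_create_unmapped_gmem(loff_t size)
{
        /* Folios grabbed from this file never appear in the direct map. */
        return guest_memfd_alloc("my_hyp_gmem", &my_hyp_gmem_ops, size,
                                 GUEST_MEMFD_FLAG_NO_DIRECT_MAP);
}

static int my_hyp_grab_unmapped_page(struct file *file, pgoff_t index)
{
        struct folio *folio;

        folio = guest_memfd_grab_folio(file, index, GUEST_MEMFD_GRAB_UPTODATE);
        if (IS_ERR(folio))
                return PTR_ERR(folio);

        /*
         * With GUEST_MEMFD_FLAG_NO_DIRECT_MAP the folio comes back
         * PG_private: its pages were pulled out of the kernel direct map,
         * so the host can no longer read them through its linear mapping.
         */
        WARN_ON_ONCE(!folio_test_private(folio));

        folio_unlock(folio);
        folio_put(folio);
        return 0;
}
```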
From patchwork Mon Aug 5 18:34:50 2024
X-Patchwork-Submitter: Elliot Berman
X-Patchwork-Id: 13753954
From: Elliot Berman <quic_eberman@quicinc.com>
Date: Mon, 5 Aug 2024 11:34:50 -0700
Subject: [PATCH RFC 4/4] mm: guest_memfd: Add ability for mmap'ing pages
Message-ID: <20240805-guest-memfd-lib-v1-4-e5a29a4ff5d7@quicinc.com>
References: <20240805-guest-memfd-lib-v1-0-e5a29a4ff5d7@quicinc.com>
In-Reply-To: <20240805-guest-memfd-lib-v1-0-e5a29a4ff5d7@quicinc.com>
To: Andrew Morton, Paolo Bonzini, Sean Christopherson, Fuad Tabba,
    David Hildenbrand, Patrick Roy, Ackerley Tng
CC: Elliot Berman

Confidential/protected guest virtual machines want to share some memory
back with the host Linux. For example, virtqueues allow the host and a
protected guest to exchange data. In MMU-only isolation of protected
guest virtual machines, the transition between "shared" and "private"
can be done in place, without a trusted hypervisor copying pages.

Add support for this feature and allow Linux to mmap host-accessible
pages. When the owner provides an ->accessible() callback in the
struct guest_memfd_operations, guest_memfd allows folios to be mapped
when the ->accessible() callback returns 0.

To safely make a folio inaccessible:

```
folio = guest_memfd_grab_folio(file, index, flags);
r = guest_memfd_make_inaccessible(file, folio);
if (r)
	goto err;
hypervisor_does_guest_mapping(folio);
folio_unlock(folio);
```

hypervisor_does_guest_mapping(folio) should make it so that
ops->accessible(...) on those folios fails. The folio lock ensures
atomicity of the transition.
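For illustration, a hypothetical owner could back ->accessible() with its own tracking of which ranges the guest has shared with the host; struct my_priv, my_inode_priv() and my_range_shared() are assumed helpers, not part of this patch:

```
/*
 * Sketch: report a folio as host-accessible only while the guest has
 * shared the corresponding range back. A non-zero return makes the
 * fault handler deliver SIGBUS instead of mapping the page.
 */
static int my_accessible(struct inode *inode, struct folio *folio,
			 pgoff_t offset, unsigned long nr)
{
	struct my_priv *priv = my_inode_priv(inode);

	/* offset is relative to the folio; convert to a file pgoff. */
	if (my_range_shared(priv, folio->index + offset, nr))
		return 0;

	return -EPERM;
}
```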
Signed-off-by: Elliot Berman
---
 include/linux/guest_memfd.h |  7 ++++
 mm/guest_memfd.c            | 81 ++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 87 insertions(+), 1 deletion(-)

diff --git a/include/linux/guest_memfd.h b/include/linux/guest_memfd.h
index f9e4a27aed67..edcb4ba60cb0 100644
--- a/include/linux/guest_memfd.h
+++ b/include/linux/guest_memfd.h
@@ -16,12 +16,18 @@
  * @invalidate_end: called after invalidate_begin returns success. Optional.
  * @prepare: called before a folio is mapped into the guest address space.
  *           Optional.
+ * @accessible: called after prepare returns success and before the folio
+ *              is mapped into the host's address space. Returns 0 if the
+ *              folio can be accessed by the host. Optional. If not
+ *              present, folios are never host-accessible.
  * @release: Called when releasing the guest_memfd file. Required.
  */
 struct guest_memfd_operations {
 	int (*invalidate_begin)(struct inode *inode, pgoff_t offset, unsigned long nr);
 	void (*invalidate_end)(struct inode *inode, pgoff_t offset, unsigned long nr);
 	int (*prepare)(struct inode *inode, pgoff_t offset, struct folio *folio);
+	int (*accessible)(struct inode *inode, struct folio *folio,
+			  pgoff_t offset, unsigned long nr);
 	int (*release)(struct inode *inode);
 };
 
@@ -48,5 +54,6 @@ struct file *guest_memfd_alloc(const char *name,
 				const struct guest_memfd_operations *ops,
 				loff_t size, unsigned long flags);
 bool is_guest_memfd(struct file *file, const struct guest_memfd_operations *ops);
+int guest_memfd_make_inaccessible(struct file *file, struct folio *folio);
 
 #endif

diff --git a/mm/guest_memfd.c b/mm/guest_memfd.c
index e9d8cab72b28..6b5609932ca5 100644
--- a/mm/guest_memfd.c
+++ b/mm/guest_memfd.c
@@ -9,6 +9,8 @@
 #include
 #include
 
+#include "internal.h"
+
 static inline int guest_memfd_folio_private(struct folio *folio)
 {
 	unsigned long nr_pages = folio_nr_pages(folio);
@@ -89,7 +91,7 @@ struct folio *guest_memfd_grab_folio(struct file *file, pgoff_t index, u32 flags
 		goto out_err;
 	}
 
-	if (gmem_flags & GUEST_MEMFD_FLAG_NO_DIRECT_MAP) {
+	if (!ops->accessible && (gmem_flags & GUEST_MEMFD_FLAG_NO_DIRECT_MAP)) {
 		r = guest_memfd_folio_private(folio);
 		if (r)
 			goto out_err;
@@ -107,6 +109,82 @@ struct folio *guest_memfd_grab_folio(struct file *file, pgoff_t index, u32 flags
 }
 EXPORT_SYMBOL_GPL(guest_memfd_grab_folio);
 
+int guest_memfd_make_inaccessible(struct file *file, struct folio *folio)
+{
+	unsigned long gmem_flags = (unsigned long)file->private_data;
+	int r;
+
+	unmap_mapping_folio(folio);
+
+	/*
+	 * We can't rely on the refcount: it may be elevated because another
+	 * vCPU, or userspace acting on a vCPU's behalf, is accessing the
+	 * same folio concurrently.
+	 *
+	 * folio_lock serializes the transitions between (in)accessible.
+	 */
+	if (folio_maybe_dma_pinned(folio))
+		return -EBUSY;
+
+	if (gmem_flags & GUEST_MEMFD_FLAG_NO_DIRECT_MAP) {
+		r = guest_memfd_folio_private(folio);
+		if (r)
+			return r;
+	}
+
+	return 0;
+}
+
+static vm_fault_t gmem_fault(struct vm_fault *vmf)
+{
+	struct file *file = vmf->vma->vm_file;
+	struct inode *inode = file_inode(file);
+	const struct guest_memfd_operations *ops = inode->i_private;
+	struct folio *folio;
+	pgoff_t off;
+	int r;
+
+	folio = guest_memfd_grab_folio(file, vmf->pgoff, GUEST_MEMFD_GRAB_UPTODATE);
+	if (!folio)
+		return VM_FAULT_SIGBUS;
+
+	off = vmf->pgoff & (folio_nr_pages(folio) - 1);
+	r = ops->accessible(inode, folio, off, 1);
+	if (r) {
+		folio_unlock(folio);
+		folio_put(folio);
+		return VM_FAULT_SIGBUS;
+	}
+
+	guest_memfd_folio_clear_private(folio);
+
+	vmf->page = folio_page(folio, off);
+
+	return VM_FAULT_LOCKED;
+}
+
+static const struct vm_operations_struct gmem_vm_ops = {
+	.fault = gmem_fault,
+};
+
+static int gmem_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	const struct guest_memfd_operations *ops = file_inode(file)->i_private;
+
+	if (!ops->accessible)
+		return -EPERM;
+
+	/* No support for private mappings to avoid COW. */
+	if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) !=
+	    (VM_SHARED | VM_MAYSHARE))
+		return -EINVAL;
+
+	file_accessed(file);
+	vma->vm_ops = &gmem_vm_ops;
+	return 0;
+}
+
 static long gmem_punch_hole(struct file *file, loff_t offset, loff_t len)
 {
 	struct inode *inode = file_inode(file);
@@ -220,6 +298,7 @@ static int gmem_release(struct inode *inode, struct file *file)
 static struct file_operations gmem_fops = {
 	.open = generic_file_open,
 	.llseek = generic_file_llseek,
+	.mmap = gmem_mmap,
 	.release = gmem_release,
 	.fallocate = gmem_fallocate,
 	.owner = THIS_MODULE,
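For illustration, once an owner hands a guest_memfd file descriptor to host userspace (how the fd reaches userspace is owner-specific and not shown here), shared pages can be mapped as sketched below; per gmem_mmap() above, MAP_SHARED is required and private mappings fail with -EINVAL:

```
#include <stdio.h>
#include <sys/mman.h>

/* Sketch: map one page of a guest_memfd that is shared with the host. */
static void *map_shared_page(int gmem_fd, size_t page_size)
{
	/* MAP_SHARED is mandatory; MAP_PRIVATE (COW) returns -EINVAL. */
	void *p = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
		       MAP_SHARED, gmem_fd, 0);

	if (p == MAP_FAILED) {
		perror("mmap(guest_memfd)");
		return NULL;
	}

	/*
	 * Note: if ->accessible() rejects the folio, the failure shows up
	 * as SIGBUS on first access, not as an mmap() error.
	 */
	return p;
}
```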