From patchwork Mon Aug  5 18:34:47 2024
X-Patchwork-Submitter: Elliot Berman
X-Patchwork-Id: 13753955
From: Elliot Berman <quic_eberman@quicinc.com>
Date: Mon, 5 Aug 2024 11:34:47 -0700
Subject: [PATCH RFC 1/4] mm: Introduce guest_memfd
MIME-Version: 1.0
Message-ID: <20240805-guest-memfd-lib-v1-1-e5a29a4ff5d7@quicinc.com>
References: <20240805-guest-memfd-lib-v1-0-e5a29a4ff5d7@quicinc.com>
In-Reply-To: <20240805-guest-memfd-lib-v1-0-e5a29a4ff5d7@quicinc.com>
To: Andrew Morton, Paolo Bonzini, Sean Christopherson, Fuad Tabba,
    David Hildenbrand, Patrick Roy, Ackerley Tng
CC: Elliot Berman
X-Mailer: b4 0.13.0
In preparation for adding more features to KVM's guest_memfd, refactor
and introduce a library which abstracts some of the core-mm decisions
about managing folios associated with the file. The refactor serves two
purposes:

1. Provide an easier way to reason about memory in guest_memfd. With
   KVM supporting multiple confidentiality models (TDX, SEV-SNP, pKVM,
   ARM CCA) and upcoming support for allowing kernel and userspace to
   access this memory, it seems necessary to create a stronger
   abstraction between core-mm concerns and hypervisor concerns.

2. Provide a common implementation for other hypervisors (Gunyah) to
   use.

Signed-off-by: Elliot Berman <quic_eberman@quicinc.com>
---
 include/linux/guest_memfd.h |  44 +++++++
 mm/Kconfig                  |   3 +
 mm/Makefile                 |   1 +
 mm/guest_memfd.c            | 285 ++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 333 insertions(+)
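
For illustration only (not part of this patch): a minimal, hypothetical
sketch of how a hypervisor backend might wire up the ops and consume
this library. All my_*() names are placeholders invented for this
example; only ->invalidate_begin and ->release are required by the API
below.

    #include <linux/file.h>
    #include <linux/guest_memfd.h>
    #include <linux/pagemap.h>
    #include <linux/sizes.h>

    static int my_invalidate_begin(struct inode *inode, pgoff_t offset,
                                   unsigned long nr)
    {
            /* Unmap [offset, offset + nr) from the guest; may fail. */
            return 0;
    }

    static int my_release(struct inode *inode)
    {
            /* Reclaim anything still lent to the guest. */
            return 0;
    }

    static const struct guest_memfd_operations my_gmem_ops = {
            .invalidate_begin = my_invalidate_begin,
            .release = my_release,
    };

    static struct file *my_gmem_create(void)
    {
            struct folio *folio;
            struct file *file;

            /* Size must be page-aligned; no flags are accepted yet. */
            file = guest_memfd_alloc("my-gmem", &my_gmem_ops, SZ_2M, 0);
            if (IS_ERR(file))
                    return file;

            /* Grab the first folio, zeroed and prepared for the guest. */
            folio = guest_memfd_grab_folio(file, 0,
                                           GUEST_MEMFD_GRAB_UPTODATE |
                                           GUEST_MEMFD_PREPARE);
            if (IS_ERR(folio)) {
                    fput(file);
                    return ERR_CAST(folio);
            }

            /* The folio comes back locked with a reference held. */
            folio_unlock(folio);
            folio_put(folio);

            return file;
    }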
diff --git a/include/linux/guest_memfd.h b/include/linux/guest_memfd.h
new file mode 100644
index 000000000000..be56d9d53067
--- /dev/null
+++ b/include/linux/guest_memfd.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2024 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#ifndef _LINUX_GUEST_MEMFD_H
+#define _LINUX_GUEST_MEMFD_H
+
+#include <linux/fs.h>
+
+/**
+ * struct guest_memfd_operations - ops provided by owner to manage folios
+ * @invalidate_begin: called when folios should be unmapped from guest.
+ *                    May fail if folios couldn't be unmapped from guest.
+ *                    Required.
+ * @invalidate_end: called after invalidate_begin returns success. Optional.
+ * @prepare: called before a folio is mapped into the guest address space.
+ *           Optional.
+ * @release: called when releasing the guest_memfd file. Required.
+ */
+struct guest_memfd_operations {
+	int (*invalidate_begin)(struct inode *inode, pgoff_t offset, unsigned long nr);
+	void (*invalidate_end)(struct inode *inode, pgoff_t offset, unsigned long nr);
+	int (*prepare)(struct inode *inode, pgoff_t offset, struct folio *folio);
+	int (*release)(struct inode *inode);
+};
+
+/**
+ * @GUEST_MEMFD_GRAB_UPTODATE: Ensure pages are zeroed/up to date.
+ *                             If the trusted hypervisor will do it, this
+ *                             flag can be omitted.
+ * @GUEST_MEMFD_PREPARE: Call the ->prepare() op, if present.
+ */
+enum {
+	GUEST_MEMFD_GRAB_UPTODATE	= BIT(0),
+	GUEST_MEMFD_PREPARE		= BIT(1),
+};
+
+struct folio *guest_memfd_grab_folio(struct file *file, pgoff_t index, u32 flags);
+struct file *guest_memfd_alloc(const char *name,
+			       const struct guest_memfd_operations *ops,
+			       loff_t size, unsigned long flags);
+bool is_guest_memfd(struct file *file, const struct guest_memfd_operations *ops);
+
+#endif
diff --git a/mm/Kconfig b/mm/Kconfig
index b72e7d040f78..333f46525695 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1168,6 +1168,9 @@ config SECRETMEM
 	  memory areas visible only in the context of the owning process and
 	  not mapped to other processes and other kernel page tables.
 
+config GUEST_MEMFD
+	tristate
+
 config ANON_VMA_NAME
 	bool "Anonymous VMA name support"
 	depends on PROC_FS && ADVISE_SYSCALLS && MMU
diff --git a/mm/Makefile b/mm/Makefile
index d2915f8c9dc0..e15a95ebeac5 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -122,6 +122,7 @@ obj-$(CONFIG_PAGE_EXTENSION) += page_ext.o
 obj-$(CONFIG_PAGE_TABLE_CHECK) += page_table_check.o
 obj-$(CONFIG_CMA_DEBUGFS) += cma_debug.o
 obj-$(CONFIG_SECRETMEM) += secretmem.o
+obj-$(CONFIG_GUEST_MEMFD) += guest_memfd.o
 obj-$(CONFIG_CMA_SYSFS) += cma_sysfs.o
 obj-$(CONFIG_USERFAULTFD) += userfaultfd.o
 obj-$(CONFIG_IDLE_PAGE_TRACKING) += page_idle.o
diff --git a/mm/guest_memfd.c b/mm/guest_memfd.c
new file mode 100644
index 000000000000..580138b0f9d4
--- /dev/null
+++ b/mm/guest_memfd.c
@@ -0,0 +1,285 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (c) 2024 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+#include <linux/anon_inodes.h>
+#include <linux/falloc.h>
+#include <linux/guest_memfd.h>
+#include <linux/pagemap.h>
+
+struct folio *guest_memfd_grab_folio(struct file *file, pgoff_t index, u32 flags)
+{
+	struct inode *inode = file_inode(file);
+	struct guest_memfd_operations *ops = inode->i_private;
+	struct folio *folio;
+	int r;
+
+	/* TODO: Support huge pages. */
+	folio = filemap_grab_folio(inode->i_mapping, index);
+	if (IS_ERR(folio))
+		return folio;
+
+	/*
+	 * Use the up-to-date flag to track whether or not the memory has been
+	 * zeroed before being handed off to the guest. There is no backing
+	 * storage for the memory, so the folio will remain up-to-date until
+	 * it's removed.
+	 */
+	if ((flags & GUEST_MEMFD_GRAB_UPTODATE) &&
+	    !folio_test_uptodate(folio)) {
+		unsigned long nr_pages = folio_nr_pages(folio);
+		unsigned long i;
+
+		for (i = 0; i < nr_pages; i++)
+			clear_highpage(folio_page(folio, i));
+
+		folio_mark_uptodate(folio);
+	}
+
+	if ((flags & GUEST_MEMFD_PREPARE) && ops->prepare) {
+		r = ops->prepare(inode, index, folio);
+		if (r < 0)
+			goto out_err;
+	}
+
+	/*
+	 * Ignore accessed, referenced, and dirty flags. The memory is
+	 * unevictable and there is no storage to write back to.
+	 */
+	return folio;
+out_err:
+	folio_unlock(folio);
+	folio_put(folio);
+	return ERR_PTR(r);
+}
+EXPORT_SYMBOL_GPL(guest_memfd_grab_folio);
+
+static long gmem_punch_hole(struct file *file, loff_t offset, loff_t len)
+{
+	struct inode *inode = file_inode(file);
+	const struct guest_memfd_operations *ops = inode->i_private;
+	pgoff_t start = offset >> PAGE_SHIFT;
+	unsigned long nr = len >> PAGE_SHIFT;
+	long ret;
+
+	/*
+	 * Bindings must be stable across invalidation to ensure the start+end
+	 * are balanced.
+	 */
+	filemap_invalidate_lock(inode->i_mapping);
+
+	ret = ops->invalidate_begin(inode, start, nr);
+	if (ret)
+		goto out;
+
+	truncate_inode_pages_range(inode->i_mapping, offset, offset + len - 1);
+
+	if (ops->invalidate_end)
+		ops->invalidate_end(inode, start, nr);
+
+out:
+	filemap_invalidate_unlock(inode->i_mapping);
+
+	return ret;
+}
+
+static long gmem_allocate(struct file *file, loff_t offset, loff_t len)
+{
+	struct inode *inode = file_inode(file);
+	struct address_space *mapping = inode->i_mapping;
+	pgoff_t start, index, end;
+	int r;
+
+	/* Dedicated guest is immutable by default. */
+	if (offset + len > i_size_read(inode))
+		return -EINVAL;
+
+	filemap_invalidate_lock_shared(mapping);
+
+	start = offset >> PAGE_SHIFT;
+	end = (offset + len) >> PAGE_SHIFT;
+
+	r = 0;
+	for (index = start; index < end;) {
+		struct folio *folio;
+
+		if (signal_pending(current)) {
+			r = -EINTR;
+			break;
+		}
+
+		folio = guest_memfd_grab_folio(file, index,
+					       GUEST_MEMFD_GRAB_UPTODATE |
+					       GUEST_MEMFD_PREPARE);
+		if (IS_ERR(folio)) {
+			r = PTR_ERR(folio);
+			break;
+		}
+
+		index = folio_next_index(folio);
+
+		folio_unlock(folio);
+		folio_put(folio);
+
+		/* 64-bit only, wrapping the index should be impossible. */
+		if (WARN_ON_ONCE(!index))
+			break;
+
+		cond_resched();
+	}
+
+	filemap_invalidate_unlock_shared(mapping);
+
+	return r;
+}
+
+static long gmem_fallocate(struct file *file, int mode, loff_t offset,
+			   loff_t len)
+{
+	int ret;
+
+	if (!(mode & FALLOC_FL_KEEP_SIZE))
+		return -EOPNOTSUPP;
+
+	if (mode & ~(FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE))
+		return -EOPNOTSUPP;
+
+	if (!PAGE_ALIGNED(offset) || !PAGE_ALIGNED(len))
+		return -EINVAL;
+
+	if (mode & FALLOC_FL_PUNCH_HOLE)
+		ret = gmem_punch_hole(file, offset, len);
+	else
+		ret = gmem_allocate(file, offset, len);
+
+	if (!ret)
+		file_modified(file);
+
+	return ret;
+}
+
+static int gmem_release(struct inode *inode, struct file *file)
+{
+	struct guest_memfd_operations *ops = inode->i_private;
+
+	return ops->release(inode);
+}
+
+static struct file_operations gmem_fops = {
+	.open = generic_file_open,
+	.llseek = generic_file_llseek,
+	.release = gmem_release,
+	.fallocate = gmem_fallocate,
+	.owner = THIS_MODULE,
+};
+
+static int gmem_migrate_folio(struct address_space *mapping, struct folio *dst,
+			      struct folio *src, enum migrate_mode mode)
+{
+	WARN_ON_ONCE(1);
+	return -EINVAL;
+}
+
+static int gmem_error_folio(struct address_space *mapping, struct folio *folio)
+{
+	struct inode *inode = mapping->host;
+	struct guest_memfd_operations *ops = inode->i_private;
+	pgoff_t offset = folio->index;
+	size_t nr = folio_nr_pages(folio);
+	int ret;
+
+	filemap_invalidate_lock_shared(mapping);
+
+	ret = ops->invalidate_begin(inode, offset, nr);
+	if (!ret && ops->invalidate_end)
+		ops->invalidate_end(inode, offset, nr);
+
+	filemap_invalidate_unlock_shared(mapping);
+
+	return ret;
+}
+
+static bool gmem_release_folio(struct folio *folio, gfp_t gfp)
+{
+	struct inode *inode = folio_inode(folio);
+	struct guest_memfd_operations *ops = inode->i_private;
+	pgoff_t offset = folio->index;
+	size_t nr = folio_nr_pages(folio);
+	int ret;
+
+	ret = ops->invalidate_begin(inode, offset, nr);
+	if (ret)
+		return false;
+
+	if (ops->invalidate_end)
+		ops->invalidate_end(inode, offset, nr);
+
+	return true;
+}
+
+static const struct address_space_operations gmem_aops = {
+	.dirty_folio = noop_dirty_folio,
+	.migrate_folio = gmem_migrate_folio,
+	.error_remove_folio = gmem_error_folio,
+	.release_folio = gmem_release_folio,
+};
+
+static inline bool guest_memfd_check_ops(const struct guest_memfd_operations *ops)
+{
+	return ops->invalidate_begin && ops->release;
+}
+
+struct file *guest_memfd_alloc(const char *name,
+			       const struct guest_memfd_operations *ops,
+			       loff_t size, unsigned long flags)
+{
+	struct inode *inode;
+	struct file *file;
+
+	if (size <= 0 || !PAGE_ALIGNED(size))
+		return ERR_PTR(-EINVAL);
+
+	if (!guest_memfd_check_ops(ops))
+		return ERR_PTR(-EINVAL);
+
+	if (flags)
+		return ERR_PTR(-EINVAL);
+
+	/*
+	 * Use the so-called "secure" variant, which creates a unique inode
+	 * instead of reusing a single inode. Each guest_memfd instance needs
+	 * its own inode to track the size, flags, etc.
+	 */
+	file = anon_inode_create_getfile(name, &gmem_fops, (void *)flags,
+					 O_RDWR, NULL);
+	if (IS_ERR(file))
+		return file;
+
+	file->f_flags |= O_LARGEFILE;
+
+	inode = file_inode(file);
+	WARN_ON(file->f_mapping != inode->i_mapping);
+
+	inode->i_private = (void *)ops; /* discards const qualifier */
+	inode->i_mapping->a_ops = &gmem_aops;
+	inode->i_mode |= S_IFREG;
+	inode->i_size = size;
+	mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
+	mapping_set_inaccessible(inode->i_mapping);
+	/* Unmovable mappings are supposed to be marked unevictable as well. */
+	WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));
+
+	return file;
+}
+EXPORT_SYMBOL_GPL(guest_memfd_alloc);
+
+bool is_guest_memfd(struct file *file, const struct guest_memfd_operations *ops)
+{
+	if (file->f_op != &gmem_fops)
+		return false;
+
+	struct inode *inode = file_inode(file);
+	struct guest_memfd_operations *gops = inode->i_private;
+
+	return ops == gops;
+}
+EXPORT_SYMBOL_GPL(is_guest_memfd);
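
For completeness, a hypothetical userspace view of the fallocate()
interface implemented above, assuming the hypervisor driver has already
handed a guest_memfd fd to userspace (how it does so is driver-specific
and not shown here). Both preallocation and hole punching require
FALLOC_FL_KEEP_SIZE and page-aligned offset/len:

    #define _GNU_SOURCE
    #include <fcntl.h>

    /* gmem_fd: a guest_memfd file descriptor obtained from the driver. */
    static int gmem_prealloc_then_punch(int gmem_fd, off_t offset, off_t len)
    {
            /* Preallocate (zeroed, prepared) folios for [offset, offset + len). */
            if (fallocate(gmem_fd, FALLOC_FL_KEEP_SIZE, offset, len))
                    return -1;

            /* Punch them back out; ops->invalidate_begin() runs first. */
            return fallocate(gmem_fd,
                             FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                             offset, len);
    }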