From patchwork Mon Aug 5 18:34:50 2024
X-Patchwork-Submitter: Elliot Berman <quic_eberman@quicinc.com>
X-Patchwork-Id: 13753954
From: Elliot Berman <quic_eberman@quicinc.com>
Date: Mon, 5 Aug 2024 11:34:50 -0700
Subject: [PATCH RFC 4/4] mm: guest_memfd: Add ability for mmap'ing pages
Message-ID: <20240805-guest-memfd-lib-v1-4-e5a29a4ff5d7@quicinc.com>
References: <20240805-guest-memfd-lib-v1-0-e5a29a4ff5d7@quicinc.com>
In-Reply-To: <20240805-guest-memfd-lib-v1-0-e5a29a4ff5d7@quicinc.com>
To: Andrew Morton, Paolo Bonzini, Sean Christopherson, Fuad Tabba,
    David Hildenbrand, Patrick Roy, Ackerley Tng
CC: Elliot Berman <quic_eberman@quicinc.com>
X-Mailer: b4 0.13.0
Confidential/protected guest virtual machines want to share some memory
back with the host Linux. For example, virtqueues allow the host and a
protected guest to exchange data. With MMU-only isolation of protected
guest virtual machines, the transition between "shared" and "private"
can be done in place, without a trusted hypervisor copying pages.

Add support for this feature and allow Linux to mmap host-accessible
pages. When the owner provides an ->accessible() callback in the
struct guest_memfd_operations, guest_memfd allows folios to be mapped
when the ->accessible() callback returns 0.

To safely make a folio inaccessible:

```
folio = guest_memfd_grab_folio(file, index, flags);
r = guest_memfd_make_inaccessible(file, folio);
if (r)
	goto err;

hypervisor_does_s2_mapping(folio);

folio_unlock(folio);
```

hypervisor_does_s2_mapping(folio) should make it so that
ops->accessible(...) on those folios fails. The folio lock ensures the
transition is atomic.
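For illustration only, a minimal sketch of what an owner-side
->accessible() implementation could look like; my_guest_from_inode()
and my_folio_is_shared() are hypothetical stand-ins for whatever
share-state tracking the owner maintains, and are not part of this
series:

```
/*
 * Hypothetical example: report a folio as host-accessible only when the
 * owner's share-state tracking says the guest has shared it back.
 */
static int my_gmem_accessible(struct inode *inode, struct folio *folio,
			      pgoff_t offset, unsigned long nr)
{
	struct my_guest *guest = my_guest_from_inode(inode);
	unsigned long i;

	/* Every requested page must be in the "shared" state. */
	for (i = 0; i < nr; i++)
		if (!my_folio_is_shared(guest, folio, offset + i))
			return -EPERM;

	return 0;
}

static const struct guest_memfd_operations my_gmem_ops = {
	.accessible	= my_gmem_accessible,
	.release	= my_gmem_release,	/* required */
};
```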
Signed-off-by: Elliot Berman <quic_eberman@quicinc.com>
---
 include/linux/guest_memfd.h |  7 ++++
 mm/guest_memfd.c            | 81 ++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 87 insertions(+), 1 deletion(-)

diff --git a/include/linux/guest_memfd.h b/include/linux/guest_memfd.h
index f9e4a27aed67..edcb4ba60cb0 100644
--- a/include/linux/guest_memfd.h
+++ b/include/linux/guest_memfd.h
@@ -16,12 +16,18 @@
  * @invalidate_end: called after invalidate_begin returns success. Optional.
  * @prepare: called before a folio is mapped into the guest address space.
  *           Optional.
+ * @accessible: called after prepare returns success and before the folio is
+ *              mapped into host userspace. Returns 0 if the folio can be
+ *              accessed by the host.
+ *              Optional. If not present, folios are never host-accessible.
  * @release: Called when releasing the guest_memfd file. Required.
  */
 struct guest_memfd_operations {
 	int (*invalidate_begin)(struct inode *inode, pgoff_t offset, unsigned long nr);
 	void (*invalidate_end)(struct inode *inode, pgoff_t offset, unsigned long nr);
 	int (*prepare)(struct inode *inode, pgoff_t offset, struct folio *folio);
+	int (*accessible)(struct inode *inode, struct folio *folio,
+			  pgoff_t offset, unsigned long nr);
 	int (*release)(struct inode *inode);
 };
 
@@ -48,5 +54,6 @@ struct file *guest_memfd_alloc(const char *name,
 			       const struct guest_memfd_operations *ops,
 			       loff_t size, unsigned long flags);
 bool is_guest_memfd(struct file *file, const struct guest_memfd_operations *ops);
+int guest_memfd_make_inaccessible(struct file *file, struct folio *folio);
 
 #endif
diff --git a/mm/guest_memfd.c b/mm/guest_memfd.c
index e9d8cab72b28..6b5609932ca5 100644
--- a/mm/guest_memfd.c
+++ b/mm/guest_memfd.c
@@ -9,6 +9,8 @@
 #include
 #include
 
+#include "internal.h"
+
 static inline int guest_memfd_folio_private(struct folio *folio)
 {
 	unsigned long nr_pages = folio_nr_pages(folio);
@@ -89,7 +91,7 @@ struct folio *guest_memfd_grab_folio(struct file *file, pgoff_t index, u32 flags
 		goto out_err;
 	}
 
-	if (gmem_flags & GUEST_MEMFD_FLAG_NO_DIRECT_MAP) {
+	if (!ops->accessible && (gmem_flags & GUEST_MEMFD_FLAG_NO_DIRECT_MAP)) {
 		r = guest_memfd_folio_private(folio);
 		if (r)
 			goto out_err;
@@ -107,6 +109,82 @@ struct folio *guest_memfd_grab_folio(struct file *file, pgoff_t index, u32 flags
 }
 EXPORT_SYMBOL_GPL(guest_memfd_grab_folio);
 
+int guest_memfd_make_inaccessible(struct file *file, struct folio *folio)
+{
+	unsigned long gmem_flags = (unsigned long)file->private_data;
+	unsigned long i;
+	int r;
+
+	unmap_mapping_folio(folio);
+
+	/*
+	 * We can't rely on the refcount: it may be elevated because a
+	 * guest vCPU or userspace is concurrently trying to access the
+	 * same folio.
+	 *
+	 * The folio lock serializes transitions between (in)accessible.
+	 */
+	if (folio_maybe_dma_pinned(folio))
+		return -EBUSY;
+
+	if (gmem_flags & GUEST_MEMFD_FLAG_NO_DIRECT_MAP) {
+		r = guest_memfd_folio_private(folio);
+		if (r)
+			return r;
+	}
+
+	return 0;
+}
+
+static vm_fault_t gmem_fault(struct vm_fault *vmf)
+{
+	struct file *file = vmf->vma->vm_file;
+	struct inode *inode = file_inode(file);
+	const struct guest_memfd_operations *ops = inode->i_private;
+	struct folio *folio;
+	pgoff_t off;
+	int r;
+
+	folio = guest_memfd_grab_folio(file, vmf->pgoff, GUEST_MEMFD_GRAB_UPTODATE);
+	if (!folio)
+		return VM_FAULT_SIGBUS;
+
+	off = vmf->pgoff & (folio_nr_pages(folio) - 1);
+	r = ops->accessible(inode, folio, off, 1);
+	if (r) {
+		folio_unlock(folio);
+		folio_put(folio);
+		return VM_FAULT_SIGBUS;
+	}
+
+	guest_memfd_folio_clear_private(folio);
+
+	vmf->page = folio_page(folio, off);
+
+	return VM_FAULT_LOCKED;
+}
+
+static const struct vm_operations_struct gmem_vm_ops = {
+	.fault = gmem_fault,
+};
+
+static int gmem_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	const struct guest_memfd_operations *ops = file_inode(file)->i_private;
+
+	if (!ops->accessible)
+		return -EPERM;
+
+	/* No support for private mappings to avoid COW. */
+	if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) !=
+	    (VM_SHARED | VM_MAYSHARE))
+		return -EINVAL;
+
+	file_accessed(file);
+	vma->vm_ops = &gmem_vm_ops;
+	return 0;
+}
+
 static long gmem_punch_hole(struct file *file, loff_t offset, loff_t len)
 {
 	struct inode *inode = file_inode(file);
@@ -220,6 +298,7 @@ static int gmem_release(struct inode *inode, struct file *file)
 static struct file_operations gmem_fops = {
 	.open = generic_file_open,
 	.llseek = generic_file_llseek,
+	.mmap = gmem_mmap,
 	.release = gmem_release,
 	.fallocate = gmem_fallocate,
 	.owner = THIS_MODULE,
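For context, a hedged userspace-side sketch of the mapping rule that
gmem_mmap enforces; how the guest_memfd fd is obtained is owner-specific
and simply assumed here:

```
#include <stdio.h>
#include <sys/mman.h>

/*
 * Sketch: "fd" is assumed to be a guest_memfd whose owner implements
 * ->accessible(); obtaining it is owner-specific and not shown.
 */
static void *map_shared(int fd, size_t len)
{
	/*
	 * The mapping must be MAP_SHARED: gmem_mmap returns -EINVAL for
	 * private mappings (to avoid COW of guest-owned pages) and
	 * -EPERM when the owner provides no ->accessible() callback.
	 */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

	if (p == MAP_FAILED)
		perror("mmap guest_memfd");
	return p;
}
```

Faults on such a mapping go through gmem_fault(), which delivers SIGBUS
for any folio the owner does not report as accessible.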