From patchwork Mon Aug 5 18:34:49 2024
X-Patchwork-Submitter: Elliot Berman
X-Patchwork-Id: 13753956
From: Elliot Berman <quic_eberman@quicinc.com>
Date: Mon, 5 Aug 2024 11:34:49 -0700
Subject: [PATCH RFC 3/4] mm: guest_memfd: Add option to remove guest private memory from direct map
Message-ID: <20240805-guest-memfd-lib-v1-3-e5a29a4ff5d7@quicinc.com>
References: <20240805-guest-memfd-lib-v1-0-e5a29a4ff5d7@quicinc.com>
In-Reply-To: <20240805-guest-memfd-lib-v1-0-e5a29a4ff5d7@quicinc.com>
To: Andrew Morton, Paolo Bonzini, Sean Christopherson, Fuad Tabba, David Hildenbrand, Patrick Roy, Ackerley Tng

This patch was reworked from Patrick's patch:
https://lore.kernel.org/all/20240709132041.3625501-6-roypat@amazon.co.uk/

While guest_memfd is not available to
be mapped by userspace, it is still accessible through the kernel's direct map. This means that in scenarios where guest-private memory is not hardware-protected, it can be speculatively read and its contents potentially leaked through hardware side-channels. Removing guest-private memory from the direct map thus mitigates a large class of speculative execution issues [1, Table 1].

Direct map removal does not reuse the `.prepare` machinery, since `prepare` can be called multiple times, and it is the responsibility of the preparation routine not to "prepare" the same folio twice [2]. Instead, explicitly check whether `filemap_grab_folio` allocated a new folio, and remove the returned folio from the direct map only if it did.

The patch uses release_folio instead of free_folio to reinsert pages back into the direct map because, by the time free_folio is called, folio->mapping can already be NULL. A call to folio_inode inside free_folio might then dereference a NULL pointer, leaving no way to reach the inode that stores the flags which determine whether the page was removed from the direct map in the first place.

[1]: https://download.vusec.net/papers/quarantine_raid23.pdf

Cc: Patrick Roy
Signed-off-by: Elliot Berman
Signed-off-by: Patrick Roy
---
 include/linux/guest_memfd.h |  8 ++++++
 mm/guest_memfd.c            | 65 ++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 72 insertions(+), 1 deletion(-)

diff --git a/include/linux/guest_memfd.h b/include/linux/guest_memfd.h
index be56d9d53067..f9e4a27aed67 100644
--- a/include/linux/guest_memfd.h
+++ b/include/linux/guest_memfd.h
@@ -25,6 +25,14 @@ struct guest_memfd_operations {
 	int (*release)(struct inode *inode);
 };
 
+/**
+ * @GUEST_MEMFD_FLAG_NO_DIRECT_MAP: When making folios inaccessible by host, also
+ * remove them from the kernel's direct map.
+ */
+enum {
+	GUEST_MEMFD_FLAG_NO_DIRECT_MAP = BIT(0),
+};
+
 /**
  * @GUEST_MEMFD_GRAB_UPTODATE: Ensure pages are zeroed/up to date.
  * If trusted hyp will do it, can ommit this flag
diff --git a/mm/guest_memfd.c b/mm/guest_memfd.c
index 580138b0f9d4..e9d8cab72b28 100644
--- a/mm/guest_memfd.c
+++ b/mm/guest_memfd.c
@@ -7,9 +7,55 @@
 #include
 #include
 #include
+#include
+
+static inline int guest_memfd_folio_private(struct folio *folio)
+{
+	unsigned long nr_pages = folio_nr_pages(folio);
+	unsigned long i;
+	int r;
+
+	for (i = 0; i < nr_pages; i++) {
+		struct page *page = folio_page(folio, i);
+
+		r = set_direct_map_invalid_noflush(page);
+		if (r < 0)
+			goto out_remap;
+	}
+
+	folio_set_private(folio);
+	return 0;
+out_remap:
+	for (; i > 0; i--) {
+		struct page *page = folio_page(folio, i - 1);
+
+		BUG_ON(set_direct_map_default_noflush(page));
+	}
+	return r;
+}
+
+static inline void guest_memfd_folio_clear_private(struct folio *folio)
+{
+	unsigned long start = (unsigned long)folio_address(folio);
+	unsigned long nr = folio_nr_pages(folio);
+	unsigned long i;
+
+	if (!folio_test_private(folio))
+		return;
+
+	for (i = 0; i < nr; i++) {
+		struct page *page = folio_page(folio, i);
+
+		BUG_ON(set_direct_map_default_noflush(page));
+	}
+	flush_tlb_kernel_range(start, start + folio_size(folio));
+
+	folio_clear_private(folio);
+}
 
 struct folio *guest_memfd_grab_folio(struct file *file, pgoff_t index, u32 flags)
 {
+	unsigned long gmem_flags = (unsigned long)file->private_data;
 	struct inode *inode = file_inode(file);
 	struct guest_memfd_operations *ops = inode->i_private;
 	struct folio *folio;
@@ -43,6 +89,12 @@ struct folio *guest_memfd_grab_folio(struct file *file, pgoff_t index, u32 flags
 		goto out_err;
 	}
 
+	if (gmem_flags & GUEST_MEMFD_FLAG_NO_DIRECT_MAP) {
+		r = guest_memfd_folio_private(folio);
+		if (r)
+			goto out_err;
+	}
+
 	/*
 	 * Ignore accessed, referenced, and dirty flags. The memory is
 	 * unevictable and there is no storage to write back to.
@@ -213,14 +265,25 @@ static bool gmem_release_folio(struct folio *folio, gfp_t gfp)
 	if (ops->invalidate_end)
 		ops->invalidate_end(inode, offset, nr);
 
+	guest_memfd_folio_clear_private(folio);
+
 	return true;
 }
 
+static void gmem_invalidate_folio(struct folio *folio, size_t offset, size_t len)
+{
+	/* not yet supported */
+	BUG_ON(offset || len != folio_size(folio));
+
+	BUG_ON(!gmem_release_folio(folio, 0));
+}
+
 static const struct address_space_operations gmem_aops = {
 	.dirty_folio = noop_dirty_folio,
 	.migrate_folio = gmem_migrate_folio,
 	.error_remove_folio = gmem_error_folio,
 	.release_folio = gmem_release_folio,
+	.invalidate_folio = gmem_invalidate_folio,
 };
 
 static inline bool guest_memfd_check_ops(const struct guest_memfd_operations *ops)
@@ -241,7 +304,7 @@ struct file *guest_memfd_alloc(const char *name,
 	if (!guest_memfd_check_ops(ops))
 		return ERR_PTR(-EINVAL);
 
-	if (flags)
+	if (flags & ~GUEST_MEMFD_FLAG_NO_DIRECT_MAP)
 		return ERR_PTR(-EINVAL);
 
 	/*