From patchwork Fri Mar 28 15:31:27 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 14032180
Date: Fri, 28 Mar 2025 15:31:27 +0000
In-Reply-To: <20250328153133.3504118-1-tabba@google.com>
References: <20250328153133.3504118-1-tabba@google.com>
Message-ID: <20250328153133.3504118-2-tabba@google.com>
Subject: [PATCH v7 1/7] KVM: guest_memfd: Make guest mem use guest mem inodes instead of anonymous inodes
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org

From: Ackerley Tng

Using guest mem inodes allows us to store metadata for the backing memory on
the inode. Metadata will be added in a later patch to support HugeTLB pages.

Metadata about backing memory should not be stored on the file, since the file
represents a guest_memfd's binding with a struct kvm, and metadata about
backing memory is not unique to a specific binding and struct kvm.
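The split described above is the core design point: per-binding state stays on
the file, while properties of the backing memory belong on the inode, which
later patches can share between several files. A minimal sketch of that split
(illustrative only, not part of the patch; the struct and helper names below
are hypothetical, while struct kvm_gmem, file->private_data, file_inode() and
i_mapping->i_private_data come from the patch series or existing kernel code):

/*
 * Illustrative sketch, not part of this patch: example_gmem_inode_meta is a
 * hypothetical stand-in for the backing-memory metadata (e.g. HugeTLB state)
 * that later patches attach to the inode.
 */
struct example_gmem_inode_meta {
	void *backing_meta;	/* hypothetical per-backing-memory data */
};

/* Per-binding state: unique to one guest_memfd file and its struct kvm. */
static struct kvm_gmem *example_gmem_of_file(struct file *file)
{
	return file->private_data;
}

/* Backing-memory metadata: shared by all files opened against this inode. */
static struct example_gmem_inode_meta *example_meta_of_file(struct file *file)
{
	return file_inode(file)->i_mapping->i_private_data;
}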
Signed-off-by: Fuad Tabba Signed-off-by: Ackerley Tng --- include/uapi/linux/magic.h | 1 + virt/kvm/guest_memfd.c | 130 +++++++++++++++++++++++++++++++------ 2 files changed, 111 insertions(+), 20 deletions(-) diff --git a/include/uapi/linux/magic.h b/include/uapi/linux/magic.h index bb575f3ab45e..169dba2a6920 100644 --- a/include/uapi/linux/magic.h +++ b/include/uapi/linux/magic.h @@ -103,5 +103,6 @@ #define DEVMEM_MAGIC 0x454d444d /* "DMEM" */ #define SECRETMEM_MAGIC 0x5345434d /* "SECM" */ #define PID_FS_MAGIC 0x50494446 /* "PIDF" */ +#define GUEST_MEMORY_MAGIC 0x474d454d /* "GMEM" */ #endif /* __LINUX_MAGIC_H__ */ diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c index fbf89e643add..844e70c82558 100644 --- a/virt/kvm/guest_memfd.c +++ b/virt/kvm/guest_memfd.c @@ -1,12 +1,16 @@ // SPDX-License-Identifier: GPL-2.0 +#include #include #include #include +#include #include #include #include "kvm_mm.h" +static struct vfsmount *kvm_gmem_mnt; + struct kvm_gmem { struct kvm *kvm; struct xarray bindings; @@ -320,6 +324,38 @@ static pgoff_t kvm_gmem_get_index(struct kvm_memory_slot *slot, gfn_t gfn) return gfn - slot->base_gfn + slot->gmem.pgoff; } +static const struct super_operations kvm_gmem_super_operations = { + .statfs = simple_statfs, +}; + +static int kvm_gmem_init_fs_context(struct fs_context *fc) +{ + struct pseudo_fs_context *ctx; + + if (!init_pseudo(fc, GUEST_MEMORY_MAGIC)) + return -ENOMEM; + + ctx = fc->fs_private; + ctx->ops = &kvm_gmem_super_operations; + + return 0; +} + +static struct file_system_type kvm_gmem_fs = { + .name = "kvm_guest_memory", + .init_fs_context = kvm_gmem_init_fs_context, + .kill_sb = kill_anon_super, +}; + +static void kvm_gmem_init_mount(void) +{ + kvm_gmem_mnt = kern_mount(&kvm_gmem_fs); + BUG_ON(IS_ERR(kvm_gmem_mnt)); + + /* For giggles. Userspace can never map this anyways. */ + kvm_gmem_mnt->mnt_flags |= MNT_NOEXEC; +} + #ifdef CONFIG_KVM_GMEM_SHARED_MEM static bool kvm_gmem_offset_is_shared(struct file *file, pgoff_t index) { @@ -430,6 +466,8 @@ static struct file_operations kvm_gmem_fops = { void kvm_gmem_init(struct module *module) { kvm_gmem_fops.owner = module; + + kvm_gmem_init_mount(); } static int kvm_gmem_migrate_folio(struct address_space *mapping, @@ -511,11 +549,79 @@ static const struct inode_operations kvm_gmem_iops = { .setattr = kvm_gmem_setattr, }; +static struct inode *kvm_gmem_inode_make_secure_inode(const char *name, + loff_t size, u64 flags) +{ + const struct qstr qname = QSTR_INIT(name, strlen(name)); + struct inode *inode; + int err; + + inode = alloc_anon_inode(kvm_gmem_mnt->mnt_sb); + if (IS_ERR(inode)) + return inode; + + err = security_inode_init_security_anon(inode, &qname, NULL); + if (err) { + iput(inode); + return ERR_PTR(err); + } + + inode->i_private = (void *)(unsigned long)flags; + inode->i_op = &kvm_gmem_iops; + inode->i_mapping->a_ops = &kvm_gmem_aops; + inode->i_mode |= S_IFREG; + inode->i_size = size; + mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER); + mapping_set_inaccessible(inode->i_mapping); + /* Unmovable mappings are supposed to be marked unevictable as well. 
*/ + WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping)); + + return inode; +} + +static struct file *kvm_gmem_inode_create_getfile(void *priv, loff_t size, + u64 flags) +{ + static const char *name = "[kvm-gmem]"; + struct inode *inode; + struct file *file; + int err; + + err = -ENOENT; + if (!try_module_get(kvm_gmem_fops.owner)) + goto err; + + inode = kvm_gmem_inode_make_secure_inode(name, size, flags); + if (IS_ERR(inode)) { + err = PTR_ERR(inode); + goto err_put_module; + } + + file = alloc_file_pseudo(inode, kvm_gmem_mnt, name, O_RDWR, + &kvm_gmem_fops); + if (IS_ERR(file)) { + err = PTR_ERR(file); + goto err_put_inode; + } + + file->f_flags |= O_LARGEFILE; + file->private_data = priv; + +out: + return file; + +err_put_inode: + iput(inode); +err_put_module: + module_put(kvm_gmem_fops.owner); +err: + file = ERR_PTR(err); + goto out; +} + static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags) { - const char *anon_name = "[kvm-gmem]"; struct kvm_gmem *gmem; - struct inode *inode; struct file *file; int fd, err; @@ -529,32 +635,16 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags) goto err_fd; } - file = anon_inode_create_getfile(anon_name, &kvm_gmem_fops, gmem, - O_RDWR, NULL); + file = kvm_gmem_inode_create_getfile(gmem, size, flags); if (IS_ERR(file)) { err = PTR_ERR(file); goto err_gmem; } - file->f_flags |= O_LARGEFILE; - - inode = file->f_inode; - WARN_ON(file->f_mapping != inode->i_mapping); - - inode->i_private = (void *)(unsigned long)flags; - inode->i_op = &kvm_gmem_iops; - inode->i_mapping->a_ops = &kvm_gmem_aops; - inode->i_mode |= S_IFREG; - inode->i_size = size; - mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER); - mapping_set_inaccessible(inode->i_mapping); - /* Unmovable mappings are supposed to be marked unevictable as well. 
*/ - WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping)); - kvm_get_kvm(kvm); gmem->kvm = kvm; xa_init(&gmem->bindings); - list_add(&gmem->entry, &inode->i_mapping->i_private_list); + list_add(&gmem->entry, &file_inode(file)->i_mapping->i_private_list); fd_install(fd, file); return fd;

From patchwork Fri Mar 28 15:31:28 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 14032181
Date: Fri, 28 Mar 2025 15:31:28 +0000
In-Reply-To: <20250328153133.3504118-1-tabba@google.com>
References: <20250328153133.3504118-1-tabba@google.com>
Message-ID: <20250328153133.3504118-3-tabba@google.com>
Subject: [PATCH v7 2/7] KVM: guest_memfd: Introduce kvm_gmem_get_pfn_locked(), which retains the folio lock
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org

Create a new variant of kvm_gmem_get_pfn(), which retains the folio lock if it
returns successfully. This is needed in subsequent patches to protect against
races when checking whether a folio can be shared with the host.
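Why keeping the folio locked matters is easiest to see from a caller's
perspective. The following is an illustrative sketch only, not part of the
patch: it assumes the kvm_gmem_slot_is_guest_shared() helper added later in
this series, and the -EAGAIN policy is made up for the example.

static int example_use_pfn_locked(struct kvm *kvm, struct kvm_memory_slot *slot,
				  gfn_t gfn)
{
	struct page *page;
	kvm_pfn_t pfn;
	int max_order;
	int r;

	r = kvm_gmem_get_pfn_locked(kvm, slot, gfn, &pfn, &page, &max_order);
	if (r)
		return r;

	/*
	 * The folio backing @page is still locked here, which is what the
	 * series relies on to avoid races while checking whether the folio
	 * may be shared with the host.
	 */
	if (!kvm_gmem_slot_is_guest_shared(slot, gfn))
		r = -EAGAIN;	/* made-up policy, for illustration only */

	/* ... act on pfn while the check above is still meaningful ... */

	unlock_page(page);
	put_page(page);
	return r;
}

The plain kvm_gmem_get_pfn() wrapper added by this patch is this pattern minus
the check: it calls the locked variant and unlocks the page straight away.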
Signed-off-by: Fuad Tabba --- include/linux/kvm_host.h | 11 +++++++++++ virt/kvm/guest_memfd.c | 27 ++++++++++++++++++++------- 2 files changed, 31 insertions(+), 7 deletions(-) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index ec3bedc18eab..bc73d7426363 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -2535,6 +2535,9 @@ static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn) int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot, gfn_t gfn, kvm_pfn_t *pfn, struct page **page, int *max_order); +int kvm_gmem_get_pfn_locked(struct kvm *kvm, struct kvm_memory_slot *slot, + gfn_t gfn, kvm_pfn_t *pfn, struct page **page, + int *max_order); #else static inline int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot, gfn_t gfn, @@ -2544,6 +2547,14 @@ static inline int kvm_gmem_get_pfn(struct kvm *kvm, KVM_BUG_ON(1, kvm); return -EIO; } +static inline int kvm_gmem_get_pfn_locked(struct kvm *kvm, + struct kvm_memory_slot *slot, + gfn_t gfn, kvm_pfn_t *pfn, + struct page **page, int *max_order) +{ + KVM_BUG_ON(1, kvm); + return -EIO; +} #endif /* CONFIG_KVM_PRIVATE_MEM */ #ifdef CONFIG_HAVE_KVM_ARCH_GMEM_PREPARE diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c index 844e70c82558..ac6b8853699d 100644 --- a/virt/kvm/guest_memfd.c +++ b/virt/kvm/guest_memfd.c @@ -802,9 +802,9 @@ static struct folio *__kvm_gmem_get_pfn(struct file *file, return folio; } -int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot, - gfn_t gfn, kvm_pfn_t *pfn, struct page **page, - int *max_order) +int kvm_gmem_get_pfn_locked(struct kvm *kvm, struct kvm_memory_slot *slot, + gfn_t gfn, kvm_pfn_t *pfn, struct page **page, + int *max_order) { pgoff_t index = kvm_gmem_get_index(slot, gfn); struct file *file = kvm_gmem_get_file(slot); @@ -824,17 +824,30 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot, if (!is_prepared) r = kvm_gmem_prepare_folio(kvm, slot, gfn, folio); - folio_unlock(folio); - - if (!r) + if (!r) { *page = folio_file_page(folio, index); - else + } else { + folio_unlock(folio); folio_put(folio); + } out: fput(file); return r; } +EXPORT_SYMBOL_GPL(kvm_gmem_get_pfn_locked); + +int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot, + gfn_t gfn, kvm_pfn_t *pfn, struct page **page, + int *max_order) +{ + int r = kvm_gmem_get_pfn_locked(kvm, slot, gfn, pfn, page, max_order); + + if (!r) + unlock_page(*page); + + return r; +} EXPORT_SYMBOL_GPL(kvm_gmem_get_pfn); #ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM From patchwork Fri Mar 28 15:31:29 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Fuad Tabba X-Patchwork-Id: 14032182 Received: from mail-wm1-f74.google.com (mail-wm1-f74.google.com [209.85.128.74]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 5811A1D61B7 for ; Fri, 28 Mar 2025 15:31:42 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.74 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1743175904; cv=none; b=KOq+oDVyqFirKwlx+f+ig7bY7UuQcfPpcqHqxGikUTVhrs5R/J7kcCO3nVm40honUkO8RnsB6RyraMv+/+GcWTwruFIr5QpbqzzwNc5BZ9kqfRMZ7/RwpXB0Ol5KMkDOwuqsgG/H39zBE3e7Ka0B7vvtmRGWcrD1FHZNARqS380= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1743175904; c=relaxed/simple; bh=3g9AH9oSvUwCNVwYdH5PtJPKPAklKGH5WHxqN6AhJFo=; 
Date: Fri, 28 Mar 2025 15:31:29 +0000
In-Reply-To: <20250328153133.3504118-1-tabba@google.com>
References: <20250328153133.3504118-1-tabba@google.com>
Message-ID: <20250328153133.3504118-4-tabba@google.com>
Subject: [PATCH v7 3/7] KVM: guest_memfd: Track folio sharing within a struct kvm_gmem_private
From: Fuad Tabba
To: kvm@vger.kernel.org,
linux-arm-msm@vger.kernel.org, linux-mm@kvack.org Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au, anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk, brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org, xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com, isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz, vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com, steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com, quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com, quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com, yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org, will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com, rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com, hughd@google.com, jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com, tabba@google.com From: Ackerley Tng Track guest_memfd folio sharing state within the inode, since it is a property of the guest_memfd's memory contents. The guest_memfd PRIVATE memory attribute is not used for two reasons. It reflects the userspace expectation for the memory state, and therefore can be toggled by userspace. Also, although each guest_memfd file has a 1:1 binding with a KVM instance, the plan is to allow multiple files per inode, e.g. to allow intra-host migration to a new KVM instance, without destroying guest_memfd. Signed-off-by: Ackerley Tng Co-developed-by: Vishal Annapurve Signed-off-by: Vishal Annapurve Co-developed-by: Fuad Tabba Signed-off-by: Fuad Tabba --- virt/kvm/guest_memfd.c | 58 ++++++++++++++++++++++++++++++++++++++---- 1 file changed, 53 insertions(+), 5 deletions(-) diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c index ac6b8853699d..cde16ed3b230 100644 --- a/virt/kvm/guest_memfd.c +++ b/virt/kvm/guest_memfd.c @@ -17,6 +17,18 @@ struct kvm_gmem { struct list_head entry; }; +struct kvm_gmem_inode_private { +#ifdef CONFIG_KVM_GMEM_SHARED_MEM + struct xarray shared_offsets; + rwlock_t offsets_lock; +#endif +}; + +static struct kvm_gmem_inode_private *kvm_gmem_private(struct inode *inode) +{ + return inode->i_mapping->i_private_data; +} + #ifdef CONFIG_KVM_GMEM_SHARED_MEM void kvm_gmem_handle_folio_put(struct folio *folio) { @@ -324,8 +336,28 @@ static pgoff_t kvm_gmem_get_index(struct kvm_memory_slot *slot, gfn_t gfn) return gfn - slot->base_gfn + slot->gmem.pgoff; } +static void kvm_gmem_evict_inode(struct inode *inode) +{ + struct kvm_gmem_inode_private *private = kvm_gmem_private(inode); + +#ifdef CONFIG_KVM_GMEM_SHARED_MEM + /* + * .evict_inode can be called before private data is set up if there are + * issues during inode creation. 
+ */ + if (private) + xa_destroy(&private->shared_offsets); +#endif + + truncate_inode_pages_final(inode->i_mapping); + + kfree(private); + clear_inode(inode); +} + static const struct super_operations kvm_gmem_super_operations = { - .statfs = simple_statfs, + .statfs = simple_statfs, + .evict_inode = kvm_gmem_evict_inode, }; static int kvm_gmem_init_fs_context(struct fs_context *fc) @@ -553,6 +585,7 @@ static struct inode *kvm_gmem_inode_make_secure_inode(const char *name, loff_t size, u64 flags) { const struct qstr qname = QSTR_INIT(name, strlen(name)); + struct kvm_gmem_inode_private *private; struct inode *inode; int err; @@ -561,10 +594,20 @@ static struct inode *kvm_gmem_inode_make_secure_inode(const char *name, return inode; err = security_inode_init_security_anon(inode, &qname, NULL); - if (err) { - iput(inode); - return ERR_PTR(err); - } + if (err) + goto out; + + err = -ENOMEM; + private = kzalloc(sizeof(*private), GFP_KERNEL); + if (!private) + goto out; + +#ifdef CONFIG_KVM_GMEM_SHARED_MEM + xa_init(&private->shared_offsets); + rwlock_init(&private->offsets_lock); +#endif + + inode->i_mapping->i_private_data = private; inode->i_private = (void *)(unsigned long)flags; inode->i_op = &kvm_gmem_iops; @@ -577,6 +620,11 @@ static struct inode *kvm_gmem_inode_make_secure_inode(const char *name, WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping)); return inode; + +out: + iput(inode); + + return ERR_PTR(err); } static struct file *kvm_gmem_inode_create_getfile(void *priv, loff_t size, From patchwork Fri Mar 28 15:31:30 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Fuad Tabba X-Patchwork-Id: 14032183 Received: from mail-wm1-f73.google.com (mail-wm1-f73.google.com [209.85.128.73]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 23BB01531E3 for ; Fri, 28 Mar 2025 15:31:43 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.73 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1743175906; cv=none; b=KnG/qmBGZtbVmtpSJTRn+SakTyx+hl3Ii7f5m2iEJq1J1T4IezQNpTTssDVFjrYyvfgbv2O33LgV4+rDzC7/8bGITxGjDXcL5xUSnnXJmOgeLj7y36H+yI5auwEr07ZRXcOmoOyDxbY3V11UPHJqZw63etKlqY1BTaK7DAoUdeo= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1743175906; c=relaxed/simple; bh=U0ho/tm3JZHvTmCYzNDcZ1cG6BRZ9f/32jVstYt0n00=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=ErcbCtNHSQo9zCTft/dYghEXhdlmLb6ADj0hmH93Yt67uIHS24YHrx7uZUWMc6VAkw26EuZfFxfJs8hZnaXyVFvxwQfF0AwPpx0hEpPALGYFfKkTwNSr2p2LpDxNvRG134zd0EDG8WuaNOOEJTFhDm4Yd1+yHpXsaCIcI037qU8= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--tabba.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=d++gIkdL; arc=none smtp.client-ip=209.85.128.73 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--tabba.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="d++gIkdL" Received: by mail-wm1-f73.google.com with SMTP id 5b1f17b1804b1-4394c489babso11330145e9.1 for ; Fri, 28 Mar 2025 08:31:43 -0700 (PDT) 
Date: Fri, 28 Mar 2025 15:31:30 +0000
In-Reply-To: <20250328153133.3504118-1-tabba@google.com>
References: <20250328153133.3504118-1-tabba@google.com>
Message-ID: <20250328153133.3504118-5-tabba@google.com>
Subject: [PATCH v7 4/7] KVM: guest_memfd: Folio sharing states and functions that manage their transition
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org

To allow in-place sharing of guest_memfd folios with the host, guest_memfd
needs to track their sharing state, because mapping of shared folios will only
be allowed where it is safe to access these folios. It is safe to map and
access these folios when explicitly shared with the host, or potentially if
not yet exposed to the guest (e.g., at initialization).

This patch introduces sharing states for guest_memfd folios as well as the
functions that manage transitioning between those states.

Signed-off-by: Fuad Tabba --- include/linux/kvm_host.h | 39 +++++++- virt/kvm/guest_memfd.c | 208 ++++++++++++++++++++++++++++++++++++--- virt/kvm/kvm_main.c | 62 ++++++++++++ 3 files changed, 295 insertions(+), 14 deletions(-)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index bc73d7426363..bf82faf16c53 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -2600,7 +2600,44 @@ long kvm_arch_vcpu_pre_fault_memory(struct kvm_vcpu *vcpu, #endif #ifdef CONFIG_KVM_GMEM_SHARED_MEM +int kvm_gmem_set_shared(struct kvm *kvm, gfn_t start, gfn_t end); +int kvm_gmem_clear_shared(struct kvm *kvm, gfn_t start, gfn_t end); +int kvm_gmem_slot_set_shared(struct kvm_memory_slot *slot, gfn_t start, + gfn_t end); +int kvm_gmem_slot_clear_shared(struct kvm_memory_slot *slot, gfn_t start, + gfn_t end); +bool kvm_gmem_slot_is_guest_shared(struct kvm_memory_slot *slot, gfn_t gfn); void kvm_gmem_handle_folio_put(struct folio *folio); -#endif +#else +static inline int kvm_gmem_set_shared(struct kvm *kvm, gfn_t start, gfn_t end) +{ + WARN_ON_ONCE(1); + return -EINVAL; +} +static inline int kvm_gmem_clear_shared(struct kvm *kvm, gfn_t start, + gfn_t end) +{ + WARN_ON_ONCE(1); + return -EINVAL; +} +static inline int kvm_gmem_slot_set_shared(struct kvm_memory_slot *slot, + gfn_t start, gfn_t end) +{ + WARN_ON_ONCE(1); + return -EINVAL; +} +static inline int kvm_gmem_slot_clear_shared(struct kvm_memory_slot *slot, + gfn_t start, gfn_t end) +{ + WARN_ON_ONCE(1); + return -EINVAL; +} +static inline bool kvm_gmem_slot_is_guest_shared(struct kvm_memory_slot *slot, + gfn_t gfn) +{ + WARN_ON_ONCE(1); + return false; +} +#endif /* CONFIG_KVM_GMEM_SHARED_MEM */ #endif
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c index cde16ed3b230..3b4d724084a8 100644 --- a/virt/kvm/guest_memfd.c +++ b/virt/kvm/guest_memfd.c @@ -29,14 +29,6 @@ static struct kvm_gmem_inode_private *kvm_gmem_private(struct inode *inode) return inode->i_mapping->i_private_data; } -#ifdef CONFIG_KVM_GMEM_SHARED_MEM -void kvm_gmem_handle_folio_put(struct folio *folio) -{ - WARN_ONCE(1, "A placeholder that shouldn't trigger. Work in progress."); -} -EXPORT_SYMBOL_GPL(kvm_gmem_handle_folio_put); -#endif /* CONFIG_KVM_GMEM_SHARED_MEM */ - /** * folio_file_pfn - like folio_file_page, but return a pfn. * @folio: The folio which contains this index.
@@ -389,22 +381,211 @@ static void kvm_gmem_init_mount(void) } #ifdef CONFIG_KVM_GMEM_SHARED_MEM -static bool kvm_gmem_offset_is_shared(struct file *file, pgoff_t index) +/* + * An enum of the valid folio sharing states: + * Bit 0: set if not shared with the guest (guest cannot fault it in) + * Bit 1: set if not shared with the host (host cannot fault it in) + */ +enum folio_shareability { + KVM_GMEM_ALL_SHARED = 0b00, /* Shared with the host and the guest. */ + KVM_GMEM_GUEST_SHARED = 0b10, /* Shared only with the guest. */ + KVM_GMEM_NONE_SHARED = 0b11, /* Not shared, transient state. */ +}; + +static int kvm_gmem_offset_set_shared(struct inode *inode, pgoff_t index) { - struct kvm_gmem *gmem = file->private_data; + struct xarray *shared_offsets = &kvm_gmem_private(inode)->shared_offsets; + rwlock_t *offsets_lock = &kvm_gmem_private(inode)->offsets_lock; + void *xval = xa_mk_value(KVM_GMEM_ALL_SHARED); + + lockdep_assert_held_write(offsets_lock); + + return xa_err(xa_store(shared_offsets, index, xval, GFP_KERNEL)); +} + +/* + * Marks the range [start, end) as shared with both the host and the guest. + * Called when guest shares memory with the host. + */ +static int kvm_gmem_offset_range_set_shared(struct inode *inode, + pgoff_t start, pgoff_t end) +{ + rwlock_t *offsets_lock = &kvm_gmem_private(inode)->offsets_lock; + pgoff_t i; + int r = 0; + + write_lock(offsets_lock); + for (i = start; i < end; i++) { + r = kvm_gmem_offset_set_shared(inode, i); + if (WARN_ON_ONCE(r)) + break; + } + write_unlock(offsets_lock); + + return r; +} + +static int kvm_gmem_offset_clear_shared(struct inode *inode, pgoff_t index) +{ + struct xarray *shared_offsets = &kvm_gmem_private(inode)->shared_offsets; + rwlock_t *offsets_lock = &kvm_gmem_private(inode)->offsets_lock; + void *xval_guest = xa_mk_value(KVM_GMEM_GUEST_SHARED); + void *xval_none = xa_mk_value(KVM_GMEM_NONE_SHARED); + struct folio *folio; + int refcount; + int r; + + lockdep_assert_held_write(offsets_lock); + + folio = filemap_lock_folio(inode->i_mapping, index); + if (!IS_ERR(folio)) { + /* +1 references are expected because of filemap_lock_folio(). */ + refcount = folio_nr_pages(folio) + 1; + } else { + r = PTR_ERR(folio); + if (WARN_ON_ONCE(r != -ENOENT)) + return r; + + folio = NULL; + } + + if (!folio || folio_ref_freeze(folio, refcount)) { + /* + * No outstanding references: transition to guest shared. + */ + r = xa_err(xa_store(shared_offsets, index, xval_guest, GFP_KERNEL)); + + if (folio) + folio_ref_unfreeze(folio, refcount); + } else { + /* + * Outstanding references: the folio cannot be faulted in by + * anyone until they're dropped. + */ + r = xa_err(xa_store(shared_offsets, index, xval_none, GFP_KERNEL)); + } + + if (folio) { + folio_unlock(folio); + folio_put(folio); + } + + return r; +} +/* + * Marks the range [start, end) as not shared with the host. If the host doesn't + * have any references to a particular folio, then that folio is marked as + * shared with the guest. + * + * However, if the host still has references to the folio, then the folio is + * marked and not shared with anyone. Marking it as not shared allows draining + * all references from the host, and ensures that the hypervisor does not + * transition the folio to private, since the host still might access it. + * + * Called when guest unshares memory with the host. 
+ */ +static int kvm_gmem_offset_range_clear_shared(struct inode *inode, + pgoff_t start, pgoff_t end) +{ + rwlock_t *offsets_lock = &kvm_gmem_private(inode)->offsets_lock; + pgoff_t i; + int r = 0; + + write_lock(offsets_lock); + for (i = start; i < end; i++) { + r = kvm_gmem_offset_clear_shared(inode, i); + if (WARN_ON_ONCE(r)) + break; + } + write_unlock(offsets_lock); + + return r; +} + +void kvm_gmem_handle_folio_put(struct folio *folio) +{ + WARN_ONCE(1, "A placeholder that shouldn't trigger. Work in progress."); +} +EXPORT_SYMBOL_GPL(kvm_gmem_handle_folio_put); + +/* + * Returns true if the folio is shared with the host and the guest. + * + * Must be called with the offsets_lock lock held. + */ +static bool kvm_gmem_offset_is_shared(struct inode *inode, pgoff_t index) +{ + struct xarray *shared_offsets = &kvm_gmem_private(inode)->shared_offsets; + rwlock_t *offsets_lock = &kvm_gmem_private(inode)->offsets_lock; + unsigned long r; + + lockdep_assert_held(offsets_lock); - /* For now, VMs that support shared memory share all their memory. */ - return kvm_arch_gmem_supports_shared_mem(gmem->kvm); + r = xa_to_value(xa_load(shared_offsets, index)); + + return r == KVM_GMEM_ALL_SHARED; +} + +/* + * Returns true if the folio is shared with the guest (not transitioning). + * + * Must be called with the offsets_lock lock held. + */ +static bool kvm_gmem_offset_is_guest_shared(struct inode *inode, pgoff_t index) +{ + struct xarray *shared_offsets = &kvm_gmem_private(inode)->shared_offsets; + rwlock_t *offsets_lock = &kvm_gmem_private(inode)->offsets_lock; + unsigned long r; + + lockdep_assert_held(offsets_lock); + + r = xa_to_value(xa_load(shared_offsets, index)); + + return (r == KVM_GMEM_ALL_SHARED || r == KVM_GMEM_GUEST_SHARED); +} + +int kvm_gmem_slot_set_shared(struct kvm_memory_slot *slot, gfn_t start, gfn_t end) +{ + struct inode *inode = file_inode(READ_ONCE(slot->gmem.file)); + pgoff_t start_off = slot->gmem.pgoff + start - slot->base_gfn; + pgoff_t end_off = start_off + end - start; + + return kvm_gmem_offset_range_set_shared(inode, start_off, end_off); +} + +int kvm_gmem_slot_clear_shared(struct kvm_memory_slot *slot, gfn_t start, gfn_t end) +{ + struct inode *inode = file_inode(READ_ONCE(slot->gmem.file)); + pgoff_t start_off = slot->gmem.pgoff + start - slot->base_gfn; + pgoff_t end_off = start_off + end - start; + + return kvm_gmem_offset_range_clear_shared(inode, start_off, end_off); +} + +bool kvm_gmem_slot_is_guest_shared(struct kvm_memory_slot *slot, gfn_t gfn) +{ + struct inode *inode = file_inode(READ_ONCE(slot->gmem.file)); + rwlock_t *offsets_lock = &kvm_gmem_private(inode)->offsets_lock; + unsigned long pgoff = slot->gmem.pgoff + gfn - slot->base_gfn; + bool r; + + read_lock(offsets_lock); + r = kvm_gmem_offset_is_guest_shared(inode, pgoff); + read_unlock(offsets_lock); + + return r; } static vm_fault_t kvm_gmem_fault(struct vm_fault *vmf) { struct inode *inode = file_inode(vmf->vma->vm_file); + rwlock_t *offsets_lock = &kvm_gmem_private(inode)->offsets_lock; struct folio *folio; vm_fault_t ret = VM_FAULT_LOCKED; filemap_invalidate_lock_shared(inode->i_mapping); + read_lock(offsets_lock); folio = kvm_gmem_get_folio(inode, vmf->pgoff); if (IS_ERR(folio)) { @@ -423,7 +604,7 @@ static vm_fault_t kvm_gmem_fault(struct vm_fault *vmf) goto out_folio; } - if (!kvm_gmem_offset_is_shared(vmf->vma->vm_file, vmf->pgoff)) { + if (!kvm_gmem_offset_is_shared(inode, vmf->pgoff)) { ret = VM_FAULT_SIGBUS; goto out_folio; } @@ -457,6 +638,7 @@ static vm_fault_t kvm_gmem_fault(struct 
vm_fault *vmf) } out_filemap: + read_unlock(offsets_lock); filemap_invalidate_unlock_shared(inode->i_mapping); return ret; diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 3e40acb9f5c0..90762252381c 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -3091,6 +3091,68 @@ static int next_segment(unsigned long len, int offset) return len; } +#ifdef CONFIG_KVM_GMEM_SHARED_MEM +int kvm_gmem_set_shared(struct kvm *kvm, gfn_t start, gfn_t end) +{ + struct kvm_memslot_iter iter; + int r = 0; + + mutex_lock(&kvm->slots_lock); + + kvm_for_each_memslot_in_gfn_range(&iter, kvm_memslots(kvm), start, end) { + struct kvm_memory_slot *memslot = iter.slot; + gfn_t gfn_start, gfn_end; + + if (!kvm_slot_can_be_private(memslot)) + continue; + + gfn_start = max(start, memslot->base_gfn); + gfn_end = min(end, memslot->base_gfn + memslot->npages); + if (WARN_ON_ONCE(start >= end)) + continue; + + r = kvm_gmem_slot_set_shared(memslot, gfn_start, gfn_end); + if (WARN_ON_ONCE(r)) + break; + } + + mutex_unlock(&kvm->slots_lock); + + return r; +} +EXPORT_SYMBOL_GPL(kvm_gmem_set_shared); + +int kvm_gmem_clear_shared(struct kvm *kvm, gfn_t start, gfn_t end) +{ + struct kvm_memslot_iter iter; + int r = 0; + + mutex_lock(&kvm->slots_lock); + + kvm_for_each_memslot_in_gfn_range(&iter, kvm_memslots(kvm), start, end) { + struct kvm_memory_slot *memslot = iter.slot; + gfn_t gfn_start, gfn_end; + + if (!kvm_slot_can_be_private(memslot)) + continue; + + gfn_start = max(start, memslot->base_gfn); + gfn_end = min(end, memslot->base_gfn + memslot->npages); + if (WARN_ON_ONCE(start >= end)) + continue; + + r = kvm_gmem_slot_clear_shared(memslot, gfn_start, gfn_end); + if (WARN_ON_ONCE(r)) + break; + } + + mutex_unlock(&kvm->slots_lock); + + return r; +} +EXPORT_SYMBOL_GPL(kvm_gmem_clear_shared); +#endif /* CONFIG_KVM_GMEM_SHARED_MEM */ + /* Copy @len bytes from guest memory at '(@gfn * PAGE_SIZE) + @offset' to @data */ static int __kvm_read_guest_page(struct kvm_memory_slot *slot, gfn_t gfn, void *data, int offset, int len) From patchwork Fri Mar 28 15:31:31 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Fuad Tabba X-Patchwork-Id: 14032184 Received: from mail-wm1-f74.google.com (mail-wm1-f74.google.com [209.85.128.74]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id E04691D88AC for ; Fri, 28 Mar 2025 15:31:45 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.74 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1743175907; cv=none; b=dFtJq6SDHy59qrt2xbnYoGLLe+OvnnfeeaXYc+VlsooDja3ELj65rQnFIGH9KaPh8ZpFLgeuykKfHOrouf2boB9tRhr2mUhcEiZA/uREDuZpvODtSK/O7koTsOnlrrIawBKTyST2bVwLgpC1Es96XiQTPeina7d6raoByUEqqcE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1743175907; c=relaxed/simple; bh=FNZ5rbF/GwQ04BpQvSx2eGZdOReGNlh9r02eIimMdjs=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=azpGcLXTw3qvJtewtksi3iMH/4nKGx9/n8J/Ji8rLs4QxsblVDieCdSgUrEoksb+JWgMV4aHcGvxhFg9McGdwOqkkBCsjMNmYnPEM5BPu6q3+/H+sN8bUqQG+Qhv/EeFIn8Y1H4KqeMyZp5F4Yrr+WFbI/G7TGu5k3D/iCYYdRo= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--tabba.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com 
header.b=2aYYOI0K; arc=none smtp.client-ip=209.85.128.74 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--tabba.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="2aYYOI0K" Received: by mail-wm1-f74.google.com with SMTP id 5b1f17b1804b1-4394040fea1so12110695e9.0 for ; Fri, 28 Mar 2025 08:31:45 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1743175904; x=1743780704; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=jBMUpGqTWD4UpO2EN6fhtj6WpgPcfU+elgx/+TqNl3U=; b=2aYYOI0KPANBxjlszDqd/xXP/bSZ6k0VyLFVb2g3Hjwfjip072qB5SQ47pKumhA/wi /NKDhMQ8odpO2gBmkDm91ir0EEk/pKon2pTIJioYRZaZDBmT6uOm2YAiNh2/6b+8h59m 1SGvh8At4SnqpqFin24IehveOR4TFHYji+OKdlCdqhOFb6QxmwogndJ01crbb9UGlqIt Xwrv3BwBPc8ozKG29f/GeGyrFPUWm/2i9SSc3h/Q0EmsMlo3S3/tiPS78BXynrMIYXqO 9u98XKkOGXEq29yLrMHLP04DgyuyugmcQoDQ015OKmnirrkejeTAkie7K5JTFgTaSGGs VSmw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1743175904; x=1743780704; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=jBMUpGqTWD4UpO2EN6fhtj6WpgPcfU+elgx/+TqNl3U=; b=PMC+Rc493MunSiWBMmXzarLgHv6Q/kPInDdbg1MHQkWO0q15D7Y65hXIJmXS4fWP0c M7EIS2Z0Pgg4bSuTcqTfwp+UeXzxaU2h0N7pNpGezLjyFU81OYuNU+uXuhsOykVFXDjL uCTSMz6shqL61Tcv853gZ6iau5/woKKBYpheUmWEUKPZyW+mWdyBCbFmDZUJbQiyhPrA 1JhtcwHabO+tTrDUQ5OJLeMBv63CFIH7/wMk8//GffZWQLnNsSKGQV/sIR6fCXewUeaR lP4OeAanGQAee6+7DL26FtiI4F9mzl0ZH0c2eLOePi6BR+IIjqtU+uRgmF7s5n4X3UFv WWoA== X-Forwarded-Encrypted: i=1; AJvYcCXCGID1w80aaNeCNcc7jCOKdsApJH3oJNQsxDbcJw987t9JaN4/WAYPjPius1SLpXWeD5nu8TUMnsYZZbGx@vger.kernel.org X-Gm-Message-State: AOJu0YxgQiVMSvQ9yVRS7tzKYHT9rgqrZHHlvPlZBazjXClLsWzB6BJA 09VDvWfGSk9rAKt4uJdloUmyM+wYKm2a8uiJ3qvxZCtLHmf3ELb14aCL5LB+J99d1WMWsXkJqg= = X-Google-Smtp-Source: AGHT+IGsVvvboBRAyzbhN+VadIQHCOH8PGYVPxU5z0Lyav96vZClKVh4KR5eyXtOx0aT0txiHrEbB6kiHg== X-Received: from wmrn12.prod.google.com ([2002:a05:600c:500c:b0:43d:41a2:b768]) (user=tabba job=prod-delivery.src-stubby-dispatcher) by 2002:a05:600c:6dca:b0:43c:f509:2bbf with SMTP id 5b1f17b1804b1-43d911902efmr30762195e9.15.1743175904477; Fri, 28 Mar 2025 08:31:44 -0700 (PDT) Date: Fri, 28 Mar 2025 15:31:31 +0000 In-Reply-To: <20250328153133.3504118-1-tabba@google.com> Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250328153133.3504118-1-tabba@google.com> X-Mailer: git-send-email 2.49.0.472.ge94155a9ec-goog Message-ID: <20250328153133.3504118-6-tabba@google.com> Subject: [PATCH v7 5/7] KVM: guest_memfd: Restore folio state after final folio_put() From: Fuad Tabba To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au, anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk, brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org, xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com, isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz, 
vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com, steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com, quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com, quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com, yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org, will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com, rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com, hughd@google.com, jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com, tabba@google.com Before transitioning a guest_memfd folio to unshared, thereby disallowing access by the host and allowing the hypervisor to transition its view of the guest page as private, we need to ensure that the host doesn't have any references to the folio. This patch uses the guest_memfd folio type to register a callback that informs the guest_memfd subsystem when the last reference is dropped, therefore knowing that the host doesn't have any remaining references. Signed-off-by: Fuad Tabba --- The function kvm_slot_gmem_register_callback() isn't used in this series. It will be used later in code that performs unsharing of memory. I have tested it with pKVM, based on downstream code [*]. It's included in this RFC since it demonstrates the plan to handle unsharing of private folios. [*] https://android-kvm.googlesource.com/linux/+/refs/heads/tabba/guestmem-6.13-v7-pkvm --- include/linux/kvm_host.h | 6 ++ virt/kvm/guest_memfd.c | 143 ++++++++++++++++++++++++++++++++++++++- 2 files changed, 148 insertions(+), 1 deletion(-) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index bf82faf16c53..d9d9d72d8beb 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -2607,6 +2607,7 @@ int kvm_gmem_slot_set_shared(struct kvm_memory_slot *slot, gfn_t start, int kvm_gmem_slot_clear_shared(struct kvm_memory_slot *slot, gfn_t start, gfn_t end); bool kvm_gmem_slot_is_guest_shared(struct kvm_memory_slot *slot, gfn_t gfn); +int kvm_gmem_slot_register_callback(struct kvm_memory_slot *slot, gfn_t gfn); void kvm_gmem_handle_folio_put(struct folio *folio); #else static inline int kvm_gmem_set_shared(struct kvm *kvm, gfn_t start, gfn_t end) @@ -2638,6 +2639,11 @@ static inline bool kvm_gmem_slot_is_guest_shared(struct kvm_memory_slot *slot, WARN_ON_ONCE(1); return false; } +static inline int kvm_gmem_slot_register_callback(struct kvm_memory_slot *slot, gfn_t gfn) +{ + WARN_ON_ONCE(1); + return -EINVAL; +} #endif /* CONFIG_KVM_GMEM_SHARED_MEM */ #endif diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c index 3b4d724084a8..ce19bd6c2031 100644 --- a/virt/kvm/guest_memfd.c +++ b/virt/kvm/guest_memfd.c @@ -392,6 +392,27 @@ enum folio_shareability { KVM_GMEM_NONE_SHARED = 0b11, /* Not shared, transient state. */ }; +/* + * Unregisters the __folio_put() callback from the folio. + * + * Restores a folio's refcount after all pending references have been released, + * and removes the folio type, thereby removing the callback. Now the folio can + * be freed normaly once all actual references have been dropped. + * + * Must be called with the folio locked and the offsets_lock write lock held. 
+ */ +static void kvm_gmem_restore_pending_folio(struct folio *folio, struct inode *inode) +{ + rwlock_t *offsets_lock = &kvm_gmem_private(inode)->offsets_lock; + + lockdep_assert_held_write(offsets_lock); + WARN_ON_ONCE(!folio_test_locked(folio)); + WARN_ON_ONCE(!folio_test_guestmem(folio)); + + __folio_clear_guestmem(folio); + folio_ref_add(folio, folio_nr_pages(folio)); +} + static int kvm_gmem_offset_set_shared(struct inode *inode, pgoff_t index) { struct xarray *shared_offsets = &kvm_gmem_private(inode)->shared_offsets; @@ -400,6 +421,24 @@ static int kvm_gmem_offset_set_shared(struct inode *inode, pgoff_t index) lockdep_assert_held_write(offsets_lock); + /* + * If the folio is NONE_SHARED, it indicates that it is transitioning to + * private (GUEST_SHARED). Transition it to shared (ALL_SHARED) + * immediately, and remove the callback. + */ + if (xa_to_value(xa_load(shared_offsets, index)) == KVM_GMEM_NONE_SHARED) { + struct folio *folio = filemap_lock_folio(inode->i_mapping, index); + + if (WARN_ON_ONCE(IS_ERR(folio))) + return PTR_ERR(folio); + + if (folio_test_guestmem(folio)) + kvm_gmem_restore_pending_folio(folio, inode); + + folio_unlock(folio); + folio_put(folio); + } + return xa_err(xa_store(shared_offsets, index, xval, GFP_KERNEL)); } @@ -503,9 +542,111 @@ static int kvm_gmem_offset_range_clear_shared(struct inode *inode, return r; } +/* + * Registers a callback to __folio_put(), so that gmem knows that the host does + * not have any references to the folio. The callback itself is registered by + * setting the folio type to guestmem. + * + * Returns 0 if a callback was registered or already has been registered, or + * -EAGAIN if the host has references, indicating a callback wasn't registered. + * + * Must be called with the folio locked and the offsets_lock write lock held. + */ +static int kvm_gmem_register_callback(struct folio *folio, struct inode *inode, pgoff_t index) +{ + struct xarray *shared_offsets = &kvm_gmem_private(inode)->shared_offsets; + rwlock_t *offsets_lock = &kvm_gmem_private(inode)->offsets_lock; + void *xval_guest = xa_mk_value(KVM_GMEM_GUEST_SHARED); + int refcount; + int r = 0; + + lockdep_assert_held_write(offsets_lock); + WARN_ON_ONCE(!folio_test_locked(folio)); + + if (folio_test_guestmem(folio)) + return 0; + + if (folio_mapped(folio)) + return -EAGAIN; + + refcount = folio_ref_count(folio); + if (!folio_ref_freeze(folio, refcount)) + return -EAGAIN; + + /* + * Register callback by setting the folio type and subtracting gmem's + * references for it to trigger once outstanding references are dropped. + */ + if (refcount > 1) { + __folio_set_guestmem(folio); + refcount -= folio_nr_pages(folio); + } else { + /* No outstanding references, transition it to guest shared. 
*/ + r = WARN_ON_ONCE(xa_err(xa_store(shared_offsets, index, xval_guest, GFP_KERNEL))); + } + + folio_ref_unfreeze(folio, refcount); + return r; +} + +int kvm_gmem_slot_register_callback(struct kvm_memory_slot *slot, gfn_t gfn) +{ + unsigned long pgoff = slot->gmem.pgoff + gfn - slot->base_gfn; + struct inode *inode = file_inode(READ_ONCE(slot->gmem.file)); + rwlock_t *offsets_lock = &kvm_gmem_private(inode)->offsets_lock; + struct folio *folio; + int r; + + write_lock(offsets_lock); + + folio = filemap_lock_folio(inode->i_mapping, pgoff); + if (WARN_ON_ONCE(IS_ERR(folio))) { + write_unlock(offsets_lock); + return PTR_ERR(folio); + } + + r = kvm_gmem_register_callback(folio, inode, pgoff); + + folio_unlock(folio); + folio_put(folio); + write_unlock(offsets_lock); + + return r; +} +EXPORT_SYMBOL_GPL(kvm_gmem_slot_register_callback); + +/* + * Callback function for __folio_put(), i.e., called once all references by the + * host to the folio have been dropped. This allows gmem to transition the state + * of the folio to shared with the guest, and allows the hypervisor to continue + * transitioning its state to private, since the host cannot attempt to access + * it anymore. + */ void kvm_gmem_handle_folio_put(struct folio *folio) { - WARN_ONCE(1, "A placeholder that shouldn't trigger. Work in progress."); + struct address_space *mapping; + struct xarray *shared_offsets; + rwlock_t *offsets_lock; + struct inode *inode; + pgoff_t index; + void *xval; + + mapping = folio->mapping; + if (WARN_ON_ONCE(!mapping)) + return; + + inode = mapping->host; + index = folio->index; + shared_offsets = &kvm_gmem_private(inode)->shared_offsets; + offsets_lock = &kvm_gmem_private(inode)->offsets_lock; + xval = xa_mk_value(KVM_GMEM_GUEST_SHARED); + + write_lock(offsets_lock); + folio_lock(folio); + kvm_gmem_restore_pending_folio(folio, inode); + folio_unlock(folio); + WARN_ON_ONCE(xa_err(xa_store(shared_offsets, index, xval, GFP_KERNEL))); + write_unlock(offsets_lock); } EXPORT_SYMBOL_GPL(kvm_gmem_handle_folio_put);
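As a rough illustration of how these hooks are expected to fit into the later unsharing code: the wrapper below is hypothetical and not part of this series. It only assumes the kvm_gmem_slot_clear_shared() and kvm_gmem_slot_register_callback() declarations added above, and that the caller has already unmapped the range from the host.

/*
 * Hypothetical sketch, for illustration only: unshare a single gfn and ask
 * guest_memfd to notify us once the host's last reference is gone.
 */
static int kvm_gmem_unshare_gfn_sketch(struct kvm_memory_slot *slot, gfn_t gfn)
{
	int r;

	/* Stop treating the page as shared with the host. */
	r = kvm_gmem_slot_clear_shared(slot, gfn, gfn + 1);
	if (r)
		return r;

	/*
	 * Register the __folio_put() callback; kvm_gmem_handle_folio_put()
	 * completes the transition once the host drops its last reference.
	 * -EAGAIN means the host still has references and no callback was
	 * registered, so the caller needs to retry later.
	 */
	return kvm_gmem_slot_register_callback(slot, gfn);
}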
From patchwork Fri Mar 28 15:31:32 2025 X-Patchwork-Submitter: Fuad Tabba X-Patchwork-Id: 14032185 Date: Fri, 28 Mar 2025 15:31:32 +0000 In-Reply-To: <20250328153133.3504118-1-tabba@google.com> References: <20250328153133.3504118-1-tabba@google.com> Message-ID: <20250328153133.3504118-7-tabba@google.com> Subject: [PATCH v7 6/7] KVM: guest_memfd: Handle invalidation of shared memory From: Fuad Tabba To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au, anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk, brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org, xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com, isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz, vannapurve@google.com,
ackerleytng@google.com, mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com, steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com, quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com, quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com, yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org, will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com, rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com, hughd@google.com, jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com, tabba@google.com When guest_memfd-backed memory is invalidated, e.g., when punching holes or when the file is released, ensure that the sharing states are updated and that any folios left in a transient state are restored to an appropriate state.
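For context, the punch-hole case can be driven from userspace with fallocate() on the guest_memfd file descriptor; a minimal sketch, assuming gmem_fd, offset and len come from the caller and the range is page-aligned:

#define _GNU_SOURCE
#include <fcntl.h>

/* Illustration only: punch a hole in a guest_memfd, invalidating the range. */
static int gmem_punch_hole(int gmem_fd, off_t offset, off_t len)
{
	return fallocate(gmem_fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
			 offset, len);
}

Punching a hole reaches the invalidation path (kvm_gmem_invalidate_begin()), which with this patch also resets the sharing state for the affected range.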
Signed-off-by: Fuad Tabba --- virt/kvm/guest_memfd.c | 57 ++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 57 insertions(+) diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c index ce19bd6c2031..eec9d5e09f09 100644 --- a/virt/kvm/guest_memfd.c +++ b/virt/kvm/guest_memfd.c @@ -118,6 +118,16 @@ static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index) return filemap_grab_folio(inode->i_mapping, index); } +#ifdef CONFIG_KVM_GMEM_SHARED_MEM +static void kvm_gmem_offset_range_invalidate_shared(struct inode *inode, + pgoff_t start, pgoff_t end); +#else +static inline void kvm_gmem_offset_range_invalidate_shared(struct inode *inode, + pgoff_t start, pgoff_t end) +{ +} +#endif + static void kvm_gmem_invalidate_begin(struct kvm_gmem *gmem, pgoff_t start, pgoff_t end) { @@ -127,6 +137,7 @@ static void kvm_gmem_invalidate_begin(struct kvm_gmem *gmem, pgoff_t start, unsigned long index; xa_for_each_range(&gmem->bindings, index, slot, start, end - 1) { + struct file *file = READ_ONCE(slot->gmem.file); pgoff_t pgoff = slot->gmem.pgoff; struct kvm_gfn_range gfn_range = { @@ -146,6 +157,16 @@ static void kvm_gmem_invalidate_begin(struct kvm_gmem *gmem, pgoff_t start, } flush |= kvm_mmu_unmap_gfn_range(kvm, &gfn_range); + + /* + * The file is NULL if this gets called after kvm_gmem_unbind(), which + * means that all in-flight operations are gone, and the file has been + * closed. + */ + if (file) { + kvm_gmem_offset_range_invalidate_shared(file_inode(file), + gfn_range.start, + gfn_range.end); + } } if (flush) @@ -512,6 +533,42 @@ static int kvm_gmem_offset_clear_shared(struct inode *inode, pgoff_t index) return r; } +/* + * Callback when invalidating memory that is potentially shared. + * + * Must be called with the offsets_lock write lock held. + */ +static void kvm_gmem_offset_range_invalidate_shared(struct inode *inode, + pgoff_t start, pgoff_t end) +{ + struct xarray *shared_offsets = &kvm_gmem_private(inode)->shared_offsets; + rwlock_t *offsets_lock = &kvm_gmem_private(inode)->offsets_lock; + pgoff_t i; + + lockdep_assert_held_write(offsets_lock); + + for (i = start; i < end; i++) { + /* + * If the folio is NONE_SHARED, it indicates that it's + * transitioning to private (GUEST_SHARED). Transition it to + * shared (ALL_SHARED) and remove the callback. + */ + if (xa_to_value(xa_load(shared_offsets, i)) == KVM_GMEM_NONE_SHARED) { + struct folio *folio = filemap_lock_folio(inode->i_mapping, i); + + if (!WARN_ON_ONCE(IS_ERR(folio))) { + if (folio_test_guestmem(folio)) + kvm_gmem_restore_pending_folio(folio, inode); + + folio_unlock(folio); + folio_put(folio); + } + } + + xa_erase(shared_offsets, i); + } +} + /* * Marks the range [start, end) as not shared with the host. If the host doesn't * have any references to a particular folio, then that folio is marked as
From patchwork Fri Mar 28 15:31:33 2025 X-Patchwork-Submitter: Fuad Tabba X-Patchwork-Id: 14032186 Date: Fri, 28 Mar 2025 15:31:33 +0000 In-Reply-To: <20250328153133.3504118-1-tabba@google.com> References: <20250328153133.3504118-1-tabba@google.com> Message-ID: <20250328153133.3504118-8-tabba@google.com> Subject: [PATCH v7 7/7] KVM: guest_memfd: Add a guest_memfd() flag to initialize it as shared From: Fuad Tabba To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au, anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk, brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org, xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com, isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz, vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com, steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com, quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com, quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com, yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org, will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com, rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com, hughd@google.com, jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com, tabba@google.com Not all use cases require guest_memfd() to be shared with the host when first created. Add a new flag, GUEST_MEMFD_FLAG_INIT_SHARED, which, when set on KVM_CREATE_GUEST_MEMFD, initializes the memory as shared with the host, and therefore mappable by it. Otherwise, memory is private until explicitly shared by the guest with the host.
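For illustration, creating a guest_memfd that starts out shared with the host could look like the sketch below from userspace; this is only a sketch, assuming vm_fd is an open VM file descriptor and that KVM_CAP_GMEM_SHARED_MEM has already been checked:

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Illustration only: create a 2 MiB guest_memfd that is initially shared. */
static int create_shared_gmem(int vm_fd)
{
	struct kvm_create_guest_memfd gmem = {
		.size = 2 * 1024 * 1024,
		.flags = GUEST_MEMFD_FLAG_INIT_SHARED,
	};

	/* Returns a new guest_memfd file descriptor on success. */
	return ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);
}

Without the flag, the same call creates memory that stays private until the guest explicitly shares it.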
Signed-off-by: Fuad Tabba --- Documentation/virt/kvm/api.rst | 4 ++++ include/uapi/linux/kvm.h | 1 + tools/testing/selftests/kvm/guest_memfd_test.c | 7 +++++-- virt/kvm/guest_memfd.c | 12 ++++++++++++ 4 files changed, 22 insertions(+), 2 deletions(-) diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index 2b52eb77e29c..a5496d7d323b 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -6386,6 +6386,10 @@ most one mapping per page, i.e. binding multiple memory regions to a single guest_memfd range is not allowed (any number of memory regions can be bound to a single guest_memfd file, but the bound ranges must not overlap). +If the capability KVM_CAP_GMEM_SHARED_MEM is supported, then the flags field +supports GUEST_MEMFD_FLAG_INIT_SHARED, which initializes the memory as shared +with the host, and thereby, mappable by it. + See KVM_SET_USER_MEMORY_REGION2 for additional details. 4.143 KVM_PRE_FAULT_MEMORY diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index 117937a895da..22d7e33bf09c 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -1566,6 +1566,7 @@ struct kvm_memory_attributes { #define KVM_MEMORY_ATTRIBUTE_PRIVATE (1ULL << 3) #define KVM_CREATE_GUEST_MEMFD _IOWR(KVMIO, 0xd4, struct kvm_create_guest_memfd) +#define GUEST_MEMFD_FLAG_INIT_SHARED (1UL << 0) struct kvm_create_guest_memfd { __u64 size; diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c index 38c501e49e0e..4a7fcd6aa372 100644 --- a/tools/testing/selftests/kvm/guest_memfd_test.c +++ b/tools/testing/selftests/kvm/guest_memfd_test.c @@ -159,7 +159,7 @@ static void test_invalid_punch_hole(int fd, size_t page_size, size_t total_size) static void test_create_guest_memfd_invalid(struct kvm_vm *vm) { size_t page_size = getpagesize(); - uint64_t flag; + uint64_t flag = BIT(0); size_t size; int fd; @@ -170,7 +170,10 @@ static void test_create_guest_memfd_invalid(struct kvm_vm *vm) size); } - for (flag = BIT(0); flag; flag <<= 1) { + if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM)) + flag = GUEST_MEMFD_FLAG_INIT_SHARED << 1; + + for (; flag; flag <<= 1) { fd = __vm_create_guest_memfd(vm, page_size, flag); TEST_ASSERT(fd == -1 && errno == EINVAL, "guest_memfd() with flag '0x%lx' should fail with EINVAL", diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c index eec9d5e09f09..32e149478b04 100644 --- a/virt/kvm/guest_memfd.c +++ b/virt/kvm/guest_memfd.c @@ -1069,6 +1069,15 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags) goto err_gmem; } + if (IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM) && + (flags & GUEST_MEMFD_FLAG_INIT_SHARED)) { + err = kvm_gmem_offset_range_set_shared(file_inode(file), 0, size >> PAGE_SHIFT); + if (err) { + fput(file); + goto err_gmem; + } + } + kvm_get_kvm(kvm); gmem->kvm = kvm; xa_init(&gmem->bindings); @@ -1090,6 +1099,9 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args) u64 flags = args->flags; u64 valid_flags = 0; + if (IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM)) + valid_flags |= GUEST_MEMFD_FLAG_INIT_SHARED; + if (flags & ~valid_flags) return -EINVAL;