From patchwork Tue Oct 25 15:13:37 2022
From: Chao Peng <chao.p.peng@linux.intel.com>
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org,
 linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org
Cc: Paolo Bonzini, Jonathan Corbet, Sean Christopherson, Vitaly Kuznetsov,
 Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
 Borislav Petkov, x86@kernel.org, "H. Peter Anvin", Hugh Dickins,
 Jeff Layton, "J. Bruce Fields", Andrew Morton, Shuah Khan, Mike Rapoport,
 Steven Price, "Maciej S. Szmigiero", Vlastimil Babka, Vishal Annapurve,
 Yu Zhang, Chao Peng, "Kirill A. Shutemov", luto@kernel.org,
 jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com,
 david@redhat.com, aarcange@redhat.com, ddutile@redhat.com,
 dhildenb@redhat.com, Quentin Perret, tabba@google.com, Michael Roth,
 mhocko@suse.com, Muchun Song, wei.w.wang@intel.com
Subject: [PATCH v9 1/8] mm: Introduce memfd_restricted system call to create
 restricted user memory
Date: Tue, 25 Oct 2022 23:13:37 +0800
Message-Id: <20221025151344.3784230-2-chao.p.peng@linux.intel.com>
In-Reply-To: <20221025151344.3784230-1-chao.p.peng@linux.intel.com>
References: <20221025151344.3784230-1-chao.p.peng@linux.intel.com>

From: "Kirill A. Shutemov"

Introduce the 'memfd_restricted' system call, which creates memory areas
that are restricted from userspace access through ordinary MMU operations
(e.g. read/write/mmap). The memory content is expected to be consumed
through a new in-kernel interface by another kernel module.

memfd_restricted() is useful for scenarios where a file descriptor (fd)
serves as an interface into mm while userspace's ability to operate on
the fd needs to be restricted. Initially it is designed to provide
protection for KVM encrypted guest memory.

Normally, KVM uses memfd memory by mmap()-ing the memfd into KVM
userspace (e.g. QEMU) and then using the mapped virtual address to set up
the mapping in the KVM secondary page table (e.g. EPT). With confidential
computing technologies like Intel TDX, the memfd memory may be encrypted
with a special key for a particular software domain (e.g. a KVM guest)
and is not expected to be accessed directly by userspace. In fact,
userspace access to such encrypted memory may lead to a host crash, so it
should be prevented.

memfd_restricted() provides the semantics required for KVM guest
encrypted memory support: an fd created with memfd_restricted() is used
as the source of guest memory in a confidential computing environment,
and KVM can interact directly with core-mm without needing to expose the
memory content to KVM userspace.

KVM userspace remains in charge of the lifecycle of the fd. It should
pass the created fd to KVM, and KVM uses the new restrictedmem_get_page()
to obtain the physical memory page and then uses it to populate the KVM
secondary page table entries.

The restricted memfd can be fallocate()-ed or hole-punched from
userspace. When these operations happen, KVM is notified through a
restrictedmem_notifier and then gets a chance to remove any mapped
entries for the range from the secondary page tables.

memfd_restricted() itself is implemented as a shim layer on top of real
memory file systems (currently tmpfs). Pages in restrictedmem are marked
as unmovable and unevictable; this is required for the current
confidential usage, but it might change in the future.

By default memfd_restricted() prevents userspace read, write and mmap.
By defining new bits in 'flags', it can be extended to support other
restricted semantics in the future.

The system call is currently wired up only for the x86 architecture.

Signed-off-by: Kirill A. Shutemov
Signed-off-by: Chao Peng
Reviewed-by: Fuad Tabba
---
 arch/x86/entry/syscalls/syscall_32.tbl |   1 +
 arch/x86/entry/syscalls/syscall_64.tbl |   1 +
 include/linux/restrictedmem.h          |  62 ++++++
 include/linux/syscalls.h               |   1 +
 include/uapi/asm-generic/unistd.h      |   5 +-
 include/uapi/linux/magic.h             |   1 +
 kernel/sys_ni.c                        |   3 +
 mm/Kconfig                             |   4 +
 mm/Makefile                            |   1 +
 mm/restrictedmem.c                     | 250 +++++++++++++++++++++++++
 10 files changed, 328 insertions(+), 1 deletion(-)
 create mode 100644 include/linux/restrictedmem.h
 create mode 100644 mm/restrictedmem.c

diff --git a/arch/x86/entry/syscalls/syscall_32.tbl b/arch/x86/entry/syscalls/syscall_32.tbl
index 320480a8db4f..dc70ba90247e 100644
--- a/arch/x86/entry/syscalls/syscall_32.tbl
+++ b/arch/x86/entry/syscalls/syscall_32.tbl
@@ -455,3 +455,4 @@
 448	i386	process_mrelease	sys_process_mrelease
 449	i386	futex_waitv		sys_futex_waitv
 450	i386	set_mempolicy_home_node	sys_set_mempolicy_home_node
+451	i386	memfd_restricted	sys_memfd_restricted
diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl
index c84d12608cd2..06516abc8318 100644
--- a/arch/x86/entry/syscalls/syscall_64.tbl
+++ b/arch/x86/entry/syscalls/syscall_64.tbl
@@ -372,6 +372,7 @@
 448	common	process_mrelease	sys_process_mrelease
 449	common	futex_waitv		sys_futex_waitv
 450	common	set_mempolicy_home_node	sys_set_mempolicy_home_node
+451	common	memfd_restricted	sys_memfd_restricted
 
 #
 # Due to a historical design error, certain syscalls are numbered differently
diff --git a/include/linux/restrictedmem.h b/include/linux/restrictedmem.h
new file mode 100644
index 000000000000..9c37c3ea3180
--- /dev/null
+++ b/include/linux/restrictedmem.h
@@ -0,0 +1,62 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _LINUX_RESTRICTEDMEM_H
+
+#include
+#include
+#include
+
+struct restrictedmem_notifier;
+
+struct restrictedmem_notifier_ops {
+	void (*invalidate_start)(struct restrictedmem_notifier *notifier,
+				 pgoff_t start, pgoff_t end);
+	void (*invalidate_end)(struct restrictedmem_notifier *notifier,
+			       pgoff_t start, pgoff_t end);
+};
+
+struct restrictedmem_notifier {
+	struct list_head list;
+	const struct restrictedmem_notifier_ops *ops;
+};
+
+#ifdef CONFIG_RESTRICTEDMEM
+
+void restrictedmem_register_notifier(struct file *file,
+				     struct restrictedmem_notifier *notifier);
+void restrictedmem_unregister_notifier(struct file *file,
+				       struct restrictedmem_notifier *notifier);
+
+int restrictedmem_get_page(struct file *file, pgoff_t offset,
+			   struct page **pagep, int *order);
+
+static inline bool file_is_restrictedmem(struct file *file)
+{
+	return file->f_inode->i_sb->s_magic == RESTRICTEDMEM_MAGIC;
+}
+
+#else
+
+static inline void restrictedmem_register_notifier(struct file *file,
+						   struct restrictedmem_notifier *notifier)
+{
+}
+
+static inline void restrictedmem_unregister_notifier(struct file *file,
+						     struct restrictedmem_notifier *notifier)
+{
+}
+
+static inline int restrictedmem_get_page(struct file *file, pgoff_t offset,
+					 struct page **pagep, int *order)
+{
+	return -1;
+}
+
+static inline bool file_is_restrictedmem(struct file *file)
+{
+	return false;
+}
+
+#endif /* CONFIG_RESTRICTEDMEM */
+
+#endif /* _LINUX_RESTRICTEDMEM_H */
diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
index a34b0f9a9972..f9e9e0c820c5 100644
--- a/include/linux/syscalls.h
+++ b/include/linux/syscalls.h
@@ -1056,6 +1056,7 @@ asmlinkage long sys_memfd_secret(unsigned int flags);
 asmlinkage long sys_set_mempolicy_home_node(unsigned long start, unsigned long len,
 					    unsigned long home_node,
 					    unsigned long flags);
+asmlinkage long sys_memfd_restricted(unsigned int flags);
 
 /*
  * Architecture-specific system calls
diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h
index 45fa180cc56a..e93cd35e46d0 100644
--- a/include/uapi/asm-generic/unistd.h
+++ b/include/uapi/asm-generic/unistd.h
@@ -886,8 +886,11 @@ __SYSCALL(__NR_futex_waitv, sys_futex_waitv)
 #define __NR_set_mempolicy_home_node 450
 __SYSCALL(__NR_set_mempolicy_home_node, sys_set_mempolicy_home_node)
 
+#define __NR_memfd_restricted 451
+__SYSCALL(__NR_memfd_restricted, sys_memfd_restricted)
+
 #undef __NR_syscalls
-#define __NR_syscalls 451
+#define __NR_syscalls 452
 
 /*
  * 32 bit systems traditionally used different
diff --git a/include/uapi/linux/magic.h b/include/uapi/linux/magic.h
index 6325d1d0e90f..8aa38324b90a 100644
--- a/include/uapi/linux/magic.h
+++ b/include/uapi/linux/magic.h
@@ -101,5 +101,6 @@
 #define DMA_BUF_MAGIC		0x444d4142	/* "DMAB" */
 #define DEVMEM_MAGIC		0x454d444d	/* "DMEM" */
 #define SECRETMEM_MAGIC		0x5345434d	/* "SECM" */
+#define RESTRICTEDMEM_MAGIC	0x5245534d	/* "RESM" */
 
 #endif /* __LINUX_MAGIC_H__ */
diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c
index 860b2dcf3ac4..7c4a32cbd2e7 100644
--- a/kernel/sys_ni.c
+++ b/kernel/sys_ni.c
@@ -360,6 +360,9 @@ COND_SYSCALL(pkey_free);
 /* memfd_secret */
 COND_SYSCALL(memfd_secret);
 
+/* memfd_restricted */
+COND_SYSCALL(memfd_restricted);
+
 /*
  * Architecture specific weak syscall entries.
  */
diff --git a/mm/Kconfig b/mm/Kconfig
index 0331f1461f81..0177d53676c7 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1076,6 +1076,10 @@ config IO_MAPPING
 config SECRETMEM
 	def_bool ARCH_HAS_SET_DIRECT_MAP && !EMBEDDED
 
+config RESTRICTEDMEM
+	bool
+	depends on TMPFS
+
 config ANON_VMA_NAME
 	bool "Anonymous VMA name support"
 	depends on PROC_FS && ADVISE_SYSCALLS && MMU
diff --git a/mm/Makefile b/mm/Makefile
index 9a564f836403..6cb6403ffd40 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -117,6 +117,7 @@ obj-$(CONFIG_PAGE_EXTENSION) += page_ext.o
 obj-$(CONFIG_PAGE_TABLE_CHECK) += page_table_check.o
 obj-$(CONFIG_CMA_DEBUGFS) += cma_debug.o
 obj-$(CONFIG_SECRETMEM) += secretmem.o
+obj-$(CONFIG_RESTRICTEDMEM) += restrictedmem.o
 obj-$(CONFIG_CMA_SYSFS) += cma_sysfs.o
 obj-$(CONFIG_USERFAULTFD) += userfaultfd.o
 obj-$(CONFIG_IDLE_PAGE_TRACKING) += page_idle.o
diff --git a/mm/restrictedmem.c b/mm/restrictedmem.c
new file mode 100644
index 000000000000..e5bf8907e0f8
--- /dev/null
+++ b/mm/restrictedmem.c
@@ -0,0 +1,250 @@
+// SPDX-License-Identifier: GPL-2.0
+#include "linux/sbitmap.h"
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+struct restrictedmem_data {
+	struct mutex lock;
+	struct file *memfd;
+	struct list_head notifiers;
+};
+
+static void restrictedmem_notifier_invalidate(struct restrictedmem_data *data,
+				 pgoff_t start, pgoff_t end, bool notify_start)
+{
+	struct restrictedmem_notifier *notifier;
+
+	mutex_lock(&data->lock);
+	list_for_each_entry(notifier, &data->notifiers, list) {
+		if (notify_start)
+			notifier->ops->invalidate_start(notifier, start, end);
+		else
+			notifier->ops->invalidate_end(notifier, start, end);
+	}
+	mutex_unlock(&data->lock);
+}
+
+static int restrictedmem_release(struct inode *inode, struct file *file)
+{
+	struct restrictedmem_data *data = inode->i_mapping->private_data;
+
+	fput(data->memfd);
+	kfree(data);
+	return 0;
+}
+
+static long restrictedmem_fallocate(struct file *file, int mode,
+				    loff_t offset, loff_t len)
+{
+	struct restrictedmem_data *data = file->f_mapping->private_data;
+	struct file *memfd = data->memfd;
+	int ret;
+
+	if (mode & FALLOC_FL_PUNCH_HOLE) {
+		if (!PAGE_ALIGNED(offset) || !PAGE_ALIGNED(len))
+			return -EINVAL;
+	}
+
+	restrictedmem_notifier_invalidate(data, offset, offset + len, true);
+	ret = memfd->f_op->fallocate(memfd, mode, offset, len);
+	restrictedmem_notifier_invalidate(data, offset, offset + len, false);
+	return ret;
+}
+
+static const struct file_operations restrictedmem_fops = {
+	.release = restrictedmem_release,
+	.fallocate = restrictedmem_fallocate,
+};
+
+static int restrictedmem_getattr(struct user_namespace *mnt_userns,
+				 const struct path *path, struct kstat *stat,
+				 u32 request_mask, unsigned int query_flags)
+{
+	struct inode *inode = d_inode(path->dentry);
+	struct restrictedmem_data *data = inode->i_mapping->private_data;
+	struct file *memfd = data->memfd;
+
+	return memfd->f_inode->i_op->getattr(mnt_userns, path, stat,
+					     request_mask, query_flags);
+}
+
+static int restrictedmem_setattr(struct user_namespace *mnt_userns,
+				 struct dentry *dentry, struct iattr *attr)
+{
+	struct inode *inode = d_inode(dentry);
+	struct restrictedmem_data *data = inode->i_mapping->private_data;
+	struct file *memfd = data->memfd;
+	int ret;
+
+	if (attr->ia_valid & ATTR_SIZE) {
+		if (memfd->f_inode->i_size)
+			return -EPERM;
+
+		if (!PAGE_ALIGNED(attr->ia_size))
+			return -EINVAL;
+	}
+
+	ret = memfd->f_inode->i_op->setattr(mnt_userns,
+					    file_dentry(memfd), attr);
+	return ret;
+}
+
+static const struct inode_operations restrictedmem_iops = {
+	.getattr = restrictedmem_getattr,
+	.setattr = restrictedmem_setattr,
+};
+
+static int restrictedmem_init_fs_context(struct fs_context *fc)
+{
+	if (!init_pseudo(fc, RESTRICTEDMEM_MAGIC))
+		return -ENOMEM;
+
+	fc->s_iflags |= SB_I_NOEXEC;
+	return 0;
+}
+
+static struct file_system_type restrictedmem_fs = {
+	.owner		= THIS_MODULE,
+	.name		= "memfd:restrictedmem",
+	.init_fs_context = restrictedmem_init_fs_context,
+	.kill_sb	= kill_anon_super,
+};
+
+static struct vfsmount *restrictedmem_mnt;
+
+static __init int restrictedmem_init(void)
+{
+	restrictedmem_mnt = kern_mount(&restrictedmem_fs);
+	if (IS_ERR(restrictedmem_mnt))
+		return PTR_ERR(restrictedmem_mnt);
+	return 0;
+}
+fs_initcall(restrictedmem_init);
+
+static struct file *restrictedmem_file_create(struct file *memfd)
+{
+	struct restrictedmem_data *data;
+	struct address_space *mapping;
+	struct inode *inode;
+	struct file *file;
+
+	data = kzalloc(sizeof(*data), GFP_KERNEL);
+	if (!data)
+		return ERR_PTR(-ENOMEM);
+
+	data->memfd = memfd;
+	mutex_init(&data->lock);
+	INIT_LIST_HEAD(&data->notifiers);
+
+	inode = alloc_anon_inode(restrictedmem_mnt->mnt_sb);
+	if (IS_ERR(inode)) {
+		kfree(data);
+		return ERR_CAST(inode);
+	}
+
+	inode->i_mode |= S_IFREG;
+	inode->i_op = &restrictedmem_iops;
+	inode->i_mapping->private_data = data;
+
+	file = alloc_file_pseudo(inode, restrictedmem_mnt,
+				 "restrictedmem", O_RDWR,
+				 &restrictedmem_fops);
+	if (IS_ERR(file)) {
+		iput(inode);
+		kfree(data);
+		return ERR_CAST(file);
+	}
+
+	file->f_flags |= O_LARGEFILE;
+
+	mapping = memfd->f_mapping;
+	mapping_set_unevictable(mapping);
+	mapping_set_gfp_mask(mapping,
+			     mapping_gfp_mask(mapping) & ~__GFP_MOVABLE);
+
+	return file;
+}
+
+SYSCALL_DEFINE1(memfd_restricted, unsigned int, flags)
+{
+	struct file *file, *restricted_file;
+	int fd, err;
+
+	if (flags)
+		return -EINVAL;
+
+	fd = get_unused_fd_flags(0);
+	if (fd < 0)
+		return fd;
+
+	file = shmem_file_setup("memfd:restrictedmem", 0, VM_NORESERVE);
+	if (IS_ERR(file)) {
+		err = PTR_ERR(file);
+		goto err_fd;
+	}
+	file->f_mode |= FMODE_LSEEK | FMODE_PREAD | FMODE_PWRITE;
+	file->f_flags |= O_LARGEFILE;
+
+	restricted_file = restrictedmem_file_create(file);
+	if (IS_ERR(restricted_file)) {
+		err = PTR_ERR(restricted_file);
+		fput(file);
+		goto err_fd;
+	}
+
+	fd_install(fd, restricted_file);
+	return fd;
+err_fd:
+	put_unused_fd(fd);
+	return err;
+}
+
+void restrictedmem_register_notifier(struct file *file,
+				     struct restrictedmem_notifier *notifier)
+{
+	struct restrictedmem_data *data = file->f_mapping->private_data;
+
+	mutex_lock(&data->lock);
+	list_add(&notifier->list, &data->notifiers);
+	mutex_unlock(&data->lock);
+}
+EXPORT_SYMBOL_GPL(restrictedmem_register_notifier);
+
+void restrictedmem_unregister_notifier(struct file *file,
+				       struct restrictedmem_notifier *notifier)
+{
+	struct restrictedmem_data *data = file->f_mapping->private_data;
+
+	mutex_lock(&data->lock);
+	list_del(&notifier->list);
+	mutex_unlock(&data->lock);
+}
+EXPORT_SYMBOL_GPL(restrictedmem_unregister_notifier);
+
+int restrictedmem_get_page(struct file *file, pgoff_t offset,
+			   struct page **pagep, int *order)
+{
+	struct restrictedmem_data *data = file->f_mapping->private_data;
+	struct file *memfd = data->memfd;
+	struct page *page;
+	int ret;
+
+	ret = shmem_getpage(file_inode(memfd), offset, &page, SGP_WRITE);
+	if (ret)
+		return ret;
+
+	*pagep = page;
+	if (order)
+		*order = thp_order(compound_head(page));
+
+	SetPageUptodate(page);
+	unlock_page(page);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(restrictedmem_get_page);
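For readers following along, here is a minimal userspace sketch (not part
of the patch) of how the new system call is expected to be driven. It
assumes the x86-64 syscall number 451 from the tables above and that no
libc wrapper exists yet, so raw syscall(2) is used; the expected failure
of mmap() is inferred from restrictedmem_fops providing no ->mmap, not
from a tested kernel.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_memfd_restricted
#define __NR_memfd_restricted 451	/* from the syscall tables above */
#endif

int main(void)
{
	/* flags must currently be zero; new bits may be defined later */
	int fd = syscall(__NR_memfd_restricted, 0);
	if (fd < 0) {
		perror("memfd_restricted");
		return 1;
	}

	/* fallocate() is the supported way to size/populate the fd */
	if (fallocate(fd, 0, 0, 2 * 1024 * 1024))
		perror("fallocate");

	/* Ordinary MMU access should be refused: restrictedmem_fops has
	 * no ->mmap (and no ->read_iter/->write_iter), so this mmap()
	 * is expected to fail. */
	if (mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0) == MAP_FAILED)
		perror("mmap (expected failure)");

	close(fd);
	return 0;
}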
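And a sketch of the kernel side: how an in-kernel consumer such as KVM
would bind to a restrictedmem file. Only the restrictedmem_* calls and
types come from the patch; the my_* names are hypothetical stand-ins and
the callback bodies are placeholders.

#include <linux/fs.h>
#include <linux/restrictedmem.h>

struct my_consumer {
	struct restrictedmem_notifier notifier;	/* embedded, list-linked */
};

static void my_invalidate_start(struct restrictedmem_notifier *notifier,
				pgoff_t start, pgoff_t end)
{
	/* zap secondary page table entries covering [start, end) */
}

static void my_invalidate_end(struct restrictedmem_notifier *notifier,
			      pgoff_t start, pgoff_t end)
{
	/* allow the range to be faulted back in */
}

static const struct restrictedmem_notifier_ops my_notifier_ops = {
	.invalidate_start = my_invalidate_start,
	.invalidate_end   = my_invalidate_end,
};

static int my_bind(struct file *file, struct my_consumer *c)
{
	struct page *page;
	int order;
	int ret;

	if (!file_is_restrictedmem(file))
		return -EINVAL;

	c->notifier.ops = &my_notifier_ops;
	restrictedmem_register_notifier(file, &c->notifier);

	/* Physical pages are reached via restrictedmem_get_page(); the
	 * page comes back unlocked with a reference held (shmem
	 * semantics), so the consumer must put_page() when done. */
	ret = restrictedmem_get_page(file, 0, &page, &order);
	if (ret)
		restrictedmem_unregister_notifier(file, &c->notifier);
	return ret;
}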
Shutemov" , luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret , tabba@google.com, Michael Roth , mhocko@suse.com, Muchun Song , wei.w.wang@intel.com Subject: [PATCH v9 2/8] KVM: Extend the memslot to support fd-based private memory Date: Tue, 25 Oct 2022 23:13:38 +0800 Message-Id: <20221025151344.3784230-3-chao.p.peng@linux.intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20221025151344.3784230-1-chao.p.peng@linux.intel.com> References: <20221025151344.3784230-1-chao.p.peng@linux.intel.com> MIME-Version: 1.0 ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1666711129; a=rsa-sha256; cv=none; b=TDZxU8FJghJdu/QIIk4+2YaLG5xFUadv0WtC6ygcet3cH7cdWML5CXAAoJ0VZwZ1jO0c87 UW2j8MEDfxQx4Rs66TUjHwZxnnK7wvHb6XDoz9Aq0QYVQy8FQZ5A7CYkJjHe5GJNrkrOPf vddLLDVeXryW9AFj9LU9It9DiwUvwdk= ARC-Authentication-Results: i=1; imf10.hostedemail.com; dkim=none ("invalid DKIM record") header.d=intel.com header.s=Intel header.b=Lqzrz16r; dmarc=fail reason="No valid SPF" header.from=intel.com (policy=none); spf=none (imf10.hostedemail.com: domain of chao.p.peng@linux.intel.com has no SPF policy when checking 192.55.52.88) smtp.mailfrom=chao.p.peng@linux.intel.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1666711129; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=EiedROpUv8rfOn97xaJYxmo4YEh2NMfpIZ7wMB0ZcwU=; b=gNM4HuRyxEdUoQkC6VcaGDILPUROazgtK4BR9BcvKqjhIHVtdIwDAVKDVHwynQrHfOdl+7 gW4wjtiwTzJUslRVR/3ToWs8X57mf8nyziiaqDezzzZ5Ljqk8z0gemFJKP/5sM6E8dUnQ3 mR+Y/9yE9M7SPNRVz7YiKrb4RDZQo0Y= X-Rspam-User: Authentication-Results: imf10.hostedemail.com; dkim=none ("invalid DKIM record") header.d=intel.com header.s=Intel header.b=Lqzrz16r; dmarc=fail reason="No valid SPF" header.from=intel.com (policy=none); spf=none (imf10.hostedemail.com: domain of chao.p.peng@linux.intel.com has no SPF policy when checking 192.55.52.88) smtp.mailfrom=chao.p.peng@linux.intel.com X-Rspamd-Server: rspam11 X-Stat-Signature: nqm18potad6sesngfaa9d17pq7z4bfyk X-Rspamd-Queue-Id: E0F38C003B X-HE-Tag: 1666711128-422528 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: In memory encryption usage, guest memory may be encrypted with special key and can be accessed only by the guest itself. We call such memory private memory. It's valueless and sometimes can cause problem to allow userspace to access guest private memory. This new KVM memslot extension allows guest private memory being provided though a restrictedmem backed file descriptor(fd) and userspace is restricted to access the bookmarked memory in the fd. This new extension, indicated by the new flag KVM_MEM_PRIVATE, adds two additional KVM memslot fields restricted_fd/restricted_offset to allow userspace to instruct KVM to provide guest memory through restricted_fd. 'guest_phys_addr' is mapped at the restricted_offset of restricted_fd and the size is 'memory_size'. The extended memslot can still have the userspace_addr(hva). When use, a single memslot can maintain both private memory through restricted_fd and shared memory through userspace_addr. 
Whether the private or shared part is visible to guest is maintained by other KVM code. A restrictedmem_notifier field is also added to the memslot structure to allow the restricted_fd's backing store to notify KVM the memory change, KVM then can invalidate its page table entries. Together with the change, a new config HAVE_KVM_RESTRICTED_MEM is added and right now it is selected on X86_64 only. A KVM_CAP_PRIVATE_MEM is also introduced to indicate KVM support for KVM_MEM_PRIVATE. To make code maintenance easy, internally we use a binary compatible alias struct kvm_user_mem_region to handle both the normal and the '_ext' variants. Co-developed-by: Yu Zhang Signed-off-by: Yu Zhang Signed-off-by: Chao Peng Reviewed-by: Fuad Tabba Tested-by: Fuad Tabba --- Documentation/virt/kvm/api.rst | 48 ++++++++++++++++++++++++++++----- arch/x86/kvm/Kconfig | 2 ++ arch/x86/kvm/x86.c | 2 +- include/linux/kvm_host.h | 13 +++++++-- include/uapi/linux/kvm.h | 29 ++++++++++++++++++++ virt/kvm/Kconfig | 3 +++ virt/kvm/kvm_main.c | 49 ++++++++++++++++++++++++++++------ 7 files changed, 128 insertions(+), 18 deletions(-) diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index eee9f857a986..f3fa75649a78 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -1319,7 +1319,7 @@ yet and must be cleared on entry. :Capability: KVM_CAP_USER_MEMORY :Architectures: all :Type: vm ioctl -:Parameters: struct kvm_userspace_memory_region (in) +:Parameters: struct kvm_userspace_memory_region(_ext) (in) :Returns: 0 on success, -1 on error :: @@ -1332,9 +1332,18 @@ yet and must be cleared on entry. __u64 userspace_addr; /* start of the userspace allocated memory */ }; + struct kvm_userspace_memory_region_ext { + struct kvm_userspace_memory_region region; + __u64 restricted_offset; + __u32 restricted_fd; + __u32 pad1; + __u64 pad2[14]; + }; + /* for kvm_memory_region::flags */ #define KVM_MEM_LOG_DIRTY_PAGES (1UL << 0) #define KVM_MEM_READONLY (1UL << 1) + #define KVM_MEM_PRIVATE (1UL << 2) This ioctl allows the user to create, modify or delete a guest physical memory slot. Bits 0-15 of "slot" specify the slot id and this value @@ -1365,12 +1374,27 @@ It is recommended that the lower 21 bits of guest_phys_addr and userspace_addr be identical. This allows large pages in the guest to be backed by large pages in the host. -The flags field supports two flags: KVM_MEM_LOG_DIRTY_PAGES and -KVM_MEM_READONLY. The former can be set to instruct KVM to keep track of -writes to memory within the slot. See KVM_GET_DIRTY_LOG ioctl to know how to -use it. The latter can be set, if KVM_CAP_READONLY_MEM capability allows it, -to make a new slot read-only. In this case, writes to this memory will be -posted to userspace as KVM_EXIT_MMIO exits. +kvm_userspace_memory_region_ext struct includes all fields of +kvm_userspace_memory_region struct, while also adds additional fields for some +other features. See below description of flags field for more information. +It's recommended to use kvm_userspace_memory_region_ext in new userspace code. + +The flags field supports following flags: + +- KVM_MEM_LOG_DIRTY_PAGES to instruct KVM to keep track of writes to memory + within the slot. For more details, see KVM_GET_DIRTY_LOG ioctl. + +- KVM_MEM_READONLY, if KVM_CAP_READONLY_MEM allows, to make a new slot + read-only. In this case, writes to this memory will be posted to userspace as + KVM_EXIT_MMIO exits. 
+ +- KVM_MEM_PRIVATE, if KVM_CAP_PRIVATE_MEM allows, to indicate a new slot has + private memory backed by a file descriptor(fd) and userspace access to the + fd may be restricted. Userspace should use restricted_fd/restricted_offset in + kvm_userspace_memory_region_ext to instruct KVM to provide private memory + to guest. Userspace should guarantee not to map the same pfn indicated by + restricted_fd/restricted_offset to different gfns with multiple memslots. + Failed to do this may result undefined behavior. When the KVM_CAP_SYNC_MMU capability is available, changes in the backing of the memory region are automatically reflected into the guest. For example, an @@ -8215,6 +8239,16 @@ structure. When getting the Modified Change Topology Report value, the attr->addr must point to a byte where the value will be stored or retrieved from. +8.36 KVM_CAP_PRIVATE_MEM +------------------------ + +:Architectures: x86 + +This capability indicates that private memory is supported and userspace can +set KVM_MEM_PRIVATE flag for KVM_SET_USER_MEMORY_REGION ioctl. See +KVM_SET_USER_MEMORY_REGION for details on the usage of KVM_MEM_PRIVATE and +kvm_userspace_memory_region_ext fields. + 9. Known KVM API problems ========================= diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig index 67be7f217e37..8d2bd455c0cd 100644 --- a/arch/x86/kvm/Kconfig +++ b/arch/x86/kvm/Kconfig @@ -49,6 +49,8 @@ config KVM select SRCU select INTERVAL_TREE select HAVE_KVM_PM_NOTIFIER if PM + select HAVE_KVM_RESTRICTED_MEM if X86_64 + select RESTRICTEDMEM if HAVE_KVM_RESTRICTED_MEM help Support hosting fully virtualized guest machines using hardware virtualization extensions. You will need a fairly recent diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 4bd5f8a751de..02ad31f46dd7 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -12425,7 +12425,7 @@ void __user * __x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa, } for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) { - struct kvm_userspace_memory_region m; + struct kvm_user_mem_region m; m.slot = id | (i << 16); m.flags = 0; diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 32f259fa5801..739a7562a1f3 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -44,6 +44,7 @@ #include #include +#include #ifndef KVM_MAX_VCPU_IDS #define KVM_MAX_VCPU_IDS KVM_MAX_VCPUS @@ -575,8 +576,16 @@ struct kvm_memory_slot { u32 flags; short id; u16 as_id; + struct file *restricted_file; + loff_t restricted_offset; + struct restrictedmem_notifier notifier; }; +static inline bool kvm_slot_can_be_private(const struct kvm_memory_slot *slot) +{ + return slot && (slot->flags & KVM_MEM_PRIVATE); +} + static inline bool kvm_slot_dirty_track_enabled(const struct kvm_memory_slot *slot) { return slot->flags & KVM_MEM_LOG_DIRTY_PAGES; @@ -1103,9 +1112,9 @@ enum kvm_mr_change { }; int kvm_set_memory_region(struct kvm *kvm, - const struct kvm_userspace_memory_region *mem); + const struct kvm_user_mem_region *mem); int __kvm_set_memory_region(struct kvm *kvm, - const struct kvm_userspace_memory_region *mem); + const struct kvm_user_mem_region *mem); void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot); void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen); int kvm_arch_prepare_memory_region(struct kvm *kvm, diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index 0d5d4419139a..f1ae45c10c94 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -103,6 +103,33 @@ struct 
kvm_userspace_memory_region { __u64 userspace_addr; /* start of the userspace allocated memory */ }; +struct kvm_userspace_memory_region_ext { + struct kvm_userspace_memory_region region; + __u64 restricted_offset; + __u32 restricted_fd; + __u32 pad1; + __u64 pad2[14]; +}; + +#ifdef __KERNEL__ +/* + * kvm_user_mem_region is a kernel-only alias of kvm_userspace_memory_region_ext + * that "unpacks" kvm_userspace_memory_region so that KVM can directly access + * all fields from the top-level "extended" region. + */ +struct kvm_user_mem_region { + __u32 slot; + __u32 flags; + __u64 guest_phys_addr; + __u64 memory_size; + __u64 userspace_addr; + __u64 restricted_offset; + __u32 restricted_fd; + __u32 pad1; + __u64 pad2[14]; +}; +#endif + /* * The bit 0 ~ bit 15 of kvm_memory_region::flags are visible for userspace, * other bits are reserved for kvm internal use which are defined in @@ -110,6 +137,7 @@ struct kvm_userspace_memory_region { */ #define KVM_MEM_LOG_DIRTY_PAGES (1UL << 0) #define KVM_MEM_READONLY (1UL << 1) +#define KVM_MEM_PRIVATE (1UL << 2) /* for KVM_IRQ_LINE */ struct kvm_irq_level { @@ -1178,6 +1206,7 @@ struct kvm_ppc_resize_hpt { #define KVM_CAP_S390_ZPCI_OP 221 #define KVM_CAP_S390_CPU_TOPOLOGY 222 #define KVM_CAP_DIRTY_LOG_RING_ACQ_REL 223 +#define KVM_CAP_PRIVATE_MEM 224 #ifdef KVM_CAP_IRQ_ROUTING diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig index 800f9470e36b..9ff164c7e0cc 100644 --- a/virt/kvm/Kconfig +++ b/virt/kvm/Kconfig @@ -86,3 +86,6 @@ config KVM_XFER_TO_GUEST_WORK config HAVE_KVM_PM_NOTIFIER bool + +config HAVE_KVM_RESTRICTED_MEM + bool diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index e30f1b4ecfa5..8dace78a0278 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -1526,7 +1526,7 @@ static void kvm_replace_memslot(struct kvm *kvm, } } -static int check_memory_region_flags(const struct kvm_userspace_memory_region *mem) +static int check_memory_region_flags(const struct kvm_user_mem_region *mem) { u32 valid_flags = KVM_MEM_LOG_DIRTY_PAGES; @@ -1920,7 +1920,7 @@ static bool kvm_check_memslot_overlap(struct kvm_memslots *slots, int id, * Must be called holding kvm->slots_lock for write. 
*/ int __kvm_set_memory_region(struct kvm *kvm, - const struct kvm_userspace_memory_region *mem) + const struct kvm_user_mem_region *mem) { struct kvm_memory_slot *old, *new; struct kvm_memslots *slots; @@ -2024,7 +2024,7 @@ int __kvm_set_memory_region(struct kvm *kvm, EXPORT_SYMBOL_GPL(__kvm_set_memory_region); int kvm_set_memory_region(struct kvm *kvm, - const struct kvm_userspace_memory_region *mem) + const struct kvm_user_mem_region *mem) { int r; @@ -2036,7 +2036,7 @@ int kvm_set_memory_region(struct kvm *kvm, EXPORT_SYMBOL_GPL(kvm_set_memory_region); static int kvm_vm_ioctl_set_memory_region(struct kvm *kvm, - struct kvm_userspace_memory_region *mem) + struct kvm_user_mem_region *mem) { if ((u16)mem->slot >= KVM_USER_MEM_SLOTS) return -EINVAL; @@ -4627,6 +4627,33 @@ static int kvm_vm_ioctl_get_stats_fd(struct kvm *kvm) return fd; } +#define SANITY_CHECK_MEM_REGION_FIELD(field) \ +do { \ + BUILD_BUG_ON(offsetof(struct kvm_user_mem_region, field) != \ + offsetof(struct kvm_userspace_memory_region, field)); \ + BUILD_BUG_ON(sizeof_field(struct kvm_user_mem_region, field) != \ + sizeof_field(struct kvm_userspace_memory_region, field)); \ +} while (0) + +#define SANITY_CHECK_MEM_REGION_EXT_FIELD(field) \ +do { \ + BUILD_BUG_ON(offsetof(struct kvm_user_mem_region, field) != \ + offsetof(struct kvm_userspace_memory_region_ext, field)); \ + BUILD_BUG_ON(sizeof_field(struct kvm_user_mem_region, field) != \ + sizeof_field(struct kvm_userspace_memory_region_ext, field)); \ +} while (0) + +static void kvm_sanity_check_user_mem_region_alias(void) +{ + SANITY_CHECK_MEM_REGION_FIELD(slot); + SANITY_CHECK_MEM_REGION_FIELD(flags); + SANITY_CHECK_MEM_REGION_FIELD(guest_phys_addr); + SANITY_CHECK_MEM_REGION_FIELD(memory_size); + SANITY_CHECK_MEM_REGION_FIELD(userspace_addr); + SANITY_CHECK_MEM_REGION_EXT_FIELD(restricted_offset); + SANITY_CHECK_MEM_REGION_EXT_FIELD(restricted_fd); +} + static long kvm_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg) { @@ -4650,14 +4677,20 @@ static long kvm_vm_ioctl(struct file *filp, break; } case KVM_SET_USER_MEMORY_REGION: { - struct kvm_userspace_memory_region kvm_userspace_mem; + struct kvm_user_mem_region mem; + unsigned long size = sizeof(struct kvm_userspace_memory_region); + + kvm_sanity_check_user_mem_region_alias(); r = -EFAULT; - if (copy_from_user(&kvm_userspace_mem, argp, - sizeof(kvm_userspace_mem))) + if (copy_from_user(&mem, argp, size)) + goto out; + + r = -EINVAL; + if (mem.flags & KVM_MEM_PRIVATE) goto out; - r = kvm_vm_ioctl_set_memory_region(kvm, &kvm_userspace_mem); + r = kvm_vm_ioctl_set_memory_region(kvm, &mem); break; } case KVM_GET_DIRTY_LOG: { From patchwork Tue Oct 25 15:13:39 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chao Peng X-Patchwork-Id: 13019444 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1688FC04A95 for ; Tue, 25 Oct 2022 15:19:03 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id AEE428E0006; Tue, 25 Oct 2022 11:19:02 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id A9E0C8E0001; Tue, 25 Oct 2022 11:19:02 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 916298E0006; Tue, 25 Oct 2022 11:19:02 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from 
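A hedged userspace sketch of the new memslot interface may help here:
configuring one slot whose private half lives in a restrictedmem fd and
whose shared half lives at an ordinary hva. vm_fd, restricted_fd and hva
are assumed to already exist, and note that this patch itself still
rejects KVM_MEM_PRIVATE in kvm_vm_ioctl() with -EINVAL; the sketch only
becomes functional once the flag is accepted later in the series.

#include <linux/kvm.h>
#include <string.h>
#include <sys/ioctl.h>

static int set_private_slot(int vm_fd, int restricted_fd, void *hva,
			    __u64 gpa, __u64 size)
{
	struct kvm_userspace_memory_region_ext ext;

	/* KVM_CAP_PRIVATE_MEM advertises KVM_MEM_PRIVATE support */
	if (ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_PRIVATE_MEM) <= 0)
		return -1;

	memset(&ext, 0, sizeof(ext));
	ext.region.slot            = 0;
	ext.region.flags           = KVM_MEM_PRIVATE;
	ext.region.guest_phys_addr = gpa;
	ext.region.memory_size     = size;
	/* shared part of the slot, accessed via the hva as before */
	ext.region.userspace_addr  = (__u64)(unsigned long)hva;
	/* private part of the slot, backed by the restrictedmem fd */
	ext.restricted_fd          = restricted_fd;
	ext.restricted_offset      = 0;

	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &ext);
}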
From patchwork Tue Oct 25 15:13:39 2022
From: Chao Peng <chao.p.peng@linux.intel.com>
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org,
 linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org
Cc: Paolo Bonzini, Jonathan Corbet, Sean Christopherson, Vitaly Kuznetsov,
 Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
 Borislav Petkov, x86@kernel.org, "H. Peter Anvin", Hugh Dickins,
 Jeff Layton, "J. Bruce Fields", Andrew Morton, Shuah Khan, Mike Rapoport,
 Steven Price, "Maciej S. Szmigiero", Vlastimil Babka, Vishal Annapurve,
 Yu Zhang, Chao Peng, "Kirill A. Shutemov", luto@kernel.org,
 jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com,
 david@redhat.com, aarcange@redhat.com, ddutile@redhat.com,
 dhildenb@redhat.com, Quentin Perret, tabba@google.com, Michael Roth,
 mhocko@suse.com, Muchun Song, wei.w.wang@intel.com
Subject: [PATCH v9 3/8] KVM: Add KVM_EXIT_MEMORY_FAULT exit
Date: Tue, 25 Oct 2022 23:13:39 +0800
Message-Id: <20221025151344.3784230-4-chao.p.peng@linux.intel.com>
In-Reply-To: <20221025151344.3784230-1-chao.p.peng@linux.intel.com>
References: <20221025151344.3784230-1-chao.p.peng@linux.intel.com>

This new KVM exit allows userspace to handle memory-related errors. It
indicates that an error has happened in KVM for the guest memory range
[gpa, gpa+size). The 'flags' field carries additional information to help
userspace handle the error. Currently bit 0 is defined as 'private
memory', where '1' indicates the error happened due to a private memory
access and '0' indicates it happened due to a shared memory access.

When private memory is enabled, this new exit is used for KVM to exit to
userspace for shared <-> private memory conversion in memory-encryption
usage. In such usage there are typically two kinds of memory conversion:

- explicit conversion: happens when the guest explicitly calls into KVM
  to map a range (as private or shared); KVM then exits to userspace to
  perform the map/unmap operations.
- implicit conversion: happens in the KVM page fault handler, where KVM
  exits to userspace for an implicit conversion when the page is in a
  different state than requested (private or shared).

Suggested-by: Sean Christopherson
Co-developed-by: Yu Zhang
Signed-off-by: Yu Zhang
Signed-off-by: Chao Peng
Reviewed-by: Fuad Tabba
Tested-by: Fuad Tabba
---
 Documentation/virt/kvm/api.rst | 23 +++++++++++++++++++++++
 include/uapi/linux/kvm.h       |  9 +++++++++
 2 files changed, 32 insertions(+)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index f3fa75649a78..975688912b8c 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -6537,6 +6537,29 @@ array field represents return values. The userspace should update the return
 values of SBI call before resuming the VCPU. For more details on RISC-V SBI
 spec refer, https://github.com/riscv/riscv-sbi-doc.
 
+::
+
+		/* KVM_EXIT_MEMORY_FAULT */
+		struct {
+  #define KVM_MEMORY_EXIT_FLAG_PRIVATE	(1 << 0)
+			__u32 flags;
+			__u32 padding;
+			__u64 gpa;
+			__u64 size;
+		} memory;
+
+If the exit reason is KVM_EXIT_MEMORY_FAULT, it indicates that the VCPU has
+encountered a memory error which is not handled by the KVM kernel module and
+which userspace may choose to handle. The 'flags' field indicates the memory
+properties of the exit.
+
+ - KVM_MEMORY_EXIT_FLAG_PRIVATE - indicates the memory error is caused by a
+   private memory access when the bit is set. Otherwise the memory error is
+   caused by a shared memory access when the bit is clear.
+
+'gpa' and 'size' indicate the memory range the error occurs at. Userspace
+may handle the error and return to KVM to retry the previous memory access.
+
 ::
 
 		/* KVM_EXIT_NOTIFY */
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index f1ae45c10c94..fa60b032a405 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -300,6 +300,7 @@ struct kvm_xen_exit {
 #define KVM_EXIT_RISCV_SBI        35
 #define KVM_EXIT_RISCV_CSR        36
 #define KVM_EXIT_NOTIFY           37
+#define KVM_EXIT_MEMORY_FAULT     38
 
 /* For KVM_EXIT_INTERNAL_ERROR */
 /* Emulate instruction failed. */
@@ -538,6 +539,14 @@ struct kvm_run {
 #define KVM_NOTIFY_CONTEXT_INVALID	(1 << 0)
 			__u32 flags;
 		} notify;
+		/* KVM_EXIT_MEMORY_FAULT */
+		struct {
+#define KVM_MEMORY_EXIT_FLAG_PRIVATE	(1 << 0)
+			__u32 flags;
+			__u32 padding;
+			__u64 gpa;
+			__u64 size;
+		} memory;
 		/* Fix the size of the union. */
 		char padding[256];
 	};
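A sketch of the userspace run-loop side of this exit follows. The
convert_* hooks are hypothetical stand-ins for whatever mechanism the VMM
uses to flip a range's backing (e.g. the memslot update from patch 2),
and 'run' is the mmap()-ed kvm_run structure of the vcpu fd.

#include <linux/kvm.h>
#include <sys/ioctl.h>

extern void convert_to_private(__u64 gpa, __u64 size);	/* hypothetical */
extern void convert_to_shared(__u64 gpa, __u64 size);	/* hypothetical */

static int run_vcpu(int vcpu_fd, struct kvm_run *run)
{
	for (;;) {
		if (ioctl(vcpu_fd, KVM_RUN, 0) < 0)
			return -1;

		if (run->exit_reason != KVM_EXIT_MEMORY_FAULT)
			return 0;	/* hand off to the usual exit handling */

		/* bit 0: set = fault on a private access, clear = shared */
		if (run->memory.flags & KVM_MEMORY_EXIT_FLAG_PRIVATE)
			convert_to_private(run->memory.gpa, run->memory.size);
		else
			convert_to_shared(run->memory.gpa, run->memory.size);
		/* loop around: retry the previous memory access */
	}
}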
Shutemov" , luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret , tabba@google.com, Michael Roth , mhocko@suse.com, Muchun Song , wei.w.wang@intel.com Subject: [PATCH v9 4/8] KVM: Use gfn instead of hva for mmu_notifier_retry Date: Tue, 25 Oct 2022 23:13:40 +0800 Message-Id: <20221025151344.3784230-5-chao.p.peng@linux.intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20221025151344.3784230-1-chao.p.peng@linux.intel.com> References: <20221025151344.3784230-1-chao.p.peng@linux.intel.com> MIME-Version: 1.0 ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1666711151; a=rsa-sha256; cv=none; b=bHFjXEeIwLpYrdERg2zyLJAnUk8Y5CacMAW2b131vtOQaxMrP+Rn1167p5Ns+wgVdmLZhr azCvKxr8vi74xZ0fU1AoABdnk+3+w5ogy2GCjq3MTU+5xQAUcCn60nAD1wk7HrHZS6SiFU xIFwjt3SP+26Kc4sbYzAXHHTeVmOGmQ= ARC-Authentication-Results: i=1; imf25.hostedemail.com; dkim=none ("invalid DKIM record") header.d=intel.com header.s=Intel header.b=YXqGQ3WX; dmarc=fail reason="No valid SPF" header.from=intel.com (policy=none); spf=none (imf25.hostedemail.com: domain of chao.p.peng@linux.intel.com has no SPF policy when checking 134.134.136.31) smtp.mailfrom=chao.p.peng@linux.intel.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1666711151; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=ZeCX6+XYJ88tSgL5hc/072IEYNeLVQ1q2js914cezdE=; b=zMbn0cOF3fPaTwGYDNqBXjbuV0UoXuIYLg/AGHncti9ZgOBXJQnM3AFj4p8fnerbg2Eghc +C/9ffgqbsNrMNCDjSXswAj/4JeUVksUn5F6EqVTshrbaDZZ8ZdZ17XdMpffppFBQu6WRY OyjzuwRfz09Gaognh1EP6F34L/7irso= X-Stat-Signature: 1iqmau88qmj5cgjoqfwiwdgtag55k1kr X-Rspam-User: X-Rspamd-Server: rspam08 X-Rspamd-Queue-Id: A11A2A0003 Authentication-Results: imf25.hostedemail.com; dkim=none ("invalid DKIM record") header.d=intel.com header.s=Intel header.b=YXqGQ3WX; dmarc=fail reason="No valid SPF" header.from=intel.com (policy=none); spf=none (imf25.hostedemail.com: domain of chao.p.peng@linux.intel.com has no SPF policy when checking 134.134.136.31) smtp.mailfrom=chao.p.peng@linux.intel.com X-HE-Tag: 1666711151-638316 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Currently in mmu_notifier validate path, hva range is recorded and then checked against in the mmu_notifier_retry_hva() of the page fault path. However, for the to be introduced private memory, a page fault may not have a hva associated, checking gfn(gpa) makes more sense. For existing non private memory case, gfn is expected to continue to work. The only downside is when aliasing multiple gfns to a single hva, the current algorithm of checking multiple ranges could result in a much larger range being rejected. Such aliasing should be uncommon, so the impact is expected small. It also fixes a bug in kvm_zap_gfn_range() which has already been using gfn when calling kvm_mmu_invalidate_begin/end() while these functions accept hva in current code. 
Signed-off-by: Chao Peng Reviewed-by: Fuad Tabba Signed-off-by: Sean Christopherson --- arch/x86/kvm/mmu/mmu.c | 2 +- include/linux/kvm_host.h | 18 +++++++--------- virt/kvm/kvm_main.c | 45 ++++++++++++++++++++++++++-------------- 3 files changed, 39 insertions(+), 26 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 6f81539061d6..33b1aec44fb8 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4217,7 +4217,7 @@ static bool is_page_fault_stale(struct kvm_vcpu *vcpu, return true; return fault->slot && - mmu_invalidate_retry_hva(vcpu->kvm, mmu_seq, fault->hva); + mmu_invalidate_retry_gfn(vcpu->kvm, mmu_seq, fault->gfn); } static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 739a7562a1f3..79e5cbc35fcf 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -775,8 +775,8 @@ struct kvm { struct mmu_notifier mmu_notifier; unsigned long mmu_invalidate_seq; long mmu_invalidate_in_progress; - unsigned long mmu_invalidate_range_start; - unsigned long mmu_invalidate_range_end; + gfn_t mmu_invalidate_range_start; + gfn_t mmu_invalidate_range_end; #endif struct list_head devices; u64 manual_dirty_log_protect; @@ -1365,10 +1365,8 @@ void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc); void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc); #endif -void kvm_mmu_invalidate_begin(struct kvm *kvm, unsigned long start, - unsigned long end); -void kvm_mmu_invalidate_end(struct kvm *kvm, unsigned long start, - unsigned long end); +void kvm_mmu_invalidate_begin(struct kvm *kvm, gfn_t start, gfn_t end); +void kvm_mmu_invalidate_end(struct kvm *kvm, gfn_t start, gfn_t end); long kvm_arch_dev_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg); @@ -1937,9 +1935,9 @@ static inline int mmu_invalidate_retry(struct kvm *kvm, unsigned long mmu_seq) return 0; } -static inline int mmu_invalidate_retry_hva(struct kvm *kvm, +static inline int mmu_invalidate_retry_gfn(struct kvm *kvm, unsigned long mmu_seq, - unsigned long hva) + gfn_t gfn) { lockdep_assert_held(&kvm->mmu_lock); /* @@ -1949,8 +1947,8 @@ static inline int mmu_invalidate_retry_hva(struct kvm *kvm, * positives, due to shortcuts when handing concurrent invalidations. 
*/ if (unlikely(kvm->mmu_invalidate_in_progress) && - hva >= kvm->mmu_invalidate_range_start && - hva < kvm->mmu_invalidate_range_end) + gfn >= kvm->mmu_invalidate_range_start && + gfn < kvm->mmu_invalidate_range_end) return 1; if (kvm->mmu_invalidate_seq != mmu_seq) return 1; diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 8dace78a0278..09c9cdeb773c 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -540,8 +540,7 @@ static void kvm_mmu_notifier_invalidate_range(struct mmu_notifier *mn, typedef bool (*hva_handler_t)(struct kvm *kvm, struct kvm_gfn_range *range); -typedef void (*on_lock_fn_t)(struct kvm *kvm, unsigned long start, - unsigned long end); +typedef void (*on_lock_fn_t)(struct kvm *kvm, gfn_t start, gfn_t end); typedef void (*on_unlock_fn_t)(struct kvm *kvm); @@ -628,7 +627,8 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm, locked = true; KVM_MMU_LOCK(kvm); if (!IS_KVM_NULL_FN(range->on_lock)) - range->on_lock(kvm, range->start, range->end); + range->on_lock(kvm, gfn_range.start, + gfn_range.end); if (IS_KVM_NULL_FN(range->handler)) break; } @@ -715,15 +715,9 @@ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn, kvm_handle_hva_range(mn, address, address + 1, pte, kvm_set_spte_gfn); } -void kvm_mmu_invalidate_begin(struct kvm *kvm, unsigned long start, - unsigned long end) +static inline void update_invalidate_range(struct kvm *kvm, gfn_t start, + gfn_t end) { - /* - * The count increase must become visible at unlock time as no - * spte can be established without taking the mmu_lock and - * count is also read inside the mmu_lock critical section. - */ - kvm->mmu_invalidate_in_progress++; if (likely(kvm->mmu_invalidate_in_progress == 1)) { kvm->mmu_invalidate_range_start = start; kvm->mmu_invalidate_range_end = end; @@ -744,6 +738,28 @@ void kvm_mmu_invalidate_begin(struct kvm *kvm, unsigned long start, } } +static void mark_invalidate_in_progress(struct kvm *kvm, gfn_t start, gfn_t end) +{ + /* + * The count increase must become visible at unlock time as no + * spte can be established without taking the mmu_lock and + * count is also read inside the mmu_lock critical section. 
+ */ + kvm->mmu_invalidate_in_progress++; +} + +static bool kvm_mmu_handle_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range) +{ + update_invalidate_range(kvm, range->start, range->end); + return kvm_unmap_gfn_range(kvm, range); +} + +void kvm_mmu_invalidate_begin(struct kvm *kvm, gfn_t start, gfn_t end) +{ + mark_invalidate_in_progress(kvm, start, end); + update_invalidate_range(kvm, start, end); +} + static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn, const struct mmu_notifier_range *range) { @@ -752,8 +768,8 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn, .start = range->start, .end = range->end, .pte = __pte(0), - .handler = kvm_unmap_gfn_range, - .on_lock = kvm_mmu_invalidate_begin, + .handler = kvm_mmu_handle_gfn_range, + .on_lock = mark_invalidate_in_progress, .on_unlock = kvm_arch_guest_memory_reclaimed, .flush_on_ret = true, .may_block = mmu_notifier_range_blockable(range), @@ -791,8 +807,7 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn, return 0; } -void kvm_mmu_invalidate_end(struct kvm *kvm, unsigned long start, - unsigned long end) +void kvm_mmu_invalidate_end(struct kvm *kvm, gfn_t start, gfn_t end) { /* * This sequence increase will notify the kvm page fault that From patchwork Tue Oct 25 15:13:41 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chao Peng X-Patchwork-Id: 13019446 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6A4CDECDFA1 for ; Tue, 25 Oct 2022 15:19:24 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id D36968E0002; Tue, 25 Oct 2022 11:19:23 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id CBEEF8E0001; Tue, 25 Oct 2022 11:19:23 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id B5FF08E0002; Tue, 25 Oct 2022 11:19:23 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0014.hostedemail.com [216.40.44.14]) by kanga.kvack.org (Postfix) with ESMTP id A77EA8E0001 for ; Tue, 25 Oct 2022 11:19:23 -0400 (EDT) Received: from smtpin06.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id 75F9314078B for ; Tue, 25 Oct 2022 15:19:23 +0000 (UTC) X-FDA: 80059830606.06.BC9DC02 Received: from mga11.intel.com (mga11.intel.com [192.55.52.93]) by imf22.hostedemail.com (Postfix) with ESMTP id 9333CC0005 for ; Tue, 25 Oct 2022 15:19:22 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1666711162; x=1698247162; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=aLtIISiTrDngZitiG9gcuAA16c7d+c3InY1rPwfxPPk=; b=LeCgo9nLO67tkYLiF9OmzcmvE0wjoowzJpbenibZpU/Z2MX6BLSinVig FxWS8zvO2e9P0bsHNeRukPHMvnouHO6BvqB7pTC3TpRqLi0STLnnGYfvm tmpk+jHeOcavVNwLMJnU0L+CnoCBvptPlbCaykhsHoKUAxnR4GbNJq7hm 3fJGCaHv72Vf04XVd5zFMzSfR3hknlvr4KKA4FHJ+gXV2Z1oTQQJBjp2D nqbMfQVOWMvCLlN2RkPV9oLZewAIY/OQTN0yNW3/8nD+hc0eGKQIH0ntj TGP+KPV6fEuaBbOInjsgRait5JJTNDpfcLGrd7QyvXDOHi2peC1Mabtm4 w==; X-IronPort-AV: E=McAfee;i="6500,9779,10510"; a="305320985" X-IronPort-AV: E=Sophos;i="5.95,212,1661842800"; d="scan'208";a="305320985" Received: from fmsmga002.fm.intel.com ([10.253.24.26]) by 
fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Oct 2022 08:19:21 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6500,9779,10510"; a="736865649" X-IronPort-AV: E=Sophos;i="5.95,212,1661842800"; d="scan'208";a="736865649" Received: from chaop.bj.intel.com ([10.240.193.75]) by fmsmga002.fm.intel.com with ESMTP; 25 Oct 2022 08:19:10 -0700 From: Chao Peng To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org Cc: Paolo Bonzini , Jonathan Corbet , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org, "H . Peter Anvin" , Hugh Dickins , Jeff Layton , "J . Bruce Fields" , Andrew Morton , Shuah Khan , Mike Rapoport , Steven Price , "Maciej S . Szmigiero" , Vlastimil Babka , Vishal Annapurve , Yu Zhang , Chao Peng , "Kirill A . Shutemov" , luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret , tabba@google.com, Michael Roth , mhocko@suse.com, Muchun Song , wei.w.wang@intel.com Subject: [PATCH v9 5/8] KVM: Register/unregister the guest private memory regions Date: Tue, 25 Oct 2022 23:13:41 +0800 Message-Id: <20221025151344.3784230-6-chao.p.peng@linux.intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20221025151344.3784230-1-chao.p.peng@linux.intel.com> References: <20221025151344.3784230-1-chao.p.peng@linux.intel.com> MIME-Version: 1.0 ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1666711162; a=rsa-sha256; cv=none; b=HRhh7XGMmrH8sQD/fumK88BZNaxA0Q9yoPglQ6t8dp7KAtfh15zwbRFcNDHPcDl5bhKNrb ASk2n8vCi97UGXgFnwgGMINDE6Z+9iQ9HScpTbKnrHVHfdzWQwNV7NWFaqD6mURKeq163n Yf7PwtB5KGloP4rstTDs78ltrvRYGTU= ARC-Authentication-Results: i=1; imf22.hostedemail.com; dkim=none ("invalid DKIM record") header.d=intel.com header.s=Intel header.b=LeCgo9nL; dmarc=fail reason="No valid SPF" header.from=intel.com (policy=none); spf=none (imf22.hostedemail.com: domain of chao.p.peng@linux.intel.com has no SPF policy when checking 192.55.52.93) smtp.mailfrom=chao.p.peng@linux.intel.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1666711162; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=FwTLoCwh/MiCXurkwROzZt91PSoc9QqKgtUzt7vqWZA=; b=IPoZQGmqwIdaKqz+X4w3ggrGrnROJohu5P6kGb+vD9vSYNLP1BrWHobWtqCW9i9Z/BAP1q CmAV7/cgXcH5KpkG4I6uG9WwF+WbOLSCZ19Gnk9OJyRdE2RT826LDZ66yt+4tzycaM+IkQ xUCbl/zfCxn/Xb9Q75LRt4eEdbD+quE= X-Rspamd-Server: rspam09 X-Rspamd-Queue-Id: 9333CC0005 X-Rspam-User: Authentication-Results: imf22.hostedemail.com; dkim=none ("invalid DKIM record") header.d=intel.com header.s=Intel header.b=LeCgo9nL; dmarc=fail reason="No valid SPF" header.from=intel.com (policy=none); spf=none (imf22.hostedemail.com: domain of chao.p.peng@linux.intel.com has no SPF policy when checking 192.55.52.93) smtp.mailfrom=chao.p.peng@linux.intel.com X-Stat-Signature: 3g4gdgddrjpd9w4q4rk1r9ji3qmf3bun X-HE-Tag: 1666711162-550990 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Introduce 
generic private memory register/unregister by reusing existing SEV ioctls KVM_MEMORY_ENCRYPT_{UN,}REG_REGION. It differs from SEV case by treating address in the region as gpa instead of hva. Which cases should these ioctls go is determined by the kvm_arch_has_private_mem(). Architecture which supports KVM_PRIVATE_MEM should override this function. KVM internally defaults all guest memory as private memory and maintain the shared memory in 'mem_attr_array'. The above ioctls operate on this field and unmap existing mappings if any. Signed-off-by: Chao Peng Reviewed-by: Fuad Tabba --- Documentation/virt/kvm/api.rst | 17 ++- arch/x86/kvm/Kconfig | 1 + include/linux/kvm_host.h | 10 +- virt/kvm/Kconfig | 4 + virt/kvm/kvm_main.c | 227 +++++++++++++++++++++++++-------- 5 files changed, 198 insertions(+), 61 deletions(-) diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index 975688912b8c..08253cf498d1 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -4717,10 +4717,19 @@ Documentation/virt/kvm/x86/amd-memory-encryption.rst. This ioctl can be used to register a guest memory region which may contain encrypted data (e.g. guest RAM, SMRAM etc). -It is used in the SEV-enabled guest. When encryption is enabled, a guest -memory region may contain encrypted data. The SEV memory encryption -engine uses a tweak such that two identical plaintext pages, each at -different locations will have differing ciphertexts. So swapping or +Currently this ioctl supports registering memory regions for two usages: +private memory and SEV-encrypted memory. + +When private memory is enabled, this ioctl is used to register guest private +memory region and the addr/size of kvm_enc_region represents guest physical +address (GPA). In this usage, this ioctl zaps the existing guest memory +mappings in KVM that fallen into the region. + +When SEV-encrypted memory is enabled, this ioctl is used to register guest +memory region which may contain encrypted data for a SEV-enabled guest. The +addr/size of kvm_enc_region represents userspace address (HVA). The SEV +memory encryption engine uses a tweak such that two identical plaintext pages, +each at different locations will have differing ciphertexts. So swapping or moving ciphertext of those pages will not result in plaintext being swapped. So relocating (or migrating) physical backing pages for the SEV guest will require some additional steps. diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig index 8d2bd455c0cd..73fdfa429b20 100644 --- a/arch/x86/kvm/Kconfig +++ b/arch/x86/kvm/Kconfig @@ -51,6 +51,7 @@ config KVM select HAVE_KVM_PM_NOTIFIER if PM select HAVE_KVM_RESTRICTED_MEM if X86_64 select RESTRICTEDMEM if HAVE_KVM_RESTRICTED_MEM + select KVM_GENERIC_PRIVATE_MEM if HAVE_KVM_RESTRICTED_MEM help Support hosting fully virtualized guest machines using hardware virtualization extensions. 
You will need a fairly recent diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 79e5cbc35fcf..4ce98fa0153c 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -245,7 +245,8 @@ bool kvm_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, int kvm_async_pf_wakeup_all(struct kvm_vcpu *vcpu); #endif -#ifdef KVM_ARCH_WANT_MMU_NOTIFIER + +#if defined(KVM_ARCH_WANT_MMU_NOTIFIER) || defined(CONFIG_KVM_GENERIC_PRIVATE_MEM) struct kvm_gfn_range { struct kvm_memory_slot *slot; gfn_t start; @@ -254,6 +255,9 @@ struct kvm_gfn_range { bool may_block; }; bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range); +#endif + +#ifdef KVM_ARCH_WANT_MMU_NOTIFIER bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range); bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range); bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range); @@ -794,6 +798,9 @@ struct kvm { struct notifier_block pm_notifier; #endif char stats_id[KVM_STATS_NAME_SIZE]; +#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM + struct xarray mem_attr_array; +#endif }; #define kvm_err(fmt, ...) \ @@ -1453,6 +1460,7 @@ bool kvm_arch_dy_has_pending_interrupt(struct kvm_vcpu *vcpu); int kvm_arch_post_init_vm(struct kvm *kvm); void kvm_arch_pre_destroy_vm(struct kvm *kvm); int kvm_arch_create_vm_debugfs(struct kvm *kvm); +bool kvm_arch_has_private_mem(struct kvm *kvm); #ifndef __KVM_HAVE_ARCH_VM_ALLOC /* diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig index 9ff164c7e0cc..69ca59e82149 100644 --- a/virt/kvm/Kconfig +++ b/virt/kvm/Kconfig @@ -89,3 +89,7 @@ config HAVE_KVM_PM_NOTIFIER config HAVE_KVM_RESTRICTED_MEM bool + +config KVM_GENERIC_PRIVATE_MEM + bool + depends on HAVE_KVM_RESTRICTED_MEM diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 09c9cdeb773c..fc3835826ace 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -520,6 +520,62 @@ void kvm_destroy_vcpus(struct kvm *kvm) } EXPORT_SYMBOL_GPL(kvm_destroy_vcpus); +static inline void update_invalidate_range(struct kvm *kvm, gfn_t start, + gfn_t end) +{ + if (likely(kvm->mmu_invalidate_in_progress == 1)) { + kvm->mmu_invalidate_range_start = start; + kvm->mmu_invalidate_range_end = end; + } else { + /* + * Fully tracking multiple concurrent ranges has diminishing + * returns. Keep things simple and just find the minimal range + * which includes the current and new ranges. As there won't be + * enough information to subtract a range after its invalidate + * completes, any ranges invalidated concurrently will + * accumulate and persist until all outstanding invalidates + * complete. + */ + kvm->mmu_invalidate_range_start = + min(kvm->mmu_invalidate_range_start, start); + kvm->mmu_invalidate_range_end = + max(kvm->mmu_invalidate_range_end, end); + } +} + +static void mark_invalidate_in_progress(struct kvm *kvm, gfn_t start, gfn_t end) +{ + /* + * The count increase must become visible at unlock time as no + * spte can be established without taking the mmu_lock and + * count is also read inside the mmu_lock critical section. + */ + kvm->mmu_invalidate_in_progress++; +} + +void kvm_mmu_invalidate_begin(struct kvm *kvm, gfn_t start, gfn_t end) +{ + mark_invalidate_in_progress(kvm, start, end); + update_invalidate_range(kvm, start, end); +} + +void kvm_mmu_invalidate_end(struct kvm *kvm, gfn_t start, gfn_t end) +{ + /* + * This sequence increase will notify the kvm page fault that + * the page that is going to be mapped in the spte could have + * been freed. 
+ */ + kvm->mmu_invalidate_seq++; + smp_wmb(); + /* + * The above sequence increase must be visible before the + * below count decrease, which is ensured by the smp_wmb above + * in conjunction with the smp_rmb in mmu_invalidate_retry(). + */ + kvm->mmu_invalidate_in_progress--; +} + #if defined(CONFIG_MMU_NOTIFIER) && defined(KVM_ARCH_WANT_MMU_NOTIFIER) static inline struct kvm *mmu_notifier_to_kvm(struct mmu_notifier *mn) { @@ -715,51 +771,12 @@ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn, kvm_handle_hva_range(mn, address, address + 1, pte, kvm_set_spte_gfn); } -static inline void update_invalidate_range(struct kvm *kvm, gfn_t start, - gfn_t end) -{ - if (likely(kvm->mmu_invalidate_in_progress == 1)) { - kvm->mmu_invalidate_range_start = start; - kvm->mmu_invalidate_range_end = end; - } else { - /* - * Fully tracking multiple concurrent ranges has diminishing - * returns. Keep things simple and just find the minimal range - * which includes the current and new ranges. As there won't be - * enough information to subtract a range after its invalidate - * completes, any ranges invalidated concurrently will - * accumulate and persist until all outstanding invalidates - * complete. - */ - kvm->mmu_invalidate_range_start = - min(kvm->mmu_invalidate_range_start, start); - kvm->mmu_invalidate_range_end = - max(kvm->mmu_invalidate_range_end, end); - } -} - -static void mark_invalidate_in_progress(struct kvm *kvm, gfn_t start, gfn_t end) -{ - /* - * The count increase must become visible at unlock time as no - * spte can be established without taking the mmu_lock and - * count is also read inside the mmu_lock critical section. - */ - kvm->mmu_invalidate_in_progress++; -} - static bool kvm_mmu_handle_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range) { update_invalidate_range(kvm, range->start, range->end); return kvm_unmap_gfn_range(kvm, range); } -void kvm_mmu_invalidate_begin(struct kvm *kvm, gfn_t start, gfn_t end) -{ - mark_invalidate_in_progress(kvm, start, end); - update_invalidate_range(kvm, start, end); -} - static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn, const struct mmu_notifier_range *range) { @@ -807,23 +824,6 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn, return 0; } -void kvm_mmu_invalidate_end(struct kvm *kvm, gfn_t start, gfn_t end) -{ - /* - * This sequence increase will notify the kvm page fault that - * the page that is going to be mapped in the spte could have - * been freed. - */ - kvm->mmu_invalidate_seq++; - smp_wmb(); - /* - * The above sequence increase must be visible before the - * below count decrease, which is ensured by the smp_wmb above - * in conjunction with the smp_rmb in mmu_invalidate_retry(). 
- */ - kvm->mmu_invalidate_in_progress--; -} - static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn, const struct mmu_notifier_range *range) { @@ -937,6 +937,89 @@ static int kvm_init_mmu_notifier(struct kvm *kvm) #endif /* CONFIG_MMU_NOTIFIER && KVM_ARCH_WANT_MMU_NOTIFIER */ +#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM + +static void kvm_unmap_mem_range(struct kvm *kvm, gfn_t start, gfn_t end) +{ + struct kvm_gfn_range gfn_range; + struct kvm_memory_slot *slot; + struct kvm_memslots *slots; + struct kvm_memslot_iter iter; + int i; + int r = 0; + + gfn_range.pte = __pte(0); + gfn_range.may_block = true; + + for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) { + slots = __kvm_memslots(kvm, i); + + kvm_for_each_memslot_in_gfn_range(&iter, slots, start, end) { + slot = iter.slot; + gfn_range.start = max(start, slot->base_gfn); + gfn_range.end = min(end, slot->base_gfn + slot->npages); + if (gfn_range.start >= gfn_range.end) + continue; + gfn_range.slot = slot; + + r |= kvm_unmap_gfn_range(kvm, &gfn_range); + } + } + + if (r) + kvm_flush_remote_tlbs(kvm); +} + +#define KVM_MEM_ATTR_SHARED 0x0001 +static int kvm_vm_ioctl_set_mem_attr(struct kvm *kvm, gpa_t gpa, gpa_t size, + bool is_private) +{ + gfn_t start, end; + unsigned long i; + void *entry; + int idx; + int r = 0; + + if (size == 0 || gpa + size < gpa) + return -EINVAL; + if (gpa & (PAGE_SIZE - 1) || size & (PAGE_SIZE - 1)) + return -EINVAL; + + start = gpa >> PAGE_SHIFT; + end = (gpa + size - 1 + PAGE_SIZE) >> PAGE_SHIFT; + + /* + * Guest memory defaults to private, kvm->mem_attr_array only stores + * shared memory. + */ + entry = is_private ? NULL : xa_mk_value(KVM_MEM_ATTR_SHARED); + + idx = srcu_read_lock(&kvm->srcu); + KVM_MMU_LOCK(kvm); + kvm_mmu_invalidate_begin(kvm, start, end); + + for (i = start; i < end; i++) { + r = xa_err(xa_store(&kvm->mem_attr_array, i, entry, + GFP_KERNEL_ACCOUNT)); + if (r) + goto err; + } + + kvm_unmap_mem_range(kvm, start, end); + + goto ret; +err: + for (; i > start; i--) + xa_erase(&kvm->mem_attr_array, i); +ret: + kvm_mmu_invalidate_end(kvm, start, end); + KVM_MMU_UNLOCK(kvm); + srcu_read_unlock(&kvm->srcu, idx); + + return r; +} +#endif /* CONFIG_KVM_GENERIC_PRIVATE_MEM */ + #ifdef CONFIG_HAVE_KVM_PM_NOTIFIER static int kvm_pm_notifier_call(struct notifier_block *bl, unsigned long state, @@ -1165,6 +1248,9 @@ static struct kvm *kvm_create_vm(unsigned long type, const char *fdname) spin_lock_init(&kvm->mn_invalidate_lock); rcuwait_init(&kvm->mn_memslots_update_rcuwait); xa_init(&kvm->vcpu_array); +#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM + xa_init(&kvm->mem_attr_array); +#endif INIT_LIST_HEAD(&kvm->gpc_list); spin_lock_init(&kvm->gpc_lock); @@ -1338,6 +1424,9 @@ static void kvm_destroy_vm(struct kvm *kvm) kvm_free_memslots(kvm, &kvm->__memslots[i][0]); kvm_free_memslots(kvm, &kvm->__memslots[i][1]); } +#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM + xa_destroy(&kvm->mem_attr_array); +#endif cleanup_srcu_struct(&kvm->irq_srcu); cleanup_srcu_struct(&kvm->srcu); kvm_arch_free_vm(kvm); @@ -1541,6 +1630,11 @@ static void kvm_replace_memslot(struct kvm *kvm, } } +bool __weak kvm_arch_has_private_mem(struct kvm *kvm) +{ + return false; +} + static int check_memory_region_flags(const struct kvm_user_mem_region *mem) { u32 valid_flags = KVM_MEM_LOG_DIRTY_PAGES; @@ -4708,6 +4802,24 @@ static long kvm_vm_ioctl(struct file *filp, r = kvm_vm_ioctl_set_memory_region(kvm, &mem); break; } +#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM + case KVM_MEMORY_ENCRYPT_REG_REGION: + case KVM_MEMORY_ENCRYPT_UNREG_REGION: { + struct 
kvm_enc_region region; + bool set = ioctl == KVM_MEMORY_ENCRYPT_REG_REGION; + + if (!kvm_arch_has_private_mem(kvm)) + goto arch_vm_ioctl; + + r = -EFAULT; + if (copy_from_user(®ion, argp, sizeof(region))) + goto out; + + r = kvm_vm_ioctl_set_mem_attr(kvm, region.addr, + region.size, set); + break; + } +#endif case KVM_GET_DIRTY_LOG: { struct kvm_dirty_log log; @@ -4861,6 +4973,9 @@ static long kvm_vm_ioctl(struct file *filp, r = kvm_vm_ioctl_get_stats_fd(kvm); break; default: +#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM +arch_vm_ioctl: +#endif r = kvm_arch_vm_ioctl(filp, ioctl, arg); } out: From patchwork Tue Oct 25 15:13:42 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chao Peng X-Patchwork-Id: 13019447 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 48022C38A2D for ; Tue, 25 Oct 2022 15:19:42 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id DF8E58E0002; Tue, 25 Oct 2022 11:19:41 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id D81B38E0001; Tue, 25 Oct 2022 11:19:41 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id BFB958E0002; Tue, 25 Oct 2022 11:19:41 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0014.hostedemail.com [216.40.44.14]) by kanga.kvack.org (Postfix) with ESMTP id AFC578E0001 for ; Tue, 25 Oct 2022 11:19:41 -0400 (EDT) Received: from smtpin21.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay02.hostedemail.com (Postfix) with ESMTP id 7AC861201D2 for ; Tue, 25 Oct 2022 15:19:41 +0000 (UTC) X-FDA: 80059831362.21.FD61D99 Received: from mga02.intel.com (mga02.intel.com [134.134.136.20]) by imf17.hostedemail.com (Postfix) with ESMTP id C4B7040016 for ; Tue, 25 Oct 2022 15:19:40 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1666711180; x=1698247180; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=0kS2wF3HRGDR5BE6p747FISp0sobPFWYTxerbWnA2RQ=; b=YOw8fqPFaFLTuSYQfjXRCXso+3dwxbmtf5/C73Y0U0BiWsBzZESBVKAL 6xL/BGR4i5aqjdXHfa+Jn+G48v/dSdfmf7b9GQLBUFbsJPOWxggOdoog2 COFg7xd72YjCSnUpxrX4zI1MWisV2wM116YooHQtnsdwEE+Wap+chF/Pt M58ysOdz9pLOxkATjwB8/yLDlJV+14j1sCB6iGzm/eFdNQifH45mfqtfT ZuCnsjf5nyzqaD/cfBm14WkKstSkKc5d3ncaAj3lE+4MpmwxTS4XhHj6l DFZ/4EX/KXYgdxNOkR1KIWphS9iNy46ZI8z1y6CKgofZN0vE3jSKKbpRm g==; X-IronPort-AV: E=McAfee;i="6500,9779,10510"; a="295109495" X-IronPort-AV: E=Sophos;i="5.95,212,1661842800"; d="scan'208";a="295109495" Received: from fmsmga002.fm.intel.com ([10.253.24.26]) by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Oct 2022 08:19:39 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6500,9779,10510"; a="736865704" X-IronPort-AV: E=Sophos;i="5.95,212,1661842800"; d="scan'208";a="736865704" Received: from chaop.bj.intel.com ([10.240.193.75]) by fmsmga002.fm.intel.com with ESMTP; 25 Oct 2022 08:19:21 -0700 From: Chao Peng To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org Cc: Paolo Bonzini , Jonathan Corbet , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg 
Roedel , Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org, "H . Peter Anvin" , Hugh Dickins , Jeff Layton , "J . Bruce Fields" , Andrew Morton , Shuah Khan , Mike Rapoport , Steven Price , "Maciej S . Szmigiero" , Vlastimil Babka , Vishal Annapurve , Yu Zhang , Chao Peng , "Kirill A . Shutemov" , luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret , tabba@google.com, Michael Roth , mhocko@suse.com, Muchun Song , wei.w.wang@intel.com Subject: [PATCH v9 6/8] KVM: Update lpage info when private/shared memory are mixed Date: Tue, 25 Oct 2022 23:13:42 +0800 Message-Id: <20221025151344.3784230-7-chao.p.peng@linux.intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20221025151344.3784230-1-chao.p.peng@linux.intel.com> References: <20221025151344.3784230-1-chao.p.peng@linux.intel.com> MIME-Version: 1.0 ARC-Authentication-Results: i=1; imf17.hostedemail.com; dkim=none ("invalid DKIM record") header.d=intel.com header.s=Intel header.b=YOw8fqPF; spf=none (imf17.hostedemail.com: domain of chao.p.peng@linux.intel.com has no SPF policy when checking 134.134.136.20) smtp.mailfrom=chao.p.peng@linux.intel.com; dmarc=fail reason="No valid SPF" header.from=intel.com (policy=none) ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1666711181; a=rsa-sha256; cv=none; b=pI7Vyw5D6TwEgmnmUhECW+UE6hJVP8LWIj7y9qNGoOeIUo7lTVORG4bg6Hx60hHxbtvVoI JB4gg4dZjcwdrDcW3U2HYdlv05GRR1xNLLYaqnI7crdeiaMM+fVccZqt1EtCpJTD8uNH6a 2fEdW9rxzYhM+ANs47D88Fax83SXyLg= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1666711181; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=dmMeA0i+XC1aI8CmAVcc6CO7d+pGCGu2hdfG6xQJpR0=; b=fm+R+msNcieaIL0arfkwm77OUl2xc0f5VevP4w37JXs058MEQF6NcfGFxZs8m4rmhGu3zw BkhuWPkZKVu3kWgO9KdoslIpuNXQ8DgeKGQ+iRZOxarraNit9U7YUt20i5pP3ECyLb9FOR guZ62xY8V376v008Z5HDqXXtgRykJXc= X-Rspamd-Queue-Id: C4B7040016 Authentication-Results: imf17.hostedemail.com; dkim=none ("invalid DKIM record") header.d=intel.com header.s=Intel header.b=YOw8fqPF; spf=none (imf17.hostedemail.com: domain of chao.p.peng@linux.intel.com has no SPF policy when checking 134.134.136.20) smtp.mailfrom=chao.p.peng@linux.intel.com; dmarc=fail reason="No valid SPF" header.from=intel.com (policy=none) X-Rspamd-Server: rspam02 X-Rspam-User: X-Stat-Signature: op8dj3dbsqq9u88fpgdfehnnzz9hx9ri X-HE-Tag: 1666711180-598614 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: When private/shared memory are mixed in a large page, the lpage_info may not be accurate and should be updated with this mixed info. A large page has mixed pages can't be really mapped as large page since its private/shared pages are from different physical memory. Update lpage_info when private/shared memory attribute is changed. If both private and shared pages are within a large page region, it can't be mapped as large page. It's a bit challenge to track the mixed info in a 'count' like variable, this patch instead reserves a bit in 'disallow_lpage' to indicate a large page has mixed private/share pages. 
Signed-off-by: Chao Peng --- arch/x86/include/asm/kvm_host.h | 8 +++ arch/x86/kvm/mmu/mmu.c | 112 +++++++++++++++++++++++++++++++- arch/x86/kvm/x86.c | 2 + include/linux/kvm_host.h | 19 ++++++ virt/kvm/kvm_main.c | 16 +++-- 5 files changed, 152 insertions(+), 5 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 7551b6f9c31c..db811a54e3fd 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -37,6 +37,7 @@ #include #define __KVM_HAVE_ARCH_VCPU_DEBUGFS +#define __KVM_HAVE_ARCH_UPDATE_MEM_ATTR #define KVM_MAX_VCPUS 1024 @@ -952,6 +953,13 @@ struct kvm_vcpu_arch { #endif }; +/* + * Use a bit in disallow_lpage to indicate private/shared pages mixed at the + * level. The remaining bits are used as a reference count. + */ +#define KVM_LPAGE_PRIVATE_SHARED_MIXED (1U << 31) +#define KVM_LPAGE_COUNT_MAX ((1U << 31) - 1) + struct kvm_lpage_info { int disallow_lpage; }; diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 33b1aec44fb8..67a9823a8c35 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -762,11 +762,16 @@ static void update_gfn_disallow_lpage_count(const struct kvm_memory_slot *slot, { struct kvm_lpage_info *linfo; int i; + int disallow_count; for (i = PG_LEVEL_2M; i <= KVM_MAX_HUGEPAGE_LEVEL; ++i) { linfo = lpage_info_slot(gfn, slot, i); + + disallow_count = linfo->disallow_lpage & KVM_LPAGE_COUNT_MAX; + WARN_ON(disallow_count + count < 0 || + disallow_count > KVM_LPAGE_COUNT_MAX - count); + linfo->disallow_lpage += count; - WARN_ON(linfo->disallow_lpage < 0); } } @@ -6910,3 +6915,108 @@ void kvm_mmu_pre_destroy_vm(struct kvm *kvm) if (kvm->arch.nx_lpage_recovery_thread) kthread_stop(kvm->arch.nx_lpage_recovery_thread); } + +static inline bool linfo_is_mixed(struct kvm_lpage_info *linfo) +{ + return linfo->disallow_lpage & KVM_LPAGE_PRIVATE_SHARED_MIXED; +} + +static inline void linfo_update_mixed(struct kvm_lpage_info *linfo, bool mixed) +{ + if (mixed) + linfo->disallow_lpage |= KVM_LPAGE_PRIVATE_SHARED_MIXED; + else + linfo->disallow_lpage &= ~KVM_LPAGE_PRIVATE_SHARED_MIXED; +} + +static bool mem_attr_is_mixed_2m(struct kvm *kvm, unsigned int attr, + gfn_t start, gfn_t end) +{ + XA_STATE(xas, &kvm->mem_attr_array, start); + gfn_t gfn = start; + void *entry; + bool shared = attr == KVM_MEM_ATTR_SHARED; + bool mixed = false; + + rcu_read_lock(); + entry = xas_load(&xas); + while (gfn < end) { + if (xas_retry(&xas, entry)) + continue; + + KVM_BUG_ON(gfn != xas.xa_index, kvm); + + if ((entry && !shared) || (!entry && shared)) { + mixed = true; + goto out; + } + + entry = xas_next(&xas); + gfn++; + } +out: + rcu_read_unlock(); + return mixed; +} + +static bool mem_attr_is_mixed(struct kvm *kvm, struct kvm_memory_slot *slot, + int level, unsigned int attr, + gfn_t start, gfn_t end) +{ + unsigned long gfn; + void *entry; + + if (level == PG_LEVEL_2M) + return mem_attr_is_mixed_2m(kvm, attr, start, end); + + entry = xa_load(&kvm->mem_attr_array, start); + for (gfn = start; gfn < end; gfn += KVM_PAGES_PER_HPAGE(level - 1)) { + if (linfo_is_mixed(lpage_info_slot(gfn, slot, level - 1))) + return true; + if (xa_load(&kvm->mem_attr_array, gfn) != entry) + return true; + } + return false; +} + +void kvm_arch_update_mem_attr(struct kvm *kvm, struct kvm_memory_slot *slot, + unsigned int attr, gfn_t start, gfn_t end) +{ + + unsigned long lpage_start, lpage_end; + unsigned long gfn, pages, mask; + int level; + + WARN_ONCE(!(attr & (KVM_MEM_ATTR_PRIVATE | KVM_MEM_ATTR_SHARED)), + "Unsupported mem 
attribute.\n"); + + /* + * The sequence matters here: we update the higher level basing on the + * lower level's scanning result. + */ + for (level = PG_LEVEL_2M; level <= KVM_MAX_HUGEPAGE_LEVEL; level++) { + pages = KVM_PAGES_PER_HPAGE(level); + mask = ~(pages - 1); + lpage_start = max(start & mask, slot->base_gfn); + lpage_end = (end - 1) & mask; + + /* + * We only need to scan the head and tail page, for middle pages + * we know they are not mixed. + */ + linfo_update_mixed(lpage_info_slot(lpage_start, slot, level), + mem_attr_is_mixed(kvm, slot, level, attr, + lpage_start, start)); + + if (lpage_start == lpage_end) + return; + + for (gfn = lpage_start + pages; gfn < lpage_end; gfn += pages) + linfo_update_mixed(lpage_info_slot(gfn, slot, level), + false); + + linfo_update_mixed(lpage_info_slot(lpage_end, slot, level), + mem_attr_is_mixed(kvm, slot, level, attr, + end, lpage_end + pages)); + } +} diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 02ad31f46dd7..4276ca73bd7b 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -12563,6 +12563,8 @@ static int kvm_alloc_memslot_metadata(struct kvm *kvm, if ((slot->base_gfn + npages) & (KVM_PAGES_PER_HPAGE(level) - 1)) linfo[lpages - 1].disallow_lpage = 1; ugfn = slot->userspace_addr >> PAGE_SHIFT; + if (kvm_slot_can_be_private(slot)) + ugfn |= slot->restricted_offset >> PAGE_SHIFT; /* * If the gfn and userspace address are not aligned wrt each * other, disable large page support for this slot. diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 4ce98fa0153c..6ce36065532c 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -2284,4 +2284,23 @@ static inline void kvm_account_pgtable_pages(void *virt, int nr) /* Max number of entries allowed for each kvm dirty ring */ #define KVM_DIRTY_RING_MAX_ENTRIES 65536 +#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM + +#define KVM_MEM_ATTR_SHARED 0x0001 +#define KVM_MEM_ATTR_PRIVATE 0x0002 + +#ifdef __KVM_HAVE_ARCH_UPDATE_MEM_ATTR +void kvm_arch_update_mem_attr(struct kvm *kvm, struct kvm_memory_slot *slot, + unsigned int attr, gfn_t start, gfn_t end); +#else +static inline void kvm_arch_update_mem_attr(struct kvm *kvm, + struct kvm_memory_slot *slot, + unsigned int attr, + gfn_t start, gfn_t end) +{ +} +#endif + +#endif /* CONFIG_KVM_GENERIC_PRIVATE_MEM */ + #endif diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index fc3835826ace..13a37b4d9e97 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -939,7 +939,8 @@ static int kvm_init_mmu_notifier(struct kvm *kvm) #ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM -static void kvm_unmap_mem_range(struct kvm *kvm, gfn_t start, gfn_t end) +static void kvm_unmap_mem_range(struct kvm *kvm, gfn_t start, gfn_t end, + unsigned int attr) { struct kvm_gfn_range gfn_range; struct kvm_memory_slot *slot; @@ -963,6 +964,7 @@ static void kvm_unmap_mem_range(struct kvm *kvm, gfn_t start, gfn_t end) gfn_range.slot = slot; r |= kvm_unmap_gfn_range(kvm, &gfn_range); + kvm_arch_update_mem_attr(kvm, slot, attr, start, end); } } @@ -970,7 +972,6 @@ static void kvm_unmap_mem_range(struct kvm *kvm, gfn_t start, gfn_t end) kvm_flush_remote_tlbs(kvm); } -#define KVM_MEM_ATTR_SHARED 0x0001 static int kvm_vm_ioctl_set_mem_attr(struct kvm *kvm, gpa_t gpa, gpa_t size, bool is_private) { @@ -979,6 +980,7 @@ static int kvm_vm_ioctl_set_mem_attr(struct kvm *kvm, gpa_t gpa, gpa_t size, void *entry; int idx; int r = 0; + unsigned int attr; if (size == 0 || gpa + size < gpa) return -EINVAL; @@ -992,7 +994,13 @@ static int 
kvm_vm_ioctl_set_mem_attr(struct kvm *kvm, gpa_t gpa, gpa_t size, * Guest memory defaults to private, kvm->mem_attr_array only stores * shared memory. */ - entry = is_private ? NULL : xa_mk_value(KVM_MEM_ATTR_SHARED); + if (is_private) { + attr = KVM_MEM_ATTR_PRIVATE; + entry = NULL; + } else { + attr = KVM_MEM_ATTR_SHARED; + entry = xa_mk_value(KVM_MEM_ATTR_SHARED); + } idx = srcu_read_lock(&kvm->srcu); KVM_MMU_LOCK(kvm); @@ -1005,7 +1013,7 @@ static int kvm_vm_ioctl_set_mem_attr(struct kvm *kvm, gpa_t gpa, gpa_t size, goto err; } - kvm_unmap_mem_range(kvm, start, end); + kvm_unmap_mem_range(kvm, start, end, attr); goto ret; err: From patchwork Tue Oct 25 15:13:43 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chao Peng X-Patchwork-Id: 13019448 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id AA543C38A2D for ; Tue, 25 Oct 2022 15:19:48 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 45EC58E0003; Tue, 25 Oct 2022 11:19:48 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 40B7A8E0001; Tue, 25 Oct 2022 11:19:48 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 2D3F28E0003; Tue, 25 Oct 2022 11:19:48 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0013.hostedemail.com [216.40.44.13]) by kanga.kvack.org (Postfix) with ESMTP id 1E3F98E0001 for ; Tue, 25 Oct 2022 11:19:48 -0400 (EDT) Received: from smtpin11.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id E2865140D12 for ; Tue, 25 Oct 2022 15:19:47 +0000 (UTC) X-FDA: 80059831614.11.DB6EB3C Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by imf03.hostedemail.com (Postfix) with ESMTP id 4835120032 for ; Tue, 25 Oct 2022 15:19:46 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1666711187; x=1698247187; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=4Npg8YeluRyzE+wi0wB8FNymqEhtTKsC/t0Ec1nfRz0=; b=Xr62YP4rUhtRnu40f/IT3qilB0QpPKAefc1zp0jvg2xAaeZVRIbE2rJu AoUxsentWHq+ZTSG/wNaHCmu+eiBFkNdGDAHevSQblFeU0Wn87ijaMGAO ddTmdgTlnlviy2sZd1JdA3j+JSDw+PKcWKsCGSYDCsa2kxIg7BmbSnehT HOsPxMcoIs9vtzj/boOHR/CwTd0Cgb0MuW3PkNGfWrVQffzOVnIOGpudL DcG5euSXKSGW64CnJ3QZEV0iV7Wupq7W87gD65Jj1H846ufBWc5kuvfx8 pkR4vGrSr6XM1iyFTHYZYkVEN2Myfk6jUZR0R1OL2PlNpvohSEX/2mgn/ w==; X-IronPort-AV: E=McAfee;i="6500,9779,10510"; a="308799926" X-IronPort-AV: E=Sophos;i="5.95,212,1661842800"; d="scan'208";a="308799926" Received: from fmsmga002.fm.intel.com ([10.253.24.26]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Oct 2022 08:19:46 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6500,9779,10510"; a="736865800" X-IronPort-AV: E=Sophos;i="5.95,212,1661842800"; d="scan'208";a="736865800" Received: from chaop.bj.intel.com ([10.240.193.75]) by fmsmga002.fm.intel.com with ESMTP; 25 Oct 2022 08:19:32 -0700 From: Chao Peng To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org Cc: Paolo Bonzini , Jonathan Corbet , Sean Christopherson , Vitaly 
Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org, "H . Peter Anvin" , Hugh Dickins , Jeff Layton , "J . Bruce Fields" , Andrew Morton , Shuah Khan , Mike Rapoport , Steven Price , "Maciej S . Szmigiero" , Vlastimil Babka , Vishal Annapurve , Yu Zhang , Chao Peng , "Kirill A . Shutemov" , luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret , tabba@google.com, Michael Roth , mhocko@suse.com, Muchun Song , wei.w.wang@intel.com Subject: [PATCH v9 7/8] KVM: Handle page fault for private memory Date: Tue, 25 Oct 2022 23:13:43 +0800 Message-Id: <20221025151344.3784230-8-chao.p.peng@linux.intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20221025151344.3784230-1-chao.p.peng@linux.intel.com> References: <20221025151344.3784230-1-chao.p.peng@linux.intel.com> MIME-Version: 1.0 ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1666711187; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=1CT8ZXgp0AeheY8gksB2XQZwbbyPtszMCd40r0AHKkY=; b=MfKi64BE3Nfns3W+Xqk9DRnFokokecObBt42BpRD2UMPMbd6602Wcw2maaZRth/GNJbHSI sMfHEtMwFImS1dRdtp1D95evvoJNWN/Rku2pCSxkL8HPb1tfhKAu/zVI4uFmrOqtZifZdN Z0FUpHbVNmZJw/2t/5bUEyGVM9vU6B8= ARC-Authentication-Results: i=1; imf03.hostedemail.com; dkim=none ("invalid DKIM record") header.d=intel.com header.s=Intel header.b=Xr62YP4r; dmarc=fail reason="No valid SPF" header.from=intel.com (policy=none); spf=none (imf03.hostedemail.com: domain of chao.p.peng@linux.intel.com has no SPF policy when checking 134.134.136.24) smtp.mailfrom=chao.p.peng@linux.intel.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1666711187; a=rsa-sha256; cv=none; b=LvhOTH17ib6ivTEwQ/JmvSlvc94OROc9KbgYXmtsFkl+yxyH7D2YTG1WCY4YDUaEj49tXM VMS83fxylPHdV2/djWb6RwaV9GWeXPUM2TBM0F4/xF/3VknsyCnAlUFRjnRtEMogIfTgbU E+QBvmnmnv7WekhZ1uB3t/k6Xm/KlRs= X-Rspamd-Server: rspam01 X-Rspamd-Queue-Id: 4835120032 X-Rspam-User: Authentication-Results: imf03.hostedemail.com; dkim=none ("invalid DKIM record") header.d=intel.com header.s=Intel header.b=Xr62YP4r; dmarc=fail reason="No valid SPF" header.from=intel.com (policy=none); spf=none (imf03.hostedemail.com: domain of chao.p.peng@linux.intel.com has no SPF policy when checking 134.134.136.24) smtp.mailfrom=chao.p.peng@linux.intel.com X-Stat-Signature: 58o745ng1zak494a3k93t4a7fd9hmbkb X-HE-Tag: 1666711186-87295 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: A memslot with KVM_MEM_PRIVATE being set can include both fd-based private memory and hva-based shared memory. Architecture code (like TDX code) can tell whether the on-going fault is private or not. This patch adds a 'is_private' field to kvm_page_fault to indicate this and architecture code is expected to set it. To handle page fault for such memslot, the handling logic is different depending on whether the fault is private or shared. KVM checks if 'is_private' matches the host's view of the page (maintained in mem_attr_array). 
- For a successful match, private pfn is obtained with restrictedmem_get_page () from private fd and shared pfn is obtained with existing get_user_pages(). - For a failed match, KVM causes a KVM_EXIT_MEMORY_FAULT exit to userspace. Userspace then can convert memory between private/shared in host's view and retry the fault. Co-developed-by: Yu Zhang Signed-off-by: Yu Zhang Signed-off-by: Chao Peng --- arch/x86/kvm/mmu/mmu.c | 56 +++++++++++++++++++++++++++++++-- arch/x86/kvm/mmu/mmu_internal.h | 14 ++++++++- arch/x86/kvm/mmu/mmutrace.h | 1 + arch/x86/kvm/mmu/spte.h | 6 ++++ arch/x86/kvm/mmu/tdp_mmu.c | 3 +- include/linux/kvm_host.h | 28 +++++++++++++++++ 6 files changed, 103 insertions(+), 5 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 67a9823a8c35..10017a9f26ee 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3030,7 +3030,7 @@ static int host_pfn_mapping_level(struct kvm *kvm, gfn_t gfn, int kvm_mmu_max_mapping_level(struct kvm *kvm, const struct kvm_memory_slot *slot, gfn_t gfn, - int max_level) + int max_level, bool is_private) { struct kvm_lpage_info *linfo; int host_level; @@ -3042,6 +3042,9 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm, break; } + if (is_private) + return max_level; + if (max_level == PG_LEVEL_4K) return PG_LEVEL_4K; @@ -3070,7 +3073,8 @@ void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault * level, which will be used to do precise, accurate accounting. */ fault->req_level = kvm_mmu_max_mapping_level(vcpu->kvm, slot, - fault->gfn, fault->max_level); + fault->gfn, fault->max_level, + fault->is_private); if (fault->req_level == PG_LEVEL_4K || fault->huge_page_disallowed) return; @@ -4141,6 +4145,32 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work) kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, 0, true); } +static inline u8 order_to_level(int order) +{ + BUILD_BUG_ON(KVM_MAX_HUGEPAGE_LEVEL > PG_LEVEL_1G); + + if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G)) + return PG_LEVEL_1G; + + if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M)) + return PG_LEVEL_2M; + + return PG_LEVEL_4K; +} + +static int kvm_faultin_pfn_private(struct kvm_page_fault *fault) +{ + int order; + struct kvm_memory_slot *slot = fault->slot; + + if (kvm_restricted_mem_get_pfn(slot, fault->gfn, &fault->pfn, &order)) + return RET_PF_RETRY; + + fault->max_level = min(order_to_level(order), fault->max_level); + fault->map_writable = !(slot->flags & KVM_MEM_READONLY); + return RET_PF_CONTINUE; +} + static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) { struct kvm_memory_slot *slot = fault->slot; @@ -4173,6 +4203,22 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) return RET_PF_EMULATE; } + if (kvm_slot_can_be_private(slot) && + fault->is_private != kvm_mem_is_private(vcpu->kvm, fault->gfn)) { + vcpu->run->exit_reason = KVM_EXIT_MEMORY_FAULT; + if (fault->is_private) + vcpu->run->memory.flags = KVM_MEMORY_EXIT_FLAG_PRIVATE; + else + vcpu->run->memory.flags = 0; + vcpu->run->memory.padding = 0; + vcpu->run->memory.gpa = fault->gfn << PAGE_SHIFT; + vcpu->run->memory.size = PAGE_SIZE; + return RET_PF_USER; + } + + if (fault->is_private) + return kvm_faultin_pfn_private(fault); + async = false; fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, &async, fault->write, &fault->map_writable, @@ -5557,6 +5603,9 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err return -EIO; } + if (r == RET_PF_USER) + return 
0; + if (r < 0) return r; if (r != RET_PF_EMULATE) @@ -6408,7 +6457,8 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm, */ if (sp->role.direct && sp->role.level < kvm_mmu_max_mapping_level(kvm, slot, sp->gfn, - PG_LEVEL_NUM)) { + PG_LEVEL_NUM, + false)) { kvm_zap_one_rmap_spte(kvm, rmap_head, sptep); if (kvm_available_flush_tlb_with_range()) diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index 582def531d4d..5cdff5ca546c 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -188,6 +188,7 @@ struct kvm_page_fault { /* Derived from mmu and global state. */ const bool is_tdp; + const bool is_private; const bool nx_huge_page_workaround_enabled; /* @@ -236,6 +237,7 @@ int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault); * RET_PF_RETRY: let CPU fault again on the address. * RET_PF_EMULATE: mmio page fault, emulate the instruction directly. * RET_PF_INVALID: the spte is invalid, let the real page fault path update it. + * RET_PF_USER: need to exit to userspace to handle this fault. * RET_PF_FIXED: The faulting entry has been fixed. * RET_PF_SPURIOUS: The faulting entry was already fixed, e.g. by another vCPU. * @@ -252,6 +254,7 @@ enum { RET_PF_RETRY, RET_PF_EMULATE, RET_PF_INVALID, + RET_PF_USER, RET_PF_FIXED, RET_PF_SPURIOUS, }; @@ -309,7 +312,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, int kvm_mmu_max_mapping_level(struct kvm *kvm, const struct kvm_memory_slot *slot, gfn_t gfn, - int max_level); + int max_level, bool is_private); void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault); void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte, int cur_level); @@ -318,4 +321,13 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc); void account_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp); void unaccount_huge_nx_page(struct kvm *kvm, struct kvm_mmu_page *sp); +#ifndef CONFIG_HAVE_KVM_RESTRICTED_MEM +static inline int kvm_restricted_mem_get_pfn(struct kvm_memory_slot *slot, + gfn_t gfn, kvm_pfn_t *pfn, int *order) +{ + WARN_ON_ONCE(1); + return -EOPNOTSUPP; +} +#endif /* CONFIG_HAVE_KVM_RESTRICTED_MEM */ + #endif /* __KVM_X86_MMU_INTERNAL_H */ diff --git a/arch/x86/kvm/mmu/mmutrace.h b/arch/x86/kvm/mmu/mmutrace.h index ae86820cef69..2d7555381955 100644 --- a/arch/x86/kvm/mmu/mmutrace.h +++ b/arch/x86/kvm/mmu/mmutrace.h @@ -58,6 +58,7 @@ TRACE_DEFINE_ENUM(RET_PF_CONTINUE); TRACE_DEFINE_ENUM(RET_PF_RETRY); TRACE_DEFINE_ENUM(RET_PF_EMULATE); TRACE_DEFINE_ENUM(RET_PF_INVALID); +TRACE_DEFINE_ENUM(RET_PF_USER); TRACE_DEFINE_ENUM(RET_PF_FIXED); TRACE_DEFINE_ENUM(RET_PF_SPURIOUS); diff --git a/arch/x86/kvm/mmu/spte.h b/arch/x86/kvm/mmu/spte.h index 7670c13ce251..9acdf72537ce 100644 --- a/arch/x86/kvm/mmu/spte.h +++ b/arch/x86/kvm/mmu/spte.h @@ -315,6 +315,12 @@ static inline bool is_dirty_spte(u64 spte) return dirty_mask ? spte & dirty_mask : spte & PT_WRITABLE_MASK; } +static inline bool is_private_spte(u64 spte) +{ + /* FIXME: Query C-bit/S-bit for SEV/TDX. 
*/ + return false; +} + static inline u64 get_rsvd_bits(struct rsvd_bits_validate *rsvd_check, u64 pte, int level) { diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 672f0432d777..9f97aac90606 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -1768,7 +1768,8 @@ static void zap_collapsible_spte_range(struct kvm *kvm, continue; max_mapping_level = kvm_mmu_max_mapping_level(kvm, slot, - iter.gfn, PG_LEVEL_NUM); + iter.gfn, PG_LEVEL_NUM, + is_private_spte(iter.old_spte)); if (max_mapping_level < iter.level) continue; diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 6ce36065532c..69300fc6d572 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -2301,6 +2301,34 @@ static inline void kvm_arch_update_mem_attr(struct kvm *kvm, } #endif +static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn) +{ + return !xa_load(&kvm->mem_attr_array, gfn); +} + +#else /* !CONFIG_KVM_GENERIC_PRIVATE_MEM */ + +static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn) +{ + return false; +} + #endif /* CONFIG_KVM_GENERIC_PRIVATE_MEM */ +#ifdef CONFIG_HAVE_KVM_RESTRICTED_MEM +static inline int kvm_restricted_mem_get_pfn(struct kvm_memory_slot *slot, + gfn_t gfn, kvm_pfn_t *pfn, int *order) +{ + int ret; + struct page *page; + pgoff_t index = gfn - slot->base_gfn + + (slot->restricted_offset >> PAGE_SHIFT); + + ret = restrictedmem_get_page(slot->restricted_file, index, + &page, order); + *pfn = page_to_pfn(page); + return ret; +} +#endif /* CONFIG_HAVE_KVM_RESTRICTED_MEM */ + #endif From patchwork Tue Oct 25 15:13:44 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chao Peng X-Patchwork-Id: 13019449 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id D6B3DC04A95 for ; Tue, 25 Oct 2022 15:19:59 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 7B8CD8E0005; Tue, 25 Oct 2022 11:19:59 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 7402A8E0001; Tue, 25 Oct 2022 11:19:59 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 5BA358E0005; Tue, 25 Oct 2022 11:19:59 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id 4F0928E0001 for ; Tue, 25 Oct 2022 11:19:59 -0400 (EDT) Received: from smtpin04.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id 0D89E80662 for ; Tue, 25 Oct 2022 15:19:59 +0000 (UTC) X-FDA: 80059832118.04.0B1B9E1 Received: from mga01.intel.com (mga01.intel.com [192.55.52.88]) by imf19.hostedemail.com (Postfix) with ESMTP id 686451A003E for ; Tue, 25 Oct 2022 15:19:58 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1666711198; x=1698247198; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=gSDGYt1SO9GcrVjFFDdyGRTv5gtWiyJtwnCYk8FJbTc=; b=lgez4p6MWIjuq9XDnGl5vrkhF+5Xy/J1q4ZLwWAH5b1eCsp9lnYLX1TG +l7xfN1wFlOfywqYLWwkDDPxg29FA4EY2VUMya+qndBU5XlQWsWnQxkP7 /DssHSr1u4fQIpu0wB51TkrFFOUBtAjC7fPRotkCOtTxFbS9mfO4ZPHAB 8PZL8O9hoMPidByXOnFJrLv3A3S0trMWR+jGWoR6aTXkLflNsWq2CzG6r 
K6Kp1dOFxH1ZWM40jG5or44Pchrg6Dqk0x0kz+6a5XRFTONcGuAIVhsRI DrppYJOt9JKCL2SIVD8+Q/dO9CHd3r4eO0THS0W71voP87dcLwdeo88mY g==; X-IronPort-AV: E=McAfee;i="6500,9779,10510"; a="334307758" X-IronPort-AV: E=Sophos;i="5.95,212,1661842800"; d="scan'208";a="334307758" Received: from fmsmga002.fm.intel.com ([10.253.24.26]) by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 25 Oct 2022 08:19:56 -0700 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6500,9779,10510"; a="736865843" X-IronPort-AV: E=Sophos;i="5.95,212,1661842800"; d="scan'208";a="736865843" Received: from chaop.bj.intel.com ([10.240.193.75]) by fmsmga002.fm.intel.com with ESMTP; 25 Oct 2022 08:19:46 -0700 From: Chao Peng To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org Cc: Paolo Bonzini , Jonathan Corbet , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Thomas Gleixner , Ingo Molnar , Borislav Petkov , x86@kernel.org, "H . Peter Anvin" , Hugh Dickins , Jeff Layton , "J . Bruce Fields" , Andrew Morton , Shuah Khan , Mike Rapoport , Steven Price , "Maciej S . Szmigiero" , Vlastimil Babka , Vishal Annapurve , Yu Zhang , Chao Peng , "Kirill A . Shutemov" , luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret , tabba@google.com, Michael Roth , mhocko@suse.com, Muchun Song , wei.w.wang@intel.com Subject: [PATCH v9 8/8] KVM: Enable and expose KVM_MEM_PRIVATE Date: Tue, 25 Oct 2022 23:13:44 +0800 Message-Id: <20221025151344.3784230-9-chao.p.peng@linux.intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20221025151344.3784230-1-chao.p.peng@linux.intel.com> References: <20221025151344.3784230-1-chao.p.peng@linux.intel.com> MIME-Version: 1.0 ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1666711198; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=IB/8SLeaHlLmiP60umsYqWiy5FLt7B8Dbecuyp2aI28=; b=TkXJwe86E9tCABh9TaGdLPZQNUTB7adx4usmp07TnfI2A652+9hwkIju6cHKWXY+vV21yf pymO0uM3dDsbEJBMDFr99b+VVFkX9pDgmHoJxiMsrTzCA1YHcIU+MQ0hR1eU7z0xSNTVhn Heez1vnaNS0AhUomjGh+0xTEYI7GdjE= ARC-Authentication-Results: i=1; imf19.hostedemail.com; dkim=none ("invalid DKIM record") header.d=intel.com header.s=Intel header.b=lgez4p6M; spf=none (imf19.hostedemail.com: domain of chao.p.peng@linux.intel.com has no SPF policy when checking 192.55.52.88) smtp.mailfrom=chao.p.peng@linux.intel.com; dmarc=fail reason="No valid SPF" header.from=intel.com (policy=none) ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1666711198; a=rsa-sha256; cv=none; b=UGQBUV7myJuczz1AuJlgL6oUuO5rJDsQjKFqENMUmJgVWBM0fJpoXNWYdy4hDmgF0bnMQP tjE9EtIJJDTkPYoHyjR6kwwmGw/ILOw0TKdfs6cFGBva4wCuXS/QuTmoUnrvSUrT7u1/CN nH/dMIsVDGzcbNxtTZDZYzr8YTJUKvI= X-Rspamd-Queue-Id: 686451A003E Authentication-Results: imf19.hostedemail.com; dkim=none ("invalid DKIM record") header.d=intel.com header.s=Intel header.b=lgez4p6M; spf=none (imf19.hostedemail.com: domain of chao.p.peng@linux.intel.com has no SPF policy when checking 192.55.52.88) smtp.mailfrom=chao.p.peng@linux.intel.com; dmarc=fail reason="No valid SPF" header.from=intel.com (policy=none) 
X-Rspam-User: X-Rspamd-Server: rspam10 X-Stat-Signature: 8icrbia5h57ba37prf54uq3yydpxkr7t X-HE-Tag: 1666711198-41033 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Expose KVM_MEM_PRIVATE and memslot fields restricted_fd/offset to userspace. KVM register/unregister private memslot to fd-based memory backing store and responses to invalidation event from restrictedmem_notifier to zap the existing memory mappings in the secondary page table. Whether KVM_MEM_PRIVATE is actually exposed to userspace is determined by architecture code which can turn on it by overriding the default kvm_arch_has_private_mem(). A 'kvm' reference is added in memslot structure since in restrictedmem_notifier callback we can only obtain a memslot reference but 'kvm' is needed to do the zapping. Co-developed-by: Yu Zhang Signed-off-by: Yu Zhang Signed-off-by: Chao Peng Reviewed-by: Fuad Tabba --- include/linux/kvm_host.h | 3 +- virt/kvm/kvm_main.c | 174 +++++++++++++++++++++++++++++++++++++-- 2 files changed, 171 insertions(+), 6 deletions(-) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 69300fc6d572..e27d62c30484 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -246,7 +246,7 @@ int kvm_async_pf_wakeup_all(struct kvm_vcpu *vcpu); #endif -#if defined(KVM_ARCH_WANT_MMU_NOTIFIER) || defined(CONFIG_KVM_GENERIC_PRIVATE_MEM) +#if defined(KVM_ARCH_WANT_MMU_NOTIFIER) || defined(CONFIG_HAVE_KVM_RESTRICTED_MEM) struct kvm_gfn_range { struct kvm_memory_slot *slot; gfn_t start; @@ -583,6 +583,7 @@ struct kvm_memory_slot { struct file *restricted_file; loff_t restricted_offset; struct restrictedmem_notifier notifier; + struct kvm *kvm; }; static inline bool kvm_slot_can_be_private(const struct kvm_memory_slot *slot) diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 13a37b4d9e97..dae6a2c196ad 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -1028,6 +1028,111 @@ static int kvm_vm_ioctl_set_mem_attr(struct kvm *kvm, gpa_t gpa, gpa_t size, } #endif /* CONFIG_KVM_GENERIC_PRIVATE_MEM */ +#ifdef CONFIG_HAVE_KVM_RESTRICTED_MEM +static bool restrictedmem_range_is_valid(struct kvm_memory_slot *slot, + pgoff_t start, pgoff_t end, + gfn_t *gfn_start, gfn_t *gfn_end) +{ + unsigned long base_pgoff = slot->restricted_offset >> PAGE_SHIFT; + + if (start > base_pgoff) + *gfn_start = slot->base_gfn + start - base_pgoff; + else + *gfn_start = slot->base_gfn; + + if (end < base_pgoff + slot->npages) + *gfn_end = slot->base_gfn + end - base_pgoff; + else + *gfn_end = slot->base_gfn + slot->npages; + + if (*gfn_start >= *gfn_end) + return false; + + return true; +} + +static void kvm_restrictedmem_invalidate_begin(struct restrictedmem_notifier *notifier, + pgoff_t start, pgoff_t end) +{ + struct kvm_memory_slot *slot = container_of(notifier, + struct kvm_memory_slot, + notifier); + struct kvm *kvm = slot->kvm; + gfn_t gfn_start, gfn_end; + struct kvm_gfn_range gfn_range; + int idx; + + if (!restrictedmem_range_is_valid(slot, start, end, + &gfn_start, &gfn_end)) + return; + + idx = srcu_read_lock(&kvm->srcu); + KVM_MMU_LOCK(kvm); + + kvm_mmu_invalidate_begin(kvm, gfn_start, gfn_end); + + gfn_range.start = gfn_start; + gfn_range.end = gfn_end; + gfn_range.slot = slot; + gfn_range.pte = __pte(0); + gfn_range.may_block = true; + + if (kvm_unmap_gfn_range(kvm, &gfn_range)) + kvm_flush_remote_tlbs(kvm); + + KVM_MMU_UNLOCK(kvm); + srcu_read_unlock(&kvm->srcu, idx); +} + 

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 69300fc6d572..e27d62c30484 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -246,7 +246,7 @@ int kvm_async_pf_wakeup_all(struct kvm_vcpu *vcpu);
 #endif
 
-#if defined(KVM_ARCH_WANT_MMU_NOTIFIER) || defined(CONFIG_KVM_GENERIC_PRIVATE_MEM)
+#if defined(KVM_ARCH_WANT_MMU_NOTIFIER) || defined(CONFIG_HAVE_KVM_RESTRICTED_MEM)
 struct kvm_gfn_range {
 	struct kvm_memory_slot *slot;
 	gfn_t start;
@@ -583,6 +583,7 @@ struct kvm_memory_slot {
 	struct file *restricted_file;
 	loff_t restricted_offset;
 	struct restrictedmem_notifier notifier;
+	struct kvm *kvm;
 };
 
 static inline bool kvm_slot_can_be_private(const struct kvm_memory_slot *slot)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 13a37b4d9e97..dae6a2c196ad 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1028,6 +1028,111 @@ static int kvm_vm_ioctl_set_mem_attr(struct kvm *kvm, gpa_t gpa, gpa_t size,
 }
 #endif /* CONFIG_KVM_GENERIC_PRIVATE_MEM */
 
+#ifdef CONFIG_HAVE_KVM_RESTRICTED_MEM
+static bool restrictedmem_range_is_valid(struct kvm_memory_slot *slot,
+					 pgoff_t start, pgoff_t end,
+					 gfn_t *gfn_start, gfn_t *gfn_end)
+{
+	unsigned long base_pgoff = slot->restricted_offset >> PAGE_SHIFT;
+
+	if (start > base_pgoff)
+		*gfn_start = slot->base_gfn + start - base_pgoff;
+	else
+		*gfn_start = slot->base_gfn;
+
+	if (end < base_pgoff + slot->npages)
+		*gfn_end = slot->base_gfn + end - base_pgoff;
+	else
+		*gfn_end = slot->base_gfn + slot->npages;
+
+	if (*gfn_start >= *gfn_end)
+		return false;
+
+	return true;
+}
+
+static void kvm_restrictedmem_invalidate_begin(struct restrictedmem_notifier *notifier,
+					       pgoff_t start, pgoff_t end)
+{
+	struct kvm_memory_slot *slot = container_of(notifier,
+						    struct kvm_memory_slot,
+						    notifier);
+	struct kvm *kvm = slot->kvm;
+	gfn_t gfn_start, gfn_end;
+	struct kvm_gfn_range gfn_range;
+	int idx;
+
+	if (!restrictedmem_range_is_valid(slot, start, end,
+					  &gfn_start, &gfn_end))
+		return;
+
+	idx = srcu_read_lock(&kvm->srcu);
+	KVM_MMU_LOCK(kvm);
+
+	kvm_mmu_invalidate_begin(kvm, gfn_start, gfn_end);
+
+	gfn_range.start = gfn_start;
+	gfn_range.end = gfn_end;
+	gfn_range.slot = slot;
+	gfn_range.pte = __pte(0);
+	gfn_range.may_block = true;
+
+	if (kvm_unmap_gfn_range(kvm, &gfn_range))
+		kvm_flush_remote_tlbs(kvm);
+
+	KVM_MMU_UNLOCK(kvm);
+	srcu_read_unlock(&kvm->srcu, idx);
+}
+
+static void kvm_restrictedmem_invalidate_end(struct restrictedmem_notifier *notifier,
+					     pgoff_t start, pgoff_t end)
+{
+	struct kvm_memory_slot *slot = container_of(notifier,
+						    struct kvm_memory_slot,
+						    notifier);
+	struct kvm *kvm = slot->kvm;
+	gfn_t gfn_start, gfn_end;
+
+	if (!restrictedmem_range_is_valid(slot, start, end,
+					  &gfn_start, &gfn_end))
+		return;
+
+	KVM_MMU_LOCK(kvm);
+	kvm_mmu_invalidate_end(kvm, gfn_start, gfn_end);
+	KVM_MMU_UNLOCK(kvm);
+}
+
+static struct restrictedmem_notifier_ops kvm_restrictedmem_notifier_ops = {
+	.invalidate_start = kvm_restrictedmem_invalidate_begin,
+	.invalidate_end = kvm_restrictedmem_invalidate_end,
+};
+
+static inline void kvm_restrictedmem_register(struct kvm_memory_slot *slot)
+{
+	slot->notifier.ops = &kvm_restrictedmem_notifier_ops;
+	restrictedmem_register_notifier(slot->restricted_file, &slot->notifier);
+}
+
+static inline void kvm_restrictedmem_unregister(struct kvm_memory_slot *slot)
+{
+	restrictedmem_unregister_notifier(slot->restricted_file,
+					  &slot->notifier);
+}
+
+#else /* !CONFIG_HAVE_KVM_RESTRICTED_MEM */
+
+static inline void kvm_restrictedmem_register(struct kvm_memory_slot *slot)
+{
+	WARN_ON_ONCE(1);
+}
+
+static inline void kvm_restrictedmem_unregister(struct kvm_memory_slot *slot)
+{
+	WARN_ON_ONCE(1);
+}
+
+#endif /* CONFIG_HAVE_KVM_RESTRICTED_MEM */
+
 #ifdef CONFIG_HAVE_KVM_PM_NOTIFIER
 static int kvm_pm_notifier_call(struct notifier_block *bl,
 				unsigned long state,
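To make the clamping in restrictedmem_range_is_valid() above concrete,
here is a worked example with made-up numbers, assuming 4K pages:

/*
 * Worked example (illustrative numbers only):
 *
 *   slot->base_gfn          = 0x100;    first gfn covered by the slot
 *   slot->npages            = 0x200;    slot spans gfns [0x100, 0x300)
 *   slot->restricted_offset = 0x10000;  => base_pgoff = 0x10 pages
 *
 * An invalidation of file pages [0x18, 0x1000) is clamped to the part
 * of the file that actually backs this slot:
 *
 *   start (0x18) >  base_pgoff (0x10)    => gfn_start = 0x100 + 0x8 = 0x108
 *   end (0x1000) >= base_pgoff + npages  => gfn_end   = 0x100 + 0x200 = 0x300
 *
 * gfn_start < gfn_end, so the notifier zaps gfns [0x108, 0x300).
 */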
@@ -1072,6 +1177,11 @@ static void kvm_destroy_dirty_bitmap(struct kvm_memory_slot *memslot)
 /* This does not remove the slot from struct kvm_memslots data structures */
 static void kvm_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
 {
+	if (slot->flags & KVM_MEM_PRIVATE) {
+		kvm_restrictedmem_unregister(slot);
+		fput(slot->restricted_file);
+	}
+
 	kvm_destroy_dirty_bitmap(slot);
 
 	kvm_arch_free_memslot(kvm, slot);
@@ -1643,10 +1753,16 @@ bool __weak kvm_arch_has_private_mem(struct kvm *kvm)
 	return false;
 }
 
-static int check_memory_region_flags(const struct kvm_user_mem_region *mem)
+static int check_memory_region_flags(struct kvm *kvm,
+				     const struct kvm_user_mem_region *mem)
 {
 	u32 valid_flags = KVM_MEM_LOG_DIRTY_PAGES;
 
+#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM
+	if (kvm_arch_has_private_mem(kvm))
+		valid_flags |= KVM_MEM_PRIVATE;
+#endif
+
 #ifdef __KVM_HAVE_READONLY_MEM
 	valid_flags |= KVM_MEM_READONLY;
 #endif
@@ -1722,6 +1838,9 @@ static int kvm_prepare_memory_region(struct kvm *kvm,
 {
 	int r;
 
+	if (change == KVM_MR_CREATE && new->flags & KVM_MEM_PRIVATE)
+		kvm_restrictedmem_register(new);
+
 	/*
 	 * If dirty logging is disabled, nullify the bitmap; the old bitmap
 	 * will be freed on "commit".  If logging is enabled in both old and
@@ -1750,6 +1869,9 @@ static int kvm_prepare_memory_region(struct kvm *kvm,
 	if (r && new && new->dirty_bitmap && (!old || !old->dirty_bitmap))
 		kvm_destroy_dirty_bitmap(new);
 
+	if (r && change == KVM_MR_CREATE && new->flags & KVM_MEM_PRIVATE)
+		kvm_restrictedmem_unregister(new);
+
 	return r;
 }
 
@@ -2047,7 +2169,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	int as_id, id;
 	int r;
 
-	r = check_memory_region_flags(mem);
+	r = check_memory_region_flags(kvm, mem);
 	if (r)
 		return r;
 
@@ -2066,6 +2188,10 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	    !access_ok((void __user *)(unsigned long)mem->userspace_addr,
 			mem->memory_size))
 		return -EINVAL;
+	if (mem->flags & KVM_MEM_PRIVATE &&
+	    (mem->restricted_offset & (PAGE_SIZE - 1) ||
+	     mem->restricted_offset > U64_MAX - mem->memory_size))
+		return -EINVAL;
 	if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_MEM_SLOTS_NUM)
 		return -EINVAL;
 	if (mem->guest_phys_addr + mem->memory_size < mem->guest_phys_addr)
@@ -2104,6 +2230,9 @@ int __kvm_set_memory_region(struct kvm *kvm,
 		if ((kvm->nr_memslot_pages + npages) < kvm->nr_memslot_pages)
 			return -EINVAL;
 	} else { /* Modify an existing slot. */
+		/* Private memslots are immutable, they can only be deleted. */
+		if (mem->flags & KVM_MEM_PRIVATE)
+			return -EINVAL;
 		if ((mem->userspace_addr != old->userspace_addr) ||
 		    (npages != old->npages) ||
 		    ((mem->flags ^ old->flags) & KVM_MEM_READONLY))
@@ -2132,10 +2261,28 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	new->npages = npages;
 	new->flags = mem->flags;
 	new->userspace_addr = mem->userspace_addr;
+	if (mem->flags & KVM_MEM_PRIVATE) {
+		new->restricted_file = fget(mem->restricted_fd);
+		if (!new->restricted_file ||
+		    !file_is_restrictedmem(new->restricted_file)) {
+			r = -EINVAL;
+			goto out;
+		}
+		new->restricted_offset = mem->restricted_offset;
+	}
+
+	new->kvm = kvm;
 
 	r = kvm_set_memslot(kvm, old, new, change);
 	if (r)
-		kfree(new);
+		goto out;
+
+	return 0;
+
+out:
+	if (new->restricted_file)
+		fput(new->restricted_file);
+	kfree(new);
 	return r;
 }
 EXPORT_SYMBOL_GPL(__kvm_set_memory_region);
@@ -4604,6 +4751,11 @@ static long kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
 	case KVM_CAP_BINARY_STATS_FD:
 	case KVM_CAP_SYSTEM_EVENT_DATA:
 		return 1;
+#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM
+	case KVM_CAP_PRIVATE_MEM:
+		return 1;
+#endif
+
 	default:
 		break;
 	}
@@ -4795,16 +4947,28 @@ static long kvm_vm_ioctl(struct file *filp,
 	}
 	case KVM_SET_USER_MEMORY_REGION: {
 		struct kvm_user_mem_region mem;
-		unsigned long size = sizeof(struct kvm_userspace_memory_region);
+		unsigned int flags_offset = offsetof(typeof(mem), flags);
+		unsigned long size;
+		u32 flags;
 
 		kvm_sanity_check_user_mem_region_alias();
 
+		memset(&mem, 0, sizeof(mem));
+
 		r = -EFAULT;
+		if (get_user(flags, (u32 __user *)(argp + flags_offset)))
+			goto out;
+
+		if (flags & KVM_MEM_PRIVATE)
+			size = sizeof(struct kvm_userspace_memory_region_ext);
+		else
+			size = sizeof(struct kvm_userspace_memory_region);
+
		if (copy_from_user(&mem, argp, size))
 			goto out;
 
 		r = -EINVAL;
-		if (mem.flags & KVM_MEM_PRIVATE)
+		if ((flags ^ mem.flags) & KVM_MEM_PRIVATE)
 			goto out;
 
 		r = kvm_vm_ioctl_set_memory_region(kvm, &mem);
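For context on the kvm_arch_has_private_mem() gate used by
check_memory_region_flags() above: an architecture opts in by providing
a non-weak definition of the function. The sketch below is hypothetical
and not from this series; the vm_type field and KVM_X86_PROTECTED_VM
constant are placeholders for whatever predicate the architecture
actually uses.

bool kvm_arch_has_private_mem(struct kvm *kvm)
{
	/*
	 * Hypothetical x86 override: advertise KVM_MEM_PRIVATE only for
	 * VM types whose guest memory is backed by restrictedmem.  The
	 * vm_type/KVM_X86_PROTECTED_VM check is a placeholder.
	 */
	return kvm->arch.vm_type == KVM_X86_PROTECTED_VM;
}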