From patchwork Fri Dec 2 06:13:39 2022
From: Chao Peng
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org
Cc: Paolo Bonzini, Jonathan Corbet, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Arnd Bergmann, Naoya Horiguchi, Miaohe Lin, x86@kernel.org, H. Peter Anvin, Hugh Dickins, Jeff Layton, J. Bruce Fields, Andrew Morton, Shuah Khan, Mike Rapoport, Steven Price, Maciej S. Szmigiero, Vlastimil Babka, Vishal Annapurve, Yu Zhang, Chao Peng, Kirill A. Shutemov, luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret, tabba@google.com, Michael Roth, mhocko@suse.com, wei.w.wang@intel.com
Subject: [PATCH v10 1/9] mm: Introduce memfd_restricted system call to create restricted user memory
Date: Fri, 2 Dec 2022 14:13:39 +0800
Message-Id: <20221202061347.1070246-2-chao.p.peng@linux.intel.com>
In-Reply-To: <20221202061347.1070246-1-chao.p.peng@linux.intel.com>
References: <20221202061347.1070246-1-chao.p.peng@linux.intel.com>

From: "Kirill A. Shutemov"

Introduce the 'memfd_restricted' system call, with the ability to create memory areas that are restricted from userspace access through ordinary MMU operations (e.g. read/write/mmap). The memory content is expected to be used through the new in-kernel interface by a third kernel module.

memfd_restricted() is useful for scenarios where a file descriptor (fd) can be used as an interface into mm while restricting userspace's ability to operate on the fd. Initially it is designed to provide protection for KVM encrypted guest memory.

Normally KVM uses memfd memory by mmapping the memfd into KVM userspace (e.g. QEMU) and then using the mmapped virtual address to set up the mapping in the KVM secondary page table (e.g. EPT).
With confidential computing technologies like Intel TDX, the memfd memory may be encrypted with a special key for a special software domain (e.g. a KVM guest) and is not expected to be directly accessed by userspace. More precisely, userspace access to such encrypted memory may lead to a host crash, so it should be prevented.

memfd_restricted() provides the semantics required for KVM guest encrypted memory support: an fd created with memfd_restricted() is going to be used as the source of guest memory in a confidential computing environment, and KVM can directly interact with core-mm without needing to expose the memory content to KVM userspace.

KVM userspace is still in charge of the lifecycle of the fd. It should pass the created fd to KVM. KVM uses the new restrictedmem_get_page() to obtain the physical memory page and then uses it to populate the KVM secondary page table entries.

The restricted memfd can be fallocate-ed or hole-punched from userspace. When hole-punched, KVM gets notified through the invalidate_start()/invalidate_end() callbacks and then has a chance to remove any mapped entries for the range from the secondary page tables.

A machine check can happen for memory pages in the restricted memfd. Instead of routing this directly to userspace, we call the error() callback that KVM registered; KVM then gets a chance to handle it correctly.

memfd_restricted() itself is implemented as a shim layer on top of real memory file systems (currently tmpfs). Pages in restrictedmem are marked unmovable and unevictable; this is required for the current confidential usage, but may be changed in the future.

By default memfd_restricted() prevents userspace read, write and mmap. By defining new bits in the 'flags' argument, it can be extended to support other restricted semantics in the future.

The system call is currently wired up for the x86 arch.
Signed-off-by: Kirill A. Shutemov
Signed-off-by: Chao Peng
Reviewed-by: Fuad Tabba
Tested-by: Fuad Tabba
---
 arch/x86/entry/syscalls/syscall_32.tbl |   1 +
 arch/x86/entry/syscalls/syscall_64.tbl |   1 +
 include/linux/restrictedmem.h          |  71 ++++++
 include/linux/syscalls.h               |   1 +
 include/uapi/asm-generic/unistd.h      |   5 +-
 include/uapi/linux/magic.h             |   1 +
 kernel/sys_ni.c                        |   3 +
 mm/Kconfig                             |   4 +
 mm/Makefile                            |   1 +
 mm/memory-failure.c                    |   3 +
 mm/restrictedmem.c                     | 318 +++++++++++++++++++++++++
 11 files changed, 408 insertions(+), 1 deletion(-)
 create mode 100644 include/linux/restrictedmem.h
 create mode 100644 mm/restrictedmem.c

diff --git a/arch/x86/entry/syscalls/syscall_32.tbl b/arch/x86/entry/syscalls/syscall_32.tbl
index 320480a8db4f..dc70ba90247e 100644
--- a/arch/x86/entry/syscalls/syscall_32.tbl
+++ b/arch/x86/entry/syscalls/syscall_32.tbl
@@ -455,3 +455,4 @@
 448	i386	process_mrelease	sys_process_mrelease
 449	i386	futex_waitv		sys_futex_waitv
 450	i386	set_mempolicy_home_node	sys_set_mempolicy_home_node
+451	i386	memfd_restricted	sys_memfd_restricted
diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl
index c84d12608cd2..06516abc8318 100644
--- a/arch/x86/entry/syscalls/syscall_64.tbl
+++ b/arch/x86/entry/syscalls/syscall_64.tbl
@@ -372,6 +372,7 @@
 448	common	process_mrelease	sys_process_mrelease
 449	common	futex_waitv		sys_futex_waitv
 450	common	set_mempolicy_home_node	sys_set_mempolicy_home_node
+451	common	memfd_restricted	sys_memfd_restricted
 #
 # Due to a historical design error, certain syscalls are numbered differently
diff --git a/include/linux/restrictedmem.h b/include/linux/restrictedmem.h
new file mode 100644
index 000000000000..c2700c5daa43
--- /dev/null
+++ b/include/linux/restrictedmem.h
@@ -0,0 +1,71 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _LINUX_RESTRICTEDMEM_H
+
+#include
+#include
+#include
+
+struct restrictedmem_notifier;
+
+struct restrictedmem_notifier_ops {
+	void (*invalidate_start)(struct restrictedmem_notifier *notifier,
+				 pgoff_t start, pgoff_t end);
+	void (*invalidate_end)(struct restrictedmem_notifier *notifier,
+			       pgoff_t start, pgoff_t end);
+	void (*error)(struct restrictedmem_notifier *notifier,
+		      pgoff_t start, pgoff_t end);
+};
+
+struct restrictedmem_notifier {
+	struct list_head list;
+	const struct restrictedmem_notifier_ops *ops;
+};
+
+#ifdef CONFIG_RESTRICTEDMEM
+
+void restrictedmem_register_notifier(struct file *file,
+				     struct restrictedmem_notifier *notifier);
+void restrictedmem_unregister_notifier(struct file *file,
+				       struct restrictedmem_notifier *notifier);
+
+int restrictedmem_get_page(struct file *file, pgoff_t offset,
+			   struct page **pagep, int *order);
+
+static inline bool file_is_restrictedmem(struct file *file)
+{
+	return file->f_inode->i_sb->s_magic == RESTRICTEDMEM_MAGIC;
+}
+
+void restrictedmem_error_page(struct page *page, struct address_space *mapping);
+
+#else
+
+static inline void restrictedmem_register_notifier(struct file *file,
+						    struct restrictedmem_notifier *notifier)
+{
+}
+
+static inline void restrictedmem_unregister_notifier(struct file *file,
+						      struct restrictedmem_notifier *notifier)
+{
+}
+
+static inline int restrictedmem_get_page(struct file *file, pgoff_t offset,
+					  struct page **pagep, int *order)
+{
+	return -1;
+}
+
+static inline bool file_is_restrictedmem(struct file *file)
+{
+	return false;
+}
+
+static inline void restrictedmem_error_page(struct page *page,
+					    struct address_space *mapping)
+{
+}
+
+#endif /* CONFIG_RESTRICTEDMEM */
+
+#endif /* _LINUX_RESTRICTEDMEM_H
*/ diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h index a34b0f9a9972..f9e9e0c820c5 100644 --- a/include/linux/syscalls.h +++ b/include/linux/syscalls.h @@ -1056,6 +1056,7 @@ asmlinkage long sys_memfd_secret(unsigned int flags); asmlinkage long sys_set_mempolicy_home_node(unsigned long start, unsigned long len, unsigned long home_node, unsigned long flags); +asmlinkage long sys_memfd_restricted(unsigned int flags); /* * Architecture-specific system calls diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h index 45fa180cc56a..e93cd35e46d0 100644 --- a/include/uapi/asm-generic/unistd.h +++ b/include/uapi/asm-generic/unistd.h @@ -886,8 +886,11 @@ __SYSCALL(__NR_futex_waitv, sys_futex_waitv) #define __NR_set_mempolicy_home_node 450 __SYSCALL(__NR_set_mempolicy_home_node, sys_set_mempolicy_home_node) +#define __NR_memfd_restricted 451 +__SYSCALL(__NR_memfd_restricted, sys_memfd_restricted) + #undef __NR_syscalls -#define __NR_syscalls 451 +#define __NR_syscalls 452 /* * 32 bit systems traditionally used different diff --git a/include/uapi/linux/magic.h b/include/uapi/linux/magic.h index 6325d1d0e90f..8aa38324b90a 100644 --- a/include/uapi/linux/magic.h +++ b/include/uapi/linux/magic.h @@ -101,5 +101,6 @@ #define DMA_BUF_MAGIC 0x444d4142 /* "DMAB" */ #define DEVMEM_MAGIC 0x454d444d /* "DMEM" */ #define SECRETMEM_MAGIC 0x5345434d /* "SECM" */ +#define RESTRICTEDMEM_MAGIC 0x5245534d /* "RESM" */ #endif /* __LINUX_MAGIC_H__ */ diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c index 860b2dcf3ac4..7c4a32cbd2e7 100644 --- a/kernel/sys_ni.c +++ b/kernel/sys_ni.c @@ -360,6 +360,9 @@ COND_SYSCALL(pkey_free); /* memfd_secret */ COND_SYSCALL(memfd_secret); +/* memfd_restricted */ +COND_SYSCALL(memfd_restricted); + /* * Architecture specific weak syscall entries. */ diff --git a/mm/Kconfig b/mm/Kconfig index 57e1d8c5b505..06b0e1d6b8c1 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -1076,6 +1076,10 @@ config IO_MAPPING config SECRETMEM def_bool ARCH_HAS_SET_DIRECT_MAP && !EMBEDDED +config RESTRICTEDMEM + bool + depends on TMPFS + config ANON_VMA_NAME bool "Anonymous VMA name support" depends on PROC_FS && ADVISE_SYSCALLS && MMU diff --git a/mm/Makefile b/mm/Makefile index 8e105e5b3e29..bcbb0edf9ba1 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -121,6 +121,7 @@ obj-$(CONFIG_PAGE_EXTENSION) += page_ext.o obj-$(CONFIG_PAGE_TABLE_CHECK) += page_table_check.o obj-$(CONFIG_CMA_DEBUGFS) += cma_debug.o obj-$(CONFIG_SECRETMEM) += secretmem.o +obj-$(CONFIG_RESTRICTEDMEM) += restrictedmem.o obj-$(CONFIG_CMA_SYSFS) += cma_sysfs.o obj-$(CONFIG_USERFAULTFD) += userfaultfd.o obj-$(CONFIG_IDLE_PAGE_TRACKING) += page_idle.o diff --git a/mm/memory-failure.c b/mm/memory-failure.c index 145bb561ddb3..f91b444e471e 100644 --- a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -62,6 +62,7 @@ #include #include #include +#include #include "swap.h" #include "internal.h" #include "ras/ras_event.h" @@ -940,6 +941,8 @@ static int me_pagecache_clean(struct page_state *ps, struct page *p) goto out; } + restrictedmem_error_page(p, mapping); + /* * The shmem page is kept in page cache instead of truncating * so is expected to have an extra refcount after error-handling. 
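For orientation before the main implementation below: the memory-failure.c hunk above routes machine-check errors on restrictedmem pages to restrictedmem_error_page(), which in turn invokes the error() callback of registered notifiers. The following rough kernel-side sketch (not part of the patch; the my_* names are purely illustrative) shows how an in-kernel consumer such as KVM might use the notifier and restrictedmem_get_page() API declared in include/linux/restrictedmem.h earlier in this patch.

#include <linux/fs.h>
#include <linux/restrictedmem.h>

/* Hypothetical consumer state; names are illustrative only. */
struct my_consumer {
	struct restrictedmem_notifier notifier;
	struct file *restricted_file;
};

static void my_invalidate_start(struct restrictedmem_notifier *notifier,
				pgoff_t start, pgoff_t end)
{
	/* e.g. zap secondary page table entries covering [start, end) */
}

static void my_invalidate_end(struct restrictedmem_notifier *notifier,
			      pgoff_t start, pgoff_t end)
{
	/* invalidation finished; new faults may repopulate the range */
}

static void my_error(struct restrictedmem_notifier *notifier,
		     pgoff_t start, pgoff_t end)
{
	/* memory failure in [start, end); e.g. forward the error to the guest */
}

static const struct restrictedmem_notifier_ops my_notifier_ops = {
	.invalidate_start = my_invalidate_start,
	.invalidate_end   = my_invalidate_end,
	.error            = my_error,
};

static int my_consumer_attach(struct my_consumer *c, struct file *file)
{
	struct page *page;
	int order;

	if (!file_is_restrictedmem(file))
		return -EINVAL;

	c->restricted_file = file;
	c->notifier.ops = &my_notifier_ops;
	restrictedmem_register_notifier(file, &c->notifier);

	/* Resolve page offset 0 for use in a secondary page table. */
	return restrictedmem_get_page(file, 0, &page, &order);
}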
diff --git a/mm/restrictedmem.c b/mm/restrictedmem.c new file mode 100644 index 000000000000..56953c204e5c --- /dev/null +++ b/mm/restrictedmem.c @@ -0,0 +1,318 @@ +// SPDX-License-Identifier: GPL-2.0 +#include "linux/sbitmap.h" +#include +#include +#include +#include +#include +#include +#include + +struct restrictedmem_data { + struct mutex lock; + struct file *memfd; + struct list_head notifiers; +}; + +static void restrictedmem_invalidate_start(struct restrictedmem_data *data, + pgoff_t start, pgoff_t end) +{ + struct restrictedmem_notifier *notifier; + + mutex_lock(&data->lock); + list_for_each_entry(notifier, &data->notifiers, list) { + notifier->ops->invalidate_start(notifier, start, end); + } + mutex_unlock(&data->lock); +} + +static void restrictedmem_invalidate_end(struct restrictedmem_data *data, + pgoff_t start, pgoff_t end) +{ + struct restrictedmem_notifier *notifier; + + mutex_lock(&data->lock); + list_for_each_entry(notifier, &data->notifiers, list) { + notifier->ops->invalidate_end(notifier, start, end); + } + mutex_unlock(&data->lock); +} + +static void restrictedmem_notifier_error(struct restrictedmem_data *data, + pgoff_t start, pgoff_t end) +{ + struct restrictedmem_notifier *notifier; + + mutex_lock(&data->lock); + list_for_each_entry(notifier, &data->notifiers, list) { + notifier->ops->error(notifier, start, end); + } + mutex_unlock(&data->lock); +} + +static int restrictedmem_release(struct inode *inode, struct file *file) +{ + struct restrictedmem_data *data = inode->i_mapping->private_data; + + fput(data->memfd); + kfree(data); + return 0; +} + +static long restrictedmem_punch_hole(struct restrictedmem_data *data, int mode, + loff_t offset, loff_t len) +{ + int ret; + pgoff_t start, end; + struct file *memfd = data->memfd; + + if (!PAGE_ALIGNED(offset) || !PAGE_ALIGNED(len)) + return -EINVAL; + + start = offset >> PAGE_SHIFT; + end = (offset + len) >> PAGE_SHIFT; + + restrictedmem_invalidate_start(data, start, end); + ret = memfd->f_op->fallocate(memfd, mode, offset, len); + restrictedmem_invalidate_end(data, start, end); + + return ret; +} + +static long restrictedmem_fallocate(struct file *file, int mode, + loff_t offset, loff_t len) +{ + struct restrictedmem_data *data = file->f_mapping->private_data; + struct file *memfd = data->memfd; + + if (mode & FALLOC_FL_PUNCH_HOLE) + return restrictedmem_punch_hole(data, mode, offset, len); + + return memfd->f_op->fallocate(memfd, mode, offset, len); +} + +static const struct file_operations restrictedmem_fops = { + .release = restrictedmem_release, + .fallocate = restrictedmem_fallocate, +}; + +static int restrictedmem_getattr(struct user_namespace *mnt_userns, + const struct path *path, struct kstat *stat, + u32 request_mask, unsigned int query_flags) +{ + struct inode *inode = d_inode(path->dentry); + struct restrictedmem_data *data = inode->i_mapping->private_data; + struct file *memfd = data->memfd; + + return memfd->f_inode->i_op->getattr(mnt_userns, path, stat, + request_mask, query_flags); +} + +static int restrictedmem_setattr(struct user_namespace *mnt_userns, + struct dentry *dentry, struct iattr *attr) +{ + struct inode *inode = d_inode(dentry); + struct restrictedmem_data *data = inode->i_mapping->private_data; + struct file *memfd = data->memfd; + int ret; + + if (attr->ia_valid & ATTR_SIZE) { + if (memfd->f_inode->i_size) + return -EPERM; + + if (!PAGE_ALIGNED(attr->ia_size)) + return -EINVAL; + } + + ret = memfd->f_inode->i_op->setattr(mnt_userns, + file_dentry(memfd), attr); + return ret; +} + +static 
const struct inode_operations restrictedmem_iops = { + .getattr = restrictedmem_getattr, + .setattr = restrictedmem_setattr, +}; + +static int restrictedmem_init_fs_context(struct fs_context *fc) +{ + if (!init_pseudo(fc, RESTRICTEDMEM_MAGIC)) + return -ENOMEM; + + fc->s_iflags |= SB_I_NOEXEC; + return 0; +} + +static struct file_system_type restrictedmem_fs = { + .owner = THIS_MODULE, + .name = "memfd:restrictedmem", + .init_fs_context = restrictedmem_init_fs_context, + .kill_sb = kill_anon_super, +}; + +static struct vfsmount *restrictedmem_mnt; + +static __init int restrictedmem_init(void) +{ + restrictedmem_mnt = kern_mount(&restrictedmem_fs); + if (IS_ERR(restrictedmem_mnt)) + return PTR_ERR(restrictedmem_mnt); + return 0; +} +fs_initcall(restrictedmem_init); + +static struct file *restrictedmem_file_create(struct file *memfd) +{ + struct restrictedmem_data *data; + struct address_space *mapping; + struct inode *inode; + struct file *file; + + data = kzalloc(sizeof(*data), GFP_KERNEL); + if (!data) + return ERR_PTR(-ENOMEM); + + data->memfd = memfd; + mutex_init(&data->lock); + INIT_LIST_HEAD(&data->notifiers); + + inode = alloc_anon_inode(restrictedmem_mnt->mnt_sb); + if (IS_ERR(inode)) { + kfree(data); + return ERR_CAST(inode); + } + + inode->i_mode |= S_IFREG; + inode->i_op = &restrictedmem_iops; + inode->i_mapping->private_data = data; + + file = alloc_file_pseudo(inode, restrictedmem_mnt, + "restrictedmem", O_RDWR, + &restrictedmem_fops); + if (IS_ERR(file)) { + iput(inode); + kfree(data); + return ERR_CAST(file); + } + + file->f_flags |= O_LARGEFILE; + + /* + * These pages are currently unmovable so don't place them into movable + * pageblocks (e.g. CMA and ZONE_MOVABLE). + */ + mapping = memfd->f_mapping; + mapping_set_unevictable(mapping); + mapping_set_gfp_mask(mapping, + mapping_gfp_mask(mapping) & ~__GFP_MOVABLE); + + return file; +} + +SYSCALL_DEFINE1(memfd_restricted, unsigned int, flags) +{ + struct file *file, *restricted_file; + int fd, err; + + if (flags) + return -EINVAL; + + fd = get_unused_fd_flags(0); + if (fd < 0) + return fd; + + file = shmem_file_setup("memfd:restrictedmem", 0, VM_NORESERVE); + if (IS_ERR(file)) { + err = PTR_ERR(file); + goto err_fd; + } + file->f_mode |= FMODE_LSEEK | FMODE_PREAD | FMODE_PWRITE; + file->f_flags |= O_LARGEFILE; + + restricted_file = restrictedmem_file_create(file); + if (IS_ERR(restricted_file)) { + err = PTR_ERR(restricted_file); + fput(file); + goto err_fd; + } + + fd_install(fd, restricted_file); + return fd; +err_fd: + put_unused_fd(fd); + return err; +} + +void restrictedmem_register_notifier(struct file *file, + struct restrictedmem_notifier *notifier) +{ + struct restrictedmem_data *data = file->f_mapping->private_data; + + mutex_lock(&data->lock); + list_add(¬ifier->list, &data->notifiers); + mutex_unlock(&data->lock); +} +EXPORT_SYMBOL_GPL(restrictedmem_register_notifier); + +void restrictedmem_unregister_notifier(struct file *file, + struct restrictedmem_notifier *notifier) +{ + struct restrictedmem_data *data = file->f_mapping->private_data; + + mutex_lock(&data->lock); + list_del(¬ifier->list); + mutex_unlock(&data->lock); +} +EXPORT_SYMBOL_GPL(restrictedmem_unregister_notifier); + +int restrictedmem_get_page(struct file *file, pgoff_t offset, + struct page **pagep, int *order) +{ + struct restrictedmem_data *data = file->f_mapping->private_data; + struct file *memfd = data->memfd; + struct folio *folio; + struct page *page; + int ret; + + ret = shmem_get_folio(file_inode(memfd), offset, &folio, SGP_WRITE); + if 
(ret)
+		return ret;
+
+	page = folio_file_page(folio, offset);
+	*pagep = page;
+	if (order)
+		*order = thp_order(compound_head(page));
+
+	SetPageUptodate(page);
+	unlock_page(page);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(restrictedmem_get_page);
+
+void restrictedmem_error_page(struct page *page, struct address_space *mapping)
+{
+	struct super_block *sb = restrictedmem_mnt->mnt_sb;
+	struct inode *inode, *next;
+
+	if (!shmem_mapping(mapping))
+		return;
+
+	spin_lock(&sb->s_inode_list_lock);
+	list_for_each_entry_safe(inode, next, &sb->s_inodes, i_sb_list) {
+		struct restrictedmem_data *data = inode->i_mapping->private_data;
+		struct file *memfd = data->memfd;
+
+		if (memfd->f_mapping == mapping) {
+			pgoff_t start, end;
+
+			spin_unlock(&sb->s_inode_list_lock);
+
+			start = page->index;
+			end = start + thp_nr_pages(page);
+			restrictedmem_notifier_error(data, start, end);
+			return;
+		}
+	}
+	spin_unlock(&sb->s_inode_list_lock);
+}
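To close out this patch, here is a minimal userspace sketch (not part of the patch) of how the new system call could be exercised. It assumes the x86 syscall number 451 added by the syscall tables above and uses a raw syscall(2), since no libc wrapper exists for memfd_restricted().

/* Illustrative userspace test, not part of the patch. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_memfd_restricted
#define __NR_memfd_restricted 451	/* from the syscall tables above */
#endif

int main(void)
{
	/* No flags are defined yet; any non-zero value fails with EINVAL. */
	int fd = syscall(__NR_memfd_restricted, 0);

	if (fd < 0) {
		perror("memfd_restricted");
		return 1;
	}

	/* Allocate backing pages in the underlying tmpfs file. */
	if (fallocate(fd, 0, 0, 2 * 1024 * 1024) < 0)
		perror("fallocate");

	/*
	 * Hole punching (page-aligned offset/length) is forwarded to the
	 * backing file and triggers the invalidate_start()/invalidate_end()
	 * notifier callbacks.
	 */
	if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, 0, 4096) < 0)
		perror("fallocate punch");

	/*
	 * read(), write() and mmap() on this fd are intentionally not
	 * supported; the fd is meant to be handed to KVM as the source of
	 * guest private memory.
	 */
	close(fd);
	return 0;
}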
From patchwork Fri Dec 2 06:13:40 2022
From: Chao Peng
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org
Subject: [PATCH v10 2/9] KVM: Introduce per-page memory attributes
Date: Fri, 2 Dec 2022 14:13:40 +0800
Message-Id: <20221202061347.1070246-3-chao.p.peng@linux.intel.com>
In-Reply-To: <20221202061347.1070246-1-chao.p.peng@linux.intel.com>
References: <20221202061347.1070246-1-chao.p.peng@linux.intel.com>

In confidential computing use cases, whether a page is private or shared is necessary information for KVM to perform operations like page fault handling, page zapping, etc. There are other potential use cases for per-page memory attributes, e.g. making memory read-only (or no-exec, or exec-only, etc.) without having to modify memslots.

Introduce two ioctls (advertised by KVM_CAP_MEMORY_ATTRIBUTES) to allow userspace to operate on the per-page memory attributes:
- KVM_SET_MEMORY_ATTRIBUTES to set the per-page memory attributes for a guest memory range.
- KVM_GET_SUPPORTED_MEMORY_ATTRIBUTES to return the memory attributes that KVM supports.

KVM internally uses an xarray to store the per-page memory attributes.

Suggested-by: Sean Christopherson
Signed-off-by: Chao Peng
Link: https://lore.kernel.org/all/Y2WB48kD0J4VGynX@google.com/
Reviewed-by: Fuad Tabba
Tested-by: Fuad Tabba
---
 Documentation/virt/kvm/api.rst | 63 ++++++++++++++++++++++++++++
 arch/x86/kvm/Kconfig           |  1 +
 include/linux/kvm_host.h       |  3 ++
 include/uapi/linux/kvm.h       | 17 ++++++++
 virt/kvm/Kconfig               |  3 ++
 virt/kvm/kvm_main.c            | 76 ++++++++++++++++++++++++++++++++++
 6 files changed, 163 insertions(+)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 5617bc4f899f..bb2f709c0900 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -5952,6 +5952,59 @@ delivery must be provided via the "reg_aen" struct.
 The "pad" and "reserved" fields may be used for future extensions and should be
 set to 0s by userspace.
+4.138 KVM_GET_SUPPORTED_MEMORY_ATTRIBUTES
+-----------------------------------------
+
+:Capability: KVM_CAP_MEMORY_ATTRIBUTES
+:Architectures: x86
+:Type: vm ioctl
+:Parameters: u64 memory attributes bitmask(out)
+:Returns: 0 on success, <0 on error
+
+Returns supported memory attributes bitmask. Supported memory attributes will
+have the corresponding bits set in u64 memory attributes bitmask.
+ +The following memory attributes are defined:: + + #define KVM_MEMORY_ATTRIBUTE_READ (1ULL << 0) + #define KVM_MEMORY_ATTRIBUTE_WRITE (1ULL << 1) + #define KVM_MEMORY_ATTRIBUTE_EXECUTE (1ULL << 2) + #define KVM_MEMORY_ATTRIBUTE_PRIVATE (1ULL << 3) + +4.139 KVM_SET_MEMORY_ATTRIBUTES +----------------------------------------- + +:Capability: KVM_CAP_MEMORY_ATTRIBUTES +:Architectures: x86 +:Type: vm ioctl +:Parameters: struct kvm_memory_attributes(in/out) +:Returns: 0 on success, <0 on error + +Sets memory attributes for pages in a guest memory range. Parameters are +specified via the following structure:: + + struct kvm_memory_attributes { + __u64 address; + __u64 size; + __u64 attributes; + __u64 flags; + }; + +The user sets the per-page memory attributes to a guest memory range indicated +by address/size, and in return KVM adjusts address and size to reflect the +actual pages of the memory range have been successfully set to the attributes. +If the call returns 0, "address" is updated to the last successful address + 1 +and "size" is updated to the remaining address size that has not been set +successfully. The user should check the return value as well as the size to +decide if the operation succeeded for the whole range or not. The user may want +to retry the operation with the returned address/size if the previous range was +partially successful. + +Both address and size should be page aligned and the supported attributes can be +retrieved with KVM_GET_SUPPORTED_MEMORY_ATTRIBUTES. + +The "flags" field may be used for future extensions and should be set to 0s. + 5. The kvm_run structure ======================== @@ -8270,6 +8323,16 @@ structure. When getting the Modified Change Topology Report value, the attr->addr must point to a byte where the value will be stored or retrieved from. +8.40 KVM_CAP_MEMORY_ATTRIBUTES +------------------------------ + +:Capability: KVM_CAP_MEMORY_ATTRIBUTES +:Architectures: x86 +:Type: vm + +This capability indicates KVM supports per-page memory attributes and ioctls +KVM_GET_SUPPORTED_MEMORY_ATTRIBUTES/KVM_SET_MEMORY_ATTRIBUTES are available. + 9. Known KVM API problems ========================= diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig index fbeaa9ddef59..a8e379a3afee 100644 --- a/arch/x86/kvm/Kconfig +++ b/arch/x86/kvm/Kconfig @@ -49,6 +49,7 @@ config KVM select SRCU select INTERVAL_TREE select HAVE_KVM_PM_NOTIFIER if PM + select HAVE_KVM_MEMORY_ATTRIBUTES help Support hosting fully virtualized guest machines using hardware virtualization extensions. 
You will need a fairly recent diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 8f874a964313..a784e2b06625 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -800,6 +800,9 @@ struct kvm { #ifdef CONFIG_HAVE_KVM_PM_NOTIFIER struct notifier_block pm_notifier; +#endif +#ifdef CONFIG_HAVE_KVM_MEMORY_ATTRIBUTES + struct xarray mem_attr_array; #endif char stats_id[KVM_STATS_NAME_SIZE]; }; diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index 64dfe9c07c87..5d0941acb5bb 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -1182,6 +1182,7 @@ struct kvm_ppc_resize_hpt { #define KVM_CAP_S390_CPU_TOPOLOGY 222 #define KVM_CAP_DIRTY_LOG_RING_ACQ_REL 223 #define KVM_CAP_S390_PROTECTED_ASYNC_DISABLE 224 +#define KVM_CAP_MEMORY_ATTRIBUTES 225 #ifdef KVM_CAP_IRQ_ROUTING @@ -2238,4 +2239,20 @@ struct kvm_s390_zpci_op { /* flags for kvm_s390_zpci_op->u.reg_aen.flags */ #define KVM_S390_ZPCIOP_REGAEN_HOST (1 << 0) +/* Available with KVM_CAP_MEMORY_ATTRIBUTES */ +#define KVM_GET_SUPPORTED_MEMORY_ATTRIBUTES _IOR(KVMIO, 0xd2, __u64) +#define KVM_SET_MEMORY_ATTRIBUTES _IOWR(KVMIO, 0xd3, struct kvm_memory_attributes) + +struct kvm_memory_attributes { + __u64 address; + __u64 size; + __u64 attributes; + __u64 flags; +}; + +#define KVM_MEMORY_ATTRIBUTE_READ (1ULL << 0) +#define KVM_MEMORY_ATTRIBUTE_WRITE (1ULL << 1) +#define KVM_MEMORY_ATTRIBUTE_EXECUTE (1ULL << 2) +#define KVM_MEMORY_ATTRIBUTE_PRIVATE (1ULL << 3) + #endif /* __LINUX_KVM_H */ diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig index 800f9470e36b..effdea5dd4f0 100644 --- a/virt/kvm/Kconfig +++ b/virt/kvm/Kconfig @@ -19,6 +19,9 @@ config HAVE_KVM_IRQ_ROUTING config HAVE_KVM_DIRTY_RING bool +config HAVE_KVM_MEMORY_ATTRIBUTES + bool + # Only strongly ordered architectures can select this, as it doesn't # put any explicit constraint on userspace ordering. They can also # select the _ACQ_REL version. diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 1782c4555d94..7f0f5e9f2406 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -1150,6 +1150,9 @@ static struct kvm *kvm_create_vm(unsigned long type, const char *fdname) spin_lock_init(&kvm->mn_invalidate_lock); rcuwait_init(&kvm->mn_memslots_update_rcuwait); xa_init(&kvm->vcpu_array); +#ifdef CONFIG_HAVE_KVM_MEMORY_ATTRIBUTES + xa_init(&kvm->mem_attr_array); +#endif INIT_LIST_HEAD(&kvm->gpc_list); spin_lock_init(&kvm->gpc_lock); @@ -1323,6 +1326,9 @@ static void kvm_destroy_vm(struct kvm *kvm) kvm_free_memslots(kvm, &kvm->__memslots[i][0]); kvm_free_memslots(kvm, &kvm->__memslots[i][1]); } +#ifdef CONFIG_HAVE_KVM_MEMORY_ATTRIBUTES + xa_destroy(&kvm->mem_attr_array); +#endif cleanup_srcu_struct(&kvm->irq_srcu); cleanup_srcu_struct(&kvm->srcu); kvm_arch_free_vm(kvm); @@ -2323,6 +2329,49 @@ static int kvm_vm_ioctl_clear_dirty_log(struct kvm *kvm, } #endif /* CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT */ +#ifdef CONFIG_HAVE_KVM_MEMORY_ATTRIBUTES +static u64 kvm_supported_mem_attributes(struct kvm *kvm) +{ + return 0; +} + +static int kvm_vm_ioctl_set_mem_attributes(struct kvm *kvm, + struct kvm_memory_attributes *attrs) +{ + gfn_t start, end; + unsigned long i; + void *entry; + u64 supported_attrs = kvm_supported_mem_attributes(kvm); + + /* flags is currently not used. 
 */
+	if (attrs->flags)
+		return -EINVAL;
+	if (attrs->attributes & ~supported_attrs)
+		return -EINVAL;
+	if (attrs->size == 0 || attrs->address + attrs->size < attrs->address)
+		return -EINVAL;
+	if (!PAGE_ALIGNED(attrs->address) || !PAGE_ALIGNED(attrs->size))
+		return -EINVAL;
+
+	start = attrs->address >> PAGE_SHIFT;
+	end = (attrs->address + attrs->size - 1 + PAGE_SIZE) >> PAGE_SHIFT;
+
+	entry = attrs->attributes ? xa_mk_value(attrs->attributes) : NULL;
+
+	mutex_lock(&kvm->lock);
+	for (i = start; i < end; i++)
+		if (xa_err(xa_store(&kvm->mem_attr_array, i, entry,
+				    GFP_KERNEL_ACCOUNT)))
+			break;
+	mutex_unlock(&kvm->lock);
+
+	attrs->address = i << PAGE_SHIFT;
+	attrs->size = (end - i) << PAGE_SHIFT;
+
+	return 0;
+}
+#endif /* CONFIG_HAVE_KVM_MEMORY_ATTRIBUTES */
+
 struct kvm_memory_slot *gfn_to_memslot(struct kvm *kvm, gfn_t gfn)
 {
 	return __gfn_to_memslot(kvm_memslots(kvm), gfn);
@@ -4459,6 +4508,9 @@ static long kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
 #ifdef CONFIG_HAVE_KVM_MSI
 	case KVM_CAP_SIGNAL_MSI:
 #endif
+#ifdef CONFIG_HAVE_KVM_MEMORY_ATTRIBUTES
+	case KVM_CAP_MEMORY_ATTRIBUTES:
+#endif
 #ifdef CONFIG_HAVE_KVM_IRQFD
 	case KVM_CAP_IRQFD:
 	case KVM_CAP_IRQFD_RESAMPLE:
@@ -4804,6 +4856,30 @@ static long kvm_vm_ioctl(struct file *filp,
 		break;
 	}
 #endif /* CONFIG_HAVE_KVM_IRQ_ROUTING */
+#ifdef CONFIG_HAVE_KVM_MEMORY_ATTRIBUTES
+	case KVM_GET_SUPPORTED_MEMORY_ATTRIBUTES: {
+		u64 attrs = kvm_supported_mem_attributes(kvm);
+
+		r = -EFAULT;
+		if (copy_to_user(argp, &attrs, sizeof(attrs)))
+			goto out;
+		r = 0;
+		break;
+	}
+	case KVM_SET_MEMORY_ATTRIBUTES: {
+		struct kvm_memory_attributes attrs;
+
+		r = -EFAULT;
+		if (copy_from_user(&attrs, argp, sizeof(attrs)))
+			goto out;
+
+		r = kvm_vm_ioctl_set_mem_attributes(kvm, &attrs);
+
+		if (!r && copy_to_user(argp, &attrs, sizeof(attrs)))
+			r = -EFAULT;
+		break;
+	}
+#endif /* CONFIG_HAVE_KVM_MEMORY_ATTRIBUTES */
 	case KVM_CREATE_DEVICE: {
 		struct kvm_create_device cd;
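For illustration, the following VMM-side sketch (not part of the patch) shows the intended flow for the two new ioctls and the retry semantics documented in api.rst above. It assumes headers with the uapi additions from this patch; note that as wired up here, kvm_supported_mem_attributes() still returns 0, so the supported-bit check would fail until the attribute is actually advertised later in the series.

/* Illustrative VMM-side usage, not part of the patch. */
#include <linux/kvm.h>
#include <sys/ioctl.h>

static int set_range_private(int vm_fd, __u64 gpa, __u64 size)
{
	__u64 supported = 0;
	struct kvm_memory_attributes attrs = {
		.address = gpa,		/* must be page aligned */
		.size = size,		/* must be page aligned */
		.attributes = KVM_MEMORY_ATTRIBUTE_PRIVATE,
		.flags = 0,		/* no flags defined yet */
	};

	if (ioctl(vm_fd, KVM_GET_SUPPORTED_MEMORY_ATTRIBUTES, &supported) < 0)
		return -1;
	if (!(supported & KVM_MEMORY_ATTRIBUTE_PRIVATE))
		return -1;	/* attribute not advertised on this host */

	/*
	 * On return, address/size describe the tail of the range that was
	 * not applied; a real VMM may retry from there if size != 0.
	 */
	if (ioctl(vm_fd, KVM_SET_MEMORY_ATTRIBUTES, &attrs) < 0)
		return -1;

	return attrs.size == 0 ? 0 : -1;
}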
From patchwork Fri Dec 2 06:13:41 2022
From: Chao Peng
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org
Subject: [PATCH v10 3/9] KVM: Extend the memslot to support fd-based private memory
Date: Fri, 2 Dec 2022 14:13:41 +0800
Message-Id: <20221202061347.1070246-4-chao.p.peng@linux.intel.com>
In-Reply-To: <20221202061347.1070246-1-chao.p.peng@linux.intel.com>
References: <20221202061347.1070246-1-chao.p.peng@linux.intel.com>

In memory encryption usage, guest memory may be encrypted with a special key and can be accessed only by the guest itself. We call such memory private memory. Allowing userspace to access guest private memory has no value and can sometimes cause problems. This new KVM memslot extension allows guest private memory to be provided through a restrictedmem-backed file descriptor (fd), with userspace access to the memory in the fd restricted.

This new extension, indicated by the new flag KVM_MEM_PRIVATE, adds two additional KVM memslot fields, restricted_fd/restricted_offset, to allow userspace to instruct KVM to provide guest memory through restricted_fd. 'guest_phys_addr' is mapped at the restricted_offset of restricted_fd and the size is 'memory_size'.

The extended memslot can still have the userspace_addr (hva). When used this way, a single memslot can maintain both private memory through restricted_fd and shared memory through userspace_addr. Whether the private or the shared part is visible to the guest is maintained by other KVM code.

A restrictedmem_notifier field is also added to the memslot structure to allow the restricted_fd's backing store to notify KVM of memory changes; KVM can then invalidate its page table entries or handle memory errors.

Together with this change, a new config HAVE_KVM_RESTRICTED_MEM is added; right now it is selected on X86_64 only.

To make future maintenance easy, internally use a binary-compatible alias struct kvm_user_mem_region to handle both the normal and the '_ext' variants.
Co-developed-by: Yu Zhang Signed-off-by: Yu Zhang Signed-off-by: Chao Peng Reviewed-by: Fuad Tabba Tested-by: Fuad Tabba --- Documentation/virt/kvm/api.rst | 40 ++++++++++++++++++++++----- arch/x86/kvm/Kconfig | 2 ++ arch/x86/kvm/x86.c | 2 +- include/linux/kvm_host.h | 8 ++++-- include/uapi/linux/kvm.h | 28 +++++++++++++++++++ virt/kvm/Kconfig | 3 +++ virt/kvm/kvm_main.c | 49 ++++++++++++++++++++++++++++------ 7 files changed, 114 insertions(+), 18 deletions(-) diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index bb2f709c0900..99352170c130 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -1319,7 +1319,7 @@ yet and must be cleared on entry. :Capability: KVM_CAP_USER_MEMORY :Architectures: all :Type: vm ioctl -:Parameters: struct kvm_userspace_memory_region (in) +:Parameters: struct kvm_userspace_memory_region(_ext) (in) :Returns: 0 on success, -1 on error :: @@ -1332,9 +1332,18 @@ yet and must be cleared on entry. __u64 userspace_addr; /* start of the userspace allocated memory */ }; + struct kvm_userspace_memory_region_ext { + struct kvm_userspace_memory_region region; + __u64 restricted_offset; + __u32 restricted_fd; + __u32 pad1; + __u64 pad2[14]; + }; + /* for kvm_memory_region::flags */ #define KVM_MEM_LOG_DIRTY_PAGES (1UL << 0) #define KVM_MEM_READONLY (1UL << 1) + #define KVM_MEM_PRIVATE (1UL << 2) This ioctl allows the user to create, modify or delete a guest physical memory slot. Bits 0-15 of "slot" specify the slot id and this value @@ -1365,12 +1374,29 @@ It is recommended that the lower 21 bits of guest_phys_addr and userspace_addr be identical. This allows large pages in the guest to be backed by large pages in the host. -The flags field supports two flags: KVM_MEM_LOG_DIRTY_PAGES and -KVM_MEM_READONLY. The former can be set to instruct KVM to keep track of -writes to memory within the slot. See KVM_GET_DIRTY_LOG ioctl to know how to -use it. The latter can be set, if KVM_CAP_READONLY_MEM capability allows it, -to make a new slot read-only. In this case, writes to this memory will be -posted to userspace as KVM_EXIT_MMIO exits. +kvm_userspace_memory_region_ext struct includes all fields of +kvm_userspace_memory_region struct, while also adds additional fields for some +other features. See below description of flags field for more information. +It's recommended to use kvm_userspace_memory_region_ext in new userspace code. + +The flags field supports following flags: + +- KVM_MEM_LOG_DIRTY_PAGES to instruct KVM to keep track of writes to memory + within the slot. For more details, see KVM_GET_DIRTY_LOG ioctl. + +- KVM_MEM_READONLY, if KVM_CAP_READONLY_MEM allows, to make a new slot + read-only. In this case, writes to this memory will be posted to userspace as + KVM_EXIT_MMIO exits. + +- KVM_MEM_PRIVATE, if KVM_MEMORY_ATTRIBUTE_PRIVATE is supported (see + KVM_GET_SUPPORTED_MEMORY_ATTRIBUTES ioctl), to indicate a new slot has private + memory backed by a file descriptor(fd) and userspace access to the fd may be + restricted. Userspace should use restricted_fd/restricted_offset in the + kvm_userspace_memory_region_ext to instruct KVM to provide private memory + to guest. Userspace should guarantee not to map the same host physical address + indicated by restricted_fd/restricted_offset to different guest physical + addresses within multiple memslots. Failed to do this may result undefined + behavior. 
When the KVM_CAP_SYNC_MMU capability is available, changes in the backing of the memory region are automatically reflected into the guest. For example, an diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig index a8e379a3afee..690cb21010e7 100644 --- a/arch/x86/kvm/Kconfig +++ b/arch/x86/kvm/Kconfig @@ -50,6 +50,8 @@ config KVM select INTERVAL_TREE select HAVE_KVM_PM_NOTIFIER if PM select HAVE_KVM_MEMORY_ATTRIBUTES + select HAVE_KVM_RESTRICTED_MEM if X86_64 + select RESTRICTEDMEM if HAVE_KVM_RESTRICTED_MEM help Support hosting fully virtualized guest machines using hardware virtualization extensions. You will need a fairly recent diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 7f850dfb4086..9a07380f8d3c 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -12224,7 +12224,7 @@ void __user * __x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa, } for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) { - struct kvm_userspace_memory_region m; + struct kvm_user_mem_region m; m.slot = id | (i << 16); m.flags = 0; diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index a784e2b06625..02347e386ea2 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -44,6 +44,7 @@ #include #include +#include #ifndef KVM_MAX_VCPU_IDS #define KVM_MAX_VCPU_IDS KVM_MAX_VCPUS @@ -585,6 +586,9 @@ struct kvm_memory_slot { u32 flags; short id; u16 as_id; + struct file *restricted_file; + loff_t restricted_offset; + struct restrictedmem_notifier notifier; }; static inline bool kvm_slot_dirty_track_enabled(const struct kvm_memory_slot *slot) @@ -1123,9 +1127,9 @@ enum kvm_mr_change { }; int kvm_set_memory_region(struct kvm *kvm, - const struct kvm_userspace_memory_region *mem); + const struct kvm_user_mem_region *mem); int __kvm_set_memory_region(struct kvm *kvm, - const struct kvm_userspace_memory_region *mem); + const struct kvm_user_mem_region *mem); void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot); void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen); int kvm_arch_prepare_memory_region(struct kvm *kvm, diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index 5d0941acb5bb..13bff963b8b0 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -103,6 +103,33 @@ struct kvm_userspace_memory_region { __u64 userspace_addr; /* start of the userspace allocated memory */ }; +struct kvm_userspace_memory_region_ext { + struct kvm_userspace_memory_region region; + __u64 restricted_offset; + __u32 restricted_fd; + __u32 pad1; + __u64 pad2[14]; +}; + +#ifdef __KERNEL__ +/* + * kvm_user_mem_region is a kernel-only alias of kvm_userspace_memory_region_ext + * that "unpacks" kvm_userspace_memory_region so that KVM can directly access + * all fields from the top-level "extended" region. 
+ */ +struct kvm_user_mem_region { + __u32 slot; + __u32 flags; + __u64 guest_phys_addr; + __u64 memory_size; + __u64 userspace_addr; + __u64 restricted_offset; + __u32 restricted_fd; + __u32 pad1; + __u64 pad2[14]; +}; +#endif + /* * The bit 0 ~ bit 15 of kvm_memory_region::flags are visible for userspace, * other bits are reserved for kvm internal use which are defined in @@ -110,6 +137,7 @@ struct kvm_userspace_memory_region { */ #define KVM_MEM_LOG_DIRTY_PAGES (1UL << 0) #define KVM_MEM_READONLY (1UL << 1) +#define KVM_MEM_PRIVATE (1UL << 2) /* for KVM_IRQ_LINE */ struct kvm_irq_level { diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig index effdea5dd4f0..d605545d6dd1 100644 --- a/virt/kvm/Kconfig +++ b/virt/kvm/Kconfig @@ -89,3 +89,6 @@ config KVM_XFER_TO_GUEST_WORK config HAVE_KVM_PM_NOTIFIER bool + +config HAVE_KVM_RESTRICTED_MEM + bool diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 7f0f5e9f2406..b882eb2c76a2 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -1532,7 +1532,7 @@ static void kvm_replace_memslot(struct kvm *kvm, } } -static int check_memory_region_flags(const struct kvm_userspace_memory_region *mem) +static int check_memory_region_flags(const struct kvm_user_mem_region *mem) { u32 valid_flags = KVM_MEM_LOG_DIRTY_PAGES; @@ -1934,7 +1934,7 @@ static bool kvm_check_memslot_overlap(struct kvm_memslots *slots, int id, * Must be called holding kvm->slots_lock for write. */ int __kvm_set_memory_region(struct kvm *kvm, - const struct kvm_userspace_memory_region *mem) + const struct kvm_user_mem_region *mem) { struct kvm_memory_slot *old, *new; struct kvm_memslots *slots; @@ -2038,7 +2038,7 @@ int __kvm_set_memory_region(struct kvm *kvm, EXPORT_SYMBOL_GPL(__kvm_set_memory_region); int kvm_set_memory_region(struct kvm *kvm, - const struct kvm_userspace_memory_region *mem) + const struct kvm_user_mem_region *mem) { int r; @@ -2050,7 +2050,7 @@ int kvm_set_memory_region(struct kvm *kvm, EXPORT_SYMBOL_GPL(kvm_set_memory_region); static int kvm_vm_ioctl_set_memory_region(struct kvm *kvm, - struct kvm_userspace_memory_region *mem) + struct kvm_user_mem_region *mem) { if ((u16)mem->slot >= KVM_USER_MEM_SLOTS) return -EINVAL; @@ -4698,6 +4698,33 @@ static int kvm_vm_ioctl_get_stats_fd(struct kvm *kvm) return fd; } +#define SANITY_CHECK_MEM_REGION_FIELD(field) \ +do { \ + BUILD_BUG_ON(offsetof(struct kvm_user_mem_region, field) != \ + offsetof(struct kvm_userspace_memory_region, field)); \ + BUILD_BUG_ON(sizeof_field(struct kvm_user_mem_region, field) != \ + sizeof_field(struct kvm_userspace_memory_region, field)); \ +} while (0) + +#define SANITY_CHECK_MEM_REGION_EXT_FIELD(field) \ +do { \ + BUILD_BUG_ON(offsetof(struct kvm_user_mem_region, field) != \ + offsetof(struct kvm_userspace_memory_region_ext, field)); \ + BUILD_BUG_ON(sizeof_field(struct kvm_user_mem_region, field) != \ + sizeof_field(struct kvm_userspace_memory_region_ext, field)); \ +} while (0) + +static void kvm_sanity_check_user_mem_region_alias(void) +{ + SANITY_CHECK_MEM_REGION_FIELD(slot); + SANITY_CHECK_MEM_REGION_FIELD(flags); + SANITY_CHECK_MEM_REGION_FIELD(guest_phys_addr); + SANITY_CHECK_MEM_REGION_FIELD(memory_size); + SANITY_CHECK_MEM_REGION_FIELD(userspace_addr); + SANITY_CHECK_MEM_REGION_EXT_FIELD(restricted_offset); + SANITY_CHECK_MEM_REGION_EXT_FIELD(restricted_fd); +} + static long kvm_vm_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg) { @@ -4721,14 +4748,20 @@ static long kvm_vm_ioctl(struct file *filp, break; } case KVM_SET_USER_MEMORY_REGION: { - struct 
kvm_userspace_memory_region kvm_userspace_mem; + struct kvm_user_mem_region mem; + unsigned long size = sizeof(struct kvm_userspace_memory_region); + + kvm_sanity_check_user_mem_region_alias(); r = -EFAULT; - if (copy_from_user(&kvm_userspace_mem, argp, - sizeof(kvm_userspace_mem))) + if (copy_from_user(&mem, argp, size)) + goto out; + + r = -EINVAL; + if (mem.flags & KVM_MEM_PRIVATE) goto out; - r = kvm_vm_ioctl_set_memory_region(kvm, &kvm_userspace_mem); + r = kvm_vm_ioctl_set_memory_region(kvm, &mem); break; } case KVM_GET_DIRTY_LOG: { From patchwork Fri Dec 2 06:13:42 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chao Peng X-Patchwork-Id: 13062261 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 59AEAC4332F for ; Fri, 2 Dec 2022 06:19:05 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id DEA086B0078; Fri, 2 Dec 2022 01:19:04 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id D99A06B007B; Fri, 2 Dec 2022 01:19:04 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id BED996B007D; Fri, 2 Dec 2022 01:19:04 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id AD9766B0078 for ; Fri, 2 Dec 2022 01:19:04 -0500 (EST) Received: from smtpin06.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay02.hostedemail.com (Postfix) with ESMTP id 8A8F8120747 for ; Fri, 2 Dec 2022 06:19:04 +0000 (UTC) X-FDA: 80196363408.06.07F6B84 Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by imf21.hostedemail.com (Postfix) with ESMTP id 104531C0002 for ; Fri, 2 Dec 2022 06:19:03 +0000 (UTC) Authentication-Results: imf21.hostedemail.com; dkim=pass header.d=intel.com header.s=Intel header.b=Oqw9I8Mq; spf=none (imf21.hostedemail.com: domain of chao.p.peng@linux.intel.com has no SPF policy when checking 134.134.136.100) smtp.mailfrom=chao.p.peng@linux.intel.com; dmarc=pass (policy=none) header.from=intel.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1669961944; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=HtB0c5kAS3TehxjUZq+drbzELyVKnrW1oPOP86GtMGw=; b=fgE6cj5H2MLuuB6BthUElHU2aGvxR80mGTqLklr5nS2fZkVwdOh9iWnmdfEyyU9tmg+J47 f5MsL477t2NrPFXfQ9IpoxVPOKlUye3qlxTYz4JvAVVdcAt6IS1KAbFXeVjMUX79g64C4a EDY9+8fHh87EMEDmzYN8WMxQJFOLQXQ= ARC-Authentication-Results: i=1; imf21.hostedemail.com; dkim=pass header.d=intel.com header.s=Intel header.b=Oqw9I8Mq; spf=none (imf21.hostedemail.com: domain of chao.p.peng@linux.intel.com has no SPF policy when checking 134.134.136.100) smtp.mailfrom=chao.p.peng@linux.intel.com; dmarc=pass (policy=none) header.from=intel.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1669961944; a=rsa-sha256; cv=none; b=DUKf3Jg4mwW9+A0doPEiDs9DevdyLmJ1HIdS43L/9KEDoJCYzSVkfrtmsFNgEkly6Sf/tG dwkMJZ9hw1r1WGVvI+Emx32iGn4/MejzCxVx70mEckXO6oq0g3esajUtKEwt4RW5l5QWbn OpcmUIXGaixjqu0aUWUJ/AyJ6QKJVE4= DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; 
From: Chao Peng To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org Cc: Paolo Bonzini , Jonathan Corbet , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Arnd Bergmann , Naoya Horiguchi , Miaohe Lin , x86@kernel.org, "H . Peter Anvin" , Hugh Dickins , Jeff Layton , "J . Bruce Fields" , Andrew Morton , Shuah Khan , Mike Rapoport , Steven Price , "Maciej S . Szmigiero" , Vlastimil Babka , Vishal Annapurve , Yu Zhang , Chao Peng , "Kirill A . Shutemov" , luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret , tabba@google.com, Michael Roth , mhocko@suse.com, wei.w.wang@intel.com Subject: [PATCH v10 4/9] KVM: Add KVM_EXIT_MEMORY_FAULT exit Date: Fri, 2 Dec 2022 14:13:42 +0800 Message-Id: <20221202061347.1070246-5-chao.p.peng@linux.intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20221202061347.1070246-1-chao.p.peng@linux.intel.com> References: <20221202061347.1070246-1-chao.p.peng@linux.intel.com> MIME-Version: 1.0 This new KVM exit allows userspace to handle memory-related errors. It indicates that an error has occurred in KVM for the guest memory range [gpa, gpa+size). The 'flags' field carries additional information to help userspace handle the error.
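As a rough illustration only (an editor's sketch, not part of this patch), a VMM's vcpu run loop could service this exit along the following lines, assuming the KVM_SET_MEMORY_ATTRIBUTES ioctl and struct kvm_memory_attributes introduced earlier in this series, and using the bit 0 'private' flag described next:

/* Hypothetical userspace handler; builds only against headers carrying the
 * definitions added by this series. Error handling trimmed for brevity. */
#include <linux/kvm.h>
#include <sys/ioctl.h>

static int handle_memory_fault(int vm_fd, struct kvm_run *run)
{
        struct kvm_memory_attributes attrs = {
                .address    = run->memory.gpa,
                .size       = run->memory.size,
                /* Convert the range to the view the guest just tried to use. */
                .attributes = (run->memory.flags & KVM_MEMORY_EXIT_FLAG_PRIVATE) ?
                              KVM_MEMORY_ATTRIBUTE_PRIVATE : 0,
        };

        return ioctl(vm_fd, KVM_SET_MEMORY_ATTRIBUTES, &attrs);
}

static void run_vcpu(int vm_fd, int vcpu_fd, struct kvm_run *run)
{
        for (;;) {
                ioctl(vcpu_fd, KVM_RUN, NULL);

                switch (run->exit_reason) {
                case KVM_EXIT_MEMORY_FAULT:
                        if (handle_memory_fault(vm_fd, run))
                                return;         /* conversion failed, give up */
                        break;                  /* re-enter and retry the access */
                default:
                        return;                 /* other exits not shown here */
                }
        }
}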
Currently bit 0 is defined as 'private memory' where '1' indicates error happens due to private memory access and '0' indicates error happens due to shared memory access. When private memory is enabled, this new exit will be used for KVM to exit to userspace for shared <-> private memory conversion in memory encryption usage. In such usage, typically there are two kind of memory conversions: - explicit conversion: happens when guest explicitly calls into KVM to map a range (as private or shared), KVM then exits to userspace to perform the map/unmap operations. - implicit conversion: happens in KVM page fault handler where KVM exits to userspace for an implicit conversion when the page is in a different state than requested (private or shared). Suggested-by: Sean Christopherson Co-developed-by: Yu Zhang Signed-off-by: Yu Zhang Signed-off-by: Chao Peng Reviewed-by: Fuad Tabba Reviewed-by: Fuad Tabba Tested-by: Fuad Tabba --- Documentation/virt/kvm/api.rst | 22 ++++++++++++++++++++++ include/uapi/linux/kvm.h | 8 ++++++++ 2 files changed, 30 insertions(+) diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst index 99352170c130..d9edb14ce30b 100644 --- a/Documentation/virt/kvm/api.rst +++ b/Documentation/virt/kvm/api.rst @@ -6634,6 +6634,28 @@ array field represents return values. The userspace should update the return values of SBI call before resuming the VCPU. For more details on RISC-V SBI spec refer, https://github.com/riscv/riscv-sbi-doc. +:: + + /* KVM_EXIT_MEMORY_FAULT */ + struct { + #define KVM_MEMORY_EXIT_FLAG_PRIVATE (1ULL << 0) + __u64 flags; + __u64 gpa; + __u64 size; + } memory; + +If exit reason is KVM_EXIT_MEMORY_FAULT then it indicates that the VCPU has +encountered a memory error which is not handled by KVM kernel module and +userspace may choose to handle it. The 'flags' field indicates the memory +properties of the exit. + + - KVM_MEMORY_EXIT_FLAG_PRIVATE - indicates the memory error is caused by + private memory access when the bit is set. Otherwise the memory error is + caused by shared memory access when the bit is clear. + +'gpa' and 'size' indicate the memory range the error occurs at. The userspace +may handle the error and return to KVM to retry the previous memory access. + :: /* KVM_EXIT_NOTIFY */ diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index 13bff963b8b0..c7e9d375a902 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -300,6 +300,7 @@ struct kvm_xen_exit { #define KVM_EXIT_RISCV_SBI 35 #define KVM_EXIT_RISCV_CSR 36 #define KVM_EXIT_NOTIFY 37 +#define KVM_EXIT_MEMORY_FAULT 38 /* For KVM_EXIT_INTERNAL_ERROR */ /* Emulate instruction failed. */ @@ -541,6 +542,13 @@ struct kvm_run { #define KVM_NOTIFY_CONTEXT_INVALID (1 << 0) __u32 flags; } notify; + /* KVM_EXIT_MEMORY_FAULT */ + struct { +#define KVM_MEMORY_EXIT_FLAG_PRIVATE (1ULL << 0) + __u64 flags; + __u64 gpa; + __u64 size; + } memory; /* Fix the size of the union. 
*/ char padding[256]; }; From patchwork Fri Dec 2 06:13:43 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chao Peng X-Patchwork-Id: 13062262
From: Chao Peng To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org Cc: Paolo Bonzini , Jonathan Corbet , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Arnd Bergmann , Naoya Horiguchi , Miaohe Lin , x86@kernel.org, "H . Peter Anvin" , Hugh Dickins , Jeff Layton , "J . Bruce Fields" , Andrew Morton , Shuah Khan , Mike Rapoport , Steven Price , "Maciej S . Szmigiero" , Vlastimil Babka , Vishal Annapurve , Yu Zhang , Chao Peng , "Kirill A . Shutemov" , luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret , tabba@google.com, Michael Roth , mhocko@suse.com, wei.w.wang@intel.com Subject: [PATCH v10 5/9] KVM: Use gfn instead of hva for mmu_notifier_retry Date: Fri, 2 Dec 2022 14:13:43 +0800 Message-Id: <20221202061347.1070246-6-chao.p.peng@linux.intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20221202061347.1070246-1-chao.p.peng@linux.intel.com> References: <20221202061347.1070246-1-chao.p.peng@linux.intel.com> MIME-Version: 1.0 Currently, in the mmu_notifier invalidate path, the hva range is recorded and then checked by mmu_notifier_retry_hva() in the page fault handling path. However, for the private memory that is about to be introduced, a page fault may not have an associated hva, so checking the gfn (gpa) makes more sense. For existing hva-based shared memory, gfn is expected to work as well. The only downside is that when multiple gfns alias a single hva, the current algorithm of collapsing multiple ranges into one could result in a much larger range being rejected. Such aliasing should be uncommon, so the impact is expected to be small.
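For readers following along, the protocol this patch moves to gfn space can be summarized with a small stand-alone model (an editor's illustration only, not the kernel code; simplified types, no locking, the real helpers are in the hunks below):

#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-ins for the fields that kvm_mmu_invalidate_begin/
 * range_add/end and mmu_invalidate_retry_gfn() operate on. */
struct inv_state {
        unsigned long seq;        /* mmu_invalidate_seq                    */
        long in_progress;         /* mmu_invalidate_in_progress            */
        uint64_t start, end;      /* gfn range covered by the invalidation */
};

/* Invalidation side: bracket the teardown of [start, end). */
static void inv_begin(struct inv_state *s, uint64_t start, uint64_t end)
{
        s->in_progress++;
        s->start = start;
        s->end = end;
}

static void inv_end(struct inv_state *s)
{
        s->seq++;                 /* forces racing faults to retry */
        s->in_progress--;
}

/* Fault side: 'snap' is the seq value read before resolving the gfn. */
static bool must_retry(struct inv_state *s, unsigned long snap, uint64_t gfn)
{
        if (s->in_progress && gfn >= s->start && gfn < s->end)
                return true;      /* this gfn is being invalidated right now */
        return s->seq != snap;    /* an invalidation finished in the meantime */
}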
Suggested-by: Sean Christopherson Signed-off-by: Chao Peng Reviewed-by: Fuad Tabba Tested-by: Fuad Tabba --- arch/x86/kvm/mmu/mmu.c | 8 +++++--- include/linux/kvm_host.h | 33 +++++++++++++++++++++------------ virt/kvm/kvm_main.c | 32 +++++++++++++++++++++++--------- 3 files changed, 49 insertions(+), 24 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 4736d7849c60..e2c70b5afa3e 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4259,7 +4259,7 @@ static bool is_page_fault_stale(struct kvm_vcpu *vcpu, return true; return fault->slot && - mmu_invalidate_retry_hva(vcpu->kvm, mmu_seq, fault->hva); + mmu_invalidate_retry_gfn(vcpu->kvm, mmu_seq, fault->gfn); } static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) @@ -6098,7 +6098,9 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end) write_lock(&kvm->mmu_lock); - kvm_mmu_invalidate_begin(kvm, gfn_start, gfn_end); + kvm_mmu_invalidate_begin(kvm); + + kvm_mmu_invalidate_range_add(kvm, gfn_start, gfn_end); flush = kvm_rmap_zap_gfn_range(kvm, gfn_start, gfn_end); @@ -6112,7 +6114,7 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end) kvm_flush_remote_tlbs_with_address(kvm, gfn_start, gfn_end - gfn_start); - kvm_mmu_invalidate_end(kvm, gfn_start, gfn_end); + kvm_mmu_invalidate_end(kvm); write_unlock(&kvm->mmu_lock); } diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 02347e386ea2..3d69484d2704 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -787,8 +787,8 @@ struct kvm { struct mmu_notifier mmu_notifier; unsigned long mmu_invalidate_seq; long mmu_invalidate_in_progress; - unsigned long mmu_invalidate_range_start; - unsigned long mmu_invalidate_range_end; + gfn_t mmu_invalidate_range_start; + gfn_t mmu_invalidate_range_end; #endif struct list_head devices; u64 manual_dirty_log_protect; @@ -1389,10 +1389,9 @@ void kvm_mmu_free_memory_cache(struct kvm_mmu_memory_cache *mc); void *kvm_mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc); #endif -void kvm_mmu_invalidate_begin(struct kvm *kvm, unsigned long start, - unsigned long end); -void kvm_mmu_invalidate_end(struct kvm *kvm, unsigned long start, - unsigned long end); +void kvm_mmu_invalidate_begin(struct kvm *kvm); +void kvm_mmu_invalidate_range_add(struct kvm *kvm, gfn_t start, gfn_t end); +void kvm_mmu_invalidate_end(struct kvm *kvm); long kvm_arch_dev_ioctl(struct file *filp, unsigned int ioctl, unsigned long arg); @@ -1963,9 +1962,9 @@ static inline int mmu_invalidate_retry(struct kvm *kvm, unsigned long mmu_seq) return 0; } -static inline int mmu_invalidate_retry_hva(struct kvm *kvm, +static inline int mmu_invalidate_retry_gfn(struct kvm *kvm, unsigned long mmu_seq, - unsigned long hva) + gfn_t gfn) { lockdep_assert_held(&kvm->mmu_lock); /* @@ -1974,10 +1973,20 @@ static inline int mmu_invalidate_retry_hva(struct kvm *kvm, * that might be being invalidated. Note that it may include some false * positives, due to shortcuts when handing concurrent invalidations. */ - if (unlikely(kvm->mmu_invalidate_in_progress) && - hva >= kvm->mmu_invalidate_range_start && - hva < kvm->mmu_invalidate_range_end) - return 1; + if (unlikely(kvm->mmu_invalidate_in_progress)) { + /* + * Dropping mmu_lock after bumping mmu_invalidate_in_progress + * but before updating the range is a KVM bug. 
+ */ + if (WARN_ON_ONCE(kvm->mmu_invalidate_range_start == INVALID_GPA || + kvm->mmu_invalidate_range_end == INVALID_GPA)) + return 1; + + if (gfn >= kvm->mmu_invalidate_range_start && + gfn < kvm->mmu_invalidate_range_end) + return 1; + } + if (kvm->mmu_invalidate_seq != mmu_seq) return 1; return 0; diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index b882eb2c76a2..ad55dfbc75d7 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -540,9 +540,7 @@ static void kvm_mmu_notifier_invalidate_range(struct mmu_notifier *mn, typedef bool (*hva_handler_t)(struct kvm *kvm, struct kvm_gfn_range *range); -typedef void (*on_lock_fn_t)(struct kvm *kvm, unsigned long start, - unsigned long end); - +typedef void (*on_lock_fn_t)(struct kvm *kvm); typedef void (*on_unlock_fn_t)(struct kvm *kvm); struct kvm_hva_range { @@ -628,7 +626,8 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm, locked = true; KVM_MMU_LOCK(kvm); if (!IS_KVM_NULL_FN(range->on_lock)) - range->on_lock(kvm, range->start, range->end); + range->on_lock(kvm); + if (IS_KVM_NULL_FN(range->handler)) break; } @@ -715,8 +714,7 @@ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn, kvm_handle_hva_range(mn, address, address + 1, pte, kvm_set_spte_gfn); } -void kvm_mmu_invalidate_begin(struct kvm *kvm, unsigned long start, - unsigned long end) +void kvm_mmu_invalidate_begin(struct kvm *kvm) { /* * The count increase must become visible at unlock time as no @@ -724,6 +722,17 @@ void kvm_mmu_invalidate_begin(struct kvm *kvm, unsigned long start, * count is also read inside the mmu_lock critical section. */ kvm->mmu_invalidate_in_progress++; + + if (likely(kvm->mmu_invalidate_in_progress == 1)) { + kvm->mmu_invalidate_range_start = INVALID_GPA; + kvm->mmu_invalidate_range_end = INVALID_GPA; + } +} + +void kvm_mmu_invalidate_range_add(struct kvm *kvm, gfn_t start, gfn_t end) +{ + WARN_ON_ONCE(!kvm->mmu_invalidate_in_progress); + if (likely(kvm->mmu_invalidate_in_progress == 1)) { kvm->mmu_invalidate_range_start = start; kvm->mmu_invalidate_range_end = end; @@ -744,6 +753,12 @@ void kvm_mmu_invalidate_begin(struct kvm *kvm, unsigned long start, } } +static bool kvm_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range) +{ + kvm_mmu_invalidate_range_add(kvm, range->start, range->end); + return kvm_unmap_gfn_range(kvm, range); +} + static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn, const struct mmu_notifier_range *range) { @@ -752,7 +767,7 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn, .start = range->start, .end = range->end, .pte = __pte(0), - .handler = kvm_unmap_gfn_range, + .handler = kvm_mmu_unmap_gfn_range, .on_lock = kvm_mmu_invalidate_begin, .on_unlock = kvm_arch_guest_memory_reclaimed, .flush_on_ret = true, @@ -791,8 +806,7 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn, return 0; } -void kvm_mmu_invalidate_end(struct kvm *kvm, unsigned long start, - unsigned long end) +void kvm_mmu_invalidate_end(struct kvm *kvm) { /* * This sequence increase will notify the kvm page fault that From patchwork Fri Dec 2 06:13:44 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chao Peng X-Patchwork-Id: 13062263 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 17238C4167B 
d="scan'208";a="733698707" Received: from chaop.bj.intel.com ([10.240.193.75]) by FMSMGA003.fm.intel.com with ESMTP; 01 Dec 2022 22:19:13 -0800 From: Chao Peng To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org Cc: Paolo Bonzini , Jonathan Corbet , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Arnd Bergmann , Naoya Horiguchi , Miaohe Lin , x86@kernel.org, "H . Peter Anvin" , Hugh Dickins , Jeff Layton , "J . Bruce Fields" , Andrew Morton , Shuah Khan , Mike Rapoport , Steven Price , "Maciej S . Szmigiero" , Vlastimil Babka , Vishal Annapurve , Yu Zhang , Chao Peng , "Kirill A . Shutemov" , luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret , tabba@google.com, Michael Roth , mhocko@suse.com, wei.w.wang@intel.com Subject: [PATCH v10 6/9] KVM: Unmap existing mappings when change the memory attributes Date: Fri, 2 Dec 2022 14:13:44 +0800 Message-Id: <20221202061347.1070246-7-chao.p.peng@linux.intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20221202061347.1070246-1-chao.p.peng@linux.intel.com> References: <20221202061347.1070246-1-chao.p.peng@linux.intel.com> MIME-Version: 1.0 X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: 90C6AC0008 X-Stat-Signature: bcpr51bcyhq1apoyc6idzxik3iza5qhh X-Rspam-User: X-Spamd-Result: default: False [-0.20 / 9.00]; BAYES_HAM(-3.00)[100.00%]; R_MISSING_CHARSET(2.50)[]; MID_CONTAINS_FROM(1.00)[]; DMARC_POLICY_ALLOW(-0.50)[intel.com,none]; R_DKIM_ALLOW(-0.20)[intel.com:s=Intel]; MIME_GOOD(-0.10)[text/plain]; RCVD_NO_TLS_LAST(0.10)[]; RCPT_COUNT_TWELVE(0.00)[49]; R_SPF_NA(0.00)[no SPF record]; FROM_EQ_ENVFROM(0.00)[]; TO_MATCH_ENVRCPT_SOME(0.00)[]; MIME_TRACE(0.00)[0:+]; FROM_HAS_DN(0.00)[]; DKIM_TRACE(0.00)[intel.com:+]; TO_DN_SOME(0.00)[]; RCVD_COUNT_THREE(0.00)[3]; ARC_SIGNED(0.00)[hostedemail.com:s=arc-20220608:i=1]; ARC_NA(0.00)[] X-HE-Tag: 1669961965-40210 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Unmap the existing guest mappings when memory attribute is changed between shared and private. This is needed because shared pages and private pages are from different backends, unmapping existing ones gives a chance for page fault handler to re-populate the mappings according to the new attribute. Only architecture has private memory support needs this and the supported architecture is expected to rewrite the weak kvm_arch_has_private_mem(). Also, during memory attribute changing and the unmapping time frame, page fault handler may happen in the same memory range and can cause incorrect page state, invoke kvm_mmu_invalidate_* helpers to let the page fault handler retry during this time frame. 
Signed-off-by: Chao Peng --- include/linux/kvm_host.h | 7 +- virt/kvm/kvm_main.c | 168 ++++++++++++++++++++++++++------------- 2 files changed, 116 insertions(+), 59 deletions(-) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 3d69484d2704..3331c0c92838 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -255,7 +255,6 @@ bool kvm_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, int kvm_async_pf_wakeup_all(struct kvm_vcpu *vcpu); #endif -#ifdef KVM_ARCH_WANT_MMU_NOTIFIER struct kvm_gfn_range { struct kvm_memory_slot *slot; gfn_t start; @@ -264,6 +263,8 @@ struct kvm_gfn_range { bool may_block; }; bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range); + +#ifdef KVM_ARCH_WANT_MMU_NOTIFIER bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range); bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range); bool kvm_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range); @@ -785,11 +786,12 @@ struct kvm { #if defined(CONFIG_MMU_NOTIFIER) && defined(KVM_ARCH_WANT_MMU_NOTIFIER) struct mmu_notifier mmu_notifier; +#endif unsigned long mmu_invalidate_seq; long mmu_invalidate_in_progress; gfn_t mmu_invalidate_range_start; gfn_t mmu_invalidate_range_end; -#endif + struct list_head devices; u64 manual_dirty_log_protect; struct dentry *debugfs_dentry; @@ -1480,6 +1482,7 @@ bool kvm_arch_dy_has_pending_interrupt(struct kvm_vcpu *vcpu); int kvm_arch_post_init_vm(struct kvm *kvm); void kvm_arch_pre_destroy_vm(struct kvm *kvm); int kvm_arch_create_vm_debugfs(struct kvm *kvm); +bool kvm_arch_has_private_mem(struct kvm *kvm); #ifndef __KVM_HAVE_ARCH_VM_ALLOC /* diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index ad55dfbc75d7..4e1e1e113bf0 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -520,6 +520,62 @@ void kvm_destroy_vcpus(struct kvm *kvm) } EXPORT_SYMBOL_GPL(kvm_destroy_vcpus); +void kvm_mmu_invalidate_begin(struct kvm *kvm) +{ + /* + * The count increase must become visible at unlock time as no + * spte can be established without taking the mmu_lock and + * count is also read inside the mmu_lock critical section. + */ + kvm->mmu_invalidate_in_progress++; + + if (likely(kvm->mmu_invalidate_in_progress == 1)) { + kvm->mmu_invalidate_range_start = INVALID_GPA; + kvm->mmu_invalidate_range_end = INVALID_GPA; + } +} + +void kvm_mmu_invalidate_range_add(struct kvm *kvm, gfn_t start, gfn_t end) +{ + WARN_ON_ONCE(!kvm->mmu_invalidate_in_progress); + + if (likely(kvm->mmu_invalidate_in_progress == 1)) { + kvm->mmu_invalidate_range_start = start; + kvm->mmu_invalidate_range_end = end; + } else { + /* + * Fully tracking multiple concurrent ranges has diminishing + * returns. Keep things simple and just find the minimal range + * which includes the current and new ranges. As there won't be + * enough information to subtract a range after its invalidate + * completes, any ranges invalidated concurrently will + * accumulate and persist until all outstanding invalidates + * complete. + */ + kvm->mmu_invalidate_range_start = + min(kvm->mmu_invalidate_range_start, start); + kvm->mmu_invalidate_range_end = + max(kvm->mmu_invalidate_range_end, end); + } +} + +void kvm_mmu_invalidate_end(struct kvm *kvm) +{ + /* + * This sequence increase will notify the kvm page fault that + * the page that is going to be mapped in the spte could have + * been freed. 
+ */ + kvm->mmu_invalidate_seq++; + smp_wmb(); + /* + * The above sequence increase must be visible before the + * below count decrease, which is ensured by the smp_wmb above + * in conjunction with the smp_rmb in mmu_invalidate_retry(). + */ + kvm->mmu_invalidate_in_progress--; +} + #if defined(CONFIG_MMU_NOTIFIER) && defined(KVM_ARCH_WANT_MMU_NOTIFIER) static inline struct kvm *mmu_notifier_to_kvm(struct mmu_notifier *mn) { @@ -714,45 +770,6 @@ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn, kvm_handle_hva_range(mn, address, address + 1, pte, kvm_set_spte_gfn); } -void kvm_mmu_invalidate_begin(struct kvm *kvm) -{ - /* - * The count increase must become visible at unlock time as no - * spte can be established without taking the mmu_lock and - * count is also read inside the mmu_lock critical section. - */ - kvm->mmu_invalidate_in_progress++; - - if (likely(kvm->mmu_invalidate_in_progress == 1)) { - kvm->mmu_invalidate_range_start = INVALID_GPA; - kvm->mmu_invalidate_range_end = INVALID_GPA; - } -} - -void kvm_mmu_invalidate_range_add(struct kvm *kvm, gfn_t start, gfn_t end) -{ - WARN_ON_ONCE(!kvm->mmu_invalidate_in_progress); - - if (likely(kvm->mmu_invalidate_in_progress == 1)) { - kvm->mmu_invalidate_range_start = start; - kvm->mmu_invalidate_range_end = end; - } else { - /* - * Fully tracking multiple concurrent ranges has diminishing - * returns. Keep things simple and just find the minimal range - * which includes the current and new ranges. As there won't be - * enough information to subtract a range after its invalidate - * completes, any ranges invalidated concurrently will - * accumulate and persist until all outstanding invalidates - * complete. - */ - kvm->mmu_invalidate_range_start = - min(kvm->mmu_invalidate_range_start, start); - kvm->mmu_invalidate_range_end = - max(kvm->mmu_invalidate_range_end, end); - } -} - static bool kvm_mmu_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range) { kvm_mmu_invalidate_range_add(kvm, range->start, range->end); @@ -806,23 +823,6 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn, return 0; } -void kvm_mmu_invalidate_end(struct kvm *kvm) -{ - /* - * This sequence increase will notify the kvm page fault that - * the page that is going to be mapped in the spte could have - * been freed. - */ - kvm->mmu_invalidate_seq++; - smp_wmb(); - /* - * The above sequence increase must be visible before the - * below count decrease, which is ensured by the smp_wmb above - * in conjunction with the smp_rmb in mmu_invalidate_retry(). 
- */ - kvm->mmu_invalidate_in_progress--; -} - static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn, const struct mmu_notifier_range *range) { @@ -1140,6 +1140,11 @@ int __weak kvm_arch_create_vm_debugfs(struct kvm *kvm) return 0; } +bool __weak kvm_arch_has_private_mem(struct kvm *kvm) +{ + return false; +} + static struct kvm *kvm_create_vm(unsigned long type, const char *fdname) { struct kvm *kvm = kvm_arch_alloc_vm(); @@ -2349,15 +2354,47 @@ static u64 kvm_supported_mem_attributes(struct kvm *kvm) return 0; } +static void kvm_unmap_mem_range(struct kvm *kvm, gfn_t start, gfn_t end) +{ + struct kvm_gfn_range gfn_range; + struct kvm_memory_slot *slot; + struct kvm_memslots *slots; + struct kvm_memslot_iter iter; + int i; + int r = 0; + + gfn_range.pte = __pte(0); + gfn_range.may_block = true; + + for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) { + slots = __kvm_memslots(kvm, i); + + kvm_for_each_memslot_in_gfn_range(&iter, slots, start, end) { + slot = iter.slot; + gfn_range.start = max(start, slot->base_gfn); + gfn_range.end = min(end, slot->base_gfn + slot->npages); + if (gfn_range.start >= gfn_range.end) + continue; + gfn_range.slot = slot; + + r |= kvm_unmap_gfn_range(kvm, &gfn_range); + } + } + + if (r) + kvm_flush_remote_tlbs(kvm); +} + static int kvm_vm_ioctl_set_mem_attributes(struct kvm *kvm, struct kvm_memory_attributes *attrs) { gfn_t start, end; unsigned long i; void *entry; + int idx; u64 supported_attrs = kvm_supported_mem_attributes(kvm); - /* flags is currently not used. */ + /* 'flags' is currently not used. */ if (attrs->flags) return -EINVAL; if (attrs->attributes & ~supported_attrs) @@ -2372,6 +2409,13 @@ static int kvm_vm_ioctl_set_mem_attributes(struct kvm *kvm, entry = attrs->attributes ? xa_mk_value(attrs->attributes) : NULL; + if (kvm_arch_has_private_mem(kvm)) { + KVM_MMU_LOCK(kvm); + kvm_mmu_invalidate_begin(kvm); + kvm_mmu_invalidate_range_add(kvm, start, end); + KVM_MMU_UNLOCK(kvm); + } + mutex_lock(&kvm->lock); for (i = start; i < end; i++) if (xa_err(xa_store(&kvm->mem_attr_array, i, entry, @@ -2379,6 +2423,16 @@ static int kvm_vm_ioctl_set_mem_attributes(struct kvm *kvm, break; mutex_unlock(&kvm->lock); + if (kvm_arch_has_private_mem(kvm)) { + idx = srcu_read_lock(&kvm->srcu); + KVM_MMU_LOCK(kvm); + if (i > start) + kvm_unmap_mem_range(kvm, start, i); + kvm_mmu_invalidate_end(kvm); + KVM_MMU_UNLOCK(kvm); + srcu_read_unlock(&kvm->srcu, idx); + } + attrs->address = i << PAGE_SHIFT; attrs->size = (end - i) << PAGE_SHIFT; From patchwork Fri Dec 2 06:13:45 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chao Peng X-Patchwork-Id: 13062264 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id B4EE7C4332F for ; Fri, 2 Dec 2022 06:19:37 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 44C096B0073; Fri, 2 Dec 2022 01:19:37 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 3FC806B0075; Fri, 2 Dec 2022 01:19:37 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 2299F6B007B; Fri, 2 Dec 2022 01:19:37 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0017.hostedemail.com [216.40.44.17]) by kanga.kvack.org (Postfix) with ESMTP id 1482E6B0073 for ; Fri, 2 Dec 2022 01:19:37 -0500 
(EST) Received: from smtpin09.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay10.hostedemail.com (Postfix) with ESMTP id D9FE3C0750 for ; Fri, 2 Dec 2022 06:19:36 +0000 (UTC) X-FDA: 80196364752.09.40DB42B Received: from mga07.intel.com (mga07.intel.com [134.134.136.100]) by imf13.hostedemail.com (Postfix) with ESMTP id 4FF762000B for ; Fri, 2 Dec 2022 06:19:36 +0000 (UTC) Authentication-Results: imf13.hostedemail.com; dkim=pass header.d=intel.com header.s=Intel header.b="SNAG/sXf"; dmarc=pass (policy=none) header.from=intel.com; spf=none (imf13.hostedemail.com: domain of chao.p.peng@linux.intel.com has no SPF policy when checking 134.134.136.100) smtp.mailfrom=chao.p.peng@linux.intel.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1669961976; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=5l4bbgIU+KNMUQwtsv66wFTG6GccOEOAAAgbwFDAWJ4=; b=vz+IkrR3aJNLVh3w6r363BoVO/BGEVIf5bVJxQmCJGeCNG7Av8gdWveS8xTvMr16AUzqpm yvv5G3wKlpmnkEMEuaquxons9e63WFjbbYHn2qdJI90HIKhTuE/Ck+1t6U/pIEIEg+R/Xr txX6OTiofmVWfoHTlHZ1jsmZSdOOD5k= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1669961976; a=rsa-sha256; cv=none; b=z7QFV6ddnzNnBCXclo61OqIIk6WP05aEyLAzmeURhIpqLCi2YXUmTKuYENDkqOD7fJNVQA eneksW3rKOp70ruZrwoYr5fmI6E8nLVW3QjhFvQ+vYVHTGg+5mbzU+1b1nFcUjg0VPqiyx KyOORYpI4SZDFIQ+BwtKX4JzznfbE7o= ARC-Authentication-Results: i=1; imf13.hostedemail.com; dkim=pass header.d=intel.com header.s=Intel header.b="SNAG/sXf"; dmarc=pass (policy=none) header.from=intel.com; spf=none (imf13.hostedemail.com: domain of chao.p.peng@linux.intel.com has no SPF policy when checking 134.134.136.100) smtp.mailfrom=chao.p.peng@linux.intel.com DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1669961976; x=1701497976; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=uHWX+bmiWvi8twUa4xphTJMpG6518qs7AwJd83WeEzU=; b=SNAG/sXfUfcbA4pYrbAkRWepyVoSNrifG3vSV3z8DxwateQo8eio+bj5 OwYEPk3Gwf+d82PUtbbzH180q5geHn0fGuCO+u7rk/pJV1ff+bjfZI3+4 4J++XAZHZ1R61boAdf4H2xFDKZg6ofr+exjdc61z+UHdidcU7AdHYufR8 Op/Fm1Ff5Ui7iYaMd2wR9woutYY6aKB+n4BtC7zD8570uH52T+G+JVcci 5cPFW4mblEw0M7PdgyKxLgTX36y5OsjJfdIk31kE++g/47PBVVgPkEN7f GAWxBiwIBJnh2m2WqBcbBWKhhw01bnjJ9KTtzCpwfJa6T9qMzatt25huH Q==; X-IronPort-AV: E=McAfee;i="6500,9779,10548"; a="380170655" X-IronPort-AV: E=Sophos;i="5.96,210,1665471600"; d="scan'208";a="380170655" Received: from fmsmga003.fm.intel.com ([10.253.24.29]) by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 01 Dec 2022 22:19:34 -0800 X-ExtLoop1: 1 X-IronPort-AV: E=McAfee;i="6500,9779,10548"; a="733698745" X-IronPort-AV: E=Sophos;i="5.96,210,1665471600"; d="scan'208";a="733698745" Received: from chaop.bj.intel.com ([10.240.193.75]) by FMSMGA003.fm.intel.com with ESMTP; 01 Dec 2022 22:19:23 -0800 From: Chao Peng To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org Cc: Paolo Bonzini , Jonathan Corbet , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Arnd Bergmann , Naoya Horiguchi , Miaohe Lin , x86@kernel.org, "H . 
Peter Anvin" , Hugh Dickins , Jeff Layton , "J . Bruce Fields" , Andrew Morton , Shuah Khan , Mike Rapoport , Steven Price , "Maciej S . Szmigiero" , Vlastimil Babka , Vishal Annapurve , Yu Zhang , Chao Peng , "Kirill A . Shutemov" , luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret , tabba@google.com, Michael Roth , mhocko@suse.com, wei.w.wang@intel.com Subject: [PATCH v10 7/9] KVM: Update lpage info when private/shared memory are mixed Date: Fri, 2 Dec 2022 14:13:45 +0800 Message-Id: <20221202061347.1070246-8-chao.p.peng@linux.intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20221202061347.1070246-1-chao.p.peng@linux.intel.com> References: <20221202061347.1070246-1-chao.p.peng@linux.intel.com> MIME-Version: 1.0 X-Stat-Signature: 5qkqwst783nxmb5c79wuwcjk75g8wemm X-Spamd-Result: default: False [-0.20 / 9.00]; BAYES_HAM(-3.00)[100.00%]; R_MISSING_CHARSET(2.50)[]; MID_CONTAINS_FROM(1.00)[]; DMARC_POLICY_ALLOW(-0.50)[intel.com,none]; R_DKIM_ALLOW(-0.20)[intel.com:s=Intel]; MIME_GOOD(-0.10)[text/plain]; RCVD_NO_TLS_LAST(0.10)[]; RCPT_COUNT_TWELVE(0.00)[49]; R_SPF_NA(0.00)[no SPF record]; FROM_EQ_ENVFROM(0.00)[]; TO_MATCH_ENVRCPT_SOME(0.00)[]; MIME_TRACE(0.00)[0:+]; FROM_HAS_DN(0.00)[]; DKIM_TRACE(0.00)[intel.com:+]; TO_DN_SOME(0.00)[]; RCVD_COUNT_THREE(0.00)[3]; ARC_SIGNED(0.00)[hostedemail.com:s=arc-20220608:i=1]; ARC_NA(0.00)[] X-Rspamd-Queue-Id: 4FF762000B X-Rspam-User: X-Rspamd-Server: rspam06 X-HE-Tag: 1669961976-486847 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: A large page with mixed private/shared subpages can't be mapped as large page since its sub private/shared pages are from different memory backends and may also treated by architecture differently. When private/shared memory are mixed in a large page, the current lpage_info is not sufficient to decide whether the page can be mapped as large page or not and additional private/shared mixed information is needed. Tracking this 'mixed' information with the current 'count' like disallow_lpage is a bit challenge so reserve a bit in 'disallow_lpage' to indicate a large page has mixed private/share subpages and update this 'mixed' bit whenever the memory attribute is changed between private and shared. Signed-off-by: Chao Peng --- arch/x86/include/asm/kvm_host.h | 8 ++ arch/x86/kvm/mmu/mmu.c | 134 +++++++++++++++++++++++++++++++- arch/x86/kvm/x86.c | 2 + include/linux/kvm_host.h | 19 +++++ virt/kvm/kvm_main.c | 9 ++- 5 files changed, 169 insertions(+), 3 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 283cbb83d6ae..7772ab37ac89 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -38,6 +38,7 @@ #include #define __KVM_HAVE_ARCH_VCPU_DEBUGFS +#define __KVM_HAVE_ARCH_SET_MEMORY_ATTRIBUTES #define KVM_MAX_VCPUS 1024 @@ -1011,6 +1012,13 @@ struct kvm_vcpu_arch { #endif }; +/* + * Use a bit in disallow_lpage to indicate private/shared pages mixed at the + * level. The remaining bits are used as a reference count. 
+ */ +#define KVM_LPAGE_PRIVATE_SHARED_MIXED (1U << 31) +#define KVM_LPAGE_COUNT_MAX ((1U << 31) - 1) + struct kvm_lpage_info { int disallow_lpage; }; diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index e2c70b5afa3e..2190fd8c95c0 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -763,11 +763,16 @@ static void update_gfn_disallow_lpage_count(const struct kvm_memory_slot *slot, { struct kvm_lpage_info *linfo; int i; + int disallow_count; for (i = PG_LEVEL_2M; i <= KVM_MAX_HUGEPAGE_LEVEL; ++i) { linfo = lpage_info_slot(gfn, slot, i); + + disallow_count = linfo->disallow_lpage & KVM_LPAGE_COUNT_MAX; + WARN_ON(disallow_count + count < 0 || + disallow_count > KVM_LPAGE_COUNT_MAX - count); + linfo->disallow_lpage += count; - WARN_ON(linfo->disallow_lpage < 0); } } @@ -6986,3 +6991,130 @@ void kvm_mmu_pre_destroy_vm(struct kvm *kvm) if (kvm->arch.nx_huge_page_recovery_thread) kthread_stop(kvm->arch.nx_huge_page_recovery_thread); } + +static bool linfo_is_mixed(struct kvm_lpage_info *linfo) +{ + return linfo->disallow_lpage & KVM_LPAGE_PRIVATE_SHARED_MIXED; +} + +static void linfo_set_mixed(gfn_t gfn, struct kvm_memory_slot *slot, + int level, bool mixed) +{ + struct kvm_lpage_info *linfo = lpage_info_slot(gfn, slot, level); + + if (mixed) + linfo->disallow_lpage |= KVM_LPAGE_PRIVATE_SHARED_MIXED; + else + linfo->disallow_lpage &= ~KVM_LPAGE_PRIVATE_SHARED_MIXED; +} + +static bool is_expected_attr_entry(void *entry, unsigned long expected_attrs) +{ + bool expect_private = expected_attrs & KVM_MEMORY_ATTRIBUTE_PRIVATE; + + if (xa_to_value(entry) & KVM_MEMORY_ATTRIBUTE_PRIVATE) { + if (!expect_private) + return false; + } else if (expect_private) + return false; + + return true; +} + +static bool mem_attrs_mixed_2m(struct kvm *kvm, unsigned long attrs, + gfn_t start, gfn_t end) +{ + XA_STATE(xas, &kvm->mem_attr_array, start); + gfn_t gfn = start; + void *entry; + bool mixed = false; + + rcu_read_lock(); + entry = xas_load(&xas); + while (gfn < end) { + if (xas_retry(&xas, entry)) + continue; + + KVM_BUG_ON(gfn != xas.xa_index, kvm); + + if (!is_expected_attr_entry(entry, attrs)) { + mixed = true; + break; + } + + entry = xas_next(&xas); + gfn++; + } + + rcu_read_unlock(); + return mixed; +} + +static bool mem_attrs_mixed(struct kvm *kvm, struct kvm_memory_slot *slot, + int level, unsigned long attrs, + gfn_t start, gfn_t end) +{ + unsigned long gfn; + + if (level == PG_LEVEL_2M) + return mem_attrs_mixed_2m(kvm, attrs, start, end); + + for (gfn = start; gfn < end; gfn += KVM_PAGES_PER_HPAGE(level - 1)) + if (linfo_is_mixed(lpage_info_slot(gfn, slot, level - 1)) || + !is_expected_attr_entry(xa_load(&kvm->mem_attr_array, gfn), + attrs)) + return true; + return false; +} + +static void kvm_update_lpage_private_shared_mixed(struct kvm *kvm, + struct kvm_memory_slot *slot, + unsigned long attrs, + gfn_t start, gfn_t end) +{ + unsigned long pages, mask; + gfn_t gfn, gfn_end, first, last; + int level; + bool mixed; + + /* + * The sequence matters here: we set the higher level basing on the + * lower level's scanning result. + */ + for (level = PG_LEVEL_2M; level <= KVM_MAX_HUGEPAGE_LEVEL; level++) { + pages = KVM_PAGES_PER_HPAGE(level); + mask = ~(pages - 1); + first = start & mask; + last = (end - 1) & mask; + + /* + * We only need to scan the head and tail page, for middle pages + * we know they will not be mixed. 
+ */ + gfn = max(first, slot->base_gfn); + gfn_end = min(first + pages, slot->base_gfn + slot->npages); + mixed = mem_attrs_mixed(kvm, slot, level, attrs, gfn, gfn_end); + linfo_set_mixed(gfn, slot, level, mixed); + + if (first == last) + return; + + for (gfn = first + pages; gfn < last; gfn += pages) + linfo_set_mixed(gfn, slot, level, false); + + gfn = last; + gfn_end = min(last + pages, slot->base_gfn + slot->npages); + mixed = mem_attrs_mixed(kvm, slot, level, attrs, gfn, gfn_end); + linfo_set_mixed(gfn, slot, level, mixed); + } +} + +void kvm_arch_set_memory_attributes(struct kvm *kvm, + struct kvm_memory_slot *slot, + unsigned long attrs, + gfn_t start, gfn_t end) +{ + if (kvm_slot_can_be_private(slot)) + kvm_update_lpage_private_shared_mixed(kvm, slot, attrs, + start, end); +} diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 9a07380f8d3c..5aefcff614d2 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -12362,6 +12362,8 @@ static int kvm_alloc_memslot_metadata(struct kvm *kvm, if ((slot->base_gfn + npages) & (KVM_PAGES_PER_HPAGE(level) - 1)) linfo[lpages - 1].disallow_lpage = 1; ugfn = slot->userspace_addr >> PAGE_SHIFT; + if (kvm_slot_can_be_private(slot)) + ugfn |= slot->restricted_offset >> PAGE_SHIFT; /* * If the gfn and userspace address are not aligned wrt each * other, disable large page support for this slot. diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 3331c0c92838..25099c94e770 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -592,6 +592,11 @@ struct kvm_memory_slot { struct restrictedmem_notifier notifier; }; +static inline bool kvm_slot_can_be_private(const struct kvm_memory_slot *slot) +{ + return slot && (slot->flags & KVM_MEM_PRIVATE); +} + static inline bool kvm_slot_dirty_track_enabled(const struct kvm_memory_slot *slot) { return slot->flags & KVM_MEM_LOG_DIRTY_PAGES; @@ -2316,4 +2321,18 @@ static inline void kvm_account_pgtable_pages(void *virt, int nr) /* Max number of entries allowed for each kvm dirty ring */ #define KVM_DIRTY_RING_MAX_ENTRIES 65536 +#ifdef __KVM_HAVE_ARCH_SET_MEMORY_ATTRIBUTES +void kvm_arch_set_memory_attributes(struct kvm *kvm, + struct kvm_memory_slot *slot, + unsigned long attrs, + gfn_t start, gfn_t end); +#else +static inline void kvm_arch_set_memory_attributes(struct kvm *kvm, + struct kvm_memory_slot *slot, + unsigned long attrs, + gfn_t start, gfn_t end) +{ +} +#endif /* __KVM_HAVE_ARCH_SET_MEMORY_ATTRIBUTES */ + #endif diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 4e1e1e113bf0..e107afea32f0 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -2354,7 +2354,8 @@ static u64 kvm_supported_mem_attributes(struct kvm *kvm) return 0; } -static void kvm_unmap_mem_range(struct kvm *kvm, gfn_t start, gfn_t end) +static void kvm_unmap_mem_range(struct kvm *kvm, gfn_t start, gfn_t end, + unsigned long attrs) { struct kvm_gfn_range gfn_range; struct kvm_memory_slot *slot; @@ -2378,6 +2379,10 @@ static void kvm_unmap_mem_range(struct kvm *kvm, gfn_t start, gfn_t end) gfn_range.slot = slot; r |= kvm_unmap_gfn_range(kvm, &gfn_range); + + kvm_arch_set_memory_attributes(kvm, slot, attrs, + gfn_range.start, + gfn_range.end); } } @@ -2427,7 +2432,7 @@ static int kvm_vm_ioctl_set_mem_attributes(struct kvm *kvm, idx = srcu_read_lock(&kvm->srcu); KVM_MMU_LOCK(kvm); if (i > start) - kvm_unmap_mem_range(kvm, start, i); + kvm_unmap_mem_range(kvm, start, i, attrs->attributes); kvm_mmu_invalidate_end(kvm); KVM_MMU_UNLOCK(kvm); srcu_read_unlock(&kvm->srcu, idx); From 
patchwork Fri Dec 2 06:13:46 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chao Peng X-Patchwork-Id: 13062265
From: Chao Peng
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org
Cc: Paolo Bonzini, Jonathan Corbet, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Arnd Bergmann, Naoya Horiguchi, Miaohe Lin, x86@kernel.org, "H. Peter Anvin", Hugh Dickins, Jeff Layton, "J. Bruce Fields", Andrew Morton, Shuah Khan, Mike Rapoport, Steven Price, "Maciej S. Szmigiero", Vlastimil Babka, Vishal Annapurve, Yu Zhang, Chao Peng, "Kirill A. Shutemov", luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret, tabba@google.com, Michael Roth, mhocko@suse.com, wei.w.wang@intel.com
Subject: [PATCH v10 8/9] KVM: Handle page fault for private memory
Date: Fri, 2 Dec 2022 14:13:46 +0800
Message-Id: <20221202061347.1070246-9-chao.p.peng@linux.intel.com>
In-Reply-To: <20221202061347.1070246-1-chao.p.peng@linux.intel.com>
References: <20221202061347.1070246-1-chao.p.peng@linux.intel.com>
MIME-Version: 1.0

A KVM_MEM_PRIVATE memslot can include both fd-based private memory and hva-based shared memory. Architecture code (like TDX code) can tell whether the ongoing fault is private or not. This patch adds an 'is_private' field to kvm_page_fault to indicate this, and architecture code is expected to set it.

To handle a page fault for such a memslot, the handling logic differs depending on whether the fault is private or shared. KVM checks whether 'is_private' matches the host's view of the page (maintained in mem_attr_array).
- For a successful match, the private pfn is obtained with restrictedmem_get_page() and the shared pfn is obtained with the existing get_user_pages().
- For a failed match, KVM causes a KVM_EXIT_MEMORY_FAULT exit to userspace.
Userspace then can convert memory between private/shared in host's view and retry the fault. Co-developed-by: Yu Zhang Signed-off-by: Yu Zhang Signed-off-by: Chao Peng Reviewed-by: Fuad Tabba Tested-by: Fuad Tabba --- arch/x86/kvm/mmu/mmu.c | 63 +++++++++++++++++++++++++++++++-- arch/x86/kvm/mmu/mmu_internal.h | 14 +++++++- arch/x86/kvm/mmu/mmutrace.h | 1 + arch/x86/kvm/mmu/tdp_mmu.c | 2 +- include/linux/kvm_host.h | 30 ++++++++++++++++ 5 files changed, 105 insertions(+), 5 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 2190fd8c95c0..b1953ebc012e 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3058,7 +3058,7 @@ static int host_pfn_mapping_level(struct kvm *kvm, gfn_t gfn, int kvm_mmu_max_mapping_level(struct kvm *kvm, const struct kvm_memory_slot *slot, gfn_t gfn, - int max_level) + int max_level, bool is_private) { struct kvm_lpage_info *linfo; int host_level; @@ -3070,6 +3070,9 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm, break; } + if (is_private) + return max_level; + if (max_level == PG_LEVEL_4K) return PG_LEVEL_4K; @@ -3098,7 +3101,8 @@ void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault * level, which will be used to do precise, accurate accounting. */ fault->req_level = kvm_mmu_max_mapping_level(vcpu->kvm, slot, - fault->gfn, fault->max_level); + fault->gfn, fault->max_level, + fault->is_private); if (fault->req_level == PG_LEVEL_4K || fault->huge_page_disallowed) return; @@ -4178,6 +4182,49 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work) kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, 0, true); } +static inline u8 order_to_level(int order) +{ + BUILD_BUG_ON(KVM_MAX_HUGEPAGE_LEVEL > PG_LEVEL_1G); + + if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G)) + return PG_LEVEL_1G; + + if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M)) + return PG_LEVEL_2M; + + return PG_LEVEL_4K; +} + +static int kvm_do_memory_fault_exit(struct kvm_vcpu *vcpu, + struct kvm_page_fault *fault) +{ + vcpu->run->exit_reason = KVM_EXIT_MEMORY_FAULT; + if (fault->is_private) + vcpu->run->memory.flags = KVM_MEMORY_EXIT_FLAG_PRIVATE; + else + vcpu->run->memory.flags = 0; + vcpu->run->memory.gpa = fault->gfn << PAGE_SHIFT; + vcpu->run->memory.size = PAGE_SIZE; + return RET_PF_USER; +} + +static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu, + struct kvm_page_fault *fault) +{ + int order; + struct kvm_memory_slot *slot = fault->slot; + + if (!kvm_slot_can_be_private(slot)) + return kvm_do_memory_fault_exit(vcpu, fault); + + if (kvm_restricted_mem_get_pfn(slot, fault->gfn, &fault->pfn, &order)) + return RET_PF_RETRY; + + fault->max_level = min(order_to_level(order), fault->max_level); + fault->map_writable = !(slot->flags & KVM_MEM_READONLY); + return RET_PF_CONTINUE; +} + static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) { struct kvm_memory_slot *slot = fault->slot; @@ -4210,6 +4257,12 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) return RET_PF_EMULATE; } + if (fault->is_private != kvm_mem_is_private(vcpu->kvm, fault->gfn)) + return kvm_do_memory_fault_exit(vcpu, fault); + + if (fault->is_private) + return kvm_faultin_pfn_private(vcpu, fault); + async = false; fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, false, &async, fault->write, &fault->map_writable, @@ -5599,6 +5652,9 @@ int noinline kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, u64 err return -EIO; } + if (r == RET_PF_USER) + return 0; + if (r < 0) 
return r; if (r != RET_PF_EMULATE) @@ -6452,7 +6508,8 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm, */ if (sp->role.direct && sp->role.level < kvm_mmu_max_mapping_level(kvm, slot, sp->gfn, - PG_LEVEL_NUM)) { + PG_LEVEL_NUM, + false)) { kvm_zap_one_rmap_spte(kvm, rmap_head, sptep); if (kvm_available_flush_tlb_with_range()) diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h index dbaf6755c5a7..5ccf08183b00 100644 --- a/arch/x86/kvm/mmu/mmu_internal.h +++ b/arch/x86/kvm/mmu/mmu_internal.h @@ -189,6 +189,7 @@ struct kvm_page_fault { /* Derived from mmu and global state. */ const bool is_tdp; + const bool is_private; const bool nx_huge_page_workaround_enabled; /* @@ -237,6 +238,7 @@ int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault); * RET_PF_RETRY: let CPU fault again on the address. * RET_PF_EMULATE: mmio page fault, emulate the instruction directly. * RET_PF_INVALID: the spte is invalid, let the real page fault path update it. + * RET_PF_USER: need to exit to userspace to handle this fault. * RET_PF_FIXED: The faulting entry has been fixed. * RET_PF_SPURIOUS: The faulting entry was already fixed, e.g. by another vCPU. * @@ -253,6 +255,7 @@ enum { RET_PF_RETRY, RET_PF_EMULATE, RET_PF_INVALID, + RET_PF_USER, RET_PF_FIXED, RET_PF_SPURIOUS, }; @@ -310,7 +313,7 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, int kvm_mmu_max_mapping_level(struct kvm *kvm, const struct kvm_memory_slot *slot, gfn_t gfn, - int max_level); + int max_level, bool is_private); void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault); void disallowed_hugepage_adjust(struct kvm_page_fault *fault, u64 spte, int cur_level); @@ -319,4 +322,13 @@ void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc); void track_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp); void untrack_possible_nx_huge_page(struct kvm *kvm, struct kvm_mmu_page *sp); +#ifndef CONFIG_HAVE_KVM_RESTRICTED_MEM +static inline int kvm_restricted_mem_get_pfn(struct kvm_memory_slot *slot, + gfn_t gfn, kvm_pfn_t *pfn, int *order) +{ + WARN_ON_ONCE(1); + return -EOPNOTSUPP; +} +#endif /* CONFIG_HAVE_KVM_RESTRICTED_MEM */ + #endif /* __KVM_X86_MMU_INTERNAL_H */ diff --git a/arch/x86/kvm/mmu/mmutrace.h b/arch/x86/kvm/mmu/mmutrace.h index ae86820cef69..2d7555381955 100644 --- a/arch/x86/kvm/mmu/mmutrace.h +++ b/arch/x86/kvm/mmu/mmutrace.h @@ -58,6 +58,7 @@ TRACE_DEFINE_ENUM(RET_PF_CONTINUE); TRACE_DEFINE_ENUM(RET_PF_RETRY); TRACE_DEFINE_ENUM(RET_PF_EMULATE); TRACE_DEFINE_ENUM(RET_PF_INVALID); +TRACE_DEFINE_ENUM(RET_PF_USER); TRACE_DEFINE_ENUM(RET_PF_FIXED); TRACE_DEFINE_ENUM(RET_PF_SPURIOUS); diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c index 771210ce5181..8ba1a4afc546 100644 --- a/arch/x86/kvm/mmu/tdp_mmu.c +++ b/arch/x86/kvm/mmu/tdp_mmu.c @@ -1768,7 +1768,7 @@ static void zap_collapsible_spte_range(struct kvm *kvm, continue; max_mapping_level = kvm_mmu_max_mapping_level(kvm, slot, - iter.gfn, PG_LEVEL_NUM); + iter.gfn, PG_LEVEL_NUM, false); if (max_mapping_level < iter.level) continue; diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 25099c94e770..153842bb33df 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -2335,4 +2335,34 @@ static inline void kvm_arch_set_memory_attributes(struct kvm *kvm, } #endif /* __KVM_HAVE_ARCH_SET_MEMORY_ATTRIBUTES */ +#ifdef CONFIG_HAVE_KVM_MEMORY_ATTRIBUTES +static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t 
gfn) +{ + return xa_to_value(xa_load(&kvm->mem_attr_array, gfn)) & + KVM_MEMORY_ATTRIBUTE_PRIVATE; +} +#else +static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn) +{ + return false; +} + +#endif /* CONFIG_HAVE_KVM_MEMORY_ATTRIBUTES */ + +#ifdef CONFIG_HAVE_KVM_RESTRICTED_MEM +static inline int kvm_restricted_mem_get_pfn(struct kvm_memory_slot *slot, + gfn_t gfn, kvm_pfn_t *pfn, int *order) +{ + int ret; + struct page *page; + pgoff_t index = gfn - slot->base_gfn + + (slot->restricted_offset >> PAGE_SHIFT); + + ret = restrictedmem_get_page(slot->restricted_file, index, + &page, order); + *pfn = page_to_pfn(page); + return ret; +} +#endif /* CONFIG_HAVE_KVM_RESTRICTED_MEM */ + #endif
From patchwork Fri Dec 2 06:13:47 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Chao Peng
X-Patchwork-Id: 13062266
From: Chao Peng
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, linux-doc@vger.kernel.org, qemu-devel@nongnu.org
Cc: Paolo Bonzini, Jonathan Corbet, Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Arnd Bergmann, Naoya Horiguchi, Miaohe Lin, x86@kernel.org, "H. Peter Anvin", Hugh Dickins, Jeff Layton, "J. Bruce Fields", Andrew Morton, Shuah Khan, Mike Rapoport, Steven Price, "Maciej S. Szmigiero", Vlastimil Babka, Vishal Annapurve, Yu Zhang, Chao Peng, "Kirill A. Shutemov",
Shutemov" , luto@kernel.org, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com, aarcange@redhat.com, ddutile@redhat.com, dhildenb@redhat.com, Quentin Perret , tabba@google.com, Michael Roth , mhocko@suse.com, wei.w.wang@intel.com Subject: [PATCH v10 9/9] KVM: Enable and expose KVM_MEM_PRIVATE Date: Fri, 2 Dec 2022 14:13:47 +0800 Message-Id: <20221202061347.1070246-10-chao.p.peng@linux.intel.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20221202061347.1070246-1-chao.p.peng@linux.intel.com> References: <20221202061347.1070246-1-chao.p.peng@linux.intel.com> MIME-Version: 1.0 X-Rspamd-Server: rspam09 X-Rspamd-Queue-Id: 07282A0012 X-Stat-Signature: 9quxidz7uj3tb18mcgfqnqzy8kzpqocm X-Spamd-Result: default: False [0.80 / 9.00]; BAYES_HAM(-3.00)[100.00%]; R_MISSING_CHARSET(2.50)[]; SUBJECT_HAS_UNDERSCORES(1.00)[]; MID_CONTAINS_FROM(1.00)[]; DMARC_POLICY_ALLOW(-0.50)[intel.com,none]; R_DKIM_ALLOW(-0.20)[intel.com:s=Intel]; RCVD_NO_TLS_LAST(0.10)[]; MIME_GOOD(-0.10)[text/plain]; RCPT_COUNT_TWELVE(0.00)[49]; R_SPF_NA(0.00)[no SPF record]; FROM_EQ_ENVFROM(0.00)[]; TO_MATCH_ENVRCPT_SOME(0.00)[]; MIME_TRACE(0.00)[0:+]; FROM_HAS_DN(0.00)[]; DKIM_TRACE(0.00)[intel.com:+]; TO_DN_SOME(0.00)[]; RCVD_COUNT_THREE(0.00)[3]; ARC_SIGNED(0.00)[hostedemail.com:s=arc-20220608:i=1]; ARC_NA(0.00)[] X-Rspam-User: X-HE-Tag: 1669961997-961050 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Register/unregister private memslot to fd-based memory backing store restrictedmem and implement the callbacks for restrictedmem_notifier: - invalidate_start()/invalidate_end() to zap the existing memory mappings in the KVM page table. - error() to request KVM_REQ_MEMORY_MCE and later exit to userspace with KVM_EXIT_SHUTDOWN. Expose KVM_MEM_PRIVATE for memslot and KVM_MEMORY_ATTRIBUTE_PRIVATE for KVM_GET_SUPPORTED_MEMORY_ATTRIBUTES to userspace but either are controlled by kvm_arch_has_private_mem() which should be rewritten by architecture code. 
Co-developed-by: Yu Zhang Signed-off-by: Yu Zhang Signed-off-by: Chao Peng Reviewed-by: Fuad Tabba Tested-by: Fuad Tabba Signed-off-by: Isaku Yamahata --- arch/x86/include/asm/kvm_host.h | 1 + arch/x86/kvm/x86.c | 13 +++ include/linux/kvm_host.h | 3 + virt/kvm/kvm_main.c | 179 +++++++++++++++++++++++++++++++- 4 files changed, 191 insertions(+), 5 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 7772ab37ac89..27ef31133352 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -114,6 +114,7 @@ KVM_ARCH_REQ_FLAGS(31, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP) #define KVM_REQ_HV_TLB_FLUSH \ KVM_ARCH_REQ_FLAGS(32, KVM_REQUEST_WAIT | KVM_REQUEST_NO_WAKEUP) +#define KVM_REQ_MEMORY_MCE KVM_ARCH_REQ(33) #define CR0_RESERVED_BITS \ (~(unsigned long)(X86_CR0_PE | X86_CR0_MP | X86_CR0_EM | X86_CR0_TS \ diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 5aefcff614d2..c67e22f3e2ee 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -6587,6 +6587,13 @@ int kvm_arch_pm_notifier(struct kvm *kvm, unsigned long state) } #endif /* CONFIG_HAVE_KVM_PM_NOTIFIER */ +#ifdef CONFIG_HAVE_KVM_RESTRICTED_MEM +void kvm_arch_memory_mce(struct kvm *kvm) +{ + kvm_make_all_cpus_request(kvm, KVM_REQ_MEMORY_MCE); +} +#endif + static int kvm_vm_ioctl_get_clock(struct kvm *kvm, void __user *argp) { struct kvm_clock_data data = { 0 }; @@ -10357,6 +10364,12 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu) if (kvm_check_request(KVM_REQ_UPDATE_CPU_DIRTY_LOGGING, vcpu)) static_call(kvm_x86_update_cpu_dirty_logging)(vcpu); + + if (kvm_check_request(KVM_REQ_MEMORY_MCE, vcpu)) { + vcpu->run->exit_reason = KVM_EXIT_SHUTDOWN; + r = 0; + goto out; + } } if (kvm_check_request(KVM_REQ_EVENT, vcpu) || req_int_win || diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 153842bb33df..f032d878e034 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -590,6 +590,7 @@ struct kvm_memory_slot { struct file *restricted_file; loff_t restricted_offset; struct restrictedmem_notifier notifier; + struct kvm *kvm; }; static inline bool kvm_slot_can_be_private(const struct kvm_memory_slot *slot) @@ -2363,6 +2364,8 @@ static inline int kvm_restricted_mem_get_pfn(struct kvm_memory_slot *slot, *pfn = page_to_pfn(page); return ret; } + +void kvm_arch_memory_mce(struct kvm *kvm); #endif /* CONFIG_HAVE_KVM_RESTRICTED_MEM */ #endif diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index e107afea32f0..ac835fc77273 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -936,6 +936,121 @@ static int kvm_init_mmu_notifier(struct kvm *kvm) #endif /* CONFIG_MMU_NOTIFIER && KVM_ARCH_WANT_MMU_NOTIFIER */ +#ifdef CONFIG_HAVE_KVM_RESTRICTED_MEM +static bool restrictedmem_range_is_valid(struct kvm_memory_slot *slot, + pgoff_t start, pgoff_t end, + gfn_t *gfn_start, gfn_t *gfn_end) +{ + unsigned long base_pgoff = slot->restricted_offset >> PAGE_SHIFT; + + if (start > base_pgoff) + *gfn_start = slot->base_gfn + start - base_pgoff; + else + *gfn_start = slot->base_gfn; + + if (end < base_pgoff + slot->npages) + *gfn_end = slot->base_gfn + end - base_pgoff; + else + *gfn_end = slot->base_gfn + slot->npages; + + if (*gfn_start >= *gfn_end) + return false; + + return true; +} + +static void kvm_restrictedmem_invalidate_begin(struct restrictedmem_notifier *notifier, + pgoff_t start, pgoff_t end) +{ + struct kvm_memory_slot *slot = container_of(notifier, + struct kvm_memory_slot, + notifier); + struct kvm *kvm = slot->kvm; + gfn_t 
gfn_start, gfn_end; + struct kvm_gfn_range gfn_range; + int idx; + + if (!restrictedmem_range_is_valid(slot, start, end, + &gfn_start, &gfn_end)) + return; + + gfn_range.start = gfn_start; + gfn_range.end = gfn_end; + gfn_range.slot = slot; + gfn_range.pte = __pte(0); + gfn_range.may_block = true; + + idx = srcu_read_lock(&kvm->srcu); + KVM_MMU_LOCK(kvm); + + kvm_mmu_invalidate_begin(kvm); + kvm_mmu_invalidate_range_add(kvm, gfn_start, gfn_end); + if (kvm_unmap_gfn_range(kvm, &gfn_range)) + kvm_flush_remote_tlbs(kvm); + + KVM_MMU_UNLOCK(kvm); + srcu_read_unlock(&kvm->srcu, idx); +} + +static void kvm_restrictedmem_invalidate_end(struct restrictedmem_notifier *notifier, + pgoff_t start, pgoff_t end) +{ + struct kvm_memory_slot *slot = container_of(notifier, + struct kvm_memory_slot, + notifier); + struct kvm *kvm = slot->kvm; + gfn_t gfn_start, gfn_end; + + if (!restrictedmem_range_is_valid(slot, start, end, + &gfn_start, &gfn_end)) + return; + + KVM_MMU_LOCK(kvm); + kvm_mmu_invalidate_end(kvm); + KVM_MMU_UNLOCK(kvm); +} + +static void kvm_restrictedmem_error(struct restrictedmem_notifier *notifier, + pgoff_t start, pgoff_t end) +{ + struct kvm_memory_slot *slot = container_of(notifier, + struct kvm_memory_slot, + notifier); + kvm_arch_memory_mce(slot->kvm); +} + +static struct restrictedmem_notifier_ops kvm_restrictedmem_notifier_ops = { + .invalidate_start = kvm_restrictedmem_invalidate_begin, + .invalidate_end = kvm_restrictedmem_invalidate_end, + .error = kvm_restrictedmem_error, +}; + +static inline void kvm_restrictedmem_register(struct kvm_memory_slot *slot) +{ + slot->notifier.ops = &kvm_restrictedmem_notifier_ops; + restrictedmem_register_notifier(slot->restricted_file, &slot->notifier); +} + +static inline void kvm_restrictedmem_unregister(struct kvm_memory_slot *slot) +{ + restrictedmem_unregister_notifier(slot->restricted_file, + &slot->notifier); +} + +#else /* !CONFIG_HAVE_KVM_RESTRICTED_MEM */ + +static inline void kvm_restrictedmem_register(struct kvm_memory_slot *slot) +{ + WARN_ON_ONCE(1); +} + +static inline void kvm_restrictedmem_unregister(struct kvm_memory_slot *slot) +{ + WARN_ON_ONCE(1); +} + +#endif /* CONFIG_HAVE_KVM_RESTRICTED_MEM */ + #ifdef CONFIG_HAVE_KVM_PM_NOTIFIER static int kvm_pm_notifier_call(struct notifier_block *bl, unsigned long state, @@ -980,6 +1095,11 @@ static void kvm_destroy_dirty_bitmap(struct kvm_memory_slot *memslot) /* This does not remove the slot from struct kvm_memslots data structures */ static void kvm_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot) { + if (slot->flags & KVM_MEM_PRIVATE) { + kvm_restrictedmem_unregister(slot); + fput(slot->restricted_file); + } + kvm_destroy_dirty_bitmap(slot); kvm_arch_free_memslot(kvm, slot); @@ -1551,10 +1671,14 @@ static void kvm_replace_memslot(struct kvm *kvm, } } -static int check_memory_region_flags(const struct kvm_user_mem_region *mem) +static int check_memory_region_flags(struct kvm *kvm, + const struct kvm_user_mem_region *mem) { u32 valid_flags = KVM_MEM_LOG_DIRTY_PAGES; + if (kvm_arch_has_private_mem(kvm)) + valid_flags |= KVM_MEM_PRIVATE; + #ifdef __KVM_HAVE_READONLY_MEM valid_flags |= KVM_MEM_READONLY; #endif @@ -1630,6 +1754,9 @@ static int kvm_prepare_memory_region(struct kvm *kvm, { int r; + if (change == KVM_MR_CREATE && new->flags & KVM_MEM_PRIVATE) + kvm_restrictedmem_register(new); + /* * If dirty logging is disabled, nullify the bitmap; the old bitmap * will be freed on "commit". 
If logging is enabled in both old and @@ -1658,6 +1785,9 @@ static int kvm_prepare_memory_region(struct kvm *kvm, if (r && new && new->dirty_bitmap && (!old || !old->dirty_bitmap)) kvm_destroy_dirty_bitmap(new); + if (r && change == KVM_MR_CREATE && new->flags & KVM_MEM_PRIVATE) + kvm_restrictedmem_unregister(new); + return r; } @@ -1963,7 +2093,7 @@ int __kvm_set_memory_region(struct kvm *kvm, int as_id, id; int r; - r = check_memory_region_flags(mem); + r = check_memory_region_flags(kvm, mem); if (r) return r; @@ -1982,6 +2112,10 @@ int __kvm_set_memory_region(struct kvm *kvm, !access_ok((void __user *)(unsigned long)mem->userspace_addr, mem->memory_size)) return -EINVAL; + if (mem->flags & KVM_MEM_PRIVATE && + (mem->restricted_offset & (PAGE_SIZE - 1) || + mem->restricted_offset > U64_MAX - mem->memory_size)) + return -EINVAL; if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_MEM_SLOTS_NUM) return -EINVAL; if (mem->guest_phys_addr + mem->memory_size < mem->guest_phys_addr) @@ -2020,6 +2154,9 @@ int __kvm_set_memory_region(struct kvm *kvm, if ((kvm->nr_memslot_pages + npages) < kvm->nr_memslot_pages) return -EINVAL; } else { /* Modify an existing slot. */ + /* Private memslots are immutable, they can only be deleted. */ + if (mem->flags & KVM_MEM_PRIVATE) + return -EINVAL; if ((mem->userspace_addr != old->userspace_addr) || (npages != old->npages) || ((mem->flags ^ old->flags) & KVM_MEM_READONLY)) @@ -2048,10 +2185,28 @@ int __kvm_set_memory_region(struct kvm *kvm, new->npages = npages; new->flags = mem->flags; new->userspace_addr = mem->userspace_addr; + if (mem->flags & KVM_MEM_PRIVATE) { + new->restricted_file = fget(mem->restricted_fd); + if (!new->restricted_file || + !file_is_restrictedmem(new->restricted_file)) { + r = -EINVAL; + goto out; + } + new->restricted_offset = mem->restricted_offset; + } + + new->kvm = kvm; r = kvm_set_memslot(kvm, old, new, change); if (r) - kfree(new); + goto out; + + return 0; + +out: + if (new->restricted_file) + fput(new->restricted_file); + kfree(new); return r; } EXPORT_SYMBOL_GPL(__kvm_set_memory_region); @@ -2351,6 +2506,8 @@ static int kvm_vm_ioctl_clear_dirty_log(struct kvm *kvm, #ifdef CONFIG_HAVE_KVM_MEMORY_ATTRIBUTES static u64 kvm_supported_mem_attributes(struct kvm *kvm) { + if (kvm_arch_has_private_mem(kvm)) + return KVM_MEMORY_ATTRIBUTE_PRIVATE; return 0; } @@ -4822,16 +4979,28 @@ static long kvm_vm_ioctl(struct file *filp, } case KVM_SET_USER_MEMORY_REGION: { struct kvm_user_mem_region mem; - unsigned long size = sizeof(struct kvm_userspace_memory_region); + unsigned int flags_offset = offsetof(typeof(mem), flags); + unsigned long size; + u32 flags; kvm_sanity_check_user_mem_region_alias(); + memset(&mem, 0, sizeof(mem)); + r = -EFAULT; + if (get_user(flags, (u32 __user *)(argp + flags_offset))) + goto out; + + if (flags & KVM_MEM_PRIVATE) + size = sizeof(struct kvm_userspace_memory_region_ext); + else + size = sizeof(struct kvm_userspace_memory_region); + if (copy_from_user(&mem, argp, size)) goto out; r = -EINVAL; - if (mem.flags & KVM_MEM_PRIVATE) + if ((flags ^ mem.flags) & KVM_MEM_PRIVATE) goto out; r = kvm_vm_ioctl_set_memory_region(kvm, &mem);