From patchwork Fri Nov 19 13:47:27 2021
From: Chao Peng <chao.p.peng@linux.intel.com>
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, qemu-devel@nongnu.org
Cc: Paolo Bonzini, Jonathan Corbet, Sean Christopherson, Vitaly Kuznetsov,
    Wanpeng Li, Jim Mattson, Joerg Roedel, Thomas Gleixner, Ingo Molnar,
    Borislav Petkov, x86@kernel.org, H. Peter Anvin, Hugh Dickins,
    Jeff Layton, J. Bruce Fields, Andrew Morton, Yu Zhang, Chao Peng,
    Kirill A. Shutemov, luto@kernel.org, john.ji@intel.com,
    susie.li@intel.com, jun.nakajima@intel.com, dave.hansen@intel.com,
    ak@linux.intel.com, david@redhat.com
Shutemov" , luto@kernel.org, john.ji@intel.com, susie.li@intel.com, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com Subject: [RFC v2 PATCH 01/13] mm/shmem: Introduce F_SEAL_GUEST Date: Fri, 19 Nov 2021 21:47:27 +0800 Message-Id: <20211119134739.20218-2-chao.p.peng@linux.intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211119134739.20218-1-chao.p.peng@linux.intel.com> References: <20211119134739.20218-1-chao.p.peng@linux.intel.com> X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: E465FE0019B8 X-Stat-Signature: 4h1bf6na7qt9p1yaxzy9ps5jbkutet4f Authentication-Results: imf30.hostedemail.com; dkim=none; spf=none (imf30.hostedemail.com: domain of chao.p.peng@linux.intel.com has no SPF policy when checking 134.134.136.65) smtp.mailfrom=chao.p.peng@linux.intel.com; dmarc=fail reason="No valid SPF, No valid DKIM" header.from=intel.com (policy=none) X-HE-Tag: 1637329721-393967 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: "Kirill A. Shutemov" The new seal type provides semantics required for KVM guest private memory support. A file descriptor with the seal set is going to be used as source of guest memory in confidential computing environments such as Intel TDX and AMD SEV. F_SEAL_GUEST can only be set on empty memfd. After the seal is set userspace cannot read, write or mmap the memfd. Userspace is in charge of guest memory lifecycle: it can allocate the memory with falloc or punch hole to free memory from the guest. The file descriptor passed down to KVM as guest memory backend. KVM register itself as the owner of the memfd via memfd_register_guest(). KVM provides callback that needed to be called on fallocate and punch hole. memfd_register_guest() returns callbacks that need be used for requesting a new page from memfd. Signed-off-by: Kirill A. 
Signed-off-by: Kirill A. Shutemov
Signed-off-by: Chao Peng
---
 include/linux/memfd.h      |  24 ++++++++
 include/linux/shmem_fs.h   |   9 +++
 include/uapi/linux/fcntl.h |   1 +
 mm/memfd.c                 |  33 +++++++++-
 mm/shmem.c                 | 123 ++++++++++++++++++++++++++++++++++++-
 5 files changed, 186 insertions(+), 4 deletions(-)

diff --git a/include/linux/memfd.h b/include/linux/memfd.h
index 4f1600413f91..ff920ef28688 100644
--- a/include/linux/memfd.h
+++ b/include/linux/memfd.h
@@ -4,13 +4,37 @@
 #include

+struct guest_ops {
+	void (*invalidate_page_range)(struct inode *inode, void *owner,
+				      pgoff_t start, pgoff_t end);
+	void (*fallocate)(struct inode *inode, void *owner,
+			  pgoff_t start, pgoff_t end);
+};
+
+struct guest_mem_ops {
+	unsigned long (*get_lock_pfn)(struct inode *inode, pgoff_t offset,
+				      bool alloc, int *order);
+	void (*put_unlock_pfn)(unsigned long pfn);
+
+};
+
 #ifdef CONFIG_MEMFD_CREATE
 extern long memfd_fcntl(struct file *file, unsigned int cmd, unsigned long arg);
+
+extern inline int memfd_register_guest(struct inode *inode, void *owner,
+				       const struct guest_ops *guest_ops,
+				       const struct guest_mem_ops **guest_mem_ops);
 #else
 static inline long memfd_fcntl(struct file *f, unsigned int c, unsigned long a)
 {
 	return -EINVAL;
 }
+static inline int memfd_register_guest(struct inode *inode, void *owner,
+				       const struct guest_ops *guest_ops,
+				       const struct guest_mem_ops **guest_mem_ops)
+{
+	return -EINVAL;
+}
 #endif

 #endif /* __LINUX_MEMFD_H */
diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index 166158b6e917..8280c918775a 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -12,6 +12,9 @@

 /* inode in-kernel data */

+struct guest_ops;
+struct guest_mem_ops;
+
 struct shmem_inode_info {
 	spinlock_t		lock;
 	unsigned int		seals;		/* shmem seals */
@@ -25,6 +28,8 @@ struct shmem_inode_info {
 	struct simple_xattrs	xattrs;		/* list of xattrs */
 	atomic_t		stop_eviction;	/* hold when working on inode */
 	struct inode		vfs_inode;
+	void			*guest_owner;
+	const struct guest_ops	*guest_ops;
 };

 struct shmem_sb_info {
@@ -96,6 +101,10 @@ extern unsigned long shmem_swap_usage(struct vm_area_struct *vma);
 extern unsigned long shmem_partial_swap_usage(struct address_space *mapping,
 						pgoff_t start, pgoff_t end);

+extern int shmem_register_guest(struct inode *inode, void *owner,
+				const struct guest_ops *guest_ops,
+				const struct guest_mem_ops **guest_mem_ops);
+
 /* Flag allocation requirements to shmem_getpage */
 enum sgp_type {
 	SGP_READ,	/* don't exceed i_size, don't allocate page */
diff --git a/include/uapi/linux/fcntl.h b/include/uapi/linux/fcntl.h
index 2f86b2ad6d7e..c79bc8572721 100644
--- a/include/uapi/linux/fcntl.h
+++ b/include/uapi/linux/fcntl.h
@@ -43,6 +43,7 @@
 #define F_SEAL_GROW	0x0004	/* prevent file from growing */
 #define F_SEAL_WRITE	0x0008	/* prevent writes */
 #define F_SEAL_FUTURE_WRITE	0x0010  /* prevent future writes while mapped */
+#define F_SEAL_GUEST	0x0020
 /* (1U << 31) is reserved for signed error codes */

 /*
diff --git a/mm/memfd.c b/mm/memfd.c
index 081dd33e6a61..a98b30bcf982 100644
--- a/mm/memfd.c
+++ b/mm/memfd.c
@@ -130,11 +130,25 @@ static unsigned int *memfd_file_seals_ptr(struct file *file)
 	return NULL;
 }

+int memfd_register_guest(struct inode *inode, void *owner,
+			 const struct guest_ops *guest_ops,
+			 const struct guest_mem_ops **guest_mem_ops)
+{
+	if (shmem_mapping(inode->i_mapping)) {
+		return shmem_register_guest(inode, owner,
+					    guest_ops, guest_mem_ops);
+	}
+
+	return -EINVAL;
+}
+EXPORT_SYMBOL_GPL(memfd_register_guest);
+
 #define F_ALL_SEALS (F_SEAL_SEAL | \
 		     F_SEAL_SHRINK | \
 		     F_SEAL_GROW | \
 		     F_SEAL_WRITE | \
-		     F_SEAL_FUTURE_WRITE)
+		     F_SEAL_FUTURE_WRITE | \
+		     F_SEAL_GUEST)

 static int memfd_add_seals(struct file *file, unsigned int seals)
 {
@@ -203,10 +217,27 @@ static int memfd_add_seals(struct file *file, unsigned int seals)
 		}
 	}

+	if (seals & F_SEAL_GUEST) {
+		i_mmap_lock_read(inode->i_mapping);
+
+		if (!RB_EMPTY_ROOT(&inode->i_mapping->i_mmap.rb_root)) {
+			error = -EBUSY;
+			goto unlock;
+		}
+
+		if (i_size_read(inode)) {
+			error = -EBUSY;
+			goto unlock;
+		}
+	}
+
 	*file_seals |= seals;
 	error = 0;

 unlock:
+	if (seals & F_SEAL_GUEST)
+		i_mmap_unlock_read(inode->i_mapping);
+
 	inode_unlock(inode);
 	return error;
 }
diff --git a/mm/shmem.c b/mm/shmem.c
index 23c91a8beb78..38b3b6b9a3a5 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -78,6 +78,7 @@ static struct vfsmount *shm_mnt;
 #include
 #include
 #include
+#include
 #include

@@ -906,6 +907,21 @@ static bool shmem_punch_compound(struct page *page, pgoff_t start, pgoff_t end)
 	return split_huge_page(page) >= 0;
 }

+static void guest_invalidate_page(struct inode *inode,
+				  struct page *page, pgoff_t start, pgoff_t end)
+{
+	struct shmem_inode_info *info = SHMEM_I(inode);
+
+	if (!info->guest_ops || !info->guest_ops->invalidate_page_range)
+		return;
+
+	start = max(start, page->index);
+	end = min(end, page->index + thp_nr_pages(page)) - 1;
+
+	info->guest_ops->invalidate_page_range(inode, info->guest_owner,
+					       start, end);
+}
+
 /*
  * Remove range of pages and swap entries from page cache, and free them.
  * If !unfalloc, truncate or punch hole; if unfalloc, undo failed fallocate.
@@ -949,6 +965,8 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 			}
 			index += thp_nr_pages(page) - 1;

+			guest_invalidate_page(inode, page, start, end);
+
 			if (!unfalloc || !PageUptodate(page))
 				truncate_inode_page(mapping, page);
 			unlock_page(page);
@@ -1025,6 +1043,9 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 					index--;
 					break;
 				}
+
+				guest_invalidate_page(inode, page, start, end);
+
 				VM_BUG_ON_PAGE(PageWriteback(page), page);
 				if (shmem_punch_compound(page, start, end))
 					truncate_inode_page(mapping, page);
@@ -1098,6 +1119,9 @@ static int shmem_setattr(struct user_namespace *mnt_userns,
 		    (newsize > oldsize && (info->seals & F_SEAL_GROW)))
 			return -EPERM;

+		if ((info->seals & F_SEAL_GUEST) && (newsize & ~PAGE_MASK))
+			return -EINVAL;
+
 		if (newsize != oldsize) {
 			error = shmem_reacct_size(SHMEM_I(inode)->flags,
 					oldsize, newsize);
@@ -1364,6 +1388,8 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
 		goto redirty;
 	if (!total_swap_pages)
 		goto redirty;
+	if (info->seals & F_SEAL_GUEST)
+		goto redirty;

 	/*
 	 * Our capabilities prevent regular writeback or sync from ever calling
@@ -2262,6 +2288,9 @@ static int shmem_mmap(struct file *file, struct vm_area_struct *vma)
 	if (ret)
 		return ret;

+	if (info->seals & F_SEAL_GUEST)
+		return -EPERM;
+
 	/* arm64 - allow memory tagging on RAM-based files */
 	vma->vm_flags |= VM_MTE_ALLOWED;

@@ -2459,12 +2488,14 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
 	int ret = 0;

 	/* i_rwsem is held by caller */
-	if (unlikely(info->seals & (F_SEAL_GROW |
-				   F_SEAL_WRITE | F_SEAL_FUTURE_WRITE))) {
+	if (unlikely(info->seals & (F_SEAL_GROW | F_SEAL_WRITE |
+				    F_SEAL_FUTURE_WRITE | F_SEAL_GUEST))) {
 		if (info->seals & (F_SEAL_WRITE | F_SEAL_FUTURE_WRITE))
 			return -EPERM;
 		if ((info->seals & F_SEAL_GROW) && pos + len > inode->i_size)
 			return -EPERM;
+		if (info->seals & F_SEAL_GUEST)
+			return -EPERM;
 	}

 	ret = shmem_getpage(inode, index, pagep, SGP_WRITE);
@@ -2546,6 +2577,20 @@ static ssize_t shmem_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
 		end_index = i_size >> PAGE_SHIFT;
 		if (index > end_index)
 			break;
+
+		/*
+		 * inode_lock protects setting up seals as well as writes to
+		 * i_size. Setting F_SEAL_GUEST is only allowed with
+		 * i_size == 0.
+		 *
+		 * Check F_SEAL_GUEST after i_size. It effectively serializes
+		 * read vs. setting F_SEAL_GUEST without taking inode_lock in
+		 * the read path.
+		 */
+		if (SHMEM_I(inode)->seals & F_SEAL_GUEST) {
+			error = -EPERM;
+			break;
+		}
+
 		if (index == end_index) {
 			nr = i_size & ~PAGE_MASK;
 			if (nr <= offset)
@@ -2677,6 +2722,12 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
 			goto out;
 		}

+		if ((info->seals & F_SEAL_GUEST) &&
+		    (offset & ~PAGE_MASK || len & ~PAGE_MASK)) {
+			error = -EINVAL;
+			goto out;
+		}
+
 		shmem_falloc.waitq = &shmem_falloc_waitq;
 		shmem_falloc.start = (u64)unmap_start >> PAGE_SHIFT;
 		shmem_falloc.next = (unmap_end + 1) >> PAGE_SHIFT;
@@ -2796,6 +2847,8 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
 	if (!(mode & FALLOC_FL_KEEP_SIZE) && offset + len > inode->i_size)
 		i_size_write(inode, offset + len);
 	inode->i_ctime = current_time(inode);
+	if (info->guest_ops && info->guest_ops->fallocate)
+		info->guest_ops->fallocate(inode, info->guest_owner, start, end);
 undone:
 	spin_lock(&inode->i_lock);
 	inode->i_private = NULL;
@@ -3800,6 +3853,20 @@ static int shmem_error_remove_page(struct address_space *mapping,
 	return 0;
 }

+#ifdef CONFIG_MIGRATION
+int shmem_migrate_page(struct address_space *mapping,
+		       struct page *newpage, struct page *page,
+		       enum migrate_mode mode)
+{
+	struct inode *inode = mapping->host;
+	struct shmem_inode_info *info = SHMEM_I(inode);
+
+	if (info->seals & F_SEAL_GUEST)
+		return -ENOTSUPP;
+	return migrate_page(mapping, newpage, page, mode);
+}
+#endif
+
 const struct address_space_operations shmem_aops = {
 	.writepage	= shmem_writepage,
 	.set_page_dirty	= __set_page_dirty_no_writeback,
@@ -3808,12 +3875,62 @@ const struct address_space_operations shmem_aops = {
 	.write_end	= shmem_write_end,
 #endif
 #ifdef CONFIG_MIGRATION
-	.migratepage	= migrate_page,
+	.migratepage	= shmem_migrate_page,
 #endif
 	.error_remove_page = shmem_error_remove_page,
 };
 EXPORT_SYMBOL(shmem_aops);

+static unsigned long shmem_get_lock_pfn(struct inode *inode, pgoff_t offset,
+					bool alloc, int *order)
+{
+	struct page *page;
+	int ret;
+	enum sgp_type sgp = alloc ? SGP_WRITE : SGP_READ;
+
+	ret = shmem_getpage(inode, offset, &page, sgp);
+	if (ret)
+		return ret;
+
+	*order = thp_order(compound_head(page));
+
+	return page_to_pfn(page);
+}
+
+static void shmem_put_unlock_pfn(unsigned long pfn)
+{
+	struct page *page = pfn_to_page(pfn);
+
+	VM_BUG_ON_PAGE(!PageLocked(page), page);
+
+	set_page_dirty(page);
+	unlock_page(page);
+	put_page(page);
+}
+
+static const struct guest_mem_ops shmem_guest_ops = {
+	.get_lock_pfn = shmem_get_lock_pfn,
+	.put_unlock_pfn = shmem_put_unlock_pfn,
+};
+
+int shmem_register_guest(struct inode *inode, void *owner,
+			 const struct guest_ops *guest_ops,
+			 const struct guest_mem_ops **guest_mem_ops)
+{
+	struct shmem_inode_info *info = SHMEM_I(inode);
+
+	if (!owner)
+		return -EINVAL;
+
+	if (info->guest_owner && info->guest_owner != owner)
+		return -EPERM;
+
+	info->guest_owner = owner;
+	info->guest_ops = guest_ops;
+	*guest_mem_ops = &shmem_guest_ops;
+	return 0;
+}
+
 static const struct file_operations shmem_file_operations = {
 	.mmap		= shmem_mmap,
 	.get_unmapped_area = shmem_get_unmapped_area,
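
A sketch of how an owner consumes the API introduced above. The flow
below is hypothetical (the real consumer is KVM, wired up in patches
05/06): implement the two guest_ops callbacks, register against the
inode, and keep the returned guest_mem_ops for pulling locked PFNs out
of the memfd later.

	/* Hypothetical in-kernel owner; KVM plays this role in patch 05. */
	static const struct guest_mem_ops *mem_ops;

	static void owner_invalidate(struct inode *inode, void *owner,
				     pgoff_t start, pgoff_t end)
	{
		/* Unmap [start, end] from the owner's second-level MMU. */
	}

	static void owner_fallocate(struct inode *inode, void *owner,
				    pgoff_t start, pgoff_t end)
	{
		/* Optionally pre-map the newly allocated range. */
	}

	static const struct guest_ops owner_ops = {
		.invalidate_page_range	= owner_invalidate,
		.fallocate		= owner_fallocate,
	};

	static int owner_attach(struct inode *inode, void *owner)
	{
		/* Hands back shmem's get_lock_pfn/put_unlock_pfn. */
		return memfd_register_guest(inode, owner, &owner_ops,
					    &mem_ops);
	}
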

From patchwork Fri Nov 19 13:47:28 2021
From: Chao Peng <chao.p.peng@linux.intel.com>
Subject: [RFC v2 PATCH 02/13] KVM: Add KVM_EXIT_MEMORY_ERROR exit
Date: Fri, 19 Nov 2021 21:47:28 +0800
Message-Id: <20211119134739.20218-3-chao.p.peng@linux.intel.com>
In-Reply-To: <20211119134739.20218-1-chao.p.peng@linux.intel.com>

This new exit allows userspace to handle memory-related errors. It will
be used for shared memory <-> private memory conversion.
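
As an illustration of the intended userspace flow (hypothetical, not
part of the patch; convert_to_shared()/convert_to_private() stand in
for whatever backing-store conversion the VMM performs):

	#include <linux/kvm.h>
	#include <sys/ioctl.h>

	extern void convert_to_shared(__u64 gpa, __u64 size);
	extern void convert_to_private(__u64 gpa, __u64 size);

	static int run_vcpu(int vcpu_fd, struct kvm_run *run)
	{
		for (;;) {
			if (ioctl(vcpu_fd, KVM_RUN, 0) < 0)
				return -1;

			if (run->exit_reason != KVM_EXIT_MEMORY_ERROR)
				return 0;	/* caller handles other exits */

			/* Convert the range, then re-enter the guest. */
			if (run->mem.type == KVM_EXIT_MEM_MAP_SHARED)
				convert_to_shared(run->mem.u.map.gpa,
						  run->mem.u.map.size);
			else if (run->mem.type == KVM_EXIT_MEM_MAP_PRIVATE)
				convert_to_private(run->mem.u.map.gpa,
						   run->mem.u.map.size);
		}
	}
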
Signed-off-by: Yu Zhang
Signed-off-by: Chao Peng
---
 include/uapi/linux/kvm.h | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 7c93d61cb19e..7e3a8935534b 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -285,6 +285,18 @@ struct kvm_tdx_exit {
 	} u;
 };

+struct kvm_memory_exit {
+#define KVM_EXIT_MEM_MAP_SHARED		1
+#define KVM_EXIT_MEM_MAP_PRIVATE	2
+	__u32 type;
+	union {
+		struct {
+			__u64 gpa;
+			__u64 size;
+		} map;
+	} u;
+};
+
 #define KVM_S390_GET_SKEYS_NONE   1
 #define KVM_S390_SKEYS_MAX        1048576

@@ -324,6 +336,7 @@ struct kvm_tdx_exit {
 #define KVM_EXIT_X86_BUS_LOCK     33
 #define KVM_EXIT_XEN              34
 #define KVM_EXIT_RISCV_SBI        35
+#define KVM_EXIT_MEMORY_ERROR     36
 #define KVM_EXIT_TDX              50 /* dump number to avoid conflict. */

 /* For KVM_EXIT_INTERNAL_ERROR */
@@ -542,6 +555,8 @@ struct kvm_run {
 			unsigned long args[6];
 			unsigned long ret[2];
 		} riscv_sbi;
+		/* KVM_EXIT_MEMORY_ERROR */
+		struct kvm_memory_exit mem;
 		/* KVM_EXIT_TDX_VMCALL */
 		struct kvm_tdx_exit tdx;
 		/* Fix the size of the union. */

From patchwork Fri Nov 19 13:47:29 2021
From: Chao Peng <chao.p.peng@linux.intel.com>
Shutemov" , luto@kernel.org, john.ji@intel.com, susie.li@intel.com, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com Subject: [RFC v2 PATCH 03/13] KVM: Extend kvm_userspace_memory_region to support fd based memslot Date: Fri, 19 Nov 2021 21:47:29 +0800 Message-Id: <20211119134739.20218-4-chao.p.peng@linux.intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211119134739.20218-1-chao.p.peng@linux.intel.com> References: <20211119134739.20218-1-chao.p.peng@linux.intel.com> X-Rspamd-Server: rspam05 X-Rspamd-Queue-Id: 5110ED0369CE X-Stat-Signature: czk94angdsiy1ibc5p9w3ob8wcfx5u1s Authentication-Results: imf21.hostedemail.com; dkim=none; spf=none (imf21.hostedemail.com: domain of chao.p.peng@linux.intel.com has no SPF policy when checking 134.134.136.126) smtp.mailfrom=chao.p.peng@linux.intel.com; dmarc=fail reason="No valid SPF, No valid DKIM" header.from=intel.com (policy=none) X-HE-Tag: 1637329740-52264 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: The patch introduces two fds into memslot, guarded by KVM_MEM_FD flag. The userspace_addr field is repurposed as the offset into two fds, for respectively the shared and private views. If private_fd == -1, the memory slot has only a shared view. Suggested-by: Paolo Bonzini Signed-off-by: Yu Zhang Signed-off-by: Chao Peng --- arch/arm64/kvm/mmu.c | 14 +++++++------- arch/mips/kvm/mips.c | 14 +++++++------- arch/powerpc/include/asm/kvm_ppc.h | 28 ++++++++++++++-------------- arch/powerpc/kvm/book3s.c | 14 +++++++------- arch/powerpc/kvm/book3s_hv.c | 14 +++++++------- arch/powerpc/kvm/book3s_pr.c | 14 +++++++------- arch/powerpc/kvm/booke.c | 14 +++++++------- arch/powerpc/kvm/powerpc.c | 14 +++++++------- arch/riscv/kvm/mmu.c | 14 +++++++------- arch/s390/kvm/kvm-s390.c | 14 +++++++------- arch/x86/include/asm/kvm_host.h | 6 +++--- arch/x86/kvm/vmx/main.c | 6 +++--- arch/x86/kvm/vmx/tdx.c | 6 +++--- arch/x86/kvm/vmx/tdx_stubs.c | 6 +++--- arch/x86/kvm/x86.c | 16 ++++++++-------- include/linux/kvm_host.h | 18 +++++++++--------- include/uapi/linux/kvm.h | 12 ++++++++++++ virt/kvm/kvm_main.c | 23 +++++++++++++++-------- 18 files changed, 133 insertions(+), 114 deletions(-) diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 326cdfec74a1..395e52314834 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -1463,10 +1463,10 @@ int kvm_mmu_init(u32 *hyp_va_bits) } void kvm_arch_commit_memory_region(struct kvm *kvm, - const struct kvm_userspace_memory_region *mem, - struct kvm_memory_slot *old, - const struct kvm_memory_slot *new, - enum kvm_mr_change change) + const struct kvm_userspace_memory_region_ext *mem, + struct kvm_memory_slot *old, + const struct kvm_memory_slot *new, + enum kvm_mr_change change) { /* * At this point memslot has been committed and there is an @@ -1486,9 +1486,9 @@ void kvm_arch_commit_memory_region(struct kvm *kvm, } int kvm_arch_prepare_memory_region(struct kvm *kvm, - struct kvm_memory_slot *memslot, - const struct kvm_userspace_memory_region *mem, - enum kvm_mr_change change) + struct kvm_memory_slot *memslot, + const struct kvm_userspace_memory_region_ext *mem, + enum kvm_mr_change change) { hva_t hva = mem->userspace_addr; hva_t reg_end = hva + mem->memory_size; diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c index 562aa878b266..ef71146809d5 100644 --- a/arch/mips/kvm/mips.c +++ b/arch/mips/kvm/mips.c @@ -233,18 +233,18 @@ void 
Suggested-by: Paolo Bonzini
Signed-off-by: Yu Zhang
Signed-off-by: Chao Peng
---
 arch/arm64/kvm/mmu.c               | 14 +++++++-------
 arch/mips/kvm/mips.c               | 14 +++++++-------
 arch/powerpc/include/asm/kvm_ppc.h | 28 ++++++++++++++--------------
 arch/powerpc/kvm/book3s.c          | 14 +++++++-------
 arch/powerpc/kvm/book3s_hv.c       | 14 +++++++-------
 arch/powerpc/kvm/book3s_pr.c       | 14 +++++++-------
 arch/powerpc/kvm/booke.c           | 14 +++++++-------
 arch/powerpc/kvm/powerpc.c         | 14 +++++++-------
 arch/riscv/kvm/mmu.c               | 14 +++++++-------
 arch/s390/kvm/kvm-s390.c           | 14 +++++++-------
 arch/x86/include/asm/kvm_host.h    |  6 +++---
 arch/x86/kvm/vmx/main.c            |  6 +++---
 arch/x86/kvm/vmx/tdx.c             |  6 +++---
 arch/x86/kvm/vmx/tdx_stubs.c       |  6 +++---
 arch/x86/kvm/x86.c                 | 16 ++++++++--------
 include/linux/kvm_host.h           | 18 +++++++++---------
 include/uapi/linux/kvm.h           | 12 ++++++++++++
 virt/kvm/kvm_main.c                | 23 +++++++++++++++--------
 18 files changed, 133 insertions(+), 114 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 326cdfec74a1..395e52314834 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1463,10 +1463,10 @@ int kvm_mmu_init(u32 *hyp_va_bits)
 }

 void kvm_arch_commit_memory_region(struct kvm *kvm,
-				const struct kvm_userspace_memory_region *mem,
-				struct kvm_memory_slot *old,
-				const struct kvm_memory_slot *new,
-				enum kvm_mr_change change)
+				const struct kvm_userspace_memory_region_ext *mem,
+				struct kvm_memory_slot *old,
+				const struct kvm_memory_slot *new,
+				enum kvm_mr_change change)
 {
 	/*
 	 * At this point memslot has been committed and there is an
@@ -1486,9 +1486,9 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 }

 int kvm_arch_prepare_memory_region(struct kvm *kvm,
-				struct kvm_memory_slot *memslot,
-				const struct kvm_userspace_memory_region *mem,
-				enum kvm_mr_change change)
+				struct kvm_memory_slot *memslot,
+				const struct kvm_userspace_memory_region_ext *mem,
+				enum kvm_mr_change change)
 {
 	hva_t hva = mem->userspace_addr;
 	hva_t reg_end = hva + mem->memory_size;
diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
index 562aa878b266..ef71146809d5 100644
--- a/arch/mips/kvm/mips.c
+++ b/arch/mips/kvm/mips.c
@@ -233,18 +233,18 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
 }

 int kvm_arch_prepare_memory_region(struct kvm *kvm,
-				struct kvm_memory_slot *memslot,
-				const struct kvm_userspace_memory_region *mem,
-				enum kvm_mr_change change)
+				struct kvm_memory_slot *memslot,
+				const struct kvm_userspace_memory_region_ext *mem,
+				enum kvm_mr_change change)
 {
 	return 0;
 }

 void kvm_arch_commit_memory_region(struct kvm *kvm,
-				const struct kvm_userspace_memory_region *mem,
-				struct kvm_memory_slot *old,
-				const struct kvm_memory_slot *new,
-				enum kvm_mr_change change)
+				const struct kvm_userspace_memory_region_ext *mem,
+				struct kvm_memory_slot *old,
+				const struct kvm_memory_slot *new,
+				enum kvm_mr_change change)
 {
 	int needs_flush;
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index 671fbd1a765e..7cdc756a94a0 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -200,14 +200,14 @@ extern void kvmppc_core_destroy_vm(struct kvm *kvm);
 extern void kvmppc_core_free_memslot(struct kvm *kvm,
 				     struct kvm_memory_slot *slot);
 extern int kvmppc_core_prepare_memory_region(struct kvm *kvm,
-				struct kvm_memory_slot *memslot,
-				const struct kvm_userspace_memory_region *mem,
-				enum kvm_mr_change change);
+				struct kvm_memory_slot *memslot,
+				const struct kvm_userspace_memory_region_ext *mem,
+				enum kvm_mr_change change);
 extern void kvmppc_core_commit_memory_region(struct kvm *kvm,
-				const struct kvm_userspace_memory_region *mem,
-				const struct kvm_memory_slot *old,
-				const struct kvm_memory_slot *new,
-				enum kvm_mr_change change);
+				const struct kvm_userspace_memory_region_ext *mem,
+				const struct kvm_memory_slot *old,
+				const struct kvm_memory_slot *new,
+				enum kvm_mr_change change);
 extern int kvm_vm_ioctl_get_smmu_info(struct kvm *kvm,
 				      struct kvm_ppc_smmu_info *info);
 extern void kvmppc_core_flush_memslot(struct kvm *kvm,
@@ -274,14 +274,14 @@ struct kvmppc_ops {
 	int (*get_dirty_log)(struct kvm *kvm, struct kvm_dirty_log *log);
 	void (*flush_memslot)(struct kvm *kvm, struct kvm_memory_slot *memslot);
 	int (*prepare_memory_region)(struct kvm *kvm,
-				struct kvm_memory_slot *memslot,
-				const struct kvm_userspace_memory_region *mem,
-				enum kvm_mr_change change);
+				struct kvm_memory_slot *memslot,
+				const struct kvm_userspace_memory_region_ext *mem,
+				enum kvm_mr_change change);
 	void (*commit_memory_region)(struct kvm *kvm,
-				const struct kvm_userspace_memory_region *mem,
-				const struct kvm_memory_slot *old,
-				const struct kvm_memory_slot *new,
-				enum kvm_mr_change change);
+				const struct kvm_userspace_memory_region_ext *mem,
+				const struct kvm_memory_slot *old,
+				const struct kvm_memory_slot *new,
+				enum kvm_mr_change change);
 	bool (*unmap_gfn_range)(struct kvm *kvm, struct kvm_gfn_range *range);
 	bool (*age_gfn)(struct kvm *kvm, struct kvm_gfn_range *range);
 	bool (*test_age_gfn)(struct kvm *kvm, struct kvm_gfn_range *range);
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index b785f6772391..6b4bf08e7c8b 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -847,19 +847,19 @@ void kvmppc_core_flush_memslot(struct kvm *kvm, struct kvm_memory_slot *memslot)
 }

 int kvmppc_core_prepare_memory_region(struct kvm *kvm,
-				struct kvm_memory_slot *memslot,
-				const struct kvm_userspace_memory_region *mem,
-				enum kvm_mr_change change)
+				struct kvm_memory_slot *memslot,
+				const struct kvm_userspace_memory_region_ext *mem,
+				enum kvm_mr_change change)
 {
 	return kvm->arch.kvm_ops->prepare_memory_region(kvm, memslot, mem,
							change);
 }

 void kvmppc_core_commit_memory_region(struct kvm *kvm,
-				const struct kvm_userspace_memory_region *mem,
-				const struct kvm_memory_slot *old,
-				const struct kvm_memory_slot *new,
-				enum kvm_mr_change change)
+				const struct kvm_userspace_memory_region_ext *mem,
+				const struct kvm_memory_slot *old,
+				const struct kvm_memory_slot *new,
+				enum kvm_mr_change change)
 {
 	kvm->arch.kvm_ops->commit_memory_region(kvm, mem, old, new, change);
 }
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 7b74fc0a986b..3b7be7894c48 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -4854,9 +4854,9 @@ static void kvmppc_core_free_memslot_hv(struct kvm_memory_slot *slot)
 }

 static int kvmppc_core_prepare_memory_region_hv(struct kvm *kvm,
-				struct kvm_memory_slot *slot,
-				const struct kvm_userspace_memory_region *mem,
-				enum kvm_mr_change change)
+				struct kvm_memory_slot *slot,
+				const struct kvm_userspace_memory_region_ext *mem,
+				enum kvm_mr_change change)
 {
 	unsigned long npages = mem->memory_size >> PAGE_SHIFT;

@@ -4871,10 +4871,10 @@ static int kvmppc_core_prepare_memory_region_hv(struct kvm *kvm,
 }

 static void kvmppc_core_commit_memory_region_hv(struct kvm *kvm,
-				const struct kvm_userspace_memory_region *mem,
-				const struct kvm_memory_slot *old,
-				const struct kvm_memory_slot *new,
-				enum kvm_mr_change change)
+				const struct kvm_userspace_memory_region_ext *mem,
+				const struct kvm_memory_slot *old,
+				const struct kvm_memory_slot *new,
+				enum kvm_mr_change change)
 {
 	unsigned long npages = mem->memory_size >> PAGE_SHIFT;
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 6bc9425acb32..4dd06b24c1b6 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -1899,18 +1899,18 @@ static void kvmppc_core_flush_memslot_pr(struct kvm *kvm,
 }

 static int kvmppc_core_prepare_memory_region_pr(struct kvm *kvm,
-				struct kvm_memory_slot *memslot,
-				const struct kvm_userspace_memory_region *mem,
-				enum kvm_mr_change change)
+				struct kvm_memory_slot *memslot,
+				const struct kvm_userspace_memory_region_ext *mem,
+				enum kvm_mr_change change)
 {
 	return 0;
 }

 static void kvmppc_core_commit_memory_region_pr(struct kvm *kvm,
-				const struct kvm_userspace_memory_region *mem,
-				const struct kvm_memory_slot *old,
-				const struct kvm_memory_slot *new,
-				enum kvm_mr_change change)
+				const struct kvm_userspace_memory_region_ext *mem,
+				const struct kvm_memory_slot *old,
+				const struct kvm_memory_slot *new,
+				enum kvm_mr_change change)
 {
 	return;
 }
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index 8c15c90dd3a9..f2d1acd782bf 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -1821,18 +1821,18 @@ void kvmppc_core_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
 }

 int kvmppc_core_prepare_memory_region(struct kvm *kvm,
-				struct kvm_memory_slot *memslot,
-				const struct kvm_userspace_memory_region *mem,
-				enum kvm_mr_change change)
+				struct kvm_memory_slot *memslot,
+				const struct kvm_userspace_memory_region_ext *mem,
+				enum kvm_mr_change change)
 {
 	return 0;
 }

 void kvmppc_core_commit_memory_region(struct kvm *kvm,
-				const struct kvm_userspace_memory_region *mem,
-				const struct kvm_memory_slot *old,
-				const struct kvm_memory_slot *new,
-				enum kvm_mr_change change)
+				const struct kvm_userspace_memory_region_ext *mem,
+				const struct kvm_memory_slot *old,
+				const struct kvm_memory_slot *new,
+				enum kvm_mr_change change)
 {
 }
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 35e9cccdeef9..4aa5ef921710 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -706,18 +706,18 @@ void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot)
 }

 int kvm_arch_prepare_memory_region(struct kvm *kvm,
-				struct kvm_memory_slot *memslot,
-				const struct kvm_userspace_memory_region *mem,
-				enum kvm_mr_change change)
+				struct kvm_memory_slot *memslot,
+				const struct kvm_userspace_memory_region_ext *mem,
+				enum kvm_mr_change change)
 {
 	return kvmppc_core_prepare_memory_region(kvm, memslot, mem, change);
 }

 void kvm_arch_commit_memory_region(struct kvm *kvm,
-				const struct kvm_userspace_memory_region *mem,
-				struct kvm_memory_slot *old,
-				const struct kvm_memory_slot *new,
-				enum kvm_mr_change change)
+				const struct kvm_userspace_memory_region_ext *mem,
+				struct kvm_memory_slot *old,
+				const struct kvm_memory_slot *new,
+				enum kvm_mr_change change)
 {
 	kvmppc_core_commit_memory_region(kvm, mem, old, new, change);
 }
diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
index d81bae8eb55e..a7f25b0da391 100644
--- a/arch/riscv/kvm/mmu.c
+++ b/arch/riscv/kvm/mmu.c
@@ -456,10 +456,10 @@ void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
 }

 void kvm_arch_commit_memory_region(struct kvm *kvm,
-				const struct kvm_userspace_memory_region *mem,
-				struct kvm_memory_slot *old,
-				const struct kvm_memory_slot *new,
-				enum kvm_mr_change change)
+				const struct kvm_userspace_memory_region_ext *mem,
+				struct kvm_memory_slot *old,
+				const struct kvm_memory_slot *new,
+				enum kvm_mr_change change)
 {
 	/*
 	 * At this point memslot has been committed and there is an
@@ -471,9 +471,9 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 }

 int kvm_arch_prepare_memory_region(struct kvm *kvm,
-				struct kvm_memory_slot *memslot,
-				const struct kvm_userspace_memory_region *mem,
-				enum kvm_mr_change change)
+				struct kvm_memory_slot *memslot,
+				const struct kvm_userspace_memory_region_ext *mem,
+				enum kvm_mr_change change)
 {
 	hva_t hva = mem->userspace_addr;
 	hva_t reg_end = hva + mem->memory_size;
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index c6257f625929..dc9d1ec3d337 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -5018,9 +5018,9 @@ vm_fault_t kvm_arch_vcpu_fault(struct kvm_vcpu *vcpu, struct vm_fault *vmf)

 /* Section: memory related */
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
-				struct kvm_memory_slot *memslot,
-				const struct kvm_userspace_memory_region *mem,
-				enum kvm_mr_change change)
+				struct kvm_memory_slot *memslot,
+				const struct kvm_userspace_memory_region_ext *mem,
+				enum kvm_mr_change change)
 {
 	/* A few sanity checks. We can have memory slots which have to be
 	   located/ended at a segment boundary (1MB). The memory in userland is
@@ -5043,10 +5043,10 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 }

 void kvm_arch_commit_memory_region(struct kvm *kvm,
-				const struct kvm_userspace_memory_region *mem,
-				struct kvm_memory_slot *old,
-				const struct kvm_memory_slot *new,
-				enum kvm_mr_change change)
+				const struct kvm_userspace_memory_region_ext *mem,
+				struct kvm_memory_slot *old,
+				const struct kvm_memory_slot *new,
+				enum kvm_mr_change change)
 {
 	int rc = 0;
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 9ab707646ed1..86a17a23d6be 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1556,9 +1556,9 @@ struct kvm_x86_ops {
 	void (*vcpu_deliver_sipi_vector)(struct kvm_vcpu *vcpu, u8 vector);

 	int (*prepare_memory_region)(struct kvm *kvm,
-				struct kvm_memory_slot *memslot,
-				const struct kvm_userspace_memory_region *mem,
-				enum kvm_mr_change change);
+				struct kvm_memory_slot *memslot,
+				const struct kvm_userspace_memory_region_ext *mem,
+				enum kvm_mr_change change);

 #ifdef CONFIG_KVM_TDX_SEAM_BACKDOOR
 	void (*do_seamcall)(struct kvm_seamcall *call);
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index 1473fe8ce5a6..0a8bedaf9c1b 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -992,9 +992,9 @@ static void vt_setup_mce(struct kvm_vcpu *vcpu)
 }

 static int vt_prepare_memory_region(struct kvm *kvm,
-				struct kvm_memory_slot *memslot,
-				const struct kvm_userspace_memory_region *mem,
-				enum kvm_mr_change change)
+				struct kvm_memory_slot *memslot,
+				const struct kvm_userspace_memory_region_ext *mem,
+				enum kvm_mr_change change)
 {
 	if (is_td(kvm))
 		tdx_prepare_memory_region(kvm, memslot, mem, change);
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index 4992750b6db0..839740a98d47 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -2679,9 +2679,9 @@ static void tdx_flush_gprs(struct kvm_vcpu *vcpu)
 }

 static int tdx_prepare_memory_region(struct kvm *kvm,
-				struct kvm_memory_slot *memslot,
-				const struct kvm_userspace_memory_region *mem,
-				enum kvm_mr_change change)
+				struct kvm_memory_slot *memslot,
+				const struct kvm_userspace_memory_region_ext *mem,
+				enum kvm_mr_change change)
 {
 	/* TDX Secure-EPT allows only RWX. */
 	if (mem->flags & KVM_MEM_READONLY)
diff --git a/arch/x86/kvm/vmx/tdx_stubs.c b/arch/x86/kvm/vmx/tdx_stubs.c
index 9c6023d18afd..490a5faeb411 100644
--- a/arch/x86/kvm/vmx/tdx_stubs.c
+++ b/arch/x86/kvm/vmx/tdx_stubs.c
@@ -28,9 +28,9 @@ static int tdx_deliver_posted_interrupt(struct kvm_vcpu *vcpu, int vector) { ret
 static void tdx_get_exit_info(struct kvm_vcpu *vcpu, u64 *info1, u64 *info2,
 			      u32 *intr_info, u32 *error_code) {}
 static int tdx_prepare_memory_region(struct kvm *kvm,
-				struct kvm_memory_slot *memslot,
-				const struct kvm_userspace_memory_region *mem,
-				enum kvm_mr_change change) { return 0; }
+				struct kvm_memory_slot *memslot,
+				const struct kvm_userspace_memory_region_ext *mem,
+				enum kvm_mr_change change) { return 0; }
 static void tdx_prepare_switch_to_guest(struct kvm_vcpu *vcpu) {}
 static int __init tdx_check_processor_compatibility(void) { return 0; }
 static void __init tdx_pre_kvm_init(unsigned int *vcpu_size,
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a02920b49b26..1558f6375949 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11635,7 +11635,7 @@ void __user * __x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa,
 	}

 	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
-		struct kvm_userspace_memory_region m;
+		struct kvm_userspace_memory_region_ext m;

 		m.slot = id | (i << 16);
 		m.flags = 0;
@@ -11841,9 +11841,9 @@ void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen)
 }

 int kvm_arch_prepare_memory_region(struct kvm *kvm,
-				struct kvm_memory_slot *memslot,
-				const struct kvm_userspace_memory_region *mem,
-				enum kvm_mr_change change)
+				struct kvm_memory_slot *memslot,
+				const struct kvm_userspace_memory_region_ext *mem,
+				enum kvm_mr_change change)
 {
 	int err;

@@ -11948,10 +11948,10 @@ static void kvm_mmu_slot_apply_flags(struct kvm *kvm,
 }

 void kvm_arch_commit_memory_region(struct kvm *kvm,
-				const struct kvm_userspace_memory_region *mem,
-				struct kvm_memory_slot *old,
-				const struct kvm_memory_slot *new,
-				enum kvm_mr_change change)
+				const struct kvm_userspace_memory_region_ext *mem,
+				struct kvm_memory_slot *old,
+				const struct kvm_memory_slot *new,
+				enum kvm_mr_change change)
 {
 	if (!kvm->arch.n_requested_mmu_pages)
 		kvm_mmu_change_mmu_pages(kvm,
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 3dd5c349f52e..99e9f9969703 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -824,20 +824,20 @@ enum kvm_mr_change {
 };

 int kvm_set_memory_region(struct kvm *kvm,
-			  const struct kvm_userspace_memory_region *mem);
+			  const struct kvm_userspace_memory_region_ext *mem);
 int __kvm_set_memory_region(struct kvm *kvm,
-			    const struct kvm_userspace_memory_region *mem);
+			    const struct kvm_userspace_memory_region_ext *mem);
 void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *slot);
 void kvm_arch_memslots_updated(struct kvm *kvm, u64 gen);
 int kvm_arch_prepare_memory_region(struct kvm *kvm,
-				struct kvm_memory_slot *memslot,
-				const struct kvm_userspace_memory_region *mem,
-				enum kvm_mr_change change);
+				struct kvm_memory_slot *memslot,
+				const struct kvm_userspace_memory_region_ext *mem,
+				enum kvm_mr_change change);
 void kvm_arch_commit_memory_region(struct kvm *kvm,
-				const struct kvm_userspace_memory_region *mem,
-				struct kvm_memory_slot *old,
-				const struct kvm_memory_slot *new,
-				enum kvm_mr_change change);
+				const struct kvm_userspace_memory_region_ext *mem,
+				struct kvm_memory_slot *old,
+				const struct kvm_memory_slot *new,
+				enum kvm_mr_change change);
 /* flush all memory translations */
 void kvm_arch_flush_shadow_all(struct kvm *kvm);
 /* flush memory translations pointing to 'slot' */
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 7e3a8935534b..374da6767ef6 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -103,6 +103,17 @@ struct kvm_userspace_memory_region {
 	__u64 userspace_addr; /* start of the userspace allocated memory */
 };

+struct kvm_userspace_memory_region_ext {
+	__u32 slot;
+	__u32 flags;
+	__u64 guest_phys_addr;
+	__u64 memory_size;	/* bytes */
+	__u64 userspace_addr;	/* offset into fd/private_fd */
+	__s32 fd;
+	__s32 private_fd;	/* valid if guest private memory is supported */
+	__u32 padding[6];
+};
+
 /*
  * The bit 0 ~ bit 15 of kvm_memory_region::flags are visible for userspace,
  * other bits are reserved for kvm internal use which are defined in
@@ -110,6 +121,7 @@ struct kvm_userspace_memory_region {
  */
 #define KVM_MEM_LOG_DIRTY_PAGES	(1UL << 0)
 #define KVM_MEM_READONLY	(1UL << 1)
+#define KVM_MEM_FD		(1UL << 2)

 /* for KVM_IRQ_LINE */
 struct kvm_irq_level {
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 1578be8e4441..271cef8d1cd0 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1424,7 +1424,7 @@ static void update_memslots(struct kvm_memslots *slots,
 }

 static int check_memory_region_flags(struct kvm *kvm,
-				const struct kvm_userspace_memory_region *mem)
+				const struct kvm_userspace_memory_region_ext *mem)
 {
 	u32 valid_flags = 0;

@@ -1537,7 +1537,7 @@ static struct kvm_memslots *kvm_dup_memslots(struct kvm_memslots *old,
 }

 static int kvm_set_memslot(struct kvm *kvm,
-			   const struct kvm_userspace_memory_region *mem,
+			   const struct kvm_userspace_memory_region_ext *mem,
 			   struct kvm_memory_slot *old,
 			   struct kvm_memory_slot *new, int as_id,
 			   enum kvm_mr_change change)
@@ -1629,7 +1629,7 @@ static int kvm_set_memslot(struct kvm *kvm,
 }

 static int kvm_delete_memslot(struct kvm *kvm,
-			      const struct kvm_userspace_memory_region *mem,
+			      const struct kvm_userspace_memory_region_ext *mem,
 			      struct kvm_memory_slot *old, int as_id)
 {
 	struct kvm_memory_slot new;
@@ -1663,7 +1663,7 @@ static int kvm_delete_memslot(struct kvm *kvm,
  * Must be called holding kvm->slots_lock for write.
  */
 int __kvm_set_memory_region(struct kvm *kvm,
-			    const struct kvm_userspace_memory_region *mem)
+			    const struct kvm_userspace_memory_region_ext *mem)
 {
 	struct kvm_memory_slot old, new;
 	struct kvm_memory_slot *tmp;
@@ -1783,7 +1783,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
 EXPORT_SYMBOL_GPL(__kvm_set_memory_region);

 int kvm_set_memory_region(struct kvm *kvm,
-			  const struct kvm_userspace_memory_region *mem)
+			  const struct kvm_userspace_memory_region_ext *mem)
 {
 	int r;

@@ -1795,7 +1795,7 @@ int kvm_set_memory_region(struct kvm *kvm,
 EXPORT_SYMBOL_GPL(kvm_set_memory_region);

 static int kvm_vm_ioctl_set_memory_region(struct kvm *kvm,
-					  struct kvm_userspace_memory_region *mem)
+					  struct kvm_userspace_memory_region_ext *mem)
 {
 	if ((u16)mem->slot >= KVM_USER_MEM_SLOTS)
 		return -EINVAL;
@@ -4368,12 +4368,19 @@ static long kvm_vm_ioctl(struct file *filp,
 		break;
 	}
 	case KVM_SET_USER_MEMORY_REGION: {
-		struct kvm_userspace_memory_region kvm_userspace_mem;
+		struct kvm_userspace_memory_region_ext kvm_userspace_mem;

 		r = -EFAULT;
 		if (copy_from_user(&kvm_userspace_mem, argp,
-				   sizeof(kvm_userspace_mem)))
+				   sizeof(struct kvm_userspace_memory_region)))
 			goto out;
+		if (kvm_userspace_mem.flags & KVM_MEM_FD) {
+			int offset = offsetof(
+				struct kvm_userspace_memory_region_ext, fd);
+			if (copy_from_user(&kvm_userspace_mem.fd, argp + offset,
+					   sizeof(kvm_userspace_mem) - offset))
+				goto out;
+		}

 		r = kvm_vm_ioctl_set_memory_region(kvm, &kvm_userspace_mem);
 		break;

From patchwork Fri Nov 19 13:47:30 2021
From: Chao Peng <chao.p.peng@linux.intel.com>
Subject: [RFC v2 PATCH 04/13] KVM: Add fd-based memslot data structure and utils
Date: Fri, 19 Nov 2021 21:47:30 +0800
Message-Id: <20211119134739.20218-5-chao.p.peng@linux.intel.com>
In-Reply-To: <20211119134739.20218-1-chao.p.peng@linux.intel.com>

For an fd-based memslot, store the file references for the shared fd
and the private fd (if any) in the memslot structure. Since there is no
'hva' concept, we cannot call hva_to_pfn() to get a pfn; instead,
kvm_memfd_ops is added to get_pfn/put_pfn from the memory backing
stores that provide these fds.
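
A sketch of how fault-handling code might consume these fields instead
of the hva path (hypothetical; fault_pfn_from_memfd() is not part of
this series, but memslot_is_memfd()/memslot_has_private() are defined
in the diff below):

	static kvm_pfn_t fault_pfn_from_memfd(struct kvm_memory_slot *slot,
					      gfn_t gfn, bool is_private)
	{
		struct file *file;
		int order;

		if (!memslot_is_memfd(slot))
			return KVM_PFN_ERR_FAULT;	/* use the hva path */

		file = is_private && memslot_has_private(slot) ?
			slot->priv_file : slot->file;

		/* Returns a locked pfn; release via slot->memfd_ops->put_pfn(). */
		return slot->memfd_ops->get_pfn(slot, file, gfn, true, &order);
	}
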
Signed-off-by: Yu Zhang
Signed-off-by: Chao Peng
---
 include/linux/kvm_host.h | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 99e9f9969703..1d4ac0c9b63b 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -424,6 +424,12 @@ static inline int kvm_vcpu_exiting_guest_mode(struct kvm_vcpu *vcpu)
  */
 #define KVM_MEM_MAX_NR_PAGES ((1UL << 31) - 1)

+struct kvm_memfd_ops {
+	kvm_pfn_t (*get_pfn)(struct kvm_memory_slot *slot, struct file *file,
+			     gfn_t gfn, bool alloc, int *order);
+	void (*put_pfn)(kvm_pfn_t pfn);
+};
+
 struct kvm_memory_slot {
 	gfn_t base_gfn;
 	unsigned long npages;
@@ -433,6 +439,9 @@ struct kvm_memory_slot {
 	u32 flags;
 	short id;
 	u16 as_id;
+	struct file *file;
+	struct file *priv_file;
+	struct kvm_memfd_ops *memfd_ops;
 };

 static inline bool kvm_slot_dirty_track_enabled(struct kvm_memory_slot *slot)
@@ -1310,6 +1319,20 @@ static inline int memslot_id(struct kvm *kvm, gfn_t gfn)
 	return gfn_to_memslot(kvm, gfn)->id;
 }

+static inline bool memslot_is_memfd(const struct kvm_memory_slot *slot)
+{
+	if (slot && slot->memfd_ops)
+		return true;
+	return false;
+}
+
+static inline bool memslot_has_private(const struct kvm_memory_slot *slot)
+{
+	if (slot && slot->priv_file)
+		return true;
+	return false;
+}
+
 static inline gfn_t hva_to_gfn_memslot(unsigned long hva,
 				       struct kvm_memory_slot *slot)
 {

From patchwork Fri Nov 19 13:47:31 2021
From: Chao Peng <chao.p.peng@linux.intel.com>
Subject: [RFC v2 PATCH 05/13] KVM: Implement fd-based memory using new memfd interfaces
Date: Fri, 19 Nov 2021 21:47:31 +0800
Message-Id: <20211119134739.20218-6-chao.p.peng@linux.intel.com>
In-Reply-To: <20211119134739.20218-1-chao.p.peng@linux.intel.com>

This patch pairs an fd-based memslot with a memory backing store. The
two sides handshake to exchange callbacks that will be called later.

KVM->memfd:
  - get_pfn: get or allocate (when alloc is true) a page at the
    specified offset in the fd; the page will be locked
  - put_pfn: put and unlock the pfn

memfd->KVM:
  - invalidate_page_range: called when userspace punches a hole in the
    fd; KVM should unmap the related pages in the second MMU
  - fallocate: called when userspace fallocates space in the fd; KVM
    can map the related pages in the second MMU

Currently tmpfs behind the memfd interface is supported.
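
The gfn-to-file-offset conversion in kvm_memfd_get_pfn() below may be
easier to follow with a worked example (illustrative numbers, 4K
pages):

	base_gfn       = 0x100000   (slot starts at gpa 0x100000000)
	userspace_addr = 0x200000   (slot starts 0x200000 bytes into the fd)
	gfn            = 0x100010   (faulting frame)

	index = gfn - base_gfn + (userspace_addr >> PAGE_SHIFT)
	      = 0x100010 - 0x100000 + 0x200
	      = 0x210

so the fault is served from page 0x210 of the memfd.
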
Signed-off-by: Yu Zhang
Signed-off-by: Chao Peng
---
 arch/x86/kvm/Makefile    |   3 +-
 include/linux/kvm_host.h |   6 +++
 virt/kvm/memfd.c         | 101 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 109 insertions(+), 1 deletion(-)
 create mode 100644 virt/kvm/memfd.c

diff --git a/arch/x86/kvm/Makefile b/arch/x86/kvm/Makefile
index f919df73e5e3..5d7f289b1ca0 100644
--- a/arch/x86/kvm/Makefile
+++ b/arch/x86/kvm/Makefile
@@ -11,7 +11,8 @@ KVM := ../../../virt/kvm

 kvm-y += $(KVM)/kvm_main.o $(KVM)/coalesced_mmio.o \
 				$(KVM)/eventfd.o $(KVM)/irqchip.o $(KVM)/vfio.o \
-				$(KVM)/dirty_ring.o $(KVM)/binary_stats.o
+				$(KVM)/dirty_ring.o $(KVM)/binary_stats.o \
+				$(KVM)/memfd.o
 kvm-$(CONFIG_KVM_ASYNC_PF)	+= $(KVM)/async_pf.o

 kvm-y			+= x86.o emulate.o i8259.o irq.o lapic.o \
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 1d4ac0c9b63b..e8646103356b 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -769,6 +769,12 @@ static inline void kvm_irqfd_exit(void)
 {
 }
 #endif
+
+int kvm_memfd_register(struct kvm *kvm,
+		       const struct kvm_userspace_memory_region_ext *mem,
+		       struct kvm_memory_slot *slot);
+void kvm_memfd_unregister(struct kvm *kvm, struct kvm_memory_slot *slot);
+
 int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align,
 		  struct module *module);
 void kvm_exit(void);
diff --git a/virt/kvm/memfd.c b/virt/kvm/memfd.c
new file mode 100644
index 000000000000..bd930dcb455f
--- /dev/null
+++ b/virt/kvm/memfd.c
@@ -0,0 +1,101 @@
+
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * memfd.c: routines for fd based guest memory backing store
+ * Copyright (c) 2021, Intel Corporation.
+ *
+ * Author:
+ *	Chao Peng
+ */
+
+#include
+#include
+
+const static struct guest_mem_ops *memfd_ops;
+
+static void memfd_invalidate_page_range(struct inode *inode, void *owner,
+					pgoff_t start, pgoff_t end)
+{
+	//!!!We can get here after the owner no longer exists
+}
+
+static void memfd_fallocate(struct inode *inode, void *owner,
+			    pgoff_t start, pgoff_t end)
+{
+	//!!!We can get here after the owner no longer exists
+}
+
+static const struct guest_ops memfd_notifier = {
+	.invalidate_page_range = memfd_invalidate_page_range,
+	.fallocate = memfd_fallocate,
+};
+
+static kvm_pfn_t kvm_memfd_get_pfn(struct kvm_memory_slot *slot,
+				   struct file *file, gfn_t gfn,
+				   bool alloc, int *order)
+{
+	pgoff_t index = gfn - slot->base_gfn +
+			(slot->userspace_addr >> PAGE_SHIFT);
+
+	return memfd_ops->get_lock_pfn(file->f_inode, index, alloc, order);
+}
+
+static void kvm_memfd_put_pfn(kvm_pfn_t pfn)
+{
+	memfd_ops->put_unlock_pfn(pfn);
+}
+
+static struct kvm_memfd_ops kvm_memfd_ops = {
+	.get_pfn = kvm_memfd_get_pfn,
+	.put_pfn = kvm_memfd_put_pfn,
+};
+
+int kvm_memfd_register(struct kvm *kvm,
+		       const struct kvm_userspace_memory_region_ext *mem,
+		       struct kvm_memory_slot *slot)
+{
+	int ret;
+	struct fd fd = fdget(mem->fd);
+
+	if (!fd.file)
+		return -EINVAL;
+
+	ret = memfd_register_guest(fd.file->f_inode, kvm,
+				   &memfd_notifier, &memfd_ops);
+	if (ret)
+		return ret;
+	slot->file = fd.file;
+
+	if (mem->private_fd >= 0) {
+		fd = fdget(mem->private_fd);
+		if (!fd.file) {
+			ret = -EINVAL;
+			goto err;
+		}
+
+		ret = memfd_register_guest(fd.file->f_inode, kvm,
+					   &memfd_notifier, &memfd_ops);
+		if (ret)
+			goto err;
+		slot->priv_file = fd.file;
+	}
+
+	slot->memfd_ops = &kvm_memfd_ops;
+	return 0;
+err:
+	kvm_memfd_unregister(kvm, slot);
+	return ret;
+}
+
+void kvm_memfd_unregister(struct kvm *kvm, struct kvm_memory_slot *slot)
+{
+	if (slot->file) {
+		fput(slot->file);
+		slot->file = NULL;
From patchwork Fri Nov 19 13:47:32 2021
From: Chao Peng
Shutemov" , luto@kernel.org, john.ji@intel.com, susie.li@intel.com, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com Subject: [RFC v2 PATCH 06/13] KVM: Register/unregister memfd backed memslot Date: Fri, 19 Nov 2021 21:47:32 +0800 Message-Id: <20211119134739.20218-7-chao.p.peng@linux.intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211119134739.20218-1-chao.p.peng@linux.intel.com> References: <20211119134739.20218-1-chao.p.peng@linux.intel.com> X-Rspamd-Server: rspam08 X-Rspamd-Queue-Id: DCB82F00009E X-Stat-Signature: 9448irxixpdz88ycd1bqrs97zcnyyd5x Authentication-Results: imf16.hostedemail.com; dkim=none; spf=none (imf16.hostedemail.com: domain of chao.p.peng@linux.intel.com has no SPF policy when checking 134.134.136.31) smtp.mailfrom=chao.p.peng@linux.intel.com; dmarc=fail reason="No valid SPF, No valid DKIM" header.from=intel.com (policy=none) X-HE-Tag: 1637329763-721555 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Signed-off-by: Yu Zhang Signed-off-by: Chao Peng --- virt/kvm/kvm_main.c | 23 +++++++++++++++++++---- 1 file changed, 19 insertions(+), 4 deletions(-) diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 271cef8d1cd0..b8673490d301 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -1426,7 +1426,7 @@ static void update_memslots(struct kvm_memslots *slots, static int check_memory_region_flags(struct kvm *kvm, const struct kvm_userspace_memory_region_ext *mem) { - u32 valid_flags = 0; + u32 valid_flags = KVM_MEM_FD; if (!kvm->dirty_log_unsupported) valid_flags |= KVM_MEM_LOG_DIRTY_PAGES; @@ -1604,10 +1604,20 @@ static int kvm_set_memslot(struct kvm *kvm, kvm_copy_memslots(slots, __kvm_memslots(kvm, as_id)); } + if (mem->flags & KVM_MEM_FD && change == KVM_MR_CREATE) { + r = kvm_memfd_register(kvm, mem, new); + if (r) + goto out_slots; + } + r = kvm_arch_prepare_memory_region(kvm, new, mem, change); if (r) goto out_slots; + if (mem->flags & KVM_MEM_FD && (r || change == KVM_MR_DELETE)) { + kvm_memfd_unregister(kvm, new); + } + update_memslots(slots, new, change); slots = install_new_memslots(kvm, as_id, slots); @@ -1683,10 +1693,12 @@ int __kvm_set_memory_region(struct kvm *kvm, return -EINVAL; if (mem->guest_phys_addr & (PAGE_SIZE - 1)) return -EINVAL; - /* We can read the guest memory with __xxx_user() later on. */ if ((mem->userspace_addr & (PAGE_SIZE - 1)) || - (mem->userspace_addr != untagged_addr(mem->userspace_addr)) || - !access_ok((void __user *)(unsigned long)mem->userspace_addr, + (mem->userspace_addr != untagged_addr(mem->userspace_addr))) + return -EINVAL; + /* We can read the guest memory with __xxx_user() later on. */ + if (!(mem->flags & KVM_MEM_FD) && + !access_ok((void __user *)(unsigned long)mem->userspace_addr, mem->memory_size)) return -EINVAL; if (as_id >= KVM_ADDRESS_SPACE_NUM || id >= KVM_MEM_SLOTS_NUM) @@ -1727,6 +1739,9 @@ int __kvm_set_memory_region(struct kvm *kvm, new.dirty_bitmap = NULL; memset(&new.arch, 0, sizeof(new.arch)); } else { /* Modify an existing slot. */ + /* Private memslots are immutable, they can only be deleted. 
+		/*
+		 * Private memslots are immutable, they can only be
+		 * deleted.
+		 */
+		if (mem->flags & KVM_MEM_FD && mem->private_fd >= 0)
+			return -EINVAL;
 		if ((new.userspace_addr != old.userspace_addr) ||
 		    (new.npages != old.npages) ||
 		    ((new.flags ^ old.flags) & KVM_MEM_READONLY))
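For context, this is roughly how a VMM would be expected to create such a
slot. The layout of struct kvm_userspace_memory_region_ext and the
KVM_MEM_FD value are not spelled out in this excerpt, so the definitions
below are assumptions for illustration only, not final UAPI:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Assumed layout: the classic region struct plus the two fds. */
struct kvm_userspace_memory_region_ext {
	struct kvm_userspace_memory_region region;
	__s32 fd;		/* shared memory backend */
	__s32 private_fd;	/* private memory backend, -1 if none */
};

#define KVM_MEM_FD (1UL << 3)	/* assumed flag value */

static int add_fd_slot(int vm_fd, int memfd, int private_fd,
		       uint64_t gpa, uint64_t size)
{
	struct kvm_userspace_memory_region_ext ext = {
		.region = {
			.slot		 = 0,
			.flags		 = KVM_MEM_FD,
			.guest_phys_addr = gpa,
			/* For fd slots this is a file offset, not an HVA. */
			.userspace_addr	 = 0,
			.memory_size	 = size,
		},
		.fd		= memfd,
		.private_fd	= private_fd,
	};

	/* Reusing the existing ioctl with the extended struct (assumed). */
	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &ext);
}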
Shutemov" , luto@kernel.org, john.ji@intel.com, susie.li@intel.com, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com Subject: [RFC v2 PATCH 07/13] KVM: Handle page fault for fd based memslot Date: Fri, 19 Nov 2021 21:47:33 +0800 Message-Id: <20211119134739.20218-8-chao.p.peng@linux.intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211119134739.20218-1-chao.p.peng@linux.intel.com> References: <20211119134739.20218-1-chao.p.peng@linux.intel.com> X-Rspamd-Server: rspam05 X-Rspamd-Queue-Id: 259CF10000B4 X-Stat-Signature: goqnj853eqnixrfx7hjpf4ofif3xqicq Authentication-Results: imf12.hostedemail.com; dkim=none; spf=none (imf12.hostedemail.com: domain of chao.p.peng@linux.intel.com has no SPF policy when checking 134.134.136.24) smtp.mailfrom=chao.p.peng@linux.intel.com; dmarc=fail reason="No valid SPF, No valid DKIM" header.from=intel.com (policy=none) X-HE-Tag: 1637329773-777558 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Current code assume the private memory is persistent and KVM can check with backing store to see if private memory exists at the same address by calling get_pfn(alloc=false). Signed-off-by: Yu Zhang Signed-off-by: Chao Peng --- arch/x86/kvm/mmu/mmu.c | 75 ++++++++++++++++++++++++++++++++++++++++-- 1 file changed, 73 insertions(+), 2 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 40377901598b..cd5d1f923694 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3277,6 +3277,9 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm, if (max_level == PG_LEVEL_4K) return PG_LEVEL_4K; + if (memslot_is_memfd(slot)) + return max_level; + host_level = host_pfn_mapping_level(kvm, gfn, pfn, slot); return min(host_level, max_level); } @@ -4555,6 +4558,65 @@ static bool kvm_arch_setup_async_pf(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa, kvm_vcpu_gfn_to_hva(vcpu, gfn), &arch); } +static bool kvm_faultin_pfn_memfd(struct kvm_vcpu *vcpu, + struct kvm_page_fault *fault, int *r) +{ int order; + kvm_pfn_t pfn; + struct kvm_memory_slot *slot = fault->slot; + bool priv_gfn = kvm_vcpu_is_private_gfn(vcpu, fault->addr >> PAGE_SHIFT); + bool priv_slot_exists = memslot_has_private(slot); + bool priv_gfn_exists = false; + int mem_convert_type; + + if (priv_gfn && !priv_slot_exists) { + *r = RET_PF_INVALID; + return true; + } + + if (priv_slot_exists) { + pfn = slot->memfd_ops->get_pfn(slot, slot->priv_file, + fault->gfn, false, &order); + if (pfn >= 0) + priv_gfn_exists = true; + } + + if (priv_gfn && !priv_gfn_exists) { + mem_convert_type = KVM_EXIT_MEM_MAP_PRIVATE; + goto out_convert; + } + + if (!priv_gfn && priv_gfn_exists) { + slot->memfd_ops->put_pfn(pfn); + mem_convert_type = KVM_EXIT_MEM_MAP_SHARED; + goto out_convert; + } + + if (!priv_gfn) { + pfn = slot->memfd_ops->get_pfn(slot, slot->file, + fault->gfn, true, &order); + if (fault->pfn < 0) { + *r = RET_PF_INVALID; + return true; + } + } + + if (slot->flags & KVM_MEM_READONLY) + fault->map_writable = false; + if (order == 0) + fault->max_level = PG_LEVEL_4K; + + return false; + +out_convert: + vcpu->run->exit_reason = KVM_EXIT_MEMORY_ERROR; + vcpu->run->mem.type = mem_convert_type; + vcpu->run->mem.u.map.gpa = fault->gfn << PAGE_SHIFT; + vcpu->run->mem.u.map.size = PAGE_SIZE; + fault->pfn = -1; + *r = -1; + return true; +} + static bool kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault, int *r) { struct kvm_memory_slot *slot = 
@@ -4596,6 +4658,9 @@ static bool kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
 		}
 	}
 
+	if (memslot_is_memfd(slot))
+		return kvm_faultin_pfn_memfd(vcpu, fault, r);
+
 	async = false;
 	fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, &async,
 					  fault->write, &fault->map_writable,
@@ -4660,7 +4725,8 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	else
 		write_lock(&vcpu->kvm->mmu_lock);
 
-	if (fault->slot && mmu_notifier_retry_hva(vcpu->kvm, mmu_seq, fault->hva))
+	if (fault->slot && !memslot_is_memfd(fault->slot) &&
+	    mmu_notifier_retry_hva(vcpu->kvm, mmu_seq, fault->hva))
 		goto out_unlock;
 	r = make_mmu_pages_available(vcpu);
 	if (r)
@@ -4676,7 +4742,12 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 		read_unlock(&vcpu->kvm->mmu_lock);
 	else
 		write_unlock(&vcpu->kvm->mmu_lock);
-	kvm_release_pfn_clean(fault->pfn);
+
+	if (memslot_is_memfd(fault->slot))
+		fault->slot->memfd_ops->put_pfn(fault->pfn);
+	else
+		kvm_release_pfn_clean(fault->pfn);
+
 	return r;
 }
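On the userspace side, the KVM_EXIT_MEMORY_ERROR exit raised above would be
handled by converting the page between the shared and private backends. The
kvm_run fields are mirrored by a local struct here, and the constants and
the fallocate-based conversion protocol are assumptions for illustration:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdint.h>
#include <linux/falloc.h>

/* Mirrors the exit fields referenced above (assumed layout). */
struct mem_exit {
	uint32_t type;				/* KVM_EXIT_MEM_MAP_* */
	union { struct { uint64_t gpa, size; } map; } u;
};

enum { EX_MAP_PRIVATE = 1, EX_MAP_SHARED = 2 };	/* assumed values */

static int handle_memory_error(struct mem_exit *mem,
			       int shared_fd, int private_fd)
{
	off_t off = mem->u.map.gpa;	/* gpa == file offset in this sketch */
	off_t len = mem->u.map.size;

	switch (mem->type) {
	case EX_MAP_PRIVATE:
		/* Drop the shared copy, then populate the private one. */
		if (fallocate(shared_fd, FALLOC_FL_PUNCH_HOLE |
			      FALLOC_FL_KEEP_SIZE, off, len))
			return -1;
		return fallocate(private_fd, 0, off, len);
	case EX_MAP_SHARED:
		/* Drop the private copy, then populate the shared one. */
		if (fallocate(private_fd, FALLOC_FL_PUNCH_HOLE |
			      FALLOC_FL_KEEP_SIZE, off, len))
			return -1;
		return fallocate(shared_fd, 0, off, len);
	default:
		return -1;
	}
}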
From patchwork Fri Nov 19 13:47:34 2021
From: Chao Peng
Subject: [RFC v2 PATCH 08/13] KVM: Rename hva memory invalidation code to cover fd-based offset
Date: Fri, 19 Nov 2021 21:47:34 +0800
Message-Id: <20211119134739.20218-9-chao.p.peng@linux.intel.com>
In-Reply-To: <20211119134739.20218-1-chao.p.peng@linux.intel.com>

The purpose is to let fd-based memslots reuse the same code for memory
invalidation. The code can be reused as-is, except for renaming 'hva' to
the more neutral 'useraddr'.

Signed-off-by: Yu Zhang
Signed-off-by: Chao Peng
---
 include/linux/kvm_host.h |  4 ++--
 virt/kvm/kvm_main.c      | 44 ++++++++++++++++++++--------------------
 2 files changed, 24 insertions(+), 24 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index e8646103356b..925c4d9f0a31 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1340,9 +1340,9 @@ static inline bool memslot_has_private(const struct kvm_memory_slot *slot)
 }
 
 static inline gfn_t
-hva_to_gfn_memslot(unsigned long hva, struct kvm_memory_slot *slot)
+useraddr_to_gfn_memslot(unsigned long useraddr, struct kvm_memory_slot *slot)
 {
-	gfn_t gfn_offset = (hva - slot->userspace_addr) >> PAGE_SHIFT;
+	gfn_t gfn_offset = (useraddr - slot->userspace_addr) >> PAGE_SHIFT;
 
 	return slot->base_gfn + gfn_offset;
 }

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index b8673490d301..d9a6890dd18a 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -471,16 +471,16 @@ static void kvm_mmu_notifier_invalidate_range(struct mmu_notifier *mn,
 	srcu_read_unlock(&kvm->srcu, idx);
 }
 
-typedef bool (*hva_handler_t)(struct kvm *kvm, struct kvm_gfn_range *range);
+typedef bool (*gfn_handler_t)(struct kvm *kvm, struct kvm_gfn_range *range);
 
 typedef void (*on_lock_fn_t)(struct kvm *kvm, unsigned long start,
 			     unsigned long end);
 
-struct kvm_hva_range {
+struct kvm_useraddr_range {
 	unsigned long start;
 	unsigned long end;
 	pte_t pte;
-	hva_handler_t handler;
+	gfn_handler_t handler;
 	on_lock_fn_t on_lock;
 	bool flush_on_ret;
 	bool may_block;
@@ -499,8 +499,8 @@ static void kvm_null_fn(void)
 }
 #define IS_KVM_NULL_FN(fn) ((fn) == (void *)kvm_null_fn)
 
-static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
-						  const struct kvm_hva_range *range)
+static __always_inline int __kvm_handle_useraddr_range(struct kvm *kvm,
+				const struct kvm_useraddr_range *range)
 {
 	bool ret = false, locked = false;
 	struct kvm_gfn_range gfn_range;
@@ -518,12 +518,12 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
 	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
 		slots = __kvm_memslots(kvm, i);
 		kvm_for_each_memslot(slot, slots) {
-			unsigned long hva_start, hva_end;
+			unsigned long useraddr_start, useraddr_end;
 
-			hva_start = max(range->start, slot->userspace_addr);
-			hva_end = min(range->end, slot->userspace_addr +
+			useraddr_start = max(range->start, slot->userspace_addr);
+			useraddr_end = min(range->end, slot->userspace_addr +
 				  (slot->npages << PAGE_SHIFT));
-			if (hva_start >= hva_end)
+			if (useraddr_start >= useraddr_end)
 				continue;
 
 			/*
@@ -536,11 +536,11 @@ static __always_inline int __kvm_handle_hva_range(struct kvm *kvm,
 			gfn_range.may_block = range->may_block;
 
 			/*
-			 * {gfn(page) | page intersects with [hva_start, hva_end)} =
+			 * {gfn(page) | page intersects with [useraddr_start, useraddr_end)} =
 			 * {gfn_start, gfn_start+1, ..., gfn_end-1}.
 			 */
-			gfn_range.start = hva_to_gfn_memslot(hva_start, slot);
-			gfn_range.end = hva_to_gfn_memslot(hva_end + PAGE_SIZE - 1, slot);
+			gfn_range.start = useraddr_to_gfn_memslot(useraddr_start, slot);
+			gfn_range.end = useraddr_to_gfn_memslot(useraddr_end + PAGE_SIZE - 1, slot);
 			gfn_range.slot = slot;
 
 			if (!locked) {
@@ -571,10 +571,10 @@ static __always_inline int kvm_handle_hva_range(struct mmu_notifier *mn,
 						unsigned long start,
 						unsigned long end,
 						pte_t pte,
-						hva_handler_t handler)
+						gfn_handler_t handler)
 {
 	struct kvm *kvm = mmu_notifier_to_kvm(mn);
-	const struct kvm_hva_range range = {
+	const struct kvm_useraddr_range range = {
 		.start		= start,
 		.end		= end,
 		.pte		= pte,
@@ -584,16 +584,16 @@ static __always_inline int kvm_handle_hva_range(struct mmu_notifier *mn,
 		.may_block	= false,
 	};
 
-	return __kvm_handle_hva_range(kvm, &range);
+	return __kvm_handle_useraddr_range(kvm, &range);
 }
 
 static __always_inline int kvm_handle_hva_range_no_flush(struct mmu_notifier *mn,
 							 unsigned long start,
 							 unsigned long end,
-							 hva_handler_t handler)
+							 gfn_handler_t handler)
 {
 	struct kvm *kvm = mmu_notifier_to_kvm(mn);
-	const struct kvm_hva_range range = {
+	const struct kvm_useraddr_range range = {
 		.start		= start,
 		.end		= end,
 		.pte		= __pte(0),
@@ -603,7 +603,7 @@ static __always_inline int kvm_handle_hva_range_no_flush(struct mmu_notifier *mn,
 		.may_block	= false,
 	};
 
-	return __kvm_handle_hva_range(kvm, &range);
+	return __kvm_handle_useraddr_range(kvm, &range);
 }
 
 static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn, struct mm_struct *mm,
@@ -661,7 +661,7 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
 					const struct mmu_notifier_range *range)
 {
 	struct kvm *kvm = mmu_notifier_to_kvm(mn);
-	const struct kvm_hva_range hva_range = {
+	const struct kvm_useraddr_range useraddr_range = {
 		.start		= range->start,
 		.end		= range->end,
 		.pte		= __pte(0),
@@ -685,7 +685,7 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
 	kvm->mn_active_invalidate_count++;
 	spin_unlock(&kvm->mn_invalidate_lock);
 
-	__kvm_handle_hva_range(kvm, &hva_range);
+	__kvm_handle_useraddr_range(kvm, &useraddr_range);
 
 	return 0;
 }
@@ -712,7 +712,7 @@ static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
 					const struct mmu_notifier_range *range)
 {
 	struct kvm *kvm = mmu_notifier_to_kvm(mn);
-	const struct kvm_hva_range hva_range = {
+	const struct kvm_useraddr_range useraddr_range = {
 		.start		= range->start,
 		.end		= range->end,
 		.pte		= __pte(0),
@@ -723,7 +723,7 @@ static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
 	};
 	bool wake;
 
-	__kvm_handle_hva_range(kvm, &hva_range);
+	__kvm_handle_useraddr_range(kvm, &useraddr_range);
 
 	/* Pairs with the increment in range_start(). */
 	spin_lock(&kvm->mn_invalidate_lock);
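The gfn conversion is unchanged by the rename; only the meaning of the
address differs (HVA vs. file offset). A small standalone sketch of the
clamping and end rounding, with illustrative numbers:

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define MAX(a, b) ((a) > (b) ? (a) : (b))
#define MIN(a, b) ((a) < (b) ? (a) : (b))

int main(void)
{
	/* Slot: base_gfn 0x100, backed at useraddr 0x200000, 16 pages. */
	uint64_t base_gfn = 0x100, slot_addr = 0x200000, npages = 16;
	/* Invalidation covers bytes [0x201000, 0x203800). */
	uint64_t start = 0x201000, end = 0x203800;

	uint64_t ua_start = MAX(start, slot_addr);
	uint64_t ua_end = MIN(end, slot_addr + (npages << PAGE_SHIFT));

	/* Mirrors useraddr_to_gfn_memslot(); the PAGE_SIZE - 1 rounds the
	 * end up so a partially covered page is still invalidated. */
	uint64_t gfn_start = base_gfn + ((ua_start - slot_addr) >> PAGE_SHIFT);
	uint64_t gfn_end = base_gfn +
		((ua_end + PAGE_SIZE - 1 - slot_addr) >> PAGE_SHIFT);

	/* Prints [0x101, 0x104): the page at 0x203000 is partly covered. */
	printf("gfn range: [0x%llx, 0x%llx)\n",
	       (unsigned long long)gfn_start, (unsigned long long)gfn_end);
	return 0;
}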
Shutemov" , luto@kernel.org, john.ji@intel.com, susie.li@intel.com, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com Subject: [RFC v2 PATCH 09/13] KVM: Introduce kvm_memfd_invalidate_range Date: Fri, 19 Nov 2021 21:47:35 +0800 Message-Id: <20211119134739.20218-10-chao.p.peng@linux.intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211119134739.20218-1-chao.p.peng@linux.intel.com> References: <20211119134739.20218-1-chao.p.peng@linux.intel.com> X-Rspamd-Server: rspam05 X-Rspamd-Queue-Id: F3F65105298B X-Stat-Signature: gd57n1x5smf1gd5c4w79ue6a9d8d9rzb Authentication-Results: imf31.hostedemail.com; dkim=none; spf=none (imf31.hostedemail.com: domain of chao.p.peng@linux.intel.com has no SPF policy when checking 134.134.136.100) smtp.mailfrom=chao.p.peng@linux.intel.com; dmarc=fail reason="No valid SPF, No valid DKIM" header.from=intel.com (policy=none) X-HE-Tag: 1637329787-350318 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Invalidate on fd-based memslot can reuse the code from existing MMU notifier. Signed-off-by: Yu Zhang Signed-off-by: Chao Peng --- include/linux/kvm_host.h | 3 +++ virt/kvm/kvm_main.c | 35 +++++++++++++++++++++++++++++++++++ 2 files changed, 38 insertions(+) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 925c4d9f0a31..f0fd32f6eab3 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -1883,4 +1883,7 @@ static inline void kvm_handle_signal_exit(struct kvm_vcpu *vcpu) /* Max number of entries allowed for each kvm dirty ring */ #define KVM_DIRTY_RING_MAX_ENTRIES 65536 +int kvm_memfd_invalidate_range(struct kvm *kvm, struct inode *inode, + unsigned long start, unsigned long end); + #endif diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index d9a6890dd18a..090afbadb03f 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -811,6 +811,35 @@ static int kvm_init_mmu_notifier(struct kvm *kvm) return mmu_notifier_register(&kvm->mmu_notifier, current->mm); } +int kvm_memfd_invalidate_range(struct kvm *kvm, struct inode *inode, + unsigned long start, unsigned long end) +{ + int ret; + const struct kvm_useraddr_range useraddr_range = { + .start = start, + .end = end, + .pte = __pte(0), + .handler = kvm_unmap_gfn_range, + .on_lock = (void *)kvm_null_fn, + .flush_on_ret = true, + .may_block = false, + }; + + + /* Prevent memslot modification */ + spin_lock(&kvm->mn_invalidate_lock); + kvm->mn_active_invalidate_count++; + spin_unlock(&kvm->mn_invalidate_lock); + + ret = __kvm_handle_useraddr_range(kvm, &useraddr_range); + + spin_lock(&kvm->mn_invalidate_lock); + kvm->mn_active_invalidate_count--; + spin_unlock(&kvm->mn_invalidate_lock); + + return ret; +} + #else /* !(CONFIG_MMU_NOTIFIER && KVM_ARCH_WANT_MMU_NOTIFIER) */ static int kvm_init_mmu_notifier(struct kvm *kvm) @@ -818,6 +847,12 @@ static int kvm_init_mmu_notifier(struct kvm *kvm) return 0; } +int kvm_memfd_invalidate_range(struct kvm *kvm, struct inode *inode, + unsigned long start, unsigned long end) +{ + return 0; +} + #endif /* CONFIG_MMU_NOTIFIER && KVM_ARCH_WANT_MMU_NOTIFIER */ #ifdef CONFIG_HAVE_KVM_PM_NOTIFIER From patchwork Fri Nov 19 13:47:36 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chao Peng X-Patchwork-Id: 12628939 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on 
From patchwork Fri Nov 19 13:47:36 2021
From: Chao Peng
Shutemov" , luto@kernel.org, john.ji@intel.com, susie.li@intel.com, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com Subject: [RFC v2 PATCH 10/13] KVM: Match inode for invalidation of fd-based slot Date: Fri, 19 Nov 2021 21:47:36 +0800 Message-Id: <20211119134739.20218-11-chao.p.peng@linux.intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211119134739.20218-1-chao.p.peng@linux.intel.com> References: <20211119134739.20218-1-chao.p.peng@linux.intel.com> X-Rspamd-Queue-Id: 41C13B000180 X-Stat-Signature: rhu1wnmqd5dt1guayebherz4yajwwoyk Authentication-Results: imf25.hostedemail.com; dkim=none; dmarc=fail reason="No valid SPF, No valid DKIM" header.from=intel.com (policy=none); spf=none (imf25.hostedemail.com: domain of chao.p.peng@linux.intel.com has no SPF policy when checking 192.55.52.93) smtp.mailfrom=chao.p.peng@linux.intel.com X-Rspamd-Server: rspam02 X-HE-Tag: 1637329797-649638 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Different fd/priv_fd can have the same userspace_addr so start/end is meaningful only when they are used together with fd/priv_fd. Signed-off-by: Yu Zhang Signed-off-by: Chao Peng --- virt/kvm/kvm_main.c | 13 +++++++++++++ 1 file changed, 13 insertions(+) diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 090afbadb03f..65055ac460eb 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -479,6 +479,7 @@ typedef void (*on_lock_fn_t)(struct kvm *kvm, unsigned long start, struct kvm_useraddr_range { unsigned long start; unsigned long end; + struct inode *inode; pte_t pte; gfn_handler_t handler; on_lock_fn_t on_lock; @@ -520,6 +521,17 @@ static __always_inline int __kvm_handle_useraddr_range(struct kvm *kvm, kvm_for_each_memslot(slot, slots) { unsigned long useraddr_start, useraddr_end; + /* + * Skip the slot if range->inode is not the same as + * that in slot->file or slot->priv_file. 
+			/*
+			 * Skip the slot if range->inode is not the same as
+			 * that in slot->file or slot->priv_file.
+			 */
+			if (range->inode &&
+			    (!slot->file ||
+			     slot->file->f_inode != range->inode) &&
+			    (!slot->priv_file ||
+			     slot->priv_file->f_inode != range->inode))
+				continue;
+
 			useraddr_start = max(range->start, slot->userspace_addr);
 			useraddr_end = min(range->end, slot->userspace_addr +
 				  (slot->npages << PAGE_SHIFT));
@@ -818,6 +830,7 @@ int kvm_memfd_invalidate_range(struct kvm *kvm, struct inode *inode,
 	const struct kvm_useraddr_range useraddr_range = {
 		.start		= start,
 		.end		= end,
+		.inode		= inode,
 		.pte		= __pte(0),
 		.handler	= kvm_unmap_gfn_range,
 		.on_lock	= (void *)kvm_null_fn,
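To see why the inode check is needed, consider two slots whose backing
files both start at offset 0; their useraddr ranges are identical even
though the memory is unrelated (a userspace sketch, memfd names
illustrative):

#define _GNU_SOURCE
#include <sys/mman.h>

static void two_overlapping_slots(void)
{
	int fd_a = memfd_create("guest-ram-a", MFD_CLOEXEC);
	int fd_b = memfd_create("guest-ram-b", MFD_CLOEXEC);

	/* Slot 0: fd_a, userspace_addr (file offset) 0, 1 GiB.
	 * Slot 1: fd_b, userspace_addr (file offset) 0, 1 GiB.
	 * Both slots cover useraddr [0, 1 GiB); a hole punched in fd_a
	 * must invalidate only slot 0, which is what the range->inode
	 * filter above guarantees. */
	(void)fd_a;
	(void)fd_b;
}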
Shutemov" , luto@kernel.org, john.ji@intel.com, susie.li@intel.com, jun.nakajima@intel.com, dave.hansen@intel.com, ak@linux.intel.com, david@redhat.com Subject: [RFC v2 PATCH 11/13] KVM: Add kvm_map_gfn_range Date: Fri, 19 Nov 2021 21:47:37 +0800 Message-Id: <20211119134739.20218-12-chao.p.peng@linux.intel.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20211119134739.20218-1-chao.p.peng@linux.intel.com> References: <20211119134739.20218-1-chao.p.peng@linux.intel.com> X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: 6686810003CA X-Stat-Signature: w8oomuo1sopyngjynj65qw7o5t99kqh8 Authentication-Results: imf12.hostedemail.com; dkim=none; spf=none (imf12.hostedemail.com: domain of chao.p.peng@linux.intel.com has no SPF policy when checking 134.134.136.24) smtp.mailfrom=chao.p.peng@linux.intel.com; dmarc=fail reason="No valid SPF, No valid DKIM" header.from=intel.com (policy=none) X-HE-Tag: 1637329806-660115 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This may be used in the fallocate callback for memfd based memory to setup the mapping for KVM second MMU when the pages are allocated in the memory backing store. Signed-off-by: Yu Zhang Signed-off-by: Chao Peng --- arch/x86/kvm/mmu/mmu.c | 47 ++++++++++++++++++++++++++++++++++++++++ include/linux/kvm_host.h | 2 ++ virt/kvm/kvm_main.c | 5 +++++ 3 files changed, 54 insertions(+) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index cd5d1f923694..5c475a161a3c 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -1951,6 +1951,53 @@ static __always_inline bool kvm_handle_gfn_range(struct kvm *kvm, return ret; } +bool kvm_map_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range) +{ + struct kvm_vcpu *vcpu; + kvm_pfn_t pfn; + gfn_t gfn; + int idx; + bool ret = true; + + /* Need vcpu context for kvm_mmu_do_page_fault. 
+	/* Need vcpu context for kvm_mmu_do_page_fault. */
+	vcpu = kvm_get_vcpu(kvm, 0);
+	if (mutex_lock_killable(&vcpu->mutex))
+		return false;
+
+	vcpu_load(vcpu);
+	idx = srcu_read_lock(&kvm->srcu);
+
+	kvm_mmu_reload(vcpu);
+
+	gfn = range->start;
+	while (gfn < range->end) {
+		if (signal_pending(current)) {
+			ret = false;
+			break;
+		}
+
+		if (need_resched())
+			cond_resched();
+
+		pfn = kvm_mmu_do_page_fault(vcpu, gfn << PAGE_SHIFT,
+					    PFERR_WRITE_MASK | PFERR_USER_MASK,
+					    false);
+		if (is_error_noslot_pfn(pfn) || kvm->vm_bugged) {
+			ret = false;
+			break;
+		}
+
+		gfn++;
+	}
+
+	srcu_read_unlock(&kvm->srcu, idx);
+	vcpu_put(vcpu);
+
+	mutex_unlock(&vcpu->mutex);
+
+	return ret;
+}
+
 bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
 {
 	bool flush = false;

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index f0fd32f6eab3..d841ed877b4b 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -237,6 +237,8 @@ struct kvm_gfn_range {
 	pte_t pte;
 	bool may_block;
 };
+
+bool kvm_map_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range);
 bool kvm_unmap_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range);
 bool kvm_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range);
 bool kvm_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range);

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 65055ac460eb..492c1a99ec63 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -471,6 +471,11 @@ static void kvm_mmu_notifier_invalidate_range(struct mmu_notifier *mn,
 	srcu_read_unlock(&kvm->srcu, idx);
 }
 
+bool __weak kvm_map_gfn_range(struct kvm *kvm, struct kvm_gfn_range *range)
+{
+	return false;
+}
+
 typedef bool (*gfn_handler_t)(struct kvm *kvm, struct kvm_gfn_range *range);
 
 typedef void (*on_lock_fn_t)(struct kvm *kvm, unsigned long start,
 			     unsigned long end);
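A usage sketch of the new helper (kernel side): premapping a just-allocated
range, with the kvm_gfn_range fields filled the same way the useraddr walk
fills them elsewhere in this series:

#include <linux/kvm_host.h>

static bool example_premap(struct kvm *kvm, struct kvm_memory_slot *slot,
			   gfn_t start, gfn_t end)
{
	struct kvm_gfn_range range = {
		.slot		= slot,
		.start		= start,
		.end		= end,
		.pte		= __pte(0),
		.may_block	= false,
	};

	/* Falls back to the __weak stub (a no-op) on architectures that
	 * do not implement kvm_map_gfn_range. */
	return kvm_map_gfn_range(kvm, &range);
}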
From patchwork Fri Nov 19 13:47:38 2021
From: Chao Peng
Subject: [RFC v2 PATCH 12/13] KVM: Introduce kvm_memfd_fallocate_range
Date: Fri, 19 Nov 2021 21:47:38 +0800
Message-Id: <20211119134739.20218-13-chao.p.peng@linux.intel.com>
In-Reply-To: <20211119134739.20218-1-chao.p.peng@linux.intel.com>

It reuses the same code as kvm_memfd_invalidate_range, except that it uses
kvm_map_gfn_range as its handler.
Signed-off-by: Yu Zhang
Signed-off-by: Chao Peng
---
 include/linux/kvm_host.h |  2 ++
 virt/kvm/kvm_main.c      | 28 +++++++++++++++++++++++++---
 2 files changed, 27 insertions(+), 3 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index d841ed877b4b..f1d7856be05b 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1887,5 +1887,7 @@ static inline void kvm_handle_signal_exit(struct kvm_vcpu *vcpu)
 
 int kvm_memfd_invalidate_range(struct kvm *kvm, struct inode *inode,
 			       unsigned long start, unsigned long end);
+int kvm_memfd_fallocate_range(struct kvm *kvm, struct inode *inode,
+			      unsigned long start, unsigned long end);
 
 #endif

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 492c1a99ec63..7eaafc0ae6ab 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -828,8 +828,10 @@ static int kvm_init_mmu_notifier(struct kvm *kvm)
 	return mmu_notifier_register(&kvm->mmu_notifier, current->mm);
 }
 
-int kvm_memfd_invalidate_range(struct kvm *kvm, struct inode *inode,
-			       unsigned long start, unsigned long end)
+int kvm_memfd_handle_range(struct kvm *kvm, struct inode *inode,
+			   unsigned long start, unsigned long end,
+			   gfn_handler_t handler)
 {
 	int ret;
 	const struct kvm_useraddr_range useraddr_range = {
@@ -837,7 +839,7 @@ int kvm_memfd_invalidate_range(struct kvm *kvm, struct inode *inode,
 		.end		= end,
 		.inode		= inode,
 		.pte		= __pte(0),
-		.handler	= kvm_unmap_gfn_range,
+		.handler	= handler,
 		.on_lock	= (void *)kvm_null_fn,
 		.flush_on_ret	= true,
 		.may_block	= false,
@@ -858,6 +860,20 @@ int kvm_memfd_invalidate_range(struct kvm *kvm, struct inode *inode,
 	return ret;
 }
 
+int kvm_memfd_invalidate_range(struct kvm *kvm, struct inode *inode,
+			       unsigned long start, unsigned long end)
+{
+	return kvm_memfd_handle_range(kvm, inode, start, end,
+				      kvm_unmap_gfn_range);
+}
+
+int kvm_memfd_fallocate_range(struct kvm *kvm, struct inode *inode,
+			      unsigned long start, unsigned long end)
+{
+	return kvm_memfd_handle_range(kvm, inode, start, end,
+				      kvm_map_gfn_range);
+}
+
 #else /* !(CONFIG_MMU_NOTIFIER && KVM_ARCH_WANT_MMU_NOTIFIER) */
 
 static int kvm_init_mmu_notifier(struct kvm *kvm)
@@ -871,6 +887,12 @@ int kvm_memfd_invalidate_range(struct kvm *kvm, struct inode *inode,
 	return 0;
 }
 
+int kvm_memfd_fallocate_range(struct kvm *kvm, struct inode *inode,
+			      unsigned long start, unsigned long end)
+{
+	return 0;
+}
+
 #endif /* CONFIG_MMU_NOTIFIER && KVM_ARCH_WANT_MMU_NOTIFIER */
 
 #ifdef CONFIG_HAVE_KVM_PM_NOTIFIER
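The handler parameter makes the range walk reusable beyond these two cases;
for instance, a hypothetical write-protect operation could plug in its own
gfn handler (a sketch, not part of this series):

/* Hypothetical: write-protect a file range instead of unmap/map. */
static bool example_wrprotect_gfn_range(struct kvm *kvm,
					struct kvm_gfn_range *range)
{
	/* Arch-specific write protection of range->start..range->end
	 * would go here; return true if a TLB flush is needed. */
	return false;
}

static int example_memfd_wrprotect_range(struct kvm *kvm,
					 struct inode *inode,
					 unsigned long start,
					 unsigned long end)
{
	return kvm_memfd_handle_range(kvm, inode, start, end,
				      example_wrprotect_gfn_range);
}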
From patchwork Fri Nov 19 13:47:39 2021
From: Chao Peng
Subject: [RFC v2 PATCH 13/13] KVM: Enable memfd based page invalidation/fallocate
Date: Fri, 19 Nov 2021 21:47:39 +0800
Message-Id: <20211119134739.20218-14-chao.p.peng@linux.intel.com>
In-Reply-To: <20211119134739.20218-1-chao.p.peng@linux.intel.com>

Since the memory backing store does not get notified when the VM is
destroyed, we need to check whether the VM is still alive in these
callbacks.
Signed-off-by: Yu Zhang
Signed-off-by: Chao Peng
---
 virt/kvm/memfd.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/virt/kvm/memfd.c b/virt/kvm/memfd.c
index bd930dcb455f..bcfdc685ce22 100644
--- a/virt/kvm/memfd.c
+++ b/virt/kvm/memfd.c
@@ -12,16 +12,38 @@
 #include <linux/memfd.h>
 
 static const struct guest_mem_ops *memfd_ops;
 
+static bool vm_is_dead(struct kvm *vm)
+{
+	struct kvm *kvm;
+
+	list_for_each_entry(kvm, &vm_list, vm_list) {
+		if (kvm == vm)
+			return false;
+	}
+
+	return true;
+}
+
 static void memfd_invalidate_page_range(struct inode *inode, void *owner,
 					pgoff_t start, pgoff_t end)
 {
 	//!!!We can get here after the owner no longer exists
+	if (vm_is_dead(owner))
+		return;
+
+	kvm_memfd_invalidate_range(owner, inode, start >> PAGE_SHIFT,
+				   end >> PAGE_SHIFT);
 }
 
 static void memfd_fallocate(struct inode *inode, void *owner,
 			    pgoff_t start, pgoff_t end)
 {
 	//!!!We can get here after the owner no longer exists
+	if (vm_is_dead(owner))
+		return;
+
+	kvm_memfd_fallocate_range(owner, inode, start >> PAGE_SHIFT,
+				  end >> PAGE_SHIFT);
 }
 
 static const struct guest_ops memfd_notifier = {
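Taken together, the two callbacks give userspace a simple lifecycle for
guest memory, driven entirely through fallocate (a userspace sketch of the
assumed protocol):

#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/falloc.h>

/* Allocate pages in the backing store; memfd_fallocate() then premaps
 * them into the guest via kvm_memfd_fallocate_range(). */
static int guest_mem_populate(int memfd, off_t off, off_t len)
{
	return fallocate(memfd, 0, off, len);
}

/* Free pages; memfd_invalidate_page_range() then unmaps them from the
 * guest via kvm_memfd_invalidate_range(). */
static int guest_mem_discard(int memfd, off_t off, off_t len)
{
	return fallocate(memfd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
			 off, len);
}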