From patchwork Tue Oct 20 06:18:54 2020
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 11845771
From: "Kirill A. Shutemov"
To: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Paolo Bonzini,
    Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel
Cc: David Rientjes, Andrea Arcangeli, Kees Cook, Will Drewry,
    "Edgecombe, Rick P", "Kleen, Andi", Liran Alon, Mike Rapoport,
    x86@kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, "Kirill A. Shutemov"
Subject: [RFCv2 11/16] KVM: Protected memory extension
Date: Tue, 20 Oct 2020 09:18:54 +0300
Message-Id: <20201020061859.18385-12-kirill.shutemov@linux.intel.com>
In-Reply-To: <20201020061859.18385-1-kirill.shutemov@linux.intel.com>
References: <20201020061859.18385-1-kirill.shutemov@linux.intel.com>

Add infrastructure for handling the protected memory extension.
Arch-specific code has to provide the hypercalls and define a non-zero
VM_KVM_PROTECTED.

Signed-off-by: Kirill A. Shutemov
---
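[Note, not part of the commit: a minimal sketch of how arch-specific
hypercall handling might drive the API this patch adds. The hypercall
numbers and the handler below are hypothetical; the real arch wiring is
done elsewhere in the series, and only kvm_protect_memory() and
kvm_protect_all_memory() come from this patch.]

	/*
	 * Illustrative only: hypothetical hypercall numbers and handler.
	 * Only kvm_protect_memory() and kvm_protect_all_memory() are
	 * introduced by this patch.
	 */
	#define KVM_HC_ENABLE_MEM_PROTECTED	70	/* hypothetical */
	#define KVM_HC_MEM_SHARE		71	/* hypothetical */

	static int handle_mem_protected_hypercall(struct kvm_vcpu *vcpu,
						  unsigned long nr,
						  unsigned long gfn,
						  unsigned long npages)
	{
		switch (nr) {
		case KVM_HC_ENABLE_MEM_PROTECTED:
			/* Convert all existing memslots of the VM. */
			return kvm_protect_all_memory(vcpu->kvm);
		case KVM_HC_MEM_SHARE:
			/* Guest asks to share (unprotect) a range of pages. */
			return kvm_protect_memory(vcpu->kvm, gfn, npages, false);
		default:
			return -KVM_ENOSYS;
		}
	}
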
 include/linux/kvm_host.h |  4 +++
 virt/kvm/Kconfig         |  3 ++
 virt/kvm/kvm_main.c      | 68 ++++++++++++++++++++++++++++++++++++++
 virt/lib/Makefile        |  1 +
 virt/lib/mem_protected.c | 71 ++++++++++++++++++++++++++++++++++++++++
 5 files changed, 147 insertions(+)
 create mode 100644 virt/lib/mem_protected.c

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 380a64613880..6655e8da4555 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -701,6 +701,10 @@ void kvm_arch_flush_shadow_all(struct kvm *kvm);
 void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
 				   struct kvm_memory_slot *slot);
 
+int kvm_protect_all_memory(struct kvm *kvm);
+int kvm_protect_memory(struct kvm *kvm,
+		       unsigned long gfn, unsigned long npages, bool protect);
+
 int gfn_to_page_many_atomic(struct kvm_memory_slot *slot, gfn_t gfn,
 			    struct page **pages, int nr_pages);
 
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index 1c37ccd5d402..50d7422386aa 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -63,3 +63,6 @@ config HAVE_KVM_NO_POLL
 
 config KVM_XFER_TO_GUEST_WORK
 	bool
+
+config HAVE_KVM_PROTECTED_MEMORY
+	bool
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 125db5a73e10..4c008c7b4974 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -154,6 +154,8 @@ static void kvm_uevent_notify_change(unsigned int type, struct kvm *kvm);
 static unsigned long long kvm_createvm_count;
 static unsigned long long kvm_active_vms;
 
+int __kvm_protect_memory(unsigned long start, unsigned long end, bool protect);
+
 __weak void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
 						   unsigned long start, unsigned long end)
 {
@@ -1371,6 +1373,15 @@ int __kvm_set_memory_region(struct kvm *kvm,
 	if (r)
 		goto out_bitmap;
 
+	if (IS_ENABLED(CONFIG_HAVE_KVM_PROTECTED_MEMORY) &&
+	    mem->memory_size && kvm->mem_protected) {
+		r = __kvm_protect_memory(new.userspace_addr,
+					 new.userspace_addr + new.npages * PAGE_SIZE,
+					 true);
+		if (r)
+			goto out_bitmap;
+	}
+
 	if (old.dirty_bitmap && !new.dirty_bitmap)
 		kvm_destroy_dirty_bitmap(&old);
 	return 0;
@@ -2720,6 +2731,63 @@ void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn)
 }
 EXPORT_SYMBOL_GPL(kvm_vcpu_mark_page_dirty);
 
+int kvm_protect_memory(struct kvm *kvm,
+		       unsigned long gfn, unsigned long npages, bool protect)
+{
+	struct kvm_memory_slot *memslot;
+	unsigned long start, end;
+	gfn_t numpages;
+
+	if (!IS_ENABLED(CONFIG_HAVE_KVM_PROTECTED_MEMORY))
+		return -KVM_ENOSYS;
+
+	if (!npages)
+		return 0;
+
+	memslot = gfn_to_memslot(kvm, gfn);
+	/* Not backed by memory. It's okay. */
+	if (!memslot)
+		return 0;
+
+	start = gfn_to_hva_many(memslot, gfn, &numpages);
+	end = start + npages * PAGE_SIZE;
+
+	/* XXX: Share range across memory slots? */
+	if (WARN_ON(numpages < npages))
+		return -EINVAL;
+
+	return __kvm_protect_memory(start, end, protect);
+}
+EXPORT_SYMBOL_GPL(kvm_protect_memory);
+
+int kvm_protect_all_memory(struct kvm *kvm)
+{
+	struct kvm_memslots *slots;
+	struct kvm_memory_slot *memslot;
+	unsigned long start, end;
+	int i, ret = 0;
+
+	if (!IS_ENABLED(CONFIG_HAVE_KVM_PROTECTED_MEMORY))
+		return -KVM_ENOSYS;
+
+	mutex_lock(&kvm->slots_lock);
+	kvm->mem_protected = true;
+	for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) {
+		slots = __kvm_memslots(kvm, i);
+		kvm_for_each_memslot(memslot, slots) {
+			start = memslot->userspace_addr;
+			end = start + memslot->npages * PAGE_SIZE;
+			ret = __kvm_protect_memory(start, end, true);
+			if (ret)
+				goto out;
+		}
+	}
+out:
+	mutex_unlock(&kvm->slots_lock);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(kvm_protect_all_memory);
+
 void kvm_sigset_activate(struct kvm_vcpu *vcpu)
 {
 	if (!vcpu->sigset_active)
diff --git a/virt/lib/Makefile b/virt/lib/Makefile
index bd7f9a78bb6b..d6e50510801f 100644
--- a/virt/lib/Makefile
+++ b/virt/lib/Makefile
@@ -1,2 +1,3 @@
 # SPDX-License-Identifier: GPL-2.0-only
 obj-$(CONFIG_IRQ_BYPASS_MANAGER) += irqbypass.o
+obj-$(CONFIG_HAVE_KVM_PROTECTED_MEMORY) += mem_protected.o
diff --git a/virt/lib/mem_protected.c b/virt/lib/mem_protected.c
new file mode 100644
index 000000000000..0b01dd74f29c
--- /dev/null
+++ b/virt/lib/mem_protected.c
@@ -0,0 +1,71 @@
+#include
+#include
+#include
+#include
+#include
+#include
+
+int __kvm_protect_memory(unsigned long start, unsigned long end, bool protect)
+{
+	struct mm_struct *mm = current->mm;
+	struct vm_area_struct *vma, *prev;
+	int ret;
+
+	if (mmap_write_lock_killable(mm))
+		return -EINTR;
+
+	ret = -ENOMEM;
+	vma = find_vma(current->mm, start);
+	if (!vma)
+		goto out;
+
+	ret = -EINVAL;
+	if (vma->vm_start > start)
+		goto out;
+
+	if (start > vma->vm_start)
+		prev = vma;
+	else
+		prev = vma->vm_prev;
+
+	ret = 0;
+	while (true) {
+		unsigned long newflags, tmp;
+
+		tmp = vma->vm_end;
+		if (tmp > end)
+			tmp = end;
+
+		newflags = vma->vm_flags;
+		if (protect)
+			newflags |= VM_KVM_PROTECTED;
+		else
+			newflags &= ~VM_KVM_PROTECTED;
+
+		/* The VMA has been handled as part of another memslot */
+		if (newflags == vma->vm_flags)
+			goto next;
+
+		ret = mprotect_fixup(vma, &prev, start, tmp, newflags);
+		if (ret)
+			goto out;
+
+next:
+		start = tmp;
+		if (start < prev->vm_end)
+			start = prev->vm_end;
+
+		if (start >= end)
+			goto out;
+
+		vma = prev->vm_next;
+		if (!vma || vma->vm_start != start) {
+			ret = -ENOMEM;
+			goto out;
+		}
+	}
+out:
+	mmap_write_unlock(mm);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(__kvm_protect_memory);
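
[Note, not part of the patch: a minimal sketch of the arch side this
patch assumes. The commit message only requires that arch code define a
non-zero VM_KVM_PROTECTED; the config symbol, the VM flag bit chosen,
and the vma_is_kvm_protected() helper below are assumptions made for
illustration, not taken from this series.]

	/*
	 * Illustrative sketch only.  This patch requires arch code to
	 * define a non-zero VM_KVM_PROTECTED; the config symbol, the
	 * flag bit picked here, and the helper are assumptions.
	 */
	#ifdef CONFIG_ARCH_HAS_KVM_PROTECTED_MEMORY	/* hypothetical */
	# define VM_KVM_PROTECTED	VM_HIGH_ARCH_4	/* an unused high VM flag bit */
	#else
	# define VM_KVM_PROTECTED	0		/* feature compiles away */
	#endif

	static inline bool vma_is_kvm_protected(struct vm_area_struct *vma)
	{
		return vma->vm_flags & VM_KVM_PROTECTED;
	}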