From patchwork Tue Oct 20 06:18:44 2020
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 11845747
From: "Kirill A. Shutemov"
To: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Paolo Bonzini,
    Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
    Joerg Roedel
Cc: David Rientjes, Andrea Arcangeli, Kees Cook, Will Drewry,
    "Edgecombe, Rick P", "Kleen, Andi", Liran Alon, Mike Rapoport,
    x86@kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, "Kirill A. Shutemov"
Subject: [RFCv2 01/16] x86/mm: Move force_dma_unencrypted() to common code
Date: Tue, 20 Oct 2020 09:18:44 +0300
Message-Id: <20201020061859.18385-2-kirill.shutemov@linux.intel.com>
In-Reply-To: <20201020061859.18385-1-kirill.shutemov@linux.intel.com>
References: <20201020061859.18385-1-kirill.shutemov@linux.intel.com>

force_dma_unencrypted() has to return true for a KVM guest with memory
protection enabled, so move it out of the AMD SME code.

Introduce a new config option, X86_MEM_ENCRYPT_COMMON, that has to be
selected by all x86 memory encryption features.

This is preparation for the following patches.

Signed-off-by: Kirill A. Shutemov
---
 arch/x86/Kconfig                 |  8 +++++--
 arch/x86/include/asm/io.h        |  4 +++-
 arch/x86/mm/Makefile             |  2 ++
 arch/x86/mm/mem_encrypt.c        | 30 -------------------------
 arch/x86/mm/mem_encrypt_common.c | 38 ++++++++++++++++++++++++++++++++
 5 files changed, 49 insertions(+), 33 deletions(-)
 create mode 100644 arch/x86/mm/mem_encrypt_common.c

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 7101ac64bb20..619ebf40e457 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1514,13 +1514,17 @@ config X86_CPA_STATISTICS
           helps to determine the effectiveness of preserving large and huge
           page mappings when mapping protections are changed.

+config X86_MEM_ENCRYPT_COMMON
+        select ARCH_HAS_FORCE_DMA_UNENCRYPTED
+        select DYNAMIC_PHYSICAL_MASK
+        def_bool n
+
 config AMD_MEM_ENCRYPT
         bool "AMD Secure Memory Encryption (SME) support"
         depends on X86_64 && CPU_SUP_AMD
         select DMA_COHERENT_POOL
-        select DYNAMIC_PHYSICAL_MASK
         select ARCH_USE_MEMREMAP_PROT
-        select ARCH_HAS_FORCE_DMA_UNENCRYPTED
+        select X86_MEM_ENCRYPT_COMMON
         help
           Say yes to enable support for the encryption of system memory.
           This requires an AMD processor that supports Secure Memory

diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
index e1aa17a468a8..c58d52fd7bf2 100644
--- a/arch/x86/include/asm/io.h
+++ b/arch/x86/include/asm/io.h
@@ -256,10 +256,12 @@ static inline void slow_down_io(void)

 #endif

-#ifdef CONFIG_AMD_MEM_ENCRYPT
 #include

 extern struct static_key_false sev_enable_key;
+
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+
 static inline bool sev_key_active(void)
 {
         return static_branch_unlikely(&sev_enable_key);

diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index 5864219221ca..b31cb52bf1bd 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -52,6 +52,8 @@ obj-$(CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS) += pkeys.o
 obj-$(CONFIG_RANDOMIZE_MEMORY)          += kaslr.o
 obj-$(CONFIG_PAGE_TABLE_ISOLATION)      += pti.o

+obj-$(CONFIG_X86_MEM_ENCRYPT_COMMON)    += mem_encrypt_common.o
+
 obj-$(CONFIG_AMD_MEM_ENCRYPT)   += mem_encrypt.o
 obj-$(CONFIG_AMD_MEM_ENCRYPT)   += mem_encrypt_identity.o
 obj-$(CONFIG_AMD_MEM_ENCRYPT)   += mem_encrypt_boot.o

diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 9f1177edc2e7..4dbdc9dac36b 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -15,10 +15,6 @@
 #include
 #include
 #include
-#include
-#include
-#include
-#include
 #include
 #include
@@ -350,32 +346,6 @@ bool sev_active(void)
         return sme_me_mask && sev_enabled;
 }

-/* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
-bool force_dma_unencrypted(struct device *dev)
-{
-        /*
-         * For SEV, all DMA must be to unencrypted addresses.
-         */
-        if (sev_active())
-                return true;
-
-        /*
-         * For SME, all DMA must be to unencrypted addresses if the
-         * device does not support DMA to addresses that include the
-         * encryption mask.
-         */
-        if (sme_active()) {
-                u64 dma_enc_mask = DMA_BIT_MASK(__ffs64(sme_me_mask));
-                u64 dma_dev_mask = min_not_zero(dev->coherent_dma_mask,
-                                                dev->bus_dma_limit);
-
-                if (dma_dev_mask <= dma_enc_mask)
-                        return true;
-        }
-
-        return false;
-}
-
 void __init mem_encrypt_free_decrypted_mem(void)
 {
         unsigned long vaddr, vaddr_end, npages;

diff --git a/arch/x86/mm/mem_encrypt_common.c b/arch/x86/mm/mem_encrypt_common.c
new file mode 100644
index 000000000000..964e04152417
--- /dev/null
+++ b/arch/x86/mm/mem_encrypt_common.c
@@ -0,0 +1,38 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * AMD Memory Encryption Support
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky
+ */
+
+#include
+#include
+#include
+
+/* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
+bool force_dma_unencrypted(struct device *dev)
+{
+        /*
+         * For SEV, all DMA must be to unencrypted/shared addresses.
+         */
+        if (sev_active())
+                return true;
+
+        /*
+         * For SME, all DMA must be to unencrypted addresses if the
+         * device does not support DMA to addresses that include the
+         * encryption mask.
+         */
+        if (sme_active()) {
+                u64 dma_enc_mask = DMA_BIT_MASK(__ffs64(sme_me_mask));
+                u64 dma_dev_mask = min_not_zero(dev->coherent_dma_mask,
+                                                dev->bus_dma_limit);
+
+                if (dma_dev_mask <= dma_enc_mask)
+                        return true;
+        }
+
+        return false;
+}
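
For context: with ARCH_HAS_FORCE_DMA_UNENCRYPTED selected, the generic
DMA-direct code consults force_dma_unencrypted() when handing memory to a
device. The snippet below is a simplified, illustrative sketch of such a
call site, not part of this patch, showing why the hook has to become
available to non-SME configurations as well.

/*
 * Simplified, illustrative sketch of a DMA-direct style allocation path
 * (not from this series): when force_dma_unencrypted() says so, freshly
 * allocated pages are flipped to the decrypted/shared state before the
 * device is given their DMA address.
 */
static void *dma_alloc_sketch(struct device *dev, size_t size,
                              dma_addr_t *dma_handle, gfp_t gfp)
{
        unsigned int order = get_order(size);
        struct page *page = alloc_pages(gfp, order);

        if (!page)
                return NULL;

        if (force_dma_unencrypted(dev))
                /* Make the buffer accessible to the hypervisor/device. */
                set_memory_decrypted((unsigned long)page_address(page),
                                     1 << order);

        *dma_handle = phys_to_dma(dev, page_to_phys(page));
        return page_address(page);
}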

From patchwork Tue Oct 20 06:18:45 2020
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 11845751
From: "Kirill A. Shutemov"
Shutemov" Subject: [RFCv2 02/16] x86/kvm: Introduce KVM memory protection feature Date: Tue, 20 Oct 2020 09:18:45 +0300 Message-Id: <20201020061859.18385-3-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201020061859.18385-1-kirill.shutemov@linux.intel.com> References: <20201020061859.18385-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Provide basic helpers, KVM_FEATURE, CPUID flag and a hypercall. Host side doesn't provide the feature yet, so it is a dead code for now. Signed-off-by: Kirill A. Shutemov --- arch/x86/include/asm/cpufeatures.h | 1 + arch/x86/include/asm/kvm_para.h | 5 +++++ arch/x86/include/uapi/asm/kvm_para.h | 3 ++- arch/x86/kernel/kvm.c | 18 ++++++++++++++++++ include/uapi/linux/kvm_para.h | 3 ++- 5 files changed, 28 insertions(+), 2 deletions(-) diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h index 2901d5df4366..a72157137764 100644 --- a/arch/x86/include/asm/cpufeatures.h +++ b/arch/x86/include/asm/cpufeatures.h @@ -236,6 +236,7 @@ #define X86_FEATURE_EPT_AD ( 8*32+17) /* Intel Extended Page Table access-dirty bit */ #define X86_FEATURE_VMCALL ( 8*32+18) /* "" Hypervisor supports the VMCALL instruction */ #define X86_FEATURE_VMW_VMMCALL ( 8*32+19) /* "" VMware prefers VMMCALL hypercall instruction */ +#define X86_FEATURE_KVM_MEM_PROTECTED ( 8*32+20) /* KVM memory protection extenstion */ /* Intel-defined CPU features, CPUID level 0x00000007:0 (EBX), word 9 */ #define X86_FEATURE_FSGSBASE ( 9*32+ 0) /* RDFSBASE, WRFSBASE, RDGSBASE, WRGSBASE instructions*/ diff --git a/arch/x86/include/asm/kvm_para.h b/arch/x86/include/asm/kvm_para.h index 338119852512..74aea18f3130 100644 --- a/arch/x86/include/asm/kvm_para.h +++ b/arch/x86/include/asm/kvm_para.h @@ -11,11 +11,16 @@ extern void kvmclock_init(void); #ifdef CONFIG_KVM_GUEST bool kvm_check_and_clear_guest_paused(void); +bool kvm_mem_protected(void); #else static inline bool kvm_check_and_clear_guest_paused(void) { return false; } +static inline bool kvm_mem_protected(void) +{ + return false; +} #endif /* CONFIG_KVM_GUEST */ #define KVM_HYPERCALL \ diff --git a/arch/x86/include/uapi/asm/kvm_para.h b/arch/x86/include/uapi/asm/kvm_para.h index 812e9b4c1114..defbfc630a9f 100644 --- a/arch/x86/include/uapi/asm/kvm_para.h +++ b/arch/x86/include/uapi/asm/kvm_para.h @@ -28,10 +28,11 @@ #define KVM_FEATURE_PV_UNHALT 7 #define KVM_FEATURE_PV_TLB_FLUSH 9 #define KVM_FEATURE_ASYNC_PF_VMEXIT 10 -#define KVM_FEATURE_PV_SEND_IPI 11 +#define KVM_FEATURE_PV_SEND_IPI 11 #define KVM_FEATURE_POLL_CONTROL 12 #define KVM_FEATURE_PV_SCHED_YIELD 13 #define KVM_FEATURE_ASYNC_PF_INT 14 +#define KVM_FEATURE_MEM_PROTECTED 15 #define KVM_HINTS_REALTIME 0 diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c index 9663ba31347c..2c1f8952b92a 100644 --- a/arch/x86/kernel/kvm.c +++ b/arch/x86/kernel/kvm.c @@ -37,6 +37,13 @@ #include #include +static bool mem_protected; + +bool kvm_mem_protected(void) +{ + return mem_protected; +} + DEFINE_STATIC_KEY_FALSE(kvm_async_pf_enabled); static int kvmapf = 1; @@ -742,6 +749,17 @@ static void __init kvm_init_platform(void) { kvmclock_init(); x86_platform.apic_post_init = kvm_apic_init; + + if (kvm_para_has_feature(KVM_FEATURE_MEM_PROTECTED)) { + if (kvm_hypercall0(KVM_HC_ENABLE_MEM_PROTECTED)) { + pr_err("Failed to enable KVM memory protection\n"); + return; + } + + 
pr_info("KVM memory protection enabled\n"); + mem_protected = true; + setup_force_cpu_cap(X86_FEATURE_KVM_MEM_PROTECTED); + } } const __initconst struct hypervisor_x86 x86_hyper_kvm = { diff --git a/include/uapi/linux/kvm_para.h b/include/uapi/linux/kvm_para.h index 8b86609849b9..1a216f32e572 100644 --- a/include/uapi/linux/kvm_para.h +++ b/include/uapi/linux/kvm_para.h @@ -27,8 +27,9 @@ #define KVM_HC_MIPS_EXIT_VM 7 #define KVM_HC_MIPS_CONSOLE_OUTPUT 8 #define KVM_HC_CLOCK_PAIRING 9 -#define KVM_HC_SEND_IPI 10 +#define KVM_HC_SEND_IPI 10 #define KVM_HC_SCHED_YIELD 11 +#define KVM_HC_ENABLE_MEM_PROTECTED 12 /* * hypercalls use architecture specific From patchwork Tue Oct 20 06:18:46 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A . Shutemov" X-Patchwork-Id: 11845749 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id C396D16C0 for ; Tue, 20 Oct 2020 06:19:11 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 5CB412242E for ; Tue, 20 Oct 2020 06:19:11 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=shutemov-name.20150623.gappssmtp.com header.i=@shutemov-name.20150623.gappssmtp.com header.b="xayXCOMN" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 5CB412242E Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=shutemov.name Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 60E0E6B0062; Tue, 20 Oct 2020 02:19:08 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 55C9D6B006C; Tue, 20 Oct 2020 02:19:08 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 3A20B6B006E; Tue, 20 Oct 2020 02:19:08 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0118.hostedemail.com [216.40.44.118]) by kanga.kvack.org (Postfix) with ESMTP id 0AEF36B0062 for ; Tue, 20 Oct 2020 02:19:07 -0400 (EDT) Received: from smtpin14.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id A1222180AD811 for ; Tue, 20 Oct 2020 06:19:07 +0000 (UTC) X-FDA: 77391301134.14.bath50_21149892723d Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin14.hostedemail.com (Postfix) with ESMTP id 829EC18229818 for ; Tue, 20 Oct 2020 06:19:07 +0000 (UTC) X-Spam-Summary: 1,0,0,405bedd663eabc8f,d41d8cd98f00b204,kirill@shutemov.name,,RULES_HIT:41:355:379:541:960:973:988:989:1260:1311:1314:1345:1359:1431:1437:1515:1535:1542:1711:1730:1747:1777:1792:2393:2559:2562:3138:3139:3140:3141:3142:3353:3865:3866:3867:3868:3871:4117:4250:4321:4605:5007:6119:6261:6653:6742:7875:7903:8660:9036:10004:11026:11473:11657:11658:11914:12043:12048:12296:12297:12438:12517:12519:12555:12895:12986:13148:13230:13894:14096:14181:14721:21080:21220:21444:21451:21627:21939:21990:30045:30054:30070,0,RBL:209.85.167.68:@shutemov.name:.lbl8.mailshell.net-62.8.0.100 

From patchwork Tue Oct 20 06:18:46 2020
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 11845749
From: "Kirill A. Shutemov"
Shutemov" Subject: [RFCv2 03/16] x86/kvm: Make DMA pages shared Date: Tue, 20 Oct 2020 09:18:46 +0300 Message-Id: <20201020061859.18385-4-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201020061859.18385-1-kirill.shutemov@linux.intel.com> References: <20201020061859.18385-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Make force_dma_unencrypted() return true for KVM to get DMA pages mapped as shared. __set_memory_enc_dec() now informs the host via hypercall if the state of the page has changed from shared to private or back. Signed-off-by: Kirill A. Shutemov --- arch/x86/Kconfig | 1 + arch/x86/mm/mem_encrypt_common.c | 5 +++-- arch/x86/mm/pat/set_memory.c | 7 +++++++ include/uapi/linux/kvm_para.h | 2 ++ 4 files changed, 13 insertions(+), 2 deletions(-) diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 619ebf40e457..cd272e3babbc 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -805,6 +805,7 @@ config KVM_GUEST select PARAVIRT_CLOCK select ARCH_CPUIDLE_HALTPOLL select X86_HV_CALLBACK_VECTOR + select X86_MEM_ENCRYPT_COMMON default y help This option enables various optimizations for running under the KVM diff --git a/arch/x86/mm/mem_encrypt_common.c b/arch/x86/mm/mem_encrypt_common.c index 964e04152417..a878e7f246d5 100644 --- a/arch/x86/mm/mem_encrypt_common.c +++ b/arch/x86/mm/mem_encrypt_common.c @@ -10,14 +10,15 @@ #include #include #include +#include /* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */ bool force_dma_unencrypted(struct device *dev) { /* - * For SEV, all DMA must be to unencrypted/shared addresses. + * For SEV and KVM, all DMA must be to unencrypted/shared addresses. */ - if (sev_active()) + if (sev_active() || kvm_mem_protected()) return true; /* diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c index d1b2a889f035..4c49303126c9 100644 --- a/arch/x86/mm/pat/set_memory.c +++ b/arch/x86/mm/pat/set_memory.c @@ -16,6 +16,7 @@ #include #include #include +#include #include #include @@ -1977,6 +1978,12 @@ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc) struct cpa_data cpa; int ret; + if (kvm_mem_protected()) { + unsigned long gfn = __pa(addr) >> PAGE_SHIFT; + int call = enc ? KVM_HC_MEM_UNSHARE : KVM_HC_MEM_SHARE; + return kvm_hypercall2(call, gfn, numpages); + } + /* Nothing to do if memory encryption is not active */ if (!mem_encrypt_active()) return 0; diff --git a/include/uapi/linux/kvm_para.h b/include/uapi/linux/kvm_para.h index 1a216f32e572..c6d8c988e330 100644 --- a/include/uapi/linux/kvm_para.h +++ b/include/uapi/linux/kvm_para.h @@ -30,6 +30,8 @@ #define KVM_HC_SEND_IPI 10 #define KVM_HC_SCHED_YIELD 11 #define KVM_HC_ENABLE_MEM_PROTECTED 12 +#define KVM_HC_MEM_SHARE 13 +#define KVM_HC_MEM_UNSHARE 14 /* * hypercalls use architecture specific From patchwork Tue Oct 20 06:18:47 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A . 
Shutemov" X-Patchwork-Id: 11845755 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D8EF216C0 for ; Tue, 20 Oct 2020 06:19:17 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 958572240A for ; Tue, 20 Oct 2020 06:19:17 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=shutemov-name.20150623.gappssmtp.com header.i=@shutemov-name.20150623.gappssmtp.com header.b="mMkXN+PJ" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 958572240A Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=shutemov.name Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 7C7906B006E; Tue, 20 Oct 2020 02:19:09 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 6D77B6B0070; Tue, 20 Oct 2020 02:19:09 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 575276B0072; Tue, 20 Oct 2020 02:19:09 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0156.hostedemail.com [216.40.44.156]) by kanga.kvack.org (Postfix) with ESMTP id 269CA6B006E for ; Tue, 20 Oct 2020 02:19:09 -0400 (EDT) Received: from smtpin22.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id AFA8D1EE6 for ; Tue, 20 Oct 2020 06:19:08 +0000 (UTC) X-FDA: 77391301176.22.step91_56126292723d Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin22.hostedemail.com (Postfix) with ESMTP id 8DA4118038E67 for ; Tue, 20 Oct 2020 06:19:08 +0000 (UTC) X-Spam-Summary: 1,0,0,51da448e42ff2c79,d41d8cd98f00b204,kirill@shutemov.name,,RULES_HIT:41:69:355:379:541:960:966:973:988:989:1260:1311:1314:1345:1359:1431:1437:1515:1535:1543:1711:1730:1747:1777:1792:2196:2199:2393:2559:2562:2895:2897:3138:3139:3140:3141:3142:3354:3622:3865:3867:3868:3871:3874:4118:4250:4321:4385:4605:5007:6119:6120:6261:6653:6742:7901:8568:8660:9592:10004:11026:11473:11658:11914:12043:12048:12291:12296:12297:12438:12517:12519:12555:12895:12986:13141:13148:13230:13894:14181:14721:21080:21444:21451:21627:21939:21990:30054:30070,0,RBL:209.85.167.68:@shutemov.name:.lbl8.mailshell.net-62.8.0.100 66.201.201.201;04ygeqomxtz783ykehsmjsmoi58s6ypoww78hd3fr49wocgmcih34si6eiekyt3.buqthki3djekjc4nzma5fqyucncjen73hpzdtenhq5gwtofg3enuifrz7bwqta6.c-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fn,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:24,LUA_SUMMARY:none X-HE-Tag: step91_56126292723d X-Filterd-Recvd-Size: 7703 Received: from mail-lf1-f68.google.com (mail-lf1-f68.google.com [209.85.167.68]) by imf23.hostedemail.com (Postfix) with ESMTP for ; Tue, 20 Oct 2020 06:19:08 +0000 (UTC) Received: by mail-lf1-f68.google.com with SMTP id l2so689899lfk.0 for ; Mon, 19 Oct 2020 23:19:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=shutemov-name.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=VLwnQOcY8yfmeB13Kjhgi8noTYS9j96JYTZlWN6p8dw=; 
From: "Kirill A. Shutemov"
Subject: [RFCv2 04/16] x86/kvm: Use bounce buffers for KVM memory protection
Date: Tue, 20 Oct 2020 09:18:47 +0300
Message-Id: <20201020061859.18385-5-kirill.shutemov@linux.intel.com>
In-Reply-To: <20201020061859.18385-1-kirill.shutemov@linux.intel.com>
References: <20201020061859.18385-1-kirill.shutemov@linux.intel.com>

Mirror SEV: always use SWIOTLB bounce buffers if KVM memory protection is
enabled.

Signed-off-by: Kirill A. Shutemov
---
 arch/x86/Kconfig                 |  1 +
 arch/x86/kernel/kvm.c            |  2 ++
 arch/x86/kernel/pci-swiotlb.c    |  3 ++-
 arch/x86/mm/mem_encrypt.c        | 21 ---------------------
 arch/x86/mm/mem_encrypt_common.c | 23 +++++++++++++++++++++++
 5 files changed, 28 insertions(+), 22 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index cd272e3babbc..b22b95517437 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -806,6 +806,7 @@ config KVM_GUEST
         select ARCH_CPUIDLE_HALTPOLL
         select X86_HV_CALLBACK_VECTOR
         select X86_MEM_ENCRYPT_COMMON
+        select SWIOTLB
         default y
         help
           This option enables various optimizations for running under the KVM

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 2c1f8952b92a..30bb3d2d6ccd 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -26,6 +26,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -759,6 +760,7 @@ static void __init kvm_init_platform(void)
                 pr_info("KVM memory protection enabled\n");
                 mem_protected = true;
                 setup_force_cpu_cap(X86_FEATURE_KVM_MEM_PROTECTED);
+                swiotlb_force = SWIOTLB_FORCE;
         }
 }

diff --git a/arch/x86/kernel/pci-swiotlb.c b/arch/x86/kernel/pci-swiotlb.c
index c2cfa5e7c152..814060a6ceb0 100644
--- a/arch/x86/kernel/pci-swiotlb.c
+++ b/arch/x86/kernel/pci-swiotlb.c
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include

 int swiotlb __read_mostly;

@@ -49,7 +50,7 @@ int __init pci_swiotlb_detect_4gb(void)
          * buffers are allocated and used for devices that do not support
          * the addressing range required for the encryption mask.
          */
-        if (sme_active())
+        if (sme_active() || kvm_mem_protected())
                 swiotlb = 1;

         return swiotlb;

diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 4dbdc9dac36b..5de64e068b0a 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -369,24 +369,3 @@ void __init mem_encrypt_free_decrypted_mem(void)

         free_init_pages("unused decrypted", vaddr, vaddr_end);
 }
-
-/* Architecture __weak replacement functions */
-void __init mem_encrypt_init(void)
-{
-        if (!sme_me_mask)
-                return;
-
-        /* Call into SWIOTLB to update the SWIOTLB DMA buffers */
-        swiotlb_update_mem_attributes();
-
-        /*
-         * With SEV, we need to unroll the rep string I/O instructions.
-         */
-        if (sev_active())
-                static_branch_enable(&sev_enable_key);
-
-        pr_info("AMD %s active\n",
-                sev_active() ? "Secure Encrypted Virtualization (SEV)"
-                             : "Secure Memory Encryption (SME)");
-}
-

diff --git a/arch/x86/mm/mem_encrypt_common.c b/arch/x86/mm/mem_encrypt_common.c
index a878e7f246d5..7900f3788010 100644
--- a/arch/x86/mm/mem_encrypt_common.c
+++ b/arch/x86/mm/mem_encrypt_common.c
@@ -37,3 +37,26 @@ bool force_dma_unencrypted(struct device *dev)

         return false;
 }
+
+void __init mem_encrypt_init(void)
+{
+        if (!sme_me_mask && !kvm_mem_protected())
+                return;
+
+        /* Call into SWIOTLB to update the SWIOTLB DMA buffers */
+        swiotlb_update_mem_attributes();
+
+        /*
+         * With SEV, we need to unroll the rep string I/O instructions.
+         */
+        if (sev_active())
+                static_branch_enable(&sev_enable_key);
+
+        if (sme_me_mask) {
+                pr_info("AMD %s active\n",
+                        sev_active() ? "Secure Encrypted Virtualization (SEV)"
+                                     : "Secure Memory Encryption (SME)");
+        } else {
+                pr_info("KVM memory protection enabled\n");
+        }
+}
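
Forcing SWIOTLB means streaming DMA keeps working without driver changes:
the data is bounced through the SWIOTLB pool, which lives in memory that
force_dma_unencrypted() has already caused to be shared with the host. The
fragment below is an illustrative, made-up driver path showing where the
bouncing happens; it is not part of the patch.

/*
 * Illustrative only: a driver doing streaming DMA in a protected guest.
 * With swiotlb_force set, dma_map_single() copies the private buffer
 * into a shared SWIOTLB slot and returns the slot's DMA address, so the
 * device (and the host) never touch the private page itself.
 */
static int send_to_device(struct device *dev, void *priv_buf, size_t len)
{
        dma_addr_t dma;

        dma = dma_map_single(dev, priv_buf, len, DMA_TO_DEVICE);
        if (dma_mapping_error(dev, dma))
                return -ENOMEM;

        /* ... program the device to DMA from 'dma' and wait ... */

        /* Unmapping releases the bounce slot (copies back for FROM_DEVICE). */
        dma_unmap_single(dev, dma, len, DMA_TO_DEVICE);
        return 0;
}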
Shutemov" X-Patchwork-Id: 11845757 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 19EC514B4 for ; Tue, 20 Oct 2020 06:19:20 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id C82DE22282 for ; Tue, 20 Oct 2020 06:19:19 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=shutemov-name.20150623.gappssmtp.com header.i=@shutemov-name.20150623.gappssmtp.com header.b="Jaw6HndA" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org C82DE22282 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=shutemov.name Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id AAB086B0070; Tue, 20 Oct 2020 02:19:10 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 9E81A6B0071; Tue, 20 Oct 2020 02:19:10 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 85D086B0072; Tue, 20 Oct 2020 02:19:10 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0222.hostedemail.com [216.40.44.222]) by kanga.kvack.org (Postfix) with ESMTP id 554D06B0070 for ; Tue, 20 Oct 2020 02:19:10 -0400 (EDT) Received: from smtpin10.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id E8F3182E5939 for ; Tue, 20 Oct 2020 06:19:09 +0000 (UTC) X-FDA: 77391301218.10.top77_321055b2723d Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin10.hostedemail.com (Postfix) with ESMTP id C6AD216A0B9 for ; Tue, 20 Oct 2020 06:19:09 +0000 (UTC) X-Spam-Summary: 1,0,0,4a3527cb066c4316,d41d8cd98f00b204,kirill@shutemov.name,,RULES_HIT:41:355:379:541:960:973:988:989:1260:1311:1314:1345:1359:1431:1437:1515:1534:1541:1711:1730:1747:1777:1792:2393:2559:2562:2736:3138:3139:3140:3141:3142:3352:3865:3867:3871:3874:4250:4321:5007:6119:6120:6261:6653:6742:7901:10004:11026:11658:11914:12043:12048:12296:12297:12517:12519:12555:12895:12986:13069:13161:13229:13311:13357:13894:14096:14181:14384:14721:21080:21433:21444:21451:21627:21990:30054:30062,0,RBL:209.85.167.65:@shutemov.name:.lbl8.mailshell.net-62.8.0.100 66.201.201.201;04yrczkcrzo8qbatath6iypa83rpeop17xm3tfx8u39mwfk3khj3onqqtxcy6g5.ys9cfr1ax97qr9nxcte11xtofxyy3riuaraf3qx1cjncqrtouyi4a33srr3zc9i.k-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fn,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:24,LUA_SUMMARY:none X-HE-Tag: top77_321055b2723d X-Filterd-Recvd-Size: 4870 Received: from mail-lf1-f65.google.com (mail-lf1-f65.google.com [209.85.167.65]) by imf08.hostedemail.com (Postfix) with ESMTP for ; Tue, 20 Oct 2020 06:19:09 +0000 (UTC) Received: by mail-lf1-f65.google.com with SMTP id j30so674541lfp.4 for ; Mon, 19 Oct 2020 23:19:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=shutemov-name.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=qLURW/jdlNSmWtGV7khIZIL/9JX3uhBDJsl3glpsKS8=; b=Jaw6HndAu/3od4pm499wgLo/PnSpOsgf0qKVfrKZ1ObrIz2yUpappsJVWeGMaVAqNO 
From: "Kirill A. Shutemov"
Subject: [RFCv2 05/16] x86/kvm: Make VirtIO use DMA API in KVM guest
Date: Tue, 20 Oct 2020 09:18:48 +0300
Message-Id: <20201020061859.18385-6-kirill.shutemov@linux.intel.com>
In-Reply-To: <20201020061859.18385-1-kirill.shutemov@linux.intel.com>
References: <20201020061859.18385-1-kirill.shutemov@linux.intel.com>

VirtIO is the primary way to provide I/O for KVM guests. All memory that
is used for communication with the host has to be marked as shared. The
easiest way to achieve that is to use the DMA API, which already knows how
to deal with shared memory.

Signed-off-by: Kirill A. Shutemov
---
 drivers/virtio/virtio_ring.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index becc77697960..ace733845d5d 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include

 #ifdef DEBUG
 /* For development, we want to crash whenever the ring is screwed. */
@@ -255,6 +256,9 @@ static bool vring_use_dma_api(struct virtio_device *vdev)
         if (xen_domain())
                 return true;

+        if (kvm_mem_protected())
+                return true;
+
         return false;
 }
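
vring_use_dma_api() decides how virtio translates guest buffer addresses
before publishing them to the host. The sketch below contrasts the two
paths conceptually; it is a simplified illustration, not the actual
virtio_ring.c code.

/*
 * Conceptual sketch of what vring_use_dma_api() controls (simplified):
 * how a buffer address is prepared before it is written into a vring
 * descriptor.
 */
static dma_addr_t vring_buf_addr_sketch(struct virtio_device *vdev,
                                        struct device *dma_dev,
                                        void *buf, size_t len)
{
        if (!vring_use_dma_api(vdev))
                /* Legacy path: hand the host a raw guest physical address. */
                return (dma_addr_t)virt_to_phys(buf);

        /*
         * DMA API path: with KVM memory protection this bounces the data
         * through the shared SWIOTLB pool, so the host never needs access
         * to the private page behind 'buf'.
         */
        return dma_map_single(dma_dev, buf, len, DMA_BIDIRECTIONAL);
}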
Shutemov" X-Patchwork-Id: 11845761 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 1B9DC14B4 for ; Tue, 20 Oct 2020 06:19:24 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id C2E52223BF for ; Tue, 20 Oct 2020 06:19:23 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=shutemov-name.20150623.gappssmtp.com header.i=@shutemov-name.20150623.gappssmtp.com header.b="wqgOv0Ad" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org C2E52223BF Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=shutemov.name Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 79B336B0073; Tue, 20 Oct 2020 02:19:11 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 74FB26B0074; Tue, 20 Oct 2020 02:19:11 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 55B526B0075; Tue, 20 Oct 2020 02:19:11 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0099.hostedemail.com [216.40.44.99]) by kanga.kvack.org (Postfix) with ESMTP id 0F02B6B0071 for ; Tue, 20 Oct 2020 02:19:11 -0400 (EDT) Received: from smtpin20.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id A50933629 for ; Tue, 20 Oct 2020 06:19:10 +0000 (UTC) X-FDA: 77391301260.20.bike79_59116e92723d Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin20.hostedemail.com (Postfix) with ESMTP id 7F2B6180C07A3 for ; Tue, 20 Oct 2020 06:19:10 +0000 (UTC) X-Spam-Summary: 1,0,0,64a7bfd9fd08165d,d41d8cd98f00b204,kirill@shutemov.name,,RULES_HIT:41:355:379:541:960:973:988:989:1260:1311:1314:1345:1359:1437:1515:1534:1539:1711:1714:1730:1747:1777:1792:2198:2199:2393:2559:2562:2731:3138:3139:3140:3141:3142:3350:3865:3867:3868:3871:3872:5007:6261:6653:6742:7903:10004:11026:11658:11914:12043:12048:12297:12438:12517:12519:12555:12895:13069:13311:13357:13894:14096:14181:14384:14721:21080:21444:21451:21627:30054,0,RBL:209.85.208.196:@shutemov.name:.lbl8.mailshell.net-66.201.201.201 62.8.84.100;04ygsg814dqkc1igzznzegyehxowiopatuer3a56s84ft7bfagcsce4dezs475s.jjzbxknis8qcu8wc13pym93chw4cjaidpp3ybd5nkro53id36oq31y151xd4xd6.a-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fn,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:24,LUA_SUMMARY:none X-HE-Tag: bike79_59116e92723d X-Filterd-Recvd-Size: 4654 Received: from mail-lj1-f196.google.com (mail-lj1-f196.google.com [209.85.208.196]) by imf23.hostedemail.com (Postfix) with ESMTP for ; Tue, 20 Oct 2020 06:19:10 +0000 (UTC) Received: by mail-lj1-f196.google.com with SMTP id y16so760209ljk.1 for ; Mon, 19 Oct 2020 23:19:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=shutemov-name.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=pE6faVejbkAIUEORvfpEBGX8dWE5OEtccf8+g3bpDfA=; b=wqgOv0AdT4cxsnlRIT/UMGHB0RcEbRbr7Zk7Ij6sVDDKQ58f7O/mJyeJdUG+lVOM3W 
From: "Kirill A. Shutemov"
Subject: [RFCv2 06/16] x86/kvmclock: Share hvclock memory with the host
Date: Tue, 20 Oct 2020 09:18:49 +0300
Message-Id: <20201020061859.18385-7-kirill.shutemov@linux.intel.com>
In-Reply-To: <20201020061859.18385-1-kirill.shutemov@linux.intel.com>
References: <20201020061859.18385-1-kirill.shutemov@linux.intel.com>

hvclock is shared between the guest and the hypervisor, so it has to be
accessible by the host.

Signed-off-by: Kirill A. Shutemov
---
 arch/x86/kernel/kvmclock.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
index 34b18f6eeb2c..ac6c2abe0d0f 100644
--- a/arch/x86/kernel/kvmclock.c
+++ b/arch/x86/kernel/kvmclock.c
@@ -253,7 +253,7 @@ static void __init kvmclock_init_mem(void)
          * hvclock is shared between the guest and the hypervisor, must
          * be mapped decrypted.
          */
-        if (sev_active()) {
+        if (sev_active() || kvm_mem_protected()) {
                 r = set_memory_decrypted((unsigned long) hvclock_mem,
                                          1UL << order);
                 if (r) {
Shutemov" X-Patchwork-Id: 11845763 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0250416C0 for ; Tue, 20 Oct 2020 06:19:26 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id A77D3222E8 for ; Tue, 20 Oct 2020 06:19:25 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=shutemov-name.20150623.gappssmtp.com header.i=@shutemov-name.20150623.gappssmtp.com header.b="uRxBxRP0" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org A77D3222E8 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=shutemov.name Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id C0A506B0071; Tue, 20 Oct 2020 02:19:11 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id B95C76B0074; Tue, 20 Oct 2020 02:19:11 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id A14626B0078; Tue, 20 Oct 2020 02:19:11 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0033.hostedemail.com [216.40.44.33]) by kanga.kvack.org (Postfix) with ESMTP id 62D526B0071 for ; Tue, 20 Oct 2020 02:19:11 -0400 (EDT) Received: from smtpin16.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 13A9A180AD807 for ; Tue, 20 Oct 2020 06:19:11 +0000 (UTC) X-FDA: 77391301302.16.coal68_42006652723d Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin16.hostedemail.com (Postfix) with ESMTP id E4FA7101D65F4 for ; Tue, 20 Oct 2020 06:19:10 +0000 (UTC) X-Spam-Summary: 1,0,0,b9cbedd7e78437f8,d41d8cd98f00b204,kirill@shutemov.name,,RULES_HIT:41:355:379:541:960:973:988:989:1260:1311:1314:1345:1359:1431:1437:1515:1535:1541:1711:1730:1747:1777:1792:2198:2199:2393:2559:2562:2693:2731:2897:3138:3139:3140:3141:3142:3352:3865:3867:3870:3871:4250:4321:5007:6119:6120:6261:6653:6742:7901:8603:9036:10004:11026:11658:11914:12043:12048:12297:12438:12517:12519:12555:12895:13069:13255:13311:13357:13894:14096:14181:14384:14721:21080:21444:21451:21627:30054,0,RBL:209.85.208.194:@shutemov.name:.lbl8.mailshell.net-62.8.84.100 66.201.201.201;04ygoggdig3kii7y49uamfwbrcqk4ophnpaaxsfrqwykobmdjpezfb1wm9wftyj.1rapr7wid3hpecjtp3uekn1seub6he88eiybw3fzhcncjptwfnbmqzr5jsghf6q.e-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fn,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:24,LUA_SUMMARY:none X-HE-Tag: coal68_42006652723d X-Filterd-Recvd-Size: 5165 Received: from mail-lj1-f194.google.com (mail-lj1-f194.google.com [209.85.208.194]) by imf38.hostedemail.com (Postfix) with ESMTP for ; Tue, 20 Oct 2020 06:19:10 +0000 (UTC) Received: by mail-lj1-f194.google.com with SMTP id a5so725954ljj.11 for ; Mon, 19 Oct 2020 23:19:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=shutemov-name.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=vaud5LDoCMN7VYhH4oLnzKJ275PDylYsQEFgwO7dM1A=; 
From: "Kirill A. Shutemov"
Subject: [RFCv2 07/16] x86/realmode: Share trampoline area if KVM memory protection enabled
Date: Tue, 20 Oct 2020 09:18:50 +0300
Message-Id: <20201020061859.18385-8-kirill.shutemov@linux.intel.com>
In-Reply-To: <20201020061859.18385-1-kirill.shutemov@linux.intel.com>
References: <20201020061859.18385-1-kirill.shutemov@linux.intel.com>

If KVM memory protection is active, the trampoline area will need to be in
shared memory.

Signed-off-by: Kirill A. Shutemov
---
 arch/x86/realmode/init.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c
index 1ed1208931e0..7392940a7f96 100644
--- a/arch/x86/realmode/init.c
+++ b/arch/x86/realmode/init.c
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include

 struct real_mode_header *real_mode_header;
 u32 *trampoline_cr4_features;
@@ -55,11 +56,11 @@ static void __init setup_real_mode(void)
         base = (unsigned char *)real_mode_header;

         /*
-         * If SME is active, the trampoline area will need to be in
-         * decrypted memory in order to bring up other processors
+         * If SME or KVM memory protection is active, the trampoline area will
+         * need to be in decrypted memory in order to bring up other processors
          * successfully. This is not needed for SEV.
*/ - if (sme_active()) + if (sme_active() || kvm_mem_protected()) set_memory_decrypted((unsigned long)base, size >> PAGE_SHIFT); memcpy(base, real_mode_blob, size);
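For reference, kvm_mem_protected() is introduced by an earlier patch in this series and is not shown here; the sketch below is only an assumption about its shape (a simple flag query, set once the guest has negotiated memory protection with the host). Both branches of the check end up in set_memory_decrypted(), which for a KVM-protected guest is expected to share the trampoline pages with the host rather than clear an encryption bit.

/*
 * Sketch only, not part of the patch: the real kvm_mem_protected()
 * comes from an earlier patch in this series.  The flag name and the
 * point where it gets set are assumptions for illustration.
 */
#include <linux/types.h>

static bool mem_protected_enabled;	/* hypothetically set during guest boot */

static inline bool kvm_mem_protected(void)
{
	return mem_protected_enabled;
}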
From patchwork Tue Oct 20 06:18:51 2020
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 11845759
From: "Kirill A. Shutemov"
Subject: [RFCv2 08/16] KVM: Use GUP instead of copy_from/to_user() to access guest memory
Date: Tue, 20 Oct 2020 09:18:51 +0300
Message-Id: <20201020061859.18385-9-kirill.shutemov@linux.intel.com>
In-Reply-To: <20201020061859.18385-1-kirill.shutemov@linux.intel.com>

Introduce new helpers copy_from_guest()/copy_to_guest() to be used if the KVM memory protection feature is enabled.

Signed-off-by: Kirill A. Shutemov
---
include/linux/kvm_host.h | 4 ++ virt/kvm/kvm_main.c | 90 +++++++++++++++++++++++++++++++--------- 2 files changed, 75 insertions(+), 19 deletions(-)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 05e3c2fb3ef7..380a64613880 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -504,6 +504,7 @@ struct kvm { struct srcu_struct irq_srcu; pid_t userspace_pid; unsigned int max_halt_poll_ns; + bool mem_protected; }; #define kvm_err(fmt, ...)
\ @@ -728,6 +729,9 @@ void kvm_set_pfn_dirty(kvm_pfn_t pfn); void kvm_set_pfn_accessed(kvm_pfn_t pfn); void kvm_get_pfn(kvm_pfn_t pfn); +int copy_from_guest(void *data, unsigned long hva, int len, bool protected); +int copy_to_guest(unsigned long hva, const void *data, int len, bool protected); + void kvm_release_pfn(kvm_pfn_t pfn, bool dirty, struct gfn_to_pfn_cache *cache); int kvm_read_guest_page(struct kvm *kvm, gfn_t gfn, void *data, int offset, int len); diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index cf88233b819a..a9884cb8c867 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -2313,19 +2313,70 @@ static int next_segment(unsigned long len, int offset) return len; } +int copy_from_guest(void *data, unsigned long hva, int len, bool protected) +{ + int offset = offset_in_page(hva); + struct page *page; + int npages, seg; + + if (!protected) + return __copy_from_user(data, (void __user *)hva, len); + + might_fault(); + kasan_check_write(data, len); + check_object_size(data, len, false); + + while ((seg = next_segment(len, offset)) != 0) { + npages = get_user_pages_unlocked(hva, 1, &page, 0); + if (npages != 1) + return -EFAULT; + memcpy(data, page_address(page) + offset, seg); + put_page(page); + len -= seg; + hva += seg; + offset = 0; + } + + return 0; +} + +int copy_to_guest(unsigned long hva, const void *data, int len, bool protected) +{ + int offset = offset_in_page(hva); + struct page *page; + int npages, seg; + + if (!protected) + return __copy_to_user((void __user *)hva, data, len); + + might_fault(); + kasan_check_read(data, len); + check_object_size(data, len, true); + + while ((seg = next_segment(len, offset)) != 0) { + npages = get_user_pages_unlocked(hva, 1, &page, FOLL_WRITE); + if (npages != 1) + return -EFAULT; + memcpy(page_address(page) + offset, data, seg); + put_page(page); + len -= seg; + hva += seg; + offset = 0; + } + + return 0; +} + static int __kvm_read_guest_page(struct kvm_memory_slot *slot, gfn_t gfn, - void *data, int offset, int len) + void *data, int offset, int len, + bool protected) { - int r; unsigned long addr; addr = gfn_to_hva_memslot_prot(slot, gfn, NULL); if (kvm_is_error_hva(addr)) return -EFAULT; - r = __copy_from_user(data, (void __user *)addr + offset, len); - if (r) - return -EFAULT; - return 0; + return copy_from_guest(data, addr + offset, len, protected); } int kvm_read_guest_page(struct kvm *kvm, gfn_t gfn, void *data, int offset, @@ -2333,7 +2384,8 @@ int kvm_read_guest_page(struct kvm *kvm, gfn_t gfn, void *data, int offset, { struct kvm_memory_slot *slot = gfn_to_memslot(kvm, gfn); - return __kvm_read_guest_page(slot, gfn, data, offset, len); + return __kvm_read_guest_page(slot, gfn, data, offset, len, + kvm->mem_protected); } EXPORT_SYMBOL_GPL(kvm_read_guest_page); @@ -2342,7 +2394,8 @@ int kvm_vcpu_read_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn, void *data, { struct kvm_memory_slot *slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn); - return __kvm_read_guest_page(slot, gfn, data, offset, len); + return __kvm_read_guest_page(slot, gfn, data, offset, len, + vcpu->kvm->mem_protected); } EXPORT_SYMBOL_GPL(kvm_vcpu_read_guest_page); @@ -2415,7 +2468,8 @@ int kvm_vcpu_read_guest_atomic(struct kvm_vcpu *vcpu, gpa_t gpa, EXPORT_SYMBOL_GPL(kvm_vcpu_read_guest_atomic); static int __kvm_write_guest_page(struct kvm_memory_slot *memslot, gfn_t gfn, - const void *data, int offset, int len) + const void *data, int offset, int len, + bool protected) { int r; unsigned long addr; @@ -2423,7 +2477,8 @@ static int 
__kvm_write_guest_page(struct kvm_memory_slot *memslot, gfn_t gfn, addr = gfn_to_hva_memslot(memslot, gfn); if (kvm_is_error_hva(addr)) return -EFAULT; - r = __copy_to_user((void __user *)addr + offset, data, len); + + r = copy_to_guest(addr + offset, data, len, protected); if (r) return -EFAULT; mark_page_dirty_in_slot(memslot, gfn); @@ -2435,7 +2490,8 @@ int kvm_write_guest_page(struct kvm *kvm, gfn_t gfn, { struct kvm_memory_slot *slot = gfn_to_memslot(kvm, gfn); - return __kvm_write_guest_page(slot, gfn, data, offset, len); + return __kvm_write_guest_page(slot, gfn, data, offset, len, + kvm->mem_protected); } EXPORT_SYMBOL_GPL(kvm_write_guest_page); @@ -2444,7 +2500,8 @@ int kvm_vcpu_write_guest_page(struct kvm_vcpu *vcpu, gfn_t gfn, { struct kvm_memory_slot *slot = kvm_vcpu_gfn_to_memslot(vcpu, gfn); - return __kvm_write_guest_page(slot, gfn, data, offset, len); + return __kvm_write_guest_page(slot, gfn, data, offset, len, + vcpu->kvm->mem_protected); } EXPORT_SYMBOL_GPL(kvm_vcpu_write_guest_page); @@ -2560,7 +2617,7 @@ int kvm_write_guest_offset_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc, if (unlikely(!ghc->memslot)) return kvm_write_guest(kvm, gpa, data, len); - r = __copy_to_user((void __user *)ghc->hva + offset, data, len); + r = copy_to_guest(ghc->hva + offset, data, len, kvm->mem_protected); if (r) return -EFAULT; mark_page_dirty_in_slot(ghc->memslot, gpa >> PAGE_SHIFT); @@ -2581,7 +2638,6 @@ int kvm_read_guest_offset_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc, unsigned long len) { struct kvm_memslots *slots = kvm_memslots(kvm); - int r; gpa_t gpa = ghc->gpa + offset; BUG_ON(len + offset > ghc->len); @@ -2597,11 +2653,7 @@ int kvm_read_guest_offset_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc, if (unlikely(!ghc->memslot)) return kvm_read_guest(kvm, gpa, data, len); - r = __copy_from_user(data, (void __user *)ghc->hva + offset, len); - if (r) - return -EFAULT; - - return 0; + return copy_from_guest(data, ghc->hva + offset, len, kvm->mem_protected); } EXPORT_SYMBOL_GPL(kvm_read_guest_offset_cached); From patchwork Tue Oct 20 06:18:52 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A . 
Shutemov" X-Patchwork-Id: 11845765 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0CA9C16C0 for ; Tue, 20 Oct 2020 06:19:28 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id A12C722409 for ; Tue, 20 Oct 2020 06:19:27 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=shutemov-name.20150623.gappssmtp.com header.i=@shutemov-name.20150623.gappssmtp.com header.b="kWnjG/bx" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org A12C722409 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=shutemov.name Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id B21CC6B0074; Tue, 20 Oct 2020 02:19:12 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id AF9376B0075; Tue, 20 Oct 2020 02:19:12 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 995A46B0078; Tue, 20 Oct 2020 02:19:12 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0017.hostedemail.com [216.40.44.17]) by kanga.kvack.org (Postfix) with ESMTP id 5F4CE6B0074 for ; Tue, 20 Oct 2020 02:19:12 -0400 (EDT) Received: from smtpin23.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 100F982DF882 for ; Tue, 20 Oct 2020 06:19:12 +0000 (UTC) X-FDA: 77391301344.23.beam29_28087562723d Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin23.hostedemail.com (Postfix) with ESMTP id E5EC237604 for ; Tue, 20 Oct 2020 06:19:11 +0000 (UTC) X-Spam-Summary: 1,0,0,bfe7ae6683d6af72,d41d8cd98f00b204,kirill@shutemov.name,,RULES_HIT:1:2:41:355:379:541:960:973:988:989:1260:1311:1314:1345:1359:1437:1515:1605:1730:1747:1777:1792:2194:2199:2393:2553:2559:2562:2736:2901:3138:3139:3140:3141:3142:3865:3866:3867:3868:3870:3871:3872:3874:4051:4250:4321:4605:5007:6119:6261:6653:6742:7903:7904:10004:10226:11026:11473:11658:11914:12043:12048:12296:12297:12438:12517:12519:12555:12895:12986:13161:13229:13894:14096:21080:21444:21451:21627:21740:21987:21990:30003:30012:30054:30069:30079:30090,0,RBL:209.85.208.194:@shutemov.name:.lbl8.mailshell.net-62.8.84.100 66.201.201.201;04yf79ajn9gkesn9ksnre3rumz1ikycj1xkckjyb4fw4kzfg5euqkk5cmafd9rk.d9hwqtw7rh1cfsb1gm4kogrx4ecuc63k3rmxdyp1pg4aj34sgq1rwhgm5hknppr.r-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fn,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:24,LUA_SUMMARY:none X-HE-Tag: beam29_28087562723d X-Filterd-Recvd-Size: 12101 Received: from mail-lj1-f194.google.com (mail-lj1-f194.google.com [209.85.208.194]) by imf50.hostedemail.com (Postfix) with ESMTP for ; Tue, 20 Oct 2020 06:19:11 +0000 (UTC) Received: by mail-lj1-f194.google.com with SMTP id c21so766887ljj.0 for ; Mon, 19 Oct 2020 23:19:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=shutemov-name.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Ef3vnZp/GnAIOhUojdZDqsMeSzQG+k/o8v/cqaX5tdE=; 
From: "Kirill A. Shutemov"
Subject: [RFCv2 09/16] KVM: mm: Introduce VM_KVM_PROTECTED
Date: Tue, 20 Oct 2020 09:18:52 +0300
Message-Id: <20201020061859.18385-10-kirill.shutemov@linux.intel.com>
In-Reply-To: <20201020061859.18385-1-kirill.shutemov@linux.intel.com>

Add a new VMA flag that marks a VMA as not accessible to userspace but still usable by the kernel with GUP when FOLL_KVM is specified. FOLL_KVM is only used in KVM code, which has to know how to deal with such pages.

Signed-off-by: Kirill A.
Shutemov --- include/linux/mm.h | 8 ++++++++ mm/gup.c | 20 ++++++++++++++++---- mm/huge_memory.c | 20 ++++++++++++++++---- mm/memory.c | 3 +++ mm/mmap.c | 3 +++ virt/kvm/async_pf.c | 2 +- virt/kvm/kvm_main.c | 9 +++++---- 7 files changed, 52 insertions(+), 13 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 16b799a0522c..c8d8cdcbc425 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -342,6 +342,8 @@ extern unsigned int kobjsize(const void *objp); # define VM_MAPPED_COPY VM_ARCH_1 /* T if mapped copy of data (nommu mmap) */ #endif +#define VM_KVM_PROTECTED 0 + #ifndef VM_GROWSUP # define VM_GROWSUP VM_NONE #endif @@ -658,6 +660,11 @@ static inline bool vma_is_accessible(struct vm_area_struct *vma) return vma->vm_flags & VM_ACCESS_FLAGS; } +static inline bool vma_is_kvm_protected(struct vm_area_struct *vma) +{ + return vma->vm_flags & VM_KVM_PROTECTED; +} + #ifdef CONFIG_SHMEM /* * The vma_is_shmem is not inline because it is used only by slow @@ -2766,6 +2773,7 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address, #define FOLL_SPLIT_PMD 0x20000 /* split huge pmd before returning */ #define FOLL_PIN 0x40000 /* pages must be released via unpin_user_page */ #define FOLL_FAST_ONLY 0x80000 /* gup_fast: prevent fall-back to slow gup */ +#define FOLL_KVM 0x100000 /* access to VM_KVM_PROTECTED VMAs */ /* * FOLL_PIN and FOLL_LONGTERM may be used in various combinations with each diff --git a/mm/gup.c b/mm/gup.c index e869c634cc9a..accf6db0c06f 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -384,10 +384,19 @@ static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address, * FOLL_FORCE can write to even unwritable pte's, but only * after we've gone through a COW cycle and they are dirty. */ -static inline bool can_follow_write_pte(pte_t pte, unsigned int flags) +static inline bool can_follow_write_pte(struct vm_area_struct *vma, + pte_t pte, unsigned int flags) { - return pte_write(pte) || - ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte)); + if (pte_write(pte)) + return true; + + if ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte)) + return true; + + if (!vma_is_kvm_protected(vma) || !(vma->vm_flags & VM_WRITE)) + return false; + + return (vma->vm_flags & VM_SHARED) || page_mapcount(pte_page(pte)) == 1; } static struct page *follow_page_pte(struct vm_area_struct *vma, @@ -430,7 +439,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma, } if ((flags & FOLL_NUMA) && pte_protnone(pte)) goto no_page; - if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags)) { + if ((flags & FOLL_WRITE) && !can_follow_write_pte(vma, pte, flags)) { pte_unmap_unlock(ptep, ptl); return NULL; } @@ -750,6 +759,9 @@ static struct page *follow_page_mask(struct vm_area_struct *vma, ctx->page_mask = 0; + if (vma_is_kvm_protected(vma) && (flags & FOLL_KVM)) + flags &= ~FOLL_NUMA; + /* make this handle hugepd */ page = follow_huge_addr(mm, address, flags & FOLL_WRITE); if (!IS_ERR(page)) { diff --git a/mm/huge_memory.c b/mm/huge_memory.c index da397779a6d4..ec8cf9a40cfd 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -1322,10 +1322,19 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd) * FOLL_FORCE can write to even unwritable pmd's, but only * after we've gone through a COW cycle and they are dirty. 
*/ -static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags) +static inline bool can_follow_write_pmd(struct vm_area_struct *vma, + pmd_t pmd, unsigned int flags) { - return pmd_write(pmd) || - ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pmd_dirty(pmd)); + if (pmd_write(pmd)) + return true; + + if ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pmd_dirty(pmd)) + return true; + + if (!vma_is_kvm_protected(vma) || !(vma->vm_flags & VM_WRITE)) + return false; + + return (vma->vm_flags & VM_SHARED) || page_mapcount(pmd_page(pmd)) == 1; } struct page *follow_trans_huge_pmd(struct vm_area_struct *vma, @@ -1338,7 +1347,7 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma, assert_spin_locked(pmd_lockptr(mm, pmd)); - if (flags & FOLL_WRITE && !can_follow_write_pmd(*pmd, flags)) + if (flags & FOLL_WRITE && !can_follow_write_pmd(vma, *pmd, flags)) goto out; /* Avoid dumping huge zero page */ @@ -1412,6 +1421,9 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t pmd) bool was_writable; int flags = 0; + if (vma_is_kvm_protected(vma)) + return VM_FAULT_SIGBUS; + vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd); if (unlikely(!pmd_same(pmd, *vmf->pmd))) goto out_unlock; diff --git a/mm/memory.c b/mm/memory.c index eeae590e526a..2c9756b4e52f 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -4165,6 +4165,9 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf) bool was_writable = pte_savedwrite(vmf->orig_pte); int flags = 0; + if (vma_is_kvm_protected(vma)) + return VM_FAULT_SIGBUS; + /* * The "pte" at this point cannot be used safely without * validation through pte_unmap_same(). It's of NUMA type but diff --git a/mm/mmap.c b/mm/mmap.c index bdd19f5b994e..be699f688b6c 100644 --- a/mm/mmap.c +++ b/mm/mmap.c @@ -112,6 +112,9 @@ pgprot_t vm_get_page_prot(unsigned long vm_flags) (VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)]) | pgprot_val(arch_vm_get_page_prot(vm_flags))); + if (vm_flags & VM_KVM_PROTECTED) + ret = PAGE_NONE; + return arch_filter_pgprot(ret); } EXPORT_SYMBOL(vm_get_page_prot); diff --git a/virt/kvm/async_pf.c b/virt/kvm/async_pf.c index dd777688d14a..85a2f99f6e9b 100644 --- a/virt/kvm/async_pf.c +++ b/virt/kvm/async_pf.c @@ -61,7 +61,7 @@ static void async_pf_execute(struct work_struct *work) * access remotely. 
*/ mmap_read_lock(mm); - get_user_pages_remote(mm, addr, 1, FOLL_WRITE, NULL, NULL, + get_user_pages_remote(mm, addr, 1, FOLL_WRITE | FOLL_KVM, NULL, NULL, &locked); if (locked) mmap_read_unlock(mm); diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index a9884cb8c867..125db5a73e10 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -1794,7 +1794,7 @@ unsigned long kvm_vcpu_gfn_to_hva_prot(struct kvm_vcpu *vcpu, gfn_t gfn, bool *w static inline int check_user_page_hwpoison(unsigned long addr) { - int rc, flags = FOLL_HWPOISON | FOLL_WRITE; + int rc, flags = FOLL_HWPOISON | FOLL_WRITE | FOLL_KVM; rc = get_user_pages(addr, 1, flags, NULL, NULL); return rc == -EHWPOISON; @@ -1836,7 +1836,7 @@ static bool hva_to_pfn_fast(unsigned long addr, bool write_fault, static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault, bool *writable, kvm_pfn_t *pfn) { - unsigned int flags = FOLL_HWPOISON; + unsigned int flags = FOLL_HWPOISON | FOLL_KVM; struct page *page; int npages = 0; @@ -2327,7 +2327,7 @@ int copy_from_guest(void *data, unsigned long hva, int len, bool protected) check_object_size(data, len, false); while ((seg = next_segment(len, offset)) != 0) { - npages = get_user_pages_unlocked(hva, 1, &page, 0); + npages = get_user_pages_unlocked(hva, 1, &page, FOLL_KVM); if (npages != 1) return -EFAULT; memcpy(data, page_address(page) + offset, seg); @@ -2354,7 +2354,8 @@ int copy_to_guest(unsigned long hva, const void *data, int len, bool protected) check_object_size(data, len, true); while ((seg = next_segment(len, offset)) != 0) { - npages = get_user_pages_unlocked(hva, 1, &page, FOLL_WRITE); + npages = get_user_pages_unlocked(hva, 1, &page, + FOLL_WRITE | FOLL_KVM); if (npages != 1) return -EFAULT; memcpy(page_address(page) + offset, data, seg);
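Taken together with the mmap and fault-path changes above, a VMA with VM_KVM_PROTECTED gets PAGE_NONE protections, so ordinary userspace loads and stores (and plain uaccess from the kernel) fault, while GUP callers that pass FOLL_KVM, i.e. KVM itself, can still pin the pages. A condensed illustration of that access path follows; the helper name is made up for the example and it assumes the requested range stays within one page.

#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/string.h>

/* Illustrative only; not part of the patch. */
static int read_one_protected_page(unsigned long hva, void *buf, int len)
{
	struct page *page;
	long npages;

	/* Plain uaccess to a VM_KVM_PROTECTED VMA fails: its PTEs are PAGE_NONE. */
	npages = get_user_pages_unlocked(hva, 1, &page, FOLL_KVM);
	if (npages != 1)
		return -EFAULT;

	memcpy(buf, page_address(page) + offset_in_page(hva), len);
	put_page(page);
	return 0;
}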
Shutemov" X-Patchwork-Id: 11845767 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 09B5B14B4 for ; Tue, 20 Oct 2020 06:19:30 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id B9382222C8 for ; Tue, 20 Oct 2020 06:19:29 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=shutemov-name.20150623.gappssmtp.com header.i=@shutemov-name.20150623.gappssmtp.com header.b="QOwj6pUP" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org B9382222C8 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=shutemov.name Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 4B9876B0075; Tue, 20 Oct 2020 02:19:13 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 46D006B0078; Tue, 20 Oct 2020 02:19:13 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 2BD9D6B007D; Tue, 20 Oct 2020 02:19:13 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0142.hostedemail.com [216.40.44.142]) by kanga.kvack.org (Postfix) with ESMTP id E4C026B0078 for ; Tue, 20 Oct 2020 02:19:12 -0400 (EDT) Received: from smtpin25.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 902373626 for ; Tue, 20 Oct 2020 06:19:12 +0000 (UTC) X-FDA: 77391301344.25.ring22_53038c52723d Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin25.hostedemail.com (Postfix) with ESMTP id 6AF141804E3A0 for ; Tue, 20 Oct 2020 06:19:12 +0000 (UTC) X-Spam-Summary: 1,0,0,0747a0bc0e012e09,d41d8cd98f00b204,kirill@shutemov.name,,RULES_HIT:41:355:379:541:960:973:988:989:1260:1311:1314:1345:1359:1437:1515:1534:1541:1711:1730:1747:1777:1792:1981:2194:2199:2393:2559:2562:3138:3139:3140:3141:3142:3352:3865:4321:5007:6261:6653:6742:10004:11026:11657:11658:11914:12043:12048:12296:12297:12438:12517:12519:12555:12895:13069:13161:13229:13311:13357:13894:14181:14384:14721:21080:21444:21451:21627:21795:30051:30054,0,RBL:209.85.208.195:@shutemov.name:.lbl8.mailshell.net-62.8.84.100 66.201.201.201;04yg9x3znmme5wzxa4bw6ib3kx5qrocpax7djib1skd5goh7iumb19mazjc5ro7.jre35zeyfk6pbtxb7oczj9ags5qjejsxw4gszjotitp89cnb3jxw8qx5graubdt.s-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fn,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:23,LUA_SUMMARY:none X-HE-Tag: ring22_53038c52723d X-Filterd-Recvd-Size: 4911 Received: from mail-lj1-f195.google.com (mail-lj1-f195.google.com [209.85.208.195]) by imf40.hostedemail.com (Postfix) with ESMTP for ; Tue, 20 Oct 2020 06:19:11 +0000 (UTC) Received: by mail-lj1-f195.google.com with SMTP id h20so732559lji.9 for ; Mon, 19 Oct 2020 23:19:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=shutemov-name.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Y7hM+tUDAeAq4RAxUld5ugj5h0NrD2Du8CkrLpwlkag=; b=QOwj6pUPgpTDZ6ktUvy4xykf+ox4KixOkoZHuQqKEafWwwbuwhWO8H15O9F2S74Kyb 
From: "Kirill A. Shutemov"
Subject: [RFCv2 10/16] KVM: x86: Use GUP for page walk instead of __get_user()
Date: Tue, 20 Oct 2020 09:18:53 +0300
Message-Id: <20201020061859.18385-11-kirill.shutemov@linux.intel.com>
In-Reply-To: <20201020061859.18385-1-kirill.shutemov@linux.intel.com>

The user mapping doesn't provide access to protected guest memory, so the guest page-table walker cannot use __get_user() there; go through the GUP-backed copy_from_guest() instead.

Signed-off-by: Kirill A.
Shutemov --- arch/x86/kvm/mmu/paging_tmpl.h | 10 ++++++++-- 1 file changed, 8 insertions(+), 2 deletions(-) diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h index 4dd6b1e5b8cf..258a6361b9b2 100644 --- a/arch/x86/kvm/mmu/paging_tmpl.h +++ b/arch/x86/kvm/mmu/paging_tmpl.h @@ -397,8 +397,14 @@ static int FNAME(walk_addr_generic)(struct guest_walker *walker, goto error; ptep_user = (pt_element_t __user *)((void *)host_addr + offset); - if (unlikely(__get_user(pte, ptep_user))) - goto error; + if (vcpu->kvm->mem_protected) { + if (copy_from_guest(&pte, host_addr + offset, + sizeof(pte), true)) + goto error; + } else { + if (unlikely(__get_user(pte, ptep_user))) + goto error; + } walker->ptep_user[walker->level - 1] = ptep_user; trace_kvm_mmu_paging_element(pte, walker->level);
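With patches 08-10 in place, every KVM access to guest memory funnels through copy_from_guest()/copy_to_guest(), and kvm->mem_protected selects between plain uaccess and the GUP-based path. A condensed sketch of what a read amounts to; the wrapper itself is hypothetical, but the callees and their signatures are the ones introduced in the diffs above.

#include <linux/kvm_host.h>

/* Hypothetical wrapper, for illustration only. */
static int read_guest_u64(struct kvm *kvm, gfn_t gfn, int offset, u64 *val)
{
	struct kvm_memory_slot *slot = gfn_to_memslot(kvm, gfn);
	unsigned long hva = gfn_to_hva_memslot_prot(slot, gfn, NULL);

	if (kvm_is_error_hva(hva))
		return -EFAULT;

	/* Falls back to __copy_from_user() when the VM is not protected. */
	return copy_from_guest(val, hva + offset, sizeof(*val),
			       kvm->mem_protected);
}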
From patchwork Tue Oct 20 06:18:54 2020
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 11845771
From: "Kirill A. Shutemov" X-Google-Original-From: "Kirill A.
Shutemov" Received: by box.localdomain (Postfix, from userid 1000) id 136DB102F6A; Tue, 20 Oct 2020 09:19:02 +0300 (+03) To: Dave Hansen , Andy Lutomirski , Peter Zijlstra , Paolo Bonzini , Sean Christopherson , Vitaly Kuznetsov , Wanpeng Li , Jim Mattson , Joerg Roedel Cc: David Rientjes , Andrea Arcangeli , Kees Cook , Will Drewry , "Edgecombe, Rick P" , "Kleen, Andi" , Liran Alon , Mike Rapoport , x86@kernel.org, kvm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov" Subject: [RFCv2 11/16] KVM: Protected memory extension Date: Tue, 20 Oct 2020 09:18:54 +0300 Message-Id: <20201020061859.18385-12-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20201020061859.18385-1-kirill.shutemov@linux.intel.com> References: <20201020061859.18385-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Add infrastructure that handles protected memory extension. Arch-specific code has to provide hypercalls and define non-zero VM_KVM_PROTECTED. Signed-off-by: Kirill A. Shutemov --- include/linux/kvm_host.h | 4 +++ virt/kvm/Kconfig | 3 ++ virt/kvm/kvm_main.c | 68 ++++++++++++++++++++++++++++++++++++++ virt/lib/Makefile | 1 + virt/lib/mem_protected.c | 71 ++++++++++++++++++++++++++++++++++++++++ 5 files changed, 147 insertions(+) create mode 100644 virt/lib/mem_protected.c diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 380a64613880..6655e8da4555 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -701,6 +701,10 @@ void kvm_arch_flush_shadow_all(struct kvm *kvm); void kvm_arch_flush_shadow_memslot(struct kvm *kvm, struct kvm_memory_slot *slot); +int kvm_protect_all_memory(struct kvm *kvm); +int kvm_protect_memory(struct kvm *kvm, + unsigned long gfn, unsigned long npages, bool protect); + int gfn_to_page_many_atomic(struct kvm_memory_slot *slot, gfn_t gfn, struct page **pages, int nr_pages); diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig index 1c37ccd5d402..50d7422386aa 100644 --- a/virt/kvm/Kconfig +++ b/virt/kvm/Kconfig @@ -63,3 +63,6 @@ config HAVE_KVM_NO_POLL config KVM_XFER_TO_GUEST_WORK bool + +config HAVE_KVM_PROTECTED_MEMORY + bool diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 125db5a73e10..4c008c7b4974 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -154,6 +154,8 @@ static void kvm_uevent_notify_change(unsigned int type, struct kvm *kvm); static unsigned long long kvm_createvm_count; static unsigned long long kvm_active_vms; +int __kvm_protect_memory(unsigned long start, unsigned long end, bool protect); + __weak void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm, unsigned long start, unsigned long end) { @@ -1371,6 +1373,15 @@ int __kvm_set_memory_region(struct kvm *kvm, if (r) goto out_bitmap; + if (IS_ENABLED(CONFIG_HAVE_KVM_PROTECTED_MEMORY) && + mem->memory_size && kvm->mem_protected) { + r = __kvm_protect_memory(new.userspace_addr, + new.userspace_addr + new.npages * PAGE_SIZE, + true); + if (r) + goto out_bitmap; + } + if (old.dirty_bitmap && !new.dirty_bitmap) kvm_destroy_dirty_bitmap(&old); return 0; @@ -2720,6 +2731,63 @@ void kvm_vcpu_mark_page_dirty(struct kvm_vcpu *vcpu, gfn_t gfn) } EXPORT_SYMBOL_GPL(kvm_vcpu_mark_page_dirty); +int kvm_protect_memory(struct kvm *kvm, + unsigned long gfn, unsigned long npages, bool protect) +{ + struct kvm_memory_slot *memslot; + unsigned long 
start, end; + gfn_t numpages; + + if (!IS_ENABLED(CONFIG_HAVE_KVM_PROTECTED_MEMORY)) + return -KVM_ENOSYS; + + if (!npages) + return 0; + + memslot = gfn_to_memslot(kvm, gfn); + /* Not backed by memory. It's okay. */ + if (!memslot) + return 0; + + start = gfn_to_hva_many(memslot, gfn, &numpages); + end = start + npages * PAGE_SIZE; + + /* XXX: Share range across memory slots? */ + if (WARN_ON(numpages < npages)) + return -EINVAL; + + return __kvm_protect_memory(start, end, protect); +} +EXPORT_SYMBOL_GPL(kvm_protect_memory); + +int kvm_protect_all_memory(struct kvm *kvm) +{ + struct kvm_memslots *slots; + struct kvm_memory_slot *memslot; + unsigned long start, end; + int i, ret = 0;; + + if (!IS_ENABLED(CONFIG_HAVE_KVM_PROTECTED_MEMORY)) + return -KVM_ENOSYS; + + mutex_lock(&kvm->slots_lock); + kvm->mem_protected = true; + for (i = 0; i < KVM_ADDRESS_SPACE_NUM; i++) { + slots = __kvm_memslots(kvm, i); + kvm_for_each_memslot(memslot, slots) { + start = memslot->userspace_addr; + end = start + memslot->npages * PAGE_SIZE; + ret = __kvm_protect_memory(start, end, true); + if (ret) + goto out; + } + } +out: + mutex_unlock(&kvm->slots_lock); + return ret; +} +EXPORT_SYMBOL_GPL(kvm_protect_all_memory); + void kvm_sigset_activate(struct kvm_vcpu *vcpu) { if (!vcpu->sigset_active) diff --git a/virt/lib/Makefile b/virt/lib/Makefile index bd7f9a78bb6b..d6e50510801f 100644 --- a/virt/lib/Makefile +++ b/virt/lib/Makefile @@ -1,2 +1,3 @@ # SPDX-License-Identifier: GPL-2.0-only obj-$(CONFIG_IRQ_BYPASS_MANAGER) += irqbypass.o +obj-$(CONFIG_HAVE_KVM_PROTECTED_MEMORY) += mem_protected.o diff --git a/virt/lib/mem_protected.c b/virt/lib/mem_protected.c new file mode 100644 index 000000000000..0b01dd74f29c --- /dev/null +++ b/virt/lib/mem_protected.c @@ -0,0 +1,71 @@ +#include +#include +#include +#include +#include +#include + +int __kvm_protect_memory(unsigned long start, unsigned long end, bool protect) +{ + struct mm_struct *mm = current->mm; + struct vm_area_struct *vma, *prev; + int ret; + + if (mmap_write_lock_killable(mm)) + return -EINTR; + + ret = -ENOMEM; + vma = find_vma(current->mm, start); + if (!vma) + goto out; + + ret = -EINVAL; + if (vma->vm_start > start) + goto out; + + if (start > vma->vm_start) + prev = vma; + else + prev = vma->vm_prev; + + ret = 0; + while (true) { + unsigned long newflags, tmp; + + tmp = vma->vm_end; + if (tmp > end) + tmp = end; + + newflags = vma->vm_flags; + if (protect) + newflags |= VM_KVM_PROTECTED; + else + newflags &= ~VM_KVM_PROTECTED; + + /* The VMA has been handled as part of other memslot */ + if (newflags == vma->vm_flags) + goto next; + + ret = mprotect_fixup(vma, &prev, start, tmp, newflags); + if (ret) + goto out; + +next: + start = tmp; + if (start < prev->vm_end) + start = prev->vm_end; + + if (start >= end) + goto out; + + vma = prev->vm_next; + if (!vma || vma->vm_start != start) { + ret = -ENOMEM; + goto out; + } + } +out: + mmap_write_unlock(mm); + return ret; +} +EXPORT_SYMBOL_GPL(__kvm_protect_memory); From patchwork Tue Oct 20 06:18:55 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A . 
Shutemov" X-Patchwork-Id: 11845769 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 1F5CC16C0 for ; Tue, 20 Oct 2020 06:19:32 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id C2B53222C8 for ; Tue, 20 Oct 2020 06:19:31 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=shutemov-name.20150623.gappssmtp.com header.i=@shutemov-name.20150623.gappssmtp.com header.b="GSndWaHn" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org C2B53222C8 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=shutemov.name Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 835376B0078; Tue, 20 Oct 2020 02:19:13 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 577AE6B007E; Tue, 20 Oct 2020 02:19:13 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 3563C6B007B; Tue, 20 Oct 2020 02:19:13 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0106.hostedemail.com [216.40.44.106]) by kanga.kvack.org (Postfix) with ESMTP id EECC66B0075 for ; Tue, 20 Oct 2020 02:19:12 -0400 (EDT) Received: from smtpin27.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 91375181AEF10 for ; Tue, 20 Oct 2020 06:19:12 +0000 (UTC) X-FDA: 77391301344.27.feet00_5b0b7a42723d Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin27.hostedemail.com (Postfix) with ESMTP id 6A91A3D663 for ; Tue, 20 Oct 2020 06:19:12 +0000 (UTC) X-Spam-Summary: 1,0,0,3d7da8ba75c2c15c,d41d8cd98f00b204,kirill@shutemov.name,,RULES_HIT:41:355:379:541:960:973:988:989:1260:1311:1314:1345:1359:1437:1515:1535:1543:1711:1730:1747:1777:1792:2194:2199:2393:2559:2562:2693:3138:3139:3140:3141:3142:3354:3865:3866:3868:3873:4118:4250:4321:5007:6119:6261:6653:6742:7904:10004:11026:11473:11657:11658:11914:12043:12048:12296:12297:12438:12517:12519:12555:12895:12986:13894:14096:14181:14721:21080:21444:21451:21627:30054:30079,0,RBL:209.85.167.66:@shutemov.name:.lbl8.mailshell.net-66.201.201.201 62.8.0.100;04ygg5ydthgyyuobgsnji86wy7szmocn9imwa4inihi7hz4gj46wc9wjrzu3ymt.ydeq1mphx1iz69drt1xejjymy1g63fzenopmnts6tswiycdc3r64c3pin47qwtz.n-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fn,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:24,LUA_SUMMARY:none X-HE-Tag: feet00_5b0b7a42723d X-Filterd-Recvd-Size: 7534 Received: from mail-lf1-f66.google.com (mail-lf1-f66.google.com [209.85.167.66]) by imf23.hostedemail.com (Postfix) with ESMTP for ; Tue, 20 Oct 2020 06:19:11 +0000 (UTC) Received: by mail-lf1-f66.google.com with SMTP id r127so641758lff.12 for ; Mon, 19 Oct 2020 23:19:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=shutemov-name.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=4qhCOwzKNOI3O5QHd++Ps6ruP4fb4SeRwyQf+CUv0m0=; b=GSndWaHnIpcamiRyrAj/fiORS32HM6qZwJJRgw4zwfhl+YwtSi5jQEtTN8kpJEO0Fb 
From: "Kirill A. Shutemov"
Subject: [RFCv2 12/16] KVM: x86: Enabled protected memory extension
Date: Tue, 20 Oct 2020 09:18:55 +0300
Message-Id: <20201020061859.18385-13-kirill.shutemov@linux.intel.com>
In-Reply-To: <20201020061859.18385-1-kirill.shutemov@linux.intel.com>

Wire up hypercalls for the feature and define VM_KVM_PROTECTED.

Signed-off-by: Kirill A.
Shutemov --- arch/x86/Kconfig | 1 + arch/x86/kvm/Kconfig | 1 + arch/x86/kvm/cpuid.c | 3 ++- arch/x86/kvm/x86.c | 9 +++++++++ include/linux/mm.h | 6 ++++++ 5 files changed, 19 insertions(+), 1 deletion(-) diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index b22b95517437..0bcbdadb97d6 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -807,6 +807,7 @@ config KVM_GUEST select X86_HV_CALLBACK_VECTOR select X86_MEM_ENCRYPT_COMMON select SWIOTLB + select ARCH_USES_HIGH_VMA_FLAGS default y help This option enables various optimizations for running under the KVM diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig index fbd5bd7a945a..2ea77c2a8029 100644 --- a/arch/x86/kvm/Kconfig +++ b/arch/x86/kvm/Kconfig @@ -46,6 +46,7 @@ config KVM select KVM_GENERIC_DIRTYLOG_READ_PROTECT select KVM_VFIO select SRCU + select HAVE_KVM_PROTECTED_MEMORY help Support hosting fully virtualized guest machines using hardware virtualization extensions. You will need a fairly recent diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c index 3fd6eec202d7..eed33db032fb 100644 --- a/arch/x86/kvm/cpuid.c +++ b/arch/x86/kvm/cpuid.c @@ -746,7 +746,8 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function) (1 << KVM_FEATURE_PV_SEND_IPI) | (1 << KVM_FEATURE_POLL_CONTROL) | (1 << KVM_FEATURE_PV_SCHED_YIELD) | - (1 << KVM_FEATURE_ASYNC_PF_INT); + (1 << KVM_FEATURE_ASYNC_PF_INT) | + (1 << KVM_FEATURE_MEM_PROTECTED); if (sched_info_on()) entry->eax |= (1 << KVM_FEATURE_STEAL_TIME); diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index ce856e0ece84..e89ff39204f0 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -7752,6 +7752,15 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu) kvm_sched_yield(vcpu->kvm, a0); ret = 0; break; + case KVM_HC_ENABLE_MEM_PROTECTED: + ret = kvm_protect_all_memory(vcpu->kvm); + break; + case KVM_HC_MEM_SHARE: + ret = kvm_protect_memory(vcpu->kvm, a0, a1, false); + break; + case KVM_HC_MEM_UNSHARE: + ret = kvm_protect_memory(vcpu->kvm, a0, a1, true); + break; default: ret = -KVM_ENOSYS; break; diff --git a/include/linux/mm.h b/include/linux/mm.h index c8d8cdcbc425..ee274d27e764 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -304,11 +304,13 @@ extern unsigned int kobjsize(const void *objp); #define VM_HIGH_ARCH_BIT_2 34 /* bit only usable on 64-bit architectures */ #define VM_HIGH_ARCH_BIT_3 35 /* bit only usable on 64-bit architectures */ #define VM_HIGH_ARCH_BIT_4 36 /* bit only usable on 64-bit architectures */ +#define VM_HIGH_ARCH_BIT_5 37 /* bit only usable on 64-bit architectures */ #define VM_HIGH_ARCH_0 BIT(VM_HIGH_ARCH_BIT_0) #define VM_HIGH_ARCH_1 BIT(VM_HIGH_ARCH_BIT_1) #define VM_HIGH_ARCH_2 BIT(VM_HIGH_ARCH_BIT_2) #define VM_HIGH_ARCH_3 BIT(VM_HIGH_ARCH_BIT_3) #define VM_HIGH_ARCH_4 BIT(VM_HIGH_ARCH_BIT_4) +#define VM_HIGH_ARCH_5 BIT(VM_HIGH_ARCH_BIT_5) #endif /* CONFIG_ARCH_USES_HIGH_VMA_FLAGS */ #ifdef CONFIG_ARCH_HAS_PKEYS @@ -342,7 +344,11 @@ extern unsigned int kobjsize(const void *objp); # define VM_MAPPED_COPY VM_ARCH_1 /* T if mapped copy of data (nommu mmap) */ #endif +#if defined(CONFIG_X86_64) && defined(CONFIG_KVM) +#define VM_KVM_PROTECTED VM_HIGH_ARCH_5 +#else #define VM_KVM_PROTECTED 0 +#endif #ifndef VM_GROWSUP # define VM_GROWSUP VM_NONE
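From the guest's side the interface is three hypercalls: KVM_HC_ENABLE_MEM_PROTECTED converts the whole VM, and KVM_HC_MEM_SHARE/KVM_HC_MEM_UNSHARE toggle individual ranges, with a0 taken as the first gfn and a1 as the number of pages (that is how the handler above passes them to kvm_protect_memory()). A guest-side sketch follows; the wrapper names are hypothetical, only the hypercall numbers and the argument convention come from the patch.

#include <linux/kvm_para.h>
#include <linux/printk.h>
#include <linux/types.h>
#include <asm/page.h>

/* Hypothetical guest-side wrappers; not part of the series. */
static void kvm_mem_protection_enable(void)
{
	if (kvm_hypercall0(KVM_HC_ENABLE_MEM_PROTECTED))
		pr_warn("KVM: host did not enable memory protection\n");
}

static long kvm_mem_share(phys_addr_t start, unsigned long npages)
{
	/* Make the range accessible to the host again, e.g. for DMA buffers. */
	return kvm_hypercall2(KVM_HC_MEM_SHARE, start >> PAGE_SHIFT, npages);
}

static long kvm_mem_unshare(phys_addr_t start, unsigned long npages)
{
	return kvm_hypercall2(KVM_HC_MEM_UNSHARE, start >> PAGE_SHIFT, npages);
}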
Shutemov" X-Patchwork-Id: 11845773 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 59D8B14B4 for ; Tue, 20 Oct 2020 06:19:36 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 0406F222E8 for ; Tue, 20 Oct 2020 06:19:35 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=shutemov-name.20150623.gappssmtp.com header.i=@shutemov-name.20150623.gappssmtp.com header.b="OvuEbAZW" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 0406F222E8 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=shutemov.name Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 49DC46B007B; Tue, 20 Oct 2020 02:19:14 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 3D2B66B0080; Tue, 20 Oct 2020 02:19:14 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 12F036B007B; Tue, 20 Oct 2020 02:19:14 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0054.hostedemail.com [216.40.44.54]) by kanga.kvack.org (Postfix) with ESMTP id CCAA86B007D for ; Tue, 20 Oct 2020 02:19:13 -0400 (EDT) Received: from smtpin06.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 770C5180AD811 for ; Tue, 20 Oct 2020 06:19:13 +0000 (UTC) X-FDA: 77391301386.06.ice62_57070d52723d Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin06.hostedemail.com (Postfix) with ESMTP id 4935A1021D0B9 for ; Tue, 20 Oct 2020 06:19:13 +0000 (UTC) X-Spam-Summary: 1,0,0,1cbec636085c3c17,d41d8cd98f00b204,kirill@shutemov.name,,RULES_HIT:1:2:41:355:379:541:960:966:968:973:988:989:1260:1311:1314:1345:1359:1431:1437:1515:1605:1730:1747:1777:1792:2196:2199:2393:2559:2562:2897:2898:2901:2914:3138:3139:3140:3141:3142:3308:3867:3868:3872:4049:4321:4385:4605:5007:6119:6261:6653:6742:7903:9707:10004:11026:11473:11658:11914:12043:12048:12291:12296:12297:12438:12517:12519:12555:12683:12895:12986:13894:13972:14096:14110:21080:21444:21451:21627:21990:30051:30054:30070,0,RBL:209.85.167.67:@shutemov.name:.lbl8.mailshell.net-62.8.0.100 66.201.201.201;04yf5dya6tws8rxzitwiztbmutk76oc7odqrw8yfhwo5z8ipou69bf4k7dnhg7g.rbqjgb6eb3ocs654bt4cie1d66tzxtobhmnihf8f7zk5bw3aaswjh1ind8ch3z7.q-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fn,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:23,LUA_SUMMARY:none X-HE-Tag: ice62_57070d52723d X-Filterd-Recvd-Size: 10449 Received: from mail-lf1-f67.google.com (mail-lf1-f67.google.com [209.85.167.67]) by imf21.hostedemail.com (Postfix) with ESMTP for ; Tue, 20 Oct 2020 06:19:12 +0000 (UTC) Received: by mail-lf1-f67.google.com with SMTP id a7so653911lfk.9 for ; Mon, 19 Oct 2020 23:19:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=shutemov-name.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=YqXoyWzrpggXifhp2ZZvkYQN13pzHcgcJE84Cdo/0fA=; 
From: "Kirill A. Shutemov"
Subject: [RFCv2 13/16] KVM: Rework copy_to/from_guest() to avoid direct mapping
Date: Tue, 20 Oct 2020 09:18:56 +0300
Message-Id: <20201020061859.18385-14-kirill.shutemov@linux.intel.com>
In-Reply-To: <20201020061859.18385-1-kirill.shutemov@linux.intel.com>
References: <20201020061859.18385-1-kirill.shutemov@linux.intel.com>

We are going to unmap guest pages from the direct mapping and cannot rely on it for guest memory access. Use a temporary kmap_atomic()-style mapping to access guest memory.

Signed-off-by: Kirill A.
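[Editor's illustration] A minimal sketch of the access pattern this patch introduces in copy_from_guest(): pin the page, map it through the temporary per-CPU mapping instead of the direct map, copy, then unmap. copy_chunk_from_guest() is a hypothetical helper written for this note (it assumes the chunk does not cross a page boundary); kvm_map_page_atomic()/kvm_unmap_page_atomic() and FOLL_KVM are the interfaces added by this series.

	static int copy_chunk_from_guest(void *dst, unsigned long hva, int len)
	{
		int offset = offset_in_page(hva);
		struct page *page;
		void *vaddr;

		/* Pin the guest page; its direct mapping may already be gone. */
		if (get_user_pages_unlocked(hva, 1, &page, FOLL_KVM) != 1)
			return -EFAULT;

		/* Temporary per-CPU kernel mapping instead of page_address(). */
		vaddr = kvm_map_page_atomic(page);
		memcpy(dst, vaddr + offset, len);
		kvm_unmap_page_atomic(vaddr);	/* clears the PTE and flushes the TLB */

		put_page(page);
		return 0;
	}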
Shutemov --- virt/kvm/kvm_main.c | 27 ++++++++++- virt/lib/mem_protected.c | 101 +++++++++++++++++++++++++++++++++++++++ 2 files changed, 126 insertions(+), 2 deletions(-) diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 4c008c7b4974..9b569b78874a 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -51,6 +51,7 @@ #include #include #include +#include #include #include @@ -154,6 +155,12 @@ static void kvm_uevent_notify_change(unsigned int type, struct kvm *kvm); static unsigned long long kvm_createvm_count; static unsigned long long kvm_active_vms; +void *kvm_map_page_atomic(struct page *page); +void kvm_unmap_page_atomic(void *vaddr); + +int kvm_init_protected_memory(void); +void kvm_exit_protected_memory(void); + int __kvm_protect_memory(unsigned long start, unsigned long end, bool protect); __weak void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm, @@ -2329,6 +2336,7 @@ int copy_from_guest(void *data, unsigned long hva, int len, bool protected) int offset = offset_in_page(hva); struct page *page; int npages, seg; + void *vaddr; if (!protected) return __copy_from_user(data, (void __user *)hva, len); @@ -2341,7 +2349,11 @@ int copy_from_guest(void *data, unsigned long hva, int len, bool protected) npages = get_user_pages_unlocked(hva, 1, &page, FOLL_KVM); if (npages != 1) return -EFAULT; - memcpy(data, page_address(page) + offset, seg); + + vaddr = kvm_map_page_atomic(page); + memcpy(data, vaddr + offset, seg); + kvm_unmap_page_atomic(vaddr); + put_page(page); len -= seg; hva += seg; @@ -2356,6 +2368,7 @@ int copy_to_guest(unsigned long hva, const void *data, int len, bool protected) int offset = offset_in_page(hva); struct page *page; int npages, seg; + void *vaddr; if (!protected) return __copy_to_user((void __user *)hva, data, len); @@ -2369,7 +2382,11 @@ int copy_to_guest(unsigned long hva, const void *data, int len, bool protected) FOLL_WRITE | FOLL_KVM); if (npages != 1) return -EFAULT; - memcpy(page_address(page) + offset, data, seg); + + vaddr = kvm_map_page_atomic(page); + memcpy(vaddr + offset, data, seg); + kvm_unmap_page_atomic(vaddr); + put_page(page); len -= seg; hva += seg; @@ -4945,6 +4962,10 @@ int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align, if (r) goto out_free; + if (IS_ENABLED(CONFIG_HAVE_KVM_PROTECTED_MEMORY) && + kvm_init_protected_memory()) + goto out_unreg; + kvm_chardev_ops.owner = module; kvm_vm_fops.owner = module; kvm_vcpu_fops.owner = module; @@ -4968,6 +4989,7 @@ int kvm_init(void *opaque, unsigned vcpu_size, unsigned vcpu_align, return 0; out_unreg: + kvm_exit_protected_memory(); kvm_async_pf_deinit(); out_free: kmem_cache_destroy(kvm_vcpu_cache); @@ -4989,6 +5011,7 @@ EXPORT_SYMBOL_GPL(kvm_init); void kvm_exit(void) { + kvm_exit_protected_memory(); debugfs_remove_recursive(kvm_debugfs_dir); misc_deregister(&kvm_dev); kmem_cache_destroy(kvm_vcpu_cache); diff --git a/virt/lib/mem_protected.c b/virt/lib/mem_protected.c index 0b01dd74f29c..1dfe82534242 100644 --- a/virt/lib/mem_protected.c +++ b/virt/lib/mem_protected.c @@ -5,6 +5,100 @@ #include #include +static pte_t **guest_map_ptes; +static struct vm_struct *guest_map_area; + +void *kvm_map_page_atomic(struct page *page) +{ + pte_t *pte; + void *vaddr; + + preempt_disable(); + pte = guest_map_ptes[smp_processor_id()]; + vaddr = guest_map_area->addr + smp_processor_id() * PAGE_SIZE; + set_pte(pte, mk_pte(page, PAGE_KERNEL)); + return vaddr; +} +EXPORT_SYMBOL_GPL(kvm_map_page_atomic); + +void kvm_unmap_page_atomic(void *vaddr) +{ + pte_t *pte = 
guest_map_ptes[smp_processor_id()]; + set_pte(pte, __pte(0)); + flush_tlb_one_kernel((unsigned long)vaddr); + preempt_enable(); +} +EXPORT_SYMBOL_GPL(kvm_unmap_page_atomic); + +int kvm_init_protected_memory(void) +{ + guest_map_ptes = kmalloc_array(num_possible_cpus(), + sizeof(pte_t *), GFP_KERNEL); + if (!guest_map_ptes) + return -ENOMEM; + + guest_map_area = alloc_vm_area(PAGE_SIZE * num_possible_cpus(), + guest_map_ptes); + if (!guest_map_ptes) { + kfree(guest_map_ptes); + return -ENOMEM; + } + + return 0; +} +EXPORT_SYMBOL_GPL(kvm_init_protected_memory); + +void kvm_exit_protected_memory(void) +{ + if (guest_map_area) + free_vm_area(guest_map_area); + if (guest_map_ptes) + kfree(guest_map_ptes); +} +EXPORT_SYMBOL_GPL(kvm_exit_protected_memory); + +static int adjust_direct_mapping_pte_range(pmd_t *pmd, unsigned long addr, + unsigned long end, + struct mm_walk *walk) +{ + bool protect = (bool)walk->private; + pte_t *pte; + struct page *page; + + if (pmd_trans_huge(*pmd)) { + page = pmd_page(*pmd); + if (is_huge_zero_page(page)) + return 0; + VM_BUG_ON_PAGE(total_mapcount(page) != 1, page); + /* XXX: Would it fail with direct device assignment? */ + VM_BUG_ON_PAGE(page_count(page) != 1, page); + kernel_map_pages(page, HPAGE_PMD_NR, !protect); + return 0; + } + + pte = pte_offset_map(pmd, addr); + for (; addr != end; pte++, addr += PAGE_SIZE) { + pte_t entry = *pte; + + if (!pte_present(entry)) + continue; + + if (is_zero_pfn(pte_pfn(entry))) + continue; + + page = pte_page(entry); + + VM_BUG_ON_PAGE(page_mapcount(page) != 1, page); + kernel_map_pages(page, 1, !protect); + } + + return 0; +} + +static const struct mm_walk_ops adjust_direct_mapping_ops = { + .pmd_entry = adjust_direct_mapping_pte_range, +}; + int __kvm_protect_memory(unsigned long start, unsigned long end, bool protect) { struct mm_struct *mm = current->mm; @@ -50,6 +144,13 @@ int __kvm_protect_memory(unsigned long start, unsigned long end, bool protect) if (ret) goto out; + if (vma_is_anonymous(vma)) { + ret = walk_page_range_novma(mm, start, tmp, + &adjust_direct_mapping_ops, NULL, + (void *) protect); + if (ret) + goto out; + } next: start = tmp; if (start < prev->vm_end) From patchwork Tue Oct 20 06:18:57 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A . 
Shutemov" X-Patchwork-Id: 11845775 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id AE39B14B4 for ; Tue, 20 Oct 2020 06:19:38 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 5906722282 for ; Tue, 20 Oct 2020 06:19:38 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=shutemov-name.20150623.gappssmtp.com header.i=@shutemov-name.20150623.gappssmtp.com header.b="z9xv6k6s" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 5906722282 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=shutemov.name Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id BE28C6B007E; Tue, 20 Oct 2020 02:19:14 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id BBC296B0080; Tue, 20 Oct 2020 02:19:14 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 9E14C6B0081; Tue, 20 Oct 2020 02:19:14 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0007.hostedemail.com [216.40.44.7]) by kanga.kvack.org (Postfix) with ESMTP id 720096B007E for ; Tue, 20 Oct 2020 02:19:14 -0400 (EDT) Received: from smtpin15.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 0BE9C1EE6 for ; Tue, 20 Oct 2020 06:19:14 +0000 (UTC) X-FDA: 77391301428.15.boot77_1b035c52723d Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin15.hostedemail.com (Postfix) with ESMTP id D9B7A1814B0C1 for ; Tue, 20 Oct 2020 06:19:13 +0000 (UTC) X-Spam-Summary: 1,0,0,6ce45fac671b264c,d41d8cd98f00b204,kirill@shutemov.name,,RULES_HIT:4:41:355:379:541:960:973:988:989:1260:1311:1314:1345:1359:1437:1515:1605:1730:1747:1777:1792:1801:1981:2194:2198:2199:2200:2393:2559:2562:2638:2901:2914:3138:3139:3140:3141:3142:3865:3866:3867:3870:3871:3874:4250:4321:4470:4605:5007:6119:6261:6653:6742:7558:7903:8603:8660:10004:11026:11473:11657:11658:11914:12043:12048:12114:12296:12297:12438:12517:12519:12555:12660:12895:12986:13148:13161:13229:13230:13894:14096:21080:21324:21444:21451:21611:21627:21939:21987:21990:30012:30054:30070,0,RBL:209.85.208.194:@shutemov.name:.lbl8.mailshell.net-66.201.201.201 62.8.84.100;04yf36ewqezfdb7e1jpjuk4459up7opa8qjcurb8hbesxu1ed8q8hb1f1eoo5t1.bdysapjiz9wxekd9du1i3uw88qudr81pr5guds733or5o4kgdn9xfbtucbhzx13.k-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fn,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:23,LUA_SUMMARY:none X-HE-Tag: boot77_1b035c52723d X-Filterd-Recvd-Size: 16142 Received: from mail-lj1-f194.google.com (mail-lj1-f194.google.com [209.85.208.194]) by imf29.hostedemail.com (Postfix) with ESMTP for ; Tue, 20 Oct 2020 06:19:13 +0000 (UTC) Received: by mail-lj1-f194.google.com with SMTP id c21so718993ljn.13 for ; Mon, 19 Oct 2020 23:19:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=shutemov-name.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; 
From: "Kirill A. Shutemov"
Subject: [RFCv2 14/16] KVM: Handle protected memory in __kvm_map_gfn()/__kvm_unmap_gfn()
Date: Tue, 20 Oct 2020 09:18:57 +0300
Message-Id: <20201020061859.18385-15-kirill.shutemov@linux.intel.com>
In-Reply-To: <20201020061859.18385-1-kirill.shutemov@linux.intel.com>
References: <20201020061859.18385-1-kirill.shutemov@linux.intel.com>

We cannot access protected pages directly. Use ioremap() to create a temporary mapping of the page. The mapping is destroyed on __kvm_unmap_gfn(). The new interface gfn_to_pfn_memslot_protected() is used to detect if the page is protected. ioremap_cache_force() is a hack to bypass the IORES_MAP_SYSTEM_RAM check in the x86 ioremap code. We need a better solution.

Signed-off-by: Kirill A.
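[Editor's illustration] A rough usage sketch: after this patch a caller of the existing kvm_vcpu_map()/kvm_vcpu_unmap() pair goes through ioremap_cache_force()/iounmap() transparently when the gfn is backed by a protected page. read_guest_byte() is a made-up helper for this note, not part of the patch; it assumes only the kvm_vcpu_map()/kvm_vcpu_unmap() API already present in KVM.

	static int read_guest_byte(struct kvm_vcpu *vcpu, gfn_t gfn, u8 *val)
	{
		struct kvm_host_map map;
		int r;

		/* For a protected pfn this ends up in ioremap_cache_force(). */
		r = kvm_vcpu_map(vcpu, gfn, &map);
		if (r)
			return r;

		*val = *(u8 *)map.hva;

		/* __kvm_unmap_gfn() uses iounmap() when map.protected is set. */
		kvm_vcpu_unmap(vcpu, &map, false);
		return 0;
	}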
Shutemov --- arch/powerpc/kvm/book3s_64_mmu_hv.c | 2 +- arch/powerpc/kvm/book3s_64_mmu_radix.c | 2 +- arch/x86/include/asm/io.h | 2 + arch/x86/include/asm/pgtable_types.h | 1 + arch/x86/kvm/mmu/mmu.c | 6 ++- arch/x86/mm/ioremap.c | 16 ++++++-- include/linux/kvm_host.h | 3 +- include/linux/kvm_types.h | 1 + virt/kvm/kvm_main.c | 52 +++++++++++++++++++------- 9 files changed, 63 insertions(+), 22 deletions(-) diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c index 38ea396a23d6..8e06cd3f759c 100644 --- a/arch/powerpc/kvm/book3s_64_mmu_hv.c +++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c @@ -590,7 +590,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_vcpu *vcpu, } else { /* Call KVM generic code to do the slow-path check */ pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL, - writing, &write_ok); + writing, &write_ok, NULL); if (is_error_noslot_pfn(pfn)) return -EFAULT; page = NULL; diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c index 22a677b18695..6fd4e3f9b66a 100644 --- a/arch/powerpc/kvm/book3s_64_mmu_radix.c +++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c @@ -822,7 +822,7 @@ int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcpu, /* Call KVM generic code to do the slow-path check */ pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL, - writing, upgrade_p); + writing, upgrade_p, NULL); if (is_error_noslot_pfn(pfn)) return -EFAULT; page = NULL; diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h index c58d52fd7bf2..a3e1bfad1026 100644 --- a/arch/x86/include/asm/io.h +++ b/arch/x86/include/asm/io.h @@ -184,6 +184,8 @@ extern void __iomem *ioremap_uc(resource_size_t offset, unsigned long size); #define ioremap_uc ioremap_uc extern void __iomem *ioremap_cache(resource_size_t offset, unsigned long size); #define ioremap_cache ioremap_cache +extern void __iomem *ioremap_cache_force(resource_size_t offset, unsigned long size); +#define ioremap_cache_force ioremap_cache_force extern void __iomem *ioremap_prot(resource_size_t offset, unsigned long size, unsigned long prot_val); #define ioremap_prot ioremap_prot extern void __iomem *ioremap_encrypted(resource_size_t phys_addr, unsigned long size); diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h index 816b31c68550..4c16a9583786 100644 --- a/arch/x86/include/asm/pgtable_types.h +++ b/arch/x86/include/asm/pgtable_types.h @@ -147,6 +147,7 @@ enum page_cache_mode { _PAGE_CACHE_MODE_UC = 3, _PAGE_CACHE_MODE_WT = 4, _PAGE_CACHE_MODE_WP = 5, + _PAGE_CACHE_MODE_WB_FORCE = 6, _PAGE_CACHE_MODE_NUM = 8 }; diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 71aa3da2a0b7..162cb285b87b 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -4058,7 +4058,8 @@ static bool try_async_pf(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn, } async = false; - *pfn = __gfn_to_pfn_memslot(slot, gfn, false, &async, write, writable); + *pfn = __gfn_to_pfn_memslot(slot, gfn, false, &async, write, writable, + NULL); if (!async) return false; /* *pfn has correct page already */ @@ -4072,7 +4073,8 @@ static bool try_async_pf(struct kvm_vcpu *vcpu, bool prefault, gfn_t gfn, return true; } - *pfn = __gfn_to_pfn_memslot(slot, gfn, false, NULL, write, writable); + *pfn = __gfn_to_pfn_memslot(slot, gfn, false, NULL, write, writable, + NULL); return false; } diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c index 9e5ccc56f8e0..4409785e294c 100644 --- a/arch/x86/mm/ioremap.c +++ b/arch/x86/mm/ioremap.c @@ -202,9 +202,12 
@@ __ioremap_caller(resource_size_t phys_addr, unsigned long size, __ioremap_check_mem(phys_addr, size, &io_desc); /* - * Don't allow anybody to remap normal RAM that we're using.. + * Don't allow anybody to remap normal RAM that we're using, unless + * _PAGE_CACHE_MODE_WB_FORCE is used. */ - if (io_desc.flags & IORES_MAP_SYSTEM_RAM) { + if (pcm == _PAGE_CACHE_MODE_WB_FORCE) { + pcm = _PAGE_CACHE_MODE_WB; + } else if (io_desc.flags & IORES_MAP_SYSTEM_RAM) { WARN_ONCE(1, "ioremap on RAM at %pa - %pa\n", &phys_addr, &last_addr); return NULL; @@ -419,6 +422,13 @@ void __iomem *ioremap_cache(resource_size_t phys_addr, unsigned long size) } EXPORT_SYMBOL(ioremap_cache); +void __iomem *ioremap_cache_force(resource_size_t phys_addr, unsigned long size) +{ + return __ioremap_caller(phys_addr, size, _PAGE_CACHE_MODE_WB_FORCE, + __builtin_return_address(0), false); +} +EXPORT_SYMBOL(ioremap_cache_force); + void __iomem *ioremap_prot(resource_size_t phys_addr, unsigned long size, unsigned long prot_val) { @@ -467,7 +477,7 @@ void iounmap(volatile void __iomem *addr) p = find_vm_area((void __force *)addr); if (!p) { - printk(KERN_ERR "iounmap: bad address %p\n", addr); + printk(KERN_ERR "iounmap: bad address %px\n", addr); dump_stack(); return; } diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 6655e8da4555..0d5f3885747b 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -238,6 +238,7 @@ struct kvm_host_map { void *hva; kvm_pfn_t pfn; kvm_pfn_t gfn; + bool protected; }; /* @@ -725,7 +726,7 @@ kvm_pfn_t gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn); kvm_pfn_t gfn_to_pfn_memslot_atomic(struct kvm_memory_slot *slot, gfn_t gfn); kvm_pfn_t __gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn, bool atomic, bool *async, bool write_fault, - bool *writable); + bool *writable, bool *protected); void kvm_release_pfn_clean(kvm_pfn_t pfn); void kvm_release_pfn_dirty(kvm_pfn_t pfn); diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h index a7580f69dda0..0a8c6426b4f4 100644 --- a/include/linux/kvm_types.h +++ b/include/linux/kvm_types.h @@ -58,6 +58,7 @@ struct gfn_to_pfn_cache { gfn_t gfn; kvm_pfn_t pfn; bool dirty; + bool protected; }; #ifdef KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 9b569b78874a..7c2c764c28c5 100644 --- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -1852,9 +1852,10 @@ static bool hva_to_pfn_fast(unsigned long addr, bool write_fault, * 1 indicates success, -errno is returned if error is detected. */ static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault, - bool *writable, kvm_pfn_t *pfn) + bool *writable, bool *protected, kvm_pfn_t *pfn) { unsigned int flags = FOLL_HWPOISON | FOLL_KVM; + struct vm_area_struct *vma; struct page *page; int npages = 0; @@ -1868,9 +1869,15 @@ static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault, if (async) flags |= FOLL_NOWAIT; - npages = get_user_pages_unlocked(addr, 1, &page, flags); - if (npages != 1) + mmap_read_lock(current->mm); + npages = get_user_pages(addr, 1, flags, &page, &vma); + if (npages != 1) { + mmap_read_unlock(current->mm); return npages; + } + if (protected) + *protected = vma_is_kvm_protected(vma); + mmap_read_unlock(current->mm); /* map read fault as writable if possible */ if (unlikely(!write_fault) && writable) { @@ -1961,7 +1968,7 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma, * whether the mapping is writable. 
*/ static kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async, - bool write_fault, bool *writable) + bool write_fault, bool *writable, bool *protected) { struct vm_area_struct *vma; kvm_pfn_t pfn = 0; @@ -1976,7 +1983,8 @@ static kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async, if (atomic) return KVM_PFN_ERR_FAULT; - npages = hva_to_pfn_slow(addr, async, write_fault, writable, &pfn); + npages = hva_to_pfn_slow(addr, async, write_fault, writable, protected, + &pfn); if (npages == 1) return pfn; @@ -2010,7 +2018,7 @@ static kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async, kvm_pfn_t __gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn, bool atomic, bool *async, bool write_fault, - bool *writable) + bool *writable, bool *protected) { unsigned long addr = __gfn_to_hva_many(slot, gfn, NULL, write_fault); @@ -2033,7 +2041,7 @@ kvm_pfn_t __gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn, } return hva_to_pfn(addr, atomic, async, write_fault, - writable); + writable, protected); } EXPORT_SYMBOL_GPL(__gfn_to_pfn_memslot); @@ -2041,19 +2049,26 @@ kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault, bool *writable) { return __gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn, false, NULL, - write_fault, writable); + write_fault, writable, NULL); } EXPORT_SYMBOL_GPL(gfn_to_pfn_prot); kvm_pfn_t gfn_to_pfn_memslot(struct kvm_memory_slot *slot, gfn_t gfn) { - return __gfn_to_pfn_memslot(slot, gfn, false, NULL, true, NULL); + return __gfn_to_pfn_memslot(slot, gfn, false, NULL, true, NULL, NULL); } EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot); +static kvm_pfn_t gfn_to_pfn_memslot_protected(struct kvm_memory_slot *slot, + gfn_t gfn, bool *protected) +{ + return __gfn_to_pfn_memslot(slot, gfn, false, NULL, true, NULL, + protected); +} + kvm_pfn_t gfn_to_pfn_memslot_atomic(struct kvm_memory_slot *slot, gfn_t gfn) { - return __gfn_to_pfn_memslot(slot, gfn, true, NULL, true, NULL); + return __gfn_to_pfn_memslot(slot, gfn, true, NULL, true, NULL, NULL); } EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot_atomic); @@ -2134,7 +2149,7 @@ static void kvm_cache_gfn_to_pfn(struct kvm_memory_slot *slot, gfn_t gfn, { kvm_release_pfn(cache->pfn, cache->dirty, cache); - cache->pfn = gfn_to_pfn_memslot(slot, gfn); + cache->pfn = gfn_to_pfn_memslot_protected(slot, gfn, &cache->protected); cache->gfn = gfn; cache->dirty = false; cache->generation = gen; @@ -2149,6 +2164,7 @@ static int __kvm_map_gfn(struct kvm_memslots *slots, gfn_t gfn, void *hva = NULL; struct page *page = KVM_UNMAPPED_PAGE; struct kvm_memory_slot *slot = __gfn_to_memslot(slots, gfn); + bool protected; u64 gen = slots->generation; if (!map) @@ -2162,15 +2178,20 @@ static int __kvm_map_gfn(struct kvm_memslots *slots, gfn_t gfn, kvm_cache_gfn_to_pfn(slot, gfn, cache, gen); } pfn = cache->pfn; + protected = cache->protected; } else { if (atomic) return -EAGAIN; - pfn = gfn_to_pfn_memslot(slot, gfn); + pfn = gfn_to_pfn_memslot_protected(slot, gfn, &protected); } if (is_error_noslot_pfn(pfn)) return -EINVAL; - if (pfn_valid(pfn)) { + if (protected) { + if (atomic) + return -EAGAIN; + hva = ioremap_cache_force(pfn_to_hpa(pfn), PAGE_SIZE); + } else if (pfn_valid(pfn)) { page = pfn_to_page(pfn); if (atomic) hva = kmap_atomic(page); @@ -2191,6 +2212,7 @@ static int __kvm_map_gfn(struct kvm_memslots *slots, gfn_t gfn, map->hva = hva; map->pfn = pfn; map->gfn = gfn; + map->protected = protected; return 0; } @@ -2221,7 +2243,9 @@ static void __kvm_unmap_gfn(struct kvm_memory_slot *memslot, if (!map->hva) 
return; - if (map->page != KVM_UNMAPPED_PAGE) { + if (map->protected) { + iounmap(map->hva); + } else if (map->page != KVM_UNMAPPED_PAGE) { if (atomic) kunmap_atomic(map->hva); else
From patchwork Tue Oct 20 06:18:58 2020 X-Patchwork-Submitter: "Kirill A. Shutemov" X-Patchwork-Id: 11845777
From: "Kirill A. Shutemov"
Subject: [RFCv2 15/16] KVM: Unmap protected pages from direct mapping
Date: Tue, 20 Oct 2020 09:18:58 +0300
Message-Id: <20201020061859.18385-16-kirill.shutemov@linux.intel.com>
In-Reply-To: <20201020061859.18385-1-kirill.shutemov@linux.intel.com>
References: <20201020061859.18385-1-kirill.shutemov@linux.intel.com>

If the protected memory feature is enabled, unmap guest memory from the kernel's direct mapping. Migration and KSM are disabled for protected memory as they would require special treatment.

Signed-off-by: Kirill A.
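[Editor's illustration] To make the mechanism concrete, the sketch below shows what kvm_unmap_page()/kvm_map_page() from this patch boil down to for a single page; the demo_* wrappers are invented for this note. When the page is given back, it is scrubbed through the temporary per-CPU mapping from the earlier patch (the direct mapping is still absent at that point) and only then re-added to the direct map with kernel_map_pages().

	static void demo_remove_from_direct_map(struct page *page)
	{
		/* Stray kernel accesses through the linear map now fault. */
		kernel_map_pages(page, 1, 0);
	}

	static void demo_return_to_direct_map(struct page *page)
	{
		void *p = kvm_map_page_atomic(page);	/* direct map still unmapped */

		memset(p, 0, PAGE_SIZE);		/* don't leak old guest contents */
		kvm_unmap_page_atomic(p);

		kernel_map_pages(page, 1, 1);		/* restore the direct mapping */
	}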
Shutemov --- include/linux/mm.h | 3 +++ mm/huge_memory.c | 8 ++++++++ mm/ksm.c | 2 ++ mm/memory.c | 12 ++++++++++++ mm/rmap.c | 4 ++++ virt/lib/mem_protected.c | 21 +++++++++++++++++++++ 6 files changed, 50 insertions(+) diff --git a/include/linux/mm.h b/include/linux/mm.h index ee274d27e764..74efc51e63f0 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -671,6 +671,9 @@ static inline bool vma_is_kvm_protected(struct vm_area_struct *vma) return vma->vm_flags & VM_KVM_PROTECTED; } +void kvm_map_page(struct page *page, int nr_pages); +void kvm_unmap_page(struct page *page, int nr_pages); + #ifdef CONFIG_SHMEM /* * The vma_is_shmem is not inline because it is used only by slow diff --git a/mm/huge_memory.c b/mm/huge_memory.c index ec8cf9a40cfd..40974656cb43 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -627,6 +627,10 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf, spin_unlock(vmf->ptl); count_vm_event(THP_FAULT_ALLOC); count_memcg_event_mm(vma->vm_mm, THP_FAULT_ALLOC); + + /* Unmap page from direct mapping */ + if (vma_is_kvm_protected(vma)) + kvm_unmap_page(page, HPAGE_PMD_NR); } return 0; @@ -1689,6 +1693,10 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma, page_remove_rmap(page, true); VM_BUG_ON_PAGE(page_mapcount(page) < 0, page); VM_BUG_ON_PAGE(!PageHead(page), page); + + /* Map the page back to the direct mapping */ + if (vma_is_kvm_protected(vma)) + kvm_map_page(page, HPAGE_PMD_NR); } else if (thp_migration_supported()) { swp_entry_t entry; diff --git a/mm/ksm.c b/mm/ksm.c index 9afccc36dbd2..c720e271448f 100644 --- a/mm/ksm.c +++ b/mm/ksm.c @@ -528,6 +528,8 @@ static struct vm_area_struct *find_mergeable_vma(struct mm_struct *mm, return NULL; if (!(vma->vm_flags & VM_MERGEABLE) || !vma->anon_vma) return NULL; + if (vma_is_kvm_protected(vma)) + return NULL; return vma; } diff --git a/mm/memory.c b/mm/memory.c index 2c9756b4e52f..e28bd5f902a7 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -1245,6 +1245,11 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb, likely(!(vma->vm_flags & VM_SEQ_READ))) mark_page_accessed(page); } + + /* Map the page back to the direct mapping */ + if (vma_is_anonymous(vma) && vma_is_kvm_protected(vma)) + kvm_map_page(page, 1); + rss[mm_counter(page)]--; page_remove_rmap(page, false); if (unlikely(page_mapcount(page) < 0)) @@ -3466,6 +3471,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf) struct page *page; vm_fault_t ret = 0; pte_t entry; + bool set = false; /* File mapping without ->vm_ops ? 
*/ if (vma->vm_flags & VM_SHARED) @@ -3554,6 +3560,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf) inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES); page_add_new_anon_rmap(page, vma, vmf->address, false); lru_cache_add_inactive_or_unevictable(page, vma); + set = true; setpte: set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry); @@ -3561,6 +3568,11 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf) update_mmu_cache(vma, vmf->address, vmf->pte); unlock: pte_unmap_unlock(vmf->pte, vmf->ptl); + + /* Unmap page from direct mapping */ + if (vma_is_kvm_protected(vma) && set) + kvm_unmap_page(page, 1); + return ret; release: put_page(page); diff --git a/mm/rmap.c b/mm/rmap.c index 9425260774a1..247548d6d24b 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -1725,6 +1725,10 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma, static bool invalid_migration_vma(struct vm_area_struct *vma, void *arg) { + /* TODO */ + if (vma_is_kvm_protected(vma)) + return true; + return vma_is_temporary_stack(vma); } diff --git a/virt/lib/mem_protected.c b/virt/lib/mem_protected.c index 1dfe82534242..9d2ef99285e5 100644 --- a/virt/lib/mem_protected.c +++ b/virt/lib/mem_protected.c @@ -30,6 +30,27 @@ void kvm_unmap_page_atomic(void *vaddr) } EXPORT_SYMBOL_GPL(kvm_unmap_page_atomic); +void kvm_map_page(struct page *page, int nr_pages) +{ + int i; + + /* Clear page before returning it to the direct mapping */ + for (i = 0; i < nr_pages; i++) { + void *p = kvm_map_page_atomic(page + i); + memset(p, 0, PAGE_SIZE); + kvm_unmap_page_atomic(p); + } + + kernel_map_pages(page, nr_pages, 1); +} +EXPORT_SYMBOL_GPL(kvm_map_page); + +void kvm_unmap_page(struct page *page, int nr_pages) +{ + kernel_map_pages(page, nr_pages, 0); +} +EXPORT_SYMBOL_GPL(kvm_unmap_page); + int kvm_init_protected_memory(void) { guest_map_ptes = kmalloc_array(num_possible_cpus(), From patchwork Tue Oct 20 06:18:59 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Kirill A . 
Shutemov" X-Patchwork-Id: 11845779 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 34EBA16C0 for ; Tue, 20 Oct 2020 06:19:43 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id DECC422404 for ; Tue, 20 Oct 2020 06:19:42 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=shutemov-name.20150623.gappssmtp.com header.i=@shutemov-name.20150623.gappssmtp.com header.b="d2jXg+GK" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org DECC422404 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=shutemov.name Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 8AF716B0081; Tue, 20 Oct 2020 02:19:15 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 72E7A6B0083; Tue, 20 Oct 2020 02:19:15 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 3CD786B0085; Tue, 20 Oct 2020 02:19:15 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0163.hostedemail.com [216.40.44.163]) by kanga.kvack.org (Postfix) with ESMTP id DDDB36B0082 for ; Tue, 20 Oct 2020 02:19:14 -0400 (EDT) Received: from smtpin16.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 837431EF1 for ; Tue, 20 Oct 2020 06:19:14 +0000 (UTC) X-FDA: 77391301428.16.cast59_48051db2723d Received: from filter.hostedemail.com (10.5.16.251.rfc1918.com [10.5.16.251]) by smtpin16.hostedemail.com (Postfix) with ESMTP id 63C0A1021F92D for ; Tue, 20 Oct 2020 06:19:14 +0000 (UTC) X-Spam-Summary: 1,0,0,bd62e5185cd71baf,d41d8cd98f00b204,kirill@shutemov.name,,RULES_HIT:41:355:379:541:960:973:988:989:1260:1311:1314:1345:1359:1437:1515:1535:1542:1711:1730:1747:1777:1792:1801:2198:2199:2393:2559:2562:2897:3138:3139:3140:3141:3142:3353:3865:3866:3867:3870:3871:3874:4117:4250:4321:4605:5007:6119:6261:6653:6742:7903:10004:11026:11473:11657:11658:11914:12043:12048:12296:12297:12438:12517:12519:12555:12895:12986:13894:14096:14181:14721:21080:21444:21451:21627:21990:30003:30054,0,RBL:209.85.208.193:@shutemov.name:.lbl8.mailshell.net-62.8.84.100 66.201.201.201;04yfadhpnubti6cuchjfpytewhmw5yca7wda96dd3jb8i9ub5yqrkebgtkajmz5.naxa9f7kkubfhyprpcwaes7e1d1a8w3k3qpid8qcn1d5breqh7esqg7tejcrfek.e-lbl8.mailshell.net-223.238.255.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fn,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:23,LUA_SUMMARY:none X-HE-Tag: cast59_48051db2723d X-Filterd-Recvd-Size: 6543 Received: from mail-lj1-f193.google.com (mail-lj1-f193.google.com [209.85.208.193]) by imf38.hostedemail.com (Postfix) with ESMTP for ; Tue, 20 Oct 2020 06:19:14 +0000 (UTC) Received: by mail-lj1-f193.google.com with SMTP id i2so749819ljg.4 for ; Mon, 19 Oct 2020 23:19:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=shutemov-name.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=P1ZYdJ+48hESkuHQ/ZsaZHpWMaHGaWqk6TiKgGiH9uk=; 
From: "Kirill A. Shutemov"
Subject: [RFCv2 16/16] mm: Do not use zero page for VM_KVM_PROTECTED VMAs
Date: Tue, 20 Oct 2020 09:18:59 +0300
Message-Id: <20201020061859.18385-17-kirill.shutemov@linux.intel.com>
In-Reply-To: <20201020061859.18385-1-kirill.shutemov@linux.intel.com>
References: <20201020061859.18385-1-kirill.shutemov@linux.intel.com>

The presence of zero pages in the mapping would disclose the content of the mapping. Don't use them if KVM memory protection is enabled.

Signed-off-by: Kirill A.
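[Editor's illustration] The effect on the fault path can be summarized with a small sketch: the zero page is only installed for a read fault when the VMA does not forbid it, and with this patch VM_KVM_PROTECTED VMAs forbid it through vma_forbids_zeropage(). can_use_zero_page() is an illustrative helper, not code from the patch.

	static bool can_use_zero_page(struct vm_fault *vmf)
	{
		/* A write fault always needs a real, writable page. */
		if (vmf->flags & FAULT_FLAG_WRITE)
			return false;

		/* VM_KVM_PROTECTED (and s390 PGSTE) mappings refuse the zero page. */
		if (vma_forbids_zeropage(vmf->vma))
			return false;

		return true;
	}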
Shutemov --- arch/s390/include/asm/pgtable.h | 2 +- include/linux/mm.h | 4 ++-- mm/huge_memory.c | 3 +-- mm/memory.c | 3 +-- 4 files changed, 5 insertions(+), 7 deletions(-) diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h index b55561cc8786..72ca3b3f04cb 100644 --- a/arch/s390/include/asm/pgtable.h +++ b/arch/s390/include/asm/pgtable.h @@ -543,7 +543,7 @@ static inline int mm_alloc_pgste(struct mm_struct *mm) * In the case that a guest uses storage keys * faults should no longer be backed by zero pages */ -#define mm_forbids_zeropage mm_has_pgste +#define vma_forbids_zeropage(vma) mm_has_pgste(vma->vm_mm) static inline int mm_uses_skeys(struct mm_struct *mm) { #ifdef CONFIG_PGSTE diff --git a/include/linux/mm.h b/include/linux/mm.h index 74efc51e63f0..ee713b7c2819 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -130,8 +130,8 @@ extern int mmap_rnd_compat_bits __read_mostly; * s390 does this to prevent multiplexing of hardware bits * related to the physical page in case of virtualization. */ -#ifndef mm_forbids_zeropage -#define mm_forbids_zeropage(X) (0) +#ifndef vma_forbids_zeropage +#define vma_forbids_zeropage(vma) vma_is_kvm_protected(vma) #endif /* diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 40974656cb43..383614b24c4f 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -709,8 +709,7 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf) return VM_FAULT_OOM; if (unlikely(khugepaged_enter(vma, vma->vm_flags))) return VM_FAULT_OOM; - if (!(vmf->flags & FAULT_FLAG_WRITE) && - !mm_forbids_zeropage(vma->vm_mm) && + if (!(vmf->flags & FAULT_FLAG_WRITE) && !vma_forbids_zeropage(vma) && transparent_hugepage_use_zero_page()) { pgtable_t pgtable; struct page *zero_page; diff --git a/mm/memory.c b/mm/memory.c index e28bd5f902a7..9907ffe00490 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -3495,8 +3495,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf) return 0; /* Use the zero-page for reads */ - if (!(vmf->flags & FAULT_FLAG_WRITE) && - !mm_forbids_zeropage(vma->vm_mm)) { + if (!(vmf->flags & FAULT_FLAG_WRITE) && !vma_forbids_zeropage(vma)) { entry = pte_mkspecial(pfn_pte(my_zero_pfn(vmf->address), vma->vm_page_prot)); vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,