From patchwork Mon Nov 25 03:06:28 2019
X-Patchwork-Submitter: Bharata B Rao <bharata@linux.ibm.com>
X-Patchwork-Id: 11259485
From: Bharata B Rao <bharata@linux.ibm.com>
To: linuxppc-dev@lists.ozlabs.org, kvm-ppc@vger.kernel.org, linux-mm@kvack.org
Cc: paulus@au1.ibm.com, aneesh.kumar@linux.vnet.ibm.com, jglisse@redhat.com,
    cclaudio@linux.ibm.com, linuxram@us.ibm.com, sukadev@linux.vnet.ibm.com,
    hch@lst.de, Bharata B Rao <bharata@linux.ibm.com>
Subject: [PATCH v11 4/7] KVM: PPC: Radix changes for secure guest
Date: Mon, 25 Nov 2019 08:36:28 +0530
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20191125030631.7716-1-bharata@linux.ibm.com>
References: <20191125030631.7716-1-bharata@linux.ibm.com>
Message-Id: <20191125030631.7716-5-bharata@linux.ibm.com>
MIME-Version: 1.0

- After the guest becomes secure, when HV handles a page fault for a page
  belonging to the SVM, send that page to UV via UV_PAGE_IN.
- Whenever a page is unmapped on the HV side, inform UV via UV_PAGE_INVAL.
- Ensure that the routines which walk the guest's secondary page tables do
  not do so for a secure VM: for a secure guest, the active secondary page
  tables are in secure memory, and the secondary page tables in HV are freed
  when the guest becomes secure (see the condensed sketch below).
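The net effect on the HV paths is easiest to see as a condensed sketch
(an illustration of the hunks below, not the diff itself): once
kvm->arch.secure_guest has KVMPPC_SECURE_INIT_DONE set,

	/* Radix page fault: hand the page to UV instead of mapping it in HV. */
	if (kvm->arch.secure_guest & KVMPPC_SECURE_INIT_DONE)
		return kvmppc_send_page_to_uv(kvm, gfn);

	/* Unmap: HV no longer holds these page tables; just tell UV. */
	if (kvm->arch.secure_guest & KVMPPC_SECURE_INIT_DONE) {
		uv_page_inval(kvm->arch.lpid, gpa, PAGE_SHIFT);
		return 0;
	}

	/* Age, dirty-tracking and memslot-flush walkers bail out early. */
	if (kvm->arch.secure_guest & KVMPPC_SECURE_INIT_DONE)
		return ref;	/* or 0 / plain return, depending on the routine */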
Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
---
 arch/powerpc/include/asm/kvm_book3s_uvmem.h |  6 ++++
 arch/powerpc/include/asm/ultravisor-api.h   |  1 +
 arch/powerpc/include/asm/ultravisor.h       |  5 ++++
 arch/powerpc/kvm/book3s_64_mmu_radix.c      | 22 ++++++++++++++
 arch/powerpc/kvm/book3s_hv_uvmem.c          | 32 +++++++++++++++++++++
 5 files changed, 66 insertions(+)

diff --git a/arch/powerpc/include/asm/kvm_book3s_uvmem.h b/arch/powerpc/include/asm/kvm_book3s_uvmem.h
index 95f389c2937b..3033a9585b43 100644
--- a/arch/powerpc/include/asm/kvm_book3s_uvmem.h
+++ b/arch/powerpc/include/asm/kvm_book3s_uvmem.h
@@ -18,6 +18,7 @@ unsigned long kvmppc_h_svm_page_out(struct kvm *kvm,
 				    unsigned long page_shift);
 unsigned long kvmppc_h_svm_init_start(struct kvm *kvm);
 unsigned long kvmppc_h_svm_init_done(struct kvm *kvm);
+int kvmppc_send_page_to_uv(struct kvm *kvm, unsigned long gfn);
 #else
 static inline int kvmppc_uvmem_init(void)
 {
@@ -58,5 +59,10 @@ static inline unsigned long kvmppc_h_svm_init_done(struct kvm *kvm)
 {
 	return H_UNSUPPORTED;
 }
+
+static inline int kvmppc_send_page_to_uv(struct kvm *kvm, unsigned long gfn)
+{
+	return -EFAULT;
+}
 #endif /* CONFIG_PPC_UV */
 #endif /* __ASM_KVM_BOOK3S_UVMEM_H__ */
diff --git a/arch/powerpc/include/asm/ultravisor-api.h b/arch/powerpc/include/asm/ultravisor-api.h
index 2483f15bd71a..e774274ab30e 100644
--- a/arch/powerpc/include/asm/ultravisor-api.h
+++ b/arch/powerpc/include/asm/ultravisor-api.h
@@ -32,5 +32,6 @@
 #define UV_SHARE_PAGE			0xF130
 #define UV_UNSHARE_PAGE			0xF134
 #define UV_UNSHARE_ALL_PAGES		0xF140
+#define UV_PAGE_INVAL			0xF138

 #endif /* _ASM_POWERPC_ULTRAVISOR_API_H */
diff --git a/arch/powerpc/include/asm/ultravisor.h b/arch/powerpc/include/asm/ultravisor.h
index 79bb005e8ee9..40cc8bace654 100644
--- a/arch/powerpc/include/asm/ultravisor.h
+++ b/arch/powerpc/include/asm/ultravisor.h
@@ -67,4 +67,9 @@ static inline int uv_register_mem_slot(u64 lpid, u64 start_gpa, u64 size,
 			    size, flags, slotid);
 }

+static inline int uv_page_inval(u64 lpid, u64 gpa, u64 page_shift)
+{
+	return ucall_norets(UV_PAGE_INVAL, lpid, gpa, page_shift);
+}
+
 #endif /* _ASM_POWERPC_ULTRAVISOR_H */
diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index 2d415c36a61d..9f6ba113ffe3 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -19,6 +19,8 @@
 #include <asm/pgtable.h>
 #include <asm/pgalloc.h>
 #include <asm/pte-walk.h>
+#include <asm/ultravisor.h>
+#include <asm/kvm_book3s_uvmem.h>

 /*
  * Supported radix tree geometry.
@@ -915,6 +917,9 @@ int kvmppc_book3s_radix_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	if (!(dsisr & DSISR_PRTABLE_FAULT))
 		gpa |= ea & 0xfff;

+	if (kvm->arch.secure_guest & KVMPPC_SECURE_INIT_DONE)
+		return kvmppc_send_page_to_uv(kvm, gfn);
+
 	/* Get the corresponding memslot */
 	memslot = gfn_to_memslot(kvm, gfn);

@@ -972,6 +977,11 @@ int kvm_unmap_radix(struct kvm *kvm, struct kvm_memory_slot *memslot,
 	unsigned long gpa = gfn << PAGE_SHIFT;
 	unsigned int shift;

+	if (kvm->arch.secure_guest & KVMPPC_SECURE_INIT_DONE) {
+		uv_page_inval(kvm->arch.lpid, gpa, PAGE_SHIFT);
+		return 0;
+	}
+
 	ptep = __find_linux_pte(kvm->arch.pgtable, gpa, NULL, &shift);
 	if (ptep && pte_present(*ptep))
 		kvmppc_unmap_pte(kvm, ptep, gpa, shift, memslot,
@@ -989,6 +999,9 @@ int kvm_age_radix(struct kvm *kvm, struct kvm_memory_slot *memslot,
 	int ref = 0;
 	unsigned long old, *rmapp;

+	if (kvm->arch.secure_guest & KVMPPC_SECURE_INIT_DONE)
+		return ref;
+
 	ptep = __find_linux_pte(kvm->arch.pgtable, gpa, NULL, &shift);
 	if (ptep && pte_present(*ptep) && pte_young(*ptep)) {
 		old = kvmppc_radix_update_pte(kvm, ptep, _PAGE_ACCESSED, 0,
@@ -1013,6 +1026,9 @@ int kvm_test_age_radix(struct kvm *kvm, struct kvm_memory_slot *memslot,
 	unsigned int shift;
 	int ref = 0;

+	if (kvm->arch.secure_guest & KVMPPC_SECURE_INIT_DONE)
+		return ref;
+
 	ptep = __find_linux_pte(kvm->arch.pgtable, gpa, NULL, &shift);
 	if (ptep && pte_present(*ptep) && pte_young(*ptep))
 		ref = 1;
@@ -1030,6 +1046,9 @@ static int kvm_radix_test_clear_dirty(struct kvm *kvm,
 	int ret = 0;
 	unsigned long old, *rmapp;

+	if (kvm->arch.secure_guest & KVMPPC_SECURE_INIT_DONE)
+		return ret;
+
 	ptep = __find_linux_pte(kvm->arch.pgtable, gpa, NULL, &shift);
 	if (ptep && pte_present(*ptep) && pte_dirty(*ptep)) {
 		ret = 1;
@@ -1082,6 +1101,9 @@ void kvmppc_radix_flush_memslot(struct kvm *kvm,
 	unsigned long gpa;
 	unsigned int shift;

+	if (kvm->arch.secure_guest & KVMPPC_SECURE_INIT_DONE)
+		return;
+
 	gpa = memslot->base_gfn << PAGE_SHIFT;
 	spin_lock(&kvm->mmu_lock);
 	for (n = memslot->npages; n; --n) {
diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
index 51f094db43f8..9266ed53cf7a 100644
--- a/arch/powerpc/kvm/book3s_hv_uvmem.c
+++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
@@ -70,6 +70,17 @@
  * Shared pages: Whenever guest shares a secure page, UV will split and
  * remap the 2MB page if required and issue H_SVM_PAGE_IN with 64K page size.
  *
+ * HV invalidating a page: When a regular page belonging to secure
+ * guest gets unmapped, HV informs UV with UV_PAGE_INVAL of 64K
+ * page size. Using 64K page size is correct here because any non-secure
+ * page will essentially be of 64K page size. Splitting by UV during sharing
+ * and page-out ensures this.
+ *
+ * Page fault handling: When HV handles page fault of a page belonging
+ * to secure guest, it sends that to UV with a 64K UV_PAGE_IN request.
+ * Using 64K size is correct here too as UV would have split the 2MB page
+ * into 64k mappings and would have done page-outs earlier.
+ *
  * In summary, the current secure pages handling code in HV assumes
  * 64K page size and in fact fails any page-in/page-out requests of
  * non-64K size upfront. If and when UV starts supporting multiple
@@ -619,6 +630,27 @@ kvmppc_h_svm_page_out(struct kvm *kvm, unsigned long gpa,
 	return ret;
 }

+int kvmppc_send_page_to_uv(struct kvm *kvm, unsigned long gfn)
+{
+	unsigned long pfn;
+	int ret = U_SUCCESS;
+
+	pfn = gfn_to_pfn(kvm, gfn);
+	if (is_error_noslot_pfn(pfn))
+		return -EFAULT;
+
+	mutex_lock(&kvm->arch.uvmem_lock);
+	if (kvmppc_gfn_is_uvmem_pfn(gfn, kvm, NULL))
+		goto out;
+
+	ret = uv_page_in(kvm->arch.lpid, pfn << PAGE_SHIFT, gfn << PAGE_SHIFT,
+			 0, PAGE_SHIFT);
+out:
+	kvm_release_pfn_clean(pfn);
+	mutex_unlock(&kvm->arch.uvmem_lock);
+	return (ret == U_SUCCESS) ? RESUME_GUEST : -EFAULT;
+}
+
 static u64 kvmppc_get_secmem_size(void)
 {
 	struct device_node *np;
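A quick note on the address arithmetic in kvmppc_send_page_to_uv() above:
since this series assumes a 64K base page size (PAGE_SHIFT is 16 on such
configurations), the uv_page_in() ucall takes full byte addresses built
from the frame numbers. A small worked example with made-up frame numbers,
variable names here are only for illustration:

	unsigned long gfn = 0x12345;	/* guest frame number */
	unsigned long pfn = 0xabcde;	/* host frame returned by gfn_to_pfn() */

	unsigned long dst_gpa = gfn << PAGE_SHIFT;	/* 0x123450000, guest physical address */
	unsigned long src_ra  = pfn << PAGE_SHIFT;	/* 0xabcde0000, host real address */

	/* uv_page_in(kvm->arch.lpid, src_ra, dst_gpa, 0, PAGE_SHIFT)
	 * asks UV to page in exactly one 64K page. */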