From patchwork Mon Oct 22 05:18:34 2018
X-Patchwork-Submitter: Bharata B Rao
X-Patchwork-Id: 10651689
From: Bharata B Rao <bharata@linux.ibm.com>
To: linuxppc-dev@lists.ozlabs.org
Cc: kvm-ppc@vger.kernel.org, linux-mm@kvack.org, paulus@au1.ibm.com, benh@linux.ibm.com, aneesh.kumar@linux.vnet.ibm.com, jglisse@redhat.com, linuxram@us.ibm.com
Subject: [RFC PATCH v1 1/4] kvmppc: HMM backend driver to manage pages of secure guest
Date: Mon, 22 Oct 2018 10:48:34 +0530
Message-Id: <20181022051837.1165-2-bharata@linux.ibm.com>
In-Reply-To: <20181022051837.1165-1-bharata@linux.ibm.com>
References: <20181022051837.1165-1-bharata@linux.ibm.com>

HMM driver for KVM PPC to manage page transitions of a secure guest
via the H_SVM_PAGE_IN and H_SVM_PAGE_OUT hcalls.

H_SVM_PAGE_IN: Move the content of a normal page into a secure page.
H_SVM_PAGE_OUT: Move the content of a secure page into a normal page.
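For reference, the calling convention implied by the dispatch added below
is: r3 = hcall opcode, r4 = guest real address, r5 = flags, r6 = page
shift. The following stand-alone mock sketches that dispatch; all types,
values and handler names here are illustrative stand-ins, not kernel code.

/*
 * Illustrative only: a self-contained model of the hcall dispatch in
 * kvmppc_pseries_do_hcall(), showing the register layout this series
 * uses. H_PARAMETER's value is a guess for the sketch.
 */
#include <stdio.h>

#define H_SUCCESS      0UL
#define H_PARAMETER    ((unsigned long)-4L)  /* hypothetical for the sketch */
#define H_SVM_PAGE_IN  0x3D4
#define H_SVM_PAGE_OUT 0x3D8

struct mock_vcpu { unsigned long gpr[8]; };

static unsigned long mock_svm_page_in(unsigned long gpa, unsigned long flags,
				      unsigned long page_shift)
{
	printf("page-in gpa=0x%lx flags=0x%lx shift=%lu\n",
	       gpa, flags, page_shift);
	return H_SUCCESS;
}

static unsigned long do_hcall(struct mock_vcpu *vcpu)
{
	switch (vcpu->gpr[3]) {
	case H_SVM_PAGE_IN:
		/* r4 = guest real address, r5 = flags, r6 = page shift */
		return mock_svm_page_in(vcpu->gpr[4], vcpu->gpr[5],
					vcpu->gpr[6]);
	default:
		return H_PARAMETER;
	}
}

int main(void)
{
	struct mock_vcpu v = { .gpr = { [3] = H_SVM_PAGE_IN,
					[4] = 0x1000, [5] = 0, [6] = 12 } };
	return do_hcall(&v) == H_SUCCESS ? 0 : 1;
}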
Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
---
 arch/powerpc/include/asm/hvcall.h    |   7 +-
 arch/powerpc/include/asm/kvm_host.h  |  15 +
 arch/powerpc/include/asm/kvm_ppc.h   |  24 ++
 arch/powerpc/include/asm/ucall-api.h |  20 ++
 arch/powerpc/kvm/Makefile            |   3 +
 arch/powerpc/kvm/book3s_hv.c         |  36 ++
 arch/powerpc/kvm/book3s_hv_hmm.c     | 514 +++++++++++++++++++++++++++
 7 files changed, 618 insertions(+), 1 deletion(-)
 create mode 100644 arch/powerpc/include/asm/ucall-api.h
 create mode 100644 arch/powerpc/kvm/book3s_hv_hmm.c

diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
index a0b17f9f1ea4..89e6b70c1857 100644
--- a/arch/powerpc/include/asm/hvcall.h
+++ b/arch/powerpc/include/asm/hvcall.h
@@ -158,6 +158,9 @@
 /* Each control block has to be on a 4K boundary */
 #define H_CB_ALIGNMENT          4096
 
+/* Flags for H_SVM_PAGE_IN */
+#define H_PAGE_IN_SHARED	0x1
+
 /* pSeries hypervisor opcodes */
 #define H_REMOVE		0x04
 #define H_ENTER			0x08
@@ -295,7 +298,9 @@
 #define H_INT_ESB               0x3C8
 #define H_INT_SYNC              0x3CC
 #define H_INT_RESET             0x3D0
-#define MAX_HCALL_OPCODE	H_INT_RESET
+#define H_SVM_PAGE_IN		0x3D4
+#define H_SVM_PAGE_OUT		0x3D8
+#define MAX_HCALL_OPCODE	H_SVM_PAGE_OUT
 
 /* H_VIOCTL functions */
 #define H_GET_VIOA_DUMP_SIZE	0x01
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 906bcbdfd2a1..194e6e0ff239 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -310,6 +310,9 @@ struct kvm_arch {
 	struct kvmppc_passthru_irqmap *pimap;
 #endif
 	struct kvmppc_ops *kvm_ops;
+#ifdef CONFIG_PPC_SVM
+	struct hlist_head *hmm_hash;
+#endif
 #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
 	/* This array can grow quite large, keep it at the end */
 	struct kvmppc_vcore *vcores[KVM_MAX_VCORES];
@@ -830,4 +833,16 @@ static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
 static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
 
+#ifdef CONFIG_PPC_SVM
+struct kvmppc_hmm_device {
+	struct hmm_device *device;
+	struct hmm_devmem *devmem;
+	unsigned long *pfn_bitmap;
+};
+
+extern int kvmppc_hmm_init(void);
+extern void kvmppc_hmm_free(void);
+extern int kvmppc_hmm_hash_create(struct kvm *kvm);
+extern void kvmppc_hmm_hash_destroy(struct kvm *kvm);
+#endif
 #endif /* __POWERPC_KVM_HOST_H__ */
diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index e991821dd7fa..ba81a07e2bdf 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -906,4 +906,28 @@ static inline ulong kvmppc_get_ea_indexed(struct kvm_vcpu *vcpu, int ra, int rb)
 
 extern void xics_wake_cpu(int cpu);
 
+#ifdef CONFIG_PPC_SVM
+extern unsigned long kvmppc_h_svm_page_in(struct kvm *kvm,
+					  unsigned long gpa,
+					  unsigned long flags,
+					  unsigned long page_shift);
+extern unsigned long kvmppc_h_svm_page_out(struct kvm *kvm,
+					   unsigned long gpa,
+					   unsigned long flags,
+					   unsigned long page_shift);
+#else
+static inline unsigned long
+kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gpa,
+		     unsigned long flags, unsigned long page_shift)
+{
+	return H_UNSUPPORTED;
+}
+
+static inline unsigned long
+kvmppc_h_svm_page_out(struct kvm *kvm, unsigned long gpa,
+		      unsigned long flags, unsigned long page_shift)
+{
+	return H_UNSUPPORTED;
+}
+#endif
 #endif /* __POWERPC_KVM_PPC_H__ */
diff --git a/arch/powerpc/include/asm/ucall-api.h b/arch/powerpc/include/asm/ucall-api.h
new file mode 100644
index 000000000000..2c12f514f8ab
--- /dev/null
+++ b/arch/powerpc/include/asm/ucall-api.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_POWERPC_UCALL_API_H
+#define _ASM_POWERPC_UCALL_API_H
+
+#define U_SUCCESS 0
+
+/*
+ * TODO: Dummy uvcalls, will be replaced by real calls
+ */
+static inline int uv_page_in(u64 lpid, u64 dw0, u64 dw1, u64 dw2, u64 dw3)
+{
+	return U_SUCCESS;
+}
+
+static inline int uv_page_out(u64 lpid, u64 dw0, u64 dw1, u64 dw2, u64 dw3)
+{
+	return U_SUCCESS;
+}
+
+#endif /* _ASM_POWERPC_UCALL_API_H */
diff --git a/arch/powerpc/kvm/Makefile b/arch/powerpc/kvm/Makefile
index f872c04bb5b1..6945ffc18679 100644
--- a/arch/powerpc/kvm/Makefile
+++ b/arch/powerpc/kvm/Makefile
@@ -77,6 +77,9 @@ kvm-hv-y += \
 	book3s_64_mmu_hv.o \
 	book3s_64_mmu_radix.o
 
+kvm-hv-$(CONFIG_PPC_SVM) += \
+	book3s_hv_hmm.o
+
 kvm-hv-$(CONFIG_PPC_TRANSACTIONAL_MEM) += \
 	book3s_hv_tm.o
 
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 3e3a71594e63..05084eb8aadd 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -73,6 +73,7 @@
 #include
 #include
 #include
+#include
 
 #include "book3s.h"
 
@@ -935,6 +936,18 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
 		if (ret == H_TOO_HARD)
 			return RESUME_HOST;
 		break;
+	case H_SVM_PAGE_IN:
+		ret = kvmppc_h_svm_page_in(vcpu->kvm,
+					   kvmppc_get_gpr(vcpu, 4),
+					   kvmppc_get_gpr(vcpu, 5),
+					   kvmppc_get_gpr(vcpu, 6));
+		break;
+	case H_SVM_PAGE_OUT:
+		ret = kvmppc_h_svm_page_out(vcpu->kvm,
+					    kvmppc_get_gpr(vcpu, 4),
+					    kvmppc_get_gpr(vcpu, 5),
+					    kvmppc_get_gpr(vcpu, 6));
+		break;
 	default:
 		return RESUME_HOST;
 	}
@@ -961,6 +974,8 @@ static int kvmppc_hcall_impl_hv(unsigned long cmd)
 	case H_IPOLL:
 	case H_XIRR_X:
 #endif
+	case H_SVM_PAGE_IN:
+	case H_SVM_PAGE_OUT:
 		return 1;
 	}
 
@@ -3938,6 +3953,13 @@ static int kvmppc_core_init_vm_hv(struct kvm *kvm)
 		return -ENOMEM;
 	kvm->arch.lpid = lpid;
 
+#ifdef CONFIG_PPC_SVM
+	ret = kvmppc_hmm_hash_create(kvm);
+	if (ret) {
+		kvmppc_free_lpid(kvm->arch.lpid);
+		return ret;
+	}
+#endif
 	kvmppc_alloc_host_rm_ops();
 
 	/*
@@ -4073,6 +4095,9 @@ static void kvmppc_core_destroy_vm_hv(struct kvm *kvm)
 
 	kvmppc_free_vcores(kvm);
 
+#ifdef CONFIG_PPC_SVM
+	kvmppc_hmm_hash_destroy(kvm);
+#endif
 	kvmppc_free_lpid(kvm->arch.lpid);
 
 	if (kvm_is_radix(kvm))
@@ -4384,6 +4409,8 @@ static unsigned int default_hcall_list[] = {
 	H_XIRR,
 	H_XIRR_X,
 #endif
+	H_SVM_PAGE_IN,
+	H_SVM_PAGE_OUT,
 	0
 };
 
@@ -4596,11 +4623,20 @@ static int kvmppc_book3s_init_hv(void)
 		no_mixing_hpt_and_radix = true;
 	}
 
+#ifdef CONFIG_PPC_SVM
+	r = kvmppc_hmm_init();
+	if (r < 0)
+		pr_err("KVM-HV: kvmppc_hmm_init failed %d\n", r);
+#endif
+
 	return r;
 }
 
 static void kvmppc_book3s_exit_hv(void)
 {
+#ifdef CONFIG_PPC_SVM
+	kvmppc_hmm_free();
+#endif
 	kvmppc_free_host_rm_ops();
 	if (kvmppc_radix_possible())
 		kvmppc_radix_exit();
diff --git a/arch/powerpc/kvm/book3s_hv_hmm.c b/arch/powerpc/kvm/book3s_hv_hmm.c
new file mode 100644
index 000000000000..a2ee3163a312
--- /dev/null
+++ b/arch/powerpc/kvm/book3s_hv_hmm.c
@@ -0,0 +1,514 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * HMM driver to manage page migration between normal and secure
+ * memory.
+ *
+ * Based on Jérôme Glisse's HMM dummy driver.
+ *
+ * Copyright 2018 Bharata B Rao, IBM Corp.
+ */
+
+/*
+ * A pseries guest can be run as a secure guest on Ultravisor-enabled
+ * POWER platforms.
+ * On such platforms, this driver will be used to manage the movement
+ * of guest pages between the normal memory managed by the hypervisor
+ * (HV) and the secure memory managed by the Ultravisor (UV).
+ *
+ * Private ZONE_DEVICE memory equal to the amount of secure memory
+ * available in the platform for running secure guests is created
+ * via an HMM device. The movement of pages between normal and secure
+ * memory is done by the ->alloc_and_copy() callback routine of
+ * migrate_vma().
+ *
+ * The page-in and page-out requests from the UV come to the HV as
+ * hcalls, and the HV calls back into the UV via uvcalls to satisfy
+ * these page requests.
+ *
+ * For each page that gets moved into secure memory, an HMM PFN is used
+ * on the HV side and an HMM migration PTE corresponding to that PFN is
+ * populated in the QEMU page tables. A per-guest hash table is created
+ * to manage the pool of HMM PFNs. The guest real address is used as the
+ * key to index into the hash table and choose a free HMM PFN.
+ */
+
+#include
+#include
+#include
+#include
+
+static struct kvmppc_hmm_device *kvmppc_hmm;
+spinlock_t kvmppc_hmm_lock;
+
+#define KVMPPC_HMM_HASH_BITS	10
+#define KVMPPC_HMM_HASH_SIZE	(1 << KVMPPC_HMM_HASH_BITS)
+
+struct kvmppc_hmm_pfn_entry {
+	struct hlist_node hlist;
+	unsigned long addr;
+	unsigned long hmm_pfn;
+};
+
+struct kvmppc_hmm_page_pvt {
+	struct hlist_head *hmm_hash;
+	unsigned int lpid;
+	unsigned long gpa;
+};
+
+struct kvmppc_hmm_migrate_args {
+	struct hlist_head *hmm_hash;
+	unsigned int lpid;
+	unsigned long gpa;
+	unsigned long page_shift;
+};
+
+int kvmppc_hmm_hash_create(struct kvm *kvm)
+{
+	int i;
+
+	kvm->arch.hmm_hash = kzalloc(KVMPPC_HMM_HASH_SIZE *
+				     sizeof(struct hlist_head), GFP_KERNEL);
+	if (!kvm->arch.hmm_hash)
+		return -ENOMEM;
+
+	for (i = 0; i < KVMPPC_HMM_HASH_SIZE; i++)
+		INIT_HLIST_HEAD(&kvm->arch.hmm_hash[i]);
+	return 0;
+}
+
+/*
+ * Cleanup the HMM pages hash table when guest terminates
+ *
+ * Iterate over all the HMM pages hash list entries and release
+ * the reference on them. The actual freeing of the entry happens
+ * via the hmm_devmem_ops.free path.
+ */
+void kvmppc_hmm_hash_destroy(struct kvm *kvm)
+{
+	int i;
+	struct kvmppc_hmm_pfn_entry *p;
+	struct page *hmm_page;
+
+	for (i = 0; i < KVMPPC_HMM_HASH_SIZE; i++) {
+		while (!hlist_empty(&kvm->arch.hmm_hash[i])) {
+			p = hlist_entry(kvm->arch.hmm_hash[i].first,
+					struct kvmppc_hmm_pfn_entry,
+					hlist);
+			hmm_page = pfn_to_page(p->hmm_pfn);
+			put_page(hmm_page);
+		}
+	}
+	kfree(kvm->arch.hmm_hash);
+}
+
+static u64 kvmppc_hmm_pfn_hash_fn(u64 addr)
+{
+	return hash_64(addr, KVMPPC_HMM_HASH_BITS);
+}
+
+static void
+kvmppc_hmm_hash_free_pfn(struct hlist_head *hmm_hash, unsigned long gpa)
+{
+	struct kvmppc_hmm_pfn_entry *p;
+	struct hlist_head *list;
+
+	list = &hmm_hash[kvmppc_hmm_pfn_hash_fn(gpa)];
+	hlist_for_each_entry(p, list, hlist) {
+		if (p->addr == gpa) {
+			hlist_del(&p->hlist);
+			kfree(p);
+			return;
+		}
+	}
+}
+
+/*
+ * Get a free HMM PFN from the pool
+ *
+ * Called when a normal page is moved to secure memory (UV_PAGE_IN). The
+ * HMM PFN will be used to keep track of the secure page on the HV side.
+ */
+static struct page *kvmppc_hmm_get_page(struct hlist_head *hmm_hash,
+					unsigned long gpa, unsigned int lpid)
+{
+	struct page *dpage = NULL;
+	unsigned long bit;
+	unsigned long nr_pfns = kvmppc_hmm->devmem->pfn_last -
+				kvmppc_hmm->devmem->pfn_first;
+	struct hlist_head *list;
+	struct kvmppc_hmm_pfn_entry *p;
+	bool found = false;
+	unsigned long flags;
+	struct kvmppc_hmm_page_pvt *pvt;
+
+	spin_lock_irqsave(&kvmppc_hmm_lock, flags);
+	list = &hmm_hash[kvmppc_hmm_pfn_hash_fn(gpa)];
+	hlist_for_each_entry(p, list, hlist) {
+		if (p->addr == gpa) {
+			found = true;
+			break;
+		}
+	}
+	if (!found) {
+		p = kzalloc(sizeof(struct kvmppc_hmm_pfn_entry), GFP_ATOMIC);
+		if (!p) {
+			spin_unlock_irqrestore(&kvmppc_hmm_lock, flags);
+			return NULL;
+		}
+		p->addr = gpa;
+		bit = find_first_zero_bit(kvmppc_hmm->pfn_bitmap, nr_pfns);
+		if (bit >= nr_pfns) {
+			kfree(p);
+			spin_unlock_irqrestore(&kvmppc_hmm_lock, flags);
+			return NULL;
+		}
+		bitmap_set(kvmppc_hmm->pfn_bitmap, bit, 1);
+		p->hmm_pfn = bit + kvmppc_hmm->devmem->pfn_first;
+		INIT_HLIST_NODE(&p->hlist);
+		hlist_add_head(&p->hlist, list);
+	} else {
+		spin_unlock_irqrestore(&kvmppc_hmm_lock, flags);
+		return NULL;
+	}
+	dpage = pfn_to_page(p->hmm_pfn);
+
+	if (!trylock_page(dpage)) {
+		bitmap_clear(kvmppc_hmm->pfn_bitmap,
+			     p->hmm_pfn - kvmppc_hmm->devmem->pfn_first, 1);
+		hlist_del(&p->hlist);
+		kfree(p);
+		spin_unlock_irqrestore(&kvmppc_hmm_lock, flags);
+		return NULL;
+	}
+	spin_unlock_irqrestore(&kvmppc_hmm_lock, flags);
+
+	pvt = kzalloc(sizeof(*pvt), GFP_ATOMIC);
+	pvt->hmm_hash = hmm_hash;
+	pvt->gpa = gpa;
+	pvt->lpid = lpid;
+	hmm_devmem_page_set_drvdata(dpage, (unsigned long)pvt);
+
+	get_page(dpage);
+	return dpage;
+}
+
+/*
+ * Release the HMM PFN back to the pool
+ *
+ * Called when a secure page becomes a normal page during UV_PAGE_OUT.
+ */
+static void kvmppc_hmm_put_page(struct page *page)
+{
+	unsigned long pfn = page_to_pfn(page);
+	unsigned long flags;
+	struct kvmppc_hmm_page_pvt *pvt;
+
+	pvt = (struct kvmppc_hmm_page_pvt *)hmm_devmem_page_get_drvdata(page);
+	hmm_devmem_page_set_drvdata(page, 0);
+
+	spin_lock_irqsave(&kvmppc_hmm_lock, flags);
+	bitmap_clear(kvmppc_hmm->pfn_bitmap,
+		     pfn - kvmppc_hmm->devmem->pfn_first, 1);
+	kvmppc_hmm_hash_free_pfn(pvt->hmm_hash, pvt->gpa);
+	spin_unlock_irqrestore(&kvmppc_hmm_lock, flags);
+	kfree(pvt);
+}
+
+static void
+kvmppc_hmm_migrate_alloc_and_copy(struct vm_area_struct *vma,
+				  const unsigned long *src_pfns,
+				  unsigned long *dst_pfns,
+				  unsigned long start,
+				  unsigned long end,
+				  void *private)
+{
+	unsigned long addr;
+	struct kvmppc_hmm_migrate_args *args = private;
+	unsigned long page_size = 1UL << args->page_shift;
+
+	for (addr = start; addr < end;
+	     addr += page_size, src_pfns++, dst_pfns++) {
+		struct page *spage = migrate_pfn_to_page(*src_pfns);
+		struct page *dpage;
+		unsigned long pfn = *src_pfns >> MIGRATE_PFN_SHIFT;
+
+		*dst_pfns = 0;
+		if (!spage && !(*src_pfns & MIGRATE_PFN_MIGRATE))
+			continue;
+
+		if (spage && !(*src_pfns & MIGRATE_PFN_MIGRATE))
+			continue;
+
+		dpage = kvmppc_hmm_get_page(args->hmm_hash, args->gpa,
+					    args->lpid);
+		if (!dpage)
+			continue;
+
+		if (spage)
+			uv_page_in(args->lpid, pfn << args->page_shift,
+				   args->gpa, 0, args->page_shift);
+
+		*dst_pfns = migrate_pfn(page_to_pfn(dpage)) |
+			    MIGRATE_PFN_DEVICE | MIGRATE_PFN_LOCKED;
+	}
+}
+
+static void
+kvmppc_hmm_migrate_finalize_and_map(struct vm_area_struct *vma,
+				    const unsigned long *src_pfns,
+				    const unsigned long *dst_pfns,
+				    unsigned long start,
+				    unsigned long end,
+				    void *private)
+{
+}
+
+static const struct migrate_vma_ops kvmppc_hmm_migrate_ops = {
+	.alloc_and_copy = kvmppc_hmm_migrate_alloc_and_copy,
+	.finalize_and_map = kvmppc_hmm_migrate_finalize_and_map,
+};
+
+static unsigned long kvmppc_gpa_to_hva(struct kvm *kvm, unsigned long gpa,
+				       unsigned long page_shift)
+{
+	unsigned long gfn, hva;
+	struct kvm_memory_slot *memslot;
+
+	gfn = gpa >> page_shift;
+	memslot = gfn_to_memslot(kvm, gfn);
+	hva = gfn_to_hva_memslot(memslot, gfn);
+
+	return hva;
+}
+
+/*
+ * Move page from normal memory to secure memory.
+ */
+unsigned long
+kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gpa,
+		     unsigned long flags, unsigned long page_shift)
+{
+	unsigned long addr, end;
+	unsigned long src_pfn, dst_pfn;
+	struct kvmppc_hmm_migrate_args args;
+	struct mm_struct *mm = get_task_mm(current);
+	struct vm_area_struct *vma;
+	int ret = H_SUCCESS;
+
+	if (page_shift != PAGE_SHIFT)
+		return H_P3;
+
+	addr = kvmppc_gpa_to_hva(kvm, gpa, page_shift);
+	if (!addr)
+		return H_PARAMETER;
+	end = addr + (1UL << page_shift);
+
+	if (flags)
+		return H_P2;
+
+	args.hmm_hash = kvm->arch.hmm_hash;
+	args.lpid = kvm->arch.lpid;
+	args.gpa = gpa;
+	args.page_shift = page_shift;
+
+	down_read(&mm->mmap_sem);
+	vma = find_vma_intersection(mm, addr, end);
+	if (!vma || vma->vm_start > addr || vma->vm_end < end) {
+		ret = H_PARAMETER;
+		goto out;
+	}
+	ret = migrate_vma(&kvmppc_hmm_migrate_ops, vma, addr, end,
+			  &src_pfn, &dst_pfn, &args);
+	if (ret < 0)
+		ret = H_PARAMETER;
+out:
+	up_read(&mm->mmap_sem);
+	return ret;
+}
+
+static void
+kvmppc_hmm_fault_migrate_alloc_and_copy(struct vm_area_struct *vma,
+					const unsigned long *src_pfn,
+					unsigned long *dst_pfn,
+					unsigned long start,
+					unsigned long end,
+					void *private)
+{
+	struct page *dpage, *spage;
+	struct kvmppc_hmm_page_pvt *pvt;
+	unsigned long pfn;
+	int ret = U_SUCCESS;
+
+	*dst_pfn = MIGRATE_PFN_ERROR;
+	spage = migrate_pfn_to_page(*src_pfn);
+	if (!spage || !(*src_pfn & MIGRATE_PFN_MIGRATE))
+		return;
+	if (!is_zone_device_page(spage))
+		return;
+	dpage = hmm_vma_alloc_locked_page(vma, start);
+	if (!dpage)
+		return;
+	pvt = (struct kvmppc_hmm_page_pvt *)
+	       hmm_devmem_page_get_drvdata(spage);
+
+	pfn = page_to_pfn(dpage);
+	ret = uv_page_out(pvt->lpid, pfn << PAGE_SHIFT,
+			  pvt->gpa, 0, PAGE_SHIFT);
+	if (ret == U_SUCCESS)
+		*dst_pfn = migrate_pfn(pfn) | MIGRATE_PFN_LOCKED;
+}
+
+static void
+kvmppc_hmm_fault_migrate_finalize_and_map(struct vm_area_struct *vma,
+					  const unsigned long *src_pfns,
+					  const unsigned long *dst_pfns,
+					  unsigned long start,
+					  unsigned long end,
+					  void *private)
+{
+}
+
+static const struct migrate_vma_ops kvmppc_hmm_fault_migrate_ops = {
+	.alloc_and_copy = kvmppc_hmm_fault_migrate_alloc_and_copy,
+	.finalize_and_map = kvmppc_hmm_fault_migrate_finalize_and_map,
+};
+
+/*
+ * Fault handler callback invoked when the HV touches any page that has
+ * been moved to secure memory; we ask the UV to give the page back by
+ * issuing a UV_PAGE_OUT uvcall.
+ */
+static int kvmppc_hmm_devmem_fault(struct hmm_devmem *devmem,
+				   struct vm_area_struct *vma,
+				   unsigned long addr,
+				   const struct page *page,
+				   unsigned int flags,
+				   pmd_t *pmdp)
+{
+	unsigned long end = addr + PAGE_SIZE;
+	unsigned long src_pfn, dst_pfn = 0;
+
+	if (migrate_vma(&kvmppc_hmm_fault_migrate_ops, vma, addr, end,
+			&src_pfn, &dst_pfn, NULL))
+		return VM_FAULT_SIGBUS;
+	if (dst_pfn == MIGRATE_PFN_ERROR)
+		return VM_FAULT_SIGBUS;
+	return 0;
+}
+
+static void kvmppc_hmm_devmem_free(struct hmm_devmem *devmem,
+				   struct page *page)
+{
+	kvmppc_hmm_put_page(page);
+}
+
+static const struct hmm_devmem_ops kvmppc_hmm_devmem_ops = {
+	.free = kvmppc_hmm_devmem_free,
+	.fault = kvmppc_hmm_devmem_fault,
+};
+
+/*
+ * Move page from secure memory to normal memory.
+ */
+unsigned long
+kvmppc_h_svm_page_out(struct kvm *kvm, unsigned long gpa,
+		      unsigned long flags, unsigned long page_shift)
+{
+	unsigned long addr, end;
+	struct mm_struct *mm = get_task_mm(current);
+	struct vm_area_struct *vma;
+	unsigned long src_pfn, dst_pfn = 0;
+	int ret = H_SUCCESS;
+
+	if (page_shift != PAGE_SHIFT)
+		return H_P4;
+
+	addr = kvmppc_gpa_to_hva(kvm, gpa, page_shift);
+	if (!addr)
+		return H_P2;
+	end = addr + (1UL << page_shift);
+
+	down_read(&mm->mmap_sem);
+	vma = find_vma_intersection(mm, addr, end);
+	if (!vma || vma->vm_start > addr || vma->vm_end < end) {
+		ret = H_PARAMETER;
+		goto out;
+	}
+	ret = migrate_vma(&kvmppc_hmm_fault_migrate_ops, vma, addr, end,
+			  &src_pfn, &dst_pfn, NULL);
+	if (ret < 0)
+		ret = H_PARAMETER;
+out:
+	up_read(&mm->mmap_sem);
+	return ret;
+}
+
+/*
+ * TODO: Number of secure pages and the page size order would probably
+ * come via DT or via some uvcall. Return 8G for now.
+ */
+static unsigned long kvmppc_get_secmem_size(void)
+{
+	return (1UL << 33);
+}
+
+static int kvmppc_hmm_pages_init(void)
+{
+	unsigned long nr_pfns = kvmppc_hmm->devmem->pfn_last -
+				kvmppc_hmm->devmem->pfn_first;
+
+	kvmppc_hmm->pfn_bitmap = kcalloc(BITS_TO_LONGS(nr_pfns),
+					 sizeof(unsigned long), GFP_KERNEL);
+	if (!kvmppc_hmm->pfn_bitmap)
+		return -ENOMEM;
+
+	spin_lock_init(&kvmppc_hmm_lock);
+
+	return 0;
+}
+
+int kvmppc_hmm_init(void)
+{
+	int ret = 0;
+	unsigned long size = kvmppc_get_secmem_size();
+
+	kvmppc_hmm = kzalloc(sizeof(*kvmppc_hmm), GFP_KERNEL);
+	if (!kvmppc_hmm) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	kvmppc_hmm->device = hmm_device_new(NULL);
+	if (IS_ERR(kvmppc_hmm->device)) {
+		ret = PTR_ERR(kvmppc_hmm->device);
+		goto out_free;
+	}
+
+	kvmppc_hmm->devmem = hmm_devmem_add(&kvmppc_hmm_devmem_ops,
+					    &kvmppc_hmm->device->device, size);
+	if (IS_ERR(kvmppc_hmm->devmem)) {
+		ret = PTR_ERR(kvmppc_hmm->devmem);
+		goto out_device;
+	}
+	ret = kvmppc_hmm_pages_init();
+	if (ret < 0)
+		goto out_devmem;
+
+	return ret;
+
+out_devmem:
+	hmm_devmem_remove(kvmppc_hmm->devmem);
+out_device:
+	hmm_device_put(kvmppc_hmm->device);
+out_free:
+	kfree(kvmppc_hmm);
+	kvmppc_hmm = NULL;
+out:
+	return ret;
+}
+
+void kvmppc_hmm_free(void)
+{
+	kfree(kvmppc_hmm->pfn_bitmap);
+	hmm_devmem_remove(kvmppc_hmm->devmem);
+	hmm_device_put(kvmppc_hmm->device);
+	kfree(kvmppc_hmm);
+	kvmppc_hmm = NULL;
+}
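The per-guest pool managed by kvmppc_hmm_get_page()/kvmppc_hmm_put_page()
above boils down to a bitmap of device PFNs plus a hash keyed by guest
real address. The user-space toy below models that get/put life cycle;
a single word stands in for the PFN bitmap, open addressing stands in
for the kernel's hlist buckets, and every name and size is made up for
illustration only.

/* Toy model of the GPA -> device-PFN pool; not kernel code. */
#include <stdio.h>

#define POOL_SIZE 16	/* 16 device PFNs in the toy pool */
#define HASH_SIZE 8

struct entry { unsigned long gpa, pfn; int used; };

static unsigned long bitmap;		/* bit i set => PFN i in use */
static struct entry table[HASH_SIZE];

static unsigned hash_gpa(unsigned long gpa) { return (gpa >> 12) % HASH_SIZE; }

/* Mirrors kvmppc_hmm_get_page(): refuse a second page-in for a GPA. */
static long get_pfn(unsigned long gpa)
{
	unsigned i, h = hash_gpa(gpa);
	long pfn;

	for (i = 0; i < HASH_SIZE; i++)
		if (table[(h + i) % HASH_SIZE].used &&
		    table[(h + i) % HASH_SIZE].gpa == gpa)
			return -1;		/* already paged in */

	for (pfn = 0; pfn < POOL_SIZE; pfn++)
		if (!(bitmap & (1UL << pfn)))
			break;
	if (pfn == POOL_SIZE)
		return -1;			/* pool exhausted */

	for (i = 0; i < HASH_SIZE; i++) {
		struct entry *e = &table[(h + i) % HASH_SIZE];
		if (!e->used) {
			bitmap |= 1UL << pfn;
			*e = (struct entry){ .gpa = gpa, .pfn = pfn, .used = 1 };
			return pfn;
		}
	}
	return -1;				/* hash full */
}

/* Mirrors kvmppc_hmm_put_page(): free the bitmap bit and hash entry. */
static void put_pfn(unsigned long gpa)
{
	unsigned i, h = hash_gpa(gpa);

	for (i = 0; i < HASH_SIZE; i++) {
		struct entry *e = &table[(h + i) % HASH_SIZE];
		if (e->used && e->gpa == gpa) {
			bitmap &= ~(1UL << e->pfn);
			e->used = 0;
			return;
		}
	}
}

int main(void)
{
	printf("gpa 0x1000 -> pfn %ld\n", get_pfn(0x1000));
	printf("gpa 0x1000 again -> %ld (rejected)\n", get_pfn(0x1000));
	put_pfn(0x1000);
	printf("after put, gpa 0x1000 -> pfn %ld\n", get_pfn(0x1000));
	return 0;
}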
From patchwork Mon Oct 22 05:18:35 2018
X-Patchwork-Submitter: Bharata B Rao
X-Patchwork-Id: 10651691
From: Bharata B Rao <bharata@linux.ibm.com>
To: linuxppc-dev@lists.ozlabs.org
Cc: kvm-ppc@vger.kernel.org, linux-mm@kvack.org, paulus@au1.ibm.com, benh@linux.ibm.com, aneesh.kumar@linux.vnet.ibm.com, jglisse@redhat.com, linuxram@us.ibm.com
Subject: [RFC PATCH v1 2/4] kvmppc: Add support for shared pages in HMM driver
Date: Mon, 22 Oct 2018 10:48:35 +0530
Message-Id: <20181022051837.1165-3-bharata@linux.ibm.com>
In-Reply-To: <20181022051837.1165-1-bharata@linux.ibm.com>
References: <20181022051837.1165-1-bharata@linux.ibm.com>

A secure guest will share some of its pages with the hypervisor
(e.g. virtio bounce buffers). Support shared pages in the HMM driver.

Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
---
 arch/powerpc/kvm/book3s_hv_hmm.c | 69 ++++++++++++++++++++++++++++++--
 1 file changed, 65 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv_hmm.c b/arch/powerpc/kvm/book3s_hv_hmm.c
index a2ee3163a312..09b8e19b7605 100644
--- a/arch/powerpc/kvm/book3s_hv_hmm.c
+++ b/arch/powerpc/kvm/book3s_hv_hmm.c
@@ -50,6 +50,7 @@ struct kvmppc_hmm_page_pvt {
 	struct hlist_head *hmm_hash;
 	unsigned int lpid;
 	unsigned long gpa;
+	bool skip_page_out;
 };
 
 struct kvmppc_hmm_migrate_args {
@@ -278,6 +279,65 @@ static unsigned long kvmppc_gpa_to_hva(struct kvm *kvm, unsigned long gpa,
 	return hva;
 }
 
+/*
+ * Shares the page with HV, thus making it a normal page.
+ *
+ * - If the page is already secure, then provision a new page and share
+ * - If the page is a normal page, share the existing page
+ *
+ * In the former case, uses the HMM fault handler to release the HMM page.
+ */
+static unsigned long
+kvmppc_share_page(struct kvm *kvm, unsigned long gpa,
+		  unsigned long addr, unsigned long page_shift)
+{
+
+	int ret;
+	struct hlist_head *list, *hmm_hash;
+	unsigned int lpid = kvm->arch.lpid;
+	unsigned long flags;
+	struct kvmppc_hmm_pfn_entry *p;
+	struct page *hmm_page, *page;
+	struct kvmppc_hmm_page_pvt *pvt;
+	unsigned long pfn;
+
+	/*
+	 * First check if the requested page has already been given to
+	 * UV as a secure page. If so, ensure that we don't issue a
+	 * UV_PAGE_OUT but instead directly send the page.
+	 */
+	spin_lock_irqsave(&kvmppc_hmm_lock, flags);
+	hmm_hash = kvm->arch.hmm_hash;
+	list = &hmm_hash[kvmppc_hmm_pfn_hash_fn(gpa)];
+	hlist_for_each_entry(p, list, hlist) {
+		if (p->addr == gpa) {
+			hmm_page = pfn_to_page(p->hmm_pfn);
+			get_page(hmm_page); /* TODO: Necessary ? */
+			pvt = (struct kvmppc_hmm_page_pvt *)
+			       hmm_devmem_page_get_drvdata(hmm_page);
+			pvt->skip_page_out = true;
+			put_page(hmm_page);
+			break;
+		}
+	}
+	spin_unlock_irqrestore(&kvmppc_hmm_lock, flags);
+
+	ret = get_user_pages_fast(addr, 1, 0, &page);
+	if (ret != 1)
+		return H_PARAMETER;
+
+	pfn = page_to_pfn(page);
+	if (is_zero_pfn(pfn)) {
+		put_page(page);
+		return H_SUCCESS;
+	}
+
+	ret = uv_page_in(lpid, pfn << page_shift, gpa, 0, page_shift);
+	put_page(page);
+
+	return (ret == U_SUCCESS) ? H_SUCCESS : H_PARAMETER;
+}
+
 /*
  * Move page from normal memory to secure memory.
  */
@@ -300,8 +360,8 @@ kvmppc_h_svm_page_in(struct kvm *kvm, unsigned long gpa,
 		return H_PARAMETER;
 	end = addr + (1UL << page_shift);
 
-	if (flags)
-		return H_P2;
+	if (flags & H_PAGE_IN_SHARED)
+		return kvmppc_share_page(kvm, gpa, addr, page_shift);
 
 	args.hmm_hash = kvm->arch.hmm_hash;
 	args.lpid = kvm->arch.lpid;
@@ -349,8 +409,9 @@ kvmppc_hmm_fault_migrate_alloc_and_copy(struct vm_area_struct *vma,
 	       hmm_devmem_page_get_drvdata(spage);
 
 	pfn = page_to_pfn(dpage);
-	ret = uv_page_out(pvt->lpid, pfn << PAGE_SHIFT,
-			  pvt->gpa, 0, PAGE_SHIFT);
+	if (!pvt->skip_page_out)
+		ret = uv_page_out(pvt->lpid, pfn << PAGE_SHIFT,
+				  pvt->gpa, 0, PAGE_SHIFT);
 	if (ret == U_SUCCESS)
 		*dst_pfn = migrate_pfn(pfn) | MIGRATE_PFN_LOCKED;
 }
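From the guest's point of view, sharing a page is just a page-in request
with the H_PAGE_IN_SHARED flag set; the HV then keeps (or re-establishes)
a normal page at that GPA instead of moving it into secure memory. A
stand-alone sketch of that guest-side call follows; the hcall primitive
is mocked and every name here is hypothetical, not actual guest code.

/* Illustrative guest-side view of sharing a page with the HV. */
#include <stdio.h>

#define H_SUCCESS        0UL
#define H_SVM_PAGE_IN    0x3D4UL
#define H_PAGE_IN_SHARED 0x1UL
#define PAGE_SHIFT_4K    12UL

/* Stand-in for the firmware hcall; just logs what would be requested. */
static unsigned long mock_hcall(unsigned long opcode, unsigned long gpa,
				unsigned long flags, unsigned long page_shift)
{
	printf("hcall 0x%lx: gpa=0x%lx flags=0x%lx shift=%lu\n",
	       opcode, gpa, flags, page_shift);
	return H_SUCCESS;
}

static unsigned long share_page_with_hv(unsigned long gpa)
{
	return mock_hcall(H_SVM_PAGE_IN, gpa, H_PAGE_IN_SHARED, PAGE_SHIFT_4K);
}

int main(void)
{
	return share_page_with_hv(0x4000) == H_SUCCESS ? 0 : 1;
}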
From patchwork Mon Oct 22 05:18:36 2018
X-Patchwork-Submitter: Bharata B Rao
X-Patchwork-Id: 10651693
From: Bharata B Rao <bharata@linux.ibm.com>
To: linuxppc-dev@lists.ozlabs.org
Cc: kvm-ppc@vger.kernel.org, linux-mm@kvack.org, paulus@au1.ibm.com, benh@linux.ibm.com, aneesh.kumar@linux.vnet.ibm.com, jglisse@redhat.com, linuxram@us.ibm.com
Subject: [RFC PATCH v1 3/4] kvmppc: H_SVM_INIT_START and H_SVM_INIT_DONE hcalls
Date: Mon, 22 Oct 2018 10:48:36 +0530
Message-Id: <20181022051837.1165-4-bharata@linux.ibm.com>
In-Reply-To: <20181022051837.1165-1-bharata@linux.ibm.com>
References: <20181022051837.1165-1-bharata@linux.ibm.com>

H_SVM_INIT_START: Initiate securing a VM
H_SVM_INIT_DONE: Conclude securing a VM

During early guest init, these hcalls will be issued by UV.
As part of these hcalls, [un]register memslots with UV.
Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
---
 arch/powerpc/include/asm/hvcall.h    |  4 ++-
 arch/powerpc/include/asm/kvm_host.h  |  1 +
 arch/powerpc/include/asm/ucall-api.h |  6 ++++
 arch/powerpc/kvm/book3s_hv.c         | 54 ++++++++++++++++++++++++++++
 4 files changed, 64 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
index 89e6b70c1857..6091276fef07 100644
--- a/arch/powerpc/include/asm/hvcall.h
+++ b/arch/powerpc/include/asm/hvcall.h
@@ -300,7 +300,9 @@
 #define H_INT_RESET             0x3D0
 #define H_SVM_PAGE_IN		0x3D4
 #define H_SVM_PAGE_OUT		0x3D8
-#define MAX_HCALL_OPCODE	H_SVM_PAGE_OUT
+#define H_SVM_INIT_START	0x3DC
+#define H_SVM_INIT_DONE	0x3E0
+#define MAX_HCALL_OPCODE	H_SVM_INIT_DONE
 
 /* H_VIOCTL functions */
 #define H_GET_VIOA_DUMP_SIZE	0x01
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index 194e6e0ff239..267f8c568bc3 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -292,6 +292,7 @@ struct kvm_arch {
 	struct dentry *debugfs_dir;
 	struct dentry *htab_dentry;
 	struct kvm_resize_hpt *resize_hpt; /* protected by kvm->lock */
+	bool svm_init_start; /* Indicates H_SVM_INIT_START has been called */
 #endif /* CONFIG_KVM_BOOK3S_HV_POSSIBLE */
 #ifdef CONFIG_KVM_BOOK3S_PR_POSSIBLE
 	struct mutex hpt_mutex;
diff --git a/arch/powerpc/include/asm/ucall-api.h b/arch/powerpc/include/asm/ucall-api.h
index 2c12f514f8ab..9ddfcf541211 100644
--- a/arch/powerpc/include/asm/ucall-api.h
+++ b/arch/powerpc/include/asm/ucall-api.h
@@ -17,4 +17,10 @@ static inline int uv_page_out(u64 lpid, u64 dw0, u64 dw1, u64 dw2, u64 dw3)
 	return U_SUCCESS;
 }
 
+static inline int uv_register_mem_slot(u64 lpid, u64 dw0, u64 dw1, u64 dw2,
+				       u64 dw3)
+{
+	return 0;
+}
+
 #endif /* _ASM_POWERPC_UCALL_API_H */
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 05084eb8aadd..47f366f634fd 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -819,6 +819,50 @@ static int kvmppc_get_yield_count(struct kvm_vcpu *vcpu)
 	return yield_count;
 }
 
+#ifdef CONFIG_PPC_SVM
+#include
+/*
+ * TODO: Check if memslots related calls here need to be called
+ * under any lock.
+ */
+static unsigned long kvmppc_h_svm_init_start(struct kvm *kvm)
+{
+	struct kvm_memslots *slots;
+	struct kvm_memory_slot *memslot;
+	int ret;
+
+	slots = kvm_memslots(kvm);
+	kvm_for_each_memslot(memslot, slots) {
+		ret = uv_register_mem_slot(kvm->arch.lpid,
+					   memslot->base_gfn << PAGE_SHIFT,
+					   memslot->npages * PAGE_SIZE,
+					   0, memslot->id);
+		if (ret < 0)
+			return H_PARAMETER;
+	}
+	kvm->arch.svm_init_start = true;
+	return H_SUCCESS;
+}
+
+static unsigned long kvmppc_h_svm_init_done(struct kvm *kvm)
+{
+	if (kvm->arch.svm_init_start)
+		return H_SUCCESS;
+	else
+		return H_UNSUPPORTED;
+}
+#else
+static unsigned long kvmppc_h_svm_init_start(struct kvm *kvm)
+{
+	return H_UNSUPPORTED;
+}
+
+static unsigned long kvmppc_h_svm_init_done(struct kvm *kvm)
+{
+	return H_UNSUPPORTED;
+}
+#endif
+
 int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
 {
 	unsigned long req = kvmppc_get_gpr(vcpu, 3);
@@ -948,6 +992,12 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
 					    kvmppc_get_gpr(vcpu, 5),
 					    kvmppc_get_gpr(vcpu, 6));
 		break;
+	case H_SVM_INIT_START:
+		ret = kvmppc_h_svm_init_start(vcpu->kvm);
+		break;
+	case H_SVM_INIT_DONE:
+		ret = kvmppc_h_svm_init_done(vcpu->kvm);
+		break;
 	default:
 		return RESUME_HOST;
 	}
@@ -976,6 +1026,8 @@ static int kvmppc_hcall_impl_hv(unsigned long cmd)
 #endif
 	case H_SVM_PAGE_IN:
 	case H_SVM_PAGE_OUT:
+	case H_SVM_INIT_START:
+	case H_SVM_INIT_DONE:
 		return 1;
 	}
 
@@ -4411,6 +4463,8 @@ static unsigned int default_hcall_list[] = {
 #endif
 	H_SVM_PAGE_IN,
 	H_SVM_PAGE_OUT,
+	H_SVM_INIT_START,
+	H_SVM_INIT_DONE,
 	0
 };
 
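The address arithmetic in kvmppc_h_svm_init_start() is worth spelling
out: each memslot is handed to the UV as the pair (base_gfn << PAGE_SHIFT,
npages * PAGE_SIZE). A self-contained model follows; the memslot struct,
the uv_ stub and all values are simplified stand-ins for the kernel types.

/* Toy model of per-memslot registration during H_SVM_INIT_START. */
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

struct mock_memslot { unsigned long base_gfn, npages; unsigned id; };

static int uv_register_mem_slot_stub(unsigned long lpid, unsigned long start,
				     unsigned long size, unsigned long flags,
				     unsigned slotid)
{
	printf("lpid=%lu slot=%u start=0x%lx size=0x%lx flags=0x%lx\n",
	       lpid, slotid, start, size, flags);
	return 0;
}

int main(void)
{
	struct mock_memslot slots[] = {
		{ .base_gfn = 0x0,     .npages = 0x8000, .id = 0 },
		{ .base_gfn = 0x10000, .npages = 0x4000, .id = 1 },
	};
	unsigned long lpid = 1;

	/* Same conversion as kvm_for_each_memslot() loop in the patch. */
	for (unsigned i = 0; i < sizeof(slots) / sizeof(slots[0]); i++)
		if (uv_register_mem_slot_stub(lpid,
					      slots[i].base_gfn << PAGE_SHIFT,
					      slots[i].npages * PAGE_SIZE,
					      0, slots[i].id) < 0)
			return 1;
	return 0;
}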
From patchwork Mon Oct 22 05:18:37 2018
X-Patchwork-Submitter: Bharata B Rao
X-Patchwork-Id: 10651695
From: Bharata B Rao <bharata@linux.ibm.com>
To: linuxppc-dev@lists.ozlabs.org
Cc: kvm-ppc@vger.kernel.org, linux-mm@kvack.org, paulus@au1.ibm.com, benh@linux.ibm.com, aneesh.kumar@linux.vnet.ibm.com, jglisse@redhat.com, linuxram@us.ibm.com
Subject: [RFC PATCH v1 4/4] kvmppc: Handle memory plug/unplug to secure VM
Date: Mon, 22 Oct 2018 10:48:37 +0530
Message-Id: <20181022051837.1165-5-bharata@linux.ibm.com>
In-Reply-To: <20181022051837.1165-1-bharata@linux.ibm.com>
References: <20181022051837.1165-1-bharata@linux.ibm.com>
Register the new memslot with UV during plug and unregister the
memslot during unplug.

Signed-off-by: Bharata B Rao <bharata@linux.ibm.com>
---
 arch/powerpc/include/asm/kvm_ppc.h   |  6 ++++--
 arch/powerpc/include/asm/ucall-api.h |  5 +++++
 arch/powerpc/kvm/book3s.c            |  5 +++--
 arch/powerpc/kvm/book3s_hv.c         | 23 ++++++++++++++++++++++-
 arch/powerpc/kvm/book3s_pr.c         |  3 ++-
 arch/powerpc/kvm/powerpc.c           |  2 +-
 6 files changed, 37 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index ba81a07e2bdf..2f0d7c64eb18 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -226,7 +226,8 @@ extern int kvmppc_core_prepare_memory_region(struct kvm *kvm,
 extern void kvmppc_core_commit_memory_region(struct kvm *kvm,
 				const struct kvm_userspace_memory_region *mem,
 				const struct kvm_memory_slot *old,
-				const struct kvm_memory_slot *new);
+				const struct kvm_memory_slot *new,
+				enum kvm_mr_change change);
 extern int kvm_vm_ioctl_get_smmu_info(struct kvm *kvm,
 				struct kvm_ppc_smmu_info *info);
 extern void kvmppc_core_flush_memslot(struct kvm *kvm,
@@ -296,7 +297,8 @@ struct kvmppc_ops {
 	void (*commit_memory_region)(struct kvm *kvm,
 				     const struct kvm_userspace_memory_region *mem,
 				     const struct kvm_memory_slot *old,
-				     const struct kvm_memory_slot *new);
+				     const struct kvm_memory_slot *new,
+				     enum kvm_mr_change change);
 	int (*unmap_hva_range)(struct kvm *kvm, unsigned long start,
 			   unsigned long end);
 	int (*age_hva)(struct kvm *kvm, unsigned long start, unsigned long end);
diff --git a/arch/powerpc/include/asm/ucall-api.h b/arch/powerpc/include/asm/ucall-api.h
index 9ddfcf541211..652797184b86 100644
--- a/arch/powerpc/include/asm/ucall-api.h
+++ b/arch/powerpc/include/asm/ucall-api.h
@@ -23,4 +23,9 @@ static inline int uv_register_mem_slot(u64 lpid, u64 dw0, u64 dw1, u64 dw2,
 	return 0;
 }
 
+static inline int uv_unregister_mem_slot(u64 lpid, u64 dw0)
+{
+	return 0;
+}
+
 #endif	/* _ASM_POWERPC_UCALL_API_H */
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index 87348e498c89..15ddae43849d 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -804,9 +804,10 @@ int kvmppc_core_prepare_memory_region(struct kvm *kvm,
 void kvmppc_core_commit_memory_region(struct kvm *kvm,
 				const struct kvm_userspace_memory_region *mem,
 				const struct kvm_memory_slot *old,
-				const struct kvm_memory_slot *new)
+				const struct kvm_memory_slot *new,
+				enum kvm_mr_change change)
 {
-	kvm->arch.kvm_ops->commit_memory_region(kvm, mem, old, new);
+	kvm->arch.kvm_ops->commit_memory_region(kvm, mem, old, new, change);
 }
 
 int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 47f366f634fd..5f20c37a59b2 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -3729,7 +3729,8 @@ static int kvmppc_core_prepare_memory_region_hv(struct kvm *kvm,
 static void kvmppc_core_commit_memory_region_hv(struct kvm *kvm,
 				const struct kvm_userspace_memory_region *mem,
 				const struct kvm_memory_slot *old,
-				const struct kvm_memory_slot *new)
+				const struct kvm_memory_slot *new,
+				enum kvm_mr_change change)
 {
 	unsigned long npages = mem->memory_size >> PAGE_SHIFT;
 
@@ -3741,6 +3742,26 @@ static void kvmppc_core_commit_memory_region_hv(struct kvm *kvm,
 	 */
 	if (npages)
 		atomic64_inc(&kvm->arch.mmio_update);
+	/*
+	 * If UV hasn't yet called H_SVM_INIT_START, don't register memslots.
+	 */
+	if (!kvm->arch.svm_init_start)
+		return;
+
+#ifdef CONFIG_PPC_SVM
+	/*
+	 * TODO: Handle KVM_MR_MOVE
+	 */
+	if (change == KVM_MR_CREATE) {
+		uv_register_mem_slot(kvm->arch.lpid,
+				     new->base_gfn << PAGE_SHIFT,
+				     new->npages * PAGE_SIZE,
+				     0,
+				     new->id);
+	} else if (change == KVM_MR_DELETE) {
+		uv_unregister_mem_slot(kvm->arch.lpid, old->id);
+	}
+#endif
 }
 
 /*
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 614ebb4261f7..844af9844a0c 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -1914,7 +1914,8 @@ static int kvmppc_core_prepare_memory_region_pr(struct kvm *kvm,
 static void kvmppc_core_commit_memory_region_pr(struct kvm *kvm,
 				const struct kvm_userspace_memory_region *mem,
 				const struct kvm_memory_slot *old,
-				const struct kvm_memory_slot *new)
+				const struct kvm_memory_slot *new,
+				enum kvm_mr_change change)
 {
 	return;
 }
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index eba5756d5b41..cfc6e5dcd1c5 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -691,7 +691,7 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
 				const struct kvm_memory_slot *new,
 				enum kvm_mr_change change)
 {
-	kvmppc_core_commit_memory_region(kvm, mem, old, new);
+	kvmppc_core_commit_memory_region(kvm, mem, old, new, change);
 }
 
 void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
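
For reviewers who want to see the plug/unplug dispatch in isolation, below
is a minimal standalone sketch of the logic this patch adds to
kvmppc_core_commit_memory_region_hv(). All types, the svm_init_start flag
and the two ucalls are simplified stand-ins for illustration (including the
assumed 64K PAGE_SHIFT); they are not the kernel definitions, and the
sketch compiles as an ordinary userspace program.

/*
 * Sketch of the memslot plug/unplug dispatch: register the new slot
 * with the UV on KVM_MR_CREATE, unregister the old slot on
 * KVM_MR_DELETE, and do nothing before H_SVM_INIT_START.
 */
#include <inttypes.h>
#include <stdio.h>

#define PAGE_SHIFT	16			/* assuming 64K pages */
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

enum kvm_mr_change { KVM_MR_CREATE, KVM_MR_DELETE, KVM_MR_MOVE,
		     KVM_MR_FLAGS_ONLY };

struct memslot {
	uint64_t base_gfn;	/* first guest frame number of the slot */
	uint64_t npages;	/* size of the slot in pages */
	uint32_t id;		/* memslot id passed to the UV */
};

/* Stand-ins for the ucalls declared in asm/ucall-api.h. */
static int uv_register_mem_slot(uint64_t lpid, uint64_t start_gpa,
				uint64_t size, uint64_t flags,
				uint32_t slotid)
{
	printf("register   lpid=%" PRIu64 " gpa=0x%" PRIx64
	       " size=0x%" PRIx64 " slot=%" PRIu32 "\n",
	       lpid, start_gpa, size, slotid);
	return 0;
}

static int uv_unregister_mem_slot(uint64_t lpid, uint64_t slotid)
{
	printf("unregister lpid=%" PRIu64 " slot=%" PRIu64 "\n",
	       lpid, slotid);
	return 0;
}

static void commit_memory_region(uint64_t lpid, int svm_init_started,
				 const struct memslot *old,
				 const struct memslot *new,
				 enum kvm_mr_change change)
{
	/* Until the UV has issued H_SVM_INIT_START, nothing to do. */
	if (!svm_init_started)
		return;

	if (change == KVM_MR_CREATE)		/* memory plug */
		uv_register_mem_slot(lpid, new->base_gfn << PAGE_SHIFT,
				     new->npages * PAGE_SIZE, 0, new->id);
	else if (change == KVM_MR_DELETE)	/* memory unplug */
		uv_unregister_mem_slot(lpid, old->id);
	/* KVM_MR_MOVE is deliberately unhandled, matching the TODO. */
}

int main(void)
{
	struct memslot slot = { .base_gfn = 0x100000, .npages = 4096,
				.id = 1 };

	commit_memory_region(7, 1, NULL, &slot, KVM_MR_CREATE);	/* plug */
	commit_memory_region(7, 1, &slot, NULL, KVM_MR_DELETE);	/* unplug */
	return 0;
}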