From patchwork Mon Dec 10 03:58:18 2018
From: Suraj Jitindar Singh
To: kvm-ppc@vger.kernel.org
Cc: sjitindarsingh@gmail.com, kvm@vger.kernel.org, paulus@ozlabs.org, linuxppc-dev@lists.ozlabs.org, aik@ozlabs.ru
Subject: [PATCH V2 1/8] KVM: PPC: Only report KVM_CAP_SPAPR_TCE_VFIO on powernv machines
Date: Mon, 10 Dec 2018 14:58:18 +1100
Message-Id: <20181210035825.29404-2-sjitindarsingh@gmail.com>
In-Reply-To: <20181210035825.29404-1-sjitindarsingh@gmail.com>

The KVM capability KVM_CAP_SPAPR_TCE_VFIO is used to indicate the
availability of in-kernel TCE acceleration for VFIO. However, this is
currently only available on a powernv machine, not on a pseries
machine. Thus make this capability dependent on having the CPU feature
CPU_FTR_HVMODE.

Signed-off-by: Suraj Jitindar Singh
---
 arch/powerpc/kvm/powerpc.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 2869a299c4ed..95859c53a5cd 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -496,6 +496,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	int r;
 	/* Assume we're using HV mode when the HV module is loaded */
 	int hv_enabled = kvmppc_hv_ops ? 1 : 0;
+	int kvm_on_pseries = !cpu_has_feature(CPU_FTR_HVMODE);
 
 	if (kvm) {
 		/*
@@ -543,8 +544,11 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 #ifdef CONFIG_PPC_BOOK3S_64
 	case KVM_CAP_SPAPR_TCE:
 	case KVM_CAP_SPAPR_TCE_64:
-		/* fallthrough */
+		r = 1;
+		break;
 	case KVM_CAP_SPAPR_TCE_VFIO:
+		r = !kvm_on_pseries;
+		break;
 	case KVM_CAP_PPC_RTAS:
 	case KVM_CAP_PPC_FIXUP_HCALL:
 	case KVM_CAP_PPC_ENABLE_HCALL:
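For reference, userspace discovers this capability with the standard
KVM_CHECK_EXTENSION ioctl. A minimal sketch of such a probe (not part of
this series; error handling omitted):

/* Illustrative only: probe KVM_CAP_SPAPR_TCE_VFIO from userspace. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
        int kvm = open("/dev/kvm", O_RDWR);
        int vm = ioctl(kvm, KVM_CREATE_VM, 0);
        int r = ioctl(vm, KVM_CHECK_EXTENSION, KVM_CAP_SPAPR_TCE_VFIO);

        /* With this patch, r is 1 only on a powernv (HV mode) host;
         * on a pseries host it now reports 0. */
        printf("KVM_CAP_SPAPR_TCE_VFIO: %d\n", r);
        return 0;
}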
From patchwork Mon Dec 10 03:58:19 2018
From: Suraj Jitindar Singh
To: kvm-ppc@vger.kernel.org
Cc: sjitindarsingh@gmail.com, kvm@vger.kernel.org, paulus@ozlabs.org, linuxppc-dev@lists.ozlabs.org, aik@ozlabs.ru
Subject: [PATCH V2 2/8] KVM: PPC: Book3S HV: Add function kvmhv_vcpu_is_radix()
Date: Mon, 10 Dec 2018 14:58:19 +1100
Message-Id: <20181210035825.29404-3-sjitindarsingh@gmail.com>
In-Reply-To: <20181210035825.29404-1-sjitindarsingh@gmail.com>

There exists a function kvm_is_radix() which is used to determine
whether a kvm instance is using the radix MMU. However this only
applies to the first-level (L1) guest. Add a function
kvmhv_vcpu_is_radix() which can be used to determine whether the
current execution context of the vcpu is radix, taking into account
whether the vcpu is running a nested guest. Currently all nested
guests must be radix, but this may change in the future.
Signed-off-by: Suraj Jitindar Singh
---
 arch/powerpc/include/asm/kvm_book3s_64.h | 13 +++++++++++++
 arch/powerpc/kvm/book3s_hv_nested.c      |  1 +
 2 files changed, 14 insertions(+)

diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h
index 6d298145d564..7a9e472f2872 100644
--- a/arch/powerpc/include/asm/kvm_book3s_64.h
+++ b/arch/powerpc/include/asm/kvm_book3s_64.h
@@ -55,6 +55,7 @@ struct kvm_nested_guest {
 	cpumask_t need_tlb_flush;
 	cpumask_t cpu_in_guest;
 	short prev_cpu[NR_CPUS];
+	u8 radix;			/* is this nested guest radix */
 };
 
 /*
@@ -150,6 +151,18 @@ static inline bool kvm_is_radix(struct kvm *kvm)
 	return kvm->arch.radix;
 }
 
+static inline bool kvmhv_vcpu_is_radix(struct kvm_vcpu *vcpu)
+{
+	bool radix;
+
+	if (vcpu->arch.nested)
+		radix = vcpu->arch.nested->radix;
+	else
+		radix = kvm_is_radix(vcpu->kvm);
+
+	return radix;
+}
+
 #define KVM_DEFAULT_HPT_ORDER	24	/* 16MB HPT by default */
 #endif
 
diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
index 401d2ecbebc5..4fca462e54c4 100644
--- a/arch/powerpc/kvm/book3s_hv_nested.c
+++ b/arch/powerpc/kvm/book3s_hv_nested.c
@@ -480,6 +480,7 @@ struct kvm_nested_guest *kvmhv_alloc_nested(struct kvm *kvm, unsigned int lpid)
 	if (shadow_lpid < 0)
 		goto out_free2;
 	gp->shadow_lpid = shadow_lpid;
+	gp->radix = 1;
 
 	memset(gp->prev_cpu, -1, sizeof(gp->prev_cpu));
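The helper is meant for paths that may execute on behalf of a nested
guest, where checking only the L1 kvm instance is not enough. A minimal
sketch of such a caller (the surrounding helpers are assumptions for
illustration, not part of this patch):

/* Illustrative only: choose a translation path for the current context.
 * kvmhv_vcpu_is_radix() is true for a radix L1 guest and for any nested
 * guest (all nested guests are radix for now). */
static int example_xlate(struct kvm_vcpu *vcpu, gva_t eaddr)
{
        if (kvmhv_vcpu_is_radix(vcpu))
                return example_radix_walk(vcpu, eaddr);  /* assumed helper */

        return example_hpt_lookup(vcpu, eaddr);          /* assumed helper */
}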
From patchwork Mon Dec 10 03:58:20 2018
From: Suraj Jitindar Singh
To: kvm-ppc@vger.kernel.org
Cc: sjitindarsingh@gmail.com, kvm@vger.kernel.org, paulus@ozlabs.org, linuxppc-dev@lists.ozlabs.org, aik@ozlabs.ru
Subject: [PATCH V2 3/8] KVM: PPC: Book3S HV: Implement functions to access quadrants 1 & 2
Date: Mon, 10 Dec 2018 14:58:20 +1100
Message-Id: <20181210035825.29404-4-sjitindarsingh@gmail.com>
In-Reply-To: <20181210035825.29404-1-sjitindarsingh@gmail.com>

The POWER9 radix MMU has the concept of quadrants. The quadrant number
is the two high bits of the effective address and determines the fully
qualified address to be used for the translation. The fully qualified
address consists of the effective lpid, the effective pid and the
effective address. This gives four possible quadrants: 0, 1, 2 and 3.

When accessing these quadrants the fully qualified address is obtained
as follows:

Quadrant | Hypervisor        | Guest
------------------------------------------------
         | EA[0:1] = 0b00    | EA[0:1] = 0b00
    0    | effLPID = 0       | effLPID = LPIDR
         | effPID  = PIDR    | effPID  = PIDR
------------------------------------------------
         | EA[0:1] = 0b01    |
    1    | effLPID = LPIDR   | Invalid Access
         | effPID  = PIDR    |
------------------------------------------------
         | EA[0:1] = 0b10    |
    2    | effLPID = LPIDR   | Invalid Access
         | effPID  = 0       |
------------------------------------------------
         | EA[0:1] = 0b11    | EA[0:1] = 0b11
    3    | effLPID = 0       | effLPID = LPIDR
         | effPID  = 0       | effPID  = 0
------------------------------------------------

In the guest, quadrant 3 is normally used to address the operating
system, since this uses effPID = 0 and effLPID = LPIDR, meaning the PID
register doesn't need to be switched. Quadrant 0 is normally used to
address user space, since the effLPID and effPID are taken from the
corresponding registers.

In the host, quadrants 0 and 3 are used as above, however the effLPID
is always 0 to address the host.

Quadrants 1 and 2 can be used by the host to address guest memory using
a guest effective address.
Since the effLPID comes from the LPID
register, the host loads the LPID of the guest it would like to access
(and the PID of the process) and can then perform accesses to a guest
effective address. This means quadrant 1 can be used to address guest
user space and quadrant 2 can be used to address the guest operating
system from the hypervisor, using a guest effective address.

Access to the quadrants can cause a Hypervisor Data Storage Interrupt
(HDSI) due to being unable to perform partition-scoped translation.
Previously this could only be generated from a guest, and so the code
path expected us to take the KVM trampoline in the interrupt handler.
This is no longer the case, so modify the handler to call
bad_page_fault() to check whether we were expecting this fault, so that
we can handle it gracefully and just return with an error code. In the
hash MMU case we still raise an unknown exception, since quadrants
aren't defined for the hash MMU.

Signed-off-by: Suraj Jitindar Singh
---
 arch/powerpc/include/asm/kvm_book3s.h  |  4 ++
 arch/powerpc/kernel/exceptions-64s.S   |  9 ++++
 arch/powerpc/kvm/book3s_64_mmu_radix.c | 97 ++++++++++++++++++++++++++++++++++
 arch/powerpc/mm/fault.c                |  1 +
 4 files changed, 111 insertions(+)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 09f8e9ba69bc..5883fcce7009 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -188,6 +188,10 @@ extern int kvmppc_book3s_hcall_implemented(struct kvm *kvm, unsigned long hc);
 extern int kvmppc_book3s_radix_page_fault(struct kvm_run *run,
 			struct kvm_vcpu *vcpu,
 			unsigned long ea, unsigned long dsisr);
+extern long kvmhv_copy_from_guest_radix(struct kvm_vcpu *vcpu, gva_t eaddr,
+			void *to, unsigned long n);
+extern long kvmhv_copy_to_guest_radix(struct kvm_vcpu *vcpu, gva_t eaddr,
+			void *from, unsigned long n);
 extern int kvmppc_mmu_walk_radix_tree(struct kvm_vcpu *vcpu, gva_t eaddr,
 			struct kvmppc_pte *gpte, u64 root,
 			u64 *pte_ret_p);
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 89d32bb79d5e..db2691ff4c0b 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -995,7 +995,16 @@ EXC_COMMON_BEGIN(h_data_storage_common)
 	bl	save_nvgprs
 	RECONCILE_IRQ_STATE(r10, r11)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
+BEGIN_MMU_FTR_SECTION
+	ld	r4,PACA_EXGEN+EX_DAR(r13)
+	lwz	r5,PACA_EXGEN+EX_DSISR(r13)
+	std	r4,_DAR(r1)
+	std	r5,_DSISR(r1)
+	li	r5,SIGSEGV
+	bl	bad_page_fault
+MMU_FTR_SECTION_ELSE
 	bl	unknown_exception
+ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_TYPE_RADIX)
 	b	ret_from_except
diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index d68162ee159b..e1e3ef710bd0 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -29,6 +29,103 @@
  */
 static int p9_supported_radix_bits[4] = { 5, 9, 9, 13 };
 
+static unsigned long __kvmhv_copy_tofrom_guest_radix(int lpid, int pid,
+					gva_t eaddr, void *to, void *from,
+					unsigned long n)
+{
+	unsigned long quadrant, ret = n;
+	int old_pid, old_lpid;
+	bool is_load = !!to;
+
+	/* Can't access quadrants 1 or 2 in non-HV mode */
+	if (kvmhv_on_pseries()) {
+		/* TODO h-call */
+		return -EPERM;
+	}
+
+	quadrant = 1;
+	if (!pid)
+		quadrant = 2;
+	if (is_load)
+		from = (void *) (eaddr | (quadrant << 62));
+	else
+		to = (void *) (eaddr | (quadrant << 62));
+
+	preempt_disable();
+
+	/* switch the lpid first to avoid running host with unallocated pid */
+	old_lpid = mfspr(SPRN_LPID);
+	if (old_lpid != lpid)
+		mtspr(SPRN_LPID, lpid);
+	if (quadrant == 1) {
+		old_pid = mfspr(SPRN_PID);
+		if (old_pid != pid)
+			mtspr(SPRN_PID, pid);
+	}
+	isync();
+
+	pagefault_disable();
+	if (is_load)
+		ret = raw_copy_from_user(to, from, n);
+	else
+		ret = raw_copy_to_user(to, from, n);
+	pagefault_enable();
+
+	/* switch the pid first to avoid running host with unallocated pid */
+	if (quadrant == 1 && pid != old_pid)
+		mtspr(SPRN_PID, old_pid);
+	if (lpid != old_lpid)
+		mtspr(SPRN_LPID, old_lpid);
+	isync();
+
+	preempt_enable();
+
+	return ret;
+}
+
+static long kvmhv_copy_tofrom_guest_radix(struct kvm_vcpu *vcpu, gva_t eaddr,
+					  void *to, void *from, unsigned long n)
+{
+	int lpid = vcpu->kvm->arch.lpid;
+	int pid = vcpu->arch.pid;
+
+	/* This would cause a data segment intr so don't allow the access */
+	if (eaddr & (0x3FFUL << 52))
+		return -EINVAL;
+
+	/* Should we be using the nested lpid */
+	if (vcpu->arch.nested)
+		lpid = vcpu->arch.nested->shadow_lpid;
+
+	/* If accessing quadrant 3 then pid is expected to be 0 */
+	if (((eaddr >> 62) & 0x3) == 0x3)
+		pid = 0;
+
+	eaddr &= ~(0xFFFUL << 52);
+
+	return __kvmhv_copy_tofrom_guest_radix(lpid, pid, eaddr, to, from, n);
+}
+
+long kvmhv_copy_from_guest_radix(struct kvm_vcpu *vcpu, gva_t eaddr, void *to,
+				 unsigned long n)
+{
+	long ret;
+
+	ret = kvmhv_copy_tofrom_guest_radix(vcpu, eaddr, to, NULL, n);
+	if (ret > 0)
+		memset(to + (n - ret), 0, ret);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(kvmhv_copy_from_guest_radix);
+
+long kvmhv_copy_to_guest_radix(struct kvm_vcpu *vcpu, gva_t eaddr, void *from,
+			       unsigned long n)
+{
+	return kvmhv_copy_tofrom_guest_radix(vcpu, eaddr, NULL, from, n);
+}
+EXPORT_SYMBOL_GPL(kvmhv_copy_to_guest_radix);
+
 int kvmppc_mmu_walk_radix_tree(struct kvm_vcpu *vcpu, gva_t eaddr,
 			       struct kvmppc_pte *gpte, u64 root,
 			       u64 *pte_ret_p)
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 1697e903bbf2..2e6fb1d758c3 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -636,6 +636,7 @@ void bad_page_fault(struct pt_regs *regs, unsigned long address, int sig)
 	switch (TRAP(regs)) {
 	case 0x300:
 	case 0x380:
+	case 0xe00:
 		printk(KERN_ALERT "Unable to handle kernel paging request for "
 			"data at address 0x%08lx\n", regs->dar);
 		break;
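To make the quadrant trick above concrete, here is a stripped-down
sketch of the address formation only (illustrative; it omits the
LPID/PID save and restore, preemption and pagefault handling that the
real __kvmhv_copy_tofrom_guest_radix() performs):

/* Illustrative only: build a quadrant 1 or 2 effective address.
 * EA[0:1] (the two most-significant bits) select the quadrant, so the
 * hypervisor can reach guest user space (quadrant 1, effPID = PIDR) or
 * the guest kernel (quadrant 2, effPID = 0) once the guest's LPID (and
 * PID, for quadrant 1) are loaded into the corresponding SPRs. */
static void *quadrant_ea(unsigned long guest_ea, int pid)
{
        unsigned long quadrant = pid ? 1 : 2;

        return (void *)(guest_ea | (quadrant << 62));
}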
From patchwork Mon Dec 10 03:58:21 2018
From: Suraj Jitindar Singh
To: kvm-ppc@vger.kernel.org
Cc: sjitindarsingh@gmail.com, kvm@vger.kernel.org, paulus@ozlabs.org, linuxppc-dev@lists.ozlabs.org, aik@ozlabs.ru
Subject: [PATCH V2 4/8] KVM: PPC: Add load_from_eaddr and store_to_eaddr to the kvmppc_ops struct
Date: Mon, 10 Dec 2018 14:58:21 +1100
Message-Id: <20181210035825.29404-5-sjitindarsingh@gmail.com>
In-Reply-To: <20181210035825.29404-1-sjitindarsingh@gmail.com>

The kvmppc_ops struct is used to store function pointers to kvm
implementation-specific functions. Introduce two new functions,
load_from_eaddr and store_to_eaddr, to be used to load from and store
to a guest effective address respectively. Also implement these for the
kvm-hv module. If we are using the radix MMU then we can call the
functions to access quadrants 1 and 2.
Signed-off-by: Suraj Jitindar Singh
---
 arch/powerpc/include/asm/kvm_ppc.h |  4 ++++
 arch/powerpc/kvm/book3s_hv.c       | 40 ++++++++++++++++++++++++++++++++++
 2 files changed, 44 insertions(+)

diff --git a/arch/powerpc/include/asm/kvm_ppc.h b/arch/powerpc/include/asm/kvm_ppc.h
index 9b89b1918dfc..159dd76700cb 100644
--- a/arch/powerpc/include/asm/kvm_ppc.h
+++ b/arch/powerpc/include/asm/kvm_ppc.h
@@ -326,6 +326,10 @@ struct kvmppc_ops {
 			    unsigned long flags);
 	void (*giveup_ext)(struct kvm_vcpu *vcpu, ulong msr);
 	int (*enable_nested)(struct kvm *kvm);
+	int (*load_from_eaddr)(struct kvm_vcpu *vcpu, ulong *eaddr, void *ptr,
+			       int size);
+	int (*store_to_eaddr)(struct kvm_vcpu *vcpu, ulong *eaddr, void *ptr,
+			      int size);
 };
 
 extern struct kvmppc_ops *kvmppc_hv_ops;
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index a56f8413758a..8a0921176a60 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -5214,6 +5214,44 @@ static int kvmhv_enable_nested(struct kvm *kvm)
 	return 0;
 }
 
+static int kvmhv_load_from_eaddr(struct kvm_vcpu *vcpu, ulong *eaddr, void *ptr,
+				 int size)
+{
+	int rc = -EINVAL;
+
+	if (kvmhv_vcpu_is_radix(vcpu)) {
+		rc = kvmhv_copy_from_guest_radix(vcpu, *eaddr, ptr, size);
+
+		if (rc > 0)
+			rc = -EINVAL;
+	}
+
+	/* For now quadrants are the only way to access nested guest memory */
+	if (rc && vcpu->arch.nested)
+		rc = -EAGAIN;
+
+	return rc;
+}
+
+static int kvmhv_store_to_eaddr(struct kvm_vcpu *vcpu, ulong *eaddr, void *ptr,
+				int size)
+{
+	int rc = -EINVAL;
+
+	if (kvmhv_vcpu_is_radix(vcpu)) {
+		rc = kvmhv_copy_to_guest_radix(vcpu, *eaddr, ptr, size);
+
+		if (rc > 0)
+			rc = -EINVAL;
+	}
+
+	/* For now quadrants are the only way to access nested guest memory */
+	if (rc && vcpu->arch.nested)
+		rc = -EAGAIN;
+
+	return rc;
+}
+
 static struct kvmppc_ops kvm_ops_hv = {
 	.get_sregs = kvm_arch_vcpu_ioctl_get_sregs_hv,
 	.set_sregs = kvm_arch_vcpu_ioctl_set_sregs_hv,
@@ -5254,6 +5292,8 @@ static struct kvmppc_ops kvm_ops_hv = {
 	.get_rmmu_info = kvmhv_get_rmmu_info,
 	.set_smt_mode = kvmhv_set_smt_mode,
 	.enable_nested = kvmhv_enable_nested,
+	.load_from_eaddr = kvmhv_load_from_eaddr,
+	.store_to_eaddr = kvmhv_store_to_eaddr,
 };
 
 static int kvm_init_subcore_bitmap(void)
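The return-code convention used by the kvm-hv implementation above is:
0 on success, -EINVAL when a partial copy or non-radix context means the
caller should fall back, and -EAGAIN when the access was for a nested
guest and must be retried after the translation is faulted in. A hedged
sketch of a generic caller (illustrative only, not part of this patch):

/* Illustrative only: dispatch through the new ops, which may be absent
 * for KVM implementations that do not provide them (e.g. kvm-pr). */
static int example_load_guest_ea(struct kvm_vcpu *vcpu, ulong ea,
                                 void *buf, int size)
{
        struct kvmppc_ops *ops = vcpu->kvm->arch.kvm_ops;

        if (!ops || !ops->load_from_eaddr)
                return -EINVAL;         /* caller falls back to its old path */

        return ops->load_from_eaddr(vcpu, &ea, buf, size);
}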
From patchwork Mon Dec 10 03:58:22 2018
From: Suraj Jitindar Singh
To: kvm-ppc@vger.kernel.org
Cc: sjitindarsingh@gmail.com, kvm@vger.kernel.org, paulus@ozlabs.org, linuxppc-dev@lists.ozlabs.org, aik@ozlabs.ru
Subject: [PATCH V2 5/8] KVM: PPC: Update kvmppc_st and kvmppc_ld to use quadrants
Date: Mon, 10 Dec 2018 14:58:22 +1100
Message-Id: <20181210035825.29404-6-sjitindarsingh@gmail.com>
In-Reply-To: <20181210035825.29404-1-sjitindarsingh@gmail.com>

The functions kvmppc_st and kvmppc_ld are used to access guest memory
from the host using a guest effective address. They do so by
translating through the process table to obtain a guest real address
and then using kvm_read_guest or kvm_write_guest to make the access
with the guest real address. This method of access, however, only works
for L1 guests and will give incorrect results for a nested guest.

We can however use the store_to_eaddr and load_from_eaddr kvmppc_ops to
perform the access for a nested guest (and an L1 guest). So attempt
this method first and fall back to the old method if this fails and we
aren't running a nested guest. At this stage there is no fallback
method to perform the access for a nested guest, and this is left as a
future improvement.
For now we will return to the nested guest and rely on the fact that a
translation should be faulted in before retrying the access.

Signed-off-by: Suraj Jitindar Singh
---
 arch/powerpc/kvm/powerpc.c | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 95859c53a5cd..cb029fcab404 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -331,10 +331,17 @@ int kvmppc_st(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
 {
 	ulong mp_pa = vcpu->arch.magic_page_pa & KVM_PAM & PAGE_MASK;
 	struct kvmppc_pte pte;
-	int r;
+	int r = -EINVAL;
 
 	vcpu->stat.st++;
 
+	if (vcpu->kvm->arch.kvm_ops && vcpu->kvm->arch.kvm_ops->store_to_eaddr)
+		r = vcpu->kvm->arch.kvm_ops->store_to_eaddr(vcpu, eaddr, ptr,
+							    size);
+
+	if ((!r) || (r == -EAGAIN))
+		return r;
+
 	r = kvmppc_xlate(vcpu, *eaddr, data ? XLATE_DATA : XLATE_INST,
 			 XLATE_WRITE, &pte);
 	if (r < 0)
@@ -367,10 +374,17 @@ int kvmppc_ld(struct kvm_vcpu *vcpu, ulong *eaddr, int size, void *ptr,
 {
 	ulong mp_pa = vcpu->arch.magic_page_pa & KVM_PAM & PAGE_MASK;
 	struct kvmppc_pte pte;
-	int rc;
+	int rc = -EINVAL;
 
 	vcpu->stat.ld++;
 
+	if (vcpu->kvm->arch.kvm_ops && vcpu->kvm->arch.kvm_ops->load_from_eaddr)
+		rc = vcpu->kvm->arch.kvm_ops->load_from_eaddr(vcpu, eaddr, ptr,
+							      size);
+
+	if ((!rc) || (rc == -EAGAIN))
+		return rc;
+
 	rc = kvmppc_xlate(vcpu, *eaddr, data ? XLATE_DATA : XLATE_INST,
 			  XLATE_READ, &pte);
 	if (rc)
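A sketch of how a caller observes the new kvmppc_st() behaviour
(illustrative only; the function shown is an assumption, not kernel
code):

/* Illustrative only: the three outcomes after this change.
 *   0        the quadrant (or legacy) path completed the store
 *   -EAGAIN  nested guest with no fallback; return to the guest so the
 *            translation is faulted in, then the access is retried
 *   other    error from the legacy L1 translate-then-copy path */
static int example_write_guest_ea(struct kvm_vcpu *vcpu, ulong ea,
                                  void *ptr, int size)
{
        return kvmppc_st(vcpu, &ea, size, ptr, true);
}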
From patchwork Mon Dec 10 03:58:23 2018
From: Suraj Jitindar Singh
To: kvm-ppc@vger.kernel.org
Cc: sjitindarsingh@gmail.com, kvm@vger.kernel.org, paulus@ozlabs.org, linuxppc-dev@lists.ozlabs.org, aik@ozlabs.ru
Subject: [PATCH V2 6/8] KVM: PPC: Book3S HV: Allow passthrough of an emulated device to an L2 guest
Date: Mon, 10 Dec 2018 14:58:23 +1100
Message-Id: <20181210035825.29404-7-sjitindarsingh@gmail.com>
In-Reply-To: <20181210035825.29404-1-sjitindarsingh@gmail.com>

Allow for a device which is being emulated at L0 (the host) for an L1
guest to be passed through to a nested (L2) guest. The existing
kvmppc_hv_emulate_mmio function can be used here.

The main challenge is that, for a load, the result must be stored into
the L2 gpr, not an L1 gpr as would normally be the case after going out
to qemu to complete the operation. This is a problem because at this
point the L2 gpr state has been written back into L1 memory.

To work around this, we store the address in L1 memory of the L2 gpr
where the result of the load is to be stored, and use the new io_gpr
value KVM_MMIO_REG_NESTED_GPR to indicate that this is a nested load
for which completion must be done when returning back into the kernel.
Then in kvmppc_complete_mmio_load() the resultant value is written into
L1 memory at the location of the indicated L2 gpr.

Note that we don't currently let an L1 guest emulate a device for an L2
guest which is then passed through to an L3 guest.
Signed-off-by: Suraj Jitindar Singh
---
 arch/powerpc/include/asm/kvm_book3s.h |  2 +-
 arch/powerpc/include/asm/kvm_host.h   |  3 +++
 arch/powerpc/kvm/book3s_hv.c          | 12 ++++++----
 arch/powerpc/kvm/book3s_hv_nested.c   | 43 ++++++++++++++++++++++++++++++-----
 arch/powerpc/kvm/powerpc.c            |  6 +++++
 5 files changed, 55 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index 5883fcce7009..ea94110bfde4 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -311,7 +311,7 @@ int kvmhv_run_single_vcpu(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu,
 void kvmhv_save_hv_regs(struct kvm_vcpu *vcpu, struct hv_guest_state *hr);
 void kvmhv_restore_hv_return_state(struct kvm_vcpu *vcpu,
 				   struct hv_guest_state *hr);
-long int kvmhv_nested_page_fault(struct kvm_vcpu *vcpu);
+long int kvmhv_nested_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu);
 
 void kvmppc_giveup_fac(struct kvm_vcpu *vcpu, ulong fac);
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index fac6f631ed29..7a2483a139cf 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -793,6 +793,7 @@ struct kvm_vcpu_arch {
 	/* For support of nested guests */
 	struct kvm_nested_guest *nested;
 	u32 nested_vcpu_id;
+	gpa_t nested_io_gpr;
 #endif
 
 #ifdef CONFIG_KVM_BOOK3S_HV_EXIT_TIMING
@@ -827,6 +828,8 @@ struct kvm_vcpu_arch {
 #define KVM_MMIO_REG_FQPR	0x00c0
 #define KVM_MMIO_REG_VSX	0x0100
 #define KVM_MMIO_REG_VMX	0x0180
+#define KVM_MMIO_REG_NESTED_GPR	0xffc0
+
 #define __KVM_HAVE_ARCH_WQP
 #define __KVM_HAVE_CREATE_DEVICE
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 8a0921176a60..2280bc4778f5 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -985,6 +985,10 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
 			kvmppc_set_gpr(vcpu, 3, 0);
 			vcpu->arch.hcall_needed = 0;
 			return -EINTR;
+		} else if (ret == H_TOO_HARD) {
+			kvmppc_set_gpr(vcpu, 3, 0);
+			vcpu->arch.hcall_needed = 0;
+			return RESUME_HOST;
 		}
 		break;
 	case H_TLB_INVALIDATE:
@@ -1336,7 +1340,7 @@ static int kvmppc_handle_exit_hv(struct kvm_run *run, struct kvm_vcpu *vcpu,
 	return r;
 }
 
-static int kvmppc_handle_nested_exit(struct kvm_vcpu *vcpu)
+static int kvmppc_handle_nested_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
 	int r;
 	int srcu_idx;
@@ -1394,7 +1398,7 @@ static int kvmppc_handle_nested_exit(struct kvm_vcpu *vcpu)
 	 */
 	case BOOK3S_INTERRUPT_H_DATA_STORAGE:
 		srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
-		r = kvmhv_nested_page_fault(vcpu);
+		r = kvmhv_nested_page_fault(run, vcpu);
 		srcu_read_unlock(&vcpu->kvm->srcu, srcu_idx);
 		break;
 	case BOOK3S_INTERRUPT_H_INST_STORAGE:
@@ -1404,7 +1408,7 @@ static int kvmppc_handle_nested_exit(struct kvm_vcpu *vcpu)
 		if (vcpu->arch.shregs.msr & HSRR1_HISI_WRITE)
 			vcpu->arch.fault_dsisr |= DSISR_ISSTORE;
 		srcu_idx = srcu_read_lock(&vcpu->kvm->srcu);
-		r = kvmhv_nested_page_fault(vcpu);
+		r = kvmhv_nested_page_fault(run, vcpu);
 		srcu_read_unlock(&vcpu->kvm->srcu, srcu_idx);
 		break;
@@ -4059,7 +4063,7 @@ int kvmhv_run_single_vcpu(struct kvm_run *kvm_run,
 		if (!nested)
 			r = kvmppc_handle_exit_hv(kvm_run, vcpu, current);
 		else
-			r = kvmppc_handle_nested_exit(vcpu);
+			r = kvmppc_handle_nested_exit(kvm_run, vcpu);
 	}
 	vcpu->arch.ret = r;
diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
index 4fca462e54c4..991f40ce4eea 100644
--- a/arch/powerpc/kvm/book3s_hv_nested.c
+++ b/arch/powerpc/kvm/book3s_hv_nested.c
@@ -195,6 +195,26 @@ void kvmhv_restore_hv_return_state(struct kvm_vcpu *vcpu,
 	vcpu->arch.ppr = hr->ppr;
 }
 
+static void kvmhv_nested_mmio_needed(struct kvm_vcpu *vcpu, u64 regs_ptr)
+{
+	/* No need to reflect the page fault to L1, we've handled it */
+	vcpu->arch.trap = 0;
+
+	/*
+	 * Since the L2 gprs have already been written back into L1 memory when
+	 * we complete the mmio, store the L1 memory location of the L2 gpr
+	 * being loaded into by the mmio so that the loaded value can be
+	 * written there in kvmppc_complete_mmio_load()
+	 */
+	if (((vcpu->arch.io_gpr & KVM_MMIO_REG_EXT_MASK) == KVM_MMIO_REG_GPR)
+	    && (vcpu->mmio_is_write == 0)) {
+		vcpu->arch.nested_io_gpr = (gpa_t) regs_ptr +
+					   offsetof(struct pt_regs,
+						    gpr[vcpu->arch.io_gpr]);
+		vcpu->arch.io_gpr = KVM_MMIO_REG_NESTED_GPR;
+	}
+}
+
 long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
 {
 	long int err, r;
@@ -316,6 +336,11 @@ long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu)
 	if (r == -EINTR)
 		return H_INTERRUPT;
 
+	if (vcpu->mmio_needed) {
+		kvmhv_nested_mmio_needed(vcpu, regs_ptr);
+		return H_TOO_HARD;
+	}
+
 	return vcpu->arch.trap;
 }
 
@@ -1100,7 +1125,8 @@ static inline int kvmppc_radix_shift_to_level(int shift)
 }
 
 /* called with gp->tlb_lock held */
-static long int __kvmhv_nested_page_fault(struct kvm_vcpu *vcpu,
+static long int __kvmhv_nested_page_fault(struct kvm_run *run,
+					  struct kvm_vcpu *vcpu,
 					  struct kvm_nested_guest *gp)
 {
 	struct kvm *kvm = vcpu->kvm;
@@ -1181,9 +1207,14 @@ static long int __kvmhv_nested_page_fault(struct kvm_vcpu *vcpu,
 			kvmppc_core_queue_data_storage(vcpu, ea, dsisr);
 			return RESUME_GUEST;
 		}
-		/* passthrough of emulated MMIO case... */
-		pr_err("emulated MMIO passthrough?\n");
-		return -EINVAL;
+
+		/* passthrough of emulated MMIO case */
+		if (kvmhv_on_pseries()) {
+			pr_err("emulated MMIO passthrough?\n");
+			return -EINVAL;
+		}
+
+		return kvmppc_hv_emulate_mmio(run, vcpu, gpa, ea, writing);
 	}
 	if (memslot->flags & KVM_MEM_READONLY) {
 		if (writing) {
@@ -1265,13 +1296,13 @@ static long int __kvmhv_nested_page_fault(struct kvm_vcpu *vcpu,
 	return RESUME_GUEST;
 }
 
-long int kvmhv_nested_page_fault(struct kvm_vcpu *vcpu)
+long int kvmhv_nested_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu)
 {
 	struct kvm_nested_guest *gp = vcpu->arch.nested;
 	long int ret;
 
 	mutex_lock(&gp->tlb_lock);
-	ret = __kvmhv_nested_page_fault(vcpu, gp);
+	ret = __kvmhv_nested_page_fault(run, vcpu, gp);
 	mutex_unlock(&gp->tlb_lock);
 	return ret;
 }
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index cb029fcab404..fbfc305bd77e 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -1210,6 +1210,12 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
 		kvmppc_set_vmx_byte(vcpu, gpr);
 		break;
 #endif
+	case KVM_MMIO_REG_NESTED_GPR:
+		if (kvmppc_need_byteswap(vcpu))
+			gpr = swab64(gpr);
+		kvm_vcpu_write_guest(vcpu, vcpu->arch.nested_io_gpr, &gpr,
+				     sizeof(gpr));
+		break;
 	default:
 		BUG();
 	}
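The key bookkeeping step above is remembering where in L1 memory the L2
GPR lives. A minimal sketch of that address computation (illustrative
only; regs_ptr is the L1 guest-physical address of the L2 vcpu's struct
pt_regs that L1 passed to H_ENTER_NESTED):

/* Illustrative only: L1 gpa of L2's GPR[n] inside its saved pt_regs. */
static gpa_t example_l2_gpr_addr(gpa_t regs_ptr, int n)
{
        return regs_ptr + offsetof(struct pt_regs, gpr[n]);
}

kvmppc_complete_mmio_load() then writes the loaded value to that gpa
with kvm_vcpu_write_guest(), byte-swapped if needed, instead of into an
L1 register.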
From patchwork Mon Dec 10 03:58:24 2018
From: Suraj Jitindar Singh
To: kvm-ppc@vger.kernel.org
Cc: sjitindarsingh@gmail.com, kvm@vger.kernel.org, paulus@ozlabs.org, linuxppc-dev@lists.ozlabs.org, aik@ozlabs.ru
Subject: [PATCH V2 7/8] KVM: PPC: Introduce new hcall H_COPY_TOFROM_GUEST to access quadrants 1 & 2
Date: Mon, 10 Dec 2018 14:58:24 +1100
Message-Id: <20181210035825.29404-8-sjitindarsingh@gmail.com>
In-Reply-To: <20181210035825.29404-1-sjitindarsingh@gmail.com>

A guest cannot access quadrants 1 or 2 as this would result in an
exception.
Thus introduce the hcall H_COPY_TOFROM_GUEST to be used by a guest when
it wants to perform an access to quadrants 1 or 2, for example when it
wants to access memory for one of its nested guests.

Also provide an implementation for the kvm-hv module.

Signed-off-by: Suraj Jitindar Singh
---
 arch/powerpc/include/asm/hvcall.h      |  1 +
 arch/powerpc/include/asm/kvm_book3s.h  |  4 ++
 arch/powerpc/kvm/book3s_64_mmu_radix.c |  7 ++--
 arch/powerpc/kvm/book3s_hv.c           |  6 ++-
 arch/powerpc/kvm/book3s_hv_nested.c    | 75 ++++++++++++++++++++++++++++++++++
 5 files changed, 89 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
index 33a4fc891947..463c63a9fcf1 100644
--- a/arch/powerpc/include/asm/hvcall.h
+++ b/arch/powerpc/include/asm/hvcall.h
@@ -335,6 +335,7 @@
 #define H_SET_PARTITION_TABLE	0xF800
 #define H_ENTER_NESTED		0xF804
 #define H_TLB_INVALIDATE	0xF808
+#define H_COPY_TOFROM_GUEST	0xF80C
 
 /* Values for 2nd argument to H_SET_MODE */
 #define H_SET_MODE_RESOURCE_SET_CIABR	1
diff --git a/arch/powerpc/include/asm/kvm_book3s.h b/arch/powerpc/include/asm/kvm_book3s.h
index ea94110bfde4..720483733bb2 100644
--- a/arch/powerpc/include/asm/kvm_book3s.h
+++ b/arch/powerpc/include/asm/kvm_book3s.h
@@ -188,6 +188,9 @@ extern int kvmppc_book3s_hcall_implemented(struct kvm *kvm, unsigned long hc);
 extern int kvmppc_book3s_radix_page_fault(struct kvm_run *run,
 			struct kvm_vcpu *vcpu,
 			unsigned long ea, unsigned long dsisr);
+extern unsigned long __kvmhv_copy_tofrom_guest_radix(int lpid, int pid,
+			gva_t eaddr, void *to, void *from,
+			unsigned long n);
 extern long kvmhv_copy_from_guest_radix(struct kvm_vcpu *vcpu, gva_t eaddr,
 			void *to, unsigned long n);
 extern long kvmhv_copy_to_guest_radix(struct kvm_vcpu *vcpu, gva_t eaddr,
@@ -302,6 +305,7 @@ long kvmhv_nested_init(void);
 void kvmhv_nested_exit(void);
 void kvmhv_vm_nested_init(struct kvm *kvm);
 long kvmhv_set_partition_table(struct kvm_vcpu *vcpu);
+long kvmhv_copy_tofrom_guest_nested(struct kvm_vcpu *vcpu);
 void kvmhv_set_ptbl_entry(unsigned int lpid, u64 dw0, u64 dw1);
 void kvmhv_release_all_nested(struct kvm *kvm);
 long kvmhv_enter_nested_guest(struct kvm_vcpu *vcpu);
diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index e1e3ef710bd0..da89d10e5886 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -29,9 +29,9 @@
  */
 static int p9_supported_radix_bits[4] = { 5, 9, 9, 13 };
 
-static unsigned long __kvmhv_copy_tofrom_guest_radix(int lpid, int pid,
-					gva_t eaddr, void *to, void *from,
-					unsigned long n)
+unsigned long __kvmhv_copy_tofrom_guest_radix(int lpid, int pid,
+					      gva_t eaddr, void *to, void *from,
+					      unsigned long n)
 {
 	unsigned long quadrant, ret = n;
 	int old_pid, old_lpid;
@@ -82,6 +82,7 @@ static unsigned long __kvmhv_copy_tofrom_guest_radix(int lpid, int pid,
 
 	return ret;
 }
+EXPORT_SYMBOL_GPL(__kvmhv_copy_tofrom_guest_radix);
 
 static long kvmhv_copy_tofrom_guest_radix(struct kvm_vcpu *vcpu, gva_t eaddr,
 					  void *to, void *from, unsigned long n)
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 2280bc4778f5..bd07f9b7c5e8 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -996,7 +996,11 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
 		if (nesting_enabled(vcpu->kvm))
 			ret = kvmhv_do_nested_tlbie(vcpu);
 		break;
-
+	case H_COPY_TOFROM_GUEST:
+		ret = H_FUNCTION;
+		if (nesting_enabled(vcpu->kvm))
+			ret = kvmhv_copy_tofrom_guest_nested(vcpu);
+		break;
 	default:
 		return RESUME_HOST;
 	}
diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
index 991f40ce4eea..f54301fcfbe4 100644
--- a/arch/powerpc/kvm/book3s_hv_nested.c
+++ b/arch/powerpc/kvm/book3s_hv_nested.c
@@ -462,6 +462,81 @@ long kvmhv_set_partition_table(struct kvm_vcpu *vcpu)
 }
 
 /*
+ * Handle the H_COPY_TOFROM_GUEST hcall.
+ * r4 = L1 lpid of nested guest
+ * r5 = pid
+ * r6 = eaddr to access
+ * r7 = to buffer (L1 gpa)
+ * r8 = from buffer (L1 gpa)
+ * r9 = n bytes to copy
+ */
+long kvmhv_copy_tofrom_guest_nested(struct kvm_vcpu *vcpu)
+{
+	struct kvm_nested_guest *gp;
+	int l1_lpid = kvmppc_get_gpr(vcpu, 4);
+	int pid = kvmppc_get_gpr(vcpu, 5);
+	gva_t eaddr = kvmppc_get_gpr(vcpu, 6);
+	void *gp_to = (void *) kvmppc_get_gpr(vcpu, 7);
+	void *gp_from = (void *) kvmppc_get_gpr(vcpu, 8);
+	void *buf;
+	unsigned long n = kvmppc_get_gpr(vcpu, 9);
+	bool is_load = !!gp_to;
+	long rc;
+
+	if (gp_to && gp_from) /* One must be NULL to determine the direction */
+		return H_PARAMETER;
+
+	if (eaddr & (0xFFFUL << 52))
+		return H_PARAMETER;
+
+	buf = kzalloc(n, GFP_KERNEL);
+	if (!buf)
+		return H_NO_MEM;
+
+	gp = kvmhv_get_nested(vcpu->kvm, l1_lpid, false);
+	if (!gp) {
+		rc = H_PARAMETER;
+		goto out_free;
+	}
+
+	mutex_lock(&gp->tlb_lock);
+
+	if (is_load) {
+		/* Load from the nested guest into our buffer */
+		rc = __kvmhv_copy_tofrom_guest_radix(gp->shadow_lpid, pid,
+						     eaddr, buf, NULL, n);
+		if (rc)
+			goto not_found;
+
+		/* Write what was loaded into our buffer back to the L1 guest */
+		rc = kvmppc_st(vcpu, (ulong *) &gp_to, n, buf, true);
+		if (rc)
+			goto not_found;
+	} else {
+		/* Load the data to be stored from the L1 guest into our buf */
+		rc = kvmppc_ld(vcpu, (ulong *) &gp_from, n, buf, true);
+		if (rc)
+			goto not_found;
+
+		/* Store from our buffer into the nested guest */
+		rc = __kvmhv_copy_tofrom_guest_radix(gp->shadow_lpid, pid,
+						     eaddr, NULL, buf, n);
+		if (rc)
+			goto not_found;
+	}
+
+out_unlock:
+	mutex_unlock(&gp->tlb_lock);
+	kvmhv_put_nested(gp);
+out_free:
+	kfree(buf);
+	return rc;
+not_found:
+	rc = H_NOT_FOUND;
+	goto out_unlock;
+}
+
+/*
  * Reload the partition table entry for a guest.
  * Caller must hold gp->tlb_lock.
  */
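For reference, a sketch of the L1 guest side of this hcall, mirroring
the register convention documented in the comment above (illustrative
wrapper only; the actual guest-side user is wired up in the next patch
via plpar_hcall_norets()):

/* Illustrative only: L1 guest invocation of H_COPY_TOFROM_GUEST.
 * Exactly one of to_gpa/from_gpa names a buffer (an L1 gpa per the hcall
 * comment above); the other is 0, which tells the hypervisor the
 * direction of the copy. */
static long example_h_copy_tofrom_guest(int l2_lpid, int pid,
					unsigned long eaddr,
					unsigned long to_gpa,
					unsigned long from_gpa,
					unsigned long n)
{
	return plpar_hcall_norets(H_COPY_TOFROM_GUEST, l2_lpid, pid, eaddr,
				  to_gpa, from_gpa, n);
}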
From patchwork Mon Dec 10 03:58:25 2018
From: Suraj Jitindar Singh
To: kvm-ppc@vger.kernel.org
Cc: sjitindarsingh@gmail.com, kvm@vger.kernel.org, paulus@ozlabs.org, linuxppc-dev@lists.ozlabs.org, aik@ozlabs.ru
Subject: [PATCH V2 8/8] KVM: PPC: Book3S HV: Allow passthrough of an emulated device to an L3 guest
Date: Mon, 10 Dec 2018 14:58:25 +1100
Message-Id: <20181210035825.29404-9-sjitindarsingh@gmail.com>
In-Reply-To: <20181210035825.29404-1-sjitindarsingh@gmail.com>

Previously, when a device was being emulated by an L1 guest for an L2
guest, that device couldn't then be passed through to an L3 guest. This
was because the L1 guest had no method for accessing L3 memory. The
hcall H_COPY_TOFROM_GUEST now provides this access, so this passthrough
setup can be allowed.

Signed-off-by: Suraj Jitindar Singh
---
 arch/powerpc/kvm/book3s_64_mmu_radix.c | 9 ++++-----
 arch/powerpc/kvm/book3s_hv_nested.c    | 5 -----
 2 files changed, 4 insertions(+), 10 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
index da89d10e5886..cf16e9d207a5 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
@@ -37,11 +37,10 @@ unsigned long __kvmhv_copy_tofrom_guest_radix(int lpid, int pid,
 	int old_pid, old_lpid;
 	bool is_load = !!to;
 
-	/* Can't access quadrants 1 or 2 in non-HV mode */
-	if (kvmhv_on_pseries()) {
-		/* TODO h-call */
-		return -EPERM;
-	}
+	/* Can't access quadrants 1 or 2 in non-HV mode, call the HV to do it */
+	if (kvmhv_on_pseries())
+		return plpar_hcall_norets(H_COPY_TOFROM_GUEST, lpid, pid, eaddr,
+					  to, from, n);
 
 	quadrant = 1;
 	if (!pid)
diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
index f54301fcfbe4..acde90eb56f7 100644
--- a/arch/powerpc/kvm/book3s_hv_nested.c
+++ b/arch/powerpc/kvm/book3s_hv_nested.c
@@ -1284,11 +1284,6 @@ static long int __kvmhv_nested_page_fault(struct kvm_run *run,
 		}
 
 		/* passthrough of emulated MMIO case */
-		if (kvmhv_on_pseries()) {
-			pr_err("emulated MMIO passthrough?\n");
-			return -EINVAL;
-		}
-
 		return kvmppc_hv_emulate_mmio(run, vcpu, gpa, ea, writing);
 	}
 	if (memslot->flags & KVM_MEM_READONLY) {