From patchwork Tue Feb 27 17:37:17 2018
X-Patchwork-Submitter: simon
X-Patchwork-Id: 10245869
From: wei.guo.simon@gmail.com
To: linuxppc-dev@lists.ozlabs.org
Cc: Paul Mackerras, kvm@vger.kernel.org, kvm-ppc@vger.kernel.org, Simon Guo
Subject: [PATCH v2 10/30] KVM: PPC: Book3S PR: Sync TM bits to shadow msr for problem state guest
Date: Wed, 28 Feb 2018 01:37:17 +0800
Message-Id: <1519753057-11059-11-git-send-email-wei.guo.simon@gmail.com>
In-Reply-To: <1519753057-11059-1-git-send-email-wei.guo.simon@gmail.com>
References: <1519753057-11059-1-git-send-email-wei.guo.simon@gmail.com>

From: Simon Guo

The MSR TS bits can be modified with non-privileged instructions such as
tbegin./tend. That means the guest can change the MSR value "silently",
without notifying the host. It is necessary to sync the TM bits to the
host so that the host can calculate the shadow MSR correctly.

Note that a privileged guest will always fail its transactions at present,
so we only take care of problem state guests.

The logic is put into kvmppc_copy_from_svcpu() so that
kvmppc_handle_exit_pr() can use the correct MSR TM bits even when
preemption occurs.

Signed-off-by: Simon Guo
---
 arch/powerpc/kvm/book3s_pr.c | 73 ++++++++++++++++++++++++++++++--------------
 1 file changed, 50 insertions(+), 23 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 5c9e43f..4bf76c9 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -179,10 +179,36 @@ void kvmppc_copy_to_svcpu(struct kvmppc_book3s_shadow_vcpu *svcpu,
 	svcpu->in_use = true;
 }
 
+static void kvmppc_recalc_shadow_msr(struct kvm_vcpu *vcpu)
+{
+	ulong guest_msr = kvmppc_get_msr(vcpu);
+	ulong smsr = guest_msr;
+
+	/* Guest MSR values */
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+	smsr &= MSR_FE0 | MSR_FE1 | MSR_SF | MSR_SE | MSR_BE | MSR_LE |
+		MSR_TM | MSR_TS_MASK;
+#else
+	smsr &= MSR_FE0 | MSR_FE1 | MSR_SF | MSR_SE | MSR_BE | MSR_LE;
+#endif
+	/* Process MSR values */
+	smsr |= MSR_ME | MSR_RI | MSR_IR | MSR_DR | MSR_PR | MSR_EE;
+	/* External providers the guest reserved */
+	smsr |= (guest_msr & vcpu->arch.guest_owned_ext);
+	/* 64-bit Process MSR values */
+#ifdef CONFIG_PPC_BOOK3S_64
+	smsr |= MSR_ISF | MSR_HV;
+#endif
+	vcpu->arch.shadow_msr = smsr;
+}
+
 /* Copy data touched by real-mode code from shadow vcpu back to vcpu */
 void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu,
 			    struct kvmppc_book3s_shadow_vcpu *svcpu)
 {
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+	ulong old_msr;
+#endif
 	/*
 	 * vcpu_put would just call us again because in_use hasn't
 	 * been updated yet.
@@ -230,6 +256,30 @@ void kvmppc_copy_from_svcpu(struct kvm_vcpu *vcpu,
 	to_book3s(vcpu)->vtb += get_vtb() - vcpu->arch.entry_vtb;
 	if (cpu_has_feature(CPU_FTR_ARCH_207S))
 		vcpu->arch.ic += mfspr(SPRN_IC) - vcpu->arch.entry_ic;
+
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+	/*
+	 * Unlike other MSR bits, the MSR[TS] bits can be changed in the guest
+	 * without notifying the host:
+	 * they are modified by unprivileged instructions like "tbegin"/"tend"/
+	 * "tresume"/"tsuspend" in a PR KVM guest.
+	 *
+	 * It is necessary to sync here to calculate a correct shadow_msr.
+	 *
+	 * A privileged guest's tbegin will fail at present, so we only
+	 * take care of problem state guests.
+	 */
+	old_msr = kvmppc_get_msr(vcpu);
+	if (unlikely((old_msr & MSR_PR) &&
+		     (vcpu->arch.shadow_srr1 & (MSR_TS_MASK)) !=
+		     (old_msr & (MSR_TS_MASK)))) {
+		old_msr &= ~(MSR_TS_MASK);
+		old_msr |= (vcpu->arch.shadow_srr1 & (MSR_TS_MASK));
+		kvmppc_set_msr_fast(vcpu, old_msr);
+		kvmppc_recalc_shadow_msr(vcpu);
+	}
+#endif
+
 	svcpu->in_use = false;
 
 out:
@@ -317,29 +367,6 @@ static void kvm_set_spte_hva_pr(struct kvm *kvm, unsigned long hva, pte_t pte)
 
 /*****************************************/
 
-static void kvmppc_recalc_shadow_msr(struct kvm_vcpu *vcpu)
-{
-	ulong guest_msr = kvmppc_get_msr(vcpu);
-	ulong smsr = guest_msr;
-
-	/* Guest MSR values */
-#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
-	smsr &= MSR_FE0 | MSR_FE1 | MSR_SF | MSR_SE | MSR_BE | MSR_LE |
-		MSR_TM | MSR_TS_MASK;
-#else
-	smsr &= MSR_FE0 | MSR_FE1 | MSR_SF | MSR_SE | MSR_BE | MSR_LE;
-#endif
-	/* Process MSR values */
-	smsr |= MSR_ME | MSR_RI | MSR_IR | MSR_DR | MSR_PR | MSR_EE;
-	/* External providers the guest reserved */
-	smsr |= (guest_msr & vcpu->arch.guest_owned_ext);
-	/* 64-bit Process MSR values */
-#ifdef CONFIG_PPC_BOOK3S_64
-	smsr |= MSR_ISF | MSR_HV;
-#endif
-	vcpu->arch.shadow_msr = smsr;
-}
-
 static void kvmppc_set_msr_pr(struct kvm_vcpu *vcpu, u64 msr)
 {
 	ulong old_msr = kvmppc_get_msr(vcpu);
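
For readers following the exit path, the snippet below is a small standalone
sketch (not part of the patch) of the TS-bit sync added to
kvmppc_copy_from_svcpu() above. The MSR_* constants are simplified stand-ins
for the real arch/powerpc definitions, and sync_ts_bits() is a hypothetical
helper used only for illustration, not a kernel function.

#include <stdint.h>
#include <stdio.h>

#define MSR_PR       (1ULL << 14)          /* problem state (stand-in value) */
#define MSR_TS_S     (1ULL << 33)          /* transaction suspended */
#define MSR_TS_T     (1ULL << 34)          /* transaction active */
#define MSR_TS_MASK  (MSR_TS_S | MSR_TS_T)

/* Mirrors the sync done in the patch: for a problem state guest, fold the
 * TS bits observed at exit (shadow SRR1) back into the saved guest MSR. */
static uint64_t sync_ts_bits(uint64_t guest_msr, uint64_t shadow_srr1)
{
	if ((guest_msr & MSR_PR) &&
	    (shadow_srr1 & MSR_TS_MASK) != (guest_msr & MSR_TS_MASK)) {
		guest_msr &= ~MSR_TS_MASK;
		guest_msr |= shadow_srr1 & MSR_TS_MASK;
	}
	return guest_msr;
}

int main(void)
{
	/* Guest entered in problem state outside a transaction... */
	uint64_t guest_msr = MSR_PR;
	/* ...and exited transactional: it ran tbegin. without a host trap. */
	uint64_t shadow_srr1 = MSR_PR | MSR_TS_T;

	printf("synced guest MSR: 0x%llx\n",
	       (unsigned long long)sync_ts_bits(guest_msr, shadow_srr1));
	return 0;
}

In the actual patch the result is written back with kvmppc_set_msr_fast() and
kvmppc_recalc_shadow_msr() is re-run, so the shadow MSR used by
kvmppc_handle_exit_pr() reflects the TS state the guest changed silently.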