From patchwork Mon May 11 08:44:33 2015
X-Patchwork-Submitter: Christian Borntraeger
X-Patchwork-Id: 6374171
From: Christian Borntraeger
To: Paolo Bonzini
Cc: Alexander Graf, KVM, Cornelia Huck, Jens Freimann, linux-s390,
    Christian Borntraeger
Subject: [GIT PULL 5/9] KVM: s390: make exit_sie_sync more robust
Date: Mon, 11 May 2015 10:44:33 +0200
Message-Id: <1431333877-28700-6-git-send-email-borntraeger@de.ibm.com>
X-Mailer: git-send-email 2.3.0
In-Reply-To: <1431333877-28700-1-git-send-email-borntraeger@de.ibm.com>
References: <1431333877-28700-1-git-send-email-borntraeger@de.ibm.com>
X-Mailing-List: kvm@vger.kernel.org

exit_sie_sync is used to kick CPUs out of SIE and prevent reentering at
any point in time. It is used to reload the prefix pages and to set the
IBS state in a way that guarantees that after this function returns we
are no longer in SIE. All current users trigger KVM requests.

The request must be set before we block the CPUs to avoid races. Let's
make this implicit by adding the request into a new function,
kvm_s390_sync_request, that replaces exit_sie_sync, and by splitting out
s390_vcpu_block and s390_vcpu_unblock, which can be used to keep CPUs
out of SIE independent of requests.

Signed-off-by: Christian Borntraeger
Reviewed-by: David Hildenbrand
Reviewed-by: Cornelia Huck
---
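[Note, not part of the commit: the ordering argument above can be
illustrated with a small stand-alone model. This is only a sketch;
struct toy_vcpu, toy_sync_request and toy_handle_requests are made-up
names that mimic kvm_s390_sync_request() on the requester side and
kvm_s390_handle_requests() on the vCPU side using C11 atomics, not the
kernel implementation.]

/* toy_sync.c: gcc -std=c11 toy_sync.c && ./a.out */
#include <stdatomic.h>
#include <stdio.h>

#define PROG_BLOCK_SIE (1 << 0)  /* keep the vCPU out of SIE, no request */
#define PROG_REQUEST   (1 << 1)  /* a request is pending, stay out of SIE */

struct toy_vcpu {
        atomic_uint prog20;      /* models sie_block->prog20 */
        atomic_uint requests;    /* models vcpu->requests */
};

/* Requester side, mirroring kvm_s390_sync_request():
 * publish the request first, only then keep the vCPU out of SIE. */
static void toy_sync_request(struct toy_vcpu *vcpu, unsigned int req)
{
        atomic_fetch_or(&vcpu->requests, 1u << req);   /* kvm_make_request() */
        atomic_fetch_or(&vcpu->prog20, PROG_REQUEST);  /* kvm_s390_vcpu_request() */
        /* here exit_sie() would kick the vCPU and wait for it to leave SIE */
}

/* vCPU side, mirroring kvm_s390_handle_requests():
 * clear PROG_REQUEST first, then look at the pending requests. Because the
 * requester set the request before raising PROG_REQUEST, none can be missed. */
static void toy_handle_requests(struct toy_vcpu *vcpu)
{
        atomic_fetch_and(&vcpu->prog20, ~(unsigned int)PROG_REQUEST);
        unsigned int pending = atomic_exchange(&vcpu->requests, 0);
        if (pending)
                printf("handled requests: %#x\n", pending);
        /* the SIE entry path re-checks prog20 (tm __SIE_PROG20+3(%r14),3)
         * before re-entering the guest */
}

int main(void)
{
        struct toy_vcpu vcpu;

        atomic_init(&vcpu.prog20, 0);
        atomic_init(&vcpu.requests, 0);
        toy_sync_request(&vcpu, 3);    /* arbitrary request number for the demo */
        toy_handle_requests(&vcpu);
        return 0;
}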
 arch/s390/include/asm/kvm_host.h |  3 ++-
 arch/s390/kernel/entry.S         |  2 +-
 arch/s390/kvm/kvm-s390.c         | 28 ++++++++++++++++++----------
 arch/s390/kvm/kvm-s390.h         |  2 +-
 4 files changed, 22 insertions(+), 13 deletions(-)

diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
index 1011ac1..444c412 100644
--- a/arch/s390/include/asm/kvm_host.h
+++ b/arch/s390/include/asm/kvm_host.h
@@ -96,7 +96,8 @@ struct kvm_s390_sie_block {
 #define PROG_IN_SIE (1<<0)
         __u32   prog0c;                 /* 0x000c */
         __u8    reserved10[16];         /* 0x0010 */
-#define PROG_BLOCK_SIE 0x00000001
+#define PROG_BLOCK_SIE (1<<0)
+#define PROG_REQUEST   (1<<1)
         atomic_t prog20;                /* 0x0020 */
         __u8    reserved24[4];          /* 0x0024 */
         __u64   cputm;                  /* 0x0028 */
diff --git a/arch/s390/kernel/entry.S b/arch/s390/kernel/entry.S
index 99b44ac..3238893 100644
--- a/arch/s390/kernel/entry.S
+++ b/arch/s390/kernel/entry.S
@@ -1005,7 +1005,7 @@ ENTRY(sie64a)
 .Lsie_gmap:
         lg      %r14,__SF_EMPTY(%r15)           # get control block pointer
         oi      __SIE_PROG0C+3(%r14),1          # we are going into SIE now
-        tm      __SIE_PROG20+3(%r14),1          # last exit...
+        tm      __SIE_PROG20+3(%r14),3          # last exit...
         jnz     .Lsie_done
         LPP     __SF_EMPTY(%r15)                # set guest id
         sie     0(%r14)
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 142d9b4..9bc57af 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -1424,6 +1424,16 @@ void s390_vcpu_unblock(struct kvm_vcpu *vcpu)
         atomic_clear_mask(PROG_BLOCK_SIE, &vcpu->arch.sie_block->prog20);
 }
 
+static void kvm_s390_vcpu_request(struct kvm_vcpu *vcpu)
+{
+        atomic_set_mask(PROG_REQUEST, &vcpu->arch.sie_block->prog20);
+}
+
+static void kvm_s390_vcpu_request_handled(struct kvm_vcpu *vcpu)
+{
+        atomic_clear_mask(PROG_REQUEST, &vcpu->arch.sie_block->prog20);
+}
+
 /*
  * Kick a guest cpu out of SIE and wait until SIE is not running.
  * If the CPU is not running (e.g. waiting as idle) the function will
@@ -1435,10 +1445,11 @@ void exit_sie(struct kvm_vcpu *vcpu)
                 cpu_relax();
 }
 
-/* Kick a guest cpu out of SIE and prevent SIE-reentry */
-void exit_sie_sync(struct kvm_vcpu *vcpu)
+/* Kick a guest cpu out of SIE to process a request synchronously */
+void kvm_s390_sync_request(int req, struct kvm_vcpu *vcpu)
 {
-        s390_vcpu_block(vcpu);
+        kvm_make_request(req, vcpu);
+        kvm_s390_vcpu_request(vcpu);
         exit_sie(vcpu);
 }
 
@@ -1452,8 +1463,7 @@ static void kvm_gmap_notifier(struct gmap *gmap, unsigned long address)
                 /* match against both prefix pages */
                 if (kvm_s390_get_prefix(vcpu) == (address & ~0x1000UL)) {
                         VCPU_EVENT(vcpu, 2, "gmap notifier for %lx", address);
-                        kvm_make_request(KVM_REQ_MMU_RELOAD, vcpu);
-                        exit_sie_sync(vcpu);
+                        kvm_s390_sync_request(KVM_REQ_MMU_RELOAD, vcpu);
                 }
         }
 }
@@ -1728,7 +1738,7 @@ static int kvm_s390_handle_requests(struct kvm_vcpu *vcpu)
         if (!vcpu->requests)
                 return 0;
 retry:
-        s390_vcpu_unblock(vcpu);
+        kvm_s390_vcpu_request_handled(vcpu);
         /*
          * We use MMU_RELOAD just to re-arm the ipte notifier for the
          * guest prefix page. gmap_ipte_notify will wait on the ptl lock.
@@ -2213,8 +2223,7 @@ int kvm_s390_vcpu_store_adtl_status(struct kvm_vcpu *vcpu, unsigned long addr)
 static void __disable_ibs_on_vcpu(struct kvm_vcpu *vcpu)
 {
         kvm_check_request(KVM_REQ_ENABLE_IBS, vcpu);
-        kvm_make_request(KVM_REQ_DISABLE_IBS, vcpu);
-        exit_sie_sync(vcpu);
+        kvm_s390_sync_request(KVM_REQ_DISABLE_IBS, vcpu);
 }
 
 static void __disable_ibs_on_all_vcpus(struct kvm *kvm)
@@ -2230,8 +2239,7 @@ static void __disable_ibs_on_all_vcpus(struct kvm *kvm)
 static void __enable_ibs_on_vcpu(struct kvm_vcpu *vcpu)
 {
         kvm_check_request(KVM_REQ_DISABLE_IBS, vcpu);
-        kvm_make_request(KVM_REQ_ENABLE_IBS, vcpu);
-        exit_sie_sync(vcpu);
+        kvm_s390_sync_request(KVM_REQ_ENABLE_IBS, vcpu);
 }
 
 void kvm_s390_vcpu_start(struct kvm_vcpu *vcpu)
diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
index ca108b9..8edca85 100644
--- a/arch/s390/kvm/kvm-s390.h
+++ b/arch/s390/kvm/kvm-s390.h
@@ -214,7 +214,7 @@ void kvm_s390_vcpu_stop(struct kvm_vcpu *vcpu);
 void s390_vcpu_block(struct kvm_vcpu *vcpu);
 void s390_vcpu_unblock(struct kvm_vcpu *vcpu);
 void exit_sie(struct kvm_vcpu *vcpu);
-void exit_sie_sync(struct kvm_vcpu *vcpu);
+void kvm_s390_sync_request(int req, struct kvm_vcpu *vcpu);
 int kvm_s390_vcpu_setup_cmma(struct kvm_vcpu *vcpu);
 void kvm_s390_vcpu_unsetup_cmma(struct kvm_vcpu *vcpu);
 /* is cmma enabled */