From patchwork Wed Jul 10 12:59:52 2013
X-Patchwork-Submitter: dingel@linux.vnet.ibm.com
X-Patchwork-Id: 2825714
From: Dominik Dingel
To: Gleb Natapov, Paolo Bonzini
Cc: Christian Borntraeger, Heiko Carstens, Martin Schwidefsky, Cornelia Huck,
	Xiantao Zhang, Alexander Graf, Christoffer Dall, Marc Zyngier,
	Ralf Baechle, kvm@vger.kernel.org, linux-s390@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org, Dominik Dingel
Subject: [PATCH 1/4] PF: Add FAULT_FLAG_RETRY_NOWAIT for guest fault
Date: Wed, 10 Jul 2013 14:59:52 +0200
Message-Id: <1373461195-27628-2-git-send-email-dingel@linux.vnet.ibm.com>
In-Reply-To: <1373461195-27628-1-git-send-email-dingel@linux.vnet.ibm.com>
References: <1373461195-27628-1-git-send-email-dingel@linux.vnet.ibm.com>
X-Mailer: git-send-email 1.8.2.2
In case of a fault retry, exit sie64() with a gmap_fault indication set
for the running thread. This makes it possible to handle async page
faults without the need for mm notifiers.

Based on a patch from Martin Schwidefsky.

Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Acked-by: Christian Borntraeger
---
 arch/s390/include/asm/pgtable.h   |  2 ++
 arch/s390/include/asm/processor.h |  1 +
 arch/s390/kvm/kvm-s390.c          | 13 +++++++++++++
 arch/s390/mm/fault.c              | 26 ++++++++++++++++++++++----
 4 files changed, 38 insertions(+), 4 deletions(-)

diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index 0ea4e59..4a4cc64 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -740,6 +740,7 @@ static inline void pgste_set_pte(pte_t *ptep, pte_t entry)
  * @table: pointer to the page directory
  * @asce: address space control element for gmap page table
  * @crst_list: list of all crst tables used in the guest address space
+ * @pfault_enabled: defines if pfaults are applicable for the guest
  */
 struct gmap {
 	struct list_head list;
@@ -748,6 +749,7 @@ struct gmap {
 	unsigned long asce;
 	void *private;
 	struct list_head crst_list;
+	unsigned long pfault_enabled;
 };
 
 /**
diff --git a/arch/s390/include/asm/processor.h b/arch/s390/include/asm/processor.h
index 6b49987..4fa96ca 100644
--- a/arch/s390/include/asm/processor.h
+++ b/arch/s390/include/asm/processor.h
@@ -77,6 +77,7 @@ struct thread_struct {
 	unsigned long ksp;		/* kernel stack pointer */
 	mm_segment_t mm_segment;
 	unsigned long gmap_addr;	/* address of last gmap fault. */
+	unsigned int gmap_pfault;	/* signal of a pending guest pfault */
 	struct per_regs per_user;	/* User specified PER registers */
 	struct per_event per_event;	/* Cause of the last PER trap */
 	unsigned long per_flags;	/* Flags to control debug behavior */
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index ba694d2..702daca 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -682,6 +682,15 @@ static int kvm_s390_handle_requests(struct kvm_vcpu *vcpu)
 	return 0;
 }
 
+static void kvm_arch_fault_in_sync(struct kvm_vcpu *vcpu)
+{
+	hva_t fault = gmap_fault(current->thread.gmap_addr, vcpu->arch.gmap);
+	struct mm_struct *mm = current->mm;
+	down_read(&mm->mmap_sem);
+	get_user_pages(current, mm, fault, 1, 1, 0, NULL, NULL);
+	up_read(&mm->mmap_sem);
+}
+
 static int __vcpu_run(struct kvm_vcpu *vcpu)
 {
 	int rc;
@@ -715,6 +724,10 @@ static int __vcpu_run(struct kvm_vcpu *vcpu)
 	if (rc < 0) {
 		if (kvm_is_ucontrol(vcpu->kvm)) {
 			rc = SIE_INTERCEPT_UCONTROL;
+		} else if (current->thread.gmap_pfault) {
+			kvm_arch_fault_in_sync(vcpu);
+			current->thread.gmap_pfault = 0;
+			rc = 0;
 		} else {
 			VCPU_EVENT(vcpu, 3, "%s", "fault in sie instruction");
 			trace_kvm_s390_sie_fault(vcpu);
diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
index 047c3e4..7d4c4b1 100644
--- a/arch/s390/mm/fault.c
+++ b/arch/s390/mm/fault.c
@@ -50,6 +50,7 @@
 #define VM_FAULT_BADMAP		0x020000
 #define VM_FAULT_BADACCESS	0x040000
 #define VM_FAULT_SIGNAL		0x080000
+#define VM_FAULT_PFAULT		0x100000
 
 static unsigned long store_indication __read_mostly;
 
@@ -232,6 +233,7 @@ static noinline void do_fault_error(struct pt_regs *regs, int fault)
 			return;
 		}
 	case VM_FAULT_BADCONTEXT:
+	case VM_FAULT_PFAULT:
 		do_no_context(regs);
 		break;
 	case VM_FAULT_SIGNAL:
@@ -269,6 +271,9 @@ static noinline void do_fault_error(struct pt_regs *regs, int fault)
  */
 static inline int do_exception(struct pt_regs *regs, int access)
 {
+#ifdef CONFIG_PGSTE
+	struct gmap *gmap;
+#endif
 	struct task_struct *tsk;
 	struct mm_struct *mm;
 	struct vm_area_struct *vma;
@@ -307,9 +312,10 @@ static inline int do_exception(struct pt_regs *regs, int access)
 	down_read(&mm->mmap_sem);
 
 #ifdef CONFIG_PGSTE
-	if ((current->flags & PF_VCPU) && S390_lowcore.gmap) {
-		address = __gmap_fault(address,
-				       (struct gmap *) S390_lowcore.gmap);
+	gmap = (struct gmap *)
+		((current->flags & PF_VCPU) ? S390_lowcore.gmap : 0);
+	if (gmap) {
+		address = __gmap_fault(address, gmap);
 		if (address == -EFAULT) {
 			fault = VM_FAULT_BADMAP;
 			goto out_up;
@@ -318,6 +324,8 @@ static inline int do_exception(struct pt_regs *regs, int access)
 			fault = VM_FAULT_OOM;
 			goto out_up;
 		}
+		if (test_bit(1, &gmap->pfault_enabled))
+			flags |= FAULT_FLAG_RETRY_NOWAIT;
 	}
 #endif
 
@@ -374,9 +382,19 @@ retry:
 						      regs, address);
 	}
 	if (fault & VM_FAULT_RETRY) {
+#ifdef CONFIG_PGSTE
+		if (gmap && (flags & FAULT_FLAG_RETRY_NOWAIT)) {
+			/* FAULT_FLAG_RETRY_NOWAIT has been set,
+			 * mmap_sem has not been released */
+			current->thread.gmap_pfault = 1;
+			fault = VM_FAULT_PFAULT;
+			goto out_up;
+		}
+#endif
 		/* Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk
 		 * of starvation. */
-		flags &= ~FAULT_FLAG_ALLOW_RETRY;
+		flags &= ~(FAULT_FLAG_ALLOW_RETRY |
+			   FAULT_FLAG_RETRY_NOWAIT);
 		flags |= FAULT_FLAG_TRIED;
 		down_read(&mm->mmap_sem);
 		goto retry;
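
[Editor's note] For readers tracing the new control flow, the following stand-alone
user-space mock sketches the round trip the patch introduces: with pfault enabled on
the gmap, do_exception() no longer waits for the page, it flags the running thread
(gmap_pfault) and lets SIE exit, and __vcpu_run() then resolves the fault synchronously
(kvm_arch_fault_in_sync) before re-entering the guest. All identifiers in the mock
(mock_guest_fault, mock_vcpu_run, etc.) are invented stand-ins, not kernel APIs; only
the order of events mirrors the patch.

/*
 * Stand-alone mock of the control flow added by this patch.
 * Compile with: cc -o pfault_mock pfault_mock.c
 * Every type and function here is a simplified, hypothetical stand-in
 * for the kernel structures touched by the diff above.
 */
#include <stdbool.h>
#include <stdio.h>

struct mock_thread {
	int gmap_pfault;		/* mirrors current->thread.gmap_pfault */
};

struct mock_gmap {
	bool pfault_enabled;		/* mirrors gmap->pfault_enabled */
};

static struct mock_thread current_thread;

/*
 * Models do_exception(): when pfault is enabled and the fault would have
 * to wait for I/O, the fault handler does not block.  It flags the thread
 * and takes the VM_FAULT_PFAULT exit path (SIE returns with rc < 0).
 */
static int mock_guest_fault(struct mock_gmap *gmap, bool needs_io)
{
	if (gmap->pfault_enabled && needs_io) {
		current_thread.gmap_pfault = 1;	/* VM_FAULT_PFAULT case */
		return -1;			/* leave SIE without waiting */
	}
	return 0;				/* fault resolved in place */
}

/*
 * Models kvm_arch_fault_in_sync(): fault the page in synchronously,
 * standing in for get_user_pages() under mmap_sem.
 */
static void mock_fault_in_sync(void)
{
	printf("resolving guest fault synchronously\n");
}

/* Models the new branch in __vcpu_run(). */
static void mock_vcpu_run(struct mock_gmap *gmap)
{
	int rc = mock_guest_fault(gmap, true);	/* pretend SIE just exited */

	if (rc < 0 && current_thread.gmap_pfault) {
		mock_fault_in_sync();
		current_thread.gmap_pfault = 0;
		rc = 0;				/* re-enter the guest */
	}
	printf("vcpu run rc=%d\n", rc);
}

int main(void)
{
	struct mock_gmap gmap = { .pfault_enabled = true };

	mock_vcpu_run(&gmap);
	return 0;
}

In this first patch of the series the fault is still resolved synchronously before
SIE is re-entered; the exit path it adds is what the commit message refers to when it
says async page faults can later be handled without mm notifiers.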