From patchwork Wed Jun 22 21:36:53 2022
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 12891504
From: Peter Xu
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: peterx@redhat.com, Paolo Bonzini, Andrew Morton, David Hildenbrand,
    "Dr. David Alan Gilbert", Andrea Arcangeli, Linux MM Mailing List,
    Sean Christopherson
Subject: [PATCH 1/4] mm/gup: Add FOLL_INTERRUPTIBLE
Date: Wed, 22 Jun 2022 17:36:53 -0400
Message-Id: <20220622213656.81546-2-peterx@redhat.com>
In-Reply-To: <20220622213656.81546-1-peterx@redhat.com>
References: <20220622213656.81546-1-peterx@redhat.com>

We have had FAULT_FLAG_INTERRUPTIBLE, but it was never applied to GUPs.  One
issue is that not all GUP paths are able to handle signal delivery besides
SIGKILL.  That's not ideal for GUP users that are actually able to handle
these cases, like KVM.

KVM uses GUP extensively when faulting in guest pages, and it already has the
infrastructure to retry a page fault at a later time.  Allowing GUP to be
interrupted by generic signals can make KVM-related threads more responsive.
For example:

  (1) SIGUSR1: QEMU/KVM uses it to deliver an inter-process IPI, e.g. when
      the admin issues a vm_stop QMP command; SIGUSR1 can be generated to
      kick the vcpus out of kernel context immediately.

  (2) SIGINT: lets interactive hypervisor users stop a virtual machine with
      Ctrl-C without any delays/hangs.

  (3) SIGTRAP: keeps GDB usable even during page faults that are stuck for a
      long time.

Normally the hypervisor can receive these signals properly, but not if we're
stuck in GUP for a long time for whatever reason.  That happens easily with a
stuck postcopy migration, e.g. when a temporary network failure occurs; some
vcpu threads can then hang forever waiting for the pages.

With the new FOLL_INTERRUPTIBLE, GUP users like KVM can selectively enable
the ability to trap these signals.

Signed-off-by: Peter Xu
Reviewed-by: John Hubbard
---
 include/linux/mm.h |  1 +
 mm/gup.c           | 33 +++++++++++++++++++++++++++++----
 2 files changed, 30 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index bc8f326be0ce..ebdf8a6b86c1 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2941,6 +2941,7 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
 #define FOLL_SPLIT_PMD	0x20000	/* split huge pmd before returning */
 #define FOLL_PIN	0x40000	/* pages must be released via unpin_user_page */
 #define FOLL_FAST_ONLY	0x80000	/* gup_fast: prevent fall-back to slow gup */
+#define FOLL_INTERRUPTIBLE  0x100000 /* allow interrupts from generic signals */
 
 /*
  * FOLL_PIN and FOLL_LONGTERM may be used in various combinations with each
diff --git a/mm/gup.c b/mm/gup.c
index 551264407624..ad74b137d363 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -933,8 +933,17 @@ static int faultin_page(struct vm_area_struct *vma,
 		fault_flags |= FAULT_FLAG_WRITE;
 	if (*flags & FOLL_REMOTE)
 		fault_flags |= FAULT_FLAG_REMOTE;
-	if (locked)
+	if (locked) {
 		fault_flags |= FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
+		/*
+		 * We should only grant FAULT_FLAG_INTERRUPTIBLE when we're
+		 * (at least) killable.  It also mostly means we're not
+		 * with NOWAIT.  Otherwise ignore FOLL_INTERRUPTIBLE since
+		 * it won't make a lot of sense to be used alone.
+		 */
+		if (*flags & FOLL_INTERRUPTIBLE)
+			fault_flags |= FAULT_FLAG_INTERRUPTIBLE;
+	}
 	if (*flags & FOLL_NOWAIT)
 		fault_flags |= FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_RETRY_NOWAIT;
 	if (*flags & FOLL_TRIED) {
@@ -1322,6 +1331,22 @@ int fixup_user_fault(struct mm_struct *mm,
 }
 EXPORT_SYMBOL_GPL(fixup_user_fault);
 
+/*
+ * GUP always responds to fatal signals.  When FOLL_INTERRUPTIBLE is
+ * specified, it'll also respond to generic signals.  The caller of GUP
+ * that has FOLL_INTERRUPTIBLE should take care of the GUP interruption.
+ */
+static bool gup_signal_pending(unsigned int flags)
+{
+	if (fatal_signal_pending(current))
+		return true;
+
+	if (!(flags & FOLL_INTERRUPTIBLE))
+		return false;
+
+	return signal_pending(current);
+}
+
 /*
  * Please note that this function, unlike __get_user_pages will not
  * return 0 for nr_pages > 0 without FOLL_NOWAIT
@@ -1403,11 +1428,11 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
 			 * Repeat on the address that fired VM_FAULT_RETRY
 			 * with both FAULT_FLAG_ALLOW_RETRY and
 			 * FAULT_FLAG_TRIED.  Note that GUP can be interrupted
-			 * by fatal signals, so we need to check it before we
+			 * by fatal signals or even common signals, depending on
+			 * the caller's request.  So we need to check it before we
 			 * start trying again otherwise it can loop forever.
 			 */
-
-			if (fatal_signal_pending(current)) {
+			if (gup_signal_pending(flags)) {
 				if (!pages_done)
 					pages_done = -EINTR;
 				break;
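
A rough sketch of the intended usage: a GUP caller that can retry the access
later (KVM is the target user) opts in as below.  The helper is hypothetical
and not part of the patch; only get_user_pages_unlocked(), FOLL_HWPOISON,
FOLL_INTERRUPTIBLE and the -EINTR return come from this series.

/*
 * Hypothetical caller, for illustration only: a GUP user that is able to
 * retry the access later opts in to generic-signal interruption.
 */
static long fetch_guest_page(unsigned long addr, struct page **page)
{
	unsigned int gup_flags = FOLL_WRITE | FOLL_HWPOISON | FOLL_INTERRUPTIBLE;
	long npages;

	npages = get_user_pages_unlocked(addr, 1, page, gup_flags);
	if (npages == -EINTR) {
		/*
		 * A non-fatal signal is pending: bail out to a context that
		 * can deliver the signal (e.g. return to userspace) and
		 * retry the fault afterwards.
		 */
		return -EINTR;
	}
	return npages;
}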

From patchwork Wed Jun 22 21:36:54 2022
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 12891505
From: Peter Xu
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: peterx@redhat.com, Paolo Bonzini, Andrew Morton, David Hildenbrand,
    "Dr. David Alan Gilbert", Andrea Arcangeli, Linux MM Mailing List,
    Sean Christopherson
Subject: [PATCH 2/4] kvm: Merge "atomic" and "write" in __gfn_to_pfn_memslot()
Date: Wed, 22 Jun 2022 17:36:54 -0400
Message-Id: <20220622213656.81546-3-peterx@redhat.com>
In-Reply-To: <20220622213656.81546-1-peterx@redhat.com>
References: <20220622213656.81546-1-peterx@redhat.com>

Merge two boolean parameters into a bitmask flag called kvm_gtp_flag_t for
__gfn_to_pfn_memslot().  This cleans up the parameter list and also prepares
for a new flag to be added to __gfn_to_pfn_memslot().

Signed-off-by: Peter Xu
---
 arch/arm64/kvm/mmu.c                   |  5 ++--
 arch/powerpc/kvm/book3s_64_mmu_hv.c    |  5 ++--
 arch/powerpc/kvm/book3s_64_mmu_radix.c |  5 ++--
 arch/x86/kvm/mmu/mmu.c                 | 10 +++----
 include/linux/kvm_host.h               |  9 ++++++-
 virt/kvm/kvm_main.c                    | 37 +++++++++++++++-----------
 virt/kvm/kvm_mm.h                      |  6 +++--
 virt/kvm/pfncache.c                    |  2 +-
 8 files changed, 49 insertions(+), 30 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index f5651a05b6a8..ce1edb512b4e 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1204,8 +1204,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	 */
 	smp_rmb();
 
-	pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL,
-				   write_fault, &writable, NULL);
+	pfn = __gfn_to_pfn_memslot(memslot, gfn,
+				   write_fault ?
KVM_GTP_WRITE : 0, + NULL, &writable, NULL); if (pfn == KVM_PFN_ERR_HWPOISON) { kvm_send_hwpoison_signal(hva, vma_shift); return 0; diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c index 514fd45c1994..e2769d58dd87 100644 --- a/arch/powerpc/kvm/book3s_64_mmu_hv.c +++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c @@ -598,8 +598,9 @@ int kvmppc_book3s_hv_page_fault(struct kvm_vcpu *vcpu, write_ok = true; } else { /* Call KVM generic code to do the slow-path check */ - pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL, - writing, &write_ok, NULL); + pfn = __gfn_to_pfn_memslot(memslot, gfn, + writing ? KVM_GTP_WRITE : 0, + NULL, &write_ok, NULL); if (is_error_noslot_pfn(pfn)) return -EFAULT; page = NULL; diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c index 42851c32ff3b..232b17c75b83 100644 --- a/arch/powerpc/kvm/book3s_64_mmu_radix.c +++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c @@ -845,8 +845,9 @@ int kvmppc_book3s_instantiate_page(struct kvm_vcpu *vcpu, unsigned long pfn; /* Call KVM generic code to do the slow-path check */ - pfn = __gfn_to_pfn_memslot(memslot, gfn, false, NULL, - writing, upgrade_p, NULL); + pfn = __gfn_to_pfn_memslot(memslot, gfn, + writing ? KVM_GTP_WRITE : 0, + NULL, upgrade_p, NULL); if (is_error_noslot_pfn(pfn)) return -EFAULT; page = NULL; diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index f4653688fa6d..e92f1ab63d6a 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3968,6 +3968,7 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work) static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) { + kvm_gtp_flag_t flags = fault->write ? KVM_GTP_WRITE : 0; struct kvm_memory_slot *slot = fault->slot; bool async; @@ -3999,8 +4000,8 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) } async = false; - fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, &async, - fault->write, &fault->map_writable, + fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, flags, + &async, &fault->map_writable, &fault->hva); if (!async) return RET_PF_CONTINUE; /* *pfn has correct page already */ @@ -4016,9 +4017,8 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) } } - fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, NULL, - fault->write, &fault->map_writable, - &fault->hva); + fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, flags, NULL, + &fault->map_writable, &fault->hva); return RET_PF_CONTINUE; } diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index c20f2d55840c..b646b6fcaec6 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -1146,8 +1146,15 @@ kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault, bool *writable); kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn); kvm_pfn_t gfn_to_pfn_memslot_atomic(const struct kvm_memory_slot *slot, gfn_t gfn); + +/* gfn_to_pfn (gtp) flags */ +typedef unsigned int __bitwise kvm_gtp_flag_t; + +#define KVM_GTP_WRITE ((__force kvm_gtp_flag_t) BIT(0)) +#define KVM_GTP_ATOMIC ((__force kvm_gtp_flag_t) BIT(1)) + kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn, - bool atomic, bool *async, bool write_fault, + kvm_gtp_flag_t gtp_flags, bool *async, bool *writable, hva_t *hva); void kvm_release_pfn_clean(kvm_pfn_t pfn); diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c index 64ec2222a196..952400b42ee9 100644 
--- a/virt/kvm/kvm_main.c +++ b/virt/kvm/kvm_main.c @@ -2444,9 +2444,11 @@ static bool hva_to_pfn_fast(unsigned long addr, bool write_fault, * The slow path to get the pfn of the specified host virtual address, * 1 indicates success, -errno is returned if error is detected. */ -static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault, +static int hva_to_pfn_slow(unsigned long addr, bool *async, + kvm_gtp_flag_t gtp_flags, bool *writable, kvm_pfn_t *pfn) { + bool write_fault = gtp_flags & KVM_GTP_WRITE; unsigned int flags = FOLL_HWPOISON; struct page *page; int npages = 0; @@ -2565,20 +2567,22 @@ static int hva_to_pfn_remapped(struct vm_area_struct *vma, /* * Pin guest page in memory and return its pfn. * @addr: host virtual address which maps memory to the guest - * @atomic: whether this function can sleep + * @gtp_flags: kvm_gtp_flag_t flags (atomic, write, ..) * @async: whether this function need to wait IO complete if the * host page is not in the memory - * @write_fault: whether we should get a writable host page * @writable: whether it allows to map a writable host page for !@write_fault * - * The function will map a writable host page for these two cases: + * The function will map a writable (KVM_GTP_WRITE set) host page for these + * two cases: * 1): @write_fault = true * 2): @write_fault = false && @writable, @writable will tell the caller * whether the mapping is writable. */ -kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async, - bool write_fault, bool *writable) +kvm_pfn_t hva_to_pfn(unsigned long addr, kvm_gtp_flag_t gtp_flags, bool *async, + bool *writable) { + bool write_fault = gtp_flags & KVM_GTP_WRITE; + bool atomic = gtp_flags & KVM_GTP_ATOMIC; struct vm_area_struct *vma; kvm_pfn_t pfn = 0; int npages, r; @@ -2592,7 +2596,7 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async, if (atomic) return KVM_PFN_ERR_FAULT; - npages = hva_to_pfn_slow(addr, async, write_fault, writable, &pfn); + npages = hva_to_pfn_slow(addr, async, gtp_flags, writable, &pfn); if (npages == 1) return pfn; @@ -2625,10 +2629,11 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async, } kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn, - bool atomic, bool *async, bool write_fault, + kvm_gtp_flag_t gtp_flags, bool *async, bool *writable, hva_t *hva) { - unsigned long addr = __gfn_to_hva_many(slot, gfn, NULL, write_fault); + unsigned long addr = __gfn_to_hva_many(slot, gfn, NULL, + gtp_flags & KVM_GTP_WRITE); if (hva) *hva = addr; @@ -2651,28 +2656,30 @@ kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn, writable = NULL; } - return hva_to_pfn(addr, atomic, async, write_fault, - writable); + return hva_to_pfn(addr, gtp_flags, async, writable); } EXPORT_SYMBOL_GPL(__gfn_to_pfn_memslot); kvm_pfn_t gfn_to_pfn_prot(struct kvm *kvm, gfn_t gfn, bool write_fault, bool *writable) { - return __gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn, false, NULL, - write_fault, writable, NULL); + return __gfn_to_pfn_memslot(gfn_to_memslot(kvm, gfn), gfn, + write_fault ? 
KVM_GTP_WRITE : 0,
+				    NULL, writable, NULL);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_prot);
 
 kvm_pfn_t gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn)
 {
-	return __gfn_to_pfn_memslot(slot, gfn, false, NULL, true, NULL, NULL);
+	return __gfn_to_pfn_memslot(slot, gfn, KVM_GTP_WRITE,
+				    NULL, NULL, NULL);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot);
 
 kvm_pfn_t gfn_to_pfn_memslot_atomic(const struct kvm_memory_slot *slot, gfn_t gfn)
 {
-	return __gfn_to_pfn_memslot(slot, gfn, true, NULL, true, NULL, NULL);
+	return __gfn_to_pfn_memslot(slot, gfn, KVM_GTP_WRITE | KVM_GTP_ATOMIC,
+				    NULL, NULL, NULL);
 }
 EXPORT_SYMBOL_GPL(gfn_to_pfn_memslot_atomic);
diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h
index 41da467d99c9..1c870911eb48 100644
--- a/virt/kvm/kvm_mm.h
+++ b/virt/kvm/kvm_mm.h
@@ -3,6 +3,8 @@
 #ifndef __KVM_MM_H__
 #define __KVM_MM_H__ 1
 
+#include
+
 /*
  * Architectures can choose whether to use an rwlock or spinlock
  * for the mmu_lock.  These macros, for use in common code
@@ -24,8 +26,8 @@
 #define KVM_MMU_READ_UNLOCK(kvm)	spin_unlock(&(kvm)->mmu_lock)
 #endif /* KVM_HAVE_MMU_RWLOCK */
 
-kvm_pfn_t hva_to_pfn(unsigned long addr, bool atomic, bool *async,
-		     bool write_fault, bool *writable);
+kvm_pfn_t hva_to_pfn(unsigned long addr, kvm_gtp_flag_t gtp_flags, bool *async,
+		     bool *writable);
 
 #ifdef CONFIG_HAVE_KVM_PFNCACHE
 void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm,
diff --git a/virt/kvm/pfncache.c b/virt/kvm/pfncache.c
index dd84676615f1..0f9f6b5d2fbb 100644
--- a/virt/kvm/pfncache.c
+++ b/virt/kvm/pfncache.c
@@ -123,7 +123,7 @@ static kvm_pfn_t hva_to_pfn_retry(struct kvm *kvm, unsigned long uhva)
 		smp_rmb();
 
 		/* We always request a writeable mapping */
-		new_pfn = hva_to_pfn(uhva, false, NULL, true, NULL);
+		new_pfn = hva_to_pfn(uhva, KVM_GTP_WRITE, NULL, NULL);
 		if (is_error_noslot_pfn(new_pfn))
 			break;
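
A sketch of how the old boolean pair translates into the new bitmask.  The
helper below is hypothetical; only KVM_GTP_WRITE, KVM_GTP_ATOMIC and the new
__gfn_to_pfn_memslot() signature come from this patch.  Keeping the flags in
one word means the next patches can add more bits without another signature
change.

/*
 * Illustration only (not part of the patch): mapping the old
 * (atomic, write_fault) boolean pair onto the new kvm_gtp_flag_t bitmask.
 */
static inline kvm_gtp_flag_t gtp_flags_from_bools(bool atomic, bool write_fault)
{
	kvm_gtp_flag_t flags = 0;

	if (write_fault)
		flags |= KVM_GTP_WRITE;
	if (atomic)
		flags |= KVM_GTP_ATOMIC;
	return flags;
}

/*
 * Old call:  __gfn_to_pfn_memslot(slot, gfn, false, &async, true, &writable, &hva);
 * New call:  __gfn_to_pfn_memslot(slot, gfn, KVM_GTP_WRITE, &async, &writable, &hva);
 */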

From patchwork Wed Jun 22 21:36:55 2022
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 12891506
From: Peter Xu
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: peterx@redhat.com, Paolo Bonzini, Andrew Morton, David Hildenbrand,
    "Dr. David Alan Gilbert", Andrea Arcangeli, Linux MM Mailing List,
    Sean Christopherson
Subject: [PATCH 3/4] kvm: Add new pfn error KVM_PFN_ERR_INTR
Date: Wed, 22 Jun 2022 17:36:55 -0400
Message-Id: <20220622213656.81546-4-peterx@redhat.com>
In-Reply-To: <20220622213656.81546-1-peterx@redhat.com>
References: <20220622213656.81546-1-peterx@redhat.com>

Add one new PFN error type to show when we cannot finish fetching the PFN due
to an interruption, for example after receiving a generic signal.

This prepares KVM to be able to respond to SIGUSR1 (for QEMU that's the
SIG_IPI) even while handling, e.g., a userfaultfd page fault.
Signed-off-by: Peter Xu
---
 include/linux/kvm_host.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index b646b6fcaec6..4f84a442f67f 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -96,6 +96,7 @@
 #define KVM_PFN_ERR_FAULT	(KVM_PFN_ERR_MASK)
 #define KVM_PFN_ERR_HWPOISON	(KVM_PFN_ERR_MASK + 1)
 #define KVM_PFN_ERR_RO_FAULT	(KVM_PFN_ERR_MASK + 2)
+#define KVM_PFN_ERR_INTR	(KVM_PFN_ERR_MASK + 3)
 
 /*
  * error pfns indicate that the gfn is in slot but faild to
@@ -106,6 +107,16 @@ static inline bool is_error_pfn(kvm_pfn_t pfn)
 	return !!(pfn & KVM_PFN_ERR_MASK);
 }
 
+/*
+ * When KVM_PFN_ERR_INTR is returned, it means we're interrupted during
+ * fetching the PFN (e.g. a signal might have arrived), so we may want to
+ * retry at some later point and kick the userspace to handle the signal.
+ */
+static inline bool is_intr_pfn(kvm_pfn_t pfn)
+{
+	return pfn == KVM_PFN_ERR_INTR;
+}
+
 /*
  * error_noslot pfns indicate that the gfn can not be
  * translated to pfn - it is not in slot or failed to
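
A sketch of the caller-side classification this enables.  Since
KVM_PFN_ERR_INTR still has KVM_PFN_ERR_MASK bits set, is_error_pfn() also
returns true for it, so a caller has to test is_intr_pfn() first -- which is
exactly what the next patch does in handle_abnormal_pfn().  The consumer
function below is hypothetical, for illustration only.

/* Illustration only: distinguishing an interrupted fetch from a hard failure. */
static int consume_pfn(kvm_pfn_t pfn)
{
	if (is_intr_pfn(pfn))
		return -EINTR;	/* retry later, let the signal be handled first */
	if (is_error_pfn(pfn))
		return -EFAULT;	/* hard failure */
	return 0;		/* valid pfn */
}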

From patchwork Wed Jun 22 21:36:56 2022
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 12891508
From: Peter Xu
To: kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: peterx@redhat.com, Paolo Bonzini, Andrew Morton, David Hildenbrand,
    "Dr. David Alan Gilbert", Andrea Arcangeli, Linux MM Mailing List,
    Sean Christopherson
Subject: [PATCH 4/4] kvm/x86: Allow to respond to generic signals during slow page faults
Date: Wed, 22 Jun 2022 17:36:56 -0400
Message-Id: <20220622213656.81546-5-peterx@redhat.com>
In-Reply-To: <20220622213656.81546-1-peterx@redhat.com>
References: <20220622213656.81546-1-peterx@redhat.com>

All the facilities should be ready for this; what we need to do is add a new
KVM_GTP_INTERRUPTIBLE flag showing that we're willing to be interrupted by
common signals during the __gfn_to_pfn_memslot() request, and wire it up with
the FOLL_INTERRUPTIBLE flag we've just introduced.

Note that only the x86 slow page fault routine will set this new bit.  The
new bit is not used on non-x86 architectures or on other GUP paths, even on
x86.  It could be used elsewhere too, but that is not covered yet.

When we see that the PFN fetching was interrupted, do an early exit to
userspace with a KVM_EXIT_INTR exit reason.

Signed-off-by: Peter Xu
---
 arch/x86/kvm/mmu/mmu.c   | 9 +++++++++
 include/linux/kvm_host.h | 1 +
 virt/kvm/kvm_main.c      | 4 ++++
 3 files changed, 14 insertions(+)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e92f1ab63d6a..b39acb7cb16d 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3012,6 +3012,13 @@ static int kvm_handle_bad_page(struct kvm_vcpu *vcpu, gfn_t gfn, kvm_pfn_t pfn)
 static int handle_abnormal_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
 			       unsigned int access)
 {
+	/* NOTE: not all error pfn is fatal; handle intr before the other ones */
+	if (unlikely(is_intr_pfn(fault->pfn))) {
+		vcpu->run->exit_reason = KVM_EXIT_INTR;
+		++vcpu->stat.signal_exits;
+		return -EINTR;
+	}
+
 	/* The pfn is invalid, report the error! */
 	if (unlikely(is_error_pfn(fault->pfn)))
 		return kvm_handle_bad_page(vcpu, fault->gfn, fault->pfn);
@@ -4017,6 +4024,8 @@ static int kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 		}
 	}
 
+	/* Allow to respond to generic signals in slow page faults */
+	flags |= KVM_GTP_INTERRUPTIBLE;
 	fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, flags, NULL,
 					  &fault->map_writable, &fault->hva);
 	return RET_PF_CONTINUE;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 4f84a442f67f..c8d98e435537 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1163,6 +1163,7 @@ typedef unsigned int __bitwise kvm_gtp_flag_t;
 
 #define KVM_GTP_WRITE  ((__force kvm_gtp_flag_t) BIT(0))
 #define KVM_GTP_ATOMIC ((__force kvm_gtp_flag_t) BIT(1))
+#define KVM_GTP_INTERRUPTIBLE  ((__force kvm_gtp_flag_t) BIT(2))
 
 kvm_pfn_t __gfn_to_pfn_memslot(const struct kvm_memory_slot *slot, gfn_t gfn,
 			       kvm_gtp_flag_t gtp_flags, bool *async,
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 952400b42ee9..b3873cac5672 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2462,6 +2462,8 @@ static int hva_to_pfn_slow(unsigned long addr, bool *async,
 		flags |= FOLL_WRITE;
 	if (async)
 		flags |= FOLL_NOWAIT;
+	if (gtp_flags & KVM_GTP_INTERRUPTIBLE)
+		flags |= FOLL_INTERRUPTIBLE;
 
 	npages = get_user_pages_unlocked(addr, 1, &page, flags);
 	if (npages != 1)
@@ -2599,6 +2601,8 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, kvm_gtp_flag_t gtp_flags, bool *async,
 	npages = hva_to_pfn_slow(addr, async, gtp_flags, writable, &pfn);
 	if (npages == 1)
 		return pfn;
+	if (npages == -EINTR)
+		return KVM_PFN_ERR_INTR;
 
 	mmap_read_lock(current->mm);
 	if (npages == -EHWPOISON ||
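
For context, a minimal sketch of the userspace side, assuming a vcpu fd set
up elsewhere: when the interrupted fault propagates out as KVM_EXIT_INTR, the
KVM_RUN ioctl returns -1 with errno == EINTR, the VMM handles the pending
signal (QEMU's SIG_IPI on vm_stop, Ctrl-C, GDB's SIGTRAP) and re-enters the
guest to retry the fault.  Nothing below is part of the patch.

#include <errno.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Hypothetical vcpu run loop; vcpu_fd comes from KVM_CREATE_VCPU. */
static void vcpu_run_loop(int vcpu_fd)
{
	for (;;) {
		if (ioctl(vcpu_fd, KVM_RUN, 0) < 0 && errno == EINTR) {
			/*
			 * The vcpu thread caught a signal -- with this series,
			 * even while stuck in a slow guest page fault.  Handle
			 * it (stop request, debugger, ...) and re-enter the
			 * guest; the interrupted fault is simply retried.
			 */
			continue;
		}
		/* ... dispatch run->exit_reason for normal exits ... */
		break;
	}
}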