From patchwork Wed Aug 17 00:36:12 2022
X-Patchwork-Submitter: Peter Xu <peterx@redhat.com>
X-Patchwork-Id: 12945369
From: Peter Xu <peterx@redhat.com>
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Sean Christopherson, David Hildenbrand, Andrew Morton,
	Andrea Arcangeli, peterx@redhat.com, Paolo Bonzini,
	"Dr . David Alan Gilbert", Linux MM Mailing List, John Hubbard
Subject: [PATCH v3 1/3] mm/gup: Add FOLL_INTERRUPTIBLE
Date: Tue, 16 Aug 2022 20:36:12 -0400
Message-Id: <20220817003614.58900-2-peterx@redhat.com>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20220817003614.58900-1-peterx@redhat.com>
References: <20220817003614.58900-1-peterx@redhat.com>

We have had FAULT_FLAG_INTERRUPTIBLE but it was never applied to GUPs.
One issue with it is that not all GUP paths are able to handle the
delivery of signals other than SIGKILL.  That's not ideal for the GUP
users who are actually able to handle these cases, like KVM.

KVM uses GUP extensively on faulting guest pages, during which we've got
existing infrastructure to retry a page fault at a later time.  Allowing
the GUP to be interrupted by generic signals can make KVM-related
threads more responsive.  For example:

  (1) SIGUSR1: which QEMU/KVM uses to deliver an inter-process IPI, e.g.
      when the admin issues a vm_stop QMP command, SIGUSR1 can be
      generated to kick the vcpus out of kernel context immediately,

  (2) SIGINT: which can be used by interactive hypervisor users to stop
      a virtual machine with Ctrl-C without any delays/hangs,

  (3) SIGTRAP: which allows debugging with GDB even during page faults
      that are stuck for a long time.

Normally the hypervisor will be able to receive these signals properly,
but not if it is stuck in a GUP for a long time for whatever reason.
That happens easily with a stuck postcopy migration, e.g. after a
temporary network failure, where some vcpu threads can hang forever
waiting for the pages.

With the new FOLL_INTERRUPTIBLE, we can allow GUP users like KVM to
selectively enable the ability to trap these signals.

Reviewed-by: John Hubbard
Reviewed-by: David Hildenbrand
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 include/linux/mm.h |  1 +
 mm/gup.c           | 33 +++++++++++++++++++++++++++++----
 mm/hugetlb.c       |  5 ++++-
 3 files changed, 34 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index cf3d0d673f6b..c09eccd5d553 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2941,6 +2941,7 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
 #define FOLL_SPLIT_PMD	0x20000	/* split huge pmd before returning */
 #define FOLL_PIN	0x40000		/* pages must be released via unpin_user_page */
 #define FOLL_FAST_ONLY	0x80000	/* gup_fast: prevent fall-back to slow gup */
+#define FOLL_INTERRUPTIBLE	0x100000 /* allow interrupts from generic signals */
 
 /*
  * FOLL_PIN and FOLL_LONGTERM may be used in various combinations with each
diff --git a/mm/gup.c b/mm/gup.c
index 551264407624..f39cbe011cf1 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -933,8 +933,17 @@ static int faultin_page(struct vm_area_struct *vma,
 		fault_flags |= FAULT_FLAG_WRITE;
 	if (*flags & FOLL_REMOTE)
 		fault_flags |= FAULT_FLAG_REMOTE;
-	if (locked)
+	if (locked) {
 		fault_flags |= FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
+		/*
+		 * FAULT_FLAG_INTERRUPTIBLE is opt-in.  GUP callers must set
+		 * FOLL_INTERRUPTIBLE to enable FAULT_FLAG_INTERRUPTIBLE.
+		 * That's because some callers may not be prepared to
+		 * handle early exits caused by non-fatal signals.
+		 */
+		if (*flags & FOLL_INTERRUPTIBLE)
+			fault_flags |= FAULT_FLAG_INTERRUPTIBLE;
+	}
 	if (*flags & FOLL_NOWAIT)
 		fault_flags |= FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_RETRY_NOWAIT;
 	if (*flags & FOLL_TRIED) {
@@ -1322,6 +1331,22 @@ int fixup_user_fault(struct mm_struct *mm,
 }
 EXPORT_SYMBOL_GPL(fixup_user_fault);
 
+/*
+ * GUP always responds to fatal signals.  When FOLL_INTERRUPTIBLE is
+ * specified, it'll also respond to generic signals.  The caller of GUP
+ * that has FOLL_INTERRUPTIBLE should take care of the GUP interruption.
+ */
+static bool gup_signal_pending(unsigned int flags)
+{
+	if (fatal_signal_pending(current))
+		return true;
+
+	if (!(flags & FOLL_INTERRUPTIBLE))
+		return false;
+
+	return signal_pending(current);
+}
+
 /*
  * Please note that this function, unlike __get_user_pages will not
  * return 0 for nr_pages > 0 without FOLL_NOWAIT
@@ -1403,11 +1428,11 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
 			 * Repeat on the address that fired VM_FAULT_RETRY
 			 * with both FAULT_FLAG_ALLOW_RETRY and
 			 * FAULT_FLAG_TRIED.  Note that GUP can be interrupted
-			 * by fatal signals, so we need to check it before we
+			 * by fatal signals or even common signals, depending on
+			 * the caller's request.  So we need to check it before we
 			 * start trying again otherwise it can loop forever.
 			 */
-
-			if (fatal_signal_pending(current)) {
+			if (gup_signal_pending(flags)) {
 				if (!pages_done)
 					pages_done = -EINTR;
 				break;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a57e1be41401..4025a305d573 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6176,9 +6176,12 @@ long follow_hugetlb_page(struct mm_struct *mm, struct vm_area_struct *vma,
 				fault_flags |= FAULT_FLAG_WRITE;
 			else if (unshare)
 				fault_flags |= FAULT_FLAG_UNSHARE;
-			if (locked)
+			if (locked) {
 				fault_flags |= FAULT_FLAG_ALLOW_RETRY |
 					FAULT_FLAG_KILLABLE;
+				if (flags & FOLL_INTERRUPTIBLE)
+					fault_flags |= FAULT_FLAG_INTERRUPTIBLE;
+			}
 			if (flags & FOLL_NOWAIT)
 				fault_flags |= FAULT_FLAG_ALLOW_RETRY |
 					FAULT_FLAG_RETRY_NOWAIT;
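
For reference, a minimal sketch of how a GUP caller that can retry a fault
later (KVM-style, as later patches in this series do) might opt into the new
flag.  The fetch_guest_page() helper, its parameters and error handling are
illustrative assumptions, not code from this series:

#include <linux/errno.h>
#include <linux/mm.h>

/* Hypothetical caller sketch; not part of this patch. */
static long fetch_guest_page(unsigned long hva, struct page **page)
{
	long nr;

	/*
	 * FOLL_INTERRUPTIBLE is opt-in: besides SIGKILL, any pending
	 * signal can now interrupt the fault and surface as -EINTR.
	 */
	nr = get_user_pages_unlocked(hva, 1, page,
				     FOLL_WRITE | FOLL_INTERRUPTIBLE);
	if (nr == -EINTR)
		/* Interrupted by a signal; let the caller retry later. */
		return -EINTR;

	return nr == 1 ? 0 : -EFAULT;
}

With FOLL_INTERRUPTIBLE set, gup_signal_pending() lets a pending non-fatal
signal (e.g. SIGUSR1 sent by QEMU) abort the fault with -EINTR, so the thread
can return to its own retry path instead of being stuck inside GUP.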