From patchwork Wed Jun 28 17:25:27 2023
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13296087
Date: Wed, 28 Jun 2023 10:25:27 -0700
In-Reply-To: <20230628172529.744839-1-surenb@google.com>
References: <20230628172529.744839-1-surenb@google.com>
Message-ID: <20230628172529.744839-5-surenb@google.com>
Subject: [PATCH v5 4/6] mm: change folio_lock_or_retry to use vm_fault directly
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: willy@infradead.org, hannes@cmpxchg.org, mhocko@suse.com,
    josef@toxicpanda.com, jack@suse.cz, ldufour@linux.ibm.com,
    laurent.dufour@fr.ibm.com, michel@lespinasse.org, liam.howlett@oracle.com,
    jglisse@google.com, vbabka@suse.cz, minchan@google.com, dave@stgolabs.net,
    punit.agrawal@bytedance.com, lstoakes@gmail.com, hdanton@sina.com,
    apopple@nvidia.com, peterx@redhat.com, ying.huang@intel.com,
    david@redhat.com, yuzhao@google.com, dhowells@redhat.com, hughd@google.com,
    viro@zeniv.linux.org.uk, brauner@kernel.org, pasha.tatashin@soleen.com,
    surenb@google.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org, kernel-team@android.com

Change folio_lock_or_retry to accept vm_fault struct and return the
vm_fault_t directly.

Suggested-by: Matthew Wilcox
Signed-off-by: Suren Baghdasaryan
Acked-by: Peter Xu
---
 include/linux/pagemap.h |  9 ++++-----
 mm/filemap.c            | 22 ++++++++++++----------
 mm/memory.c             | 14 ++++++--------
 3 files changed, 22 insertions(+), 23 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index a56308a9d1a4..59d070c55c97 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -896,8 +896,7 @@ static inline bool wake_page_match(struct wait_page_queue *wait_page,
 
 void __folio_lock(struct folio *folio);
 int __folio_lock_killable(struct folio *folio);
-bool __folio_lock_or_retry(struct folio *folio, struct mm_struct *mm,
-				unsigned int flags);
+vm_fault_t __folio_lock_or_retry(struct folio *folio, struct vm_fault *vmf);
 void unlock_page(struct page *page);
 void folio_unlock(struct folio *folio);
 
@@ -1001,11 +1000,11 @@ static inline int folio_lock_killable(struct folio *folio)
  * Return value and mmap_lock implications depend on flags; see
  * __folio_lock_or_retry().
  */
-static inline bool folio_lock_or_retry(struct folio *folio,
-				struct mm_struct *mm, unsigned int flags)
+static inline vm_fault_t folio_lock_or_retry(struct folio *folio,
+					     struct vm_fault *vmf)
 {
 	might_sleep();
-	return folio_trylock(folio) || __folio_lock_or_retry(folio, mm, flags);
+	return folio_trylock(folio) ? 0 : __folio_lock_or_retry(folio, vmf);
 }
 
 /*
diff --git a/mm/filemap.c b/mm/filemap.c
index 00f01d8ead47..52bcf12dcdbf 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1701,32 +1701,34 @@ static int __folio_lock_async(struct folio *folio, struct wait_page_queue *wait)
 
 /*
  * Return values:
- * true - folio is locked; mmap_lock is still held.
- * false - folio is not locked.
+ * 0 - folio is locked.
+ * VM_FAULT_RETRY - folio is not locked.
  *     mmap_lock has been released (mmap_read_unlock(), unless flags had both
  *     FAULT_FLAG_ALLOW_RETRY and FAULT_FLAG_RETRY_NOWAIT set, in
  *     which case mmap_lock is still held.
  *
- * If neither ALLOW_RETRY nor KILLABLE are set, will always return true
+ * If neither ALLOW_RETRY nor KILLABLE are set, will always return 0
  * with the folio locked and the mmap_lock unperturbed.
  */
-bool __folio_lock_or_retry(struct folio *folio, struct mm_struct *mm,
-			 unsigned int flags)
+vm_fault_t __folio_lock_or_retry(struct folio *folio, struct vm_fault *vmf)
 {
+	struct mm_struct *mm = vmf->vma->vm_mm;
+	unsigned int flags = vmf->flags;
+
 	if (fault_flag_allow_retry_first(flags)) {
 		/*
 		 * CAUTION! In this case, mmap_lock is not released
-		 * even though return 0.
+		 * even though return VM_FAULT_RETRY.
 		 */
 		if (flags & FAULT_FLAG_RETRY_NOWAIT)
-			return false;
+			return VM_FAULT_RETRY;
 
 		mmap_read_unlock(mm);
 		if (flags & FAULT_FLAG_KILLABLE)
 			folio_wait_locked_killable(folio);
 		else
 			folio_wait_locked(folio);
-		return false;
+		return VM_FAULT_RETRY;
 	}
 	if (flags & FAULT_FLAG_KILLABLE) {
 		bool ret;
@@ -1734,13 +1736,13 @@ bool __folio_lock_or_retry(struct folio *folio, struct mm_struct *mm,
 		ret = __folio_lock_killable(folio);
 		if (ret) {
 			mmap_read_unlock(mm);
-			return false;
+			return VM_FAULT_RETRY;
 		}
 	} else {
 		__folio_lock(folio);
 	}
 
-	return true;
+	return 0;
 }
 
 /**
diff --git a/mm/memory.c b/mm/memory.c
index f14d45957b83..345080052003 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3568,6 +3568,7 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 	struct folio *folio = page_folio(vmf->page);
 	struct vm_area_struct *vma = vmf->vma;
 	struct mmu_notifier_range range;
+	vm_fault_t ret;
 
 	/*
 	 * We need a reference to lock the folio because we don't hold
@@ -3580,9 +3581,10 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 	if (!folio_try_get(folio))
 		return 0;
 
-	if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags)) {
+	ret = folio_lock_or_retry(folio, vmf);
+	if (ret) {
 		folio_put(folio);
-		return VM_FAULT_RETRY;
+		return ret;
 	}
 	mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0,
 				vma->vm_mm, vmf->address & PAGE_MASK,
@@ -3704,7 +3706,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	bool exclusive = false;
 	swp_entry_t entry;
 	pte_t pte;
-	int locked;
 	vm_fault_t ret = 0;
 	void *shadow = NULL;
 
@@ -3826,12 +3827,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		goto out_release;
 	}
 
-	locked = folio_lock_or_retry(folio, vma->vm_mm, vmf->flags);
-
-	if (!locked) {
-		ret |= VM_FAULT_RETRY;
+	ret |= folio_lock_or_retry(folio, vmf);
+	if (ret & VM_FAULT_RETRY)
 		goto out_release;
-	}
 
 	if (swapcache) {
 		/*
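
[Editor's note, not part of the submission] The patch changes folio_lock_or_retry()'s contract from a bool ("locked / not locked") to a vm_fault_t (0 or VM_FAULT_RETRY), so callers such as do_swap_page() can OR the result into their own fault code instead of open-coding the conversion. The stand-alone user-space C sketch below illustrates that calling convention; the *_stub names are hypothetical stand-ins, not kernel APIs, and only VM_FAULT_RETRY's bit value matches the kernel's definition.

/*
 * Illustrative sketch only: models the return convention introduced by
 * this patch, outside the kernel tree.
 */
#include <stdbool.h>
#include <stdio.h>

typedef unsigned int vm_fault_t;
#define VM_FAULT_RETRY	((vm_fault_t)0x000400)	/* same bit value as the kernel */

struct folio_stub {
	bool locked;
	bool contended;	/* models "trylock failed, caller must back off" */
};

/* Models folio_lock_or_retry() after the patch: 0 = locked, RETRY = not. */
static vm_fault_t lock_or_retry_stub(struct folio_stub *folio)
{
	if (!folio->contended) {
		folio->locked = true;
		return 0;
	}
	return VM_FAULT_RETRY;
}

/*
 * Models a caller such as do_swap_page(): the helper's result is OR'ed
 * into the handler's vm_fault_t and the RETRY bit is tested directly.
 */
static vm_fault_t fault_handler_stub(struct folio_stub *folio)
{
	vm_fault_t ret = 0;

	ret |= lock_or_retry_stub(folio);
	if (ret & VM_FAULT_RETRY)
		return ret;	/* back off; the fault will be retried */

	/* ... do work with the folio locked ... */
	folio->locked = false;
	return ret;
}

int main(void)
{
	struct folio_stub ok = { .locked = false, .contended = false };
	struct folio_stub busy = { .locked = false, .contended = true };

	printf("uncontended fault -> %#x\n", fault_handler_stub(&ok));
	printf("contended fault   -> %#x\n", fault_handler_stub(&busy));
	return 0;
}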