From patchwork Fri Jun  9 00:51:57 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13273056
Date: Thu, 8 Jun 2023 17:51:57 -0700
In-Reply-To: <20230609005158.2421285-1-surenb@google.com>
Mime-Version: 1.0
References: <20230609005158.2421285-1-surenb@google.com>
X-Mailer: git-send-email 2.41.0.162.gfafddb0af9-goog
Message-ID: <20230609005158.2421285-6-surenb@google.com>
Subject: [PATCH v2 5/6] mm: implement folio wait under VMA lock
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: willy@infradead.org, hannes@cmpxchg.org, mhocko@suse.com,
 josef@toxicpanda.com, jack@suse.cz, ldufour@linux.ibm.com,
 laurent.dufour@fr.ibm.com, michel@lespinasse.org, liam.howlett@oracle.com,
 jglisse@google.com, vbabka@suse.cz, minchan@google.com, dave@stgolabs.net,
 punit.agrawal@bytedance.com, lstoakes@gmail.com, hdanton@sina.com,
 apopple@nvidia.com, peterx@redhat.com, ying.huang@intel.com,
 david@redhat.com, yuzhao@google.com, dhowells@redhat.com, hughd@google.com,
 viro@zeniv.linux.org.uk, brauner@kernel.org, pasha.tatashin@soleen.com,
 surenb@google.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
 linux-kernel@vger.kernel.org, kernel-team@android.com
Follow the same pattern as mmap_lock when waiting for folio by dropping
VMA lock before the wait and retrying once folio is available.

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 include/linux/pagemap.h | 14 ++++++++++----
 mm/filemap.c            | 43 +++++++++++++++++++++++--------------------
 mm/memory.c             | 13 ++++++++-----
 3 files changed, 41 insertions(+), 29 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index a56308a9d1a4..6c9493314c21 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -896,8 +896,8 @@ static inline bool wake_page_match(struct wait_page_queue *wait_page,
 
 void __folio_lock(struct folio *folio);
 int __folio_lock_killable(struct folio *folio);
-bool __folio_lock_or_retry(struct folio *folio, struct mm_struct *mm,
-				unsigned int flags);
+bool __folio_lock_or_retry(struct folio *folio, struct vm_area_struct *vma,
+				unsigned int flags, bool *lock_dropped);
 void unlock_page(struct page *page);
 void folio_unlock(struct folio *folio);
 
@@ -1002,10 +1002,16 @@ static inline int folio_lock_killable(struct folio *folio)
  * __folio_lock_or_retry().
  */
 static inline bool folio_lock_or_retry(struct folio *folio,
-		struct mm_struct *mm, unsigned int flags)
+		struct vm_area_struct *vma, unsigned int flags,
+		bool *lock_dropped)
 {
 	might_sleep();
-	return folio_trylock(folio) || __folio_lock_or_retry(folio, mm, flags);
+	if (folio_trylock(folio)) {
+		*lock_dropped = false;
+		return true;
+	}
+
+	return __folio_lock_or_retry(folio, vma, flags, lock_dropped);
 }
 
 /*
diff --git a/mm/filemap.c b/mm/filemap.c
index 7cb0a3776a07..838955635fbc 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1701,37 +1701,35 @@ static int __folio_lock_async(struct folio *folio, struct wait_page_queue *wait)
 
 /*
  * Return values:
- * true - folio is locked; mmap_lock is still held.
+ * true - folio is locked.
  * false - folio is not locked.
- *     mmap_lock has been released (mmap_read_unlock(), unless flags had both
- *     FAULT_FLAG_ALLOW_RETRY and FAULT_FLAG_RETRY_NOWAIT set, in
- *     which case mmap_lock is still held.
- * If flags had FAULT_FLAG_VMA_LOCK set, meaning the operation is performed
- * with VMA lock only, the VMA lock is still held.
+ *
+ * lock_dropped indicates whether mmap_lock/VMA lock got dropped.
+ *     mmap_lock/VMA lock is dropped when function fails to lock the folio,
+ *     unless flags had both FAULT_FLAG_ALLOW_RETRY and FAULT_FLAG_RETRY_NOWAIT
+ *     set, in which case mmap_lock/VMA lock is still held.
  *
  * If neither ALLOW_RETRY nor KILLABLE are set, will always return true
- * with the folio locked and the mmap_lock unperturbed.
+ * with the folio locked and the mmap_lock/VMA lock unperturbed.
  */
-bool __folio_lock_or_retry(struct folio *folio, struct mm_struct *mm,
-			 unsigned int flags)
+bool __folio_lock_or_retry(struct folio *folio, struct vm_area_struct *vma,
+			 unsigned int flags, bool *lock_dropped)
 {
-	/* Can't do this if not holding mmap_lock */
-	if (flags & FAULT_FLAG_VMA_LOCK)
-		return false;
-
 	if (fault_flag_allow_retry_first(flags)) {
-		/*
-		 * CAUTION! In this case, mmap_lock is not released
-		 * even though return 0.
-		 */
-		if (flags & FAULT_FLAG_RETRY_NOWAIT)
+		if (flags & FAULT_FLAG_RETRY_NOWAIT) {
+			*lock_dropped = false;
 			return false;
+		}
 
-		mmap_read_unlock(mm);
+		if (flags & FAULT_FLAG_VMA_LOCK)
+			vma_end_read(vma);
+		else
+			mmap_read_unlock(vma->vm_mm);
 		if (flags & FAULT_FLAG_KILLABLE)
 			folio_wait_locked_killable(folio);
 		else
 			folio_wait_locked(folio);
+		*lock_dropped = true;
 		return false;
 	}
 	if (flags & FAULT_FLAG_KILLABLE) {
@@ -1739,13 +1737,18 @@ bool __folio_lock_or_retry(struct folio *folio, struct mm_struct *mm,
 
 		ret = __folio_lock_killable(folio);
 		if (ret) {
-			mmap_read_unlock(mm);
+			if (flags & FAULT_FLAG_VMA_LOCK)
+				vma_end_read(vma);
+			else
+				mmap_read_unlock(vma->vm_mm);
+			*lock_dropped = true;
 			return false;
 		}
 	} else {
 		__folio_lock(folio);
 	}
 
+	*lock_dropped = false;
 	return true;
 }
 
diff --git a/mm/memory.c b/mm/memory.c
index c234f8085f1e..acb09a3aad53 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3568,6 +3568,7 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 	struct folio *folio = page_folio(vmf->page);
 	struct vm_area_struct *vma = vmf->vma;
 	struct mmu_notifier_range range;
+	bool lock_dropped;
 
 	/*
 	 * We need a reference to lock the folio because we don't hold
@@ -3580,8 +3581,10 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 	if (!folio_try_get(folio))
 		return 0;
 
-	if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags)) {
+	if (!folio_lock_or_retry(folio, vma, vmf->flags, &lock_dropped)) {
 		folio_put(folio);
+		if (lock_dropped && vmf->flags & FAULT_FLAG_VMA_LOCK)
+			return VM_FAULT_VMA_UNLOCKED | VM_FAULT_RETRY;
 		return VM_FAULT_RETRY;
 	}
 	mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0,
@@ -3704,7 +3707,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	bool exclusive = false;
 	swp_entry_t entry;
 	pte_t pte;
-	int locked;
+	bool lock_dropped;
 	vm_fault_t ret = 0;
 	void *shadow = NULL;
 
@@ -3837,9 +3840,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			goto out_release;
 	}
 
-	locked = folio_lock_or_retry(folio, vma->vm_mm, vmf->flags);
-
-	if (!locked) {
+	if (!folio_lock_or_retry(folio, vma, vmf->flags, &lock_dropped)) {
+		if (lock_dropped && vmf->flags & FAULT_FLAG_VMA_LOCK)
+			ret |= VM_FAULT_VMA_UNLOCKED;
 		ret |= VM_FAULT_RETRY;
 		goto out_release;
 	}
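
For reviewers who want the new calling convention in one place, here is a
minimal, illustrative caller sketch. It is not part of the patch; the function
name example_fault_path() and its surroundings are hypothetical. It only
restates how the lock_dropped output and VM_FAULT_VMA_UNLOCKED (as used in the
mm/memory.c hunks above) are meant to be consumed by a fault path:

/* Illustrative sketch only -- not part of this patch. */
static vm_fault_t example_fault_path(struct vm_fault *vmf, struct folio *folio)
{
	struct vm_area_struct *vma = vmf->vma;
	bool lock_dropped;

	if (!folio_lock_or_retry(folio, vma, vmf->flags, &lock_dropped)) {
		/*
		 * The folio could not be locked.  If the wait path ran,
		 * __folio_lock_or_retry() already dropped the mmap_lock or
		 * the VMA lock for us; a VMA-locked fault must then also
		 * report VM_FAULT_VMA_UNLOCKED so the fault is retried
		 * under mmap_lock.
		 */
		if (lock_dropped && (vmf->flags & FAULT_FLAG_VMA_LOCK))
			return VM_FAULT_VMA_UNLOCKED | VM_FAULT_RETRY;
		return VM_FAULT_RETRY;
	}

	/* Folio is locked and the lock is still held; proceed, then unlock. */
	folio_unlock(folio);
	return 0;
}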