From patchwork Wed Jun 28 17:25:24 2023
From: Suren Baghdasaryan <surenb@google.com>
Date: Wed, 28 Jun 2023 10:25:24 -0700
Subject: [PATCH v5 1/6] swap: remove remnants of polling from read_swap_cache_async
Message-ID: <20230628172529.744839-2-surenb@google.com>
In-Reply-To: <20230628172529.744839-1-surenb@google.com>
References: <20230628172529.744839-1-surenb@google.com>
To: akpm@linux-foundation.org

Commit [1] introduced IO polling support during swapin to reduce swap read
latency for block devices that can be polled. However, a later commit [2]
removed polling support. Therefore it seems safe to remove the do_poll
parameter from read_swap_cache_async and always call swap_readpage with
synchronous=false, waiting for IO completion in folio_lock_or_retry.

[1] commit 23955622ff8d ("swap: add block io poll in swapin path")
[2] commit 9650b453a3d4 ("block: ignore RWF_HIPRI hint for sync dio")

Suggested-by: "Huang, Ying"
Signed-off-by: Suren Baghdasaryan
Reviewed-by: "Huang, Ying"
Reviewed-by: Christoph Hellwig
---
 mm/madvise.c    |  4 ++--
 mm/swap.h       |  1 -
 mm/swap_state.c | 12 +++++-------
 3 files changed, 7 insertions(+), 10 deletions(-)

diff --git a/mm/madvise.c b/mm/madvise.c
index b5ffbaf616f5..b1e8adf1234e 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -215,7 +215,7 @@ static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
 			continue;
 
 		page = read_swap_cache_async(entry, GFP_HIGHUSER_MOVABLE,
-					     vma, index, false, &splug);
+					     vma, index, &splug);
 		if (page)
 			put_page(page);
 	}
@@ -252,7 +252,7 @@ static void force_shm_swapin_readahead(struct vm_area_struct *vma,
 		rcu_read_unlock();
 
 		page = read_swap_cache_async(swap, GFP_HIGHUSER_MOVABLE,
-					     NULL, 0, false, &splug);
+					     NULL, 0, &splug);
 		if (page)
 			put_page(page);
diff --git a/mm/swap.h b/mm/swap.h
index 7c033d793f15..8a3c7a0ace4f 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -46,7 +46,6 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping,
 struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 				   struct vm_area_struct *vma,
 				   unsigned long addr,
-				   bool do_poll,
 				   struct swap_iocb **plug);
 struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 				     struct vm_area_struct *vma,
diff --git a/mm/swap_state.c b/mm/swap_state.c
index b76a65ac28b3..a3839de71f3f 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -517,15 +517,14 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
  */
 struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 				   struct vm_area_struct *vma,
-				   unsigned long addr, bool do_poll,
-				   struct swap_iocb **plug)
+				   unsigned long addr, struct swap_iocb **plug)
 {
 	bool page_was_allocated;
 	struct page *retpage = __read_swap_cache_async(entry, gfp_mask,
 			vma, addr, &page_was_allocated);
 
 	if (page_was_allocated)
-		swap_readpage(retpage, do_poll, plug);
+		swap_readpage(retpage, false, plug);
 
 	return retpage;
 }
@@ -620,7 +619,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	struct swap_info_struct *si = swp_swap_info(entry);
 	struct blk_plug plug;
 	struct swap_iocb *splug = NULL;
-	bool do_poll = true, page_allocated;
+	bool page_allocated;
 	struct vm_area_struct *vma = vmf->vma;
 	unsigned long addr = vmf->address;
 
@@ -628,7 +627,6 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	if (!mask)
 		goto skip;
 
-	do_poll = false;
 	/* Read a page_cluster sized and aligned cluster around offset. */
 	start_offset = offset & ~mask;
 	end_offset = offset | mask;
@@ -660,7 +658,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	lru_add_drain();	/* Push any new pages onto the LRU now */
skip:
 	/* The page was likely read above, so no need for plugging here */
-	return read_swap_cache_async(entry, gfp_mask, vma, addr, do_poll, NULL);
+	return read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
 }
 
 int init_swap_address_space(unsigned int type, unsigned long nr_pages)
@@ -825,7 +823,7 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
skip:
 	/* The page was likely read above, so no need for plugging here */
 	return read_swap_cache_async(fentry, gfp_mask, vma, vmf->address,
-				     ra_info.win == 1, NULL);
+				     NULL);
 }
 
 /**
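
For context, a minimal caller-side sketch of the simplified API (hypothetical
helper, not part of the patch; assumes the existing swap_iocb plugging
interface, including swap_read_unplug()):

static struct page *swapin_sketch(swp_entry_t entry,
				  struct vm_area_struct *vma,
				  unsigned long addr)
{
	struct swap_iocb *splug = NULL;
	struct page *page;

	/* Internally issues swap_readpage(page, false, &splug) now */
	page = read_swap_cache_async(entry, GFP_HIGHUSER_MOVABLE,
				     vma, addr, &splug);
	/* Submit any read that was plugged above */
	swap_read_unplug(splug);
	return page;
}
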
From patchwork Wed Jun 28 17:25:25 2023
From: Suren Baghdasaryan <surenb@google.com>
Date: Wed, 28 Jun 2023 10:25:25 -0700
Subject: [PATCH v5 2/6] mm: add missing VM_FAULT_RESULT_TRACE name for VM_FAULT_COMPLETED
Message-ID: <20230628172529.744839-3-surenb@google.com>
In-Reply-To: <20230628172529.744839-1-surenb@google.com>
References: <20230628172529.744839-1-surenb@google.com>
To: akpm@linux-foundation.org

VM_FAULT_RESULT_TRACE should contain an element for every vm_fault_reason
to be used as flag_array inside trace_print_flags_seq(). The element for
VM_FAULT_COMPLETED is missing; add it.

Signed-off-by: Suren Baghdasaryan
Reviewed-by: Peter Xu
---
 include/linux/mm_types.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 306a3d1a0fa6..79765e3dd8f3 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1070,7 +1070,8 @@ enum vm_fault_reason {
 	{ VM_FAULT_RETRY,		"RETRY" },	\
 	{ VM_FAULT_FALLBACK,		"FALLBACK" },	\
 	{ VM_FAULT_DONE_COW,		"DONE_COW" },	\
-	{ VM_FAULT_NEEDDSYNC,		"NEEDDSYNC" }
+	{ VM_FAULT_NEEDDSYNC,		"NEEDDSYNC" },	\
+	{ VM_FAULT_COMPLETED,		"COMPLETED" }
 
 struct vm_special_mapping {
 	const char *name;	/* The name, e.g. "[vdso]". */
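
For context, a sketch of how such a flag array is consumed by a tracepoint
(hypothetical event name, modeled on existing users of VM_FAULT_RESULT_TRACE
such as the dax trace events):

TRACE_EVENT(sketch_fault_result,
	TP_PROTO(vm_fault_t result),
	TP_ARGS(result),
	TP_STRUCT__entry(
		__field(unsigned long, result)
	),
	TP_fast_assign(
		__entry->result = (unsigned long)result;
	),
	/* Without the COMPLETED entry, that bit would print as a raw hex flag */
	TP_printk("result=%s",
		  __print_flags(__entry->result, "|", VM_FAULT_RESULT_TRACE))
);
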
From patchwork Wed Jun 28 17:25:26 2023
From: Suren Baghdasaryan <surenb@google.com>
Date: Wed, 28 Jun 2023 10:25:26 -0700
Subject: [PATCH v5 3/6] mm: drop per-VMA lock when returning VM_FAULT_RETRY or VM_FAULT_COMPLETED
Message-ID: <20230628172529.744839-4-surenb@google.com>
In-Reply-To: <20230628172529.744839-1-surenb@google.com>
References: <20230628172529.744839-1-surenb@google.com>
To: akpm@linux-foundation.org

When handle_mm_fault returns VM_FAULT_RETRY or VM_FAULT_COMPLETED it means
mmap_lock has been released. However, with per-VMA locks the behavior is
different: the caller is still expected to release the lock. To make the
rules consistent for the caller, drop the per-VMA lock before returning
VM_FAULT_RETRY or VM_FAULT_COMPLETED. Currently the only path that returns
VM_FAULT_RETRY under per-VMA locks is do_swap_page, and no path returns
VM_FAULT_COMPLETED for now.

Signed-off-by: Suren Baghdasaryan
Acked-by: Peter Xu
---
 arch/arm64/mm/fault.c   | 3 ++-
 arch/powerpc/mm/fault.c | 3 ++-
 arch/s390/mm/fault.c    | 3 ++-
 arch/x86/mm/fault.c     | 3 ++-
 mm/memory.c             | 1 +
 5 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index c85b6d70b222..9c06c53a9ff3 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -612,7 +612,8 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 		goto lock_mmap;
 	}
 	fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs);
-	vma_end_read(vma);
+	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
+		vma_end_read(vma);
 
 	if (!(fault & VM_FAULT_RETRY)) {
 		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 531177a4ee08..4697c5dca31c 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -494,7 +494,8 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
 	}
 
 	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
-	vma_end_read(vma);
+	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
+		vma_end_read(vma);
 
 	if (!(fault & VM_FAULT_RETRY)) {
 		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
index b65144c392b0..cccefe41038b 100644
--- a/arch/s390/mm/fault.c
+++ b/arch/s390/mm/fault.c
@@ -418,7 +418,8 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 		goto lock_mmap;
 	}
 	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
-	vma_end_read(vma);
+	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
+		vma_end_read(vma);
 	if (!(fault & VM_FAULT_RETRY)) {
 		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
 		goto out;
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index e4399983c50c..d69c85c1c04e 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1347,7 +1347,8 @@ void do_user_addr_fault(struct pt_regs *regs,
 		goto lock_mmap;
 	}
 	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
-	vma_end_read(vma);
+	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
+		vma_end_read(vma);
 
 	if (!(fault & VM_FAULT_RETRY)) {
 		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
diff --git a/mm/memory.c b/mm/memory.c
index f69fbc251198..f14d45957b83 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3713,6 +3713,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 
 	if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
 		ret = VM_FAULT_RETRY;
+		vma_end_read(vma);
 		goto out;
 	}
 
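
Condensed, the convention these hunks establish in the arch fault handlers
looks like this (illustrative sketch only):

static vm_fault_t vma_locked_fault_sketch(struct vm_area_struct *vma,
					  unsigned long addr,
					  unsigned int flags,
					  struct pt_regs *regs)
{
	vm_fault_t fault;

	fault = handle_mm_fault(vma, addr, flags | FAULT_FLAG_VMA_LOCK, regs);
	/*
	 * VM_FAULT_RETRY and VM_FAULT_COMPLETED now mean the per-VMA lock
	 * was already dropped inside handle_mm_fault(), mirroring the
	 * long-standing mmap_lock convention; drop it only otherwise.
	 */
	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
		vma_end_read(vma);
	return fault;
}
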
From patchwork Wed Jun 28 17:25:27 2023
From: Suren Baghdasaryan <surenb@google.com>
Date: Wed, 28 Jun 2023 10:25:27 -0700
Subject: [PATCH v5 4/6] mm: change folio_lock_or_retry to use vm_fault directly
Message-ID: <20230628172529.744839-5-surenb@google.com>
In-Reply-To: <20230628172529.744839-1-surenb@google.com>
References: <20230628172529.744839-1-surenb@google.com>
To: akpm@linux-foundation.org

Change folio_lock_or_retry to accept a vm_fault struct and return a
vm_fault_t directly.

Suggested-by: Matthew Wilcox
Signed-off-by: Suren Baghdasaryan
Acked-by: Peter Xu
---
 include/linux/pagemap.h |  9 ++++-----
 mm/filemap.c            | 22 ++++++++++++----------
 mm/memory.c             | 14 ++++++--------
 3 files changed, 22 insertions(+), 23 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index a56308a9d1a4..59d070c55c97 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -896,8 +896,7 @@ static inline bool wake_page_match(struct wait_page_queue *wait_page,
 
 void __folio_lock(struct folio *folio);
 int __folio_lock_killable(struct folio *folio);
-bool __folio_lock_or_retry(struct folio *folio, struct mm_struct *mm,
-				unsigned int flags);
+vm_fault_t __folio_lock_or_retry(struct folio *folio, struct vm_fault *vmf);
 void unlock_page(struct page *page);
 void folio_unlock(struct folio *folio);
@@ -1001,11 +1000,11 @@ static inline int folio_lock_killable(struct folio *folio)
  * Return value and mmap_lock implications depend on flags; see
  * __folio_lock_or_retry().
  */
-static inline bool folio_lock_or_retry(struct folio *folio,
-		struct mm_struct *mm, unsigned int flags)
+static inline vm_fault_t folio_lock_or_retry(struct folio *folio,
+					     struct vm_fault *vmf)
 {
 	might_sleep();
-	return folio_trylock(folio) || __folio_lock_or_retry(folio, mm, flags);
+	return folio_trylock(folio) ? 0 : __folio_lock_or_retry(folio, vmf);
 }
 
 /*
diff --git a/mm/filemap.c b/mm/filemap.c
index 00f01d8ead47..52bcf12dcdbf 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1701,32 +1701,34 @@ static int __folio_lock_async(struct folio *folio, struct wait_page_queue *wait)
 
 /*
  * Return values:
- * true - folio is locked; mmap_lock is still held.
- * false - folio is not locked.
+ * 0 - folio is locked.
+ * VM_FAULT_RETRY - folio is not locked.
  *     mmap_lock has been released (mmap_read_unlock(), unless flags had both
  *     FAULT_FLAG_ALLOW_RETRY and FAULT_FLAG_RETRY_NOWAIT set, in
  *     which case mmap_lock is still held.
 *
- * If neither ALLOW_RETRY nor KILLABLE are set, will always return true
+ * If neither ALLOW_RETRY nor KILLABLE are set, will always return 0
 * with the folio locked and the mmap_lock unperturbed.
 */
-bool __folio_lock_or_retry(struct folio *folio, struct mm_struct *mm,
-			 unsigned int flags)
+vm_fault_t __folio_lock_or_retry(struct folio *folio, struct vm_fault *vmf)
 {
+	struct mm_struct *mm = vmf->vma->vm_mm;
+	unsigned int flags = vmf->flags;
+
 	if (fault_flag_allow_retry_first(flags)) {
 		/*
 		 * CAUTION! In this case, mmap_lock is not released
-		 * even though return 0.
+		 * even though return VM_FAULT_RETRY.
 		 */
 		if (flags & FAULT_FLAG_RETRY_NOWAIT)
-			return false;
+			return VM_FAULT_RETRY;
 
 		mmap_read_unlock(mm);
 		if (flags & FAULT_FLAG_KILLABLE)
 			folio_wait_locked_killable(folio);
 		else
 			folio_wait_locked(folio);
-		return false;
+		return VM_FAULT_RETRY;
 	}
 	if (flags & FAULT_FLAG_KILLABLE) {
 		bool ret;
@@ -1734,13 +1736,13 @@ bool __folio_lock_or_retry(struct folio *folio, struct mm_struct *mm,
 
 		ret = __folio_lock_killable(folio);
 		if (ret) {
 			mmap_read_unlock(mm);
-			return false;
+			return VM_FAULT_RETRY;
 		}
 	} else {
 		__folio_lock(folio);
 	}
 
-	return true;
+	return 0;
 }
 
 /**
diff --git a/mm/memory.c b/mm/memory.c
index f14d45957b83..345080052003 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3568,6 +3568,7 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 	struct folio *folio = page_folio(vmf->page);
 	struct vm_area_struct *vma = vmf->vma;
 	struct mmu_notifier_range range;
+	vm_fault_t ret;
 
 	/*
 	 * We need a reference to lock the folio because we don't hold
@@ -3580,9 +3581,10 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 	if (!folio_try_get(folio))
 		return 0;
 
-	if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags)) {
+	ret = folio_lock_or_retry(folio, vmf);
+	if (ret) {
 		folio_put(folio);
-		return VM_FAULT_RETRY;
+		return ret;
 	}
 	mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0,
 				vma->vm_mm, vmf->address & PAGE_MASK,
@@ -3704,7 +3706,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	bool exclusive = false;
 	swp_entry_t entry;
 	pte_t pte;
-	int locked;
 	vm_fault_t ret = 0;
 	void *shadow = NULL;
 
@@ -3826,12 +3827,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		goto out_release;
 	}
 
-	locked = folio_lock_or_retry(folio, vma->vm_mm, vmf->flags);
-
-	if (!locked) {
-		ret |= VM_FAULT_RETRY;
+	ret |= folio_lock_or_retry(folio, vmf);
+	if (ret & VM_FAULT_RETRY)
 		goto out_release;
-	}
 
 	if (swapcache) {
 		/*
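
For context, the caller pattern this return type enables (hypothetical
helper mirroring the do_swap_page() hunk above):

static vm_fault_t lock_folio_sketch(struct folio *folio, struct vm_fault *vmf)
{
	vm_fault_t ret = 0;

	/* Returns 0 on success or VM_FAULT_RETRY, foldable into ret */
	ret |= folio_lock_or_retry(folio, vmf);
	if (ret & VM_FAULT_RETRY)
		return ret;	/* lock already handled per the comment above */

	/* ... operate on the locked folio ... */
	folio_unlock(folio);
	return ret;
}
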
From patchwork Wed Jun 28 17:25:28 2023
From: Suren Baghdasaryan <surenb@google.com>
Date: Wed, 28 Jun 2023 10:25:28 -0700
Subject: [PATCH v5 5/6] mm: handle swap page faults under per-VMA lock
Message-ID: <20230628172529.744839-6-surenb@google.com>
In-Reply-To: <20230628172529.744839-1-surenb@google.com>
References: <20230628172529.744839-1-surenb@google.com>
To: akpm@linux-foundation.org

When a page fault is handled under per-VMA lock protection, all swap page
faults are retried under mmap_lock because folio_lock_or_retry has to drop
and reacquire mmap_lock if the folio could not be locked immediately.

Follow the same pattern as with mmap_lock and drop the per-VMA lock when
waiting for the folio, retrying once the folio is available.

With this obstacle removed, enable do_swap_page to operate under per-VMA
lock protection. Drivers implementing ops->migrate_to_ram might still rely
on mmap_lock, therefore we have to fall back to mmap_lock in that
particular case.

Note that the only time do_swap_page calls synchronous swap_readpage is
when SWP_SYNCHRONOUS_IO is set, which is only set for QUEUE_FLAG_SYNCHRONOUS
devices: brd, zram and nvdimms (both btt and pmem). Therefore we don't
sleep in this path, and there's no need to drop the mmap or per-VMA lock.
Signed-off-by: Suren Baghdasaryan
Acked-by: Peter Xu
Tested-by: Alistair Popple
Reviewed-by: Alistair Popple
---
 include/linux/mm.h | 13 +++++++++++++
 mm/filemap.c       | 17 ++++++++---------
 mm/memory.c        | 16 ++++++++++------
 3 files changed, 31 insertions(+), 15 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index fec149585985..bbaec479bf98 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -723,6 +723,14 @@ static inline void vma_mark_detached(struct vm_area_struct *vma, bool detached)
 struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 					  unsigned long address);
 
+static inline void release_fault_lock(struct vm_fault *vmf)
+{
+	if (vmf->flags & FAULT_FLAG_VMA_LOCK)
+		vma_end_read(vmf->vma);
+	else
+		mmap_read_unlock(vmf->vma->vm_mm);
+}
+
 #else /* CONFIG_PER_VMA_LOCK */
 
 static inline void vma_init_lock(struct vm_area_struct *vma) {}
@@ -736,6 +744,11 @@ static inline void vma_assert_write_locked(struct vm_area_struct *vma) {}
 static inline void vma_mark_detached(struct vm_area_struct *vma,
 				     bool detached) {}
 
+static inline void release_fault_lock(struct vm_fault *vmf)
+{
+	mmap_read_unlock(vmf->vma->vm_mm);
+}
+
 #endif /* CONFIG_PER_VMA_LOCK */
 
 /*
diff --git a/mm/filemap.c b/mm/filemap.c
index 52bcf12dcdbf..d4d8f474e0c5 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1703,27 +1703,26 @@ static int __folio_lock_async(struct folio *folio, struct wait_page_queue *wait)
  * Return values:
  * 0 - folio is locked.
  * VM_FAULT_RETRY - folio is not locked.
- *     mmap_lock has been released (mmap_read_unlock(), unless flags had both
- *     FAULT_FLAG_ALLOW_RETRY and FAULT_FLAG_RETRY_NOWAIT set, in
- *     which case mmap_lock is still held.
+ *     mmap_lock or per-VMA lock has been released (mmap_read_unlock() or
+ *     vma_end_read()), unless flags had both FAULT_FLAG_ALLOW_RETRY and
+ *     FAULT_FLAG_RETRY_NOWAIT set, in which case the lock is still held.
 *
 * If neither ALLOW_RETRY nor KILLABLE are set, will always return 0
- * with the folio locked and the mmap_lock unperturbed.
+ * with the folio locked and the mmap_lock/per-VMA lock is left unperturbed.
 */
 vm_fault_t __folio_lock_or_retry(struct folio *folio, struct vm_fault *vmf)
 {
-	struct mm_struct *mm = vmf->vma->vm_mm;
 	unsigned int flags = vmf->flags;
 
 	if (fault_flag_allow_retry_first(flags)) {
 		/*
-		 * CAUTION! In this case, mmap_lock is not released
-		 * even though return VM_FAULT_RETRY.
+		 * CAUTION! In this case, mmap_lock/per-VMA lock is not
+		 * released even though returning VM_FAULT_RETRY.
 		 */
 		if (flags & FAULT_FLAG_RETRY_NOWAIT)
 			return VM_FAULT_RETRY;
 
-		mmap_read_unlock(mm);
+		release_fault_lock(vmf);
 		if (flags & FAULT_FLAG_KILLABLE)
 			folio_wait_locked_killable(folio);
 		else
@@ -1735,7 +1734,7 @@ vm_fault_t __folio_lock_or_retry(struct folio *folio, struct vm_fault *vmf)
 
 		ret = __folio_lock_killable(folio);
 		if (ret) {
-			mmap_read_unlock(mm);
+			release_fault_lock(vmf);
 			return VM_FAULT_RETRY;
 		}
 	} else {
diff --git a/mm/memory.c b/mm/memory.c
index 345080052003..4fb8ecfc6d13 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3712,12 +3712,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	if (!pte_unmap_same(vmf))
 		goto out;
 
-	if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
-		ret = VM_FAULT_RETRY;
-		vma_end_read(vma);
-		goto out;
-	}
-
 	entry = pte_to_swp_entry(vmf->orig_pte);
 	if (unlikely(non_swap_entry(entry))) {
 		if (is_migration_entry(entry)) {
@@ -3727,6 +3721,16 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			vmf->page = pfn_swap_entry_to_page(entry);
 			ret = remove_device_exclusive_entry(vmf);
 		} else if (is_device_private_entry(entry)) {
+			if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
+				/*
+				 * migrate_to_ram is not yet ready to operate
+				 * under VMA lock.
+				 */
+				vma_end_read(vma);
+				ret = VM_FAULT_RETRY;
+				goto out;
+			}
+
 			vmf->page = pfn_swap_entry_to_page(entry);
 			vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
 					vmf->address, &vmf->ptl);
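
For context, a sketch of the wait-and-retry pattern release_fault_lock()
enables for both lock types (hypothetical helper mirroring the
__folio_lock_or_retry() hunks above):

static vm_fault_t wait_on_folio_sketch(struct folio *folio,
				       struct vm_fault *vmf)
{
	/* Drops mmap_lock or the per-VMA lock based on FAULT_FLAG_VMA_LOCK */
	release_fault_lock(vmf);
	folio_wait_locked(folio);
	/* The caller retries the fault once the folio becomes available */
	return VM_FAULT_RETRY;
}
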
From patchwork Wed Jun 28 17:25:29 2023
From: Suren Baghdasaryan <surenb@google.com>
Date: Wed, 28 Jun 2023 10:25:29 -0700
Subject: [PATCH v5 6/6] mm: handle userfaults under VMA lock
Message-ID: <20230628172529.744839-7-surenb@google.com>
In-Reply-To: <20230628172529.744839-1-surenb@google.com>
References: <20230628172529.744839-1-surenb@google.com>
To: akpm@linux-foundation.org

Enable handle_userfault to operate under the VMA lock by releasing the
VMA lock instead of mmap_lock and retrying. Note that
FAULT_FLAG_RETRY_NOWAIT should never be used when handling faults under
per-VMA lock protection, because that would break the assumption that the
lock is dropped on retry.

Signed-off-by: Suren Baghdasaryan
Acked-by: Peter Xu
---
 fs/userfaultfd.c   | 34 ++++++++++++++--------------------
 include/linux/mm.h | 26 ++++++++++++++++++++++++++
 mm/memory.c        | 20 +++++++++++---------
 3 files changed, 51 insertions(+), 29 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 4e800bb7d2ab..9d61e3e7da7b 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -277,17 +277,16 @@ static inline struct uffd_msg userfault_msg(unsigned long address,
 * hugepmd ranges.
 */
static inline bool userfaultfd_huge_must_wait(struct userfaultfd_ctx *ctx,
-					      struct vm_area_struct *vma,
-					      unsigned long address,
-					      unsigned long flags,
-					      unsigned long reason)
+					      struct vm_fault *vmf,
+					      unsigned long reason)
 {
+	struct vm_area_struct *vma = vmf->vma;
 	pte_t *ptep, pte;
 	bool ret = true;
 
-	mmap_assert_locked(ctx->mm);
+	assert_fault_locked(vmf);
 
-	ptep = hugetlb_walk(vma, address, vma_mmu_pagesize(vma));
+	ptep = hugetlb_walk(vma, vmf->address, vma_mmu_pagesize(vma));
 	if (!ptep)
 		goto out;
 
@@ -308,10 +307,8 @@ static inline bool userfaultfd_huge_must_wait(struct userfaultfd_ctx *ctx,
 }
 #else
 static inline bool userfaultfd_huge_must_wait(struct userfaultfd_ctx *ctx,
-					      struct vm_area_struct *vma,
-					      unsigned long address,
-					      unsigned long flags,
-					      unsigned long reason)
+					      struct vm_fault *vmf,
+					      unsigned long reason)
 {
 	return false;	/* should never get here */
 }
@@ -325,11 +322,11 @@ static inline bool userfaultfd_huge_must_wait(struct userfaultfd_ctx *ctx,
  * threads.
  */
 static inline bool userfaultfd_must_wait(struct userfaultfd_ctx *ctx,
-					 unsigned long address,
-					 unsigned long flags,
+					 struct vm_fault *vmf,
 					 unsigned long reason)
 {
 	struct mm_struct *mm = ctx->mm;
+	unsigned long address = vmf->address;
 	pgd_t *pgd;
 	p4d_t *p4d;
 	pud_t *pud;
@@ -337,7 +334,7 @@ static inline bool userfaultfd_must_wait(struct userfaultfd_ctx *ctx,
 	pte_t *pte;
 	bool ret = true;
 
-	mmap_assert_locked(mm);
+	assert_fault_locked(vmf);
 
 	pgd = pgd_offset(mm, address);
 	if (!pgd_present(*pgd))
@@ -445,7 +442,7 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
 	 * Coredumping runs without mmap_lock so we can only check that
 	 * the mmap_lock is held, if PF_DUMPCORE was not set.
 	 */
-	mmap_assert_locked(mm);
+	assert_fault_locked(vmf);
 
 	ctx = vma->vm_userfaultfd_ctx.ctx;
 	if (!ctx)
@@ -561,15 +558,12 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
 	spin_unlock_irq(&ctx->fault_pending_wqh.lock);
 
 	if (!is_vm_hugetlb_page(vma))
-		must_wait = userfaultfd_must_wait(ctx, vmf->address, vmf->flags,
-						  reason);
+		must_wait = userfaultfd_must_wait(ctx, vmf, reason);
 	else
-		must_wait = userfaultfd_huge_must_wait(ctx, vma,
-						       vmf->address,
-						       vmf->flags, reason);
+		must_wait = userfaultfd_huge_must_wait(ctx, vmf, reason);
 	if (is_vm_hugetlb_page(vma))
 		hugetlb_vma_unlock_read(vma);
-	mmap_read_unlock(mm);
+	release_fault_lock(vmf);
 
 	if (likely(must_wait && !READ_ONCE(ctx->released))) {
 		wake_up_poll(&ctx->fd_wqh, EPOLLIN);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index bbaec479bf98..cd5389338def 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -705,6 +705,17 @@ static inline bool vma_try_start_write(struct vm_area_struct *vma)
 	return true;
 }
 
+static inline void vma_assert_locked(struct vm_area_struct *vma)
+{
+	int mm_lock_seq;
+
+	if (__is_vma_write_locked(vma, &mm_lock_seq))
+		return;
+
+	lockdep_assert_held(&vma->vm_lock->lock);
+	VM_BUG_ON_VMA(!rwsem_is_locked(&vma->vm_lock->lock), vma);
+}
+
 static inline void vma_assert_write_locked(struct vm_area_struct *vma)
 {
 	int mm_lock_seq;
@@ -731,6 +742,15 @@ static inline void release_fault_lock(struct vm_fault *vmf)
 		mmap_read_unlock(vmf->vma->vm_mm);
 }
 
+static inline
+void assert_fault_locked(struct vm_fault *vmf)
+{
+	if (vmf->flags & FAULT_FLAG_VMA_LOCK)
+		vma_assert_locked(vmf->vma);
+	else
+		mmap_assert_locked(vmf->vma->vm_mm);
+}
+
 #else /* CONFIG_PER_VMA_LOCK */
 
 static inline void vma_init_lock(struct vm_area_struct *vma) {}
@@ -749,6 +769,12 @@ static inline void release_fault_lock(struct vm_fault *vmf)
 	mmap_read_unlock(vmf->vma->vm_mm);
 }
 
+static inline
+void assert_fault_locked(struct vm_fault *vmf)
+{
+	mmap_assert_locked(vmf->vma->vm_mm);
+}
+
 #endif /* CONFIG_PER_VMA_LOCK */
 
 /*
diff --git a/mm/memory.c b/mm/memory.c
index 4fb8ecfc6d13..672f7383a622 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5202,6 +5202,17 @@ static vm_fault_t sanitize_fault_flags(struct vm_area_struct *vma,
 				 !is_cow_mapping(vma->vm_flags)))
 			return VM_FAULT_SIGSEGV;
 	}
+#ifdef CONFIG_PER_VMA_LOCK
+	/*
+	 * Per-VMA locks can't be used with FAULT_FLAG_RETRY_NOWAIT because of
+	 * the assumption that lock is dropped on VM_FAULT_RETRY.
+	 */
+	if (WARN_ON_ONCE((*flags &
+			(FAULT_FLAG_VMA_LOCK | FAULT_FLAG_RETRY_NOWAIT)) ==
+			(FAULT_FLAG_VMA_LOCK | FAULT_FLAG_RETRY_NOWAIT)))
+		return VM_FAULT_SIGSEGV;
+#endif
+
 	return 0;
 }
 
@@ -5294,15 +5305,6 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 	if (!vma_start_read(vma))
 		goto inval;
 
-	/*
-	 * Due to the possibility of userfault handler dropping mmap_lock, avoid
-	 * it for now and fall back to page fault handling under mmap_lock.
-	 */
-	if (userfaultfd_armed(vma)) {
-		vma_end_read(vma);
-		goto inval;
-	}
-
 	/* Check since vm_start/vm_end might change before we lock the VMA */
 	if (unlikely(address < vma->vm_start || address >= vma->vm_end)) {
 		vma_end_read(vma);
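
For context, a sketch of what assert_fault_locked() buys fault-path code
such as the userfaultfd helpers above (hypothetical helper):

static void fault_path_sketch(struct vm_fault *vmf)
{
	/*
	 * Asserts mmap_lock or the per-VMA lock depending on
	 * FAULT_FLAG_VMA_LOCK, so the helper doesn't need to know which
	 * lock the fault was handled under.
	 */
	assert_fault_locked(vmf);

	/* ... walk page tables, queue a userfault wait, etc. ... */
}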