From patchwork Wed Jun 28 07:17:55 2023
X-Patchwork-Submitter: Suren Baghdasaryan <surenb@google.com>
X-Patchwork-Id: 13295377
Date: Wed, 28 Jun 2023 00:17:55 -0700
In-Reply-To: <20230628071800.544800-1-surenb@google.com>
References: <20230628071800.544800-1-surenb@google.com>
Message-ID: <20230628071800.544800-2-surenb@google.com>
Subject: [PATCH v4 1/6] swap: remove remnants of polling from
 read_swap_cache_async
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Cc: willy@infradead.org, hannes@cmpxchg.org, mhocko@suse.com,
 josef@toxicpanda.com, jack@suse.cz, ldufour@linux.ibm.com,
 laurent.dufour@fr.ibm.com, michel@lespinasse.org, liam.howlett@oracle.com,
 jglisse@google.com, vbabka@suse.cz, minchan@google.com, dave@stgolabs.net,
 punit.agrawal@bytedance.com, lstoakes@gmail.com, hdanton@sina.com,
 apopple@nvidia.com, peterx@redhat.com, ying.huang@intel.com,
 david@redhat.com, yuzhao@google.com, dhowells@redhat.com, hughd@google.com,
 viro@zeniv.linux.org.uk, brauner@kernel.org, pasha.tatashin@soleen.com,
 surenb@google.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
 linux-kernel@vger.kernel.org, kernel-team@android.com, Christoph Hellwig

Commit [1] introduced IO polling support during swapin to reduce swap read
latency for block devices that can be polled. However, a later commit [2]
removed polling support. Therefore it is now safe to remove the do_poll
parameter from read_swap_cache_async and always call swap_readpage with
synchronous=false, waiting for IO completion in folio_lock_or_retry.

[1] commit 23955622ff8d ("swap: add block io poll in swapin path")
[2] commit 9650b453a3d4 ("block: ignore RWF_HIPRI hint for sync dio")

Suggested-by: "Huang, Ying" <ying.huang@intel.com>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: "Huang, Ying" <ying.huang@intel.com>
Reviewed-by: Christoph Hellwig
---
 mm/madvise.c    |  4 ++--
 mm/swap.h       |  1 -
 mm/swap_state.c | 12 +++++-------
 3 files changed, 7 insertions(+), 10 deletions(-)

diff --git a/mm/madvise.c b/mm/madvise.c
index b5ffbaf616f5..b1e8adf1234e 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -215,7 +215,7 @@ static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
 			continue;

 		page = read_swap_cache_async(entry, GFP_HIGHUSER_MOVABLE,
-					     vma, index, false, &splug);
+					     vma, index, &splug);
 		if (page)
 			put_page(page);
 	}
@@ -252,7 +252,7 @@ static void force_shm_swapin_readahead(struct vm_area_struct *vma,
 		rcu_read_unlock();

 		page = read_swap_cache_async(swap, GFP_HIGHUSER_MOVABLE,
-					     NULL, 0, false, &splug);
+					     NULL, 0, &splug);
 		if (page)
 			put_page(page);

diff --git a/mm/swap.h b/mm/swap.h
index 7c033d793f15..8a3c7a0ace4f 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -46,7 +46,6 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping,
 struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 				   struct vm_area_struct *vma,
 				   unsigned long addr,
-				   bool do_poll,
 				   struct swap_iocb **plug);
 struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 				     struct vm_area_struct *vma,
diff --git a/mm/swap_state.c b/mm/swap_state.c
index b76a65ac28b3..a3839de71f3f 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -517,15 +517,14 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
  */
 struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 				   struct vm_area_struct *vma,
-				   unsigned long addr, bool do_poll,
-				   struct swap_iocb **plug)
+				   unsigned long addr, struct swap_iocb **plug)
 {
 	bool page_was_allocated;
 	struct page *retpage = __read_swap_cache_async(entry, gfp_mask,
 			vma, addr, &page_was_allocated);

 	if (page_was_allocated)
-		swap_readpage(retpage, do_poll, plug);
+		swap_readpage(retpage, false, plug);

 	return retpage;
 }
@@ -620,7 +619,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	struct swap_info_struct *si = swp_swap_info(entry);
 	struct blk_plug plug;
 	struct swap_iocb *splug = NULL;
-	bool do_poll = true, page_allocated;
+	bool page_allocated;
 	struct vm_area_struct *vma = vmf->vma;
 	unsigned long addr = vmf->address;

@@ -628,7 +627,6 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	if (!mask)
 		goto skip;

-	do_poll = false;
 	/* Read a page_cluster sized and aligned cluster around offset. */
 	start_offset = offset & ~mask;
 	end_offset = offset | mask;
@@ -660,7 +658,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	lru_add_drain();	/* Push any new pages onto the LRU now */
 skip:
 	/* The page was likely read above, so no need for plugging here */
-	return read_swap_cache_async(entry, gfp_mask, vma, addr, do_poll, NULL);
+	return read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
 }

 int init_swap_address_space(unsigned int type, unsigned long nr_pages)
@@ -825,7 +823,7 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
 skip:
 	/* The page was likely read above, so no need for plugging here */
 	return read_swap_cache_async(fentry, gfp_mask, vma, vmf->address,
-				     ra_info.win == 1, NULL);
+				     NULL);
 }

 /**
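To see what the always-asynchronous model above means for the caller, here
is a minimal userspace sketch of the flow (compile with -pthread). Every
name in it, fake_read_swap_cache_async, io_completion, and the semaphore
standing in for the folio lock, is a mocked illustration and not a kernel
API: the read is issued without polling, and a later
folio_lock_or_retry-style wait blocks until the completion path "unlocks"
the page.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

/* Stand-in for the folio lock that stays held until the read completes. */
static sem_t folio_uptodate;

/* Models the block layer's completion handler unlocking the folio. */
static void *io_completion(void *arg)
{
	(void)arg;
	usleep(1000);			/* pretend the device needed 1 ms */
	sem_post(&folio_uptodate);
	return NULL;
}

/* Models read_swap_cache_async(): issue the read, never poll. */
static pthread_t fake_read_swap_cache_async(void)
{
	pthread_t io;

	pthread_create(&io, NULL, io_completion, NULL);
	return io;
}

int main(void)
{
	pthread_t io;

	sem_init(&folio_uptodate, 0, 0);
	io = fake_read_swap_cache_async();

	/* Models folio_lock_or_retry(): sleep until the IO completes. */
	sem_wait(&folio_uptodate);
	printf("swap read complete; fault handler proceeds\n");

	pthread_join(io, NULL);
	sem_destroy(&folio_uptodate);
	return 0;
}

The design point the patch relies on is that nothing between issuing the
read and locking the folio needs the result, so dropping the do_poll knob
loses no functionality.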
From patchwork Wed Jun 28 07:17:56 2023
X-Patchwork-Submitter: Suren Baghdasaryan <surenb@google.com>
X-Patchwork-Id: 13295372
Date: Wed, 28 Jun 2023 00:17:56 -0700
In-Reply-To: <20230628071800.544800-1-surenb@google.com>
References: <20230628071800.544800-1-surenb@google.com>
Message-ID: <20230628071800.544800-3-surenb@google.com>
Subject: [PATCH v4 2/6] mm: add missing VM_FAULT_RESULT_TRACE name for
 VM_FAULT_COMPLETED
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
VM_FAULT_RESULT_TRACE should contain an element for every vm_fault_reason
to be used as flag_array inside trace_print_flags_seq(). The element for
VM_FAULT_COMPLETED is missing; add it.

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
---
 include/linux/mm_types.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 306a3d1a0fa6..79765e3dd8f3 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1070,7 +1070,8 @@ enum vm_fault_reason {
 	{ VM_FAULT_RETRY,               "RETRY" },	\
 	{ VM_FAULT_FALLBACK,            "FALLBACK" },	\
 	{ VM_FAULT_DONE_COW,            "DONE_COW" },	\
-	{ VM_FAULT_NEEDDSYNC,           "NEEDDSYNC" }
+	{ VM_FAULT_NEEDDSYNC,           "NEEDDSYNC" },	\
+	{ VM_FAULT_COMPLETED,           "COMPLETED" }

 struct vm_special_mapping {
 	const char *name;	/* The name, e.g. "[vdso]". */
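Why a missing table entry matters is easiest to see with a small standalone
model of trace_print_flags_seq()'s decoding loop. The flag values below are
illustrative placeholders, not the kernel's actual bit assignments: without
a { VM_FAULT_COMPLETED, "COMPLETED" } entry, a completed fault shows up in
trace output only as undecoded hex residue.

#include <stdio.h>

/* Mocked subset of vm_fault_reason; values are illustrative only. */
#define VM_FAULT_RETRY     0x000400UL
#define VM_FAULT_COMPLETED 0x004000UL

struct trace_flag { unsigned long mask; const char *name; };

/* Analogue of VM_FAULT_RESULT_TRACE: one entry per printable flag. */
static const struct trace_flag vm_fault_trace[] = {
	{ VM_FAULT_RETRY,     "RETRY" },
	{ VM_FAULT_COMPLETED, "COMPLETED" },	/* the entry the patch adds */
};

/* Rough analogue of trace_print_flags_seq(): named bits are decoded,
 * anything without a table entry is dumped as raw hex. */
static void print_fault_result(unsigned long flags)
{
	const char *sep = "";
	size_t i;

	for (i = 0; i < sizeof(vm_fault_trace) / sizeof(vm_fault_trace[0]); i++) {
		if (flags & vm_fault_trace[i].mask) {
			printf("%s%s", sep, vm_fault_trace[i].name);
			flags &= ~vm_fault_trace[i].mask;
			sep = "|";
		}
	}
	if (flags)
		printf("%s0x%lx", sep, flags);	/* undecoded leftover bits */
	printf("\n");
}

int main(void)
{
	print_fault_result(VM_FAULT_RETRY | VM_FAULT_COMPLETED);
	return 0;
}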
From patchwork Wed Jun 28 07:17:57 2023
X-Patchwork-Submitter: Suren Baghdasaryan <surenb@google.com>
X-Patchwork-Id: 13295371
Date: Wed, 28 Jun 2023 00:17:57 -0700
In-Reply-To: <20230628071800.544800-1-surenb@google.com>
References: <20230628071800.544800-1-surenb@google.com>
Message-ID: <20230628071800.544800-4-surenb@google.com>
Subject: [PATCH v4 3/6] mm: drop per-VMA lock when returning VM_FAULT_RETRY
 or VM_FAULT_COMPLETED
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
When handle_mm_fault returns VM_FAULT_RETRY or VM_FAULT_COMPLETED, it means
mmap_lock has been released. However, with per-VMA locks the behavior is
different: the caller is still expected to release the lock. To make the
rules consistent for the caller, drop the per-VMA lock inside the fault
path before returning VM_FAULT_RETRY or VM_FAULT_COMPLETED. Currently the
only path that returns VM_FAULT_RETRY under per-VMA locks is do_swap_page,
and no path returns VM_FAULT_COMPLETED yet.

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Acked-by: Peter Xu <peterx@redhat.com>
---
 arch/arm64/mm/fault.c   | 3 ++-
 arch/powerpc/mm/fault.c | 3 ++-
 arch/s390/mm/fault.c    | 3 ++-
 arch/x86/mm/fault.c     | 3 ++-
 mm/memory.c             | 1 +
 5 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index c85b6d70b222..9c06c53a9ff3 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -612,7 +612,8 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 		goto lock_mmap;
 	}
 	fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs);
-	vma_end_read(vma);
+	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
+		vma_end_read(vma);

 	if (!(fault & VM_FAULT_RETRY)) {
 		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 531177a4ee08..4697c5dca31c 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -494,7 +494,8 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
 	}

 	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
-	vma_end_read(vma);
+	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
+		vma_end_read(vma);

 	if (!(fault & VM_FAULT_RETRY)) {
 		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
index b65144c392b0..cccefe41038b 100644
--- a/arch/s390/mm/fault.c
+++ b/arch/s390/mm/fault.c
@@ -418,7 +418,8 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 		goto lock_mmap;
 	}
 	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
-	vma_end_read(vma);
+	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
+		vma_end_read(vma);
 	if (!(fault & VM_FAULT_RETRY)) {
 		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
 		goto out;
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index e4399983c50c..d69c85c1c04e 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1347,7 +1347,8 @@ void do_user_addr_fault(struct pt_regs *regs,
 		goto lock_mmap;
 	}
 	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
-	vma_end_read(vma);
+	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
+		vma_end_read(vma);

 	if (!(fault & VM_FAULT_RETRY)) {
 		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
diff --git a/mm/memory.c b/mm/memory.c
index f69fbc251198..f14d45957b83 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3713,6 +3713,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)

 	if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
 		ret = VM_FAULT_RETRY;
+		vma_end_read(vma);
 		goto out;
 	}
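The resulting locking contract can be modeled in a few lines of standalone
C (mocked names and flag values, not kernel code): the fault path releases
the per-VMA lock itself when it returns RETRY or COMPLETED, and the arch
handler only unlocks in the remaining cases, so neither a lock leak nor a
double unlock is possible.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative fault-result bits; not the kernel's values. */
#define VM_FAULT_RETRY     0x1u
#define VM_FAULT_COMPLETED 0x2u

static bool vma_read_locked;

static void vma_end_read(void)
{
	vma_read_locked = false;
}

/* Models handle_mm_fault() under FAULT_FLAG_VMA_LOCK after this patch:
 * on RETRY the fault path itself drops the per-VMA lock. */
static unsigned int fake_handle_mm_fault(bool must_retry)
{
	if (must_retry) {
		vma_end_read();		/* dropped before returning */
		return VM_FAULT_RETRY;
	}
	return 0;
}

/* Models the arch page-fault handler: unlock only when the fault path
 * did not already do it, exactly as the arm64/powerpc/s390/x86 hunks do. */
static void arch_fault(bool must_retry)
{
	unsigned int fault;

	vma_read_locked = true;		/* vma_start_read() succeeded */
	fault = fake_handle_mm_fault(must_retry);

	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
		vma_end_read();

	printf("fault=%#x still_locked=%d (expect 0)\n", fault, vma_read_locked);
}

int main(void)
{
	arch_fault(false);	/* caller unlocks */
	arch_fault(true);	/* fault path unlocked; no double unlock */
	return 0;
}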
From patchwork Wed Jun 28 07:17:58 2023
X-Patchwork-Submitter: Suren Baghdasaryan <surenb@google.com>
X-Patchwork-Id: 13295366
Date: Wed, 28 Jun 2023 00:17:58 -0700
In-Reply-To: <20230628071800.544800-1-surenb@google.com>
References: <20230628071800.544800-1-surenb@google.com>
Message-ID: <20230628071800.544800-5-surenb@google.com>
Subject: [PATCH v4 4/6] mm: change folio_lock_or_retry to use vm_fault
 directly
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Change folio_lock_or_retry to accept a struct vm_fault and to return a
vm_fault_t directly: 0 when the folio is locked, VM_FAULT_RETRY otherwise.
This lets callers OR the result into their fault return value instead of
translating a bool.

Suggested-by: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Acked-by: Peter Xu <peterx@redhat.com>
---
 include/linux/pagemap.h |  9 ++++-----
 mm/filemap.c            | 22 ++++++++++++----------
 mm/memory.c             | 14 ++++++--------
 3 files changed, 22 insertions(+), 23 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index a56308a9d1a4..59d070c55c97 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -896,8 +896,7 @@ static inline bool wake_page_match(struct wait_page_queue *wait_page,

 void __folio_lock(struct folio *folio);
 int __folio_lock_killable(struct folio *folio);
-bool __folio_lock_or_retry(struct folio *folio, struct mm_struct *mm,
-				unsigned int flags);
+vm_fault_t __folio_lock_or_retry(struct folio *folio, struct vm_fault *vmf);
 void unlock_page(struct page *page);
 void folio_unlock(struct folio *folio);
@@ -1001,11 +1000,11 @@ static inline int folio_lock_killable(struct folio *folio)
  * Return value and mmap_lock implications depend on flags; see
  * __folio_lock_or_retry().
  */
-static inline bool folio_lock_or_retry(struct folio *folio,
-		struct mm_struct *mm, unsigned int flags)
+static inline vm_fault_t folio_lock_or_retry(struct folio *folio,
+					     struct vm_fault *vmf)
 {
 	might_sleep();
-	return folio_trylock(folio) || __folio_lock_or_retry(folio, mm, flags);
+	return folio_trylock(folio) ? 0 : __folio_lock_or_retry(folio, vmf);
 }

 /*
diff --git a/mm/filemap.c b/mm/filemap.c
index 00f01d8ead47..52bcf12dcdbf 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1701,32 +1701,34 @@ static int __folio_lock_async(struct folio *folio, struct wait_page_queue *wait)

 /*
  * Return values:
- * true - folio is locked; mmap_lock is still held.
- * false - folio is not locked.
+ * 0 - folio is locked.
+ * VM_FAULT_RETRY - folio is not locked.
  *	mmap_lock has been released (mmap_read_unlock(), unless flags had both
  *	FAULT_FLAG_ALLOW_RETRY and FAULT_FLAG_RETRY_NOWAIT set, in
  *	which case mmap_lock is still held.
  *
- * If neither ALLOW_RETRY nor KILLABLE are set, will always return true
+ * If neither ALLOW_RETRY nor KILLABLE are set, will always return 0
  * with the folio locked and the mmap_lock unperturbed.
  */
-bool __folio_lock_or_retry(struct folio *folio, struct mm_struct *mm,
-			 unsigned int flags)
+vm_fault_t __folio_lock_or_retry(struct folio *folio, struct vm_fault *vmf)
 {
+	struct mm_struct *mm = vmf->vma->vm_mm;
+	unsigned int flags = vmf->flags;
+
 	if (fault_flag_allow_retry_first(flags)) {
 		/*
 		 * CAUTION! In this case, mmap_lock is not released
-		 * even though return 0.
+		 * even though return VM_FAULT_RETRY.
 		 */
 		if (flags & FAULT_FLAG_RETRY_NOWAIT)
-			return false;
+			return VM_FAULT_RETRY;

 		mmap_read_unlock(mm);
 		if (flags & FAULT_FLAG_KILLABLE)
 			folio_wait_locked_killable(folio);
 		else
 			folio_wait_locked(folio);
-		return false;
+		return VM_FAULT_RETRY;
 	}
 	if (flags & FAULT_FLAG_KILLABLE) {
 		bool ret;
@@ -1734,13 +1736,13 @@ bool __folio_lock_or_retry(struct folio *folio, struct mm_struct *mm,

 		ret = __folio_lock_killable(folio);
 		if (ret) {
 			mmap_read_unlock(mm);
-			return false;
+			return VM_FAULT_RETRY;
 		}
 	} else {
 		__folio_lock(folio);
 	}

-	return true;
+	return 0;
 }

 /**
diff --git a/mm/memory.c b/mm/memory.c
index f14d45957b83..345080052003 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3568,6 +3568,7 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 	struct folio *folio = page_folio(vmf->page);
 	struct vm_area_struct *vma = vmf->vma;
 	struct mmu_notifier_range range;
+	vm_fault_t ret;

 	/*
 	 * We need a reference to lock the folio because we don't hold
@@ -3580,9 +3581,10 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 	if (!folio_try_get(folio))
 		return 0;

-	if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags)) {
+	ret = folio_lock_or_retry(folio, vmf);
+	if (ret) {
 		folio_put(folio);
-		return VM_FAULT_RETRY;
+		return ret;
 	}
 	mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0,
 				vma->vm_mm, vmf->address & PAGE_MASK,
@@ -3704,7 +3706,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	bool exclusive = false;
 	swp_entry_t entry;
 	pte_t pte;
-	int locked;
 	vm_fault_t ret = 0;
 	void *shadow = NULL;

@@ -3826,12 +3827,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		goto out_release;
 	}

-	locked = folio_lock_or_retry(folio, vma->vm_mm, vmf->flags);
-
-	if (!locked) {
-		ret |= VM_FAULT_RETRY;
+	ret |= folio_lock_or_retry(folio, vmf);
+	if (ret & VM_FAULT_RETRY)
 		goto out_release;
-	}

 	if (swapcache) {
 		/*
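The practical effect on callers is the bool-to-bitmask change sketched
below, a standalone model with illustrative values where the fake_* names
are stand-ins for the kernel functions: the lock helper's result can now be
OR-ed directly into the accumulated vm_fault_t, as the do_swap_page hunk
above does.

#include <stdio.h>

typedef unsigned int vm_fault_t;

#define VM_FAULT_RETRY 0x000400u	/* illustrative value */

/* Post-patch shape: 0 = folio locked, VM_FAULT_RETRY = not locked.
 * try_ok mocks folio_trylock() succeeding. */
static vm_fault_t fake_folio_lock_or_retry(int try_ok)
{
	return try_ok ? 0 : VM_FAULT_RETRY;
}

static vm_fault_t fake_do_swap_page(int try_ok)
{
	vm_fault_t ret = 0;

	/* The caller ORs the result straight into its fault code,
	 * replacing the old bool plus manual translation. */
	ret |= fake_folio_lock_or_retry(try_ok);
	if (ret & VM_FAULT_RETRY)
		return ret;	/* "goto out_release" in the real code */

	/* ... proceed with the locked folio ... */
	return ret;
}

int main(void)
{
	printf("locked: ret=%#x\n", fake_do_swap_page(1));
	printf("retry:  ret=%#x\n", fake_do_swap_page(0));
	return 0;
}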
From patchwork Wed Jun 28 07:17:59 2023
X-Patchwork-Submitter: Suren Baghdasaryan <surenb@google.com>
X-Patchwork-Id: 13295379
Date: Wed, 28 Jun 2023 00:17:59 -0700
In-Reply-To: <20230628071800.544800-1-surenb@google.com>
References: <20230628071800.544800-1-surenb@google.com>
Message-ID: <20230628071800.544800-6-surenb@google.com>
Subject: [PATCH v4 5/6] mm: handle swap page faults under per-VMA lock
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
When a page fault is handled under per-VMA lock protection, all swap page
faults are retried under mmap_lock because folio_lock_or_retry has to drop
and reacquire mmap_lock if the folio could not be locked immediately.

Follow the same pattern as mmap_lock and drop the per-VMA lock while
waiting for the folio, retrying once the folio is available.

With this obstacle removed, enable do_swap_page to operate under per-VMA
lock protection. Drivers implementing ops->migrate_to_ram might still rely
on mmap_lock, therefore we have to fall back to mmap_lock in that
particular case.

Note that the only time do_swap_page calls synchronous swap_readpage is
when SWP_SYNCHRONOUS_IO is set, which is only set for QUEUE_FLAG_SYNCHRONOUS
devices: brd, zram and nvdimms (both btt and pmem). Therefore this path
does not sleep, and there is no need to drop the mmap or per-VMA lock.
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Acked-by: Peter Xu <peterx@redhat.com>
---
 mm/filemap.c | 25 ++++++++++++++++---------
 mm/memory.c  | 16 ++++++++++------
 2 files changed, 26 insertions(+), 15 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 52bcf12dcdbf..7ee078e1a0d2 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1699,31 +1699,38 @@ static int __folio_lock_async(struct folio *folio, struct wait_page_queue *wait)
 	return ret;
 }

+static void release_fault_lock(struct vm_fault *vmf)
+{
+	if (vmf->flags & FAULT_FLAG_VMA_LOCK)
+		vma_end_read(vmf->vma);
+	else
+		mmap_read_unlock(vmf->vma->vm_mm);
+}
+
 /*
  * Return values:
  * 0 - folio is locked.
  * VM_FAULT_RETRY - folio is not locked.
- *	mmap_lock has been released (mmap_read_unlock(), unless flags had both
- *	FAULT_FLAG_ALLOW_RETRY and FAULT_FLAG_RETRY_NOWAIT set, in
- *	which case mmap_lock is still held.
+ *	mmap_lock or per-VMA lock has been released (mmap_read_unlock() or
+ *	vma_end_read()), unless flags had both FAULT_FLAG_ALLOW_RETRY and
+ *	FAULT_FLAG_RETRY_NOWAIT set, in which case the lock is still held.
  *
  * If neither ALLOW_RETRY nor KILLABLE are set, will always return 0
- * with the folio locked and the mmap_lock unperturbed.
+ * with the folio locked and the mmap_lock/per-VMA lock left unperturbed.
  */
 vm_fault_t __folio_lock_or_retry(struct folio *folio, struct vm_fault *vmf)
 {
-	struct mm_struct *mm = vmf->vma->vm_mm;
 	unsigned int flags = vmf->flags;

 	if (fault_flag_allow_retry_first(flags)) {
 		/*
-		 * CAUTION! In this case, mmap_lock is not released
-		 * even though return VM_FAULT_RETRY.
+		 * CAUTION! In this case, mmap_lock/per-VMA lock is not
+		 * released even though returning VM_FAULT_RETRY.
 		 */
 		if (flags & FAULT_FLAG_RETRY_NOWAIT)
 			return VM_FAULT_RETRY;

-		mmap_read_unlock(mm);
+		release_fault_lock(vmf);
 		if (flags & FAULT_FLAG_KILLABLE)
 			folio_wait_locked_killable(folio);
 		else
@@ -1735,7 +1742,7 @@ vm_fault_t __folio_lock_or_retry(struct folio *folio, struct vm_fault *vmf)

 		ret = __folio_lock_killable(folio);
 		if (ret) {
-			mmap_read_unlock(mm);
+			release_fault_lock(vmf);
 			return VM_FAULT_RETRY;
 		}
 	} else {
diff --git a/mm/memory.c b/mm/memory.c
index 345080052003..76c7907e7286 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3712,12 +3712,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	if (!pte_unmap_same(vmf))
 		goto out;

-	if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
-		ret = VM_FAULT_RETRY;
-		vma_end_read(vma);
-		goto out;
-	}
-
 	entry = pte_to_swp_entry(vmf->orig_pte);
 	if (unlikely(non_swap_entry(entry))) {
 		if (is_migration_entry(entry)) {
@@ -3727,6 +3721,16 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			vmf->page = pfn_swap_entry_to_page(entry);
 			ret = remove_device_exclusive_entry(vmf);
 		} else if (is_device_private_entry(entry)) {
+			if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
+				/*
+				 * migrate_to_ram is not yet ready to operate
+				 * under VMA lock.
+				 */
+				vma_end_read(vma);
+				ret |= VM_FAULT_RETRY;
+				goto out;
+			}
+
 			vmf->page = pfn_swap_entry_to_page(entry);
 			vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
 					vmf->address, &vmf->ptl);
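The core of the patch is the release_fault_lock() dispatch, which can be
modeled standalone as below (fake_vmf and the flag value are mocked for
illustration, not kernel definitions): the folio-wait path no longer needs
to know which lock the fault holds; it drops whichever one
FAULT_FLAG_VMA_LOCK says is held before sleeping.

#include <stdio.h>

#define FAULT_FLAG_VMA_LOCK 0x1u	/* illustrative bit */

struct fake_vmf {
	unsigned int flags;
};

static void vma_end_read(void)     { puts("dropped per-VMA lock"); }
static void mmap_read_unlock(void) { puts("dropped mmap_lock"); }

/* Models the release_fault_lock() helper the patch introduces:
 * one call site drops whichever lock this fault actually holds. */
static void release_fault_lock(struct fake_vmf *vmf)
{
	if (vmf->flags & FAULT_FLAG_VMA_LOCK)
		vma_end_read();
	else
		mmap_read_unlock();
}

/* Models __folio_lock_or_retry()'s wait path: drop the fault lock,
 * then sleep for the folio; the fault is retried afterwards. */
static void wait_for_folio(struct fake_vmf *vmf)
{
	release_fault_lock(vmf);
	/* folio_wait_locked(folio); ... then return VM_FAULT_RETRY */
}

int main(void)
{
	struct fake_vmf per_vma = { .flags = FAULT_FLAG_VMA_LOCK };
	struct fake_vmf classic = { .flags = 0 };

	wait_for_folio(&per_vma);
	wait_for_folio(&classic);
	return 0;
}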
From patchwork Wed Jun 28 07:18:00 2023
X-Patchwork-Submitter: Suren Baghdasaryan <surenb@google.com>
X-Patchwork-Id: 13295367
Date: Wed, 28 Jun 2023 00:18:00 -0700
In-Reply-To: <20230628071800.544800-1-surenb@google.com>
References: <20230628071800.544800-1-surenb@google.com>
Message-ID: <20230628071800.544800-7-surenb@google.com>
Subject: [PATCH v4 6/6] mm: handle userfaults under VMA lock
From: Suren Baghdasaryan <surenb@google.com>
To: akpm@linux-foundation.org
Enable handle_userfault to operate under VMA lock by releasing the VMA lock
instead of mmap_lock and retrying. Note that FAULT_FLAG_RETRY_NOWAIT should
never be used when handling faults under per-VMA lock protection because
that would break the assumption that the lock is dropped on retry.

Signed-off-by: Suren Baghdasaryan <surenb@google.com>
---
 fs/userfaultfd.c   | 39 ++++++++++++++++++---------------------
 include/linux/mm.h | 39 +++++++++++++++++++++++++++++++++++++++
 mm/filemap.c       |  8 --------
 mm/memory.c        |  9 ---------
 4 files changed, 57 insertions(+), 38 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 4e800bb7d2ab..d019e7df6f15 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -277,17 +277,16 @@ static inline struct uffd_msg userfault_msg(unsigned long address,
  * hugepmd ranges.
  */
 static inline bool userfaultfd_huge_must_wait(struct userfaultfd_ctx *ctx,
-					      struct vm_area_struct *vma,
-					      unsigned long address,
-					      unsigned long flags,
-					      unsigned long reason)
+					      struct vm_fault *vmf,
+					      unsigned long reason)
 {
+	struct vm_area_struct *vma = vmf->vma;
 	pte_t *ptep, pte;
 	bool ret = true;

-	mmap_assert_locked(ctx->mm);
+	assert_fault_locked(ctx->mm, vmf);

-	ptep = hugetlb_walk(vma, address, vma_mmu_pagesize(vma));
+	ptep = hugetlb_walk(vma, vmf->address, vma_mmu_pagesize(vma));
 	if (!ptep)
 		goto out;

@@ -308,10 +307,8 @@ static inline bool userfaultfd_huge_must_wait(struct userfaultfd_ctx *ctx,
 }
 #else
 static inline bool userfaultfd_huge_must_wait(struct userfaultfd_ctx *ctx,
-					      struct vm_area_struct *vma,
-					      unsigned long address,
-					      unsigned long flags,
-					      unsigned long reason)
+					      struct vm_fault *vmf,
+					      unsigned long reason)
 {
 	return false;	/* should never get here */
 }
@@ -325,11 +322,11 @@ static inline bool userfaultfd_huge_must_wait(struct userfaultfd_ctx *ctx,
  * threads.
  */
 static inline bool userfaultfd_must_wait(struct userfaultfd_ctx *ctx,
-					 unsigned long address,
-					 unsigned long flags,
+					 struct vm_fault *vmf,
 					 unsigned long reason)
 {
 	struct mm_struct *mm = ctx->mm;
+	unsigned long address = vmf->address;
 	pgd_t *pgd;
 	p4d_t *p4d;
 	pud_t *pud;
@@ -337,7 +334,7 @@ static inline bool userfaultfd_must_wait(struct userfaultfd_ctx *ctx,
 	pte_t *pte;
 	bool ret = true;

-	mmap_assert_locked(mm);
+	assert_fault_locked(mm, vmf);

 	pgd = pgd_offset(mm, address);
 	if (!pgd_present(*pgd))
@@ -445,7 +442,7 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
 	 * Coredumping runs without mmap_lock so we can only check that
 	 * the mmap_lock is held, if PF_DUMPCORE was not set.
 	 */
-	mmap_assert_locked(mm);
+	assert_fault_locked(mm, vmf);

 	ctx = vma->vm_userfaultfd_ctx.ctx;
 	if (!ctx)
@@ -522,8 +519,11 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
 	 * and wait.
 	 */
 	ret = VM_FAULT_RETRY;
-	if (vmf->flags & FAULT_FLAG_RETRY_NOWAIT)
+	if (vmf->flags & FAULT_FLAG_RETRY_NOWAIT) {
+		/* Per-VMA lock is expected to be dropped on VM_FAULT_RETRY */
+		BUG_ON(vmf->flags & FAULT_FLAG_VMA_LOCK);
 		goto out;
+	}

 	/* take the reference before dropping the mmap_lock */
 	userfaultfd_ctx_get(ctx);
@@ -561,15 +561,12 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
 	spin_unlock_irq(&ctx->fault_pending_wqh.lock);

 	if (!is_vm_hugetlb_page(vma))
-		must_wait = userfaultfd_must_wait(ctx, vmf->address, vmf->flags,
-						  reason);
+		must_wait = userfaultfd_must_wait(ctx, vmf, reason);
 	else
-		must_wait = userfaultfd_huge_must_wait(ctx, vma,
-						       vmf->address,
-						       vmf->flags, reason);
+		must_wait = userfaultfd_huge_must_wait(ctx, vmf, reason);
 	if (is_vm_hugetlb_page(vma))
 		hugetlb_vma_unlock_read(vma);
-	mmap_read_unlock(mm);
+	release_fault_lock(vmf);

 	if (likely(must_wait && !READ_ONCE(ctx->released))) {
 		wake_up_poll(&ctx->fd_wqh, EPOLLIN);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index fec149585985..70bb2f923e33 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -705,6 +705,17 @@ static inline bool vma_try_start_write(struct vm_area_struct *vma)
 	return true;
 }

+static inline void vma_assert_locked(struct vm_area_struct *vma)
+{
+	int mm_lock_seq;
+
+	if (__is_vma_write_locked(vma, &mm_lock_seq))
+		return;
+
+	lockdep_assert_held(&vma->vm_lock->lock);
+	VM_BUG_ON_VMA(!rwsem_is_locked(&vma->vm_lock->lock), vma);
+}
+
 static inline void vma_assert_write_locked(struct vm_area_struct *vma)
 {
 	int mm_lock_seq;
@@ -723,6 +734,23 @@ static inline void vma_mark_detached(struct vm_area_struct *vma, bool detached)
 struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 					  unsigned long address);

+static inline
+void assert_fault_locked(struct mm_struct *mm, struct vm_fault *vmf)
+{
+	if (vmf->flags & FAULT_FLAG_VMA_LOCK)
+		vma_assert_locked(vmf->vma);
+	else
+		mmap_assert_locked(mm);
+}
+
+static inline void release_fault_lock(struct vm_fault *vmf)
+{
+	if (vmf->flags & FAULT_FLAG_VMA_LOCK)
+		vma_end_read(vmf->vma);
+	else
+		mmap_read_unlock(vmf->vma->vm_mm);
+}
+
 #else /* CONFIG_PER_VMA_LOCK */

 static inline void vma_init_lock(struct vm_area_struct *vma) {}
@@ -736,6 +764,17 @@ static inline void vma_assert_write_locked(struct vm_area_struct *vma) {}
 static inline void vma_mark_detached(struct vm_area_struct *vma,
 				     bool detached) {}

+static inline
+void assert_fault_locked(struct mm_struct *mm, struct vm_fault *vmf)
+{
+	mmap_assert_locked(mm);
+}
+
+static inline void release_fault_lock(struct vm_fault *vmf)
+{
+	mmap_read_unlock(vmf->vma->vm_mm);
+}
+
 #endif /* CONFIG_PER_VMA_LOCK */

 /*
diff --git a/mm/filemap.c b/mm/filemap.c
index 7ee078e1a0d2..d4d8f474e0c5 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1699,14 +1699,6 @@ static int __folio_lock_async(struct folio *folio, struct wait_page_queue *wait)
 	return ret;
 }

-static void release_fault_lock(struct vm_fault *vmf)
-{
-	if (vmf->flags & FAULT_FLAG_VMA_LOCK)
-		vma_end_read(vmf->vma);
-	else
-		mmap_read_unlock(vmf->vma->vm_mm);
-}
-
 /*
  * Return values:
  * 0 - folio is locked.
diff --git a/mm/memory.c b/mm/memory.c
index 76c7907e7286..c6c759922f39 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5294,15 +5294,6 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 	if (!vma_start_read(vma))
 		goto inval;

-	/*
-	 * Due to the possibility of userfault handler dropping mmap_lock, avoid
-	 * it for now and fall back to page fault handling under mmap_lock.
-	 */
-	if (userfaultfd_armed(vma)) {
-		vma_end_read(vma);
-		goto inval;
-	}
-
 	/* Check since vm_start/vm_end might change before we lock the VMA */
 	if (unlikely(address < vma->vm_start || address >= vma->vm_end)) {
 		vma_end_read(vma);
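Putting the series together, the userfault path now follows the invariant
stated in the commit message. The standalone model below (mocked names and
flag values, not kernel definitions) shows both halves: release_fault_lock()
drops the correct lock before waiting, and NOWAIT is asserted never to be
combined with the per-VMA lock, since VM_FAULT_RETRY now implies the
per-VMA lock was dropped.

#include <assert.h>
#include <stdio.h>

/* Illustrative flag bits; not the kernel's values. */
#define FAULT_FLAG_VMA_LOCK     0x1u
#define FAULT_FLAG_RETRY_NOWAIT 0x2u
#define VM_FAULT_RETRY          0x4u

struct fake_vmf { unsigned int flags; };

static void release_fault_lock(struct fake_vmf *vmf)
{
	if (vmf->flags & FAULT_FLAG_VMA_LOCK)
		puts("vma_end_read()");
	else
		puts("mmap_read_unlock()");
}

/* Models handle_userfault() after this patch: NOWAIT and the per-VMA
 * lock are mutually exclusive, because VM_FAULT_RETRY now promises the
 * per-VMA lock was dropped, which the NOWAIT path would not do. */
static unsigned int fake_handle_userfault(struct fake_vmf *vmf)
{
	if (vmf->flags & FAULT_FLAG_RETRY_NOWAIT) {
		assert(!(vmf->flags & FAULT_FLAG_VMA_LOCK)); /* BUG_ON() */
		return VM_FAULT_RETRY;	/* lock intentionally kept */
	}
	release_fault_lock(vmf);	/* drop before waiting for uffd */
	/* ... wait for userspace to resolve the fault, then retry ... */
	return VM_FAULT_RETRY;
}

int main(void)
{
	struct fake_vmf vma_locked = { .flags = FAULT_FLAG_VMA_LOCK };
	struct fake_vmf nowait = { .flags = FAULT_FLAG_RETRY_NOWAIT };

	fake_handle_userfault(&vma_locked);	/* prints vma_end_read() */
	fake_handle_userfault(&nowait);		/* keeps the lock */
	return 0;
}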