From patchwork Fri Jun 30 02:04:30 2023
Subject: [PATCH v6 1/6] swap: remove remnants of polling from read_swap_cache_async
From: Suren Baghdasaryan
Date: Thu, 29 Jun 2023 19:04:30 -0700
Message-ID: <20230630020436.1066016-2-surenb@google.com>
In-Reply-To: <20230630020436.1066016-1-surenb@google.com>
To: akpm@linux-foundation.org
Cc: willy@infradead.org, hannes@cmpxchg.org, mhocko@suse.com, josef@toxicpanda.com,
    jack@suse.cz, ldufour@linux.ibm.com, laurent.dufour@fr.ibm.com, michel@lespinasse.org,
    liam.howlett@oracle.com, jglisse@google.com, vbabka@suse.cz, minchan@google.com,
    dave@stgolabs.net, punit.agrawal@bytedance.com, lstoakes@gmail.com, hdanton@sina.com,
    apopple@nvidia.com, peterx@redhat.com, ying.huang@intel.com, david@redhat.com,
    yuzhao@google.com, dhowells@redhat.com, hughd@google.com, viro@zeniv.linux.org.uk,
    brauner@kernel.org, pasha.tatashin@soleen.com, surenb@google.com, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-team@android.com,
    Christoph Hellwig

Commit [1] introduced IO polling support during swapin to reduce swap read
latency for block devices that can be polled. However, commit [2] later
removed polling support. Therefore it seems safe to remove the do_poll
parameter of read_swap_cache_async and to always call swap_readpage with
synchronous=false, waiting for IO completion in folio_lock_or_retry.

[1] commit 23955622ff8d ("swap: add block io poll in swapin path")
[2] commit 9650b453a3d4 ("block: ignore RWF_HIPRI hint for sync dio")

Suggested-by: "Huang, Ying"
Signed-off-by: Suren Baghdasaryan
Reviewed-by: "Huang, Ying"
Reviewed-by: Christoph Hellwig
---
 mm/madvise.c    |  4 ++--
 mm/swap.h       |  1 -
 mm/swap_state.c | 12 +++++-------
 3 files changed, 7 insertions(+), 10 deletions(-)

diff --git a/mm/madvise.c b/mm/madvise.c
index 886f06066622..ac6d92f74f6d 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -218,7 +218,7 @@ static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
 		ptep = NULL;
 
 		page = read_swap_cache_async(entry, GFP_HIGHUSER_MOVABLE,
-					     vma, addr, false, &splug);
+					     vma, addr, &splug);
 		if (page)
 			put_page(page);
 	}
@@ -262,7 +262,7 @@ static void shmem_swapin_range(struct vm_area_struct *vma,
 		rcu_read_unlock();
 
 		page = read_swap_cache_async(entry, mapping_gfp_mask(mapping),
-					     vma, addr, false, &splug);
+					     vma, addr, &splug);
 		if (page)
 			put_page(page);
diff --git a/mm/swap.h b/mm/swap.h
index 7c033d793f15..8a3c7a0ace4f 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -46,7 +46,6 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping,
 struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 				   struct vm_area_struct *vma,
 				   unsigned long addr,
-				   bool do_poll,
 				   struct swap_iocb **plug);
 struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 				     struct vm_area_struct *vma,
diff --git a/mm/swap_state.c b/mm/swap_state.c
index f8ea7015bad4..5a690c79cc13 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -527,15 +527,14 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
  */
 struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 				   struct vm_area_struct *vma,
-				   unsigned long addr, bool do_poll,
-				   struct swap_iocb **plug)
+				   unsigned long addr, struct swap_iocb **plug)
 {
 	bool page_was_allocated;
 	struct page *retpage = __read_swap_cache_async(entry, gfp_mask,
 			vma, addr, &page_was_allocated);
 
 	if (page_was_allocated)
-		swap_readpage(retpage, do_poll, plug);
+		swap_readpage(retpage, false, plug);
 
 	return retpage;
 }
@@ -630,7 +629,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	struct swap_info_struct *si = swp_swap_info(entry);
 	struct blk_plug plug;
 	struct swap_iocb *splug = NULL;
-	bool do_poll = true, page_allocated;
+	bool page_allocated;
 	struct vm_area_struct *vma = vmf->vma;
 	unsigned long addr = vmf->address;
 
@@ -638,7 +637,6 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	if (!mask)
 		goto skip;
 
-	do_poll = false;
 	/* Read a page_cluster sized and aligned cluster around offset. */
 	start_offset = offset & ~mask;
 	end_offset = offset | mask;
@@ -670,7 +668,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	lru_add_drain();	/* Push any new pages onto the LRU now */
 skip:
 	/* The page was likely read above, so no need for plugging here */
-	return read_swap_cache_async(entry, gfp_mask, vma, addr, do_poll, NULL);
+	return read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
 }
 
 int init_swap_address_space(unsigned int type, unsigned long nr_pages)
@@ -838,7 +836,7 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
 skip:
 	/* The page was likely read above, so no need for plugging here */
 	return read_swap_cache_async(fentry, gfp_mask, vma, vmf->address,
-				     ra_info.win == 1, NULL);
+				     NULL);
 }
 
 /**
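
For quick reference, the net effect on the swapin helper can be condensed as
follows; this is a sketch distilled from the hunks above, not verbatim file
contents:

/* Sketch condensed from the mm/swap_state.c hunks above (not verbatim). */
struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
				   struct vm_area_struct *vma,
				   unsigned long addr, struct swap_iocb **plug)
{
	bool page_was_allocated;
	struct page *retpage = __read_swap_cache_async(entry, gfp_mask,
						       vma, addr, &page_was_allocated);

	if (page_was_allocated)
		/* Polling has been a no-op since commit 9650b453a3d4, so the
		 * read is now always submitted with synchronous=false. */
		swap_readpage(retpage, false, plug);

	return retpage;
}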
From patchwork Fri Jun 30 02:04:31 2023
Subject: [PATCH v6 2/6] mm: add missing VM_FAULT_RESULT_TRACE name for VM_FAULT_COMPLETED
From: Suren Baghdasaryan
Date: Thu, 29 Jun 2023 19:04:31 -0700
Message-ID: <20230630020436.1066016-3-surenb@google.com>
In-Reply-To: <20230630020436.1066016-1-surenb@google.com>
To: akpm@linux-foundation.org

VM_FAULT_RESULT_TRACE should contain an element for every vm_fault_reason
to be used as flag_array inside trace_print_flags_seq(). The element for
VM_FAULT_COMPLETED is missing; add it.

Signed-off-by: Suren Baghdasaryan
Reviewed-by: Peter Xu
---
 include/linux/mm_types.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index de10fc797c8e..39cd34b4dbaa 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1077,7 +1077,8 @@ enum vm_fault_reason {
 	{ VM_FAULT_RETRY,		"RETRY" },	\
 	{ VM_FAULT_FALLBACK,		"FALLBACK" },	\
 	{ VM_FAULT_DONE_COW,		"DONE_COW" },	\
-	{ VM_FAULT_NEEDDSYNC,		"NEEDDSYNC" }
+	{ VM_FAULT_NEEDDSYNC,		"NEEDDSYNC" },	\
+	{ VM_FAULT_COMPLETED,		"COMPLETED" }
 
 struct vm_special_mapping {
 	const char *name;	/* The name, e.g. "[vdso]". */
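
As background, and purely as an illustrative sketch (the tracepoint field
below is hypothetical, not part of this patch): tables like
VM_FAULT_RESULT_TRACE are normally consumed by __print_flags() inside a
TP_printk(), which is what ends up in trace_print_flags_seq(); without the
new entry, a VM_FAULT_COMPLETED result would print as a raw hex value
instead of "COMPLETED".

/* Hypothetical tracepoint fragment showing how the flag table is consumed. */
TP_printk("fault result=%s",
	  __print_flags(__entry->result, "|", VM_FAULT_RESULT_TRACE))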
From patchwork Fri Jun 30 02:04:32 2023
Subject: [PATCH v6 3/6] mm: drop per-VMA lock when returning VM_FAULT_RETRY or VM_FAULT_COMPLETED
From: Suren Baghdasaryan
Date: Thu, 29 Jun 2023 19:04:32 -0700
Message-ID: <20230630020436.1066016-4-surenb@google.com>
In-Reply-To: <20230630020436.1066016-1-surenb@google.com>
To: akpm@linux-foundation.org

handle_mm_fault returning VM_FAULT_RETRY or VM_FAULT_COMPLETED means
mmap_lock has been released. However, with per-VMA locks the behavior is
different: the caller is still expected to release the lock. To make the
rules consistent for the caller, drop the per-VMA lock before returning
VM_FAULT_RETRY or VM_FAULT_COMPLETED. Currently the only path returning
VM_FAULT_RETRY under per-VMA locks is do_swap_page, and no path returns
VM_FAULT_COMPLETED for now.

Signed-off-by: Suren Baghdasaryan
Acked-by: Peter Xu
---
 arch/arm64/mm/fault.c   |  3 ++-
 arch/powerpc/mm/fault.c |  3 ++-
 arch/s390/mm/fault.c    |  3 ++-
 arch/x86/mm/fault.c     |  3 ++-
 mm/memory.c             | 12 ++++++++++++
 5 files changed, 20 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 935f0a8911f9..9d78ff78b0e3 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -602,7 +602,8 @@ static int __kprobes do_page_fault(unsigned long far, unsigned long esr,
 		goto lock_mmap;
 	}
 	fault = handle_mm_fault(vma, addr, mm_flags | FAULT_FLAG_VMA_LOCK, regs);
-	vma_end_read(vma);
+	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
+		vma_end_read(vma);
 
 	if (!(fault & VM_FAULT_RETRY)) {
 		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 5bfdf6ecfa96..82954d0e6906 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -489,7 +489,8 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
 	}
 
 	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
-	vma_end_read(vma);
+	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
+		vma_end_read(vma);
 
 	if (!(fault & VM_FAULT_RETRY)) {
 		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
index dbe8394234e2..40a71063949b 100644
--- a/arch/s390/mm/fault.c
+++ b/arch/s390/mm/fault.c
@@ -418,7 +418,8 @@ static inline vm_fault_t do_exception(struct pt_regs *regs, int access)
 		goto lock_mmap;
 	}
 	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
-	vma_end_read(vma);
+	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
+		vma_end_read(vma);
 	if (!(fault & VM_FAULT_RETRY)) {
 		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
 		goto out;
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index e8711b2cafaf..56b4f9faf8c4 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1341,7 +1341,8 @@ void do_user_addr_fault(struct pt_regs *regs,
 		goto lock_mmap;
 	}
 	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
-	vma_end_read(vma);
+	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
+		vma_end_read(vma);
 
 	if (!(fault & VM_FAULT_RETRY)) {
 		count_vm_vma_lock_event(VMA_LOCK_SUCCESS);
diff --git a/mm/memory.c b/mm/memory.c
index 0ae594703021..5f26c56ce979 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3730,6 +3730,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 
 	if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
 		ret = VM_FAULT_RETRY;
+		vma_end_read(vma);
 		goto out;
 	}
 
@@ -5182,6 +5183,17 @@ static vm_fault_t sanitize_fault_flags(struct vm_area_struct *vma,
 				 !is_cow_mapping(vma->vm_flags)))
 			return VM_FAULT_SIGSEGV;
 	}
+#ifdef CONFIG_PER_VMA_LOCK
+	/*
+	 * Per-VMA locks can't be used with FAULT_FLAG_RETRY_NOWAIT because of
+	 * the assumption that lock is dropped on VM_FAULT_RETRY.
+	 */
+	if (WARN_ON_ONCE((*flags &
+			(FAULT_FLAG_VMA_LOCK | FAULT_FLAG_RETRY_NOWAIT)) ==
+			(FAULT_FLAG_VMA_LOCK | FAULT_FLAG_RETRY_NOWAIT)))
+		return VM_FAULT_SIGSEGV;
+#endif
+
 	return 0;
 }
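
Condensing the arch hunks above into the common pattern they all now share
(a sketch with arch-specific details omitted):

	fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs);
	/* On RETRY or COMPLETED the per-VMA lock has already been dropped. */
	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED)))
		vma_end_read(vma);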
From patchwork Fri Jun 30 02:04:33 2023
Subject: [PATCH v6 4/6] mm: change folio_lock_or_retry to use vm_fault directly
From: Suren Baghdasaryan
Date: Thu, 29 Jun 2023 19:04:33 -0700
Message-ID: <20230630020436.1066016-5-surenb@google.com>
In-Reply-To: <20230630020436.1066016-1-surenb@google.com>
To: akpm@linux-foundation.org

Change folio_lock_or_retry to accept a vm_fault struct and return the
vm_fault_t directly.

Suggested-by: Matthew Wilcox
Signed-off-by: Suren Baghdasaryan
Acked-by: Peter Xu
---
 include/linux/pagemap.h |  9 ++++-----
 mm/filemap.c            | 22 ++++++++++++----------
 mm/memory.c             | 14 ++++++--------
 3 files changed, 22 insertions(+), 23 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 716953ee1ebd..0026a0a8277c 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -900,8 +900,7 @@ static inline bool wake_page_match(struct wait_page_queue *wait_page,
 
 void __folio_lock(struct folio *folio);
 int __folio_lock_killable(struct folio *folio);
-bool __folio_lock_or_retry(struct folio *folio, struct mm_struct *mm,
-				unsigned int flags);
+vm_fault_t __folio_lock_or_retry(struct folio *folio, struct vm_fault *vmf);
 void unlock_page(struct page *page);
 void folio_unlock(struct folio *folio);
@@ -1005,11 +1004,11 @@ static inline int folio_lock_killable(struct folio *folio)
  * Return value and mmap_lock implications depend on flags; see
  * __folio_lock_or_retry().
  */
-static inline bool folio_lock_or_retry(struct folio *folio,
-		struct mm_struct *mm, unsigned int flags)
+static inline vm_fault_t folio_lock_or_retry(struct folio *folio,
+					     struct vm_fault *vmf)
 {
 	might_sleep();
-	return folio_trylock(folio) || __folio_lock_or_retry(folio, mm, flags);
+	return folio_trylock(folio) ? 0 : __folio_lock_or_retry(folio, vmf);
 }
 
 /*
diff --git a/mm/filemap.c b/mm/filemap.c
index 9e44a49bbd74..d245bb4f7153 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1669,32 +1669,34 @@ static int __folio_lock_async(struct folio *folio, struct wait_page_queue *wait)
 
 /*
  * Return values:
- * true - folio is locked; mmap_lock is still held.
- * false - folio is not locked.
+ * 0 - folio is locked.
+ * VM_FAULT_RETRY - folio is not locked.
  *     mmap_lock has been released (mmap_read_unlock(), unless flags had both
  *     FAULT_FLAG_ALLOW_RETRY and FAULT_FLAG_RETRY_NOWAIT set, in
  *     which case mmap_lock is still held.
  *
- * If neither ALLOW_RETRY nor KILLABLE are set, will always return true
+ * If neither ALLOW_RETRY nor KILLABLE are set, will always return 0
  * with the folio locked and the mmap_lock unperturbed.
  */
-bool __folio_lock_or_retry(struct folio *folio, struct mm_struct *mm,
-			 unsigned int flags)
+vm_fault_t __folio_lock_or_retry(struct folio *folio, struct vm_fault *vmf)
 {
+	struct mm_struct *mm = vmf->vma->vm_mm;
+	unsigned int flags = vmf->flags;
+
 	if (fault_flag_allow_retry_first(flags)) {
 		/*
 		 * CAUTION! In this case, mmap_lock is not released
-		 * even though return 0.
+		 * even though return VM_FAULT_RETRY.
 		 */
 		if (flags & FAULT_FLAG_RETRY_NOWAIT)
-			return false;
+			return VM_FAULT_RETRY;
 
 		mmap_read_unlock(mm);
 		if (flags & FAULT_FLAG_KILLABLE)
 			folio_wait_locked_killable(folio);
 		else
 			folio_wait_locked(folio);
-		return false;
+		return VM_FAULT_RETRY;
 	}
 	if (flags & FAULT_FLAG_KILLABLE) {
 		bool ret;
@@ -1702,13 +1704,13 @@ bool __folio_lock_or_retry(struct folio *folio, struct mm_struct *mm,
 		ret = __folio_lock_killable(folio);
 		if (ret) {
 			mmap_read_unlock(mm);
-			return false;
+			return VM_FAULT_RETRY;
 		}
 	} else {
 		__folio_lock(folio);
 	}
 
-	return true;
+	return 0;
 }
 
 /**
diff --git a/mm/memory.c b/mm/memory.c
index 5f26c56ce979..4ae3f046f593 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3582,6 +3582,7 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 	struct folio *folio = page_folio(vmf->page);
 	struct vm_area_struct *vma = vmf->vma;
 	struct mmu_notifier_range range;
+	vm_fault_t ret;
 
 	/*
 	 * We need a reference to lock the folio because we don't hold
@@ -3594,9 +3595,10 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 	if (!folio_try_get(folio))
 		return 0;
 
-	if (!folio_lock_or_retry(folio, vma->vm_mm, vmf->flags)) {
+	ret = folio_lock_or_retry(folio, vmf);
+	if (ret) {
 		folio_put(folio);
-		return VM_FAULT_RETRY;
+		return ret;
 	}
 	mmu_notifier_range_init_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0,
 				vma->vm_mm, vmf->address & PAGE_MASK,
@@ -3721,7 +3723,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	bool exclusive = false;
 	swp_entry_t entry;
 	pte_t pte;
-	int locked;
 	vm_fault_t ret = 0;
 	void *shadow = NULL;
 
@@ -3844,12 +3845,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		goto out_release;
 	}
 
-	locked = folio_lock_or_retry(folio, vma->vm_mm, vmf->flags);
-
-	if (!locked) {
-		ret |= VM_FAULT_RETRY;
+	ret |= folio_lock_or_retry(folio, vmf);
+	if (ret & VM_FAULT_RETRY)
 		goto out_release;
-	}
 
 	if (swapcache) {
 		/*
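
The new calling convention, condensed from the do_swap_page hunk above
(sketch): 0 means the folio is locked, otherwise the returned vm_fault_t is
merged into the caller's result:

	ret |= folio_lock_or_retry(folio, vmf);	/* 0 or VM_FAULT_RETRY */
	if (ret & VM_FAULT_RETRY)
		goto out_release;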
From patchwork Fri Jun 30 02:04:34 2023
Subject: [PATCH v6 5/6] mm: handle swap page faults under per-VMA lock
From: Suren Baghdasaryan
Date: Thu, 29 Jun 2023 19:04:34 -0700
Message-ID: <20230630020436.1066016-6-surenb@google.com>
In-Reply-To: <20230630020436.1066016-1-surenb@google.com>
To: akpm@linux-foundation.org

When a page fault is handled under per-VMA lock protection, all swap page
faults are retried with mmap_lock because folio_lock_or_retry has to drop
and reacquire mmap_lock if the folio could not be locked immediately.
Follow the same pattern as with mmap_lock and drop the per-VMA lock when
waiting for the folio, retrying once the folio is available.

With this obstacle removed, enable do_swap_page to operate under per-VMA
lock protection. Drivers implementing ops->migrate_to_ram might still rely
on mmap_lock, therefore we have to fall back to mmap_lock in that
particular case.

Note that the only time do_swap_page calls synchronous swap_readpage is
when SWP_SYNCHRONOUS_IO is set, which is only set for QUEUE_FLAG_SYNCHRONOUS
devices: brd, zram and nvdimms (both btt and pmem). Therefore we don't
sleep in this path, and there's no need to drop the mmap or per-VMA lock.

Signed-off-by: Suren Baghdasaryan
Tested-by: Alistair Popple
Reviewed-by: Alistair Popple
Acked-by: Peter Xu
---
 include/linux/mm.h | 13 +++++++++++++
 mm/filemap.c       | 17 ++++++++---------
 mm/memory.c        | 16 ++++++++++------
 3 files changed, 31 insertions(+), 15 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 39aa409e84d5..54ab11214f4f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -720,6 +720,14 @@ static inline void vma_mark_detached(struct vm_area_struct *vma, bool detached)
 	vma->detached = detached;
 }
 
+static inline void release_fault_lock(struct vm_fault *vmf)
+{
+	if (vmf->flags & FAULT_FLAG_VMA_LOCK)
+		vma_end_read(vmf->vma);
+	else
+		mmap_read_unlock(vmf->vma->vm_mm);
+}
+
 struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 					  unsigned long address);
 
@@ -735,6 +743,11 @@ static inline void vma_assert_write_locked(struct vm_area_struct *vma) {}
 static inline void vma_mark_detached(struct vm_area_struct *vma,
 				     bool detached) {}
 
+static inline void release_fault_lock(struct vm_fault *vmf)
+{
+	mmap_read_unlock(vmf->vma->vm_mm);
+}
+
 #endif /* CONFIG_PER_VMA_LOCK */
 
 /*
diff --git a/mm/filemap.c b/mm/filemap.c
index d245bb4f7153..6f4a3d83a073 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1671,27 +1671,26 @@ static int __folio_lock_async(struct folio *folio, struct wait_page_queue *wait)
  * Return values:
  * 0 - folio is locked.
  * VM_FAULT_RETRY - folio is not locked.
- *     mmap_lock has been released (mmap_read_unlock(), unless flags had both
- *     FAULT_FLAG_ALLOW_RETRY and FAULT_FLAG_RETRY_NOWAIT set, in
- *     which case mmap_lock is still held.
+ *     mmap_lock or per-VMA lock has been released (mmap_read_unlock() or
+ *     vma_end_read()), unless flags had both FAULT_FLAG_ALLOW_RETRY and
+ *     FAULT_FLAG_RETRY_NOWAIT set, in which case the lock is still held.
  *
  * If neither ALLOW_RETRY nor KILLABLE are set, will always return 0
- * with the folio locked and the mmap_lock unperturbed.
+ * with the folio locked and the mmap_lock/per-VMA lock is left unperturbed.
  */
 vm_fault_t __folio_lock_or_retry(struct folio *folio, struct vm_fault *vmf)
 {
-	struct mm_struct *mm = vmf->vma->vm_mm;
 	unsigned int flags = vmf->flags;
 
 	if (fault_flag_allow_retry_first(flags)) {
 		/*
-		 * CAUTION! In this case, mmap_lock is not released
-		 * even though return VM_FAULT_RETRY.
+		 * CAUTION! In this case, mmap_lock/per-VMA lock is not
+		 * released even though returning VM_FAULT_RETRY.
 		 */
 		if (flags & FAULT_FLAG_RETRY_NOWAIT)
 			return VM_FAULT_RETRY;
 
-		mmap_read_unlock(mm);
+		release_fault_lock(vmf);
 		if (flags & FAULT_FLAG_KILLABLE)
 			folio_wait_locked_killable(folio);
 		else
@@ -1703,7 +1702,7 @@ vm_fault_t __folio_lock_or_retry(struct folio *folio, struct vm_fault *vmf)
 
 		ret = __folio_lock_killable(folio);
 		if (ret) {
-			mmap_read_unlock(mm);
+			release_fault_lock(vmf);
 			return VM_FAULT_RETRY;
 		}
 	} else {
diff --git a/mm/memory.c b/mm/memory.c
index 4ae3f046f593..bb0f68a73b0c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3729,12 +3729,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	if (!pte_unmap_same(vmf))
 		goto out;
 
-	if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
-		ret = VM_FAULT_RETRY;
-		vma_end_read(vma);
-		goto out;
-	}
-
 	entry = pte_to_swp_entry(vmf->orig_pte);
 	if (unlikely(non_swap_entry(entry))) {
 		if (is_migration_entry(entry)) {
@@ -3744,6 +3738,16 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			vmf->page = pfn_swap_entry_to_page(entry);
 			ret = remove_device_exclusive_entry(vmf);
 		} else if (is_device_private_entry(entry)) {
+			if (vmf->flags & FAULT_FLAG_VMA_LOCK) {
+				/*
+				 * migrate_to_ram is not yet ready to operate
+				 * under VMA lock.
+				 */
+				vma_end_read(vma);
+				ret = VM_FAULT_RETRY;
+				goto out;
+			}
+
 			vmf->page = pfn_swap_entry_to_page(entry);
 			vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
 					vmf->address, &vmf->ptl);
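
The key piece is release_fault_lock(), repeated here in condensed form from
the include/linux/mm.h hunk above (sketch of the CONFIG_PER_VMA_LOCK
variant):

static inline void release_fault_lock(struct vm_fault *vmf)
{
	if (vmf->flags & FAULT_FLAG_VMA_LOCK)
		vma_end_read(vmf->vma);		/* fault ran under the per-VMA lock */
	else
		mmap_read_unlock(vmf->vma->vm_mm);
}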

From patchwork Fri Jun 30 02:04:35 2023
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13297516
Date: Thu, 29 Jun 2023 19:04:35 -0700
In-Reply-To: <20230630020436.1066016-1-surenb@google.com>
References: <20230630020436.1066016-1-surenb@google.com>
Message-ID: <20230630020436.1066016-7-surenb@google.com>
Subject: [PATCH v6 6/6] mm: handle userfaults under VMA lock
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: willy@infradead.org, hannes@cmpxchg.org, mhocko@suse.com, josef@toxicpanda.com,
    jack@suse.cz, ldufour@linux.ibm.com, laurent.dufour@fr.ibm.com, michel@lespinasse.org,
    liam.howlett@oracle.com, jglisse@google.com, vbabka@suse.cz, minchan@google.com,
    dave@stgolabs.net, punit.agrawal@bytedance.com, lstoakes@gmail.com, hdanton@sina.com,
    apopple@nvidia.com, peterx@redhat.com, ying.huang@intel.com, david@redhat.com,
    yuzhao@google.com, dhowells@redhat.com, hughd@google.com, viro@zeniv.linux.org.uk,
    brauner@kernel.org, pasha.tatashin@soleen.com, surenb@google.com, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-team@android.com

Enable handle_userfault() to operate under the per-VMA lock by releasing the
VMA lock instead of mmap_lock and retrying. Note that FAULT_FLAG_RETRY_NOWAIT
should never be used when handling faults under per-VMA lock protection,
because that would break the assumption that the lock is dropped on retry.
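For context, the single release_fault_lock(vmf) call used in the diff below can
replace mmap_read_unlock() because that helper, added earlier in this series,
picks which lock to drop based on FAULT_FLAG_VMA_LOCK. A rough sketch of its
CONFIG_PER_VMA_LOCK behavior (the authoritative version lives in
include/linux/mm.h):

static inline void release_fault_lock(struct vm_fault *vmf)
{
	if (vmf->flags & FAULT_FLAG_VMA_LOCK)
		vma_end_read(vmf->vma);			/* drop the per-VMA read lock */
	else
		mmap_read_unlock(vmf->vma->vm_mm);	/* drop mmap_lock */
}

This is also why FAULT_FLAG_RETRY_NOWAIT is ruled out here: the NOWAIT path
returns VM_FAULT_RETRY without calling release_fault_lock(), which would leave
the per-VMA lock held while the caller assumes a retry always means the lock
was dropped.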
Signed-off-by: Suren Baghdasaryan
Acked-by: Peter Xu
---
 fs/userfaultfd.c   | 34 ++++++++++++++--------------------
 include/linux/mm.h | 24 ++++++++++++++++++++++++
 mm/memory.c        |  9 ---------
 3 files changed, 38 insertions(+), 29 deletions(-)

diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
index 7cecd49e078b..21a546eaf9f7 100644
--- a/fs/userfaultfd.c
+++ b/fs/userfaultfd.c
@@ -277,17 +277,16 @@ static inline struct uffd_msg userfault_msg(unsigned long address,
  * hugepmd ranges.
  */
 static inline bool userfaultfd_huge_must_wait(struct userfaultfd_ctx *ctx,
-					      struct vm_area_struct *vma,
-					      unsigned long address,
-					      unsigned long flags,
-					      unsigned long reason)
+					      struct vm_fault *vmf,
+					      unsigned long reason)
 {
+	struct vm_area_struct *vma = vmf->vma;
 	pte_t *ptep, pte;
 	bool ret = true;
 
-	mmap_assert_locked(ctx->mm);
+	assert_fault_locked(vmf);
 
-	ptep = hugetlb_walk(vma, address, vma_mmu_pagesize(vma));
+	ptep = hugetlb_walk(vma, vmf->address, vma_mmu_pagesize(vma));
 	if (!ptep)
 		goto out;
 
@@ -308,10 +307,8 @@ static inline bool userfaultfd_huge_must_wait(struct userfaultfd_ctx *ctx,
 }
 #else
 static inline bool userfaultfd_huge_must_wait(struct userfaultfd_ctx *ctx,
-					      struct vm_area_struct *vma,
-					      unsigned long address,
-					      unsigned long flags,
-					      unsigned long reason)
+					      struct vm_fault *vmf,
+					      unsigned long reason)
 {
 	return false;	/* should never get here */
 }
@@ -325,11 +322,11 @@ static inline bool userfaultfd_huge_must_wait(struct userfaultfd_ctx *ctx,
  * threads.
  */
 static inline bool userfaultfd_must_wait(struct userfaultfd_ctx *ctx,
-					 unsigned long address,
-					 unsigned long flags,
+					 struct vm_fault *vmf,
 					 unsigned long reason)
 {
 	struct mm_struct *mm = ctx->mm;
+	unsigned long address = vmf->address;
 	pgd_t *pgd;
 	p4d_t *p4d;
 	pud_t *pud;
@@ -338,7 +335,7 @@ static inline bool userfaultfd_must_wait(struct userfaultfd_ctx *ctx,
 	pte_t ptent;
 	bool ret = true;
 
-	mmap_assert_locked(mm);
+	assert_fault_locked(vmf);
 
 	pgd = pgd_offset(mm, address);
 	if (!pgd_present(*pgd))
@@ -440,7 +437,7 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
 	 * Coredumping runs without mmap_lock so we can only check that
 	 * the mmap_lock is held, if PF_DUMPCORE was not set.
 	 */
-	mmap_assert_locked(mm);
+	assert_fault_locked(vmf);
 
 	ctx = vma->vm_userfaultfd_ctx.ctx;
 	if (!ctx)
@@ -556,15 +553,12 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
 	spin_unlock_irq(&ctx->fault_pending_wqh.lock);
 
 	if (!is_vm_hugetlb_page(vma))
-		must_wait = userfaultfd_must_wait(ctx, vmf->address, vmf->flags,
-						  reason);
+		must_wait = userfaultfd_must_wait(ctx, vmf, reason);
 	else
-		must_wait = userfaultfd_huge_must_wait(ctx, vma,
-						       vmf->address,
-						       vmf->flags, reason);
+		must_wait = userfaultfd_huge_must_wait(ctx, vmf, reason);
 	if (is_vm_hugetlb_page(vma))
 		hugetlb_vma_unlock_read(vma);
-	mmap_read_unlock(mm);
+	release_fault_lock(vmf);
 
 	if (likely(must_wait && !READ_ONCE(ctx->released))) {
 		wake_up_poll(&ctx->fd_wqh, EPOLLIN);
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 54ab11214f4f..2794225b2d42 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -705,6 +705,17 @@ static inline bool vma_try_start_write(struct vm_area_struct *vma)
 	return true;
 }
 
+static inline void vma_assert_locked(struct vm_area_struct *vma)
+{
+	int mm_lock_seq;
+
+	if (__is_vma_write_locked(vma, &mm_lock_seq))
+		return;
+
+	lockdep_assert_held(&vma->vm_lock->lock);
+	VM_BUG_ON_VMA(!rwsem_is_locked(&vma->vm_lock->lock), vma);
+}
+
 static inline void vma_assert_write_locked(struct vm_area_struct *vma)
 {
 	int mm_lock_seq;
@@ -728,6 +739,14 @@ static inline void release_fault_lock(struct vm_fault *vmf)
 		mmap_read_unlock(vmf->vma->vm_mm);
 }
 
+static inline void assert_fault_locked(struct vm_fault *vmf)
+{
+	if (vmf->flags & FAULT_FLAG_VMA_LOCK)
+		vma_assert_locked(vmf->vma);
+	else
+		mmap_assert_locked(vmf->vma->vm_mm);
+}
+
 struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 					  unsigned long address);
 
@@ -748,6 +767,11 @@ static inline void release_fault_lock(struct vm_fault *vmf)
 	mmap_read_unlock(vmf->vma->vm_mm);
 }
 
+static inline void assert_fault_locked(struct vm_fault *vmf)
+{
+	mmap_assert_locked(vmf->vma->vm_mm);
+}
+
 #endif /* CONFIG_PER_VMA_LOCK */
 
 /*
diff --git a/mm/memory.c b/mm/memory.c
index bb0f68a73b0c..d9f36f9392a9 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5407,15 +5407,6 @@ struct vm_area_struct *lock_vma_under_rcu(struct mm_struct *mm,
 	if (!vma_start_read(vma))
 		goto inval;
 
-	/*
-	 * Due to the possibility of userfault handler dropping mmap_lock, avoid
-	 * it for now and fall back to page fault handling under mmap_lock.
-	 */
-	if (userfaultfd_armed(vma)) {
-		vma_end_read(vma);
-		goto inval;
-	}
-
 	/* Check since vm_start/vm_end might change before we lock the VMA */
 	if (unlikely(address < vma->vm_start || address >= vma->vm_end)) {
 		vma_end_read(vma);