From patchwork Fri Jun 30 21:19:52 2023
X-Patchwork-Submitter: Suren Baghdasaryan
X-Patchwork-Id: 13298783
Date: Fri, 30 Jun 2023 14:19:52 -0700
In-Reply-To: <20230630211957.1341547-1-surenb@google.com>
References: <20230630211957.1341547-1-surenb@google.com>
X-Mailer: git-send-email 2.41.0.255.g8b1d071c50-goog
Message-ID: <20230630211957.1341547-2-surenb@google.com>
Subject: [PATCH v7 1/6] swap: remove remnants of polling from read_swap_cache_async
From: Suren Baghdasaryan
To: akpm@linux-foundation.org
Cc: willy@infradead.org, hannes@cmpxchg.org, mhocko@suse.com,
 josef@toxicpanda.com, jack@suse.cz, ldufour@linux.ibm.com,
 laurent.dufour@fr.ibm.com, michel@lespinasse.org, liam.howlett@oracle.com,
 jglisse@google.com, vbabka@suse.cz, minchan@google.com, dave@stgolabs.net,
 punit.agrawal@bytedance.com, lstoakes@gmail.com, hdanton@sina.com,
 apopple@nvidia.com, peterx@redhat.com, ying.huang@intel.com,
 david@redhat.com, yuzhao@google.com, dhowells@redhat.com, hughd@google.com,
 viro@zeniv.linux.org.uk, brauner@kernel.org, pasha.tatashin@soleen.com,
 surenb@google.com, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
 linux-kernel@vger.kernel.org, kernel-team@android.com, Christoph Hellwig

Commit [1] introduced IO polling support during swapin to reduce swap read
latency for block devices that can be polled. However, a later commit [2]
removed polling support. Therefore it seems safe to remove the do_poll
parameter from read_swap_cache_async and to always call swap_readpage with
synchronous=false, waiting for IO completion in folio_lock_or_retry.

[1] commit 23955622ff8d ("swap: add block io poll in swapin path")
[2] commit 9650b453a3d4 ("block: ignore RWF_HIPRI hint for sync dio")

Suggested-by: "Huang, Ying"
Signed-off-by: Suren Baghdasaryan
Reviewed-by: "Huang, Ying"
Reviewed-by: Christoph Hellwig
---
 mm/madvise.c    |  4 ++--
 mm/swap.h       |  1 -
 mm/swap_state.c | 12 +++++-------
 3 files changed, 7 insertions(+), 10 deletions(-)

diff --git a/mm/madvise.c b/mm/madvise.c
index 886f06066622..ac6d92f74f6d 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -218,7 +218,7 @@ static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
 		ptep = NULL;
 
 		page = read_swap_cache_async(entry, GFP_HIGHUSER_MOVABLE,
-					     vma, addr, false, &splug);
+					     vma, addr, &splug);
 		if (page)
 			put_page(page);
 	}
@@ -262,7 +262,7 @@ static void shmem_swapin_range(struct vm_area_struct *vma,
 		rcu_read_unlock();
 
 		page = read_swap_cache_async(entry, mapping_gfp_mask(mapping),
-					     vma, addr, false, &splug);
+					     vma, addr, &splug);
 		if (page)
 			put_page(page);
 
diff --git a/mm/swap.h b/mm/swap.h
index 7c033d793f15..8a3c7a0ace4f 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -46,7 +46,6 @@ struct folio *filemap_get_incore_folio(struct address_space *mapping,
 struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 				   struct vm_area_struct *vma,
 				   unsigned long addr,
-				   bool do_poll,
 				   struct swap_iocb **plug);
 struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 				     struct vm_area_struct *vma,
diff --git a/mm/swap_state.c b/mm/swap_state.c
index f8ea7015bad4..5a690c79cc13 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -527,15 +527,14 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
  */
 struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 				   struct vm_area_struct *vma,
-				   unsigned long addr, bool do_poll,
-				   struct swap_iocb **plug)
+				   unsigned long addr, struct swap_iocb **plug)
 {
 	bool page_was_allocated;
 	struct page *retpage = __read_swap_cache_async(entry, gfp_mask,
 			vma, addr, &page_was_allocated);
 
 	if (page_was_allocated)
-		swap_readpage(retpage, do_poll, plug);
+		swap_readpage(retpage, false, plug);
 
 	return retpage;
 }
@@ -630,7 +629,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	struct swap_info_struct *si = swp_swap_info(entry);
 	struct blk_plug plug;
 	struct swap_iocb *splug = NULL;
-	bool do_poll = true, page_allocated;
+	bool page_allocated;
 	struct vm_area_struct *vma = vmf->vma;
 	unsigned long addr = vmf->address;
 
@@ -638,7 +637,6 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	if (!mask)
 		goto skip;
 
-	do_poll = false;
 	/* Read a page_cluster sized and aligned cluster around offset. */
 	start_offset = offset & ~mask;
 	end_offset = offset | mask;
@@ -670,7 +668,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	lru_add_drain();	/* Push any new pages onto the LRU now */
 skip:
 	/* The page was likely read above, so no need for plugging here */
-	return read_swap_cache_async(entry, gfp_mask, vma, addr, do_poll, NULL);
+	return read_swap_cache_async(entry, gfp_mask, vma, addr, NULL);
 }
 
 int init_swap_address_space(unsigned int type, unsigned long nr_pages)
@@ -838,7 +836,7 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
 skip:
 	/* The page was likely read above, so no need for plugging here */
 	return read_swap_cache_async(fentry, gfp_mask, vma, vmf->address,
-				     ra_info.win == 1, NULL);
+				     NULL);
 }
 
 /**
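
For reference, below is a consolidated view of read_swap_cache_async() as it
reads with this patch applied, reconstructed from the mm/swap_state.c hunk
above. It is only an illustrative summary of the end state, not part of the
patch itself:

struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
				   struct vm_area_struct *vma,
				   unsigned long addr, struct swap_iocb **plug)
{
	bool page_was_allocated;
	struct page *retpage = __read_swap_cache_async(entry, gfp_mask,
			vma, addr, &page_was_allocated);

	/*
	 * Polling is gone: swap_readpage() is always called with
	 * synchronous=false, and the fault path waits for IO completion
	 * in folio_lock_or_retry().
	 */
	if (page_was_allocated)
		swap_readpage(retpage, false, plug);

	return retpage;
}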