From patchwork Tue Sep 25 15:30:06 2018
X-Patchwork-Submitter: Josef Bacik
X-Patchwork-Id: 10614249
From: Josef Bacik
To: akpm@linux-foundation.org, linux-kernel@vger.kernel.org, kernel-team@fb.com,
	linux-btrfs@vger.kernel.org, riel@redhat.com, hannes@cmpxchg.org,
	tj@kernel.org, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Cc: Johannes Weiner
Subject: [PATCH 3/8] mm: clean up swapcache lookup and creation function names
Date: Tue, 25 Sep 2018 11:30:06 -0400
Message-Id: <20180925153011.15311-4-josef@toxicpanda.com>
In-Reply-To: <20180925153011.15311-1-josef@toxicpanda.com>
References: <20180925153011.15311-1-josef@toxicpanda.com>

From: Johannes Weiner

__read_swap_cache_async() has a misleading name. All it does is look up
or create a page in the swapcache; it doesn't initiate any IO. The
swapcache has many parallels to the page cache, and shares naming
schemes with it elsewhere. Analogous to the page cache lookup and
creation API, rename __read_swap_cache_async() to
find_or_create_swap_cache() and lookup_swap_cache() to
find_swap_cache().
Signed-off-by: Johannes Weiner
Signed-off-by: Josef Bacik
---
 include/linux/swap.h | 14 ++++++++------
 mm/memory.c          |  2 +-
 mm/shmem.c           |  2 +-
 mm/swap_state.c      | 43 ++++++++++++++++++++++---------------------
 mm/zswap.c           |  8 ++++----
 5 files changed, 36 insertions(+), 33 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 8e2c11e692ba..293a84c34448 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -412,15 +412,17 @@ extern void __delete_from_swap_cache(struct page *);
 extern void delete_from_swap_cache(struct page *);
 extern void free_page_and_swap_cache(struct page *);
 extern void free_pages_and_swap_cache(struct page **, int);
-extern struct page *lookup_swap_cache(swp_entry_t entry,
-				      struct vm_area_struct *vma,
-				      unsigned long addr);
+extern struct page *find_swap_cache(swp_entry_t entry,
+				    struct vm_area_struct *vma,
+				    unsigned long addr);
+extern struct page *find_or_create_swap_cache(swp_entry_t entry,
+					      gfp_t gfp_mask,
+					      struct vm_area_struct *vma,
+					      unsigned long addr,
+					      bool *created);
 extern struct page *read_swap_cache_async(swp_entry_t, gfp_t,
 			struct vm_area_struct *vma, unsigned long addr,
 			bool do_poll);
-extern struct page *__read_swap_cache_async(swp_entry_t, gfp_t,
-			struct vm_area_struct *vma, unsigned long addr,
-			bool *new_page_allocated);
 extern struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
 				struct vm_fault *vmf);
 extern struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
diff --git a/mm/memory.c b/mm/memory.c
index 9152c2a2c9f6..f27295c1c91d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2935,7 +2935,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 
 
 	delayacct_set_flag(DELAYACCT_PF_SWAPIN);
-	page = lookup_swap_cache(entry, vma, vmf->address);
+	page = find_swap_cache(entry, vma, vmf->address);
 	swapcache = page;
 
 	if (!page) {
diff --git a/mm/shmem.c b/mm/shmem.c
index 0376c124b043..9854903ae92f 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1679,7 +1679,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
 
 	if (swap.val) {
 		/* Look it up and read it in.. */
-		page = lookup_swap_cache(swap, NULL, 0);
+		page = find_swap_cache(swap, NULL, 0);
 		if (!page) {
 			/* Or update major stats only when swapin succeeds?? */
 			if (fault_type) {
diff --git a/mm/swap_state.c b/mm/swap_state.c
index ecee9c6c4cc1..bae758e19f7a 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -330,8 +330,8 @@ static inline bool swap_use_vma_readahead(void)
  * lock getting page table operations atomic even if we drop the page
  * lock before returning.
  */
-struct page *lookup_swap_cache(swp_entry_t entry, struct vm_area_struct *vma,
-			       unsigned long addr)
+struct page *find_swap_cache(swp_entry_t entry, struct vm_area_struct *vma,
+			     unsigned long addr)
 {
 	struct page *page;
 
@@ -374,19 +374,20 @@ struct page *lookup_swap_cache(swp_entry_t entry, struct vm_area_struct *vma,
 	return page;
 }
 
-struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
+struct page *find_or_create_swap_cache(swp_entry_t entry, gfp_t gfp_mask,
 			struct vm_area_struct *vma, unsigned long addr,
-			bool *new_page_allocated)
+			bool *created)
 {
 	struct page *found_page, *new_page = NULL;
 	struct address_space *swapper_space = swap_address_space(entry);
 	int err;
-	*new_page_allocated = false;
+
+	*created = false;
 
 	do {
 		/*
 		 * First check the swap cache.  Since this is normally
-		 * called after lookup_swap_cache() failed, re-calling
+		 * called after find_swap_cache() failed, re-calling
 		 * that would confuse statistics.
 		 */
 		found_page = find_get_page(swapper_space, swp_offset(entry));
@@ -449,7 +450,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 			 * Initiate read into locked page and return.
 			 */
 			lru_cache_add_anon(new_page);
-			*new_page_allocated = true;
+			*created = true;
 			return new_page;
 		}
 		radix_tree_preload_end();
@@ -475,14 +476,14 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 			struct vm_area_struct *vma, unsigned long addr, bool do_poll)
 {
-	bool page_was_allocated;
-	struct page *retpage = __read_swap_cache_async(entry, gfp_mask,
-			vma, addr, &page_was_allocated);
+	struct page *page;
+	bool created;
 
-	if (page_was_allocated)
-		swap_readpage(retpage, do_poll);
+	page = find_or_create_swap_cache(entry, gfp_mask, vma, addr, &created);
+	if (created)
+		swap_readpage(page, do_poll);
 
-	return retpage;
+	return page;
 }
 
 static unsigned int __swapin_nr_pages(unsigned long prev_offset,
@@ -573,7 +574,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	unsigned long mask;
 	struct swap_info_struct *si = swp_swap_info(entry);
 	struct blk_plug plug;
-	bool do_poll = true, page_allocated;
+	bool do_poll = true, created;
 	struct vm_area_struct *vma = vmf->vma;
 	unsigned long addr = vmf->address;
 
@@ -593,12 +594,12 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	blk_start_plug(&plug);
 	for (offset = start_offset; offset <= end_offset ; offset++) {
 		/* Ok, do the async read-ahead now */
-		page = __read_swap_cache_async(
+		page = find_or_create_swap_cache(
 			swp_entry(swp_type(entry), offset),
-			gfp_mask, vma, addr, &page_allocated);
+			gfp_mask, vma, addr, &created);
 		if (!page)
 			continue;
-		if (page_allocated) {
+		if (created) {
 			swap_readpage(page, false);
 			if (offset != entry_offset) {
 				SetPageReadahead(page);
@@ -738,7 +739,7 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
 	pte_t *pte, pentry;
 	swp_entry_t entry;
 	unsigned int i;
-	bool page_allocated;
+	bool created;
 	struct vma_swap_readahead ra_info = {0,};
 
 	swap_ra_info(vmf, &ra_info);
@@ -756,11 +757,11 @@ static struct page *swap_vma_readahead(swp_entry_t fentry, gfp_t gfp_mask,
 		entry = pte_to_swp_entry(pentry);
 		if (unlikely(non_swap_entry(entry)))
 			continue;
-		page = __read_swap_cache_async(entry, gfp_mask, vma,
-					       vmf->address, &page_allocated);
+		page = find_or_create_swap_cache(entry, gfp_mask, vma,
+						 vmf->address, &created);
 		if (!page)
 			continue;
-		if (page_allocated) {
+		if (created) {
 			swap_readpage(page, false);
 			if (i != ra_info.offset) {
 				SetPageReadahead(page);
diff --git a/mm/zswap.c b/mm/zswap.c
index cd91fd9d96b8..6f05faa75766 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -823,11 +823,11 @@ enum zswap_get_swap_ret {
 static int zswap_get_swap_cache_page(swp_entry_t entry,
 				struct page **retpage)
 {
-	bool page_was_allocated;
+	bool created;
 
-	*retpage = __read_swap_cache_async(entry, GFP_KERNEL,
-			NULL, 0, &page_was_allocated);
-	if (page_was_allocated)
+	*retpage = find_or_create_swap_cache(entry, GFP_KERNEL,
+			NULL, 0, &created);
+	if (created)
 		return ZSWAP_SWAPCACHE_NEW;
 	if (!*retpage)
 		return ZSWAP_SWAPCACHE_FAIL;