From patchwork Tue Jun 29 02:37:00 2021
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 12349013
Date: Mon, 28 Jun 2021 19:37:00 -0700
From: Andrew Morton
To: akpm@linux-foundation.org, hughd@google.com, linmiaohe@huawei.com,
 linux-mm@kvack.org, mm-commits@vger.kernel.org,
 torvalds@linux-foundation.org, willy@infradead.org
Subject: [patch 071/192] mm/swapfile: move get_swap_page_of_type() under CONFIG_HIBERNATION
Message-ID: <20210629023700.-7OvS1acX%akpm@linux-foundation.org>
In-Reply-To: <20210628193256.008961950a714730751c1423@linux-foundation.org>
User-Agent: s-nail v14.8.16
From: Miaohe Lin
Subject: mm/swapfile: move get_swap_page_of_type() under CONFIG_HIBERNATION

Patch series "Cleanups for swap", v2.

This series contains just cleanups to remove some unused variables, delete
meaningless forward declarations and so on.  More details can be found in
the respective changelogs.

This patch (of 4):

We should move get_swap_page_of_type() under CONFIG_HIBERNATION since the
only caller of this function is now the suspend routine (a short sketch of
that caller follows the diff below, for context).

[linmiaohe@huawei.com: move scan_swap_map() under CONFIG_HIBERNATION]
Link: https://lkml.kernel.org/r/20210521070855.2015094-1-linmiaohe@huawei.com
[linmiaohe@huawei.com: fold scan_swap_map() into the only caller get_swap_page_of_type()]
Link: https://lkml.kernel.org/r/20210527120328.3935132-1-linmiaohe@huawei.com
Link: https://lkml.kernel.org/r/20210520134022.1370406-1-linmiaohe@huawei.com
Link: https://lkml.kernel.org/r/20210520134022.1370406-2-linmiaohe@huawei.com
Signed-off-by: Miaohe Lin
Cc: Hugh Dickins
Cc: Matthew Wilcox (Oracle)
Signed-off-by: Andrew Morton
---

 mm/swapfile.c |   83 +++++++++++++++++-------------------
 1 file changed, 31 insertions(+), 52 deletions(-)

--- a/mm/swapfile.c~mm-swapfile-move-get_swap_page_of_type-under-config_hibernation
+++ a/mm/swapfile.c
@@ -453,10 +453,10 @@ static void swap_cluster_schedule_discar
 		unsigned int idx)
 {
 	/*
-	 * If scan_swap_map() can't find a free cluster, it will check
+	 * If scan_swap_map_slots() can't find a free cluster, it will check
 	 * si->swap_map directly. To make sure the discarding cluster isn't
-	 * taken by scan_swap_map(), mark the swap entries bad (occupied). It
-	 * will be cleared after discard
+	 * taken by scan_swap_map_slots(), mark the swap entries bad (occupied).
+	 * It will be cleared after discard
 	 */
 	memset(si->swap_map + idx * SWAPFILE_CLUSTER,
 			SWAP_MAP_BAD, SWAPFILE_CLUSTER);
@@ -589,7 +589,7 @@ static void dec_cluster_info_page(struct
 }
 
 /*
- * It's possible scan_swap_map() uses a free cluster in the middle of free
+ * It's possible scan_swap_map_slots() uses a free cluster in the middle of free
  * cluster list. Avoiding such abuse to avoid list corruption.
  */
 static bool
@@ -1037,21 +1037,6 @@ static void swap_free_cluster(struct swa
 	swap_range_free(si, offset, SWAPFILE_CLUSTER);
 }
 
-static unsigned long scan_swap_map(struct swap_info_struct *si,
-				   unsigned char usage)
-{
-	swp_entry_t entry;
-	int n_ret;
-
-	n_ret = scan_swap_map_slots(si, usage, 1, &entry);
-
-	if (n_ret)
-		return swp_offset(entry);
-	else
-		return 0;
-
-}
-
 int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_size)
 {
 	unsigned long size = swap_entry_size(entry_size);
@@ -1114,14 +1099,14 @@ start_over:
 nextsi:
 		/*
 		 * if we got here, it's likely that si was almost full before,
-		 * and since scan_swap_map() can drop the si->lock, multiple
-		 * callers probably all tried to get a page from the same si
-		 * and it filled up before we could get one; or, the si filled
-		 * up between us dropping swap_avail_lock and taking si->lock.
-		 * Since we dropped the swap_avail_lock, the swap_avail_head
-		 * list may have been modified; so if next is still in the
-		 * swap_avail_head list then try it, otherwise start over
-		 * if we have not gotten any slots.
+		 * and since scan_swap_map_slots() can drop the si->lock,
+		 * multiple callers probably all tried to get a page from the
+		 * same si and it filled up before we could get one; or, the si
+		 * filled up between us dropping swap_avail_lock and taking
+		 * si->lock. Since we dropped the swap_avail_lock, the
+		 * swap_avail_head list may have been modified; so if next is
+		 * still in the swap_avail_head list then try it, otherwise
+		 * start over if we have not gotten any slots.
 		 */
 		if (plist_node_empty(&next->avail_lists[node]))
 			goto start_over;
@@ -1137,30 +1122,6 @@ noswap:
 	return n_ret;
 }
 
-/* The only caller of this function is now suspend routine */
-swp_entry_t get_swap_page_of_type(int type)
-{
-	struct swap_info_struct *si = swap_type_to_swap_info(type);
-	pgoff_t offset;
-
-	if (!si)
-		goto fail;
-
-	spin_lock(&si->lock);
-	if (si->flags & SWP_WRITEOK) {
-		/* This is called for allocating swap entry, not cache */
-		offset = scan_swap_map(si, 1);
-		if (offset) {
-			atomic_long_dec(&nr_swap_pages);
-			spin_unlock(&si->lock);
-			return swp_entry(type, offset);
-		}
-	}
-	spin_unlock(&si->lock);
-fail:
-	return (swp_entry_t) {0};
-}
-
 static struct swap_info_struct *__swap_info_get(swp_entry_t entry)
 {
 	struct swap_info_struct *p;
@@ -1812,6 +1773,24 @@ int free_swap_and_cache(swp_entry_t entr
 }
 
 #ifdef CONFIG_HIBERNATION
+
+swp_entry_t get_swap_page_of_type(int type)
+{
+	struct swap_info_struct *si = swap_type_to_swap_info(type);
+	swp_entry_t entry = {0};
+
+	if (!si)
+		goto fail;
+
+	/* This is called for allocating swap entry, not cache */
+	spin_lock(&si->lock);
+	if ((si->flags & SWP_WRITEOK) && scan_swap_map_slots(si, 1, 1, &entry))
+		atomic_long_dec(&nr_swap_pages);
+	spin_unlock(&si->lock);
+fail:
+	return entry;
+}
+
 /*
  * Find the swap type that corresponds to given device (if any).
  *
@@ -2649,7 +2628,7 @@ SYSCALL_DEFINE1(swapoff, const char __us
 	spin_lock(&p->lock);
 	drain_mmlist();
 
-	/* wait for anyone still in scan_swap_map */
+	/* wait for anyone still in scan_swap_map_slots */
 	p->highest_bit = 0;		/* cuts scans short */
 	while (p->flags >= SWP_SCANNING) {
 		spin_unlock(&p->lock);
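
For context (not part of the patch itself): the remaining caller of
get_swap_page_of_type() is the hibernation image writer in
kernel/power/swap.c, which needs a slot on one specific swap device chosen
by type rather than whatever the generic get_swap_pages() path would pick.
Roughly, and glossing over the extent bookkeeping and error handling, that
caller looks like the sketch below; treat it as illustrative of the call
shape rather than the exact upstream lines.

sector_t alloc_swapdev_block(int swap)
{
	unsigned long offset;

	/* ask the swap device of the given type for one free slot */
	offset = swp_offset(get_swap_page_of_type(swap));
	if (offset) {
		/* remember the slot so it can be freed if the image write fails */
		if (swsusp_extents_insert(offset))
			swap_free(swp_entry(swap, offset));	/* tracking failed, give it back */
		else
			return swapdev_block(swap, offset);	/* sector to write the image page to */
	}
	return 0;
}

Since kernel/power/swap.c is only built when CONFIG_HIBERNATION is set,
moving get_swap_page_of_type() under the same #ifdef (and folding
scan_swap_map() into it) loses nothing for !CONFIG_HIBERNATION builds.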