From patchwork Wed Jun 30 17:07:26 2021
X-Patchwork-Submitter: Evan Green
X-Patchwork-Id: 12352603
From: Evan Green
To: Andrew Morton
Cc: Evan Green, Alex Shi, Alistair Popple, David Hildenbrand, Jens Axboe,
    Johannes Weiner, Joonsoo Kim, "Matthew Wilcox (Oracle)", Miaohe Lin,
    Minchan Kim, Stephen Rothwell, Vlastimil Babka,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v1] mm: Enable suspend-only swap spaces
Date: Wed, 30 Jun 2021 10:07:26 -0700
Message-Id: <20210630100432.v1.1.I09866d90c6de14f21223a03e9e6a31f8a02ecbaf@changeid>
X-Mailer: git-send-email 2.31.0

Currently it's not possible to enable hibernation without also enabling
generic swap for a given swap area. These two use cases are not the
same. For example, there may be users who want to enable hibernation,
but whose drives don't have the write endurance for generic swap
activities.

Add a new SWAP_FLAG_NOSWAP that adds a swap region but refuses to allow
generic swapping to it. This region can still be wired up for use in
suspend-to-disk activities, but will never have regular pages swapped
to it.

Signed-off-by: Evan Green
Reviewed-by: Pavel Machek
---
 include/linux/swap.h |  4 +++-
 mm/swapfile.c        | 24 ++++++++++++++++++------
 2 files changed, 21 insertions(+), 7 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 6f5a43251593c8..a9fc37e29c17d6 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -28,10 +28,11 @@ struct pagevec;
 #define SWAP_FLAG_DISCARD	0x10000 /* enable discard for swap */
 #define SWAP_FLAG_DISCARD_ONCE	0x20000 /* discard swap area at swapon-time */
 #define SWAP_FLAG_DISCARD_PAGES 0x40000 /* discard page-clusters after use */
+#define SWAP_FLAG_NOSWAP	0x80000 /* use only for suspend, not swap */
 
 #define SWAP_FLAGS_VALID	(SWAP_FLAG_PRIO_MASK | SWAP_FLAG_PREFER | \
 				 SWAP_FLAG_DISCARD | SWAP_FLAG_DISCARD_ONCE | \
-				 SWAP_FLAG_DISCARD_PAGES)
+				 SWAP_FLAG_DISCARD_PAGES | SWAP_FLAG_NOSWAP)
 #define SWAP_BATCH	64
 
 static inline int current_is_kswapd(void)
@@ -182,6 +183,7 @@ enum {
 	SWP_PAGE_DISCARD = (1 << 10),	/* freed swap page-cluster discards */
 	SWP_STABLE_WRITES = (1 << 11),	/* no overwrite PG_writeback pages */
 	SWP_SYNCHRONOUS_IO = (1 << 12),	/* synchronous IO is efficient */
+	SWP_NOSWAP	= (1 << 13),	/* use only for suspend, not swap */
 					/* add others here before... */
 	SWP_SCANNING	= (1 << 14),	/* refcount in scan_swap_map */
 };
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 1e07d1c776f2ae..164937f958c319 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -697,7 +697,8 @@ static void swap_range_alloc(struct swap_info_struct *si, unsigned long offset,
 	if (si->inuse_pages == si->pages) {
 		si->lowest_bit = si->max;
 		si->highest_bit = 0;
-		del_from_avail_list(si);
+		if (!(si->flags & SWP_NOSWAP))
+			del_from_avail_list(si);
 	}
 }
 
@@ -726,7 +727,8 @@ static void swap_range_free(struct swap_info_struct *si, unsigned long offset,
 		bool was_full = !si->highest_bit;
 
 		WRITE_ONCE(si->highest_bit, end);
-		if (was_full && (si->flags & SWP_WRITEOK))
+		if (was_full &&
+		    ((si->flags & (SWP_WRITEOK | SWP_NOSWAP)) == SWP_WRITEOK))
 			add_to_avail_list(si);
 	}
 	atomic_long_add(nr_entries, &nr_swap_pages);
@@ -1078,6 +1080,9 @@ int get_swap_pages(int n_goal, swp_entry_t swp_entries[], int entry_size)
 			WARN(!(si->flags & SWP_WRITEOK),
 			     "swap_info %d in list but !SWP_WRITEOK\n",
 			     si->type);
+			WARN((si->flags & SWP_NOSWAP),
+			     "swap_info %d in list but SWP_NOSWAP\n",
+			     si->type);
 			__del_from_avail_list(si);
 			spin_unlock(&si->lock);
 			goto nextsi;
@@ -2469,7 +2474,8 @@ static void _enable_swap_info(struct swap_info_struct *p)
 	 * swap_info_struct.
 	 */
 	plist_add(&p->list, &swap_active_head);
-	add_to_avail_list(p);
+	if (!(p->flags & SWP_NOSWAP))
+		add_to_avail_list(p);
 }
 
 static void enable_swap_info(struct swap_info_struct *p, int prio,
@@ -2564,7 +2570,9 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
 		spin_unlock(&swap_lock);
 		goto out_dput;
 	}
-	del_from_avail_list(p);
+	if (!(p->flags & SWP_NOSWAP))
+		del_from_avail_list(p);
+
 	spin_lock(&p->lock);
 	if (p->prio < 0) {
 		struct swap_info_struct *si = p;
@@ -3329,16 +3337,20 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 	if (swap_flags & SWAP_FLAG_PREFER)
 		prio =
 		  (swap_flags & SWAP_FLAG_PRIO_MASK) >> SWAP_FLAG_PRIO_SHIFT;
+
+	if (swap_flags & SWAP_FLAG_NOSWAP)
+		p->flags |= SWP_NOSWAP;
 	enable_swap_info(p, prio, swap_map, cluster_info, frontswap_map);
 
-	pr_info("Adding %uk swap on %s. Priority:%d extents:%d across:%lluk %s%s%s%s%s\n",
+	pr_info("Adding %uk swap on %s. Priority:%d extents:%d across:%lluk %s%s%s%s%s%s\n",
 		p->pages<<(PAGE_SHIFT-10), name->name, p->prio,
 		nr_extents, (unsigned long long)span<<(PAGE_SHIFT-10),
 		(p->flags & SWP_SOLIDSTATE) ? "SS" : "",
 		(p->flags & SWP_DISCARDABLE) ? "D" : "",
 		(p->flags & SWP_AREA_DISCARD) ? "s" : "",
 		(p->flags & SWP_PAGE_DISCARD) ? "c" : "",
-		(frontswap_map) ? "FS" : "");
+		(frontswap_map) ? "FS" : "",
+		(p->flags & SWP_NOSWAP) ? "N" : "");
 
 	mutex_unlock(&swapon_mutex);
 	atomic_inc(&proc_poll_event);
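
For illustration only, not part of the patch: a minimal userspace sketch of how
a suspend-only area could be activated with the proposed flag. It assumes the
0x80000 value from include/linux/swap.h above and a hypothetical device path,
and uses glibc's swapon() wrapper from <sys/swap.h>, which passes the flags
argument through unchanged. The caller needs CAP_SYS_ADMIN.

/*
 * Illustrative sketch only -- not part of this patch.  Registers a swap
 * area that the kernel may use for the hibernation image but never for
 * ordinary page-out.  SWAP_FLAG_NOSWAP mirrors the value proposed in
 * include/linux/swap.h; the device path below is hypothetical.
 */
#include <stdio.h>
#include <sys/swap.h>

#ifndef SWAP_FLAG_NOSWAP
#define SWAP_FLAG_NOSWAP 0x80000	/* use only for suspend, not swap */
#endif

int main(void)
{
	const char *dev = "/dev/nvme0n1p3";	/* hypothetical swap partition */

	if (swapon(dev, SWAP_FLAG_NOSWAP) != 0) {
		perror("swapon");	/* EINVAL on kernels without this patch */
		return 1;
	}
	printf("%s registered as a suspend-only swap area\n", dev);
	return 0;
}

On a kernel without this patch the call fails with EINVAL, since
SWAP_FLAG_NOSWAP is not part of SWAP_FLAGS_VALID there.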