From patchwork Mon Apr 8 19:58:14 2019
X-Patchwork-Submitter: Hugh Dickins
X-Patchwork-Id: 10890063
Date: Mon, 8 Apr 2019 12:58:14 -0700 (PDT)
From: Hugh Dickins
To: Andrew Morton
cc: Konstantin Khlebnikov, "Alex Xu (Hello71)", Vineeth Pillai,
    Kelley Nielsen, Rik van Riel, Huang Ying, Hugh Dickins,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 2/4] mm: swapoff: remove too limiting SWAP_UNUSE_MAX_TRIES
User-Agent: Alpine 2.11 (LSU 23 2013-08-11)
MIME-Version: 1.0

SWAP_UNUSE_MAX_TRIES 3 appeared to work well in earlier testing, but
further testing has proved it to be a source of unnecessary swapoff
EBUSY failures (which can then be followed by unmount EBUSY failures).

When mmget_not_zero() or shmem's igrab() fails, there is an mm exiting
or an inode being evicted, freeing up swap independent of
try_to_unuse(). Those typically completed much sooner than the old
quadratic swapoff, but now it's more common that swapoff may need to
wait for them.

It's possible to move those cases from init_mm.mmlist and
shmem_swaplist to separate "exiting" swaplists, and have try_to_unuse()
wait for those lists to be emptied; but we've not bothered with that in
the past, and don't want to risk missing some other forgotten case. So
just revert to cycling around until the swap is gone, without any
retries limit.
Fixes: b56a2d8af914 ("mm: rid swapoff of quadratic complexity")
Signed-off-by: Hugh Dickins
---
 mm/swapfile.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

--- 5.1-rc4/mm/swapfile.c	2019-03-17 16:18:15.713823942 -0700
+++ linux/mm/swapfile.c	2019-04-07 19:15:01.269054187 -0700
@@ -2023,7 +2023,6 @@ static unsigned int find_next_to_unuse(s
  * If the boolean frontswap is true, only unuse pages_to_unuse pages;
  * pages_to_unuse==0 means all pages; ignored if frontswap is false
  */
-#define SWAP_UNUSE_MAX_TRIES 3
 int try_to_unuse(unsigned int type, bool frontswap,
 		 unsigned long pages_to_unuse)
 {
@@ -2035,7 +2034,6 @@ int try_to_unuse(unsigned int type, bool
 	struct page *page;
 	swp_entry_t entry;
 	unsigned int i;
-	int retries = 0;
 
 	if (!si->inuse_pages)
 		return 0;
@@ -2117,14 +2115,16 @@ retry:
 	 * If yes, we would need to do retry the unuse logic again.
 	 * Under global memory pressure, swap entries can be reinserted back
 	 * into process space after the mmlist loop above passes over them.
-	 * Its not worth continuosuly retrying to unuse the swap in this case.
-	 * So we try SWAP_UNUSE_MAX_TRIES times.
+	 *
+	 * Limit the number of retries? No: when shmem_unuse()'s igrab() fails,
+	 * a shmem inode using swap is being evicted; and when mmget_not_zero()
+	 * above fails, that mm is likely to be freeing swap from exit_mmap().
+	 * Both proceed at their own independent pace: we could move them to
+	 * separate lists, and wait for those lists to be emptied; but it's
+	 * easier and more robust (though cpu-intensive) just to keep retrying.
 	 */
-	if (++retries >= SWAP_UNUSE_MAX_TRIES)
-		retval = -EBUSY;
-	else if (si->inuse_pages)
+	if (si->inuse_pages)
 		goto retry;
-
 out:
 	return (retval == FRONTSWAP_PAGES_UNUSED) ? 0 : retval;
 }