From patchwork Mon Apr 13 11:18:10 2020
X-Patchwork-Submitter: Andrea Righi
X-Patchwork-Id: 11485445
Date: Mon, 13 Apr 2020 13:18:10 +0200
From: Andrea Righi <andrea.righi@canonical.com>
To: Andrew Morton
Cc: Huang Ying, Minchan Kim, Anchal Agarwal, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: [PATCH v2] mm: swap: use fixed-size readahead during swapoff
Message-ID: <20200413111810.GA801367@xps-13>

The global swap-in readahead policy takes into account previous access
patterns, using a scaling heuristic to determine the optimal readahead
chunk dynamically. This works pretty well in most cases, but like any
heuristic there are specific cases when this approach is not ideal,
for example the swapoff scenario.

During swapoff we just want to load back into memory all the
swapped-out pages, and for this specific use case a fixed-size
readahead is more efficient.
The specific use case this patch is addressing is to improve swapoff
performance when a VM has been hibernated, resumed, and all memory
needs to be forced back to RAM by disabling swap (see the test case
below).

But this is not the only case where a fixed-size readahead can show
its benefits. More generally, the fixed-size approach can be
beneficial in all the cases when a task that is using a large part of
the swapped-out pages needs to load them back to memory as fast as
possible.

Testing environment
===================

 - Host:
   CPU: 1.8GHz Intel Core i7-8565U (quad-core, 8MB cache)
   HDD: PC401 NVMe SK hynix 512GB
   MEM: 16GB

 - Guest (kvm):
   8GB of RAM
   virtio block driver
   16GB swap file on ext4 (/swapfile)

Test case
=========

 - allocate 85% of memory
 - `systemctl hibernate` to force all the pages to be swapped-out to
   the swap file
 - resume the system
 - measure the time that swapoff takes to complete:
   # /usr/bin/time swapoff /swapfile

Result
======

                 5.6 vanilla   5.6 w/ this patch
                 -----------   -----------------
 page-cluster=1       26.77s              21.25s
 page-cluster=2       28.29s              12.66s
 page-cluster=3       22.09s               8.77s
 page-cluster=4       21.50s               7.60s
 page-cluster=5       25.35s               7.75s
 page-cluster=6       23.19s               8.32s
 page-cluster=7       22.25s               9.40s
 page-cluster=8       22.09s               8.93s

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
---
Changes in v2:
 - avoid introducing a new ABI to select the fixed-size readahead

NOTE: after running some tests with this new patch I don't see any
difference in terms of performance, so I'm reporting the same test
results as the previous version.
 mm/swap_state.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index ebed37bbf7a3..c71abc8df304 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -20,6 +20,7 @@
 #include <linux/migrate.h>
 #include <linux/vmalloc.h>
 #include <linux/swap_slots.h>
+#include <linux/oom.h>
 #include <linux/huge_mm.h>
 
 #include <asm/pgtable.h>
@@ -507,6 +508,14 @@ static unsigned long swapin_nr_pages(unsigned long offset)
 	max_pages = 1 << READ_ONCE(page_cluster);
 	if (max_pages <= 1)
 		return 1;
+	/*
+	 * If current task is using too much memory or swapoff is running
+	 * simply use the max readahead size. Since we likely want to load a
+	 * lot of pages back into memory, using a fixed-size max readahead can
+	 * give better performance in this case.
+	 */
+	if (oom_task_origin(current))
+		return max_pages;
 	hits = atomic_xchg(&swapin_readahead_hits, 0);
 	pages = __swapin_nr_pages(prev_offset, offset, hits, max_pages,