From patchwork Fri Oct 4 13:09:22 2019
X-Patchwork-Submitter: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
X-Patchwork-Id: 11174455
Subject: [PATCH v2] mm/swap: piggyback lru_add_drain_all() calls
From: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
To: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>, linux-kernel@vger.kernel.org,
 Matthew Wilcox <willy@infradead.org>
Date: Fri, 04 Oct 2019 16:09:22 +0300
Message-ID: <157019456205.3142.3369423180908482020.stgit@buzz>
User-Agent: StGit/0.17.1-dirty

lru_add_drain_all() is a very slow operation. There is no reason to run it
again if somebody else already drained all per-cpu vectors while we waited
for the lock.

Piggyback on a drain that started and finished while we waited for the lock:
all pages that were pending at the time we entered have already been drained
from the vectors.

Callers such as POSIX_FADV_DONTNEED retry their operations once after
draining the per-cpu vectors when pages have unexpected references.

Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
---
 mm/swap.c |   16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/mm/swap.c b/mm/swap.c
index 38c3fa4308e2..5ba948a9d82a 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -708,9 +708,10 @@ static void lru_add_drain_per_cpu(struct work_struct *dummy)
  */
 void lru_add_drain_all(void)
 {
+	static seqcount_t seqcount = SEQCNT_ZERO(seqcount);
 	static DEFINE_MUTEX(lock);
 	static struct cpumask has_work;
-	int cpu;
+	int cpu, seq;
 
 	/*
 	 * Make sure nobody triggers this path before mm_percpu_wq is fully
@@ -719,7 +720,19 @@ void lru_add_drain_all(void)
 	if (WARN_ON(!mm_percpu_wq))
 		return;
 
+	seq = raw_read_seqcount_latch(&seqcount);
+
 	mutex_lock(&lock);
+
+	/*
+	 * Piggyback on drain started and finished while we waited for lock:
+	 * all pages pended at the time of our enter were drained from vectors.
+	 */
+	if (__read_seqcount_retry(&seqcount, seq))
+		goto done;
+
+	raw_write_seqcount_latch(&seqcount);
+
 	cpumask_clear(&has_work);
 
 	for_each_online_cpu(cpu) {
@@ -740,6 +753,7 @@ void lru_add_drain_all(void)
 	for_each_cpu(cpu, &has_work)
 		flush_work(&per_cpu(lru_add_drain_work, cpu));
 
+done:
 	mutex_unlock(&lock);
 }
 #else
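
[Editorial note] For readers unfamiliar with the piggyback trick above, here is a
minimal userspace sketch of the same idea. It is only an illustration: the names
(drain_all, do_expensive_drain, drain_gen) are made up, and C11 atomics plus a
pthread mutex stand in for the kernel's seqcount_t and DEFINE_MUTEX. The logic
mirrors the patch: sample a generation counter before contending on the lock,
bump it once when a new drain begins, and skip the work if the counter moved
while we waited.

/* Userspace sketch of the piggyback pattern; all names are hypothetical. */
#include <pthread.h>
#include <stdatomic.h>

static pthread_mutex_t drain_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_uint drain_gen;		/* bumped once per started drain */

static void do_expensive_drain(void)
{
	/* stand-in for flushing every per-cpu vector via workqueue items */
}

void drain_all(void)
{
	/* Sample the generation before contending on the lock. */
	unsigned int seen = atomic_load(&drain_gen);

	pthread_mutex_lock(&drain_lock);

	/*
	 * If the generation moved past the sampled value, a drain started
	 * after our sample and finished before we got the lock, so every
	 * item that was pending when we entered has already been drained.
	 */
	if (atomic_load(&drain_gen) != seen) {
		pthread_mutex_unlock(&drain_lock);
		return;
	}

	atomic_fetch_add(&drain_gen, 1);	/* announce a new drain */
	do_expensive_drain();

	pthread_mutex_unlock(&drain_lock);
}

As in the patch, correctness relies on the drainer holding the lock for the whole
drain: a counter observed to have changed under the lock therefore implies a
complete drain that began only after our entry.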