From patchwork Mon Jun  1 14:37:34 2020
X-Patchwork-Submitter: Hillf Danton
X-Patchwork-Id: 11582155
From: Hillf Danton <hdanton@sina.com>
To: linux-mm
Cc: LKML, Sebastian Andrzej Siewior, Konstantin Khlebnikov, Hillf Danton
Subject: [RFC PATCH] mm: swap: remove lru drain waiters
Date: Mon, 1 Jun 2020 22:37:34 +0800
Message-Id: <20200601143734.9572-1-hdanton@sina.com>
List-ID: <linux-mm.kvack.org>

After updating the lru drain sequence, newcomers avoid waiting for the
current drainer, who is busy flushing work on each online CPU, by merely
trying to lock the mutex; the drainer, in turn, covers the work of those
who failed to acquire the lock by re-checking the lru drain sequence
after releasing it.
See commit eef1a429f234 ("mm/swap.c: piggyback lru_add_drain_all()
calls") for the reasons why it is safe to skip waiting for the lock.
The memory barriers around the sequence and the lock work together to
remove the waiters without their drain work being abandoned.

Cc: Sebastian Andrzej Siewior
Cc: Konstantin Khlebnikov
Signed-off-by: Hillf Danton
---

This is inspired by one of the works from Sebastian.

--
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -714,10 +714,11 @@ static void lru_add_drain_per_cpu(struct
  */
 void lru_add_drain_all(void)
 {
-	static seqcount_t seqcount = SEQCNT_ZERO(seqcount);
+	static unsigned int lru_drain_seq;
 	static DEFINE_MUTEX(lock);
 	static struct cpumask has_work;
-	int cpu, seq;
+	int cpu;
+	unsigned int seq;
 
 	/*
 	 * Make sure nobody triggers this path before mm_percpu_wq is fully
@@ -726,18 +727,16 @@ void lru_add_drain_all(void)
 	if (WARN_ON(!mm_percpu_wq))
 		return;
 
-	seq = raw_read_seqcount_latch(&seqcount);
+	lru_drain_seq++;
+	smp_mb();
 
-	mutex_lock(&lock);
+more_work:
 
-	/*
-	 * Piggyback on drain started and finished while we waited for lock:
-	 * all pages pended at the time of our enter were drained from vectors.
-	 */
-	if (__read_seqcount_retry(&seqcount, seq))
-		goto done;
+	if (!mutex_trylock(&lock))
+		return;
 
-	raw_write_seqcount_latch(&seqcount);
+	smp_mb();
+	seq = lru_drain_seq;
 
 	cpumask_clear(&has_work);
 
@@ -759,8 +758,11 @@ void lru_add_drain_all(void)
 	for_each_cpu(cpu, &has_work)
 		flush_work(&per_cpu(lru_add_drain_work, cpu));
 
-done:
 	mutex_unlock(&lock);
+
+	smp_mb();
+	if (seq != lru_drain_seq)
+		goto more_work;
 }
 #else
 void lru_add_drain_all(void)