From patchwork Tue Sep  7 21:23:47 2021
X-Patchwork-Submitter: Minchan Kim
X-Patchwork-Id: 12479355
From: Minchan Kim
To: Andrew Morton
Cc: Linus Torvalds, Laura Abbott, Oliver Sang, David Hildenbrand,
 John Dias, Matthew Wilcox, Michal Hocko, Suren Baghdasaryan,
 Vlastimil Babka, LKML, linux-mm, lkp@lists.01.org, lkp@intel.com,
 ying.huang@intel.com, feng.tang@intel.com, zhengjun.xing@intel.com,
 Minchan Kim, Chris Goldsworthy
Subject: [PATCH v3] mm: fs: invalidate bh_lrus for only cold path
Date: Tue, 7 Sep 2021 14:23:47 -0700
Message-Id: <20210907212347.1977686-1-minchan@kernel.org>
X-Mailer: git-send-email 2.33.0.309.g3052b89438-goog
MIME-Version: 1.0

The kernel test robot reported a regression in fio.write_iops [1] after
[2]. Since lru_add_drain is called frequently, invalidating the bh_lrus
there increases the bh_lrus cache miss ratio, which in the end costs
more IO.

This patch moves the bh_lrus invalidation from the hot paths (e.g.,
zap_page_range, pagevec_release) to the cold paths (i.e.,
lru_add_drain_all, lru_cache_disable).

[1] https://lore.kernel.org/lkml/20210520083144.GD14190@xsang-OptiPlex-9020/
[2] 8cc621d2f45d ("mm: fs: invalidate BH LRU during page migration")

Reviewed-by: Chris Goldsworthy
Reported-by: kernel test robot
Signed-off-by: Minchan Kim
---
* v2: https://lore.kernel.org/lkml/20210601145425.1396981-1-minchan@kernel.org/
* v1: https://lore.kernel.org/lkml/YK0oQ76zX0uVZCwQ@google.com/

 fs/buffer.c                 |  8 ++++++--
 include/linux/buffer_head.h |  4 ++--
 mm/swap.c                   | 19 ++++++++++++++++---
 3 files changed, 24 insertions(+), 7 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index ab7573d72dd7..c615387aedca 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -1425,12 +1425,16 @@ void invalidate_bh_lrus(void)
 }
 EXPORT_SYMBOL_GPL(invalidate_bh_lrus);
 
-void invalidate_bh_lrus_cpu(int cpu)
+/*
+ * It's called from workqueue context so we need a bh_lru_lock to close
+ * the race with preemption/irq.
+ */
+void invalidate_bh_lrus_cpu(void)
 {
 	struct bh_lru *b;
 
 	bh_lru_lock();
-	b = per_cpu_ptr(&bh_lrus, cpu);
+	b = this_cpu_ptr(&bh_lrus);
 	__invalidate_bh_lrus(b);
 	bh_lru_unlock();
 }

diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
index 6486d3c19463..36f33685c8c0 100644
--- a/include/linux/buffer_head.h
+++ b/include/linux/buffer_head.h
@@ -194,7 +194,7 @@ void __breadahead_gfp(struct block_device *, sector_t block, unsigned int size,
 struct buffer_head *__bread_gfp(struct block_device *,
 				sector_t block, unsigned size, gfp_t gfp);
 void invalidate_bh_lrus(void);
-void invalidate_bh_lrus_cpu(int cpu);
+void invalidate_bh_lrus_cpu(void);
 bool has_bh_in_lru(int cpu, void *dummy);
 struct buffer_head *alloc_buffer_head(gfp_t gfp_flags);
 void free_buffer_head(struct buffer_head * bh);
@@ -408,7 +408,7 @@ static inline int inode_has_buffers(struct inode *inode) { return 0; }
 static inline void invalidate_inode_buffers(struct inode *inode) {}
 static inline int remove_inode_buffers(struct inode *inode) { return 1; }
 static inline int sync_mapping_buffers(struct address_space *mapping) { return 0; }
-static inline void invalidate_bh_lrus_cpu(int cpu) {}
+static inline void invalidate_bh_lrus_cpu(void) {}
 static inline bool has_bh_in_lru(int cpu, void *dummy) { return false; }
 #define buffer_heads_over_limit 0
 
diff --git a/mm/swap.c b/mm/swap.c
index 897200d27dd0..af3cad4e5378 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -620,7 +620,6 @@ void lru_add_drain_cpu(int cpu)
 		pagevec_lru_move_fn(pvec, lru_lazyfree_fn);
 
 	activate_page_drain(cpu);
-	invalidate_bh_lrus_cpu(cpu);
 }
 
 /**
@@ -703,6 +702,20 @@ void lru_add_drain(void)
 	local_unlock(&lru_pvecs.lock);
 }
 
+/*
+ * It's called from per-cpu workqueue context in SMP case so
+ * lru_add_drain_cpu and invalidate_bh_lrus_cpu should run on
+ * the same cpu. It shouldn't be a problem in !SMP case since
+ * the core is only one and the locks will disable preemption.
+ */
+static void lru_add_and_bh_lrus_drain(void)
+{
+	local_lock(&lru_pvecs.lock);
+	lru_add_drain_cpu(smp_processor_id());
+	local_unlock(&lru_pvecs.lock);
+	invalidate_bh_lrus_cpu();
+}
+
 void lru_add_drain_cpu_zone(struct zone *zone)
 {
 	local_lock(&lru_pvecs.lock);
@@ -717,7 +730,7 @@ static DEFINE_PER_CPU(struct work_struct, lru_add_drain_work);
 
 static void lru_add_drain_per_cpu(struct work_struct *dummy)
 {
-	lru_add_drain();
+	lru_add_and_bh_lrus_drain();
 }
 
 /*
@@ -858,7 +871,7 @@ void lru_cache_disable(void)
 	 */
 	__lru_add_drain_all(true);
 #else
-	lru_add_drain();
+	lru_add_and_bh_lrus_drain();
 #endif
 }
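
A note on the core change above: per_cpu_ptr(&bh_lrus, cpu) takes an
explicit CPU id and may hand back a pointer into another CPU's data,
while this_cpu_ptr(&bh_lrus) always resolves to the executing CPU's
instance. Dropping the cpu argument is only safe because the drain work
is now queued per CPU (see the second sketch below). Here is a minimal
sketch of the distinction; it is illustrative only, and demo_lru /
demo_lrus / demo_drain_* are hypothetical stand-ins for the real
bh_lrus machinery in fs/buffer.c:

#include <linux/percpu.h>
#include <linux/preempt.h>

/* Hypothetical stand-in for the bh_lrus per-CPU LRU in fs/buffer.c. */
struct demo_lru {
	int nr_cached;
};
static DEFINE_PER_CPU(struct demo_lru, demo_lrus);

/*
 * Old shape: the caller names an arbitrary CPU, so the pointer may
 * reference a remote CPU's data. Nothing ties the access to the CPU
 * we are running on, and remote stores need extra serialization.
 */
static void demo_drain_remote(int cpu)
{
	struct demo_lru *lru = per_cpu_ptr(&demo_lrus, cpu);

	lru->nr_cached = 0;
}

/*
 * New shape: this_cpu_ptr() resolves to the calling CPU's instance,
 * so only local state is touched. The real invalidate_bh_lrus_cpu()
 * takes bh_lru_lock() instead of a bare preempt_disable() to also
 * fence out interrupts.
 */
static void demo_drain_local(void)
{
	struct demo_lru *lru;

	preempt_disable();
	lru = this_cpu_ptr(&demo_lrus);
	lru->nr_cached = 0;
	preempt_enable();
}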
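
The guarantee that lru_add_and_bh_lrus_drain() executes on the CPU it
drains comes from how the cold path dispatches its work: one work item
is queued per CPU with queue_work_on(), so the handler never migrates.
A simplified sketch follows, loosely modeled on __lru_add_drain_all()
in mm/swap.c; the demo_* names are hypothetical, the pagevec-emptiness
checks are omitted, and system_wq stands in for the dedicated
mm_percpu_wq used by the real code:

#include <linux/cpu.h>
#include <linux/percpu.h>
#include <linux/workqueue.h>

static DEFINE_PER_CPU(struct work_struct, demo_drain_work);

static void demo_drain_per_cpu(struct work_struct *dummy)
{
	/*
	 * Runs on the CPU the work item was queued on, which is what
	 * makes a this_cpu_ptr()-based drain safe in the handler.
	 */
}

static void demo_drain_all(void)
{
	int cpu;

	cpus_read_lock();	/* keep the online CPU set stable */
	for_each_online_cpu(cpu) {
		struct work_struct *work = &per_cpu(demo_drain_work, cpu);

		INIT_WORK(work, demo_drain_per_cpu);
		/* Pin the work to @cpu; it executes there, nowhere else. */
		queue_work_on(cpu, system_wq, work);
	}
	for_each_online_cpu(cpu)
		flush_work(&per_cpu(demo_drain_work, cpu));
	cpus_read_unlock();
}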