From patchwork Fri Sep 14 14:59:23 2018
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 10600911
From: Sebastian Andrzej Siewior
To: linux-mm@kvack.org
Cc: tglx@linutronix.de, Vlastimil Babka, frederic@kernel.org, Sebastian Andrzej Siewior
Subject: [PATCH 1/2] mm/swap: Add pagevec locking
Date: Fri, 14 Sep 2018 16:59:23 +0200
Message-Id: <20180914145924.22055-2-bigeasy@linutronix.de>
In-Reply-To: <20180914145924.22055-1-bigeasy@linutronix.de>
References: <20180914145924.22055-1-bigeasy@linutronix.de>

From: Thomas Gleixner

The locking of struct pagevec is done by disabling preemption. In case the
struct has to be accessed from interrupt context, interrupts are disabled
as well. This means the struct can only be accessed locally from the CPU.
There is also no lockdep coverage which would scream if it were accessed
from the wrong context.

Create struct swap_pagevec, which consists of a pagevec member and a
spinlock_t. Before the struct is accessed, the spinlock has to be acquired
instead of using preempt_disable(). Since the struct is used CPU-locally,
there is no spinning on the lock; it is acquired immediately. If the
struct is accessed from interrupt context, spin_lock_irqsave() is used.
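The pattern, in miniature (a user-space C sketch for illustration only, not
part of the patch: pthread spinlocks, a plain array and sched_getcpu() stand
in for spinlock_t, the per-CPU variables and raw_cpu_ptr(), and struct
pagevec is reduced to a stub):

/*
 * Illustration only -- a user-space analogue of the swap_pagevec locking
 * pattern, not kernel code.
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

#define NR_CPUS_MAX	64
#define PAGEVEC_SIZE	14

struct pagevec {
	unsigned int nr;
	void *pages[PAGEVEC_SIZE];
};

struct swap_pagevec {
	pthread_spinlock_t lock;	/* spinlock_t in the kernel */
	struct pagevec pvec;
};

static struct swap_pagevec pvecs[NR_CPUS_MAX];

/* Analogue of lock_swap_pvec(): lock the current CPU's pagevec. */
static struct swap_pagevec *lock_swap_pvec(void)
{
	struct swap_pagevec *swpvec = &pvecs[sched_getcpu() % NR_CPUS_MAX];

	pthread_spin_lock(&swpvec->lock);
	return swpvec;
}

/* Analogue of lock_swap_pvec_cpu(): lock a (possibly remote) CPU's pagevec. */
static struct swap_pagevec *lock_swap_pvec_cpu(int cpu)
{
	struct swap_pagevec *swpvec = &pvecs[cpu];

	pthread_spin_lock(&swpvec->lock);
	return swpvec;
}

static void unlock_swap_pvec(struct swap_pagevec *swpvec)
{
	pthread_spin_unlock(&swpvec->lock);
}

int main(void)
{
	struct swap_pagevec *swpvec;
	int cpu;

	for (cpu = 0; cpu < NR_CPUS_MAX; cpu++)
		pthread_spin_init(&pvecs[cpu].lock, PTHREAD_PROCESS_PRIVATE);

	/* Local access: take the lock instead of disabling preemption. */
	swpvec = lock_swap_pvec();
	swpvec->pvec.nr = 0;
	unlock_swap_pvec(swpvec);

	/* Remote access is serialized by the same lock (see patch 2/2). */
	swpvec = lock_swap_pvec_cpu(0);
	printf("cpu0 pagevec holds %u pages\n", swpvec->pvec.nr);
	unlock_swap_pvec(swpvec);
	return 0;
}

Build with e.g. "cc demo.c -lpthread". The point of the lock-based scheme is
that exclusion no longer depends on "only code running on this CPU can touch
the data", so any CPU may take the lock, which is what patch 2/2 exploits.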
Signed-off-by: Thomas Gleixner
[bigeasy: +commit message]
Signed-off-by: Sebastian Andrzej Siewior
---
 mm/compaction.c |   7 +--
 mm/swap.c       | 145 +++++++++++++++++++++++++++++++++++++-----------
 2 files changed, 115 insertions(+), 37 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index faca45ebe62df..569823e381081 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1652,15 +1652,14 @@ static enum compact_result compact_zone(struct zone *zone, struct compact_contro
 	 * would succeed.
 	 */
 	if (cc->order > 0 && cc->last_migrated_pfn) {
-		int cpu;
 		unsigned long current_block_start =
 			block_start_pfn(cc->migrate_pfn, cc->order);
 
 		if (cc->last_migrated_pfn < current_block_start) {
-			cpu = get_cpu();
-			lru_add_drain_cpu(cpu);
+			lru_add_drain();
+			preempt_disable();
 			drain_local_pages(zone);
-			put_cpu();
+			preempt_enable();
 			/* No more flushing until we migrate again */
 			cc->last_migrated_pfn = 0;
 		}
diff --git a/mm/swap.c b/mm/swap.c
index 26fc9b5f1b6c1..17702ee5bf81c 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -44,14 +44,71 @@
 /* How many pages do we try to swap or page in/out together? */
 int page_cluster;
 
-static DEFINE_PER_CPU(struct pagevec, lru_add_pvec);
-static DEFINE_PER_CPU(struct pagevec, lru_rotate_pvecs);
-static DEFINE_PER_CPU(struct pagevec, lru_deactivate_file_pvecs);
-static DEFINE_PER_CPU(struct pagevec, lru_lazyfree_pvecs);
+struct swap_pagevec {
+	spinlock_t	lock;
+	struct pagevec	pvec;
+};
+
+#define DEFINE_PER_CPU_PAGEVEC(lvar)				\
+	DEFINE_PER_CPU(struct swap_pagevec, lvar) = {		\
+		.lock = __SPIN_LOCK_UNLOCKED((lvar).lock) }
+
+static DEFINE_PER_CPU_PAGEVEC(lru_add_pvec);
+static DEFINE_PER_CPU_PAGEVEC(lru_rotate_pvecs);
+static DEFINE_PER_CPU_PAGEVEC(lru_deactivate_file_pvecs);
+static DEFINE_PER_CPU_PAGEVEC(lru_lazyfree_pvecs);
 #ifdef CONFIG_SMP
-static DEFINE_PER_CPU(struct pagevec, activate_page_pvecs);
+static DEFINE_PER_CPU_PAGEVEC(activate_page_pvecs);
 #endif
 
+static inline
+struct swap_pagevec *lock_swap_pvec(struct swap_pagevec __percpu *p)
+{
+	struct swap_pagevec *swpvec = raw_cpu_ptr(p);
+
+	spin_lock(&swpvec->lock);
+	return swpvec;
+}
+
+static inline struct swap_pagevec *
+lock_swap_pvec_cpu(struct swap_pagevec __percpu *p, int cpu)
+{
+	struct swap_pagevec *swpvec = per_cpu_ptr(p, cpu);
+
+	spin_lock(&swpvec->lock);
+	return swpvec;
+}
+
+static inline struct swap_pagevec *
+lock_swap_pvec_irqsave(struct swap_pagevec __percpu *p, unsigned long *flags)
+{
+	struct swap_pagevec *swpvec = raw_cpu_ptr(p);
+
+	spin_lock_irqsave(&swpvec->lock, (*flags));
+	return swpvec;
+}
+
+static inline struct swap_pagevec *
+lock_swap_pvec_cpu_irqsave(struct swap_pagevec __percpu *p, int cpu,
+			   unsigned long *flags)
+{
+	struct swap_pagevec *swpvec = per_cpu_ptr(p, cpu);
+
+	spin_lock_irqsave(&swpvec->lock, *flags);
+	return swpvec;
+}
+
+static inline void unlock_swap_pvec(struct swap_pagevec *swpvec)
+{
+	spin_unlock(&swpvec->lock);
+}
+
+static inline void
+unlock_swap_pvec_irqrestore(struct swap_pagevec *swpvec, unsigned long flags)
+{
+	spin_unlock_irqrestore(&swpvec->lock, flags);
+}
+
 /*
  * This path almost never happens for VM activity - pages are normally
  * freed via pagevecs. But it gets used by networking.
@@ -249,15 +306,17 @@ void rotate_reclaimable_page(struct page *page)
 {
 	if (!PageLocked(page) && !PageDirty(page) &&
 	    !PageUnevictable(page) && PageLRU(page)) {
+		struct swap_pagevec *swpvec;
 		struct pagevec *pvec;
 		unsigned long flags;
 
 		get_page(page);
-		local_irq_save(flags);
-		pvec = this_cpu_ptr(&lru_rotate_pvecs);
+
+		swpvec = lock_swap_pvec_irqsave(&lru_rotate_pvecs, &flags);
+		pvec = &swpvec->pvec;
 		if (!pagevec_add(pvec, page) || PageCompound(page))
 			pagevec_move_tail(pvec);
-		local_irq_restore(flags);
+		unlock_swap_pvec_irqrestore(swpvec, flags);
 	}
 }
 
@@ -292,27 +351,32 @@ static void __activate_page(struct page *page, struct lruvec *lruvec,
 #ifdef CONFIG_SMP
 static void activate_page_drain(int cpu)
 {
-	struct pagevec *pvec = &per_cpu(activate_page_pvecs, cpu);
+	struct swap_pagevec *swpvec = lock_swap_pvec(&activate_page_pvecs);
+	struct pagevec *pvec = &swpvec->pvec;
 
 	if (pagevec_count(pvec))
 		pagevec_lru_move_fn(pvec, __activate_page, NULL);
+	unlock_swap_pvec(swpvec);
 }
 
 static bool need_activate_page_drain(int cpu)
 {
-	return pagevec_count(&per_cpu(activate_page_pvecs, cpu)) != 0;
+	return pagevec_count(per_cpu_ptr(&activate_page_pvecs.pvec, cpu)) != 0;
 }
 
 void activate_page(struct page *page)
 {
 	page = compound_head(page);
 	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
-		struct pagevec *pvec = &get_cpu_var(activate_page_pvecs);
+		struct swap_pagevec *swpvec;
+		struct pagevec *pvec;
 
 		get_page(page);
+		swpvec = lock_swap_pvec(&activate_page_pvecs);
+		pvec = &swpvec->pvec;
 		if (!pagevec_add(pvec, page) || PageCompound(page))
 			pagevec_lru_move_fn(pvec, __activate_page, NULL);
-		put_cpu_var(activate_page_pvecs);
+		unlock_swap_pvec(swpvec);
 	}
 }
 
@@ -339,7 +403,8 @@ void activate_page(struct page *page)
 
 static void __lru_cache_activate_page(struct page *page)
 {
-	struct pagevec *pvec = &get_cpu_var(lru_add_pvec);
+	struct swap_pagevec *swpvec = lock_swap_pvec(&lru_add_pvec);
+	struct pagevec *pvec = &swpvec->pvec;
 	int i;
 
 	/*
@@ -361,7 +426,7 @@ static void __lru_cache_activate_page(struct page *page)
 		}
 	}
 
-	put_cpu_var(lru_add_pvec);
+	unlock_swap_pvec(swpvec);
 }
 
 /*
@@ -403,12 +468,13 @@ EXPORT_SYMBOL(mark_page_accessed);
 
 static void __lru_cache_add(struct page *page)
 {
-	struct pagevec *pvec = &get_cpu_var(lru_add_pvec);
+	struct swap_pagevec *swpvec = lock_swap_pvec(&lru_add_pvec);
+	struct pagevec *pvec = &swpvec->pvec;
 
 	get_page(page);
 	if (!pagevec_add(pvec, page) || PageCompound(page))
 		__pagevec_lru_add(pvec);
-	put_cpu_var(lru_add_pvec);
+	unlock_swap_pvec(swpvec);
 }
 
 /**
@@ -576,28 +642,34 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec,
  */
 void lru_add_drain_cpu(int cpu)
 {
-	struct pagevec *pvec = &per_cpu(lru_add_pvec, cpu);
+	struct swap_pagevec *swpvec = lock_swap_pvec_cpu(&lru_add_pvec, cpu);
+	struct pagevec *pvec = &swpvec->pvec;
+	unsigned long flags;
 
 	if (pagevec_count(pvec))
 		__pagevec_lru_add(pvec);
+	unlock_swap_pvec(swpvec);
 
-	pvec = &per_cpu(lru_rotate_pvecs, cpu);
+	swpvec = lock_swap_pvec_cpu_irqsave(&lru_rotate_pvecs, cpu, &flags);
+	pvec = &swpvec->pvec;
 	if (pagevec_count(pvec)) {
-		unsigned long flags;
-
 		/* No harm done if a racing interrupt already did this */
-		local_irq_save(flags);
 		pagevec_move_tail(pvec);
-		local_irq_restore(flags);
 	}
+	unlock_swap_pvec_irqrestore(swpvec, flags);
 
-	pvec = &per_cpu(lru_deactivate_file_pvecs, cpu);
+	swpvec = lock_swap_pvec_cpu(&lru_deactivate_file_pvecs, cpu);
+	pvec = &swpvec->pvec;
 	if (pagevec_count(pvec))
 		pagevec_lru_move_fn(pvec, lru_deactivate_file_fn, NULL);
+	unlock_swap_pvec(swpvec);
 
-	pvec = &per_cpu(lru_lazyfree_pvecs, cpu);
+	swpvec = lock_swap_pvec_cpu(&lru_lazyfree_pvecs, cpu);
+	pvec = &swpvec->pvec;
 	if (pagevec_count(pvec))
 		pagevec_lru_move_fn(pvec, lru_lazyfree_fn, NULL);
+	unlock_swap_pvec(swpvec);
 
 	activate_page_drain(cpu);
 }
@@ -612,6 +684,9 @@ void lru_add_drain_cpu(int cpu)
  */
 void deactivate_file_page(struct page *page)
 {
+	struct swap_pagevec *swpvec;
+	struct pagevec *pvec;
+
 	/*
 	 * In a workload with many unevictable page such as mprotect,
 	 * unevictable page deactivation for accelerating reclaim is pointless.
@@ -620,11 +695,12 @@ void deactivate_file_page(struct page *page)
 		return;
 
 	if (likely(get_page_unless_zero(page))) {
-		struct pagevec *pvec = &get_cpu_var(lru_deactivate_file_pvecs);
+		swpvec = lock_swap_pvec(&lru_deactivate_file_pvecs);
+		pvec = &swpvec->pvec;
 
 		if (!pagevec_add(pvec, page) || PageCompound(page))
 			pagevec_lru_move_fn(pvec, lru_deactivate_file_fn, NULL);
-		put_cpu_var(lru_deactivate_file_pvecs);
+		unlock_swap_pvec(swpvec);
 	}
 }
 
@@ -637,21 +713,24 @@ void deactivate_file_page(struct page *page)
  */
 void mark_page_lazyfree(struct page *page)
 {
+	struct swap_pagevec *swpvec;
+	struct pagevec *pvec;
+
 	if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
 	    !PageSwapCache(page) && !PageUnevictable(page)) {
-		struct pagevec *pvec = &get_cpu_var(lru_lazyfree_pvecs);
+		swpvec = lock_swap_pvec(&lru_lazyfree_pvecs);
+		pvec = &swpvec->pvec;
 
 		get_page(page);
 		if (!pagevec_add(pvec, page) || PageCompound(page))
 			pagevec_lru_move_fn(pvec, lru_lazyfree_fn, NULL);
-		put_cpu_var(lru_lazyfree_pvecs);
+		unlock_swap_pvec(swpvec);
 	}
 }
 
 void lru_add_drain(void)
 {
-	lru_add_drain_cpu(get_cpu());
-	put_cpu();
+	lru_add_drain_cpu(raw_smp_processor_id());
 }
 
 static void lru_add_drain_per_cpu(struct work_struct *dummy)
@@ -687,10 +766,10 @@ void lru_add_drain_all(void)
 	for_each_online_cpu(cpu) {
 		struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);
 
-		if (pagevec_count(&per_cpu(lru_add_pvec, cpu)) ||
-		    pagevec_count(&per_cpu(lru_rotate_pvecs, cpu)) ||
-		    pagevec_count(&per_cpu(lru_deactivate_file_pvecs, cpu)) ||
-		    pagevec_count(&per_cpu(lru_lazyfree_pvecs, cpu)) ||
+		if (pagevec_count(&per_cpu(lru_add_pvec.pvec, cpu)) ||
+		    pagevec_count(&per_cpu(lru_rotate_pvecs.pvec, cpu)) ||
+		    pagevec_count(&per_cpu(lru_deactivate_file_pvecs.pvec, cpu)) ||
+		    pagevec_count(&per_cpu(lru_lazyfree_pvecs.pvec, cpu)) ||
 		    need_activate_page_drain(cpu)) {
 			INIT_WORK(work, lru_add_drain_per_cpu);
 			queue_work_on(cpu, mm_percpu_wq, work);

From patchwork Fri Sep 14 14:59:24 2018
X-Patchwork-Submitter: Sebastian Andrzej Siewior
X-Patchwork-Id: 10600913
From: Sebastian Andrzej Siewior
To: linux-mm@kvack.org
Cc: tglx@linutronix.de, Vlastimil Babka, frederic@kernel.org, Sebastian Andrzej Siewior
Subject: [PATCH 2/2] mm/swap: Access struct pagevec remotely
Date: Fri, 14 Sep 2018 16:59:24 +0200
Message-Id: <20180914145924.22055-3-bigeasy@linutronix.de>
In-Reply-To: <20180914145924.22055-1-bigeasy@linutronix.de>
References: <20180914145924.22055-1-bigeasy@linutronix.de>

From: Thomas Gleixner

Now that struct pagevec is locked during access, it is possible to access
it from a remote CPU. The advantage is that the work can be done from the
"requesting" CPU without firing a worker on each remote CPU and waiting for
it to complete the work.

Signed-off-by: Thomas Gleixner
[bigeasy: +commit message]
Signed-off-by: Sebastian Andrzej Siewior
---
 mm/swap.c | 37 +------------------------------------
 1 file changed, 1 insertion(+), 36 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index 17702ee5bf81c..ec36e733aab5d 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -733,54 +733,19 @@ void lru_add_drain(void)
 	lru_add_drain_cpu(raw_smp_processor_id());
 }
 
-static void lru_add_drain_per_cpu(struct work_struct *dummy)
-{
-	lru_add_drain();
-}
-
-static DEFINE_PER_CPU(struct work_struct, lru_add_drain_work);
-
-/*
- * Doesn't need any cpu hotplug locking because we do rely on per-cpu
- * kworkers being shut down before our page_alloc_cpu_dead callback is
- * executed on the offlined cpu.
- * Calling this function with cpu hotplug locks held can actually lead
- * to obscure indirect dependencies via WQ context.
- */
 void lru_add_drain_all(void)
 {
-	static DEFINE_MUTEX(lock);
-	static struct cpumask has_work;
 	int cpu;
 
-	/*
-	 * Make sure nobody triggers this path before mm_percpu_wq is fully
-	 * initialized.
-	 */
-	if (WARN_ON(!mm_percpu_wq))
-		return;
-
-	mutex_lock(&lock);
-	cpumask_clear(&has_work);
-
 	for_each_online_cpu(cpu) {
-		struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);
-
 		if (pagevec_count(&per_cpu(lru_add_pvec.pvec, cpu)) ||
 		    pagevec_count(&per_cpu(lru_rotate_pvecs.pvec, cpu)) ||
 		    pagevec_count(&per_cpu(lru_deactivate_file_pvecs.pvec, cpu)) ||
 		    pagevec_count(&per_cpu(lru_lazyfree_pvecs.pvec, cpu)) ||
 		    need_activate_page_drain(cpu)) {
-			INIT_WORK(work, lru_add_drain_per_cpu);
-			queue_work_on(cpu, mm_percpu_wq, work);
-			cpumask_set_cpu(cpu, &has_work);
+			lru_add_drain_cpu(cpu);
 		}
 	}
-
-	for_each_cpu(cpu, &has_work)
-		flush_work(&per_cpu(lru_add_drain_work, cpu));
-
-	mutex_unlock(&lock);
 }
 
 /**
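The control-flow change can be pictured with the same kind of user-space
sketch as for patch 1/2 (illustration only, not kernel code; pvec_drain()
and the plain nr counter are made-up stand-ins for the real drain functions
and pagevec_count()):

/*
 * User-space analogue of the reworked lru_add_drain_all(): with a per-CPU
 * lock, the requesting thread drains every CPU's pagevec itself instead of
 * queueing work on each CPU and waiting for it.
 */
#include <pthread.h>
#include <stdio.h>

#define NR_CPUS_MAX 4

struct swap_pagevec {
	pthread_spinlock_t lock;
	unsigned int nr;		/* pages pending in this pagevec */
};

static struct swap_pagevec pvecs[NR_CPUS_MAX];

/* Stand-in for __pagevec_lru_add() etc.: flush the pending pages. */
static void pvec_drain(struct swap_pagevec *swpvec)
{
	swpvec->nr = 0;
}

/* Analogue of the reworked lru_add_drain_all(). */
static void drain_all(void)
{
	int cpu;

	for (cpu = 0; cpu < NR_CPUS_MAX; cpu++) {
		struct swap_pagevec *swpvec = &pvecs[cpu];

		if (!swpvec->nr)	/* nothing pending, skip this CPU */
			continue;
		pthread_spin_lock(&swpvec->lock);
		pvec_drain(swpvec);	/* drained remotely, no worker */
		pthread_spin_unlock(&swpvec->lock);
	}
}

int main(void)
{
	int cpu;

	for (cpu = 0; cpu < NR_CPUS_MAX; cpu++) {
		pthread_spin_init(&pvecs[cpu].lock, PTHREAD_PROCESS_PRIVATE);
		pvecs[cpu].nr = cpu;	/* pretend some CPUs have pending pages */
	}
	drain_all();
	for (cpu = 0; cpu < NR_CPUS_MAX; cpu++)
		printf("cpu%d pending after drain: %u\n", cpu, pvecs[cpu].nr);
	return 0;
}

The trade-off: the old path paid for INIT_WORK()/queue_work_on() plus a
flush_work() round trip on every CPU with pending pages; the new one takes
each rarely contended per-CPU lock directly from the requesting CPU.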