From patchwork Fri Apr 25 23:21:22 2014
X-Patchwork-Submitter: Linus Torvalds
X-Patchwork-Id: 4067071
Date: Fri, 25 Apr 2014 16:21:22 -0700 (PDT)
From: Linus Torvalds
To: Dave Hansen, Hugh Dickins
Cc: Peter Anvin, Benjamin Herrenschmidt, linux-arch@vger.kernel.org,
    linux-mm@kvack.org, Russell King, Tony Luck, Michal Hocko,
    linux-sh@vger.kernel.org, Guenter Roeck, linux-s390@vger.kernel.org
Subject: [PATCH] mm: split 'tlb_flush_mmu()' into tlb flushing and memory freeing parts

From: Linus Torvalds
Date: Fri, 25 Apr 2014 16:05:40 -0700
Subject: [PATCH] mm: split 'tlb_flush_mmu()' into tlb flushing and memory freeing parts

The mmu-gather operation 'tlb_flush_mmu()' has done two things: the
actual tlb flush operation, and the batched freeing of the pages that
the TLB entries pointed at.

This splits the operation into separate phases, so that the forced
batched flushing done by zap_pte_range() can now do the actual TLB
flush while still holding the page table lock, but delay the batched
freeing of all the pages to after the lock has been dropped.

This in turn allows us to avoid a race condition between
set_page_dirty() (as called by zap_pte_range() when it finds a dirty
shared memory pte) and page_mkclean(): because we now flush all the
dirty page data from the TLB's while holding the pte lock,
page_mkclean() will be held up walking the (recently cleaned) page
tables until after the TLB entries have been flushed from all CPU's.

Reported-by: Benjamin Herrenschmidt
Tested-by: Dave Hansen
Acked-by: Hugh Dickins
Cc: Peter Zijlstra
Cc: Russell King - ARM Linux
Cc: Tony Luck
Signed-off-by: Linus Torvalds
---
Ok, this is now tentatively committed to my tree. I decided to just go
ahead and do the ARM/ia64/sh parts blindly, since the split-up was
fairly trivial (ia64 being slightly more complicated). I tested the UM
one, at least as far as compiling. The others are totally untested, so
it would be good to make sure I didn't screw anything up.

The x86 and generic parts are the same ones that DaveH already tested
and that got sent out a few days ago.
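In other words, the tail of zap_pte_range() now boils down to roughly
the following ordering. This is only a condensed sketch of the
mm/memory.c hunks below, with the tlb->start/tlb->end range
bookkeeping left out:

	if (force_flush) {
		/*
		 * Phase 1: flush the (possibly dirty) TLB entries while
		 * the pte lock is still held, so page_mkclean(), which
		 * needs the same pte lock, is held up until the dirty
		 * TLB entries have been flushed from all CPUs.
		 */
		tlb_flush_mmu_tlbonly(tlb);
	}
	pte_unmap_unlock(start_pte, ptl);

	if (force_flush) {
		force_flush = 0;
		/*
		 * Phase 2: the batched (and potentially expensive)
		 * freeing of the gathered pages happens only after the
		 * pte lock has been dropped.
		 */
		tlb_flush_mmu_free(tlb);
		if (addr != end)
			goto again;
	}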
 arch/arm/include/asm/tlb.h  | 12 +++++++++-
 arch/ia64/include/asm/tlb.h | 42 ++++++++++++++++++++++++++---------
 arch/s390/include/asm/tlb.h | 13 ++++++++++-
 arch/sh/include/asm/tlb.h   |  8 +++++++
 arch/um/include/asm/tlb.h   | 16 ++++++++++++--
 mm/memory.c                 | 53 +++++++++++++++++++++++++++++----------------
 6 files changed, 111 insertions(+), 33 deletions(-)

diff --git a/arch/arm/include/asm/tlb.h b/arch/arm/include/asm/tlb.h
index 0baf7f0d9394..f1a0dace3efe 100644
--- a/arch/arm/include/asm/tlb.h
+++ b/arch/arm/include/asm/tlb.h
@@ -98,15 +98,25 @@ static inline void __tlb_alloc_page(struct mmu_gather *tlb)
 	}
 }
 
-static inline void tlb_flush_mmu(struct mmu_gather *tlb)
+static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
 {
 	tlb_flush(tlb);
+}
+
+static inline void tlb_flush_mmu_free(struct mmu_gather *tlb)
+{
 	free_pages_and_swap_cache(tlb->pages, tlb->nr);
 	tlb->nr = 0;
 	if (tlb->pages == tlb->local)
 		__tlb_alloc_page(tlb);
 }
 
+static inline void tlb_flush_mmu(struct mmu_gather *tlb)
+{
+	tlb_flush_mmu_tlbonly(tlb);
+	tlb_flush_mmu_free(tlb);
+}
+
 static inline void
 tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, unsigned long start, unsigned long end)
 {
diff --git a/arch/ia64/include/asm/tlb.h b/arch/ia64/include/asm/tlb.h
index bc5efc7c3f3f..39d64e0df1de 100644
--- a/arch/ia64/include/asm/tlb.h
+++ b/arch/ia64/include/asm/tlb.h
@@ -91,18 +91,9 @@ extern struct ia64_tr_entry *ia64_idtrs[NR_CPUS];
 #define RR_RID_MASK	0x00000000ffffff00L
 #define RR_TO_RID(val)	((val >> 8) & 0xffffff)
 
-/*
- * Flush the TLB for address range START to END and, if not in fast mode, release the
- * freed pages that where gathered up to this point.
- */
 static inline void
-ia64_tlb_flush_mmu (struct mmu_gather *tlb, unsigned long start, unsigned long end)
+ia64_tlb_flush_mmu_tlbonly(struct mmu_gather *tlb, unsigned long start, unsigned long end)
 {
-	unsigned long i;
-	unsigned int nr;
-
-	if (!tlb->need_flush)
-		return;
 	tlb->need_flush = 0;
 
 	if (tlb->fullmm) {
@@ -135,6 +126,14 @@ ia64_tlb_flush_mmu (struct mmu_gather *tlb, unsigned long start, unsigned long e
 		flush_tlb_range(&vma, ia64_thash(start), ia64_thash(end));
 	}
 
+}
+
+static inline void
+ia64_tlb_flush_mmu_free(struct mmu_gather *tlb)
+{
+	unsigned long i;
+	unsigned int nr;
+
 	/* lastly, release the freed pages */
 	nr = tlb->nr;
 
@@ -144,6 +143,19 @@ ia64_tlb_flush_mmu (struct mmu_gather *tlb, unsigned long start, unsigned long e
 		free_page_and_swap_cache(tlb->pages[i]);
 }
 
+/*
+ * Flush the TLB for address range START to END and, if not in fast mode, release the
+ * freed pages that where gathered up to this point.
+ */
+static inline void
+ia64_tlb_flush_mmu (struct mmu_gather *tlb, unsigned long start, unsigned long end)
+{
+	if (!tlb->need_flush)
+		return;
+	ia64_tlb_flush_mmu_tlbonly(tlb, start, end);
+	ia64_tlb_flush_mmu_free(tlb);
+}
+
 static inline void __tlb_alloc_page(struct mmu_gather *tlb)
 {
 	unsigned long addr = __get_free_pages(GFP_NOWAIT | __GFP_NOWARN, 0);
@@ -206,6 +218,16 @@ static inline int __tlb_remove_page(struct mmu_gather *tlb, struct page *page)
 	return tlb->max - tlb->nr;
 }
 
+static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
+{
+	ia64_tlb_flush_mmu_tlbonly(tlb, tlb->start_addr, tlb->end_addr);
+}
+
+static inline void tlb_flush_mmu_free(struct mmu_gather *tlb)
+{
+	ia64_tlb_flush_mmu_free(tlb);
+}
+
 static inline void tlb_flush_mmu(struct mmu_gather *tlb)
 {
 	ia64_tlb_flush_mmu(tlb, tlb->start_addr, tlb->end_addr);
diff --git a/arch/s390/include/asm/tlb.h b/arch/s390/include/asm/tlb.h
index c544b6f05d95..a25f09fbaf36 100644
--- a/arch/s390/include/asm/tlb.h
+++ b/arch/s390/include/asm/tlb.h
@@ -59,12 +59,23 @@ static inline void tlb_gather_mmu(struct mmu_gather *tlb,
 	tlb->batch = NULL;
 }
 
-static inline void tlb_flush_mmu(struct mmu_gather *tlb)
+static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
 {
 	__tlb_flush_mm_lazy(tlb->mm);
+}
+
+static inline void tlb_flush_mmu_free(struct mmu_gather *tlb)
+{
 	tlb_table_flush(tlb);
 }
+
+static inline void tlb_flush_mmu(struct mmu_gather *tlb)
+{
+	tlb_flush_mmu_tlbonly(tlb);
+	tlb_flush_mmu_free(tlb);
+}
+
 static inline void tlb_finish_mmu(struct mmu_gather *tlb,
 				  unsigned long start, unsigned long end)
 {
diff --git a/arch/sh/include/asm/tlb.h b/arch/sh/include/asm/tlb.h
index 362192ed12fe..62f80d2a9df9 100644
--- a/arch/sh/include/asm/tlb.h
+++ b/arch/sh/include/asm/tlb.h
@@ -86,6 +86,14 @@ tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
 	}
 }
 
+static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
+{
+}
+
+static inline void tlb_flush_mmu_free(struct mmu_gather *tlb)
+{
+}
+
 static inline void tlb_flush_mmu(struct mmu_gather *tlb)
 {
 }
diff --git a/arch/um/include/asm/tlb.h b/arch/um/include/asm/tlb.h
index 29b0301c18aa..16eb63fac57d 100644
--- a/arch/um/include/asm/tlb.h
+++ b/arch/um/include/asm/tlb.h
@@ -59,13 +59,25 @@ extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
 			       unsigned long end);
 
 static inline void
+tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
+{
+	flush_tlb_mm_range(tlb->mm, tlb->start, tlb->end);
+}
+
+static inline void
+tlb_flush_mmu_free(struct mmu_gather *tlb)
+{
+	init_tlb_gather(tlb);
+}
+
+static inline void
 tlb_flush_mmu(struct mmu_gather *tlb)
 {
 	if (!tlb->need_flush)
 		return;
 
-	flush_tlb_mm_range(tlb->mm, tlb->start, tlb->end);
-	init_tlb_gather(tlb);
+	tlb_flush_mmu_tlbonly(tlb);
+	tlb_flush_mmu_free(tlb);
 }
 
 /* tlb_finish_mmu
diff --git a/mm/memory.c b/mm/memory.c
index 93e332d5ed77..037b812a9531 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -232,17 +232,18 @@ void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, unsigned long
 #endif
 }
 
-void tlb_flush_mmu(struct mmu_gather *tlb)
+static void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
 {
-	struct mmu_gather_batch *batch;
-
-	if (!tlb->need_flush)
-		return;
 	tlb->need_flush = 0;
 	tlb_flush(tlb);
 #ifdef CONFIG_HAVE_RCU_TABLE_FREE
 	tlb_table_flush(tlb);
 #endif
+}
+
+static void tlb_flush_mmu_free(struct mmu_gather *tlb)
+{
+	struct mmu_gather_batch *batch;
 
 	for (batch = &tlb->local; batch; batch = batch->next) {
 		free_pages_and_swap_cache(batch->pages, batch->nr);
@@ -251,6 +252,14 @@ void tlb_flush_mmu(struct mmu_gather *tlb)
 	tlb->active = &tlb->local;
 }
 
+void tlb_flush_mmu(struct mmu_gather *tlb)
+{
+	if (!tlb->need_flush)
+		return;
+	tlb_flush_mmu_tlbonly(tlb);
+	tlb_flush_mmu_free(tlb);
+}
+
 /* tlb_finish_mmu
 *	Called at the end of the shootdown operation to free up any resources
 *	that were required.
@@ -1127,8 +1136,10 @@ again:
 			if (PageAnon(page))
 				rss[MM_ANONPAGES]--;
 			else {
-				if (pte_dirty(ptent))
+				if (pte_dirty(ptent)) {
+					force_flush = 1;
 					set_page_dirty(page);
+				}
 				if (pte_young(ptent) &&
 				    likely(!(vma->vm_flags & VM_SEQ_READ)))
 					mark_page_accessed(page);
@@ -1137,9 +1148,10 @@ again:
 			page_remove_rmap(page);
 			if (unlikely(page_mapcount(page) < 0))
 				print_bad_pte(vma, addr, ptent, page);
-			force_flush = !__tlb_remove_page(tlb, page);
-			if (force_flush)
+			if (unlikely(!__tlb_remove_page(tlb, page))) {
+				force_flush = 1;
 				break;
+			}
 			continue;
 		}
 		/*
@@ -1174,18 +1186,11 @@ again:
 
 	add_mm_rss_vec(mm, rss);
 	arch_leave_lazy_mmu_mode();
-	pte_unmap_unlock(start_pte, ptl);
 
-	/*
-	 * mmu_gather ran out of room to batch pages, we break out of
-	 * the PTE lock to avoid doing the potential expensive TLB invalidate
-	 * and page-free while holding it.
-	 */
+	/* Do the actual TLB flush before dropping ptl */
 	if (force_flush) {
 		unsigned long old_end;
 
-		force_flush = 0;
-
 		/*
 		 * Flush the TLB just for the previous segment,
 		 * then update the range to be the remaining
@@ -1193,11 +1198,21 @@
 		 */
 		old_end = tlb->end;
 		tlb->end = addr;
-
-		tlb_flush_mmu(tlb);
-
+		tlb_flush_mmu_tlbonly(tlb);
 		tlb->start = addr;
 		tlb->end = old_end;
+	}
+	pte_unmap_unlock(start_pte, ptl);
+
+	/*
+	 * If we forced a TLB flush (either due to running out of
+	 * batch buffers or because we needed to flush dirty TLB
+	 * entries before releasing the ptl), free the batched
+	 * memory too. Restart if we didn't do everything.
+	 */
+	if (force_flush) {
+		force_flush = 0;
+		tlb_flush_mmu_free(tlb);
 
 		if (addr != end)
 			goto again;