From patchwork Wed Sep 26 11:36:24 2018
Message-ID: <20180926114800.561468800@infradead.org>
Date: Wed, 26 Sep 2018 13:36:24 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: will.deacon@arm.com, aneesh.kumar@linux.vnet.ibm.com, akpm@linux-foundation.org, npiggin@gmail.com
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, peterz@infradead.org, linux@armlinux.org.uk, heiko.carstens@de.ibm.com, riel@surriel.com
Subject: [PATCH 01/18] asm-generic/tlb: Provide a comment
References: <20180926113623.863696043@infradead.org>

Write a comment explaining some of this.

Cc: Nick Piggin
Cc: Andrew Morton
Cc: "Aneesh Kumar K.V"
Cc: Will Deacon
Signed-off-by: Peter Zijlstra (Intel)
---
 include/asm-generic/tlb.h | 119 ++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 116 insertions(+), 3 deletions(-)

--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -22,6 +22,118 @@

 #ifdef CONFIG_MMU

+/*
+ * Generic MMU-gather implementation.
+ *
+ * The mmu_gather data structure is used by the mm code to implement the
+ * correct and efficient ordering of freeing pages and TLB invalidations.
+ *
+ * This correct ordering is:
+ *
+ *  1) unhook page
+ *  2) TLB invalidate page
+ *  3) free page
+ *
+ * That is, we must never free a page before we have ensured there are no live
+ * translations left to it. Otherwise it might be possible to observe (or
+ * worse, change) the page content after it has been reused.
+ *
+ * The mmu_gather API consists of:
+ *
+ *  - tlb_gather_mmu() / tlb_finish_mmu(); start and finish a mmu_gather
+ *
+ *    Finish in particular will issue a (final) TLB invalidate and free
+ *    all (remaining) queued pages.
+ *
+ *  - tlb_start_vma() / tlb_end_vma(); mark the start / end of a VMA
+ *
+ *    Defaults to flushing at tlb_end_vma() to reset the range; helps when
+ *    there are large holes between the VMAs.
+ *
+ *  - tlb_remove_page() / __tlb_remove_page()
+ *  - tlb_remove_page_size() / __tlb_remove_page_size()
+ *
+ *    __tlb_remove_page_size() is the basic primitive that queues a page for
+ *    freeing. __tlb_remove_page() assumes PAGE_SIZE. Both will return a
+ *    boolean indicating if the queue is (now) full and a call to
+ *    tlb_flush_mmu() is required.
+ *
+ *    tlb_remove_page() and tlb_remove_page_size() imply the call to
+ *    tlb_flush_mmu() when required and have no return value.
+ *
+ *  - tlb_remove_check_page_size_change()
+ *
+ *    call before __tlb_remove_page*() to set the current page-size; implies a
+ *    possible tlb_flush_mmu() call.
+ *
+ *  - tlb_flush_mmu() / tlb_flush_mmu_tlbonly() / tlb_flush_mmu_free()
+ *
+ *    tlb_flush_mmu_tlbonly() - does the TLB invalidate (and resets
+ *                              related state, like the range)
+ *
+ *    tlb_flush_mmu_free()    - frees the queued pages; make absolutely
+ *                              sure no additional tlb_remove_page()
+ *                              calls happen between _tlbonly() and this.
+ *
+ *    tlb_flush_mmu()         - the above two calls.
+ *
+ *  - mmu_gather::fullmm
+ *
+ *    A flag set by tlb_gather_mmu() to indicate we're going to free
+ *    the entire mm; this allows a number of optimizations.
+ *
+ *    - We can ignore tlb_{start,end}_vma(); because we don't
+ *      care about ranges. Everything will be shot down.
+ *
+ *    - (RISC) architectures that use ASIDs can cycle to a new ASID
+ *      and delay the invalidation until ASID space runs out.
+ *
+ *  - mmu_gather::need_flush_all
+ *
+ *    A flag that can be set by the arch code if it wants to force
+ *    flush the entire TLB irrespective of the range. For instance
+ *    x86-PAE needs this when changing top-level entries.
+ *
+ * And requires the architecture to provide and implement tlb_flush().
+ *
+ * tlb_flush() may, in addition to the above mentioned mmu_gather fields, make
+ * use of:
+ *
+ *  - mmu_gather::start / mmu_gather::end
+ *
+ *    which provide the range that needs to be flushed to cover the pages to
+ *    be freed.
+ *
+ *  - mmu_gather::freed_tables
+ *
+ *    set when we freed page table pages
+ *
+ *  - tlb_get_unmap_shift() / tlb_get_unmap_size()
+ *
+ *    return the smallest TLB entry size unmapped in this range
+ *
+ * Additionally there are a few opt-in features:
+ *
+ *  HAVE_RCU_TABLE_FREE
+ *
+ *  This provides tlb_remove_table(), to be used instead of tlb_remove_page()
+ *  for page directories (__p*_free_tlb()). This provides separate freeing of
+ *  the page-table pages themselves in a semi-RCU fashion (see comment below).
+ *  Useful if your architecture doesn't use IPIs for remote TLB invalidates
+ *  and therefore doesn't naturally serialize with software page-table walkers.
+ *
+ *  When used, an architecture is expected to provide __tlb_remove_table()
+ *  which does the actual freeing of these pages.
+ *
+ *  HAVE_RCU_TABLE_INVALIDATE
+ *
+ *  This makes HAVE_RCU_TABLE_FREE call tlb_flush_mmu_tlbonly() before freeing
+ *  the page-table pages. Required if you use HAVE_RCU_TABLE_FREE and your
+ *  architecture uses the Linux page-tables natively.
+ *
+ */
+#define HAVE_GENERIC_MMU_GATHER
+
 #ifdef CONFIG_HAVE_RCU_TABLE_FREE
 /*
  * Semi RCU freeing of the page directories.
@@ -89,14 +201,17 @@ struct mmu_gather_batch {
  */
 #define MAX_GATHER_BATCH_COUNT	(10000UL/MAX_GATHER_BATCH)

-/* struct mmu_gather is an opaque type used by the mm code for passing around
+/*
+ * struct mmu_gather is an opaque type used by the mm code for passing around
  * any data needed by arch specific code for tlb_remove_page.
  */
 struct mmu_gather {
 	struct mm_struct	*mm;
+
 #ifdef CONFIG_HAVE_RCU_TABLE_FREE
 	struct mmu_table_batch	*batch;
 #endif
+
 	unsigned long		start;
 	unsigned long		end;
 	/*
@@ -131,8 +246,6 @@ struct mmu_gather {
 	int page_size;
 };

-#define HAVE_GENERIC_MMU_GATHER
-
 void arch_tlb_gather_mmu(struct mmu_gather *tlb,
 	struct mm_struct *mm, unsigned long start, unsigned long end);
 void tlb_flush_mmu(struct mmu_gather *tlb);
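
The ordering contract above is easiest to see from a caller's perspective. The following is a simplified sketch (not part of the patch) of how an unmap path such as zap_pte_range() drives the API; pte-walk details, locking and the huge-page cases are elided:

	struct mmu_gather tlb;

	tlb_gather_mmu(&tlb, mm, start, end);		/* start a gather */
	tlb_start_vma(&tlb, vma);
	for (addr = start; addr < end; addr += PAGE_SIZE) {
		pte = ptep_get_and_clear(mm, addr, ptep);	/* 1) unhook page */
		tlb_remove_tlb_entry(&tlb, ptep, addr);		/* track range for 2) */
		tlb_remove_page(&tlb, pte_page(pte));		/* queue page for 3) */
	}
	tlb_end_vma(&tlb, vma);				/* 2) TLB invalidate the range */
	tlb_finish_mmu(&tlb, start, end);		/* final invalidate, 3) free pages */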

From patchwork Wed Sep 26 11:36:25 2018
Message-ID: <20180926114800.614533601@infradead.org>
Date: Wed, 26 Sep 2018 13:36:25 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: will.deacon@arm.com, aneesh.kumar@linux.vnet.ibm.com, akpm@linux-foundation.org, npiggin@gmail.com
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, peterz@infradead.org, linux@armlinux.org.uk, heiko.carstens@de.ibm.com, riel@surriel.com
Subject: [PATCH 02/18] asm-generic/tlb: Provide HAVE_MMU_GATHER_PAGE_SIZE
References: <20180926113623.863696043@infradead.org>

Move the mmu_gather::page_size things into the generic code instead of the powerpc-specific bits.

Cc: Nick Piggin
Cc: Andrew Morton
Cc: "Aneesh Kumar K.V"
Cc: Will Deacon
Signed-off-by: Peter Zijlstra (Intel)
---
 arch/Kconfig                   |  3 +++
 arch/arm/include/asm/tlb.h     |  3 +--
 arch/ia64/include/asm/tlb.h    |  3 +--
 arch/powerpc/Kconfig           |  1 +
 arch/powerpc/include/asm/tlb.h | 17 -----------------
 arch/s390/include/asm/tlb.h    |  4 +---
 arch/sh/include/asm/tlb.h      |  4 +---
 arch/um/include/asm/tlb.h      |  4 +---
 include/asm-generic/tlb.h      | 32 +++++++++++++++++++-------------
 mm/huge_memory.c               |  4 ++--
 mm/hugetlb.c                   |  2 +-
 mm/madvise.c                   |  2 +-
 mm/memory.c                    |  4 ++--
 mm/mmu_gather.c                |  5 +++++
 14 files changed, 39 insertions(+), 49 deletions(-)

--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -365,6 +365,9 @@ config HAVE_RCU_TABLE_FREE
 config HAVE_RCU_TABLE_INVALIDATE
 	bool

+config HAVE_MMU_GATHER_PAGE_SIZE
+	bool
+
 config ARCH_HAVE_NMI_SAFE_CMPXCHG
 	bool

--- a/arch/arm/include/asm/tlb.h
+++ b/arch/arm/include/asm/tlb.h
@@ -286,8 +286,7 @@ tlb_remove_pmd_tlb_entry(struct mmu_gath

 #define tlb_migrate_finish(mm)		do { } while (0)

-#define tlb_remove_check_page_size_change tlb_remove_check_page_size_change
-static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
+static inline void tlb_change_page_size(struct mmu_gather *tlb,
 						     unsigned int page_size)
 {
 }
--- a/arch/ia64/include/asm/tlb.h
+++ b/arch/ia64/include/asm/tlb.h
@@ -282,8 +282,7 @@ do {							\
 #define tlb_remove_huge_tlb_entry(h, tlb, ptep, address)	\
 	tlb_remove_tlb_entry(tlb, ptep, address)

-#define tlb_remove_check_page_size_change tlb_remove_check_page_size_change
-static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
+static inline void tlb_change_page_size(struct mmu_gather *tlb,
 						     unsigned int page_size)
 {
 }
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -216,6 +216,7 @@ config PPC
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
 	select HAVE_RCU_TABLE_FREE		if SMP
+	select HAVE_MMU_GATHER_PAGE_SIZE
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_RELIABLE_STACKTRACE		if PPC64 && CPU_LITTLE_ENDIAN
 	select HAVE_SYSCALL_TRACEPOINTS
--- a/arch/powerpc/include/asm/tlb.h
+++ b/arch/powerpc/include/asm/tlb.h
@@ -27,7 +27,6 @@
 #define tlb_start_vma(tlb, vma)	do { } while (0)
 #define tlb_end_vma(tlb, vma)	do { } while (0)
 #define __tlb_remove_tlb_entry	__tlb_remove_tlb_entry
-#define tlb_remove_check_page_size_change tlb_remove_check_page_size_change

 extern void tlb_flush(struct mmu_gather *tlb);

@@ -46,22 +45,6 @@ static inline void __tlb_remove_tlb_entr
 #endif
 }

-static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
-						     unsigned int page_size)
-{
-	if (!tlb->page_size)
-		tlb->page_size = page_size;
-	else if (tlb->page_size != page_size) {
-		if (!tlb->fullmm)
-			tlb_flush_mmu(tlb);
-		/*
-		 * update the page size after flush for the new
-		 * mmu_gather.
-		 */
-		tlb->page_size = page_size;
-	}
-}
-
 #ifdef CONFIG_SMP
 static inline int mm_is_core_local(struct mm_struct *mm)
 {
--- a/arch/s390/include/asm/tlb.h
+++ b/arch/s390/include/asm/tlb.h
@@ -180,9 +180,7 @@ static inline void pud_free_tlb(struct m
 #define tlb_remove_huge_tlb_entry(h, tlb, ptep, address)	\
 	tlb_remove_tlb_entry(tlb, ptep, address)

-#define tlb_remove_check_page_size_change tlb_remove_check_page_size_change
-static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
-						     unsigned int page_size)
+static inline void tlb_change_page_size(struct mmu_gather *tlb, unsigned int page_size)
 {
 }
--- a/arch/sh/include/asm/tlb.h
+++ b/arch/sh/include/asm/tlb.h
@@ -127,9 +127,7 @@ static inline void tlb_remove_page_size(
 	return tlb_remove_page(tlb, page);
 }

-#define tlb_remove_check_page_size_change tlb_remove_check_page_size_change
-static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
-						     unsigned int page_size)
+static inline void tlb_change_page_size(struct mmu_gather *tlb, unsigned int page_size)
 {
 }
--- a/arch/um/include/asm/tlb.h
+++ b/arch/um/include/asm/tlb.h
@@ -146,9 +146,7 @@ static inline void tlb_remove_page_size(
 #define tlb_remove_huge_tlb_entry(h, tlb, ptep, address)	\
 	tlb_remove_tlb_entry(tlb, ptep, address)

-#define tlb_remove_check_page_size_change tlb_remove_check_page_size_change
-static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
-						     unsigned int page_size)
+static inline void tlb_change_page_size(struct mmu_gather *tlb, unsigned int page_size)
 {
 }
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -61,7 +61,7 @@
  *    tlb_remove_page() and tlb_remove_page_size() imply the call to
  *    tlb_flush_mmu() when required and have no return value.
  *
- *  - tlb_remove_check_page_size_change()
+ *  - tlb_change_page_size()
  *
  *    call before __tlb_remove_page*() to set the current page-size; implies a
  *    possible tlb_flush_mmu() call.
@@ -110,6 +110,11 @@
  *
  * Additionally there are a few opt-in features:
  *
+ *  HAVE_MMU_GATHER_PAGE_SIZE
+ *
+ *  This ensures we call tlb_flush() every time tlb_change_page_size() actually
+ *  changes the size and provides mmu_gather::page_size to tlb_flush().
+ *
  *  HAVE_RCU_TABLE_FREE
  *
  *  This provides tlb_remove_table(), to be used instead of tlb_remove_page()
@@ -235,11 +240,15 @@ struct mmu_gather {
 	unsigned int		cleared_puds : 1;
 	unsigned int		cleared_p4ds : 1;

+	unsigned int		batch_count;
+
 	struct mmu_gather_batch *active;
 	struct mmu_gather_batch	local;
 	struct page		*__pages[MMU_GATHER_BUNDLE];
-	unsigned int		batch_count;
-	int			page_size;
+
+#ifdef CONFIG_HAVE_MMU_GATHER_PAGE_SIZE
+	unsigned int		page_size;
+#endif
 };

 void arch_tlb_gather_mmu(struct mmu_gather *tlb,
@@ -305,21 +314,18 @@ static inline void tlb_remove_page(struc
 	return tlb_remove_page_size(tlb, page, PAGE_SIZE);
 }

-#ifndef tlb_remove_check_page_size_change
-#define tlb_remove_check_page_size_change tlb_remove_check_page_size_change
-static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
+static inline void tlb_change_page_size(struct mmu_gather *tlb,
 					       unsigned int page_size)
 {
-	/*
-	 * We don't care about page size change, just update
-	 * mmu_gather page size here so that debug checks
-	 * doesn't throw false warning.
-	 */
-#ifdef CONFIG_DEBUG_VM
+#ifdef CONFIG_HAVE_MMU_GATHER_PAGE_SIZE
+	if (tlb->page_size && tlb->page_size != page_size) {
+		if (!tlb->fullmm)
+			tlb_flush_mmu(tlb);
+	}
+
 	tlb->page_size = page_size;
 #endif
 }
-#endif

 static inline unsigned long tlb_get_unmap_shift(struct mmu_gather *tlb)
 {
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1617,7 +1617,7 @@ bool madvise_free_huge_pmd(struct mmu_ga
 	struct mm_struct *mm = tlb->mm;
 	bool ret = false;

-	tlb_remove_check_page_size_change(tlb, HPAGE_PMD_SIZE);
+	tlb_change_page_size(tlb, HPAGE_PMD_SIZE);

 	ptl = pmd_trans_huge_lock(pmd, vma);
 	if (!ptl)
@@ -1693,7 +1693,7 @@ int zap_huge_pmd(struct mmu_gather *tlb,
 	pmd_t orig_pmd;
 	spinlock_t *ptl;

-	tlb_remove_check_page_size_change(tlb, HPAGE_PMD_SIZE);
+	tlb_change_page_size(tlb, HPAGE_PMD_SIZE);

 	ptl = __pmd_trans_huge_lock(pmd, vma);
 	if (!ptl)
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3337,7 +3337,7 @@ void __unmap_hugepage_range(struct mmu_g
 	 * This is a hugetlb vma, all the pte entries should point
 	 * to huge page.
 	 */
-	tlb_remove_check_page_size_change(tlb, sz);
+	tlb_change_page_size(tlb, sz);
 	tlb_start_vma(tlb, vma);
 	mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
 	address = start;
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -328,7 +328,7 @@ static int madvise_free_pte_range(pmd_t
 	if (pmd_trans_unstable(pmd))
 		return 0;

-	tlb_remove_check_page_size_change(tlb, PAGE_SIZE);
+	tlb_change_page_size(tlb, PAGE_SIZE);
 	orig_pte = pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
 	flush_tlb_batched_pending(mm);
 	arch_enter_lazy_mmu_mode();
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -355,7 +355,7 @@ void free_pgd_range(struct mmu_gather *t
 	 * We add page table cache pages with PAGE_SIZE,
 	 * (see pte_free_tlb()), flush the tlb if we need
 	 */
-	tlb_remove_check_page_size_change(tlb, PAGE_SIZE);
+	tlb_change_page_size(tlb, PAGE_SIZE);
 	pgd = pgd_offset(tlb->mm, addr);
 	do {
 		next = pgd_addr_end(addr, end);
@@ -1046,7 +1046,7 @@ static unsigned long zap_pte_range(struc
 	pte_t *pte;
 	swp_entry_t entry;

-	tlb_remove_check_page_size_change(tlb, PAGE_SIZE);
+	tlb_change_page_size(tlb, PAGE_SIZE);
 again:
 	init_rss_vec(rss);
 	start_pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -58,7 +58,9 @@ void arch_tlb_gather_mmu(struct mmu_gath
 #ifdef CONFIG_HAVE_RCU_TABLE_FREE
 	tlb->batch = NULL;
 #endif
+#ifdef CONFIG_HAVE_MMU_GATHER_PAGE_SIZE
 	tlb->page_size = 0;
+#endif

 	__tlb_reset_range(tlb);
 }
@@ -121,7 +123,10 @@ bool __tlb_remove_page_size(struct mmu_g
 	struct mmu_gather_batch *batch;

 	VM_BUG_ON(!tlb->end);
+
+#ifdef CONFIG_HAVE_MMU_GATHER_PAGE_SIZE
 	VM_WARN_ON(tlb->page_size != page_size);
+#endif

 	batch = tlb->active;
 	/*
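
The behaviour this option buys is easiest to see with a mixed-size zap under a single gather; a sketch assuming CONFIG_HAVE_MMU_GATHER_PAGE_SIZE=y (huge_page and small_page are illustrative struct page pointers, not from the patch):

	tlb_change_page_size(tlb, HPAGE_PMD_SIZE);	/* page_size: 0 -> 2M */
	__tlb_remove_page_size(tlb, huge_page, HPAGE_PMD_SIZE);

	/*
	 * The size change 2M -> 4K forces tlb_flush_mmu(), so a batch
	 * never mixes page sizes and the VM_WARN_ON() above holds.
	 */
	tlb_change_page_size(tlb, PAGE_SIZE);
	__tlb_remove_page_size(tlb, small_page, PAGE_SIZE);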

From patchwork Wed Sep 26 11:36:26 2018
Message-ID: <20180926114800.666150500@infradead.org>
Date: Wed, 26 Sep 2018 13:36:26 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: will.deacon@arm.com, aneesh.kumar@linux.vnet.ibm.com, akpm@linux-foundation.org, npiggin@gmail.com
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, peterz@infradead.org, linux@armlinux.org.uk, heiko.carstens@de.ibm.com, riel@surriel.com, Dave Hansen
Subject: [PATCH 03/18] x86/mm: Page size aware flush_tlb_mm_range()
References: <20180926113623.863696043@infradead.org>

Use the new tlb_get_unmap_shift() to determine the stride of the INVLPG loop.

Cc: Nick Piggin
Cc: Andrew Morton
Cc: "Aneesh Kumar K.V"
Cc: Will Deacon
Cc: Dave Hansen
Signed-off-by: Peter Zijlstra (Intel)
---
 arch/x86/include/asm/tlb.h      | 21 ++++++++++++++-------
 arch/x86/include/asm/tlbflush.h | 12 ++++++++----
 arch/x86/mm/tlb.c               | 17 ++++++++---------
 mm/pgtable-generic.c            |  1 +
 4 files changed, 31 insertions(+), 20 deletions(-)

--- a/arch/x86/include/asm/tlb.h
+++ b/arch/x86/include/asm/tlb.h
@@ -6,16 +6,23 @@
 #define tlb_end_vma(tlb, vma) do { } while (0)
 #define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0)

-#define tlb_flush(tlb)							\
-{									\
-	if (!tlb->fullmm && !tlb->need_flush_all)			\
-		flush_tlb_mm_range(tlb->mm, tlb->start, tlb->end, 0UL);	\
-	else								\
-		flush_tlb_mm_range(tlb->mm, 0UL, TLB_FLUSH_ALL, 0UL);	\
-}
+static inline void tlb_flush(struct mmu_gather *tlb);

 #include <asm-generic/tlb.h>

+static inline void tlb_flush(struct mmu_gather *tlb)
+{
+	unsigned long start = 0UL, end = TLB_FLUSH_ALL;
+	unsigned int stride_shift = tlb_get_unmap_shift(tlb);
+
+	if (!tlb->fullmm && !tlb->need_flush_all) {
+		start = tlb->start;
+		end = tlb->end;
+	}
+
+	flush_tlb_mm_range(tlb->mm, start, end, stride_shift);
+}
+
 /*
  * While x86 architecture in general requires an IPI to perform TLB
  * shootdown, enablement code for several hypervisors overrides
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -547,23 +547,27 @@ struct flush_tlb_info {
 	unsigned long		start;
 	unsigned long		end;
 	u64			new_tlb_gen;
+	unsigned int		stride_shift;
 };

 #define local_flush_tlb() __flush_tlb()

 #define flush_tlb_mm(mm) flush_tlb_mm_range(mm, 0UL, TLB_FLUSH_ALL, 0UL)

-#define flush_tlb_range(vma, start, end)			\
-	flush_tlb_mm_range(vma->vm_mm, start, end, vma->vm_flags)
+#define flush_tlb_range(vma, start, end)			\
+	flush_tlb_mm_range((vma)->vm_mm, start, end,		\
+			   ((vma)->vm_flags & VM_HUGETLB)	\
+				? huge_page_shift(hstate_vma(vma))	\
+				: PAGE_SHIFT)

 extern void flush_tlb_all(void);
 extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
-				unsigned long end, unsigned long vmflag);
+				unsigned long end, unsigned int stride_shift);
 extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);

 static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long a)
 {
-	flush_tlb_mm_range(vma->vm_mm, a, a + PAGE_SIZE, VM_NONE);
+	flush_tlb_mm_range(vma->vm_mm, a, a + PAGE_SIZE, PAGE_SHIFT);
 }

 void native_flush_tlb_others(const struct cpumask *cpumask,
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -528,17 +528,16 @@ static void flush_tlb_func_common(const
 	    f->new_tlb_gen == local_tlb_gen + 1 &&
 	    f->new_tlb_gen == mm_tlb_gen) {
 		/* Partial flush */
-		unsigned long addr;
-		unsigned long nr_pages = (f->end - f->start) >> PAGE_SHIFT;
+		unsigned long nr_invalidate = (f->end - f->start) >> f->stride_shift;
+		unsigned long addr = f->start;

-		addr = f->start;
 		while (addr < f->end) {
 			__flush_tlb_one_user(addr);
-			addr += PAGE_SIZE;
+			addr += 1UL << f->stride_shift;
 		}
 		if (local)
-			count_vm_tlb_events(NR_TLB_LOCAL_FLUSH_ONE, nr_pages);
-		trace_tlb_flush(reason, nr_pages);
+			count_vm_tlb_events(NR_TLB_LOCAL_FLUSH_ONE, nr_invalidate);
+		trace_tlb_flush(reason, nr_invalidate);
 	} else {
 		/* Full flush. */
 		local_flush_tlb();
@@ -623,12 +622,13 @@ void native_flush_tlb_others(const struc
 static unsigned long tlb_single_page_flush_ceiling __read_mostly = 33;

 void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
-				unsigned long end, unsigned long vmflag)
+				unsigned long end, unsigned int stride_shift)
 {
 	int cpu;

 	struct flush_tlb_info info __aligned(SMP_CACHE_BYTES) = {
 		.mm = mm,
+		.stride_shift = stride_shift,
 	};

 	cpu = get_cpu();
@@ -638,8 +638,7 @@ void flush_tlb_mm_range(struct mm_struct

 	/* Should we flush just the requested range? */
 	if ((end != TLB_FLUSH_ALL) &&
-	    !(vmflag & VM_HUGETLB) &&
-	    ((end - start) >> PAGE_SHIFT) <= tlb_single_page_flush_ceiling) {
+	    ((end - start) >> stride_shift) <= tlb_single_page_flush_ceiling) {
 		info.start = start;
 		info.end = end;
 	} else {
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -8,6 +8,7 @@
  */

 #include <linux/pagemap.h>
+#include <linux/hugetlb.h>
 #include <asm/tlb.h>
 #include <asm-generic/pgtable.h>
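
A worked example of what the stride buys (numbers illustrative, not from the patch): zapping a 64MB hugetlb region with 2MB entries, i.e. stride_shift = 21.

	/*
	 * Old: hugetlb ranges (vmflag & VM_HUGETLB) always took the full
	 *      flush path, and 4K accounting would give 16384 > 33 anyway.
	 * New: (end - start) >> stride_shift = 32 <= 33, so the partial
	 *      path issues just 32 INVLPGs, one per 2MB entry.
	 */
	if (((end - start) >> stride_shift) <= tlb_single_page_flush_ceiling) {
		info.start = start;	/* partial flush with 2MB stride */
		info.end = end;
	}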

From patchwork Wed Sep 26 11:36:27 2018
Message-ID: <20180926114800.718627623@infradead.org>
Date: Wed, 26 Sep 2018 13:36:27 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: will.deacon@arm.com, aneesh.kumar@linux.vnet.ibm.com, akpm@linux-foundation.org, npiggin@gmail.com
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, peterz@infradead.org, linux@armlinux.org.uk, heiko.carstens@de.ibm.com, riel@surriel.com, David Miller, Guan Xuetao
Subject: [PATCH 04/18] asm-generic/tlb: Provide generic VIPT cache flush
References: <20180926113623.863696043@infradead.org>

The one obvious thing SH and ARM want is a sensible default for
tlb_start_vma(). (also: https://lkml.org/lkml/2004/1/15/6 )

Avoid all VIPT architectures providing their own tlb_start_vma()
implementation and rely on architectures to provide a no-op
flush_cache_range() when it is not relevant.

The below makes tlb_start_vma() default to flush_cache_range(), which
should be right and sufficient. The only exceptions that I found were
(oddly):

 - m68k-mmu
 - sparc64
 - unicore

Those architectures appear to have flush_cache_range(), but their
current tlb_start_vma() does not call it.

Cc: "Aneesh Kumar K.V"
Cc: Andrew Morton
Cc: Nick Piggin
Cc: David Miller
Cc: Guan Xuetao
Acked-by: Will Deacon
Signed-off-by: Peter Zijlstra (Intel)
---
 arch/arc/include/asm/tlb.h      |  9 ---------
 arch/mips/include/asm/tlb.h     |  9 ---------
 arch/nds32/include/asm/tlb.h    |  6 ------
 arch/nios2/include/asm/tlb.h    | 10 ----------
 arch/parisc/include/asm/tlb.h   |  5 -----
 arch/sparc/include/asm/tlb_32.h |  5 -----
 arch/xtensa/include/asm/tlb.h   |  9 ---------
 include/asm-generic/tlb.h       | 19 +++++++++++--------
 8 files changed, 11 insertions(+), 61 deletions(-)

--- a/arch/arc/include/asm/tlb.h
+++ b/arch/arc/include/asm/tlb.h
@@ -23,15 +23,6 @@ do {						\
  *
  * Note, read http://lkml.org/lkml/2004/1/15/6
  */
-#ifndef CONFIG_ARC_CACHE_VIPT_ALIASING
-#define tlb_start_vma(tlb, vma)
-#else
-#define tlb_start_vma(tlb, vma)					\
-do {								\
-	if (!tlb->fullmm)					\
-		flush_cache_range(vma, vma->vm_start, vma->vm_end); \
-} while(0)
-#endif

 #define tlb_end_vma(tlb, vma)					\
 do {								\
--- a/arch/mips/include/asm/tlb.h
+++ b/arch/mips/include/asm/tlb.h
@@ -5,15 +5,6 @@
 #include <asm/cpu-features.h>
 #include <asm/mipsregs.h>

-/*
- * MIPS doesn't need any special per-pte or per-vma handling, except
- * we need to flush cache for area to be unmapped.
- */
-#define tlb_start_vma(tlb, vma)					\
-	do {							\
-		if (!tlb->fullmm)				\
-			flush_cache_range(vma, vma->vm_start, vma->vm_end); \
-	} while (0)
 #define tlb_end_vma(tlb, vma) do { } while (0)
 #define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0)
--- a/arch/nds32/include/asm/tlb.h
+++ b/arch/nds32/include/asm/tlb.h
@@ -4,12 +4,6 @@
 #ifndef __ASMNDS32_TLB_H
 #define __ASMNDS32_TLB_H

-#define tlb_start_vma(tlb,vma)					\
-	do {							\
-		if (!tlb->fullmm)				\
-			flush_cache_range(vma, vma->vm_start, vma->vm_end); \
-	} while (0)
-
 #define tlb_end_vma(tlb,vma)				\
 	do {						\
 		if(!tlb->fullmm)			\
--- a/arch/nios2/include/asm/tlb.h
+++ b/arch/nios2/include/asm/tlb.h
@@ -15,16 +15,6 @@

 extern void set_mmu_pid(unsigned long pid);

-/*
- * NiosII doesn't need any special per-pte or per-vma handling, except
- * we need to flush cache for the area to be unmapped.
- */
-#define tlb_start_vma(tlb, vma)					\
-	do {							\
-		if (!tlb->fullmm)				\
-			flush_cache_range(vma, vma->vm_start, vma->vm_end); \
-	} while (0)
-
 #define tlb_end_vma(tlb, vma)	do { } while (0)
 #define __tlb_remove_tlb_entry(tlb, ptep, address)	do { } while (0)
--- a/arch/parisc/include/asm/tlb.h
+++ b/arch/parisc/include/asm/tlb.h
@@ -7,11 +7,6 @@ do {	if ((tlb)->fullmm)		\
 		flush_tlb_mm((tlb)->mm);\
 } while (0)

-#define tlb_start_vma(tlb, vma) \
-do {	if (!(tlb)->fullmm)	\
-		flush_cache_range(vma, vma->vm_start, vma->vm_end); \
-} while (0)
-
 #define tlb_end_vma(tlb, vma)	\
 do {	if (!(tlb)->fullmm)	\
 		flush_tlb_range(vma, vma->vm_start, vma->vm_end); \
--- a/arch/sparc/include/asm/tlb_32.h
+++ b/arch/sparc/include/asm/tlb_32.h
@@ -2,11 +2,6 @@
 #ifndef _SPARC_TLB_H
 #define _SPARC_TLB_H

-#define tlb_start_vma(tlb, vma) \
-do {								\
-	flush_cache_range(vma, vma->vm_start, vma->vm_end);	\
-} while (0)
-
 #define tlb_end_vma(tlb, vma) \
 do {								\
 	flush_tlb_range(vma, vma->vm_start, vma->vm_end);	\
--- a/arch/xtensa/include/asm/tlb.h
+++ b/arch/xtensa/include/asm/tlb.h
@@ -16,19 +16,10 @@

 #if (DCACHE_WAY_SIZE <= PAGE_SIZE)

-/* Note, read http://lkml.org/lkml/2004/1/15/6 */
-
-# define tlb_start_vma(tlb,vma)			do { } while (0)
 # define tlb_end_vma(tlb,vma)			do { } while (0)

 #else

-# define tlb_start_vma(tlb, vma)				\
-	do {							\
-		if (!tlb->fullmm)				\
-			flush_cache_range(vma, vma->vm_start, vma->vm_end); \
-	} while(0)
-
 # define tlb_end_vma(tlb, vma)					\
 	do {							\
 		if (!tlb->fullmm)				\
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -19,6 +19,7 @@
 #include <linux/swap.h>
 #include <asm/pgalloc.h>
 #include <asm/tlbflush.h>
+#include <asm/cacheflush.h>

 #ifdef CONFIG_MMU

@@ -351,17 +352,19 @@ static inline unsigned long tlb_get_unma
  * the vmas are adjusted to only cover the region to be torn down.
 */
 #ifndef tlb_start_vma
-#define tlb_start_vma(tlb, vma) do { } while (0)
+#define tlb_start_vma(tlb, vma)						\
+do {									\
+	if (!tlb->fullmm)						\
+		flush_cache_range(vma, vma->vm_start, vma->vm_end);	\
+} while (0)
 #endif

-#define __tlb_end_vma(tlb, vma)					\
-	do {							\
-		if (!tlb->fullmm)				\
-			tlb_flush_mmu_tlbonly(tlb);		\
-	} while (0)
-
 #ifndef tlb_end_vma
-#define tlb_end_vma	__tlb_end_vma
+#define tlb_end_vma(tlb, vma)						\
+do {									\
+	if (!tlb->fullmm)						\
+		tlb_flush_mmu_tlbonly(tlb);				\
+} while (0)
 #endif

 #ifndef __tlb_remove_tlb_entry
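
With this default, an architecture with coherent (non-aliasing) caches opts out by making the cache flush free rather than by overriding tlb_start_vma(); a sketch of the usual no-op stub such architectures provide (illustrative, mirroring the asm-generic/cacheflush.h style):

	/* in the arch's asm/cacheflush.h: nothing to maintain on unmap */
	#define flush_cache_range(vma, start, end)	do { } while (0)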

From patchwork Wed Sep 26 11:36:28 2018
Message-ID: <20180926114800.770817616@infradead.org>
Date: Wed, 26 Sep 2018 13:36:28 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: will.deacon@arm.com, aneesh.kumar@linux.vnet.ibm.com, akpm@linux-foundation.org, npiggin@gmail.com
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, peterz@infradead.org, linux@armlinux.org.uk, heiko.carstens@de.ibm.com, riel@surriel.com
Subject: [PATCH 05/18] asm-generic/tlb: Provide generic tlb_flush
References: <20180926113623.863696043@infradead.org>

Provide a generic tlb_flush() implementation that relies on
flush_tlb_range(). This is a little awkward because flush_tlb_range()
assumes a VMA for range invalidation, but we no longer have one.

Audit of all flush_tlb_range() implementations shows only vma->vm_mm
and vma->vm_flags are used, and of the latter only VM_EXEC (I-TLB
invalidates) and VM_HUGETLB (large TLB invalidate) are used.

Therefore, track VM_EXEC and VM_HUGETLB in two more bits, and create a
'fake' VMA.

This allows architectures that have a reasonably efficient
flush_tlb_range() to not require any additional effort.
Cc: Nick Piggin
Cc: Andrew Morton
Cc: "Aneesh Kumar K.V"
Cc: Will Deacon
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Will Deacon
---
 arch/arm64/include/asm/tlb.h   |    1 
 arch/powerpc/include/asm/tlb.h |    1 
 arch/riscv/include/asm/tlb.h   |    1 
 arch/x86/include/asm/tlb.h     |    1 
 include/asm-generic/tlb.h      |   80 +++++++++++++++++++++++++++++++++++------
 5 files changed, 74 insertions(+), 10 deletions(-)

--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -27,6 +27,7 @@ static inline void __tlb_remove_table(vo
 	free_page_and_swap_cache((struct page *)_table);
 }
 
+#define tlb_flush tlb_flush
 static void tlb_flush(struct mmu_gather *tlb);
 
 #include <asm-generic/tlb.h>
--- a/arch/powerpc/include/asm/tlb.h
+++ b/arch/powerpc/include/asm/tlb.h
@@ -28,6 +28,7 @@
 #define tlb_end_vma(tlb, vma)	do { } while (0)
 #define __tlb_remove_tlb_entry	__tlb_remove_tlb_entry
 
+#define tlb_flush tlb_flush
 extern void tlb_flush(struct mmu_gather *tlb);
 
 /* Get the generic bits... */
--- a/arch/riscv/include/asm/tlb.h
+++ b/arch/riscv/include/asm/tlb.h
@@ -18,6 +18,7 @@ struct mmu_gather;
 
 static void tlb_flush(struct mmu_gather *tlb);
 
+#define tlb_flush tlb_flush
 #include <asm-generic/tlb.h>
 
 static inline void tlb_flush(struct mmu_gather *tlb)
--- a/arch/x86/include/asm/tlb.h
+++ b/arch/x86/include/asm/tlb.h
@@ -6,6 +6,7 @@
 #define tlb_end_vma(tlb, vma) do { } while (0)
 #define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0)
 
+#define tlb_flush tlb_flush
 static inline void tlb_flush(struct mmu_gather *tlb);
 
 #include <asm-generic/tlb.h>
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -241,6 +241,12 @@ struct mmu_gather {
 	unsigned int		cleared_puds : 1;
 	unsigned int		cleared_p4ds : 1;
 
+	/*
+	 * tracks VM_EXEC | VM_HUGETLB in tlb_start_vma
+	 */
+	unsigned int		vma_exec : 1;
+	unsigned int		vma_huge : 1;
+
 	unsigned int		batch_count;
 
 	struct mmu_gather_batch	*active;
@@ -282,7 +288,35 @@ static inline void __tlb_reset_range(str
 	tlb->cleared_pmds = 0;
 	tlb->cleared_puds = 0;
 	tlb->cleared_p4ds = 0;
+	/*
+	 * Do not reset mmu_gather::vma_* fields here, we do not
+	 * call into tlb_start_vma() again to set them if there is an
+	 * intermediate flush.
+	 */
+}
+
+#ifndef tlb_flush
+
+#if defined(tlb_start_vma) || defined(tlb_end_vma)
+#error Default tlb_flush() relies on default tlb_start_vma() and tlb_end_vma()
+#endif
+
+#define tlb_flush tlb_flush
+static inline void tlb_flush(struct mmu_gather *tlb)
+{
+	if (tlb->fullmm || tlb->need_flush_all) {
+		flush_tlb_mm(tlb->mm);
+	} else {
+		struct vm_area_struct vma = {
+			.vm_mm = tlb->mm,
+			.vm_flags = (tlb->vma_exec ? VM_EXEC : 0) |
+				    (tlb->vma_huge ? VM_HUGETLB : 0),
+		};
+
+		flush_tlb_range(&vma, tlb->start, tlb->end);
+	}
 }
+#endif
 
 static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
 {
@@ -353,19 +387,45 @@ static inline unsigned long tlb_get_unma
  * the vmas are adjusted to only cover the region to be torn down.
  */
 #ifndef tlb_start_vma
-#define tlb_start_vma(tlb, vma)						\
-do {									\
-	if (!tlb->fullmm)						\
-		flush_cache_range(vma, vma->vm_start, vma->vm_end);	\
-} while (0)
+#define tlb_start_vma tlb_start_vma
+static inline void tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
+{
+	if (tlb->fullmm)
+		return;
+
+	/*
+	 * flush_tlb_range() implementations that look at VM_HUGETLB (tile,
+	 * mips-4k) flush only large pages.
+	 *
+	 * flush_tlb_range() implementations that flush I-TLB also flush D-TLB
+	 * (tile, xtensa, arm), so it's ok to just add VM_EXEC to an existing
+	 * range.
+	 *
+	 * We rely on tlb_end_vma() to issue a flush, such that when we reset
+	 * these values the batch is empty.
+	 */
+	tlb->vma_huge = !!(vma->vm_flags & VM_HUGETLB);
+	tlb->vma_exec = !!(vma->vm_flags & VM_EXEC);
+
+	flush_cache_range(vma, vma->vm_start, vma->vm_end);
+}
 #endif
 
 #ifndef tlb_end_vma
-#define tlb_end_vma(tlb, vma)						\
-do {									\
-	if (!tlb->fullmm)						\
-		tlb_flush_mmu_tlbonly(tlb);				\
-} while (0)
+#define tlb_end_vma tlb_end_vma
+static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
+{
+	if (tlb->fullmm)
+		return;
+
+	/*
+	 * Do a TLB flush and reset the range at VMA boundaries; this avoids
+	 * the ranges growing with the unused space between consecutive VMAs,
+	 * but also the mmu_gather::vma_* flags from tlb_start_vma() rely on
+	 * this.
+	 */
+	tlb_flush_mmu_tlbonly(tlb);
+}
 #endif
 
 #ifndef __tlb_remove_tlb_entry
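To see the shape of the new default in isolation, here is a minimal
stand-alone user-space model of the same decision logic. struct gather
and the two print-stub flush functions are illustrative stand-ins for
struct mmu_gather and the architecture hooks, not kernel API; only the
fullmm/need_flush_all test and the two tracked bits mirror the patch.

#include <stdbool.h>
#include <stdio.h>

#define VM_EXEC		0x1
#define VM_HUGETLB	0x2

struct fake_vma {
	unsigned long vm_flags;
};

struct gather {			/* stands in for struct mmu_gather */
	bool fullmm, need_flush_all;
	bool vma_exec, vma_huge;
	unsigned long start, end;
};

static void model_flush_tlb_mm(void)
{
	puts("flush the whole mm");
}

static void model_flush_tlb_range(struct fake_vma *vma,
				  unsigned long start, unsigned long end)
{
	printf("flush [%#lx, %#lx) exec=%d huge=%d\n", start, end,
	       !!(vma->vm_flags & VM_EXEC), !!(vma->vm_flags & VM_HUGETLB));
}

static void model_tlb_flush(struct gather *tlb)
{
	if (tlb->fullmm || tlb->need_flush_all) {
		model_flush_tlb_mm();
	} else {
		/* the 'fake' VMA: only the two audited fields matter */
		struct fake_vma vma = {
			.vm_flags = (tlb->vma_exec ? VM_EXEC : 0) |
				    (tlb->vma_huge ? VM_HUGETLB : 0),
		};

		model_flush_tlb_range(&vma, tlb->start, tlb->end);
	}
}

int main(void)
{
	struct gather tlb = {
		.vma_exec = true,
		.start = 0x1000,
		.end = 0x5000,
	};

	model_tlb_flush(&tlb);	/* ranged flush, I-TLB included */
	tlb.fullmm = true;
	model_tlb_flush(&tlb);	/* full-mm flush */
	return 0;
}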
From patchwork Wed Sep 26 11:36:29 2018
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 10615805
Message-ID: <20180926114800.822722731@infradead.org>
User-Agent: quilt/0.65
Date: Wed, 26 Sep 2018 13:36:29 +0200
From: Peter Zijlstra
To: will.deacon@arm.com, aneesh.kumar@linux.vnet.ibm.com,
 akpm@linux-foundation.org, npiggin@gmail.com
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, peterz@infradead.org, linux@armlinux.org.uk,
 heiko.carstens@de.ibm.com, riel@surriel.com
Subject: [PATCH 06/18] asm-generic/tlb: Conditionally provide tlb_migrate_finish()
References: <20180926113623.863696043@infradead.org>

Needed for ia64 -- alternatively we drop the entire hook.

Cc: Will Deacon
Cc: "Aneesh Kumar K.V"
Cc: Andrew Morton
Cc: Nick Piggin
Signed-off-by: Peter Zijlstra (Intel)
---
 include/asm-generic/tlb.h |    2 ++
 1 file changed, 2 insertions(+)

--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -539,6 +539,8 @@ static inline void tlb_end_vma(struct mm
 
 #endif /* CONFIG_MMU */
 
+#ifndef tlb_migrate_finish
 #define tlb_migrate_finish(mm) do {} while (0)
+#endif
 
 #endif /* _ASM_GENERIC__TLB_H */
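The guard follows the usual asm-generic override idiom: an architecture
that defines the macro first wins; everyone else gets the no-op default.
A small stand-alone illustration of the pattern (the hook name is reused
here purely for demonstration, nothing below is kernel code):

#include <stdio.h>

/*
 * An architecture header would provide its own definition first, e.g.
 * #define tlb_migrate_finish(mm) my_arch_tlb_migrate_finish(mm)
 */
#ifndef tlb_migrate_finish
#define tlb_migrate_finish(mm) do { } while (0)	/* generic no-op */
#endif

int main(void)
{
	int mm = 0;

	tlb_migrate_finish(&mm);	/* expands to the no-op default here */
	(void)mm;
	puts("generic default used");
	return 0;
}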
From patchwork Wed Sep 26 11:36:30 2018
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 10615823
Message-ID: <20180926114800.875099964@infradead.org>
User-Agent: quilt/0.65
Date: Wed, 26 Sep 2018 13:36:30 +0200
From: Peter Zijlstra
To: will.deacon@arm.com, aneesh.kumar@linux.vnet.ibm.com,
 akpm@linux-foundation.org, npiggin@gmail.com
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, peterz@infradead.org, linux@armlinux.org.uk,
 heiko.carstens@de.ibm.com, riel@surriel.com
Subject: [PATCH 07/18] asm-generic/tlb: Invert HAVE_RCU_TABLE_INVALIDATE
References: <20180926113623.863696043@infradead.org>

Make issuing a TLB invalidate for page-table pages the normal case.

The reason is twofold:

 - too many invalidates is safer than too few;
 - most architectures use the Linux page-tables natively and thus
   require this.

Make it an opt-out, instead of an opt-in.
Acked-by: Will Deacon
Signed-off-by: Peter Zijlstra (Intel)
---
 arch/Kconfig              |    2 +-
 arch/arm64/Kconfig        |    1 -
 arch/powerpc/Kconfig      |    1 +
 arch/sparc/Kconfig        |    1 +
 arch/x86/Kconfig          |    1 -
 include/asm-generic/tlb.h |    9 +++++----
 mm/mmu_gather.c           |    2 +-
 7 files changed, 9 insertions(+), 8 deletions(-)

--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -362,7 +362,7 @@ config HAVE_ARCH_JUMP_LABEL
 config HAVE_RCU_TABLE_FREE
 	bool
 
-config HAVE_RCU_TABLE_INVALIDATE
+config HAVE_RCU_TABLE_NO_INVALIDATE
 	bool
 
 config HAVE_MMU_GATHER_PAGE_SIZE
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -142,7 +142,6 @@ config ARM64
 	select HAVE_PERF_USER_STACK_DUMP
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_RCU_TABLE_FREE
-	select HAVE_RCU_TABLE_INVALIDATE
 	select HAVE_RSEQ
 	select HAVE_STACKPROTECTOR
 	select HAVE_SYSCALL_TRACEPOINTS
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -216,6 +216,7 @@ config PPC
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
 	select HAVE_RCU_TABLE_FREE		if SMP
+	select HAVE_RCU_TABLE_NO_INVALIDATE	if HAVE_RCU_TABLE_FREE
 	select HAVE_MMU_GATHER_PAGE_SIZE
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_RELIABLE_STACKTRACE		if PPC64 && CPU_LITTLE_ENDIAN
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -64,6 +64,7 @@ config SPARC64
 	select HAVE_KRETPROBES
 	select HAVE_KPROBES
 	select HAVE_RCU_TABLE_FREE if SMP
+	select HAVE_RCU_TABLE_NO_INVALIDATE if HAVE_RCU_TABLE_FREE
 	select HAVE_MEMBLOCK_NODE_MAP
 	select HAVE_ARCH_TRANSPARENT_HUGEPAGE
 	select HAVE_DYNAMIC_FTRACE
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -181,7 +181,6 @@ config X86
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
 	select HAVE_RCU_TABLE_FREE		if PARAVIRT
-	select HAVE_RCU_TABLE_INVALIDATE	if HAVE_RCU_TABLE_FREE
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_RELIABLE_STACKTRACE		if X86_64 && (UNWINDER_FRAME_POINTER || UNWINDER_ORC) && STACK_VALIDATION
 	select HAVE_STACKPROTECTOR		if CC_HAS_SANE_STACKPROTECTOR
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -127,11 +127,12 @@
  * When used, an architecture is expected to provide __tlb_remove_table()
  * which does the actual freeing of these pages.
  *
- * HAVE_RCU_TABLE_INVALIDATE
+ * HAVE_RCU_TABLE_NO_INVALIDATE
  *
- * This makes HAVE_RCU_TABLE_FREE call tlb_flush_mmu_tlbonly() before freeing
- * the page-table pages. Required if you use HAVE_RCU_TABLE_FREE and your
- * architecture uses the Linux page-tables natively.
+ * This makes HAVE_RCU_TABLE_FREE avoid calling tlb_flush_mmu_tlbonly() before
+ * freeing the page-table pages. This can be avoided if you use
+ * HAVE_RCU_TABLE_FREE and your architecture does _NOT_ use the Linux
+ * page-tables natively.
  *
  */
 #define HAVE_GENERIC_MMU_GATHER
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -157,7 +157,7 @@ bool __tlb_remove_page_size(struct mmu_g
  */
 static inline void tlb_table_invalidate(struct mmu_gather *tlb)
 {
-#ifdef CONFIG_HAVE_RCU_TABLE_INVALIDATE
+#ifndef CONFIG_HAVE_RCU_TABLE_NO_INVALIDATE
 	/*
 	 * Invalidate page-table caches used by hardware walkers. Then we still
 	 * need to RCU-sched wait while freeing the pages because software
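In other words, after this patch the invalidate happens unless an
architecture explicitly opts out. A compile-time sketch of the inverted
polarity (user-space, with a print in place of the real flush; build
with -DCONFIG_HAVE_RCU_TABLE_NO_INVALIDATE to model an opted-out
architecture -- the names mirror the patch but nothing here is kernel
code):

#include <stdio.h>

static void model_tlb_flush_mmu_tlbonly(void)
{
	puts("invalidate page-table caches before freeing");
}

static void model_tlb_table_invalidate(void)
{
#ifndef CONFIG_HAVE_RCU_TABLE_NO_INVALIDATE
	/* the new default: flush unless the arch opted out */
	model_tlb_flush_mmu_tlbonly();
#else
	puts("arch opted out: no invalidate before free");
#endif
}

int main(void)
{
	model_tlb_table_invalidate();
	return 0;
}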
From patchwork Wed Sep 26 11:36:31 2018
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 10615799
Message-ID: <20180926114800.927066872@infradead.org>
User-Agent: quilt/0.65
Date: Wed, 26 Sep 2018 13:36:31 +0200
From: Peter Zijlstra
To: will.deacon@arm.com, aneesh.kumar@linux.vnet.ibm.com,
 akpm@linux-foundation.org, npiggin@gmail.com
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, peterz@infradead.org, linux@armlinux.org.uk,
 heiko.carstens@de.ibm.com, riel@surriel.com
Subject: [PATCH 08/18] arm/tlb: Convert to generic mmu_gather
References: <20180926113623.863696043@infradead.org>
Generic mmu_gather provides everything that ARM needs:

 - range tracking
 - RCU table free
 - VM_EXEC tracking
 - VIPT cache flushing

The one notable curiosity is the 'funny' range tracking for classical
ARM in __pte_free_tlb().

Cc: Nick Piggin
Cc: Andrew Morton
Cc: "Aneesh Kumar K.V"
Cc: Will Deacon
Cc: Russell King
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Will Deacon
---
 arch/arm/include/asm/tlb.h |  255 ++-------------------------------------------
 1 file changed, 14 insertions(+), 241 deletions(-)

--- a/arch/arm/include/asm/tlb.h
+++ b/arch/arm/include/asm/tlb.h
@@ -33,270 +33,43 @@
 #include
 #include
 
-#define MMU_GATHER_BUNDLE	8
-
-#ifdef CONFIG_HAVE_RCU_TABLE_FREE
 static inline void __tlb_remove_table(void *_table)
 {
 	free_page_and_swap_cache((struct page *)_table);
 }
 
-struct mmu_table_batch {
-	struct rcu_head		rcu;
-	unsigned int		nr;
-	void			*tables[0];
-};
-
-#define MAX_TABLE_BATCH		\
-	((PAGE_SIZE - sizeof(struct mmu_table_batch)) / sizeof(void *))
-
-extern void tlb_table_flush(struct mmu_gather *tlb);
-extern void tlb_remove_table(struct mmu_gather *tlb, void *table);
-
-#define tlb_remove_entry(tlb, entry)	tlb_remove_table(tlb, entry)
-#else
-#define tlb_remove_entry(tlb, entry)	tlb_remove_page(tlb, entry)
-#endif /* CONFIG_HAVE_RCU_TABLE_FREE */
-
-/*
- * TLB handling.  This allows us to remove pages from the page
- * tables, and efficiently handle the TLB issues.
- */
-struct mmu_gather {
-	struct mm_struct	*mm;
-#ifdef CONFIG_HAVE_RCU_TABLE_FREE
-	struct mmu_table_batch	*batch;
-	unsigned int		need_flush;
-#endif
-	unsigned int		fullmm;
-	struct vm_area_struct	*vma;
-	unsigned long		start, end;
-	unsigned long		range_start;
-	unsigned long		range_end;
-	unsigned int		nr;
-	unsigned int		max;
-	struct page		**pages;
-	struct page		*local[MMU_GATHER_BUNDLE];
-};
-
-DECLARE_PER_CPU(struct mmu_gather, mmu_gathers);
-
-/*
- * This is unnecessarily complex.  There's three ways the TLB shootdown
- * code is used:
- *  1. Unmapping a range of vmas.  See zap_page_range(), unmap_region().
- *     tlb->fullmm = 0, and tlb_start_vma/tlb_end_vma will be called.
- *     tlb->vma will be non-NULL.
- *  2. Unmapping all vmas.  See exit_mmap().
- *     tlb->fullmm = 1, and tlb_start_vma/tlb_end_vma will be called.
- *     tlb->vma will be non-NULL.  Additionally, page tables will be freed.
- *  3. Unmapping argument pages.  See shift_arg_pages().
- *     tlb->fullmm = 0, but tlb_start_vma/tlb_end_vma will not be called.
- *     tlb->vma will be NULL.
- */
-static inline void tlb_flush(struct mmu_gather *tlb)
-{
-	if (tlb->fullmm || !tlb->vma)
-		flush_tlb_mm(tlb->mm);
-	else if (tlb->range_end > 0) {
-		flush_tlb_range(tlb->vma, tlb->range_start, tlb->range_end);
-		tlb->range_start = TASK_SIZE;
-		tlb->range_end = 0;
-	}
-}
-
-static inline void tlb_add_flush(struct mmu_gather *tlb, unsigned long addr)
-{
-	if (!tlb->fullmm) {
-		if (addr < tlb->range_start)
-			tlb->range_start = addr;
-		if (addr + PAGE_SIZE > tlb->range_end)
-			tlb->range_end = addr + PAGE_SIZE;
-	}
-}
-
-static inline void __tlb_alloc_page(struct mmu_gather *tlb)
-{
-	unsigned long addr = __get_free_pages(GFP_NOWAIT | __GFP_NOWARN, 0);
-
-	if (addr) {
-		tlb->pages = (void *)addr;
-		tlb->max = PAGE_SIZE / sizeof(struct page *);
-	}
-}
-
-static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
-{
-	tlb_flush(tlb);
-#ifdef CONFIG_HAVE_RCU_TABLE_FREE
-	tlb_table_flush(tlb);
-#endif
-}
-
-static inline void tlb_flush_mmu_free(struct mmu_gather *tlb)
-{
-	free_pages_and_swap_cache(tlb->pages, tlb->nr);
-	tlb->nr = 0;
-	if (tlb->pages == tlb->local)
-		__tlb_alloc_page(tlb);
-}
-
-static inline void tlb_flush_mmu(struct mmu_gather *tlb)
-{
-	tlb_flush_mmu_tlbonly(tlb);
-	tlb_flush_mmu_free(tlb);
-}
-
-static inline void
-arch_tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
-			unsigned long start, unsigned long end)
-{
-	tlb->mm = mm;
-	tlb->fullmm = !(start | (end+1));
-	tlb->start = start;
-	tlb->end = end;
-	tlb->vma = NULL;
-	tlb->max = ARRAY_SIZE(tlb->local);
-	tlb->pages = tlb->local;
-	tlb->nr = 0;
-	__tlb_alloc_page(tlb);
+#include <asm-generic/tlb.h>
 
-#ifdef CONFIG_HAVE_RCU_TABLE_FREE
-	tlb->batch = NULL;
+#ifndef CONFIG_HAVE_RCU_TABLE_FREE
+#define tlb_remove_table(tlb, entry) tlb_remove_page(tlb, entry)
 #endif
-}
-
-static inline void
-arch_tlb_finish_mmu(struct mmu_gather *tlb,
-			unsigned long start, unsigned long end, bool force)
-{
-	if (force) {
-		tlb->range_start = start;
-		tlb->range_end = end;
-	}
-
-	tlb_flush_mmu(tlb);
-
-	/* keep the page table cache within bounds */
-	check_pgt_cache();
-
-	if (tlb->pages != tlb->local)
-		free_pages((unsigned long)tlb->pages, 0);
-}
-
-/*
- * Memorize the range for the TLB flush.
- */
 static inline void
-tlb_remove_tlb_entry(struct mmu_gather *tlb, pte_t *ptep, unsigned long addr)
-{
-	tlb_add_flush(tlb, addr);
-}
-
-#define tlb_remove_huge_tlb_entry(h, tlb, ptep, address)	\
-	tlb_remove_tlb_entry(tlb, ptep, address)
-/*
- * In the case of tlb vma handling, we can optimise these away in the
- * case where we're doing a full MM flush.  When we're doing a munmap,
- * the vmas are adjusted to only cover the region to be torn down.
- */
-static inline void
-tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
-{
-	if (!tlb->fullmm) {
-		flush_cache_range(vma, vma->vm_start, vma->vm_end);
-		tlb->vma = vma;
-		tlb->range_start = TASK_SIZE;
-		tlb->range_end = 0;
-	}
-}
-
-static inline void
-tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
-{
-	if (!tlb->fullmm)
-		tlb_flush(tlb);
-}
-
-static inline bool __tlb_remove_page(struct mmu_gather *tlb, struct page *page)
-{
-	tlb->pages[tlb->nr++] = page;
-	VM_WARN_ON(tlb->nr > tlb->max);
-	if (tlb->nr == tlb->max)
-		return true;
-	return false;
-}
-
-static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
-{
-	if (__tlb_remove_page(tlb, page))
-		tlb_flush_mmu(tlb);
-}
-
-static inline bool __tlb_remove_page_size(struct mmu_gather *tlb,
-		struct page *page, int page_size)
-{
-	return __tlb_remove_page(tlb, page);
-}
-
-static inline void tlb_remove_page_size(struct mmu_gather *tlb,
-		struct page *page, int page_size)
-{
-	return tlb_remove_page(tlb, page);
-}
-
-static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
-	unsigned long addr)
+__pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, unsigned long addr)
 {
 	pgtable_page_dtor(pte);
 
-#ifdef CONFIG_ARM_LPAE
-	tlb_add_flush(tlb, addr);
-#else
+#ifndef CONFIG_ARM_LPAE
 	/*
 	 * With the classic ARM MMU, a pte page has two corresponding pmd
 	 * entries, each covering 1MB.
 	 */
-	addr &= PMD_MASK;
-	tlb_add_flush(tlb, addr + SZ_1M - PAGE_SIZE);
-	tlb_add_flush(tlb, addr + SZ_1M);
+	addr = (addr & PMD_MASK) + SZ_1M;
+	__tlb_adjust_range(tlb, addr - PAGE_SIZE, 2 * PAGE_SIZE);
 #endif
 
-	tlb_remove_entry(tlb, pte);
-}
-
-static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
-				  unsigned long addr)
-{
-#ifdef CONFIG_ARM_LPAE
-	tlb_add_flush(tlb, addr);
-	tlb_remove_entry(tlb, virt_to_page(pmdp));
-#endif
+	tlb_remove_table(tlb, pte);
 }
 
 static inline void
-tlb_remove_pmd_tlb_entry(struct mmu_gather *tlb, pmd_t *pmdp, unsigned long addr)
+__pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp, unsigned long addr)
 {
-	tlb_add_flush(tlb, addr);
-}
-
-#define pte_free_tlb(tlb, ptep, addr)	__pte_free_tlb(tlb, ptep, addr)
-#define pmd_free_tlb(tlb, pmdp, addr)	__pmd_free_tlb(tlb, pmdp, addr)
-#define pud_free_tlb(tlb, pudp, addr)	pud_free((tlb)->mm, pudp)
-
-#define tlb_migrate_finish(mm)		do { } while (0)
-
-static inline void tlb_change_page_size(struct mmu_gather *tlb,
-						unsigned int page_size)
-{
-}
-
-static inline void tlb_flush_remove_tables(struct mm_struct *mm)
-{
-}
+#ifdef CONFIG_ARM_LPAE
+	struct page *page = virt_to_page(pmdp);
 
-static inline void tlb_flush_remove_tables_local(void *arg)
-{
+	pgtable_pmd_page_dtor(page);
+	tlb_remove_table(tlb, page);
+#endif
 }
 
 #endif /* CONFIG_MMU */
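The 'funny' classical-ARM case is worth unpacking: a pte page backs two
hardware pmd entries of 1MB each, so freeing it must force the flush
range to straddle the 1MB boundary in the middle of the 2MB pmd span. A
user-space model of just that arithmetic follows; the PMD_MASK/SZ_1M
values assume the classical-ARM layout, and __tlb_adjust_range() is
re-implemented here as simple range widening, so treat this as a sketch
rather than kernel code.

#include <stdio.h>

#define PAGE_SIZE	0x1000UL
#define SZ_1M		0x100000UL
#define PMD_MASK	(~(2 * SZ_1M - 1))	/* a Linux pmd covers 2MB */

struct range {
	unsigned long start, end;
};

static void adjust_range(struct range *r, unsigned long addr,
			 unsigned long size)
{
	/* widen the pending flush range to include [addr, addr + size) */
	if (addr < r->start)
		r->start = addr;
	if (addr + size > r->end)
		r->end = addr + size;
}

int main(void)
{
	struct range r = { .start = ~0UL, .end = 0 };
	unsigned long addr = 0x12345678UL;

	/*
	 * As in the new __pte_free_tlb(): one page either side of the
	 * 1MB mark inside the 2MB pmd span.
	 */
	addr = (addr & PMD_MASK) + SZ_1M;
	adjust_range(&r, addr - PAGE_SIZE, 2 * PAGE_SIZE);

	printf("flush [%#lx, %#lx)\n", r.start, r.end);
	return 0;
}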
From patchwork Wed Sep 26 11:36:32 2018
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 10615809
Message-ID: <20180926114800.978538448@infradead.org>
User-Agent: quilt/0.65
Date: Wed, 26 Sep 2018 13:36:32 +0200
From: Peter Zijlstra
To: will.deacon@arm.com, aneesh.kumar@linux.vnet.ibm.com,
 akpm@linux-foundation.org, npiggin@gmail.com
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, peterz@infradead.org, linux@armlinux.org.uk,
 heiko.carstens@de.ibm.com, riel@surriel.com, Tony Luck
Subject: [PATCH 09/18] ia64/tlb: Convert to generic mmu_gather
References: <20180926113623.863696043@infradead.org>

Generic mmu_gather provides everything ia64 needs (range tracking).

Cc: Will Deacon
Cc: "Aneesh Kumar K.V"
Cc: Andrew Morton
Cc: Nick Piggin
Cc: Tony Luck
Signed-off-by: Peter Zijlstra (Intel)
---
 arch/ia64/include/asm/tlb.h      |  256 ---------------------------------------
 arch/ia64/include/asm/tlbflush.h |   25 +++
 arch/ia64/mm/tlb.c               |   23 +++
 3 files changed, 47 insertions(+), 257 deletions(-)

--- a/arch/ia64/include/asm/tlb.h
+++ b/arch/ia64/include/asm/tlb.h
@@ -47,262 +47,8 @@
 #include
 #include
 
-/*
- * If we can't allocate a page to make a big batch of page pointers
- * to work on, then just handle a few from the on-stack structure.
- */
-#define	IA64_GATHER_BUNDLE	8
-
-struct mmu_gather {
-	struct mm_struct	*mm;
-	unsigned int		nr;
-	unsigned int		max;
-	unsigned char		fullmm;		/* non-zero means full mm flush */
-	unsigned char		need_flush;	/* really unmapped some PTEs? */
-	unsigned long		start, end;
-	unsigned long		start_addr;
-	unsigned long		end_addr;
-	struct page		**pages;
-	struct page		*local[IA64_GATHER_BUNDLE];
-};
-
-struct ia64_tr_entry {
-	u64 ifa;
-	u64 itir;
-	u64 pte;
-	u64 rr;
-}; /*Record for tr entry!*/
-
-extern int ia64_itr_entry(u64 target_mask, u64 va, u64 pte, u64 log_size);
-extern void ia64_ptr_entry(u64 target_mask, int slot);
-
-extern struct ia64_tr_entry *ia64_idtrs[NR_CPUS];
-
-/*
- region register macros
-*/
-#define RR_TO_VE(val)	(((val) >> 0) & 0x0000000000000001)
-#define RR_VE(val)	(((val) & 0x0000000000000001) << 0)
-#define RR_VE_MASK	0x0000000000000001L
-#define RR_VE_SHIFT	0
-#define RR_TO_PS(val)	(((val) >> 2) & 0x000000000000003f)
-#define RR_PS(val)	(((val) & 0x000000000000003f) << 2)
-#define RR_PS_MASK	0x00000000000000fcL
-#define RR_PS_SHIFT	2
-#define RR_RID_MASK	0x00000000ffffff00L
-#define RR_TO_RID(val)	((val >> 8) & 0xffffff)
-
-static inline void
-ia64_tlb_flush_mmu_tlbonly(struct mmu_gather *tlb, unsigned long start, unsigned long end)
-{
-	tlb->need_flush = 0;
-
-	if (tlb->fullmm) {
-		/*
-		 * Tearing down the entire address space.  This happens both as a result
-		 * of exit() and execve().  The latter case necessitates the call to
-		 * flush_tlb_mm() here.
-		 */
-		flush_tlb_mm(tlb->mm);
-	} else if (unlikely (end - start >= 1024*1024*1024*1024UL
-			     || REGION_NUMBER(start) != REGION_NUMBER(end - 1)))
-	{
-		/*
-		 * If we flush more than a tera-byte or across regions, we're probably
-		 * better off just flushing the entire TLB(s).  This should be very rare
-		 * and is not worth optimizing for.
-		 */
-		flush_tlb_all();
-	} else {
-		/*
-		 * flush_tlb_range() takes a vma instead of a mm pointer because
-		 * some architectures want the vm_flags for ITLB/DTLB flush.
-		 */
-		struct vm_area_struct vma = TLB_FLUSH_VMA(tlb->mm, 0);
-
-		/* flush the address range from the tlb: */
-		flush_tlb_range(&vma, start, end);
-		/* now flush the virt. page-table area mapping the address range: */
-		flush_tlb_range(&vma, ia64_thash(start), ia64_thash(end));
-	}
-
-}
-
-static inline void
-ia64_tlb_flush_mmu_free(struct mmu_gather *tlb)
-{
-	unsigned long i;
-	unsigned int nr;
-
-	/* lastly, release the freed pages */
-	nr = tlb->nr;
-
-	tlb->nr = 0;
-	tlb->start_addr = ~0UL;
-	for (i = 0; i < nr; ++i)
-		free_page_and_swap_cache(tlb->pages[i]);
-}
-
-/*
- * Flush the TLB for address range START to END and, if not in fast mode, release the
- * freed pages that where gathered up to this point.
- */
-static inline void
-ia64_tlb_flush_mmu (struct mmu_gather *tlb, unsigned long start, unsigned long end)
-{
-	if (!tlb->need_flush)
-		return;
-	ia64_tlb_flush_mmu_tlbonly(tlb, start, end);
-	ia64_tlb_flush_mmu_free(tlb);
-}
-
-static inline void __tlb_alloc_page(struct mmu_gather *tlb)
-{
-	unsigned long addr = __get_free_pages(GFP_NOWAIT | __GFP_NOWARN, 0);
-
-	if (addr) {
-		tlb->pages = (void *)addr;
-		tlb->max = PAGE_SIZE / sizeof(void *);
-	}
-}
-
-
-static inline void
-arch_tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
-			unsigned long start, unsigned long end)
-{
-	tlb->mm = mm;
-	tlb->max = ARRAY_SIZE(tlb->local);
-	tlb->pages = tlb->local;
-	tlb->nr = 0;
-	tlb->fullmm = !(start | (end+1));
-	tlb->start = start;
-	tlb->end = end;
-	tlb->start_addr = ~0UL;
-}
-
-/*
- * Called at the end of the shootdown operation to free up any resources that were
- * collected.
- */
-static inline void
-arch_tlb_finish_mmu(struct mmu_gather *tlb,
-		unsigned long start, unsigned long end, bool force)
-{
-	if (force)
-		tlb->need_flush = 1;
-	/*
-	 * Note: tlb->nr may be 0 at this point, so we can't rely on tlb->start_addr and
-	 * tlb->end_addr.
-	 */
-	ia64_tlb_flush_mmu(tlb, start, end);
-
-	/* keep the page table cache within bounds */
-	check_pgt_cache();
-
-	if (tlb->pages != tlb->local)
-		free_pages((unsigned long)tlb->pages, 0);
-}
-
-/*
- * Logically, this routine frees PAGE.  On MP machines, the actual freeing of the page
- * must be delayed until after the TLB has been flushed (see comments at the beginning of
- * this file).
- */
-static inline bool __tlb_remove_page(struct mmu_gather *tlb, struct page *page)
-{
-	tlb->need_flush = 1;
-
-	if (!tlb->nr && tlb->pages == tlb->local)
-		__tlb_alloc_page(tlb);
-
-	tlb->pages[tlb->nr++] = page;
-	VM_WARN_ON(tlb->nr > tlb->max);
-	if (tlb->nr == tlb->max)
-		return true;
-	return false;
-}
-
-static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
-{
-	ia64_tlb_flush_mmu_tlbonly(tlb, tlb->start_addr, tlb->end_addr);
-}
-
-static inline void tlb_flush_mmu_free(struct mmu_gather *tlb)
-{
-	ia64_tlb_flush_mmu_free(tlb);
-}
-
-static inline void tlb_flush_mmu(struct mmu_gather *tlb)
-{
-	ia64_tlb_flush_mmu(tlb, tlb->start_addr, tlb->end_addr);
-}
-
-static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
-{
-	if (__tlb_remove_page(tlb, page))
-		tlb_flush_mmu(tlb);
-}
-
-static inline bool __tlb_remove_page_size(struct mmu_gather *tlb,
-					  struct page *page, int page_size)
-{
-	return __tlb_remove_page(tlb, page);
-}
-
-static inline void tlb_remove_page_size(struct mmu_gather *tlb,
-					struct page *page, int page_size)
-{
-	return tlb_remove_page(tlb, page);
-}
-
-/*
- * Remove TLB entry for PTE mapped at virtual address ADDRESS.  This is called for any
- * PTE, not just those pointing to (normal) physical memory.
- */
-static inline void
-__tlb_remove_tlb_entry (struct mmu_gather *tlb, pte_t *ptep, unsigned long address)
-{
-	if (tlb->start_addr == ~0UL)
-		tlb->start_addr = address;
-	tlb->end_addr = address + PAGE_SIZE;
-}
-
 #define tlb_migrate_finish(mm)	platform_tlb_migrate_finish(mm)
 
-#define tlb_start_vma(tlb, vma)			do { } while (0)
-#define tlb_end_vma(tlb, vma)			do { } while (0)
-
-#define tlb_remove_tlb_entry(tlb, ptep, addr)		\
-do {							\
-	tlb->need_flush = 1;				\
-	__tlb_remove_tlb_entry(tlb, ptep, addr);	\
-} while (0)
-
-#define tlb_remove_huge_tlb_entry(h, tlb, ptep, address)	\
-	tlb_remove_tlb_entry(tlb, ptep, address)
-
-static inline void tlb_change_page_size(struct mmu_gather *tlb,
-						unsigned int page_size)
-{
-}
-
-#define pte_free_tlb(tlb, ptep, address)		\
-do {							\
-	tlb->need_flush = 1;				\
-	__pte_free_tlb(tlb, ptep, address);		\
-} while (0)
-
-#define pmd_free_tlb(tlb, ptep, address)		\
-do {							\
-	tlb->need_flush = 1;				\
-	__pmd_free_tlb(tlb, ptep, address);		\
-} while (0)
-
-#define pud_free_tlb(tlb, pudp, address)		\
-do {							\
-	tlb->need_flush = 1;				\
-	__pud_free_tlb(tlb, pudp, address);		\
-} while (0)
+#include <asm-generic/tlb.h>
 
 #endif /* _ASM_IA64_TLB_H */
--- a/arch/ia64/include/asm/tlbflush.h
+++ b/arch/ia64/include/asm/tlbflush.h
@@ -14,6 +14,31 @@
 #include
 #include
 
+struct ia64_tr_entry {
+	u64 ifa;
+	u64 itir;
+	u64 pte;
+	u64 rr;
+}; /*Record for tr entry!*/
+
+extern int ia64_itr_entry(u64 target_mask, u64 va, u64 pte, u64 log_size);
+extern void ia64_ptr_entry(u64 target_mask, int slot);
+extern struct ia64_tr_entry *ia64_idtrs[NR_CPUS];
+
+/*
+ region register macros
+*/
+#define RR_TO_VE(val)	(((val) >> 0) & 0x0000000000000001)
+#define RR_VE(val)	(((val) & 0x0000000000000001) << 0)
+#define RR_VE_MASK	0x0000000000000001L
+#define RR_VE_SHIFT	0
+#define RR_TO_PS(val)	(((val) >> 2) & 0x000000000000003f)
+#define RR_PS(val)	(((val) & 0x000000000000003f) << 2)
+#define RR_PS_MASK	0x00000000000000fcL
+#define RR_PS_SHIFT	2
+#define RR_RID_MASK	0x00000000ffffff00L
+#define RR_TO_RID(val)	((val >> 8) & 0xffffff)
+
 /*
  * Now for some TLB flushing routines.  This is the kind of stuff that
  * can be very expensive, so try to avoid them whenever possible.
--- a/arch/ia64/mm/tlb.c
+++ b/arch/ia64/mm/tlb.c
@@ -297,8 +297,8 @@ local_flush_tlb_all (void)
 	ia64_srlz_i();			/* srlz.i implies srlz.d */
 }
 
-void
-flush_tlb_range (struct vm_area_struct *vma, unsigned long start,
+static void
+__flush_tlb_range (struct vm_area_struct *vma, unsigned long start,
 		 unsigned long end)
 {
 	struct mm_struct *mm = vma->vm_mm;
@@ -335,6 +335,25 @@ flush_tlb_range (struct vm_area_struct *
 	preempt_enable();
 	ia64_srlz_i();			/* srlz.i implies srlz.d */
 }
+
+void flush_tlb_range(struct vm_area_struct *vma,
+		unsigned long start, unsigned long end)
+{
+	if (unlikely(end - start >= 1024*1024*1024*1024UL
+			|| REGION_NUMBER(start) != REGION_NUMBER(end - 1))) {
+		/*
+		 * If we flush more than a tera-byte or across regions, we're
+		 * probably better off just flushing the entire TLB(s).  This
+		 * should be very rare and is not worth optimizing for.
+		 */
+		flush_tlb_all();
+	} else {
+		/* flush the address range from the tlb */
+		__flush_tlb_range(vma, start, end);
+		/* flush the virt. page-table area mapping the addr range */
+		__flush_tlb_range(vma, ia64_thash(start), ia64_thash(end));
+	}
+}
 EXPORT_SYMBOL(flush_tlb_range);
 
 void ia64_tlb_init(void)
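The split keeps the existing per-page flush loop in __flush_tlb_range()
and layers the old heuristic on top. A user-space model of the new
wrapper's decision follows; REGION_NUMBER and the 1TB cutoff follow the
patch, while the second, page-table-area flush via ia64_thash() is
omitted, so this is a sketch of the control flow only.

#include <stdio.h>

#define REGION_NUMBER(addr)	((unsigned long)(addr) >> 61)
#define SZ_1T			(1024UL * 1024 * 1024 * 1024)

static void model_flush_tlb_all(void)
{
	puts("flush all TLBs");
}

static void model___flush_tlb_range(unsigned long start, unsigned long end)
{
	printf("flush range [%#lx, %#lx)\n", start, end);
}

static void model_flush_tlb_range(unsigned long start, unsigned long end)
{
	if (end - start >= SZ_1T ||
	    REGION_NUMBER(start) != REGION_NUMBER(end - 1))
		model_flush_tlb_all();	/* huge or cross-region: go global */
	else
		model___flush_tlb_range(start, end);
}

int main(void)
{
	model_flush_tlb_range(0x1000, 0x9000);	/* small, same region */
	model_flush_tlb_range(0x0, 2 * SZ_1T);	/* too big: global flush */
	return 0;
}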
Message-ID: <20180926114801.040318402@infradead.org>
User-Agent: quilt/0.65
Date: Wed, 26 Sep 2018 13:36:33 +0200
From: Peter Zijlstra
To: will.deacon@arm.com, aneesh.kumar@linux.vnet.ibm.com, akpm@linux-foundation.org, npiggin@gmail.com
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, peterz@infradead.org, linux@armlinux.org.uk, heiko.carstens@de.ibm.com, riel@surriel.com, Yoshinori Sato, Rich Felker
Subject: [PATCH 10/18] sh/tlb: Convert SH to generic mmu_gather
References: <20180926113623.863696043@infradead.org>
MIME-Version: 1.0

Generic mmu_gather provides everything SH needs (range tracking and
cache coherency).

Cc: Will Deacon
Cc: "Aneesh Kumar K.V"
Cc: Andrew Morton
Cc: Nick Piggin
Cc: Yoshinori Sato
Cc: Rich Felker
Signed-off-by: Peter Zijlstra (Intel)
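For reference, a minimal sketch (not part of the patch) of the mmu_gather
lifecycle the generic code now drives for SH; example_unmap is a
hypothetical caller modelled on the mm/memory.c unmap path:

	#include <asm/tlb.h>	/* kernel context assumed */

	/* Hypothetical helper, for illustration only. */
	static void example_unmap(struct mm_struct *mm, struct vm_area_struct *vma,
				  unsigned long start, unsigned long end)
	{
		struct mmu_gather tlb;

		tlb_gather_mmu(&tlb, mm, start, end);	/* begin the gather */
		/*
		 * The unmap loop brackets per-VMA work with tlb_start_vma(&tlb, vma)
		 * and tlb_end_vma(&tlb, vma), and calls tlb_remove_tlb_entry() /
		 * tlb_remove_page() per PTE; the generic mmu_gather records the
		 * touched range and batches the pages to be freed.
		 */
		tlb_finish_mmu(&tlb, start, end);	/* flush TLB, then free pages */
	}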
---
 arch/sh/include/asm/pgalloc.h |    7 ++
 arch/sh/include/asm/tlb.h     |  130 ------------------------------------------
 2 files changed, 8 insertions(+), 129 deletions(-)

--- a/arch/sh/include/asm/pgalloc.h
+++ b/arch/sh/include/asm/pgalloc.h
@@ -72,6 +72,15 @@ do {							\
	tlb_remove_page((tlb), (pte));			\
 } while (0)
 
+#if CONFIG_PGTABLE_LEVELS > 2
+#define __pmd_free_tlb(tlb, pmdp, addr)			\
+do {							\
+	struct page *page = virt_to_page(pmdp);		\
+	pgtable_pmd_page_dtor(page);			\
+	tlb_remove_page((tlb), page);			\
+} while (0)
+#endif
+
 static inline void check_pgt_cache(void)
 {
	quicklist_trim(QUICK_PT, NULL, 25, 16);
--- a/arch/sh/include/asm/tlb.h
+++ b/arch/sh/include/asm/tlb.h
@@ -11,131 +11,8 @@
 
 #ifdef CONFIG_MMU
 #include
-#include
-#include
-#include
-
-/*
- * TLB handling. This allows us to remove pages from the page
- * tables, and efficiently handle the TLB issues.
- */
-struct mmu_gather {
-	struct mm_struct	*mm;
-	unsigned int		fullmm;
-	unsigned long		start, end;
-};
-
-static inline void init_tlb_gather(struct mmu_gather *tlb)
-{
-	tlb->start = TASK_SIZE;
-	tlb->end = 0;
-
-	if (tlb->fullmm) {
-		tlb->start = 0;
-		tlb->end = TASK_SIZE;
-	}
-}
-
-static inline void
-arch_tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
-		unsigned long start, unsigned long end)
-{
-	tlb->mm = mm;
-	tlb->start = start;
-	tlb->end = end;
-	tlb->fullmm = !(start | (end+1));
-
-	init_tlb_gather(tlb);
-}
-
-static inline void
-arch_tlb_finish_mmu(struct mmu_gather *tlb,
-		unsigned long start, unsigned long end, bool force)
-{
-	if (tlb->fullmm || force)
-		flush_tlb_mm(tlb->mm);
-
-	/* keep the page table cache within bounds */
-	check_pgt_cache();
-}
-
-static inline void
-tlb_remove_tlb_entry(struct mmu_gather *tlb, pte_t *ptep, unsigned long address)
-{
-	if (tlb->start > address)
-		tlb->start = address;
-	if (tlb->end < address + PAGE_SIZE)
-		tlb->end = address + PAGE_SIZE;
-}
-
-#define tlb_remove_huge_tlb_entry(h, tlb, ptep, address)	\
-	tlb_remove_tlb_entry(tlb, ptep, address)
-
-/*
- * In the case of tlb vma handling, we can optimise these away in the
- * case where we're doing a full MM flush. When we're doing a munmap,
- * the vmas are adjusted to only cover the region to be torn down.
- */
-static inline void
-tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
-{
-	if (!tlb->fullmm)
-		flush_cache_range(vma, vma->vm_start, vma->vm_end);
-}
-
-static inline void
-tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
-{
-	if (!tlb->fullmm && tlb->end) {
-		flush_tlb_range(vma, tlb->start, tlb->end);
-		init_tlb_gather(tlb);
-	}
-}
-
-static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
-{
-}
-
-static inline void tlb_flush_mmu_free(struct mmu_gather *tlb)
-{
-}
-
-static inline void tlb_flush_mmu(struct mmu_gather *tlb)
-{
-}
-
-static inline int __tlb_remove_page(struct mmu_gather *tlb, struct page *page)
-{
-	free_page_and_swap_cache(page);
-	return false; /* avoid calling tlb_flush_mmu */
-}
-
-static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
-{
-	__tlb_remove_page(tlb, page);
-}
-
-static inline bool __tlb_remove_page_size(struct mmu_gather *tlb,
-					  struct page *page, int page_size)
-{
-	return __tlb_remove_page(tlb, page);
-}
-
-static inline void tlb_remove_page_size(struct mmu_gather *tlb,
-					struct page *page, int page_size)
-{
-	return tlb_remove_page(tlb, page);
-}
-
-static inline void tlb_change_page_size(struct mmu_gather *tlb, unsigned int page_size)
-{
-}
-
-#define pte_free_tlb(tlb, ptep, addr)	pte_free((tlb)->mm, ptep)
-#define pmd_free_tlb(tlb, pmdp, addr)	pmd_free((tlb)->mm, pmdp)
-#define pud_free_tlb(tlb, pudp, addr)	pud_free((tlb)->mm, pudp)
-
-#define tlb_migrate_finish(mm) do { } while (0)
+#include <asm-generic/tlb.h>
 
 #if defined(CONFIG_CPU_SH4) || defined(CONFIG_SUPERH64)
 extern void tlb_wire_entry(struct vm_area_struct *, unsigned long, pte_t);
@@ -155,11 +32,6 @@ static inline void tlb_unwire_entry(void
 
 #else /* CONFIG_MMU */
 
-#define tlb_start_vma(tlb, vma) do { } while (0)
-#define tlb_end_vma(tlb, vma) do { } while (0)
-#define __tlb_remove_tlb_entry(tlb, pte, address) do { } while (0)
-#define tlb_flush(tlb) do { } while (0)
-
 #include <asm-generic/tlb.h>
 
 #endif /* CONFIG_MMU */

From patchwork Wed Sep 26 11:36:34 2018
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 10615803
Message-ID: <20180926114801.094298493@infradead.org>
User-Agent: quilt/0.65
Date: Wed, 26 Sep 2018 13:36:34 +0200
From: Peter Zijlstra
To: will.deacon@arm.com, aneesh.kumar@linux.vnet.ibm.com, akpm@linux-foundation.org, npiggin@gmail.com
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, peterz@infradead.org, linux@armlinux.org.uk, heiko.carstens@de.ibm.com, riel@surriel.com, Richard Weinberger
Subject: [PATCH 11/18] um/tlb: Convert to generic mmu_gather
References: <20180926113623.863696043@infradead.org>
MIME-Version: 1.0

Generic mmu_gather provides the simple, flush_tlb_range()-based range
tracking that UM needs.

Cc: Will Deacon
Cc: "Aneesh Kumar K.V"
Cc: Andrew Morton
Cc: Nick Piggin
Cc: Richard Weinberger
Signed-off-by: Peter Zijlstra (Intel)
---
 arch/um/include/asm/tlb.h |  156 ----------------------------------------------
 1 file changed, 2 insertions(+), 154 deletions(-)

--- a/arch/um/include/asm/tlb.h
+++ b/arch/um/include/asm/tlb.h
@@ -2,160 +2,8 @@
 #ifndef __UM_TLB_H
 #define __UM_TLB_H
 
-#include
-#include
-#include
-#include
 #include
-
-#define tlb_start_vma(tlb, vma) do { } while (0)
-#define tlb_end_vma(tlb, vma) do { } while (0)
-#define tlb_flush(tlb) flush_tlb_mm((tlb)->mm)
-
-/* struct mmu_gather is an opaque type used by the mm code for passing around
- * any data needed by arch specific code for tlb_remove_page.
- */
-struct mmu_gather {
-	struct mm_struct	*mm;
-	unsigned int		need_flush; /* Really unmapped some ptes?
*/ - unsigned long start; - unsigned long end; - unsigned int fullmm; /* non-zero means full mm flush */ -}; - -static inline void __tlb_remove_tlb_entry(struct mmu_gather *tlb, pte_t *ptep, - unsigned long address) -{ - if (tlb->start > address) - tlb->start = address; - if (tlb->end < address + PAGE_SIZE) - tlb->end = address + PAGE_SIZE; -} - -static inline void init_tlb_gather(struct mmu_gather *tlb) -{ - tlb->need_flush = 0; - - tlb->start = TASK_SIZE; - tlb->end = 0; - - if (tlb->fullmm) { - tlb->start = 0; - tlb->end = TASK_SIZE; - } -} - -static inline void -arch_tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, - unsigned long start, unsigned long end) -{ - tlb->mm = mm; - tlb->start = start; - tlb->end = end; - tlb->fullmm = !(start | (end+1)); - - init_tlb_gather(tlb); -} - -extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start, - unsigned long end); - -static inline void -tlb_flush_mmu_tlbonly(struct mmu_gather *tlb) -{ - flush_tlb_mm_range(tlb->mm, tlb->start, tlb->end); -} - -static inline void -tlb_flush_mmu_free(struct mmu_gather *tlb) -{ - init_tlb_gather(tlb); -} - -static inline void -tlb_flush_mmu(struct mmu_gather *tlb) -{ - if (!tlb->need_flush) - return; - - tlb_flush_mmu_tlbonly(tlb); - tlb_flush_mmu_free(tlb); -} - -/* arch_tlb_finish_mmu - * Called at the end of the shootdown operation to free up any resources - * that were required. - */ -static inline void -arch_tlb_finish_mmu(struct mmu_gather *tlb, - unsigned long start, unsigned long end, bool force) -{ - if (force) { - tlb->start = start; - tlb->end = end; - tlb->need_flush = 1; - } - tlb_flush_mmu(tlb); - - /* keep the page table cache within bounds */ - check_pgt_cache(); -} - -/* tlb_remove_page - * Must perform the equivalent to __free_pte(pte_get_and_clear(ptep)), - * while handling the additional races in SMP caused by other CPUs - * caching valid mappings in their TLBs. - */ -static inline int __tlb_remove_page(struct mmu_gather *tlb, struct page *page) -{ - tlb->need_flush = 1; - free_page_and_swap_cache(page); - return false; /* avoid calling tlb_flush_mmu */ -} - -static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page) -{ - __tlb_remove_page(tlb, page); -} - -static inline bool __tlb_remove_page_size(struct mmu_gather *tlb, - struct page *page, int page_size) -{ - return __tlb_remove_page(tlb, page); -} - -static inline void tlb_remove_page_size(struct mmu_gather *tlb, - struct page *page, int page_size) -{ - return tlb_remove_page(tlb, page); -} - -/** - * tlb_remove_tlb_entry - remember a pte unmapping for later tlb invalidation. - * - * Record the fact that pte's were really umapped in ->need_flush, so we can - * later optimise away the tlb invalidate. This helps when userspace is - * unmapping already-unmapped pages, which happens quite a lot. 
- */
-#define tlb_remove_tlb_entry(tlb, ptep, address)		\
-	do {							\
-		tlb->need_flush = 1;				\
-		__tlb_remove_tlb_entry(tlb, ptep, address);	\
-	} while (0)
-
-#define tlb_remove_huge_tlb_entry(h, tlb, ptep, address)	\
-	tlb_remove_tlb_entry(tlb, ptep, address)
-
-static inline void tlb_change_page_size(struct mmu_gather *tlb, unsigned int page_size)
-{
-}
-
-#define pte_free_tlb(tlb, ptep, addr) __pte_free_tlb(tlb, ptep, addr)
-
-#define pud_free_tlb(tlb, pudp, addr) __pud_free_tlb(tlb, pudp, addr)
-
-#define pmd_free_tlb(tlb, pmdp, addr) __pmd_free_tlb(tlb, pmdp, addr)
-
-#define tlb_migrate_finish(mm) do {} while (0)
+#include <asm/tlbflush.h>
+#include <asm-generic/tlb.h>
 
 #endif
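To illustrate the range tracking UM now inherits, here is a sketch
modelled on the generic __tlb_adjust_range() helper at this point in the
series (illustrative only, not part of the patch):

	/*
	 * Each tlb_remove_tlb_entry() widens the tracked interval;
	 * tlb->start/tlb->end later feed the architecture's range
	 * flush when the gather is torn down.
	 */
	static inline void __tlb_adjust_range(struct mmu_gather *tlb,
					      unsigned long address,
					      unsigned int range_size)
	{
		tlb->start = min(tlb->start, address);
		tlb->end = max(tlb->end, address + range_size);
	}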
From patchwork Wed Sep 26 11:36:35 2018
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 10615813
Message-ID: <20180926114801.146189550@infradead.org>
User-Agent: quilt/0.65
Date: Wed, 26 Sep 2018 13:36:35 +0200
From: Peter Zijlstra
To: will.deacon@arm.com, aneesh.kumar@linux.vnet.ibm.com, akpm@linux-foundation.org, npiggin@gmail.com
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, peterz@infradead.org, linux@armlinux.org.uk, heiko.carstens@de.ibm.com, riel@surriel.com, Richard Henderson, Vineet Gupta, Mark Salter, Richard Kuo, Michal Simek, Paul Burton, Greentime Hu, Ley Foon Tan, Jonas Bonn, Helge Deller, "David S. Miller", Guan Xuetao, Max Filippov
Subject: [PATCH 12/18] arch/tlb: Clean up simple architectures
References: <20180926113623.863696043@infradead.org>
MIME-Version: 1.0

There are generally two cases:

 1) either the platform has an efficient flush_tlb_range() and
    asm-generic/tlb.h doesn't need any overrides at all.

 2) or an architecture lacks an efficient flush_tlb_range() and
    we override tlb_end_vma() and tlb_flush().

Convert all 'simple' architectures to one of these two forms (a
condensed sketch of the two forms follows the list below):

  alpha:      has no range invalidate -> 2
  arc:        already used flush_tlb_range() -> 1
  c6x:        has no range invalidate -> 2
  h8300:      has no mmu
  hexagon:    has an efficient flush_tlb_range() -> 1
              (flush_tlb_mm() is in fact a full range invalidate,
               so no need to shoot down everything)
  m68k:       has inefficient flush_tlb_range() -> 2
  microblaze: has no flush_tlb_range() -> 2
  mips:       has efficient flush_tlb_range() -> 1
              (even though it currently seems to use flush_tlb_mm())
  nds32:      already uses flush_tlb_range() -> 1
  nios2:      has inefficient flush_tlb_range() -> 2
              (no limit on range iteration)
  openrisc:   has inefficient flush_tlb_range() -> 2
              (no limit on range iteration)
  parisc:     already uses flush_tlb_range() -> 1
  sparc32:    already uses flush_tlb_range() -> 1
  unicore32:  has inefficient flush_tlb_range() -> 2
              (no limit on range iteration)
  xtensa:     has efficient flush_tlb_range() -> 1
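Condensed, the two resulting shapes of an architecture's asm/tlb.h look
like this (a sketch distilled from the per-architecture hunks below, not
an additional patch):

	/* Form 1: efficient flush_tlb_range(); no overrides needed. */
	#include <asm-generic/tlb.h>

	/* Form 2: no efficient flush_tlb_range(); punt to a full-mm flush. */
	#define tlb_end_vma(tlb, vma)	do { } while (0)
	#define tlb_flush(tlb)		flush_tlb_mm((tlb)->mm)
	#include <asm-generic/tlb.h>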
Miller" Cc: Guan Xuetao Cc: Max Filippov Cc: Will Deacon Cc: "Aneesh Kumar K.V" Cc: Andrew Morton Cc: Nick Piggin Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Peter Zijlstra (Intel) --- arch/alpha/include/asm/tlb.h | 2 -- arch/arc/include/asm/tlb.h | 23 ----------------------- arch/c6x/include/asm/tlb.h | 1 + arch/h8300/include/asm/tlb.h | 2 -- arch/hexagon/include/asm/tlb.h | 12 ------------ arch/m68k/include/asm/tlb.h | 1 - arch/microblaze/include/asm/tlb.h | 4 +--- arch/mips/include/asm/tlb.h | 8 -------- arch/nds32/include/asm/tlb.h | 10 ---------- arch/nios2/include/asm/tlb.h | 8 +++++--- arch/openrisc/include/asm/tlb.h | 6 ++++-- arch/parisc/include/asm/tlb.h | 13 ------------- arch/powerpc/include/asm/tlb.h | 1 - arch/sparc/include/asm/tlb_32.h | 13 ------------- arch/unicore32/include/asm/tlb.h | 10 ++++++---- arch/xtensa/include/asm/tlb.h | 17 ----------------- 16 files changed, 17 insertions(+), 114 deletions(-) --- a/arch/alpha/include/asm/tlb.h +++ b/arch/alpha/include/asm/tlb.h @@ -4,8 +4,6 @@ #define tlb_start_vma(tlb, vma) do { } while (0) #define tlb_end_vma(tlb, vma) do { } while (0) -#define __tlb_remove_tlb_entry(tlb, pte, addr) do { } while (0) - #define tlb_flush(tlb) flush_tlb_mm((tlb)->mm) #include --- a/arch/arc/include/asm/tlb.h +++ b/arch/arc/include/asm/tlb.h @@ -9,29 +9,6 @@ #ifndef _ASM_ARC_TLB_H #define _ASM_ARC_TLB_H -#define tlb_flush(tlb) \ -do { \ - if (tlb->fullmm) \ - flush_tlb_mm((tlb)->mm); \ -} while (0) - -/* - * This pair is called at time of munmap/exit to flush cache and TLB entries - * for mappings being torn down. - * 1) cache-flush part -implemented via tlb_start_vma( ) for VIPT aliasing D$ - * 2) tlb-flush part - implemted via tlb_end_vma( ) flushes the TLB range - * - * Note, read http://lkml.org/lkml/2004/1/15/6 - */ - -#define tlb_end_vma(tlb, vma) \ -do { \ - if (!tlb->fullmm) \ - flush_tlb_range(vma, vma->vm_start, vma->vm_end); \ -} while (0) - -#define __tlb_remove_tlb_entry(tlb, ptep, address) - #include #include --- a/arch/c6x/include/asm/tlb.h +++ b/arch/c6x/include/asm/tlb.h @@ -2,6 +2,7 @@ #ifndef _ASM_C6X_TLB_H #define _ASM_C6X_TLB_H +#define tlb_end_vma(tlb,vma) do { } while (0) #define tlb_flush(tlb) flush_tlb_mm((tlb)->mm) #include --- a/arch/h8300/include/asm/tlb.h +++ b/arch/h8300/include/asm/tlb.h @@ -2,8 +2,6 @@ #ifndef __H8300_TLB_H__ #define __H8300_TLB_H__ -#define tlb_flush(tlb) do { } while (0) - #include #endif --- a/arch/hexagon/include/asm/tlb.h +++ b/arch/hexagon/include/asm/tlb.h @@ -22,18 +22,6 @@ #include #include -/* - * We don't need any special per-pte or per-vma handling... - */ -#define tlb_start_vma(tlb, vma) do { } while (0) -#define tlb_end_vma(tlb, vma) do { } while (0) -#define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0) - -/* - * .. because we flush the whole mm when it fills up - */ -#define tlb_flush(tlb) flush_tlb_mm((tlb)->mm) - #include #endif --- a/arch/m68k/include/asm/tlb.h +++ b/arch/m68k/include/asm/tlb.h @@ -8,7 +8,6 @@ */ #define tlb_start_vma(tlb, vma) do { } while (0) #define tlb_end_vma(tlb, vma) do { } while (0) -#define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0) /* * .. 
 
 /*
  * .. because we flush the whole mm when it
--- a/arch/microblaze/include/asm/tlb.h
+++ b/arch/microblaze/include/asm/tlb.h
@@ -11,14 +11,12 @@
 #ifndef _ASM_MICROBLAZE_TLB_H
 #define _ASM_MICROBLAZE_TLB_H
 
-#define tlb_flush(tlb)	flush_tlb_mm((tlb)->mm)
-
 #include
 
 #ifdef CONFIG_MMU
 #define tlb_start_vma(tlb, vma)	do { } while (0)
 #define tlb_end_vma(tlb, vma)	do { } while (0)
-#define __tlb_remove_tlb_entry(tlb, pte, address)	do { } while (0)
+#define tlb_flush(tlb)	flush_tlb_mm((tlb)->mm)
 #endif
 
 #include
--- a/arch/mips/include/asm/tlb.h
+++ b/arch/mips/include/asm/tlb.h
@@ -5,14 +5,6 @@
 #include
 #include
 
-#define tlb_end_vma(tlb, vma) do { } while (0)
-#define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0)
-
-/*
- * .. because we flush the whole mm when it fills up.
- */
-#define tlb_flush(tlb) flush_tlb_mm((tlb)->mm)
-
 #define _UNIQUE_ENTRYHI(base, idx)					\
		(((base) + ((idx) << (PAGE_SHIFT + 1))) |		\
		 (cpu_has_tlbinv ? MIPS_ENTRYHI_EHINV : 0))
--- a/arch/nds32/include/asm/tlb.h
+++ b/arch/nds32/include/asm/tlb.h
@@ -4,16 +4,6 @@
 #ifndef __ASMNDS32_TLB_H
 #define __ASMNDS32_TLB_H
 
-#define tlb_end_vma(tlb,vma)						\
-	do {								\
-		if(!tlb->fullmm)					\
-			flush_tlb_range(vma, vma->vm_start, vma->vm_end); \
-	} while (0)
-
-#define __tlb_remove_tlb_entry(tlb, pte, addr) do { } while (0)
-
-#define tlb_flush(tlb)	flush_tlb_mm((tlb)->mm)
-
 #include
 
 #define __pte_free_tlb(tlb, pte, addr)	pte_free((tlb)->mm, pte)
--- a/arch/nios2/include/asm/tlb.h
+++ b/arch/nios2/include/asm/tlb.h
@@ -11,12 +11,14 @@
 #ifndef _ASM_NIOS2_TLB_H
 #define _ASM_NIOS2_TLB_H
 
-#define tlb_flush(tlb)	flush_tlb_mm((tlb)->mm)
-
 extern void set_mmu_pid(unsigned long pid);
 
+/*
+ * NIOS32 does have flush_tlb_range(), but it lacks a range limit and a
+ * fallback to full mm invalidation. So use flush_tlb_mm() for everything.
+ */
 #define tlb_end_vma(tlb, vma)	do { } while (0)
-#define __tlb_remove_tlb_entry(tlb, ptep, address)	do { } while (0)
+#define tlb_flush(tlb)	flush_tlb_mm((tlb)->mm)
 
 #include
 #include
--- a/arch/openrisc/include/asm/tlb.h
+++ b/arch/openrisc/include/asm/tlb.h
@@ -22,12 +22,14 @@
 /*
  * or32 doesn't need any special per-pte or
  * per-vma handling..
+ *
+ * OpenRISC doesn't have an efficient flush_tlb_range() so use flush_tlb_mm()
+ * for everything.
  */
 #define tlb_start_vma(tlb, vma) do { } while (0)
 #define tlb_end_vma(tlb, vma) do { } while (0)
-#define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0)
-
 #define tlb_flush(tlb) flush_tlb_mm((tlb)->mm)
+
 #include
 #include
--- a/arch/parisc/include/asm/tlb.h
+++ b/arch/parisc/include/asm/tlb.h
@@ -2,19 +2,6 @@
 #ifndef _PARISC_TLB_H
 #define _PARISC_TLB_H
 
-#define tlb_flush(tlb)			\
-do {	if ((tlb)->fullmm)		\
-		flush_tlb_mm((tlb)->mm);\
-} while (0)
-
-#define tlb_end_vma(tlb, vma)	\
-do {	if (!(tlb)->fullmm)	\
-		flush_tlb_range(vma, vma->vm_start, vma->vm_end); \
-} while (0)
-
-#define __tlb_remove_tlb_entry(tlb, pte, address) \
-	do { } while (0)
-
 #include
 
 #define __pmd_free_tlb(tlb, pmd, addr)	pmd_free((tlb)->mm, pmd)
--- a/arch/sparc/include/asm/tlb_32.h
+++ b/arch/sparc/include/asm/tlb_32.h
@@ -2,19 +2,6 @@
 #ifndef _SPARC_TLB_H
 #define _SPARC_TLB_H
 
-#define tlb_end_vma(tlb, vma) \
-do {								\
-	flush_tlb_range(vma, vma->vm_start, vma->vm_end);	\
-} while (0)
-
-#define __tlb_remove_tlb_entry(tlb, pte, address) \
-	do { } while (0)
-
-#define tlb_flush(tlb) \
-do {								\
-	flush_tlb_mm((tlb)->mm);				\
-} while (0)
-
 #include
 
 #endif /* _SPARC_TLB_H */
--- a/arch/unicore32/include/asm/tlb.h
+++ b/arch/unicore32/include/asm/tlb.h
@@ -12,10 +12,12 @@
 #ifndef __UNICORE_TLB_H__
 #define __UNICORE_TLB_H__
 
-#define tlb_start_vma(tlb, vma)				do { } while (0)
-#define tlb_end_vma(tlb, vma)				do { } while (0)
-#define __tlb_remove_tlb_entry(tlb, ptep, address)	do { } while (0)
-#define tlb_flush(tlb)					flush_tlb_mm((tlb)->mm)
+/*
+ * unicore32 lacks an efficient flush_tlb_range(), use flush_tlb_mm().
+ */
+#define tlb_start_vma(tlb, vma)	do { } while (0)
+#define tlb_end_vma(tlb, vma)	do { } while (0)
+#define tlb_flush(tlb)		flush_tlb_mm((tlb)->mm)
 
 #define __pte_free_tlb(tlb, pte, addr)	\
	do {				\
--- a/arch/xtensa/include/asm/tlb.h
+++ b/arch/xtensa/include/asm/tlb.h
@@ -14,23 +14,6 @@
 #include
 #include
 
-#if (DCACHE_WAY_SIZE <= PAGE_SIZE)
-
-# define tlb_end_vma(tlb,vma)	do { } while (0)
-
-#else
-
-# define tlb_end_vma(tlb, vma)						\
-	do {								\
-		if (!tlb->fullmm)					\
-			flush_tlb_range(vma, vma->vm_start, vma->vm_end); \
-	} while(0)
-
-#endif
-
-#define __tlb_remove_tlb_entry(tlb,pte,addr)	do { } while (0)
-#define tlb_flush(tlb)	flush_tlb_mm((tlb)->mm)
-
 #include
 
 #define __pte_free_tlb(tlb, pte, address)	pte_free((tlb)->mm, pte)

From patchwork Wed Sep 26 11:36:36 2018
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 10615801
Message-ID: <20180926114801.199256189@infradead.org>
User-Agent: quilt/0.65
Date: Wed, 26 Sep 2018 13:36:36 +0200
From: Peter Zijlstra
To: will.deacon@arm.com, aneesh.kumar@linux.vnet.ibm.com, akpm@linux-foundation.org, npiggin@gmail.com
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, peterz@infradead.org, linux@armlinux.org.uk, heiko.carstens@de.ibm.com, riel@surriel.com, Linus Torvalds, Martin Schwidefsky
Subject: [PATCH 13/18] asm-generic/tlb: Introduce HAVE_MMU_GATHER_NO_GATHER
References: <20180926113623.863696043@infradead.org>
MIME-Version: 1.0

From: Martin Schwidefsky

Add the Kconfig option HAVE_MMU_GATHER_NO_GATHER to the generic
mmu_gather code. If the option is set the mmu_gather will not track
individual pages for delayed page free anymore. A platform that enables
the option needs to provide its own implementation of the
__tlb_remove_page_size function to free pages.
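Concretely, a platform that sets HAVE_MMU_GATHER_NO_GATHER supplies
something like the following (a sketch modelled on the s390
implementation introduced later in this series):

	/*
	 * Without the gather, pages are freed immediately instead of
	 * being queued in mmu_gather_batch; returning false means the
	 * caller never needs an early tlb_flush_mmu().
	 */
	static inline bool __tlb_remove_page_size(struct mmu_gather *tlb,
						  struct page *page, int page_size)
	{
		free_page_and_swap_cache(page);
		return false;
	}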
Cc: npiggin@gmail.com Cc: heiko.carstens@de.ibm.com Cc: will.deacon@arm.com Cc: aneesh.kumar@linux.vnet.ibm.com Cc: akpm@linux-foundation.org Cc: Linus Torvalds Cc: linux@armlinux.org.uk Signed-off-by: Martin Schwidefsky Signed-off-by: Peter Zijlstra (Intel) Link: http://lkml.kernel.org/r/20180918125151.31744-2-schwidefsky@de.ibm.com --- arch/Kconfig | 3 + include/asm-generic/tlb.h | 9 +++ mm/mmu_gather.c | 107 +++++++++++++++++++++++++--------------------- 3 files changed, 70 insertions(+), 49 deletions(-) --- a/arch/Kconfig +++ b/arch/Kconfig @@ -368,6 +368,9 @@ config HAVE_RCU_TABLE_NO_INVALIDATE config HAVE_MMU_GATHER_PAGE_SIZE bool +config HAVE_MMU_GATHER_NO_GATHER + bool + config ARCH_HAVE_NMI_SAFE_CMPXCHG bool --- a/include/asm-generic/tlb.h +++ b/include/asm-generic/tlb.h @@ -184,6 +184,7 @@ extern void tlb_remove_table(struct mmu_ #endif +#ifndef CONFIG_HAVE_MMU_GATHER_NO_GATHER /* * If we can't allocate a page to make a big batch of page pointers * to work on, then just handle a few from the on-stack structure. @@ -208,6 +209,10 @@ struct mmu_gather_batch { */ #define MAX_GATHER_BATCH_COUNT (10000UL/MAX_GATHER_BATCH) +extern bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page, + int page_size); +#endif + /* * struct mmu_gather is an opaque type used by the mm code for passing around * any data needed by arch specific code for tlb_remove_page. @@ -254,6 +259,7 @@ struct mmu_gather { unsigned int batch_count; +#ifndef CONFIG_HAVE_MMU_GATHER_NO_GATHER struct mmu_gather_batch *active; struct mmu_gather_batch local; struct page *__pages[MMU_GATHER_BUNDLE]; @@ -261,6 +267,7 @@ struct mmu_gather { #ifdef CONFIG_HAVE_MMU_GATHER_PAGE_SIZE unsigned int page_size; #endif +#endif }; void arch_tlb_gather_mmu(struct mmu_gather *tlb, @@ -269,8 +276,6 @@ void tlb_flush_mmu(struct mmu_gather *tl void arch_tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long end, bool force); void tlb_flush_mmu_free(struct mmu_gather *tlb); -extern bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page, - int page_size); static inline void __tlb_adjust_range(struct mmu_gather *tlb, unsigned long address, --- a/mm/mmu_gather.c +++ b/mm/mmu_gather.c @@ -13,6 +13,8 @@ #ifdef HAVE_GENERIC_MMU_GATHER +#ifndef CONFIG_HAVE_MMU_GATHER_NO_GATHER + static bool tlb_next_batch(struct mmu_gather *tlb) { struct mmu_gather_batch *batch; @@ -41,6 +43,56 @@ static bool tlb_next_batch(struct mmu_ga return true; } +static void tlb_batch_pages_flush(struct mmu_gather *tlb) +{ + struct mmu_gather_batch *batch; + + for (batch = &tlb->local; batch && batch->nr; batch = batch->next) { + free_pages_and_swap_cache(batch->pages, batch->nr); + batch->nr = 0; + } + tlb->active = &tlb->local; +} + +static void tlb_batch_list_free(struct mmu_gather *tlb) +{ + struct mmu_gather_batch *batch, *next; + + for (batch = tlb->local.next; batch; batch = next) { + next = batch->next; + free_pages((unsigned long)batch, 0); + } + tlb->local.next = NULL; +} + +bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page, int page_size) +{ + struct mmu_gather_batch *batch; + + VM_BUG_ON(!tlb->end); + +#ifdef CONFIG_HAVE_MMU_GATHER_PAGE_SIZE + VM_WARN_ON(tlb->page_size != page_size); +#endif + + batch = tlb->active; + /* + * Add the page and check if we are full. If so + * force a flush. 
+ */ + batch->pages[batch->nr++] = page; + if (batch->nr == batch->max) { + if (!tlb_next_batch(tlb)) + return true; + batch = tlb->active; + } + VM_BUG_ON_PAGE(batch->nr > batch->max, page); + + return false; +} + +#endif /* HAVE_MMU_GATHER_NO_GATHER */ + void arch_tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, unsigned long start, unsigned long end) { @@ -48,12 +100,15 @@ void arch_tlb_gather_mmu(struct mmu_gath /* Is it from 0 to ~0? */ tlb->fullmm = !(start | (end+1)); + +#ifndef CONFIG_HAVE_MMU_GATHER_NO_GATHER tlb->need_flush_all = 0; tlb->local.next = NULL; tlb->local.nr = 0; tlb->local.max = ARRAY_SIZE(tlb->__pages); tlb->active = &tlb->local; tlb->batch_count = 0; +#endif #ifdef CONFIG_HAVE_RCU_TABLE_FREE tlb->batch = NULL; @@ -67,16 +122,12 @@ void arch_tlb_gather_mmu(struct mmu_gath void tlb_flush_mmu_free(struct mmu_gather *tlb) { - struct mmu_gather_batch *batch; - #ifdef CONFIG_HAVE_RCU_TABLE_FREE tlb_table_flush(tlb); #endif - for (batch = &tlb->local; batch && batch->nr; batch = batch->next) { - free_pages_and_swap_cache(batch->pages, batch->nr); - batch->nr = 0; - } - tlb->active = &tlb->local; +#ifndef CONFIG_HAVE_MMU_GATHER_NO_GATHER + tlb_batch_pages_flush(tlb); +#endif } void tlb_flush_mmu(struct mmu_gather *tlb) @@ -92,8 +143,6 @@ void tlb_flush_mmu(struct mmu_gather *tl void arch_tlb_finish_mmu(struct mmu_gather *tlb, unsigned long start, unsigned long end, bool force) { - struct mmu_gather_batch *batch, *next; - if (force) { __tlb_reset_range(tlb); __tlb_adjust_range(tlb, start, end - start); @@ -103,45 +152,9 @@ void arch_tlb_finish_mmu(struct mmu_gath /* keep the page table cache within bounds */ check_pgt_cache(); - - for (batch = tlb->local.next; batch; batch = next) { - next = batch->next; - free_pages((unsigned long)batch, 0); - } - tlb->local.next = NULL; -} - -/* __tlb_remove_page - * Must perform the equivalent to __free_pte(pte_get_and_clear(ptep)), while - * handling the additional races in SMP caused by other CPUs caching valid - * mappings in their TLBs. Returns the number of free page slots left. - * When out of page slots we must call tlb_flush_mmu(). - *returns true if the caller should flush. - */ -bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page, int page_size) -{ - struct mmu_gather_batch *batch; - - VM_BUG_ON(!tlb->end); - -#ifdef CONFIG_HAVE_MMU_GATHER_PAGE_SIZE - VM_WARN_ON(tlb->page_size != page_size); +#ifndef CONFIG_HAVE_MMU_GATHER_NO_GATHER + tlb_batch_list_free(tlb); #endif - - batch = tlb->active; - /* - * Add the page and check if we are full. If so - * force a flush. 
- */
-	batch->pages[batch->nr++] = page;
-	if (batch->nr == batch->max) {
-		if (!tlb_next_batch(tlb))
-			return true;
-		batch = tlb->active;
-	}
-	VM_BUG_ON_PAGE(batch->nr > batch->max, page);
-
-	return false;
 }
 
 #endif /* HAVE_GENERIC_MMU_GATHER */

From patchwork Wed Sep 26 11:36:37 2018
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 10615793
Message-ID: <20180926114801.260918352@infradead.org>
User-Agent: quilt/0.65
Date: Wed, 26 Sep 2018 13:36:37 +0200
From: Peter Zijlstra
To: will.deacon@arm.com, aneesh.kumar@linux.vnet.ibm.com, akpm@linux-foundation.org, npiggin@gmail.com
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, peterz@infradead.org, linux@armlinux.org.uk, heiko.carstens@de.ibm.com, riel@surriel.com, Linus Torvalds, Martin Schwidefsky
Subject: [PATCH 14/18] s390/tlb: convert to generic mmu_gather
References: <20180926113623.863696043@infradead.org>
MIME-Version: 1.0

From: Martin Schwidefsky

Cc: npiggin@gmail.com
Cc: heiko.carstens@de.ibm.com
Cc: will.deacon@arm.com
Cc: aneesh.kumar@linux.vnet.ibm.com
Cc: akpm@linux-foundation.org
Cc: Linus Torvalds
Cc: linux@armlinux.org.uk
Signed-off-by: Martin Schwidefsky
Signed-off-by: Peter Zijlstra (Intel)
Link: http://lkml.kernel.org/r/20180918125151.31744-3-schwidefsky@de.ibm.com
---
 arch/s390/Kconfig           |    2
 arch/s390/include/asm/tlb.h |  128 +++++++++++++------------------------------
 arch/s390/mm/pgalloc.c      |   63 ---------------------
 3 files changed, 42 insertions(+), 151 deletions(-)

--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -157,10 +157,12 @@ config S390
	select HAVE_MEMBLOCK
	select HAVE_MEMBLOCK_NODE_MAP
	select HAVE_MEMBLOCK_PHYS_MAP
+	select HAVE_MMU_GATHER_NO_GATHER
	select HAVE_MOD_ARCH_SPECIFIC
	select HAVE_NOP_MCOUNT
	select HAVE_OPROFILE
	select HAVE_PERF_EVENTS
+	select HAVE_RCU_TABLE_FREE
	select HAVE_REGS_AND_STACK_ACCESS_API
	select HAVE_RSEQ
	select HAVE_SYSCALL_TRACEPOINTS
--- a/arch/s390/include/asm/tlb.h
+++ b/arch/s390/include/asm/tlb.h
@@ -22,98 +22,39 @@
  * Pages used for the page tables is a different story. FIXME: more
  */
 
-#include
-#include
-#include
-#include
-#include
-#include
-
-struct mmu_gather {
-	struct mm_struct *mm;
-	struct mmu_table_batch *batch;
-	unsigned int fullmm;
-	unsigned long start, end;
-};
-
-struct mmu_table_batch {
-	struct rcu_head rcu;
-	unsigned int nr;
-	void *tables[0];
-};
-
-#define MAX_TABLE_BATCH \
-	((PAGE_SIZE - sizeof(struct mmu_table_batch)) / sizeof(void *))
-
-extern void tlb_table_flush(struct mmu_gather *tlb);
-extern void tlb_remove_table(struct mmu_gather *tlb, void *table);
-
-static inline void
-arch_tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
-		    unsigned long start, unsigned long end)
-{
-	tlb->mm = mm;
-	tlb->start = start;
-	tlb->end = end;
-	tlb->fullmm = !(start | (end+1));
-	tlb->batch = NULL;
-}
-
-static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
-{
-	__tlb_flush_mm_lazy(tlb->mm);
-}
-
-static inline void tlb_flush_mmu_free(struct mmu_gather *tlb)
-{
-	tlb_table_flush(tlb);
-}
-
+void __tlb_remove_table(void *_table);
+static inline void tlb_flush(struct mmu_gather *tlb);
+static inline bool __tlb_remove_page_size(struct mmu_gather *tlb,
+					  struct page *page, int page_size);
 
-static inline void tlb_flush_mmu(struct mmu_gather *tlb)
-{
-	tlb_flush_mmu_tlbonly(tlb);
-	tlb_flush_mmu_free(tlb);
-}
+#define tlb_start_vma(tlb, vma)			do { } while (0)
+#define tlb_end_vma(tlb, vma)			do { } while (0)
 
-static inline void
-arch_tlb_finish_mmu(struct mmu_gather *tlb,
-		unsigned long start, unsigned long end, bool force)
-{
-	if (force) {
-		tlb->start = start;
-		tlb->end = end;
-	}
+#define tlb_flush tlb_flush
+#define pte_free_tlb pte_free_tlb
+#define pmd_free_tlb pmd_free_tlb
+#define p4d_free_tlb p4d_free_tlb
+#define pud_free_tlb pud_free_tlb
 
-	tlb_flush_mmu(tlb);
-}
+#include <asm/pgalloc.h>
+#include <asm/tlbflush.h>
+#include <asm-generic/tlb.h>
 
 /*
  * Release the page cache reference for a pte removed by
  * tlb_ptep_clear_flush. In both flush modes the tlb for a page cache page
  * has already been freed, so just do free_page_and_swap_cache.
  */
-static inline bool __tlb_remove_page(struct mmu_gather *tlb, struct page *page)
-{
-	free_page_and_swap_cache(page);
-	return false; /* avoid calling tlb_flush_mmu */
-}
-
-static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
-{
-	free_page_and_swap_cache(page);
-}
-
 static inline bool __tlb_remove_page_size(struct mmu_gather *tlb,
 					  struct page *page, int page_size)
 {
-	return __tlb_remove_page(tlb, page);
+	free_page_and_swap_cache(page);
+	return false;
 }

-static inline void tlb_remove_page_size(struct mmu_gather *tlb,
-					struct page *page, int page_size)
+static inline void tlb_flush(struct mmu_gather *tlb)
 {
-	return tlb_remove_page(tlb, page);
+	__tlb_flush_mm_lazy(tlb->mm);
 }

 /*
@@ -121,8 +62,17 @@ static inline void tlb_remove_page_size(
  * page table from the tlb.
  */
 static inline void pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
-                                unsigned long address)
+				unsigned long address)
 {
+	__tlb_adjust_range(tlb, address, PAGE_SIZE);
+	tlb->mm->context.flush_mm = 1;
+	tlb->freed_tables = 1;
+	tlb->cleared_ptes = 1;
+	/*
+	 * page_table_free_rcu takes care of the allocation bit masks
+	 * of the 2K table fragments in the 4K page table page,
+	 * then calls tlb_remove_table.
+	 */
 	page_table_free_rcu(tlb, (unsigned long *) pte, address);
 }

@@ -139,6 +89,10 @@ static inline void pmd_free_tlb(struct m
 	if (tlb->mm->context.asce_limit <= _REGION3_SIZE)
 		return;
 	pgtable_pmd_page_dtor(virt_to_page(pmd));
+	__tlb_adjust_range(tlb, address, PAGE_SIZE);
+	tlb->mm->context.flush_mm = 1;
+	tlb->freed_tables = 1;
+	tlb->cleared_puds = 1;
 	tlb_remove_table(tlb, pmd);
 }

@@ -154,6 +108,10 @@ static inline void p4d_free_tlb(struct m
 {
 	if (tlb->mm->context.asce_limit <= _REGION1_SIZE)
 		return;
+	__tlb_adjust_range(tlb, address, PAGE_SIZE);
+	tlb->mm->context.flush_mm = 1;
+	tlb->freed_tables = 1;
+	tlb->cleared_p4ds = 1;
 	tlb_remove_table(tlb, p4d);
 }

@@ -169,19 +127,11 @@ static inline void pud_free_tlb(struct m
 {
 	if (tlb->mm->context.asce_limit <= _REGION2_SIZE)
 		return;
+	tlb->mm->context.flush_mm = 1;
+	tlb->freed_tables = 1;
+	tlb->cleared_puds = 1;
 	tlb_remove_table(tlb, pud);
 }

-#define tlb_start_vma(tlb, vma)			do { } while (0)
-#define tlb_end_vma(tlb, vma)			do { } while (0)
-#define tlb_remove_tlb_entry(tlb, ptep, addr)	do { } while (0)
-#define tlb_remove_pmd_tlb_entry(tlb, pmdp, addr)	do { } while (0)
-#define tlb_migrate_finish(mm)			do { } while (0)
-#define tlb_remove_huge_tlb_entry(h, tlb, ptep, address)	\
-	tlb_remove_tlb_entry(tlb, ptep, address)
-
-static inline void tlb_change_page_size(struct mmu_gather *tlb, unsigned int page_size)
-{
-}
 #endif /* _S390_TLB_H */
--- a/arch/s390/mm/pgalloc.c
+++ b/arch/s390/mm/pgalloc.c
@@ -288,7 +288,7 @@ void page_table_free_rcu(struct mmu_gath
 	tlb_remove_table(tlb, table);
 }

-static void __tlb_remove_table(void *_table)
+void __tlb_remove_table(void *_table)
 {
 	unsigned int mask = (unsigned long) _table & 3;
 	void *table = (void *)((unsigned long) _table ^ mask);
@@ -314,67 +314,6 @@ static void __tlb_remove_table(void *_ta
 	}
 }

-static void tlb_remove_table_smp_sync(void *arg)
-{
-	/* Simply deliver the interrupt */
-}
-
-static void tlb_remove_table_one(void *table)
-{
-	/*
-	 * This isn't an RCU grace period and hence the page-tables cannot be
-	 * assumed to be actually RCU-freed.
-	 *
-	 * It is however sufficient for software page-table walkers that rely
-	 * on IRQ disabling. See the comment near struct mmu_table_batch.
-	 */
-	smp_call_function(tlb_remove_table_smp_sync, NULL, 1);
-	__tlb_remove_table(table);
-}
-
-static void tlb_remove_table_rcu(struct rcu_head *head)
-{
-	struct mmu_table_batch *batch;
-	int i;
-
-	batch = container_of(head, struct mmu_table_batch, rcu);
-
-	for (i = 0; i < batch->nr; i++)
-		__tlb_remove_table(batch->tables[i]);
-
-	free_page((unsigned long)batch);
-}
-
-void tlb_table_flush(struct mmu_gather *tlb)
-{
-	struct mmu_table_batch **batch = &tlb->batch;
-
-	if (*batch) {
-		call_rcu_sched(&(*batch)->rcu, tlb_remove_table_rcu);
-		*batch = NULL;
-	}
-}
-
-void tlb_remove_table(struct mmu_gather *tlb, void *table)
-{
-	struct mmu_table_batch **batch = &tlb->batch;
-
-	tlb->mm->context.flush_mm = 1;
-	if (*batch == NULL) {
-		*batch = (struct mmu_table_batch *)
-			__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
-		if (*batch == NULL) {
-			__tlb_flush_mm_lazy(tlb->mm);
-			tlb_remove_table_one(table);
-			return;
-		}
-		(*batch)->nr = 0;
-	}
-	(*batch)->tables[(*batch)->nr++] = table;
-	if ((*batch)->nr == MAX_TABLE_BATCH)
-		tlb_flush_mmu(tlb);
-}
-
 /*
  * Base infrastructure required to generate basic asces, region, segment,
  * and page tables that do not make use of enhanced features like EDAT1.
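All four converted pXd_free_tlb() helpers above follow one pattern: widen the
range the pending flush must cover, mark the gather so the generic code knows
a page table went away, then queue the table for freeing. Below is a minimal,
self-contained user-space model of that pattern; every name and type in it is
a simplified stand-in invented for illustration, not the kernel's.

#include <stdio.h>
#include <stdbool.h>

struct gather_model {
	unsigned long start, end;	/* range the final flush must cover */
	bool freed_tables;		/* a page-table page was freed */
	bool cleared_ptes;		/* the PTE level was changed */
};

static void adjust_range(struct gather_model *tlb,
			 unsigned long addr, unsigned long size)
{
	if (addr < tlb->start)
		tlb->start = addr;
	if (addr + size > tlb->end)
		tlb->end = addr + size;
}

/* mirrors the shape of the converted pte_free_tlb() above */
static void pte_free_model(struct gather_model *tlb, unsigned long addr)
{
	adjust_range(tlb, addr, 4096);	/* widen the pending flush */
	tlb->freed_tables = true;	/* concurrent walkers must be fenced */
	tlb->cleared_ptes = true;
	/* the real helper now queues the table via page_table_free_rcu() */
}

int main(void)
{
	struct gather_model tlb = { .start = ~0UL, .end = 0 };

	pte_free_model(&tlb, 0x1000);
	pte_free_model(&tlb, 0x5000);
	printf("pending flush covers [%#lx, %#lx), freed_tables=%d\n",
	       tlb.start, tlb.end, (int)tlb.freed_tables);
	return 0;
}

The two calls accumulate into a single [0x1000, 0x6000) flush, which is the
point of the gather: one invalidate at the end instead of one per table.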
From patchwork Wed Sep 26 11:36:38 2018
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 10615821
Message-ID: <20180926114801.314124744@infradead.org>
Date: Wed, 26 Sep 2018 13:36:38 +0200
From: Peter Zijlstra
To: will.deacon@arm.com, aneesh.kumar@linux.vnet.ibm.com, akpm@linux-foundation.org, npiggin@gmail.com
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, peterz@infradead.org, linux@armlinux.org.uk, heiko.carstens@de.ibm.com, riel@surriel.com
Subject: [PATCH 15/18] asm-generic/tlb: Remove arch_tlb*_mmu()
References: <20180926113623.863696043@infradead.org>
Now that all architectures are converted to the generic code, remove
the arch hooks.

Signed-off-by: Peter Zijlstra (Intel)
---
 mm/mmu_gather.c |   93 +++++++++++++++++++++++++-------------------------
 1 file changed, 42 insertions(+), 51 deletions(-)

--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -93,33 +93,6 @@ bool __tlb_remove_page_size(struct mmu_g

 #endif /* HAVE_MMU_GATHER_NO_GATHER */

-void arch_tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
-			 unsigned long start, unsigned long end)
-{
-	tlb->mm = mm;
-
-	/* Is it from 0 to ~0? */
-	tlb->fullmm = !(start | (end+1));
-
-#ifndef CONFIG_HAVE_MMU_GATHER_NO_GATHER
-	tlb->need_flush_all = 0;
-	tlb->local.next = NULL;
-	tlb->local.nr   = 0;
-	tlb->local.max  = ARRAY_SIZE(tlb->__pages);
-	tlb->active     = &tlb->local;
-	tlb->batch_count = 0;
-#endif
-
-#ifdef CONFIG_HAVE_RCU_TABLE_FREE
-	tlb->batch = NULL;
-#endif
-#ifdef CONFIG_HAVE_MMU_GATHER_PAGE_SIZE
-	tlb->page_size = 0;
-#endif
-
-	__tlb_reset_range(tlb);
-}
-
 void tlb_flush_mmu_free(struct mmu_gather *tlb)
 {
 #ifdef CONFIG_HAVE_RCU_TABLE_FREE
@@ -136,27 +109,6 @@ void tlb_flush_mmu(struct mmu_gather *tl
 	tlb_flush_mmu_free(tlb);
 }

-/* tlb_finish_mmu
- *	Called at the end of the shootdown operation to free up any resources
- *	that were required.
- */
-void arch_tlb_finish_mmu(struct mmu_gather *tlb,
-		unsigned long start, unsigned long end, bool force)
-{
-	if (force) {
-		__tlb_reset_range(tlb);
-		__tlb_adjust_range(tlb, start, end - start);
-	}
-
-	tlb_flush_mmu(tlb);
-
-	/* keep the page table cache within bounds */
-	check_pgt_cache();
-#ifndef CONFIG_HAVE_MMU_GATHER_NO_GATHER
-	tlb_batch_list_free(tlb);
-#endif
-}
-
 #endif /* HAVE_GENERIC_MMU_GATHER */

 #ifdef CONFIG_HAVE_RCU_TABLE_FREE

@@ -258,10 +210,40 @@ void tlb_remove_table
 void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
 			unsigned long start, unsigned long end)
 {
-	arch_tlb_gather_mmu(tlb, mm, start, end);
+	tlb->mm = mm;
+
+	/* Is it from 0 to ~0? */
+	tlb->fullmm = !(start | (end+1));
+
+#ifndef CONFIG_HAVE_MMU_GATHER_NO_GATHER
+	tlb->need_flush_all = 0;
+	tlb->local.next = NULL;
+	tlb->local.nr   = 0;
+	tlb->local.max  = ARRAY_SIZE(tlb->__pages);
+	tlb->active     = &tlb->local;
+	tlb->batch_count = 0;
+#endif
+
+#ifdef CONFIG_HAVE_RCU_TABLE_FREE
+	tlb->batch = NULL;
+#endif
+#ifdef CONFIG_HAVE_MMU_GATHER_PAGE_SIZE
+	tlb->page_size = 0;
+#endif
+
+	__tlb_reset_range(tlb);
 	inc_tlb_flush_pending(tlb->mm);
 }

+/**
+ * tlb_finish_mmu - finish an mmu_gather structure
+ * @tlb: the mmu_gather structure to finish
+ * @start: start of the region that will be removed from the page-table
+ * @end: end of the region that will be removed from the page-table
+ *
+ * Called at the end of the shootdown operation to free up any resources that
+ * were required.
+ */
 void tlb_finish_mmu(struct mmu_gather *tlb,
 		unsigned long start, unsigned long end)
 {
@@ -272,8 +254,17 @@ void tlb_finish_mmu(struct mmu_gather *t
 	 * the TLB by observing pte_none|!pte_dirty, for example so flush TLB
 	 * forcefully if we detect parallel PTE batching threads.
 	 */
-	bool force = mm_tlb_flush_nested(tlb->mm);
+	if (mm_tlb_flush_nested(tlb->mm)) {
+		__tlb_reset_range(tlb);
+		__tlb_adjust_range(tlb, start, end - start);
+	}

-	arch_tlb_finish_mmu(tlb, start, end, force);
+	tlb_flush_mmu(tlb);
+
+	/* keep the page table cache within bounds */
+	check_pgt_cache();
+#ifndef CONFIG_HAVE_MMU_GATHER_NO_GATHER
+	tlb_batch_list_free(tlb);
+#endif
 	dec_tlb_flush_pending(tlb->mm);
 }
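With the arch hooks folded away, tlb_gather_mmu() and tlb_finish_mmu() are
the whole of the public interface: begin a gather, unmap, finish. The
following is a rough, self-contained model of that pairing, including the
parallel-batching check that tlb_finish_mmu() now carries inline. The types
and the printf stand in for the real flush machinery, so treat it purely as
a sketch of the control flow, not kernel code.

#include <stdio.h>

struct mm_model { int tlb_flush_pending; };

struct gather_model {
	struct mm_model *mm;
	unsigned long start, end;
};

static void tlb_gather_model(struct gather_model *tlb, struct mm_model *mm,
			     unsigned long start, unsigned long end)
{
	tlb->mm = mm;
	tlb->start = start;
	tlb->end = end;
	mm->tlb_flush_pending++;	/* models inc_tlb_flush_pending() */
}

static void tlb_finish_model(struct gather_model *tlb,
			     unsigned long start, unsigned long end)
{
	/*
	 * If another thread batched PTEs on this mm concurrently, flush
	 * the full caller-supplied range rather than only what this
	 * gather accumulated (models the mm_tlb_flush_nested() check).
	 */
	if (tlb->mm->tlb_flush_pending > 1) {
		tlb->start = start;
		tlb->end = end;
	}
	printf("flush + free for [%#lx, %#lx)\n", tlb->start, tlb->end);
	tlb->mm->tlb_flush_pending--;	/* models dec_tlb_flush_pending() */
}

int main(void)
{
	struct mm_model mm = { 0 };
	struct gather_model tlb;

	tlb_gather_model(&tlb, &mm, 0x1000, 0x9000);
	/* ... unmap pages here, adjusting tlb.start/tlb.end ... */
	tlb_finish_model(&tlb, 0x1000, 0x9000);
	return 0;
}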
From patchwork Wed Sep 26 11:36:39 2018
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 10615817
Message-ID: <20180926114801.366086396@infradead.org>
Date: Wed, 26 Sep 2018 13:36:39 +0200
From: Peter Zijlstra
To: will.deacon@arm.com, aneesh.kumar@linux.vnet.ibm.com, akpm@linux-foundation.org, npiggin@gmail.com
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, peterz@infradead.org, linux@armlinux.org.uk, heiko.carstens@de.ibm.com, riel@surriel.com
Subject: [PATCH 16/18] asm-generic/tlb: Remove HAVE_GENERIC_MMU_GATHER
References: <20180926113623.863696043@infradead.org>
Since all architectures are now using it, it is redundant.

Signed-off-by: Peter Zijlstra (Intel)
---
 include/asm-generic/tlb.h |    1 -
 mm/mmu_gather.c           |    4 ----
 2 files changed, 5 deletions(-)

--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -139,7 +139,6 @@
  * page-tables natively.
  *
  */
-#define HAVE_GENERIC_MMU_GATHER

 #ifdef CONFIG_HAVE_RCU_TABLE_FREE
 /*
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -11,8 +11,6 @@
 #include
 #include

-#ifdef HAVE_GENERIC_MMU_GATHER
-
 #ifndef CONFIG_HAVE_MMU_GATHER_NO_GATHER

 static bool tlb_next_batch(struct mmu_gather *tlb)
@@ -109,8 +107,6 @@ void tlb_flush_mmu(struct mmu_gather *tl
 	tlb_flush_mmu_free(tlb);
 }

-#endif /* HAVE_GENERIC_MMU_GATHER */
-
 #ifdef CONFIG_HAVE_RCU_TABLE_FREE

 /*
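What remains after this patch is purely Kconfig-driven variability: an
architecture selects a symbol (HAVE_RCU_TABLE_FREE, HAVE_MMU_GATHER_NO_GATHER)
and the generic code keys off the resulting CONFIG_ macro, instead of an
ad-hoc #define like the one removed here. A toy illustration of that opt-in
pattern, compilable on its own; build with or without
-DCONFIG_HAVE_RCU_TABLE_FREE to model the architecture's Kconfig select.

#include <stdio.h>

/*
 * Toy model only: the CONFIG_ macro would normally come from Kconfig
 * ("select HAVE_RCU_TABLE_FREE" in the arch entry), not a -D flag.
 */
static void free_table(void *table)
{
#ifdef CONFIG_HAVE_RCU_TABLE_FREE
	/* the real code defers the free behind an RCU grace period */
	printf("defer table %p behind an RCU grace period\n", table);
#else
	/* otherwise the table can be freed synchronously */
	printf("free table %p synchronously\n", table);
#endif
}

int main(void)
{
	int dummy;

	free_table(&dummy);
	return 0;
}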
From patchwork Wed Sep 26 11:36:40 2018
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 10615819
Message-ID: <20180926114801.417460864@infradead.org>
Date: Wed, 26 Sep 2018 13:36:40 +0200
From: Peter Zijlstra
To: will.deacon@arm.com, aneesh.kumar@linux.vnet.ibm.com, akpm@linux-foundation.org, npiggin@gmail.com
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, peterz@infradead.org, linux@armlinux.org.uk, heiko.carstens@de.ibm.com, riel@surriel.com
Subject: [PATCH 17/18] asm-generic/tlb: Remove tlb_flush_mmu_free()
References: <20180926113623.863696043@infradead.org>

As the comment notes, it is a potentially dangerous operation. Just use
tlb_flush_mmu(); it will skip the (double) TLB invalidate if it really
isn't needed anyway.

Signed-off-by: Peter Zijlstra (Intel)
---
 include/asm-generic/tlb.h |   10 +++-------
 mm/memory.c               |    2 +-
 mm/mmu_gather.c           |    2 +-
 3 files changed, 5 insertions(+), 9 deletions(-)

--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -67,16 +67,13 @@
  * call before __tlb_remove_page*() to set the current page-size; implies a
  * possible tlb_flush_mmu() call.
  *
- * - tlb_flush_mmu() / tlb_flush_mmu_tlbonly() / tlb_flush_mmu_free()
+ * - tlb_flush_mmu() / tlb_flush_mmu_tlbonly()
  *
  *  tlb_flush_mmu_tlbonly() - does the TLB invalidate (and resets
  *                            related state, like the range)
  *
- *  tlb_flush_mmu_free() - frees the queued pages; make absolutely
- *                         sure no additional tlb_remove_page()
- *                         calls happen between _tlbonly() and this.
- *
- *  tlb_flush_mmu() - the above two calls.
+ *  tlb_flush_mmu() - in addition to the above TLB invalidate, also frees
+ *                    whatever pages are still batched.
  *
  * - mmu_gather::fullmm
  *
@@ -274,7 +271,6 @@ void arch_tlb_gather_mmu(struct mmu_gath
 void tlb_flush_mmu(struct mmu_gather *tlb);
 void arch_tlb_finish_mmu(struct mmu_gather *tlb,
 			 unsigned long start, unsigned long end, bool force);
-void tlb_flush_mmu_free(struct mmu_gather *tlb);

 static inline void __tlb_adjust_range(struct mmu_gather *tlb,
 				      unsigned long address,
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1155,7 +1155,7 @@ static unsigned long zap_pte_range(struc
 	 */
 	if (force_flush) {
 		force_flush = 0;
-		tlb_flush_mmu_free(tlb);
+		tlb_flush_mmu(tlb);
 		if (addr != end)
 			goto again;
 	}
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -91,7 +91,7 @@ bool __tlb_remove_page_size(struct mmu_g

 #endif /* HAVE_MMU_GATHER_NO_GATHER */

-void tlb_flush_mmu_free(struct mmu_gather *tlb)
+static void tlb_flush_mmu_free(struct mmu_gather *tlb)
 {
 #ifdef CONFIG_HAVE_RCU_TABLE_FREE
 	tlb_table_flush(tlb);
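The danger the changelog alludes to is one of ordering: pages queued on the
gather can still be reachable through stale TLB entries until the invalidate
has run, so the free must always come second, which is exactly what
tlb_flush_mmu() guarantees. A small stand-alone model of that invariant
follows; the flags are simplified inventions, not the kernel implementation.

#include <stdio.h>

struct gather_model {
	int batched_pages;	/* pages queued for freeing */
	int tlb_stale;		/* other CPUs may hold old translations */
};

static void flush_tlb_only(struct gather_model *tlb)
{
	tlb->tlb_stale = 0;	/* models tlb_flush_mmu_tlbonly() */
}

static void free_batched_pages(struct gather_model *tlb)
{
	if (tlb->tlb_stale)
		puts("BUG: freeing pages still reachable via stale TLB entries");
	tlb->batched_pages = 0;
}

/* models tlb_flush_mmu(): invalidate first, free second, always */
static void flush_mmu_model(struct gather_model *tlb)
{
	flush_tlb_only(tlb);
	free_batched_pages(tlb);
}

int main(void)
{
	struct gather_model tlb = { .batched_pages = 32, .tlb_stale = 1 };

	flush_mmu_model(&tlb);	/* safe: never frees behind a stale TLB */
	printf("batched pages left: %d\n", tlb.batched_pages);
	return 0;
}

Calling free_batched_pages() on its own, as the removed tlb_flush_mmu_free()
allowed, would trip the BUG path above; with the single entry point the
invalidate-then-free order cannot be violated by callers.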
From patchwork Wed Sep 26 11:36:41 2018
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 10615807
Message-ID: <20180926114801.468888082@infradead.org>
Date: Wed, 26 Sep 2018 13:36:41 +0200
From: Peter Zijlstra
To: will.deacon@arm.com, aneesh.kumar@linux.vnet.ibm.com, akpm@linux-foundation.org, npiggin@gmail.com
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, peterz@infradead.org, linux@armlinux.org.uk, heiko.carstens@de.ibm.com, riel@surriel.com
Subject: [PATCH 18/18] asm-generic/tlb: Remove tlb_table_flush()
References: <20180926113623.863696043@infradead.org>

There are no external users of this API (nor should there be); remove it.
Signed-off-by: Peter Zijlstra (Intel)
---
 include/asm-generic/tlb.h |    1 -
 mm/mmu_gather.c           |   34 +++++++++++++++++-----------------
 2 files changed, 17 insertions(+), 18 deletions(-)

--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -174,7 +174,6 @@ struct mmu_table_batch {
 #define MAX_TABLE_BATCH \
 	((PAGE_SIZE - sizeof(struct mmu_table_batch)) / sizeof(void *))

-extern void tlb_table_flush(struct mmu_gather *tlb);
 extern void tlb_remove_table(struct mmu_gather *tlb, void *table);

 #endif
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -91,22 +91,6 @@ bool __tlb_remove_page_size(struct mmu_g

 #endif /* HAVE_MMU_GATHER_NO_GATHER */

-static void tlb_flush_mmu_free(struct mmu_gather *tlb)
-{
-#ifdef CONFIG_HAVE_RCU_TABLE_FREE
-	tlb_table_flush(tlb);
-#endif
-#ifndef CONFIG_HAVE_MMU_GATHER_NO_GATHER
-	tlb_batch_pages_flush(tlb);
-#endif
-}
-
-void tlb_flush_mmu(struct mmu_gather *tlb)
-{
-	tlb_flush_mmu_tlbonly(tlb);
-	tlb_flush_mmu_free(tlb);
-}
-
 #ifdef CONFIG_HAVE_RCU_TABLE_FREE

 /*
@@ -159,7 +143,7 @@ static void tlb_remove_table_rcu(struct
 	free_page((unsigned long)batch);
 }

-void tlb_table_flush(struct mmu_gather *tlb)
+static void tlb_table_flush(struct mmu_gather *tlb)
 {
 	struct mmu_table_batch **batch = &tlb->batch;

@@ -191,6 +175,22 @@ void tlb_remove_table

 #endif /* CONFIG_HAVE_RCU_TABLE_FREE */

+static void tlb_flush_mmu_free(struct mmu_gather *tlb)
+{
+#ifdef CONFIG_HAVE_RCU_TABLE_FREE
+	tlb_table_flush(tlb);
+#endif
+#ifndef CONFIG_HAVE_MMU_GATHER_NO_GATHER
+	tlb_batch_pages_flush(tlb);
+#endif
+}
+
+void tlb_flush_mmu(struct mmu_gather *tlb)
+{
+	tlb_flush_mmu_tlbonly(tlb);
+	tlb_flush_mmu_free(tlb);
+}
+
 /**
  * tlb_gather_mmu - initialize an mmu_gather structure for page-table tear-down
  * @tlb: the mmu_gather structure to initialize
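For context, the batching that the now-static tlb_table_flush() drains works
roughly as follows. This self-contained sketch models tlb_remove_table()'s
batch-or-fallback logic with the RCU grace period replaced by a direct call;
every name in it is a simplified stand-in for illustration only.

#include <stdio.h>
#include <stdlib.h>

#define MAX_TABLE_BATCH_MODEL 4	/* the kernel sizes this to one page */

struct table_batch {
	int nr;
	void *tables[MAX_TABLE_BATCH_MODEL];
};

static void free_table_now(void *table)
{
	printf("freeing table %p\n", table);
}

/* models tlb_table_flush(): hand the batch off and forget it */
static void table_flush(struct table_batch **batch)
{
	int i;

	if (!*batch)
		return;
	/* the kernel defers this loop behind call_rcu_sched() instead */
	for (i = 0; i < (*batch)->nr; i++)
		free_table_now((*batch)->tables[i]);
	free(*batch);
	*batch = NULL;
}

/* models tlb_remove_table(): batch if possible, fall back if not */
static void remove_table(struct table_batch **batch, void *table)
{
	if (!*batch) {
		*batch = calloc(1, sizeof(**batch));
		if (!*batch) {
			/* allocation failed: the kernel falls back to an
			 * IPI broadcast and frees the table right away */
			free_table_now(table);
			return;
		}
	}
	(*batch)->tables[(*batch)->nr++] = table;
	if ((*batch)->nr == MAX_TABLE_BATCH_MODEL)
		table_flush(batch);
}

int main(void)
{
	struct table_batch *batch = NULL;
	int t[6], i;

	for (i = 0; i < 6; i++)
		remove_table(&batch, &t[i]);
	table_flush(&batch);	/* final drain, as tlb_flush_mmu() would do */
	return 0;
}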