From patchwork Thu Sep 13 09:21:11 2018
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 10599105
Message-ID: <20180913092811.894806629@infradead.org>
Date: Thu, 13 Sep 2018 11:21:11 +0200
From: Peter Zijlstra
To: will.deacon@arm.com, aneesh.kumar@linux.vnet.ibm.com, akpm@linux-foundation.org, npiggin@gmail.com
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, peterz@infradead.org, linux@armlinux.org.uk, heiko.carstens@de.ibm.com
Subject: [RFC][PATCH 01/11] asm-generic/tlb: Provide a comment
References: <20180913092110.817204997@infradead.org>

Write a comment explaining some of this..

Cc: Will Deacon
Cc: "Aneesh Kumar K.V"
Cc: Andrew Morton
Cc: Nick Piggin
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Martin Schwidefsky
---
 include/asm-generic/tlb.h | 120 ++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 117 insertions(+), 3 deletions(-)

--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -22,6 +22,119 @@
 
 #ifdef CONFIG_MMU
 
+/*
+ * Generic MMU-gather implementation.
+ *
+ * The mmu_gather data structure is used by the mm code to implement the
+ * correct and efficient ordering of freeing pages and TLB invalidations.
+ *
+ * This correct ordering is:
+ *
+ *  1) unhook page
+ *  2) TLB invalidate page
+ *  3) free page
+ *
+ * That is, we must never free a page before we have ensured there are no live
+ * translations left to it. Otherwise it might be possible to observe (or
+ * worse, change) the page content after it has been reused.
+ *
+ * The mmu_gather API consists of:
+ *
+ *  - tlb_gather_mmu() / tlb_finish_mmu(); start and finish a mmu_gather
+ *
+ *    Finish in particular will issue a (final) TLB invalidate and free
+ *    all (remaining) queued pages.
+ *
+ *  - tlb_start_vma() / tlb_end_vma(); marks the start / end of a VMA
+ *
+ *    Defaults to flushing at tlb_end_vma() to reset the range; helps when
+ *    there are large holes between the VMAs.
+ *
+ *  - tlb_remove_page() / __tlb_remove_page()
+ *  - tlb_remove_page_size() / __tlb_remove_page_size()
+ *
+ *    __tlb_remove_page_size() is the basic primitive that queues a page for
+ *    freeing. __tlb_remove_page() assumes PAGE_SIZE. Both will return a
+ *    boolean indicating if the queue is (now) full and a call to
+ *    tlb_flush_mmu() is required.
+ *
+ *    tlb_remove_page() and tlb_remove_page_size() imply the call to
+ *    tlb_flush_mmu() when required and have no return value.
+ *
+ *  - tlb_change_page_size()
+ *
+ *    call before __tlb_remove_page*() to set the current page-size; implies a
+ *    possible tlb_flush_mmu() call.
+ *
+ *  - tlb_flush_mmu() / tlb_flush_mmu_tlbonly() / tlb_flush_mmu_free()
+ *
+ *    tlb_flush_mmu_tlbonly() - does the TLB invalidate (and resets
+ *                              related state, like the range)
+ *
+ *    tlb_flush_mmu_free() - frees the queued pages; make absolutely
+ *                           sure no additional tlb_remove_page()
+ *                           calls happen between _tlbonly() and this.
+ *
+ *    tlb_flush_mmu() - the above two calls.
+ *
+ *  - mmu_gather::fullmm
+ *
+ *    A flag set by tlb_gather_mmu() to indicate we're going to free
+ *    the entire mm; this allows a number of optimizations.
+ *
+ *    XXX list optimizations
+ *
+ *  - mmu_gather::need_flush_all
+ *
+ *    A flag that can be set by the arch code if it wants to force
+ *    flush the entire TLB irrespective of the range. For instance
+ *    x86-PAE needs this when changing top-level entries.
+ *
+ * And requires the architecture to provide and implement tlb_flush().
+ *
+ * tlb_flush() may, in addition to the above mentioned mmu_gather fields, make
+ * use of:
+ *
+ *  - mmu_gather::start / mmu_gather::end
+ *
+ *    which (when !need_flush_all; fullmm will have start = end = ~0UL) provides
+ *    the range that needs to be flushed to cover the pages to be freed.
+ *
+ *  - mmu_gather::freed_tables
+ *
+ *    set when we freed page table pages
+ *
+ *  - tlb_get_unmap_shift() / tlb_get_unmap_size()
+ *
+ *    returns the smallest TLB entry size unmapped in this range
+ *
+ * Additionally there are a few opt-in features:
+ *
+ *  HAVE_MMU_GATHER_PAGE_SIZE
+ *
+ *  This ensures we call tlb_flush() every time tlb_change_page_size() actually
+ *  changes the size and provides mmu_gather::page_size to tlb_flush().
+ *
+ *  HAVE_RCU_TABLE_FREE
+ *
+ *  This provides tlb_remove_table(), to be used instead of tlb_remove_page()
+ *  for page directories (__p*_free_tlb()). This provides separate freeing of
+ *  the page-table pages themselves in a semi-RCU fashion (see comment below).
+ *  Useful if your architecture doesn't use IPIs for remote TLB invalidates
+ *  and therefore doesn't naturally serialize with software page-table walkers.
+ *
+ *  When used, an architecture is expected to provide __tlb_remove_table()
+ *  which does the actual freeing of these pages.
+ *
+ *  HAVE_RCU_TABLE_INVALIDATE
+ *
+ *  This makes HAVE_RCU_TABLE_FREE call tlb_flush_mmu_tlbonly() before freeing
+ *  the page-table pages. Required if you use HAVE_RCU_TABLE_FREE and your
+ *  architecture uses the Linux page-tables natively.
+ *
+ */
+#define HAVE_GENERIC_MMU_GATHER
+
 #ifdef CONFIG_HAVE_RCU_TABLE_FREE
 /*
  * Semi RCU freeing of the page directories.
@@ -89,14 +202,17 @@ struct mmu_gather_batch {
  */
 #define MAX_GATHER_BATCH_COUNT	(10000UL/MAX_GATHER_BATCH)
 
-/* struct mmu_gather is an opaque type used by the mm code for passing around
+/*
+ * struct mmu_gather is an opaque type used by the mm code for passing around
  * any data needed by arch specific code for tlb_remove_page.
  */
 struct mmu_gather {
 	struct mm_struct	*mm;
+
 #ifdef CONFIG_HAVE_RCU_TABLE_FREE
 	struct mmu_table_batch	*batch;
 #endif
+
 	unsigned long		start;
 	unsigned long		end;
 	/*
@@ -131,8 +247,6 @@ struct mmu_gather {
 	int page_size;
 };
 
-#define HAVE_GENERIC_MMU_GATHER
-
 void arch_tlb_gather_mmu(struct mmu_gather *tlb,
 	struct mm_struct *mm, unsigned long start, unsigned long end);
 void tlb_flush_mmu(struct mmu_gather *tlb);
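
[ Illustrative sketch, not part of the patch: how an unmap path strings
  the API documented above together. The PTE walk and locking are elided
  and unmap_range_sketch is a made-up name; the mmu_gather calls and
  their signatures are the real ones at the time of this series. ]

	static void unmap_range_sketch(struct mm_struct *mm,
				       struct vm_area_struct *vma,
				       unsigned long start, unsigned long end)
	{
		struct mmu_gather tlb;

		tlb_gather_mmu(&tlb, mm, start, end);
		tlb_start_vma(&tlb, vma);

		/*
		 * 1) unhook pages: clear each PTE and queue the page; when
		 *    the queue fills up, __tlb_remove_page() returns true
		 *    and tlb_flush_mmu() must run before queueing more.
		 */
		/* for each present pte in [start, end):	*/
		/*	if (__tlb_remove_page(&tlb, page))	*/
		/*		tlb_flush_mmu(&tlb);		*/

		tlb_end_vma(&tlb, vma);		  /* 2) TLB invalidate the range */
		tlb_finish_mmu(&tlb, start, end); /* 3) final flush + free pages */
	}
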
From patchwork Thu Sep 13 09:21:12 2018
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 10599089
Message-ID: <20180913092811.955706111@infradead.org>
Date: Thu, 13 Sep 2018 11:21:12 +0200
From: Peter Zijlstra
To: will.deacon@arm.com, aneesh.kumar@linux.vnet.ibm.com, akpm@linux-foundation.org, npiggin@gmail.com
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, peterz@infradead.org, linux@armlinux.org.uk, heiko.carstens@de.ibm.com
Subject: [RFC][PATCH 02/11] asm-generic/tlb: Provide HAVE_MMU_GATHER_PAGE_SIZE
References: <20180913092110.817204997@infradead.org>

Move the mmu_gather::page_size things into the generic code instead of
powerpc specific bits.

Cc: Will Deacon
Cc: "Aneesh Kumar K.V"
Cc: Andrew Morton
Cc: Nick Piggin
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Will Deacon
---
 arch/Kconfig                   |    3 +++
 arch/arm/include/asm/tlb.h     |    3 +--
 arch/ia64/include/asm/tlb.h    |    3 +--
 arch/powerpc/Kconfig           |    1 +
 arch/powerpc/include/asm/tlb.h |   17 -----------------
 arch/s390/include/asm/tlb.h    |    4 +---
 arch/sh/include/asm/tlb.h      |    4 +---
 arch/um/include/asm/tlb.h      |    4 +---
 include/asm-generic/tlb.h      |   25 +++++++++++++------------
 mm/huge_memory.c               |    4 ++--
 mm/hugetlb.c                   |    2 +-
 mm/madvise.c                   |    2 +-
 mm/memory.c                    |    4 ++--
 mm/mmu_gather.c                |    5 +++++
 14 files changed, 33 insertions(+), 48 deletions(-)

--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -365,6 +365,9 @@ config HAVE_RCU_TABLE_FREE
 config HAVE_RCU_TABLE_INVALIDATE
 	bool
 
+config HAVE_MMU_GATHER_PAGE_SIZE
+	bool
+
 config ARCH_HAVE_NMI_SAFE_CMPXCHG
 	bool
 
--- a/arch/arm/include/asm/tlb.h
+++ b/arch/arm/include/asm/tlb.h
@@ -286,8 +286,7 @@ tlb_remove_pmd_tlb_entry(struct mmu_gath
 
 #define tlb_migrate_finish(mm)		do { } while (0)
 
-#define tlb_remove_check_page_size_change tlb_remove_check_page_size_change
-static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
+static inline void tlb_change_page_size(struct mmu_gather *tlb,
 						     unsigned int page_size)
 {
 }
--- a/arch/ia64/include/asm/tlb.h
+++ b/arch/ia64/include/asm/tlb.h
@@ -282,8 +282,7 @@ do {							\
 #define tlb_remove_huge_tlb_entry(h, tlb, ptep, address)	\
 	tlb_remove_tlb_entry(tlb, ptep, address)
 
-#define tlb_remove_check_page_size_change tlb_remove_check_page_size_change
-static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
+static inline void tlb_change_page_size(struct mmu_gather *tlb,
 						     unsigned int page_size)
 {
 }
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -216,6 +216,7 @@ config PPC
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
 	select HAVE_RCU_TABLE_FREE		if SMP
+	select HAVE_MMU_GATHER_PAGE_SIZE
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_RELIABLE_STACKTRACE		if PPC64 && CPU_LITTLE_ENDIAN
 	select HAVE_SYSCALL_TRACEPOINTS
--- a/arch/powerpc/include/asm/tlb.h
+++ b/arch/powerpc/include/asm/tlb.h
@@ -27,7 +27,6 @@
 #define tlb_start_vma(tlb, vma)	do { } while (0)
 #define tlb_end_vma(tlb, vma)	do { } while (0)
 #define __tlb_remove_tlb_entry	__tlb_remove_tlb_entry
-#define tlb_remove_check_page_size_change tlb_remove_check_page_size_change
 
 extern void tlb_flush(struct mmu_gather *tlb);
 
@@ -46,22 +45,6 @@ static inline void __tlb_remove_tlb_entr
 #endif
 }
 
-static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
-						     unsigned int page_size)
-{
-	if (!tlb->page_size)
-		tlb->page_size = page_size;
-	else if (tlb->page_size != page_size) {
-		if (!tlb->fullmm)
-			tlb_flush_mmu(tlb);
-		/*
-		 * update the page size after flush for the new
-		 * mmu_gather.
-		 */
-		tlb->page_size = page_size;
-	}
-}
-
 #ifdef CONFIG_SMP
 static inline int mm_is_core_local(struct mm_struct *mm)
 {
--- a/arch/s390/include/asm/tlb.h
+++ b/arch/s390/include/asm/tlb.h
@@ -180,9 +180,7 @@ static inline void pud_free_tlb(struct m
 #define tlb_remove_huge_tlb_entry(h, tlb, ptep, address)	\
 	tlb_remove_tlb_entry(tlb, ptep, address)
 
-#define tlb_remove_check_page_size_change tlb_remove_check_page_size_change
-static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
-						     unsigned int page_size)
+static inline void tlb_change_page_size(struct mmu_gather *tlb, unsigned int page_size)
 {
 }
 
--- a/arch/sh/include/asm/tlb.h
+++ b/arch/sh/include/asm/tlb.h
@@ -127,9 +127,7 @@ static inline void tlb_remove_page_size(
 	return tlb_remove_page(tlb, page);
 }
 
-#define tlb_remove_check_page_size_change tlb_remove_check_page_size_change
-static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
-						     unsigned int page_size)
+static inline void tlb_change_page_size(struct mmu_gather *tlb, unsigned int page_size)
 {
 }
 
--- a/arch/um/include/asm/tlb.h
+++ b/arch/um/include/asm/tlb.h
@@ -146,9 +146,7 @@ static inline void tlb_remove_page_size(
 #define tlb_remove_huge_tlb_entry(h, tlb, ptep, address)	\
 	tlb_remove_tlb_entry(tlb, ptep, address)
 
-#define tlb_remove_check_page_size_change tlb_remove_check_page_size_change
-static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
-						     unsigned int page_size)
+static inline void tlb_change_page_size(struct mmu_gather *tlb, unsigned int page_size)
 {
 }
 
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -240,11 +240,15 @@ struct mmu_gather {
 	unsigned int		cleared_puds : 1;
 	unsigned int		cleared_p4ds : 1;
 
+	unsigned int		batch_count;
+
 	struct mmu_gather_batch *active;
 	struct mmu_gather_batch	local;
 	struct page		*__pages[MMU_GATHER_BUNDLE];
-	unsigned int		batch_count;
-	int page_size;
+
+#ifdef CONFIG_HAVE_MMU_GATHER_PAGE_SIZE
+	unsigned int		page_size;
+#endif
 };
 
 void arch_tlb_gather_mmu(struct mmu_gather *tlb,
@@ -310,21 +314,18 @@ static inline void tlb_remove_page(struc
 	return tlb_remove_page_size(tlb, page, PAGE_SIZE);
 }
 
-#ifndef tlb_remove_check_page_size_change
-#define tlb_remove_check_page_size_change tlb_remove_check_page_size_change
-static inline void tlb_remove_check_page_size_change(struct mmu_gather *tlb,
+static inline void tlb_change_page_size(struct mmu_gather *tlb,
 					     unsigned int page_size)
 {
-	/*
-	 * We don't care about page size change, just update
-	 * mmu_gather page size here so that debug checks
-	 * doesn't throw false warning.
-	 */
-#ifdef CONFIG_DEBUG_VM
+#ifdef CONFIG_HAVE_MMU_GATHER_PAGE_SIZE
+	if (tlb->page_size && tlb->page_size != page_size) {
+		if (!tlb->fullmm)
+			tlb_flush_mmu(tlb);
+	}
+
 	tlb->page_size = page_size;
 #endif
 }
-#endif
 
 static inline unsigned long tlb_get_unmap_shift(struct mmu_gather *tlb)
 {
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1617,7 +1617,7 @@ bool madvise_free_huge_pmd(struct mmu_ga
 	struct mm_struct *mm = tlb->mm;
 	bool ret = false;
 
-	tlb_remove_check_page_size_change(tlb, HPAGE_PMD_SIZE);
+	tlb_change_page_size(tlb, HPAGE_PMD_SIZE);
 
 	ptl = pmd_trans_huge_lock(pmd, vma);
 	if (!ptl)
@@ -1693,7 +1693,7 @@ int zap_huge_pmd(struct mmu_gather *tlb,
 	pmd_t orig_pmd;
 	spinlock_t *ptl;
 
-	tlb_remove_check_page_size_change(tlb, HPAGE_PMD_SIZE);
+	tlb_change_page_size(tlb, HPAGE_PMD_SIZE);
 
 	ptl = __pmd_trans_huge_lock(pmd, vma);
 	if (!ptl)
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3337,7 +3337,7 @@ void __unmap_hugepage_range(struct mmu_g
 	 * This is a hugetlb vma, all the pte entries should point
 	 * to huge page.
 	 */
-	tlb_remove_check_page_size_change(tlb, sz);
+	tlb_change_page_size(tlb, sz);
 	tlb_start_vma(tlb, vma);
 	mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
 	address = start;
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -328,7 +328,7 @@ static int madvise_free_pte_range(pmd_t
 	if (pmd_trans_unstable(pmd))
 		return 0;
 
-	tlb_remove_check_page_size_change(tlb, PAGE_SIZE);
+	tlb_change_page_size(tlb, PAGE_SIZE);
 	orig_pte = pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
 	flush_tlb_batched_pending(mm);
 	arch_enter_lazy_mmu_mode();
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -355,7 +355,7 @@ void free_pgd_range(struct mmu_gather *t
 	 * We add page table cache pages with PAGE_SIZE,
 	 * (see pte_free_tlb()), flush the tlb if we need
 	 */
-	tlb_remove_check_page_size_change(tlb, PAGE_SIZE);
+	tlb_change_page_size(tlb, PAGE_SIZE);
 	pgd = pgd_offset(tlb->mm, addr);
 	do {
 		next = pgd_addr_end(addr, end);
@@ -1046,7 +1046,7 @@ static unsigned long zap_pte_range(struc
 	pte_t *pte;
 	swp_entry_t entry;
 
-	tlb_remove_check_page_size_change(tlb, PAGE_SIZE);
+	tlb_change_page_size(tlb, PAGE_SIZE);
 again:
 	init_rss_vec(rss);
 	start_pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -58,7 +58,9 @@ void arch_tlb_gather_mmu(struct mmu_gath
 #ifdef CONFIG_HAVE_RCU_TABLE_FREE
 	tlb->batch = NULL;
 #endif
+#ifdef CONFIG_HAVE_MMU_GATHER_PAGE_SIZE
 	tlb->page_size = 0;
+#endif
 
 	__tlb_reset_range(tlb);
 }
@@ -121,7 +123,10 @@ bool __tlb_remove_page_size(struct mmu_g
 	struct mmu_gather_batch *batch;
 
 	VM_BUG_ON(!tlb->end);
+
+#ifdef CONFIG_HAVE_MMU_GATHER_PAGE_SIZE
 	VM_WARN_ON(tlb->page_size != page_size);
+#endif
 
 	batch = tlb->active;
 	/*
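
[ Illustrative sketch, not part of the patch: the caller-visible effect
  of the new opt-in. With CONFIG_HAVE_MMU_GATHER_PAGE_SIZE=y, mixing
  page sizes within one gather forces an intermediate flush;
  page_size_sketch is a made-up name, the calls are the real API. ]

	static void page_size_sketch(struct mmu_gather *tlb,
				     struct page *small, struct page *huge)
	{
		tlb_change_page_size(tlb, PAGE_SIZE);		/* records 4K */
		tlb_remove_page_size(tlb, small, PAGE_SIZE);

		/*
		 * The size changed, so (unless tlb->fullmm) the next call
		 * runs tlb_flush_mmu() before recording the new size;
		 * without the opt-in the whole function is an empty stub.
		 */
		tlb_change_page_size(tlb, HPAGE_PMD_SIZE);
		tlb_remove_page_size(tlb, huge, HPAGE_PMD_SIZE);
	}
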
From patchwork Thu Sep 13 09:21:13 2018
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 10599091
Message-ID: <20180913092812.012757318@infradead.org>
Date: Thu, 13 Sep 2018 11:21:13 +0200
From: Peter Zijlstra
To: will.deacon@arm.com, aneesh.kumar@linux.vnet.ibm.com, akpm@linux-foundation.org, npiggin@gmail.com
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, peterz@infradead.org, linux@armlinux.org.uk, heiko.carstens@de.ibm.com, Dave Hansen
Subject: [RFC][PATCH 03/11] x86/mm: Page size aware flush_tlb_mm_range()
References: <20180913092110.817204997@infradead.org>

Use the new tlb_get_unmap_shift() to determine the stride of the
INVLPG loop.

Cc: Will Deacon
Cc: "Aneesh Kumar K.V"
Cc: Andrew Morton
Cc: Nick Piggin
Cc: Dave Hansen
Signed-off-by: Peter Zijlstra (Intel)
---
 arch/x86/include/asm/tlb.h      |   21 ++++++++++++++-------
 arch/x86/include/asm/tlbflush.h |   10 ++++++----
 arch/x86/mm/tlb.c               |   10 +++++-----
 3 files changed, 25 insertions(+), 16 deletions(-)

--- a/arch/x86/include/asm/tlb.h
+++ b/arch/x86/include/asm/tlb.h
@@ -6,16 +6,23 @@
 #define tlb_end_vma(tlb, vma) do { } while (0)
 #define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0)
 
-#define tlb_flush(tlb)							\
-{									\
-	if (!tlb->fullmm && !tlb->need_flush_all)			\
-		flush_tlb_mm_range(tlb->mm, tlb->start, tlb->end, 0UL);	\
-	else								\
-		flush_tlb_mm_range(tlb->mm, 0UL, TLB_FLUSH_ALL, 0UL);	\
-}
+static inline void tlb_flush(struct mmu_gather *tlb);
 
 #include <asm-generic/tlb.h>
 
+static inline void tlb_flush(struct mmu_gather *tlb)
+{
+	unsigned long start = 0UL, end = TLB_FLUSH_ALL;
+	unsigned int invl_shift = tlb_get_unmap_shift(tlb);
+
+	if (!tlb->fullmm && !tlb->need_flush_all) {
+		start = tlb->start;
+		end = tlb->end;
+	}
+
+	flush_tlb_mm_range(tlb->mm, start, end, invl_shift);
+}
+
 /*
  * While x86 architecture in general requires an IPI to perform TLB
  * shootdown, enablement code for several hypervisors overrides
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -507,23 +507,25 @@ struct flush_tlb_info {
 	unsigned long		start;
 	unsigned long		end;
 	u64			new_tlb_gen;
+	unsigned int		invl_shift;
 };
 
 #define local_flush_tlb() __flush_tlb()
 
 #define flush_tlb_mm(mm)	flush_tlb_mm_range(mm, 0UL, TLB_FLUSH_ALL, 0UL)
 
-#define flush_tlb_range(vma, start, end)	\
-		flush_tlb_mm_range(vma->vm_mm, start, end, vma->vm_flags)
+#define flush_tlb_range(vma, start, end)			\
+	flush_tlb_mm_range((vma)->vm_mm, start, end,		\
+			   (vma)->vm_flags & VM_HUGETLB ? PMD_SHIFT : PAGE_SHIFT)
 
 extern void flush_tlb_all(void);
 extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
-				unsigned long end, unsigned long vmflag);
+				unsigned long end, unsigned int invl_shift);
 extern void flush_tlb_kernel_range(unsigned long start, unsigned long end);
 
 static inline void flush_tlb_page(struct vm_area_struct *vma, unsigned long a)
 {
-	flush_tlb_mm_range(vma->vm_mm, a, a + PAGE_SIZE, VM_NONE);
+	flush_tlb_mm_range(vma->vm_mm, a, a + PAGE_SIZE, PAGE_SHIFT);
 }
 
 void native_flush_tlb_others(const struct cpumask *cpumask,
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -522,12 +522,12 @@ static void flush_tlb_func_common(const
 	    f->new_tlb_gen == mm_tlb_gen) {
 		/* Partial flush */
 		unsigned long addr;
-		unsigned long nr_pages = (f->end - f->start) >> PAGE_SHIFT;
+		unsigned long nr_pages = (f->end - f->start) >> f->invl_shift;
 
 		addr = f->start;
 		while (addr < f->end) {
 			__flush_tlb_one_user(addr);
-			addr += PAGE_SIZE;
+			addr += 1UL << f->invl_shift;
 		}
 		if (local)
 			count_vm_tlb_events(NR_TLB_LOCAL_FLUSH_ONE, nr_pages);
@@ -616,12 +616,13 @@ void native_flush_tlb_others(const struc
 static unsigned long tlb_single_page_flush_ceiling __read_mostly = 33;
 
 void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
-				unsigned long end, unsigned long vmflag)
+				unsigned long end, unsigned int invl_shift)
 {
 	int cpu;
 
 	struct flush_tlb_info info __aligned(SMP_CACHE_BYTES) = {
 		.mm = mm,
+		.invl_shift = invl_shift,
 	};
 
 	cpu = get_cpu();
@@ -631,8 +632,7 @@ void flush_tlb_mm_range(struct
 	/* Should we flush just the requested range? */
 	if ((end != TLB_FLUSH_ALL) &&
-	    !(vmflag & VM_HUGETLB) &&
-	    ((end - start) >> PAGE_SHIFT) <= tlb_single_page_flush_ceiling) {
+	    ((end - start) >> invl_shift) <= tlb_single_page_flush_ceiling) {
 		info.start = start;
 		info.end = end;
 	} else {
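
[ Illustrative sketch, not part of the patch: what the stride buys.
  Unmapping a single 2M huge page used to take 512 INVLPGs at 4K stride
  (or hit the VM_HUGETLB full-flush special case); with invl_shift ==
  PMD_SHIFT the partial-flush loop above does one iteration:

	/* inside flush_tlb_func_common(), after this patch: */
	nr_pages = (f->end - f->start) >> f->invl_shift; /* 2M >> 21 == 1 */
	for (addr = f->start; addr < f->end; addr += 1UL << f->invl_shift)
		__flush_tlb_one_user(addr);	/* one INVLPG per TLB entry */

  tlb_get_unmap_shift() reports the smallest entry size unmapped in the
  range, so a gather that mixes 4K and 2M entries still flushes with the
  safe 4K stride. ]
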
From patchwork Thu Sep 13 09:21:14 2018
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 10599115
Message-ID: <20180913092812.071989585@infradead.org>
Date: Thu, 13 Sep 2018 11:21:14 +0200
From: Peter Zijlstra
To: will.deacon@arm.com, aneesh.kumar@linux.vnet.ibm.com, akpm@linux-foundation.org, npiggin@gmail.com
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, peterz@infradead.org, linux@armlinux.org.uk, heiko.carstens@de.ibm.com, David Miller, Guan Xuetao
Subject: [RFC][PATCH 04/11] asm-generic/tlb: Provide generic VIPT cache flush
References: <20180913092110.817204997@infradead.org>

The one obvious thing SH and ARM want is a sensible default for
tlb_start_vma(). (also: https://lkml.org/lkml/2004/1/15/6 )

Avoid all VIPT architectures providing their own tlb_start_vma()
implementation and rely on architectures to provide a no-op
flush_cache_range() when it is not relevant.

The below makes tlb_start_vma() default to flush_cache_range(), which
should be right and sufficient. The only exceptions that I found were
(oddly):

  - m68k-mmu
  - sparc64
  - unicore

Those architectures appear to have flush_cache_range(), but their
current tlb_start_vma() does not call it.

Cc: Will Deacon
Cc: "Aneesh Kumar K.V"
Cc: Andrew Morton
Cc: Nick Piggin
Cc: David Miller
Cc: Guan Xuetao
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Will Deacon
---
 arch/arc/include/asm/tlb.h      |    9 ---------
 arch/mips/include/asm/tlb.h     |    9 ---------
 arch/nds32/include/asm/tlb.h    |    6 ------
 arch/nios2/include/asm/tlb.h    |   10 ----------
 arch/parisc/include/asm/tlb.h   |    5 -----
 arch/sparc/include/asm/tlb_32.h |    5 -----
 arch/xtensa/include/asm/tlb.h   |    9 ---------
 include/asm-generic/tlb.h       |   19 +++++++++++--------
 8 files changed, 11 insertions(+), 61 deletions(-)

--- a/arch/arc/include/asm/tlb.h
+++ b/arch/arc/include/asm/tlb.h
@@ -23,15 +23,6 @@ do {						\
  *
  * Note, read http://lkml.org/lkml/2004/1/15/6
  */
-#ifndef CONFIG_ARC_CACHE_VIPT_ALIASING
-#define tlb_start_vma(tlb, vma)
-#else
-#define tlb_start_vma(tlb, vma)					\
-do {								\
-	if (!tlb->fullmm)					\
-		flush_cache_range(vma, vma->vm_start, vma->vm_end); \
-} while(0)
-#endif
 
 #define tlb_end_vma(tlb, vma)					\
 do {								\
--- a/arch/mips/include/asm/tlb.h
+++ b/arch/mips/include/asm/tlb.h
@@ -5,15 +5,6 @@
 #include <asm/cpu-features.h>
 #include <asm/mipsregs.h>
 
-/*
- * MIPS doesn't need any special per-pte or per-vma handling, except
- * we need to flush cache for area to be unmapped.
- */
-#define tlb_start_vma(tlb, vma)					\
-	do {							\
-		if (!tlb->fullmm)				\
-			flush_cache_range(vma, vma->vm_start, vma->vm_end); \
-	} while (0)
 #define tlb_end_vma(tlb, vma) do { } while (0)
 #define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0)
 
--- a/arch/nds32/include/asm/tlb.h
+++ b/arch/nds32/include/asm/tlb.h
@@ -4,12 +4,6 @@
 #ifndef __ASMNDS32_TLB_H
 #define __ASMNDS32_TLB_H
 
-#define tlb_start_vma(tlb,vma)					\
-	do {							\
-		if (!tlb->fullmm)				\
-			flush_cache_range(vma, vma->vm_start, vma->vm_end); \
-	} while (0)
-
 #define tlb_end_vma(tlb,vma)				\
 	do {						\
 		if(!tlb->fullmm)			\
--- a/arch/nios2/include/asm/tlb.h
+++ b/arch/nios2/include/asm/tlb.h
@@ -15,16 +15,6 @@
 
 extern void set_mmu_pid(unsigned long pid);
 
-/*
- * NiosII doesn't need any special per-pte or per-vma handling, except
- * we need to flush cache for the area to be unmapped.
- */
-#define tlb_start_vma(tlb, vma)					\
-	do {							\
-		if (!tlb->fullmm)				\
-			flush_cache_range(vma, vma->vm_start, vma->vm_end); \
-	} while (0)
-
 #define tlb_end_vma(tlb, vma) do { } while (0)
 #define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0)
 
--- a/arch/parisc/include/asm/tlb.h
+++ b/arch/parisc/include/asm/tlb.h
@@ -7,11 +7,6 @@
 do {	if ((tlb)->fullmm)		\
 		flush_tlb_mm((tlb)->mm);\
 } while (0)
 
-#define tlb_start_vma(tlb, vma) \
-do {	if (!(tlb)->fullmm)	\
-		flush_cache_range(vma, vma->vm_start, vma->vm_end); \
-} while (0)
-
 #define tlb_end_vma(tlb, vma)	\
 do {	if (!(tlb)->fullmm)	\
 		flush_tlb_range(vma, vma->vm_start, vma->vm_end); \
--- a/arch/sparc/include/asm/tlb_32.h
+++ b/arch/sparc/include/asm/tlb_32.h
@@ -2,11 +2,6 @@
 #ifndef _SPARC_TLB_H
 #define _SPARC_TLB_H
 
-#define tlb_start_vma(tlb, vma) \
-do {								\
-	flush_cache_range(vma, vma->vm_start, vma->vm_end);	\
-} while (0)
-
 #define tlb_end_vma(tlb, vma) \
 do {								\
 	flush_tlb_range(vma, vma->vm_start, vma->vm_end);	\
--- a/arch/xtensa/include/asm/tlb.h
+++ b/arch/xtensa/include/asm/tlb.h
@@ -16,19 +16,10 @@
 
 #if (DCACHE_WAY_SIZE <= PAGE_SIZE)
 
-/* Note, read http://lkml.org/lkml/2004/1/15/6 */
-
-# define tlb_start_vma(tlb,vma)			do { } while (0)
 # define tlb_end_vma(tlb,vma)			do { } while (0)
 
 #else
 
-# define tlb_start_vma(tlb, vma)				\
-	do {							\
-		if (!tlb->fullmm)				\
-			flush_cache_range(vma, vma->vm_start, vma->vm_end); \
-	} while(0)
-
 # define tlb_end_vma(tlb, vma)					\
 	do {							\
 		if (!tlb->fullmm)				\
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -19,6 +19,7 @@
 #include <linux/swap.h>
 #include <asm/pgalloc.h>
 #include <asm/tlbflush.h>
+#include <asm/cacheflush.h>
 
 #ifdef CONFIG_MMU
 
@@ -351,17 +352,19 @@ static inline unsigned long tlb_get_unma
  * the vmas are adjusted to only cover the region to be torn down.
  */
 #ifndef tlb_start_vma
-#define tlb_start_vma(tlb, vma) do { } while (0)
+#define tlb_start_vma(tlb, vma)						\
+do {									\
+	if (!tlb->fullmm)						\
+		flush_cache_range(vma, vma->vm_start, vma->vm_end);	\
+} while (0)
 #endif
 
-#define __tlb_end_vma(tlb, vma)					\
-	do {							\
-		if (!tlb->fullmm)				\
-			tlb_flush_mmu_tlbonly(tlb);		\
-	} while (0)
-
 #ifndef tlb_end_vma
-#define tlb_end_vma	__tlb_end_vma
+#define tlb_end_vma(tlb, vma)						\
+do {									\
+	if (!tlb->fullmm)						\
+		tlb_flush_mmu_tlbonly(tlb);				\
+} while (0)
 #endif
 
 #ifndef __tlb_remove_tlb_entry
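
[ Illustrative sketch, not part of the patch: what the new default asks
  of an architecture. A cache-coherent arch already provides a no-op
  flush_cache_range(), so the default tlb_start_vma() compiles away:

	/* e.g. via asm-generic/cacheflush.h on coherent architectures */
	#define flush_cache_range(vma, start, end)	do { } while (0)

  while a VIPT-aliasing arch implements flush_cache_range() for real and
  now gets the pre-unmap cache flush without defining its own
  tlb_start_vma(). ]
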
From patchwork Thu Sep 13 09:21:15 2018
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 10599095
Message-ID: <20180913092812.132208484@infradead.org>
Date: Thu, 13 Sep 2018 11:21:15 +0200
From: Peter Zijlstra
To: will.deacon@arm.com, aneesh.kumar@linux.vnet.ibm.com, akpm@linux-foundation.org, npiggin@gmail.com
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, peterz@infradead.org, linux@armlinux.org.uk, heiko.carstens@de.ibm.com
Subject: [RFC][PATCH 05/11] asm-generic/tlb: Provide generic tlb_flush
References: <20180913092110.817204997@infradead.org>

Provide a generic tlb_flush() implementation that relies on
flush_tlb_range(). This is a little awkward because flush_tlb_range()
assumes a VMA for range invalidation, but we no longer have one.

Audit of all flush_tlb_range() implementations shows only vma->vm_mm
and vma->vm_flags are used, and of the latter only VM_EXEC (I-TLB
invalidates) and VM_HUGETLB (large TLB invalidate) are used.

Therefore, track VM_EXEC and VM_HUGETLB in two more bits, and create a
'fake' VMA.

This allows architectures that have a reasonably efficient
flush_tlb_range() to not require any additional effort.
Cc: Will Deacon
Cc: "Aneesh Kumar K.V"
Cc: Andrew Morton
Cc: Nick Piggin
Signed-off-by: Peter Zijlstra (Intel)
---
 arch/arm64/include/asm/tlb.h   |    1
 arch/powerpc/include/asm/tlb.h |    1
 arch/riscv/include/asm/tlb.h   |    1
 arch/x86/include/asm/tlb.h     |    1
 include/asm-generic/tlb.h      |   80 +++++++++++++++++++++++++++++++++++------
 5 files changed, 74 insertions(+), 10 deletions(-)

--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -27,6 +27,7 @@ static inline void __tlb_remove_table(vo
         free_page_and_swap_cache((struct page *)_table);
 }

+#define tlb_flush tlb_flush
 static void tlb_flush(struct mmu_gather *tlb);

 #include <asm-generic/tlb.h>
--- a/arch/powerpc/include/asm/tlb.h
+++ b/arch/powerpc/include/asm/tlb.h
@@ -28,6 +28,7 @@
 #define tlb_end_vma(tlb, vma)   do { } while (0)
 #define __tlb_remove_tlb_entry  __tlb_remove_tlb_entry

+#define tlb_flush tlb_flush
 extern void tlb_flush(struct mmu_gather *tlb);

 /* Get the generic bits... */
--- a/arch/riscv/include/asm/tlb.h
+++ b/arch/riscv/include/asm/tlb.h
@@ -18,6 +18,7 @@ struct mmu_gather;

 static void tlb_flush(struct mmu_gather *tlb);

+#define tlb_flush tlb_flush
 #include <asm-generic/tlb.h>

 static inline void tlb_flush(struct mmu_gather *tlb)
--- a/arch/x86/include/asm/tlb.h
+++ b/arch/x86/include/asm/tlb.h
@@ -6,6 +6,7 @@
 #define tlb_end_vma(tlb, vma) do { } while (0)
 #define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0)

+#define tlb_flush tlb_flush
 static inline void tlb_flush(struct mmu_gather *tlb);

 #include <asm-generic/tlb.h>
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -241,6 +241,12 @@ struct mmu_gather {
         unsigned int            cleared_puds : 1;
         unsigned int            cleared_p4ds : 1;

+        /*
+         * tracks VM_EXEC | VM_HUGETLB in tlb_start_vma
+         */
+        unsigned int            vma_exec : 1;
+        unsigned int            vma_huge : 1;
+
         unsigned int            batch_count;

         struct mmu_gather_batch *active;
@@ -282,7 +288,35 @@ static inline void __tlb_reset_range(str
         tlb->cleared_pmds = 0;
         tlb->cleared_puds = 0;
         tlb->cleared_p4ds = 0;
+        /*
+         * Do not reset mmu_gather::vma_* fields here, we do not
+         * call into tlb_start_vma() again to set them if there is an
+         * intermediate flush.
+         */
+}
+
+#ifndef tlb_flush
+
+#if defined(tlb_start_vma) || defined(tlb_end_vma)
+#error Default tlb_flush() relies on default tlb_start_vma() and tlb_end_vma()
+#endif
+
+#define tlb_flush tlb_flush
+static inline void tlb_flush(struct mmu_gather *tlb)
+{
+        if (tlb->fullmm || tlb->need_flush_all) {
+                flush_tlb_mm(tlb->mm);
+        } else {
+                struct vm_area_struct vma = {
+                        .vm_mm = tlb->mm,
+                        .vm_flags = (tlb->vma_exec ? VM_EXEC    : 0) |
+                                    (tlb->vma_huge ? VM_HUGETLB : 0),
+                };
+
+                flush_tlb_range(&vma, tlb->start, tlb->end);
+        }
 }
+#endif

 static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
 {
@@ -353,19 +387,45 @@ static inline unsigned long tlb_get_unma
  * the vmas are adjusted to only cover the region to be torn down.
  */
 #ifndef tlb_start_vma
-#define tlb_start_vma(tlb, vma) \
-do { \
-        if (!tlb->fullmm) \
-                flush_cache_range(vma, vma->vm_start, vma->vm_end); \
-} while (0)
+#define tlb_start_vma tlb_start_vma
+static inline void tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
+{
+        if (tlb->fullmm)
+                return;
+
+        /*
+         * flush_tlb_range() implementations that look at VM_HUGETLB (tile,
+         * mips-4k) flush only large pages.
+         *
+         * flush_tlb_range() implementations that flush I-TLB also flush D-TLB
+         * (tile, xtensa, arm), so it's ok to just add VM_EXEC to an existing
+         * range.
+         *
+         * We rely on tlb_end_vma() to issue a flush, such that when we reset
+         * these values the batch is empty.
+         */
+        tlb->vma_huge = !!(vma->vm_flags & VM_HUGETLB);
+        tlb->vma_exec = !!(vma->vm_flags & VM_EXEC);
+
+        flush_cache_range(vma, vma->vm_start, vma->vm_end);
+}
 #endif

 #ifndef tlb_end_vma
-#define tlb_end_vma(tlb, vma) \
-do { \
-        if (!tlb->fullmm) \
-                tlb_flush_mmu_tlbonly(tlb); \
-} while (0)
+#define tlb_end_vma tlb_end_vma
+static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
+{
+        if (tlb->fullmm)
+                return;
+
+        /*
+         * Do a TLB flush and reset the range at VMA boundaries; this avoids
+         * the ranges growing with the unused space between consecutive VMAs,
+         * but also the mmu_gather::vma_* flags from tlb_start_vma() rely on
+         * this.
+         */
+        tlb_flush_mmu_tlbonly(tlb);
+}
 #endif

 #ifndef __tlb_remove_tlb_entry
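The decision the generic tlb_flush() above encodes can be modelled in plain
userspace C. The following is a minimal sketch, not the kernel code: the types
and the two flush functions are mock stand-ins, and only the flag tracking and
the 'fake' VMA construction mirror the patch. Note the parentheses when
composing vm_flags; ?: binds more loosely than |, so leaving them out silently
drops flags.

#include <stdbool.h>
#include <stdio.h>

#define VM_EXEC    0x1UL
#define VM_HUGETLB 0x2UL

struct vm_area_struct { void *vm_mm; unsigned long vm_flags; };

struct mmu_gather {
        void *mm;
        bool fullmm, need_flush_all;
        bool vma_exec, vma_huge;        /* set by tlb_start_vma() */
        unsigned long start, end;
};

/* mock flushes; the real ones are per-architecture */
static void flush_tlb_mm(void *mm) { printf("flush whole mm %p\n", mm); }
static void flush_tlb_range(struct vm_area_struct *vma,
                            unsigned long s, unsigned long e)
{
        printf("flush [%#lx,%#lx) flags=%#lx\n", s, e, vma->vm_flags);
}

static void tlb_flush(struct mmu_gather *tlb)
{
        if (tlb->fullmm || tlb->need_flush_all) {
                flush_tlb_mm(tlb->mm);
        } else {
                /* 'fake' VMA: only vm_mm and vm_flags are ever consumed */
                struct vm_area_struct vma = {
                        .vm_mm    = tlb->mm,
                        .vm_flags = (tlb->vma_exec ? VM_EXEC    : 0) |
                                    (tlb->vma_huge ? VM_HUGETLB : 0),
                };
                flush_tlb_range(&vma, tlb->start, tlb->end);
        }
}

int main(void)
{
        struct mmu_gather tlb = { .vma_exec = true, .vma_huge = true,
                                  .start = 0x1000, .end = 0x5000 };
        tlb_flush(&tlb);        /* prints: flush [0x1000,0x5000) flags=0x3 */
        return 0;
}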
From patchwork Thu Sep 13 09:21:16 2018
X-Patchwork-Id: 10599093
Message-ID: <20180913092812.190579217@infradead.org>
User-Agent: quilt/0.65
Date: Thu, 13 Sep 2018 11:21:16 +0200
From: Peter Zijlstra
To: will.deacon@arm.com, aneesh.kumar@linux.vnet.ibm.com, akpm@linux-foundation.org, npiggin@gmail.com
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, peterz@infradead.org, linux@armlinux.org.uk, heiko.carstens@de.ibm.com
Subject: [RFC][PATCH 06/11] asm-generic/tlb: Conditionally provide tlb_migrate_finish()
References: <20180913092110.817204997@infradead.org>
Needed for ia64 -- alternatively we drop the entire hook.

Cc: Will Deacon
Cc: "Aneesh Kumar K.V"
Cc: Andrew Morton
Cc: Nick Piggin
Signed-off-by: Peter Zijlstra (Intel)
---
 include/asm-generic/tlb.h |    2 ++
 1 file changed, 2 insertions(+)

--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -539,6 +539,8 @@ static inline void tlb_end_vma(struct mm

 #endif /* CONFIG_MMU */

+#ifndef tlb_migrate_finish
 #define tlb_migrate_finish(mm) do {} while (0)
+#endif

 #endif /* _ASM_GENERIC__TLB_H */
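The hunk above uses the usual asm-generic override pattern: an architecture
header that defines the hook before including the generic header keeps its own
version, everyone else gets the empty default. A compilable userspace sketch of
the same pattern (the printf body is an illustrative stand-in, not the ia64
hook):

#include <stdio.h>

/* what an arch header would do before pulling in the generic header */
#define tlb_migrate_finish(mm) printf("arch hook for mm %p\n", (void *)(mm))

/* what the generic header now does: default only if nothing is defined */
#ifndef tlb_migrate_finish
#define tlb_migrate_finish(mm) do { } while (0)
#endif

int main(void)
{
        int mm;                    /* stand-in for a struct mm_struct */
        tlb_migrate_finish(&mm);   /* resolves to the arch hook above */
        return 0;
}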
From patchwork Thu Sep 13 09:21:17 2018
X-Patchwork-Id: 10599101
Message-ID: <20180913092812.247989787@infradead.org>
User-Agent: quilt/0.65
Date: Thu, 13 Sep 2018 11:21:17 +0200
From: Peter Zijlstra
To: will.deacon@arm.com, aneesh.kumar@linux.vnet.ibm.com, akpm@linux-foundation.org, npiggin@gmail.com
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, peterz@infradead.org, linux@armlinux.org.uk, heiko.carstens@de.ibm.com
Subject: [RFC][PATCH 07/11] arm/tlb: Convert to generic mmu_gather
References: <20180913092110.817204997@infradead.org>

Generic mmu_gather provides everything that ARM needs:

 - range tracking
 - RCU table free
 - VM_EXEC tracking
 - VIPT cache flushing

The one notable curiosity is the 'funny' range tracking for classical
ARM in __pte_free_tlb().
Cc: Will Deacon
Cc: "Aneesh Kumar K.V"
Cc: Andrew Morton
Cc: Nick Piggin
Cc: Russell King
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Will Deacon
---
 arch/arm/include/asm/tlb.h |  255 ++-------------------------------------------
 1 file changed, 14 insertions(+), 241 deletions(-)

--- a/arch/arm/include/asm/tlb.h
+++ b/arch/arm/include/asm/tlb.h
@@ -33,270 +33,43 @@
 #include
 #include

-#define MMU_GATHER_BUNDLE       8
-
-#ifdef CONFIG_HAVE_RCU_TABLE_FREE
 static inline void __tlb_remove_table(void *_table)
 {
         free_page_and_swap_cache((struct page *)_table);
 }

-struct mmu_table_batch {
-        struct rcu_head         rcu;
-        unsigned int            nr;
-        void                    *tables[0];
-};
-
-#define MAX_TABLE_BATCH         \
-        ((PAGE_SIZE - sizeof(struct mmu_table_batch)) / sizeof(void *))
-
-extern void tlb_table_flush(struct mmu_gather *tlb);
-extern void tlb_remove_table(struct mmu_gather *tlb, void *table);
-
-#define tlb_remove_entry(tlb, entry)    tlb_remove_table(tlb, entry)
-#else
-#define tlb_remove_entry(tlb, entry)    tlb_remove_page(tlb, entry)
-#endif /* CONFIG_HAVE_RCU_TABLE_FREE */
-
-/*
- * TLB handling.  This allows us to remove pages from the page
- * tables, and efficiently handle the TLB issues.
- */
-struct mmu_gather {
-        struct mm_struct        *mm;
-#ifdef CONFIG_HAVE_RCU_TABLE_FREE
-        struct mmu_table_batch  *batch;
-        unsigned int            need_flush;
-#endif
-        unsigned int            fullmm;
-        struct vm_area_struct   *vma;
-        unsigned long           start, end;
-        unsigned long           range_start;
-        unsigned long           range_end;
-        unsigned int            nr;
-        unsigned int            max;
-        struct page             **pages;
-        struct page             *local[MMU_GATHER_BUNDLE];
-};
-
-DECLARE_PER_CPU(struct mmu_gather, mmu_gathers);
-
-/*
- * This is unnecessarily complex.  There's three ways the TLB shootdown
- * code is used:
- *  1. Unmapping a range of vmas.  See zap_page_range(), unmap_region().
- *     tlb->fullmm = 0, and tlb_start_vma/tlb_end_vma will be called.
- *     tlb->vma will be non-NULL.
- *  2. Unmapping all vmas.  See exit_mmap().
- *     tlb->fullmm = 1, and tlb_start_vma/tlb_end_vma will be called.
- *     tlb->vma will be non-NULL.  Additionally, page tables will be freed.
- *  3. Unmapping argument pages.  See shift_arg_pages().
- *     tlb->fullmm = 0, but tlb_start_vma/tlb_end_vma will not be called.
- *     tlb->vma will be NULL.
- */
-static inline void tlb_flush(struct mmu_gather *tlb)
-{
-        if (tlb->fullmm || !tlb->vma)
-                flush_tlb_mm(tlb->mm);
-        else if (tlb->range_end > 0) {
-                flush_tlb_range(tlb->vma, tlb->range_start, tlb->range_end);
-                tlb->range_start = TASK_SIZE;
-                tlb->range_end = 0;
-        }
-}
-
-static inline void tlb_add_flush(struct mmu_gather *tlb, unsigned long addr)
-{
-        if (!tlb->fullmm) {
-                if (addr < tlb->range_start)
-                        tlb->range_start = addr;
-                if (addr + PAGE_SIZE > tlb->range_end)
-                        tlb->range_end = addr + PAGE_SIZE;
-        }
-}
-
-static inline void __tlb_alloc_page(struct mmu_gather *tlb)
-{
-        unsigned long addr = __get_free_pages(GFP_NOWAIT | __GFP_NOWARN, 0);
-
-        if (addr) {
-                tlb->pages = (void *)addr;
-                tlb->max = PAGE_SIZE / sizeof(struct page *);
-        }
-}
-
-static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
-{
-        tlb_flush(tlb);
-#ifdef CONFIG_HAVE_RCU_TABLE_FREE
-        tlb_table_flush(tlb);
-#endif
-}
-
-static inline void tlb_flush_mmu_free(struct mmu_gather *tlb)
-{
-        free_pages_and_swap_cache(tlb->pages, tlb->nr);
-        tlb->nr = 0;
-        if (tlb->pages == tlb->local)
-                __tlb_alloc_page(tlb);
-}
-
-static inline void tlb_flush_mmu(struct mmu_gather *tlb)
-{
-        tlb_flush_mmu_tlbonly(tlb);
-        tlb_flush_mmu_free(tlb);
-}
-
-static inline void
-arch_tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
-                        unsigned long start, unsigned long end)
-{
-        tlb->mm = mm;
-        tlb->fullmm = !(start | (end+1));
-        tlb->start = start;
-        tlb->end = end;
-        tlb->vma = NULL;
-        tlb->max = ARRAY_SIZE(tlb->local);
-        tlb->pages = tlb->local;
-        tlb->nr = 0;
-        __tlb_alloc_page(tlb);
+#include <asm-generic/tlb.h>

-#ifdef CONFIG_HAVE_RCU_TABLE_FREE
-        tlb->batch = NULL;
+#ifndef CONFIG_HAVE_RCU_TABLE_FREE
+#define tlb_remove_table(tlb, entry) tlb_remove_page(tlb, entry)
 #endif
-}
-
-static inline void
-arch_tlb_finish_mmu(struct mmu_gather *tlb,
-                unsigned long start, unsigned long end, bool force)
-{
-        if (force) {
-                tlb->range_start = start;
-                tlb->range_end = end;
-        }
-
-        tlb_flush_mmu(tlb);
-
-        /* keep the page table cache within bounds */
-        check_pgt_cache();
-
-        if (tlb->pages != tlb->local)
-                free_pages((unsigned long)tlb->pages, 0);
-}
-
-/*
- * Memorize the range for the TLB flush.
- */
 static inline void
-tlb_remove_tlb_entry(struct mmu_gather *tlb, pte_t *ptep, unsigned long addr)
-{
-        tlb_add_flush(tlb, addr);
-}
-
-#define tlb_remove_huge_tlb_entry(h, tlb, ptep, address)        \
-        tlb_remove_tlb_entry(tlb, ptep, address)
-/*
- * In the case of tlb vma handling, we can optimise these away in the
- * case where we're doing a full MM flush.  When we're doing a munmap,
- * the vmas are adjusted to only cover the region to be torn down.
- */
-static inline void
-tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
-{
-        if (!tlb->fullmm) {
-                flush_cache_range(vma, vma->vm_start, vma->vm_end);
-                tlb->vma = vma;
-                tlb->range_start = TASK_SIZE;
-                tlb->range_end = 0;
-        }
-}
-
-static inline void
-tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
-{
-        if (!tlb->fullmm)
-                tlb_flush(tlb);
-}
-
-static inline bool __tlb_remove_page(struct mmu_gather *tlb, struct page *page)
-{
-        tlb->pages[tlb->nr++] = page;
-        VM_WARN_ON(tlb->nr > tlb->max);
-        if (tlb->nr == tlb->max)
-                return true;
-        return false;
-}
-
-static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
-{
-        if (__tlb_remove_page(tlb, page))
-                tlb_flush_mmu(tlb);
-}
-
-static inline bool __tlb_remove_page_size(struct mmu_gather *tlb,
-                                          struct page *page, int page_size)
-{
-        return __tlb_remove_page(tlb, page);
-}
-
-static inline void tlb_remove_page_size(struct mmu_gather *tlb,
-                                        struct page *page, int page_size)
-{
-        return tlb_remove_page(tlb, page);
-}
-
-static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte,
-        unsigned long addr)
+__pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, unsigned long addr)
 {
         pgtable_page_dtor(pte);
-#ifdef CONFIG_ARM_LPAE
-        tlb_add_flush(tlb, addr);
-#else
+#ifndef CONFIG_ARM_LPAE
         /*
          * With the classic ARM MMU, a pte page has two corresponding pmd
          * entries, each covering 1MB.
          */
-        addr &= PMD_MASK;
-        tlb_add_flush(tlb, addr + SZ_1M - PAGE_SIZE);
-        tlb_add_flush(tlb, addr + SZ_1M);
+        addr = (addr & PMD_MASK) + SZ_1M;
+        __tlb_adjust_range(tlb, addr - PAGE_SIZE, addr + PAGE_SIZE);
 #endif
-        tlb_remove_entry(tlb, pte);
-}
-
-static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp,
-                                  unsigned long addr)
-{
-#ifdef CONFIG_ARM_LPAE
-        tlb_add_flush(tlb, addr);
-        tlb_remove_entry(tlb, virt_to_page(pmdp));
-#endif
+        tlb_remove_table(tlb, pte);
 }

 static inline void
-tlb_remove_pmd_tlb_entry(struct mmu_gather *tlb, pmd_t *pmdp, unsigned long addr)
+__pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp, unsigned long addr)
 {
-        tlb_add_flush(tlb, addr);
-}
-
-#define pte_free_tlb(tlb, ptep, addr)   __pte_free_tlb(tlb, ptep, addr)
-#define pmd_free_tlb(tlb, pmdp, addr)   __pmd_free_tlb(tlb, pmdp, addr)
-#define pud_free_tlb(tlb, pudp, addr)   pud_free((tlb)->mm, pudp)
-
-#define tlb_migrate_finish(mm)          do { } while (0)
-
-static inline void tlb_change_page_size(struct mmu_gather *tlb,
-                                        unsigned int page_size)
-{
-}
-
-static inline void tlb_flush_remove_tables(struct mm_struct *mm)
-{
-}
+#ifdef CONFIG_ARM_LPAE
+        struct page *page = virt_to_page(pmdp);

-static inline void tlb_flush_remove_tables_local(void *arg)
-{
+        pgtable_pmd_page_dtor(page);
+        tlb_remove_table(tlb, page);
+#endif
 }

 #endif /* CONFIG_MMU */
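The 'funny' classical-ARM range tracking in __pte_free_tlb() above comes down
to simple arithmetic: a pte page backs two 1MB pmd entries, so probing one
PAGE_SIZE either side of the 1MB boundary inside the 2MB pmd span covers both.
A userspace model of just that arithmetic (the range helper is a stand-in; the
constants mirror classic ARM with 2MB Linux pmds):

#include <stdio.h>

#define PAGE_SIZE       4096UL
#define SZ_1M           0x100000UL
#define PMD_MASK        (~(2 * SZ_1M - 1))      /* 2MB pmd, two 1MB sections */

static unsigned long range_start = ~0UL, range_end;

/* stand-in for the mmu_gather range accumulator */
static void __tlb_adjust_range(unsigned long start, unsigned long end)
{
        if (start < range_start) range_start = start;
        if (end > range_end)     range_end = end;
}

int main(void)
{
        unsigned long addr = 0x12345678;

        /* one adjustment spanning both old per-section probe points */
        addr = (addr & PMD_MASK) + SZ_1M;
        __tlb_adjust_range(addr - PAGE_SIZE, addr + PAGE_SIZE);

        /* prints: flush [0x122ff000, 0x12301000) */
        printf("flush [%#lx, %#lx)\n", range_start, range_end);
        return 0;
}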
From patchwork Thu Sep 13 09:21:18 2018
X-Patchwork-Id: 10599107
Message-ID: <20180913092812.306865475@infradead.org>
User-Agent: quilt/0.65
Date: Thu, 13 Sep 2018 11:21:18 +0200
From: Peter Zijlstra
To: will.deacon@arm.com, aneesh.kumar@linux.vnet.ibm.com, akpm@linux-foundation.org, npiggin@gmail.com
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, peterz@infradead.org, linux@armlinux.org.uk, heiko.carstens@de.ibm.com, Tony Luck
Subject: [RFC][PATCH 08/11] ia64/tlb: Convert to generic mmu_gather
References: <20180913092110.817204997@infradead.org>

Generic mmu_gather provides everything ia64 needs (range tracking).

Cc: Will Deacon
Cc: "Aneesh Kumar K.V"
Cc: Andrew Morton
Cc: Nick Piggin
Cc: Tony Luck
Signed-off-by: Peter Zijlstra (Intel)
---
 arch/ia64/include/asm/tlb.h      |  256 ---------------------------------------
 arch/ia64/include/asm/tlbflush.h |   25 +++
 arch/ia64/mm/tlb.c               |   23 +++
 3 files changed, 47 insertions(+), 257 deletions(-)

--- a/arch/ia64/include/asm/tlb.h
+++ b/arch/ia64/include/asm/tlb.h
@@ -47,262 +47,8 @@
 #include
 #include

-/*
- * If we can't allocate a page to make a big batch of page pointers
- * to work on, then just handle a few from the on-stack structure.
- */
-#define IA64_GATHER_BUNDLE      8
-
-struct mmu_gather {
-        struct mm_struct        *mm;
-        unsigned int            nr;
-        unsigned int            max;
-        unsigned char           fullmm;         /* non-zero means full mm flush */
-        unsigned char           need_flush;     /* really unmapped some PTEs? */
-        unsigned long           start, end;
-        unsigned long           start_addr;
-        unsigned long           end_addr;
-        struct page             **pages;
-        struct page             *local[IA64_GATHER_BUNDLE];
-};
-
-struct ia64_tr_entry {
-        u64 ifa;
-        u64 itir;
-        u64 pte;
-        u64 rr;
-}; /*Record for tr entry!*/
-
-extern int ia64_itr_entry(u64 target_mask, u64 va, u64 pte, u64 log_size);
-extern void ia64_ptr_entry(u64 target_mask, int slot);
-
-extern struct ia64_tr_entry *ia64_idtrs[NR_CPUS];
-
-/*
- region register macros
-*/
-#define RR_TO_VE(val)   (((val) >> 0) & 0x0000000000000001)
-#define RR_VE(val)      (((val) & 0x0000000000000001) << 0)
-#define RR_VE_MASK      0x0000000000000001L
-#define RR_VE_SHIFT     0
-#define RR_TO_PS(val)   (((val) >> 2) & 0x000000000000003f)
-#define RR_PS(val)      (((val) & 0x000000000000003f) << 2)
-#define RR_PS_MASK      0x00000000000000fcL
-#define RR_PS_SHIFT     2
-#define RR_RID_MASK     0x00000000ffffff00L
-#define RR_TO_RID(val)  ((val >> 8) & 0xffffff)
-
-static inline void
-ia64_tlb_flush_mmu_tlbonly(struct mmu_gather *tlb, unsigned long start, unsigned long end)
-{
-        tlb->need_flush = 0;
-
-        if (tlb->fullmm) {
-                /*
-                 * Tearing down the entire address space.  This happens both as a result
-                 * of exit() and execve().  The latter case necessitates the call to
-                 * flush_tlb_mm() here.
-                 */
-                flush_tlb_mm(tlb->mm);
-        } else if (unlikely (end - start >= 1024*1024*1024*1024UL
-                             || REGION_NUMBER(start) != REGION_NUMBER(end - 1)))
-        {
-                /*
-                 * If we flush more than a tera-byte or across regions, we're probably
-                 * better off just flushing the entire TLB(s).  This should be very rare
-                 * and is not worth optimizing for.
-                 */
-                flush_tlb_all();
-        } else {
-                /*
-                 * flush_tlb_range() takes a vma instead of a mm pointer because
-                 * some architectures want the vm_flags for ITLB/DTLB flush.
-                 */
-                struct vm_area_struct vma = TLB_FLUSH_VMA(tlb->mm, 0);
-
-                /* flush the address range from the tlb: */
-                flush_tlb_range(&vma, start, end);
-                /* now flush the virt. page-table area mapping the address range: */
-                flush_tlb_range(&vma, ia64_thash(start), ia64_thash(end));
-        }
-
-}
-
-static inline void
-ia64_tlb_flush_mmu_free(struct mmu_gather *tlb)
-{
-        unsigned long i;
-        unsigned int nr;
-
-        /* lastly, release the freed pages */
-        nr = tlb->nr;
-
-        tlb->nr = 0;
-        tlb->start_addr = ~0UL;
-        for (i = 0; i < nr; ++i)
-                free_page_and_swap_cache(tlb->pages[i]);
-}
-
-/*
- * Flush the TLB for address range START to END and, if not in fast mode, release the
- * freed pages that were gathered up to this point.
- */
-static inline void
-ia64_tlb_flush_mmu (struct mmu_gather *tlb, unsigned long start, unsigned long end)
-{
-        if (!tlb->need_flush)
-                return;
-        ia64_tlb_flush_mmu_tlbonly(tlb, start, end);
-        ia64_tlb_flush_mmu_free(tlb);
-}
-
-static inline void __tlb_alloc_page(struct mmu_gather *tlb)
-{
-        unsigned long addr = __get_free_pages(GFP_NOWAIT | __GFP_NOWARN, 0);
-
-        if (addr) {
-                tlb->pages = (void *)addr;
-                tlb->max = PAGE_SIZE / sizeof(void *);
-        }
-}
-
-
-static inline void
-arch_tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
-                        unsigned long start, unsigned long end)
-{
-        tlb->mm = mm;
-        tlb->max = ARRAY_SIZE(tlb->local);
-        tlb->pages = tlb->local;
-        tlb->nr = 0;
-        tlb->fullmm = !(start | (end+1));
-        tlb->start = start;
-        tlb->end = end;
-        tlb->start_addr = ~0UL;
-}
-
-/*
- * Called at the end of the shootdown operation to free up any resources that were
- * collected.
- */
-static inline void
-arch_tlb_finish_mmu(struct mmu_gather *tlb,
-                unsigned long start, unsigned long end, bool force)
-{
-        if (force)
-                tlb->need_flush = 1;
-        /*
-         * Note: tlb->nr may be 0 at this point, so we can't rely on tlb->start_addr and
-         * tlb->end_addr.
-         */
-        ia64_tlb_flush_mmu(tlb, start, end);
-
-        /* keep the page table cache within bounds */
-        check_pgt_cache();
-
-        if (tlb->pages != tlb->local)
-                free_pages((unsigned long)tlb->pages, 0);
-}
-
-/*
- * Logically, this routine frees PAGE.  On MP machines, the actual freeing of the page
- * must be delayed until after the TLB has been flushed (see comments at the beginning of
- * this file).
- */
-static inline bool __tlb_remove_page(struct mmu_gather *tlb, struct page *page)
-{
-        tlb->need_flush = 1;
-
-        if (!tlb->nr && tlb->pages == tlb->local)
-                __tlb_alloc_page(tlb);
-
-        tlb->pages[tlb->nr++] = page;
-        VM_WARN_ON(tlb->nr > tlb->max);
-        if (tlb->nr == tlb->max)
-                return true;
-        return false;
-}
-
-static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
-{
-        ia64_tlb_flush_mmu_tlbonly(tlb, tlb->start_addr, tlb->end_addr);
-}
-
-static inline void tlb_flush_mmu_free(struct mmu_gather *tlb)
-{
-        ia64_tlb_flush_mmu_free(tlb);
-}
-
-static inline void tlb_flush_mmu(struct mmu_gather *tlb)
-{
-        ia64_tlb_flush_mmu(tlb, tlb->start_addr, tlb->end_addr);
-}
-
-static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
-{
-        if (__tlb_remove_page(tlb, page))
-                tlb_flush_mmu(tlb);
-}
-
-static inline bool __tlb_remove_page_size(struct mmu_gather *tlb,
-                                          struct page *page, int page_size)
-{
-        return __tlb_remove_page(tlb, page);
-}
-
-static inline void tlb_remove_page_size(struct mmu_gather *tlb,
-                                        struct page *page, int page_size)
-{
-        return tlb_remove_page(tlb, page);
-}
-
-/*
- * Remove TLB entry for PTE mapped at virtual address ADDRESS.  This is called for any
- * PTE, not just those pointing to (normal) physical memory.
- */
-static inline void
-__tlb_remove_tlb_entry (struct mmu_gather *tlb, pte_t *ptep, unsigned long address)
-{
-        if (tlb->start_addr == ~0UL)
-                tlb->start_addr = address;
-        tlb->end_addr = address + PAGE_SIZE;
-}
-
 #define tlb_migrate_finish(mm)  platform_tlb_migrate_finish(mm)

-#define tlb_start_vma(tlb, vma)                 do { } while (0)
-#define tlb_end_vma(tlb, vma)                   do { } while (0)
-
-#define tlb_remove_tlb_entry(tlb, ptep, addr)           \
-do {                                                    \
-        tlb->need_flush = 1;                            \
-        __tlb_remove_tlb_entry(tlb, ptep, addr);        \
-} while (0)
-
-#define tlb_remove_huge_tlb_entry(h, tlb, ptep, address)        \
-        tlb_remove_tlb_entry(tlb, ptep, address)
-
-static inline void tlb_change_page_size(struct mmu_gather *tlb,
-                                        unsigned int page_size)
-{
-}
-
-#define pte_free_tlb(tlb, ptep, address)        \
-do {                                            \
-        tlb->need_flush = 1;                    \
-        __pte_free_tlb(tlb, ptep, address);     \
-} while (0)
-
-#define pmd_free_tlb(tlb, ptep, address)        \
-do {                                            \
-        tlb->need_flush = 1;                    \
-        __pmd_free_tlb(tlb, ptep, address);     \
-} while (0)
-
-#define pud_free_tlb(tlb, pudp, address)        \
-do {                                            \
-        tlb->need_flush = 1;                    \
-        __pud_free_tlb(tlb, pudp, address);     \
-} while (0)
+#include <asm-generic/tlb.h>

 #endif /* _ASM_IA64_TLB_H */
--- a/arch/ia64/include/asm/tlbflush.h
+++ b/arch/ia64/include/asm/tlbflush.h
@@ -14,6 +14,31 @@
 #include
 #include

+struct ia64_tr_entry {
+        u64 ifa;
+        u64 itir;
+        u64 pte;
+        u64 rr;
+}; /*Record for tr entry!*/
+
+extern int ia64_itr_entry(u64 target_mask, u64 va, u64 pte, u64 log_size);
+extern void ia64_ptr_entry(u64 target_mask, int slot);
+extern struct ia64_tr_entry *ia64_idtrs[NR_CPUS];
+
+/*
+ region register macros
+*/
+#define RR_TO_VE(val)   (((val) >> 0) & 0x0000000000000001)
+#define RR_VE(val)      (((val) & 0x0000000000000001) << 0)
+#define RR_VE_MASK      0x0000000000000001L
+#define RR_VE_SHIFT     0
+#define RR_TO_PS(val)   (((val) >> 2) & 0x000000000000003f)
+#define RR_PS(val)      (((val) & 0x000000000000003f) << 2)
+#define RR_PS_MASK      0x00000000000000fcL
+#define RR_PS_SHIFT     2
+#define RR_RID_MASK     0x00000000ffffff00L
+#define RR_TO_RID(val)  ((val >> 8) & 0xffffff)
+
 /*
  * Now for some TLB flushing routines.  This is the kind of stuff that
  * can be very expensive, so try to avoid them whenever possible.
--- a/arch/ia64/mm/tlb.c
+++ b/arch/ia64/mm/tlb.c
@@ -297,8 +297,8 @@ local_flush_tlb_all (void)
         ia64_srlz_i();                  /* srlz.i implies srlz.d */
 }

-void
-flush_tlb_range (struct vm_area_struct *vma, unsigned long start,
+static void
+__flush_tlb_range (struct vm_area_struct *vma, unsigned long start,
                  unsigned long end)
 {
         struct mm_struct *mm = vma->vm_mm;
@@ -335,6 +335,25 @@ flush_tlb_range (struct vm_area_struct *
         preempt_enable();
         ia64_srlz_i();                  /* srlz.i implies srlz.d */
 }
+
+void flush_tlb_range(struct vm_area_struct *vma,
+                unsigned long start, unsigned long end)
+{
+        if (unlikely(end - start >= 1024*1024*1024*1024UL
+                        || REGION_NUMBER(start) != REGION_NUMBER(end - 1))) {
+                /*
+                 * If we flush more than a tera-byte or across regions, we're
+                 * probably better off just flushing the entire TLB(s).  This
+                 * should be very rare and is not worth optimizing for.
+                 */
+                flush_tlb_all();
+        } else {
+                /* flush the address range from the tlb */
+                __flush_tlb_range(vma, start, end);
+                /* flush the virt. page-table area mapping the addr range */
+                __flush_tlb_range(vma, ia64_thash(start), ia64_thash(end));
+        }
+}
 EXPORT_SYMBOL(flush_tlb_range);

 void ia64_tlb_init(void)
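The flush policy the ia64 hunk above pulls out of the old mmu_gather can be
read as a small decision function. A rough userspace model, under stated
assumptions: REGION_NUMBER really is the top 3 address bits on ia64, while
thash() here is only a stand-in for ia64_thash() (the address of the VHPT
entry mapping a page), and the flush callees are mocks:

#include <stdio.h>

#define REGION_NUMBER(a)        ((unsigned long)(a) >> 61)  /* top 3 bits */
#define TERA                    (1024UL * 1024 * 1024 * 1024)

static void flush_tlb_all(void) { puts("flush everything"); }
static void __flush_tlb_range(unsigned long s, unsigned long e)
{
        printf("flush [%#lx, %#lx)\n", s, e);
}
/* hypothetical stand-in for ia64_thash() */
static unsigned long thash(unsigned long a)
{
        return 0xe000000000000000UL | (a >> 8);
}

static void flush_tlb_range(unsigned long start, unsigned long end)
{
        if (end - start >= TERA ||
            REGION_NUMBER(start) != REGION_NUMBER(end - 1)) {
                flush_tlb_all();        /* rare; not worth optimizing */
        } else {
                __flush_tlb_range(start, end);                  /* data range */
                __flush_tlb_range(thash(start), thash(end));    /* its VHPT mapping */
        }
}

int main(void)
{
        flush_tlb_range(0x4000, 0x8000);        /* small, same region: targeted */
        return 0;
}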
From patchwork Thu Sep 13 09:21:19 2018
X-Patchwork-Id: 10599087
Message-ID: <20180913092812.366008016@infradead.org>
User-Agent: quilt/0.65
Date: Thu, 13 Sep 2018 11:21:19 +0200
From: Peter Zijlstra
To: will.deacon@arm.com, aneesh.kumar@linux.vnet.ibm.com, akpm@linux-foundation.org, npiggin@gmail.com
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, peterz@infradead.org, linux@armlinux.org.uk, heiko.carstens@de.ibm.com, Yoshinori Sato, Rich Felker
Subject: [RFC][PATCH 09/11] sh/tlb: Convert SH to generic mmu_gather
References: <20180913092110.817204997@infradead.org>
Generic mmu_gather provides everything SH needs (range tracking and
cache coherency).

Cc: Will Deacon
Cc: "Aneesh Kumar K.V"
Cc: Andrew Morton
Cc: Nick Piggin
Cc: Yoshinori Sato
Cc: Rich Felker
Signed-off-by: Peter Zijlstra (Intel)
---
 arch/sh/include/asm/pgalloc.h |    7 ++
 arch/sh/include/asm/tlb.h     |  130 ------------------------------------------
 2 files changed, 8 insertions(+), 129 deletions(-)

--- a/arch/sh/include/asm/pgalloc.h
+++ b/arch/sh/include/asm/pgalloc.h
@@ -72,6 +72,15 @@ do {                                                    \
         tlb_remove_page((tlb), (pte));                  \
 } while (0)

+#if CONFIG_PGTABLE_LEVELS > 2
+#define __pmd_free_tlb(tlb, pmdp, addr)                 \
+do {                                                    \
+        struct page *page = virt_to_page(pmdp);         \
+        pgtable_pmd_page_dtor(page);                    \
+        tlb_remove_page((tlb), page);                   \
+} while (0)
+#endif
+
 static inline void check_pgt_cache(void)
 {
         quicklist_trim(QUICK_PT, NULL, 25, 16);
--- a/arch/sh/include/asm/tlb.h
+++ b/arch/sh/include/asm/tlb.h
@@ -11,131 +11,8 @@
 #ifdef CONFIG_MMU
 #include
-#include
-#include
-#include
-
-/*
- * TLB handling.  This allows us to remove pages from the page
- * tables, and efficiently handle the TLB issues.
- */
-struct mmu_gather {
-        struct mm_struct        *mm;
-        unsigned int            fullmm;
-        unsigned long           start, end;
-};
-
-static inline void init_tlb_gather(struct mmu_gather *tlb)
-{
-        tlb->start = TASK_SIZE;
-        tlb->end = 0;
-
-        if (tlb->fullmm) {
-                tlb->start = 0;
-                tlb->end = TASK_SIZE;
-        }
-}
-
-static inline void
-arch_tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
-                    unsigned long start, unsigned long end)
-{
-        tlb->mm = mm;
-        tlb->start = start;
-        tlb->end = end;
-        tlb->fullmm = !(start | (end+1));
-
-        init_tlb_gather(tlb);
-}
-
-static inline void
-arch_tlb_finish_mmu(struct mmu_gather *tlb,
-                    unsigned long start, unsigned long end, bool force)
-{
-        if (tlb->fullmm || force)
-                flush_tlb_mm(tlb->mm);
-
-        /* keep the page table cache within bounds */
-        check_pgt_cache();
-}
-
-static inline void
-tlb_remove_tlb_entry(struct mmu_gather *tlb, pte_t *ptep, unsigned long address)
-{
-        if (tlb->start > address)
-                tlb->start = address;
-        if (tlb->end < address + PAGE_SIZE)
-                tlb->end = address + PAGE_SIZE;
-}
-
-#define tlb_remove_huge_tlb_entry(h, tlb, ptep, address)        \
-        tlb_remove_tlb_entry(tlb, ptep, address)
-
-/*
- * In the case of tlb vma handling, we can optimise these away in the
- * case where we're doing a full MM flush.  When we're doing a munmap,
- * the vmas are adjusted to only cover the region to be torn down.
- */
-static inline void
-tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
-{
-        if (!tlb->fullmm)
-                flush_cache_range(vma, vma->vm_start, vma->vm_end);
-}
-
-static inline void
-tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
-{
-        if (!tlb->fullmm && tlb->end) {
-                flush_tlb_range(vma, tlb->start, tlb->end);
-                init_tlb_gather(tlb);
-        }
-}
-
-static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
-{
-}
-
-static inline void tlb_flush_mmu_free(struct mmu_gather *tlb)
-{
-}
-
-static inline void tlb_flush_mmu(struct mmu_gather *tlb)
-{
-}
-
-static inline int __tlb_remove_page(struct mmu_gather *tlb, struct page *page)
-{
-        free_page_and_swap_cache(page);
-        return false; /* avoid calling tlb_flush_mmu */
-}
-
-static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
-{
-        __tlb_remove_page(tlb, page);
-}
-
-static inline bool __tlb_remove_page_size(struct mmu_gather *tlb,
-                                          struct page *page, int page_size)
-{
-        return __tlb_remove_page(tlb, page);
-}
-
-static inline void tlb_remove_page_size(struct mmu_gather *tlb,
-                                        struct page *page, int page_size)
-{
-        return tlb_remove_page(tlb, page);
-}
-
-static inline void tlb_change_page_size(struct mmu_gather *tlb, unsigned int page_size)
-{
-}
-
-#define pte_free_tlb(tlb, ptep, addr)   pte_free((tlb)->mm, ptep)
-#define pmd_free_tlb(tlb, pmdp, addr)   pmd_free((tlb)->mm, pmdp)
-#define pud_free_tlb(tlb, pudp, addr)   pud_free((tlb)->mm, pudp)
-
-#define tlb_migrate_finish(mm)          do { } while (0)
+#include <asm-generic/tlb.h>

 #if defined(CONFIG_CPU_SH4) || defined(CONFIG_SUPERH64)
 extern void tlb_wire_entry(struct vm_area_struct *, unsigned long, pte_t);
@@ -155,11 +32,6 @@ static inline void tlb_unwire_entry(void

 #else /* CONFIG_MMU */

-#define tlb_start_vma(tlb, vma)                         do { } while (0)
-#define tlb_end_vma(tlb, vma)                           do { } while (0)
-#define __tlb_remove_tlb_entry(tlb, pte, address)       do { } while (0)
-#define tlb_flush(tlb)                                  do { } while (0)
-
 #include <asm-generic/tlb.h>

 #endif /* CONFIG_MMU */
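A side note on the __pmd_free_tlb() macro added to sh's pgalloc.h above: the
do { ... } while (0) wrapper matters precisely because the macro body must end
without a semicolon, so the caller's own ';' makes it a single statement that
composes with if/else. A tiny compilable demonstration (the macro name and
printf bodies are illustrative stand-ins):

#include <stdio.h>

#define free_pmd(p)                             \
do {                                            \
        printf("dtor %p\n", (void *)(p));       \
        printf("remove %p\n", (void *)(p));     \
} while (0)                     /* no ';' here -- the caller supplies it */

int main(void)
{
        int pmd, have_pmd = 1;

        if (have_pmd)
                free_pmd(&pmd); /* a stray ';' inside the macro would */
        else                    /* detach this else and break the build */
                puts("nothing to free");
        return 0;
}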
Message-ID: <20180913092812.436341429@infradead.org>
User-Agent: quilt/0.65
Date: Thu, 13 Sep 2018 11:21:20 +0200
From: Peter Zijlstra
To: will.deacon@arm.com, aneesh.kumar@linux.vnet.ibm.com,
 akpm@linux-foundation.org, npiggin@gmail.com
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, peterz@infradead.org, linux@armlinux.org.uk,
 heiko.carstens@de.ibm.com, Richard Weinberger
Subject: [RFC][PATCH 10/11] um/tlb: Convert to generic mmu_gather
References: <20180913092110.817204997@infradead.org>
MIME-Version: 1.0

Generic mmu_gather provides the simple flush_tlb_range()-based
range-tracking mmu_gather that UM needs.
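As a rough illustration (an editor's sketch, not part of the patch): the
range tracking that asm-generic/tlb.h provides boils down to growing a
[start, end) window for every pte that is unmapped, then flushing that
window once. The sketch_* names below are invented for illustration; the
real helpers live in asm-generic/tlb.h and mm/memory.c.

#include <linux/mm.h>		/* struct mm_struct, struct vm_area_struct, min()/max() */
#include <asm/tlbflush.h>	/* flush_tlb_range() */

struct sketch_mmu_gather {
	struct mm_struct	*mm;
	unsigned long		start;	/* lowest unmapped address so far */
	unsigned long		end;	/* one past the highest unmapped address */
	unsigned int		fullmm : 1;
};

/* Called for every pte being unmapped: accumulate the dirty range. */
static inline void sketch_adjust_range(struct sketch_mmu_gather *tlb,
				       unsigned long address, unsigned long size)
{
	tlb->start = min(tlb->start, address);
	tlb->end   = max(tlb->end, address + size);
}

/* The eventual invalidate covers exactly the accumulated range. */
static inline void sketch_flush(struct sketch_mmu_gather *tlb,
				struct vm_area_struct *vma)
{
	if (tlb->end)
		flush_tlb_range(vma, tlb->start, tlb->end);
}

The hand-rolled UM code removed below did the same thing: it grew
tlb->start/tlb->end in __tlb_remove_tlb_entry() and flushed the result
through flush_tlb_mm_range().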
Cc: Will Deacon
Cc: "Aneesh Kumar K.V"
Cc: Andrew Morton
Cc: Nick Piggin
Cc: Richard Weinberger
Signed-off-by: Peter Zijlstra (Intel)
---
 arch/um/include/asm/tlb.h | 156 ++--------------------------------------------
 1 file changed, 2 insertions(+), 154 deletions(-)

--- a/arch/um/include/asm/tlb.h
+++ b/arch/um/include/asm/tlb.h
@@ -2,160 +2,8 @@
 #ifndef __UM_TLB_H
 #define __UM_TLB_H
 
-#include <linux/pagemap.h>
-#include <linux/swap.h>
-#include <asm/percpu.h>
-#include <asm/pgalloc.h>
 #include <asm/tlbflush.h>
-
-#define tlb_start_vma(tlb, vma) do { } while (0)
-#define tlb_end_vma(tlb, vma) do { } while (0)
-#define tlb_flush(tlb) flush_tlb_mm((tlb)->mm)
-
-/* struct mmu_gather is an opaque type used by the mm code for passing around
- * any data needed by arch specific code for tlb_remove_page.
- */
-struct mmu_gather {
-	struct mm_struct	*mm;
-	unsigned int		need_flush; /* Really unmapped some ptes? */
-	unsigned long		start;
-	unsigned long		end;
-	unsigned int		fullmm; /* non-zero means full mm flush */
-};
-
-static inline void __tlb_remove_tlb_entry(struct mmu_gather *tlb, pte_t *ptep,
-					  unsigned long address)
-{
-	if (tlb->start > address)
-		tlb->start = address;
-	if (tlb->end < address + PAGE_SIZE)
-		tlb->end = address + PAGE_SIZE;
-}
-
-static inline void init_tlb_gather(struct mmu_gather *tlb)
-{
-	tlb->need_flush = 0;
-
-	tlb->start = TASK_SIZE;
-	tlb->end = 0;
-
-	if (tlb->fullmm) {
-		tlb->start = 0;
-		tlb->end = TASK_SIZE;
-	}
-}
-
-static inline void
-arch_tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
-		unsigned long start, unsigned long end)
-{
-	tlb->mm = mm;
-	tlb->start = start;
-	tlb->end = end;
-	tlb->fullmm = !(start | (end+1));
-
-	init_tlb_gather(tlb);
-}
-
-extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
-			       unsigned long end);
-
-static inline void
-tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
-{
-	flush_tlb_mm_range(tlb->mm, tlb->start, tlb->end);
-}
-
-static inline void
-tlb_flush_mmu_free(struct mmu_gather *tlb)
-{
-	init_tlb_gather(tlb);
-}
-
-static inline void
-tlb_flush_mmu(struct mmu_gather *tlb)
-{
-	if (!tlb->need_flush)
-		return;
-
-	tlb_flush_mmu_tlbonly(tlb);
-	tlb_flush_mmu_free(tlb);
-}
-
-/* arch_tlb_finish_mmu
- *	Called at the end of the shootdown operation to free up any resources
- *	that were required.
- */
-static inline void
-arch_tlb_finish_mmu(struct mmu_gather *tlb,
-		unsigned long start, unsigned long end, bool force)
-{
-	if (force) {
-		tlb->start = start;
-		tlb->end = end;
-		tlb->need_flush = 1;
-	}
-	tlb_flush_mmu(tlb);
-
-	/* keep the page table cache within bounds */
-	check_pgt_cache();
-}
-
-/* tlb_remove_page
- *	Must perform the equivalent to __free_pte(pte_get_and_clear(ptep)),
- *	while handling the additional races in SMP caused by other CPUs
- *	caching valid mappings in their TLBs.
- */
-static inline int __tlb_remove_page(struct mmu_gather *tlb, struct page *page)
-{
-	tlb->need_flush = 1;
-	free_page_and_swap_cache(page);
-	return false; /* avoid calling tlb_flush_mmu */
-}
-
-static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
-{
-	__tlb_remove_page(tlb, page);
-}
-
-static inline bool __tlb_remove_page_size(struct mmu_gather *tlb,
-					  struct page *page, int page_size)
-{
-	return __tlb_remove_page(tlb, page);
-}
-
-static inline void tlb_remove_page_size(struct mmu_gather *tlb,
-					struct page *page, int page_size)
-{
-	return tlb_remove_page(tlb, page);
-}
-
-/**
- * tlb_remove_tlb_entry - remember a pte unmapping for later tlb invalidation.
- *
- * Record the fact that pte's were really umapped in ->need_flush, so we can
- * later optimise away the tlb invalidate.   This helps when userspace is
- * unmapping already-unmapped pages, which happens quite a lot.
- */
-#define tlb_remove_tlb_entry(tlb, ptep, address)		\
-	do {							\
-		tlb->need_flush = 1;				\
-		__tlb_remove_tlb_entry(tlb, ptep, address);	\
-	} while (0)
-
-#define tlb_remove_huge_tlb_entry(h, tlb, ptep, address)	\
-	tlb_remove_tlb_entry(tlb, ptep, address)
-
-static inline void tlb_change_page_size(struct mmu_gather *tlb, unsigned int page_size)
-{
-}
-
-#define pte_free_tlb(tlb, ptep, addr) __pte_free_tlb(tlb, ptep, addr)
-
-#define pud_free_tlb(tlb, pudp, addr) __pud_free_tlb(tlb, pudp, addr)
-
-#define pmd_free_tlb(tlb, pmdp, addr) __pmd_free_tlb(tlb, pmdp, addr)
-
-#define tlb_migrate_finish(mm) do {} while (0)
+#include <asm-generic/cacheflush.h>
+#include <asm-generic/tlb.h>
 
 #endif
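A note on the conversion (an editor's aside, not part of the patch): the
->need_flush logic deleted above is not lost. The generic mmu_gather gets
the same "skip the invalidate when nothing was really unmapped" optimisation
by leaving tlb->end at zero until an entry is queued, roughly as sketched
here (simplified; the reset helper name is assumed, see mm/memory.c):

static inline void sketch_tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
{
	if (!tlb->end)			/* nothing was really unmapped */
		return;

	tlb_flush(tlb);			/* arch override, or a range flush */
	__tlb_reset_range(tlb);		/* assumed helper: rearm start/end */
}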
From patchwork Thu Sep 13 09:21:21 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Peter Zijlstra
X-Patchwork-Id: 10599109
Message-ID: <20180913092812.496674324@infradead.org>
User-Agent: quilt/0.65
Date: Thu, 13 Sep 2018 11:21:21 +0200
From: Peter Zijlstra
To: will.deacon@arm.com, aneesh.kumar@linux.vnet.ibm.com,
 akpm@linux-foundation.org, npiggin@gmail.com
Cc: linux-arch@vger.kernel.org, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, peterz@infradead.org, linux@armlinux.org.uk,
 heiko.carstens@de.ibm.com
Subject: [RFC][PATCH 11/11] arch/tlb: Clean up simple architectures
References: <20180913092110.817204997@infradead.org>
MIME-Version: 1.0

There are generally two cases:

 1) either the platform has an efficient flush_tlb_range() and
    asm-generic/tlb.h doesn't need any overrides at all, or

 2) the architecture lacks an efficient flush_tlb_range() and we
    override tlb_end_vma() and tlb_flush().

Convert all 'simple' architectures to one of these two forms.

alpha:	    has no range invalidate -> 2
arc:	    already used flush_tlb_range() -> 1
c6x:	    has no range invalidate -> 2
h8300:	    has no mmu
hexagon:    has an efficient flush_tlb_range() -> 1
	    (flush_tlb_mm() is in fact a full range invalidate,
	     so no need to shoot down everything)
m68k:	    has inefficient flush_tlb_range() -> 2
microblaze: has no flush_tlb_range() -> 2
mips:	    has efficient flush_tlb_range() -> 1
	    (even though it currently seems to use flush_tlb_mm())
nds32:	    already uses flush_tlb_range() -> 1
nios2:	    has inefficient flush_tlb_range() -> 2
	    (no limit on range iteration)
openrisc:   has inefficient flush_tlb_range() -> 2
	    (no limit on range iteration)
parisc:	    already uses flush_tlb_range() -> 1
sparc32:    already uses flush_tlb_range() -> 1
unicore32:  has inefficient flush_tlb_range() -> 2
	    (no limit on range iteration)
xtensa:	    has efficient flush_tlb_range() -> 1
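To make the two target forms concrete (an editor's sketch; "foo" and "bar"
are hypothetical architectures used purely for illustration):

/* Case 1: efficient flush_tlb_range(); the generic code needs no overrides. */
#ifndef _ASM_FOO_TLB_H
#define _ASM_FOO_TLB_H

#include <asm-generic/tlb.h>

#endif

/* Case 2: no efficient flush_tlb_range(); suppress the per-vma range flush
 * and do a single whole-mm invalidate at the end instead. */
#ifndef _ASM_BAR_TLB_H
#define _ASM_BAR_TLB_H

#define tlb_end_vma(tlb, vma)	do { } while (0)
#define tlb_flush(tlb)		flush_tlb_mm((tlb)->mm)

#include <asm-generic/tlb.h>

#endif

The c6x and nios2 hunks below are real instances of case 2; hexagon and
mips simply lose their overrides and become case 1.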
Cc: Will Deacon
Cc: "Aneesh Kumar K.V"
Cc: Andrew Morton
Cc: Nick Piggin
Signed-off-by: Peter Zijlstra (Intel)
---
 arch/alpha/include/asm/tlb.h      |  2 --
 arch/arc/include/asm/tlb.h        | 23 -----------------------
 arch/c6x/include/asm/tlb.h        |  1 +
 arch/h8300/include/asm/tlb.h      |  2 --
 arch/hexagon/include/asm/tlb.h    | 12 ------------
 arch/m68k/include/asm/tlb.h       |  1 -
 arch/microblaze/include/asm/tlb.h |  4 +---
 arch/mips/include/asm/tlb.h       |  8 --------
 arch/nds32/include/asm/tlb.h      | 10 ----------
 arch/nios2/include/asm/tlb.h      |  8 +++++---
 arch/openrisc/include/asm/tlb.h   |  6 ++++--
 arch/parisc/include/asm/tlb.h     | 13 -------------
 arch/powerpc/include/asm/tlb.h    |  1 -
 arch/sparc/include/asm/tlb_32.h   | 13 -------------
 arch/unicore32/include/asm/tlb.h  | 10 ++++++----
 arch/xtensa/include/asm/tlb.h     | 17 -----------------
 16 files changed, 17 insertions(+), 114 deletions(-)

--- a/arch/alpha/include/asm/tlb.h
+++ b/arch/alpha/include/asm/tlb.h
@@ -4,8 +4,6 @@
 #define tlb_start_vma(tlb, vma)			do { } while (0)
 #define tlb_end_vma(tlb, vma)			do { } while (0)
-#define __tlb_remove_tlb_entry(tlb, pte, addr)	do { } while (0)
-
 #define tlb_flush(tlb)				flush_tlb_mm((tlb)->mm)
 
 #include <asm-generic/tlb.h>
--- a/arch/arc/include/asm/tlb.h
+++ b/arch/arc/include/asm/tlb.h
@@ -9,29 +9,6 @@
 #ifndef _ASM_ARC_TLB_H
 #define _ASM_ARC_TLB_H
 
-#define tlb_flush(tlb)				\
-do {						\
-	if (tlb->fullmm)			\
-		flush_tlb_mm((tlb)->mm);	\
-} while (0)
-
-/*
- * This pair is called at time of munmap/exit to flush cache and TLB entries
- * for mappings being torn down.
- * 1) cache-flush part -implemented via tlb_start_vma( ) for VIPT aliasing D$
- * 2) tlb-flush part - implemted via tlb_end_vma( ) flushes the TLB range
- *
- * Note, read http://lkml.org/lkml/2004/1/15/6
- */
-
-#define tlb_end_vma(tlb, vma)						\
-do {									\
-	if (!tlb->fullmm)						\
-		flush_tlb_range(vma, vma->vm_start, vma->vm_end);	\
-} while (0)
-
-#define __tlb_remove_tlb_entry(tlb, ptep, address)
-
 #include <linux/pagemap.h>
 #include <asm-generic/tlb.h>
--- a/arch/c6x/include/asm/tlb.h
+++ b/arch/c6x/include/asm/tlb.h
@@ -2,6 +2,7 @@
 #ifndef _ASM_C6X_TLB_H
 #define _ASM_C6X_TLB_H
 
+#define tlb_end_vma(tlb,vma)	do { } while (0)
 #define tlb_flush(tlb)		flush_tlb_mm((tlb)->mm)
 
 #include <asm-generic/tlb.h>
--- a/arch/h8300/include/asm/tlb.h
+++ b/arch/h8300/include/asm/tlb.h
@@ -2,8 +2,6 @@
 #ifndef __H8300_TLB_H__
 #define __H8300_TLB_H__
 
-#define tlb_flush(tlb)	do { } while (0)
-
 #include <asm-generic/tlb.h>
 
 #endif
--- a/arch/hexagon/include/asm/tlb.h
+++ b/arch/hexagon/include/asm/tlb.h
@@ -22,18 +22,6 @@
 #include <linux/pagemap.h>
 #include <asm/tlbflush.h>
 
-/*
- * We don't need any special per-pte or per-vma handling...
- */
-#define tlb_start_vma(tlb, vma)				do { } while (0)
-#define tlb_end_vma(tlb, vma)				do { } while (0)
-#define __tlb_remove_tlb_entry(tlb, ptep, address)	do { } while (0)
-
-/*
- * .. because we flush the whole mm when it fills up
- */
-#define tlb_flush(tlb)		flush_tlb_mm((tlb)->mm)
-
 #include <asm-generic/tlb.h>
 
 #endif
--- a/arch/m68k/include/asm/tlb.h
+++ b/arch/m68k/include/asm/tlb.h
@@ -8,7 +8,6 @@
  */
 #define tlb_start_vma(tlb, vma)	do { } while (0)
 #define tlb_end_vma(tlb, vma)	do { } while (0)
-#define __tlb_remove_tlb_entry(tlb, ptep, address)	do { } while (0)
 
 /*
  * .. because we flush the whole mm when it
--- a/arch/microblaze/include/asm/tlb.h
+++ b/arch/microblaze/include/asm/tlb.h
@@ -11,14 +11,12 @@
 #ifndef _ASM_MICROBLAZE_TLB_H
 #define _ASM_MICROBLAZE_TLB_H
 
-#define tlb_flush(tlb)	flush_tlb_mm((tlb)->mm)
-
 #include <linux/pagemap.h>
 
 #ifdef CONFIG_MMU
 #define tlb_start_vma(tlb, vma)		do { } while (0)
 #define tlb_end_vma(tlb, vma)		do { } while (0)
-#define __tlb_remove_tlb_entry(tlb, pte, address) do { } while (0)
+#define tlb_flush(tlb)	flush_tlb_mm((tlb)->mm)
 #endif
 
 #include <asm-generic/tlb.h>
--- a/arch/mips/include/asm/tlb.h
+++ b/arch/mips/include/asm/tlb.h
@@ -5,14 +5,6 @@
 #include <asm/cpu-features.h>
 #include <asm/mipsregs.h>
 
-#define tlb_end_vma(tlb, vma) do { } while (0)
-#define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0)
-
-/*
- * .. because we flush the whole mm when it fills up.
- */
-#define tlb_flush(tlb) flush_tlb_mm((tlb)->mm)
-
 #define _UNIQUE_ENTRYHI(base, idx)					\
 	(((base) + ((idx) << (PAGE_SHIFT + 1))) |			\
 	 (cpu_has_tlbinv ? MIPS_ENTRYHI_EHINV : 0))
--- a/arch/nds32/include/asm/tlb.h
+++ b/arch/nds32/include/asm/tlb.h
@@ -4,16 +4,6 @@
 #ifndef __ASMNDS32_TLB_H
 #define __ASMNDS32_TLB_H
 
-#define tlb_end_vma(tlb,vma)					\
-	do {							\
-		if(!tlb->fullmm)				\
-			flush_tlb_range(vma, vma->vm_start, vma->vm_end); \
-	} while (0)
-
-#define __tlb_remove_tlb_entry(tlb, pte, addr) do { } while (0)
-
-#define tlb_flush(tlb)	flush_tlb_mm((tlb)->mm)
-
 #include <asm-generic/tlb.h>
 
 #define __pte_free_tlb(tlb, pte, addr)	pte_free((tlb)->mm, pte)
--- a/arch/nios2/include/asm/tlb.h
+++ b/arch/nios2/include/asm/tlb.h
@@ -11,12 +11,14 @@
 #ifndef _ASM_NIOS2_TLB_H
 #define _ASM_NIOS2_TLB_H
 
-#define tlb_flush(tlb)	flush_tlb_mm((tlb)->mm)
-
 extern void set_mmu_pid(unsigned long pid);
 
+/*
+ * NIOS32 does have flush_tlb_range(), but it lacks a limit and fallback to
+ * full mm invalidation. So use flush_tlb_mm() for everything.
+ */
 #define tlb_end_vma(tlb, vma)	do { } while (0)
-#define __tlb_remove_tlb_entry(tlb, ptep, address)	do { } while (0)
+#define tlb_flush(tlb)	flush_tlb_mm((tlb)->mm)
 
 #include <linux/pagemap.h>
 #include <asm-generic/tlb.h>
--- a/arch/openrisc/include/asm/tlb.h
+++ b/arch/openrisc/include/asm/tlb.h
@@ -22,12 +22,14 @@
 /*
  * or32 doesn't need any special per-pte or
  * per-vma handling..
+ *
+ * OpenRISC doesn't have an efficient flush_tlb_range() so use flush_tlb_mm()
+ * for everything.
  */
 #define tlb_start_vma(tlb, vma) do { } while (0)
 #define tlb_end_vma(tlb, vma) do { } while (0)
-#define __tlb_remove_tlb_entry(tlb, ptep, address) do { } while (0)
-
 #define tlb_flush(tlb) flush_tlb_mm((tlb)->mm)
+
 #include <asm-generic/tlb.h>
 #include <asm/pgalloc.h>
--- a/arch/parisc/include/asm/tlb.h
+++ b/arch/parisc/include/asm/tlb.h
@@ -2,19 +2,6 @@
 #ifndef _PARISC_TLB_H
 #define _PARISC_TLB_H
 
-#define tlb_flush(tlb)			\
-do {	if ((tlb)->fullmm)		\
-		flush_tlb_mm((tlb)->mm);\
-} while (0)
-
-#define tlb_end_vma(tlb, vma)	\
-do {	if (!(tlb)->fullmm)	\
-		flush_tlb_range(vma, vma->vm_start, vma->vm_end); \
-} while (0)
-
-#define __tlb_remove_tlb_entry(tlb, pte, address) \
-	do { } while (0)
-
 #include <asm-generic/tlb.h>
 
 #define __pmd_free_tlb(tlb, pmd, addr)	pmd_free((tlb)->mm, pmd)
--- a/arch/sparc/include/asm/tlb_32.h
+++ b/arch/sparc/include/asm/tlb_32.h
@@ -2,19 +2,6 @@
 #ifndef _SPARC_TLB_H
 #define _SPARC_TLB_H
 
-#define tlb_end_vma(tlb, vma) \
-do {								\
-	flush_tlb_range(vma, vma->vm_start, vma->vm_end);	\
-} while (0)
-
-#define __tlb_remove_tlb_entry(tlb, pte, address) \
-	do { } while (0)
-
-#define tlb_flush(tlb) \
-do {								\
-	flush_tlb_mm((tlb)->mm);				\
-} while (0)
-
 #include <asm-generic/tlb.h>
 
 #endif /* _SPARC_TLB_H */
--- a/arch/unicore32/include/asm/tlb.h
+++ b/arch/unicore32/include/asm/tlb.h
@@ -12,10 +12,12 @@
 #ifndef __UNICORE_TLB_H__
 #define __UNICORE_TLB_H__
 
-#define tlb_start_vma(tlb, vma)				do { } while (0)
-#define tlb_end_vma(tlb, vma)				do { } while (0)
-#define __tlb_remove_tlb_entry(tlb, ptep, address)	do { } while (0)
-#define tlb_flush(tlb)					flush_tlb_mm((tlb)->mm)
+/*
+ * unicore32 lacks an afficient flush_tlb_range(), use flush_tlb_mm().
+ */
+#define tlb_start_vma(tlb, vma)	do { } while (0)
+#define tlb_end_vma(tlb, vma)	do { } while (0)
+#define tlb_flush(tlb)		flush_tlb_mm((tlb)->mm)
 
 #define __pte_free_tlb(tlb, pte, addr)	\
 	do {				\
--- a/arch/xtensa/include/asm/tlb.h
+++ b/arch/xtensa/include/asm/tlb.h
@@ -14,23 +14,6 @@
 #include <asm/cache.h>
 #include <asm/page.h>
 
-#if (DCACHE_WAY_SIZE <= PAGE_SIZE)
-
-# define tlb_end_vma(tlb,vma)			do { } while (0)
-
-#else
-
-# define tlb_end_vma(tlb, vma)						\
-	do {								\
-		if (!tlb->fullmm)					\
-			flush_tlb_range(vma, vma->vm_start, vma->vm_end); \
-	} while(0)
-
-#endif
-
-#define __tlb_remove_tlb_entry(tlb,pte,addr)	do { } while (0)
-#define tlb_flush(tlb)				flush_tlb_mm((tlb)->mm)
-
 #include <asm-generic/tlb.h>
 
 #define __pte_free_tlb(tlb, pte, address)	pte_free((tlb)->mm, pte)