From patchwork Tue Dec 17 07:17:12 2019
X-Patchwork-Submitter: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
X-Patchwork-Id: 11296941
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
To: akpm@linux-foundation.org, npiggin@gmail.com, mpe@ellerman.id.au, peterz@infradead.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Subject: [RFC PATCH 1/2] mm/mmu_gather: Invalidate TLB correctly on batch allocation failure and flush
Date: Tue, 17 Dec 2019 12:47:12 +0530
Message-Id: <20191217071713.93399-1-aneesh.kumar@linux.ibm.com>

Architectures with hardware walkers of the Linux page table should flush
the TLB on mmu gather batch allocation failures and on batch flush. Some
architectures, like POWER, support multiple translation modes (hash and
radix), and on POWER only the radix translation mode needs this TLBI.
With hash translation there are no hardware walkers of the Linux page
table, so the kernel wants to avoid the extra flush. With radix
translation the hardware does walk the Linux page table, so the kernel
must invalidate the page-walk cache before page-table pages are freed.

More details in commit d86564a2f085 ("mm/tlb, x86/mm: Support
invalidating TLB caches for RCU_TABLE_FREE").
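Concretely, the mechanism reads like this in condensed form (a paraphrase
of the include/asm-generic/tlb.h, arch/powerpc and mm/mmu_gather.c hunks
below, not separate patch content):

	/* include/asm-generic/tlb.h: conservative default, flush unless
	 * the arch says its hardware never walks Linux page tables */
	#ifndef tlb_needs_table_invalidate
	#define tlb_needs_table_invalidate() (true)
	#endif

	/* arch/powerpc/include/asm/tlb.h: only radix has a hardware
	 * walker, hash does not */
	#define tlb_needs_table_invalidate() radix_enabled()

	/* mm/mmu_gather.c: consult the hook before RCU-freeing the
	 * page-table pages */
	if (tlb_needs_table_invalidate())
		tlb_flush_mmu_tlbonly(tlb);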
Based on changes from Peter Zijlstra <peterz@infradead.org>

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 arch/Kconfig                    |  3 ---
 arch/powerpc/Kconfig            |  1 -
 arch/powerpc/include/asm/tlb.h  |  4 ++++
 arch/sparc/Kconfig              |  1 -
 arch/sparc/include/asm/tlb_64.h |  9 +++++++++
 include/asm-generic/tlb.h       | 22 +++++++++++++++-------
 mm/mmu_gather.c                 | 16 ++++++++--------
 7 files changed, 36 insertions(+), 20 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index 48b5e103bdb0..208aad121630 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -396,9 +396,6 @@ config HAVE_ARCH_JUMP_LABEL_RELATIVE
 config HAVE_RCU_TABLE_FREE
 	bool
 
-config HAVE_RCU_TABLE_NO_INVALIDATE
-	bool
-
 config HAVE_MMU_GATHER_PAGE_SIZE
 	bool
 
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 1ec34e16ed65..a15f5584b0de 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -223,7 +223,6 @@ config PPC
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
 	select HAVE_RCU_TABLE_FREE if SMP
-	select HAVE_RCU_TABLE_NO_INVALIDATE if HAVE_RCU_TABLE_FREE
 	select HAVE_MMU_GATHER_PAGE_SIZE
 	select HAVE_REGS_AND_STACK_ACCESS_API
 	select HAVE_RELIABLE_STACKTRACE if PPC_BOOK3S_64 && CPU_LITTLE_ENDIAN
 
diff --git a/arch/powerpc/include/asm/tlb.h b/arch/powerpc/include/asm/tlb.h
index b2c0be93929d..feea1a09bbce 100644
--- a/arch/powerpc/include/asm/tlb.h
+++ b/arch/powerpc/include/asm/tlb.h
@@ -27,6 +27,10 @@
 #define tlb_flush tlb_flush
 extern void tlb_flush(struct mmu_gather *tlb);
 
+#ifdef CONFIG_HAVE_RCU_TABLE_FREE
+#define tlb_needs_table_invalidate() radix_enabled()
+#endif
+
 /* Get the generic bits... */
 #include <asm-generic/tlb.h>
 
diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index eb24cb1afc11..18e9fb6fcf1b 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -65,7 +65,6 @@ config SPARC64
 	select HAVE_KRETPROBES
 	select HAVE_KPROBES
 	select HAVE_RCU_TABLE_FREE if SMP
-	select HAVE_RCU_TABLE_NO_INVALIDATE if HAVE_RCU_TABLE_FREE
 	select HAVE_MEMBLOCK_NODE_MAP
 	select HAVE_ARCH_TRANSPARENT_HUGEPAGE
 	select HAVE_DYNAMIC_FTRACE
 
diff --git a/arch/sparc/include/asm/tlb_64.h b/arch/sparc/include/asm/tlb_64.h
index a2f3fa61ee36..8cb8f3833239 100644
--- a/arch/sparc/include/asm/tlb_64.h
+++ b/arch/sparc/include/asm/tlb_64.h
@@ -28,6 +28,15 @@ void flush_tlb_pending(void);
 #define __tlb_remove_tlb_entry(tlb, ptep, address)	do { } while (0)
 #define tlb_flush(tlb)	flush_tlb_pending()
 
+/*
+ * SPARC64's hardware TLB fill does not use the Linux page-tables
+ * and therefore we don't need a TLBI when freeing page-table pages.
+ */
+
+#ifdef CONFIG_HAVE_RCU_TABLE_FREE
+#define tlb_needs_table_invalidate() (false)
+#endif
+
 #include <asm-generic/tlb.h>
 
 #endif /* _SPARC64_TLB_H */
 
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 2b10036fefd0..dcdf13fc0a0b 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -137,13 +137,6 @@
  * When used, an architecture is expected to provide __tlb_remove_table()
  * which does the actual freeing of these pages.
  *
- * HAVE_RCU_TABLE_NO_INVALIDATE
- *
- * This makes HAVE_RCU_TABLE_FREE avoid calling tlb_flush_mmu_tlbonly() before
- * freeing the page-table pages. This can be avoided if you use
- * HAVE_RCU_TABLE_FREE and your architecture does _NOT_ use the Linux
- * page-tables natively.
- *
  * MMU_GATHER_NO_RANGE
  *
  * Use this if your architecture lacks an efficient flush_tlb_range().
@@ -189,8 +182,23 @@ struct mmu_table_batch {
 
 extern void tlb_remove_table(struct mmu_gather *tlb, void *table);
 
+/*
+ * This allows an architecture that does not use the linux page-tables for
+ * hardware to skip the TLBI when freeing page tables.
+ */
+#ifndef tlb_needs_table_invalidate
+#define tlb_needs_table_invalidate() (true)
+#endif
+
+#else
+
+#ifdef tlb_needs_table_invalidate
+#error tlb_needs_table_invalidate() requires HAVE_RCU_TABLE_FREE
 #endif
 
+#endif /* CONFIG_HAVE_RCU_TABLE_FREE */
+
+
 #ifndef CONFIG_HAVE_MMU_GATHER_NO_GATHER
 /*
  * If we can't allocate a page to make a big batch of page pointers
 
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 7d70e5c78f97..7c1b8f67af7b 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -102,14 +102,14 @@ bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page, int page_
  */
 static inline void tlb_table_invalidate(struct mmu_gather *tlb)
 {
-#ifndef CONFIG_HAVE_RCU_TABLE_NO_INVALIDATE
-	/*
-	 * Invalidate page-table caches used by hardware walkers. Then we still
-	 * need to RCU-sched wait while freeing the pages because software
-	 * walkers can still be in-flight.
-	 */
-	tlb_flush_mmu_tlbonly(tlb);
-#endif
+	if (tlb_needs_table_invalidate()) {
+		/*
+		 * Invalidate page-table caches used by hardware walkers. Then
+		 * we still need to RCU-sched wait while freeing the pages
+		 * because software walkers can still be in-flight.
+		 */
+		tlb_flush_mmu_tlbonly(tlb);
+	}
 }
 
 static void tlb_remove_table_smp_sync(void *arg)
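As a usage sketch: a third architecture whose hardware walker can be
turned off at runtime would override the hook in its own asm/tlb.h before
including the generic header, mirroring the powerpc and sparc hunks above.
This example is hypothetical (arch "foo" and foo_hw_walker_enabled() are
invented stand-ins, analogous to radix_enabled() on powerpc):

	/* arch/foo/include/asm/tlb.h (illustrative only) */
	#define tlb_flush tlb_flush
	extern void tlb_flush(struct mmu_gather *tlb);

	#ifdef CONFIG_HAVE_RCU_TABLE_FREE
	/* TLBI on table free only while the hardware walker is active */
	#define tlb_needs_table_invalidate() foo_hw_walker_enabled()
	#endif

	#include <asm-generic/tlb.h>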
From patchwork Tue Dec 17 07:17:13 2019
X-Patchwork-Submitter: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
X-Patchwork-Id: 11296939
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
To: akpm@linux-foundation.org, npiggin@gmail.com, mpe@ellerman.id.au, peterz@infradead.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Subject: [RFC PATCH 2/2] mm/mmu_gather: Avoid multiple page walk cache flush
Date: Tue, 17 Dec 2019 12:47:13 +0530
Message-Id: <20191217071713.93399-2-aneesh.kumar@linux.ibm.com>
In-Reply-To: <20191217071713.93399-1-aneesh.kumar@linux.ibm.com>
References: <20191217071713.93399-1-aneesh.kumar@linux.ibm.com>

On tlb_finish_mmu() the kernel does a TLB flush before the mmu gather
table invalidate. The mmu gather table invalidate, depending on the
kernel config, also does another TLBI. Avoid the latter on
tlb_finish_mmu().

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 mm/mmu_gather.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 7c1b8f67af7b..7e2bd43b9084 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -143,17 +143,23 @@ static void tlb_remove_table_rcu(struct rcu_head *head)
 	free_page((unsigned long)batch);
 }
 
-static void tlb_table_flush(struct mmu_gather *tlb)
+static void __tlb_table_flush(struct mmu_gather *tlb, bool table_inval)
 {
 	struct mmu_table_batch **batch = &tlb->batch;
 
 	if (*batch) {
-		tlb_table_invalidate(tlb);
+		if (table_inval)
+			tlb_table_invalidate(tlb);
 		call_rcu(&(*batch)->rcu, tlb_remove_table_rcu);
 		*batch = NULL;
 	}
 }
 
+static void tlb_table_flush(struct mmu_gather *tlb)
+{
+	__tlb_table_flush(tlb, true);
+}
+
 void tlb_remove_table(struct mmu_gather *tlb, void *table)
 {
 	struct mmu_table_batch **batch = &tlb->batch;
@@ -178,7 +184,7 @@ void tlb_remove_table(struct mmu_gather *tlb, void *table)
 static void tlb_flush_mmu_free(struct mmu_gather *tlb)
 {
 #ifdef CONFIG_HAVE_RCU_TABLE_FREE
-	tlb_table_flush(tlb);
+	__tlb_table_flush(tlb, false);
 #endif
 #ifndef CONFIG_HAVE_MMU_GATHER_NO_GATHER
 	tlb_batch_pages_flush(tlb);
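For reference, the call flow that results from the two patches together
(an interpretation of the diffs above, with illustrative annotations, not
kernel documentation):

	tlb_finish_mmu()
	  tlb_flush_mmu()
	    tlb_flush_mmu_tlbonly()		/* TLBI already issued here */
	    tlb_flush_mmu_free()
	      __tlb_table_flush(tlb, false)	/* so skip the second TLBI */

	tlb_remove_table()			/* batch full */
	  tlb_table_flush()
	    __tlb_table_flush(tlb, true)	/* TLBI still required here */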