From patchwork Thu Mar 12 04:10:16 2020
X-Patchwork-Submitter: Zhenyu Ye
X-Patchwork-Id: 11433179
From: Zhenyu Ye <yezhenyu2@huawei.com>
Subject: [RFC PATCH v2 1/3] arm64: tlb: use __tlbi_level to replace __tlbi in Stage-1
Date: Thu, 12 Mar 2020 12:10:16 +0800
Message-ID: <20200312041018.1927-2-yezhenyu2@huawei.com>
In-Reply-To: <20200312041018.1927-1-yezhenyu2@huawei.com>
References: <20200312041018.1927-1-yezhenyu2@huawei.com>

ARMv8.4-TTL provides a TTL field in the TLBI instructions to indicate
the level of the translation table walk that holds the leaf entry for
the address being invalidated. This patch uses __tlbi_level to replace
__tlbi and __tlbi_user in Stage-1, and sets the default value of level
to 0 (a TTL of 0 supplies no information about the level, so the
hardware gets no hint).
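For reference, a minimal sketch of the shape such a level-aware helper
can take (illustrative only, not necessarily the exact helper this
series builds on: TLBI_TTL_MASK as bits [47:44] of the address operand
and the granule encoding follow the ARMv8.4-TTL description; the CPU
capability check is elided here):

#include <linux/bits.h>
#include <linux/bitfield.h>

/* TTL hint lives in bits [47:44] of the TLBI address operand */
#define TLBI_TTL_MASK		GENMASK_ULL(47, 44)

#define __tlbi_level(op, arg, level) do {			\
	u64 __ta = (arg);					\
								\
	if (level) {						\
		/* TTL[3:2]: granule (0b01 == 4KB),		\
		 * TTL[1:0]: level of the leaf entry */	\
		u64 __ttl = (0x1 << 2) | ((level) & 0x3);	\
								\
		__ta &= ~TLBI_TTL_MASK;				\
		__ta |= FIELD_PREP(TLBI_TTL_MASK, __ttl);	\
	}							\
								\
	__tlbi(op, __ta);					\
} while (0)

With level == 0 the operand is passed through unchanged, which is why
converting existing callers with a default level of 0 preserves the
current behaviour.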
Signed-off-by: Zhenyu Ye <yezhenyu2@huawei.com>
---
 arch/arm64/include/asm/tlbflush.h | 19 ++++++-------------
 1 file changed, 6 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index a3f70778a325..dda693f32099 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -46,11 +46,6 @@
 
 #define __tlbi(op, ...)		__TLBI_N(op, ##__VA_ARGS__, 1, 0)
 
-#define __tlbi_user(op, arg) do {					\
-	if (arm64_kernel_unmapped_at_el0())				\
-		__tlbi(op, (arg) | USER_ASID_FLAG);			\
-} while (0)
-
 /* This macro creates a properly formatted VA operand for the TLBI */
 #define __TLBI_VADDR(addr, asid)				\
 	({							\
@@ -87,6 +82,8 @@
 	}							\
 								\
 	__tlbi(op, arg);					\
+	if (arm64_kernel_unmapped_at_el0())			\
+		__tlbi(op, (arg) | USER_ASID_FLAG);		\
 } while(0)
 
 /*
@@ -179,8 +176,7 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
 	unsigned long asid = __TLBI_VADDR(0, ASID(mm));
 
 	dsb(ishst);
-	__tlbi(aside1is, asid);
-	__tlbi_user(aside1is, asid);
+	__tlbi_level(aside1is, asid, 0);
 	dsb(ish);
 }
 
@@ -190,8 +186,7 @@ static inline void flush_tlb_page_nosync(struct vm_area_struct *vma,
 	unsigned long addr = __TLBI_VADDR(uaddr, ASID(vma->vm_mm));
 
 	dsb(ishst);
-	__tlbi(vale1is, addr);
-	__tlbi_user(vale1is, addr);
+	__tlbi_level(vale1is, addr, 0);
 }
 
 static inline void flush_tlb_page(struct vm_area_struct *vma,
@@ -231,11 +226,9 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 	dsb(ishst);
 	for (addr = start; addr < end; addr += stride) {
 		if (last_level) {
-			__tlbi(vale1is, addr);
-			__tlbi_user(vale1is, addr);
+			__tlbi_level(vale1is, addr, 0);
 		} else {
-			__tlbi(vae1is, addr);
-			__tlbi_user(vae1is, addr);
+			__tlbi_level(vae1is, addr, 0);
 		}
 	}
 	dsb(ish);
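As a usage note (hypothetical caller, not part of this patch): once a
caller can derive the level from the page-table walk, it can pass a
real value instead of 0, e.g.

	/* the leaf is known to be a level-3 (pte-level) entry */
	__tlbi_level(vale1is, addr, 3);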