From patchwork Tue Mar 31 14:29:19 2020
From: Zhenyu Ye <yezhenyu2@huawei.com>
Subject: [RFC PATCH v5 0/8] arm64: tlb: add support for TTL feature
Date: Tue, 31 Mar 2020 22:29:19 +0800
Message-ID: <20200331142927.1237-1-yezhenyu2@huawei.com>

In order to reduce the cost of TLB invalidation, the ARMv8.4 TTL
feature allows TLB invalidation instructions to carry a hint about the
translation table level that holds the leaf entry being invalidated,
allowing for quicker invalidation. This series provides support for
this feature.

Patches 1 and 2 were provided by Marc in his NV series[1]; they detect
the TTL feature and add the __tlbi_level interface.
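To give a feel for the interface, here is a simplified sketch of the
level-hinted invalidation helper (assuming a 4KB translation granule
for brevity; the mask, capability and helper names follow the patches
but may differ in detail):

	#include <linux/bits.h>
	#include <linux/bitfield.h>

	/*
	 * The TTL hint lives in bits [47:44] of the TLBI VA argument:
	 * bits [3:2] encode the translation granule (0b01 == 4KB) and
	 * bits [1:0] the level of the entry being invalidated.
	 */
	#define TLBI_TTL_MASK		GENMASK_ULL(47, 44)
	#define TLBI_TTL_TG_4K		1

	#define __tlbi_level(op, addr, level) do {			\
		u64 arg = addr;						\
									\
		if (cpus_have_const_cap(ARM64_HAS_ARMv8_4_TTL) &&	\
		    level) {						\
			u64 ttl = level & 3;				\
									\
			ttl |= TLBI_TTL_TG_4K << 2;			\
			arg &= ~TLBI_TTL_MASK;				\
			arg |= FIELD_PREP(TLBI_TTL_MASK, ttl);		\
		}							\
									\
		__tlbi(op, arg);					\
	} while (0)

A level of 0 keeps today's behaviour: the TTL field stays zero and the
CPU must assume the entry could be at any level.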
Patch 3 adds the corresponding tlbi_user_level helper. Patches 4-7
pass struct mmu_gather to flush_tlb_range(), so that the level of the
tlbi invalidation can be derived from it; arm64 and power9 can benefit
from this. Patch 8 sets the TTL field on arm64 using the cleared_*
values in struct mmu_gather (a minimal sketch of how this works
follows the diffstat below).

See the patches for details. Thanks.

[1] https://lore.kernel.org/linux-arm-kernel/20200211174938.27809-1-maz@kernel.org/
[2] https://lore.kernel.org/linux-arm-kernel/7859561b-78b4-4a12-2642-3741d7d3e7b8@huawei.com/

---
ChangeList:
v1: add support for the TTL feature on arm64.
v2: build the patch on Marc's NV series[1].
v3: use vma->vm_flags to replace mm->context.flags.
v4: add Marc's patches into my series.
v5: pass struct mmu_gather to flush_tlb_range, then set the TTL field
    using the info in struct mmu_gather.

Marc Zyngier (2):
  arm64: Detect the ARMv8.4 TTL feature
  arm64: Add level-hinted TLB invalidation helper

Zhenyu Ye (6):
  arm64: Add tlbi_user_level TLB invalidation helper
  mm: tlb: Pass struct mmu_gather to flush_pmd_tlb_range
  mm: tlb: Pass struct mmu_gather to flush_pud_tlb_range
  mm: tlb: Pass struct mmu_gather to flush_hugetlb_tlb_range
  mm: tlb: Pass struct mmu_gather to flush_tlb_range
  arm64: tlb: Set the TTL field in flush_tlb_range

 Documentation/core-api/cachetlb.rst           |  8 ++-
 arch/alpha/include/asm/tlbflush.h             |  8 +--
 arch/alpha/kernel/smp.c                       |  3 +-
 arch/arc/include/asm/hugepage.h               |  4 +-
 arch/arc/include/asm/tlbflush.h               | 11 ++--
 arch/arc/mm/tlb.c                             |  8 +--
 arch/arm/include/asm/tlbflush.h               | 12 ++--
 arch/arm/kernel/smp_tlb.c                     |  4 +-
 arch/arm/mach-rpc/ecard.c                     |  8 ++-
 arch/arm64/crypto/aes-glue.c                  |  1 -
 arch/arm64/include/asm/cpucaps.h              |  3 +-
 arch/arm64/include/asm/sysreg.h               |  1 +
 arch/arm64/include/asm/tlb.h                  | 39 +++++++++++-
 arch/arm64/include/asm/tlbflush.h             | 63 +++++++++++++------
 arch/arm64/kernel/cpufeature.c                | 11 ++++
 arch/arm64/mm/hugetlbpage.c                   | 10 ++-
 arch/csky/include/asm/tlb.h                   |  2 +-
 arch/csky/include/asm/tlbflush.h              |  6 +-
 arch/csky/mm/tlb.c                            |  4 +-
 arch/hexagon/include/asm/tlbflush.h           |  2 +-
 arch/hexagon/mm/vm_tlb.c                      |  4 +-
 arch/ia64/include/asm/tlbflush.h              |  6 +-
 arch/ia64/mm/tlb.c                            |  5 +-
 arch/m68k/include/asm/tlbflush.h              | 10 +--
 arch/microblaze/include/asm/tlbflush.h        |  5 +-
 arch/mips/include/asm/hugetlb.h               |  6 +-
 arch/mips/include/asm/tlbflush.h              |  9 +--
 arch/mips/kernel/smp.c                        |  3 +-
 arch/nds32/include/asm/tlbflush.h             |  3 +-
 arch/nios2/include/asm/tlbflush.h             |  9 +--
 arch/nios2/mm/tlb.c                           |  8 ++-
 arch/openrisc/include/asm/tlbflush.h          | 10 +--
 arch/openrisc/kernel/smp.c                    |  2 +-
 arch/parisc/include/asm/tlbflush.h            |  2 +-
 arch/parisc/kernel/cache.c                    | 13 +++-
 arch/powerpc/include/asm/book3s/32/tlbflush.h |  4 +-
 arch/powerpc/include/asm/book3s/64/tlbflush.h |  9 ++-
 arch/powerpc/include/asm/nohash/tlbflush.h    |  7 ++-
 arch/powerpc/mm/book3s32/tlb.c                |  6 +-
 arch/powerpc/mm/book3s64/pgtable.c            |  8 ++-
 arch/powerpc/mm/book3s64/radix_tlb.c          |  2 +-
 arch/powerpc/mm/nohash/tlb.c                  |  6 +-
 arch/riscv/include/asm/tlbflush.h             |  7 ++-
 arch/riscv/mm/tlbflush.c                      |  4 +-
 arch/s390/include/asm/tlbflush.h              |  5 +-
 arch/sh/include/asm/tlbflush.h                |  8 +--
 arch/sh/kernel/smp.c                          |  2 +-
 arch/sparc/include/asm/tlbflush_32.h          |  2 +-
 arch/sparc/include/asm/tlbflush_64.h          |  3 +-
 arch/sparc/mm/tlb.c                           |  5 +-
 arch/um/include/asm/tlbflush.h                |  6 +-
 arch/um/kernel/tlb.c                          |  4 +-
 arch/unicore32/include/asm/tlbflush.h         |  5 +-
 arch/x86/include/asm/tlbflush.h               |  4 +-
 arch/x86/mm/pgtable.c                         | 10 ++-
 arch/xtensa/include/asm/tlbflush.h            | 10 +--
 arch/xtensa/kernel/smp.c                      |  2 +-
 include/asm-generic/pgtable.h                 | 10 +--
 include/asm-generic/tlb.h                     |  2 +-
 mm/huge_memory.c                              | 19 +++++-
 mm/hugetlb.c                                  | 17 +++--
 mm/mapping_dirty_helpers.c                    | 23 ++++---
 mm/migrate.c                                  |  8 ++-
 mm/mprotect.c                                 |  8 ++-
 mm/mremap.c                                   | 17 ++++-
 mm/pgtable-generic.c                          | 51 ++++++++++++---
 mm/rmap.c                                     |  6 +-
 67 files changed, 409 insertions(+), 174 deletions(-)
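As promised above, here is a quick illustration of how the cleared_*
bookkeeping that the generic mmu_gather code already tracks (see
include/asm-generic/tlb.h) can be turned into a level hint, in the
spirit of patch 8. The helper name and the freed_tables check are
illustrative; see the patch for the real code. Level numbering follows
the arm64 convention: 3 is the last (PTE) level, 0 means "unknown,
flush all levels".

	static inline int tlb_get_level(struct mmu_gather *tlb)
	{
		/*
		 * A level hint is only safe when no intermediate
		 * page tables were freed by this gather.
		 */
		if (tlb->freed_tables)
			return 0;

		/* Only last-level (PTE) entries were cleared. */
		if (tlb->cleared_ptes && !(tlb->cleared_pmds ||
					   tlb->cleared_puds ||
					   tlb->cleared_p4ds))
			return 3;

		/* Only PMD-level entries were cleared. */
		if (tlb->cleared_pmds && !(tlb->cleared_ptes ||
					   tlb->cleared_puds ||
					   tlb->cleared_p4ds))
			return 2;

		/* Only PUD-level entries were cleared. */
		if (tlb->cleared_puds && !(tlb->cleared_ptes ||
					   tlb->cleared_pmds ||
					   tlb->cleared_p4ds))
			return 1;

		/* Mixed levels: no useful hint can be given. */
		return 0;
	}

The result can then be fed to __tlbi_level() in the arm64
flush_tlb_range() path, so that ranges which only touched one level of
the tables get the cheaper, level-hinted invalidation.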