From patchwork Thu Jun 25 08:03:08 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Zhenyu Ye <yezhenyu2@huawei.com>
X-Patchwork-Id: 11624725
From: Zhenyu Ye <yezhenyu2@huawei.com>
Subject: [RESEND PATCH v5 0/6] arm64: tlb: add support for TTL feature
Date: Thu, 25 Jun 2020 16:03:08 +0800
Message-ID: <20200625080314.230-1-yezhenyu2@huawei.com>
In order to reduce the cost of TLB invalidation, ARMv8.4 provides
the TTL field in the TLBI instructions. The TTL field indicates the
level of the page table walk holding the leaf entry for the address
being invalidated. This series provides support for this feature.

When ARMv8.4-TTL is implemented, the operand for TLBIs looks like
below:

 * +----------+-------+----------------------+
 * |   ASID   |  TTL  |        BADDR         |
 * +----------+-------+----------------------+
 * |63      48|47   44|43                   0|

See the patches for details. Thanks.

---
ChangeList:
v5:
rebase the series on Linux 5.8-rc2.

v4:
implement flush_*_tlb_range only on arm64.

v3:
minor changes: reduce the indentation levels of __tlbi_level().

v2:
rebase series on Linux 5.7-rc1 and simplify the code implementation.

v1:
add support for TTL feature in arm64.

Marc Zyngier (2):
  arm64: Detect the ARMv8.4 TTL feature
  arm64: Add level-hinted TLB invalidation helper

Peter Zijlstra (Intel) (1):
  tlb: mmu_gather: add tlb_flush_*_range APIs

Zhenyu Ye (3):
  arm64: Add tlbi_user_level TLB invalidation helper
  arm64: tlb: Set the TTL field in flush_tlb_range
  arm64: tlb: Set the TTL field in flush_*_tlb_range

 arch/arm64/include/asm/cpucaps.h  |  3 +-
 arch/arm64/include/asm/pgtable.h  | 10 ++++++
 arch/arm64/include/asm/sysreg.h   |  1 +
 arch/arm64/include/asm/tlb.h      | 29 +++++++++++++++-
 arch/arm64/include/asm/tlbflush.h | 54 +++++++++++++++++++++++++-----
 arch/arm64/kernel/cpufeature.c    | 11 +++++++
 include/asm-generic/tlb.h         | 55 ++++++++++++++++++++++---------
 7 files changed, 138 insertions(+), 25 deletions(-)
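
[Editor's illustration] As a reading aid for the operand layout shown above
(ASID in bits [63:48], TTL in bits [47:44], BADDR in bits [43:0]), below is a
minimal, self-contained user-space sketch of how such an operand could be
assembled. It is not the patch code: the constant and helper names here
(TLBI_ASID_SHIFT, make_tlbi_operand, etc.) are hypothetical, and the series
itself implements the level hint inside kernel macros such as __tlbi_level()
in arch/arm64/include/asm/tlbflush.h.

/*
 * Hypothetical sketch of packing the TLBI operand fields described in the
 * cover letter. Field names and widths follow the diagram above; everything
 * else is illustrative only.
 */
#include <stdint.h>
#include <stdio.h>

#define TLBI_ASID_SHIFT   48
#define TLBI_TTL_SHIFT    44
#define TLBI_TTL_MASK     (0xfULL << TLBI_TTL_SHIFT)
#define TLBI_BADDR_MASK   ((1ULL << 44) - 1)

/* Pack ASID, TTL hint, and base-address field into one 64-bit operand. */
static uint64_t make_tlbi_operand(uint16_t asid, unsigned int ttl,
				  uint64_t baddr)
{
	uint64_t arg = baddr & TLBI_BADDR_MASK;

	arg |= (uint64_t)asid << TLBI_ASID_SHIFT;

	/* A TTL of 0 means "no hint"; only encode a known level. */
	if (ttl)
		arg |= ((uint64_t)ttl << TLBI_TTL_SHIFT) & TLBI_TTL_MASK;

	return arg;
}

int main(void)
{
	/* Example: ASID 42, a last-level hint, and an arbitrary BADDR value. */
	uint64_t op = make_tlbi_operand(42, 3, 0x1234ULL);

	printf("TLBI operand: 0x%016llx\n", (unsigned long long)op);
	return 0;
}

The point of the hint is that hardware implementing ARMv8.4-TTL can skip
invalidating intermediate (table) entries when the operand says the mapping
was held at a given leaf level, which is what makes the range invalidations
in the later patches cheaper.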