From patchwork Tue Mar 24 13:45:29 2020
From: Zhenyu Ye
Subject: [RFC PATCH v4 1/6] arm64: Detect the ARMv8.4 TTL feature
Date: Tue, 24 Mar 2020 21:45:29 +0800
Message-ID: <20200324134534.1570-2-yezhenyu2@huawei.com>
In-Reply-To: <20200324134534.1570-1-yezhenyu2@huawei.com>

From: Marc Zyngier

In order to reduce the cost of TLB invalidation, the ARMv8.4 TTL feature
allows TLB invalidation instructions to be issued with a level hint,
allowing for quicker invalidation. Let's detect the feature for now.
Further patches will implement its actual usage.

Signed-off-by: Marc Zyngier
Signed-off-by: Zhenyu Ye
---
 arch/arm64/include/asm/cpucaps.h |  3 ++-
 arch/arm64/include/asm/sysreg.h  |  1 +
 arch/arm64/kernel/cpufeature.c   | 11 +++++++++++
 3 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index 865e0253fc1e..8b3b4dd612b3 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -58,7 +58,8 @@
 #define ARM64_WORKAROUND_SPECULATIVE_AT_NVHE	48
 #define ARM64_HAS_E0PD				49
 #define ARM64_HAS_RNG				50
+#define ARM64_HAS_ARMv8_4_TTL			51
 
-#define ARM64_NCAPS				51
+#define ARM64_NCAPS				52
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index b91570ff9db1..a28b76f32ba7 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -685,6 +685,7 @@
 
 /* id_aa64mmfr2 */
 #define ID_AA64MMFR2_E0PD_SHIFT		60
+#define ID_AA64MMFR2_TTL_SHIFT		48
 #define ID_AA64MMFR2_FWB_SHIFT		40
 #define ID_AA64MMFR2_AT_SHIFT		32
 #define ID_AA64MMFR2_LVA_SHIFT		16
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 0b6715625cf6..cbe46ad2900a 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -241,6 +241,7 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {
 
 static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_E0PD_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_TTL_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_FWB_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_AT_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LVA_SHIFT, 4, 0),
@@ -1523,6 +1524,16 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.matches = has_cpuid_feature,
 		.cpu_enable = cpu_has_fwb,
 	},
+	{
+		.desc = "ARMv8.4 Translation Table Level",
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.capability = ARM64_HAS_ARMv8_4_TTL,
+		.sys_reg = SYS_ID_AA64MMFR2_EL1,
+		.sign = FTR_UNSIGNED,
+		.field_pos = ID_AA64MMFR2_TTL_SHIFT,
+		.min_field_value = 1,
+		.matches = has_cpuid_feature,
+	},
 #ifdef CONFIG_ARM64_HW_AFDBM
 	{
 		/*
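
The detection above boils down to reading the new 4-bit unsigned TTL
field at bit 48 of ID_AA64MMFR2_EL1 and comparing it against
.min_field_value. A minimal user-space model of that check, for
illustration only (the register value, extract_field() and
system_has_ttl() are invented for this sketch; the kernel goes through
has_cpuid_feature() and the ftr_id_aa64mmfr2 table instead):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define ID_AA64MMFR2_TTL_SHIFT	48	/* matches the new sysreg.h define */

/* Extract an unsigned 4-bit ID register field. */
static unsigned int extract_field(uint64_t reg, unsigned int shift)
{
	return (reg >> shift) & 0xf;
}

/* Model of the capability check: field value >= .min_field_value (1). */
static bool system_has_ttl(uint64_t id_aa64mmfr2)
{
	return extract_field(id_aa64mmfr2, ID_AA64MMFR2_TTL_SHIFT) >= 1;
}

int main(void)
{
	uint64_t mmfr2 = (uint64_t)1 << ID_AA64MMFR2_TTL_SHIFT;	/* pretend TTL == 1 */

	printf("ARMv8.4-TTL present: %s\n", system_has_ttl(mmfr2) ? "yes" : "no");
	return 0;
}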

From patchwork Tue Mar 24 13:45:30 2020
From: Zhenyu Ye
Subject: [RFC PATCH v4 2/6] arm64: Add level-hinted TLB invalidation helper
Date: Tue, 24 Mar 2020 21:45:30 +0800
Message-ID: <20200324134534.1570-3-yezhenyu2@huawei.com>
In-Reply-To: <20200324134534.1570-1-yezhenyu2@huawei.com>

From: Marc Zyngier

Add a level-hinted TLB invalidation helper that is only used if
ARMv8.4-TTL is detected.

Signed-off-by: Marc Zyngier
Signed-off-by: Zhenyu Ye
---
 arch/arm64/include/asm/tlbflush.h | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index bc3949064725..a3f70778a325 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -10,6 +10,7 @@
 
 #ifndef __ASSEMBLY__
 
+#include <linux/bitfield.h>
 #include
 #include
 #include
@@ -59,6 +60,35 @@
 		__ta;						\
 	})
 
+#define TLBI_TTL_MASK	GENMASK_ULL(47, 44)
+
+#define __tlbi_level(op, addr, level)					\
+	do {								\
+		u64 arg = addr;						\
+									\
+		if (cpus_have_const_cap(ARM64_HAS_ARMv8_4_TTL) &&	\
+		    level) {						\
+			u64 ttl = level;				\
+									\
+			switch (PAGE_SIZE) {				\
+			case SZ_4K:					\
+				ttl |= 1 << 2;				\
+				break;					\
+			case SZ_16K:					\
+				ttl |= 2 << 2;				\
+				break;					\
+			case SZ_64K:					\
+				ttl |= 3 << 2;				\
+				break;					\
+			}						\
+									\
+			arg &= ~TLBI_TTL_MASK;				\
+			arg |= FIELD_PREP(TLBI_TTL_MASK, ttl);		\
+		}							\
+									\
+		__tlbi(op, arg);					\
+	} while(0)
+
 /*
  * TLB Invalidation
  * ================
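
The interesting part of __tlbi_level() is the packing of the hint: the
TTL field lives in bits [47:44] of the TLBI argument and carries a
2-bit translation granule code (1 for 4K, 2 for 16K, 3 for 64K) above
the 2-bit level. A stand-alone sketch of that encoding, assuming a
plain page_size parameter in place of the kernel's PAGE_SIZE
(tlbi_with_ttl() is an invented name, not a kernel API):

#include <stdint.h>

#define SZ_4K		0x1000UL
#define SZ_16K		0x4000UL
#define SZ_64K		0x10000UL
#define TLBI_TTL_MASK	(0xfULL << 44)	/* same bits as GENMASK_ULL(47, 44) */

/* Pack TTL = (granule code << 2) | level into bits [47:44] of the TLBI argument. */
uint64_t tlbi_with_ttl(uint64_t arg, unsigned int level, unsigned long page_size)
{
	uint64_t ttl = level;

	if (!level)
		return arg;	/* level 0 means "unknown": leave the argument unhinted */

	switch (page_size) {
	case SZ_4K:
		ttl |= 1 << 2;
		break;
	case SZ_16K:
		ttl |= 2 << 2;
		break;
	case SZ_64K:
		ttl |= 3 << 2;
		break;
	}

	arg &= ~TLBI_TTL_MASK;
	arg |= (ttl << 44) & TLBI_TTL_MASK;	/* open-coded FIELD_PREP(TLBI_TTL_MASK, ttl) */
	return arg;
}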

From patchwork Tue Mar 24 13:45:31 2020
From: Zhenyu Ye
Subject: [RFC PATCH v4 3/6] arm64: Add level-hinted TLB invalidation helper to tlbi_user
Date: Tue, 24 Mar 2020 21:45:31 +0800
Message-ID: <20200324134534.1570-4-yezhenyu2@huawei.com>
In-Reply-To: <20200324134534.1570-1-yezhenyu2@huawei.com>

Add a level-hinted parameter to __tlbi_user, which is only used if
ARMv8.4-TTL is detected.

ARMv8.4-TTL provides the TTL field in the TLBI instruction to indicate
the level of the translation table walk holding the leaf entry for the
address being invalidated.

This patch sets the default level value to 0.

Signed-off-by: Zhenyu Ye
---
 arch/arm64/include/asm/tlbflush.h | 42 ++++++++++++++++++++++++++-----
 1 file changed, 36 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index a3f70778a325..d141c080e494 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -89,6 +89,36 @@
 		__tlbi(op, arg);					\
 	} while(0)
 
+#define __tlbi_user_level(op, addr, level)				\
+	do {								\
+		u64 arg = addr;						\
+									\
+		if (!arm64_kernel_unmapped_at_el0())			\
+			break;						\
+									\
+		if (cpus_have_const_cap(ARM64_HAS_ARMv8_4_TTL) &&	\
+		    level) {						\
+			u64 ttl = level;				\
+									\
+			switch (PAGE_SIZE) {				\
+			case SZ_4K:					\
+				ttl |= 1 << 2;				\
+				break;					\
+			case SZ_16K:					\
+				ttl |= 2 << 2;				\
+				break;					\
+			case SZ_64K:					\
+				ttl |= 3 << 2;				\
+				break;					\
+			}						\
+									\
+			arg &= ~TLBI_TTL_MASK;				\
+			arg |= FIELD_PREP(TLBI_TTL_MASK, ttl);		\
+		}							\
+									\
+		__tlbi(op, (arg) | USER_ASID_FLAG);			\
+	} while (0)
+
 /*
  * TLB Invalidation
  * ================
@@ -190,8 +220,8 @@ static inline void flush_tlb_page_nosync(struct vm_area_struct *vma,
 	unsigned long addr = __TLBI_VADDR(uaddr, ASID(vma->vm_mm));
 
 	dsb(ishst);
-	__tlbi(vale1is, addr);
-	__tlbi_user(vale1is, addr);
+	__tlbi_level(vale1is, addr, 0);
+	__tlbi_user_level(vale1is, addr, 0);
 }
 
 static inline void flush_tlb_page(struct vm_area_struct *vma,
@@ -231,11 +261,11 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 	dsb(ishst);
 	for (addr = start; addr < end; addr += stride) {
 		if (last_level) {
-			__tlbi(vale1is, addr);
-			__tlbi_user(vale1is, addr);
+			__tlbi_level(vale1is, addr, 0);
+			__tlbi_user_level(vale1is, addr, 0);
 		} else {
-			__tlbi(vae1is, addr);
-			__tlbi_user(vae1is, addr);
+			__tlbi_level(vae1is, addr, 0);
+			__tlbi_user_level(vae1is, addr, 0);
 		}
 	}
 	dsb(ish);
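
Compared with __tlbi_level(), the user variant differs in only two
ways: it does nothing unless KPTI is in effect
(arm64_kernel_unmapped_at_el0()), and it ORs the user ASID flag into
the final argument. A stand-alone sketch of just that wrapper logic;
kpti_enabled, issue_tlbi() and the bit-48 value used for
USER_ASID_FLAG are assumptions of the sketch, not taken from the patch:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Assumption: arm64 keeps the KPTI user ASID marker in bit 48 of the TLBI argument. */
#define USER_ASID_FLAG	(1ULL << 48)

/* Stand-in for the real __tlbi(): just show what would be issued. */
static void issue_tlbi(const char *op, uint64_t arg)
{
	printf("tlbi %s, %#llx\n", op, (unsigned long long)arg);
}

/*
 * Model of __tlbi_user_level(): skip the invalidation entirely when kernel
 * and user share an ASID (no KPTI), otherwise issue the same level-hinted
 * argument against the user ASID.
 */
static void tlbi_user_level(const char *op, uint64_t level_hinted_arg, bool kpti_enabled)
{
	if (!kpti_enabled)
		return;		/* mirrors the !arm64_kernel_unmapped_at_el0() early exit */

	issue_tlbi(op, level_hinted_arg | USER_ASID_FLAG);
}

int main(void)
{
	tlbi_user_level("vale1is", 0x1234000ULL, false);	/* prints nothing */
	tlbi_user_level("vale1is", 0x1234000ULL, true);		/* prints one line */
	return 0;
}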

From patchwork Tue Mar 24 13:45:32 2020
From: Zhenyu Ye
Subject: [RFC PATCH v4 4/6] mm: Add page table level flags to vm_flags
Date: Tue, 24 Mar 2020 21:45:32 +0800
Message-ID: <20200324134534.1570-5-yezhenyu2@huawei.com>
In-Reply-To: <20200324134534.1570-1-yezhenyu2@huawei.com>

Add VM_LEVEL_[PUD|PMD|PTE] to vm_flags to indicate which level of the
page tables the vma is mapped at. These flags can be used to reduce
the cost of TLB invalidation.

They should be common flags for all architectures; however, they are
currently only available on 64-bit systems, because the lower-order
flag bits are fully used. The flags are only used by the arm64
architecture for now; see the next patch.

Signed-off-by: Zhenyu Ye
---
 include/linux/mm.h             | 10 ++++++++++
 include/trace/events/mmflags.h | 15 ++++++++++++++-
 2 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index c54fb96cb1e6..3ff16ffa5e83 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -313,6 +313,16 @@ extern unsigned int kobjsize(const void *objp);
 #endif
 #endif /* CONFIG_ARCH_HAS_PKEYS */
 
+#ifdef CONFIG_64BIT
+# define VM_LEVEL_PUD	BIT(37)	/* vma is in pud-level of page table */
+# define VM_LEVEL_PMD	BIT(38)	/* vma is in pmd-level of page table */
+# define VM_LEVEL_PTE	BIT(39)	/* vma is in pte-level of page table */
+#else
+# define VM_LEVEL_PUD	0
+# define VM_LEVEL_PMD	0
+# define VM_LEVEL_PTE	0
+#endif /* CONFIG_64BIT */
+
 #if defined(CONFIG_X86)
 # define VM_PAT		VM_ARCH_1	/* PAT reserves whole VMA at once (x86) */
 #elif defined(CONFIG_PPC)
diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
index a1675d43777e..9f13cfa96f9f 100644
--- a/include/trace/events/mmflags.h
+++ b/include/trace/events/mmflags.h
@@ -130,6 +130,16 @@ IF_HAVE_PG_IDLE(PG_idle, "idle" )
 #define IF_HAVE_VM_SOFTDIRTY(flag,name)
 #endif
 
+#ifdef CONFIG_64BIT
+#define IF_HAVE_VM_LEVEL_PUD(flag,name) {flag, name}
+#define IF_HAVE_VM_LEVEL_PMD(flag,name) {flag, name}
+#define IF_HAVE_VM_LEVEL_PTE(flag,name) {flag, name}
+#else
+#define IF_HAVE_VM_LEVEL_PUD(flag,name)
+#define IF_HAVE_VM_LEVEL_PMD(flag,name)
+#define IF_HAVE_VM_LEVEL_PTE(flag,name)
+#endif
+
 #define __def_vmaflag_names						\
 	{VM_READ,			"read"		},		\
 	{VM_WRITE,			"write"		},		\
@@ -161,7 +171,10 @@ IF_HAVE_VM_SOFTDIRTY(VM_SOFTDIRTY, "softdirty" )	\
 	{VM_MIXEDMAP,			"mixedmap"	},		\
 	{VM_HUGEPAGE,			"hugepage"	},		\
 	{VM_NOHUGEPAGE,			"nohugepage"	},		\
-	{VM_MERGEABLE,			"mergeable"	}		\
+	{VM_MERGEABLE,			"mergeable"	},		\
+IF_HAVE_VM_LEVEL_PUD(VM_LEVEL_PUD,	"pud-level"	),		\
+IF_HAVE_VM_LEVEL_PMD(VM_LEVEL_PMD,	"pmd-level"	),		\
+IF_HAVE_VM_LEVEL_PTE(VM_LEVEL_PTE,	"pte-level"	)		\
 
 #define show_vma_flags(flags)						\
	(flags) ? __print_flags(flags, "|",				\
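
The three new bits are ordinary vm_flags bits; nothing in the generic
mm code consumes them yet (that happens in the next patch). For
illustration, a small stand-alone decoder that maps a flags word back
to the trace names registered above (decode_level() and the bare
uint64_t flags word are inventions of this sketch, not part of the
patch):

#include <stdint.h>
#include <stdio.h>

#define BIT(n)		(1ULL << (n))
#define VM_LEVEL_PUD	BIT(37)	/* vma is in pud-level of page table */
#define VM_LEVEL_PMD	BIT(38)	/* vma is in pmd-level of page table */
#define VM_LEVEL_PTE	BIT(39)	/* vma is in pte-level of page table */

/* Return the trace name a given vm_flags word would be decoded to. */
static const char *decode_level(uint64_t vm_flags)
{
	if (vm_flags & VM_LEVEL_PUD)
		return "pud-level";
	if (vm_flags & VM_LEVEL_PMD)
		return "pmd-level";
	if (vm_flags & VM_LEVEL_PTE)
		return "pte-level";
	return "no level hint";
}

int main(void)
{
	printf("%s\n", decode_level(VM_LEVEL_PMD));	/* prints "pmd-level" */
	return 0;
}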

From patchwork Tue Mar 24 13:45:33 2020
From: Zhenyu Ye
Subject: [RFC PATCH v4 5/6] arm64: tlb: Use translation level hint in vm_flags
Date: Tue, 24 Mar 2020 21:45:33 +0800
Message-ID: <20200324134534.1570-6-yezhenyu2@huawei.com>
In-Reply-To: <20200324134534.1570-1-yezhenyu2@huawei.com>

This patch uses the VM_LEVEL flags in vma->vm_flags to set the TTL
field in the TLBI instruction.

Signed-off-by: Zhenyu Ye
---
 arch/arm64/include/asm/mmu.h      |  2 ++
 arch/arm64/include/asm/tlbflush.h | 14 ++++++++------
 arch/arm64/mm/mmu.c               | 14 ++++++++++++++
 3 files changed, 24 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index d79ce6df9e12..a8b8824a7405 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -86,6 +86,8 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
 extern void *fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot);
 extern void mark_linear_text_alias_ro(void);
 extern bool kaslr_requires_kpti(void);
+extern unsigned int get_vma_level(struct vm_area_struct *vma);
+
 
 #define INIT_MM_CONTEXT(name)	\
 	.pgd = init_pg_dir,
diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index d141c080e494..93bb09fdfafd 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -218,10 +218,11 @@ static inline void flush_tlb_page_nosync(struct vm_area_struct *vma,
 					 unsigned long uaddr)
 {
 	unsigned long addr = __TLBI_VADDR(uaddr, ASID(vma->vm_mm));
+	unsigned int level = get_vma_level(vma);
 
 	dsb(ishst);
-	__tlbi_level(vale1is, addr, 0);
-	__tlbi_user_level(vale1is, addr, 0);
+	__tlbi_level(vale1is, addr, level);
+	__tlbi_user_level(vale1is, addr, level);
 }
 
 static inline void flush_tlb_page(struct vm_area_struct *vma,
@@ -242,6 +243,7 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 				     unsigned long stride, bool last_level)
 {
 	unsigned long asid = ASID(vma->vm_mm);
+	unsigned int level = get_vma_level(vma);
 	unsigned long addr;
 
 	start = round_down(start, stride);
@@ -261,11 +263,11 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
 	dsb(ishst);
 	for (addr = start; addr < end; addr += stride) {
 		if (last_level) {
-			__tlbi_level(vale1is, addr, 0);
-			__tlbi_user_level(vale1is, addr, 0);
+			__tlbi_level(vale1is, addr, level);
+			__tlbi_user_level(vale1is, addr, level);
 		} else {
-			__tlbi_level(vae1is, addr, 0);
-			__tlbi_user_level(vae1is, addr, 0);
+			__tlbi_level(vae1is, addr, level);
+			__tlbi_user_level(vae1is, addr, level);
 		}
 	}
 	dsb(ish);
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 128f70852bf3..e6a1221cd86b 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -60,6 +60,20 @@ static pud_t bm_pud[PTRS_PER_PUD] __page_aligned_bss __maybe_unused;
 
 static DEFINE_SPINLOCK(swapper_pgdir_lock);
 
+inline unsigned int get_vma_level(struct vm_area_struct *vma)
+{
+	unsigned int level = 0;
+	if (vma->vm_flags & VM_LEVEL_PUD)
+		level = 1;
+	else if (vma->vm_flags & VM_LEVEL_PMD)
+		level = 2;
+	else if (vma->vm_flags & VM_LEVEL_PTE)
+		level = 3;
+
+	vma->vm_flags &= ~(VM_LEVEL_PUD | VM_LEVEL_PMD | VM_LEVEL_PTE);
+	return level;
+}
+
 void set_swapper_pgd(pgd_t *pgdp, pgd_t pgd)
 {
 	pgd_t *fixmap_pgdp;
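
Worth noting from the hunk above: get_vma_level() consumes the hint,
clearing all three VM_LEVEL_* bits after translating them into a TTL
level, so a later flush of the same VMA falls back to level 0 (no
hint). A user-space model of that one-shot behaviour (struct fake_vma
and vma_level() are invented for the sketch):

#include <stdint.h>
#include <stdio.h>

#define BIT(n)		(1ULL << (n))
#define VM_LEVEL_PUD	BIT(37)
#define VM_LEVEL_PMD	BIT(38)
#define VM_LEVEL_PTE	BIT(39)

struct fake_vma {
	uint64_t vm_flags;
};

/* Mirrors get_vma_level(): translate the hint, then clear it. */
static unsigned int vma_level(struct fake_vma *vma)
{
	unsigned int level = 0;

	if (vma->vm_flags & VM_LEVEL_PUD)
		level = 1;
	else if (vma->vm_flags & VM_LEVEL_PMD)
		level = 2;
	else if (vma->vm_flags & VM_LEVEL_PTE)
		level = 3;

	vma->vm_flags &= ~(VM_LEVEL_PUD | VM_LEVEL_PMD | VM_LEVEL_PTE);
	return level;
}

int main(void)
{
	struct fake_vma vma = { .vm_flags = VM_LEVEL_PTE };

	printf("first flush uses level %u\n", vma_level(&vma));		/* 3 */
	printf("second flush falls back to level %u\n", vma_level(&vma));	/* 0 */
	return 0;
}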

From patchwork Tue Mar 24 13:45:34 2020
From: Zhenyu Ye
Subject: [RFC PATCH v4 6/6] mm: Set VM_LEVEL flags in some tlb_flush functions
Date: Tue, 24 Mar 2020 21:45:34 +0800
Message-ID: <20200324134534.1570-7-yezhenyu2@huawei.com>
In-Reply-To: <20200324134534.1570-1-yezhenyu2@huawei.com>

Set the VM_LEVEL flags in some tlb_flush functions. The relevant
functions are:

	tlb_flush in asm/tlb.h
	get_clear_flush and clear_flush in mm/hugetlbpage.c
	flush_pmd|pud_tlb_range in asm-generic/pgtable.h
	do_huge_pmd_numa_page and move_huge_pmd in mm/huge_memory.c

Signed-off-by: Zhenyu Ye
---
 arch/arm64/include/asm/tlb.h  | 12 ++++++++++++
 arch/arm64/mm/hugetlbpage.c   |  4 ++--
 include/asm-generic/pgtable.h | 16 ++++++++++++++--
 mm/huge_memory.c              |  8 +++++++-
 4 files changed, 35 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h
index b76df828e6b7..77fe942b30b6 100644
--- a/arch/arm64/include/asm/tlb.h
+++ b/arch/arm64/include/asm/tlb.h
@@ -27,6 +27,18 @@ static inline void tlb_flush(struct mmu_gather *tlb)
 	bool last_level = !tlb->freed_tables;
 	unsigned long stride = tlb_get_unmap_size(tlb);
 
+	/*
+	 * mmu_gather tracked which levels of the page tables
+	 * have been cleared, so we can use this info to set
+	 * vma.vm_flags.
+	 */
+	if (tlb->cleared_ptes)
+		vma.vm_flags |= VM_LEVEL_PTE;
+	else if (tlb->cleared_pmds)
+		vma.vm_flags |= VM_LEVEL_PMD;
+	else if (tlb->cleared_puds)
+		vma.vm_flags |= VM_LEVEL_PUD;
+
 	/*
 	 * If we're tearing down the address space then we only care about
 	 * invalidating the walk-cache, since the ASID allocator won't
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index bbeb6a5a6ba6..c35a1bd06bd0 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -140,7 +140,7 @@ static pte_t get_clear_flush(struct mm_struct *mm,
 	}
 
 	if (valid) {
-		struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
+		struct vm_area_struct vma = TLB_FLUSH_VMA(mm, VM_LEVEL_PTE);
 		flush_tlb_range(&vma, saddr, addr);
 	}
 	return orig_pte;
@@ -161,7 +161,7 @@ static void clear_flush(struct mm_struct *mm,
 			 unsigned long pgsize,
 			 unsigned long ncontig)
 {
-	struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
+	struct vm_area_struct vma = TLB_FLUSH_VMA(mm, VM_LEVEL_PTE);
 	unsigned long i, saddr = addr;
 
 	for (i = 0; i < ncontig; i++, addr += pgsize, ptep++)
diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index e2e2bef07dd2..391e704faf7a 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -1160,8 +1160,20 @@ static inline int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
  * invalidate the entire TLB which is not desitable.
 * e.g. see arch/arc: flush_pmd_tlb_range
 */
-#define flush_pmd_tlb_range(vma, addr, end)	flush_tlb_range(vma, addr, end)
-#define flush_pud_tlb_range(vma, addr, end)	flush_tlb_range(vma, addr, end)
+#define flush_pmd_tlb_range(vma, addr, end)				\
+	do {								\
+		vma->vm_flags &= ~(VM_LEVEL_PUD | VM_LEVEL_PTE);	\
+		vma->vm_flags |= VM_LEVEL_PMD;				\
+		flush_tlb_range(vma, addr, end);			\
+	} while (0)
+
+#define flush_pud_tlb_range(vma, addr, end)				\
+	do {								\
+		vma->vm_flags &= ~(VM_LEVEL_PMD | VM_LEVEL_PTE);	\
+		vma->vm_flags |= VM_LEVEL_PUD;				\
+		flush_tlb_range(vma, addr, end);			\
+	} while (0)
+
 #else
 #define flush_pmd_tlb_range(vma, addr, end)	BUILD_BUG()
 #define flush_pud_tlb_range(vma, addr, end)	BUILD_BUG()
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 24ad53b4dfc0..9a78b8d865f0 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1646,6 +1646,8 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t pmd)
 	 * mapping or not. Hence use the tlb range variant
 	 */
 	if (mm_tlb_flush_pending(vma->vm_mm)) {
+		vma->vm_flags &= ~(VM_LEVEL_PUD | VM_LEVEL_PTE);
+		vma->vm_flags |= VM_LEVEL_PMD;
 		flush_tlb_range(vma, haddr, haddr + HPAGE_PMD_SIZE);
 		/*
 		 * change_huge_pmd() released the pmd lock before
@@ -1917,8 +1919,12 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
 	}
 	pmd = move_soft_dirty_pmd(pmd);
 	set_pmd_at(mm, new_addr, new_pmd, pmd);
-	if (force_flush)
+	if (force_flush) {
+		vma->vm_flags &= ~(VM_LEVEL_PUD | VM_LEVEL_PTE);
+		vma->vm_flags |= VM_LEVEL_PMD;
 		flush_tlb_range(vma, old_addr, old_addr + PMD_SIZE);
+	}
+
 	if (new_ptl != old_ptl)
 		spin_unlock(new_ptl);
 	spin_unlock(old_ptl);
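
The tlb_flush() hunk above picks the finest level the mmu_gather saw:
cleared_ptes wins over cleared_pmds, which wins over cleared_puds. A
stand-alone rendering of that priority (struct fake_mmu_gather and
level_flag_for() are invented for the sketch; the struct is reduced to
the three bits that matter):

#include <stdint.h>
#include <stdio.h>

#define BIT(n)		(1ULL << (n))
#define VM_LEVEL_PUD	BIT(37)
#define VM_LEVEL_PMD	BIT(38)
#define VM_LEVEL_PTE	BIT(39)

/* Cut-down mmu_gather: just the "which table levels were cleared" bookkeeping. */
struct fake_mmu_gather {
	unsigned int cleared_ptes : 1;
	unsigned int cleared_pmds : 1;
	unsigned int cleared_puds : 1;
};

/* Same priority as the tlb_flush() hunk: prefer the deepest level recorded. */
static uint64_t level_flag_for(const struct fake_mmu_gather *tlb)
{
	if (tlb->cleared_ptes)
		return VM_LEVEL_PTE;
	else if (tlb->cleared_pmds)
		return VM_LEVEL_PMD;
	else if (tlb->cleared_puds)
		return VM_LEVEL_PUD;
	return 0;
}

int main(void)
{
	struct fake_mmu_gather tlb = { .cleared_pmds = 1, .cleared_puds = 1 };

	printf("vm_flags hint: %#llx (VM_LEVEL_PMD)\n",
	       (unsigned long long)level_flag_for(&tlb));
	return 0;
}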