Message ID: 20220921084302.43631-1-yangyicong@huawei.com
From: Yicong Yang <yangyicong@huawei.com>
To: akpm@linux-foundation.org, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, x86@kernel.org, catalin.marinas@arm.com, will@kernel.org, linux-doc@vger.kernel.org
Cc: corbet@lwn.net, peterz@infradead.org, arnd@arndb.de, linux-kernel@vger.kernel.org, darren@os.amperecomputing.com, yangyicong@hisilicon.com, huzhanyuan@oppo.com, lipeifeng@oppo.com, zhangshiming@oppo.com, guojian@oppo.com, realmz6@gmail.com, linux-mips@vger.kernel.org, openrisc@lists.librecores.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, Barry Song <21cnbao@gmail.com>, wangkefeng.wang@huawei.com, xhao@linux.alibaba.com, prime.zeng@hisilicon.com, anshuman.khandual@arm.com
Subject: [PATCH v4 0/2] mm: arm64: bring up BATCHED_UNMAP_TLB_FLUSH
Date: Wed, 21 Sep 2022 16:43:00 +0800
From: Yicong Yang <yangyicong@hisilicon.com>

Though ARM64 has the hardware to do TLB shootdown, the hardware broadcasting is not free. A simple micro benchmark shows that even on a Snapdragon 888 with only 8 cores, the overhead of ptep_clear_flush() is huge even for paging out one page mapped by only one process:

  5.36%  a.out  [kernel.kallsyms]  [k] ptep_clear_flush

When pages are mapped by multiple processes, or the hardware has more CPUs, the cost becomes even higher due to the poor scalability of TLB shootdown. The same benchmark results in 16.99% CPU consumption on an ARM64 server with around 100 cores, according to Yicong's test on patch 4/4.

This patchset leverages the existing BATCHED_UNMAP_TLB_FLUSH by
1. only sending tlbi instructions in the first stage - arch_tlbbatch_add_mm();
2. waiting for the completion of the tlbi by dsb while doing the tlbbatch sync in arch_tlbbatch_flush().

Testing on Snapdragon shows the overhead of ptep_clear_flush() is removed by the patchset. The micro benchmark becomes 5% faster even for one page mapped by a single process on Snapdragon 888.

-v4:
1. Add tags from Kefeng and Anshuman, thanks.
2. Limit the TLB batch/defer to systems with >4 CPUs, per Anshuman.
3. Merge previous patches 1 and 2-3 into one, per Anshuman.
Link: https://lore.kernel.org/linux-mm/20220822082120.8347-1-yangyicong@huawei.com/

-v3:
1. Declare the arch's tlbbatch defer support via arch_tlbbatch_should_defer() instead of ARCH_HAS_MM_CPUMASK, per Barry and Kefeng.
2. Add Tested-by from Xin Hao.
Link: https://lore.kernel.org/linux-mm/20220711034615.482895-1-21cnbao@gmail.com/

-v2:
1. Collected Yicong's test result on a Kunpeng 920 ARM64 server.
2. Removed the redundant vma parameter in arch_tlbbatch_add_mm() according to the comments of Peter Zijlstra and Dave Hansen.
3. Added ARCH_HAS_MM_CPUMASK rather than checking whether mm_cpumask is empty, according to the comments of Nadav Amit.

Thanks to Peter, Dave and Nadav for the testing, reviews and comments.
-v1: https://lore.kernel.org/lkml/20220707125242.425242-1-21cnbao@gmail.com/

Anshuman Khandual (1):
  mm/tlbbatch: Introduce arch_tlbbatch_should_defer()

Barry Song (1):
  arm64: support batched/deferred tlb shootdown during page reclamation

 .../features/vm/TLB/arch-support.txt |  2 +-
 arch/arm64/Kconfig                   |  1 +
 arch/arm64/include/asm/tlbbatch.h    | 12 ++++++
 arch/arm64/include/asm/tlbflush.h    | 37 ++++++++++++++++++-
 arch/x86/include/asm/tlbflush.h      | 15 +++++++-
 mm/rmap.c                            | 19 ++++------
 6 files changed, 70 insertions(+), 16 deletions(-)
 create mode 100644 arch/arm64/include/asm/tlbbatch.h