From patchwork Mon Oct 29 09:25:58 2018
X-Patchwork-Submitter: Ashish Mhetre
X-Patchwork-Id: 10658943
From: Ashish Mhetre
Subject: [PATCH V3] arm64: Don't flush tlb while clearing the accessed bit
Date: Mon, 29 Oct 2018 14:55:58 +0530
Message-ID: <1540805158-618-1-git-send-email-amhetre@nvidia.com>
X-Mailer: git-send-email 2.7.4
Cc: Snikam@nvidia.com, Ashish Mhetre, linux-kernel@vger.kernel.org

From: Alex Van Brunt

The accessed bit is used to age a page, and the generic implementation
flushes the TLB when clearing it. On ARM64 that flush is overhead:
translation table entries that would generate an access flag fault are
not cached in the TLB, so the flush is not necessary, and clearing the
accessed bit without flushing the TLB does not cause data corruption.

In our case, with this patch, read speed from a fast NVMe/SSD over PCIe
improved by 10% ~ 15% and write speed by 20% ~ 40%. So, as a performance
optimisation, don't flush the TLB when clearing the accessed bit on
ARM64.

x86 made the same optimization even though its TLB invalidate is much
faster, as it doesn't broadcast to other CPUs. Please refer to commit
b13b1d2d8692 ("x86/mm: In the PTE swapout page reclaim case clear the
accessed bit instead of flushing the TLB").

Signed-off-by: Alex Van Brunt
Signed-off-by: Ashish Mhetre
Signed-off-by: Will Deacon
---
 arch/arm64/include/asm/pgtable.h | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 2ab2031..080d842 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -652,6 +652,26 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
 	return __ptep_test_and_clear_young(ptep);
 }
 
+#define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
+static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
+					 unsigned long address, pte_t *ptep)
+{
+	/*
+	 * On ARM64 CPUs, clearing the accessed bit without a TLB flush
+	 * doesn't cause data corruption. [ It could cause incorrect
+	 * page aging and the (mistaken) reclaim of hot pages, but the
+	 * chance of that should be relatively low. ]
+	 *
+	 * So as a performance optimization don't flush the TLB when
+	 * clearing the accessed bit, it will eventually be flushed by
+	 * a context switch or a VM operation anyway. [ In the rare
+	 * event of it not getting flushed for a long time the delay
+	 * shouldn't really matter because there's no real memory
+	 * pressure for swapout to react to. ]
+	 */
+	return ptep_test_and_clear_young(vma, address, ptep);
+}
+
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 #define __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
 static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
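
For context, the generic fallback that this override bypasses lives in
mm/pgtable-generic.c and is used when an architecture does not define
__HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH. A rough sketch of that fallback
(paraphrased from the generic code, not part of this patch; layout is
approximate) is:

/*
 * Generic behaviour: clear the accessed bit, then flush the TLB entry
 * for that page - the flush that the ARM64 override above skips.
 */
#ifndef __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
int ptep_clear_flush_young(struct vm_area_struct *vma,
			   unsigned long address, pte_t *ptep)
{
	int young;

	young = ptep_test_and_clear_young(vma, address, ptep);
	if (young)
		flush_tlb_page(vma, address);

	return young;
}
#endif

On ARM64 that flush is broadcast to other CPUs, which is the cost the
patch avoids by returning the result of ptep_test_and_clear_young()
directly.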