From patchwork Mon Jul 18 09:00:50 2022
X-Patchwork-Submitter: Barry Song <21cnbao@gmail.com>
X-Patchwork-Id: 12921037
From: Barry Song <21cnbao@gmail.com>
To: akpm@linux-foundation.org, anshuman.khandual@arm.com, catalin.marinas@arm.com,
    linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org, steven.price@arm.com,
    will@kernel.org
Cc: aarcange@redhat.com, guojian@oppo.com, hanchuanhua@oppo.com, hannes@cmpxchg.org,
    hughd@google.com, linux-kernel@vger.kernel.org, minchan@kernel.org,
    shy828301@gmail.com, v-songbaohua@oppo.com, ying.huang@intel.com,
    zhangshiming@oppo.com
Subject: [RESEND PATCH v3] arm64: enable THP_SWAP for arm64
Date: Mon, 18 Jul 2022 21:00:50 +1200
Message-Id: <20220718090050.2261-1-21cnbao@gmail.com>
X-Mailer: git-send-email 2.25.1

From: Barry Song

THP_SWAP has been proven to improve swap throughput significantly on
x86_64, according to commit bd4c82c22c367e ("mm, THP, swap: delay
splitting THP after swapped out"). As long as arm64 uses a 4K page
size, it is quite similar to x86_64 in having 2MB PMD THPs. THP_SWAP
is architecture-independent, so enabling it on arm64 will benefit
arm64 as well.

One corner case is that MTE assumes only base pages can be swapped.
We therefore do not enable THP_SWAP for arm64 hardware with MTE
support until MTE is reworked to coexist with THP_SWAP.
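For context, the base-page assumption comes from the arm64 swap hooks,
which save and restore MTE tags for one struct page at a time. A
simplified sketch (paraphrasing the arm64 code of roughly this era, so
exact details may differ) is:

/*
 * Simplified sketch of the arm64 MTE swap hooks (paraphrased from
 * arch/arm64/include/asm/pgtable.h of roughly this era; details may
 * differ). Each hook handles the tags of exactly one base page, which
 * is why a THP cannot yet be swapped out as a whole when MTE is on.
 */
#define __HAVE_ARCH_PREPARE_TO_SWAP
static inline int arch_prepare_to_swap(struct page *page)
{
	if (system_supports_mte())
		return mte_save_tags(page);	/* tags of one 4K page */
	return 0;
}

#define __HAVE_ARCH_SWAP_RESTORE
static inline void arch_swap_restore(swp_entry_t entry, struct page *page)
{
	if (system_supports_mte())
		mte_restore_tags(entry, page);	/* tags of one 4K page */
}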
A micro-benchmark was written to measure THP swapout throughput, as below:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/time.h>

#ifndef MADV_PAGEOUT
#define MADV_PAGEOUT 21		/* not exposed by older libc headers */
#endif

#define SIZE (400 * 1024 * 1024)

unsigned long long tv_to_ms(struct timeval tv)
{
	return tv.tv_sec * 1000 + tv.tv_usec / 1000;
}

int main(void)
{
	struct timeval tv_b, tv_e;
	volatile void *p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
				MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("fail to get memory");
		exit(-1);
	}

	madvise((void *)p, SIZE, MADV_HUGEPAGE);
	memset((void *)p, 0x11, SIZE);	/* write to fault the memory in */

	gettimeofday(&tv_b, NULL);
	madvise((void *)p, SIZE, MADV_PAGEOUT);
	gettimeofday(&tv_e, NULL);

	printf("swp out bandwidth: %llu bytes/ms\n",
	       SIZE / (tv_to_ms(tv_e) - tv_to_ms(tv_b)));
	return 0;
}

Testing was done on an rk3568 quad-core Cortex-A55 64-bit platform (ROCK 3A):

thp swp throughput w/o patch: 2734 bytes/ms (mean of 10 tests)
thp swp throughput w/  patch: 3331 bytes/ms (mean of 10 tests)

Cc: "Huang, Ying"
Cc: Minchan Kim
Cc: Johannes Weiner
Cc: Hugh Dickins
Cc: Andrea Arcangeli
Cc: Anshuman Khandual
Cc: Steven Price
Cc: Yang Shi
Signed-off-by: Barry Song
Reviewed-by: Anshuman Khandual
---
-v3:
 * refine the commit log;
 * add a benchmark result;
 * refine the macro of arch_thp_swp_supported
 Thanks to the comments from Anshuman, Andrew and Steven.

 arch/arm64/Kconfig               |  1 +
 arch/arm64/include/asm/pgtable.h |  6 ++++++
 include/linux/huge_mm.h          | 12 ++++++++++++
 mm/swap_slots.c                  |  2 +-
 4 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 1652a9800ebe..e1c540e80eec 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -101,6 +101,7 @@ config ARM64
 	select ARCH_WANT_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
 	select ARCH_WANT_LD_ORPHAN_WARN
 	select ARCH_WANTS_NO_INSTR
+	select ARCH_WANTS_THP_SWAP if ARM64_4K_PAGES
 	select ARCH_HAS_UBSAN_SANITIZE_ALL
 	select ARM_AMBA
 	select ARM_ARCH_TIMER
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 0b6632f18364..78d6f6014bfb 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -45,6 +45,12 @@
 	__flush_tlb_range(vma, addr, end, PUD_SIZE, false, 1)
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
+static inline bool arch_thp_swp_supported(void)
+{
+	return !system_supports_mte();
+}
+#define arch_thp_swp_supported arch_thp_swp_supported
+
 /*
  * Outside of a few very special situations (e.g. hibernation), we always
  * use broadcast TLB invalidation instructions, therefore a spurious page
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index de29821231c9..4ddaf6ad73ef 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -461,4 +461,16 @@ static inline int split_folio_to_list(struct folio *folio,
 	return split_huge_page_to_list(&folio->page, list);
 }
 
+/*
+ * archs that select ARCH_WANTS_THP_SWAP but don't support THP_SWP due to
+ * limitations in the implementation like arm64 MTE can override this to
+ * false
+ */
+#ifndef arch_thp_swp_supported
+static inline bool arch_thp_swp_supported(void)
+{
+	return true;
+}
+#endif
+
 #endif /* _LINUX_HUGE_MM_H */
diff --git a/mm/swap_slots.c b/mm/swap_slots.c
index 2a65a89b5b4d..10b94d64cc25 100644
--- a/mm/swap_slots.c
+++ b/mm/swap_slots.c
@@ -307,7 +307,7 @@ swp_entry_t folio_alloc_swap(struct folio *folio)
 	entry.val = 0;
 
 	if (folio_test_large(folio)) {
-		if (IS_ENABLED(CONFIG_THP_SWAP))
+		if (IS_ENABLED(CONFIG_THP_SWAP) && arch_thp_swp_supported())
 			get_swap_pages(1, &entry, folio_nr_pages(folio));
 		goto out;
 	}
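For reference, one way to check that the benchmark above actually swapped
out whole THPs rather than falling back to splitting is to compare the
thp_swpout and thp_swpout_fallback counters in /proc/vmstat before and
after the madvise(MADV_PAGEOUT) call. A minimal helper, purely
illustrative and not part of the patch, could look like this:

/*
 * Illustrative helper (not part of the patch): read a counter such as
 * "thp_swpout" or "thp_swpout_fallback" from /proc/vmstat. These
 * counters are present on THP-enabled kernels.
 */
#include <stdio.h>
#include <string.h>

static long vmstat_read(const char *key)
{
	FILE *f = fopen("/proc/vmstat", "r");
	char name[64];
	long val, ret = -1;

	if (!f)
		return -1;
	while (fscanf(f, "%63s %ld", name, &val) == 2) {
		if (!strcmp(name, key)) {
			ret = val;
			break;
		}
	}
	fclose(f);
	return ret;
}

int main(void)
{
	printf("thp_swpout:          %ld\n", vmstat_read("thp_swpout"));
	printf("thp_swpout_fallback: %ld\n", vmstat_read("thp_swpout_fallback"));
	return 0;
}

Comparing the two counters before and after a run shows whether the 400MB
region went out as 2MB units or was split back to base pages first.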