From patchwork Thu Nov 18 08:18:03 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Zhaoyang Huang
X-Patchwork-Id: 12626275
From: Huangzhaoyang <huangzhaoyang@gmail.com>
To: Ard Biesheuvel, Catalin Marinas, Will Deacon, Anshuman Khandual,
 Andrew Morton, Nicholas Piggin, Mike Rapoport, Pavel Tatashin,
 Christophe Leroy, Jonathan Marek, Zhaoyang Huang,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH] arch: arm64: try to use PTE_CONT when change page attr
Date: Thu, 18 Nov 2021 16:18:03 +0800
Message-Id: <1637223483-2867-1-git-send-email-huangzhaoyang@gmail.com>
X-Mailer: git-send-email 1.7.9.5

From: Zhaoyang Huang <huangzhaoyang@gmail.com>

The kernel uses the minimum (page) granularity for the linear map when
rodata_full is enabled, which makes TLB pressure high. Furthermore, no
PTE_CONT is applied. Try to improve this a little by applying PTE_CONT
when changing a page's attributes.

Signed-off-by: Zhaoyang Huang <huangzhaoyang@gmail.com>
---
 arch/arm64/mm/pageattr.c | 62 ++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 58 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index a3bacd7..0b6a354 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -61,8 +61,13 @@ static int change_memory_common(unsigned long addr, int numpages,
 	unsigned long start = addr;
 	unsigned long size = PAGE_SIZE * numpages;
 	unsigned long end = start + size;
+	unsigned long cont_pte_start = 0;
+	unsigned long cont_pte_end = 0;
+	unsigned long cont_pmd_start = 0;
+	unsigned long cont_pmd_end = 0;
+	pgprot_t orig_set_mask = set_mask;
 	struct vm_struct *area;
-	int i;
+	int i = 0;
 
 	if (!PAGE_ALIGNED(addr)) {
 		start &= PAGE_MASK;
@@ -98,9 +103,58 @@ static int change_memory_common(unsigned long addr, int numpages,
 	 */
 	if (rodata_full && (pgprot_val(set_mask) == PTE_RDONLY ||
 			    pgprot_val(clear_mask) == PTE_RDONLY)) {
-		for (i = 0; i < area->nr_pages; i++) {
-			__change_memory_common((u64)page_address(area->pages[i]),
-					       PAGE_SIZE, set_mask, clear_mask);
+		cont_pmd_start = (start + ~CONT_PMD_MASK + 1) & CONT_PMD_MASK;
+		cont_pmd_end = cont_pmd_start + ~CONT_PMD_MASK + 1;
+		cont_pte_start = (start + ~CONT_PTE_MASK + 1) & CONT_PTE_MASK;
+		cont_pte_end = cont_pte_start + ~CONT_PTE_MASK + 1;
+
+		if (addr <= cont_pmd_start && end > cont_pmd_end) {
+			do {
+				__change_memory_common((u64)page_address(area->pages[i]),
+						       PAGE_SIZE, set_mask, clear_mask);
+				i++;
+				addr++;
+			} while(addr < cont_pmd_start);
+			do {
+				set_mask = __pgprot(pgprot_val(set_mask) | PTE_CONT);
+				__change_memory_common((u64)page_address(area->pages[i]),
+						       PAGE_SIZE, set_mask, clear_mask);
+				i++;
+				addr++;
+			} while(addr < cont_pmd_end);
+			set_mask = orig_set_mask;
+			do {
+				__change_memory_common((u64)page_address(area->pages[i]),
+						       PAGE_SIZE, set_mask, clear_mask);
+				i++;
+				addr++;
+			} while(addr <= end);
+		} else if (addr <= cont_pte_start && end > cont_pte_end) {
+			do {
+				__change_memory_common((u64)page_address(area->pages[i]),
+						       PAGE_SIZE, set_mask, clear_mask);
+				i++;
+				addr++;
+			} while(addr < cont_pte_start);
+			do {
+				set_mask = __pgprot(pgprot_val(set_mask) | PTE_CONT);
+				__change_memory_common((u64)page_address(area->pages[i]),
+						       PAGE_SIZE, set_mask, clear_mask);
+				i++;
+				addr++;
+			} while(addr < cont_pte_end);
+			set_mask = orig_set_mask;
+			do {
+				__change_memory_common((u64)page_address(area->pages[i]),
+						       PAGE_SIZE, set_mask, clear_mask);
+				i++;
+				addr++;
+			} while(addr <= end);
+		} else {
+			for (i = 0; i < area->nr_pages; i++) {
+				__change_memory_common((u64)page_address(area->pages[i]),
+						       PAGE_SIZE, set_mask, clear_mask);
+			}
 		}
 	}
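
For readers unfamiliar with the mask arithmetic in the hunk above, the
following standalone sketch (plain C, not kernel code) reproduces the
CONT_PTE boundary computation the patch performs before deciding which
pages can carry PTE_CONT. PAGE_SIZE, CONT_PTE_SIZE and the example
addresses are illustrative assumptions for a 4K-granule configuration,
not values taken from the patch itself.

/*
 * Minimal userspace sketch of the round-up logic used above:
 * ~CONT_PTE_MASK + 1 equals the size of a contiguous-PTE block, so
 * (start + ~CONT_PTE_MASK + 1) & CONT_PTE_MASK rounds start up to the
 * next block boundary, and the patch then takes exactly one block from
 * there.
 */
#include <stdio.h>

#define PAGE_SIZE      0x1000UL                /* assumed 4K pages */
#define CONT_PTE_SIZE  (16 * PAGE_SIZE)        /* assumed 16 contiguous PTEs = 64K */
#define CONT_PTE_MASK  (~(CONT_PTE_SIZE - 1))

int main(void)
{
	/* Hypothetical page-aligned range handed to change_memory_common(). */
	unsigned long start = 0x1201000UL;
	unsigned long end   = start + 64 * PAGE_SIZE;

	unsigned long cont_pte_start = (start + ~CONT_PTE_MASK + 1) & CONT_PTE_MASK;
	unsigned long cont_pte_end   = cont_pte_start + ~CONT_PTE_MASK + 1;

	if (start <= cont_pte_start && end > cont_pte_end)
		printf("head [%#lx, %#lx) per page\n"
		       "cont [%#lx, %#lx) eligible for PTE_CONT\n"
		       "tail [%#lx, %#lx) per page\n",
		       start, cont_pte_start,
		       cont_pte_start, cont_pte_end,
		       cont_pte_end, end);
	else
		printf("range [%#lx, %#lx) does not cover a full block, per-page only\n",
		       start, end);
	return 0;
}

With the example values this prints one 64K span starting at 0x1210000
that would be mapped with PTE_CONT, while the unaligned head and the
tail are still changed page by page.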