From patchwork Thu Jun 11 13:49:09 2020
X-Patchwork-Submitter: Gregory CLEMENT
X-Patchwork-Id: 11600165
From: Gregory CLEMENT
To: Russell King, Arnd Bergmann
Subject: [PATCH v2 1/6] ARM: Use PAGE_SIZE for ELF_EXEC_PAGESIZE
Date: Thu, 11 Jun 2020 15:49:09 +0200
Message-Id: <20200611134914.765827-2-gregory.clement@bootlin.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200611134914.765827-1-gregory.clement@bootlin.com>
References: <20200611134914.765827-1-gregory.clement@bootlin.com>
Cc: Gregory CLEMENT, Thomas Petazzoni, linux-arm-kernel@lists.infradead.org

Currently, ELF_EXEC_PAGESIZE is hardcoded to 4096, which is also the
page size. In order to support page sizes other than 4 KB, use
PAGE_SIZE instead of the hardcoded value. Using PAGE_SIZE also aligns
with what other architectures such as arm64 do.

This is inspired from fa0ca2726ea9 ("DSMP 64K support") and
4ef803e12baf ("mmu: large-page: Added support for multiple kernel page
sizes") from
https://github.com/MarvellEmbeddedProcessors/linux-marvell.git

Signed-off-by: Gregory CLEMENT
---
 arch/arm/include/asm/elf.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/elf.h b/arch/arm/include/asm/elf.h
index b078d992414b..0e406ce25379 100644
--- a/arch/arm/include/asm/elf.h
+++ b/arch/arm/include/asm/elf.h
@@ -116,7 +116,7 @@ int dump_task_regs(struct task_struct *t, elf_gregset_t *elfregs);
 #define ELF_CORE_COPY_TASK_REGS dump_task_regs
 #define CORE_DUMP_USE_REGSET

-#define ELF_EXEC_PAGESIZE	4096
+#define ELF_EXEC_PAGESIZE	PAGE_SIZE

 /* This is the base location for PIE (ET_DYN with INTERP) loads.
  */
 #define ELF_ET_DYN_BASE		0x400000UL

From patchwork Thu Jun 11 13:49:10 2020
X-Patchwork-Submitter: Gregory CLEMENT
X-Patchwork-Id: 11600153
From: Gregory CLEMENT
To: Russell King, Arnd Bergmann
Subject: [PATCH v2 2/6] ARM: pagetable: prepare hardware page table to use large page
Date: Thu, 11 Jun 2020 15:49:10 +0200
Message-Id: <20200611134914.765827-3-gregory.clement@bootlin.com>
In-Reply-To: <20200611134914.765827-1-gregory.clement@bootlin.com>
References: <20200611134914.765827-1-gregory.clement@bootlin.com>
Cc: Gregory CLEMENT, Thomas Petazzoni, linux-arm-kernel@lists.infradead.org

With 4 KB pages, each page table contains 512 entries in the hardware
page tables and 512 entries in the Linux page tables, each entry
pointing to a 4 KB page. With larger page sizes being emulated, the
hardware page tables will continue to contain 512 entries, as we keep
using 4 KB pages at the MMU level; hence PTE_HWTABLE_PTRS is hardcoded
to 512. However, the number of Linux page table entries will vary
depending on the page size: 512 entries with 4 KB pages, 256 entries
with 8 KB pages, 128 entries with 16 KB pages, etc. With 4 KB pages,
this patch does not modify the values being used.

This is inspired from fa0ca2726ea9 ("DSMP 64K support") and
4ef803e12baf ("mmu: large-page: Added support for multiple kernel page
sizes") from
https://github.com/MarvellEmbeddedProcessors/linux-marvell.git

Signed-off-by: Gregory CLEMENT
---
 arch/arm/include/asm/pgtable-2level.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm/include/asm/pgtable-2level.h b/arch/arm/include/asm/pgtable-2level.h
index 9e084a464a97..6316ef4a9f5c 100644
--- a/arch/arm/include/asm/pgtable-2level.h
+++ b/arch/arm/include/asm/pgtable-2level.h
@@ -67,13 +67,13 @@
  * until either the TLB entry is evicted under pressure, or a context
  * switch which changes the user space mapping occurs.
  */
-#define PTRS_PER_PTE		512
+#define PTRS_PER_PTE		(512 >> (PAGE_SHIFT - 12))
 #define PTRS_PER_PMD		1
 #define PTRS_PER_PGD		2048

-#define PTE_HWTABLE_PTRS	(PTRS_PER_PTE)
+#define PTE_HWTABLE_PTRS	(512)
 #define PTE_HWTABLE_OFF		(PTE_HWTABLE_PTRS * sizeof(pte_t))
-#define PTE_HWTABLE_SIZE	(PTRS_PER_PTE * sizeof(u32))
+#define PTE_HWTABLE_SIZE	(PTE_HWTABLE_PTRS * sizeof(u32))

 /*
  * PMD_SHIFT determines the size of the area a second-level page table can map

From patchwork Thu Jun 11 13:49:11 2020
X-Patchwork-Submitter: Gregory CLEMENT
X-Patchwork-Id: 11600157
From: Gregory CLEMENT
To: Russell King, Arnd Bergmann
Subject: [PATCH v2 3/6] ARM: Make the number of fix bitmap depend on the page size
Date: Thu, 11 Jun 2020 15:49:11 +0200
Message-Id: <20200611134914.765827-4-gregory.clement@bootlin.com>
In-Reply-To: <20200611134914.765827-1-gregory.clement@bootlin.com>
References: <20200611134914.765827-1-gregory.clement@bootlin.com>
Cc: Gregory CLEMENT, Thomas Petazzoni, linux-arm-kernel@lists.infradead.org

Currently the number of fixmap slots is fixed. However, if the page
size is no longer 4 KB but larger, the space occupied by the fixmap
becomes too big. Since the total fixmap size is fixed, the number of
fixmap slots should depend on the page size, as is done on arm64.
Instead of always using 32 slots, keep the total size constant at
128 KB, which with 4 KB pages corresponds to those 32 slots.

Signed-off-by: Gregory CLEMENT
---
 arch/arm/include/asm/fixmap.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm/include/asm/fixmap.h b/arch/arm/include/asm/fixmap.h
index 472c93db5dac..d4b82af5a96d 100644
--- a/arch/arm/include/asm/fixmap.h
+++ b/arch/arm/include/asm/fixmap.h
@@ -6,6 +6,7 @@
 #define FIXADDR_END		0xfff00000UL
 #define FIXADDR_TOP		(FIXADDR_END - PAGE_SIZE)

+#include <linux/sizes.h>
 #include <asm/kmap_types.h>
 #include <asm/pgtable.h>

@@ -27,7 +28,7 @@ enum fixed_addresses {
  * not to clash since early_ioremap() is only available before
  * paging_init(), and kmap() only after.
  */
-#define NR_FIX_BTMAPS		32
+#define NR_FIX_BTMAPS		(SZ_128K / PAGE_SIZE)
 #define FIX_BTMAPS_SLOTS	7
 #define TOTAL_FIX_BTMAPS	(NR_FIX_BTMAPS * FIX_BTMAPS_SLOTS)

From patchwork Thu Jun 11 13:49:12 2020
X-Patchwork-Submitter: Gregory CLEMENT
X-Patchwork-Id: 11600159
From: Gregory CLEMENT
To: Russell King, Arnd Bergmann
Subject: [PATCH v2 4/6] ARM: mm: Aligned pte allocation to one page
Date: Thu, 11 Jun 2020 15:49:12 +0200
Message-Id: <20200611134914.765827-5-gregory.clement@bootlin.com>
In-Reply-To: <20200611134914.765827-1-gregory.clement@bootlin.com>
References: <20200611134914.765827-1-gregory.clement@bootlin.com>
Cc: Gregory CLEMENT, Thomas Petazzoni, linux-arm-kernel@lists.infradead.org

pte_offset_kernel() uses the pte_index() macro, which assumes that the
page table address is aligned to the page size. In arm_pte_alloc(), the
size allocated is the size needed for 512 entries; this size was
calculated to fit in a 4 KB page. When using a larger page size, the
allocated table is no longer page aligned, which ends up producing a
wrong physical address.

The solution is to round the allocation up to the page size instead of
using the exact size of the tables (4 KB). This complies with the
assumption of pte_index(), but the drawback is a waste of memory in the
early allocation when the page size is bigger than 4 KB.
This is inspired from fa0ca2726ea9 ("DSMP 64K support") and
4ef803e12baf ("mmu: large-page: Added support for multiple kernel page
sizes") from
https://github.com/MarvellEmbeddedProcessors/linux-marvell.git

Signed-off-by: Gregory CLEMENT
---
 arch/arm/mm/mmu.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index ec8d0008bfa1..b7fdea7e0cbe 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -715,7 +715,9 @@ static pte_t * __init arm_pte_alloc(pmd_t *pmd, unsigned long addr,
 				    void *(*alloc)(unsigned long sz))
 {
 	if (pmd_none(*pmd)) {
-		pte_t *pte = alloc(PTE_HWTABLE_OFF + PTE_HWTABLE_SIZE);
+		/* The PTE table needs to be page aligned */
+		pte_t *pte = alloc(round_up(PTE_HWTABLE_OFF + PTE_HWTABLE_SIZE,
+					    PAGE_SIZE));
 		__pmd_populate(pmd, __pa(pte), prot);
 	}
 	BUG_ON(pmd_bad(*pmd));

From patchwork Thu Jun 11 13:49:13 2020
X-Patchwork-Submitter: Gregory CLEMENT
X-Patchwork-Id: 11600169
From: Gregory CLEMENT
To: Russell King, Arnd Bergmann
Subject: [PATCH v2 5/6] ARM: Add large kernel page support
Date: Thu, 11 Jun 2020 15:49:13 +0200
Message-Id: <20200611134914.765827-6-gregory.clement@bootlin.com>
In-Reply-To: <20200611134914.765827-1-gregory.clement@bootlin.com>
References: <20200611134914.765827-1-gregory.clement@bootlin.com>
Cc: Gregory CLEMENT, Thomas Petazzoni, linux-arm-kernel@lists.infradead.org

On a 32-bit system with 4 KB pages, it is not possible to support
volumes larger than 16 TB, even with ext4. To go beyond that, the page
size must be larger. This patch allows kernel pages of up to 64 KB,
while at the MMU level 4 KB pages are still used.

Indeed, on ARM there is already a difference between the kernel page
and the hardware page in the way they are managed: in the same 4 KB
space, the Linux kernel deals with 2 Linux PTE tables at the beginning,
while the hardware deals with 2 other hardware PTE tables. This patch
takes advantage of that and pushes the difference between the hardware
and Linux views further by using a larger page size at the Linux kernel
level. When the Linux kernel deals with a single large page, the lower
level actually manages several 4 KB pages.
This is inspired from fa0ca2726ea9 ("DSMP 64K support") and
4ef803e12baf ("mmu: large-page: Added support for multiple kernel page
sizes") from
https://github.com/MarvellEmbeddedProcessors/linux-marvell.git

Signed-off-by: Gregory CLEMENT
---
 arch/arm/include/asm/page.h     | 10 +++++++
 arch/arm/include/asm/pgtable.h  |  4 +++
 arch/arm/include/asm/shmparam.h |  4 +++
 arch/arm/include/asm/tlbflush.h | 21 ++++++++++++-
 arch/arm/kernel/entry-common.S  | 13 ++++++++
 arch/arm/kernel/traps.c         | 10 +++++++
 arch/arm/mm/Kconfig             | 53 +++++++++++++++++++++++++++++++++
 arch/arm/mm/fault.c             | 19 ++++++++++++
 arch/arm/mm/mmu.c               | 18 +++++++++++
 arch/arm/mm/pgd.c               |  2 ++
 arch/arm/mm/proc-v7-2level.S    | 44 +++++++++++++++++++++++++--
 arch/arm/mm/tlb-v7.S            | 14 +++++++--
 12 files changed, 205 insertions(+), 7 deletions(-)

diff --git a/arch/arm/include/asm/page.h b/arch/arm/include/asm/page.h
index 11b058a72a5b..42784fed8834 100644
--- a/arch/arm/include/asm/page.h
+++ b/arch/arm/include/asm/page.h
@@ -8,7 +8,17 @@
 #define _ASMARM_PAGE_H

 /* PAGE_SHIFT determines the page size */
+#ifdef CONFIG_ARM_8KB_SW_PAGE_SIZE_SUPPORT
+#define PAGE_SHIFT	13
+#elif defined(CONFIG_ARM_16KB_SW_PAGE_SIZE_SUPPORT)
+#define PAGE_SHIFT	14
+#elif defined(CONFIG_ARM_32KB_SW_PAGE_SIZE_SUPPORT)
+#define PAGE_SHIFT	15
+#elif defined(CONFIG_ARM_64KB_SW_PAGE_SIZE_SUPPORT)
+#define PAGE_SHIFT	16
+#else
 #define PAGE_SHIFT	12
+#endif
 #define PAGE_SIZE	(_AC(1,UL) << PAGE_SHIFT)
 #define PAGE_MASK	(~((1 << PAGE_SHIFT) - 1))

diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h
index befc8fcec98f..8b0a85ec8614 100644
--- a/arch/arm/include/asm/pgtable.h
+++ b/arch/arm/include/asm/pgtable.h
@@ -59,7 +59,11 @@ extern void __pgd_error(const char *file, int line, pgd_t);
  * mapping to be mapped at. This is particularly important for
  * non-high vector CPUs.
  */
+#ifndef CONFIG_ARM_LARGE_PAGE_SUPPORT
 #define FIRST_USER_ADDRESS	(PAGE_SIZE * 2)
+#else
+#define FIRST_USER_ADDRESS	PAGE_SIZE
+#endif

 /*
  * Use TASK_SIZE as the ceiling argument for free_pgtables() and

diff --git a/arch/arm/include/asm/shmparam.h b/arch/arm/include/asm/shmparam.h
index 367a9dac6150..01de64a57a5e 100644
--- a/arch/arm/include/asm/shmparam.h
+++ b/arch/arm/include/asm/shmparam.h
@@ -7,7 +7,11 @@
  * or page size, whichever is greater since the cache aliases
  * every size/ways bytes.
  */
+#ifdef CONFIG_ARM_LARGE_PAGE_SUPPORT
+#define SHMLBA	(16 << 10)	 /* attach addr a multiple of (4 * 4096) */
+#else
 #define SHMLBA	(4 * PAGE_SIZE)	 /* attach addr a multiple of this */
+#endif

 /*
  * Enforce SHMLBA in shmat

diff --git a/arch/arm/include/asm/tlbflush.h b/arch/arm/include/asm/tlbflush.h
index 24cbfc112dfa..d8ad4021a4da 100644
--- a/arch/arm/include/asm/tlbflush.h
+++ b/arch/arm/include/asm/tlbflush.h
@@ -419,8 +419,16 @@ static inline void __flush_tlb_mm(struct mm_struct *mm)
 static inline void
 __local_flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
 {
-	const int zero = 0;
 	const unsigned int __tlb_flag = __cpu_tlb_flags;
+#if defined(CONFIG_ARM_LARGE_PAGE_SUPPORT)
+	if (tlb_flag(TLB_WB))
+		dsb();
+
+	uaddr = (uaddr & PAGE_MASK);
+	__cpu_flush_user_tlb_range(uaddr, uaddr + PAGE_SIZE, vma);
+
+#else
+	const int zero = 0;

 	uaddr = (uaddr & PAGE_MASK) | ASID(vma->vm_mm);

@@ -436,6 +444,7 @@ __local_flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
 	tlb_op(TLB_V6_U_PAGE, "c8, c7, 1", uaddr);
 	tlb_op(TLB_V6_D_PAGE, "c8, c6, 1", uaddr);
 	tlb_op(TLB_V6_I_PAGE, "c8, c5, 1", uaddr);
+#endif /* CONFIG_ARM_LARGE_PAGE_SUPPORT */
 }

 static inline void
@@ -449,7 +458,9 @@ local_flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
 		dsb(nshst);

 	__local_flush_tlb_page(vma, uaddr);
+#if !defined(CONFIG_ARM_LARGE_PAGE_SUPPORT)
 	tlb_op(TLB_V7_UIS_PAGE, "c8, c7, 1", uaddr);
+#endif

 	if (tlb_flag(TLB_BARRIER))
 		dsb(nsh);
@@ -478,6 +489,9 @@ __flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr)
 static inline void __local_flush_tlb_kernel_page(unsigned long kaddr)
 {
+#if defined(CONFIG_ARM_LARGE_PAGE_SUPPORT)
+	__cpu_flush_kern_tlb_range(kaddr, kaddr + PAGE_SIZE);
+#else
 	const int zero = 0;
 	const unsigned int __tlb_flag = __cpu_tlb_flags;

@@ -490,6 +504,7 @@ static inline void __local_flush_tlb_kernel_page(unsigned long kaddr)
 	tlb_op(TLB_V6_U_PAGE, "c8, c7, 1", kaddr);
 	tlb_op(TLB_V6_D_PAGE, "c8, c6, 1", kaddr);
 	tlb_op(TLB_V6_I_PAGE, "c8, c5, 1", kaddr);
+#endif /* CONFIG_ARM_LARGE_PAGE_SUPPORT */
 }

 static inline void local_flush_tlb_kernel_page(unsigned long kaddr)
@@ -502,7 +517,9 @@ static inline void local_flush_tlb_kernel_page(unsigned long kaddr)
 		dsb(nshst);

 	__local_flush_tlb_kernel_page(kaddr);
+#if !defined(CONFIG_ARM_LARGE_PAGE_SUPPORT)
 	tlb_op(TLB_V7_UIS_PAGE, "c8, c7, 1", kaddr);
+#endif

 	if (tlb_flag(TLB_BARRIER)) {
 		dsb(nsh);
@@ -520,7 +537,9 @@ static inline void __flush_tlb_kernel_page(unsigned long kaddr)
 		dsb(ishst);

 	__local_flush_tlb_kernel_page(kaddr);
+#if !defined(CONFIG_ARM_LARGE_PAGE_SUPPORT)
 	tlb_op(TLB_V7_UIS_PAGE, "c8, c3, 1", kaddr);
+#endif

 	if (tlb_flag(TLB_BARRIER)) {
 		dsb(ish);

diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S
index 271cb8a1eba1..3a6ff31b8554 100644
--- a/arch/arm/kernel/entry-common.S
+++ b/arch/arm/kernel/entry-common.S
@@ -407,9 +407,22 @@ ENDPROC(sys_fstatfs64_wrapper)
 * Note: off_4k (r5) is always units of 4K. If we can't do the requested
 * offset, we return EINVAL.
*/ + +#define PGOFF_SHIFT (PAGE_SHIFT - 12) +#define PGOFF_MASK ((1 << PGOFF_SHIFT) - 1) + sys_mmap2: +#ifdef CONFIG_ARM_LARGE_PAGE_SUPPORT + tst r5, #PGOFF_MASK + moveq r5, r5, lsr #PGOFF_SHIFT + streq r5, [sp, #4] + beq sys_mmap_pgoff + mov r0, #-EINVAL + ret lr +#else str r5, [sp, #4] b sys_mmap_pgoff +#endif ENDPROC(sys_mmap2) #ifdef CONFIG_OABI_COMPAT diff --git a/arch/arm/kernel/traps.c b/arch/arm/kernel/traps.c index 1e70e7227f0f..19cf3e66df31 100644 --- a/arch/arm/kernel/traps.c +++ b/arch/arm/kernel/traps.c @@ -830,7 +830,17 @@ void __init early_trap_init(void *vectors_base) kuser_init(vectors_base); +#if defined(CONFIG_ARM_LARGE_PAGE_SUPPORT) + /* + * With large page support, the pages are at least 8K, so there is + * enough space in one page for the stubs, which are copied at a + * 4K offset. + */ + flush_icache_range(vectors, vectors + PAGE_SIZE); +#else flush_icache_range(vectors, vectors + PAGE_SIZE * 2); +#endif + #else /* ifndef CONFIG_CPU_V7M */ /* * on V7-M there is no need to copy the vector table to a dedicated diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig index 65e4482e3849..6266caa93520 100644 --- a/arch/arm/mm/Kconfig +++ b/arch/arm/mm/Kconfig @@ -975,6 +975,59 @@ config MIGHT_HAVE_CACHE_L2X0 instead of this option, thus preventing the user from inadvertently configuring a broken kernel. +config ARM_LARGE_PAGE_SUPPORT + bool + +choice + prompt "Kernel Large Page Support" + depends on CPU_V7 && !ARM_LPAE + default ARM_NO_LARGE_PAGE_SUPPORT + help + Support kernel large pages (> 4KB) by software emulation of + large pages (using 4KB MMU pages). Select one of the page + sizes below. + +config ARM_NO_LARGE_PAGE_SUPPORT + bool "Disabled - Use default" + help + Use the kernel default page size (4KB). + If you are not sure, select this option. + This option does not make any changes to the default kernel + page size or MMU management.
+ +config ARM_8KB_SW_PAGE_SIZE_SUPPORT + bool "8KB software page size support" + select ARM_LARGE_PAGE_SUPPORT + help + The kernel uses 8KB pages; the MMU page tables still use 4KB pages. + This feature enables support for large storage volumes up to 32TB + at the expense of higher memory fragmentation. + +config ARM_16KB_SW_PAGE_SIZE_SUPPORT + bool "16KB software page size support" + select ARM_LARGE_PAGE_SUPPORT + help + The kernel uses 16KB pages; the MMU page tables still use 4KB pages. + This feature enables support for large storage volumes up to 64TB + at the expense of higher memory fragmentation. + +config ARM_32KB_SW_PAGE_SIZE_SUPPORT + bool "32KB software page size support" + select ARM_LARGE_PAGE_SUPPORT + help + The kernel uses 32KB pages; the MMU page tables still use 4KB pages. + This feature enables support for large storage volumes up to 128TB + at the expense of higher memory fragmentation. + +config ARM_64KB_SW_PAGE_SIZE_SUPPORT + bool "64KB software page size support" + select ARM_LARGE_PAGE_SUPPORT + help + The kernel uses 64KB pages; the MMU page tables still use 4KB pages. + This feature enables support for large storage volumes up to 256TB + at the expense of higher memory fragmentation.
+endchoice + config CACHE_L2X0 bool "Enable the L2x0 outer cache controller" if MIGHT_HAVE_CACHE_L2X0 default MIGHT_HAVE_CACHE_L2X0 diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c index 2dd5c41cbb8d..ee4241b3cb2b 100644 --- a/arch/arm/mm/fault.c +++ b/arch/arm/mm/fault.c @@ -27,6 +27,20 @@ #ifdef CONFIG_MMU +#ifdef CONFIG_ARM_LARGE_PAGE_SUPPORT +static long long get_large_pte_hw_val(pte_t *pte) +{ + unsigned long pte_ptr = (unsigned long)pte; + unsigned long tmp = pte_ptr; + + pte_ptr += (PTE_HWTABLE_PTRS * sizeof(void *)); + pte_ptr &= ~0x7FC; + tmp &= 0x7FC & (~(((PAGE_SHIFT - 12) - 1) << 7)); + pte_ptr += (tmp << (PAGE_SHIFT - 12)); + return (long long)pte_val(*(pte_t *)pte_ptr); +} +#endif + /* * This is useful to dump out the page tables associated with * 'addr' in mm 'mm'. @@ -86,9 +100,14 @@ void show_pte(const char *lvl, struct mm_struct *mm, unsigned long addr) pte = pte_offset_map(pmd, addr); pr_cont(", *pte=%08llx", (long long)pte_val(*pte)); #ifndef CONFIG_ARM_LPAE +#ifdef CONFIG_ARM_LARGE_PAGE_SUPPORT + pr_cont(", *ppte=%08llx", get_large_pte_hw_val(pte)); + +#else pr_cont(", *ppte=%08llx", (long long)pte_val(pte[PTE_HWTABLE_PTRS])); #endif +#endif /* CONFIG_ARM_LPAE */ pte_unmap(pte); } while(0); diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c index b7fdea7e0cbe..06549714973a 100644 --- a/arch/arm/mm/mmu.c +++ b/arch/arm/mm/mmu.c @@ -1318,8 +1318,17 @@ static void __init devicemaps_init(const struct machine_desc *mdesc) /* * Allocate the vector page early. */ +#if defined(CONFIG_ARM_LARGE_PAGE_SUPPORT) + /* + * With large page support, the pages are at least 8K, so + * there is enough space in one page for the stubs that are + * copied at 4K offset. 
+ */ + vectors = early_alloc(PAGE_SIZE); +#else vectors = early_alloc(PAGE_SIZE * 2); +#endif early_trap_init(vectors); /* @@ -1380,12 +1389,21 @@ static void __init devicemaps_init(const struct machine_desc *mdesc) create_mapping(&map); } + /* + * With large page support, the pages are at least 8K, so this + * hardware page is already mapped. Moreover, the hardcoded + * 4KB offset causes trouble with the virtual address passed + * to create_mapping: the address is no longer aligned to a + * page. + */ +#ifndef CONFIG_ARM_LARGE_PAGE_SUPPORT /* Now create a kernel read-only mapping */ map.pfn += 1; map.virtual = 0xffff0000 + PAGE_SIZE; map.length = PAGE_SIZE; map.type = MT_LOW_VECTORS; create_mapping(&map); +#endif /* * Ask the machine support to map in the statically mapped devices. diff --git a/arch/arm/mm/pgd.c b/arch/arm/mm/pgd.c index 478bd2c6aa50..ade3f3885b4c 100644 --- a/arch/arm/mm/pgd.c +++ b/arch/arm/mm/pgd.c @@ -95,7 +95,9 @@ pgd_t *pgd_alloc(struct mm_struct *mm) init_pmd = pmd_offset(init_pud, 0); init_pte = pte_offset_map(init_pmd, 0); set_pte_ext(new_pte + 0, init_pte[0], 0); +#ifndef CONFIG_ARM_LARGE_PAGE_SUPPORT set_pte_ext(new_pte + 1, init_pte[1], 0); +#endif pte_unmap(init_pte); pte_unmap(new_pte); } diff --git a/arch/arm/mm/proc-v7-2level.S b/arch/arm/mm/proc-v7-2level.S index 5db029c8f987..7e34b421c8b8 100644 --- a/arch/arm/mm/proc-v7-2level.S +++ b/arch/arm/mm/proc-v7-2level.S @@ -59,6 +59,11 @@ ENTRY(cpu_v7_switch_mm) bx lr ENDPROC(cpu_v7_switch_mm) + .macro flush_pte adr + ALT_SMP(W(nop)) + ALT_UP (mcr p15, 0, \adr, c7, c10, 1) @ flush_pte +.endm + /* * cpu_v7_set_pte_ext(ptep, pte) * @@ -73,6 +78,19 @@ ENTRY(cpu_v7_set_pte_ext) #ifdef CONFIG_MMU str r1, [r0] @ linux version + /* Calc HW PTE Entry Offset */ +#ifdef CONFIG_ARM_LARGE_PAGE_SUPPORT +#define PTE_SHIFT (PAGE_SHIFT - 12) +#define PTE_MASK (0x3FC >> (PTE_SHIFT - 1)) + mov r3, #PTE_MASK + and r3, r3, r0 + mov r3, r3, lsl#PTE_SHIFT + + bic r0, r0, #0x3FC + bic r0, r0, #0x400 + orr r0,
r0, r3 +#endif /* CONFIG_ARM_LARGE_PAGE_SUPPORT */ + bic r3, r1, #0x000003f0 bic r3, r3, #PTE_TYPE_MASK orr r3, r3, r2 @@ -100,9 +118,29 @@ ENTRY(cpu_v7_set_pte_ext) ARM( str r3, [r0, #2048]! ) THUMB( add r0, r0, #2048 ) THUMB( str r3, [r0] ) - ALT_SMP(W(nop)) - ALT_UP (mcr p15, 0, r0, c7, c10, 1) @ flush_pte -#endif +#ifdef CONFIG_ARM_LARGE_PAGE_SUPPORT +#define PTE_OFFSET ((1 << (PAGE_SHIFT - 12)) * 4) + mov r1, #PTE_OFFSET + mov r2, #4 +1: add r3, r3, #0x1000 + str r3, [r0, r2] + add r2, r2, #4 +#if PAGE_SHIFT > 15 /* 64KB in this case 2 cache lines need to be flushed */ + cmp r2, #32 @ cache line size + bne 2f + cmp r2, r1 + beq 3f + flush_pte r0 + mov r1, #32 + add r0, r0, #32 + mov r2, #0 +#endif /* PAGE_SHIFT > 15 */ +2: cmp r2, r1 + bne 1b +#endif /* CONFIG_ARM_LARGE_PAGE_SUPPORT */ +3: flush_pte r0 +#endif /* CONFIG_MMU */ + bx lr ENDPROC(cpu_v7_set_pte_ext) diff --git a/arch/arm/mm/tlb-v7.S b/arch/arm/mm/tlb-v7.S index 1bb28d7db567..8e68218e53d3 100644 --- a/arch/arm/mm/tlb-v7.S +++ b/arch/arm/mm/tlb-v7.S @@ -50,8 +50,12 @@ ENTRY(v7wbi_flush_user_tlb_range) #endif ALT_UP(mcr p15, 0, r0, c8, c7, 1) @ TLB invalidate U MVA - add r0, r0, #PAGE_SZ - cmp r0, r1 +#if defined(CONFIG_ARM_LARGE_PAGE_SUPPORT) + add r0, r0, #0x1000 +#else + add r0, r0, #PAGE_SZ +#endif /* CONFIG_ARM_LARGE_PAGE_SUPPORT */ + cmp r0, r1 blo 1b dsb ish ret lr @@ -78,7 +82,11 @@ ENTRY(v7wbi_flush_kern_tlb_range) ALT_SMP(mcr p15, 0, r0, c8, c3, 1) @ TLB invalidate U MVA (shareable) #endif ALT_UP(mcr p15, 0, r0, c8, c7, 1) @ TLB invalidate U MVA - add r0, r0, #PAGE_SZ +#if defined(CONFIG_ARM_LARGE_PAGE_SUPPORT) + add r0, r0, #0x1000 +#else + add r0, r0, #PAGE_SZ +#endif /* CONFIG_ARM_LARGE_PAGE_SUPPORT */ cmp r0, r1 blo 1b dsb ish
From patchwork Thu Jun 11 13:49:14 2020 X-Patchwork-Submitter: Gregory CLEMENT X-Patchwork-Id: 11600163 From: Gregory CLEMENT To: Russell King , Arnd Bergmann Subject: [PATCH v2 6/6] ARM: Add 64K page support at MMU level Date: Thu, 11 Jun 2020 15:49:14 +0200 Message-Id: <20200611134914.765827-7-gregory.clement@bootlin.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20200611134914.765827-1-gregory.clement@bootlin.com> References: <20200611134914.765827-1-gregory.clement@bootlin.com> Cc: Gregory CLEMENT , Thomas Petazzoni , linux-arm-kernel@lists.infradead.org
While 8K, 16K or 32K pages are not supported by ARM, it is
possible to use large pages with a 64K size. Compared to the software-based large page support, using real 64K pages allows the TLB flush to be done on a single page instead of a range of pages. This is inspired by fa0ca2726ea9 ("DSMP 64K support") and 4ef803e12baf ("mmu: large-page: Added support for multiple kernel page sizes") from https://github.com/MarvellEmbeddedProcessors/linux-marvell.git Signed-off-by: Gregory CLEMENT --- arch/arm/include/asm/page.h | 2 ++ arch/arm/include/asm/pgtable-2level-hwdef.h | 8 ++++++ arch/arm/include/asm/tlbflush.h | 14 +++++------ arch/arm/mm/Kconfig | 23 +++++++++++++++-- arch/arm/mm/proc-v7-2level.S | 28 +++++++++++++++++++++ arch/arm/mm/tlb-v7.S | 8 +++--- 6 files changed, 70 insertions(+), 13 deletions(-) diff --git a/arch/arm/include/asm/page.h b/arch/arm/include/asm/page.h index 42784fed8834..8d6b16e73b06 100644 --- a/arch/arm/include/asm/page.h +++ b/arch/arm/include/asm/page.h @@ -16,6 +16,8 @@ #define PAGE_SHIFT 15 #elif defined(CONFIG_ARM_64KB_SW_PAGE_SIZE_SUPPORT) #define PAGE_SHIFT 16 +#elif defined(CONFIG_ARM_64KB_MMU_PAGE_SIZE_SUPPORT) +#define PAGE_SHIFT 16 #else #define PAGE_SHIFT 12 #endif diff --git a/arch/arm/include/asm/pgtable-2level-hwdef.h b/arch/arm/include/asm/pgtable-2level-hwdef.h index 556937e1790e..37503789c6d6 100644 --- a/arch/arm/include/asm/pgtable-2level-hwdef.h +++ b/arch/arm/include/asm/pgtable-2level-hwdef.h @@ -66,7 +66,11 @@ /* * - extended small page/tiny page */ +#ifdef CONFIG_ARM_64KB_MMU_PAGE_SIZE_SUPPORT +#define PTE_EXT_XN (_AT(pteval_t, 1) << 15) /* v6 */ +#else #define PTE_EXT_XN (_AT(pteval_t, 1) << 0) /* v6 */ +#endif #define PTE_EXT_AP_MASK (_AT(pteval_t, 3) << 4) #define PTE_EXT_AP0 (_AT(pteval_t, 1) << 4) #define PTE_EXT_AP1 (_AT(pteval_t, 2) << 4) @@ -74,7 +78,11 @@ #define PTE_EXT_AP_UNO_SRW (PTE_EXT_AP0) #define PTE_EXT_AP_URO_SRW (PTE_EXT_AP1) #define PTE_EXT_AP_URW_SRW (PTE_EXT_AP1|PTE_EXT_AP0) +#ifdef CONFIG_ARM_64KB_MMU_PAGE_SIZE_SUPPORT +#define PTE_EXT_TEX(x)
(_AT(pteval_t, (x)) << 12) /* Large Page */ +#else #define PTE_EXT_TEX(x) (_AT(pteval_t, (x)) << 6) /* v5 */ +#endif #define PTE_EXT_APX (_AT(pteval_t, 1) << 9) /* v6 */ #define PTE_EXT_COHERENT (_AT(pteval_t, 1) << 9) /* XScale3 */ #define PTE_EXT_SHARED (_AT(pteval_t, 1) << 10) /* v6 */ diff --git a/arch/arm/include/asm/tlbflush.h b/arch/arm/include/asm/tlbflush.h index d8ad4021a4da..1d2b17a9b6ee 100644 --- a/arch/arm/include/asm/tlbflush.h +++ b/arch/arm/include/asm/tlbflush.h @@ -420,7 +420,7 @@ static inline void __local_flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr) { const unsigned int __tlb_flag = __cpu_tlb_flags; -#if defined(CONFIG_ARM_LARGE_PAGE_SUPPORT) +#if defined(CONFIG_ARM_SW_LARGE_PAGE_SUPPORT) if (tlb_flag(TLB_WB)) dsb(); @@ -444,7 +444,7 @@ __local_flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr) tlb_op(TLB_V6_U_PAGE, "c8, c7, 1", uaddr); tlb_op(TLB_V6_D_PAGE, "c8, c6, 1", uaddr); tlb_op(TLB_V6_I_PAGE, "c8, c5, 1", uaddr); -#endif /* CONFIG_ARM_LARGE_PAGE_SUPPORT */ +#endif /* CONFIG_ARM_SW_LARGE_PAGE_SUPPORT */ } static inline void @@ -458,7 +458,7 @@ local_flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr) dsb(nshst); __local_flush_tlb_page(vma, uaddr); -#if !defined(CONFIG_ARM_LARGE_PAGE_SUPPORT) +#if !defined(CONFIG_ARM_SW_LARGE_PAGE_SUPPORT) tlb_op(TLB_V7_UIS_PAGE, "c8, c7, 1", uaddr); #endif @@ -489,7 +489,7 @@ __flush_tlb_page(struct vm_area_struct *vma, unsigned long uaddr) static inline void __local_flush_tlb_kernel_page(unsigned long kaddr) { -#if defined(CONFIG_ARM_LARGE_PAGE_SUPPORT) +#if defined(CONFIG_ARM_SW_LARGE_PAGE_SUPPORT) __cpu_flush_kern_tlb_range(kaddr, kaddr + PAGE_SIZE); #else const int zero = 0; @@ -504,7 +504,7 @@ static inline void __local_flush_tlb_kernel_page(unsigned long kaddr) tlb_op(TLB_V6_U_PAGE, "c8, c7, 1", kaddr); tlb_op(TLB_V6_D_PAGE, "c8, c6, 1", kaddr); tlb_op(TLB_V6_I_PAGE, "c8, c5, 1", kaddr); -#endif /* CONFIG_ARM_LARGE_PAGE_SUPPORT */ +#endif /*
CONFIG_ARM_SW_LARGE_PAGE_SUPPORT */ } static inline void local_flush_tlb_kernel_page(unsigned long kaddr) @@ -517,7 +517,7 @@ static inline void local_flush_tlb_kernel_page(unsigned long kaddr) dsb(nshst); __local_flush_tlb_kernel_page(kaddr); -#if !defined(CONFIG_ARM_LARGE_PAGE_SUPPORT) +#if !defined(CONFIG_ARM_SW_LARGE_PAGE_SUPPORT) tlb_op(TLB_V7_UIS_PAGE, "c8, c7, 1", kaddr); #endif @@ -537,7 +537,7 @@ static inline void __flush_tlb_kernel_page(unsigned long kaddr) dsb(ishst); __local_flush_tlb_kernel_page(kaddr); -#if !defined(CONFIG_ARM_LARGE_PAGE_SUPPORT) +#if !defined(CONFIG_ARM_SW_LARGE_PAGE_SUPPORT) tlb_op(TLB_V7_UIS_PAGE, "c8, c3, 1", kaddr); #endif diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig index 6266caa93520..b566708af0bf 100644 --- a/arch/arm/mm/Kconfig +++ b/arch/arm/mm/Kconfig @@ -978,13 +978,16 @@ config MIGHT_HAVE_CACHE_L2X0 config ARM_LARGE_PAGE_SUPPORT bool +config ARM_SW_LARGE_PAGE_SUPPORT + bool + choice prompt "Kernel Large Page Support" depends on CPU_V7 && !ARM_LPAE default ARM_NO_LARGE_PAGE_SUPPORT help - Support kernel large pages (> 4KB) by software emulation of - large pages (using 4KB MMU pages). Select one of the page + Support kernel large pages (> 4KB); this includes MMU large pages + (64KB) and software emulation of large pages (using 4KB MMU + pages). Select one of the page sizes below. config ARM_NO_LARGE_PAGE_SUPPORT @@ -998,6 +1001,7 @@ config ARM_NO_LARGE_PAGE_SUPPORT config ARM_8KB_SW_PAGE_SIZE_SUPPORT bool "8KB software page size support" select ARM_LARGE_PAGE_SUPPORT + select ARM_SW_LARGE_PAGE_SUPPORT help The kernel uses 8KB pages; the MMU page tables still use 4KB pages. This feature enables support for large storage volumes up to 32TB @@ -1006,6 +1010,7 @@ config ARM_8KB_SW_PAGE_SIZE_SUPPORT config ARM_16KB_SW_PAGE_SIZE_SUPPORT bool "16KB software page size support" select ARM_LARGE_PAGE_SUPPORT + select ARM_SW_LARGE_PAGE_SUPPORT help The kernel uses 16KB pages; the MMU page tables still use 4KB pages.
This feature enables support for large storage volumes up to 64TB @@ -1014,6 +1019,7 @@ config ARM_32KB_SW_PAGE_SIZE_SUPPORT bool "32KB software page size support" select ARM_LARGE_PAGE_SUPPORT + select ARM_SW_LARGE_PAGE_SUPPORT help The kernel uses 32KB pages; the MMU page tables still use 4KB pages. This feature enables support for large storage volumes up to 128TB @@ -1022,10 +1028,23 @@ config ARM_64KB_SW_PAGE_SIZE_SUPPORT bool "64KB software page size support" select ARM_LARGE_PAGE_SUPPORT + select ARM_SW_LARGE_PAGE_SUPPORT help The kernel uses 64KB pages; the MMU page tables still use 4KB pages. This feature enables support for large storage volumes up to 256TB at the expense of higher memory fragmentation. + If you need 64KB pages, consider using the ARM_64KB_MMU_PAGE_SIZE_SUPPORT + option. + +config ARM_64KB_MMU_PAGE_SIZE_SUPPORT + bool "64KB MMU page size support" + select ARM_LARGE_PAGE_SUPPORT + help + The kernel uses 64KB pages. The page tables will use large (64KB) + pages as well. + This feature enables support for large storage volumes up to 256TB + at the expense of higher memory fragmentation.
+ endchoice config CACHE_L2X0 diff --git a/arch/arm/mm/proc-v7-2level.S b/arch/arm/mm/proc-v7-2level.S index 7e34b421c8b8..67401f859c2d 100644 --- a/arch/arm/mm/proc-v7-2level.S +++ b/arch/arm/mm/proc-v7-2level.S @@ -92,9 +92,16 @@ ENTRY(cpu_v7_set_pte_ext) #endif /* CONFIG_ARM_LARGE_PAGE_SUPPORT */ bic r3, r1, #0x000003f0 +#ifdef CONFIG_ARM_64KB_MMU_PAGE_SIZE_SUPPORT + bic r3, r3, #0x0000F000 +#endif bic r3, r3, #PTE_TYPE_MASK orr r3, r3, r2 +#ifdef CONFIG_ARM_64KB_MMU_PAGE_SIZE_SUPPORT + orr r3, r3, #PTE_EXT_AP0 | 1 +#else orr r3, r3, #PTE_EXT_AP0 | 2 +#endif tst r1, #1 << 4 orrne r3, r3, #PTE_EXT_TEX(1) @@ -119,6 +126,26 @@ ENTRY(cpu_v7_set_pte_ext) THUMB( add r0, r0, #2048 ) THUMB( str r3, [r0] ) #ifdef CONFIG_ARM_LARGE_PAGE_SUPPORT +#ifdef CONFIG_ARM_64KB_MMU_PAGE_SIZE_SUPPORT + @ Need to duplicate the entry 16 times because of the overlapping PTE index bits. + str r3, [r0, #4] + str r3, [r0, #8] + str r3, [r0, #12] + str r3, [r0, #16] + str r3, [r0, #20] + str r3, [r0, #24] + str r3, [r0, #28] + flush_pte r0 + add r0, r0, #32 + str r3, [r0] + str r3, [r0, #4] + str r3, [r0, #8] + str r3, [r0, #12] + str r3, [r0, #16] + str r3, [r0, #20] + str r3, [r0, #24] + str r3, [r0, #28] +#else #define PTE_OFFSET ((1 << (PAGE_SHIFT - 12)) * 4) mov r1, #PTE_OFFSET mov r2, #4 @@ -137,6 +164,7 @@ ENTRY(cpu_v7_set_pte_ext) #endif /* PAGE_SHIFT > 15 */ 2: cmp r2, r1 bne 1b +#endif /* CONFIG_ARM_64KB_MMU_PAGE_SIZE_SUPPORT */ #endif /* CONFIG_ARM_LARGE_PAGE_SUPPORT */ 3: flush_pte r0 #endif /* CONFIG_MMU */ diff --git a/arch/arm/mm/tlb-v7.S b/arch/arm/mm/tlb-v7.S index 8e68218e53d3..c90dbbd6aa5e 100644 --- a/arch/arm/mm/tlb-v7.S +++ b/arch/arm/mm/tlb-v7.S @@ -50,11 +50,11 @@ ENTRY(v7wbi_flush_user_tlb_range) #endif ALT_UP(mcr p15, 0, r0, c8, c7, 1) @ TLB invalidate U MVA -#if defined(CONFIG_ARM_LARGE_PAGE_SUPPORT) +#if defined(CONFIG_ARM_SW_LARGE_PAGE_SUPPORT) add r0, r0, #0x1000 #else add r0, r0, #PAGE_SZ -#endif /* CONFIG_ARM_LARGE_PAGE_SUPPORT */ +#endif /*
CONFIG_ARM_SW_LARGE_PAGE_SUPPORT */ cmp r0, r1 blo 1b dsb ish @@ -82,11 +82,11 @@ ENTRY(v7wbi_flush_kern_tlb_range) ALT_SMP(mcr p15, 0, r0, c8, c3, 1) @ TLB invalidate U MVA (shareable) #endif ALT_UP(mcr p15, 0, r0, c8, c7, 1) @ TLB invalidate U MVA -#if defined(CONFIG_ARM_LARGE_PAGE_SUPPORT) +#if defined(CONFIG_ARM_SW_LARGE_PAGE_SUPPORT) add r0, r0, #0x1000 #else add r0, r0, #PAGE_SZ -#endif /* CONFIG_ARM_LARGE_PAGE_SUPPORT */ +#endif /* CONFIG_ARM_SW_LARGE_PAGE_SUPPORT */ cmp r0, r1 blo 1b dsb ish