From patchwork Mon Jul 21 14:47:16 2014
X-Patchwork-Submitter: Daniel Thompson
X-Patchwork-Id: 4595801
From: Daniel Thompson
To: Russell King, Thomas Gleixner, Jason Cooper
Subject: [PATCH RFC 5/9] ARM: Add L1 PTE non-secure mapping
Date: Mon, 21 Jul 2014 15:47:16 +0100
Message-Id: <1405954040-30399-6-git-send-email-daniel.thompson@linaro.org>
In-Reply-To: <1405954040-30399-1-git-send-email-daniel.thompson@linaro.org>
References: <1405954040-30399-1-git-send-email-daniel.thompson@linaro.org>
Cc: Marek Vasut, Daniel Thompson, linaro-kernel@lists.linaro.org, patches@linaro.org, Harro Haan, linux-kernel@vger.kernel.org, John Stultz, linux-arm-kernel@lists.infradead.org

From: Marek Vasut

Add new device type, MT_DEVICE_NS.
This type sets the NS bit in the L1 PTE [1]. Accesses to a memory region mapped this way generate non-secure accesses to that memory area. One must be careful here: since the NS bit is only available in the L1 PTE, the mapping must be at least 1 MiB in size and must be aligned to a 1 MiB boundary. If that condition is not met, the kernel uses a regular L2 page mapping for the area instead and the NS bit setting is ineffective.

[1] See DDI0406B, Section B3 "Virtual Memory System Architecture (VMSA)", Subsection B3.3.1 "Translation table entry formats", paragraph "First-level descriptors", Table B3-1 and the associated description of the NS bit in the "Section" table entry.

Signed-off-by: Marek Vasut
Signed-off-by: Daniel Thompson
---
 arch/arm/include/asm/io.h                   |  5 ++++-
 arch/arm/include/asm/mach/map.h             |  4 ++--
 arch/arm/include/asm/pgtable-2level-hwdef.h |  1 +
 arch/arm/mm/mmu.c                           | 13 ++++++++++++-
 4 files changed, 19 insertions(+), 4 deletions(-)

diff --git a/arch/arm/include/asm/io.h b/arch/arm/include/asm/io.h
index 3d23418..22765e0 100644
--- a/arch/arm/include/asm/io.h
+++ b/arch/arm/include/asm/io.h
@@ -125,8 +125,10 @@ static inline u32 __raw_readl(const volatile void __iomem *addr)
 #define MT_DEVICE_NONSHARED	1
 #define MT_DEVICE_CACHED	2
 #define MT_DEVICE_WC		3
+#define MT_DEVICE_NS		4
+
 /*
- * types 4 onwards can be found in asm/mach/map.h and are undefined
+ * types 5 onwards can be found in asm/mach/map.h and are undefined
  * for ioremap
  */
@@ -343,6 +345,7 @@ extern void _memset_io(volatile void __iomem *, int, size_t);
 #define ioremap_nocache(cookie,size)	__arm_ioremap((cookie), (size), MT_DEVICE)
 #define ioremap_cache(cookie,size)	__arm_ioremap((cookie), (size), MT_DEVICE_CACHED)
 #define ioremap_wc(cookie,size)		__arm_ioremap((cookie), (size), MT_DEVICE_WC)
+#define ioremap_ns(cookie,size)		__arm_ioremap((cookie), (size), MT_DEVICE_NS)
 #define iounmap				__arm_iounmap

 /*
diff --git a/arch/arm/include/asm/mach/map.h b/arch/arm/include/asm/mach/map.h
index f98c7f3..42be265 100644
--- a/arch/arm/include/asm/mach/map.h
+++ b/arch/arm/include/asm/mach/map.h
@@ -21,9 +21,9 @@ struct map_desc {
 	unsigned int type;
 };

-/* types 0-3 are defined in asm/io.h */
+/* types 0-4 are defined in asm/io.h */
 enum {
-	MT_UNCACHED = 4,
+	MT_UNCACHED = 5,
 	MT_CACHECLEAN,
 	MT_MINICLEAN,
 	MT_LOW_VECTORS,
diff --git a/arch/arm/include/asm/pgtable-2level-hwdef.h b/arch/arm/include/asm/pgtable-2level-hwdef.h
index 5cfba15..d24e7ea 100644
--- a/arch/arm/include/asm/pgtable-2level-hwdef.h
+++ b/arch/arm/include/asm/pgtable-2level-hwdef.h
@@ -36,6 +36,7 @@
 #define PMD_SECT_S		(_AT(pmdval_t, 1) << 16)	/* v6 */
 #define PMD_SECT_nG		(_AT(pmdval_t, 1) << 17)	/* v6 */
 #define PMD_SECT_SUPER		(_AT(pmdval_t, 1) << 18)	/* v6 */
+#define PMD_SECT_NS		(_AT(pmdval_t, 1) << 19)	/* v6 */
 #define PMD_SECT_AF		(_AT(pmdval_t, 0))

 #define PMD_SECT_UNCACHED	(_AT(pmdval_t, 0))
diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
index ab14b79..9baf1cb 100644
--- a/arch/arm/mm/mmu.c
+++ b/arch/arm/mm/mmu.c
@@ -268,6 +268,13 @@ static struct mem_type mem_types[] = {
 		.prot_sect	= PROT_SECT_DEVICE,
 		.domain		= DOMAIN_IO,
 	},
+	[MT_DEVICE_NS] = {	/* Non-secure accesses from secure mode */
+		.prot_pte	= PROT_PTE_DEVICE | L_PTE_MT_DEV_SHARED |
+				  L_PTE_SHARED,
+		.prot_l1	= PMD_TYPE_TABLE,
+		.prot_sect	= PROT_SECT_DEVICE | PMD_SECT_S | PMD_SECT_NS,
+		.domain		= DOMAIN_IO,
+	},
 	[MT_UNCACHED] = {
 		.prot_pte	= PROT_PTE_DEVICE,
 		.prot_l1	= PMD_TYPE_TABLE,
@@ -474,6 +481,7 @@ static void __init build_mem_type_table(void)
 		mem_types[MT_DEVICE_NONSHARED].prot_sect |= PMD_SECT_XN;
 		mem_types[MT_DEVICE_CACHED].prot_sect |= PMD_SECT_XN;
 		mem_types[MT_DEVICE_WC].prot_sect |= PMD_SECT_XN;
+		mem_types[MT_DEVICE_NS].prot_sect |= PMD_SECT_XN;

 		/* Also setup NX memory mapping */
 		mem_types[MT_MEMORY_RW].prot_sect |= PMD_SECT_XN;
@@ -489,6 +497,7 @@ static void __init build_mem_type_table(void)
 		mem_types[MT_DEVICE].prot_sect |= PMD_SECT_TEX(1);
 		mem_types[MT_DEVICE_NONSHARED].prot_sect |= PMD_SECT_TEX(1);
 		mem_types[MT_DEVICE_WC].prot_sect |= PMD_SECT_BUFFERABLE;
+		mem_types[MT_DEVICE_NS].prot_sect |= PMD_SECT_TEX(1);
 	} else if (cpu_is_xsc3()) {
 		/*
 		 * For Xscale3,
@@ -500,6 +509,7 @@ static void __init build_mem_type_table(void)
 		mem_types[MT_DEVICE].prot_sect |= PMD_SECT_TEX(1) | PMD_SECT_BUFFERED;
 		mem_types[MT_DEVICE_NONSHARED].prot_sect |= PMD_SECT_TEX(2);
 		mem_types[MT_DEVICE_WC].prot_sect |= PMD_SECT_TEX(1);
+		mem_types[MT_DEVICE_NS].prot_sect |= PMD_SECT_TEX(1) | PMD_SECT_BUFFERED;
 	} else {
 		/*
 		 * For ARMv6 and ARMv7 without TEX remapping,
@@ -511,6 +521,7 @@ static void __init build_mem_type_table(void)
 		mem_types[MT_DEVICE].prot_sect |= PMD_SECT_BUFFERED;
 		mem_types[MT_DEVICE_NONSHARED].prot_sect |= PMD_SECT_TEX(2);
 		mem_types[MT_DEVICE_WC].prot_sect |= PMD_SECT_TEX(1);
+		mem_types[MT_DEVICE_NS].prot_sect |= PMD_SECT_BUFFERED;
 	}
 } else {
 	/*
@@ -856,7 +867,7 @@ static void __init create_mapping(struct map_desc *md)
 		return;
 	}

-	if ((md->type == MT_DEVICE || md->type == MT_ROM) &&
+	if ((md->type == MT_DEVICE || md->type == MT_DEVICE_NS || md->type == MT_ROM) &&
 	    md->virtual >= PAGE_OFFSET &&
 	    (md->virtual < VMALLOC_START || md->virtual >= VMALLOC_END)) {
 		printk(KERN_WARNING "BUG: mapping for 0x%08llx"