From patchwork Thu Jan 31 13:32:18 2013
From: Alexander Kartashov
Date: Thu, 31 Jan 2013 17:32:18 +0400
Message-ID: <510A7262.4060307@parallels.com>
Subject: IPC SHM alignment on ARMv7

Dear colleagues,

It turned out that IPC SHM works in a somewhat strange way on ARMv7.
The syscall sys_shmat() requires the argument shmaddr to be
SHMLBA-aligned:

[ipc/shm.c]
[...]
	else if ((addr = (ulong)shmaddr)) {
		if (addr & (shmlba - 1)) {
			if (shmflg & SHM_RND)
				addr &= ~(shmlba - 1);	/* round down */
			else
#ifndef __ARCH_FORCE_SHMLBA
				if (addr & ~PAGE_MASK)
#endif
					goto out;
		}
		flags = MAP_SHARED | MAP_FIXED;
[...]

since the macro __ARCH_FORCE_SHMLBA is unconditionally defined for the
ARM architecture.  However, to allocate memory for a SHM segment the
kernel uses the function arch_get_unmapped_area() introduced in the
mainline commit 4197692eef113eeb8e3e413cc70993a5e667e5b8, and that
function allocates SHMLBA-aligned memory only if the I or D caches
alias, as the following comment reads:

[arch/arm/mm/mmap.c]
[...]
unsigned long
arch_get_unmapped_area(struct file *filp, unsigned long addr,
		unsigned long len, unsigned long pgoff, unsigned long flags)
{
	struct mm_struct *mm = current->mm;
	struct vm_area_struct *vma;
	int do_align = 0;
	int aliasing = cache_is_vipt_aliasing();
	struct vm_unmapped_area_info info;

	/*
	 * We only need to do colour alignment if either the I or D
	 * caches alias.
	 */
	if (aliasing)
		do_align = filp || (flags & MAP_SHARED);
[...]

So a SHM segment isn't always SHMLBA-aligned.  This results in the
following inconvenience: the address returned by the syscall
sys_shmat() may be rejected when it is passed back as the shmaddr
argument of a later call.  Being able to re-attach at the same address
is, however, crucial for the IPC SHM checkpoint/restore support for the
ARM architecture that I'm currently working on.

As far as I can see from the mainline commit
c0e9587841a0fd79bbf8296034faefb9afe72fb4, the flag
CACHEID_VIPT_ALIASING is never set for ARMv7, so it's impossible to
guarantee that an IPC SHM segment is always SHMLBA-aligned.

Is it true that the desired SHM alignment cannot be achieved on the
ARMv7 architecture?

For reference, the change that introduced cacheid_init() reads:

diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index 1939c90..5b121d8 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -67,6 +67,8 @@ unsigned int processor_id;
 EXPORT_SYMBOL(processor_id);
 unsigned int __machine_arch_type;
 EXPORT_SYMBOL(__machine_arch_type);
+unsigned int cacheid;
+EXPORT_SYMBOL(cacheid);
 
 unsigned int __atags_pointer __initdata;
@@ -229,6 +231,25 @@ int cpu_architecture(void)
 	return cpu_arch;
 }
 
+static void __init cacheid_init(void)
+{
+	unsigned int cachetype = read_cpuid_cachetype();
+	unsigned int arch = cpu_architecture();
+
+	if (arch >= CPU_ARCH_ARMv7) {
+		cacheid = CACHEID_VIPT_NONALIASING;
+		if ((cachetype & (3 << 14)) == 1 << 14)
+			cacheid |= CACHEID_ASID_TAGGED;
+	} else if (arch >= CPU_ARCH_ARMv6) {
+		if (cachetype & (1 << 23))
+			cacheid = CACHEID_VIPT_ALIASING;
+		else
+			cacheid = CACHEID_VIPT_NONALIASING;
+	} else {
+		cacheid = CACHEID_VIVT;
+	}
+}
+
 /*
  * These functions re-use the assembly code in head.S, which
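
P.S. To make the problem easier to reproduce, below is a minimal
userspace sketch (not taken from the kernel excerpts above; the segment
size, permissions and the lack of error handling are illustrative
only).  It attaches a segment at a kernel-chosen address, detaches it,
and re-attaches at the very same address, as a checkpoint/restore tool
has to do.  On a kernel that doesn't colour-align the first mapping,
the second shmat() is expected to fail with EINVAL whenever the
returned address is not SHMLBA-aligned:

#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
	/* A small private segment; size and permissions are arbitrary. */
	int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);

	/* Let the kernel pick the attach address. */
	void *addr = shmat(id, NULL, 0);

	printf("attached at %p, SHMLBA = %ld\n", addr, (long)SHMLBA);
	shmdt(addr);

	/*
	 * Re-attach at the address we were just given.  Since
	 * __ARCH_FORCE_SHMLBA is defined on ARM, sys_shmat() rejects
	 * the call with EINVAL if addr is not SHMLBA-aligned.
	 */
	if (shmat(id, addr, 0) == (void *)-1)
		perror("shmat at the previous address");

	shmctl(id, IPC_RMID, NULL);
	return 0;
}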