From patchwork Sun Jul 22 13:03:55 2012
From: Will Deacon
To: Gilles Chanteperdrix
Subject: Re: [PATCH] ARM: mm: avoid attempting to flush the gate_vma with VIVT caches
Date: Sun, 22 Jul 2012 14:03:55 +0100
Message-ID: <20120722130355.GA29138@mudshark.cambridge.arm.com>
In-Reply-To: <500AC109.3060708@xenomai.org>
Cc: Uros Bizjak, "linux-arm-kernel@lists.infradead.org"

On Sat, Jul 21, 2012 at 03:47:37PM +0100, Gilles Chanteperdrix wrote:
> On 07/21/2012 04:40 PM, Gilles Chanteperdrix wrote:
> > On 07/21/2012 04:35 PM, Will Deacon wrote:
> >> Hi Gilles,
> >>
> >> On Sat, Jul 21, 2012 at 02:18:35PM +0100, Gilles Chanteperdrix wrote:
> >>> On 07/20/2012 10:41 PM, Gilles Chanteperdrix wrote:
> >>>> Being 0 or 1 whether we want to flush the vector page (I believe we do
> >>>> not want to flush it, but am not sure).
> >>>
> >>> Actually, I believe we want to flush the vector page, at least on
> >>> systems with a VIVT cache: there, the vector page is writeable in
> >>> kernel mode, so it may have been modified, and the address used by
> >>> elf_core_dump is not the vectors address but the address in the
> >>> kernel direct-mapped RAM region where the vector page was allocated,
> >>> so there is a cache aliasing issue.
> >>
> >> It may be writable, but we never actually write to it after it has been
> >> initialised, so there's no need to worry about caching issues (the cache
> >> is flushed in devicemaps_init).
> >
> > Except if CONFIG_TLS_REG_EMUL is enabled
>
> is disabled, I mean.

Well spotted!
I disagree about the address being flushed, though -- it looks to me like we
flush from 0xffff0000 to 0xffff1000, which is what we want. Why do you think
we're flushing from the linear mapping?

Anyway, the TLS issue can easily be resolved by changing my previous patch so
that we flush unconditionally when there's no mm (see below). In the meantime,
I'll remove the old patch from the patch system while we address your
remaining concerns.

Cheers,

Will

---8<---

Tested-by: Uros Bizjak

diff --git a/arch/arm/include/asm/cacheflush.h b/arch/arm/include/asm/cacheflush.h
index 8cf828e..e4448e1 100644
--- a/arch/arm/include/asm/cacheflush.h
+++ b/arch/arm/include/asm/cacheflush.h
@@ -217,7 +217,7 @@ vivt_flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned
 {
 	struct mm_struct *mm = vma->vm_mm;
 
-	if (mm && cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm)))
+	if (!mm || cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm)))
 		__cpuc_flush_user_range(start & PAGE_MASK, PAGE_ALIGN(end),
 					vma->vm_flags);
 }
@@ -227,7 +227,7 @@ vivt_flush_cache_page(struct vm_area_struct *vma, unsigned long user_addr, unsig
 {
 	struct mm_struct *mm = vma->vm_mm;
 
-	if (mm && cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm))) {
+	if (!mm || cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm))) {
 		unsigned long addr = user_addr & PAGE_MASK;
 		__cpuc_flush_user_range(addr, addr + PAGE_SIZE, vma->vm_flags);
 	}