From patchwork Wed Nov 12 10:47:18 2014
X-Patchwork-Submitter: Vladimir Murzin
X-Patchwork-Id: 5289021
From: Vladimir Murzin
To: linux-arm-kernel@lists.infradead.org
Cc: will.deacon@arm.com
Subject: [PATCH] arm64: compat: align cacheflush syscall with arch/arm
Date: Wed, 12 Nov 2014 10:47:18 +0000
Message-Id: <1415789238-4573-1-git-send-email-vladimir.murzin@arm.com>
X-Mailer: git-send-email 2.0.0

Update the handling of the cacheflush syscall to match the changes made
in its arch/arm counterpart:
 - return an error to userspace when the flushing syscall fails
 - split user cache-flushing into interruptible chunks
 - don't bother rounding to the nearest vma

Signed-off-by: Vladimir Murzin
Acked-by: Will Deacon
---
 arch/arm64/include/asm/cacheflush.h  |  2 +-
 arch/arm64/include/asm/thread_info.h | 12 ++++++
 arch/arm64/kernel/sys_compat.c       | 77 ++++++++++++++++++++++++----------
 arch/arm64/mm/cache.S                |  6 ++-
 4 files changed, 74 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 689b637..7ae31a2 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -73,7 +73,7 @@ extern void flush_cache_all(void);
 extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start, unsigned long end);
 extern void flush_icache_range(unsigned long start, unsigned long end);
 extern void __flush_dcache_area(void *addr, size_t len);
-extern void __flush_cache_user_range(unsigned long start, unsigned long end);
+extern long __flush_cache_user_range(unsigned long start, unsigned long end);
 
 static inline void flush_cache_mm(struct mm_struct *mm)
 {
diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h
index 459bf8e..61ea595 100644
--- a/arch/arm64/include/asm/thread_info.h
+++ b/arch/arm64/include/asm/thread_info.h
@@ -39,6 +39,15 @@ struct exec_domain;
 
 typedef unsigned long mm_segment_t;
 
+struct compat_restart_block {
+	union {
+		/* For user cache flushing */
+		struct {
+			unsigned long start;
+			unsigned long end;
+		} cache;
+	};
+};
 /*
  * low level task data that entry.S needs immediate access to.
  * __switch_to() assumes cpu_context follows immediately after cpu_domain.
@@ -51,6 +60,9 @@ struct thread_info {
 	struct restart_block	restart_block;
 	int			preempt_count;	/* 0 => preemptable, <0 => bug */
 	int			cpu;		/* cpu */
+#ifdef CONFIG_COMPAT
+	struct compat_restart_block	compat_restart_block;
+#endif
 };
 
 #define INIT_THREAD_INFO(tsk)						\
diff --git a/arch/arm64/kernel/sys_compat.c b/arch/arm64/kernel/sys_compat.c
index dc47e53..7139637 100644
--- a/arch/arm64/kernel/sys_compat.c
+++ b/arch/arm64/kernel/sys_compat.c
@@ -28,29 +28,65 @@
 #include <asm/cacheflush.h>
 #include <asm/unistd.h>
 
-static inline void
-do_compat_cache_op(unsigned long start, unsigned long end, int flags)
+static long do_compat_cache_op_restart(struct restart_block *);
+
+static long
+__do_compat_cache_op(unsigned long start, unsigned long end)
 {
-	struct mm_struct *mm = current->active_mm;
-	struct vm_area_struct *vma;
+	long ret;
 
-	if (end < start || flags)
-		return;
-
-	down_read(&mm->mmap_sem);
-	vma = find_vma(mm, start);
-	if (vma && vma->vm_start < end) {
-		if (start < vma->vm_start)
-			start = vma->vm_start;
-		if (end > vma->vm_end)
-			end = vma->vm_end;
-		up_read(&mm->mmap_sem);
-		__flush_cache_user_range(start & PAGE_MASK, PAGE_ALIGN(end));
-		return;
-	}
-	up_read(&mm->mmap_sem);
+	do {
+		unsigned long chunk = min(PAGE_SIZE, end - start);
+
+		if (signal_pending(current)) {
+			struct thread_info *ti = current_thread_info();
+
+			ti->restart_block = (struct restart_block) {
+				.fn	= do_compat_cache_op_restart,
+			};
+
+			ti->compat_restart_block = (struct compat_restart_block) {
+				{
+					.cache = {
+						.start	= start,
+						.end	= end,
+					},
+				},
+			};
+
+			return -ERESTART_RESTARTBLOCK;
+		}
+		ret = __flush_cache_user_range(start, start + chunk);
+		if (ret)
+			return ret;
+
+		cond_resched();
+		start += chunk;
+	} while (start < end);
+
+	return 0;
 }
 
+static long do_compat_cache_op_restart(struct restart_block *unused)
+{
+	struct compat_restart_block *restart_block;
+
+	restart_block = &current_thread_info()->compat_restart_block;
+	return __do_compat_cache_op(restart_block->cache.start,
+				    restart_block->cache.end);
+}
+
+static inline long
+do_compat_cache_op(unsigned long start, unsigned long end, int flags)
+{
+	if (end < start || flags)
+		return -EINVAL;
+
+	if (!access_ok(VERIFY_READ, start, end - start))
+		return -EFAULT;
+
+	return __do_compat_cache_op(start, end);
+}
 /*
  * Handle all unrecognised system calls.
  */
@@ -74,8 +110,7 @@ long compat_arm_syscall(struct pt_regs *regs)
	 * the specified region).
	 */
 	case __ARM_NR_compat_cacheflush:
-		do_compat_cache_op(regs->regs[0], regs->regs[1], regs->regs[2]);
-		return 0;
+		return do_compat_cache_op(regs->regs[0], regs->regs[1], regs->regs[2]);
 
 	case __ARM_NR_compat_set_tls:
 		current->thread.tp_value = regs->regs[0];
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 2366383..d41899f 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -17,6 +17,7 @@
  * along with this program.  If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <linux/errno.h>
 #include <linux/linkage.h>
 #include <linux/init.h>
 #include <asm/assembler.h>
@@ -138,9 +139,12 @@ USER(9f, ic	ivau, x4 )		// invalidate I line PoU
 	add	x4, x4, x2
 	cmp	x4, x1
 	b.lo	1b
-9:						// ignore any faulting cache operation
 	dsb	ish
 	isb
+	mov	x0, #0
+	ret
+9:
+	mov	x0, #-EFAULT
 	ret
 ENDPROC(flush_icache_range)
 ENDPROC(__flush_cache_user_range)
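
To see the effect from userspace: with this patch the compat cacheflush
syscall reports failures (-EINVAL for end < start or a non-zero flags
argument, -EFAULT when the range cannot be flushed) instead of silently
returning 0, so 32-bit callers can now check the result.  Below is a
minimal, illustrative sketch of such a caller; it is not part of the
patch, and the syscall number and helper names are assumptions made for
the example only.

/*
 * Illustrative sketch only (not part of the patch): a 32-bit EABI program
 * calling the ARM-private cacheflush syscall and checking the error return
 * introduced above.  The syscall number (__ARM_NR_BASE + 2 == 0x0f0002) is
 * assumed here; real code should take it from <asm/unistd.h> on arm.
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#ifndef __ARM_NR_cacheflush
#define __ARM_NR_cacheflush	0x0f0002	/* assumed EABI value */
#endif

/* Flush [start, end) for instruction execution; the flags argument must be 0. */
static int cacheflush_range(void *start, void *end)
{
	return syscall(__ARM_NR_cacheflush, start, end, 0);
}

int main(void)
{
	static unsigned char code[4096];

	/* ...generate instructions into 'code' here (hypothetical)... */

	if (cacheflush_range(code, code + sizeof(code)) != 0) {
		/* -EFAULT for an inaccessible range, -EINVAL for bad arguments */
		fprintf(stderr, "cacheflush failed: %s\n", strerror(errno));
		return 1;
	}
	return 0;
}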