From patchwork Wed Apr 16 05:31:37 2014
X-Patchwork-Submitter: Victor Kamensky
X-Patchwork-Id: 3998091
From: Victor Kamensky <victor.kamensky@linaro.org>
To: rmk@arm.linux.org.uk, davem@davemloft.net, oleg@redhat.com, dave.long@linaro.org
Cc: tixy@linaro.org, linaro-kernel@lists.linaro.org, ananth@in.ibm.com, peterz@infradead.org, taras.kondratiuk@linaro.org, rabin@rab.in, torvalds@linux-foundation.org, Dave.Martin@arm.com, linux-arm-kernel@lists.infradead.org
Subject: [RFC PATCH v4] ARM: uprobes xol write directly to userspace
Date: Tue, 15 Apr 2014 22:31:37 -0700
Message-Id: <1397626297-23873-2-git-send-email-victor.kamensky@linaro.org>
In-Reply-To: <1397626297-23873-1-git-send-email-victor.kamensky@linaro.org>
References: <1397626297-23873-1-git-send-email-victor.kamensky@linaro.org>

After an instruction is written into the xol area on the ARMv7
architecture, the kernel needs to flush the dcache and icache to bring
them in sync for the given range of addresses. A bare
'flush_dcache_page(page)' call is not enough: a stale instruction can
still sit in the icache for the xol area slot address.

Introduce a weak arch_uprobe_copy_ixol function that by default calls
__copy_to_user, which is sufficient for CPUs that can snoop instruction
writes from the dcache. On ARM, define a version that handles the xol
slot copy in an ARM-specific way: it calls __copy_to_user, which does
not suffer from dcache aliasing issues, and then flush_cache_user_range
to push the dcache out and invalidate the corresponding icache entries.

Note that in order to write into the uprobes xol area, VM_WRITE had to
be added to the xol area mapping.
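For readers unfamiliar with the weak-symbol override mechanism this
patch relies on, here is a minimal user-space sketch of the same
pattern. The file and function names below are illustrative only, not
from the kernel; the kernel's __weak macro expands to the same GCC
attribute used here.

/* generic.c: common code supplies a default implementation and
 * marks it weak.  Architectures that need nothing special use
 * it as-is. */
#include <stdio.h>

void __attribute__((weak)) arch_copy_hook(void)
{
	printf("default: plain copy, no cache maintenance\n");
}

int main(void)
{
	arch_copy_hook();
	return 0;
}

/* arm.c: an architecture that needs extra cache maintenance
 * provides a strong definition with the same name.  When this
 * file is linked in, the linker silently prefers it over the
 * weak default -- no #ifdef is required. */
#include <stdio.h>

void arch_copy_hook(void)
{
	printf("arm: copy, then flush dcache/icache\n");
}

Building 'gcc generic.c' prints the default; 'gcc generic.c arm.c'
prints the ARM variant. This is the same link-time interaction as
between the default in kernel/events/uprobes.c and the override in
arch/arm/kernel/uprobes.c below.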
Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
---
 arch/arm/kernel/uprobes.c |  8 ++++++++
 include/linux/uprobes.h   |  3 +++
 kernel/events/uprobes.c   | 28 +++++++++++++++++++---------
 3 files changed, 30 insertions(+), 9 deletions(-)

diff --git a/arch/arm/kernel/uprobes.c b/arch/arm/kernel/uprobes.c
index f9bacee..4836e54 100644
--- a/arch/arm/kernel/uprobes.c
+++ b/arch/arm/kernel/uprobes.c
@@ -113,6 +113,14 @@ int arch_uprobe_analyze_insn(struct arch_uprobe *auprobe, struct mm_struct *mm,
 	return 0;
 }
 
+void arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
+			   void *src, unsigned long len)
+{
+	if (!__copy_to_user((void *) vaddr, src, len))
+		flush_cache_user_range(vaddr, len);
+}
+
+
 int arch_uprobe_pre_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
 {
 	struct uprobe_task *utask = current->utask;
diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
index edff2b9..c52f827 100644
--- a/include/linux/uprobes.h
+++ b/include/linux/uprobes.h
@@ -32,6 +32,7 @@ struct vm_area_struct;
 struct mm_struct;
 struct inode;
 struct notifier_block;
+struct page;
 
 #define UPROBE_HANDLER_REMOVE		1
 #define UPROBE_HANDLER_MASK		1
@@ -127,6 +128,8 @@ extern int arch_uprobe_exception_notify(struct notifier_block *self, unsigned l
 extern void arch_uprobe_abort_xol(struct arch_uprobe *aup, struct pt_regs *regs);
 extern unsigned long arch_uretprobe_hijack_return_addr(unsigned long trampoline_vaddr, struct pt_regs *regs);
 extern bool __weak arch_uprobe_ignore(struct arch_uprobe *aup, struct pt_regs *regs);
+extern void __weak arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
+					 void *src, unsigned long len);
 #else /* !CONFIG_UPROBES */
 struct uprobes_state {
 };
diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 04709b6..1038e57 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -1149,7 +1149,7 @@ static int xol_add_vma(struct mm_struct *mm, struct xol_area *area)
 	}
 
 	ret = install_special_mapping(mm, area->vaddr, PAGE_SIZE,
-				VM_EXEC|VM_MAYEXEC|VM_DONTCOPY|VM_IO, &area->page);
+				VM_EXEC|VM_MAYEXEC|VM_DONTCOPY|VM_IO|VM_WRITE, &area->page);
 	if (ret)
 		goto fail;
 
@@ -1296,14 +1296,8 @@ static unsigned long xol_get_insn_slot(struct uprobe *uprobe)
 	if (unlikely(!xol_vaddr))
 		return 0;
 
-	/* Initialize the slot */
-	copy_to_page(area->page, xol_vaddr,
-			&uprobe->arch.ixol, sizeof(uprobe->arch.ixol));
-	/*
-	 * We probably need flush_icache_user_range() but it needs vma.
-	 * This should work on supported architectures too.
-	 */
-	flush_dcache_page(area->page);
+	arch_uprobe_copy_ixol(area->page, xol_vaddr,
+			      &uprobe->arch.ixol, sizeof(uprobe->arch.ixol));
 
 	return xol_vaddr;
 }
@@ -1346,6 +1340,22 @@ static void xol_free_insn_slot(struct task_struct *tsk)
 	}
 }
 
+void __weak arch_uprobe_copy_ixol(struct page *page, unsigned long vaddr,
+				  void *src, unsigned long len)
+{
+	/*
+	 * Note if CPU does not support instructions write snooping
+	 * from dcache it needs to define its own version of this
+	 * function that would take care of proper cache flushes.
+	 *
+	 * Nothing we can do if it fails, added if to make unused
+	 * result warning happy. If xol write failed because process
+	 * unmapped xol area by mistake, process will crash in some
+	 * other place.
+	 */
+	if (__copy_to_user((void *) vaddr, src, len));
+}
+
 /**
  * uprobe_get_swbp_addr - compute address of swbp given post-swbp regs
  * @regs:	Reflects the saved state of the task after it has hit a breakpoint
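As an aside, the dcache/icache synchronization hazard this patch
addresses is visible from user space too. Below is a minimal sketch,
assuming an ARM target, POSIX mmap, and a caller that supplies valid
machine code; run_copied_code is a hypothetical helper, not a kernel
or libc API. GCC's __builtin___clear_cache performs the user-space
equivalent of the dcache-clean plus icache-invalidate that
flush_cache_user_range does in the ARM hunk above.

#include <stddef.h>
#include <string.h>
#include <sys/mman.h>

typedef int (*code_fn)(void);

int run_copied_code(const void *code, size_t len)
{
	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE | PROT_EXEC,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	int ret;

	if (buf == MAP_FAILED)
		return -1;

	/* The copy lands in the dcache; the icache may still hold
	 * stale bytes for these addresses -- the same hazard the
	 * patch fixes for xol slots. */
	memcpy(buf, code, len);

	/* Clean the dcache and invalidate the icache for the range
	 * before jumping to it. */
	__builtin___clear_cache((char *)buf, (char *)buf + len);

	ret = ((code_fn)buf)();
	munmap(buf, len);
	return ret;
}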