From patchwork Fri Apr 11 14:26:51 2014
X-Patchwork-Submitter: Victor Kamensky
X-Patchwork-Id: 3968701
Date: Fri, 11 Apr 2014 07:26:51 -0700
Subject: Re: [RFC PATCH] uprobes: copy to user-space xol page with proper cache flushing
From: Victor Kamensky
To: David Miller
In-Reply-To: <20140411.003636.272212797007496394.davem@davemloft.net>
References: <20140409184507.GA1058@redhat.com> <5347655B.3080307@linaro.org> <20140411.003636.272212797007496394.davem@davemloft.net>
Cc: Jon Medhurst, "linaro-kernel@lists.linaro.org", Russell King - ARM Linux, ananth@in.ibm.com, Taras Kondratiuk, Oleg Nesterov, rabin@rab.in, Dave Long, Dave Martin, "linux-arm-kernel@lists.infradead.org"
On 10 April 2014 21:36, David Miller wrote:
> From: David Long
> Date: Thu, 10 Apr 2014 23:45:31 -0400
>
>> Replace memcpy and dcache flush in generic uprobes with a call to
>> copy_to_user_page(), which will do a proper flushing of kernel and
>> user cache. Also modify the implementation of copy_to_user_page
>> to assume a NULL vma pointer means the user icache corresponding
>> to this write is stale and needs to be flushed. Note that this patch
>> does not fix copy_to_user_page for the sh, alpha, sparc, or mips
>> architectures (which do not currently support uprobes).
>>
>> Signed-off-by: David A. Long
>
> You really need to pass the proper VMA down to the call site
> rather than pass NULL; that's extremely ugly and totally
> unnecessary.

Agreed that the VMA is really needed. Here is a variant that I tried
while waiting for Oleg's response:

From 4a6a9043e0910041dd8842835a528cbdc39fad34 Mon Sep 17 00:00:00 2001
From: Victor Kamensky
Date: Thu, 10 Apr 2014 17:06:39 -0700
Subject: [PATCH] uprobes: use copy_to_user_page function to copy instr to xol area

Use the copy_to_user_page function to copy the instruction into the xol
area. copy_to_user_page guarantees that all caches are correctly flushed
during such a write (including the icache, if needed). Because
copy_to_user_page needs a vm_area_struct, a vma field was added to
struct xol_area; it holds the cached vma value for the xol area. Using
copy_to_user_page also makes sure that we use the same code path that
the ptrace write code uses.
Signed-off-by: Victor Kamensky
---
 kernel/events/uprobes.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 04709b6..1ae4563 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -117,6 +117,7 @@ struct xol_area {
 	 * the vma go away, and we must handle that reasonably gracefully.
 	 */
 	unsigned long		vaddr;		/* Page(s) of instruction slots */
+	struct vm_area_struct	*vma;		/* VMA that holds above address */
 };
 
 /*
@@ -1150,6 +1151,7 @@ static int xol_add_vma(struct mm_struct *mm, struct xol_area *area)
 	ret = install_special_mapping(mm, area->vaddr, PAGE_SIZE,
 				VM_EXEC|VM_MAYEXEC|VM_DONTCOPY|VM_IO, &area->page);
+	area->vma = find_vma(mm, area->vaddr);
 	if (ret)
 		goto fail;
@@ -1287,6 +1289,7 @@ static unsigned long xol_get_insn_slot(struct uprobe *uprobe)
 {
 	struct xol_area *area;
 	unsigned long xol_vaddr;
+	void *xol_page_kaddr;
 
 	area = get_xol_area();
 	if (!area)
@@ -1297,8 +1300,11 @@ static unsigned long xol_get_insn_slot(struct uprobe *uprobe)
 		return 0;
 
 	/* Initialize the slot */
-	copy_to_page(area->page, xol_vaddr,
-			&uprobe->arch.ixol, sizeof(uprobe->arch.ixol));
+	xol_page_kaddr = kmap_atomic(area->page);
+	copy_to_user_page(area->vma, area->page, xol_vaddr,
+			  xol_page_kaddr + (xol_vaddr & ~PAGE_MASK),
+			  &uprobe->arch.ixol, sizeof(uprobe->arch.ixol));
+	kunmap_atomic(xol_page_kaddr);
 	/*
 	 * We probably need flush_icache_user_range() but it needs vma.
 	 * This should work on supported architectures too.