From patchwork Mon Aug 25 05:24:56 2014
X-Patchwork-Submitter: Victor Kamensky
X-Patchwork-Id: 4772561
From: Victor Kamensky
To: linux-arm-kernel@lists.infradead.org, linux@arm.linux.org.uk,
 daniel.thompson@linaro.org
Cc: nicolas.pitre@linaro.org, Victor Kamensky, marc.zyngier@arm.com,
 will.deacon@arm.com, arnd.bergmann@linaro.org, christoffer.dall@linaro.org
Subject: [RFC PATCH V2] arm: fix get_user BE behavior for target variable with size of 8 bytes
Date: Sun, 24 Aug 2014 22:24:56 -0700
Message-Id: <1408944296-10032-1-git-send-email-victor.kamensky@linaro.org>

Commit e38361d 'ARM: 8091/2: add get_user() support for 8 byte types'
broke the V7 BE get_user call when the target variable is 64 bit in
size but '*ptr' is 32 bit or smaller.

e38361d changed the type of __r2 from 'register unsigned long' to
'register typeof(x) __r2 asm("r2")', i.e. before the change __r2 was
still 32 bit even when the target variable was 64 bit. After e38361d,
for a 64-bit target variable __r2 is 64 bit and occupies the two
registers r2 and r3. The issue in the BE case is that r3 holds the
least significant word of __r2 and r2 holds the most significant word,
but __get_user_4 still copies its result into r2 (the most significant
word of __r2). The subsequent code copies __r2 into x, so in the
situation described x picks up only garbage from the r3 register.

This was discovered during 3.17-rc1 V7 BE KVM testing. A simple test
case is below. Note that the LE case works because in LE r2 is still
the least significant word.

This is the second variant of the fix; the idea was suggested by
Daniel Thompson. In this variant, for the case of a BE image and a
target variable 8 bytes in size, special __get_user_64t_(124)
functions are introduced. They are similar to the corresponding
__get_user_(124) functions, but store the result in the r3 register
(the least significant word of the 64-bit __r2 in a BE image).

Changelog:
v2: this version: uses the special __get_user_64t_(124) functions for
    the BE case where __r2 is 64 bit
v1: the first variant, which used different types for __r2 depending
    on the branch in the switch statement, had the problem of
    generating multiple warnings for a single incorrect get_user usage
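The test case from the original posting is not preserved in this
archive; what follows is a minimal illustrative sketch of the failure
mode (the function name is made up, and this assumes a pre-fix V7 BE
kernel):

#include <linux/types.h>
#include <linux/uaccess.h>

/*
 * Illustrative sketch only: 64-bit target variable, 32-bit user
 * pointer.  sizeof(*uptr) == 4, so __get_user_check() takes the
 * "case 4:" path and __get_user_4 writes its result into r2.
 * Because 'val' is 64 bit, __r2 spans r2/r3; on BE the least
 * significant word is r3, so the low word of 'val' is left as
 * garbage.
 */
static int read_u32_into_u64(u32 __user *uptr, u64 *out)
{
	u64 val;

	if (get_user(val, uptr))	/* 64-bit x, 32-bit *ptr */
		return -EFAULT;

	*out = val;	/* low word is garbage on BE before the fix */
	return 0;
}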
Signed-off-by: Victor Kamensky
Reviewed-by: Daniel Thompson
---
 arch/arm/include/asm/uaccess.h | 36 +++++++++++++++++++++++++++++++++---
 arch/arm/lib/getuser.S         | 38 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 71 insertions(+), 3 deletions(-)

diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
index a4cd7af..58e53da 100644
--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -109,6 +109,9 @@ extern int __get_user_2(void *);
 extern int __get_user_4(void *);
 extern int __get_user_lo8(void *);
 extern int __get_user_8(void *);
+extern int __get_user_64t_1(void *);
+extern int __get_user_64t_2(void *);
+extern int __get_user_64t_4(void *);
 
 #define __GUP_CLOBBER_1 "lr", "cc"
 #ifdef CONFIG_CPU_USE_DOMAINS
@@ -137,6 +140,24 @@ extern int __get_user_8(void *);
 #define __get_user_xb __get_user_x
 #endif
 
+/*
+ * storing result into proper least significant word of 64bit target var,
+ * different only for big endian case where 64 bit __r2 lsw is r3:
+ */
+#ifdef __ARMEB__
+#define __get_user_x_64t(__r2, __p, __e, __l, __s)		\
+	   __asm__ __volatile__ (				\
+		__asmeq("%0", "r0") __asmeq("%1", "r2")		\
+		__asmeq("%3", "r1")				\
+		"bl	__get_user_64t_" #__s			\
+		: "=&r" (__e), "=r" (__r2)			\
+		: "0" (__p), "r" (__l)				\
+		: __GUP_CLOBBER_##__s)
+#else
+#define __get_user_x_64t __get_user_x
+#endif
+
+
 #define __get_user_check(x,p)						\
 	({								\
 		unsigned long __limit = current_thread_info()->addr_limit - 1; \
@@ -146,13 +167,22 @@ extern int __get_user_8(void *);
 		register int __e asm("r0");				\
 		switch (sizeof(*(__p))) {				\
 		case 1:							\
-			__get_user_x(__r2, __p, __e, __l, 1);		\
+			if (sizeof((x)) >= 8)				\
+				__get_user_x_64t(__r2, __p, __e, __l, 1); \
+			else						\
+				__get_user_x(__r2, __p, __e, __l, 1);	\
 			break;						\
 		case 2:							\
-			__get_user_x(__r2, __p, __e, __l, 2);		\
+			if (sizeof((x)) >= 8)				\
+				__get_user_x_64t(__r2, __p, __e, __l, 2); \
+			else						\
+				__get_user_x(__r2, __p, __e, __l, 2);	\
 			break;						\
 		case 4:							\
-			__get_user_x(__r2, __p, __e, __l, 4);		\
+			if (sizeof((x)) >= 8)				\
+				__get_user_x_64t(__r2, __p, __e, __l, 4); \
+			else						\
+				__get_user_x(__r2, __p, __e, __l, 4);	\
 			break;						\
 		case 8:							\
 			if (sizeof((x)) < 8)				\
diff --git a/arch/arm/lib/getuser.S b/arch/arm/lib/getuser.S
index 9386000..5025459 100644
--- a/arch/arm/lib/getuser.S
+++ b/arch/arm/lib/getuser.S
@@ -91,6 +91,40 @@ ENTRY(__get_user_lo8)
 	mov	r0, #0
 	ret	lr
 ENDPROC(__get_user_lo8)
+
+ENTRY(__get_user_64t_1)
+	check_uaccess r0, 1, r1, r2, __get_user_bad8
+8: TUSER(ldrb)	r3, [r0]
+	mov	r0, #0
+	ret	lr
+ENDPROC(__get_user_64t_1)
+
+ENTRY(__get_user_64t_2)
+	check_uaccess r0, 2, r1, r2, __get_user_bad8
+#ifdef CONFIG_CPU_USE_DOMAINS
+rb	.req	ip
+9:	ldrbt	r3, [r0], #1
+10:	ldrbt	rb, [r0], #0
+#else
+rb	.req	r0
+9:	ldrb	r3, [r0]
+10:	ldrb	rb, [r0, #1]
+#endif
+#ifndef __ARMEB__
+	orr	r3, r3, rb, lsl #8
+#else
+	orr	r3, rb, r3, lsl #8
+#endif
+	mov	r0, #0
+	ret	lr
+ENDPROC(__get_user_64t_2)
+
+ENTRY(__get_user_64t_4)
+	check_uaccess r0, 4, r1, r2, __get_user_bad8
+11: TUSER(ldr)	r3, [r0]
+	mov	r0, #0
+	ret	lr
+ENDPROC(__get_user_64t_4)
 #endif
 
 __get_user_bad8:
@@ -111,5 +145,9 @@ ENDPROC(__get_user_bad8)
 	.long	6b, __get_user_bad8
 #ifdef __ARMEB__
 	.long	7b, __get_user_bad
+	.long	8b, __get_user_bad8
+	.long	9b, __get_user_bad8
+	.long	10b, __get_user_bad8
+	.long	11b, __get_user_bad8
 #endif
 .popsection
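
For reference, a brief user-space sketch (illustrative only, not part
of the patch) of the word ordering that makes r3 the least significant
word of the 64-bit r2/r3 pair on a BE image. Per the AAPCS, a 64-bit
value in a core register pair follows memory order, so the first
register (r2) holds the high word on BE:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	union {
		uint64_t v;
		uint32_t w[2];	/* w[0] stands in for r2, w[1] for r3 */
	} u = { .v = 0x1122334455667788ULL };

	/* LE prints w[0]=0x55667788 (lsw first); BE prints
	 * w[0]=0x11223344, i.e. on BE the least significant word
	 * is the second one, which is why the fix stores into r3. */
	printf("w[0]=%#x w[1]=%#x\n",
	       (unsigned)u.w[0], (unsigned)u.w[1]);
	return 0;
}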