From patchwork Mon Sep 7 15:36:42 2020
X-Patchwork-Submitter: Arnd Bergmann
X-Patchwork-Id: 11761283
From: Arnd Bergmann
To: Christoph Hellwig, Russell King
Cc: Alexander Viro, kernel@vger.kernel.org, linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linus.walleij@linaro.org,
	Arnd Bergmann, Andrew Morton, Daniel Borkmann, Alexei Starovoitov,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/9] mm/maccess: fix unaligned copy_{from,to}_kernel_nofault
Date: Mon, 7 Sep 2020 17:36:42 +0200
Message-Id: <20200907153701.2981205-2-arnd@arndb.de>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200907153701.2981205-1-arnd@arndb.de>
References: <20200907153701.2981205-1-arnd@arndb.de>
MIME-Version: 1.0

On machines such as ARMv5 that trap unaligned accesses, these two
functions can be slow when each access needs to be emulated, or they
might not work at all. Change them so that each loop is only used
when both the src and dst pointers are naturally aligned.
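The idea behind the new checks, shown here as a minimal standalone
sketch (not part of the patch; the helper name and sample addresses
are made up for illustration): ORing the source and destination
addresses sets a low bit whenever either pointer is misaligned, so a
single mask test per access width decides whether the wider copy loop
may run.

#include <stdint.h>
#include <stdio.h>

/* true when both addresses are aligned to (mask + 1) bytes */
static int both_aligned(uintptr_t dst, uintptr_t src, uintptr_t mask)
{
	/* a low bit in (dst | src) means at least one side is misaligned */
	return ((dst | src) & mask) == 0;
}

int main(void)
{
	uintptr_t a = 0x1000;	/* 8-byte aligned */
	uintptr_t b = 0x1004;	/* only 4-byte aligned */

	printf("u64 loop allowed: %d\n", both_aligned(a, a, 7));	/* 1 */
	printf("u64 loop allowed: %d\n", both_aligned(a, b, 7));	/* 0 */
	printf("u32 loop allowed: %d\n", both_aligned(a, b, 3));	/* 1 */
	return 0;
}

Combining the two pointers this way keeps the fast path to one branch
per access width instead of testing src and dst separately, and when
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS is set the tests compile away,
since align stays 0.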
Signed-off-by: Arnd Bergmann
Reviewed-by: Christoph Hellwig
Reviewed-by: Linus Walleij
---
 mm/maccess.c | 28 ++++++++++++++++++++++------
 1 file changed, 22 insertions(+), 6 deletions(-)

diff --git a/mm/maccess.c b/mm/maccess.c
index 3bd70405f2d8..d3f1a1f0b1c1 100644
--- a/mm/maccess.c
+++ b/mm/maccess.c
@@ -24,13 +24,21 @@ bool __weak copy_from_kernel_nofault_allowed(const void *unsafe_src,
 
 long copy_from_kernel_nofault(void *dst, const void *src, size_t size)
 {
+	unsigned long align = 0;
+
+	if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS))
+		align = (unsigned long)dst | (unsigned long)src;
+
 	if (!copy_from_kernel_nofault_allowed(src, size))
 		return -ERANGE;
 
 	pagefault_disable();
-	copy_from_kernel_nofault_loop(dst, src, size, u64, Efault);
-	copy_from_kernel_nofault_loop(dst, src, size, u32, Efault);
-	copy_from_kernel_nofault_loop(dst, src, size, u16, Efault);
+	if (!(align & 7))
+		copy_from_kernel_nofault_loop(dst, src, size, u64, Efault);
+	if (!(align & 3))
+		copy_from_kernel_nofault_loop(dst, src, size, u32, Efault);
+	if (!(align & 1))
+		copy_from_kernel_nofault_loop(dst, src, size, u16, Efault);
 	copy_from_kernel_nofault_loop(dst, src, size, u8, Efault);
 	pagefault_enable();
 	return 0;
@@ -50,10 +58,18 @@ EXPORT_SYMBOL_GPL(copy_from_kernel_nofault);
 
 long copy_to_kernel_nofault(void *dst, const void *src, size_t size)
 {
+	unsigned long align = 0;
+
+	if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS))
+		align = (unsigned long)dst | (unsigned long)src;
+
 	pagefault_disable();
-	copy_to_kernel_nofault_loop(dst, src, size, u64, Efault);
-	copy_to_kernel_nofault_loop(dst, src, size, u32, Efault);
-	copy_to_kernel_nofault_loop(dst, src, size, u16, Efault);
+	if (!(align & 7))
+		copy_to_kernel_nofault_loop(dst, src, size, u64, Efault);
+	if (!(align & 3))
+		copy_to_kernel_nofault_loop(dst, src, size, u32, Efault);
+	if (!(align & 1))
+		copy_to_kernel_nofault_loop(dst, src, size, u16, Efault);
 	copy_to_kernel_nofault_loop(dst, src, size, u8, Efault);
 	pagefault_enable();
 	return 0;