From patchwork Fri Sep 18 12:46:16 2020
X-Patchwork-Submitter: Arnd Bergmann
X-Patchwork-Id: 11784869
From: Arnd Bergmann <arnd@arndb.de>
To: Christoph Hellwig, Russell King, Alexander Viro
Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-arch@vger.kernel.org, linux-mm@kvack.org, Arnd Bergmann,
 Christoph Hellwig
Subject: [PATCH v2 1/9] mm/maccess: fix unaligned copy_{from,to}_kernel_nofault
Date: Fri, 18 Sep 2020 14:46:16 +0200
Message-Id: <20200918124624.1469673-2-arnd@arndb.de>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200918124624.1469673-1-arnd@arndb.de>
References: <20200918124624.1469673-1-arnd@arndb.de>

On machines such as ARMv5 that trap unaligned accesses, these two
functions can be slow when each access needs to be emulated, or they
might not work at all. Change them so that each loop is only used
when both the src and dst pointers are naturally aligned.
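To illustrate the idea behind the check (a stand-alone userspace
sketch, not part of the patch; the helper name widest_aligned_access()
is made up for this example): OR-ing the two pointer values together
means a low bit set in either of them rules out the corresponding
access width for both.

	/* Sketch only: picks the widest access size that is naturally
	 * aligned for *both* pointers, mirroring the align logic in
	 * the patch below.
	 */
	#include <stdint.h>
	#include <stdio.h>

	static size_t widest_aligned_access(const void *dst, const void *src)
	{
		uintptr_t align = (uintptr_t)dst | (uintptr_t)src;

		if (!(align & 7))
			return 8;	/* both 8-byte aligned: u64 copies are safe */
		if (!(align & 3))
			return 4;	/* both 4-byte aligned: u32 */
		if (!(align & 1))
			return 2;	/* both 2-byte aligned: u16 */
		return 1;		/* fall back to byte copies */
	}

	int main(void)
	{
		/* force 8-byte alignment so the examples are deterministic */
		_Alignas(8) uint64_t buf[2];
		char *p = (char *)buf;

		printf("%zu\n", widest_aligned_access(p, p));         /* 8 */
		printf("%zu\n", widest_aligned_access(p + 4, p));     /* 4 */
		printf("%zu\n", widest_aligned_access(p + 4, p + 2)); /* 2 */
		printf("%zu\n", widest_aligned_access(p + 1, p));     /* 1 */
		return 0;
	}

Note that on architectures that select
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS, align stays 0 in the patch
below, so the new conditions are compile-time true and the generated
code should be unchanged.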
Reviewed-by: Christoph Hellwig
Signed-off-by: Arnd Bergmann
---
 mm/maccess.c | 28 ++++++++++++++++++++++------
 1 file changed, 22 insertions(+), 6 deletions(-)

diff --git a/mm/maccess.c b/mm/maccess.c
index 3bd70405f2d8..d3f1a1f0b1c1 100644
--- a/mm/maccess.c
+++ b/mm/maccess.c
@@ -24,13 +24,21 @@ bool __weak copy_from_kernel_nofault_allowed(const void *unsafe_src,
 
 long copy_from_kernel_nofault(void *dst, const void *src, size_t size)
 {
+	unsigned long align = 0;
+
+	if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS))
+		align = (unsigned long)dst | (unsigned long)src;
+
 	if (!copy_from_kernel_nofault_allowed(src, size))
 		return -ERANGE;
 
 	pagefault_disable();
-	copy_from_kernel_nofault_loop(dst, src, size, u64, Efault);
-	copy_from_kernel_nofault_loop(dst, src, size, u32, Efault);
-	copy_from_kernel_nofault_loop(dst, src, size, u16, Efault);
+	if (!(align & 7))
+		copy_from_kernel_nofault_loop(dst, src, size, u64, Efault);
+	if (!(align & 3))
+		copy_from_kernel_nofault_loop(dst, src, size, u32, Efault);
+	if (!(align & 1))
+		copy_from_kernel_nofault_loop(dst, src, size, u16, Efault);
 	copy_from_kernel_nofault_loop(dst, src, size, u8, Efault);
 	pagefault_enable();
 	return 0;
@@ -50,10 +58,18 @@ EXPORT_SYMBOL_GPL(copy_from_kernel_nofault);
 
 long copy_to_kernel_nofault(void *dst, const void *src, size_t size)
 {
+	unsigned long align = 0;
+
+	if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS))
+		align = (unsigned long)dst | (unsigned long)src;
+
 	pagefault_disable();
-	copy_to_kernel_nofault_loop(dst, src, size, u64, Efault);
-	copy_to_kernel_nofault_loop(dst, src, size, u32, Efault);
-	copy_to_kernel_nofault_loop(dst, src, size, u16, Efault);
+	if (!(align & 7))
+		copy_to_kernel_nofault_loop(dst, src, size, u64, Efault);
+	if (!(align & 3))
+		copy_to_kernel_nofault_loop(dst, src, size, u32, Efault);
+	if (!(align & 1))
+		copy_to_kernel_nofault_loop(dst, src, size, u16, Efault);
 	copy_to_kernel_nofault_loop(dst, src, size, u8, Efault);
 	pagefault_enable();
 	return 0;
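For context, a hypothetical caller (not part of this patch; the
helper name peek_word() is made up) showing how the function is meant
to be used — probing a kernel address that may be invalid, without
taking a fault:

	#include <linux/types.h>
	#include <linux/uaccess.h>

	static u32 peek_word(const void *addr)
	{
		u32 val;

		/* copy_from_kernel_nofault() returns 0 on success,
		 * -ERANGE if the address is not allowed, or -EFAULT
		 * if the access would fault.
		 */
		if (copy_from_kernel_nofault(&val, addr, sizeof(val)))
			return 0;
		return val;
	}

With the change above, a call like this with an odd addr falls
through to the u8 loop instead of attempting a u32 load that an
ARMv5-class CPU would have to trap and emulate.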