From patchwork Mon Jul 26 14:11:32 2021
X-Patchwork-Submitter: Arnd Bergmann
X-Patchwork-Id: 12399573
From: Arnd Bergmann
To: Russell King
Cc: Arnd Bergmann, linux-kernel@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-arch@vger.kernel.org,
    linux-mm@kvack.org, Alexander Viro, Linus Walleij, Christoph Hellwig
Subject: [PATCH v5 01/10] mm/maccess: fix unaligned copy_{from,to}_kernel_nofault
Date: Mon, 26 Jul 2021 16:11:32 +0200
Message-Id: <20210726141141.2839385-2-arnd@kernel.org>
In-Reply-To: <20210726141141.2839385-1-arnd@kernel.org>
References: <20210726141141.2839385-1-arnd@kernel.org>

From: Arnd Bergmann

On machines such as ARMv5 that trap unaligned accesses, these two
functions can be slow when each access needs to be emulated, or they
might not work at all. Change them so that each loop is only used
when both the src and dst pointers are naturally aligned.

Reviewed-by: Christoph Hellwig
Reviewed-by: Linus Walleij
Signed-off-by: Arnd Bergmann
---
 mm/maccess.c | 28 ++++++++++++++++++++++------
 1 file changed, 22 insertions(+), 6 deletions(-)

diff --git a/mm/maccess.c b/mm/maccess.c
index 3bd70405f2d8..d3f1a1f0b1c1 100644
--- a/mm/maccess.c
+++ b/mm/maccess.c
@@ -24,13 +24,21 @@ bool __weak copy_from_kernel_nofault_allowed(const void *unsafe_src,
 
 long copy_from_kernel_nofault(void *dst, const void *src, size_t size)
 {
+	unsigned long align = 0;
+
+	if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS))
+		align = (unsigned long)dst | (unsigned long)src;
+
 	if (!copy_from_kernel_nofault_allowed(src, size))
 		return -ERANGE;
 
 	pagefault_disable();
-	copy_from_kernel_nofault_loop(dst, src, size, u64, Efault);
-	copy_from_kernel_nofault_loop(dst, src, size, u32, Efault);
-	copy_from_kernel_nofault_loop(dst, src, size, u16, Efault);
+	if (!(align & 7))
+		copy_from_kernel_nofault_loop(dst, src, size, u64, Efault);
+	if (!(align & 3))
+		copy_from_kernel_nofault_loop(dst, src, size, u32, Efault);
+	if (!(align & 1))
+		copy_from_kernel_nofault_loop(dst, src, size, u16, Efault);
 	copy_from_kernel_nofault_loop(dst, src, size, u8, Efault);
 	pagefault_enable();
 	return 0;
@@ -50,10 +58,18 @@ EXPORT_SYMBOL_GPL(copy_from_kernel_nofault);
 
 long copy_to_kernel_nofault(void *dst, const void *src, size_t size)
 {
+	unsigned long align = 0;
+
+	if (!IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS))
+		align = (unsigned long)dst | (unsigned long)src;
+
 	pagefault_disable();
-	copy_to_kernel_nofault_loop(dst, src, size, u64, Efault);
-	copy_to_kernel_nofault_loop(dst, src, size, u32, Efault);
-	copy_to_kernel_nofault_loop(dst, src, size, u16, Efault);
+	if (!(align & 7))
+		copy_to_kernel_nofault_loop(dst, src, size, u64, Efault);
+	if (!(align & 3))
+		copy_to_kernel_nofault_loop(dst, src, size, u32, Efault);
+	if (!(align & 1))
+		copy_to_kernel_nofault_loop(dst, src, size, u16, Efault);
 	copy_to_kernel_nofault_loop(dst, src, size, u8, Efault);
 	pagefault_enable();
 	return 0;
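
For reference, the copy_{from,to}_kernel_nofault_loop() helpers used
above are macros in mm/maccess.c that repeatedly copy sizeof(type)-sized
chunks (via __get_kernel_nofault()/__put_kernel_nofault(), jumping to
the error label on a fault) while at least that many bytes remain. The
stand-alone user-space sketch below models the alignment cascade this
patch introduces; copy_loop() and copy_aligned() are illustrative
stand-ins for the kernel machinery, not real kernel interfaces:

	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	/* Simplified analogue of copy_*_kernel_nofault_loop(): copy
	 * sizeof(type)-sized chunks while enough bytes remain. */
	#define copy_loop(dst, src, len, type)			\
		while (len >= sizeof(type)) {			\
			*(type *)dst = *(const type *)src;	\
			dst += sizeof(type);			\
			src += sizeof(type);			\
			len -= sizeof(type);			\
		}

	static void copy_aligned(void *vdst, const void *vsrc, size_t size)
	{
		char *dst = vdst;
		const char *src = vsrc;
		/* A low bit is clear in 'align' only if it is clear in
		 * both pointers, so (align & 7) == 0 means both are
		 * 8-byte aligned, (align & 3) == 0 means both are
		 * 4-byte aligned, and so on down the cascade. */
		uintptr_t align = (uintptr_t)dst | (uintptr_t)src;

		if (!(align & 7))
			copy_loop(dst, src, size, uint64_t);
		if (!(align & 3))
			copy_loop(dst, src, size, uint32_t);
		if (!(align & 1))
			copy_loop(dst, src, size, uint16_t);
		copy_loop(dst, src, size, uint8_t);
	}

	int main(void)
	{
		uint64_t in[3] = { 1, 2, 3 }, out[3] = { 0 };

		copy_aligned(out, in, sizeof(in));
		printf("match: %d\n", !memcmp(in, out, sizeof(in)));
		return 0;
	}

Note that the tests are deliberately not chained with "else": once the
u64 loop has consumed every 8-byte chunk, fewer than 8 bytes remain,
and pointers that were 8-byte aligned are still 4- and 2-byte aligned,
so the narrower loops mop up the tail without ever generating an
unaligned access.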