From patchwork Wed Jul 12 14:44:14 2017
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 9836861
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org, kernel-hardening@lists.openwall.com
Cc: mark.rutland@arm.com, labbott@fedoraproject.org, will.deacon@arm.com, dave.martin@arm.com, catalin.marinas@arm.com, Ard Biesheuvel
Date: Wed, 12 Jul 2017 15:44:14 +0100
Message-Id: <20170712144424.19528-2-ard.biesheuvel@linaro.org>
In-Reply-To: <20170712144424.19528-1-ard.biesheuvel@linaro.org>
References: <20170712144424.19528-1-ard.biesheuvel@linaro.org>
Subject: [kernel-hardening] [RFC PATCH 01/10] arm64/lib: copy_page: use consistent prefetch stride

The optional prefetch instructions in the copy_page() routine are
inconsistent: at the start of the function, two cache lines are prefetched
beyond the one being loaded in the first iteration, but in the loop, the
prefetch is one more line ahead. This appears to be unintentional, so
let's fix it.

While at it, fix the comment style and whitespace.
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/lib/copy_page.S | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/lib/copy_page.S b/arch/arm64/lib/copy_page.S
index c3cd65e31814..076c43715e64 100644
--- a/arch/arm64/lib/copy_page.S
+++ b/arch/arm64/lib/copy_page.S
@@ -30,9 +30,10 @@
  */
 ENTRY(copy_page)
 alternative_if ARM64_HAS_NO_HW_PREFETCH
-	# Prefetch two cache lines ahead.
-	prfm	pldl1strm, [x1, #128]
-	prfm	pldl1strm, [x1, #256]
+	// Prefetch three cache lines ahead.
+	prfm	pldl1strm, [x1, #128]
+	prfm	pldl1strm, [x1, #256]
+	prfm	pldl1strm, [x1, #384]
 alternative_else_nop_endif
 	ldp	x2, x3, [x1]
@@ -50,7 +51,7 @@ alternative_else_nop_endif
 	subs	x18, x18, #128
 alternative_if ARM64_HAS_NO_HW_PREFETCH
-	prfm    pldl1strm, [x1, #384]
+	prfm	pldl1strm, [x1, #384]
 alternative_else_nop_endif
 	stnp	x2, x3, [x0]
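For readers unfamiliar with the pattern, the idea behind the patch can be sketched in C (a hypothetical illustration, not kernel code; `copy_page_sketch` and the constants are assumptions for this example). The point is that the distance between the pointer being read and the address being prefetched should be the same fixed stride both in the preamble and inside the loop; before the patch, the preamble primed only two lines ahead while the loop ran three lines ahead.

```c
#include <string.h>
#include <assert.h>

#define PAGE_SZ 4096
#define CHUNK    128  /* bytes copied per loop iteration, as in copy_page() */

/*
 * Hypothetical sketch of the consistent-prefetch-stride pattern:
 * always prefetch exactly 384 bytes (three 128-byte strides) ahead
 * of the data about to be read, both before and inside the loop.
 */
static void copy_page_sketch(unsigned char *dst, const unsigned char *src)
{
	/* Preamble: prime lines at +128, +256 AND +384; the +0 line is
	 * about to be loaded anyway. (+384 is what the patch adds.) */
	__builtin_prefetch(src + 128, 0, 1);
	__builtin_prefetch(src + 256, 0, 1);
	__builtin_prefetch(src + 384, 0, 1);

	for (size_t off = 0; off < PAGE_SZ; off += CHUNK) {
		/* Loop body: the next iteration reads at off + CHUNK, so
		 * fetching (off + CHUNK) + 384 keeps the stride constant. */
		__builtin_prefetch(src + off + CHUNK + 384, 0, 1);
		memcpy(dst + off, src + off, CHUNK);
	}
}
```

Note the sketch deliberately over-allocates nothing: `__builtin_prefetch` never faults, so prefetching past the end of the page is harmless at the architectural level, mirroring how `prfm` behaves in the assembly version.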