From patchwork Sun Jun  2 16:21:48 2013
X-Patchwork-Submitter: John David Anglin
X-Patchwork-Id: 2649841
From: John David Anglin
To: linux-parisc List
Cc: Helge Deller, "James E.J. Bottomley"
Subject: [PATCH] parisc: Use unshadowed index register for flush instructions
 in flush_dcache_page_asm and flush_icache_page_asm
Date: Sun, 2 Jun 2013 12:21:48 -0400
List-ID: <linux-parisc.vger.kernel.org>

The comment at the start of pacache.S states that the base and index
registers used by the fdc, fic, and pdc instructions should not be
shadowed registers.  Although this is probably unnecessary for the
tmpalias flushes, there is also no reason not to comply.  The same index
register (%r23) is used as in the other routines.
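For reference, %r1 is one of the seven PA-RISC shadow registers (GR1, GR8,
GR9, GR16, GR17, GR24 and GR25), while %r23 is not.  The fragment below is
a simplified, illustrative sketch of the pattern being changed, not an
exact excerpt from pacache.S:

	/* before: stride loaded into and indexed through %r1 (shadowed) */
	ldil	L%dcache_stride, %r1
	ldw	R%dcache_stride(%r1), %r1
1:	fdc,m	%r1(%r28)

	/* after: stride held in %r23, which is not a shadowed register */
	ldil	L%dcache_stride, %r1
	ldw	R%dcache_stride(%r1), %r23
1:	fdc,m	%r23(%r28)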
Signed-off-by: John David Anglin <dave.anglin@bell.net>
---
--
John David Anglin	dave.anglin@bell.net

diff --git a/arch/parisc/kernel/pacache.S b/arch/parisc/kernel/pacache.S
index 36d7f40..5f98abc 100644
--- a/arch/parisc/kernel/pacache.S
+++ b/arch/parisc/kernel/pacache.S
@@ -860,7 +860,7 @@ ENTRY(flush_dcache_page_asm)
 #endif
 
 	ldil		L%dcache_stride, %r1
-	ldw		R%dcache_stride(%r1), %r1
+	ldw		R%dcache_stride(%r1), %r23
 
 #ifdef CONFIG_64BIT
 	depdi,z		1, 63-PAGE_SHIFT,1, %r25
@@ -868,26 +868,26 @@ ENTRY(flush_dcache_page_asm)
 	depwi,z		1, 31-PAGE_SHIFT,1, %r25
 #endif
 	add		%r28, %r25, %r25
-	sub		%r25, %r1, %r25
-
-
-1:	fdc,m		%r1(%r28)
-	fdc,m		%r1(%r28)
-	fdc,m		%r1(%r28)
-	fdc,m		%r1(%r28)
-	fdc,m		%r1(%r28)
-	fdc,m		%r1(%r28)
-	fdc,m		%r1(%r28)
-	fdc,m		%r1(%r28)
-	fdc,m		%r1(%r28)
-	fdc,m		%r1(%r28)
-	fdc,m		%r1(%r28)
-	fdc,m		%r1(%r28)
-	fdc,m		%r1(%r28)
-	fdc,m		%r1(%r28)
-	fdc,m		%r1(%r28)
+	sub		%r25, %r23, %r25
+
+
+1:	fdc,m		%r23(%r28)
+	fdc,m		%r23(%r28)
+	fdc,m		%r23(%r28)
+	fdc,m		%r23(%r28)
+	fdc,m		%r23(%r28)
+	fdc,m		%r23(%r28)
+	fdc,m		%r23(%r28)
+	fdc,m		%r23(%r28)
+	fdc,m		%r23(%r28)
+	fdc,m		%r23(%r28)
+	fdc,m		%r23(%r28)
+	fdc,m		%r23(%r28)
+	fdc,m		%r23(%r28)
+	fdc,m		%r23(%r28)
+	fdc,m		%r23(%r28)
 	cmpb,COND(<<)	%r28, %r25,1b
-	fdc,m		%r1(%r28)
+	fdc,m		%r23(%r28)
 
 	sync
@@ -936,7 +936,7 @@ ENTRY(flush_icache_page_asm)
 #endif
 
 	ldil		L%icache_stride, %r1
-	ldw		R%icache_stride(%r1), %r1
+	ldw		R%icache_stride(%r1), %r23
 
 #ifdef CONFIG_64BIT
 	depdi,z		1, 63-PAGE_SHIFT,1, %r25
@@ -944,28 +944,28 @@ ENTRY(flush_icache_page_asm)
 	depwi,z		1, 31-PAGE_SHIFT,1, %r25
 #endif
 	add		%r28, %r25, %r25
-	sub		%r25, %r1, %r25
+	sub		%r25, %r23, %r25
 
 
 	/* fic only has the type 26 form on PA1.1, requiring an
 	 * explicit space specification, so use %sr4 */
-1:	fic,m		%r1(%sr4,%r28)
-	fic,m		%r1(%sr4,%r28)
-	fic,m		%r1(%sr4,%r28)
-	fic,m		%r1(%sr4,%r28)
-	fic,m		%r1(%sr4,%r28)
-	fic,m		%r1(%sr4,%r28)
-	fic,m		%r1(%sr4,%r28)
-	fic,m		%r1(%sr4,%r28)
-	fic,m		%r1(%sr4,%r28)
-	fic,m		%r1(%sr4,%r28)
-	fic,m		%r1(%sr4,%r28)
-	fic,m		%r1(%sr4,%r28)
-	fic,m		%r1(%sr4,%r28)
-	fic,m		%r1(%sr4,%r28)
-	fic,m		%r1(%sr4,%r28)
+1:	fic,m		%r23(%sr4,%r28)
+	fic,m		%r23(%sr4,%r28)
+	fic,m		%r23(%sr4,%r28)
+	fic,m		%r23(%sr4,%r28)
+	fic,m		%r23(%sr4,%r28)
+	fic,m		%r23(%sr4,%r28)
+	fic,m		%r23(%sr4,%r28)
+	fic,m		%r23(%sr4,%r28)
+	fic,m		%r23(%sr4,%r28)
+	fic,m		%r23(%sr4,%r28)
+	fic,m		%r23(%sr4,%r28)
+	fic,m		%r23(%sr4,%r28)
+	fic,m		%r23(%sr4,%r28)
+	fic,m		%r23(%sr4,%r28)
+	fic,m		%r23(%sr4,%r28)
 	cmpb,COND(<<)	%r28, %r25,1b
-	fic,m		%r1(%sr4,%r28)
+	fic,m		%r23(%sr4,%r28)
 
 	sync