
parisc: Ensure volatile space register %sr1 is not clobbered

Message ID BLU0-SMTP2112E5305CF38E7EA12B08978B0@phx.gbl
State Superseded

Commit Message

John David Anglin June 25, 2013, 11:10 p.m. UTC
I still see the occasional random segv on rp3440.  Looking at one of these
(a code 15), it appeared that the problem must lie in the cache handling of
anonymous pages.  Reviewing this, I noticed that the space register %sr1
could be clobbered while we flush an anonymous page.

Register %sr1 is used for TLB purges in a couple of places.  These purges
are needed on PA8800 and PA8900 processors to ensure cache consistency of
flushed cache lines.

The solution here is simply to move the %sr1 load into the TLB lock region
that ensures only one purge executes at a time on SMP systems.  This was
already the case for one of the uses.  After a few days of operation, I
haven't had a random segv on my rp3440.
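
For illustration, the ordering the patch establishes in flush_tlb_page()
looks roughly like the sketch below.  It is only a sketch: the helper name
is made up, and it assumes purge_tlb_start()/purge_tlb_end() acquire the
TLB purge lock with interrupts disabled, so nothing can reload %sr1 between
the mtsp() and the purges.

	/* Sketch only; mtsp/pdtlb/pitlb and the purge_tlb_* helpers come
	 * from asm/tlbflush.h.  The function name is hypothetical. */
	static inline void flush_one_user_tlb_entry(unsigned long sid,
						    unsigned long addr)
	{
		unsigned long flags;

		purge_tlb_start(flags);	/* take TLB lock, irqs off */
		mtsp(sid, 1);		/* load %sr1 inside the locked region */
		pdtlb(addr);		/* data TLB purge, addressed via %sr1 */
		pitlb(addr);		/* instruction TLB purge, also via %sr1 */
		purge_tlb_end(flags);	/* drop lock, restore irqs */
	}

With the mtsp() outside the locked region, as before the patch, a flush of
an anonymous page could presumably reload %sr1 after the load but before
the purges, and pdtlb/pitlb would then operate on the wrong address space.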

Signed-off-by: John David Anglin <dave.anglin@bell.net>
---

--
John David Anglin	dave.anglin@bell.net

Patch

diff --git a/arch/parisc/include/asm/tlbflush.h b/arch/parisc/include/asm/tlbflush.h
index 5273da9..9cdbc74 100644
--- a/arch/parisc/include/asm/tlbflush.h
+++ b/arch/parisc/include/asm/tlbflush.h
@@ -68,8 +68,8 @@  static inline void flush_tlb_page(struct vm_area_struct *vma,
 	/* For one page, it's not worth testing the split_tlb variable */
 
 	mb();
-	mtsp(vma->vm_mm->context,1);
 	purge_tlb_start(flags);
+	mtsp(vma->vm_mm->context,1);
 	pdtlb(addr);
 	pitlb(addr);
 	purge_tlb_end(flags);
diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c
index 65fb4cb..2e65aa5 100644
--- a/arch/parisc/kernel/cache.c
+++ b/arch/parisc/kernel/cache.c
@@ -440,8 +440,8 @@  void __flush_tlb_range(unsigned long sid, unsigned long start,
 	else {
 		unsigned long flags;
 
-		mtsp(sid, 1);
 		purge_tlb_start(flags);
+		mtsp(sid, 1);
 		if (split_tlb) {
 			while (npages--) {
 				pdtlb(start);