From patchwork Tue Apr 23 22:06:04 2013
X-Patchwork-Submitter: Helge Deller
X-Patchwork-Id: 2480891
Date: Wed, 24 Apr 2013 00:06:04 +0200
From: Helge Deller
To: linux-parisc@vger.kernel.org, James Bottomley, John David Anglin
Subject: [PATCH] parisc: use spin_lock_irqsave/spin_unlock_irqrestore for PTE updates
Message-ID: <20130423220604.GA7659@p100.box>

From: John David Anglin

User applications running on SMP kernels have long suffered from
instability and random segmentation faults.  This patch improves the
situation, although more work remains to be done.

One of the problems is that the various routines in pgtable.h which
update page table entries use different locking mechanisms, or no lock
at all (set_pte_at).  This change makes them all use the same lock,
pa_dbit_lock, which is already used for dirty bit updates in the
interruption code.  The patch also purges the TLB entries associated
with the PTE, so that stale values cannot be used after the page table
entry has been updated.  The UP and SMP code paths are now identical.

The change also includes a minor update to the purge_tlb_entries
function in cache.c to improve its efficiency.

Signed-off-by: John David Anglin
Cc: Helge Deller

---

diff --git a/arch/parisc/include/asm/pgtable.h b/arch/parisc/include/asm/pgtable.h
index 7df49fa..d5ad7a6 100644
--- a/arch/parisc/include/asm/pgtable.h
+++ b/arch/parisc/include/asm/pgtable.h
@@ -16,6 +16,8 @@
 #include <asm/processor.h>
 #include <asm/cache.h>
 
+extern spinlock_t pa_dbit_lock;
+
 /*
  * kern_addr_valid(ADDR) tests if ADDR is pointing to valid kernel
  * memory.  For the return value to be meaningful, ADDR must be >=
@@ -44,8 +46,11 @@ extern void purge_tlb_entries(struct mm_struct *, unsigned long);
 
 #define set_pte_at(mm, addr, ptep, pteval)			\
 	do {							\
+		unsigned long flags;				\
+		spin_lock_irqsave(&pa_dbit_lock, flags);	\
 		set_pte(ptep, pteval);				\
 		purge_tlb_entries(mm, addr);			\
+		spin_unlock_irqrestore(&pa_dbit_lock, flags);	\
 	} while (0)
 
 #endif /* !__ASSEMBLY__ */
@@ -435,48 +440,46 @@ extern void update_mmu_cache(struct vm_area_struct *, unsigned long, pte_t *);
 
 static inline int ptep_test_and_clear_young(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep)
 {
-#ifdef CONFIG_SMP
+	pte_t pte;
+	unsigned long flags;
+
 	if (!pte_young(*ptep))
 		return 0;
-	return test_and_clear_bit(xlate_pabit(_PAGE_ACCESSED_BIT), &pte_val(*ptep));
-#else
-	pte_t pte = *ptep;
-	if (!pte_young(pte))
+
+	spin_lock_irqsave(&pa_dbit_lock, flags);
+	pte = *ptep;
+	if (!pte_young(pte)) {
+		spin_unlock_irqrestore(&pa_dbit_lock, flags);
 		return 0;
-	set_pte_at(vma->vm_mm, addr, ptep, pte_mkold(pte));
+	}
+	set_pte(ptep, pte_mkold(pte));
+	purge_tlb_entries(vma->vm_mm, addr);
+	spin_unlock_irqrestore(&pa_dbit_lock, flags);
 	return 1;
-#endif
 }
 
-extern spinlock_t pa_dbit_lock;
-
 struct mm_struct;
 static inline pte_t ptep_get_and_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
 {
 	pte_t old_pte;
+	unsigned long flags;
 
-	spin_lock(&pa_dbit_lock);
+	spin_lock_irqsave(&pa_dbit_lock, flags);
 	old_pte = *ptep;
 	pte_clear(mm,addr,ptep);
-	spin_unlock(&pa_dbit_lock);
+	purge_tlb_entries(mm, addr);
+	spin_unlock_irqrestore(&pa_dbit_lock, flags);
 
 	return old_pte;
 }
 
 static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
 {
-#ifdef CONFIG_SMP
-	unsigned long new, old;
-
-	do {
-		old = pte_val(*ptep);
-		new = pte_val(pte_wrprotect(__pte (old)));
-	} while (cmpxchg((unsigned long *) ptep, old, new) != old);
+	unsigned long flags;
+	spin_lock_irqsave(&pa_dbit_lock, flags);
+	set_pte(ptep, pte_wrprotect(*ptep));
 	purge_tlb_entries(mm, addr);
-#else
-	pte_t old_pte = *ptep;
-	set_pte_at(mm, addr, ptep, pte_wrprotect(old_pte));
-#endif
+	spin_unlock_irqrestore(&pa_dbit_lock, flags);
 }
 
 #define pte_same(A,B)	(pte_val(A) == pte_val(B))
diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c
index 4b12890..83ded26 100644
--- a/arch/parisc/kernel/cache.c
+++ b/arch/parisc/kernel/cache.c
@@ -421,14 +421,11 @@ void purge_tlb_entries(struct mm_struct *mm, unsigned long addr)
 
 	/* Note: purge_tlb_entries can be called at startup with
 	   no context.  */
 
-	/* Disable preemption while we play with %sr1. */
-	preempt_disable();
-	mtsp(mm->context, 1);
 	purge_tlb_start(flags);
+	mtsp(mm->context, 1);
 	pdtlb(addr);
 	pitlb(addr);
 	purge_tlb_end(flags);
-	preempt_enable();
 }
 EXPORT_SYMBOL(purge_tlb_entries);
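
As a postscript for readers tracing the locking change: every PTE
updater above now follows the same ordering, namely take pa_dbit_lock
with interrupts disabled, write the PTE, purge the stale TLB entry,
and only then release the lock.  Below is a minimal user-space sketch
of that ordering, not kernel code: a pthread mutex stands in for the
kernel spinlock taken with spin_lock_irqsave(), the purge is a no-op
stub, and the names merely mirror the kernel identifiers.

	#include <pthread.h>

	typedef unsigned long pte_t;	/* stand-in for the kernel pte_t */

	/* Stand-in for the kernel's pa_dbit_lock; the real lock is a
	   spinlock acquired with interrupts disabled. */
	static pthread_mutex_t pa_dbit_lock = PTHREAD_MUTEX_INITIALIZER;

	/* No-op here; the kernel purges the data and instruction TLB
	   entries for addr (pdtlb/pitlb) under the same lock. */
	static void purge_tlb_entries(unsigned long addr)
	{
		(void)addr;
	}

	/* Lock, write the PTE, purge the stale translation, unlock.
	   Because the purge happens before the lock is dropped, no
	   other path that honors the lock can pair the new PTE with a
	   stale cached translation. */
	static void set_pte_at(pte_t *ptep, pte_t pteval, unsigned long addr)
	{
		pthread_mutex_lock(&pa_dbit_lock);
		*ptep = pteval;
		purge_tlb_entries(addr);
		pthread_mutex_unlock(&pa_dbit_lock);
	}

The same shape appears in ptep_test_and_clear_young, ptep_get_and_clear
and ptep_set_wrprotect in the patch; only the PTE modification between
lock and purge differs.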