[v2] MIPS: Expand MIPS32 ASIDs to 64 bits

Message ID 20181204234333.21243-1-paul.burton@mips.com (mailing list archive)
State Accepted
Commit ff4dd232ec45a0e45ea69f28f069f2ab22b4908a
Headers show
Series [v2] MIPS: Expand MIPS32 ASIDs to 64 bits | expand

Commit Message

Paul Burton Dec. 4, 2018, 11:44 p.m. UTC
ASIDs have always been stored as unsigned longs, i.e. 32 bits wide on
MIPS32 kernels. This is problematic because it is feasible for the ASID
version to overflow & wrap around to zero.

We currently attempt to handle this overflow by simply setting the ASID
version to 1, using asid_first_version(), but we make no attempt to
account for the fact that there may be mm_structs with stale ASIDs that
have versions which we now reuse due to the overflow & wrap around.

Encountering this requires that:

  1) A struct mm_struct X is active on CPU A using ASID (V,n).

  2) That mm is not used on CPU A for the length of time that it takes
     for CPU A's asid_cache to overflow & wrap around to the same
     version V that the mm had in step 1. During this time tasks using
     the mm could either be sleeping or only scheduled on other CPUs.

  3) Some other mm Y becomes active on CPU A and is allocated the same
     ASID (V,n).

  4) mm X now becomes active on CPU A again, and now incorrectly has the
     same ASID as mm Y.

Where struct mm_struct ASIDs are represented above in the format
(version, EntryHi.ASID), and on a typical MIPS32 system version will be
24 bits wide & EntryHi.ASID will be 8 bits wide.

The length of time required in step 2 is highly dependent upon the CPU &
workload, but for a hypothetical 2GHz CPU running a workload which
generates a new ASID every 10000 cycles this period is around 248 days.
Due to this long period of time & the fact that tasks need to be
scheduled in just the right (or wrong, depending upon your inclination)
way, this is obviously a difficult bug to encounter, but it's entirely
possible, as evidenced by the reports referenced below.

In order to fix this, simply extend ASIDs to 64 bits even on MIPS32
builds. This will extend the period of time required for the
hypothetical system above to encounter the problem from 248 days to
around 3 trillion years, which feels safely outside the realm of
possibility.

The cost of this is slightly more generated code in some commonly
executed paths, but this is pretty minimal:

                         | Code Size Gain | Percentage
  -----------------------|----------------|-------------
    decstation_defconfig |           +270 | +0.00%
        32r2el_defconfig |           +652 | +0.01%
        32r6el_defconfig |          +1000 | +0.01%

I have been unable to measure any change in performance of the LMbench
lat_ctx or lat_proc tests resulting from the 64-bit ASIDs on either
32r2el_defconfig+interAptiv or 32r6el_defconfig+I6500 systems.

Signed-off-by: Paul Burton <paul.burton@mips.com>
Suggested-by: James Hogan <jhogan@kernel.org>
References: https://lore.kernel.org/linux-mips/80B78A8B8FEE6145A87579E8435D78C30205D5F3@fzex.ruijie.com.cn/
References: https://lore.kernel.org/linux-mips/1488684260-18867-1-git-send-email-jiwei.sun@windriver.com/
Cc: Jiwei Sun <jiwei.sun@windriver.com>
Cc: Yu Huabing <yhb@ruijie.com.cn>
Cc: stable@vger.kernel.org # 2.6.12+

---

Changes in v2:
- Drop the overflow asid_first_version() handling.
- Declare asid_first_version() & asid_version_mask() static inline now
  that they may not be used within the translation unit, in order to
  avoid unused-function warnings.

 arch/mips/include/asm/cpu-info.h    |  2 +-
 arch/mips/include/asm/mmu.h         |  2 +-
 arch/mips/include/asm/mmu_context.h | 10 ++++------
 arch/mips/mm/c-r3k.c                |  2 +-
 4 files changed, 7 insertions(+), 9 deletions(-)

Comments

Paul Burton Dec. 14, 2018, 7:03 p.m. UTC | #1
Hello,

Paul Burton wrote:
> ASIDs have always been stored as unsigned longs, ie. 32 bits on MIPS32
> kernels. This is problematic because it is feasible for the ASID version
> to overflow & wrap around to zero.
> [...]

Applied to mips-next.

Thanks,
    Paul

[ This message was auto-generated; if you believe anything is incorrect
  then please email paul.burton@mips.com to report it. ]

Patch

diff --git a/arch/mips/include/asm/cpu-info.h b/arch/mips/include/asm/cpu-info.h
index a41059d47d31..ed7ffe4e63a3 100644
--- a/arch/mips/include/asm/cpu-info.h
+++ b/arch/mips/include/asm/cpu-info.h
@@ -50,7 +50,7 @@  struct guest_info {
 #define MIPS_CACHE_PINDEX	0x00000020	/* Physically indexed cache */
 
 struct cpuinfo_mips {
-	unsigned long		asid_cache;
+	u64			asid_cache;
 #ifdef CONFIG_MIPS_ASID_BITS_VARIABLE
 	unsigned long		asid_mask;
 #endif
diff --git a/arch/mips/include/asm/mmu.h b/arch/mips/include/asm/mmu.h
index 0740be7d5d4a..24d6b42345fb 100644
--- a/arch/mips/include/asm/mmu.h
+++ b/arch/mips/include/asm/mmu.h
@@ -7,7 +7,7 @@ 
 #include <linux/wait.h>
 
 typedef struct {
-	unsigned long asid[NR_CPUS];
+	u64 asid[NR_CPUS];
 	void *vdso;
 	atomic_t fp_mode_switching;
 
diff --git a/arch/mips/include/asm/mmu_context.h b/arch/mips/include/asm/mmu_context.h
index 94414561de0e..a589585be21b 100644
--- a/arch/mips/include/asm/mmu_context.h
+++ b/arch/mips/include/asm/mmu_context.h
@@ -76,14 +76,14 @@  extern unsigned long pgd_current[];
  *  All unused by hardware upper bits will be considered
  *  as a software asid extension.
  */
-static unsigned long asid_version_mask(unsigned int cpu)
+static inline u64 asid_version_mask(unsigned int cpu)
 {
 	unsigned long asid_mask = cpu_asid_mask(&cpu_data[cpu]);
 
-	return ~(asid_mask | (asid_mask - 1));
+	return ~(u64)(asid_mask | (asid_mask - 1));
 }
 
-static unsigned long asid_first_version(unsigned int cpu)
+static inline u64 asid_first_version(unsigned int cpu)
 {
 	return ~asid_version_mask(cpu) + 1;
 }
@@ -102,14 +102,12 @@  static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
 static inline void
 get_new_mmu_context(struct mm_struct *mm, unsigned long cpu)
 {
-	unsigned long asid = asid_cache(cpu);
+	u64 asid = asid_cache(cpu);
 
 	if (!((asid += cpu_asid_inc()) & cpu_asid_mask(&cpu_data[cpu]))) {
 		if (cpu_has_vtag_icache)
 			flush_icache_all();
 		local_flush_tlb_all();	/* start new asid cycle */
-		if (!asid)		/* fix version if needed */
-			asid = asid_first_version(cpu);
 	}
 
 	cpu_context(cpu, mm) = asid_cache(cpu) = asid;
diff --git a/arch/mips/mm/c-r3k.c b/arch/mips/mm/c-r3k.c
index 3466fcdae0ca..01848cdf2074 100644
--- a/arch/mips/mm/c-r3k.c
+++ b/arch/mips/mm/c-r3k.c
@@ -245,7 +245,7 @@  static void r3k_flush_cache_page(struct vm_area_struct *vma,
 	pmd_t *pmdp;
 	pte_t *ptep;
 
-	pr_debug("cpage[%08lx,%08lx]\n",
+	pr_debug("cpage[%08llx,%08lx]\n",
 		 cpu_context(smp_processor_id(), mm), addr);
 
 	/* No ASID => no such page in the cache.  */