Message ID | 1552431636-31511-5-git-send-email-fenghua.yu@intel.com (mailing list archive)
---|---
State | New, archived
Series | x86/split_lock: Enable #AC exception for split locked accesses
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 2bb3a648fc12..b92296595fbe 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -93,7 +93,9 @@ struct cpuinfo_x86 {
 	__u32			extended_cpuid_level;
 	/* Maximum supported CPUID level, -1=no CPUID: */
 	int			cpuid_level;
-	__u32			x86_capability[NCAPINTS + NBUGINTS];
+	/* Unsigned long alignment to avoid split lock in atomic bitmap ops */
+	__u32			x86_capability[NCAPINTS + NBUGINTS]
+				__aligned(sizeof(unsigned long));
 	char			x86_vendor_id[16];
 	char			x86_model_id[64];
 	/* in KB - valid for CPUS which support this call: */
set_cpu_cap() calls locked BTS and clear_cpu_cap() calls locked BTR to operate on the bitmap defined in x86_capability. Locked BTS/BTR accesses a single unsigned long location. In 64-bit mode, the location is at:

	base address of x86_capability + (bit offset in x86_capability / 64) * 8

Since the base address of x86_capability may not be aligned to unsigned long, the single unsigned long location may cross two cache lines, and accessing the location with locked BTS/BTR instructions will trigger #AC.

To fix the split lock issue, align x86_capability to unsigned long so that the location is always within one cache line.

Changing x86_capability[]'s type to unsigned long would also fix the issue because x86_capability[] would then be naturally aligned to unsigned long. But that needs additional code changes, so we choose the simpler solution of enforcing alignment with __aligned(sizeof(unsigned long)).

Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
---
 arch/x86/include/asm/processor.h | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
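To make the address arithmetic above concrete, here is a minimal userspace C sketch (not kernel code) that computes, for each capability bit, the address of the unsigned long a 64-bit locked bitop would touch, and checks whether that word straddles a 64-byte cache line. The struct layouts, NWORDS, CACHE_LINE, word_addr() and is_split() are illustrative stand-ins rather than kernel definitions; the second layout mirrors the __aligned(sizeof(unsigned long)) annotation added by this patch.

```c
#include <stdint.h>
#include <stdio.h>

#define CACHE_LINE	64
#define NWORDS		20	/* illustrative stand-in for NCAPINTS + NBUGINTS */

/* Layout before the patch: array is 4-byte aligned but not 8-byte aligned. */
struct cap_unaligned {
	uint32_t pad;
	uint32_t x86_capability[NWORDS];
} __attribute__((aligned(CACHE_LINE)));

/* Layout after the patch: array is forced to unsigned long alignment. */
struct cap_aligned {
	uint32_t pad;
	uint32_t x86_capability[NWORDS]
		__attribute__((aligned(sizeof(unsigned long))));
} __attribute__((aligned(CACHE_LINE)));

/*
 * Address of the unsigned long that a 64-bit BTS/BTR on bit 'nr' touches:
 * base address of the bitmap + (nr / 64) * 8.
 */
static uintptr_t word_addr(const void *bitmap, unsigned int nr)
{
	return (uintptr_t)bitmap + (nr / 64) * 8;
}

/* Does the 8-byte access starting at 'addr' span two cache lines? */
static int is_split(uintptr_t addr)
{
	return addr / CACHE_LINE !=
	       (addr + sizeof(uint64_t) - 1) / CACHE_LINE;
}

int main(void)
{
	static struct cap_unaligned u;
	static struct cap_aligned a;
	unsigned int nr, splits_u = 0, splits_a = 0;

	for (nr = 0; nr < NWORDS * 32; nr++) {
		splits_u += is_split(word_addr(u.x86_capability, nr));
		splits_a += is_split(word_addr(a.x86_capability, nr));
	}

	printf("4-byte-aligned bitmap: %u bits whose word crosses a cache line\n",
	       splits_u);
	printf("8-byte-aligned bitmap: %u bits whose word crosses a cache line\n",
	       splits_a);
	return 0;
}
```

With these layouts the first structure reports a run of bits whose containing word spans two cache lines (the case that would raise #AC under a locked bitop), while the structure carrying the unsigned long alignment reports none.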