| Message ID | 20200324151859.31068-5-xiaoyao.li@intel.com (mailing list archive) |
| --- | --- |
| State | New, archived |
| Series | x86/split_lock: Fix and virtualization of split lock detection |
Xiaoyao Li <xiaoyao.li@intel.com> writes:
>
> +bool split_lock_detect_on(void)
> +{
> +	return sld_state != sld_off;
> +}
> +EXPORT_SYMBOL_GPL(split_lock_detect_on);

1) You export this function here

2) You change that in one of the next patches to something else

3) According to patch 1/8 X86_FEATURE_SPLIT_LOCK_DETECT is not set when
   sld_state == sld_off. FYI, I did that on purpose.

AFAICT #1 and #2 are just historical leftovers of your previous patch
series and the extra step was just adding more changed lines per patch
for no value.

#3 changed the detection mechanism and at the same time the semantics of
the feature flag.

So what's the point of this exercise?

Thanks,

        tglx
On 3/25/2020 8:00 AM, Thomas Gleixner wrote:
> Xiaoyao Li <xiaoyao.li@intel.com> writes:
>>
>> +bool split_lock_detect_on(void)
>> +{
>> +	return sld_state != sld_off;
>> +}
>> +EXPORT_SYMBOL_GPL(split_lock_detect_on);
>
> 1) You export this function here
>
> 2) You change that in one of the next patches to something else
>
> 3) According to patch 1/8 X86_FEATURE_SPLIT_LOCK_DETECT is not set when
>    sld_state == sld_off. FYI, I did that on purpose.
>
> AFAICT #1 and #2 are just historical leftovers of your previous patch
> series and the extra step was just adding more changed lines per patch
> for no value.
>
> #3 changed the detection mechanism and at the same time the semantics of
> the feature flag.
>
> So what's the point of this exercise?

Right. In this series, setting the X86_FEATURE_SPLIT_LOCK_DETECT flag
means SLD is turned on, so split_lock_detect_on() needs to be removed.
Thanks for pointing this out.

> Thanks,
>
>         tglx
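[For illustration only, not committed code: a minimal sketch of how the
KVM side could compute the mask once split_lock_detect_on() is dropped
and the feature flag alone carries the on/off meaning. boot_cpu_has()
and cache_line_size() are existing kernel accessors;
emulator_page_line_mask() is a hypothetical helper name.]

	/*
	 * Hypothetical helper: test the CPU feature bit directly
	 * instead of exporting a helper from intel.c. With SLD on,
	 * shrink the mask from page granularity to cache-line
	 * granularity.
	 */
	static u64 emulator_page_line_mask(void)
	{
		if (boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT))
			return ~(u64)(cache_line_size() - 1);
		return PAGE_MASK;
	}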
diff --git a/arch/x86/include/asm/cpu.h b/arch/x86/include/asm/cpu.h
index ff567afa6ee1..d2071f6a35ac 100644
--- a/arch/x86/include/asm/cpu.h
+++ b/arch/x86/include/asm/cpu.h
@@ -44,6 +44,7 @@ unsigned int x86_stepping(unsigned int sig);
 extern void __init cpu_set_core_cap_bits(struct cpuinfo_x86 *c);
 extern void switch_to_sld(unsigned long tifn);
 extern bool handle_user_split_lock(unsigned long ip);
+extern bool split_lock_detect_on(void);
 #else
 static inline void __init cpu_set_core_cap_bits(struct cpuinfo_x86 *c) {}
 static inline void switch_to_sld(unsigned long tifn) {}
@@ -51,5 +52,6 @@ static inline bool handle_user_split_lock(unsigned long ip)
 {
 	return false;
 }
+static inline bool split_lock_detect_on(void) { return false; }
 #endif
 #endif /* _ASM_X86_CPU_H */
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index aed2b477e2ad..fd67be719284 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -1070,6 +1070,12 @@ static void split_lock_init(void)
 	sld_update_msr(sld_state != sld_off);
 }
 
+bool split_lock_detect_on(void)
+{
+	return sld_state != sld_off;
+}
+EXPORT_SYMBOL_GPL(split_lock_detect_on);
+
 bool handle_user_split_lock(unsigned long ip)
 {
 	if (sld_state == sld_fatal)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index ebd56aa10d9f..5ef57e3a315f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5831,6 +5831,7 @@ static int emulator_cmpxchg_emulated(struct x86_emulate_ctxt *ctxt,
 {
 	struct kvm_host_map map;
 	struct kvm_vcpu *vcpu = emul_to_vcpu(ctxt);
+	u64 page_line_mask = PAGE_MASK;
 	gpa_t gpa;
 	char *kaddr;
 	bool exchanged;
@@ -5845,7 +5846,11 @@ static int emulator_cmpxchg_emulated(struct x86_emulate_ctxt *ctxt,
 	    (gpa & PAGE_MASK) == APIC_DEFAULT_PHYS_BASE)
 		goto emul_write;
 
-	if (((gpa + bytes - 1) & PAGE_MASK) != (gpa & PAGE_MASK))
+	if (split_lock_detect_on())
+		page_line_mask = ~(cache_line_size() - 1);
+
+	/* when write spans page or spans cache when SLD enabled */
+	if (((gpa + bytes - 1) & page_line_mask) != (gpa & page_line_mask))
 		goto emul_write;
 
 	if (kvm_vcpu_map(vcpu, gpa_to_gfn(gpa), &map))
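[Side note: the boundary test in the last hunk is plain mask
arithmetic. The small user-space sketch below is hypothetical, with an
assumed 64-byte cache line and 4K pages (the kernel gets the real
values from cache_line_size() and PAGE_MASK); it shows why an 8-byte
access at offset 60 splits a cache line but not a page.]

	#include <stdint.h>
	#include <stdio.h>

	/* Assumed example values for the sketch. */
	#define EXAMPLE_PAGE_MASK (~((uint64_t)4096 - 1))
	#define EXAMPLE_LINE_SIZE 64

	/* True if the access [gpa, gpa + bytes) crosses a boundary of
	 * the region described by 'mask': the same test the hunk above
	 * applies with either the page mask or the cache-line mask. */
	static int crosses_boundary(uint64_t gpa, unsigned int bytes,
				    uint64_t mask)
	{
		return ((gpa + bytes - 1) & mask) != (gpa & mask);
	}

	int main(void)
	{
		uint64_t line_mask = ~((uint64_t)EXAMPLE_LINE_SIZE - 1);

		/* An 8-byte cmpxchg at offset 60 splits a 64-byte line... */
		printf("splits line: %d\n",
		       crosses_boundary(60, 8, line_mask));          /* 1 */
		/* ...but stays well inside the 4K page. */
		printf("splits page: %d\n",
		       crosses_boundary(60, 8, EXAMPLE_PAGE_MASK));  /* 0 */
		return 0;
	}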
If split lock detect is on (warn/fatal), the #AC handler calls die()
when a split lock happens in the kernel. A malicious guest can exploit
the KVM emulator to trigger a split lock #AC in the kernel [1]. So just
emulate the access as a write if it's a split-lock access (the same as
when an access spans a page) to prevent a malicious guest from
attacking the kernel.

More discussion can be found in [2][3].

[1] https://lore.kernel.org/lkml/8c5b11c9-58df-38e7-a514-dc12d687b198@redhat.com/
[2] https://lkml.kernel.org/r/20200131200134.GD18946@linux.intel.com
[3] https://lkml.kernel.org/r/20200227001117.GX9940@linux.intel.com

Suggested-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Xiaoyao Li <xiaoyao.li@intel.com>
---
 arch/x86/include/asm/cpu.h  | 2 ++
 arch/x86/kernel/cpu/intel.c | 6 ++++++
 arch/x86/kvm/x86.c          | 7 ++++++-
 3 files changed, 14 insertions(+), 1 deletion(-)
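[For context: a split-locked access is simply a locked operation that
straddles a cache-line boundary. The hypothetical guest-side snippet
below illustrates the access pattern; it assumes a 64-byte cache line,
and whether such an access actually reaches the KVM emulator depends on
how the guest memory is backed, so this is an illustration, not a
reproducer.]

	#include <stdint.h>

	int main(void)
	{
		/* 64-byte alignment assumed to match the cache-line size. */
		_Alignas(64) static uint8_t buf[128];

		/* A 4-byte value at offset 62 straddles two cache lines. */
		volatile uint32_t *p = (volatile uint32_t *)(buf + 62);

		/* GCC/Clang builtin emitting LOCK CMPXCHG -> split lock. */
		__sync_val_compare_and_swap(p, 0u, 1u);
		return 0;
	}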