Message ID | 6cd5dfca-a10c-0847-c084-a511ab2cbb1c@suse.com (mailing list archive)
---|---
State | New, archived |
Series | x86: introduce read_sregs() and elf_core_save_regs() adjustments
On 28/09/2020 13:05, Jan Beulich wrote:
> --- a/xen/include/asm-x86/regs.h
> +++ b/xen/include/asm-x86/regs.h
> @@ -15,4 +15,18 @@
>      (diff == 0); \
>  })
>
> +#define read_sreg(name) ({ \
> +    unsigned int __sel; \
> +    asm volatile ( "mov %%" STR(name) ",%0" : "=r" (__sel) ); \
> +    __sel; \
> +})
> +
> +static inline void read_sregs(struct cpu_user_regs *regs)
> +{
> +    asm volatile ( "mov %%ds, %0" : "=m" (regs->ds) );
> +    asm volatile ( "mov %%es, %0" : "=m" (regs->es) );
> +    asm volatile ( "mov %%fs, %0" : "=m" (regs->fs) );
> +    asm volatile ( "mov %%gs, %0" : "=m" (regs->gs) );

It occurs to me that reads don't need to be volatile.  There are no side
effects.

With that fixed, Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
On 28.09.2020 14:47, Andrew Cooper wrote:
> On 28/09/2020 13:05, Jan Beulich wrote:
>> --- a/xen/include/asm-x86/regs.h
>> +++ b/xen/include/asm-x86/regs.h
>> @@ -15,4 +15,18 @@
>>      (diff == 0); \
>>  })
>>
>> +#define read_sreg(name) ({ \
>> +    unsigned int __sel; \
>> +    asm volatile ( "mov %%" STR(name) ",%0" : "=r" (__sel) ); \
>> +    __sel; \
>> +})
>> +
>> +static inline void read_sregs(struct cpu_user_regs *regs)
>> +{
>> +    asm volatile ( "mov %%ds, %0" : "=m" (regs->ds) );
>> +    asm volatile ( "mov %%es, %0" : "=m" (regs->es) );
>> +    asm volatile ( "mov %%fs, %0" : "=m" (regs->fs) );
>> +    asm volatile ( "mov %%gs, %0" : "=m" (regs->gs) );
>
> It occurs to me that reads don't need to be volatile.  There are no side
> effects.

Oh yes, of course. Too mechanical moving / copying ...

> With that fixed, Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thanks, Jan
On 28.09.2020 14:47, Andrew Cooper wrote:
> On 28/09/2020 13:05, Jan Beulich wrote:
>> --- a/xen/include/asm-x86/regs.h
>> +++ b/xen/include/asm-x86/regs.h
>> @@ -15,4 +15,18 @@
>>      (diff == 0); \
>>  })
>>
>> +#define read_sreg(name) ({ \
>> +    unsigned int __sel; \
>> +    asm volatile ( "mov %%" STR(name) ",%0" : "=r" (__sel) ); \
>> +    __sel; \
>> +})
>> +
>> +static inline void read_sregs(struct cpu_user_regs *regs)
>> +{
>> +    asm volatile ( "mov %%ds, %0" : "=m" (regs->ds) );
>> +    asm volatile ( "mov %%es, %0" : "=m" (regs->es) );
>> +    asm volatile ( "mov %%fs, %0" : "=m" (regs->fs) );
>> +    asm volatile ( "mov %%gs, %0" : "=m" (regs->gs) );
>
> It occurs to me that reads don't need to be volatile.  There are no side
> effects.

I'll do the same for what patches 3 and 5 alter anyway, assuming
this won't invalidate your R-b there.

Jan
On 28/09/2020 15:49, Jan Beulich wrote:
> On 28.09.2020 14:47, Andrew Cooper wrote:
>> On 28/09/2020 13:05, Jan Beulich wrote:
>>> --- a/xen/include/asm-x86/regs.h
>>> +++ b/xen/include/asm-x86/regs.h
>>> @@ -15,4 +15,18 @@
>>>      (diff == 0); \
>>>  })
>>>
>>> +#define read_sreg(name) ({ \
>>> +    unsigned int __sel; \
>>> +    asm volatile ( "mov %%" STR(name) ",%0" : "=r" (__sel) ); \
>>> +    __sel; \
>>> +})
>>> +
>>> +static inline void read_sregs(struct cpu_user_regs *regs)
>>> +{
>>> +    asm volatile ( "mov %%ds, %0" : "=m" (regs->ds) );
>>> +    asm volatile ( "mov %%es, %0" : "=m" (regs->es) );
>>> +    asm volatile ( "mov %%fs, %0" : "=m" (regs->fs) );
>>> +    asm volatile ( "mov %%gs, %0" : "=m" (regs->gs) );
>> It occurs to me that reads don't need to be volatile.  There are no side
>> effects.
> I'll do the same for what patches 3 and 5 alter anyway, assuming
> this won't invalidate your R-b there.

3 is fine.  5 is a little more problematic, because there are serialising
side effects, but I suppose we really don't care here.

~Andrew
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1703,10 +1703,7 @@ static void save_segments(struct vcpu *v
 {
     struct cpu_user_regs *regs = &v->arch.user_regs;

-    regs->ds = read_sreg(ds);
-    regs->es = read_sreg(es);
-    regs->fs = read_sreg(fs);
-    regs->gs = read_sreg(gs);
+    read_sregs(regs);

     if ( !is_pv_32bit_vcpu(v) )
     {
--- a/xen/arch/x86/x86_64/traps.c
+++ b/xen/arch/x86/x86_64/traps.c
@@ -43,10 +43,7 @@ static void read_registers(struct cpu_us
     crs[2] = read_cr2();
     crs[3] = read_cr3();
     crs[4] = read_cr4();
-    regs->ds = read_sreg(ds);
-    regs->es = read_sreg(es);
-    regs->fs = read_sreg(fs);
-    regs->gs = read_sreg(gs);
+    read_sregs(regs);
     crs[5] = rdfsbase();
     crs[6] = rdgsbase();
     crs[7] = rdgsshadow();
--- a/xen/include/asm-x86/regs.h
+++ b/xen/include/asm-x86/regs.h
@@ -15,4 +15,18 @@
     (diff == 0); \
 })

+#define read_sreg(name) ({ \
+    unsigned int __sel; \
+    asm volatile ( "mov %%" STR(name) ",%0" : "=r" (__sel) ); \
+    __sel; \
+})
+
+static inline void read_sregs(struct cpu_user_regs *regs)
+{
+    asm volatile ( "mov %%ds, %0" : "=m" (regs->ds) );
+    asm volatile ( "mov %%es, %0" : "=m" (regs->es) );
+    asm volatile ( "mov %%fs, %0" : "=m" (regs->fs) );
+    asm volatile ( "mov %%gs, %0" : "=m" (regs->gs) );
+}
+
 #endif /* __X86_REGS_H__ */
--- a/xen/include/asm-x86/system.h
+++ b/xen/include/asm-x86/system.h
@@ -5,12 +5,6 @@
 #include <xen/bitops.h>
 #include <asm/processor.h>

-#define read_sreg(name) \
-({  unsigned int __sel; \
-    asm volatile ( "mov %%" STR(name) ",%0" : "=r" (__sel) ); \
-    __sel; \
-})
-
 static inline void wbinvd(void)
 {
     asm volatile ( "wbinvd" ::: "memory" );
When storing all (data) segment registers in one go, prefer writing the
selector values directly to memory (as opposed to going through a
general-purpose register via read_sreg()). Also move the single-register
variant into regs.h.

Signed-off-by: Jan Beulich <jbeulich@suse.com>