Message ID | 142668.1440884956@turing-police.cc.vt.edu (mailing list archive)
---|---
State | New, archived
On 2015-08-29 17:49, Valdis Kletnieks wrote:
> Compiler warning:
>
>   CC [M]  arch/x86/kvm/emulate.o
> arch/x86/kvm/emulate.c: In function "__do_insn_fetch_bytes":
> arch/x86/kvm/emulate.c:814:9: warning: "linear" may be used uninitialized in this function [-Wmaybe-uninitialized]
>
> GCC is smart enough to realize that the inlined __linearize may return before
> setting the value of linear, but not smart enough to realize the same
> X86EMUL_CONTINUE blocks actual use of the value. However, the value of
> 'linear' can only be set to one value, so hoisting the one line of code
> upwards makes GCC happy with the code.
>
> Reported-by: Aruna Hewapathirane <aruna.hewapathirane@gmail.com>
> Tested-by: Aruna Hewapathirane <aruna.hewapathirane@gmail.com>
> Signed-off-by: Valdis Kletnieks <valdis.kletnieks@vt.edu>
>
> --- a/arch/x86/kvm/emulate.c.dist	2015-08-11 14:10:05.366061993 -0400
> +++ b/arch/x86/kvm/emulate.c	2015-08-29 13:43:13.014163958 -0400
> @@ -650,6 +650,7 @@ static __always_inline int __linearize(s
>  	u16 sel;
>
>  	la = seg_base(ctxt, addr.seg) + addr.ea;
> +	*linear = la;
>  	*max_size = 0;
>  	switch (mode) {
>  	case X86EMUL_MODE_PROT64:
> @@ -693,7 +694,6 @@ static __always_inline int __linearize(s
>  	}
>  	if (insn_aligned(ctxt, size) && ((la & (size - 1)) != 0))
>  		return emulate_gp(ctxt, 0);
> -	*linear = la;
>  	return X86EMUL_CONTINUE;
>  bad:
>  	if (addr.seg == VCPU_SREG_SS)

Unfortunately this patch broke GNU/Hurd when running under KVM. It fails
to boot almost immediately. I haven't debugged it further, but it looks
like *linear should not always be written.

This can easily be reproduced by trying to boot the Debian Installer from
this ISO:

http://ftp.debian-ports.org/debian-cd/hurd-i386/debian-hurd-2015/debian-hurd-2015-i386-CD-1.iso

Aurelien
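For context, the warning Valdis was fixing comes from a common pattern: a callee may return early without writing its out-parameter, and the caller only reads that parameter when the callee reported success. Below is a minimal standalone sketch of that pattern (hypothetical names, not the kernel code; whether GCC actually warns here depends on the GCC version, optimization level, and inlining decisions):

```c
#include <stdio.h>

#define CONTINUE 0	/* stands in for X86EMUL_CONTINUE */
#define FAIL     1

/* May return before writing *out - like __linearize before the patch. */
static inline int compute(int x, int *out)
{
	if (x < 0)
		return FAIL;	/* early return: *out is left unwritten */
	*out = x * 2;
	return CONTINUE;
}

int demo(int x)
{
	int value;		/* GCC may warn: used uninitialized? */
	int rc = compute(x, &value);

	if (rc != CONTINUE)
		return -1;	/* this check blocks any use of 'value'... */
	return value;		/* ...but GCC cannot always prove that */
}

int main(void)
{
	printf("%d\n", demo(21));	/* prints 42 */
	return 0;
}
```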
On 2016-02-19 12:11, Aurelien Jarno wrote:
> On 2015-08-29 17:49, Valdis Kletnieks wrote:
> > Compiler warning:
> >
> >   CC [M]  arch/x86/kvm/emulate.o
> > arch/x86/kvm/emulate.c: In function "__do_insn_fetch_bytes":
> > arch/x86/kvm/emulate.c:814:9: warning: "linear" may be used uninitialized in this function [-Wmaybe-uninitialized]
> [...]
>
> Unfortunately this patch broke GNU/Hurd when running under KVM. It fails
> to boot almost immediately. I haven't debugged it further, but it looks
> like *linear should not always be written.

Actually the same patch with a bit more context shows the issue:

> diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
> index e7a4fde..b372a75 100644
> --- a/arch/x86/kvm/emulate.c
> +++ b/arch/x86/kvm/emulate.c
> @@ -647,12 +647,13 @@ static __always_inline int __linearize(struct x86_emulate_ctxt *ctxt,
>  	bool usable;
>  	ulong la;
>  	u32 lim;
>  	u16 sel;
>
>  	la = seg_base(ctxt, addr.seg) + addr.ea;
> +	*linear = la;

The assignment is moved here...

>  	*max_size = 0;
>  	switch (mode) {
>  	case X86EMUL_MODE_PROT64:
>  		if (is_noncanonical_address(la))
>  			goto bad;
>
> @@ -690,13 +691,12 @@ static __always_inline int __linearize(struct x86_emulate_ctxt *ctxt,
>  		}
>  		la &= (u32)-1;

... while the value of la might be modified in between.

>  		break;
>  	}
>  	if (insn_aligned(ctxt, size) && ((la & (size - 1)) != 0))
>  		return emulate_gp(ctxt, 0);
> -	*linear = la;
>  	return X86EMUL_CONTINUE;
>  bad:
>  	if (addr.seg == VCPU_SREG_SS)
>  		return emulate_ss(ctxt, 0);
>  	else
>  		return emulate_gp(ctxt, 0);

One possibility would be to assign *linear both at the beginning of the
function and at the original location; that should fix the bug while
still preventing GCC from issuing the warning.

Aurelien
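A minimal standalone model of that suggestion (a simplified sketch of __linearize's flow, not the kernel function; linearize_model and the test values are invented for illustration): writing *linear once at the top keeps every path initialized in GCC's eyes, and writing it again before the successful return ensures callers see the value after the 4G truncation.

```c
#include <stdint.h>
#include <stdio.h>

/*
 * Simplified model of __linearize: write *linear at the top (so GCC sees
 * it initialized on every path) and again before the successful return
 * (so callers get the value *after* any truncation).
 */
static int linearize_model(uint64_t seg_base, uint64_t ea, int prot64,
			   uint64_t *linear)
{
	uint64_t la = seg_base + ea;

	*linear = la;			/* silences -Wmaybe-uninitialized */

	if (!prot64)
		la &= (uint32_t)-1;	/* non-64-bit modes wrap at 4G */

	*linear = la;			/* the value callers must actually see */
	return 0;			/* stands in for X86EMUL_CONTINUE */
}

int main(void)
{
	uint64_t lin;

	/* base + offset crosses 4G, so the truncation matters */
	linearize_model(0xc0000000ULL, 0x50000000ULL, 0, &lin);
	printf("linear = %#llx\n", (unsigned long long)lin); /* 0x10000000 */
	return 0;
}
```

With only the hoisted assignment, the caller would have seen 0x110000000 here instead of the wrapped 0x10000000.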
On Fri, 19 Feb 2016 17:45:48 +0100, Aurelien Jarno said:
> Actually the same patch with a bit more context shows the issue:
>
> [...]
>
> >  		la &= (u32)-1;
>
> ... while the value of la might be modified in between.

(trying to reconstruct my thought process from 6 months ago. I remember
staring at that, and I convinced myself it was still OK to move the
assignment.)

la can get changed here - but there's 2 cases to consider. If it's
in a 32-bit kernel, anding with -1 is a no-op.

Now if we're on a 64-bit kernel, the 'and' clears the high 32 bits.

But under what conditions is 'la' a 64-bit quantity that has any bits
set in the high 32 bits (meaning it's a pointer to something over the
4G line) - but it's still valid to smash those bits?
On 19/02/2016 18:54, Valdis.Kletnieks@vt.edu wrote:
> la can get changed here - but there's 2 cases to consider. If it's
> in a 32-bit kernel, anding with -1 is a no-op.
>
> Now if we're on a 64-bit kernel, the 'and' clears the high 32 bits.
>
> But under what conditions is 'la' a 64-bit quantity that has any bits
> set in the high 32 bits (meaning it's a pointer to something over the
> 4G line) - but it's still valid to smash those bits?

That can happen for example if there is a non-zero segment base. Then
the linear address wraps at 4G.

Paolo
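A quick arithmetic illustration of that case, with made-up base and offset values: outside 64-bit mode, a non-zero segment base plus a large effective address can sum past 4G, and truncating to 32 bits wraps the access back into low memory - so the high bits both can be set and must be smashed.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* hypothetical: segment base near the top of the 32-bit space */
	uint64_t seg_base = 0xf0000000ULL;
	uint64_t ea       = 0x18000000ULL;

	uint64_t raw = seg_base + ea;	/* 0x108000000: bit 32 is set */
	uint32_t lin = (uint32_t)raw;	/* wraps to 0x08000000 */

	printf("raw sum = %#llx, wrapped linear = %#x\n",
	       (unsigned long long)raw, lin);
	return 0;
}
```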
On Fri, 19 Feb 2016 18:56:05 +0100, Paolo Bonzini said:
> On 19/02/2016 18:54, Valdis.Kletnieks@vt.edu wrote:
> > But under what conditions is 'la' a 64-bit quantity that has any bits
> > set in the high 32 bits (meaning it's a pointer to something over the
> > 4G line) - but it's still valid to smash those bits?
>
> That can happen for example if there is a non-zero segment base. Then
> the linear address wraps at 4G.

Gaah. Obviously, the concept that software could actually depend on a
segment base pointing at the 3G line (or whatever) to wrap around and be
used to address memory down in the first gig of RAM was too bizarre for
my brain to visualize. :)

The IBM S/360 with 24-bit addresses, and the S/370 with 24- or 31-bit
addresses, would allow some instructions (most famously MVCL, Move
Character Long) to start operating at the high end of memory and wrap
around to the beginning. The system documentation was pretty clear that
although this *worked*, it was probably not what you actually wanted to
do.... :)
--- a/arch/x86/kvm/emulate.c.dist	2015-08-11 14:10:05.366061993 -0400
+++ b/arch/x86/kvm/emulate.c	2015-08-29 13:43:13.014163958 -0400
@@ -650,6 +650,7 @@ static __always_inline int __linearize(s
 	u16 sel;

 	la = seg_base(ctxt, addr.seg) + addr.ea;
+	*linear = la;
 	*max_size = 0;
 	switch (mode) {
 	case X86EMUL_MODE_PROT64:
@@ -693,7 +694,6 @@ static __always_inline int __linearize(s
 	}
 	if (insn_aligned(ctxt, size) && ((la & (size - 1)) != 0))
 		return emulate_gp(ctxt, 0);
-	*linear = la;
 	return X86EMUL_CONTINUE;
 bad:
 	if (addr.seg == VCPU_SREG_SS)