
[v4,2/5] docs/about/deprecated: Deprecate the qemu-system-i386 binary

Message ID 20230306084658.29709-3-thuth@redhat.com (mailing list archive)
State New, archived
Series Deprecate system emulation support for 32-bit x86 and arm hosts

Commit Message

Thomas Huth March 6, 2023, 8:46 a.m. UTC
Aside from not supporting KVM on 32-bit hosts, the qemu-system-x86_64
binary is a proper superset of the qemu-system-i386 binary. With the
32-bit host support being deprecated, it is now also possible to
deprecate the qemu-system-i386 binary.

With regards to 32-bit KVM support in the x86 Linux kernel,
the developers confirmed that they do not need a recent
qemu-system-i386 binary here:

 https://lore.kernel.org/kvm/Y%2ffkTs5ajFy0hP1U@google.com/

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Reviewed-by: Wilfred Mallawa <wilfred.mallawa@wdc.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
---
 docs/about/deprecated.rst | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

Comments

Daniel P. Berrangé March 6, 2023, 9:27 a.m. UTC | #1
On Mon, Mar 06, 2023 at 09:46:55AM +0100, Thomas Huth wrote:
> Aside from not supporting KVM on 32-bit hosts, the qemu-system-x86_64
> binary is a proper superset of the qemu-system-i386 binary. With the
> 32-bit host support being deprecated, it is now also possible to
> deprecate the qemu-system-i386 binary.
> 
> With regards to 32-bit KVM support in the x86 Linux kernel,
> the developers confirmed that they do not need a recent
> qemu-system-i386 binary here:
> 
>  https://lore.kernel.org/kvm/Y%2ffkTs5ajFy0hP1U@google.com/
> 
> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
> Reviewed-by: Wilfred Mallawa <wilfred.mallawa@wdc.com>
> Signed-off-by: Thomas Huth <thuth@redhat.com>
> ---
>  docs/about/deprecated.rst | 14 ++++++++++++++
>  1 file changed, 14 insertions(+)
> 
> diff --git a/docs/about/deprecated.rst b/docs/about/deprecated.rst
> index 1ca9dc33d6..c4fcc6b33c 100644
> --- a/docs/about/deprecated.rst
> +++ b/docs/about/deprecated.rst
> @@ -34,6 +34,20 @@ deprecating the build option and no longer defend it in CI. The
>  ``--enable-gcov`` build option remains for analysis test case
>  coverage.
>  
> +``qemu-system-i386`` binary (since 8.0)
> +'''''''''''''''''''''''''''''''''''''''
> +
> +The ``qemu-system-i386`` binary was mainly useful for running with KVM
> +on 32-bit x86 hosts, but most Linux distributions already removed their
> +support for 32-bit x86 kernels, so hardly anybody still needs this. The
> +``qemu-system-x86_64`` binary is a proper superset and can be used to
> +run 32-bit guests by selecting a 32-bit CPU model, including KVM support
> +on x86_64 hosts. Thus users are recommended to reconfigure their systems
> +to use the ``qemu-system-x86_64`` binary instead. If a 32-bit CPU guest
> +environment should be enforced, you can switch off the "long mode" CPU
> +flag, e.g. with ``-cpu max,lm=off``.

I had the idea to check this today and this is not quite sufficient,
because we have code that changes the family/model/stepping for
'max' which is target dependent:

#ifdef TARGET_X86_64
    object_property_set_int(OBJECT(cpu), "family", 15, &error_abort);
    object_property_set_int(OBJECT(cpu), "model", 107, &error_abort);
    object_property_set_int(OBJECT(cpu), "stepping", 1, &error_abort);
#else
    object_property_set_int(OBJECT(cpu), "family", 6, &error_abort);
    object_property_set_int(OBJECT(cpu), "model", 6, &error_abort);
    object_property_set_int(OBJECT(cpu), "stepping", 3, &error_abort);
#endif

The former is a 64-bit AMD model and the latter is a 32-bit model.

Seems LLVM was sensitive to this distinction to some extent:

   https://gitlab.com/qemu-project/qemu/-/issues/191


A further difference is that qemu-system-i386 does not appear to enable
the 'syscall' flag, but I've not figured out where that difference is
coming from in the code.

With regards,
Daniel
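
For readers who want qemu-system-x86_64's 'max' CPU to look closer to what
qemu-system-i386 reports today, the family/model/stepping values and the
'syscall' flag mentioned above can also be overridden per-VM on the command
line. A rough, untested approximation (disk image name and memory size are
placeholders, exact CPUID equivalence not guaranteed):

  # Drop long mode and the 64-bit-only 'syscall' flag, and pin the 32-bit
  # family/model/stepping defaults quoted above.
  qemu-system-x86_64 \
      -cpu max,lm=off,syscall=off,family=6,model=6,stepping=3 \
      -m 1G -drive file=guest32.img,format=raw
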
Thomas Huth March 6, 2023, 9:54 a.m. UTC | #2
On 06/03/2023 10.27, Daniel P. Berrangé wrote:
> On Mon, Mar 06, 2023 at 09:46:55AM +0100, Thomas Huth wrote:
>> Aside from not supporting KVM on 32-bit hosts, the qemu-system-x86_64
>> binary is a proper superset of the qemu-system-i386 binary. With the
>> 32-bit host support being deprecated, it is now also possible to
>> deprecate the qemu-system-i386 binary.
>>
>> With regards to 32-bit KVM support in the x86 Linux kernel,
>> the developers confirmed that they do not need a recent
>> qemu-system-i386 binary here:
>>
>>   https://lore.kernel.org/kvm/Y%2ffkTs5ajFy0hP1U@google.com/
>>
>> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
>> Reviewed-by: Wilfred Mallawa <wilfred.mallawa@wdc.com>
>> Signed-off-by: Thomas Huth <thuth@redhat.com>
>> ---
>>   docs/about/deprecated.rst | 14 ++++++++++++++
>>   1 file changed, 14 insertions(+)
>>
>> diff --git a/docs/about/deprecated.rst b/docs/about/deprecated.rst
>> index 1ca9dc33d6..c4fcc6b33c 100644
>> --- a/docs/about/deprecated.rst
>> +++ b/docs/about/deprecated.rst
>> @@ -34,6 +34,20 @@ deprecating the build option and no longer defend it in CI. The
>>   ``--enable-gcov`` build option remains for analysis test case
>>   coverage.
>>   
>> +``qemu-system-i386`` binary (since 8.0)
>> +'''''''''''''''''''''''''''''''''''''''
>> +
>> +The ``qemu-system-i386`` binary was mainly useful for running with KVM
>> +on 32-bit x86 hosts, but most Linux distributions already removed their
>> +support for 32-bit x86 kernels, so hardly anybody still needs this. The
>> +``qemu-system-x86_64`` binary is a proper superset and can be used to
>> +run 32-bit guests by selecting a 32-bit CPU model, including KVM support
>> +on x86_64 hosts. Thus users are recommended to reconfigure their systems
>> +to use the ``qemu-system-x86_64`` binary instead. If a 32-bit CPU guest
>> +environment should be enforced, you can switch off the "long mode" CPU
>> +flag, e.g. with ``-cpu max,lm=off``.
> 
> I had the idea to check this today and this is not quite sufficient,
> because we have code that changes the family/model/stepping for
> 'max' which is target dependent:
> 
> #ifdef TARGET_X86_64
>      object_property_set_int(OBJECT(cpu), "family", 15, &error_abort);
>      object_property_set_int(OBJECT(cpu), "model", 107, &error_abort);
>      object_property_set_int(OBJECT(cpu), "stepping", 1, &error_abort);
> #else
>      object_property_set_int(OBJECT(cpu), "family", 6, &error_abort);
>      object_property_set_int(OBJECT(cpu), "model", 6, &error_abort);
>      object_property_set_int(OBJECT(cpu), "stepping", 3, &error_abort);
> #endif
> 
> The former is a 64-bit AMD model and the latter is a 32-bit model.
> 
> Seems LLVM was sensitive to this distinction to some extent:
> 
>     https://gitlab.com/qemu-project/qemu/-/issues/191
> 
> A further difference is that qemu-system-i386 does not appear to enable
> the 'syscall' flag, but I've not figured out where that difference is
> coming from in the code.

Ugh, ok. I gave it a quick try with a patch like this:

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -4344,15 +4344,15 @@ static void max_x86_cpu_initfn(Object *obj)
       */
      object_property_set_str(OBJECT(cpu), "vendor", CPUID_VENDOR_AMD,
                              &error_abort);
-#ifdef TARGET_X86_64
-    object_property_set_int(OBJECT(cpu), "family", 15, &error_abort);
-    object_property_set_int(OBJECT(cpu), "model", 107, &error_abort);
-    object_property_set_int(OBJECT(cpu), "stepping", 1, &error_abort);
-#else
-    object_property_set_int(OBJECT(cpu), "family", 6, &error_abort);
-    object_property_set_int(OBJECT(cpu), "model", 6, &error_abort);
-    object_property_set_int(OBJECT(cpu), "stepping", 3, &error_abort);
-#endif
+    if (object_property_get_bool(obj, "lm", &error_abort)) {
+        object_property_set_int(obj, "family", 15, &error_abort);
+        object_property_set_int(obj, "model", 107, &error_abort);
+        object_property_set_int(obj, "stepping", 1, &error_abort);
+    } else {
+        object_property_set_int(obj, "family", 6, &error_abort);
+        object_property_set_int(obj, "model", 6, &error_abort);
+        object_property_set_int(obj, "stepping", 3, &error_abort);
+    }
      object_property_set_str(OBJECT(cpu), "model-id",
                              "QEMU TCG CPU version " QEMU_HW_VERSION,
                              &error_abort);

... but it seems like the "lm" property is not initialized
there yet, so this does not work... :-/

Given that we have soft-freeze tomorrow, let's ignore this patch
for now and revisit this topic during the 8.1 cycle. But I'll
queue the other 4 patches to get some pressure out of our CI
during the freeze time.

  Thomas
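
One possible direction, shown here purely as an untested sketch (the helper
name is made up, and this is not the fix that was eventually applied): defer
the family/model/stepping choice to the realize path, after the feature words
have been expanded, and key it off the guest-visible LM bit instead of the
build target:

/* Untested sketch: pick the f/m/s defaults from the expanded feature set
 * instead of TARGET_X86_64.  Would need to run from the realize path, after
 * x86_cpu_expand_features() has filled in env->features[]. */
static void max_x86_cpu_pick_fms(X86CPU *cpu)
{
    CPUX86State *env = &cpu->env;
    bool lm = env->features[FEAT_8000_0001_EDX] & CPUID_EXT2_LM;

    object_property_set_int(OBJECT(cpu), "family",   lm ? 15 : 6,  &error_abort);
    object_property_set_int(OBJECT(cpu), "model",    lm ? 107 : 6, &error_abort);
    object_property_set_int(OBJECT(cpu), "stepping", lm ? 1 : 3,   &error_abort);
}
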
Daniel P. Berrangé March 6, 2023, 9:58 a.m. UTC | #3
On Mon, Mar 06, 2023 at 10:54:15AM +0100, Thomas Huth wrote:
> On 06/03/2023 10.27, Daniel P. Berrangé wrote:
> > On Mon, Mar 06, 2023 at 09:46:55AM +0100, Thomas Huth wrote:
> > > Aside from not supporting KVM on 32-bit hosts, the qemu-system-x86_64
> > > binary is a proper superset of the qemu-system-i386 binary. With the
> > > 32-bit host support being deprecated, it is now also possible to
> > > deprecate the qemu-system-i386 binary.
> > > 
> > > With regards to 32-bit KVM support in the x86 Linux kernel,
> > > the developers confirmed that they do not need a recent
> > > qemu-system-i386 binary here:
> > > 
> > >   https://lore.kernel.org/kvm/Y%2ffkTs5ajFy0hP1U@google.com/
> > > 
> > > Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
> > > Reviewed-by: Wilfred Mallawa <wilfred.mallawa@wdc.com>
> > > Signed-off-by: Thomas Huth <thuth@redhat.com>
> > > ---
> > >   docs/about/deprecated.rst | 14 ++++++++++++++
> > >   1 file changed, 14 insertions(+)
> > > 
> > > diff --git a/docs/about/deprecated.rst b/docs/about/deprecated.rst
> > > index 1ca9dc33d6..c4fcc6b33c 100644
> > > --- a/docs/about/deprecated.rst
> > > +++ b/docs/about/deprecated.rst
> > > @@ -34,6 +34,20 @@ deprecating the build option and no longer defend it in CI. The
> > >   ``--enable-gcov`` build option remains for analysis test case
> > >   coverage.
> > > +``qemu-system-i386`` binary (since 8.0)
> > > +'''''''''''''''''''''''''''''''''''''''
> > > +
> > > +The ``qemu-system-i386`` binary was mainly useful for running with KVM
> > > +on 32-bit x86 hosts, but most Linux distributions already removed their
> > > +support for 32-bit x86 kernels, so hardly anybody still needs this. The
> > > +``qemu-system-x86_64`` binary is a proper superset and can be used to
> > > +run 32-bit guests by selecting a 32-bit CPU model, including KVM support
> > > +on x86_64 hosts. Thus users are recommended to reconfigure their systems
> > > +to use the ``qemu-system-x86_64`` binary instead. If a 32-bit CPU guest
> > > +environment should be enforced, you can switch off the "long mode" CPU
> > > +flag, e.g. with ``-cpu max,lm=off``.
> > 
> > I had the idea to check this today and this is not quite sufficient,
> > because we have code that changes the family/model/stepping for
> > 'max' which is target dependent:
> > 
> > #ifdef TARGET_X86_64
> >      object_property_set_int(OBJECT(cpu), "family", 15, &error_abort);
> >      object_property_set_int(OBJECT(cpu), "model", 107, &error_abort);
> >      object_property_set_int(OBJECT(cpu), "stepping", 1, &error_abort);
> > #else
> >      object_property_set_int(OBJECT(cpu), "family", 6, &error_abort);
> >      object_property_set_int(OBJECT(cpu), "model", 6, &error_abort);
> >      object_property_set_int(OBJECT(cpu), "stepping", 3, &error_abort);
> > #endif
> > 
> > The former is a 64-bit AMD model and the latter is a 32-bit model.
> > 
> > Seems LLVM was sensitive to this distinction to some extent:
> > 
> >     https://gitlab.com/qemu-project/qemu/-/issues/191
> > 
> > A further difference is that qemu-system-i386 does not appear to enable
> > the 'syscall' flag, but I've not figured out where that difference is
> > coming from in the code.
> 
> Ugh, ok. I gave it a quick try with a patch like this:
> 
> diff --git a/target/i386/cpu.c b/target/i386/cpu.c
> --- a/target/i386/cpu.c
> +++ b/target/i386/cpu.c
> @@ -4344,15 +4344,15 @@ static void max_x86_cpu_initfn(Object *obj)
>       */
>      object_property_set_str(OBJECT(cpu), "vendor", CPUID_VENDOR_AMD,
>                              &error_abort);
> -#ifdef TARGET_X86_64
> -    object_property_set_int(OBJECT(cpu), "family", 15, &error_abort);
> -    object_property_set_int(OBJECT(cpu), "model", 107, &error_abort);
> -    object_property_set_int(OBJECT(cpu), "stepping", 1, &error_abort);
> -#else
> -    object_property_set_int(OBJECT(cpu), "family", 6, &error_abort);
> -    object_property_set_int(OBJECT(cpu), "model", 6, &error_abort);
> -    object_property_set_int(OBJECT(cpu), "stepping", 3, &error_abort);
> -#endif
> +    if (object_property_get_bool(obj, "lm", &error_abort)) {
> +        object_property_set_int(obj, "family", 15, &error_abort);
> +        object_property_set_int(obj, "model", 107, &error_abort);
> +        object_property_set_int(obj, "stepping", 1, &error_abort);
> +    } else {
> +        object_property_set_int(obj, "family", 6, &error_abort);
> +        object_property_set_int(obj, "model", 6, &error_abort);
> +        object_property_set_int(obj, "stepping", 3, &error_abort);
> +    }
>      object_property_set_str(OBJECT(cpu), "model-id",
>                              "QEMU TCG CPU version " QEMU_HW_VERSION,
>                              &error_abort);
> 
> ... but it seems like the "lm" property is not initialized
> there yet, so this does not work... :-/
> 
> Given that we have soft-freeze tomorrow, let's ignore this patch
> for now and revisit this topic during the 8.1 cycle. But I'll
> queue the other 4 patches to get some pressure out of our CI
> during the freeze time.

Yep, makes sense.

More generally the whole impl of the 'max' CPU feels somewhat
questionable even for qemu-system-i386. It exposes all features
that TCG supports. A large set of these features never existed
on *any* 32-bit silicon. Hands up who has seen 32-bit silicon
with AVX2 support ? From a correctness POV we should have
capped CPU features in some manner. Given the lack of interest
in 32-bit though, we've ignored the problem and it likely does
not affect apps anyway as they're not likely to be looking for
newish features.

With regards,
Daniel
Thomas Huth March 6, 2023, 1:48 p.m. UTC | #4
On 06/03/2023 10.27, Daniel P. Berrangé wrote:
> On Mon, Mar 06, 2023 at 09:46:55AM +0100, Thomas Huth wrote:
>> [...] If a 32-bit CPU guest
>> +environment should be enforced, you can switch off the "long mode" CPU
>> +flag, e.g. with ``-cpu max,lm=off``.
> 
> I had the idea to check this today and this is not quite sufficient,
[...]
> A further difference is that qemu-system-i386 does not appear to enable
> the 'syscall' flag, but I've not figured out where that difference is
> coming from in the code.

I think I just spotted this by accident in target/i386/cpu.c
around line 637:

#ifdef TARGET_X86_64
#define TCG_EXT2_X86_64_FEATURES (CPUID_EXT2_SYSCALL | CPUID_EXT2_LM)
#else
#define TCG_EXT2_X86_64_FEATURES 0
#endif

  Thomas
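
The resulting difference between the two binaries' expanded 'max' models can
be inspected from the outside with QMP's query-cpu-model-expansion command;
something along these lines, run once per binary (output omitted), should
show 'lm' and 'syscall' flipping:

  $ qemu-system-x86_64 -machine none -nodefaults -display none -qmp stdio
  {"execute": "qmp_capabilities"}
  {"execute": "query-cpu-model-expansion",
   "arguments": {"type": "full", "model": {"name": "max"}}}
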
Daniel P. Berrangé March 6, 2023, 2:06 p.m. UTC | #5
On Mon, Mar 06, 2023 at 02:48:16PM +0100, Thomas Huth wrote:
> On 06/03/2023 10.27, Daniel P. Berrangé wrote:
> > On Mon, Mar 06, 2023 at 09:46:55AM +0100, Thomas Huth wrote:
> > > [...] If a 32-bit CPU guest
> > > +environment should be enforced, you can switch off the "long mode" CPU
> > > +flag, e.g. with ``-cpu max,lm=off``.
> > 
> > I had the idea to check this today and this is not quite sufficient,
> [...]
> > A further difference is that qemu-system-i386 does not appear to enable
> > the 'syscall' flag, but I've not figured out where that difference is
> > coming from in the code.
> 
> I think I just spotted this by accident in target/i386/cpu.c
> around line 637:
> 
> #ifdef TARGET_X86_64
> #define TCG_EXT2_X86_64_FEATURES (CPUID_EXT2_SYSCALL | CPUID_EXT2_LM)
> #else
> #define TCG_EXT2_X86_64_FEATURES 0
> #endif

Hmm, so right now the difference between qemu-system-i386 and
qemu-system-x86_64 is based on compile time conditionals. So we
have the burden of building everything twice and also a burden
of testing everything twice.

If we eliminate qemu-system-i386 we get rid of our own burden,
but users/mgmt apps need to adapt to force qemu-system-x86_64
to present a 32-bit system.

What about if we had qemu-system-i386 be a hardlink to
qemu-system-x86_64, and then changed behaviour based off the
executed binary name ?

ie if running qemu-system-i386, we could present a 32-bit CPU by
default. We eliminate all of our double compilation burden still.
We still have extra testing burden, but it is in a fairly narrow
area, so it does not imply x2 the testing burden, just $small-percentage
extra testing ?  That would mean apps/users would not need to change
at all, but we still get most of the win we're after on the
QEMU side.

Essentially #ifdef TARGET_X86_64 would be changed to 'if (is_64bit) {...}'
in a handful of places, with 'bool is_64bit' initialized in main() from
argv[0] ?

With regards,
Daniel
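
As an illustration only (none of this code exists in QEMU), the argv[0]
dispatch described above could look roughly like the following, assuming a
single combined binary installed under both names:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Illustrative sketch: pick the 32-bit or 64-bit personality from the name
 * the combined binary was invoked as, e.g. via a qemu-system-i386 hardlink. */
static bool binary_wants_64bit(const char *argv0)
{
    const char *base = strrchr(argv0, '/');

    base = base ? base + 1 : argv0;
    /* Anything not invoked as qemu-system-i386 keeps the 64-bit behaviour. */
    return strstr(base, "qemu-system-i386") == NULL;
}

int main(int argc, char **argv)
{
    bool is_64bit = binary_wants_64bit(argv[0]);

    (void)argc;
    /* The former "#ifdef TARGET_X86_64" blocks would test is_64bit instead. */
    printf("64-bit personality: %s\n", is_64bit ? "yes" : "no");
    return 0;
}
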
Thomas Huth March 6, 2023, 2:18 p.m. UTC | #6
On 06/03/2023 15.06, Daniel P. Berrangé wrote:
> On Mon, Mar 06, 2023 at 02:48:16PM +0100, Thomas Huth wrote:
>> On 06/03/2023 10.27, Daniel P. Berrangé wrote:
>>> On Mon, Mar 06, 2023 at 09:46:55AM +0100, Thomas Huth wrote:
>>>> [...] If a 32-bit CPU guest
>>>> +environment should be enforced, you can switch off the "long mode" CPU
>>>> +flag, e.g. with ``-cpu max,lm=off``.
>>>
>>> I had the idea to check this today and this is not quite sufficient,
>> [...]
>>> A further difference is that qemu-system-i386 does not appear to enable
>>> the 'syscall' flag, but I've not figured out where that difference is
>>> coming from in the code.
>>
>> I think I just spotted this by accident in target/i386/cpu.c
>> around line 637:
>>
>> #ifdef TARGET_X86_64
>> #define TCG_EXT2_X86_64_FEATURES (CPUID_EXT2_SYSCALL | CPUID_EXT2_LM)
>> #else
>> #define TCG_EXT2_X86_64_FEATURES 0
>> #endif
> 
> Hmm, so right now the difference between qemu-system-i386 and
> qemu-system-x86_64 is based on compile time conditionals. So we
> have the burden of building everything twice and also a burden
> of testing everything twice.
> 
> If we eliminate qemu-system-i386 we get rid of our own burden,
> but users/mgmt apps need to adapt to force qemu-system-x86_64
> to present a 32-bit system.
> 
> What about if we had qemu-system-i386 be a hardlink to
> qemu-system-x86_64, and then changed behaviour based off the
> executed binary name ?

We could also simply provide a shell script that runs:

  qemu-system-x86_64 -cpu qemu32 $*

... that sounds like the simplest solution to me.

  Thomas
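
Spelled out, such a wrapper would be little more than the sketch below (note
"$@" rather than $*, so that arguments containing spaces survive). As the
follow-up below points out, though, a caller-supplied -cpu option would still
take precedence over the forced 32-bit model, since the last -cpu argument
given wins:

  #!/bin/sh
  # Hypothetical qemu-system-i386 compatibility wrapper: force a 32-bit
  # CPU model, then pass every original argument through unchanged.
  exec qemu-system-x86_64 -cpu qemu32 "$@"
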
Daniel P. Berrangé March 6, 2023, 2:25 p.m. UTC | #7
On Mon, Mar 06, 2023 at 03:18:23PM +0100, Thomas Huth wrote:
> On 06/03/2023 15.06, Daniel P. Berrangé wrote:
> > On Mon, Mar 06, 2023 at 02:48:16PM +0100, Thomas Huth wrote:
> > > On 06/03/2023 10.27, Daniel P. Berrangé wrote:
> > > > On Mon, Mar 06, 2023 at 09:46:55AM +0100, Thomas Huth wrote:
> > > > > [...] If a 32-bit CPU guest
> > > > > +environment should be enforced, you can switch off the "long mode" CPU
> > > > > +flag, e.g. with ``-cpu max,lm=off``.
> > > > 
> > > > I had the idea to check this today and this is not quite sufficient,
> > > [...]
> > > > A further difference is that qemu-system-i386 does not appear to enable
> > > > the 'syscall' flag, but I've not figured out where that difference is
> > > > coming from in the code.
> > > 
> > > I think I just spotted this by accident in target/i386/cpu.c
> > > around line 637:
> > > 
> > > #ifdef TARGET_X86_64
> > > #define TCG_EXT2_X86_64_FEATURES (CPUID_EXT2_SYSCALL | CPUID_EXT2_LM)
> > > #else
> > > #define TCG_EXT2_X86_64_FEATURES 0
> > > #endif
> > 
> > Hmm, so right now the difference between qemu-system-i386 and
> > qemu-system-x86_64 is based on compile time conditionals. So we
> > have the burden of building everything twice and also a burden
> > of testing everything twice.
> > 
> > If we eliminate qemu-system-i386 we get rid of our own burden,
> > but users/mgmt apps need to adapt to force qemu-system-x86_64
> > to present a 32-bit system.
> > 
> > What about if we had qemu-system-i386 be a hardlink to
> > qemu-system-x86_64, and then changed behaviour based off the
> > executed binary name ?
> 
> We could also simply provide a shell script that runs:
> 
>  qemu-system-x86_64 -cpu qemu32 $*
> 
> ... that sounds like the simplest solution to me.

That wouldn't do the right thing if the user ran 'qemu-system-i386 -cpu max'
because their '-cpu max' would override the -cpu arg in the shell script
that forced 32-bit mode.


With regards,
Daniel
Philippe Mathieu-Daudé March 6, 2023, 2:56 p.m. UTC | #8
On 6/3/23 15:06, Daniel P. Berrangé wrote:
> On Mon, Mar 06, 2023 at 02:48:16PM +0100, Thomas Huth wrote:
>> On 06/03/2023 10.27, Daniel P. Berrangé wrote:
>>> On Mon, Mar 06, 2023 at 09:46:55AM +0100, Thomas Huth wrote:
>>>> [...] If a 32-bit CPU guest
>>>> +environment should be enforced, you can switch off the "long mode" CPU
>>>> +flag, e.g. with ``-cpu max,lm=off``.
>>>
>>> I had the idea to check this today and this is not quite sufficient,
>> [...]
>>> A further difference is that qemu-system-i386 does not appear to enable
>>> the 'syscall' flag, but I've not figured out where that difference is
>>> coming from in the code.
>>
>> I think I just spotted this by accident in target/i386/cpu.c
>> around line 637:
>>
>> #ifdef TARGET_X86_64
>> #define TCG_EXT2_X86_64_FEATURES (CPUID_EXT2_SYSCALL | CPUID_EXT2_LM)
>> #else
>> #define TCG_EXT2_X86_64_FEATURES 0
>> #endif
> 
> Hmm, so right now the difference between qemu-system-i386 and
> qemu-system-x86_64 is based on compile time conditionals. So we
> have the burden of building everything twice and also a burden
> of testing everything twice.
> 
> If we eliminate qemu-system-i386 we get rid of our own burden,
> but users/mgmt apps need to adapt to force qemu-system-x86_64
> to present a 32-bit system.
> 
> What about if we had qemu-system-i386 be a hardlink to
> qemu-system-x86_64, and then changed behaviour based off the
> executed binary name ?
> 
> ie if running qemu-system-i386, we could present a 32-bit CPU by
> default. We eliminate all of our double compilation burden still.
> We still have extra testing burden, but it is in a fairly narrow
> area, so it does not imply x2 the testing burden, just $small-percentage
> extra testing ?  That would mean apps/users would not need to change
> at all, but we still get most of the win we're after on the
> QEMU side.
> 
> Essentially #ifdef TARGET_X86_64 would be changed to 'if (is_64bit) {...}'
> in a handful of places, with 'bool is_64bit' initialized in main() from
> argv[0] ?

That is what Alex suggested I do with the ARM binaries, as a prototype
for unifying the 32/64-bit binaries without breaking users' scripts.
Daniel P. Berrangé March 6, 2023, 2:58 p.m. UTC | #9
On Mon, Mar 06, 2023 at 02:25:46PM +0000, Daniel P. Berrangé wrote:
> On Mon, Mar 06, 2023 at 03:18:23PM +0100, Thomas Huth wrote:
> > On 06/03/2023 15.06, Daniel P. Berrangé wrote:
> > > On Mon, Mar 06, 2023 at 02:48:16PM +0100, Thomas Huth wrote:
> > > > On 06/03/2023 10.27, Daniel P. Berrangé wrote:
> > > > > On Mon, Mar 06, 2023 at 09:46:55AM +0100, Thomas Huth wrote:
> > > > > > [...] If a 32-bit CPU guest
> > > > > > +environment should be enforced, you can switch off the "long mode" CPU
> > > > > > +flag, e.g. with ``-cpu max,lm=off``.
> > > > > 
> > > > > I had the idea to check this today and this is not quite sufficient,
> > > > [...]
> > > > > A further difference is that qemu-system-i386 does not appear to enable
> > > > > the 'syscall' flag, but I've not figured out where that difference is
> > > > > coming from in the code.
> > > > 
> > > > I think I just spotted this by accident in target/i386/cpu.c
> > > > around line 637:
> > > > 
> > > > #ifdef TARGET_X86_64
> > > > #define TCG_EXT2_X86_64_FEATURES (CPUID_EXT2_SYSCALL | CPUID_EXT2_LM)
> > > > #else
> > > > #define TCG_EXT2_X86_64_FEATURES 0
> > > > #endif
> > > 
> > > Hmm, so right now the difference between qemu-system-i386 and
> > > qemu-system-x86_64 is based on compile time conditionals. So we
> > > have the burden of building everything twice and also a burden
> > > of testing everything twice.
> > > 
> > > If we eliminate qemu-system-i386 we get rid of our own burden,
> > > but users/mgmt apps need to adapt to force qemu-system-x86_64
> > > to present a 32-bit system.
> > > 
> > > What about if we had qemu-system-i386 be a hardlink to
> > > qemu-system-x86_64, and then changed behaviour based off the
> > > executed binary name ?
> > 
> > We could also simply provide a shell script that runs:
> > 
> >  qemu-system-x86_64 -cpu qemu32 $*
> > 
> > ... that sounds like the simplest solution to me.
> 
> That wouldn't do the right thing if the user ran 'qemu-system-i386 -cpu max'
> because their '-cpu max' would override the -cpu arg in the shell script
> that forced 32-bit mode.

It would also fail to work with SELinux, because policy restrictions
don't allow an intermediate wrapper script to exec binaries.

With regards,
Daniel

Patch

diff --git a/docs/about/deprecated.rst b/docs/about/deprecated.rst
index 1ca9dc33d6..c4fcc6b33c 100644
--- a/docs/about/deprecated.rst
+++ b/docs/about/deprecated.rst
@@ -34,6 +34,20 @@  deprecating the build option and no longer defend it in CI. The
 ``--enable-gcov`` build option remains for analysis test case
 coverage.
 
+``qemu-system-i386`` binary (since 8.0)
+'''''''''''''''''''''''''''''''''''''''
+
+The ``qemu-system-i386`` binary was mainly useful for running with KVM
+on 32-bit x86 hosts, but most Linux distributions already removed their
+support for 32-bit x86 kernels, so hardly anybody still needs this. The
+``qemu-system-x86_64`` binary is a proper superset and can be used to
+run 32-bit guests by selecting a 32-bit CPU model, including KVM support
+on x86_64 hosts. Thus users are recommended to reconfigure their systems
+to use the ``qemu-system-x86_64`` binary instead. If a 32-bit CPU guest
+environment should be enforced, you can switch off the "long mode" CPU
+flag, e.g. with ``-cpu max,lm=off``.
+
+
 System emulator command line arguments
 --------------------------------------
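
As a concrete illustration of the recommendation above (disk image name and
memory size are placeholders):

  # 32-bit guest on the 64-bit binary, either via a 32-bit CPU model ...
  qemu-system-x86_64 -M pc -cpu qemu32 -m 1G -drive file=guest32.img,format=raw

  # ... or by masking long mode off a larger model:
  qemu-system-x86_64 -M pc -cpu max,lm=off -m 1G -drive file=guest32.img,format=raw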