
"xl vcpu-set" not persistent across reboot?

Message ID 20160603163519.GO14588@citrix.com (mailing list archive)
State New, archived

Commit Message

Wei Liu June 3, 2016, 4:35 p.m. UTC
On Fri, Jun 03, 2016 at 08:42:11AM -0600, Jan Beulich wrote:
> >>> On 03.06.16 at 15:41, <wei.liu2@citrix.com> wrote:
> > On Fri, Jun 03, 2016 at 02:29:12AM -0600, Jan Beulich wrote:
> >> Ian, Wei,
> >> 
> >> is it intentional that rebooting a (HVM) guest after having altered its
> >> vCPU count will reset it back to the vCPU count it was originally
> >> started with? That doesn't seem very natural - if one hotplugs a CPU
> >> into a physical system and then reboots, that CPU will remain there.
> >> 
> > 
> > This is probably an oversight.
> > 
> > I've added this to my list of things to look at after the release.
> 
> Thanks!
> 

I got a patch ready.  But QEMU upstream refuses to start on the receiving end
with the following error message:

qemu-system-i386: Unknown savevm section or instance 'cpu_common' 1
qemu-system-i386: load of migration failed: Invalid argument

With QEMU traditional HVM guest and PV guest, the guest works fine -- up
and running with all hot plugged cpus available.

So I think the relevant libxl information is transmitted but we also
need to fix QEMU upstream. But that's a separate issue.

Wei.

---8<---
From 790ff77c6307b341dec0b4cc5e2d394e42f82e7c Mon Sep 17 00:00:00 2001
From: Wei Liu <wei.liu2@citrix.com>
Date: Fri, 3 Jun 2016 16:38:32 +0100
Subject: [PATCH] libxl: update vcpus bitmap in retrieved guest config

... because the available vcpu bitmap can change during domain life time
due to cpu hotplug and unplug.

Reported-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Wei Liu <wei.liu2@citrix.com>
---
 tools/libxl/libxl.c | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

Comments

Jan Beulich June 6, 2016, 8:58 a.m. UTC | #1
>>> On 03.06.16 at 18:35, <wei.liu2@citrix.com> wrote:
> I got a patch ready.  But QEMU upstream refuses to start on the receiving end
> with the following error message:
> 
> qemu-system-i386: Unknown savevm section or instance 'cpu_common' 1
> qemu-system-i386: load of migration failed: Invalid argument
> 
> With QEMU traditional HVM guest and PV guest, the guest works fine -- up
> and running with all hot plugged cpus available.
> 
> So I think the relevant libxl information is transmitted but we also
> need to fix QEMU upstream. But that's a separate issue.

Stefano, Anthony,

any thoughts here?

Thanks, Jan

> ---8<---
> From 790ff77c6307b341dec0b4cc5e2d394e42f82e7c Mon Sep 17 00:00:00 2001
> From: Wei Liu <wei.liu2@citrix.com>
> Date: Fri, 3 Jun 2016 16:38:32 +0100
> Subject: [PATCH] libxl: update vcpus bitmap in retrieved guest config
> 
> ... because the available vcpu bitmap can change during domain life time
> due to cpu hotplug and unplug.
> 
> Reported-by: Jan Beulich <jbeulich@suse.com>
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> ---
>  tools/libxl/libxl.c | 31 +++++++++++++++++++++++++++++++
>  1 file changed, 31 insertions(+)
> 
> diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
> index 006b83f..99617f3 100644
> --- a/tools/libxl/libxl.c
> +++ b/tools/libxl/libxl.c
> @@ -7270,6 +7270,37 @@ int libxl_retrieve_domain_configuration(libxl_ctx *ctx, uint32_t domid,
>          libxl_dominfo_dispose(&info);
>      }
>  
> +    /* VCPUs */
> +    {
> +        libxl_vcpuinfo *vcpus;
> +        libxl_bitmap *map;
> +        int nr_vcpus, nr_cpus;
> +        unsigned int i;
> +
> +        vcpus = libxl_list_vcpu(ctx, domid, &nr_vcpus, &nr_cpus);
> +        if (!vcpus) {
> +            LOG(ERROR, "fail to get vcpu list for domain %d", domid);
> +            rc = ERROR_FAIL;
> +            goto out;
> +        }
> +
> +        /* Update the avail_vcpus bitmap accordingly */
> +        map = &d_config->b_info.avail_vcpus;
> +
> +        libxl_bitmap_dispose(map);
> +
> +        libxl_bitmap_alloc(ctx, map, nr_vcpus);
> +
> +        libxl_bitmap_init(map);
> +
> +        for (i = 0; i < nr_vcpus; i++) {
> +            if (vcpus[i].online)
> +                libxl_bitmap_set(map, i);
> +        }
> +
> +        libxl_vcpuinfo_list_free(vcpus, nr_vcpus);
> +    }
> +
>      /* Memory limits:
>       *
>       * Currently there are three memory limits:
> -- 
> 2.1.4
Wei Liu June 6, 2016, 5:18 p.m. UTC | #2
On Fri, Jun 03, 2016 at 05:35:20PM +0100, Wei Liu wrote:
> On Fri, Jun 03, 2016 at 08:42:11AM -0600, Jan Beulich wrote:
> > >>> On 03.06.16 at 15:41, <wei.liu2@citrix.com> wrote:
> > > On Fri, Jun 03, 2016 at 02:29:12AM -0600, Jan Beulich wrote:
> > >> Ian, Wei,
> > >> 
> > >> is it intentional that rebooting a (HVM) guest after having altered its
> > >> vCPU count will reset it back to the vCPU count it was originally
> > >> started with? That doesn't seem very natural - if one hotplugs a CPU
> > >> into a physical system and then reboots, that CPU will remain there.
> > >> 
> > > 
> > > This is probably an oversight.
> > > 
> > > I've added this to my list of things to look at after the release.
> > 
> > Thanks!
> > 
> 
> I got a patch ready.  But QEMU upstream refuses to start on the receiving end
> with the following error message:
> 
> qemu-system-i386: Unknown savevm section or instance 'cpu_common' 1
> qemu-system-i386: load of migration failed: Invalid argument
> 
> With QEMU traditional HVM guest and PV guest, the guest works fine -- up
> and running with all hot plugged cpus available.
> 
> So I think the relevant libxl information is transmitted but we also
> need to fix QEMU upstream. But that's a separate issue.
> 
> Wei.
> 
> ---8<---
> From 790ff77c6307b341dec0b4cc5e2d394e42f82e7c Mon Sep 17 00:00:00 2001
> From: Wei Liu <wei.liu2@citrix.com>
> Date: Fri, 3 Jun 2016 16:38:32 +0100
> Subject: [PATCH] libxl: update vcpus bitmap in retrieved guest config
> 
> ... because the available vcpu bitmap can change during domain life time
> due to cpu hotplug and unplug.
> 
> Reported-by: Jan Beulich <jbeulich@suse.com>
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>

This patch has two issues:
1. The code to allocate the bitmap is wrong.
2. The check to see whether a vcpu is available is wrong.

It happens to work on PV and qemu-trad because they don't really rely
on the bitmap provided.

#1 is fixed.

For #2, I haven't yet worked out how to correctly get the number of
online vcpus.

The current issue I have discovered is that:

  xl vcpu-set jessie-hvm 4
  xl list -l jessie-hvm | less # search for avail_vcpus

A vcpu is not really considered online from Xen's point of view unless
the guest explicitly activates it, e.g. with `echo 1 >
.../cpu1/online` inside the guest.

This is still not desirable because it would still cause qemu upstream
migration to fail. I will see if there is another way to figure out
how many vcpus there are.

Off the top of my head I might need to interrogate QEMU for that. I
will continue the investigation later.

Any hint on how to effectively identify online vcpus would be very
welcome.

Wei.
Wei Liu June 6, 2016, 5:20 p.m. UTC | #3
Use Stefano's new email address

On Mon, Jun 06, 2016 at 06:18:06PM +0100, Wei Liu wrote:
> On Fri, Jun 03, 2016 at 05:35:20PM +0100, Wei Liu wrote:
> > On Fri, Jun 03, 2016 at 08:42:11AM -0600, Jan Beulich wrote:
> > > >>> On 03.06.16 at 15:41, <wei.liu2@citrix.com> wrote:
> > > > On Fri, Jun 03, 2016 at 02:29:12AM -0600, Jan Beulich wrote:
> > > >> Ian, Wei,
> > > >> 
> > > >> is it intentional that rebooting a (HVM) guest after having altered its
> > > >> vCPU count will reset it back to the vCPU count it was originally
> > > >> started with? That doesn't seem very natural - if one hotplugs a CPU
> > > >> into a physical system and then reboots, that CPU will remain there.
> > > >> 
> > > > 
> > > > This is probably an oversight.
> > > > 
> > > > I've added this to my list of things to look at after the release.
> > > 
> > > Thanks!
> > > 
> > 
> > I got a patch ready.  But QEMU upstream refuses to start on the receiving end
> > with the following error message:
> > 
> > qemu-system-i386: Unknown savevm section or instance 'cpu_common' 1
> > qemu-system-i386: load of migration failed: Invalid argument
> > 
> > With QEMU traditional HVM guest and PV guest, the guest works fine -- up
> > and running with all hot plugged cpus available.
> > 
> > So I think the relevant libxl information is transmitted but we also
> > need to fix QEMU upstream. But that's a separate issue.
> > 
> > Wei.
> > 
> > ---8<---
> > From 790ff77c6307b341dec0b4cc5e2d394e42f82e7c Mon Sep 17 00:00:00 2001
> > From: Wei Liu <wei.liu2@citrix.com>
> > Date: Fri, 3 Jun 2016 16:38:32 +0100
> > Subject: [PATCH] libxl: update vcpus bitmap in retrieved guest config
> > 
> > ... because the available vcpu bitmap can change during domain life time
> > due to cpu hotplug and unplug.
> > 
> > Reported-by: Jan Beulich <jbeulich@suse.com>
> > Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> 
> This patch has two issues:
> 1. The code to allocate the bitmap is wrong.
> 2. The check to see whether a vcpu is available is wrong.
> 
> It happens to work on PV and qemu-trad because they don't really rely
> on the bitmap provided.
> 
> #1 is fixed.
> 
> For #2, I haven't yet worked out how to correctly get the number of
> online vcpus.
> 
> The current issue I have discovered is that:
> 
>   xl vcpu-set jessie-hvm 4
>   xl list -l jessie-hvm | less # search for avail_vcpus
> 
> A vcpu is not really considered online from Xen's point of view unless
> the guest explicitly activates it, e.g. with `echo 1 >
> .../cpu1/online` inside the guest.
> 
> This is still not desirable because it would still cause qemu upstream
> migration to fail. I will see if there is another way to figure out
> how many vcpus there are.
> 
> Off the top of my head I might need to interrogate QEMU for that. I
> will continue the investigation later.
> 
> Any hint on how to effectively identify online vcpus would be very
> welcome.
> 
> Wei.
Andrew Cooper June 6, 2016, 5:34 p.m. UTC | #4
On 06/06/16 18:20, Wei Liu wrote:
> Use Stefano's new email address
>
> On Mon, Jun 06, 2016 at 06:18:06PM +0100, Wei Liu wrote:
>> On Fri, Jun 03, 2016 at 05:35:20PM +0100, Wei Liu wrote:
>>> On Fri, Jun 03, 2016 at 08:42:11AM -0600, Jan Beulich wrote:
>>>>>>> On 03.06.16 at 15:41, <wei.liu2@citrix.com> wrote:
>>>>> On Fri, Jun 03, 2016 at 02:29:12AM -0600, Jan Beulich wrote:
>>>>>> Ian, Wei,
>>>>>>
>>>>>> is it intentional that rebooting a (HVM) guest after having altered its
>>>>>> vCPU count will reset it back to the vCPU count it was originally
>>>>>> started with? That doesn't seem very natural - if one hotplugs a CPU
>>>>>> into a physical system and then reboots, that CPU will remain there.
>>>>>>
>>>>> This is probably an oversight.
>>>>>
>>>>> I've added this to my list of things to look at after the release.
>>>> Thanks!
>>>>
>>> I got a patch ready.  But QEMU upstream refuses to start on the receiving end
>>> with the following error message:
>>>
>>> qemu-system-i386: Unknown savevm section or instance 'cpu_common' 1
>>> qemu-system-i386: load of migration failed: Invalid argument
>>>
>>> With QEMU traditional HVM guest and PV guest, the guest works fine -- up
>>> and running with all hot plugged cpus available.
>>>
>>> So I think the relevant libxl information is transmitted but we also
>>> need to fix QEMU upstream. But that's a separate issue.
>>>
>>> Wei.
>>>
>>> ---8<---
>>> From 790ff77c6307b341dec0b4cc5e2d394e42f82e7c Mon Sep 17 00:00:00 2001
>>> From: Wei Liu <wei.liu2@citrix.com>
>>> Date: Fri, 3 Jun 2016 16:38:32 +0100
>>> Subject: [PATCH] libxl: update vcpus bitmap in retrieved guest config
>>>
>>> ... because the available vcpu bitmap can change during domain life time
>>> due to cpu hotplug and unplug.
>>>
>>> Reported-by: Jan Beulich <jbeulich@suse.com>
>>> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
>> This patch has two issues:
>> 1. The code to allocate the bitmap is wrong.
>> 2. The check to see whether a vcpu is available is wrong.
>>
>> It happens to work on PV and qemu-trad because they don't really rely
>> on the bitmap provided.
>>
>> #1 is fixed.
>>
>> For #2, I haven't yet worked out how to correctly get the number of
>> online vcpus.
>>
>> The current issue I have discovered is that:
>>
>>   xl vcpu-set jessie-hvm 4
>>   xl list -l jessie-hvm | less # search for avail_vcpus
>>
>> A vcpu is not really considered online from Xen's point of view unless
>> the guest explicitly activates it, e.g. with `echo 1 >
>> .../cpu1/online` inside the guest.
>>
>> This is still not desirable because it would still cause qemu upstream
>> migration to fail. I will see if there is another way to figure out
>> how many vcpus there are.
>>
>> Off the top of my head I might need to interrogate QEMU for that. I
>> will continue the investigation later.
>>
>> Any hint on how to effectively identify online vcpus would be very
>> welcome.

Why does qemu even care?  It has nothing to do with vcpu handling. 
There should not be any qemu vcpu records in the first place.

~Andrew
Jan Beulich June 7, 2016, 6:38 a.m. UTC | #5
>>> On 06.06.16 at 19:18, <wei.liu2@citrix.com> wrote:
> The current issue I have discovered is that:
> 
>   xl vcpu-set jessie-hvm 4
>   xl list -l jessie-hvm | less # search for avail_vcpus
> 
> A vcpu is not really considered online from Xen's point of view unless
> the guest explicitly activates it, e.g. with `echo 1 >
> .../cpu1/online` inside the guest.
> 
> This is still not desirable because it would still cause qemu upstream
> migration to fail. I will see if there is another way to figure out
> how many vcpus there are.
> 
> Off the top of my head I might need to interrogate QEMU for that. I
> will continue the investigation later.
> 
> Any hint on how to effectively identify online vcpus would be very
> welcome.

How come xl itself doesn't know right from the "xl vcpu-set"?

And considering the guest reboot case I originally had the issue
with, the guest not activating the vCPU while still running would
not mean it won't use it post reboot. E.g. if the guest OS is not
CPU-hotplug capable, the reboot may be for the very reason of
activating the extra vCPU(s)...

Jan
Wei Liu June 7, 2016, 8:27 a.m. UTC | #6
On Tue, Jun 07, 2016 at 12:38:26AM -0600, Jan Beulich wrote:
> >>> On 06.06.16 at 19:18, <wei.liu2@citrix.com> wrote:
> > The current issue I have discovered is that:
> > 
> >   xl vcpu-set jessie-hvm 4
> >   xl list -l jessie-hvm | less # search for avail_vcpus
> > 
> > A vcpu is not really considered online from Xen's point of view unless
> > the guest explicitly activates it, e.g. with `echo 1 >
> > .../cpu1/online` inside the guest.
> > 
> > This is still not desirable because it would still cause qemu upstream
> > migration to fail. I will see if there is another way to figure out
> > how many vcpus there are.
> > 
> > Off the top of my head I might need to interrogate QEMU for that. I
> > will continue the investigation later.
> > 
> > Any hint on how to effectively identify online vcpus would be very
> > welcome.
> 
> How come xl itself doesn't know right from the "xl vcpu-set"?
> 

That is because:

1. libxl only pulls data from its various sources at the moment you
   ask for the domain configuration.
2. For QEMU upstream, there is no way of telling whether a hotplug
   succeeded -- the return value is ignored because the response from
   QEMU is not sensible.

Not saying these things are completely unfixable though.

Wei.

> And considering the guest reboot case I originally had the issue
> with, the guest not activating the vCPU while still running would
> not mean it won't use it post reboot. E.g. if the guest OS is not
> CPU-hotplug capable, the reboot may be for the very reason of
> activating the extra vCPU(s)...
> 
> Jan
>
Wei Liu June 7, 2016, 8:30 a.m. UTC | #7
On Mon, Jun 06, 2016 at 06:34:44PM +0100, Andrew Cooper wrote:
> On 06/06/16 18:20, Wei Liu wrote:
> > Use Stefano's new email address
> >
> > On Mon, Jun 06, 2016 at 06:18:06PM +0100, Wei Liu wrote:
> >> On Fri, Jun 03, 2016 at 05:35:20PM +0100, Wei Liu wrote:
> >>> On Fri, Jun 03, 2016 at 08:42:11AM -0600, Jan Beulich wrote:
> >>>>>>> On 03.06.16 at 15:41, <wei.liu2@citrix.com> wrote:
> >>>>> On Fri, Jun 03, 2016 at 02:29:12AM -0600, Jan Beulich wrote:
> >>>>>> Ian, Wei,
> >>>>>>
> >>>>>> is it intentional that rebooting a (HVM) guest after having altered its
> >>>>>> vCPU count will reset it back to the vCPU count it was originally
> >>>>>> started with? That doesn't seem very natural - if one hotplugs a CPU
> >>>>>> into a physical system and then reboots, that CPU will remain there.
> >>>>>>
> >>>>> This is probably an oversight.
> >>>>>
> >>>>> I've added this to my list of things to look at after the release.
> >>>> Thanks!
> >>>>
> >>> I got a patch ready.  But QEMU upstream refuses to start on the receiving end
> >>> with the following error message:
> >>>
> >>> qemu-system-i386: Unknown savevm section or instance 'cpu_common' 1
> >>> qemu-system-i386: load of migration failed: Invalid argument
> >>>
> >>> With QEMU traditional HVM guest and PV guest, the guest works fine -- up
> >>> and running with all hot plugged cpus available.
> >>>
> >>> So I think the relevant libxl information is transmitted but we also
> >>> need to fix QEMU upstream. But that's a separate issue.
> >>>
> >>> Wei.
> >>>
> >>> ---8<---
> >>> From 790ff77c6307b341dec0b4cc5e2d394e42f82e7c Mon Sep 17 00:00:00 2001
> >>> From: Wei Liu <wei.liu2@citrix.com>
> >>> Date: Fri, 3 Jun 2016 16:38:32 +0100
> >>> Subject: [PATCH] libxl: update vcpus bitmap in retrieved guest config
> >>>
> >>> ... because the available vcpu bitmap can change during domain life time
> >>> due to cpu hotplug and unplug.
> >>>
> >>> Reported-by: Jan Beulich <jbeulich@suse.com>
> >>> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> >> This patch has two issues:
> >> 1. The code to allocate the bitmap is wrong.
> >> 2. The check to see whether a vcpu is available is wrong.
> >>
> >> It happens to work on PV and qemu-trad because they don't really rely
> >> on the bitmap provided.
> >>
> >> #1 is fixed.
> >>
> >> For #2, I haven't yet worked out how to correctly get the number of
> >> online vcpus.
> >>
> >> The current issue I have discovered is that:
> >>
> >>   xl vcpu-set jessie-hvm 4
> >>   xl list -l jessie-hvm | less # search for avail_vcpus
> >>
> >> A vcpu is not really considered online from Xen's point of view unless
> >> the guest explicitly activates it, e.g. with `echo 1 >
> >> .../cpu1/online` inside the guest.
> >>
> >> This is still not desirable because it would still cause qemu upstream
> >> migration to fail. I will see if there is another way to figure out
> >> how many vcpus there are.
> >>
> >> Off the top of my head I might need to interrogate QEMU for that. I
> >> will continue the investigation later.
> >>
> >> Any hint on how to effectively identify online vcpus would be very
> >> welcome.
> 
> Why does qemu even care?  It has nothing to do with vcpu handling. 
> There should not be any qemu vcpu records in the first place.
> 

IIRC upstream rejected the idea of having no cpu attached to a platform.

Wei.

> ~Andrew
Ian Jackson June 14, 2016, 4:34 p.m. UTC | #8
Wei Liu writes ("Re: [Xen-devel] "xl vcpu-set" not persistent across reboot?"):
> On Mon, Jun 06, 2016 at 06:34:44PM +0100, Andrew Cooper wrote:
> > Why does qemu even care?  It has nothing to do with vcpu handling. 
> > There should not be any qemu vcpu records in the first place.
> 
> IIRC upstream rejected the idea of having no cpu attached to a platform.

That doesn't explain why the number of vcpus that qemu believes in has
to have anything to do with the number of vcpus that the guest has.

The qemu vcpu state is a dummy, regardless of how many of them there
are, surely?

Ian.
Wei Liu June 14, 2016, 4:39 p.m. UTC | #9
On Tue, Jun 14, 2016 at 05:34:22PM +0100, Ian Jackson wrote:
> Wei Liu writes ("Re: [Xen-devel] "xl vcpu-set" not persistent across reboot?"):
> > On Mon, Jun 06, 2016 at 06:34:44PM +0100, Andrew Cooper wrote:
> > > Why does qemu even care?  It has nothing to do with vcpu handling. 
> > > There should not be any qemu vcpu records in the first place.
> > 
> > IIRC upstream rejected the idea of having no cpu attached to a platform.
> 
> That doesn't explain why the number of vcpus that qemu believes in has
> to have anything to do with the number of vcpus that the guest has.
> 
> The qemu vcpu state is a dummy, regardless of how many of them there
> are, surely?
> 

What Andrew means is that QEMU shouldn't have kept the CPU state
structures in the first place. My response explains why that is not
possible from a QEMU upstream point of view.

Hence the unfortunate fact is that we need to live with it for now. To
start QEMU we need to create a bunch of dummy CPUs to keep QEMU happy.
All those dummy states need to be kept.

The guest is irrelevant here. We don't care what the guest thinks, but
we need to keep the toolstack-side view consistent so that migration
and save / restore won't fail.

Wei.

> Ian.
Ian Jackson June 14, 2016, 4:57 p.m. UTC | #10
Wei Liu writes ("Re: [Xen-devel] "xl vcpu-set" not persistent across reboot?"):
> What Andrew means is that QEMU shouldn't have kept the CPU state
> structures in the first place. My response explains why that is not
> possible from a QEMU upstream point of view.

I don't think it addresses my point.

> Hence the unfortunate fact is that we need to live with it for now. To
> start QEMU we need to create a bunch of dummy CPUs to keep QEMU happy.
> All those dummy states need to be kept.

Why do we need one dummy state per actual vcpu rather than just one
dummy state no matter how many vcpus ?

Or is qemu involved in hvm cpu hotplug ?

Ian.
Andrew Cooper June 14, 2016, 4:59 p.m. UTC | #11
On 14/06/16 17:57, Ian Jackson wrote:
> Wei Liu writes ("Re: [Xen-devel] "xl vcpu-set" not persistent across reboot?"):
>> What Andrew means is that QEMU shouldn't have kept the CPU state
>> structures in the first place. My response explains why that is not
>> possible from a QEMU upstream point of view.
> I don't think it addresses my point.
>
>> Hence the unfortunate fact is that we need to live with it for now. To
>> start QEMU we need to create a bunch of dummy CPUs to keep QEMU happy.
>> All those dummy states need to be kept.
> Why do we need one dummy state per actual vcpu rather than just one
> dummy state no matter how many vcpus ?
>
> Or is qemu involved in hvm cpu hotplug ?

Qemu has nothing to do with vcpus at all, and should not have vcpu state
in its migration stream when acting as a device model.

Someone needs to fix this upstream in Qemu, and that is the *only*
viable option here.

~Andrew
Wei Liu June 14, 2016, 5:03 p.m. UTC | #12
On Tue, Jun 14, 2016 at 05:57:00PM +0100, Ian Jackson wrote:
> Wei Liu writes ("Re: [Xen-devel] "xl vcpu-set" not persistent across reboot?"):
> > What Andrew means is that QEMU shouldn't have kept the CPU state
> > structures in the first place. My response explains why that is not
> > possible from a QEMU upstream point of view.
> 
> I don't think it addresses my point.
> 
> > Hence the unfortunate fact is that we need to live with it for now. To
> > start QEMU we need to create a bunch of dummy CPUs to keep QEMU happy.
> > All those dummy states need to be kept.
> 
> Why do we need one dummy state per actual vcpu rather than just one
> dummy state no matter how many vcpus ?
> 

We can't because ...

> Or is qemu involved in hvm cpu hotplug ?
> 

when doing hotplug, libxl uses a QMP command to tell QEMU to create
CPUs.

Whether this can be changed I will let Anthony and Stefano answer.

Wei.

> Ian.
Wei Liu June 14, 2016, 5:06 p.m. UTC | #13
On Tue, Jun 14, 2016 at 05:59:30PM +0100, Andrew Cooper wrote:
> On 14/06/16 17:57, Ian Jackson wrote:
> > Wei Liu writes ("Re: [Xen-devel] "xl vcpu-set" not persistent across reboot?"):
> >> What Andrew means is that QEMU shouldn't have kept the CPU state
> >> structures in the first place. My response explains why that is not
> >> possible from a QEMU upstream point of view.
> > I don't think it addresses my point.
> >
> >> Hence the unfortunate fact is that we need to live with it for now. To
> >> start QEMU we need to create a bunch of dummy CPUs to keep QEMU happy.
> >> All those dummy states need to be kept.
> > Why do we need one dummy state per actual vcpu rather than just one
> > dummy state no matter how many vcpus ?
> >
> > Or is qemu involved in hvm cpu hotplug ?
> 
> Qemu has nothing to do with vcpus at all, and should not have vcpu state
> in its migration stream when acting as a device model.
> 
> Someone needs to fix this upstream in Qemu, and that is the *only*
> viable option here.
> 

Correct me if I'm wrong -- are you suggesting adding a board / platform
without any CPU to QEMU? If that's the suggestion, I think the QEMU
maintainers have made clear they won't accept such a thing.

Wei.

> ~Andrew
Anthony PERARD June 14, 2016, 5:35 p.m. UTC | #14
On Tue, Jun 14, 2016 at 05:57:00PM +0100, Ian Jackson wrote:
> Wei Liu writes ("Re: [Xen-devel] "xl vcpu-set" not persistent across reboot?"):
> > What Andrew means is that QEMU shouldn't have kept the CPU state
> > structures in the first place. My response explains why that is not
> > possible from a QEMU upstream point of view.
> 
> I don't think it addresses my point.
> 
> > Hence the unfortunate fact is that we need to live with it for now. To
> > start QEMU we need to create a bunch of dummy CPUs to keep QEMU happy.
> > All those dummy states need to be kept.
> 
> Why do we need one dummy state per actual vcpu rather than just one
> dummy state no matter how many vcpus ?
> 
> Or is qemu involved in hvm cpu hotplug ?

It is: QEMU notifies the guest of a newly available CPU via an ACPI
GPE or something like that. I think QEMU manages the bitfield of
online CPUs and then notifies the guest about changes.

Patch

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 006b83f..99617f3 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -7270,6 +7270,37 @@  int libxl_retrieve_domain_configuration(libxl_ctx *ctx, uint32_t domid,
         libxl_dominfo_dispose(&info);
     }
 
+    /* VCPUs */
+    {
+        libxl_vcpuinfo *vcpus;
+        libxl_bitmap *map;
+        int nr_vcpus, nr_cpus;
+        unsigned int i;
+
+        vcpus = libxl_list_vcpu(ctx, domid, &nr_vcpus, &nr_cpus);
+        if (!vcpus) {
+            LOG(ERROR, "fail to get vcpu list for domain %d", domid);
+            rc = ERROR_FAIL;
+            goto out;
+        }
+
+        /* Update the avail_vcpus bitmap accordingly */
+        map = &d_config->b_info.avail_vcpus;
+
+        libxl_bitmap_dispose(map);
+
+        libxl_bitmap_alloc(ctx, map, nr_vcpus);
+
+        libxl_bitmap_init(map);
+
+        for (i = 0; i < nr_vcpus; i++) {
+            if (vcpus[i].online)
+                libxl_bitmap_set(map, i);
+        }
+
+        libxl_vcpuinfo_list_free(vcpus, nr_vcpus);
+    }
+
     /* Memory limits:
      *
      * Currently there are three memory limits: