diff mbox series

[v3,01/11] xen/manage: keep track of the on-going suspend mode

Message ID 9b970e12491107afda0c1d4a6f154b52d90346ac.1598042152.git.anchalag@amazon.com (mailing list archive)
State Changes Requested, archived
Headers show
Series Fix PM hibernation in Xen guests | expand

Commit Message

Anchal Agarwal Aug. 21, 2020, 10:25 p.m. UTC
From: Munehisa Kamata <kamatam@amazon.com>  

Guest hibernation is different from Xen suspend/resume/live migration.
Xen save/restore does not use pm_ops as is needed by guest hibernation.
Hibernation in the guest follows the ACPI path and is guest initiated; the
hibernation image is saved within the guest, as compared to the latter modes,
which are Xen toolstack assisted and where image creation/storage is under the
control of the hypervisor/host machine.
To differentiate between Xen suspend and PM hibernation, keep track
of the on-going suspend mode, mainly by using a new API to track the
SHUTDOWN_SUSPEND state.
Introduce a simple function that keeps track of the on-going suspend mode
so that PM hibernation code can behave differently according to the
current suspend mode.
Since Xen suspend doesn't have a corresponding PM event, its main logic
is modified to acquire pm_mutex.

Though acquiring pm_mutex is still the right thing to do, we may
see a deadlock if PM hibernation is interrupted by Xen suspend.
PM hibernation depends on the xenwatch thread to process xenbus state
transactions, but the thread will sleep waiting for pm_mutex, which is
already held by the PM hibernation context in that scenario. Xen shutdown
code may need some changes to avoid the issue.

[Anchal Agarwal: Changelog]:
 RFC v1->v2: Code refactoring
 v1->v2:     Remove unused functions for PM SUSPEND/PM hibernation
 v2->v3:     Added logic to use the existing pm_notifier to detect ARM
	     and abort hibernation for ARM guests. Also removed the
	     different suspend_modes and simplified the code by using
	     existing state variables for tracking Xen suspend. The
	     notifier won't get registered for PVH dom0 either.

Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
Signed-off-by: Munehisa Kamata <kamatam@amazon.com>
---
 drivers/xen/manage.c  | 46 ++++++++++++++++++++++++++++++++++++++++++++++
 include/xen/xen-ops.h |  1 +
 2 files changed, 47 insertions(+)
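
For orientation, a minimal sketch of the shape of the change (hypothetical; the
actual hunks live in the patch itself and are not reproduced in this view). The
Xen suspend path serializes against PM hibernation by taking the system sleep
lock:

	/* Sketch only, assuming do_suspend() in drivers/xen/manage.c.
	 * lock_system_sleep() takes system_transition_mutex (formerly
	 * pm_mutex), so Xen suspend and PM hibernation cannot overlap.
	 */
	static void do_suspend(void)
	{
		lock_system_sleep();

		/* ... existing shutting_down = SHUTDOWN_SUSPEND and
		 * xenbus/domain suspend logic ...
		 */

		unlock_system_sleep();
	}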

Comments

Boris Ostrovsky Sept. 13, 2020, 3:43 p.m. UTC | #1
On 8/21/20 6:25 PM, Anchal Agarwal wrote:
> From: Munehisa Kamata <kamatam@amazon.com>  
> 
> Guest hibernation is different from Xen suspend/resume/live migration.
> Xen save/restore does not use pm_ops as is needed by guest hibernation.
> Hibernation in the guest follows the ACPI path and is guest initiated; the
> hibernation image is saved within the guest, as compared to the latter modes,
> which are Xen toolstack assisted and where image creation/storage is under the
> control of the hypervisor/host machine.
> To differentiate between Xen suspend and PM hibernation, keep track
> of the on-going suspend mode, mainly by using a new API to track the
> SHUTDOWN_SUSPEND state.
> Introduce a simple function that keeps track of the on-going suspend mode
> so that PM hibernation code can behave differently according to the
> current suspend mode.
> Since Xen suspend doesn't have a corresponding PM event, its main logic
> is modified to acquire pm_mutex.


lock_system_sleep() is not taking this mutex.


> 
> Though acquiring pm_mutex is still the right thing to do, we may
> see a deadlock if PM hibernation is interrupted by Xen suspend.
> PM hibernation depends on the xenwatch thread to process xenbus state
> transactions, but the thread will sleep waiting for pm_mutex, which is
> already held by the PM hibernation context in that scenario. Xen shutdown
> code may need some changes to avoid the issue.



Is it Xen's shutdown or suspend code that needs to address this? (Or I
may not understand what the problem is that you are describing)


> 
> +
> +static int xen_pm_notifier(struct notifier_block *notifier,
> +	unsigned long pm_event, void *unused)
> +{
> +	int ret;
> +
> +	switch (pm_event) {
> +	case PM_SUSPEND_PREPARE:
> +	case PM_HIBERNATION_PREPARE:
> +	/* Guest hibernation is not supported for aarch64 currently*/
> +	if (IS_ENABLED(CONFIG_ARM64)) {
> +		ret = NOTIFY_BAD;
> +		break;
> +	}

Indentation.

> +	case PM_RESTORE_PREPARE:
> +	case PM_POST_SUSPEND:
> +	case PM_POST_HIBERNATION:
> +	case PM_POST_RESTORE:
> +	default:
> +		ret = NOTIFY_OK;
> +	}
> +	return ret;
> +};


This whole routine now is

	if (IS_ENABLED(CONFIG_ARM64))
		return NOTIFY_BAD;

	return NOTIFY_OK;

isn't it?


> +
> +static struct notifier_block xen_pm_notifier_block = {
> +	.notifier_call = xen_pm_notifier
> +};
> +
> +static int xen_setup_pm_notifier(void)
> +{
> +	if (!xen_hvm_domain() || xen_initial_domain())
> +		return -ENODEV;


I don't think this works anymore.

In the past your notifier would set suspend_mode (or something) but now
it really doesn't do anything except reports an error in some (ARM) cases.

So I think you should move this check into the notifier.

(And BTW I still think PM_SUSPEND_PREPARE should return an error too.
The fact that we are using "suspend" in xen routine names is irrelevant)



-boris



> +	return register_pm_notifier(&xen_pm_notifier_block);
> +}
> +
Boris Ostrovsky Sept. 13, 2020, 5:07 p.m. UTC | #2
On 8/21/20 6:25 PM, Anchal Agarwal wrote:
> From: Munehisa Kamata <kamatam@amazon.com>  
>
> Guest hibernation is different from Xen suspend/resume/live migration.
> Xen save/restore does not use pm_ops as is needed by guest hibernation.
> Hibernation in the guest follows the ACPI path and is guest initiated; the
> hibernation image is saved within the guest, as compared to the latter modes,
> which are Xen toolstack assisted and where image creation/storage is under the
> control of the hypervisor/host machine.
> To differentiate between Xen suspend and PM hibernation, keep track
> of the on-going suspend mode, mainly by using a new API to track the
> SHUTDOWN_SUSPEND state.
> Introduce a simple function that keeps track of the on-going suspend mode
> so that PM hibernation code can behave differently according to the
> current suspend mode.
> Since Xen suspend doesn't have a corresponding PM event, its main logic
> is modified to acquire pm_mutex.
>
> Though acquiring pm_mutex is still the right thing to do, we may
> see a deadlock if PM hibernation is interrupted by Xen suspend.
> PM hibernation depends on the xenwatch thread to process xenbus state
> transactions, but the thread will sleep waiting for pm_mutex, which is
> already held by the PM hibernation context in that scenario. Xen shutdown
> code may need some changes to avoid the issue.
>
> [Anchal Agarwal: Changelog]:
>  RFC v1->v2: Code refactoring
>  v1->v2:     Remove unused functions for PM SUSPEND/PM hibernation
>  v2->v3:     Added logic to use the existing pm_notifier to detect ARM
> 	     and abort hibernation for ARM guests. Also removed the
> 	     different suspend_modes and simplified the code by using
> 	     existing state variables for tracking Xen suspend. The
> 	     notifier won't get registered for PVH dom0 either.
>
> Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
> Signed-off-by: Munehisa Kamata <kamatam@amazon.com>


BTW, just to make sure --- Thomas' comments about commit message format
(SoB order, no changelog, etc) apply to all patches, not just #5.


-boris
Anchal Agarwal Sept. 14, 2020, 9:47 p.m. UTC | #3
On Sun, Sep 13, 2020 at 11:43:30AM -0400, boris.ostrovsky@oracle.com wrote:
> On 8/21/20 6:25 PM, Anchal Agarwal wrote:
> > From: Munehisa Kamata <kamatam@amazon.com>
> >
> > Guest hibernation is different from Xen suspend/resume/live migration.
> > Xen save/restore does not use pm_ops as is needed by guest hibernation.
> > Hibernation in the guest follows the ACPI path and is guest initiated; the
> > hibernation image is saved within the guest, as compared to the latter modes,
> > which are Xen toolstack assisted and where image creation/storage is under the
> > control of the hypervisor/host machine.
> > To differentiate between Xen suspend and PM hibernation, keep track
> > of the on-going suspend mode, mainly by using a new API to track the
> > SHUTDOWN_SUSPEND state.
> > Introduce a simple function that keeps track of the on-going suspend mode
> > so that PM hibernation code can behave differently according to the
> > current suspend mode.
> > Since Xen suspend doesn't have a corresponding PM event, its main logic
> > is modified to acquire pm_mutex.
> 
> 
> lock_system_sleep() is not taking this mutex.
>
Yes, I just realized that commit 55f2503c ("PM / reboot: Eliminate race
between reboot and suspend") changed its name to system_transition_mutex.
I think I missed that change somehow and assumed it's still pm_mutex.
Will fix the description.
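
For reference, after that commit the helpers wrap the renamed mutex; roughly
(simplified from kernel/power/main.c of that era):

	void lock_system_sleep(void)
	{
		/* Tell the freezer to skip this task while it waits
		 * for system_transition_mutex.
		 */
		current->flags |= PF_FREEZER_SKIP;
		mutex_lock(&system_transition_mutex);
	}

	void unlock_system_sleep(void)
	{
		current->flags &= ~PF_FREEZER_SKIP;
		mutex_unlock(&system_transition_mutex);
	}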
> 
> >
> > Though acquiring pm_mutex is still the right thing to do, we may
> > see a deadlock if PM hibernation is interrupted by Xen suspend.
> > PM hibernation depends on the xenwatch thread to process xenbus state
> > transactions, but the thread will sleep waiting for pm_mutex, which is
> > already held by the PM hibernation context in that scenario. Xen shutdown
> > code may need some changes to avoid the issue.
> 
> 
> 
> Is it Xen's shutdown or suspend code that needs to address this? (Or I
> may not understand what the problem is that you are describing)
> 
It's the Xen suspend code, I think. If we do not take the system_transition_mutex
in do_suspend, then if hibernation is triggered in parallel with Xen suspend there
could be issues. Now this is still theoretical in my case and I haven't been able
to reproduce such a race. So the approach the original author took was to take
this lock, which to me seems right.
And it's Xen suspend and not Xen shutdown. So basically, if this scenario
happens, I am of the view that one or the other will fail to occur; then how do we recover
or avoid this at all?

Does that answer your question?
> 
> >
> > +
> > +static int xen_pm_notifier(struct notifier_block *notifier,
> > +     unsigned long pm_event, void *unused)
> > +{
> > +     int ret;
> > +
> > +     switch (pm_event) {
> > +     case PM_SUSPEND_PREPARE:
> > +     case PM_HIBERNATION_PREPARE:
> > +     /* Guest hibernation is not supported for aarch64 currently*/
> > +     if (IS_ENABLED(CONFIG_ARM64)) {
> > +             ret = NOTIFY_BAD;
> > +             break;
> > +     }
> 
> Indentation.
> 
> > +     case PM_RESTORE_PREPARE:
> > +     case PM_POST_SUSPEND:
> > +     case PM_POST_HIBERNATION:
> > +     case PM_POST_RESTORE:
> > +     default:
> > +             ret = NOTIFY_OK;
> > +     }
> > +     return ret;
> > +};
> 
> 
> This whole routine now is
> 
>         if (IS_ENABLED(CONFIG_ARM64))
>                 return NOTIFY_BAD;
> 
>         return NOTIFY_OK;
> 
> isn't it?
> 
Yes.
> 
> > +
> > +static struct notifier_block xen_pm_notifier_block = {
> > +     .notifier_call = xen_pm_notifier
> > +};
> > +
> > +static int xen_setup_pm_notifier(void)
> > +{
> > +     if (!xen_hvm_domain() || xen_initial_domain())
> > +             return -ENODEV;
> 
> 
> I don't think this works anymore.
What do you mean?
The first check is for Xen domain types and the other is for architecture support.
The reason I put this check here is because I wanted to segregate the two.
I do not want to register this notifier at all for a !hvm guest and also if it's
an initial control domain.
The ARM check only lands in the notifier because once the hibernate() API is
called, it calls pm_notifier_call_chain() for PM_HIBERNATION_PREPARE, and this
will fail for aarch64.
Once we have support for aarch64 this notifier can go away altogether.

Is there any other reason I may be missing why we should move this check to
the notifier?
> 
> In the past your notifier would set suspend_mode (or something) but now
> it really doesn't do anything except reports an error in some (ARM) cases.
> 
> So I think you should move this check into the notifier.

> 
> (And BTW I still think PM_SUSPEND_PREPARE should return an error too.
> The fact that we are using "suspend" in xen routine names is irrelevant)
> 
I may have sent a "not-updated" version of the notifier's function change.

+    switch (pm_event) {
+       case PM_HIBERNATION_PREPARE:
+        /* Guest hibernation is not supported for aarch64 currently*/
+        if (IS_ENABLED(CONFIG_ARM64)) {
+             ret = NOTIFY_BAD;
+             break;
+     }
+       case PM_RESTORE_PREPARE:
+       case PM_POST_RESTORE:
+       case PM_POST_HIBERNATION:
+       default:
+           ret = NOTIFY_OK;
+    }

With the above patch, PM_SUSPEND_PREPARE will go away altogether. Does that
resolve this issue? I wanted to get rid of all the SUSPEND_* events as they are
clearly not needed here.
The only reason I kept it there is that if someone tries to trigger hibernation
on ARM instances they should get an error, as I am not sure about the current
behavior. There may be a better way to not invoke hibernation on ARM DomUs and
get rid of this block altogether.

Again, sorry for sending in the half baked fix. My workspace switch may have
caused the error.
>
> 
> 
> -boris
> 
Anchal
> 
> 
> > +     return register_pm_notifier(&xen_pm_notifier_block);
> > +}
> > +
Boris Ostrovsky Sept. 15, 2020, 12:24 a.m. UTC | #4
On 9/14/20 5:47 PM, Anchal Agarwal wrote:
> On Sun, Sep 13, 2020 at 11:43:30AM -0400, boris.ostrovsky@oracle.com wrote:
>> On 8/21/20 6:25 PM, Anchal Agarwal wrote:
>>> Though acquiring pm_mutex is still the right thing to do, we may
>>> see a deadlock if PM hibernation is interrupted by Xen suspend.
>>> PM hibernation depends on the xenwatch thread to process xenbus state
>>> transactions, but the thread will sleep waiting for pm_mutex, which is
>>> already held by the PM hibernation context in that scenario. Xen shutdown
>>> code may need some changes to avoid the issue.
>>
>>
>> Is it Xen's shutdown or suspend code that needs to address this? (Or I
>> may not understand what the problem is that you are describing)
>>
> It's the Xen suspend code, I think. If we do not take the system_transition_mutex
> in do_suspend, then if hibernation is triggered in parallel with Xen suspend there
> could be issues.


But you *are* taking this mutex to avoid this exact race, aren't you?


> Now this is still theoretical in my case and I haven't been able
> to reproduce such a race. So the approach the original author took was to take
> this lock, which to me seems right.
> And it's Xen suspend and not Xen shutdown. So basically, if this scenario
> happens, I am of the view that one or the other will fail to occur; then how do we recover
> or avoid this at all?
>
> Does that answer your question?
>


>>> +
>>> +static int xen_setup_pm_notifier(void)
>>> +{
>>> +     if (!xen_hvm_domain() || xen_initial_domain())
>>> +             return -ENODEV;
>>
>> I don't think this works anymore.
> What do you mean?
> The first check is for Xen domain types and the other is for architecture support.
> The reason I put this check here is because I wanted to segregate the two.
> I do not want to register this notifier at all for a !hvm guest and also if it's
> an initial control domain.
> The ARM check only lands in the notifier because once the hibernate() API is
> called, it calls pm_notifier_call_chain() for PM_HIBERNATION_PREPARE, and this
> will fail for aarch64.
> Once we have support for aarch64 this notifier can go away altogether.
>
> Is there any other reason I may be missing why we should move this check to
> the notifier?


Not registering this notifier is equivalent to having it return NOTIFY_OK.


In your earlier versions just returning NOTIFY_OK was not sufficient for
hibernation to proceed since the notifier would also need to set
suspend_mode appropriately. But now your notifier essentially filters
out unsupported configurations. And so if it is not called your
configuration (e.g. PV domain) will be considered supported.


>> In the past your notifier would set suspend_mode (or something) but now
>> it really doesn't do anything except reports an error in some (ARM) cases.
>>
>> So I think you should move this check into the notifier.
>> (And BTW I still think PM_SUSPEND_PREPARE should return an error too.
>> The fact that we are using "suspend" in xen routine names is irrelevant)
>>
> I may have sent a "not-updated" version of the notifier's function change.
>
> +    switch (pm_event) {
> +       case PM_HIBERNATION_PREPARE:
> +        /* Guest hibernation is not supported for aarch64 currently*/
> +        if (IS_ENABLED(CONFIG_ARM64)) {
> +             ret = NOTIFY_BAD;
> +             break;
> +     }
> +       case PM_RESTORE_PREPARE:
> +       case PM_POST_RESTORE:
> +       case PM_POST_HIBERNATION:
> +       default:
> +           ret = NOTIFY_OK;
> +    }


There is no difference on x86 between this code and what you sent
earlier. In both instances PM_SUSPEND_PREPARE will return NOTIFY_OK.


On ARM this code will allow suspend to proceed (which is not what we want).


-boris


>
> With the above patch, PM_SUSPEND_PREPARE will go away altogether. Does that
> resolve this issue? I wanted to get rid of all the SUSPEND_* events as they are
> clearly not needed here.
> The only reason I kept it there is that if someone tries to trigger hibernation
> on ARM instances they should get an error, as I am not sure about the current
> behavior. There may be a better way to not invoke hibernation on ARM DomUs and
> get rid of this block altogether.
>
> Again, sorry for sending in the half baked fix. My workspace switch may have
> caused the error.
>>
>>
>> -boris
>>
> Anchal
>>
>>> +     return register_pm_notifier(&xen_pm_notifier_block);
>>> +}
>>> +
Anchal Agarwal Sept. 15, 2020, 6 p.m. UTC | #5
On Mon, Sep 14, 2020 at 08:24:22PM -0400, boris.ostrovsky@oracle.com wrote:
> On 9/14/20 5:47 PM, Anchal Agarwal wrote:
> > On Sun, Sep 13, 2020 at 11:43:30AM -0400, boris.ostrovsky@oracle.com wrote:
> >> On 8/21/20 6:25 PM, Anchal Agarwal wrote:
> >>> Though acquiring pm_mutex is still the right thing to do, we may
> >>> see a deadlock if PM hibernation is interrupted by Xen suspend.
> >>> PM hibernation depends on the xenwatch thread to process xenbus state
> >>> transactions, but the thread will sleep waiting for pm_mutex, which is
> >>> already held by the PM hibernation context in that scenario. Xen shutdown
> >>> code may need some changes to avoid the issue.
> >>
> >>
> >> Is it Xen's shutdown or suspend code that needs to address this? (Or I
> >> may not understand what the problem is that you are describing)
> >>
> > It's the Xen suspend code, I think. If we do not take the system_transition_mutex
> > in do_suspend, then if hibernation is triggered in parallel with Xen suspend there
> > could be issues.
> 
> 
> But you *are* taking this mutex to avoid this exact race, aren't you?
yes, in that case this race should not occur and either one of it should fail
gracefully.
> 
> 
> > Now this is still theoretical in my case and I haven't been able
> > to reproduce such a race. So the approach the original author took was to take
> > this lock, which to me seems right.
> > And it's Xen suspend and not Xen shutdown. So basically, if this scenario
> > happens, I am of the view that one or the other will fail to occur; then how do we recover
> > or avoid this at all?
> >
> > Does that answer your question?
> >
> 
> 
> >>> +
> >>> +static int xen_setup_pm_notifier(void)
> >>> +{
> >>> +     if (!xen_hvm_domain() || xen_initial_domain())
> >>> +             return -ENODEV;
> >>
> >> I don't think this works anymore.
> > What do you mean?
> > The first check is for Xen domain types and the other is for architecture support.
> > The reason I put this check here is because I wanted to segregate the two.
> > I do not want to register this notifier at all for a !hvm guest and also if it's
> > an initial control domain.
> > The ARM check only lands in the notifier because once the hibernate() API is
> > called, it calls pm_notifier_call_chain() for PM_HIBERNATION_PREPARE, and this
> > will fail for aarch64.
> > Once we have support for aarch64 this notifier can go away altogether.
> >
> > Is there any other reason I may be missing why we should move this check to
> > the notifier?
> 
> 
> Not registering this notifier is equivalent to having it return NOTIFY_OK.
>
How is that different from current behavior?
> 
> In your earlier versions just returning NOTIFY_OK was not sufficient for
> hibernation to proceed since the notifier would also need to set
> suspend_mode appropriately. But now your notifier essentially filters
> out unsupported configurations. And so if it is not called your
> configuration (e.g. PV domain) will be considered supported.
> 
I am sorry if I am having a bit of a hard time understanding this.
How will it be considered supported when it's not even registered? My
understanding is if it's not registered, it will not land in the notifier call chain
which is invoked in pm_notifier_call_chain().

As Roger mentioned in the last series, none of this should be a part of PVH dom0
hibernation as it's not tested, but this series should also not break anything.
If I register this notifier for PVH dom0 and return an error later, that will alter
the current behavior, right?

If a pm_notifier for PVH dom0 is not registered then it will not land in the
notifier call chain and the system will work as before this series.
If I look for unsupported configurations, then a !hvm domain is also one, but we
filter that out at the beginning and don't even bother about it.

Unless you mean a guest running VMs itself? [Trying to read between the lines;
that may not be the case though]
> 
> >> In the past your notifier would set suspend_mode (or something) but now
> >> it really doesn't do anything except reports an error in some (ARM) cases.
> >>
> >> So I think you should move this check into the notifier.
> >> (And BTW I still think PM_SUSPEND_PREPARE should return an error too.
> >> The fact that we are using "suspend" in xen routine names is irrelevant)
> >>
> > I may have sent a "not-updated" version of the notifier's function change.
> >
> > +    switch (pm_event) {
> > +       case PM_HIBERNATION_PREPARE:
> > +        /* Guest hibernation is not supported for aarch64 currently*/
> > +        if (IS_ENABLED(CONFIG_ARM64)) {
> > +             ret = NOTIFY_BAD;
> > +             break;
> > +     }
> > +       case PM_RESTORE_PREPARE:
> > +       case PM_POST_RESTORE:
> > +       case PM_POST_HIBERNATION:
> > +       default:
> > +           ret = NOTIFY_OK;
> > +    }
> 
> 
> There is no difference on x86 between this code and what you sent
> earlier. In both instances PM_SUSPEND_PREPARE will return NOTIFY_OK.
> 
> 
> On ARM this code will allow suspend to proceed (which is not what we want).
> 
Ok, I think I may have overlooked arm code. I will fix that.
> 
> -boris
> 
Thanks,
Anchal
> 
> >
> > With the above patch, PM_SUSPEND_PREPARE will go away altogether. Does that
> > resolve this issue? I wanted to get rid of all the SUSPEND_* events as they are
> > clearly not needed here.
> > The only reason I kept it there is that if someone tries to trigger hibernation
> > on ARM instances they should get an error, as I am not sure about the current
> > behavior. There may be a better way to not invoke hibernation on ARM DomUs and
> > get rid of this block altogether.
> >
> > Again, sorry for sending in the half baked fix. My workspace switch may have
> > caused the error.
> >>
> >>
> >> -boris
> >>
> > Anchal
> >>
> >>> +     return register_pm_notifier(&xen_pm_notifier_block);
> >>> +}
> >>> +
Boris Ostrovsky Sept. 15, 2020, 7:58 p.m. UTC | #6
>>
>>
>>>>> +
>>>>> +static int xen_setup_pm_notifier(void)
>>>>> +{
>>>>> +     if (!xen_hvm_domain() || xen_initial_domain())
>>>>> +             return -ENODEV;
>>>>
>>>> I don't think this works anymore.
>>> What do you mean?
>>> The first check is for Xen domain types and the other is for architecture support.
>>> The reason I put this check here is because I wanted to segregate the two.
>>> I do not want to register this notifier at all for a !hvm guest and also if it's
>>> an initial control domain.
>>> The ARM check only lands in the notifier because once the hibernate() API is
>>> called, it calls pm_notifier_call_chain() for PM_HIBERNATION_PREPARE, and this
>>> will fail for aarch64.
>>> Once we have support for aarch64 this notifier can go away altogether.
>>>
>>> Is there any other reason I may be missing why we should move this check to
>>> the notifier?
>>
>>
>> Not registering this notifier is equivalent to having it return NOTIFY_OK.
>>
> How is that different from current behavior?
>>
>> In your earlier versions just returning NOTIFY_OK was not sufficient for
>> hibernation to proceed since the notifier would also need to set
>> suspend_mode appropriately. But now your notifier essentially filters
>> out unsupported configurations. And so if it is not called your
>> configuration (e.g. PV domain) will be considered supported.
>>
> I am sorry if I am having a bit of a hard time understanding this.
> How will it be considered supported when it's not even registered? My
> understanding is if it's not registered, it will not land in the notifier call chain
> which is invoked in pm_notifier_call_chain().


Returning an error from xen_setup_pm_notifier() doesn't have any effect
on whether hibernation will start. It's the notifier that can stop it.

> 
> As Roger mentioned in the last series, none of this should be a part of PVH dom0
> hibernation as it's not tested, but this series should also not break anything.
> If I register this notifier for PVH dom0 and return an error later, that will alter
> the current behavior, right?
>
> If a pm_notifier for PVH dom0 is not registered then it will not land in the
> notifier call chain and the system will work as before this series.
> If I look for unsupported configurations, then a !hvm domain is also one, but we
> filter that out at the beginning and don't even bother about it.
>
> Unless you mean a guest running VMs itself? [Trying to read between the lines;
> that may not be the case though]



In hibernate():

        error = __pm_notifier_call_chain(PM_HIBERNATION_PREPARE, -1,
&nr_calls);
        if (error) {
                nr_calls--;
                goto Exit;
        }


If you don't have the notifier registered (as will be the case with PV
domains and dom0) you won't get an error and will proceed with hibernation.
(And now I actually suspect it didn't work even with your previous patches)


But something like this I think will do what you want:


static int xen_pm_notifier(struct notifier_block *notifier,
	unsigned long pm_event, void *unused)
{
	if (IS_ENABLED(CONFIG_ARM64) ||
	    !xen_hvm_domain() || xen_initial_domain() ||
	    (pm_event == PM_SUSPEND_PREPARE)) {
		if ((pm_event == PM_SUSPEND_PREPARE) ||
		    (pm_event == PM_HIBERNATION_PREPARE))
			pr_warn("%s is not supported for this guest",
				(pm_event == PM_SUSPEND_PREPARE) ?
				"Suspend" : "Hibernation");
		return NOTIFY_BAD;
	}

	return NOTIFY_OK;
}

static int xen_setup_pm_notifier(void)
{
	return register_pm_notifier(&xen_pm_notifier_block);
}


I tried to see if there is a way to prevent hibernation without using
notifiers (like having a global flag or something) but didn't find
anything obvious. Perhaps others can point to a simpler way of doing this.


-boris
Anchal Agarwal Sept. 21, 2020, 9:54 p.m. UTC | #7
On Tue, Sep 15, 2020 at 03:58:45PM -0400, boris.ostrovsky@oracle.com wrote:
> >>
> >>
> >>>>> +
> >>>>> +static int xen_setup_pm_notifier(void)
> >>>>> +{
> >>>>> +     if (!xen_hvm_domain() || xen_initial_domain())
> >>>>> +             return -ENODEV;
> >>>>
> >>>> I don't think this works anymore.
> >>> What do you mean?
> >>> The first check is for Xen domain types and the other is for architecture support.
> >>> The reason I put this check here is because I wanted to segregate the two.
> >>> I do not want to register this notifier at all for a !hvm guest and also if it's
> >>> an initial control domain.
> >>> The ARM check only lands in the notifier because once the hibernate() API is
> >>> called, it calls pm_notifier_call_chain() for PM_HIBERNATION_PREPARE, and this
> >>> will fail for aarch64.
> >>> Once we have support for aarch64 this notifier can go away altogether.
> >>>
> >>> Is there any other reason I may be missing why we should move this check to
> >>> the notifier?
> >>
> >>
> >> Not registering this notifier is equivalent to having it return NOTIFY_OK.
> >>
> > How is that different from current behavior?
> >>
> >> In your earlier versions just returning NOTIFY_OK was not sufficient for
> >> hibernation to proceed since the notifier would also need to set
> >> suspend_mode appropriately. But now your notifier essentially filters
> >> out unsupported configurations. And so if it is not called your
> >> configuration (e.g. PV domain) will be considered supported.
> >>
> > I am sorry if I am having a bit of a hard time understanding this.
> > How will it be considered supported when it's not even registered? My
> > understanding is if it's not registered, it will not land in the notifier call chain
> > which is invoked in pm_notifier_call_chain().
> 
> 
> Returning an error from xen_setup_pm_notifier() doesn't have any effect
> on whether hibernation will start. It's the notifier that can stop it.
>
I actually got the point that we have to return an error for certain scenarios
to fail.
What I was trying to understand is this scenario: if there is no notifier involved and
xen_initial_domain() is true, what should happen when hibernation is triggered on such a domain?

After my changes, yes, it should not be able to proceed, as you suggested below,
when hibernation is triggered from within a Xen guest.
> >
> > As Roger mentioned in the last series, none of this should be a part of PVH dom0
> > hibernation as it's not tested, but this series should also not break anything.
> > If I register this notifier for PVH dom0 and return an error later, that will alter
> > the current behavior, right?
> >
> > If a pm_notifier for PVH dom0 is not registered then it will not land in the
> > notifier call chain and the system will work as before this series.
> > If I look for unsupported configurations, then a !hvm domain is also one, but we
> > filter that out at the beginning and don't even bother about it.
> >
> > Unless you mean a guest running VMs itself? [Trying to read between the lines;
> > that may not be the case though]
> 
> 
> 
> In hibernate():
> 
>         error = __pm_notifier_call_chain(PM_HIBERNATION_PREPARE, -1,
> &nr_calls);
>         if (error) {
>                 nr_calls--;
>                 goto Exit;
>         }
> 
> 
> If you don't have the notifier registered (as will be the case with PV
> domains and dom0) you won't get an error and will proceed with hibernation.
> (And now I actually suspect it didn't work even with your previous patches)
> 
> 
> But something like this I think will do what you want:
> 
> 
> static int xen_pm_notifier(struct notifier_block *notifier,
>         unsigned long pm_event, void *unused)
> {
>        if (IS_ENABLED(CONFIG_ARM64) ||
>            !xen_hvm_domain() || xen_initial_domain() ||
>            (pm_event == PM_SUSPEND_PREPARE)) {
>                 if ((pm_event == PM_SUSPEND_PREPARE) ||
>                     (pm_event == PM_HIBERNATION_PREPARE))
>                         pr_warn("%s is not supported for this guest",
>                                 (pm_event == PM_SUSPEND_PREPARE) ?
>                                 "Suspend" : "Hibernation");
>                 return NOTIFY_BAD;
>         }
>
>         return NOTIFY_OK;
> }
>
> static int xen_setup_pm_notifier(void)
> {
>         return register_pm_notifier(&xen_pm_notifier_block);
> }
> 
> 
Thanks for the above suggestion. You are right I didn't find a way to declare
a global state either. I just broke the above check in 2 so that once we have
support for ARM we should be able to remove aarch64 condition easily. Let me
know if I am missing any corner cases with this one.

static int xen_pm_notifier(struct notifier_block *notifier,
	unsigned long pm_event, void *unused)
{
    int ret = NOTIFY_OK;
    if (!xen_hvm_domain() || xen_initial_domain())
	ret = NOTIFY_BAD;
    if(IS_ENABLED(CONFIG_ARM64) && (pm_event == PM_SUSPEND_PREPARE || pm_event == HIBERNATION_PREPARE))
	ret = NOTIFY_BAD;

    return ret;
}

> I tried to see if there is a way to prevent hibernation without using
> notifiers (like having a global flag or something) but didn't find
> anything obvious. Perhaps others can point to a simpler way of doing this.
> 
> 
> -boris
> 
>
-Anchal
Boris Ostrovsky Sept. 22, 2020, 4:18 p.m. UTC | #8
On 9/21/20 5:54 PM, Anchal Agarwal wrote:
> Thanks for the above suggestion. You are right I didn't find a way to declare
> a global state either. I just broke the above check in 2 so that once we have
> support for ARM we should be able to remove aarch64 condition easily. Let me
> know if I am missing any corner cases with this one.
>
> static int xen_pm_notifier(struct notifier_block *notifier,
> 	unsigned long pm_event, void *unused)
> {
>     int ret = NOTIFY_OK;
>     if (!xen_hvm_domain() || xen_initial_domain())
> 	ret = NOTIFY_BAD;
>     if(IS_ENABLED(CONFIG_ARM64) && (pm_event == PM_SUSPEND_PREPARE || pm_event == HIBERNATION_PREPARE))
> 	ret = NOTIFY_BAD;
>
>     return ret;
> }



This will allow PM suspend to proceed on x86.


-boris
Anchal Agarwal Sept. 22, 2020, 11:17 p.m. UTC | #9
On Tue, Sep 22, 2020 at 12:18:05PM -0400, boris.ostrovsky@oracle.com wrote:
> On 9/21/20 5:54 PM, Anchal Agarwal wrote:
> > Thanks for the above suggestion. You are right I didn't find a way to declare
> > a global state either. I just broke the above check in 2 so that once we have
> > support for ARM we should be able to remove aarch64 condition easily. Let me
> > know if I am missing any corner cases with this one.
> >
> > static int xen_pm_notifier(struct notifier_block *notifier,
> >       unsigned long pm_event, void *unused)
> > {
> >     int ret = NOTIFY_OK;
> >     if (!xen_hvm_domain() || xen_initial_domain())
> >       ret = NOTIFY_BAD;
> >     if(IS_ENABLED(CONFIG_ARM64) && (pm_event == PM_SUSPEND_PREPARE || pm_event == HIBERNATION_PREPARE))
> >       ret = NOTIFY_BAD;
> >
> >     return ret;
> > }
> 
> 
> 
> This will allow PM suspend to proceed on x86.
Right!! Missed it.
Also, wrt KASLR stuff, that issue is still seen sometimes but I haven't had
bandwidth to dive deep into the issue and fix it. I seem to have lost your email
in my inbox hence covering the question here.
> 
> 
> -boris
>
Boris Ostrovsky Sept. 25, 2020, 8:02 p.m. UTC | #10
On 9/25/20 3:04 PM, Anchal Agarwal wrote:
> On Tue, Sep 22, 2020 at 11:17:36PM +0000, Anchal Agarwal wrote:
>> On Tue, Sep 22, 2020 at 12:18:05PM -0400, boris.ostrovsky@oracle.com wrote:
>>> On 9/21/20 5:54 PM, Anchal Agarwal wrote:
>>>> Thanks for the above suggestion. You are right I didn't find a way to declare
>>>> a global state either. I just broke the above check in 2 so that once we have
>>>> support for ARM we should be able to remove aarch64 condition easily. Let me
>>>> know if I am missing any corner cases with this one.
>>>>
>>>> static int xen_pm_notifier(struct notifier_block *notifier,
>>>>       unsigned long pm_event, void *unused)
>>>> {
>>>>     int ret = NOTIFY_OK;
>>>>     if (!xen_hvm_domain() || xen_initial_domain())
>>>>       ret = NOTIFY_BAD;
>>>>     if(IS_ENABLED(CONFIG_ARM64) && (pm_event == PM_SUSPEND_PREPARE || pm_event == HIBERNATION_PREPARE))
>>>>       ret = NOTIFY_BAD;
>>>>
>>>>     return ret;
>>>> }
>>>
>>>
>>> This will allow PM suspend to proceed on x86.
>> Right!! Missed it.
>> Also, wrt KASLR stuff, that issue is still seen sometimes but I haven't had
>> bandwidth to dive deep into the issue and fix it.


So what's the plan there? You first mentioned this issue early this year and judging by your response it is not clear whether you will ever spend time looking at it.


>>  I seem to have lost your email
>> in my inbox hence covering the question here.
>>>
> Can I add your Reviewed-by or Signed-off-by to it?


Are you asking me to add my R-b to the broken code above?


-boris
Anchal Agarwal Sept. 25, 2020, 10:28 p.m. UTC | #11
On Fri, Sep 25, 2020 at 04:02:58PM -0400, boris.ostrovsky@oracle.com wrote:
> On 9/25/20 3:04 PM, Anchal Agarwal wrote:
> > On Tue, Sep 22, 2020 at 11:17:36PM +0000, Anchal Agarwal wrote:
> >> On Tue, Sep 22, 2020 at 12:18:05PM -0400, boris.ostrovsky@oracle.com wrote:
> >>> On 9/21/20 5:54 PM, Anchal Agarwal wrote:
> >>>> Thanks for the above suggestion. You are right I didn't find a way to declare
> >>>> a global state either. I just broke the above check in 2 so that once we have
> >>>> support for ARM we should be able to remove aarch64 condition easily. Let me
> >>>> know if I am missing any corner cases with this one.
> >>>>
> >>>> static int xen_pm_notifier(struct notifier_block *notifier,
> >>>>       unsigned long pm_event, void *unused)
> >>>> {
> >>>>     int ret = NOTIFY_OK;
> >>>>     if (!xen_hvm_domain() || xen_initial_domain())
> >>>>       ret = NOTIFY_BAD;
> >>>>     if(IS_ENABLED(CONFIG_ARM64) && (pm_event == PM_SUSPEND_PREPARE || pm_event == HIBERNATION_PREPARE))
> >>>>       ret = NOTIFY_BAD;
> >>>>
> >>>>     return ret;
> >>>> }
> >>>
> >>>
> >>> This will allow PM suspend to proceed on x86.
> >> Right!! Missed it.
> >> Also, wrt KASLR stuff, that issue is still seen sometimes but I haven't had
> >> bandwidth to dive deep into the issue and fix it.
> 
> 
> So what's the plan there? You first mentioned this issue early this year and judging by your response it is not clear whether you will ever spend time looking at it.
> 
I do want to fix it and did do some debugging earlier this year just haven't
gotten back to it. Also, wanted to understand if the issue is a blocker to this
series?
I had some theories when debugging around this like if the random base address picked by kaslr for the
resuming kernel mismatches the suspended kernel and just jogging my memory, I didn't find that to be the case.
Another hunch was if physical address of registered vcpu info at boot is different from what suspended kernel
has and that can cause CPUs to get stuck when coming online. The issue was only
reproducible 3% of the time out of 3000 runs, hence it's hard to just reproduce this.

Moreover, I also wanted to get an insight into whether hibernation works correctly with KASLR
generally, and whether it's only Xen causing the issue?
> 
> >>  I seem to have lost your email
> >> in my inbox hence covering the question here.
> >>>
> > Can I add your Reviewed-by or Signed-off-by to it?
> 
> 
> Are you asking me to add my R-b to the broken code above?
> 
Of course not!! After it's fixed.
Well, we can forget it for now then!
> 
> -boris
> 
Thanks,
Anchal
Boris Ostrovsky Sept. 28, 2020, 6:49 p.m. UTC | #12
On 9/25/20 6:28 PM, Anchal Agarwal wrote:
> On Fri, Sep 25, 2020 at 04:02:58PM -0400, boris.ostrovsky@oracle.com wrote:
>> On 9/25/20 3:04 PM, Anchal Agarwal wrote:
>>> On Tue, Sep 22, 2020 at 11:17:36PM +0000, Anchal Agarwal wrote:
>>>> On Tue, Sep 22, 2020 at 12:18:05PM -0400, boris.ostrovsky@oracle.com wrote:
>>>>> On 9/21/20 5:54 PM, Anchal Agarwal wrote:

>>>>> Also, wrt KASLR stuff, that issue is still seen sometimes but I haven't had
>>>>> bandwidth to dive deep into the issue and fix it.
>>
>> So what's the plan there? You first mentioned this issue early this year and judging by your response it is not clear whether you will ever spend time looking at it.
>>
> I do want to fix it and did do some debugging earlier this year just haven't
> gotten back to it. Also, wanted to understand if the issue is a blocker to this
> series?


Integrating code with known bugs is less than ideal.


3% failure for this feature seems to be a manageable number from the reproducibility perspective --- you should be able to script this and each iteration should take way under a minute, no?


> I had some theories when debugging around this like if the random base address picked by kaslr for the
> resuming kernel mismatches the suspended kernel and just jogging my memory, I didn't find that to be the case.
> Another hunch was if physical address of registered vcpu info at boot is different from what suspended kernel
> has and that can cause CPUs to get stuck when coming online.


I'd think if this were the case you'd have 100% failure rate. And we are also re-registering vcpu info on xen restore and I am not aware of any failures due to KASLR.


> The issue was only
> reproducible 3% of the time out of 3000 runs, hence it's hard to just reproduce this.
>
> Moreover, I also wanted to get an insight into whether hibernation works correctly with KASLR
> generally, and whether it's only Xen causing the issue?


With KASLR being on by default I'd be surprised if it didn't.


-boris
Anchal Agarwal Sept. 30, 2020, 9:29 p.m. UTC | #13
On Mon, Sep 28, 2020 at 02:49:56PM -0400, boris.ostrovsky@oracle.com wrote:
> On 9/25/20 6:28 PM, Anchal Agarwal wrote:
> > On Fri, Sep 25, 2020 at 04:02:58PM -0400, boris.ostrovsky@oracle.com wrote:
> >> On 9/25/20 3:04 PM, Anchal Agarwal wrote:
> >>> On Tue, Sep 22, 2020 at 11:17:36PM +0000, Anchal Agarwal wrote:
> >>>> On Tue, Sep 22, 2020 at 12:18:05PM -0400, boris.ostrovsky@oracle.com wrote:
> >>>>> On 9/21/20 5:54 PM, Anchal Agarwal wrote:
> 
> >>>>> Also, wrt KASLR stuff, that issue is still seen sometimes but I haven't had
> >>>>> bandwidth to dive deep into the issue and fix it.
> >>
> >> So what's the plan there? You first mentioned this issue early this year and judging by your response it is not clear whether you will ever spend time looking at it.
> >>
> > I do want to fix it and did do some debugging earlier this year just haven't
> > gotten back to it. Also, wanted to understand if the issue is a blocker to this
> > series?
> 
> 
> Integrating code with known bugs is less than ideal.
> 
So for this series to be accepted, KASLR needs to be fixed along with other
comments of course? 
> 
> 3% failure for this feature seems to be a manageable number from the reproducibility perspective --- you should be able to script this and each iteration should take way under a minute, no?
> 
>
Yes, it should be doable. The % is not constant here; that's the max I have seen.
Also, if at worst it takes a min per run and I have to run 2000-3000 runs to
produce a failure, that will still be slow. I have to dig in to see if I can find
a better way.

> > I had some theories when debugging around this like if the random base address picked by kaslr for the
> > resuming kernel mismatches the suspended kernel and just jogging my memory, I didn't find that to be the case.
> > Another hunch was if physical address of registered vcpu info at boot is different from what suspended kernel
> > has and that can cause CPUs to get stuck when coming online.
> 
> 
> I'd think if this were the case you'd have 100% failure rate. And we are also re-registering vcpu info on xen restore and I am not aware of any failures due to KASLR.
> 
What I meant there wrt VCPU info was that VCPU info is not unregistered during hibernation,
so Xen still remembers the old physical addresses for the VCPU information, created by the
booting kernel. But since the hibernation kernel may have different physical
addresses for VCPU info and if mismatch happens, it may cause issues with resume. 
During hibernation, the VCPU info register hypercall is not invoked again.
> 
> > The issue was only
> > reproducible 3% of the time out of 3000 runs, hence it's hard to just reproduce this.
> >
> > Moreover, I also wanted to get an insight into whether hibernation works correctly with KASLR
> > generally, and whether it's only Xen causing the issue?
> 
> 
> With KASLR being on by default I'd be surprised if it didn't.
>
That makes it Xen-specific then. Also, I have not seen the issue on KVM-based
instances.
> 
> -boris
> 
- Anchal
Boris Ostrovsky Oct. 1, 2020, 12:43 p.m. UTC | #14
>>>>>>> Also, wrt KASLR stuff, that issue is still seen sometimes but I haven't had
>>>>>>> bandwidth to dive deep into the issue and fix it.
>>>> So what's the plan there? You first mentioned this issue early this year and judging by your response it is not clear whether you will ever spend time looking at it.
>>>>
>>> I do want to fix it and did do some debugging earlier this year just haven't
>>> gotten back to it. Also, wanted to understand if the issue is a blocker to this
>>> series?
>>
>> Integrating code with known bugs is less than ideal.
>>
> So for this series to be accepted, KASLR needs to be fixed along with other
> comments of course? 


Yes, please.



>>> I had some theories when debugging around this like if the random base address picked by kaslr for the
>>> resuming kernel mismatches the suspended kernel and just jogging my memory, I didn't find that to be the case.
>>> Another hunch was if physical address of registered vcpu info at boot is different from what suspended kernel
>>> has and that can cause CPUs to get stuck when coming online.
>>
>> I'd think if this were the case you'd have 100% failure rate. And we are also re-registering vcpu info on xen restore and I am not aware of any failures due to KASLR.
>>
> What I meant there wrt VCPU info was that VCPU info is not unregistered during hibernation,
> so Xen still remembers the old physical addresses for the VCPU information, created by the
> booting kernel. But since the hibernation kernel may have different physical
> addresses for VCPU info and if mismatch happens, it may cause issues with resume. 
> During hibernation, the VCPU info register hypercall is not invoked again.


I still don't think that's the cause but it's certainly worth having a look.


-boris
Anchal Agarwal May 21, 2021, 5:26 a.m. UTC | #15
On Thu, Oct 01, 2020 at 08:43:58AM -0400, boris.ostrovsky@oracle.com wrote:
> >>>>>>> Also, wrt KASLR stuff, that issue is still seen sometimes but I haven't had
> >>>>>>> bandwidth to dive deep into the issue and fix it.
> >>>> So what's the plan there? You first mentioned this issue early this year and judging by your response it is not clear whether you will ever spend time looking at it.
> >>>>
> >>> I do want to fix it and did do some debugging earlier this year just haven't
> >>> gotten back to it. Also, wanted to understand if the issue is a blocker to this
> >>> series?
> >>
> >> Integrating code with known bugs is less than ideal.
> >>
> > So for this series to be accepted, KASLR needs to be fixed along with other
> > comments of course?
> 
> 
> Yes, please.
> 
> 
> 
> >>> I had some theories when debugging around this like if the random base address picked by kaslr for the
> >>> resuming kernel mismatches the suspended kernel and just jogging my memory, I didn't find that to be the case.
> >>> Another hunch was if physical address of registered vcpu info at boot is different from what suspended kernel
> >>> has and that can cause CPUs to get stuck when coming online.
> >>
> >> I'd think if this were the case you'd have 100% failure rate. And we are also re-registering vcpu info on xen restore and I am not aware of any failures due to KASLR.
> >>
> > What I meant there wrt VCPU info was that VCPU info is not unregistered during hibernation,
> > so Xen still remembers the old physical addresses for the VCPU information, created by the
> > booting kernel. But since the hibernation kernel may have different physical
> > addresses for VCPU info and if mismatch happens, it may cause issues with resume.
> > During hibernation, the VCPU info register hypercall is not invoked again.
> 
> 
> I still don't think that's the cause but it's certainly worth having a look.
> 
Hi Boris,
Apologies for picking this up after last year. 
I did a deep dive on the above statement and that is indeed what's happening.
I did some debugging around KASLR and hibernation using reboot mode.
I observed in my debug prints that whenever vcpu_info* address for secondary vcpu assigned 
in xen_vcpu_setup at boot is different than what is in the image, resume gets stuck for that vcpu
in bringup_cpu(). That means we have different addresses for &per_cpu(xen_vcpu_info, cpu) at boot and after
control jumps into the image. 

I failed to get any prints after it got stuck in bringup_cpu() and
I do not have an option to send a sysrq signal to the guest or rather get a kdump.
This change is not observed in every hibernate-resume cycle. I am not sure if this is a bug or an 
expected behavior. 
Also, I am contemplating the idea that it may be a bug in xen code getting triggered only when
KASLR is enabled but I do not have substantial data to prove that.
Is this a coincidence that this always happens for 1st vcpu?
Moreover, since the hypervisor is not aware that the guest is hibernated and it looks like a regular shutdown to dom0 during reboot mode,
is re-registering vcpu_info for secondary vcpus even plausible? I could definitely use some advice to debug this further.
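
For reference, the registration in question happens in xen_vcpu_setup() and looks
roughly like this (a simplified sketch, not the literal source):

	struct vcpu_register_vcpu_info info;
	struct vcpu_info *vcpup = &per_cpu(xen_vcpu_info, cpu);

	info.mfn = arbitrary_virt_to_mfn(vcpup);
	info.offset = offset_in_page(vcpup);

	/* Once this has succeeded for a given address, nothing in the
	 * hibernation resume path issues it again, so the hypervisor keeps
	 * delivering to the address registered by the boot kernel.
	 */
	err = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, cpu, &info);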

 
Some printk's from my debugging:

At Boot:

xen_vcpu_setup: xen_have_vcpu_info_placement=1 cpu=1, vcpup=0xffff9e548fa560e0, info.mfn=3996246 info.offset=224,

Image Loads:
It ends up in the condition:
 xen_vcpu_setup()
 {
 ...
 if (xen_hvm_domain()) {
        if (per_cpu(xen_vcpu, cpu) == &per_cpu(xen_vcpu_info, cpu))
                return 0; 
 }
 ...
 }

xen_vcpu_setup: checking mfn on resume cpu=1, info.mfn=3934806 info.offset=224, &per_cpu(xen_vcpu_info, cpu)=0xffff9d7240a560e0

This is tested on c4.2xlarge [8vcpu 15GB mem] instance with 5.10 kernel running
in the guest.

Thanks,
Anchal.
> 
> -boris
> 
>
Boris Ostrovsky May 25, 2021, 10:23 p.m. UTC | #16
On 5/21/21 1:26 AM, Anchal Agarwal wrote:
>>> What I meant there wrt VCPU info was that VCPU info is not unregistered during hibernation,
>>> so Xen still remembers the old physical addresses for the VCPU information, created by the
>>> booting kernel. But since the hibernation kernel may have different physical
>>> addresses for VCPU info and if mismatch happens, it may cause issues with resume.
>>> During hibernation, the VCPU info register hypercall is not invoked again.
>>
>> I still don't think that's the cause but it's certainly worth having a look.
>>
> Hi Boris,
> Apologies for picking this up after last year. 
> I did a deep dive on the above statement and that is indeed what's happening.
> I did some debugging around KASLR and hibernation using reboot mode.
> I observed in my debug prints that whenever vcpu_info* address for secondary vcpu assigned 
> in xen_vcpu_setup at boot is different than what is in the image, resume gets stuck for that vcpu
> in bringup_cpu(). That means we have different addresses for &per_cpu(xen_vcpu_info, cpu) at boot and after
> control jumps into the image. 
>
> I failed to get any prints after it got stuck in bringup_cpu() and
> I do not have an option to send a sysrq signal to the guest or rather get a kdump.


xenctx and xen-hvmctx might be helpful.


> This change is not observed in every hibernate-resume cycle. I am not sure if this is a bug or an 
> expected behavior. 
> Also, I am contemplating the idea that it may be a bug in xen code getting triggered only when
> KASLR is enabled but I do not have substantial data to prove that.
> Is this a coincidence that this always happens for 1st vcpu?
> Moreover, since the hypervisor is not aware that the guest is hibernated and it looks like a regular shutdown to dom0 during reboot mode,
> is re-registering vcpu_info for secondary vcpus even plausible?


I think I am missing how this is supposed to work (maybe we've talked about this but it's been many months since then). You hibernate the guest and it writes the state to swap. The guest is then shut down? And what's next? How do you wake it up?


-boris



>  I could definitely use some advice to debug this further.
>
>  
> Some printk's from my debugging:
>
> At Boot:
>
> xen_vcpu_setup: xen_have_vcpu_info_placement=1 cpu=1, vcpup=0xffff9e548fa560e0, info.mfn=3996246 info.offset=224,
>
> Image Loads:
> It ends up in the condition:
>  xen_vcpu_setup()
>  {
>  ...
>  if (xen_hvm_domain()) {
>         if (per_cpu(xen_vcpu, cpu) == &per_cpu(xen_vcpu_info, cpu))
>                 return 0; 
>  }
>  ...
>  }
>
> xen_vcpu_setup: checking mfn on resume cpu=1, info.mfn=3934806 info.offset=224, &per_cpu(xen_vcpu_info, cpu)=0xffff9d7240a560e0
>
> This is tested on c4.2xlarge [8vcpu 15GB mem] instance with 5.10 kernel running
> in the guest.
>
> Thanks,
> Anchal.
>> -boris
>>
>>
Anchal Agarwal May 26, 2021, 4:40 a.m. UTC | #17
On Tue, May 25, 2021 at 06:23:35PM -0400, Boris Ostrovsky wrote:
> On 5/21/21 1:26 AM, Anchal Agarwal wrote:
> >>> What I meant there wrt VCPU info was that VCPU info is not unregistered during hibernation,
> >>> so Xen still remembers the old physical addresses for the VCPU information, created by the
> >>> booting kernel. But since the hibernation kernel may have different physical
> >>> addresses for VCPU info and if mismatch happens, it may cause issues with resume.
> >>> During hibernation, the VCPU info register hypercall is not invoked again.
> >>
> >> I still don't think that's the cause but it's certainly worth having a look.
> >>
> > Hi Boris,
> > Apologies for picking this up after last year.
> > I did a deep dive on the above statement and that is indeed what's happening.
> > I did some debugging around KASLR and hibernation using reboot mode.
> > I observed in my debug prints that whenever vcpu_info* address for secondary vcpu assigned
> > in xen_vcpu_setup at boot is different than what is in the image, resume gets stuck for that vcpu
> > in bringup_cpu(). That means we have different addresses for &per_cpu(xen_vcpu_info, cpu) at boot and after
> > control jumps into the image.
> >
> > I failed to get any prints after it got stuck in bringup_cpu() and
> > I do not have an option to send a sysrq signal to the guest or rather get a kdump.
> 
> 
> xenctx and xen-hvmctx might be helpful.
> 
> 
> > This change is not observed in every hibernate-resume cycle. I am not sure if this is a bug or an
> > expected behavior.
> > Also, I am contemplating the idea that it may be a bug in xen code getting triggered only when
> > KASLR is enabled but I do not have substantial data to prove that.
> > Is this a coincidence that this always happens for 1st vcpu?
> > Moreover, since hypervisor is not aware that guest is hibernated and it looks like a regular shutdown to dom0 during reboot mode,
> > will re-registering vcpu_info for secondary vcpu's even plausible?
> 
> 
> I think I am missing how this is supposed to work (maybe we've talked about this but it's been many months since then). You hibernate the guest and it writes the state to swap. The guest is then shut down? And what's next? How do you wake it up?
> 
> 
> -boris
> 
To resume a guest, the guest boots up as a fresh guest and then software_resume()
is called, which, if it finds a stored hibernation image, quiesces the devices and loads
the memory contents from the image. Control then transfers to the target kernel,
which disables the non-boot cpus; the syscore_suspend/resume callbacks are invoked, which set up
the shared_info, pvclock, grant tables etc. Since the vcpu_info pointer for each
non-boot cpu is already registered, the hypercall does not happen again when
bringing up the non-boot cpus. This leads to the inconsistencies pointed
out earlier when KASLR is enabled.
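
For reference, the registration that gets skipped looks roughly like this -- a
trimmed paraphrase of xen_vcpu_setup() in arch/x86/xen/enlighten.c, not the
verbatim source:

	struct vcpu_register_vcpu_info info;
	struct vcpu_info *vcpup = &per_cpu(xen_vcpu_info, cpu);

	info.mfn = arbitrary_virt_to_mfn(vcpup);
	info.offset = offset_in_page(vcpup);

	/* Tell Xen where to deliver this vcpu's vcpu_info from now on.
	 * On resume this is never reached, because per_cpu(xen_vcpu, cpu)
	 * in the loaded image already equals &per_cpu(xen_vcpu_info, cpu). */
	err = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, xen_vcpu_nr(cpu),
				 &info);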

Thanks,
Anchal
Boris Ostrovsky May 26, 2021, 6:29 p.m. UTC | #18
On 5/26/21 12:40 AM, Anchal Agarwal wrote:
> On Tue, May 25, 2021 at 06:23:35PM -0400, Boris Ostrovsky wrote:
>>
>> On 5/21/21 1:26 AM, Anchal Agarwal wrote:
>>>>> What I meant there wrt VCPU info was that VCPU info is not unregistered during hibernation,
>>>>> so Xen still remembers the old physical addresses for the VCPU information, created by the
>>>>> booting kernel. But since the hibernation kernel may have different physical
>>>>> addresses for VCPU info and if mismatch happens, it may cause issues with resume.
>>>>> During hibernation, the VCPU info register hypercall is not invoked again.
>>>> I still don't think that's the cause but it's certainly worth having a look.
>>>>
>>> Hi Boris,
>>> Apologies for picking this up after last year.
>>> I did some dive deep on the above statement and that is indeed the case that's happening.
>>> I did some debugging around KASLR and hibernation using reboot mode.
>>> I observed in my debug prints that whenever vcpu_info* address for secondary vcpu assigned
>>> in xen_vcpu_setup at boot is different than what is in the image, resume gets stuck for that vcpu
>>> in bringup_cpu(). That means we have different addresses for &per_cpu(xen_vcpu_info, cpu) at boot and after
>>> control jumps into the image.
>>>
>>> I failed to get any prints after it got stuck in bringup_cpu() and
>>> I do not have an option to send a sysrq signal to the guest or rather get a kdump.
>>
>> xenctx and xen-hvmctx might be helpful.
>>
>>
>>> This change is not observed in every hibernate-resume cycle. I am not sure if this is a bug or an
>>> expected behavior.
>>> Also, I am contemplating the idea that it may be a bug in xen code getting triggered only when
>>> KASLR is enabled but I do not have substantial data to prove that.
>>> Is this a coincidence that this always happens for 1st vcpu?
>>> Moreover, since hypervisor is not aware that guest is hibernated and it looks like a regular shutdown to dom0 during reboot mode,
>>> will re-registering vcpu_info for secondary vcpu's even plausible?
>>
>> I think I am missing how this is supposed to work (maybe we've talked about this but it's been many months since then). You hibernate the guest and it writes the state to swap. The guest is then shut down? And what's next? How do you wake it up?
>>
>>
>> -boris
>>
> To resume a guest, guest boots up as the fresh guest and then software_resume()
> is called which if finds a stored hibernation image, quiesces the devices and loads 
> the memory contents from the image. The control then transfers to the targeted kernel.
> This further disables non boot cpus,sycore_suspend/resume callbacks are invoked which sets up
> the shared_info, pvclock, grant tables etc. Since the vcpu_info pointer for each
> non-boot cpu is already registered, the hypercall does not happen again when
> bringing up the non boot cpus. This leads to inconsistencies as pointed
> out earlier when KASLR is enabled.


I'd think the 'if' condition in the code fragment below should always fail, since the hypervisor is creating a new guest, resulting in the hypercall -- just like in the case of save/restore.


Do you call xen_vcpu_info_reset() on resume? That will re-initialize per_cpu(xen_vcpu). Maybe you need to add this to xen_syscore_resume().
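
Something along these lines -- an untested sketch, assuming the
xen_syscore_resume() callback that this series introduces:

	static void xen_syscore_resume(void)
	{
		int cpu;

		/* ... existing shared_info/grant-table restore ... */

		/* Point per_cpu(xen_vcpu) back at the vcpu_info slots in
		 * the shared info page before secondary CPUs come back up. */
		for_each_possible_cpu(cpu)
			xen_vcpu_info_reset(cpu);
	}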


-boris


>
> Thanks,
> Anchal
>>
>>>  I could definitely use some advice to debug this further.
>>>
>>>
>>> Some printk's from my debugging:
>>>
>>> At Boot:
>>>
>>> xen_vcpu_setup: xen_have_vcpu_info_placement=1 cpu=1, vcpup=0xffff9e548fa560e0, info.mfn=3996246 info.offset=224,
>>>
>>> Image Loads:
>>> It ends up in the condition:
>>>  xen_vcpu_setup()
>>>  {
>>>  ...
>>>  if (xen_hvm_domain()) {
>>>         if (per_cpu(xen_vcpu, cpu) == &per_cpu(xen_vcpu_info, cpu))
>>>                 return 0;
>>>  }
>>>  ...
>>>  }
>>>
>>> xen_vcpu_setup: checking mfn on resume cpu=1, info.mfn=3934806 info.offset=224, &per_cpu(xen_vcpu_info, cpu)=0xffff9d7240a560e0
>>>
>>> This is tested on c4.2xlarge [8vcpu 15GB mem] instance with 5.10 kernel running
>>> in the guest.
>>>
>>> Thanks,
>>> Anchal.
>>>> -boris
>>>>
>>>>
Boris Ostrovsky June 1, 2021, 2:18 p.m. UTC | #19
On 5/28/21 5:50 PM, Anchal Agarwal wrote:

> That only fails during boot but not after the control jumps into the image. The
> non-boot cpus are brought offline (freeze_secondary_cpus) and then online via the cpu hotplug path. In that case xen_vcpu_setup() doesn't invoke the hypercall again.


OK, that makes sense --- by that time VCPUs have already been registered. What I don't understand though is why resume doesn't fail every time --- xen_vcpu and xen_vcpu_info should be different practically always, shouldn't they? Do you observe successful resumes when the hypercall fails?


>
> Another line of thought is something like what kexec does to get around this problem,
> which is to abuse soft_reset and issue it during syscore_resume, or maybe before the image gets loaded.
> I haven't experimented with that yet, as I am assuming there has to be a way to re-register vcpus during resume.


Right, that sounds like it should work.


-boris
Anchal Agarwal June 2, 2021, 7:37 p.m. UTC | #20
On Tue, Jun 01, 2021 at 10:18:36AM -0400, Boris Ostrovsky wrote:
> 
> On 5/28/21 5:50 PM, Anchal Agarwal wrote:
> 
> > That only fails during boot but not after the control jumps into the image. The
> > non boot cpus are brought offline(freeze_secondary_cpus) and then online via cpu hotplug path. In that case xen_vcpu_setup doesn't invokes the hypercall again.
> 
> 
> OK, that makes sense --- by that time VCPUs have already been registered. What I don't understand though is why resume doesn't fail every time --- xen_vcpu and xen_vcpu_info should be different practically always, shouldn't they? Do you observe successful resumes when the hypercall fails?
> 
> 
The resume won't fail because in the image the xen_vcpu and xen_vcpu_info are
the same. These are the values that got in there when the hibernation image was
saved. So whatever value xen_vcpu got during boot-time registration on resume is
essentially lost once the jump into the saved kernel image happens. The interesting
part is that if KASLR is not enabled, the boot-time vcpup mfn is the same as in the image.
Once you enable KASLR this value sometimes changes, and whenever that happens
resume gets stuck. Does that make sense?

No, it does not resume successfully if the hypercall fails; I tried
explicitly resetting the vcpu and invoking the hypercall.
I am just wondering why the restore logic fails to work here, or perhaps I am
missing a critical piece.
> >
> > Another line of thought is something what kexec does to come around this problem
> > is to abuse soft_reset and issue it during syscore_resume or may be before the image get loaded.
> > I haven't experimented with that yet as I am assuming there has to be a way to re-register vcpus during resume.
> 
> 
> Right, that sounds like it should work.
> 
Do you mean the soft reset, or re-registering the vcpu?

-Anchal
Boris Ostrovsky June 3, 2021, 8:11 p.m. UTC | #21
On 6/2/21 3:37 PM, Anchal Agarwal wrote:
> On Tue, Jun 01, 2021 at 10:18:36AM -0400, Boris Ostrovsky wrote:
>>
> The resume won't fail because in the image the xen_vcpu and xen_vcpu_info are
> same. These are the same values that got in there during saving of the
> hibernation image. So whatever xen_vcpu got as a value during boot time registration on resume is
> essentially lost once the jump into the saved kernel image happens. Interesting
> part is if KASLR is not enabled boot time vcpup mfn is same as in the image.


Do you start your guest right after you've hibernated it? What happens if you create (and keep running) a few other guests in between? The mfn would likely be different then, I'd think.


> Once you enable KASLR this value changes sometimes and whenever that happens
> resume gets stuck. Does that make sense?
>
> No it does not resume successfully if hypercall fails because I was trying to
> explicitly reset vcpu and invoke hypercall.
> I am just wondering why does restore logic fails to work here or probably I am
> missing a critical piece here.


If you are not using KASLR then xen_vcpu_info is at the same address every time you boot. So whatever you registered before hibernating stays the same when you boot a second time and register again, and so the successful comparison in xen_vcpu_setup() works. (Mostly by chance.)


But if KASLR is on, then this comparison not failing should cause the xen_vcpu pointer in the loaded image to become bogus, because xen_vcpu was registered for a different xen_vcpu_info address during boot.
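
Concretely, with the addresses from your printks earlier in this thread (cpu 1):

	boot kernel:    &per_cpu(xen_vcpu_info, 1) = 0xffff9e548fa560e0 (mfn 3996246)
	                -> this is what got registered with Xen
	resumed image:  &per_cpu(xen_vcpu_info, 1) = 0xffff9d7240a560e0 (mfn 3934806)
	                -> this is where per_cpu(xen_vcpu, 1) now points, while
	                   Xen keeps writing vcpu_info to the boot-time mfn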


>>> Another line of thought is something what kexec does to come around this problem
>>> is to abuse soft_reset and issue it during syscore_resume or may be before the image get loaded.
>>> I haven't experimented with that yet as I am assuming there has to be a way to re-register vcpus during resume.
>>
>> Right, that sounds like it should work.
>>
> You mean soft reset or re-register vcpu?


Doing something along the lines of a soft reset. It should allow you to re-register. Not sure how you can use it without Xen changes though. 
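
For reference, the guest side of that would be tiny -- a sketch only, using
SHUTDOWN_soft_reset from Xen's public sched interface; the interesting work is
on the hypervisor/toolstack side:

	struct sched_shutdown r = { .reason = SHUTDOWN_soft_reset };

	/* Ask the toolstack to rebuild the domain in place, which would
	 * drop the stale vcpu_info registration while preserving memory. */
	HYPERVISOR_sched_op(SCHEDOP_shutdown, &r);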



-boris
Anchal Agarwal June 3, 2021, 11:27 p.m. UTC | #22
On Thu, Jun 03, 2021 at 04:11:46PM -0400, Boris Ostrovsky wrote:
> 
> On 6/2/21 3:37 PM, Anchal Agarwal wrote:
> > On Tue, Jun 01, 2021 at 10:18:36AM -0400, Boris Ostrovsky wrote:
> >>
> > The resume won't fail because in the image the xen_vcpu and xen_vcpu_info are
> > same. These are the same values that got in there during saving of the
> > hibernation image. So whatever xen_vcpu got as a value during boot time registration on resume is
> > essentially lost once the jump into the saved kernel image happens. Interesting
> > part is if KASLR is not enabled boot time vcpup mfn is same as in the image.
> 
> 
> Do you start the your guest right after you've hibernated it? What happens if you create (and keep running) a few other guests in-between? mfn would likely be different then I'd think.
> 
>
Yes, I just run it in loops on a single guest and I am able to see the issue in
20-40 iterations, sometimes sooner. Yeah, you could be right, and this could
definitely happen more often depending on what's happening on the dom0 side.
> > Once you enable KASLR this value changes sometimes and whenever that happens
> > resume gets stuck. Does that make sense?
> >
> > No it does not resume successfully if hypercall fails because I was trying to
> > explicitly reset vcpu and invoke hypercall.
> > I am just wondering why does restore logic fails to work here or probably I am
> > missing a critical piece here.
> 
> 
> If you are not using KASLR then xen_vcpu_info is at the same address every time you boot. So whatever you registered before hibernating stays the same when you boot second time and register again, and so successful comparison in xen_vcpu_setup() works. (Mostly by chance.)
>
That's what I thought too.
> 
> But if KASLR is on then this comparison not failing should cause xen_vcpu pointer in the loaded image to become bogus because xen_vcpu is now registered for a different xen_vcpu_info address during boot.
> 
The reason for that, I think, is that once you jump into the image that information
gets lost. But there is some residue somewhere that's causing the resume to
fail. I haven't been able to pinpoint the exact field value that may be causing
the issue.
Correct me if I am wrong here, but even if, hypothetically, I put in a hack to tell the kernel
to somehow re-register the vcpu, it won't pass, because there is no hypercall to
unregister it in the first place? Can the resumed kernel use the new values in that
case? [Now this is me just throwing out wild guesses!!]

> 
> >>> Another line of thought is something what kexec does to come around this problem
> >>> is to abuse soft_reset and issue it during syscore_resume or may be before the image get loaded.
> >>> I haven't experimented with that yet as I am assuming there has to be a way to re-register vcpus during resume.
> >>
> >> Right, that sounds like it should work.
> >>
> > You mean soft reset or re-register vcpu?
> 
> 
> Doing something along the lines of a soft reset. It should allow you to re-register. Not sure how you can use it without Xen changes though.
> 
No, not without Xen changes; it won't work otherwise. I will have the Xen changes in place to
test that on our infrastructure.

--
Anchal
Boris Ostrovsky June 4, 2021, 1:49 a.m. UTC | #23
On 6/3/21 7:27 PM, Anchal Agarwal wrote:
> On Thu, Jun 03, 2021 at 04:11:46PM -0400, Boris Ostrovsky wrote:
>
>> But if KASLR is on then this comparison not failing should cause xen_vcpu pointer in the loaded image to become bogus because xen_vcpu is now registered for a different xen_vcpu_info address during boot.
>>
> The reason for that I think is once you jump into the image that information is
> getting lost. But there is  some residue somewhere that's causing the resume to
> fail. I haven't been able to pinpoint the exact field value that may be causing
> that issue.


xen_vcpu now points to an address which is not where the hypervisor thinks vcpu_info should be.


> Correct me if I am wrong here, but even if hypothetically I put a hack to tell the kernel
> somehow re-register vcpu it won't pass because there is no hypercall to
> unregister it in first place? 


Right. You will be shown the door in map_vcpu_info():

       if ( !mfn_eq(v->vcpu_info_mfn, INVALID_MFN) )
           return -EINVAL;


> Can the resumed kernel use the new values in that
> case [Now this is me just throwing wild guesses!!]


I don't think so --- the hypervisor is now pointing at a random location in your image.


-boris
diff mbox series

Patch

diff --git a/drivers/xen/manage.c b/drivers/xen/manage.c
index cd046684e0d1..d2a51c454c3e 100644
--- a/drivers/xen/manage.c
+++ b/drivers/xen/manage.c
@@ -14,6 +14,7 @@ 
 #include <linux/freezer.h>
 #include <linux/syscore_ops.h>
 #include <linux/export.h>
+#include <linux/suspend.h>
 
 #include <xen/xen.h>
 #include <xen/xenbus.h>
@@ -40,6 +41,11 @@  enum shutdown_state {
 /* Ignore multiple shutdown requests. */
 static enum shutdown_state shutting_down = SHUTDOWN_INVALID;
 
+bool is_xen_suspend(void)
+{
+	return shutting_down == SHUTDOWN_SUSPEND;
+}
+
 struct suspend_info {
 	int cancelled;
 };
@@ -99,6 +105,8 @@  static void do_suspend(void)
 	int err;
 	struct suspend_info si;
 
+	lock_system_sleep();
+
 	shutting_down = SHUTDOWN_SUSPEND;
 
 	err = freeze_processes();
@@ -162,6 +170,8 @@  static void do_suspend(void)
 	thaw_processes();
 out:
 	shutting_down = SHUTDOWN_INVALID;
+
+	unlock_system_sleep();
 }
 #endif	/* CONFIG_HIBERNATE_CALLBACKS */
 
@@ -387,3 +397,39 @@  int xen_setup_shutdown_event(void)
 EXPORT_SYMBOL_GPL(xen_setup_shutdown_event);
 
 subsys_initcall(xen_setup_shutdown_event);
+
+static int xen_pm_notifier(struct notifier_block *notifier,
+	unsigned long pm_event, void *unused)
+{
+	int ret;
+
+	switch (pm_event) {
+	case PM_SUSPEND_PREPARE:
+	case PM_HIBERNATION_PREPARE:
+		/* Guest hibernation is not supported for aarch64 currently */
+		if (IS_ENABLED(CONFIG_ARM64)) {
+			ret = NOTIFY_BAD;
+			break;
+		}
+	case PM_RESTORE_PREPARE:
+	case PM_POST_SUSPEND:
+	case PM_POST_HIBERNATION:
+	case PM_POST_RESTORE:
+	default:
+		ret = NOTIFY_OK;
+	}
+	return ret;
+};
+
+static struct notifier_block xen_pm_notifier_block = {
+	.notifier_call = xen_pm_notifier
+};
+
+static int xen_setup_pm_notifier(void)
+{
+	if (!xen_hvm_domain() || xen_initial_domain())
+		return -ENODEV;
+	return register_pm_notifier(&xen_pm_notifier_block);
+}
+
+subsys_initcall(xen_setup_pm_notifier);
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index 39a5580f8feb..e8b08734fab1 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -40,6 +40,7 @@  u64 xen_steal_clock(int cpu);
 
 int xen_setup_shutdown_event(void);
 
+bool is_xen_suspend(void);
 extern unsigned long *xen_contiguous_bitmap;
 
 #if defined(CONFIG_XEN_PV) || defined(CONFIG_ARM) || defined(CONFIG_ARM64)