diff mbox series

[v5,20/27] drm: Scope all DRM IOCTLs with drm_dev_enter/exit

Message ID 20210428151207.1212258-21-andrey.grodzovsky@amd.com (mailing list archive)
State Superseded
Headers show
Series RFC Support hot device unplug in amdgpu | expand

Commit Message

Andrey Grodzovsky April 28, 2021, 3:12 p.m. UTC
With this, calling drm_dev_unplug will flush and block
all in-flight IOCTLs.

Also, add a feature flag such that if the device supports graceful
unplug, we enclose the entire IOCTL in an SRCU critical section.

Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
---
 drivers/gpu/drm/drm_ioctl.c | 15 +++++++++++++--
 include/drm/drm_drv.h       |  6 ++++++
 2 files changed, 19 insertions(+), 2 deletions(-)

Comments

Daniel Vetter April 29, 2021, 11:23 a.m. UTC | #1
On Wed, Apr 28, 2021 at 11:12:00AM -0400, Andrey Grodzovsky wrote:
> With this calling drm_dev_unplug will flush and block
> all in flight IOCTLs
> 
> Also, add feature such that if device supports graceful unplug
> we enclose entire IOCTL in SRCU critical section.
> 
> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>

Nope.

The idea of drm_dev_enter/exit is to mark up hw access. Not the entire ioctl.

Especially not with an opt-in flag so that it could be shrugged off as a

Especially not with an opt-in flag so that it could be shrugged of as a
driver hack. Most of these ioctls should have absolutely no problem
working after hotunplug.

Also, doing this defeats the point since it pretty much guarantees
userspace will die in assert()s and stuff. E.g. on i915 the rough contract
is that only execbuf (and even that only when userspace has indicated
support for non-recoverable hw ctx) is allowed to fail. Anything else
might crash userspace.

You probably need similar (and very precisely defined) rules for amdgpu.
And those must definitely exclude any shared ioctls from randomly failing
with EIO, because that just kills the box and defeats the point of trying
to gracefully handle hotunplug and making sure userspace has a chance of
survival. E.g. for atomic everything should continue, including flip
completion, but we set all outputs to "disconnected" and send out the
uevent. Maybe crtc enabling can fail too, but that can also be handled
through the async status we're using to signal DP link failures to
userspace.

I guess we should clarify this in the hotunplug doc?

Cheers, Daniel

> ---
>  drivers/gpu/drm/drm_ioctl.c | 15 +++++++++++++--
>  include/drm/drm_drv.h       |  6 ++++++
>  2 files changed, 19 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_ioctl.c b/drivers/gpu/drm/drm_ioctl.c
> index d273d1a8603a..5882ef2183bb 100644
> --- a/drivers/gpu/drm/drm_ioctl.c
> +++ b/drivers/gpu/drm/drm_ioctl.c
> @@ -815,7 +815,7 @@ long drm_ioctl(struct file *filp,
>  	const struct drm_ioctl_desc *ioctl = NULL;
>  	drm_ioctl_t *func;
>  	unsigned int nr = DRM_IOCTL_NR(cmd);
> -	int retcode = -EINVAL;
> +	int idx, retcode = -EINVAL;
>  	char stack_kdata[128];
>  	char *kdata = NULL;
>  	unsigned int in_size, out_size, drv_size, ksize;
> @@ -884,7 +884,18 @@ long drm_ioctl(struct file *filp,
>  	if (ksize > in_size)
>  		memset(kdata + in_size, 0, ksize - in_size);
>  
> -	retcode = drm_ioctl_kernel(filp, func, kdata, ioctl->flags);
> +	if (drm_core_check_feature(dev, DRIVER_HOTUNPLUG_SUPPORT)) {
> +		if (drm_dev_enter(dev, &idx)) {
> +			retcode = drm_ioctl_kernel(filp, func, kdata, ioctl->flags);
> +			drm_dev_exit(idx);
> +		} else {
> +			retcode = -ENODEV;
> +			goto err_i1;
> +		}
> +	} else {
> +		retcode = drm_ioctl_kernel(filp, func, kdata, ioctl->flags);
> +	}
> +
>  	if (copy_to_user((void __user *)arg, kdata, out_size) != 0)
>  		retcode = -EFAULT;
>  
> diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h
> index b439ae1921b8..63e05cec46c1 100644
> --- a/include/drm/drm_drv.h
> +++ b/include/drm/drm_drv.h
> @@ -94,6 +94,12 @@ enum drm_driver_feature {
>  	 * synchronization of command submission.
>  	 */
>  	DRIVER_SYNCOBJ_TIMELINE         = BIT(6),
> +	/**
> +	 * @DRIVER_HOTUNPLUG_SUPPORT:
> +	 *
> +	 * Driver supports graceful hot unplug (device removal).
> +	 */
> +	DRIVER_HOTUNPLUG_SUPPORT         = BIT(7),
>  
>  	/* IMPORTANT: Below are all the legacy flags, add new ones above. */
>  
> -- 
> 2.25.1
>
Daniel Vetter April 29, 2021, 11:32 a.m. UTC | #2
On Thu, Apr 29, 2021 at 01:23:19PM +0200, Daniel Vetter wrote:
> On Wed, Apr 28, 2021 at 11:12:00AM -0400, Andrey Grodzovsky wrote:
> > With this calling drm_dev_unplug will flush and block
> > all in flight IOCTLs
> > 
> > Also, add feature such that if device supports graceful unplug
> > we enclose entire IOCTL in SRCU critical section.
> > 
> > Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
> 
> Nope.
> 
> The idea of drm_dev_enter/exit is to mark up hw access. Not entire ioctl.
> 
> Especially not with an opt-in flag so that it could be shrugged of as a
> driver hack. Most of these ioctls should have absolutely no problem
> working after hotunplug.
> 
> Also, doing this defeats the point since it pretty much guarantees
> userspace will die in assert()s and stuff. E.g. on i915 the rough contract
> is that only execbuf (and even that only when userspace has indicated
> support for non-recoverable hw ctx) is allowed to fail. Anything else
> might crash userspace.
> 
> You probably need similar (and very precisely defined) rules for amdgpu.
> And those must definitely exclude any shard ioctls from randomly failing
> with EIO, because that just kills the box and defeats the point of trying
> to gracefully handling hotunplug and making sure userspace has a chance of
> survival. E.g. for atomic everything should continue, including flip
> completion, but we set all outputs to "disconnected" and send out the
> uevent. Maybe crtc enabling can fail too, but that can also be handled
> through the async status we're using to signal DP link failures to
> userspace.
> 
> I guess we should clarify this in the hotunplug doc?

To clarify: I'm not against throwing an ENODEV at userspace for ioctls that
really make no sense, and where we're rather confident that all properly
implemented userspace will gracefully handle failures. Like a modeset, or
opening a device, or trying to import a dma-buf, or stuff like that which
can already fail in normal operation for any kind of reason.

But stuff that never fails, like GETRESOURCES ioctl, really shouldn't fail
after hotunplug.

And then there's the middle ground, like doing a pageflip or buffer flush,
which I guess some userspace might handle, but it's risky to inflict those
consequences on them. Atomic modeset is especially fun since depending on
what you're doing it can be both "failures expected" and "failures not
really expected in normal operation".

Also, this really should be consistent across drivers, not solved with a
driver flag for every possible combination.

If you look at the current hotunplug kms drivers, they have
drm_dev_enter/exit sprinkled in specific hw callback functions because of
the above problems. But maybe it makes sense to change things in a few
cases. But then we should do it across the board.

Cheers, Daniel
Andrey Grodzovsky April 29, 2021, 4:04 p.m. UTC | #3
On 2021-04-29 7:32 a.m., Daniel Vetter wrote:
> On Thu, Apr 29, 2021 at 01:23:19PM +0200, Daniel Vetter wrote:
>> On Wed, Apr 28, 2021 at 11:12:00AM -0400, Andrey Grodzovsky wrote:
>>> With this calling drm_dev_unplug will flush and block
>>> all in flight IOCTLs
>>>
>>> Also, add feature such that if device supports graceful unplug
>>> we enclose entire IOCTL in SRCU critical section.
>>>
>>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
>>
>> Nope.
>>
>> The idea of drm_dev_enter/exit is to mark up hw access. Not entire ioctl.

Then I am confused why we have
https://elixir.bootlin.com/linux/v5.12/source/drivers/gpu/drm/drm_ioctl.c#L826
currently in the code?

>>
>> Especially not with an opt-in flag so that it could be shrugged of as a
>> driver hack. Most of these ioctls should have absolutely no problem
>> working after hotunplug.
>>
>> Also, doing this defeats the point since it pretty much guarantees
>> userspace will die in assert()s and stuff. E.g. on i915 the rough contract
>> is that only execbuf (and even that only when userspace has indicated
>> support for non-recoverable hw ctx) is allowed to fail. Anything else
>> might crash userspace.

Given that, as I pointed out above, we already fail any IOCTLs with -ENODEV
when the device is unplugged, it seems those crashes don't happen that
often? Also, in all my testing I don't think I saw a user space crash
I could attribute to this.

>>
>> You probably need similar (and very precisely defined) rules for amdgpu.
>> And those must definitely exclude any shard ioctls from randomly failing
>> with EIO, because that just kills the box and defeats the point of trying
>> to gracefully handling hotunplug and making sure userspace has a chance of
>> survival. E.g. for atomic everything should continue, including flip
>> completion, but we set all outputs to "disconnected" and send out the
>> uevent. Maybe crtc enabling can fail too, but that can also be handled
>> through the async status we're using to signal DP link failures to
>> userspace.

As I pointed out before, because of the complexity of the topic I prefer to
take it step by step and solve first for the secondary device use case, not
for the primary, display-attached device.

>>
>> I guess we should clarify this in the hotunplug doc?

Agree

> 
> To clarify: I'm not against throwing an ENODEV at userspace for ioctl that
> really make no sense, and where we're rather confident that all properly
> implemented userspace will gracefully handle failures. Like a modeset, or
> opening a device, or trying to import a dma-buf or stuff like that which
> can already fail in normal operation for any kind of reason.
> 
> But stuff that never fails, like GETRESOURCES ioctl, really shouldn't fail
> after hotunplug.

As I pointed out above, this confuses me a bit given that we already do
blanket rejection of IOCTLs if the device is unplugged.


> 
> And then there's the middle ground, like doing a pageflip or buffer flush,
> which I guess some userspace might handle, but risky to inflict those
> consequences on them. atomic modeset is especially fun since depending
> what you're doing it can be both "failures expected" and "failures not
> really expected in normal operation".
> 
> Also, this really should be consistent across drivers, not solved with a
> driver flag for every possible combination.
> 
> If you look at the current hotunplug kms drivers, they have
> drm_dev_enter/exit sprinkled in specific hw callback functions because of
> the above problems. But maybe it makes sense to change things in a few
> cases. But then we should do it across the board.

So as I understand it, your preferred approach is that I scope any back-end,
HW specific function with drm_dev_enter/exit because that's where MMIO
access takes place. But besides explicit MMIO access through
register accessors in the HW back-end, there is also indirect MMIO access
taking place throughout the code in the driver because of various VRAM
BOs which provide CPU access to VRAM through the VRAM BAR. This kind of
access is spread all over the driver and even in mid-layers such as
TTM, and not limited to HW back-end functions. It means it's much harder
to spot such places to surgically scope them with drm_dev_enter/exit, and
also that any new such code introduced will immediately break hot unplug
because the developers can't be expected to remember to make their code
robust to this specific use case. That's why when we discussed internally
what approach to take to protecting code with drm_dev_enter/exit, we
opted for using the widest available scope.

Andrey

> 
> Cheers, Daniel
>
Felix Kuehling April 29, 2021, 4:15 p.m. UTC | #4
Am 2021-04-29 um 12:04 p.m. schrieb Andrey Grodzovsky:
> So as I understand your preferred approach is that I scope any
> back_end, HW specific function with drm_dev_enter/exit because that
> where MMIO
> access takes place. But besides explicit MMIO access thorough
> register accessors in the HW back-end there is also indirect MMIO access
> taking place throughout the code in the driver because of various VRAM
> BOs which provide CPU access to VRAM through the VRAM BAR. This kind of
> access is spread all over in the driver and even in mid-layers such as
> TTM and not limited to HW back-end functions. It means it's much harder
> to spot such places to surgically scope them with drm_dev_enter/exit and
> also that any new such code introduced will immediately break hot unplug
> because the developers can't be expected to remember making their code
> robust to this specific use case. That why when we discussed internally
> what approach to take to protecting code with drm_dev_enter/exit we
> opted for using the widest available scope.

VRAM can also be mapped in user mode. Is there anything preventing user
mode from accessing the memory after unplug? I guess the best you could
do is unmap it from the CPU page table and let the application segfault
on the next access. Or replace the mapping with a dummy page in system
memory?

Regards,
  Felix


>
> Andrey
Andrey Grodzovsky April 29, 2021, 4:21 p.m. UTC | #5
On 2021-04-29 12:15 p.m., Felix Kuehling wrote:
> Am 2021-04-29 um 12:04 p.m. schrieb Andrey Grodzovsky:
>> So as I understand your preferred approach is that I scope any
>> back_end, HW specific function with drm_dev_enter/exit because that
>> where MMIO
>> access takes place. But besides explicit MMIO access thorough
>> register accessors in the HW back-end there is also indirect MMIO access
>> taking place throughout the code in the driver because of various VRAM
>> BOs which provide CPU access to VRAM through the VRAM BAR. This kind of
>> access is spread all over in the driver and even in mid-layers such as
>> TTM and not limited to HW back-end functions. It means it's much harder
>> to spot such places to surgically scope them with drm_dev_enter/exit and
>> also that any new such code introduced will immediately break hot unplug
>> because the developers can't be expected to remember making their code
>> robust to this specific use case. That why when we discussed internally
>> what approach to take to protecting code with drm_dev_enter/exit we
>> opted for using the widest available scope.
> 
> VRAM can also be mapped in user mode. Is there anything preventing user
> mode from accessing the memory after unplug? I guess the best you could
> do is unmap it from the CPU page table and let the application segfault
> on the next access. Or replace the mapping with a dummy page in system
> memory?

We indeed unmap, but instead of letting it segfault we insert a dummy page on
the next page fault. See here:
https://cgit.freedesktop.org/~agrodzov/linux/commit/?h=drm-misc-next&id=6dde3330ffa450e2e6da4d19e2fd0bb94b66b6ce
And I am aware that this doesn't take care of KFD user mapping.
As you know, we had some discussions with you on this topic and it's on
my TODO list to follow up on this to solve it for KFD too.

Andrey

> 
> Regards,
>    Felix
> 
> 
>>
>> Andrey
Felix Kuehling April 29, 2021, 4:29 p.m. UTC | #6
Am 2021-04-29 um 12:21 p.m. schrieb Andrey Grodzovsky:
>
>
> On 2021-04-29 12:15 p.m., Felix Kuehling wrote:
>> Am 2021-04-29 um 12:04 p.m. schrieb Andrey Grodzovsky:
>>> So as I understand your preferred approach is that I scope any
>>> back_end, HW specific function with drm_dev_enter/exit because that
>>> where MMIO
>>> access takes place. But besides explicit MMIO access thorough
>>> register accessors in the HW back-end there is also indirect MMIO
>>> access
>>> taking place throughout the code in the driver because of various VRAM
>>> BOs which provide CPU access to VRAM through the VRAM BAR. This kind of
>>> access is spread all over in the driver and even in mid-layers such as
>>> TTM and not limited to HW back-end functions. It means it's much harder
>>> to spot such places to surgically scope them with drm_dev_enter/exit
>>> and
>>> also that any new such code introduced will immediately break hot
>>> unplug
>>> because the developers can't be expected to remember making their code
>>> robust to this specific use case. That why when we discussed internally
>>> what approach to take to protecting code with drm_dev_enter/exit we
>>> opted for using the widest available scope.
>>
>> VRAM can also be mapped in user mode. Is there anything preventing user
>> mode from accessing the memory after unplug? I guess the best you could
>> do is unmap it from the CPU page table and let the application segfault
>> on the next access. Or replace the mapping with a dummy page in system
>> memory?
>
> We indeed unmap but instead of letting it segfault insert dummy page on
> the next page fault. See here
> https://cgit.freedesktop.org/~agrodzov/linux/commit/?h=drm-misc-next&id=6dde3330ffa450e2e6da4d19e2fd0bb94b66b6ce
> And I am aware that this doesn't take care of KFD user mapping.
> As you know, we had some discussions with you on this topic and it's on
> my TODO list to follow up on this to solve it for KFD too.

ROCm user mode maps VRAM BOs using render nodes. So I'd expect
ttm_bo_vm_dummy_page to work for KFD as well.

I guess we'd need something special for KFD's doorbell and MMIO (HDP
flush) mappings. Was that the discussion about the file address space?

Regards,
  Felix


>
> Andrey
>
>>
>> Regards,
>>    Felix
>>
>>
>>>
>>> Andrey
Andrey Grodzovsky April 29, 2021, 4:33 p.m. UTC | #7
On 2021-04-29 12:29 p.m., Felix Kuehling wrote:
> Am 2021-04-29 um 12:21 p.m. schrieb Andrey Grodzovsky:
>>
>>
>> On 2021-04-29 12:15 p.m., Felix Kuehling wrote:
>>> Am 2021-04-29 um 12:04 p.m. schrieb Andrey Grodzovsky:
>>>> So as I understand your preferred approach is that I scope any
>>>> back_end, HW specific function with drm_dev_enter/exit because that
>>>> where MMIO
>>>> access takes place. But besides explicit MMIO access thorough
>>>> register accessors in the HW back-end there is also indirect MMIO
>>>> access
>>>> taking place throughout the code in the driver because of various VRAM
>>>> BOs which provide CPU access to VRAM through the VRAM BAR. This kind of
>>>> access is spread all over in the driver and even in mid-layers such as
>>>> TTM and not limited to HW back-end functions. It means it's much harder
>>>> to spot such places to surgically scope them with drm_dev_enter/exit
>>>> and
>>>> also that any new such code introduced will immediately break hot
>>>> unplug
>>>> because the developers can't be expected to remember making their code
>>>> robust to this specific use case. That why when we discussed internally
>>>> what approach to take to protecting code with drm_dev_enter/exit we
>>>> opted for using the widest available scope.
>>>
>>> VRAM can also be mapped in user mode. Is there anything preventing user
>>> mode from accessing the memory after unplug? I guess the best you could
>>> do is unmap it from the CPU page table and let the application segfault
>>> on the next access. Or replace the mapping with a dummy page in system
>>> memory?
>>
>> We indeed unmap but instead of letting it segfault insert dummy page on
>> the next page fault. See here
>> https://cgit.freedesktop.org/~agrodzov/linux/commit/?h=drm-misc-next&id=6dde3330ffa450e2e6da4d19e2fd0bb94b66b6ce
>> And I am aware that this doesn't take care of KFD user mapping.
>> As you know, we had some discussions with you on this topic and it's on
>> my TODO list to follow up on this to solve it for KFD too.
> 
> ROCm user mode maps VRAM BOs using render nodes. So I'd expect
> ttm_bo_vm_dummy_page to work for KFD as well.
> 
> I guess we'd need something special for KFD's doorbell and MMIO (HDP
> flush) mappings. Was that the discussion about the file address space?

Yes

Andrey

> 
> Regards,
>    Felix
> 
> 
>>
>> Andrey
>>
>>>
>>> Regards,
>>>     Felix
>>>
>>>
>>>>
>>>> Andrey
Daniel Vetter April 29, 2021, 7:05 p.m. UTC | #8
On Thu, Apr 29, 2021 at 12:04:33PM -0400, Andrey Grodzovsky wrote:
> 
> 
> On 2021-04-29 7:32 a.m., Daniel Vetter wrote:
> > On Thu, Apr 29, 2021 at 01:23:19PM +0200, Daniel Vetter wrote:
> > > On Wed, Apr 28, 2021 at 11:12:00AM -0400, Andrey Grodzovsky wrote:
> > > > With this calling drm_dev_unplug will flush and block
> > > > all in flight IOCTLs
> > > > 
> > > > Also, add feature such that if device supports graceful unplug
> > > > we enclose entire IOCTL in SRCU critical section.
> > > > 
> > > > Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
> > > 
> > > Nope.
> > > 
> > > The idea of drm_dev_enter/exit is to mark up hw access. Not entire ioctl.
> 
> Then I am confused why we have https://elixir.bootlin.com/linux/v5.12/source/drivers/gpu/drm/drm_ioctl.c#L826
> currently in code ?

I forgot about this one, again. Thanks for reminding.

> > > Especially not with an opt-in flag so that it could be shrugged of as a
> > > driver hack. Most of these ioctls should have absolutely no problem
> > > working after hotunplug.
> > > 
> > > Also, doing this defeats the point since it pretty much guarantees
> > > userspace will die in assert()s and stuff. E.g. on i915 the rough contract
> > > is that only execbuf (and even that only when userspace has indicated
> > > support for non-recoverable hw ctx) is allowed to fail. Anything else
> > > might crash userspace.
> 
> Given that as I pointed above we already fail any IOCTls with -ENODEV
> when device is unplugged, it seems those crashes don't happen that
> often ? Also, in all my testing I don't think I saw a user space crash
> I could attribute to this.

I guess it should be ok.

My reasons for making this work are both less trouble for userspace (did
you test with various Wayland compositors out there, not just the amdgpu
X driver?), but also testing.

We still need a bunch of these checks in various places or you'll wait a
very long time for a pending modeset or similar to complete. Being able to
run that code easily after hotunplug has completed should help a lot with
testing.

Plus various drivers already acquired drm_dev_enter/exit and now I wonder
whether that was properly tested or not ...

I guess maybe we need a drm module option to disable this check, so that
we can exercise the code as if the ioctl has raced with hotunplug at the
worst possible moment.

Also, atomic is really tricky here: I assume your testing has just done
normal synchronous commits, but anything that goes through atomic can be
done nonblocking in a separate thread, which the ioctl catch-all here won't
capture.

> > > You probably need similar (and very precisely defined) rules for amdgpu.
> > > And those must definitely exclude any shard ioctls from randomly failing
> > > with EIO, because that just kills the box and defeats the point of trying
> > > to gracefully handling hotunplug and making sure userspace has a chance of
> > > survival. E.g. for atomic everything should continue, including flip
> > > completion, but we set all outputs to "disconnected" and send out the
> > > uevent. Maybe crtc enabling can fail too, but that can also be handled
> > > through the async status we're using to signal DP link failures to
> > > userspace.
> 
> As I pointed before, because of the complexity of the topic I prefer to
> take it step by step and solve first for secondary device use case, not
> for primary, display attached device.

Yeah makes sense. But then I think the right patch is to roll this out for
all drivers, properly justified with existing code. Not behind a driver
flag, because with all these different compositors the last thing we want
is a proliferation of driver-specific behaviour. That's imo the worst
option of all of them and needs to be avoided.

Cheers, Daniel


> 
> > > 
> > > I guess we should clarify this in the hotunplug doc?
> 
> Agree
> 
> > 
> > To clarify: I'm not against throwing an ENODEV at userspace for ioctl that
> > really make no sense, and where we're rather confident that all properly
> > implemented userspace will gracefully handle failures. Like a modeset, or
> > opening a device, or trying to import a dma-buf or stuff like that which
> > can already fail in normal operation for any kind of reason.
> > 
> > But stuff that never fails, like GETRESOURCES ioctl, really shouldn't fail
> > after hotunplug.
> 
> As I pointed above, this a bit confuses me given that we already do
> blanker rejection of IOCTLs if device is unplugged.

Well I'm confused about this too :-/

> > And then there's the middle ground, like doing a pageflip or buffer flush,
> > which I guess some userspace might handle, but risky to inflict those
> > consequences on them. atomic modeset is especially fun since depending
> > what you're doing it can be both "failures expected" and "failures not
> > really expected in normal operation".
> > 
> > Also, this really should be consistent across drivers, not solved with a
> > driver flag for every possible combination.
> > 
> > If you look at the current hotunplug kms drivers, they have
> > drm_dev_enter/exit sprinkled in specific hw callback functions because of
> > the above problems. But maybe it makes sense to change things in a few
> > cases. But then we should do it across the board.
> 
> So as I understand your preferred approach is that I scope any back_end, HW
> specific function with drm_dev_enter/exit because that where MMIO
> access takes place. But besides explicit MMIO access thorough
> register accessors in the HW back-end there is also indirect MMIO access
> taking place throughout the code in the driver because of various VRAM
> BOs which provide CPU access to VRAM through the VRAM BAR. This kind of
> access is spread all over in the driver and even in mid-layers such as
> TTM and not limited to HW back-end functions. It means it's much harder
> to spot such places to surgically scope them with drm_dev_enter/exit and
> also that any new such code introduced will immediately break hot unplug
> because the developers can't be expected to remember making their code
> robust to this specific use case. That why when we discussed internally
> what approach to take to protecting code with drm_dev_enter/exit we
> opted for using the widest available scope.

The thing is, you kinda have to anyway. There are enormous amounts of
asynchronous processing going on. E.g. nonblocking atomic commits also do
ttm unpinning and fun stuff like that, which, if you sync things wrong, can
happen way late. So the door for bad fallout is wide open :-(

I'm not sure where the right tradeoff is to make sure we catch them all,
and can make sure with testing that we've indeed caught them all.
-Daniel
Andrey Grodzovsky April 29, 2021, 8:34 p.m. UTC | #9
On 2021-04-29 3:05 p.m., Daniel Vetter wrote:
> On Thu, Apr 29, 2021 at 12:04:33PM -0400, Andrey Grodzovsky wrote:
>>
>>
>> On 2021-04-29 7:32 a.m., Daniel Vetter wrote:
>>> On Thu, Apr 29, 2021 at 01:23:19PM +0200, Daniel Vetter wrote:
>>>> On Wed, Apr 28, 2021 at 11:12:00AM -0400, Andrey Grodzovsky wrote:
>>>>> With this calling drm_dev_unplug will flush and block
>>>>> all in flight IOCTLs
>>>>>
>>>>> Also, add feature such that if device supports graceful unplug
>>>>> we enclose entire IOCTL in SRCU critical section.
>>>>>
>>>>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
>>>>
>>>> Nope.
>>>>
>>>> The idea of drm_dev_enter/exit is to mark up hw access. Not entire ioctl.
>>
>> Then I am confused why we have https://elixir.bootlin.com/linux/v5.12/source/drivers/gpu/drm/drm_ioctl.c#L826
>> currently in code ?
> 
> I forgot about this one, again. Thanks for reminding.
> 
>>>> Especially not with an opt-in flag so that it could be shrugged of as a
>>>> driver hack. Most of these ioctls should have absolutely no problem
>>>> working after hotunplug.
>>>>
>>>> Also, doing this defeats the point since it pretty much guarantees
>>>> userspace will die in assert()s and stuff. E.g. on i915 the rough contract
>>>> is that only execbuf (and even that only when userspace has indicated
>>>> support for non-recoverable hw ctx) is allowed to fail. Anything else
>>>> might crash userspace.
>>
>> Given that as I pointed above we already fail any IOCTls with -ENODEV
>> when device is unplugged, it seems those crashes don't happen that
>> often ? Also, in all my testing I don't think I saw a user space crash
>> I could attribute to this.
> 
> I guess it should be ok.

What should be ok?
> 
> My reasons for making this work is both less trouble for userspace (did
> you test with various wayland compositors out there, not just amdgpu x86

I didn't - will give it a try.

> driver?), but also testing.
> 
> We still need a bunch of these checks in various places or you'll wait a
> very long time for a pending modeset or similar to complete. Being able to
> run that code easily after hotunplug has completed should help a lot with
> testing.
> 
> Plus various drivers already acquired drm_dev_enter/exit and now I wonder
> whether that was properly tested or not ...
> 
> I guess maybe we need a drm module option to disable this check, so that
> we can exercise the code as if the ioctl has raced with hotunplug at the
> worst possible moment.
> 
> Also atomic is really tricky here: I assume your testing has just done
> normal synchronous commits, but anything that goes through atomic can be
> done nonblocking in a separate thread. Which the ioctl catch-all here wont
> capture.

Yes, async commit was on my mind, and thanks for reminding me. Indeed
I forgot this, but I planned to scope the entire amdgpu_dm_atomic_commit_tail
in drm_dev_enter/exit. Note that I have a bunch of patches, all names
starting with 'Scope....', that just methodically put all the background
work items and timers the driver schedules inside drm_dev_enter/exit scope.
This was supposed to be part of the 'Scope Display code' patch.

> 
>>>> You probably need similar (and very precisely defined) rules for amdgpu.
>>>> And those must definitely exclude any shard ioctls from randomly failing
>>>> with EIO, because that just kills the box and defeats the point of trying
>>>> to gracefully handling hotunplug and making sure userspace has a chance of
>>>> survival. E.g. for atomic everything should continue, including flip
>>>> completion, but we set all outputs to "disconnected" and send out the
>>>> uevent. Maybe crtc enabling can fail too, but that can also be handled
>>>> through the async status we're using to signal DP link failures to
>>>> userspace.
>>
>> As I pointed before, because of the complexity of the topic I prefer to
>> take it step by step and solve first for secondary device use case, not
>> for primary, display attached device.
> 
> Yeah makes sense. But then I think the right patch is to roll this out for
> all drivers, properly justified with existing code. Not behind a driver
> flag, because with all these different compositors the last thing we want
> is a proliferation of driver-specific behaviour. That's imo the worst
> option of all of them and needs to be avoided.

So this kind of patch would be acceptable to you if I unconditionally
scope the drm_ioctl with drm_dev_enter/exit, without the driver flag?
I am worried about breaking other drivers with this, see patch
https://cgit.freedesktop.org/~agrodzov/linux/commit/?h=drm-misc-next&id=f0c593f35b22ca5bf60ed9e7ce2bf2b80e6c68c6
Before setting drm_dev_unplug I go through a whole process of signalling
all possible fences in the system which someone somewhere might be
waiting on. My concern is that in the absence of HW those fences won't
signal, and so unless I signal them myself, synchronize_srcu in
drm_dev_unplug will hang waiting for any such code scoped by
drm_dev_enter/exit.

Andrey

> 
> Cheers, Daniel
> 
> 
>>
>>>>
>>>> I guess we should clarify this in the hotunplug doc?
>>
>> Agree
>>
>>>
>>> To clarify: I'm not against throwing an ENODEV at userspace for ioctl that
>>> really make no sense, and where we're rather confident that all properly
>>> implemented userspace will gracefully handle failures. Like a modeset, or
>>> opening a device, or trying to import a dma-buf or stuff like that which
>>> can already fail in normal operation for any kind of reason.
>>>
>>> But stuff that never fails, like GETRESOURCES ioctl, really shouldn't fail
>>> after hotunplug.
>>
>> As I pointed above, this confuses me a bit given that we already do
>> blanket rejection of IOCTLs if device is unplugged.
> 
> Well I'm confused about this too :-/
> 
>>> And then there's the middle ground, like doing a pageflip or buffer flush,
>>> which I guess some userspace might handle, but risky to inflict those
>>> consequences on them. atomic modeset is especially fun since depending
>>> what you're doing it can be both "failures expected" and "failures not
>>> really expected in normal operation".
>>>
>>> Also, this really should be consistent across drivers, not solved with a
>>> driver flag for every possible combination.
>>>
>>> If you look at the current hotunplug kms drivers, they have
>>> drm_dev_enter/exit sprinkled in specific hw callback functions because of
>>> the above problems. But maybe it makes sense to change things in a few
>>> cases. But then we should do it across the board.
>>
>> So as I understand your preferred approach is that I scope any back_end, HW
>> specific function with drm_dev_enter/exit because that where MMIO
>> access takes place. But besides explicit MMIO access thorough
>> register accessors in the HW back-end there is also indirect MMIO access
>> taking place throughout the code in the driver because of various VRAM
>> BOs which provide CPU access to VRAM through the VRAM BAR. This kind of
>> access is spread all over in the driver and even in mid-layers such as
>> TTM and not limited to HW back-end functions. It means it's much harder
>> to spot such places to surgically scope them with drm_dev_enter/exit and
>> also that any new such code introduced will immediately break hot unplug
>> because the developers can't be expected to remember making their code
>> robust to this specific use case. That why when we discussed internally
>> what approach to take to protecting code with drm_dev_enter/exit we
>> opted for using the widest available scope.
> 
> The thing is, you kinda have to anyway. There's enormous amounts of
> asynchronous processing going on. E.g. nonblocking atomic commits also do
> ttm unpinning and fun stuff like that, which if you sync things wrong can
> happen way late. So the door for bad fallout is wide open :-(
> 
> I'm not sure where the right tradeoff is to make sure we catch them all,
> and can make sure with testing that we've indeed caught them all.
> -Daniel
>
Daniel Vetter April 30, 2021, 10:25 a.m. UTC | #10
On Thu, Apr 29, 2021 at 04:34:55PM -0400, Andrey Grodzovsky wrote:
> 
> 
> On 2021-04-29 3:05 p.m., Daniel Vetter wrote:
> > On Thu, Apr 29, 2021 at 12:04:33PM -0400, Andrey Grodzovsky wrote:
> > > 
> > > 
> > > On 2021-04-29 7:32 a.m., Daniel Vetter wrote:
> > > > On Thu, Apr 29, 2021 at 01:23:19PM +0200, Daniel Vetter wrote:
> > > > > On Wed, Apr 28, 2021 at 11:12:00AM -0400, Andrey Grodzovsky wrote:
> > > > > > With this calling drm_dev_unplug will flush and block
> > > > > > all in flight IOCTLs
> > > > > > 
> > > > > > Also, add feature such that if device supports graceful unplug
> > > > > > we enclose entire IOCTL in SRCU critical section.
> > > > > > 
> > > > > > Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
> > > > > 
> > > > > Nope.
> > > > > 
> > > > > The idea of drm_dev_enter/exit is to mark up hw access. Not entire ioctl.
> > > 
> > > Then I am confused why we have https://elixir.bootlin.com/linux/v5.12/source/drivers/gpu/drm/drm_ioctl.c#L826
> > > currently in code ?
> > 
> > I forgot about this one, again. Thanks for reminding.
> > 
> > > > > Especially not with an opt-in flag so that it could be shrugged of as a
> > > > > driver hack. Most of these ioctls should have absolutely no problem
> > > > > working after hotunplug.
> > > > > 
> > > > > Also, doing this defeats the point since it pretty much guarantees
> > > > > userspace will die in assert()s and stuff. E.g. on i915 the rough contract
> > > > > is that only execbuf (and even that only when userspace has indicated
> > > > > support for non-recoverable hw ctx) is allowed to fail. Anything else
> > > > > might crash userspace.
> > > 
> > > Given that as I pointed above we already fail any IOCTls with -ENODEV
> > > when device is unplugged, it seems those crashes don't happen that
> > > often ? Also, in all my testing I don't think I saw a user space crash
> > > I could attribute to this.
> > 
> > I guess it should be ok.
> 
> What should be ok ?

Your approach, but not your patch. If we go with this let's just lift it
to drm_ioctl() as the default behavior. No driver opt-in flag, because
that's definitely worse than any other approach because we really need to
get rid of driver specific behaviour for generic ioctls, especially
anything a compositor will use directly.
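
Lifted into drm_ioctl() unconditionally, the suggestion above might look
roughly like the following sketch; drm_ioctl_dispatch() is a hypothetical
stand-in for the existing argument copying and ioctl dispatch code:

```c
long drm_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
	struct drm_file *file_priv = filp->private_data;
	struct drm_device *dev = file_priv->minor->dev;
	long ret;
	int idx;

	/* Default behavior for every driver, no opt-in flag. */
	if (!drm_dev_enter(dev, &idx))
		return -ENODEV;	/* device already unplugged */

	ret = drm_ioctl_dispatch(filp, cmd, arg);	/* hypothetical */

	drm_dev_exit(idx);
	return ret;
}
```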

> > My reasons for making this work is both less trouble for userspace (did
> > you test with various wayland compositors out there, not just amdgpu x86
> 
> I didn't - will give it a try.
> 
> > driver?), but also testing.
> > 
> > We still need a bunch of these checks in various places or you'll wait a
> > very long time for a pending modeset or similar to complete. Being able to
> > run that code easily after hotunplug has completed should help a lot with
> > testing.
> > 
> > Plus various drivers already acquired drm_dev_enter/exit and now I wonder
> > whether that was properly tested or not ...
> > 
> > I guess maybe we need a drm module option to disable this check, so that
> > we can exercise the code as if the ioctl has raced with hotunplug at the
> > worst possible moment.
> > 
> > Also atomic is really tricky here: I assume your testing has just done
> > normal synchronous commits, but anything that goes through atomic can be
> > done nonblocking in a separate thread. Which the ioctl catch-all here wont
> > capture.
> 
> Yes, async commit was on my mind and thanks for reminding me. Indeed
> I forgot this but i planned to scope the entire amdgpu_dm_atomic_tail in
> drm_dev_enter/exit. Note that i have a bunch of patches, all name's
> starting with 'Scope....' that just methodically put all the background
> work items and timers the drivers schedules in drm_dev_enter/exit scope.
> This was supposed to be part of the 'Scope Display code' patch.

That's too much. You still have to arrange that the flip completion event
gets sent out. So it's a bit tricky.

In other places the same problem applies, e.g. probe functions need to
make sure they report "disconnected".
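
A connector ->detect() callback can report "disconnected" after unplug
with the established drm_dev_enter() pattern; foo_connector_detect() and
foo_probe_hw() below are hypothetical names:

```c
static enum drm_connector_status
foo_connector_detect(struct drm_connector *connector, bool force)
{
	struct drm_device *dev = connector->dev;
	enum drm_connector_status status;
	int idx;

	/* Device gone: report disconnected instead of touching hw. */
	if (!drm_dev_enter(dev, &idx))
		return connector_status_disconnected;

	status = foo_probe_hw(connector);	/* actual hw access */

	drm_dev_exit(idx);
	return status;
}
```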

> > > > > You probably need similar (and very precisely defined) rules for amdgpu.
> > > > > And those must definitely exclude any shared ioctls from randomly failing
> > > > > with EIO, because that just kills the box and defeats the point of trying
> > > > > to gracefully handling hotunplug and making sure userspace has a chance of
> > > > > survival. E.g. for atomic everything should continue, including flip
> > > > > completion, but we set all outputs to "disconnected" and send out the
> > > > > uevent. Maybe crtc enabling can fail too, but that can also be handled
> > > > > through the async status we're using to signal DP link failures to
> > > > > userspace.
> > > 
> > > As I pointed before, because of the complexity of the topic I prefer to
> > > take it step by step and solve first for secondary device use case, not
> > > for primary, display attached device.
> > 
> > Yeah makes sense. But then I think the right patch is to roll this out for
> > all drivers, properly justified with existing code. Not behind a driver
> > flag, because with all these different compositors the last thing we want
> > is a proliferation of driver-specific behaviour. That's imo the worst
> > option of all of them and needs to be avoided.
> 
> So this kind of patch would be acceptable to you if I unconditionally
> scope the drm_ioctl with drm_dev_enter/exit without the driver flag ?
> I am worried to break other drivers with this, see patch https://cgit.freedesktop.org/~agrodzov/linux/commit/?h=drm-misc-next&id=f0c593f35b22ca5bf60ed9e7ce2bf2b80e6c68c6
> Before setting drm_dev_unplug I go through a whole process of signalling
> > all possible fences in the system which someone somewhere might be
> > waiting on. My concern is that in the absence of HW those fences won't
> > signal and so unless I signal them myself synchronize_srcu in
> drm_dev_unplug will hang waiting for any such code scoped by
> drm_dev_enter/exit.

Uh right. I forgot about this.

Which would kinda mean the top level scope is maybe not the best idea, and
perhaps we should indeed drill it down. But then the testing issue
definitely gets a lot worse.

So what if we'd push that drm_dev_is_unplugged check down into ioctls?
Then we can make a case-by case decision whether it should be converted to
drm_dev_enter/exit, needs to be pushed down further into drivers (due to
fence wait issues) or other concerns?
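
The case-by-case option would make each ioctl reject an unplugged device
itself instead of relying on a catch-all in drm_ioctl(); a minimal
sketch, where foo_gem_create_ioctl() is a hypothetical driver ioctl:

```c
static int foo_gem_create_ioctl(struct drm_device *dev, void *data,
				struct drm_file *file_priv)
{
	if (drm_dev_is_unplugged(dev))
		return -ENODEV;

	/*
	 * Normal ioctl body; actual hw access can still be wrapped in
	 * finer-grained drm_dev_enter/exit where fence waits make a
	 * top-level scope unsafe.
	 */
	return 0;
}
```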

Also I guess we need to have a subsystem wide rule on whether you need to
force complete all fences before you call drm_dev_unplug, or afterwards.
If we have mixed behaviour on this there will be disappointment. And since
hotunplug and dma_fence completion are both userspace visible that
inconsistency might have bigger impact.

This is all very tricky indeed :-/

btw for the "gradual pushing drm_dev_enter into ioctl" approach, if we go
with that: We could do the same trick we've done for DRM_UNLOCKED:
- drm_dev_enter/exit is called for any ioctl that has not set the
  DRM_HOTUNPLUG_SAFE flag
- for drm core ioctls we push them into all ioctls and decide how to
  handle/where (with the aim to have the least amount of code flow
  different during hotunplug vs after hotunplug has finished, to reduce
  testing scope)
- then we make DRM_HOTUNPLUG_SAFE the implied default

This would have us left with render ioctls, and I think the defensive
assumption there is that they're all hotunplug safe. We might hang on a
fence wait, but that's fixable, and it's better than blowing up on a
use-after-free security bug.
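
The DRM_UNLOCKED-style dispatch described above could look roughly like
this fragment inside drm_ioctl()'s dispatch path; DRM_HOTUNPLUG_SAFE
does not exist today, it is the flag proposed here:

```c
	bool entered = false;
	int idx;

	/* Enter the SRCU section unless the ioctl opted out. */
	if (!(ioctl->flags & DRM_HOTUNPLUG_SAFE)) {
		if (!drm_dev_enter(dev, &idx))
			return -ENODEV;
		entered = true;
	}

	retcode = ioctl->func(dev, kdata, file_priv);

	if (entered)
		drm_dev_exit(idx);
```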

Thoughts?

It is unfortunately even more work until we've reached the goal, but I
think it's safest and most flexible approach overall.

Cheers, Daniel

> 
> Andrey
> 
> > 
> > Cheers, Daniel
> > 
> > 
> > > 
> > > > > 
> > > > > I guess we should clarify this in the hotunplug doc?
> > > 
> > > Agree
> > > 
> > > > 
> > > > To clarify: I'm not against throwing an ENODEV at userspace for ioctl that
> > > > really make no sense, and where we're rather confident that all properly
> > > > implemented userspace will gracefully handle failures. Like a modeset, or
> > > > opening a device, or trying to import a dma-buf or stuff like that which
> > > > can already fail in normal operation for any kind of reason.
> > > > 
> > > > But stuff that never fails, like GETRESOURCES ioctl, really shouldn't fail
> > > > after hotunplug.
> > > 
> > > As I pointed above, this confuses me a bit given that we already do
> > > blanket rejection of IOCTLs if device is unplugged.
> > 
> > Well I'm confused about this too :-/
> > 
> > > > And then there's the middle ground, like doing a pageflip or buffer flush,
> > > > which I guess some userspace might handle, but risky to inflict those
> > > > consequences on them. atomic modeset is especially fun since depending
> > > > what you're doing it can be both "failures expected" and "failures not
> > > > really expected in normal operation".
> > > > 
> > > > Also, this really should be consistent across drivers, not solved with a
> > > > driver flag for every possible combination.
> > > > 
> > > > If you look at the current hotunplug kms drivers, they have
> > > > drm_dev_enter/exit sprinkled in specific hw callback functions because of
> > > > the above problems. But maybe it makes sense to change things in a few
> > > > cases. But then we should do it across the board.
> > > 
> > > So as I understand your preferred approach is that I scope any back_end, HW
> > > specific function with drm_dev_enter/exit because that where MMIO
> > > access takes place. But besides explicit MMIO access thorough
> > > register accessors in the HW back-end there is also indirect MMIO access
> > > taking place throughout the code in the driver because of various VRAM
> > > BOs which provide CPU access to VRAM through the VRAM BAR. This kind of
> > > access is spread all over in the driver and even in mid-layers such as
> > > TTM and not limited to HW back-end functions. It means it's much harder
> > > to spot such places to surgically scope them with drm_dev_enter/exit and
> > > also that any new such code introduced will immediately break hot unplug
> > > because the developers can't be expected to remember making their code
> > > robust to this specific use case. That why when we discussed internally
> > > what approach to take to protecting code with drm_dev_enter/exit we
> > > opted for using the widest available scope.
> > 
> > The thing is, you kinda have to anyway. There's enormous amounts of
> > asynchronous processing going on. E.g. nonblocking atomic commits also do
> > ttm unpinning and fun stuff like that, which if you sync things wrong can
> > happen way late. So the door for bad fallout is wide open :-(
> > 
> > I'm not sure where the right tradeoff is to make sure we catch them all,
> > and can make sure with testing that we've indeed caught them all.
> > -Daniel
> >
Andrey Grodzovsky April 30, 2021, 5:27 p.m. UTC | #11
On 2021-04-30 6:25 a.m., Daniel Vetter wrote:
> On Thu, Apr 29, 2021 at 04:34:55PM -0400, Andrey Grodzovsky wrote:
>>
>>
>> On 2021-04-29 3:05 p.m., Daniel Vetter wrote:
>>> On Thu, Apr 29, 2021 at 12:04:33PM -0400, Andrey Grodzovsky wrote:
>>>>
>>>>
>>>> On 2021-04-29 7:32 a.m., Daniel Vetter wrote:
>>>>> On Thu, Apr 29, 2021 at 01:23:19PM +0200, Daniel Vetter wrote:
>>>>>> On Wed, Apr 28, 2021 at 11:12:00AM -0400, Andrey Grodzovsky wrote:
>>>>>>> With this calling drm_dev_unplug will flush and block
>>>>>>> all in flight IOCTLs
>>>>>>>
>>>>>>> Also, add feature such that if device supports graceful unplug
>>>>>>> we enclose entire IOCTL in SRCU critical section.
>>>>>>>
>>>>>>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
>>>>>>
>>>>>> Nope.
>>>>>>
>>>>>> The idea of drm_dev_enter/exit is to mark up hw access. Not entire ioctl.
>>>>
>>>> Then I am confused why we have https://elixir.bootlin.com/linux/v5.12/source/drivers/gpu/drm/drm_ioctl.c#L826
>>>> currently in code ?
>>>
>>> I forgot about this one, again. Thanks for reminding.
>>>
>>>>>> Especially not with an opt-in flag so that it could be shrugged of as a
>>>>>> driver hack. Most of these ioctls should have absolutely no problem
>>>>>> working after hotunplug.
>>>>>>
>>>>>> Also, doing this defeats the point since it pretty much guarantees
>>>>>> userspace will die in assert()s and stuff. E.g. on i915 the rough contract
>>>>>> is that only execbuf (and even that only when userspace has indicated
>>>>>> support for non-recoverable hw ctx) is allowed to fail. Anything else
>>>>>> might crash userspace.
>>>>
>>>> Given that as I pointed above we already fail any IOCTls with -ENODEV
>>>> when device is unplugged, it seems those crashes don't happen that
>>>> often ? Also, in all my testing I don't think I saw a user space crash
>>>> I could attribute to this.
>>>
>>> I guess it should be ok.
>>
>> What should be ok ?
> 
> Your approach, but not your patch. If we go with this let's just lift it
> to drm_ioctl() as the default behavior. No driver opt-in flag, because
> that's definitely worse than any other approach because we really need to
> get rid of driver specific behaviour for generic ioctls, especially
> anything a compositor will use directly.
> 
>>> My reasons for making this work is both less trouble for userspace (did
>>> you test with various wayland compositors out there, not just amdgpu x86
>>
>> I didn't - will give it a try.

Weston worked without crashes; I ran the EGL tester cube there.

>>
>>> driver?), but also testing.
>>>
>>> We still need a bunch of these checks in various places or you'll wait a
>>> very long time for a pending modeset or similar to complete. Being able to
>>> run that code easily after hotunplug has completed should help a lot with
>>> testing.
>>>
>>> Plus various drivers already acquired drm_dev_enter/exit and now I wonder
>>> whether that was properly tested or not ...
>>>
>>> I guess maybe we need a drm module option to disable this check, so that
>>> we can exercise the code as if the ioctl has raced with hotunplug at the
>>> worst possible moment.
>>>
>>> Also atomic is really tricky here: I assume your testing has just done
>>> normal synchronous commits, but anything that goes through atomic can be
>>> done nonblocking in a separate thread. Which the ioctl catch-all here wont
>>> capture.
>>
>> Yes, async commit was on my mind and thanks for reminding me. Indeed
>> I forgot this but i planned to scope the entire amdgpu_dm_atomic_tail in
>> drm_dev_enter/exit. Note that i have a bunch of patches, all name's
>> starting with 'Scope....' that just methodically put all the background
>> work items and timers the drivers schedules in drm_dev_enter/exit scope.
>> This was supposed to be part of the 'Scope Display code' patch.
> 
> That's too much. You still have to arrange that the flip completion event
> gets sent out. So it's a bit tricky.
> 
> In other places the same problem applies, e.g. probe functions need to
> make sure they report "disconnected".

I see, well, this is all part of KMS support which I defer for now
anyway. Will tackle it then.

> 
>>>>>> You probably need similar (and very precisely defined) rules for amdgpu.
>>>>>> And those must definitely exclude any shared ioctls from randomly failing
>>>>>> with EIO, because that just kills the box and defeats the point of trying
>>>>>> to gracefully handling hotunplug and making sure userspace has a chance of
>>>>>> survival. E.g. for atomic everything should continue, including flip
>>>>>> completion, but we set all outputs to "disconnected" and send out the
>>>>>> uevent. Maybe crtc enabling can fail too, but that can also be handled
>>>>>> through the async status we're using to signal DP link failures to
>>>>>> userspace.
>>>>
>>>> As I pointed before, because of the complexity of the topic I prefer to
>>>> take it step by step and solve first for secondary device use case, not
>>>> for primary, display attached device.
>>>
>>> Yeah makes sense. But then I think the right patch is to roll this out for
>>> all drivers, properly justified with existing code. Not behind a driver
>>> flag, because with all these different compositors the last thing we want
>>> is a proliferation of driver-specific behaviour. That's imo the worst
>>> option of all of them and needs to be avoided.
>>
>> So this kind of patch would be acceptable to you if I unconditionally
>> scope the drm_ioctl with drm_dev_enter/exit without the driver flag ?
>> I am worried to break other drivers with this, see patch https://cgit.freedesktop.org/~agrodzov/linux/commit/?h=drm-misc-next&id=f0c593f35b22ca5bf60ed9e7ce2bf2b80e6c68c6
>> Before setting drm_dev_unplug I go through a whole process of signalling
>> all possible fences in the system which someone somewhere might be
>> waiting on. My concern is that in the absence of HW those fences won't
>> signal and so unless I signal them myself synchronize_srcu in
>> drm_dev_unplug will hang waiting for any such code scoped by
>> drm_dev_enter/exit.
> 
> Uh right. I forgot about this.
> 
> Which would kinda mean the top level scope is maybe not the best idea, and
> perhaps we should indeed drill it down. But then the testing issue
> definitely gets a lot worse.
> 
> So what if we'd push that drm_dev_is_unplugged check down into ioctls?
> Then we can make a case-by case decision whether it should be converted to
> drm_dev_enter/exit, needs to be pushed down further into drivers (due to
> fence wait issues) or other concerns?
> 
> Also I guess we need to have a subsystem wide rule on whether you need to
> force complete all fences before you call drm_dev_unplug, or afterwards.

I don't see how you can handle it afterwards. If a thread is stuck in
dma_fence_wait in a non-interruptible wait (any kernel thread) with no
timeout, there is nothing you can do to stop the wait. Any such code
scoped with drm_dev_enter/exit will cause a hang in drm_dev_unplug.
The only way then is to preemptively force-signal all such fences before
calling drm_dev_unplug - as I do in the above-mentioned patch.
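
The hang described here can be sketched as two racing threads; `fence`
stands for any fence that only the now-removed hardware could signal:

```c
/* Thread A: some driver path holding an SRCU read-side section. */
if (drm_dev_enter(dev, &idx)) {
	dma_fence_wait(fence, false);	/* hw is gone: never signals */
	drm_dev_exit(idx);		/* never reached */
}

/* Thread B: hotunplug handler. */
drm_dev_unplug(dev);	/* synchronize_srcu() waits on thread A forever */
```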

> If we have mixed behaviour on this there will be disappointment. And since
> hotunplug and dma_fence completion are both userspace visible that
> inconsistency might have bigger impact.
> 
> This is all very tricky indeed :-/
> 
> btw for the "gradual pushing drm_dev_enter into ioctl" approach, if we go
> with that: We could do the same trick we've done for DRM_UNLOCKED:
> - drm_dev_enter/exit is called for any ioctl that has not set the
>    DRM_HOTUNPLUG_SAFE flag
> - for drm core ioctls we push them into all ioctls and decide how to
>    handle/where (with the aim to have the least amount of code flow
>    different during hotunplug vs after hotunplug has finished, to reduce
>    testing scope)
> - then we make DRM_HOTUNPLUG_SAFE the implied default
> 
> This would have us left with render ioctls, and I think the defensive
> assumption there is that they're all hotunplug safe. We might hang on a
> fence wait, but that's fixable, and it's better than blowing up on a
> use-after-free security bug.
> 
> Thoughts?

I don't fully see a difference between the approach described above and
the full drill-down to each driver, and even within the driver to the HW
back-ends - what criteria would I use to decide that for a given IOCTL I
scope with drm_dev_enter/exit at the highest level while for another
I go all the way down? If we agree that signaling the fences
preemptively before engaging drm_dev_unplug is generically the right
approach, maybe we can then scope drm_ioctl unconditionally with
drm_dev_enter/exit and then for each driver go through the same process
I do for amdgpu - writing a driver-specific function which takes care of
all the fences. We could then just create a drm callback which would
be called from drm_ioctl before drm_dev_unplug is called.
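
Such a hook might be sketched as below; neither the callback nor the
drm_dev_unplug_prepare() wrapper exist today, both names are
hypothetical:

```c
struct drm_driver {
	/* ... existing callbacks ... */

	/*
	 * Force-complete every fence the driver has handed out, so no
	 * SRCU reader can block on one forever once the hw is gone.
	 */
	void (*force_complete_fences)(struct drm_device *dev);
};

static void drm_dev_unplug_prepare(struct drm_device *dev)
{
	if (dev->driver->force_complete_fences)
		dev->driver->force_complete_fences(dev);

	drm_dev_unplug(dev);
}
```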

Andrey

> 
> It is unfortunately even more work until we've reached the goal, but I
> think it's safest and most flexible approach overall.
> 
> Cheers, Daniel
> 
>>
>> Andrey
>>
>>>
>>> Cheers, Daniel
>>>
>>>
>>>>
>>>>>>
>>>>>> I guess we should clarify this in the hotunplug doc?
>>>>
>>>> Agree
>>>>
>>>>>
>>>>> To clarify: I'm not against throwing an ENODEV at userspace for ioctl that
>>>>> really make no sense, and where we're rather confident that all properly
>>>>> implemented userspace will gracefully handle failures. Like a modeset, or
>>>>> opening a device, or trying to import a dma-buf or stuff like that which
>>>>> can already fail in normal operation for any kind of reason.
>>>>>
>>>>> But stuff that never fails, like GETRESOURCES ioctl, really shouldn't fail
>>>>> after hotunplug.
>>>>
>>>> As I pointed above, this confuses me a bit given that we already do
>>>> blanket rejection of IOCTLs if device is unplugged.
>>>
>>> Well I'm confused about this too :-/
>>>
>>>>> And then there's the middle ground, like doing a pageflip or buffer flush,
>>>>> which I guess some userspace might handle, but risky to inflict those
>>>>> consequences on them. atomic modeset is especially fun since depending
>>>>> what you're doing it can be both "failures expected" and "failures not
>>>>> really expected in normal operation".
>>>>>
>>>>> Also, this really should be consistent across drivers, not solved with a
>>>>> driver flag for every possible combination.
>>>>>
>>>>> If you look at the current hotunplug kms drivers, they have
>>>>> drm_dev_enter/exit sprinkled in specific hw callback functions because of
>>>>> the above problems. But maybe it makes sense to change things in a few
>>>>> cases. But then we should do it across the board.
>>>>
>>>> So as I understand your preferred approach is that I scope any back_end, HW
>>>> specific function with drm_dev_enter/exit because that where MMIO
>>>> access takes place. But besides explicit MMIO access thorough
>>>> register accessors in the HW back-end there is also indirect MMIO access
>>>> taking place throughout the code in the driver because of various VRAM
>>>> BOs which provide CPU access to VRAM through the VRAM BAR. This kind of
>>>> access is spread all over in the driver and even in mid-layers such as
>>>> TTM and not limited to HW back-end functions. It means it's much harder
>>>> to spot such places to surgically scope them with drm_dev_enter/exit and
>>>> also that any new such code introduced will immediately break hot unplug
>>>> because the developers can't be expected to remember making their code
>>>> robust to this specific use case. That why when we discussed internally
>>>> what approach to take to protecting code with drm_dev_enter/exit we
>>>> opted for using the widest available scope.
>>>
>>> The thing is, you kinda have to anyway. There's enormous amounts of
>>> asynchronous processing going on. E.g. nonblocking atomic commits also do
>>> ttm unpinning and fun stuff like that, which if you sync things wrong can
>>> happen way late. So the door for bad fallout is wide open :-(
>>>
>>> I'm not sure where the right tradeoff is to make sure we catch them all,
>>> and can make sure with testing that we've indeed caught them all.
>>> -Daniel
>>>
>
Andrey Grodzovsky May 5, 2021, 1:57 p.m. UTC | #12
Ping

Andrey

On 2021-04-30 1:27 p.m., Andrey Grodzovsky wrote:
> 
> 
> On 2021-04-30 6:25 a.m., Daniel Vetter wrote:
>> On Thu, Apr 29, 2021 at 04:34:55PM -0400, Andrey Grodzovsky wrote:
>>>
>>>
>>> On 2021-04-29 3:05 p.m., Daniel Vetter wrote:
>>>> On Thu, Apr 29, 2021 at 12:04:33PM -0400, Andrey Grodzovsky wrote:
>>>>>
>>>>>
>>>>> On 2021-04-29 7:32 a.m., Daniel Vetter wrote:
>>>>>> On Thu, Apr 29, 2021 at 01:23:19PM +0200, Daniel Vetter wrote:
>>>>>>> On Wed, Apr 28, 2021 at 11:12:00AM -0400, Andrey Grodzovsky wrote:
>>>>>>>> With this calling drm_dev_unplug will flush and block
>>>>>>>> all in flight IOCTLs
>>>>>>>>
>>>>>>>> Also, add feature such that if device supports graceful unplug
>>>>>>>> we enclose entire IOCTL in SRCU critical section.
>>>>>>>>
>>>>>>>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
>>>>>>>
>>>>>>> Nope.
>>>>>>>
>>>>>>> The idea of drm_dev_enter/exit is to mark up hw access. Not 
>>>>>>> entire ioctl.
>>>>>
>>>>> Then I am confused why we have 
>>>>> https://elixir.bootlin.com/linux/v5.12/source/drivers/gpu/drm/drm_ioctl.c#L826
>>>>> currently in code ?
>>>>
>>>> I forgot about this one, again. Thanks for reminding.
>>>>
>>>>>>> Especially not with an opt-in flag so that it could be shrugged 
>>>>>>> of as a
>>>>>>> driver hack. Most of these ioctls should have absolutely no problem
>>>>>>> working after hotunplug.
>>>>>>>
>>>>>>> Also, doing this defeats the point since it pretty much guarantees
>>>>>>> userspace will die in assert()s and stuff. E.g. on i915 the rough 
>>>>>>> contract
>>>>>>> is that only execbuf (and even that only when userspace has 
>>>>>>> indicated
>>>>>>> support for non-recoverable hw ctx) is allowed to fail. Anything 
>>>>>>> else
>>>>>>> might crash userspace.
>>>>>
>>>>> Given that as I pointed above we already fail any IOCTls with -ENODEV
>>>>> when device is unplugged, it seems those crashes don't happen that
>>>>> often ? Also, in all my testing I don't think I saw a user space crash
>>>>> I could attribute to this.
>>>>
>>>> I guess it should be ok.
>>>
>>> What should be ok ?
>>
>> Your approach, but not your patch. If we go with this let's just lift it
>> to drm_ioctl() as the default behavior. No driver opt-in flag, because
>> that's definitely worse than any other approach because we really need to
>> get rid of driver specific behaviour for generic ioctls, especially
>> anything a compositor will use directly.
>>
>>>> My reasons for making this work is both less trouble for userspace (did
>>>> you test with various wayland compositors out there, not just amdgpu 
>>>> x86
>>>
>>> I didn't - will give it a try.
> 
> Weston worked without crashes, run the egl tester cube there.
> 
>>>
>>>> driver?), but also testing.
>>>>
>>>> We still need a bunch of these checks in various places or you'll 
>>>> wait a
>>>> very long time for a pending modeset or similar to complete. Being 
>>>> able to
>>>> run that code easily after hotunplug has completed should help a lot 
>>>> with
>>>> testing.
>>>>
>>>> Plus various drivers already acquired drm_dev_enter/exit and now I 
>>>> wonder
>>>> whether that was properly tested or not ...
>>>>
>>>> I guess maybe we need a drm module option to disable this check, so 
>>>> that
>>>> we can exercise the code as if the ioctl has raced with hotunplug at 
>>>> the
>>>> worst possible moment.
>>>>
>>>> Also atomic is really tricky here: I assume your testing has just done
>>>> normal synchronous commits, but anything that goes through atomic 
>>>> can be
>>>> done nonblocking in a separate thread. Which the ioctl catch-all 
>>>> here wont
>>>> capture.
>>>
>>> Yes, async commit was on my mind and thanks for reminding me. Indeed
>>> I forgot this but i planned to scope the entire amdgpu_dm_atomic_tail in
>>> drm_dev_enter/exit. Note that i have a bunch of patches, all name's
>>> starting with 'Scope....' that just methodically put all the background
>>> work items and timers the drivers schedules in drm_dev_enter/exit scope.
>>> This was supposed to be part of the 'Scope Display code' patch.
>>
>> That's too much. You still have to arrange that the flip completion event
>> gets sent out. So it's a bit tricky.
>>
>> In other places the same problem applies, e.g. probe functions need to
>> make sure they report "disconnected".
> 
> I see, well, this is all part of KMS support which I defer for now
> anyway. Will tackle it then.
> 
>>
>>>>>>> You probably need similar (and very precisely defined) rules for 
>>>>>>> amdgpu.
>>>>>>> And those must definitely exclude any shared ioctls from randomly 
>>>>>>> failing
>>>>>>> with EIO, because that just kills the box and defeats the point 
>>>>>>> of trying
>>>>>>> to gracefully handle hotunplug and making sure userspace has a 
>>>>>>> chance of
>>>>>>> survival. E.g. for atomic everything should continue, including flip
>>>>>>> completion, but we set all outputs to "disconnected" and send out 
>>>>>>> the
>>>>>>> uevent. Maybe crtc enabling can fail too, but that can also be 
>>>>>>> handled
>>>>>>> through the async status we're using to signal DP link failures to
>>>>>>> userspace.
>>>>>
>>>>> As I pointed before, because of the complexity of the topic I 
>>>>> prefer to
>>>>> take it step by step and solve first for secondary device use case, 
>>>>> not
>>>>> for primary, display attached device.
>>>>
>>>> Yeah makes sense. But then I think the right patch is to roll this 
>>>> out for
>>>> all drivers, properly justified with existing code. Not behind a driver
>>>> flag, because with all these different compositors the last thing we 
>>>> want
>>>> is a proliferation of driver-specific behaviour. That's imo the worst
>>>> option of all of them and needs to be avoided.
>>>
>>> So this kind of patch would be acceptable to you if I unconditionally
>>> scope the drm_ioctl with drm_dev_enter/exit without the driver flag ?
>>> I am worried to break other drivers with this, see patch 
>>> https://cgit.freedesktop.org/~agrodzov/linux/commit/?h=drm-misc-next&id=f0c593f35b22ca5bf60ed9e7ce2bf2b80e6c68c6
>>>
>>> Before setting drm_dev_unplug I go through a whole process of signalling
>>> all possible fences in the system which someone somewhere might be
>>> waiting on. My concern is that in the absence of HW those fences won't
>>> signal and so unless I signal them myself synchronize_srcu in
>>> drm_dev_unplug will hang waiting for any such code scoped by
>>> drm_dev_enter/exit.
>>
>> Uh right. I forgot about this.
>>
>> Which would kinda mean the top level scope is maybe not the best idea, 
>> and
>> perhaps we should indeed drill it down. But then the testing issue
>> definitely gets a lot worse.
>>
>> So what if we'd push that drm_dev_is_unplugged check down into ioctls?
>> Then we can make a case-by-case decision whether it should be 
>> converted to
>> drm_dev_enter/exit, needs to be pushed down further into drivers (due to
>> fence wait issues) or other concerns?
>>
>> Also I guess we need to have a subsystem wide rule on whether you need to
>> force complete all fences before you call drm_dev_unplug, or afterwards.
> 
> I don't see how you can handle it afterwards. If a thread is stuck in
> dma_fence_wait in non-interruptible wait (any kernel thread) and with no
> timeout there is nothing you can do to stop the wait. Any such code
> scoped with drm_dev_enter/exit will cause a hang in drm_dev_unplug.
> The only way then is to preemptively force signal all such fences before
> calling drm_dev_unplug - as I do in the above mentioned patch.
> 
>> If we have mixed behaviour on this there will be disappointment. And 
>> since
>> hotunplug and dma_fence completion are both userspace visible that
>> inconsistency might have bigger impact.
>>
>> This is all very tricky indeed :-/
>>
>> btw for the "gradual pushing drm_dev_enter into ioctl" approach, if we go
>> with that: We could do the same trick we've done for DRM_UNLOCKED:
>> - drm_dev_enter/exit is called for any ioctl that has not set the
>>    DRM_HOTUNPLUG_SAFE flag
>> - for drm core ioctls we push them into all ioctls and decide how to
>>    handle/where (with the aim to have the least amount of code flow
>>    different during hotunplug vs after hotunplug has finished, to reduce
>>    testing scope)
>> - then we make DRM_HOTUNPLUG_SAFE the implied default
>>
>> This would have us left with render ioctls, and I think the defensive
>> assumption there is that they're all hotunplug safe. We might hang on a
>> fence wait, but that's fixable, and it's better than blowing up on a
>> use-after-free security bug.
>>
>> Thoughts?
> 
> I don't fully see a difference between the approach described above and
> the full drill down to each driver and even within the driver, to the HW
> back-ends - what criteria I would use to decide if for a given IOCTL i
> scope with drm_dev_enter/exit at the highest level while for another
> i go all the way down ? If we would agree that signaling the fences
> preemptively before engaging drm_dev_unplug is generically the right
> approach maybe we can then scope drm_ioctl unconditionally with
> drm_dev_enter/exit and then for each driver go through the same process
> I do for amdgpu - writing driver specific function which takes care of
> all the fences. We could then just create a drm callback which would
> be called from drm_ioctl before drm_dev_unplug is called.
> 
> Andrey
> 
>>
>> It is unfortunately even more work until we've reached the goal, but I
>> think it's safest and most flexible approach overall.
>>
>> Cheers, Daniel
>>
>>>
>>> Andrey
>>>
>>>>
>>>> Cheers, Daniel
>>>>
>>>>
>>>>>
>>>>>>>
>>>>>>> I guess we should clarify this in the hotunplug doc?
>>>>>
>>>>> Agree
>>>>>
>>>>>>
>>>>>> To clarify: I'm not against throwing an ENODEV at userspace for 
>>>>>> ioctl that
>>>>>> really make no sense, and where we're rather confident that all 
>>>>>> properly
>>>>>> implemented userspace will gracefully handle failures. Like a 
>>>>>> modeset, or
>>>>>> opening a device, or trying to import a dma-buf or stuff like that 
>>>>>> which
>>>>>> can already fail in normal operation for any kind of reason.
>>>>>>
>>>>>> But stuff that never fails, like GETRESOURCES ioctl, really 
>>>>>> shouldn't fail
>>>>>> after hotunplug.
>>>>>
>>>>> As I pointed above, this confuses me a bit given that we already do
>>>>> blanket rejection of IOCTLs if the device is unplugged.
>>>>
>>>> Well I'm confused about this too :-/
>>>>
>>>>>> And then there's the middle ground, like doing a pageflip or 
>>>>>> buffer flush,
>>>>>> which I guess some userspace might handle, but risky to inflict those
>>>>>> consequences on them. atomic modeset is especially fun since 
>>>>>> depending
>>>>>> what you're doing it can be both "failures expected" and "failures 
>>>>>> not
>>>>>> really expected in normal operation".
>>>>>>
>>>>>> Also, this really should be consistent across drivers, not solved 
>>>>>> with a
>>>>>> driver flag for every possible combination.
>>>>>>
>>>>>> If you look at the current hotunplug kms drivers, they have
>>>>>> drm_dev_enter/exit sprinkled in specific hw callback functions 
>>>>>> because of
>>>>>> the above problems. But maybe it makes sense to change things in a 
>>>>>> few
>>>>>> cases. But then we should do it across the board.
>>>>>
>>>>> So as I understand your preferred approach is that I scope any 
>>>>> back_end, HW
>>>>> specific function with drm_dev_enter/exit because that's where MMIO
>>>>> access takes place. But besides explicit MMIO access through
>>>>> register accessors in the HW back-end there is also indirect MMIO 
>>>>> access
>>>>> taking place throughout the code in the driver because of various VRAM
>>>>> BOs which provide CPU access to VRAM through the VRAM BAR. This 
>>>>> kind of
>>>>> access is spread all over in the driver and even in mid-layers such as
>>>>> TTM and not limited to HW back-end functions. It means it's much 
>>>>> harder
>>>>> to spot such places to surgically scope them with 
>>>>> drm_dev_enter/exit and
>>>>> also that any new such code introduced will immediately break hot 
>>>>> unplug
>>>>> because the developers can't be expected to remember making their code
>>>>> robust to this specific use case. That why when we discussed 
>>>>> internally
>>>>> what approach to take to protecting code with drm_dev_enter/exit we
>>>>> opted for using the widest available scope.
>>>>
>>>> The thing is, you kinda have to anyway. There's enormous amounts of
>>>> asynchronous processing going on. E.g. nonblocking atomic commits 
>>>> also do
>>>> ttm unpinning and fun stuff like that, which if you sync things 
>>>> wrong can
>>>> happen way late. So the door for bad fallout is wide open :-(
>>>>
>>>> I'm not sure where the right tradeoff is to make sure we catch them 
>>>> all,
>>>> and can make sure with testing that we've indeed caught them all.
>>>> -Daniel
>>>>
>>
Daniel Vetter May 6, 2021, 9:40 a.m. UTC | #13
On Fri, Apr 30, 2021 at 01:27:37PM -0400, Andrey Grodzovsky wrote:
> 
> 
> On 2021-04-30 6:25 a.m., Daniel Vetter wrote:
> > On Thu, Apr 29, 2021 at 04:34:55PM -0400, Andrey Grodzovsky wrote:
> > > 
> > > 
> > > On 2021-04-29 3:05 p.m., Daniel Vetter wrote:
> > > > On Thu, Apr 29, 2021 at 12:04:33PM -0400, Andrey Grodzovsky wrote:
> > > > > 
> > > > > 
> > > > > On 2021-04-29 7:32 a.m., Daniel Vetter wrote:
> > > > > > On Thu, Apr 29, 2021 at 01:23:19PM +0200, Daniel Vetter wrote:
> > > > > > > On Wed, Apr 28, 2021 at 11:12:00AM -0400, Andrey Grodzovsky wrote:
> > > > > > > > With this calling drm_dev_unplug will flush and block
> > > > > > > > all in flight IOCTLs
> > > > > > > > 
> > > > > > > > Also, add feature such that if device supports graceful unplug
> > > > > > > > we enclose entire IOCTL in SRCU critical section.
> > > > > > > > 
> > > > > > > > Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
> > > > > > > 
> > > > > > > Nope.
> > > > > > > 
> > > > > > > The idea of drm_dev_enter/exit is to mark up hw access. Not entire ioctl.
> > > > > 
> > > > > Then I am confused why we have https://elixir.bootlin.com/linux/v5.12/source/drivers/gpu/drm/drm_ioctl.c#L826
> > > > > currently in code ?
> > > > 
> > > > I forgot about this one, again. Thanks for reminding.
> > > > 
> > > > > > > Especially not with an opt-in flag so that it could be shrugged of as a
> > > > > > > driver hack. Most of these ioctls should have absolutely no problem
> > > > > > > working after hotunplug.
> > > > > > > 
> > > > > > > Also, doing this defeats the point since it pretty much guarantees
> > > > > > > userspace will die in assert()s and stuff. E.g. on i915 the rough contract
> > > > > > > is that only execbuf (and even that only when userspace has indicated
> > > > > > > support for non-recoverable hw ctx) is allowed to fail. Anything else
> > > > > > > might crash userspace.
> > > > > 
> > > > > Given that as I pointed above we already fail any IOCTLs with -ENODEV
> > > > > when device is unplugged, it seems those crashes don't happen that
> > > > > often ? Also, in all my testing I don't think I saw a user space crash
> > > > > I could attribute to this.
> > > > 
> > > > I guess it should be ok.
> > > 
> > > What should be ok ?
> > 
> > Your approach, but not your patch. If we go with this let's just lift it
> > to drm_ioctl() as the default behavior. No driver opt-in flag, because
> > that's definitely worse than any other approach because we really need to
> > get rid of driver specific behaviour for generic ioctls, especially
> > anything a compositor will use directly.
> > 
> > > > My reasons for making this work is both less trouble for userspace (did
> > > > you test with various wayland compositors out there, not just amdgpu x86
> > > 
> > > I didn't - will give it a try.
> 
> Weston worked without crashes, ran the egl tester cube there.
> 
> > > 
> > > > driver?), but also testing.
> > > > 
> > > > We still need a bunch of these checks in various places or you'll wait a
> > > > very long time for a pending modeset or similar to complete. Being able to
> > > > run that code easily after hotunplug has completed should help a lot with
> > > > testing.
> > > > 
> > > > Plus various drivers already acquired drm_dev_enter/exit and now I wonder
> > > > whether that was properly tested or not ...
> > > > 
> > > > I guess maybe we need a drm module option to disable this check, so that
> > > > we can exercise the code as if the ioctl has raced with hotunplug at the
> > > > worst possible moment.
> > > > 
> > > > Also atomic is really tricky here: I assume your testing has just done
> > > > normal synchronous commits, but anything that goes through atomic can be
> > > > done nonblocking in a separate thread. Which the ioctl catch-all here wont
> > > > capture.
> > > 
> > > Yes, async commit was on my mind and thanks for reminding me. Indeed
> > > I forgot this but I planned to scope the entire amdgpu_dm_atomic_tail in
> > > drm_dev_enter/exit. Note that I have a bunch of patches, all names
> > > starting with 'Scope....' that just methodically put all the background
> > > work items and timers the drivers schedules in drm_dev_enter/exit scope.
> > > This was supposed to be part of the 'Scope Display code' patch.
> > 
> > That's too much. You still have to arrange that the flip completion event
> > gets sent out. So it's a bit tricky.
> > 
> > In other places the same problem applies, e.g. probe functions need to
> > make sure they report "disconnected".
> 
> I see, well, this is all part of KMS support which I defer for now
> anyway. Will tackle it then.
> 
> > 
> > > > > > > You probably need similar (and very precisely defined) rules for amdgpu.
> > > > > > > And those must definitely exclude any shared ioctls from randomly failing
> > > > > > > with EIO, because that just kills the box and defeats the point of trying
> > > > > > > to gracefully handle hotunplug and making sure userspace has a chance of
> > > > > > > survival. E.g. for atomic everything should continue, including flip
> > > > > > > completion, but we set all outputs to "disconnected" and send out the
> > > > > > > uevent. Maybe crtc enabling can fail too, but that can also be handled
> > > > > > > through the async status we're using to signal DP link failures to
> > > > > > > userspace.
> > > > > 
> > > > > As I pointed before, because of the complexity of the topic I prefer to
> > > > > take it step by step and solve first for secondary device use case, not
> > > > > for primary, display attached device.
> > > > 
> > > > Yeah makes sense. But then I think the right patch is to roll this out for
> > > > all drivers, properly justified with existing code. Not behind a driver
> > > > flag, because with all these different compositors the last thing we want
> > > > is a proliferation of driver-specific behaviour. That's imo the worst
> > > > option of all of them and needs to be avoided.
> > > 
> > > So this kind of patch would be acceptable to you if I unconditionally
> > > scope the drm_ioctl with drm_dev_enter/exit without the driver flag ?
> > > I am worried to break other drivers with this, see patch https://cgit.freedesktop.org/~agrodzov/linux/commit/?h=drm-misc-next&id=f0c593f35b22ca5bf60ed9e7ce2bf2b80e6c68c6
> > > Before setting drm_dev_unplug I go through a whole process of signalling
> > > all possible fences in the system which someone somewhere might be
> > > waiting on. My concern is that in the absence of HW those fences won't
> > > signal and so unless I signal them myself synchronize_srcu in
> > > drm_dev_unplug will hang waiting for any such code scoped by
> > > drm_dev_enter/exit.
> > 
> > Uh right. I forgot about this.
> > 
> > Which would kinda mean the top level scope is maybe not the best idea, and
> > perhaps we should indeed drill it down. But then the testing issue
> > definitely gets a lot worse.
> > 
> > So what if we'd push that drm_dev_is_unplugged check down into ioctls?
> > Then we can make a case-by-case decision whether it should be converted to
> > drm_dev_enter/exit, needs to be pushed down further into drivers (due to
> > fence wait issues) or other concerns?
> > 
> > Also I guess we need to have a subsystem wide rule on whether you need to
> > force complete all fences before you call drm_dev_unplug, or afterwards.
> 
> I don't see how you can handle it afterwards. If a thread is stuck in
> dma_fence_wait in non-interruptible wait (any kernel thread) and with no
> timeout there is nothing you can do to stop the wait. Any such code
> scoped with drm_dev_enter/exit will cause a hang in drm_dev_unplug.
> The only way then is to preemptively force signal all such fences before
> calling drm_dev_unplug - as I do in the above mentioned patch.

Yeah, which is why I don't think top-level drm_dev_enter/exit is a good
idea.

> > If we have mixed behaviour on this there will be disappointment. And since
> > hotunplug and dma_fence completion are both userspace visible that
> > inconsistency might have bigger impact.
> > 
> > This is all very tricky indeed :-/
> > 
> > btw for the "gradual pushing drm_dev_enter into ioctl" approach, if we go
> > with that: We could do the same trick we've done for DRM_UNLOCKED:
> > - drm_dev_enter/exit is called for any ioctl that has not set the
> >    DRM_HOTUNPLUG_SAFE flag
> > - for drm core ioctls we push them into all ioctls and decide how to
> >    handle/where (with the aim to have the least amount of code flow
> >    different during hotunplug vs after hotunplug has finished, to reduce
> >    testing scope)
> > - then we make DRM_HOTUNPLUG_SAFE the implied default
> > 
> > This would have us left with render ioctls, and I think the defensive
> > assumption there is that they're all hotunplug safe. We might hang on a
> > fence wait, but that's fixable, and it's better than blowing up on a
> > use-after-free security bug.
> > 
> > Thoughts?
> 
> I don't fully see a difference between the approach described above and
> the full drill down to each driver and even within the driver, to the HW
> back-ends - what criteria I would use to decide if for a given IOCTL i
> scope with drm_dev_enter/exit at the highest level while for another
> i go all the way down ? If we would agree that signaling the fences
> preemptively before engaging drm_dev_unplug is generically the right
> approach maybe we can then scope drm_ioctl unconditionally with
> drm_dev_enter/exit and then for each driver go through the same process
> I do for amdgpu - writing driver specific function which takes care of
> all the fences. We could then just create a drm callback which would
> be called from drm_ioctl before drm_dev_unplug is called.

So I see the appeal of just nuking all the fences, but I'm not sure that's
a good plan. We've done this in the old i915 gpu reset code too, and the
issue is that it de facto inverts the locking. But then again the hw is truly
gone, so it also makes sense.

The problem is partly the roll-out: if we state that dma_fence_wait is allowed
within a drm_dev_enter/exit section, then all drivers need to force-retire their
fences.
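
As a rough illustration of the deadlock at stake, here is a toy userspace
model (plain C with pthreads, all names hypothetical): a reader count stands
in for the SRCU read-side section, a flag plus condvar for a fence, and the
unplug side cannot complete until the fence wait inside the section is
force-signalled.

```c
#include <pthread.h>
#include <stdbool.h>

/* Toy model: "readers" stands in for tasks inside a
 * drm_dev_enter/exit-like section, dev_unplug() for the
 * synchronize_srcu() step, and a flag + condvar for a fence. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int readers;
static bool fence_signaled;
static bool waiter_done;

static void *waiter(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);
    readers++;                      /* "drm_dev_enter()" */
    while (!fence_signaled)         /* "dma_fence_wait()", no timeout */
        pthread_cond_wait(&cond, &lock);
    readers--;                      /* "drm_dev_exit()" */
    waiter_done = true;
    pthread_cond_broadcast(&cond);
    pthread_mutex_unlock(&lock);
    return NULL;
}

static void force_retire_fences(void)
{
    pthread_mutex_lock(&lock);
    fence_signaled = true;          /* "dma_fence_signal()" everything */
    pthread_cond_broadcast(&cond);
    pthread_mutex_unlock(&lock);
}

static void dev_unplug(void)
{
    pthread_mutex_lock(&lock);
    while (readers)                 /* "synchronize_srcu()" */
        pthread_cond_wait(&cond, &lock);
    pthread_mutex_unlock(&lock);
}

int demo(void)
{
    pthread_t t;

    pthread_create(&t, NULL, waiter, NULL);
    /* Without this call dev_unplug() below could block forever:
     * the reader would never leave its section. */
    force_retire_fences();
    dev_unplug();
    pthread_join(t, NULL);
    return waiter_done && readers == 0;
}
```

Dropping the force_retire_fences() call reproduces the hang described above.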

The other option would be that we require that dma_fence_wait is _not_
allowed in drm_dev_enter/exit, and that therefore these areas must be
marked up more fine-grained to avoid deadlocks. I like this more from the
testing aspect (it makes it easier to be reasonable sure your code handles
concurrent hotunplug), but also it's pretty easy to validate with the
dma_fence lockdep annotations we have I think.

A third reason for not requiring force-retiring of dma_fence before
drm_dev_unplug is the races: before drm_dev_unplug you haven't stopped new
fences from happening, but until you've stopped new fences it's hard to
guarantee they're all retired. How do you solve this currently?
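
The ordering concern can be stated as a toy model (plain C, all names
hypothetical): force-retiring is only meaningful once new fences can no
longer appear.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the ordering problem: retire-then-stop leaves a
 * window where a fresh fence slips in unsignaled; stop-then-retire
 * does not. */
enum { MAX_FENCES = 8 };
static bool signaled[MAX_FENCES];
static int nfences;
static bool accepting;   /* is new work still allowed? */

static int fence_create(void)
{
    if (!accepting || nfences == MAX_FENCES)
        return -1;       /* would be -ENODEV in a real driver */
    signaled[nfences] = false;
    return nfences++;
}

static void force_retire_all(void)
{
    for (int i = 0; i < nfences; i++)
        signaled[i] = true;
}

static int pending(void)
{
    int n = 0;

    for (int i = 0; i < nfences; i++)
        if (!signaled[i])
            n++;
    return n;
}

int demo(void)
{
    accepting = true;
    fence_create();
    fence_create();

    /* Step 1: stop new fences.  Step 2: retire what exists.
     * In the opposite order a concurrent fence_create() could
     * leave an unsignaled fence behind. */
    accepting = false;
    force_retire_all();

    assert(fence_create() == -1);
    return pending();    /* 0: nothing left to wait on */
}
```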

Finally there's still hangcheck and all that, so if we go with forbidding
dma_fence_wait from within drm_dev_enter/exit sections, then drivers don't
need to have additional tricky code to force-retire fences. TDR will take
care of that already (albeit with maybe a slightly annoying long timeout, which
we can shorten to "time out everything immediately" after drm_dev_unplug).

What we definitely can't have is half the drivers doing it one way, and
the other half the other way. So your driver flag to wrap the ioctl
optionally in a drm_dev_enter/exit path is a no-go still I think.
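
For comparison, the DRM_UNLOCKED-style trick proposed earlier could be
modelled roughly like this (plain C sketch, all names hypothetical; this is
not the actual drm_ioctl() code): the core dispatcher wraps every ioctl in
enter/exit unless the ioctl has opted out with a hotunplug-safe flag.

```c
#include <errno.h>
#include <stdbool.h>

/* Hypothetical opt-out flag, analogous in spirit to DRM_UNLOCKED. */
#define HOTUNPLUG_SAFE 0x1

struct ioctl_desc {
    int (*func)(void);
    unsigned int flags;
};

static bool unplugged;

/* Stand-ins for drm_dev_enter()/drm_dev_exit(). */
static bool dev_enter(void) { return !unplugged; }
static void dev_exit(void) { /* would drop the SRCU read lock */ }

static int dispatch(const struct ioctl_desc *d)
{
    int ret;

    if (d->flags & HOTUNPLUG_SAFE)
        return d->func();     /* must cope with a gone device itself */

    if (!dev_enter())
        return -ENODEV;       /* fenced off after unplug */
    ret = d->func();
    dev_exit();
    return ret;
}

/* Stand-in ioctl body, e.g. something like GETRESOURCES. */
static int getresources(void) { return 0; }

int demo(void)
{
    struct ioctl_desc safe = { getresources, HOTUNPLUG_SAFE };
    struct ioctl_desc unsafe = { getresources, 0 };

    unplugged = true;
    /* The flagged ioctl keeps working after unplug,
     * the unflagged one is rejected. */
    return dispatch(&safe) == 0 && dispatch(&unsafe) == -ENODEV;
}
```

Making HOTUNPLUG_SAFE the implied default, as suggested, then amounts to
flipping which set of ioctls needs explicit annotation.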

I guess my tldr; is: I definitely see how your current approach gives
quicker results for amdgpu right now, but long term I'm seeing more
positives on the other one. At least I expect less special cases due to
hotunplug with that.

Cheers, Daniel

> 
> Andrey
> 
> > 
> > It is unfortunately even more work until we've reached the goal, but I
> > think it's safest and most flexible approach overall.
> > 
> > Cheers, Daniel
> > 
> > > 
> > > Andrey
> > > 
> > > > 
> > > > Cheers, Daniel
> > > > 
> > > > 
> > > > > 
> > > > > > > 
> > > > > > > I guess we should clarify this in the hotunplug doc?
> > > > > 
> > > > > Agree
> > > > > 
> > > > > > 
> > > > > > To clarify: I'm not against throwing an ENODEV at userspace for ioctl that
> > > > > > really make no sense, and where we're rather confident that all properly
> > > > > > implemented userspace will gracefully handle failures. Like a modeset, or
> > > > > > opening a device, or trying to import a dma-buf or stuff like that which
> > > > > > can already fail in normal operation for any kind of reason.
> > > > > > 
> > > > > > But stuff that never fails, like GETRESOURCES ioctl, really shouldn't fail
> > > > > > after hotunplug.
> > > > > 
> > > > > As I pointed above, this confuses me a bit given that we already do
> > > > > blanket rejection of IOCTLs if the device is unplugged.
> > > > 
> > > > Well I'm confused about this too :-/
> > > > 
> > > > > > And then there's the middle ground, like doing a pageflip or buffer flush,
> > > > > > which I guess some userspace might handle, but risky to inflict those
> > > > > > consequences on them. atomic modeset is especially fun since depending
> > > > > > what you're doing it can be both "failures expected" and "failures not
> > > > > > really expected in normal operation".
> > > > > > 
> > > > > > Also, this really should be consistent across drivers, not solved with a
> > > > > > driver flag for every possible combination.
> > > > > > 
> > > > > > If you look at the current hotunplug kms drivers, they have
> > > > > > drm_dev_enter/exit sprinkled in specific hw callback functions because of
> > > > > > the above problems. But maybe it makes sense to change things in a few
> > > > > > cases. But then we should do it across the board.
> > > > > 
> > > > > So as I understand your preferred approach is that I scope any back_end, HW
> > > > > specific function with drm_dev_enter/exit because that's where MMIO
> > > > > access takes place. But besides explicit MMIO access through
> > > > > register accessors in the HW back-end there is also indirect MMIO access
> > > > > taking place throughout the code in the driver because of various VRAM
> > > > > BOs which provide CPU access to VRAM through the VRAM BAR. This kind of
> > > > > access is spread all over in the driver and even in mid-layers such as
> > > > > TTM and not limited to HW back-end functions. It means it's much harder
> > > > > to spot such places to surgically scope them with drm_dev_enter/exit and
> > > > > also that any new such code introduced will immediately break hot unplug
> > > > > because the developers can't be expected to remember making their code
> > > > > robust to this specific use case. That why when we discussed internally
> > > > > what approach to take to protecting code with drm_dev_enter/exit we
> > > > > opted for using the widest available scope.
> > > > 
> > > > The thing is, you kinda have to anyway. There's enormous amounts of
> > > > asynchronous processing going on. E.g. nonblocking atomic commits also do
> > > > ttm unpinning and fun stuff like that, which if you sync things wrong can
> > > > happen way late. So the door for bad fallout is wide open :-(
> > > > 
> > > > I'm not sure where the right tradeoff is to make sure we catch them all,
> > > > and can make sure with testing that we've indeed caught them all.
> > > > -Daniel
> > > > 
> >
Andrey Grodzovsky May 6, 2021, 4:25 p.m. UTC | #14
On 2021-05-06 5:40 a.m., Daniel Vetter wrote:
> On Fri, Apr 30, 2021 at 01:27:37PM -0400, Andrey Grodzovsky wrote:
>>
>>
>> On 2021-04-30 6:25 a.m., Daniel Vetter wrote:
>>> On Thu, Apr 29, 2021 at 04:34:55PM -0400, Andrey Grodzovsky wrote:
>>>>
>>>>
>>>> On 2021-04-29 3:05 p.m., Daniel Vetter wrote:
>>>>> On Thu, Apr 29, 2021 at 12:04:33PM -0400, Andrey Grodzovsky wrote:
>>>>>>
>>>>>>
>>>>>> On 2021-04-29 7:32 a.m., Daniel Vetter wrote:
>>>>>>> On Thu, Apr 29, 2021 at 01:23:19PM +0200, Daniel Vetter wrote:
>>>>>>>> On Wed, Apr 28, 2021 at 11:12:00AM -0400, Andrey Grodzovsky wrote:
>>>>>>>>> With this calling drm_dev_unplug will flush and block
>>>>>>>>> all in flight IOCTLs
>>>>>>>>>
>>>>>>>>> Also, add feature such that if device supports graceful unplug
>>>>>>>>> we enclose entire IOCTL in SRCU critical section.
>>>>>>>>>
>>>>>>>>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
>>>>>>>>
>>>>>>>> Nope.
>>>>>>>>
>>>>>>>> The idea of drm_dev_enter/exit is to mark up hw access. Not entire ioctl.
>>>>>>
>>>>>> Then I am confused why we have https://elixir.bootlin.com/linux/v5.12/source/drivers/gpu/drm/drm_ioctl.c#L826
>>>>>> currently in code ?
>>>>>
>>>>> I forgot about this one, again. Thanks for reminding.
>>>>>
>>>>>>>> Especially not with an opt-in flag so that it could be shrugged of as a
>>>>>>>> driver hack. Most of these ioctls should have absolutely no problem
>>>>>>>> working after hotunplug.
>>>>>>>>
>>>>>>>> Also, doing this defeats the point since it pretty much guarantees
>>>>>>>> userspace will die in assert()s and stuff. E.g. on i915 the rough contract
>>>>>>>> is that only execbuf (and even that only when userspace has indicated
>>>>>>>> support for non-recoverable hw ctx) is allowed to fail. Anything else
>>>>>>>> might crash userspace.
>>>>>>
>>>>>> Given that as I pointed above we already fail any IOCTLs with -ENODEV
>>>>>> when device is unplugged, it seems those crashes don't happen that
>>>>>> often ? Also, in all my testing I don't think I saw a user space crash
>>>>>> I could attribute to this.
>>>>>
>>>>> I guess it should be ok.
>>>>
>>>> What should be ok ?
>>>
>>> Your approach, but not your patch. If we go with this let's just lift it
>>> to drm_ioctl() as the default behavior. No driver opt-in flag, because
>>> that's definitely worse than any other approach because we really need to
>>> get rid of driver specific behaviour for generic ioctls, especially
>>> anything a compositor will use directly.
>>>
>>>>> My reasons for making this work is both less trouble for userspace (did
>>>>> you test with various wayland compositors out there, not just amdgpu x86
>>>>
>>>> I didn't - will give it a try.
>>
>> Weston worked without crashes, ran the egl tester cube there.
>>
>>>>
>>>>> driver?), but also testing.
>>>>>
>>>>> We still need a bunch of these checks in various places or you'll wait a
>>>>> very long time for a pending modeset or similar to complete. Being able to
>>>>> run that code easily after hotunplug has completed should help a lot with
>>>>> testing.
>>>>>
>>>>> Plus various drivers already acquired drm_dev_enter/exit and now I wonder
>>>>> whether that was properly tested or not ...
>>>>>
>>>>> I guess maybe we need a drm module option to disable this check, so that
>>>>> we can exercise the code as if the ioctl has raced with hotunplug at the
>>>>> worst possible moment.
>>>>>
>>>>> Also atomic is really tricky here: I assume your testing has just done
>>>>> normal synchronous commits, but anything that goes through atomic can be
>>>>> done nonblocking in a separate thread. Which the ioctl catch-all here wont
>>>>> capture.
>>>>
>>>> Yes, async commit was on my mind and thanks for reminding me. Indeed
>>>> I forgot this but I planned to scope the entire amdgpu_dm_atomic_tail in
>>>> drm_dev_enter/exit. Note that I have a bunch of patches, all names
>>>> starting with 'Scope....' that just methodically put all the background
>>>> work items and timers the drivers schedules in drm_dev_enter/exit scope.
>>>> This was supposed to be part of the 'Scope Display code' patch.
>>>
>>> That's too much. You still have to arrange that the flip completion event
>>> gets sent out. So it's a bit tricky.
>>>
>>> In other places the same problem applies, e.g. probe functions need to
>>> make sure they report "disconnected".
>>
>> I see, well, this is all part of KMS support which I defer for now
>> anyway. Will tackle it then.
>>
>>>
>>>>>>>> You probably need similar (and very precisely defined) rules for amdgpu.
>>>>>>>> And those must definitely exclude any shared ioctls from randomly failing
>>>>>>>> with EIO, because that just kills the box and defeats the point of trying
>>>>>>>> to gracefully handle hotunplug and making sure userspace has a chance of
>>>>>>>> survival. E.g. for atomic everything should continue, including flip
>>>>>>>> completion, but we set all outputs to "disconnected" and send out the
>>>>>>>> uevent. Maybe crtc enabling can fail too, but that can also be handled
>>>>>>>> through the async status we're using to signal DP link failures to
>>>>>>>> userspace.
>>>>>>
>>>>>> As I pointed before, because of the complexity of the topic I prefer to
>>>>>> take it step by step and solve first for secondary device use case, not
>>>>>> for primary, display attached device.
>>>>>
>>>>> Yeah makes sense. But then I think the right patch is to roll this out for
>>>>> all drivers, properly justified with existing code. Not behind a driver
>>>>> flag, because with all these different compositors the last thing we want
>>>>> is a proliferation of driver-specific behaviour. That's imo the worst
>>>>> option of all of them and needs to be avoided.
>>>>
>>>> So this kind of patch would be acceptable to you if I unconditionally
>>>> scope the drm_ioctl with drm_dev_enter/exit without the driver flag ?
>>>> I am worried to break other drivers with this, see patch https://cgit.freedesktop.org/~agrodzov/linux/commit/?h=drm-misc-next&id=f0c593f35b22ca5bf60ed9e7ce2bf2b80e6c68c6
>>>> Before setting drm_dev_unplug I go through a whole process of signalling
>>>> all possible fences in the system which someone somewhere might be
>>>> waiting on. My concern is that in the absence of HW those fences won't
>>>> signal, and so unless I signal them myself, synchronize_srcu in
>>>> drm_dev_unplug will hang waiting for any such code scoped by
>>>> drm_dev_enter/exit.
>>>
>>> Uh right. I forgot about this.
>>>
>>> Which would kinda mean the top level scope is maybe not the best idea, and
>>> perhaps we should indeed drill it down. But then the testing issue
>>> definitely gets a lot worse.
>>>
>>> So what if we'd push that drm_dev_is_unplugged check down into ioctls?
>>> Then we can make a case-by case decision whether it should be converted to
>>> drm_dev_enter/exit, needs to be pushed down further into drivers (due to
>>> fence wait issues) or other concerns?
>>>
>>> Also I guess we need to have a subsystem wide rule on whether you need to
>>> force complete all fences before you call drm_dev_unplug, or afterwards.
>>
>> I don't see how you can handle it afterwards. If a thread is stuck in
>> dma_fence_wait in a non-interruptible wait (any kernel thread) and with no
>> timeout there is nothing you can do to stop the wait. Any such code
>> scoped with drm_dev_enter/exit will cause a hang in drm_dev_unplug.
>> The only way then is to preemptively force signal all such fences before
>> calling drm_dev_unplug - as I do in the above mentioned patch.
> 
> Yeah, which is why I don't think top-level drm_dev_enter/exit is a good
> idea.
> 
>>> If we have mixed behaviour on this there will be disappointment. And since
>>> hotunplug and dma_fence completion are both userspace visible that
>>> inconsistency might have bigger impact.
>>>
>>> This is all very tricky indeed :-/
>>>
>>> btw for the "gradual pushing drm_dev_enter into ioctl" approach, if we go
>>> with that: We could do the same trick we've done for DRM_UNLOCKED:
>>> - drm_dev_enter/exit is called for any ioctl that has not set the
>>>     DRM_HOTUNPLUG_SAFE flag
>>> - for drm core ioctls we push them into all ioctls and decide how to
>>>     handle/where (with the aim to have the least amount of code flow
>>>     different during hotunplug vs after hotunplug has finished, to reduce
>>>     testing scope)
>>> - then we make DRM_HOTUNPLUG_SAFE the implied default
>>>
>>> This would have us left with render ioctls, and I think the defensive
>>> assumption there is that they're all hotunplug safe. We might hang on a
>>> fence wait, but that's fixable, and it's better than blowing up on a
>>> use-after-free security bug.
>>>
>>> Thoughts?
>>
>> I don't fully see a difference between the approach described above and
>> the full drill down to each driver and even within the driver, to the HW
>> back-ends - what criteria would I use to decide whether for a given IOCTL I
>> scope with drm_dev_enter/exit at the highest level while for another
>> I go all the way down? If we agree that signaling the fences
>> preemptively before engaging drm_dev_unplug is generically the right
>> approach maybe we can then scope drm_ioctl unconditionally with
>> drm_dev_enter/exit and then for each driver go through the same process
>> I do for amdgpu - writing driver specific function which takes care of
>> all the fences. We could then just create a drm callback which would
>> be called from drm_ioctl before drm_dev_unplug is called.
> 
> So I see the appeal of just nuking all the fences, but I'm not sure that's
> a good plan. We've done this in the old i915 gpu reset code too, and the
> issue is it's defacto inverting the locking. But also the hw is truly
> gone, so it also makes sense.
> 
> The problem is a bit roll-out, if we state that dma_fence_wait is allowed
> with a drm_dev_enter/exit, then all drivers need to force-retire their
> fences.
> 
> The other option would be that we require that dma_fence_wait is _not_
> allowed in drm_dev_enter/exit, and that therefore these areas must be
> marked up more fine-grained to avoid deadlocks. I like this more from the
> testing aspect (it makes it easier to be reasonable sure your code handles
> concurrent hotunplug), but also it's pretty easy to validate with the
> dma_fence lockdep annotations we have I think.

The key question as I see it: is it OK for a device to be unplugged
while its driver has, anywhere in its code, a dma_fence_wait
waiting for work completion from that device? The answer seems to me
to be no: the HW is gone, the fence will never signal, and you will be
left with an indefinitely hung thread and all of its unreleased
resources. If I am correct in the above statement, then avoiding
scoping code with drm_dev_enter/exit because a dma_fence_wait might be
in the middle just hides the problem. It also means the only solution
for any driver wanting to support hot-unplug is to force-retire all of
its HW fences once it is notified of device removal.

> 
> A third reason for not requiring force-retiring of dma_fence before
> drm_dev_unplug is the races: Before drm_dev_unplug you haven't stopped new
> fences from happening, but until you've stopped new fences it's hard to
>>> guarantee they're all retired. How do you solve this currently?

See amdgpu_finilize_device_fences in 
https://patchwork.ozlabs.org/project/linux-pci/patch/20210428151207.1212258-20-andrey.grodzovsky@amd.com/ 
I think the steps described there answer your
concern here.

> 
> Finally there's still hangcheck and all that, so if we go with forbidding
> dma_fence_wait from within drm_dev_enter/exit sections, then drivers don't
> need to have additional tricky code to force-retire fences. TDR will take
> care already (albeit with maybe a slightly annoying long timeout, which
> we can shorten to "time out everything immediately" after drm_dev_unplug).

I am not aware of TDR handlers that do it today; at least we don't:
we don't check whether the device is gone and then, instead of resetting
the device and resubmitting jobs, just force-retire all the HW fences.
In any case, this can and I think should be done in the pci remove
callback, because that is the place that is supposed to handle device
extraction. I, for example, in amdgpu_finilize_device_fences just block
all TDRs from taking place as the first step in the process. If other
drivers want to force-retire fences in their TDR handlers, they still
need to block and wait for all such TDRs in their pci_remove handler.

> 
> What we definitely can't have is half the drivers doing it one way, and
> the other half the other way. So your driver flag to wrap the ioctl
> optionally in a drm_dev_enter/exit path is a no-go still I think.
> 
> I guess my tldr; is: I definitely see how your current approach gives
> quicker results for amdgpu right now, but long term I'm seeing more
> positives on the other one. At least I expect less special cases due to
> hotunplug with that.

As I expressed above, it seems to me that any driver needing to
support hot-unplug must force-retire its fences, because all
dma_fence waits have to be unblocked, and so this will not be a special
case.

Andrey

> 
> Cheers, Daniel
> 
>>
>> Andrey
>>
>>>
>>> It is unfortunately even more work until we've reached the goal, but I
>>> think it's safest and most flexible approach overall.
>>>
>>> Cheers, Daniel
>>>
>>>>
>>>> Andrey
>>>>
>>>>>
>>>>> Cheers, Daniel
>>>>>
>>>>>
>>>>>>
>>>>>>>>
>>>>>>>> I guess we should clarify this in the hotunplug doc?
>>>>>>
>>>>>> Agree
>>>>>>
>>>>>>>
>>>>>>> To clarify: I'm not against throwing an ENODEV at userspace for ioctl that
>>>>>>> really make no sense, and where we're rather confident that all properly
>>>>>>> implemented userspace will gracefully handle failures. Like a modeset, or
>>>>>>> opening a device, or trying to import a dma-buf or stuff like that which
>>>>>>> can already fail in normal operation for any kind of reason.
>>>>>>>
>>>>>>> But stuff that never fails, like GETRESOURCES ioctl, really shouldn't fail
>>>>>>> after hotunplug.
>>>>>>
>>>>>> As I pointed out above, this confuses me a bit given that we already do
>>>>>> blanket rejection of IOCTLs if the device is unplugged.
>>>>>
>>>>> Well I'm confused about this too :-/
>>>>>
>>>>>>> And then there's the middle ground, like doing a pageflip or buffer flush,
>>>>>>> which I guess some userspace might handle, but risky to inflict those
>>>>>>> consequences on them. atomic modeset is especially fun since depending
>>>>>>> what you're doing it can be both "failures expected" and "failures not
>>>>>>> really expected in normal operation".
>>>>>>>
>>>>>>> Also, this really should be consistent across drivers, not solved with a
>>>>>>> driver flag for every possible combination.
>>>>>>>
>>>>>>> If you look at the current hotunplug kms drivers, they have
>>>>>>> drm_dev_enter/exit sprinkled in specific hw callback functions because of
>>>>>>> the above problems. But maybe it makes sense to change things in a few
>>>>>>> cases. But then we should do it across the board.
>>>>>>
>>>>>> So as I understand it, your preferred approach is that I scope any HW
>>>>>> back-end specific function with drm_dev_enter/exit because that is where
>>>>>> MMIO access takes place. But besides explicit MMIO access through
>>>>>> register accessors in the HW back-end there is also indirect MMIO access
>>>>>> taking place throughout the code in the driver because of various VRAM
>>>>>> BOs which provide CPU access to VRAM through the VRAM BAR. This kind of
>>>>>> access is spread all over in the driver and even in mid-layers such as
>>>>>> TTM and not limited to HW back-end functions. It means it's much harder
>>>>>> to spot such places to surgically scope them with drm_dev_enter/exit and
>>>>>> also that any new such code introduced will immediately break hot unplug
>>>>>> because the developers can't be expected to remember making their code
>>>>>> robust to this specific use case. That's why when we discussed internally
>>>>>> what approach to take to protecting code with drm_dev_enter/exit we
>>>>>> opted for using the widest available scope.
>>>>>
>>>>> The thing is, you kinda have to anyway. There's enormous amounts of
>>>>> asynchronous processing going on. E.g. nonblocking atomic commits also do
>>>>> ttm unpinning and fun stuff like that, which if you sync things wrong can
>>>>> happen way late. So the door for bad fallout is wide open :-(
>>>>>
>>>>> I'm not sure where the right tradeoff is to make sure we catch them all,
>>>>> and can make sure with testing that we've indeed caught them all.
>>>>> -Daniel
>>>>>
>>>
>
Daniel Vetter May 7, 2021, 9:11 a.m. UTC | #15
On Thu, May 06, 2021 at 12:25:06PM -0400, Andrey Grodzovsky wrote:
> 
> 
> On 2021-05-06 5:40 a.m., Daniel Vetter wrote:
> > On Fri, Apr 30, 2021 at 01:27:37PM -0400, Andrey Grodzovsky wrote:
> > > 
> > > 
> > > On 2021-04-30 6:25 a.m., Daniel Vetter wrote:
> > > > On Thu, Apr 29, 2021 at 04:34:55PM -0400, Andrey Grodzovsky wrote:
> > > > > 
> > > > > 
> > > > > On 2021-04-29 3:05 p.m., Daniel Vetter wrote:
> > > > > > On Thu, Apr 29, 2021 at 12:04:33PM -0400, Andrey Grodzovsky wrote:
> > > > > > > 
> > > > > > > 
> > > > > > > On 2021-04-29 7:32 a.m., Daniel Vetter wrote:
> > > > > > > > On Thu, Apr 29, 2021 at 01:23:19PM +0200, Daniel Vetter wrote:
> > > > > > > > > On Wed, Apr 28, 2021 at 11:12:00AM -0400, Andrey Grodzovsky wrote:
> > > > > > > > > > With this calling drm_dev_unplug will flush and block
> > > > > > > > > > all in flight IOCTLs
> > > > > > > > > > 
> > > > > > > > > > Also, add feature such that if device supports graceful unplug
> > > > > > > > > > we enclose entire IOCTL in SRCU critical section.
> > > > > > > > > > 
> > > > > > > > > > Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
> > > > > > > > > 
> > > > > > > > > Nope.
> > > > > > > > > 
> > > > > > > > > The idea of drm_dev_enter/exit is to mark up hw access. Not entire ioctl.
> > > > > > > 
>>>>>>> Then I am confused why we have https://elixir.bootlin.com/linux/v5.12/source/drivers/gpu/drm/drm_ioctl.c#L826
> > > > > > > currently in code ?
> > > > > > 
> > > > > > I forgot about this one, again. Thanks for reminding.
> > > > > > 
> > > > > > > > > Especially not with an opt-in flag so that it could be shrugged of as a
> > > > > > > > > driver hack. Most of these ioctls should have absolutely no problem
> > > > > > > > > working after hotunplug.
> > > > > > > > > 
> > > > > > > > > Also, doing this defeats the point since it pretty much guarantees
> > > > > > > > > userspace will die in assert()s and stuff. E.g. on i915 the rough contract
> > > > > > > > > is that only execbuf (and even that only when userspace has indicated
> > > > > > > > > support for non-recoverable hw ctx) is allowed to fail. Anything else
> > > > > > > > > might crash userspace.
> > > > > > > 
> > > > > > > Given that, as I pointed out above, we already fail any IOCTLs with -ENODEV
> > > > > > > when the device is unplugged, it seems those crashes don't happen that
> > > > > > > often? Also, in all my testing I don't think I saw a userspace crash
> > > > > > > I could attribute to this.
> > > > > > 
> > > > > > I guess it should be ok.
> > > > > 
> > > > > What should be ok ?
> > > > 
> > > > Your approach, but not your patch. If we go with this let's just lift it
> > > > to drm_ioctl() as the default behavior. No driver opt-in flag, because
> > > > that's definitely worse than any other approach because we really need to
> > > > get rid of driver specific behaviour for generic ioctls, especially
> > > > anything a compositor will use directly.
> > > > 
> > > > > > My reasons for making this work is both less trouble for userspace (did
> > > > > > you test with various wayland compositors out there, not just amdgpu x86
> > > > > 
> > > > > I didn't - will give it a try.
> > > 
> > > Weston worked without crashes, run the egl tester cube there.
> > > 
> > > > > 
> > > > > > driver?), but also testing.
> > > > > > 
> > > > > > We still need a bunch of these checks in various places or you'll wait a
> > > > > > very long time for a pending modeset or similar to complete. Being able to
> > > > > > run that code easily after hotunplug has completed should help a lot with
> > > > > > testing.
> > > > > > 
> > > > > > Plus various drivers already acquired drm_dev_enter/exit and now I wonder
> > > > > > whether that was properly tested or not ...
> > > > > > 
> > > > > > I guess maybe we need a drm module option to disable this check, so that
> > > > > > we can exercise the code as if the ioctl has raced with hotunplug at the
> > > > > > worst possible moment.
> > > > > > 
> > > > > > Also atomic is really tricky here: I assume your testing has just done
> > > > > > normal synchronous commits, but anything that goes through atomic can be
> > > > > > done nonblocking in a separate thread. Which the ioctl catch-all here wont
> > > > > > capture.
> > > > > 
> > > > > Yes, async commit was on my mind and thanks for reminding me. Indeed
> > > > > I forgot this, but I planned to scope the entire amdgpu_dm_atomic_tail in
> > > > > drm_dev_enter/exit. Note that I have a bunch of patches, all names
> > > > > starting with 'Scope....', that just methodically put all the background
> > > > > work items and timers the driver schedules in drm_dev_enter/exit scope.
> > > > > This was supposed to be part of the 'Scope Display code' patch.
> > > > 
> > > > That's too much. You still have to arrange that the flip completion event
> > > > gets sent out. So it's a bit tricky.
> > > > 
> > > > In other places the same problem applies, e.g. probe functions need to
> > > > make sure they report "disconnected".
> > > 
> > > I see, well, this is all part of KMS support which I defer for now
> > > anyway. Will tackle it then.
> > > 
> > > > 
> > > > > > > > > You probably need similar (and very precisely defined) rules for amdgpu.
> > > > > > > > > And those must definitely exclude any shared ioctls from randomly failing
> > > > > > > > > with EIO, because that just kills the box and defeats the point of trying
> > > > > > > > > to gracefully handle hotunplug and making sure userspace has a chance of
> > > > > > > > > survival. E.g. for atomic everything should continue, including flip
> > > > > > > > > completion, but we set all outputs to "disconnected" and send out the
> > > > > > > > > uevent. Maybe crtc enabling can fail too, but that can also be handled
> > > > > > > > > through the async status we're using to signal DP link failures to
> > > > > > > > > userspace.
> > > > > > > 
> > > > > > > > As I pointed out before, because of the complexity of the topic I prefer to
> > > > > > > > take it step by step and solve first for the secondary-device use case, not
> > > > > > > > for the primary, display-attached device.
> > > > > > 
> > > > > > Yeah makes sense. But then I think the right patch is to roll this out for
> > > > > > all drivers, properly justified with existing code. Not behind a driver
> > > > > > flag, because with all these different compositors the last thing we want
> > > > > > is a proliferation of driver-specific behaviour. That's imo the worst
> > > > > > option of all of them and needs to be avoided.
> > > > > 
> > > > > So this kind of patch would be acceptable to you if I unconditionally
> > > > > scope the drm_ioctl with drm_dev_enter/exit without the driver flag ?
> > > > > I am worried to break other drivers with this, see patch https://cgit.freedesktop.org/~agrodzov/linux/commit/?h=drm-misc-next&id=f0c593f35b22ca5bf60ed9e7ce2bf2b80e6c68c6
> > > > > Before setting drm_dev_unplug I go through a whole process of signalling
> > > > > all possible fences in the system which someone somewhere might be
> > > > > waiting on. My concern is that in the absence of HW those fences won't
> > > > > signal, and so unless I signal them myself, synchronize_srcu in
> > > > > drm_dev_unplug will hang waiting for any such code scoped by
> > > > > drm_dev_enter/exit.
> > > > 
> > > > Uh right. I forgot about this.
> > > > 
> > > > Which would kinda mean the top level scope is maybe not the best idea, and
> > > > perhaps we should indeed drill it down. But then the testing issue
> > > > definitely gets a lot worse.
> > > > 
> > > > So what if we'd push that drm_dev_is_unplugged check down into ioctls?
> > > > Then we can make a case-by case decision whether it should be converted to
> > > > drm_dev_enter/exit, needs to be pushed down further into drivers (due to
> > > > fence wait issues) or other concerns?
> > > > 
> > > > Also I guess we need to have a subsystem wide rule on whether you need to
> > > > force complete all fences before you call drm_dev_unplug, or afterwards.
> > > 
> > > I don't see how you can handle it afterwards. If a thread is stuck in
> > > dma_fence_wait in a non-interruptible wait (any kernel thread) and with no
> > > timeout there is nothing you can do to stop the wait. Any such code
> > > scoped with drm_dev_enter/exit will cause a hang in drm_dev_unplug.
> > > The only way then is to preemptively force signal all such fences before
> > > calling drm_dev_unplug - as I do in the above mentioned patch.
> > 
> > Yeah, which is why I don't think top-level drm_dev_enter/exit is a good
> > idea.
> > 
> > > > If we have mixed behaviour on this there will be disappointment. And since
> > > > hotunplug and dma_fence completion are both userspace visible that
> > > > inconsistency might have bigger impact.
> > > > 
> > > > This is all very tricky indeed :-/
> > > > 
> > > > btw for the "gradual pushing drm_dev_enter into ioctl" approach, if we go
> > > > with that: We could do the same trick we've done for DRM_UNLOCKED:
> > > > - drm_dev_enter/exit is called for any ioctl that has not set the
> > > >     DRM_HOTUNPLUG_SAFE flag
> > > > - for drm core ioctls we push them into all ioctls and decide how to
> > > >     handle/where (with the aim to have the least amount of code flow
> > > >     different during hotunplug vs after hotunplug has finished, to reduce
> > > >     testing scope)
> > > > - then we make DRM_HOTUNPLUG_SAFE the implied default
> > > > 
> > > > This would have us left with render ioctls, and I think the defensive
> > > > assumption there is that they're all hotunplug safe. We might hang on a
> > > > fence wait, but that's fixable, and it's better than blowing up on a
> > > > use-after-free security bug.
> > > > 
> > > > Thoughts?
> > > 
> > > I don't fully see a difference between the approach described above and
> > > the full drill down to each driver and even within the driver, to the HW
> > > back-ends - what criteria would I use to decide whether for a given IOCTL I
> > > scope with drm_dev_enter/exit at the highest level while for another
> > > I go all the way down? If we agree that signaling the fences
> > > preemptively before engaging drm_dev_unplug is generically the right
> > > approach maybe we can then scope drm_ioctl unconditionally with
> > > drm_dev_enter/exit and then for each driver go through the same process
> > > I do for amdgpu - writing driver specific function which takes care of
> > > all the fences. We could then just create a drm callback which would
> > > be called from drm_ioctl before drm_dev_unplug is called.
> > 
> > So I see the appeal of just nuking all the fences, but I'm not sure that's
> > a good plan. We've done this in the old i915 gpu reset code too, and the
> > issue is it's defacto inverting the locking. But also the hw is truly
> > gone, so it also makes sense.
> > 
> > The problem is a bit roll-out, if we state that dma_fence_wait is allowed
> > with a drm_dev_enter/exit, then all drivers need to force-retire their
> > fences.
> > 
> > The other option would be that we require that dma_fence_wait is _not_
> > allowed in drm_dev_enter/exit, and that therefore these areas must be
> > marked up more fine-grained to avoid deadlocks. I like this more from the
> > testing aspect (it makes it easier to be reasonable sure your code handles
> > concurrent hotunplug), but also it's pretty easy to validate with the
> > dma_fence lockdep annotations we have I think.
> 
> The key question as I see it: is it OK for a device to be unplugged
> while its driver has, anywhere in its code, a dma_fence_wait
> waiting for work completion from that device? The answer seems to me
> to be no: the HW is gone, the fence will never signal, and you will be
> left with an indefinitely hung thread and all of its unreleased
> resources. If I am correct in the above statement, then avoiding
> scoping code with drm_dev_enter/exit because a dma_fence_wait might be
> in the middle just hides the problem. It also means the only solution
> for any driver wanting to support hot-unplug is to force-retire all of
> its HW fences once it is notified of device removal.

At a high level, yes dma_fence must always complete. I don't think we have
a disagreement here on that.

What we're discussing here is the precise sequencing and barriers, where
things get tricky. Requiring that you force-complete all dma_fence that
might be affected before you hotunplug is one solution, the other is
tuning the critical sections that drm_dev_enter/exit annotates.

This isn't about avoiding anything or hiding problems, this is about
locking/synchronization design. And for that we must agree on what is
allowed inside/outside of a critical section for all possible
combinations.

E.g. we're also "hiding" problems with calling dma_fence_wait from
shrinkers/mmu notifiers by forbidding allocations in
dma_fence_begin/end_signalling critical paths.

> > A third reason for not requiring force-retiring of dma_fence before
> > drm_dev_unplug is the races: Before drm_dev_unplug you haven't stopped new
> > fences from happening, but until you've stopped new fences it's hard to
> > guarantee they're all retired. How do you solve this currently?
> 
> See amdgpu_finilize_device_fences in https://patchwork.ozlabs.org/project/linux-pci/patch/20210428151207.1212258-20-andrey.grodzovsky@amd.com/
> I think the steps described there answer your
> concern here.

The hard problem is stopping further command submission. Not seeing how
you solve that.

But I'm definitely scared about all the scheduler/tdr interactions you
already have there, and that looks quite a bit like fallout from doing
things the wrong way round.

Also given that drm/scheduler is shared, why can't this be a drm/scheduler
helper function?

> > Finally there's still hangcheck and all that, so if we go with forbidding
> > dma_fence_wait from within drm_dev_enter/exit sections, then drivers don't
> > need to have additional tricky code to force-retire fences. TDR will take
> > care already (albeit with maybe a slightly annoying long timeout, which
> > we can shorten to "time out everything immediately" after drm_dev_unplug).
> 
> I am not aware of TDR handlers that do it today; at least we don't:
> we don't check whether the device is gone and then, instead of resetting
> the device and resubmitting jobs, just force-retire all the HW fences.
> In any case, this can and I think should be done in the pci remove
> callback, because that is the place that is supposed to handle device
> extraction. I, for example, in amdgpu_finilize_device_fences just block
> all TDRs from taking place as the first step in the process. If other
> drivers want to force-retire fences in their TDR handlers, they still
> need to block and wait for all such TDRs in their pci_remove handler.

TDR definitely force-completes the fence that did hang. Of course it'll
take a while until they've all completed this way, but we do have
guaranteed forward progress since we've stopped all further fences from
showing up because drm_dev_unplug is called already.

And yes after drm_dev_unplug you can then force-retire the tdr stuff.

> > What we definitely can't have is half the drivers doing it one way, and
> > the other half the other way. So your driver flag to wrap the ioctl
> > optionally in a drm_dev_enter/exit path is a no-go still I think.
> > 
> > I guess my tldr; is: I definitely see how your current approach gives
> > quicker results for amdgpu right now, but long term I'm seeing more
> > positives on the other one. At least I expect less special cases due to
> > hotunplug with that.
> 
> As I expressed above, it seems to me that any driver needing to
> support hot-unplug must force-retire its fences, because all
> dma_fence waits have to be unblocked, and so this will not be a special
> case.

This isn't the special case I meant. It's the very tricky
force-retire-before-you-unplugged-officially which is large scale nasty.

Also if your driver doesn't force-retire already, it's buggy. The
additional need of hotunplug is just that we're trying to force-retire a
bit faster, because we know it's all hopeless. But e.g. i915 already has a
fallback that does this automatically:
- first we reset only the engine/context, keeping everyone else running
- if that doesn't pan out, we reset the entire chip and give up on
  anything that's in-flight, which (iirc, it did so at least in the past)
  force retires everything outstanding.

I think amdgpu only has full chip reset, so your first step tries to
reissue all other tasks. But that's not necessarily how it needs to
happen.

Either way drivers must force retire everything (albeit maybe a bit at a
slow pace) if the hw ceased to work properly already. Hotunplug really
isn't anything new here.
-Daniel
> 
> Andrey
> 
> > 
> > Cheers, Daniel
> > 
> > > 
> > > Andrey
> > > 
> > > > 
> > > > It is unfortunately even more work until we've reached the goal, but I
> > > > think it's safest and most flexible approach overall.
> > > > 
> > > > Cheers, Daniel
> > > > 
> > > > > 
> > > > > Andrey
> > > > > 
> > > > > > 
> > > > > > Cheers, Daniel
> > > > > > 
> > > > > > 
> > > > > > > 
> > > > > > > > > 
> > > > > > > > > I guess we should clarify this in the hotunplug doc?
> > > > > > > 
> > > > > > > Agree
> > > > > > > 
> > > > > > > > 
> > > > > > > > To clarify: I'm not against throwing an ENODEV at userspace for ioctl that
> > > > > > > > really make no sense, and where we're rather confident that all properly
> > > > > > > > implemented userspace will gracefully handle failures. Like a modeset, or
> > > > > > > > opening a device, or trying to import a dma-buf or stuff like that which
> > > > > > > > can already fail in normal operation for any kind of reason.
> > > > > > > > 
> > > > > > > > But stuff that never fails, like GETRESOURCES ioctl, really shouldn't fail
> > > > > > > > after hotunplug.
> > > > > > > 
> > > > > > > As I pointed out above, this confuses me a bit given that we already do
> > > > > > > blanket rejection of IOCTLs if the device is unplugged.
> > > > > > 
> > > > > > Well I'm confused about this too :-/
> > > > > > 
> > > > > > > > And then there's the middle ground, like doing a pageflip or buffer flush,
> > > > > > > > which I guess some userspace might handle, but risky to inflict those
> > > > > > > > consequences on them. atomic modeset is especially fun since depending
> > > > > > > > what you're doing it can be both "failures expected" and "failures not
> > > > > > > > really expected in normal operation".
> > > > > > > > 
> > > > > > > > Also, this really should be consistent across drivers, not solved with a
> > > > > > > > driver flag for every possible combination.
> > > > > > > > 
> > > > > > > > If you look at the current hotunplug kms drivers, they have
> > > > > > > > drm_dev_enter/exit sprinkled in specific hw callback functions because of
> > > > > > > > the above problems. But maybe it makes sense to change things in a few
> > > > > > > > cases. But then we should do it across the board.
> > > > > > > 
> > > > > > > > So as I understand it, your preferred approach is that I scope any HW
> > > > > > > > back-end specific function with drm_dev_enter/exit because that is where
> > > > > > > > MMIO access takes place. But besides explicit MMIO access through
> > > > > > > register accessors in the HW back-end there is also indirect MMIO access
> > > > > > > taking place throughout the code in the driver because of various VRAM
> > > > > > > BOs which provide CPU access to VRAM through the VRAM BAR. This kind of
> > > > > > > access is spread all over in the driver and even in mid-layers such as
> > > > > > > TTM and not limited to HW back-end functions. It means it's much harder
> > > > > > > to spot such places to surgically scope them with drm_dev_enter/exit and
> > > > > > > also that any new such code introduced will immediately break hot unplug
> > > > > > > because the developers can't be expected to remember making their code
> > > > > > > > robust to this specific use case. That's why when we discussed internally
> > > > > > > what approach to take to protecting code with drm_dev_enter/exit we
> > > > > > > opted for using the widest available scope.
> > > > > > 
> > > > > > The thing is, you kinda have to anyway. There's enormous amounts of
> > > > > > asynchronous processing going on. E.g. nonblocking atomic commits also do
> > > > > > ttm unpinning and fun stuff like that, which if you sync things wrong can
> > > > > > happen way late. So the door for bad fallout is wide open :-(
> > > > > > 
> > > > > > I'm not sure where the right tradeoff is to make sure we catch them all,
> > > > > > and can make sure with testing that we've indeed caught them all.
> > > > > > -Daniel
> > > > > > 
> > > > 
> >
Andrey Grodzovsky May 7, 2021, 3:39 p.m. UTC | #16
On 2021-05-07 5:11 a.m., Daniel Vetter wrote:
> On Thu, May 06, 2021 at 12:25:06PM -0400, Andrey Grodzovsky wrote:
>>
>>
>> On 2021-05-06 5:40 a.m., Daniel Vetter wrote:
>>> On Fri, Apr 30, 2021 at 01:27:37PM -0400, Andrey Grodzovsky wrote:
>>>>
>>>>
>>>> On 2021-04-30 6:25 a.m., Daniel Vetter wrote:
>>>>> On Thu, Apr 29, 2021 at 04:34:55PM -0400, Andrey Grodzovsky wrote:
>>>>>>
>>>>>>
>>>>>> On 2021-04-29 3:05 p.m., Daniel Vetter wrote:
>>>>>>> On Thu, Apr 29, 2021 at 12:04:33PM -0400, Andrey Grodzovsky wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> On 2021-04-29 7:32 a.m., Daniel Vetter wrote:
>>>>>>>>> On Thu, Apr 29, 2021 at 01:23:19PM +0200, Daniel Vetter wrote:
>>>>>>>>>> On Wed, Apr 28, 2021 at 11:12:00AM -0400, Andrey Grodzovsky wrote:
>>>>>>>>>>> With this calling drm_dev_unplug will flush and block
>>>>>>>>>>> all in flight IOCTLs
>>>>>>>>>>>
>>>>>>>>>>> Also, add feature such that if device supports graceful unplug
>>>>>>>>>>> we enclose entire IOCTL in SRCU critical section.
>>>>>>>>>>>
>>>>>>>>>>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
>>>>>>>>>>
>>>>>>>>>> Nope.
>>>>>>>>>>
>>>>>>>>>> The idea of drm_dev_enter/exit is to mark up hw access. Not entire ioctl.
>>>>>>>>
>>>>>>>> Then I am confused why we have https://elixir.bootlin.com/linux/v5.12/source/drivers/gpu/drm/drm_ioctl.c#L826
>>>>>>>> currently in code ?
>>>>>>>
>>>>>>> I forgot about this one, again. Thanks for reminding.
>>>>>>>
>>>>>>>>>> Especially not with an opt-in flag so that it could be shrugged of as a
>>>>>>>>>> driver hack. Most of these ioctls should have absolutely no problem
>>>>>>>>>> working after hotunplug.
>>>>>>>>>>
>>>>>>>>>> Also, doing this defeats the point since it pretty much guarantees
>>>>>>>>>> userspace will die in assert()s and stuff. E.g. on i915 the rough contract
>>>>>>>>>> is that only execbuf (and even that only when userspace has indicated
>>>>>>>>>> support for non-recoverable hw ctx) is allowed to fail. Anything else
>>>>>>>>>> might crash userspace.
>>>>>>>>
>>>>>>>> Given that, as I pointed out above, we already fail any IOCTLs with -ENODEV
>>>>>>>> when the device is unplugged, it seems those crashes don't happen that
>>>>>>>> often ? Also, in all my testing I don't think I saw a user space crash
>>>>>>>> I could attribute to this.
>>>>>>>
>>>>>>> I guess it should be ok.
>>>>>>
>>>>>> What should be ok ?
>>>>>
>>>>> Your approach, but not your patch. If we go with this let's just lift it
>>>>> to drm_ioctl() as the default behavior. No driver opt-in flag, because
>>>>> that's definitely worse than any other approach because we really need to
>>>>> get rid of driver specific behaviour for generic ioctls, especially
>>>>> anything a compositor will use directly.
>>>>>
>>>>>>> My reasons for making this work is both less trouble for userspace (did
>>>>>>> you test with various wayland compositors out there, not just amdgpu x86
>>>>>>
>>>>>> I didn't - will give it a try.
>>>>
>>>> Weston worked without crashes; I ran the egl cube tester there.
>>>>
>>>>>>
>>>>>>> driver?), but also testing.
>>>>>>>
>>>>>>> We still need a bunch of these checks in various places or you'll wait a
>>>>>>> very long time for a pending modeset or similar to complete. Being able to
>>>>>>> run that code easily after hotunplug has completed should help a lot with
>>>>>>> testing.
>>>>>>>
>>>>>>> Plus various drivers already acquired drm_dev_enter/exit and now I wonder
>>>>>>> whether that was properly tested or not ...
>>>>>>>
>>>>>>> I guess maybe we need a drm module option to disable this check, so that
>>>>>>> we can exercise the code as if the ioctl has raced with hotunplug at the
>>>>>>> worst possible moment.
>>>>>>>
>>>>>>> Also atomic is really tricky here: I assume your testing has just done
>>>>>>> normal synchronous commits, but anything that goes through atomic can be
>>>>>>> done nonblocking in a separate thread. Which the ioctl catch-all here wont
>>>>>>> capture.
>>>>>>
>>>>>> Yes, async commit was on my mind and thanks for reminding me. Indeed
>>>>>> I forgot this but i planned to scope the entire amdgpu_dm_atomic_tail in
>>>>>> drm_dev_enter/exit. Note that I have a bunch of patches, all names
>>>>>> starting with 'Scope....' that just methodically put all the background
>>>>>> work items and timers the drivers schedules in drm_dev_enter/exit scope.
>>>>>> This was supposed to be part of the 'Scope Display code' patch.
>>>>>
>>>>> That's too much. You still have to arrange that the flip completion event
>>>>> gets sent out. So it's a bit tricky.
>>>>>
>>>>> In other places the same problem applies, e.g. probe functions need to
>>>>> make sure they report "disconnected".
>>>>
>>>> I see, well, this is all part of KMS support which I defer for now
>>>> anyway. Will tackle it then.
>>>>
>>>>>
>>>>>>>>>> You probably need similar (and very precisely defined) rules for amdgpu.
>>>>>>>>>> And those must definitely exclude any shared ioctls from randomly failing
>>>>>>>>>> with EIO, because that just kills the box and defeats the point of trying
>>>>>>>>>> to gracefully handling hotunplug and making sure userspace has a chance of
>>>>>>>>>> survival. E.g. for atomic everything should continue, including flip
>>>>>>>>>> completion, but we set all outputs to "disconnected" and send out the
>>>>>>>>>> uevent. Maybe crtc enabling can fail too, but that can also be handled
>>>>>>>>>> through the async status we're using to signal DP link failures to
>>>>>>>>>> userspace.
>>>>>>>>
>>>>>>>> As I pointed before, because of the complexity of the topic I prefer to
>>>>>>>> take it step by step and solve first for secondary device use case, not
>>>>>>>> for primary, display attached device.
>>>>>>>
>>>>>>> Yeah makes sense. But then I think the right patch is to roll this out for
>>>>>>> all drivers, properly justified with existing code. Not behind a driver
>>>>>>> flag, because with all these different compositors the last thing we want
>>>>>>> is a proliferation of driver-specific behaviour. That's imo the worst
>>>>>>> option of all of them and needs to be avoided.
>>>>>>
>>>>>> So this kind of patch would be acceptable to you if I unconditionally
>>>>>> scope the drm_ioctl with drm_dev_enter/exit without the driver flag ?
>>>>>> I am worried to break other drivers with this, see patch https://cgit.freedesktop.org/~agrodzov/linux/commit/?h=drm-misc-next&id=f0c593f35b22ca5bf60ed9e7ce2bf2b80e6c68c6
>>>>>> Before setting drm_dev_unplug I go through a whole process of signalling
>>>>>> all possible fences in the system which some one some where might be
>>>>>> waiting on. My concern is that in the absence of HW those fences won't
>>>>>> signal and so unless I signal them myself srcu_synchrionize in
>>>>>> drm_dev_unplug will hang waiting for any such code scoped by
>>>>>> drm_dev_enter/exit.
>>>>>
>>>>> Uh right. I forgot about this.
>>>>>
>>>>> Which would kinda mean the top level scope is maybe not the best idea, and
>>>>> perhaps we should indeed drill it down. But then the testing issue
>>>>> definitely gets a lot worse.
>>>>>
>>>>> So what if we'd push that drm_dev_is_unplugged check down into ioctls?
>>>>> Then we can make a case-by case decision whether it should be converted to
>>>>> drm_dev_enter/exit, needs to be pushed down further into drivers (due to
>>>>> fence wait issues) or other concerns?
>>>>>
>>>>> Also I guess we need to have a subsystem wide rule on whether you need to
>>>>> force complete all fences before you call drm_dev_unplug, or afterwards.
>>>>
>>>> I don't see how you can handle it afterwards. If a thread is stuck in
>>>> dma_fence_wait in non interruptible wait (any kernel thread) and with no
>>>> timeout there is nothing you can do to stop the wait. Any such code
>>>> scoped with drm_dev_enter/exit will cause a hang in drm_dev_unplug.
>>>> The only way then is to preemptively force signal all such fences before
>>>> calling drm_dev_unplug - as I do in the above mentioned patch.
>>>
>>> Yeah, which is why I don't think top-level drm_dev_enter/exit is a good
>>> idea.
>>>
>>>>> If we have mixed behaviour on this there will be disappointment. And since
>>>>> hotunplug and dma_fence completion are both userspace visible that
>>>>> inconsistency might have bigger impact.
>>>>>
>>>>> This is all very tricky indeed :-/
>>>>>
>>>>> btw for the "gradual pushing drm_dev_enter into ioctl" approach, if we go
>>>>> with that: We could do the same trick we've done for DRM_UNLOCKED:
>>>>> - drm_dev_enter/exit is called for any ioctl that has not set the
>>>>>      DRM_HOTUNPLUG_SAFE flag
>>>>> - for drm core ioctls we push them into all ioctls and decide how to
>>>>>      handle/where (with the aim to have the least amount of code flow
>>>>>      different during hotunplug vs after hotunplug has finished, to reduce
>>>>>      testing scope)
>>>>> - then we make DRM_HOTUNPLUG_SAFE the implied default
>>>>>
>>>>> This would have us left with render ioctls, and I think the defensive
>>>>> assumption there is that they're all hotunplug safe. We might hang on a
>>>>> fence wait, but that's fixable, and it's better than blowing up on a
>>>>> use-after-free security bug.
>>>>>
>>>>> Thoughts?
>>>>
>>>> I don't fully see a difference between the approach described above and
>>>> the full drill down to each driver and even within the driver, to the HW
>>>> back-ends - what criteria would I use to decide if for a given IOCTL I
>>>> scope with drm_dev_enter/exit at the highest level while for another
>>>> I go all the way down ? If we would agree that signaling the fences
>>>> preemptively before engaging drm_dev_unplug is generically the right
>>>> approach maybe we can then scope drm_ioctl unconditionally with
>>>> drm_dev_enter/exit and then for each driver go through the same process
>>>> I do for amdgpu - writing driver specific function which takes care of
>>>> all the fences. We could then just create a drm callback which would
>>>> be called from drm_ioctl before drm_dev_unplug is called.
>>>
>>> So I see the appeal of just nuking all the fences, but I'm not sure that's
>>> a good plan. We've done this in the old i915 gpu reset code too, and the
>>> issue is it's defacto inverting the locking. But also the hw is truly
>>> gone, so it also makes sense.
>>>
>>> The problem is a bit roll-out, if we state that dma_fence_wait is allowed
>>> with a drm_dev_enter/exit, then all drivers need to force-retire their
>>> fences.
>>>
>>> The other option would be that we require that dma_fence_wait is _not_
>>> allowed in drm_dev_enter/exit, and that therefore these areas must be
>>> marked up more fine-grained to avoid deadlocks. I like this more from the
>>> testing aspect (it makes it easier to be reasonable sure your code handles
>>> concurrent hotunplug), but also it's pretty easy to validate with the
>>> dma_fence lockdep annotations we have I think.
>>
>> The key question as I see it - is it ok for a device to be unplugged
>> while its driver has, anywhere in its code, a dma_fence_wait
>> waiting for work completion from this device. The answer seems to me
>> to be no: the HW is gone, this fence will never signal and so you will
>> be left with an indefinitely hung thread with all its unreleased
>> resources. If I am correct in the above statement, then avoiding scoping
>> code with drm_dev_enter/exit because a dma_fence_wait might be in the
>> middle just hides the problem. Also, then the only solution for each driver
>> wanting to support hot-unplug is to force retire all its HW
>> fences once it's notified of device removal.
> 
> At a high level, yes dma_fence must always complete. I don't think we have
> a disagreement here on that.
> 
> What we're discussing here is the precise sequencing and barriers, where
> things get tricky. Requiring that you force-complete all dma_fence that
> might be affected before you hotunplug is one solution, the other is
> tuning the critical sections that drm_dev_enter/exit annotates.
> 
> This isn't about avoiding anything or hiding problems, this is about
> locking/synchronization design. And for that we must agree on what is
> allowed inside/outside of a critical section for all possible
> combinations.
> 
> E.g. we're also "hiding" problems with calling dma_fence_wait from
> shrinkers/mmu notifiers by forbidding allocations in
> dma_fence_begin/end_signalling critical paths.
> 
>>> A third reasons for not requiring force-retiring of dma_fence before
>>> drm_dev_unplug is the races: Before drm_dev_unplug you haven't stopped new
>>> fences from happening, but until you've stopped new fences it's hard to
>>> guarantee they're all retired. How do you solve this currently?
>>
>> See amdgpu_finilize_device_fences in https://patchwork.ozlabs.org/project/linux-pci/patch/20210428151207.1212258-20-andrey.grodzovsky@amd.com/
>> I think the steps described there answer your
>> concern here.
> 
> The hard problem is stopping further command submission. Not seeing how
> you solve that.

By stopping the GPU SW scheduler before force-completing the HW fences;
see amdgpu_finilize_device_fences->amdgpu_fence_driver_fini_hw and the
comment above it.

> 
> But I'm definitely scared about all the scheduler/tdr interactions you
> already have there, and that looks quite a bit like fallout from doing
> things the wrong way round.
> 
> Also given that drm/scheduler is shared, why can't this be a drm/scheduler
> helper function?

I was thinking about it; what stopped me is that HW fence signaling is
done from a driver-specific HW fence array. But we could make it generic
by instead iterating sched->pending_list and signaling
s_job->s_fence->parent.
You also need to retire the scheduler's scheduled fences once you have
stopped the schedulers, as they are waited on as dependencies for other
job submissions (I do take care of it).

> 
>>> Finally there's still hangcheck and all that, so if we go with forbidding
>>> dma_fence_wait from within drm_dev_enter/exit sections, then drivers don't
>>> need to have additional tricky code to force-retire fences. TDR will take
>>> care already (albeit with maybe a slightly annoying long timeout, which
>>> we can shorten to "time out everything immediately" after drm_dev_unplug).
>>
>> I am not aware of TDR handlers that do this today; at least we don't -
>> we don't check whether the device is gone and, instead of resetting the
>> device and resubmitting jobs, simply force retire all the HW fences. In
>> any case, this can and I think should be done in the pci remove callback
>> because that is the place that is supposed to handle device extraction.
>> I, for example, in amdgpu_finilize_device_fences just block all TDRs from
>> taking place as the first step in the process. If other drivers want to
>> force retire fences in their TDR handlers they still need to block and
>> wait for all such TDRs in their pci_remove handler.
> 
> TDR definitely force-completes the fence that did hang. Of course it'll
> take a while until they've all completed this way, but we do have
> guaranteed forward progress since we've stopped all further fences from
> showing up because drm_dev_unplug is called already.
> 
> And yes after drm_dev_unplug you can then force-retire the tdr stuff.
> 
>>> What we definitely can't have is half the drivers doing it one way, and
>>> the other half the other way. So your driver flag to wrap the ioctl
>>> optionally in a drm_dev_enter/exit path is a no-go still I think.
>>>
>>> I guess my tldr; is: I definitely see how your current approach gives
>>> quicker results for amdgpu right now, but long term I'm seeing more
>>> positives on the other one. At least I expect less special cases due to
>>> hotunplug with that.
>>
>> As i expressed my viewpoint above - seems to me any driver in need to
>> support hot-unplug must force retire it's fences because of need to
>> unblock all dma_fence waits and so it will not be a special case.
> 
> This isn't the special case I meant. It's the very tricky
> force-retire-before-you-unplugged-officially which is large scale nasty.
> 
> Also if your driver doesn't force-retire already, it's buggy. The
> additional need of hotunplug is just that we're trying to force-retire a
> bit faster, because we know it's all hopeless. But e.g. i915 already has a
> fallback that does this automatically:
> - first we reset only the engine/context, keeping everyone else running
> - if that doesn't pan out, we reset the entire chip and give up an
>    anything that's in-flight, which (iirc, it did so at least in the past)
>    force retires everything outstanding.
> 
> I think amdgpu only has full chip reset, so your first step tries to
> reissue all other tasks. But that's not necessarily how it needs to
> happen.
> 
> Either way drivers must force retire everything (albeit maybe a bit at a
> slow pace) if the hw ceased to work properly already. Hotunplug really
> isn't anything new here.
> -Daniel

Let's then agree on the way forward -

You raised before the following suggestion -

"
btw for the "gradual pushing drm_dev_enter into ioctl" approach, if we go
with that: We could do the same trick we've done for DRM_UNLOCKED:
- drm_dev_enter/exit is called for any ioctl that has not set the
   DRM_HOTUNPLUG_SAFE flag
- for drm core ioctls we push them into all ioctls and decide how to
   handle/where (with the aim to have the least amount of code flow
   different during hotunplug vs after hotunplug has finished, to reduce
   testing scope)
- then we make DRM_HOTUNPLUG_SAFE the implied default
"
My problem here is that I have no good understanding of, or criteria
for, how to decide per each ioctl on the right scope of drm_dev_enter/
exit. It depends on whether each next function call can lead somewhere
down the call stack to dma_fence_wait and/or whether it can lead
to register access. That seems very hard to cover and error prone.

Another option, which we discussed internally before and which is
basically the same as what current drivers do, I guess, is simply to
scope all the back-end HW specific callbacks with drm_dev_enter/exit.
Those are most of the places MMIO access takes place, and by definition
no dma_fence_wait can be there as it's HW specific code. This leaves
MMIO access through pointers (memcpy and various pointer dereferences),
which will need to be protected case by case; but given that I unmap
all MMIO anyway as the last step of the PCI remove callback, all of
them will be found by trial and error eventually.
I feel more comfortable with this approach as I have a clear
understanding of how to deal with it.

P.S. Please respond to the question for you on the other thread at
'PATCH v5 15/27] drm/scheduler: Fix hang when sched_entity released'
about the suggestion by Christian of partially upstreaming this code up
to, but not including, the patches that deal with drm_dev_enter/exit scoping.

Andrey






>>
>> Andrey
>>
>>>
>>> Cheers, Daniel
>>>
>>>>
>>>> Andrey
>>>>
>>>>>
>>>>> It is unfortunately even more work until we've reached the goal, but I
>>>>> think it's safest and most flexible approach overall.
>>>>>
>>>>> Cheers, Daniel
>>>>>
>>>>>>
>>>>>> Andrey
>>>>>>
>>>>>>>
>>>>>>> Cheers, Daniel
>>>>>>>
>>>>>>>
>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> I guess we should clarify this in the hotunplug doc?
>>>>>>>>
>>>>>>>> Agree
>>>>>>>>
>>>>>>>>>
>>>>>>>>> To clarify: I'm not against throwing an ENODEV at userspace for ioctl that
>>>>>>>>> really make no sense, and where we're rather confident that all properly
>>>>>>>>> implemented userspace will gracefully handle failures. Like a modeset, or
>>>>>>>>> opening a device, or trying to import a dma-buf or stuff like that which
>>>>>>>>> can already fail in normal operation for any kind of reason.
>>>>>>>>>
>>>>>>>>> But stuff that never fails, like GETRESOURCES ioctl, really shouldn't fail
>>>>>>>>> after hotunplug.
>>>>>>>>
>>>>>>>> As I pointed above, this a bit confuses me given that we already do
>>>>>>>> blanker rejection of IOCTLs if device is unplugged.
>>>>>>>
>>>>>>> Well I'm confused about this too :-/
>>>>>>>
>>>>>>>>> And then there's the middle ground, like doing a pageflip or buffer flush,
>>>>>>>>> which I guess some userspace might handle, but risky to inflict those
>>>>>>>>> consequences on them. atomic modeset is especially fun since depending
>>>>>>>>> what you're doing it can be both "failures expected" and "failures not
>>>>>>>>> really expected in normal operation".
>>>>>>>>>
>>>>>>>>> Also, this really should be consistent across drivers, not solved with a
>>>>>>>>> driver flag for every possible combination.
>>>>>>>>>
>>>>>>>>> If you look at the current hotunplug kms drivers, they have
>>>>>>>>> drm_dev_enter/exit sprinkled in specific hw callback functions because of
>>>>>>>>> the above problems. But maybe it makes sense to change things in a few
>>>>>>>>> cases. But then we should do it across the board.
>>>>>>>>
>>>>>>>> So as I understand your preferred approach is that I scope any back_end, HW
>>>>>>>> specific function with drm_dev_enter/exit because that where MMIO
>>>>>>>> access takes place. But besides explicit MMIO access thorough
>>>>>>>> register accessors in the HW back-end there is also indirect MMIO access
>>>>>>>> taking place throughout the code in the driver because of various VRAM
>>>>>>>> BOs which provide CPU access to VRAM through the VRAM BAR. This kind of
>>>>>>>> access is spread all over in the driver and even in mid-layers such as
>>>>>>>> TTM and not limited to HW back-end functions. It means it's much harder
>>>>>>>> to spot such places to surgically scope them with drm_dev_enter/exit and
>>>>>>>> also that any new such code introduced will immediately break hot unplug
>>>>>>>> because the developers can't be expected to remember making their code
>>>>>>>> robust to this specific use case. That why when we discussed internally
>>>>>>>> what approach to take to protecting code with drm_dev_enter/exit we
>>>>>>>> opted for using the widest available scope.
>>>>>>>
>>>>>>> The thing is, you kinda have to anyway. There's enormous amounts of
>>>>>>> asynchronous processing going on. E.g. nonblocking atomic commits also do
>>>>>>> ttm unpinning and fun stuff like that, which if you sync things wrong can
>>>>>>> happen way late. So the door for bad fallout is wide open :-(
>>>>>>>
>>>>>>> I'm not sure where the right tradeoff is to make sure we catch them all,
>>>>>>> and can make sure with testing that we've indeed caught them all.
>>>>>>> -Daniel
>>>>>>>
>>>>>
>>>
>
Daniel Vetter May 7, 2021, 4:24 p.m. UTC | #17
On Fri, May 07, 2021 at 11:39:49AM -0400, Andrey Grodzovsky wrote:
> 
> 
> On 2021-05-07 5:11 a.m., Daniel Vetter wrote:
> > On Thu, May 06, 2021 at 12:25:06PM -0400, Andrey Grodzovsky wrote:
> > > 
> > > 
> > > On 2021-05-06 5:40 a.m., Daniel Vetter wrote:
> > > > On Fri, Apr 30, 2021 at 01:27:37PM -0400, Andrey Grodzovsky wrote:
> > > > > 
> > > > > 
> > > > > On 2021-04-30 6:25 a.m., Daniel Vetter wrote:
> > > > > > On Thu, Apr 29, 2021 at 04:34:55PM -0400, Andrey Grodzovsky wrote:
> > > > > > > 
> > > > > > > 
> > > > > > > On 2021-04-29 3:05 p.m., Daniel Vetter wrote:
> > > > > > > > On Thu, Apr 29, 2021 at 12:04:33PM -0400, Andrey Grodzovsky wrote:
> > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > On 2021-04-29 7:32 a.m., Daniel Vetter wrote:
> > > > > > > > > > On Thu, Apr 29, 2021 at 01:23:19PM +0200, Daniel Vetter wrote:
> > > > > > > > > > > On Wed, Apr 28, 2021 at 11:12:00AM -0400, Andrey Grodzovsky wrote:
> > > > > > > > > > > > With this calling drm_dev_unplug will flush and block
> > > > > > > > > > > > all in flight IOCTLs
> > > > > > > > > > > > 
> > > > > > > > > > > > Also, add feature such that if device supports graceful unplug
> > > > > > > > > > > > we enclose entire IOCTL in SRCU critical section.
> > > > > > > > > > > > 
> > > > > > > > > > > > Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
> > > > > > > > > > > 
> > > > > > > > > > > Nope.
> > > > > > > > > > > 
> > > > > > > > > > > The idea of drm_dev_enter/exit is to mark up hw access. Not entire ioctl.
> > > > > > > > > 
> > > > > > > > > Then I am confused why we have https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Felixir.bootlin.com%2Flinux%2Fv5.12%2Fsource%2Fdrivers%2Fgpu%2Fdrm%2Fdrm_ioctl.c%23L826&amp;data=04%7C01%7Candrey.grodzovsky%40amd.com%7Ce53ea46e66fa40a0e03f08d911381a05%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637559754928702763%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;sdata=zMlHiglnn8Vm%2BVxI9Rbk8X%2BTyuokq1x1INbhbRCWK4E%3D&amp;reserved=0
> > > > > > > > > currently in code ?
> > > > > > > > 
> > > > > > > > I forgot about this one, again. Thanks for reminding.
> > > > > > > > 
> > > > > > > > > > > Especially not with an opt-in flag so that it could be shrugged of as a
> > > > > > > > > > > driver hack. Most of these ioctls should have absolutely no problem
> > > > > > > > > > > working after hotunplug.
> > > > > > > > > > > 
> > > > > > > > > > > Also, doing this defeats the point since it pretty much guarantees
> > > > > > > > > > > userspace will die in assert()s and stuff. E.g. on i915 the rough contract
> > > > > > > > > > > is that only execbuf (and even that only when userspace has indicated
> > > > > > > > > > > support for non-recoverable hw ctx) is allowed to fail. Anything else
> > > > > > > > > > > might crash userspace.
> > > > > > > > > 
> > > > > > > > > Given that as I pointed above we already fail any IOCTls with -ENODEV
> > > > > > > > > when device is unplugged, it seems those crashes don't happen that
> > > > > > > > > often ? Also, in all my testing I don't think I saw a user space crash
> > > > > > > > > I could attribute to this.
> > > > > > > > 
> > > > > > > > I guess it should be ok.
> > > > > > > 
> > > > > > > What should be ok ?
> > > > > > 
> > > > > > Your approach, but not your patch. If we go with this let's just lift it
> > > > > > to drm_ioctl() as the default behavior. No driver opt-in flag, because
> > > > > > that's definitely worse than any other approach because we really need to
> > > > > > get rid of driver specific behaviour for generic ioctls, especially
> > > > > > anything a compositor will use directly.
> > > > > > 
> > > > > > > > My reasons for making this work is both less trouble for userspace (did
> > > > > > > > you test with various wayland compositors out there, not just amdgpu x86
> > > > > > > 
> > > > > > > I didn't - will give it a try.
> > > > > 
> > > > > Weston worked without crashes, run the egl tester cube there.
> > > > > 
> > > > > > > 
> > > > > > > > driver?), but also testing.
> > > > > > > > 
> > > > > > > > We still need a bunch of these checks in various places or you'll wait a
> > > > > > > > very long time for a pending modeset or similar to complete. Being able to
> > > > > > > > run that code easily after hotunplug has completed should help a lot with
> > > > > > > > testing.
> > > > > > > > 
> > > > > > > > Plus various drivers already acquired drm_dev_enter/exit and now I wonder
> > > > > > > > whether that was properly tested or not ...
> > > > > > > > 
> > > > > > > > I guess maybe we need a drm module option to disable this check, so that
> > > > > > > > we can exercise the code as if the ioctl has raced with hotunplug at the
> > > > > > > > worst possible moment.
> > > > > > > > 
> > > > > > > > Also atomic is really tricky here: I assume your testing has just done
> > > > > > > > normal synchronous commits, but anything that goes through atomic can be
> > > > > > > > done nonblocking in a separate thread. Which the ioctl catch-all here wont
> > > > > > > > capture.
> > > > > > > 
> > > > > > > Yes, async commit was on my mind and thanks for reminding me. Indeed
> > > > > > > I forgot this but i planned to scope the entire amdgpu_dm_atomic_tail in
> > > > > > > drm_dev_enter/exit. Note that i have a bunch of patches, all name's
> > > > > > > starting with 'Scope....' that just methodically put all the background
> > > > > > > work items and timers the drivers schedules in drm_dev_enter/exit scope.
> > > > > > > This was supposed to be part of the 'Scope Display code' patch.
> > > > > > 
> > > > > > That's too much. You still have to arrange that the flip completion event
> > > > > > gets sent out. So it's a bit tricky.
> > > > > > 
> > > > > > In other places the same problem applies, e.g. probe functions need to
> > > > > > make sure they report "disconnected".
> > > > > 
> > > > > I see, well, this is all part of KMS support which I defer for now
> > > > > anyway. Will tackle it then.
> > > > > 
> > > > > > 
> > > > > > > > > > > You probably need similar (and very precisely defined) rules for amdgpu.
> > > > > > > > > > > And those must definitely exclude any shard ioctls from randomly failing
> > > > > > > > > > > with EIO, because that just kills the box and defeats the point of trying
> > > > > > > > > > > to gracefully handling hotunplug and making sure userspace has a chance of
> > > > > > > > > > > survival. E.g. for atomic everything should continue, including flip
> > > > > > > > > > > completion, but we set all outputs to "disconnected" and send out the
> > > > > > > > > > > uevent. Maybe crtc enabling can fail too, but that can also be handled
> > > > > > > > > > > through the async status we're using to signal DP link failures to
> > > > > > > > > > > userspace.
> > > > > > > > > 
> > > > > > > > > As I pointed before, because of the complexity of the topic I prefer to
> > > > > > > > > take it step by step and solve first for secondary device use case, not
> > > > > > > > > for primary, display attached device.
> > > > > > > > 
> > > > > > > > Yeah makes sense. But then I think the right patch is to roll this out for
> > > > > > > > all drivers, properly justified with existing code. Not behind a driver
> > > > > > > > flag, because with all these different compositors the last thing we want
> > > > > > > > is a proliferation of driver-specific behaviour. That's imo the worst
> > > > > > > > option of all of them and needs to be avoided.
> > > > > > > 
> > > > > > > So this kind of patch would be acceptable to you if I unconditionally
> > > > > > > scope the drm_ioctl with drm_dev_enter/exit without the driver flag ?
> > > > > > > I am worried to break other drivers with this, see patch https://nam11.safelinks.protection.outlook.com/?url=https:%2F%2Fcgit.freedesktop.org%2F~agrodzov%2Flinux%2Fcommit%2F%3Fh%3Ddrm-misc-next%26id%3Df0c593f35b22ca5bf60ed9e7ce2bf2b80e6c68c6&amp;data=04%7C01%7Candrey.grodzovsky%40amd.com%7Ce53ea46e66fa40a0e03f08d911381a05%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637559754928702763%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&amp;sdata=NcUTm%2BttKzbr2yo6PlSZRZ4e5%2BkHF%2BCZJSQyo3m7w7Q%3D&amp;reserved=0
> > > > > > > Before setting drm_dev_unplug I go through a whole process of signalling
> > > > > > > all possible fences in the system which some one some where might be
> > > > > > > waiting on. My concern is that in the absence of HW those fences won't
> > > > > > > signal and so unless I signal them myself srcu_synchrionize in
> > > > > > > drm_dev_unplug will hang waiting for any such code scoped by
> > > > > > > drm_dev_enter/exit.
> > > > > > 
> > > > > > Uh right. I forgot about this.
> > > > > > 
> > > > > > Which would kinda mean the top level scope is maybe not the best idea, and
> > > > > > perhaps we should indeed drill it down. But then the testing issue
> > > > > > definitely gets a lot worse.
> > > > > > 
> > > > > > So what if we'd push that drm_dev_is_unplugged check down into ioctls?
> > > > > > Then we can make a case-by case decision whether it should be converted to
> > > > > > drm_dev_enter/exit, needs to be pushed down further into drivers (due to
> > > > > > fence wait issues) or other concerns?
> > > > > > 
> > > > > > Also I guess we need to have a subsystem wide rule on whether you need to
> > > > > > force complete all fences before you call drm_dev_unplug, or afterwards.
> > > > > 
> > > > > I don't see how you can handle it afterwards. If a thread is stuck in
> > > > > dma_fence_wait in non interruptible wait (any kernel thread) and with no
> > > > > timeout there is nothing you can do to stop the wait. Any such code
> > > > > scoped with drm_dev_enter/exit will cause a hang in drm_dev_unplug.
> > > > > The only way then is to preemptively force signal all such fences before
> > > > > calling drm_dev_unplug - as I do in the above mentioned patch.
> > > > 
> > > > Yeah, which is why I don't think top-level drm_dev_enter/exit is a good
> > > > idea.
> > > > 
> > > > > > If we have mixed behaviour on this there will be disappointment. And since
> > > > > > hotunplug and dma_fence completion are both userspace visible that
> > > > > > inconsistency might have bigger impact.
> > > > > > 
> > > > > > This is all very tricky indeed :-/
> > > > > > 
> > > > > > btw for the "gradual pushing drm_dev_enter into ioctl" approach, if we go
> > > > > > with that: We could do the same trick we've done for DRM_UNLOCKED:
> > > > > > - drm_dev_enter/exit is called for any ioctl that has not set the
> > > > > >      DRM_HOTUNPLUG_SAFE flag
> > > > > > - for drm core ioctls we push them into all ioctls and decide how to
> > > > > >      handle/where (with the aim to have the least amount of code flow
> > > > > >      different during hotunplug vs after hotunplug has finished, to reduce
> > > > > >      testing scope)
> > > > > > - then we make DRM_HOTUNPLUG_SAFE the implied default
> > > > > > 
> > > > > > This would have us left with render ioctls, and I think the defensive
> > > > > > assumption there is that they're all hotunplug safe. We might hang on a
> > > > > > fence wait, but that's fixable, and it's better than blowing up on a
> > > > > > use-after-free security bug.
> > > > > > 
> > > > > > Thoughts?
> > > > > 
> > > > > I don't fully see a difference between the approach described above and
> > > > > the full drill down to each driver and even within the driver, to the HW
> > > > > back-ends - what criteria I would use to decide if for a given IOCTL i
> > > > > scope with drm_dev_enter/exit at the highest level while for another
> > > > > i go all the way down ? If we would agree that signaling the fences
> > > > > preemptively before engaging drm_dev_unplug is generically the right
> > > > > approach maybe we can then scope drm_ioctl unconditionally with
> > > > > drm_dev_enter/exit and then for each driver go through the same process
> > > > > I do for amdgpu - writing driver specific function which takes care of
> > > > > all the fences. We could then just create a drm callback which would
> > > > > be called from drm_ioctl before drm_dev_unplug is called.
> > > > 
> > > > So I see the appeal of just nuking all the fences, but I'm not sure that's
> > > > a good plan. We've done this in the old i915 gpu reset code too, and the
> > > > issue is it's defacto inverting the locking. But also the hw is truly
> > > > gone, so it also makes sense.
> > > > 
> > > > The problem is a bit roll-out, if we state that dma_fence_wait is allowed
> > > > with a drm_dev_enter/exit, then all drivers need to force-retire their
> > > > fences.
> > > > 
> > > > The other option would be that we require that dma_fence_wait is _not_
> > > > allowed in drm_dev_enter/exit, and that therefore these areas must be
> > > > marked up more fine-grained to avoid deadlocks. I like this more from the
> > > > testing aspect (it makes it easier to be reasonable sure your code handles
> > > > concurrent hotunplug), but also it's pretty easy to validate with the
> > > > dma_fence lockdep annotations we have I think.
> > > 
> > > The key question as I see it is whether it is ok for a device to be
> > > unplugged while its driver still has, anywhere in its code, a dma_fence_wait
> > > waiting for work completion from this device. The answer seems to me
> > > to be no: the HW is gone, this fence will never signal and so you will be
> > > left with an indefinitely hung code thread with all its unreleased
> > > resources. If I am correct in the above statement, then avoiding scoping
> > > code with drm_dev_enter/exit because a dma_fence_wait might be there in the
> > > middle just hides the problem. Also, then the only solution for each driver
> > > wanting to support hot-unplug is to force retire all its HW
> > > fences once it is notified of device removal.
> > 
> > At a high level, yes dma_fence must always complete. I don't think we have
> > a disagreement here on that.
> > 
> > What we're discussing here is the precise sequencing and barriers, where
> > things get tricky. Requiring that you force-complete all dma_fence that
> > might be affected before you hotunplug is one solution, the other is
> > tuning the critical sections that drm_dev_enter/exit annotates.
> > 
> > This isn't about avoiding anything or hiding problems, this is about
> > locking/synchronization design. And for that we must agree on what is
> > allowed inside/outside of a critical section for all possible
> > combinations.
> > 
> > E.g. we're also "hiding" problems with calling dma_fence_wait from
> > shrinkers/mmu notifiers by forbidding allocations in
> > dma_fence_begin/end_signalling critical paths.
> > 
> > > > A third reasons for not requiring force-retiring of dma_fence before
> > > > drm_dev_unplug is the races: Before drm_dev_unplug you haven't stopped new
> > > > fences from happening, but until you've stopped new fences it's hard to
> > > > guarantee they're all retired. How do you solve this currently?
> > > 
> > > See amdgpu_finilize_device_fences in https://patchwork.ozlabs.org/project/linux-pci/patch/20210428151207.1212258-20-andrey.grodzovsky@amd.com/
> > > I think the steps described there answer your concern here.
> > 
> > The hard problem is stopping further command submission. Not seeing how
> > you solve that.
> 
> By stopping GPU SW scheduler before force completion of HW fences, see
> amdgpu_finilize_device_fences->amdgpu_fence_driver_fini_hw and the
> comment above it.
> 
> > 
> > But I'm definitely scared about all the scheduler/tdr interactions you
> > already have there, and that looks quite a bit like fallout from doing
> > things the wrong way round.
> > 
> > Also given that drm/scheduler is shared, why can't this be a drm/scheduler
> > helper function?
> 
> I was thinking about it; what stopped me is that HW fence signaling is
> done from a driver-specific HW fence array. But we could make it generic by
> instead iterating sched->pending_list and signaling
> s_job->s_fence->parent instead.
> You also need to retire the scheduler's scheduled fences once you have stopped
> the schedulers, as they are waited on as dependencies for other job
> submissions (I do take care of it).
> 
> > 
> > > > Finally there's still hangcheck and all that, so if we go with forbidding
> > > > dma_fence_wait from within drm_dev_enter/exit sections, then drivers don't
> > > > need to have additional tricky code to force-retire fences. TDR will take
> > > > care already (albeit with maybe a slightly annoying long timeout, which
> > > > we can shorten to "time out everything immediately" after drm_dev_unplug).
> > > 
> > > I am not aware of TDR handlers that do it today; at least we don't.
> > > We don't check whether the device is gone and, instead of resetting the device
> > > and resubmitting jobs, just force retire all the HW fences. In any case, this
> > > can and I think should be done in the pci remove callback because this is
> > > the place that is supposed to handle device extraction. I, for example, in
> > > amdgpu_finilize_device_fences just block all TDRs from taking place as the first
> > > step in the process. If other drivers want to force retire fences
> > > in their TDR handlers they still need to block and wait for all such
> > > TDRs in their pci_remove handler.
> > 
> > TDR definitely force-completes the fence that did hang. Of course it'll
> > take a while until they've all completed this way, but we do have
> > guaranteed forward progress since we've stopped all further fences from
> > showing up because drm_dev_unplug is called already.
> > 
> > And yes after drm_dev_unplug you can then force-retire the tdr stuff.
> > 
> > > > What we definitely can't have is half the drivers doing it one way, and
> > > > the other half the other way. So your driver flag to wrap the ioctl
> > > > optionally in a drm_dev_enter/exit path is a no-go still I think.
> > > > 
> > > > I guess my tldr; is: I definitely see how your current approach gives
> > > > quicker results for amdgpu right now, but long term I'm seeing more
> > > > positives on the other one. At least I expect less special cases due to
> > > > hotunplug with that.
> > > 
> > > As I expressed in my viewpoint above, it seems to me any driver needing to
> > > support hot-unplug must force retire its fences because of the need to
> > > unblock all dma_fence waits, and so it will not be a special case.
> > 
> > This isn't the special case I meant. It's the very tricky
> > force-retire-before-you-unplugged-officially which is large scale nasty.
> > 
> > Also if your driver doesn't force-retire already, it's buggy. The
> > additional need of hotunplug is just that we're trying to force-retire a
> > bit faster, because we know it's all hopeless. But e.g. i915 already has a
> > fallback that does this automatically:
> > - first we reset only the engine/context, keeping everyone else running
> > - if that doesn't pan out, we reset the entire chip and give up on
> >    anything that's in-flight, which (iirc, it did so at least in the past)
> >    force retires everything outstanding.
> > 
> > I think amdgpu only has full chip reset, so your first step tries to
> > reissue all other tasks. But that's not necessarily how it needs to
> > happen.
> > 
> > Either way drivers must force retire everything (albeit maybe a bit at a
> > slow pace) if the hw ceased to work properly already. Hotunplug really
> > isn't anything new here.
> > -Daniel
> 
> Let's then agree on the way forward -
> 
> You raised before the following suggestion -
> 
> "
> btw for the "gradual pushing drm_dev_enter into ioctl" approach, if we go
> with that: We could do the same trick we've done for DRM_UNLOCKED:
> - drm_dev_enter/exit is called for any ioctl that has not set the
>   DRM_HOTUNPLUG_SAFE flag
> - for drm core ioctls we push them into all ioctls and decide how to
>   handle/where (with the aim to have the least amount of code flow
>   different during hotunplug vs after hotunplug has finished, to reduce
>   testing scope)
> - then we make DRM_HOTUNPLUG_SAFE the implied default
> "
> My problem here is that I have no good understanding of, or criteria
> for, how to decide for each ioctl on the right scope of drm_dev_enter/
> exit. It depends on whether each next function call can lead somewhere
> down the call stack to dma_fence_wait and/or whether it can lead
> to register access. This seems very hard to cover and error prone.

Tbh, neither do I. This requires a lot of work to analyze.

> Another option, which we discussed internally before and which is basically
> the same as what current drivers do, I guess, is simply to scope with drm_dev_enter/
> exit all the back-end HW specific callbacks. Those are most of the
> places MMIO access takes place, and by definition no dma_fence_wait
> can be there as it's HW specific code. This leaves MMIO
> access through pointers (memcpy and various pointer dereferences),
> which will need to be protected case by case, but given that I unmap
> all MMIO anyway as the last step of the PCI remove callback, all of them will
> be found by trial and error eventually.
> I feel more comfortable with this approach as I have a clear
> understanding of how to deal with it.

Hm ... I'm maybe failing to see the difference, but at least on the kms
side "put drm_dev_enter/exit into driver callbacks" feels like the right
place for them.

Render (and I guess kfd for amd as the special case) ioctl are different,
especially all the driver specific ones.

So one thing that crossed my mind, maybe as step 0, is to annotate the rules
for drm_dev_enter/exit using lockdep. With lockdep we can both check
whether a lock is held, but also whether it's not held (but the latter is
only possible with CONFIG_PROVE_LOCKING enabled). I think it would be good
to annotate all the major locks in the kernel against drm_dev_enter/exit:

- dma_fence_wait could check that the drm_dev_enter/exit srcu is _not_
  held.

- because srcu nest _very_ freely there's kinda no real restrictions for
  putting drm_dev_enter/exit within a lock critical section. Might still
  be good to explicitly call out in docs where it's all ok:

  - interrupt handlers (I hope that's the case, otherwise I screwed up)
  - shrinkers/mmu_notifier callbacks

- anything else that's not allowed in within drm_dev_enter/exit. I'm e.g.
  wondering whether we should disallow drm_modeset_lock() or maybe
  dma_resv_lock(), or whether that's too restrictive. It could help quite
  a bit in finding places where the drm_dev_enter/exit section is too
  wide.

- another one is the inverse, but I guess you have that already with
  putting a drm_dev_is_held() into mmio helpers and all that to make sure
  we really have them all caught.

Above is just examples, I think the more we nail down these rules one way
or the other, the better for consistency across drivers. And without
consistency everyone will be forced to write their own mmap helpers
instead of one in ttm, or scheduler cleanup helpers instead of one in
drm/scheduler.

> P.S. Please respond to the question for you on the other thread at
> 'PATCH v5 15/27] drm/scheduler: Fix hang when sched_entity released'
> about the suggestion by Christian of partially upstreaming this code up to,
> but not including, the patches dealing with drm_dev_enter/exit scoping.

Ok, will try. I'm a bit buried unfortunately, so thanks for the reminder when I
miss something.
-Daniel

> 
> Andrey
> 
> 
> 
> 
> 
> 
> > > 
> > > Andrey
> > > 
> > > > 
> > > > Cheers, Daniel
> > > > 
> > > > > 
> > > > > Andrey
> > > > > 
> > > > > > 
> > > > > > It is unfortunately even more work until we've reached the goal, but I
> > > > > > think it's safest and most flexible approach overall.
> > > > > > 
> > > > > > Cheers, Daniel
> > > > > > 
> > > > > > > 
> > > > > > > Andrey
> > > > > > > 
> > > > > > > > 
> > > > > > > > Cheers, Daniel
> > > > > > > > 
> > > > > > > > 
> > > > > > > > > 
> > > > > > > > > > > 
> > > > > > > > > > > I guess we should clarify this in the hotunplug doc?
> > > > > > > > > 
> > > > > > > > > Agree
> > > > > > > > > 
> > > > > > > > > > 
> > > > > > > > > > To clarify: I'm not against throwing an ENODEV at userspace for ioctl that
> > > > > > > > > > really make no sense, and where we're rather confident that all properly
> > > > > > > > > > implemented userspace will gracefully handle failures. Like a modeset, or
> > > > > > > > > > opening a device, or trying to import a dma-buf or stuff like that which
> > > > > > > > > > can already fail in normal operation for any kind of reason.
> > > > > > > > > > 
> > > > > > > > > > But stuff that never fails, like GETRESOURCES ioctl, really shouldn't fail
> > > > > > > > > > after hotunplug.
> > > > > > > > > 
> > > > > > > > > > As I pointed out above, this confuses me a bit given that we already do
> > > > > > > > > > blanket rejection of IOCTLs if the device is unplugged.
> > > > > > > > 
> > > > > > > > Well I'm confused about this too :-/
> > > > > > > > 
> > > > > > > > > > And then there's the middle ground, like doing a pageflip or buffer flush,
> > > > > > > > > > which I guess some userspace might handle, but risky to inflict those
> > > > > > > > > > consequences on them. atomic modeset is especially fun since depending
> > > > > > > > > > what you're doing it can be both "failures expected" and "failures not
> > > > > > > > > > really expected in normal operation".
> > > > > > > > > > 
> > > > > > > > > > Also, this really should be consistent across drivers, not solved with a
> > > > > > > > > > driver flag for every possible combination.
> > > > > > > > > > 
> > > > > > > > > > If you look at the current hotunplug kms drivers, they have
> > > > > > > > > > drm_dev_enter/exit sprinkled in specific hw callback functions because of
> > > > > > > > > > the above problems. But maybe it makes sense to change things in a few
> > > > > > > > > > cases. But then we should do it across the board.
> > > > > > > > > 
> > > > > > > > > So as I understand it, your preferred approach is that I scope any back-end, HW
> > > > > > > > > specific function with drm_dev_enter/exit because that is where MMIO
> > > > > > > > > access takes place. But besides explicit MMIO access thorough
> > > > > > > > > register accessors in the HW back-end there is also indirect MMIO access
> > > > > > > > > taking place throughout the code in the driver because of various VRAM
> > > > > > > > > BOs which provide CPU access to VRAM through the VRAM BAR. This kind of
> > > > > > > > > access is spread all over in the driver and even in mid-layers such as
> > > > > > > > > TTM and not limited to HW back-end functions. It means it's much harder
> > > > > > > > > to spot such places to surgically scope them with drm_dev_enter/exit and
> > > > > > > > > also that any new such code introduced will immediately break hot unplug
> > > > > > > > > because the developers can't be expected to remember making their code
> > > > > > > > > robust to this specific use case. That's why, when we discussed internally
> > > > > > > > > what approach to take to protecting code with drm_dev_enter/exit we
> > > > > > > > > opted for using the widest available scope.
> > > > > > > > 
> > > > > > > > The thing is, you kinda have to anyway. There's enormous amounts of
> > > > > > > > asynchronous processing going on. E.g. nonblocking atomic commits also do
> > > > > > > > ttm unpinning and fun stuff like that, which if you sync things wrong can
> > > > > > > > happen way late. So the door for bad fallout is wide open :-(
> > > > > > > > 
> > > > > > > > I'm not sure where the right tradeoff is to make sure we catch them all,
> > > > > > > > and can make sure with testing that we've indeed caught them all.
> > > > > > > > -Daniel
> > > > > > > > 
> > > > > > 
> > > > 
> >
Andrey Grodzovsky May 7, 2021, 6 p.m. UTC | #18
On 2021-05-07 12:24 p.m., Daniel Vetter wrote:
> On Fri, May 07, 2021 at 11:39:49AM -0400, Andrey Grodzovsky wrote:
>>
>>
>> On 2021-05-07 5:11 a.m., Daniel Vetter wrote:
>>> On Thu, May 06, 2021 at 12:25:06PM -0400, Andrey Grodzovsky wrote:
>>>>
>>>>
>>>> On 2021-05-06 5:40 a.m., Daniel Vetter wrote:
>>>>> On Fri, Apr 30, 2021 at 01:27:37PM -0400, Andrey Grodzovsky wrote:
>>>>>>
>>>>>>
>>>>>> On 2021-04-30 6:25 a.m., Daniel Vetter wrote:
>>>>>>> On Thu, Apr 29, 2021 at 04:34:55PM -0400, Andrey Grodzovsky wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> On 2021-04-29 3:05 p.m., Daniel Vetter wrote:
>>>>>>>>> On Thu, Apr 29, 2021 at 12:04:33PM -0400, Andrey Grodzovsky wrote:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On 2021-04-29 7:32 a.m., Daniel Vetter wrote:
>>>>>>>>>>> On Thu, Apr 29, 2021 at 01:23:19PM +0200, Daniel Vetter wrote:
>>>>>>>>>>>> On Wed, Apr 28, 2021 at 11:12:00AM -0400, Andrey Grodzovsky wrote:
>>>>>>>>>>>>> With this change, calling drm_dev_unplug will flush and block
>>>>>>>>>>>>> all in-flight IOCTLs.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Also, add a feature such that if the device supports graceful unplug
>>>>>>>>>>>>> we enclose the entire IOCTL in an SRCU critical section.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
>>>>>>>>>>>>
>>>>>>>>>>>> Nope.
>>>>>>>>>>>>
>>>>>>>>>>>> The idea of drm_dev_enter/exit is to mark up hw access. Not entire ioctl.
>>>>>>>>>>
>>>>>>>>>> Then I am confused why we have https://elixir.bootlin.com/linux/v5.12/source/drivers/gpu/drm/drm_ioctl.c#L826
>>>>>>>>>> currently in the code ?
>>>>>>>>>
>>>>>>>>> I forgot about this one, again. Thanks for reminding.
>>>>>>>>>
>>>>>>>>>>>> Especially not with an opt-in flag so that it could be shrugged of as a
>>>>>>>>>>>> driver hack. Most of these ioctls should have absolutely no problem
>>>>>>>>>>>> working after hotunplug.
>>>>>>>>>>>>
>>>>>>>>>>>> Also, doing this defeats the point since it pretty much guarantees
>>>>>>>>>>>> userspace will die in assert()s and stuff. E.g. on i915 the rough contract
>>>>>>>>>>>> is that only execbuf (and even that only when userspace has indicated
>>>>>>>>>>>> support for non-recoverable hw ctx) is allowed to fail. Anything else
>>>>>>>>>>>> might crash userspace.
>>>>>>>>>>
>>>>>>>>>> Given that as I pointed above we already fail any IOCTls with -ENODEV
>>>>>>>>>> when device is unplugged, it seems those crashes don't happen that
>>>>>>>>>> often ? Also, in all my testing I don't think I saw a user space crash
>>>>>>>>>> I could attribute to this.
>>>>>>>>>
>>>>>>>>> I guess it should be ok.
>>>>>>>>
>>>>>>>> What should be ok ?
>>>>>>>
>>>>>>> Your approach, but not your patch. If we go with this let's just lift it
>>>>>>> to drm_ioctl() as the default behavior. No driver opt-in flag, because
>>>>>>> that's definitely worse than any other approach because we really need to
>>>>>>> get rid of driver specific behaviour for generic ioctls, especially
>>>>>>> anything a compositor will use directly.
>>>>>>>
>>>>>>>>> My reasons for making this work is both less trouble for userspace (did
>>>>>>>>> you test with various wayland compositors out there, not just amdgpu x86
>>>>>>>>
>>>>>>>> I didn't - will give it a try.
>>>>>>
>>>>>> Weston worked without crashes; I ran the egl tester cube there.
>>>>>>
>>>>>>>>
>>>>>>>>> driver?), but also testing.
>>>>>>>>>
>>>>>>>>> We still need a bunch of these checks in various places or you'll wait a
>>>>>>>>> very long time for a pending modeset or similar to complete. Being able to
>>>>>>>>> run that code easily after hotunplug has completed should help a lot with
>>>>>>>>> testing.
>>>>>>>>>
>>>>>>>>> Plus various drivers already acquired drm_dev_enter/exit and now I wonder
>>>>>>>>> whether that was properly tested or not ...
>>>>>>>>>
>>>>>>>>> I guess maybe we need a drm module option to disable this check, so that
>>>>>>>>> we can exercise the code as if the ioctl has raced with hotunplug at the
>>>>>>>>> worst possible moment.
>>>>>>>>>
>>>>>>>>> Also atomic is really tricky here: I assume your testing has just done
>>>>>>>>> normal synchronous commits, but anything that goes through atomic can be
>>>>>>>>> done nonblocking in a separate thread. Which the ioctl catch-all here wont
>>>>>>>>> capture.
>>>>>>>>
>>>>>>>> Yes, async commit was on my mind, and thanks for reminding me. Indeed
>>>>>>>> I forgot this, but I planned to scope the entire amdgpu_dm_atomic_tail in
>>>>>>>> drm_dev_enter/exit. Note that I have a bunch of patches, all with names
>>>>>>>> starting with 'Scope....', that just methodically put all the background
>>>>>>>> work items and timers the driver schedules in drm_dev_enter/exit scope.
>>>>>>>> This was supposed to be part of the 'Scope Display code' patch.
>>>>>>>
>>>>>>> That's too much. You still have to arrange that the flip completion event
>>>>>>> gets sent out. So it's a bit tricky.
>>>>>>>
>>>>>>> In other places the same problem applies, e.g. probe functions need to
>>>>>>> make sure they report "disconnected".
>>>>>>
>>>>>> I see, well, this is all part of KMS support which I defer for now
>>>>>> anyway. Will tackle it then.
>>>>>>
>>>>>>>
>>>>>>>>>>>> You probably need similar (and very precisely defined) rules for amdgpu.
>>>>>>>>>>>> And those must definitely exclude any shared ioctls from randomly failing
>>>>>>>>>>>> with EIO, because that just kills the box and defeats the point of trying
>>>>>>>>>>>> to gracefully handle hotunplug and make sure userspace has a chance of
>>>>>>>>>>>> survival. E.g. for atomic everything should continue, including flip
>>>>>>>>>>>> completion, but we set all outputs to "disconnected" and send out the
>>>>>>>>>>>> uevent. Maybe crtc enabling can fail too, but that can also be handled
>>>>>>>>>>>> through the async status we're using to signal DP link failures to
>>>>>>>>>>>> userspace.
>>>>>>>>>>
>>>>>>>>>> As I pointed before, because of the complexity of the topic I prefer to
>>>>>>>>>> take it step by step and solve first for secondary device use case, not
>>>>>>>>>> for primary, display attached device.
>>>>>>>>>
>>>>>>>>> Yeah makes sense. But then I think the right patch is to roll this out for
>>>>>>>>> all drivers, properly justified with existing code. Not behind a driver
>>>>>>>>> flag, because with all these different compositors the last thing we want
>>>>>>>>> is a proliferation of driver-specific behaviour. That's imo the worst
>>>>>>>>> option of all of them and needs to be avoided.
>>>>>>>>
>>>>>>>> So this kind of patch would be acceptable to you if I unconditionally
>>>>>>>> scope the drm_ioctl with drm_dev_enter/exit without the driver flag ?
>>>>>>>> I am worried to break other drivers with this, see patch https://cgit.freedesktop.org/~agrodzov/linux/commit/?h=drm-misc-next&id=f0c593f35b22ca5bf60ed9e7ce2bf2b80e6c68c6
>>>>>>>> Before setting drm_dev_unplug I go through a whole process of signalling
>>>>>>>> all possible fences in the system which some one some where might be
>>>>>>>> waiting on. My concern is that in the absence of HW those fences won't
>>>>>>>> signal and so unless I signal them myself srcu_synchrionize in
>>>>>>>> drm_dev_unplug will hang waiting for any such code scoped by
>>>>>>>> drm_dev_enter/exit.
>>>>>>>
>>>>>>> Uh right. I forgot about this.
>>>>>>>
>>>>>>> Which would kinda mean the top level scope is maybe not the best idea, and
>>>>>>> perhaps we should indeed drill it down. But then the testing issue
>>>>>>> definitely gets a lot worse.
>>>>>>>
>>>>>>> So what if we'd push that drm_dev_is_unplugged check down into ioctls?
>>>>>>> Then we can make a case-by case decision whether it should be converted to
>>>>>>> drm_dev_enter/exit, needs to be pushed down further into drivers (due to
>>>>>>> fence wait issues) or other concerns?
>>>>>>>
>>>>>>> Also I guess we need to have a subsystem wide rule on whether you need to
>>>>>>> force complete all fences before you call drm_dev_unplug, or afterwards.
>>>>>>
>>>>>> I don't see how you can handle it afterwards. If a thread is stuck in
>>>>>> dma_fence_wait in non interruptible wait (any kernel thread) and with no
>>>>>> timeout there is nothing you can do to stop the wait. Any such code
>>>>>> scoped with drm_dev_enter/exit will cause a hang in drm_dev_unplug.
>>>>>> The only way then is to preemptively force signal all such fences before
>>>>>> calling drm_dev_unplug - as I do in the above mentioned patch.
>>>>>
>>>>> Yeah, which is why I don't think top-level drm_dev_enter/exit is a good
>>>>> idea.
>>>>>
>>>>>>> If we have mixed behaviour on this there will be disappointment. And since
>>>>>>> hotunplug and dma_fence completion are both userspace visible that
>>>>>>> inconsistency might have bigger impact.
>>>>>>>
>>>>>>> This is all very tricky indeed :-/
>>>>>>>
>>>>>>> btw for the "gradual pushing drm_dev_enter into ioctl" approach, if we go
>>>>>>> with that: We could do the same trick we've done for DRM_UNLOCKED:
>>>>>>> - drm_dev_enter/exit is called for any ioctl that has not set the
>>>>>>>       DRM_HOTUNPLUG_SAFE flag
>>>>>>> - for drm core ioctls we push them into all ioctls and decide how to
>>>>>>>       handle/where (with the aim to have the least amount of code flow
>>>>>>>       different during hotunplug vs after hotunplug has finished, to reduce
>>>>>>>       testing scope)
>>>>>>> - then we make DRM_HOTUNPLUG_SAFE the implied default
>>>>>>>
>>>>>>> This would have us left with render ioctls, and I think the defensive
>>>>>>> assumption there is that they're all hotunplug safe. We might hang on a
>>>>>>> fence wait, but that's fixable, and it's better than blowing up on a
>>>>>>> use-after-free security bug.
>>>>>>>
>>>>>>> Thoughts?
>>>>>>
>>>>>> I don't fully see a difference between the approach described above and
>>>>>> the full drill down to each driver and even within the driver, to the HW
>>>>>> back-ends - what criteria I would use to decide if for a given IOCTL i
>>>>>> scope with drm_dev_enter/exit at the highest level while for another
>>>>>> i go all the way down ? If we would agree that signaling the fences
>>>>>> preemptively before engaging drm_dev_unplug is generically the right
>>>>>> approach maybe we can then scope drm_ioctl unconditionally with
>>>>>> drm_dev_enter/exit and then for each driver go through the same process
>>>>>> I do for amdgpu - writing driver specific function which takes care of
>>>>>> all the fences. We could then just create a drm callback which would
>>>>>> be called from drm_ioctl before drm_dev_unplug is called.
>>>>>
>>>>> So I see the appeal of just nuking all the fences, but I'm not sure that's
>>>>> a good plan. We've done this in the old i915 gpu reset code too, and the
>>>>> issue is it's defacto inverting the locking. But also the hw is truly
>>>>> gone, so it also makes sense.
>>>>>
>>>>> The problem is a bit roll-out, if we state that dma_fence_wait is allowed
>>>>> with a drm_dev_enter/exit, then all drivers need to force-retire their
>>>>> fences.
>>>>>
>>>>> The other option would be that we require that dma_fence_wait is _not_
>>>>> allowed in drm_dev_enter/exit, and that therefore these areas must be
>>>>> marked up more fine-grained to avoid deadlocks. I like this more from the
>>>>> testing aspect (it makes it easier to be reasonable sure your code handles
>>>>> concurrent hotunplug), but also it's pretty easy to validate with the
>>>>> dma_fence lockdep annotations we have I think.
>>>>
>>>> The key question as I see it - is it ok for a device to be unplugged
>>>> while its driver has, anywhere in its code, a dma_fence_wait
>>>> waiting for work completion from this device? The answer seems to me
>>>> to be no: the HW is gone, this fence will never signal, and so you will
>>>> be left with an indefinitely hung thread and all its unreleased
>>>> resources. If I am correct in the above statement, then avoiding scoping
>>>> code with drm_dev_enter/exit because a dma_fence_wait might be there in
>>>> the middle just hides the problem. Also, then the only solution for each
>>>> driver wanting to support hot-unplug is to force retire all its HW
>>>> fences once it is notified of device removal.
>>>
>>> At a high level, yes dma_fence must always complete. I don't think we have
>>> a disagreement here on that.
>>>
>>> What we're discussing here is the precise sequencing and barriers, where
>>> things get tricky. Requiring that you force-complete all dma_fence that
>>> might be affected before you hotunplug is one solution, the other is
>>> tuning the critical sections that drm_dev_enter/exit annotates.
>>>
>>> This isn't about avoiding anything or hiding problems, this is about
>>> locking/synchronization design. And for that we must agree on what is
>>> allowed inside/outside of a critical section for all possible
>>> combinations.
>>>
>>> E.g. we're also "hiding" problems with calling dma_fence_wait from
>>> shrinkers/mmu notifiers by forbidding allocations in
>>> dma_fence_begin/end_signalling critical paths.
>>>
>>>>> A third reason for not requiring force-retiring of dma_fence before
>>>>> drm_dev_unplug is the races: Before drm_dev_unplug you haven't stopped new
>>>>> fences from happening, but until you've stopped new fences it's hard to
>>>>> guarantee they're all retired. How do you solve this currently?
>>>>
>>>> See amdgpu_finilize_device_fences in
>>>> https://patchwork.ozlabs.org/project/linux-pci/patch/20210428151207.1212258-20-andrey.grodzovsky@amd.com/
>>>> I think the steps described there answer your
>>>> concern here.
>>>
>>> The hard problem is stopping further command submission. Not seeing how
>>> you solve that.
>>
>> By stopping the GPU SW scheduler before force completion of HW fences; see
>> amdgpu_finilize_device_fences->amdgpu_fence_driver_fini_hw and the
>> comment above it.
>>
>>>
>>> But I'm definitely scared about all the scheduler/tdr interactions you
>>> already have there, and that looks quite a bit like fallout from doing
>>> things the wrong way round.
>>>
>>> Also given that drm/scheduler is shared, why can't this be a drm/scheduler
>>> helper function?
>>
>> I was thinking about it; what stopped me is that HW fence signaling is
>> done from a driver-specific HW fence array. But we could make it generic
>> by iterating sched->pending_list and signaling
>> s_job->s_fence->parent instead.
>> You also need to retire the scheduler's scheduled fences once you stopped
>> the schedulers, as they are waited on as dependencies for other job
>> submissions (I do take care of it).
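A hedged sketch of what such a generic helper could look like. drm_sched_fence_force_retire is a made-up name; the field names follow drm/scheduler as of v5.12, and the locking details would need care (e.g. whether fences may safely be signaled under job_list_lock):

```c
/* Hypothetical drm/scheduler helper: park the scheduler so nothing new
 * reaches the hardware, then force signal the parent (HW) fence plus
 * the scheduled/finished fences of every still-pending job. */
void drm_sched_fence_force_retire(struct drm_gpu_scheduler *sched)
{
	struct drm_sched_job *s_job;

	kthread_park(sched->thread);	/* stop further submission */

	spin_lock(&sched->job_list_lock);
	list_for_each_entry(s_job, &sched->pending_list, list) {
		if (s_job->s_fence->parent)
			dma_fence_signal(s_job->s_fence->parent);
		/* scheduled fences are dependencies of other submissions */
		dma_fence_signal(&s_job->s_fence->scheduled);
		dma_fence_signal(&s_job->s_fence->finished);
	}
	spin_unlock(&sched->job_list_lock);
}
```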
>>
>>>
>>>>> Finally there's still hangcheck and all that, so if we go with forbidding
>>>>> dma_fence_wait from within drm_dev_enter/exit sections, then drivers don't
>>>>> need to have additional tricky code to force-retire fences. TDR will take
>>>>> care already (albeit with maybe a slightly annoying long timeout, which
>>>>> we can shorten to "time out everything immediately" after drm_dev_unplug).
>>>>
>>>> I am not aware of TDR handlers that do it today; at least we don't -
>>>> we don't check whether the device is gone and, instead of resetting the
>>>> device and resubmitting jobs, just force retire all the HW fences. In any
>>>> case, this can and I think should be done in the PCI remove callback,
>>>> because that is the place that is supposed to handle device extraction.
>>>> I, for example, in amdgpu_finilize_device_fences just block all TDRs from
>>>> taking place as the first step in the process. If other drivers want to
>>>> force retire fences in their TDR handlers, they still need to block and
>>>> wait for all such TDRs in their pci_remove handler.
>>>
>>> TDR definitely force-completes the fence that did hang. Of course it'll
>>> take a while until they've all completed this way, but we do have
>>> guaranteed forward progress since we've stopped all further fences from
>>> showing up because drm_dev_unplug is called already.
>>>
>>> And yes after drm_dev_unplug you can then force-retire the tdr stuff.
>>>
>>>>> What we definitely can't have is half the drivers doing it one way, and
>>>>> the other half the other way. So your driver flag to wrap the ioctl
>>>>> optionally in a drm_dev_enter/exit path is a no-go still I think.
>>>>>
>>>>> I guess my tldr; is: I definitely see how your current approach gives
>>>>> quicker results for amdgpu right now, but long term I'm seeing more
>>>>> positives on the other one. At least I expect less special cases due to
>>>>> hotunplug with that.
>>>>
>>>> As I expressed in my viewpoint above - it seems to me any driver needing
>>>> to support hot-unplug must force retire its fences because of the need to
>>>> unblock all dma_fence waits, and so it will not be a special case.
>>>
>>> This isn't the special case I meant. It's the very tricky
>>> force-retire-before-you-unplugged-officially which is large scale nasty.
>>>
>>> Also if your driver doesn't force-retire already, it's buggy. The
>>> additional need of hotunplug is just that we're trying to force-retire a
>>> bit faster, because we know it's all hopeless. But e.g. i915 already has a
>>> fallback that does this automatically:
>>> - first we reset only the engine/context, keeping everyone else running
>>> - if that doesn't pan out, we reset the entire chip and give up on
>>>     anything that's in-flight, which (iirc, it did so at least in the past)
>>>     force retires everything outstanding.
>>>
>>> I think amdgpu only has full chip reset, so your first step tries to
>>> reissue all other tasks. But that's not necessarily how it needs to
>>> happen.
>>>
>>> Either way drivers must force retire everything (albeit maybe a bit at a
>>> slow pace) if the hw ceased to work properly already. Hotunplug really
>>> isn't anything new here.
>>> -Daniel
>>
>> Let's then agree on the way forward -
>>
>> You raised before the following suggestion -
>>
>> "
>> btw for the "gradual pushing drm_dev_enter into ioctl" approach, if we go
>> with that: We could do the same trick we've done for DRM_UNLOCKED:
>> - drm_dev_enter/exit is called for any ioctl that has not set the
>>    DRM_HOTUNPLUG_SAFE flag
>> - for drm core ioctls we push them into all ioctls and decide how to
>>    handle/where (with the aim to have the least amount of code flow
>>    different during hotunplug vs after hotunplug has finished, to reduce
>>    testing scope)
>> - then we make DRM_HOTUNPLUG_SAFE the implied default
>> "
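For concreteness, the quoted proposal would presumably land in drm_ioctl_kernel() roughly like this. DRM_HOTUNPLUG_SAFE does not exist; this is a sketch of the suggested opt-out, not actual code:

```c
	/* Hypothetical: wrap the ioctl handler unless the ioctl declared
	 * itself safe to run concurrently with / after hotunplug. */
	if (!(flags & DRM_HOTUNPLUG_SAFE)) {
		if (!drm_dev_enter(dev, &idx))
			return -ENODEV;	/* device already gone */
		retcode = func(dev, kdata, file_priv);
		drm_dev_exit(idx);
	} else {
		retcode = func(dev, kdata, file_priv);
	}
```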
>> My problem here is that I have no good understanding of the criteria
>> for how to decide, per ioctl, on the right scope of drm_dev_enter/
>> exit. It depends on whether each nested function call can lead somewhere
>> down the call stack to dma_fence_wait and/or whether it can lead
>> to register access. This seems to me very hard to cover and error prone.
> 
> Tbh, neither do I. This requires a lot of work to analyze.
> 
>> Another options which we discussed internally before and is basically
>> same as current drivers i guess is simply to scope with drm_dev_enter/
>> exit all the back-end HW specific callbacks. Those are most of the
>> places MMIO access takes place and by definition no dma_fence_wait
>> can be there as it's HW specific code. This leaves MMIO
>> access through pointers (memcpy, and various pointer de-references)
>> which will need to be protected case by case, but given that I unmap
>> all MMIO anyway as the last step of the PCI remove callback, all of them
>> will be found by trial and error eventually.
>> I feel more comfortable with this approach as I have a clear
>> understanding of how to deal with it.
> 
> Hm ... I'm maybe failing to see the difference, but at least on the kms
> side "put drm_dev_enter/exit into driver callbacks" feels like the right
> place for them.
> 
> Render (and I guess kfd for amd as the special case) ioctl are different,
> especially all the driver specific ones.
> 
> So one thing that cross my mind maybe as step 0 is to annotate the rules
> for drm_dev_enter/exit using lockdep. With lockdep we can both check
> whether a lock is held, but also whether it's not held (but the latter is
> only possible with CONFIG_PROVE_LOCKING enabled). I think it would be good
> to annotate all the major locks in the kernel against drm_dev_enter/exit:
> 
> - dma_fence_wait could check that the drm_dev_enter/exit srcu is _not_
>    held.

We can't insert this directly inside dma_fence_wait, as that is a DRM-agnostic
layer. Are you proposing to wrap all dma_fence_waits in DRM and driver
code with a thin wrapper calling lockdep_assert_held(drm_unplug_srcu)?
What about dma_resv objects and dma_resv_wait_timeout_rcu style waits?
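For what it's worth, the thin-wrapper idea could conceivably look like this. drm_dev_dma_fence_wait and drm_unplug_srcu_map are invented names, and lockdep_assert_not_held only does anything with CONFIG_PROVE_LOCKING enabled:

```c
/* Hypothetical wrapper enforcing the rule "no fence waits inside a
 * drm_dev_enter/exit section", so lockdep flags violations at runtime. */
static inline signed long
drm_dev_dma_fence_wait(struct dma_fence *fence, bool intr)
{
	/* must NOT be inside drm_dev_enter/exit, or unplug can deadlock */
	lockdep_assert_not_held(&drm_unplug_srcu_map);
	return dma_fence_wait(fence, intr);
}
```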


> 
> - because srcu nest _very_ freely there's kinda no real restrictions for
>    putting drm_dev_enter/exit within a lock critical section. Might still
>    be good to explicitly call out in docs where it's all ok:
> 
>    - interrupt handlers (I hope that's the case, otherwise I screwed up)
>    - shrinkers/mmu_notifier callbacks

Is there a problem scoping these with drm_dev_enter/exit? Might they
hang indefinitely once a device is gone?

> 
> - anything else that's not allowed in within drm_dev_enter/exit. I'm e.g.
>    wondering whether we should disallow drm_modeset_lock() or maybe
>    dma_resv_lock(), or whether that's too restrictive. It could help quite
>    a bit in finding places where the drm_dev_enter/exit section is too
>    wide.
> 
> - another one is the inverse, but I guess you have that already with
>    putting a drm_dev_is_held() into mmio helpers and all that to make sure
>    we really have them all caught.
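One way to read the "inverse" check - purely a guess at the intent, with drm_dev_is_held() being a helper that does not exist today:

```c
/* Hypothetical MMIO accessor that warns when called outside a
 * drm_dev_enter/exit section, so unmarked register access is caught. */
static inline u32 amdgpu_rreg_checked(struct amdgpu_device *adev, u32 reg)
{
	WARN_ON_ONCE(!drm_dev_is_held(&adev->ddev)); /* invented helper */
	return readl(adev->rmmio + reg * 4);
}
```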

Not sure what drm_dev_is_held means here? Also, what do you mean by MMIO
helpers? Maybe you meant the page fault helpers like ttm_bo_vm_fault?

Andrey

> 
> Above is just examples, I think the more we nail down these rules one way
> or the other, the better for consistency across drivers. And without
> consistency everyone will be forced to write their own mmap helpers
> instead of one in ttm, or scheduler cleanup helpers instead of one in
> drm/scheduler.
> 
>> P.S Please respond on the question for you on the other thread at
>> 'PATCH v5 15/27] drm/scheduler: Fix hang when sched_entity released'
>> about the suggestion by Christian of partial upstreaming of this code up
>> to and before the patches dealing with drm_dev_enter/exit scoping.
> 
> Ok will try, I'm a bit buried unfortunately, so thanks for the reminder when
> I miss something.
> -Daniel
> 
>>
>> Andrey
>>
>>
>>
>>
>>
>>
>>>>
>>>> Andrey
>>>>
>>>>>
>>>>> Cheers, Daniel
>>>>>
>>>>>>
>>>>>> Andrey
>>>>>>
>>>>>>>
>>>>>>> It is unfortunately even more work until we've reached the goal, but I
>>>>>>> think it's safest and most flexible approach overall.
>>>>>>>
>>>>>>> Cheers, Daniel
>>>>>>>
>>>>>>>>
>>>>>>>> Andrey
>>>>>>>>
>>>>>>>>>
>>>>>>>>> Cheers, Daniel
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> I guess we should clarify this in the hotunplug doc?
>>>>>>>>>>
>>>>>>>>>> Agree
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> To clarify: I'm not against throwing an ENODEV at userspace for ioctl that
>>>>>>>>>>> really make no sense, and where we're rather confident that all properly
>>>>>>>>>>> implemented userspace will gracefully handle failures. Like a modeset, or
>>>>>>>>>>> opening a device, or trying to import a dma-buf or stuff like that which
>>>>>>>>>>> can already fail in normal operation for any kind of reason.
>>>>>>>>>>>
>>>>>>>>>>> But stuff that never fails, like GETRESOURCES ioctl, really shouldn't fail
>>>>>>>>>>> after hotunplug.
>>>>>>>>>>
>>>>>>>>>> As I pointed out above, this confuses me a bit given that we already do
>>>>>>>>>> blanket rejection of IOCTLs if the device is unplugged.
>>>>>>>>>
>>>>>>>>> Well I'm confused about this too :-/
>>>>>>>>>
>>>>>>>>>>> And then there's the middle ground, like doing a pageflip or buffer flush,
>>>>>>>>>>> which I guess some userspace might handle, but risky to inflict those
>>>>>>>>>>> consequences on them. atomic modeset is especially fun since depending
>>>>>>>>>>> what you're doing it can be both "failures expected" and "failures not
>>>>>>>>>>> really expected in normal operation".
>>>>>>>>>>>
>>>>>>>>>>> Also, this really should be consistent across drivers, not solved with a
>>>>>>>>>>> driver flag for every possible combination.
>>>>>>>>>>>
>>>>>>>>>>> If you look at the current hotunplug kms drivers, they have
>>>>>>>>>>> drm_dev_enter/exit sprinkled in specific hw callback functions because of
>>>>>>>>>>> the above problems. But maybe it makes sense to change things in a few
>>>>>>>>>>> cases. But then we should do it across the board.
>>>>>>>>>>
>>>>>>>>>> So as I understand, your preferred approach is that I scope any back-end,
>>>>>>>>>> HW-specific function with drm_dev_enter/exit because that is where MMIO
>>>>>>>>>> access takes place. But besides explicit MMIO access through
>>>>>>>>>> register accessors in the HW back-end there is also indirect MMIO access
>>>>>>>>>> taking place throughout the code in the driver because of various VRAM
>>>>>>>>>> BOs which provide CPU access to VRAM through the VRAM BAR. This kind of
>>>>>>>>>> access is spread all over in the driver and even in mid-layers such as
>>>>>>>>>> TTM and not limited to HW back-end functions. It means it's much harder
>>>>>>>>>> to spot such places to surgically scope them with drm_dev_enter/exit and
>>>>>>>>>> also that any new such code introduced will immediately break hot unplug
>>>>>>>>>> because the developers can't be expected to remember making their code
>>>>>>>>>> robust to this specific use case. That's why, when we discussed internally
>>>>>>>>>> what approach to take for protecting code with drm_dev_enter/exit, we
>>>>>>>>>> opted for using the widest available scope.
>>>>>>>>>
>>>>>>>>> The thing is, you kinda have to anyway. There's enormous amounts of
>>>>>>>>> asynchronous processing going on. E.g. nonblocking atomic commits also do
>>>>>>>>> ttm unpinning and fun stuff like that, which if you sync things wrong can
>>>>>>>>> happen way late. So the door for bad fallout is wide open :-(
>>>>>>>>>
>>>>>>>>> I'm not sure where the right tradeoff is to make sure we catch them all,
>>>>>>>>> and can make sure with testing that we've indeed caught them all.
>>>>>>>>> -Daniel
>>>>>>>>>
>>>>>>>
>>>>>
>>>
>
Daniel Vetter May 10, 2021, 3:46 p.m. UTC | #19
On Fri, May 07, 2021 at 02:00:14PM -0400, Andrey Grodzovsky wrote:
> 
> 
> On 2021-05-07 12:24 p.m., Daniel Vetter wrote:
> > On Fri, May 07, 2021 at 11:39:49AM -0400, Andrey Grodzovsky wrote:
> > > 
> > > 
> > > On 2021-05-07 5:11 a.m., Daniel Vetter wrote:
> > > > On Thu, May 06, 2021 at 12:25:06PM -0400, Andrey Grodzovsky wrote:
> > > > > 
> > > > > 
> > > > > On 2021-05-06 5:40 a.m., Daniel Vetter wrote:
> > > > > > On Fri, Apr 30, 2021 at 01:27:37PM -0400, Andrey Grodzovsky wrote:
> > > > > > > 
> > > > > > > 
> > > > > > > On 2021-04-30 6:25 a.m., Daniel Vetter wrote:
> > > > > > > > On Thu, Apr 29, 2021 at 04:34:55PM -0400, Andrey Grodzovsky wrote:
> > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > On 2021-04-29 3:05 p.m., Daniel Vetter wrote:
> > > > > > > > > > On Thu, Apr 29, 2021 at 12:04:33PM -0400, Andrey Grodzovsky wrote:
> > > > > > > > > > > 
> > > > > > > > > > > 
> > > > > > > > > > > On 2021-04-29 7:32 a.m., Daniel Vetter wrote:
> > > > > > > > > > > > On Thu, Apr 29, 2021 at 01:23:19PM +0200, Daniel Vetter wrote:
> > > > > > > > > > > > > On Wed, Apr 28, 2021 at 11:12:00AM -0400, Andrey Grodzovsky wrote:
> > > > > > > > > > > > > > With this calling drm_dev_unplug will flush and block
> > > > > > > > > > > > > > all in flight IOCTLs
> > > > > > > > > > > > > > 
> > > > > > > > > > > > > > Also, add feature such that if device supports graceful unplug
> > > > > > > > > > > > > > we enclose entire IOCTL in SRCU critical section.
> > > > > > > > > > > > > > 
> > > > > > > > > > > > > > Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
> > > > > > > > > > > > > 
> > > > > > > > > > > > > Nope.
> > > > > > > > > > > > > 
> > > > > > > > > > > > > The idea of drm_dev_enter/exit is to mark up hw access. Not entire ioctl.
> > > > > > > > > > > 
> > > > > > > > > > > Then I am confused why we have
> > > > > > > > > > > https://elixir.bootlin.com/linux/v5.12/source/drivers/gpu/drm/drm_ioctl.c#L826
> > > > > > > > > > > currently in the code?
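(The check referenced there - in drm_ioctl_kernel() as of v5.12 - reads approximately as follows; quoted from memory, abbreviated:)

```c
long drm_ioctl_kernel(struct file *file, drm_ioctl_t *func, void *kdata,
		      u32 flags)
{
	struct drm_file *file_priv = file->private_data;
	struct drm_device *dev = file_priv->minor->dev;

	/* every ioctl is already rejected wholesale once unplugged */
	if (drm_dev_is_unplugged(dev))
		return -ENODEV;
	/* ... permission check and func(dev, kdata, file_priv) follow ... */
}
```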
> > > > > > > > > > 
> > > > > > > > > > I forgot about this one, again. Thanks for reminding.
> > > > > > > > > > 
> > > > > > > > > > > > > Especially not with an opt-in flag so that it could be shrugged of as a
> > > > > > > > > > > > > driver hack. Most of these ioctls should have absolutely no problem
> > > > > > > > > > > > > working after hotunplug.
> > > > > > > > > > > > > 
> > > > > > > > > > > > > Also, doing this defeats the point since it pretty much guarantees
> > > > > > > > > > > > > userspace will die in assert()s and stuff. E.g. on i915 the rough contract
> > > > > > > > > > > > > is that only execbuf (and even that only when userspace has indicated
> > > > > > > > > > > > > support for non-recoverable hw ctx) is allowed to fail. Anything else
> > > > > > > > > > > > > might crash userspace.
> > > > > > > > > > > 
> > > > > > > > > > > Given that as I pointed above we already fail any IOCTls with -ENODEV
> > > > > > > > > > > when device is unplugged, it seems those crashes don't happen that
> > > > > > > > > > > often ? Also, in all my testing I don't think I saw a user space crash
> > > > > > > > > > > I could attribute to this.
> > > > > > > > > > 
> > > > > > > > > > I guess it should be ok.
> > > > > > > > > 
> > > > > > > > > What should be ok ?
> > > > > > > > 
> > > > > > > > Your approach, but not your patch. If we go with this let's just lift it
> > > > > > > > to drm_ioctl() as the default behavior. No driver opt-in flag, because
> > > > > > > > that's definitely worse than any other approach because we really need to
> > > > > > > > get rid of driver specific behaviour for generic ioctls, especially
> > > > > > > > anything a compositor will use directly.
> > > > > > > > 
> > > > > > > > > > My reasons for making this work is both less trouble for userspace (did
> > > > > > > > > > you test with various wayland compositors out there, not just amdgpu x86
> > > > > > > > > 
> > > > > > > > > I didn't - will give it a try.
> > > > > > > 
> > > > > > > Weston worked without crashes, run the egl tester cube there.
> > > > > > > 
> > > > > > > > > 
> > > > > > > > > > driver?), but also testing.
> > > > > > > > > > 
> > > > > > > > > > We still need a bunch of these checks in various places or you'll wait a
> > > > > > > > > > very long time for a pending modeset or similar to complete. Being able to
> > > > > > > > > > run that code easily after hotunplug has completed should help a lot with
> > > > > > > > > > testing.
> > > > > > > > > > 
> > > > > > > > > > Plus various drivers already acquired drm_dev_enter/exit and now I wonder
> > > > > > > > > > whether that was properly tested or not ...
> > > > > > > > > > 
> > > > > > > > > > I guess maybe we need a drm module option to disable this check, so that
> > > > > > > > > > we can exercise the code as if the ioctl has raced with hotunplug at the
> > > > > > > > > > worst possible moment.
> > > > > > > > > > 
> > > > > > > > > > Also atomic is really tricky here: I assume your testing has just done
> > > > > > > > > > normal synchronous commits, but anything that goes through atomic can be
> > > > > > > > > > done nonblocking in a separate thread. Which the ioctl catch-all here wont
> > > > > > > > > > capture.
> > > > > > > > > 
> > > > > > > > > Yes, async commit was on my mind and thanks for reminding me. Indeed
> > > > > > > > > I forgot this but i planned to scope the entire amdgpu_dm_atomic_tail in
> > > > > > > > > drm_dev_enter/exit. Note that i have a bunch of patches, all name's
> > > > > > > > > starting with 'Scope....' that just methodically put all the background
> > > > > > > > > work items and timers the drivers schedules in drm_dev_enter/exit scope.
> > > > > > > > > This was supposed to be part of the 'Scope Display code' patch.
> > > > > > > > 
> > > > > > > > That's too much. You still have to arrange that the flip completion event
> > > > > > > > gets sent out. So it's a bit tricky.
> > > > > > > > 
> > > > > > > > In other places the same problem applies, e.g. probe functions need to
> > > > > > > > make sure they report "disconnected".
> > > > > > > 
> > > > > > > I see, well, this is all part of KMS support which I defer for now
> > > > > > > anyway. Will tackle it then.
> > > > > > > 
> > > > > > > > 
> > > > > > > > > > > > > You probably need similar (and very precisely defined) rules for amdgpu.
> > > > > > > > > > > > > And those must definitely exclude any shared ioctls from randomly failing
> > > > > > > > > > > > > with EIO, because that just kills the box and defeats the point of trying
> > > > > > > > > > > > > to gracefully handling hotunplug and making sure userspace has a chance of
> > > > > > > > > > > > > survival. E.g. for atomic everything should continue, including flip
> > > > > > > > > > > > > completion, but we set all outputs to "disconnected" and send out the
> > > > > > > > > > > > > uevent. Maybe crtc enabling can fail too, but that can also be handled
> > > > > > > > > > > > > through the async status we're using to signal DP link failures to
> > > > > > > > > > > > > userspace.
> > > > > > > > > > > 
> > > > > > > > > > > As I pointed out before, because of the complexity of the topic I prefer
> > > > > > > > > > > to take it step by step and solve first for the secondary device use case,
> > > > > > > > > > > not for the primary, display-attached device.
> > > > > > > > > > 
> > > > > > > > > > Yeah makes sense. But then I think the right patch is to roll this out for
> > > > > > > > > > all drivers, properly justified with existing code. Not behind a driver
> > > > > > > > > > flag, because with all these different compositors the last thing we want
> > > > > > > > > > is a proliferation of driver-specific behaviour. That's imo the worst
> > > > > > > > > > option of all of them and needs to be avoided.
> > > > > > > > > 
> > > > > > > > > So this kind of patch would be acceptable to you if I unconditionally
> > > > > > > > > scope drm_ioctl with drm_dev_enter/exit without the driver flag?
> > > > > > > > > I am worried to break other drivers with this, see patch
> > > > > > > > > https://cgit.freedesktop.org/~agrodzov/linux/commit/?h=drm-misc-next&id=f0c593f35b22ca5bf60ed9e7ce2bf2b80e6c68c6
> > > > > > > > > Before setting drm_dev_unplug I go through a whole process of signalling
> > > > > > > > > all possible fences in the system which some one some where might be
> > > > > > > > > waiting on. My concern is that in the absence of HW those fences won't
> > > > > > > > > signal, and so unless I signal them myself, synchronize_srcu in
> > > > > > > > > drm_dev_unplug will hang waiting for any such code scoped by
> > > > > > > > > drm_dev_enter/exit.
> > > > > > > > 
> > > > > > > > Uh right. I forgot about this.
> > > > > > > > 
> > > > > > > > Which would kinda mean the top level scope is maybe not the best idea, and
> > > > > > > > perhaps we should indeed drill it down. But then the testing issue
> > > > > > > > definitely gets a lot worse.
> > > > > > > > 
> > > > > > > > So what if we'd push that drm_dev_is_unplugged check down into ioctls?
> > > > > > > > Then we can make a case-by case decision whether it should be converted to
> > > > > > > > drm_dev_enter/exit, needs to be pushed down further into drivers (due to
> > > > > > > > fence wait issues) or other concerns?
> > > > > > > > 
> > > > > > > > Also I guess we need to have a subsystem wide rule on whether you need to
> > > > > > > > force complete all fences before you call drm_dev_unplug, or afterwards.
> > > > > > > 
> > > > > > > I don't see how you can handle it afterwards. If a thread is stuck in
> > > > > > > dma_fence_wait in a non-interruptible wait (any kernel thread) and with no
> > > > > > > timeout, there is nothing you can do to stop the wait. Any such code
> > > > > > > scoped with drm_dev_enter/exit will cause a hang in drm_dev_unplug.
> > > > > > > The only way then is to preemptively force signal all such fences before
> > > > > > > calling drm_dev_unplug - as I do in the above-mentioned patch.
> > > > > > 
> > > > > > Yeah, which is why I don't think top-level drm_dev_enter/exit is a good
> > > > > > idea.
> > > > > > 
> > > > > > > > If we have mixed behaviour on this there will be disappointment. And since
> > > > > > > > hotunplug and dma_fence completion are both userspace visible that
> > > > > > > > inconsistency might have bigger impact.
> > > > > > > > 
> > > > > > > > This is all very tricky indeed :-/
> > > > > > > > 
> > > > > > > > btw for the "gradual pushing drm_dev_enter into ioctl" approach, if we go
> > > > > > > > with that: We could do the same trick we've done for DRM_UNLOCKED:
> > > > > > > > - drm_dev_enter/exit is called for any ioctl that has not set the
> > > > > > > >       DRM_HOTUNPLUG_SAFE flag
> > > > > > > > - for drm core ioctls we push them into all ioctls and decide how to
> > > > > > > >       handle/where (with the aim to have the least amount of code flow
> > > > > > > >       different during hotunplug vs after hotunplug has finished, to reduce
> > > > > > > >       testing scope)
> > > > > > > > - then we make DRM_HOTUNPLUG_SAFE the implied default
> > > > > > > > 
> > > > > > > > This would have us left with render ioctls, and I think the defensive
> > > > > > > > assumption there is that they're all hotunplug safe. We might hang on a
> > > > > > > > fence wait, but that's fixable, and it's better than blowing up on a
> > > > > > > > use-after-free security bug.
> > > > > > > > 
> > > > > > > > Thoughts?
> > > > > > > 
> > > > > > > I don't fully see a difference between the approach described above and
> > > > > > > the full drill down to each driver, and even within the driver to the HW
> > > > > > > back-ends - what criteria would I use to decide whether for a given IOCTL
> > > > > > > I scope with drm_dev_enter/exit at the highest level while for another
> > > > > > > I go all the way down? If we agree that signaling the fences
> > > > > > > preemptively before engaging drm_dev_unplug is generically the right
> > > > > > > approach, maybe we can then scope drm_ioctl unconditionally with
> > > > > > > drm_dev_enter/exit and then for each driver go through the same process
> > > > > > > I do for amdgpu - writing a driver-specific function which takes care of
> > > > > > > all the fences. We could then just create a drm callback which would
> > > > > > > be called from drm_ioctl before drm_dev_unplug is called.
> > > > > > 
> > > > > > So I see the appeal of just nuking all the fences, but I'm not sure that's
> > > > > > a good plan. We've done this in the old i915 gpu reset code too, and the
> > > > > > issue is it's de facto inverting the locking. But also the hw is truly
> > > > > > gone, so it also makes sense.
> > > > > > 
> > > > > > The problem is a bit roll-out, if we state that dma_fence_wait is allowed
> > > > > > with a drm_dev_enter/exit, then all drivers need to force-retire their
> > > > > > fences.
> > > > > > 
> > > > > > The other option would be that we require that dma_fence_wait is _not_
> > > > > > allowed in drm_dev_enter/exit, and that therefore these areas must be
> > > > > > marked up more fine-grained to avoid deadlocks. I like this more from the
> > > > > > testing aspect (it makes it easier to be reasonably sure your code handles
> > > > > > concurrent hotunplug), but also it's pretty easy to validate with the
> > > > > > dma_fence lockdep annotations we have I think.
> > > > > 
> > > > > The key question as I see it - is it ok for a device to be unplugged
> > > > > while its driver has, anywhere in its code, a dma_fence_wait
> > > > > waiting for work completion from this device? The answer seems to me
> > > > > to be no: the HW is gone, this fence will never signal, and so you will
> > > > > be left with an indefinitely hung thread and all its unreleased
> > > > > resources. If I am correct in the above statement, then avoiding scoping
> > > > > code with drm_dev_enter/exit because a dma_fence_wait might be there in
> > > > > the middle just hides the problem. Also, then the only solution for each
> > > > > driver wanting to support hot-unplug is to force retire all its HW
> > > > > fences once it is notified of device removal.
> > > > 
> > > > At a high level, yes dma_fence must always complete. I don't think we have
> > > > a disagreement here on that.
> > > > 
> > > > What we're discussing here is the precise sequencing and barriers, where
> > > > things get tricky. Requiring that you force-complete all dma_fence that
> > > > might be affected before you hotunplug is one solution, the other is
> > > > tuning the critical sections that drm_dev_enter/exit annotates.
> > > > 
> > > > This isn't about avoiding anything or hiding problems, this is about
> > > > locking/synchronization design. And for that we must agree on what is
> > > > allowed inside/outside of a critical section for all possible
> > > > combinations.
> > > > 
> > > > E.g. we're also "hiding" problems with calling dma_fence_wait from
> > > > shrinkers/mmu notifiers by forbidding allocations in
> > > > dma_fence_begin/end_signalling critical paths.
> > > > 
> > > > > > A third reason for not requiring force-retiring of dma_fence before
> > > > > > drm_dev_unplug is the races: Before drm_dev_unplug you haven't stopped new
> > > > > > fences from happening, but until you've stopped new fences it's hard to
> > > > > > guarantee they're all retired. How do you solve this currently?
> > > > > 
> > > > > See amdgpu_finilize_device_fences in
> > > > > https://patchwork.ozlabs.org/project/linux-pci/patch/20210428151207.1212258-20-andrey.grodzovsky@amd.com/
> > > > > I think the steps described there answer your
> > > > > concern here.
> > > > 
> > > > The hard problem is stopping further command submission. Not seeing how
> > > > you solve that.
> > > 
> > > By stopping GPU SW scheduler before force completion of HW fences, see
> > > amdgpu_finilize_device_fences->amdgpu_fence_driver_fini_hw and the
> > > comment above it.
> > > 
> > > > 
> > > > But I'm definitely scared about all the scheduler/tdr interactions you
> > > > already have there, and that looks quite a bit like fallout from doing
> > > > things the wrong way round.
> > > > 
> > > > Also given that drm/scheduler is shared, why can't this be a drm/scheduler
> > > > helper function?
> > > 
> > > I was thinking about it, what stopped me is that HW fences signaling is
> > > done from driver specific HW fence array. But we could do it generic by
> > > instead iterating sched->pending_list and signaling
> > > s_job->s_fence->parent instead.
> > > You also need to retire scheduler's scheduled fences once you stopped
> > > the schedulers as they are waited on as dependencies for other jobs
> > > submissions (i do take care of it).
> > > 
> > > > 
> > > > > > Finally there's still hangcheck and all that, so if we go with forbidding
> > > > > > dma_fence_wait from within drm_dev_enter/exit sections, then drivers don't
> > > > > > need to have additional tricky code to force-retire fences. TDR will take
> > > > > > care already (albeit with maybe a slightly annoying long timeout, which
> > > > > > we can shorten to "time out everything immediately" after drm_dev_unplug).
> > > > > 
> > > > > I am not aware of TDR handlers that do it today, at least we don't,
> > > > > we don't check that if device is gone let's instead of resetting the device
> > > > > and resubmit jobs just force retire all the HW fences. In any case, this
> > > > > can and i think should be done in pci remove callback because this is
> > > > > the place that supposed to handle device extraction. I for example in
> > > > > amdgpu_finilize_device_fences just block all TDRs from taking place as first
> > > > > step in the process. If other drivers want to force retire fences
> > > > > in their TDR handlers they still need to block and wait for all such
> > > > > TDRs in their pci_remove handler.
> > > > 
> > > > TDR definitely force-completes the fence that did hang. Of course it'll
> > > > take a while until they've all completed this way, but we do have
> > > > guaranteed forward progress since we've stopped all further fences from
> > > > showing up because drm_dev_unplug is called already.
> > > > 
> > > > And yes after drm_dev_unplug you can then force-retire the tdr stuff.
> > > > 
> > > > > > What we definitely can't have is half the drivers doing it one way, and
> > > > > > the other half the other way. So your driver flag to wrap the ioctl
> > > > > > optionally in a drm_dev_enter/exit path is a no-go still I think.
> > > > > > 
> > > > > > I guess my tldr; is: I definitely see how your current approach gives
> > > > > > quicker results for amdgpu right now, but long term I'm seeing more
> > > > > > positives on the other one. At least I expect less special cases due to
> > > > > > hotunplug with that.
> > > > > 
> > > > > As i expressed my viewpoint above - seems to me any driver in need to
> > > > > support hot-unplug must force retire it's fences because of need to
> > > > > unblock all dma_fence waits and so it will not be a special case.
> > > > 
> > > > This isn't the special case I meant. It's the very tricky
> > > > force-retire-before-you-unplugged-officially which is large scale nasty.
> > > > 
> > > > Also if your driver doesn't force-retire already, it's buggy. The
> > > > additional need of hotunplug is just that we're trying to force-retire a
> > > > bit faster, because we know it's all hopeless. But e.g. i915 already has a
> > > > fallback that does this automatically:
> > > > - first we reset only the engine/context, keeping everyone else running
> > > > - if that doesn't pan out, we reset the entire chip and give up on
> > > >     anything that's in-flight, which (iirc, it did so at least in the past)
> > > >     force retires everything outstanding.
> > > > 
> > > > I think amdgpu only has full chip reset, so your first step tries to
> > > > reissue all other tasks. But that's not necessarily how it needs to
> > > > happen.
> > > > 
> > > > Either way drivers must force retire everything (albeit maybe a bit at a
> > > > slow pace) if the hw ceased to work properly already. Hotunplug really
> > > > isn't anything new here.
> > > > -Daniel
> > > 
> > > Let's then agree on the way forward -
> > > 
> > > You raised before the following suggestion -
> > > 
> > > "
> > > btw for the "gradual pushing drm_dev_enter into ioctl" approach, if we go
> > > with that: We could do the same trick we've done for DRM_UNLOCKED:
> > > - drm_dev_enter/exit is called for any ioctl that has not set the
> > >    DRM_HOTUNPLUG_SAFE flag
> > > - for drm core ioctls we push them into all ioctls and decide how to
> > >    handle/where (with the aim to have the least amount of code flow
> > >    different during hotunplug vs after hotunplug has finished, to reduce
> > >    testing scope)
> > > - then we make DRM_HOTUNPLUG_SAFE the implied default
> > > "
> > > My problem here is that I have no good understanding of, or criteria
> > > for, how to decide per each ioctl on the right scope of drm_dev_enter/
> > > exit. It depends on whether each next function call can lead somewhere
> > > down the call stack to dma_fence_wait and/or whether it can lead
> > > to registers access. Seems to me very hard to cover and error prone.
> > 
> > Tbh, neither do I. This requires a lot of work to analyze.
> > 
> > > Another options which we discussed internally before and is basically
> > > same as current drivers i guess is simply to scope with drm_dev_enter/
> > > exit all the back-end HW specific callbacks. Those are most of the
> > > places MMIO access takes place and by definition no dma_fence_wait
> > > can be there as it's HW specific code. This leaves MMIO
> > > access through pointers (memcpy, and various pointer de-references)
> > > which will need to be protected on case by case, but given that I unmap
> > > all MMIO anyway as last step of PCI remove callback, all of them will
> > > be found by try and error eventually.
> > > I feel more comfortable with this approach as I have a clear
> > > understanding of how to deal with it.
> > 
> > Hm ... I'm maybe failing to see the difference, but at least on the kms
> > side "put drm_dev_enter/exit into driver callbacks" feels like the right
> > place for them.
> > 
> > Render (and I guess kfd for amd as the special case) ioctl are different,
> > especially all the driver specific ones.
> > 
> > So one thing that cross my mind maybe as step 0 is to annotate the rules
> > for drm_dev_enter/exit using lockdep. With lockdep we can both check
> > whether a lock is held, but also whether it's not held (but the latter is
> > only possible with CONFIG_PROVE_LOCKING enabled). I think it would be good
> > to annotate all the major locks in the kernel against drm_dev_enter/exit:
> > 
> > - dma_fence_wait could check that the drm_dev_enter/exit srcu is _not_
> >    held.
> 
> We can't insert this directly inside dma_fence_wait as a DRM agnostic
> layer, are you proposing to wrap all dma_fence_waits in DRM and drivers
> code with thin wrapper calling lockdep_assert_held(drm_unplug_srcu) ?
> What about dma_resv objs and dma_resv_wait_timeout_rcu style of waits ?

Hm ... yeah this doesn't work. Least because we also don't have a generic
way to get from dma_fence to anything resembling a drm_device or any other
device. Which is really annoying.

The dma_resv waits boil down to dma_fence waits underneath, which is why I
wanted to annotate dma_fence_wait ...

The problem is also that because srcu sections nest completely freely I
don't think we can teach lockdep that it's not allowed to hold the srcu
read side lock when we call dma_fence_wait. The latter has its own
lockdep_key, so maybe annotating things that way is possible. Not sure.
Lockdep has some big limitations for read side locks.

Maybe you can figure out something, my current bandwidth for screaming at
clever lockdep annotations is a bit thin :-(
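For reference, the simplest form of such an annotation could look roughly like this (untested sketch; it assumes drm_unplug_srcu were visible outside drm_drv.c, which it currently isn't, and the wrapper name is made up):

```c
/* Hypothetical wrapper, not actual DRM API: complain (with lockdep
 * enabled) if someone waits on a fence while inside a
 * drm_dev_enter/exit critical section, since that can deadlock on
 * hotunplug.
 */
static inline signed long
drm_dev_fence_wait_timeout(struct dma_fence *fence, bool intr,
			   signed long timeout)
{
	WARN_ON_ONCE(srcu_read_lock_held(&drm_unplug_srcu));

	return dma_fence_wait_timeout(fence, intr, timeout);
}
```

srcu_read_lock_held() only gives a meaningful answer with CONFIG_PROVE_RCU, so this is a debug aid, not a runtime guarantee.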

> > - because srcu nest _very_ freely there's kinda no real restrictions for
> >    putting drm_dev_enter/exit within a lock critical section. Might still
> >    be good to explicitly call out in docs where it's all ok:
> > 
> >    - interrupt handlers (I hope that's the case, otherwise I screwed up)
> >    - shrinkers/mmu_notifier callbacks
> 
> Is there a problem scoping these with drm_dev_enter/exit ? Might they
> hang indefinitely once a device is gone ?

Well your irq handler must be robust, since drm_dev_enter/exit only helps
so much.

The shrinker still needs to work somewhat so it can help free memory,
while the hotunplug code runs.
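The usual pattern for making an irq handler robust (roughly what the existing hotunplug-capable kms drivers do, sketched here from memory with made-up foo_ names) is to bail out early:

```c
static irqreturn_t foo_irq_handler(int irq, void *arg)
{
	struct drm_device *ddev = arg;
	int idx;

	/* Device already unplugged: registers may be unmapped or the
	 * irq may be shared, so don't touch the hw at all.
	 */
	if (!drm_dev_enter(ddev, &idx))
		return IRQ_NONE;

	/* ... actual hw handling, reading/acking status registers ... */

	drm_dev_exit(idx);
	return IRQ_HANDLED;
}
```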

> > - anything else that's not allowed in within drm_dev_enter/exit. I'm e.g.
> >    wondering whether we should disallow drm_modeset_lock() or maybe
> >    dma_resv_lock(), or whether that's too restrictive. It could help quite
> >    a bit in finding places where the drm_dev_enter/exit section is too
> >    wide.
> > 
> > - another one is the inverse, but I guess you have that already with
> >    putting a drm_dev_is_held() into mmio helpers and all that to make sure
> >    we really have them all caught.
> 
> Not sure what drm_dev_is_held means here ? Also what do you mean by MMIO
> helpers ? Maybe you meant the page fault helpers like ttm_bo_vm_fault ?

Checking that we're in the srcu read side critical section is what I meant
with drm_dev_is_held. Maybe drm_dev_is_entered is clearer, but also sounds
a bit silly.
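I.e. something along these lines in the driver's register accessors (pure sketch: drm_dev_is_entered() doesn't exist today, it would have to wrap srcu_read_lock_held() on the unplug srcu struct, and the foo_ names are invented):

```c
/* Catch hw access that happens outside of any drm_dev_enter/exit
 * section, so missing annotations show up in testing instead of as
 * use-after-unmap crashes on hotunplug.
 */
static inline u32 foo_mmio_read(struct foo_device *fdev, u32 reg)
{
	WARN_ON_ONCE(!drm_dev_is_entered(&fdev->drm));

	return readl(fdev->mmio + reg);
}
```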
-Daniel

> 
> Andrey
> 
> > 
> > Above is just examples, I think the more we nail down these rules one way
> > or the other, the better for consistency across drivers. And without
> > consistency everyone will be forced to write their own mmap helpers
> > instead of one in ttm, or scheduler cleanup helpers instead of one in
> > drm/scheduler.
> > 
> > > P.S Please respond on the question for you on the other thread at
> > > 'PATCH v5 15/27] drm/scheduler: Fix hang when sched_entity released'
> > > about suggestion by Christian of partial up-streaming of this code up to
> > > and before the patches dealing with scoping of drm_dev_enter/exit scoping.
> > 
> > Ok will try, I'm a bit burried unfortunately so thanks for reminder when I
> > miss something.
> > -Daniel
> > 
> > > 
> > > Andrey
> > > 
> > > 
> > > 
> > > 
> > > 
> > > 
> > > > > 
> > > > > Andrey
> > > > > 
> > > > > > 
> > > > > > Cheers, Daniel
> > > > > > 
> > > > > > > 
> > > > > > > Andrey
> > > > > > > 
> > > > > > > > 
> > > > > > > > It is unfortunately even more work until we've reached the goal, but I
> > > > > > > > think it's safest and most flexible approach overall.
> > > > > > > > 
> > > > > > > > Cheers, Daniel
> > > > > > > > 
> > > > > > > > > 
> > > > > > > > > Andrey
> > > > > > > > > 
> > > > > > > > > > 
> > > > > > > > > > Cheers, Daniel
> > > > > > > > > > 
> > > > > > > > > > 
> > > > > > > > > > > 
> > > > > > > > > > > > > 
> > > > > > > > > > > > > I guess we should clarify this in the hotunplug doc?
> > > > > > > > > > > 
> > > > > > > > > > > Agree
> > > > > > > > > > > 
> > > > > > > > > > > > 
> > > > > > > > > > > > To clarify: I'm not against throwing an ENODEV at userspace for ioctl that
> > > > > > > > > > > > really make no sense, and where we're rather confident that all properly
> > > > > > > > > > > > implemented userspace will gracefully handle failures. Like a modeset, or
> > > > > > > > > > > > opening a device, or trying to import a dma-buf or stuff like that which
> > > > > > > > > > > > can already fail in normal operation for any kind of reason.
> > > > > > > > > > > > 
> > > > > > > > > > > > But stuff that never fails, like GETRESOURCES ioctl, really shouldn't fail
> > > > > > > > > > > > after hotunplug.
> > > > > > > > > > > 
> > > > > > > > > > > As I pointed out above, this confuses me a bit given that we already do
> > > > > > > > > > > blanket rejection of IOCTLs if device is unplugged.
> > > > > > > > > > 
> > > > > > > > > > Well I'm confused about this too :-/
> > > > > > > > > > 
> > > > > > > > > > > > And then there's the middle ground, like doing a pageflip or buffer flush,
> > > > > > > > > > > > which I guess some userspace might handle, but risky to inflict those
> > > > > > > > > > > > consequences on them. atomic modeset is especially fun since depending
> > > > > > > > > > > > what you're doing it can be both "failures expected" and "failures not
> > > > > > > > > > > > really expected in normal operation".
> > > > > > > > > > > > 
> > > > > > > > > > > > Also, this really should be consistent across drivers, not solved with a
> > > > > > > > > > > > driver flag for every possible combination.
> > > > > > > > > > > > 
> > > > > > > > > > > > If you look at the current hotunplug kms drivers, they have
> > > > > > > > > > > > drm_dev_enter/exit sprinkled in specific hw callback functions because of
> > > > > > > > > > > > the above problems. But maybe it makes sense to change things in a few
> > > > > > > > > > > > cases. But then we should do it across the board.
> > > > > > > > > > > 
> > > > > > > > > > > So as I understand your preferred approach is that I scope any back_end, HW
> > > > > > > > > > > specific function with drm_dev_enter/exit because that where MMIO
> > > > > > > > > > > access takes place. But besides explicit MMIO access thorough
> > > > > > > > > > > register accessors in the HW back-end there is also indirect MMIO access
> > > > > > > > > > > taking place throughout the code in the driver because of various VRAM
> > > > > > > > > > > BOs which provide CPU access to VRAM through the VRAM BAR. This kind of
> > > > > > > > > > > access is spread all over in the driver and even in mid-layers such as
> > > > > > > > > > > TTM and not limited to HW back-end functions. It means it's much harder
> > > > > > > > > > > to spot such places to surgically scope them with drm_dev_enter/exit and
> > > > > > > > > > > also that any new such code introduced will immediately break hot unplug
> > > > > > > > > > > because the developers can't be expected to remember making their code
> > > > > > > > > > > robust to this specific use case. That why when we discussed internally
> > > > > > > > > > > what approach to take to protecting code with drm_dev_enter/exit we
> > > > > > > > > > > opted for using the widest available scope.
> > > > > > > > > > 
> > > > > > > > > > The thing is, you kinda have to anyway. There's enormous amounts of
> > > > > > > > > > asynchronous processing going on. E.g. nonblocking atomic commits also do
> > > > > > > > > > ttm unpinning and fun stuff like that, which if you sync things wrong can
> > > > > > > > > > happen way late. So the door for bad fallout is wide open :-(
> > > > > > > > > > 
> > > > > > > > > > I'm not sure where the right tradeoff is to make sure we catch them all,
> > > > > > > > > > and can make sure with testing that we've indeed caught them all.
> > > > > > > > > > -Daniel
> > > > > > > > > > 
> > > > > > > > 
> > > > > > 
> > > > 
> >

Patch

diff --git a/drivers/gpu/drm/drm_ioctl.c b/drivers/gpu/drm/drm_ioctl.c
index d273d1a8603a..5882ef2183bb 100644
--- a/drivers/gpu/drm/drm_ioctl.c
+++ b/drivers/gpu/drm/drm_ioctl.c
@@ -815,7 +815,7 @@  long drm_ioctl(struct file *filp,
 	const struct drm_ioctl_desc *ioctl = NULL;
 	drm_ioctl_t *func;
 	unsigned int nr = DRM_IOCTL_NR(cmd);
-	int retcode = -EINVAL;
+	int idx, retcode = -EINVAL;
 	char stack_kdata[128];
 	char *kdata = NULL;
 	unsigned int in_size, out_size, drv_size, ksize;
@@ -884,7 +884,18 @@  long drm_ioctl(struct file *filp,
 	if (ksize > in_size)
 		memset(kdata + in_size, 0, ksize - in_size);
 
-	retcode = drm_ioctl_kernel(filp, func, kdata, ioctl->flags);
+	if (drm_core_check_feature(dev, DRIVER_HOTUNPLUG_SUPPORT)) {
+		if (drm_dev_enter(dev, &idx)) {
+			retcode = drm_ioctl_kernel(filp, func, kdata, ioctl->flags);
+			drm_dev_exit(idx);
+		} else {
+			retcode = -ENODEV;
+			goto err_i1;
+		}
+	} else {
+		retcode = drm_ioctl_kernel(filp, func, kdata, ioctl->flags);
+	}
+
 	if (copy_to_user((void __user *)arg, kdata, out_size) != 0)
 		retcode = -EFAULT;
 
diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h
index b439ae1921b8..63e05cec46c1 100644
--- a/include/drm/drm_drv.h
+++ b/include/drm/drm_drv.h
@@ -94,6 +94,12 @@  enum drm_driver_feature {
 	 * synchronization of command submission.
 	 */
 	DRIVER_SYNCOBJ_TIMELINE         = BIT(6),
+	/**
+	 * @DRIVER_HOTUNPLUG_SUPPORT:
+	 *
+	 * Driver supports graceful hot device removal.
+	 */
+	DRIVER_HOTUNPLUG_SUPPORT         = BIT(7),
 
 	/* IMPORTANT: Below are all the legacy flags, add new ones above. */