drm/msm/dpu: Correct dpu encoder spinlock initialization

Message ID: 1561357632-15361-1-git-send-email-dhar@codeaurora.org (mailing list archive)
State: Not Applicable, archived
Delegated to: Andy Gross
Series: drm/msm/dpu: Correct dpu encoder spinlock initialization

Commit Message

Shubhashree Dhar June 24, 2019, 6:27 a.m. UTC
The dpu encoder spinlock should be initialized during dpu encoder
init instead of dpu encoder setup, which is part of the commit path.
Otherwise there is a chance that vblank control uses the spinlock
before it has been initialized.

Change-Id: I5a18b95fa47397c834a266b22abf33a517b03a4e
Signed-off-by: Shubhashree Dhar <dhar@codeaurora.org>
---
 drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
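
For context, the ordering the patch enforces can be sketched as follows
(illustrative only; the struct and function names below are simplified
stand-ins for the real dpu_encoder code, not the actual source):

/* Illustrative sketch -- not the actual driver source. */
struct sketch_encoder {
	struct drm_encoder base;
	spinlock_t enc_spinlock;	/* guards vblank bookkeeping */
};

/* Called when the encoder object is first created. */
struct drm_encoder *sketch_encoder_init(struct drm_device *dev)
{
	struct sketch_encoder *enc = kzalloc(sizeof(*enc), GFP_KERNEL);

	if (!enc)
		return ERR_PTR(-ENOMEM);

	/*
	 * Initialize the lock as soon as the object exists: vblank
	 * control may take it before the setup step ever runs.
	 */
	spin_lock_init(&enc->enc_spinlock);
	return &enc->base;
}

/*
 * Vblank path: on hot-pluggable outputs such as DP this can run
 * before the setup step, so the lock cannot be initialized there.
 */
void sketch_toggle_vblank(struct sketch_encoder *enc, bool enable)
{
	unsigned long flags;

	spin_lock_irqsave(&enc->enc_spinlock, flags);
	/* ... toggle vblank interrupts on the physical encoders ... */
	spin_unlock_irqrestore(&enc->enc_spinlock, flags);
}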

Comments

Jeykumar Sankaran June 24, 2019, 10:26 p.m. UTC | #1
On 2019-06-23 23:27, Shubhashree Dhar wrote:
> The dpu encoder spinlock should be initialized during dpu encoder
> init instead of dpu encoder setup, which is part of the commit path.
> Otherwise there is a chance that vblank control uses the spinlock
> before it has been initialized.
Not much can be done if someone is performing a vblank operation
before encoder_setup is done.
Can you point to the path where this lock is acquired before
the encoder_setup?

Thanks
Jeykumar S.
Shubhashree Dhar June 25, 2019, 5:44 a.m. UTC | #2
On 2019-06-25 03:56, Jeykumar Sankaran wrote:
> On 2019-06-23 23:27, Shubhashree Dhar wrote:
>> The dpu encoder spinlock should be initialized during dpu encoder
>> init instead of dpu encoder setup, which is part of the commit path.
>> Otherwise there is a chance that vblank control uses the spinlock
>> before it has been initialized.
> Not much can be done if someone is performing a vblank operation
> before encoder_setup is done.
> Can you point to the path where this lock is acquired before
> the encoder_setup?
> 
> Thanks
> Jeykumar S.
>> 

When running a DP use case, we are hitting this call stack.

Process kworker/u16:8 (pid: 215, stack limit = 0x00000000df9dd930)
Call trace:
  spin_dump+0x84/0x8c
  spin_dump+0x0/0x8c
  do_raw_spin_lock+0x80/0xb0
  _raw_spin_lock_irqsave+0x34/0x44
  dpu_encoder_toggle_vblank_for_crtc+0x8c/0xe8
  dpu_crtc_vblank+0x168/0x1a0
  dpu_kms_enable_vblank+0[   11.648998]  vblank_ctrl_worker+0x3c/0x60
  process_one_work+0x16c/0x2d8
  worker_thread+0x1d8/0x2b0
  kthread+0x124/0x134

It looks like vblank is getting enabled early, causing this issue: we
are using the spinlock without initializing it.

Thanks,
Shubhashree
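
The spin_dump frames in this stack come from CONFIG_DEBUG_SPINLOCK.
Roughly, the debug check that fires looks like the sketch below
(simplified from kernel/locking/spinlock_debug.c): a lock that was
kzalloc()ed but never passed through spin_lock_init() still has
magic == 0, so the SPINLOCK_MAGIC comparison trips and spin_dump()
prints the report.

/* Simplified from kernel/locking/spinlock_debug.c (illustrative). */
static void debug_spin_lock_before(raw_spinlock_t *lock)
{
	/*
	 * An uninitialized (zeroed) lock fails the magic check,
	 * producing the spin_dump() output seen above.
	 */
	SPIN_BUG_ON(lock->magic != SPINLOCK_MAGIC, lock, "bad magic");
	SPIN_BUG_ON(lock->owner == current, lock, "recursion");
	SPIN_BUG_ON(lock->owner_cpu == raw_smp_processor_id(),
						lock, "cpu recursion");
}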

Jeykumar Sankaran June 25, 2019, 9:40 p.m. UTC | #3
On 2019-06-24 22:44, dhar@codeaurora.org wrote:
> On 2019-06-25 03:56, Jeykumar Sankaran wrote:
>> On 2019-06-23 23:27, Shubhashree Dhar wrote:
>>> The dpu encoder spinlock should be initialized during dpu encoder
>>> init instead of dpu encoder setup, which is part of the commit path.
>>> Otherwise there is a chance that vblank control uses the spinlock
>>> before it has been initialized.
>> Not much can be done if someone is performing a vblank operation
>> before encoder_setup is done.
>> Can you point to the path where this lock is acquired before
>> the encoder_setup?
>> 
>> Thanks
>> Jeykumar S.
>>> 
> 
> When running a DP use case, we are hitting this call stack.
> 
> Process kworker/u16:8 (pid: 215, stack limit = 0x00000000df9dd930)
> Call trace:
>  spin_dump+0x84/0x8c
>  spin_dump+0x0/0x8c
>  do_raw_spin_lock+0x80/0xb0
>  _raw_spin_lock_irqsave+0x34/0x44
>  dpu_encoder_toggle_vblank_for_crtc+0x8c/0xe8
>  dpu_crtc_vblank+0x168/0x1a0
>  dpu_kms_enable_vblank+0[   11.648998]  vblank_ctrl_worker+0x3c/0x60
>  process_one_work+0x16c/0x2d8
>  worker_thread+0x1d8/0x2b0
>  kthread+0x124/0x134
> 
> It looks like vblank is getting enabled early, causing this issue: we
> are using the spinlock without initializing it.
> 
> Thanks,
> Shubhashree
> 
DP calls into set_encoder_mode during hotplug, before even notifying
userspace. Can you trace out the original caller of this stack?

Even though the patch is harmless, I am not entirely convinced we should
move this initialization: any call which acquires the lock before
encoder_setup will be a no-op, since there will not be any physical
encoder to work with.

Thanks and Regards,
Jeykumar S.

Shubhashree Dhar July 1, 2019, 10:29 a.m. UTC | #4
On 2019-06-26 03:10, Jeykumar Sankaran wrote:
> DP calls into set_encoder_mode during hotplug, before even notifying
> userspace. Can you trace out the original caller of this stack?
> 
> Even though the patch is harmless, I am not entirely convinced we should
> move this initialization: any call which acquires the lock before
> encoder_setup will be a no-op, since there will not be any physical
> encoder to work with.
> 
> Thanks and Regards,
> Jeykumar S.

In dpu_crtc_vblank(), we loop through all the encoders in the
present mode_config:
https://github.com/torvalds/linux/blob/master/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c#L1082
and hence call dpu_encoder_toggle_vblank_for_crtc() for all the
encoders. But in dpu_encoder_toggle_vblank_for_crtc(), after acquiring
the spinlock, we do an early return for the encoders which are not
currently assigned to our crtc:
https://github.com/torvalds/linux/blob/master/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c#L1318.
Since encoder_setup for the secondary encoder (the DP encoder in this
case) is not called until DP hotplug, we are hitting a kernel panic
while acquiring the lock.
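
The hazard described above boils down to the following pattern (a
simplified sketch of the two paths being discussed, not the verbatim
kernel source):

void dpu_encoder_toggle_vblank_for_crtc(struct drm_encoder *drm_enc,
					struct drm_crtc *crtc, bool enable);

/* dpu_crtc_vblank(), simplified: the loop visits *every* encoder,
 * including ones (e.g. the DP encoder) whose setup has not run yet. */
int dpu_crtc_vblank(struct drm_crtc *crtc, bool en)
{
	struct drm_encoder *enc;

	drm_for_each_encoder(enc, crtc->dev)
		dpu_encoder_toggle_vblank_for_crtc(enc, crtc, en);
	return 0;
}

/* dpu_encoder_toggle_vblank_for_crtc(), simplified: the lock is taken
 * unconditionally, before the "does this encoder belong to our crtc?"
 * early return, so even a foreign encoder's lock is touched --
 * initialized or not. */
void dpu_encoder_toggle_vblank_for_crtc(struct drm_encoder *drm_enc,
					struct drm_crtc *crtc, bool enable)
{
	struct dpu_encoder_virt *dpu_enc = to_dpu_encoder_virt(drm_enc);
	unsigned long lock_flags;

	spin_lock_irqsave(&dpu_enc->enc_spinlock, lock_flags);
	if (dpu_enc->crtc != crtc) {
		spin_unlock_irqrestore(&dpu_enc->enc_spinlock, lock_flags);
		return;
	}
	/* ... forward the vblank toggle to the physical encoders ... */
	spin_unlock_irqrestore(&dpu_enc->enc_spinlock, lock_flags);
}
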
Jeykumar Sankaran July 2, 2019, 6:21 p.m. UTC | #5
On 2019-07-01 03:29, dhar@codeaurora.org wrote:
> In dpu_crtc_vblank(), we loop through all the encoders in the
> present mode_config:
> https://github.com/torvalds/linux/blob/master/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c#L1082
> and hence call dpu_encoder_toggle_vblank_for_crtc() for all the
> encoders. But in dpu_encoder_toggle_vblank_for_crtc(), after acquiring
> the spinlock, we do an early return for the encoders which are not
> currently assigned to our crtc:
> https://github.com/torvalds/linux/blob/master/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c#L1318.
> Since encoder_setup for the secondary encoder (the DP encoder in this
> case) is not called until DP hotplug, we are hitting a kernel panic
> while acquiring the lock.
This is the sequence in which the events are expected to happen:

1) DP connector is instantiated with an inactive state
2) Hot plug on DP
3) DP connector is activated
4) User space attaches a CRTC to the activated connector
5) CRTC is enabled
6) CRTC_VBLANK_ON is called
7) dpu_crtc_vblank is called

So can you help trace out why dpu_crtc_vblank is called when the
connector is not activated yet (no hotplug)?
Jeykumar Sankaran July 2, 2019, 7:15 p.m. UTC | #6
On 2019-07-02 11:21, Jeykumar Sankaran wrote:
> On 2019-07-01 03:29, dhar@codeaurora.org wrote:
> This is the sequence in which the events are expected to happen:
> 
> 1) DP connector is instantiated with an inactive state
> 2) Hot plug on DP
> 3) DP connector is activated
> 4) User space attaches a CRTC to the activated connector
> 5) CRTC is enabled
> 6) CRTC_VBLANK_ON is called
> 7) dpu_crtc_vblank is called
> 
> So can you help trace out why dpu_crtc_vblank is called when the
> connector is not activated yet (no hotplug)?

I overlooked the loop, which iterates through *all* the encoders
irrespective of their activation status.

Reviewed-by: Jeykumar Sankaran <jsanka@codeaurora.org>
Sean Paul July 22, 2019, 6:20 p.m. UTC | #7
On Mon, Jun 24, 2019 at 11:57:12AM +0530, Shubhashree Dhar wrote:
> The dpu encoder spinlock should be initialized during dpu encoder
> init instead of dpu encoder setup, which is part of the commit path.
> Otherwise there is a chance that vblank control uses the spinlock
> before it has been initialized.
> 
> Change-Id: I5a18b95fa47397c834a266b22abf33a517b03a4e
> Signed-off-by: Shubhashree Dhar <dhar@codeaurora.org>

Thanks for your patch.

I've resolved the conflict and tweaked the commit message a bit to reflect
current reality.

Applied to drm-misc-fixes for 5.3

Sean


Patch

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
index 5f085b5..22938c7 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
@@ -2195,8 +2195,6 @@ int dpu_encoder_setup(struct drm_device *dev, struct drm_encoder *enc,
 	if (ret)
 		goto fail;
 
-	spin_lock_init(&dpu_enc->enc_spinlock);
-
 	atomic_set(&dpu_enc->frame_done_timeout, 0);
 	timer_setup(&dpu_enc->frame_done_timer,
 			dpu_encoder_frame_done_timeout, 0);
@@ -2250,6 +2248,7 @@ struct drm_encoder *dpu_encoder_init(struct drm_device *dev,
 
 	drm_encoder_helper_add(&dpu_enc->base, &dpu_encoder_helper_funcs);
 
+	spin_lock_init(&dpu_enc->enc_spinlock);
 	dpu_enc->enabled = false;
 
 	return &dpu_enc->base;