
[v2,1/2] bus: mhi: host: Add spinlock to protect WP access when queueing TREs

Message ID: 1694594861-12691-2-git-send-email-quic_qianyu@quicinc.com (mailing list archive)
State: Not Applicable
Series: Add lock to avoid race when ringing channel DB

Commit Message

Qiang Yu Sept. 13, 2023, 8:47 a.m. UTC
From: Bhaumik Bhatt <bbhatt@codeaurora.org>

Protect WP accesses such that multiple threads queueing buffers for
incoming data do not race and access the same WP twice. Ensure read and
write locks for the channel are not taken in succession by dropping the
read lock from parse_xfer_event() such that a callback given to client
can potentially queue buffers and acquire the write lock in that process.
Any queueing of buffers should be done without channel read lock acquired
as it can result in multiple locks and a soft lockup.

Signed-off-by: Bhaumik Bhatt <bbhatt@codeaurora.org>
Signed-off-by: Qiang Yu <quic_qianyu@quicinc.com>
---
 drivers/bus/mhi/host/main.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

Comments

Jeffrey Hugo Sept. 22, 2023, 2:44 p.m. UTC | #1
On 9/13/2023 2:47 AM, Qiang Yu wrote:
> From: Bhaumik Bhatt <bbhatt@codeaurora.org>
> 
> Protect WP accesses such that multiple threads queueing buffers for
> incoming data do not race and access the same WP twice. Ensure read and
> write locks for the channel are not taken in succession by dropping the
> read lock from parse_xfer_event() such that a callback given to client
> can potentially queue buffers and acquire the write lock in that process.
> Any queueing of buffers should be done without channel read lock acquired
> as it can result in multiple locks and a soft lockup.
> 
> Signed-off-by: Bhaumik Bhatt <bbhatt@codeaurora.org>
> Signed-off-by: Qiang Yu <quic_qianyu@quicinc.com>
> ---
>   drivers/bus/mhi/host/main.c | 11 ++++++++++-
>   1 file changed, 10 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
> index dcf627b..13c4b89 100644
> --- a/drivers/bus/mhi/host/main.c
> +++ b/drivers/bus/mhi/host/main.c
> @@ -642,6 +642,7 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
>   			mhi_del_ring_element(mhi_cntrl, tre_ring);
>   			local_rp = tre_ring->rp;
>   
> +			read_unlock_bh(&mhi_chan->lock);

This doesn't work due to the write_lock_irqsave(&mhi_chan->lock, flags); 
on line 591.

I really don't like that we are unlocking the mhi_chan while still using 
it.  It opens up a window where the mhi_chan state can be updated 
between here and the client using the callback to queue a buf.

Perhaps we need a new lock that just protects the wp, and needs to be 
only grabbed while mhi_chan->lock is held?
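
Something along these lines, purely as an illustration (the wp_lock field and
its placement are hypothetical, not part of the posted patch):

	/* hypothetical addition to struct mhi_chan */
	spinlock_t wp_lock;	/* serializes tre_ring/buf_ring WP updates */

	/* in mhi_gen_tre(), taken only while mhi_chan->lock is already held */
	spin_lock_bh(&mhi_chan->wp_lock);
	mhi_tre = tre_ring->wp;
	/* ... fill in the TRE fields ... */
	mhi_add_ring_element(mhi_cntrl, tre_ring);
	mhi_add_ring_element(mhi_cntrl, buf_ring);
	spin_unlock_bh(&mhi_chan->wp_lock);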

>   			/* notify client */
>   			mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
>   
> @@ -667,6 +668,7 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
>   					kfree(buf_info->cb_buf);
>   				}
>   			}
> +			read_lock_bh(&mhi_chan->lock);
>   		}
>   		break;
>   	} /* CC_EOT */
> @@ -1204,6 +1206,9 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
>   	int eot, eob, chain, bei;
>   	int ret;
>   
> +	/* Protect accesses for reading and incrementing WP */
> +	write_lock_bh(&mhi_chan->lock);
> +
>   	buf_ring = &mhi_chan->buf_ring;
>   	tre_ring = &mhi_chan->tre_ring;
>   
> @@ -1221,8 +1226,10 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
>   
>   	if (!info->pre_mapped) {
>   		ret = mhi_cntrl->map_single(mhi_cntrl, buf_info);
> -		if (ret)
> +		if (ret) {
> +			write_unlock_bh(&mhi_chan->lock);
>   			return ret;
> +		}
>   	}
>   
>   	eob = !!(flags & MHI_EOB);
> @@ -1239,6 +1246,8 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
>   	mhi_add_ring_element(mhi_cntrl, tre_ring);
>   	mhi_add_ring_element(mhi_cntrl, buf_ring);
>   
> +	write_unlock_bh(&mhi_chan->lock);
> +
>   	return 0;
>   }
>
Qiang Yu Sept. 25, 2023, 3:10 a.m. UTC | #2
On 9/22/2023 10:44 PM, Jeffrey Hugo wrote:
> On 9/13/2023 2:47 AM, Qiang Yu wrote:
>> From: Bhaumik Bhatt <bbhatt@codeaurora.org>
>>
>> Protect WP accesses such that multiple threads queueing buffers for
>> incoming data do not race and access the same WP twice. Ensure read and
>> write locks for the channel are not taken in succession by dropping the
>> read lock from parse_xfer_event() such that a callback given to client
>> can potentially queue buffers and acquire the write lock in that 
>> process.
>> Any queueing of buffers should be done without channel read lock 
>> acquired
>> as it can result in multiple locks and a soft lockup.
>>
>> Signed-off-by: Bhaumik Bhatt <bbhatt@codeaurora.org>
>> Signed-off-by: Qiang Yu <quic_qianyu@quicinc.com>
>> ---
>>   drivers/bus/mhi/host/main.c | 11 ++++++++++-
>>   1 file changed, 10 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
>> index dcf627b..13c4b89 100644
>> --- a/drivers/bus/mhi/host/main.c
>> +++ b/drivers/bus/mhi/host/main.c
>> @@ -642,6 +642,7 @@ static int parse_xfer_event(struct mhi_controller 
>> *mhi_cntrl,
>>               mhi_del_ring_element(mhi_cntrl, tre_ring);
>>               local_rp = tre_ring->rp;
>>   +            read_unlock_bh(&mhi_chan->lock);
>
> This doesn't work due to the write_lock_irqsave(&mhi_chan->lock, 
> flags); on line 591.
write_lock_irqsave(&mhi_chan->lock, flags) is only used in the ev_code >=
MHI_EV_CC_OOB case. We only read_lock/read_unlock the mhi_chan while
ev_code < MHI_EV_CC_OOB.
>
> I really don't like that we are unlocking the mhi_chan while still 
> using it.  It opens up a window where the mhi_chan state can be 
> updated between here and the client using the callback to queue a buf.
>
> Perhaps we need a new lock that just protects the wp, and needs to be 
> only grabbed while mhi_chan->lock is held?

Since we already use the mhi_chan lock to protect the channel, and the
concern here is that a client may queue a buffer to a disabled or stopped
channel, can we check the channel state after taking mhi_chan->lock, as is
done at line 595?

We can add the check after taking the write lock in mhi_gen_tre() and after
re-taking the read lock here.
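
A minimal sketch of that check, assuming MHI_CH_STATE_ENABLED is the right
state to test and -ENODEV is an acceptable return value (illustrative only):

	/* in mhi_gen_tre(), right after taking the channel lock */
	write_lock_bh(&mhi_chan->lock);
	if (mhi_chan->ch_state != MHI_CH_STATE_ENABLED) {
		write_unlock_bh(&mhi_chan->lock);
		return -ENODEV;
	}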

>
>>               /* notify client */
>>               mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
>>   @@ -667,6 +668,7 @@ static int parse_xfer_event(struct 
>> mhi_controller *mhi_cntrl,
>>                       kfree(buf_info->cb_buf);
>>                   }
>>               }
>> +            read_lock_bh(&mhi_chan->lock);
>>           }
>>           break;
>>       } /* CC_EOT */
>> @@ -1204,6 +1206,9 @@ int mhi_gen_tre(struct mhi_controller 
>> *mhi_cntrl, struct mhi_chan *mhi_chan,
>>       int eot, eob, chain, bei;
>>       int ret;
>>   +    /* Protect accesses for reading and incrementing WP */
>> +    write_lock_bh(&mhi_chan->lock);
>> +
>>       buf_ring = &mhi_chan->buf_ring;
>>       tre_ring = &mhi_chan->tre_ring;
>>   @@ -1221,8 +1226,10 @@ int mhi_gen_tre(struct mhi_controller 
>> *mhi_cntrl, struct mhi_chan *mhi_chan,
>>         if (!info->pre_mapped) {
>>           ret = mhi_cntrl->map_single(mhi_cntrl, buf_info);
>> -        if (ret)
>> +        if (ret) {
>> +            write_unlock_bh(&mhi_chan->lock);
>>               return ret;
>> +        }
>>       }
>>         eob = !!(flags & MHI_EOB);
>> @@ -1239,6 +1246,8 @@ int mhi_gen_tre(struct mhi_controller 
>> *mhi_cntrl, struct mhi_chan *mhi_chan,
>>       mhi_add_ring_element(mhi_cntrl, tre_ring);
>>       mhi_add_ring_element(mhi_cntrl, buf_ring);
>>   +    write_unlock_bh(&mhi_chan->lock);
>> +
>>       return 0;
>>   }
>
Jeffrey Hugo Sept. 29, 2023, 3:22 p.m. UTC | #3
On 9/24/2023 9:10 PM, Qiang Yu wrote:
> 
> On 9/22/2023 10:44 PM, Jeffrey Hugo wrote:
>> On 9/13/2023 2:47 AM, Qiang Yu wrote:
>>> From: Bhaumik Bhatt <bbhatt@codeaurora.org>
>>>
>>> Protect WP accesses such that multiple threads queueing buffers for
>>> incoming data do not race and access the same WP twice. Ensure read and
>>> write locks for the channel are not taken in succession by dropping the
>>> read lock from parse_xfer_event() such that a callback given to client
>>> can potentially queue buffers and acquire the write lock in that 
>>> process.
>>> Any queueing of buffers should be done without channel read lock 
>>> acquired
>>> as it can result in multiple locks and a soft lockup.
>>>
>>> Signed-off-by: Bhaumik Bhatt <bbhatt@codeaurora.org>
>>> Signed-off-by: Qiang Yu <quic_qianyu@quicinc.com>
>>> ---
>>>   drivers/bus/mhi/host/main.c | 11 ++++++++++-
>>>   1 file changed, 10 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
>>> index dcf627b..13c4b89 100644
>>> --- a/drivers/bus/mhi/host/main.c
>>> +++ b/drivers/bus/mhi/host/main.c
>>> @@ -642,6 +642,7 @@ static int parse_xfer_event(struct mhi_controller 
>>> *mhi_cntrl,
>>>               mhi_del_ring_element(mhi_cntrl, tre_ring);
>>>               local_rp = tre_ring->rp;
>>>   +            read_unlock_bh(&mhi_chan->lock);
>>
>> This doesn't work due to the write_lock_irqsave(&mhi_chan->lock, 
>> flags); on line 591.
> Write_lock_irqsave(&mhi_chan->lock, flags) is used in case of ev_code >= 
> MHI_EV_CC_OOB. We only read_lock/read_unlock the mhi_chan while ev_code 
> < MHI_EV_CC_OOB.

Sorry.  OOB != EOB

>>
>> I really don't like that we are unlocking the mhi_chan while still 
>> using it.  It opens up a window where the mhi_chan state can be 
>> updated between here and the client using the callback to queue a buf.
>>
>> Perhaps we need a new lock that just protects the wp, and needs to be 
>> only grabbed while mhi_chan->lock is held?
> 
> Since we have employed mhi_chan lock to protect the channel and what we 
> are concerned here is that client may queue buf to a disabled or stopped 
> channel, can we check channel state after getting mhi_chan->lock like 
> line 595.
> 
> We can add the check after getting write lock in mhi_gen_tre() and after 
> getting read lock again here.

I'm not sure that is sufficient.  After you unlock to notify the client, 
MHI is going to manipulate the packet count and runtime_pm without the 
lock (648-652).  It seems like that adds additional races which won't be 
covered by the additional check you propose.

> 
>>
>>>               /* notify client */
>>>               mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
>>>   @@ -667,6 +668,7 @@ static int parse_xfer_event(struct 
>>> mhi_controller *mhi_cntrl,
>>>                       kfree(buf_info->cb_buf);
>>>                   }
>>>               }
>>> +            read_lock_bh(&mhi_chan->lock);
>>>           }
>>>           break;
>>>       } /* CC_EOT */
>>> @@ -1204,6 +1206,9 @@ int mhi_gen_tre(struct mhi_controller 
>>> *mhi_cntrl, struct mhi_chan *mhi_chan,
>>>       int eot, eob, chain, bei;
>>>       int ret;
>>>   +    /* Protect accesses for reading and incrementing WP */
>>> +    write_lock_bh(&mhi_chan->lock);
>>> +
>>>       buf_ring = &mhi_chan->buf_ring;
>>>       tre_ring = &mhi_chan->tre_ring;
>>>   @@ -1221,8 +1226,10 @@ int mhi_gen_tre(struct mhi_controller 
>>> *mhi_cntrl, struct mhi_chan *mhi_chan,
>>>         if (!info->pre_mapped) {
>>>           ret = mhi_cntrl->map_single(mhi_cntrl, buf_info);
>>> -        if (ret)
>>> +        if (ret) {
>>> +            write_unlock_bh(&mhi_chan->lock);
>>>               return ret;
>>> +        }
>>>       }
>>>         eob = !!(flags & MHI_EOB);
>>> @@ -1239,6 +1246,8 @@ int mhi_gen_tre(struct mhi_controller 
>>> *mhi_cntrl, struct mhi_chan *mhi_chan,
>>>       mhi_add_ring_element(mhi_cntrl, tre_ring);
>>>       mhi_add_ring_element(mhi_cntrl, buf_ring);
>>>   +    write_unlock_bh(&mhi_chan->lock);
>>> +
>>>       return 0;
>>>   }
>>
Qiang Yu Oct. 16, 2023, 8:46 a.m. UTC | #4
On 9/29/2023 11:22 PM, Jeffrey Hugo wrote:
> On 9/24/2023 9:10 PM, Qiang Yu wrote:
>>
>> On 9/22/2023 10:44 PM, Jeffrey Hugo wrote:
>>> On 9/13/2023 2:47 AM, Qiang Yu wrote:
>>>> From: Bhaumik Bhatt <bbhatt@codeaurora.org>
>>>>
>>>> Protect WP accesses such that multiple threads queueing buffers for
>>>> incoming data do not race and access the same WP twice. Ensure read 
>>>> and
>>>> write locks for the channel are not taken in succession by dropping 
>>>> the
>>>> read lock from parse_xfer_event() such that a callback given to client
>>>> can potentially queue buffers and acquire the write lock in that 
>>>> process.
>>>> Any queueing of buffers should be done without channel read lock 
>>>> acquired
>>>> as it can result in multiple locks and a soft lockup.
>>>>
>>>> Signed-off-by: Bhaumik Bhatt <bbhatt@codeaurora.org>
>>>> Signed-off-by: Qiang Yu <quic_qianyu@quicinc.com>
>>>> ---
>>>>   drivers/bus/mhi/host/main.c | 11 ++++++++++-
>>>>   1 file changed, 10 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
>>>> index dcf627b..13c4b89 100644
>>>> --- a/drivers/bus/mhi/host/main.c
>>>> +++ b/drivers/bus/mhi/host/main.c
>>>> @@ -642,6 +642,7 @@ static int parse_xfer_event(struct 
>>>> mhi_controller *mhi_cntrl,
>>>>               mhi_del_ring_element(mhi_cntrl, tre_ring);
>>>>               local_rp = tre_ring->rp;
>>>>   +            read_unlock_bh(&mhi_chan->lock);
>>>
>>> This doesn't work due to the write_lock_irqsave(&mhi_chan->lock, 
>>> flags); on line 591.
>> Write_lock_irqsave(&mhi_chan->lock, flags) is used in case of ev_code 
>> >= MHI_EV_CC_OOB. We only read_lock/read_unlock the mhi_chan while 
>> ev_code < MHI_EV_CC_OOB.
>
> Sorry.  OOB != EOB
>
>>>
>>> I really don't like that we are unlocking the mhi_chan while still 
>>> using it.  It opens up a window where the mhi_chan state can be 
>>> updated between here and the client using the callback to queue a buf.
>>>
>>> Perhaps we need a new lock that just protects the wp, and needs to 
>>> be only grabbed while mhi_chan->lock is held?
>>
>> Since we have employed mhi_chan lock to protect the channel and what 
>> we are concerned here is that client may queue buf to a disabled or 
>> stopped channel, can we check channel state after getting 
>> mhi_chan->lock like line 595.
>>
>> We can add the check after getting write lock in mhi_gen_tre() and 
>> after getting read lock again here.
>
> I'm not sure that is sufficient.  After you unlock to notify the 
> client, MHI is going to manipulate the packet count and runtime_pm 
> without the lock (648-652).  It seems like that adds additional races 
> which won't be covered by the additional check you propose.

I don't think read_lock_bh(&mhi_chan->lock) can protect runtime_pm and
the packet count here. Even if we do not unlock, the MHI state and packet
count can still be changed, because we do not take pm_lock here, which is
the lock used in all the MHI state transition functions.

I also checked all the places where mhi_chan->lock is grabbed and did not
see the packet count or runtime_pm being protected by
write_lock(&mhi_chan->lock).

If you really don't like the unlock operation, we can also take a new
lock. But I think we only need to add the new lock in two places,
mhi_gen_tre() and mhi_pm_m0_transition(), while mhi_chan->lock is held.

>
>>
>>>
>>>>               /* notify client */
>>>>               mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
>>>>   @@ -667,6 +668,7 @@ static int parse_xfer_event(struct 
>>>> mhi_controller *mhi_cntrl,
>>>>                       kfree(buf_info->cb_buf);
>>>>                   }
>>>>               }
>>>> +            read_lock_bh(&mhi_chan->lock);
>>>>           }
>>>>           break;
>>>>       } /* CC_EOT */
>>>> @@ -1204,6 +1206,9 @@ int mhi_gen_tre(struct mhi_controller 
>>>> *mhi_cntrl, struct mhi_chan *mhi_chan,
>>>>       int eot, eob, chain, bei;
>>>>       int ret;
>>>>   +    /* Protect accesses for reading and incrementing WP */
>>>> +    write_lock_bh(&mhi_chan->lock);
>>>> +
>>>>       buf_ring = &mhi_chan->buf_ring;
>>>>       tre_ring = &mhi_chan->tre_ring;
>>>>   @@ -1221,8 +1226,10 @@ int mhi_gen_tre(struct mhi_controller 
>>>> *mhi_cntrl, struct mhi_chan *mhi_chan,
>>>>         if (!info->pre_mapped) {
>>>>           ret = mhi_cntrl->map_single(mhi_cntrl, buf_info);
>>>> -        if (ret)
>>>> +        if (ret) {
>>>> +            write_unlock_bh(&mhi_chan->lock);
>>>>               return ret;
>>>> +        }
>>>>       }
>>>>         eob = !!(flags & MHI_EOB);
>>>> @@ -1239,6 +1246,8 @@ int mhi_gen_tre(struct mhi_controller 
>>>> *mhi_cntrl, struct mhi_chan *mhi_chan,
>>>>       mhi_add_ring_element(mhi_cntrl, tre_ring);
>>>>       mhi_add_ring_element(mhi_cntrl, buf_ring);
>>>>   +    write_unlock_bh(&mhi_chan->lock);
>>>> +
>>>>       return 0;
>>>>   }
>>>
>
Jeffrey Hugo Oct. 20, 2023, 3:07 p.m. UTC | #5
On 10/16/2023 2:46 AM, Qiang Yu wrote:
> 
> On 9/29/2023 11:22 PM, Jeffrey Hugo wrote:
>> On 9/24/2023 9:10 PM, Qiang Yu wrote:
>>>
>>> On 9/22/2023 10:44 PM, Jeffrey Hugo wrote:
>>>> On 9/13/2023 2:47 AM, Qiang Yu wrote:
>>>>> From: Bhaumik Bhatt <bbhatt@codeaurora.org>
>>>>>
>>>>> Protect WP accesses such that multiple threads queueing buffers for
>>>>> incoming data do not race and access the same WP twice. Ensure read 
>>>>> and
>>>>> write locks for the channel are not taken in succession by dropping 
>>>>> the
>>>>> read lock from parse_xfer_event() such that a callback given to client
>>>>> can potentially queue buffers and acquire the write lock in that 
>>>>> process.
>>>>> Any queueing of buffers should be done without channel read lock 
>>>>> acquired
>>>>> as it can result in multiple locks and a soft lockup.
>>>>>
>>>>> Signed-off-by: Bhaumik Bhatt <bbhatt@codeaurora.org>
>>>>> Signed-off-by: Qiang Yu <quic_qianyu@quicinc.com>
>>>>> ---
>>>>>   drivers/bus/mhi/host/main.c | 11 ++++++++++-
>>>>>   1 file changed, 10 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
>>>>> index dcf627b..13c4b89 100644
>>>>> --- a/drivers/bus/mhi/host/main.c
>>>>> +++ b/drivers/bus/mhi/host/main.c
>>>>> @@ -642,6 +642,7 @@ static int parse_xfer_event(struct 
>>>>> mhi_controller *mhi_cntrl,
>>>>>               mhi_del_ring_element(mhi_cntrl, tre_ring);
>>>>>               local_rp = tre_ring->rp;
>>>>>   +            read_unlock_bh(&mhi_chan->lock);
>>>>
>>>> This doesn't work due to the write_lock_irqsave(&mhi_chan->lock, 
>>>> flags); on line 591.
>>> Write_lock_irqsave(&mhi_chan->lock, flags) is used in case of ev_code 
>>> >= MHI_EV_CC_OOB. We only read_lock/read_unlock the mhi_chan while 
>>> ev_code < MHI_EV_CC_OOB.
>>
>> Sorry.  OOB != EOB
>>
>>>>
>>>> I really don't like that we are unlocking the mhi_chan while still 
>>>> using it.  It opens up a window where the mhi_chan state can be 
>>>> updated between here and the client using the callback to queue a buf.
>>>>
>>>> Perhaps we need a new lock that just protects the wp, and needs to 
>>>> be only grabbed while mhi_chan->lock is held?
>>>
>>> Since we have employed mhi_chan lock to protect the channel and what 
>>> we are concerned here is that client may queue buf to a disabled or 
>>> stopped channel, can we check channel state after getting 
>>> mhi_chan->lock like line 595.
>>>
>>> We can add the check after getting write lock in mhi_gen_tre() and 
>>> after getting read lock again here.
>>
>> I'm not sure that is sufficient.  After you unlock to notify the 
>> client, MHI is going to manipulate the packet count and runtime_pm 
>> without the lock (648-652).  It seems like that adds additional races 
>> which won't be covered by the additional check you propose.
> 
> I don't think read_lock_bh(&mhi_chan->lock) can protect runtime_pm and 
> the packet count here. Even if we do not unlock, mhi state and packet 
> count can still be changed because we did not get pm_lock here, which is 
> used in all mhi state transition function.
> 
> I also checked all places that mhi_chan->lock is grabbed, did not see 
> packet count and runtime_pm be protected by write_lock(&mhi_chan->lock).
> 
> 
> If you really don't like the unlock operation, we can also take a new 
> lock. But I think we only need to add the new lock in two places, 
> mhi_gen_tre and mhi_pm_m0_transition while mhi_chan->lock is held.

Mani, if I recall correctly, you were the architect of the locking.  Do 
you have an opinion?
Manivannan Sadhasivam Nov. 6, 2023, 4:41 a.m. UTC | #6
On Wed, Sep 13, 2023 at 04:47:40PM +0800, Qiang Yu wrote:
> From: Bhaumik Bhatt <bbhatt@codeaurora.org>
> 
> Protect WP accesses such that multiple threads queueing buffers for
> incoming data do not race and access the same WP twice. Ensure read and
> write locks for the channel are not taken in succession by dropping the
> read lock from parse_xfer_event() such that a callback given to client
> can potentially queue buffers and acquire the write lock in that process.
> Any queueing of buffers should be done without channel read lock acquired
> as it can result in multiple locks and a soft lockup.
> 

This change is doing two things:

1. Unlocking xfer_cb to prevent potential lockup
2. Protecting mhi_gen_tre() against concurrent access

So you should split this into two patches and also add Fixes tag if appropriate.

- Mani

> Signed-off-by: Bhaumik Bhatt <bbhatt@codeaurora.org>
> Signed-off-by: Qiang Yu <quic_qianyu@quicinc.com>
> ---
>  drivers/bus/mhi/host/main.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
> index dcf627b..13c4b89 100644
> --- a/drivers/bus/mhi/host/main.c
> +++ b/drivers/bus/mhi/host/main.c
> @@ -642,6 +642,7 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
>  			mhi_del_ring_element(mhi_cntrl, tre_ring);
>  			local_rp = tre_ring->rp;
>  
> +			read_unlock_bh(&mhi_chan->lock);
>  			/* notify client */
>  			mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
>  
> @@ -667,6 +668,7 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
>  					kfree(buf_info->cb_buf);
>  				}
>  			}
> +			read_lock_bh(&mhi_chan->lock);
>  		}
>  		break;
>  	} /* CC_EOT */
> @@ -1204,6 +1206,9 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
>  	int eot, eob, chain, bei;
>  	int ret;
>  
> +	/* Protect accesses for reading and incrementing WP */
> +	write_lock_bh(&mhi_chan->lock);
> +
>  	buf_ring = &mhi_chan->buf_ring;
>  	tre_ring = &mhi_chan->tre_ring;
>  
> @@ -1221,8 +1226,10 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
>  
>  	if (!info->pre_mapped) {
>  		ret = mhi_cntrl->map_single(mhi_cntrl, buf_info);
> -		if (ret)
> +		if (ret) {
> +			write_unlock_bh(&mhi_chan->lock);
>  			return ret;
> +		}
>  	}
>  
>  	eob = !!(flags & MHI_EOB);
> @@ -1239,6 +1246,8 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
>  	mhi_add_ring_element(mhi_cntrl, tre_ring);
>  	mhi_add_ring_element(mhi_cntrl, buf_ring);
>  
> +	write_unlock_bh(&mhi_chan->lock);
> +
>  	return 0;
>  }
>  
> -- 
> 2.7.4
> 
>
Manivannan Sadhasivam Nov. 6, 2023, 4:51 a.m. UTC | #7
On Fri, Oct 20, 2023 at 09:07:35AM -0600, Jeffrey Hugo wrote:
> On 10/16/2023 2:46 AM, Qiang Yu wrote:
> > 
> > On 9/29/2023 11:22 PM, Jeffrey Hugo wrote:
> > > On 9/24/2023 9:10 PM, Qiang Yu wrote:
> > > > 
> > > > On 9/22/2023 10:44 PM, Jeffrey Hugo wrote:
> > > > > On 9/13/2023 2:47 AM, Qiang Yu wrote:
> > > > > > From: Bhaumik Bhatt <bbhatt@codeaurora.org>
> > > > > > 
> > > > > > Protect WP accesses such that multiple threads queueing buffers for
> > > > > > incoming data do not race and access the same WP twice.
> > > > > > Ensure read and
> > > > > > write locks for the channel are not taken in succession
> > > > > > by dropping the
> > > > > > read lock from parse_xfer_event() such that a callback given to client
> > > > > > can potentially queue buffers and acquire the write lock
> > > > > > in that process.
> > > > > > Any queueing of buffers should be done without channel
> > > > > > read lock acquired
> > > > > > as it can result in multiple locks and a soft lockup.
> > > > > > 
> > > > > > Signed-off-by: Bhaumik Bhatt <bbhatt@codeaurora.org>
> > > > > > Signed-off-by: Qiang Yu <quic_qianyu@quicinc.com>
> > > > > > ---
> > > > > >   drivers/bus/mhi/host/main.c | 11 ++++++++++-
> > > > > >   1 file changed, 10 insertions(+), 1 deletion(-)
> > > > > > 
> > > > > > diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
> > > > > > index dcf627b..13c4b89 100644
> > > > > > --- a/drivers/bus/mhi/host/main.c
> > > > > > +++ b/drivers/bus/mhi/host/main.c
> > > > > > @@ -642,6 +642,7 @@ static int parse_xfer_event(struct
> > > > > > mhi_controller *mhi_cntrl,
> > > > > >               mhi_del_ring_element(mhi_cntrl, tre_ring);
> > > > > >               local_rp = tre_ring->rp;
> > > > > >   +            read_unlock_bh(&mhi_chan->lock);
> > > > > 
> > > > > This doesn't work due to the
> > > > > write_lock_irqsave(&mhi_chan->lock, flags); on line 591.
> > > > Write_lock_irqsave(&mhi_chan->lock, flags) is used in case of
> > > > ev_code >= MHI_EV_CC_OOB. We only read_lock/read_unlock the
> > > > mhi_chan while ev_code < MHI_EV_CC_OOB.
> > > 
> > > Sorry.  OOB != EOB
> > > 
> > > > > 
> > > > > I really don't like that we are unlocking the mhi_chan while
> > > > > still using it.  It opens up a window where the mhi_chan
> > > > > state can be updated between here and the client using the
> > > > > callback to queue a buf.
> > > > > 
> > > > > Perhaps we need a new lock that just protects the wp, and
> > > > > needs to be only grabbed while mhi_chan->lock is held?
> > > > 
> > > > Since we have employed mhi_chan lock to protect the channel and
> > > > what we are concerned here is that client may queue buf to a
> > > > disabled or stopped channel, can we check channel state after
> > > > getting mhi_chan->lock like line 595.
> > > > 
> > > > We can add the check after getting write lock in mhi_gen_tre()
> > > > and after getting read lock again here.
> > > 
> > > I'm not sure that is sufficient.  After you unlock to notify the
> > > client, MHI is going to manipulate the packet count and runtime_pm
> > > without the lock (648-652).  It seems like that adds additional
> > > races which won't be covered by the additional check you propose.
> > 
> > I don't think read_lock_bh(&mhi_chan->lock) can protect runtime_pm and
> > the packet count here. Even if we do not unlock, mhi state and packet
> > count can still be changed because we did not get pm_lock here, which is
> > used in all mhi state transition function.
> > 
> > I also checked all places that mhi_chan->lock is grabbed, did not see
> > packet count and runtime_pm be protected by write_lock(&mhi_chan->lock).
> > 
> > 
> > If you really don't like the unlock operation, we can also take a new
> > lock. But I think we only need to add the new lock in two places,
> > mhi_gen_tre and mhi_pm_m0_transition while mhi_chan->lock is held.
> 
> Mani, if I recall correctly, you were the architect of the locking.  Do you
> have an opinion?
> 

TBH, the locking situation is a mess with MHI. Initially, we happened to have
separate locks for protecting various operations, but then during review, it was
advised to reuse existing locks and avoid having too many separate locks.

This worked well, but then we kind of abused the locks over time. I asked Hemant
and Bhaumik to audit the locks and fix them, but both of them left Qcom.

So in this situation, the intent of pm_lock was to protect against concurrent
updates of pm_state. It also happens to protect _other_things_ such as
runtime_put, pending_pkts, etc., but not properly, because most of the time the
read lock is taken in places where pm_state is only being read. So there is
still a possibility of a race while accessing these _other_things_.

For this patch, I'm happy with dropping chan->lock before calling xfer_cb() and
I want someone (maybe Qiang) to do the audit of locking in general and come up
with fixes where needed.

- Mani
Qiang Yu Nov. 7, 2023, 7:19 a.m. UTC | #8
On 11/6/2023 12:41 PM, Manivannan Sadhasivam wrote:
> On Wed, Sep 13, 2023 at 04:47:40PM +0800, Qiang Yu wrote:
>> From: Bhaumik Bhatt <bbhatt@codeaurora.org>
>>
>> Protect WP accesses such that multiple threads queueing buffers for
>> incoming data do not race and access the same WP twice. Ensure read and
>> write locks for the channel are not taken in succession by dropping the
>> read lock from parse_xfer_event() such that a callback given to client
>> can potentially queue buffers and acquire the write lock in that process.
>> Any queueing of buffers should be done without channel read lock acquired
>> as it can result in multiple locks and a soft lockup.
>>
> This change is doing two things:
>
> 1. Unlocking xfer_cb to prevent potential lockup
> 2. Protecting mhi_gen_tre() against concurrent access
>
> So you should split this into two patches and also add Fixes tag if appropriate.
>
> - Mani
Hi Mani, thanks for the review. I have split this into two patches and added the Fixes tag.
>
>> Signed-off-by: Bhaumik Bhatt <bbhatt@codeaurora.org>
>> Signed-off-by: Qiang Yu <quic_qianyu@quicinc.com>
>> ---
>>   drivers/bus/mhi/host/main.c | 11 ++++++++++-
>>   1 file changed, 10 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
>> index dcf627b..13c4b89 100644
>> --- a/drivers/bus/mhi/host/main.c
>> +++ b/drivers/bus/mhi/host/main.c
>> @@ -642,6 +642,7 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
>>   			mhi_del_ring_element(mhi_cntrl, tre_ring);
>>   			local_rp = tre_ring->rp;
>>   
>> +			read_unlock_bh(&mhi_chan->lock);
>>   			/* notify client */
>>   			mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
>>   
>> @@ -667,6 +668,7 @@ static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
>>   					kfree(buf_info->cb_buf);
>>   				}
>>   			}
>> +			read_lock_bh(&mhi_chan->lock);
>>   		}
>>   		break;
>>   	} /* CC_EOT */
>> @@ -1204,6 +1206,9 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
>>   	int eot, eob, chain, bei;
>>   	int ret;
>>   
>> +	/* Protect accesses for reading and incrementing WP */
>> +	write_lock_bh(&mhi_chan->lock);
>> +
>>   	buf_ring = &mhi_chan->buf_ring;
>>   	tre_ring = &mhi_chan->tre_ring;
>>   
>> @@ -1221,8 +1226,10 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
>>   
>>   	if (!info->pre_mapped) {
>>   		ret = mhi_cntrl->map_single(mhi_cntrl, buf_info);
>> -		if (ret)
>> +		if (ret) {
>> +			write_unlock_bh(&mhi_chan->lock);
>>   			return ret;
>> +		}
>>   	}
>>   
>>   	eob = !!(flags & MHI_EOB);
>> @@ -1239,6 +1246,8 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
>>   	mhi_add_ring_element(mhi_cntrl, tre_ring);
>>   	mhi_add_ring_element(mhi_cntrl, buf_ring);
>>   
>> +	write_unlock_bh(&mhi_chan->lock);
>> +
>>   	return 0;
>>   }
>>   
>> -- 
>> 2.7.4
>>
>>
Qiang Yu Nov. 7, 2023, 7:59 a.m. UTC | #9
On 11/6/2023 12:51 PM, Manivannan Sadhasivam wrote:
> On Fri, Oct 20, 2023 at 09:07:35AM -0600, Jeffrey Hugo wrote:
>> On 10/16/2023 2:46 AM, Qiang Yu wrote:
>>> On 9/29/2023 11:22 PM, Jeffrey Hugo wrote:
>>>> On 9/24/2023 9:10 PM, Qiang Yu wrote:
>>>>> On 9/22/2023 10:44 PM, Jeffrey Hugo wrote:
>>>>>> On 9/13/2023 2:47 AM, Qiang Yu wrote:
>>>>>>> From: Bhaumik Bhatt <bbhatt@codeaurora.org>
>>>>>>>
>>>>>>> Protect WP accesses such that multiple threads queueing buffers for
>>>>>>> incoming data do not race and access the same WP twice.
>>>>>>> Ensure read and
>>>>>>> write locks for the channel are not taken in succession
>>>>>>> by dropping the
>>>>>>> read lock from parse_xfer_event() such that a callback given to client
>>>>>>> can potentially queue buffers and acquire the write lock
>>>>>>> in that process.
>>>>>>> Any queueing of buffers should be done without channel
>>>>>>> read lock acquired
>>>>>>> as it can result in multiple locks and a soft lockup.
>>>>>>>
>>>>>>> Signed-off-by: Bhaumik Bhatt <bbhatt@codeaurora.org>
>>>>>>> Signed-off-by: Qiang Yu <quic_qianyu@quicinc.com>
>>>>>>> ---
>>>>>>>    drivers/bus/mhi/host/main.c | 11 ++++++++++-
>>>>>>>    1 file changed, 10 insertions(+), 1 deletion(-)
>>>>>>>
>>>>>>> diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
>>>>>>> index dcf627b..13c4b89 100644
>>>>>>> --- a/drivers/bus/mhi/host/main.c
>>>>>>> +++ b/drivers/bus/mhi/host/main.c
>>>>>>> @@ -642,6 +642,7 @@ static int parse_xfer_event(struct
>>>>>>> mhi_controller *mhi_cntrl,
>>>>>>>                mhi_del_ring_element(mhi_cntrl, tre_ring);
>>>>>>>                local_rp = tre_ring->rp;
>>>>>>>    +            read_unlock_bh(&mhi_chan->lock);
>>>>>> This doesn't work due to the
>>>>>> write_lock_irqsave(&mhi_chan->lock, flags); on line 591.
>>>>> Write_lock_irqsave(&mhi_chan->lock, flags) is used in case of
>>>>> ev_code >= MHI_EV_CC_OOB. We only read_lock/read_unlock the
>>>>> mhi_chan while ev_code < MHI_EV_CC_OOB.
>>>> Sorry.  OOB != EOB
>>>>
>>>>>> I really don't like that we are unlocking the mhi_chan while
>>>>>> still using it.  It opens up a window where the mhi_chan
>>>>>> state can be updated between here and the client using the
>>>>>> callback to queue a buf.
>>>>>>
>>>>>> Perhaps we need a new lock that just protects the wp, and
>>>>>> needs to be only grabbed while mhi_chan->lock is held?
>>>>> Since we have employed mhi_chan lock to protect the channel and
>>>>> what we are concerned here is that client may queue buf to a
>>>>> disabled or stopped channel, can we check channel state after
>>>>> getting mhi_chan->lock like line 595.
>>>>>
>>>>> We can add the check after getting write lock in mhi_gen_tre()
>>>>> and after getting read lock again here.
>>>> I'm not sure that is sufficient.  After you unlock to notify the
>>>> client, MHI is going to manipulate the packet count and runtime_pm
>>>> without the lock (648-652).  It seems like that adds additional
>>>> races which won't be covered by the additional check you propose.
>>> I don't think read_lock_bh(&mhi_chan->lock) can protect runtime_pm and
>>> the packet count here. Even if we do not unlock, mhi state and packet
>>> count can still be changed because we did not get pm_lock here, which is
>>> used in all mhi state transition function.
>>>
>>> I also checked all places that mhi_chan->lock is grabbed, did not see
>>> packet count and runtime_pm be protected by write_lock(&mhi_chan->lock).
>>>
>>>
>>> If you really don't like the unlock operation, we can also take a new
>>> lock. But I think we only need to add the new lock in two places,
>>> mhi_gen_tre and mhi_pm_m0_transition while mhi_chan->lock is held.
>> Mani, if I recall correctly, you were the architect of the locking.  Do you
>> have an opinion?
>>
> TBH, the locking situation is a mess with MHI. Initially, we happen to have
> separate locks for protecting various operations, but then during review, it was
> advised to reuse existing locks and avoid having too many separate locks.
>
> This worked well but then we kind of abused the locks over time. I asked Hemant
> and Bhaumik to audit the locks and fix them, but both of them left Qcom.
>
> So in this situation, the intent of the pm_lock was to protect concurrent access
> against updating the pm_state. And it also happen to protect _other_things_ such
> as runtime_put, pending_pkts etc... But not properly, because most of the time
> read lock is taken in places where pm_state is being read. So there is still a
> possibility of race while accessing these _other_things_.
>
> For this patch, I'm happy with dropping chan->lock before calling xfer_cb() and
> I want someone (maybe Qiang) to do the audit of locking in general and come up
> with fixes where needed.
>
> - Mani

As discussed with Jeff before, we also need to check the channel state
before queueing buffers and after re-taking the lock in parse_xfer_event(),
so I will also add the channel state check in the next version of the patch.
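
Roughly, the re-check after re-acquiring the read lock could look like this
(illustrative only; it assumes the existing end_process_tx_event unlock path
is the right place to bail to, and the exact error handling is still open):

	/* in parse_xfer_event(), after the client callback has returned */
	read_lock_bh(&mhi_chan->lock);
	if (mhi_chan->ch_state != MHI_CH_STATE_ENABLED)
		goto end_process_tx_event;	/* channel went down while unlocked */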

Probably I can do the audit of the locking. It's a good chance for me to
fully understand the various locks in the MHI host driver.

Patch

diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
index dcf627b..13c4b89 100644
--- a/drivers/bus/mhi/host/main.c
+++ b/drivers/bus/mhi/host/main.c
@@ -642,6 +642,7 @@  static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
 			mhi_del_ring_element(mhi_cntrl, tre_ring);
 			local_rp = tre_ring->rp;
 
+			read_unlock_bh(&mhi_chan->lock);
 			/* notify client */
 			mhi_chan->xfer_cb(mhi_chan->mhi_dev, &result);
 
@@ -667,6 +668,7 @@  static int parse_xfer_event(struct mhi_controller *mhi_cntrl,
 					kfree(buf_info->cb_buf);
 				}
 			}
+			read_lock_bh(&mhi_chan->lock);
 		}
 		break;
 	} /* CC_EOT */
@@ -1204,6 +1206,9 @@  int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
 	int eot, eob, chain, bei;
 	int ret;
 
+	/* Protect accesses for reading and incrementing WP */
+	write_lock_bh(&mhi_chan->lock);
+
 	buf_ring = &mhi_chan->buf_ring;
 	tre_ring = &mhi_chan->tre_ring;
 
@@ -1221,8 +1226,10 @@  int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
 
 	if (!info->pre_mapped) {
 		ret = mhi_cntrl->map_single(mhi_cntrl, buf_info);
-		if (ret)
+		if (ret) {
+			write_unlock_bh(&mhi_chan->lock);
 			return ret;
+		}
 	}
 
 	eob = !!(flags & MHI_EOB);
@@ -1239,6 +1246,8 @@  int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
 	mhi_add_ring_element(mhi_cntrl, tre_ring);
 	mhi_add_ring_element(mhi_cntrl, buf_ring);
 
+	write_unlock_bh(&mhi_chan->lock);
+
 	return 0;
 }