diff mbox series

bonding: 3ad: Fix the conflict between bond_update_slave_arr and the state machine

Message ID 1618994301-1186-1-git-send-email-jinyiting@huawei.com (mailing list archive)
State Changes Requested
Delegated to: Netdev Maintainers
Headers show
Series bonding: 3ad: Fix the conflict between bond_update_slave_arr and the state machine | expand

Checks

Context Check Description
netdev/cover_letter success Link
netdev/fixes_present success Link
netdev/patch_count success Link
netdev/tree_selection success Guessed tree name to be net-next
netdev/subject_prefix warning Target tree name not specified in the subject
netdev/cc_maintainers success CCed 6 of 6 maintainers
netdev/source_inline success Was 0 now: 0
netdev/verify_signedoff success Link
netdev/module_param success Was 0 now: 0
netdev/build_32bit success Errors and warnings before: 5 this patch: 5
netdev/kdoc success Errors and warnings before: 0 this patch: 0
netdev/verify_fixes success Link
netdev/checkpatch success total: 0 errors, 0 warnings, 0 checks, 26 lines checked
netdev/build_allmodconfig_warn success Errors and warnings before: 5 this patch: 5
netdev/header_inline success Link

Commit Message

jinyiting April 21, 2021, 8:38 a.m. UTC
The bond works in mode 4 (802.3ad) and negotiates normally, but after
down/up operations on the bond there is a chance that bond->slave_arr
ends up NULL.

Test commands:
   ifconfig bond1 down
   ifconfig bond1 up

The conflict occurs in the following process:

__dev_open (CPU A)
--bond_open
  --queue_delayed_work(bond->wq,&bond->ad_work,0);
  --bond_update_slave_arr
    --bond_3ad_get_active_agg_info

ad_work(CPU B)
--bond_3ad_state_machine_handler
  --ad_agg_selection_logic

ad_work runs on CPU B. In ad_agg_selection_logic, every agg->is_active
flag is cleared before the new active aggregator is selected. If
bond_3ad_get_active_agg_info runs on CPU A inside that window, it fails
and bond->slave_arr is set to NULL, even though the best aggregator in
ad_agg_selection_logic has not changed and no slave_arr update was
actually needed.

The root cause is that ad_agg_selection_logic clears agg->is_active
under mode_lock, while bond_open -> bond_update_slave_arr inspects
agg->is_active outside that lock.

Also, bond_update_slave_arr may legitimately sleep when allocating
memory, so replace the lockdep WARN_ON with a call to might_sleep().

Signed-off-by: jinyiting <jinyiting@huawei.com>
---

Previous versions:
 * https://lore.kernel.org/netdev/612b5e32-ea11-428e-0c17-e2977185f045@huawei.com/

 drivers/net/bonding/bond_main.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

Comments

David Miller April 23, 2021, 8:07 p.m. UTC | #1
From: jinyiting <jinyiting@huawei.com>
Date: Wed, 21 Apr 2021 16:38:21 +0800

> [...]
> diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
> index 74cbbb2..83ef62d 100644
> --- a/drivers/net/bonding/bond_main.c
> +++ b/drivers/net/bonding/bond_main.c
> @@ -4406,7 +4404,9 @@ int bond_update_slave_arr(struct bonding *bond, struct slave *skipslave)
>  	if (BOND_MODE(bond) == BOND_MODE_8023AD) {
>  		struct ad_info ad_info;
>  
> +		spin_lock_bh(&bond->mode_lock);

The code paths that call this function with mode_lock held will now deadlock.
Jay Vosburgh April 26, 2021, 3:22 p.m. UTC | #2
David Miller <davem@davemloft.net> wrote:

>[...]
>> diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
>> index 74cbbb2..83ef62d 100644
>> --- a/drivers/net/bonding/bond_main.c
>> +++ b/drivers/net/bonding/bond_main.c
>> @@ -4406,7 +4404,9 @@ int bond_update_slave_arr(struct bonding *bond, struct slave *skipslave)
>>  	if (BOND_MODE(bond) == BOND_MODE_8023AD) {
>>  		struct ad_info ad_info;
>>  
>> +		spin_lock_bh(&bond->mode_lock);
>
>The code paths that call this function with mode_lock held will now deadlock.

	No path should be calling bond_update_slave_arr with mode_lock
already held (it expects RTNL only); did you find one?

	My concern is that there's something else that does the opposite
order, i.e., mode_lock first, then RTNL, but I haven't found an example.

	-J

---
	-Jay Vosburgh, jay.vosburgh@canonical.com
David Miller April 26, 2021, 7:08 p.m. UTC | #3
From: Jay Vosburgh <jay.vosburgh@canonical.com>
Date: Mon, 26 Apr 2021 08:22:37 -0700

> David Miller <davem@davemloft.net> wrote:
> 
>>[...]
>>> diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
>>> index 74cbbb2..83ef62d 100644
>>> --- a/drivers/net/bonding/bond_main.c
>>> +++ b/drivers/net/bonding/bond_main.c
>>> @@ -4406,7 +4404,9 @@ int bond_update_slave_arr(struct bonding *bond, struct slave *skipslave)
>>>  	if (BOND_MODE(bond) == BOND_MODE_8023AD) {
>>>  		struct ad_info ad_info;
>>>  
>>> +		spin_lock_bh(&bond->mode_lock);
>>
>>The code paths that call this function with mode_lock held will now deadlock.
> 
> 	No path should be calling bond_update_slave_arr with mode_lock
> already held (it expects RTNL only); did you find one?
> 
> 	My concern is that there's something else that does the opposite
> order, i.e., mode_lock first, then RTNL, but I haven't found an example.
> 

This patch is removing a lockdep assertion making sure that mode_lock was held
when this function was called.  That should have been triggering all the time, right?

Thanks.
Jay Vosburgh April 26, 2021, 7:29 p.m. UTC | #4
David Miller <davem@davemloft.net> wrote:

>[...]
>
>This patch is removing a lockdep assertion making sure that mode_lock was held
>when this function was called.  That should have been triggering all the time, right?

	The line in question is:
	
#ifdef CONFIG_LOCKDEP
	WARN_ON(lockdep_is_held(&bond->mode_lock));
#endif

	The WARN_ON is triggering if mode_lock is held, not asserting
that mode_lock is held.  I think that's wrong anyway, since mode_lock
could be held by some other thread, leading to false positives, thus the
change to might_sleep.

	-J

---
	-Jay Vosburgh, jay.vosburgh@canonical.com

Patch

diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
index 74cbbb2..83ef62d 100644
--- a/drivers/net/bonding/bond_main.c
+++ b/drivers/net/bonding/bond_main.c
@@ -4391,9 +4391,7 @@  int bond_update_slave_arr(struct bonding *bond, struct slave *skipslave)
 	int agg_id = 0;
 	int ret = 0;
 
-#ifdef CONFIG_LOCKDEP
-	WARN_ON(lockdep_is_held(&bond->mode_lock));
-#endif
+	might_sleep();
 
 	usable_slaves = kzalloc(struct_size(usable_slaves, arr,
 					    bond->slave_cnt), GFP_KERNEL);
@@ -4406,7 +4404,9 @@  int bond_update_slave_arr(struct bonding *bond, struct slave *skipslave)
 	if (BOND_MODE(bond) == BOND_MODE_8023AD) {
 		struct ad_info ad_info;
 
+		spin_lock_bh(&bond->mode_lock);
 		if (bond_3ad_get_active_agg_info(bond, &ad_info)) {
+			spin_unlock_bh(&bond->mode_lock);
 			pr_debug("bond_3ad_get_active_agg_info failed\n");
 			/* No active aggragator means it's not safe to use
 			 * the previous array.
@@ -4414,6 +4414,7 @@  int bond_update_slave_arr(struct bonding *bond, struct slave *skipslave)
 			bond_reset_slave_arr(bond);
 			goto out;
 		}
+		spin_unlock_bh(&bond->mode_lock);
 		agg_id = ad_info.aggregator_id;
 	}
 	bond_for_each_slave(bond, slave, iter) {