[v6,1/1] md: generate CHANGE uevents for md device

Message ID 20240522073310.8014-2-kinga.stefaniuk@intel.com (mailing list archive)
State Changes Requested, archived
Series: md: generate CHANGE uevents for md device

Commit Message

Kinga Stefaniuk May 22, 2024, 7:33 a.m. UTC
In mdadm commit 49b69533e8 ("mdmonitor: check if udev has finished
events processing") mdmonitor was taught to wait for udev to finish
processing, and later in commit 9935cf0f64f3 ("Mdmonitor: Improve udev
event handling") polling for MD events on the /proc/mdstat file was
deprecated, because relying on udev events is more reliable and less
bug-prone (we are not competing with udev).

After those changes we are still observing missing mdmonitor events in
some scenarios; in particular, SpareEvent is likely to be missed. With
this patch MD is able to generate more change uevents and wake up
mdmonitor more frequently, giving it a better chance to notice events.
MD already has md_new_event() to trigger events; this patch extends the
function to also generate udev CHANGE uevents. That cannot be done
directly from md_error(), because that function can be called in
interrupt context, so a workqueue is used instead. Uevents are not
time-critical, so it is safe to defer them to a workqueue. Generation
is limited to CHANGE events, as there is no need for other uevents for
now.
With this change, mdmonitor events are less likely to be missed. Our
internal test suite confirms this: mdmonitor reliability is (again)
improved.

Signed-off-by: Mateusz Grzonka <mateusz.grzonka@intel.com>
Signed-off-by: Kinga Stefaniuk <kinga.stefaniuk@intel.com>

---

v6: use another workqueue and only on md_error, make it configurable
    whether kobject_uevent runs immediately on an event or is queued
v5: fix missing flush_work and commit message fixes
v4: add a more detailed commit message
v3: fix problems with calling the function from interrupt context,
    add a work_queue and queue event notifications
v2: resolve merge conflicts when applying the patch
Signed-off-by: Kinga Stefaniuk <kinga.stefaniuk@intel.com>
---
 drivers/md/md.c     | 47 ++++++++++++++++++++++++++++++---------------
 drivers/md/md.h     |  2 +-
 drivers/md/raid10.c |  2 +-
 drivers/md/raid5.c  |  2 +-
 4 files changed, 35 insertions(+), 18 deletions(-)
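
For context on the userspace side of this mechanism: mdmonitor learns about
these events through udev. Below is a minimal sketch of a libudev listener for
the CHANGE uevents this patch generates. It is an illustration only, not part
of the patch; the "md" sysname-prefix filter is an assumption about how md
block devices are named, and error handling is elided.

/* Sketch: watch for KOBJ_CHANGE uevents on md block devices via libudev. */
#include <libudev.h>
#include <poll.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	struct udev *udev = udev_new();
	struct udev_monitor *mon = udev_monitor_new_from_netlink(udev, "udev");

	/* md arrays appear as whole-disk block devices (e.g. md127). */
	udev_monitor_filter_add_match_subsystem_devtype(mon, "block", "disk");
	udev_monitor_enable_receiving(mon);

	struct pollfd pfd = { .fd = udev_monitor_get_fd(mon), .events = POLLIN };
	for (;;) {
		if (poll(&pfd, 1, -1) <= 0)
			continue;
		struct udev_device *dev = udev_monitor_receive_device(mon);
		if (!dev)
			continue;
		const char *name = udev_device_get_sysname(dev);
		const char *action = udev_device_get_action(dev);
		if (name && action && strncmp(name, "md", 2) == 0 &&
		    strcmp(action, "change") == 0)
			printf("CHANGE uevent on %s\n", name);
		udev_device_unref(dev);
	}
}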

Comments

kernel test robot May 28, 2024, 8:51 a.m. UTC | #1
Hello,

kernel test robot noticed "mdadm-selftests.07revert-inplace.fail" on:

commit: 14a629abdd2e5e55a5122a59e338d9b6570c2c81 ("[PATCH v6 1/1] md: generate CHANGE uevents for md device")
url: https://github.com/intel-lab-lkp/linux/commits/Kinga-Stefaniuk/md-generate-CHANGE-uevents-for-md-device/20240522-153509
base: v6.9
patch link: https://lore.kernel.org/all/20240522073310.8014-2-kinga.stefaniuk@intel.com/
patch subject: [PATCH v6 1/1] md: generate CHANGE uevents for md device

in testcase: mdadm-selftests
version: mdadm-selftests-x86_64-5f41845-1_20240412
with following parameters:

	disk: 1HDD
	test_prefix: 07revert-inplace



compiler: gcc-13
test machine: 8 threads 1 sockets Intel(R) Core(TM) i7-4790T CPU @ 2.70GHz (Haswell) with 16G memory

(please refer to attached dmesg/kmsg for entire log/backtrace)




If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <oliver.sang@intel.com>
| Closes: https://lore.kernel.org/oe-lkp/202405281639.be74a40e-oliver.sang@intel.com

2024-05-26 00:53:50 mkdir -p /var/tmp
2024-05-26 00:53:50 mke2fs -t ext3 -b 4096 -J size=4 -q /dev/sdb1
2024-05-26 00:54:20 mount -t ext3 /dev/sdb1 /var/tmp
sed -e 's/{DEFAULT_METADATA}/1.2/g' \
-e 's,{MAP_PATH},/run/mdadm/map,g'  mdadm.8.in > mdadm.8
/usr/bin/install -D -m 644 mdadm.8 /usr/share/man/man8/mdadm.8
/usr/bin/install -D -m 644 mdmon.8 /usr/share/man/man8/mdmon.8
/usr/bin/install -D -m 644 md.4 /usr/share/man/man4/md.4
/usr/bin/install -D -m 644 mdadm.conf.5 /usr/share/man/man5/mdadm.conf.5
/usr/bin/install -D -m 644 udev-md-raid-creating.rules /lib/udev/rules.d/01-md-raid-creating.rules
/usr/bin/install -D -m 644 udev-md-raid-arrays.rules /lib/udev/rules.d/63-md-raid-arrays.rules
/usr/bin/install -D -m 644 udev-md-raid-assembly.rules /lib/udev/rules.d/64-md-raid-assembly.rules
/usr/bin/install -D -m 644 udev-md-clustered-confirm-device.rules /lib/udev/rules.d/69-md-clustered-confirm-device.rules
/usr/bin/install -D  -m 755 mdadm /sbin/mdadm
/usr/bin/install -D  -m 755 mdmon /sbin/mdmon
Testing on linux-6.9.0-00001-g14a629abdd2e kernel
/lkp/benchmarks/mdadm-selftests/tests/07revert-inplace... FAILED - see /var/tmp/07revert-inplace.log and /var/tmp/fail07revert-inplace.log for details


(log is attached)



The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20240528/202405281639.be74a40e-oliver.sang@intel.com
Song Liu June 10, 2024, 6:03 p.m. UTC | #2
On Wed, May 22, 2024 at 12:32 AM Kinga Stefaniuk
<kinga.stefaniuk@intel.com> wrote:
>
[...]

> ---
>  drivers/md/md.c     | 47 ++++++++++++++++++++++++++++++---------------
>  drivers/md/md.h     |  2 +-
>  drivers/md/raid10.c |  2 +-
>  drivers/md/raid5.c  |  2 +-
>  4 files changed, 35 insertions(+), 18 deletions(-)
>
> diff --git a/drivers/md/md.c b/drivers/md/md.c
> index aff9118ff697..2ec696e17f3d 100644
> --- a/drivers/md/md.c
> +++ b/drivers/md/md.c
> @@ -313,6 +313,16 @@ static int start_readonly;
>   */
>  static bool create_on_open = true;
>
> +/*
> + * Send every new event to the userspace.
> + */
> +static void trigger_kobject_uevent(struct work_struct *work)
> +{
> +       struct mddev *mddev = container_of(work, struct mddev, event_work);
> +
> +       kobject_uevent(&disk_to_dev(mddev->gendisk)->kobj, KOBJ_CHANGE);
> +}
> +
>  /*
>   * We have a system wide 'event count' that is incremented
>   * on any 'interesting' event, and readers of /proc/mdstat
> @@ -325,10 +335,15 @@ static bool create_on_open = true;
>   */
>  static DECLARE_WAIT_QUEUE_HEAD(md_event_waiters);
>  static atomic_t md_event_count;
> -void md_new_event(void)
> +void md_new_event(struct mddev *mddev, bool trigger_event)
>  {
>         atomic_inc(&md_event_count);
>         wake_up(&md_event_waiters);
> +
> +       if (trigger_event)
> +               kobject_uevent(&disk_to_dev(mddev->gendisk)->kobj, KOBJ_CHANGE);
> +       else
> +               schedule_work(&mddev->event_work);

event_work is also used by dmraid. Will this cause an issue with dmraid?

Thanks,
Song
Yu Kuai June 11, 2024, 8:56 a.m. UTC | #3
Hi,

Please CC me in the next version. Some nits below.

On 2024/05/22 15:33, Kinga Stefaniuk wrote:
> In mdadm commit 49b69533e8 ("mdmonitor: check if udev has finished
> events processing") mdmonitor was taught to wait for udev to finish
> processing, and later in commit 9935cf0f64f3 ("Mdmonitor: Improve udev
> event handling") polling for MD events on the /proc/mdstat file was
> deprecated, because relying on udev events is more reliable and less
> bug-prone (we are not competing with udev).
> 
> After those changes we are still observing missing mdmonitor events in
> some scenarios; in particular, SpareEvent is likely to be missed. With
> this patch MD is able to generate more change uevents and wake up
> mdmonitor more frequently, giving it a better chance to notice events.
> MD already has md_new_event() to trigger events; this patch extends the
> function to also generate udev CHANGE uevents. That cannot be done
> directly from md_error(), because that function can be called in
> interrupt context, so a workqueue is used instead. Uevents are not
> time-critical, so it is safe to defer them to a workqueue. Generation
> is limited to CHANGE events, as there is no need for other uevents for
> now.
> With this change, mdmonitor events are less likely to be missed. Our
> internal test suite confirms this: mdmonitor reliability is (again)
> improved.
> 
> Signed-off-by: Mateusz Grzonka <mateusz.grzonka@intel.com>
> Signed-off-by: Kinga Stefaniuk <kinga.stefaniuk@intel.com>
> 
> ---
> 
> v6: use another workqueue and only on md_error, make it configurable
>      whether kobject_uevent runs immediately on an event or is queued
> v5: fix missing flush_work and commit message fixes
> v4: add a more detailed commit message
> v3: fix problems with calling the function from interrupt context,
>      add a work_queue and queue event notifications
> v2: resolve merge conflicts when applying the patch
> Signed-off-by: Kinga Stefaniuk <kinga.stefaniuk@intel.com>
> ---
>   drivers/md/md.c     | 47 ++++++++++++++++++++++++++++++---------------
>   drivers/md/md.h     |  2 +-
>   drivers/md/raid10.c |  2 +-
>   drivers/md/raid5.c  |  2 +-
>   4 files changed, 35 insertions(+), 18 deletions(-)
> 
> diff --git a/drivers/md/md.c b/drivers/md/md.c
> index aff9118ff697..2ec696e17f3d 100644
> --- a/drivers/md/md.c
> +++ b/drivers/md/md.c
> @@ -313,6 +313,16 @@ static int start_readonly;
>    */
>   static bool create_on_open = true;
>   
> +/*
> + * Send every new event to the userspace.
> + */
> +static void trigger_kobject_uevent(struct work_struct *work)

I'd prefer the name md_kobject_uevent_fn().
> +{
> +	struct mddev *mddev = container_of(work, struct mddev, event_work);
> +
> +	kobject_uevent(&disk_to_dev(mddev->gendisk)->kobj, KOBJ_CHANGE);
> +}
> +
>   /*
>    * We have a system wide 'event count' that is incremented
>    * on any 'interesting' event, and readers of /proc/mdstat
> @@ -325,10 +335,15 @@ static bool create_on_open = true;
>    */
>   static DECLARE_WAIT_QUEUE_HEAD(md_event_waiters);
>   static atomic_t md_event_count;
> -void md_new_event(void)
> +void md_new_event(struct mddev *mddev, bool trigger_event)

You're going to send the uevent either way; the difference is
sync/async, hence I'd use the name 'bool sync' instead.
>   {
>   	atomic_inc(&md_event_count);
>   	wake_up(&md_event_waiters);
> +
> +	if (trigger_event)
> +		kobject_uevent(&disk_to_dev(mddev->gendisk)->kobj, KOBJ_CHANGE);
> +	else
> +		schedule_work(&mddev->event_work);

As I said in the last version, please use the workqueue md_misc_wq
that is allocated by raid.
>   }
>   EXPORT_SYMBOL_GPL(md_new_event);
>   
> @@ -863,6 +878,7 @@ static void mddev_free(struct mddev *mddev)
>   	list_del(&mddev->all_mddevs);
>   	spin_unlock(&all_mddevs_lock);
>   
> +	cancel_work_sync(&mddev->event_work);

This is too late, you must cancel the work before deleting the mddev
kobject in mddev_delayed_delete().

BTW, I think it is reasonable to add a kobject_del() here as well.
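
A minimal sketch of the ordering being suggested, under the assumption
that the dedicated work item ends up named uevent_work (a proposal in
this thread, not code from the patch as posted):

static void mddev_delayed_delete(struct work_struct *ws)
{
	struct mddev *mddev = container_of(ws, struct mddev, del_work);

	/* Cancel the pending uevent work before the kobject is torn
	 * down, so the handler cannot run against a dead device. */
	cancel_work_sync(&mddev->uevent_work);
	kobject_del(&mddev->kobj);
	kobject_put(&mddev->kobj);
}
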
>   	mddev_destroy(mddev);
>   	kfree(mddev);
>   }
> @@ -2940,7 +2956,7 @@ static int add_bound_rdev(struct md_rdev *rdev)
>   	if (mddev->degraded)
>   		set_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
>   	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
> -	md_new_event();
> +	md_new_event(mddev, true);
>   	return 0;
>   }
>   
> @@ -3057,7 +3073,7 @@ state_store(struct md_rdev *rdev, const char *buf, size_t len)
>   				md_kick_rdev_from_array(rdev);
>   				if (mddev->pers)
>   					set_bit(MD_SB_CHANGE_DEVS, &mddev->sb_flags);
> -				md_new_event();
> +				md_new_event(mddev, true);
>   			}
>   		}
>   	} else if (cmd_match(buf, "writemostly")) {
> @@ -4173,7 +4189,7 @@ level_store(struct mddev *mddev, const char *buf, size_t len)
>   	if (!mddev->thread)
>   		md_update_sb(mddev, 1);
>   	sysfs_notify_dirent_safe(mddev->sysfs_level);
> -	md_new_event();
> +	md_new_event(mddev, true);
>   	rv = len;
>   out_unlock:
>   	mddev_unlock_and_resume(mddev);
> @@ -4700,7 +4716,7 @@ new_dev_store(struct mddev *mddev, const char *buf, size_t len)
>   		export_rdev(rdev, mddev);
>   	mddev_unlock_and_resume(mddev);
>   	if (!err)
> -		md_new_event();
> +		md_new_event(mddev, true);
>   	return err ? err : len;
>   }
>   
> @@ -5902,6 +5918,7 @@ struct mddev *md_alloc(dev_t dev, char *name)
>   		return ERR_PTR(error);
>   	}
>   
> +	INIT_WORK(&mddev->event_work, trigger_kobject_uevent);

Please add a new work struct, and add INIT_WORK() in mddev_init() with
the other works.
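
A sketch of that placement (uevent_work is a hypothetical field name;
md_kobject_uevent_fn is the handler name suggested above):

/* drivers/md/md.h, in struct mddev next to the existing work items: */
	struct work_struct	uevent_work;	/* deferred KOBJ_CHANGE emission */

/* drivers/md/md.c, in mddev_init() with the other INIT_WORK() calls: */
	INIT_WORK(&mddev->uevent_work, md_kobject_uevent_fn);
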
>   	kobject_uevent(&mddev->kobj, KOBJ_ADD);
>   	mddev->sysfs_state = sysfs_get_dirent_safe(mddev->kobj.sd, "array_state");
>   	mddev->sysfs_level = sysfs_get_dirent_safe(mddev->kobj.sd, "level");
> @@ -6244,7 +6261,7 @@ int md_run(struct mddev *mddev)
>   	if (mddev->sb_flags)
>   		md_update_sb(mddev, 0);
>   
> -	md_new_event();
> +	md_new_event(mddev, true);
>   	return 0;
>   
>   bitmap_abort:
> @@ -6603,7 +6620,7 @@ static int do_md_stop(struct mddev *mddev, int mode)
>   		if (mddev->hold_active == UNTIL_STOP)
>   			mddev->hold_active = 0;
>   	}
> -	md_new_event();
> +	md_new_event(mddev, true);
>   	sysfs_notify_dirent_safe(mddev->sysfs_state);
>   	return 0;
>   }
> @@ -7099,7 +7116,7 @@ static int hot_remove_disk(struct mddev *mddev, dev_t dev)
>   	set_bit(MD_SB_CHANGE_DEVS, &mddev->sb_flags);
>   	if (!mddev->thread)
>   		md_update_sb(mddev, 1);
> -	md_new_event();
> +	md_new_event(mddev, true);
>   
>   	return 0;
>   busy:
> @@ -7179,7 +7196,7 @@ static int hot_add_disk(struct mddev *mddev, dev_t dev)
>   	 * array immediately.
>   	 */
>   	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
> -	md_new_event();
> +	md_new_event(mddev, true);
>   	return 0;
>   
>   abort_export:
> @@ -8159,7 +8176,7 @@ void md_error(struct mddev *mddev, struct md_rdev *rdev)
>   	}
>   	if (mddev->event_work.func)
>   		queue_work(md_misc_wq, &mddev->event_work);
> -	md_new_event();
> +	md_new_event(mddev, false);

md_error() has lots of callers, and I'm not quite sure yet whether this
can run concurrently with deleting the mddev. If it can, you must check
that 'MD_DELETED' is not set before queuing the new work.

Thanks,
Kuai
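
Pulling these review points together, a sketch of what md_new_event()
could look like ('sync', md_misc_wq and the MD_DELETED guard are
reviewer proposals, uevent_work a hypothetical dedicated work item;
none of this is code from the patch as posted):

void md_new_event(struct mddev *mddev, bool sync)
{
	atomic_inc(&md_event_count);
	wake_up(&md_event_waiters);

	if (sync) {
		kobject_uevent(&disk_to_dev(mddev->gendisk)->kobj, KOBJ_CHANGE);
		return;
	}

	/* Deferred path, e.g. md_error() in interrupt context: skip
	 * arrays that are already being deleted and use md's own
	 * workqueue rather than the system one. */
	if (!test_bit(MD_DELETED, &mddev->flags))
		queue_work(md_misc_wq, &mddev->uevent_work);
}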


>   }
>   EXPORT_SYMBOL(md_error);
>   
> @@ -9049,7 +9066,7 @@ void md_do_sync(struct md_thread *thread)
>   		mddev->curr_resync = MD_RESYNC_ACTIVE; /* no longer delayed */
>   	mddev->curr_resync_completed = j;
>   	sysfs_notify_dirent_safe(mddev->sysfs_completed);
> -	md_new_event();
> +	md_new_event(mddev, true);
>   	update_time = jiffies;
>   
>   	blk_start_plug(&plug);
> @@ -9120,7 +9137,7 @@ void md_do_sync(struct md_thread *thread)
>   			/* this is the earliest that rebuild will be
>   			 * visible in /proc/mdstat
>   			 */
> -			md_new_event();
> +			md_new_event(mddev, true);
>   
>   		if (last_check + window > io_sectors || j == max_sectors)
>   			continue;
> @@ -9386,7 +9403,7 @@ static int remove_and_add_spares(struct mddev *mddev,
>   			sysfs_link_rdev(mddev, rdev);
>   			if (!test_bit(Journal, &rdev->flags))
>   				spares++;
> -			md_new_event();
> +			md_new_event(mddev, true);
>   			set_bit(MD_SB_CHANGE_DEVS, &mddev->sb_flags);
>   		}
>   	}
> @@ -9505,7 +9522,7 @@ static void md_start_sync(struct work_struct *ws)
>   		__mddev_resume(mddev, false);
>   	md_wakeup_thread(mddev->sync_thread);
>   	sysfs_notify_dirent_safe(mddev->sysfs_action);
> -	md_new_event();
> +	md_new_event(mddev, true);
>   	return;
>   
>   not_running:
> @@ -9757,7 +9774,7 @@ void md_reap_sync_thread(struct mddev *mddev)
>   	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
>   	sysfs_notify_dirent_safe(mddev->sysfs_completed);
>   	sysfs_notify_dirent_safe(mddev->sysfs_action);
> -	md_new_event();
> +	md_new_event(mddev, true);
>   	if (mddev->event_work.func)
>   		queue_work(md_misc_wq, &mddev->event_work);
>   	wake_up(&resync_wait);
> diff --git a/drivers/md/md.h b/drivers/md/md.h
> index ca085ecad504..6c0a45d4613e 100644
> --- a/drivers/md/md.h
> +++ b/drivers/md/md.h
> @@ -803,7 +803,7 @@ extern int md_super_wait(struct mddev *mddev);
>   extern int sync_page_io(struct md_rdev *rdev, sector_t sector, int size,
>   		struct page *page, blk_opf_t opf, bool metadata_op);
>   extern void md_do_sync(struct md_thread *thread);
> -extern void md_new_event(void);
> +extern void md_new_event(struct mddev *mddev, bool trigger_event);
>   extern void md_allow_write(struct mddev *mddev);
>   extern void md_wait_for_blocked_rdev(struct md_rdev *rdev, struct mddev *mddev);
>   extern void md_set_array_sectors(struct mddev *mddev, sector_t array_sectors);
> diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
> index a4556d2e46bf..4f4adbe5da95 100644
> --- a/drivers/md/raid10.c
> +++ b/drivers/md/raid10.c
> @@ -4545,7 +4545,7 @@ static int raid10_start_reshape(struct mddev *mddev)
>   	set_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
>   	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
>   	conf->reshape_checkpoint = jiffies;
> -	md_new_event();
> +	md_new_event(mddev, true);
>   	return 0;
>   
>   abort:
> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
> index 2bd1ce9b3922..085206f1cdcc 100644
> --- a/drivers/md/raid5.c
> +++ b/drivers/md/raid5.c
> @@ -8503,7 +8503,7 @@ static int raid5_start_reshape(struct mddev *mddev)
>   	set_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
>   	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
>   	conf->reshape_checkpoint = jiffies;
> -	md_new_event();
> +	md_new_event(mddev, true);
>   	return 0;
>   }
>   
>
Yu Kuai June 11, 2024, 9:01 a.m. UTC | #4
Hi,

On 2024/06/11 16:56, Yu Kuai wrote:
> Hi,
> 
> Please CC me in the next version. Some nits below.
> 
> On 2024/05/22 15:33, Kinga Stefaniuk wrote:
>> In mdadm commit 49b69533e8 ("mdmonitor: check if udev has finished
>> events processing") mdmonitor was taught to wait for udev to finish
>> processing, and later in commit 9935cf0f64f3 ("Mdmonitor: Improve udev
>> event handling") polling for MD events on the /proc/mdstat file was
>> deprecated, because relying on udev events is more reliable and less
>> bug-prone (we are not competing with udev).
>>
>> After those changes we are still observing missing mdmonitor events in
>> some scenarios; in particular, SpareEvent is likely to be missed. With
>> this patch MD is able to generate more change uevents and wake up
>> mdmonitor more frequently, giving it a better chance to notice events.
>> MD already has md_new_event() to trigger events; this patch extends the
>> function to also generate udev CHANGE uevents. That cannot be done
>> directly from md_error(), because that function can be called in
>> interrupt context, so a workqueue is used instead. Uevents are not
>> time-critical, so it is safe to defer them to a workqueue. Generation
>> is limited to CHANGE events, as there is no need for other uevents for
>> now.
>> With this change, mdmonitor events are less likely to be missed. Our
>> internal test suite confirms this: mdmonitor reliability is (again)
>> improved.
>>
>> Signed-off-by: Mateusz Grzonka <mateusz.grzonka@intel.com>
>> Signed-off-by: Kinga Stefaniuk <kinga.stefaniuk@intel.com>
>>
>> ---
>>
>> v6: use another workqueue and only on md_error, make it configurable
>>      whether kobject_uevent runs immediately on an event or is queued
>> v5: fix missing flush_work and commit message fixes
>> v4: add a more detailed commit message
>> v3: fix problems with calling the function from interrupt context,
>>      add a work_queue and queue event notifications
>> v2: resolve merge conflicts when applying the patch
>> Signed-off-by: Kinga Stefaniuk <kinga.stefaniuk@intel.com>
>> ---
>>   drivers/md/md.c     | 47 ++++++++++++++++++++++++++++++---------------
>>   drivers/md/md.h     |  2 +-
>>   drivers/md/raid10.c |  2 +-
>>   drivers/md/raid5.c  |  2 +-
>>   4 files changed, 35 insertions(+), 18 deletions(-)
>>
>> diff --git a/drivers/md/md.c b/drivers/md/md.c
>> index aff9118ff697..2ec696e17f3d 100644
>> --- a/drivers/md/md.c
>> +++ b/drivers/md/md.c
>> @@ -313,6 +313,16 @@ static int start_readonly;
>>    */
>>   static bool create_on_open = true;
>> +/*
>> + * Send every new event to the userspace.
>> + */
>> +static void trigger_kobject_uevent(struct work_struct *work)
> 
> I'd prefer the name md_kobject_uevent_fn().
>> +{
>> +    struct mddev *mddev = container_of(work, struct mddev, event_work);
>> +
>> +    kobject_uevent(&disk_to_dev(mddev->gendisk)->kobj, KOBJ_CHANGE);
>> +}
>> +
>>   /*
>>    * We have a system wide 'event count' that is incremented
>>    * on any 'interesting' event, and readers of /proc/mdstat
>> @@ -325,10 +335,15 @@ static bool create_on_open = true;
>>    */
>>   static DECLARE_WAIT_QUEUE_HEAD(md_event_waiters);
>>   static atomic_t md_event_count;
>> -void md_new_event(void)
>> +void md_new_event(struct mddev *mddev, bool trigger_event)
> 
> You're going to send the uevent either way; the difference is
> sync/async, hence I'd use the name 'bool sync' instead.
>>   {
>>       atomic_inc(&md_event_count);
>>       wake_up(&md_event_waiters);
>> +
>> +    if (trigger_event)
>> +        kobject_uevent(&disk_to_dev(mddev->gendisk)->kobj, KOBJ_CHANGE);
>> +    else
>> +        schedule_work(&mddev->event_work);

And for dm-raid, mddev->kobj is never initialized, so you can't use it
for dm-raid (see mddev_is_dm()).

Thanks,
Kuai
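
A sketch of a guard for the dm-raid case in the work handler
(md_kobject_uevent_fn and uevent_work are the names proposed earlier in
this thread; mddev_is_dm() is the helper named above):

static void md_kobject_uevent_fn(struct work_struct *work)
{
	struct mddev *mddev = container_of(work, struct mddev, uevent_work);

	/* dm-raid arrays have no md gendisk/kobject to emit uevents on. */
	if (mddev_is_dm(mddev))
		return;

	kobject_uevent(&disk_to_dev(mddev->gendisk)->kobj, KOBJ_CHANGE);
}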

> 
> As I said in the last version, please use the workqueue md_misc_wq
> that is allocated by raid.
>>   }
>>   EXPORT_SYMBOL_GPL(md_new_event);
>> @@ -863,6 +878,7 @@ static void mddev_free(struct mddev *mddev)
>>       list_del(&mddev->all_mddevs);
>>       spin_unlock(&all_mddevs_lock);
>> +    cancel_work_sync(&mddev->event_work);
> 
> This is too late, you must cancel the work before deleting the mddev
> kobject in mddev_delayed_delete().
> 
> BTW, I think it is reasonable to add a kobject_del() here as well.
>>       mddev_destroy(mddev);
>>       kfree(mddev);
>>   }
>> @@ -2940,7 +2956,7 @@ static int add_bound_rdev(struct md_rdev *rdev)
>>       if (mddev->degraded)
>>           set_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
>>       set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
>> -    md_new_event();
>> +    md_new_event(mddev, true);
>>       return 0;
>>   }
>> @@ -3057,7 +3073,7 @@ state_store(struct md_rdev *rdev, const char *buf, size_t len)
>>                   md_kick_rdev_from_array(rdev);
>>                   if (mddev->pers)
>>                       set_bit(MD_SB_CHANGE_DEVS, &mddev->sb_flags);
>> -                md_new_event();
>> +                md_new_event(mddev, true);
>>               }
>>           }
>>       } else if (cmd_match(buf, "writemostly")) {
>> @@ -4173,7 +4189,7 @@ level_store(struct mddev *mddev, const char *buf, size_t len)
>>       if (!mddev->thread)
>>           md_update_sb(mddev, 1);
>>       sysfs_notify_dirent_safe(mddev->sysfs_level);
>> -    md_new_event();
>> +    md_new_event(mddev, true);
>>       rv = len;
>>   out_unlock:
>>       mddev_unlock_and_resume(mddev);
>> @@ -4700,7 +4716,7 @@ new_dev_store(struct mddev *mddev, const char *buf, size_t len)
>>           export_rdev(rdev, mddev);
>>       mddev_unlock_and_resume(mddev);
>>       if (!err)
>> -        md_new_event();
>> +        md_new_event(mddev, true);
>>       return err ? err : len;
>>   }
>> @@ -5902,6 +5918,7 @@ struct mddev *md_alloc(dev_t dev, char *name)
>>           return ERR_PTR(error);
>>       }
>> +    INIT_WORK(&mddev->event_work, trigger_kobject_uevent);
> 
> Please add a new work struct, and add INIT_WORK() in mddev_init() with
> the other works.
>>       kobject_uevent(&mddev->kobj, KOBJ_ADD);
>>       mddev->sysfs_state = sysfs_get_dirent_safe(mddev->kobj.sd, "array_state");
>>       mddev->sysfs_level = sysfs_get_dirent_safe(mddev->kobj.sd, "level");
>> @@ -6244,7 +6261,7 @@ int md_run(struct mddev *mddev)
>>       if (mddev->sb_flags)
>>           md_update_sb(mddev, 0);
>> -    md_new_event();
>> +    md_new_event(mddev, true);
>>       return 0;
>>   bitmap_abort:
>> @@ -6603,7 +6620,7 @@ static int do_md_stop(struct mddev *mddev, int mode)
>>           if (mddev->hold_active == UNTIL_STOP)
>>               mddev->hold_active = 0;
>>       }
>> -    md_new_event();
>> +    md_new_event(mddev, true);
>>       sysfs_notify_dirent_safe(mddev->sysfs_state);
>>       return 0;
>>   }
>> @@ -7099,7 +7116,7 @@ static int hot_remove_disk(struct mddev *mddev, dev_t dev)
>>       set_bit(MD_SB_CHANGE_DEVS, &mddev->sb_flags);
>>       if (!mddev->thread)
>>           md_update_sb(mddev, 1);
>> -    md_new_event();
>> +    md_new_event(mddev, true);
>>       return 0;
>>   busy:
>> @@ -7179,7 +7196,7 @@ static int hot_add_disk(struct mddev *mddev, dev_t dev)
>>        * array immediately.
>>        */
>>       set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
>> -    md_new_event();
>> +    md_new_event(mddev, true);
>>       return 0;
>>   abort_export:
>> @@ -8159,7 +8176,7 @@ void md_error(struct mddev *mddev, struct md_rdev *rdev)
>>       }
>>       if (mddev->event_work.func)
>>           queue_work(md_misc_wq, &mddev->event_work);
>> -    md_new_event();
>> +    md_new_event(mddev, false);
> 
> md_error() has lots of callers, and I'm not quite sure yet whether this
> can run concurrently with deleting the mddev. If it can, you must check
> that 'MD_DELETED' is not set before queuing the new work.
> 
> Thanks,
> Kuai
> 
> 
>>   }
>>   EXPORT_SYMBOL(md_error);
>> @@ -9049,7 +9066,7 @@ void md_do_sync(struct md_thread *thread)
>>           mddev->curr_resync = MD_RESYNC_ACTIVE; /* no longer delayed */
>>       mddev->curr_resync_completed = j;
>>       sysfs_notify_dirent_safe(mddev->sysfs_completed);
>> -    md_new_event();
>> +    md_new_event(mddev, true);
>>       update_time = jiffies;
>>       blk_start_plug(&plug);
>> @@ -9120,7 +9137,7 @@ void md_do_sync(struct md_thread *thread)
>>               /* this is the earliest that rebuild will be
>>                * visible in /proc/mdstat
>>                */
>> -            md_new_event();
>> +            md_new_event(mddev, true);
>>           if (last_check + window > io_sectors || j == max_sectors)
>>               continue;
>> @@ -9386,7 +9403,7 @@ static int remove_and_add_spares(struct mddev *mddev,
>>               sysfs_link_rdev(mddev, rdev);
>>               if (!test_bit(Journal, &rdev->flags))
>>                   spares++;
>> -            md_new_event();
>> +            md_new_event(mddev, true);
>>               set_bit(MD_SB_CHANGE_DEVS, &mddev->sb_flags);
>>           }
>>       }
>> @@ -9505,7 +9522,7 @@ static void md_start_sync(struct work_struct *ws)
>>           __mddev_resume(mddev, false);
>>       md_wakeup_thread(mddev->sync_thread);
>>       sysfs_notify_dirent_safe(mddev->sysfs_action);
>> -    md_new_event();
>> +    md_new_event(mddev, true);
>>       return;
>>   not_running:
>> @@ -9757,7 +9774,7 @@ void md_reap_sync_thread(struct mddev *mddev)
>>       set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
>>       sysfs_notify_dirent_safe(mddev->sysfs_completed);
>>       sysfs_notify_dirent_safe(mddev->sysfs_action);
>> -    md_new_event();
>> +    md_new_event(mddev, true);
>>       if (mddev->event_work.func)
>>           queue_work(md_misc_wq, &mddev->event_work);
>>       wake_up(&resync_wait);
>> diff --git a/drivers/md/md.h b/drivers/md/md.h
>> index ca085ecad504..6c0a45d4613e 100644
>> --- a/drivers/md/md.h
>> +++ b/drivers/md/md.h
>> @@ -803,7 +803,7 @@ extern int md_super_wait(struct mddev *mddev);
>>   extern int sync_page_io(struct md_rdev *rdev, sector_t sector, int size,
>>           struct page *page, blk_opf_t opf, bool metadata_op);
>>   extern void md_do_sync(struct md_thread *thread);
>> -extern void md_new_event(void);
>> +extern void md_new_event(struct mddev *mddev, bool trigger_event);
>>   extern void md_allow_write(struct mddev *mddev);
>>   extern void md_wait_for_blocked_rdev(struct md_rdev *rdev, struct mddev *mddev);
>>   extern void md_set_array_sectors(struct mddev *mddev, sector_t array_sectors);
>> diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
>> index a4556d2e46bf..4f4adbe5da95 100644
>> --- a/drivers/md/raid10.c
>> +++ b/drivers/md/raid10.c
>> @@ -4545,7 +4545,7 @@ static int raid10_start_reshape(struct mddev *mddev)
>>       set_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
>>       set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
>>       conf->reshape_checkpoint = jiffies;
>> -    md_new_event();
>> +    md_new_event(mddev, true);
>>       return 0;
>>   abort:
>> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
>> index 2bd1ce9b3922..085206f1cdcc 100644
>> --- a/drivers/md/raid5.c
>> +++ b/drivers/md/raid5.c
>> @@ -8503,7 +8503,7 @@ static int raid5_start_reshape(struct mddev *mddev)
>>       set_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
>>       set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
>>       conf->reshape_checkpoint = jiffies;
>> -    md_new_event();
>> +    md_new_event(mddev, true);
>>       return 0;
>>   }
>>
> 
> .
>
Kinga Stefaniuk July 4, 2024, 9:43 a.m. UTC | #5
On Mon, 10 Jun 2024 11:03:25 -0700
Song Liu <song@kernel.org> wrote:

> On Wed, May 22, 2024 at 12:32 AM Kinga Stefaniuk
> <kinga.stefaniuk@intel.com> wrote:
> >  
> [...]
> 
> > ---
> >  drivers/md/md.c     | 47 ++++++++++++++++++++++++++++++---------------
> >  drivers/md/md.h     |  2 +-
> >  drivers/md/raid10.c |  2 +-
> >  drivers/md/raid5.c  |  2 +-
> >  4 files changed, 35 insertions(+), 18 deletions(-)
> >
> > diff --git a/drivers/md/md.c b/drivers/md/md.c
> > index aff9118ff697..2ec696e17f3d 100644
> > --- a/drivers/md/md.c
> > +++ b/drivers/md/md.c
> > @@ -313,6 +313,16 @@ static int start_readonly;
> >   */
> >  static bool create_on_open = true;
> >
> > +/*
> > + * Send every new event to the userspace.
> > + */
> > +static void trigger_kobject_uevent(struct work_struct *work)
> > +{
> > +       struct mddev *mddev = container_of(work, struct mddev, event_work);
> > +
> > +       kobject_uevent(&disk_to_dev(mddev->gendisk)->kobj, KOBJ_CHANGE);
> > +}
> > +
> >  /*
> >   * We have a system wide 'event count' that is incremented
> >   * on any 'interesting' event, and readers of /proc/mdstat
> > @@ -325,10 +335,15 @@ static bool create_on_open = true;
> >   */
> >  static DECLARE_WAIT_QUEUE_HEAD(md_event_waiters);
> >  static atomic_t md_event_count;
> > -void md_new_event(void)
> > +void md_new_event(struct mddev *mddev, bool trigger_event)
> >  {
> >         atomic_inc(&md_event_count);
> >         wake_up(&md_event_waiters);
> > +
> > +       if (trigger_event)
> > +               kobject_uevent(&disk_to_dev(mddev->gendisk)->kobj, KOBJ_CHANGE);
> > +       else
> > +               schedule_work(&mddev->event_work);  
> 
> event_work is also used by dmraid. Will this cause an issue with
> dmraid?
> 
> Thanks,
> Song
> 

Hi Song,

Yes, you're right. This is fixed in the next patchset: a new
work_struct, uevent_work, was added.

Regards,
Kinga

Patch

diff --git a/drivers/md/md.c b/drivers/md/md.c
index aff9118ff697..2ec696e17f3d 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -313,6 +313,16 @@  static int start_readonly;
  */
 static bool create_on_open = true;
 
+/*
+ * Send every new event to the userspace.
+ */
+static void trigger_kobject_uevent(struct work_struct *work)
+{
+	struct mddev *mddev = container_of(work, struct mddev, event_work);
+
+	kobject_uevent(&disk_to_dev(mddev->gendisk)->kobj, KOBJ_CHANGE);
+}
+
 /*
  * We have a system wide 'event count' that is incremented
  * on any 'interesting' event, and readers of /proc/mdstat
@@ -325,10 +335,15 @@  static bool create_on_open = true;
  */
 static DECLARE_WAIT_QUEUE_HEAD(md_event_waiters);
 static atomic_t md_event_count;
-void md_new_event(void)
+void md_new_event(struct mddev *mddev, bool trigger_event)
 {
 	atomic_inc(&md_event_count);
 	wake_up(&md_event_waiters);
+
+	if (trigger_event)
+		kobject_uevent(&disk_to_dev(mddev->gendisk)->kobj, KOBJ_CHANGE);
+	else
+		schedule_work(&mddev->event_work);
 }
 EXPORT_SYMBOL_GPL(md_new_event);
 
@@ -863,6 +878,7 @@  static void mddev_free(struct mddev *mddev)
 	list_del(&mddev->all_mddevs);
 	spin_unlock(&all_mddevs_lock);
 
+	cancel_work_sync(&mddev->event_work);
 	mddev_destroy(mddev);
 	kfree(mddev);
 }
@@ -2940,7 +2956,7 @@  static int add_bound_rdev(struct md_rdev *rdev)
 	if (mddev->degraded)
 		set_bit(MD_RECOVERY_RECOVER, &mddev->recovery);
 	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
-	md_new_event();
+	md_new_event(mddev, true);
 	return 0;
 }
 
@@ -3057,7 +3073,7 @@  state_store(struct md_rdev *rdev, const char *buf, size_t len)
 				md_kick_rdev_from_array(rdev);
 				if (mddev->pers)
 					set_bit(MD_SB_CHANGE_DEVS, &mddev->sb_flags);
-				md_new_event();
+				md_new_event(mddev, true);
 			}
 		}
 	} else if (cmd_match(buf, "writemostly")) {
@@ -4173,7 +4189,7 @@  level_store(struct mddev *mddev, const char *buf, size_t len)
 	if (!mddev->thread)
 		md_update_sb(mddev, 1);
 	sysfs_notify_dirent_safe(mddev->sysfs_level);
-	md_new_event();
+	md_new_event(mddev, true);
 	rv = len;
 out_unlock:
 	mddev_unlock_and_resume(mddev);
@@ -4700,7 +4716,7 @@  new_dev_store(struct mddev *mddev, const char *buf, size_t len)
 		export_rdev(rdev, mddev);
 	mddev_unlock_and_resume(mddev);
 	if (!err)
-		md_new_event();
+		md_new_event(mddev, true);
 	return err ? err : len;
 }
 
@@ -5902,6 +5918,7 @@  struct mddev *md_alloc(dev_t dev, char *name)
 		return ERR_PTR(error);
 	}
 
+	INIT_WORK(&mddev->event_work, trigger_kobject_uevent);
 	kobject_uevent(&mddev->kobj, KOBJ_ADD);
 	mddev->sysfs_state = sysfs_get_dirent_safe(mddev->kobj.sd, "array_state");
 	mddev->sysfs_level = sysfs_get_dirent_safe(mddev->kobj.sd, "level");
@@ -6244,7 +6261,7 @@  int md_run(struct mddev *mddev)
 	if (mddev->sb_flags)
 		md_update_sb(mddev, 0);
 
-	md_new_event();
+	md_new_event(mddev, true);
 	return 0;
 
 bitmap_abort:
@@ -6603,7 +6620,7 @@  static int do_md_stop(struct mddev *mddev, int mode)
 		if (mddev->hold_active == UNTIL_STOP)
 			mddev->hold_active = 0;
 	}
-	md_new_event();
+	md_new_event(mddev, true);
 	sysfs_notify_dirent_safe(mddev->sysfs_state);
 	return 0;
 }
@@ -7099,7 +7116,7 @@  static int hot_remove_disk(struct mddev *mddev, dev_t dev)
 	set_bit(MD_SB_CHANGE_DEVS, &mddev->sb_flags);
 	if (!mddev->thread)
 		md_update_sb(mddev, 1);
-	md_new_event();
+	md_new_event(mddev, true);
 
 	return 0;
 busy:
@@ -7179,7 +7196,7 @@  static int hot_add_disk(struct mddev *mddev, dev_t dev)
 	 * array immediately.
 	 */
 	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
-	md_new_event();
+	md_new_event(mddev, true);
 	return 0;
 
 abort_export:
@@ -8159,7 +8176,7 @@  void md_error(struct mddev *mddev, struct md_rdev *rdev)
 	}
 	if (mddev->event_work.func)
 		queue_work(md_misc_wq, &mddev->event_work);
-	md_new_event();
+	md_new_event(mddev, false);
 }
 EXPORT_SYMBOL(md_error);
 
@@ -9049,7 +9066,7 @@  void md_do_sync(struct md_thread *thread)
 		mddev->curr_resync = MD_RESYNC_ACTIVE; /* no longer delayed */
 	mddev->curr_resync_completed = j;
 	sysfs_notify_dirent_safe(mddev->sysfs_completed);
-	md_new_event();
+	md_new_event(mddev, true);
 	update_time = jiffies;
 
 	blk_start_plug(&plug);
@@ -9120,7 +9137,7 @@  void md_do_sync(struct md_thread *thread)
 			/* this is the earliest that rebuild will be
 			 * visible in /proc/mdstat
 			 */
-			md_new_event();
+			md_new_event(mddev, true);
 
 		if (last_check + window > io_sectors || j == max_sectors)
 			continue;
@@ -9386,7 +9403,7 @@  static int remove_and_add_spares(struct mddev *mddev,
 			sysfs_link_rdev(mddev, rdev);
 			if (!test_bit(Journal, &rdev->flags))
 				spares++;
-			md_new_event();
+			md_new_event(mddev, true);
 			set_bit(MD_SB_CHANGE_DEVS, &mddev->sb_flags);
 		}
 	}
@@ -9505,7 +9522,7 @@  static void md_start_sync(struct work_struct *ws)
 		__mddev_resume(mddev, false);
 	md_wakeup_thread(mddev->sync_thread);
 	sysfs_notify_dirent_safe(mddev->sysfs_action);
-	md_new_event();
+	md_new_event(mddev, true);
 	return;
 
 not_running:
@@ -9757,7 +9774,7 @@  void md_reap_sync_thread(struct mddev *mddev)
 	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
 	sysfs_notify_dirent_safe(mddev->sysfs_completed);
 	sysfs_notify_dirent_safe(mddev->sysfs_action);
-	md_new_event();
+	md_new_event(mddev, true);
 	if (mddev->event_work.func)
 		queue_work(md_misc_wq, &mddev->event_work);
 	wake_up(&resync_wait);
diff --git a/drivers/md/md.h b/drivers/md/md.h
index ca085ecad504..6c0a45d4613e 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -803,7 +803,7 @@  extern int md_super_wait(struct mddev *mddev);
 extern int sync_page_io(struct md_rdev *rdev, sector_t sector, int size,
 		struct page *page, blk_opf_t opf, bool metadata_op);
 extern void md_do_sync(struct md_thread *thread);
-extern void md_new_event(void);
+extern void md_new_event(struct mddev *mddev, bool trigger_event);
 extern void md_allow_write(struct mddev *mddev);
 extern void md_wait_for_blocked_rdev(struct md_rdev *rdev, struct mddev *mddev);
 extern void md_set_array_sectors(struct mddev *mddev, sector_t array_sectors);
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index a4556d2e46bf..4f4adbe5da95 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -4545,7 +4545,7 @@  static int raid10_start_reshape(struct mddev *mddev)
 	set_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
 	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
 	conf->reshape_checkpoint = jiffies;
-	md_new_event();
+	md_new_event(mddev, true);
 	return 0;
 
 abort:
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 2bd1ce9b3922..085206f1cdcc 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -8503,7 +8503,7 @@  static int raid5_start_reshape(struct mddev *mddev)
 	set_bit(MD_RECOVERY_RESHAPE, &mddev->recovery);
 	set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
 	conf->reshape_checkpoint = jiffies;
-	md_new_event();
+	md_new_event(mddev, true);
 	return 0;
 }