Message ID | 20240828021150.63240-2-xni@redhat.com (mailing list archive) |
---|---|
State | Superseded |
Series | mdadm tests fix |
Context | Check | Description |
---|---|---|
mdraidci/vmtest-md-6_11-PR | fail | merge-conflict |
mdraidci/vmtest-md-6_12-PR | fail | merge-conflict |
Hi Xiao,

Thanks for patches.

On Wed, 28 Aug 2024 10:11:41 +0800
Xiao Ni <xni@redhat.com> wrote:

> Reshape needs to specify a backup file when it can't update data offset
> of member disks. For this situation, first, it starts reshape and then
> it kicks off mdadm-grow-continue service which does backup job and
> monitors the reshape process. The service is a new process, so it needs
> to read superblock from member disks to get information.

Looks like kernel is fine with reset the same level so I don't see a risk in
this change for other scenarios but please mention that.

>
> But in the first step, it doesn't update new level in superblock. So
> it can't change level after reshape finishes, because the new level is
> not right. So records the new level in the first step.
>
> Signed-off-by: Xiao Ni <xni@redhat.com>
> ---
>  Grow.c | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/Grow.c b/Grow.c
> index 5810b128aa99..97e48d86a33f 100644
> --- a/Grow.c
> +++ b/Grow.c
> @@ -2946,6 +2946,9 @@ static int impose_reshape(struct mdinfo *sra,
>  if (!err && sysfs_set_num(sra, NULL, "layout",
>  reshape->after.layout) < 0)
>  err = errno;
> + if (!err && sysfs_set_num(sra, NULL, "new_level",
> + info->new_level) < 0)
> + err = errno;

Please add empty line before and after and please merge if statement to one
line (we support up to 100).

>  if (!err && subarray_set_num(container, sra, "raid_disks",
>  reshape->after.data_disks +
>  reshape->parity) < 0)

Thanks,
Mariusz
On Mon, 2 Sep 2024 11:50:13 +0200
Mariusz Tkaczyk <mariusz.tkaczyk@linux.intel.com> wrote:

> Hi Xiao,
> Thanks for patches.
>
> On Wed, 28 Aug 2024 10:11:41 +0800
> Xiao Ni <xni@redhat.com> wrote:
>
> > Reshape needs to specify a backup file when it can't update data offset
> > of member disks. For this situation, first, it starts reshape and then
> > it kicks off mdadm-grow-continue service which does backup job and
> > monitors the reshape process. The service is a new process, so it needs
> > to read superblock from member disks to get information.
>
> Looks like kernel is fine with reset the same level so I don't see a risk in
> this change for other scenarios but please mention that.
>

Sorry, I didn't notice that it is new field. We should not update it if it
doesn't exist. Perhaps, we should print message that kernel patch is needed?

> >
> > But in the first step, it doesn't update new level in superblock. So
> > it can't change level after reshape finishes, because the new level is
> > not right. So records the new level in the first step.
> >
> > Signed-off-by: Xiao Ni <xni@redhat.com>
> > ---
> > Grow.c | 3 +++
> > 1 file changed, 3 insertions(+)
> >
> > diff --git a/Grow.c b/Grow.c
> > index 5810b128aa99..97e48d86a33f 100644
> > --- a/Grow.c
> > +++ b/Grow.c
> > @@ -2946,6 +2946,9 @@ static int impose_reshape(struct mdinfo *sra,
> > if (!err && sysfs_set_num(sra, NULL, "layout",
> > reshape->after.layout) < 0)
> > err = errno;
> > + if (!err && sysfs_set_num(sra, NULL, "new_level",
> > + info->new_level) < 0)
> > + err = errno;
>
> Please add empty line before and after and please merge if statement to one
> line (we support up to 100).
>
> > if (!err && subarray_set_num(container, sra, "raid_disks",
> > reshape->after.data_disks +
> > reshape->parity) < 0)
>
> Thanks,
> Mariusz
>
On Mon, Sep 2, 2024 at 5:50 PM Mariusz Tkaczyk
<mariusz.tkaczyk@linux.intel.com> wrote:
>
> Hi Xiao,
> Thanks for patches.
>
> On Wed, 28 Aug 2024 10:11:41 +0800
> Xiao Ni <xni@redhat.com> wrote:
>
> > Reshape needs to specify a backup file when it can't update data offset
> > of member disks. For this situation, first, it starts reshape and then
> > it kicks off mdadm-grow-continue service which does backup job and
> > monitors the reshape process. The service is a new process, so it needs
> > to read superblock from member disks to get information.
>

Hi Mariusz

> Looks like kernel is fine with reset the same level so I don't see a risk in
> this change for other scenarios but please mention that.

impose_level can't be called if the new level is the same as the old level.
It already checks it before calling impose_level.

> >
> > But in the first step, it doesn't update new level in superblock. So
> > it can't change level after reshape finishes, because the new level is
> > not right. So records the new level in the first step.
> >
> > Signed-off-by: Xiao Ni <xni@redhat.com>
> > ---
> > Grow.c | 3 +++
> > 1 file changed, 3 insertions(+)
> >
> > diff --git a/Grow.c b/Grow.c
> > index 5810b128aa99..97e48d86a33f 100644
> > --- a/Grow.c
> > +++ b/Grow.c
> > @@ -2946,6 +2946,9 @@ static int impose_reshape(struct mdinfo *sra,
> > if (!err && sysfs_set_num(sra, NULL, "layout",
> > reshape->after.layout) < 0)
> > err = errno;
> > + if (!err && sysfs_set_num(sra, NULL, "new_level",
> > + info->new_level) < 0)
> > + err = errno;
>
> Please add empty line before and after and please merge if statement to one
> line (we support up to 100).

Ok

> > if (!err && subarray_set_num(container, sra, "raid_disks",
> > reshape->after.data_disks +
> > reshape->parity) < 0)
>
> Thanks,
> Mariusz
>
On Mon, Sep 2, 2024 at 6:10 PM Mariusz Tkaczyk
<mariusz.tkaczyk@linux.intel.com> wrote:
>
> On Mon, 2 Sep 2024 11:50:13 +0200
> Mariusz Tkaczyk <mariusz.tkaczyk@linux.intel.com> wrote:
>
> > Hi Xiao,
> > Thanks for patches.
> >
> > On Wed, 28 Aug 2024 10:11:41 +0800
> > Xiao Ni <xni@redhat.com> wrote:
> >
> > > Reshape needs to specify a backup file when it can't update data offset
> > > of member disks. For this situation, first, it starts reshape and then
> > > it kicks off mdadm-grow-continue service which does backup job and
> > > monitors the reshape process. The service is a new process, so it needs
> > > to read superblock from member disks to get information.
> >
> > Looks like kernel is fine with reset the same level so I don't see a risk in
> > this change for other scenarios but please mention that.
> >
>
> Sorry, I didn't notice that it is new field. We should not update it if it
> doesn't exist. Perhaps, we should print message that kernel patch is needed?

We can merge this patch set after the kernel patch is merged. Because
this change depends on the kernel change. If the kernel patch is
rejected, we need to find another way to fix this problem.

Regards
Xiao

>
> > >
> > > But in the first step, it doesn't update new level in superblock. So
> > > it can't change level after reshape finishes, because the new level is
> > > not right. So records the new level in the first step.
> > >
> > > Signed-off-by: Xiao Ni <xni@redhat.com>
> > > ---
> > > Grow.c | 3 +++
> > > 1 file changed, 3 insertions(+)
> > >
> > > diff --git a/Grow.c b/Grow.c
> > > index 5810b128aa99..97e48d86a33f 100644
> > > --- a/Grow.c
> > > +++ b/Grow.c
> > > @@ -2946,6 +2946,9 @@ static int impose_reshape(struct mdinfo *sra,
> > > if (!err && sysfs_set_num(sra, NULL, "layout",
> > > reshape->after.layout) < 0)
> > > err = errno;
> > > + if (!err && sysfs_set_num(sra, NULL, "new_level",
> > > + info->new_level) < 0)
> > > + err = errno;
> >
> > Please add empty line before and after and please merge if statement to one
> > line (we support up to 100).
> >
> > > if (!err && subarray_set_num(container, sra, "raid_disks",
> > > reshape->after.data_disks +
> > > reshape->parity) < 0)
> >
> > Thanks,
> > Mariusz
> >
>
On Tue, 3 Sep 2024 08:34:46 +0800
Xiao Ni <xni@redhat.com> wrote:

> On Mon, Sep 2, 2024 at 6:10 PM Mariusz Tkaczyk
> <mariusz.tkaczyk@linux.intel.com> wrote:
> >
> > On Mon, 2 Sep 2024 11:50:13 +0200
> > Mariusz Tkaczyk <mariusz.tkaczyk@linux.intel.com> wrote:
> >
> > > Hi Xiao,
> > > Thanks for patches.
> > >
> > > On Wed, 28 Aug 2024 10:11:41 +0800
> > > Xiao Ni <xni@redhat.com> wrote:
> > >
> > > > Reshape needs to specify a backup file when it can't update data offset
> > > > of member disks. For this situation, first, it starts reshape and then
> > > > it kicks off mdadm-grow-continue service which does backup job and
> > > > monitors the reshape process. The service is a new process, so it needs
> > > > to read superblock from member disks to get information.
> > >
> > > Looks like kernel is fine with reset the same level so I don't see a risk
> > > in this change for other scenarios but please mention that.
> > >
> >
> > Sorry, I didn't notice that it is new field. We should not update it if it
> > doesn't exist. Perhaps, we should print message that kernel patch is
> > needed?
>
> We can merge this patch set after the kernel patch is merged. Because
> this change depends on the kernel change. If the kernel patch is
> rejected, we need to find another way to fix this problem.

We have to mention kernel commit in description then.

Let say that it is merged, we should probably notify user,
"hey your kernel is missing the kernel patch that allow this functionality to
work reliably, issue may occur". What do you think? Is it valuable?

Thanks,
Mariusz
On Tue, Sep 3, 2024 at 3:34 PM Mariusz Tkaczyk
<mariusz.tkaczyk@linux.intel.com> wrote:
>
> On Tue, 3 Sep 2024 08:34:46 +0800
> Xiao Ni <xni@redhat.com> wrote:
>
> > On Mon, Sep 2, 2024 at 6:10 PM Mariusz Tkaczyk
> > <mariusz.tkaczyk@linux.intel.com> wrote:
> > >
> > > On Mon, 2 Sep 2024 11:50:13 +0200
> > > Mariusz Tkaczyk <mariusz.tkaczyk@linux.intel.com> wrote:
> > >
> > > > Hi Xiao,
> > > > Thanks for patches.
> > > >
> > > > On Wed, 28 Aug 2024 10:11:41 +0800
> > > > Xiao Ni <xni@redhat.com> wrote:
> > > >
> > > > > Reshape needs to specify a backup file when it can't update data offset
> > > > > of member disks. For this situation, first, it starts reshape and then
> > > > > it kicks off mdadm-grow-continue service which does backup job and
> > > > > monitors the reshape process. The service is a new process, so it needs
> > > > > to read superblock from member disks to get information.
> > > >
> > > > Looks like kernel is fine with reset the same level so I don't see a risk
> > > > in this change for other scenarios but please mention that.
> > > >
> > >
> > > Sorry, I didn't notice that it is new field. We should not update it if it
> > > doesn't exist. Perhaps, we should print message that kernel patch is
> > > needed?
> >
> > We can merge this patch set after the kernel patch is merged. Because
> > this change depends on the kernel change. If the kernel patch is
> > rejected, we need to find another way to fix this problem.
>
> We have to mention kernel commit in description then.
>
> Let say that it is merged, we should probably notify user,
> "hey your kernel is missing the kernel patch that allow this functionality to
> work reliably, issue may occur". What do you think? Is it valuable?

Hi Mariusz

It makes sense. I'll add this in V2 once kernel patch is merged.

Regards
Xiao

>
> Thanks,
> Mariusz
>
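For reference, the user notification discussed above could look roughly like the sketch below. It assumes the kernel exposes the pending level as an md sysfs attribute named `new_level` (the attribute the pending kernel patch adds); the helper name, path handling, and message text are illustrative and not part of the posted series.

```c
#include <stdio.h>
#include <unistd.h>

/*
 * Illustrative only (not from the posted patch): warn when the running
 * kernel does not expose the md "new_level" sysfs attribute that this
 * change relies on. devnm is the array name, e.g. "md0"; the
 * /sys/block/<dev>/md/<attr> layout is the usual md sysfs convention.
 */
static void warn_if_new_level_missing(const char *devnm)
{
	char path[256];

	snprintf(path, sizeof(path), "/sys/block/%s/md/new_level", devnm);
	if (access(path, F_OK) != 0)
		fprintf(stderr,
			"mdadm: %s: kernel lacks md/new_level support, level change after reshape may not work reliably\n",
			devnm);
}
```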
diff --git a/Grow.c b/Grow.c
index 5810b128aa99..97e48d86a33f 100644
--- a/Grow.c
+++ b/Grow.c
@@ -2946,6 +2946,9 @@ static int impose_reshape(struct mdinfo *sra,
 	if (!err && sysfs_set_num(sra, NULL, "layout",
 				  reshape->after.layout) < 0)
 		err = errno;
+	if (!err && sysfs_set_num(sra, NULL, "new_level",
+				  info->new_level) < 0)
+		err = errno;
 	if (!err && subarray_set_num(container, sra, "raid_disks",
 				     reshape->after.data_disks +
 				     reshape->parity) < 0)
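For clarity, this is roughly how the touched lines would read after the style feedback in the review (condition merged onto a single line within the 100-column limit, with a blank line around the new check). It is a sketch of the requested follow-up, not the committed v2.

```c
	/* excerpt from impose_reshape() in Grow.c, reflowed per review */
	if (!err && sysfs_set_num(sra, NULL, "layout", reshape->after.layout) < 0)
		err = errno;

	if (!err && sysfs_set_num(sra, NULL, "new_level", info->new_level) < 0)
		err = errno;
```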
Reshape needs to specify a backup file when it can't update the data offset
of member disks. In that situation it first starts the reshape and then kicks
off the mdadm-grow-continue service, which does the backup job and monitors
the reshape process. The service is a new process, so it has to read the
superblock from the member disks to get its information.

But the first step doesn't update the new level in the superblock, so the
level can't be changed after the reshape finishes, because the recorded new
level is not right. So record the new level in the first step.

Signed-off-by: Xiao Ni <xni@redhat.com>
---
 Grow.c | 3 +++
 1 file changed, 3 insertions(+)
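At the sysfs level, the added sysfs_set_num() call amounts to writing the pending level as a decimal value into the array's md directory, so the kernel can record it and the separately started mdadm-grow-continue service later sees the correct target level. A minimal standalone sketch of such a write, with simplified paths and error handling (not mdadm's actual helper, and assuming the new attribute accepts a plain number), might look like this:

```c
#include <stdio.h>

/*
 * Minimal sketch: write the pending level to the md "new_level" sysfs
 * attribute of array devnm (e.g. "md0"). Assumes the attribute exists
 * and takes a decimal value, as the kernel side of this series adds.
 */
static int write_new_level(const char *devnm, int new_level)
{
	char path[256];
	FILE *f;
	int ok;

	snprintf(path, sizeof(path), "/sys/block/%s/md/new_level", devnm);
	f = fopen(path, "w");
	if (!f)
		return -1;
	ok = fprintf(f, "%d\n", new_level) > 0;
	if (fclose(f) != 0)
		ok = 0;
	return ok ? 0 : -1;
}
```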