
dax: Fix xarray entry association for mixed mappings

Message ID 20190606091028.31715-1-jack@suse.cz (mailing list archive)
State Mainlined
Commit 1571c029a2ff289683ddb0a32253850363bcb8a7
Series dax: Fix xarray entry association for mixed mappings

Commit Message

Jan Kara June 6, 2019, 9:10 a.m. UTC
When inserting an entry into the xarray, we store the mapping and index
in the corresponding struct pages for memory error handling. When one
process mapped a file at PMD granularity while another process mapped
it at PTE granularity, we could wrongly disassociate the PMD range and
then reassociate only the PTE range, leaving the rest of the struct
pages in the PMD range without mapping information, which could later
cause missed notifications about memory errors. Fix the problem by
calling the association / disassociation code if and only if we are
really going to update the xarray (disassociating and associating zero
or empty entries is a no-op, so there's no reason to complicate the
code by trying to avoid the calls in these cases).

Signed-off-by: Jan Kara <jack@suse.cz>
---
 fs/dax.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)
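
To make the failure mode concrete, here is a small userspace toy model
of the association bookkeeping (illustrative C only, not kernel code:
the struct, the helper names, and the 512-page PMD size are assumptions
made up for this sketch). It replays the buggy sequence: a PMD fault
associates 512 pages, then a PTE fault disassociates the whole PMD
range but reassociates only a single page:

/*
 * Toy model of the pre-fix behaviour, not kernel code.  The struct,
 * the helpers, and the 512-page PMD size are illustrative assumptions.
 */
#include <stdio.h>
#include <stddef.h>

#define PMD_PAGES 512	/* pages covered by one PMD-sized entry */

struct toy_page {
	void *mapping;
	unsigned long index;
};

static struct toy_page pages[PMD_PAGES];

/* Record mapping/index in each page, like dax_associate_entry(). */
static void toy_associate(size_t first, size_t npages, void *mapping)
{
	for (size_t i = 0; i < npages; i++) {
		pages[first + i].mapping = mapping;
		pages[first + i].index = first + i;
	}
}

/* Clear the association, like dax_disassociate_entry(). */
static void toy_disassociate(size_t first, size_t npages)
{
	for (size_t i = 0; i < npages; i++)
		pages[first + i].mapping = NULL;
}

int main(void)
{
	void *mapping = (void *)0x1;	/* stand-in for an address_space */
	size_t orphans = 0;

	/* Process A faults at PMD granularity: 512 pages associated. */
	toy_associate(0, PMD_PAGES, mapping);

	/*
	 * Process B faults one page at PTE granularity.  The old
	 * entry-size check fired (PMD size != PTE size), dropping the
	 * whole PMD association and re-associating only one page,
	 * while the PMD entry itself stayed in the xarray.
	 */
	toy_disassociate(0, PMD_PAGES);
	toy_associate(0, 1, mapping);

	for (size_t i = 0; i < PMD_PAGES; i++)
		if (!pages[i].mapping)
			orphans++;
	printf("%zu pages lost their mapping info\n", orphans);	/* 511 */
	return 0;
}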

Comments

Dan Williams June 6, 2019, 5 p.m. UTC | #1
On Thu, Jun 6, 2019 at 2:10 AM Jan Kara <jack@suse.cz> wrote:
>
> When inserting an entry into the xarray, we store the mapping and index
> in the corresponding struct pages for memory error handling. When one
> process mapped a file at PMD granularity while another process mapped
> it at PTE granularity, we could wrongly disassociate the PMD range and
> then reassociate only the PTE range, leaving the rest of the struct
> pages in the PMD range without mapping information, which could later
> cause missed notifications about memory errors. Fix the problem by
> calling the association / disassociation code if and only if we are
> really going to update the xarray (disassociating and associating zero
> or empty entries is a no-op, so there's no reason to complicate the
> code by trying to avoid the calls in these cases).

Looks good to me, I assume this also needs:

Cc: <stable@vger.kernel.org>
Fixes: d2c997c0f145 ("fs, dax: use page->mapping to warn if truncate collides with a busy page")

Jan Kara June 6, 2019, 9:28 p.m. UTC | #2
On Thu 06-06-19 10:00:01, Dan Williams wrote:
> Looks good to me, I assume this also needs:
> 
> Cc: <stable@vger.kernel.org>
> Fixes: d2c997c0f145 ("fs, dax: use page->mapping to warn if truncate collides with a busy page")

Yes, thanks for that.

								Honza


Patch

diff --git a/fs/dax.c b/fs/dax.c
index f74386293632..9fd908f3df32 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -728,12 +728,11 @@ static void *dax_insert_entry(struct xa_state *xas,
 
 	xas_reset(xas);
 	xas_lock_irq(xas);
-	if (dax_entry_size(entry) != dax_entry_size(new_entry)) {
+	if (dax_is_zero_entry(entry) || dax_is_empty_entry(entry)) {
+		void *old;
+
 		dax_disassociate_entry(entry, mapping, false);
 		dax_associate_entry(new_entry, mapping, vmf->vma, vmf->address);
-	}
-
-	if (dax_is_zero_entry(entry) || dax_is_empty_entry(entry)) {
 		/*
 		 * Only swap our new entry into the page cache if the current
 		 * entry is a zero page or an empty entry.  If a normal PTE or
@@ -742,7 +741,7 @@ static void *dax_insert_entry(struct xa_state *xas,
 		 * existing entry is a PMD, we will just leave the PMD in the
 		 * tree and dirty it if necessary.
 		 */
-		void *old = dax_lock_entry(xas, new_entry);
+		old = dax_lock_entry(xas, new_entry);
 		WARN_ON_ONCE(old != xa_mk_value(xa_to_value(entry) |
 					DAX_LOCKED));
 		entry = new_entry;
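
For reference, this is how the dax_insert_entry() fragment reads with
the hunks above applied (a reconstruction from the diff, not a quote of
the full file: the [...] stands for comment lines that fall between the
two hunks and are not shown, and the closing brace follows from the new
structure):

	xas_reset(xas);
	xas_lock_irq(xas);
	if (dax_is_zero_entry(entry) || dax_is_empty_entry(entry)) {
		void *old;

		dax_disassociate_entry(entry, mapping, false);
		dax_associate_entry(new_entry, mapping, vmf->vma, vmf->address);
		/*
		 * Only swap our new entry into the page cache if the current
		 * entry is a zero page or an empty entry.  If a normal PTE or
		 * [...]
		 * existing entry is a PMD, we will just leave the PMD in the
		 * tree and dirty it if necessary.
		 */
		old = dax_lock_entry(xas, new_entry);
		WARN_ON_ONCE(old != xa_mk_value(xa_to_value(entry) |
					DAX_LOCKED));
		entry = new_entry;
	}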