Message ID: 20170517171639.14501-2-ross.zwisler@linux.intel.com (mailing list archive)
State: New, archived
On Wed 17-05-17 11:16:39, Ross Zwisler wrote:
> We currently have two related PMD vs PTE races in the DAX code. These can
> both be easily triggered by having two threads reading and writing
> simultaneously to the same private mapping, with the key being that
> private mapping reads can be handled with PMDs but private mapping writes
> are always handled with PTEs so that we can COW.
>
> Here is the first race:
>
> CPU 0                                   CPU 1
>
> (private mapping write)
> __handle_mm_fault()
>   create_huge_pmd() - FALLBACK
>   handle_pte_fault()
>     passes check for pmd_devmap()
>
>                                         (private mapping read)
>                                         __handle_mm_fault()
>                                           create_huge_pmd()
>                                             dax_iomap_pmd_fault() inserts PMD
>
> dax_iomap_pte_fault() does a PTE fault, but we already have a DAX PMD
> installed in our page tables at this spot.
>
> Here's the second race:
>
> CPU 0                                   CPU 1
>
> (private mapping write)
> __handle_mm_fault()
>   create_huge_pmd() - FALLBACK
>                                         (private mapping read)
>                                         __handle_mm_fault()
>                                           passes check for pmd_none()
>                                           create_huge_pmd()
>
> handle_pte_fault()
>   dax_iomap_pte_fault() inserts PTE
>                                         dax_iomap_pmd_fault() inserts PMD,
>                                         but we already have a PTE at
>                                         this spot.

So I don't see how this second scenario can happen. dax_iomap_pmd_fault()
will call grab_mapping_entry(). That will either find a PTE entry in the
radix tree -> EEXIST and we retry the fault, or we will not find a PTE
entry -> try to insert a PMD entry which collides with the PTE entry ->
EEXIST and we retry the fault. Am I missing something?

The first scenario seems to be possible. dax_iomap_pmd_fault() will create
a PMD entry in the radix tree. Then dax_iomap_pte_fault() will come, do
grab_mapping_entry(), and there it sees the entry is a PMD while we are
doing a PTE fault, so I'd think that pmd_downgrade = true... But actually
the condition there doesn't trigger in this case. And that's the catch:
although we asked grab_mapping_entry() for a PTE, we've got a PMD back, and
that screws us later.

Actually I'm not convinced your patch quite fixes this, because
dax_load_hole() or dax_insert_mapping_entry() will modify the passed entry
with the assumption that it's a PTE entry, and so they will likely corrupt
the entry in the radix tree. So I think that to fix the first case we
should rather modify grab_mapping_entry() to properly go through the
pmd_downgrade path once we find a PMD entry while doing a PTE fault.

What do you think?

								Honza
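For reference, the grab_mapping_entry() logic under discussion looked
roughly like this at the time. This is a simplified paraphrase of the
fs/dax.c code of that era, not the verbatim source; it shows why
pmd_downgrade stays false when a PTE fault finds a storage-backed PMD
entry:

  /* Simplified sketch of the entry checks in grab_mapping_entry().
   * 'entry' is what was found in the radix tree; size_flag is
   * RADIX_DAX_PMD for a PMD fault and 0 for a PTE fault.
   */
  if (entry) {
      if (size_flag & RADIX_DAX_PMD) {
          /* PMD fault, but a PTE entry is already present: return
           * -EEXIST so the caller retries the fault (the second
           * scenario above). */
          if (!radix_tree_exceptional_entry(entry) ||
              dax_is_pte_entry(entry)) {
              entry = ERR_PTR(-EEXIST);
              goto out_unlock;
          }
      } else {
          /* PTE fault that found a PMD entry: only zero-page or
           * empty PMD entries are downgraded.  A PMD entry with
           * real storage backing falls through, so the PTE fault
           * gets a PMD entry back -- the catch Jan describes. */
          if (dax_is_pmd_entry(entry) &&
              (dax_is_zero_entry(entry) || dax_is_empty_entry(entry)))
              pmd_downgrade = true;
      }
  }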
On Thu, May 18, 2017 at 09:50:37AM +0200, Jan Kara wrote:
> On Wed 17-05-17 11:16:39, Ross Zwisler wrote:
<>
> So I don't see how this second scenario can happen. dax_iomap_pmd_fault()
> will call grab_mapping_entry(). That will either find a PTE entry in the
> radix tree -> EEXIST and we retry the fault, or we will not find a PTE
> entry -> try to insert a PMD entry which collides with the PTE entry ->
> EEXIST and we retry the fault. Am I missing something?

Yep, sorry, I guess I needed a few extra steps in my flow (the initial
private mapping read by CPU 0):

CPU 0                                   CPU 1

(private mapping read)
__handle_mm_fault()
  passes check for pmd_none()
  create_huge_pmd()
    dax_iomap_pmd_fault() inserts PMD

(private mapping write)
__handle_mm_fault()
  create_huge_pmd() - FALLBACK
                                        (private mapping read)
                                        __handle_mm_fault()
                                          passes check for pmd_none()
                                          create_huge_pmd()

handle_pte_fault()
  dax_iomap_pte_fault() inserts PTE
                                        dax_iomap_pmd_fault() inserts PMD,
                                        but we already have a PTE at
                                        this spot.

So what happens is that CPU 0 inserts a DAX PMD into the radix tree that
has real storage backing, and all PTE and PMD faults just use that same
PMD radix tree entry for locking and dirty tracking.

> The first scenario seems to be possible. dax_iomap_pmd_fault() will
> create a PMD entry in the radix tree. Then dax_iomap_pte_fault() will
> come, do grab_mapping_entry(), and there it sees the entry is a PMD while
> we are doing a PTE fault, so I'd think that pmd_downgrade = true... But
> actually the condition there doesn't trigger in this case. And that's the
> catch: although we asked grab_mapping_entry() for a PTE, we've got a PMD
> back, and that screws us later.

Yep, it was a conscious decision when implementing the PMD support to
allow one thread to use PMDs and another to use PTEs in the same range, as
long as the thread faulting in PMDs is the first to insert into the radix
tree. A PMD radix tree entry will be inserted and used for locking and
dirty tracking, and each thread or process can fault in either PTEs or
PMDs into its own address space as needed.

We can revisit this, if you think it is incorrect. The option you outline
below would basically mean that if any thread were to fault in a PTE in a
range, all threads and processes would be forced to use PTEs because we
would use PTEs in the radix tree.

This is cleaner...I'm not sure if the use case of having two threads
accessing the same area, one with PTEs and one with PMDs, is actually
prevalent. It's also maybe a bit weird that the current behavior varies
based on which thread faulted first - if the PTE thread faults first,
it'll insert a PTE into the radix tree and everyone will just use PTEs.

> Actually I'm not convinced your patch quite fixes this, because
> dax_load_hole() or dax_insert_mapping_entry() will modify the passed
> entry with the assumption that it's a PTE entry, and so they will likely
> corrupt the entry in the radix tree.

I don't think we can ever call dax_load_hole() if we have a DAX PMD entry
in the radix tree, because we have a block mapping from the filesystem.

For dax_insert_mapping_entry(), we do the right thing. From the comments
above the function:

 * If we happen to be trying to insert a PTE and there is a PMD
 * already in the tree, we will skip the insertion and just dirty the PMD
 * as appropriate.

> So I think that to fix the first case we should rather modify
> grab_mapping_entry() to properly go through the pmd_downgrade path once
> we find a PMD entry while doing a PTE fault.
>
> What do you think?

That could also work, though I do think the fix as submitted is correct.
I think it comes down to whether we want to keep the behavior where a
thread faulting in PTEs will use an existing PMD entry in the radix tree,
instead of making all other threads fall back to PTEs.

I think either way solves this issue for the DAX case...but do you
understand how this is solved for other fault handlers? They don't have
any isolation between faults either in the mm/memory.c code, and are
susceptible to the same races. How do they deal with the fact that by the
time they get to their PTE fault handler, a racing PMD fault handler in
another thread could have inserted a PMD into their page tables, and vice
versa?
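The skip-and-dirty behavior Ross is pointing at can be sketched as
follows. This is a loose paraphrase of that era's
dax_insert_mapping_entry(), with the surrounding lookup and locking
omitted, so 'entry', 'slot', 'page_tree', and 'index' stand in for the
function's real locals:

  /* Loose sketch: only zero-page or empty placeholder entries are
   * replaced.  A PTE insert that finds a storage-backed PMD entry
   * skips the replacement and just dirties the existing PMD entry.
   */
  new_entry = dax_radix_locked_entry(sector, flags);
  if (dax_is_zero_entry(entry) || dax_is_empty_entry(entry)) {
      /* placeholder -> safe to replace with the new entry */
      radix_tree_replace_slot(page_tree, slot, new_entry);
      entry = new_entry;
  }
  /* reached for a PTE insert over a real PMD entry as well: the PMD
   * entry stays and only its dirty tag is updated */
  if (vmf->flags & FAULT_FLAG_WRITE)
      radix_tree_tag_set(page_tree, index, PAGECACHE_TAG_DIRTY);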
On Thu 18-05-17 15:29:39, Ross Zwisler wrote:
> On Thu, May 18, 2017 at 09:50:37AM +0200, Jan Kara wrote:
> > On Wed 17-05-17 11:16:39, Ross Zwisler wrote:
<>
> > So I don't see how this second scenario can happen. dax_iomap_pmd_fault()
> > will call grab_mapping_entry(). That will either find a PTE entry in the
> > radix tree -> EEXIST and we retry the fault, or we will not find a PTE
> > entry -> try to insert a PMD entry which collides with the PTE entry ->
> > EEXIST and we retry the fault. Am I missing something?
>
> Yep, sorry, I guess I needed a few extra steps in my flow (the initial
> private mapping read by CPU 0):
<>
> So what happens is that CPU 0 inserts a DAX PMD into the radix tree that
> has real storage backing, and all PTE and PMD faults just use that same
> PMD radix tree entry for locking and dirty tracking.

OK, I see now. So essentially it's the same catch as the other case -
grab_mapping_entry() returns a PMD entry on CPU 0 although we asked for a
PTE entry.

> > The first scenario seems to be possible.
<>
> Yep, it was a conscious decision when implementing the PMD support to
> allow one thread to use PMDs and another to use PTEs in the same range,
> as long as the thread faulting in PMDs is the first to insert into the
> radix tree. A PMD radix tree entry will be inserted and used for locking
> and dirty tracking, and each thread or process can fault in either PTEs
> or PMDs into its own address space as needed.

Well, for *threads* it doesn't really make good sense to mix PMDs and PTEs
as they share page tables. However for *processes* it makes some sense to
allow one process to use PTEs and another process to use PMDs. And I
remember we were discussing this in the past.

> We can revisit this, if you think it is incorrect. The option you outline
> below would basically mean that if any thread were to fault in a PTE in a
> range, all threads and processes would be forced to use PTEs because we
> would use PTEs in the radix tree.

Well, I don't think it is necessarily incorrect. I just think it is more
difficult to get it right (as the current bugs show), so I'm just
considering whether the complexity is worth it.

> This is cleaner...I'm not sure if the use case of having two threads
> accessing the same area, one with PTEs and one with PMDs, is actually
> prevalent. It's also maybe a bit weird that the current behavior varies
> based on which thread faulted first - if the PTE thread faults first,
> it'll insert a PTE into the radix tree and everyone will just use PTEs.

So for two *threads*, I don't think that is a sensible use-case. We just
have to get it right. For two *processes* it makes sense - your DB might
want to use PMDs while your backup program may just use PTEs. So thinking
more about it, I guess it is worth the effort to make the mixed case work
efficiently.

> > Actually I'm not convinced your patch quite fixes this, because
> > dax_load_hole() or dax_insert_mapping_entry() will modify the passed
> > entry with the assumption that it's a PTE entry, and so they will
> > likely corrupt the entry in the radix tree.
>
> I don't think we can ever call dax_load_hole() if we have a DAX PMD entry
> in the radix tree, because we have a block mapping from the filesystem.
>
> For dax_insert_mapping_entry(), we do the right thing. From the comments
> above the function:
>
>  * If we happen to be trying to insert a PTE and there is a PMD
>  * already in the tree, we will skip the insertion and just dirty the PMD
>  * as appropriate.

Yeah, on the first reading I missed that we won't modify the radix tree in
that particular case. Frankly, I think we should somewhat clean up that
code to make things more obvious, but let's leave that for a bit later.
For now the code looks correct.

> > So I think that to fix the first case we should rather modify
> > grab_mapping_entry() to properly go through the pmd_downgrade path once
> > we find a PMD entry while doing a PTE fault.
> >
> > What do you think?
>
> That could also work, though I do think the fix as submitted is correct.
> I think it comes down to whether we want to keep the behavior where a
> thread faulting in PTEs will use an existing PMD entry in the radix tree,
> instead of making all other threads fall back to PTEs.
>
> I think either way solves this issue for the DAX case...but do you
> understand how this is solved for other fault handlers? They don't have
> any isolation between faults either in the mm/memory.c code, and are
> susceptible to the same races. How do they deal with the fact that by the
> time they get to their PTE fault handler, a racing PMD fault handler in
> another thread could have inserted a PMD into their page tables, and vice
> versa?

So the normal fault path uses alloc_set_pte() for installing a new PTE.
And that uses pte_alloc_one_map(), which checks whether the PMD is still
suitable for inserting a PTE. If not, we return VM_FAULT_NOPAGE. Probably
it would be cleanest to factor out the common parts of PTE and PMD
insertion so that we can use these functions both from DAX and the generic
fault paths.

Anyway, I'll have a look at your fixes with fresh eyes as they could be
the right way to go as a quick fix. Refactoring and cleanups can come
later.

								Honza
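The generic-path revalidation Jan mentions boils down to this check
(paraphrased from pte_alloc_one_map() in mm/memory.c of that era, not the
verbatim source):

  /* Before mapping the PTE, recheck that a racing fault didn't turn
   * the PMD into a huge or devmap PMD after the earlier checks in
   * handle_pte_fault().  If it did, bail; VM_FAULT_NOPAGE makes the
   * caller retry the fault from scratch.
   */
  if (pmd_devmap(*vmf->pmd) || pmd_trans_unstable(vmf->pmd))
      return VM_FAULT_NOPAGE;

  vmf->pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd,
                                 vmf->address, &vmf->ptl);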
On Wed 17-05-17 11:16:39, Ross Zwisler wrote:
> We currently have two related PMD vs PTE races in the DAX code. These can
> both be easily triggered by having two threads reading and writing
> simultaneously to the same private mapping, with the key being that
> private mapping reads can be handled with PMDs but private mapping writes
> are always handled with PTEs so that we can COW.
<>
> @@ -1155,6 +1155,15 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf,
>  	}
>  
>  	/*
> +	 * It is possible, particularly with mixed reads & writes to private
> +	 * mappings, that we have raced with a PMD fault that overlaps with
> +	 * the PTE we need to set up.  Now that we have a locked mapping entry
> +	 * we can safely unmap the huge PMD so that we can install our PTE in
> +	 * our page tables.
> +	 */
> +	split_huge_pmd(vmf->vma, vmf->pmd, vmf->address);
> +

Can we just check the PMD and, if it isn't as we want it, bail out and
retry the fault? IMHO it will be more obvious that way (and also more in
line with how these races are handled for classical THP). Otherwise the
patch looks good to me.

								Honza
On Mon, May 22, 2017 at 04:44:57PM +0200, Jan Kara wrote:
> On Wed 17-05-17 11:16:39, Ross Zwisler wrote:
<>
> > @@ -1155,6 +1155,15 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf,
> >  	}
> >  
> >  	/*
> > +	 * It is possible, particularly with mixed reads & writes to private
> > +	 * mappings, that we have raced with a PMD fault that overlaps with
> > +	 * the PTE we need to set up.  Now that we have a locked mapping entry
> > +	 * we can safely unmap the huge PMD so that we can install our PTE in
> > +	 * our page tables.
> > +	 */
> > +	split_huge_pmd(vmf->vma, vmf->pmd, vmf->address);
> > +
>
> Can we just check the PMD and, if it isn't as we want it, bail out and
> retry the fault? IMHO it will be more obvious that way (and also more in
> line with how these races are handled for classical THP). Otherwise the
> patch looks good to me.

Yep, that works as well. I'll do this for v2. Thanks for the review.
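The v2 PTE-side hunk would then presumably replace the split_huge_pmd()
call with a recheck along these lines. This is only a sketch of the
approach agreed on above, not the actual v2 patch, and the variable name
is a guess:

  /*
   * It is possible that a racing PMD fault installed a huge PMD
   * where our PTE should go.  Rather than splitting it, bail out;
   * with nothing mapped, the fault will simply be retried.
   */
  if (pmd_trans_huge(*vmf->pmd) || pmd_devmap(*vmf->pmd)) {
      vmf_ret = 0;
      goto unlock_entry;
  }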
On Mon, May 22, 2017 at 04:37:48PM +0200, Jan Kara wrote:
> On Thu 18-05-17 15:29:39, Ross Zwisler wrote:
> > On Thu, May 18, 2017 at 09:50:37AM +0200, Jan Kara wrote:
> > > On Wed 17-05-17 11:16:39, Ross Zwisler wrote:
<>
> > > The first scenario seems to be possible. dax_iomap_pmd_fault() will
> > > create a PMD entry in the radix tree. Then dax_iomap_pte_fault() will
> > > come, do grab_mapping_entry(), and there it sees the entry is a PMD
> > > while we are doing a PTE fault, so I'd think that pmd_downgrade =
> > > true... But actually the condition there doesn't trigger in this
> > > case. And that's the catch: although we asked grab_mapping_entry()
> > > for a PTE, we've got a PMD back, and that screws us later.
> >
> > Yep, it was a conscious decision when implementing the PMD support to
> > allow one thread to use PMDs and another to use PTEs in the same range,
> > as long as the thread faulting in PMDs is the first to insert into the
> > radix tree. A PMD radix tree entry will be inserted and used for
> > locking and dirty tracking, and each thread or process can fault in
> > either PTEs or PMDs into its own address space as needed.
>
> Well, for *threads* it doesn't really make good sense to mix PMDs and
> PTEs as they share page tables. However for *processes* it makes some
> sense to allow one process to use PTEs and another process to use PMDs.
> And I remember we were discussing this in the past.

Ugh, I was super sloppy with my use of "thread" and "process" in my
previous email. Sorry, and thanks for the clarifications. I think we're on
the same page, even if I had trouble articulating it. :)

> So the normal fault path uses alloc_set_pte() for installing a new PTE.
> And that uses pte_alloc_one_map(), which checks whether the PMD is still
> suitable for inserting a PTE. If not, we return VM_FAULT_NOPAGE. Probably
> it would be cleanest to factor out the common parts of PTE and PMD
> insertion so that we can use these functions both from DAX and the
> generic fault paths.

Makes sense, thanks.
diff --git a/fs/dax.c b/fs/dax.c
index c22eaf1..3cc02d1 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1155,6 +1155,15 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf,
 	}
 
 	/*
+	 * It is possible, particularly with mixed reads & writes to private
+	 * mappings, that we have raced with a PMD fault that overlaps with
+	 * the PTE we need to set up.  Now that we have a locked mapping entry
+	 * we can safely unmap the huge PMD so that we can install our PTE in
+	 * our page tables.
+	 */
+	split_huge_pmd(vmf->vma, vmf->pmd, vmf->address);
+
+	/*
 	 * Note that we don't bother to use iomap_apply here: DAX required
 	 * the file system block size to be equal the page size, which means
 	 * that we never have to deal with more than a single extent here.
@@ -1398,6 +1407,15 @@ static int dax_iomap_pmd_fault(struct vm_fault *vmf,
 		goto fallback;
 
 	/*
+	 * It is possible, particularly with mixed reads & writes to private
+	 * mappings, that we have raced with a PTE fault that overlaps with
+	 * the PMD we need to set up.  If so we just fall back to a PTE fault
+	 * ourselves.
+	 */
+	if (!pmd_none(*vmf->pmd))
+		goto unlock_entry;
+
+	/*
 	 * Note that we don't use iomap_apply here.  We aren't doing I/O, only
 	 * setting up a mapping, so really we're using iomap_begin() as a way
 	 * to look up our filesystem block.
We currently have two related PMD vs PTE races in the DAX code. These can
both be easily triggered by having two threads reading and writing
simultaneously to the same private mapping, with the key being that
private mapping reads can be handled with PMDs but private mapping writes
are always handled with PTEs so that we can COW.

Here is the first race:

CPU 0                                   CPU 1

(private mapping write)
__handle_mm_fault()
  create_huge_pmd() - FALLBACK
  handle_pte_fault()
    passes check for pmd_devmap()

                                        (private mapping read)
                                        __handle_mm_fault()
                                          create_huge_pmd()
                                            dax_iomap_pmd_fault() inserts PMD

dax_iomap_pte_fault() does a PTE fault, but we already have a DAX PMD
installed in our page tables at this spot.

Here's the second race:

CPU 0                                   CPU 1

(private mapping write)
__handle_mm_fault()
  create_huge_pmd() - FALLBACK
                                        (private mapping read)
                                        __handle_mm_fault()
                                          passes check for pmd_none()
                                          create_huge_pmd()

handle_pte_fault()
  dax_iomap_pte_fault() inserts PTE
                                        dax_iomap_pmd_fault() inserts PMD,
                                        but we already have a PTE at
                                        this spot.

The core of the issue is that while there is isolation between faults to
the same range in the DAX fault handlers via our DAX entry locking, there
is no isolation between faults in the code in mm/memory.c. This means for
instance that this code in __handle_mm_fault() can run:

	if (pmd_none(*vmf.pmd) && transparent_hugepage_enabled(vma)) {
		ret = create_huge_pmd(&vmf);

But by the time we actually get to run the fault handler called by
create_huge_pmd(), the PMD is no longer pmd_none() because a racing PTE
fault has installed a normal PMD here as a parent. This is the cause of
the 2nd race. The first race is similar - there is the following check in
handle_pte_fault():

	} else {
		/* See comment in pte_alloc_one_map() */
		if (pmd_devmap(*vmf->pmd) || pmd_trans_unstable(vmf->pmd))
			return 0;

So if a pmd_devmap() PMD (a DAX PMD) has been installed at vmf->pmd, we
will bail and retry the fault. This is correct, but there is nothing
preventing the PMD from being installed after this check but before we
actually get to the DAX PTE fault handlers.

In my testing these races result in the following types of errors:

  BUG: Bad rss-counter state mm:ffff8800a817d280 idx:1 val:1
  BUG: non-zero nr_ptes on freeing mm: 15

Fix this issue by having the DAX fault handlers verify that it is safe to
continue their fault after they have taken an entry lock to block other
racing faults.

Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Reported-by: Pawel Lebioda <pawel.lebioda@intel.com>
Cc: stable@vger.kernel.org

---

I've written a new xfstest for this race, which I will send in response to
this patch series. This series has also survived an xfstest run without
any new issues.

---
 fs/dax.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)
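For context, the trigger described above boils down to something like the
following userspace sketch: two threads doing simultaneous reads and
writes through the same MAP_PRIVATE mapping of a file on a DAX
filesystem. This is a hypothetical illustration of the commit message's
scenario (path, sizes, and iteration counts are made up; error handling
omitted), not the xfstest mentioned earlier:

  #include <fcntl.h>
  #include <pthread.h>
  #include <sys/mman.h>
  #include <unistd.h>

  #define MAP_LEN (2UL << 20)   /* one PMD's worth: 2MiB on x86_64 */
  #define PAGE    4096UL

  static char *map;

  /* Reads through a private mapping can be served by DAX PMD faults. */
  static void *reader(void *arg)
  {
      volatile char sink;
      for (unsigned long i = 0; i < 1000000; i++)
          sink = map[(i * PAGE) % MAP_LEN];
      (void)sink;
      return NULL;
  }

  /* Writes to a private mapping must COW, so they always use PTE faults. */
  static void *writer(void *arg)
  {
      for (unsigned long i = 0; i < 1000000; i++)
          map[(i * PAGE) % MAP_LEN] = 1;
      return NULL;
  }

  int main(void)
  {
      /* Hypothetical setup: /mnt/dax is mounted with -o dax and
       * testfile is a pre-created file of at least 2MiB.  A real test
       * would also force 2MiB alignment of the mapping address so that
       * PMD faults are actually possible. */
      int fd = open("/mnt/dax/testfile", O_RDWR);
      pthread_t r, w;

      map = mmap(NULL, MAP_LEN, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE, fd, 0);

      pthread_create(&r, NULL, reader, NULL);
      pthread_create(&w, NULL, writer, NULL);
      pthread_join(r, NULL);
      pthread_join(w, NULL);

      /* on affected kernels, the races manifest as "Bad rss-counter
       * state" / "non-zero nr_ptes" warnings when the mm is torn down */
      return 0;
  }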