
[4/7] mm/hmm: properly handle migration pmd

Message ID 20180824192549.30844-5-jglisse@redhat.com (mailing list archive)
State New, archived
Series: HMM updates, improvements and fixes

Commit Message

Jerome Glisse Aug. 24, 2018, 7:25 p.m. UTC
From: Jérôme Glisse <jglisse@redhat.com>

Before this patch a migration pmd entry (!pmd_present()) would have
been treated as a bad entry (pmd_bad() returns true on a migration
pmd entry). The outcome was that the device driver would believe that
the range covered by the pmd was bad and would either SIGBUS or
simply kill all the device's threads (each device driver decides
how to react when the device tries to access a poisonous or invalid
range of memory).

This patch explicitly handles the case of migration pmd entries,
which are non-present pmd entries, and either waits for the migration
to finish or reports an empty range (when the device is just trying
to pre-fill a range of virtual addresses and thus does not want to
wait or trigger a page fault).

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Cc: Ralph Campbell <rcampbell@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
---
 mm/hmm.c | 45 +++++++++++++++++++++++++++++++++++++++------
 1 file changed, 39 insertions(+), 6 deletions(-)
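
As a rough illustration of the control flow this patch introduces for a
non-present pmd, here is a small user-space mock (the struct, the function
name and the return conventions below are stand-ins for the kernel code,
not the real implementation):

```c
#include <assert.h>
#include <stdbool.h>

#define EAGAIN 11

/* Mock of the state hmm_vma_walk_pmd() inspects for a non-present pmd. */
struct mock_walk {
	bool migration_entry;	/* the pmd is a migration entry          */
	bool want_fault;	/* caller asked for faulting, i.e. it is */
				/* not just pre-filling the range        */
};

/*
 * Sketch of the new behaviour: a migrating pmd either makes the caller
 * wait for the migration (-EAGAIN, after pmd_migration_entry_wait() in
 * the real code) or is reported as an empty range (0); any other
 * non-present pmd is bad (-1 stands in for hmm_pfns_bad()).
 */
static int walk_non_present_pmd(const struct mock_walk *w)
{
	if (w->migration_entry) {
		if (w->want_fault)
			return -EAGAIN;
		return 0;
	}
	return -1;
}
```
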

Comments

Zi Yan Aug. 25, 2018, 12:05 a.m. UTC | #1
Hi Jérôme,

On 24 Aug 2018, at 15:25, jglisse@redhat.com wrote:

> From: Jérôme Glisse <jglisse@redhat.com>
>
> [...]
>
> diff --git a/mm/hmm.c b/mm/hmm.c
> index a16678d08127..659efc9aada6 100644
> --- a/mm/hmm.c
> +++ b/mm/hmm.c
> @@ -577,22 +577,47 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
>  {
>  	struct hmm_vma_walk *hmm_vma_walk = walk->private;
>  	struct hmm_range *range = hmm_vma_walk->range;
> +	struct vm_area_struct *vma = walk->vma;
>  	uint64_t *pfns = range->pfns;
>  	unsigned long addr = start, i;
>  	pte_t *ptep;
> +	pmd_t pmd;
>
> -	i = (addr - range->start) >> PAGE_SHIFT;
>
>  again:
> -	if (pmd_none(*pmdp))
> +	pmd = READ_ONCE(*pmdp);
> +	if (pmd_none(pmd))
>  		return hmm_vma_walk_hole(start, end, walk);
>
> -	if (pmd_huge(*pmdp) && (range->vma->vm_flags & VM_HUGETLB))
> +	if (pmd_huge(pmd) && (range->vma->vm_flags & VM_HUGETLB))
>  		return hmm_pfns_bad(start, end, walk);
>
> -	if (pmd_devmap(*pmdp) || pmd_trans_huge(*pmdp)) {
> -		pmd_t pmd;
> +	if (!pmd_present(pmd)) {
> +		swp_entry_t entry = pmd_to_swp_entry(pmd);
> +
> +		if (is_migration_entry(entry)) {

I think you should check thp_migration_supported() here, since PMD migration is only enabled in x86_64 systems.
Other architectures should treat PMD migration entries as bad.

> +			bool fault, write_fault;
> +			unsigned long npages;
> +			uint64_t *pfns;
> +
> +			i = (addr - range->start) >> PAGE_SHIFT;
> +			npages = (end - addr) >> PAGE_SHIFT;
> +			pfns = &range->pfns[i];
> +
> +			hmm_range_need_fault(hmm_vma_walk, pfns, npages,
> +					     0, &fault, &write_fault);
> +			if (fault || write_fault) {
> +				hmm_vma_walk->last = addr;
> +				pmd_migration_entry_wait(vma->vm_mm, pmdp);
> +				return -EAGAIN;
> +			}
> +			return 0;
> +		}
> +
> +		return hmm_pfns_bad(start, end, walk);
> +	}
>

—
Best Regards,
Yan Zi
Jerome Glisse Aug. 28, 2018, 12:35 a.m. UTC | #2
On Fri, Aug 24, 2018 at 08:05:46PM -0400, Zi Yan wrote:
> Hi Jérôme,
> 
> On 24 Aug 2018, at 15:25, jglisse@redhat.com wrote:
> 
> > [...]
> > +	if (!pmd_present(pmd)) {
> > +		swp_entry_t entry = pmd_to_swp_entry(pmd);
> > +
> > +		if (is_migration_entry(entry)) {
> 
> I think you should check thp_migration_supported() here, since PMD migration is only enabled in x86_64 systems.
> Other architectures should treat PMD migration entries as bad.

You are right. Andrew, do you want a repost or can you edit the above
if to:

if (thp_migration_supported() && is_migration_entry(entry)) {

Cheers,
Jérôme
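
The guard suggested above can be illustrated with a small user-space mock
(the function names mirror the kernel helpers, but the bodies and the pmd
encoding here are made up for illustration only): when the architecture
cannot produce pmd migration entries, thp_migration_supported()
short-circuits, so a non-present pmd can never be misread as migrating.

```c
#include <assert.h>
#include <stdbool.h>

typedef unsigned long mock_pmd_t;	/* stand-in: a pmd is just bits  */

#define MOCK_MIGRATION_BIT (1UL << 1)	/* made-up migration-entry bit   */

/* Stand-in for CONFIG_ARCH_ENABLE_THP_MIGRATION being set or not. */
static bool thp_migration_enabled;

static bool thp_migration_supported(void)
{
	return thp_migration_enabled;
}

/* Mock is_migration_entry(): just tests the made-up bit. */
static bool is_migration_entry(mock_pmd_t pmd)
{
	return pmd & MOCK_MIGRATION_BIT;
}

/* The guarded form from the suggested fix. */
static bool pmd_is_migration(mock_pmd_t pmd)
{
	return thp_migration_supported() && is_migration_entry(pmd);
}
```
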
Michal Hocko Aug. 28, 2018, 3:24 p.m. UTC | #3
On Fri 24-08-18 20:05:46, Zi Yan wrote:
[...]
> > +	if (!pmd_present(pmd)) {
> > +		swp_entry_t entry = pmd_to_swp_entry(pmd);
> > +
> > +		if (is_migration_entry(entry)) {
> 
> I think you should check thp_migration_supported() here, since PMD migration is only enabled in x86_64 systems.
> Other architectures should treat PMD migration entries as bad.

How can we have a migration pmd entry when the migration is not
supported?
Jerome Glisse Aug. 28, 2018, 3:36 p.m. UTC | #4
On Tue, Aug 28, 2018 at 05:24:14PM +0200, Michal Hocko wrote:
> On Fri 24-08-18 20:05:46, Zi Yan wrote:
> [...]
> > > +	if (!pmd_present(pmd)) {
> > > +		swp_entry_t entry = pmd_to_swp_entry(pmd);
> > > +
> > > +		if (is_migration_entry(entry)) {
> > 
> > I think you should check thp_migration_supported() here, since PMD migration is only enabled in x86_64 systems.
> > Other architectures should treat PMD migration entries as bad.
> 
> How can we have a migration pmd entry when the migration is not
> supported?

Not sure I follow here; migration can happen anywhere (assuming
that something like compaction is active, or NUMA, or ...). So this
code can face a pmd migration entry on architectures that support
it. What is missing here is a thp_migration_supported() call to
protect the is_migration_entry() check and avoid a false positive
on architectures which do not support THP migration.

Cheers,
Jérôme
Michal Hocko Aug. 28, 2018, 3:42 p.m. UTC | #5
On Tue 28-08-18 11:36:59, Jerome Glisse wrote:
> On Tue, Aug 28, 2018 at 05:24:14PM +0200, Michal Hocko wrote:
> > [...]
> > How can we have a migration pmd entry when the migration is not
> > supported?
> 
> Not sure I follow here; migration can happen anywhere (assuming
> that something like compaction is active, or NUMA, or ...). So this
> code can face a pmd migration entry on architectures that support
> it. What is missing here is a thp_migration_supported() call to
> protect the is_migration_entry() check and avoid a false positive
> on architectures which do not support THP migration.

I mean that architectures which do not support THP migration shouldn't
ever see any migration entry. So is_migration_entry should be always
false. Or do I miss something?
Michal Hocko Aug. 28, 2018, 3:45 p.m. UTC | #6
On Tue 28-08-18 17:42:06, Michal Hocko wrote:
> [...]
> 
> I mean that architectures which do not support THP migration shouldn't
> ever see any migration entry. So is_migration_entry should be always
> false. Or do I miss something?

And just to be clear: thp_migration_supported should be checked only
when we actually _do_ the migration or evaluate the migratability of a
page. We definitely do not want to sprinkle this check over all places
where is_migration_entry is checked.
Zi Yan Aug. 28, 2018, 3:54 p.m. UTC | #7
Hi Michal,

On 28 Aug 2018, at 11:45, Michal Hocko wrote:

> [...]
>
> And just to be clear: thp_migration_supported should be checked only
> when we actually _do_ the migration or evaluate the migratability of a
> page. We definitely do not want to sprinkle this check over all places
> where is_migration_entry is checked.

is_migration_entry() is a general check for swp_entry_t, so it can return
true even if THP migration is not enabled. is_pmd_migration_entry() always
returns false when THP migration is not enabled.

So the code can be changed in two ways, either replacing is_migration_entry()
with is_pmd_migration_entry() or adding thp_migration_supported() check
like Jerome did.

Does this clarify your question?

—
Best Regards,
Yan Zi
Jerome Glisse Aug. 28, 2018, 4:06 p.m. UTC | #8
On Tue, Aug 28, 2018 at 11:54:33AM -0400, Zi Yan wrote:
> Hi Michal,
> 
> On 28 Aug 2018, at 11:45, Michal Hocko wrote:
> 
> > [...]
> 
> is_migration_entry() is a general check for swp_entry_t, so it can return
> true even if THP migration is not enabled. is_pmd_migration_entry() always
> returns false when THP migration is not enabled.
> 
> So the code can be changed in two ways, either replacing is_migration_entry()
> with is_pmd_migration_entry() or adding thp_migration_supported() check
> like Jerome did.
> 
> Does this clarify your question?
> 

Well, looking back at the code, is_migration_entry() will return false
on architectures which do not have THP migration, because
pmd_to_swp_entry() will return swp_entry(0, 0), which cannot be a
valid migration entry.

Maybe using is_pmd_migration_entry() would be better here? It seems
that is_pmd_migration_entry() is more common than the open-coded
thp_migration_supported() && is_migration_entry().

Cheers,
Jérôme
Michal Hocko Aug. 28, 2018, 4:10 p.m. UTC | #9
On Tue 28-08-18 11:54:33, Zi Yan wrote:
> Hi Michal,
> 
> On 28 Aug 2018, at 11:45, Michal Hocko wrote:
> 
> > [...]
> 
> is_migration_entry() is a general check for swp_entry_t, so it can return
> true even if THP migration is not enabled. is_pmd_migration_entry() always
> returns false when THP migration is not enabled.
> 
> So the code can be changed in two ways, either replacing is_migration_entry()
> with is_pmd_migration_entry() or adding thp_migration_supported() check
> like Jerome did.
> 
> Does this clarify your question?

Not really. IIUC, the code checks the pmd. So even though
is_migration_entry is a more generic check, it should never return
true when thp_migration_supported() is false, because we simply never
have those entries, unless I am missing something.

is_pmd_migration_entry is much more readable, of course, and I suspect
it can save a few cycles as well.

Patch

diff --git a/mm/hmm.c b/mm/hmm.c
index a16678d08127..659efc9aada6 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -577,22 +577,47 @@  static int hmm_vma_walk_pmd(pmd_t *pmdp,
 {
 	struct hmm_vma_walk *hmm_vma_walk = walk->private;
 	struct hmm_range *range = hmm_vma_walk->range;
+	struct vm_area_struct *vma = walk->vma;
 	uint64_t *pfns = range->pfns;
 	unsigned long addr = start, i;
 	pte_t *ptep;
+	pmd_t pmd;
 
-	i = (addr - range->start) >> PAGE_SHIFT;
 
 again:
-	if (pmd_none(*pmdp))
+	pmd = READ_ONCE(*pmdp);
+	if (pmd_none(pmd))
 		return hmm_vma_walk_hole(start, end, walk);
 
-	if (pmd_huge(*pmdp) && (range->vma->vm_flags & VM_HUGETLB))
+	if (pmd_huge(pmd) && (range->vma->vm_flags & VM_HUGETLB))
 		return hmm_pfns_bad(start, end, walk);
 
-	if (pmd_devmap(*pmdp) || pmd_trans_huge(*pmdp)) {
-		pmd_t pmd;
+	if (!pmd_present(pmd)) {
+		swp_entry_t entry = pmd_to_swp_entry(pmd);
+
+		if (is_migration_entry(entry)) {
+			bool fault, write_fault;
+			unsigned long npages;
+			uint64_t *pfns;
+
+			i = (addr - range->start) >> PAGE_SHIFT;
+			npages = (end - addr) >> PAGE_SHIFT;
+			pfns = &range->pfns[i];
+
+			hmm_range_need_fault(hmm_vma_walk, pfns, npages,
+					     0, &fault, &write_fault);
+			if (fault || write_fault) {
+				hmm_vma_walk->last = addr;
+				pmd_migration_entry_wait(vma->vm_mm, pmdp);
+				return -EAGAIN;
+			}
+			return 0;
+		}
+
+		return hmm_pfns_bad(start, end, walk);
+	}
 
+	if (pmd_devmap(pmd) || pmd_trans_huge(pmd)) {
 		/*
 		 * No need to take pmd_lock here, even if some other threads
 		 * is splitting the huge pmd we will get that event through
@@ -607,13 +632,21 @@  static int hmm_vma_walk_pmd(pmd_t *pmdp,
 		if (!pmd_devmap(pmd) && !pmd_trans_huge(pmd))
 			goto again;
 
+		i = (addr - range->start) >> PAGE_SHIFT;
 		return hmm_vma_handle_pmd(walk, addr, end, &pfns[i], pmd);
 	}
 
-	if (pmd_bad(*pmdp))
+	/*
+	 * We have handled all the valid case above ie either none, migration,
+	 * huge or transparent huge. At this point either it is a valid pmd
+	 * entry pointing to pte directory or it is a bad pmd that will not
+	 * recover.
+	 */
+	if (pmd_bad(pmd))
 		return hmm_pfns_bad(start, end, walk);
 
 	ptep = pte_offset_map(pmdp, addr);
+	i = (addr - range->start) >> PAGE_SHIFT;
 	for (; addr < end; addr += PAGE_SIZE, ptep++, i++) {
 		int r;