
[V2] arm64: hwpoison: add VM_FAULT_HWPOISON[_LARGE] handling

Message ID 87efy6mjgj.fsf@e105922-lin.cambridge.arm.com (mailing list archive)
State New, archived

Commit Message

Punit Agrawal March 9, 2017, 5:46 p.m. UTC
[ +steve for arm64 mm and hugepages chops ]

"Baicar, Tyler" <tbaicar@codeaurora.org> writes:

> On 3/7/2017 12:56 PM, Punit Agrawal wrote:
>> Punit Agrawal <punit.agrawal@arm.com> writes:
>>
>> [...]
>>
>>> The code looks good but I ran into some failures while running the
>>> hugepages hwpoison tests from mce-tests suite[0]. I get a bad pmd error
>>> in dmesg -
>>>
>>> [  344.165544] mm/pgtable-generic.c:33: bad pmd 000000083af00074.
>>>
>>> I suspect that this is due to the huge pte accessors not correctly
>>> dealing with poisoned entries (which are represented as swap entries).
>> I think I've got to the bottom of the issue - the problem is due to
>> huge_pte_offset() returning NULL for poisoned pmd entries (which in turn is
>> due to pmd_present() not handling poisoned pmd entries correctly)
>>
>> The following is the call chain for the failure case.
>>
>> do_munmap
>>    unmap_region
>>      unmap_vmas
>>        unmap_single_vma
>>          __unmap_hugepage_range_final    # The test case uses hugepages
>>            __unmap_hugepage_range
>>              huge_pte_offset             # Returns NULL for a poisoned pmd
>>
>> Reverting 5bb1cc0ff9a6 ("arm64: Ensure pmd_present() returns false after
>> pmd_mknotpresent()") fixes the problem for me but I don't think that is
>> the right fix.
>>
>> While I work on a proper fix, it would be great if you can confirm that
>> reverting 5bb1cc0ff9a6 makes the problem go away at your end.
> Thanks Punit! I haven't got a chance to do this yet, but I will let
> you know once I get it tested :)

This time with a patch. Please test this instead.

After a lot of head scratching, I've bitten the bullet and added a check to
return the poisoned entry from huge_pte_offset(). What with having to
deal with contiguous hugepages et al., there just doesn't seem to be any
leeway in how we handle the situation here.
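
To make the failure mode concrete, here's a minimal user-space model of
the two checks involved (a sketch only; the bit positions are lifted
from the arm64 headers, and none of this is the actual kernel code):

#include <stdint.h>
#include <stdio.h>

#define PTE_VALID      (UINT64_C(1) << 0)
#define PTE_PROT_NONE  (UINT64_C(1) << 58)   /* software bit */

/* pmd_present(): valid or PROT_NONE; a swap entry sets neither bit */
static int model_pmd_present(uint64_t pmd)
{
	return !!(pmd & (PTE_VALID | PTE_PROT_NONE));
}

/* pmd_none(): a genuinely empty entry is all zeroes */
static int model_pmd_none(uint64_t pmd)
{
	return pmd == 0;
}

int main(void)
{
	uint64_t poisoned = UINT64_C(0x83af00074);   /* the "bad pmd" above */

	/* prints "present=0 none=0": exactly the case huge_pte_offset()
	 * used to drop on the floor by returning NULL */
	printf("present=%d none=%d\n",
	       model_pmd_present(poisoned), model_pmd_none(poisoned));
	return 0;
}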

Let's see if there are any other ideas. Patch follows.

Thanks,
Punit

----------->8-------------
From d5ad3f428e629c80b0f93f2bbdf99b4cae28c9bc Mon Sep 17 00:00:00 2001
From: Punit Agrawal <punit.agrawal@arm.com>
Date: Thu, 9 Mar 2017 16:16:29 +0000
Subject: [PATCH] arm64: hugetlb: Fix huge_pte_offset to return poisoned pmd

When memory failure is enabled, a poisoned hugepage PMD is marked as a
swap entry. As pmd_present() only checks the VALID and PROT_NONE
bits (both clear for swap entries), it causes huge_pte_offset() to
return NULL for poisoned PMDs.

This behaviour of huge_pte_offset() leads to errors such as the one
below when munmap() is called on poisoned hugepages.

[  344.165544] mm/pgtable-generic.c:33: bad pmd 000000083af00074.

Fix huge_pte_offset() to return the poisoned PMD which is then
appropriately handled by the generic layer code.

Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Steve Capper <steve.capper@arm.com>
---
 arch/arm64/mm/hugetlbpage.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

--
2.11.0

Comments

Tyler Baicar March 10, 2017, 6:06 p.m. UTC | #1
Hello Punit,

I ran the test with and without the kernel patch you're suggesting 
below. I do not see the "bad pmd ..." print that you are seeing in 
either case. Not all the tests are passing for me, though: the suite runs
42 test cases and 14 show up as failed for some reason.

Thanks,

Tyler


On 3/9/2017 10:46 AM, Punit Agrawal wrote:
> [ +steve for arm64 mm and hugepages chops ]
>
> "Baicar, Tyler" <tbaicar@codeaurora.org> writes:
>
>> On 3/7/2017 12:56 PM, Punit Agrawal wrote:
>>> Punit Agrawal <punit.agrawal@arm.com> writes:
>>>
>>> [...]
>>>
>>>> The code looks good but I ran into some failures while running the
>>>> hugepages hwpoison tests from mce-tests suite[0]. I get a bad pmd error
>>>> in dmesg -
>>>>
>>>> [  344.165544] mm/pgtable-generic.c:33: bad pmd 000000083af00074.
>>>>
>>>> I suspect that this is due to the huge pte accessors not correctly
>>>> dealing with poisoned entries (which are represented as swap entries).
>>> I think I've got to the bottom of the issue - the problem is due to
>>> huge_pte_offset() returning NULL for poisoned pmd entries (which in turn is
>>> due to pmd_present() not handling poisoned pmd entries correctly)
>>>
>>> The following is the call chain for the failure case.
>>>
>>> do_munmap
>>>     unmap_region
>>>       unmap_vmas
>>>         unmap_single_vma
>>>           __unmap_hugepage_range_final    # The test case uses hugepages
>>>             __unmap_hugepage_range
>>>               huge_pte_offset             # Returns NULL for a poisoned pmd
>>>
>>> Reverting 5bb1cc0ff9a6 ("arm64: Ensure pmd_present() returns false after
>>> pmd_mknotpresent()") fixes the problem for me but I don't think that is
>>> the right fix.
>>>
>>> While I work on a proper fix, it would be great if you can confirm that
>>> reverting 5bb1cc0ff9a6 makes the problem go away at your end.
>> Thanks Punit! I haven't got a chance to do this yet, but I will let
>> you know once I get it tested :)
> This time with a patch. Please test this instead.
>
> After a lot of head scratching, I've bitten the bullet and added a check to
> return the poisoned entry from huge_pte_offset(). What with having to
> deal with contiguous hugepages et al., there just doesn't seem to be any
> leeway in how we handle the situation here.
>
> Let's see if there are any other ideas. Patch follows.
>
> Thanks,
> Punit
>
> ----------->8-------------
>  From d5ad3f428e629c80b0f93f2bbdf99b4cae28c9bc Mon Sep 17 00:00:00 2001
> From: Punit Agrawal <punit.agrawal@arm.com>
> Date: Thu, 9 Mar 2017 16:16:29 +0000
> Subject: [PATCH] arm64: hugetlb: Fix huge_pte_offset to return poisoned pmd
>
> When memory failure is enabled, a poisoned hugepage PMD is marked as a
> swap entry. As pmd_present() only checks the VALID and PROT_NONE
> bits (both clear for swap entries), it causes huge_pte_offset() to
> return NULL for poisoned PMDs.
>
> This behaviour of huge_pte_offset() leads to errors such as the one
> below when munmap() is called on poisoned hugepages.
>
> [  344.165544] mm/pgtable-generic.c:33: bad pmd 000000083af00074.
>
> Fix huge_pte_offset() to return the poisoned PMD which is then
> appropriately handled by the generic layer code.
>
> Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Steve Capper <steve.capper@arm.com>
> ---
>   arch/arm64/mm/hugetlbpage.c | 11 ++++++++++-
>   1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
> index e25584d72396..9263f206353c 100644
> --- a/arch/arm64/mm/hugetlbpage.c
> +++ b/arch/arm64/mm/hugetlbpage.c
> @@ -150,8 +150,17 @@ pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr)
>          if (pud_huge(*pud))
>                  return (pte_t *)pud;
>          pmd = pmd_offset(pud, addr);
> +
> +       /*
> +        * In case of HW Poisoning, a hugepage pmd can contain
> +        * poisoned entries. Poisoned entries are marked as swap
> +        * entries.
> +        *
> +        * For pmds that are not present, check to see if it could be
> +        * a swap entry (!present and !none) before giving up.
> +        */
>          if (!pmd_present(*pmd))
> -               return NULL;
> +               return !pmd_none(*pmd) ? (pte_t *)pmd : NULL;
>
>          if (pte_cont(pmd_pte(*pmd))) {
>                  pmd = pmd_offset(
> --
> 2.11.0
Punit Agrawal March 14, 2017, 4:20 p.m. UTC | #2
Hi Tyler,

"Baicar, Tyler" <tbaicar@codeaurora.org> writes:

> Hello Punit,
>
> I ran the test with and without the kernel patch you're suggesting
> below. I do not see the "bad pmd ..." print that you are seeing in
> either case.

Thanks for trying out the patch. It's important to understand why we are
seeing the difference in behaviour.

Looking at the code path, you should be hitting the "bad pmd" pr_err in
dmesg. Any chance either hugepages or the memory failure configs weren't
enabled in the test kernel?

The test script (run_hugepage.sh) isn't particularly robust. It carries
on executing even though some of the pre-conditions are not satisfied. I
had seen the script continue even though some of the dependencies were
missing from the "bin" directory in the mce-test repo (fixed by running
"make install" in tools/page-types).

Also, I reduced the console output and dmesg noise by executing only the
failing test in run_hugepage.sh -

"exec_testcase head late_touch file fork_shared killed".

Can you try re-running with the other tests commented out?

> Not all the tests are passing for me, though: the suite runs 42
> test cases and 14 show up as failed for some reason.

I see similar behaviour. I think the failures are due to timing
sensitivity when synchronising multi-process test cases - I saw a
comment implying this somewhere but can't seem to find it now.

Thanks,
Punit

>
> Thanks,
>
> Tyler
>
>
> On 3/9/2017 10:46 AM, Punit Agrawal wrote:
>> [ +steve for arm64 mm and hugepages chops ]
>>
>> "Baicar, Tyler" <tbaicar@codeaurora.org> writes:
>>
>>> On 3/7/2017 12:56 PM, Punit Agrawal wrote:
>>>> Punit Agrawal <punit.agrawal@arm.com> writes:
>>>>
>>>> [...]
>>>>
>>>>> The code looks good but I ran into some failures while running the
>>>>> hugepages hwpoison tests from mce-tests suite[0]. I get a bad pmd error
>>>>> in dmesg -
>>>>>
>>>>> [  344.165544] mm/pgtable-generic.c:33: bad pmd 000000083af00074.
>>>>>
>>>>> I suspect that this is due to the huge pte accessors not correctly
>>>>> dealing with poisoned entries (which are represented as swap entries).
>>>> I think I've got to the bottom of the issue - the problem is due to
>>>> huge_pte_offset() returning NULL for poisoned pmd entries (which in turn is
>>>> due to pmd_present() not handling poisoned pmd entries correctly)
>>>>
>>>> The following is the call chain for the failure case.
>>>>
>>>> do_munmap
>>>>     unmap_region
>>>>       unmap_vmas
>>>>         unmap_single_vma
>>>>           __unmap_hugepage_range_final    # The test case uses hugepages
>>>>             __unmap_hugepage_range
>>>>               huge_pte_offset             # Returns NULL for a poisoned pmd
>>>>
>>>> Reverting 5bb1cc0ff9a6 ("arm64: Ensure pmd_present() returns false after
>>>> pmd_mknotpresent()") fixes the problem for me but I don't think that is
>>>> the right fix.
>>>>
>>>> While I work on a proper fix, it would be great if you can confirm that
>>>> reverting 5bb1cc0ff9a6 makes the problem go away at your end.
>>> Thanks Punit! I haven't got a chance to do this yet, but I will let
>>> you know once I get it tested :)
>> This time with a patch. Please test this instead.
>>
>> After a lot of head scratching, I've bitten the bullet and added a check to
>> return the poisoned entry from huge_pte_offset(). What with having to
>> deal with contiguous hugepages et al., there just doesn't seem to be any
>> leeway in how we handle the situation here.
>>
>> Let's see if there are any other ideas. Patch follows.
>>
>> Thanks,
>> Punit
>>
>> ----------->8-------------
>>  From d5ad3f428e629c80b0f93f2bbdf99b4cae28c9bc Mon Sep 17 00:00:00 2001
>> From: Punit Agrawal <punit.agrawal@arm.com>
>> Date: Thu, 9 Mar 2017 16:16:29 +0000
>> Subject: [PATCH] arm64: hugetlb: Fix huge_pte_offset to return poisoned pmd
>>
>> When memory failure is enabled, a poisoned hugepage PMD is marked as a
>> swap entry. As pmd_present() only checks the VALID and PROT_NONE
>> bits (both clear for swap entries), it causes huge_pte_offset() to
>> return NULL for poisoned PMDs.
>>
>> This behaviour of huge_pte_offset() leads to errors such as the one
>> below when munmap() is called on poisoned hugepages.
>>
>> [  344.165544] mm/pgtable-generic.c:33: bad pmd 000000083af00074.
>>
>> Fix huge_pte_offset() to return the poisoned PMD which is then
>> appropriately handled by the generic layer code.
>>
>> Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Steve Capper <steve.capper@arm.com>
>> ---
>>   arch/arm64/mm/hugetlbpage.c | 11 ++++++++++-
>>   1 file changed, 10 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
>> index e25584d72396..9263f206353c 100644
>> --- a/arch/arm64/mm/hugetlbpage.c
>> +++ b/arch/arm64/mm/hugetlbpage.c
>> @@ -150,8 +150,17 @@ pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr)
>>          if (pud_huge(*pud))
>>                  return (pte_t *)pud;
>>          pmd = pmd_offset(pud, addr);
>> +
>> +       /*
>> +        * In case of HW Poisoning, a hugepage pmd can contain
>> +        * poisoned entries. Poisoned entries are marked as swap
>> +        * entries.
>> +        *
>> +        * For pmds that are not present, check to see if it could be
>> +        * a swap entry (!present and !none) before giving up.
>> +        */
>>          if (!pmd_present(*pmd))
>> -               return NULL;
>> +               return !pmd_none(*pmd) ? (pte_t *)pmd : NULL;
>>
>>          if (pte_cont(pmd_pte(*pmd))) {
>>                  pmd = pmd_offset(
>> --
>> 2.11.0
Catalin Marinas March 15, 2017, 11:19 a.m. UTC | #3
Hi Punit,

Adding David Woods since he seems to have added the arm64-specific
huge_pte_offset() code.

On Thu, Mar 09, 2017 at 05:46:36PM +0000, Punit Agrawal wrote:
> From d5ad3f428e629c80b0f93f2bbdf99b4cae28c9bc Mon Sep 17 00:00:00 2001
> From: Punit Agrawal <punit.agrawal@arm.com>
> Date: Thu, 9 Mar 2017 16:16:29 +0000
> Subject: [PATCH] arm64: hugetlb: Fix huge_pte_offset to return poisoned pmd
> 
> When memory failure is enabled, a poisoned hugepage PMD is marked as a
> swap entry. As pmd_present() only checks the VALID and PROT_NONE
> bits (both clear for swap entries), it causes huge_pte_offset() to
> return NULL for poisoned PMDs.
>
> This behaviour of huge_pte_offset() leads to errors such as the one
> below when munmap() is called on poisoned hugepages.
> 
> [  344.165544] mm/pgtable-generic.c:33: bad pmd 000000083af00074.
> 
> Fix huge_pte_offset() to return the poisoned PMD which is then
> appropriately handled by the generic layer code.
> 
> Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Steve Capper <steve.capper@arm.com>
> ---
>  arch/arm64/mm/hugetlbpage.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
> index e25584d72396..9263f206353c 100644
> --- a/arch/arm64/mm/hugetlbpage.c
> +++ b/arch/arm64/mm/hugetlbpage.c
> @@ -150,8 +150,17 @@ pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr)
>         if (pud_huge(*pud))
>                 return (pte_t *)pud;
>         pmd = pmd_offset(pud, addr);
> +
> +       /*
> +        * In case of HW Poisoning, a hugepage pmd can contain
> +        * poisoned entries. Poisoned entries are marked as swap
> +        * entries.
> +        *
> +        * For pmds that are not present, check to see if it could be
> +        * a swap entry (!present and !none) before giving up.
> +        */
>         if (!pmd_present(*pmd))
> -               return NULL;
> +               return !pmd_none(*pmd) ? (pte_t *)pmd : NULL;

I'm not sure we need to return NULL here when pmd_none(). If we use
hugetlb at the pmd level we don't need to allocate a pmd page but just
fall back to hugetlb_no_page() in hugetlb_fault(). The problem is we
can't tell what kind of huge page we have when calling
huge_pte_offset(), so we always rely on huge_pte_alloc(). But there are
places where huge_pte_none() is checked explicitly and we would never
return it from huge_pte_offset().

Can we improve the generic code to pass the huge page size to
huge_pte_offset()? Otherwise we make all kinds of assumptions/guesses in
the arch code.
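
One hypothetical shape for it (every architecture's local definition
would have to change in step, so treat the signature as a sketch):

pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr,
		       unsigned long sz);

/* e.g. a caller in mm/hugetlb.c would pass the size from the hstate: */
ptep = huge_pte_offset(mm, address, huge_page_size(h));
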

> 
>         if (pte_cont(pmd_pte(*pmd))) {
>                 pmd = pmd_offset(

Given that we can have huge pages at the pud level, we should address
that as well. The generic huge_pte_offset() doesn't need to since it
assumes huge pages at the pmd level only. If a pud is not present, you
can't dereference it to find the pmd, hence returning NULL.

Apart from hw poisoning, I think another use-case for non-present
pmd/pud entries is is_hugetlb_entry_migration() (see hugetlb_fault()),
so we need to fix this either way.

We have a discrepancy between the pud_present and pmd_present. The
latter was modified to fall back on pte_present because of THP which
does not support puds (last time I checked). So if a pud is poisoned,
huge_pte_offset thinks it is present and will try to get the pmd it
points to.

I think we can leave the pud_present() unchanged but fix the
huge_pte_offset() to check for pud_table() before dereferencing,
otherwise returning the actual value. And we need to figure out which
huge page size we have when the pud/pmd is 0.
Steve Capper March 15, 2017, 4:07 p.m. UTC | #4
Hi,
Sorry for replying to this thread late.

On 15 March 2017 at 11:19, Catalin Marinas <catalin.marinas@arm.com> wrote:
> Hi Punit,
>
> Adding David Woods since he seems to have added the arm64-specific
> huge_pte_offset() code.
>
> On Thu, Mar 09, 2017 at 05:46:36PM +0000, Punit Agrawal wrote:
>> From d5ad3f428e629c80b0f93f2bbdf99b4cae28c9bc Mon Sep 17 00:00:00 2001
>> From: Punit Agrawal <punit.agrawal@arm.com>
>> Date: Thu, 9 Mar 2017 16:16:29 +0000
>> Subject: [PATCH] arm64: hugetlb: Fix huge_pte_offset to return poisoned pmd
>>
>> When memory failure is enabled, a poisoned hugepage PMD is marked as a
>> swap entry. As pmd_present() only checks the VALID and PROT_NONE
>> bits (both clear for swap entries), it causes huge_pte_offset() to
>> return NULL for poisoned PMDs.
>>
>> This behaviour of huge_pte_offset() leads to errors such as the one
>> below when munmap() is called on poisoned hugepages.
>>
>> [  344.165544] mm/pgtable-generic.c:33: bad pmd 000000083af00074.
>>
>> Fix huge_pte_offset() to return the poisoned PMD which is then
>> appropriately handled by the generic layer code.
>>
>> Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Steve Capper <steve.capper@arm.com>
>> ---
>>  arch/arm64/mm/hugetlbpage.c | 11 ++++++++++-
>>  1 file changed, 10 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
>> index e25584d72396..9263f206353c 100644
>> --- a/arch/arm64/mm/hugetlbpage.c
>> +++ b/arch/arm64/mm/hugetlbpage.c
>> @@ -150,8 +150,17 @@ pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr)
>>         if (pud_huge(*pud))
>>                 return (pte_t *)pud;
>>         pmd = pmd_offset(pud, addr);
>> +
>> +       /*
>> +        * In case of HW Poisoning, a hugepage pmd can contain
>> +        * poisoned entries. Poisoned entries are marked as swap
>> +        * entries.
>> +        *
>> +        * For pmds that are not present, check to see if it could be
>> +        * a swap entry (!present and !none) before giving up.
>> +        */
>>         if (!pmd_present(*pmd))
>> -               return NULL;
>> +               return !pmd_none(*pmd) ? (pte_t *)pmd : NULL;
>
> I'm not sure we need to return NULL here when pmd_none(). If we use
> hugetlb at the pmd level we don't need to allocate a pmd page but just
> fall back to hugetlb_no_page() in hugetlb_fault(). The problem is we
> can't tell what kind of huge page we have when calling
> huge_pte_offset(), so we always rely on huge_pte_alloc(). But there are
> places where huge_pte_none() is checked explicitly and we would never
> return it from huge_pte_offset().
>
> Can we improve the generic code to pass the huge page size to
> huge_pte_offset()? Otherwise we make all kinds of assumptions/guesses in
> the arch code.

We'll certainly need the huge page size as we are unable to
differentiate between pmd and contiguous pmd for invalid entries too;
and we'll need to return a pointer to the "head" pte_t.
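
The existing code already rebases to the head entry for valid
contiguous pmds via pmd_offset(pud, addr & CONT_PMD_MASK); with a size
argument we could do the same for invalid entries, where pte_cont()
can't be read out of a swap entry. A sketch, assuming a new sz
parameter:

	/* hypothetical: sz identifies a contiguous pmd even though the
	 * entry itself is invalid (poisoned/migration) */
	if (sz == CONT_PMD_SIZE)
		return (pte_t *)pmd_offset(pud, addr & CONT_PMD_MASK);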

>
>>
>>         if (pte_cont(pmd_pte(*pmd))) {
>>                 pmd = pmd_offset(
>
> Given that we can have huge pages at the pud level, we should address
> that as well. The generic huge_pte_offset() doesn't need to since it
> assumes huge pages at the pmd level only. If a pud is not present, you
> can't dereference it to find the pmd, hence returning NULL.
>
> Apart from hw poisoning, I think another use-case for non-present
> pmd/pud entries is is_hugetlb_entry_migration() (see hugetlb_fault()),
> so we need to fix this either way.
>
> We have a discrepancy between the pud_present and pmd_present. The
> latter was modified to fall back on pte_present because of THP which
> does not support puds (last time I checked). So if a pud is poisoned,
> huge_pte_offset thinks it is present and will try to get the pmd it
> points to.
>
> I think we can leave the pud_present() unchanged but fix the
> huge_pte_offset() to check for pud_table() before dereferencing,
> otherwise returning the actual value. And we need to figure out which
> huge page size we have when the pud/pmd is 0.

I don't understand the suggestions for puds, as they won't be contiguous?

Cheers,
--
Steve
Catalin Marinas March 15, 2017, 4:42 p.m. UTC | #5
On Wed, Mar 15, 2017 at 04:07:20PM +0000, Steve Capper wrote:
> On 15 March 2017 at 11:19, Catalin Marinas <catalin.marinas@arm.com> wrote:
> > On Thu, Mar 09, 2017 at 05:46:36PM +0000, Punit Agrawal wrote:
> >> From d5ad3f428e629c80b0f93f2bbdf99b4cae28c9bc Mon Sep 17 00:00:00 2001
> >> From: Punit Agrawal <punit.agrawal@arm.com>
> >> Date: Thu, 9 Mar 2017 16:16:29 +0000
> >> Subject: [PATCH] arm64: hugetlb: Fix huge_pte_offset to return poisoned pmd
> >>
> >> When memory failure is enabled, a poisoned hugepage PMD is marked as a
> >> swap entry. As pmd_present() only checks the VALID and PROT_NONE
> >> bits (both clear for swap entries), it causes huge_pte_offset() to
> >> return NULL for poisoned PMDs.
[...]
> > Given that we can have huge pages at the pud level, we should address
> > that as well. The generic huge_pte_offset() doesn't need to since it
> > assumes huge pages at the pmd level only. If a pud is not present, you
> > can't dereference it to find the pmd, hence returning NULL.
> >
> > Apart from hw poisoning, I think another use-case for non-present
> > pmd/pud entries is is_hugetlb_entry_migration() (see hugetlb_fault()),
> > so we need to fix this either way.
> >
> > We have a discrepancy between the pud_present and pmd_present. The
> > latter was modified to fall back on pte_present because of THP which
> > does not support puds (last time I checked). So if a pud is poisoned,
> > huge_pte_offset thinks it is present and will try to get the pmd it
> > points to.
> >
> > I think we can leave the pud_present() unchanged but fix the
> > huge_pte_offset() to check for pud_table() before dereferencing,
> > otherwise returning the actual value. And we need to figure out which
> > huge page size we have when the pud/pmd is 0.
> 
> I don't understand the suggestions for puds, as they won't be contiguous?

I wasn't thinking of the contiguous bit for pud but rather what to
return early based on present/huge/table. I think we have the cases
below:

1. pud_present() && pud_huge():
	return pud

2. pud_present() && pud_table():
	continue to pmd

3. pud_present() && !pud_huge() && !pud_table():
	return pud (huge pte poison at the pud level)

4. !pud_present() (a.k.a. pud_none()):
	a) return pud (if we have huge pages at the pud level)
	b) return NULL

At 3 I assumed that we don't poison table entries, therefore it is safe
to assume that the pud is an invalid huge page entry (poisoned,
migration).

At 4, I don't think we can currently distinguish between an empty huge
page pud and an empty table pointing further to a pmd. We could go for
NULL and assume that huge_pte_alloc() handles it properly.
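
Putting the four cases together, something like the below (untested; it
leans on arm64's pud_present() meaning "entry is non-zero" and on table
entries never being poisoned):

	pud = pud_offset(pgd, addr);
	if (!pud_present(*pud))
		return NULL;		/* case 4: can't tell 4a from 4b */
	if (pud_huge(*pud))
		return (pte_t *)pud;	/* case 1: present huge page */
	if (!pud_table(*pud))
		return (pte_t *)pud;	/* case 3: invalid huge entry
					 * (poisoned/migration) */
	pmd = pmd_offset(pud, addr);	/* case 2: walk to the pmd level */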
Punit Agrawal March 15, 2017, 6:49 p.m. UTC | #6
Catalin Marinas <catalin.marinas@arm.com> writes:

> Hi Punit,
>
> Adding David Woods since he seems to have added the arm64-specific
> huge_pte_offset() code.
>
> On Thu, Mar 09, 2017 at 05:46:36PM +0000, Punit Agrawal wrote:
>> From d5ad3f428e629c80b0f93f2bbdf99b4cae28c9bc Mon Sep 17 00:00:00 2001
>> From: Punit Agrawal <punit.agrawal@arm.com>
>> Date: Thu, 9 Mar 2017 16:16:29 +0000
>> Subject: [PATCH] arm64: hugetlb: Fix huge_pte_offset to return poisoned pmd
>> 
>> When memory failure is enabled, a poisoned hugepage PMD is marked as a
>> swap entry. As pmd_present() only checks the VALID and PROT_NONE
>> bits (both clear for swap entries), it causes huge_pte_offset() to
>> return NULL for poisoned PMDs.
>>
>> This behaviour of huge_pte_offset() leads to errors such as the one
>> below when munmap() is called on poisoned hugepages.
>> 
>> [  344.165544] mm/pgtable-generic.c:33: bad pmd 000000083af00074.
>> 
>> Fix huge_pte_offset() to return the poisoned PMD which is then
>> appropriately handled by the generic layer code.
>> 
>> Signed-off-by: Punit Agrawal <punit.agrawal@arm.com>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Steve Capper <steve.capper@arm.com>
>> ---
>>  arch/arm64/mm/hugetlbpage.c | 11 ++++++++++-
>>  1 file changed, 10 insertions(+), 1 deletion(-)
>> 
>> diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
>> index e25584d72396..9263f206353c 100644
>> --- a/arch/arm64/mm/hugetlbpage.c
>> +++ b/arch/arm64/mm/hugetlbpage.c
>> @@ -150,8 +150,17 @@ pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr)
>>         if (pud_huge(*pud))
>>                 return (pte_t *)pud;
>>         pmd = pmd_offset(pud, addr);
>> +
>> +       /*
>> +        * In case of HW Poisoning, a hugepage pmd can contain
>> +        * poisoned entries. Poisoned entries are marked as swap
>> +        * entries.
>> +        *
>> +        * For pmds that are not present, check to see if it could be
>> +        * a swap entry (!present and !none) before giving up.
>> +        */
>>         if (!pmd_present(*pmd))
>> -               return NULL;
>> +               return !pmd_none(*pmd) ? (pte_t *)pmd : NULL;
>
> I'm not sure we need to return NULL here when pmd_none(). If we use
> hugetlb at the pmd level we don't need to allocate a pmd page but just
> fall back to hugetlb_no_page() in hugetlb_fault(). The problem is we
> can't tell what kind of huge page we have when calling
> huge_pte_offset(), so we always rely on huge_pte_alloc(). But there are
> places where huge_pte_none() is checked explicitly and we would never
> return it from huge_pte_offset().

Makes sense.

>
> Can we improve the generic code to pass the huge page size to
> huge_pte_offset()? Otherwise we make all kinds of assumptions/guesses in
> the arch code.

Agreed. The present fix only works for poisoned PMD entries. I'll
prototype adding a size parameter and using it to disambiguate huge page
sizes. The change will touch a lot of architectures, as most seem to have a
local definition of huge_pte_offset().

>
>> 
>>         if (pte_cont(pmd_pte(*pmd))) {
>>                 pmd = pmd_offset(
>
> Given that we can have huge pages at the pud level, we should address
> that as well. The generic huge_pte_offset() doesn't need to since it
> assumes huge pages at the pmd level only. If a pud is not present, you
> can't dereference it to find the pmd, hence returning NULL.
>
> Apart from hw poisoning, I think another use-case for non-present
> pmd/pud entries is is_hugetlb_entry_migration() (see hugetlb_fault()),
> so we need to fix this either way.
>
> We have a discrepancy between the pud_present and pmd_present. The
> latter was modified to fall back on pte_present because of THP which
> does not support puds (last time I checked). So if a pud is poisoned,
> huge_pte_offset thinks it is present and will try to get the pmd it
> points to.
>
> I think we can leave the pud_present() unchanged but fix the
> huge_pte_offset() to check for pud_table() before dereferencing,
> otherwise returning the actual value. And we need to figure out which
> huge page size we have when the pud/pmd is 0.

Ack. I'll add the check in the next update.

Patch

diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index e25584d72396..9263f206353c 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -150,8 +150,17 @@  pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr)
        if (pud_huge(*pud))
                return (pte_t *)pud;
        pmd = pmd_offset(pud, addr);
+
+       /*
+        * In case of HW Poisoning, a hugepage pmd can contain
+        * poisoned entries. Poisoned entries are marked as swap
+        * entries.
+        *
+        * For pmds that are not present, check to see if it could be
+        * a swap entry (!present and !none) before giving up.
+        */
        if (!pmd_present(*pmd))
-               return NULL;
+               return !pmd_none(*pmd) ? (pte_t *)pmd : NULL;

        if (pte_cont(pmd_pte(*pmd))) {
                pmd = pmd_offset(
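
For completeness, the "generic layer" handling that the commit message
relies on is the !pte_present() path in __unmap_hugepage_range()
(paraphrased from mm/hugetlb.c of this era, with locking elided; treat
the details as assumptions):

	pte = huge_ptep_get(ptep);
	if (huge_pte_none(pte))
		continue;

	/*
	 * A migrating or HWPoisoned hugepage is already unmapped and
	 * its refcount dropped, so just clear the pte here.
	 */
	if (unlikely(!pte_present(pte))) {
		huge_pte_clear(mm, address, ptep);
		continue;
	}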