
[5/5] hugetlbfs: Limit wait time when trying to share huge PMD

Message ID 20190911150537.19527-6-longman@redhat.com (mailing list archive)
State New, archived
Series hugetlbfs: Disable PMD sharing for large systems

Commit Message

Waiman Long Sept. 11, 2019, 3:05 p.m. UTC
When allocating a large amount of static hugepages (~500-1500GB) on a
system with a large number of CPUs (4, 8 or even 16 sockets), performance
degradation (random multi-second delays) was observed when thousands
of processes were trying to fault the data into the huge pages. The
likelihood of the delay increases with the number of sockets, and hence
the number of CPUs, a system has.  This only happens in the initial setup
phase and goes away once all the necessary data have been faulted in.

These random delays, however, are deemed unacceptable. The cause of
that delay is the long wait time in acquiring the mmap_sem when trying
to share the huge PMDs.

To remove the unacceptable delays, we have to limit the amount of wait
time on the mmap_sem. So the new down_write_timedlock() function is
used to acquire the write lock on the mmap_sem with a timeout value of
10ms, which should not cause a perceivable delay. If the timeout expires,
the task will abandon its effort to share the PMD and allocate its own
copy instead.

When too many timeouts happen (threshold currently set at 256), the
system may be too large for PMD sharing to be useful without undue delay,
so sharing will be disabled in this case.

Signed-off-by: Waiman Long <longman@redhat.com>
---
 include/linux/fs.h |  7 +++++++
 mm/hugetlb.c       | 24 +++++++++++++++++++++---
 2 files changed, 28 insertions(+), 3 deletions(-)
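This patch relies on the down_write_timedlock() primitive introduced by an
earlier patch in this series, which is not shown on this page. Inferred from
the call sites in the diff below (so treat the exact prototype as an
assumption), the interface it assumes looks like:

#include <linux/ktime.h>
#include <linux/rwsem.h>

/*
 * Try to acquire @sem for writing, giving up once @timeout has elapsed.
 * Returns true if the lock was acquired, false on timeout.
 */
bool down_write_timedlock(struct rw_semaphore *sem, ktime_t timeout);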

Comments

Matthew Wilcox Sept. 11, 2019, 3:14 p.m. UTC | #1
On Wed, Sep 11, 2019 at 04:05:37PM +0100, Waiman Long wrote:
> When allocating a large amount of static hugepages (~500-1500GB) on a
> system with large number of CPUs (4, 8 or even 16 sockets), performance
> degradation (random multi-second delays) was observed when thousands
> of processes are trying to fault in the data into the huge pages. The
> likelihood of the delay increases with the number of sockets and hence
> the CPUs a system has.  This only happens in the initial setup phase
> and will be gone after all the necessary data are faulted in.

Can't the application just specify MAP_POPULATE?
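
For reference, MAP_POPULATE asks the kernel to prefault the whole range at
mmap() time rather than on first touch. A minimal user-space sketch of that
suggestion (the mapping size and flags are only illustrative, not taken from
the reported workload):

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
	/* Must be a multiple of the huge page size; 512MB of 2MB pages here. */
	size_t len = 512UL * 1024 * 1024;

	/*
	 * MAP_POPULATE prefaults the mapping up front, so the page-fault
	 * cost is paid once by the mapping process instead of at first
	 * access by every process that touches the data.
	 */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_SHARED | MAP_ANONYMOUS | MAP_HUGETLB | MAP_POPULATE,
		       -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	munmap(p, len);
	return 0;
}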
Waiman Long Sept. 11, 2019, 3:44 p.m. UTC | #2
On 9/11/19 4:14 PM, Matthew Wilcox wrote:
> On Wed, Sep 11, 2019 at 04:05:37PM +0100, Waiman Long wrote:
>> When allocating a large amount of static hugepages (~500-1500GB) on a
>> system with large number of CPUs (4, 8 or even 16 sockets), performance
>> degradation (random multi-second delays) was observed when thousands
>> of processes are trying to fault in the data into the huge pages. The
>> likelihood of the delay increases with the number of sockets and hence
>> the CPUs a system has.  This only happens in the initial setup phase
>> and will be gone after all the necessary data are faulted in.
> Can;t the application just specify MAP_POPULATE?

Originally, I thought that this happened in the startup phase when the
pages were faulted in. The problem persists after steady state has been
reached, though. Every time a new user process is created, it will
have its own page table. It is the sharing of the huge page shared
memory that is causing the problem. Of course, it depends on how the
application is written.

Anyway, MAP_POPULATE will not be useful in this case.

Thanks,
Longman
Qian Cai Sept. 11, 2019, 4:01 p.m. UTC | #3
> On Sep 11, 2019, at 11:05 AM, Waiman Long <longman@redhat.com> wrote:
> 
> When allocating a large amount of static hugepages (~500-1500GB) on a
> system with large number of CPUs (4, 8 or even 16 sockets), performance
> degradation (random multi-second delays) was observed when thousands
> of processes are trying to fault in the data into the huge pages. The
> likelihood of the delay increases with the number of sockets and hence
> the CPUs a system has.  This only happens in the initial setup phase
> and will be gone after all the necessary data are faulted in.
> 
> These random delays, however, are deemed unacceptable. The cause of
> that delay is the long wait time in acquiring the mmap_sem when trying
> to share the huge PMDs.
> 
> To remove the unacceptable delays, we have to limit the amount of wait
> time on the mmap_sem. So the new down_write_timedlock() function is
> used to acquire the write lock on the mmap_sem with a timeout value of
> 10ms which should not cause a perceivable delay. If timeout happens,
> the task will abandon its effort to share the PMD and allocate its own
> copy instead.
> 
> When too many timeouts happens (threshold currently set at 256), the
> system may be too large for PMD sharing to be useful without undue delay.
> So the sharing will be disabled in this case.
> 
> Signed-off-by: Waiman Long <longman@redhat.com>
> ---
> include/linux/fs.h |  7 +++++++
> mm/hugetlb.c       | 24 +++++++++++++++++++++---
> 2 files changed, 28 insertions(+), 3 deletions(-)
> 
> diff --git a/include/linux/fs.h b/include/linux/fs.h
> index 997a530ff4e9..e9d3ad465a6b 100644
> --- a/include/linux/fs.h
> +++ b/include/linux/fs.h
> @@ -40,6 +40,7 @@
> #include <linux/fs_types.h>
> #include <linux/build_bug.h>
> #include <linux/stddef.h>
> +#include <linux/ktime.h>
> 
> #include <asm/byteorder.h>
> #include <uapi/linux/fs.h>
> @@ -519,6 +520,12 @@ static inline void i_mmap_lock_write(struct address_space *mapping)
> 	down_write(&mapping->i_mmap_rwsem);
> }
> 
> +static inline bool i_mmap_timedlock_write(struct address_space *mapping,
> +					 ktime_t timeout)
> +{
> +	return down_write_timedlock(&mapping->i_mmap_rwsem, timeout);
> +}
> +
> static inline void i_mmap_unlock_write(struct address_space *mapping)
> {
> 	up_write(&mapping->i_mmap_rwsem);
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 6d7296dd11b8..445af661ae29 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -4750,6 +4750,8 @@ void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
> 	}
> }
> 
> +#define PMD_SHARE_DISABLE_THRESHOLD	(1 << 8)
> +
> /*
>  * Search for a shareable pmd page for hugetlb. In any case calls pmd_alloc()
>  * and returns the corresponding pte. While this is not necessary for the
> @@ -4770,11 +4772,24 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
> 	pte_t *spte = NULL;
> 	pte_t *pte;
> 	spinlock_t *ptl;
> +	static atomic_t timeout_cnt;
> 
> -	if (!vma_shareable(vma, addr))
> -		return (pte_t *)pmd_alloc(mm, pud, addr);
> +	/*
> +	 * Don't share if it is not sharable or locking attempt timed out
> +	 * after 10ms. After 256 timeouts, PMD sharing will be permanently
> +	 * disabled as it is just too slow.

It looks like this kind of policy, interacting with kernel debug options like KASAN (which are going to slow the
system down anyway), could introduce tricky issues due to different timings on a debug kernel.

> +	 */
> +	if (!vma_shareable(vma, addr) ||
> +	   (atomic_read(&timeout_cnt) >= PMD_SHARE_DISABLE_THRESHOLD))
> +		goto out_no_share;
> +
> +	if (!i_mmap_timedlock_write(mapping, ms_to_ktime(10))) {
> +		if (atomic_inc_return(&timeout_cnt) ==
> +		    PMD_SHARE_DISABLE_THRESHOLD)
> +			pr_info("Hugetlbfs PMD sharing disabled because of timeouts!\n");
> +		goto out_no_share;
> +	}
> 
> -	i_mmap_lock_write(mapping);
> 	vma_interval_tree_foreach(svma, &mapping->i_mmap, idx, idx) {
> 		if (svma == vma)
> 			continue;
> @@ -4806,6 +4821,9 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
> 	pte = (pte_t *)pmd_alloc(mm, pud, addr);
> 	i_mmap_unlock_write(mapping);
> 	return pte;
> +
> +out_no_share:
> +	return (pte_t *)pmd_alloc(mm, pud, addr);
> }
> 
> /*
> -- 
> 2.18.1
> 
>
Waiman Long Sept. 11, 2019, 4:34 p.m. UTC | #4
On 9/11/19 5:01 PM, Qian Cai wrote:
>
>> On Sep 11, 2019, at 11:05 AM, Waiman Long <longman@redhat.com> wrote:
>>
>> When allocating a large amount of static hugepages (~500-1500GB) on a
>> system with large number of CPUs (4, 8 or even 16 sockets), performance
>> degradation (random multi-second delays) was observed when thousands
>> of processes are trying to fault in the data into the huge pages. The
>> likelihood of the delay increases with the number of sockets and hence
>> the CPUs a system has.  This only happens in the initial setup phase
>> and will be gone after all the necessary data are faulted in.
>>
>> These random delays, however, are deemed unacceptable. The cause of
>> that delay is the long wait time in acquiring the mmap_sem when trying
>> to share the huge PMDs.
>>
>> To remove the unacceptable delays, we have to limit the amount of wait
>> time on the mmap_sem. So the new down_write_timedlock() function is
>> used to acquire the write lock on the mmap_sem with a timeout value of
>> 10ms which should not cause a perceivable delay. If timeout happens,
>> the task will abandon its effort to share the PMD and allocate its own
>> copy instead.
>>
>> When too many timeouts happens (threshold currently set at 256), the
>> system may be too large for PMD sharing to be useful without undue delay.
>> So the sharing will be disabled in this case.
>>
>> Signed-off-by: Waiman Long <longman@redhat.com>
>> ---
>> include/linux/fs.h |  7 +++++++
>> mm/hugetlb.c       | 24 +++++++++++++++++++++---
>> 2 files changed, 28 insertions(+), 3 deletions(-)
>>
>> diff --git a/include/linux/fs.h b/include/linux/fs.h
>> index 997a530ff4e9..e9d3ad465a6b 100644
>> --- a/include/linux/fs.h
>> +++ b/include/linux/fs.h
>> @@ -40,6 +40,7 @@
>> #include <linux/fs_types.h>
>> #include <linux/build_bug.h>
>> #include <linux/stddef.h>
>> +#include <linux/ktime.h>
>>
>> #include <asm/byteorder.h>
>> #include <uapi/linux/fs.h>
>> @@ -519,6 +520,12 @@ static inline void i_mmap_lock_write(struct address_space *mapping)
>> 	down_write(&mapping->i_mmap_rwsem);
>> }
>>
>> +static inline bool i_mmap_timedlock_write(struct address_space *mapping,
>> +					 ktime_t timeout)
>> +{
>> +	return down_write_timedlock(&mapping->i_mmap_rwsem, timeout);
>> +}
>> +
>> static inline void i_mmap_unlock_write(struct address_space *mapping)
>> {
>> 	up_write(&mapping->i_mmap_rwsem);
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index 6d7296dd11b8..445af661ae29 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -4750,6 +4750,8 @@ void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
>> 	}
>> }
>>
>> +#define PMD_SHARE_DISABLE_THRESHOLD	(1 << 8)
>> +
>> /*
>>  * Search for a shareable pmd page for hugetlb. In any case calls pmd_alloc()
>>  * and returns the corresponding pte. While this is not necessary for the
>> @@ -4770,11 +4772,24 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
>> 	pte_t *spte = NULL;
>> 	pte_t *pte;
>> 	spinlock_t *ptl;
>> +	static atomic_t timeout_cnt;
>>
>> -	if (!vma_shareable(vma, addr))
>> -		return (pte_t *)pmd_alloc(mm, pud, addr);
>> +	/*
>> +	 * Don't share if it is not sharable or locking attempt timed out
>> +	 * after 10ms. After 256 timeouts, PMD sharing will be permanently
>> +	 * disabled as it is just too slow.
> It looks like this kind of policy interacts with kernel debug options like KASAN (which is going to slow the system down
> anyway) could introduce tricky issues due to different timings on a debug kernel.

With respect to lockdep, down_write_timedlock() works like a trylock. So
a lot of checking will be skipped. Also the lockdep code won't be run
until the lock is acquired. So its execution time has no effect on the
timeout.

Cheers,
Longman
Mike Kravetz Sept. 11, 2019, 5:03 p.m. UTC | #5
On 9/11/19 8:44 AM, Waiman Long wrote:
> On 9/11/19 4:14 PM, Matthew Wilcox wrote:
>> On Wed, Sep 11, 2019 at 04:05:37PM +0100, Waiman Long wrote:
>>> When allocating a large amount of static hugepages (~500-1500GB) on a
>>> system with large number of CPUs (4, 8 or even 16 sockets), performance
>>> degradation (random multi-second delays) was observed when thousands
>>> of processes are trying to fault in the data into the huge pages. The
>>> likelihood of the delay increases with the number of sockets and hence
>>> the CPUs a system has.  This only happens in the initial setup phase
>>> and will be gone after all the necessary data are faulted in.
>> Can;t the application just specify MAP_POPULATE?
> 
> Originally, I thought that this happened in the startup phase when the
> pages were faulted in. The problem persists after steady state had been
> reached though. Every time you have a new user process created, it will
> have its own page table.

This is still at fault time, although for the particular application it
may be after the 'startup phase'.

>                          It is the sharing of the of huge page shared
> memory that is causing problem. Of course, it depends on how the
> application is written.

It may be the case that some applications would find the delays acceptable
for the benefit of shared pmds once they reach steady state.  As you say, of
course this depends on how the application is written.

I know that Oracle DB would not like it if PMD sharing is disabled for them.
Based on what I know of their model, all processes which share PMDs perform
faults (write or read) during the startup phase.  This is in environments as
big or bigger than you describe above.  I have never looked at/for delays in
these environments around pmd sharing (page faults), but that does not mean
they do not exist.  I will try to get the DB group to give me access to one
of their large environments for analysis.

We may want to consider making the timeout value and disable threshold user
configurable.
Waiman Long Sept. 11, 2019, 5:15 p.m. UTC | #6
On 9/11/19 6:03 PM, Mike Kravetz wrote:
> On 9/11/19 8:44 AM, Waiman Long wrote:
>> On 9/11/19 4:14 PM, Matthew Wilcox wrote:
>>> On Wed, Sep 11, 2019 at 04:05:37PM +0100, Waiman Long wrote:
>>>> When allocating a large amount of static hugepages (~500-1500GB) on a
>>>> system with large number of CPUs (4, 8 or even 16 sockets), performance
>>>> degradation (random multi-second delays) was observed when thousands
>>>> of processes are trying to fault in the data into the huge pages. The
>>>> likelihood of the delay increases with the number of sockets and hence
>>>> the CPUs a system has.  This only happens in the initial setup phase
>>>> and will be gone after all the necessary data are faulted in.
>>> Can;t the application just specify MAP_POPULATE?
>> Originally, I thought that this happened in the startup phase when the
>> pages were faulted in. The problem persists after steady state had been
>> reached though. Every time you have a new user process created, it will
>> have its own page table.
> This is still at fault time.  Although, for the particular application it
> may be after the 'startup phase'.
>
>>                          It is the sharing of the of huge page shared
>> memory that is causing problem. Of course, it depends on how the
>> application is written.
> It may be the case that some applications would find the delays acceptable
> for the benefit of shared pmds once they reach steady state.  As you say, of
> course this depends on how the application is written.
>
> I know that Oracle DB would not like it if PMD sharing is disabled for them.
> Based on what I know of their model, all processes which share PMDs perform
> faults (write or read) during the startup phase.  This is in environments as
> big or bigger than you describe above.  I have never looked at/for delays in
> these environments around pmd sharing (page faults), but that does not mean
> they do not exist.  I will try to get the DB group to give me access to one
> of their large environments for analysis.
>
> We may want to consider making the timeout value and disable threshold user
> configurable.

Making it configurable is certainly doable. They can be sysctl
parameters so that the users can reenable PMD sharing by making those
parameters larger.
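
A minimal sketch of what such knobs could look like (the parameter names,
defaults and placement under /proc/sys/vm/ are assumptions for illustration,
not part of the posted series):

#include <linux/errno.h>
#include <linux/init.h>
#include <linux/sysctl.h>

/* Hypothetical tunables mirroring the hard-coded 10ms / 256 values. */
static unsigned int hugetlb_pmd_share_timeout_ms = 10;
static unsigned int hugetlb_pmd_share_disable_threshold = 256;

static struct ctl_table hugetlb_pmd_share_table[] = {
	{
		.procname	= "hugetlb_pmd_share_timeout_ms",
		.data		= &hugetlb_pmd_share_timeout_ms,
		.maxlen		= sizeof(unsigned int),
		.mode		= 0644,
		.proc_handler	= proc_douintvec,
	},
	{
		.procname	= "hugetlb_pmd_share_disable_threshold",
		.data		= &hugetlb_pmd_share_disable_threshold,
		.maxlen		= sizeof(unsigned int),
		.mode		= 0644,
		.proc_handler	= proc_douintvec,
	},
	{ }
};

static int __init hugetlb_pmd_share_sysctl_init(void)
{
	/* Setting the threshold knob very high effectively re-enables sharing. */
	return register_sysctl("vm", hugetlb_pmd_share_table) ? 0 : -ENOMEM;
}
late_initcall(hugetlb_pmd_share_sysctl_init);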

Cheers,
Longman
Qian Cai Sept. 11, 2019, 5:22 p.m. UTC | #7
> On Sep 11, 2019, at 1:15 PM, Waiman Long <longman@redhat.com> wrote:
> 
> On 9/11/19 6:03 PM, Mike Kravetz wrote:
>> On 9/11/19 8:44 AM, Waiman Long wrote:
>>> On 9/11/19 4:14 PM, Matthew Wilcox wrote:
>>>> On Wed, Sep 11, 2019 at 04:05:37PM +0100, Waiman Long wrote:
>>>>> When allocating a large amount of static hugepages (~500-1500GB) on a
>>>>> system with large number of CPUs (4, 8 or even 16 sockets), performance
>>>>> degradation (random multi-second delays) was observed when thousands
>>>>> of processes are trying to fault in the data into the huge pages. The
>>>>> likelihood of the delay increases with the number of sockets and hence
>>>>> the CPUs a system has.  This only happens in the initial setup phase
>>>>> and will be gone after all the necessary data are faulted in.
>>>> Can;t the application just specify MAP_POPULATE?
>>> Originally, I thought that this happened in the startup phase when the
>>> pages were faulted in. The problem persists after steady state had been
>>> reached though. Every time you have a new user process created, it will
>>> have its own page table.
>> This is still at fault time.  Although, for the particular application it
>> may be after the 'startup phase'.
>> 
>>>                         It is the sharing of the of huge page shared
>>> memory that is causing problem. Of course, it depends on how the
>>> application is written.
>> It may be the case that some applications would find the delays acceptable
>> for the benefit of shared pmds once they reach steady state.  As you say, of
>> course this depends on how the application is written.
>> 
>> I know that Oracle DB would not like it if PMD sharing is disabled for them.
>> Based on what I know of their model, all processes which share PMDs perform
>> faults (write or read) during the startup phase.  This is in environments as
>> big or bigger than you describe above.  I have never looked at/for delays in
>> these environments around pmd sharing (page faults), but that does not mean
>> they do not exist.  I will try to get the DB group to give me access to one
>> of their large environments for analysis.
>> 
>> We may want to consider making the timeout value and disable threshold user
>> configurable.
> 
> Making it configurable is certainly doable. They can be sysctl
> parameters so that the users can reenable PMD sharing by making those
> parameters larger.

It could be a Kconfig option, so people don’t need to change the setting every time
after reinstalling the system. There are times people don’t care too much
about those random multi-second delays. For example, running a debug kernel.
Waiman Long Sept. 11, 2019, 5:28 p.m. UTC | #8
On 9/11/19 6:15 PM, Waiman Long wrote:
> On 9/11/19 6:03 PM, Mike Kravetz wrote:
>> On 9/11/19 8:44 AM, Waiman Long wrote:
>>> On 9/11/19 4:14 PM, Matthew Wilcox wrote:
>>>> On Wed, Sep 11, 2019 at 04:05:37PM +0100, Waiman Long wrote:
>>>>> When allocating a large amount of static hugepages (~500-1500GB) on a
>>>>> system with large number of CPUs (4, 8 or even 16 sockets), performance
>>>>> degradation (random multi-second delays) was observed when thousands
>>>>> of processes are trying to fault in the data into the huge pages. The
>>>>> likelihood of the delay increases with the number of sockets and hence
>>>>> the CPUs a system has.  This only happens in the initial setup phase
>>>>> and will be gone after all the necessary data are faulted in.
>>>> Can;t the application just specify MAP_POPULATE?
>>> Originally, I thought that this happened in the startup phase when the
>>> pages were faulted in. The problem persists after steady state had been
>>> reached though. Every time you have a new user process created, it will
>>> have its own page table.
>> This is still at fault time.  Although, for the particular application it
>> may be after the 'startup phase'.
>>
>>>                          It is the sharing of the of huge page shared
>>> memory that is causing problem. Of course, it depends on how the
>>> application is written.
>> It may be the case that some applications would find the delays acceptable
>> for the benefit of shared pmds once they reach steady state.  As you say, of
>> course this depends on how the application is written.
>>
>> I know that Oracle DB would not like it if PMD sharing is disabled for them.
>> Based on what I know of their model, all processes which share PMDs perform
>> faults (write or read) during the startup phase.  This is in environments as
>> big or bigger than you describe above.  I have never looked at/for delays in
>> these environments around pmd sharing (page faults), but that does not mean
>> they do not exist.  I will try to get the DB group to give me access to one
>> of their large environments for analysis.
>>
>> We may want to consider making the timeout value and disable threshold user
>> configurable.
> Making it configurable is certainly doable. They can be sysctl
> parameters so that the users can reenable PMD sharing by making those
> parameters larger.

I suspect that the customer's application may be generating a new
process with its own address space for each transaction. That will be
causing a lot of PMD sharing operations when hundreds of threads are
pounding it simultaneously. I had inserted some instrumentation code
into a test kernel that the customer used for testing; the number of
timeouts after a certain time went up to more than 20k.

On the other hand, if the application is structured in such a way that
there is a limited number of separate address spaces with worker threads
processing the transactions, PMD sharing will be less of a problem. It
will be hard to convince users to make such structural changes to
their application.

Cheers,
Longman
Qian Cai Sept. 11, 2019, 7:42 p.m. UTC | #9
> On Sep 11, 2019, at 12:34 PM, Waiman Long <longman@redhat.com> wrote:
> 
> On 9/11/19 5:01 PM, Qian Cai wrote:
>> 
>>> On Sep 11, 2019, at 11:05 AM, Waiman Long <longman@redhat.com> wrote:
>>> 
>>> When allocating a large amount of static hugepages (~500-1500GB) on a
>>> system with large number of CPUs (4, 8 or even 16 sockets), performance
>>> degradation (random multi-second delays) was observed when thousands
>>> of processes are trying to fault in the data into the huge pages. The
>>> likelihood of the delay increases with the number of sockets and hence
>>> the CPUs a system has.  This only happens in the initial setup phase
>>> and will be gone after all the necessary data are faulted in.
>>> 
>>> These random delays, however, are deemed unacceptable. The cause of
>>> that delay is the long wait time in acquiring the mmap_sem when trying
>>> to share the huge PMDs.
>>> 
>>> To remove the unacceptable delays, we have to limit the amount of wait
>>> time on the mmap_sem. So the new down_write_timedlock() function is
>>> used to acquire the write lock on the mmap_sem with a timeout value of
>>> 10ms which should not cause a perceivable delay. If timeout happens,
>>> the task will abandon its effort to share the PMD and allocate its own
>>> copy instead.
>>> 
>>> When too many timeouts happens (threshold currently set at 256), the
>>> system may be too large for PMD sharing to be useful without undue delay.
>>> So the sharing will be disabled in this case.
>>> 
>>> Signed-off-by: Waiman Long <longman@redhat.com>
>>> ---
>>> include/linux/fs.h |  7 +++++++
>>> mm/hugetlb.c       | 24 +++++++++++++++++++++---
>>> 2 files changed, 28 insertions(+), 3 deletions(-)
>>> 
>>> diff --git a/include/linux/fs.h b/include/linux/fs.h
>>> index 997a530ff4e9..e9d3ad465a6b 100644
>>> --- a/include/linux/fs.h
>>> +++ b/include/linux/fs.h
>>> @@ -40,6 +40,7 @@
>>> #include <linux/fs_types.h>
>>> #include <linux/build_bug.h>
>>> #include <linux/stddef.h>
>>> +#include <linux/ktime.h>
>>> 
>>> #include <asm/byteorder.h>
>>> #include <uapi/linux/fs.h>
>>> @@ -519,6 +520,12 @@ static inline void i_mmap_lock_write(struct address_space *mapping)
>>> 	down_write(&mapping->i_mmap_rwsem);
>>> }
>>> 
>>> +static inline bool i_mmap_timedlock_write(struct address_space *mapping,
>>> +					 ktime_t timeout)
>>> +{
>>> +	return down_write_timedlock(&mapping->i_mmap_rwsem, timeout);
>>> +}
>>> +
>>> static inline void i_mmap_unlock_write(struct address_space *mapping)
>>> {
>>> 	up_write(&mapping->i_mmap_rwsem);
>>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>>> index 6d7296dd11b8..445af661ae29 100644
>>> --- a/mm/hugetlb.c
>>> +++ b/mm/hugetlb.c
>>> @@ -4750,6 +4750,8 @@ void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
>>> 	}
>>> }
>>> 
>>> +#define PMD_SHARE_DISABLE_THRESHOLD	(1 << 8)
>>> +
>>> /*
>>> * Search for a shareable pmd page for hugetlb. In any case calls pmd_alloc()
>>> * and returns the corresponding pte. While this is not necessary for the
>>> @@ -4770,11 +4772,24 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
>>> 	pte_t *spte = NULL;
>>> 	pte_t *pte;
>>> 	spinlock_t *ptl;
>>> +	static atomic_t timeout_cnt;
>>> 
>>> -	if (!vma_shareable(vma, addr))
>>> -		return (pte_t *)pmd_alloc(mm, pud, addr);
>>> +	/*
>>> +	 * Don't share if it is not sharable or locking attempt timed out
>>> +	 * after 10ms. After 256 timeouts, PMD sharing will be permanently
>>> +	 * disabled as it is just too slow.
>> It looks like this kind of policy interacts with kernel debug options like KASAN (which is going to slow the system down
>> anyway) could introduce tricky issues due to different timings on a debug kernel.
> 
> With respect to lockdep, down_write_timedlock() works like a trylock. So
> a lot of checking will be skipped. Also the lockdep code won't be run
> until the lock is acquired. So its execution time has no effect on the
> timeout.

Not only lockdep, but also things like KASAN, debug_pagealloc, page_poison, kmemleak, debug
objects etc. are all going to slow things down in huge_pmd_share(), and make it tricky to get the
right timeout value for those debug kernels without changing the previous behavior.
Matthew Wilcox Sept. 11, 2019, 7:57 p.m. UTC | #10
On Wed, Sep 11, 2019 at 04:05:37PM +0100, Waiman Long wrote:
> To remove the unacceptable delays, we have to limit the amount of wait
> time on the mmap_sem. So the new down_write_timedlock() function is
> used to acquire the write lock on the mmap_sem with a timeout value of
> 10ms which should not cause a perceivable delay. If timeout happens,
> the task will abandon its effort to share the PMD and allocate its own
> copy instead.

If you do a v2, this is *NOT* the mmap_sem.  It's the i_mmap_rwsem
which protects a very different data structure from the mmap_sem.

> +static inline bool i_mmap_timedlock_write(struct address_space *mapping,
> +					 ktime_t timeout)
> +{
> +	return down_write_timedlock(&mapping->i_mmap_rwsem, timeout);
> +}
Waiman Long Sept. 11, 2019, 8:51 p.m. UTC | #11
On 9/11/19 8:57 PM, Matthew Wilcox wrote:
> On Wed, Sep 11, 2019 at 04:05:37PM +0100, Waiman Long wrote:
>> To remove the unacceptable delays, we have to limit the amount of wait
>> time on the mmap_sem. So the new down_write_timedlock() function is
>> used to acquire the write lock on the mmap_sem with a timeout value of
>> 10ms which should not cause a perceivable delay. If timeout happens,
>> the task will abandon its effort to share the PMD and allocate its own
>> copy instead.
> If you do a v2, this is *NOT* the mmap_sem.  It's the i_mmap_rwsem
> which protects a very different data structure from the mmap_sem.
>
Thanks for the reminder. I should have read the code more carefully.

Cheers,
Longman
Waiman Long Sept. 11, 2019, 8:54 p.m. UTC | #12
On 9/11/19 8:42 PM, Qian Cai wrote:
>
>> On Sep 11, 2019, at 12:34 PM, Waiman Long <longman@redhat.com> wrote:
>>
>> On 9/11/19 5:01 PM, Qian Cai wrote:
>>>> On Sep 11, 2019, at 11:05 AM, Waiman Long <longman@redhat.com> wrote:
>>>>
>>>> When allocating a large amount of static hugepages (~500-1500GB) on a
>>>> system with large number of CPUs (4, 8 or even 16 sockets), performance
>>>> degradation (random multi-second delays) was observed when thousands
>>>> of processes are trying to fault in the data into the huge pages. The
>>>> likelihood of the delay increases with the number of sockets and hence
>>>> the CPUs a system has.  This only happens in the initial setup phase
>>>> and will be gone after all the necessary data are faulted in.
>>>>
>>>> These random delays, however, are deemed unacceptable. The cause of
>>>> that delay is the long wait time in acquiring the mmap_sem when trying
>>>> to share the huge PMDs.
>>>>
>>>> To remove the unacceptable delays, we have to limit the amount of wait
>>>> time on the mmap_sem. So the new down_write_timedlock() function is
>>>> used to acquire the write lock on the mmap_sem with a timeout value of
>>>> 10ms which should not cause a perceivable delay. If timeout happens,
>>>> the task will abandon its effort to share the PMD and allocate its own
>>>> copy instead.
>>>>
>>>> When too many timeouts happens (threshold currently set at 256), the
>>>> system may be too large for PMD sharing to be useful without undue delay.
>>>> So the sharing will be disabled in this case.
>>>>
>>>> Signed-off-by: Waiman Long <longman@redhat.com>
>>>> ---
>>>> include/linux/fs.h |  7 +++++++
>>>> mm/hugetlb.c       | 24 +++++++++++++++++++++---
>>>> 2 files changed, 28 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/include/linux/fs.h b/include/linux/fs.h
>>>> index 997a530ff4e9..e9d3ad465a6b 100644
>>>> --- a/include/linux/fs.h
>>>> +++ b/include/linux/fs.h
>>>> @@ -40,6 +40,7 @@
>>>> #include <linux/fs_types.h>
>>>> #include <linux/build_bug.h>
>>>> #include <linux/stddef.h>
>>>> +#include <linux/ktime.h>
>>>>
>>>> #include <asm/byteorder.h>
>>>> #include <uapi/linux/fs.h>
>>>> @@ -519,6 +520,12 @@ static inline void i_mmap_lock_write(struct address_space *mapping)
>>>> 	down_write(&mapping->i_mmap_rwsem);
>>>> }
>>>>
>>>> +static inline bool i_mmap_timedlock_write(struct address_space *mapping,
>>>> +					 ktime_t timeout)
>>>> +{
>>>> +	return down_write_timedlock(&mapping->i_mmap_rwsem, timeout);
>>>> +}
>>>> +
>>>> static inline void i_mmap_unlock_write(struct address_space *mapping)
>>>> {
>>>> 	up_write(&mapping->i_mmap_rwsem);
>>>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>>>> index 6d7296dd11b8..445af661ae29 100644
>>>> --- a/mm/hugetlb.c
>>>> +++ b/mm/hugetlb.c
>>>> @@ -4750,6 +4750,8 @@ void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
>>>> 	}
>>>> }
>>>>
>>>> +#define PMD_SHARE_DISABLE_THRESHOLD	(1 << 8)
>>>> +
>>>> /*
>>>> * Search for a shareable pmd page for hugetlb. In any case calls pmd_alloc()
>>>> * and returns the corresponding pte. While this is not necessary for the
>>>> @@ -4770,11 +4772,24 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
>>>> 	pte_t *spte = NULL;
>>>> 	pte_t *pte;
>>>> 	spinlock_t *ptl;
>>>> +	static atomic_t timeout_cnt;
>>>>
>>>> -	if (!vma_shareable(vma, addr))
>>>> -		return (pte_t *)pmd_alloc(mm, pud, addr);
>>>> +	/*
>>>> +	 * Don't share if it is not sharable or locking attempt timed out
>>>> +	 * after 10ms. After 256 timeouts, PMD sharing will be permanently
>>>> +	 * disabled as it is just too slow.
>>> It looks like this kind of policy interacts with kernel debug options like KASAN (which is going to slow the system down
>>> anyway) could introduce tricky issues due to different timings on a debug kernel.
>> With respect to lockdep, down_write_timedlock() works like a trylock. So
>> a lot of checking will be skipped. Also the lockdep code won't be run
>> until the lock is acquired. So its execution time has no effect on the
>> timeout.
> No only lockdep, but also things like KASAN, debug_pagealloc, page_poison, kmemleak, debug
> objects etc that  all going to slow down things in huge_pmd_share(), and make it tricky to get a
> right timeout value for those debug kernels without changing the previous behavior.

Right, I understand that. I will move to using a sysctl parameter for the
timeout and then set its default value to either 10ms, or 20ms if some
debug options are detected. Usually the slowdown should not be more
than 2X.
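
A rough sketch of that idea (the config options checked and the variable
name are assumptions; the 2X figure is only the untested estimate above):

#include <linux/cache.h>
#include <linux/kconfig.h>

/* Default lock timeout in ms; doubled on known-slow debug kernels. */
static unsigned int hugetlb_pmd_share_timeout_ms __read_mostly =
	(IS_ENABLED(CONFIG_KASAN) || IS_ENABLED(CONFIG_DEBUG_PAGEALLOC)) ? 20 : 10;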

Cheers,
Longman
Qian Cai Sept. 11, 2019, 9:57 p.m. UTC | #13
> On Sep 11, 2019, at 4:54 PM, Waiman Long <longman@redhat.com> wrote:
> 
> On 9/11/19 8:42 PM, Qian Cai wrote:
>> 
>>> On Sep 11, 2019, at 12:34 PM, Waiman Long <longman@redhat.com> wrote:
>>> 
>>> On 9/11/19 5:01 PM, Qian Cai wrote:
>>>>> On Sep 11, 2019, at 11:05 AM, Waiman Long <longman@redhat.com> wrote:
>>>>> 
>>>>> When allocating a large amount of static hugepages (~500-1500GB) on a
>>>>> system with large number of CPUs (4, 8 or even 16 sockets), performance
>>>>> degradation (random multi-second delays) was observed when thousands
>>>>> of processes are trying to fault in the data into the huge pages. The
>>>>> likelihood of the delay increases with the number of sockets and hence
>>>>> the CPUs a system has.  This only happens in the initial setup phase
>>>>> and will be gone after all the necessary data are faulted in.
>>>>> 
>>>>> These random delays, however, are deemed unacceptable. The cause of
>>>>> that delay is the long wait time in acquiring the mmap_sem when trying
>>>>> to share the huge PMDs.
>>>>> 
>>>>> To remove the unacceptable delays, we have to limit the amount of wait
>>>>> time on the mmap_sem. So the new down_write_timedlock() function is
>>>>> used to acquire the write lock on the mmap_sem with a timeout value of
>>>>> 10ms which should not cause a perceivable delay. If timeout happens,
>>>>> the task will abandon its effort to share the PMD and allocate its own
>>>>> copy instead.
>>>>> 
>>>>> When too many timeouts happens (threshold currently set at 256), the
>>>>> system may be too large for PMD sharing to be useful without undue delay.
>>>>> So the sharing will be disabled in this case.
>>>>> 
>>>>> Signed-off-by: Waiman Long <longman@redhat.com>
>>>>> ---
>>>>> include/linux/fs.h |  7 +++++++
>>>>> mm/hugetlb.c       | 24 +++++++++++++++++++++---
>>>>> 2 files changed, 28 insertions(+), 3 deletions(-)
>>>>> 
>>>>> diff --git a/include/linux/fs.h b/include/linux/fs.h
>>>>> index 997a530ff4e9..e9d3ad465a6b 100644
>>>>> --- a/include/linux/fs.h
>>>>> +++ b/include/linux/fs.h
>>>>> @@ -40,6 +40,7 @@
>>>>> #include <linux/fs_types.h>
>>>>> #include <linux/build_bug.h>
>>>>> #include <linux/stddef.h>
>>>>> +#include <linux/ktime.h>
>>>>> 
>>>>> #include <asm/byteorder.h>
>>>>> #include <uapi/linux/fs.h>
>>>>> @@ -519,6 +520,12 @@ static inline void i_mmap_lock_write(struct address_space *mapping)
>>>>> 	down_write(&mapping->i_mmap_rwsem);
>>>>> }
>>>>> 
>>>>> +static inline bool i_mmap_timedlock_write(struct address_space *mapping,
>>>>> +					 ktime_t timeout)
>>>>> +{
>>>>> +	return down_write_timedlock(&mapping->i_mmap_rwsem, timeout);
>>>>> +}
>>>>> +
>>>>> static inline void i_mmap_unlock_write(struct address_space *mapping)
>>>>> {
>>>>> 	up_write(&mapping->i_mmap_rwsem);
>>>>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>>>>> index 6d7296dd11b8..445af661ae29 100644
>>>>> --- a/mm/hugetlb.c
>>>>> +++ b/mm/hugetlb.c
>>>>> @@ -4750,6 +4750,8 @@ void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
>>>>> 	}
>>>>> }
>>>>> 
>>>>> +#define PMD_SHARE_DISABLE_THRESHOLD	(1 << 8)
>>>>> +
>>>>> /*
>>>>> * Search for a shareable pmd page for hugetlb. In any case calls pmd_alloc()
>>>>> * and returns the corresponding pte. While this is not necessary for the
>>>>> @@ -4770,11 +4772,24 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
>>>>> 	pte_t *spte = NULL;
>>>>> 	pte_t *pte;
>>>>> 	spinlock_t *ptl;
>>>>> +	static atomic_t timeout_cnt;
>>>>> 
>>>>> -	if (!vma_shareable(vma, addr))
>>>>> -		return (pte_t *)pmd_alloc(mm, pud, addr);
>>>>> +	/*
>>>>> +	 * Don't share if it is not sharable or locking attempt timed out
>>>>> +	 * after 10ms. After 256 timeouts, PMD sharing will be permanently
>>>>> +	 * disabled as it is just too slow.
>>>> It looks like this kind of policy interacts with kernel debug options like KASAN (which is going to slow the system down
>>>> anyway) could introduce tricky issues due to different timings on a debug kernel.
>>> With respect to lockdep, down_write_timedlock() works like a trylock. So
>>> a lot of checking will be skipped. Also the lockdep code won't be run
>>> until the lock is acquired. So its execution time has no effect on the
>>> timeout.
>> No only lockdep, but also things like KASAN, debug_pagealloc, page_poison, kmemleak, debug
>> objects etc that  all going to slow down things in huge_pmd_share(), and make it tricky to get a
>> right timeout value for those debug kernels without changing the previous behavior.
> 
> Right, I understand that. I will move to use a sysctl parameters for the
> timeout and then set its default value to either 10ms or 20ms if some
> debug options are detected. Usually the slower than should not be more
> than 2X.

That 2X is another magic number which has no testing data to back it up. We need a way to disable the timeout
completely in Kconfig, so it can ship as part of a debug kernel package.
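
Something along these lines, where the Kconfig option name is invented
purely for illustration and the helpers are the ones from this patch:

/*
 * With the (hypothetical) option disabled, debug kernels keep the old
 * unconditional write lock and never hit the timeout or disable paths.
 */
static inline bool hugetlb_lock_for_pmd_share(struct address_space *mapping)
{
#ifdef CONFIG_HUGETLB_PMD_SHARE_TIMEOUT
	/* Give up after 10ms so a contended lock cannot stall the fault. */
	return i_mmap_timedlock_write(mapping, ms_to_ktime(10));
#else
	/* Pre-patch behaviour: wait for as long as it takes. */
	i_mmap_lock_write(mapping);
	return true;
#endif
}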
Mike Kravetz Sept. 12, 2019, 3:26 a.m. UTC | #14
On 9/11/19 8:05 AM, Waiman Long wrote:
> When allocating a large amount of static hugepages (~500-1500GB) on a
> system with large number of CPUs (4, 8 or even 16 sockets), performance
> degradation (random multi-second delays) was observed when thousands
> of processes are trying to fault in the data into the huge pages. The
> likelihood of the delay increases with the number of sockets and hence
> the CPUs a system has.  This only happens in the initial setup phase
> and will be gone after all the necessary data are faulted in.
> 
> These random delays, however, are deemed unacceptable. The cause of
> that delay is the long wait time in acquiring the mmap_sem when trying
> to share the huge PMDs.
> 
> To remove the unacceptable delays, we have to limit the amount of wait
> time on the mmap_sem. So the new down_write_timedlock() function is
> used to acquire the write lock on the mmap_sem with a timeout value of
> 10ms which should not cause a perceivable delay. If timeout happens,
> the task will abandon its effort to share the PMD and allocate its own
> copy instead.
> 
<snip>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 6d7296dd11b8..445af661ae29 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -4750,6 +4750,8 @@ void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
>  	}
>  }
>  
> +#define PMD_SHARE_DISABLE_THRESHOLD	(1 << 8)
> +
>  /*
>   * Search for a shareable pmd page for hugetlb. In any case calls pmd_alloc()
>   * and returns the corresponding pte. While this is not necessary for the
> @@ -4770,11 +4772,24 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
>  	pte_t *spte = NULL;
>  	pte_t *pte;
>  	spinlock_t *ptl;
> +	static atomic_t timeout_cnt;
>  
> -	if (!vma_shareable(vma, addr))
> -		return (pte_t *)pmd_alloc(mm, pud, addr);
> +	/*
> +	 * Don't share if it is not sharable or locking attempt timed out
> +	 * after 10ms. After 256 timeouts, PMD sharing will be permanently
> +	 * disabled as it is just too slow.
> +	 */
> +	if (!vma_shareable(vma, addr) ||
> +	   (atomic_read(&timeout_cnt) >= PMD_SHARE_DISABLE_THRESHOLD))
> +		goto out_no_share;
> +
> +	if (!i_mmap_timedlock_write(mapping, ms_to_ktime(10))) {
> +		if (atomic_inc_return(&timeout_cnt) ==
> +		    PMD_SHARE_DISABLE_THRESHOLD)
> +			pr_info("Hugetlbfs PMD sharing disabled because of timeouts!\n");
> +		goto out_no_share;
> +	}
>  
> -	i_mmap_lock_write(mapping);

All this got me wondering if we really need to take i_mmap_rwsem in write
mode here.  We are not changing the tree, only traversing it looking for
a suitable vma.

Unless I am missing something, the hugetlb code only ever takes the semaphore
in write mode; never read.  Could this have been the result of changing the
tree semaphore to read/write?  Instead of analyzing all the code, the easiest
and safest thing would have been to take all accesses in write mode.

I can investigate more, but wanted to ask the question in case someone already
knows.

At one time, I thought it was safe to acquire the semaphore in read mode for
huge_pmd_share, but write mode for huge_pmd_unshare.  See commit b43a99900559.
This was reverted along with another patch for other reasons.

If we change from write to read mode, this may have a significant impact
on the stalls.
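
For concreteness, the change in question would only swap the locking calls
in huge_pmd_share(), using the existing i_mmap_lock_read()/
i_mmap_unlock_read() helpers. A sketch of the two call sites (not a tested
patch, and whether read mode is actually safe is exactly the open question
above):

	i_mmap_lock_read(mapping);
	vma_interval_tree_foreach(svma, &mapping->i_mmap, idx, idx) {
		if (svma == vma)
			continue;
		/* existing search for a shareable spte, unchanged */
	}
	/* existing pud_populate()/pmd_alloc() tail, unchanged */
	i_mmap_unlock_read(mapping);
	return pte;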
Matthew Wilcox Sept. 12, 2019, 3:41 a.m. UTC | #15
On Wed, Sep 11, 2019 at 08:26:52PM -0700, Mike Kravetz wrote:
> All this got me wondering if we really need to take i_mmap_rwsem in write
> mode here.  We are not changing the tree, only traversing it looking for
> a suitable vma.
> 
> Unless I am missing something, the hugetlb code only ever takes the semaphore
> in write mode; never read.  Could this have been the result of changing the
> tree semaphore to read/write?  Instead of analyzing all the code, the easiest
> and safest thing would have been to take all accesses in write mode.

I was wondering the same thing.  It was changed here:

commit 83cde9e8ba95d180eaefefe834958fbf7008cf39
Author: Davidlohr Bueso <dave@stgolabs.net>
Date:   Fri Dec 12 16:54:21 2014 -0800

    mm: use new helper functions around the i_mmap_mutex
    
    Convert all open coded mutex_lock/unlock calls to the
    i_mmap_[lock/unlock]_write() helpers.

and a subsequent patch said:

    This conversion is straightforward.  For now, all users take the write
    lock.

There were subsequent patches which changed a few places
c8475d144abb1e62958cc5ec281d2a9e161c1946
1acf2e040721564d579297646862b8ea3dd4511b
d28eb9c861f41aa2af4cfcc5eeeddff42b13d31e
874bfcaf79e39135cd31e1cfc9265cf5222d1ec3
3dec0ba0be6a532cac949e02b853021bf6d57dad

but I don't know why this one wasn't changed.

(I was also wondering about caching a potentially sharable page table
in the address_space to avoid having to walk the VMA tree at all if that
one happened to be sharable).
Davidlohr Bueso Sept. 12, 2019, 4:40 a.m. UTC | #16
On Wed, 11 Sep 2019, Matthew Wilcox wrote:

>On Wed, Sep 11, 2019 at 08:26:52PM -0700, Mike Kravetz wrote:
>> All this got me wondering if we really need to take i_mmap_rwsem in write
>> mode here.  We are not changing the tree, only traversing it looking for
>> a suitable vma.
>>
>> Unless I am missing something, the hugetlb code only ever takes the semaphore
>> in write mode; never read.  Could this have been the result of changing the
>> tree semaphore to read/write?  Instead of analyzing all the code, the easiest
>> and safest thing would have been to take all accesses in write mode.
>
>I was wondering the same thing.  It was changed here:
>
>commit 83cde9e8ba95d180eaefefe834958fbf7008cf39
>Author: Davidlohr Bueso <dave@stgolabs.net>
>Date:   Fri Dec 12 16:54:21 2014 -0800
>
>    mm: use new helper functions around the i_mmap_mutex
>
>    Convert all open coded mutex_lock/unlock calls to the
>    i_mmap_[lock/unlock]_write() helpers.
>
>and a subsequent patch said:
>
>    This conversion is straightforward.  For now, all users take the write
>    lock.
>
>There were subsequent patches which changed a few places
>c8475d144abb1e62958cc5ec281d2a9e161c1946
>1acf2e040721564d579297646862b8ea3dd4511b
>d28eb9c861f41aa2af4cfcc5eeeddff42b13d31e
>874bfcaf79e39135cd31e1cfc9265cf5222d1ec3
>3dec0ba0be6a532cac949e02b853021bf6d57dad
>
>but I don't know why this one wasn't changed.

I cannot recall why huge_pmd_share() was not changed along with the other
callers that don't modify the interval tree. By looking at the function,
I agree that this could be shared; in fact this lock is much less involved
than its anon_vma counterpart, last I checked (perhaps with the exception
of take_rmap_locks()).

>
>(I was also wondering about caching a potentially sharable page table
>in the address_space to avoid having to walk the VMA tree at all if that
>one happened to be sharable).

I also think that the right solution is within the mm instead of adding
a new api to rwsem and the extra complexity/overhead to osq _just_ for this
case. We've managed to not need timeout extensions in our locking primitives
thus far, which is a good thing imo.

Thanks,
Davidlohr
Waiman Long Sept. 12, 2019, 9:06 a.m. UTC | #17
On 9/12/19 4:26 AM, Mike Kravetz wrote:
> On 9/11/19 8:05 AM, Waiman Long wrote:
>> When allocating a large amount of static hugepages (~500-1500GB) on a
>> system with large number of CPUs (4, 8 or even 16 sockets), performance
>> degradation (random multi-second delays) was observed when thousands
>> of processes are trying to fault in the data into the huge pages. The
>> likelihood of the delay increases with the number of sockets and hence
>> the CPUs a system has.  This only happens in the initial setup phase
>> and will be gone after all the necessary data are faulted in.
>>
>> These random delays, however, are deemed unacceptable. The cause of
>> that delay is the long wait time in acquiring the mmap_sem when trying
>> to share the huge PMDs.
>>
>> To remove the unacceptable delays, we have to limit the amount of wait
>> time on the mmap_sem. So the new down_write_timedlock() function is
>> used to acquire the write lock on the mmap_sem with a timeout value of
>> 10ms which should not cause a perceivable delay. If timeout happens,
>> the task will abandon its effort to share the PMD and allocate its own
>> copy instead.
>>
> <snip>
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index 6d7296dd11b8..445af661ae29 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -4750,6 +4750,8 @@ void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
>>  	}
>>  }
>>  
>> +#define PMD_SHARE_DISABLE_THRESHOLD	(1 << 8)
>> +
>>  /*
>>   * Search for a shareable pmd page for hugetlb. In any case calls pmd_alloc()
>>   * and returns the corresponding pte. While this is not necessary for the
>> @@ -4770,11 +4772,24 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
>>  	pte_t *spte = NULL;
>>  	pte_t *pte;
>>  	spinlock_t *ptl;
>> +	static atomic_t timeout_cnt;
>>  
>> -	if (!vma_shareable(vma, addr))
>> -		return (pte_t *)pmd_alloc(mm, pud, addr);
>> +	/*
>> +	 * Don't share if it is not sharable or locking attempt timed out
>> +	 * after 10ms. After 256 timeouts, PMD sharing will be permanently
>> +	 * disabled as it is just too slow.
>> +	 */
>> +	if (!vma_shareable(vma, addr) ||
>> +	   (atomic_read(&timeout_cnt) >= PMD_SHARE_DISABLE_THRESHOLD))
>> +		goto out_no_share;
>> +
>> +	if (!i_mmap_timedlock_write(mapping, ms_to_ktime(10))) {
>> +		if (atomic_inc_return(&timeout_cnt) ==
>> +		    PMD_SHARE_DISABLE_THRESHOLD)
>> +			pr_info("Hugetlbfs PMD sharing disabled because of timeouts!\n");
>> +		goto out_no_share;
>> +	}
>>  
>> -	i_mmap_lock_write(mapping);
> All this got me wondering if we really need to take i_mmap_rwsem in write
> mode here.  We are not changing the tree, only traversing it looking for
> a suitable vma.
>
> Unless I am missing something, the hugetlb code only ever takes the semaphore
> in write mode; never read.  Could this have been the result of changing the
> tree semaphore to read/write?  Instead of analyzing all the code, the easiest
> and safest thing would have been to take all accesses in write mode.
>
> I can investigate more, but wanted to ask the question in case someone already
> knows.
>
> At one time, I thought it was safe to acquire the semaphore in read mode for
> huge_pmd_share, but write mode for huge_pmd_unshare.  See commit b43a99900559.
> This was reverted along with another patch for other reasons.
>
> If we change change from write to read mode, this may have significant impact
> on the stalls.

If we can take the rwsem in read mode, that should solve the problem
AFAICS. As I don't have a full understanding of the history of that
code, I didn't try to do that in my patch.

Cheers,
Longman
Mike Kravetz Sept. 12, 2019, 4:43 p.m. UTC | #18
On 9/12/19 2:06 AM, Waiman Long wrote:
> If we can take the rwsem in read mode, that should solve the problem
> AFAICS. As I don't have a full understanding of the history of that
> code, I didn't try to do that in my patch.

Do you still have access to an environment that creates the long stalls?
If so, can you try the simple change of taking the semaphore in read mode
in huge_pmd_share.
Waiman Long Sept. 13, 2019, 6:23 p.m. UTC | #19
On 9/12/19 5:43 PM, Mike Kravetz wrote:
> On 9/12/19 2:06 AM, Waiman Long wrote:
>> If we can take the rwsem in read mode, that should solve the problem
>> AFAICS. As I don't have a full understanding of the history of that
>> code, I didn't try to do that in my patch.
> Do you still have access to an environment that creates the long stalls?
> If so, can you try the simple change of taking the semaphore in read mode
> in huge_pmd_share.
>
That is what I am planning to do. I don't have an environment to
reproduce the problem myself. I have to create a test kernel and ask the
customer to try it out.

Cheers,
Longman
Waiman Long Sept. 16, 2019, 1:53 p.m. UTC | #20
On 9/12/19 12:40 AM, Davidlohr Bueso wrote:
>
> I also think that the right solution is within the mm instead of adding
> a new api to rwsem and the extra complexity/overhead to osq _just_ for
> this
> case. We've managed to not need timeout extensions in our locking
> primitives
> thus far, which is a good thing imo. 

Adding a variant with a timeout can be useful in resolving some potential
deadlock issues found by lockdep. Anyway, there was talk about merging
rt-mutex and regular mutex at LPC last week. So we will need to have a
mutex_lock() variant with a timeout for that to happen.

Cheers,
Longman

Patch

diff --git a/include/linux/fs.h b/include/linux/fs.h
index 997a530ff4e9..e9d3ad465a6b 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -40,6 +40,7 @@ 
 #include <linux/fs_types.h>
 #include <linux/build_bug.h>
 #include <linux/stddef.h>
+#include <linux/ktime.h>
 
 #include <asm/byteorder.h>
 #include <uapi/linux/fs.h>
@@ -519,6 +520,12 @@  static inline void i_mmap_lock_write(struct address_space *mapping)
 	down_write(&mapping->i_mmap_rwsem);
 }
 
+static inline bool i_mmap_timedlock_write(struct address_space *mapping,
+					 ktime_t timeout)
+{
+	return down_write_timedlock(&mapping->i_mmap_rwsem, timeout);
+}
+
 static inline void i_mmap_unlock_write(struct address_space *mapping)
 {
 	up_write(&mapping->i_mmap_rwsem);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 6d7296dd11b8..445af661ae29 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4750,6 +4750,8 @@  void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
 	}
 }
 
+#define PMD_SHARE_DISABLE_THRESHOLD	(1 << 8)
+
 /*
  * Search for a shareable pmd page for hugetlb. In any case calls pmd_alloc()
  * and returns the corresponding pte. While this is not necessary for the
@@ -4770,11 +4772,24 @@  pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
 	pte_t *spte = NULL;
 	pte_t *pte;
 	spinlock_t *ptl;
+	static atomic_t timeout_cnt;
 
-	if (!vma_shareable(vma, addr))
-		return (pte_t *)pmd_alloc(mm, pud, addr);
+	/*
+	 * Don't share if it is not sharable or locking attempt timed out
+	 * after 10ms. After 256 timeouts, PMD sharing will be permanently
+	 * disabled as it is just too slow.
+	 */
+	if (!vma_shareable(vma, addr) ||
+	   (atomic_read(&timeout_cnt) >= PMD_SHARE_DISABLE_THRESHOLD))
+		goto out_no_share;
+
+	if (!i_mmap_timedlock_write(mapping, ms_to_ktime(10))) {
+		if (atomic_inc_return(&timeout_cnt) ==
+		    PMD_SHARE_DISABLE_THRESHOLD)
+			pr_info("Hugetlbfs PMD sharing disabled because of timeouts!\n");
+		goto out_no_share;
+	}
 
-	i_mmap_lock_write(mapping);
 	vma_interval_tree_foreach(svma, &mapping->i_mmap, idx, idx) {
 		if (svma == vma)
 			continue;
@@ -4806,6 +4821,9 @@  pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
 	pte = (pte_t *)pmd_alloc(mm, pud, addr);
 	i_mmap_unlock_write(mapping);
 	return pte;
+
+out_no_share:
+	return (pte_t *)pmd_alloc(mm, pud, addr);
 }
 
 /*