
[4/6] mm/hmm: add output flag for compound page mapping

Message ID 20200508192009.15302-5-rcampbell@nvidia.com (mailing list archive)
State New
Series nouveau/hmm: add support for mapping large pages

Commit Message

Ralph Campbell May 8, 2020, 7:20 p.m. UTC
hmm_range_fault() returns an array of page frame numbers and flags for
how the pages are mapped in the requested process' page tables. The PFN
can be used to get the struct page with hmm_pfn_to_page() and the page size
order can be determined with compound_order(page) but if the page is larger
than order 0 (PAGE_SIZE), there is no indication that the page is mapped
using a larger page size. To be fully general, hmm_range_fault() would need
to return the mapping size to handle cases like a 1GB compound page being
mapped with 2MB PMD entries. However, in the most common case the mapping
size is the same as the underlying compound page size.
Add a new output flag to indicate this so that callers know it is safe to
use a large device page table mapping if one is available.

Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
---
 include/linux/hmm.h |  4 +++-
 mm/hmm.c            | 10 +++++++---
 2 files changed, 10 insertions(+), 4 deletions(-)

Comments

Christoph Hellwig May 8, 2020, 7:51 p.m. UTC | #1
On Fri, May 08, 2020 at 12:20:07PM -0700, Ralph Campbell wrote:
> hmm_range_fault() returns an array of page frame numbers and flags for
> how the pages are mapped in the requested process' page tables. The PFN
> can be used to get the struct page with hmm_pfn_to_page() and the page size
> order can be determined with compound_order(page) but if the page is larger
> than order 0 (PAGE_SIZE), there is no indication that the page is mapped
> using a larger page size. To be fully general, hmm_range_fault() would need
> to return the mapping size to handle cases like a 1GB compound page being
> mapped with 2MB PMD entries. However, in the most common case the mapping
> size is the same as the underlying compound page size.
> Add a new output flag to indicate this so that callers know it is safe to
> use a large device page table mapping if one is available.

Why do you need the flag?  The caller should be able to just use
page_size() (or willy's new thp_size helper).
Ralph Campbell May 8, 2020, 8:06 p.m. UTC | #2
On 5/8/20 12:51 PM, Christoph Hellwig wrote:
> On Fri, May 08, 2020 at 12:20:07PM -0700, Ralph Campbell wrote:
>> hmm_range_fault() returns an array of page frame numbers and flags for
>> how the pages are mapped in the requested process' page tables. The PFN
>> can be used to get the struct page with hmm_pfn_to_page() and the page size
>> order can be determined with compound_order(page) but if the page is larger
>> than order 0 (PAGE_SIZE), there is no indication that the page is mapped
>> using a larger page size. To be fully general, hmm_range_fault() would need
>> to return the mapping size to handle cases like a 1GB compound page being
>> mapped with 2MB PMD entries. However, in the most common case the mapping
>> size is the same as the underlying compound page size.
>> Add a new output flag to indicate this so that callers know it is safe to
>> use a large device page table mapping if one is available.
> 
> Why do you need the flag?  The caller should be able to just use
> page_size() (or willy's new thp_size helper).
> 

The question is whether or not a large page can be mapped with smaller
page table entries with different permissions. If one process has a 2MB
page mapped with 4K PTEs with different read/write permissions, I don't think
it would be OK for a device to map the whole 2MB with write access enabled.
The flag is supposed to indicate that the whole page can be mapped by the
device with the indicated read/write permissions.
Zi Yan May 26, 2020, 10:29 p.m. UTC | #3
On 8 May 2020, at 16:06, Ralph Campbell wrote:

> On 5/8/20 12:51 PM, Christoph Hellwig wrote:
>> On Fri, May 08, 2020 at 12:20:07PM -0700, Ralph Campbell wrote:
>>> hmm_range_fault() returns an array of page frame numbers and flags for
>>> how the pages are mapped in the requested process' page tables. The PFN
>>> can be used to get the struct page with hmm_pfn_to_page() and the page size
>>> order can be determined with compound_order(page) but if the page is larger
>>> than order 0 (PAGE_SIZE), there is no indication that the page is mapped
>>> using a larger page size. To be fully general, hmm_range_fault() would need
>>> to return the mapping size to handle cases like a 1GB compound page being
>>> mapped with 2MB PMD entries. However, in the most common case the mapping
>>> size is the same as the underlying compound page size.
>>> Add a new output flag to indicate this so that callers know it is safe to
>>> use a large device page table mapping if one is available.
>>
>> Why do you need the flag?  The caller should be able to just use
>> page_size() (or willy's new thp_size helper).
>>
>
> The question is whether or not a large page can be mapped with smaller
> page table entries with different permissions. If one process has a 2MB
> page mapped with 4K PTEs with different read/write permissions, I don't think
> it would be OK for a device to map the whole 2MB with write access enabled.
> The flag is supposed to indicate that the whole page can be mapped by the
> device with the indicated read/write permissions.

If hmm_range_fault() only walks one VMA at a time, you would not have this permission
issue, right? Since all pages from one VMA should have the same permission.
But it seems that hmm_range_fault() deals with pages across multiple VMAs.
Maybe we should make hmm_range_fault() bail out early when it encounters
a VMA with a different permission than the existing ones?


—
Best Regards,
Yan Zi
Ralph Campbell May 26, 2020, 10:47 p.m. UTC | #4
On 5/26/20 3:29 PM, Zi Yan wrote:
> On 8 May 2020, at 16:06, Ralph Campbell wrote:
> 
>> On 5/8/20 12:51 PM, Christoph Hellwig wrote:
>>> On Fri, May 08, 2020 at 12:20:07PM -0700, Ralph Campbell wrote:
>>>> hmm_range_fault() returns an array of page frame numbers and flags for
>>>> how the pages are mapped in the requested process' page tables. The PFN
>>>> can be used to get the struct page with hmm_pfn_to_page() and the page size
>>>> order can be determined with compound_order(page) but if the page is larger
>>>> than order 0 (PAGE_SIZE), there is no indication that the page is mapped
>>>> using a larger page size. To be fully general, hmm_range_fault() would need
>>>> to return the mapping size to handle cases like a 1GB compound page being
>>>> mapped with 2MB PMD entries. However, in the most common case the mapping
>>>> size is the same as the underlying compound page size.
>>>> Add a new output flag to indicate this so that callers know it is safe to
>>>> use a large device page table mapping if one is available.
>>>
>>> Why do you need the flag?  The caller should be able to just use
>>> page_size() (or willy's new thp_size helper).
>>>
>>
>> The question is whether or not a large page can be mapped with smaller
>> page table entries with different permissions. If one process has a 2MB
>> page mapped with 4K PTEs with different read/write permissions, I don't think
>> it would be OK for a device to map the whole 2MB with write access enabled.
>> The flag is supposed to indicate that the whole page can be mapped by the
>> device with the indicated read/write permissions.
> 
> If hmm_range_fault() only walks one VMA at a time, you would not have this permission
> issue, right? Since all pages from one VMA should have the same permission.
> But it seems that hmm_range_fault() deals with pages across multiple VMAs.
> Maybe we should make hmm_range_fault() bail out early when it encounters
> a VMA with a different permission than the existing ones?
> 
> 
> —
> Best Regards,
> Yan Zi

I don't think so. The VMA might have read/write permission but the page table might
have read-only permission in order to trigger a fault for copy-on-write. Or the
PTE might be read-only or invalid to trigger faults on architectures that don't
have hardware-updated accessed bits and use minor faults to update the LRU.
The goal is that the MM core sees the same faults whether memory is accessed
by an HMM device or by a CPU thread.

Patch

diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index e912b9dc4633..f2d38af421e7 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -41,12 +41,14 @@  enum hmm_pfn_flags {
 	HMM_PFN_VALID = 1UL << (BITS_PER_LONG - 1),
 	HMM_PFN_WRITE = 1UL << (BITS_PER_LONG - 2),
 	HMM_PFN_ERROR = 1UL << (BITS_PER_LONG - 3),
+	HMM_PFN_COMPOUND = 1UL << (BITS_PER_LONG - 4),
 
 	/* Input flags */
 	HMM_PFN_REQ_FAULT = HMM_PFN_VALID,
 	HMM_PFN_REQ_WRITE = HMM_PFN_WRITE,
 
-	HMM_PFN_FLAGS = HMM_PFN_VALID | HMM_PFN_WRITE | HMM_PFN_ERROR,
+	HMM_PFN_FLAGS = HMM_PFN_VALID | HMM_PFN_WRITE | HMM_PFN_ERROR |
+			HMM_PFN_COMPOUND,
 };
 
 /*
diff --git a/mm/hmm.c b/mm/hmm.c
index 41673a6d8d46..a9dd06e190a1 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -170,7 +170,9 @@  static inline unsigned long pmd_to_hmm_pfn_flags(struct hmm_range *range,
 {
 	if (pmd_protnone(pmd))
 		return 0;
-	return pmd_write(pmd) ? (HMM_PFN_VALID | HMM_PFN_WRITE) : HMM_PFN_VALID;
+	return pmd_write(pmd) ?
+			(HMM_PFN_VALID | HMM_PFN_COMPOUND | HMM_PFN_WRITE) :
+			(HMM_PFN_VALID | HMM_PFN_COMPOUND);
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
@@ -389,7 +391,9 @@  static inline unsigned long pud_to_hmm_pfn_flags(struct hmm_range *range,
 {
 	if (!pud_present(pud))
 		return 0;
-	return pud_write(pud) ? (HMM_PFN_VALID | HMM_PFN_WRITE) : HMM_PFN_VALID;
+	return pud_write(pud) ?
+			(HMM_PFN_VALID | HMM_PFN_COMPOUND | HMM_PFN_WRITE) :
+			(HMM_PFN_VALID | HMM_PFN_COMPOUND);
 }
 
 static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end,
@@ -484,7 +488,7 @@  static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask,
 
 	pfn = pte_pfn(entry) + ((start & ~hmask) >> PAGE_SHIFT);
 	for (; addr < end; addr += PAGE_SIZE, i++, pfn++)
-		range->hmm_pfns[i] = pfn | cpu_flags;
+		range->hmm_pfns[i] = pfn | cpu_flags | HMM_PFN_COMPOUND;
 
 	spin_unlock(ptl);
 	return 0;