[09/16] mm/hmm: add output flag for compound page mapping

Message ID 20200619215649.32297-10-rcampbell@nvidia.com (mailing list archive)
State New
Series mm/hmm/nouveau: THP mapping and migration

Commit Message

Ralph Campbell June 19, 2020, 9:56 p.m. UTC
hmm_range_fault() returns an array of page frame numbers and flags for
how the pages are mapped in the requested process' page tables. The PFN
can be used to get the struct page with hmm_pfn_to_page() and the page size
order can be determined with compound_order(page), but if the page is larger
than order 0 (PAGE_SIZE), there is no indication of whether the page is
mapped using a larger page size. To be fully general, hmm_range_fault()
would need to return the mapping size to handle cases like a 1GB compound
page being mapped with 2MB PMD entries. However, in the most common case,
the mapping size is the same as the underlying compound page size.
Add a new output flag to indicate this so that callers know it is safe to
use a large device page table mapping if one is available.

Signed-off-by: Ralph Campbell <rcampbell@nvidia.com>
---
 include/linux/hmm.h |  4 +++-
 mm/hmm.c            | 10 +++++++---
 2 files changed, 10 insertions(+), 4 deletions(-)
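
For context, a minimal caller-side sketch (not part of the patch) of how a
driver might consume the new flag after a successful hmm_range_fault() call;
my_device_map() is a hypothetical stand-in for the driver's own page table
mapping helper:

static int demo_map_one(struct hmm_range *range, unsigned long i)
{
	unsigned long hmm_pfn = range->hmm_pfns[i];
	struct page *page;
	unsigned long size = PAGE_SIZE;

	if (!(hmm_pfn & HMM_PFN_VALID))
		return -EFAULT;

	page = hmm_pfn_to_page(hmm_pfn);
	/*
	 * HMM_PFN_COMPOUND means the CPU maps the whole compound page,
	 * so a device mapping up to the compound page size is safe.
	 */
	if (hmm_pfn & HMM_PFN_COMPOUND)
		size = PAGE_SIZE << compound_order(compound_head(page));

	return my_device_map(page, size, hmm_pfn & HMM_PFN_WRITE);
}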

Comments

Jason Gunthorpe June 22, 2020, 5:25 p.m. UTC | #1
On Fri, Jun 19, 2020 at 02:56:42PM -0700, Ralph Campbell wrote:
> hmm_range_fault() returns an array of page frame numbers and flags for
> how the pages are mapped in the requested process' page tables. The PFN
> can be used to get the struct page with hmm_pfn_to_page() and the page size
> order can be determined with compound_order(page), but if the page is larger
> than order 0 (PAGE_SIZE), there is no indication of whether the page is
> mapped using a larger page size. To be fully general, hmm_range_fault()
> would need to return the mapping size to handle cases like a 1GB compound
> page being mapped with 2MB PMD entries. However, in the most common case,
> the mapping size is the same as the underlying compound page size.
> Add a new output flag to indicate this so that callers know it is safe to
> use a large device page table mapping if one is available.

But what size should the caller use?

You already explained that the caller cannot use compound_order() to
get the size, so what should it be?

Probably this needs to be two flags, PUD and PMD, and the caller should
use the PUD and PMD sizes to figure out how big it is?

Jason
Ralph Campbell June 22, 2020, 6:10 p.m. UTC | #2
On 6/22/20 10:25 AM, Jason Gunthorpe wrote:
> On Fri, Jun 19, 2020 at 02:56:42PM -0700, Ralph Campbell wrote:
>> hmm_range_fault() returns an array of page frame numbers and flags for
>> how the pages are mapped in the requested process' page tables. The PFN
>> can be used to get the struct page with hmm_pfn_to_page() and the page size
>> order can be determined with compound_order(page), but if the page is larger
>> than order 0 (PAGE_SIZE), there is no indication of whether the page is
>> mapped using a larger page size. To be fully general, hmm_range_fault()
>> would need to return the mapping size to handle cases like a 1GB compound
>> page being mapped with 2MB PMD entries. However, in the most common case,
>> the mapping size is the same as the underlying compound page size.
>> Add a new output flag to indicate this so that callers know it is safe to
>> use a large device page table mapping if one is available.
> 
> But what size should the caller use?
> 
> You already explained that the caller cannot use compound_order() to
> get the size, so what should it be?
> 
> Probably this needs to be two flags, PUD and PMD, and the caller should
> use the PUD and PMD sizes to figure out how big it is?
> 
> Jason
> 

I guess I didn't explain it as clearly as I thought. :-)

The page size *can* be determined with compound_order(page), but without
the flag, the caller doesn't know how much of that page is actually being
mapped by the CPU. The flag says the CPU is mapping the whole compound page,
so the caller can use device mappings up to the size given by compound_order(page).
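
In code form, the contract being described might look like this (a
hypothetical helper, not something in the series):

static unsigned long hmm_pfn_max_map_size(unsigned long hmm_pfn)
{
	struct page *page = hmm_pfn_to_page(hmm_pfn);

	/*
	 * With HMM_PFN_COMPOUND set, the CPU mapping covers the entire
	 * compound page; without it, only PAGE_SIZE is guaranteed.
	 */
	if (hmm_pfn & HMM_PFN_COMPOUND)
		return PAGE_SIZE << compound_order(compound_head(page));
	return PAGE_SIZE;
}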
Jason Gunthorpe June 22, 2020, 11:18 p.m. UTC | #3
On Mon, Jun 22, 2020 at 11:10:05AM -0700, Ralph Campbell wrote:
> 
> On 6/22/20 10:25 AM, Jason Gunthorpe wrote:
> > On Fri, Jun 19, 2020 at 02:56:42PM -0700, Ralph Campbell wrote:
> > > hmm_range_fault() returns an array of page frame numbers and flags for
> > > how the pages are mapped in the requested process' page tables. The PFN
> > > can be used to get the struct page with hmm_pfn_to_page() and the page size
> > > order can be determined with compound_order(page), but if the page is larger
> > > than order 0 (PAGE_SIZE), there is no indication of whether the page is
> > > mapped using a larger page size. To be fully general, hmm_range_fault()
> > > would need to return the mapping size to handle cases like a 1GB compound
> > > page being mapped with 2MB PMD entries. However, in the most common case,
> > > the mapping size is the same as the underlying compound page size.
> > > Add a new output flag to indicate this so that callers know it is safe to
> > > use a large device page table mapping if one is available.
> > 
> > But what size should the caller use?
> > 
> > You already explained that the caller cannot use compound_order() to
> > get the size, so what should it be?
> > 
> > Probably this needs to be two flags, PUD and PMD, and the caller should
> > use the PUD and PMD sizes to figure out how big it is?
> > 
> > Jason
> > 
> 
> I guess I didn't explain it as clearly as I thought. :-)
> 
> The page size *can* be determined with compound_order(page), but without
> the flag, the caller doesn't know how much of that page is actually being
> mapped by the CPU. The flag says the CPU is mapping the whole compound page,
> so the caller can use device mappings up to the size given by compound_order(page).

No, I got it, I just don't like the assumption that just because a PMD
or PUD points to a page, the only possible compound_order() of that page
is the PMD or PUD order, respectively. Partial mapping should be possible
in both cases, if not today, then maybe down the road with some of the
large page work that has been floating about.

It seems much safer to just directly encode the PUD/PMD size in the
flags.

Jason
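
A sketch of the two-flag layout Jason is suggesting (the flag names and
bit positions here are hypothetical, not from any posted patch):

enum hmm_pfn_flags {
	HMM_PFN_VALID = 1UL << (BITS_PER_LONG - 1),
	HMM_PFN_WRITE = 1UL << (BITS_PER_LONG - 2),
	HMM_PFN_ERROR = 1UL << (BITS_PER_LONG - 3),
	/* Hypothetical: report the CPU mapping level directly */
	HMM_PFN_PMD   = 1UL << (BITS_PER_LONG - 4),	/* mapped at PMD_SIZE */
	HMM_PFN_PUD   = 1UL << (BITS_PER_LONG - 5),	/* mapped at PUD_SIZE */
};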
Ralph Campbell June 22, 2020, 11:26 p.m. UTC | #4
On 6/22/20 4:18 PM, Jason Gunthorpe wrote:
> On Mon, Jun 22, 2020 at 11:10:05AM -0700, Ralph Campbell wrote:
>>
>> On 6/22/20 10:25 AM, Jason Gunthorpe wrote:
>>> On Fri, Jun 19, 2020 at 02:56:42PM -0700, Ralph Campbell wrote:
>>>> hmm_range_fault() returns an array of page frame numbers and flags for
>>>> how the pages are mapped in the requested process' page tables. The PFN
>>>> can be used to get the struct page with hmm_pfn_to_page() and the page size
>>>> order can be determined with compound_order(page), but if the page is larger
>>>> than order 0 (PAGE_SIZE), there is no indication of whether the page is
>>>> mapped using a larger page size. To be fully general, hmm_range_fault()
>>>> would need to return the mapping size to handle cases like a 1GB compound
>>>> page being mapped with 2MB PMD entries. However, in the most common case,
>>>> the mapping size is the same as the underlying compound page size.
>>>> Add a new output flag to indicate this so that callers know it is safe to
>>>> use a large device page table mapping if one is available.
>>>
>>> But what size should the caller use?
>>>
>>> You already explained that the caller cannot use compound_order() to
>>> get the size, so what should it be?
>>>
>>> Probably this needs to be two flags, PUD and PMD, and the caller should
>>> use the PUD and PMD sizes to figure out how big it is?
>>>
>>> Jason
>>>
>>
>> I guess I didn't explain it as clearly as I thought. :-)
>>
>> The page size *can* be determined with compound_order(page), but without
>> the flag, the caller doesn't know how much of that page is actually being
>> mapped by the CPU. The flag says the CPU is mapping the whole compound page,
>> so the caller can use device mappings up to the size given by compound_order(page).
> 
> No, I got it, I just don't like the assumption that just because a PMD
> or PUD points to a page, the only possible compound_order() of that page
> is the PMD or PUD order, respectively. Partial mapping should be possible
> in both cases, if not today, then maybe down the road with some of the
> large page work that has been floating about.
> 
> It seems much safer to just directly encode the PUD/PMD size in the
> flags.
> 
> Jason

That is fine with me. I'll make that change for v2.
I was just trying to minimize the number of flags being added.
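
Under that scheme, the caller would take the device mapping size from the
flags rather than from the struct page. A hypothetical sketch (the v2
patch itself is not shown here):

static unsigned long hmm_pfn_map_size(unsigned long hmm_pfn)
{
	/*
	 * The mapping size is encoded in the flags, so a partially
	 * mapped compound page cannot be over-mapped by the device.
	 */
	if (hmm_pfn & HMM_PFN_PUD)
		return PUD_SIZE;
	if (hmm_pfn & HMM_PFN_PMD)
		return PMD_SIZE;
	return PAGE_SIZE;
}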

Patch

diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index f4a09ed223ac..d0db78025baa 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -41,12 +41,14 @@  enum hmm_pfn_flags {
 	HMM_PFN_VALID = 1UL << (BITS_PER_LONG - 1),
 	HMM_PFN_WRITE = 1UL << (BITS_PER_LONG - 2),
 	HMM_PFN_ERROR = 1UL << (BITS_PER_LONG - 3),
+	HMM_PFN_COMPOUND = 1UL << (BITS_PER_LONG - 4),
 
 	/* Input flags */
 	HMM_PFN_REQ_FAULT = HMM_PFN_VALID,
 	HMM_PFN_REQ_WRITE = HMM_PFN_WRITE,
 
-	HMM_PFN_FLAGS = HMM_PFN_VALID | HMM_PFN_WRITE | HMM_PFN_ERROR,
+	HMM_PFN_FLAGS = HMM_PFN_VALID | HMM_PFN_WRITE | HMM_PFN_ERROR |
+			HMM_PFN_COMPOUND,
 };
 
 /*
diff --git a/mm/hmm.c b/mm/hmm.c
index e9a545751108..d145d44256df 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -170,7 +170,9 @@  static inline unsigned long pmd_to_hmm_pfn_flags(struct hmm_range *range,
 {
 	if (pmd_protnone(pmd))
 		return 0;
-	return pmd_write(pmd) ? (HMM_PFN_VALID | HMM_PFN_WRITE) : HMM_PFN_VALID;
+	return pmd_write(pmd) ?
+			(HMM_PFN_VALID | HMM_PFN_COMPOUND | HMM_PFN_WRITE) :
+			(HMM_PFN_VALID | HMM_PFN_COMPOUND);
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
@@ -389,7 +391,9 @@  static inline unsigned long pud_to_hmm_pfn_flags(struct hmm_range *range,
 {
 	if (!pud_present(pud))
 		return 0;
-	return pud_write(pud) ? (HMM_PFN_VALID | HMM_PFN_WRITE) : HMM_PFN_VALID;
+	return pud_write(pud) ?
+			(HMM_PFN_VALID | HMM_PFN_COMPOUND | HMM_PFN_WRITE) :
+			(HMM_PFN_VALID | HMM_PFN_COMPOUND);
 }
 
 static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end,
@@ -484,7 +488,7 @@  static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask,
 
 	pfn = pte_pfn(entry) + ((start & ~hmask) >> PAGE_SHIFT);
 	for (; addr < end; addr += PAGE_SIZE, i++, pfn++)
-		range->hmm_pfns[i] = pfn | cpu_flags;
+		range->hmm_pfns[i] = pfn | cpu_flags | HMM_PFN_COMPOUND;
 
 	spin_unlock(ptl);
 	return 0;