[v3,08/34] ia64: mm: Add p?d_large() definitions

Message ID 20190227170608.27963-9-steven.price@arm.com
State New, archived
Series Convert x86 & arm64 to use generic page walk

Commit Message

Steven Price Feb. 27, 2019, 5:05 p.m. UTC
walk_page_range() is going to be allowed to walk page tables other than
those of user space. For this it needs to know when it has reached a
'leaf' entry in the page tables. This information is provided by the
p?d_large() functions/macros.

For ia64 leaf entries are always at the lowest level, so implement
stubs returning 0.

CC: Tony Luck <tony.luck@intel.com>
CC: Fenghua Yu <fenghua.yu@intel.com>
CC: linux-ia64@vger.kernel.org
Signed-off-by: Steven Price <steven.price@arm.com>
---
 arch/ia64/include/asm/pgtable.h | 3 +++
 1 file changed, 3 insertions(+)
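
To make the helpers' role concrete, here is a minimal sketch of a
leaf-aware PMD walk of the kind walk_page_range() needs. This is not
the series' actual mm/pagewalk.c code, and note_leaf_entry() is a
hypothetical callback standing in for the real walk callbacks:

/*
 * Sketch only: walk a PMD range, treating pmd_large() entries as
 * leaves instead of descending to a PTE level that does not exist.
 */
static int walk_pmd_range(pud_t *pud, unsigned long addr,
			  unsigned long end, struct mm_walk *walk)
{
	pmd_t *pmd = pmd_offset(pud, addr);
	unsigned long next;

	do {
		next = pmd_addr_end(addr, end);
		if (pmd_none(*pmd))
			continue;
		if (pmd_large(*pmd)) {
			/* Leaf entry: no PTE table below, report it. */
			note_leaf_entry(pmd_val(*pmd), addr, next, walk);
			continue;
		}
		/* Table entry: walk the PTE level beneath it. */
		walk_pte_range(pmd, addr, next, walk);
	} while (pmd++, addr = next, addr != end);
	return 0;
}

With ia64's stubs always returning 0, such a walk always descends to
the PTE level, as the commit message describes.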

Comments

Kirill A. Shutemov March 1, 2019, 9:57 p.m. UTC | #1
On Wed, Feb 27, 2019 at 05:05:42PM +0000, Steven Price wrote:
> walk_page_range() is going to be allowed to walk page tables other than
> those of user space. For this it needs to know when it has reached a
> 'leaf' entry in the page tables. This information is provided by the
> p?d_large() functions/macros.
> 
> For ia64 leaf entries are always at the lowest level, so implement
> stubs returning 0.

Are you sure about this? I see pte_mkhuge defined for ia64 and Kconfig
contains hugetlb references.
Steven Price March 4, 2019, 1:16 p.m. UTC | #2
On 01/03/2019 21:57, Kirill A. Shutemov wrote:
> On Wed, Feb 27, 2019 at 05:05:42PM +0000, Steven Price wrote:
>> walk_page_range() is going to be allowed to walk page tables other than
>> those of user space. For this it needs to know when it has reached a
>> 'leaf' entry in the page tables. This information is provided by the
>> p?d_large() functions/macros.
>>
>> For ia64 leaf entries are always at the lowest level, so implement
>> stubs returning 0.
> 
> Are you sure about this? I see pte_mkhuge defined for ia64 and Kconfig
> contains hugetlb references.
> 

I'm not completely familiar with ia64, but my understanding is that it
doesn't have the situation where a page table walk ends early - there is
always the full depth of entries. The p?d_huge() functions always return 0.

However my understanding is that it does support huge TLB entries, so
when populating the TLB a region larger than a standard page can be mapped.

I'd definitely welcome review by someone more familiar with ia64 to
check my assumptions.

Thanks,

Steve
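
For reference, ia64's existing huge-page predicates are indeed
unconditional stubs. As of the series' v5.0-rc6 base,
arch/ia64/mm/hugetlbpage.c contains definitions along these lines:

int pmd_huge(pmd_t pmd)
{
	return 0;
}

int pud_huge(pud_t pud)
{
	return 0;
}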
Luck, Tony March 4, 2019, 7:06 p.m. UTC | #3
On Mon, Mar 04, 2019 at 01:16:47PM +0000, Steven Price wrote:
> On 01/03/2019 21:57, Kirill A. Shutemov wrote:
> > On Wed, Feb 27, 2019 at 05:05:42PM +0000, Steven Price wrote:
> >> walk_page_range() is going to be allowed to walk page tables other than
> >> those of user space. For this it needs to know when it has reached a
> >> 'leaf' entry in the page tables. This information is provided by the
> >> p?d_large() functions/macros.
> >>
> >> For ia64 leaf entries are always at the lowest level, so implement
> >> stubs returning 0.
> > 
> > Are you sure about this? I see pte_mkhuge defined for ia64 and Kconfig
> > contains hugetlb references.
> > 
> 
> I'm not completely familiar with ia64, but my understanding is that it
> doesn't have the situation where a page table walk ends early - there is
> always the full depth of entries. The p?d_huge() functions always return 0.
> 
> However my understanding is that it does support huge TLB entries, so
> when populating the TLB a region larger than a standard page can be mapped.
> 
> I'd definitely welcome review by someone more familiar with ia64 to
> check my assumptions.

ia64 has several ways to manage page tables. The one
used by Linux has multi-level table walks like other
architectures, but we don't allow mixing of different
page sizes within a "region" (there are eight regions
selected by the high 3 bits of the virtual address).

Is the series in some GIT tree that I can pull, rather
than tracking down all 34 pieces?  I can try it out and
see if things work/break.

-Tony
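
For readers unfamiliar with ia64: the region number is simply the top
three bits of the 64-bit virtual address, and each region register can
be programmed with its own preferred page size. An illustrative helper
(not the kernel's own REGION_NUMBER() macro from
arch/ia64/include/asm/page.h) would be:

/* One of eight regions, selected by virtual address bits 63:61. */
static inline unsigned long region_number(unsigned long vaddr)
{
	return vaddr >> 61;	/* 0..7 */
}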
Steven Price March 6, 2019, 1:45 p.m. UTC | #4
On 04/03/2019 19:06, Luck, Tony wrote:
> On Mon, Mar 04, 2019 at 01:16:47PM +0000, Steven Price wrote:
>> On 01/03/2019 21:57, Kirill A. Shutemov wrote:
>>> On Wed, Feb 27, 2019 at 05:05:42PM +0000, Steven Price wrote:
>>>> walk_page_range() is going to be allowed to walk page tables other than
>>>> those of user space. For this it needs to know when it has reached a
>>>> 'leaf' entry in the page tables. This information is provided by the
>>>> p?d_large() functions/macros.
>>>>
>>>> For ia64 leaf entries are always at the lowest level, so implement
>>>> stubs returning 0.
>>>
>>> Are you sure about this? I see pte_mkhuge defined for ia64 and Kconfig
>>> contains hugetlb references.
>>>
>>
>> I'm not completely familiar with ia64, but my understanding is that it
>> doesn't have the situation where a page table walk ends early - there is
>> always the full depth of entries. The p?d_huge() functions always return 0.
>>
>> However my understanding is that it does support huge TLB entries, so
>> when populating the TLB a region larger than a standard page can be mapped.
>>
>> I'd definitely welcome review by someone more familiar with ia64 to
>> check my assumptions.
> 
> ia64 has several ways to manage page tables. The one
> used by Linux has multi-level table walks like other
> architectures, but we don't allow mixing of different
> page sizes within a "region" (there are eight regions
> selected by the high 3 bits of the virtual address).

I'd gathered that ia64 has this "region" concept. From what I can
tell, the existing p?d_present() etc. macros assume a particular
configuration of a region, so the p?d_large() macros follow the same
scheme. This does of course limit any generic page walking code to
dealing with only this one type of region, but that doesn't seem
unreasonable.

> Is the series in some GIT tree that I can pull, rather
> than tracking down all 34 pieces?  I can try it out and
> see if things work/break.

At the moment I don't have a public tree - I'm trying to get that set
up. In the meantime you can download the entire series as an mbox from
patchwork:

https://patchwork.kernel.org/series/85673/mbox/

(it's currently based on v5.0-rc6)

However you won't see anything particularly interesting on ia64 (yet)
because my focus has been on converting the PTDUMP implementations that
several architectures have (arm, arm64, powerpc, s390, x86) - ia64
isn't among them. For now I've also only done the PTDUMP work for
arm64/x86 as a way of testing out the idea. Ideally the PTDUMP code can
be made generic enough that implementing it for other architectures
(including ia64) will be trivial.

Thanks,

Steve

Patch

diff --git a/arch/ia64/include/asm/pgtable.h b/arch/ia64/include/asm/pgtable.h
index b1e7468eb65a..84dda295391b 100644
--- a/arch/ia64/include/asm/pgtable.h
+++ b/arch/ia64/include/asm/pgtable.h
@@ -271,6 +271,7 @@ extern unsigned long VMALLOC_END;
 #define pmd_none(pmd)			(!pmd_val(pmd))
 #define pmd_bad(pmd)			(!ia64_phys_addr_valid(pmd_val(pmd)))
 #define pmd_present(pmd)		(pmd_val(pmd) != 0UL)
+#define pmd_large(pmd)			(0)
 #define pmd_clear(pmdp)			(pmd_val(*(pmdp)) = 0UL)
 #define pmd_page_vaddr(pmd)		((unsigned long) __va(pmd_val(pmd) & _PFN_MASK))
 #define pmd_page(pmd)			virt_to_page((pmd_val(pmd) + PAGE_OFFSET))
@@ -278,6 +279,7 @@ extern unsigned long VMALLOC_END;
 #define pud_none(pud)			(!pud_val(pud))
 #define pud_bad(pud)			(!ia64_phys_addr_valid(pud_val(pud)))
 #define pud_present(pud)		(pud_val(pud) != 0UL)
+#define pud_large(pud)			(0)
 #define pud_clear(pudp)			(pud_val(*(pudp)) = 0UL)
 #define pud_page_vaddr(pud)		((unsigned long) __va(pud_val(pud) & _PFN_MASK))
 #define pud_page(pud)			virt_to_page((pud_val(pud) + PAGE_OFFSET))
@@ -286,6 +288,7 @@ extern unsigned long VMALLOC_END;
 #define pgd_none(pgd)			(!pgd_val(pgd))
 #define pgd_bad(pgd)			(!ia64_phys_addr_valid(pgd_val(pgd)))
 #define pgd_present(pgd)		(pgd_val(pgd) != 0UL)
+#define pgd_large(pgd)			(0)
 #define pgd_clear(pgdp)			(pgd_val(*(pgdp)) = 0UL)
 #define pgd_page_vaddr(pgd)		((unsigned long) __va(pgd_val(pgd) & _PFN_MASK))
 #define pgd_page(pgd)			virt_to_page((pgd_val(pgd) + PAGE_OFFSET))
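
As a usage sketch (the helper name is hypothetical; later patches in
the series add equivalent logic to the generic walker), callers are
expected to combine the present and large tests per level:

/* Hypothetical per-level leaf test built on the new macros. */
static inline bool pmd_is_leaf(pmd_t pmd)
{
	return pmd_present(pmd) && pmd_large(pmd); /* always false on ia64 */
}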