Message ID | YwdpwykpV9RB+4tL@worktop.programming.kicks-ass.net (mailing list archive) |
---|---
State | Superseded |
Series | x86/mm: Refuse W^X violations
On Thu, Aug 25, 2022, Peter Zijlstra wrote:
> x86 has STRICT_*_RWX, but not even a warning when someone violates it.
>
> Add this warning and fully refuse the transition.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> index 1abd5438f126..9e9bef3f36b3 100644
> --- a/arch/x86/mm/pat/set_memory.c
> +++ b/arch/x86/mm/pat/set_memory.c
> @@ -579,6 +579,30 @@ static inline pgprot_t static_protections(pgprot_t prot, unsigned long start,
>  	return __pgprot(pgprot_val(prot) & ~forbidden);
>  }
>
> +/*
> + * Validate and enforce strict W^X semantics.
> + */
> +static inline pgprot_t verify_rwx(pgprot_t old, pgprot_t new, unsigned long start,
> +				  unsigned long pfn, unsigned long npg)
> +{
> +	unsigned long end;
> +

I think this needs

	if (!(__supported_pte_mask & _PAGE_NX))
		return new;

to play nice with non-PAE 32-bit kernels.

> +	if (!((pgprot_val(old) ^ pgprot_val(new)) & (_PAGE_RW | _PAGE_NX)))
> +		return new;
> +
> +	if ((pgprot_val(new) & (_PAGE_RW | _PAGE_NX)) != _PAGE_RW)
> +		return new;
> +
> +	end = start + npg * PAGE_SIZE - 1;
> +	WARN(1, "CPA refuse W^X violation: %016llx -> %016llx range: 0x%016lx - 0x%016lx PFN %lx\n",

WARN_ONCE() to avoid eternal spam if something does go sideways?

> +	     (unsigned long long)pgprot_val(old),
> +	     (unsigned long long)pgprot_val(new),
> +	     start, end, pfn);
> +
> +	/* refuse the transition into WX */
> +	return old;
> +}
On 8/25/22 10:18, Sean Christopherson wrote:
>> +/*
>> + * Validate and enforce strict W^X semantics.
>> + */
>> +static inline pgprot_t verify_rwx(pgprot_t old, pgprot_t new, unsigned long start,
>> +				   unsigned long pfn, unsigned long npg)
>> +{
>> +	unsigned long end;
>> +
> I think this needs
>
> 	if (!(__supported_pte_mask & _PAGE_NX))
> 		return new;
>
> to play nice with non-PAE 32-bit kernels.

Good catch.

Nit: I'd probably write this up as:

	if (!cpu_feature_enabled(X86_FEATURE_NX))
		return new;

That gets us our fancy static branches and is a bit easier on the eyes.
I checked and don't see a way for __supported_pte_mask to have _PAGE_NX
clear when X86_FEATURE_NX==1.
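[Editor's note: the review comments above converge on guarding verify_rwx() with an NX-availability check and downgrading the warning to WARN_ONCE(). The following is a sketch of the function with both suggestions folded in, for reference only; it is not necessarily the version that was eventually committed.]

/*
 * Sketch only: verify_rwx() with the NX check and WARN_ONCE() folded in,
 * per the review comments above.
 */
static inline pgprot_t verify_rwx(pgprot_t old, pgprot_t new, unsigned long start,
				  unsigned long pfn, unsigned long npg)
{
	unsigned long end;

	/* W^X cannot be enforced without NX (e.g. non-PAE 32-bit kernels). */
	if (!cpu_feature_enabled(X86_FEATURE_NX))
		return new;

	/* Neither RW nor NX is changing: nothing to check. */
	if (!((pgprot_val(old) ^ pgprot_val(new)) & (_PAGE_RW | _PAGE_NX)))
		return new;

	/* The new protection is not writable-and-executable: allow it. */
	if ((pgprot_val(new) & (_PAGE_RW | _PAGE_NX)) != _PAGE_RW)
		return new;

	end = start + npg * PAGE_SIZE - 1;
	WARN_ONCE(1, "CPA refuse W^X violation: %016llx -> %016llx range: 0x%016lx - 0x%016lx PFN %lx\n",
		  (unsigned long long)pgprot_val(old),
		  (unsigned long long)pgprot_val(new),
		  start, end, pfn);

	/* refuse the transition into WX */
	return old;
}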
On Thu, Aug 25, 2022 at 02:23:31PM +0200, Peter Zijlstra wrote:
> x86 has STRICT_*_RWX, but not even a warning when someone violates it.
Yes please. I assume this is only kernel pages? Doing this globally is
nice too, but runs into annoying problems[1].
-Kees
[1] https://lore.kernel.org/all/20220701130444.2945106-1-ardb@kernel.org/
On Thu, Aug 25, 2022 at 11:16:12AM -0700, Kees Cook wrote:
> On Thu, Aug 25, 2022 at 02:23:31PM +0200, Peter Zijlstra wrote:
> > x86 has STRICT_*_RWX, but not even a warning when someone violates it.
>
> Yes please. I assume this is only kernel pages? Doing this globally is
> nice too, but runs into annoying problems[1].

Yeah, this interface should only be used on kernel pages.
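[Editor's note: as a concrete (hypothetical) illustration of what "kernel pages" means here — the check fires when a set_memory_*() call would leave a kernel mapping both writable and executable. The variable addr below is invented for the example:]

	/*
	 * Hypothetical example, not from the patch: addr points at a
	 * kernel text page that is currently RX. Making it writable
	 * would yield an RW+X mapping, so with this patch the change
	 * is warned about and refused (the old protection is kept).
	 */
	set_memory_rw(addr, 1);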
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 1abd5438f126..9e9bef3f36b3 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -579,6 +579,30 @@ static inline pgprot_t static_protections(pgprot_t prot, unsigned long start,
 	return __pgprot(pgprot_val(prot) & ~forbidden);
 }
 
+/*
+ * Validate and enforce strict W^X semantics.
+ */
+static inline pgprot_t verify_rwx(pgprot_t old, pgprot_t new, unsigned long start,
+				  unsigned long pfn, unsigned long npg)
+{
+	unsigned long end;
+
+	if (!((pgprot_val(old) ^ pgprot_val(new)) & (_PAGE_RW | _PAGE_NX)))
+		return new;
+
+	if ((pgprot_val(new) & (_PAGE_RW | _PAGE_NX)) != _PAGE_RW)
+		return new;
+
+	end = start + npg * PAGE_SIZE - 1;
+	WARN(1, "CPA refuse W^X violation: %016llx -> %016llx range: 0x%016lx - 0x%016lx PFN %lx\n",
+	     (unsigned long long)pgprot_val(old),
+	     (unsigned long long)pgprot_val(new),
+	     start, end, pfn);
+
+	/* refuse the transition into WX */
+	return old;
+}
+
 /*
  * Lookup the page table entry for a virtual address in a specific pgd.
  * Return a pointer to the entry and the level of the mapping.
@@ -885,6 +909,8 @@ static int __should_split_large_page(pte_t *kpte, unsigned long address,
 	new_prot = static_protections(req_prot, lpaddr, old_pfn, numpages,
 				      psize, CPA_DETECT);
 
+	new_prot = verify_rwx(old_prot, new_prot, lpaddr, old_pfn, numpages);
+
 	/*
 	 * If there is a conflict, split the large page.
 	 *
@@ -1525,6 +1551,7 @@ static int __change_page_attr(struct cpa_data *cpa, int primary)
 
 	if (level == PG_LEVEL_4K) {
 		pte_t new_pte;
+		pgprot_t old_prot = pte_pgprot(old_pte);
 		pgprot_t new_prot = pte_pgprot(old_pte);
 		unsigned long pfn = pte_pfn(old_pte);
 
@@ -1536,6 +1563,8 @@ static int __change_page_attr(struct cpa_data *cpa, int primary)
 		new_prot = static_protections(new_prot, address, pfn, 1, 0,
 					      CPA_PROTECT);
 
+		new_prot = verify_rwx(old_prot, new_prot, address, pfn, 1);
+
 		new_prot = pgprot_clear_protnone_bits(new_prot);
 
 		/*
x86 has STRICT_*_RWX, but not even a warning when someone violates it.

Add this warning and fully refuse the transition.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
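[Editor's note: to see the refusal logic in isolation, here is a small standalone sketch in plain C, compilable in userspace, that mirrors the two early-return checks in verify_rwx(). The macros stand in for the kernel's _PAGE_RW (bit 1) and _PAGE_NX (bit 63), and the helper name refuses_wx() is invented for illustration:]

#include <stdio.h>

/* Illustrative stand-ins for the kernel's _PAGE_RW and _PAGE_NX bits. */
#define PAGE_RW (1ULL << 1)
#define PAGE_NX (1ULL << 63)

/* Invented helper: returns 1 when an old->new transition would be refused. */
static int refuses_wx(unsigned long long old, unsigned long long new)
{
	/* Neither RW nor NX changed: verify_rwx() has nothing to enforce. */
	if (!((old ^ new) & (PAGE_RW | PAGE_NX)))
		return 0;

	/* New protection is not simultaneously writable and executable. */
	if ((new & (PAGE_RW | PAGE_NX)) != PAGE_RW)
		return 0;

	return 1;
}

int main(void)
{
	printf("RX   -> RWX : %d (refused)\n", refuses_wx(0, PAGE_RW));
	printf("RW+NX -> RX : %d (allowed)\n", refuses_wx(PAGE_RW | PAGE_NX, 0));
	printf("RWX  -> RWX : %d (allowed, RW/NX unchanged)\n", refuses_wx(PAGE_RW, PAGE_RW));
	return 0;
}

[One consequence visible in the last case: a mapping that is already W+X and does not change its RW or NX bits sails through the first check; the patch only refuses transitions *into* W+X.]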