Message ID: 20190129003422.9328-10-rick.p.edgecombe@intel.com (mailing list archive)
State: New, archived
Series: Merge text_poke fixes and executable lockdowns
Only nitpicks:

> Subject: Re: [PATCH v2 09/20] x86/kprobes: instruction pages initialization enhancements

Subject needs a verb.

On Mon, Jan 28, 2019 at 04:34:11PM -0800, Rick Edgecombe wrote:
> From: Nadav Amit <namit@vmware.com>
>
> Make kprobes instruction pages read-only (and executable) after they are
> set to prevent them from mistaken or malicious modifications.
>
> This is a preparatory patch for a following patch that makes module
> allocated pages non-executable and sets the page as executable after
> allocation.
>
> While at it, do some small cleanup of what appears to be unnecessary
> masking.
>
> Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
> Signed-off-by: Nadav Amit <namit@vmware.com>
> Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
> ---
>  arch/x86/kernel/kprobes/core.c | 24 ++++++++++++++++++++----
>  1 file changed, 20 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
> index 4ba75afba527..fac692e36833 100644
> --- a/arch/x86/kernel/kprobes/core.c
> +++ b/arch/x86/kernel/kprobes/core.c
> @@ -431,8 +431,20 @@ void *alloc_insn_page(void)
>  	void *page;
>
>  	page = module_alloc(PAGE_SIZE);
> -	if (page)
> -		set_memory_ro((unsigned long)page & PAGE_MASK, 1);
> +	if (page == NULL)
> +		return NULL;

Null tests we generally do like this:

	if (!

... like in the rest of this file.

> +
> +	/*
> +	 * First make the page read-only, and then only then make it executable
> +	 * to prevent it from being W+X in between.
> +	 */

s/then only then/only then/

ditto below.

> +	set_memory_ro((unsigned long)page, 1);
> +
> +	/*
> +	 * TODO: Once additional kernel code protection mechanisms are set, ensure
> +	 * that the page was not maliciously altered and it is still zeroed.
> +	 */
> +	set_memory_x((unsigned long)page, 1);
>
>  	return page;
>  }
> @@ -440,8 +452,12 @@ void *alloc_insn_page(void)
>  /* Recover page to RW mode before releasing it */
>  void free_insn_page(void *page)
>  {
> -	set_memory_nx((unsigned long)page & PAGE_MASK, 1);
> -	set_memory_rw((unsigned long)page & PAGE_MASK, 1);
> +	/*
> +	 * First make the page non-executable, and then only then make it
> +	 * writable to prevent it from being W+X in between.
> +	 */
> +	set_memory_nx((unsigned long)page, 1);
> +	set_memory_rw((unsigned long)page, 1);
>  	module_memfree(page);
>  }
>
> --
> On Feb 11, 2019, at 10:22 AM, Borislav Petkov <bp@alien8.de> wrote:
>
> Only nitpicks:

Thanks for the feedback. Applied.
diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index 4ba75afba527..fac692e36833 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -431,8 +431,20 @@ void *alloc_insn_page(void)
 	void *page;

 	page = module_alloc(PAGE_SIZE);
-	if (page)
-		set_memory_ro((unsigned long)page & PAGE_MASK, 1);
+	if (page == NULL)
+		return NULL;
+
+	/*
+	 * First make the page read-only, and then only then make it executable
+	 * to prevent it from being W+X in between.
+	 */
+	set_memory_ro((unsigned long)page, 1);
+
+	/*
+	 * TODO: Once additional kernel code protection mechanisms are set, ensure
+	 * that the page was not maliciously altered and it is still zeroed.
+	 */
+	set_memory_x((unsigned long)page, 1);

 	return page;
 }
@@ -440,8 +452,12 @@ void *alloc_insn_page(void)
 /* Recover page to RW mode before releasing it */
 void free_insn_page(void *page)
 {
-	set_memory_nx((unsigned long)page & PAGE_MASK, 1);
-	set_memory_rw((unsigned long)page & PAGE_MASK, 1);
+	/*
+	 * First make the page non-executable, and then only then make it
+	 * writable to prevent it from being W+X in between.
+	 */
+	set_memory_nx((unsigned long)page, 1);
+	set_memory_rw((unsigned long)page, 1);
 	module_memfree(page);
 }
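For reference, here is a sketch of the two functions with both nitpicks folded in (the `if (!page)` test style and the s/then only then/only then/ comment wording). This is a reconstruction from the v2 hunks above, not the literal follow-up patch; the includes shown are what core.c would already pull in for these helpers.

#include <linux/moduleloader.h>	/* module_alloc(), module_memfree() */
#include <linux/set_memory.h>	/* set_memory_ro/x/nx/rw() */

void *alloc_insn_page(void)
{
	void *page;

	page = module_alloc(PAGE_SIZE);
	if (!page)
		return NULL;

	/*
	 * First make the page read-only, and only then make it executable to
	 * prevent it from being W+X in between.
	 */
	set_memory_ro((unsigned long)page, 1);

	/*
	 * TODO: Once additional kernel code protection mechanisms are set,
	 * ensure that the page was not maliciously altered and it is still
	 * zeroed.
	 */
	set_memory_x((unsigned long)page, 1);

	return page;
}

/* Recover page to RW mode before releasing it */
void free_insn_page(void *page)
{
	/*
	 * First make the page non-executable, and only then make it writable,
	 * so the page is never W+X in between here either.
	 */
	set_memory_nx((unsigned long)page, 1);
	set_memory_rw((unsigned long)page, 1);
	module_memfree(page);
}

The point of the ordering in both paths is that there is never a window in which the page is simultaneously writable and executable: on allocation, write permission is dropped before execute is granted; on free, execute is dropped before write is restored.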