Message ID | 20220929222936.14584-19-rick.p.edgecombe@intel.com
---|---
State | New
Series | Shadowstacks for userspace
On Thu, Sep 29, 2022 at 03:29:15PM -0700, Rick Edgecombe wrote:
> [...]
> +unsigned long stack_guard_start_gap(struct vm_area_struct *vma)
> +{
> +	if (vma->vm_flags & VM_GROWSDOWN)
> +		return stack_guard_gap;
> +
> +	/*
> +	 * Shadow stack pointer is moved by CALL, RET, and INCSSP(Q/D).
> +	 * INCSSPQ moves shadow stack pointer up to 255 * 8 = ~2 KB
> +	 * (~1KB for INCSSPD) and touches the first and the last element
> +	 * in the range, which triggers a page fault if the range is not
> +	 * in a shadow stack. Because of this, creating 4-KB guard pages
> +	 * around a shadow stack prevents these instructions from going
> +	 * beyond.
> +	 *
> +	 * Creation of VM_SHADOW_STACK is tightly controlled, so a vma
> +	 * can't be both VM_GROWSDOWN and VM_SHADOW_STACK
> +	 */

Thank you for the details on how the size choice is made here! :)

> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index fef14ab3abcb..09458e77bf52 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2775,15 +2775,16 @@ struct vm_area_struct *vma_lookup(struct mm_struct *mm, unsigned long addr)
>  	return vma;
>  }
>
> +unsigned long stack_guard_start_gap(struct vm_area_struct *vma);
> +
>  static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
>  {
> +	unsigned long gap = stack_guard_start_gap(vma);
>  	unsigned long vm_start = vma->vm_start;
>
> -	if (vma->vm_flags & VM_GROWSDOWN) {
> -		vm_start -= stack_guard_gap;
> -		if (vm_start > vma->vm_start)
> -			vm_start = 0;
> -	}
> +	vm_start -= gap;
> +	if (vm_start > vma->vm_start)
> +		vm_start = 0;
>  	return vm_start;
>  }
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 9d780f415be3..f0d2e9143bd0 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -247,6 +247,13 @@ SYSCALL_DEFINE1(brk, unsigned long, brk)
>  	return origbrk;
>  }
>

I feel like something could be done with these definitions to make them
inline, instead of __weak:

#ifndef stack_guard_start_gap
> +unsigned long __weak stack_guard_start_gap(struct vm_area_struct *vma)
> +{
> +	if (vma->vm_flags & VM_GROWSDOWN)
> +		return stack_guard_gap;
> +	return 0;
> +}
#endif

And then move the x86 stack_guard_start_gap to a header? It's not exactly
fast-path, but it feels a little weird.

Regardless:

Reviewed-by: Kees Cook <keescook@chromium.org>
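For illustration, the inline alternative Kees is describing is usually wired
up with the #ifndef-override idiom: a generic helper lives in a common header,
and an architecture that wants different behaviour defines its own version
(plus a macro of the same name) so the generic one is compiled out. This is
only a rough sketch, not part of the posted patch; the header placement and
the override macro are assumptions made for the example.

/* include/linux/mm.h: generic fallback, overridable per architecture (sketch) */
#ifndef stack_guard_start_gap
static inline unsigned long stack_guard_start_gap(struct vm_area_struct *vma)
{
	if (vma->vm_flags & VM_GROWSDOWN)
		return stack_guard_gap;
	return 0;
}
#endif

/* Hypothetical x86 header, included before the generic definition (sketch) */
#define stack_guard_start_gap stack_guard_start_gap
static inline unsigned long stack_guard_start_gap(struct vm_area_struct *vma)
{
	if (vma->vm_flags & VM_GROWSDOWN)
		return stack_guard_gap;

	/* Shadow stack VMAs get a single guard page; see the comment quoted above. */
	if (vma->vm_flags & VM_SHADOW_STACK)
		return PAGE_SIZE;

	return 0;
}

This keeps the common case inlined into vm_start_gap() and avoids the __weak
indirection, at the cost of the usual header-ordering care such overrides need.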
On 03/10/2022 19:30, Kees Cook wrote:
> On Thu, Sep 29, 2022 at 03:29:15PM -0700, Rick Edgecombe wrote:
>> [...]
>> +unsigned long stack_guard_start_gap(struct vm_area_struct *vma)
>> +{
>> +	if (vma->vm_flags & VM_GROWSDOWN)
>> +		return stack_guard_gap;
>> +
>> +	/*
>> +	 * Shadow stack pointer is moved by CALL, RET, and INCSSP(Q/D).
>> +	 * INCSSPQ moves shadow stack pointer up to 255 * 8 = ~2 KB
>> +	 * (~1KB for INCSSPD) and touches the first and the last element
>> +	 * in the range, which triggers a page fault if the range is not
>> +	 * in a shadow stack. Because of this, creating 4-KB guard pages
>> +	 * around a shadow stack prevents these instructions from going
>> +	 * beyond.
>> +	 *
>> +	 * Creation of VM_SHADOW_STACK is tightly controlled, so a vma
>> +	 * can't be both VM_GROWSDOWN and VM_SHADOW_STACK
>> +	 */
> Thank you for the details on how the size choice is made here! :)

(In case anyone is hankering for some premature optimisation...)

You don't actually need a hole to create a guard. Any mapping of type
!= shstk will do.

If you've got a load of threads, you can tightly pack stack / shstk /
stack / shstk with no holes, and they each act as each other's guard
pages.

~Andrew
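To make the packing concrete, here is a rough user-space sketch of the layout
being described. None of this is from the series: map_shadow_stack_at() is a
hypothetical stand-in for whatever allocation interface the final ABI provides,
and is assumed, for the example, to accept a fixed placement.

#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>

/* Hypothetical helper: place a shadow stack at a fixed address. */
extern int map_shadow_stack_at(void *addr, size_t size);

#define STACK_SZ (8UL * 1024 * 1024)
#define SHSTK_SZ (64UL * 1024)

/*
 * Layout:  ... | stack N | shstk N | stack N+1 | shstk N+1 | ...
 *
 * Shadow-stack pushes (CALL) grow shstk N downward; overflowing its bottom
 * lands in stack N, which is not shadow-stack memory, so the push faults.
 * INCSSP walking upward past the top of shstk N lands in stack N+1, again
 * not shadow stack, so that faults too.  An ordinary store running off the
 * bottom of stack N lands in shstk N-1, which refuses normal writes.
 * No PROT_NONE guard holes are needed.
 */
static int setup_thread_mem(unsigned int nr_threads, void **stacks, void **shstks)
{
	size_t slot = STACK_SZ + SHSTK_SZ;
	uint8_t *base = mmap(NULL, nr_threads * slot, PROT_NONE,
			     MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);

	if (base == MAP_FAILED)
		return -1;

	for (unsigned int i = 0; i < nr_threads; i++) {
		uint8_t *stack = base + i * slot;
		uint8_t *shstk = stack + STACK_SZ;

		/* Normal stack: readable and writable. */
		if (mprotect(stack, STACK_SZ, PROT_READ | PROT_WRITE))
			return -1;
		/* Shadow stack directly above it, via the hypothetical interface. */
		if (map_shadow_stack_at(shstk, SHSTK_SZ))
			return -1;

		stacks[i] = stack;
		shstks[i] = shstk;
	}
	return 0;
}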
* Andrew Cooper:

> You don't actually need a hole to create a guard. Any mapping of type
> != shstk will do.
>
> If you've got a load of threads, you can tightly pack stack / shstk /
> stack / shstk with no holes, and they each act as each other's guard
> pages.

Can userspace read the shadow stack directly? Writing is obviously
blocked, but reading?

GCC's stack-clash probing uses OR instructions, so it would be fine with
a readable mapping. POSIX does not appear to require PROT_NONE mappings
for the stack guard region, either. However, the
pthread_attr_setguardsize manual page pretty clearly says that it's got
to be unreadable and unwriteable. Hence my question.

Thanks,
Florian
On 10/10/2022 13:33, Florian Weimer wrote:
> * Andrew Cooper:
>
>> You don't actually need a hole to create a guard. Any mapping of type
>> != shstk will do.
>>
>> If you've got a load of threads, you can tightly pack stack / shstk /
>> stack / shstk with no holes, and they each act as each other's guard
>> pages.
> Can userspace read the shadow stack directly? Writing is obviously
> blocked, but reading?

Yes - regular reads are permitted to shstk memory.

It's actually a great way to get backtraces with no extra metadata
needed.

> GCC's stack-clash probing uses OR instructions, so it would be fine with
> a readable mapping.

It's `or $0, (%rsp)`, which is a read/modify/write and will fault when
hitting a shstk mapping.

> POSIX does not appear to require PROT_NONE mappings
> for the stack guard region, either. However, the
> pthread_attr_setguardsize manual page pretty clearly says that it's got
> to be unreadable and unwriteable. Hence my question.

Hmm. If that's what the manuals say, then fine.

But honestly, you don't get very far at all without faulting on a
read-only stack.

~Andrew
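For anyone curious what the metadata-free backtrace looks like in practice,
here is a minimal sketch. It assumes an x86-64 thread that already has shadow
stacks enabled, ignores entries that are not plain return addresses (e.g.
restore tokens around signal delivery), and simply bounds the walk rather than
trying to detect the top of the shadow stack.

#include <stdint.h>
#include <stdio.h>

static inline uint64_t *read_ssp(void)
{
	uint64_t ssp = 0;

	/* RDSSPQ leaves its operand untouched if shadow stacks are disabled,
	 * so ssp stays 0 in that case. */
	asm volatile ("rdsspq %0" : "+r" (ssp));
	return (uint64_t *)ssp;
}

static void shstk_backtrace(unsigned int max_frames)
{
	uint64_t *ssp = read_ssp();

	if (!ssp) {
		puts("shadow stack not enabled for this thread");
		return;
	}

	/* Each slot above the SSP holds a return address pushed by CALL;
	 * no unwind tables or frame pointers are required. */
	for (unsigned int i = 0; i < max_frames; i++)
		printf("#%u  %#llx\n", i, (unsigned long long)ssp[i]);
}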
* Andrew Cooper:

> On 10/10/2022 13:33, Florian Weimer wrote:
>> * Andrew Cooper:
>>
>>> You don't actually need a hole to create a guard. Any mapping of type
>>> != shstk will do.
>>>
>>> If you've got a load of threads, you can tightly pack stack / shstk /
>>> stack / shstk with no holes, and they each act as each other's guard
>>> pages.
>> Can userspace read the shadow stack directly? Writing is obviously
>> blocked, but reading?
>
> Yes - regular reads are permitted to shstk memory.
>
> It's actually a great way to get backtraces with no extra metadata
> needed.

Indeed, I hope shadow stacks can be used to put the discussion around
frame pointers to rest, at least when it comes to profiling. 8-)

>> POSIX does not appear to require PROT_NONE mappings
>> for the stack guard region, either. However, the
>> pthread_attr_setguardsize manual page pretty clearly says that it's got
>> to be unreadable and unwriteable. Hence my question.
>
> Hmm. If that's what the manuals say, then fine.
>
> But honestly, you don't get very far at all without faulting on a
> read-only stack.

I guess we can update the manual page proactively. It does look like a
tempting optimization.

Thanks,
Florian
On 10/10/2022 14:40, Florian Weimer wrote:
> * Andrew Cooper:
>
>>> POSIX does not appear to require PROT_NONE mappings
>>> for the stack guard region, either. However, the
>>> pthread_attr_setguardsize manual page pretty clearly says that it's got
>>> to be unreadable and unwriteable. Hence my question.
>> Hmm. If that's what the manuals say, then fine.
>>
>> But honestly, you don't get very far at all without faulting on a
>> read-only stack.
> I guess we can update the manual page proactively. It does look like a
> tempting optimization.

Here's one I prepared earlier, discussing getting supervisor shadow
stacks working in Xen:

http://xenbits.xen.org/people/andrewcoop/Xen-CET-SS.pdf

This optimisation turned out to be very helpful: we were able to put the
shadow stacks in what were previously the guard holes, meaning we didn't
actually need to allocate any more memory for the stacks.

~Andrew
diff --git a/arch/x86/mm/mmap.c b/arch/x86/mm/mmap.c
index f3f52c5e2fd6..b0427bd2da30 100644
--- a/arch/x86/mm/mmap.c
+++ b/arch/x86/mm/mmap.c
@@ -250,3 +250,26 @@ bool pfn_modify_allowed(unsigned long pfn, pgprot_t prot)
 		return false;
 	return true;
 }
+
+unsigned long stack_guard_start_gap(struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & VM_GROWSDOWN)
+		return stack_guard_gap;
+
+	/*
+	 * Shadow stack pointer is moved by CALL, RET, and INCSSP(Q/D).
+	 * INCSSPQ moves shadow stack pointer up to 255 * 8 = ~2 KB
+	 * (~1KB for INCSSPD) and touches the first and the last element
+	 * in the range, which triggers a page fault if the range is not
+	 * in a shadow stack. Because of this, creating 4-KB guard pages
+	 * around a shadow stack prevents these instructions from going
+	 * beyond.
+	 *
+	 * Creation of VM_SHADOW_STACK is tightly controlled, so a vma
+	 * can't be both VM_GROWSDOWN and VM_SHADOW_STACK
+	 */
+	if (vma->vm_flags & VM_SHADOW_STACK)
+		return PAGE_SIZE;
+
+	return 0;
+}
diff --git a/include/linux/mm.h b/include/linux/mm.h
index fef14ab3abcb..09458e77bf52 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2775,15 +2775,16 @@ struct vm_area_struct *vma_lookup(struct mm_struct *mm, unsigned long addr)
 	return vma;
 }
 
+unsigned long stack_guard_start_gap(struct vm_area_struct *vma);
+
 static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
 {
+	unsigned long gap = stack_guard_start_gap(vma);
 	unsigned long vm_start = vma->vm_start;
 
-	if (vma->vm_flags & VM_GROWSDOWN) {
-		vm_start -= stack_guard_gap;
-		if (vm_start > vma->vm_start)
-			vm_start = 0;
-	}
+	vm_start -= gap;
+	if (vm_start > vma->vm_start)
+		vm_start = 0;
 	return vm_start;
 }
 
diff --git a/mm/mmap.c b/mm/mmap.c
index 9d780f415be3..f0d2e9143bd0 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -247,6 +247,13 @@ SYSCALL_DEFINE1(brk, unsigned long, brk)
 	return origbrk;
 }
 
+unsigned long __weak stack_guard_start_gap(struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & VM_GROWSDOWN)
+		return stack_guard_gap;
+	return 0;
+}
+
 static inline unsigned long vma_compute_gap(struct vm_area_struct *vma)
 {
 	unsigned long gap, prev_end;
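As a closing illustration of the arithmetic in the reworked vm_start_gap(),
here is a small stand-alone model. The addresses are made up for the example;
the clamp mirrors the kernel's underflow check, and the default stack_guard_gap
of 256 pages is assumed.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL

/* Stand-alone model of the patched vm_start_gap() arithmetic. */
static uint64_t vm_start_gap_model(uint64_t vm_start, uint64_t gap)
{
	uint64_t start = vm_start - gap;

	/* Clamp to 0 on underflow, exactly like the kernel's check. */
	if (start > vm_start)
		start = 0;
	return start;
}

int main(void)
{
	/* GROWSDOWN stack with the default 256-page guard gap: 1 MB below vm_start. */
	printf("%#llx\n", (unsigned long long)
	       vm_start_gap_model(0x7ffff7a00000UL, 256 * PAGE_SIZE));
	/* Shadow stack VMA: a single guard page below vm_start. */
	printf("%#llx\n", (unsigned long long)
	       vm_start_gap_model(0x7f1234568000UL, PAGE_SIZE));
	/* VMA very close to address zero: the subtraction underflows, so clamp to 0. */
	printf("%#llx\n", (unsigned long long)
	       vm_start_gap_model(0x800UL, PAGE_SIZE));
	return 0;
}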