Message ID | 20240708191840.335463-2-kees@kernel.org (mailing list archive) |
---|---|
State | Handled Elsewhere |
Series | slab: Allow for type introspection during allocation |
On 7/8/24 21:18, Kees Cook wrote:
> The allocator will already reject the giant sizes that result from
> negative size arguments, so this commit mainly serves as an example of
> initial type-based filtering. Size arguments with signed types are
> checked for negative values, which are saturated to SIZE_MAX instead of
> being passed on.
>
> For example, now the size is checked:
>
> Before:
> 	/* %rdi unchecked */
>  1eb:	be c0 0c 00 00       	mov    $0xcc0,%esi
>  1f0:	e8 00 00 00 00       	call   1f5 <do_SLAB_NEGATIVE+0x15>
> 			1f1: R_X86_64_PLT32	__kmalloc_noprof-0x4
>
> After:
>  6d0:	48 63 c7             	movslq %edi,%rax
>  6d3:	85 ff                	test   %edi,%edi
>  6d5:	be c0 0c 00 00       	mov    $0xcc0,%esi
>  6da:	48 c7 c2 ff ff ff ff 	mov    $0xffffffffffffffff,%rdx
>  6e1:	48 0f 49 d0          	cmovns %rax,%rdx
>  6e5:	48 89 d7             	mov    %rdx,%rdi
>  6e8:	e8 00 00 00 00       	call   6ed <do_SLAB_NEGATIVE+0x1d>
> 			6e9: R_X86_64_PLT32	__kmalloc_noprof-0x4
>
> Signed-off-by: Kees Cook <kees@kernel.org>
> ---
> Cc: Christoph Lameter <cl@linux.com>
> Cc: Pekka Enberg <penberg@kernel.org>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Roman Gushchin <roman.gushchin@linux.dev>
> Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> Cc: linux-mm@kvack.org
> ---
>  include/linux/slab.h | 19 ++++++++++++++++++-
>  1 file changed, 18 insertions(+), 1 deletion(-)
>
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index d99afce36098..7353756cbec6 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -684,7 +684,24 @@ static __always_inline __alloc_size(1) void *kmalloc_noprof(size_t size, gfp_t f
>  	}
>  	return __kmalloc_noprof(size, flags);
>  }
> -#define kmalloc(...)			alloc_hooks(kmalloc_noprof(__VA_ARGS__))
> +#define kmalloc_sized(...)		alloc_hooks(kmalloc_noprof(__VA_ARGS__))
> +
> +#define __size_force_positive(x)				\
> +	({							\
> +		typeof(__force_integral_expr(x)) __forced_val =	\
> +			__force_integral_expr(x);		\
> +		__forced_val < 0 ? SIZE_MAX : __forced_val;	\
> +	})
> +
> +#define kmalloc(p, gfp)	_Generic((p),				\
> +	unsigned char:	kmalloc_sized(__force_integral_expr(p), gfp),	\
> +	unsigned short:	kmalloc_sized(__force_integral_expr(p), gfp),	\
> +	unsigned int:	kmalloc_sized(__force_integral_expr(p), gfp),	\
> +	unsigned long:	kmalloc_sized(__force_integral_expr(p), gfp),	\
> +	signed char:	kmalloc_sized(__size_force_positive(p), gfp),	\
> +	signed short:	kmalloc_sized(__size_force_positive(p), gfp),	\
> +	signed int:	kmalloc_sized(__size_force_positive(p), gfp),	\
> +	signed long:	kmalloc_sized(__size_force_positive(p), gfp))

I like this idea and series very much, thank you!

What about bool?
What about long long?
(by this commit one will get a rather easy to parse compile error,
but the next one will obscure it a bit)

Consider the following correct (albeit somewhat weird) code:

/* header */
char *state;

/* .c impl, init part */
bool needs_state = some_expr();
state = kmalloc(needs_state, GFP_KERNEL);

/* .c, other part */
if (ZERO_OR_NULL_PTR(state))
	return _EARLY;
*state = state_machine_action(*state);

>
>  #define kmem_buckets_alloc(_b, _size, _flags)			\
>  	alloc_hooks(__kmalloc_node_noprof(PASS_BUCKET_PARAMS(_size, _b), _flags, NUMA_NO_NODE))
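To make the failure mode concrete, here is a standalone userspace model of the same kind of _Generic dispatch. checked_alloc() and fake_alloc() are invented names and the arm expressions are simplified stand-ins, not the kernel macros. Because _Generic does not integer-promote its controlling expression, a bool argument matches none of the listed arms and the build fails, which is the "rather easy to parse compile error" described above:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Toy stand-in for the allocator entry point. */
static void *fake_alloc(size_t size)
{
	return malloc(size);
}

/* Same shape as the proposed kmalloc() wrapper, with invented names. */
#define checked_alloc(p) _Generic((p),					\
	unsigned int:	fake_alloc((size_t)(p)),			\
	unsigned long:	fake_alloc((size_t)(p)),			\
	signed int:	fake_alloc((p) < 0 ? SIZE_MAX : (size_t)(p)),	\
	signed long:	fake_alloc((p) < 0 ? SIZE_MAX : (size_t)(p)))

int main(void)
{
	int len = 16;
	bool needs_state = true;
	char *buf = checked_alloc(len);	/* fine: takes the 'signed int' arm */

	/*
	 * Uncommenting the next line reproduces Przemek's scenario: _Bool
	 * has no matching generic association, so it is a compile error.
	 */
	/* char *state = checked_alloc(needs_state); */

	(void)needs_state;
	free(buf);
	return 0;
}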
On Tue, Jul 09, 2024 at 08:57:55AM +0200, Przemek Kitszel wrote:
> On 7/8/24 21:18, Kees Cook wrote:
> > The allocator will already reject the giant sizes that result from
> > negative size arguments, so this commit mainly serves as an example of
> > initial type-based filtering. Size arguments with signed types are
> > checked for negative values, which are saturated to SIZE_MAX instead of
> > being passed on.
> >
> > For example, now the size is checked:
> >
> > Before:
> > 	/* %rdi unchecked */
> >  1eb:	be c0 0c 00 00       	mov    $0xcc0,%esi
> >  1f0:	e8 00 00 00 00       	call   1f5 <do_SLAB_NEGATIVE+0x15>
> > 			1f1: R_X86_64_PLT32	__kmalloc_noprof-0x4
> >
> > After:
> >  6d0:	48 63 c7             	movslq %edi,%rax
> >  6d3:	85 ff                	test   %edi,%edi
> >  6d5:	be c0 0c 00 00       	mov    $0xcc0,%esi
> >  6da:	48 c7 c2 ff ff ff ff 	mov    $0xffffffffffffffff,%rdx
> >  6e1:	48 0f 49 d0          	cmovns %rax,%rdx
> >  6e5:	48 89 d7             	mov    %rdx,%rdi
> >  6e8:	e8 00 00 00 00       	call   6ed <do_SLAB_NEGATIVE+0x1d>
> > 			6e9: R_X86_64_PLT32	__kmalloc_noprof-0x4
> >
> > Signed-off-by: Kees Cook <kees@kernel.org>
> > ---
> > Cc: Christoph Lameter <cl@linux.com>
> > Cc: Pekka Enberg <penberg@kernel.org>
> > Cc: David Rientjes <rientjes@google.com>
> > Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> > Cc: Andrew Morton <akpm@linux-foundation.org>
> > Cc: Vlastimil Babka <vbabka@suse.cz>
> > Cc: Roman Gushchin <roman.gushchin@linux.dev>
> > Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> > Cc: linux-mm@kvack.org
> > ---
> >  include/linux/slab.h | 19 ++++++++++++++++++-
> >  1 file changed, 18 insertions(+), 1 deletion(-)
> >
> > diff --git a/include/linux/slab.h b/include/linux/slab.h
> > index d99afce36098..7353756cbec6 100644
> > --- a/include/linux/slab.h
> > +++ b/include/linux/slab.h
> > @@ -684,7 +684,24 @@ static __always_inline __alloc_size(1) void *kmalloc_noprof(size_t size, gfp_t f
> >  	}
> >  	return __kmalloc_noprof(size, flags);
> >  }
> > -#define kmalloc(...)			alloc_hooks(kmalloc_noprof(__VA_ARGS__))
> > +#define kmalloc_sized(...)		alloc_hooks(kmalloc_noprof(__VA_ARGS__))
> > +
> > +#define __size_force_positive(x)				\
> > +	({							\
> > +		typeof(__force_integral_expr(x)) __forced_val =	\
> > +			__force_integral_expr(x);		\
> > +		__forced_val < 0 ? SIZE_MAX : __forced_val;	\
> > +	})
> > +
> > +#define kmalloc(p, gfp)	_Generic((p),				\
> > +	unsigned char:	kmalloc_sized(__force_integral_expr(p), gfp),	\
> > +	unsigned short:	kmalloc_sized(__force_integral_expr(p), gfp),	\
> > +	unsigned int:	kmalloc_sized(__force_integral_expr(p), gfp),	\
> > +	unsigned long:	kmalloc_sized(__force_integral_expr(p), gfp),	\
> > +	signed char:	kmalloc_sized(__size_force_positive(p), gfp),	\
> > +	signed short:	kmalloc_sized(__size_force_positive(p), gfp),	\
> > +	signed int:	kmalloc_sized(__size_force_positive(p), gfp),	\
> > +	signed long:	kmalloc_sized(__size_force_positive(p), gfp))
>
> I like this idea and series very much, thank you!

Thanks!

> What about bool?
> What about long long?

Ah yes, I will add these. LKP also found a weird one (a bitfield!) that
I'm fixing at the source:
https://lore.kernel.org/lkml/20240709154953.work.953-kees@kernel.org/
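The follow-up arms are not in the patch as posted; a rough sketch of what they might look like is below, reusing kmalloc_sized() and __size_force_positive() from the hunk above plus __force_integral_expr(), which is presumably introduced earlier in the series. This is a guess at the shape of a v2, not the actual revision. bool can only hold 0 or 1, so it never needs the sign check, while long long gets the same saturation as the other signed types:

#define kmalloc(p, gfp)	_Generic((p),					\
	bool:			kmalloc_sized(__force_integral_expr(p), gfp),	\
	unsigned char:		kmalloc_sized(__force_integral_expr(p), gfp),	\
	unsigned short:		kmalloc_sized(__force_integral_expr(p), gfp),	\
	unsigned int:		kmalloc_sized(__force_integral_expr(p), gfp),	\
	unsigned long:		kmalloc_sized(__force_integral_expr(p), gfp),	\
	unsigned long long:	kmalloc_sized(__force_integral_expr(p), gfp),	\
	signed char:		kmalloc_sized(__size_force_positive(p), gfp),	\
	signed short:		kmalloc_sized(__size_force_positive(p), gfp),	\
	signed int:		kmalloc_sized(__size_force_positive(p), gfp),	\
	signed long:		kmalloc_sized(__size_force_positive(p), gfp),	\
	signed long long:	kmalloc_sized(__size_force_positive(p), gfp))

Whether a bool size should be accepted silently, as the pre-patch kmalloc() did, or rejected outright is a separate policy question; the sketch above simply keeps the old behavior for it.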
diff --git a/include/linux/slab.h b/include/linux/slab.h
index d99afce36098..7353756cbec6 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -684,7 +684,24 @@ static __always_inline __alloc_size(1) void *kmalloc_noprof(size_t size, gfp_t f
 	}
 	return __kmalloc_noprof(size, flags);
 }
-#define kmalloc(...)			alloc_hooks(kmalloc_noprof(__VA_ARGS__))
+#define kmalloc_sized(...)		alloc_hooks(kmalloc_noprof(__VA_ARGS__))
+
+#define __size_force_positive(x)				\
+	({							\
+		typeof(__force_integral_expr(x)) __forced_val =	\
+			__force_integral_expr(x);		\
+		__forced_val < 0 ? SIZE_MAX : __forced_val;	\
+	})
+
+#define kmalloc(p, gfp)	_Generic((p),				\
+	unsigned char:	kmalloc_sized(__force_integral_expr(p), gfp),	\
+	unsigned short:	kmalloc_sized(__force_integral_expr(p), gfp),	\
+	unsigned int:	kmalloc_sized(__force_integral_expr(p), gfp),	\
+	unsigned long:	kmalloc_sized(__force_integral_expr(p), gfp),	\
+	signed char:	kmalloc_sized(__size_force_positive(p), gfp),	\
+	signed short:	kmalloc_sized(__size_force_positive(p), gfp),	\
+	signed int:	kmalloc_sized(__size_force_positive(p), gfp),	\
+	signed long:	kmalloc_sized(__size_force_positive(p), gfp))
 
 #define kmem_buckets_alloc(_b, _size, _flags)			\
 	alloc_hooks(__kmalloc_node_noprof(PASS_BUCKET_PARAMS(_size, _b), _flags, NUMA_NO_NODE))
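From the caller's side the dispatch is invisible as long as the size expression has one of the listed types. Two hypothetical callers, both invented for illustration, show the signed and the unsigned paths:

/* Hypothetical callers, invented for illustration only. */

void *alloc_signed(int len, gfp_t gfp)
{
	/*
	 * 'len' has a signed type, so the _Generic dispatch routes it
	 * through __size_force_positive(): a negative 'len' (say, an
	 * error code accidentally used as a length) becomes SIZE_MAX,
	 * which the allocator refuses, and the caller gets NULL instead
	 * of a bogus allocation.
	 */
	return kmalloc(len, gfp);
}

void *alloc_unsigned(size_t len, gfp_t gfp)
{
	/*
	 * 'len' is size_t, an unsigned type, so it selects one of the
	 * __force_integral_expr() arms and is passed through unchanged,
	 * exactly as before this patch.
	 */
	return kmalloc(len, gfp);
}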
The allocator will already reject the giant sizes that result from
negative size arguments, so this commit mainly serves as an example of
initial type-based filtering. Size arguments with signed types are
checked for negative values, which are saturated to SIZE_MAX instead of
being passed on.

For example, now the size is checked:

Before:
	/* %rdi unchecked */
 1eb:	be c0 0c 00 00       	mov    $0xcc0,%esi
 1f0:	e8 00 00 00 00       	call   1f5 <do_SLAB_NEGATIVE+0x15>
			1f1: R_X86_64_PLT32	__kmalloc_noprof-0x4

After:
 6d0:	48 63 c7             	movslq %edi,%rax
 6d3:	85 ff                	test   %edi,%edi
 6d5:	be c0 0c 00 00       	mov    $0xcc0,%esi
 6da:	48 c7 c2 ff ff ff ff 	mov    $0xffffffffffffffff,%rdx
 6e1:	48 0f 49 d0          	cmovns %rax,%rdx
 6e5:	48 89 d7             	mov    %rdx,%rdi
 6e8:	e8 00 00 00 00       	call   6ed <do_SLAB_NEGATIVE+0x1d>
			6e9: R_X86_64_PLT32	__kmalloc_noprof-0x4

Signed-off-by: Kees Cook <kees@kernel.org>
---
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: linux-mm@kvack.org
---
 include/linux/slab.h | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)
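The saturation corresponds to the test/cmovns pair in the "After" disassembly above. A standalone userspace model of the same expression, with invented names and using GNU statement expressions rather than any kernel code, behaves the same way:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Userspace model of __size_force_positive(): negative sizes saturate
 * to SIZE_MAX, which a real allocator would then refuse.
 */
#define force_positive(x)				\
	({						\
		__typeof__(x) __val = (x);		\
		__val < 0 ? SIZE_MAX : (size_t)__val;	\
	})

int main(void)
{
	int good = 0x300;
	int bad = -12;		/* e.g. an error code misused as a length */

	printf("%zu\n", force_positive(good));	/* 768 */
	printf("%zu\n", force_positive(bad));	/* 18446744073709551615 on 64-bit */
	return 0;
}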