Message ID | 20240620-fault-injection-statickeys-v2-6-e23947d3d84b@suse.cz |
---|---|
State | New |
Series | static key support for error injection functions |
On 6/20/24 12:49 AM, Vlastimil Babka wrote:
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3874,13 +3874,37 @@ static __always_inline void maybe_wipe_obj_freeptr(struct kmem_cache *s,
>  						   0, sizeof(void *));
>  }
>
> -noinline int should_failslab(struct kmem_cache *s, gfp_t gfpflags)
> +#if defined(CONFIG_FUNCTION_ERROR_INJECTION) || defined(CONFIG_FAILSLAB)
> +DEFINE_STATIC_KEY_FALSE(should_failslab_active);
> +
> +#ifdef CONFIG_FUNCTION_ERROR_INJECTION
> +noinline
> +#else
> +static inline
> +#endif
> +int should_failslab(struct kmem_cache *s, gfp_t gfpflags)

Note that it has been found that (regardless of this series) gcc may clone
this to a should_failslab.constprop.0 in case the function is empty because
__should_failslab is compiled out (CONFIG_FAILSLAB=n). The "noinline"
doesn't help - the original function stays but only the clone is actually
being called, thus overriding the original function achieves nothing, see:
https://github.com/bpftrace/bpftrace/issues/3258

So we could use __noclone to prevent that, and I was thinking of adding
something like this to error-injection.h:

#ifdef CONFIG_FUNCTION_ERROR_INJECTION
#define __error_injectable(alternative)	noinline __noclone
#else
#define __error_injectable(alternative)	alternative
#endif

and the usage here would be:

__error_injectable(static inline) int should_failslab(...)

Does that look acceptable, or is it too confusing that "static inline" is
specified there as the storage class to use when error injection is actually
disabled?

> {
> 	if (__should_failslab(s, gfpflags))
> 		return -ENOMEM;
> 	return 0;
> }
> -ALLOW_ERROR_INJECTION(should_failslab, ERRNO);
> +ALLOW_ERROR_INJECTION_KEY(should_failslab, ERRNO, &should_failslab_active);
> +
> +static __always_inline int should_failslab_wrapped(struct kmem_cache *s,
> +						   gfp_t gfp)
> +{
> +	if (static_branch_unlikely(&should_failslab_active))
> +		return should_failslab(s, gfp);
> +	else
> +		return 0;
> +}
> +#else
> +static __always_inline int should_failslab_wrapped(struct kmem_cache *s,
> +						   gfp_t gfp)
> +{
> +	return false;
> +}
> +#endif
>
> static __fastpath_inline
> struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s, gfp_t flags)
> @@ -3889,7 +3913,7 @@ struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s, gfp_t flags)
>
> 	might_alloc(flags);
>
> -	if (unlikely(should_failslab(s, flags)))
> +	if (should_failslab_wrapped(s, flags))
> 		return NULL;
>
> 	return s;
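To make the proposal above concrete, here is a sketch of how the suggested __error_injectable() helper would expand for should_failslab(). This is only an illustration of the idea as stated in the message, not code from the posted series:

```c
/* Sketch only: the helper proposed above, in error-injection.h. */
#ifdef CONFIG_FUNCTION_ERROR_INJECTION
#define __error_injectable(alternative)	noinline __noclone
#else
#define __error_injectable(alternative)	alternative
#endif

/*
 * With CONFIG_FUNCTION_ERROR_INJECTION=y this expands to
 * "noinline __noclone int should_failslab(...)", keeping a real,
 * non-cloned symbol that error injection can override.  Otherwise it
 * expands to "static inline int should_failslab(...)", which the
 * compiler is free to inline and discard.
 */
__error_injectable(static inline)
int should_failslab(struct kmem_cache *s, gfp_t gfpflags)
{
	if (__should_failslab(s, gfpflags))
		return -ENOMEM;
	return 0;
}
```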
On Tue, Jun 25, 2024 at 7:24 AM Vlastimil Babka <vbabka@suse.cz> wrote:
>
> On 6/20/24 12:49 AM, Vlastimil Babka wrote:
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -3874,13 +3874,37 @@ static __always_inline void maybe_wipe_obj_freeptr(struct kmem_cache *s,
> >  						   0, sizeof(void *));
> >  }
> >
> > -noinline int should_failslab(struct kmem_cache *s, gfp_t gfpflags)
> > +#if defined(CONFIG_FUNCTION_ERROR_INJECTION) || defined(CONFIG_FAILSLAB)
> > +DEFINE_STATIC_KEY_FALSE(should_failslab_active);
> > +
> > +#ifdef CONFIG_FUNCTION_ERROR_INJECTION
> > +noinline
> > +#else
> > +static inline
> > +#endif
> > +int should_failslab(struct kmem_cache *s, gfp_t gfpflags)
>
> Note that it has been found that (regardless of this series) gcc may clone
> this to a should_failslab.constprop.0 in case the function is empty because
> __should_failslab is compiled out (CONFIG_FAILSLAB=n). The "noinline"
> doesn't help - the original function stays but only the clone is actually
> being called, thus overriding the original function achieves nothing, see:
> https://github.com/bpftrace/bpftrace/issues/3258
>
> So we could use __noclone to prevent that, and I was thinking of adding
> something like this to error-injection.h:
>
> #ifdef CONFIG_FUNCTION_ERROR_INJECTION
> #define __error_injectable(alternative)	noinline __noclone

To prevent such compiler transformations we typically use
__used noinline

We didn't have a need for __noclone yet. If __used is enough I'd stick to that.
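For context on the two attributes being weighed here: __used only forces the compiler to emit the symbol even when it appears unreferenced, while __noclone additionally forbids gcc from generating specialized copies such as the .constprop.0 clone mentioned above. Roughly how the kernel defines them (in include/linux/compiler_attributes.h in recent trees; exact guards and location vary by version):

```c
/* Keep the symbol emitted even if the compiler sees no references to it. */
#define __used		__attribute__((__used__))

/*
 * Forbid gcc from emitting specialized clones (e.g. .constprop.N) of the
 * function; clang does not support the attribute, hence the guard.
 */
#if __has_attribute(__noclone__)
#define __noclone	__attribute__((__noclone__))
#else
#define __noclone
#endif
```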
On 6/25/24 7:12 PM, Alexei Starovoitov wrote:
> On Tue, Jun 25, 2024 at 7:24 AM Vlastimil Babka <vbabka@suse.cz> wrote:
>>
>> On 6/20/24 12:49 AM, Vlastimil Babka wrote:
>> > --- a/mm/slub.c
>> > +++ b/mm/slub.c
>> > @@ -3874,13 +3874,37 @@ static __always_inline void maybe_wipe_obj_freeptr(struct kmem_cache *s,
>> >  						   0, sizeof(void *));
>> >  }
>> >
>> > -noinline int should_failslab(struct kmem_cache *s, gfp_t gfpflags)
>> > +#if defined(CONFIG_FUNCTION_ERROR_INJECTION) || defined(CONFIG_FAILSLAB)
>> > +DEFINE_STATIC_KEY_FALSE(should_failslab_active);
>> > +
>> > +#ifdef CONFIG_FUNCTION_ERROR_INJECTION
>> > +noinline
>> > +#else
>> > +static inline
>> > +#endif
>> > +int should_failslab(struct kmem_cache *s, gfp_t gfpflags)
>>
>> Note that it has been found that (regardless of this series) gcc may clone
>> this to a should_failslab.constprop.0 in case the function is empty because
>> __should_failslab is compiled out (CONFIG_FAILSLAB=n). The "noinline"
>> doesn't help - the original function stays but only the clone is actually
>> being called, thus overriding the original function achieves nothing, see:
>> https://github.com/bpftrace/bpftrace/issues/3258
>>
>> So we could use __noclone to prevent that, and I was thinking of adding
>> something like this to error-injection.h:
>>
>> #ifdef CONFIG_FUNCTION_ERROR_INJECTION
>> #define __error_injectable(alternative)	noinline __noclone
>
> To prevent such compiler transformations we typically use
> __used noinline
>
> We didn't have a need for __noclone yet. If __used is enough I'd stick to that.

__used made no difference here (gcc 13.3), __noclone did.
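If the dedicated __error_injectable() macro ends up looking too confusing, a minimal alternative (not posted in the thread, shown here only as a sketch) would be to add __noclone directly to the conditional attribute block already present in the v2 diff below:

```c
#ifdef CONFIG_FUNCTION_ERROR_INJECTION
/* Keep a callable, non-cloned symbol that error injection can override. */
noinline __noclone
#else
/* No error injection: let the compiler inline and discard the stub. */
static inline
#endif
int should_failslab(struct kmem_cache *s, gfp_t gfpflags)
{
	if (__should_failslab(s, gfpflags))
		return -ENOMEM;
	return 0;
}
```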
diff --git a/include/linux/fault-inject.h b/include/linux/fault-inject.h
index cfe75cc1bac4..0d0fa94dc1c8 100644
--- a/include/linux/fault-inject.h
+++ b/include/linux/fault-inject.h
@@ -107,9 +107,11 @@ static inline bool __should_fail_alloc_page(gfp_t gfp_mask, unsigned int order)
 }
 #endif /* CONFIG_FAIL_PAGE_ALLOC */

+#ifdef CONFIG_FUNCTION_ERROR_INJECTION
 int should_failslab(struct kmem_cache *s, gfp_t gfpflags);
+#endif
 #ifdef CONFIG_FAILSLAB
-extern bool __should_failslab(struct kmem_cache *s, gfp_t gfpflags);
+bool __should_failslab(struct kmem_cache *s, gfp_t gfpflags);
 #else
 static inline bool __should_failslab(struct kmem_cache *s, gfp_t gfpflags)
 {
diff --git a/mm/failslab.c b/mm/failslab.c
index ffc420c0e767..878fd08e5dac 100644
--- a/mm/failslab.c
+++ b/mm/failslab.c
@@ -9,7 +9,7 @@ static struct {
 	bool ignore_gfp_reclaim;
 	bool cache_filter;
 } failslab = {
-	.attr = FAULT_ATTR_INITIALIZER,
+	.attr = FAULT_ATTR_INITIALIZER_KEY(&should_failslab_active.key),
 	.ignore_gfp_reclaim = true,
 	.cache_filter = false,
 };
diff --git a/mm/slab.h b/mm/slab.h
index 5f8f47c5bee0..792e19cb37b8 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -11,6 +11,7 @@
 #include <linux/memcontrol.h>
 #include <linux/kfence.h>
 #include <linux/kasan.h>
+#include <linux/jump_label.h>

 /*
  * Internal slab definitions
@@ -160,6 +161,8 @@ static_assert(IS_ALIGNED(offsetof(struct slab, freelist), sizeof(freelist_aba_t)
  */
 #define slab_page(s) folio_page(slab_folio(s), 0)

+DECLARE_STATIC_KEY_FALSE(should_failslab_active);
+
 /*
  * If network-based swap is enabled, sl*b must keep track of whether pages
  * were allocated from pfmemalloc reserves.
diff --git a/mm/slub.c b/mm/slub.c
index 0809760cf789..11980aa94631 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3874,13 +3874,37 @@ static __always_inline void maybe_wipe_obj_freeptr(struct kmem_cache *s,
 						   0, sizeof(void *));
 }

-noinline int should_failslab(struct kmem_cache *s, gfp_t gfpflags)
+#if defined(CONFIG_FUNCTION_ERROR_INJECTION) || defined(CONFIG_FAILSLAB)
+DEFINE_STATIC_KEY_FALSE(should_failslab_active);
+
+#ifdef CONFIG_FUNCTION_ERROR_INJECTION
+noinline
+#else
+static inline
+#endif
+int should_failslab(struct kmem_cache *s, gfp_t gfpflags)
 {
 	if (__should_failslab(s, gfpflags))
 		return -ENOMEM;
 	return 0;
 }
-ALLOW_ERROR_INJECTION(should_failslab, ERRNO);
+ALLOW_ERROR_INJECTION_KEY(should_failslab, ERRNO, &should_failslab_active);
+
+static __always_inline int should_failslab_wrapped(struct kmem_cache *s,
+						   gfp_t gfp)
+{
+	if (static_branch_unlikely(&should_failslab_active))
+		return should_failslab(s, gfp);
+	else
+		return 0;
+}
+#else
+static __always_inline int should_failslab_wrapped(struct kmem_cache *s,
+						   gfp_t gfp)
+{
+	return false;
+}
+#endif

 static __fastpath_inline
 struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s, gfp_t flags)
@@ -3889,7 +3913,7 @@ struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s, gfp_t flags)

 	might_alloc(flags);

-	if (unlikely(should_failslab(s, flags)))
+	if (should_failslab_wrapped(s, flags))
 		return NULL;

 	return s;
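The hunks above wire the new should_failslab_active key into the failslab fault_attr via FAULT_ATTR_INITIALIZER_KEY(). The arming side lives in earlier patches of the series and is not shown here; purely as a simplified sketch of the intended mechanism (names and helpers below are illustrative, not from the series):

```c
/*
 * Illustrative sketch only: a fault_attr carries an optional pointer to
 * the static key guarding its hook, and the key is flipped while fault
 * injection (or an error-injection BPF program) is active.  The real
 * implementation in the series differs in detail.
 */
struct fault_attr_sketch {
	/* ... the usual fault_attr members (probability, times, ...) ... */
	struct static_key *active;	/* e.g. &should_failslab_active.key */
};

/* Hypothetical helpers showing when the key would be toggled. */
static void fault_attr_arm(struct fault_attr_sketch *attr)
{
	if (attr->active)
		static_key_slow_inc(attr->active);	/* fast path now calls should_failslab() */
}

static void fault_attr_disarm(struct fault_attr_sketch *attr)
{
	if (attr->active)
		static_key_slow_dec(attr->active);	/* fast path goes back to a NOP */
}
```

The payoff is in slab_pre_alloc_hook(): while the key stays disabled, should_failslab_wrapped() compiles down to a static-branch NOP, so the unconditional should_failslab() call disappears from the allocation fast path.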