Message ID | e06c7c0fdbad7044f150891d827393665c5742fd.1743772053.git.maciej.wieczor-retman@intel.com (mailing list archive)
State | New |
Series | [v3,01/14] kasan: sw_tags: Use arithmetic shift for shadow computation
On 4/4/25 06:14, Maciej Wieczor-Retman wrote:
> +static inline const void *__tag_set(const void *addr, u8 tag)
> +{
> +	u64 __addr = (u64)addr & ~__tag_shifted(KASAN_TAG_KERNEL);
> +	return (const void *)(__addr | __tag_shifted(tag));
> +}

This becomes a lot clearer to read if you separate out the casting from
the logical bit manipulation. For instance:

static inline const void *__tag_set(const void *__addr, u8 tag)
{
	u64 addr = (u64)__addr;

	addr &= ~__tag_shifted(KASAN_TAG_KERNEL);
	addr |= __tag_shifted(tag);

	return (const void *)addr;
}

Also, unless there's a good reason for it, you might as well limit the
places you need to use "__".

Now that we can read this, I think it's potentially buggy. If someone
went and changed:

#define KASAN_TAG_KERNEL	0xFF

to, say:

#define KASAN_TAG_KERNEL	0xAB

the '&' would miss clearing bits. It works fine in the arm64
implementation:

u64 __addr = (u64)addr & ~__tag_shifted(0xff);

because they've hard-coded 0xff. I _think_ that's what you actually
want here. You don't want to mask out KASAN_TAG_KERNEL, you actually
want to mask out *ANYTHING* in those bits.

So the best thing is probably to define a KASAN_TAG_MASK that makes it
clear which are the tag bits.
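[Editorial note: to make the suggestion above concrete, here is a rough
sketch of what a mask-based __tag_set() could look like. This is not
code from the posted series; the KASAN_TAG_MASK define is the
reviewer's proposed (hypothetical) name, and the layout assumes the tag
sits in pointer bits [60:57] as in the patch below.]

/* Hypothetical: cover all tag bits, independent of KASAN_TAG_KERNEL. */
#define KASAN_TAG_MASK		GENMASK_ULL(60, 57)

static inline const void *__tag_set(const void *__addr, u8 tag)
{
	u64 addr = (u64)__addr;

	/* Clear every tag bit, whatever tag the pointer carried before. */
	addr &= ~KASAN_TAG_MASK;
	/* Install the new tag in bits [60:57]. */
	addr |= FIELD_PREP(KASAN_TAG_MASK, tag);

	return (const void *)addr;
}

[With a mask like this, changing KASAN_TAG_KERNEL to something other
than 0xFF would no longer silently leave stale tag bits behind.]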
diff --git a/arch/x86/include/asm/kasan.h b/arch/x86/include/asm/kasan.h
index d7e33c7f096b..212218622963 100644
--- a/arch/x86/include/asm/kasan.h
+++ b/arch/x86/include/asm/kasan.h
@@ -3,6 +3,8 @@
 #define _ASM_X86_KASAN_H

 #include <linux/const.h>
+#include <linux/kasan-tags.h>
+#include <linux/types.h>
 #define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
 #define KASAN_SHADOW_SCALE_SHIFT 3

@@ -24,8 +26,33 @@
 						  KASAN_SHADOW_SCALE_SHIFT)))

 #ifndef __ASSEMBLER__
+#include <linux/bitops.h>
+#include <linux/bitfield.h>
+#include <linux/bits.h>
+
+#ifdef CONFIG_KASAN_SW_TAGS
+
+#define __tag_shifted(tag)		FIELD_PREP(GENMASK_ULL(60, 57), tag)
+#define __tag_reset(addr)		(sign_extend64((u64)(addr), 56))
+#define __tag_get(addr)			((u8)FIELD_GET(GENMASK_ULL(60, 57), (u64)addr))
+#else
+#define __tag_shifted(tag)		0UL
+#define __tag_reset(addr)		(addr)
+#define __tag_get(addr)			0
+#endif /* CONFIG_KASAN_SW_TAGS */
+
+static inline const void *__tag_set(const void *addr, u8 tag)
+{
+	u64 __addr = (u64)addr & ~__tag_shifted(KASAN_TAG_KERNEL);
+	return (const void *)(__addr | __tag_shifted(tag));
+}
+
+#define arch_kasan_set_tag(addr, tag)	__tag_set(addr, tag)
+#define arch_kasan_reset_tag(addr)	__tag_reset(addr)
+#define arch_kasan_get_tag(addr)	__tag_get(addr)

 #ifdef CONFIG_KASAN
+
 void __init kasan_early_init(void);
 void __init kasan_init(void);
 void __init kasan_populate_shadow_for_vaddr(void *va, size_t size, int nid);
@@ -34,8 +61,9 @@ static inline void kasan_early_init(void) { }
 static inline void kasan_init(void) { }
 static inline void kasan_populate_shadow_for_vaddr(void *va, size_t size,
						   int nid) { }
-#endif
-#endif
+#endif /* CONFIG_KASAN */
+
+#endif /* __ASSEMBLER__ */

 #endif
KASAN's software tag-based mode needs multiple macros/functions to
handle tag and pointer interactions - mainly to set and retrieve tags
from the top bits of a pointer.

Mimic functions currently used by arm64 but change the tag's position
to bits [60:57] in the pointer.

Signed-off-by: Maciej Wieczor-Retman <maciej.wieczor-retman@intel.com>
---
Changelog v3:
- Reorder functions so that __tag_*() etc are above the arch_kasan_*()
  ones.
- Remove CONFIG_KASAN condition from __tag_set()

 arch/x86/include/asm/kasan.h | 32 ++++++++++++++++++++++++++++++--
 1 file changed, 30 insertions(+), 2 deletions(-)
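[Editorial note: for readers who want to see the [60:57] tag layout
concretely, below is a small standalone userspace sketch, not part of
the patch, that mimics the helpers in the diff. The example address and
the tag value 0xa are made up; the kernel code uses FIELD_PREP(),
FIELD_GET() and sign_extend64() rather than the open-coded shifts
here.]

#include <stdint.h>
#include <stdio.h>

/* Tag lives in pointer bits [60:57], mirroring GENMASK_ULL(60, 57). */
#define TAG_SHIFT	57
#define TAG_MASK	(0xfULL << TAG_SHIFT)

/* Mirror of __tag_shifted(): place a tag into bits [60:57]. */
static uint64_t tag_shifted(uint8_t tag)
{
	return ((uint64_t)tag << TAG_SHIFT) & TAG_MASK;
}

/* Mirror of __tag_get(): pull the tag back out of bits [60:57]. */
static uint8_t tag_get(uint64_t addr)
{
	return (uint8_t)((addr & TAG_MASK) >> TAG_SHIFT);
}

/* Mirror of __tag_reset(): sign-extend from bit 56, like sign_extend64(addr, 56). */
static uint64_t tag_reset(uint64_t addr)
{
	return (uint64_t)(((int64_t)(addr << 7)) >> 7);
}

int main(void)
{
	uint64_t ptr = 0xffffffff90000000ULL;	/* made-up kernel-style address */
	uint64_t tagged = (ptr & ~TAG_MASK) | tag_shifted(0xa);

	printf("tagged:   0x%016llx  tag=0x%x\n",
	       (unsigned long long)tagged, tag_get(tagged));
	printf("untagged: 0x%016llx\n",
	       (unsigned long long)tag_reset(tagged));
	return 0;
}

[Compiled with a normal C compiler this should print a tagged value of
0xf5ffffff90000000 with tag 0xa, and recover 0xffffffff90000000 on
reset: the tag occupies bits [60:57] while bit 56 drives the sign
extension back to a canonical kernel address.]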