| Message ID | 20231121220155.1217090-33-iii@linux.ibm.com (mailing list archive) |
|---|---|
| State | Superseded |
| Series | kmsan: Enable on s390 |
> +static inline void *arch_kmsan_get_meta_or_null(void *addr, bool is_origin)
> +{
> +	if (addr >= (void *)&S390_lowcore &&
> +	    addr < (void *)(&S390_lowcore + 1)) {
> +		/*
> +		 * Different lowcores accessed via S390_lowcore are described
> +		 * by the same struct page. Resolve the prefix manually in
> +		 * order to get a distinct struct page.
> +		 */
> +		addr += (void *)lowcore_ptr[raw_smp_processor_id()] -
> +			(void *)&S390_lowcore;
> +		return kmsan_get_metadata(addr, is_origin);
> +	}
> +	return NULL;
> +}

Is there a possibility for infinite recursion here? E.g. can
`lowcore_ptr[raw_smp_processor_id()]` point somewhere in between
`(void *)&S390_lowcore` and `(void *)(&S390_lowcore + 1)`?
On Mon, 2023-12-11 at 11:26 +0100, Alexander Potapenko wrote:
> > +static inline void *arch_kmsan_get_meta_or_null(void *addr, bool is_origin)
> > +{
> > +	if (addr >= (void *)&S390_lowcore &&
> > +	    addr < (void *)(&S390_lowcore + 1)) {
> > +		/*
> > +		 * Different lowcores accessed via S390_lowcore are described
> > +		 * by the same struct page. Resolve the prefix manually in
> > +		 * order to get a distinct struct page.
> > +		 */
> > +		addr += (void *)lowcore_ptr[raw_smp_processor_id()] -
> > +			(void *)&S390_lowcore;
> > +		return kmsan_get_metadata(addr, is_origin);
> > +	}
> > +	return NULL;
> > +}
>
> Is there a possibility for infinite recursion here? E.g. can
> `lowcore_ptr[raw_smp_processor_id()]` point somewhere in between
> `(void *)&S390_lowcore` and `(void *)(&S390_lowcore + 1)`?

No, it's allocated with __get_free_pages() or memblock_alloc_low().
But since this question came up, I should probably add a check and
a WARN_ON_ONCE() here.
> > Is there a possibility for infinite recursion here? E.g. can
> > `lowcore_ptr[raw_smp_processor_id()]` point somewhere in between
> > `(void *)&S390_lowcore` and `(void *)(&S390_lowcore + 1)`?
>
> No, it's allocated with __get_free_pages() or memblock_alloc_low().
> But since this question came up, I should probably add a check and
> a WARN_ON_ONCE() here.

Yes, please.
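A minimal sketch of the check Ilya mentions above, assuming the guard is placed before the prefix arithmetic; this is hypothetical illustration code, not the posted patch:

static inline void *arch_kmsan_get_meta_or_null(void *addr, bool is_origin)
{
	if (addr >= (void *)&S390_lowcore &&
	    addr < (void *)(&S390_lowcore + 1)) {
		void *prefix = (void *)lowcore_ptr[raw_smp_processor_id()];

		/*
		 * The prefix lowcore is allocated outside the S390_lowcore
		 * mapping; if it ever aliased that mapping, the
		 * kmsan_get_metadata() call below would recurse forever,
		 * so bail out loudly instead.
		 */
		if (WARN_ON_ONCE(prefix >= (void *)&S390_lowcore &&
				 prefix < (void *)(&S390_lowcore + 1)))
			return NULL;
		addr += prefix - (void *)&S390_lowcore;
		return kmsan_get_metadata(addr, is_origin);
	}
	return NULL;
}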
diff --git a/arch/s390/include/asm/kmsan.h b/arch/s390/include/asm/kmsan.h
new file mode 100644
index 000000000000..afec71e9e9ac
--- /dev/null
+++ b/arch/s390/include/asm/kmsan.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_S390_KMSAN_H
+#define _ASM_S390_KMSAN_H
+
+#include <asm/lowcore.h>
+#include <asm/page.h>
+#include <linux/kmsan.h>
+#include <linux/mmzone.h>
+#include <linux/stddef.h>
+
+#ifndef MODULE
+
+static inline void *arch_kmsan_get_meta_or_null(void *addr, bool is_origin)
+{
+	if (addr >= (void *)&S390_lowcore &&
+	    addr < (void *)(&S390_lowcore + 1)) {
+		/*
+		 * Different lowcores accessed via S390_lowcore are described
+		 * by the same struct page. Resolve the prefix manually in
+		 * order to get a distinct struct page.
+		 */
+		addr += (void *)lowcore_ptr[raw_smp_processor_id()] -
+			(void *)&S390_lowcore;
+		return kmsan_get_metadata(addr, is_origin);
+	}
+	return NULL;
+}
+
+static inline bool kmsan_virt_addr_valid(void *addr)
+{
+	return virt_addr_valid(addr);
+}
+
+#endif /* !MODULE */
+
+#endif /* _ASM_S390_KMSAN_H */
arch_kmsan_get_meta_or_null() finds the lowcore shadow by querying the
prefix and calling kmsan_get_metadata() again. kmsan_virt_addr_valid()
delegates to virt_addr_valid().

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 arch/s390/include/asm/kmsan.h | 36 +++++++++++++++++++++++++++++++++++
 1 file changed, 36 insertions(+)
 create mode 100644 arch/s390/include/asm/kmsan.h
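For context, a rough sketch of how the KMSAN core consumes this hook; the body and call order here are simplified assumptions for illustration, not the actual mm/kmsan/shadow.c code:

void *kmsan_get_metadata(void *address, bool is_origin)
{
	void *ret;

	/* Give the architecture a chance to resolve special mappings first. */
	ret = arch_kmsan_get_meta_or_null(address, is_origin);
	if (ret)
		return ret;

	/* ... otherwise fall back to the generic struct-page-based lookup ... */
	return NULL;
}

This is why the s390 hook can call kmsan_get_metadata() again after rewriting the address: the adjusted pointer no longer falls in the S390_lowcore range, so the second call takes the generic path.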