
[v5,36/37] s390/kmsan: Implement the architecture-specific functions

Message ID 20240619154530.163232-37-iii@linux.ibm.com (mailing list archive)
State New
Series kmsan: Enable on s390

Commit Message

Ilya Leoshkevich June 19, 2024, 3:44 p.m. UTC
arch_kmsan_get_meta_or_null() finds the lowcore shadow by querying the
prefix and calling kmsan_get_metadata() again.

kmsan_virt_addr_valid() delegates to virt_addr_valid().

Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
---
 arch/s390/include/asm/kmsan.h | 59 +++++++++++++++++++++++++++++++++++
 1 file changed, 59 insertions(+)
 create mode 100644 arch/s390/include/asm/kmsan.h

Comments

Alexander Gordeev June 20, 2024, 9:25 a.m. UTC | #1
On Wed, Jun 19, 2024 at 05:44:11PM +0200, Ilya Leoshkevich wrote:

Hi Ilya,

> +static inline bool is_lowcore_addr(void *addr)
> +{
> +	return addr >= (void *)&S390_lowcore &&
> +	       addr < (void *)(&S390_lowcore + 1);
> +}
> +
> +static inline void *arch_kmsan_get_meta_or_null(void *addr, bool is_origin)
> +{
> +	if (is_lowcore_addr(addr)) {
> +		/*
> +		 * Different lowcores accessed via S390_lowcore are described
> +		 * by the same struct page. Resolve the prefix manually in
> +		 * order to get a distinct struct page.
> +		 */

> +		addr += (void *)lowcore_ptr[raw_smp_processor_id()] -
> +			(void *)&S390_lowcore;

If I am not mistaken, neither raw_smp_processor_id() itself nor
lowcore_ptr[raw_smp_processor_id()] is atomic. Should preemption
be disabled while addr is calculated?
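
(For concreteness, a minimal sketch of the guarded calculation, untested;
the _no_resched variant would mirror what kmsan_virt_addr_valid() in this
patch does to avoid recursing into the scheduler:)

	int cpu;

	preempt_disable();
	cpu = raw_smp_processor_id();
	addr += (void *)lowcore_ptr[cpu] - (void *)&S390_lowcore;
	preempt_enable_no_resched();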

But then the question arises: how meaningful is the returned value?
AFAICT kmsan_get_metadata() is called from a preemptible context.
So if the CPU changes, how useful is the previous CPU's lowcore meta?

Is it a memory block that needs to be ignored instead?

> +		if (WARN_ON_ONCE(is_lowcore_addr(addr)))
> +			return NULL;

lowcore_ptr[] pointing into S390_lowcore is rather a bug.

> +		return kmsan_get_metadata(addr, is_origin);
> +	}
> +	return NULL;
> +}

Thanks!
Ilya Leoshkevich June 20, 2024, 1:38 p.m. UTC | #2
On Thu, 2024-06-20 at 11:25 +0200, Alexander Gordeev wrote:
> On Wed, Jun 19, 2024 at 05:44:11PM +0200, Ilya Leoshkevich wrote:
> 
> Hi Ilya,
> 
> > +static inline bool is_lowcore_addr(void *addr)
> > +{
> > +	return addr >= (void *)&S390_lowcore &&
> > +	       addr < (void *)(&S390_lowcore + 1);
> > +}
> > +
> > +static inline void *arch_kmsan_get_meta_or_null(void *addr, bool is_origin)
> > +{
> > +	if (is_lowcore_addr(addr)) {
> > +		/*
> > +		 * Different lowcores accessed via S390_lowcore are described
> > +		 * by the same struct page. Resolve the prefix manually in
> > +		 * order to get a distinct struct page.
> > +		 */
> 
> > +		addr += (void *)lowcore_ptr[raw_smp_processor_id()] -
> > +			(void *)&S390_lowcore;
> 
> If I am not mistaken, neither raw_smp_processor_id() itself nor
> lowcore_ptr[raw_smp_processor_id()] is atomic. Should preemption
> be disabled while addr is calculated?
> 
> But then the question arises: how meaningful is the returned value?
> AFAICT kmsan_get_metadata() is called from a preemptible context.
> So if the CPU changes, how useful is the previous CPU's lowcore meta?

This code path will only be triggered by instrumented code that
accesses lowcore. That code is supposed to disable preemption;
if it doesn't, that's a bug in that code and it should be fixed there.
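
For illustration, a well-formed caller might look roughly like this (a
minimal sketch; the function name and the field being read are made up):

	static unsigned int example_lowcore_read(void)
	{
		unsigned int cpu;

		preempt_disable();
		/* Instrumented access; KMSAN resolves its shadow via the prefix. */
		cpu = S390_lowcore.cpu_nr;
		preempt_enable();
		return cpu;
	}

With preemption off around the access, the CPU cannot change between the
load and the metadata lookup that the instrumentation performs for it.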

> 
> Is it a memory block that needs to be ignored instead?
> 
> > +		if (WARN_ON_ONCE(is_lowcore_addr(addr)))
> > +			return NULL;
> 
> lowcore_ptr[] pointing into S390_lowcore is rather a bug.

Right, but AFAIK BUG() calls are discouraged. I guess in a debug tool
the rules are more relaxed, but we can recover from this condition here
easily, which is why I still went for WARN_ON_ONCE().

> > +		return kmsan_get_metadata(addr, is_origin);
> > +	}
> > +	return NULL;
> > +}
> 
> Thanks!
Alexander Gordeev June 20, 2024, 1:59 p.m. UTC | #3
On Wed, Jun 19, 2024 at 05:44:11PM +0200, Ilya Leoshkevich wrote:
> arch_kmsan_get_meta_or_null() finds the lowcore shadow by querying the
> prefix and calling kmsan_get_metadata() again.
> 
> kmsan_virt_addr_valid() delegates to virt_addr_valid().
> 
> Signed-off-by: Ilya Leoshkevich <iii@linux.ibm.com>
> ---
>  arch/s390/include/asm/kmsan.h | 59 +++++++++++++++++++++++++++++++++++
>  1 file changed, 59 insertions(+)
>  create mode 100644 arch/s390/include/asm/kmsan.h


Acked-by: Alexander Gordeev <agordeev@linux.ibm.com>
Alexander Potapenko June 20, 2024, 2:18 p.m. UTC | #4
On Thu, Jun 20, 2024 at 3:38 PM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
>
> On Thu, 2024-06-20 at 11:25 +0200, Alexander Gordeev wrote:
> > On Wed, Jun 19, 2024 at 05:44:11PM +0200, Ilya Leoshkevich wrote:
> >
> > Hi Ilya,
> >
> > > +static inline bool is_lowcore_addr(void *addr)
> > > +{
> > > +   return addr >= (void *)&S390_lowcore &&
> > > +          addr < (void *)(&S390_lowcore + 1);
> > > +}
> > > +
> > > +static inline void *arch_kmsan_get_meta_or_null(void *addr, bool is_origin)
> > > +{
> > > +   if (is_lowcore_addr(addr)) {
> > > +           /*
> > > +            * Different lowcores accessed via S390_lowcore are described
> > > +            * by the same struct page. Resolve the prefix manually in
> > > +            * order to get a distinct struct page.
> > > +            */
> >
> > > +           addr += (void *)lowcore_ptr[raw_smp_processor_id()] -
> > > +                   (void *)&S390_lowcore;
> >
> > If I am not mistaken, neither raw_smp_processor_id() itself nor
> > lowcore_ptr[raw_smp_processor_id()] is atomic. Should preemption
> > be disabled while addr is calculated?
> >
> > But then the question arises: how meaningful is the returned value?
> > AFAICT kmsan_get_metadata() is called from a preemptible context.
> > So if the CPU changes, how useful is the previous CPU's lowcore meta?
>
> This code path will only be triggered by instrumented code that
> accesses lowcore. That code is supposed to disable preemption;
> if it doesn't, that's a bug in that code and it should be fixed there.
>
> >
> > Is it a memory block that needs to be ignored instead?
> >
> > > +           if (WARN_ON_ONCE(is_lowcore_addr(addr)))
> > > +                   return NULL;
> >
> > lowcore_ptr[] pointing into S390_lowcore is rather a bug.
>
> Right, but AFAIK BUG() calls are discouraged. I guess in a debug tool
> the rules are more relaxed, but we can recover from this condition here
> easily, which is why I still went for WARN_ON_ONCE().

We have KMSAN_WARN_ON() for that; sorry for not pointing it out
earlier: https://elixir.bootlin.com/linux/latest/source/mm/kmsan/kmsan.h#L46
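
For reference, the check in the hunk above would then read (a sketch of
the suggested one-line change):

	if (KMSAN_WARN_ON(is_lowcore_addr(addr)))
		return NULL;

Like WARN_ON(), KMSAN_WARN_ON() evaluates to the condition, but it also
disables KMSAN when the warning fires, which is the safer reaction in a
metadata lookup path.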
Alexander Potapenko June 20, 2024, 2:19 p.m. UTC | #5
On Thu, Jun 20, 2024 at 4:18 PM Alexander Potapenko <glider@google.com> wrote:
>
> On Thu, Jun 20, 2024 at 3:38 PM Ilya Leoshkevich <iii@linux.ibm.com> wrote:
> >
> > On Thu, 2024-06-20 at 11:25 +0200, Alexander Gordeev wrote:
> > > On Wed, Jun 19, 2024 at 05:44:11PM +0200, Ilya Leoshkevich wrote:
> > >
> > > Hi Ilya,
> > >
> > > > +static inline bool is_lowcore_addr(void *addr)
> > > > +{
> > > > +   return addr >= (void *)&S390_lowcore &&
> > > > +          addr < (void *)(&S390_lowcore + 1);
> > > > +}
> > > > +
> > > > +static inline void *arch_kmsan_get_meta_or_null(void *addr, bool is_origin)
> > > > +{
> > > > +   if (is_lowcore_addr(addr)) {
> > > > +           /*
> > > > +            * Different lowcores accessed via S390_lowcore are described
> > > > +            * by the same struct page. Resolve the prefix manually in
> > > > +            * order to get a distinct struct page.
> > > > +            */
> > >
> > > > +           addr += (void *)lowcore_ptr[raw_smp_processor_id()] -
> > > > +                   (void *)&S390_lowcore;
> > >
> > > If I am not mistaken, neither raw_smp_processor_id() itself nor
> > > lowcore_ptr[raw_smp_processor_id()] is atomic. Should preemption
> > > be disabled while addr is calculated?
> > >
> > > But then the question arises: how meaningful is the returned value?
> > > AFAICT kmsan_get_metadata() is called from a preemptible context.
> > > So if the CPU changes, how useful is the previous CPU's lowcore meta?
> >
> > This code path will only be triggered by instrumented code that
> > accesses lowcore. That code is supposed to disable preemption;
> > if it doesn't, that's a bug in that code and it should be fixed there.
> >
> > >
> > > Is it a memory block that needs to be ignored instead?
> > >
> > > > +           if (WARN_ON_ONCE(is_lowcore_addr(addr)))
> > > > +                   return NULL;
> > >
> > > lowcore_ptr[] pointing into S390_lowcore is rather a bug.
> >
> > Right, but AFAIK BUG() calls are discouraged. I guess in a debug tool
> > the rules are more relaxed, but we can recover from this condition here
> > easily, which is why I still went for WARN_ON_ONCE().
>
> We have KMSAN_WARN_ON() for that; sorry for not pointing it out
> earlier: https://elixir.bootlin.com/linux/latest/source/mm/kmsan/kmsan.h#L46

Apart from that:

Reviewed-by: Alexander Potapenko <glider@google.com>

Patch

diff --git a/arch/s390/include/asm/kmsan.h b/arch/s390/include/asm/kmsan.h
new file mode 100644
index 000000000000..eb850c942204
--- /dev/null
+++ b/arch/s390/include/asm/kmsan.h
@@ -0,0 +1,59 @@ 
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_S390_KMSAN_H
+#define _ASM_S390_KMSAN_H
+
+#include <asm/lowcore.h>
+#include <asm/page.h>
+#include <linux/kmsan.h>
+#include <linux/mmzone.h>
+#include <linux/stddef.h>
+
+#ifndef MODULE
+
+static inline bool is_lowcore_addr(void *addr)
+{
+	return addr >= (void *)&S390_lowcore &&
+	       addr < (void *)(&S390_lowcore + 1);
+}
+
+static inline void *arch_kmsan_get_meta_or_null(void *addr, bool is_origin)
+{
+	if (is_lowcore_addr(addr)) {
+		/*
+		 * Different lowcores accessed via S390_lowcore are described
+		 * by the same struct page. Resolve the prefix manually in
+		 * order to get a distinct struct page.
+		 */
+		addr += (void *)lowcore_ptr[raw_smp_processor_id()] -
+			(void *)&S390_lowcore;
+		if (WARN_ON_ONCE(is_lowcore_addr(addr)))
+			return NULL;
+		return kmsan_get_metadata(addr, is_origin);
+	}
+	return NULL;
+}
+
+static inline bool kmsan_virt_addr_valid(void *addr)
+{
+	bool ret;
+
+	/*
+	 * pfn_valid() relies on RCU, and may call into the scheduler on exiting
+	 * the critical section. However, this would result in recursion with
+	 * KMSAN. Therefore, disable preemption here, and re-enable preemption
+	 * below while suppressing reschedules to avoid recursion.
+	 *
+	 * Note that this occasionally breaks scheduling guarantees.
+	 * However, a kernel compiled with KMSAN has already given up on any
+	 * performance guarantees due to being heavily instrumented.
+	 */
+	preempt_disable();
+	ret = virt_addr_valid(addr);
+	preempt_enable_no_resched();
+
+	return ret;
+}
+
+#endif /* !MODULE */
+
+#endif /* _ASM_S390_KMSAN_H */
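
For context, the generic kmsan_get_metadata() consults the arch hook
before its regular page-based lookup, which is why the prefix-resolved
address can simply be fed back into it. A simplified paraphrase of the
flow in mm/kmsan/shadow.c (not a verbatim excerpt; page_based_meta() is
a made-up name for the fallback):

	void *kmsan_get_metadata(void *address, bool is_origin)
	{
		void *ret;

		/* vmalloc and module addresses are handled earlier (omitted). */
		ret = arch_kmsan_get_meta_or_null(address, is_origin);
		if (ret)
			return ret;
		/* Fall back to the struct-page-based shadow/origin lookup. */
		return page_based_meta(address, is_origin);
	}

On the recursive call, the resolved address no longer satisfies
is_lowcore_addr(), so the arch hook returns NULL and the lookup falls
through to the metadata of the distinct per-CPU lowcore page.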