
[v4,07/13] iov_iter: Make copy_from_iter() always handle MCE

Message ID 20230913165648.2570623-8-dhowells@redhat.com (mailing list archive)
State New, archived
Series iov_iter: Convert the iterator macros into inline funcs

Commit Message

David Howells Sept. 13, 2023, 4:56 p.m. UTC
Make copy_from_iter() always catch an MCE and return a short copy and make
the coredump code rely on that.  This requires arch support in the form of
a memcpy_mc() function that returns the length copied.

[?] Is it better to kill the thread in the event of an MCE occurring?

Signed-off-by: David Howells <dhowells@redhat.com>
cc: Alexander Viro <viro@zeniv.linux.org.uk>
cc: Jens Axboe <axboe@kernel.dk>
cc: Christoph Hellwig <hch@lst.de>
cc: Christian Brauner <christian@brauner.io>
cc: Matthew Wilcox <willy@infradead.org>
cc: Linus Torvalds <torvalds@linux-foundation.org>
cc: David Laight <David.Laight@ACULAB.COM>
cc: linux-block@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
---
 arch/x86/include/asm/mce.h | 23 +++++++++++++++++++++++
 fs/coredump.c              |  1 -
 lib/iov_iter.c             | 12 +++++-------
 3 files changed, 28 insertions(+), 8 deletions(-)
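
As an illustration of the behaviour the commit message relies on, a caller
that depends on the short-copy return might look roughly like this
(hypothetical pattern, not part of the patch; copy_from_iter() returns the
number of bytes actually copied):

	size_t copied;

	/* A short return from copy_from_iter() means the copy was cut
	 * short, e.g. by an MCE on a poisoned source page. */
	copied = copy_from_iter(buf, len, &iter);
	if (copied != len)
		return -EIO;	/* hypothetical error handling */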

Comments

Linus Torvalds Sept. 13, 2023, 7:43 p.m. UTC | #1
On Wed, 13 Sept 2023 at 09:57, David Howells <dhowells@redhat.com> wrote:
>
> Make copy_from_iter() always catch an MCE and return a short copy and make
> the coredump code rely on that.  This requires arch support in the form of
> a memcpy_mc() function that returns the length copied.

What?

This patch seems to miss the point of the machine check copy entirely.

You create that completely random memcpy_mc() function, that has
nothing to do with our existing copy_mc_to_kernel(), and you claim
that the issue is that it should return the length copied.

Which is not the issue at all.

Several x86 chips will HANG due to internal CPU corruption if you use
the string instructions for copying data when a machine check
exception happens (possibly only due to memory poisoning with some
non-volatile RAM thing).

Are these chips buggy? Yes.

Is the Intel machine check architecture nasty and bad? Yes, Christ yes.

Can these machines hang if user space does repeat string instructions
to said memory? Afaik, very much yes again. They are buggy.

I _think_ this only happens with the non-volatile storage stuff (thus
the dax / pmem / etc angle), and I hope we can put it behind us some
day.

But that doesn't mean that you can take our existing
copy_mc_to_kernel() code that tries to work around this and replace it
with something completely different that definitely does *not* work
around it.

See the comment in arch/x86/lib/copy_mc_64.S:

 * copy_mc_fragile - copy memory with indication if an exception / fault happened
 *
 * The 'fragile' version is opted into by platform quirks and takes
 * pains to avoid unrecoverable corner cases like 'fast-string'
 * instruction sequences, and consuming poison across a cacheline
 * boundary. The non-fragile version is equivalent to memcpy()
 * regardless of CPU machine-check-recovery capability.

and yes, it's disgusting, and no, I've never seen a machine that does
this, since it's all "enterprise hardware", and I don't want to touch
that shite with a ten-foot pole.

Should I go on another rant about how "enterprise" means "over-priced
garbage, but with a paper trail of how bad it is, so that you can
point fingers at somebody else"?

That's true both when applied to software and to hardware, I'm afraid.

So if we get rid of that horrendous "copy_mc_fragile", then pretty
much THE WHOLE POINT of the stupid MC copy goes away, and we should
just get rid of it all entirely.

Which might be a good idea, but is absolutely *not* something that
should be done randomly as part of some iov_iter rewrite series.

I'll dance on the grave of that *horrible* machine check copy code,
but when I see this as part of iov_iter cleanup, I can only say "No.
Not this way".

> [?] Is it better to kill the thread in the event of an MCE occurring?

Oh, the thread will be dead already. In fact, if I understand the
problem correctly, the whole f$^!ng machine will be dead and need to
be power-cycled.

                 Linus
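
For reference, the existing x86 copy_mc_to_kernel() that Linus points to
dispatches between the quirk-driven "fragile" copy and plainer variants,
roughly like this (simplified sketch of arch/x86/lib/copy_mc.c; exact
details vary by kernel version).  It returns the number of bytes left
uncopied:

unsigned long __must_check
copy_mc_to_kernel(void *dst, const void *src, unsigned len)
{
	/* Platform quirk set at boot: avoid fast-string copies and
	 * consuming poison across a cacheline boundary. */
	if (copy_mc_fragile_enabled)
		return copy_mc_fragile(dst, src, len);

	/* Recoverable machine checks plus fast strings: use the
	 * exception-table-protected 'rep movsb' variant. */
	if (static_cpu_has(X86_FEATURE_ERMS))
		return copy_mc_enhanced_fast_string(dst, src, len);

	/* No recovery support: behave exactly like memcpy(). */
	memcpy(dst, src, len);
	return 0;
}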

Patch

diff --git a/arch/x86/include/asm/mce.h b/arch/x86/include/asm/mce.h
index 180b1cbfcc4e..77ce2044536c 100644
--- a/arch/x86/include/asm/mce.h
+++ b/arch/x86/include/asm/mce.h
@@ -353,4 +353,27 @@  static inline void mce_hygon_feature_init(struct cpuinfo_x86 *c)	{ return mce_am
 
 unsigned long copy_mc_fragile_handle_tail(char *to, char *from, unsigned len);
 
+static __always_inline __must_check
+size_t memcpy_mc(void *to, const void *from, size_t len)
+{
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+	/*
+	 * If CPU has FSRM feature, use 'rep movs'.
+	 * Otherwise, use rep_movs_alternative.
+	 */
+	asm volatile(
+		"1:\n\t"
+		ALTERNATIVE("rep movsb",
+			    "call rep_movs_alternative", ALT_NOT(X86_FEATURE_FSRM))
+		"2:\n"
+		_ASM_EXTABLE_TYPE(1b, 2b, EX_TYPE_DEFAULT_MCE_SAFE)
+		:"+c" (len), "+D" (to), "+S" (from), ASM_CALL_CONSTRAINT
+		: : "memory", "rax", "r8", "r9", "r10", "r11");
+#else
+	memcpy(to, from, len);
+	return 0;
+#endif
+	return len;
+}
+
 #endif /* _ASM_X86_MCE_H */
diff --git a/fs/coredump.c b/fs/coredump.c
index 9d235fa14ab9..ad54102a5e14 100644
--- a/fs/coredump.c
+++ b/fs/coredump.c
@@ -884,7 +884,6 @@  static int dump_emit_page(struct coredump_params *cprm, struct page *page)
 	pos = file->f_pos;
 	bvec_set_page(&bvec, page, PAGE_SIZE, 0);
 	iov_iter_bvec(&iter, ITER_SOURCE, &bvec, 1, PAGE_SIZE);
-	iov_iter_set_copy_mc(&iter);
 	n = __kernel_write_iter(cprm->file, &iter, &pos);
 	if (n != PAGE_SIZE)
 		return 0;
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 65374ee91ecd..b574601783bc 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -14,6 +14,7 @@ 
 #include <linux/scatterlist.h>
 #include <linux/instrumented.h>
 #include <linux/iov_iter.h>
+#include <asm/mce.h>
 
 static __always_inline
 size_t copy_to_user_iter(void __user *iter_to, size_t progress,
@@ -253,14 +254,11 @@  size_t _copy_mc_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
 EXPORT_SYMBOL_GPL(_copy_mc_to_iter);
 #endif /* CONFIG_ARCH_HAS_COPY_MC */
 
-static size_t memcpy_from_iter_mc(void *iter_from, size_t progress,
-				  size_t len, void *to, void *priv2)
+static __always_inline
+size_t memcpy_from_iter_mc(void *iter_from, size_t progress,
+			   size_t len, void *to, void *priv2)
 {
-	struct iov_iter *iter = priv2;
-
-	if (iov_iter_is_copy_mc(iter))
-		return copy_mc_to_kernel(to + progress, iter_from, len);
-	return memcpy_from_iter(iter_from, progress, len, to, priv2);
+	return memcpy_mc(to + progress, iter_from, len);
 }
 
 size_t _copy_from_iter(void *addr, size_t bytes, struct iov_iter *i)
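
With the patch applied, the kernel-side step of _copy_from_iter() would
always go through memcpy_from_iter_mc(), along these lines (assumed shape
based on the iterate_and_advance() helper introduced earlier in this
series, not a verbatim continuation of the hunk above):

size_t _copy_from_iter(void *addr, size_t bytes, struct iov_iter *i)
{
	if (WARN_ON_ONCE(!i->data_source))
		return 0;
	if (user_backed_iter(i))
		might_fault();

	/* memcpy_from_iter_mc() is now the unconditional kernel-side
	 * step, so an MCE shows up as a short copy. */
	return iterate_and_advance(i, bytes, addr,
				   copy_from_user_iter, memcpy_from_iter_mc);
}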