Message ID | 20221116041342.3841-1-elliott@hpe.com (mailing list archive) |
---|---|
Series | crypto: fix RCU stalls |
On Tue, Nov 15, 2022 at 10:13:18PM -0600, Robert Elliott wrote:
> This series fixes the RCU stalls triggered by the x86 crypto
> modules discussed in
> https://lore.kernel.org/all/MW5PR84MB18426EBBA3303770A8BC0BDFAB759@MW5PR84MB1842.NAMPRD84.PROD.OUTLOOK.COM/
>
> Two root causes were:
> - too much data processed between kernel_fpu_begin and
>   kernel_fpu_end calls (which are heavily used by the x86
>   optimized drivers)
> - tcrypt not calling cond_resched during speed test loops
>
> These problems have always been lurking, but improving the
> loading of the x86/sha512 module led to it happening a lot
> during boot when using SHA-512 for module signature checking.

Can we split this series up please? The fixes to the stalls should
stand separately from the changes to how modules are loaded. The
latter is more of an improvement while the former should be applied
ASAP.

Thanks,
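For context, the "limit FPU preemption" patches discussed below revolve around one pattern: bound how many bytes are processed inside each kernel_fpu_begin()/kernel_fpu_end() section so that preemption (and therefore RCU) gets a chance to run between chunks. A minimal sketch of that pattern follows, assuming a 4 KiB per-section limit and a made-up do_hash_blocks() helper; the actual limits and helpers in the series may differ.

```c
/*
 * Illustrative sketch only, not the exact code from the series:
 * bound the bytes hashed per kernel_fpu_begin()/kernel_fpu_end()
 * section so the scheduler and RCU are not starved.  FPU_BYTES and
 * do_hash_blocks() are assumptions made up for this example.
 */
#include <asm/fpu/api.h>	/* kernel_fpu_begin(), kernel_fpu_end() */
#include <linux/minmax.h>	/* min() */
#include <linux/types.h>	/* u8 */

#define FPU_BYTES 4096U		/* assumed per-section limit */

/* hypothetical SIMD block function; stands in for the real one */
void do_hash_blocks(void *state, const u8 *data, unsigned int len);

static void hash_update_bounded(void *state, const u8 *data,
				unsigned int len)
{
	while (len) {
		unsigned int chunk = min(len, FPU_BYTES);

		kernel_fpu_begin();
		do_hash_blocks(state, data, chunk);
		kernel_fpu_end();	/* preemption point between chunks */

		data += chunk;
		len -= chunk;
	}
}
```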
> -----Original Message-----
> From: Herbert Xu <herbert@gondor.apana.org.au>
> Sent: Wednesday, November 16, 2022 9:59 PM
> Subject: Re: [PATCH v4 00/24] crypto: fix RCU stalls
>
> On Tue, Nov 15, 2022 at 10:13:18PM -0600, Robert Elliott wrote:
...
> > These problems have always been lurking, but improving the
> > loading of the x86/sha512 module led to it happening a lot
> > during boot when using SHA-512 for module signature checking.
>
> Can we split this series up please? The fixes to the stalls should
> stand separately from the changes to how modules are loaded. The
> latter is more of an improvement while the former should be applied
> ASAP.

Yes. With the v4 patch numbers:
[PATCH v4 01/24] crypto: tcrypt - test crc32
[PATCH v4 02/24] crypto: tcrypt - test nhpoly1305

Those ensure the changes to those hash modules are testable.

[PATCH v4 03/24] crypto: tcrypt - reschedule during cycles speed

That's only for tcrypt so not urgent for users, but pretty
simple.

[PATCH v4 04/24] crypto: x86/sha - limit FPU preemption
[PATCH v4 05/24] crypto: x86/crc - limit FPU preemption
[PATCH v4 06/24] crypto: x86/sm3 - limit FPU preemption
[PATCH v4 07/24] crypto: x86/ghash - use u8 rather than char
[PATCH v4 08/24] crypto: x86/ghash - restructure FPU context saving
[PATCH v4 09/24] crypto: x86/ghash - limit FPU preemption
[PATCH v4 10/24] crypto: x86/poly - limit FPU preemption
[PATCH v4 11/24] crypto: x86/aegis - limit FPU preemption
[PATCH v4 12/24] crypto: x86/sha - register all variations
[PATCH v4 13/24] crypto: x86/sha - minimize time in FPU context

That's the end of the fixes set.

[PATCH v4 14/24] crypto: x86/sha - load based on CPU features
[PATCH v4 15/24] crypto: x86/crc - load based on CPU features
[PATCH v4 16/24] crypto: x86/sm3 - load based on CPU features
[PATCH v4 17/24] crypto: x86/poly - load based on CPU features
[PATCH v4 18/24] crypto: x86/ghash - load based on CPU features
[PATCH v4 19/24] crypto: x86/aesni - avoid type conversions
[PATCH v4 20/24] crypto: x86/ciphers - load based on CPU features
[PATCH v4 21/24] crypto: x86 - report used CPU features via module
[PATCH v4 22/24] crypto: x86 - report missing CPU features via module
[PATCH v4 23/24] crypto: x86 - report suboptimal CPUs via module
[PATCH v4 24/24] crypto: x86 - standardize module descriptions

I'll put those in a new series.

For 6.1, I still suggest reverting aa031b8f702e ("crypto: x86/sha512 -
load based on CPU features") since that exposed the problem. Target the
fixes for 6.2 and module loading for 6.2 or 6.3.
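Patch 03/24 above targets the second root cause: tcrypt speed tests spinning through long timed loops without ever rescheduling. A rough sketch of what that kind of fix amounts to, with a made-up speed_test_loop() helper standing in for tcrypt's actual test loops:

```c
/*
 * Illustrative sketch, not tcrypt's real code: call cond_resched()
 * inside a long benchmark loop so a non-preemptible kernel can still
 * schedule and RCU does not report a stall.  The callback, priv
 * argument, and iteration count are placeholders for this example.
 */
#include <linux/sched.h>	/* cond_resched() */

static void speed_test_loop(void (*one_op)(void *priv), void *priv,
			    unsigned long iterations)
{
	unsigned long i;

	for (i = 0; i < iterations; i++) {
		one_op(priv);		/* one timed crypto operation */
		cond_resched();		/* yield between operations */
	}
}
```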
On Thu, Nov 17, 2022 at 4:14 PM Elliott, Robert (Servers) <elliott@hpe.com> wrote:
> > -----Original Message-----
> > From: Herbert Xu <herbert@gondor.apana.org.au>
> > Sent: Wednesday, November 16, 2022 9:59 PM
> > Subject: Re: [PATCH v4 00/24] crypto: fix RCU stalls
> >
> > On Tue, Nov 15, 2022 at 10:13:18PM -0600, Robert Elliott wrote:
> ...
> > > These problems have always been lurking, but improving the
> > > loading of the x86/sha512 module led to it happening a lot
> > > during boot when using SHA-512 for module signature checking.
> >
> > Can we split this series up please? The fixes to the stalls should
> > stand separately from the changes to how modules are loaded. The
> > latter is more of an improvement while the former should be applied
> > ASAP.
>
> Yes. With the v4 patch numbers:
> [PATCH v4 01/24] crypto: tcrypt - test crc32
> [PATCH v4 02/24] crypto: tcrypt - test nhpoly1305
>
> Those ensure the changes to those hash modules are testable.
>
> [PATCH v4 03/24] crypto: tcrypt - reschedule during cycles speed
>
> That's only for tcrypt so not urgent for users, but pretty
> simple.
>
> [PATCH v4 04/24] crypto: x86/sha - limit FPU preemption
> [PATCH v4 05/24] crypto: x86/crc - limit FPU preemption
> [PATCH v4 06/24] crypto: x86/sm3 - limit FPU preemption
> [PATCH v4 07/24] crypto: x86/ghash - use u8 rather than char
> [PATCH v4 08/24] crypto: x86/ghash - restructure FPU context saving
> [PATCH v4 09/24] crypto: x86/ghash - limit FPU preemption
> [PATCH v4 10/24] crypto: x86/poly - limit FPU preemption
> [PATCH v4 11/24] crypto: x86/aegis - limit FPU preemption
> [PATCH v4 12/24] crypto: x86/sha - register all variations
> [PATCH v4 13/24] crypto: x86/sha - minimize time in FPU context
>
> That's the end of the fixes set.
>
> [PATCH v4 14/24] crypto: x86/sha - load based on CPU features
> [PATCH v4 15/24] crypto: x86/crc - load based on CPU features
> [PATCH v4 16/24] crypto: x86/sm3 - load based on CPU features
> [PATCH v4 17/24] crypto: x86/poly - load based on CPU features
> [PATCH v4 18/24] crypto: x86/ghash - load based on CPU features
> [PATCH v4 19/24] crypto: x86/aesni - avoid type conversions
> [PATCH v4 20/24] crypto: x86/ciphers - load based on CPU features
> [PATCH v4 21/24] crypto: x86 - report used CPU features via module
> [PATCH v4 22/24] crypto: x86 - report missing CPU features via module
> [PATCH v4 23/24] crypto: x86 - report suboptimal CPUs via module
> [PATCH v4 24/24] crypto: x86 - standardize module descriptions
>
> I'll put those in a new series.

Thanks. Please take into account my review feedback this time for your
next series.

Jason