| Message ID | 20240409012344.3194724-2-liaochang1@huawei.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | Rework the DAIF mask, unmask and track API |
On Tue, Apr 09, 2024 at 01:23:36AM +0000, Liao Chang wrote:

> From: Mark Brown <broonie@kernel.org>
>
> Encodings are provided for ALLINT which allow setting of ALLINT.ALLINT
> using an immediate rather than requiring that a register be loaded with
> the value to write. Since these don't currently fit within the scheme we
> have for sysreg generation, add manual encodings like we currently do for
> other similar registers such as SVCR.
>
> Since it is required that these immediate versions be encoded with xzr
> as the source register, provide asm wrappers which ensure this is the
> case.
>
> Signed-off-by: Mark Brown <broonie@kernel.org>
> ---
>  arch/arm64/include/asm/nmi.h | 27 +++++++++++++++++++++++++++

You've not provided a Signed-off-by for this, so people can't do anything
with it; please see Documentation/process/submitting-patches.rst for
details on what this is and why it's important.
Mark,

On 2024/4/9 20:28, Mark Brown wrote:
> On Tue, Apr 09, 2024 at 01:23:36AM +0000, Liao Chang wrote:
>> From: Mark Brown <broonie@kernel.org>
>>
>> Encodings are provided for ALLINT which allow setting of ALLINT.ALLINT
>> using an immediate rather than requiring that a register be loaded with
>> the value to write. Since these don't currently fit within the scheme we
>> have for sysreg generation, add manual encodings like we currently do for
>> other similar registers such as SVCR.
>>
>> Since it is required that these immediate versions be encoded with xzr
>> as the source register, provide asm wrappers which ensure this is the
>> case.
>>
>> Signed-off-by: Mark Brown <broonie@kernel.org>
>> ---
>>  arch/arm64/include/asm/nmi.h | 27 +++++++++++++++++++++++++++
>
> You've not provided a Signed-off-by for this, so people can't do anything
> with it; please see Documentation/process/submitting-patches.rst for
> details on what this is and why it's important.

Acked, thanks for the heads-up. I'll add Signed-off-by tags to the relevant
patches in the next revision, including the patches taken from your FEAT_NMI
patchset and Jinjie's contribution.
diff --git a/arch/arm64/include/asm/nmi.h b/arch/arm64/include/asm/nmi.h
new file mode 100644
index 000000000000..0c566c649485
--- /dev/null
+++ b/arch/arm64/include/asm/nmi.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2022 ARM Ltd.
+ */
+#ifndef __ASM_NMI_H
+#define __ASM_NMI_H
+
+#ifndef __ASSEMBLER__
+
+#include <linux/cpumask.h>
+
+extern bool arm64_supports_nmi(void);
+
+#endif /* !__ASSEMBLER__ */
+
+static __always_inline void _allint_clear(void)
+{
+	asm volatile(__msr_s(SYS_ALLINT_CLR, "xzr"));
+}
+
+static __always_inline void _allint_set(void)
+{
+	asm volatile(__msr_s(SYS_ALLINT_SET, "xzr"));
+}
+
+#endif
+
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 9e8999592f3a..b105773c57ca 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -167,6 +167,8 @@
  * System registers, organised loosely by encoding but grouped together
  * where the architected name contains an index. e.g. ID_MMFR<n>_EL1.
  */
+#define SYS_ALLINT_CLR			sys_reg(0, 1, 4, 0, 0)
+#define SYS_ALLINT_SET			sys_reg(0, 1, 4, 1, 0)
 #define SYS_SVCR_SMSTOP_SM_EL0		sys_reg(0, 3, 4, 2, 3)
 #define SYS_SVCR_SMSTART_SM_EL0	sys_reg(0, 3, 4, 3, 3)
 #define SYS_SVCR_SMSTOP_SMZA_EL0	sys_reg(0, 3, 4, 6, 3)
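For readers following along, here is a minimal usage sketch, not part of the
patch: the caller name is hypothetical and gating on arm64_supports_nmi() is
an assumption about how the helpers are meant to be used. It shows the new
wrappers being called to mask and unmask all interrupts:

/*
 * Illustrative sketch only, not part of the patch: a hypothetical
 * caller masking all interrupts (including superpriority/NMI ones)
 * around a short critical section with the helpers added above.
 * The arm64_supports_nmi() check is assumed to be the intended gate
 * before touching ALLINT.
 */
#include <asm/nmi.h>

static void example_allint_critical_section(void)
{
	if (!arm64_supports_nmi())
		return;

	_allint_set();		/* ALLINT.ALLINT = 1: mask */

	/* ... work that must not be interrupted ... */

	_allint_clear();	/* ALLINT.ALLINT = 0: unmask */
}

As the commit message notes, the immediate forms avoid loading a scratch
general-purpose register just to toggle the bit, and hard-coding xzr in the
wrappers satisfies the requirement that xzr be the source register for these
encodings.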