Message ID: 20231031-optimize_checksum-v9-0-ea018e69b229@rivosinc.com (mailing list archive)
Series: riscv: Add fine-tuned checksum functions
On Tue, Oct 31, 2023 at 05:18:50PM -0700, Charlie Jenkins wrote:
> Each architecture generally implements fine-tuned checksum functions to
> leverage the instruction set. This patch adds the main checksum
> functions that are used in networking.
>
> This patch makes heavy use of the Zbb extension via alternatives
> patching.
>
> To test this patch, enable the configs for KUNIT, then CHECKSUM_KUNIT
> and RISCV_CHECKSUM_KUNIT.
>
> I have attempted to make these functions as optimal as possible, but I
> have not run anything on actual riscv hardware. My performance testing
> has been limited to inspecting the assembly, running the algorithms on
> x86 hardware, and running in QEMU.
>
> ip_fast_csum is a relatively small function, so even though it is
> possible to read 64 bits at a time on compatible hardware, the
> bottleneck becomes the cleanup and setup code, and loading 32 bits at a
> time is actually faster.
>
> Relies on https://lore.kernel.org/lkml/20230920193801.3035093-1-evan@rivosinc.com/

I coulda sworn I reported build issues against the v8 of this series
that are still present in this v9. For example:
https://patchwork.kernel.org/project/linux-riscv/patch/20231031-optimize_checksum-v9-3-ea018e69b229@rivosinc.com/

Cheers,
Conor.
On Wed, Nov 01, 2023 at 11:50:46AM +0000, Conor Dooley wrote:
> On Tue, Oct 31, 2023 at 05:18:50PM -0700, Charlie Jenkins wrote:
> > Each architecture generally implements fine-tuned checksum functions to
> > leverage the instruction set. This patch adds the main checksum
> > functions that are used in networking.
> >
> > This patch makes heavy use of the Zbb extension via alternatives
> > patching.
> >
> > To test this patch, enable the configs for KUNIT, then CHECKSUM_KUNIT
> > and RISCV_CHECKSUM_KUNIT.
> >
> > I have attempted to make these functions as optimal as possible, but I
> > have not run anything on actual riscv hardware. My performance testing
> > has been limited to inspecting the assembly, running the algorithms on
> > x86 hardware, and running in QEMU.
> >
> > ip_fast_csum is a relatively small function, so even though it is
> > possible to read 64 bits at a time on compatible hardware, the
> > bottleneck becomes the cleanup and setup code, and loading 32 bits at a
> > time is actually faster.
> >
> > Relies on https://lore.kernel.org/lkml/20230920193801.3035093-1-evan@rivosinc.com/
>
> I coulda sworn I reported build issues against the v8 of this series
> that are still present in this v9. For example:
> https://patchwork.kernel.org/project/linux-riscv/patch/20231031-optimize_checksum-v9-3-ea018e69b229@rivosinc.com/
>
> Cheers,
> Conor.

You did, and I fixed the build issues. This is another instance of
Patchwork reporting the results of the previous build before the new
build completes. Patchwork was very far behind, so it took around 15
hours for the result to be ready. There are some miscellaneous warnings
in random drivers that I don't think can be attributed to this patch.

- Charlie
On Wed, Nov 01, 2023 at 10:06:26AM -0700, Charlie Jenkins wrote:
> On Wed, Nov 01, 2023 at 11:50:46AM +0000, Conor Dooley wrote:
> > On Tue, Oct 31, 2023 at 05:18:50PM -0700, Charlie Jenkins wrote:
> > > Each architecture generally implements fine-tuned checksum functions to
> > > leverage the instruction set. This patch adds the main checksum
> > > functions that are used in networking.
> > >
> > > This patch makes heavy use of the Zbb extension via alternatives
> > > patching.
> > >
> > > To test this patch, enable the configs for KUNIT, then CHECKSUM_KUNIT
> > > and RISCV_CHECKSUM_KUNIT.
> > >
> > > I have attempted to make these functions as optimal as possible, but I
> > > have not run anything on actual riscv hardware. My performance testing
> > > has been limited to inspecting the assembly, running the algorithms on
> > > x86 hardware, and running in QEMU.
> > >
> > > ip_fast_csum is a relatively small function, so even though it is
> > > possible to read 64 bits at a time on compatible hardware, the
> > > bottleneck becomes the cleanup and setup code, and loading 32 bits at
> > > a time is actually faster.
> > >
> > > Relies on https://lore.kernel.org/lkml/20230920193801.3035093-1-evan@rivosinc.com/
> >
> > I coulda sworn I reported build issues against the v8 of this series
> > that are still present in this v9. For example:
> > https://patchwork.kernel.org/project/linux-riscv/patch/20231031-optimize_checksum-v9-3-ea018e69b229@rivosinc.com/

> You did, and I fixed the build issues. This is another instance of
> Patchwork reporting the results of the previous build before the new
> build completes. Patchwork was very far behind, so it took around 15
> hours for the result to be ready.

:clown_face:

> There are some miscellaneous warnings in random drivers that I don't
> think can be attributed to this patch.

Yeah, there sometimes are warnings that seem spurious when you touch a
bunch of header files. I'm not really sure how to improve on that, since
the check should only flag what was newly introduced. My theory is that
the way we build commit A, then commit A~1, and then commit A again and
take the difference between the 2nd and 3rd builds (which should both be
partial rebuilds) is not as symmetrical as I might've thought, and that
is the source of those seemingly unrelated issues that come up from time
to time.
Each architecture generally implements fine-tuned checksum functions to
leverage the instruction set. This patch adds the main checksum
functions that are used in networking.

This patch makes heavy use of the Zbb extension via alternatives
patching.

To test this patch, enable the configs for KUNIT, then CHECKSUM_KUNIT
and RISCV_CHECKSUM_KUNIT.

I have attempted to make these functions as optimal as possible, but I
have not run anything on actual riscv hardware. My performance testing
has been limited to inspecting the assembly, running the algorithms on
x86 hardware, and running in QEMU.

ip_fast_csum is a relatively small function, so even though it is
possible to read 64 bits at a time on compatible hardware, the
bottleneck becomes the cleanup and setup code, and loading 32 bits at a
time is actually faster (see the illustrative sketch at the end of this
page).

Relies on https://lore.kernel.org/lkml/20230920193801.3035093-1-evan@rivosinc.com/

---
The algorithm proposed to replace the default csum_fold can be seen to
compute the same result by running it against all 2^32 possible inputs:

#include <stdio.h>

static inline unsigned int ror32(unsigned int word, unsigned int shift)
{
	return (word >> (shift & 31)) | (word << ((-shift) & 31));
}

/* The generic two-round fold. */
unsigned short csum_fold(unsigned int csum)
{
	unsigned int sum = csum;

	sum = (sum & 0xffff) + (sum >> 16);
	sum = (sum & 0xffff) + (sum >> 16);
	return ~sum;
}

/* The arch/arc-style fold this series proposes. */
unsigned short csum_fold_arc(unsigned int csum)
{
	return ((~csum - ror32(csum, 16)) >> 16);
}

int main(void)
{
	unsigned int start = 0x0;

	do {
		if (csum_fold(start) != csum_fold_arc(start)) {
			printf("Not the same %u\n", start);
			return -1;
		}
		start += 1;
	} while (start != 0x0);

	printf("The same\n");
	return 0;
}
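[Editor's note] For intuition about why the replacement fold agrees with
the generic one (the exhaustive program above remains the actual
evidence), the sketch below expands the identity step by step.
csum_fold_expanded and the sample values are illustrative additions, not
code from this series.

#include <stdio.h>

/*
 * In two's complement, ~a - b == ~(a + b), so with csum == (H << 16) | L:
 *
 *   ~csum - ror32(csum, 16) == ~(csum + ror32(csum, 16))
 *                           == ~((H + L) * 0x10001)
 *
 * Writing H + L == (s1 << 16) + s0, the product modulo 2^32 is
 * ((s0 + s1) << 16) + s0, so bits 31..16 hold the one-round fold
 * s0 + s1. Since H + L <= 0x1fffe, s0 + s1 never exceeds 0xffff, a
 * second fold round is unnecessary, and the complement-and-shift
 * produces exactly the two-round fold's result.
 */
static unsigned short csum_fold_expanded(unsigned int csum)
{
	unsigned int hi = csum >> 16, lo = csum & 0xffff;

	return (unsigned short)(~((hi + lo) * 0x10001u) >> 16);
}

int main(void)
{
	/* Spot-check a few inputs, including carry/wrap cases. */
	unsigned int samples[] = { 0x0, 0x1, 0xffff, 0x10000, 0x1ffff,
				   0x0001ffff, 0xdeadbeef, 0xffffffff };
	unsigned int i;

	for (i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
		printf("%#010x -> %#06x\n", samples[i],
		       csum_fold_expanded(samples[i]));
	return 0;
}

Dropping csum_fold_expanded() into the exhaustive loop above next to
csum_fold_arc() is a quick way to check that the expansion is faithful.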
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Arnd Bergmann <arnd@arndb.de>
To: Charlie Jenkins <charlie@rivosinc.com>
To: Palmer Dabbelt <palmer@dabbelt.com>
To: Conor Dooley <conor@kernel.org>
To: Samuel Holland <samuel.holland@sifive.com>
To: David Laight <David.Laight@aculab.com>
To: Xiao Wang <xiao.w.wang@intel.com>
To: Evan Green <evan@rivosinc.com>
To: linux-riscv@lists.infradead.org
To: linux-kernel@vger.kernel.org
To: linux-arch@vger.kernel.org
Signed-off-by: Charlie Jenkins <charlie@rivosinc.com>

---
Changes in v9:
- Use ror64 (Xiao)
- Move do_csum and csum_ipv6_magic headers to patch 4 (Xiao)
- Remove word "IP" from checksum headers (Xiao)
- Swap to using ifndef CONFIG_32BIT instead of ifdef CONFIG_64BIT (Xiao)
- Run no-alignment code when buff is aligned (Xiao)
- Consolidate the overlap between the two do_csum implementations into
  do_csum_common
- Link to v8: https://lore.kernel.org/r/20231027-optimize_checksum-v8-0-feb7101d128d@rivosinc.com

Changes in v8:
- Speedups of 12% without Zbb and 21% with Zbb in do_csum when the CPU
  supports fast misaligned accesses
- Various formatting updates
- Patch now relies on https://lore.kernel.org/lkml/20230920193801.3035093-1-evan@rivosinc.com/
- Link to v7: https://lore.kernel.org/r/20230919-optimize_checksum-v7-0-06c7d0ddd5d6@rivosinc.com

Changes in v7:
- Included linux/bitops.h in asm-generic/checksum.h to use ror (Conor)
- Optimized loop in do_csum (David)
- Used ror instead of shifting (David)
- Unfortunately had to reintroduce ifdefs because gcc is not smart
  enough to not throw warnings on code that will never execute
- Use ifdef instead of IS_ENABLED on __LITTLE_ENDIAN because IS_ENABLED
  does not work on that
- Only optimize for Zbb when alternatives are enabled in do_csum
- Link to v6: https://lore.kernel.org/r/20230915-optimize_checksum-v6-0-14a6cf61c618@rivosinc.com

Changes in v6:
- Fix accuracy of commit message for csum_fold
- Fix indentation
- Link to v5: https://lore.kernel.org/r/20230914-optimize_checksum-v5-0-c95b82a2757e@rivosinc.com

Changes in v5:
- Drop vector patches
- Check ZBB enabled before doing any ZBB code (Conor)
- Check endianness in IS_ENABLED
- Revert to the simpler non-tree-based version of csum_ipv6_magic since
  David pointed out that the tree-based version is not better
- Link to v4: https://lore.kernel.org/r/20230911-optimize_checksum-v4-0-77cc2ad9e9d7@rivosinc.com

Changes in v4:
- Suggestion by David Laight to use an improved checksum used in
  arch/arc
- Eliminates zero-extension on rv32, but not on rv64
- Reduces data dependency, which should improve execution speed on rv32
  and rv64
- Still passes CHECKSUM_KUNIT and RISCV_CHECKSUM_KUNIT on rv32 and rv64
  with and without Zbb
- Link to v3: https://lore.kernel.org/r/20230907-optimize_checksum-v3-0-c502d34d9d73@rivosinc.com

Changes in v3:
- Use riscv_has_extension_likely and has_vector where possible (Conor)
- Reduce ifdefs by using IS_ENABLED where possible (Conor)
- Use kernel_vector_begin in the vector code (Samuel)
- Link to v2: https://lore.kernel.org/r/20230905-optimize_checksum-v2-0-ccd658db743b@rivosinc.com

Changes in v2:
- After more benchmarking, rework functions to improve performance
- Remove tests that overlapped with the already existing checksum tests
  and make tests more extensive
- Use alternatives to activate code with Zbb and vector extensions
- Link to v1: https://lore.kernel.org/r/20230826-optimize_checksum-v1-0-937501b4522a@rivosinc.com

---
Charlie Jenkins (5):
      asm-generic: Improve csum_fold
      riscv: Add static key for misaligned accesses
      riscv: Checksum header
      riscv: Add checksum library
      riscv: Test checksum functions

 arch/riscv/Kconfig.debug              |   1 +
 arch/riscv/include/asm/checksum.h     |  92 ++++++++++
 arch/riscv/include/asm/cpufeature.h   |   3 +
 arch/riscv/kernel/cpufeature.c        |  30 ++++
 arch/riscv/lib/Kconfig.debug          |  31 ++++
 arch/riscv/lib/Makefile               |   3 +
 arch/riscv/lib/csum.c                 | 326 +++++++++++++++++++++++++++++++++
 arch/riscv/lib/riscv_checksum_kunit.c | 330 ++++++++++++++++++++++++++++++++++
 include/asm-generic/checksum.h        |   6 +-
 9 files changed, 819 insertions(+), 3 deletions(-)
---
base-commit: 8d68c506cd34a142331623fd23eb1c4e680e1955
change-id: 20230804-optimize_checksum-db145288ac21
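[Editor's note] On the cover letter's ip_fast_csum point, here is a
minimal userspace sketch of the 32-bits-at-a-time shape it describes,
under the assumptions of an aligned header and ihl given in 32-bit words
as in the kernel's ip_fast_csum(). The function name, the harness, and
the sample data are hypothetical; this is not the code from the series.

#include <stdint.h>
#include <stdio.h>

/* Accumulate the header one 32-bit load at a time into a 64-bit
 * register, then fold the carries down to 16 bits at the end. */
static uint16_t ip_csum_32bit_sketch(const void *iph, unsigned int ihl)
{
	const uint32_t *p = iph;
	uint64_t sum = 0;
	unsigned int i;

	for (i = 0; i < ihl; i++)
		sum += p[i];

	/* Fold: 64 -> 32 -> 16 bits, adding carries back in. */
	sum = (sum & 0xffffffffu) + (sum >> 32);
	sum = (sum & 0xffffffffu) + (sum >> 32);
	sum = (sum & 0xffffu) + (sum >> 16);
	sum = (sum & 0xffffu) + (sum >> 16);
	return (uint16_t)~sum;
}

int main(void)
{
	/* Arbitrary sample words standing in for a 20-byte IPv4 header
	 * whose checksum field is zeroed. */
	uint32_t hdr[5] = { 0x00450045, 0x00400000, 0x00061140,
			    0x0100007f, 0x0100007f };

	printf("checksum: %#06x\n", ip_csum_32bit_sketch(hdr, 5));
	return 0;
}

The point in the cover letter is that for a buffer this small the
prologue and the fold dominate, so wider 64-bit loads do not pay off the
extra setup they require.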