Message ID | 20231101-optimize_checksum-v10-0-a498577bb969@rivosinc.com (mailing list archive) |
---|---|
From: Charlie Jenkins <charlie@rivosinc.com>
Subject: [PATCH v10 0/5] riscv: Add fine-tuned checksum functions
Date: Wed, 01 Nov 2023 15:48:10 -0700
Message-Id: <20231101-optimize_checksum-v10-0-a498577bb969@rivosinc.com>
To: Charlie Jenkins <charlie@rivosinc.com>, Palmer Dabbelt <palmer@dabbelt.com>, Conor Dooley <conor@kernel.org>, Samuel Holland <samuel.holland@sifive.com>, David Laight <David.Laight@aculab.com>, Xiao Wang <xiao.w.wang@intel.com>, Evan Green <evan@rivosinc.com>, linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org
Cc: Paul Walmsley <paul.walmsley@sifive.com>, Albert Ou <aou@eecs.berkeley.edu>, Arnd Bergmann <arnd@arndb.de>, David Laight <david.laight@aculab.com>, Conor Dooley <conor.dooley@microchip.com>
Series | riscv: Add fine-tuned checksum functions |
Each architecture generally implements fine-tuned checksum functions that leverage its instruction set. This series adds the main checksum functions used in networking. It makes heavy use of the Zbb extension via alternatives patching.

To test this series, enable the configs for KUNIT, then CHECKSUM_KUNIT.

I have attempted to make these functions as optimal as possible, but I have not run anything on actual riscv hardware. My performance testing has been limited to inspecting the assembly, running the algorithms on x86 hardware, and running in QEMU.

ip_fast_csum is a relatively small function, so even though compatible hardware could read 64 bits at a time, the setup and cleanup code becomes the bottleneck; loading 32 bits at a time is actually faster.

Relies on https://lore.kernel.org/lkml/20230920193801.3035093-1-evan@rivosinc.com/

---

The algorithm proposed to replace the default csum_fold can be seen to compute the same result by running it on all 2^32 possible inputs:
#include <stdio.h>

static inline unsigned int ror32(unsigned int word, unsigned int shift)
{
	return (word >> (shift & 31)) | (word << ((-shift) & 31));
}

unsigned short csum_fold(unsigned int csum)
{
	unsigned int sum = csum;

	sum = (sum & 0xffff) + (sum >> 16);
	sum = (sum & 0xffff) + (sum >> 16);
	return ~sum;
}

unsigned short csum_fold_arc(unsigned int csum)
{
	return ((~csum - ror32(csum, 16)) >> 16);
}

int main(void)
{
	unsigned int start = 0x0;

	do {
		if (csum_fold(start) != csum_fold_arc(start)) {
			printf("Not the same %u\n", start);
			return -1;
		}
		start += 1;
	} while (start != 0x0);
	printf("The same\n");
	return 0;
}

Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Arnd Bergmann <arnd@arndb.de>
To: Charlie Jenkins <charlie@rivosinc.com>
To: Palmer Dabbelt <palmer@dabbelt.com>
To: Conor Dooley <conor@kernel.org>
To: Samuel Holland <samuel.holland@sifive.com>
To: David Laight <David.Laight@aculab.com>
To: Xiao Wang <xiao.w.wang@intel.com>
To: Evan Green <evan@rivosinc.com>
To: linux-riscv@lists.infradead.org
To: linux-kernel@vger.kernel.org
To: linux-arch@vger.kernel.org
Signed-off-by: Charlie Jenkins <charlie@rivosinc.com>
---
Changes in v10:
- Move tests that were riscv-specific to be arch agnostic (Arnd)
- Link to v9: https://lore.kernel.org/r/20231031-optimize_checksum-v9-0-ea018e69b229@rivosinc.com

Changes in v9:
- Use ror64 (Xiao)
- Move do_csum and csum_ipv6_magic headers to patch 4 (Xiao)
- Remove word "IP" from checksum headers (Xiao)
- Swap to using ifndef CONFIG_32BIT instead of ifdef CONFIG_64BIT (Xiao)
- Run no alignment code when buff is aligned (Xiao)
- Consolidate the overlap of the two do_csum implementations into do_csum_common
- Link to v8: https://lore.kernel.org/r/20231027-optimize_checksum-v8-0-feb7101d128d@rivosinc.com

Changes in v8:
- Speedups of 12% without Zbb and 21% with Zbb when the cpu supports fast misaligned accesses for do_csum
- Various formatting updates
- Patch now relies on
  https://lore.kernel.org/lkml/20230920193801.3035093-1-evan@rivosinc.com/
- Link to v7: https://lore.kernel.org/r/20230919-optimize_checksum-v7-0-06c7d0ddd5d6@rivosinc.com

Changes in v7:
- Included linux/bitops.h in asm-generic/checksum.h to use ror (Conor)
- Optimized loop in do_csum (David)
- Used ror instead of shifting (David)
- Unfortunately had to reintroduce ifdefs because gcc is not smart enough to not throw warnings on code that will never execute
- Use ifdef instead of IS_ENABLED on __LITTLE_ENDIAN because IS_ENABLED does not work on that
- Only optimize for zbb when alternatives is enabled in do_csum
- Link to v6: https://lore.kernel.org/r/20230915-optimize_checksum-v6-0-14a6cf61c618@rivosinc.com

Changes in v6:
- Fix accuracy of commit message for csum_fold
- Fix indentation
- Link to v5: https://lore.kernel.org/r/20230914-optimize_checksum-v5-0-c95b82a2757e@rivosinc.com

Changes in v5:
- Drop vector patches
- Check ZBB enabled before doing any ZBB code (Conor)
- Check endianness in IS_ENABLED
- Revert to the simpler non-tree based version of ipv6_csum_magic since David pointed out that the tree based version is not better.
- Link to v4: https://lore.kernel.org/r/20230911-optimize_checksum-v4-0-77cc2ad9e9d7@rivosinc.com

Changes in v4:
- Suggestion by David Laight to use an improved checksum used in arch/arc.
- Eliminates zero-extension on rv32, but not on rv64.
- Reduces data dependency which should improve execution speed on rv32 and rv64
- Still passes CHECKSUM_KUNIT and RISCV_CHECKSUM_KUNIT on rv32 and rv64 with and without zbb.
- Link to v3: https://lore.kernel.org/r/20230907-optimize_checksum-v3-0-c502d34d9d73@rivosinc.com

Changes in v3:
- Use riscv_has_extension_likely and has_vector where possible (Conor)
- Reduce ifdefs by using IS_ENABLED where possible (Conor)
- Use kernel_vector_begin in the vector code (Samuel)
- Link to v2: https://lore.kernel.org/r/20230905-optimize_checksum-v2-0-ccd658db743b@rivosinc.com

Changes in v2:
- After more benchmarking, rework functions to improve performance.
- Remove tests that overlapped with the already existing checksum tests and make tests more extensive.
- Use alternatives to activate code with Zbb and vector extensions
- Link to v1: https://lore.kernel.org/r/20230826-optimize_checksum-v1-0-937501b4522a@rivosinc.com

---
Charlie Jenkins (5):
      asm-generic: Improve csum_fold
      riscv: Add static key for misaligned accesses
      riscv: Checksum header
      riscv: Add checksum library
      kunit: Add tests for csum_ipv6_magic and ip_fast_csum

 arch/riscv/include/asm/checksum.h   |  92 ++++++++++
 arch/riscv/include/asm/cpufeature.h |   3 +
 arch/riscv/kernel/cpufeature.c      |  30 ++++
 arch/riscv/lib/Makefile             |   1 +
 arch/riscv/lib/csum.c               | 326 ++++++++++++++++++++++++++++++++++++
 include/asm-generic/checksum.h      |   6 +-
 lib/checksum_kunit.c                | 284 ++++++++++++++++++++++++++++++-
 7 files changed, 738 insertions(+), 4 deletions(-)
---
base-commit: 8d68c506cd34a142331623fd23eb1c4e680e1955
change-id: 20230804-optimize_checksum-db145288ac21