From patchwork Thu Dec 14 11:06:33 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 13492810 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 349F9C4332F for ; Thu, 14 Dec 2023 11:07:30 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=5gjvPQgG8hVBWkki8Qm0a5a8cttVbYMZXTIsgfpaoHM=; b=tjPOhZUVAamyPue/lTTfYG+aF+ Mju3BFFyAw6Y8BoWfz3H+5FcBJMA1uPOPFQYOTmuKzNTD6rfXAABN1eiRC5xiSyGgywB/ozjAan1m wuter+BOlDWGd/lmXYAE4wTwzz7v1QEB1HE4i6kbBUvH10D58ZwiIa8gM0q0Rmm7/6G7NJcuQjuZD yehMP9/wQFGYxaJmSICc7KkXqSVil/AZm7CgcCgFYpOLNqro6yhP32DXgz5Rc3dNA3O+KtZs7l/nw 3npvGx9KQm4qgWJQJli5s1ZZH/geBOI7ElJ/h1IAxAp+QdGN+lUNOqD2y/lj4TVIQYOTh/YkvT2wj DVGk9LPQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1rDjYP-00HZuO-07; Thu, 14 Dec 2023 11:06:57 +0000 Received: from mail-yw1-x1149.google.com ([2607:f8b0:4864:20::1149]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1rDjYJ-00HZsY-2N for linux-arm-kernel@lists.infradead.org; Thu, 14 Dec 2023 11:06:53 +0000 Received: by mail-yw1-x1149.google.com with SMTP id 00721157ae682-5e2fc8fe1a8so3253277b3.1 for ; Thu, 14 Dec 2023 03:06:50 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1702552009; x=1703156809; darn=lists.infradead.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=EyXxVtW+rdQsX3V7pnuyU4+nkx4xXk2D+H4JW0Q7bfM=; b=a3FBh5HO+jX4cQ/BfnDxMHVWyUwQAJ27ONdj1hyWrGjU4kEvAtEFKQZWiQnNT/Q8LO EXGY5a34T0nM+b/6q8ShsBDxLK2urpjTUF6jA1iw1tc9ZVtD/y3ZbSfzmaRU7BYnagnb 0aChm/0cKzQMVqIiuF7e61PA8jABFyEAYpZ/8aBqWHV/YNAt5YEHav2Buzhdphjg3gh9 7rT7jX/jQdPnP99ROMXOL1+6q1518mfSYSu9N8IWM3EqlyeBk5kj2CT6FhK+F3s/0sWA hsU0Ulh/HWS8PbjeAvDjmhCKuhh6ZtAanp02wW9+UjXJGExoZxNXEhmfqVnssJvNkJSi K7Ug== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1702552009; x=1703156809; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=EyXxVtW+rdQsX3V7pnuyU4+nkx4xXk2D+H4JW0Q7bfM=; b=kx9wuhs7CgtsawRToT5MPhXnOYJCcHLB6maR9U59b5ijBAmqcmLd41Z5eF7/rOOe2F Z1x6rTGeKzyvR3UixZL8cPtlv7+5VsCWFgEHpFJIO+4yphrpVzlBmnYzkfyA+ROVUJOX 99/F5SuAMcpqBAIVhSlorZpLzFWRUrpNC8F7kaXvCIidB7uyovvsPVj6a2Fj7bnqI6l/ Dh5PRp8gfCHGuG7BPEM+VcgbribWFBLnXX2fJTTx6sYjsQdCaTGgeC567kVxmueByjti KzDW7NrMbNuEk/Fu7/fKy68Osxm8urWLTP0tLizflWNEiidnFaZmrNUDjhmEpZkoWbKu DaOw== X-Gm-Message-State: AOJu0Ywa1VJgoxXt7VLFWxfIvIlIzxVAQCEXBlUDqovNoWqWdnkUKTnC UVK54m/h3Pvy92SR29SAHHFPnSLGaTY= X-Google-Smtp-Source: AGHT+IGW0HdZBurxxyqI6QmXx4AKMYKZD446PLJHAHdSqHiY2mQ2I9QfmLcwZ4od/25icUASR9vq/iRmuJQ= X-Received: from 
glider.muc.corp.google.com ([2a00:79e0:9c:201:8447:3e89:1f77:ff8a]) (user=glider job=sendgmr) by 2002:a05:690c:a8c:b0:5d3:e8b8:e1fd with SMTP id ci12-20020a05690c0a8c00b005d3e8b8e1fdmr182658ywb.3.1702552009224; Thu, 14 Dec 2023 03:06:49 -0800 (PST) Date: Thu, 14 Dec 2023 12:06:33 +0100 In-Reply-To: <20231214110639.2294687-1-glider@google.com> Mime-Version: 1.0 References: <20231214110639.2294687-1-glider@google.com> X-Mailer: git-send-email 2.43.0.472.g3155946c3a-goog Message-ID: <20231214110639.2294687-2-glider@google.com> Subject: [PATCH v10-mte 1/7] lib/bitmap: add bitmap_{read,write}() From: Alexander Potapenko To: glider@google.com, catalin.marinas@arm.com, will@kernel.org, pcc@google.com, andreyknvl@gmail.com, andriy.shevchenko@linux.intel.com, aleksander.lobakin@intel.com, linux@rasmusvillemoes.dk, yury.norov@gmail.com, alexandru.elisei@arm.com Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, eugenis@google.com, syednwaris@gmail.com, william.gray@linaro.org, Arnd Bergmann X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20231214_030651_772432_D77CA2DC X-CRM114-Status: GOOD ( 19.68 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org From: Syed Nayyar Waris The two new functions allow reading/writing values of length up to BITS_PER_LONG bits at arbitrary position in the bitmap. The code was taken from "bitops: Introduce the for_each_set_clump macro" by Syed Nayyar Waris with a number of changes and simplifications: - instead of using roundup(), which adds an unnecessary dependency on , we calculate space as BITS_PER_LONG-offset; - indentation is reduced by not using else-clauses (suggested by checkpatch for bitmap_get_value()); - bitmap_get_value()/bitmap_set_value() are renamed to bitmap_read() and bitmap_write(); - some redundant computations are omitted. Cc: Arnd Bergmann Signed-off-by: Syed Nayyar Waris Signed-off-by: William Breathitt Gray Link: https://lore.kernel.org/lkml/fe12eedf3666f4af5138de0e70b67a07c7f40338.1592224129.git.syednwaris@gmail.com/ Suggested-by: Yury Norov Co-developed-by: Alexander Potapenko Signed-off-by: Alexander Potapenko Reviewed-by: Andy Shevchenko Acked-by: Yury Norov --- v10-mte: - send this patch together with the "Implement MTE tag compression for swapped pages" Revisions v8-v12 of bitmap patches were reviewed separately from the "Implement MTE tag compression for swapped pages" series (https://lore.kernel.org/lkml/20231109151106.2385155-1-glider@google.com/) This patch was previously called "lib/bitmap: add bitmap_{set,get}_value()" (https://lore.kernel.org/lkml/20230720173956.3674987-2-glider@google.com/) v11: - rearrange whitespace as requested by Andy Shevchenko, add Reviewed-by:, update a comment v10: - update comments as requested by Andy Shevchenko v8: - as suggested by Andy Shevchenko, handle reads/writes of more than BITS_PER_LONG bits, add a note for 32-bit systems v7: - Address comments by Yury Norov, Andy Shevchenko, Rasmus Villemoes: - update code comments; - get rid of GENMASK(); - s/assign_bit/__assign_bit; - more vertical whitespace for better readability; - more compact code for bitmap_write() (now for real) v6: - As suggested by Yury Norov, do not require bitmap_read(..., 0) to return 0. 
v5: - Address comments by Yury Norov: - updated code comments and patch title/description - replace GENMASK(nbits - 1, 0) with BITMAP_LAST_WORD_MASK(nbits) - more compact bitmap_write() implementation v4: - Address comments by Andy Shevchenko and Yury Norov: - prevent passing values >= 64 to GENMASK() - fix commit authorship - change comments - check for unlikely(nbits==0) - drop unnecessary const declarations - fix kernel-doc comments - rename bitmap_{get,set}_value() to bitmap_{read,write}() --- include/linux/bitmap.h | 77 ++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 77 insertions(+) diff --git a/include/linux/bitmap.h b/include/linux/bitmap.h index 99451431e4d65..7ca0379be8c13 100644 --- a/include/linux/bitmap.h +++ b/include/linux/bitmap.h @@ -79,6 +79,10 @@ struct device; * bitmap_to_arr64(buf, src, nbits) Copy nbits from buf to u64[] dst * bitmap_get_value8(map, start) Get 8bit value from map at start * bitmap_set_value8(map, value, start) Set 8bit value to map at start + * bitmap_read(map, start, nbits) Read an nbits-sized value from + * map at start + * bitmap_write(map, value, start, nbits) Write an nbits-sized value to + * map at start * * Note, bitmap_zero() and bitmap_fill() operate over the region of * unsigned longs, that is, bits behind bitmap till the unsigned long @@ -636,6 +640,79 @@ static inline void bitmap_set_value8(unsigned long *map, unsigned long value, map[index] |= value << offset; } +/** + * bitmap_read - read a value of n-bits from the memory region + * @map: address to the bitmap memory region + * @start: bit offset of the n-bit value + * @nbits: size of value in bits, nonzero, up to BITS_PER_LONG + * + * Returns: value of @nbits bits located at the @start bit offset within the + * @map memory region. For @nbits = 0 and @nbits > BITS_PER_LONG the return + * value is undefined. + */ +static inline unsigned long bitmap_read(const unsigned long *map, + unsigned long start, + unsigned long nbits) +{ + size_t index = BIT_WORD(start); + unsigned long offset = start % BITS_PER_LONG; + unsigned long space = BITS_PER_LONG - offset; + unsigned long value_low, value_high; + + if (unlikely(!nbits || nbits > BITS_PER_LONG)) + return 0; + + if (space >= nbits) + return (map[index] >> offset) & BITMAP_LAST_WORD_MASK(nbits); + + value_low = map[index] & BITMAP_FIRST_WORD_MASK(start); + value_high = map[index + 1] & BITMAP_LAST_WORD_MASK(start + nbits); + return (value_low >> offset) | (value_high << space); +} + +/** + * bitmap_write - write n-bit value within a memory region + * @map: address to the bitmap memory region + * @value: value to write, clamped to nbits + * @start: bit offset of the n-bit value + * @nbits: size of value in bits, nonzero, up to BITS_PER_LONG. + * + * bitmap_write() behaves as-if implemented as @nbits calls of __assign_bit(), + * i.e. bits beyond @nbits are ignored: + * + * for (bit = 0; bit < nbits; bit++) + * __assign_bit(start + bit, bitmap, val & BIT(bit)); + * + * For @nbits == 0 and @nbits > BITS_PER_LONG no writes are performed. + */ +static inline void bitmap_write(unsigned long *map, unsigned long value, + unsigned long start, unsigned long nbits) +{ + size_t index; + unsigned long offset; + unsigned long space; + unsigned long mask; + bool fit; + + if (unlikely(!nbits || nbits > BITS_PER_LONG)) + return; + + mask = BITMAP_LAST_WORD_MASK(nbits); + value &= mask; + offset = start % BITS_PER_LONG; + space = BITS_PER_LONG - offset; + fit = space >= nbits; + index = BIT_WORD(start); + + map[index] &= (fit ? 
(~(mask << offset)) : ~BITMAP_FIRST_WORD_MASK(start)); + map[index] |= value << offset; + if (fit) + return; + + map[index + 1] &= BITMAP_FIRST_WORD_MASK(start + nbits); + map[index + 1] |= (value >> space); +} + #endif /* __ASSEMBLY__ */ #endif /* __LINUX_BITMAP_H */ From patchwork Thu Dec 14 11:06:34 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 13492809 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 1AEC0C4167D for ; Thu, 14 Dec 2023 11:07:26 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=LWJyT3zLl8TkQknHAKs2L7m363Q2XR0cMp0YDV9/imw=; b=ZbzheWpkpD9ygOrklwgEIZJO43 dwIOGOQ9cHrc92X3zhFRdW8o9JUzgY/OQpR95iQlzcaPuyUvpGfgCDtVgscL74JC9YSewKo6KA4th /H/p67SvwFoYJwLrBBBRkTbGYeBv5DguRck43QC0MH/oRok5Iupy6SFjXgEEiVK/FtrnJFaxSq/Fs GQp+7897UHg10Lg8ljk/YdSfr8CYLFmgMp1tGJUVnspDxm6lkTRSX+eTaXhCoALvTp7tO5j08dS/X xU/MXZicCS3x68+3PluG+Bss4kGL5p9wfER4/TAdqjUpi0QCc4t2s7TRSyPD0f3sc3juVWeto3tvp WY35uSCQ==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1rDjYP-00HZuj-1r; Thu, 14 Dec 2023 11:06:57 +0000 Received: from mail-yw1-x1149.google.com ([2607:f8b0:4864:20::1149]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1rDjYL-00HZt1-1A for linux-arm-kernel@lists.infradead.org; Thu, 14 Dec 2023 11:06:55 +0000 Received: by mail-yw1-x1149.google.com with SMTP id 00721157ae682-5ddd64f83a4so63853947b3.0 for ; Thu, 14 Dec 2023 03:06:52 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1702552012; x=1703156812; darn=lists.infradead.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=hls/cSgyQjjQ+ZYtpE7D/6PwXwlUmCuSXktuDqFik/0=; b=XlKwMgTee1JM0G9hb9Q1Df2e02W+dDO+R+R5rjFsAMbU2nB12pzqOrJsjjPcegogHR Pdf9sc+qdJ2PVdWCM8ey36RJBkSazrXx3/98F29mm8PNsXPHqg3NnQ6VdTbGw3ECAoqo FbfxwM4xAcrZ1lxNVnCVoAcANJf55cOlPLfccyAB6adUCDALz0ccIY/CKTkVIZvBt9aL xBhg2sH88CBNpYH3RJ4M4DmmHFBw4rcJQHHvRdMnnzk/KatlIzmu+UlgOe4AWcPq73YX 4oSFJTL3KYM/AU5b9o8sD6FGgm4LuSGjM8Bd6CEbog4IyfD9DcWRfSdIeiKGSpIbe8XE 85Sw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1702552012; x=1703156812; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=hls/cSgyQjjQ+ZYtpE7D/6PwXwlUmCuSXktuDqFik/0=; b=aTTugurAoBq7lq/rw+6+gCr9dk/eqTb6mbqG16IFuJsxBJlzPDNGsUjyF33NWVVfge UGZ+FdrIow4BfzohgjOK/KmfAwmdnWlEak9514du39HFXSxZGTYySrVGD/+UcHJNTDxI FyHdNSQEuwnQ8id7GcdyjZ9qppJnoszZKopAbXEBxSjBbEdtH8/KdL6HD3VklO+gEkZH yto+oqKgJlGE3WBwcHtR/wCJqR20/Mx69fGqaxqMcpKpXjEsk7OJc6B19wBD9osnhO3z 
0kIwA0hKaGIJQFW3AzRUWo5cssTim+Qjdy7Zvy0jWbB99qAgdi/aDhtD5nJgOogUYW0g 0mHw== X-Gm-Message-State: AOJu0Yzd9tqrxd9SdQgX/Wg7QPZ8FtrjwifSvVHmCHJJs3tRXYMgYgRG UGxD6olvicU8/1nMesUvutV8E2Csia4= X-Google-Smtp-Source: AGHT+IEjpQMwFT68NRnvlcTcCrNZBvxWm62UXqwNPKA1jIGJeOIM5XLelcRGDEhW83hxKme6Xq8vC7rpCok= X-Received: from glider.muc.corp.google.com ([2a00:79e0:9c:201:8447:3e89:1f77:ff8a]) (user=glider job=sendgmr) by 2002:a81:fe05:0:b0:5e3:5f02:360a with SMTP id j5-20020a81fe05000000b005e35f02360amr16012ywn.9.1702552011910; Thu, 14 Dec 2023 03:06:51 -0800 (PST) Date: Thu, 14 Dec 2023 12:06:34 +0100 In-Reply-To: <20231214110639.2294687-1-glider@google.com> Mime-Version: 1.0 References: <20231214110639.2294687-1-glider@google.com> X-Mailer: git-send-email 2.43.0.472.g3155946c3a-goog Message-ID: <20231214110639.2294687-3-glider@google.com> Subject: [PATCH v10-mte 2/7] lib/test_bitmap: add tests for bitmap_{read,write}() From: Alexander Potapenko To: glider@google.com, catalin.marinas@arm.com, will@kernel.org, pcc@google.com, andreyknvl@gmail.com, andriy.shevchenko@linux.intel.com, aleksander.lobakin@intel.com, linux@rasmusvillemoes.dk, yury.norov@gmail.com, alexandru.elisei@arm.com Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, eugenis@google.com, syednwaris@gmail.com, william.gray@linaro.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20231214_030653_400938_372E4CF6 X-CRM114-Status: GOOD ( 23.69 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Add basic tests ensuring that values can be added at arbitrary positions of the bitmap, including those spanning into the adjacent unsigned longs. Two new performance tests, test_bitmap_read_perf() and test_bitmap_write_perf(), can be used to assess future performance improvements of bitmap_read() and bitmap_write(): [ 0.431119][ T1] test_bitmap: Time spent in test_bitmap_read_perf: 615253 [ 0.433197][ T1] test_bitmap: Time spent in test_bitmap_write_perf: 916313 (numbers from a Intel(R) Xeon(R) Gold 6154 CPU @ 3.00GHz machine running QEMU). 
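
For readers unfamiliar with the new API, here is a minimal usage sketch (illustration only, not part of the patch; it assumes a kernel context where the bitmap_read()/bitmap_write() helpers from patch 1/7 are available):

#include <linux/bitmap.h>

/* Illustration only: write and read back a value that straddles a word. */
static unsigned long bitmap_read_write_example(void)
{
	DECLARE_BITMAP(map, 128);

	bitmap_zero(map, 128);
	/* Store a 7-bit value at bit offset 60: it spans two unsigned longs. */
	bitmap_write(map, 0x2f, 60, 7);
	return bitmap_read(map, 60, 7);	/* returns 0x2f */
}

The tests below exercise exactly this kind of round trip over many offsets and widths.
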
Signed-off-by: Alexander Potapenko Reviewed-by: Andy Shevchenko Acked-by: Yury Norov --- v10-mte: - send this patch together with the "Implement MTE tag compression for swapped pages" Revisions v8-v12 of bitmap patches were reviewed separately from the "Implement MTE tag compression for swapped pages" series (https://lore.kernel.org/lkml/20231109151106.2385155-1-glider@google.com/) This patch was previously called "lib/test_bitmap: add tests for bitmap_{set,get}_value()" (https://lore.kernel.org/lkml/20230720173956.3674987-3-glider@google.com/) and "lib/test_bitmap: add tests for bitmap_{set,get}_value_unaligned" (https://lore.kernel.org/lkml/20230713125706.2884502-3-glider@google.com/) v12: - as suggested by Alexander Lobakin, replace expect_eq_uint() with expect_eq_ulong() and a cast v9: - use WRITE_ONCE() to prevent optimizations in test_bitmap_read_perf() - update patch description v8: - as requested by Andy Shevchenko, add tests for reading/writing sizes > BITS_PER_LONG v7: - as requested by Yury Norov, add performance tests for bitmap_read() and bitmap_write() v6: - use bitmap API to initialize test bitmaps - as requested by Yury Norov, do not check the return value of bitmap_read(..., 0) - fix a compiler warning on 32-bit systems v5: - update patch title - address Yury Norov's comments: - rename the test cases - factor out test_bitmap_write_helper() to test writing over different background patterns; - add a test case copying a nontrivial value bit-by-bit; - drop volatile v4: - Address comments by Andy Shevchenko: added Reviewed-by: and a link to the previous discussion - Address comments by Yury Norov: - expand the bitmap to catch more corner cases - add code testing that bitmap_set_value() does not touch adjacent bits - add code testing the nbits==0 case - rename bitmap_{get,set}_value() to bitmap_{read,write}() v3: - switch to using bitmap_{set,get}_value() - change the expected bit pattern in test_set_get_value(), as the test was incorrectly assuming 0 is the LSB. --- lib/test_bitmap.c | 179 ++++++++++++++++++++++++++++++++++++++++++++-- 1 file changed, 172 insertions(+), 7 deletions(-) diff --git a/lib/test_bitmap.c b/lib/test_bitmap.c index 65f22c2578b06..46c0154680772 100644 --- a/lib/test_bitmap.c +++ b/lib/test_bitmap.c @@ -60,18 +60,17 @@ static const unsigned long exp3_1_0[] __initconst = { }; static bool __init -__check_eq_uint(const char *srcfile, unsigned int line, - const unsigned int exp_uint, unsigned int x) +__check_eq_ulong(const char *srcfile, unsigned int line, + const unsigned long exp_ulong, unsigned long x) { - if (exp_uint != x) { - pr_err("[%s:%u] expected %u, got %u\n", - srcfile, line, exp_uint, x); + if (exp_ulong != x) { + pr_err("[%s:%u] expected %lu, got %lu\n", + srcfile, line, exp_ulong, x); return false; } return true; } - static bool __init __check_eq_bitmap(const char *srcfile, unsigned int line, const unsigned long *exp_bmap, const unsigned long *bmap, @@ -185,7 +184,8 @@ __check_eq_str(const char *srcfile, unsigned int line, result; \ }) -#define expect_eq_uint(...) __expect_eq(uint, ##__VA_ARGS__) +#define expect_eq_ulong(...) __expect_eq(ulong, ##__VA_ARGS__) +#define expect_eq_uint(x, y) expect_eq_ulong((unsigned int)(x), (unsigned int)(y)) #define expect_eq_bitmap(...) __expect_eq(bitmap, ##__VA_ARGS__) #define expect_eq_pbl(...) __expect_eq(pbl, ##__VA_ARGS__) #define expect_eq_u32_array(...) 
__expect_eq(u32_array, ##__VA_ARGS__) @@ -1245,6 +1245,168 @@ static void __init test_bitmap_const_eval(void) BUILD_BUG_ON(~var != ~BIT(25)); } +/* + * Test bitmap should be big enough to include the cases when start is not in + * the first word, and start+nbits lands in the following word. + */ +#define TEST_BIT_LEN (1000) + +/* + * Helper function to test bitmap_write() overwriting the chosen byte pattern. + */ +static void __init test_bitmap_write_helper(const char *pattern) +{ + DECLARE_BITMAP(bitmap, TEST_BIT_LEN); + DECLARE_BITMAP(exp_bitmap, TEST_BIT_LEN); + DECLARE_BITMAP(pat_bitmap, TEST_BIT_LEN); + unsigned long w, r, bit; + int i, n, nbits; + + /* + * Only parse the pattern once and store the result in the intermediate + * bitmap. + */ + bitmap_parselist(pattern, pat_bitmap, TEST_BIT_LEN); + + /* + * Check that writing a single bit does not accidentally touch the + * adjacent bits. + */ + for (i = 0; i < TEST_BIT_LEN; i++) { + bitmap_copy(bitmap, pat_bitmap, TEST_BIT_LEN); + bitmap_copy(exp_bitmap, pat_bitmap, TEST_BIT_LEN); + for (bit = 0; bit <= 1; bit++) { + bitmap_write(bitmap, bit, i, 1); + __assign_bit(i, exp_bitmap, bit); + expect_eq_bitmap(exp_bitmap, bitmap, + TEST_BIT_LEN); + } + } + + /* Ensure writing 0 bits does not change anything. */ + bitmap_copy(bitmap, pat_bitmap, TEST_BIT_LEN); + bitmap_copy(exp_bitmap, pat_bitmap, TEST_BIT_LEN); + for (i = 0; i < TEST_BIT_LEN; i++) { + bitmap_write(bitmap, ~0UL, i, 0); + expect_eq_bitmap(exp_bitmap, bitmap, TEST_BIT_LEN); + } + + for (nbits = BITS_PER_LONG; nbits >= 1; nbits--) { + w = IS_ENABLED(CONFIG_64BIT) ? 0xdeadbeefdeadbeefUL + : 0xdeadbeefUL; + w >>= (BITS_PER_LONG - nbits); + for (i = 0; i <= TEST_BIT_LEN - nbits; i++) { + bitmap_copy(bitmap, pat_bitmap, TEST_BIT_LEN); + bitmap_copy(exp_bitmap, pat_bitmap, TEST_BIT_LEN); + for (n = 0; n < nbits; n++) + __assign_bit(i + n, exp_bitmap, w & BIT(n)); + bitmap_write(bitmap, w, i, nbits); + expect_eq_bitmap(exp_bitmap, bitmap, TEST_BIT_LEN); + r = bitmap_read(bitmap, i, nbits); + expect_eq_ulong(r, w); + } + } +} + +static void __init test_bitmap_read_write(void) +{ + unsigned char *pattern[3] = {"", "all:1/2", "all"}; + DECLARE_BITMAP(bitmap, TEST_BIT_LEN); + unsigned long zero_bits = 0, bits_per_long = BITS_PER_LONG; + unsigned long val; + int i, pi; + + /* + * Reading/writing zero bits should not crash the kernel. + * READ_ONCE() prevents constant folding. + */ + bitmap_write(NULL, 0, 0, READ_ONCE(zero_bits)); + /* Return value of bitmap_read() is undefined here. */ + bitmap_read(NULL, 0, READ_ONCE(zero_bits)); + + /* + * Reading/writing more than BITS_PER_LONG bits should not crash the + * kernel. READ_ONCE() prevents constant folding. + */ + bitmap_write(NULL, 0, 0, READ_ONCE(bits_per_long) + 1); + /* Return value of bitmap_read() is undefined here. */ + bitmap_read(NULL, 0, READ_ONCE(bits_per_long) + 1); + + /* + * Ensure that bitmap_read() reads the same value that was previously + * written, and two consequent values are correctly merged. + * The resulting bit pattern is asymmetric to rule out possible issues + * with bit numeration order. 
+ */ + for (i = 0; i < TEST_BIT_LEN - 7; i++) { + bitmap_zero(bitmap, TEST_BIT_LEN); + + bitmap_write(bitmap, 0b10101UL, i, 5); + val = bitmap_read(bitmap, i, 5); + expect_eq_ulong(0b10101UL, val); + + bitmap_write(bitmap, 0b101UL, i + 5, 3); + val = bitmap_read(bitmap, i + 5, 3); + expect_eq_ulong(0b101UL, val); + + val = bitmap_read(bitmap, i, 8); + expect_eq_ulong(0b10110101UL, val); + } + + for (pi = 0; pi < ARRAY_SIZE(pattern); pi++) + test_bitmap_write_helper(pattern[pi]); +} + +static void __init test_bitmap_read_perf(void) +{ + DECLARE_BITMAP(bitmap, TEST_BIT_LEN); + unsigned int cnt, nbits, i; + unsigned long val; + ktime_t time; + + bitmap_fill(bitmap, TEST_BIT_LEN); + time = ktime_get(); + for (cnt = 0; cnt < 5; cnt++) { + for (nbits = 1; nbits <= BITS_PER_LONG; nbits++) { + for (i = 0; i < TEST_BIT_LEN; i++) { + if (i + nbits > TEST_BIT_LEN) + break; + /* + * Prevent the compiler from optimizing away the + * bitmap_read() by using its value. + */ + WRITE_ONCE(val, bitmap_read(bitmap, i, nbits)); + } + } + } + time = ktime_get() - time; + pr_err("Time spent in %s:\t%llu\n", __func__, time); +} + +static void __init test_bitmap_write_perf(void) +{ + DECLARE_BITMAP(bitmap, TEST_BIT_LEN); + unsigned int cnt, nbits, i; + unsigned long val = 0xfeedface; + ktime_t time; + + bitmap_zero(bitmap, TEST_BIT_LEN); + time = ktime_get(); + for (cnt = 0; cnt < 5; cnt++) { + for (nbits = 1; nbits <= BITS_PER_LONG; nbits++) { + for (i = 0; i < TEST_BIT_LEN; i++) { + if (i + nbits > TEST_BIT_LEN) + break; + bitmap_write(bitmap, val, i, nbits); + } + } + } + time = ktime_get() - time; + pr_err("Time spent in %s:\t%llu\n", __func__, time); +} + +#undef TEST_BIT_LEN + static void __init selftest(void) { test_zero_clear(); @@ -1261,6 +1423,9 @@ static void __init selftest(void) test_bitmap_cut(); test_bitmap_print_buf(); test_bitmap_const_eval(); + test_bitmap_read_write(); + test_bitmap_read_perf(); + test_bitmap_write_perf(); test_find_nth_bit(); test_for_each_set_bit(); From patchwork Thu Dec 14 11:06:35 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 13492814 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 6F40AC4332F for ; Thu, 14 Dec 2023 11:07:48 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=zntz2Yb0dsYlDD05tr3HAOAR+H0BQBAYmw8aYscXFnI=; b=JZAib6Q9ln+u8KgJMgzG3Zghhh z6gwPkr3XlWXf4Z9xzOlfwPw4AnELNRBxFWJu4P125f3/y2Geb1ColP9ux2WKEb8AuRPSiOX6bbiD wB3JfjmYXBEBtuiZq7t4jdhoIaVFSOzu/M9vr8eI6k716SdMyY4tq+2VGE3CWJMPjzSok3ryb6tPo a6IHmKRkhyiBvFSvs/acvhfYxNhtHPu9ssB7O0ZpWjM+K4mCT4RHrR5iT07CiWPHhMeA9wXUKKBJp 9PBE9i7LlKQrIydaoqPiO2ll5pvcFiDO0q+8ZuKel2WL5EnojkumARLJt9ld8KUn2S9JyKLmEVuYh sn0RD4Qw==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 
(Red Hat Linux)) id 1rDjYj-00Ha4l-33; Thu, 14 Dec 2023 11:07:17 +0000 Received: from mail-yb1-xb4a.google.com ([2607:f8b0:4864:20::b4a]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1rDjYW-00HZu3-1w for linux-arm-kernel@lists.infradead.org; Thu, 14 Dec 2023 11:07:07 +0000 Received: by mail-yb1-xb4a.google.com with SMTP id 3f1490d57ef6-dbcd4696590so1754365276.2 for ; Thu, 14 Dec 2023 03:06:55 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1702552014; x=1703156814; darn=lists.infradead.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=g2GSxzhcZ07xCv7d1L9tozvjfnh/gD6wLbTwZXMSaKU=; b=EGDdEkIMbCzWRdKEXhh2HGfQ0fN6gfqMfbRlJhAVBf8XB09oPPVCtxL+Kj7PDH3ooc TepuYVzJEABh8eTtVbIh5QLVE1sgOm11yrcEudExiH+cFDCatrX1bxigNfiAtojXRUvs SyWwplCYM/4NfhIoy/EUJKI6Rr56QaXNIzCgP/2r/dc4/e6+8Bs56UoXDSuSVK9NMQ3U kMXTPHU+QAZ3Y9EBqt1B007cLD4reSQniKiToVxTJlagJrvYD2TT6fh8H9H7u7jIAoLp 5qusDqKqnfgE1FwgpA86FM6PBwTjMdaTtcMDrltD9JOWegx3l0YzSw5GhSqCW+p4rjZJ mAwQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1702552014; x=1703156814; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=g2GSxzhcZ07xCv7d1L9tozvjfnh/gD6wLbTwZXMSaKU=; b=CVAdgAeGLbKhuuugteiSTfrQG/VAfwJxsuxu6OoO1JbIZAyTQOzcHe+nUf9ZX13XQK iM3t7kGOV4J6bltrztm0wPdpW6HeukqhCzabY6WLo5C1zlzcC4y0Tsqyb42jGgFcqcrG UcWUSw1z1OKdeFj2JF4eR36SLQTps2J/KXuqQlqAf6SHslOSS2OwC33i4xWPv+ANe+Bh hhjM7O2Zz2UO5OZ7ch4lUhpm7ObnYZ5k940PslkQlXprwkYZKsjfKl2T32SV4vpMxtGk ozCqAP7M6xA8tIp8Pj+a/CnO39kBCF+ZE5KheLNA13Nl2qYpZVyhpjVwlr7s0JzA+k8n 6HFQ== X-Gm-Message-State: AOJu0Yz/28pBVgJRULmj1LWCFXQXwfsq6Gpbl0EZOyNvydUJB3KzHyjK awepTJx/Xu3fGvvSRIs7eZzCFU6Kmcg= X-Google-Smtp-Source: AGHT+IGGkSzNTGa/Zh9JtgjWjYaXHf5NHT9/GP0mNOA+eSkSEyP37yRz72mdweuJoDQ9Wlcg+oAYfOGLUZc= X-Received: from glider.muc.corp.google.com ([2a00:79e0:9c:201:8447:3e89:1f77:ff8a]) (user=glider job=sendgmr) by 2002:a05:6902:1345:b0:db7:e75c:24c1 with SMTP id g5-20020a056902134500b00db7e75c24c1mr6642ybu.9.1702552014550; Thu, 14 Dec 2023 03:06:54 -0800 (PST) Date: Thu, 14 Dec 2023 12:06:35 +0100 In-Reply-To: <20231214110639.2294687-1-glider@google.com> Mime-Version: 1.0 References: <20231214110639.2294687-1-glider@google.com> X-Mailer: git-send-email 2.43.0.472.g3155946c3a-goog Message-ID: <20231214110639.2294687-4-glider@google.com> Subject: [PATCH v10-mte 3/7] lib/test_bitmap: use pr_info() for non-error messages From: Alexander Potapenko To: glider@google.com, catalin.marinas@arm.com, will@kernel.org, pcc@google.com, andreyknvl@gmail.com, andriy.shevchenko@linux.intel.com, aleksander.lobakin@intel.com, linux@rasmusvillemoes.dk, yury.norov@gmail.com, alexandru.elisei@arm.com Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, eugenis@google.com, syednwaris@gmail.com, william.gray@linaro.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20231214_030704_658434_AED5D52F X-CRM114-Status: GOOD ( 11.29 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org pr_err() messages may be treated as errors by some log readers, so let us only use them for test 
failures. For non-error messages, replace them with pr_info(). Suggested-by: Alexander Lobakin Signed-off-by: Alexander Potapenko Acked-by: Yury Norov --- v10-mte: - send this patch together with the "Implement MTE tag compression for swapped pages" --- lib/test_bitmap.c | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/lib/test_bitmap.c b/lib/test_bitmap.c index 46c0154680772..a6e92cf5266af 100644 --- a/lib/test_bitmap.c +++ b/lib/test_bitmap.c @@ -507,7 +507,7 @@ static void __init test_bitmap_parselist(void) } if (ptest.flags & PARSE_TIME) - pr_err("parselist: %d: input is '%s' OK, Time: %llu\n", + pr_info("parselist: %d: input is '%s' OK, Time: %llu\n", i, ptest.in, time); #undef ptest @@ -546,7 +546,7 @@ static void __init test_bitmap_printlist(void) goto out; } - pr_err("bitmap_print_to_pagebuf: input is '%s', Time: %llu\n", buf, time); + pr_info("bitmap_print_to_pagebuf: input is '%s', Time: %llu\n", buf, time); out: kfree(buf); kfree(bmap); @@ -624,7 +624,7 @@ static void __init test_bitmap_parse(void) } if (test.flags & PARSE_TIME) - pr_err("parse: %d: input is '%s' OK, Time: %llu\n", + pr_info("parse: %d: input is '%s' OK, Time: %llu\n", i, test.in, time); } } @@ -1380,7 +1380,7 @@ static void __init test_bitmap_read_perf(void) } } time = ktime_get() - time; - pr_err("Time spent in %s:\t%llu\n", __func__, time); + pr_info("Time spent in %s:\t%llu\n", __func__, time); } static void __init test_bitmap_write_perf(void) @@ -1402,7 +1402,7 @@ static void __init test_bitmap_write_perf(void) } } time = ktime_get() - time; - pr_err("Time spent in %s:\t%llu\n", __func__, time); + pr_info("Time spent in %s:\t%llu\n", __func__, time); } #undef TEST_BIT_LEN From patchwork Thu Dec 14 11:06:36 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 13492812 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 59CD8C4167D for ; Thu, 14 Dec 2023 11:07:40 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=lists.infradead.org; s=bombadil.20210309; h=Sender: Content-Transfer-Encoding:Content-Type:List-Subscribe:List-Help:List-Post: List-Archive:List-Unsubscribe:List-Id:Cc:To:From:Subject:Message-ID: References:Mime-Version:In-Reply-To:Date:Reply-To:Content-ID: Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc :Resent-Message-ID:List-Owner; bh=okRCZYWCeRa/Qr4Pmn0WZkaV10Wju8YqRpZ1lZ6pDEw=; b=VUhIZeW57zFeGtzDPXcKAWW1qO 0WV1TrFZ6WRDGdleUbC5cK1BiFaIAfLnTh9QyG757U1n8yiHzvltoevz9l9+f6KfBM2fJyrkdxHgJ vbJoVIyC1iOH9X/ZHsLpAaT5E9YU9EtaKhfB78iaiTCJ8F2+gov2/Q7aJzZIRN/GtsP8kTa+IqdE+ N+1cBteFfuzn0mFbmGIjR7ScG4EuGRUs/qh9ZnAqqVQV2g/ntXXPdHLOw8KKvFACJQwWc444CxYfR VZhtjP0242PlCfmo3nhdO1cakger8wJAOsTA4mVf6Jr+u/Ho2mFj15xHm3mHOQxfplZ1HCWd5bD/h EgyGS2BA==; Received: from localhost ([::1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.96 #2 (Red Hat Linux)) id 1rDjYZ-00HZyR-0s; Thu, 14 Dec 2023 11:07:07 +0000 Received: from mail-yw1-x1149.google.com ([2607:f8b0:4864:20::1149]) by bombadil.infradead.org with esmtps (Exim 4.96 #2 (Red Hat Linux)) id 1rDjYQ-00HZui-0h for linux-arm-kernel@lists.infradead.org; Thu, 14 Dec 2023 
11:07:01 +0000 Received: by mail-yw1-x1149.google.com with SMTP id 00721157ae682-5e2aa53a692so19875517b3.2 for ; Thu, 14 Dec 2023 03:06:57 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1702552017; x=1703156817; darn=lists.infradead.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=ZQcjn3HKRIFjRjPtUqZGi0kpXnoJ+H0gJj/u7XtaxV0=; b=F2kImZxQcxnxs2nYvg8sZRy3EqRV1LXk8wKEYQDWE3rIzLPuRgFHsKX4Hi7ufmVgyX ioYOU92J4Rik/lW8yjmT2go8+1UecULGFMopjTknkbf/KuP5PxoJbsuZ/UgbjN2Ru+gG yu60HLBB3XVfkhUwIDelBxf53Snk58WPP4pjkSWCtL2Y3J13MLU5KDBHb59SMPkOb02S g2RmsiCaw2e/kV2zaoqlEC+2+7bDu1TtRToBIgqXveaSczsS/nnGigP9czHF0SqY2Jh3 HpikpnUtjfwRTBsgnRZJ3UGCqjuojqtXe4rCRzSapEyr0Vj7cDRf2JsQrRI2PiWukc0y 9nKg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1702552017; x=1703156817; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=ZQcjn3HKRIFjRjPtUqZGi0kpXnoJ+H0gJj/u7XtaxV0=; b=EUMZDG69z+TxeGEhduE4bZLJ/2iwF1gicOingQ04VpOOD+iGNG9fADLslCNGyHKEM1 6EVLCzNrXVUqa1gkAm28dxkUHufeAvWRv/jKxvNosXv4qy2Iifq1xyZAQJr97XbynA4S iMk2eSV/8ITW5oj1qbtikQc3n653jdzu4w7NLp0P6PSRHerrE/Dz+drXiFQKuBseN9eW Q/PH5cgmNi8MmGdkHYoaMLg+mnR80D6/tEt/OQGk8IsQNvWNfxz/v2boolCqFfz5Lp0b WuxQwfdAdreSxvTIOlbgKhG4NQgW81/16QNxxRwAy+VLtTge70OLRGrh3bjBAF+Wfbyv MtsA== X-Gm-Message-State: AOJu0YzNrdUNxDcIly3LwNgApsKjGuSJ9oJm46AkEUMY6L1TpLhJ/0Lx Gt94FLrTwceJhJ3OBP2NhSoNnlrpjFA= X-Google-Smtp-Source: AGHT+IHZUxlk2ydWjkVM3eEnz5UgSsZtEHF9ttwaUfzN+X+521ZJ4sG4g8taV4ZW5sGtfxhkEsk7VUFm2Qs= X-Received: from glider.muc.corp.google.com ([2a00:79e0:9c:201:8447:3e89:1f77:ff8a]) (user=glider job=sendgmr) by 2002:a05:690c:a81:b0:5df:4a9b:fb6c with SMTP id ci1-20020a05690c0a8100b005df4a9bfb6cmr109547ywb.3.1702552017196; Thu, 14 Dec 2023 03:06:57 -0800 (PST) Date: Thu, 14 Dec 2023 12:06:36 +0100 In-Reply-To: <20231214110639.2294687-1-glider@google.com> Mime-Version: 1.0 References: <20231214110639.2294687-1-glider@google.com> X-Mailer: git-send-email 2.43.0.472.g3155946c3a-goog Message-ID: <20231214110639.2294687-5-glider@google.com> Subject: [PATCH v10-mte 4/7] arm64: mte: implement CONFIG_ARM64_MTE_COMP From: Alexander Potapenko To: glider@google.com, catalin.marinas@arm.com, will@kernel.org, pcc@google.com, andreyknvl@gmail.com, andriy.shevchenko@linux.intel.com, aleksander.lobakin@intel.com, linux@rasmusvillemoes.dk, yury.norov@gmail.com, alexandru.elisei@arm.com Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, eugenis@google.com, syednwaris@gmail.com, william.gray@linaro.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20231214_030658_258500_703467FE X-CRM114-Status: GOOD ( 35.50 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org The config implements the algorithm compressing memory tags for ARM MTE during swapping. The algorithm is based on RLE and specifically targets buffers of tags corresponding to a single page. In many cases a buffer can be compressed into 63 bits, making it possible to store it without additional memory allocation. 
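
To illustrate the RLE step the paragraph above refers to (a simplified sketch, not the kernel code; the real helper is mte_tags_to_ranges() in mtecomp.c below), a 128-byte tag buffer for a 4K page can be broken into (tag, length) ranges like this:

#include <stddef.h>

/*
 * Simplified sketch for 4K pages: 128 tag bytes hold 256 4-bit tags,
 * high nibble first.  The caller provides r_tags[]/r_sizes[] with room
 * for up to 256 entries (worst case: every tag differs from its
 * neighbour).  Returns the number of ranges produced.
 */
static size_t demo_tags_to_ranges(const unsigned char tags[128],
				  unsigned char *r_tags,
				  unsigned short *r_sizes)
{
	unsigned char prev = tags[0] >> 4;
	size_t n = 0;
	unsigned int i;

	r_tags[0] = prev;
	r_sizes[0] = 0;
	for (i = 0; i < 256; i++) {
		unsigned char cur = (i & 1) ? (tags[i / 2] & 0xf)
					    : (tags[i / 2] >> 4);
		if (cur == prev) {
			r_sizes[n]++;
		} else {
			prev = cur;
			n++;
			r_tags[n] = cur;
			r_sizes[n] = 1;
		}
	}
	return n + 1;
}

If at most 6 ranges remain (for 4K pages), the (tag, size) pairs plus the index of the largest range fit into the 63-bit inline encoding that mte_compress_to_ulong() produces; otherwise the tags are stored uncompressed.
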
Suggested-by: Evgenii Stepanov Signed-off-by: Alexander Potapenko Acked-by: Catalin Marinas --- v10-mte: - added Catalin's Acked-by: v8: - As suggested by Catalin Marinas, only compress tags if they can be stored inline. This simplifies the code drastically. - Update the documentation. - Split off patches introducing bitmap_read()/bitmap_write(). v6: - shuffle bits in inline handles so that they can't be confused with canonical pointers; - use kmem_cache_zalloc() to allocate compressed storage - correctly handle range size overflow - minor documentation fixes, clarify the special cases v5: - make code PAGE_SIZE-agnostic, remove hardcoded constants, updated the docs - implement debugfs interface - Address comments by Andy Shevchenko: - update description of mtecomp.c - remove redundant assignments, simplify mte_tags_to_ranges() - various code simplifications - introduce mtecomp.h - add :export: to Documentation/arch/arm64/mte-tag-compression.rst v4: - Addressed comments by Andy Shevchenko: - expanded "MTE" to "Memory Tagging Extension" in Kconfig - fixed kernel-doc comments, moved them to C source - changed variables to unsigned where applicable - some code simplifications, fewer unnecessary assignments - added the mte_largest_idx_bits() helper - added namespace prefixes to all functions - added missing headers (but removed bits.h) - Addressed comments by Yury Norov: - removed test-only functions from mtecomp.h - dropped the algoritm name (all functions are now prefixed with "mte") - added more comments - got rid of MTE_RANGES_INLINE - renamed bitmap_{get,set}_value() to bitmap_{read,write}() - moved the big comment explaining the algorithm to Documentation/arch/arm64/mte-tag-compression.rst, expanded it, add a link to it from Documentation/arch/arm64/index.rst - removed hardcoded ranges from mte_alloc_size()/mte_size_to_ranges() v3: - Addressed comments by Andy Shevchenko: - use bitmap_{set,get}_value() writte by Syed Nayyar Waris - switched to unsigned long everywhere (fewer casts) - simplified the code, removed redundant checks - dropped ea0_compress_inline() - added bit size constants and helpers to access the bitmap - explicitly initialize all compressed sizes in ea0_compress_to_buf() - initialize all handle bits v2: - as suggested by Yury Norov, switched from struct bitq (which is not needed anymore) to - add missing symbol exports --- Documentation/arch/arm64/index.rst | 1 + .../arch/arm64/mte-tag-compression.rst | 154 +++++++++++ arch/arm64/Kconfig | 11 + arch/arm64/include/asm/mtecomp.h | 39 +++ arch/arm64/mm/Makefile | 1 + arch/arm64/mm/mtecomp.c | 257 ++++++++++++++++++ arch/arm64/mm/mtecomp.h | 12 + 7 files changed, 475 insertions(+) create mode 100644 Documentation/arch/arm64/mte-tag-compression.rst create mode 100644 arch/arm64/include/asm/mtecomp.h create mode 100644 arch/arm64/mm/mtecomp.c create mode 100644 arch/arm64/mm/mtecomp.h diff --git a/Documentation/arch/arm64/index.rst b/Documentation/arch/arm64/index.rst index d08e924204bf1..bf6c1583233a9 100644 --- a/Documentation/arch/arm64/index.rst +++ b/Documentation/arch/arm64/index.rst @@ -19,6 +19,7 @@ ARM64 Architecture legacy_instructions memory memory-tagging-extension + mte-tag-compression perf pointer-authentication ptdump diff --git a/Documentation/arch/arm64/mte-tag-compression.rst b/Documentation/arch/arm64/mte-tag-compression.rst new file mode 100644 index 0000000000000..8fe6b51a9db6d --- /dev/null +++ b/Documentation/arch/arm64/mte-tag-compression.rst @@ -0,0 +1,154 @@ +.. 
SPDX-License-Identifier: GPL-2.0 + +================================================== +Tag Compression for Memory Tagging Extension (MTE) +================================================== + +This document describes the algorithm used to compress memory tags used by the +ARM Memory Tagging Extension (MTE). + +Introduction +============ + +MTE assigns tags to memory pages: for 4K pages those tags occupy 128 bytes +(256 4-bit tags each corresponding to a 16-byte MTE granule), for 16K pages - +512 bytes, for 64K pages - 2048 bytes. By default, MTE carves out 3.125% (1/16) +of the available physical memory to store the tags. + +When MTE pages are saved to swap, their tags need to be stored in the kernel +memory. If the system swap is used heavily, these tags may take a substantial +portion of the physical memory. To reduce memory waste, ``CONFIG_ARM64_MTE_COMP`` +allows the kernel to store the tags in compressed form. + +Implementation details +====================== + +The algorithm attempts to compress an array of ``MTE_PAGE_TAG_STORAGE`` +tag bytes into a byte sequence that can be stored in an 8-byte pointer. If that +is not possible, the data is stored uncompressed. + +Tag manipulation and storage +---------------------------- + +Tags for swapped pages are stored in an XArray that maps swap entries to 63-bit +values (see ``arch/arm64/mm/mteswap.c``). Bit 0 of these values indicates how +their contents should be treated: + + - 0: value is a pointer to an uncompressed buffer allocated with kmalloc() + (always the case if ``CONFIG_ARM64_MTE_COMP=n``) with the highest bit set + to 0; + - 1: value contains compressed data. + +``arch/arm64/include/asm/mtecomp.h`` declares the following functions that +manipulate with tags: + +- mte_compress() - compresses the given ``MTE_PAGE_TAG_STORAGE``-byte ``tags`` + buffer into a pointer; +- mte_decompress() - decompresses the tags from a pointer; +- mte_is_compressed() - returns ``true`` iff the pointer passed to it should be + treated as compressed data. + +Tag compression +--------------- + +The compression algorithm is a variation of RLE (run-length encoding) and works +as follows (we will be considering 4K pages and 128-byte tag buffers, but the +same approach scales to 16K and 64K pages): + +1. The input array of 128 (``MTE_PAGE_TAG_STORAGE``) bytes is transformed into + tag ranges (two arrays: ``r_tags[]`` containing tag values and ``r_sizes[]`` + containing range lengths) by mte_tags_to_ranges(). Note that + ``r_sizes[]`` sums up to 256 (``MTE_GRANULES_PER_PAGE``). + + If ``r_sizes[]`` consists of a single element + (``{ MTE_GRANULES_PER_PAGE }``), the corresponding range is split into two + halves, i.e.:: + + r_sizes_new[2] = { MTE_GRANULES_PER_PAGE/2, MTE_GRANULES_PER_PAGE/2 }; + r_tags_new[2] = { r_tags[0], r_tags[0] }; + +2. The number of the largest element of ``r_sizes[]`` is stored in + ``largest_idx``. The element itself is thrown away from ``r_sizes[]``, + because it can be reconstructed from the sum of the remaining elements. Note + that now none of the remaining ``r_sizes[]`` elements exceeds + ``MTE_GRANULES_PER_PAGE/2``. + +3. If the number ``N`` of ranges does not exceed ``6``, the ranges can be + compressed into 64 bits. 
This is done by storing the following values packed + into the pointer (``i`` means a ````-bit unsigned integer) + treated as a bitmap (see ``include/linux/bitmap.h``):: + + bit 0 : (always 1) : i1 + bits 1-3 : largest_idx : i3 + bits 4-27 : r_tags[0..5] : i4 x 6 + bits 28-62 : r_sizes[0..4] : i7 x 5 + bit 63 : (always 0) : i1 + + If N is less than 6, ``r_tags`` and ``r_sizes`` are padded up with zero + values. The unused bits in the pointer, including bit 63, are also set to 0, + so the compressed data can be stored in XArray. + + Range size of ``MTE_GRANULES_PER_PAGE/2`` (at most one) does not fit into + i7 and will be written as 0. This case is handled separately by the + decompressing procedure. + +Tag decompression +----------------- + +The decompression algorithm performs the steps below. + +1. Read the lowest bit of the data from the input buffer and check that it is 1, + otherwise bail out. + +2. Read ``largest_idx``, ``r_tags[]`` and ``r_sizes[]`` from the + input buffer. + + If ``largest_idx`` is zero, and all ``r_sizes[]`` are zero, set + ``r_sizes[0] = MTE_GRANULES_PER_PAGE/2``. + + Calculate the removed largest element of ``r_sizes[]`` as + ``largest = 256 - sum(r_sizes)`` and insert it into ``r_sizes`` at + position ``largest_idx``. + +6. For each ``r_sizes[i] > 0``, add a 4-bit value ``r_tags[i]`` to the output + buffer ``r_sizes[i]`` times. + + +Why these numbers? +------------------ + +To be able to reconstruct ``N`` tag ranges from the compressed data, we need to +store the indicator bit together with ``largest_idx``, ``r_tags[N]``, and +``r_sizes[N-1]`` in 63 bits. +Knowing that the sizes do not exceed ``MTE_PAGE_TAG_STORAGE``, each of them can be +packed into ``S = ilog2(MTE_PAGE_TAG_STORAGE)`` bits, whereas a single tag occupies +4 bits. + +It is evident that the number of ranges that can be stored in 63 bits is +strictly less than 8, therefore we only need 3 bits to store ``largest_idx``. + +The maximum values of ``N`` so that the number ``1 + 3 + N * 4 + (N-1) * S`` of +storage bits does not exceed 63, are shown in the table below:: + + +-----------+-----------------+----+---+-------------------+ + | Page size | Tag buffer size | S | N | Storage bits | + +-----------+-----------------+----+---+-------------------+ + | 4 KB | 128 B | 7 | 6 | 63 = 1+3+6*4+5*7 | + | 16 KB | 512 B | 9 | 5 | 60 = 1+3+5*4+4*9 | + | 64 KB | 2048 B | 11 | 4 | 53 = 1+3+4*4+3*11 | + +-----------+-----------------+----+---+-------------------+ + +Note +---- + +Tag compression and decompression implicitly rely on the fixed MTE tag size +(4 bits) and number of tags per page. Should these values change, the algorithm +may need to be revised. + + +Programming Interface +===================== + + .. kernel-doc:: arch/arm64/include/asm/mtecomp.h + .. kernel-doc:: arch/arm64/mm/mtecomp.c + :export: diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 7b071a00425d2..5f4d4b49a512e 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -2078,6 +2078,17 @@ config ARM64_EPAN if the cpu does not implement the feature. endmenu # "ARMv8.7 architectural features" +config ARM64_MTE_COMP + bool "Tag compression for ARM64 Memory Tagging Extension" + default y + depends on ARM64_MTE + help + Enable tag compression support for ARM64 Memory Tagging Extension. + + Tag buffers corresponding to swapped RAM pages are compressed using + RLE to conserve heap memory. In the common case compressed tags + occupy 2.5x less memory. 
+ config ARM64_SVE bool "ARM Scalable Vector Extension support" default y diff --git a/arch/arm64/include/asm/mtecomp.h b/arch/arm64/include/asm/mtecomp.h new file mode 100644 index 0000000000000..b9a3a921a38d4 --- /dev/null +++ b/arch/arm64/include/asm/mtecomp.h @@ -0,0 +1,39 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#ifndef __ASM_MTECOMP_H +#define __ASM_MTECOMP_H + +#include + +/** + * mte_is_compressed() - check if the supplied pointer contains compressed tags. + * @ptr: pointer returned by kmalloc() or mte_compress(). + * + * Returns: true iff bit 0 of @ptr is 1, which is only possible if @ptr was + * returned by mte_is_compressed(). + */ +static inline bool mte_is_compressed(void *ptr) +{ + return ((unsigned long)ptr & 1); +} + +#if defined(CONFIG_ARM64_MTE_COMP) + +void *mte_compress(u8 *tags); +bool mte_decompress(void *handle, u8 *tags); + +#else + +static inline void *mte_compress(u8 *tags) +{ + return NULL; +} + +static inline bool mte_decompress(void *data, u8 *tags) +{ + return false; +} + +#endif // CONFIG_ARM64_MTE_COMP + +#endif // __ASM_MTECOMP_H diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile index dbd1bc95967d0..46778f6dd83c2 100644 --- a/arch/arm64/mm/Makefile +++ b/arch/arm64/mm/Makefile @@ -10,6 +10,7 @@ obj-$(CONFIG_TRANS_TABLE) += trans_pgd.o obj-$(CONFIG_TRANS_TABLE) += trans_pgd-asm.o obj-$(CONFIG_DEBUG_VIRTUAL) += physaddr.o obj-$(CONFIG_ARM64_MTE) += mteswap.o +obj-$(CONFIG_ARM64_MTE_COMP) += mtecomp.o KASAN_SANITIZE_physaddr.o += n obj-$(CONFIG_KASAN) += kasan_init.o diff --git a/arch/arm64/mm/mtecomp.c b/arch/arm64/mm/mtecomp.c new file mode 100644 index 0000000000000..c948921525030 --- /dev/null +++ b/arch/arm64/mm/mtecomp.c @@ -0,0 +1,257 @@ +// SPDX-License-Identifier: GPL-2.0-only + +/* + * MTE tag compression algorithm. + * See Documentation/arch/arm64/mte-tag-compression.rst for more details. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include + +#include "mtecomp.h" + +#define MTE_BITS_PER_LARGEST_IDX 3 +/* Range size cannot exceed MTE_GRANULES_PER_PAGE / 2. */ +#define MTE_BITS_PER_SIZE (ilog2(MTE_GRANULES_PER_PAGE) - 1) + +/* + * See Documentation/arch/arm64/mte-tag-compression.rst for details on how the + * maximum number of ranges is calculated. + */ +#if defined(CONFIG_ARM64_4K_PAGES) +#define MTE_MAX_RANGES 6 +#elif defined(CONFIG_ARM64_16K_PAGES) +#define MTE_MAX_RANGES 5 +#else +#define MTE_MAX_RANGES 4 +#endif + +/** + * mte_tags_to_ranges() - break @tags into arrays of tag ranges. + * @tags: MTE_GRANULES_PER_PAGE-byte array containing MTE tags. + * @out_tags: u8 array to store the tag of every range. + * @out_sizes: unsigned short array to store the size of every range. + * @out_len: length of @out_tags and @out_sizes (output parameter, initially + * equal to lengths of out_tags[] and out_sizes[]). + * + * This function is exported for testing purposes. + */ +void mte_tags_to_ranges(u8 *tags, u8 *out_tags, unsigned short *out_sizes, + size_t *out_len) +{ + u8 prev_tag = tags[0] / 16; /* First tag in the array. */ + unsigned int cur_idx = 0, i, j; + u8 cur_tag; + + memset(out_tags, 0, array_size(*out_len, sizeof(*out_tags))); + memset(out_sizes, 0, array_size(*out_len, sizeof(*out_sizes))); + + out_tags[cur_idx] = prev_tag; + for (i = 0; i < MTE_GRANULES_PER_PAGE; i++) { + j = i % 2; + cur_tag = j ? 
(tags[i / 2] % 16) : (tags[i / 2] / 16); + if (cur_tag == prev_tag) { + out_sizes[cur_idx]++; + } else { + cur_idx++; + prev_tag = cur_tag; + out_tags[cur_idx] = prev_tag; + out_sizes[cur_idx] = 1; + } + } + *out_len = cur_idx + 1; +} +EXPORT_SYMBOL_NS(mte_tags_to_ranges, MTECOMP); + +/** + * mte_ranges_to_tags() - fill @tags using given tag ranges. + * @r_tags: u8[] containing the tag of every range. + * @r_sizes: unsigned short[] containing the size of every range. + * @r_len: length of @r_tags and @r_sizes. + * @tags: MTE_GRANULES_PER_PAGE-byte array to write the tags to. + * + * This function is exported for testing purposes. + */ +void mte_ranges_to_tags(u8 *r_tags, unsigned short *r_sizes, size_t r_len, + u8 *tags) +{ + unsigned int i, j, pos = 0; + u8 prev; + + for (i = 0; i < r_len; i++) { + for (j = 0; j < r_sizes[i]; j++) { + if (pos % 2) + tags[pos / 2] = (prev << 4) | r_tags[i]; + else + prev = r_tags[i]; + pos++; + } + } +} +EXPORT_SYMBOL_NS(mte_ranges_to_tags, MTECOMP); + +static void mte_bitmap_write(unsigned long *bitmap, unsigned long value, + unsigned long *pos, unsigned long bits) +{ + bitmap_write(bitmap, value, *pos, bits); + *pos += bits; +} + +/* Compress ranges into an unsigned long. */ +static void mte_compress_to_ulong(size_t len, u8 *tags, unsigned short *sizes, + unsigned long *result) +{ + unsigned long bit_pos = 0; + unsigned int largest_idx, i; + unsigned short largest = 0; + + for (i = 0; i < len; i++) { + if (sizes[i] > largest) { + largest = sizes[i]; + largest_idx = i; + } + } + /* Bit 1 in position 0 indicates compressed data. */ + mte_bitmap_write(result, 1, &bit_pos, 1); + mte_bitmap_write(result, largest_idx, &bit_pos, + MTE_BITS_PER_LARGEST_IDX); + for (i = 0; i < len; i++) + mte_bitmap_write(result, tags[i], &bit_pos, MTE_TAG_SIZE); + if (len == 1) { + /* + * We are compressing MTE_GRANULES_PER_PAGE of identical tags. + * Split it into two ranges containing + * MTE_GRANULES_PER_PAGE / 2 tags, so that it falls into the + * special case described below. + */ + mte_bitmap_write(result, tags[0], &bit_pos, MTE_TAG_SIZE); + i = 2; + } else { + i = len; + } + for (; i < MTE_MAX_RANGES; i++) + mte_bitmap_write(result, 0, &bit_pos, MTE_TAG_SIZE); + /* + * Most of the time sizes[i] fits into MTE_BITS_PER_SIZE, apart from a + * special case when: + * len = 2; + * sizes = { MTE_GRANULES_PER_PAGE / 2, MTE_GRANULES_PER_PAGE / 2}; + * In this case largest_idx will be set to 0, and the size written to + * the bitmap will be also 0. + */ + for (i = 0; i < len; i++) { + if (i != largest_idx) + mte_bitmap_write(result, sizes[i], &bit_pos, + MTE_BITS_PER_SIZE); + } + for (i = len; i < MTE_MAX_RANGES; i++) + mte_bitmap_write(result, 0, &bit_pos, MTE_BITS_PER_SIZE); +} + +/** + * mte_compress() - compress the given tag array. + * @tags: MTE_GRANULES_PER_PAGE-byte array to read the tags from. + * + * Attempts to compress the user-supplied tag array. + * + * Returns: compressed data or NULL. 
+ */ +void *mte_compress(u8 *tags) +{ + unsigned short *r_sizes; + void *result = NULL; + u8 *r_tags; + size_t r_len; + + r_sizes = kmalloc_array(MTE_GRANULES_PER_PAGE, sizeof(unsigned short), + GFP_KERNEL); + r_tags = kmalloc(MTE_GRANULES_PER_PAGE, GFP_KERNEL); + if (!r_sizes || !r_tags) + goto ret; + r_len = MTE_GRANULES_PER_PAGE; + mte_tags_to_ranges(tags, r_tags, r_sizes, &r_len); + if (r_len <= MTE_MAX_RANGES) + mte_compress_to_ulong(r_len, r_tags, r_sizes, + (unsigned long *)&result); +ret: + kfree(r_tags); + kfree(r_sizes); + return result; +} +EXPORT_SYMBOL_NS(mte_compress, MTECOMP); + +static unsigned long mte_bitmap_read(const unsigned long *bitmap, + unsigned long *pos, unsigned long bits) +{ + unsigned long start = *pos; + + *pos += bits; + return bitmap_read(bitmap, start, bits); +} + +/** + * mte_decompress() - decompress the tag array from the given pointer. + * @data: pointer returned by @mte_compress() + * @tags: MTE_GRANULES_PER_PAGE-byte array to write the tags to. + * + * Reads the compressed data and writes it into the user-supplied tag array. + * + * Returns: true on success, false if the passed data is uncompressed. + */ +bool mte_decompress(void *data, u8 *tags) +{ + unsigned short r_sizes[MTE_MAX_RANGES]; + u8 r_tags[MTE_MAX_RANGES]; + unsigned int largest_idx, i; + unsigned long bit_pos = 0; + unsigned long *bitmap; + unsigned short sum; + size_t max_ranges; + + if (!mte_is_compressed(data)) + return false; + + bitmap = (unsigned long *)&data; + max_ranges = MTE_MAX_RANGES; + /* Skip the leading bit indicating the inline case. */ + mte_bitmap_read(bitmap, &bit_pos, 1); + largest_idx = + mte_bitmap_read(bitmap, &bit_pos, MTE_BITS_PER_LARGEST_IDX); + if (largest_idx >= MTE_MAX_RANGES) + return false; + + for (i = 0; i < max_ranges; i++) + r_tags[i] = mte_bitmap_read(bitmap, &bit_pos, MTE_TAG_SIZE); + for (i = 0, sum = 0; i < max_ranges; i++) { + if (i == largest_idx) + continue; + r_sizes[i] = + mte_bitmap_read(bitmap, &bit_pos, MTE_BITS_PER_SIZE); + /* + * Special case: tag array consists of two ranges of + * `MTE_GRANULES_PER_PAGE / 2` tags. + */ + if ((largest_idx == 0) && (i == 1) && (r_sizes[i] == 0)) + r_sizes[i] = MTE_GRANULES_PER_PAGE / 2; + if (!r_sizes[i]) { + max_ranges = i; + break; + } + sum += r_sizes[i]; + } + if (sum >= MTE_GRANULES_PER_PAGE) + return false; + r_sizes[largest_idx] = MTE_GRANULES_PER_PAGE - sum; + mte_ranges_to_tags(r_tags, r_sizes, max_ranges, tags); + return true; +} +EXPORT_SYMBOL_NS(mte_decompress, MTECOMP); diff --git a/arch/arm64/mm/mtecomp.h b/arch/arm64/mm/mtecomp.h new file mode 100644 index 0000000000000..b94cf0384f2af --- /dev/null +++ b/arch/arm64/mm/mtecomp.h @@ -0,0 +1,12 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#ifndef ARCH_ARM64_MM_MTECOMP_H_ +#define ARCH_ARM64_MM_MTECOMP_H_ + +/* Functions exported from mtecomp.c for test_mtecomp.c. 
 */
+void mte_tags_to_ranges(u8 *tags, u8 *out_tags, unsigned short *out_sizes,
+			size_t *out_len);
+void mte_ranges_to_tags(u8 *r_tags, unsigned short *r_sizes, size_t r_len,
+			u8 *tags);
+
+#endif // ARCH_ARM64_MM_MTECOMP_H_

From patchwork Thu Dec 14 11:06:37 2023
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 13492813
Date: Thu, 14 Dec 2023 12:06:37 +0100
In-Reply-To: <20231214110639.2294687-1-glider@google.com>
References: <20231214110639.2294687-1-glider@google.com>
Message-ID: <20231214110639.2294687-6-glider@google.com>
Subject: [PATCH v10-mte 5/7] arm64: mte: add a test for MTE tags compression
From: Alexander Potapenko
To: glider@google.com, catalin.marinas@arm.com, will@kernel.org, pcc@google.com, andreyknvl@gmail.com, andriy.shevchenko@linux.intel.com, aleksander.lobakin@intel.com, linux@rasmusvillemoes.dk, yury.norov@gmail.com, alexandru.elisei@arm.com
Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, eugenis@google.com, syednwaris@gmail.com, william.gray@linaro.org

Ensure that tag sequences containing alternating values are compressed to
buffers of the expected size and correctly decompressed afterwards.

Signed-off-by: Alexander Potapenko
Acked-by: Catalin Marinas
---
v10-mte:
 - added Catalin's Acked-by:

v9:
 - minor changes to Kconfig description

v8:
 - adapt to the simplified compression algorithm

v6:
 - add test_decompress_invalid() to ensure invalid handles are ignored;
 - add test_upper_bits(), which is a regression test for a case where an
   inline handle looked like an out-of-line one;
 - add test_compress_nonzero() to ensure a full nonzero tag array is
   compressed correctly;
 - add test_two_ranges() to test cases when the input buffer is divided
   into two ranges.

v5:
 - remove hardcoded constants, add test setup/teardown;
 - support 16- and 64K pages;
 - replace nested if-clauses with expected_size_from_ranges();
 - call mte_release_handle() after tests that perform
   compression/decompression;
 - address comments by Andy Shevchenko:
   - fix include order;
   - use mtecomp.h instead of function prototypes.
v4: - addressed comments by Andy Shevchenko: - expanded MTE to "Memory Tagging Extension" in Kconfig - changed signed variables to unsigned where applicable - added missing header dependencies - addressed comments by Yury Norov: - moved test-only declarations from mtecomp.h into this test - switched to the new "mte"-prefixed function names, dropped the mentions of "EA0" - added test_tag_to_ranges_n() v3: - addressed comments by Andy Shevchenko in another patch: - switched from u64 to unsigned long - added MODULE_IMPORT_NS(MTECOMP) - fixed includes order --- arch/arm64/Kconfig | 11 ++ arch/arm64/mm/Makefile | 1 + arch/arm64/mm/test_mtecomp.c | 364 +++++++++++++++++++++++++++++++++++ 3 files changed, 376 insertions(+) create mode 100644 arch/arm64/mm/test_mtecomp.c diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 5f4d4b49a512e..6a1397a96f2f0 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -2089,6 +2089,17 @@ config ARM64_MTE_COMP RLE to conserve heap memory. In the common case compressed tags occupy 2.5x less memory. +config ARM64_MTE_COMP_KUNIT_TEST + tristate "Test tag compression for ARM64 Memory Tagging Extension" if !KUNIT_ALL_TESTS + default KUNIT_ALL_TESTS + depends on KUNIT && ARM64_MTE_COMP + help + Test MTE compression algorithm enabled by CONFIG_ARM64_MTE_COMP. + + Ensure that certain tag sequences containing alternating values can + be compressed into pointer-size values and correctly decompressed + afterwards. + config ARM64_SVE bool "ARM Scalable Vector Extension support" default y diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile index 46778f6dd83c2..170dc62b010b9 100644 --- a/arch/arm64/mm/Makefile +++ b/arch/arm64/mm/Makefile @@ -11,6 +11,7 @@ obj-$(CONFIG_TRANS_TABLE) += trans_pgd-asm.o obj-$(CONFIG_DEBUG_VIRTUAL) += physaddr.o obj-$(CONFIG_ARM64_MTE) += mteswap.o obj-$(CONFIG_ARM64_MTE_COMP) += mtecomp.o +obj-$(CONFIG_ARM64_MTE_COMP_KUNIT_TEST) += test_mtecomp.o KASAN_SANITIZE_physaddr.o += n obj-$(CONFIG_KASAN) += kasan_init.o diff --git a/arch/arm64/mm/test_mtecomp.c b/arch/arm64/mm/test_mtecomp.c new file mode 100644 index 0000000000000..e8aeb7607ff41 --- /dev/null +++ b/arch/arm64/mm/test_mtecomp.c @@ -0,0 +1,364 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Test cases for MTE tags compression algorithm. + */ + +#include +#include +#include +#include +#include + +#include + +#include + +#include "mtecomp.h" + +/* Per-test storage allocated in mtecomp_test_init(). */ +struct test_data { + u8 *tags, *dtags; + unsigned short *r_sizes; + size_t r_len; + u8 *r_tags; +}; + +/* + * Split td->tags to ranges stored in td->r_tags, td->r_sizes, td->r_len, + * then convert those ranges back to tags stored in td->dtags. + */ +static void tags_to_ranges_to_tags_helper(struct kunit *test) +{ + struct test_data *td = test->priv; + + mte_tags_to_ranges(td->tags, td->r_tags, td->r_sizes, &td->r_len); + mte_ranges_to_tags(td->r_tags, td->r_sizes, td->r_len, td->dtags); + KUNIT_EXPECT_EQ(test, memcmp(td->tags, td->dtags, MTE_PAGE_TAG_STORAGE), + 0); +} + +/* + * Test that mte_tags_to_ranges() produces a single range for a zero-filled tag + * buffer. 
+ */ +static void test_tags_to_ranges_zero(struct kunit *test) +{ + struct test_data *td = test->priv; + + memset(td->tags, 0, MTE_PAGE_TAG_STORAGE); + tags_to_ranges_to_tags_helper(test); + + KUNIT_EXPECT_EQ(test, td->r_len, 1); + KUNIT_EXPECT_EQ(test, td->r_tags[0], 0); + KUNIT_EXPECT_EQ(test, td->r_sizes[0], MTE_GRANULES_PER_PAGE); +} + +/* + * Test that a small number of different tags is correctly transformed into + * ranges. + */ +static void test_tags_to_ranges_simple(struct kunit *test) +{ + struct test_data *td = test->priv; + const u8 ex_tags[] = { 0xa, 0x0, 0xa, 0xb, 0x0 }; + const unsigned short ex_sizes[] = { 1, 2, 2, 1, + MTE_GRANULES_PER_PAGE - 6 }; + + memset(td->tags, 0, MTE_PAGE_TAG_STORAGE); + td->tags[0] = 0xa0; + td->tags[1] = 0x0a; + td->tags[2] = 0xab; + tags_to_ranges_to_tags_helper(test); + + KUNIT_EXPECT_EQ(test, td->r_len, 5); + KUNIT_EXPECT_EQ(test, memcmp(td->r_tags, ex_tags, sizeof(ex_tags)), 0); + KUNIT_EXPECT_EQ(test, memcmp(td->r_sizes, ex_sizes, sizeof(ex_sizes)), + 0); +} + +/* Test that repeated 0xa0 byte produces MTE_GRANULES_PER_PAGE ranges of length 1. */ +static void test_tags_to_ranges_repeated(struct kunit *test) +{ + struct test_data *td = test->priv; + + memset(td->tags, 0xa0, MTE_PAGE_TAG_STORAGE); + tags_to_ranges_to_tags_helper(test); + + KUNIT_EXPECT_EQ(test, td->r_len, MTE_GRANULES_PER_PAGE); +} + +/* Generate a buffer that will contain @nranges of tag ranges. */ +static void gen_tag_range_helper(u8 *tags, int nranges) +{ + unsigned int i; + + memset(tags, 0, MTE_PAGE_TAG_STORAGE); + if (nranges > 1) { + nranges--; + for (i = 0; i < nranges / 2; i++) + tags[i] = 0xab; + if (nranges % 2) + tags[nranges / 2] = 0xa0; + } +} + +/* + * Test that mte_tags_to_ranges()/mte_ranges_to_tags() work for various + * r_len values. + */ +static void test_tag_to_ranges_n(struct kunit *test) +{ + struct test_data *td = test->priv; + unsigned int i, j, sum; + + for (i = 1; i <= MTE_GRANULES_PER_PAGE; i++) { + gen_tag_range_helper(td->tags, i); + tags_to_ranges_to_tags_helper(test); + sum = 0; + for (j = 0; j < td->r_len; j++) + sum += td->r_sizes[j]; + KUNIT_EXPECT_EQ(test, sum, MTE_GRANULES_PER_PAGE); + } +} + +/* + * Check that the tag buffer in test->priv can be compressed and decompressed + * without changes. + */ +static void *compress_decompress_helper(struct kunit *test) +{ + struct test_data *td = test->priv; + void *handle; + + handle = mte_compress(td->tags); + KUNIT_EXPECT_EQ(test, (unsigned long)handle & BIT_ULL(63), 0); + if (handle) { + KUNIT_EXPECT_TRUE(test, mte_decompress(handle, td->dtags)); + KUNIT_EXPECT_EQ(test, memcmp(td->tags, td->dtags, MTE_PAGE_TAG_STORAGE), + 0); + } + return handle; +} + +/* Test that a zero-filled array is compressed into inline storage. */ +static void test_compress_zero(struct kunit *test) +{ + struct test_data *td = test->priv; + void *handle; + + memset(td->tags, 0, MTE_PAGE_TAG_STORAGE); + handle = compress_decompress_helper(test); + /* Tags are stored inline. */ + KUNIT_EXPECT_TRUE(test, mte_is_compressed(handle)); +} + +/* Test that a 0xaa-filled array is compressed into inline storage. */ +static void test_compress_nonzero(struct kunit *test) +{ + struct test_data *td = test->priv; + void *handle; + + memset(td->tags, 0xaa, MTE_PAGE_TAG_STORAGE); + handle = compress_decompress_helper(test); + /* Tags are stored inline. */ + KUNIT_EXPECT_TRUE(test, mte_is_compressed(handle)); +} + +/* + * Test that two tag ranges are compressed into inline storage. 
+ * + * This also covers a special case where both ranges contain + * `MTE_GRANULES_PER_PAGE / 2` tags and overflow the designated range size. + */ +static void test_two_ranges(struct kunit *test) +{ + struct test_data *td = test->priv; + void *handle; + unsigned int i; + size_t r_len = 2; + unsigned char r_tags[2] = { 0xe, 0x0 }; + unsigned short r_sizes[2]; + + for (i = 1; i < MTE_GRANULES_PER_PAGE; i++) { + r_sizes[0] = i; + r_sizes[1] = MTE_GRANULES_PER_PAGE - i; + mte_ranges_to_tags(r_tags, r_sizes, r_len, td->tags); + handle = compress_decompress_helper(test); + KUNIT_EXPECT_TRUE(test, mte_is_compressed(handle)); + } +} + +/* + * Test that a very small number of tag ranges ends up compressed into 8 bytes. + */ +static void test_compress_simple(struct kunit *test) +{ + struct test_data *td = test->priv; + void *handle; + + memset(td->tags, 0, MTE_PAGE_TAG_STORAGE); + td->tags[0] = 0xa0; + td->tags[1] = 0x0a; + + handle = compress_decompress_helper(test); + /* Tags are stored inline. */ + KUNIT_EXPECT_TRUE(test, mte_is_compressed(handle)); +} + +/* + * Test that a buffer containing @nranges ranges compresses into @exp_size + * bytes and decompresses into the original tag sequence. + */ +static void compress_range_helper(struct kunit *test, int nranges, + bool exp_inl) +{ + struct test_data *td = test->priv; + void *handle; + + gen_tag_range_helper(td->tags, nranges); + handle = compress_decompress_helper(test); + KUNIT_EXPECT_EQ(test, mte_is_compressed(handle), exp_inl); +} + +static inline size_t max_inline_ranges(void) +{ +#if defined CONFIG_ARM64_4K_PAGES + return 6; +#elif defined(CONFIG_ARM64_16K_PAGES) + return 5; +#else + return 4; +#endif +} + +/* + * Test that every number of tag ranges is correctly compressed and + * decompressed. + */ +static void test_compress_ranges(struct kunit *test) +{ + unsigned int i; + bool exp_inl; + + for (i = 1; i <= MTE_GRANULES_PER_PAGE; i++) { + exp_inl = i <= max_inline_ranges(); + compress_range_helper(test, i, exp_inl); + } +} + +/* + * Test that invalid handles are ignored by mte_decompress(). + */ +static void test_decompress_invalid(struct kunit *test) +{ + void *handle1 = (void *)0xeb0b0b0100804020; + void *handle2 = (void *)0x6b0b0b010080402f; + struct test_data *td = test->priv; + + /* handle1 has bit 0 set to 1. */ + KUNIT_EXPECT_FALSE(test, mte_decompress(handle1, td->dtags)); + /* + * handle2 is an inline handle, but its largest_idx (bits 1..3) + * is out of bounds for the inline storage. + */ + KUNIT_EXPECT_FALSE(test, mte_decompress(handle2, td->dtags)); +} + +/* + * Test that compressed inline tags cannot be confused with out-of-line + * pointers. + * + * Compressed values are written from bit 0 to bit 63, so the size of the last + * tag range initially ends up in the upper bits of the inline representation. + * Make sure mte_compress() rearranges the bits so that the resulting handle does + * not have 0b0111 as the upper four bits. + */ +static void test_upper_bits(struct kunit *test) +{ + struct test_data *td = test->priv; + void *handle; + unsigned char r_tags[6] = { 7, 0, 7, 0, 7, 0 }; + unsigned short r_sizes[6] = { 1, 1, 1, 1, 1, 1 }; + size_t r_len; + + /* Maximum number of ranges that can be encoded inline. */ + r_len = max_inline_ranges(); + /* Maximum range size possible, will be omitted. */ + r_sizes[0] = MTE_GRANULES_PER_PAGE / 2 - 1; + /* A number close to r_sizes[0] that has most of its bits set. 
 */
+	r_sizes[r_len - 1] = MTE_GRANULES_PER_PAGE - r_sizes[0] - r_len + 2;
+
+	mte_ranges_to_tags(r_tags, r_sizes, r_len, td->tags);
+	handle = compress_decompress_helper(test);
+	KUNIT_EXPECT_TRUE(test, mte_is_compressed(handle));
+}
+
+static void mtecomp_dealloc_testdata(struct test_data *td)
+{
+	kfree(td->tags);
+	kfree(td->dtags);
+	kfree(td->r_sizes);
+	kfree(td->r_tags);
+}
+
+static int mtecomp_test_init(struct kunit *test)
+{
+	struct test_data *td;
+
+	td = kmalloc(sizeof(struct test_data), GFP_KERNEL);
+	if (!td)
+		return 1;
+	td->tags = kmalloc(MTE_PAGE_TAG_STORAGE, GFP_KERNEL);
+	if (!td->tags)
+		goto error;
+	td->dtags = kmalloc(MTE_PAGE_TAG_STORAGE, GFP_KERNEL);
+	if (!td->dtags)
+		goto error;
+	td->r_len = MTE_GRANULES_PER_PAGE;
+	td->r_sizes = kmalloc_array(MTE_GRANULES_PER_PAGE,
+				    sizeof(unsigned short), GFP_KERNEL);
+	if (!td->r_sizes)
+		goto error;
+	td->r_tags = kmalloc(MTE_GRANULES_PER_PAGE, GFP_KERNEL);
+	if (!td->r_tags)
+		goto error;
+	test->priv = (void *)td;
+	return 0;
+error:
+	mtecomp_dealloc_testdata(td);
+	return 1;
+}
+
+static void mtecomp_test_exit(struct kunit *test)
+{
+	struct test_data *td = test->priv;
+
+	mtecomp_dealloc_testdata(td);
+}
+
+static struct kunit_case mtecomp_test_cases[] = {
+	KUNIT_CASE(test_tags_to_ranges_zero),
+	KUNIT_CASE(test_tags_to_ranges_simple),
+	KUNIT_CASE(test_tags_to_ranges_repeated),
+	KUNIT_CASE(test_tag_to_ranges_n),
+	KUNIT_CASE(test_compress_zero),
+	KUNIT_CASE(test_compress_nonzero),
+	KUNIT_CASE(test_two_ranges),
+	KUNIT_CASE(test_compress_simple),
+	KUNIT_CASE(test_compress_ranges),
+	KUNIT_CASE(test_decompress_invalid),
+	KUNIT_CASE(test_upper_bits),
+	{}
+};
+
+static struct kunit_suite mtecomp_test_suite = {
+	.name = "mtecomp",
+	.init = mtecomp_test_init,
+	.exit = mtecomp_test_exit,
+	.test_cases = mtecomp_test_cases,
+};
+kunit_test_suites(&mtecomp_test_suite);
+
+MODULE_IMPORT_NS(MTECOMP);
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Alexander Potapenko ");

From patchwork Thu Dec 14 11:06:38 2023
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 13492811
Date: Thu, 14 Dec 2023 12:06:38 +0100
In-Reply-To: <20231214110639.2294687-1-glider@google.com>
References: <20231214110639.2294687-1-glider@google.com>
Message-ID: <20231214110639.2294687-7-glider@google.com>
Subject: [PATCH v10-mte 6/7] arm64: mte: add compression support to mteswap.c
From: Alexander Potapenko
To: glider@google.com, catalin.marinas@arm.com, will@kernel.org, pcc@google.com, andreyknvl@gmail.com, andriy.shevchenko@linux.intel.com, aleksander.lobakin@intel.com, linux@rasmusvillemoes.dk, yury.norov@gmail.com, alexandru.elisei@arm.com
Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, eugenis@google.com, syednwaris@gmail.com, william.gray@linaro.org

Update mteswap.c to perform inline compression of memory tags when possible.
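As a rough illustration (not part of the patch), the swap-out path added here boils down to the sketch below. It uses only the helpers visible elsewhere in this series; the hypothetical function save_tags_sketch() exists solely for this example, and error handling plus the Xarray bookkeeping of the real mte_save_tags() (see the mteswap.c diff further below) are omitted.

/* Kernel-context sketch only; the authoritative code is the mteswap.c diff below. */
static void *save_tags_sketch(struct page *page)
{
	/* MTE_PAGE_TAG_STORAGE bytes (128 on 4K-page kernels). */
	void *tags = mte_allocate_tag_storage();
	void *handle;

	if (!tags)
		return NULL;
	mte_save_page_tags(page_address(page), tags);
	handle = mte_compress(tags);	/* NULL if the tags do not fit inline */
	if (handle) {
		/* The 8-byte inline handle replaces the heap buffer. */
		mte_free_tag_storage(tags);
		tags = handle;
	}
	/* Either the buffer pointer or the inline handle becomes the Xarray entry. */
	return tags;
}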
If CONFIG_ARM64_MTE_COMP is enabled, mteswap.c will attempt to compress saved tags for a struct page and store them directly in Xarray entry instead of wasting heap space. Soon after booting Android, tag compression saves ~2x memory previously spent by mteswap.c on tag allocations. On a moderately loaded device with ~20% tagged pages, this leads to saving several megabytes of kernel heap: # cat /sys/kernel/debug/mteswap/stats 8 bytes: 102496 allocations, 67302 deallocations 128 bytes: 212234 allocations, 178278 deallocations uncompressed tag storage size: 8851200 compressed tag storage size: 4346368 (statistics collection is introduced in the following patch) Signed-off-by: Alexander Potapenko Reviewed-by: Catalin Marinas --- v10-mte: - added Catalin's Reviewed-by: v9: - as requested by Yury Norov, split off statistics collection into a separate patch - minor fixes v8: - adapt to the new compression API, abandon mteswap_{no,}comp.c - move stats collection to mteswap.c v5: - drop a dead variable from _mte_free_saved_tags() in mteswap_comp.c - ensure MTE compression works with arbitrary page sizes - update patch description v4: - minor code simplifications suggested by Andy Shevchenko, added missing header dependencies - changed compression API names to reflect modifications made to memcomp.h (as suggested by Yury Norov) v3: - Addressed comments by Andy Shevchenko in another patch: - fixed includes order - replaced u64 with unsigned long - added MODULE_IMPORT_NS(MTECOMP) --- arch/arm64/mm/mteswap.c | 19 +++++++++++++++++-- 1 file changed, 17 insertions(+), 2 deletions(-) diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c index a31833e3ddc54..70f5c8ecd640d 100644 --- a/arch/arm64/mm/mteswap.c +++ b/arch/arm64/mm/mteswap.c @@ -6,6 +6,8 @@ #include #include #include +#include +#include "mtecomp.h" static DEFINE_XARRAY(mte_pages); @@ -17,12 +19,13 @@ void *mte_allocate_tag_storage(void) void mte_free_tag_storage(char *storage) { - kfree(storage); + if (!mte_is_compressed(storage)) + kfree(storage); } int mte_save_tags(struct page *page) { - void *tag_storage, *ret; + void *tag_storage, *compressed_storage, *ret; if (!page_mte_tagged(page)) return 0; @@ -32,6 +35,11 @@ int mte_save_tags(struct page *page) return -ENOMEM; mte_save_page_tags(page_address(page), tag_storage); + compressed_storage = mte_compress(tag_storage); + if (compressed_storage) { + mte_free_tag_storage(tag_storage); + tag_storage = compressed_storage; + } /* lookup the swap entry.val from the page */ ret = xa_store(&mte_pages, page_swap_entry(page).val, tag_storage, @@ -50,13 +58,20 @@ int mte_save_tags(struct page *page) void mte_restore_tags(swp_entry_t entry, struct page *page) { void *tags = xa_load(&mte_pages, entry.val); + void *tag_storage = NULL; if (!tags) return; if (try_page_mte_tagging(page)) { + if (mte_is_compressed(tags)) { + tag_storage = mte_allocate_tag_storage(); + mte_decompress(tags, tag_storage); + tags = tag_storage; + } mte_restore_page_tags(page_address(page), tags); set_page_mte_tagged(page); + mte_free_tag_storage(tag_storage); } } From patchwork Thu Dec 14 11:06:39 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 13492815 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from bombadil.infradead.org (bombadil.infradead.org [198.137.202.133]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 
Date: Thu, 14 Dec 2023 12:06:39 +0100
In-Reply-To: <20231214110639.2294687-1-glider@google.com>
References: <20231214110639.2294687-1-glider@google.com>
Message-ID: <20231214110639.2294687-8-glider@google.com> Subject: [PATCH v10-mte 7/7] arm64: mte: implement CONFIG_ARM64_MTE_SWAP_STATS From: Alexander Potapenko To: glider@google.com, catalin.marinas@arm.com, will@kernel.org, pcc@google.com, andreyknvl@gmail.com, andriy.shevchenko@linux.intel.com, aleksander.lobakin@intel.com, linux@rasmusvillemoes.dk, yury.norov@gmail.com, alexandru.elisei@arm.com Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, eugenis@google.com, syednwaris@gmail.com, william.gray@linaro.org X-CRM114-Version: 20100106-BlameMichelson ( TRE 0.8.0 (BSD) ) MR-646709E3 X-CRM114-CacheID: sfid-20231214_030709_135798_92607B49 X-CRM114-Status: GOOD ( 23.62 ) X-BeenThere: linux-arm-kernel@lists.infradead.org X-Mailman-Version: 2.1.34 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Sender: "linux-arm-kernel" Errors-To: linux-arm-kernel-bounces+linux-arm-kernel=archiver.kernel.org@lists.infradead.org Provide a config to collect the usage statistics for ARM MTE tag compression. This patch introduces allocation/deallocation counters for buffers that were stored uncompressed (and thus occupy 128 bytes of heap plus the Xarray overhead to store a pointer) and those that were compressed into 8-byte pointers (effectively using 0 bytes of heap in addition to the Xarray overhead). The counters are exposed to the userspace via /sys/kernel/debug/mteswap/stats: # cat /sys/kernel/debug/mteswap/stats 8 bytes: 102496 allocations, 67302 deallocations 128 bytes: 212234 allocations, 178278 deallocations uncompressed tag storage size: 8851200 compressed tag storage size: 4346368 Suggested-by: Yury Norov Signed-off-by: Alexander Potapenko Acked-by: Catalin Marinas Reviewed-by: Yury Norov --- This patch was split off from the "arm64: mte: add compression support to mteswap.c" patch (https://lore.kernel.org/linux-arm-kernel/ZUVulBKVYK7cq2rJ@yury-ThinkPad/T/#m819ec30beb9de53d5c442f7e3247456f8966d88a) v10-mte: - added Catalin's Acked-by: v9: - add this patch, put the stats behind a separate config, mention /sys/kernel/debug/mteswap/stats in the documentation --- .../arch/arm64/mte-tag-compression.rst | 12 +++ arch/arm64/Kconfig | 15 +++ arch/arm64/mm/mteswap.c | 93 ++++++++++++++++++- 3 files changed, 118 insertions(+), 2 deletions(-) diff --git a/Documentation/arch/arm64/mte-tag-compression.rst b/Documentation/arch/arm64/mte-tag-compression.rst index 8fe6b51a9db6d..4c25b96f7d4b5 100644 --- a/Documentation/arch/arm64/mte-tag-compression.rst +++ b/Documentation/arch/arm64/mte-tag-compression.rst @@ -145,6 +145,18 @@ Tag compression and decompression implicitly rely on the fixed MTE tag size (4 bits) and number of tags per page. Should these values change, the algorithm may need to be revised. +Stats +===== + +When `CONFIG_ARM64_MTE_SWAP_STATS` is enabled, `arch/arm64/mm/mteswap.c` exports +usage statistics for tag compression used when swapping tagged pages. The data +can be accessed via debugfs:: + + # cat /sys/kernel/debug/mteswap/stats + 8 bytes: 10438 allocations, 10417 deallocations + 128 bytes: 26180 allocations, 26179 deallocations + uncompressed tag storage size: 2816 + compressed tag storage size: 128 Programming Interface ===================== diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 6a1397a96f2f0..49a786c7edadd 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -2100,6 +2100,21 @@ config ARM64_MTE_COMP_KUNIT_TEST be compressed into pointer-size values and correctly decompressed afterwards. 
+config ARM64_MTE_SWAP_STATS + bool "Collect usage statistics of tag compression for swapped MTE tags" + default y + depends on ARM64_MTE && ARM64_MTE_COMP + help + Collect usage statistics for ARM64 MTE tag compression during swapping. + + Adds allocation/deallocation counters for buffers that were stored + uncompressed (and thus occupy 128 bytes of heap plus the Xarray + overhead to store a pointer) and those that were compressed into + 8-byte pointers (effectively using 0 bytes of heap in addition to + the Xarray overhead). + The counters are exposed to the userspace via + /sys/kernel/debug/mteswap/stats. + config ARM64_SVE bool "ARM Scalable Vector Extension support" default y diff --git a/arch/arm64/mm/mteswap.c b/arch/arm64/mm/mteswap.c index 70f5c8ecd640d..1c6c78b9a9037 100644 --- a/arch/arm64/mm/mteswap.c +++ b/arch/arm64/mm/mteswap.c @@ -1,5 +1,6 @@ // SPDX-License-Identifier: GPL-2.0-only +#include #include #include #include @@ -11,16 +12,54 @@ static DEFINE_XARRAY(mte_pages); +enum mteswap_counters { + MTESWAP_CTR_INLINE = 0, + MTESWAP_CTR_OUTLINE, + MTESWAP_CTR_SIZE +}; + +#if defined(CONFIG_ARM64_MTE_SWAP_STATS) +static atomic_long_t alloc_counters[MTESWAP_CTR_SIZE]; +static atomic_long_t dealloc_counters[MTESWAP_CTR_SIZE]; + +static void inc_alloc_counter(int kind) +{ + atomic_long_inc(&alloc_counters[kind]); +} + +static void inc_dealloc_counter(int kind) +{ + atomic_long_inc(&dealloc_counters[kind]); +} +#else +static void inc_alloc_counter(int kind) +{ +} + +static void inc_dealloc_counter(int kind) +{ +} +#endif + void *mte_allocate_tag_storage(void) { + void *ret; + /* tags granule is 16 bytes, 2 tags stored per byte */ - return kmalloc(MTE_PAGE_TAG_STORAGE, GFP_KERNEL); + ret = kmalloc(MTE_PAGE_TAG_STORAGE, GFP_KERNEL); + if (ret) + inc_alloc_counter(MTESWAP_CTR_OUTLINE); + return ret; } void mte_free_tag_storage(char *storage) { - if (!mte_is_compressed(storage)) + if (!mte_is_compressed(storage)) { kfree(storage); + inc_dealloc_counter(MTESWAP_CTR_OUTLINE); + } else { + inc_dealloc_counter(MTESWAP_CTR_INLINE); + } } int mte_save_tags(struct page *page) @@ -39,6 +78,7 @@ int mte_save_tags(struct page *page) if (compressed_storage) { mte_free_tag_storage(tag_storage); tag_storage = compressed_storage; + inc_alloc_counter(MTESWAP_CTR_INLINE); } /* lookup the swap entry.val from the page */ @@ -98,3 +138,52 @@ void mte_invalidate_tags_area(int type) } xa_unlock(&mte_pages); } + +#if defined(CONFIG_ARM64_MTE_SWAP_STATS) +/* DebugFS interface. */ +static int stats_show(struct seq_file *seq, void *v) +{ + unsigned long total_mem_alloc = 0, total_mem_dealloc = 0; + unsigned long total_num_alloc = 0, total_num_dealloc = 0; + unsigned long sizes[2] = { 8, MTE_PAGE_TAG_STORAGE }; + long alloc, dealloc; + unsigned long size; + int i; + + for (i = 0; i < MTESWAP_CTR_SIZE; i++) { + alloc = atomic_long_read(&alloc_counters[i]); + dealloc = atomic_long_read(&dealloc_counters[i]); + total_num_alloc += alloc; + total_num_dealloc += dealloc; + size = sizes[i]; + /* + * Do not count 8-byte buffers towards compressed tag storage + * size. 
+ */ + if (i) { + total_mem_alloc += (size * alloc); + total_mem_dealloc += (size * dealloc); + } + seq_printf(seq, + "%lu bytes:\t%lu allocations,\t%lu deallocations\n", + size, alloc, dealloc); + } + seq_printf(seq, "uncompressed tag storage size:\t%lu\n", + (total_num_alloc - total_num_dealloc) * + MTE_PAGE_TAG_STORAGE); + seq_printf(seq, "compressed tag storage size:\t%lu\n", + total_mem_alloc - total_mem_dealloc); + return 0; +} +DEFINE_SHOW_ATTRIBUTE(stats); + +static int mteswap_init(void) +{ + struct dentry *mteswap_dir; + + mteswap_dir = debugfs_create_dir("mteswap", NULL); + debugfs_create_file("stats", 0444, mteswap_dir, NULL, &stats_fops); + return 0; +} +module_init(mteswap_init); +#endif
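As a sanity check of the figures quoted in the patch descriptions, the following standalone sketch (not part of the series) recomputes the sample /sys/kernel/debug/mteswap/stats output using the same arithmetic as stats_show(); it assumes MTE_PAGE_TAG_STORAGE is 128 bytes, i.e. 4K pages.

#include <stdio.h>

int main(void)
{
	/* Sample counters from the commit message's stats dump. */
	long inline_alloc = 102496, inline_dealloc = 67302;	/* 8-byte inline handles */
	long outline_alloc = 212234, outline_dealloc = 178278;	/* 128-byte buffers */
	long tag_storage = 128;					/* assumed MTE_PAGE_TAG_STORAGE */
	long live_total = (inline_alloc - inline_dealloc) +
			  (outline_alloc - outline_dealloc);
	long live_outline = outline_alloc - outline_dealloc;

	/* What all live entries would occupy if none were compressed. */
	printf("uncompressed tag storage size:\t%ld\n", live_total * tag_storage);
	/* Heap actually consumed: only live uncompressed buffers count. */
	printf("compressed tag storage size:\t%ld\n", live_outline * tag_storage);
	return 0;
}

With the sample counters this prints 8851200 and 4346368, matching the quoted output: about 4.5 MB of kernel heap saved, or roughly a 2x reduction in tag storage.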