From patchwork Mon Jul 5 02:40:58 2021
X-Patchwork-Submitter: Yee Lee (李建誼)
X-Patchwork-Id: 12357961
Cc: Yee Lee, Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
 Andrew Morton, Andrey Konovalov, Matthias Brugger, open list:KASAN,
 open list:MEMORY MANAGEMENT, moderated list:ARM/Mediatek SoC support
Subject: [PATCH v5 2/2] kasan: Add memzero init for unaligned size at DEBUG
Date: Mon, 5 Jul 2021 10:40:58 +0800
Message-ID: <20210705024101.1567-3-yee.lee@mediatek.com>
In-Reply-To: <20210705024101.1567-1-yee.lee@mediatek.com>
References: <20210705024101.1567-1-yee.lee@mediatek.com>

From: Yee Lee

Issue: when SLUB debug is on, the hardware tag-based (hwtag)
kasan_unpoison() would overwrite the redzone of an object with an
unaligned size.

An additional memzero_explicit() path is added to replace the
init-by-hwtag-instruction path for objects with unaligned sizes in SLUB
debug mode. The performance penalty is acceptable since SLUB redzones
are only enabled in debug mode, not in production builds.

A block comment is added to explain this.

Signed-off-by: Yee Lee
Suggested-by: Marco Elver
Suggested-by: Andrey Konovalov
Cc: Andrey Ryabinin
Cc: Alexander Potapenko
Cc: Dmitry Vyukov
Cc: Andrew Morton
---
 mm/kasan/kasan.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 98e3059bfea4..a9d837197302 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -9,6 +9,7 @@
 #ifdef CONFIG_KASAN_HW_TAGS
 
 #include <linux/static_key.h>
+#include "../slab.h"
 
 DECLARE_STATIC_KEY_FALSE(kasan_flag_stacktrace);
 extern bool kasan_flag_async __ro_after_init;
@@ -387,6 +388,17 @@ static inline void kasan_unpoison(const void *addr, size_t size, bool init)
 
 	if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
 		return;
+	/*
+	 * Explicitly initialize the memory with the precise object size to
+	 * avoid overwriting the SLAB redzone. This disables initialization in
+	 * the arch code and may thus lead to performance penalty. The penalty
+	 * is accepted since SLAB redzones aren't enabled in production builds.
+	 */
+	if (slub_debug_enabled_unlikely() &&
+	    init && ((unsigned long)size & KASAN_GRANULE_MASK)) {
+		init = false;
+		memzero_explicit((void *)addr, size);
+	}
 	size = round_up(size, KASAN_GRANULE_SIZE);
 
 	hw_set_mem_tag_range((void *)addr, size, tag, init);
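
For illustration, here is a minimal user-space sketch of the problem the
patch addresses (not part of the patch; the 16-byte granule and 60-byte
object below are example values, not taken from the series).
kasan_unpoison() rounds the size up to KASAN_GRANULE_SIZE before calling
hw_set_mem_tag_range(), so a hardware-side init would also zero the bytes
between the real object size and the rounded size, which is where the SLUB
redzone pattern starts; zeroing only the precise object size, as the
memzero_explicit() path does, leaves those bytes alone.

/*
 * Illustration only: shows why rounding an unaligned object size up to
 * the KASAN granule makes a granule-based init touch bytes past the
 * object (the start of the SLUB redzone), while zeroing exactly
 * object_size bytes does not.
 */
#include <stddef.h>
#include <stdio.h>

#define GRANULE_SIZE 16	/* stand-in for KASAN_GRANULE_SIZE */

static size_t round_up_granule(size_t size)
{
	return (size + GRANULE_SIZE - 1) & ~((size_t)GRANULE_SIZE - 1);
}

int main(void)
{
	size_t object_size = 60;	/* hypothetical unaligned object */
	size_t rounded = round_up_granule(object_size);

	/* Granule-based init covers the whole rounded range ... */
	printf("hwtag init range: %zu bytes\n", rounded);
	/* ... so these redzone bytes next to the object would be zeroed. */
	printf("redzone bytes clobbered: %zu\n", rounded - object_size);
	/* The patch zeroes only object_size bytes and passes init=false. */
	printf("memzero_explicit() range: %zu bytes\n", object_size);
	return 0;
}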