From patchwork Mon Sep 27 12:26:46 2021
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 12519799
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, Matthew Wilcox
CC: Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH v2] slub: Add back check for free nonslab objects
Date: Mon, 27 Sep 2021 20:26:46 +0800
Message-ID: <20210927122646.91934-1-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

After commit f227f0faf63b ("slub: fix unreclaimable slab stat for bulk
free"), the check for freeing a nonslab page was replaced by
VM_BUG_ON_PAGE(), which is only evaluated when CONFIG_DEBUG_VM is
enabled; that option can hurt performance, so it is intended for
debugging only. Commit 0937502af7c9 ("slub: Add check for kfree() of
non slab objects.") added this check, and it should be active in all
configurations to catch invalid frees, since such frees can point to
real problems, e.g. memory corruption, use-after-free, or double-free.
Replace VM_BUG_ON_PAGE() with WARN_ON(), and add dump_page() plus a
print of the object address to help debug such issues.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
v2: Add object address printing suggested by Matthew Wilcox

 mm/slub.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index 3095b889fab4..157973e22faf 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3522,7 +3522,11 @@ static inline void free_nonslab_page(struct page *page, void *object)
 {
 	unsigned int order = compound_order(page);
 
-	VM_BUG_ON_PAGE(!PageCompound(page), page);
+	if (WARN_ON(!PageCompound(page))) {
+		dump_page(page, "invalid free nonslab page");
+		pr_warn("object pointer: 0x%p\n", object);
+	}
+
 	kfree_hook(object);
 	mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B, -(PAGE_SIZE << order));
 	__free_pages(page, order);
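
For reference, a minimal, hypothetical sketch (not part of this patch) of the
kind of invalid free the restored check reports: kfree() on a pointer that was
never returned by the slab allocator reaches free_nonslab_page(), and with this
change the bad free is reported via WARN_ON() and dump_page() even without
CONFIG_DEBUG_VM. The module and symbol names below are made up for
illustration.

/*
 * Hypothetical demonstration only -- not part of this patch and not
 * something to run on a real system, since an invalid free can still
 * corrupt memory after the warning fires.
 */
#include <linux/init.h>
#include <linux/module.h>
#include <linux/slab.h>

static char not_a_slab_object[64];	/* static storage, never kmalloc()ed */

static int __init bad_free_demo_init(void)
{
	/*
	 * The page backing this address is neither a slab page nor a
	 * compound page, so kfree() falls through to free_nonslab_page(),
	 * where WARN_ON(!PageCompound(page)) now fires and dump_page()
	 * plus the object pointer are printed to the kernel log.
	 */
	kfree(not_a_slab_object);
	return 0;
}
module_init(bad_free_demo_init);
MODULE_LICENSE("GPL");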