From patchwork Mon Oct 16 07:12:45 2023
X-Patchwork-Submitter: "zhaoyang.huang"
X-Patchwork-Id: 13422645
From: "zhaoyang.huang" <zhaoyang.huang@unisoc.com>
To: Andrew Morton, Johannes Weiner, Roman Gushchin, Zhaoyang Huang
Subject: [PATCHv6 1/1] mm: optimization on page allocation when CMA enabled
Date: Mon, 16 Oct 2023 15:12:45 +0800
Message-ID: <20231016071245.2865233-1-zhaoyang.huang@unisoc.com>
MIME-Version: 1.0
From: Zhaoyang Huang

Under the current CMA utilization policy, an alloc_pages(GFP_USER) call can
'steal' UNMOVABLE & RECLAIMABLE page blocks with the help of CMA (it passes
zone_watermark_ok by counting CMA pages in, but takes UNMOVABLE & RECLAIMABLE
blocks in rmqueue), which can make a subsequent alloc_pages(GFP_KERNEL) fail.
Solve this by introducing a second watermark check for GFP_MOVABLE
allocations, which lets the allocation use CMA when appropriate.
-- Free_pages(30MB)
|
|
-- WMARK_LOW(25MB)
|
-- Free_CMA(12MB)
|
|
--

Signed-off-by: Johannes Weiner
---
Signed-off-by: Zhaoyang Huang
---
v6: update comments
---
 mm/page_alloc.c | 44 ++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 40 insertions(+), 4 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 452459836b71..5a146aa7c0aa 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2078,6 +2078,43 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
 }
 
+#ifdef CONFIG_CMA
+/*
+ * A GFP_MOVABLE allocation could drain UNMOVABLE & RECLAIMABLE page blocks
+ * with the help of CMA, which can make later GFP_KERNEL allocations fail.
+ * Check zone_watermark_ok again without ALLOC_CMA to decide whether to use
+ * CMA first.
+ */
+static bool use_cma_first(struct zone *zone, unsigned int order, unsigned int alloc_flags)
+{
+	unsigned long watermark;
+	bool cma_first = false;
+
+	watermark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK);
+	/* check if GFP_MOVABLE passed the previous zone_watermark_ok with the help of CMA */
+	if (zone_watermark_ok(zone, order, watermark, 0, alloc_flags & (~ALLOC_CMA))) {
+		/*
+		 * Balance movable allocations between regular and CMA areas by
+		 * allocating from CMA when over half of the zone's free memory
+		 * is in the CMA area.
+		 */
+		cma_first = (zone_page_state(zone, NR_FREE_CMA_PAGES) >
+				zone_page_state(zone, NR_FREE_PAGES) / 2);
+	} else {
+		/*
+		 * The failed watermark check means UNMOVABLE & RECLAIMABLE
+		 * pages are running short now; use CMA first to keep them
+		 * around the corresponding watermark.
+		 */
+		cma_first = true;
+	}
+	return cma_first;
+}
+#else
+static bool use_cma_first(struct zone *zone, unsigned int order, unsigned int alloc_flags)
+{
+	return false;
+}
+#endif
 /*
  * Do the hard work of removing an element from the buddy allocator.
  * Call me with the zone->lock already held.
@@ -2091,12 +2128,11 @@ __rmqueue(struct zone *zone, unsigned int order, int migratetype,
 	if (IS_ENABLED(CONFIG_CMA)) {
 		/*
 		 * Balance movable allocations between regular and CMA areas by
-		 * allocating from CMA when over half of the zone's free memory
-		 * is in the CMA area.
+		 * allocating from CMA based on a second zone_watermark_ok
+		 * check, to see if the first one only passed with the help of CMA
 		 */
 		if (alloc_flags & ALLOC_CMA &&
-		    zone_page_state(zone, NR_FREE_CMA_PAGES) >
-		    zone_page_state(zone, NR_FREE_PAGES) / 2) {
+		    use_cma_first(zone, order, alloc_flags)) {
 			page = __rmqueue_cma_fallback(zone, order);
 			if (page)
 				return page;