From patchwork Thu Jan 10 05:35:43 2019
X-Patchwork-Submitter: Arun KS
X-Patchwork-Id: 10755213
From: Arun KS
To: arunks.linux@gmail.com, alexander.h.duyck@linux.intel.com,
    akpm@linux-foundation.org, mhocko@kernel.org, vbabka@suse.cz,
    osalvador@suse.de, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: getarunks@gmail.com, Arun KS
Subject: [PATCH v9] mm/page_alloc.c: memory_hotplug: free pages as higher order
Date: Thu, 10 Jan 2019 11:05:43 +0530
Message-Id:
<1547098543-26452-1-git-send-email-arunks@codeaurora.org>
X-Mailer: git-send-email 1.9.1

When page freeing is done at a higher order, the time the buddy
allocator spends coalescing pages can be reduced. With a section size
of 256MB, the hot-add latency of a single section improves from
50-60 ms to less than 1 ms, i.e. a roughly 60x improvement.

Modify the external providers of the online callback to align with the
changed callback signature.

Signed-off-by: Arun KS
Acked-by: Michal Hocko
Reviewed-by: Oscar Salvador
Reviewed-by: Alexander Duyck
---
Changes since v8:
- Remove return type change for online_page_callback.
- Use consistent names for external online_page providers.
- Fix onlined_pages accounting.

Changes since v7:
- Rebased to 5.0-rc1.
- Fixed onlined_pages accounting.
- Added comment for return value of online_page_callback.
- Renamed xen_bring_pgs_online to xen_online_pages.

Changes since v6:
- Rebased to 4.20.
- Changelog updated.
- No improvement seen on arm64, hence removed removal of prefetch.

Changes since v5:
- Rebased to 4.20-rc1.
- Changelog updated.

Changes since v4:
- As suggested by Michal Hocko:
  - Simplify logic in online_pages_block() by using get_order().
  - Separate out removal of prefetch from __free_pages_core().

Changes since v3:
- Renamed _free_pages_boot_core -> __free_pages_core.
- Removed prefetch from __free_pages_core.
- Removed xen_online_page().

Changes since v2:
- Reuse code from __free_pages_boot_core().

Changes since v1:
- Removed prefetch().

Changes since RFC:
- Rebase.
- As suggested by Michal Hocko, remove pages_per_block.
- Modified external providers of online_page_callback.
v8: https://lore.kernel.org/patchwork/patch/1030332/
v7: https://lore.kernel.org/patchwork/patch/1028908/
v6: https://lore.kernel.org/patchwork/patch/1007253/
v5: https://lore.kernel.org/patchwork/patch/995739/
v4: https://lore.kernel.org/patchwork/patch/995111/
v3: https://lore.kernel.org/patchwork/patch/992348/
v2: https://lore.kernel.org/patchwork/patch/991363/
v1: https://lore.kernel.org/patchwork/patch/989445/
RFC: https://lore.kernel.org/patchwork/patch/984754/
---
 drivers/hv/hv_balloon.c        |  4 ++--
 drivers/xen/balloon.c          | 15 ++++++++++-----
 include/linux/memory_hotplug.h |  2 +-
 mm/internal.h                  |  1 +
 mm/memory_hotplug.c            | 37 +++++++++++++++++++++++++------------
 mm/page_alloc.c                |  8 ++++----
 6 files changed, 43 insertions(+), 24 deletions(-)

diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
index 5301fef..55d79f8 100644
--- a/drivers/hv/hv_balloon.c
+++ b/drivers/hv/hv_balloon.c
@@ -771,7 +771,7 @@ static void hv_mem_hot_add(unsigned long start, unsigned long size,
 	}
 }
 
-static void hv_online_page(struct page *pg)
+static void hv_online_page(struct page *pg, unsigned int order)
 {
 	struct hv_hotadd_state *has;
 	unsigned long flags;
@@ -783,7 +783,7 @@ static void hv_online_page(struct page *pg)
 		if ((pfn < has->start_pfn) || (pfn >= has->end_pfn))
 			continue;
 
-		hv_page_online_one(has, pg);
+		hv_bring_pgs_online(has, pfn, (1UL << order));
 		break;
 	}
 	spin_unlock_irqrestore(&dm_device.ha_lock, flags);
diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index ceb5048..d107447 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -369,14 +369,19 @@ static enum bp_state reserve_additional_memory(void)
 	return BP_ECANCELED;
 }
 
-static void xen_online_page(struct page *page)
+static void xen_online_page(struct page *page, unsigned int order)
 {
-	__online_page_set_limits(page);
+	unsigned long i, size = (1 << order);
+	unsigned long start_pfn = page_to_pfn(page);
+	struct page *p;
+	pr_debug("Online %lu pages starting at pfn 0x%lx\n", size,
+		 start_pfn);
 
 	mutex_lock(&balloon_mutex);
-
-	__balloon_append(page);
-
+	for (i = 0; i < size; i++) {
+		p = pfn_to_page(start_pfn + i);
+		__online_page_set_limits(p);
+		__balloon_append(p);
+	}
 	mutex_unlock(&balloon_mutex);
 }
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 07da5c6..e368730 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -87,7 +87,7 @@ extern int test_pages_in_a_zone(unsigned long start_pfn,
 	unsigned long end_pfn, unsigned long *valid_start,
 	unsigned long *valid_end);
 extern void __offline_isolated_pages(unsigned long, unsigned long);
 
-typedef void (*online_page_callback_t)(struct page *page);
+typedef void (*online_page_callback_t)(struct page *page, unsigned int order);
 
 extern int set_online_page_callback(online_page_callback_t callback);
 extern int restore_online_page_callback(online_page_callback_t callback);
diff --git a/mm/internal.h b/mm/internal.h
index f4a7bb0..536bc2a 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -163,6 +163,7 @@ static inline struct page *pageblock_pfn_to_page(unsigned long start_pfn,
 extern int __isolate_free_page(struct page *page, unsigned int order);
 extern void memblock_free_pages(struct page *page, unsigned long pfn,
 					unsigned int order);
+extern void __free_pages_core(struct page *page, unsigned int order);
 extern void prep_compound_page(struct page *page, unsigned int order);
 extern void post_alloc_hook(struct page *page, unsigned int order,
 					gfp_t gfp_flags);
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index b9a667d..77dff24 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -47,7 +47,7 @@
  * and restore_online_page_callback() for generic callback restore.
  */
 
-static void generic_online_page(struct page *page);
+static void generic_online_page(struct page *page, unsigned int order);
 
 static online_page_callback_t online_page_callback = generic_online_page;
 static DEFINE_MUTEX(online_page_callback_lock);
 
@@ -656,26 +656,39 @@ void __online_page_free(struct page *page)
 }
 EXPORT_SYMBOL_GPL(__online_page_free);
 
-static void generic_online_page(struct page *page)
+static void generic_online_page(struct page *page, unsigned int order)
 {
-	__online_page_set_limits(page);
-	__online_page_increment_counters(page);
-	__online_page_free(page);
+	__free_pages_core(page, order);
+	totalram_pages_add(1UL << order);
+#ifdef CONFIG_HIGHMEM
+	if (PageHighMem(page))
+		totalhigh_pages_add(1UL << order);
+#endif
+}
+
+static int online_pages_blocks(unsigned long start, unsigned long nr_pages)
+{
+	unsigned long end = start + nr_pages;
+	int order, ret, onlined_pages = 0;
+
+	while (start < end) {
+		order = min(MAX_ORDER - 1,
+			get_order(PFN_PHYS(end) - PFN_PHYS(start)));
+		(*online_page_callback)(pfn_to_page(start), order);
+
+		onlined_pages += (1UL << order);
+		start += (1UL << order);
+	}
+	return onlined_pages;
 }
 
 static int online_pages_range(unsigned long start_pfn, unsigned long nr_pages,
 			void *arg)
 {
-	unsigned long i;
 	unsigned long onlined_pages = *(unsigned long *)arg;
-	struct page *page;
 
 	if (PageReserved(pfn_to_page(start_pfn)))
-		for (i = 0; i < nr_pages; i++) {
-			page = pfn_to_page(start_pfn + i);
-			(*online_page_callback)(page);
-			onlined_pages++;
-		}
+		onlined_pages += online_pages_blocks(start_pfn, nr_pages);
 
 	online_mem_sections(start_pfn, start_pfn + nr_pages);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d295c9b..883212a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1303,7 +1303,7 @@ static void __free_pages_ok(struct page *page, unsigned int order)
 	local_irq_restore(flags);
 }
 
-static void __init __free_pages_boot_core(struct page *page, unsigned int order)
+void __free_pages_core(struct page *page, unsigned int order)
 {
 	unsigned int nr_pages = 1 << order;
 	struct page *p = page;
@@ -1382,7 +1382,7 @@ void __init memblock_free_pages(struct page *page, unsigned long pfn,
 {
 	if (early_page_uninitialised(pfn))
 		return;
-	return __free_pages_boot_core(page, order);
+	__free_pages_core(page, order);
 }
 
 /*
@@ -1472,14 +1472,14 @@ static void __init deferred_free_range(unsigned long pfn,
 	if (nr_pages == pageblock_nr_pages &&
 	    (pfn & (pageblock_nr_pages - 1)) == 0) {
 		set_pageblock_migratetype(page, MIGRATE_MOVABLE);
-		__free_pages_boot_core(page, pageblock_order);
+		__free_pages_core(page, pageblock_order);
 		return;
 	}
 
 	for (i = 0; i < nr_pages; i++, page++, pfn++) {
 		if ((pfn & (pageblock_nr_pages - 1)) == 0)
 			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
-		__free_pages_boot_core(page, 0);
+		__free_pages_core(page, 0);
 	}
 }