From patchwork Tue Dec 12 11:55:57 2017
X-Patchwork-Submitter: Wei Wang <wei.w.wang@intel.com>
X-Patchwork-Id: 10106845
From: Wei Wang <wei.w.wang@intel.com>
To: virtio-dev@lists.oasis-open.org, linux-kernel@vger.kernel.org,
	qemu-devel@nongnu.org, virtualization@lists.linux-foundation.org,
	kvm@vger.kernel.org, linux-mm@kvack.org, mst@redhat.com,
	mhocko@kernel.org, akpm@linux-foundation.org, mawilcox@microsoft.com
Date: Tue, 12 Dec 2017 19:55:57 +0800
Message-Id: <1513079759-14169-6-git-send-email-wei.w.wang@intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1513079759-14169-1-git-send-email-wei.w.wang@intel.com>
References: <1513079759-14169-1-git-send-email-wei.w.wang@intel.com>
Subject: [Qemu-devel] [PATCH v19 5/7] mm: support reporting free page blocks
Cc: aarcange@redhat.com, yang.zhang.wz@gmail.com, david@redhat.com,
	penguin-kernel@I-love.SAKURA.ne.jp, liliang.opensource@gmail.com,
	willy@infradead.org, amit.shah@redhat.com, wei.w.wang@intel.com,
	quan.xu@aliyun.com, cornelia.huck@de.ibm.com, pbonzini@redhat.com,
	nilal@redhat.com, mgorman@techsingularity.net

This patch adds support to walk through the free page blocks in the
system and report them via a callback function. Some page blocks may
leave the free list after zone->lock is released, so it is the caller's
responsibility to either detect or prevent the use of such pages.

One example use of this patch is to accelerate live migration by
skipping the transfer of free pages reported from the guest. A popular
method used by the hypervisor to track which parts of memory are
written during live migration is to write-protect all the guest memory.
So, pages that are reported as free but are written to after the report
function returns will be captured by the hypervisor and added to the
next round of memory transfer.

Signed-off-by: Wei Wang
Signed-off-by: Liang Li
Cc: Michal Hocko
Cc: Michael S. Tsirkin
Acked-by: Michal Hocko
---
 include/linux/mm.h |  6 ++++
 mm/page_alloc.c    | 91 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 97 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ea818ff..b3077dd 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1938,6 +1938,12 @@ extern void free_area_init_node(int nid, unsigned long * zones_size,
 		unsigned long zone_start_pfn, unsigned long *zholes_size);
 extern void free_initmem(void);
 
+extern void walk_free_mem_block(void *opaque,
+				int min_order,
+				bool (*report_pfn_range)(void *opaque,
+							 unsigned long pfn,
+							 unsigned long num));
+
 /*
  * Free reserved pages within range [PAGE_ALIGN(start), end & PAGE_MASK)
  * into the buddy system. The freed pages will be poisoned with pattern
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 73f5d45..0de461d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4888,6 +4888,97 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
 	show_swap_cache_info();
 }
 
+/*
+ * Walk through a free page list and report the found pfn range via the
+ * callback.
+ *
+ * Return false if the callback requests to stop reporting. Otherwise,
+ * return true.
+ */
+static bool walk_free_page_list(void *opaque,
+				struct zone *zone,
+				int order,
+				enum migratetype mt,
+				bool (*report_pfn_range)(void *,
+							 unsigned long,
+							 unsigned long))
+{
+	struct page *page;
+	struct list_head *list;
+	unsigned long pfn, flags;
+	bool ret = true;	/* stays true if the list is empty */
+
+	spin_lock_irqsave(&zone->lock, flags);
+	list = &zone->free_area[order].free_list[mt];
+	list_for_each_entry(page, list, lru) {
+		pfn = page_to_pfn(page);
+		ret = report_pfn_range(opaque, pfn, 1 << order);
+		if (!ret)
+			break;
+	}
+	spin_unlock_irqrestore(&zone->lock, flags);
+
+	return ret;
+}
+
+/**
+ * walk_free_mem_block - Walk through the free page blocks in the system
+ * @opaque: the context passed from the caller
+ * @min_order: the minimum order of free lists to check
+ * @report_pfn_range: the callback to report the pfn range of the free pages
+ *
+ * If the callback returns false, stop iterating the list of free page blocks.
+ * Otherwise, continue to report.
+ *
+ * Please note that there are no locking guarantees for the callback and
+ * that the reported pfn range might be freed or disappear after the
+ * callback returns, so the caller has to be very careful how it is used.
+ *
+ * The callback itself must not sleep or perform any operations which would
+ * require any memory allocations directly (not even GFP_NOWAIT/GFP_ATOMIC)
+ * or via any lock dependency. It is generally advisable to implement
+ * the callback as simple as possible and defer any heavy lifting to a
+ * different context.
+ *
+ * There is no guarantee that each free range will be reported only once
+ * during one walk_free_mem_block invocation.
+ *
+ * pfn_to_page on the given range is strongly discouraged and if there is
+ * an absolute need for that make sure to contact MM people to discuss
+ * potential problems.
+ *
+ * The function itself might sleep so it cannot be called from atomic
+ * contexts.
+ *
+ * In general, low orders tend to be very volatile, so it makes more sense
+ * to query larger ones first for various optimizations, such as
+ * ballooning. This will also reduce the overhead.
+ */
+void walk_free_mem_block(void *opaque,
+			 int min_order,
+			 bool (*report_pfn_range)(void *opaque,
+						  unsigned long pfn,
+						  unsigned long num))
+{
+	struct zone *zone;
+	int order;
+	enum migratetype mt;
+	bool ret;
+
+	for_each_populated_zone(zone) {
+		for (order = MAX_ORDER - 1; order >= min_order; order--) {
+			for (mt = 0; mt < MIGRATE_TYPES; mt++) {
+				ret = walk_free_page_list(opaque, zone,
+							  order, mt,
+							  report_pfn_range);
+				if (!ret)
+					return;
+			}
+		}
+	}
+}
+EXPORT_SYMBOL_GPL(walk_free_mem_block);
+
 static void zoneref_set_zone(struct zone *zone, struct zoneref *zoneref)
 {
 	zoneref->zone = zone;
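
An illustrative note for readers (not part of the patch): below is a
minimal, hypothetical sketch of a report_pfn_range callback, showing the
calling convention walk_free_mem_block() expects. The context structure
(free_range_ctx), the callback name (record_free_range), and the order-9
threshold are illustrative assumptions, not code from this series. Per
the kernel-doc above, the callback takes no locks, performs no memory
allocations, and returns false to stop the walk.

/* Hypothetical caller-side bookkeeping; illustrative only. */
struct free_range_ctx {
	unsigned long *pfn;	/* start pfn of each recorded block */
	unsigned long *len;	/* number of pages in each block */
	unsigned int capacity;	/* size of the two arrays */
	unsigned int count;	/* entries recorded so far */
};

/*
 * Record each reported block and ask the walker to stop (return false)
 * once the arrays are full. It does not sleep, take locks, or allocate,
 * as the kernel-doc requires.
 */
static bool record_free_range(void *opaque, unsigned long pfn,
			      unsigned long num)
{
	struct free_range_ctx *ctx = opaque;

	if (ctx->count >= ctx->capacity)
		return false;

	ctx->pfn[ctx->count] = pfn;
	ctx->len[ctx->count] = num;
	ctx->count++;

	return true;
}

/*
 * Example invocation, following the kernel-doc advice to query large
 * orders first (order 9 is a 2MB block with 4KB pages):
 *
 *	walk_free_mem_block(&ctx, 9, record_free_range);
 *
 * Since reported ranges may be reallocated as soon as zone->lock is
 * dropped, the recorded pfns are hints only; a consumer such as a live
 * migration path must tolerate stale entries.
 */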