From patchwork Tue Aug 4 07:24:08 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: David Hildenbrand <david@redhat.com>
X-Patchwork-Id: 11699833
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: virtualization@lists.linux-foundation.org, linux-mm@kvack.org,
	David Hildenbrand, Andrew Morton, Michal Hocko,
	"Michael S. Tsirkin", Mike Kravetz, Mike Rapoport,
	Pankaj Gupta, Baoquan He
Subject: [PATCH v3 6/6] mm: document semantics of ZONE_MOVABLE
Date: Tue, 4 Aug 2020 09:24:08 +0200
Message-Id: <20200804072408.5481-7-david@redhat.com>
In-Reply-To: <20200804072408.5481-1-david@redhat.com>
References: <20200804072408.5481-1-david@redhat.com>
MIME-Version: 1.0

Let's document what ZONE_MOVABLE means, how it is used, and which
special cases we have regarding unmovable pages (memory offlining vs.
migration / allocations).

Cc: Andrew Morton
Cc: Michal Hocko
Cc: Michael S. Tsirkin
Cc: Mike Kravetz
Cc: Mike Rapoport
Cc: Pankaj Gupta
Cc: Baoquan He
Signed-off-by: David Hildenbrand
Acked-by: Mike Rapoport
---
 include/linux/mmzone.h | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index f6f884970511d..600d449e7d9e9 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -372,6 +372,40 @@ enum zone_type {
	 */
	ZONE_HIGHMEM,
 #endif
+	/*
+	 * ZONE_MOVABLE is similar to ZONE_NORMAL, except that it *primarily*
+	 * only contains movable pages. Main use cases are to make memory
+	 * offlining more likely to succeed, and to locally limit unmovable
+	 * allocations - e.g., to increase the number of THP/huge pages.
+	 * Notable special cases are:
+	 *
+	 * 1. Pinned pages: (Long-term) pinning of movable pages might
+	 *    essentially turn such pages unmovable. Memory offlining might
+	 *    retry a long time.
+	 * 2. memblock allocations: kernelcore/movablecore setups might create
+	 *    situations where ZONE_MOVABLE contains unmovable allocations
+	 *    after boot. Memory offlining and allocations fail early.
+	 * 3. Memory holes: Such pages cannot be allocated. Applies only to
+	 *    boot memory, not hotplugged memory. Memory offlining and
+	 *    allocations fail early.
+	 * 4. PG_hwpoison pages: While poisoned pages can be skipped during
+	 *    memory offlining, such pages cannot be allocated.
+	 * 5. Unmovable PG_offline pages: In paravirtualized environments,
+	 *    hotplugged memory blocks might only partially be managed by the
+	 *    buddy (e.g., via XEN-balloon, Hyper-V balloon, virtio-mem). The
+	 *    parts not managed by the buddy are unmovable PG_offline pages. In
+	 *    some cases (virtio-mem), such pages can be skipped during
+	 *    memory offlining; however, they cannot be moved/allocated. These
+	 *    techniques might use alloc_contig_range() to hide previously
+	 *    exposed pages from the buddy again (e.g., to implement some sort
+	 *    of memory unplug in virtio-mem).
+	 *
+	 * In general, no unmovable allocations that degrade memory offlining
+	 * should end up in ZONE_MOVABLE. Allocators (like alloc_contig_range())
+	 * have to expect that migrating pages in ZONE_MOVABLE can fail (even
+	 * if has_unmovable_pages() states that there are no unmovable pages,
+	 * there can be false negatives).
+	 */
	ZONE_MOVABLE,
 #ifdef CONFIG_ZONE_DEVICE
	ZONE_DEVICE,