From patchwork Thu Sep 14 01:55:10 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sean Christopherson
X-Patchwork-Id: 13384019
Reply-To: Sean Christopherson
Date: Wed, 13 Sep 2023 18:55:10 -0700
In-Reply-To: <20230914015531.1419405-1-seanjc@google.com>
Mime-Version: 1.0
References: <20230914015531.1419405-1-seanjc@google.com>
X-Mailer: git-send-email 2.42.0.283.g2d96d420d3-goog
Message-ID: <20230914015531.1419405-13-seanjc@google.com>
Subject: [RFC PATCH v12 12/33] mm: Add AS_UNMOVABLE to mark mapping as
 completely unmovable
From: Sean Christopherson
To: Paolo Bonzini, Marc Zyngier, Oliver Upton, Huacai Chen,
 Michael Ellerman, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
 Sean Christopherson, "Matthew Wilcox (Oracle)", Andrew Morton,
 Paul Moore, James Morris, "Serge E. Hallyn"
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.linux.dev, linux-mips@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, linux-security-module@vger.kernel.org,
 linux-kernel@vger.kernel.org, Chao Peng, Fuad Tabba, Jarkko Sakkinen,
 Anish Moorthy, Yu Zhang, Isaku Yamahata, Xu Yilun, Vlastimil Babka,
 Vishal Annapurve, Ackerley Tng, Maciej Szmigiero, David Hildenbrand,
 Quentin Perret, Michael Roth, Wang, Liam Merwick, Isaku Yamahata,
 "Kirill A . Shutemov"

Add an "unmovable" flag for mappings that cannot be migrated under any
circumstance.  KVM will use the flag for its upcoming GUEST_MEMFD
support, which will not support compaction/migration, at least not in
the foreseeable future.

Test AS_UNMOVABLE under folio lock as already done for the async
compaction/dirty folio case, as the mapping can be removed by truncation
while compaction is running.  To avoid having to lock every folio with a
mapping, assume/require that unmovable mappings are also unevictable,
and have mapping_set_unmovable() also set AS_UNEVICTABLE.
Cc: Matthew Wilcox
Co-developed-by: Vlastimil Babka
Signed-off-by: Vlastimil Babka
Signed-off-by: Sean Christopherson
---
 include/linux/pagemap.h | 19 +++++++++++++++++-
 mm/compaction.c         | 43 +++++++++++++++++++++++++++++------------
 mm/migrate.c            |  2 ++
 3 files changed, 51 insertions(+), 13 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 351c3b7f93a1..82c9bf506b79 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -203,7 +203,8 @@ enum mapping_flags {
 	/* writeback related tags are not used */
 	AS_NO_WRITEBACK_TAGS = 5,
 	AS_LARGE_FOLIO_SUPPORT = 6,
-	AS_RELEASE_ALWAYS,	/* Call ->release_folio(), even if no private data */
+	AS_RELEASE_ALWAYS = 7,	/* Call ->release_folio(), even if no private data */
+	AS_UNMOVABLE = 8,	/* The mapping cannot be moved, ever */
 };
 
 /**
@@ -289,6 +290,22 @@ static inline void mapping_clear_release_always(struct address_space *mapping)
 	clear_bit(AS_RELEASE_ALWAYS, &mapping->flags);
 }
 
+static inline void mapping_set_unmovable(struct address_space *mapping)
+{
+	/*
+	 * It's expected unmovable mappings are also unevictable. Compaction
+	 * migrate scanner (isolate_migratepages_block()) relies on this to
+	 * reduce page locking.
+	 */
+	set_bit(AS_UNEVICTABLE, &mapping->flags);
+	set_bit(AS_UNMOVABLE, &mapping->flags);
+}
+
+static inline bool mapping_unmovable(struct address_space *mapping)
+{
+	return test_bit(AS_UNMOVABLE, &mapping->flags);
+}
+
 static inline gfp_t mapping_gfp_mask(struct address_space * mapping)
 {
 	return mapping->gfp_mask;
diff --git a/mm/compaction.c b/mm/compaction.c
index 38c8d216c6a3..12b828aed7c8 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -883,6 +883,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 
 	/* Time to isolate some pages for migration */
 	for (; low_pfn < end_pfn; low_pfn++) {
+		bool is_dirty, is_unevictable;
 
 		if (skip_on_failure && low_pfn >= next_skip_pfn) {
 			/*
@@ -1080,8 +1081,10 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		if (!folio_test_lru(folio))
 			goto isolate_fail_put;
 
+		is_unevictable = folio_test_unevictable(folio);
+
 		/* Compaction might skip unevictable pages but CMA takes them */
-		if (!(mode & ISOLATE_UNEVICTABLE) && folio_test_unevictable(folio))
+		if (!(mode & ISOLATE_UNEVICTABLE) && is_unevictable)
 			goto isolate_fail_put;
 
 		/*
@@ -1093,26 +1096,42 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		if ((mode & ISOLATE_ASYNC_MIGRATE) && folio_test_writeback(folio))
 			goto isolate_fail_put;
 
-		if ((mode & ISOLATE_ASYNC_MIGRATE) && folio_test_dirty(folio)) {
-			bool migrate_dirty;
+		is_dirty = folio_test_dirty(folio);
+
+		if (((mode & ISOLATE_ASYNC_MIGRATE) && is_dirty) ||
+		    (mapping && is_unevictable)) {
+			bool migrate_dirty = true;
+			bool is_unmovable;
 
 			/*
 			 * Only folios without mappings or that have
-			 * a ->migrate_folio callback are possible to
-			 * migrate without blocking. However, we may
-			 * be racing with truncation, which can free
-			 * the mapping. Truncation holds the folio lock
-			 * until after the folio is removed from the page
-			 * cache so holding it ourselves is sufficient.
+			 * a ->migrate_folio callback are possible to migrate
+			 * without blocking.
+			 *
+			 * Folios from unmovable mappings are not migratable.
+			 *
+			 * However, we can be racing with truncation, which can
+			 * free the mapping that we need to check. Truncation
+			 * holds the folio lock until after the folio is removed
+			 * from the page so holding it ourselves is sufficient.
+			 *
+			 * To avoid locking the folio just to check unmovable,
+			 * assume every unmovable folio is also unevictable,
+			 * which is a cheaper test.  If our assumption goes
+			 * wrong, it's not a correctness bug, just potentially
+			 * wasted cycles.
 			 */
 			if (!folio_trylock(folio))
 				goto isolate_fail_put;
 
 			mapping = folio_mapping(folio);
-			migrate_dirty = !mapping ||
-					mapping->a_ops->migrate_folio;
+			if ((mode & ISOLATE_ASYNC_MIGRATE) && is_dirty) {
+				migrate_dirty = !mapping ||
+						mapping->a_ops->migrate_folio;
+			}
+			is_unmovable = mapping && mapping_unmovable(mapping);
 			folio_unlock(folio);
 
-			if (!migrate_dirty)
+			if (!migrate_dirty || is_unmovable)
 				goto isolate_fail_put;
 		}
diff --git a/mm/migrate.c b/mm/migrate.c
index b7fa020003f3..3d25c145098d 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -953,6 +953,8 @@ static int move_to_new_folio(struct folio *dst, struct folio *src,
 
 	if (!mapping)
 		rc = migrate_folio(mapping, dst, src, mode);
+	else if (mapping_unmovable(mapping))
+		rc = -EOPNOTSUPP;
 	else if (mapping->a_ops->migrate_folio)
 		/*
 		 * Most folios have a mapping and most filesystems