From patchwork Mon Jun 26 17:14:23 2023
X-Patchwork-Submitter: Ryan Roberts
X-Patchwork-Id: 13293232
From: Ryan Roberts <ryan.roberts@arm.com>
To: Andrew Morton, "Matthew Wilcox (Oracle)", "Kirill A. Shutemov",
	Yin Fengwei, David Hildenbrand, Yu Zhao, Catalin Marinas,
	Will Deacon, Geert Uytterhoeven, Christian Borntraeger,
	Sven Schnelle, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	Dave Hansen, "H. Peter Anvin"
Peter Anvin" Cc: Ryan Roberts , linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org, linux-s390@vger.kernel.org Subject: [PATCH v1 03/10] mm: Introduce try_vma_alloc_movable_folio() Date: Mon, 26 Jun 2023 18:14:23 +0100 Message-Id: <20230626171430.3167004-4-ryan.roberts@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230626171430.3167004-1-ryan.roberts@arm.com> References: <20230626171430.3167004-1-ryan.roberts@arm.com> MIME-Version: 1.0 X-Rspam-User: X-Stat-Signature: wjqi6g9x5aykbmotkuz4wt1g4cotcbzp X-Rspamd-Server: rspam07 X-Rspamd-Queue-Id: 252E914001F X-HE-Tag: 1687799690-140058 X-HE-Meta: U2FsdGVkX18dQN0F4qTmpaZeuRTWMyR09sZ/5cUGen6rzky2O/ZLpNTF1LBjjLUQ/MrufG+6ZrSd1SAm18isKWF4/ztLhQDHpDSQuaMQvaDNplrKSD/eHRgjfe8GwxGZj8wDXmOTFxQB/3dUx3x0DfXjJMSJXzQGlnSlWDx2fJcGxIBODuJwVU2TQEOymACv9Arx7ezJefhB8FSt9IRV6CKz+OS11Mr7X/Xyh4D1Lw2vRmaLYla0K6wVlq/ZckWnbVySfwfa+YjCvaVJ+tsejM7EqxGFkeIM1ltaXx9ajjh4oDMXxCbkStovcScbqHmMdRlDjLye0IXD5PjGECY2LupqcV7XvkH7VMqUdw1SUiTAHzkBCU4Vq7LNSzDdRrGcbrx5v/Ca32+17/zFvciEiUKDEHmvqY9/yQUoiAFZh3OZ+VN9YBHEcIdqaAYb7ER/V9rglbMGJh++4x2HR4zfL5+ug+K40Eq1nvJn+lHqqb4D/5YWr2Nwvv0S/nzkSUer/vW3HXVeVyEepOrdNjalxMESdNVFf0sjxrvyRp5BPVbYEJCFijOZMEO0M/Xvboa4jjQY7aGHu2eM4hLRvaw/0pzCaVAaBKXhQ8F8oYImX9ff8iQPVg8ioWu9QRpKFAgrWnxxQk0MHH2OGx2/9XpNDzHb48iUb5v0Z3Mhg9dKV3q5DGU1dxWRrQOzpDeFKa0vPRVJI8dKQ/I+A5hmjjT7sn10EQEXxcZDFDgd5rsXnWBmnzRsvCkm61PfQdMBhRM/007kuQbB6HlFn09K/jTIhtgfxUQSVoKoQZZvx0yI+HK3IzlhlEYgxUdnVatCpkapy0x4kyI0LqoV4hTk+s8DetaQ+m0cPGZvEtKGGF0rRu3j9mOMKfF5d7sEyx7wpCn4N3rJpUotda2GZRa5l53RfhNqXBpI0UGJUreIrwo8d0TOTJPNpJcWdmEPBRXdPrUHFSA7lxQTwE0MbpO046s NGsh+R7W iINdX1fJ1hbkzCXWt26auIAswc2byjGCX54hI6k1Pz/m1/DZJcdGdj4FgE6P0+CT8rTihJB8e3RpPJ2W/FXXntvunbBBML7PjzLJHxSjvWPIJolp6Bwjtxw3O+aZs8v83byjEjnZhQv8e6dL6p7gIBPVhfF8d4f36zjwgUTQFhTSgg2osSNktSbTA+VDXkBeJOCcJBlC3/UEhgVw= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Opportunistically attempt to allocate high-order folios in highmem, optionally zeroed. Retry with lower orders all the way to order-0, until success. Although, of note, order-1 allocations are skipped since a large folio must be at least order-2 to work with the THP machinery. The user must check what they got with folio_order(). This will be used to oportunistically allocate large folios for anonymous memory with a sensible fallback under memory pressure. For attempts to allocate non-0 orders, we set __GFP_NORETRY to prevent high latency due to reclaim, instead preferring to just try for a lower order. The same approach is used by the readahead code when allocating large folios. Signed-off-by: Ryan Roberts --- mm/memory.c | 33 +++++++++++++++++++++++++++++++++ 1 file changed, 33 insertions(+) diff --git a/mm/memory.c b/mm/memory.c index 367bbbb29d91..53896d46e686 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -3001,6 +3001,39 @@ static vm_fault_t fault_dirty_shared_page(struct vm_fault *vmf) return 0; } +static inline struct folio *vma_alloc_movable_folio(struct vm_area_struct *vma, + unsigned long vaddr, int order, bool zeroed) +{ + gfp_t gfp = order > 0 ? 
+	gfp_t gfp = order > 0 ? __GFP_NORETRY | __GFP_NOWARN : 0;
+
+	if (zeroed)
+		return vma_alloc_zeroed_movable_folio(vma, vaddr, gfp, order);
+	else
+		return vma_alloc_folio(GFP_HIGHUSER_MOVABLE | gfp, order, vma,
+							vaddr, false);
+}
+
+/*
+ * Opportunistically attempt to allocate high-order folios, retrying with lower
+ * orders all the way to order-0, until success. order-1 allocations are skipped
+ * since a folio must be at least order-2 to work with the THP machinery. The
+ * user must check what they got with folio_order(). vaddr can be any virtual
+ * address that will be mapped by the allocated folio.
+ */
+static struct folio *try_vma_alloc_movable_folio(struct vm_area_struct *vma,
+				unsigned long vaddr, int order, bool zeroed)
+{
+	struct folio *folio;
+
+	for (; order > 1; order--) {
+		folio = vma_alloc_movable_folio(vma, vaddr, order, zeroed);
+		if (folio)
+			return folio;
+	}
+
+	return vma_alloc_movable_folio(vma, vaddr, 0, zeroed);
+}
+
 /*
  * Handle write page faults for pages that can be reused in the current vma
  *
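
To illustrate how the helper is meant to be driven, below is a minimal
sketch of a caller in an anonymous fault path. It is not part of this
patch: the wrapper name, the chosen maximum order and the pr_debug() are
assumptions for illustration only; the real call sites are introduced
later in this series.

static struct folio *example_alloc_anon_folio(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	int order = 4;		/* assumed upper bound: try up to 16 pages */
	struct folio *folio;

	/* Ask for 'order'; the helper falls back towards order-0 itself. */
	folio = try_vma_alloc_movable_folio(vma, vmf->address, order, true);
	if (!folio)
		return NULL;	/* even the order-0 attempt failed */

	/*
	 * The folio that came back may be smaller than requested (and is
	 * never order-1); size the PTE mapping from folio_order().
	 */
	pr_debug("allocated order-%u anon folio\n", folio_order(folio));

	return folio;
}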