From patchwork Mon Sep 16 09:43:07 2024
X-Patchwork-Submitter: Dev Jain <dev.jain@arm.com>
X-Patchwork-Id: 13805220
From: Dev Jain <dev.jain@arm.com>
To: akpm@linux-foundation.org, david@redhat.com, willy@infradead.org,
    kirill.shutemov@linux.intel.com
Cc: ryan.roberts@arm.com, anshuman.khandual@arm.com, catalin.marinas@arm.com,
    cl@gentwo.org, vbabka@suse.cz, mhocko@suse.com, apopple@nvidia.com,
    dave.hansen@linux.intel.com, will@kernel.org, baohua@kernel.org,
    jack@suse.cz, mark.rutland@arm.com, hughd@google.com,
    aneesh.kumar@kernel.org, yang@os.amperecomputing.com, peterx@redhat.com,
    ioworker0@gmail.com, jglisse@google.com, wangkefeng.wang@huawei.com,
    ziy@nvidia.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Dev Jain <dev.jain@arm.com>
Subject: [PATCH v4 0/2] Do not shatter hugezeropage on wp-fault
Date: Mon, 16 Sep 2024 15:13:07 +0530
Message-Id: <20240916094309.1226908-1-dev.jain@arm.com>
X-Mailer: git-send-email 2.34.1

It was observed at [1] and [2] that the current kernel behaviour when a
hugezeropage is written to is inconsistent and suboptimal. For a VMA
eligible for a PMD-sized THP, a first write fault installs a PMD-mapped
THP. If the first fault is a read, however, the kernel installs a PMD
pointing to the hugezeropage; a subsequent write then triggers a
write-protection fault that shatters the mapping, leaving one writable
page with the remaining PTEs write-protected. Compared to the single
write-fault case, an application touching the whole region therefore
suffers 512 extra page faults, plus the overhead of khugepaged later
trying to collapse that area into a THP anyway. Instead, replace the
hugezeropage with a THP on the wp-fault.
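
For concreteness, the following minimal userspace sketch exercises the
read-then-write sequence described above. It assumes a 2MiB PMD size and
THP enabled for the range via MADV_HUGEPAGE; the constants and alignment
helper are illustrative and not taken from the patchset.

#include <stdint.h>
#include <sys/mman.h>

#define PMD_SIZE (2UL << 20)	/* assumes 2MiB PMD-sized THPs */

int main(void)
{
	/* Over-allocate so a PMD-aligned 2MiB window can be carved out. */
	char *raw = mmap(NULL, 2 * PMD_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (raw == MAP_FAILED)
		return 1;
	char *buf = (char *)(((uintptr_t)raw + PMD_SIZE - 1) & ~(PMD_SIZE - 1));

	/* Ask for THP on this range (needs the "madvise" or "always" policy). */
	madvise(buf, PMD_SIZE, MADV_HUGEPAGE);

	/* Read fault first: the kernel maps the hugezeropage at PMD level. */
	volatile char c = buf[0];
	(void)c;

	/*
	 * Write fault next: without this series the zero PMD is shattered
	 * into PTEs (one writable page, the rest write-protected); with it,
	 * a PMD-mapped THP is installed instead.
	 */
	buf[0] = 1;
	return 0;
}
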
v3->v4:
 - Renames: pmd_thp_fault_alloc -> vma_alloc_anon_folio_pmd,
   map_pmd_thp -> map_anon_folio_pmd
 - Instead of passing haddr around, compute it at the various places it is
   needed; similarly for the gfp flags
 - Pass haddr to update_mmu_cache_pmd() instead of the unaligned address
 - Do not pass vmf to map_anon_folio_pmd()
 - Order declarations in reverse xmas tree order
 - Drop a newline that was introduced accidentally
 - Call __pmd_thp_fault_success_stats() from map_anon_folio_pmd()
 - Correctly return NULL from vma_alloc_anon_folio_pmd()
 - Initialize pgtable to NULL in __do_huge_pmd_anonymous_page() to avoid
   freeing a pgtable that was never allocated
 - Drop the if conditions from map_anon_folio_pmd(); let the caller handle
   them

v2->v3:
 - Drop the foliop and order parameters, prefix the THP functions with pmd_
 - Allocate the THP first, then the pgtable, not vice-versa
 - Move pgtable_trans_huge_deposit() from map_pmd_thp() to the caller
 - Drop exposing the functions in include/linux/huge_mm.h
 - Open-code do_huge_zero_wp_pmd_locked()
 - Release the folio if the pmd changed after taking the lock, or if
   check_stable_address_space() returns VM_FAULT_SIGBUS
 - Drop uffd-wp preservation: looking at page_table_check_pmd_flags(),
   preserving uffd-wp on a writable entry is invalid, and looking at
   mfill_atomic(), uffd_copy() is a null operation when the pmd is marked
   uffd-wp

v1->v2:
 - Wrap do_huge_zero_wp_pmd_locked() with the lock and unlock
 - Call thp_fault_alloc() before do_huge_zero_wp_pmd_locked() to avoid
   calling a sleeping function from spinlock context

[1]: https://lore.kernel.org/all/3743d7e1-0b79-4eaf-82d5-d1ca29fe347d@arm.com/
[2]: https://lore.kernel.org/all/1cfae0c0-96a2-4308-9c62-f7a640520242@arm.com/

The patchset has been rebased on the mm-unstable branch.

Dev Jain (2):
  mm: Abstract THP allocation
  mm: Allocate THP on hugezeropage wp-fault

 mm/huge_memory.c | 152 +++++++++++++++++++++++++++++++++--------------
 1 file changed, 109 insertions(+), 43 deletions(-)
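
For orientation only, below is a rough sketch of the shape the wp-fault
path could take, pieced together from the helper names mentioned in the
changelog above (vma_alloc_anon_folio_pmd(), map_anon_folio_pmd(),
check_stable_address_space()). The helper signatures, locking details
and pgtable bookkeeping are assumptions made for illustration; refer to
the patches themselves for the real code.

static vm_fault_t do_huge_zero_wp_pmd(struct vm_fault *vmf)
{
	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
	struct vm_area_struct *vma = vmf->vma;
	struct mmu_notifier_range range;
	struct folio *folio;
	vm_fault_t ret = 0;

	/* Allocate the THP before taking the PMD lock, since it may sleep. */
	folio = vma_alloc_anon_folio_pmd(vma, vmf->address);
	if (unlikely(!folio))
		return VM_FAULT_FALLBACK;

	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm,
				haddr, haddr + HPAGE_PMD_SIZE);
	mmu_notifier_invalidate_range_start(&range);

	vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
	/* Release the folio if the pmd changed while the lock was dropped. */
	if (unlikely(!pmd_same(pmdp_get(vmf->pmd), vmf->orig_pmd)))
		goto release;
	/* ... or if the address space is no longer stable. */
	ret = check_stable_address_space(vma->vm_mm);
	if (ret)
		goto release;

	/* Replace the zero-page pmd with a writable mapping of the folio. */
	(void)pmdp_huge_clear_flush(vma, haddr, vmf->pmd);
	map_anon_folio_pmd(folio, vmf->pmd, vma, haddr);
	goto unlock;
release:
	folio_put(folio);
unlock:
	spin_unlock(vmf->ptl);
	mmu_notifier_invalidate_range_end(&range);
	return ret;
}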