From patchwork Fri Jul 29 01:40:38 2022
From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Huang Ying, peterx@redhat.com, Andrea Arcangeli, Andrew Morton, "Kirill A. Shutemov", Nadav Amit, Hugh Dickins, David Hildenbrand, Vlastimil Babka
Subject: [PATCH RFC 1/4] mm/swap: Add swp_offset_pfn() to fetch PFN from swap entry
Date: Thu, 28 Jul 2022 21:40:38 -0400
Message-Id: <20220729014041.21292-2-peterx@redhat.com>
In-Reply-To: <20220729014041.21292-1-peterx@redhat.com>
References: <20220729014041.21292-1-peterx@redhat.com>

We have a bunch of special swap entries that store a PFN inside the swap
offset field.  To fetch the PFN, users currently call swp_offset() and
assume the result is the PFN.  Add a helper, swp_offset_pfn(), to fetch
the PFN instead: it extracts only the maximum possible width of a PFN on
the host, while a BUILD_BUG_ON() in is_pfn_swap_entry() checks against
MAX_PHYSMEM_BITS to make sure the swap offset can always store a full
PFN.
One reason to do so is that we never verified whether the swap offset can
really fit a PFN.  This patch also prepares for the future possibility of
storing more information inside the swp offset field, after which assuming
"swp_offset(entry)" to be the PFN will no longer hold.

Replace the swp_offset() callers with swp_offset_pfn() where appropriate.
Note that many of the existing users are not candidates for the
replacement, e.g.: (1) when the swap entry is not a pfn swap entry at all,
or (2) when we want to keep the whole swp_offset but only change the swp
type.  The latter can happen when fork() runs on a write-migration swap
entry pte and we only want to change the migration type from write to read
while keeping the rest, so it is "changing the swap type only", not
"fetching the PFN".  Those callers are left alone so that any extra
information within the swp offset will be carried over naturally in those
cases.

While at it, drop hwpoison_entry_to_pfn(), because that is exactly what
the new swp_offset_pfn() does.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 arch/arm64/mm/hugetlbpage.c |  2 +-
 include/linux/swapops.h     | 28 ++++++++++++++++++++++------
 mm/hmm.c                    |  2 +-
 mm/memory-failure.c         |  2 +-
 mm/page_vma_mapped.c        |  6 +++---
 5 files changed, 28 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 7430060cb0d6..f897d40821dd 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -242,7 +242,7 @@ static inline struct folio *hugetlb_swap_entry_to_folio(swp_entry_t entry)
 {
 	VM_BUG_ON(!is_migration_entry(entry) && !is_hwpoison_entry(entry));
 
-	return page_folio(pfn_to_page(swp_offset(entry)));
+	return page_folio(pfn_to_page(swp_offset_pfn(entry)));
 }
 
 void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index a3d435bf9f97..5378f77860fb 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -23,6 +23,14 @@
 #define SWP_TYPE_SHIFT	(BITS_PER_XA_VALUE - MAX_SWAPFILES_SHIFT)
 #define SWP_OFFSET_MASK	((1UL << SWP_TYPE_SHIFT) - 1)
 
+/*
+ * Definitions only for PFN swap entries (see is_pfn_swap_entry()).  To
+ * store PFN, we only need SWP_PFN_BITS bits.  Each of the pfn swap entries
+ * can use the extra bits to store other information besides PFN.
+ */
+#define SWP_PFN_BITS	(MAX_PHYSMEM_BITS - PAGE_SHIFT)
+#define SWP_PFN_MASK	((1UL << SWP_PFN_BITS) - 1)
+
 /* Clear all flags but only keep swp_entry_t related information */
 static inline pte_t pte_swp_clear_flags(pte_t pte)
 {
@@ -64,6 +72,16 @@ static inline pgoff_t swp_offset(swp_entry_t entry)
 	return entry.val & SWP_OFFSET_MASK;
 }
 
+/*
+ * This should only be called upon a pfn swap entry to get the PFN stored
+ * in the swap entry.  Please refer to is_pfn_swap_entry() for the
+ * definition of a pfn swap entry.
+ */
+static inline unsigned long swp_offset_pfn(swp_entry_t entry)
+{
+	return swp_offset(entry) & SWP_PFN_MASK;
+}
+
 /* check whether a pte points to a swap entry */
 static inline int is_swap_pte(pte_t pte)
 {
@@ -369,7 +387,7 @@ static inline int pte_none_mostly(pte_t pte)
 
 static inline struct page *pfn_swap_entry_to_page(swp_entry_t entry)
 {
-	struct page *p = pfn_to_page(swp_offset(entry));
+	struct page *p = pfn_to_page(swp_offset_pfn(entry));
 
 	/*
 	 * Any use of migration entries may only occur while the
@@ -387,6 +405,9 @@ static inline struct page *pfn_swap_entry_to_page(swp_entry_t entry)
  */
 static inline bool is_pfn_swap_entry(swp_entry_t entry)
 {
+	/* Make sure the swp offset can always store the needed fields */
+	BUILD_BUG_ON(SWP_TYPE_SHIFT < SWP_PFN_BITS);
+
 	return is_migration_entry(entry) || is_device_private_entry(entry) ||
 	       is_device_exclusive_entry(entry);
 }
@@ -475,11 +496,6 @@ static inline int is_hwpoison_entry(swp_entry_t entry)
 	return swp_type(entry) == SWP_HWPOISON;
 }
 
-static inline unsigned long hwpoison_entry_to_pfn(swp_entry_t entry)
-{
-	return swp_offset(entry);
-}
-
 static inline void num_poisoned_pages_inc(void)
 {
 	atomic_long_inc(&num_poisoned_pages);
diff --git a/mm/hmm.c b/mm/hmm.c
index f2aa63b94d9b..3850fb625dda 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -253,7 +253,7 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 			cpu_flags = HMM_PFN_VALID;
 			if (is_writable_device_private_entry(entry))
 				cpu_flags |= HMM_PFN_WRITE;
-			*hmm_pfn = swp_offset(entry) | cpu_flags;
+			*hmm_pfn = swp_offset_pfn(entry) | cpu_flags;
 			return 0;
 		}
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index cc6fc9be8d22..e451219124dd 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -632,7 +632,7 @@ static int check_hwpoisoned_entry(pte_t pte, unsigned long addr, short shift,
 		swp_entry_t swp = pte_to_swp_entry(pte);
 
 		if (is_hwpoison_entry(swp))
-			pfn = hwpoison_entry_to_pfn(swp);
+			pfn = swp_offset_pfn(swp);
 	}
 
 	if (!pfn || pfn != poisoned_pfn)
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index 8e9e574d535a..93e13fc17d3c 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -86,7 +86,7 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw)
 		    !is_device_exclusive_entry(entry))
 			return false;
 
-		pfn = swp_offset(entry);
+		pfn = swp_offset_pfn(entry);
 	} else if (is_swap_pte(*pvmw->pte)) {
 		swp_entry_t entry;
 
@@ -96,7 +96,7 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw)
 		    !is_device_exclusive_entry(entry))
 			return false;
 
-		pfn = swp_offset(entry);
+		pfn = swp_offset_pfn(entry);
 	} else {
 		if (!pte_present(*pvmw->pte))
 			return false;
@@ -221,7 +221,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 			return not_found(pvmw);
 		entry = pmd_to_swp_entry(pmde);
 		if (!is_migration_entry(entry) ||
-		    !check_pmd(swp_offset(entry), pvmw))
+		    !check_pmd(swp_offset_pfn(entry), pvmw))
 			return not_found(pvmw);
 		return true;
 	}
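
To make the encoding concrete, here is a minimal standalone sketch (not
kernel code: the constants are illustrative stand-ins for MAX_PHYSMEM_BITS,
PAGE_SHIFT and the type shift on a 64-bit host, and swp_entry()/swp_offset()
are simplified models of the kernel helpers) showing why swp_offset_pfn()
masks with SWP_PFN_MASK rather than trusting the raw offset once extra bits
live above the PFN:

#include <assert.h>
#include <stdio.h>

/* Illustrative stand-ins for the kernel constants (x86-64-like values) */
#define MAX_PHYSMEM_BITS 46
#define PAGE_SHIFT       12
#define SWP_TYPE_SHIFT   58	/* bits available for the offset */

#define SWP_PFN_BITS     (MAX_PHYSMEM_BITS - PAGE_SHIFT)	/* 34 */
#define SWP_PFN_MASK     ((1UL << SWP_PFN_BITS) - 1)
#define SWP_OFFSET_MASK  ((1UL << SWP_TYPE_SHIFT) - 1)

typedef struct { unsigned long val; } swp_entry_t;

static swp_entry_t swp_entry(unsigned long type, unsigned long offset)
{
	/* Pack type above the offset, as the kernel's swp_entry() does */
	return (swp_entry_t){ (type << SWP_TYPE_SHIFT) | (offset & SWP_OFFSET_MASK) };
}

static unsigned long swp_offset(swp_entry_t e)
{
	return e.val & SWP_OFFSET_MASK;
}

/* Only the low SWP_PFN_BITS of the offset hold the PFN; bits above them
 * are free for extra flags (e.g. the young bit added later in this
 * series), so the raw offset may no longer equal the PFN. */
static unsigned long swp_offset_pfn(swp_entry_t e)
{
	return swp_offset(e) & SWP_PFN_MASK;
}

int main(void)
{
	unsigned long pfn  = 0x123456;
	unsigned long flag = 1UL << SWP_PFN_BITS;	/* hypothetical extra bit */
	swp_entry_t e = swp_entry(2, pfn | flag);

	assert(swp_offset(e) != pfn);		/* raw offset carries the flag too */
	assert(swp_offset_pfn(e) == pfn);	/* the helper still recovers the PFN */
	printf("pfn=%#lx ok\n", swp_offset_pfn(e));
	return 0;
}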
From patchwork Fri Jul 29 01:40:39 2022
From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Huang Ying, peterx@redhat.com, Andrea Arcangeli, Andrew Morton, "Kirill A. Shutemov", Nadav Amit, Hugh Dickins, David Hildenbrand, Vlastimil Babka
Subject: [PATCH RFC 2/4] mm: Remember young bit for page migrations
Date: Thu, 28 Jul 2022 21:40:39 -0400
Message-Id: <20220729014041.21292-3-peterx@redhat.com>
In-Reply-To: <20220729014041.21292-1-peterx@redhat.com>
References: <20220729014041.21292-1-peterx@redhat.com>

When a page is migrated, we always ignore the young bit in the old page
table and mark the page as old in the new page table, using either
pte_mkold() or pmd_mkold().  That is fine functionally, but it is
unfriendly to page reclaim, because the page being moved may be actively
accessed during the procedure.

We can easily remember the young bit configuration and recover that
information once the page is migrated.  To achieve this, define a new bit
in the migration swap offset field recording whether the old pte had the
young bit set.  Then, when removing/recovering the migration entry, we
can restore the young bit even though the page has moved.

One thing to mention is that the whole feature is based on an
arch-specific macro, __ARCH_SWP_OFFSET_BITS, which needs to be defined
per arch.  The macro tells how many bits are available in the
arch-specific swp offset field.
When that macro is not defined, we assume there are no free bits in the
migration swap entry offset, so we cannot persist the young bit.  Hence,
so far this patch introduces no functional change at all, since no arch
has defined __ARCH_SWP_OFFSET_BITS yet.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 include/linux/swapops.h | 57 +++++++++++++++++++++++++++++++++++++++++
 mm/huge_memory.c        | 10 ++++++--
 mm/migrate.c            |  4 ++-
 mm/migrate_device.c     |  2 ++
 mm/rmap.c               |  3 ++-
 5 files changed, 72 insertions(+), 4 deletions(-)

diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index 5378f77860fb..3bbb57aa6742 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -31,6 +31,28 @@
 #define SWP_PFN_BITS	(MAX_PHYSMEM_BITS - PAGE_SHIFT)
 #define SWP_PFN_MASK	((1UL << SWP_PFN_BITS) - 1)
 
+#ifdef __ARCH_SWP_OFFSET_BITS
+#define SWP_PFN_OFFSET_FREE_BITS	(__ARCH_SWP_OFFSET_BITS - SWP_PFN_BITS)
+#else
+/*
+ * If __ARCH_SWP_OFFSET_BITS is not defined, assume we don't have free bits
+ * to be on the safe side.
+ */
+#define SWP_PFN_OFFSET_FREE_BITS	0
+#endif
+
+/**
+ * Migration swap entry specific bitfield definitions.
+ *
+ * @SWP_MIG_YOUNG_BIT: Whether the page used to have the young bit set
+ *
+ * Note: these bits will be used only if there are free bits in the arch
+ * specific swp offset field.  The arch needs __ARCH_SWP_OFFSET_BITS
+ * defined to use the bits/features.
+ */
+#define SWP_MIG_YOUNG_BIT	(1UL << SWP_PFN_BITS)
+#define SWP_MIG_OFFSET_BITS	(SWP_PFN_BITS + 1)
+
 /* Clear all flags but only keep swp_entry_t related information */
 static inline pte_t pte_swp_clear_flags(pte_t pte)
 {
@@ -258,6 +280,30 @@ static inline swp_entry_t make_writable_migration_entry(pgoff_t offset)
 	return swp_entry(SWP_MIGRATION_WRITE, offset);
 }
 
+static inline swp_entry_t make_migration_entry_young(swp_entry_t entry)
+{
+	/*
+	 * Due to a limitation on x86_64 we can't use #ifdef, as the
+	 * SWP_PFN_OFFSET_FREE_BITS value can be changed dynamically for
+	 * 4/5 level pgtables.  For all the non-x86_64 archs (where the
+	 * macro MAX_PHYSMEM_BITS is constant) this branching should be
+	 * optimized out by the compiler.
+	 */
+	if (SWP_PFN_OFFSET_FREE_BITS)
+		return swp_entry(swp_type(entry),
+				 swp_offset(entry) | SWP_MIG_YOUNG_BIT);
+	return entry;
+}
+
+static inline bool is_migration_entry_young(swp_entry_t entry)
+{
+	/* Please refer to the comment in make_migration_entry_young() */
+	if (SWP_PFN_OFFSET_FREE_BITS)
+		return swp_offset(entry) & SWP_MIG_YOUNG_BIT;
+	/* Keep the old behavior of aging the page after migration */
+	return false;
+}
+
 extern void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
 				   spinlock_t *ptl);
 extern void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
@@ -304,6 +350,16 @@ static inline int is_readable_migration_entry(swp_entry_t entry)
 	return 0;
 }
 
+static inline swp_entry_t make_migration_entry_young(swp_entry_t entry)
+{
+	return entry;
+}
+
+static inline bool is_migration_entry_young(swp_entry_t entry)
+{
+	return false;
+}
+
 #endif
 
 typedef unsigned long pte_marker;
@@ -407,6 +463,7 @@ static inline bool is_pfn_swap_entry(swp_entry_t entry)
 {
 	/* Make sure the swp offset can always store the needed fields */
 	BUILD_BUG_ON(SWP_TYPE_SHIFT < SWP_PFN_BITS);
+	BUILD_BUG_ON(SWP_TYPE_SHIFT < SWP_MIG_OFFSET_BITS);
 
 	return is_migration_entry(entry) || is_device_private_entry(entry) ||
 	       is_device_exclusive_entry(entry);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 29e3628687a6..131fe5754d8f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2088,7 +2088,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 		write = is_writable_migration_entry(entry);
 		if (PageAnon(page))
 			anon_exclusive = is_readable_exclusive_migration_entry(entry);
-		young = false;
+		young = is_migration_entry_young(entry);
 		soft_dirty = pmd_swp_soft_dirty(old_pmd);
 		uffd_wp = pmd_swp_uffd_wp(old_pmd);
 	} else {
@@ -2146,6 +2146,8 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 			else
 				swp_entry = make_readable_migration_entry(
 							page_to_pfn(page + i));
+			if (young)
+				swp_entry = make_migration_entry_young(swp_entry);
 			entry = swp_entry_to_pte(swp_entry);
 			if (soft_dirty)
 				entry = pte_swp_mksoft_dirty(entry);
@@ -3148,6 +3150,8 @@ int set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
 		entry = make_readable_exclusive_migration_entry(page_to_pfn(page));
 	else
 		entry = make_readable_migration_entry(page_to_pfn(page));
+	if (pmd_young(pmdval))
+		entry = make_migration_entry_young(entry);
 	pmdswp = swp_entry_to_pmd(entry);
 	if (pmd_soft_dirty(pmdval))
 		pmdswp = pmd_swp_mksoft_dirty(pmdswp);
@@ -3173,13 +3177,15 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
 	entry = pmd_to_swp_entry(*pvmw->pmd);
 	get_page(new);
-	pmde = pmd_mkold(mk_huge_pmd(new, READ_ONCE(vma->vm_page_prot)));
+	pmde = mk_huge_pmd(new, READ_ONCE(vma->vm_page_prot));
 	if (pmd_swp_soft_dirty(*pvmw->pmd))
 		pmde = pmd_mksoft_dirty(pmde);
 	if (is_writable_migration_entry(entry))
 		pmde = maybe_pmd_mkwrite(pmde, vma);
 	if (pmd_swp_uffd_wp(*pvmw->pmd))
 		pmde = pmd_wrprotect(pmd_mkuffd_wp(pmde));
+	if (!is_migration_entry_young(entry))
+		pmde = pmd_mkold(pmde);
 
 	if (PageAnon(new)) {
 		rmap_t rmap_flags = RMAP_COMPOUND;
diff --git a/mm/migrate.c b/mm/migrate.c
index 1649270bc1a7..62cb3a9451de 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -199,7 +199,7 @@ static bool remove_migration_pte(struct folio *folio,
 #endif
 
 		folio_get(folio);
-		pte = pte_mkold(mk_pte(new, READ_ONCE(vma->vm_page_prot)));
+		pte = mk_pte(new, READ_ONCE(vma->vm_page_prot));
 		if (pte_swp_soft_dirty(*pvmw.pte))
 			pte = pte_mksoft_dirty(pte);
 
@@ -207,6 +207,8 @@ static bool remove_migration_pte(struct folio *folio,
 		 * Recheck VMA as permissions can change since migration started
 		 */
 		entry = pte_to_swp_entry(*pvmw.pte);
+		if (!is_migration_entry_young(entry))
+			pte = pte_mkold(pte);
 		if (is_writable_migration_entry(entry))
 			pte = maybe_mkwrite(pte, vma);
 		else if (pte_swp_uffd_wp(*pvmw.pte))
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 7feeb447e3b9..fd8daf45c1a6 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -221,6 +221,8 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 			else
 				entry = make_readable_migration_entry(
 					page_to_pfn(page));
+			if (pte_young(pte))
+				entry = make_migration_entry_young(entry);
 			swp_pte = swp_entry_to_pte(entry);
 			if (pte_present(pte)) {
 				if (pte_soft_dirty(pte))
diff --git a/mm/rmap.c b/mm/rmap.c
index af775855e58f..605fb37ae95e 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2065,7 +2065,8 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 			else
 				entry = make_readable_migration_entry(
 					page_to_pfn(subpage));
-
+			if (pte_young(pteval))
+				entry = make_migration_entry_young(entry);
 			swp_pte = swp_entry_to_pte(entry);
 			if (pte_soft_dirty(pteval))
 				swp_pte = pte_swp_mksoft_dirty(swp_pte);
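
As a sanity check of the encoding, here is a small standalone sketch
(plain C, not kernel code: SWP_PFN_BITS and the one spare bit are made-up
values, and the two helpers are simplified models of the kernel functions
above) of the young-bit round trip the migration path performs:

#include <assert.h>
#include <stdbool.h>

#define SWP_PFN_BITS              34	/* illustrative value */
#define SWP_PFN_MASK              ((1UL << SWP_PFN_BITS) - 1)
#define SWP_MIG_YOUNG_BIT         (1UL << SWP_PFN_BITS)
#define SWP_PFN_OFFSET_FREE_BITS  1	/* pretend the arch has one spare bit */

typedef struct { unsigned long offset; } swp_entry_t;

/* Mirrors make_migration_entry_young(): only set the bit when spare bits exist */
static swp_entry_t make_migration_entry_young(swp_entry_t e)
{
	if (SWP_PFN_OFFSET_FREE_BITS)
		e.offset |= SWP_MIG_YOUNG_BIT;
	return e;
}

/* Mirrors is_migration_entry_young(): without spare bits, fall back to "old" */
static bool is_migration_entry_young(swp_entry_t e)
{
	if (SWP_PFN_OFFSET_FREE_BITS)
		return e.offset & SWP_MIG_YOUNG_BIT;
	return false;
}

int main(void)
{
	swp_entry_t e = { .offset = 0x1000 };	/* the PFN part of the offset */

	e = make_migration_entry_young(e);	/* pte_young() was set at unmap time */
	assert(is_migration_entry_young(e));	/* so skip pte_mkold() at remap time */
	assert((e.offset & SWP_PFN_MASK) == 0x1000);	/* PFN is untouched */
	return 0;
}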
From patchwork Fri Jul 29 01:40:40 2022
Shutemov" , Nadav Amit , Hugh Dickins , David Hildenbrand , Vlastimil Babka Subject: [PATCH RFC 3/4] mm/x86: Use SWP_TYPE_BITS in 3-level swap macros Date: Thu, 28 Jul 2022 21:40:40 -0400 Message-Id: <20220729014041.21292-4-peterx@redhat.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220729014041.21292-1-peterx@redhat.com> References: <20220729014041.21292-1-peterx@redhat.com> MIME-Version: 1.0 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com Content-type: text/plain ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1659058850; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=MNwWCoZOuwPxSYyDQ6YzxKBvZEYHs5K0G6f2+t5kiFc=; b=7rKsqDptuBZW78YsI2xmyKtKB/9aNVTt6laiuxztnzRRMCS+UOkdR52GXfMT99Efq2gwN/ pe5qT7Ear4PLAqIQRCATVmLgigBvuyeQSWQdGnVZ/5TgwEmEsV8SAYFUrL8JR0mcQd5wJC M+ZHmPC3CU7xXgSLrxeDTZqw+rV9txQ= ARC-Authentication-Results: i=1; imf16.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=Hei0UT4X; spf=pass (imf16.hostedemail.com: domain of peterx@redhat.com designates 170.10.129.124 as permitted sender) smtp.mailfrom=peterx@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1659058850; a=rsa-sha256; cv=none; b=NFGnnDiKaxJivzfH4E69nxR9Y036QP7R+PcRkWWZskp4v13A6jG/YOmVg7kXRQp1wd3qlI jV486OKnlxuvHDpT3+W1Z3+MyBF/eDwFdBJS6YFtw4RBrlRgJ+W881Ngn/Fil/IZAFV1Zf N2D/cDovvlyajsDXhTemyUvqtCrPHGQ= X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: A70D418002D X-Rspam-User: Authentication-Results: imf16.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=Hei0UT4X; spf=pass (imf16.hostedemail.com: domain of peterx@redhat.com designates 170.10.129.124 as permitted sender) smtp.mailfrom=peterx@redhat.com; dmarc=pass (policy=none) header.from=redhat.com X-Stat-Signature: xkkqmp3mc8ew5tmcrtagoachcwxs37dn X-HE-Tag: 1659058849-942160 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Replace all the magic "5" with the macro. 
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 arch/x86/include/asm/pgtable-3level.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/pgtable-3level.h b/arch/x86/include/asm/pgtable-3level.h
index e896ebef8c24..28421a887209 100644
--- a/arch/x86/include/asm/pgtable-3level.h
+++ b/arch/x86/include/asm/pgtable-3level.h
@@ -256,10 +256,10 @@ static inline pud_t native_pudp_get_and_clear(pud_t *pudp)
 /* We always extract/encode the offset by shifting it all the way up, and then down again */
 #define SWP_OFFSET_SHIFT	(SWP_OFFSET_FIRST_BIT + SWP_TYPE_BITS)
 
-#define MAX_SWAPFILES_CHECK() BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > 5)
-#define __swp_type(x)			(((x).val) & 0x1f)
-#define __swp_offset(x)			((x).val >> 5)
-#define __swp_entry(type, offset)	((swp_entry_t){(type) | (offset) << 5})
+#define MAX_SWAPFILES_CHECK() BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > SWP_TYPE_BITS)
+#define __swp_type(x)			(((x).val) & ((1UL << SWP_TYPE_BITS) - 1))
+#define __swp_offset(x)			((x).val >> SWP_TYPE_BITS)
+#define __swp_entry(type, offset)	((swp_entry_t){(type) | (offset) << SWP_TYPE_BITS})
 
 /*
  * Normally, __swp_entry() converts from arch-independent swp_entry_t to
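
A quick standalone check (plain C, using 5 for SWP_TYPE_BITS as in the
original magic numbers; the swp_entry_t struct is a simplified model) that
the rewritten macros still round-trip a (type, offset) pair exactly like
the magic-number versions did:

#include <assert.h>

#define SWP_TYPE_BITS 5	/* the value the magic "5" stood for */

typedef struct { unsigned long val; } swp_entry_t;

#define __swp_type(x)		(((x).val) & ((1UL << SWP_TYPE_BITS) - 1))
#define __swp_offset(x)		((x).val >> SWP_TYPE_BITS)
#define __swp_entry(type, offset) \
	((swp_entry_t){ (type) | (offset) << SWP_TYPE_BITS })

int main(void)
{
	swp_entry_t e = __swp_entry(3, 0xabcd);	/* type 3, offset 0xabcd */

	/* The macros must round-trip: decode exactly what was encoded */
	assert(__swp_type(e) == 3);
	assert(__swp_offset(e) == 0xabcd);
	return 0;
}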
From patchwork Fri Jul 29 01:40:41 2022
Shutemov" , Nadav Amit , Hugh Dickins , David Hildenbrand , Vlastimil Babka Subject: [PATCH RFC 4/4] mm/x86: Define __ARCH_SWP_OFFSET_BITS Date: Thu, 28 Jul 2022 21:40:41 -0400 Message-Id: <20220729014041.21292-5-peterx@redhat.com> X-Mailer: git-send-email 2.32.0 In-Reply-To: <20220729014041.21292-1-peterx@redhat.com> References: <20220729014041.21292-1-peterx@redhat.com> MIME-Version: 1.0 X-Mimecast-Spam-Score: 0 X-Mimecast-Originator: redhat.com Content-type: text/plain ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1659058852; a=rsa-sha256; cv=none; b=5RP4EFyxRts8Ru9/VqM4riVccL0TeUk0xzxSnvw/TcVroCT4mwb1r/r6vJ0WJUeJ3ofK0R NzbGlgV2IMlKJOVO7P7obyvecn0Vi0792z27Yjt7p60INF0MhhBQY36x8SxTfH1iZaF5jb MWqxJlYUYzXgujJGz+jdgPI9CdZIPUU= ARC-Authentication-Results: i=1; imf17.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=UMtlD86t; spf=pass (imf17.hostedemail.com: domain of peterx@redhat.com designates 170.10.129.124 as permitted sender) smtp.mailfrom=peterx@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1659058851; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-type: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=uHCAtjKZb2F/RWm6an0jiLe9Ivmk/LTJJW/gUBH3IyE=; b=HxDlS43iiWCmQh5K6Pb8LX1D6Cj0t7bxjNJFyi+pw6aTVuh3qo6MfP3K0gI0sZdp9k1W4B fgfPAyGMGMQT9IR3GUKI2s4n9q3VWxmTGJLs1A76oRfl+hikKKMrPR8jo1VhiIC1DzsIEF aoac3LNm4M2UOkbyT0zYInspMbk7nwc= X-Rspamd-Queue-Id: D7E4C4002C X-Rspam-User: X-Stat-Signature: ztjfirjfsfu8emqdie5paw5dj7y6qf8y Authentication-Results: imf17.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=UMtlD86t; spf=pass (imf17.hostedemail.com: domain of peterx@redhat.com designates 170.10.129.124 as permitted sender) smtp.mailfrom=peterx@redhat.com; dmarc=pass (policy=none) header.from=redhat.com X-Rspamd-Server: rspam08 X-HE-Tag: 1659058851-53375 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This will enable the new migration young bit for all x86 systems for both 32 bits and 64 bits systems (including PAE). Signed-off-by: Peter Xu --- arch/x86/include/asm/pgtable-2level.h | 6 ++++++ arch/x86/include/asm/pgtable-3level.h | 7 +++++++ arch/x86/include/asm/pgtable_64.h | 5 +++++ 3 files changed, 18 insertions(+) diff --git a/arch/x86/include/asm/pgtable-2level.h b/arch/x86/include/asm/pgtable-2level.h index 60d0f9015317..6e70833feb69 100644 --- a/arch/x86/include/asm/pgtable-2level.h +++ b/arch/x86/include/asm/pgtable-2level.h @@ -95,6 +95,12 @@ static inline unsigned long pte_bitop(unsigned long value, unsigned int rightshi #define __pte_to_swp_entry(pte) ((swp_entry_t) { (pte).pte_low }) #define __swp_entry_to_pte(x) ((pte_t) { .pte = (x).val }) +/* + * This defines how many bits we have in the arch specific swp offset. + * For 32 bits vanilla systems the pte and swap entry has the same size. 
+ */
+#define __ARCH_SWP_OFFSET_BITS	(sizeof(swp_entry_t) - SWP_TYPE_BITS)
+
 /* No inverted PFNs on 2 level page tables */
 
 static inline u64 protnone_mask(u64 val)
diff --git a/arch/x86/include/asm/pgtable-3level.h b/arch/x86/include/asm/pgtable-3level.h
index 28421a887209..8dbf29b51f8b 100644
--- a/arch/x86/include/asm/pgtable-3level.h
+++ b/arch/x86/include/asm/pgtable-3level.h
@@ -287,6 +287,13 @@ static inline pud_t native_pudp_get_and_clear(pud_t *pudp)
 #define __pte_to_swp_entry(pte)	(__swp_entry(__pteval_swp_type(pte), \
 					     __pteval_swp_offset(pte)))
 
+/*
+ * This defines how many bits we have in the arch specific swp offset.
+ * Here, since we're putting the 32 bit swap entry into a 64 bit pte, the
+ * limitation is the 32 bit swap entry minus the swap type field.
+ */
+#define __ARCH_SWP_OFFSET_BITS	(sizeof(swp_entry_t) - SWP_TYPE_BITS)
+
 #include <asm/pgtable-invert.h>
 
 #endif /* _ASM_X86_PGTABLE_3LEVEL_H */
diff --git a/arch/x86/include/asm/pgtable_64.h b/arch/x86/include/asm/pgtable_64.h
index e479491da8d5..1714f0ded1db 100644
--- a/arch/x86/include/asm/pgtable_64.h
+++ b/arch/x86/include/asm/pgtable_64.h
@@ -217,6 +217,11 @@ static inline void native_pgd_clear(pgd_t *pgd)
 /* We always extract/encode the offset by shifting it all the way up, and then down again */
 #define SWP_OFFSET_SHIFT	(SWP_OFFSET_FIRST_BIT + SWP_TYPE_BITS)
+/*
+ * This defines how many bits we have in the arch specific swp offset.  64
+ * bit systems have both swp_entry_t and pte in 64 bits.
+ */
+#define __ARCH_SWP_OFFSET_BITS	(BITS_PER_LONG - SWP_OFFSET_SHIFT)
 #define MAX_SWAPFILES_CHECK() BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > SWP_TYPE_BITS)
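
To see how the pieces of the series fit together, here is a hedged
back-of-the-envelope check (plain C; SWP_OFFSET_FIRST_BIT, SWP_TYPE_BITS
and MAX_PHYSMEM_BITS below are illustrative x86-64-like values, and the
real constants depend on the kernel config and paging level) that the
64-bit definition leaves spare offset bits for the migration young bit:

#include <stdio.h>

/* Illustrative stand-ins; the real values depend on the kernel config */
#define BITS_PER_LONG		64
#define SWP_TYPE_BITS		5
#define SWP_OFFSET_FIRST_BIT	9
#define SWP_OFFSET_SHIFT	(SWP_OFFSET_FIRST_BIT + SWP_TYPE_BITS)	/* 14 */
#define MAX_PHYSMEM_BITS	46
#define PAGE_SHIFT		12

#define __ARCH_SWP_OFFSET_BITS	(BITS_PER_LONG - SWP_OFFSET_SHIFT)	/* 50 */
#define SWP_PFN_BITS		(MAX_PHYSMEM_BITS - PAGE_SHIFT)		/* 34 */
#define SWP_PFN_OFFSET_FREE_BITS (__ARCH_SWP_OFFSET_BITS - SWP_PFN_BITS)

int main(void)
{
	/* 50 offset bits - 34 PFN bits = 16 spare bits under these
	 * assumptions: comfortably enough for SWP_MIG_YOUNG_BIT plus
	 * future flags above the PFN. */
	printf("offset bits: %d, pfn bits: %d, free bits: %d\n",
	       __ARCH_SWP_OFFSET_BITS, SWP_PFN_BITS, SWP_PFN_OFFSET_FREE_BITS);
	return 0;
}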