From patchwork Thu Aug 11 16:13:25 2022
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 12941581
From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Hugh Dickins, "Kirill A. Shutemov", Alistair Popple, peterx@redhat.com,
 Andrea Arcangeli, Minchan Kim, Andrew Morton, David Hildenbrand,
 Andi Kleen, Nadav Amit, Huang Ying, Vlastimil Babka
Subject: [PATCH v4 1/7] mm/x86: Use SWP_TYPE_BITS in 3-level swap macros
Date: Thu, 11 Aug 2022 12:13:25 -0400
Message-Id: <20220811161331.37055-2-peterx@redhat.com>
In-Reply-To: <20220811161331.37055-1-peterx@redhat.com>
References: <20220811161331.37055-1-peterx@redhat.com>

Replace all the magic "5"s with the macro.

Reviewed-by: David Hildenbrand
Reviewed-by: Huang Ying
Signed-off-by: Peter Xu
---
 arch/x86/include/asm/pgtable-3level.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/pgtable-3level.h b/arch/x86/include/asm/pgtable-3level.h
index e896ebef8c24..28421a887209 100644
--- a/arch/x86/include/asm/pgtable-3level.h
+++ b/arch/x86/include/asm/pgtable-3level.h
@@ -256,10 +256,10 @@ static inline pud_t native_pudp_get_and_clear(pud_t *pudp)
 /* We always extract/encode the offset by shifting it all the way up, and then down again */
 #define SWP_OFFSET_SHIFT	(SWP_OFFSET_FIRST_BIT + SWP_TYPE_BITS)
 
-#define MAX_SWAPFILES_CHECK() BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > 5)
-#define __swp_type(x)			(((x).val) & 0x1f)
-#define __swp_offset(x)			((x).val >> 5)
-#define __swp_entry(type, offset)	((swp_entry_t){(type) | (offset) << 5})
+#define MAX_SWAPFILES_CHECK() BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > SWP_TYPE_BITS)
+#define __swp_type(x)			(((x).val) & ((1UL << SWP_TYPE_BITS) - 1))
+#define __swp_offset(x)			((x).val >> SWP_TYPE_BITS)
+#define __swp_entry(type, offset)	((swp_entry_t){(type) | (offset) << SWP_TYPE_BITS})
 
 /*
  * Normally, __swp_entry() converts from arch-independent swp_entry_t to
From patchwork Thu Aug 11 16:13:26 2022
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 12941583
From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Hugh Dickins, "Kirill A. Shutemov", Alistair Popple, peterx@redhat.com,
 Andrea Arcangeli, Minchan Kim, Andrew Morton, David Hildenbrand,
 Andi Kleen, Nadav Amit, Huang Ying, Vlastimil Babka
Subject: [PATCH v4 2/7] mm/swap: Comment all the ifdef in swapops.h
Date: Thu, 11 Aug 2022 12:13:26 -0400
Message-Id: <20220811161331.37055-3-peterx@redhat.com>
In-Reply-To: <20220811161331.37055-1-peterx@redhat.com>
References: <20220811161331.37055-1-peterx@redhat.com>

swapops.h contains quite a few layers of ifdef, and some of the "else"
and "endif" lines don't carry a comment naming the macro they pair
with, which makes them hard to follow.  Add the comments.

Suggested-by: Nadav Amit
Reviewed-by: Huang Ying
Signed-off-by: Peter Xu
Reviewed-by: Alistair Popple
---
 include/linux/swapops.h | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index a3d435bf9f97..3a2901ff4f1e 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -247,8 +247,8 @@ extern void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
 #ifdef CONFIG_HUGETLB_PAGE
 extern void __migration_entry_wait_huge(pte_t *ptep, spinlock_t *ptl);
 extern void migration_entry_wait_huge(struct vm_area_struct *vma, pte_t *pte);
-#endif
-#else
+#endif	/* CONFIG_HUGETLB_PAGE */
+#else	/* CONFIG_MIGRATION */
 static inline swp_entry_t make_readable_migration_entry(pgoff_t offset)
 {
 	return swp_entry(0, 0);
@@ -276,7 +276,7 @@ static inline void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
 #ifdef CONFIG_HUGETLB_PAGE
 static inline void __migration_entry_wait_huge(pte_t *ptep, spinlock_t *ptl) { }
 static inline void migration_entry_wait_huge(struct vm_area_struct *vma,
 					     pte_t *pte) { }
-#endif
+#endif	/* CONFIG_HUGETLB_PAGE */
 
 static inline int is_writable_migration_entry(swp_entry_t entry)
 {
 	return 0;
@@ -286,7 +286,7 @@ static inline int is_readable_migration_entry(swp_entry_t entry)
 	return 0;
 }
 
-#endif
+#endif	/* CONFIG_MIGRATION */
 
 typedef unsigned long pte_marker;
 
@@ -426,7 +426,7 @@ static inline int is_pmd_migration_entry(pmd_t pmd)
 {
 	return is_swap_pmd(pmd) && is_migration_entry(pmd_to_swp_entry(pmd));
 }
-#else
+#else	/* CONFIG_ARCH_ENABLE_THP_MIGRATION */
 static inline int set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
 		struct page *page)
 {
@@ -455,7 +455,7 @@ static inline int is_pmd_migration_entry(pmd_t pmd)
 {
 	return 0;
 }
-#endif
+#endif	/* CONFIG_ARCH_ENABLE_THP_MIGRATION */
 
 #ifdef CONFIG_MEMORY_FAILURE
 
@@ -495,7 +495,7 @@ static inline void num_poisoned_pages_sub(long i)
 	atomic_long_sub(i, &num_poisoned_pages);
 }
 
-#else
+#else	/* CONFIG_MEMORY_FAILURE */
 
 static inline swp_entry_t make_hwpoison_entry(struct page *page)
 {
@@ -514,7 +514,7 @@ static inline void num_poisoned_pages_inc(void)
 static inline void num_poisoned_pages_sub(long i)
 {
 }
-#endif
+#endif	/* CONFIG_MEMORY_FAILURE */
 
 static inline int non_swap_entry(swp_entry_t entry)
 {

From patchwork Thu Aug 11 16:13:27 2022
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 12941584
From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Hugh Dickins, "Kirill A. Shutemov", Alistair Popple, peterx@redhat.com,
 Andrea Arcangeli, Minchan Kim, Andrew Morton, David Hildenbrand,
 Andi Kleen, Nadav Amit, Huang Ying, Vlastimil Babka
Subject: [PATCH v4 3/7] mm/swap: Add swp_offset_pfn() to fetch PFN from swap entry
Date: Thu, 11 Aug 2022 12:13:27 -0400
Message-Id: <20220811161331.37055-4-peterx@redhat.com>
In-Reply-To: <20220811161331.37055-1-peterx@redhat.com>
References: <20220811161331.37055-1-peterx@redhat.com>

We've got a bunch of special swap entries that store a PFN inside the
swap offset field.  To fetch the PFN, the user normally just calls
swp_offset(), assuming that will be the PFN.

Add a helper, swp_offset_pfn(), to fetch the PFN instead.  It fetches
only the maximum possible width of a PFN on the host, and uses a
BUILD_BUG_ON() in is_pfn_swap_entry() to check against MAX_PHYSMEM_BITS
that the swap offset can always store a PFN properly.

One reason to do so is that we have never sanitized whether the swap
offset can really fit a PFN.  Meanwhile, this patch also prepares for
the future possibility of storing more information inside the swp
offset field, after which assuming "swp_offset(entry)" to be the PFN
will no longer hold.

Replace many of the swp_offset() callers with swp_offset_pfn() where
appropriate.  Note that many of the existing users are not candidates
for the replacement, e.g.:

  (1) when the swap entry is not a pfn swap entry at all, or

  (2) when we want to keep the whole swp_offset and only change the
      swp type.

For the latter, it can happen when fork() is triggered on a
write-migration swap entry pte: we may want to change only the
migration type from write to read but keep the rest, so it's not
"fetching the PFN" but "changing the swap type only".  Those callers
are left aside so that when there is more information within the swp
offset it will be carried over naturally.

While at it, drop hwpoison_entry_to_pfn(), because that's exactly what
the new swp_offset_pfn() is about.
Signed-off-by: Peter Xu
Reviewed-by: "Huang, Ying"
---
 arch/arm64/mm/hugetlbpage.c |  2 +-
 fs/proc/task_mmu.c          | 20 +++++++++++++++++---
 include/linux/swapops.h     | 35 +++++++++++++++++++++++++++++------
 mm/hmm.c                    |  2 +-
 mm/memory-failure.c         |  2 +-
 mm/page_vma_mapped.c        |  6 +++---
 6 files changed, 52 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 0795028f017c..35e9a468d13e 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -245,7 +245,7 @@ static inline struct folio *hugetlb_swap_entry_to_folio(swp_entry_t entry)
 {
 	VM_BUG_ON(!is_migration_entry(entry) && !is_hwpoison_entry(entry));
 
-	return page_folio(pfn_to_page(swp_offset(entry)));
+	return page_folio(pfn_to_page(swp_offset_pfn(entry)));
 }
 
 void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index d56c65f98d00..b3e79128fca0 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1419,9 +1419,19 @@ static pagemap_entry_t pte_to_pagemap_entry(struct pagemapread *pm,
 		if (pte_swp_uffd_wp(pte))
 			flags |= PM_UFFD_WP;
 		entry = pte_to_swp_entry(pte);
-		if (pm->show_pfn)
+		if (pm->show_pfn) {
+			pgoff_t offset;
+
+			/*
+			 * For PFN swap offsets, keeping the offset field
+			 * to be PFN only to be compatible with old smaps.
+			 */
+			if (is_pfn_swap_entry(entry))
+				offset = swp_offset_pfn(entry);
+			else
+				offset = swp_offset(entry);
 			frame = swp_type(entry) |
-				(swp_offset(entry) << MAX_SWAPFILES_SHIFT);
+				(offset << MAX_SWAPFILES_SHIFT);
+		}
 		flags |= PM_SWAP;
 		migration = is_migration_entry(entry);
 		if (is_pfn_swap_entry(entry))
@@ -1478,7 +1488,11 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
 			unsigned long offset;
 
 			if (pm->show_pfn) {
-				offset = swp_offset(entry) +
+				if (is_pfn_swap_entry(entry))
+					offset = swp_offset_pfn(entry);
+				else
+					offset = swp_offset(entry);
+				offset = offset +
 					((addr & ~PMD_MASK) >> PAGE_SHIFT);
 				frame = swp_type(entry) |
 					(offset << MAX_SWAPFILES_SHIFT);
diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index 3a2901ff4f1e..bd4c6f0c2103 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -23,6 +23,20 @@
 #define SWP_TYPE_SHIFT	(BITS_PER_XA_VALUE - MAX_SWAPFILES_SHIFT)
 #define SWP_OFFSET_MASK	((1UL << SWP_TYPE_SHIFT) - 1)
 
+/*
+ * Definitions only for PFN swap entries (see is_pfn_swap_entry()).  To
+ * store PFN, we only need SWP_PFN_BITS bits.  Each of the pfn swap entries
+ * can use the extra bits to store other information besides PFN.
+ */
+#ifdef MAX_PHYSMEM_BITS
+#define SWP_PFN_BITS		(MAX_PHYSMEM_BITS - PAGE_SHIFT)
+#else  /* MAX_PHYSMEM_BITS */
+#define SWP_PFN_BITS		(BITS_PER_LONG - PAGE_SHIFT)
+#endif	/* MAX_PHYSMEM_BITS */
+#define SWP_PFN_MASK		(BIT(SWP_PFN_BITS) - 1)
+
+static inline bool is_pfn_swap_entry(swp_entry_t entry);
+
 /* Clear all flags but only keep swp_entry_t related information */
 static inline pte_t pte_swp_clear_flags(pte_t pte)
 {
@@ -64,6 +78,17 @@ static inline pgoff_t swp_offset(swp_entry_t entry)
 	return entry.val & SWP_OFFSET_MASK;
 }
 
+/*
+ * This should only be called upon a pfn swap entry to get the PFN stored
+ * in the swap entry.  Please refers to is_pfn_swap_entry() for definition
+ * of pfn swap entry.
+ */
+static inline unsigned long swp_offset_pfn(swp_entry_t entry)
+{
+	VM_BUG_ON(!is_pfn_swap_entry(entry));
+	return swp_offset(entry) & SWP_PFN_MASK;
+}
+
 /* check whether a pte points to a swap entry */
 static inline int is_swap_pte(pte_t pte)
 {
@@ -369,7 +394,7 @@ static inline int pte_none_mostly(pte_t pte)
 
 static inline struct page *pfn_swap_entry_to_page(swp_entry_t entry)
 {
-	struct page *p = pfn_to_page(swp_offset(entry));
+	struct page *p = pfn_to_page(swp_offset_pfn(entry));
 
 	/*
 	 * Any use of migration entries may only occur while the
@@ -387,6 +412,9 @@ static inline struct page *pfn_swap_entry_to_page(swp_entry_t entry)
  */
 static inline bool is_pfn_swap_entry(swp_entry_t entry)
 {
+	/* Make sure the swp offset can always store the needed fields */
+	BUILD_BUG_ON(SWP_TYPE_SHIFT < SWP_PFN_BITS);
+
 	return is_migration_entry(entry) || is_device_private_entry(entry) ||
 	       is_device_exclusive_entry(entry);
 }
@@ -475,11 +503,6 @@ static inline int is_hwpoison_entry(swp_entry_t entry)
 	return swp_type(entry) == SWP_HWPOISON;
 }
 
-static inline unsigned long hwpoison_entry_to_pfn(swp_entry_t entry)
-{
-	return swp_offset(entry);
-}
-
 static inline void num_poisoned_pages_inc(void)
 {
 	atomic_long_inc(&num_poisoned_pages);
diff --git a/mm/hmm.c b/mm/hmm.c
index f2aa63b94d9b..3850fb625dda 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -253,7 +253,7 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 			cpu_flags = HMM_PFN_VALID;
 			if (is_writable_device_private_entry(entry))
 				cpu_flags |= HMM_PFN_WRITE;
-			*hmm_pfn = swp_offset(entry) | cpu_flags;
+			*hmm_pfn = swp_offset_pfn(entry) | cpu_flags;
 			return 0;
 		}
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 0dfed9d7b273..e48f6f6a259d 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -632,7 +632,7 @@ static int check_hwpoisoned_entry(pte_t pte, unsigned long addr, short shift,
 		swp_entry_t swp = pte_to_swp_entry(pte);
 
 		if (is_hwpoison_entry(swp))
-			pfn = hwpoison_entry_to_pfn(swp);
+			pfn = swp_offset_pfn(swp);
 	}
 
 	if (!pfn || pfn != poisoned_pfn)
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index 8e9e574d535a..93e13fc17d3c 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -86,7 +86,7 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw)
 		    !is_device_exclusive_entry(entry))
 			return false;
 
-		pfn = swp_offset(entry);
+		pfn = swp_offset_pfn(entry);
 	} else if (is_swap_pte(*pvmw->pte)) {
 		swp_entry_t entry;
 
@@ -96,7 +96,7 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw)
 		    !is_device_exclusive_entry(entry))
 			return false;
 
-		pfn = swp_offset(entry);
+		pfn = swp_offset_pfn(entry);
 	} else {
 		if (!pte_present(*pvmw->pte))
 			return false;
@@ -221,7 +221,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 				return not_found(pvmw);
 			entry = pmd_to_swp_entry(pmde);
 			if (!is_migration_entry(entry) ||
-			    !check_pmd(swp_offset(entry), pvmw))
+			    !check_pmd(swp_offset_pfn(entry), pvmw))
 				return not_found(pvmw);
 			return true;
 		}

From patchwork Thu Aug 11 16:13:28 2022
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 12941585
From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Hugh Dickins, "Kirill A. Shutemov", Alistair Popple, peterx@redhat.com,
    Andrea Arcangeli, Minchan Kim, Andrew Morton, David Hildenbrand,
    Andi Kleen, Nadav Amit, Huang Ying, Vlastimil Babka
Subject: [PATCH v4 4/7] mm/thp: Carry over dirty bit when thp splits on pmd
Date: Thu, 11 Aug 2022 12:13:28 -0400
Message-Id: <20220811161331.37055-5-peterx@redhat.com>
In-Reply-To: <20220811161331.37055-1-peterx@redhat.com>
References: <20220811161331.37055-1-peterx@redhat.com>
Carry over the dirty bit from pmd to pte when a huge pmd splits.  This
shouldn't be a correctness issue, since with pmd_dirty() set we'll have
the page marked dirty anyway; however, carrying the dirty bit over helps
the first writes to the split ptes on some archs like x86.

Reviewed-by: Huang Ying
Signed-off-by: Peter Xu
---
 mm/huge_memory.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 3222b40a0f6d..2f68e034ddec 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2027,7 +2027,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 	pgtable_t pgtable;
 	pmd_t old_pmd, _pmd;
 	bool young, write, soft_dirty, pmd_migration = false, uffd_wp = false;
-	bool anon_exclusive = false;
+	bool anon_exclusive = false, dirty = false;
 	unsigned long addr;
 	int i;
 
@@ -2116,8 +2116,10 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 		uffd_wp = pmd_swp_uffd_wp(old_pmd);
 	} else {
 		page = pmd_page(old_pmd);
-		if (pmd_dirty(old_pmd))
+		if (pmd_dirty(old_pmd)) {
+			dirty = true;
 			SetPageDirty(page);
+		}
 		write = pmd_write(old_pmd);
 		young = pmd_young(old_pmd);
 		soft_dirty = pmd_soft_dirty(old_pmd);
@@ -2183,6 +2185,9 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 			entry = pte_wrprotect(entry);
 			if (!young)
 				entry = pte_mkold(entry);
+			/* NOTE: this may set soft-dirty too on some archs */
+			if (dirty)
+				entry = pte_mkdirty(entry);
 			if (soft_dirty)
 				entry = pte_mksoft_dirty(entry);
 			if (uffd_wp)

From patchwork Thu Aug 11 16:13:29 2022
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 12941586
From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Hugh Dickins, "Kirill A. Shutemov", Alistair Popple, peterx@redhat.com,
    Andrea Arcangeli, Minchan Kim, Andrew Morton, David Hildenbrand,
    Andi Kleen, Nadav Amit, Huang Ying, Vlastimil Babka
Subject: [PATCH v4 5/7] mm: Remember young/dirty bit for page migrations
Date: Thu, 11 Aug 2022 12:13:29 -0400
Message-Id: <20220811161331.37055-6-peterx@redhat.com>
In-Reply-To: <20220811161331.37055-1-peterx@redhat.com>
References: <20220811161331.37055-1-peterx@redhat.com>
When page migration happens, we always ignore the young/dirty bit
settings in the old pgtable: the page is marked old in the new page
table using either pte_mkold() or pmd_mkold(), and the pte is kept
clean.  That's fine functionally, but it is not friendly to page
reclaim, because the page being moved can be actively accessed during
the procedure.  Not to mention that having hardware set the young bit
can bring quite some overhead on some systems; e.g., x86_64 needs a few
hundred nanoseconds to set the bit.  The same slowdown applies to the
dirty bit when the memory is first written after page migration.

Actually we can easily remember the A/D bit configuration and recover
the information after the page is migrated.  To achieve it, define a new
set of bits in the migration swap offset field to cache the A/D bits of
the old pte.  Then, when removing/recovering the migration entry, we can
recover the A/D bits even if the page has changed.

One thing to mention is that we use max_swapfile_size() to detect how
many swp offset bits we have, and we only enable this feature if we know
the swp offset is big enough to store both the PFN value and the A/D
bits.  Otherwise the A/D bits are dropped like before.
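The packing described above — a PFN plus two flag bits cached in the swap offset — can be sketched in userspace C. This is only an illustration of the bit layout idea: the `DEMO_*` names and bit widths are made up for the demo and are not the kernel's symbols.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative widths, not the kernel's: PFN in the low bits,
 * then the young (A) and dirty (D) flags just above it. */
#define DEMO_PFN_BITS   10
#define DEMO_PFN_MASK   ((1UL << DEMO_PFN_BITS) - 1)
#define DEMO_MIG_YOUNG  (1UL << DEMO_PFN_BITS)
#define DEMO_MIG_DIRTY  (1UL << (DEMO_PFN_BITS + 1))

/* Build a demo "migration swap offset" carrying PFN + A/D bits. */
static unsigned long demo_make_entry(unsigned long pfn, bool young, bool dirty)
{
	unsigned long off = pfn & DEMO_PFN_MASK;

	if (young)
		off |= DEMO_MIG_YOUNG;
	if (dirty)
		off |= DEMO_MIG_DIRTY;
	return off;
}

/* Recover the individual fields, mirroring swp_offset_pfn() and the
 * is_migration_entry_young()/_dirty() helpers in spirit. */
static unsigned long demo_entry_pfn(unsigned long off)
{
	return off & DEMO_PFN_MASK;
}

static bool demo_entry_young(unsigned long off)
{
	return off & DEMO_MIG_YOUNG;
}

static bool demo_entry_dirty(unsigned long off)
{
	return off & DEMO_MIG_DIRTY;
}
```

The key property, as in the patch, is that masking with the PFN mask makes the flag bits invisible to any code that only wants the PFN.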
Signed-off-by: Peter Xu
Reviewed-by: "Huang, Ying"
---
 include/linux/swapops.h | 99 +++++++++++++++++++++++++++++++++++++++++
 mm/huge_memory.c        | 18 +++++++-
 mm/migrate.c            |  6 ++-
 mm/migrate_device.c     |  6 +++
 mm/rmap.c               |  5 ++-
 5 files changed, 130 insertions(+), 4 deletions(-)

diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index bd4c6f0c2103..36e462e116af 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -8,6 +8,10 @@
 
 #ifdef CONFIG_MMU
 
+#ifdef CONFIG_SWAP
+#include <linux/swapfile.h>
+#endif	/* CONFIG_SWAP */
+
 /*
  * swapcache pages are stored in the swapper_space radix tree.  We want to
  * get good packing density in that tree, so the index should be dense in
@@ -35,6 +39,31 @@
 #endif	/* MAX_PHYSMEM_BITS */
 
 #define SWP_PFN_MASK			(BIT(SWP_PFN_BITS) - 1)
+
+/**
+ * Migration swap entry specific bitfield definitions.  Layout:
+ *
+ *   |----------+--------------------|
+ *   | swp_type | swp_offset         |
+ *   |----------+--------+-+-+-------|
+ *   |          | resv   |D|A|  PFN  |
+ *   |----------+--------+-+-+-------|
+ *
+ * @SWP_MIG_YOUNG_BIT: Whether the page used to have young bit set (bit A)
+ * @SWP_MIG_DIRTY_BIT: Whether the page used to have dirty bit set (bit D)
+ *
+ * Note: A/D bits will be stored in migration entries iff there're enough
+ * free bits in arch specific swp offset.  By default we'll ignore A/D bits
+ * when migrating a page.  Please refer to migration_entry_supports_ad()
+ * for more information.  If there're more bits besides PFN and A/D bits,
+ * they should be reserved and always be zeros.
+ */
+#define SWP_MIG_YOUNG_BIT		(SWP_PFN_BITS)
+#define SWP_MIG_DIRTY_BIT		(SWP_PFN_BITS + 1)
+#define SWP_MIG_TOTAL_BITS		(SWP_PFN_BITS + 2)
+
+#define SWP_MIG_YOUNG			BIT(SWP_MIG_YOUNG_BIT)
+#define SWP_MIG_DIRTY			BIT(SWP_MIG_DIRTY_BIT)
+
 static inline bool is_pfn_swap_entry(swp_entry_t entry);
 
 /* Clear all flags but only keep swp_entry_t related information */
@@ -265,6 +294,57 @@ static inline swp_entry_t make_writable_migration_entry(pgoff_t offset)
 	return swp_entry(SWP_MIGRATION_WRITE, offset);
 }
 
+/*
+ * Returns whether the host has large enough swap offset field to support
+ * carrying over pgtable A/D bits for page migrations.  The result is
+ * pretty much arch specific.
+ */
+static inline bool migration_entry_supports_ad(void)
+{
+	/*
+	 * max_swapfile_size() returns the max supported swp-offset plus 1.
+	 * We can support the migration A/D bits iff the pfn swap entry has
+	 * the offset large enough to cover all of them (PFN, A & D bits).
+	 */
+#ifdef CONFIG_SWAP
+	return max_swapfile_size() >= (1UL << SWP_MIG_TOTAL_BITS);
+#else	/* CONFIG_SWAP */
+	return false;
+#endif	/* CONFIG_SWAP */
+}
+
+static inline swp_entry_t make_migration_entry_young(swp_entry_t entry)
+{
+	if (migration_entry_supports_ad())
+		return swp_entry(swp_type(entry),
+				 swp_offset(entry) | SWP_MIG_YOUNG);
+	return entry;
+}
+
+static inline bool is_migration_entry_young(swp_entry_t entry)
+{
+	if (migration_entry_supports_ad())
+		return swp_offset(entry) & SWP_MIG_YOUNG;
+	/* Keep the old behavior of aging page after migration */
+	return false;
+}
+
+static inline swp_entry_t make_migration_entry_dirty(swp_entry_t entry)
+{
+	if (migration_entry_supports_ad())
+		return swp_entry(swp_type(entry),
+				 swp_offset(entry) | SWP_MIG_DIRTY);
+	return entry;
+}
+
+static inline bool is_migration_entry_dirty(swp_entry_t entry)
+{
+	if (migration_entry_supports_ad())
+		return swp_offset(entry) & SWP_MIG_DIRTY;
+	/* Keep the old behavior of clean page after migration */
+	return false;
+}
+
 extern void
__migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
					spinlock_t *ptl);
 extern void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
@@ -311,6 +391,25 @@ static inline int is_readable_migration_entry(swp_entry_t entry)
 	return 0;
 }
 
+static inline swp_entry_t make_migration_entry_young(swp_entry_t entry)
+{
+	return entry;
+}
+
+static inline bool is_migration_entry_young(swp_entry_t entry)
+{
+	return false;
+}
+
+static inline swp_entry_t make_migration_entry_dirty(swp_entry_t entry)
+{
+	return entry;
+}
+
+static inline bool is_migration_entry_dirty(swp_entry_t entry)
+{
+	return false;
+}
 #endif	/* CONFIG_MIGRATION */
 
 typedef unsigned long pte_marker;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2f68e034ddec..ac858fd9c1f1 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2111,7 +2111,8 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 		write = is_writable_migration_entry(entry);
 		if (PageAnon(page))
 			anon_exclusive = is_readable_exclusive_migration_entry(entry);
-		young = false;
+		young = is_migration_entry_young(entry);
+		dirty = is_migration_entry_dirty(entry);
 		soft_dirty = pmd_swp_soft_dirty(old_pmd);
 		uffd_wp = pmd_swp_uffd_wp(old_pmd);
 	} else {
@@ -2171,6 +2172,10 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 			else
 				swp_entry = make_readable_migration_entry(
 							page_to_pfn(page + i));
+			if (young)
+				swp_entry = make_migration_entry_young(swp_entry);
+			if (dirty)
+				swp_entry = make_migration_entry_dirty(swp_entry);
 			entry = swp_entry_to_pte(swp_entry);
 			if (soft_dirty)
 				entry = pte_swp_mksoft_dirty(entry);
@@ -3180,6 +3185,10 @@ int set_pmd_migration_entry(struct page_vma_mapped_walk *pvmw,
 		entry = make_readable_exclusive_migration_entry(page_to_pfn(page));
 	else
 		entry = make_readable_migration_entry(page_to_pfn(page));
+	if (pmd_young(pmdval))
+		entry = make_migration_entry_young(entry);
+	if (pmd_dirty(pmdval))
+		entry = make_migration_entry_dirty(entry);
 	pmdswp = swp_entry_to_pmd(entry);
 	if (pmd_soft_dirty(pmdval))
 		pmdswp = pmd_swp_mksoft_dirty(pmdswp);
@@ -3205,13 +3214,18 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
 	entry = pmd_to_swp_entry(*pvmw->pmd);
 	get_page(new);
-	pmde = pmd_mkold(mk_huge_pmd(new, READ_ONCE(vma->vm_page_prot)));
+	pmde = mk_huge_pmd(new, READ_ONCE(vma->vm_page_prot));
 	if (pmd_swp_soft_dirty(*pvmw->pmd))
 		pmde = pmd_mksoft_dirty(pmde);
 	if (is_writable_migration_entry(entry))
 		pmde = maybe_pmd_mkwrite(pmde, vma);
 	if (pmd_swp_uffd_wp(*pvmw->pmd))
 		pmde = pmd_wrprotect(pmd_mkuffd_wp(pmde));
+	if (!is_migration_entry_young(entry))
+		pmde = pmd_mkold(pmde);
+	/* NOTE: this may contain setting soft-dirty on some archs */
+	if (PageDirty(new) && is_migration_entry_dirty(entry))
+		pmde = pmd_mkdirty(pmde);
 
 	if (PageAnon(new)) {
 		rmap_t rmap_flags = RMAP_COMPOUND;
diff --git a/mm/migrate.c b/mm/migrate.c
index 6a1597c92261..0433a71d2bee 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -198,7 +198,7 @@ static bool remove_migration_pte(struct folio *folio,
 #endif
 
 		folio_get(folio);
-		pte = pte_mkold(mk_pte(new, READ_ONCE(vma->vm_page_prot)));
+		pte = mk_pte(new, READ_ONCE(vma->vm_page_prot));
 		if (pte_swp_soft_dirty(*pvmw.pte))
 			pte = pte_mksoft_dirty(pte);
 
@@ -206,6 +206,10 @@ static bool remove_migration_pte(struct folio *folio,
 		 * Recheck VMA as permissions can change since migration started
 		 */
 		entry = pte_to_swp_entry(*pvmw.pte);
+		if (!is_migration_entry_young(entry))
+			pte = pte_mkold(pte);
+		if (folio_test_dirty(folio) && is_migration_entry_dirty(entry))
+			pte = pte_mkdirty(pte);
 		if (is_writable_migration_entry(entry))
 			pte = maybe_mkwrite(pte, vma);
 		else if (pte_swp_uffd_wp(*pvmw.pte))
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index 27fb37d65476..e450b318b01b 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -221,6 +221,12 @@ static int migrate_vma_collect_pmd(pmd_t *pmdp,
 			else
 				entry = make_readable_migration_entry(
 							page_to_pfn(page));
+			if (pte_present(pte)) {
+				if (pte_young(pte))
+					entry = make_migration_entry_young(entry);
+				if (pte_dirty(pte))
+					entry = make_migration_entry_dirty(entry);
+			}
 			swp_pte = swp_entry_to_pte(entry);
 			if (pte_present(pte)) {
 				if (pte_soft_dirty(pte))
diff --git a/mm/rmap.c b/mm/rmap.c
index af775855e58f..28aef434ea41 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2065,7 +2065,10 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
 			else
 				entry = make_readable_migration_entry(
							page_to_pfn(subpage));
-
+			if (pte_young(pteval))
+				entry = make_migration_entry_young(entry);
+			if (pte_dirty(pteval))
+				entry = make_migration_entry_dirty(entry);
 			swp_pte = swp_entry_to_pte(entry);
 			if (pte_soft_dirty(pteval))
 				swp_pte = pte_swp_mksoft_dirty(swp_pte);

From patchwork Thu Aug 11 16:13:30 2022
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 12941587
From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Hugh Dickins, "Kirill A. Shutemov", Alistair Popple, peterx@redhat.com,
    Andrea Arcangeli, Minchan Kim, Andrew Morton, David Hildenbrand,
    Andi Kleen, Nadav Amit, Huang Ying, Vlastimil Babka
Subject: [PATCH v4 6/7] mm/swap: Cache maximum swapfile size when init swap
Date: Thu, 11 Aug 2022 12:13:30 -0400
Message-Id: <20220811161331.37055-7-peterx@redhat.com>
In-Reply-To: <20220811161331.37055-1-peterx@redhat.com>
References: <20220811161331.37055-1-peterx@redhat.com>

We used to have max_swapfile_size() compute the maximum supported
swapfile size for the arch on every call.  As the callers of
max_swapfile_size() grow, this patch introduces a variable
"swapfile_maximum_size" that caches the value of the old
max_swapfile_size(), so that we don't need to calculate the value every
time.

Caching the value in swapfile_init() is safe because by the time we
reach that phase we should have initialized all the relevant
information.  The major arch to take care of here is x86, which defines
the max swapfile size based on the L1TF mitigation.
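The compute-once-at-init pattern the patch applies can be sketched in plain C. The `demo_*` names are illustrative stand-ins, not kernel symbols; the counter only exists to show that the arch-specific computation runs exactly once.

```c
#include <assert.h>

/* Cached value, analogous to the patch's swapfile_maximum_size. */
static unsigned long demo_swapfile_maximum_size;

/* Counts how often the expensive per-arch computation runs. */
static int demo_compute_calls;

/* Stand-in for arch_max_swapfile_size(); in the kernel this may
 * depend on state (e.g. the L1TF mitigation on x86) that is only
 * settled by the time initcalls run. */
static unsigned long demo_arch_max_swapfile_size(void)
{
	demo_compute_calls++;
	return 1UL << 20;		/* arbitrary demo limit */
}

/* Analogous to swapfile_init(): compute once, cache in a global.
 * All later readers just load the variable. */
static void demo_swapfile_init(void)
{
	demo_swapfile_maximum_size = demo_arch_max_swapfile_size();
}
```

After `demo_swapfile_init()` runs, every use of the limit is a plain memory read; the computation is never repeated, which is exactly why the kernel must only do the caching after the inputs (here, the mitigation state) are finalized.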
Both X86_BUG_L1TF and l1tf_mitigation should have been set up properly by the time swapfile_init() is reached. As a reference, the code path looks like this for x86:

- start_kernel
  - setup_arch
    - early_cpu_init
      - early_identify_cpu --> setup X86_BUG_L1TF
    - parse_early_param
      - l1tf_cmdline --> set l1tf_mitigation
  - check_bugs
    - l1tf_select_mitigation --> set l1tf_mitigation
  - arch_call_rest_init
    - rest_init
      - kernel_init
        - kernel_init_freeable
          - do_basic_setup
            - do_initcalls --> calls swapfile_init() (initcall level 4)

On non-x86 archs the swapfile size only depends on the swp pte format, so caching it is safe there too.

While at it, rename max_swapfile_size() to arch_max_swapfile_size(), because an arch can define its own version of the function, so the "arch_" prefix makes that more straightforward. Meanwhile, export swapfile_maximum_size to replace the old usages of max_swapfile_size().

Signed-off-by: Peter Xu
Reviewed-by: "Huang, Ying"
---
 arch/x86/mm/init.c       | 2 +-
 include/linux/swapfile.h | 3 ++-
 include/linux/swapops.h  | 2 +-
 mm/swapfile.c            | 7 +++++--
 4 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 82a042c03824..9121bc1b9453 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -1054,7 +1054,7 @@ void update_cache_mode_entry(unsigned entry, enum page_cache_mode cache)
 }
 
 #ifdef CONFIG_SWAP
-unsigned long max_swapfile_size(void)
+unsigned long arch_max_swapfile_size(void)
 {
 	unsigned long pages;
 
diff --git a/include/linux/swapfile.h b/include/linux/swapfile.h
index 54078542134c..165e0bd04862 100644
--- a/include/linux/swapfile.h
+++ b/include/linux/swapfile.h
@@ -8,6 +8,7 @@
  */
 extern struct swap_info_struct *swap_info[];
 extern unsigned long generic_max_swapfile_size(void);
-extern unsigned long max_swapfile_size(void);
+/* Maximum swapfile size supported for the arch (not inclusive). */
+extern unsigned long swapfile_maximum_size;
 
 #endif /* _LINUX_SWAPFILE_H */
diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index 36e462e116af..f25b566643f1 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -307,7 +307,7 @@ static inline bool migration_entry_supports_ad(void)
 	 * the offset large enough to cover all of them (PFN, A & D bits).
 	 */
 #ifdef CONFIG_SWAP
-	return max_swapfile_size() >= (1UL << SWP_MIG_TOTAL_BITS);
+	return swapfile_maximum_size >= (1UL << SWP_MIG_TOTAL_BITS);
 #else /* CONFIG_SWAP */
 	return false;
 #endif /* CONFIG_SWAP */
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 1fdccd2f1422..3cc64399df44 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -63,6 +63,7 @@ EXPORT_SYMBOL_GPL(nr_swap_pages);
 /* protected with swap_lock. reading in vm_swap_full() doesn't need lock */
 long total_swap_pages;
 static int least_priority = -1;
+unsigned long swapfile_maximum_size;
 
 static const char Bad_file[] = "Bad swap file entry ";
 static const char Unused_file[] = "Unused swap file entry ";
@@ -2816,7 +2817,7 @@ unsigned long generic_max_swapfile_size(void)
 }
 
 /* Can be overridden by an architecture for additional checks. */
-__weak unsigned long max_swapfile_size(void)
+__weak unsigned long arch_max_swapfile_size(void)
 {
 	return generic_max_swapfile_size();
 }
@@ -2856,7 +2857,7 @@ static unsigned long read_swap_header(struct swap_info_struct *p,
 	p->cluster_next = 1;
 	p->cluster_nr = 0;
 
-	maxpages = max_swapfile_size();
+	maxpages = swapfile_maximum_size;
 	last_page = swap_header->info.last_page;
 	if (!last_page) {
 		pr_warn("Empty swap-file\n");
@@ -3677,6 +3678,8 @@ static int __init swapfile_init(void)
 	for_each_node(nid)
 		plist_head_init(&swap_avail_heads[nid]);
 
+	swapfile_maximum_size = arch_max_swapfile_size();
+
 	return 0;
 }
 subsys_initcall(swapfile_init);

From patchwork Thu Aug 11 16:13:31 2022
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 12941588
From: Peter Xu
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Hugh Dickins, "Kirill A . Shutemov", Alistair Popple, peterx@redhat.com, Andrea Arcangeli, Minchan Kim, Andrew Morton, David Hildenbrand, Andi Kleen, Nadav Amit, Huang Ying, Vlastimil Babka
Subject: [PATCH v4 7/7] mm/swap: Cache swap migration A/D bits support
Date: Thu, 11 Aug 2022 12:13:31 -0400
Message-Id: <20220811161331.37055-8-peterx@redhat.com>
In-Reply-To: <20220811161331.37055-1-peterx@redhat.com>
References: <20220811161331.37055-1-peterx@redhat.com>
Introduce a variable swap_migration_ad_supported to cache whether the arch supports swap migration A/D bits.

One thing to mention here is that SWP_MIG_TOTAL_BITS internally references another macro, MAX_PHYSMEM_BITS, which is a function call on x86 (but a constant on all the other archs). It's safe to reference it in swapfile_init() because by then we're already at initcall level 4, so the 5-level pgtable decision for x86_64 must have been made (right after early_identify_cpu() finishes).
The relevant boot path on x86:

- start_kernel
  - setup_arch
    - early_cpu_init
      - get_cpu_cap --> fetch from CPUID (including X86_FEATURE_LA57)
      - early_identify_cpu --> clear X86_FEATURE_LA57 (if early lvl5 not enabled (USE_EARLY_PGTABLE_L5))
  - arch_call_rest_init
    - rest_init
      - kernel_init
        - kernel_init_freeable
          - do_basic_setup
            - do_initcalls --> calls swapfile_init() (initcall level 4)

This should slightly speed up the handling of migration swap entries.

Signed-off-by: Peter Xu
---
 include/linux/swapfile.h | 2 ++
 include/linux/swapops.h  | 7 +------
 mm/swapfile.c            | 8 ++++++++
 3 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/include/linux/swapfile.h b/include/linux/swapfile.h
index 165e0bd04862..2fbcc9afd814 100644
--- a/include/linux/swapfile.h
+++ b/include/linux/swapfile.h
@@ -10,5 +10,7 @@ extern struct swap_info_struct *swap_info[];
 extern unsigned long generic_max_swapfile_size(void);
 /* Maximum swapfile size supported for the arch (not inclusive). */
 extern unsigned long swapfile_maximum_size;
+/* Whether swap migration entry supports storing A/D bits for the arch */
+extern bool swap_migration_ad_supported;
 
 #endif /* _LINUX_SWAPFILE_H */
diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index f25b566643f1..dbf9df854124 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -301,13 +301,8 @@ static inline swp_entry_t make_writable_migration_entry(pgoff_t offset)
  */
 static inline bool migration_entry_supports_ad(void)
 {
-	/*
-	 * max_swapfile_size() returns the max supported swp-offset plus 1.
-	 * We can support the migration A/D bits iff the pfn swap entry has
-	 * the offset large enough to cover all of them (PFN, A & D bits).
-	 */
 #ifdef CONFIG_SWAP
-	return swapfile_maximum_size >= (1UL << SWP_MIG_TOTAL_BITS);
+	return swap_migration_ad_supported;
 #else /* CONFIG_SWAP */
 	return false;
 #endif /* CONFIG_SWAP */
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 3cc64399df44..263b19e693cf 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -64,6 +64,9 @@ EXPORT_SYMBOL_GPL(nr_swap_pages);
 long total_swap_pages;
 static int least_priority = -1;
 unsigned long swapfile_maximum_size;
+#ifdef CONFIG_MIGRATION
+bool swap_migration_ad_supported;
+#endif /* CONFIG_MIGRATION */
 
 static const char Bad_file[] = "Bad swap file entry ";
 static const char Unused_file[] = "Unused swap file entry ";
@@ -3680,6 +3683,11 @@ static int __init swapfile_init(void)
 
 	swapfile_maximum_size = arch_max_swapfile_size();
 
+#ifdef CONFIG_MIGRATION
+	if (swapfile_maximum_size >= (1UL << SWP_MIG_TOTAL_BITS))
+		swap_migration_ad_supported = true;
+#endif /* CONFIG_MIGRATION */
+
	return 0;
 }
 subsys_initcall(swapfile_init);