From patchwork Fri May 15 17:16:06 2020
X-Patchwork-Submitter: Catalin Marinas
X-Patchwork-Id: 11552711
From: Catalin Marinas
To: linux-arm-kernel@lists.infradead.org
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, Will Deacon,
 Dave P Martin, Vincenzo Frascino, Szabolcs Nagy, Kevin Brodsky,
 Andrey Konovalov, Peter Collingbourne, Steven Price, Andrew Morton
Subject: [PATCH v4 20/26] mm: Add arch hooks for saving/restoring tags
Date: Fri, 15 May 2020 18:16:06 +0100
Message-Id: <20200515171612.1020-21-catalin.marinas@arm.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200515171612.1020-1-catalin.marinas@arm.com>
References: <20200515171612.1020-1-catalin.marinas@arm.com>

From: Steven Price

Arm's Memory Tagging Extension (MTE) adds some metadata (tags) to every
physical page. When swapping pages out to disk, it is necessary to save
these tags and later restore them when the pages are read back in. Add
some hooks, along with dummy implementations, to enable the arch code to
handle this.

Three new hooks are added to the swap code:
 * arch_prepare_to_swap() and
 * arch_swap_invalidate_page() / arch_swap_invalidate_area().
One new hook is added to shmem:
 * arch_swap_restore_tags()

Signed-off-by: Steven Price
[catalin.marinas@arm.com: add unlock_page() on the error path]
Signed-off-by: Catalin Marinas
Cc: Andrew Morton
---
 include/asm-generic/pgtable.h | 23 +++++++++++++++++++++++
 mm/page_io.c                  | 10 ++++++++++
 mm/shmem.c                    |  6 ++++++
 mm/swapfile.c                 |  2 ++
 4 files changed, 41 insertions(+)

diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index 329b8c8ca703..306cee75b9ec 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -475,6 +475,29 @@ static inline int arch_unmap_one(struct mm_struct *mm,
 }
 #endif
 
+#ifndef __HAVE_ARCH_PREPARE_TO_SWAP
+static inline int arch_prepare_to_swap(struct page *page)
+{
+	return 0;
+}
+#endif
+
+#ifndef __HAVE_ARCH_SWAP_INVALIDATE
+static inline void arch_swap_invalidate_page(int type, pgoff_t offset)
+{
+}
+
+static inline void arch_swap_invalidate_area(int type)
+{
+}
+#endif
+
+#ifndef __HAVE_ARCH_SWAP_RESTORE_TAGS
+static inline void arch_swap_restore_tags(swp_entry_t entry, struct page *page)
+{
+}
+#endif
+
 #ifndef __HAVE_ARCH_PGD_OFFSET_GATE
 #define pgd_offset_gate(mm, addr)	pgd_offset(mm, addr)
 #endif
diff --git a/mm/page_io.c b/mm/page_io.c
index 76965be1d40e..dae85d56e45b 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -253,6 +253,16 @@ int swap_writepage(struct page *page, struct writeback_control *wbc)
 		unlock_page(page);
 		goto out;
 	}
+	/*
+	 * Arch code may have to preserve more data than just the page
+	 * contents, e.g. memory tags.
+	 */
+	ret = arch_prepare_to_swap(page);
+	if (ret) {
+		set_page_dirty(page);
+		unlock_page(page);
+		goto out;
+	}
 	if (frontswap_store(page) == 0) {
 		set_page_writeback(page);
 		unlock_page(page);
diff --git a/mm/shmem.c b/mm/shmem.c
index 73754ed7af69..0e87925a7cb4 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1658,6 +1658,12 @@ static int shmem_swapin_page(struct inode *inode, pgoff_t index,
 	}
 	wait_on_page_writeback(page);
 
+	/*
+	 * Some architectures may have to restore extra metadata to the
+	 * physical page after reading from swap.
+	 */
+	arch_swap_restore_tags(swap, page);
+
 	if (shmem_should_replace_page(page, gfp)) {
 		error = shmem_replace_page(&page, gfp, info, index);
 		if (error)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 5871a2aa86a5..b39c6520b0cf 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -722,6 +722,7 @@ static void swap_range_free(struct swap_info_struct *si, unsigned long offset,
 	else
 		swap_slot_free_notify = NULL;
 	while (offset <= end) {
+		arch_swap_invalidate_page(si->type, offset);
 		frontswap_invalidate_page(si->type, offset);
 		if (swap_slot_free_notify)
 			swap_slot_free_notify(si->bdev, offset);
@@ -2645,6 +2646,7 @@ SYSCALL_DEFINE1(swapoff, const char __user *, specialfile)
	frontswap_map = frontswap_map_get(p);
 	spin_unlock(&p->lock);
 	spin_unlock(&swap_lock);
+	arch_swap_invalidate_area(p->type);
 	frontswap_invalidate_area(p->type);
 	frontswap_map_set(p, NULL);
 	mutex_unlock(&swapon_mutex);
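
As a rough illustration (not part of this patch, and not how any particular
architecture in this series actually implements it), an arch could back these
hooks by stashing each page's tags in an xarray keyed by the swap entry value:
save the tags at swap-out, restore them at swap-in, and free the copies when a
slot or the whole swap area is invalidated. The example_* helpers and the
EXAMPLE_TAG_STORAGE_SIZE constant below are hypothetical placeholders for the
arch-specific per-page tag accessors; the arch would also define the
corresponding __HAVE_ARCH_* macros in its pgtable.h so the dummy versions
above are not used.

/* Illustrative sketch only; helpers marked "hypothetical" are assumptions. */
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/swap.h>
#include <linux/swapops.h>
#include <linux/xarray.h>

#define EXAMPLE_TAG_STORAGE_SIZE	128	/* assumed size of one page's tags */

/* Hypothetical arch accessors for the hardware tags of a page. */
void example_read_tags_from_page(struct page *page, void *buf);
void example_write_tags_to_page(struct page *page, const void *buf);

static DEFINE_XARRAY(example_tag_store);

/* swap_writepage(): copy the tags away before the data goes to disk. */
int arch_prepare_to_swap(struct page *page)
{
	/* For a swap cache page, page_private() holds the swp_entry_t value. */
	unsigned long key = page_private(page);
	void *tags, *old;

	tags = kmalloc(EXAMPLE_TAG_STORAGE_SIZE, GFP_KERNEL);
	if (!tags)
		return -ENOMEM;
	example_read_tags_from_page(page, tags);

	old = xa_store(&example_tag_store, key, tags, GFP_KERNEL);
	if (xa_is_err(old)) {
		kfree(tags);
		return xa_err(old);
	}
	kfree(old);		/* drop any stale copy for this slot */
	return 0;
}

/* shmem_swapin_page(): put the tags back once the data has been read. */
void arch_swap_restore_tags(swp_entry_t entry, struct page *page)
{
	void *tags = xa_load(&example_tag_store, entry.val);

	if (tags)
		example_write_tags_to_page(page, tags);
}

/* swap_range_free(): the slot is being released, the saved tags are stale. */
void arch_swap_invalidate_page(int type, pgoff_t offset)
{
	kfree(xa_erase(&example_tag_store, swp_entry(type, offset).val));
}

/* swapoff: drop every saved copy belonging to this swap area. */
void arch_swap_invalidate_area(int type)
{
	unsigned long key;
	void *tags;

	xa_for_each(&example_tag_store, key, tags) {
		if (swp_type((swp_entry_t){ .val = key }) == type)
			kfree(xa_erase(&example_tag_store, key));
	}
}

Keying the store on the full swp_entry_t value is convenient because the
(type, offset) pair the swap core passes to the invalidate hooks is enough to
reconstruct the key. A real implementation would also skip pages that carry no
tags and think harder about allocation context in the writeback path.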