From patchwork Thu Nov 26 17:32:47 2015
X-Patchwork-Submitter: James Morse
X-Patchwork-Id: 7708261
From: James Morse <james.morse@arm.com>
To: linux-arm-kernel@lists.infradead.org, linux-pm@vger.kernel.org,
	"Rafael J. Wysocki", Pavel Machek
Cc: Mark Rutland, Lorenzo Pieralisi, Geoff Levand, Catalin Marinas,
	Will Deacon, AKASHI Takahiro, James Morse, Sudeep Holla,
	Marc Zyngier, wangfei, Kevin Kang
Subject: [PATCH v3 09/10] PM / Hibernate: Publish pages restored in-place to arch code
Date: Thu, 26 Nov 2015 17:32:47 +0000
Message-Id: <1448559168-8363-10-git-send-email-james.morse@arm.com>
In-Reply-To: <1448559168-8363-1-git-send-email-james.morse@arm.com>
References: <1448559168-8363-1-git-send-email-james.morse@arm.com>
X-Mailer: git-send-email 2.6.2

Some architectures require code written to memory as if it were data to be
'cleaned' from any data caches before the processor can fetch it as new
instructions.

During resume from hibernate, the snapshot code copies some pages directly,
meaning these architectures do not get a chance to perform their cache
maintenance.
Create a new list of pages that were restored in place, so that the arch
code can perform this maintenance when necessary.

Signed-off-by: James Morse <james.morse@arm.com>
---
 include/linux/suspend.h |  1 +
 kernel/power/snapshot.c | 42 ++++++++++++++++++++++++++++--------------
 2 files changed, 29 insertions(+), 14 deletions(-)

diff --git a/include/linux/suspend.h b/include/linux/suspend.h
index 8b6ec7ef0854..b17cf6081bca 100644
--- a/include/linux/suspend.h
+++ b/include/linux/suspend.h
@@ -384,6 +384,7 @@ extern bool system_entering_hibernation(void);
 extern bool hibernation_available(void);
 asmlinkage int swsusp_save(void);
 extern struct pbe *restore_pblist;
+extern struct pbe *restored_inplace_pblist;
 #else /* CONFIG_HIBERNATION */
 static inline void register_nosave_region(unsigned long b, unsigned long e) {}
 static inline void register_nosave_region_late(unsigned long b, unsigned long e) {}
diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index 3a970604308f..f251f5af49fb 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -74,6 +74,11 @@ void __init hibernate_image_size_init(void)
  */
 struct pbe *restore_pblist;
 
+/* List of PBEs that were restored in place. modified-harvard architectures
+ * need to 'clean' these pages before they can be executed.
+ */
+struct pbe *restored_inplace_pblist;
+
 /* Pointer to an auxiliary buffer (1 page) */
 static void *buffer;
 
@@ -1359,6 +1364,7 @@ out:
 	nr_copy_pages = 0;
 	nr_meta_pages = 0;
 	restore_pblist = NULL;
+	restored_inplace_pblist = NULL;
 	buffer = NULL;
 	alloc_normal = 0;
 	alloc_highmem = 0;
@@ -2072,6 +2078,7 @@ load_header(struct swsusp_info *info)
 	int error;
 
 	restore_pblist = NULL;
+	restored_inplace_pblist = NULL;
 	error = check_header(info);
 	if (!error) {
 		nr_copy_pages = info->image_pages;
@@ -2427,25 +2434,31 @@ static void *get_buffer(struct memory_bitmap *bm, struct chain_allocator *ca)
 	if (PageHighMem(page))
 		return get_highmem_page_buffer(page, ca);
 
-	if (swsusp_page_is_forbidden(page) && swsusp_page_is_free(page))
-		/* We have allocated the "original" page frame and we can
-		 * use it directly to store the loaded page.
-		 */
-		return page_address(page);
-
-	/* The "original" page frame has not been allocated and we have to
-	 * use a "safe" page frame to store the loaded page.
-	 */
 	pbe = chain_alloc(ca, sizeof(struct pbe));
 	if (!pbe) {
 		swsusp_free();
 		return ERR_PTR(-ENOMEM);
 	}
-	pbe->orig_address = page_address(page);
-	pbe->address = safe_pages_list;
-	safe_pages_list = safe_pages_list->next;
-	pbe->next = restore_pblist;
-	restore_pblist = pbe;
+
+	if (swsusp_page_is_forbidden(page) && swsusp_page_is_free(page)) {
+		/* We have allocated the "original" page frame and we can
+		 * use it directly to store the loaded page.
+		 */
+		pbe->orig_address = NULL;
+		pbe->address = page_address(page);
+		pbe->next = restored_inplace_pblist;
+		restored_inplace_pblist = pbe;
+	} else {
+		/* The "original" page frame has not been allocated and we
+		 * have to use a "safe" page frame to store the loaded page.
+		 */
+		pbe->orig_address = page_address(page);
+		pbe->address = safe_pages_list;
+		safe_pages_list = safe_pages_list->next;
+		pbe->next = restore_pblist;
+		restore_pblist = pbe;
+	}
+
 	return pbe->address;
 }
 
@@ -2513,6 +2526,7 @@ int snapshot_write_next(struct snapshot_handle *handle)
 		chain_init(&ca, GFP_ATOMIC, PG_SAFE);
 		memory_bm_position_reset(&orig_bm);
 		restore_pblist = NULL;
+		restored_inplace_pblist = NULL;
 		handle->buffer = get_buffer(&orig_bm, &ca);
 		handle->sync_read = 0;
 		if (IS_ERR(handle->buffer))
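
For context, a minimal sketch (not part of this patch) of how an architecture's
resume path might consume the new list: once the image has been loaded, walk
restored_inplace_pblist and clean/invalidate each page before anything in it is
executed. clean_dcache_range() and invalidate_icache_range() below are
hypothetical placeholders for whatever cache-maintenance primitives the
architecture actually provides.

/*
 * Sketch only: hypothetical arch hook run after snapshot_write_next() has
 * finished loading the image.  clean_dcache_range()/invalidate_icache_range()
 * are stand-ins for the architecture's real cache-maintenance helpers and
 * are not defined by this patch.
 */
#include <linux/mm.h>
#include <linux/suspend.h>

static void clean_restored_inplace_pages(void)
{
	struct pbe *pbe;

	for (pbe = restored_inplace_pblist; pbe; pbe = pbe->next) {
		unsigned long start = (unsigned long)pbe->address;

		/* Push the restored bytes out of the D-cache... */
		clean_dcache_range(start, start + PAGE_SIZE);
		/* ...then drop any stale lines from the I-cache. */
		invalidate_icache_range(start, start + PAGE_SIZE);
	}
}

Pages that go through a "safe" page remain on the existing restore_pblist and
are copied back into place later, where maintenance can be done at copy time;
only the pages restored in place need this extra pass, which is why the patch
tracks them on a separate list.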