From patchwork Thu Mar 26 03:24:03 2020
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 11459101
From: Pavel Tatashin
To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org, ebiederm@xmission.com, kexec@lists.infradead.org, linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com, will@kernel.org, linux-arm-kernel@lists.infradead.org, maz@kernel.org, james.morse@arm.com, vladimir.murzin@arm.com, matthias.bgg@gmail.com, bhsharma@redhat.com, linux-mm@kvack.org, mark.rutland@arm.com, steve.capper@arm.com, rfontana@redhat.com, tglx@linutronix.de, selindag@gmail.com
Subject: [PATCH v9 01/18] arm64: kexec: make dtb_mem always enabled
Date: Wed, 25 Mar 2020 23:24:03 -0400
Message-Id: <20200326032420.27220-2-pasha.tatashin@soleen.com>
In-Reply-To: <20200326032420.27220-1-pasha.tatashin@soleen.com>
References: <20200326032420.27220-1-pasha.tatashin@soleen.com>

Currently, dtb_mem is enabled only when CONFIG_KEXEC_FILE is enabled, which adds ugly ifdefs to C files. Always enable dtb_mem; when it is not used, it is NULL. Change dtb_mem to phys_addr_t, as it is a physical address.

Signed-off-by: Pavel Tatashin
Reviewed-by: James Morse
--- arch/arm64/include/asm/kexec.h | 4 ++-- arch/arm64/kernel/machine_kexec.c | 6 +----- 2 files changed, 3 insertions(+), 7 deletions(-) diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h index d24b527e8c00..61530ec3a9b1 100644 --- a/arch/arm64/include/asm/kexec.h +++ b/arch/arm64/include/asm/kexec.h @@ -90,18 +90,18 @@ static inline void crash_prepare_suspend(void) {} static inline void crash_post_resume(void) {} #endif -#ifdef CONFIG_KEXEC_FILE #define ARCH_HAS_KIMAGE_ARCH struct kimage_arch { void *dtb; - unsigned long dtb_mem; + phys_addr_t dtb_mem; /* Core ELF header buffer */ void *elf_headers; unsigned long elf_headers_mem; unsigned long elf_headers_sz; }; +#ifdef CONFIG_KEXEC_FILE extern const struct kexec_file_ops kexec_image_ops; struct kimage; diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c index 8e9c924423b4..ae1bad0156cd 100644 --- a/arch/arm64/kernel/machine_kexec.c +++ b/arch/arm64/kernel/machine_kexec.c @@ -203,11 +203,7 @@ void machine_kexec(struct kimage *kimage) * In kexec_file case, the kernel starts directly without purgatory.
*/ cpu_soft_restart(reboot_code_buffer_phys, kimage->head, kimage->start, -#ifdef CONFIG_KEXEC_FILE - kimage->arch.dtb_mem); -#else - 0); -#endif + kimage->arch.dtb_mem); BUG(); /* Should never get here. */ }

From patchwork Thu Mar 26 03:24:04 2020
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 11459103
From: Pavel Tatashin
Subject: [PATCH v9 02/18] arm64: hibernate: move page handling function to new trans_pgd.c
Date: Wed, 25 Mar 2020 23:24:04 -0400
Message-Id: <20200326032420.27220-3-pasha.tatashin@soleen.com>
In-Reply-To: <20200326032420.27220-1-pasha.tatashin@soleen.com>
References: <20200326032420.27220-1-pasha.tatashin@soleen.com>

Now that we have abstracted the required functions, move them to a new home. Later, we will generalize these functions so that they are useful outside of hibernation.
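For orientation: hibernate keeps using these helpers unchanged, just through the new asm/trans_pgd.h header. A minimal sketch of the two call sites as they appear elsewhere in this series, in swsusp_arch_resume() and create_safe_exec_page() (fragments only, not a complete function):

    /* Copy the linear map, since restore will overwrite the ttbr1 tables. */
    rc = trans_pgd_create_copy(&tmp_pg_dir, PAGE_OFFSET, PAGE_END);
    if (rc)
        return rc;

    /* Map a single page (the safe exec/relocation page) at dst_addr. */
    rc = trans_pgd_map_page(trans_pgd, page, dst_addr, PAGE_KERNEL_EXEC);
    if (rc)
        return rc;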
Signed-off-by: Pavel Tatashin Reviewed-by: James Morse --- arch/arm64/Kconfig | 4 + arch/arm64/include/asm/trans_pgd.h | 21 +++ arch/arm64/kernel/hibernate.c | 199 +------------------------- arch/arm64/mm/Makefile | 1 + arch/arm64/mm/trans_pgd.c | 219 +++++++++++++++++++++++++++++ 5 files changed, 246 insertions(+), 198 deletions(-) create mode 100644 arch/arm64/include/asm/trans_pgd.h create mode 100644 arch/arm64/mm/trans_pgd.c diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 0b30e884e088..63e0e1db6b2e 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -1107,6 +1107,10 @@ config CRASH_DUMP For more details see Documentation/admin-guide/kdump/kdump.rst +config TRANS_TABLE + def_bool y + depends on HIBERNATION + config XEN_DOM0 def_bool y depends on XEN diff --git a/arch/arm64/include/asm/trans_pgd.h b/arch/arm64/include/asm/trans_pgd.h new file mode 100644 index 000000000000..23153c13d1ce --- /dev/null +++ b/arch/arm64/include/asm/trans_pgd.h @@ -0,0 +1,21 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* + * Copyright (c) 2020, Microsoft Corporation. + * Pavel Tatashin + */ + +#ifndef _ASM_TRANS_TABLE_H +#define _ASM_TRANS_TABLE_H + +#include +#include +#include + +int trans_pgd_create_copy(pgd_t **dst_pgdp, unsigned long start, + unsigned long end); + +int trans_pgd_map_page(pgd_t *trans_pgd, void *page, unsigned long dst_addr, + pgprot_t pgprot); + +#endif /* _ASM_TRANS_TABLE_H */ diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c index 590963c9c609..3d6f0fd73591 100644 --- a/arch/arm64/kernel/hibernate.c +++ b/arch/arm64/kernel/hibernate.c @@ -16,7 +16,6 @@ #define pr_fmt(x) "hibernate: " x #include #include -#include #include #include #include @@ -31,14 +30,12 @@ #include #include #include -#include -#include -#include #include #include #include #include #include +#include #include /* @@ -182,45 +179,6 @@ int arch_hibernation_header_restore(void *addr) } EXPORT_SYMBOL(arch_hibernation_header_restore); -static int trans_pgd_map_page(pgd_t *trans_pgd, void *page, - unsigned long dst_addr, - pgprot_t pgprot) -{ - pgd_t *pgdp; - pud_t *pudp; - pmd_t *pmdp; - pte_t *ptep; - - pgdp = pgd_offset_raw(trans_pgd, dst_addr); - if (pgd_none(READ_ONCE(*pgdp))) { - pudp = (void *)get_safe_page(GFP_ATOMIC); - if (!pudp) - return -ENOMEM; - pgd_populate(&init_mm, pgdp, pudp); - } - - pudp = pud_offset(pgdp, dst_addr); - if (pud_none(READ_ONCE(*pudp))) { - pmdp = (void *)get_safe_page(GFP_ATOMIC); - if (!pmdp) - return -ENOMEM; - pud_populate(&init_mm, pudp, pmdp); - } - - pmdp = pmd_offset(pudp, dst_addr); - if (pmd_none(READ_ONCE(*pmdp))) { - ptep = (void *)get_safe_page(GFP_ATOMIC); - if (!ptep) - return -ENOMEM; - pmd_populate_kernel(&init_mm, pmdp, ptep); - } - - ptep = pte_offset_kernel(pmdp, dst_addr); - set_pte(ptep, pfn_pte(virt_to_pfn(page), PAGE_KERNEL_EXEC)); - - return 0; -} - /* * Copies length bytes, starting at src_start into an new page, * perform cache maintenance, then maps it at the specified address low @@ -339,161 +297,6 @@ int swsusp_arch_suspend(void) return ret; } -static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr) -{ - pte_t pte = READ_ONCE(*src_ptep); - - if (pte_valid(pte)) { - /* - * Resume will overwrite areas that may be marked - * read only (code, rodata). Clear the RDONLY bit from - * the temporary mappings we use during restore. 
- */ - set_pte(dst_ptep, pte_mkwrite(pte)); - } else if (debug_pagealloc_enabled() && !pte_none(pte)) { - /* - * debug_pagealloc will removed the PTE_VALID bit if - * the page isn't in use by the resume kernel. It may have - * been in use by the original kernel, in which case we need - * to put it back in our copy to do the restore. - * - * Before marking this entry valid, check the pfn should - * be mapped. - */ - BUG_ON(!pfn_valid(pte_pfn(pte))); - - set_pte(dst_ptep, pte_mkpresent(pte_mkwrite(pte))); - } -} - -static int copy_pte(pmd_t *dst_pmdp, pmd_t *src_pmdp, unsigned long start, - unsigned long end) -{ - pte_t *src_ptep; - pte_t *dst_ptep; - unsigned long addr = start; - - dst_ptep = (pte_t *)get_safe_page(GFP_ATOMIC); - if (!dst_ptep) - return -ENOMEM; - pmd_populate_kernel(&init_mm, dst_pmdp, dst_ptep); - dst_ptep = pte_offset_kernel(dst_pmdp, start); - - src_ptep = pte_offset_kernel(src_pmdp, start); - do { - _copy_pte(dst_ptep, src_ptep, addr); - } while (dst_ptep++, src_ptep++, addr += PAGE_SIZE, addr != end); - - return 0; -} - -static int copy_pmd(pud_t *dst_pudp, pud_t *src_pudp, unsigned long start, - unsigned long end) -{ - pmd_t *src_pmdp; - pmd_t *dst_pmdp; - unsigned long next; - unsigned long addr = start; - - if (pud_none(READ_ONCE(*dst_pudp))) { - dst_pmdp = (pmd_t *)get_safe_page(GFP_ATOMIC); - if (!dst_pmdp) - return -ENOMEM; - pud_populate(&init_mm, dst_pudp, dst_pmdp); - } - dst_pmdp = pmd_offset(dst_pudp, start); - - src_pmdp = pmd_offset(src_pudp, start); - do { - pmd_t pmd = READ_ONCE(*src_pmdp); - - next = pmd_addr_end(addr, end); - if (pmd_none(pmd)) - continue; - if (pmd_table(pmd)) { - if (copy_pte(dst_pmdp, src_pmdp, addr, next)) - return -ENOMEM; - } else { - set_pmd(dst_pmdp, - __pmd(pmd_val(pmd) & ~PMD_SECT_RDONLY)); - } - } while (dst_pmdp++, src_pmdp++, addr = next, addr != end); - - return 0; -} - -static int copy_pud(pgd_t *dst_pgdp, pgd_t *src_pgdp, unsigned long start, - unsigned long end) -{ - pud_t *dst_pudp; - pud_t *src_pudp; - unsigned long next; - unsigned long addr = start; - - if (pgd_none(READ_ONCE(*dst_pgdp))) { - dst_pudp = (pud_t *)get_safe_page(GFP_ATOMIC); - if (!dst_pudp) - return -ENOMEM; - pgd_populate(&init_mm, dst_pgdp, dst_pudp); - } - dst_pudp = pud_offset(dst_pgdp, start); - - src_pudp = pud_offset(src_pgdp, start); - do { - pud_t pud = READ_ONCE(*src_pudp); - - next = pud_addr_end(addr, end); - if (pud_none(pud)) - continue; - if (pud_table(pud)) { - if (copy_pmd(dst_pudp, src_pudp, addr, next)) - return -ENOMEM; - } else { - set_pud(dst_pudp, - __pud(pud_val(pud) & ~PUD_SECT_RDONLY)); - } - } while (dst_pudp++, src_pudp++, addr = next, addr != end); - - return 0; -} - -static int copy_page_tables(pgd_t *dst_pgdp, unsigned long start, - unsigned long end) -{ - unsigned long next; - unsigned long addr = start; - pgd_t *src_pgdp = pgd_offset_k(start); - - dst_pgdp = pgd_offset_raw(dst_pgdp, start); - do { - next = pgd_addr_end(addr, end); - if (pgd_none(READ_ONCE(*src_pgdp))) - continue; - if (copy_pud(dst_pgdp, src_pgdp, addr, next)) - return -ENOMEM; - } while (dst_pgdp++, src_pgdp++, addr = next, addr != end); - - return 0; -} - -static int trans_pgd_create_copy(pgd_t **dst_pgdp, unsigned long start, - unsigned long end) -{ - int rc; - pgd_t *trans_pgd = (pgd_t *)get_safe_page(GFP_ATOMIC); - - if (!trans_pgd) { - pr_err("Failed to allocate memory for temporary page tables.\n"); - return -ENOMEM; - } - - rc = copy_page_tables(trans_pgd, start, end); - if (!rc) - *dst_pgdp = trans_pgd; - - return rc; -} - /* * Setup then 
Resume from the hibernate image using swsusp_arch_suspend_exit(). * diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile index d91030f0ffee..bdad6ff0d72c 100644 --- a/arch/arm64/mm/Makefile +++ b/arch/arm64/mm/Makefile @@ -6,6 +6,7 @@ obj-y := dma-mapping.o extable.o fault.o init.o \ obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o obj-$(CONFIG_PTDUMP_CORE) += dump.o obj-$(CONFIG_PTDUMP_DEBUGFS) += ptdump_debugfs.o +obj-$(CONFIG_TRANS_TABLE) += trans_pgd.o obj-$(CONFIG_NUMA) += numa.o obj-$(CONFIG_DEBUG_VIRTUAL) += physaddr.o KASAN_SANITIZE_physaddr.o += n diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c new file mode 100644 index 000000000000..d20e48520cef --- /dev/null +++ b/arch/arm64/mm/trans_pgd.c @@ -0,0 +1,219 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* + * Transitional page tables for kexec and hibernate + * + * This file derived from: arch/arm64/kernel/hibernate.c + * + * Copyright (c) 2020, Microsoft Corporation. + * Pavel Tatashin + * + */ + +/* + * Transitional tables are used during system transferring from one world to + * another: such as during hibernate restore, and kexec reboots. During these + * phases one cannot rely on page table not being overwritten. This is because + * hibernate and kexec can overwrite the current page tables during transition. + */ + +#include +#include +#include +#include +#include +#include +#include + +static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr) +{ + pte_t pte = READ_ONCE(*src_ptep); + + if (pte_valid(pte)) { + /* + * Resume will overwrite areas that may be marked + * read only (code, rodata). Clear the RDONLY bit from + * the temporary mappings we use during restore. + */ + set_pte(dst_ptep, pte_mkwrite(pte)); + } else if (debug_pagealloc_enabled() && !pte_none(pte)) { + /* + * debug_pagealloc will removed the PTE_VALID bit if + * the page isn't in use by the resume kernel. It may have + * been in use by the original kernel, in which case we need + * to put it back in our copy to do the restore. + * + * Before marking this entry valid, check the pfn should + * be mapped. 
+ */ + BUG_ON(!pfn_valid(pte_pfn(pte))); + + set_pte(dst_ptep, pte_mkpresent(pte_mkwrite(pte))); + } +} + +static int copy_pte(pmd_t *dst_pmdp, pmd_t *src_pmdp, unsigned long start, + unsigned long end) +{ + pte_t *src_ptep; + pte_t *dst_ptep; + unsigned long addr = start; + + dst_ptep = (pte_t *)get_safe_page(GFP_ATOMIC); + if (!dst_ptep) + return -ENOMEM; + pmd_populate_kernel(&init_mm, dst_pmdp, dst_ptep); + dst_ptep = pte_offset_kernel(dst_pmdp, start); + + src_ptep = pte_offset_kernel(src_pmdp, start); + do { + _copy_pte(dst_ptep, src_ptep, addr); + } while (dst_ptep++, src_ptep++, addr += PAGE_SIZE, addr != end); + + return 0; +} + +static int copy_pmd(pud_t *dst_pudp, pud_t *src_pudp, unsigned long start, + unsigned long end) +{ + pmd_t *src_pmdp; + pmd_t *dst_pmdp; + unsigned long next; + unsigned long addr = start; + + if (pud_none(READ_ONCE(*dst_pudp))) { + dst_pmdp = (pmd_t *)get_safe_page(GFP_ATOMIC); + if (!dst_pmdp) + return -ENOMEM; + pud_populate(&init_mm, dst_pudp, dst_pmdp); + } + dst_pmdp = pmd_offset(dst_pudp, start); + + src_pmdp = pmd_offset(src_pudp, start); + do { + pmd_t pmd = READ_ONCE(*src_pmdp); + + next = pmd_addr_end(addr, end); + if (pmd_none(pmd)) + continue; + if (pmd_table(pmd)) { + if (copy_pte(dst_pmdp, src_pmdp, addr, next)) + return -ENOMEM; + } else { + set_pmd(dst_pmdp, + __pmd(pmd_val(pmd) & ~PMD_SECT_RDONLY)); + } + } while (dst_pmdp++, src_pmdp++, addr = next, addr != end); + + return 0; +} + +static int copy_pud(pgd_t *dst_pgdp, pgd_t *src_pgdp, unsigned long start, + unsigned long end) +{ + pud_t *dst_pudp; + pud_t *src_pudp; + unsigned long next; + unsigned long addr = start; + + if (pgd_none(READ_ONCE(*dst_pgdp))) { + dst_pudp = (pud_t *)get_safe_page(GFP_ATOMIC); + if (!dst_pudp) + return -ENOMEM; + pgd_populate(&init_mm, dst_pgdp, dst_pudp); + } + dst_pudp = pud_offset(dst_pgdp, start); + + src_pudp = pud_offset(src_pgdp, start); + do { + pud_t pud = READ_ONCE(*src_pudp); + + next = pud_addr_end(addr, end); + if (pud_none(pud)) + continue; + if (pud_table(pud)) { + if (copy_pmd(dst_pudp, src_pudp, addr, next)) + return -ENOMEM; + } else { + set_pud(dst_pudp, + __pud(pud_val(pud) & ~PUD_SECT_RDONLY)); + } + } while (dst_pudp++, src_pudp++, addr = next, addr != end); + + return 0; +} + +static int copy_page_tables(pgd_t *dst_pgdp, unsigned long start, + unsigned long end) +{ + unsigned long next; + unsigned long addr = start; + pgd_t *src_pgdp = pgd_offset_k(start); + + dst_pgdp = pgd_offset_raw(dst_pgdp, start); + do { + next = pgd_addr_end(addr, end); + if (pgd_none(READ_ONCE(*src_pgdp))) + continue; + if (copy_pud(dst_pgdp, src_pgdp, addr, next)) + return -ENOMEM; + } while (dst_pgdp++, src_pgdp++, addr = next, addr != end); + + return 0; +} + +int trans_pgd_create_copy(pgd_t **dst_pgdp, unsigned long start, + unsigned long end) +{ + int rc; + pgd_t *trans_pgd = (pgd_t *)get_safe_page(GFP_ATOMIC); + + if (!trans_pgd) { + pr_err("Failed to allocate memory for temporary page tables.\n"); + return -ENOMEM; + } + + rc = copy_page_tables(trans_pgd, start, end); + if (!rc) + *dst_pgdp = trans_pgd; + + return rc; +} + +int trans_pgd_map_page(pgd_t *trans_pgd, void *page, unsigned long dst_addr, + pgprot_t pgprot) +{ + pgd_t *pgdp; + pud_t *pudp; + pmd_t *pmdp; + pte_t *ptep; + + pgdp = pgd_offset_raw(trans_pgd, dst_addr); + if (pgd_none(READ_ONCE(*pgdp))) { + pudp = (void *)get_safe_page(GFP_ATOMIC); + if (!pudp) + return -ENOMEM; + pgd_populate(&init_mm, pgdp, pudp); + } + + pudp = pud_offset(pgdp, dst_addr); + if (pud_none(READ_ONCE(*pudp))) { + 
pmdp = (void *)get_safe_page(GFP_ATOMIC); + if (!pmdp) + return -ENOMEM; + pud_populate(&init_mm, pudp, pmdp); + } + + pmdp = pmd_offset(pudp, dst_addr); + if (pmd_none(READ_ONCE(*pmdp))) { + ptep = (void *)get_safe_page(GFP_ATOMIC); + if (!ptep) + return -ENOMEM; + pmd_populate_kernel(&init_mm, pmdp, ptep); + } + + ptep = pte_offset_kernel(pmdp, dst_addr); + set_pte(ptep, pfn_pte(virt_to_pfn(page), PAGE_KERNEL_EXEC)); + + return 0; +}

From patchwork Thu Mar 26 03:24:05 2020
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 11459105
From: Pavel Tatashin
Subject: [PATCH v9 03/18] arm64: trans_pgd: make trans_pgd_map_page generic
Date: Wed, 25 Mar 2020 23:24:05 -0400
Message-Id: <20200326032420.27220-4-pasha.tatashin@soleen.com>
In-Reply-To: <20200326032420.27220-1-pasha.tatashin@soleen.com>
References: <20200326032420.27220-1-pasha.tatashin@soleen.com>

kexec is going to use a different allocator, so make trans_pgd_map_page accept the allocator as an argument. kexec is also going to use a different mapping protection, so pass that via an argument as well.

Signed-off-by: Pavel Tatashin
Reviewed-by: Matthias Brugger
--- arch/arm64/include/asm/trans_pgd.h | 18 ++++++++++++++++-- arch/arm64/kernel/hibernate.c | 12 +++++++++++- arch/arm64/mm/trans_pgd.c | 27 +++++++++++++++++++++------ 3 files changed, 48 insertions(+), 9 deletions(-) diff --git a/arch/arm64/include/asm/trans_pgd.h b/arch/arm64/include/asm/trans_pgd.h index 23153c13d1ce..ad5194ad178d 100644 --- a/arch/arm64/include/asm/trans_pgd.h +++ b/arch/arm64/include/asm/trans_pgd.h @@ -12,10 +12,24 @@ #include #include +/* + * trans_alloc_page + * - Allocator that should return exactly one zeroed page, if this + * allocator fails, trans_pgd returns -ENOMEM error.
+ * + * trans_alloc_arg + * - Passed to trans_alloc_page as an argument + */ + +struct trans_pgd_info { + void * (*trans_alloc_page)(void *arg); + void *trans_alloc_arg; +}; + int trans_pgd_create_copy(pgd_t **dst_pgdp, unsigned long start, unsigned long end); -int trans_pgd_map_page(pgd_t *trans_pgd, void *page, unsigned long dst_addr, - pgprot_t pgprot); +int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd, + void *page, unsigned long dst_addr, pgprot_t pgprot); #endif /* _ASM_TRANS_TABLE_H */ diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c index 3d6f0fd73591..607bb1fbc349 100644 --- a/arch/arm64/kernel/hibernate.c +++ b/arch/arm64/kernel/hibernate.c @@ -179,6 +179,11 @@ int arch_hibernation_header_restore(void *addr) } EXPORT_SYMBOL(arch_hibernation_header_restore); +static void *hibernate_page_alloc(void *arg) +{ + return (void *)get_safe_page((gfp_t)(unsigned long)arg); +} + /* * Copies length bytes, starting at src_start into an new page, * perform cache maintenance, then maps it at the specified address low @@ -195,6 +200,11 @@ static int create_safe_exec_page(void *src_start, size_t length, unsigned long dst_addr, phys_addr_t *phys_dst_addr) { + struct trans_pgd_info trans_info = { + .trans_alloc_page = hibernate_page_alloc, + .trans_alloc_arg = (void *)GFP_ATOMIC, + }; + void *page = (void *)get_safe_page(GFP_ATOMIC); pgd_t *trans_pgd; int rc; @@ -209,7 +219,7 @@ static int create_safe_exec_page(void *src_start, size_t length, if (!trans_pgd) return -ENOMEM; - rc = trans_pgd_map_page(trans_pgd, page, dst_addr, + rc = trans_pgd_map_page(&trans_info, trans_pgd, page, dst_addr, PAGE_KERNEL_EXEC); if (rc) return rc; diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c index d20e48520cef..275a79935d7e 100644 --- a/arch/arm64/mm/trans_pgd.c +++ b/arch/arm64/mm/trans_pgd.c @@ -25,6 +25,11 @@ #include #include +static void *trans_alloc(struct trans_pgd_info *info) +{ + return info->trans_alloc_page(info->trans_alloc_arg); +} + static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr) { pte_t pte = READ_ONCE(*src_ptep); @@ -180,8 +185,18 @@ int trans_pgd_create_copy(pgd_t **dst_pgdp, unsigned long start, return rc; } -int trans_pgd_map_page(pgd_t *trans_pgd, void *page, unsigned long dst_addr, - pgprot_t pgprot) +/* + * Add map entry to trans_pgd for a base-size page at PTE level. + * info: contains allocator and its argument + * trans_pgd: page table in which new map is added. + * page: page to be mapped. + * dst_addr: new VA address for the pages + * pgprot: protection for the page. + * + * Returns 0 on success, and -ENOMEM on failure. 
+ */ +int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd, + void *page, unsigned long dst_addr, pgprot_t pgprot) { pgd_t *pgdp; pud_t *pudp; @@ -190,7 +205,7 @@ int trans_pgd_map_page(pgd_t *trans_pgd, void *page, unsigned long dst_addr, pgdp = pgd_offset_raw(trans_pgd, dst_addr); if (pgd_none(READ_ONCE(*pgdp))) { - pudp = (void *)get_safe_page(GFP_ATOMIC); + pudp = trans_alloc(info); if (!pudp) return -ENOMEM; pgd_populate(&init_mm, pgdp, pudp); @@ -198,7 +213,7 @@ int trans_pgd_map_page(pgd_t *trans_pgd, void *page, unsigned long dst_addr, pudp = pud_offset(pgdp, dst_addr); if (pud_none(READ_ONCE(*pudp))) { - pmdp = (void *)get_safe_page(GFP_ATOMIC); + pmdp = trans_alloc(info); if (!pmdp) return -ENOMEM; pud_populate(&init_mm, pudp, pmdp); @@ -206,14 +221,14 @@ int trans_pgd_map_page(pgd_t *trans_pgd, void *page, unsigned long dst_addr, pmdp = pmd_offset(pudp, dst_addr); if (pmd_none(READ_ONCE(*pmdp))) { - ptep = (void *)get_safe_page(GFP_ATOMIC); + ptep = trans_alloc(info); if (!ptep) return -ENOMEM; pmd_populate_kernel(&init_mm, pmdp, ptep); } ptep = pte_offset_kernel(pmdp, dst_addr); - set_pte(ptep, pfn_pte(virt_to_pfn(page), PAGE_KERNEL_EXEC)); + set_pte(ptep, pfn_pte(virt_to_pfn(page), pgprot)); return 0; } From patchwork Thu Mar 26 03:24:06 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pasha Tatashin X-Patchwork-Id: 11459109 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 8460A161F for ; Thu, 26 Mar 2020 03:24:35 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 4571520737 for ; Thu, 26 Mar 2020 03:24:35 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=soleen.com header.i=@soleen.com header.b="lfgJ5e8w" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 4571520737 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=soleen.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 97F3E6B0010; Wed, 25 Mar 2020 23:24:30 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 930676B0032; Wed, 25 Mar 2020 23:24:30 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 7D2036B0036; Wed, 25 Mar 2020 23:24:30 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0218.hostedemail.com [216.40.44.218]) by kanga.kvack.org (Postfix) with ESMTP id 5EA906B0010 for ; Wed, 25 Mar 2020 23:24:30 -0400 (EDT) Received: from smtpin28.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 26A56181AD0A5 for ; Thu, 26 Mar 2020 03:24:30 +0000 (UTC) X-FDA: 76636070700.28.geese93_a2b0aa8eb833 X-Spam-Summary: 
From: Pavel Tatashin
Subject: [PATCH v9 04/18] arm64: trans_pgd: pass allocator trans_pgd_create_copy
Date: Wed, 25 Mar 2020 23:24:06 -0400
Message-Id: <20200326032420.27220-5-pasha.tatashin@soleen.com>
In-Reply-To: <20200326032420.27220-1-pasha.tatashin@soleen.com>
References: <20200326032420.27220-1-pasha.tatashin@soleen.com>

Make trans_pgd_create_copy and its subroutines use the allocator that is passed in as an argument.

Signed-off-by: Pavel Tatashin
Reviewed-by: James Morse
--- arch/arm64/include/asm/trans_pgd.h | 4 +-- arch/arm64/kernel/hibernate.c | 7 ++++- arch/arm64/mm/trans_pgd.c | 44 ++++++++++++++++++------------ 3 files changed, 35 insertions(+), 20 deletions(-) diff --git a/arch/arm64/include/asm/trans_pgd.h b/arch/arm64/include/asm/trans_pgd.h index ad5194ad178d..97a7ea73b289 100644 --- a/arch/arm64/include/asm/trans_pgd.h +++ b/arch/arm64/include/asm/trans_pgd.h @@ -26,8 +26,8 @@ struct trans_pgd_info { void *trans_alloc_arg; }; -int trans_pgd_create_copy(pgd_t **dst_pgdp, unsigned long start, - unsigned long end); +int trans_pgd_create_copy(struct trans_pgd_info *info, pgd_t **trans_pgd, + unsigned long start, unsigned long end); int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd, void *page, unsigned long dst_addr, pgprot_t pgprot); diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c index 607bb1fbc349..95e00536aa67 100644 --- a/arch/arm64/kernel/hibernate.c +++ b/arch/arm64/kernel/hibernate.c @@ -322,13 +322,18 @@ int swsusp_arch_resume(void) phys_addr_t phys_hibernate_exit; void __noreturn (*hibernate_exit)(phys_addr_t, phys_addr_t, void *, void *, phys_addr_t, phys_addr_t); + struct trans_pgd_info trans_info = { + .trans_alloc_page = hibernate_page_alloc, + .trans_alloc_arg = (void *)GFP_ATOMIC, + }; /* * Restoring the memory image will overwrite the ttbr1 page tables. * Create a second copy of just the linear map, and use this when * restoring.
*/ - rc = trans_pgd_create_copy(&tmp_pg_dir, PAGE_OFFSET, PAGE_END); + rc = trans_pgd_create_copy(&trans_info, &tmp_pg_dir, PAGE_OFFSET, + PAGE_END); if (rc) return rc; diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c index 275a79935d7e..c16ae4e2b496 100644 --- a/arch/arm64/mm/trans_pgd.c +++ b/arch/arm64/mm/trans_pgd.c @@ -57,14 +57,14 @@ static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr) } } -static int copy_pte(pmd_t *dst_pmdp, pmd_t *src_pmdp, unsigned long start, - unsigned long end) +static int copy_pte(struct trans_pgd_info *info, pmd_t *dst_pmdp, + pmd_t *src_pmdp, unsigned long start, unsigned long end) { pte_t *src_ptep; pte_t *dst_ptep; unsigned long addr = start; - dst_ptep = (pte_t *)get_safe_page(GFP_ATOMIC); + dst_ptep = trans_alloc(info); if (!dst_ptep) return -ENOMEM; pmd_populate_kernel(&init_mm, dst_pmdp, dst_ptep); @@ -78,8 +78,8 @@ static int copy_pte(pmd_t *dst_pmdp, pmd_t *src_pmdp, unsigned long start, return 0; } -static int copy_pmd(pud_t *dst_pudp, pud_t *src_pudp, unsigned long start, - unsigned long end) +static int copy_pmd(struct trans_pgd_info *info, pud_t *dst_pudp, + pud_t *src_pudp, unsigned long start, unsigned long end) { pmd_t *src_pmdp; pmd_t *dst_pmdp; @@ -87,7 +87,7 @@ static int copy_pmd(pud_t *dst_pudp, pud_t *src_pudp, unsigned long start, unsigned long addr = start; if (pud_none(READ_ONCE(*dst_pudp))) { - dst_pmdp = (pmd_t *)get_safe_page(GFP_ATOMIC); + dst_pmdp = trans_alloc(info); if (!dst_pmdp) return -ENOMEM; pud_populate(&init_mm, dst_pudp, dst_pmdp); @@ -102,7 +102,7 @@ static int copy_pmd(pud_t *dst_pudp, pud_t *src_pudp, unsigned long start, if (pmd_none(pmd)) continue; if (pmd_table(pmd)) { - if (copy_pte(dst_pmdp, src_pmdp, addr, next)) + if (copy_pte(info, dst_pmdp, src_pmdp, addr, next)) return -ENOMEM; } else { set_pmd(dst_pmdp, @@ -113,7 +113,8 @@ static int copy_pmd(pud_t *dst_pudp, pud_t *src_pudp, unsigned long start, return 0; } -static int copy_pud(pgd_t *dst_pgdp, pgd_t *src_pgdp, unsigned long start, +static int copy_pud(struct trans_pgd_info *info, pgd_t *dst_pgdp, + pgd_t *src_pgdp, unsigned long start, unsigned long end) { pud_t *dst_pudp; @@ -122,7 +123,7 @@ static int copy_pud(pgd_t *dst_pgdp, pgd_t *src_pgdp, unsigned long start, unsigned long addr = start; if (pgd_none(READ_ONCE(*dst_pgdp))) { - dst_pudp = (pud_t *)get_safe_page(GFP_ATOMIC); + dst_pudp = trans_alloc(info); if (!dst_pudp) return -ENOMEM; pgd_populate(&init_mm, dst_pgdp, dst_pudp); @@ -137,7 +138,7 @@ static int copy_pud(pgd_t *dst_pgdp, pgd_t *src_pgdp, unsigned long start, if (pud_none(pud)) continue; if (pud_table(pud)) { - if (copy_pmd(dst_pudp, src_pudp, addr, next)) + if (copy_pmd(info, dst_pudp, src_pudp, addr, next)) return -ENOMEM; } else { set_pud(dst_pudp, @@ -148,8 +149,8 @@ static int copy_pud(pgd_t *dst_pgdp, pgd_t *src_pgdp, unsigned long start, return 0; } -static int copy_page_tables(pgd_t *dst_pgdp, unsigned long start, - unsigned long end) +static int copy_page_tables(struct trans_pgd_info *info, pgd_t *dst_pgdp, + unsigned long start, unsigned long end) { unsigned long next; unsigned long addr = start; @@ -160,25 +161,34 @@ static int copy_page_tables(pgd_t *dst_pgdp, unsigned long start, next = pgd_addr_end(addr, end); if (pgd_none(READ_ONCE(*src_pgdp))) continue; - if (copy_pud(dst_pgdp, src_pgdp, addr, next)) + if (copy_pud(info, dst_pgdp, src_pgdp, addr, next)) return -ENOMEM; } while (dst_pgdp++, src_pgdp++, addr = next, addr != end); return 0; } -int trans_pgd_create_copy(pgd_t 
**dst_pgdp, unsigned long start, - unsigned long end) +/* + * Create trans_pgd and copy linear map. + * info: contains allocator and its argument + * dst_pgdp: new page table that is created, and to which map is copied. + * start: Start of the interval (inclusive). + * end: End of the interval (exclusive). + * + * Returns 0 on success, and -ENOMEM on failure. + */ +int trans_pgd_create_copy(struct trans_pgd_info *info, pgd_t **dst_pgdp, + unsigned long start, unsigned long end) { int rc; - pgd_t *trans_pgd = (pgd_t *)get_safe_page(GFP_ATOMIC); + pgd_t *trans_pgd = trans_alloc(info); if (!trans_pgd) { pr_err("Failed to allocate memory for temporary page tables.\n"); return -ENOMEM; } - rc = copy_page_tables(trans_pgd, start, end); + rc = copy_page_tables(info, trans_pgd, start, end); if (!rc) *dst_pgdp = trans_pgd; From patchwork Thu Mar 26 03:24:07 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pasha Tatashin X-Patchwork-Id: 11459111 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 04958161F for ; Thu, 26 Mar 2020 03:24:38 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id C558C20857 for ; Thu, 26 Mar 2020 03:24:37 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=soleen.com header.i=@soleen.com header.b="f49DFHwQ" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org C558C20857 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=soleen.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 38BE46B0032; Wed, 25 Mar 2020 23:24:32 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 33B9D6B0036; Wed, 25 Mar 2020 23:24:32 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 16A0D6B0037; Wed, 25 Mar 2020 23:24:32 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0061.hostedemail.com [216.40.44.61]) by kanga.kvack.org (Postfix) with ESMTP id 004696B0032 for ; Wed, 25 Mar 2020 23:24:31 -0400 (EDT) Received: from smtpin07.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id B3AD4824934B for ; Thu, 26 Mar 2020 03:24:31 +0000 (UTC) X-FDA: 76636070742.07.class46_a6368ecf4a1f X-Spam-Summary: 2,0,0,a2bdac9f2fce5cff,d41d8cd98f00b204,pasha.tatashin@soleen.com,,RULES_HIT:41:355:379:541:800:960:973:988:989:1260:1345:1359:1381:1437:1535:1542:1711:1730:1747:1777:1792:2393:2559:2562:3138:3139:3140:3141:3142:3353:3865:3871:3872:3874:4321:5007:6117:6261:6653:6737:6738:7903:10004:11026:11658:11914:12048:12297:12517:12519:12555:12895:13255:14096:14181:14394:14721:21080:21433:21444:21627:21990:30012:30036:30054:30075,0,RBL:209.85.222.193:@soleen.com:.lbl8.mailshell.net-66.100.201.201 62.2.0.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:24,LUA_SUMMARY:none X-HE-Tag: class46_a6368ecf4a1f X-Filterd-Recvd-Size: 5627 Received: from mail-qk1-f193.google.com (mail-qk1-f193.google.com [209.85.222.193]) by imf06.hostedemail.com (Postfix) with 
From: Pavel Tatashin
Subject: [PATCH v9 05/18] arm64: trans_pgd: pass NULL instead of init_mm to *_populate functions
Date: Wed, 25 Mar 2020 23:24:07 -0400
Message-Id: <20200326032420.27220-6-pasha.tatashin@soleen.com>
In-Reply-To: <20200326032420.27220-1-pasha.tatashin@soleen.com>
References: <20200326032420.27220-1-pasha.tatashin@soleen.com>

trans_pgd_* should be independent of any mm context, because the tables it creates are used when no mm context exists, i.e. during the transition between kernels. Simply replace the init_mm references with NULL.
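This is safe because the arm64 *_populate helpers only write the next-level table descriptor and make no use of their mm argument. A paraphrased sketch of the pmd-level helper (not verbatim kernel source; __pmd_populate() and PMD_TYPE_TABLE come from the arm64 arch headers, which are not part of this series):

    static inline void
    pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmdp, pte_t *ptep)
    {
        /* 'mm' is not used; only the table entry pointing at ptep is written. */
        __pmd_populate(pmdp, __pa(ptep), PMD_TYPE_TABLE);
    }

pud_populate() and pgd_populate() follow the same pattern, which is why the diff below can pass NULL where it used to pass &init_mm.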
Signed-off-by: Pavel Tatashin Acked-by: James Morse --- arch/arm64/mm/trans_pgd.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c index c16ae4e2b496..37d7d1c60f65 100644 --- a/arch/arm64/mm/trans_pgd.c +++ b/arch/arm64/mm/trans_pgd.c @@ -67,7 +67,7 @@ static int copy_pte(struct trans_pgd_info *info, pmd_t *dst_pmdp, dst_ptep = trans_alloc(info); if (!dst_ptep) return -ENOMEM; - pmd_populate_kernel(&init_mm, dst_pmdp, dst_ptep); + pmd_populate_kernel(NULL, dst_pmdp, dst_ptep); dst_ptep = pte_offset_kernel(dst_pmdp, start); src_ptep = pte_offset_kernel(src_pmdp, start); @@ -90,7 +90,7 @@ static int copy_pmd(struct trans_pgd_info *info, pud_t *dst_pudp, dst_pmdp = trans_alloc(info); if (!dst_pmdp) return -ENOMEM; - pud_populate(&init_mm, dst_pudp, dst_pmdp); + pud_populate(NULL, dst_pudp, dst_pmdp); } dst_pmdp = pmd_offset(dst_pudp, start); @@ -126,7 +126,7 @@ static int copy_pud(struct trans_pgd_info *info, pgd_t *dst_pgdp, dst_pudp = trans_alloc(info); if (!dst_pudp) return -ENOMEM; - pgd_populate(&init_mm, dst_pgdp, dst_pudp); + pgd_populate(NULL, dst_pgdp, dst_pudp); } dst_pudp = pud_offset(dst_pgdp, start); @@ -218,7 +218,7 @@ int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd, pudp = trans_alloc(info); if (!pudp) return -ENOMEM; - pgd_populate(&init_mm, pgdp, pudp); + pgd_populate(NULL, pgdp, pudp); } pudp = pud_offset(pgdp, dst_addr); @@ -226,7 +226,7 @@ int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd, pmdp = trans_alloc(info); if (!pmdp) return -ENOMEM; - pud_populate(&init_mm, pudp, pmdp); + pud_populate(NULL, pudp, pmdp); } pmdp = pmd_offset(pudp, dst_addr); @@ -234,7 +234,7 @@ int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd, ptep = trans_alloc(info); if (!ptep) return -ENOMEM; - pmd_populate_kernel(&init_mm, pmdp, ptep); + pmd_populate_kernel(NULL, pmdp, ptep); } ptep = pte_offset_kernel(pmdp, dst_addr); From patchwork Thu Mar 26 03:24:08 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pasha Tatashin X-Patchwork-Id: 11459113 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 16D32161F for ; Thu, 26 Mar 2020 03:24:40 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id CEE202074D for ; Thu, 26 Mar 2020 03:24:39 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=soleen.com header.i=@soleen.com header.b="YNFNHzcL" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org CEE202074D Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=soleen.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id BA2006B0036; Wed, 25 Mar 2020 23:24:33 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id B2C716B0037; Wed, 25 Mar 2020 23:24:33 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id A1EAC6B006C; Wed, 25 Mar 2020 23:24:33 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0013.hostedemail.com [216.40.44.13]) by 
kanga.kvack.org (Postfix) with ESMTP id 7939B6B0036 for ; Wed, 25 Mar 2020 23:24:33 -0400 (EDT) Received: from smtpin17.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 36DEB2496 for ; Thu, 26 Mar 2020 03:24:33 +0000 (UTC) X-FDA: 76636070826.17.wire18_aa244434fe36 X-Spam-Summary: 2,0,0,b49c81a76201edc4,d41d8cd98f00b204,pasha.tatashin@soleen.com,,RULES_HIT:41:355:379:541:800:960:973:988:989:1260:1345:1359:1381:1437:1461:1535:1541:1711:1730:1747:1777:1792:2393:2553:2559:2562:2903:3138:3139:3140:3141:3142:3353:3865:3866:3867:3868:3871:3872:3874:5007:6119:6261:6653:6737:6738:7576:7903:9036:10004:11026:11473:11657:11658:11914:12043:12048:12297:12438:12517:12519:12555:12679:12895:13069:13161:13229:13311:13357:13972:14181:14384:14394:14721:21080:21325:21433:21444:21451:21627:21740:21990:30034:30054:30070:30079:30090,0,RBL:209.85.222.196:@soleen.com:.lbl8.mailshell.net-66.100.201.201 62.2.0.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:22,LUA_SUMMARY:none X-HE-Tag: wire18_aa244434fe36 X-Filterd-Recvd-Size: 5196 Received: from mail-qk1-f196.google.com (mail-qk1-f196.google.com [209.85.222.196]) by imf35.hostedemail.com (Postfix) with ESMTP for ; Thu, 26 Mar 2020 03:24:32 +0000 (UTC) Received: by mail-qk1-f196.google.com with SMTP id l25so5097923qki.7 for ; Wed, 25 Mar 2020 20:24:32 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=soleen.com; s=google; h=from:to:subject:date:message-id:in-reply-to:references; bh=BKhAx6BebFw/oEITnjoM5nL1ibAHOPqPAiH4c9gaYa8=; b=YNFNHzcL8opCZWUO6XnLzhdlsu4Yy+Rizy9yCWVbFy3ZRpvtcZv2puWnEOvUC3b1Vt XHs+hMfat0AMwYaJFwe1RjtrQM1/uslxPjzHvIs14d1/5BioLW/YL75JijXhWEBD/9yF 5sEve9b8u+au7l1RKRTggSnH7moh6udjbiEcmw0ePceWaMVwexeB6yeHT0HIof6DX8JL fQtTMoJKaZnvMyOB2sU5splZqi4JPAZcRlyG/7Pf29MOG3wsql0XOZOoGU/cimjTj6y2 2DVjOG87kdt+O93WALQb53VWLj/tVNY4j/j+Wlv7R+JedlxC22BoesYGUjxdDwJ3uBn7 zG/Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:subject:date:message-id:in-reply-to :references; bh=BKhAx6BebFw/oEITnjoM5nL1ibAHOPqPAiH4c9gaYa8=; b=Vy5MrrepSnnS6gMEJ3SqDS7NE5tKNmN1fyyd7TuPbi/RhtSRswtdCHz1JGVGXi1KPh Pl6UzlEVyCEAyHqXUTNWyCS6K42w92iqQC/EZEcoO9zKXLkwmzkYMHsogfyqb6CsauC/ Qk1ad0tpD01WXNHeN4mbaYJ2sYdtDsyHKffi6MDroLnWCZOqJEKNwt3afO8Kec7Dlm5Q thWqe6ZQdptja4kGfNprIj9t/b0XMjKa/ZK34nqh/f+ArKd+HY6Mjr0PwPMEnApI8rqf vGZySNDGk0+qvuYo3lDqeZODF4Z0rCJAq/0FQXc5jR7xy0+ytGwAKvr7goYqQG832qld pAqw== X-Gm-Message-State: ANhLgQ1M+f3DPhOz8L2ZcELkGL3SDhzNO+mZYYyydIc8dlkWwf+CA2vG AkMu2QmyOsrSA1DlBGJdh1CmFg== X-Google-Smtp-Source: ADFU+vsaZX4t6NPWbuLtjOb6I822H6afCjwQpAn6wOnDl28ywTXqmlBMj7nd/Y69a1G7LrS1oIl+Zw== X-Received: by 2002:a37:a543:: with SMTP id o64mr6053234qke.460.1585193072231; Wed, 25 Mar 2020 20:24:32 -0700 (PDT) Received: from localhost.localdomain (c-73-69-118-222.hsd1.nh.comcast.net. 
[73.69.118.222]) by smtp.gmail.com with ESMTPSA id u4sm620034qka.35.2020.03.25.20.24.30 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 25 Mar 2020 20:24:31 -0700 (PDT) From: Pavel Tatashin To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org, ebiederm@xmission.com, kexec@lists.infradead.org, linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com, will@kernel.org, linux-arm-kernel@lists.infradead.org, maz@kernel.org, james.morse@arm.com, vladimir.murzin@arm.com, matthias.bgg@gmail.com, bhsharma@redhat.com, linux-mm@kvack.org, mark.rutland@arm.com, steve.capper@arm.com, rfontana@redhat.com, tglx@linutronix.de, selindag@gmail.com Subject: [PATCH v9 06/18] arm64: mm: Always update TCR_EL1 from __cpu_set_tcr_t0sz() Date: Wed, 25 Mar 2020 23:24:08 -0400 Message-Id: <20200326032420.27220-7-pasha.tatashin@soleen.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200326032420.27220-1-pasha.tatashin@soleen.com> References: <20200326032420.27220-1-pasha.tatashin@soleen.com> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: James Morse Because only the idmap sets a non-standard T0SZ, __cpu_set_tcr_t0sz() can check for platforms that need to do this using __cpu_uses_extended_idmap() before doing its work. The idmap is only built with enough levels, (and T0SZ bits) to map its single page. To allow hibernate, and then kexec to idmap their single page copy routines, __cpu_set_tcr_t0sz() needs to consider additional users, who may need a different number of levels/T0SZ-bits to the idmap. (i.e. VA_BITS may be enough for the idmap, but not hibernate/kexec) Always read TCR_EL1, and check whether any work needs doing for this request. __cpu_uses_extended_idmap() remains as it is used by KVM, whose idmap is also part of the kernel image. This mostly affects the cpuidle path, where we now get an extra system register read . CC: Lorenzo Pieralisi CC: Sudeep Holla Signed-off-by: James Morse Signed-off-by: Pavel Tatashin --- arch/arm64/include/asm/mmu_context.h | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h index 3827ff4040a3..09ecbfd0ad2e 100644 --- a/arch/arm64/include/asm/mmu_context.h +++ b/arch/arm64/include/asm/mmu_context.h @@ -79,16 +79,15 @@ static inline bool __cpu_uses_extended_idmap_level(void) } /* - * Set TCR.T0SZ to its default value (based on VA_BITS) + * Ensure TCR.T0SZ is set to the provided value. 
*/ static inline void __cpu_set_tcr_t0sz(unsigned long t0sz) { - unsigned long tcr; + unsigned long tcr = read_sysreg(tcr_el1); - if (!__cpu_uses_extended_idmap()) + if ((tcr & TCR_T0SZ_MASK) >> TCR_T0SZ_OFFSET == t0sz) return; - tcr = read_sysreg(tcr_el1); tcr &= ~TCR_T0SZ_MASK; tcr |= t0sz << TCR_T0SZ_OFFSET; write_sysreg(tcr, tcr_el1); From patchwork Thu Mar 26 03:24:09 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pasha Tatashin X-Patchwork-Id: 11459115 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 5F43B1668 for ; Thu, 26 Mar 2020 03:24:42 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 135A720737 for ; Thu, 26 Mar 2020 03:24:42 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=soleen.com header.i=@soleen.com header.b="VAs1MfJj" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 135A720737 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=soleen.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 49C496B0037; Wed, 25 Mar 2020 23:24:35 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 4505B6B006C; Wed, 25 Mar 2020 23:24:35 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 315336B006E; Wed, 25 Mar 2020 23:24:35 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0106.hostedemail.com [216.40.44.106]) by kanga.kvack.org (Postfix) with ESMTP id 197EE6B0037 for ; Wed, 25 Mar 2020 23:24:35 -0400 (EDT) Received: from smtpin02.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id D08BB180AD811 for ; Thu, 26 Mar 2020 03:24:34 +0000 (UTC) X-FDA: 76636070910.02.car77_ad8cdc704156 X-Spam-Summary: 2,0,0,175ff8290243e942,d41d8cd98f00b204,pasha.tatashin@soleen.com,,RULES_HIT:1:2:41:69:355:379:541:800:960:973:988:989:1260:1345:1359:1381:1437:1461:1605:1730:1747:1777:1792:2194:2198:2199:2200:2393:2553:2559:2562:2693:2731:2899:3138:3139:3140:3141:3142:3743:3865:3866:3867:3868:3870:3871:3872:3874:4050:4321:4605:5007:6117:6119:6261:6653:6737:6738:7576:7903:9036:9592:10004:11026:11232:11233:11473:11657:11658:11914:12043:12048:12050:12291:12296:12297:12438:12517:12519:12555:12679:12683:12895:12986:13141:13146:13161:13229:13230:13255:13846:14394:21080:21121:21325:21433:21444:21627:21966:21990:30012:30054:30069:30070:30075:30079:30090:30091,0,RBL:209.85.219.68:@soleen.com:.lbl8.mailshell.net-66.100.201.201 62.2.0.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:23,LUA_SUMMARY:none X-HE-Tag: car77_ad8cdc704156 X-Filterd-Recvd-Size: 10724 Received: from mail-qv1-f68.google.com (mail-qv1-f68.google.com [209.85.219.68]) by imf09.hostedemail.com (Postfix) with ESMTP for ; Thu, 26 Mar 2020 03:24:34 +0000 (UTC) Received: by mail-qv1-f68.google.com with SMTP id m2so2250678qvu.13 for ; Wed, 25 Mar 2020 20:24:34 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=soleen.com; s=google; 
h=from:to:subject:date:message-id:in-reply-to:references; bh=79vWRy0i2YzlhYfgfL3OtU1rYDrCPAZhCt4GY/kzDbU=; b=VAs1MfJjSDIrtNcFz+9Q8ZT25hpAwOX3ydp9C6411J1KeAiqQSaBJET61DnDYN3t65 OYNb9gVZ28UO29s5RYt0qHgipHiOb3Q9omuV0F5AN7DLqWtLjlwf35lvcvQatExrmN3B ECWzq1uNVYMAPFYt/TdrccT/38W9l+FiHi9uhtiGQWz4gQWgRq/vCZV2OY2q/w4yxq++ Htuzp2xVaiP3nnw5QfI2ieMT/G0YOppL9HfjhJgqkXGCyRXF913QC/od/grwE2IsXK8p dDwF1EwTWXoUCBsMKoSg+Sh2/EVFGkS5wI5qzfCpo3EK3pHi9rRNy09j9FGaLH3SIyq7 +XXw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:subject:date:message-id:in-reply-to :references; bh=79vWRy0i2YzlhYfgfL3OtU1rYDrCPAZhCt4GY/kzDbU=; b=GRXxeHq1oTYaDgoDyk4N8stoh2lifji5vIL9slHnb1vjBqBLDxtLiXaAwfIJzN8KDa bdVOwwCXp/M4q4wgxRZtdsbgbL1/zKnKmZCiqzZZ0/OBvUTDzm7tB32H2izpAz/1LpNu bUxJLIa9XuNi0VRBM+WATr05OerjdccNwmEMEDijSSjIJBBUwh4/HXnLBhDpkSGdcJgn E2KOtU3ju+P85EHMdAqAgm9JaolWSQUZqclNl0PHYyXBatgXMWquRGGhP5U+CFFikV+p zwacP8/rQBUrub9E4LLcMX1UN/yvGsHyEScL0AZenQd8C5cg+/RKwSY2qdutWDFOO1dx t2sA== X-Gm-Message-State: ANhLgQ1E/5aXF0+aU4VDikxCooopDjAgD+LqkoVKTEcFOjoMJaDURACH 4xqy9AZ6IvgostSdLUPqoLfZqg== X-Google-Smtp-Source: ADFU+vs04jZQgyyzWnthBR9uVyhpJZVnOP1cEoH7ghqntqa9P123xXiIbPAEb81z75icBIeZHSMlHA== X-Received: by 2002:ad4:4862:: with SMTP id u2mr6360876qvy.67.1585193073744; Wed, 25 Mar 2020 20:24:33 -0700 (PDT) Received: from localhost.localdomain (c-73-69-118-222.hsd1.nh.comcast.net. [73.69.118.222]) by smtp.gmail.com with ESMTPSA id u4sm620034qka.35.2020.03.25.20.24.32 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 25 Mar 2020 20:24:33 -0700 (PDT) From: Pavel Tatashin To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org, ebiederm@xmission.com, kexec@lists.infradead.org, linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com, will@kernel.org, linux-arm-kernel@lists.infradead.org, maz@kernel.org, james.morse@arm.com, vladimir.murzin@arm.com, matthias.bgg@gmail.com, bhsharma@redhat.com, linux-mm@kvack.org, mark.rutland@arm.com, steve.capper@arm.com, rfontana@redhat.com, tglx@linutronix.de, selindag@gmail.com Subject: [PATCH v9 07/18] arm64: trans_pgd: hibernate: idmap the single page that holds the copy page routines Date: Wed, 25 Mar 2020 23:24:09 -0400 Message-Id: <20200326032420.27220-8-pasha.tatashin@soleen.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200326032420.27220-1-pasha.tatashin@soleen.com> References: <20200326032420.27220-1-pasha.tatashin@soleen.com> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: James Morse To resume from hibernate, the contents of memory are restored from the swap image. This may overwrite any page, including the running kernel and its page tables. Hibernate copies the code it uses to do the restore into a single page that it knows won't be overwritten, and maps it with page tables built from pages that won't be overwritten. Today the address it uses for this mapping is arbitrary, but to allow kexec to reuse this code, it needs to be idmapped. To idmap the page we must avoid the kernel helpers that have VA_BITS baked in. Convert create_single_mapping() to take a single PA, and idmap it. The page tables are built in the reverse order to normal using pfn_pte() to stir in any bits between 52:48. T0SZ is always increased to cover 48bits, or 52 if the copy code has bits 52:48 in its PA. 
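To make the T0SZ choice concrete: T0SZ is 64 minus the number of VA bits the TTBR0 walk must cover, so idmapping a single page needs T0SZ 16 for a 48-bit PA and T0SZ 12 when bits 48 and above are in use. A standalone sketch of that calculation, using illustrative names rather than the kernel's macros:

#include <stdint.h>
#include <stdio.h>

/* T0SZ needed to idmap a single page at physical address 'pa'. */
static unsigned int idmap_t0sz_for(uint64_t pa)
{
        /* Any address bit at or above bit 48 set?  Cover 52 VA bits,
         * otherwise 48 (mirrors the max_msb test in the diff below). */
        int max_msb = (pa >> 48) ? 51 : 47;

        return 64 - (max_msb + 1);
}

int main(void)
{
        printf("%u\n", idmap_t0sz_for(0x80000000ULL));      /* 16 */
        printf("%u\n", idmap_t0sz_for(0x1000080000000ULL)); /* 12 */
        return 0;
}

The diff below makes the same decision in trans_pgd_idmap_page() and hands the result back through TCR_T0SZ(max_msb + 1).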
Pasha: The original patch from James inux-arm-kernel/20200115143322.214247-4-james.morse@arm.com Adopted it to trans_pgd, so it can be commonly used by both Kexec and Hibernate. Some minor clean-ups. Signed-off-by: James Morse Signed-off-by: Pavel Tatashin --- arch/arm64/include/asm/trans_pgd.h | 3 ++ arch/arm64/kernel/hibernate.c | 32 +++++++------------ arch/arm64/mm/trans_pgd.c | 49 ++++++++++++++++++++++++++++++ 3 files changed, 63 insertions(+), 21 deletions(-) diff --git a/arch/arm64/include/asm/trans_pgd.h b/arch/arm64/include/asm/trans_pgd.h index 97a7ea73b289..4912d3caf0ca 100644 --- a/arch/arm64/include/asm/trans_pgd.h +++ b/arch/arm64/include/asm/trans_pgd.h @@ -32,4 +32,7 @@ int trans_pgd_create_copy(struct trans_pgd_info *info, pgd_t **trans_pgd, int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd, void *page, unsigned long dst_addr, pgprot_t pgprot); +int trans_pgd_idmap_page(struct trans_pgd_info *info, phys_addr_t *trans_ttbr0, + unsigned long *t0sz, void *page); + #endif /* _ASM_TRANS_TABLE_H */ diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c index 95e00536aa67..784aa01bb4bd 100644 --- a/arch/arm64/kernel/hibernate.c +++ b/arch/arm64/kernel/hibernate.c @@ -197,7 +197,6 @@ static void *hibernate_page_alloc(void *arg) * page system. */ static int create_safe_exec_page(void *src_start, size_t length, - unsigned long dst_addr, phys_addr_t *phys_dst_addr) { struct trans_pgd_info trans_info = { @@ -206,7 +205,8 @@ static int create_safe_exec_page(void *src_start, size_t length, }; void *page = (void *)get_safe_page(GFP_ATOMIC); - pgd_t *trans_pgd; + phys_addr_t trans_ttbr0; + unsigned long t0sz; int rc; if (!page) @@ -214,13 +214,7 @@ static int create_safe_exec_page(void *src_start, size_t length, memcpy(page, src_start, length); __flush_icache_range((unsigned long)page, (unsigned long)page + length); - - trans_pgd = (void *)get_safe_page(GFP_ATOMIC); - if (!trans_pgd) - return -ENOMEM; - - rc = trans_pgd_map_page(&trans_info, trans_pgd, page, dst_addr, - PAGE_KERNEL_EXEC); + rc = trans_pgd_idmap_page(&trans_info, &trans_ttbr0, &t0sz, page); if (rc) return rc; @@ -233,12 +227,15 @@ static int create_safe_exec_page(void *src_start, size_t length, * page, but TLBs may contain stale ASID-tagged entries (e.g. for EFI * runtime services), while for a userspace-driven test_resume cycle it * points to userspace page tables (and we must point it at a zero page - * ourselves). Elsewhere we only (un)install the idmap with preemption - * disabled, so T0SZ should be as required regardless. + * ourselves). + * + * We change T0SZ as part of installing the idmap. This is undone by + * cpu_uninstall_idmap() in __cpu_suspend_exit(). */ cpu_set_reserved_ttbr0(); local_flush_tlb_all(); - write_sysreg(phys_to_ttbr(virt_to_phys(trans_pgd)), ttbr0_el1); + __cpu_set_tcr_t0sz(t0sz); + write_sysreg(trans_ttbr0, ttbr0_el1); isb(); *phys_dst_addr = virt_to_phys(page); @@ -319,7 +316,6 @@ int swsusp_arch_resume(void) void *zero_page; size_t exit_size; pgd_t *tmp_pg_dir; - phys_addr_t phys_hibernate_exit; void __noreturn (*hibernate_exit)(phys_addr_t, phys_addr_t, void *, void *, phys_addr_t, phys_addr_t); struct trans_pgd_info trans_info = { @@ -347,19 +343,13 @@ int swsusp_arch_resume(void) return -ENOMEM; } - /* - * Locate the exit code in the bottom-but-one page, so that *NULL - * still has disastrous affects. 
- */ - hibernate_exit = (void *)PAGE_SIZE; exit_size = __hibernate_exit_text_end - __hibernate_exit_text_start; /* * Copy swsusp_arch_suspend_exit() to a safe page. This will generate * a new set of ttbr0 page tables and load them. */ rc = create_safe_exec_page(__hibernate_exit_text_start, exit_size, - (unsigned long)hibernate_exit, - &phys_hibernate_exit); + (phys_addr_t *)&hibernate_exit); if (rc) { pr_err("Failed to create safe executable page for hibernate_exit code.\n"); return rc; @@ -378,7 +368,7 @@ int swsusp_arch_resume(void) * We can skip this step if we booted at EL1, or are running with VHE. */ if (el2_reset_needed()) { - phys_addr_t el2_vectors = phys_hibernate_exit; /* base */ + phys_addr_t el2_vectors = (phys_addr_t)hibernate_exit; el2_vectors += hibernate_el2_vectors - __hibernate_exit_text_start; /* offset */ diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c index 37d7d1c60f65..c2517d1af2af 100644 --- a/arch/arm64/mm/trans_pgd.c +++ b/arch/arm64/mm/trans_pgd.c @@ -242,3 +242,52 @@ int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd, return 0; } + +/* + * The page we want to idmap may be outside the range covered by VA_BITS that + * can be built using the kernel's p?d_populate() helpers. As a one off, for a + * single page, we build these page tables bottom up and just assume that will + * need the maximum T0SZ. + * + * Returns 0 on success, and -ENOMEM on failure. + * On success trans_ttbr0 contains page table with idmapped page, t0sz is set to + * maxumum T0SZ for this page. + */ +int trans_pgd_idmap_page(struct trans_pgd_info *info, phys_addr_t *trans_ttbr0, + unsigned long *t0sz, void *page) +{ + phys_addr_t dst_addr = virt_to_phys(page); + unsigned long pfn = __phys_to_pfn(dst_addr); + int max_msb = (dst_addr & GENMASK(52, 48)) ? 
51 : 47; + int bits_mapped = PAGE_SHIFT - 4; + unsigned long level_mask, prev_level_entry, *levels[4]; + int this_level, index, level_lsb, level_msb; + + dst_addr &= PAGE_MASK; + prev_level_entry = pte_val(pfn_pte(pfn, PAGE_KERNEL_EXEC)); + + for (this_level = 3; this_level >= 0; this_level--) { + levels[this_level] = trans_alloc(info); + if (!levels[this_level]) + return -ENOMEM; + + level_lsb = ARM64_HW_PGTABLE_LEVEL_SHIFT(this_level); + level_msb = min(level_lsb + bits_mapped, max_msb); + level_mask = GENMASK_ULL(level_msb, level_lsb); + + index = (dst_addr & level_mask) >> level_lsb; + *(levels[this_level] + index) = prev_level_entry; + + pfn = virt_to_pfn(levels[this_level]); + prev_level_entry = pte_val(pfn_pte(pfn, + __pgprot(PMD_TYPE_TABLE))); + + if (level_msb == max_msb) + break; + } + + *trans_ttbr0 = phys_to_ttbr(__pfn_to_phys(pfn)); + *t0sz = TCR_T0SZ(max_msb + 1); + + return 0; +} From patchwork Thu Mar 26 03:24:10 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pasha Tatashin X-Patchwork-Id: 11459117 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id CAF141668 for ; Thu, 26 Mar 2020 03:24:44 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 898DE208E4 for ; Thu, 26 Mar 2020 03:24:44 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=soleen.com header.i=@soleen.com header.b="a8sWRWxY" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 898DE208E4 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=soleen.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id AB93B6B006C; Wed, 25 Mar 2020 23:24:36 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 9F3756B006E; Wed, 25 Mar 2020 23:24:36 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 8E6786B0070; Wed, 25 Mar 2020 23:24:36 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0170.hostedemail.com [216.40.44.170]) by kanga.kvack.org (Postfix) with ESMTP id 762396B006C for ; Wed, 25 Mar 2020 23:24:36 -0400 (EDT) Received: from smtpin16.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 3DBF1180AD811 for ; Thu, 26 Mar 2020 03:24:36 +0000 (UTC) X-FDA: 76636070952.16.cream55_b0e401162505 X-Spam-Summary: 2,0,0,7fc7e31db7f7aa2a,d41d8cd98f00b204,pasha.tatashin@soleen.com,,RULES_HIT:41:69:355:379:541:800:960:968:973:988:989:1260:1345:1359:1381:1437:1535:1543:1711:1730:1747:1777:1792:2393:2553:2559:2562:2693:3138:3139:3140:3141:3142:3354:3622:3865:3866:3867:3868:3870:3871:3872:4118:4250:4321:4605:5007:6261:6653:6737:6738:7875:7903:9592:10004:11026:11657:11658:11914:12043:12048:12294:12296:12297:12438:12517:12519:12555:12895:14096:14181:14394:14721:21063:21080:21444:21611:21627:21990:30054:30056:30079:30090,0,RBL:209.85.160.195:@soleen.com:.lbl8.mailshell.net-62.2.0.100 66.100.201.201,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:24,LUA_SUMMARY:none X-HE-Tag: 
cream55_b0e401162505 X-Filterd-Recvd-Size: 7125 Received: from mail-qt1-f195.google.com (mail-qt1-f195.google.com [209.85.160.195]) by imf31.hostedemail.com (Postfix) with ESMTP for ; Thu, 26 Mar 2020 03:24:35 +0000 (UTC) Received: by mail-qt1-f195.google.com with SMTP id m33so4205244qtb.3 for ; Wed, 25 Mar 2020 20:24:35 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=soleen.com; s=google; h=from:to:subject:date:message-id:in-reply-to:references; bh=QhNz775VKRnk5y9bcc8c5HYbQNDUUf5cmSRr9I0ZSxg=; b=a8sWRWxYc+7MDMvMTEM9/sXq9DGPEMdBEdB0YPEk0ZriOtW7/957gv/EmOOnZXedgr wmJZXMMMPUhBteuPtM2M7iVfIGbzUKmsr2QuafhbN2axLwdDHRrVA4yVO4zJ3oYQrJ6o tuKWzNBUUTh+Mnl6qtByzo0lWeIQmL87AQ1IND0UGfGBiwR2m9eZwbhWuR+rZeuZg972 tmml4/hrYoRrChsZKE1obNpDjGOOpL24yaRLbLxEJGDFtwRexNWZBlI0J6r18eXHrfp8 afChEfRAX/ZfCaJVzRZn/4QwLQ0/S5aAWuM/OFcR7828eDXTuaW3XbxlnjkO4smbWDBN nhjQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:subject:date:message-id:in-reply-to :references; bh=QhNz775VKRnk5y9bcc8c5HYbQNDUUf5cmSRr9I0ZSxg=; b=V0DhCCUraI/FNuJY0YspdsjRtn9hiUFFWfB7AJMU60FFCfu9xaxYrOSp+BELDreI6u 6PibSTfbIApsSLhaoxYh59tezeYZU42TGi65zDh/rN/KTAqCn1Lc2XEGmGQS1CTJC06L M+pq+KIR1HyulKnOO1e3aFPikqb/G4Ec9YEreeJbkAJCh2b1TRNePA4P/nBnJ3Muz2JT e02hNmgV+fJWDCMPktl7f+QhVDzQJuiSrkcXSu4//wyZaec4jJRJy8HL6iaZ2QDfbXsi /VHx5nZt2To67KMj7pTKjH11EaM2sPanqxLgBGNLMWC2S70jEshLxJmVMc1xqhEcNp+9 K/CQ== X-Gm-Message-State: ANhLgQ3CIVsp051m1T1prTRbaDFaY/3aobPxnlVzvihCtqbFfL6nm1hF IE/JeT6PFtOtjYsojRT5IlaHOw== X-Google-Smtp-Source: ADFU+vu69T1k3xDySQS9u2aotpb2qZ5XiyzNbLLTqTd9rKOnfFUHIbi50Tmyv1njzuyUGzMuj/ak5A== X-Received: by 2002:aed:3607:: with SMTP id e7mr5922168qtb.316.1585193075249; Wed, 25 Mar 2020 20:24:35 -0700 (PDT) Received: from localhost.localdomain (c-73-69-118-222.hsd1.nh.comcast.net. [73.69.118.222]) by smtp.gmail.com with ESMTPSA id u4sm620034qka.35.2020.03.25.20.24.33 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 25 Mar 2020 20:24:34 -0700 (PDT) From: Pavel Tatashin To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org, ebiederm@xmission.com, kexec@lists.infradead.org, linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com, will@kernel.org, linux-arm-kernel@lists.infradead.org, maz@kernel.org, james.morse@arm.com, vladimir.murzin@arm.com, matthias.bgg@gmail.com, bhsharma@redhat.com, linux-mm@kvack.org, mark.rutland@arm.com, steve.capper@arm.com, rfontana@redhat.com, tglx@linutronix.de, selindag@gmail.com Subject: [PATCH v9 08/18] arm64: kexec: move relocation function setup Date: Wed, 25 Mar 2020 23:24:10 -0400 Message-Id: <20200326032420.27220-9-pasha.tatashin@soleen.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200326032420.27220-1-pasha.tatashin@soleen.com> References: <20200326032420.27220-1-pasha.tatashin@soleen.com> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Currently, kernel relocation function is configured in machine_kexec() at the time of kexec reboot by using control_code_page. This operation, however, is more logical to be done during kexec_load, and thus remove from reboot time. Move, setup of this function to newly added machine_kexec_post_load(). Because once MMU is enabled, kexec control page will contain more than relocation kernel, but also vector table, add pointer to the actual function within this page arch.kern_reloc. 
Currently, it equals to the beginning of page, we will add offsets later, when vector table is added. Signed-off-by: Pavel Tatashin Reviewed-by: James Morse --- arch/arm64/include/asm/kexec.h | 1 + arch/arm64/kernel/machine_kexec.c | 27 ++++++++++++++------------- 2 files changed, 15 insertions(+), 13 deletions(-) diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h index 61530ec3a9b1..9befcd87e9a8 100644 --- a/arch/arm64/include/asm/kexec.h +++ b/arch/arm64/include/asm/kexec.h @@ -95,6 +95,7 @@ static inline void crash_post_resume(void) {} struct kimage_arch { void *dtb; phys_addr_t dtb_mem; + phys_addr_t kern_reloc; /* Core ELF header buffer */ void *elf_headers; unsigned long elf_headers_mem; diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c index ae1bad0156cd..ec71a153cc2d 100644 --- a/arch/arm64/kernel/machine_kexec.c +++ b/arch/arm64/kernel/machine_kexec.c @@ -42,6 +42,7 @@ static void _kexec_image_info(const char *func, int line, pr_debug(" start: %lx\n", kimage->start); pr_debug(" head: %lx\n", kimage->head); pr_debug(" nr_segments: %lu\n", kimage->nr_segments); + pr_debug(" kern_reloc: %pa\n", &kimage->arch.kern_reloc); for (i = 0; i < kimage->nr_segments; i++) { pr_debug(" segment[%lu]: %016lx - %016lx, 0x%lx bytes, %lu pages\n", @@ -58,6 +59,17 @@ void machine_kexec_cleanup(struct kimage *kimage) /* Empty routine needed to avoid build errors. */ } +int machine_kexec_post_load(struct kimage *kimage) +{ + void *reloc_code = page_to_virt(kimage->control_code_page); + + memcpy(reloc_code, arm64_relocate_new_kernel, + arm64_relocate_new_kernel_size); + kimage->arch.kern_reloc = __pa(reloc_code); + + return 0; +} + /** * machine_kexec_prepare - Prepare for a kexec reboot. * @@ -143,8 +155,7 @@ static void kexec_segment_flush(const struct kimage *kimage) */ void machine_kexec(struct kimage *kimage) { - phys_addr_t reboot_code_buffer_phys; - void *reboot_code_buffer; + void *reboot_code_buffer = page_to_virt(kimage->control_code_page); bool in_kexec_crash = (kimage == kexec_crash_image); bool stuck_cpus = cpus_are_stuck_in_kernel(); @@ -155,18 +166,8 @@ void machine_kexec(struct kimage *kimage) WARN(in_kexec_crash && (stuck_cpus || smp_crash_stop_failed()), "Some CPUs may be stale, kdump will be unreliable.\n"); - reboot_code_buffer_phys = page_to_phys(kimage->control_code_page); - reboot_code_buffer = phys_to_virt(reboot_code_buffer_phys); - kexec_image_info(kimage); - /* - * Copy arm64_relocate_new_kernel to the reboot_code_buffer for use - * after the kernel is shut down. - */ - memcpy(reboot_code_buffer, arm64_relocate_new_kernel, - arm64_relocate_new_kernel_size); - /* Flush the reboot_code_buffer in preparation for its execution. */ __flush_dcache_area(reboot_code_buffer, arm64_relocate_new_kernel_size); @@ -202,7 +203,7 @@ void machine_kexec(struct kimage *kimage) * userspace (kexec-tools). * In kexec_file case, the kernel starts directly without purgatory. */ - cpu_soft_restart(reboot_code_buffer_phys, kimage->head, kimage->start, + cpu_soft_restart(kimage->arch.kern_reloc, kimage->head, kimage->start, kimage->arch.dtb_mem); BUG(); /* Should never get here. 
*/ From patchwork Thu Mar 26 03:24:11 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pasha Tatashin X-Patchwork-Id: 11459119 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 032C4161F for ; Thu, 26 Mar 2020 03:24:47 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id C522F20737 for ; Thu, 26 Mar 2020 03:24:46 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=soleen.com header.i=@soleen.com header.b="cwhHLgf1" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org C522F20737 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=soleen.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 341006B006E; Wed, 25 Mar 2020 23:24:38 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 27A626B0070; Wed, 25 Mar 2020 23:24:38 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 140EC6B0071; Wed, 25 Mar 2020 23:24:38 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0108.hostedemail.com [216.40.44.108]) by kanga.kvack.org (Postfix) with ESMTP id E9B546B006E for ; Wed, 25 Mar 2020 23:24:37 -0400 (EDT) Received: from smtpin05.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id B0F584857 for ; Thu, 26 Mar 2020 03:24:37 +0000 (UTC) X-FDA: 76636070994.05.pump71_b43da2be9e12 X-Spam-Summary: 2,0,0,6816938573f5e4ce,d41d8cd98f00b204,pasha.tatashin@soleen.com,,RULES_HIT:41:355:379:541:800:960:973:988:989:1260:1345:1359:1381:1437:1534:1541:1711:1730:1747:1777:1792:2393:2553:2559:2562:2693:3138:3139:3140:3141:3142:3352:3622:3865:3866:3867:3868:3870:3871:3872:4321:4605:5007:6261:6653:6737:6738:8660:9592:10004:11026:11473:11658:11914:12043:12048:12294:12297:12438:12517:12519:12555:12895:12986:13069:13148:13230:13311:13357:14181:14384:14394:14721:21080:21212:21444:21627:21990:30025:30054:30079:30090,0,RBL:209.85.219.67:@soleen.com:.lbl8.mailshell.net-62.2.0.100 66.100.201.201,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:23,LUA_SUMMARY:none X-HE-Tag: pump71_b43da2be9e12 X-Filterd-Recvd-Size: 4802 Received: from mail-qv1-f67.google.com (mail-qv1-f67.google.com [209.85.219.67]) by imf46.hostedemail.com (Postfix) with ESMTP for ; Thu, 26 Mar 2020 03:24:37 +0000 (UTC) Received: by mail-qv1-f67.google.com with SMTP id o7so2268694qvq.8 for ; Wed, 25 Mar 2020 20:24:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=soleen.com; s=google; h=from:to:subject:date:message-id:in-reply-to:references; bh=pWO3w3fg/bu1qLpz2kK7AN0HJrvbrhfCZrrLUhDFmXA=; b=cwhHLgf1ShP+9c8KtQ5dZpXfzLqFYUBpwqKsV3HZaplJ84EIJbdpHaM6NuJ2wlnEzt m08QUXtYf36hyXSeXnPsTpKXEJnJhT/HEeBpgHQpag3F8caUMtXyruGEEUxEb9VuT53N xhgjzaa00d1RLd/RIwIljbmeaPfUPwnzdN8U/AzmGBsITwwFUjooj8dcTZl2uUgUmyFv 8LXYv02eoJxsRq0Y42HhNBBm7SHwtS2dGd9g791Nlg3U+3IAuZ5tjDGYoG8eWK8KIIfT 30wbQFnx+teO+i/+te5evTv88yW+bxFAV8lL1oHLvRGfQspHQWgQ6RuRbdApM1rSjQLX OcZg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; 
c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:subject:date:message-id:in-reply-to :references; bh=pWO3w3fg/bu1qLpz2kK7AN0HJrvbrhfCZrrLUhDFmXA=; b=REJzur9HZ7tAEkkxrDGbhcf19pbAEawU7H3V/q7kuVcLqq3PFNsf777n5NGgQpVanz lGuQuqCoH19iSZT+qi1xfvg2h0bU+O+RTdi+nRewSSE0zPAu74oeGrZSzV+EcprovQHr BS4h/9GbS3hHjRrDtq5NRpSat7ZZ74YLYaRmUdiASeBfCSKsjibUge9HncT34zL/cp5x Y9recnwGJhQccjsOCzd+C3NCUoRJ9QZTW4LUI/vjl8DNsxUwMKBuwPXlotyJcVyPaudM UbNzpjYZfTTqgygbDed+llZfYZlo18tGdrgBSh2kqmINzWJ6UmpBy900qjplv0mvLDyM 1BwA== X-Gm-Message-State: ANhLgQ28aMCSASlg+ktfGygCscCT4QBn78UDGAf0HQScRomfdha1Ldg4 HVLQnP4nk8cSj8tXOCkaDGa5QQ== X-Google-Smtp-Source: ADFU+vv/k/sLcr5Tyhr8JUG2Ty25MeyzImC89QBOsiqE8u6hhBIFLVdfdL3XZrToBgUbETDslMnrJw== X-Received: by 2002:a0c:fd6b:: with SMTP id k11mr5890909qvs.99.1585193076738; Wed, 25 Mar 2020 20:24:36 -0700 (PDT) Received: from localhost.localdomain (c-73-69-118-222.hsd1.nh.comcast.net. [73.69.118.222]) by smtp.gmail.com with ESMTPSA id u4sm620034qka.35.2020.03.25.20.24.35 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 25 Mar 2020 20:24:36 -0700 (PDT) From: Pavel Tatashin To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org, ebiederm@xmission.com, kexec@lists.infradead.org, linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com, will@kernel.org, linux-arm-kernel@lists.infradead.org, maz@kernel.org, james.morse@arm.com, vladimir.murzin@arm.com, matthias.bgg@gmail.com, bhsharma@redhat.com, linux-mm@kvack.org, mark.rutland@arm.com, steve.capper@arm.com, rfontana@redhat.com, tglx@linutronix.de, selindag@gmail.com Subject: [PATCH v9 09/18] arm64: kexec: call kexec_image_info only once Date: Wed, 25 Mar 2020 23:24:11 -0400 Message-Id: <20200326032420.27220-10-pasha.tatashin@soleen.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200326032420.27220-1-pasha.tatashin@soleen.com> References: <20200326032420.27220-1-pasha.tatashin@soleen.com> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Currently, kexec_image_info() is called during load time, and right before kernel is being kexec'ed. There is no need to do both. So, call it only once when segments are loaded and the physical location of page with copy of arm64_relocate_new_kernel is known. Signed-off-by: Pavel Tatashin Acked-by: James Morse --- arch/arm64/kernel/machine_kexec.c | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c index ec71a153cc2d..cee3be586384 100644 --- a/arch/arm64/kernel/machine_kexec.c +++ b/arch/arm64/kernel/machine_kexec.c @@ -66,6 +66,7 @@ int machine_kexec_post_load(struct kimage *kimage) memcpy(reloc_code, arm64_relocate_new_kernel, arm64_relocate_new_kernel_size); kimage->arch.kern_reloc = __pa(reloc_code); + kexec_image_info(kimage); return 0; } @@ -79,8 +80,6 @@ int machine_kexec_post_load(struct kimage *kimage) */ int machine_kexec_prepare(struct kimage *kimage) { - kexec_image_info(kimage); - if (kimage->type != KEXEC_TYPE_CRASH && cpus_are_stuck_in_kernel()) { pr_err("Can't kexec: CPUs are stuck in the kernel.\n"); return -EBUSY; @@ -166,8 +165,6 @@ void machine_kexec(struct kimage *kimage) WARN(in_kexec_crash && (stuck_cpus || smp_crash_stop_failed()), "Some CPUs may be stale, kdump will be unreliable.\n"); - kexec_image_info(kimage); - /* Flush the reboot_code_buffer in preparation for its execution. 
*/ __flush_dcache_area(reboot_code_buffer, arm64_relocate_new_kernel_size); From patchwork Thu Mar 26 03:24:12 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pasha Tatashin X-Patchwork-Id: 11459121 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 532911668 for ; Thu, 26 Mar 2020 03:24:49 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 16C192074D for ; Thu, 26 Mar 2020 03:24:49 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=soleen.com header.i=@soleen.com header.b="W08QSBFe" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 16C192074D Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=soleen.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 9CD696B0070; Wed, 25 Mar 2020 23:24:39 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 9819C6B0071; Wed, 25 Mar 2020 23:24:39 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 7D1416B0072; Wed, 25 Mar 2020 23:24:39 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0093.hostedemail.com [216.40.44.93]) by kanga.kvack.org (Postfix) with ESMTP id 6135E6B0070 for ; Wed, 25 Mar 2020 23:24:39 -0400 (EDT) Received: from smtpin14.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 32FD352B7 for ; Thu, 26 Mar 2020 03:24:39 +0000 (UTC) X-FDA: 76636071078.14.cover26_b7beb0eb7e63 X-Spam-Summary: 2,0,0,6b25263e18d35b1f,d41d8cd98f00b204,pasha.tatashin@soleen.com,,RULES_HIT:41:355:379:541:800:960:973:982:988:989:1260:1345:1359:1381:1431:1437:1534:1541:1711:1730:1747:1777:1792:2393:2559:2562:3138:3139:3140:3141:3142:3352:3867:4321:4605:5007:6261:6642:6653:6737:6738:10004:11026:11233:11658:11914:12043:12048:12297:12438:12517:12519:12555:12895:12986:13069:13311:13357:14096:14181:14384:14394:14721:21080:21444:21451:21627:30054:30070,0,RBL:209.85.160.194:@soleen.com:.lbl8.mailshell.net-66.100.201.201 62.2.0.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:23,LUA_SUMMARY:none X-HE-Tag: cover26_b7beb0eb7e63 X-Filterd-Recvd-Size: 4573 Received: from mail-qt1-f194.google.com (mail-qt1-f194.google.com [209.85.160.194]) by imf29.hostedemail.com (Postfix) with ESMTP for ; Thu, 26 Mar 2020 03:24:38 +0000 (UTC) Received: by mail-qt1-f194.google.com with SMTP id 10so4219991qtp.1 for ; Wed, 25 Mar 2020 20:24:38 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=soleen.com; s=google; h=from:to:subject:date:message-id:in-reply-to:references; bh=d2Tg6DUqu4h4AoeF1TJqlJ7g20oT0br2Hcwg7d/p1G4=; b=W08QSBFeQAtNj54YvzgesJNcVcH7833kHC2Us83ZevVOe7Do2dNazfnDt+CowHHQIC faz2AvdaBb/yLUxNw0/J44AfAn28mCBeR6ZDqxdwi8lE6AphaLIb5sZ6jxgB7vY9oOIk WirTf2XUGuqGE0ims6heIVFQaZ1NjjK4vA/HACW8LgMbhJcnRZ8fO4gf3Ukmff15jWgO itOmuS1IhfrI1BHKq+fLFhc1JPpepcBtDb/iE42VAxTYZOSl8arS8TAxmsuXEpTMt1T1 QX+gB0l2JZ6f1BbyCORa0271cXO+lTRznlXSB/d/5l8qUgKci6e0pTZFXHFGgQZwxPKx G61w== X-Google-DKIM-Signature: v=1; 
a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:subject:date:message-id:in-reply-to :references; bh=d2Tg6DUqu4h4AoeF1TJqlJ7g20oT0br2Hcwg7d/p1G4=; b=tWLavW5a5sjp7XF95e5Z5byZD0fyp1LX6sPgZbd93D5rv2FmMR+wrWv6ds4D7V0UqD ykgjfHrujg6eeecTez7Ssmx5mU7QB4w97QCUFQi97yna1mV54Z/GbuHClNgsioYs/SLU q1z7Ft8QPDTYd+bPb+Z+4g0xO2kgYcolE5NcK7ASfZxI0eOhgqEkKyR1pFM7aoFLmA5a bDaTxONcw/CpN+sCTiu0vMivUr7HhyoUJ/mBBaCQFSqhZl14oer9yRYv6PZ25V8KlXO1 x62WaxbXCl1WMEVVzk/urtYFEs3McousDLF0QGieK1c/jHdBzaq5bJfAEkXmyZi22WeS +6og== X-Gm-Message-State: ANhLgQ2EjjYaWFybAUKWqk6PAUfcseX5wrQoygU4y2XwBm4GCbaoQH2g Ru2oCU0jX7fQYlL8GJs6AQRk2g== X-Google-Smtp-Source: ADFU+vtNI1N6KYYNTKClbFa5p1eL7S/68psb1NMui9aroUYD5+m0J4zDvXEIXwzpeJXNUnHnRvaNyA== X-Received: by 2002:ac8:6edc:: with SMTP id f28mr3781501qtv.271.1585193078245; Wed, 25 Mar 2020 20:24:38 -0700 (PDT) Received: from localhost.localdomain (c-73-69-118-222.hsd1.nh.comcast.net. [73.69.118.222]) by smtp.gmail.com with ESMTPSA id u4sm620034qka.35.2020.03.25.20.24.36 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 25 Mar 2020 20:24:37 -0700 (PDT) From: Pavel Tatashin To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org, ebiederm@xmission.com, kexec@lists.infradead.org, linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com, will@kernel.org, linux-arm-kernel@lists.infradead.org, maz@kernel.org, james.morse@arm.com, vladimir.murzin@arm.com, matthias.bgg@gmail.com, bhsharma@redhat.com, linux-mm@kvack.org, mark.rutland@arm.com, steve.capper@arm.com, rfontana@redhat.com, tglx@linutronix.de, selindag@gmail.com Subject: [PATCH v9 10/18] arm64: kexec: cpu_soft_restart change argument types Date: Wed, 25 Mar 2020 23:24:12 -0400 Message-Id: <20200326032420.27220-11-pasha.tatashin@soleen.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200326032420.27220-1-pasha.tatashin@soleen.com> References: <20200326032420.27220-1-pasha.tatashin@soleen.com> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Change argument types from unsigned long to a more descriptive phys_addr_t. 
Signed-off-by: Pavel Tatashin --- arch/arm64/kernel/cpu-reset.h | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/arch/arm64/kernel/cpu-reset.h b/arch/arm64/kernel/cpu-reset.h index ed50e9587ad8..38cbd4068019 100644 --- a/arch/arm64/kernel/cpu-reset.h +++ b/arch/arm64/kernel/cpu-reset.h @@ -10,17 +10,17 @@ #include -void __cpu_soft_restart(unsigned long el2_switch, unsigned long entry, - unsigned long arg0, unsigned long arg1, unsigned long arg2); +void __cpu_soft_restart(phys_addr_t el2_switch, phys_addr_t entry, + phys_addr_t arg0, phys_addr_t arg1, phys_addr_t arg2); -static inline void __noreturn cpu_soft_restart(unsigned long entry, - unsigned long arg0, - unsigned long arg1, - unsigned long arg2) +static inline void __noreturn cpu_soft_restart(phys_addr_t entry, + phys_addr_t arg0, + phys_addr_t arg1, + phys_addr_t arg2) { typeof(__cpu_soft_restart) *restart; - unsigned long el2_switch = !is_kernel_in_hyp_mode() && + phys_addr_t el2_switch = !is_kernel_in_hyp_mode() && is_hyp_mode_available(); restart = (void *)__pa_symbol(__cpu_soft_restart); From patchwork Thu Mar 26 03:24:13 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pasha Tatashin X-Patchwork-Id: 11459123 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9D4E017EA for ; Thu, 26 Mar 2020 03:24:51 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 5E74120737 for ; Thu, 26 Mar 2020 03:24:51 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=soleen.com header.i=@soleen.com header.b="hZ1nGrOx" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 5E74120737 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=soleen.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 279F16B0071; Wed, 25 Mar 2020 23:24:41 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 22B2E6B0072; Wed, 25 Mar 2020 23:24:41 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 119006B0073; Wed, 25 Mar 2020 23:24:41 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0225.hostedemail.com [216.40.44.225]) by kanga.kvack.org (Postfix) with ESMTP id EE4EA6B0071 for ; Wed, 25 Mar 2020 23:24:40 -0400 (EDT) Received: from smtpin30.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id BB25A4857 for ; Thu, 26 Mar 2020 03:24:40 +0000 (UTC) X-FDA: 76636071120.30.bells05_bb5e2d92775b X-Spam-Summary: 
2,0,0,f7dd49bd1bcf9985,d41d8cd98f00b204,pasha.tatashin@soleen.com,,RULES_HIT:41:69:355:379:421:541:800:960:968:973:988:989:1260:1345:1359:1381:1431:1437:1500:1535:1544:1605:1711:1730:1747:1777:1792:2194:2199:2393:2553:2559:2562:2693:3138:3139:3140:3141:3142:3865:3866:3867:3868:3870:3871:3872:3874:4118:4250:4321:4419:5007:6117:6119:6261:6653:6737:6738:7875:7903:8603:8660:9010:9592:10004:10226:11026:11473:11658:11914:12043:12048:12257:12295:12296:12297:12438:12517:12519:12555:12895:12986:13148:13230:14181:14394:14721:21063:21080:21324:21325:21444:21451:21627:21740:21990:30003:30012:30025:30034:30054:30070:30079:30090,0,RBL:209.85.219.66:@soleen.com:.lbl8.mailshell.net-62.2.0.100 66.100.201.201,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:22,LUA_SUMMARY:none X-HE-Tag: bells05_bb5e2d92775b X-Filterd-Recvd-Size: 7485 Received: from mail-qv1-f66.google.com (mail-qv1-f66.google.com [209.85.219.66]) by imf09.hostedemail.com (Postfix) with ESMTP for ; Thu, 26 Mar 2020 03:24:40 +0000 (UTC) Received: by mail-qv1-f66.google.com with SMTP id c28so2259278qvb.10 for ; Wed, 25 Mar 2020 20:24:40 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=soleen.com; s=google; h=from:to:subject:date:message-id:in-reply-to:references; bh=1sXDoN/0mmZM9G5VQCpkVjfGjiuUeqm4crDPDAsOHEU=; b=hZ1nGrOxfoCkdXKc9+oZYoV6RJjuD0p9MZ71hvhqEwktdZnAwhR8USNLiMMRIzSvr8 4dAJ/Qng+EqH/L8T2YvB4diRuMDjG8aZxC4mjh9wwPRlw8w9bC1mrL4ws1P1gNp1rrFC +6eJ30RkDIe+pX9uULUN79m18FbctcvvLI2V9cXNwu8BPRGzawYOQHRShV8Zj6cPtESq GiSdQsysD/DhROSextRE98zpT3/oBWuPBIYd6SHJz6h1TTiDD4AXO0ACo8fzX9tda8qI oBpxusITsP9ivVZ3u/IZXPd6nii4qA0Rm9hatT59/XhM1por3EzgpjRaN/m7SLvbs7Sv rZJw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:subject:date:message-id:in-reply-to :references; bh=1sXDoN/0mmZM9G5VQCpkVjfGjiuUeqm4crDPDAsOHEU=; b=b9w2yW4yZ1dAcTiG422sdkZlBHzO57IQ10WkKeigkiq7I0bs+IHfWD9QO2eiK6Iyg1 UHcBgccd3yp3bI1jhBdO6plPoBEdADekVTQ5FpgZ25+DZFQNx1u+Y0hi8a9nM+I5lq8I brI0FKp291iUNddkugHTVDSHgKNjE/bSB5hOS1w5ToBc6Ucl5I2xnvX1nZh0jOlNSsge CqKlV9NAVTzJZKpY83cEYdk6+Em0gmvf9ctvqLs5BPAXR0Z0RfQ3QUIxfc9sSjE2MVcp qdY3P86EHGJyj5DtJURQV7W7usE/1UbemEwuDREeCz2Undc+8rU/0Vx8kPLQnMczDUCV I5Jw== X-Gm-Message-State: ANhLgQ1PGFJK7z7Vm8io/e+uaTQcJjl6Ol3rqBxIPKjQSLgadHIBFhJ8 zXiykkkSP8ZLSVMwewQ5QjBNHZlEl+o= X-Google-Smtp-Source: ADFU+vuHBal1QvkuYASS6ow0XdJSQnH9DVYSoMN2LDG8Rk9VPDx+WBVW2GkTO+Nv+BgfPUM8flR07w== X-Received: by 2002:a05:6214:11f4:: with SMTP id e20mr6354087qvu.66.1585193079735; Wed, 25 Mar 2020 20:24:39 -0700 (PDT) Received: from localhost.localdomain (c-73-69-118-222.hsd1.nh.comcast.net. 
[73.69.118.222]) by smtp.gmail.com with ESMTPSA id u4sm620034qka.35.2020.03.25.20.24.38 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 25 Mar 2020 20:24:39 -0700 (PDT) From: Pavel Tatashin To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org, ebiederm@xmission.com, kexec@lists.infradead.org, linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com, will@kernel.org, linux-arm-kernel@lists.infradead.org, maz@kernel.org, james.morse@arm.com, vladimir.murzin@arm.com, matthias.bgg@gmail.com, bhsharma@redhat.com, linux-mm@kvack.org, mark.rutland@arm.com, steve.capper@arm.com, rfontana@redhat.com, tglx@linutronix.de, selindag@gmail.com Subject: [PATCH v9 11/18] arm64: kexec: arm64_relocate_new_kernel clean-ups Date: Wed, 25 Mar 2020 23:24:13 -0400 Message-Id: <20200326032420.27220-12-pasha.tatashin@soleen.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200326032420.27220-1-pasha.tatashin@soleen.com> References: <20200326032420.27220-1-pasha.tatashin@soleen.com> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Remove excessive empty lines from arm64_relocate_new_kernel. Also, use comments on the same lines with instructions where appropriate. Change ENDPROC to END it never returns. copy_page(dest, src, tmps...) Increments dest and src by PAGE_SIZE, so no need to store dest prior to calling copy_page and increment it after. Also, src is not used after a copy, not need to copy either. Call raw_dcache_line_size() only when relocation is actually going to happen. Since '.align 3' is intended to align globals at the end of the file, move it there. Signed-off-by: Pavel Tatashin --- arch/arm64/kernel/relocate_kernel.S | 50 +++++++---------------------- 1 file changed, 11 insertions(+), 39 deletions(-) diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S index c1d7db71a726..e9c974ea4717 100644 --- a/arch/arm64/kernel/relocate_kernel.S +++ b/arch/arm64/kernel/relocate_kernel.S @@ -8,7 +8,6 @@ #include #include - #include #include #include @@ -17,25 +16,21 @@ /* * arm64_relocate_new_kernel - Put a 2nd stage image in place and boot it. * - * The memory that the old kernel occupies may be overwritten when coping the + * The memory that the old kernel occupies may be overwritten when copying the * new image to its final location. To assure that the * arm64_relocate_new_kernel routine which does that copy is not overwritten, * all code and data needed by arm64_relocate_new_kernel must be between the * symbols arm64_relocate_new_kernel and arm64_relocate_new_kernel_end. The * machine_kexec() routine will copy arm64_relocate_new_kernel to the kexec - * control_code_page, a special page which has been set up to be preserved - * during the copy operation. + * safe memory that has been set up to be preserved during the copy operation. */ ENTRY(arm64_relocate_new_kernel) - /* Setup the list loop variables. */ mov x18, x2 /* x18 = dtb address */ mov x17, x1 /* x17 = kimage_start */ mov x16, x0 /* x16 = kimage_head */ - raw_dcache_line_size x15, x0 /* x15 = dcache line size */ mov x14, xzr /* x14 = entry ptr */ mov x13, xzr /* x13 = copy dest */ - /* Clear the sctlr_el2 flags. */ mrs x0, CurrentEL cmp x0, #CurrentEL_EL2 @@ -46,14 +41,11 @@ ENTRY(arm64_relocate_new_kernel) pre_disable_mmu_workaround msr sctlr_el2, x0 isb -1: - - /* Check if the new image needs relocation. */ +1: /* Check if the new image needs relocation. 
*/ tbnz x16, IND_DONE_BIT, .Ldone - + raw_dcache_line_size x15, x1 /* x15 = dcache line size */ .Lloop: and x12, x16, PAGE_MASK /* x12 = addr */ - /* Test the entry flags. */ .Ltest_source: tbz x16, IND_SOURCE_BIT, .Ltest_indirection @@ -69,34 +61,18 @@ ENTRY(arm64_relocate_new_kernel) b.lo 2b dsb sy - mov x20, x13 - mov x21, x12 - copy_page x20, x21, x0, x1, x2, x3, x4, x5, x6, x7 - - /* dest += PAGE_SIZE */ - add x13, x13, PAGE_SIZE + copy_page x13, x12, x0, x1, x2, x3, x4, x5, x6, x7 b .Lnext - .Ltest_indirection: tbz x16, IND_INDIRECTION_BIT, .Ltest_destination - - /* ptr = addr */ - mov x14, x12 + mov x14, x12 /* ptr = addr */ b .Lnext - .Ltest_destination: tbz x16, IND_DESTINATION_BIT, .Lnext - - /* dest = addr */ - mov x13, x12 - + mov x13, x12 /* dest = addr */ .Lnext: - /* entry = *ptr++ */ - ldr x16, [x14], #8 - - /* while (!(entry & DONE)) */ - tbz x16, IND_DONE_BIT, .Lloop - + ldr x16, [x14], #8 /* entry = *ptr++ */ + tbz x16, IND_DONE_BIT, .Lloop /* while (!(entry & DONE)) */ .Ldone: /* wait for writes from copy_page to finish */ dsb nsh @@ -110,16 +86,12 @@ ENTRY(arm64_relocate_new_kernel) mov x2, xzr mov x3, xzr br x17 - -ENDPROC(arm64_relocate_new_kernel) - .ltorg - -.align 3 /* To keep the 64-bit values below naturally aligned. */ +END(arm64_relocate_new_kernel) .Lcopy_end: .org KEXEC_CONTROL_PAGE_SIZE - +.align 3 /* To keep the 64-bit values below naturally aligned. */ /* * arm64_relocate_new_kernel_size - Number of bytes to copy to the * control_code_page. From patchwork Thu Mar 26 03:24:14 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pasha Tatashin X-Patchwork-Id: 11459125 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id CD197161F for ; Thu, 26 Mar 2020 03:24:53 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 9AADF20737 for ; Thu, 26 Mar 2020 03:24:53 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=soleen.com header.i=@soleen.com header.b="ORP3E/Zp" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 9AADF20737 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=soleen.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 8E2DC6B0072; Wed, 25 Mar 2020 23:24:42 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 8BA8F6B0073; Wed, 25 Mar 2020 23:24:42 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 7A9D86B0074; Wed, 25 Mar 2020 23:24:42 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0077.hostedemail.com [216.40.44.77]) by kanga.kvack.org (Postfix) with ESMTP id 5E50C6B0072 for ; Wed, 25 Mar 2020 23:24:42 -0400 (EDT) Received: from smtpin14.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 274BD181AD0A5 for ; Thu, 26 Mar 2020 03:24:42 +0000 (UTC) X-FDA: 76636071204.14.net97_bec9e056953f X-Spam-Summary: 
2,0,0,4ee40e8ed6bd320f,d41d8cd98f00b204,pasha.tatashin@soleen.com,,RULES_HIT:41:69:355:379:541:800:960:966:973:988:989:1260:1345:1359:1381:1437:1534:1541:1711:1730:1747:1777:1792:2196:2199:2393:2559:2562:2693:3138:3139:3140:3141:3142:3353:3865:3867:3872:4321:4385:4605:5007:6117:6119:6261:6653:6737:6738:7903:10004:11026:11473:11658:11914:12043:12048:12297:12438:12517:12519:12555:12895:13069:13311:13357:13972:14096:14181:14384:14394:14721:21063:21080:21325:21444:21451:21627:21740:30054:30070:30079,0,RBL:209.85.160.195:@soleen.com:.lbl8.mailshell.net-62.2.0.100 66.100.201.201,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:24,LUA_SUMMARY:none X-HE-Tag: net97_bec9e056953f X-Filterd-Recvd-Size: 4976 Received: from mail-qt1-f195.google.com (mail-qt1-f195.google.com [209.85.160.195]) by imf44.hostedemail.com (Postfix) with ESMTP for ; Thu, 26 Mar 2020 03:24:41 +0000 (UTC) Received: by mail-qt1-f195.google.com with SMTP id f20so4198077qtq.6 for ; Wed, 25 Mar 2020 20:24:41 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=soleen.com; s=google; h=from:to:subject:date:message-id:in-reply-to:references; bh=+QNwRXfDbgmdgwiAWzmJWWFV3AeWwC/BkLzCJNBiX7U=; b=ORP3E/ZpEgE98OIaAPSZGFAcOrHLCH0OZ+6B4Q98GVNlw635eAenr+Qr6UJfPlzHZY zwcjqMKnATW+Tkpb6om6vwznhxUoKQpNie7bBYkPOzThhNddYDKgHOwX1j7FKdprMZ0j slzaC/miPXlNVoqvbDQpi0b1b9HhU+3trTqExs8FfHkN9BTVYlurgItTUgg9mBvJ/4jM 0eHkcBTq+tojnVx3RUoQkWgoO5ZREG01eZNlkd5JbuMNwpM2SmzDSMHd+kD3UHLh17E6 4hX9u/EVkhq/ryWZUCX7I4vUDngpM8dJCfXmmrYvQCPvj4Vx6r9X4yNAf0rTcJ2FxFIa i0bw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:subject:date:message-id:in-reply-to :references; bh=+QNwRXfDbgmdgwiAWzmJWWFV3AeWwC/BkLzCJNBiX7U=; b=hDj2Z/KIEOhDY6VTUvcczWMZA9a+r9c4dOa3YYAQuSXVHJOfW8eBmeROJbMau/KPvd ficS7EdQDB2fES7o9KI2OT5P4jeFU8h53sInBXx0vPR4FUmltxV/o9OfhNWk17dt5++q Q/pUkGUz6wEDLeeK+8Anjv3z7TcgbYofTc4p908PNmurtwMvr6iWO4Ptp75v55eA4ixc ZsTEkBQRQzQgz44V1adqfbgf+yykEqQgUeVq4vdyVPvV3HjhOaFyeVIyMaFI9lvv0mjI zvYmjHUDRAb6lFS4HSsfGK2z+kuHEob5+dxZuWKg+yIpQEjzueuNheX96zaort843pWe KGEQ== X-Gm-Message-State: ANhLgQ2Gf+OLRkc0sLK9pGfACIHq4WhYGCzmDJTPYW9x82ffgvSsT5J4 r/R6pC+gWkDf7kv4hKhk4CrFSw== X-Google-Smtp-Source: ADFU+vtbKYyEZjxHqMRMqVkHzcW5e1lkwXuYNftBecpITpTgyTzGVt02JTYGpZDymXDfWspDslX4ZA== X-Received: by 2002:ac8:4641:: with SMTP id f1mr5868894qto.216.1585193081247; Wed, 25 Mar 2020 20:24:41 -0700 (PDT) Received: from localhost.localdomain (c-73-69-118-222.hsd1.nh.comcast.net. 
[73.69.118.222]) by smtp.gmail.com with ESMTPSA id u4sm620034qka.35.2020.03.25.20.24.39 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 25 Mar 2020 20:24:40 -0700 (PDT) From: Pavel Tatashin To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org, ebiederm@xmission.com, kexec@lists.infradead.org, linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com, will@kernel.org, linux-arm-kernel@lists.infradead.org, maz@kernel.org, james.morse@arm.com, vladimir.murzin@arm.com, matthias.bgg@gmail.com, bhsharma@redhat.com, linux-mm@kvack.org, mark.rutland@arm.com, steve.capper@arm.com, rfontana@redhat.com, tglx@linutronix.de, selindag@gmail.com Subject: [PATCH v9 12/18] arm64: kexec: arm64_relocate_new_kernel don't use x0 as temp Date: Wed, 25 Mar 2020 23:24:14 -0400 Message-Id: <20200326032420.27220-13-pasha.tatashin@soleen.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200326032420.27220-1-pasha.tatashin@soleen.com> References: <20200326032420.27220-1-pasha.tatashin@soleen.com> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: x0 will contain the only argument to arm64_relocate_new_kernel; don't use it as a temp. Reassigned registers to free-up x0. Signed-off-by: Pavel Tatashin Reviewed-by: James Morse --- arch/arm64/kernel/relocate_kernel.S | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S index e9c974ea4717..41f9c95fabe8 100644 --- a/arch/arm64/kernel/relocate_kernel.S +++ b/arch/arm64/kernel/relocate_kernel.S @@ -32,14 +32,14 @@ ENTRY(arm64_relocate_new_kernel) mov x14, xzr /* x14 = entry ptr */ mov x13, xzr /* x13 = copy dest */ /* Clear the sctlr_el2 flags. */ - mrs x0, CurrentEL - cmp x0, #CurrentEL_EL2 + mrs x2, CurrentEL + cmp x2, #CurrentEL_EL2 b.ne 1f - mrs x0, sctlr_el2 + mrs x2, sctlr_el2 ldr x1, =SCTLR_ELx_FLAGS - bic x0, x0, x1 + bic x2, x2, x1 pre_disable_mmu_workaround - msr sctlr_el2, x0 + msr sctlr_el2, x2 isb 1: /* Check if the new image needs relocation. */ tbnz x16, IND_DONE_BIT, .Ldone @@ -51,17 +51,17 @@ ENTRY(arm64_relocate_new_kernel) tbz x16, IND_SOURCE_BIT, .Ltest_indirection /* Invalidate dest page to PoC. 
*/ - mov x0, x13 - add x20, x0, #PAGE_SIZE + mov x2, x13 + add x20, x2, #PAGE_SIZE sub x1, x15, #1 - bic x0, x0, x1 -2: dc ivac, x0 - add x0, x0, x15 - cmp x0, x20 + bic x2, x2, x1 +2: dc ivac, x2 + add x2, x2, x15 + cmp x2, x20 b.lo 2b dsb sy - copy_page x13, x12, x0, x1, x2, x3, x4, x5, x6, x7 + copy_page x13, x12, x1, x2, x3, x4, x5, x6, x7, x8 b .Lnext .Ltest_indirection: tbz x16, IND_INDIRECTION_BIT, .Ltest_destination From patchwork Thu Mar 26 03:24:15 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pasha Tatashin X-Patchwork-Id: 11459127 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id A89AF161F for ; Thu, 26 Mar 2020 03:24:56 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 5BA9B20737 for ; Thu, 26 Mar 2020 03:24:56 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=soleen.com header.i=@soleen.com header.b="VR0f17PV" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 5BA9B20737 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=soleen.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 49C316B0073; Wed, 25 Mar 2020 23:24:44 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 475996B0074; Wed, 25 Mar 2020 23:24:44 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 340486B0075; Wed, 25 Mar 2020 23:24:44 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0093.hostedemail.com [216.40.44.93]) by kanga.kvack.org (Postfix) with ESMTP id 16ADF6B0073 for ; Wed, 25 Mar 2020 23:24:44 -0400 (EDT) Received: from smtpin30.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id E44C8180AD811 for ; Thu, 26 Mar 2020 03:24:43 +0000 (UTC) X-FDA: 76636071246.30.boat64_c29e18cd5d55 X-Spam-Summary: 2,0,0,83a71fc7fd6cfb23,d41d8cd98f00b204,pasha.tatashin@soleen.com,,RULES_HIT:1:2:41:69:355:379:421:541:800:960:966:973:988:989:1260:1345:1359:1381:1431:1437:1605:1730:1747:1777:1792:2194:2196:2199:2200:2393:2559:2562:2693:2892:2896:3138:3139:3140:3141:3142:3622:3865:3866:3867:3868:3870:3871:3872:3874:4052:4250:4321:4385:4605:5007:6117:6119:6261:6653:6737:6738:7875:7903:8603:8784:9010:10004:11026:11473:11657:11658:11914:12043:12048:12291:12296:12297:12438:12517:12519:12555:12679:12895:12986:13149:13161:13229:13230:13972:14394:21063:21080:21325:21433:21444:21451:21627:21796:21939:21990:30003:30025:30034:30036:30045:30054:30056:30070:30075:30079,0,RBL:209.85.160.194:@soleen.com:.lbl8.mailshell.net-66.100.201.201 62.2.0.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:0,LUA_SUMMARY:none X-HE-Tag: boat64_c29e18cd5d55 X-Filterd-Recvd-Size: 13204 Received: from mail-qt1-f194.google.com (mail-qt1-f194.google.com [209.85.160.194]) by imf50.hostedemail.com (Postfix) with ESMTP for ; Thu, 26 Mar 2020 03:24:43 +0000 (UTC) Received: by mail-qt1-f194.google.com with SMTP id m33so4205400qtb.3 for ; Wed, 25 Mar 2020 20:24:43 -0700 (PDT) 
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=soleen.com; s=google; h=from:to:subject:date:message-id:in-reply-to:references; bh=x4L+drChQDoltL6x1b9LQJDDC7YRTzAttIb/9s248/Y=; b=VR0f17PVeFYSQuBjP2w4zyl7OAF3CO0fnKq3h8SjPJLoyYXGWfDFsuo7Ak8XO86gNG IS01mzrMX+asvZ8ygUz2Ln8zY1R8fVf43g9rHRkXd+GUN2GWCvbARr+lgh8t9YZfNWrM 9+mskgu1vkXmh0nslWQcvOGToFibw17g/H15PQ8Z3kLXyW92VEpRI5jLOFvF+LF4tiOw 1PqNNJvfF9drxLKRL0AQeCaqaEGRb0QmF4RHOH3X9ep58G5wvnBjniSxQB2yac8npQoe SjRVSfQpd8Bj7RWbmw7H1yOP5ZAATuf5MQ4IxzicCuZQg/GHrN2Q32WkX++iSUA3fOIr Zi4w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:subject:date:message-id:in-reply-to :references; bh=x4L+drChQDoltL6x1b9LQJDDC7YRTzAttIb/9s248/Y=; b=LqI2hLqWlVtnh1mMKwy8v6gqnvRzCiX15Ng3lBNezkf6R4DaR/gvGwlsBRREgJGlK6 qZRiIHJ3B3ghoJQSDniLeQ/RXZvKLcs/RSCkgQWuCGeV/XYXoT12LwUMfyHe1LR96Hxt zuZJ8oLmBpES2ZVM5Sx3KVRWRqZFSo8yGZZHt1hUzEGbjlh7ECRMSF3PHUSC3kWHcVZz i7N644BwiHdoo/fSUTBY/0GW5DY8qDoOw24OGwMNIZr56DM1o7ELmzW2sitXgmWPo1Hk 3eqpQvIxVzTGySDxKRB8Pc0gNBBYwtZR9ChuuU1+iuUStjGfpFstsBMPv/1vBuZR3G4P mQyg== X-Gm-Message-State: ANhLgQ2GkDzTSNW3H45p/ejm73czPUdwI9v3GGoTq4uqTuNlRLTpikJX edIYAJ0w0g3Fc1pkjlqWpac8mA== X-Google-Smtp-Source: ADFU+vvZDs/QC4GOMC8IZees1GMscfn90p4ttLqLrFxPycdH3JJtfvGL51oGTPiXx0SvfzooYjbXbQ== X-Received: by 2002:ac8:175d:: with SMTP id u29mr6349138qtk.150.1585193082781; Wed, 25 Mar 2020 20:24:42 -0700 (PDT) Received: from localhost.localdomain (c-73-69-118-222.hsd1.nh.comcast.net. [73.69.118.222]) by smtp.gmail.com with ESMTPSA id u4sm620034qka.35.2020.03.25.20.24.41 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 25 Mar 2020 20:24:42 -0700 (PDT) From: Pavel Tatashin To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org, ebiederm@xmission.com, kexec@lists.infradead.org, linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com, will@kernel.org, linux-arm-kernel@lists.infradead.org, maz@kernel.org, james.morse@arm.com, vladimir.murzin@arm.com, matthias.bgg@gmail.com, bhsharma@redhat.com, linux-mm@kvack.org, mark.rutland@arm.com, steve.capper@arm.com, rfontana@redhat.com, tglx@linutronix.de, selindag@gmail.com Subject: [PATCH v9 13/18] arm64: kexec: add expandable argument to relocation function Date: Wed, 25 Mar 2020 23:24:15 -0400 Message-Id: <20200326032420.27220-14-pasha.tatashin@soleen.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200326032420.27220-1-pasha.tatashin@soleen.com> References: <20200326032420.27220-1-pasha.tatashin@soleen.com> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Currently, kexec relocation function (arm64_relocate_new_kernel) accepts the following arguments: head: start of array that contains relocation information. entry: entry point for new kernel or purgatory. dtb_mem: first and only argument to entry. The number of arguments cannot be easily expended, because this function is also called from HVC_SOFT_RESTART, which preserves only three arguments. And, also arm64_relocate_new_kernel is written in assembly but called without stack, thus no place to move extra arguments to free registers. Soon, we will need to pass more arguments: once we enable MMU we will need to pass information about page tables. 
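To make the new calling convention concrete, here is a simplified C sketch of what the machine_kexec_post_load() change below boils down to (the copy of the relocation code itself is left out, the unused kern_arg1..kern_arg3 fields stay zero because kexec_page_alloc() clears the page, and the wrapper name sketch_post_load() is purely illustrative):

	static int sketch_post_load(struct kimage *kimage)
	{
		/* One kexec-safe (control) page holds every argument. */
		struct kern_reloc_arg *arg = kexec_page_alloc(kimage);

		if (!arg)
			return -ENOMEM;

		arg->head       = kimage->head;         /* relocation list      */
		arg->entry_addr = kimage->start;        /* new kernel/purgatory */
		arg->kern_arg0  = kimage->arch.dtb_mem; /* dtb physical address */

		/* Only this physical address has to travel through x0. */
		kimage->arch.kern_reloc_arg = __pa(arg);
		return 0;
	}

The assembly side then loads the individual fields through the KEXEC_KRELOC_* offsets generated in asm-offsets.c, so the soft-restart path only has to preserve the single pointer in x0.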
Another benefit of allowing this function to accept more arguments is that the kernel can actually take up to four arguments (x0-x3). Currently only one is used, but if we ever need more in the future (for example, to pass the time at which the previous kernel exited, so that the time spent in purgatory can be measured precisely), we would not be able to do that easily while arm64_relocate_new_kernel cannot accept more arguments. So, add a new struct, kern_reloc_arg, and place it in a kexec-safe page (i.e. memory that is not overwritten during relocation). Thus, arm64_relocate_new_kernel now takes only one argument, which contains all the needed information. Signed-off-by: Pavel Tatashin --- arch/arm64/include/asm/kexec.h | 18 ++++++++++++++++++ arch/arm64/kernel/asm-offsets.c | 9 +++++++++ arch/arm64/kernel/cpu-reset.S | 11 +++-------- arch/arm64/kernel/cpu-reset.h | 8 +++----- arch/arm64/kernel/machine_kexec.c | 26 ++++++++++++++++++++++++-- arch/arm64/kernel/relocate_kernel.S | 19 ++++++++----------- 6 files changed, 65 insertions(+), 26 deletions(-) diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h index 9befcd87e9a8..990185744148 100644 --- a/arch/arm64/include/asm/kexec.h +++ b/arch/arm64/include/asm/kexec.h @@ -90,12 +90,30 @@ static inline void crash_prepare_suspend(void) {} static inline void crash_post_resume(void) {} #endif +/* + * kern_reloc_arg is passed to kernel relocation function as an argument. + * head kimage->head, allows to traverse through relocation segments. + * entry_addr kimage->start, where to jump from relocation function (new + * kernel, or purgatory entry address). + * kern_arg0 first argument to kernel is its dtb address. The other + * arguments are currently unused, and must be set to 0 + */ +struct kern_reloc_arg { + phys_addr_t head; + phys_addr_t entry_addr; + phys_addr_t kern_arg0; + phys_addr_t kern_arg1; + phys_addr_t kern_arg2; + phys_addr_t kern_arg3; +}; + #define ARCH_HAS_KIMAGE_ARCH struct kimage_arch { void *dtb; phys_addr_t dtb_mem; phys_addr_t kern_reloc; + phys_addr_t kern_reloc_arg; /* Core ELF header buffer */ void *elf_headers; unsigned long elf_headers_mem; diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c index a5bdce8af65b..448230684749 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -23,6 +23,7 @@ #include #include #include +#include int main(void) { @@ -127,6 +128,14 @@ int main(void) #ifdef CONFIG_ARM_SDE_INTERFACE DEFINE(SDEI_EVENT_INTREGS, offsetof(struct sdei_registered_event, interrupted_regs)); DEFINE(SDEI_EVENT_PRIORITY, offsetof(struct sdei_registered_event, priority)); +#endif +#ifdef CONFIG_KEXEC_CORE + DEFINE(KEXEC_KRELOC_HEAD, offsetof(struct kern_reloc_arg, head)); + DEFINE(KEXEC_KRELOC_ENTRY_ADDR, offsetof(struct kern_reloc_arg, entry_addr)); + DEFINE(KEXEC_KRELOC_KERN_ARG0, offsetof(struct kern_reloc_arg, kern_arg0)); + DEFINE(KEXEC_KRELOC_KERN_ARG1, offsetof(struct kern_reloc_arg, kern_arg1)); + DEFINE(KEXEC_KRELOC_KERN_ARG2, offsetof(struct kern_reloc_arg, kern_arg2)); + DEFINE(KEXEC_KRELOC_KERN_ARG3, offsetof(struct kern_reloc_arg, kern_arg3)); #endif return 0; } diff --git a/arch/arm64/kernel/cpu-reset.S b/arch/arm64/kernel/cpu-reset.S index 32c7bf858dd9..b8e4b276393e 100644 --- a/arch/arm64/kernel/cpu-reset.S +++ b/arch/arm64/kernel/cpu-reset.S @@ -16,14 +16,11 @@ .pushsection .idmap.text, "awx" /* - * __cpu_soft_restart(el2_switch, entry, arg0, arg1, arg2) - Helper for - * cpu_soft_restart.
+ * __cpu_soft_restart(el2_switch, entry, arg) - Helper for cpu_soft_restart. * * @el2_switch: Flag to indicate a switch to EL2 is needed. * @entry: Location to jump to for soft reset. - * arg0: First argument passed to @entry. (relocation list) - * arg1: Second argument passed to @entry.(physical kernel entry) - * arg2: Third argument passed to @entry. (physical dtb address) + * arg: Entry argument * * Put the CPU into the same state as it would be if it had been reset, and * branch to what would be the reset vector. It must be executed with the @@ -43,9 +40,7 @@ ENTRY(__cpu_soft_restart) hvc #0 // no return 1: mov x8, x1 // entry - mov x0, x2 // arg0 - mov x1, x3 // arg1 - mov x2, x4 // arg2 + mov x0, x2 // arg br x8 ENDPROC(__cpu_soft_restart) diff --git a/arch/arm64/kernel/cpu-reset.h b/arch/arm64/kernel/cpu-reset.h index 38cbd4068019..7649eec64f82 100644 --- a/arch/arm64/kernel/cpu-reset.h +++ b/arch/arm64/kernel/cpu-reset.h @@ -11,12 +11,10 @@ #include void __cpu_soft_restart(phys_addr_t el2_switch, phys_addr_t entry, - phys_addr_t arg0, phys_addr_t arg1, phys_addr_t arg2); + phys_addr_t arg); static inline void __noreturn cpu_soft_restart(phys_addr_t entry, - phys_addr_t arg0, - phys_addr_t arg1, - phys_addr_t arg2) + phys_addr_t arg) { typeof(__cpu_soft_restart) *restart; @@ -25,7 +23,7 @@ static inline void __noreturn cpu_soft_restart(phys_addr_t entry, restart = (void *)__pa_symbol(__cpu_soft_restart); cpu_install_idmap(); - restart(el2_switch, entry, arg0, arg1, arg2); + restart(el2_switch, entry, arg); unreachable(); } diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c index cee3be586384..b1122eea627e 100644 --- a/arch/arm64/kernel/machine_kexec.c +++ b/arch/arm64/kernel/machine_kexec.c @@ -43,6 +43,7 @@ static void _kexec_image_info(const char *func, int line, pr_debug(" head: %lx\n", kimage->head); pr_debug(" nr_segments: %lu\n", kimage->nr_segments); pr_debug(" kern_reloc: %pa\n", &kimage->arch.kern_reloc); + pr_debug(" kern_reloc_arg: %pa\n", &kimage->arch.kern_reloc_arg); for (i = 0; i < kimage->nr_segments; i++) { pr_debug(" segment[%lu]: %016lx - %016lx, 0x%lx bytes, %lu pages\n", @@ -59,13 +60,35 @@ void machine_kexec_cleanup(struct kimage *kimage) /* Empty routine needed to avoid build errors. */ } +/* Allocates pages for kexec page table */ +static void *kexec_page_alloc(void *arg) +{ + struct kimage *kimage = (struct kimage *)arg; + struct page *page = kimage_alloc_control_pages(kimage, 0); + + if (!page) + return NULL; + + memset(page_address(page), 0, PAGE_SIZE); + + return page_address(page); +} + int machine_kexec_post_load(struct kimage *kimage) { void *reloc_code = page_to_virt(kimage->control_code_page); + struct kern_reloc_arg *kern_reloc_arg = kexec_page_alloc(kimage); + + if (!kern_reloc_arg) + return -ENOMEM; memcpy(reloc_code, arm64_relocate_new_kernel, arm64_relocate_new_kernel_size); kimage->arch.kern_reloc = __pa(reloc_code); + kimage->arch.kern_reloc_arg = __pa(kern_reloc_arg); + kern_reloc_arg->head = kimage->head; + kern_reloc_arg->entry_addr = kimage->start; + kern_reloc_arg->kern_arg0 = kimage->arch.dtb_mem; kexec_image_info(kimage); return 0; @@ -200,8 +223,7 @@ void machine_kexec(struct kimage *kimage) * userspace (kexec-tools). * In kexec_file case, the kernel starts directly without purgatory. */ - cpu_soft_restart(kimage->arch.kern_reloc, kimage->head, kimage->start, - kimage->arch.dtb_mem); + cpu_soft_restart(kimage->arch.kern_reloc, kimage->arch.kern_reloc_arg); BUG(); /* Should never get here. 
*/ } diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S index 41f9c95fabe8..22ccdcb106d3 100644 --- a/arch/arm64/kernel/relocate_kernel.S +++ b/arch/arm64/kernel/relocate_kernel.S @@ -8,6 +8,7 @@ #include #include +#include #include #include #include @@ -25,12 +26,6 @@ * safe memory that has been set up to be preserved during the copy operation. */ ENTRY(arm64_relocate_new_kernel) - /* Setup the list loop variables. */ - mov x18, x2 /* x18 = dtb address */ - mov x17, x1 /* x17 = kimage_start */ - mov x16, x0 /* x16 = kimage_head */ - mov x14, xzr /* x14 = entry ptr */ - mov x13, xzr /* x13 = copy dest */ /* Clear the sctlr_el2 flags. */ mrs x2, CurrentEL cmp x2, #CurrentEL_EL2 @@ -42,6 +37,7 @@ ENTRY(arm64_relocate_new_kernel) msr sctlr_el2, x2 isb 1: /* Check if the new image needs relocation. */ + ldr x16, [x0, #KEXEC_KRELOC_HEAD] /* x16 = kimage_head */ tbnz x16, IND_DONE_BIT, .Ldone raw_dcache_line_size x15, x1 /* x15 = dcache line size */ .Lloop: @@ -81,11 +77,12 @@ ENTRY(arm64_relocate_new_kernel) isb /* Start new image. */ - mov x0, x18 - mov x1, xzr - mov x2, xzr - mov x3, xzr - br x17 + ldr x4, [x0, #KEXEC_KRELOC_ENTRY_ADDR] /* x4 = kimage_start */ + ldr x3, [x0, #KEXEC_KRELOC_KERN_ARG3] + ldr x2, [x0, #KEXEC_KRELOC_KERN_ARG2] + ldr x1, [x0, #KEXEC_KRELOC_KERN_ARG1] + ldr x0, [x0, #KEXEC_KRELOC_KERN_ARG0] /* x0 = dtb address */ + br x4 .ltorg END(arm64_relocate_new_kernel) From patchwork Thu Mar 26 03:24:16 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pasha Tatashin X-Patchwork-Id: 11459129 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 83C321668 for ; Thu, 26 Mar 2020 03:24:59 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 4828920772 for ; Thu, 26 Mar 2020 03:24:59 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=soleen.com header.i=@soleen.com header.b="i0IEvGyB" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 4828920772 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=soleen.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id D79256B0074; Wed, 25 Mar 2020 23:24:45 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id CDDDB6B0075; Wed, 25 Mar 2020 23:24:45 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id B55D36B0078; Wed, 25 Mar 2020 23:24:45 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0124.hostedemail.com [216.40.44.124]) by kanga.kvack.org (Postfix) with ESMTP id 998876B0074 for ; Wed, 25 Mar 2020 23:24:45 -0400 (EDT) Received: from smtpin10.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 8B8FB3AB7 for ; Thu, 26 Mar 2020 03:24:45 +0000 (UTC) X-FDA: 76636071330.10.jewel00_c5f6bdf2bd1c X-Spam-Summary: 
2,0,0,54f66c828cf2ecd0,d41d8cd98f00b204,pasha.tatashin@soleen.com,,RULES_HIT:41:355:379:541:800:960:973:988:989:1260:1345:1359:1381:1431:1437:1500:1535:1544:1711:1730:1747:1777:1792:2393:2553:2559:2562:2693:2895:3138:3139:3140:3141:3142:3355:3622:3865:3866:3867:3868:3870:3871:3872:4118:4321:4605:5007:6261:6653:6737:6738:7875:8603:9010:9592:10004:11026:11473:11657:11658:11914:12043:12048:12295:12297:12438:12517:12519:12555:12895:12986:13161:13229:14096:14181:14394:14721:21063:21080:21222:21325:21444:21451:21627:21990:30012:30034:30054:30070:30079:30090,0,RBL:209.85.160.195:@soleen.com:.lbl8.mailshell.net-62.2.0.100 66.100.201.201,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:0,LUA_SUMMARY:none X-HE-Tag: jewel00_c5f6bdf2bd1c X-Filterd-Recvd-Size: 7549 Received: from mail-qt1-f195.google.com (mail-qt1-f195.google.com [209.85.160.195]) by imf46.hostedemail.com (Postfix) with ESMTP for ; Thu, 26 Mar 2020 03:24:44 +0000 (UTC) Received: by mail-qt1-f195.google.com with SMTP id i3so4177274qtv.8 for ; Wed, 25 Mar 2020 20:24:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=soleen.com; s=google; h=from:to:subject:date:message-id:in-reply-to:references; bh=Br14IEdrQ+h/TluYq6M39j0SJ67HIdjC6baxG9oa0nU=; b=i0IEvGyBqoriM4CG8hBTaH9CpgOJ+6Ro/7wdzybTtPbDV3e3M8UaJfMXQB9ptG5KVI FMkd79h+wUDdq/swltuqmENldbzxosdhpw4megU23U4lXShTmo9RkplMrnpRO+HWT58l bz1J8NxmTMNbonU7mU9LYAok6D8F5cpdxBduNMeJ9AUQNYRgM80AiwlQuWMdnGlMnVXe agnuQ8xBU3v88CVQjZNvk6TyyQa75h7hXBgx/asxBD+iZKo8GNG41Q9syhdbNqf6i43O tG+TvCUOkbFnoZd8PT3+ZRr6ee5z6WwOgYig4/oYl1Wel8/aw2hCIU7IPxIaPJrIFEwF 80yA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:subject:date:message-id:in-reply-to :references; bh=Br14IEdrQ+h/TluYq6M39j0SJ67HIdjC6baxG9oa0nU=; b=Sf2hjqWcwz4FFboc8jZIhzM3ZbbAI0MRwQC7EpQ61VCRrkbW0EzO9n3pkT6qJDktaZ eClcaE+7loGS06YmFoyrM+NTHFgnvBDj4GqPiP7kc5NIT29hwhKw/VUg9FpxpvMXOqeK +baSKXRF9RB6FwD+qtrRcjx/Lkbt51jQi6//yaDS1ATigHXJ7cglNV5AgkmF3b93Bwmc ZTt7/aacoB5xTOKOyn1eMrHjjCWlg/qV9wgvCmlgS0Vmx0rgHO28k1p2qzjN2aeZnzOf q9hM3qZETyu+TGzGqf9sb8Dc5v4FtQQsjp6ZxbUWs3CZA5toLG+fJQ/g8gw3R1bZQQeM GlRg== X-Gm-Message-State: ANhLgQ3um5WVpqxPx0VZ2LxobGCeTsx0wFlHf1J4E4xF8ACBdHuVnvCm 4yu1z4O+P4Ev35JPq4orOMM7hA== X-Google-Smtp-Source: ADFU+vtm27PkglkxZBWZNSEtrtj7Md3gNLlQXHWm/UbMjvs5l2SaL7nAgQ6PGID3oJSqgIkAG7pfhQ== X-Received: by 2002:aed:3c4b:: with SMTP id u11mr6160095qte.208.1585193084262; Wed, 25 Mar 2020 20:24:44 -0700 (PDT) Received: from localhost.localdomain (c-73-69-118-222.hsd1.nh.comcast.net. 
[73.69.118.222]) by smtp.gmail.com with ESMTPSA id u4sm620034qka.35.2020.03.25.20.24.42 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 25 Mar 2020 20:24:43 -0700 (PDT) From: Pavel Tatashin To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org, ebiederm@xmission.com, kexec@lists.infradead.org, linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com, will@kernel.org, linux-arm-kernel@lists.infradead.org, maz@kernel.org, james.morse@arm.com, vladimir.murzin@arm.com, matthias.bgg@gmail.com, bhsharma@redhat.com, linux-mm@kvack.org, mark.rutland@arm.com, steve.capper@arm.com, rfontana@redhat.com, tglx@linutronix.de, selindag@gmail.com Subject: [PATCH v9 14/18] arm64: kexec: offset for relocation function Date: Wed, 25 Mar 2020 23:24:16 -0400 Message-Id: <20200326032420.27220-15-pasha.tatashin@soleen.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200326032420.27220-1-pasha.tatashin@soleen.com> References: <20200326032420.27220-1-pasha.tatashin@soleen.com> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Soon, relocation function will share the same page with EL2 vectors. Add offset within this page to arm64_relocate_new_kernel, and also the total size of relocation code which will include both the function and the EL2 vectors. Signed-off-by: Pavel Tatashin --- arch/arm64/include/asm/kexec.h | 7 +++++++ arch/arm64/kernel/machine_kexec.c | 13 ++++--------- arch/arm64/kernel/relocate_kernel.S | 16 +++++++++++----- 3 files changed, 22 insertions(+), 14 deletions(-) diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h index 990185744148..d944c2e289b2 100644 --- a/arch/arm64/include/asm/kexec.h +++ b/arch/arm64/include/asm/kexec.h @@ -90,6 +90,13 @@ static inline void crash_prepare_suspend(void) {} static inline void crash_post_resume(void) {} #endif +#if defined(CONFIG_KEXEC_CORE) +/* The beginning and size of relcation code to stage 2 kernel */ +extern const unsigned long kexec_relocate_code_size; +extern const unsigned char kexec_relocate_code_start[]; +extern const unsigned long kexec_kern_reloc_offset; +#endif + /* * kern_reloc_arg is passed to kernel relocation function as an argument. * head kimage->head, allows to traverse through relocation segments. diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c index b1122eea627e..ab571fca9bd1 100644 --- a/arch/arm64/kernel/machine_kexec.c +++ b/arch/arm64/kernel/machine_kexec.c @@ -23,10 +23,6 @@ #include "cpu-reset.h" -/* Global variables for the arm64_relocate_new_kernel routine. */ -extern const unsigned char arm64_relocate_new_kernel[]; -extern const unsigned long arm64_relocate_new_kernel_size; - /** * kexec_image_info - For debugging output. */ @@ -82,9 +78,8 @@ int machine_kexec_post_load(struct kimage *kimage) if (!kern_reloc_arg) return -ENOMEM; - memcpy(reloc_code, arm64_relocate_new_kernel, - arm64_relocate_new_kernel_size); - kimage->arch.kern_reloc = __pa(reloc_code); + memcpy(reloc_code, kexec_relocate_code_start, kexec_relocate_code_size); + kimage->arch.kern_reloc = __pa(reloc_code) + kexec_kern_reloc_offset; kimage->arch.kern_reloc_arg = __pa(kern_reloc_arg); kern_reloc_arg->head = kimage->head; kern_reloc_arg->entry_addr = kimage->start; @@ -189,7 +184,7 @@ void machine_kexec(struct kimage *kimage) "Some CPUs may be stale, kdump will be unreliable.\n"); /* Flush the reboot_code_buffer in preparation for its execution. 
*/ - __flush_dcache_area(reboot_code_buffer, arm64_relocate_new_kernel_size); + __flush_dcache_area(reboot_code_buffer, kexec_relocate_code_size); /* * Although we've killed off the secondary CPUs, we don't update @@ -198,7 +193,7 @@ void machine_kexec(struct kimage *kimage) * the offline CPUs. Therefore, we must use the __* variant here. */ __flush_icache_range((uintptr_t)reboot_code_buffer, - arm64_relocate_new_kernel_size); + kexec_relocate_code_size); /* Flush the kimage list and its buffers. */ kexec_list_flush(kimage); diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S index 22ccdcb106d3..aa9f2b2cd77c 100644 --- a/arch/arm64/kernel/relocate_kernel.S +++ b/arch/arm64/kernel/relocate_kernel.S @@ -14,6 +14,9 @@ #include #include +.globl kexec_relocate_code_start +kexec_relocate_code_start: + /* * arm64_relocate_new_kernel - Put a 2nd stage image in place and boot it. * @@ -86,13 +89,16 @@ ENTRY(arm64_relocate_new_kernel) .ltorg END(arm64_relocate_new_kernel) -.Lcopy_end: +.Lkexec_relocate_code_end: .org KEXEC_CONTROL_PAGE_SIZE .align 3 /* To keep the 64-bit values below naturally aligned. */ /* - * arm64_relocate_new_kernel_size - Number of bytes to copy to the + * kexec_relocate_code_size - Number of bytes to copy to the * control_code_page. */ -.globl arm64_relocate_new_kernel_size -arm64_relocate_new_kernel_size: - .quad .Lcopy_end - arm64_relocate_new_kernel +.globl kexec_relocate_code_size +kexec_relocate_code_size: + .quad .Lkexec_relocate_code_end - kexec_relocate_code_start +.globl kexec_kern_reloc_offset +kexec_kern_reloc_offset: + .quad arm64_relocate_new_kernel - kexec_relocate_code_start From patchwork Thu Mar 26 03:24:17 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pasha Tatashin X-Patchwork-Id: 11459131 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 363C71668 for ; Thu, 26 Mar 2020 03:25:02 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id EB7DE2074D for ; Thu, 26 Mar 2020 03:25:01 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=soleen.com header.i=@soleen.com header.b="fiB9ueCs" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org EB7DE2074D Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=soleen.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 4C3106B0075; Wed, 25 Mar 2020 23:24:47 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 44D026B0078; Wed, 25 Mar 2020 23:24:47 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 33D616B007B; Wed, 25 Mar 2020 23:24:47 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0077.hostedemail.com [216.40.44.77]) by kanga.kvack.org (Postfix) with ESMTP id 1C7966B0075 for ; Wed, 25 Mar 2020 23:24:47 -0400 (EDT) Received: from smtpin17.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id E381E181AD0A5 for ; Thu, 26 Mar 2020 03:24:46 +0000 (UTC) X-FDA: 76636071372.17.thumb80_c9970cbae74d 
X-Spam-Summary: 2,0,0,34a4c13d01d9fd90,d41d8cd98f00b204,pasha.tatashin@soleen.com,,RULES_HIT:41:355:379:541:800:960:973:988:989:1260:1345:1359:1381:1437:1500:1535:1544:1605:1711:1730:1747:1777:1792:2198:2199:2393:2559:2562:2693:2731:3138:3139:3140:3141:3142:3865:3867:3868:3870:3871:3874:4119:4321:4605:5007:6119:6261:6653:6737:6738:7875:7903:10004:11026:11473:11657:11658:11914:12043:12048:12291:12295:12296:12297:12438:12517:12519:12555:12895:12986:14096:14181:14394:14721:14819:21063:21080:21325:21444:21451:21611:21627:21990:30003:30054:30069:30070:30079,0,RBL:209.85.219.68:@soleen.com:.lbl8.mailshell.net-62.2.0.100 66.100.201.201,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:23,LUA_SUMMARY:none X-HE-Tag: thumb80_c9970cbae74d X-Filterd-Recvd-Size: 8117 Received: from mail-qv1-f68.google.com (mail-qv1-f68.google.com [209.85.219.68]) by imf07.hostedemail.com (Postfix) with ESMTP for ; Thu, 26 Mar 2020 03:24:46 +0000 (UTC) Received: by mail-qv1-f68.google.com with SMTP id p19so2285315qve.0 for ; Wed, 25 Mar 2020 20:24:46 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=soleen.com; s=google; h=from:to:subject:date:message-id:in-reply-to:references; bh=uH50jv2KMwpjM9MUqJEA6M9mjf0TYnrFc+j0/4JA2/8=; b=fiB9ueCsANwmQzTKLdMTwl073niDxot5bMY8SHMvRN7zmHWFcrQH7scnjR2/1TGnfK dVIOFwbvoAVCesAKXWXgQbZUh/zmmYX3iv5+zHFR5bUds4KxsRNggisBdbNLASd90B35 7lcQ5vIafAsStIIPwSFjT2Ihhnkb4cjQJdoZ9AWIkmMTWs63lb4W+f5EjzM/CFYwjp2h 0tSJglzq70UYYtHzJ1aa2v5ejWZTzxjE0bUi2ki/7F2ik+qa8qRnwZmxvfmUvds9LZ9m sfcZqJd1S5chEMjK5bYIapAf1x6QQPcCRIO01A4WZjbtTYzLi4pJ2vB0h5FtXqAviG0v AZtg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:subject:date:message-id:in-reply-to :references; bh=uH50jv2KMwpjM9MUqJEA6M9mjf0TYnrFc+j0/4JA2/8=; b=qQ+DA7MXnHdc4qq0FK4ZXlL0zM/G7zKihEQlx+uW2239c5fy5znwcZ9QfCW5B80BNx C5pEP5Q2iAvNFD2apQq8ADPXaQh/UzJNUpEAZsc23xTYXWFX1iITQEc9fuOYYNfGD//L THbHkNNTxDIFB8I6vm6kJo5kzjLJLyXfDvj37IixiV+jZn0fBI5KLKoY5slRJu91ygNW iABsj/ufuppeGdNsqYukmqi//fAWldoSsidyZL4rVGe7JsTOWU5+vsVEHI/dmD2tFUWC MNbeSJX+c4d2zg5aOtWB3U99Hj4MikDpYfdHCJ473sm4DxKoAaqUROX/d23gbStU2+l+ rqCw== X-Gm-Message-State: ANhLgQ0mvxOTbdydM7J6QvVdAYs0V/Kwe0LfhCUB38qfAJ4gIoQckxXH B16oY7adwCwZEAV2AZGPugDdgg== X-Google-Smtp-Source: ADFU+vsn5N31l1OOjwnpnGaEWO0LewMtT8xvGuAvQ3zFm5/A5YPwx47lqc7VNLvdKkoNJK+uI7hzhg== X-Received: by 2002:a0c:ebd2:: with SMTP id k18mr6281606qvq.143.1585193085859; Wed, 25 Mar 2020 20:24:45 -0700 (PDT) Received: from localhost.localdomain (c-73-69-118-222.hsd1.nh.comcast.net. 
[73.69.118.222]) by smtp.gmail.com with ESMTPSA id u4sm620034qka.35.2020.03.25.20.24.44 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 25 Mar 2020 20:24:45 -0700 (PDT) From: Pavel Tatashin To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org, ebiederm@xmission.com, kexec@lists.infradead.org, linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com, will@kernel.org, linux-arm-kernel@lists.infradead.org, maz@kernel.org, james.morse@arm.com, vladimir.murzin@arm.com, matthias.bgg@gmail.com, bhsharma@redhat.com, linux-mm@kvack.org, mark.rutland@arm.com, steve.capper@arm.com, rfontana@redhat.com, tglx@linutronix.de, selindag@gmail.com Subject: [PATCH v9 15/18] arm64: kexec: kexec EL2 vectors Date: Wed, 25 Mar 2020 23:24:17 -0400 Message-Id: <20200326032420.27220-16-pasha.tatashin@soleen.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200326032420.27220-1-pasha.tatashin@soleen.com> References: <20200326032420.27220-1-pasha.tatashin@soleen.com> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: If we have an EL2 mode without VHE, the EL2 vectors are needed in order to switch to EL2 and jump to new world with hypervisor privileges. Signed-off-by: Pavel Tatashin --- arch/arm64/include/asm/kexec.h | 5 +++++ arch/arm64/kernel/asm-offsets.c | 1 + arch/arm64/kernel/machine_kexec.c | 5 +++++ arch/arm64/kernel/relocate_kernel.S | 35 +++++++++++++++++++++++++++++++++++ 4 files changed, 46 insertions(+) diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h index d944c2e289b2..0f758fd51518 100644 --- a/arch/arm64/include/asm/kexec.h +++ b/arch/arm64/include/asm/kexec.h @@ -95,6 +95,7 @@ static inline void crash_post_resume(void) {} extern const unsigned long kexec_relocate_code_size; extern const unsigned char kexec_relocate_code_start[]; extern const unsigned long kexec_kern_reloc_offset; +extern const unsigned long kexec_el2_vectors_offset; #endif /* @@ -104,6 +105,9 @@ extern const unsigned long kexec_kern_reloc_offset; * kernel, or purgatory entry address). * kern_arg0 first argument to kernel is its dtb address. The other * arguments are currently unused, and must be set to 0 + * el2_vector If present means that relocation routine will go to EL1 + * from EL2 to do the copy, and then back to EL2 to do the jump + * to new world.
*/ struct kern_reloc_arg { phys_addr_t head; @@ -112,6 +116,7 @@ struct kern_reloc_arg { phys_addr_t kern_arg1; phys_addr_t kern_arg2; phys_addr_t kern_arg3; + phys_addr_t el2_vector; }; #define ARCH_HAS_KIMAGE_ARCH diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c index 448230684749..ff974b648347 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -136,6 +136,7 @@ int main(void) DEFINE(KEXEC_KRELOC_KERN_ARG1, offsetof(struct kern_reloc_arg, kern_arg1)); DEFINE(KEXEC_KRELOC_KERN_ARG2, offsetof(struct kern_reloc_arg, kern_arg2)); DEFINE(KEXEC_KRELOC_KERN_ARG3, offsetof(struct kern_reloc_arg, kern_arg3)); + DEFINE(KEXEC_KRELOC_EL2_VECTOR, offsetof(struct kern_reloc_arg, el2_vector)); #endif return 0; } diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c index ab571fca9bd1..bd398def7627 100644 --- a/arch/arm64/kernel/machine_kexec.c +++ b/arch/arm64/kernel/machine_kexec.c @@ -84,6 +84,11 @@ int machine_kexec_post_load(struct kimage *kimage) kern_reloc_arg->head = kimage->head; kern_reloc_arg->entry_addr = kimage->start; kern_reloc_arg->kern_arg0 = kimage->arch.dtb_mem; + /* Setup vector table only when EL2 is available, but no VHE */ + if (is_hyp_mode_available() && !is_kernel_in_hyp_mode()) { + kern_reloc_arg->el2_vector = __pa(reloc_code) + + kexec_el2_vectors_offset; + } kexec_image_info(kimage); return 0; diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S index aa9f2b2cd77c..6fd2fc0ef373 100644 --- a/arch/arm64/kernel/relocate_kernel.S +++ b/arch/arm64/kernel/relocate_kernel.S @@ -89,6 +89,38 @@ ENTRY(arm64_relocate_new_kernel) .ltorg END(arm64_relocate_new_kernel) +.macro el1_sync_64 + br x4 /* Jump to new world from el2 */ + .fill 31, 4, 0 /* Set other 31 instr to zeroes */ +.endm + +.macro invalid_vector label +\label: + b \label + .fill 31, 4, 0 /* Set other 31 instr to zeroes */ +.endm + +/* el2 vectors - switch el2 here while we restore the memory image. */ + .align 11 +ENTRY(kexec_el2_vectors) + invalid_vector el2_sync_invalid_sp0 /* Synchronous EL2t */ + invalid_vector el2_irq_invalid_sp0 /* IRQ EL2t */ + invalid_vector el2_fiq_invalid_sp0 /* FIQ EL2t */ + invalid_vector el2_error_invalid_sp0 /* Error EL2t */ + invalid_vector el2_sync_invalid_spx /* Synchronous EL2h */ + invalid_vector el2_irq_invalid_spx /* IRQ EL2h */ + invalid_vector el2_fiq_invalid_spx /* FIQ EL2h */ + invalid_vector el2_error_invalid_spx /* Error EL2h */ + el1_sync_64 /* Synchronous 64-bit EL1 */ + invalid_vector el1_irq_invalid_64 /* IRQ 64-bit EL1 */ + invalid_vector el1_fiq_invalid_64 /* FIQ 64-bit EL1 */ + invalid_vector el1_error_invalid_64 /* Error 64-bit EL1 */ + invalid_vector el1_sync_invalid_32 /* Synchronous 32-bit EL1 */ + invalid_vector el1_irq_invalid_32 /* IRQ 32-bit EL1 */ + invalid_vector el1_fiq_invalid_32 /* FIQ 32-bit EL1 */ + invalid_vector el1_error_invalid_32 /* Error 32-bit EL1 */ +END(kexec_el2_vectors) + .Lkexec_relocate_code_end: .org KEXEC_CONTROL_PAGE_SIZE .align 3 /* To keep the 64-bit values below naturally aligned. 
*/ @@ -102,3 +134,6 @@ kexec_relocate_code_size: .globl kexec_kern_reloc_offset kexec_kern_reloc_offset: .quad arm64_relocate_new_kernel - kexec_relocate_code_start +.globl kexec_el2_vectors_offset +kexec_el2_vectors_offset: + .quad kexec_el2_vectors - kexec_relocate_code_start From patchwork Thu Mar 26 03:24:18 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pasha Tatashin X-Patchwork-Id: 11459133 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id E486E161F for ; Thu, 26 Mar 2020 03:25:04 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id A360D2074D for ; Thu, 26 Mar 2020 03:25:04 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=soleen.com header.i=@soleen.com header.b="k9Ab71Xt" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org A360D2074D Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=soleen.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 168866B0078; Wed, 25 Mar 2020 23:24:49 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 11A2F6B007B; Wed, 25 Mar 2020 23:24:49 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id EFD226B007D; Wed, 25 Mar 2020 23:24:48 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0081.hostedemail.com [216.40.44.81]) by kanga.kvack.org (Postfix) with ESMTP id D71D36B0078 for ; Wed, 25 Mar 2020 23:24:48 -0400 (EDT) Received: from smtpin12.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id D00874405 for ; Thu, 26 Mar 2020 03:24:48 +0000 (UTC) X-FDA: 76636071456.12.use30_cdc040d11201 X-Spam-Summary: 2,0,0,aca1b868fccee3c0,d41d8cd98f00b204,pasha.tatashin@soleen.com,,RULES_HIT:2:41:355:379:541:800:960:973:988:989:1260:1345:1359:1381:1431:1437:1461:1535:1605:1730:1747:1777:1792:2194:2199:2393:2559:2562:2693:3138:3139:3140:3141:3142:3865:3866:3867:3868:3870:3871:3874:4049:4120:4250:4321:4605:5007:6119:6261:6653:6737:6738:7903:8603:10004:11026:11473:11657:11658:11914:12043:12048:12291:12297:12438:12517:12519:12555:12683:12895:12986:13161:13229:14096:14394:21063:21080:21433:21444:21451:21627:21939:21990:30034:30054:30070:30075:30079,0,RBL:209.85.219.66:@soleen.com:.lbl8.mailshell.net-66.100.201.201 62.2.0.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:23,LUA_SUMMARY:none X-HE-Tag: use30_cdc040d11201 X-Filterd-Recvd-Size: 9245 Received: from mail-qv1-f66.google.com (mail-qv1-f66.google.com [209.85.219.66]) by imf09.hostedemail.com (Postfix) with ESMTP for ; Thu, 26 Mar 2020 03:24:48 +0000 (UTC) Received: by mail-qv1-f66.google.com with SMTP id c28so2259387qvb.10 for ; Wed, 25 Mar 2020 20:24:48 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=soleen.com; s=google; h=from:to:subject:date:message-id:in-reply-to:references; bh=CuDr5qYI6GlVY7shRkQ5EEeBFfzJFE9mbrZpS9eu2hQ=; b=k9Ab71XtQB4E3Bo3yPvSRsgc6cg7BdCEUZQt3yKAdiPxDjmADIKr8GdGF5S7Z3a5Ah 
rNYbsEruPq1t8nZXb0xuXVlx2JWWoe/a72sEW5hTQBaNY677SAb1LGa/f5l6BIqyeqgG eivitYpBTw3+tss381Da/ah6BbrepWmF+R118dYCkh/zXBrZiBMXTlPt3Fb8OIp6c0mu xfZdJN1JkoYMNNpPPERjqGte2DtVAtcsVsxd6/Eatw/TjCMTo29rL0YxqfMoerpQ73am XG5zeqNhqa2/0MqkudJRTb1yIkKTce404h2QZGyCAgfu0FyF9yl7XL0MtFu94R6xijCI 3EbA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:subject:date:message-id:in-reply-to :references; bh=CuDr5qYI6GlVY7shRkQ5EEeBFfzJFE9mbrZpS9eu2hQ=; b=qhho1w9H1y8Tr2vMmHpQs1l+5wynQCCzM5Naj8iTBkvTwaBsFZFrMhu0kiCAPOXiB9 JxQuwZKQ4+x/jLkQLTkgS1WgmT0i8nRbqLEx9ltQH1OMPyKvLZIrkDwM3e/LvUUon+4N WY9Qvk3xZMBjyFm5IRLIP11h49C3axK6SpBE1gRakI0iAN3gZWJTTDo2+brB2RFYzU4z /JsKRUlv+n29EyYcwsZ6d4jHDWkllRQLY62W8OrQd43Dv+tZ3oEqYiWJhjQQ/ly2N89V evwDenKyjE+c94cQgxr/moN5mI/qy63O+eSwjMKNw+BPQ84he929YEZui9A14+jXM4mp r2RQ== X-Gm-Message-State: ANhLgQ2vP73fTe9jZTpZsuUmxfqwG+4DD3Fhf+timxmfiL8sDP/5X8KR +RQ3fZ2xUA3oR2Z9DYVtSroRmQ== X-Google-Smtp-Source: ADFU+vt6UIEixLCExT8KxYiu1rT68zpEIs/PcykYXkt1mnb3MyWxWvCSpYV+q14Kv0orn+Tr3gtsYw== X-Received: by 2002:a0c:ac48:: with SMTP id m8mr6284600qvb.13.1585193087675; Wed, 25 Mar 2020 20:24:47 -0700 (PDT) Received: from localhost.localdomain (c-73-69-118-222.hsd1.nh.comcast.net. [73.69.118.222]) by smtp.gmail.com with ESMTPSA id u4sm620034qka.35.2020.03.25.20.24.45 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 25 Mar 2020 20:24:47 -0700 (PDT) From: Pavel Tatashin To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org, ebiederm@xmission.com, kexec@lists.infradead.org, linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com, will@kernel.org, linux-arm-kernel@lists.infradead.org, maz@kernel.org, james.morse@arm.com, vladimir.murzin@arm.com, matthias.bgg@gmail.com, bhsharma@redhat.com, linux-mm@kvack.org, mark.rutland@arm.com, steve.capper@arm.com, rfontana@redhat.com, tglx@linutronix.de, selindag@gmail.com Subject: [PATCH v9 16/18] arm64: kexec: configure trans_pgd page table for kexec Date: Wed, 25 Mar 2020 23:24:18 -0400 Message-Id: <20200326032420.27220-17-pasha.tatashin@soleen.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200326032420.27220-1-pasha.tatashin@soleen.com> References: <20200326032420.27220-1-pasha.tatashin@soleen.com> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Configure a page table located in kexec-safe memory that has the following mappings: 1. identity mapping for text of relocation function with executable permission. 2. linear mappings for all source ranges 3. linear mappings for all destination ranges. Signed-off-by: Pavel Tatashin --- arch/arm64/include/asm/kexec.h | 12 ++++ arch/arm64/kernel/asm-offsets.c | 6 ++ arch/arm64/kernel/machine_kexec.c | 92 ++++++++++++++++++++++++++++++- 3 files changed, 109 insertions(+), 1 deletion(-) diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h index 0f758fd51518..8f4332ac607a 100644 --- a/arch/arm64/include/asm/kexec.h +++ b/arch/arm64/include/asm/kexec.h @@ -108,6 +108,12 @@ extern const unsigned long kexec_el2_vectors_offset; * el2_vector If present means that relocation routine will go to EL1 * from EL2 to do the copy, and then back to EL2 to do the jump * to new world. + * trans_ttbr0 idmap for relocation function and its argument + * trans_ttbr1 linear map for source/destination addresses. + * trans_t0sz t0sz for idmap page in trans_ttbr0 + * src_addr linear map for source pages. 
+ * dst_addr linear map for destination pages. + * copy_len Number of bytes that need to be copied */ struct kern_reloc_arg { phys_addr_t head; @@ -117,6 +123,12 @@ struct kern_reloc_arg { phys_addr_t kern_arg2; phys_addr_t kern_arg3; phys_addr_t el2_vector; + phys_addr_t trans_ttbr0; + phys_addr_t trans_ttbr1; + unsigned long trans_t0sz; + unsigned long src_addr; + unsigned long dst_addr; + unsigned long copy_len; }; #define ARCH_HAS_KIMAGE_ARCH diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c index ff974b648347..58ad5b7816ab 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -137,6 +137,12 @@ int main(void) DEFINE(KEXEC_KRELOC_KERN_ARG2, offsetof(struct kern_reloc_arg, kern_arg2)); DEFINE(KEXEC_KRELOC_KERN_ARG3, offsetof(struct kern_reloc_arg, kern_arg3)); DEFINE(KEXEC_KRELOC_EL2_VECTOR, offsetof(struct kern_reloc_arg, el2_vector)); + DEFINE(KEXEC_KRELOC_TRANS_TTBR0, offsetof(struct kern_reloc_arg, trans_ttbr0)); + DEFINE(KEXEC_KRELOC_TRANS_TTBR1, offsetof(struct kern_reloc_arg, trans_ttbr1)); + DEFINE(KEXEC_KRELOC_TRANS_T0SZ, offsetof(struct kern_reloc_arg, trans_t0sz)); + DEFINE(KEXEC_KRELOC_SRC_ADDR, offsetof(struct kern_reloc_arg, src_addr)); + DEFINE(KEXEC_KRELOC_DST_ADDR, offsetof(struct kern_reloc_arg, dst_addr)); + DEFINE(KEXEC_KRELOC_COPY_LEN, offsetof(struct kern_reloc_arg, copy_len)); #endif return 0; } diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c index bd398def7627..db96d2fab8b2 100644 --- a/arch/arm64/kernel/machine_kexec.c +++ b/arch/arm64/kernel/machine_kexec.c @@ -20,6 +20,7 @@ #include #include #include +#include #include "cpu-reset.h" @@ -70,10 +71,90 @@ static void *kexec_page_alloc(void *arg) return page_address(page); } +/* + * Map source segments starting from src_va, and map destination + * segments starting from dst_va, and return size of copy in + * *copy_len argument. 
+ * Relocation function essentially needs to do: + * memcpy(dst_va, src_va, copy_len); + */ +static int map_segments(struct kimage *kimage, pgd_t *pgdp, + struct trans_pgd_info *info, + unsigned long src_va, + unsigned long dst_va, + unsigned long *copy_len) +{ + unsigned long *ptr = 0; + unsigned long dest = 0; + unsigned long len = 0; + unsigned long entry, addr; + int rc; + + for (entry = kimage->head; !(entry & IND_DONE); entry = *ptr++) { + addr = entry & PAGE_MASK; + + switch (entry & IND_FLAGS) { + case IND_DESTINATION: + dest = addr; + break; + case IND_INDIRECTION: + ptr = __va(addr); + if (rc) + return rc; + break; + case IND_SOURCE: + rc = trans_pgd_map_page(info, pgdp, __va(addr), + src_va, PAGE_KERNEL); + if (rc) + return rc; + rc = trans_pgd_map_page(info, pgdp, __va(dest), + dst_va, PAGE_KERNEL); + if (rc) + return rc; + dest += PAGE_SIZE; + src_va += PAGE_SIZE; + dst_va += PAGE_SIZE; + len += PAGE_SIZE; + } + } + *copy_len = len; + + return 0; +} + +static int mmu_relocate_setup(struct kimage *kimage, void *reloc_code, + struct kern_reloc_arg *kern_reloc_arg) +{ + struct trans_pgd_info info = { + .trans_alloc_page = kexec_page_alloc, + .trans_alloc_arg = kimage, + }; + pgd_t *trans_pgd = kexec_page_alloc(kimage); + int rc; + + if (!trans_pgd) + return -ENOMEM; + + /* idmap relocation function */ + rc = trans_pgd_idmap_page(&info, &kern_reloc_arg->trans_ttbr0, + &kern_reloc_arg->trans_t0sz, reloc_code); + if (rc) + return rc; + + kern_reloc_arg->src_addr = _PAGE_OFFSET(VA_BITS_MIN); + kern_reloc_arg->dst_addr = _PAGE_OFFSET(VA_BITS_MIN - 1); + kern_reloc_arg->trans_ttbr1 = phys_to_ttbr(__pa(trans_pgd)); + + rc = map_segments(kimage, trans_pgd, &info, kern_reloc_arg->src_addr, + kern_reloc_arg->dst_addr, &kern_reloc_arg->copy_len); + return rc; +} + int machine_kexec_post_load(struct kimage *kimage) { void *reloc_code = page_to_virt(kimage->control_code_page); struct kern_reloc_arg *kern_reloc_arg = kexec_page_alloc(kimage); + int rc = 0; if (!kern_reloc_arg) return -ENOMEM; @@ -89,9 +170,18 @@ int machine_kexec_post_load(struct kimage *kimage) kern_reloc_arg->el2_vector = __pa(reloc_code) + kexec_el2_vectors_offset; } + + /* + * If relocation is not needed, we do not need to enable MMU in + * relocation routine, therefore do not create page tables for + * scenarios such as crash kernel + */ + if (!(kimage->head & IND_DONE)) + rc = mmu_relocate_setup(kimage, reloc_code, kern_reloc_arg); + kexec_image_info(kimage); - return 0; + return rc; } /** From patchwork Thu Mar 26 03:24:19 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pasha Tatashin X-Patchwork-Id: 11459135 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id C69781668 for ; Thu, 26 Mar 2020 03:25:07 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 8757E2074D for ; Thu, 26 Mar 2020 03:25:07 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=soleen.com header.i=@soleen.com header.b="SzpG8sLR" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 8757E2074D Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=soleen.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id C8DCE6B007B; Wed, 25 Mar 2020 23:24:50 -0400 (EDT) 
Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id C3BF66B007D; Wed, 25 Mar 2020 23:24:50 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id ADEBD6B007E; Wed, 25 Mar 2020 23:24:50 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0043.hostedemail.com [216.40.44.43]) by kanga.kvack.org (Postfix) with ESMTP id 92FEA6B007B for ; Wed, 25 Mar 2020 23:24:50 -0400 (EDT) Received: from smtpin10.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 692CC4405 for ; Thu, 26 Mar 2020 03:24:50 +0000 (UTC) X-FDA: 76636071540.10.road98_d1c6505d134d X-Spam-Summary: 2,0,0,c188f215f55f1ecb,d41d8cd98f00b204,pasha.tatashin@soleen.com,,RULES_HIT:2:41:69:355:379:421:541:800:960:973:988:989:1260:1345:1359:1381:1437:1461:1535:1605:1730:1747:1777:1792:2194:2199:2393:2538:2559:2562:2693:3138:3139:3140:3141:3142:3865:3866:3867:3868:3870:3871:3872:3874:4049:4120:4250:4321:4605:5007:6117:6119:6261:6653:6737:6738:7688:7903:8603:9592:10004:10226:11026:11473:11657:11658:11914:12043:12048:12296:12297:12438:12517:12519:12555:12679:12895:13161:13229:14394:21063:21080:21324:21325:21444:21451:21627:21740:21966:21990:30003:30054:30067:30070:30079:30089,0,RBL:209.85.222.193:@soleen.com:.lbl8.mailshell.net-66.100.201.201 62.2.0.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:23,LUA_SUMMARY:none X-HE-Tag: road98_d1c6505d134d X-Filterd-Recvd-Size: 9556 Received: from mail-qk1-f193.google.com (mail-qk1-f193.google.com [209.85.222.193]) by imf01.hostedemail.com (Postfix) with ESMTP for ; Thu, 26 Mar 2020 03:24:49 +0000 (UTC) Received: by mail-qk1-f193.google.com with SMTP id v7so5140496qkc.0 for ; Wed, 25 Mar 2020 20:24:49 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=soleen.com; s=google; h=from:to:subject:date:message-id:in-reply-to:references; bh=rP301br3jLV2pUSTiXcOG3lk/+mfTC6rfo7zV+g52uE=; b=SzpG8sLRK43gB03wiBjaqURB25QH24to71FZC+cBWqLbQmoXfEZPD1WOWSfQR/1gxb E0qr9rjv8lKfW1SZyTS+U3Z+Q05ieRm2vqlN0HYUTm/w2f+sGKWeF8IfFcWyhDHIvrn2 u5E6vJMrPy1kmMGG00eioArrNW4Ij97gj1ZItasBymsaR+MS1bmC/tw58X3X7i60yK/7 JsAmx2qXgqlhZZ3CYRdKAN4QLkMKyMGPxxrXuev3OGl3QUKwu4D/6bj/uRuwOM0KRuyp KHi1hAboB0XKc1WQWLSLoK73jsKCDOyxzHhjpNiX/KoYCC7WvueepeunrlcnUL+tt8Gt LTOg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:subject:date:message-id:in-reply-to :references; bh=rP301br3jLV2pUSTiXcOG3lk/+mfTC6rfo7zV+g52uE=; b=uBc4ci3y5JpxsHgywUIuRX+Ppf6M9YMYb2STOrQs10+utDaXXVDRnapLciETlep5Mx PVyZ8IzR498rFOhu1k5mCcUf+zo+U/wDTjtNsKiaO6YvLkOqVxmYzhdzddKiCnBf/djH +6wu8hMPLTyPCe+DD157Mlc2xVtNcaxuWMLkwTwCkSoDqT0SVWV+hpe4DHIRotwgSLnM V3Zmkb6UNc4enhN5HSJFyvXlel8u51YY3vZxKl8XNW5J59OsapFsInxyIVGaMzJyCxlI VwNGrsAB808uQYTBUenmbSV3T16yOzmUg0fKTRac2bnUjJ90khyZbyBnVpKOj/8CVj8/ ymFg== X-Gm-Message-State: ANhLgQ3t488X3/AqIX7DCr4u6QRdvn2mTK7lnX4SdPoKuM1MHSGHLcam JdWEvV49nkKTikcgGQkumcFAWw== X-Google-Smtp-Source: ADFU+vt/3OwGi00Krey3I6+4YSIATchoONx/f4HuqDlk6oplG5GN05ZeuCyfndKbFky+5rkucArSNg== X-Received: by 2002:a05:620a:1252:: with SMTP id a18mr6235906qkl.204.1585193089294; Wed, 25 Mar 2020 20:24:49 -0700 (PDT) Received: from localhost.localdomain (c-73-69-118-222.hsd1.nh.comcast.net. 
[73.69.118.222]) by smtp.gmail.com with ESMTPSA id u4sm620034qka.35.2020.03.25.20.24.47 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 25 Mar 2020 20:24:48 -0700 (PDT) From: Pavel Tatashin To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org, ebiederm@xmission.com, kexec@lists.infradead.org, linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com, will@kernel.org, linux-arm-kernel@lists.infradead.org, maz@kernel.org, james.morse@arm.com, vladimir.murzin@arm.com, matthias.bgg@gmail.com, bhsharma@redhat.com, linux-mm@kvack.org, mark.rutland@arm.com, steve.capper@arm.com, rfontana@redhat.com, tglx@linutronix.de, selindag@gmail.com Subject: [PATCH v9 17/18] arm64: kexec: enable MMU during kexec relocation Date: Wed, 25 Mar 2020 23:24:19 -0400 Message-Id: <20200326032420.27220-18-pasha.tatashin@soleen.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200326032420.27220-1-pasha.tatashin@soleen.com> References: <20200326032420.27220-1-pasha.tatashin@soleen.com> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Now, that we have transitional page tables configured, temporarily enable MMU to allow faster relocation of segments to final destination. The performance data: for a moderate size kernel + initramfs: 25M the relocation was taking 0.382s, with enabled MMU it now takes 0.019s only or x20 improvement. The time is proportional to the size of relocation, therefore if initramfs is larger, 100M it could take over a second. Signed-off-by: Pavel Tatashin --- arch/arm64/kernel/relocate_kernel.S | 144 ++++++++++++++++++---------- 1 file changed, 92 insertions(+), 52 deletions(-) diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S index 6fd2fc0ef373..430e7512ced5 100644 --- a/arch/arm64/kernel/relocate_kernel.S +++ b/arch/arm64/kernel/relocate_kernel.S @@ -4,6 +4,8 @@ * * Copyright (C) Linaro. * Copyright (C) Huawei Futurewei Technologies. + * Copyright (c) 2020, Microsoft Corporation. + * Pavel Tatashin */ #include @@ -16,6 +18,56 @@ .globl kexec_relocate_code_start kexec_relocate_code_start: +/* Invalidae TLB */ +.macro tlb_invalidate + dsb sy + dsb ish + tlbi vmalle1 + dsb ish + isb +.endm + +/* Turn-off mmu at level specified by sctlr */ +.macro turn_off_mmu sctlr, tmp1, tmp2 + mrs \tmp1, \sctlr + ldr \tmp2, =SCTLR_ELx_FLAGS + bic \tmp1, \tmp1, \tmp2 + pre_disable_mmu_workaround + msr \sctlr, \tmp1 + isb +.endm + +/* Turn-on mmu at level specified by sctlr */ +.macro turn_on_mmu sctlr, tmp1, tmp2 + mrs \tmp1, \sctlr + ldr \tmp2, =SCTLR_ELx_FLAGS + orr \tmp1, \tmp1, \tmp2 + msr \sctlr, \tmp1 + ic iallu + dsb nsh + isb +.endm + +/* + * Set ttbr0 and ttbr1, called while MMU is disabled, so no need to temporarily + * set zero_page table. Invalidate TLB after new tables are set. + */ +.macro set_ttbr arg, tmp1, tmp2 + ldr \tmp1, [\arg, #KEXEC_KRELOC_TRANS_TTBR0] + msr ttbr0_el1, \tmp1 + ldr \tmp1, [\arg, #KEXEC_KRELOC_TRANS_TTBR1] + offset_ttbr1 \tmp1, \tmp2 + msr ttbr1_el1, \tmp1 + isb +.endm + +/* Set T0SZ to match the requirements of idmap page */ +.macro set_tcr_t0sz arg, tmp1, tmp2 + ldr \tmp2, [\arg, #KEXEC_KRELOC_TRANS_T0SZ] + mrs \tmp1, tcr_el1 + bfi \tmp1, \tmp2, TCR_T0SZ_OFFSET, TCR_TxSZ_WIDTH + msr tcr_el1, \tmp1 +.endm /* * arm64_relocate_new_kernel - Put a 2nd stage image in place and boot it. 
@@ -27,65 +79,53 @@ kexec_relocate_code_start: * symbols arm64_relocate_new_kernel and arm64_relocate_new_kernel_end. The * machine_kexec() routine will copy arm64_relocate_new_kernel to the kexec * safe memory that has been set up to be preserved during the copy operation. + * + * This function temporarily enables MMU if kernel relocation is needed. + * Also, if we enter this function at EL2 on non-VHE kernel, we temporarily go + * to EL1 to enable MMU, and escalate back to EL2 at the end to do the jump to + * the new kernel. This is determined by presence of el2_vector. */ ENTRY(arm64_relocate_new_kernel) - /* Clear the sctlr_el2 flags. */ - mrs x2, CurrentEL - cmp x2, #CurrentEL_EL2 + mrs x1, CurrentEL + cmp x1, #CurrentEL_EL2 b.ne 1f - mrs x2, sctlr_el2 - ldr x1, =SCTLR_ELx_FLAGS - bic x2, x2, x1 - pre_disable_mmu_workaround - msr sctlr_el2, x2 - isb -1: /* Check if the new image needs relocation. */ - ldr x16, [x0, #KEXEC_KRELOC_HEAD] /* x16 = kimage_head */ - tbnz x16, IND_DONE_BIT, .Ldone - raw_dcache_line_size x15, x1 /* x15 = dcache line size */ -.Lloop: - and x12, x16, PAGE_MASK /* x12 = addr */ - /* Test the entry flags. */ -.Ltest_source: - tbz x16, IND_SOURCE_BIT, .Ltest_indirection - - /* Invalidate dest page to PoC. */ - mov x2, x13 - add x20, x2, #PAGE_SIZE - sub x1, x15, #1 - bic x2, x2, x1 -2: dc ivac, x2 - add x2, x2, x15 - cmp x2, x20 - b.lo 2b - dsb sy - - copy_page x13, x12, x1, x2, x3, x4, x5, x6, x7, x8 - b .Lnext -.Ltest_indirection: - tbz x16, IND_INDIRECTION_BIT, .Ltest_destination - mov x14, x12 /* ptr = addr */ - b .Lnext -.Ltest_destination: - tbz x16, IND_DESTINATION_BIT, .Lnext - mov x13, x12 /* dest = addr */ -.Lnext: - ldr x16, [x14], #8 /* entry = *ptr++ */ - tbz x16, IND_DONE_BIT, .Lloop /* while (!(entry & DONE)) */ -.Ldone: - /* wait for writes from copy_page to finish */ - dsb nsh - ic iallu - dsb nsh - isb - - /* Start new image. */ - ldr x4, [x0, #KEXEC_KRELOC_ENTRY_ADDR] /* x4 = kimage_start */ + turn_off_mmu sctlr_el2, x1, x2 /* Turn off MMU at EL2 */ +1: mov x20, xzr /* x20 will hold vector value */ + ldr x11, [x0, #KEXEC_KRELOC_COPY_LEN] + cbz x11, 5f /* Check if need to relocate */ + ldr x20, [x0, #KEXEC_KRELOC_EL2_VECTOR] + cbz x20, 2f /* need to reduce to EL1? */ + msr vbar_el2, x20 /* el2_vector present, means */ + adr x1, 2f /* we will do copy in el1 but */ + msr elr_el2, x1 /* do final jump from el2 */ + eret /* Reduce to EL1 */ +2: set_tcr_t0sz x0, x1, x2 /* Set t0sz for idmaped page */ + set_ttbr x0, x1, x2 /* Set our page tables */ + tlb_invalidate + ldr x1, [x0, #KEXEC_KRELOC_DST_ADDR]; /* arg is not idmapped so */ + ldr x2, [x0, #KEXEC_KRELOC_SRC_ADDR]; /* read before MMU is on */ + turn_on_mmu sctlr_el1, x3, x4 /* Turn MMU back on */ + mov x12, x1 /* x12 dst backup */ +3: copy_page x1, x2, x3, x4, x5, x6, x7, x8, x9, x10 + sub x11, x11, #PAGE_SIZE + cbnz x11, 3b /* page copy loop */ + raw_dcache_line_size x2, x3 /* x2 = dcache line size */ + sub x3, x2, #1 /* x3 = dcache_size - 1 */ + bic x12, x12, x3 +4: dc cvau, x12 /* Flush D-cache */ + add x12, x12, x2 + cmp x12, x1 /* Compare to dst + len */ + b.ne 4b /* D-cache flush loop */ + turn_off_mmu sctlr_el1, x1, x2 /* Turn off MMU */ + tlb_invalidate /* Invalidate TLB */ +5: ldr x4, [x0, #KEXEC_KRELOC_ENTRY_ADDR] /* x4 = kimage_start */ ldr x3, [x0, #KEXEC_KRELOC_KERN_ARG3] ldr x2, [x0, #KEXEC_KRELOC_KERN_ARG2] ldr x1, [x0, #KEXEC_KRELOC_KERN_ARG1] ldr x0, [x0, #KEXEC_KRELOC_KERN_ARG0] /* x0 = dtb address */ - br x4 + cbnz x20, 6f /* need to escalate to el2? 
From patchwork Thu Mar 26 03:24:20 2020
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 11459137
From: Pavel Tatashin
To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org,
 ebiederm@xmission.com, kexec@lists.infradead.org, linux-kernel@vger.kernel.org,
 corbet@lwn.net, catalin.marinas@arm.com, will@kernel.org,
 linux-arm-kernel@lists.infradead.org, maz@kernel.org, james.morse@arm.com,
 vladimir.murzin@arm.com, matthias.bgg@gmail.com, bhsharma@redhat.com,
 linux-mm@kvack.org, mark.rutland@arm.com, steve.capper@arm.com,
 rfontana@redhat.com, tglx@linutronix.de, selindag@gmail.com
Subject: [PATCH v9 18/18] arm64: kexec: remove head from relocation argument
Date: Wed, 25 Mar 2020 23:24:20 -0400
Message-Id: <20200326032420.27220-19-pasha.tatashin@soleen.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200326032420.27220-1-pasha.tatashin@soleen.com>
References: <20200326032420.27220-1-pasha.tatashin@soleen.com>

Now that relocation is done using virtual addresses, reloc_arg->head is no
longer needed.

Signed-off-by: Pavel Tatashin
---
 arch/arm64/include/asm/kexec.h    | 2 --
 arch/arm64/kernel/asm-offsets.c   | 1 -
 arch/arm64/kernel/machine_kexec.c | 1 -
 3 files changed, 4 deletions(-)
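For contrast with what this patch deletes: before the previous patch, the
relocation assembly walked kimage->head, the kexec entry list, to find the
scattered source pages. Roughly, in C, it did the equivalent of the sketch
below (IND_* are the real flag encodings from <linux/kexec.h>; copy_one_page()
is a stand-in for the old copy_page/dc ivac sequence). With the copy now done
through a linear mapping, only src, dst and len are needed, so the head field
can go.

#include <linux/kexec.h>   /* IND_DONE, IND_SOURCE, IND_DESTINATION, IND_INDIRECTION */
#include <asm/page.h>      /* PAGE_MASK, PAGE_SIZE */

static void copy_one_page(void *dst, void *src) { (void)dst; (void)src; }

/* Approximately what the removed .Lloop in relocate_kernel.S used to do. */
static void relocate_by_walking_head(unsigned long head)
{
	unsigned long entry, *ptr = NULL;
	void *dest = NULL;

	for (entry = head; !(entry & IND_DONE); entry = *ptr++) {
		void *addr = (void *)(entry & PAGE_MASK);

		if (entry & IND_DESTINATION)
			dest = addr;                   /* next source pages land here */
		else if (entry & IND_INDIRECTION)
			ptr = (unsigned long *)addr;   /* switch to the next entry table */
		else if (entry & IND_SOURCE) {
			copy_one_page(dest, addr);
			dest += PAGE_SIZE;
		}
	}
}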
diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index 8f4332ac607a..571a2ba886b9 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -100,7 +100,6 @@ extern const unsigned long kexec_el2_vectors_offset;
 
 /*
  * kern_reloc_arg is passed to kernel relocation function as an argument.
- * head		kimage->head, allows to traverse through relocation segments.
  * entry_addr	kimage->start, where to jump from relocation function (new
  *		kernel, or purgatory entry address).
  * kern_arg0	first argument to kernel is its dtb address. The other
@@ -116,7 +115,6 @@ extern const unsigned long kexec_el2_vectors_offset;
  * copy_len	Number of bytes that need to be copied
  */
 struct kern_reloc_arg {
-	phys_addr_t head;
 	phys_addr_t entry_addr;
 	phys_addr_t kern_arg0;
 	phys_addr_t kern_arg1;
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 58ad5b7816ab..8673a5854807 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -130,7 +130,6 @@ int main(void)
   DEFINE(SDEI_EVENT_PRIORITY,	offsetof(struct sdei_registered_event, priority));
 #endif
 #ifdef CONFIG_KEXEC_CORE
-  DEFINE(KEXEC_KRELOC_HEAD,		offsetof(struct kern_reloc_arg, head));
   DEFINE(KEXEC_KRELOC_ENTRY_ADDR,	offsetof(struct kern_reloc_arg, entry_addr));
   DEFINE(KEXEC_KRELOC_KERN_ARG0,	offsetof(struct kern_reloc_arg, kern_arg0));
   DEFINE(KEXEC_KRELOC_KERN_ARG1,	offsetof(struct kern_reloc_arg, kern_arg1));
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index db96d2fab8b2..2d3290d7b122 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -162,7 +162,6 @@ int machine_kexec_post_load(struct kimage *kimage)
 	memcpy(reloc_code, kexec_relocate_code_start, kexec_relocate_code_size);
 	kimage->arch.kern_reloc = __pa(reloc_code) + kexec_kern_reloc_offset;
 	kimage->arch.kern_reloc_arg = __pa(kern_reloc_arg);
-	kern_reloc_arg->head = kimage->head;
 	kern_reloc_arg->entry_addr = kimage->start;
 	kern_reloc_arg->kern_arg0 = kimage->arch.dtb_mem;
 	/* Setup vector table only when EL2 is available, but no VHE */
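With head gone, the argument block handed to the relocation code carries only
addresses and a length. The sketch below is inferred from the KEXEC_KRELOC_*
offsets used by relocate_kernel.S earlier in the series; field order and exact
types are guesses, and the authoritative definition is struct kern_reloc_arg
in arch/arm64/include/asm/kexec.h.

#include <linux/types.h>

/* Inferred sketch only; see arch/arm64/include/asm/kexec.h for the real layout. */
struct kern_reloc_arg_sketch {
	phys_addr_t entry_addr;   /* KEXEC_KRELOC_ENTRY_ADDR: new kernel or purgatory */
	phys_addr_t kern_arg0;    /* KEXEC_KRELOC_KERN_ARG0: dtb address              */
	phys_addr_t kern_arg1;    /* KEXEC_KRELOC_KERN_ARG1                           */
	phys_addr_t kern_arg2;    /* KEXEC_KRELOC_KERN_ARG2                           */
	phys_addr_t kern_arg3;    /* KEXEC_KRELOC_KERN_ARG3                           */
	phys_addr_t el2_vector;   /* KEXEC_KRELOC_EL2_VECTOR: 0 unless EL2, no VHE    */
	phys_addr_t trans_ttbr0;  /* KEXEC_KRELOC_TRANS_TTBR0: transitional TTBR0_EL1 */
	phys_addr_t trans_ttbr1;  /* KEXEC_KRELOC_TRANS_TTBR1: transitional TTBR1_EL1 */
	unsigned long trans_t0sz; /* KEXEC_KRELOC_TRANS_T0SZ                          */
	unsigned long src_addr;   /* KEXEC_KRELOC_SRC_ADDR: where segments sit now    */
	unsigned long dst_addr;   /* KEXEC_KRELOC_DST_ADDR: where they must end up    */
	unsigned long copy_len;   /* KEXEC_KRELOC_COPY_LEN: 0 if no copy is needed    */
};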