From patchwork Mon Sep 9 18:12:05 2019
From: Pavel Tatashin
Subject: [PATCH v4 01/17] kexec: quiet down kexec reboot
Date: Mon, 9 Sep 2019 14:12:05 -0400
Message-Id: <20190909181221.309510-2-pasha.tatashin@soleen.com>
In-Reply-To: <20190909181221.309510-1-pasha.tatashin@soleen.com>
References: <20190909181221.309510-1-pasha.tatashin@soleen.com>

Here is a regular kexec command sequence and output:
=====
$ kexec --reuse-cmdline -i --load Image
$ kexec -e
[  161.342002] kexec_core: Starting new kernel
Welcome to Buildroot
buildroot login:
=====

Even when the "quiet" kernel parameter is specified, "kexec_core: Starting
new kernel" is printed. This message has KERN_EMERG level, but there is no
emergency; it is a normal kexec operation, so quiet it down to the more
appropriate KERN_NOTICE. Machines with a slow console baud rate benefit
from the reduced output.

Signed-off-by: Pavel Tatashin
Reviewed-by: Simon Horman
---
 kernel/kexec_core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index d5870723b8ad..2c5b72863b7b 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -1169,7 +1169,7 @@ int kernel_kexec(void)
 		 * CPU hotplug again; so re-enable it here.
 		 */
 		cpu_hotplug_enable();
-		pr_emerg("Starting new kernel\n");
+		pr_notice("Starting new kernel\n");
 		machine_shutdown();
 	}
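
For readers unfamiliar with printk levels, here is a minimal user-space sketch (not kernel code) of the console filtering this patch relies on. The numeric values are assumptions taken from the conventional printk definitions (KERN_EMERG == 0, KERN_NOTICE == 5, and a "quiet" console loglevel of 4), not from this patch itself.

#include <stdio.h>

#define LOGLEVEL_EMERG		0	/* pr_emerg()  */
#define LOGLEVEL_NOTICE		5	/* pr_notice() */
#define CONSOLE_LOGLEVEL_QUIET	4	/* what the "quiet" parameter selects */

/* A message reaches the console only if its level is below console_loglevel. */
static int suppressed(int msg_level, int console_loglevel)
{
	return msg_level >= console_loglevel;
}

int main(void)
{
	/* prints 0: KERN_EMERG is always shown, even with "quiet" */
	printf("pr_emerg suppressed:  %d\n",
	       suppressed(LOGLEVEL_EMERG, CONSOLE_LOGLEVEL_QUIET));
	/* prints 1: after this patch the message is silenced by "quiet" */
	printf("pr_notice suppressed: %d\n",
	       suppressed(LOGLEVEL_NOTICE, CONSOLE_LOGLEVEL_QUIET));
	return 0;
}
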
From patchwork Mon Sep 9 18:12:06 2019
From: Pavel Tatashin
Subject: [PATCH v4 02/17] arm64: hibernate: pass the allocated pgdp to ttbr0
Date: Mon, 9 Sep 2019 14:12:06 -0400
Message-Id: <20190909181221.309510-3-pasha.tatashin@soleen.com>
In-Reply-To: <20190909181221.309510-1-pasha.tatashin@soleen.com>
References: <20190909181221.309510-1-pasha.tatashin@soleen.com>

ttbr0 should be set to the beginning of the allocated page table (pgdp).
Currently, create_safe_exec_page() sets it to the entry pointer returned by
pgd_offset_raw(), which only works by accident.
Fixes: 0194e760f7d2 ("arm64: hibernate: avoid potential TLB conflict")
Signed-off-by: Pavel Tatashin
---
 arch/arm64/kernel/hibernate.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 9341fcc6e809..025221564252 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -201,6 +201,7 @@ static int create_safe_exec_page(void *src_start, size_t length,
 				 gfp_t mask)
 {
 	int rc = 0;
+	pgd_t *trans_pgd;
 	pgd_t *pgdp;
 	pud_t *pudp;
 	pmd_t *pmdp;
@@ -215,7 +216,8 @@ static int create_safe_exec_page(void *src_start, size_t length,
 	memcpy((void *)dst, src_start, length);
 	__flush_icache_range(dst, dst + length);
 
-	pgdp = pgd_offset_raw(allocator(mask), dst_addr);
+	trans_pgd = allocator(mask);
+	pgdp = pgd_offset_raw(trans_pgd, dst_addr);
 	if (pgd_none(READ_ONCE(*pgdp))) {
 		pudp = allocator(mask);
 		if (!pudp) {
@@ -262,7 +264,7 @@ static int create_safe_exec_page(void *src_start, size_t length,
 	 */
 	cpu_set_reserved_ttbr0();
 	local_flush_tlb_all();
-	write_sysreg(phys_to_ttbr(virt_to_phys(pgdp)), ttbr0_el1);
+	write_sysreg(phys_to_ttbr(virt_to_phys(trans_pgd)), ttbr0_el1);
 	isb();
 
 	*phys_dst_addr = virt_to_phys((void *)dst);
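
To see why the old code only worked by accident, here is a small stand-alone sketch of the pointer arithmetic. The macro shapes mirror arm64's pgd_offset_raw()/pgd_index(), but the constants (PGDIR_SHIFT, PTRS_PER_PGD) and the example address are illustrative assumptions, not taken from a particular kernel configuration.

#include <stdint.h>
#include <stdio.h>

#define PGDIR_SHIFT	39	/* assumed: 4K pages, 4 translation levels */
#define PTRS_PER_PGD	512

typedef uint64_t pgd_t;

#define pgd_index(addr)			(((addr) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))
#define pgd_offset_raw(pgd, addr)	((pgd) + pgd_index(addr))

int main(void)
{
	static pgd_t table[PTRS_PER_PGD];		/* the allocated trans_pgd */
	uint64_t dst_addr = 0x0000008040000000ULL;	/* arbitrary example VA */
	pgd_t *pgdp = pgd_offset_raw(table, dst_addr);

	/*
	 * TTBR0 must point at the base of the table. The entry pointer only
	 * equals the base when pgd_index(dst_addr) happens to be 0, which is
	 * why programming TTBR0 with pgdp "worked by accident".
	 */
	printf("base=%p entry=%p index=%llu\n", (void *)table, (void *)pgdp,
	       (unsigned long long)pgd_index(dst_addr));
	return 0;
}
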
From patchwork Mon Sep 9 18:12:07 2019
From: Pavel Tatashin
Subject: [PATCH v4 03/17] arm64: hibernate: check pgd table allocation
Date: Mon, 9 Sep 2019 14:12:07 -0400
Message-Id: <20190909181221.309510-4-pasha.tatashin@soleen.com>
In-Reply-To: <20190909181221.309510-1-pasha.tatashin@soleen.com>
References: <20190909181221.309510-1-pasha.tatashin@soleen.com>

There is a bug in create_safe_exec_page(): when the page table is
allocated, it is not checked that the allocation succeeded, yet the result
is dereferenced in pgd_none(READ_ONCE(*pgdp)). Check that the allocation
was successful.

Fixes: 82869ac57b5d ("arm64: kernel: Add support for hibernate/suspend-to-disk")
Signed-off-by: Pavel Tatashin
---
 arch/arm64/kernel/hibernate.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 025221564252..227cc26720f7 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -217,6 +217,11 @@ static int create_safe_exec_page(void *src_start, size_t length,
 	__flush_icache_range(dst, dst + length);
 
 	trans_pgd = allocator(mask);
+	if (!trans_pgd) {
+		rc = -ENOMEM;
+		goto out;
+	}
+
 	pgdp = pgd_offset_raw(trans_pgd, dst_addr);
 	if (pgd_none(READ_ONCE(*pgdp))) {
 		pudp = allocator(mask);

From patchwork Mon Sep 9 18:12:08 2019
From: Pavel Tatashin
Subject: [PATCH v4 04/17] arm64: hibernate: use get_safe_page directly
Date: Mon, 9 Sep 2019 14:12:08 -0400
Message-Id: <20190909181221.309510-5-pasha.tatashin@soleen.com>
In-Reply-To: <20190909181221.309510-1-pasha.tatashin@soleen.com>
References: <20190909181221.309510-1-pasha.tatashin@soleen.com>

create_safe_exec_page() uses hibernate's allocator to create a set of page
tables that map a single page which will contain the relocation code.

Remove the allocator-related arguments and use get_safe_page() directly,
as is done by the other local functions in this file, to simplify the
function prototype. Removing this function pointer also makes it easier to
refactor the code later.

Signed-off-by: Pavel Tatashin
Reviewed-by: Matthias Brugger
---
 arch/arm64/kernel/hibernate.c | 17 +++++++----------
 1 file changed, 7 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 227cc26720f7..47a861e0cb0c 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -196,9 +196,7 @@ EXPORT_SYMBOL(arch_hibernation_header_restore);
  */
 static int create_safe_exec_page(void *src_start, size_t length,
 				 unsigned long dst_addr,
-				 phys_addr_t *phys_dst_addr,
-				 void *(*allocator)(gfp_t mask),
-				 gfp_t mask)
+				 phys_addr_t *phys_dst_addr)
 {
 	int rc = 0;
 	pgd_t *trans_pgd;
@@ -206,7 +204,7 @@ static int create_safe_exec_page(void *src_start, size_t length,
 	pud_t *pudp;
 	pmd_t *pmdp;
 	pte_t *ptep;
-	unsigned long dst = (unsigned long)allocator(mask);
+	unsigned long dst = get_safe_page(GFP_ATOMIC);
 
 	if (!dst) {
 		rc = -ENOMEM;
@@ -216,7 +214,7 @@ static int create_safe_exec_page(void *src_start, size_t length,
 	memcpy((void *)dst, src_start, length);
 	__flush_icache_range(dst, dst + length);
 
-	trans_pgd = allocator(mask);
+	trans_pgd = (void *)get_safe_page(GFP_ATOMIC);
 	if (!trans_pgd) {
 		rc = -ENOMEM;
 		goto out;
@@ -224,7 +222,7 @@ static int create_safe_exec_page(void *src_start, size_t length,
 
 	pgdp = pgd_offset_raw(trans_pgd, dst_addr);
 	if (pgd_none(READ_ONCE(*pgdp))) {
-		pudp = allocator(mask);
+		pudp = (void *)get_safe_page(GFP_ATOMIC);
 		if (!pudp) {
 			rc = -ENOMEM;
 			goto out;
@@ -234,7 +232,7 @@ static int create_safe_exec_page(void *src_start, size_t length,
 
 	pudp = pud_offset(pgdp, dst_addr);
 	if (pud_none(READ_ONCE(*pudp))) {
-		pmdp = allocator(mask);
+		pmdp = (void *)get_safe_page(GFP_ATOMIC);
 		if (!pmdp) {
 			rc = -ENOMEM;
 			goto out;
@@ -244,7 +242,7 @@ static int create_safe_exec_page(void *src_start, size_t length,
 
 	pmdp = pmd_offset(pudp, dst_addr);
 	if (pmd_none(READ_ONCE(*pmdp))) {
-		ptep = allocator(mask);
+		ptep = (void *)get_safe_page(GFP_ATOMIC);
 		if (!ptep) {
 			rc = -ENOMEM;
 			goto out;
@@ -530,8 +528,7 @@ int swsusp_arch_resume(void)
 	 */
 	rc = create_safe_exec_page(__hibernate_exit_text_start, exit_size,
 				   (unsigned long)hibernate_exit,
-				   &phys_hibernate_exit,
-				   (void *)get_safe_page, GFP_ATOMIC);
+				   &phys_hibernate_exit);
 	if (rc) {
 		pr_err("Failed to create safe executable page for hibernate_exit code.\n");
 		goto out;
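
As an aside, this is the pattern being removed; the sketch below is user-space C with invented names and get_safe_page() stubbed out with calloc(), so it only illustrates the callback-versus-direct-call shape, not the kernel implementation.

#include <stdio.h>
#include <stdlib.h>

typedef unsigned int gfp_t;
#define GFP_ATOMIC 0x20u			/* placeholder flag value */

static void *get_safe_page(gfp_t mask)		/* stand-in for hibernate's allocator */
{
	(void)mask;
	return calloc(1, 4096);
}

/* Before: the allocator travels through the call chain as a function pointer. */
static void *alloc_table_indirect(void *(*allocator)(gfp_t mask), gfp_t mask)
{
	return allocator(mask);
}

/* After: call the only allocator this code path will ever use. */
static void *alloc_table_direct(void)
{
	return get_safe_page(GFP_ATOMIC);
}

int main(void)
{
	void *a = alloc_table_indirect(get_safe_page, GFP_ATOMIC);
	void *b = alloc_table_direct();

	printf("indirect=%p direct=%p\n", a, b);
	free(a);
	free(b);
	return 0;
}
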
From patchwork Mon Sep 9 18:12:09 2019
From: Pavel Tatashin
Subject: [PATCH v4 05/17] arm64: hibernate: remove gotos in create_safe_exec_page
Date: Mon, 9 Sep 2019 14:12:09 -0400
Message-Id: <20190909181221.309510-6-pasha.tatashin@soleen.com>
In-Reply-To: <20190909181221.309510-1-pasha.tatashin@soleen.com>
References: <20190909181221.309510-1-pasha.tatashin@soleen.com>

Usually, gotos are used to handle cleanup after an error, but in the case
of create_safe_exec_page() there is no cleanup to do, so simply return the
errors directly.
Signed-off-by: Pavel Tatashin
Reviewed-by: James Morse
---
 arch/arm64/kernel/hibernate.c | 34 +++++++++++-----------------------
 1 file changed, 11 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 47a861e0cb0c..7bbeb33c700d 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -198,7 +198,6 @@ static int create_safe_exec_page(void *src_start, size_t length,
 				 unsigned long dst_addr,
 				 phys_addr_t *phys_dst_addr)
 {
-	int rc = 0;
 	pgd_t *trans_pgd;
 	pgd_t *pgdp;
 	pud_t *pudp;
@@ -206,47 +205,37 @@ static int create_safe_exec_page(void *src_start, size_t length,
 	pte_t *ptep;
 	unsigned long dst = get_safe_page(GFP_ATOMIC);
 
-	if (!dst) {
-		rc = -ENOMEM;
-		goto out;
-	}
+	if (!dst)
+		return -ENOMEM;
 
 	memcpy((void *)dst, src_start, length);
 	__flush_icache_range(dst, dst + length);
 
 	trans_pgd = (void *)get_safe_page(GFP_ATOMIC);
-	if (!trans_pgd) {
-		rc = -ENOMEM;
-		goto out;
-	}
+	if (!trans_pgd)
+		return -ENOMEM;
 
 	pgdp = pgd_offset_raw(trans_pgd, dst_addr);
 	if (pgd_none(READ_ONCE(*pgdp))) {
 		pudp = (void *)get_safe_page(GFP_ATOMIC);
-		if (!pudp) {
-			rc = -ENOMEM;
-			goto out;
-		}
+		if (!pudp)
+			return -ENOMEM;
 		pgd_populate(&init_mm, pgdp, pudp);
 	}
 
 	pudp = pud_offset(pgdp, dst_addr);
 	if (pud_none(READ_ONCE(*pudp))) {
 		pmdp = (void *)get_safe_page(GFP_ATOMIC);
-		if (!pmdp) {
-			rc = -ENOMEM;
-			goto out;
-		}
+		if (!pmdp)
+			return -ENOMEM;
 		pud_populate(&init_mm, pudp, pmdp);
 	}
 
 	pmdp = pmd_offset(pudp, dst_addr);
 	if (pmd_none(READ_ONCE(*pmdp))) {
 		ptep = (void *)get_safe_page(GFP_ATOMIC);
-		if (!ptep) {
-			rc = -ENOMEM;
-			goto out;
-		}
+		if (!ptep)
+			return -ENOMEM;
 		pmd_populate_kernel(&init_mm, pmdp, ptep);
 	}
 
@@ -272,8 +261,7 @@ static int create_safe_exec_page(void *src_start, size_t length,
 
 	*phys_dst_addr = virt_to_phys((void *)dst);
 
-out:
-	return rc;
+	return 0;
 }
 
 #define dcache_clean_range(start, end)	__flush_dcache_area(start, (end - start))
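
The guideline behind this cleanup, shown with invented user-space functions (malloc() standing in for the kernel allocations): the goto-out idiom pays off only when an error path has earlier allocations to unwind; with nothing to undo, a direct return is clearer.

#include <stdlib.h>

/* Cleanup needed: the goto label releases what was already acquired. */
static int with_cleanup(void)
{
	int rc = 0;
	char *buf = malloc(64);

	if (!buf)
		return -1;

	char *buf2 = malloc(64);
	if (!buf2) {
		rc = -1;
		goto out;		/* must still free buf before returning */
	}
	free(buf2);
out:
	free(buf);
	return rc;
}

/* Nothing to unwind: just return the error where it happens. */
static int without_cleanup(void)
{
	char *buf = malloc(64);

	if (!buf)
		return -1;
	free(buf);
	return 0;
}

int main(void)
{
	return with_cleanup() || without_cleanup();
}
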
From patchwork Mon Sep 9 18:12:10 2019
From: Pavel Tatashin
Subject: [PATCH v4 06/17] arm64: hibernate: rename dst to page in create_safe_exec_page
Date: Mon, 9 Sep 2019 14:12:10 -0400
Message-Id: <20190909181221.309510-7-pasha.tatashin@soleen.com>
In-Reply-To: <20190909181221.309510-1-pasha.tatashin@soleen.com>
References: <20190909181221.309510-1-pasha.tatashin@soleen.com>

create_safe_exec_page() allocates a safe page, maps it at a specific
location, and returns the physical address of the newly allocated page.
The destination VA and PA are specified in the arguments dst_addr and
phys_dst_addr.

However, within the function it uses "dst", which has unsigned long type
but is actually a pointer in the current virtual address space. This is
confusing to read.

Rename dst to the more appropriate "page" (the page that is created), and
also change its type to "void *".

Signed-off-by: Pavel Tatashin
Reviewed-by: James Morse
---
 arch/arm64/kernel/hibernate.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 7bbeb33c700d..750ecc7f2cbe 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -198,18 +198,18 @@ static int create_safe_exec_page(void *src_start, size_t length,
 				 unsigned long dst_addr,
 				 phys_addr_t *phys_dst_addr)
 {
+	void *page = (void *)get_safe_page(GFP_ATOMIC);
 	pgd_t *trans_pgd;
 	pgd_t *pgdp;
 	pud_t *pudp;
 	pmd_t *pmdp;
 	pte_t *ptep;
-	unsigned long dst = get_safe_page(GFP_ATOMIC);
 
-	if (!dst)
+	if (!page)
 		return -ENOMEM;
 
-	memcpy((void *)dst, src_start, length);
-	__flush_icache_range(dst, dst + length);
+	memcpy(page, src_start, length);
+	__flush_icache_range((unsigned long)page, (unsigned long)page + length);
 
 	trans_pgd = (void *)get_safe_page(GFP_ATOMIC);
 	if (!trans_pgd)
@@ -240,7 +240,7 @@ static int create_safe_exec_page(void *src_start, size_t length,
 	}
 
 	ptep = pte_offset_kernel(pmdp, dst_addr);
-	set_pte(ptep, pfn_pte(virt_to_pfn(dst), PAGE_KERNEL_EXEC));
+	set_pte(ptep, pfn_pte(virt_to_pfn(page), PAGE_KERNEL_EXEC));
 
 	/*
 	 * Load our new page tables. A strict BBM approach requires that we
@@ -259,7 +259,7 @@ static int create_safe_exec_page(void *src_start, size_t length,
 	write_sysreg(phys_to_ttbr(virt_to_phys(trans_pgd)), ttbr0_el1);
 	isb();
 
-	*phys_dst_addr = virt_to_phys((void *)dst);
+	*phys_dst_addr = virt_to_phys(page);
 
 	return 0;
 }

From patchwork Mon Sep 9 18:12:11 2019
From: Pavel Tatashin
Subject: [PATCH v4 07/17] arm64: hibernate: add PUD_SECT_RDONLY
Date: Mon, 9 Sep 2019 14:12:11 -0400
Message-Id: <20190909181221.309510-8-pasha.tatashin@soleen.com>
In-Reply-To: <20190909181221.309510-1-pasha.tatashin@soleen.com>
References: <20190909181221.309510-1-pasha.tatashin@soleen.com>

PMD_SECT_RDONLY is currently used in a pud_* function, which is confusing.
Add PUD_SECT_RDONLY and use it instead.

Signed-off-by: Pavel Tatashin
Acked-by: James Morse
---
 arch/arm64/include/asm/pgtable-hwdef.h | 1 +
 arch/arm64/kernel/hibernate.c          | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index db92950bb1a0..dcb4f13c7888 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -110,6 +110,7 @@
 #define PUD_TABLE_BIT		(_AT(pudval_t, 1) << 1)
 #define PUD_TYPE_MASK		(_AT(pudval_t, 3) << 0)
 #define PUD_TYPE_SECT		(_AT(pudval_t, 1) << 0)
+#define PUD_SECT_RDONLY		(_AT(pudval_t, 1) << 7)		/* AP[2] */
 
 /*
  * Level 2 descriptor (PMD).
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 750ecc7f2cbe..da2b3c5e94cb 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -436,7 +436,7 @@ static int copy_pud(pgd_t *dst_pgdp, pgd_t *src_pgdp, unsigned long start,
 				return -ENOMEM;
 		} else {
 			set_pud(dst_pudp,
-				__pud(pud_val(pud) & ~PMD_SECT_RDONLY));
+				__pud(pud_val(pud) & ~PUD_SECT_RDONLY));
 		}
 	} while (dst_pudp++, src_pudp++, addr = next, addr != end);
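
A stand-alone sketch of the bit this macro names, using the value from the hunk above (AP[2] is bit 7 of the block descriptor); the example descriptor value is made up, and _AT() is reduced to a plain cast so the snippet compiles on its own.

#include <stdint.h>
#include <stdio.h>

typedef uint64_t pudval_t;

#define _AT(type, val)		((type)(val))
#define PUD_SECT_RDONLY		(_AT(pudval_t, 1) << 7)		/* AP[2] */

int main(void)
{
	pudval_t pud = 0x0000000040000791ULL;	/* made-up read-only block descriptor */

	/* copy_pud() clears AP[2] so the copied linear map becomes writable. */
	pudval_t writable = pud & ~PUD_SECT_RDONLY;

	printf("before=0x%016llx after=0x%016llx\n",
	       (unsigned long long)pud, (unsigned long long)writable);
	return 0;
}
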
From patchwork Mon Sep 9 18:12:12 2019
From: Pavel Tatashin
Subject: [PATCH v4 08/17] arm64: hibernate: add trans_pgd public functions
Date: Mon, 9 Sep 2019 14:12:12 -0400
Message-Id: <20190909181221.309510-9-pasha.tatashin@soleen.com>
In-Reply-To: <20190909181221.309510-1-pasha.tatashin@soleen.com>
References: <20190909181221.309510-1-pasha.tatashin@soleen.com>

trans_pgd_create_copy() and trans_pgd_map_page() are going to be the basis
for new shared code that handles page tables for cases that operate
between kernels: kexec and hibernate.

Note: eventually, get_safe_page() will be moved into a function pointer
passed via an argument, but for now keep it as is.
Signed-off-by: Pavel Tatashin --- arch/arm64/kernel/hibernate.c | 94 ++++++++++++++++++++++------------- 1 file changed, 60 insertions(+), 34 deletions(-) diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c index da2b3c5e94cb..178488a902c7 100644 --- a/arch/arm64/kernel/hibernate.c +++ b/arch/arm64/kernel/hibernate.c @@ -182,39 +182,15 @@ int arch_hibernation_header_restore(void *addr) } EXPORT_SYMBOL(arch_hibernation_header_restore); -/* - * Copies length bytes, starting at src_start into an new page, - * perform cache maintentance, then maps it at the specified address low - * address as executable. - * - * This is used by hibernate to copy the code it needs to execute when - * overwriting the kernel text. This function generates a new set of page - * tables, which it loads into ttbr0. - * - * Length is provided as we probably only want 4K of data, even on a 64K - * page system. - */ -static int create_safe_exec_page(void *src_start, size_t length, - unsigned long dst_addr, - phys_addr_t *phys_dst_addr) +int trans_pgd_map_page(pgd_t *trans_pgd, void *page, + unsigned long dst_addr, + pgprot_t pgprot) { - void *page = (void *)get_safe_page(GFP_ATOMIC); - pgd_t *trans_pgd; pgd_t *pgdp; pud_t *pudp; pmd_t *pmdp; pte_t *ptep; - if (!page) - return -ENOMEM; - - memcpy(page, src_start, length); - __flush_icache_range((unsigned long)page, (unsigned long)page + length); - - trans_pgd = (void *)get_safe_page(GFP_ATOMIC); - if (!trans_pgd) - return -ENOMEM; - pgdp = pgd_offset_raw(trans_pgd, dst_addr); if (pgd_none(READ_ONCE(*pgdp))) { pudp = (void *)get_safe_page(GFP_ATOMIC); @@ -242,6 +218,44 @@ static int create_safe_exec_page(void *src_start, size_t length, ptep = pte_offset_kernel(pmdp, dst_addr); set_pte(ptep, pfn_pte(virt_to_pfn(page), PAGE_KERNEL_EXEC)); + return 0; +} + +/* + * Copies length bytes, starting at src_start into an new page, + * perform cache maintenance, then maps it at the specified address low + * address as executable. + * + * This is used by hibernate to copy the code it needs to execute when + * overwriting the kernel text. This function generates a new set of page + * tables, which it loads into ttbr0. + * + * Length is provided as we probably only want 4K of data, even on a 64K + * page system. + */ +static int create_safe_exec_page(void *src_start, size_t length, + unsigned long dst_addr, + phys_addr_t *phys_dst_addr) +{ + void *page = (void *)get_safe_page(GFP_ATOMIC); + pgd_t *trans_pgd; + int rc; + + if (!page) + return -ENOMEM; + + memcpy(page, src_start, length); + __flush_icache_range((unsigned long)page, (unsigned long)page + length); + + trans_pgd = (void *)get_safe_page(GFP_ATOMIC); + if (!trans_pgd) + return -ENOMEM; + + rc = trans_pgd_map_page(trans_pgd, page, dst_addr, + PAGE_KERNEL_EXEC); + if (rc) + return rc; + /* * Load our new page tables. A strict BBM approach requires that we * ensure that TLBs are free of any entries that may overlap with the @@ -462,6 +476,24 @@ static int copy_page_tables(pgd_t *dst_pgdp, unsigned long start, return 0; } +int trans_pgd_create_copy(pgd_t **dst_pgdp, unsigned long start, + unsigned long end) +{ + int rc; + pgd_t *trans_pgd = (pgd_t *)get_safe_page(GFP_ATOMIC); + + if (!trans_pgd) { + pr_err("Failed to allocate memory for temporary page tables.\n"); + return -ENOMEM; + } + + rc = copy_page_tables(trans_pgd, start, end); + if (!rc) + *dst_pgdp = trans_pgd; + + return rc; +} + /* * Setup then Resume from the hibernate image using swsusp_arch_suspend_exit(). 
 *
@@ -483,13 +515,7 @@ int swsusp_arch_resume(void)
 	 * Create a second copy of just the linear map, and use this when
 	 * restoring.
 	 */
-	tmp_pg_dir = (pgd_t *)get_safe_page(GFP_ATOMIC);
-	if (!tmp_pg_dir) {
-		pr_err("Failed to allocate memory for temporary page tables.\n");
-		rc = -ENOMEM;
-		goto out;
-	}
-	rc = copy_page_tables(tmp_pg_dir, PAGE_OFFSET, 0);
+	rc = trans_pgd_create_copy(&tmp_pg_dir, PAGE_OFFSET, 0);
 	if (rc)
 		goto out;

From patchwork Mon Sep 9 18:12:13 2019
From: Pavel Tatashin
Subject: [PATCH v4 09/17] arm64: hibernate: move page handling function to new trans_pgd.c
Date: Mon, 9 Sep 2019 14:12:13 -0400
Message-Id: <20190909181221.309510-10-pasha.tatashin@soleen.com>
In-Reply-To: <20190909181221.309510-1-pasha.tatashin@soleen.com>

Now that we have abstracted the required functions, move them to a new home.
Later, we will generalize these functions so that they are useful outside of
hibernation.
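As a rough illustration of what the new home means for a future user (not
something this patch adds): once CONFIG_TRANS_TABLE is enabled by HIBERNATION
or KEXEC_CORE, any arm64 code can pull the interface in through the new
header. The caller below is hypothetical.

#include <asm/trans_pgd.h>	/* new header added by this patch */

/* Hypothetical consumer: clone the kernel linear map into a scratch pgd. */
static int example_clone_linear_map(pgd_t **tmp_pgd)
{
	/* Same range hibernate uses: [PAGE_OFFSET, 0), i.e. the linear map. */
	return trans_pgd_create_copy(tmp_pgd, PAGE_OFFSET, 0);
}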
Signed-off-by: Pavel Tatashin --- arch/arm64/Kconfig | 4 + arch/arm64/include/asm/trans_pgd.h | 20 +++ arch/arm64/kernel/hibernate.c | 199 +------------------------- arch/arm64/mm/Makefile | 1 + arch/arm64/mm/trans_pgd.c | 219 +++++++++++++++++++++++++++++ 5 files changed, 245 insertions(+), 198 deletions(-) create mode 100644 arch/arm64/include/asm/trans_pgd.h create mode 100644 arch/arm64/mm/trans_pgd.c diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 3adcec05b1f6..91a7416ffe4e 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -999,6 +999,10 @@ config CRASH_DUMP For more details see Documentation/admin-guide/kdump/kdump.rst +config TRANS_TABLE + def_bool y + depends on HIBERNATION || KEXEC_CORE + config XEN_DOM0 def_bool y depends on XEN diff --git a/arch/arm64/include/asm/trans_pgd.h b/arch/arm64/include/asm/trans_pgd.h new file mode 100644 index 000000000000..c7b5402b7d87 --- /dev/null +++ b/arch/arm64/include/asm/trans_pgd.h @@ -0,0 +1,20 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* + * Copyright (c) 2019, Microsoft Corporation. + * Pavel Tatashin + */ + +#ifndef _ASM_TRANS_TABLE_H +#define _ASM_TRANS_TABLE_H + +#include +#include + +int trans_pgd_create_copy(pgd_t **dst_pgdp, unsigned long start, + unsigned long end); + +int trans_pgd_map_page(pgd_t *trans_pgd, void *page, unsigned long dst_addr, + pgprot_t pgprot); + +#endif /* _ASM_TRANS_TABLE_H */ diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c index 178488a902c7..94ede33bd777 100644 --- a/arch/arm64/kernel/hibernate.c +++ b/arch/arm64/kernel/hibernate.c @@ -16,7 +16,6 @@ #define pr_fmt(x) "hibernate: " x #include #include -#include #include #include #include @@ -31,14 +30,12 @@ #include #include #include -#include -#include -#include #include #include #include #include #include +#include #include /* @@ -182,45 +179,6 @@ int arch_hibernation_header_restore(void *addr) } EXPORT_SYMBOL(arch_hibernation_header_restore); -int trans_pgd_map_page(pgd_t *trans_pgd, void *page, - unsigned long dst_addr, - pgprot_t pgprot) -{ - pgd_t *pgdp; - pud_t *pudp; - pmd_t *pmdp; - pte_t *ptep; - - pgdp = pgd_offset_raw(trans_pgd, dst_addr); - if (pgd_none(READ_ONCE(*pgdp))) { - pudp = (void *)get_safe_page(GFP_ATOMIC); - if (!pudp) - return -ENOMEM; - pgd_populate(&init_mm, pgdp, pudp); - } - - pudp = pud_offset(pgdp, dst_addr); - if (pud_none(READ_ONCE(*pudp))) { - pmdp = (void *)get_safe_page(GFP_ATOMIC); - if (!pmdp) - return -ENOMEM; - pud_populate(&init_mm, pudp, pmdp); - } - - pmdp = pmd_offset(pudp, dst_addr); - if (pmd_none(READ_ONCE(*pmdp))) { - ptep = (void *)get_safe_page(GFP_ATOMIC); - if (!ptep) - return -ENOMEM; - pmd_populate_kernel(&init_mm, pmdp, ptep); - } - - ptep = pte_offset_kernel(pmdp, dst_addr); - set_pte(ptep, pfn_pte(virt_to_pfn(page), PAGE_KERNEL_EXEC)); - - return 0; -} - /* * Copies length bytes, starting at src_start into an new page, * perform cache maintenance, then maps it at the specified address low @@ -339,161 +297,6 @@ int swsusp_arch_suspend(void) return ret; } -static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr) -{ - pte_t pte = READ_ONCE(*src_ptep); - - if (pte_valid(pte)) { - /* - * Resume will overwrite areas that may be marked - * read only (code, rodata). Clear the RDONLY bit from - * the temporary mappings we use during restore. 
- */ - set_pte(dst_ptep, pte_mkwrite(pte)); - } else if (debug_pagealloc_enabled() && !pte_none(pte)) { - /* - * debug_pagealloc will removed the PTE_VALID bit if - * the page isn't in use by the resume kernel. It may have - * been in use by the original kernel, in which case we need - * to put it back in our copy to do the restore. - * - * Before marking this entry valid, check the pfn should - * be mapped. - */ - BUG_ON(!pfn_valid(pte_pfn(pte))); - - set_pte(dst_ptep, pte_mkpresent(pte_mkwrite(pte))); - } -} - -static int copy_pte(pmd_t *dst_pmdp, pmd_t *src_pmdp, unsigned long start, - unsigned long end) -{ - pte_t *src_ptep; - pte_t *dst_ptep; - unsigned long addr = start; - - dst_ptep = (pte_t *)get_safe_page(GFP_ATOMIC); - if (!dst_ptep) - return -ENOMEM; - pmd_populate_kernel(&init_mm, dst_pmdp, dst_ptep); - dst_ptep = pte_offset_kernel(dst_pmdp, start); - - src_ptep = pte_offset_kernel(src_pmdp, start); - do { - _copy_pte(dst_ptep, src_ptep, addr); - } while (dst_ptep++, src_ptep++, addr += PAGE_SIZE, addr != end); - - return 0; -} - -static int copy_pmd(pud_t *dst_pudp, pud_t *src_pudp, unsigned long start, - unsigned long end) -{ - pmd_t *src_pmdp; - pmd_t *dst_pmdp; - unsigned long next; - unsigned long addr = start; - - if (pud_none(READ_ONCE(*dst_pudp))) { - dst_pmdp = (pmd_t *)get_safe_page(GFP_ATOMIC); - if (!dst_pmdp) - return -ENOMEM; - pud_populate(&init_mm, dst_pudp, dst_pmdp); - } - dst_pmdp = pmd_offset(dst_pudp, start); - - src_pmdp = pmd_offset(src_pudp, start); - do { - pmd_t pmd = READ_ONCE(*src_pmdp); - - next = pmd_addr_end(addr, end); - if (pmd_none(pmd)) - continue; - if (pmd_table(pmd)) { - if (copy_pte(dst_pmdp, src_pmdp, addr, next)) - return -ENOMEM; - } else { - set_pmd(dst_pmdp, - __pmd(pmd_val(pmd) & ~PMD_SECT_RDONLY)); - } - } while (dst_pmdp++, src_pmdp++, addr = next, addr != end); - - return 0; -} - -static int copy_pud(pgd_t *dst_pgdp, pgd_t *src_pgdp, unsigned long start, - unsigned long end) -{ - pud_t *dst_pudp; - pud_t *src_pudp; - unsigned long next; - unsigned long addr = start; - - if (pgd_none(READ_ONCE(*dst_pgdp))) { - dst_pudp = (pud_t *)get_safe_page(GFP_ATOMIC); - if (!dst_pudp) - return -ENOMEM; - pgd_populate(&init_mm, dst_pgdp, dst_pudp); - } - dst_pudp = pud_offset(dst_pgdp, start); - - src_pudp = pud_offset(src_pgdp, start); - do { - pud_t pud = READ_ONCE(*src_pudp); - - next = pud_addr_end(addr, end); - if (pud_none(pud)) - continue; - if (pud_table(pud)) { - if (copy_pmd(dst_pudp, src_pudp, addr, next)) - return -ENOMEM; - } else { - set_pud(dst_pudp, - __pud(pud_val(pud) & ~PUD_SECT_RDONLY)); - } - } while (dst_pudp++, src_pudp++, addr = next, addr != end); - - return 0; -} - -static int copy_page_tables(pgd_t *dst_pgdp, unsigned long start, - unsigned long end) -{ - unsigned long next; - unsigned long addr = start; - pgd_t *src_pgdp = pgd_offset_k(start); - - dst_pgdp = pgd_offset_raw(dst_pgdp, start); - do { - next = pgd_addr_end(addr, end); - if (pgd_none(READ_ONCE(*src_pgdp))) - continue; - if (copy_pud(dst_pgdp, src_pgdp, addr, next)) - return -ENOMEM; - } while (dst_pgdp++, src_pgdp++, addr = next, addr != end); - - return 0; -} - -int trans_pgd_create_copy(pgd_t **dst_pgdp, unsigned long start, - unsigned long end) -{ - int rc; - pgd_t *trans_pgd = (pgd_t *)get_safe_page(GFP_ATOMIC); - - if (!trans_pgd) { - pr_err("Failed to allocate memory for temporary page tables.\n"); - return -ENOMEM; - } - - rc = copy_page_tables(trans_pgd, start, end); - if (!rc) - *dst_pgdp = trans_pgd; - - return rc; -} - /* * Setup then Resume 
from the hibernate image using swsusp_arch_suspend_exit(). * diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile index 849c1df3d214..f3002f1d0e61 100644 --- a/arch/arm64/mm/Makefile +++ b/arch/arm64/mm/Makefile @@ -6,6 +6,7 @@ obj-y := dma-mapping.o extable.o fault.o init.o \ obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o obj-$(CONFIG_ARM64_PTDUMP_CORE) += dump.o obj-$(CONFIG_ARM64_PTDUMP_DEBUGFS) += ptdump_debugfs.o +obj-$(CONFIG_TRANS_TABLE) += trans_pgd.o obj-$(CONFIG_NUMA) += numa.o obj-$(CONFIG_DEBUG_VIRTUAL) += physaddr.o KASAN_SANITIZE_physaddr.o += n diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c new file mode 100644 index 000000000000..5ac712b92439 --- /dev/null +++ b/arch/arm64/mm/trans_pgd.c @@ -0,0 +1,219 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* + * Transitional page tables for kexec and hibernate + * + * This file derived from: arch/arm64/kernel/hibernate.c + * + * Copyright (c) 2019, Microsoft Corporation. + * Pavel Tatashin + * + */ + +/* + * Transitional tables are used during system transferring from one world to + * another: such as during hibernate restore, and kexec reboots. During these + * phases one cannot rely on page table not being overwritten. This is because + * hibernate and kexec can overwrite the current page tables during transition. + */ + +#include +#include +#include +#include +#include +#include +#include + +static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr) +{ + pte_t pte = READ_ONCE(*src_ptep); + + if (pte_valid(pte)) { + /* + * Resume will overwrite areas that may be marked + * read only (code, rodata). Clear the RDONLY bit from + * the temporary mappings we use during restore. + */ + set_pte(dst_ptep, pte_mkwrite(pte)); + } else if (debug_pagealloc_enabled() && !pte_none(pte)) { + /* + * debug_pagealloc will removed the PTE_VALID bit if + * the page isn't in use by the resume kernel. It may have + * been in use by the original kernel, in which case we need + * to put it back in our copy to do the restore. + * + * Before marking this entry valid, check the pfn should + * be mapped. 
+ */ + BUG_ON(!pfn_valid(pte_pfn(pte))); + + set_pte(dst_ptep, pte_mkpresent(pte_mkwrite(pte))); + } +} + +static int copy_pte(pmd_t *dst_pmdp, pmd_t *src_pmdp, unsigned long start, + unsigned long end) +{ + pte_t *src_ptep; + pte_t *dst_ptep; + unsigned long addr = start; + + dst_ptep = (pte_t *)get_safe_page(GFP_ATOMIC); + if (!dst_ptep) + return -ENOMEM; + pmd_populate_kernel(&init_mm, dst_pmdp, dst_ptep); + dst_ptep = pte_offset_kernel(dst_pmdp, start); + + src_ptep = pte_offset_kernel(src_pmdp, start); + do { + _copy_pte(dst_ptep, src_ptep, addr); + } while (dst_ptep++, src_ptep++, addr += PAGE_SIZE, addr != end); + + return 0; +} + +static int copy_pmd(pud_t *dst_pudp, pud_t *src_pudp, unsigned long start, + unsigned long end) +{ + pmd_t *src_pmdp; + pmd_t *dst_pmdp; + unsigned long next; + unsigned long addr = start; + + if (pud_none(READ_ONCE(*dst_pudp))) { + dst_pmdp = (pmd_t *)get_safe_page(GFP_ATOMIC); + if (!dst_pmdp) + return -ENOMEM; + pud_populate(&init_mm, dst_pudp, dst_pmdp); + } + dst_pmdp = pmd_offset(dst_pudp, start); + + src_pmdp = pmd_offset(src_pudp, start); + do { + pmd_t pmd = READ_ONCE(*src_pmdp); + + next = pmd_addr_end(addr, end); + if (pmd_none(pmd)) + continue; + if (pmd_table(pmd)) { + if (copy_pte(dst_pmdp, src_pmdp, addr, next)) + return -ENOMEM; + } else { + set_pmd(dst_pmdp, + __pmd(pmd_val(pmd) & ~PMD_SECT_RDONLY)); + } + } while (dst_pmdp++, src_pmdp++, addr = next, addr != end); + + return 0; +} + +static int copy_pud(pgd_t *dst_pgdp, pgd_t *src_pgdp, unsigned long start, + unsigned long end) +{ + pud_t *dst_pudp; + pud_t *src_pudp; + unsigned long next; + unsigned long addr = start; + + if (pgd_none(READ_ONCE(*dst_pgdp))) { + dst_pudp = (pud_t *)get_safe_page(GFP_ATOMIC); + if (!dst_pudp) + return -ENOMEM; + pgd_populate(&init_mm, dst_pgdp, dst_pudp); + } + dst_pudp = pud_offset(dst_pgdp, start); + + src_pudp = pud_offset(src_pgdp, start); + do { + pud_t pud = READ_ONCE(*src_pudp); + + next = pud_addr_end(addr, end); + if (pud_none(pud)) + continue; + if (pud_table(pud)) { + if (copy_pmd(dst_pudp, src_pudp, addr, next)) + return -ENOMEM; + } else { + set_pud(dst_pudp, + __pud(pud_val(pud) & ~PUD_SECT_RDONLY)); + } + } while (dst_pudp++, src_pudp++, addr = next, addr != end); + + return 0; +} + +static int copy_page_tables(pgd_t *dst_pgdp, unsigned long start, + unsigned long end) +{ + unsigned long next; + unsigned long addr = start; + pgd_t *src_pgdp = pgd_offset_k(start); + + dst_pgdp = pgd_offset_raw(dst_pgdp, start); + do { + next = pgd_addr_end(addr, end); + if (pgd_none(READ_ONCE(*src_pgdp))) + continue; + if (copy_pud(dst_pgdp, src_pgdp, addr, next)) + return -ENOMEM; + } while (dst_pgdp++, src_pgdp++, addr = next, addr != end); + + return 0; +} + +int trans_pgd_create_copy(pgd_t **dst_pgdp, unsigned long start, + unsigned long end) +{ + int rc; + pgd_t *trans_pgd = (pgd_t *)get_safe_page(GFP_ATOMIC); + + if (!trans_pgd) { + pr_err("Failed to allocate memory for temporary page tables.\n"); + return -ENOMEM; + } + + rc = copy_page_tables(trans_pgd, start, end); + if (!rc) + *dst_pgdp = trans_pgd; + + return rc; +} + +int trans_pgd_map_page(pgd_t *trans_pgd, void *page, unsigned long dst_addr, + pgprot_t pgprot) +{ + pgd_t *pgdp; + pud_t *pudp; + pmd_t *pmdp; + pte_t *ptep; + + pgdp = pgd_offset_raw(trans_pgd, dst_addr); + if (pgd_none(READ_ONCE(*pgdp))) { + pudp = (void *)get_safe_page(GFP_ATOMIC); + if (!pudp) + return -ENOMEM; + pgd_populate(&init_mm, pgdp, pudp); + } + + pudp = pud_offset(pgdp, dst_addr); + if (pud_none(READ_ONCE(*pudp))) { + 
pmdp = (void *)get_safe_page(GFP_ATOMIC); + if (!pmdp) + return -ENOMEM; + pud_populate(&init_mm, pudp, pmdp); + } + + pmdp = pmd_offset(pudp, dst_addr); + if (pmd_none(READ_ONCE(*pmdp))) { + ptep = (void *)get_safe_page(GFP_ATOMIC); + if (!ptep) + return -ENOMEM; + pmd_populate_kernel(&init_mm, pmdp, ptep); + } + + ptep = pte_offset_kernel(pmdp, dst_addr); + set_pte(ptep, pfn_pte(virt_to_pfn(page), PAGE_KERNEL_EXEC)); + + return 0; +} From patchwork Mon Sep 9 18:12:14 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pasha Tatashin X-Patchwork-Id: 11138345 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0BCB41599 for ; Mon, 9 Sep 2019 18:12:51 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id BEF9C218DE for ; Mon, 9 Sep 2019 18:12:50 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=soleen.com header.i=@soleen.com header.b="cSdHPRA/" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org BEF9C218DE Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=soleen.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id B7FBB6B0269; Mon, 9 Sep 2019 14:12:40 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id A8E816B026A; Mon, 9 Sep 2019 14:12:40 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 8DEEF6B026B; Mon, 9 Sep 2019 14:12:40 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0111.hostedemail.com [216.40.44.111]) by kanga.kvack.org (Postfix) with ESMTP id 65F6C6B0269 for ; Mon, 9 Sep 2019 14:12:40 -0400 (EDT) Received: from smtpin05.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with SMTP id BEB61181AC9B4 for ; Mon, 9 Sep 2019 18:12:39 +0000 (UTC) X-FDA: 75916177638.05.ocean11_6e81c20ecd713 X-Spam-Summary: 2,0,0,e422a4f61fc9b2d7,d41d8cd98f00b204,pasha.tatashin@soleen.com,:pasha.tatashin@soleen.com:jmorris@namei.org:sashal@kernel.org:ebiederm@xmission.com:kexec@lists.infradead.org:linux-kernel@vger.kernel.org:corbet@lwn.net:catalin.marinas@arm.com:will@kernel.org:linux-arm-kernel@lists.infradead.org:marc.zyngier@arm.com:james.morse@arm.com:vladimir.murzin@arm.com:matthias.bgg@gmail.com:bhsharma@redhat.com::mark.rutland@arm.com,RULES_HIT:2:41:355:379:541:800:960:973:988:989:1260:1311:1314:1345:1359:1381:1431:1437:1515:1535:1605:1606:1730:1747:1777:1792:1978:2194:2199:2393:2559:2562:2897:3138:3139:3140:3141:3142:3865:3867:3868:3871:3872:3874:4119:4250:4321:4605:5007:6117:6261:6653:6737:7875:8603:10004:11026:11473:11657:11658:11914:12043:12048:12291:12296:12297:12438:12517:12519:12555:12895:13894:14096:14394:21080:21444:21451:21627:21796:30003:30036:30054:30075,0,RBL:209.85.160.194:@soleen.com:.lbl8.mailshell.net-62.2.0.100 66.100.201.201,CacheIP:none,Bayesian:0.5,0 .5,0.5,N X-HE-Tag: ocean11_6e81c20ecd713 X-Filterd-Recvd-Size: 8330 Received: from mail-qt1-f194.google.com (mail-qt1-f194.google.com [209.85.160.194]) by imf24.hostedemail.com (Postfix) with ESMTP for ; Mon, 
From: Pavel Tatashin
Subject: [PATCH v4 10/17] arm64: trans_pgd: make trans_pgd_map_page generic
Date: Mon, 9 Sep 2019 14:12:14 -0400
Message-Id: <20190909181221.309510-11-pasha.tatashin@soleen.com>
In-Reply-To: <20190909181221.309510-1-pasha.tatashin@soleen.com>

kexec is going to use a different allocator, so make trans_pgd_map_page()
accept the allocator as an argument; kexec is also going to use a different
mapping protection, so pass that in as an argument as well.
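To make the new interface concrete, here is a sketch of a caller that
supplies its own allocator and a non-executable protection. The allocator is
only an example (backed by get_zeroed_page()); the real kexec user later in
the series provides its own page source, and the example_* names are
invented.

/* Example allocator: anything that returns one zeroed page will do. */
static void *example_alloc_page(void *arg)
{
	return (void *)get_zeroed_page((gfp_t)(unsigned long)arg);
}

static int example_map_page(pgd_t *trans_pgd, void *page, unsigned long dst_addr)
{
	struct trans_pgd_info info = {
		.trans_alloc_page	= example_alloc_page,
		.trans_alloc_arg	= (void *)GFP_KERNEL,
	};

	/* Both the allocator and the protection are now chosen by the caller. */
	return trans_pgd_map_page(&info, trans_pgd, page, dst_addr, PAGE_KERNEL);
}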
Signed-off-by: Pavel Tatashin Reviewed-by: Matthias Brugger --- arch/arm64/include/asm/trans_pgd.h | 24 ++++++++++++++++++++++-- arch/arm64/kernel/hibernate.c | 12 +++++++++++- arch/arm64/mm/trans_pgd.c | 17 +++++++++++------ 3 files changed, 44 insertions(+), 9 deletions(-) diff --git a/arch/arm64/include/asm/trans_pgd.h b/arch/arm64/include/asm/trans_pgd.h index c7b5402b7d87..53f67ec84cdc 100644 --- a/arch/arm64/include/asm/trans_pgd.h +++ b/arch/arm64/include/asm/trans_pgd.h @@ -11,10 +11,30 @@ #include #include +/* + * trans_alloc_page + * - Allocator that should return exactly one zeroed page, if this + * allocator fails, trans_pgd returns -ENOMEM error. + * + * trans_alloc_arg + * - Passed to trans_alloc_page as an argument + */ + +struct trans_pgd_info { + void * (*trans_alloc_page)(void *arg); + void *trans_alloc_arg; +}; + int trans_pgd_create_copy(pgd_t **dst_pgdp, unsigned long start, unsigned long end); -int trans_pgd_map_page(pgd_t *trans_pgd, void *page, unsigned long dst_addr, - pgprot_t pgprot); +/* + * Add map entry to trans_pgd for a base-size page at PTE level. + * page: page to be mapped. + * dst_addr: new VA address for the pages + * pgprot: protection for the page. + */ +int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd, + void *page, unsigned long dst_addr, pgprot_t pgprot); #endif /* _ASM_TRANS_TABLE_H */ diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c index 94ede33bd777..9b75b680ab70 100644 --- a/arch/arm64/kernel/hibernate.c +++ b/arch/arm64/kernel/hibernate.c @@ -179,6 +179,12 @@ int arch_hibernation_header_restore(void *addr) } EXPORT_SYMBOL(arch_hibernation_header_restore); +static void * +hibernate_page_alloc(void *arg) +{ + return (void *)get_safe_page((gfp_t)(unsigned long)arg); +} + /* * Copies length bytes, starting at src_start into an new page, * perform cache maintenance, then maps it at the specified address low @@ -195,6 +201,10 @@ static int create_safe_exec_page(void *src_start, size_t length, unsigned long dst_addr, phys_addr_t *phys_dst_addr) { + struct trans_pgd_info trans_info = { + .trans_alloc_page = hibernate_page_alloc, + .trans_alloc_arg = (void *)GFP_ATOMIC, + }; void *page = (void *)get_safe_page(GFP_ATOMIC); pgd_t *trans_pgd; int rc; @@ -209,7 +219,7 @@ static int create_safe_exec_page(void *src_start, size_t length, if (!trans_pgd) return -ENOMEM; - rc = trans_pgd_map_page(trans_pgd, page, dst_addr, + rc = trans_pgd_map_page(&trans_info, trans_pgd, page, dst_addr, PAGE_KERNEL_EXEC); if (rc) return rc; diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c index 5ac712b92439..7521d558a0b9 100644 --- a/arch/arm64/mm/trans_pgd.c +++ b/arch/arm64/mm/trans_pgd.c @@ -25,6 +25,11 @@ #include #include +static void *trans_alloc(struct trans_pgd_info *info) +{ + return info->trans_alloc_page(info->trans_alloc_arg); +} + static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr) { pte_t pte = READ_ONCE(*src_ptep); @@ -180,8 +185,8 @@ int trans_pgd_create_copy(pgd_t **dst_pgdp, unsigned long start, return rc; } -int trans_pgd_map_page(pgd_t *trans_pgd, void *page, unsigned long dst_addr, - pgprot_t pgprot) +int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd, + void *page, unsigned long dst_addr, pgprot_t pgprot) { pgd_t *pgdp; pud_t *pudp; @@ -190,7 +195,7 @@ int trans_pgd_map_page(pgd_t *trans_pgd, void *page, unsigned long dst_addr, pgdp = pgd_offset_raw(trans_pgd, dst_addr); if (pgd_none(READ_ONCE(*pgdp))) { - pudp = (void *)get_safe_page(GFP_ATOMIC); + 
pudp = trans_alloc(info); if (!pudp) return -ENOMEM; pgd_populate(&init_mm, pgdp, pudp); @@ -198,7 +203,7 @@ int trans_pgd_map_page(pgd_t *trans_pgd, void *page, unsigned long dst_addr, pudp = pud_offset(pgdp, dst_addr); if (pud_none(READ_ONCE(*pudp))) { - pmdp = (void *)get_safe_page(GFP_ATOMIC); + pmdp = trans_alloc(info); if (!pmdp) return -ENOMEM; pud_populate(&init_mm, pudp, pmdp); @@ -206,14 +211,14 @@ int trans_pgd_map_page(pgd_t *trans_pgd, void *page, unsigned long dst_addr, pmdp = pmd_offset(pudp, dst_addr); if (pmd_none(READ_ONCE(*pmdp))) { - ptep = (void *)get_safe_page(GFP_ATOMIC); + ptep = trans_alloc(info); if (!ptep) return -ENOMEM; pmd_populate_kernel(&init_mm, pmdp, ptep); } ptep = pte_offset_kernel(pmdp, dst_addr); - set_pte(ptep, pfn_pte(virt_to_pfn(page), PAGE_KERNEL_EXEC)); + set_pte(ptep, pfn_pte(virt_to_pfn(page), pgprot)); return 0; } From patchwork Mon Sep 9 18:12:15 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pasha Tatashin X-Patchwork-Id: 11138347 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id A83301599 for ; Mon, 9 Sep 2019 18:12:53 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 674C3218DE for ; Mon, 9 Sep 2019 18:12:53 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=soleen.com header.i=@soleen.com header.b="b4JWeuEq" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 674C3218DE Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=soleen.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 29E0D6B026A; Mon, 9 Sep 2019 14:12:42 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 1DC456B026B; Mon, 9 Sep 2019 14:12:42 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 07CC86B026C; Mon, 9 Sep 2019 14:12:42 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0181.hostedemail.com [216.40.44.181]) by kanga.kvack.org (Postfix) with ESMTP id D75BA6B026A for ; Mon, 9 Sep 2019 14:12:41 -0400 (EDT) Received: from smtpin02.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with SMTP id 88ECE180AD7C3 for ; Mon, 9 Sep 2019 18:12:41 +0000 (UTC) X-FDA: 75916177722.02.lamp64_6ebcf0e964a3d X-Spam-Summary: 
[73.69.118.222]) by smtp.gmail.com with ESMTPSA id q8sm5611310qtj.76.2019.09.09.11.12.38 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 09 Sep 2019 11:12:39 -0700 (PDT) From: Pavel Tatashin To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org, ebiederm@xmission.com, kexec@lists.infradead.org, linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com, will@kernel.org, linux-arm-kernel@lists.infradead.org, marc.zyngier@arm.com, james.morse@arm.com, vladimir.murzin@arm.com, matthias.bgg@gmail.com, bhsharma@redhat.com, linux-mm@kvack.org, mark.rutland@arm.com Subject: [PATCH v4 11/17] arm64: trans_pgd: pass allocator trans_pgd_create_copy Date: Mon, 9 Sep 2019 14:12:15 -0400 Message-Id: <20190909181221.309510-12-pasha.tatashin@soleen.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20190909181221.309510-1-pasha.tatashin@soleen.com> References: <20190909181221.309510-1-pasha.tatashin@soleen.com> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Make trans_pgd_create_copy and its subroutines to use allocator that is passed as an argument Signed-off-by: Pavel Tatashin --- arch/arm64/include/asm/trans_pgd.h | 7 ++++-- arch/arm64/kernel/hibernate.c | 6 ++++- arch/arm64/mm/trans_pgd.c | 35 +++++++++++++++--------------- 3 files changed, 28 insertions(+), 20 deletions(-) diff --git a/arch/arm64/include/asm/trans_pgd.h b/arch/arm64/include/asm/trans_pgd.h index 53f67ec84cdc..61a725fe1093 100644 --- a/arch/arm64/include/asm/trans_pgd.h +++ b/arch/arm64/include/asm/trans_pgd.h @@ -25,8 +25,11 @@ struct trans_pgd_info { void *trans_alloc_arg; }; -int trans_pgd_create_copy(pgd_t **dst_pgdp, unsigned long start, - unsigned long end); +/* + * Create trans_pgd and copy linear map [start, end) + */ +int trans_pgd_create_copy(struct trans_pgd_info *info, pgd_t **trans_pgd, + unsigned long start, unsigned long end); /* * Add map entry to trans_pgd for a base-size page at PTE level. diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c index 9b75b680ab70..36eccf63629c 100644 --- a/arch/arm64/kernel/hibernate.c +++ b/arch/arm64/kernel/hibernate.c @@ -322,13 +322,17 @@ int swsusp_arch_resume(void) phys_addr_t phys_hibernate_exit; void __noreturn (*hibernate_exit)(phys_addr_t, phys_addr_t, void *, void *, phys_addr_t, phys_addr_t); + struct trans_pgd_info trans_info = { + .trans_alloc_page = hibernate_page_alloc, + .trans_alloc_arg = (void *)GFP_ATOMIC, + }; /* * Restoring the memory image will overwrite the ttbr1 page tables. * Create a second copy of just the linear map, and use this when * restoring. 
*/ - rc = trans_pgd_create_copy(&tmp_pg_dir, PAGE_OFFSET, 0); + rc = trans_pgd_create_copy(&trans_info, &tmp_pg_dir, PAGE_OFFSET, 0); if (rc) goto out; diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c index 7521d558a0b9..dfde87159840 100644 --- a/arch/arm64/mm/trans_pgd.c +++ b/arch/arm64/mm/trans_pgd.c @@ -57,14 +57,14 @@ static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr) } } -static int copy_pte(pmd_t *dst_pmdp, pmd_t *src_pmdp, unsigned long start, - unsigned long end) +static int copy_pte(struct trans_pgd_info *info, pmd_t *dst_pmdp, + pmd_t *src_pmdp, unsigned long start, unsigned long end) { pte_t *src_ptep; pte_t *dst_ptep; unsigned long addr = start; - dst_ptep = (pte_t *)get_safe_page(GFP_ATOMIC); + dst_ptep = trans_alloc(info); if (!dst_ptep) return -ENOMEM; pmd_populate_kernel(&init_mm, dst_pmdp, dst_ptep); @@ -78,8 +78,8 @@ static int copy_pte(pmd_t *dst_pmdp, pmd_t *src_pmdp, unsigned long start, return 0; } -static int copy_pmd(pud_t *dst_pudp, pud_t *src_pudp, unsigned long start, - unsigned long end) +static int copy_pmd(struct trans_pgd_info *info, pud_t *dst_pudp, + pud_t *src_pudp, unsigned long start, unsigned long end) { pmd_t *src_pmdp; pmd_t *dst_pmdp; @@ -87,7 +87,7 @@ static int copy_pmd(pud_t *dst_pudp, pud_t *src_pudp, unsigned long start, unsigned long addr = start; if (pud_none(READ_ONCE(*dst_pudp))) { - dst_pmdp = (pmd_t *)get_safe_page(GFP_ATOMIC); + dst_pmdp = trans_alloc(info); if (!dst_pmdp) return -ENOMEM; pud_populate(&init_mm, dst_pudp, dst_pmdp); @@ -102,7 +102,7 @@ static int copy_pmd(pud_t *dst_pudp, pud_t *src_pudp, unsigned long start, if (pmd_none(pmd)) continue; if (pmd_table(pmd)) { - if (copy_pte(dst_pmdp, src_pmdp, addr, next)) + if (copy_pte(info, dst_pmdp, src_pmdp, addr, next)) return -ENOMEM; } else { set_pmd(dst_pmdp, @@ -113,7 +113,8 @@ static int copy_pmd(pud_t *dst_pudp, pud_t *src_pudp, unsigned long start, return 0; } -static int copy_pud(pgd_t *dst_pgdp, pgd_t *src_pgdp, unsigned long start, +static int copy_pud(struct trans_pgd_info *info, pgd_t *dst_pgdp, + pgd_t *src_pgdp, unsigned long start, unsigned long end) { pud_t *dst_pudp; @@ -122,7 +123,7 @@ static int copy_pud(pgd_t *dst_pgdp, pgd_t *src_pgdp, unsigned long start, unsigned long addr = start; if (pgd_none(READ_ONCE(*dst_pgdp))) { - dst_pudp = (pud_t *)get_safe_page(GFP_ATOMIC); + dst_pudp = trans_alloc(info); if (!dst_pudp) return -ENOMEM; pgd_populate(&init_mm, dst_pgdp, dst_pudp); @@ -137,7 +138,7 @@ static int copy_pud(pgd_t *dst_pgdp, pgd_t *src_pgdp, unsigned long start, if (pud_none(pud)) continue; if (pud_table(pud)) { - if (copy_pmd(dst_pudp, src_pudp, addr, next)) + if (copy_pmd(info, dst_pudp, src_pudp, addr, next)) return -ENOMEM; } else { set_pud(dst_pudp, @@ -148,8 +149,8 @@ static int copy_pud(pgd_t *dst_pgdp, pgd_t *src_pgdp, unsigned long start, return 0; } -static int copy_page_tables(pgd_t *dst_pgdp, unsigned long start, - unsigned long end) +static int copy_page_tables(struct trans_pgd_info *info, pgd_t *dst_pgdp, + unsigned long start, unsigned long end) { unsigned long next; unsigned long addr = start; @@ -160,25 +161,25 @@ static int copy_page_tables(pgd_t *dst_pgdp, unsigned long start, next = pgd_addr_end(addr, end); if (pgd_none(READ_ONCE(*src_pgdp))) continue; - if (copy_pud(dst_pgdp, src_pgdp, addr, next)) + if (copy_pud(info, dst_pgdp, src_pgdp, addr, next)) return -ENOMEM; } while (dst_pgdp++, src_pgdp++, addr = next, addr != end); return 0; } -int trans_pgd_create_copy(pgd_t **dst_pgdp, unsigned 
long start, - unsigned long end) +int trans_pgd_create_copy(struct trans_pgd_info *info, pgd_t **dst_pgdp, + unsigned long start, unsigned long end) { int rc; - pgd_t *trans_pgd = (pgd_t *)get_safe_page(GFP_ATOMIC); + pgd_t *trans_pgd = trans_alloc(info); if (!trans_pgd) { pr_err("Failed to allocate memory for temporary page tables.\n"); return -ENOMEM; } - rc = copy_page_tables(trans_pgd, start, end); + rc = copy_page_tables(info, trans_pgd, start, end); if (!rc) *dst_pgdp = trans_pgd; From patchwork Mon Sep 9 18:12:16 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pasha Tatashin X-Patchwork-Id: 11138349 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 566B014ED for ; Mon, 9 Sep 2019 18:12:56 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 1A8C6218DE for ; Mon, 9 Sep 2019 18:12:56 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=soleen.com header.i=@soleen.com header.b="gEFy66rK" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 1A8C6218DE Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=soleen.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id DC46E6B026B; Mon, 9 Sep 2019 14:12:43 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id CD6E46B026C; Mon, 9 Sep 2019 14:12:43 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id B77556B026D; Mon, 9 Sep 2019 14:12:43 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0098.hostedemail.com [216.40.44.98]) by kanga.kvack.org (Postfix) with ESMTP id 8F9F56B026B for ; Mon, 9 Sep 2019 14:12:43 -0400 (EDT) Received: from smtpin27.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with SMTP id ED9E88243762 for ; Mon, 9 Sep 2019 18:12:42 +0000 (UTC) X-FDA: 75916177764.27.pot36_6ef0108e6e027 X-Spam-Summary: 2,0,0,427969a89e8618b2,d41d8cd98f00b204,pasha.tatashin@soleen.com,:pasha.tatashin@soleen.com:jmorris@namei.org:sashal@kernel.org:ebiederm@xmission.com:kexec@lists.infradead.org:linux-kernel@vger.kernel.org:corbet@lwn.net:catalin.marinas@arm.com:will@kernel.org:linux-arm-kernel@lists.infradead.org:marc.zyngier@arm.com:james.morse@arm.com:vladimir.murzin@arm.com:matthias.bgg@gmail.com:bhsharma@redhat.com::mark.rutland@arm.com,RULES_HIT:41:355:379:541:800:960:973:988:989:1260:1311:1314:1345:1359:1381:1437:1515:1535:1542:1711:1730:1747:1777:1792:2393:2559:2562:3138:3139:3140:3141:3142:3353:3865:3871:3872:3874:4321:5007:6117:6261:6653:6737:7903:10004:11026:11658:11914:12048:12297:12517:12519:12555:12895:12986:13255:13894:14096:14181:14394:14721:21080:21433:21444:21627:30012:30036:30054:30075,0,RBL:209.85.160.196:@soleen.com:.lbl8.mailshell.net-62.2.0.100 66.100.201.201,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:ne utral,Cu X-HE-Tag: pot36_6ef0108e6e027 X-Filterd-Recvd-Size: 5686 Received: from mail-qt1-f196.google.com (mail-qt1-f196.google.com [209.85.160.196]) by 
From: Pavel Tatashin
Subject: [PATCH v4 12/17] arm64: trans_pgd: pass NULL instead of init_mm to *_populate functions
Date: Mon, 9 Sep 2019 14:12:16 -0400
Message-Id: <20190909181221.309510-13-pasha.tatashin@soleen.com>
In-Reply-To: <20190909181221.309510-1-pasha.tatashin@soleen.com>

trans_pgd_* should be independent of any mm context, because the tables it
creates are used when there is no mm context around, as they live between
kernels. Simply replace the init_mm arguments with NULL.
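For context, a sketch of why passing NULL is harmless here: on arm64 the mm
argument to these populate helpers is not used at all, so the call only wires
up the table entry. The snippet below is a simplified rendering of the arm64
helper (based on arch/arm64/include/asm/pgalloc.h at the time), shown to make
the reasoning explicit rather than quoted verbatim.

/*
 * Simplified shape of the arm64 helper: 'mm' is ignored, so trans_pgd code,
 * which runs with no mm context at all, can legitimately pass NULL.
 */
static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmdp,
				       pte_t *ptep)
{
	__pmd_populate(pmdp, __pa(ptep), PMD_TYPE_TABLE);
}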
Signed-off-by: Pavel Tatashin --- arch/arm64/mm/trans_pgd.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c index dfde87159840..e7b8625b3ac3 100644 --- a/arch/arm64/mm/trans_pgd.c +++ b/arch/arm64/mm/trans_pgd.c @@ -67,7 +67,7 @@ static int copy_pte(struct trans_pgd_info *info, pmd_t *dst_pmdp, dst_ptep = trans_alloc(info); if (!dst_ptep) return -ENOMEM; - pmd_populate_kernel(&init_mm, dst_pmdp, dst_ptep); + pmd_populate_kernel(NULL, dst_pmdp, dst_ptep); dst_ptep = pte_offset_kernel(dst_pmdp, start); src_ptep = pte_offset_kernel(src_pmdp, start); @@ -90,7 +90,7 @@ static int copy_pmd(struct trans_pgd_info *info, pud_t *dst_pudp, dst_pmdp = trans_alloc(info); if (!dst_pmdp) return -ENOMEM; - pud_populate(&init_mm, dst_pudp, dst_pmdp); + pud_populate(NULL, dst_pudp, dst_pmdp); } dst_pmdp = pmd_offset(dst_pudp, start); @@ -126,7 +126,7 @@ static int copy_pud(struct trans_pgd_info *info, pgd_t *dst_pgdp, dst_pudp = trans_alloc(info); if (!dst_pudp) return -ENOMEM; - pgd_populate(&init_mm, dst_pgdp, dst_pudp); + pgd_populate(NULL, dst_pgdp, dst_pudp); } dst_pudp = pud_offset(dst_pgdp, start); @@ -199,7 +199,7 @@ int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd, pudp = trans_alloc(info); if (!pudp) return -ENOMEM; - pgd_populate(&init_mm, pgdp, pudp); + pgd_populate(NULL, pgdp, pudp); } pudp = pud_offset(pgdp, dst_addr); @@ -207,7 +207,7 @@ int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd, pmdp = trans_alloc(info); if (!pmdp) return -ENOMEM; - pud_populate(&init_mm, pudp, pmdp); + pud_populate(NULL, pudp, pmdp); } pmdp = pmd_offset(pudp, dst_addr); @@ -215,7 +215,7 @@ int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd, ptep = trans_alloc(info); if (!ptep) return -ENOMEM; - pmd_populate_kernel(&init_mm, pmdp, ptep); + pmd_populate_kernel(NULL, pmdp, ptep); } ptep = pte_offset_kernel(pmdp, dst_addr); From patchwork Mon Sep 9 18:12:17 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pasha Tatashin X-Patchwork-Id: 11138351 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id ED0241599 for ; Mon, 9 Sep 2019 18:12:58 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id BA02C21A4A for ; Mon, 9 Sep 2019 18:12:58 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=soleen.com header.i=@soleen.com header.b="WSuiLH/E" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org BA02C21A4A Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=soleen.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id ED1776B026D; Mon, 9 Sep 2019 14:12:44 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id E34826B026E; Mon, 9 Sep 2019 14:12:44 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id CD8EA6B026F; Mon, 9 Sep 2019 14:12:44 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0176.hostedemail.com [216.40.44.176]) by 
[73.69.118.222]) by smtp.gmail.com with ESMTPSA id q8sm5611310qtj.76.2019.09.09.11.12.41 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 09 Sep 2019 11:12:42 -0700 (PDT) From: Pavel Tatashin To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org, ebiederm@xmission.com, kexec@lists.infradead.org, linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com, will@kernel.org, linux-arm-kernel@lists.infradead.org, marc.zyngier@arm.com, james.morse@arm.com, vladimir.murzin@arm.com, matthias.bgg@gmail.com, bhsharma@redhat.com, linux-mm@kvack.org, mark.rutland@arm.com Subject: [PATCH v4 13/17] kexec: add machine_kexec_post_load() Date: Mon, 9 Sep 2019 14:12:17 -0400 Message-Id: <20190909181221.309510-14-pasha.tatashin@soleen.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20190909181221.309510-1-pasha.tatashin@soleen.com> References: <20190909181221.309510-1-pasha.tatashin@soleen.com> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: It is the same as machine_kexec_prepare(), but is called after segments are loaded. This way, can do processing work with already loaded relocation segments. One such example is arm64: it has to have segments loaded in order to create a page table, but it cannot do it during kexec time, because at that time allocations won't be possible anymore. Signed-off-by: Pavel Tatashin --- kernel/kexec.c | 4 ++++ kernel/kexec_core.c | 6 ++++++ kernel/kexec_file.c | 4 ++++ kernel/kexec_internal.h | 2 ++ 4 files changed, 16 insertions(+) diff --git a/kernel/kexec.c b/kernel/kexec.c index 1b018f1a6e0d..27b71dc7b35a 100644 --- a/kernel/kexec.c +++ b/kernel/kexec.c @@ -159,6 +159,10 @@ static int do_kexec_load(unsigned long entry, unsigned long nr_segments, kimage_terminate(image); + ret = machine_kexec_post_load(image); + if (ret) + goto out; + /* Install the new kernel and uninstall the old */ image = xchg(dest_image, image); diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c index 2c5b72863b7b..8360645d1bbe 100644 --- a/kernel/kexec_core.c +++ b/kernel/kexec_core.c @@ -587,6 +587,12 @@ static void kimage_free_extra_pages(struct kimage *image) kimage_free_page_list(&image->unusable_pages); } + +int __weak machine_kexec_post_load(struct kimage *image) +{ + return 0; +} + void kimage_terminate(struct kimage *image) { if (*image->entry != 0) diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c index b8cc032d5620..cb531d768114 100644 --- a/kernel/kexec_file.c +++ b/kernel/kexec_file.c @@ -391,6 +391,10 @@ SYSCALL_DEFINE5(kexec_file_load, int, kernel_fd, int, initrd_fd, kimage_terminate(image); + ret = machine_kexec_post_load(image); + if (ret) + goto out; + /* * Free up any temporary buffers allocated which are not needed * after image has been loaded diff --git a/kernel/kexec_internal.h b/kernel/kexec_internal.h index 48aaf2ac0d0d..39d30ccf8d87 100644 --- a/kernel/kexec_internal.h +++ b/kernel/kexec_internal.h @@ -13,6 +13,8 @@ void kimage_terminate(struct kimage *image); int kimage_is_destination_range(struct kimage *image, unsigned long start, unsigned long end); +int machine_kexec_post_load(struct kimage *image); + extern struct mutex kexec_mutex; #ifdef CONFIG_KEXEC_FILE From patchwork Mon Sep 9 18:12:18 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pasha Tatashin X-Patchwork-Id: 11138353 Return-Path: Received: from mail.kernel.org 
From: Pavel Tatashin
Subject: [PATCH v4 14/17] arm64: kexec: move relocation function setup and clean up
Date: Mon, 9 Sep 2019 14:12:18 -0400
Message-Id: <20190909181221.309510-15-pasha.tatashin@soleen.com>
In-Reply-To: <20190909181221.309510-1-pasha.tatashin@soleen.com>

Currently, the kernel relocation function is configured in machine_kexec() at kexec reboot time, using control_code_page. This work is more logical to do during kexec load, so remove it from reboot time and move the setup of this function to the newly added machine_kexec_post_load(). In addition, do some cleanup: add information about the relocation function to kexec_image_info(), and remove the extra debug messages from machine_kexec(). Make dtb_mem always available: when CONFIG_KEXEC_FILE is not configured, dtb_mem is simply set to zero anyway. 
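The hook being filled in here, machine_kexec_post_load(), is the __weak stub introduced in the previous patch: core kexec code provides a no-op default, and an architecture that needs load-time work supplies a strong definition that the linker picks instead. Below is a minimal userspace sketch of that weak/strong pattern; the file names, the machine_post_load() name and the image_id parameter are invented for illustration, and none of this is kernel code.

/* ---- core.c (hypothetical) ---- */
#include <stdio.h>

/* Weak default: used only when no strong definition is linked in,
 * like the __weak machine_kexec_post_load() stub in kexec_core.c. */
__attribute__((weak)) int machine_post_load(int image_id)
{
        (void)image_id;
        return 0;               /* nothing to do by default */
}

static int do_load(int image_id)
{
        /* ... segments would be loaded here ... */
        return machine_post_load(image_id);    /* arch hook runs after loading */
}

int main(void)
{
        printf("post_load returned %d\n", do_load(42));
        return 0;
}

/* ---- arch.c (hypothetical) ---- */
#include <stdio.h>

/* Strong definition: the linker prefers this over the weak default,
 * the way arm64's machine_kexec_post_load() overrides the stub. */
int machine_post_load(int image_id)
{
        printf("arch: load-time setup for image %d\n", image_id);
        return 0;
}

Linked together (for example, gcc -o demo core.c arch.c) the strong definition in arch.c wins; leave arch.c out of the link and the program quietly falls back to the weak no-op, which is how architectures that do not implement machine_kexec_post_load() keep working unchanged.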
Signed-off-by: Pavel Tatashin --- arch/arm64/include/asm/kexec.h | 3 +- arch/arm64/kernel/machine_kexec.c | 49 +++++++++++-------------------- 2 files changed, 19 insertions(+), 33 deletions(-) diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h index 12a561a54128..d15ca1ca1e83 100644 --- a/arch/arm64/include/asm/kexec.h +++ b/arch/arm64/include/asm/kexec.h @@ -90,14 +90,15 @@ static inline void crash_prepare_suspend(void) {} static inline void crash_post_resume(void) {} #endif -#ifdef CONFIG_KEXEC_FILE #define ARCH_HAS_KIMAGE_ARCH struct kimage_arch { void *dtb; unsigned long dtb_mem; + unsigned long kern_reloc; }; +#ifdef CONFIG_KEXEC_FILE extern const struct kexec_file_ops kexec_image_ops; struct kimage; diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c index 0df8493624e0..9b41da50e6f7 100644 --- a/arch/arm64/kernel/machine_kexec.c +++ b/arch/arm64/kernel/machine_kexec.c @@ -42,6 +42,7 @@ static void _kexec_image_info(const char *func, int line, pr_debug(" start: %lx\n", kimage->start); pr_debug(" head: %lx\n", kimage->head); pr_debug(" nr_segments: %lu\n", kimage->nr_segments); + pr_debug(" kern_reloc: %pa\n", &kimage->arch.kern_reloc); for (i = 0; i < kimage->nr_segments; i++) { pr_debug(" segment[%lu]: %016lx - %016lx, 0x%lx bytes, %lu pages\n", @@ -58,6 +59,19 @@ void machine_kexec_cleanup(struct kimage *kimage) /* Empty routine needed to avoid build errors. */ } +int machine_kexec_post_load(struct kimage *kimage) +{ + unsigned long kern_reloc; + + kern_reloc = page_to_phys(kimage->control_code_page); + memcpy(__va(kern_reloc), arm64_relocate_new_kernel, + arm64_relocate_new_kernel_size); + kimage->arch.kern_reloc = kern_reloc; + + kexec_image_info(kimage); + return 0; +} + /** * machine_kexec_prepare - Prepare for a kexec reboot. * @@ -67,8 +81,6 @@ void machine_kexec_cleanup(struct kimage *kimage) */ int machine_kexec_prepare(struct kimage *kimage) { - kexec_image_info(kimage); - if (kimage->type != KEXEC_TYPE_CRASH && cpus_are_stuck_in_kernel()) { pr_err("Can't kexec: CPUs are stuck in the kernel.\n"); return -EBUSY; @@ -143,8 +155,7 @@ static void kexec_segment_flush(const struct kimage *kimage) */ void machine_kexec(struct kimage *kimage) { - phys_addr_t reboot_code_buffer_phys; - void *reboot_code_buffer; + void *reboot_code_buffer = phys_to_virt(kimage->arch.kern_reloc); bool in_kexec_crash = (kimage == kexec_crash_image); bool stuck_cpus = cpus_are_stuck_in_kernel(); @@ -155,30 +166,8 @@ void machine_kexec(struct kimage *kimage) WARN(in_kexec_crash && (stuck_cpus || smp_crash_stop_failed()), "Some CPUs may be stale, kdump will be unreliable.\n"); - reboot_code_buffer_phys = page_to_phys(kimage->control_code_page); - reboot_code_buffer = phys_to_virt(reboot_code_buffer_phys); - kexec_image_info(kimage); - pr_debug("%s:%d: control_code_page: %p\n", __func__, __LINE__, - kimage->control_code_page); - pr_debug("%s:%d: reboot_code_buffer_phys: %pa\n", __func__, __LINE__, - &reboot_code_buffer_phys); - pr_debug("%s:%d: reboot_code_buffer: %p\n", __func__, __LINE__, - reboot_code_buffer); - pr_debug("%s:%d: relocate_new_kernel: %p\n", __func__, __LINE__, - arm64_relocate_new_kernel); - pr_debug("%s:%d: relocate_new_kernel_size: 0x%lx(%lu) bytes\n", - __func__, __LINE__, arm64_relocate_new_kernel_size, - arm64_relocate_new_kernel_size); - - /* - * Copy arm64_relocate_new_kernel to the reboot_code_buffer for use - * after the kernel is shut down. 
- */ - memcpy(reboot_code_buffer, arm64_relocate_new_kernel, - arm64_relocate_new_kernel_size); - /* Flush the reboot_code_buffer in preparation for its execution. */ __flush_dcache_area(reboot_code_buffer, arm64_relocate_new_kernel_size); @@ -214,12 +203,8 @@ void machine_kexec(struct kimage *kimage) * userspace (kexec-tools). * In kexec_file case, the kernel starts directly without purgatory. */ - cpu_soft_restart(reboot_code_buffer_phys, kimage->head, kimage->start, -#ifdef CONFIG_KEXEC_FILE - kimage->arch.dtb_mem); -#else - 0); -#endif + cpu_soft_restart(kimage->arch.kern_reloc, kimage->head, kimage->start, + kimage->arch.dtb_mem); BUG(); /* Should never get here. */ } From patchwork Mon Sep 9 18:12:19 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pasha Tatashin X-Patchwork-Id: 11138357 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id C70B91599 for ; Mon, 9 Sep 2019 18:13:07 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 7985E21A4A for ; Mon, 9 Sep 2019 18:13:07 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=soleen.com header.i=@soleen.com header.b="Si3PCeiu" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 7985E21A4A Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=soleen.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 0A0DA6B0270; Mon, 9 Sep 2019 14:12:48 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 053646B0271; Mon, 9 Sep 2019 14:12:47 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id E0B166B0272; Mon, 9 Sep 2019 14:12:47 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0234.hostedemail.com [216.40.44.234]) by kanga.kvack.org (Postfix) with ESMTP id B3D516B0270 for ; Mon, 9 Sep 2019 14:12:47 -0400 (EDT) Received: from smtpin01.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with SMTP id 59DE7181AC9AE for ; Mon, 9 Sep 2019 18:12:47 +0000 (UTC) X-FDA: 75916177974.01.ant84_6f97a92e10403 X-Spam-Summary: 
From: Pavel Tatashin
Subject: [PATCH v4 15/17] arm64: kexec: add expandable argument to relocation function
Date: Mon, 9 Sep 2019 14:12:19 -0400
Message-Id: <20190909181221.309510-16-pasha.tatashin@soleen.com>
In-Reply-To: <20190909181221.309510-1-pasha.tatashin@soleen.com>

Currently, the kexec relocation function (arm64_relocate_new_kernel) accepts the following arguments:
head:    start of the array that contains relocation information.
entry:   entry point for the new kernel or purgatory.
dtb_mem: first and only argument to entry.
The number of arguments cannot easily be expanded, because this function is also called from HVC_SOFT_RESTART, which preserves only three arguments. Also, arm64_relocate_new_kernel is written in assembly and is called without a stack, so there is no place from which to move extra arguments into free registers. Soon we will need to pass more arguments: once we enable the MMU we will need to pass information about the page tables. Another benefit of letting this function accept more arguments is that the new kernel can actually take up to four arguments (x0-x3); currently only one is used, and if in the future we need more (for example, passing the time at which the previous kernel exited, for a precise measurement of time spent in purgatory), we would not easily be able to do that while arm64_relocate_new_kernel cannot accept more arguments. So, add a new struct, kern_reloc_arg, and place it in a kexec-safe page (i.e. memory that is not overwritten during relocation). Thus, arm64_relocate_new_kernel takes only one argument, which contains all the needed information. Signed-off-by: Pavel Tatashin --- arch/arm64/include/asm/kexec.h | 18 ++++++ arch/arm64/kernel/asm-offsets.c | 9 +++ arch/arm64/kernel/cpu-reset.S | 4 +- arch/arm64/kernel/cpu-reset.h | 8 +-- arch/arm64/kernel/machine_kexec.c | 29 +++++++++- arch/arm64/kernel/relocate_kernel.S | 88 ++++++++++------------------- 6 files changed, 87 insertions(+), 69 deletions(-) diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h index d15ca1ca1e83..d5b79d4c7fae 100644 --- a/arch/arm64/include/asm/kexec.h +++ b/arch/arm64/include/asm/kexec.h @@ -90,12 +90,30 @@ static inline void crash_prepare_suspend(void) {} static inline void crash_post_resume(void) {} #endif +/* + * kern_reloc_arg is passed to kernel relocation function as an argument. + * head kimage->head, allows to traverse through relocation segments. + * entry_addr kimage->start, where to jump from relocation function (new + * kernel, or purgatory entry address). + * kern_arg0 first argument to kernel is its dtb address. 
The other + * arguments are currently unused, and must be set to 0 + */ +struct kern_reloc_arg { + unsigned long head; + unsigned long entry_addr; + unsigned long kern_arg0; + unsigned long kern_arg1; + unsigned long kern_arg2; + unsigned long kern_arg3; +}; + #define ARCH_HAS_KIMAGE_ARCH struct kimage_arch { void *dtb; unsigned long dtb_mem; unsigned long kern_reloc; + unsigned long kern_reloc_arg; }; #ifdef CONFIG_KEXEC_FILE diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c index 214685760e1c..900394907fd8 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -23,6 +23,7 @@ #include #include #include +#include int main(void) { @@ -126,6 +127,14 @@ int main(void) #ifdef CONFIG_ARM_SDE_INTERFACE DEFINE(SDEI_EVENT_INTREGS, offsetof(struct sdei_registered_event, interrupted_regs)); DEFINE(SDEI_EVENT_PRIORITY, offsetof(struct sdei_registered_event, priority)); +#endif +#ifdef CONFIG_KEXEC_CORE + DEFINE(KRELOC_HEAD, offsetof(struct kern_reloc_arg, head)); + DEFINE(KRELOC_ENTRY_ADDR, offsetof(struct kern_reloc_arg, entry_addr)); + DEFINE(KRELOC_KERN_ARG0, offsetof(struct kern_reloc_arg, kern_arg0)); + DEFINE(KRELOC_KERN_ARG1, offsetof(struct kern_reloc_arg, kern_arg1)); + DEFINE(KRELOC_KERN_ARG2, offsetof(struct kern_reloc_arg, kern_arg2)); + DEFINE(KRELOC_KERN_ARG3, offsetof(struct kern_reloc_arg, kern_arg3)); #endif return 0; } diff --git a/arch/arm64/kernel/cpu-reset.S b/arch/arm64/kernel/cpu-reset.S index 6ea337d464c4..64c78a42919f 100644 --- a/arch/arm64/kernel/cpu-reset.S +++ b/arch/arm64/kernel/cpu-reset.S @@ -43,9 +43,7 @@ ENTRY(__cpu_soft_restart) hvc #0 // no return 1: mov x18, x1 // entry - mov x0, x2 // arg0 - mov x1, x3 // arg1 - mov x2, x4 // arg2 + mov x0, x2 // arg br x18 ENDPROC(__cpu_soft_restart) diff --git a/arch/arm64/kernel/cpu-reset.h b/arch/arm64/kernel/cpu-reset.h index ed50e9587ad8..7a8720ff186f 100644 --- a/arch/arm64/kernel/cpu-reset.h +++ b/arch/arm64/kernel/cpu-reset.h @@ -11,12 +11,10 @@ #include void __cpu_soft_restart(unsigned long el2_switch, unsigned long entry, - unsigned long arg0, unsigned long arg1, unsigned long arg2); + unsigned long arg); static inline void __noreturn cpu_soft_restart(unsigned long entry, - unsigned long arg0, - unsigned long arg1, - unsigned long arg2) + unsigned long arg) { typeof(__cpu_soft_restart) *restart; @@ -25,7 +23,7 @@ static inline void __noreturn cpu_soft_restart(unsigned long entry, restart = (void *)__pa_symbol(__cpu_soft_restart); cpu_install_idmap(); - restart(el2_switch, entry, arg0, arg1, arg2); + restart(el2_switch, entry, arg); unreachable(); } diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c index 9b41da50e6f7..fb6138a1c9ff 100644 --- a/arch/arm64/kernel/machine_kexec.c +++ b/arch/arm64/kernel/machine_kexec.c @@ -43,6 +43,7 @@ static void _kexec_image_info(const char *func, int line, pr_debug(" head: %lx\n", kimage->head); pr_debug(" nr_segments: %lu\n", kimage->nr_segments); pr_debug(" kern_reloc: %pa\n", &kimage->arch.kern_reloc); + pr_debug(" kern_reloc_arg: %pa\n", &kimage->arch.kern_reloc_arg); for (i = 0; i < kimage->nr_segments; i++) { pr_debug(" segment[%lu]: %016lx - %016lx, 0x%lx bytes, %lu pages\n", @@ -59,14 +60,39 @@ void machine_kexec_cleanup(struct kimage *kimage) /* Empty routine needed to avoid build errors. 
*/ } +/* Allocates pages for kexec page table */ +static void *kexec_page_alloc(void *arg) +{ + struct kimage *kimage = (struct kimage *)arg; + struct page *page = kimage_alloc_control_pages(kimage, 0); + + if (!page) + return NULL; + + memset(page_address(page), 0, PAGE_SIZE); + + return page_address(page); +} + int machine_kexec_post_load(struct kimage *kimage) { unsigned long kern_reloc; + struct kern_reloc_arg *kern_reloc_arg; kern_reloc = page_to_phys(kimage->control_code_page); memcpy(__va(kern_reloc), arm64_relocate_new_kernel, arm64_relocate_new_kernel_size); + + kern_reloc_arg = kexec_page_alloc(kimage); + if (!kern_reloc_arg) + return -ENOMEM; + kimage->arch.kern_reloc = kern_reloc; + kimage->arch.kern_reloc_arg = __pa(kern_reloc_arg); + + kern_reloc_arg->head = kimage->head; + kern_reloc_arg->entry_addr = kimage->start; + kern_reloc_arg->kern_arg0 = kimage->arch.dtb_mem; kexec_image_info(kimage); return 0; @@ -203,8 +229,7 @@ void machine_kexec(struct kimage *kimage) * userspace (kexec-tools). * In kexec_file case, the kernel starts directly without purgatory. */ - cpu_soft_restart(kimage->arch.kern_reloc, kimage->head, kimage->start, - kimage->arch.dtb_mem); + cpu_soft_restart(kimage->arch.kern_reloc, kimage->arch.kern_reloc_arg); BUG(); /* Should never get here. */ } diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S index c1d7db71a726..d352faf7cbe6 100644 --- a/arch/arm64/kernel/relocate_kernel.S +++ b/arch/arm64/kernel/relocate_kernel.S @@ -8,7 +8,7 @@ #include #include - +#include #include #include #include @@ -17,86 +17,58 @@ /* * arm64_relocate_new_kernel - Put a 2nd stage image in place and boot it. * - * The memory that the old kernel occupies may be overwritten when coping the + * The memory that the old kernel occupies may be overwritten when copying the * new image to its final location. To assure that the * arm64_relocate_new_kernel routine which does that copy is not overwritten, * all code and data needed by arm64_relocate_new_kernel must be between the * symbols arm64_relocate_new_kernel and arm64_relocate_new_kernel_end. The * machine_kexec() routine will copy arm64_relocate_new_kernel to the kexec - * control_code_page, a special page which has been set up to be preserved - * during the copy operation. + * safe memory that has been set up to be preserved during the copy operation. */ ENTRY(arm64_relocate_new_kernel) - - /* Setup the list loop variables. */ - mov x18, x2 /* x18 = dtb address */ - mov x17, x1 /* x17 = kimage_start */ - mov x16, x0 /* x16 = kimage_head */ - raw_dcache_line_size x15, x0 /* x15 = dcache line size */ - mov x14, xzr /* x14 = entry ptr */ - mov x13, xzr /* x13 = copy dest */ - /* Clear the sctlr_el2 flags. */ - mrs x0, CurrentEL - cmp x0, #CurrentEL_EL2 + mrs x2, CurrentEL + cmp x2, #CurrentEL_EL2 b.ne 1f - mrs x0, sctlr_el2 + mrs x2, sctlr_el2 ldr x1, =SCTLR_ELx_FLAGS - bic x0, x0, x1 + bic x2, x2, x1 pre_disable_mmu_workaround - msr sctlr_el2, x0 + msr sctlr_el2, x2 isb -1: - - /* Check if the new image needs relocation. */ +1: /* Check if the new image needs relocation. */ + ldr x16, [x0, #KRELOC_HEAD] /* x16 = kimage_head */ tbnz x16, IND_DONE_BIT, .Ldone - + raw_dcache_line_size x15, x1 /* x15 = dcache line size */ .Lloop: and x12, x16, PAGE_MASK /* x12 = addr */ - /* Test the entry flags. */ .Ltest_source: tbz x16, IND_SOURCE_BIT, .Ltest_indirection /* Invalidate dest page to PoC. 
*/ - mov x0, x13 - add x20, x0, #PAGE_SIZE + mov x2, x13 + add x20, x2, #PAGE_SIZE sub x1, x15, #1 - bic x0, x0, x1 -2: dc ivac, x0 - add x0, x0, x15 - cmp x0, x20 + bic x2, x2, x1 +2: dc ivac, x2 + add x2, x2, x15 + cmp x2, x20 b.lo 2b dsb sy - mov x20, x13 - mov x21, x12 - copy_page x20, x21, x0, x1, x2, x3, x4, x5, x6, x7 - - /* dest += PAGE_SIZE */ - add x13, x13, PAGE_SIZE + copy_page x13, x12, x1, x2, x3, x4, x5, x6, x7, x8 b .Lnext - .Ltest_indirection: tbz x16, IND_INDIRECTION_BIT, .Ltest_destination - - /* ptr = addr */ - mov x14, x12 + mov x14, x12 /* ptr = addr */ b .Lnext - .Ltest_destination: tbz x16, IND_DESTINATION_BIT, .Lnext - - /* dest = addr */ - mov x13, x12 - + mov x13, x12 /* dest = addr */ .Lnext: - /* entry = *ptr++ */ - ldr x16, [x14], #8 - - /* while (!(entry & DONE)) */ - tbz x16, IND_DONE_BIT, .Lloop - + ldr x16, [x14], #8 /* entry = *ptr++ */ + tbz x16, IND_DONE_BIT, .Lloop /* while (!(entry & DONE)) */ .Ldone: /* wait for writes from copy_page to finish */ dsb nsh @@ -105,18 +77,16 @@ ENTRY(arm64_relocate_new_kernel) isb /* Start new image. */ - mov x0, x18 - mov x1, xzr - mov x2, xzr - mov x3, xzr - br x17 - -ENDPROC(arm64_relocate_new_kernel) + ldr x4, [x0, #KRELOC_ENTRY_ADDR] /* x4 = kimage_start */ + ldr x3, [x0, #KRELOC_KERN_ARG3] + ldr x2, [x0, #KRELOC_KERN_ARG2] + ldr x1, [x0, #KRELOC_KERN_ARG1] + ldr x0, [x0, #KRELOC_KERN_ARG0] /* x0 = dtb address */ + br x4 +END(arm64_relocate_new_kernel) .ltorg - .align 3 /* To keep the 64-bit values below naturally aligned. */ - .Lcopy_end: .org KEXEC_CONTROL_PAGE_SIZE From patchwork Mon Sep 9 18:12:20 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pasha Tatashin X-Patchwork-Id: 11138359 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 1DCBD1599 for ; Mon, 9 Sep 2019 18:13:11 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id C477B21A4A for ; Mon, 9 Sep 2019 18:13:10 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=soleen.com header.i=@soleen.com header.b="J4I7k4O1" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org C477B21A4A Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=soleen.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 535E36B0271; Mon, 9 Sep 2019 14:12:49 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 4A0506B0273; Mon, 9 Sep 2019 14:12:49 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 1BD746B0271; Mon, 9 Sep 2019 14:12:49 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0226.hostedemail.com [216.40.44.226]) by kanga.kvack.org (Postfix) with ESMTP id E61706B0271 for ; Mon, 9 Sep 2019 14:12:48 -0400 (EDT) Received: from smtpin16.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with SMTP id 9D729180AD7C3 for ; Mon, 9 Sep 2019 18:12:48 +0000 (UTC) X-FDA: 75916178016.16.joke08_6fc8d1ffc4c4a X-Spam-Summary: 
[73.69.118.222]) by smtp.gmail.com with ESMTPSA id q8sm5611310qtj.76.2019.09.09.11.12.46 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 09 Sep 2019 11:12:46 -0700 (PDT) From: Pavel Tatashin To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org, ebiederm@xmission.com, kexec@lists.infradead.org, linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com, will@kernel.org, linux-arm-kernel@lists.infradead.org, marc.zyngier@arm.com, james.morse@arm.com, vladimir.murzin@arm.com, matthias.bgg@gmail.com, bhsharma@redhat.com, linux-mm@kvack.org, mark.rutland@arm.com Subject: [PATCH v4 16/17] arm64: kexec: configure trans_pgd page table for kexec Date: Mon, 9 Sep 2019 14:12:20 -0400 Message-Id: <20190909181221.309510-17-pasha.tatashin@soleen.com> X-Mailer: git-send-email 2.23.0 In-Reply-To: <20190909181221.309510-1-pasha.tatashin@soleen.com> References: <20190909181221.309510-1-pasha.tatashin@soleen.com> MIME-Version: 1.0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Configure a page table located in kexec-safe memory that has the following mappings: 1. identity mapping for text of relocation function with executable permission. 2. identity mapping for argument for relocation function. 3. linear mappings for all source ranges 4. linear mappings for all destination ranges. Also, configure el2_vector, that is used to jump to new kernel from EL2 on non-VHE kernels. Signed-off-by: Pavel Tatashin --- arch/arm64/include/asm/kexec.h | 32 +++++++ arch/arm64/kernel/asm-offsets.c | 6 ++ arch/arm64/kernel/machine_kexec.c | 124 ++++++++++++++++++++++++++-- arch/arm64/kernel/relocate_kernel.S | 16 +++- 4 files changed, 169 insertions(+), 9 deletions(-) diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h index d5b79d4c7fae..450d8440f597 100644 --- a/arch/arm64/include/asm/kexec.h +++ b/arch/arm64/include/asm/kexec.h @@ -90,6 +90,23 @@ static inline void crash_prepare_suspend(void) {} static inline void crash_post_resume(void) {} #endif +#if defined(CONFIG_KEXEC_CORE) +/* Global variables for the arm64_relocate_new_kernel routine. */ +extern const unsigned char arm64_relocate_new_kernel[]; +extern const unsigned long arm64_relocate_new_kernel_size; + +/* Body of the vector for escalating to EL2 from relocation routine */ +extern const unsigned char kexec_el1_sync[]; +extern const unsigned long kexec_el1_sync_size; + +#define KEXEC_EL2_VECTOR_TABLE_SIZE 2048 +#define KEXEC_EL2_SYNC_OFFSET (KEXEC_EL2_VECTOR_TABLE_SIZE / 2) + +#endif + +#define KEXEC_SRC_START PAGE_OFFSET +#define KEXEC_DST_START (PAGE_OFFSET + \ + ((UL(0xffffffffffffffff) - PAGE_OFFSET) >> 1) + 1) /* * kern_reloc_arg is passed to kernel relocation function as an argument. * head kimage->head, allows to traverse through relocation segments. @@ -97,6 +114,15 @@ static inline void crash_post_resume(void) {} * kernel, or purgatory entry address). * kern_arg0 first argument to kernel is its dtb address. The other * arguments are currently unused, and must be set to 0 + * trans_ttbr0 idmap for relocation function and its argument + * trans_ttbr1 linear map for source/destination addresses. + * el2_vector If present means that relocation routine will go to EL1 + * from EL2 to do the copy, and then back to EL2 to do the jump + * to new world. This vector contains only the final jump + * instruction at KEXEC_EL2_SYNC_OFFSET. + * src_addr linear map for source pages. 
+ * dst_addr linear map for destination pages. + * copy_len Number of bytes that need to be copied */ struct kern_reloc_arg { unsigned long head; @@ -105,6 +131,12 @@ struct kern_reloc_arg { unsigned long kern_arg1; unsigned long kern_arg2; unsigned long kern_arg3; + unsigned long trans_ttbr0; + unsigned long trans_ttbr1; + unsigned long el2_vector; + unsigned long src_addr; + unsigned long dst_addr; + unsigned long copy_len; }; #define ARCH_HAS_KIMAGE_ARCH diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c index 900394907fd8..7c2ba09a8ceb 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -135,6 +135,12 @@ int main(void) DEFINE(KRELOC_KERN_ARG1, offsetof(struct kern_reloc_arg, kern_arg1)); DEFINE(KRELOC_KERN_ARG2, offsetof(struct kern_reloc_arg, kern_arg2)); DEFINE(KRELOC_KERN_ARG3, offsetof(struct kern_reloc_arg, kern_arg3)); + DEFINE(KRELOC_TRANS_TTBR0, offsetof(struct kern_reloc_arg, trans_ttbr0)); + DEFINE(KRELOC_TRANS_TTBR1, offsetof(struct kern_reloc_arg, trans_ttbr1)); + DEFINE(KRELOC_EL2_VECTOR, offsetof(struct kern_reloc_arg, el2_vector)); + DEFINE(KRELOC_SRC_ADDR, offsetof(struct kern_reloc_arg, src_addr)); + DEFINE(KRELOC_DST_ADDR, offsetof(struct kern_reloc_arg, dst_addr)); + DEFINE(KRELOC_COPY_LEN, offsetof(struct kern_reloc_arg, copy_len)); #endif return 0; } diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c index fb6138a1c9ff..ef7318cb6e70 100644 --- a/arch/arm64/kernel/machine_kexec.c +++ b/arch/arm64/kernel/machine_kexec.c @@ -20,13 +20,10 @@ #include #include #include +#include #include "cpu-reset.h" -/* Global variables for the arm64_relocate_new_kernel routine. */ -extern const unsigned char arm64_relocate_new_kernel[]; -extern const unsigned long arm64_relocate_new_kernel_size; - /** * kexec_image_info - For debugging output. */ @@ -74,15 +71,123 @@ static void *kexec_page_alloc(void *arg) return page_address(page); } +/* + * Map source segments starting from KEXEC_SRC_START, and map destination + * segments starting from KEXEC_DST_START, and return size of copy in + * *copy_len argument. 
+ * Relocation function essentially needs to do: + * memcpy(KEXEC_DST_START, KEXEC_SRC_START, copy_len); + */ +static int map_segments(struct kimage *kimage, pgd_t *pgdp, + struct trans_pgd_info *info, + unsigned long *copy_len) +{ + unsigned long *ptr = 0; + unsigned long dest = 0; + unsigned long src_va = KEXEC_SRC_START; + unsigned long dst_va = KEXEC_DST_START; + unsigned long len = 0; + unsigned long entry, addr; + int rc; + + for (entry = kimage->head; !(entry & IND_DONE); entry = *ptr++) { + addr = entry & PAGE_MASK; + + switch (entry & IND_FLAGS) { + case IND_DESTINATION: + dest = addr; + break; + case IND_INDIRECTION: + ptr = __va(addr); + if (rc) + return rc; + break; + case IND_SOURCE: + rc = trans_pgd_map_page(info, pgdp, __va(addr), + src_va, PAGE_KERNEL); + if (rc) + return rc; + rc = trans_pgd_map_page(info, pgdp, __va(dest), + dst_va, PAGE_KERNEL); + if (rc) + return rc; + dest += PAGE_SIZE; + src_va += PAGE_SIZE; + dst_va += PAGE_SIZE; + len += PAGE_SIZE; + } + } + *copy_len = len; + + return 0; +} + +static int mmu_relocate_setup(struct kimage *kimage, unsigned long kern_reloc, + struct kern_reloc_arg *kern_reloc_arg) +{ + struct trans_pgd_info info = { + .trans_alloc_page = kexec_page_alloc, + .trans_alloc_arg = kimage, + }; + pgd_t *trans_ttbr0 = kexec_page_alloc(kimage); + pgd_t *trans_ttbr1 = kexec_page_alloc(kimage); + int rc; + + if (!trans_ttbr0 || !trans_ttbr1) + return -ENOMEM; + + rc = map_segments(kimage, trans_ttbr1, &info, + &kern_reloc_arg->copy_len); + if (rc) + return rc; + + /* Map relocation function va == pa */ + rc = trans_pgd_map_page(&info, trans_ttbr0, __va(kern_reloc), + kern_reloc, PAGE_KERNEL_EXEC); + if (rc) + return rc; + + /* Map relocation function argument va == pa */ + rc = trans_pgd_map_page(&info, trans_ttbr0, kern_reloc_arg, + __pa(kern_reloc_arg), PAGE_KERNEL); + if (rc) + return rc; + + kern_reloc_arg->trans_ttbr0 = phys_to_ttbr(__pa(trans_ttbr0)); + kern_reloc_arg->trans_ttbr1 = phys_to_ttbr(__pa(trans_ttbr1)); + kern_reloc_arg->src_addr = KEXEC_SRC_START; + kern_reloc_arg->dst_addr = KEXEC_DST_START; + + return 0; +} + int machine_kexec_post_load(struct kimage *kimage) { + unsigned long el2_vector = 0; unsigned long kern_reloc; struct kern_reloc_arg *kern_reloc_arg; + int rc = 0; + + /* + * Sanity check that relocation function + el2_vector fit into one + * page. 
+ */ + if (arm64_relocate_new_kernel_size > KEXEC_EL2_VECTOR_TABLE_SIZE) { + pr_err("can't fit relocation function and el2_vector in one page"); + return -ENOMEM; + } kern_reloc = page_to_phys(kimage->control_code_page); memcpy(__va(kern_reloc), arm64_relocate_new_kernel, arm64_relocate_new_kernel_size); + /* Setup vector table only when EL2 is available, but no VHE */ + if (is_hyp_mode_available() && !is_kernel_in_hyp_mode()) { + el2_vector = kern_reloc + KEXEC_EL2_VECTOR_TABLE_SIZE; + memcpy(__va(el2_vector + KEXEC_EL2_SYNC_OFFSET), kexec_el1_sync, + kexec_el1_sync_size); + } + kern_reloc_arg = kexec_page_alloc(kimage); if (!kern_reloc_arg) return -ENOMEM; @@ -92,10 +197,19 @@ int machine_kexec_post_load(struct kimage *kimage) kern_reloc_arg->head = kimage->head; kern_reloc_arg->entry_addr = kimage->start; + kern_reloc_arg->el2_vector = el2_vector; kern_reloc_arg->kern_arg0 = kimage->arch.dtb_mem; + /* + * If relocation is not needed, we do not need to enable MMU in + * relocation routine, therefore do not create page tables for + * scenarios such as crash kernel + */ + if (!(kimage->head & IND_DONE)) + rc = mmu_relocate_setup(kimage, kern_reloc, kern_reloc_arg); + kexec_image_info(kimage); - return 0; + return rc; } /** diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S index d352faf7cbe6..14243a678277 100644 --- a/arch/arm64/kernel/relocate_kernel.S +++ b/arch/arm64/kernel/relocate_kernel.S @@ -83,17 +83,25 @@ ENTRY(arm64_relocate_new_kernel) ldr x1, [x0, #KRELOC_KERN_ARG1] ldr x0, [x0, #KRELOC_KERN_ARG0] /* x0 = dtb address */ br x4 +.ltorg +.Larm64_relocate_new_kernel_end: END(arm64_relocate_new_kernel) -.ltorg +ENTRY(kexec_el1_sync) + br x4 /* Jump to new world from el2 */ +.Lkexec_el1_sync_end: +END(kexec_el1_sync) + .align 3 /* To keep the 64-bit values below naturally aligned. */ -.Lcopy_end: .org KEXEC_CONTROL_PAGE_SIZE - /* * arm64_relocate_new_kernel_size - Number of bytes to copy to the * control_code_page. 
*/ .globl arm64_relocate_new_kernel_size arm64_relocate_new_kernel_size: - .quad .Lcopy_end - arm64_relocate_new_kernel + .quad .Larm64_relocate_new_kernel_end - arm64_relocate_new_kernel + +.globl kexec_el1_sync_size +kexec_el1_sync_size: + .quad .Lkexec_el1_sync_end - kexec_el1_sync From patchwork Mon Sep 9 18:12:21 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pasha Tatashin X-Patchwork-Id: 11138361 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id A4CC114ED for ; Mon, 9 Sep 2019 18:13:14 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 563D821D79 for ; Mon, 9 Sep 2019 18:13:14 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=soleen.com header.i=@soleen.com header.b="e4EaTlbS" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 563D821D79 Authentication-Results: mail.kernel.org; dmarc=none (p=none dis=none) header.from=soleen.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id B506F6B0272; Mon, 9 Sep 2019 14:12:51 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id ADA076B0273; Mon, 9 Sep 2019 14:12:51 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 92DA36B0274; Mon, 9 Sep 2019 14:12:51 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0181.hostedemail.com [216.40.44.181]) by kanga.kvack.org (Postfix) with ESMTP id 6BB3B6B0272 for ; Mon, 9 Sep 2019 14:12:51 -0400 (EDT) Received: from smtpin22.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with SMTP id 297E68243762 for ; Mon, 9 Sep 2019 18:12:51 +0000 (UTC) X-FDA: 75916178142.22.drink09_70233f14ccc37 X-Spam-Summary: 2,0,0,bca63d94befcee13,d41d8cd98f00b204,pasha.tatashin@soleen.com,:pasha.tatashin@soleen.com:jmorris@namei.org:sashal@kernel.org:ebiederm@xmission.com:kexec@lists.infradead.org:linux-kernel@vger.kernel.org:corbet@lwn.net:catalin.marinas@arm.com:will@kernel.org:linux-arm-kernel@lists.infradead.org:marc.zyngier@arm.com:james.morse@arm.com:vladimir.murzin@arm.com:matthias.bgg@gmail.com:bhsharma@redhat.com::mark.rutland@arm.com,RULES_HIT:1:2:41:69:355:379:421:541:800:960:968:973:988:989:1260:1311:1314:1345:1359:1381:1431:1437:1515:1605:1730:1747:1777:1792:2194:2199:2393:2538:2559:2562:2693:3138:3139:3140:3141:3142:3865:3866:3867:3868:3870:3871:3872:3874:4051:4250:4321:4605:5007:6117:6119:6261:6653:6737:7688:7875:7903:8603:9036:9592:10004:10226:11026:11232:11473:11657:11658:11914:12043:12048:12296:12297:12438:12517:12519:12555:12679:12895:12986:13161:13229:13894:13972:14394:21063:21080:21324:21325:21444:21451:21627:21740:30003:30034:30054:30067:30070:30079:30089,0, RBL:209. 
From: Pavel Tatashin
Subject: [PATCH v4 17/17] arm64: kexec: enable MMU during kexec relocation
Date: Mon, 9 Sep 2019 14:12:21 -0400
Message-Id: <20190909181221.309510-18-pasha.tatashin@soleen.com>
In-Reply-To: <20190909181221.309510-1-pasha.tatashin@soleen.com>

Now that we have transitional page tables configured, temporarily enable the MMU to allow faster relocation of segments to their final destination. Performance data: for a moderately sized kernel plus initramfs (25M), relocation used to take 0.382s; with the MMU enabled it now takes only 0.019s, roughly a 20x improvement. The time is proportional to the amount of data relocated, so with a larger initramfs (for example 100M) it could take over a second. 
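A quick back-of-the-envelope check of those figures, using only the numbers quoted above (rounded freely):

    25 MB / 0.382 s   ≈ 65 MB/s    (copy with MMU off)
    25 MB / 0.019 s   ≈ 1.3 GB/s   (copy with MMU on)
    0.382 s / 0.019 s ≈ 20         (the quoted 20x improvement)
    100 MB / 65 MB/s  ≈ 1.5 s      (why a large initramfs can exceed a second without this patch)
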
Also, remove reloc_arg->head, as it is not needed anymore once MMU is enabled. Signed-off-by: Pavel Tatashin --- arch/arm64/include/asm/kexec.h | 2 - arch/arm64/kernel/asm-offsets.c | 1 - arch/arm64/kernel/machine_kexec.c | 1 - arch/arm64/kernel/relocate_kernel.S | 136 +++++++++++++++++----------- 4 files changed, 84 insertions(+), 56 deletions(-) diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h index 450d8440f597..ad81ed3e5751 100644 --- a/arch/arm64/include/asm/kexec.h +++ b/arch/arm64/include/asm/kexec.h @@ -109,7 +109,6 @@ extern const unsigned long kexec_el1_sync_size; ((UL(0xffffffffffffffff) - PAGE_OFFSET) >> 1) + 1) /* * kern_reloc_arg is passed to kernel relocation function as an argument. - * head kimage->head, allows to traverse through relocation segments. * entry_addr kimage->start, where to jump from relocation function (new * kernel, or purgatory entry address). * kern_arg0 first argument to kernel is its dtb address. The other @@ -125,7 +124,6 @@ extern const unsigned long kexec_el1_sync_size; * copy_len Number of bytes that need to be copied */ struct kern_reloc_arg { - unsigned long head; unsigned long entry_addr; unsigned long kern_arg0; unsigned long kern_arg1; diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c index 7c2ba09a8ceb..13ad00b1b90f 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -129,7 +129,6 @@ int main(void) DEFINE(SDEI_EVENT_PRIORITY, offsetof(struct sdei_registered_event, priority)); #endif #ifdef CONFIG_KEXEC_CORE - DEFINE(KRELOC_HEAD, offsetof(struct kern_reloc_arg, head)); DEFINE(KRELOC_ENTRY_ADDR, offsetof(struct kern_reloc_arg, entry_addr)); DEFINE(KRELOC_KERN_ARG0, offsetof(struct kern_reloc_arg, kern_arg0)); DEFINE(KRELOC_KERN_ARG1, offsetof(struct kern_reloc_arg, kern_arg1)); diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c index ef7318cb6e70..7fedf58f67f0 100644 --- a/arch/arm64/kernel/machine_kexec.c +++ b/arch/arm64/kernel/machine_kexec.c @@ -195,7 +195,6 @@ int machine_kexec_post_load(struct kimage *kimage) kimage->arch.kern_reloc = kern_reloc; kimage->arch.kern_reloc_arg = __pa(kern_reloc_arg); - kern_reloc_arg->head = kimage->head; kern_reloc_arg->entry_addr = kimage->start; kern_reloc_arg->el2_vector = el2_vector; kern_reloc_arg->kern_arg0 = kimage->arch.dtb_mem; diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S index 14243a678277..96ff6760bd9c 100644 --- a/arch/arm64/kernel/relocate_kernel.S +++ b/arch/arm64/kernel/relocate_kernel.S @@ -4,6 +4,8 @@ * * Copyright (C) Linaro. * Copyright (C) Huawei Futurewei Technologies. + * Copyright (c) 2019, Microsoft Corporation. + * Pavel Tatashin */ #include @@ -14,6 +16,49 @@ #include #include +/* Invalidae TLB */ +.macro tlb_invalidate + dsb sy + dsb ish + tlbi vmalle1 + dsb ish + isb +.endm + +/* Turn-off mmu at level specified by sctlr */ +.macro turn_off_mmu sctlr, tmp1, tmp2 + mrs \tmp1, \sctlr + ldr \tmp2, =SCTLR_ELx_FLAGS + bic \tmp1, \tmp1, \tmp2 + pre_disable_mmu_workaround + msr \sctlr, \tmp1 + isb +.endm + +/* Turn-on mmu at level specified by sctlr */ +.macro turn_on_mmu sctlr, tmp1, tmp2 + mrs \tmp1, \sctlr + ldr \tmp2, =SCTLR_ELx_FLAGS + orr \tmp1, \tmp1, \tmp2 + msr \sctlr, \tmp1 + ic iallu + dsb nsh + isb +.endm + +/* + * Set ttbr0 and ttbr1, called while MMU is disabled, so no need to temporarily + * set zero_page table. Invalidate TLB after new tables are set. 
+ */ +.macro set_ttbr arg, tmp + ldr \tmp, [\arg, #KRELOC_TRANS_TTBR0] + msr ttbr0_el1, \tmp + ldr \tmp, [\arg, #KRELOC_TRANS_TTBR1] + offset_ttbr1 \tmp + msr ttbr1_el1, \tmp + isb +.endm + /* * arm64_relocate_new_kernel - Put a 2nd stage image in place and boot it. * @@ -24,65 +69,52 @@ * symbols arm64_relocate_new_kernel and arm64_relocate_new_kernel_end. The * machine_kexec() routine will copy arm64_relocate_new_kernel to the kexec * safe memory that has been set up to be preserved during the copy operation. + * + * This function temporarily enables MMU if kernel relocation is needed. + * Also, if we enter this function at EL2 on non-VHE kernel, we temporarily go + * to EL1 to enable MMU, and escalate back to EL2 at the end to do the jump to + * the new kernel. This is determined by presence of el2_vector. */ ENTRY(arm64_relocate_new_kernel) - /* Clear the sctlr_el2 flags. */ - mrs x2, CurrentEL - cmp x2, #CurrentEL_EL2 + mrs x1, CurrentEL + cmp x1, #CurrentEL_EL2 b.ne 1f - mrs x2, sctlr_el2 - ldr x1, =SCTLR_ELx_FLAGS - bic x2, x2, x1 - pre_disable_mmu_workaround - msr sctlr_el2, x2 - isb -1: /* Check if the new image needs relocation. */ - ldr x16, [x0, #KRELOC_HEAD] /* x16 = kimage_head */ - tbnz x16, IND_DONE_BIT, .Ldone - raw_dcache_line_size x15, x1 /* x15 = dcache line size */ -.Lloop: - and x12, x16, PAGE_MASK /* x12 = addr */ - /* Test the entry flags. */ -.Ltest_source: - tbz x16, IND_SOURCE_BIT, .Ltest_indirection - - /* Invalidate dest page to PoC. */ - mov x2, x13 - add x20, x2, #PAGE_SIZE - sub x1, x15, #1 - bic x2, x2, x1 -2: dc ivac, x2 - add x2, x2, x15 - cmp x2, x20 - b.lo 2b - dsb sy - - copy_page x13, x12, x1, x2, x3, x4, x5, x6, x7, x8 - b .Lnext -.Ltest_indirection: - tbz x16, IND_INDIRECTION_BIT, .Ltest_destination - mov x14, x12 /* ptr = addr */ - b .Lnext -.Ltest_destination: - tbz x16, IND_DESTINATION_BIT, .Lnext - mov x13, x12 /* dest = addr */ -.Lnext: - ldr x16, [x14], #8 /* entry = *ptr++ */ - tbz x16, IND_DONE_BIT, .Lloop /* while (!(entry & DONE)) */ -.Ldone: - /* wait for writes from copy_page to finish */ - dsb nsh - ic iallu - dsb nsh - isb - - /* Start new image. */ - ldr x4, [x0, #KRELOC_ENTRY_ADDR] /* x4 = kimage_start */ + turn_off_mmu sctlr_el2, x1, x2 /* Turn off MMU at EL2 */ +1: mov x20, xzr /* x20 will hold vector value */ + ldr x11, [x0, #KRELOC_COPY_LEN] + cbz x11, 5f /* Check if need to relocate */ + ldr x20, [x0, #KRELOC_EL2_VECTOR] + cbz x20, 2f /* need to reduce to EL1? 
*/ + msr vbar_el2, x20 /* el2_vector present, means */ + adr x1, 2f /* we will do copy in el1 but */ + msr elr_el2, x1 /* do final jump from el2 */ + eret /* Reduce to EL1 */ +2: set_ttbr x0, x1 /* Set our page tables */ + tlb_invalidate + turn_on_mmu sctlr_el1, x1, x2 /* Turn MMU back on */ + ldr x1, [x0, #KRELOC_DST_ADDR]; + ldr x2, [x0, #KRELOC_SRC_ADDR]; + mov x12, x1 /* x12 dst backup */ +3: copy_page x1, x2, x3, x4, x5, x6, x7, x8, x9, x10 + sub x11, x11, #PAGE_SIZE + cbnz x11, 3b /* page copy loop */ + raw_dcache_line_size x2, x3 /* x2 = dcache line size */ + sub x3, x2, #1 /* x3 = dcache_size - 1 */ + bic x12, x12, x3 +4: dc cvau, x12 /* Flush D-cache */ + add x12, x12, x2 + cmp x12, x1 /* Compare to dst + len */ + b.ne 4b /* D-cache flush loop */ + turn_off_mmu sctlr_el1, x1, x2 /* Turn off MMU */ + tlb_invalidate /* Invalidate TLB */ +5: ldr x4, [x0, #KRELOC_ENTRY_ADDR] /* x4 = kimage_start */ ldr x3, [x0, #KRELOC_KERN_ARG3] ldr x2, [x0, #KRELOC_KERN_ARG2] ldr x1, [x0, #KRELOC_KERN_ARG1] ldr x0, [x0, #KRELOC_KERN_ARG0] /* x0 = dtb address */ - br x4 + cbnz x20, 6f /* need to escalate to el2? */ + br x4 /* Jump to new world */ +6: hvc #0 /* enters kexec_el1_sync */ .ltorg .Larm64_relocate_new_kernel_end: END(arm64_relocate_new_kernel)
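A note on the data structure behind all of this: the copy loop removed above and the map_segments() walk added in the previous patch traverse the same kimage indirection list, a chain of page-aligned physical addresses whose low bits tag each entry as the next destination page, a pointer to the next page of entries, a source page to copy, or the end of the list. The sketch below walks such a list in plain userspace C; the flag bit positions are intended to mirror include/linux/kexec.h, but the array, the addresses and the output are purely illustrative (an LP64 target is assumed), and nothing here is kernel code.

#include <stdio.h>

/* Entry tags kept in the low bits of each list entry; bit positions are
 * meant to match include/linux/kexec.h, but treat them as illustrative. */
#define IND_DESTINATION (1UL << 0)   /* entry is the next destination page  */
#define IND_INDIRECTION (1UL << 1)   /* entry points at the next entry page */
#define IND_DONE        (1UL << 2)   /* end of the list                     */
#define IND_SOURCE      (1UL << 3)   /* entry is a source page to copy      */

#define DEMO_PAGE_SIZE  4096UL

/* Walk the list the same way the relocation loop does: remember the current
 * destination, follow indirection entries to the next entry page, and "copy"
 * each source page to the running destination, advancing one page per copy. */
static void walk(unsigned long head)
{
        unsigned long entry = head;
        unsigned long *ptr = NULL;
        unsigned long dest = 0;

        while (!(entry & IND_DONE)) {
                unsigned long addr = entry & ~(DEMO_PAGE_SIZE - 1);

                if (entry & IND_DESTINATION)
                        dest = addr;
                else if (entry & IND_INDIRECTION)
                        ptr = (unsigned long *)addr;
                else if (entry & IND_SOURCE) {
                        printf("copy page %#lx -> %#lx\n", addr, dest);
                        dest += DEMO_PAGE_SIZE;
                }
                entry = *ptr++;         /* fetch the next entry */
        }
}

int main(void)
{
        /* A single "indirection page": one destination, two sources, DONE.
         * The source/destination values are made-up physical addresses and
         * are never dereferenced; only the entry page itself is read. */
        static _Alignas(4096) unsigned long entries[4];

        entries[0] = 0x80000000UL | IND_DESTINATION;
        entries[1] = 0x40000000UL | IND_SOURCE;
        entries[2] = 0x40007000UL | IND_SOURCE;
        entries[3] = IND_DONE;

        walk((unsigned long)entries | IND_INDIRECTION);
        return 0;
}

map_segments() walks exactly this structure to pre-map every source page at KEXEC_SRC_START and every destination page at KEXEC_DST_START, which is what allows the MMU-enabled relocation above to replace the per-entry copy loop with a single linear copy of copy_len bytes.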