From patchwork Thu Aug 1 15:24:32 2019
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 11070903
From: Pavel Tatashin
To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org,
 ebiederm@xmission.com, kexec@lists.infradead.org,
 linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com,
 will@kernel.org, linux-arm-kernel@lists.infradead.org,
 marc.zyngier@arm.com, james.morse@arm.com, vladimir.murzin@arm.com,
 matthias.bgg@gmail.com, bhsharma@redhat.com, linux-mm@kvack.org
Subject: [PATCH v1 1/8] kexec: quiet down kexec reboot
Date: Thu, 1 Aug 2019 11:24:32 -0400
Message-Id: <20190801152439.11363-2-pasha.tatashin@soleen.com>
In-Reply-To: <20190801152439.11363-1-pasha.tatashin@soleen.com>
References: <20190801152439.11363-1-pasha.tatashin@soleen.com>

Here is a regular kexec command sequence and output:
=====
$ kexec --reuse-cmdline -i --load Image
$ kexec -e
[ 161.342002] kexec_core: Starting new kernel
Welcome to Buildroot
buildroot login:
=====

Even when the "quiet" kernel parameter is specified, "kexec_core:
Starting new kernel" is printed. This message is logged at KERN_EMERG
level, but there is no emergency here; it is a normal kexec operation,
so lower it to the more appropriate KERN_NOTICE. Machines with a slow
console baud rate benefit from the reduced output.

Signed-off-by: Pavel Tatashin
Reviewed-by: Simon Horman
---
 kernel/kexec_core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index d5870723b8ad..2c5b72863b7b 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -1169,7 +1169,7 @@ int kernel_kexec(void)
 		 * CPU hotplug again; so re-enable it here.
 		 */
 		cpu_hotplug_enable();
-		pr_emerg("Starting new kernel\n");
+		pr_notice("Starting new kernel\n");
 		machine_shutdown();
 	}
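For context on why this one-character level change is enough: the "quiet"
kernel parameter lowers the console log level to CONSOLE_LOGLEVEL_QUIET,
and printk() only emits a message on the console when the message level is
numerically smaller than that threshold. The snippet below is a
userspace-style illustration of that filter, not kernel code; the numeric
values mirror the kernel's printk levels.

#include <stdio.h>

/* Numeric printk levels (see include/linux/kern_levels.h and printk.h). */
#define LOGLEVEL_EMERG		0	/* pr_emerg()  */
#define LOGLEVEL_NOTICE		5	/* pr_notice() */
#define CONSOLE_LOGLEVEL_QUIET	4	/* selected by the "quiet" parameter */

/* A message reaches the console only if its level is below the threshold. */
static int shown_on_console(int msg_level, int console_loglevel)
{
	return msg_level < console_loglevel;
}

int main(void)
{
	printf("pr_emerg under quiet:  %s\n",
	       shown_on_console(LOGLEVEL_EMERG, CONSOLE_LOGLEVEL_QUIET) ?
	       "printed" : "suppressed");
	printf("pr_notice under quiet: %s\n",
	       shown_on_console(LOGLEVEL_NOTICE, CONSOLE_LOGLEVEL_QUIET) ?
	       "printed" : "suppressed");
	return 0;
}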
From patchwork Thu Aug 1 15:24:33 2019
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 11070905
From: Pavel Tatashin
To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org,
 ebiederm@xmission.com, kexec@lists.infradead.org,
 linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com,
 will@kernel.org, linux-arm-kernel@lists.infradead.org,
 marc.zyngier@arm.com, james.morse@arm.com, vladimir.murzin@arm.com,
 matthias.bgg@gmail.com, bhsharma@redhat.com, linux-mm@kvack.org
Subject: [PATCH v1 2/8] arm64, mm: transitional tables
Date: Thu, 1 Aug 2019 11:24:33 -0400
Message-Id: <20190801152439.11363-3-pasha.tatashin@soleen.com>
In-Reply-To: <20190801152439.11363-1-pasha.tatashin@soleen.com>
References: <20190801152439.11363-1-pasha.tatashin@soleen.com>

There are cases where the normal kernel page tables, i.e. idmap_pg_dir
and swapper_pg_dir, are not sufficient because they may be overwritten.
This happens when we transition from one world to another: for example
during the kexec kernel relocation transition, and also during the
hibernate kernel restore transition. In these cases, if the MMU is
needed, the page table memory must be allocated from a safe place.
Transitional tables are intended to allow just that.

Signed-off-by: Pavel Tatashin
---
 arch/arm64/Kconfig                     |   4 +
 arch/arm64/include/asm/pgtable-hwdef.h |   1 +
 arch/arm64/include/asm/trans_table.h   |  68 ++++++
 arch/arm64/mm/Makefile                 |   1 +
 arch/arm64/mm/trans_table.c            | 273 +++++++++++++++++++++++++
 5 files changed, 347 insertions(+)
 create mode 100644 arch/arm64/include/asm/trans_table.h
 create mode 100644 arch/arm64/mm/trans_table.c

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 3adcec05b1f6..91a7416ffe4e 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -999,6 +999,10 @@ config CRASH_DUMP
 	  For more details see Documentation/admin-guide/kdump/kdump.rst
 
+config TRANS_TABLE
+	def_bool y
+	depends on HIBERNATION || KEXEC_CORE
+
 config XEN_DOM0
 	def_bool y
 	depends on XEN
diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index db92950bb1a0..dcb4f13c7888 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -110,6 +110,7 @@
 #define PUD_TABLE_BIT		(_AT(pudval_t, 1) << 1)
 #define PUD_TYPE_MASK		(_AT(pudval_t, 3) << 0)
 #define PUD_TYPE_SECT		(_AT(pudval_t, 1) << 0)
+#define PUD_SECT_RDONLY		(_AT(pudval_t, 1) << 7)		/* AP[2] */
 
 /*
  * Level 2 descriptor (PMD).
diff --git a/arch/arm64/include/asm/trans_table.h b/arch/arm64/include/asm/trans_table.h
new file mode 100644
index 000000000000..c7aef70587a1
--- /dev/null
+++ b/arch/arm64/include/asm/trans_table.h
@@ -0,0 +1,68 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * Copyright (c) 2019, Microsoft Corporation.
+ * Pavel Tatashin
+ */
+
+#ifndef _ASM_TRANS_TABLE_H
+#define _ASM_TRANS_TABLE_H
+
+#include
+#include
+
+/*
+ * trans_alloc_page
+ *	- Allocator that should return exactly one uninitialized page, if this
+ *	  allocator fails, trans_table returns -ENOMEM error.
+ *
+ * trans_alloc_arg
+ *	- Passed to trans_alloc_page as an argument
+ *
+ * trans_flags
+ *	- bitmap with flags that control how page table is filled.
+ * TRANS_MKWRITE: during page table copy make PTE, PME, and PUD page + * writeable by removing RDONLY flag from PTE. + * TRANS_MKVALID: during page table copy, if PTE present, but not valid, + * make it valid. + * TRANS_CHECKPFN: During page table copy, for every PTE entry check that + * PFN that this PTE points to is valid. Otherwise return + * -ENXIO + * TRANS_FORCEMAP: During page map, if translation exists, force + * overwrite it. Otherwise -ENXIO may be returned by + * trans_table_map_* functions if conflict is detected. + */ + +#define TRANS_MKWRITE BIT(0) +#define TRANS_MKVALID BIT(1) +#define TRANS_CHECKPFN BIT(2) +#define TRANS_FORCEMAP BIT(3) + +struct trans_table_info { + void * (*trans_alloc_page)(void *arg); + void *trans_alloc_arg; + unsigned long trans_flags; +}; + +/* Create and empty trans table. */ +int trans_table_create_empty(struct trans_table_info *info, + pgd_t **trans_table); + +/* + * Create trans table and copy entries from from_table to trans_table in range + * [start, end) + */ +int trans_table_create_copy(struct trans_table_info *info, pgd_t **trans_table, + pgd_t *from_table, unsigned long start, + unsigned long end); + +/* + * Add map entry to trans_table for a base-size page at PTE level. + * page: page to be mapped. + * dst_addr: new VA address for the pages + * pgprot: protection for the page. + */ +int trans_table_map_page(struct trans_table_info *info, pgd_t *trans_table, + void *page, unsigned long dst_addr, pgprot_t pgprot); + +#endif /* _ASM_TRANS_TABLE_H */ diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile index 849c1df3d214..3794fff18659 100644 --- a/arch/arm64/mm/Makefile +++ b/arch/arm64/mm/Makefile @@ -6,6 +6,7 @@ obj-y := dma-mapping.o extable.o fault.o init.o \ obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o obj-$(CONFIG_ARM64_PTDUMP_CORE) += dump.o obj-$(CONFIG_ARM64_PTDUMP_DEBUGFS) += ptdump_debugfs.o +obj-$(CONFIG_TRANS_TABLE) += trans_table.o obj-$(CONFIG_NUMA) += numa.o obj-$(CONFIG_DEBUG_VIRTUAL) += physaddr.o KASAN_SANITIZE_physaddr.o += n diff --git a/arch/arm64/mm/trans_table.c b/arch/arm64/mm/trans_table.c new file mode 100644 index 000000000000..e3b8d4a2fa15 --- /dev/null +++ b/arch/arm64/mm/trans_table.c @@ -0,0 +1,273 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* + * Copyright (c) 2019, Microsoft Corporation. + * Pavel Tatashin + */ + +/* + * Transitional tables are used during system transferring from one world to + * another: such as during hibernate restore, and kexec reboots. During these + * phases one cannot rely on page table not being overwritten. 
+ * + */ + +#include +#include +#include + +static void *trans_alloc(struct trans_table_info *info) +{ + void *page = info->trans_alloc_page(info->trans_alloc_arg); + + if (page) + clear_page(page); + + return page; +} + +static int trans_table_copy_pte(struct trans_table_info *info, pte_t *dst_ptep, + pte_t *src_ptep, unsigned long start, + unsigned long end) +{ + unsigned long addr = start; + int i = pgd_index(addr); + + do { + pte_t src_pte = READ_ONCE(src_ptep[i]); + + if (pte_none(src_pte)) + continue; + if (info->trans_flags & TRANS_MKWRITE) + src_pte = pte_mkwrite(src_pte); + if (info->trans_flags & TRANS_MKVALID) + src_pte = pte_mkpresent(src_pte); + if (info->trans_flags & TRANS_CHECKPFN) { + if (!pfn_valid(pte_pfn(src_pte))) + return -ENXIO; + } + set_pte(&dst_ptep[i], src_pte); + } while (addr += PAGE_SIZE, i++, addr != end && i < PTRS_PER_PTE); + + return 0; +} + +static int trans_table_copy_pmd(struct trans_table_info *info, pmd_t *dst_pmdp, + pmd_t *src_pmdp, unsigned long start, + unsigned long end) +{ + unsigned long next; + unsigned long addr = start; + int i = pgd_index(addr); + int rc; + + do { + pmd_t src_pmd = READ_ONCE(src_pmdp[i]); + pmd_t dst_pmd = READ_ONCE(dst_pmdp[i]); + pte_t *dst_ptep, *src_ptep; + + next = pmd_addr_end(addr, end); + if (pmd_none(src_pmd)) + continue; + + if (!pmd_table(src_pmd)) { + if (info->trans_flags & TRANS_MKWRITE) + pmd_val(src_pmd) &= ~PMD_SECT_RDONLY; + set_pmd(&dst_pmdp[i], src_pmd); + continue; + } + + if (pmd_none(dst_pmd)) { + pte_t *t = trans_alloc(info); + + if (!t) + return -ENOMEM; + + __pmd_populate(&dst_pmdp[i], __pa(t), PTE_TYPE_PAGE); + dst_pmd = READ_ONCE(dst_pmdp[i]); + } + + src_ptep = __va(pmd_page_paddr(src_pmd)); + dst_ptep = __va(pmd_page_paddr(dst_pmd)); + + rc = trans_table_copy_pte(info, dst_ptep, src_ptep, addr, next); + if (rc) + return rc; + } while (addr = next, i++, addr != end && i < PTRS_PER_PMD); + + return 0; +} + +static int trans_table_copy_pud(struct trans_table_info *info, pud_t *dst_pudp, + pud_t *src_pudp, unsigned long start, + unsigned long end) +{ + unsigned long next; + unsigned long addr = start; + int i = pgd_index(addr); + int rc; + + do { + pud_t src_pud = READ_ONCE(src_pudp[i]); + pud_t dst_pud = READ_ONCE(dst_pudp[i]); + pmd_t *dst_pmdp, *src_pmdp; + + next = pud_addr_end(addr, end); + if (pud_none(src_pud)) + continue; + + if (!pud_table(src_pud)) { + if (info->trans_flags & TRANS_MKWRITE) + pud_val(src_pud) &= ~PUD_SECT_RDONLY; + set_pud(&dst_pudp[i], src_pud); + continue; + } + + if (pud_none(dst_pud)) { + pmd_t *t = trans_alloc(info); + + if (!t) + return -ENOMEM; + + __pud_populate(&dst_pudp[i], __pa(t), PMD_TYPE_TABLE); + dst_pud = READ_ONCE(dst_pudp[i]); + } + + src_pmdp = __va(pud_page_paddr(src_pud)); + dst_pmdp = __va(pud_page_paddr(dst_pud)); + + rc = trans_table_copy_pmd(info, dst_pmdp, src_pmdp, addr, next); + if (rc) + return rc; + } while (addr = next, i++, addr != end && i < PTRS_PER_PUD); + + return 0; +} + +static int trans_table_copy_pgd(struct trans_table_info *info, pgd_t *dst_pgdp, + pgd_t *src_pgdp, unsigned long start, + unsigned long end) +{ + unsigned long next; + unsigned long addr = start; + int i = pgd_index(addr); + int rc; + + do { + pgd_t src_pgd; + pgd_t dst_pgd; + pud_t *dst_pudp, *src_pudp; + + src_pgd = READ_ONCE(src_pgdp[i]); + dst_pgd = READ_ONCE(dst_pgdp[i]); + next = pgd_addr_end(addr, end); + if (pgd_none(src_pgd)) + continue; + + if (pgd_none(dst_pgd)) { + pud_t *t = trans_alloc(info); + + if (!t) + return -ENOMEM; + + __pgd_populate(&dst_pgdp[i], 
				__pa(t), PUD_TYPE_TABLE);
+			dst_pgd = READ_ONCE(dst_pgdp[i]);
+		}
+
+		src_pudp = __va(pgd_page_paddr(src_pgd));
+		dst_pudp = __va(pgd_page_paddr(dst_pgd));
+
+		rc = trans_table_copy_pud(info, dst_pudp, src_pudp, addr, next);
+		if (rc)
+			return rc;
+	} while (addr = next, i++, addr != end && i < PTRS_PER_PGD);
+
+	return 0;
+}
+
+int trans_table_create_empty(struct trans_table_info *info, pgd_t **trans_table)
+{
+	pgd_t *dst_pgdp = trans_alloc(info);
+
+	if (!dst_pgdp)
+		return -ENOMEM;
+
+	*trans_table = dst_pgdp;
+
+	return 0;
+}
+
+int trans_table_create_copy(struct trans_table_info *info, pgd_t **trans_table,
+			    pgd_t *from_table, unsigned long start,
+			    unsigned long end)
+{
+	int rc;
+
+	rc = trans_table_create_empty(info, trans_table);
+	if (rc)
+		return rc;
+
+	return trans_table_copy_pgd(info, *trans_table, from_table, start, end);
+}
+
+int trans_table_map_page(struct trans_table_info *info, pgd_t *trans_table,
+			 void *page, unsigned long dst_addr, pgprot_t pgprot)
+{
+	int pgd_idx = pgd_index(dst_addr);
+	int pud_idx = pud_index(dst_addr);
+	int pmd_idx = pmd_index(dst_addr);
+	int pte_idx = pte_index(dst_addr);
+	pgd_t *pgdp = trans_table;
+	pgd_t pgd = READ_ONCE(pgdp[pgd_idx]);
+	pud_t *pudp, pud;
+	pmd_t *pmdp, pmd;
+	pte_t *ptep, pte;
+
+	if (pgd_none(pgd)) {
+		pud_t *t = trans_alloc(info);
+
+		if (!t)
+			return -ENOMEM;
+
+		__pgd_populate(&pgdp[pgd_idx], __pa(t), PUD_TYPE_TABLE);
+		pgd = READ_ONCE(pgdp[pgd_idx]);
+	}
+
+	pudp = __va(pgd_page_paddr(pgd));
+	pud = READ_ONCE(pudp[pud_idx]);
+	if (pud_sect(pud) && !(info->trans_flags & TRANS_FORCEMAP)) {
+		return -ENXIO;
+	} else if (pud_none(pud) || pud_sect(pud)) {
+		pmd_t *t = trans_alloc(info);
+
+		if (!t)
+			return -ENOMEM;
+
+		__pud_populate(&pudp[pud_idx], __pa(t), PMD_TYPE_TABLE);
+		pud = READ_ONCE(pudp[pud_idx]);
+	}
+
+	pmdp = __va(pud_page_paddr(pud));
+	pmd = READ_ONCE(pmdp[pmd_idx]);
+	if (pmd_sect(pmd) && !(info->trans_flags & TRANS_FORCEMAP)) {
+		return -ENXIO;
+	} else if (pmd_none(pmd) || pmd_sect(pmd)) {
+		pte_t *t = trans_alloc(info);
+
+		if (!t)
+			return -ENOMEM;
+
+		__pmd_populate(&pmdp[pmd_idx], __pa(t), PTE_TYPE_PAGE);
+		pmd = READ_ONCE(pmdp[pmd_idx]);
+	}
+
+	ptep = __va(pmd_page_paddr(pmd));
+	pte = READ_ONCE(ptep[pte_idx]);
+
+	if (!pte_none(pte) && !(info->trans_flags & TRANS_FORCEMAP))
+		return -ENXIO;
+
+	set_pte(&ptep[pte_idx], pfn_pte(virt_to_pfn(page), pgprot));
+
+	return 0;
+}
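To make the interface above concrete, here is a minimal sketch of how a
caller could build a transitional table: copy the linear map the way the
hibernate patch later in this series does, then map one extra page. The
example_* names and the use of the normal page allocator are illustrative
assumptions, not code from the series; a real user must allocate from
memory that is known to survive the transition.

#include <linux/gfp.h>
#include <linux/mm.h>
#include <asm/pgtable.h>
#include <asm/trans_table.h>

/* Illustrative allocator: one page per call from the normal allocator. */
static void *example_alloc_page(void *arg)
{
	return (void *)__get_free_page((gfp_t)(unsigned long)arg);
}

static int example_build_tables(void *payload_page, unsigned long dst_addr)
{
	struct trans_table_info info = {
		.trans_alloc_page	= example_alloc_page,
		.trans_alloc_arg	= (void *)GFP_KERNEL,
		.trans_flags		= TRANS_MKWRITE,
	};
	pgd_t *trans_table;
	int rc;

	/* Copy the linear-map entries [PAGE_OFFSET, 0) into a fresh table. */
	rc = trans_table_create_copy(&info, &trans_table,
				     pgd_offset_k(PAGE_OFFSET), PAGE_OFFSET, 0);
	if (rc)
		return rc;

	/* Add a single base-page mapping for payload_page at dst_addr. */
	return trans_table_map_page(&info, trans_table, payload_page,
				    dst_addr, PAGE_KERNEL_EXEC);
}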
From patchwork Thu Aug 1 15:24:34 2019
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 11070909
From: Pavel Tatashin
To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org,
 ebiederm@xmission.com, kexec@lists.infradead.org,
 linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com,
 will@kernel.org, linux-arm-kernel@lists.infradead.org,
 marc.zyngier@arm.com, james.morse@arm.com, vladimir.murzin@arm.com,
 matthias.bgg@gmail.com, bhsharma@redhat.com, linux-mm@kvack.org
Subject: [PATCH v1 3/8] arm64: hibernate: switch to transitional page tables
Date: Thu, 1 Aug 2019 11:24:34 -0400
Message-Id: <20190801152439.11363-4-pasha.tatashin@soleen.com>
In-Reply-To: <20190801152439.11363-1-pasha.tatashin@soleen.com>
References: <20190801152439.11363-1-pasha.tatashin@soleen.com>

Transitional page tables provide the functionality needed to set up the
temporary page tables used during hibernate resume.
Signed-off-by: Pavel Tatashin --- arch/arm64/kernel/hibernate.c | 261 ++++++++-------------------------- 1 file changed, 60 insertions(+), 201 deletions(-) diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c index 9341fcc6e809..4120b03a02fd 100644 --- a/arch/arm64/kernel/hibernate.c +++ b/arch/arm64/kernel/hibernate.c @@ -16,7 +16,6 @@ #define pr_fmt(x) "hibernate: " x #include #include -#include #include #include #include @@ -31,14 +30,12 @@ #include #include #include -#include -#include -#include #include #include #include #include #include +#include #include /* @@ -182,6 +179,12 @@ int arch_hibernation_header_restore(void *addr) } EXPORT_SYMBOL(arch_hibernation_header_restore); +static void * +hibernate_page_alloc(void *arg) +{ + return (void *)get_safe_page((gfp_t)(unsigned long)arg); +} + /* * Copies length bytes, starting at src_start into an new page, * perform cache maintentance, then maps it at the specified address low @@ -196,57 +199,31 @@ EXPORT_SYMBOL(arch_hibernation_header_restore); */ static int create_safe_exec_page(void *src_start, size_t length, unsigned long dst_addr, - phys_addr_t *phys_dst_addr, - void *(*allocator)(gfp_t mask), - gfp_t mask) + phys_addr_t *phys_dst_addr) { - int rc = 0; - pgd_t *pgdp; - pud_t *pudp; - pmd_t *pmdp; - pte_t *ptep; - unsigned long dst = (unsigned long)allocator(mask); - - if (!dst) { - rc = -ENOMEM; - goto out; - } - - memcpy((void *)dst, src_start, length); - __flush_icache_range(dst, dst + length); + struct trans_table_info trans_info = { + .trans_alloc_page = hibernate_page_alloc, + .trans_alloc_arg = (void *)GFP_ATOMIC, + .trans_flags = 0, + }; + void *page = (void *)get_safe_page(GFP_ATOMIC); + pgd_t *trans_table; + int rc; + + if (!page) + return -ENOMEM; - pgdp = pgd_offset_raw(allocator(mask), dst_addr); - if (pgd_none(READ_ONCE(*pgdp))) { - pudp = allocator(mask); - if (!pudp) { - rc = -ENOMEM; - goto out; - } - pgd_populate(&init_mm, pgdp, pudp); - } + memcpy(page, src_start, length); + __flush_icache_range((unsigned long)page, (unsigned long)page + length); - pudp = pud_offset(pgdp, dst_addr); - if (pud_none(READ_ONCE(*pudp))) { - pmdp = allocator(mask); - if (!pmdp) { - rc = -ENOMEM; - goto out; - } - pud_populate(&init_mm, pudp, pmdp); - } - - pmdp = pmd_offset(pudp, dst_addr); - if (pmd_none(READ_ONCE(*pmdp))) { - ptep = allocator(mask); - if (!ptep) { - rc = -ENOMEM; - goto out; - } - pmd_populate_kernel(&init_mm, pmdp, ptep); - } + rc = trans_table_create_empty(&trans_info, &trans_table); + if (rc) + return rc; - ptep = pte_offset_kernel(pmdp, dst_addr); - set_pte(ptep, pfn_pte(virt_to_pfn(dst), PAGE_KERNEL_EXEC)); + rc = trans_table_map_page(&trans_info, trans_table, page, dst_addr, + PAGE_KERNEL_EXEC); + if (rc) + return rc; /* * Load our new page tables. 
A strict BBM approach requires that we @@ -262,13 +239,12 @@ static int create_safe_exec_page(void *src_start, size_t length, */ cpu_set_reserved_ttbr0(); local_flush_tlb_all(); - write_sysreg(phys_to_ttbr(virt_to_phys(pgdp)), ttbr0_el1); + write_sysreg(phys_to_ttbr(virt_to_phys(trans_table)), ttbr0_el1); isb(); - *phys_dst_addr = virt_to_phys((void *)dst); + *phys_dst_addr = virt_to_phys(page); -out: - return rc; + return 0; } #define dcache_clean_range(start, end) __flush_dcache_area(start, (end - start)) @@ -332,143 +308,6 @@ int swsusp_arch_suspend(void) return ret; } -static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr) -{ - pte_t pte = READ_ONCE(*src_ptep); - - if (pte_valid(pte)) { - /* - * Resume will overwrite areas that may be marked - * read only (code, rodata). Clear the RDONLY bit from - * the temporary mappings we use during restore. - */ - set_pte(dst_ptep, pte_mkwrite(pte)); - } else if (debug_pagealloc_enabled() && !pte_none(pte)) { - /* - * debug_pagealloc will removed the PTE_VALID bit if - * the page isn't in use by the resume kernel. It may have - * been in use by the original kernel, in which case we need - * to put it back in our copy to do the restore. - * - * Before marking this entry valid, check the pfn should - * be mapped. - */ - BUG_ON(!pfn_valid(pte_pfn(pte))); - - set_pte(dst_ptep, pte_mkpresent(pte_mkwrite(pte))); - } -} - -static int copy_pte(pmd_t *dst_pmdp, pmd_t *src_pmdp, unsigned long start, - unsigned long end) -{ - pte_t *src_ptep; - pte_t *dst_ptep; - unsigned long addr = start; - - dst_ptep = (pte_t *)get_safe_page(GFP_ATOMIC); - if (!dst_ptep) - return -ENOMEM; - pmd_populate_kernel(&init_mm, dst_pmdp, dst_ptep); - dst_ptep = pte_offset_kernel(dst_pmdp, start); - - src_ptep = pte_offset_kernel(src_pmdp, start); - do { - _copy_pte(dst_ptep, src_ptep, addr); - } while (dst_ptep++, src_ptep++, addr += PAGE_SIZE, addr != end); - - return 0; -} - -static int copy_pmd(pud_t *dst_pudp, pud_t *src_pudp, unsigned long start, - unsigned long end) -{ - pmd_t *src_pmdp; - pmd_t *dst_pmdp; - unsigned long next; - unsigned long addr = start; - - if (pud_none(READ_ONCE(*dst_pudp))) { - dst_pmdp = (pmd_t *)get_safe_page(GFP_ATOMIC); - if (!dst_pmdp) - return -ENOMEM; - pud_populate(&init_mm, dst_pudp, dst_pmdp); - } - dst_pmdp = pmd_offset(dst_pudp, start); - - src_pmdp = pmd_offset(src_pudp, start); - do { - pmd_t pmd = READ_ONCE(*src_pmdp); - - next = pmd_addr_end(addr, end); - if (pmd_none(pmd)) - continue; - if (pmd_table(pmd)) { - if (copy_pte(dst_pmdp, src_pmdp, addr, next)) - return -ENOMEM; - } else { - set_pmd(dst_pmdp, - __pmd(pmd_val(pmd) & ~PMD_SECT_RDONLY)); - } - } while (dst_pmdp++, src_pmdp++, addr = next, addr != end); - - return 0; -} - -static int copy_pud(pgd_t *dst_pgdp, pgd_t *src_pgdp, unsigned long start, - unsigned long end) -{ - pud_t *dst_pudp; - pud_t *src_pudp; - unsigned long next; - unsigned long addr = start; - - if (pgd_none(READ_ONCE(*dst_pgdp))) { - dst_pudp = (pud_t *)get_safe_page(GFP_ATOMIC); - if (!dst_pudp) - return -ENOMEM; - pgd_populate(&init_mm, dst_pgdp, dst_pudp); - } - dst_pudp = pud_offset(dst_pgdp, start); - - src_pudp = pud_offset(src_pgdp, start); - do { - pud_t pud = READ_ONCE(*src_pudp); - - next = pud_addr_end(addr, end); - if (pud_none(pud)) - continue; - if (pud_table(pud)) { - if (copy_pmd(dst_pudp, src_pudp, addr, next)) - return -ENOMEM; - } else { - set_pud(dst_pudp, - __pud(pud_val(pud) & ~PMD_SECT_RDONLY)); - } - } while (dst_pudp++, src_pudp++, addr = next, addr != end); - - 
	return 0;
-}
-
-static int copy_page_tables(pgd_t *dst_pgdp, unsigned long start,
-			    unsigned long end)
-{
-	unsigned long next;
-	unsigned long addr = start;
-	pgd_t *src_pgdp = pgd_offset_k(start);
-
-	dst_pgdp = pgd_offset_raw(dst_pgdp, start);
-	do {
-		next = pgd_addr_end(addr, end);
-		if (pgd_none(READ_ONCE(*src_pgdp)))
-			continue;
-		if (copy_pud(dst_pgdp, src_pgdp, addr, next))
-			return -ENOMEM;
-	} while (dst_pgdp++, src_pgdp++, addr = next, addr != end);
-
-	return 0;
-}
-
 /*
  * Setup then Resume from the hibernate image using swsusp_arch_suspend_exit().
  *
@@ -484,21 +323,42 @@ int swsusp_arch_resume(void)
 	phys_addr_t phys_hibernate_exit;
 	void __noreturn (*hibernate_exit)(phys_addr_t, phys_addr_t, void *,
 					  void *, phys_addr_t, phys_addr_t);
+	struct trans_table_info trans_info = {
+		.trans_alloc_page	= hibernate_page_alloc,
+		.trans_alloc_arg	= (void *)GFP_ATOMIC,
+		/*
+		 * Resume will overwrite areas that may be marked read only
+		 * (code, rodata). Clear the RDONLY bit from the temporary
+		 * mappings we use during restore.
+		 */
+		.trans_flags		= TRANS_MKWRITE,
+	};
+
+	/*
+	 * debug_pagealloc will removed the PTE_VALID bit if the page isn't in
+	 * use by the resume kernel. It may have been in use by the original
+	 * kernel, in which case we need to put it back in our copy to do the
+	 * restore.
+	 *
+	 * Before marking this entry valid, check the pfn should be mapped.
+	 */
+	if (debug_pagealloc_enabled())
+		trans_info.trans_flags |= (TRANS_MKVALID | TRANS_CHECKPFN);
 
 	/*
 	 * Restoring the memory image will overwrite the ttbr1 page tables.
 	 * Create a second copy of just the linear map, and use this when
 	 * restoring.
 	 */
-	tmp_pg_dir = (pgd_t *)get_safe_page(GFP_ATOMIC);
-	if (!tmp_pg_dir) {
-		pr_err("Failed to allocate memory for temporary page tables.\n");
-		rc = -ENOMEM;
+	rc = trans_table_create_copy(&trans_info, &tmp_pg_dir,
+				     pgd_offset_k(PAGE_OFFSET), PAGE_OFFSET, 0);
+	if (rc) {
+		if (rc == -ENOMEM)
+			pr_err("Failed to allocate memory for temporary page tables.\n");
+		else if (rc == -ENXIO)
+			pr_err("Tried to set PTE for PFN that does not exist\n");
 		goto out;
 	}
-	rc = copy_page_tables(tmp_pg_dir, PAGE_OFFSET, 0);
-	if (rc)
-		goto out;
 
 	/*
 	 * We need a zero page that is zero before & after resume in order to
@@ -523,8 +383,7 @@ int swsusp_arch_resume(void)
 	 */
 	rc = create_safe_exec_page(__hibernate_exit_text_start, exit_size,
 				   (unsigned long)hibernate_exit,
-				   &phys_hibernate_exit,
-				   (void *)get_safe_page, GFP_ATOMIC);
+				   &phys_hibernate_exit);
 	if (rc) {
 		pr_err("Failed to create safe executable page for hibernate_exit code.\n");
 		goto out;
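The hibernate conversion above passes a get_safe_page()-based callback.
Other users can plug in whatever allocation policy fits their transition;
the sketch below, with invented example_* names, shows a callback that
hands out pages from a pool reserved in advance and simply returns NULL
when the pool is exhausted, which the trans_table_* helpers report as
-ENOMEM. This is an illustration only, not part of the series.

#include <asm/trans_table.h>

/* Hypothetical pool of pages reserved before the transition starts. */
struct example_page_pool {
	void **pages;
	unsigned int nr;
	unsigned int next;
};

static void *example_pool_alloc_page(void *arg)
{
	struct example_page_pool *pool = arg;

	/* Returning NULL makes trans_table_create_copy()/_map_page() fail
	 * cleanly with -ENOMEM. */
	if (pool->next >= pool->nr)
		return NULL;
	return pool->pages[pool->next++];
}

static void example_pool_init_info(struct trans_table_info *info,
				   struct example_page_pool *pool)
{
	info->trans_alloc_page	= example_pool_alloc_page;
	info->trans_alloc_arg	= pool;
	info->trans_flags	= 0;
}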
From patchwork Thu Aug 1 15:24:35 2019
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 11070911
From: Pavel Tatashin
To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org,
 ebiederm@xmission.com, kexec@lists.infradead.org,
 linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com,
 will@kernel.org, linux-arm-kernel@lists.infradead.org,
 marc.zyngier@arm.com, james.morse@arm.com, vladimir.murzin@arm.com,
 matthias.bgg@gmail.com, bhsharma@redhat.com, linux-mm@kvack.org
Subject: [PATCH v1 4/8] kexec: add machine_kexec_post_load()
Date: Thu, 1 Aug 2019 11:24:35 -0400
Message-Id: <20190801152439.11363-5-pasha.tatashin@soleen.com>
In-Reply-To: <20190801152439.11363-1-pasha.tatashin@soleen.com>
References: <20190801152439.11363-1-pasha.tatashin@soleen.com>

machine_kexec_post_load() is the same as machine_kexec_prepare(), but it
is called after the segments are loaded. This way, an architecture can do
processing work that requires the relocation segments to already be in
place. One such example is arm64: it needs the segments loaded in order
to create a page table, but it cannot do that at kexec reboot time,
because by then allocations are no longer possible.
Signed-off-by: Pavel Tatashin
---
 kernel/kexec.c          | 4 ++++
 kernel/kexec_core.c     | 6 ++++++
 kernel/kexec_file.c     | 4 ++++
 kernel/kexec_internal.h | 2 ++
 4 files changed, 16 insertions(+)

diff --git a/kernel/kexec.c b/kernel/kexec.c
index 1b018f1a6e0d..27b71dc7b35a 100644
--- a/kernel/kexec.c
+++ b/kernel/kexec.c
@@ -159,6 +159,10 @@ static int do_kexec_load(unsigned long entry, unsigned long nr_segments,
 
 	kimage_terminate(image);
 
+	ret = machine_kexec_post_load(image);
+	if (ret)
+		goto out;
+
 	/* Install the new kernel and uninstall the old */
 	image = xchg(dest_image, image);
 
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index 2c5b72863b7b..8360645d1bbe 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -587,6 +587,12 @@ static void kimage_free_extra_pages(struct kimage *image)
 	kimage_free_page_list(&image->unusable_pages);
 }
 
+
+int __weak machine_kexec_post_load(struct kimage *image)
+{
+	return 0;
+}
+
 void kimage_terminate(struct kimage *image)
 {
 	if (*image->entry != 0)
diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
index b8cc032d5620..cb531d768114 100644
--- a/kernel/kexec_file.c
+++ b/kernel/kexec_file.c
@@ -391,6 +391,10 @@ SYSCALL_DEFINE5(kexec_file_load, int, kernel_fd, int, initrd_fd,
 
 	kimage_terminate(image);
 
+	ret = machine_kexec_post_load(image);
+	if (ret)
+		goto out;
+
 	/*
 	 * Free up any temporary buffers allocated which are not needed
 	 * after image has been loaded
diff --git a/kernel/kexec_internal.h b/kernel/kexec_internal.h
index 48aaf2ac0d0d..39d30ccf8d87 100644
--- a/kernel/kexec_internal.h
+++ b/kernel/kexec_internal.h
@@ -13,6 +13,8 @@ void kimage_terminate(struct kimage *image);
 int kimage_is_destination_range(struct kimage *image,
 				unsigned long start, unsigned long end);
 
+int machine_kexec_post_load(struct kimage *image);
+
 extern struct mutex kexec_mutex;
 
 #ifdef CONFIG_KEXEC_FILE
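machine_kexec_post_load() is a __weak stub, so an architecture opts in
simply by providing a strong definition; the next patch does exactly that
for arm64. The sketch below is schematic and not copied from the series;
in this version the prototype lives in kernel/kexec_internal.h.

#include <linux/kexec.h>	/* struct kimage */

/*
 * A strong arch definition replaces the __weak stub in kernel/kexec_core.c.
 * It runs once per load, after every segment of the new kernel image has
 * been placed, while normal memory allocations are still possible.
 */
int machine_kexec_post_load(struct kimage *image)
{
	/* e.g. copy the relocation stub and build transitional page tables */
	return 0;
}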
From patchwork Thu Aug 1 15:24:36 2019
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 11070913
From: Pavel Tatashin
To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org,
 ebiederm@xmission.com, kexec@lists.infradead.org,
 linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com,
 will@kernel.org, linux-arm-kernel@lists.infradead.org,
 marc.zyngier@arm.com, james.morse@arm.com, vladimir.murzin@arm.com,
 matthias.bgg@gmail.com, bhsharma@redhat.com, linux-mm@kvack.org
Subject: [PATCH v1 5/8] arm64, kexec: move relocation function setup and clean up
Date: Thu, 1 Aug 2019 11:24:36 -0400
Message-Id: <20190801152439.11363-6-pasha.tatashin@soleen.com>
In-Reply-To: <20190801152439.11363-1-pasha.tatashin@soleen.com>
References: <20190801152439.11363-1-pasha.tatashin@soleen.com>

Currently, the kernel relocation function is configured in
machine_kexec() at kexec reboot time, using control_code_page. This
operation, however, is more logically done during kexec load, so remove
it from reboot time and move the setup of this function to the newly
added machine_kexec_post_load().

In addition, do some cleanup: add info about the relocation function to
kexec_image_info(), and remove the extra debug messages from
machine_kexec(). Make dtb_mem always available; if CONFIG_KEXEC_FILE is
not configured, dtb_mem is simply zero anyway.
Signed-off-by: Pavel Tatashin --- arch/arm64/include/asm/kexec.h | 3 +- arch/arm64/kernel/machine_kexec.c | 49 +++++++++++-------------------- 2 files changed, 19 insertions(+), 33 deletions(-) diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h index 12a561a54128..d15ca1ca1e83 100644 --- a/arch/arm64/include/asm/kexec.h +++ b/arch/arm64/include/asm/kexec.h @@ -90,14 +90,15 @@ static inline void crash_prepare_suspend(void) {} static inline void crash_post_resume(void) {} #endif -#ifdef CONFIG_KEXEC_FILE #define ARCH_HAS_KIMAGE_ARCH struct kimage_arch { void *dtb; unsigned long dtb_mem; + unsigned long kern_reloc; }; +#ifdef CONFIG_KEXEC_FILE extern const struct kexec_file_ops kexec_image_ops; struct kimage; diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c index 0df8493624e0..9b41da50e6f7 100644 --- a/arch/arm64/kernel/machine_kexec.c +++ b/arch/arm64/kernel/machine_kexec.c @@ -42,6 +42,7 @@ static void _kexec_image_info(const char *func, int line, pr_debug(" start: %lx\n", kimage->start); pr_debug(" head: %lx\n", kimage->head); pr_debug(" nr_segments: %lu\n", kimage->nr_segments); + pr_debug(" kern_reloc: %pa\n", &kimage->arch.kern_reloc); for (i = 0; i < kimage->nr_segments; i++) { pr_debug(" segment[%lu]: %016lx - %016lx, 0x%lx bytes, %lu pages\n", @@ -58,6 +59,19 @@ void machine_kexec_cleanup(struct kimage *kimage) /* Empty routine needed to avoid build errors. */ } +int machine_kexec_post_load(struct kimage *kimage) +{ + unsigned long kern_reloc; + + kern_reloc = page_to_phys(kimage->control_code_page); + memcpy(__va(kern_reloc), arm64_relocate_new_kernel, + arm64_relocate_new_kernel_size); + kimage->arch.kern_reloc = kern_reloc; + + kexec_image_info(kimage); + return 0; +} + /** * machine_kexec_prepare - Prepare for a kexec reboot. * @@ -67,8 +81,6 @@ void machine_kexec_cleanup(struct kimage *kimage) */ int machine_kexec_prepare(struct kimage *kimage) { - kexec_image_info(kimage); - if (kimage->type != KEXEC_TYPE_CRASH && cpus_are_stuck_in_kernel()) { pr_err("Can't kexec: CPUs are stuck in the kernel.\n"); return -EBUSY; @@ -143,8 +155,7 @@ static void kexec_segment_flush(const struct kimage *kimage) */ void machine_kexec(struct kimage *kimage) { - phys_addr_t reboot_code_buffer_phys; - void *reboot_code_buffer; + void *reboot_code_buffer = phys_to_virt(kimage->arch.kern_reloc); bool in_kexec_crash = (kimage == kexec_crash_image); bool stuck_cpus = cpus_are_stuck_in_kernel(); @@ -155,30 +166,8 @@ void machine_kexec(struct kimage *kimage) WARN(in_kexec_crash && (stuck_cpus || smp_crash_stop_failed()), "Some CPUs may be stale, kdump will be unreliable.\n"); - reboot_code_buffer_phys = page_to_phys(kimage->control_code_page); - reboot_code_buffer = phys_to_virt(reboot_code_buffer_phys); - kexec_image_info(kimage); - pr_debug("%s:%d: control_code_page: %p\n", __func__, __LINE__, - kimage->control_code_page); - pr_debug("%s:%d: reboot_code_buffer_phys: %pa\n", __func__, __LINE__, - &reboot_code_buffer_phys); - pr_debug("%s:%d: reboot_code_buffer: %p\n", __func__, __LINE__, - reboot_code_buffer); - pr_debug("%s:%d: relocate_new_kernel: %p\n", __func__, __LINE__, - arm64_relocate_new_kernel); - pr_debug("%s:%d: relocate_new_kernel_size: 0x%lx(%lu) bytes\n", - __func__, __LINE__, arm64_relocate_new_kernel_size, - arm64_relocate_new_kernel_size); - - /* - * Copy arm64_relocate_new_kernel to the reboot_code_buffer for use - * after the kernel is shut down. 
- */ - memcpy(reboot_code_buffer, arm64_relocate_new_kernel, - arm64_relocate_new_kernel_size); - /* Flush the reboot_code_buffer in preparation for its execution. */ __flush_dcache_area(reboot_code_buffer, arm64_relocate_new_kernel_size); @@ -214,12 +203,8 @@ void machine_kexec(struct kimage *kimage) * userspace (kexec-tools). * In kexec_file case, the kernel starts directly without purgatory. */ - cpu_soft_restart(reboot_code_buffer_phys, kimage->head, kimage->start, -#ifdef CONFIG_KEXEC_FILE - kimage->arch.dtb_mem); -#else - 0); -#endif + cpu_soft_restart(kimage->arch.kern_reloc, kimage->head, kimage->start, + kimage->arch.dtb_mem); BUG(); /* Should never get here. */ } From patchwork Thu Aug 1 15:24:37 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pasha Tatashin X-Patchwork-Id: 11070915 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 6163913A0 for ; Thu, 1 Aug 2019 15:25:03 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 52BA4285FB for ; Thu, 1 Aug 2019 15:25:03 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 4FB74219AC; Thu, 1 Aug 2019 15:25:03 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-3.0 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,MAILING_LIST_MULTI,RCVD_IN_DNSWL_NONE autolearn=ham version=3.3.1 Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 38538219AC for ; Thu, 1 Aug 2019 15:25:02 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id B154B8E0028; Thu, 1 Aug 2019 11:24:52 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id AC70D8E0001; Thu, 1 Aug 2019 11:24:52 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 9186F8E0028; Thu, 1 Aug 2019 11:24:52 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from mail-qk1-f198.google.com (mail-qk1-f198.google.com [209.85.222.198]) by kanga.kvack.org (Postfix) with ESMTP id 676A68E0001 for ; Thu, 1 Aug 2019 11:24:52 -0400 (EDT) Received: by mail-qk1-f198.google.com with SMTP id n190so61615883qkd.5 for ; Thu, 01 Aug 2019 08:24:52 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:dkim-signature:from:to:subject:date:message-id :in-reply-to:references:mime-version:content-transfer-encoding; bh=Tx5xNJqbq0sg9B8Ihv16AG1CCukqIRYJcVTxMeXsfQk=; b=dtmaFh6uXumbYuebUJLl6wAfPkCvexc3KEgYLfhq/mxz6gsbC2XAMiB1Nmu8I9YSas ktUzgStIRjsV6+uzwRZvvniIl+ygWClLBYG/pclANtVMTTtiUARyYc/bhShrVNWl854R Noo3MS/xE4OAEovkYYP+30BbsI+6O6Yy6nnMDaCs6lT/q+HGB3rJ5Y+UZPnw/G+K2p3R o//HYhfUly/VMcerZC8ngZWbohLsI2ci9fsNFMX9OY174hVEhVhsPUpLFufyx/4Z+bCd Gd1dpgWaDbKix2YKBEEAcEXIBCTbzYIrBSTCcuiZgYkPTvs21eC8dLPDX6l3rQS+wIoy +qSA== X-Gm-Message-State: APjAAAVOX3hRLi7qN9QGshzxAzfVcmoBCd/FG903s/ziOOHVMHgJh6+A rt7Mta1wA4kcwaGnnGmrpCAdDfA1xgqWV7s7zC2dt2qmXp2VWFCNdTt+5r9TiiNkY02PTl3Th3K 7/VAEAGwDIcBpAyBh+DXRcXarjsIk7Wvkjeew+87Uh2WKsqryP9UV74nVeIvi7yqqpg== X-Received: by 2002:a0c:b163:: with SMTP id 
From: Pavel Tatashin
To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org,
    ebiederm@xmission.com, kexec@lists.infradead.org,
    linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com,
    will@kernel.org, linux-arm-kernel@lists.infradead.org,
    marc.zyngier@arm.com, james.morse@arm.com, vladimir.murzin@arm.com,
    matthias.bgg@gmail.com, bhsharma@redhat.com, linux-mm@kvack.org
Subject: [PATCH v1 6/8] arm64, kexec: add expandable argument to relocation function
Date: Thu, 1 Aug 2019 11:24:37 -0400
Message-Id: <20190801152439.11363-7-pasha.tatashin@soleen.com>
In-Reply-To: <20190801152439.11363-1-pasha.tatashin@soleen.com>
References: <20190801152439.11363-1-pasha.tatashin@soleen.com>

Currently, the kexec relocation function (arm64_relocate_new_kernel) accepts the following arguments:

head:    start of the array that contains relocation information.
entry:   entry point for the new kernel or purgatory.
dtb_mem: first and only argument to entry.

The number of arguments cannot easily be expanded, because this function is also called from HVC_SOFT_RESTART, which preserves only three arguments. Also, arm64_relocate_new_kernel is written in assembly and is called without a stack, so there is no place to spill extra arguments to free registers.

Soon we will need to pass more arguments: once we enable the MMU we will need to pass information about page tables. Another benefit of letting this function accept more arguments is that the new kernel can actually take up to four arguments (x0-x3), of which only one is currently used. If in the future we need more (for example, to pass the time when the previous kernel exited, in order to measure precisely how long was spent in purgatory), we will not easily be able to do that if arm64_relocate_new_kernel cannot accept more arguments.

So, add a new struct, kern_reloc_arg, and place it in a kexec-safe page (i.e. memory that is not overwritten during relocation). Thus, arm64_relocate_new_kernel takes only one argument, which contains all the needed information.

Signed-off-by: Pavel Tatashin
--- arch/arm64/include/asm/kexec.h | 18 ++++++ arch/arm64/kernel/asm-offsets.c | 9 +++ arch/arm64/kernel/cpu-reset.S | 4 +- arch/arm64/kernel/cpu-reset.h | 8 +-- arch/arm64/kernel/machine_kexec.c | 28 ++++++++- arch/arm64/kernel/relocate_kernel.S | 88 ++++++++++------------------- 6 files changed, 86 insertions(+), 69 deletions(-) diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h index d15ca1ca1e83..d5b79d4c7fae 100644 --- a/arch/arm64/include/asm/kexec.h +++ b/arch/arm64/include/asm/kexec.h @@ -90,12 +90,30 @@ static inline void crash_prepare_suspend(void) {} static inline void crash_post_resume(void) {} #endif +/* + * kern_reloc_arg is passed to kernel relocation function as an argument. + * head kimage->head, allows to traverse through relocation segments. + * entry_addr kimage->start, where to jump from relocation function (new + * kernel, or purgatory entry address). + * kern_arg0 first argument to kernel is its dtb address.
The other + * arguments are currently unused, and must be set to 0 + */ +struct kern_reloc_arg { + unsigned long head; + unsigned long entry_addr; + unsigned long kern_arg0; + unsigned long kern_arg1; + unsigned long kern_arg2; + unsigned long kern_arg3; +}; + #define ARCH_HAS_KIMAGE_ARCH struct kimage_arch { void *dtb; unsigned long dtb_mem; unsigned long kern_reloc; + unsigned long kern_reloc_arg; }; #ifdef CONFIG_KEXEC_FILE diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c index 214685760e1c..900394907fd8 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -23,6 +23,7 @@ #include #include #include +#include int main(void) { @@ -126,6 +127,14 @@ int main(void) #ifdef CONFIG_ARM_SDE_INTERFACE DEFINE(SDEI_EVENT_INTREGS, offsetof(struct sdei_registered_event, interrupted_regs)); DEFINE(SDEI_EVENT_PRIORITY, offsetof(struct sdei_registered_event, priority)); +#endif +#ifdef CONFIG_KEXEC_CORE + DEFINE(KRELOC_HEAD, offsetof(struct kern_reloc_arg, head)); + DEFINE(KRELOC_ENTRY_ADDR, offsetof(struct kern_reloc_arg, entry_addr)); + DEFINE(KRELOC_KERN_ARG0, offsetof(struct kern_reloc_arg, kern_arg0)); + DEFINE(KRELOC_KERN_ARG1, offsetof(struct kern_reloc_arg, kern_arg1)); + DEFINE(KRELOC_KERN_ARG2, offsetof(struct kern_reloc_arg, kern_arg2)); + DEFINE(KRELOC_KERN_ARG3, offsetof(struct kern_reloc_arg, kern_arg3)); #endif return 0; } diff --git a/arch/arm64/kernel/cpu-reset.S b/arch/arm64/kernel/cpu-reset.S index 6ea337d464c4..64c78a42919f 100644 --- a/arch/arm64/kernel/cpu-reset.S +++ b/arch/arm64/kernel/cpu-reset.S @@ -43,9 +43,7 @@ ENTRY(__cpu_soft_restart) hvc #0 // no return 1: mov x18, x1 // entry - mov x0, x2 // arg0 - mov x1, x3 // arg1 - mov x2, x4 // arg2 + mov x0, x2 // arg br x18 ENDPROC(__cpu_soft_restart) diff --git a/arch/arm64/kernel/cpu-reset.h b/arch/arm64/kernel/cpu-reset.h index ed50e9587ad8..7a8720ff186f 100644 --- a/arch/arm64/kernel/cpu-reset.h +++ b/arch/arm64/kernel/cpu-reset.h @@ -11,12 +11,10 @@ #include void __cpu_soft_restart(unsigned long el2_switch, unsigned long entry, - unsigned long arg0, unsigned long arg1, unsigned long arg2); + unsigned long arg); static inline void __noreturn cpu_soft_restart(unsigned long entry, - unsigned long arg0, - unsigned long arg1, - unsigned long arg2) + unsigned long arg) { typeof(__cpu_soft_restart) *restart; @@ -25,7 +23,7 @@ static inline void __noreturn cpu_soft_restart(unsigned long entry, restart = (void *)__pa_symbol(__cpu_soft_restart); cpu_install_idmap(); - restart(el2_switch, entry, arg0, arg1, arg2); + restart(el2_switch, entry, arg); unreachable(); } diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c index 9b41da50e6f7..d745ea2051df 100644 --- a/arch/arm64/kernel/machine_kexec.c +++ b/arch/arm64/kernel/machine_kexec.c @@ -43,6 +43,7 @@ static void _kexec_image_info(const char *func, int line, pr_debug(" head: %lx\n", kimage->head); pr_debug(" nr_segments: %lu\n", kimage->nr_segments); pr_debug(" kern_reloc: %pa\n", &kimage->arch.kern_reloc); + pr_debug(" kern_reloc_arg: %pa\n", &kimage->arch.kern_reloc_arg); for (i = 0; i < kimage->nr_segments; i++) { pr_debug(" segment[%lu]: %016lx - %016lx, 0x%lx bytes, %lu pages\n", @@ -59,14 +60,38 @@ void machine_kexec_cleanup(struct kimage *kimage) /* Empty routine needed to avoid build errors. 
*/ } +/* Allocates pages for kexec page table */ +static void *kexec_page_alloc(void *arg) +{ + struct kimage *kimage = (struct kimage *)arg; + struct page *page = kimage_alloc_control_pages(kimage, 0); + + if (!page) + return NULL; + + return page_address(page); +} + int machine_kexec_post_load(struct kimage *kimage) { unsigned long kern_reloc; + struct kern_reloc_arg *kern_reloc_arg; kern_reloc = page_to_phys(kimage->control_code_page); memcpy(__va(kern_reloc), arm64_relocate_new_kernel, arm64_relocate_new_kernel_size); + + kern_reloc_arg = kexec_page_alloc(kimage); + if (!kern_reloc_arg) + return -ENOMEM; + memset(kern_reloc_arg, 0, sizeof(struct kern_reloc_arg)); + kimage->arch.kern_reloc = kern_reloc; + kimage->arch.kern_reloc_arg = __pa(kern_reloc_arg); + + kern_reloc_arg->head = kimage->head; + kern_reloc_arg->entry_addr = kimage->start; + kern_reloc_arg->kern_arg0 = kimage->arch.dtb_mem; kexec_image_info(kimage); return 0; @@ -203,8 +228,7 @@ void machine_kexec(struct kimage *kimage) * userspace (kexec-tools). * In kexec_file case, the kernel starts directly without purgatory. */ - cpu_soft_restart(kimage->arch.kern_reloc, kimage->head, kimage->start, - kimage->arch.dtb_mem); + cpu_soft_restart(kimage->arch.kern_reloc, kimage->arch.kern_reloc_arg); BUG(); /* Should never get here. */ } diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S index c1d7db71a726..d352faf7cbe6 100644 --- a/arch/arm64/kernel/relocate_kernel.S +++ b/arch/arm64/kernel/relocate_kernel.S @@ -8,7 +8,7 @@ #include #include - +#include #include #include #include @@ -17,86 +17,58 @@ /* * arm64_relocate_new_kernel - Put a 2nd stage image in place and boot it. * - * The memory that the old kernel occupies may be overwritten when coping the + * The memory that the old kernel occupies may be overwritten when copying the * new image to its final location. To assure that the * arm64_relocate_new_kernel routine which does that copy is not overwritten, * all code and data needed by arm64_relocate_new_kernel must be between the * symbols arm64_relocate_new_kernel and arm64_relocate_new_kernel_end. The * machine_kexec() routine will copy arm64_relocate_new_kernel to the kexec - * control_code_page, a special page which has been set up to be preserved - * during the copy operation. + * safe memory that has been set up to be preserved during the copy operation. */ ENTRY(arm64_relocate_new_kernel) - - /* Setup the list loop variables. */ - mov x18, x2 /* x18 = dtb address */ - mov x17, x1 /* x17 = kimage_start */ - mov x16, x0 /* x16 = kimage_head */ - raw_dcache_line_size x15, x0 /* x15 = dcache line size */ - mov x14, xzr /* x14 = entry ptr */ - mov x13, xzr /* x13 = copy dest */ - /* Clear the sctlr_el2 flags. */ - mrs x0, CurrentEL - cmp x0, #CurrentEL_EL2 + mrs x2, CurrentEL + cmp x2, #CurrentEL_EL2 b.ne 1f - mrs x0, sctlr_el2 + mrs x2, sctlr_el2 ldr x1, =SCTLR_ELx_FLAGS - bic x0, x0, x1 + bic x2, x2, x1 pre_disable_mmu_workaround - msr sctlr_el2, x0 + msr sctlr_el2, x2 isb -1: - - /* Check if the new image needs relocation. */ +1: /* Check if the new image needs relocation. */ + ldr x16, [x0, #KRELOC_HEAD] /* x16 = kimage_head */ tbnz x16, IND_DONE_BIT, .Ldone - + raw_dcache_line_size x15, x1 /* x15 = dcache line size */ .Lloop: and x12, x16, PAGE_MASK /* x12 = addr */ - /* Test the entry flags. */ .Ltest_source: tbz x16, IND_SOURCE_BIT, .Ltest_indirection /* Invalidate dest page to PoC. 
*/ - mov x0, x13 - add x20, x0, #PAGE_SIZE + mov x2, x13 + add x20, x2, #PAGE_SIZE sub x1, x15, #1 - bic x0, x0, x1 -2: dc ivac, x0 - add x0, x0, x15 - cmp x0, x20 + bic x2, x2, x1 +2: dc ivac, x2 + add x2, x2, x15 + cmp x2, x20 b.lo 2b dsb sy - mov x20, x13 - mov x21, x12 - copy_page x20, x21, x0, x1, x2, x3, x4, x5, x6, x7 - - /* dest += PAGE_SIZE */ - add x13, x13, PAGE_SIZE + copy_page x13, x12, x1, x2, x3, x4, x5, x6, x7, x8 b .Lnext - .Ltest_indirection: tbz x16, IND_INDIRECTION_BIT, .Ltest_destination - - /* ptr = addr */ - mov x14, x12 + mov x14, x12 /* ptr = addr */ b .Lnext - .Ltest_destination: tbz x16, IND_DESTINATION_BIT, .Lnext - - /* dest = addr */ - mov x13, x12 - + mov x13, x12 /* dest = addr */ .Lnext: - /* entry = *ptr++ */ - ldr x16, [x14], #8 - - /* while (!(entry & DONE)) */ - tbz x16, IND_DONE_BIT, .Lloop - + ldr x16, [x14], #8 /* entry = *ptr++ */ + tbz x16, IND_DONE_BIT, .Lloop /* while (!(entry & DONE)) */ .Ldone: /* wait for writes from copy_page to finish */ dsb nsh @@ -105,18 +77,16 @@ ENTRY(arm64_relocate_new_kernel) isb /* Start new image. */ - mov x0, x18 - mov x1, xzr - mov x2, xzr - mov x3, xzr - br x17 - -ENDPROC(arm64_relocate_new_kernel) + ldr x4, [x0, #KRELOC_ENTRY_ADDR] /* x4 = kimage_start */ + ldr x3, [x0, #KRELOC_KERN_ARG3] + ldr x2, [x0, #KRELOC_KERN_ARG2] + ldr x1, [x0, #KRELOC_KERN_ARG1] + ldr x0, [x0, #KRELOC_KERN_ARG0] /* x0 = dtb address */ + br x4 +END(arm64_relocate_new_kernel) .ltorg - .align 3 /* To keep the 64-bit values below naturally aligned. */ - .Lcopy_end: .org KEXEC_CONTROL_PAGE_SIZE From patchwork Thu Aug 1 15:24:38 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pasha Tatashin X-Patchwork-Id: 11070917 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9B07E1395 for ; Thu, 1 Aug 2019 15:25:06 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 8AA30284F9 for ; Thu, 1 Aug 2019 15:25:06 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 7E75D286BF; Thu, 1 Aug 2019 15:25:06 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-3.0 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,MAILING_LIST_MULTI,RCVD_IN_DNSWL_NONE autolearn=ham version=3.3.1 Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 86280284F9 for ; Thu, 1 Aug 2019 15:25:05 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id EE81F8E0029; Thu, 1 Aug 2019 11:24:53 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id E71898E0001; Thu, 1 Aug 2019 11:24:53 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id CC3FA8E0029; Thu, 1 Aug 2019 11:24:53 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from mail-qt1-f199.google.com (mail-qt1-f199.google.com [209.85.160.199]) by kanga.kvack.org (Postfix) with ESMTP id A2ADF8E0001 for ; Thu, 1 Aug 2019 11:24:53 -0400 (EDT) Received: by mail-qt1-f199.google.com with SMTP id h47so64945006qtc.20 for ; Thu, 01 Aug 2019 08:24:53 -0700 (PDT) 
From: Pavel Tatashin
To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org,
    ebiederm@xmission.com, kexec@lists.infradead.org,
    linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com,
    will@kernel.org, linux-arm-kernel@lists.infradead.org,
    marc.zyngier@arm.com, james.morse@arm.com, vladimir.murzin@arm.com,
    matthias.bgg@gmail.com, bhsharma@redhat.com, linux-mm@kvack.org
Subject: [PATCH v1 7/8] arm64, kexec: configure transitional page table for kexec
Date: Thu, 1 Aug 2019 11:24:38 -0400
Message-Id: <20190801152439.11363-8-pasha.tatashin@soleen.com>
In-Reply-To: <20190801152439.11363-1-pasha.tatashin@soleen.com>
References: <20190801152439.11363-1-pasha.tatashin@soleen.com>

Configure a page table located in kexec-safe memory that has the following mappings:

1. an identity mapping of the relocation function's text, with executable permission;
2. an identity mapping of the relocation function's argument;
3. linear mappings for all source ranges;
4. linear mappings for all destination ranges.

Also, configure el2_vector, which is used to jump to the new kernel from EL2 on non-VHE kernels.
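Because the diff below is dense, here is a condensed sketch of the new mmu_relocate_setup() it adds, with error handling stripped; the numbered comments match the list above, and the actual hunks remain the authoritative version:

    static int mmu_relocate_setup(struct kimage *kimage, unsigned long kern_reloc,
    			      struct kern_reloc_arg *kern_reloc_arg)
    {
    	struct trans_table_info info = {
    		.trans_alloc_page = kexec_page_alloc,   /* kexec-safe pages */
    		.trans_alloc_arg  = kimage,
    	};
    	pgd_t *trans_ttbr0, *trans_ttbr1;

    	trans_table_create_empty(&info, &trans_ttbr0);  /* idmap table      */
    	trans_table_create_empty(&info, &trans_ttbr1);  /* linear-map table */

    	/* 3, 4: contiguous linear windows over all source and destination
    	 * pages, so the relocation reduces to
    	 * memcpy(KEXEC_DST_START, KEXEC_SRC_START, copy_len). */
    	map_segments(kimage, trans_ttbr1, &info, &kern_reloc_arg->copy_len);

    	/* 1: relocation function text, mapped va == pa, executable */
    	trans_table_map_page(&info, trans_ttbr0, __va(kern_reloc),
    			     kern_reloc, PAGE_KERNEL_EXEC);

    	/* 2: relocation function argument, mapped va == pa */
    	trans_table_map_page(&info, trans_ttbr0, kern_reloc_arg,
    			     __pa(kern_reloc_arg), PAGE_KERNEL);

    	kern_reloc_arg->trans_ttbr0 = phys_to_ttbr(__pa(trans_ttbr0));
    	kern_reloc_arg->trans_ttbr1 = phys_to_ttbr(__pa(trans_ttbr1));
    	kern_reloc_arg->src_addr = KEXEC_SRC_START;
    	kern_reloc_arg->dst_addr = KEXEC_DST_START;

    	return 0;
    }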
Signed-off-by: Pavel Tatashin --- arch/arm64/include/asm/kexec.h | 32 +++++++ arch/arm64/kernel/asm-offsets.c | 6 ++ arch/arm64/kernel/machine_kexec.c | 129 ++++++++++++++++++++++++++-- arch/arm64/kernel/relocate_kernel.S | 16 +++- 4 files changed, 174 insertions(+), 9 deletions(-) diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h index d5b79d4c7fae..450d8440f597 100644 --- a/arch/arm64/include/asm/kexec.h +++ b/arch/arm64/include/asm/kexec.h @@ -90,6 +90,23 @@ static inline void crash_prepare_suspend(void) {} static inline void crash_post_resume(void) {} #endif +#if defined(CONFIG_KEXEC_CORE) +/* Global variables for the arm64_relocate_new_kernel routine. */ +extern const unsigned char arm64_relocate_new_kernel[]; +extern const unsigned long arm64_relocate_new_kernel_size; + +/* Body of the vector for escalating to EL2 from relocation routine */ +extern const unsigned char kexec_el1_sync[]; +extern const unsigned long kexec_el1_sync_size; + +#define KEXEC_EL2_VECTOR_TABLE_SIZE 2048 +#define KEXEC_EL2_SYNC_OFFSET (KEXEC_EL2_VECTOR_TABLE_SIZE / 2) + +#endif + +#define KEXEC_SRC_START PAGE_OFFSET +#define KEXEC_DST_START (PAGE_OFFSET + \ + ((UL(0xffffffffffffffff) - PAGE_OFFSET) >> 1) + 1) /* * kern_reloc_arg is passed to kernel relocation function as an argument. * head kimage->head, allows to traverse through relocation segments. @@ -97,6 +114,15 @@ static inline void crash_post_resume(void) {} * kernel, or purgatory entry address). * kern_arg0 first argument to kernel is its dtb address. The other * arguments are currently unused, and must be set to 0 + * trans_ttbr0 idmap for relocation function and its argument + * trans_ttbr1 linear map for source/destination addresses. + * el2_vector If present means that relocation routine will go to EL1 + * from EL2 to do the copy, and then back to EL2 to do the jump + * to new world. This vector contains only the final jump + * instruction at KEXEC_EL2_SYNC_OFFSET. + * src_addr linear map for source pages. + * dst_addr linear map for destination pages. 
+ * copy_len Number of bytes that need to be copied */ struct kern_reloc_arg { unsigned long head; @@ -105,6 +131,12 @@ struct kern_reloc_arg { unsigned long kern_arg1; unsigned long kern_arg2; unsigned long kern_arg3; + unsigned long trans_ttbr0; + unsigned long trans_ttbr1; + unsigned long el2_vector; + unsigned long src_addr; + unsigned long dst_addr; + unsigned long copy_len; }; #define ARCH_HAS_KIMAGE_ARCH diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c index 900394907fd8..7c2ba09a8ceb 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -135,6 +135,12 @@ int main(void) DEFINE(KRELOC_KERN_ARG1, offsetof(struct kern_reloc_arg, kern_arg1)); DEFINE(KRELOC_KERN_ARG2, offsetof(struct kern_reloc_arg, kern_arg2)); DEFINE(KRELOC_KERN_ARG3, offsetof(struct kern_reloc_arg, kern_arg3)); + DEFINE(KRELOC_TRANS_TTBR0, offsetof(struct kern_reloc_arg, trans_ttbr0)); + DEFINE(KRELOC_TRANS_TTBR1, offsetof(struct kern_reloc_arg, trans_ttbr1)); + DEFINE(KRELOC_EL2_VECTOR, offsetof(struct kern_reloc_arg, el2_vector)); + DEFINE(KRELOC_SRC_ADDR, offsetof(struct kern_reloc_arg, src_addr)); + DEFINE(KRELOC_DST_ADDR, offsetof(struct kern_reloc_arg, dst_addr)); + DEFINE(KRELOC_COPY_LEN, offsetof(struct kern_reloc_arg, copy_len)); #endif return 0; } diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c index d745ea2051df..16f761fc50c8 100644 --- a/arch/arm64/kernel/machine_kexec.c +++ b/arch/arm64/kernel/machine_kexec.c @@ -20,13 +20,10 @@ #include #include #include +#include #include "cpu-reset.h" -/* Global variables for the arm64_relocate_new_kernel routine. */ -extern const unsigned char arm64_relocate_new_kernel[]; -extern const unsigned long arm64_relocate_new_kernel_size; - /** * kexec_image_info - For debugging output. */ @@ -72,15 +69,128 @@ static void *kexec_page_alloc(void *arg) return page_address(page); } +/* + * Map source segments starting from KEXEC_SRC_START, and map destination + * segments starting from KEXEC_DST_START, and return size of copy in + * *copy_len argument. 
+ * Relocation function essentially needs to do: + * memcpy(KEXEC_DST_START, KEXEC_SRC_START, copy_len); + */ +static int map_segments(struct kimage *kimage, pgd_t *pgdp, + struct trans_table_info *info, + unsigned long *copy_len) +{ + unsigned long *ptr = 0; + unsigned long dest = 0; + unsigned long src_va = KEXEC_SRC_START; + unsigned long dst_va = KEXEC_DST_START; + unsigned long len = 0; + unsigned long entry, addr; + int rc; + + for (entry = kimage->head; !(entry & IND_DONE); entry = *ptr++) { + addr = entry & PAGE_MASK; + + switch (entry & IND_FLAGS) { + case IND_DESTINATION: + dest = addr; + break; + case IND_INDIRECTION: + ptr = __va(addr); + if (rc) + return rc; + break; + case IND_SOURCE: + rc = trans_table_map_page(info, pgdp, __va(addr), + src_va, PAGE_KERNEL); + if (rc) + return rc; + rc = trans_table_map_page(info, pgdp, __va(dest), + dst_va, PAGE_KERNEL); + if (rc) + return rc; + dest += PAGE_SIZE; + src_va += PAGE_SIZE; + dst_va += PAGE_SIZE; + len += PAGE_SIZE; + } + } + *copy_len = len; + + return 0; +} + +static int mmu_relocate_setup(struct kimage *kimage, unsigned long kern_reloc, + struct kern_reloc_arg *kern_reloc_arg) +{ + struct trans_table_info info = { + .trans_alloc_page = kexec_page_alloc, + .trans_alloc_arg = kimage, + .trans_flags = 0, + }; + pgd_t *trans_ttbr0, *trans_ttbr1; + int rc; + + rc = trans_table_create_empty(&info, &trans_ttbr0); + if (rc) + return rc; + + rc = trans_table_create_empty(&info, &trans_ttbr1); + if (rc) + return rc; + + rc = map_segments(kimage, trans_ttbr1, &info, + &kern_reloc_arg->copy_len); + if (rc) + return rc; + + /* Map relocation function va == pa */ + rc = trans_table_map_page(&info, trans_ttbr0, __va(kern_reloc), + kern_reloc, PAGE_KERNEL_EXEC); + if (rc) + return rc; + + /* Map relocation function argument va == pa */ + rc = trans_table_map_page(&info, trans_ttbr0, kern_reloc_arg, + __pa(kern_reloc_arg), PAGE_KERNEL); + if (rc) + return rc; + + kern_reloc_arg->trans_ttbr0 = phys_to_ttbr(__pa(trans_ttbr0)); + kern_reloc_arg->trans_ttbr1 = phys_to_ttbr(__pa(trans_ttbr1)); + kern_reloc_arg->src_addr = KEXEC_SRC_START; + kern_reloc_arg->dst_addr = KEXEC_DST_START; + + return 0; +} + int machine_kexec_post_load(struct kimage *kimage) { + unsigned long el2_vector = 0; unsigned long kern_reloc; struct kern_reloc_arg *kern_reloc_arg; + int rc = 0; + + /* + * Sanity check that relocation function + el2_vector fit into one + * page. 
+ */ + if (arm64_relocate_new_kernel_size > KEXEC_EL2_VECTOR_TABLE_SIZE) { + pr_err("can't fit relocation function and el2_vector in one page"); + return -ENOMEM; + } kern_reloc = page_to_phys(kimage->control_code_page); memcpy(__va(kern_reloc), arm64_relocate_new_kernel, arm64_relocate_new_kernel_size); + /* Setup vector table only when EL2 is available, but no VHE */ + if (is_hyp_mode_available() && !is_kernel_in_hyp_mode()) { + el2_vector = kern_reloc + KEXEC_EL2_VECTOR_TABLE_SIZE; + memcpy(__va(el2_vector + KEXEC_EL2_SYNC_OFFSET), kexec_el1_sync, + kexec_el1_sync_size); + } + kern_reloc_arg = kexec_page_alloc(kimage); if (!kern_reloc_arg) return -ENOMEM; @@ -91,10 +201,19 @@ int machine_kexec_post_load(struct kimage *kimage) kern_reloc_arg->head = kimage->head; kern_reloc_arg->entry_addr = kimage->start; + kern_reloc_arg->el2_vector = el2_vector; kern_reloc_arg->kern_arg0 = kimage->arch.dtb_mem; + /* + * If relocation is not needed, we do not need to enable MMU in + * relocation routine, therefore do not create page tables for + * scenarios such as crash kernel + */ + if (!(kimage->head & IND_DONE)) + rc = mmu_relocate_setup(kimage, kern_reloc, kern_reloc_arg); + kexec_image_info(kimage); - return 0; + return rc; } /** diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S index d352faf7cbe6..14243a678277 100644 --- a/arch/arm64/kernel/relocate_kernel.S +++ b/arch/arm64/kernel/relocate_kernel.S @@ -83,17 +83,25 @@ ENTRY(arm64_relocate_new_kernel) ldr x1, [x0, #KRELOC_KERN_ARG1] ldr x0, [x0, #KRELOC_KERN_ARG0] /* x0 = dtb address */ br x4 +.ltorg +.Larm64_relocate_new_kernel_end: END(arm64_relocate_new_kernel) -.ltorg +ENTRY(kexec_el1_sync) + br x4 /* Jump to new world from el2 */ +.Lkexec_el1_sync_end: +END(kexec_el1_sync) + .align 3 /* To keep the 64-bit values below naturally aligned. */ -.Lcopy_end: .org KEXEC_CONTROL_PAGE_SIZE - /* * arm64_relocate_new_kernel_size - Number of bytes to copy to the * control_code_page. 
*/ .globl arm64_relocate_new_kernel_size arm64_relocate_new_kernel_size: - .quad .Lcopy_end - arm64_relocate_new_kernel + .quad .Larm64_relocate_new_kernel_end - arm64_relocate_new_kernel + +.globl kexec_el1_sync_size +kexec_el1_sync_size: + .quad .Lkexec_el1_sync_end - kexec_el1_sync From patchwork Thu Aug 1 15:24:39 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pasha Tatashin X-Patchwork-Id: 11070921 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id ACAB51395 for ; Thu, 1 Aug 2019 15:25:09 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 9E37428635 for ; Thu, 1 Aug 2019 15:25:09 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 9266E2869F; Thu, 1 Aug 2019 15:25:09 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-3.0 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,MAILING_LIST_MULTI,RCVD_IN_DNSWL_NONE autolearn=ham version=3.3.1 Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id B92DD28635 for ; Thu, 1 Aug 2019 15:25:08 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 8D1C38E002A; Thu, 1 Aug 2019 11:24:55 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 7E45D8E0001; Thu, 1 Aug 2019 11:24:55 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 60CBE8E002A; Thu, 1 Aug 2019 11:24:55 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from mail-qt1-f200.google.com (mail-qt1-f200.google.com [209.85.160.200]) by kanga.kvack.org (Postfix) with ESMTP id 35C728E0001 for ; Thu, 1 Aug 2019 11:24:55 -0400 (EDT) Received: by mail-qt1-f200.google.com with SMTP id g30so64839983qtm.17 for ; Thu, 01 Aug 2019 08:24:55 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:dkim-signature:from:to:subject:date:message-id :in-reply-to:references:mime-version:content-transfer-encoding; bh=PLoutsvDOEBcLXmSwpW/VZGz7Lju4ED/guZL5+5aMWk=; b=lYiplMJ9BViQ97T+xlZ4u5qzw0vfyL3dHGAY86yfHSmcedkSsxXEsDAisjmysX6dPV lyvdw9Oo1dnW83TZ5je286e85yUmbWjyEcsVYQUPd+L4NYGH75gR7EFwgGIWH5PNjfUG lCzy233icexwJq1RZi3gVLIbE79CbzIJ1Q8uMsnG2eB2fMt8EryNiiXV2aBFJt3bWTM/ DlKgfm+r68+Wx89uBe7zmzJ0K/lEW4+hJejUPM+OM5F0nALO9a9UZh9xbohqeA7qN2W0 7gisBpCEX4SUEYJQoAGtqC/C0sqwYAe9Ig8OSsj9bffCTuI+vKzVzDCjXM5l3BtZctsF PsZg== X-Gm-Message-State: APjAAAWOAedjL3NICYQkgRAjLxzOQt7aAnKVC1kCyawZnnLl2QYXk0jE Ue6Nsfd97O0spdecZt0KHpMK+wN69HRqOctFSOuU3nRVwUH/A28xOH3D+O/xNOFGSAwaJmT8BGX B0H+rVEm8wFyAFzS78+M5rz39rsQh4cxEpRZnoBMVRv0htFLqsYYbrUNZHCRT1uiIAg== X-Received: by 2002:a05:620a:16d6:: with SMTP id a22mr86688431qkn.414.1564673094970; Thu, 01 Aug 2019 08:24:54 -0700 (PDT) X-Received: by 2002:a05:620a:16d6:: with SMTP id a22mr86688307qkn.414.1564673093540; Thu, 01 Aug 2019 08:24:53 -0700 (PDT) ARC-Seal: i=1; a=rsa-sha256; t=1564673093; cv=none; d=google.com; s=arc-20160816; b=l+2pCfki3WNWq9m3ols8OvSVnrJmT+d6027ybbHxLK8MzAF8UfFQ+y8pfQwdDU5CUZ 
From: Pavel Tatashin
To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org,
    ebiederm@xmission.com, kexec@lists.infradead.org,
    linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com,
    will@kernel.org, linux-arm-kernel@lists.infradead.org,
    marc.zyngier@arm.com, james.morse@arm.com, vladimir.murzin@arm.com,
    matthias.bgg@gmail.com, bhsharma@redhat.com, linux-mm@kvack.org
Subject: [PATCH v1 8/8] arm64, kexec: enable MMU during kexec relocation
Date: Thu, 1 Aug 2019 11:24:39 -0400
Message-Id: <20190801152439.11363-9-pasha.tatashin@soleen.com>
In-Reply-To: <20190801152439.11363-1-pasha.tatashin@soleen.com>
References: <20190801152439.11363-1-pasha.tatashin@soleen.com>

Now that we have the transitional page tables configured, temporarily enable the MMU to allow faster relocation of segments to their final destination.

Performance data: for a moderate-size kernel + initramfs (25M), the relocation was taking 0.382s; with the MMU enabled it now takes only 0.019s, a 20x improvement. The time is proportional to the amount of data relocated, so with a larger initramfs (for example 100M) it could take over a second.

Also, remove reloc_arg->head, as it is no longer needed once the MMU is enabled.

Signed-off-by: Pavel Tatashin
--- arch/arm64/include/asm/kexec.h | 2 - arch/arm64/kernel/asm-offsets.c | 1 - arch/arm64/kernel/machine_kexec.c | 1 - arch/arm64/kernel/relocate_kernel.S | 136 +++++++++++++++++----------- 4 files changed, 84 insertions(+), 56 deletions(-) diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h index 450d8440f597..ad81ed3e5751 100644 --- a/arch/arm64/include/asm/kexec.h +++ b/arch/arm64/include/asm/kexec.h @@ -109,7 +109,6 @@ extern const unsigned long kexec_el1_sync_size; ((UL(0xffffffffffffffff) - PAGE_OFFSET) >> 1) + 1) /* * kern_reloc_arg is passed to kernel relocation function as an argument. - * head kimage->head, allows to traverse through relocation segments. * entry_addr kimage->start, where to jump from relocation function (new * kernel, or purgatory entry address). * kern_arg0 first argument to kernel is its dtb address.
The other @@ -125,7 +124,6 @@ extern const unsigned long kexec_el1_sync_size; * copy_len Number of bytes that need to be copied */ struct kern_reloc_arg { - unsigned long head; unsigned long entry_addr; unsigned long kern_arg0; unsigned long kern_arg1; diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c index 7c2ba09a8ceb..13ad00b1b90f 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -129,7 +129,6 @@ int main(void) DEFINE(SDEI_EVENT_PRIORITY, offsetof(struct sdei_registered_event, priority)); #endif #ifdef CONFIG_KEXEC_CORE - DEFINE(KRELOC_HEAD, offsetof(struct kern_reloc_arg, head)); DEFINE(KRELOC_ENTRY_ADDR, offsetof(struct kern_reloc_arg, entry_addr)); DEFINE(KRELOC_KERN_ARG0, offsetof(struct kern_reloc_arg, kern_arg0)); DEFINE(KRELOC_KERN_ARG1, offsetof(struct kern_reloc_arg, kern_arg1)); diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c index 16f761fc50c8..b5ff5fdb4777 100644 --- a/arch/arm64/kernel/machine_kexec.c +++ b/arch/arm64/kernel/machine_kexec.c @@ -199,7 +199,6 @@ int machine_kexec_post_load(struct kimage *kimage) kimage->arch.kern_reloc = kern_reloc; kimage->arch.kern_reloc_arg = __pa(kern_reloc_arg); - kern_reloc_arg->head = kimage->head; kern_reloc_arg->entry_addr = kimage->start; kern_reloc_arg->el2_vector = el2_vector; kern_reloc_arg->kern_arg0 = kimage->arch.dtb_mem; diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S index 14243a678277..96ff6760bd9c 100644 --- a/arch/arm64/kernel/relocate_kernel.S +++ b/arch/arm64/kernel/relocate_kernel.S @@ -4,6 +4,8 @@ * * Copyright (C) Linaro. * Copyright (C) Huawei Futurewei Technologies. + * Copyright (c) 2019, Microsoft Corporation. + * Pavel Tatashin */ #include @@ -14,6 +16,49 @@ #include #include +/* Invalidae TLB */ +.macro tlb_invalidate + dsb sy + dsb ish + tlbi vmalle1 + dsb ish + isb +.endm + +/* Turn-off mmu at level specified by sctlr */ +.macro turn_off_mmu sctlr, tmp1, tmp2 + mrs \tmp1, \sctlr + ldr \tmp2, =SCTLR_ELx_FLAGS + bic \tmp1, \tmp1, \tmp2 + pre_disable_mmu_workaround + msr \sctlr, \tmp1 + isb +.endm + +/* Turn-on mmu at level specified by sctlr */ +.macro turn_on_mmu sctlr, tmp1, tmp2 + mrs \tmp1, \sctlr + ldr \tmp2, =SCTLR_ELx_FLAGS + orr \tmp1, \tmp1, \tmp2 + msr \sctlr, \tmp1 + ic iallu + dsb nsh + isb +.endm + +/* + * Set ttbr0 and ttbr1, called while MMU is disabled, so no need to temporarily + * set zero_page table. Invalidate TLB after new tables are set. + */ +.macro set_ttbr arg, tmp + ldr \tmp, [\arg, #KRELOC_TRANS_TTBR0] + msr ttbr0_el1, \tmp + ldr \tmp, [\arg, #KRELOC_TRANS_TTBR1] + offset_ttbr1 \tmp + msr ttbr1_el1, \tmp + isb +.endm + /* * arm64_relocate_new_kernel - Put a 2nd stage image in place and boot it. * @@ -24,65 +69,52 @@ * symbols arm64_relocate_new_kernel and arm64_relocate_new_kernel_end. The * machine_kexec() routine will copy arm64_relocate_new_kernel to the kexec * safe memory that has been set up to be preserved during the copy operation. + * + * This function temporarily enables MMU if kernel relocation is needed. + * Also, if we enter this function at EL2 on non-VHE kernel, we temporarily go + * to EL1 to enable MMU, and escalate back to EL2 at the end to do the jump to + * the new kernel. This is determined by presence of el2_vector. */ ENTRY(arm64_relocate_new_kernel) - /* Clear the sctlr_el2 flags. 
*/ - mrs x2, CurrentEL - cmp x2, #CurrentEL_EL2 + mrs x1, CurrentEL + cmp x1, #CurrentEL_EL2 b.ne 1f - mrs x2, sctlr_el2 - ldr x1, =SCTLR_ELx_FLAGS - bic x2, x2, x1 - pre_disable_mmu_workaround - msr sctlr_el2, x2 - isb -1: /* Check if the new image needs relocation. */ - ldr x16, [x0, #KRELOC_HEAD] /* x16 = kimage_head */ - tbnz x16, IND_DONE_BIT, .Ldone - raw_dcache_line_size x15, x1 /* x15 = dcache line size */ -.Lloop: - and x12, x16, PAGE_MASK /* x12 = addr */ - /* Test the entry flags. */ -.Ltest_source: - tbz x16, IND_SOURCE_BIT, .Ltest_indirection - - /* Invalidate dest page to PoC. */ - mov x2, x13 - add x20, x2, #PAGE_SIZE - sub x1, x15, #1 - bic x2, x2, x1 -2: dc ivac, x2 - add x2, x2, x15 - cmp x2, x20 - b.lo 2b - dsb sy - - copy_page x13, x12, x1, x2, x3, x4, x5, x6, x7, x8 - b .Lnext -.Ltest_indirection: - tbz x16, IND_INDIRECTION_BIT, .Ltest_destination - mov x14, x12 /* ptr = addr */ - b .Lnext -.Ltest_destination: - tbz x16, IND_DESTINATION_BIT, .Lnext - mov x13, x12 /* dest = addr */ -.Lnext: - ldr x16, [x14], #8 /* entry = *ptr++ */ - tbz x16, IND_DONE_BIT, .Lloop /* while (!(entry & DONE)) */ -.Ldone: - /* wait for writes from copy_page to finish */ - dsb nsh - ic iallu - dsb nsh - isb - - /* Start new image. */ - ldr x4, [x0, #KRELOC_ENTRY_ADDR] /* x4 = kimage_start */ + turn_off_mmu sctlr_el2, x1, x2 /* Turn off MMU at EL2 */ +1: mov x20, xzr /* x20 will hold vector value */ + ldr x11, [x0, #KRELOC_COPY_LEN] + cbz x11, 5f /* Check if need to relocate */ + ldr x20, [x0, #KRELOC_EL2_VECTOR] + cbz x20, 2f /* need to reduce to EL1? */ + msr vbar_el2, x20 /* el2_vector present, means */ + adr x1, 2f /* we will do copy in el1 but */ + msr elr_el2, x1 /* do final jump from el2 */ + eret /* Reduce to EL1 */ +2: set_ttbr x0, x1 /* Set our page tables */ + tlb_invalidate + turn_on_mmu sctlr_el1, x1, x2 /* Turn MMU back on */ + ldr x1, [x0, #KRELOC_DST_ADDR]; + ldr x2, [x0, #KRELOC_SRC_ADDR]; + mov x12, x1 /* x12 dst backup */ +3: copy_page x1, x2, x3, x4, x5, x6, x7, x8, x9, x10 + sub x11, x11, #PAGE_SIZE + cbnz x11, 3b /* page copy loop */ + raw_dcache_line_size x2, x3 /* x2 = dcache line size */ + sub x3, x2, #1 /* x3 = dcache_size - 1 */ + bic x12, x12, x3 +4: dc cvau, x12 /* Flush D-cache */ + add x12, x12, x2 + cmp x12, x1 /* Compare to dst + len */ + b.ne 4b /* D-cache flush loop */ + turn_off_mmu sctlr_el1, x1, x2 /* Turn off MMU */ + tlb_invalidate /* Invalidate TLB */ +5: ldr x4, [x0, #KRELOC_ENTRY_ADDR] /* x4 = kimage_start */ ldr x3, [x0, #KRELOC_KERN_ARG3] ldr x2, [x0, #KRELOC_KERN_ARG2] ldr x1, [x0, #KRELOC_KERN_ARG1] ldr x0, [x0, #KRELOC_KERN_ARG0] /* x0 = dtb address */ - br x4 + cbnz x20, 6f /* need to escalate to el2? */ + br x4 /* Jump to new world */ +6: hvc #0 /* enters kexec_el1_sync */ .ltorg .Larm64_relocate_new_kernel_end: END(arm64_relocate_new_kernel)
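To summarize the assembly above in one place, here is rough C-level pseudocode of the relocation path this patch ends up with. The helper names (current_el(), turn_off_mmu(), drop_to_el1() and so on) are only stand-ins for the macros and instruction sequences in the real relocate_kernel.S; this is a reading aid, not compilable kernel code.

    /* Pseudocode of arm64_relocate_new_kernel() with the MMU-assisted copy.
     * All helpers are hypothetical stand-ins for the assembly macros above. */
    void arm64_relocate_new_kernel(struct kern_reloc_arg *arg)
    {
    	if (current_el() == 2)
    		turn_off_mmu(SCTLR_EL2);

    	if (arg->copy_len) {			/* is relocation needed?   */
    		if (arg->el2_vector)		/* entered at EL2, non-VHE */
    			drop_to_el1(arg->el2_vector);

    		set_ttbrs(arg->trans_ttbr0, arg->trans_ttbr1);
    		tlb_invalidate();
    		turn_on_mmu(SCTLR_EL1);

    		/* one linear copy instead of walking kimage->head */
    		copy_pages(arg->dst_addr, arg->src_addr, arg->copy_len);
    		clean_dcache_range(arg->dst_addr, arg->copy_len);

    		turn_off_mmu(SCTLR_EL1);
    		tlb_invalidate();
    	}

    	/* x0..x3 and the entry point are loaded from arg; the branch is
    	 * taken directly at EL1, or from EL2 via hvc into kexec_el1_sync. */
    	if (arg->el2_vector)
    		return_to_el2_and_branch(arg->entry_addr, arg->kern_arg0);
    	else
    		branch_to(arg->entry_addr, arg->kern_arg0);
    }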