From patchwork Sat Aug 17 02:46:16 2019
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 11098521
From: Pavel Tatashin
To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org,
    ebiederm@xmission.com, kexec@lists.infradead.org,
    linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com,
    will@kernel.org, linux-arm-kernel@lists.infradead.org,
    marc.zyngier@arm.com, james.morse@arm.com, vladimir.murzin@arm.com,
    matthias.bgg@gmail.com, bhsharma@redhat.com, linux-mm@kvack.org
Subject: [PATCH v2 01/14] kexec: quiet down kexec reboot
Date: Fri, 16 Aug 2019 22:46:16 -0400
Message-Id: <20190817024629.26611-2-pasha.tatashin@soleen.com>
In-Reply-To: <20190817024629.26611-1-pasha.tatashin@soleen.com>
References: <20190817024629.26611-1-pasha.tatashin@soleen.com>
Here is a regular kexec command sequence and output:

=====
$ kexec --reuse-cmdline -i --load Image
$ kexec -e
[  161.342002] kexec_core: Starting new kernel

Welcome to Buildroot
buildroot login:
=====

Even when the "quiet" kernel parameter is specified, "kexec_core: Starting
new kernel" is printed. This message has KERN_EMERG level, but there is no
emergency; it is a normal kexec operation, so quiet it down to the more
appropriate KERN_NOTICE. Machines with a slow console baud rate benefit
from the reduced output.

Signed-off-by: Pavel Tatashin
Reviewed-by: Simon Horman
---
 kernel/kexec_core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index d5870723b8ad..2c5b72863b7b 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -1169,7 +1169,7 @@ int kernel_kexec(void)
 	 * CPU hotplug again; so re-enable it here.
 	 */
 	cpu_hotplug_enable();
-	pr_emerg("Starting new kernel\n");
+	pr_notice("Starting new kernel\n");
 	machine_shutdown();
 }

From patchwork Sat Aug 17 02:46:17 2019
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 11098519
From: Pavel Tatashin
To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org,
    ebiederm@xmission.com, kexec@lists.infradead.org,
    linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com,
    will@kernel.org, linux-arm-kernel@lists.infradead.org,
    marc.zyngier@arm.com, james.morse@arm.com, vladimir.murzin@arm.com,
    matthias.bgg@gmail.com, bhsharma@redhat.com, linux-mm@kvack.org
Subject: [PATCH v2 02/14] arm64, hibernate: create_safe_exec_page cleanup
Date: Fri, 16 Aug 2019 22:46:17 -0400
Message-Id: <20190817024629.26611-3-pasha.tatashin@soleen.com>
In-Reply-To: <20190817024629.26611-1-pasha.tatashin@soleen.com>
References: <20190817024629.26611-1-pasha.tatashin@soleen.com>

create_safe_exec_page() is going to be split into two parts in
preparation for moving the page table handling code out of hibernate.c.

Remove the allocator parameter, and rename dst to page.
Also, remove the gotos, as we can return directly without cleanups.

Signed-off-by: Pavel Tatashin
---
 arch/arm64/kernel/hibernate.c | 60 +++++++++++++++--------------------
 1 file changed, 26 insertions(+), 34 deletions(-)

diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 9341fcc6e809..96b6f8da7e49 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -196,57 +196,51 @@ EXPORT_SYMBOL(arch_hibernation_header_restore);
  */
 static int create_safe_exec_page(void *src_start, size_t length,
 				 unsigned long dst_addr,
-				 phys_addr_t *phys_dst_addr,
-				 void *(*allocator)(gfp_t mask),
-				 gfp_t mask)
+				 phys_addr_t *phys_dst_addr)
 {
-	int rc = 0;
+	void *page = (void *)get_safe_page(GFP_ATOMIC);
+	pgd_t *trans_table;
 	pgd_t *pgdp;
 	pud_t *pudp;
 	pmd_t *pmdp;
 	pte_t *ptep;
-	unsigned long dst = (unsigned long)allocator(mask);
 
-	if (!dst) {
-		rc = -ENOMEM;
-		goto out;
-	}
+	if (!page)
+		return -ENOMEM;
+
+	memcpy((void *)page, src_start, length);
+	__flush_icache_range((unsigned long)page, (unsigned long)page + length);
 
-	memcpy((void *)dst, src_start, length);
-	__flush_icache_range(dst, dst + length);
+	trans_table = (void *)get_safe_page(GFP_ATOMIC);
+	if (!trans_table)
+		return -ENOMEM;
 
-	pgdp = pgd_offset_raw(allocator(mask), dst_addr);
+	pgdp = pgd_offset_raw(trans_table, dst_addr);
 	if (pgd_none(READ_ONCE(*pgdp))) {
-		pudp = allocator(mask);
-		if (!pudp) {
-			rc = -ENOMEM;
-			goto out;
-		}
+		pudp = (void *)get_safe_page(GFP_ATOMIC);
+		if (!pudp)
+			return -ENOMEM;
 		pgd_populate(&init_mm, pgdp, pudp);
 	}
 
 	pudp = pud_offset(pgdp, dst_addr);
 	if (pud_none(READ_ONCE(*pudp))) {
-		pmdp = allocator(mask);
-		if (!pmdp) {
-			rc = -ENOMEM;
-			goto out;
-		}
+		pmdp = (void *)get_safe_page(GFP_ATOMIC);
+		if (!pmdp)
+			return -ENOMEM;
 		pud_populate(&init_mm, pudp, pmdp);
 	}
 
 	pmdp = pmd_offset(pudp, dst_addr);
 	if (pmd_none(READ_ONCE(*pmdp))) {
-		ptep = allocator(mask);
-		if (!ptep) {
-			rc = -ENOMEM;
-			goto out;
-		}
+		ptep = (void *)get_safe_page(GFP_ATOMIC);
+		if (!ptep)
+			return -ENOMEM;
 		pmd_populate_kernel(&init_mm, pmdp, ptep);
 	}
 
 	ptep = pte_offset_kernel(pmdp, dst_addr);
-	set_pte(ptep, pfn_pte(virt_to_pfn(dst), PAGE_KERNEL_EXEC));
+	set_pte(ptep, pfn_pte(virt_to_pfn(page), PAGE_KERNEL_EXEC));
 
 	/*
 	 * Load our new page tables. A strict BBM approach requires that we
@@ -262,13 +256,12 @@ static int create_safe_exec_page(void *src_start, size_t length,
 	 */
 	cpu_set_reserved_ttbr0();
 	local_flush_tlb_all();
-	write_sysreg(phys_to_ttbr(virt_to_phys(pgdp)), ttbr0_el1);
+	write_sysreg(phys_to_ttbr(virt_to_phys(trans_table)), ttbr0_el1);
 	isb();
 
-	*phys_dst_addr = virt_to_phys((void *)dst);
+	*phys_dst_addr = virt_to_phys((void *)page);
 
-out:
-	return rc;
+	return 0;
 }
 
 #define dcache_clean_range(start, end)	__flush_dcache_area(start, (end - start))
@@ -523,8 +516,7 @@ int swsusp_arch_resume(void)
 	 */
 	rc = create_safe_exec_page(__hibernate_exit_text_start, exit_size,
 				   (unsigned long)hibernate_exit,
-				   &phys_hibernate_exit,
-				   (void *)get_safe_page, GFP_ATOMIC);
+				   &phys_hibernate_exit);
 	if (rc) {
 		pr_err("Failed to create safe executable page for hibernate_exit code.\n");
 		goto out;

From patchwork Sat Aug 17 02:46:18 2019
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 11098523
From: Pavel Tatashin
To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org,
    ebiederm@xmission.com, kexec@lists.infradead.org,
    linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com,
    will@kernel.org, linux-arm-kernel@lists.infradead.org,
    marc.zyngier@arm.com, james.morse@arm.com, vladimir.murzin@arm.com,
    matthias.bgg@gmail.com, bhsharma@redhat.com, linux-mm@kvack.org
Subject: [PATCH v2 03/14] arm64, hibernate: add trans_table public functions
Date: Fri, 16 Aug 2019 22:46:18 -0400
Message-Id: <20190817024629.26611-4-pasha.tatashin@soleen.com>
In-Reply-To: <20190817024629.26611-1-pasha.tatashin@soleen.com>
References: <20190817024629.26611-1-pasha.tatashin@soleen.com>

trans_table_create_copy() and trans_table_map_page() are going to be the
basis for the public interface of a new subsystem that handles page
tables for cases that operate between kernels: kexec and hibernate.
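[Editorial aside: the populate-on-walk pattern behind trans_table_map_page() — descend pgd → pud → pmd → pte, allocating any missing intermediate table before stepping down, then install the final entry — can be modelled outside the kernel. The sketch below is purely illustrative Python, not the patch's code: the 9-bit-per-level index split and dict-based tables are assumptions standing in for the real arm64 layout and get_safe_page() allocations.]

```python
LEVEL_BITS = 9    # illustrative: 512 entries per table, as with 4K pages
PAGE_SHIFT = 12
LEVELS = 4        # pgd, pud, pmd, pte

def level_index(vaddr, level):
    # Level 0 is the top table (pgd); level 3 indexes the final pte table.
    shift = PAGE_SHIFT + LEVEL_BITS * (LEVELS - 1 - level)
    return (vaddr >> shift) & ((1 << LEVEL_BITS) - 1)

def map_page(root, vaddr, phys_page):
    """Walk from root, allocating any missing level, then set the leaf entry."""
    table = root
    for level in range(LEVELS - 1):
        idx = level_index(vaddr, level)
        if idx not in table:     # analogue of the pgd_none()/pud_none()/pmd_none() checks
            table[idx] = {}      # analogue of allocating a table page
        table = table[idx]
    table[level_index(vaddr, LEVELS - 1)] = phys_page  # analogue of set_pte()

root = {}
map_page(root, 0x201000, 0x8000)
```

Each level is created only when the walk first needs it, which is exactly why every step in the kernel function has its own -ENOMEM early return.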
Signed-off-by: Pavel Tatashin
---
 arch/arm64/kernel/hibernate.c | 96 ++++++++++++++++++++++-------------
 1 file changed, 61 insertions(+), 35 deletions(-)

diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 96b6f8da7e49..449d69b5651c 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -182,39 +182,15 @@ int arch_hibernation_header_restore(void *addr)
 }
 EXPORT_SYMBOL(arch_hibernation_header_restore);
 
-/*
- * Copies length bytes, starting at src_start into an new page,
- * perform cache maintentance, then maps it at the specified address low
- * address as executable.
- *
- * This is used by hibernate to copy the code it needs to execute when
- * overwriting the kernel text. This function generates a new set of page
- * tables, which it loads into ttbr0.
- *
- * Length is provided as we probably only want 4K of data, even on a 64K
- * page system.
- */
-static int create_safe_exec_page(void *src_start, size_t length,
-				 unsigned long dst_addr,
-				 phys_addr_t *phys_dst_addr)
+int trans_table_map_page(pgd_t *trans_table, void *page,
+			 unsigned long dst_addr,
+			 pgprot_t pgprot)
 {
-	void *page = (void *)get_safe_page(GFP_ATOMIC);
-	pgd_t *trans_table;
 	pgd_t *pgdp;
 	pud_t *pudp;
 	pmd_t *pmdp;
 	pte_t *ptep;
 
-	if (!page)
-		return -ENOMEM;
-
-	memcpy((void *)page, src_start, length);
-	__flush_icache_range((unsigned long)page, (unsigned long)page + length);
-
-	trans_table = (void *)get_safe_page(GFP_ATOMIC);
-	if (!trans_table)
-		return -ENOMEM;
-
 	pgdp = pgd_offset_raw(trans_table, dst_addr);
 	if (pgd_none(READ_ONCE(*pgdp))) {
 		pudp = (void *)get_safe_page(GFP_ATOMIC);
@@ -242,6 +218,44 @@ static int create_safe_exec_page(void *src_start, size_t length,
 	ptep = pte_offset_kernel(pmdp, dst_addr);
 	set_pte(ptep, pfn_pte(virt_to_pfn(page), PAGE_KERNEL_EXEC));
 
+	return 0;
+}
+
+/*
+ * Copies length bytes, starting at src_start into an new page,
+ * perform cache maintentance, then maps it at the specified address low
+ * address as executable.
+ *
+ * This is used by hibernate to copy the code it needs to execute when
+ * overwriting the kernel text. This function generates a new set of page
+ * tables, which it loads into ttbr0.
+ *
+ * Length is provided as we probably only want 4K of data, even on a 64K
+ * page system.
+ */
+static int create_safe_exec_page(void *src_start, size_t length,
+				 unsigned long dst_addr,
+				 phys_addr_t *phys_dst_addr)
+{
+	void *page = (void *)get_safe_page(GFP_ATOMIC);
+	pgd_t *trans_table;
+	int rc;
+
+	if (!page)
+		return -ENOMEM;
+
+	memcpy(page, src_start, length);
+	__flush_icache_range((unsigned long)page, (unsigned long)page + length);
+
+	trans_table = (void *)get_safe_page(GFP_ATOMIC);
+	if (!trans_table)
+		return -ENOMEM;
+
+	rc = trans_table_map_page(trans_table, page, dst_addr,
+				  PAGE_KERNEL_EXEC);
+	if (rc)
+		return rc;
+
 	/*
 	 * Load our new page tables. A strict BBM approach requires that we
@@ -259,7 +273,7 @@ static int create_safe_exec_page(void *src_start, size_t length,
 	write_sysreg(phys_to_ttbr(virt_to_phys(trans_table)), ttbr0_el1);
 	isb();
 
-	*phys_dst_addr = virt_to_phys((void *)page);
+	*phys_dst_addr = virt_to_phys(page);
 
 	return 0;
 }
@@ -462,6 +476,24 @@ static int copy_page_tables(pgd_t *dst_pgdp, unsigned long start,
 	return 0;
 }
 
+int trans_table_create_copy(pgd_t **dst_pgdp, unsigned long start,
+			    unsigned long end)
+{
+	int rc;
+	pgd_t *trans_table = (pgd_t *)get_safe_page(GFP_ATOMIC);
+
+	if (!trans_table) {
+		pr_err("Failed to allocate memory for temporary page tables.\n");
+		return -ENOMEM;
+	}
+
+	rc = copy_page_tables(trans_table, start, end);
+	if (!rc)
+		*dst_pgdp = trans_table;
+
+	return rc;
+}
+
 /*
  * Setup then Resume from the hibernate image using swsusp_arch_suspend_exit().
  *
@@ -483,13 +515,7 @@ int swsusp_arch_resume(void)
 	 * Create a second copy of just the linear map, and use this when
 	 * restoring.
 	 */
-	tmp_pg_dir = (pgd_t *)get_safe_page(GFP_ATOMIC);
-	if (!tmp_pg_dir) {
-		pr_err("Failed to allocate memory for temporary page tables.\n");
-		rc = -ENOMEM;
-		goto out;
-	}
-	rc = copy_page_tables(tmp_pg_dir, PAGE_OFFSET, 0);
+	rc = trans_table_create_copy(&tmp_pg_dir, PAGE_OFFSET, 0);
 	if (rc)
 		goto out;

From patchwork Sat Aug 17 02:46:19 2019
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 11098527
From: Pavel Tatashin
To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org,
    ebiederm@xmission.com, kexec@lists.infradead.org,
    linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com,
    will@kernel.org, linux-arm-kernel@lists.infradead.org,
    marc.zyngier@arm.com, james.morse@arm.com, vladimir.murzin@arm.com,
    matthias.bgg@gmail.com, bhsharma@redhat.com, linux-mm@kvack.org
Subject: [PATCH v2 04/14] arm64, hibernate: move page handling function to
 new trans_table.c
Date: Fri, 16 Aug 2019 22:46:19 -0400
Message-Id: <20190817024629.26611-5-pasha.tatashin@soleen.com>
In-Reply-To: <20190817024629.26611-1-pasha.tatashin@soleen.com>
References: <20190817024629.26611-1-pasha.tatashin@soleen.com>
ClamSMTP Now, that we abstracted the required functions move them to a new home. Later, we will generalize these function in order to be useful outside of hibernation. Signed-off-by: Pavel Tatashin --- arch/arm64/Kconfig | 4 + arch/arm64/include/asm/trans_table.h | 21 +++ arch/arm64/kernel/hibernate.c | 199 +------------------------ arch/arm64/mm/Makefile | 1 + arch/arm64/mm/trans_table.c | 212 +++++++++++++++++++++++++++ 5 files changed, 239 insertions(+), 198 deletions(-) create mode 100644 arch/arm64/include/asm/trans_table.h create mode 100644 arch/arm64/mm/trans_table.c diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 3adcec05b1f6..91a7416ffe4e 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -999,6 +999,10 @@ config CRASH_DUMP For more details see Documentation/admin-guide/kdump/kdump.rst +config TRANS_TABLE + def_bool y + depends on HIBERNATION || KEXEC_CORE + config XEN_DOM0 def_bool y depends on XEN diff --git a/arch/arm64/include/asm/trans_table.h b/arch/arm64/include/asm/trans_table.h new file mode 100644 index 000000000000..f57b2ab2a0b8 --- /dev/null +++ b/arch/arm64/include/asm/trans_table.h @@ -0,0 +1,21 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +/* + * Copyright (c) 2019, Microsoft Corporation. 
+ * Pavel Tatashin
+ */
+
+#ifndef _ASM_TRANS_TABLE_H
+#define _ASM_TRANS_TABLE_H
+
+#include
+#include
+
+int trans_table_create_copy(pgd_t **dst_pgdp, unsigned long start,
+			    unsigned long end);
+
+int trans_table_map_page(pgd_t *trans_table, void *page,
+			 unsigned long dst_addr,
+			 pgprot_t pgprot);
+
+#endif /* _ASM_TRANS_TABLE_H */
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 449d69b5651c..0cb858b3f503 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -16,7 +16,6 @@
 #define pr_fmt(x) "hibernate: " x
 #include
 #include
-#include
 #include
 #include
 #include
@@ -31,14 +30,12 @@
 #include
 #include
 #include
-#include
-#include
-#include
 #include
 #include
 #include
 #include
 #include
+#include
 #include
 
 /*
@@ -182,45 +179,6 @@ int arch_hibernation_header_restore(void *addr)
 }
 EXPORT_SYMBOL(arch_hibernation_header_restore);
 
-int trans_table_map_page(pgd_t *trans_table, void *page,
-			 unsigned long dst_addr,
-			 pgprot_t pgprot)
-{
-	pgd_t *pgdp;
-	pud_t *pudp;
-	pmd_t *pmdp;
-	pte_t *ptep;
-
-	pgdp = pgd_offset_raw(trans_table, dst_addr);
-	if (pgd_none(READ_ONCE(*pgdp))) {
-		pudp = (void *)get_safe_page(GFP_ATOMIC);
-		if (!pudp)
-			return -ENOMEM;
-		pgd_populate(&init_mm, pgdp, pudp);
-	}
-
-	pudp = pud_offset(pgdp, dst_addr);
-	if (pud_none(READ_ONCE(*pudp))) {
-		pmdp = (void *)get_safe_page(GFP_ATOMIC);
-		if (!pmdp)
-			return -ENOMEM;
-		pud_populate(&init_mm, pudp, pmdp);
-	}
-
-	pmdp = pmd_offset(pudp, dst_addr);
-	if (pmd_none(READ_ONCE(*pmdp))) {
-		ptep = (void *)get_safe_page(GFP_ATOMIC);
-		if (!ptep)
-			return -ENOMEM;
-		pmd_populate_kernel(&init_mm, pmdp, ptep);
-	}
-
-	ptep = pte_offset_kernel(pmdp, dst_addr);
-	set_pte(ptep, pfn_pte(virt_to_pfn(page), PAGE_KERNEL_EXEC));
-
-	return 0;
-}
-
 /*
  * Copies length bytes, starting at src_start into a new page,
  * perform cache maintenance, then maps it at the specified address low
@@ -339,161 +297,6 @@ int swsusp_arch_suspend(void)
 	return ret;
 }
 
-static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr)
-{
-	pte_t pte = READ_ONCE(*src_ptep);
-
-	if (pte_valid(pte)) {
-		/*
-		 * Resume will overwrite areas that may be marked
-		 * read only (code, rodata). Clear the RDONLY bit from
-		 * the temporary mappings we use during restore.
-		 */
-		set_pte(dst_ptep, pte_mkwrite(pte));
-	} else if (debug_pagealloc_enabled() && !pte_none(pte)) {
-		/*
-		 * debug_pagealloc will remove the PTE_VALID bit if
-		 * the page isn't in use by the resume kernel. It may have
-		 * been in use by the original kernel, in which case we need
-		 * to put it back in our copy to do the restore.
-		 *
-		 * Before marking this entry valid, check that the pfn should
-		 * be mapped.
-		 */
-		BUG_ON(!pfn_valid(pte_pfn(pte)));
-
-		set_pte(dst_ptep, pte_mkpresent(pte_mkwrite(pte)));
-	}
-}
-
-static int copy_pte(pmd_t *dst_pmdp, pmd_t *src_pmdp, unsigned long start,
-		    unsigned long end)
-{
-	pte_t *src_ptep;
-	pte_t *dst_ptep;
-	unsigned long addr = start;
-
-	dst_ptep = (pte_t *)get_safe_page(GFP_ATOMIC);
-	if (!dst_ptep)
-		return -ENOMEM;
-	pmd_populate_kernel(&init_mm, dst_pmdp, dst_ptep);
-	dst_ptep = pte_offset_kernel(dst_pmdp, start);
-
-	src_ptep = pte_offset_kernel(src_pmdp, start);
-	do {
-		_copy_pte(dst_ptep, src_ptep, addr);
-	} while (dst_ptep++, src_ptep++, addr += PAGE_SIZE, addr != end);
-
-	return 0;
-}
-
-static int copy_pmd(pud_t *dst_pudp, pud_t *src_pudp, unsigned long start,
-		    unsigned long end)
-{
-	pmd_t *src_pmdp;
-	pmd_t *dst_pmdp;
-	unsigned long next;
-	unsigned long addr = start;
-
-	if (pud_none(READ_ONCE(*dst_pudp))) {
-		dst_pmdp = (pmd_t *)get_safe_page(GFP_ATOMIC);
-		if (!dst_pmdp)
-			return -ENOMEM;
-		pud_populate(&init_mm, dst_pudp, dst_pmdp);
-	}
-	dst_pmdp = pmd_offset(dst_pudp, start);
-
-	src_pmdp = pmd_offset(src_pudp, start);
-	do {
-		pmd_t pmd = READ_ONCE(*src_pmdp);
-
-		next = pmd_addr_end(addr, end);
-		if (pmd_none(pmd))
-			continue;
-		if (pmd_table(pmd)) {
-			if (copy_pte(dst_pmdp, src_pmdp, addr, next))
-				return -ENOMEM;
-		} else {
-			set_pmd(dst_pmdp,
-				__pmd(pmd_val(pmd) & ~PMD_SECT_RDONLY));
-		}
-	} while (dst_pmdp++, src_pmdp++, addr = next, addr != end);
-
-	return 0;
-}
-
-static int copy_pud(pgd_t *dst_pgdp, pgd_t *src_pgdp, unsigned long start,
-		    unsigned long end)
-{
-	pud_t *dst_pudp;
-	pud_t *src_pudp;
-	unsigned long next;
-	unsigned long addr = start;
-
-	if (pgd_none(READ_ONCE(*dst_pgdp))) {
-		dst_pudp = (pud_t *)get_safe_page(GFP_ATOMIC);
-		if (!dst_pudp)
-			return -ENOMEM;
-		pgd_populate(&init_mm, dst_pgdp, dst_pudp);
-	}
-	dst_pudp = pud_offset(dst_pgdp, start);
-
-	src_pudp = pud_offset(src_pgdp, start);
-	do {
-		pud_t pud = READ_ONCE(*src_pudp);
-
-		next = pud_addr_end(addr, end);
-		if (pud_none(pud))
-			continue;
-		if (pud_table(pud)) {
-			if (copy_pmd(dst_pudp, src_pudp, addr, next))
-				return -ENOMEM;
-		} else {
-			set_pud(dst_pudp,
-				__pud(pud_val(pud) & ~PMD_SECT_RDONLY));
-		}
-	} while (dst_pudp++, src_pudp++, addr = next, addr != end);
-
-	return 0;
-}
-
-static int copy_page_tables(pgd_t *dst_pgdp, unsigned long start,
-			    unsigned long end)
-{
-	unsigned long next;
-	unsigned long addr = start;
-	pgd_t *src_pgdp = pgd_offset_k(start);
-
-	dst_pgdp = pgd_offset_raw(dst_pgdp, start);
-	do {
-		next = pgd_addr_end(addr, end);
-		if (pgd_none(READ_ONCE(*src_pgdp)))
-			continue;
-		if (copy_pud(dst_pgdp, src_pgdp, addr, next))
-			return -ENOMEM;
-	} while (dst_pgdp++, src_pgdp++, addr = next, addr != end);
-
-	return 0;
-}
-
-int trans_table_create_copy(pgd_t **dst_pgdp, unsigned long start,
-			    unsigned long end)
-{
-	int rc;
-	pgd_t *trans_table = (pgd_t *)get_safe_page(GFP_ATOMIC);
-
-	if (!trans_table) {
-		pr_err("Failed to allocate memory for temporary page tables.\n");
-		return -ENOMEM;
-	}
-
-	rc = copy_page_tables(trans_table, start, end);
-	if (!rc)
-		*dst_pgdp = trans_table;
-
-	return rc;
-}
-
 /*
  * Setup then Resume from the hibernate image using swsusp_arch_suspend_exit().
 *
diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
index 849c1df3d214..3794fff18659 100644
--- a/arch/arm64/mm/Makefile
+++ b/arch/arm64/mm/Makefile
@@ -6,6 +6,7 @@ obj-y := dma-mapping.o extable.o fault.o init.o \
 obj-$(CONFIG_HUGETLB_PAGE)	+= hugetlbpage.o
 obj-$(CONFIG_ARM64_PTDUMP_CORE)	+= dump.o
 obj-$(CONFIG_ARM64_PTDUMP_DEBUGFS)	+= ptdump_debugfs.o
+obj-$(CONFIG_TRANS_TABLE)	+= trans_table.o
 obj-$(CONFIG_NUMA)		+= numa.o
 obj-$(CONFIG_DEBUG_VIRTUAL)	+= physaddr.o
 KASAN_SANITIZE_physaddr.o	+= n
diff --git a/arch/arm64/mm/trans_table.c b/arch/arm64/mm/trans_table.c
new file mode 100644
index 000000000000..b4bbb559d9cf
--- /dev/null
+++ b/arch/arm64/mm/trans_table.c
@@ -0,0 +1,212 @@
+// SPDX-License-Identifier: GPL-2.0
+
+/*
+ * Copyright (c) 2019, Microsoft Corporation.
+ * Pavel Tatashin
+ */
+
+/*
+ * Transitional tables are used while a system is transferring from one world
+ * to another: for example, during hibernate restore and kexec reboots. During
+ * these phases one cannot rely on the page tables not being overwritten.
+ */
+
+#include
+#include
+#include
+#include
+
+static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr)
+{
+	pte_t pte = READ_ONCE(*src_ptep);
+
+	if (pte_valid(pte)) {
+		/*
+		 * Resume will overwrite areas that may be marked
+		 * read only (code, rodata). Clear the RDONLY bit from
+		 * the temporary mappings we use during restore.
+		 */
+		set_pte(dst_ptep, pte_mkwrite(pte));
+	} else if (debug_pagealloc_enabled() && !pte_none(pte)) {
+		/*
+		 * debug_pagealloc will remove the PTE_VALID bit if
+		 * the page isn't in use by the resume kernel. It may have
+		 * been in use by the original kernel, in which case we need
+		 * to put it back in our copy to do the restore.
+		 *
+		 * Before marking this entry valid, check that the pfn should
+		 * be mapped.
+		 */
+		BUG_ON(!pfn_valid(pte_pfn(pte)));
+
+		set_pte(dst_ptep, pte_mkpresent(pte_mkwrite(pte)));
+	}
+}
+
+static int copy_pte(pmd_t *dst_pmdp, pmd_t *src_pmdp, unsigned long start,
+		    unsigned long end)
+{
+	pte_t *src_ptep;
+	pte_t *dst_ptep;
+	unsigned long addr = start;
+
+	dst_ptep = (pte_t *)get_safe_page(GFP_ATOMIC);
+	if (!dst_ptep)
+		return -ENOMEM;
+	pmd_populate_kernel(&init_mm, dst_pmdp, dst_ptep);
+	dst_ptep = pte_offset_kernel(dst_pmdp, start);
+
+	src_ptep = pte_offset_kernel(src_pmdp, start);
+	do {
+		_copy_pte(dst_ptep, src_ptep, addr);
+	} while (dst_ptep++, src_ptep++, addr += PAGE_SIZE, addr != end);
+
+	return 0;
+}
+
+static int copy_pmd(pud_t *dst_pudp, pud_t *src_pudp, unsigned long start,
+		    unsigned long end)
+{
+	pmd_t *src_pmdp;
+	pmd_t *dst_pmdp;
+	unsigned long next;
+	unsigned long addr = start;
+
+	if (pud_none(READ_ONCE(*dst_pudp))) {
+		dst_pmdp = (pmd_t *)get_safe_page(GFP_ATOMIC);
+		if (!dst_pmdp)
+			return -ENOMEM;
+		pud_populate(&init_mm, dst_pudp, dst_pmdp);
+	}
+	dst_pmdp = pmd_offset(dst_pudp, start);
+
+	src_pmdp = pmd_offset(src_pudp, start);
+	do {
+		pmd_t pmd = READ_ONCE(*src_pmdp);
+
+		next = pmd_addr_end(addr, end);
+		if (pmd_none(pmd))
+			continue;
+		if (pmd_table(pmd)) {
+			if (copy_pte(dst_pmdp, src_pmdp, addr, next))
+				return -ENOMEM;
+		} else {
+			set_pmd(dst_pmdp,
+				__pmd(pmd_val(pmd) & ~PMD_SECT_RDONLY));
+		}
+	} while (dst_pmdp++, src_pmdp++, addr = next, addr != end);
+
+	return 0;
+}
+
+static int copy_pud(pgd_t *dst_pgdp, pgd_t *src_pgdp, unsigned long start,
+		    unsigned long end)
+{
+	pud_t *dst_pudp;
+	pud_t *src_pudp;
+	unsigned long next;
+	unsigned long addr = start;
+
+	if (pgd_none(READ_ONCE(*dst_pgdp))) {
+		dst_pudp = (pud_t *)get_safe_page(GFP_ATOMIC);
+		if (!dst_pudp)
+			return -ENOMEM;
+		pgd_populate(&init_mm, dst_pgdp, dst_pudp);
+	}
+	dst_pudp = pud_offset(dst_pgdp, start);
+
+	src_pudp = pud_offset(src_pgdp, start);
+	do {
+		pud_t pud = READ_ONCE(*src_pudp);
+
+		next = pud_addr_end(addr, end);
+		if (pud_none(pud))
+			continue;
+		if (pud_table(pud)) {
+			if (copy_pmd(dst_pudp, src_pudp, addr, next))
+				return -ENOMEM;
+		} else {
+			set_pud(dst_pudp,
+				__pud(pud_val(pud) & ~PMD_SECT_RDONLY));
+		}
+	} while (dst_pudp++, src_pudp++, addr = next, addr != end);
+
+	return 0;
+}
+
+static int copy_page_tables(pgd_t *dst_pgdp, unsigned long start,
+			    unsigned long end)
+{
+	unsigned long next;
+	unsigned long addr = start;
+	pgd_t *src_pgdp = pgd_offset_k(start);
+
+	dst_pgdp = pgd_offset_raw(dst_pgdp, start);
+	do {
+		next = pgd_addr_end(addr, end);
+		if (pgd_none(READ_ONCE(*src_pgdp)))
+			continue;
+		if (copy_pud(dst_pgdp, src_pgdp, addr, next))
+			return -ENOMEM;
+	} while (dst_pgdp++, src_pgdp++, addr = next, addr != end);
+
+	return 0;
+}
+
+int trans_table_create_copy(pgd_t **dst_pgdp, unsigned long start,
+			    unsigned long end)
+{
+	int rc;
+	pgd_t *trans_table = (pgd_t *)get_safe_page(GFP_ATOMIC);
+
+	if (!trans_table) {
+		pr_err("Failed to allocate memory for temporary page tables.\n");
+		return -ENOMEM;
+	}
+
+	rc = copy_page_tables(trans_table, start, end);
+	if (!rc)
+		*dst_pgdp = trans_table;
+
+	return rc;
+}
+
+int trans_table_map_page(pgd_t *trans_table, void *page,
+			 unsigned long dst_addr,
+			 pgprot_t pgprot)
+{
+	pgd_t *pgdp;
+	pud_t *pudp;
+	pmd_t *pmdp;
+	pte_t *ptep;
+
+	pgdp = pgd_offset_raw(trans_table, dst_addr);
+	if (pgd_none(READ_ONCE(*pgdp))) {
+		pudp = (void *)get_safe_page(GFP_ATOMIC);
+		if (!pudp)
+			return -ENOMEM;
+		pgd_populate(&init_mm, pgdp, pudp);
+	}
+
+	pudp = pud_offset(pgdp, dst_addr);
+	if (pud_none(READ_ONCE(*pudp))) {
+		pmdp = (void *)get_safe_page(GFP_ATOMIC);
+		if (!pmdp)
+			return -ENOMEM;
+		pud_populate(&init_mm, pudp, pmdp);
+	}
+
+	pmdp = pmd_offset(pudp, dst_addr);
+	if (pmd_none(READ_ONCE(*pmdp))) {
+		ptep = (void *)get_safe_page(GFP_ATOMIC);
+		if (!ptep)
+			return -ENOMEM;
+		pmd_populate_kernel(&init_mm, pmdp, ptep);
+	}
+
+	ptep = pte_offset_kernel(pmdp, dst_addr);
+	set_pte(ptep, pfn_pte(virt_to_pfn(page),
PAGE_KERNEL_EXEC));
+
+	return 0;
+}

From patchwork Sat Aug 17 02:46:20 2019
From: Pavel Tatashin
Subject: [PATCH v2 05/14] arm64, trans_table: make trans_table_map_page generic
Date: Fri, 16 Aug 2019 22:46:20 -0400
Message-Id: <20190817024629.26611-6-pasha.tatashin@soleen.com>

Currently, trans_table_map_page makes assumptions that are specific to
hibernate. To make it generic, we must allow it to use any allocator, and we
cannot assume that entries never already exist in the page table. We also
cannot use init_mm here.
Also, add "flags" to trans_table_info; they will be used in the copy
functions once those are generalized.

Signed-off-by: Pavel Tatashin
---
 arch/arm64/include/asm/trans_table.h | 40 +++++++++++++-
 arch/arm64/kernel/hibernate.c        | 13 ++++-
 arch/arm64/mm/trans_table.c          | 83 +++++++++++++++++++---------
 3 files changed, 107 insertions(+), 29 deletions(-)

diff --git a/arch/arm64/include/asm/trans_table.h b/arch/arm64/include/asm/trans_table.h
index f57b2ab2a0b8..1a57af09ded5 100644
--- a/arch/arm64/include/asm/trans_table.h
+++ b/arch/arm64/include/asm/trans_table.h
@@ -11,11 +11,45 @@
 #include
 #include
 
+/*
+ * trans_alloc_page
+ *	- Allocator that should return exactly one uninitialized page; if this
+ *	  allocator fails, trans_table returns -ENOMEM error.
+ *
+ * trans_alloc_arg
+ *	- Passed to trans_alloc_page as an argument
+ *
+ * trans_flags
+ *	- bitmap with flags that control how the page table is filled.
+ *	  TRANS_MKWRITE: during page table copy, make PTE, PMD, and PUD pages
+ *			 writeable by removing the RDONLY flag from the PTE.
+ *	  TRANS_MKVALID: during page table copy, if a PTE is present but not
+ *			 valid, make it valid.
+ *	  TRANS_CHECKPFN: during page table copy, for every PTE entry check
+ *			  that the PFN that this PTE points to is valid;
+ *			  otherwise return -ENXIO
+ */
+
+#define TRANS_MKWRITE	BIT(0)
+#define TRANS_MKVALID	BIT(1)
+#define TRANS_CHECKPFN	BIT(2)
+
+struct trans_table_info {
+	void * (*trans_alloc_page)(void *arg);
+	void *trans_alloc_arg;
+	unsigned long trans_flags;
+};
+
 int trans_table_create_copy(pgd_t **dst_pgdp, unsigned long start,
 			    unsigned long end);
 
-int trans_table_map_page(pgd_t *trans_table, void *page,
-			 unsigned long dst_addr,
-			 pgprot_t pgprot);
+/*
+ * Add a map entry to trans_table for a base-size page at PTE level.
+ * page:     page to be mapped.
+ * dst_addr: new VA address for the page.
+ * pgprot:   protection for the page.
+ */
+int trans_table_map_page(struct trans_table_info *info, pgd_t *trans_table,
+			 void *page, unsigned long dst_addr, pgprot_t pgprot);
 
 #endif /* _ASM_TRANS_TABLE_H */
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 0cb858b3f503..524b68ec3233 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -179,6 +179,12 @@ int arch_hibernation_header_restore(void *addr)
 }
 EXPORT_SYMBOL(arch_hibernation_header_restore);
 
+static void *
+hibernate_page_alloc(void *arg)
+{
+	return (void *)get_safe_page((gfp_t)(unsigned long)arg);
+}
+
 /*
  * Copies length bytes, starting at src_start into a new page,
  * perform cache maintenance, then maps it at the specified address low
@@ -195,6 +201,11 @@ static int create_safe_exec_page(void *src_start, size_t length,
 				 unsigned long dst_addr,
 				 phys_addr_t *phys_dst_addr)
 {
+	struct trans_table_info trans_info = {
+		.trans_alloc_page	= hibernate_page_alloc,
+		.trans_alloc_arg	= (void *)GFP_ATOMIC,
+		.trans_flags		= 0,
+	};
 	void *page = (void *)get_safe_page(GFP_ATOMIC);
 	pgd_t *trans_table;
 	int rc;
@@ -209,7 +220,7 @@ static int create_safe_exec_page(void *src_start, size_t length,
 	if (!trans_table)
 		return -ENOMEM;
 
-	rc = trans_table_map_page(trans_table, page, dst_addr,
+	rc = trans_table_map_page(&trans_info, trans_table, page, dst_addr,
 				  PAGE_KERNEL_EXEC);
 	if (rc)
 		return rc;
diff --git a/arch/arm64/mm/trans_table.c b/arch/arm64/mm/trans_table.c
index b4bbb559d9cf..12f4b3cab6d6 100644
--- a/arch/arm64/mm/trans_table.c
+++ b/arch/arm64/mm/trans_table.c
@@ -17,6 +17,16 @@
 #include
 #include
 
+static void *trans_alloc(struct trans_table_info *info)
+{
+	void *page = info->trans_alloc_page(info->trans_alloc_arg);
+
+	if (page)
+		clear_page(page);
+
+	return page;
+}
+
 static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr)
 {
 	pte_t pte = READ_ONCE(*src_ptep);
@@ -172,41 +182,64 @@ int trans_table_create_copy(pgd_t **dst_pgdp, unsigned long start,
 	return rc;
 }
 
-int trans_table_map_page(pgd_t *trans_table, void *page,
-			 unsigned long dst_addr,
-			 pgprot_t pgprot)
+int trans_table_map_page(struct trans_table_info *info, pgd_t *trans_table,
+			 void *page, unsigned long dst_addr, pgprot_t pgprot)
 {
-	pgd_t *pgdp;
-	pud_t *pudp;
-	pmd_t *pmdp;
-	pte_t *ptep;
-
-	pgdp = pgd_offset_raw(trans_table, dst_addr);
-	if (pgd_none(READ_ONCE(*pgdp))) {
-		pudp = (void *)get_safe_page(GFP_ATOMIC);
-		if (!pudp)
+	int pgd_idx = pgd_index(dst_addr);
+	int pud_idx = pud_index(dst_addr);
+	int pmd_idx = pmd_index(dst_addr);
+	int pte_idx = pte_index(dst_addr);
+	pgd_t *pgdp = trans_table;
+	pgd_t pgd = READ_ONCE(pgdp[pgd_idx]);
+	pud_t *pudp, pud;
+	pmd_t *pmdp, pmd;
+	pte_t *ptep, pte;
+
+	if (pgd_none(pgd)) {
+		pud_t *t = trans_alloc(info);
+
+		if (!t)
 			return -ENOMEM;
-		pgd_populate(&init_mm, pgdp, pudp);
+
+		__pgd_populate(&pgdp[pgd_idx], __pa(t), PUD_TYPE_TABLE);
+		pgd = READ_ONCE(pgdp[pgd_idx]);
 	}
 
-	pudp = pud_offset(pgdp, dst_addr);
-	if (pud_none(READ_ONCE(*pudp))) {
-		pmdp = (void *)get_safe_page(GFP_ATOMIC);
-		if (!pmdp)
+	pudp = __va(pgd_page_paddr(pgd));
+	pud = READ_ONCE(pudp[pud_idx]);
+	if (pud_sect(pud)) {
+		return -ENXIO;
+	} else if (pud_none(pud) || pud_sect(pud)) {
+		pmd_t *t = trans_alloc(info);
+
+		if (!t)
 			return -ENOMEM;
-		pud_populate(&init_mm, pudp, pmdp);
+
+		__pud_populate(&pudp[pud_idx], __pa(t), PMD_TYPE_TABLE);
+		pud = READ_ONCE(pudp[pud_idx]);
 	}
 
-	pmdp = pmd_offset(pudp, dst_addr);
-	if (pmd_none(READ_ONCE(*pmdp))) {
-		ptep = (void *)get_safe_page(GFP_ATOMIC);
-		if (!ptep)
+	pmdp = __va(pud_page_paddr(pud));
+	pmd = READ_ONCE(pmdp[pmd_idx]);
+	if (pmd_sect(pmd)) {
+		return -ENXIO;
+	} else if (pmd_none(pmd) || pmd_sect(pmd)) {
+		pte_t *t = trans_alloc(info);
+
+		if (!t)
 			return -ENOMEM;
-		pmd_populate_kernel(&init_mm, pmdp, ptep);
+
+		__pmd_populate(&pmdp[pmd_idx], __pa(t), PTE_TYPE_PAGE);
+		pmd = READ_ONCE(pmdp[pmd_idx]);
 	}
 
-	ptep = pte_offset_kernel(pmdp, dst_addr);
-	set_pte(ptep, pfn_pte(virt_to_pfn(page),
PAGE_KERNEL_EXEC));
+	ptep = __va(pmd_page_paddr(pmd));
+	pte = READ_ONCE(ptep[pte_idx]);
+
+	if (!pte_none(pte))
+		return -ENXIO;
+
+	set_pte(&ptep[pte_idx], pfn_pte(virt_to_pfn(page), pgprot));
 
 	return 0;
 }

From patchwork Sat Aug 17 02:46:21 2019
From: Pavel Tatashin
Subject: [PATCH v2 06/14] arm64, trans_table: add trans_table_create_empty
Date: Fri, 16 Aug 2019 22:46:21 -0400
Message-Id: <20190817024629.26611-7-pasha.tatashin@soleen.com>

This function returns a zeroed trans_table using the allocator that is specified in
the info argument. trans_tables should be created by using this function.

Signed-off-by: Pavel Tatashin
---
 arch/arm64/include/asm/trans_table.h |  4 ++++
 arch/arm64/kernel/hibernate.c        |  6 +++---
 arch/arm64/mm/trans_table.c          | 12 ++++++++++++
 3 files changed, 19 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/trans_table.h b/arch/arm64/include/asm/trans_table.h
index 1a57af09ded5..02d3a0333dc9 100644
--- a/arch/arm64/include/asm/trans_table.h
+++ b/arch/arm64/include/asm/trans_table.h
@@ -40,6 +40,10 @@ struct trans_table_info {
 	unsigned long trans_flags;
 };
 
+/* Create an empty trans_table. */
+int trans_table_create_empty(struct trans_table_info *info,
+			     pgd_t **trans_table);
+
 int trans_table_create_copy(pgd_t **dst_pgdp, unsigned long start,
 			    unsigned long end);
 
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 524b68ec3233..3a7b362e5a58 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -216,9 +216,9 @@ static int create_safe_exec_page(void *src_start, size_t length,
 	memcpy(page, src_start, length);
 	__flush_icache_range((unsigned long)page, (unsigned long)page + length);
 
-	trans_table = (void *)get_safe_page(GFP_ATOMIC);
-	if (!trans_table)
-		return -ENOMEM;
+	rc = trans_table_create_empty(&trans_info, &trans_table);
+	if (rc)
+		return rc;
 
 	rc = trans_table_map_page(&trans_info, trans_table, page, dst_addr,
 				  PAGE_KERNEL_EXEC);
diff --git a/arch/arm64/mm/trans_table.c b/arch/arm64/mm/trans_table.c
index 12f4b3cab6d6..6deb35f83118 100644
--- a/arch/arm64/mm/trans_table.c
+++ b/arch/arm64/mm/trans_table.c
@@ -164,6 +164,18 @@ static int copy_page_tables(pgd_t *dst_pgdp, unsigned long start,
 	return 0;
 }
 
+int trans_table_create_empty(struct trans_table_info *info, pgd_t **trans_table)
+{
+	pgd_t *dst_pgdp = trans_alloc(info);
+
+	if (!dst_pgdp)
+		return -ENOMEM;
+
+	*trans_table = dst_pgdp;
+
+	return 0;
+}
+
 int trans_table_create_copy(pgd_t **dst_pgdp, unsigned long start,
 			    unsigned long
end)
{

From patchwork Sat Aug 17 02:46:22 2019
S7fur7i3alh115WsIweXQ/7dYjP5C5AtibdE9DK7z9JHvSfrrzfJv/4szVpMz9yT3YGi8l2TjUA13 /np4j86Gp//TxoS6dMkg==; Received: from localhost ([127.0.0.1] helo=bombadil.infradead.org) by bombadil.infradead.org with esmtp (Exim 4.92 #3 (Red Hat Linux)) id 1hyomQ-0004uB-KU; Sat, 17 Aug 2019 02:49:22 +0000 Received: from mail-qk1-x742.google.com ([2607:f8b0:4864:20::742]) by bombadil.infradead.org with esmtps (Exim 4.92 #3 (Red Hat Linux)) id 1hyojq-0002W7-6y for linux-arm-kernel@lists.infradead.org; Sat, 17 Aug 2019 02:46:47 +0000 Received: by mail-qk1-x742.google.com with SMTP id r21so6395790qke.2 for ; Fri, 16 Aug 2019 19:46:42 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=soleen.com; s=google; h=from:to:subject:date:message-id:in-reply-to:references:mime-version :content-transfer-encoding; bh=ekaKdccxVH9Vd+/E5UI//vE0AEA91U4VJzNQTwl8d/E=; b=W88elMS7kW8jg3EvlTR8TpaW28BxanfKAEZA1T8tCbXzGsz62pHn2DbtF/0gi1MvG6 QnxWiMVsfkxQUm0Ae/FuzabuFMIch0uoq5Zhg7oba7ZeKDDAzmXRQ2n1IvGcQFjc8nYV m4/JkZyrt8pdX01FitiJJx0cPgVJ/FhH5Tu9IkA2IMuX557W2Y50K7mGgQhAzvNA1s7Q wWqVrMrxEqH7/ouO/0/KTzJm5DoKmotecr6ZDZVSYpvnvvHXAki018cukTq6oT85R9s1 ylwJHOA8a1x0T03lSz9YDdycI4Gy2dKPQXITbseL26rSf0oItWq1y+PT+gj0RZzI1Xzc 2brw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=ekaKdccxVH9Vd+/E5UI//vE0AEA91U4VJzNQTwl8d/E=; b=NLAHlj+28ZoXBTWE031YFVY84XzwNyzpl2hMoBx33Mm/hRzuP269Mfsp8KCb2OilNk /px0pbY9J1pO5JIL283g17oYa5JIHdeLRXFURDQ6JTDsy5M1Z9sdGYmuuUhVbNUH7Tox ZvnpmTvSUTZxWvGYK0DUtpbi5ABtQnhQBGUTQ4RE38liNxQCAwiB/oOyGeV01voG2h8S ek8TqSodbj9XtQ7yTAuT1I2qIL8pCCQIpV5hRC4g3UHpuNHnZBuGomu4F/lCCzcqdYg5 d67/NhBl4dXutF6osUVV8Um1i/N1bJAoPTS8lh9iGxByewzXwFsucC1AoIPNN6JqYJNt mIpQ== X-Gm-Message-State: APjAAAU7Jxh1C0fF0l0ARMcaJbaPw6+3+m7dU3xDkD8hmY5RY9xcUfTu G7m1gjDRbOpAGEIUY82TTbJTLQ== X-Google-Smtp-Source: 
From: Pavel Tatashin
Subject: [PATCH v2 07/14] arm64, trans_table: adjust trans_table_create_copy interface
Date: Fri, 16 Aug 2019 22:46:22 -0400
Message-Id: <20190817024629.26611-8-pasha.tatashin@soleen.com>

Make trans_table_create_copy consistent with the other functions in trans_table: use the trans_table_info argument, and also use trans_table_create_empty().
Note that the functions that are called by trans_table_create_copy are not yet adjusted to be compliant with trans_table: they do not yet use the provided allocator, do not check for generic errors, and do not yet use the flags in the info argument. Signed-off-by: Pavel Tatashin --- arch/arm64/include/asm/trans_table.h | 7 ++++++- arch/arm64/kernel/hibernate.c | 31 ++++++++++++++++++++++++++-- arch/arm64/mm/trans_table.c | 17 ++++++--------- 3 files changed, 41 insertions(+), 14 deletions(-) diff --git a/arch/arm64/include/asm/trans_table.h b/arch/arm64/include/asm/trans_table.h index 02d3a0333dc9..8c296bd3e10f 100644 --- a/arch/arm64/include/asm/trans_table.h +++ b/arch/arm64/include/asm/trans_table.h @@ -44,7 +44,12 @@ struct trans_table_info { int trans_table_create_empty(struct trans_table_info *info, pgd_t **trans_table); -int trans_table_create_copy(pgd_t **dst_pgdp, unsigned long start, +/* + * Create trans table and copy entries from from_table to trans_table in range + * [start, end) + */ +int trans_table_create_copy(struct trans_table_info *info, pgd_t **trans_table, + pgd_t *from_table, unsigned long start, unsigned long end); /* diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c index 3a7b362e5a58..6fbaff769c1d 100644 --- a/arch/arm64/kernel/hibernate.c +++ b/arch/arm64/kernel/hibernate.c @@ -323,15 +323,42 @@ int swsusp_arch_resume(void) phys_addr_t phys_hibernate_exit; void __noreturn (*hibernate_exit)(phys_addr_t, phys_addr_t, void *, void *, phys_addr_t, phys_addr_t); + struct trans_table_info trans_info = { + .trans_alloc_page = hibernate_page_alloc, + .trans_alloc_arg = (void *)GFP_ATOMIC, + /* + * Resume will overwrite areas that may be marked read only + * (code, rodata). Clear the RDONLY bit from the temporary + * mappings we use during restore. + */ + .trans_flags = TRANS_MKWRITE, + }; + + /* + * debug_pagealloc will removed the PTE_VALID bit if the page isn't in + * use by the resume kernel.
It may have been in use by the original + * kernel, in which case we need to put it back in our copy to do the + * restore. + * + * Before marking this entry valid, check the pfn should be mapped. + */ + if (debug_pagealloc_enabled()) + trans_info.trans_flags |= (TRANS_MKVALID | TRANS_CHECKPFN); /* * Restoring the memory image will overwrite the ttbr1 page tables. * Create a second copy of just the linear map, and use this when * restoring. */ - rc = trans_table_create_copy(&tmp_pg_dir, PAGE_OFFSET, 0); - if (rc) + rc = trans_table_create_copy(&trans_info, &tmp_pg_dir, init_mm.pgd, + PAGE_OFFSET, 0); + if (rc) { + if (rc == -ENOMEM) + pr_err("Failed to allocate memory for temporary page tables.\n"); + else if (rc == -ENXIO) + pr_err("Tried to set PTE for PFN that does not exist\n"); goto out; + } /* * We need a zero page that is zero before & after resume in order to diff --git a/arch/arm64/mm/trans_table.c b/arch/arm64/mm/trans_table.c index 6deb35f83118..634293ffb54c 100644 --- a/arch/arm64/mm/trans_table.c +++ b/arch/arm64/mm/trans_table.c @@ -176,22 +176,17 @@ int trans_table_create_empty(struct trans_table_info *info, pgd_t **trans_table) return 0; } -int trans_table_create_copy(pgd_t **dst_pgdp, unsigned long start, +int trans_table_create_copy(struct trans_table_info *info, pgd_t **trans_table, + pgd_t *from_table, unsigned long start, unsigned long end) { int rc; - pgd_t *trans_table = (pgd_t *)get_safe_page(GFP_ATOMIC); - if (!trans_table) { - pr_err("Failed to allocate memory for temporary page tables.\n"); - return -ENOMEM; - } - - rc = copy_page_tables(trans_table, start, end); - if (!rc) - *dst_pgdp = trans_table; + rc = trans_table_create_empty(info, trans_table); + if (rc) + return rc; - return rc; + return copy_page_tables(*trans_table, start, end); } int trans_table_map_page(struct trans_table_info *info, pgd_t *trans_table, From patchwork Sat Aug 17 02:46:23 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
From: Pavel Tatashin
Subject: [PATCH v2 08/14] arm64, trans_table: add PUD_SECT_RDONLY
Date: Fri, 16 Aug 2019 22:46:23 -0400
Message-Id: <20190817024629.26611-9-pasha.tatashin@soleen.com>

PMD_SECT_RDONLY is currently used in the pud_* functions, which is confusing. Add a dedicated PUD_SECT_RDONLY define and use it instead.
Signed-off-by: Pavel Tatashin --- arch/arm64/include/asm/pgtable-hwdef.h | 1 + arch/arm64/mm/trans_table.c | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h index db92950bb1a0..dcb4f13c7888 100644 --- a/arch/arm64/include/asm/pgtable-hwdef.h +++ b/arch/arm64/include/asm/pgtable-hwdef.h @@ -110,6 +110,7 @@ #define PUD_TABLE_BIT (_AT(pudval_t, 1) << 1) #define PUD_TYPE_MASK (_AT(pudval_t, 3) << 0) #define PUD_TYPE_SECT (_AT(pudval_t, 1) << 0) +#define PUD_SECT_RDONLY (_AT(pudval_t, 1) << 7) /* AP[2] */ /* * Level 2 descriptor (PMD). diff --git a/arch/arm64/mm/trans_table.c b/arch/arm64/mm/trans_table.c index 634293ffb54c..815e40bb1316 100644 --- a/arch/arm64/mm/trans_table.c +++ b/arch/arm64/mm/trans_table.c @@ -138,7 +138,7 @@ static int copy_pud(pgd_t *dst_pgdp, pgd_t *src_pgdp, unsigned long start, return -ENOMEM; } else { set_pud(dst_pudp, - __pud(pud_val(pud) & ~PMD_SECT_RDONLY)); + __pud(pud_val(pud) & ~PUD_SECT_RDONLY)); } } while (dst_pudp++, src_pudp++, addr = next, addr != end);

From patchwork Sat Aug 17 02:46:24 2019
From: Pavel Tatashin
Subject: [PATCH v2 09/14] arm64, trans_table: complete generalization of trans_tables
Date: Fri, 16 Aug 2019 22:46:24 -0400
Message-Id: <20190817024629.26611-10-pasha.tatashin@soleen.com>

Generalize the last private functions in the page table copy path for use outside of hibernate: switch them to the provided allocator, flags, and source page table. Also, unify all of the copy function implementations to reduce the possibility of bugs. All page table levels are now implemented symmetrically.
Signed-off-by: Pavel Tatashin --- arch/arm64/mm/trans_table.c | 204 ++++++++++++++++++++---------------- 1 file changed, 113 insertions(+), 91 deletions(-) diff --git a/arch/arm64/mm/trans_table.c b/arch/arm64/mm/trans_table.c index 815e40bb1316..ce0f24806eaa 100644 --- a/arch/arm64/mm/trans_table.c +++ b/arch/arm64/mm/trans_table.c @@ -27,139 +27,161 @@ static void *trans_alloc(struct trans_table_info *info) return page; } -static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr) +static int trans_table_copy_pte(struct trans_table_info *info, pte_t *dst_ptep, + pte_t *src_ptep, unsigned long start, + unsigned long end) { - pte_t pte = READ_ONCE(*src_ptep); - - if (pte_valid(pte)) { - /* - * Resume will overwrite areas that may be marked - * read only (code, rodata). Clear the RDONLY bit from - * the temporary mappings we use during restore. - */ - set_pte(dst_ptep, pte_mkwrite(pte)); - } else if (debug_pagealloc_enabled() && !pte_none(pte)) { - /* - * debug_pagealloc will removed the PTE_VALID bit if - * the page isn't in use by the resume kernel. It may have - * been in use by the original kernel, in which case we need - * to put it back in our copy to do the restore. - * - * Before marking this entry valid, check the pfn should - * be mapped. 
- */ - BUG_ON(!pfn_valid(pte_pfn(pte))); - - set_pte(dst_ptep, pte_mkpresent(pte_mkwrite(pte))); - } -} - -static int copy_pte(pmd_t *dst_pmdp, pmd_t *src_pmdp, unsigned long start, - unsigned long end) -{ - pte_t *src_ptep; - pte_t *dst_ptep; unsigned long addr = start; + int i = pte_index(addr); - dst_ptep = (pte_t *)get_safe_page(GFP_ATOMIC); - if (!dst_ptep) - return -ENOMEM; - pmd_populate_kernel(&init_mm, dst_pmdp, dst_ptep); - dst_ptep = pte_offset_kernel(dst_pmdp, start); - - src_ptep = pte_offset_kernel(src_pmdp, start); do { - _copy_pte(dst_ptep, src_ptep, addr); - } while (dst_ptep++, src_ptep++, addr += PAGE_SIZE, addr != end); + pte_t src_pte = READ_ONCE(src_ptep[i]); + + if (pte_none(src_pte)) + continue; + if (info->trans_flags & TRANS_MKWRITE) + src_pte = pte_mkwrite(src_pte); + if (info->trans_flags & TRANS_MKVALID) + src_pte = pte_mkpresent(src_pte); + if (info->trans_flags & TRANS_CHECKPFN) { + if (!pfn_valid(pte_pfn(src_pte))) + return -ENXIO; + } + set_pte(&dst_ptep[i], src_pte); + } while (addr += PAGE_SIZE, i++, addr != end && i < PTRS_PER_PTE); return 0; } -static int copy_pmd(pud_t *dst_pudp, pud_t *src_pudp, unsigned long start, - unsigned long end) +static int trans_table_copy_pmd(struct trans_table_info *info, pmd_t *dst_pmdp, + pmd_t *src_pmdp, unsigned long start, + unsigned long end) { - pmd_t *src_pmdp; - pmd_t *dst_pmdp; unsigned long next; unsigned long addr = start; + int i = pmd_index(addr); + int rc; - if (pud_none(READ_ONCE(*dst_pudp))) { - dst_pmdp = (pmd_t *)get_safe_page(GFP_ATOMIC); - if (!dst_pmdp) - return -ENOMEM; - pud_populate(&init_mm, dst_pudp, dst_pmdp); - } - dst_pmdp = pmd_offset(dst_pudp, start); - - src_pmdp = pmd_offset(src_pudp, start); do { - pmd_t pmd = READ_ONCE(*src_pmdp); + pmd_t src_pmd = READ_ONCE(src_pmdp[i]); + pmd_t dst_pmd = READ_ONCE(dst_pmdp[i]); + pte_t *dst_ptep, *src_ptep; next = pmd_addr_end(addr, end); - if (pmd_none(pmd)) + if (pmd_none(src_pmd)) + continue; + + if (!pmd_table(src_pmd)) { + 
if (info->trans_flags & TRANS_MKWRITE) + pmd_val(src_pmd) &= ~PMD_SECT_RDONLY; + set_pmd(&dst_pmdp[i], src_pmd); continue; - if (pmd_table(pmd)) { - if (copy_pte(dst_pmdp, src_pmdp, addr, next)) + } + + if (pmd_none(dst_pmd)) { + pte_t *t = trans_alloc(info); + + if (!t) return -ENOMEM; - } else { - set_pmd(dst_pmdp, - __pmd(pmd_val(pmd) & ~PMD_SECT_RDONLY)); + + __pmd_populate(&dst_pmdp[i], __pa(t), PTE_TYPE_PAGE); + dst_pmd = READ_ONCE(dst_pmdp[i]); } - } while (dst_pmdp++, src_pmdp++, addr = next, addr != end); + + src_ptep = __va(pmd_page_paddr(src_pmd)); + dst_ptep = __va(pmd_page_paddr(dst_pmd)); + + rc = trans_table_copy_pte(info, dst_ptep, src_ptep, addr, next); + if (rc) + return rc; + } while (addr = next, i++, addr != end && i < PTRS_PER_PMD); return 0; } -static int copy_pud(pgd_t *dst_pgdp, pgd_t *src_pgdp, unsigned long start, - unsigned long end) +static int trans_table_copy_pud(struct trans_table_info *info, pud_t *dst_pudp, + pud_t *src_pudp, unsigned long start, + unsigned long end) { - pud_t *dst_pudp; - pud_t *src_pudp; unsigned long next; unsigned long addr = start; + int i = pud_index(addr); + int rc; - if (pgd_none(READ_ONCE(*dst_pgdp))) { - dst_pudp = (pud_t *)get_safe_page(GFP_ATOMIC); - if (!dst_pudp) - return -ENOMEM; - pgd_populate(&init_mm, dst_pgdp, dst_pudp); - } - dst_pudp = pud_offset(dst_pgdp, start); - - src_pudp = pud_offset(src_pgdp, start); do { - pud_t pud = READ_ONCE(*src_pudp); + pud_t src_pud = READ_ONCE(src_pudp[i]); + pud_t dst_pud = READ_ONCE(dst_pudp[i]); + pmd_t *dst_pmdp, *src_pmdp; next = pud_addr_end(addr, end); - if (pud_none(pud)) + if (pud_none(src_pud)) continue; - if (pud_table(pud)) { - if (copy_pmd(dst_pudp, src_pudp, addr, next)) + + if (!pud_table(src_pud)) { + if (info->trans_flags & TRANS_MKWRITE) + pud_val(src_pud) &= ~PUD_SECT_RDONLY; + set_pud(&dst_pudp[i], src_pud); + continue; + } + + if (pud_none(dst_pud)) { + pmd_t *t = trans_alloc(info); + + if (!t) return -ENOMEM; - } else { - set_pud(dst_pudp, - 
__pud(pud_val(pud) & ~PUD_SECT_RDONLY)); + + __pud_populate(&dst_pudp[i], __pa(t), PMD_TYPE_TABLE); + dst_pud = READ_ONCE(dst_pudp[i]); } - } while (dst_pudp++, src_pudp++, addr = next, addr != end); + + src_pmdp = __va(pud_page_paddr(src_pud)); + dst_pmdp = __va(pud_page_paddr(dst_pud)); + + rc = trans_table_copy_pmd(info, dst_pmdp, src_pmdp, addr, next); + if (rc) + return rc; + } while (addr = next, i++, addr != end && i < PTRS_PER_PUD); return 0; } -static int copy_page_tables(pgd_t *dst_pgdp, unsigned long start, - unsigned long end) +static int trans_table_copy_pgd(struct trans_table_info *info, pgd_t *dst_pgdp, + pgd_t *src_pgdp, unsigned long start, + unsigned long end) { unsigned long next; unsigned long addr = start; - pgd_t *src_pgdp = pgd_offset_k(start); + int i = pgd_index(addr); + int rc; - dst_pgdp = pgd_offset_raw(dst_pgdp, start); do { + pgd_t src_pgd; + pgd_t dst_pgd; + pud_t *dst_pudp, *src_pudp; + + src_pgd = READ_ONCE(src_pgdp[i]); + dst_pgd = READ_ONCE(dst_pgdp[i]); next = pgd_addr_end(addr, end); - if (pgd_none(READ_ONCE(*src_pgdp))) + if (pgd_none(src_pgd)) continue; - if (copy_pud(dst_pgdp, src_pgdp, addr, next)) - return -ENOMEM; - } while (dst_pgdp++, src_pgdp++, addr = next, addr != end); + + if (pgd_none(dst_pgd)) { + pud_t *t = trans_alloc(info); + + if (!t) + return -ENOMEM; + + __pgd_populate(&dst_pgdp[i], __pa(t), PUD_TYPE_TABLE); + dst_pgd = READ_ONCE(dst_pgdp[i]); + } + + src_pudp = __va(pgd_page_paddr(src_pgd)); + dst_pudp = __va(pgd_page_paddr(dst_pgd)); + + rc = trans_table_copy_pud(info, dst_pudp, src_pudp, addr, next); + if (rc) + return rc; + } while (addr = next, i++, addr != end && i < PTRS_PER_PGD); return 0; } @@ -186,7 +208,7 @@ int trans_table_create_copy(struct trans_table_info *info, pgd_t **trans_table, if (rc) return rc; - return copy_page_tables(*trans_table, start, end); + return trans_table_copy_pgd(info, *trans_table, from_table, start, end); } int trans_table_map_page(struct trans_table_info *info, pgd_t 
*trans_table,

From patchwork Sat Aug 17 02:46:25 2019
From: Pavel Tatashin
Subject: [PATCH v2 10/14] kexec: add machine_kexec_post_load()
Date: Fri, 16 Aug 2019 22:46:25 -0400
Message-Id: <20190817024629.26611-11-pasha.tatashin@soleen.com>

machine_kexec_post_load() is the same as machine_kexec_prepare(), but is called after the segments are loaded. This way, an architecture can do processing work on the already loaded relocation segments.
One such example is arm64: it needs the segments to be loaded in order to
create a page table, but it cannot do that at kexec reboot time, because
allocations are no longer possible by then.

Signed-off-by: Pavel Tatashin
---
 kernel/kexec.c          | 4 ++++
 kernel/kexec_core.c     | 6 ++++++
 kernel/kexec_file.c     | 4 ++++
 kernel/kexec_internal.h | 2 ++
 4 files changed, 16 insertions(+)

diff --git a/kernel/kexec.c b/kernel/kexec.c
index 1b018f1a6e0d..27b71dc7b35a 100644
--- a/kernel/kexec.c
+++ b/kernel/kexec.c
@@ -159,6 +159,10 @@ static int do_kexec_load(unsigned long entry, unsigned long nr_segments,

 	kimage_terminate(image);

+	ret = machine_kexec_post_load(image);
+	if (ret)
+		goto out;
+
 	/* Install the new kernel and uninstall the old */
 	image = xchg(dest_image, image);

diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index 2c5b72863b7b..8360645d1bbe 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -587,6 +587,12 @@ static void kimage_free_extra_pages(struct kimage *image)
 	kimage_free_page_list(&image->unusable_pages);
 }

+
+int __weak machine_kexec_post_load(struct kimage *image)
+{
+	return 0;
+}
+
 void kimage_terminate(struct kimage *image)
 {
 	if (*image->entry != 0)

diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
index b8cc032d5620..cb531d768114 100644
--- a/kernel/kexec_file.c
+++ b/kernel/kexec_file.c
@@ -391,6 +391,10 @@ SYSCALL_DEFINE5(kexec_file_load, int, kernel_fd, int, initrd_fd,

 	kimage_terminate(image);

+	ret = machine_kexec_post_load(image);
+	if (ret)
+		goto out;
+
 	/*
 	 * Free up any temporary buffers allocated which are not needed
 	 * after image has been loaded

diff --git a/kernel/kexec_internal.h b/kernel/kexec_internal.h
index 48aaf2ac0d0d..39d30ccf8d87 100644
--- a/kernel/kexec_internal.h
+++ b/kernel/kexec_internal.h
@@ -13,6 +13,8 @@ void kimage_terminate(struct kimage *image);
 int kimage_is_destination_range(struct kimage *image, unsigned long start,
 				unsigned long end);

+int machine_kexec_post_load(struct kimage *image);
+
 extern struct mutex kexec_mutex;

 #ifdef CONFIG_KEXEC_FILE

From patchwork Sat Aug 17 02:46:26 2019
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 11098539
From: Pavel Tatashin
To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org, ebiederm@xmission.com, kexec@lists.infradead.org, linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com, will@kernel.org, linux-arm-kernel@lists.infradead.org, marc.zyngier@arm.com, james.morse@arm.com, vladimir.murzin@arm.com, matthias.bgg@gmail.com, bhsharma@redhat.com, linux-mm@kvack.org
Subject: [PATCH v2 11/14] arm64, kexec: move relocation function setup and clean up
Date: Fri, 16 Aug 2019 22:46:26 -0400
Message-Id: <20190817024629.26611-12-pasha.tatashin@soleen.com>
In-Reply-To: <20190817024629.26611-1-pasha.tatashin@soleen.com>
References: <20190817024629.26611-1-pasha.tatashin@soleen.com>

Currently, the kernel relocation function is configured in machine_kexec()
at the time of kexec reboot, using control_code_page. This operation,
however, is more logical to do during kexec load, so remove it from reboot
time.
Move the setup of this function to the newly added machine_kexec_post_load().

In addition, do some cleanup: add info about the relocation function to
kexec_image_info(), and remove extra messages from machine_kexec().

Make dtb_mem always available: if CONFIG_KEXEC_FILE is not configured,
dtb_mem is simply set to zero anyway.

Signed-off-by: Pavel Tatashin
---
 arch/arm64/include/asm/kexec.h    |  3 +-
 arch/arm64/kernel/machine_kexec.c | 49 +++++++++++--------------------
 2 files changed, 19 insertions(+), 33 deletions(-)

diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index 12a561a54128..d15ca1ca1e83 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -90,14 +90,15 @@ static inline void crash_prepare_suspend(void) {}
 static inline void crash_post_resume(void) {}
 #endif

-#ifdef CONFIG_KEXEC_FILE
 #define ARCH_HAS_KIMAGE_ARCH

 struct kimage_arch {
 	void *dtb;
 	unsigned long dtb_mem;
+	unsigned long kern_reloc;
 };

+#ifdef CONFIG_KEXEC_FILE
 extern const struct kexec_file_ops kexec_image_ops;

 struct kimage;

diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index 0df8493624e0..9b41da50e6f7 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -42,6 +42,7 @@ static void _kexec_image_info(const char *func, int line,
 	pr_debug(" start: %lx\n", kimage->start);
 	pr_debug(" head: %lx\n", kimage->head);
 	pr_debug(" nr_segments: %lu\n", kimage->nr_segments);
+	pr_debug(" kern_reloc: %pa\n", &kimage->arch.kern_reloc);

 	for (i = 0; i < kimage->nr_segments; i++) {
 		pr_debug(" segment[%lu]: %016lx - %016lx, 0x%lx bytes, %lu pages\n",

@@ -58,6 +59,19 @@ void machine_kexec_cleanup(struct kimage *kimage)
 	/* Empty routine needed to avoid build errors. */
 }

+int machine_kexec_post_load(struct kimage *kimage)
+{
+	unsigned long kern_reloc;
+
+	kern_reloc = page_to_phys(kimage->control_code_page);
+	memcpy(__va(kern_reloc), arm64_relocate_new_kernel,
+	       arm64_relocate_new_kernel_size);
+	kimage->arch.kern_reloc = kern_reloc;
+
+	kexec_image_info(kimage);
+	return 0;
+}
+
 /**
  * machine_kexec_prepare - Prepare for a kexec reboot.
  *
@@ -67,8 +81,6 @@ void machine_kexec_cleanup(struct kimage *kimage)
  */
 int machine_kexec_prepare(struct kimage *kimage)
 {
-	kexec_image_info(kimage);
-
 	if (kimage->type != KEXEC_TYPE_CRASH && cpus_are_stuck_in_kernel()) {
 		pr_err("Can't kexec: CPUs are stuck in the kernel.\n");
 		return -EBUSY;

@@ -143,8 +155,7 @@ static void kexec_segment_flush(const struct kimage *kimage)
  */
 void machine_kexec(struct kimage *kimage)
 {
-	phys_addr_t reboot_code_buffer_phys;
-	void *reboot_code_buffer;
+	void *reboot_code_buffer = phys_to_virt(kimage->arch.kern_reloc);
 	bool in_kexec_crash = (kimage == kexec_crash_image);
 	bool stuck_cpus = cpus_are_stuck_in_kernel();

@@ -155,30 +166,8 @@ void machine_kexec(struct kimage *kimage)
 	WARN(in_kexec_crash && (stuck_cpus || smp_crash_stop_failed()),
 	     "Some CPUs may be stale, kdump will be unreliable.\n");

-	reboot_code_buffer_phys = page_to_phys(kimage->control_code_page);
-	reboot_code_buffer = phys_to_virt(reboot_code_buffer_phys);
-	kexec_image_info(kimage);
-
-	pr_debug("%s:%d: control_code_page: %p\n", __func__, __LINE__,
-		 kimage->control_code_page);
-	pr_debug("%s:%d: reboot_code_buffer_phys: %pa\n", __func__, __LINE__,
-		 &reboot_code_buffer_phys);
-	pr_debug("%s:%d: reboot_code_buffer: %p\n", __func__, __LINE__,
-		 reboot_code_buffer);
-	pr_debug("%s:%d: relocate_new_kernel: %p\n", __func__, __LINE__,
-		 arm64_relocate_new_kernel);
-	pr_debug("%s:%d: relocate_new_kernel_size: 0x%lx(%lu) bytes\n",
-		 __func__, __LINE__, arm64_relocate_new_kernel_size,
-		 arm64_relocate_new_kernel_size);
-
-	/*
-	 * Copy arm64_relocate_new_kernel to the reboot_code_buffer for use
-	 * after the kernel is shut down.
-	 */
-	memcpy(reboot_code_buffer, arm64_relocate_new_kernel,
-	       arm64_relocate_new_kernel_size);
-
 	/* Flush the reboot_code_buffer in preparation for its execution. */
 	__flush_dcache_area(reboot_code_buffer, arm64_relocate_new_kernel_size);

@@ -214,12 +203,8 @@ void machine_kexec(struct kimage *kimage)
 	 * userspace (kexec-tools).
 	 * In kexec_file case, the kernel starts directly without purgatory.
 	 */
-	cpu_soft_restart(reboot_code_buffer_phys, kimage->head, kimage->start,
-#ifdef CONFIG_KEXEC_FILE
-			 kimage->arch.dtb_mem);
-#else
-			 0);
-#endif
+	cpu_soft_restart(kimage->arch.kern_reloc, kimage->head, kimage->start,
+			 kimage->arch.dtb_mem);

 	BUG(); /* Should never get here. */
 }

From patchwork Sat Aug 17 02:46:27 2019
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 11098541
From: Pavel Tatashin
To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org, ebiederm@xmission.com, kexec@lists.infradead.org, linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com, will@kernel.org, linux-arm-kernel@lists.infradead.org, marc.zyngier@arm.com, james.morse@arm.com, vladimir.murzin@arm.com, matthias.bgg@gmail.com, bhsharma@redhat.com, linux-mm@kvack.org
Subject: [PATCH v2 12/14] arm64, kexec: add expandable argument to relocation function
Date: Fri, 16 Aug 2019 22:46:27 -0400
Message-Id: <20190817024629.26611-13-pasha.tatashin@soleen.com>
In-Reply-To: <20190817024629.26611-1-pasha.tatashin@soleen.com>
References: <20190817024629.26611-1-pasha.tatashin@soleen.com>
Currently, the kexec relocation function (arm64_relocate_new_kernel) accepts
the following arguments:

head:		start of array that contains relocation information.
entry:		entry point for new kernel or purgatory.
dtb_mem:	first and only argument to entry.

The number of arguments cannot be easily expanded, because this function is
also called from HVC_SOFT_RESTART, which preserves only three arguments.
Also, arm64_relocate_new_kernel is written in assembly and runs without a
stack, so there is no place to spill extra arguments to free registers.

Soon we will need to pass more arguments: once we enable the MMU, we will
need to pass information about page tables. Another benefit of allowing this
function to accept more arguments is that the kernel can actually take up to
four arguments (x0-x3); currently only one is used, but if in the future we
need more (for example, to pass information about when the previous kernel
exited, for a precise measurement of the time spent in purgatory), we won't
easily be able to do that if arm64_relocate_new_kernel can't accept more
arguments.

So, add a new struct, kern_reloc_arg, and place it in a kexec-safe page
(i.e. memory that is not overwritten during relocation). Thus, make
arm64_relocate_new_kernel take only one argument, which contains all the
needed information.
Signed-off-by: Pavel Tatashin
---
 arch/arm64/include/asm/kexec.h      | 18 ++++++
 arch/arm64/kernel/asm-offsets.c     |  9 +++
 arch/arm64/kernel/cpu-reset.S       |  4 +-
 arch/arm64/kernel/cpu-reset.h       |  8 +--
 arch/arm64/kernel/machine_kexec.c   | 28 ++++++++-
 arch/arm64/kernel/relocate_kernel.S | 88 ++++++++++------------------
 6 files changed, 86 insertions(+), 69 deletions(-)

diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index d15ca1ca1e83..d5b79d4c7fae 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -90,12 +90,30 @@ static inline void crash_prepare_suspend(void) {}
 static inline void crash_post_resume(void) {}
 #endif

+/*
+ * kern_reloc_arg is passed to kernel relocation function as an argument.
+ * head		kimage->head, allows to traverse through relocation segments.
+ * entry_addr	kimage->start, where to jump from relocation function (new
+ *		kernel, or purgatory entry address).
+ * kern_arg0	first argument to kernel is its dtb address. The other
+ *		arguments are currently unused, and must be set to 0
+ */
+struct kern_reloc_arg {
+	unsigned long head;
+	unsigned long entry_addr;
+	unsigned long kern_arg0;
+	unsigned long kern_arg1;
+	unsigned long kern_arg2;
+	unsigned long kern_arg3;
+};
+
 #define ARCH_HAS_KIMAGE_ARCH

 struct kimage_arch {
 	void *dtb;
 	unsigned long dtb_mem;
 	unsigned long kern_reloc;
+	unsigned long kern_reloc_arg;
 };

 #ifdef CONFIG_KEXEC_FILE

diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 214685760e1c..900394907fd8 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -23,6 +23,7 @@
 #include
 #include
 #include
+#include

 int main(void)
 {
@@ -126,6 +127,14 @@ int main(void)
 #ifdef CONFIG_ARM_SDE_INTERFACE
   DEFINE(SDEI_EVENT_INTREGS, offsetof(struct sdei_registered_event, interrupted_regs));
   DEFINE(SDEI_EVENT_PRIORITY, offsetof(struct sdei_registered_event, priority));
+#endif
+#ifdef CONFIG_KEXEC_CORE
+  DEFINE(KRELOC_HEAD, offsetof(struct kern_reloc_arg, head));
+  DEFINE(KRELOC_ENTRY_ADDR, offsetof(struct kern_reloc_arg, entry_addr));
+  DEFINE(KRELOC_KERN_ARG0, offsetof(struct kern_reloc_arg, kern_arg0));
+  DEFINE(KRELOC_KERN_ARG1, offsetof(struct kern_reloc_arg, kern_arg1));
+  DEFINE(KRELOC_KERN_ARG2, offsetof(struct kern_reloc_arg, kern_arg2));
+  DEFINE(KRELOC_KERN_ARG3, offsetof(struct kern_reloc_arg, kern_arg3));
 #endif
   return 0;
 }

diff --git a/arch/arm64/kernel/cpu-reset.S b/arch/arm64/kernel/cpu-reset.S
index 6ea337d464c4..64c78a42919f 100644
--- a/arch/arm64/kernel/cpu-reset.S
+++ b/arch/arm64/kernel/cpu-reset.S
@@ -43,9 +43,7 @@ ENTRY(__cpu_soft_restart)
 	hvc	#0		// no return
 1:	mov	x18, x1		// entry
-	mov	x0, x2		// arg0
-	mov	x1, x3		// arg1
-	mov	x2, x4		// arg2
+	mov	x0, x2		// arg
 	br	x18
 ENDPROC(__cpu_soft_restart)

diff --git a/arch/arm64/kernel/cpu-reset.h b/arch/arm64/kernel/cpu-reset.h
index ed50e9587ad8..7a8720ff186f 100644
--- a/arch/arm64/kernel/cpu-reset.h
+++ b/arch/arm64/kernel/cpu-reset.h
@@ -11,12 +11,10 @@
 #include

 void __cpu_soft_restart(unsigned long el2_switch, unsigned long entry,
-	unsigned long arg0, unsigned long arg1, unsigned long arg2);
+	unsigned long arg);

 static inline void __noreturn cpu_soft_restart(unsigned long entry,
-					       unsigned long arg0,
-					       unsigned long arg1,
-					       unsigned long arg2)
+					       unsigned long arg)
 {
 	typeof(__cpu_soft_restart) *restart;

@@ -25,7 +23,7 @@ static inline void __noreturn cpu_soft_restart(unsigned long entry,
 	restart = (void *)__pa_symbol(__cpu_soft_restart);

 	cpu_install_idmap();
-	restart(el2_switch, entry, arg0, arg1, arg2);
+	restart(el2_switch, entry, arg);
 	unreachable();
 }

diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index 9b41da50e6f7..d745ea2051df 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -43,6 +43,7 @@ static void _kexec_image_info(const char *func, int line,
 	pr_debug(" head: %lx\n", kimage->head);
 	pr_debug(" nr_segments: %lu\n", kimage->nr_segments);
 	pr_debug(" kern_reloc: %pa\n", &kimage->arch.kern_reloc);
+	pr_debug(" kern_reloc_arg: %pa\n", &kimage->arch.kern_reloc_arg);

 	for (i = 0; i < kimage->nr_segments; i++) {
 		pr_debug(" segment[%lu]: %016lx - %016lx, 0x%lx bytes, %lu pages\n",

@@ -59,14 +60,38 @@ void machine_kexec_cleanup(struct kimage *kimage)
 	/* Empty routine needed to avoid build errors. */
 }

+/* Allocates pages for kexec page table */
+static void *kexec_page_alloc(void *arg)
+{
+	struct kimage *kimage = (struct kimage *)arg;
+	struct page *page = kimage_alloc_control_pages(kimage, 0);
+
+	if (!page)
+		return NULL;
+
+	return page_address(page);
+}
+
 int machine_kexec_post_load(struct kimage *kimage)
 {
 	unsigned long kern_reloc;
+	struct kern_reloc_arg *kern_reloc_arg;

 	kern_reloc = page_to_phys(kimage->control_code_page);
 	memcpy(__va(kern_reloc), arm64_relocate_new_kernel,
 	       arm64_relocate_new_kernel_size);
+
+	kern_reloc_arg = kexec_page_alloc(kimage);
+	if (!kern_reloc_arg)
+		return -ENOMEM;
+	memset(kern_reloc_arg, 0, sizeof(struct kern_reloc_arg));
+
 	kimage->arch.kern_reloc = kern_reloc;
+	kimage->arch.kern_reloc_arg = __pa(kern_reloc_arg);
+
+	kern_reloc_arg->head = kimage->head;
+	kern_reloc_arg->entry_addr = kimage->start;
+	kern_reloc_arg->kern_arg0 = kimage->arch.dtb_mem;

 	kexec_image_info(kimage);
 	return 0;

@@ -203,8 +228,7 @@ void machine_kexec(struct kimage *kimage)
 	 * userspace (kexec-tools).
 	 * In kexec_file case, the kernel starts directly without purgatory.
 	 */
-	cpu_soft_restart(kimage->arch.kern_reloc, kimage->head, kimage->start,
-			 kimage->arch.dtb_mem);
+	cpu_soft_restart(kimage->arch.kern_reloc, kimage->arch.kern_reloc_arg);

 	BUG(); /* Should never get here. */
 }

diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S
index c1d7db71a726..d352faf7cbe6 100644
--- a/arch/arm64/kernel/relocate_kernel.S
+++ b/arch/arm64/kernel/relocate_kernel.S
@@ -8,7 +8,7 @@

 #include
 #include
-
+#include
 #include
 #include
 #include

@@ -17,86 +17,58 @@
 /*
  * arm64_relocate_new_kernel - Put a 2nd stage image in place and boot it.
  *
- * The memory that the old kernel occupies may be overwritten when coping the
+ * The memory that the old kernel occupies may be overwritten when copying the
  * new image to its final location. To assure that the
  * arm64_relocate_new_kernel routine which does that copy is not overwritten,
  * all code and data needed by arm64_relocate_new_kernel must be between the
  * symbols arm64_relocate_new_kernel and arm64_relocate_new_kernel_end. The
  * machine_kexec() routine will copy arm64_relocate_new_kernel to the kexec
- * control_code_page, a special page which has been set up to be preserved
- * during the copy operation.
+ * safe memory that has been set up to be preserved during the copy operation.
  */
 ENTRY(arm64_relocate_new_kernel)
-
-	/* Setup the list loop variables. */
-	mov	x18, x2			/* x18 = dtb address */
-	mov	x17, x1			/* x17 = kimage_start */
-	mov	x16, x0			/* x16 = kimage_head */
-	raw_dcache_line_size x15, x0	/* x15 = dcache line size */
-	mov	x14, xzr		/* x14 = entry ptr */
-	mov	x13, xzr		/* x13 = copy dest */
-
 	/* Clear the sctlr_el2 flags. */
-	mrs	x0, CurrentEL
-	cmp	x0, #CurrentEL_EL2
+	mrs	x2, CurrentEL
+	cmp	x2, #CurrentEL_EL2
 	b.ne	1f
-	mrs	x0, sctlr_el2
+	mrs	x2, sctlr_el2
 	ldr	x1, =SCTLR_ELx_FLAGS
-	bic	x0, x0, x1
+	bic	x2, x2, x1
 	pre_disable_mmu_workaround
-	msr	sctlr_el2, x0
+	msr	sctlr_el2, x2
 	isb
-1:
-
-	/* Check if the new image needs relocation. */
+1:	/* Check if the new image needs relocation. */
+	ldr	x16, [x0, #KRELOC_HEAD]	/* x16 = kimage_head */
 	tbnz	x16, IND_DONE_BIT, .Ldone
-
+	raw_dcache_line_size x15, x1	/* x15 = dcache line size */
 .Lloop:
 	and	x12, x16, PAGE_MASK	/* x12 = addr */
-
 	/* Test the entry flags. */
 .Ltest_source:
 	tbz	x16, IND_SOURCE_BIT, .Ltest_indirection

 	/* Invalidate dest page to PoC. */
-	mov	x0, x13
-	add	x20, x0, #PAGE_SIZE
+	mov	x2, x13
+	add	x20, x2, #PAGE_SIZE
 	sub	x1, x15, #1
-	bic	x0, x0, x1
-2:	dc	ivac, x0
-	add	x0, x0, x15
-	cmp	x0, x20
+	bic	x2, x2, x1
+2:	dc	ivac, x2
+	add	x2, x2, x15
+	cmp	x2, x20
 	b.lo	2b
 	dsb	sy

-	mov	x20, x13
-	mov	x21, x12
-	copy_page x20, x21, x0, x1, x2, x3, x4, x5, x6, x7
-
-	/* dest += PAGE_SIZE */
-	add	x13, x13, PAGE_SIZE
+	copy_page x13, x12, x1, x2, x3, x4, x5, x6, x7, x8
 	b	.Lnext
-
 .Ltest_indirection:
 	tbz	x16, IND_INDIRECTION_BIT, .Ltest_destination
-
-	/* ptr = addr */
-	mov	x14, x12
+	mov	x14, x12		/* ptr = addr */
 	b	.Lnext
-
 .Ltest_destination:
 	tbz	x16, IND_DESTINATION_BIT, .Lnext
-
-	/* dest = addr */
-	mov	x13, x12
-
+	mov	x13, x12		/* dest = addr */
 .Lnext:
-	/* entry = *ptr++ */
-	ldr	x16, [x14], #8
-
-	/* while (!(entry & DONE)) */
-	tbz	x16, IND_DONE_BIT, .Lloop
-
+	ldr	x16, [x14], #8		/* entry = *ptr++ */
+	tbz	x16, IND_DONE_BIT, .Lloop /* while (!(entry & DONE)) */
 .Ldone:
 	/* wait for writes from copy_page to finish */
 	dsb	nsh
@@ -105,18 +77,16 @@ ENTRY(arm64_relocate_new_kernel)
 	isb

 	/* Start new image. */
-	mov	x0, x18
-	mov	x1, xzr
-	mov	x2, xzr
-	mov	x3, xzr
-	br	x17
-
-ENDPROC(arm64_relocate_new_kernel)
+	ldr	x4, [x0, #KRELOC_ENTRY_ADDR]	/* x4 = kimage_start */
+	ldr	x3, [x0, #KRELOC_KERN_ARG3]
+	ldr	x2, [x0, #KRELOC_KERN_ARG2]
+	ldr	x1, [x0, #KRELOC_KERN_ARG1]
+	ldr	x0, [x0, #KRELOC_KERN_ARG0]	/* x0 = dtb address */
+	br	x4
+END(arm64_relocate_new_kernel)

 .ltorg
-
 .align 3	/* To keep the 64-bit values below naturally aligned. */
-
 .Lcopy_end:
 .org	KEXEC_CONTROL_PAGE_SIZE

From patchwork Sat Aug 17 02:46:28 2019
X-Patchwork-Submitter: Pasha Tatashin
X-Patchwork-Id: 11098543
From: Pavel Tatashin
To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org,
    ebiederm@xmission.com, kexec@lists.infradead.org,
    linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com,
    will@kernel.org, linux-arm-kernel@lists.infradead.org,
    marc.zyngier@arm.com, james.morse@arm.com, vladimir.murzin@arm.com,
    matthias.bgg@gmail.com, bhsharma@redhat.com, linux-mm@kvack.org
Subject: [PATCH v2 13/14] arm64, kexec: configure transitional page table for kexec
Date: Fri, 16 Aug 2019 22:46:28 -0400
Message-Id: <20190817024629.26611-14-pasha.tatashin@soleen.com>
In-Reply-To: <20190817024629.26611-1-pasha.tatashin@soleen.com>

Configure a page table located in kexec-safe memory that has the following
mappings:

1. identity mapping for text of relocation function with executable
   permission.
2. identity mapping for argument for relocation function.
3. linear mappings for all source ranges
4. linear mappings for all destination ranges.

Also, configure el2_vector, which is used to jump to the new kernel from
EL2 on non-VHE kernels.

Signed-off-by: Pavel Tatashin
---
 arch/arm64/include/asm/kexec.h      |  32 +++++++
 arch/arm64/kernel/asm-offsets.c     |   6 ++
 arch/arm64/kernel/machine_kexec.c   | 129 ++++++++++++++++++++++++++--
 arch/arm64/kernel/relocate_kernel.S |  16 +++-
 4 files changed, 174 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index d5b79d4c7fae..450d8440f597 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -90,6 +90,23 @@ static inline void crash_prepare_suspend(void) {}
 static inline void crash_post_resume(void) {}
 #endif
 
+#if defined(CONFIG_KEXEC_CORE)
+/* Global variables for the arm64_relocate_new_kernel routine. */
+extern const unsigned char arm64_relocate_new_kernel[];
+extern const unsigned long arm64_relocate_new_kernel_size;
+
+/* Body of the vector for escalating to EL2 from relocation routine */
+extern const unsigned char kexec_el1_sync[];
+extern const unsigned long kexec_el1_sync_size;
+
+#define KEXEC_EL2_VECTOR_TABLE_SIZE	2048
+#define KEXEC_EL2_SYNC_OFFSET		(KEXEC_EL2_VECTOR_TABLE_SIZE / 2)
+
+#endif
+
+#define KEXEC_SRC_START	PAGE_OFFSET
+#define KEXEC_DST_START	(PAGE_OFFSET + \
+			((UL(0xffffffffffffffff) - PAGE_OFFSET) >> 1) + 1)
 /*
  * kern_reloc_arg is passed to kernel relocation function as an argument.
  * head		kimage->head, allows to traverse through relocation segments.
@@ -97,6 +114,15 @@ static inline void crash_post_resume(void) {}
  *		kernel, or purgatory entry address).
  * kern_arg0	first argument to kernel is its dtb address. The other
  *		arguments are currently unused, and must be set to 0
+ * trans_ttbr0	idmap for relocation function and its argument
+ * trans_ttbr1	linear map for source/destination addresses.
+ * el2_vector	If present means that relocation routine will go to EL1
+ *		from EL2 to do the copy, and then back to EL2 to do the jump
+ *		to new world. This vector contains only the final jump
+ *		instruction at KEXEC_EL2_SYNC_OFFSET.
+ * src_addr	linear map for source pages.
+ * dst_addr	linear map for destination pages.
+ * copy_len	Number of bytes that need to be copied
  */
 struct kern_reloc_arg {
 	unsigned long head;
@@ -105,6 +131,12 @@ struct kern_reloc_arg {
 	unsigned long kern_arg1;
 	unsigned long kern_arg2;
 	unsigned long kern_arg3;
+	unsigned long trans_ttbr0;
+	unsigned long trans_ttbr1;
+	unsigned long el2_vector;
+	unsigned long src_addr;
+	unsigned long dst_addr;
+	unsigned long copy_len;
 };
 
 #define ARCH_HAS_KIMAGE_ARCH
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 900394907fd8..7c2ba09a8ceb 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -135,6 +135,12 @@ int main(void)
   DEFINE(KRELOC_KERN_ARG1,	offsetof(struct kern_reloc_arg, kern_arg1));
   DEFINE(KRELOC_KERN_ARG2,	offsetof(struct kern_reloc_arg, kern_arg2));
   DEFINE(KRELOC_KERN_ARG3,	offsetof(struct kern_reloc_arg, kern_arg3));
+  DEFINE(KRELOC_TRANS_TTBR0,	offsetof(struct kern_reloc_arg, trans_ttbr0));
+  DEFINE(KRELOC_TRANS_TTBR1,	offsetof(struct kern_reloc_arg, trans_ttbr1));
+  DEFINE(KRELOC_EL2_VECTOR,	offsetof(struct kern_reloc_arg, el2_vector));
+  DEFINE(KRELOC_SRC_ADDR,	offsetof(struct kern_reloc_arg, src_addr));
+  DEFINE(KRELOC_DST_ADDR,	offsetof(struct kern_reloc_arg, dst_addr));
+  DEFINE(KRELOC_COPY_LEN,	offsetof(struct kern_reloc_arg, copy_len));
 #endif
   return 0;
 }
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index d745ea2051df..16f761fc50c8 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -20,13 +20,10 @@
 #include
 #include
 #include
+#include
 
 #include "cpu-reset.h"
 
-/* Global variables for the arm64_relocate_new_kernel routine. */
-extern const unsigned char arm64_relocate_new_kernel[];
-extern const unsigned long arm64_relocate_new_kernel_size;
-
 /**
  * kexec_image_info - For debugging output.
  */
@@ -72,15 +69,128 @@ static void *kexec_page_alloc(void *arg)
 	return page_address(page);
 }
 
+/*
+ * Map source segments starting from KEXEC_SRC_START, and map destination
+ * segments starting from KEXEC_DST_START, and return size of copy in
+ * *copy_len argument.
+ * Relocation function essentially needs to do:
+ * memcpy(KEXEC_DST_START, KEXEC_SRC_START, copy_len);
+ */
+static int map_segments(struct kimage *kimage, pgd_t *pgdp,
+			struct trans_table_info *info,
+			unsigned long *copy_len)
+{
+	unsigned long *ptr = 0;
+	unsigned long dest = 0;
+	unsigned long src_va = KEXEC_SRC_START;
+	unsigned long dst_va = KEXEC_DST_START;
+	unsigned long len = 0;
+	unsigned long entry, addr;
+	int rc;
+
+	for (entry = kimage->head; !(entry & IND_DONE); entry = *ptr++) {
+		addr = entry & PAGE_MASK;
+
+		switch (entry & IND_FLAGS) {
+		case IND_DESTINATION:
+			dest = addr;
+			break;
+		case IND_INDIRECTION:
+			ptr = __va(addr);
+			break;
+		case IND_SOURCE:
+			rc = trans_table_map_page(info, pgdp, __va(addr),
+						  src_va, PAGE_KERNEL);
+			if (rc)
+				return rc;
+			rc = trans_table_map_page(info, pgdp, __va(dest),
+						  dst_va, PAGE_KERNEL);
+			if (rc)
+				return rc;
+			dest += PAGE_SIZE;
+			src_va += PAGE_SIZE;
+			dst_va += PAGE_SIZE;
+			len += PAGE_SIZE;
+		}
+	}
+	*copy_len = len;
+
+	return 0;
+}
+
+static int mmu_relocate_setup(struct kimage *kimage, unsigned long kern_reloc,
+			      struct kern_reloc_arg *kern_reloc_arg)
+{
+	struct trans_table_info info = {
+		.trans_alloc_page	= kexec_page_alloc,
+		.trans_alloc_arg	= kimage,
+		.trans_flags		= 0,
+	};
+	pgd_t *trans_ttbr0, *trans_ttbr1;
+	int rc;
+
+	rc = trans_table_create_empty(&info, &trans_ttbr0);
+	if (rc)
+		return rc;
+
+	rc = trans_table_create_empty(&info, &trans_ttbr1);
+	if (rc)
+		return rc;
+
+	rc = map_segments(kimage, trans_ttbr1, &info,
+			  &kern_reloc_arg->copy_len);
+	if (rc)
+		return rc;
+
+	/* Map relocation function va == pa */
+	rc = trans_table_map_page(&info, trans_ttbr0, __va(kern_reloc),
+				  kern_reloc, PAGE_KERNEL_EXEC);
+	if (rc)
+		return rc;
+
+	/* Map relocation function argument va == pa */
+	rc = trans_table_map_page(&info, trans_ttbr0, kern_reloc_arg,
+				  __pa(kern_reloc_arg), PAGE_KERNEL);
+	if (rc)
+		return rc;
+
+	kern_reloc_arg->trans_ttbr0 = phys_to_ttbr(__pa(trans_ttbr0));
+	kern_reloc_arg->trans_ttbr1 = phys_to_ttbr(__pa(trans_ttbr1));
+	kern_reloc_arg->src_addr = KEXEC_SRC_START;
+	kern_reloc_arg->dst_addr = KEXEC_DST_START;
+
+	return 0;
+}
+
 int machine_kexec_post_load(struct kimage *kimage)
 {
+	unsigned long el2_vector = 0;
 	unsigned long kern_reloc;
 	struct kern_reloc_arg *kern_reloc_arg;
+	int rc = 0;
+
+	/*
+	 * Sanity check that relocation function + el2_vector fit into one
+	 * page.
+	 */
+	if (arm64_relocate_new_kernel_size > KEXEC_EL2_VECTOR_TABLE_SIZE) {
+		pr_err("can't fit relocation function and el2_vector in one page");
+		return -ENOMEM;
+	}
 
 	kern_reloc = page_to_phys(kimage->control_code_page);
 	memcpy(__va(kern_reloc), arm64_relocate_new_kernel,
 	       arm64_relocate_new_kernel_size);
+
+	/* Setup vector table only when EL2 is available, but no VHE */
+	if (is_hyp_mode_available() && !is_kernel_in_hyp_mode()) {
+		el2_vector = kern_reloc + KEXEC_EL2_VECTOR_TABLE_SIZE;
+		memcpy(__va(el2_vector + KEXEC_EL2_SYNC_OFFSET), kexec_el1_sync,
+		       kexec_el1_sync_size);
+	}
+
 	kern_reloc_arg = kexec_page_alloc(kimage);
 	if (!kern_reloc_arg)
 		return -ENOMEM;
@@ -91,10 +201,19 @@ int machine_kexec_post_load(struct kimage *kimage)
 
 	kern_reloc_arg->head = kimage->head;
 	kern_reloc_arg->entry_addr = kimage->start;
+	kern_reloc_arg->el2_vector = el2_vector;
 	kern_reloc_arg->kern_arg0 = kimage->arch.dtb_mem;
 
+	/*
+	 * If relocation is not needed, we do not need to enable MMU in
+	 * relocation routine, therefore do not create page tables for
+	 * scenarios such as crash kernel
+	 */
+	if (!(kimage->head & IND_DONE))
+		rc = mmu_relocate_setup(kimage, kern_reloc, kern_reloc_arg);
+
 	kexec_image_info(kimage);
 
-	return 0;
+	return rc;
 }
 
 /**
diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S
index d352faf7cbe6..14243a678277 100644
--- a/arch/arm64/kernel/relocate_kernel.S
+++ b/arch/arm64/kernel/relocate_kernel.S
@@ -83,17 +83,25 @@ ENTRY(arm64_relocate_new_kernel)
 	ldr	x1, [x0, #KRELOC_KERN_ARG1]
 	ldr	x0, [x0, #KRELOC_KERN_ARG0]	/* x0 = dtb address */
 	br	x4
+.ltorg
+.Larm64_relocate_new_kernel_end:
 END(arm64_relocate_new_kernel)
 
-.ltorg
+ENTRY(kexec_el1_sync)
+	br	x4			/* Jump to new world from el2 */
+.Lkexec_el1_sync_end:
+END(kexec_el1_sync)
+
 .align 3	/* To keep the 64-bit values below naturally aligned. */
-.Lcopy_end:
 .org	KEXEC_CONTROL_PAGE_SIZE
-
 /*
  * arm64_relocate_new_kernel_size - Number of bytes to copy to the
  * control_code_page.
  */
 .globl arm64_relocate_new_kernel_size
 arm64_relocate_new_kernel_size:
-	.quad	.Lcopy_end - arm64_relocate_new_kernel
+	.quad	.Larm64_relocate_new_kernel_end - arm64_relocate_new_kernel
+
+.globl kexec_el1_sync_size
+kexec_el1_sync_size:
+	.quad	.Lkexec_el1_sync_end - kexec_el1_sync

From patchwork Sat Aug 17 02:46:29 2019
From: Pavel Tatashin
To: pasha.tatashin@soleen.com, jmorris@namei.org, sashal@kernel.org,
    ebiederm@xmission.com, kexec@lists.infradead.org,
    linux-kernel@vger.kernel.org, corbet@lwn.net, catalin.marinas@arm.com,
    will@kernel.org, linux-arm-kernel@lists.infradead.org,
    marc.zyngier@arm.com, james.morse@arm.com, vladimir.murzin@arm.com,
    matthias.bgg@gmail.com, bhsharma@redhat.com, linux-mm@kvack.org
Subject: [PATCH v2 14/14] arm64, kexec: enable MMU during kexec relocation
Date: Fri, 16 Aug 2019 22:46:29 -0400
Message-Id: <20190817024629.26611-15-pasha.tatashin@soleen.com>
In-Reply-To: <20190817024629.26611-1-pasha.tatashin@soleen.com>

Now that we have transitional page tables configured, temporarily enable
the MMU to allow faster relocation of segments to their final destination.

Performance data: for a moderate-size kernel + initramfs (25M), the
relocation was taking 0.382s; with the MMU enabled it now takes only
0.019s, a 20x improvement. The time is proportional to the size of the
relocation, so a larger initramfs (100M) could take over a second.

Also, remove reloc_arg->head, as it is no longer needed once the MMU is
enabled.

Signed-off-by: Pavel Tatashin
---
 arch/arm64/include/asm/kexec.h      |   2 -
 arch/arm64/kernel/asm-offsets.c     |   1 -
 arch/arm64/kernel/machine_kexec.c   |   1 -
 arch/arm64/kernel/relocate_kernel.S | 136 +++++++++++++++++-----------
 4 files changed, 84 insertions(+), 56 deletions(-)

diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index 450d8440f597..ad81ed3e5751 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -109,7 +109,6 @@ extern const unsigned long kexec_el1_sync_size;
 			((UL(0xffffffffffffffff) - PAGE_OFFSET) >> 1) + 1)
 /*
  * kern_reloc_arg is passed to kernel relocation function as an argument.
- * head		kimage->head, allows to traverse through relocation segments.
  * entry_addr	kimage->start, where to jump from relocation function (new
  *		kernel, or purgatory entry address).
  * kern_arg0	first argument to kernel is its dtb address. The other
@@ -125,7 +124,6 @@ extern const unsigned long kexec_el1_sync_size;
  * copy_len	Number of bytes that need to be copied
  */
 struct kern_reloc_arg {
-	unsigned long head;
 	unsigned long entry_addr;
 	unsigned long kern_arg0;
 	unsigned long kern_arg1;
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 7c2ba09a8ceb..13ad00b1b90f 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -129,7 +129,6 @@ int main(void)
   DEFINE(SDEI_EVENT_PRIORITY,	offsetof(struct sdei_registered_event, priority));
 #endif
 #ifdef CONFIG_KEXEC_CORE
-  DEFINE(KRELOC_HEAD,		offsetof(struct kern_reloc_arg, head));
   DEFINE(KRELOC_ENTRY_ADDR,	offsetof(struct kern_reloc_arg, entry_addr));
   DEFINE(KRELOC_KERN_ARG0,	offsetof(struct kern_reloc_arg, kern_arg0));
   DEFINE(KRELOC_KERN_ARG1,	offsetof(struct kern_reloc_arg, kern_arg1));
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index 16f761fc50c8..b5ff5fdb4777 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -199,7 +199,6 @@ int machine_kexec_post_load(struct kimage *kimage)
 	kimage->arch.kern_reloc = kern_reloc;
 	kimage->arch.kern_reloc_arg = __pa(kern_reloc_arg);
 
-	kern_reloc_arg->head = kimage->head;
 	kern_reloc_arg->entry_addr = kimage->start;
 	kern_reloc_arg->el2_vector = el2_vector;
 	kern_reloc_arg->kern_arg0 = kimage->arch.dtb_mem;
diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S
index 14243a678277..96ff6760bd9c 100644
--- a/arch/arm64/kernel/relocate_kernel.S
+++ b/arch/arm64/kernel/relocate_kernel.S
@@ -4,6 +4,8 @@
  *
  * Copyright (C) Linaro.
  * Copyright (C) Huawei Futurewei Technologies.
+ * Copyright (c) 2019, Microsoft Corporation.
+ * Pavel Tatashin
  */
 
 #include
@@ -14,6 +16,49 @@
 #include
 #include
 
+/* Invalidate TLB */
+.macro tlb_invalidate
+	dsb	sy
+	dsb	ish
+	tlbi	vmalle1
+	dsb	ish
+	isb
+.endm
+
+/* Turn-off mmu at level specified by sctlr */
+.macro turn_off_mmu sctlr, tmp1, tmp2
+	mrs	\tmp1, \sctlr
+	ldr	\tmp2, =SCTLR_ELx_FLAGS
+	bic	\tmp1, \tmp1, \tmp2
+	pre_disable_mmu_workaround
+	msr	\sctlr, \tmp1
+	isb
+.endm
+
+/* Turn-on mmu at level specified by sctlr */
+.macro turn_on_mmu sctlr, tmp1, tmp2
+	mrs	\tmp1, \sctlr
+	ldr	\tmp2, =SCTLR_ELx_FLAGS
+	orr	\tmp1, \tmp1, \tmp2
+	msr	\sctlr, \tmp1
+	ic	iallu
+	dsb	nsh
+	isb
+.endm
+
+/*
+ * Set ttbr0 and ttbr1, called while MMU is disabled, so no need to temporarily
+ * set zero_page table. Invalidate TLB after new tables are set.
+ */
+.macro set_ttbr arg, tmp
+	ldr	\tmp, [\arg, #KRELOC_TRANS_TTBR0]
+	msr	ttbr0_el1, \tmp
+	ldr	\tmp, [\arg, #KRELOC_TRANS_TTBR1]
+	offset_ttbr1 \tmp
+	msr	ttbr1_el1, \tmp
+	isb
+.endm
+
 /*
  * arm64_relocate_new_kernel - Put a 2nd stage image in place and boot it.
  *
@@ -24,65 +69,52 @@
  * symbols arm64_relocate_new_kernel and arm64_relocate_new_kernel_end. The
  * machine_kexec() routine will copy arm64_relocate_new_kernel to the kexec
  * safe memory that has been set up to be preserved during the copy operation.
+ *
+ * This function temporarily enables MMU if kernel relocation is needed.
+ * Also, if we enter this function at EL2 on non-VHE kernel, we temporarily go
+ * to EL1 to enable MMU, and escalate back to EL2 at the end to do the jump to
+ * the new kernel. This is determined by presence of el2_vector.
  */
 ENTRY(arm64_relocate_new_kernel)
-	/* Clear the sctlr_el2 flags. */
-	mrs	x2, CurrentEL
-	cmp	x2, #CurrentEL_EL2
+	mrs	x1, CurrentEL
+	cmp	x1, #CurrentEL_EL2
 	b.ne	1f
-	mrs	x2, sctlr_el2
-	ldr	x1, =SCTLR_ELx_FLAGS
-	bic	x2, x2, x1
-	pre_disable_mmu_workaround
-	msr	sctlr_el2, x2
-	isb
-1:	/* Check if the new image needs relocation. */
-	ldr	x16, [x0, #KRELOC_HEAD]		/* x16 = kimage_head */
-	tbnz	x16, IND_DONE_BIT, .Ldone
-	raw_dcache_line_size x15, x1		/* x15 = dcache line size */
-.Lloop:
-	and	x12, x16, PAGE_MASK		/* x12 = addr */
-
-	/* Test the entry flags. */
-.Ltest_source:
-	tbz	x16, IND_SOURCE_BIT, .Ltest_indirection
-
-	/* Invalidate dest page to PoC. */
-	mov	x2, x13
-	add	x20, x2, #PAGE_SIZE
-	sub	x1, x15, #1
-	bic	x2, x2, x1
-2:	dc	ivac, x2
-	add	x2, x2, x15
-	cmp	x2, x20
-	b.lo	2b
-	dsb	sy
-
-	copy_page x13, x12, x1, x2, x3, x4, x5, x6, x7, x8
-	b	.Lnext
-.Ltest_indirection:
-	tbz	x16, IND_INDIRECTION_BIT, .Ltest_destination
-	mov	x14, x12			/* ptr = addr */
-	b	.Lnext
-.Ltest_destination:
-	tbz	x16, IND_DESTINATION_BIT, .Lnext
-	mov	x13, x12			/* dest = addr */
-.Lnext:
-	ldr	x16, [x14], #8			/* entry = *ptr++ */
-	tbz	x16, IND_DONE_BIT, .Lloop	/* while (!(entry & DONE)) */
-.Ldone:
-	/* wait for writes from copy_page to finish */
-	dsb	nsh
-	ic	iallu
-	dsb	nsh
-	isb
-
-	/* Start new image. */
-	ldr	x4, [x0, #KRELOC_ENTRY_ADDR]	/* x4 = kimage_start */
+	turn_off_mmu sctlr_el2, x1, x2		/* Turn off MMU at EL2 */
+1:	mov	x20, xzr			/* x20 will hold vector value */
+	ldr	x11, [x0, #KRELOC_COPY_LEN]
+	cbz	x11, 5f				/* Check if need to relocate */
+	ldr	x20, [x0, #KRELOC_EL2_VECTOR]
+	cbz	x20, 2f				/* need to reduce to EL1? */
+	msr	vbar_el2, x20			/* el2_vector present, means */
+	adr	x1, 2f				/* we will do copy in el1 but */
+	msr	elr_el2, x1			/* do final jump from el2 */
+	eret					/* Reduce to EL1 */
+2:	set_ttbr x0, x1				/* Set our page tables */
+	tlb_invalidate
+	turn_on_mmu sctlr_el1, x1, x2		/* Turn MMU back on */
+	ldr	x1, [x0, #KRELOC_DST_ADDR]
+	ldr	x2, [x0, #KRELOC_SRC_ADDR]
+	mov	x12, x1				/* x12 dst backup */
+3:	copy_page x1, x2, x3, x4, x5, x6, x7, x8, x9, x10
+	sub	x11, x11, #PAGE_SIZE
+	cbnz	x11, 3b				/* page copy loop */
+	raw_dcache_line_size x2, x3		/* x2 = dcache line size */
+	sub	x3, x2, #1			/* x3 = dcache_size - 1 */
+	bic	x12, x12, x3
+4:	dc	cvau, x12			/* Flush D-cache */
+	add	x12, x12, x2
+	cmp	x12, x1				/* Compare to dst + len */
+	b.ne	4b				/* D-cache flush loop */
+	turn_off_mmu sctlr_el1, x1, x2		/* Turn off MMU */
+	tlb_invalidate				/* Invalidate TLB */
+5:	ldr	x4, [x0, #KRELOC_ENTRY_ADDR]	/* x4 = kimage_start */
 	ldr	x3, [x0, #KRELOC_KERN_ARG3]
 	ldr	x2, [x0, #KRELOC_KERN_ARG2]
 	ldr	x1, [x0, #KRELOC_KERN_ARG1]
 	ldr	x0, [x0, #KRELOC_KERN_ARG0]	/* x0 = dtb address */
-	br	x4
+	cbnz	x20, 6f				/* need to escalate to el2? */
+	br	x4				/* Jump to new world */
+6:	hvc	#0				/* enters kexec_el1_sync */
 .ltorg
.Larm64_relocate_new_kernel_end:
END(arm64_relocate_new_kernel)