From patchwork Mon Aug 8 07:23:33 2016
X-Patchwork-Submitter: Yinghai Lu
X-Patchwork-Id: 9266871
Sender: yhlu.kernel@gmail.com
From: Yinghai Lu
Date: Mon, 8 Aug 2016 00:23:33 -0700
References: <1470071280-78706-1-git-send-email-thgarnie@google.com>
 <2213000.eZV9GAcFWG@vostro.rjw.lan>
 <2869477.o6QceH2ItE@vostro.rjw.lan>
To: "Rafael J. Wysocki"
Cc: Thomas Garnier, "Rafael J. Wysocki", Thomas Gleixner, Ingo Molnar,
Peter Anvin" , Kees Cook , Pavel Machek , "the arch/x86 maintainers" , Linux Kernel Mailing List , Linux PM list , "kernel-hardening@lists.openwall.com" Subject: [kernel-hardening] Re: [PATCH v2] x86/power/64: Support unaligned addresses for temporary mapping X-Virus-Scanned: ClamAV using ClamSMTP On Mon, Aug 8, 2016 at 12:06 AM, Yinghai Lu wrote: >> >>> At the same time, set_up_temporary_text_mapping could be replaced with >>> kernel_ident_mapping_init() too if restore_jump_address is KVA for >>> jump_address_phys. >> >> I see no reason to do that. >> >> First, it is not guaranteed that restore_jump_address will always be a KVA for >> jump_address_phys and second, it really is only necessary to map one PMD in >> there. > > With your v2 version, you could pass difference between restore_jump_address and > jump_address_phys as info->off ? > With that, we can kill more lines if replace with > set_up_temporary_text_mapping with > kernel_ident_mapping_init() and make code more readable. > > But just keep that in separated patch after your v2 patch. like: --- arch/x86/power/hibernate_64.c | 55 ++++++++++++------------------------------ 1 file changed, 17 insertions(+), 38 deletions(-) Index: linux-2.6/arch/x86/power/hibernate_64.c =================================================================== --- linux-2.6.orig/arch/x86/power/hibernate_64.c +++ linux-2.6/arch/x86/power/hibernate_64.c @@ -41,42 +41,6 @@ unsigned long temp_level4_pgt __visible; unsigned long relocated_restore_code __visible; -static int set_up_temporary_text_mapping(pgd_t *pgd) -{ - pmd_t *pmd; - pud_t *pud; - - /* - * The new mapping only has to cover the page containing the image - * kernel's entry point (jump_address_phys), because the switch over to - * it is carried out by relocated code running from a page allocated - * specifically for this purpose and covered by the identity mapping, so - * the temporary kernel text mapping is only needed for the final jump. - * Moreover, in that mapping the virtual address of the image kernel's - * entry point must be the same as its virtual address in the image - * kernel (restore_jump_address), so the image kernel's - * restore_registers() code doesn't find itself in a different area of - * the virtual address space after switching over to the original page - * tables used by the image kernel. 
-	 */
-	pud = (pud_t *)get_safe_page(GFP_ATOMIC);
-	if (!pud)
-		return -ENOMEM;
-
-	pmd = (pmd_t *)get_safe_page(GFP_ATOMIC);
-	if (!pmd)
-		return -ENOMEM;
-
-	set_pmd(pmd + pmd_index(restore_jump_address),
-		__pmd((jump_address_phys & PMD_MASK) | __PAGE_KERNEL_LARGE_EXEC));
-	set_pud(pud + pud_index(restore_jump_address),
-		__pud(__pa(pmd) | _KERNPG_TABLE));
-	set_pgd(pgd + pgd_index(restore_jump_address),
-		__pgd(__pa(pud) | _KERNPG_TABLE));
-
-	return 0;
-}
-
 static void *alloc_pgt_page(void *context)
 {
 	return (void *)get_safe_page(GFP_ATOMIC);
@@ -87,7 +51,6 @@ static int set_up_temporary_mappings(voi
 	struct x86_mapping_info info = {
 		.alloc_pgt_page	= alloc_pgt_page,
 		.pmd_flag	= __PAGE_KERNEL_LARGE_EXEC,
-		.offset		= __PAGE_OFFSET,
 	};
 	unsigned long mstart, mend;
 	pgd_t *pgd;
@@ -99,11 +62,27 @@ static int set_up_temporary_mappings(voi
 		return -ENOMEM;
 
 	/* Prepare a temporary mapping for the kernel text */
-	result = set_up_temporary_text_mapping(pgd);
+	/*
+	 * The new mapping only has to cover the page containing the image
+	 * kernel's entry point (jump_address_phys), because the switch over to
+	 * it is carried out by relocated code running from a page allocated
+	 * specifically for this purpose and covered by the identity mapping, so
+	 * the temporary kernel text mapping is only needed for the final jump.
+	 * Moreover, in that mapping the virtual address of the image kernel's
+	 * entry point must be the same as its virtual address in the image
+	 * kernel (restore_jump_address), so the image kernel's
+	 * restore_registers() code doesn't find itself in a different area of
+	 * the virtual address space after switching over to the original page
+	 * tables used by the image kernel.
+	 */
+	info.offset = restore_jump_address - jump_address_phys;
+	result = kernel_ident_mapping_init(&info, pgd, jump_address_phys,
+					   jump_address_phys + PMD_SIZE);
 	if (result)
 		return result;
 
 	/* Set up the direct mapping from scratch */
+	info.offset = __PAGE_OFFSET;
 	for (i = 0; i < nr_pfn_mapped; i++) {
 		mstart = pfn_mapped[i].start << PAGE_SHIFT;
 		mend = pfn_mapped[i].end << PAGE_SHIFT;
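
The reason a single helper can build both mappings here is that info.offset is a
constant virtual-to-physical displacement: every entry installed by
kernel_ident_mapping_init() maps virtual address V to physical address
V - offset, so offset = restore_jump_address - jump_address_phys places the
kernel text at restore_jump_address, while offset = __PAGE_OFFSET reproduces the
direct mapping. The following stand-alone sketch (ordinary user-space C with
made-up example addresses, not kernel code) only illustrates that arithmetic:

/*
 * Simplified illustration of an "offset" identity mapping:
 * virt = phys + offset, so the PMD entry for virt must point at
 * phys = virt - offset.  All values below are hypothetical; the real
 * work is done by kernel_ident_mapping_init() in arch/x86/mm/ident_map.c.
 */
#include <stdio.h>
#include <stdint.h>

#define PMD_SHIFT	21
#define PMD_SIZE	(1ULL << PMD_SHIFT)
#define PMD_MASK	(~(PMD_SIZE - 1))

/* Physical address encoded by the 2 MB entry covering virt. */
static uint64_t pmd_entry_phys(uint64_t virt, uint64_t offset)
{
	return (virt - offset) & PMD_MASK;
}

int main(void)
{
	/* Hypothetical example values, not taken from a real system. */
	uint64_t jump_address_phys    = 0x01234000ULL;
	uint64_t restore_jump_address = 0xffffffff81234000ULL;
	uint64_t page_offset          = 0xffff880000000000ULL; /* stand-in for __PAGE_OFFSET */

	/* Text mapping: offset chosen so the virtual address equals restore_jump_address. */
	uint64_t text_off = restore_jump_address - jump_address_phys;
	printf("text PMD:   virt %#llx -> phys %#llx\n",
	       (unsigned long long)restore_jump_address,
	       (unsigned long long)pmd_entry_phys(restore_jump_address, text_off));

	/* Direct mapping: offset == __PAGE_OFFSET, i.e. virt = phys + __PAGE_OFFSET. */
	uint64_t phys = 0x00200000ULL;
	printf("direct PMD: virt %#llx -> phys %#llx\n",
	       (unsigned long long)(phys + page_offset),
	       (unsigned long long)pmd_entry_phys(phys + page_offset, page_offset));

	return 0;
}

The first printout lands on jump_address_phys rounded down to a PMD boundary,
which matches what the removed set_up_temporary_text_mapping() computed with
jump_address_phys & PMD_MASK.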