From patchwork Wed Jul 18 09:40:41 2018
X-Patchwork-Submitter: Joerg Roedel
X-Patchwork-Id: 10531813
From: Joerg Roedel
To: Thomas Gleixner, Ingo Molnar, "H . Peter Anvin"
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 Linus Torvalds, Andy Lutomirski, Dave Hansen, Josh Poimboeuf,
 Juergen Gross, Peter Zijlstra, Borislav Petkov, Jiri Kosina,
 Boris Ostrovsky, Brian Gerst, David Laight, Denys Vlasenko,
 Eduardo Valentin, Greg KH, Will Deacon, aliguori@amazon.com,
 daniel.gruss@iaik.tugraz.at, hughd@google.com, keescook@google.com,
 Andrea Arcangeli, Waiman Long, Pavel Machek, "David H . Gutteridge",
 jroedel@suse.de, joro@8bytes.org
Subject: [PATCH 04/39] x86/entry/32: Put ESPFIX code into a macro
Date: Wed, 18 Jul 2018 11:40:41 +0200
Message-Id: <1531906876-13451-5-git-send-email-joro@8bytes.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1531906876-13451-1-git-send-email-joro@8bytes.org>
References: <1531906876-13451-1-git-send-email-joro@8bytes.org>

From: Joerg Roedel

This makes it easier to split up the shared iret code path.
Signed-off-by: Joerg Roedel
---
 arch/x86/entry/entry_32.S | 97 ++++++++++++++++++++++++-----------------------
 1 file changed, 49 insertions(+), 48 deletions(-)

diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index 39f711a..ef7d653 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -221,6 +221,54 @@ POP_GS_EX
 .endm
 
+.macro CHECK_AND_APPLY_ESPFIX
+#ifdef CONFIG_X86_ESPFIX32
+#define GDT_ESPFIX_SS PER_CPU_VAR(gdt_page) + (GDT_ENTRY_ESPFIX_SS * 8)
+
+	ALTERNATIVE	"jmp .Lend_\@", "", X86_BUG_ESPFIX
+
+	movl	PT_EFLAGS(%esp), %eax		# mix EFLAGS, SS and CS
+	/*
+	 * Warning: PT_OLDSS(%esp) contains the wrong/random values if we
+	 * are returning to the kernel.
+	 * See comments in process.c:copy_thread() for details.
+	 */
+	movb	PT_OLDSS(%esp), %ah
+	movb	PT_CS(%esp), %al
+	andl	$(X86_EFLAGS_VM | (SEGMENT_TI_MASK << 8) | SEGMENT_RPL_MASK), %eax
+	cmpl	$((SEGMENT_LDT << 8) | USER_RPL), %eax
+	jne	.Lend_\@	# returning to user-space with LDT SS
+
+	/*
+	 * Setup and switch to ESPFIX stack
+	 *
+	 * We're returning to userspace with a 16 bit stack. The CPU will not
+	 * restore the high word of ESP for us on executing iret... This is an
+	 * "official" bug of all the x86-compatible CPUs, which we can work
+	 * around to make dosemu and wine happy. We do this by preloading the
+	 * high word of ESP with the high word of the userspace ESP while
+	 * compensating for the offset by changing to the ESPFIX segment with
+	 * a base address that matches for the difference.
+	 */
+	mov	%esp, %edx			/* load kernel esp */
+	mov	PT_OLDESP(%esp), %eax		/* load userspace esp */
+	mov	%dx, %ax			/* eax: new kernel esp */
+	sub	%eax, %edx			/* offset (low word is 0) */
+	shr	$16, %edx
+	mov	%dl, GDT_ESPFIX_SS + 4		/* bits 16..23 */
+	mov	%dh, GDT_ESPFIX_SS + 7		/* bits 24..31 */
+	pushl	$__ESPFIX_SS
+	pushl	%eax				/* new kernel esp */
+	/*
+	 * Disable interrupts, but do not irqtrace this section: we
+	 * will soon execute iret and the tracer was already set to
+	 * the irqstate after the IRET:
+	 */
+	DISABLE_INTERRUPTS(CLBR_ANY)
+	lss	(%esp), %esp			/* switch to espfix segment */
+.Lend_\@:
+#endif /* CONFIG_X86_ESPFIX32 */
+.endm
 /*
  * %eax: prev task
  * %edx: next task
  */
@@ -547,21 +595,7 @@ ENTRY(entry_INT80_32)
 restore_all:
 	TRACE_IRQS_IRET
 .Lrestore_all_notrace:
-#ifdef CONFIG_X86_ESPFIX32
-	ALTERNATIVE	"jmp .Lrestore_nocheck", "", X86_BUG_ESPFIX
-
-	movl	PT_EFLAGS(%esp), %eax		# mix EFLAGS, SS and CS
-	/*
-	 * Warning: PT_OLDSS(%esp) contains the wrong/random values if we
-	 * are returning to the kernel.
-	 * See comments in process.c:copy_thread() for details.
-	 */
-	movb	PT_OLDSS(%esp), %ah
-	movb	PT_CS(%esp), %al
-	andl	$(X86_EFLAGS_VM | (SEGMENT_TI_MASK << 8) | SEGMENT_RPL_MASK), %eax
-	cmpl	$((SEGMENT_LDT << 8) | USER_RPL), %eax
-	je	.Lldt_ss	# returning to user-space with LDT SS
-#endif
+	CHECK_AND_APPLY_ESPFIX
 .Lrestore_nocheck:
 	RESTORE_REGS 4				# skip orig_eax/error_code
 .Lirq_return:
@@ -579,39 +613,6 @@ ENTRY(iret_exc )
 	jmp	common_exception
 .previous
 	_ASM_EXTABLE(.Lirq_return, iret_exc)
-
-#ifdef CONFIG_X86_ESPFIX32
-.Lldt_ss:
-/*
- * Setup and switch to ESPFIX stack
- *
- * We're returning to userspace with a 16 bit stack. The CPU will not
- * restore the high word of ESP for us on executing iret... This is an
- * "official" bug of all the x86-compatible CPUs, which we can work
- * around to make dosemu and wine happy. We do this by preloading the
- * high word of ESP with the high word of the userspace ESP while
- * compensating for the offset by changing to the ESPFIX segment with
- * a base address that matches for the difference.
- */
-#define GDT_ESPFIX_SS PER_CPU_VAR(gdt_page) + (GDT_ENTRY_ESPFIX_SS * 8)
-	mov	%esp, %edx			/* load kernel esp */
-	mov	PT_OLDESP(%esp), %eax		/* load userspace esp */
-	mov	%dx, %ax			/* eax: new kernel esp */
-	sub	%eax, %edx			/* offset (low word is 0) */
-	shr	$16, %edx
-	mov	%dl, GDT_ESPFIX_SS + 4		/* bits 16..23 */
-	mov	%dh, GDT_ESPFIX_SS + 7		/* bits 24..31 */
-	pushl	$__ESPFIX_SS
-	pushl	%eax				/* new kernel esp */
-	/*
-	 * Disable interrupts, but do not irqtrace this section: we
-	 * will soon execute iret and the tracer was already set to
-	 * the irqstate after the IRET:
-	 */
-	DISABLE_INTERRUPTS(CLBR_ANY)
-	lss	(%esp), %esp			/* switch to espfix segment */
-	jmp	.Lrestore_nocheck
-#endif
 ENDPROC(entry_INT80_32)
 
 .macro FIXUP_ESPFIX_STACK
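
[Editor's illustration, not part of the patch] The segment-base trick described in the
comment is easier to see with concrete numbers. Below is a minimal user-space C sketch
that mimics the arithmetic of CHECK_AND_APPLY_ESPFIX; the variable names and example
values are invented for illustration only. The new ESP keeps the kernel stack's low
word, and the ESPFIX segment base is set to the remaining difference so that
base + ESP still points at the kernel stack even after a 16-bit iret drops the high
word of ESP.

	#include <stdint.h>
	#include <stdio.h>

	/*
	 * Illustration only (not kernel code): kernel_esp stands in for %esp at
	 * iret time, user_esp for PT_OLDESP(%esp), and "base" for the base
	 * address written into the GDT_ESPFIX_SS descriptor.
	 */
	int main(void)
	{
		uint32_t kernel_esp = 0xc1357f40;	/* %esp while returning (made up) */
		uint32_t user_esp   = 0xbfff2a10;	/* userspace ESP from pt_regs (made up) */

		/* mov %dx, %ax: new ESP = high word of user ESP, low word of kernel ESP */
		uint32_t new_esp = (user_esp & 0xffff0000u) | (kernel_esp & 0x0000ffffu);

		/* sub %eax, %edx; shr $16, %edx: segment base compensates for the swap */
		uint32_t base = kernel_esp - new_esp;	/* low word is 0 by construction */

		/*
		 * After lss switches to the ESPFIX segment, linear address = base + ESP.
		 * Even if a later 16-bit iret restores only the low word of ESP, the
		 * preloaded high word keeps base + ESP pointing at the kernel stack.
		 */
		printf("base=%#x new_esp=%#x -> base+new_esp=%#x (kernel_esp=%#x)\n",
		       (unsigned)base, (unsigned)new_esp,
		       (unsigned)(base + new_esp), (unsigned)kernel_esp);
		return 0;
	}

Because the low word of that difference is always zero, the assembly above only has to
update bytes 4 and 7 of the GDT entry (base bits 16..23 and 24..31).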