From patchwork Wed Jul 18 09:40:45 2018
X-Patchwork-Submitter: Joerg Roedel
X-Patchwork-Id: 10531827
From: Joerg Roedel
To: Thomas Gleixner, Ingo Molnar, "H. Peter Anvin"
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Linus Torvalds, Andy Lutomirski, Dave Hansen, Josh Poimboeuf, Juergen Gross, Peter Zijlstra, Borislav Petkov, Jiri Kosina, Boris Ostrovsky, Brian Gerst, David Laight, Denys Vlasenko, Eduardo Valentin, Greg KH, Will Deacon, aliguori@amazon.com, daniel.gruss@iaik.tugraz.at, hughd@google.com, keescook@google.com, Andrea Arcangeli, Waiman Long, Pavel Machek, "David H. Gutteridge", jroedel@suse.de, joro@8bytes.org
Subject: [PATCH 08/39] x86/entry/32: Leave the kernel via trampoline stack
Date: Wed, 18 Jul 2018 11:40:45 +0200
Message-Id: <1531906876-13451-9-git-send-email-joro@8bytes.org>
In-Reply-To: <1531906876-13451-1-git-send-email-joro@8bytes.org>
References: <1531906876-13451-1-git-send-email-joro@8bytes.org>

From: Joerg Roedel

Switch back to the trampoline stack before returning to userspace.

Signed-off-by: Joerg Roedel
---
 arch/x86/entry/entry_32.S | 79 +++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 77 insertions(+), 2 deletions(-)

diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index fea49ec..a905e62 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -343,6 +343,60 @@
 .endm
 
 /*
+ * Switch back from the kernel stack to the entry stack.
+ *
+ * The %esp register must point to pt_regs on the task stack. It will
+ * first calculate the size of the stack-frame to copy, depending on
+ * whether we return to VM86 mode or not. With that it uses 'rep movsl'
+ * to copy the contents of the stack over to the entry stack.
+ *
+ * We must be very careful here, as we can't trust the contents of the
+ * task-stack once we switched to the entry-stack. When an NMI happens
+ * while on the entry-stack, the NMI handler will switch back to the top
+ * of the task stack, overwriting the stack-frame we are about to copy.
+ * Therefore we switch the stack only after everything is copied over.
+ */
+.macro SWITCH_TO_ENTRY_STACK
+
+	ALTERNATIVE "", "jmp .Lend_\@", X86_FEATURE_XENPV
+
+	/* Bytes to copy */
+	movl	$PTREGS_SIZE, %ecx
+
+#ifdef CONFIG_VM86
+	testl	$(X86_EFLAGS_VM), PT_EFLAGS(%esp)
+	jz	.Lcopy_pt_regs_\@
+
+	/* Additional 4 registers to copy when returning to VM86 mode */
+	addl	$(4 * 4), %ecx
+
+.Lcopy_pt_regs_\@:
+#endif
+
+	/* Initialize source and destination for movsl */
+	movl	PER_CPU_VAR(cpu_tss_rw + TSS_sp0), %edi
+	subl	%ecx, %edi
+	movl	%esp, %esi
+
+	/* Save future stack pointer in %ebx */
+	movl	%edi, %ebx
+
+	/* Copy over the stack-frame */
+	shrl	$2, %ecx
+	cld
+	rep movsl
+
+	/*
+	 * Switch to entry-stack - needs to happen after everything is
+	 * copied because the NMI handler will overwrite the task-stack
+	 * when on entry-stack
+	 */
+	movl	%ebx, %esp
+
+.Lend_\@:
+.endm
+
+/*
  * %eax: prev task
  * %edx: next task
  */
@@ -581,25 +635,45 @@ ENTRY(entry_SYSENTER_32)
 
 	/* Opportunistic SYSEXIT */
 	TRACE_IRQS_ON			/* User mode traces as IRQs on. */
+
+	/*
+	 * Setup entry stack - we keep the pointer in %eax and do the
+	 * switch after almost all user-state is restored.
+	 */
+
+	/* Load entry stack pointer and allocate frame for eflags/eax */
+	movl	PER_CPU_VAR(cpu_tss_rw + TSS_sp0), %eax
+	subl	$(2*4), %eax
+
+	/* Copy eflags and eax to entry stack */
+	movl	PT_EFLAGS(%esp), %edi
+	movl	PT_EAX(%esp), %esi
+	movl	%edi, (%eax)
+	movl	%esi, 4(%eax)
+
+	/* Restore user registers and segments */
 	movl	PT_EIP(%esp), %edx	/* pt_regs->ip */
 	movl	PT_OLDESP(%esp), %ecx	/* pt_regs->sp */
 1:	mov	PT_FS(%esp), %fs
 	PTGS_TO_GS
+
 	popl	%ebx			/* pt_regs->bx */
 	addl	$2*4, %esp		/* skip pt_regs->cx and pt_regs->dx */
 	popl	%esi			/* pt_regs->si */
 	popl	%edi			/* pt_regs->di */
 	popl	%ebp			/* pt_regs->bp */
-	popl	%eax			/* pt_regs->ax */
+
+	/* Switch to entry stack */
+	movl	%eax, %esp
 
 	/*
 	 * Restore all flags except IF. (We restore IF separately because
 	 * STI gives a one-instruction window in which we won't be interrupted,
 	 * whereas POPF does not.)
 	 */
-	addl	$PT_EFLAGS-PT_DS, %esp	/* point esp at pt_regs->flags */
 	btrl	$X86_EFLAGS_IF_BIT, (%esp)
 	popfl
+	popl	%eax
 
 	/*
 	 * Return back to the vDSO, which will pop ecx and edx.
@@ -668,6 +742,7 @@ ENTRY(entry_INT80_32)
 
 restore_all:
 	TRACE_IRQS_IRET
+	SWITCH_TO_ENTRY_STACK
.Lrestore_all_notrace:
 	CHECK_AND_APPLY_ESPFIX
.Lrestore_nocheck:
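For illustration, the copy-then-switch logic of SWITCH_TO_ENTRY_STACK can be modeled in C. This is a hypothetical sketch, not kernel code: the PTREGS_SIZE and VM86_EXTRA values below are illustrative stand-ins for the real pt_regs layout, memcpy plays the role of 'rep movsl', and the returned pointer corresponds to the final 'movl %ebx, %esp'.

```c
#include <stdint.h>
#include <string.h>

/* Illustrative stand-ins, NOT the kernel's actual constants. */
#define PTREGS_SIZE  (17 * 4)  /* assumed pt_regs size in bytes */
#define VM86_EXTRA   (4 * 4)   /* 4 extra registers when returning to VM86 mode */

/* task_sp:   %esp pointing at pt_regs on the task stack
 * entry_sp0: top of the entry stack (cpu_tss_rw.sp0)
 * vm86:      nonzero when returning to VM86 mode
 * Returns the new stack pointer on the entry stack. */
static uint32_t *switch_to_entry_stack(const uint32_t *task_sp,
                                       uint8_t *entry_sp0, int vm86)
{
    /* Frame size depends on whether we return to VM86 mode. */
    size_t bytes = PTREGS_SIZE + (vm86 ? VM86_EXTRA : 0);

    /* Destination sits just below the top of the entry stack,
     * mirroring 'movl ...TSS_sp0, %edi; subl %ecx, %edi'. */
    uint32_t *dst = (uint32_t *)(entry_sp0 - bytes);

    /* Copy first, switch last: an NMI that lands while already on the
     * entry stack would clobber the task-stack frame, so the stack
     * pointer must move only after the copy completes. */
    memcpy(dst, task_sp, bytes);
    return dst;
}
```

The ordering is the whole point of the macro: the stack pointer is the last thing to change, so an NMI at any instant still sees a consistent frame on whichever stack %esp currently points to.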