From patchwork Fri Jul 20 16:22:24 2018
X-Patchwork-Submitter: Joerg Roedel
X-Patchwork-Id: 10537949
From: Joerg Roedel
To: Thomas Gleixner, Ingo Molnar, "H. Peter Anvin"
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Linus Torvalds, Andy Lutomirski, Dave Hansen, Josh Poimboeuf,
    Juergen Gross, Peter Zijlstra, Borislav Petkov, Jiri Kosina,
    Boris Ostrovsky, Brian Gerst, David Laight, Denys Vlasenko,
    Eduardo Valentin, Greg KH, Will Deacon, aliguori@amazon.com,
    daniel.gruss@iaik.tugraz.at, hughd@google.com, keescook@google.com,
    Andrea Arcangeli, Waiman Long, Pavel Machek,
Gutteridge" , jroedel@suse.de, Arnaldo Carvalho de Melo , Alexander Shishkin , Jiri Olsa , Namhyung Kim , joro@8bytes.org Subject: [PATCH 3/3] x86/entry/32: Copy only ptregs on paranoid entry/exit path Date: Fri, 20 Jul 2018 18:22:24 +0200 Message-Id: <1532103744-31902-4-git-send-email-joro@8bytes.org> X-Mailer: git-send-email 2.7.4 In-Reply-To: <1532103744-31902-1-git-send-email-joro@8bytes.org> References: <1532103744-31902-1-git-send-email-joro@8bytes.org> X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: X-Virus-Scanned: ClamAV using ClamSMTP From: Joerg Roedel The code that switches from entry- to task-stack when we enter from kernel-mode copies the full entry-stack contents to the task-stack. That is because we don't trust that the entry-stack contents. But actually we can trust its contents if we are not scheduled between entry and exit. So do less copying and move only the ptregs over to the task-stack in this code-path. Suggested-by: Andy Lutomirski Signed-off-by: Joerg Roedel --- arch/x86/entry/entry_32.S | 70 +++++++++++++++++++++++++---------------------- 1 file changed, 38 insertions(+), 32 deletions(-) diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S index 2767c62..90166b2 100644 --- a/arch/x86/entry/entry_32.S +++ b/arch/x86/entry/entry_32.S @@ -469,33 +469,48 @@ * segment registers on the way back to user-space or when the * sysenter handler runs with eflags.tf set. * - * When we switch to the task-stack here, we can't trust the - * contents of the entry-stack anymore, as the exception handler - * might be scheduled out or moved to another CPU. Therefore we - * copy the complete entry-stack to the task-stack and set a - * marker in the iret-frame (bit 31 of the CS dword) to detect - * what we've done on the iret path. + * When we switch to the task-stack here, we extend the + * stack-frame we copy to include the entry-stack %esp and a + * pseudo %ss value so that we have a full ptregs struct on the + * stack. We set a marker in the frame (bit 31 of the CS dword). * - * On the iret path we copy everything back and switch to the - * entry-stack, so that the interrupted kernel code-path - * continues on the same stack it was interrupted with. + * On the iret path we read %esp from the PT_OLDESP slot on the + * stack and copy ptregs (except oldesp and oldss) to it, when + * we find the marker set. Then we switch to the %esp we read, + * so that the interrupted kernel code-path continues on the + * same stack it was interrupted with. * * Be aware that an NMI can happen anytime in this code. * + * Register values here are: + * * %esi: Entry-Stack pointer (same as %esp) * %edi: Top of the task stack * %eax: CR3 on kernel entry */ - /* Calculate number of bytes on the entry stack in %ecx */ - movl %esi, %ecx + /* Allocate full pt_regs on task-stack */ + subl $PTREGS_SIZE, %edi + + /* Switch to task-stack */ + movl %edi, %esp - /* %ecx to the top of entry-stack */ - andl $(MASK_entry_stack), %ecx - addl $(SIZEOF_entry_stack), %ecx + /* Populate pt_regs on task-stack */ + movl $__KERNEL_DS, PT_OLDSS(%esp) /* Check: Is this needed? */ - /* Number of bytes on the entry stack to %ecx */ - sub %esi, %ecx + /* + * Save entry-stack pointer on task-stack so that we can switch back to + * it on the the iret path. 
+	 */
+	movl	%esi, PT_OLDESP(%esp)
+
+	/* sizeof(pt_regs) minus space for %esp and %ss to %ecx */
+	movl	$(PTREGS_SIZE - 8), %ecx
+
+	/* Copy rest */
+	shrl	$2, %ecx
+	cld
+	rep movsl
 
 	/* Mark stackframe as coming from entry stack */
 	orl	$CS_FROM_ENTRY_STACK, PT_CS(%esp)
@@ -505,16 +520,9 @@
 	 * so that we can switch back to it before iret.
 	 */
 	testl	$PTI_SWITCH_MASK, %eax
-	jz	.Lcopy_pt_regs_\@
+	jz	.Lend_\@
 	orl	$CS_FROM_USER_CR3, PT_CS(%esp)
 
-	/*
-	 * %esi and %edi are unchanged, %ecx contains the number of
-	 * bytes to copy. The code at .Lcopy_pt_regs_\@ will allocate
-	 * the stack-frame on task-stack and copy everything over
-	 */
-	jmp .Lcopy_pt_regs_\@
-
 .Lend_\@:
 .endm
 
@@ -594,16 +602,14 @@
 	/* Clear marker from stack-frame */
 	andl	$(~CS_FROM_ENTRY_STACK), PT_CS(%esp)
 
-	/* Copy the remaining task-stack contents to entry-stack */
+	/*
+	 * Copy the remaining 'struct ptregs' to entry-stack. Leave out
+	 * OLDESP and OLDSS as we didn't copy that over on entry.
+	 */
 	movl	%esp, %esi
-	movl	PER_CPU_VAR(cpu_tss_rw + TSS_sp0), %edi
+	movl	PT_OLDESP(%esp), %edi
 
-	/* Bytes on the task-stack to ecx */
-	movl	PER_CPU_VAR(cpu_tss_rw + TSS_sp1), %ecx
-	subl	%esi, %ecx
-
-	/* Allocate stack-frame on entry-stack */
-	subl	%ecx, %edi
+	movl	$(PTREGS_SIZE - 8), %ecx
 
 	/*
 	 * Save future stack-pointer, we must not switch until the
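For readers less familiar with the entry code, the arithmetic behind the
$(PTREGS_SIZE - 8) constant and the shrl $2 / rep movsl pair can be
modelled in plain C. The sketch below is not kernel code: struct
fake_pt_regs is a made-up type that only approximates the x86-32
struct pt_regs layout, and memcpy() stands in for the dword copy. It
just shows that "size of pt_regs minus 8" means "everything except the
two trailing sp/ss slots", which the patch fills separately via
PT_OLDESP/PT_OLDSS instead of copying.

/*
 * Userspace sketch (assumptions: fake_pt_regs merely approximates the
 * x86-32 pt_regs layout; names are illustrative, not kernel APIs).
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct fake_pt_regs {                  /* rough stand-in for x86-32 pt_regs */
	uint32_t bx, cx, dx, si, di, bp, ax;
	uint32_t ds, es, fs, gs;
	uint32_t orig_ax, ip, cs, flags;
	uint32_t sp, ss;               /* the two slots the copy leaves out */
};

int main(void)
{
	struct fake_pt_regs entry_frame = { .ax = 0xdead, .sp = 0x1000 };
	struct fake_pt_regs task_frame  = { 0 };

	/* PTREGS_SIZE - 8: whole struct minus the trailing sp and ss dwords */
	size_t bytes  = sizeof(struct fake_pt_regs) - 8;
	size_t dwords = bytes >> 2;    /* the shrl $2 before rep movsl */

	/* rep movsl equivalent: forward copy of 'dwords' 32-bit words */
	memcpy(&task_frame, &entry_frame, dwords * 4);

	/* sp/ss are untouched; the kernel stores the old %esp in PT_OLDESP */
	printf("copied %zu dwords, ax=%#x, sp left at %#x\n",
	       dwords, task_frame.ax, task_frame.sp);
	return 0;
}

Built with a plain cc invocation, this prints 15 copied dwords for the
approximated layout and shows the destination sp slot still zero, i.e.
the register area and iret frame move over while the old stack pointer
is recorded separately, which is what the entry path above relies on.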