From patchwork Wed Jun 11 20:23:02 2014
X-Patchwork-Submitter: Andy Lutomirski
X-Patchwork-Id: 4338371
From: Andy Lutomirski <luto@amacapital.net>
To: linux-kernel@vger.kernel.org, Kees Cook, Will Drewry
Cc: linux-arch@vger.kernel.org, linux-mips@linux-mips.org, x86@kernel.org,
 Oleg Nesterov, Andy Lutomirski, linux-security-module@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org
Subject: [RFC 5/5] x86,seccomp: Add a seccomp fastpath
Date: Wed, 11 Jun 2014 13:23:02 -0700
Message-Id: <9e11cd988a0f120606e37b5e275019754e2774da.1402517933.git.luto@amacapital.net>

On my VM, getpid takes about 70ns.  Before this patch, adding a
single-instruction always-accept seccomp filter added about 134ns of
overhead to each getpid call.  With this patch, that overhead is down
to about 13ns.

I'm not really thrilled by this patch.  It has two main issues:

1. Calling into code in kernel/seccomp.c from assembly feels ugly.

2. The x86 64-bit syscall entry now has four separate code paths:
   fast, seccomp only, audit only, and slow.  This kind of sucks.

Would it be worth trying to rewrite the whole thing in C, using a
two-phase slow path approach like the one I'm using here for seccomp?

Signed-off-by: Andy Lutomirski <luto@amacapital.net>
---
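The numbers above come from timing getpid in a tight loop.  The
harness used for those measurements is not included in this series;
a minimal userspace sketch along the following lines is one way to
take such a measurement.  Note the raw syscall(SYS_getpid), since
glibc has historically cached getpid() and would otherwise never
enter the kernel:

/*
 * Illustrative getpid microbenchmark; not part of this patch series.
 * Build with: gcc -O2 bench.c (add -lrt on older glibc).
 */
#include <stdio.h>
#include <sys/syscall.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	const long iters = 10000000;
	struct timespec start, end;
	long i;

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (i = 0; i < iters; i++)
		syscall(SYS_getpid);	/* bypass glibc's getpid cache */
	clock_gettime(CLOCK_MONOTONIC, &end);

	printf("%.1f ns per getpid\n",
	       ((end.tv_sec - start.tv_sec) * 1e9 +
		(end.tv_nsec - start.tv_nsec)) / iters);
	return 0;
}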
 arch/x86/kernel/entry_64.S | 45 +++++++++++++++++++++++++++++++++++++++++++++
 include/linux/seccomp.h    |  4 ++--
 2 files changed, 47 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
index f9e713a..feb32b2 100644
--- a/arch/x86/kernel/entry_64.S
+++ b/arch/x86/kernel/entry_64.S
@@ -683,6 +683,45 @@ sysret_signal:
 	FIXUP_TOP_OF_STACK %r11, -ARGOFFSET
 	jmp int_check_syscall_exit_work
 
+#ifdef CONFIG_SECCOMP
+	/*
+	 * Fast path for seccomp without any other slow path triggers.
+	 */
+seccomp_fastpath:
+	/* Build seccomp_data */
+	pushq %r9			/* args[5] */
+	pushq %r8			/* args[4] */
+	pushq %r10			/* args[3] */
+	pushq %rdx			/* args[2] */
+	pushq %rsi			/* args[1] */
+	pushq %rdi			/* args[0] */
+	pushq RIP-ARGOFFSET+6*8(%rsp)	/* rip */
+	pushq %rax			/* nr and junk */
+	movl $AUDIT_ARCH_X86_64, 4(%rsp)	/* arch */
+	movq %rsp, %rdi
+	call seccomp_phase1
+	addq $8*8, %rsp
+	cmpq $1, %rax
+	ja seccomp_invoke_phase2
+	LOAD_ARGS 0			/* restore clobbered regs */
+	jb system_call_fastpath
+	jmp ret_from_sys_call
+
+seccomp_invoke_phase2:
+	SAVE_REST
+	FIXUP_TOP_OF_STACK %rdi
+	movq %rax,%rdi
+	call seccomp_phase2
+
+	/* if seccomp says to skip, then set orig_ax to -1 and skip */
+	test %eax,%eax
+	jz 1f
+	movq $-1, ORIG_RAX(%rsp)
+1:
+	mov ORIG_RAX(%rsp), %rax	/* reload rax */
+	jmp system_call_post_trace	/* and maybe do the syscall */
+#endif
+
 #ifdef CONFIG_AUDITSYSCALL
 	/*
 	 * Fast path for syscall audit without full syscall trace.
@@ -717,6 +756,10 @@ sysret_audit:
 
 	/* Do syscall tracing */
 tracesys:
+#ifdef CONFIG_SECCOMP
+	testl $(_TIF_WORK_SYSCALL_ENTRY & ~_TIF_SECCOMP),TI_flags+THREAD_INFO(%rsp,RIP-ARGOFFSET)
+	jz seccomp_fastpath
+#endif
 #ifdef CONFIG_AUDITSYSCALL
 	testl $(_TIF_WORK_SYSCALL_ENTRY & ~_TIF_SYSCALL_AUDIT),TI_flags+THREAD_INFO(%rsp,RIP-ARGOFFSET)
 	jz auditsys
@@ -725,6 +768,8 @@ tracesys:
 	FIXUP_TOP_OF_STACK %rdi
 	movq %rsp,%rdi
 	call syscall_trace_enter
+
+system_call_post_trace:
 	/*
 	 * Reload arg registers from stack in case ptrace changed them.
 	 * We don't reload %rax because syscall_trace_enter() returned
diff --git a/include/linux/seccomp.h b/include/linux/seccomp.h
index 4fc7a84..d3d4c52 100644
--- a/include/linux/seccomp.h
+++ b/include/linux/seccomp.h
@@ -37,8 +37,8 @@ static inline int secure_computing(void)
 #define SECCOMP_PHASE1_OK	0
 #define SECCOMP_PHASE1_SKIP	1
 
-extern u32 seccomp_phase1(struct seccomp_data *sd);
-int seccomp_phase2(u32 phase1_result);
+asmlinkage __visible extern u32 seccomp_phase1(struct seccomp_data *sd);
+asmlinkage __visible int seccomp_phase2(u32 phase1_result);
 #else
 extern void secure_computing_strict(int this_syscall);
 #endif
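A note on the fast path above, for readers not fluent in entry_64.S:
the pushq sequence builds a struct seccomp_data in place on the stack,
from args[5] down to nr.  The 6*8 in "pushq RIP-ARGOFFSET+6*8(%rsp)"
compensates for the six argument pushes already done, and the movl
into 4(%rsp) overwrites the junk upper half of the pushed %rax with
the arch token (struct seccomp_data begins with a 32-bit nr followed
by a 32-bit arch, so one pushq fills both slots).  The cmpq $1 / jb /
ja pair then does a three-way dispatch on seccomp_phase1()'s return
value.  In C, the logic is roughly the following sketch (illustrative
only; do_syscall() is a placeholder name, not a function in this
series):

/*
 * Rough C rendering of the seccomp_fastpath assembly above;
 * illustrative only, not part of this series.
 */
#include <linux/audit.h>
#include <linux/seccomp.h>
#include <asm/ptrace.h>

extern void do_syscall(struct pt_regs *regs);	/* placeholder */

static void seccomp_fastpath_in_c(struct pt_regs *regs)
{
	struct seccomp_data sd = {
		.nr   = regs->orig_ax,		/* pushq %rax ("nr and junk") */
		.arch = AUDIT_ARCH_X86_64,	/* movl ..., 4(%rsp) fixes the junk */
		.instruction_pointer = regs->ip, /* pushq RIP-ARGOFFSET+6*8(%rsp) */
		.args = { regs->di, regs->si, regs->dx,
			  regs->r10, regs->r8, regs->r9 },
	};
	u32 ret = seccomp_phase1(&sd);

	if (ret == SECCOMP_PHASE1_OK) {
		/* jb system_call_fastpath: run the syscall as usual */
		do_syscall(regs);
	} else if (ret == SECCOMP_PHASE1_SKIP) {
		/* fall through to ret_from_sys_call: syscall skipped */
	} else {
		/* ja seccomp_invoke_phase2: heavyweight verdicts */
		if (seccomp_phase2(ret) != 0)
			regs->orig_ax = -1;	/* make the syscall a no-op */
		/* then rejoin the trace path at system_call_post_trace */
	}
}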