From patchwork Mon Feb 1 12:50:30 2021
X-Patchwork-Submitter: wanghongzhe
X-Patchwork-Id: 12058889
From: wanghongzhe
Subject: [PATCH] seccomp: Improve performance by optimizing memory barrier
Date: Mon, 1 Feb 2021 20:50:30 +0800
Message-ID: <1612183830-15506-1-git-send-email-wanghongzhe@huawei.com>
X-Mailer: git-send-email 1.7.12.4
X-Mailing-List: bpf@vger.kernel.org

When a thread (A) calls seccomp() with the TSYNC flag, it synchronizes
its seccomp filter to the other threads (B) of the same thread group.
To avoid a race, seccomp places an rmb() between the reads of mode and
filter on thread B's syscall check path. As a result, every syscall's
seccomp check pays the cost of that memory barrier.

However, we can optimize this by issuing the rmb() only when the filter
is NULL and re-reading the filter after the barrier, so the rmb() runs
at most once in a thread's lifetime. A NULL filter can only mean that a
filter is being attached for the first time, by another thread (A)
using the TSYNC flag, and that thread B observed the filter before the
mode because of CPU out-of-order execution. After that first syscall,
thread B's mode is always set, and there is no further race with the
filter/bitmap.

In addition, a write memory barrier (smp_mb__before_atomic()) must be
placed between writing the filter and writing the mode, to close the
race on the TSYNC side.

Signed-off-by: wanghongzhe
---
 kernel/seccomp.c | 31 ++++++++++++++++++++++---------
 1 file changed, 22 insertions(+), 9 deletions(-)
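
A note for reviewers, not intended for the commit log: the ordering
protocol this patch relies on can be sketched in portable userspace
C11, with atomic_thread_fence() standing in for the kernel's
smp_mb__before_atomic() and rmb(). All names below (fake_filter,
publish_filter, run_filters) are made up for illustration; this is a
sketch of the idea, not kernel code. Builds with gcc -std=c11 -pthread
(glibc 2.28+ for <threads.h>):

#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

struct fake_filter { int allow; };

static struct fake_filter *_Atomic filter;	/* "current->seccomp.filter" */
static atomic_int mode;				/* "thread->seccomp.mode" */

/* Writer side (thread A, TSYNC): filter must be visible before mode. */
static void publish_filter(struct fake_filter *f)
{
	atomic_store_explicit(&filter, f, memory_order_relaxed);
	/* kernel analogue: smp_mb__before_atomic() before assigning mode */
	atomic_thread_fence(memory_order_release);
	atomic_store_explicit(&mode, 1, memory_order_relaxed);
}

/* Reader side (thread B): pay for a barrier only on the first NULL read. */
static int run_filters(void)
{
	struct fake_filter *f =
		atomic_load_explicit(&filter, memory_order_relaxed);

	if (f == NULL) {
		/* kernel analogue: the one-time rmb() on the slow path */
		atomic_thread_fence(memory_order_acquire);
		f = atomic_load_explicit(&filter, memory_order_relaxed);
		if (f == NULL)
			return -1;	/* "SECCOMP_RET_KILL_PROCESS" */
	}
	return f->allow;
}

static int reader(void *arg)
{
	(void)arg;
	/* The check path only runs once "mode" says filtering is enabled. */
	while (atomic_load_explicit(&mode, memory_order_relaxed) == 0)
		;
	printf("filter said: %d\n", run_filters());
	return 0;
}

int main(void)
{
	static struct fake_filter f = { .allow = 1 };
	thrd_t t;

	thrd_create(&t, reader, NULL);
	publish_filter(&f);
	thrd_join(t, NULL);
	return 0;
}

The C11 fence-to-fence rule gives exactly the guarantee argued above:
once the reader has observed mode != 0, an acquire fence followed by a
re-read of filter must see the pointer published before the writer's
release fence, so the barrier is only ever needed on the one slow path
where the first read returned NULL.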
diff --git a/kernel/seccomp.c b/kernel/seccomp.c
index 952dc1c90229..b944cb2b6b94 100644
--- a/kernel/seccomp.c
+++ b/kernel/seccomp.c
@@ -397,8 +397,20 @@ static u32 seccomp_run_filters(const struct seccomp_data *sd,
 			READ_ONCE(current->seccomp.filter);
 
 	/* Ensure unexpected behavior doesn't result in failing open. */
-	if (WARN_ON(f == NULL))
-		return SECCOMP_RET_KILL_PROCESS;
+	if (WARN_ON(f == NULL)) {
+		/*
+		 * Make sure the first filter addition (from another
+		 * thread using the TSYNC flag) is seen.
+		 */
+		rmb();
+
+		/* Read again */
+		f = READ_ONCE(current->seccomp.filter);
+
+		/* Ensure unexpected behavior doesn't result in failing open. */
+		if (WARN_ON(f == NULL))
+			return SECCOMP_RET_KILL_PROCESS;
+	}
 
 	if (seccomp_cache_check_allow(f, sd))
 		return SECCOMP_RET_ALLOW;
@@ -614,9 +626,16 @@ static inline void seccomp_sync_threads(unsigned long flags)
 		 * equivalent (see ptrace_may_access), it is safe to
 		 * allow one thread to transition the other.
 		 */
-		if (thread->seccomp.mode == SECCOMP_MODE_DISABLED)
+		if (thread->seccomp.mode == SECCOMP_MODE_DISABLED) {
+			/*
+			 * Make sure mode cannot be set before the filter
+			 * is set.
+			 */
+			smp_mb__before_atomic();
+
 			seccomp_assign_mode(thread, SECCOMP_MODE_FILTER,
 					    flags);
+		}
 	}
 }
@@ -1160,12 +1179,6 @@ static int __seccomp_filter(int this_syscall, const struct seccomp_data *sd,
 	int data;
 	struct seccomp_data sd_local;
 
-	/*
-	 * Make sure that any changes to mode from another thread have
-	 * been seen after SYSCALL_WORK_SECCOMP was seen.
-	 */
-	rmb();
-
 	if (!sd) {
 		populate_seccomp_data(&sd_local);
 		sd = &sd_local;
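
For completeness, here is a minimal userspace program that exercises
the TSYNC path this patch changes: one thread installs an
allow-everything filter with SECCOMP_FILTER_FLAG_TSYNC while a sibling
thread keeps making syscalls, so the sibling's next seccomp check takes
the new one-time slow path. This only uses the existing seccomp(2) and
prctl(2) UAPI; the program itself is illustrative, not part of the
patch, and error handling is mostly elided (build with -pthread):

#include <linux/filter.h>
#include <linux/seccomp.h>
#include <pthread.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <unistd.h>

static void *sibling(void *arg)
{
	(void)arg;
	/* Keep issuing syscalls; each one runs the seccomp check path. */
	for (int i = 0; i < 1000000; i++)
		getpid();
	return NULL;
}

int main(void)
{
	struct sock_filter insns[] = {
		/* Allow everything; a real filter would inspect seccomp_data. */
		BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
	};
	struct sock_fprog prog = {
		.len = sizeof(insns) / sizeof(insns[0]),
		.filter = insns,
	};
	pthread_t t;
	long ret;

	pthread_create(&t, NULL, sibling, NULL);

	/* Required to install a filter without CAP_SYS_ADMIN. */
	prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);

	/*
	 * Thread A: attach the filter and sync it to all sibling threads.
	 * Returns 0 on success; with TSYNC, the TID of a thread that
	 * could not be synchronized.
	 */
	ret = syscall(__NR_seccomp, SECCOMP_SET_MODE_FILTER,
		      SECCOMP_FILTER_FLAG_TSYNC, &prog);
	if (ret != 0)
		fprintf(stderr, "seccomp failed (ret=%ld)\n", ret);

	pthread_join(t, NULL);
	puts("filter synced to all threads");
	return 0;
}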