From patchwork Mon Oct 3 06:41:15 2016
X-Patchwork-Submitter: "Reshetova, Elena"
X-Patchwork-Id: 9360057
From: Elena Reshetova
To: kernel-hardening@lists.openwall.com
Cc: keescook@chromium.org, Hans Liljestrand, David Windsor,
    Elena Reshetova
Date: Mon, 3 Oct 2016 09:41:15 +0300
Message-Id: <1475476886-26232-3-git-send-email-elena.reshetova@intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1475476886-26232-1-git-send-email-elena.reshetova@intel.com>
References: <1475476886-26232-1-git-send-email-elena.reshetova@intel.com>
Subject: [kernel-hardening] [RFC PATCH 02/13] percpu-refcount: leave atomic counter unprotected

From: Hans Liljestrand

This is a temporary solution, and a deviation from the PaX/Grsecurity
implementation, where the counter in question is protected against
overflows. Protecting it would necessitate decreasing PERCPU_COUNT_BIAS,
which is used in lib/percpu-refcount.c. Such a change effectively cuts
the safe counter range in half, and still allows the counter to
prematurely reach zero without warning (which is what the bias aims to
prevent).

Signed-off-by: Hans Liljestrand
Signed-off-by: David Windsor
Signed-off-by: Elena Reshetova
---
 include/linux/percpu-refcount.h | 18 ++++++++++++++----
 lib/percpu-refcount.c           | 12 ++++++------
 2 files changed, 20 insertions(+), 10 deletions(-)

diff --git a/include/linux/percpu-refcount.h b/include/linux/percpu-refcount.h
index 1c7eec0..7c6a482 100644
--- a/include/linux/percpu-refcount.h
+++ b/include/linux/percpu-refcount.h
@@ -81,7 +81,17 @@ enum {
 };
 
 struct percpu_ref {
-	atomic_long_t		count;
+	/*
+	 * This is a temporary solution.
+	 *
+	 * The count should technically not be allowed to wrap, but due to the
+	 * way the counter is used (in lib/percpu-refcount.c) together with the
+	 * PERCPU_COUNT_BIAS it needs to wrap. This leaves the counter open
+	 * to over/underflows. A non-wrapping atomic, together with a bias
+	 * decrease would reduce the safe range in half, and also offer only
+	 * partial protection.
+	 */
+	atomic_long_wrap_t	count;
 	/*
 	 * The low bit of the pointer indicates whether the ref is in percpu
 	 * mode; if set, then get/put will manipulate the atomic_t.
@@ -174,7 +184,7 @@ static inline void percpu_ref_get_many(struct percpu_ref *ref, unsigned long nr)
 	if (__ref_is_percpu(ref, &percpu_count))
 		this_cpu_add(*percpu_count, nr);
 	else
-		atomic_long_add(nr, &ref->count);
+		atomic_long_add_wrap(nr, &ref->count);
 
 	rcu_read_unlock_sched();
 }
@@ -272,7 +282,7 @@ static inline void percpu_ref_put_many(struct percpu_ref *ref, unsigned long nr)
 
 	if (__ref_is_percpu(ref, &percpu_count))
 		this_cpu_sub(*percpu_count, nr);
-	else if (unlikely(atomic_long_sub_and_test(nr, &ref->count)))
+	else if (unlikely(atomic_long_sub_and_test_wrap(nr, &ref->count)))
 		ref->release(ref);
 
 	rcu_read_unlock_sched();
@@ -320,7 +330,7 @@ static inline bool percpu_ref_is_zero(struct percpu_ref *ref)
 
 	if (__ref_is_percpu(ref, &percpu_count))
 		return false;
-	return !atomic_long_read(&ref->count);
+	return !atomic_long_read_wrap(&ref->count);
 }
 
 #endif
diff --git a/lib/percpu-refcount.c b/lib/percpu-refcount.c
index 9ac959e..2849e06 100644
--- a/lib/percpu-refcount.c
+++ b/lib/percpu-refcount.c
@@ -80,7 +80,7 @@ int percpu_ref_init(struct percpu_ref *ref, percpu_ref_func_t *release,
 	else
 		start_count++;
 
-	atomic_long_set(&ref->count, start_count);
+	atomic_long_set_wrap(&ref->count, start_count);
 
 	ref->release = release;
 	ref->confirm_switch = NULL;
@@ -134,7 +134,7 @@ static void percpu_ref_switch_to_atomic_rcu(struct rcu_head *rcu)
 		count += *per_cpu_ptr(percpu_count, cpu);
 
 	pr_debug("global %ld percpu %ld",
-		 atomic_long_read(&ref->count), (long)count);
+		 atomic_long_read_wrap(&ref->count), (long)count);
 
 	/*
 	 * It's crucial that we sum the percpu counters _before_ adding the sum
@@ -148,11 +148,11 @@ static void percpu_ref_switch_to_atomic_rcu(struct rcu_head *rcu)
 	 * reaching 0 before we add the percpu counts. But doing it at the same
 	 * time is equivalent and saves us atomic operations:
 	 */
-	atomic_long_add((long)count - PERCPU_COUNT_BIAS, &ref->count);
+	atomic_long_add_wrap((long)count - PERCPU_COUNT_BIAS, &ref->count);
 
-	WARN_ONCE(atomic_long_read(&ref->count) <= 0,
+	WARN_ONCE(atomic_long_read_wrap(&ref->count) <= 0,
 		  "percpu ref (%pf) <= 0 (%ld) after switching to atomic",
-		  ref->release, atomic_long_read(&ref->count));
+		  ref->release, atomic_long_read_wrap(&ref->count));
 
 	/* @ref is viewed as dead on all CPUs, send out switch confirmation */
 	percpu_ref_call_confirm_rcu(rcu);
@@ -194,7 +194,7 @@ static void __percpu_ref_switch_to_percpu(struct percpu_ref *ref)
 	if (!(ref->percpu_count_ptr & __PERCPU_REF_ATOMIC))
 		return;
 
-	atomic_long_add(PERCPU_COUNT_BIAS, &ref->count);
+	atomic_long_add_wrap(PERCPU_COUNT_BIAS, &ref->count);
 
 	/*
 	 * Restore per-cpu operation. smp_store_release() is paired with
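
For context on the bias arithmetic the commit message refers to:
lib/percpu-refcount.c pre-loads the atomic counter with PERCPU_COUNT_BIAS
(the top bit of a long) while the ref operates in percpu mode, so that puts
landing on the atomic path cannot drive the count to zero before the percpu
counters have been summed back in. Read as a signed long, that value already
sits past the positive range (LONG_MIN on two's-complement machines), so the
counter must be allowed to wrap; a protected, non-wrapping atomic could only
carry a bias of roughly half that size, which is the halved safe range the
commit message warns about. The user-space sketch below is purely
illustrative and not part of the patch; BITS_PER_LONG is redefined locally
rather than taken from kernel headers, and the halved bias value is an
inference from the commit message, not from the patch itself.

/*
 * Illustrative user-space sketch only -- not kernel code. The bias
 * definition mirrors lib/percpu-refcount.c; everything else here is
 * just to make the numbers concrete.
 */
#include <stdio.h>
#include <limits.h>

#define BITS_PER_LONG		(sizeof(long) * CHAR_BIT)
#define PERCPU_COUNT_BIAS	(1LU << (BITS_PER_LONG - 1))

int main(void)
{
	/*
	 * While the ref runs in percpu mode, the atomic counter holds this
	 * bias. As an unsigned quantity it leaves 2^(BITS_PER_LONG-1) of
	 * headroom before reaching zero, but reinterpreted as a signed long
	 * it is already LONG_MIN, i.e. it relies on being allowed to wrap.
	 */
	unsigned long bias = PERCPU_COUNT_BIAS;

	/*
	 * A non-wrapping (overflow-protected) counter would have to stay
	 * within the positive signed range, capping the bias at about
	 * 2^(BITS_PER_LONG-2) and halving that headroom.
	 */
	unsigned long halved_bias = 1LU << (BITS_PER_LONG - 2);

	printf("bias as unsigned: %lu\n", bias);
	printf("bias as signed:   %ld\n", (long)bias);
	printf("halved bias:      %lu\n", halved_bias);
	return 0;
}

On a 64-bit system this prints a bias of 9223372036854775808 (shown as
-9223372036854775808 when read as signed) against a halved bias of
4611686018427387904, which is the reduced headroom traded away by switching
to a protected counter.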