From patchwork Thu Oct 20 10:25:20 2016
X-Patchwork-Submitter: "Reshetova, Elena"
X-Patchwork-Id: 9386509
From: Elena Reshetova
To: kernel-hardening@lists.openwall.com
Cc: keescook@chromium.org, Hans Liljestrand, David Windsor, Elena Reshetova
Date: Thu, 20 Oct 2016 13:25:20 +0300
Message-Id: <1476959131-6153-3-git-send-email-elena.reshetova@intel.com>
In-Reply-To: <1476959131-6153-1-git-send-email-elena.reshetova@intel.com>
References: <1476959131-6153-1-git-send-email-elena.reshetova@intel.com>
Subject: [kernel-hardening] [RFC v2 PATCH 02/13] percpu-refcount: leave atomic counter unprotected

From: Hans Liljestrand

This is a temporary solution, and a deviation from the PaX/Grsecurity
implementation, where the counter in question is protected against
overflows. That, however, necessitates decreasing the PERCPU_COUNT_BIAS
used in lib/percpu-refcount.c. Such a change effectively cuts the safe
counter range down by half, and still allows the counter to prematurely
reach zero without warning (which is what the bias aims to prevent).

A slightly expanded summary follows. Please see the percpu-refcount
source for more details, particularly lib/percpu-refcount.c, where the
aforementioned bias is set and documented.

The atomic counter (defined in include/linux/percpu-refcount.h) needs to
be updated in a non-atomic way. This means that before all the percpu
refs are added, the value could be off in either direction, but by no
more than the actual "true" value of the counter. In order to prevent
the counter from prematurely reaching zero, a bias is used to offset the
range from [MIN,MAX] to [1,MAX]+[MIN,-1] (with "zero" in the middle, as
far from 0 as possible). The problem is then that if the atomic is
protected it cannot wrap, and since the biased zero is already offset
right next to the wrap barrier, the counter is practically guaranteed to
do just that. One solution would be to decrease this bias, effectively
cutting the safe range in half (now [1,MAX]).
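To make that trade-off concrete, here is a small standalone C sketch. It
is illustrative only and not part of the patch: FULL_BIAS mirrors the
form of PERCPU_COUNT_BIAS documented in lib/percpu-refcount.c, while
HALF_BIAS is merely an assumed stand-in for the reduced bias discussed
here.

/*
 * Illustrative userspace sketch, not kernel code. It prints where the
 * biased "true zero" lands relative to the 0 value that triggers
 * release, for the current full bias and for a halved bias.
 */
#include <limits.h>
#include <stdio.h>

#define BITS_PER_LONG	(sizeof(long) * CHAR_BIT)
#define FULL_BIAS	(1UL << (BITS_PER_LONG - 1))	/* same form as PERCPU_COUNT_BIAS */
#define HALF_BIAS	(1UL << (BITS_PER_LONG - 2))	/* assumed reduced bias, for comparison */

int main(void)
{
	/*
	 * With the full bias a true count of 0 is stored as FULL_BIAS,
	 * i.e. half way around the ring and as far from the release
	 * trigger (0) as possible.
	 */
	printf("full bias: true zero stored at %lu (trigger at 0, ring size %lu+1)\n",
	       FULL_BIAS, ULONG_MAX);

	/*
	 * A halved bias keeps the counter inside a non-wrapping range,
	 * but also halves the distance between true zero and the
	 * trigger, which is the "safe range cut in half" noted above.
	 */
	printf("half bias: true zero stored at %lu\n", HALF_BIAS);
	return 0;
}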
And while overflows at MAX would be caught, the counter could still
prematurely reach zero. (Although, since the counter can be off by at
most its true value, presumably an overflow would still trigger at some
point during the percpu ref additions, but not necessarily before zero
had been reached one or more times.)

The immediate solution would be to go with the bias decrease (and
document the repercussions), but we have already seen objections to that
for the reasons above. Other solutions would seem to require more
comprehensive changes to how percpu-ref works, which we felt were not
suitable for this patch series. We therefore decided to switch the
counter to an atomic_long_wrap_t and just document the issue for now.

Signed-off-by: Hans Liljestrand
Signed-off-by: David Windsor
Signed-off-by: Elena Reshetova
---
 include/linux/percpu-refcount.h | 18 ++++++++++++++----
 lib/percpu-refcount.c           | 12 ++++++------
 2 files changed, 20 insertions(+), 10 deletions(-)

diff --git a/include/linux/percpu-refcount.h b/include/linux/percpu-refcount.h
index 1c7eec0..7c6a482 100644
--- a/include/linux/percpu-refcount.h
+++ b/include/linux/percpu-refcount.h
@@ -81,7 +81,17 @@ enum {
 };
 
 struct percpu_ref {
-	atomic_long_t		count;
+	/*
+	 * This is a temporary solution.
+	 *
+	 * The count should technically not be allowed to wrap, but due to the
+	 * way the counter is used (in lib/percpu-refcount.c) together with the
+	 * PERCPU_COUNT_BIAS it needs to wrap. This leaves the counter open
+	 * to over/underflows. A non-wrapping atomic, together with a bias
+	 * decrease, would cut the safe range in half and also offer only
+	 * partial protection.
+	 */
+	atomic_long_wrap_t	count;
 	/*
 	 * The low bit of the pointer indicates whether the ref is in percpu
 	 * mode; if set, then get/put will manipulate the atomic_t.
@@ -174,7 +184,7 @@ static inline void percpu_ref_get_many(struct percpu_ref *ref, unsigned long nr)
 	if (__ref_is_percpu(ref, &percpu_count))
 		this_cpu_add(*percpu_count, nr);
 	else
-		atomic_long_add(nr, &ref->count);
+		atomic_long_add_wrap(nr, &ref->count);
 
 	rcu_read_unlock_sched();
 }
@@ -272,7 +282,7 @@ static inline void percpu_ref_put_many(struct percpu_ref *ref, unsigned long nr)
 
 	if (__ref_is_percpu(ref, &percpu_count))
 		this_cpu_sub(*percpu_count, nr);
-	else if (unlikely(atomic_long_sub_and_test(nr, &ref->count)))
+	else if (unlikely(atomic_long_sub_and_test_wrap(nr, &ref->count)))
 		ref->release(ref);
 
 	rcu_read_unlock_sched();
@@ -320,7 +330,7 @@ static inline bool percpu_ref_is_zero(struct percpu_ref *ref)
 
 	if (__ref_is_percpu(ref, &percpu_count))
 		return false;
-	return !atomic_long_read(&ref->count);
+	return !atomic_long_read_wrap(&ref->count);
 }
 
 #endif
diff --git a/lib/percpu-refcount.c b/lib/percpu-refcount.c
index 9ac959e..2849e06 100644
--- a/lib/percpu-refcount.c
+++ b/lib/percpu-refcount.c
@@ -80,7 +80,7 @@ int percpu_ref_init(struct percpu_ref *ref, percpu_ref_func_t *release,
 	else
 		start_count++;
 
-	atomic_long_set(&ref->count, start_count);
+	atomic_long_set_wrap(&ref->count, start_count);
 
 	ref->release = release;
 	ref->confirm_switch = NULL;
@@ -134,7 +134,7 @@ static void percpu_ref_switch_to_atomic_rcu(struct rcu_head *rcu)
 		count += *per_cpu_ptr(percpu_count, cpu);
 
 	pr_debug("global %ld percpu %ld",
-		 atomic_long_read(&ref->count), (long)count);
+		 atomic_long_read_wrap(&ref->count), (long)count);
 
 	/*
 	 * It's crucial that we sum the percpu counters _before_ adding the sum
@@ -148,11 +148,11 @@ static void percpu_ref_switch_to_atomic_rcu(struct rcu_head *rcu)
 	 * reaching 0 before we add the percpu counts. But doing it at the same
 	 * time is equivalent and saves us atomic operations:
 	 */
-	atomic_long_add((long)count - PERCPU_COUNT_BIAS, &ref->count);
+	atomic_long_add_wrap((long)count - PERCPU_COUNT_BIAS, &ref->count);
 
-	WARN_ONCE(atomic_long_read(&ref->count) <= 0,
+	WARN_ONCE(atomic_long_read_wrap(&ref->count) <= 0,
 		  "percpu ref (%pf) <= 0 (%ld) after switching to atomic",
-		  ref->release, atomic_long_read(&ref->count));
+		  ref->release, atomic_long_read_wrap(&ref->count));
 
 	/* @ref is viewed as dead on all CPUs, send out switch confirmation */
 	percpu_ref_call_confirm_rcu(rcu);
@@ -194,7 +194,7 @@ static void __percpu_ref_switch_to_percpu(struct percpu_ref *ref)
 	if (!(ref->percpu_count_ptr & __PERCPU_REF_ATOMIC))
 		return;
 
-	atomic_long_add(PERCPU_COUNT_BIAS, &ref->count);
+	atomic_long_add_wrap(PERCPU_COUNT_BIAS, &ref->count);
 
 	/*
 	 * Restore per-cpu operation. smp_store_release() is paired with
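As an aside for reviewers, the following standalone sketch (illustrative
only, not kernel code) models the bias-removal arithmetic performed by
percpu_ref_switch_to_atomic_rcu() in the hunk above, to make explicit
why the addition has to be allowed to wrap; the concrete values are made
up for the example.

/*
 * Standalone model of the switch-to-atomic arithmetic: the biased
 * atomic count, plus the summed percpu deltas, minus the bias, yields
 * the true refcount. Viewed as a signed long the counter wraps past
 * LONG_MIN on the way, which is why the patch uses
 * atomic_long_add_wrap() there.
 */
#include <limits.h>
#include <stdio.h>

#define BITS_PER_LONG	(sizeof(long) * CHAR_BIT)
#define COUNT_BIAS	(1UL << (BITS_PER_LONG - 1))	/* same form as PERCPU_COUNT_BIAS */

int main(void)
{
	unsigned long count = COUNT_BIAS + 1;	/* biased counter as set at init (true count 1) */
	unsigned long percpu_sum = 2;		/* pretend two more refs were taken in percpu mode */

	/* Fold the percpu sum back in and drop the bias, as the RCU callback does. */
	count += percpu_sum - COUNT_BIAS;

	/* Prints 3: the unbiased, "true" reference count. */
	printf("true count after switching to atomic mode: %lu\n", count);
	return 0;
}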