
selinux: avoid atomic_t usage in sidtab

Message ID 20190725135933.30046-1-omosnace@redhat.com (mailing list archive)
State Superseded
Series selinux: avoid atomic_t usage in sidtab

Commit Message

Ondrej Mosnacek July 25, 2019, 1:59 p.m. UTC
As noted in Documentation/atomic_t.txt, if we don't need the RMW atomic
operations, we should only use READ_ONCE()/WRITE_ONCE() +
smp_rmb()/smp_wmb() where necessary (or the combined variants
smp_load_acquire()/smp_store_release()).

This patch converts the sidtab code to use a regular u32 for the counter
and the reverse lookup cache, using the appropriate operations instead of
atomic_read()/atomic_set(). Note that when reading/updating the reverse
lookup cache we don't need memory barriers, since the cache doesn't need
to be consistent or accurate. We can now also replace some atomic ops with
regular loads (when under the spinlock) and stores (for conversion target
fields that are always accessed under the master table's spinlock).

We can now also bump SIDTAB_MAX to U32_MAX as we can use the full u32
range again.

Suggested-by: Jann Horn <jannh@google.com>
Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
---
 security/selinux/ss/sidtab.c | 48 ++++++++++++++++--------------------
 security/selinux/ss/sidtab.h |  8 +++---
 2 files changed, 25 insertions(+), 31 deletions(-)
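
As background for the comments below, here is a minimal sketch of the acquire/release publication pattern the commit message relies on; the structure and function names (example_table, example_publish, example_lookup) are illustrative only and not part of the patch:

#include <linux/types.h>
#include <linux/spinlock.h>
#include <asm/barrier.h>

struct example_table {
	u32 count;		/* number of published entries */
	void *entries[128];	/* entry storage, for illustration */
	spinlock_t lock;	/* serializes writers */
};

/* Writer side: called with t->lock held, as sidtab does for insertions. */
static void example_publish(struct example_table *t, void *entry)
{
	t->entries[t->count] = entry;
	/* write the entry before publishing the new count */
	smp_store_release(&t->count, t->count + 1);
}

/* Reader side: lockless, like sidtab_lookup(). */
static void *example_lookup(struct example_table *t, u32 index)
{
	/* read entries only after reading the count */
	u32 count = smp_load_acquire(&t->count);

	if (index >= count)
		return NULL;
	return t->entries[index];
}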

Comments

Jann Horn July 25, 2019, 2:59 p.m. UTC | #1
On Thu, Jul 25, 2019 at 3:59 PM Ondrej Mosnacek <omosnace@redhat.com> wrote:
> As noted in Documentation/atomic_t.txt, if we don't need the RMW atomic
> operations, we should only use READ_ONCE()/WRITE_ONCE() +
> smp_rmb()/smp_wmb() where necessary (or the combined variants
> smp_load_acquire()/smp_store_release()).
>
> This patch converts the sidtab code to use regular u32 for the counter
> and reverse lookup cache and use the appropriate operations instead of
> atomic_read()/atomic_set(). Note that when reading/updating the reverse
> lookup cache we don't need memory barriers as it doesn't need to be
> consistent or accurate. We can now also replace some atomic ops with
> regular loads (when under spinlock) and stores (for conversion target
> fields that are always accessed under the master table's spinlock).
>
> We can now also bump SIDTAB_MAX to U32_MAX as we can use the full u32
> range again.
>
> Suggested-by: Jann Horn <jannh@google.com>
> Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>

Looks good to me; you can add "Reviewed-by: Jann Horn
<jannh@google.com>" if you want.
NitinGote July 25, 2019, 6:58 p.m. UTC | #2
> -----Original Message-----
> From: Jann Horn [mailto:jannh@google.com]
> Sent: Thursday, July 25, 2019 8:29 PM
> To: Ondrej Mosnacek <omosnace@redhat.com>
> Cc: SElinux list <selinux@vger.kernel.org>; Paul Moore <paul@paul-
> moore.com>; Gote, Nitin R <nitin.r.gote@intel.com>; Kees Cook
> <keescook@chromium.org>
> Subject: Re: [PATCH] selinux: avoid atomic_t usage in sidtab
> 
> On Thu, Jul 25, 2019 at 3:59 PM Ondrej Mosnacek <omosnace@redhat.com>
> wrote:
> > As noted in Documentation/atomic_t.txt, if we don't need the RMW
> > atomic operations, we should only use READ_ONCE()/WRITE_ONCE() +
> > smp_rmb()/smp_wmb() where necessary (or the combined variants
> > smp_load_acquire()/smp_store_release()).
> >
> > This patch converts the sidtab code to use regular u32 for the counter
> > and reverse lookup cache and use the appropriate operations instead of
> > atomic_read()/atomic_set(). Note that when reading/updating the reverse
> > lookup cache we don't need memory barriers as it doesn't need to be
> > consistent or accurate. We can now also replace some atomic ops with
> > regular loads (when under spinlock) and stores (for conversion target
> > fields that are always accessed under the master table's spinlock).
> >
> > We can now also bump SIDTAB_MAX to U32_MAX as we can use the full
> u32
> > range again.
> >
> > Suggested-by: Jann Horn <jannh@google.com>
> > Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
> 
> Looks good to me; you can add "Reviewed-by: Jann Horn
> <jannh@google.com>" if you want.

Looks good to me as well.
There are probably many other places in the kernel where atomic_t is not required, as we found out in sidtab.c.

Thanks,
Nitin
Paul Moore Aug. 1, 2019, 12:09 a.m. UTC | #3
On Thu, Jul 25, 2019 at 9:59 AM Ondrej Mosnacek <omosnace@redhat.com> wrote:
> As noted in Documentation/atomic_t.txt, if we don't need the RMW atomic
> operations, we should only use READ_ONCE()/WRITE_ONCE() +
> smp_rmb()/smp_wmb() where necessary (or the combined variants
> smp_load_acquire()/smp_store_release()).
>
> This patch converts the sidtab code to use regular u32 for the counter
> and reverse lookup cache and use the appropriate operations instead of
> atomic_read()/atomic_set(). Note that when reading/updating the reverse
> lookup cache we don't need memory barriers as it doesn't need to be
> consistent or accurate. We can now also replace some atomic ops with
> regular loads (when under spinlock) and stores (for conversion target
> fields that are always accessed under the master table's spinlock).
>
> We can now also bump SIDTAB_MAX to U32_MAX as we can use the full u32
> range again.
>
> Suggested-by: Jann Horn <jannh@google.com>
> Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
> ---
>  security/selinux/ss/sidtab.c | 48 ++++++++++++++++--------------------
>  security/selinux/ss/sidtab.h |  8 +++---
>  2 files changed, 25 insertions(+), 31 deletions(-)

One of the things that is nice about atomic_t is that the type itself
helps indicate that this isn't a normal integer and you should look at
the stuff in atomic.h for fetching/setting the value.  While I
understand there is overhead involved, and this patch should help in
this regard, I think we lose some code readability and increase the
chance of someone manipulating these values incorrectly in the future.
I believe this is a lot of what Kees was getting at with the counter_t
idea.

At the very least I would like to see a comment in sidtab.h for the
sidtab struct explaining how users should access the count and rcache
fields.  However, what I would really like to see is some simple
macros/functions that handle the read/write accesses similar to the
way we do it with atomic_t (once again, I think this is what Kees was
getting at earlier).
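
For illustration, such helpers might look roughly like the following in
sidtab.h; the names are hypothetical, not something proposed in this series:

/* Hypothetical accessors, for illustration only: */
static inline u32 sidtab_read_count(struct sidtab *s)
{
	/* pairs with sidtab_write_count(): entries are read only after count */
	return smp_load_acquire(&s->count);
}

static inline void sidtab_write_count(struct sidtab *s, u32 count)
{
	/* caller holds s->lock: entries are written before the new count */
	smp_store_release(&s->count, count);
}

static inline u32 sidtab_rcache_read(struct sidtab *s, u32 pos)
{
	return READ_ONCE(s->rcache[pos]);
}

static inline void sidtab_rcache_write(struct sidtab *s, u32 pos, u32 index)
{
	WRITE_ONCE(s->rcache[pos], index);
}

Something along those lines would keep the ordering requirements documented
in one obvious place instead of scattered across the call sites.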

> diff --git a/security/selinux/ss/sidtab.c b/security/selinux/ss/sidtab.c
> index e63a90ff2728..dc6c078b6432 100644
> --- a/security/selinux/ss/sidtab.c
> +++ b/security/selinux/ss/sidtab.c
> @@ -12,7 +12,7 @@
>  #include <linux/slab.h>
>  #include <linux/sched.h>
>  #include <linux/spinlock.h>
> -#include <linux/atomic.h>
> +#include <asm/barrier.h>
>  #include "flask.h"
>  #include "security.h"
>  #include "sidtab.h"
> @@ -23,14 +23,14 @@ int sidtab_init(struct sidtab *s)
>
>         memset(s->roots, 0, sizeof(s->roots));
>
> +       /* max count is SIDTAB_MAX so valid index is always < SIDTAB_MAX */
>         for (i = 0; i < SIDTAB_RCACHE_SIZE; i++)
> -               atomic_set(&s->rcache[i], -1);
> +               s->rcache[i] = SIDTAB_MAX;
>
>         for (i = 0; i < SECINITSID_NUM; i++)
>                 s->isids[i].set = 0;
>
> -       atomic_set(&s->count, 0);
> -
> +       s->count = 0;
>         s->convert = NULL;
>
>         spin_lock_init(&s->lock);
> @@ -130,14 +130,12 @@ static struct context *sidtab_do_lookup(struct sidtab *s, u32 index, int alloc)
>
>  static struct context *sidtab_lookup(struct sidtab *s, u32 index)
>  {
> -       u32 count = (u32)atomic_read(&s->count);
> +       /* read entries only after reading count */
> +       u32 count = smp_load_acquire(&s->count);
>
>         if (index >= count)
>                 return NULL;
>
> -       /* read entries after reading count */
> -       smp_rmb();
> -
>         return sidtab_do_lookup(s, index, 0);
>  }
>
> @@ -210,10 +208,10 @@ static int sidtab_find_context(union sidtab_entry_inner entry,
>  static void sidtab_rcache_update(struct sidtab *s, u32 index, u32 pos)
>  {
>         while (pos > 0) {
> -               atomic_set(&s->rcache[pos], atomic_read(&s->rcache[pos - 1]));
> +               WRITE_ONCE(s->rcache[pos], READ_ONCE(s->rcache[pos - 1]));
>                 --pos;
>         }
> -       atomic_set(&s->rcache[0], (int)index);
> +       WRITE_ONCE(s->rcache[0], index);
>  }
>
>  static void sidtab_rcache_push(struct sidtab *s, u32 index)
> @@ -227,14 +225,14 @@ static int sidtab_rcache_search(struct sidtab *s, struct context *context,
>         u32 i;
>
>         for (i = 0; i < SIDTAB_RCACHE_SIZE; i++) {
> -               int v = atomic_read(&s->rcache[i]);
> +               u32 v = READ_ONCE(s->rcache[i]);
>
> -               if (v < 0)
> +               if (v >= SIDTAB_MAX)
>                         continue;
>
> -               if (context_cmp(sidtab_do_lookup(s, (u32)v, 0), context)) {
> -                       sidtab_rcache_update(s, (u32)v, i);
> -                       *index = (u32)v;
> +               if (context_cmp(sidtab_do_lookup(s, v, 0), context)) {
> +                       sidtab_rcache_update(s, v, i);
> +                       *index = v;
>                         return 0;
>                 }
>         }
> @@ -245,8 +243,7 @@ static int sidtab_reverse_lookup(struct sidtab *s, struct context *context,
>                                  u32 *index)
>  {
>         unsigned long flags;
> -       u32 count = (u32)atomic_read(&s->count);
> -       u32 count_locked, level, pos;
> +       u32 count, count_locked, level, pos;
>         struct sidtab_convert_params *convert;
>         struct context *dst, *dst_convert;
>         int rc;
> @@ -255,11 +252,10 @@ static int sidtab_reverse_lookup(struct sidtab *s, struct context *context,
>         if (rc == 0)
>                 return 0;
>
> +       /* read entries only after reading count */
> +       count = smp_load_acquire(&s->count);
>         level = sidtab_level_from_count(count);
>
> -       /* read entries after reading count */
> -       smp_rmb();
> -
>         pos = 0;
>         rc = sidtab_find_context(s->roots[level], &pos, count, level,
>                                  context, index);
> @@ -272,7 +268,7 @@ static int sidtab_reverse_lookup(struct sidtab *s, struct context *context,
>         spin_lock_irqsave(&s->lock, flags);
>
>         convert = s->convert;
> -       count_locked = (u32)atomic_read(&s->count);
> +       count_locked = s->count;
>         level = sidtab_level_from_count(count_locked);
>
>         /* if count has changed before we acquired the lock, then catch up */
> @@ -315,7 +311,7 @@ static int sidtab_reverse_lookup(struct sidtab *s, struct context *context,
>                 }
>
>                 /* at this point we know the insert won't fail */
> -               atomic_set(&convert->target->count, count + 1);
> +               convert->target->count = count + 1;
>         }
>
>         if (context->len)
> @@ -326,9 +322,7 @@ static int sidtab_reverse_lookup(struct sidtab *s, struct context *context,
>         *index = count;
>
>         /* write entries before writing new count */
> -       smp_wmb();
> -
> -       atomic_set(&s->count, count + 1);
> +       smp_store_release(&s->count, count + 1);
>
>         rc = 0;
>  out_unlock:
> @@ -418,7 +412,7 @@ int sidtab_convert(struct sidtab *s, struct sidtab_convert_params *params)
>                 return -EBUSY;
>         }
>
> -       count = (u32)atomic_read(&s->count);
> +       count = s->count;
>         level = sidtab_level_from_count(count);
>
>         /* allocate last leaf in the new sidtab (to avoid race with
> @@ -431,7 +425,7 @@ int sidtab_convert(struct sidtab *s, struct sidtab_convert_params *params)
>         }
>
>         /* set count in case no new entries are added during conversion */
> -       atomic_set(&params->target->count, count);
> +       params->target->count = count;
>
>         /* enable live convert of new entries */
>         s->convert = params;
> diff --git a/security/selinux/ss/sidtab.h b/security/selinux/ss/sidtab.h
> index bbd5c0d1f3bd..b4561c5ec893 100644
> --- a/security/selinux/ss/sidtab.h
> +++ b/security/selinux/ss/sidtab.h
> @@ -40,8 +40,8 @@ union sidtab_entry_inner {
>  #define SIDTAB_LEAF_ENTRIES \
>         (SIDTAB_NODE_ALLOC_SIZE / sizeof(struct sidtab_entry_leaf))
>
> -#define SIDTAB_MAX_BITS 31 /* limited to INT_MAX due to atomic_t range */
> -#define SIDTAB_MAX (((u32)1 << SIDTAB_MAX_BITS) - 1)
> +#define SIDTAB_MAX_BITS 32
> +#define SIDTAB_MAX U32_MAX
>  /* ensure enough tree levels for SIDTAB_MAX entries */
>  #define SIDTAB_MAX_LEVEL \
>         DIV_ROUND_UP(SIDTAB_MAX_BITS - size_to_shift(SIDTAB_LEAF_ENTRIES), \
> @@ -70,12 +70,12 @@ struct sidtab_convert_params {
>
>  struct sidtab {
>         union sidtab_entry_inner roots[SIDTAB_MAX_LEVEL + 1];
> -       atomic_t count;
> +       u32 count;
>         struct sidtab_convert_params *convert;
>         spinlock_t lock;
>
>         /* reverse lookup cache */
> -       atomic_t rcache[SIDTAB_RCACHE_SIZE];
> +       u32 rcache[SIDTAB_RCACHE_SIZE];
>
>         /* index == SID - 1 (no entry for SECSID_NULL) */
>         struct sidtab_isid_entry isids[SECINITSID_NUM];
> --
> 2.21.0
>

Patch

diff --git a/security/selinux/ss/sidtab.c b/security/selinux/ss/sidtab.c
index e63a90ff2728..dc6c078b6432 100644
--- a/security/selinux/ss/sidtab.c
+++ b/security/selinux/ss/sidtab.c
@@ -12,7 +12,7 @@ 
 #include <linux/slab.h>
 #include <linux/sched.h>
 #include <linux/spinlock.h>
-#include <linux/atomic.h>
+#include <asm/barrier.h>
 #include "flask.h"
 #include "security.h"
 #include "sidtab.h"
@@ -23,14 +23,14 @@  int sidtab_init(struct sidtab *s)
 
 	memset(s->roots, 0, sizeof(s->roots));
 
+	/* max count is SIDTAB_MAX so valid index is always < SIDTAB_MAX */
 	for (i = 0; i < SIDTAB_RCACHE_SIZE; i++)
-		atomic_set(&s->rcache[i], -1);
+		s->rcache[i] = SIDTAB_MAX;
 
 	for (i = 0; i < SECINITSID_NUM; i++)
 		s->isids[i].set = 0;
 
-	atomic_set(&s->count, 0);
-
+	s->count = 0;
 	s->convert = NULL;
 
 	spin_lock_init(&s->lock);
@@ -130,14 +130,12 @@  static struct context *sidtab_do_lookup(struct sidtab *s, u32 index, int alloc)
 
 static struct context *sidtab_lookup(struct sidtab *s, u32 index)
 {
-	u32 count = (u32)atomic_read(&s->count);
+	/* read entries only after reading count */
+	u32 count = smp_load_acquire(&s->count);
 
 	if (index >= count)
 		return NULL;
 
-	/* read entries after reading count */
-	smp_rmb();
-
 	return sidtab_do_lookup(s, index, 0);
 }
 
@@ -210,10 +208,10 @@  static int sidtab_find_context(union sidtab_entry_inner entry,
 static void sidtab_rcache_update(struct sidtab *s, u32 index, u32 pos)
 {
 	while (pos > 0) {
-		atomic_set(&s->rcache[pos], atomic_read(&s->rcache[pos - 1]));
+		WRITE_ONCE(s->rcache[pos], READ_ONCE(s->rcache[pos - 1]));
 		--pos;
 	}
-	atomic_set(&s->rcache[0], (int)index);
+	WRITE_ONCE(s->rcache[0], index);
 }
 
 static void sidtab_rcache_push(struct sidtab *s, u32 index)
@@ -227,14 +225,14 @@  static int sidtab_rcache_search(struct sidtab *s, struct context *context,
 	u32 i;
 
 	for (i = 0; i < SIDTAB_RCACHE_SIZE; i++) {
-		int v = atomic_read(&s->rcache[i]);
+		u32 v = READ_ONCE(s->rcache[i]);
 
-		if (v < 0)
+		if (v >= SIDTAB_MAX)
 			continue;
 
-		if (context_cmp(sidtab_do_lookup(s, (u32)v, 0), context)) {
-			sidtab_rcache_update(s, (u32)v, i);
-			*index = (u32)v;
+		if (context_cmp(sidtab_do_lookup(s, v, 0), context)) {
+			sidtab_rcache_update(s, v, i);
+			*index = v;
 			return 0;
 		}
 	}
@@ -245,8 +243,7 @@  static int sidtab_reverse_lookup(struct sidtab *s, struct context *context,
 				 u32 *index)
 {
 	unsigned long flags;
-	u32 count = (u32)atomic_read(&s->count);
-	u32 count_locked, level, pos;
+	u32 count, count_locked, level, pos;
 	struct sidtab_convert_params *convert;
 	struct context *dst, *dst_convert;
 	int rc;
@@ -255,11 +252,10 @@  static int sidtab_reverse_lookup(struct sidtab *s, struct context *context,
 	if (rc == 0)
 		return 0;
 
+	/* read entries only after reading count */
+	count = smp_load_acquire(&s->count);
 	level = sidtab_level_from_count(count);
 
-	/* read entries after reading count */
-	smp_rmb();
-
 	pos = 0;
 	rc = sidtab_find_context(s->roots[level], &pos, count, level,
 				 context, index);
@@ -272,7 +268,7 @@  static int sidtab_reverse_lookup(struct sidtab *s, struct context *context,
 	spin_lock_irqsave(&s->lock, flags);
 
 	convert = s->convert;
-	count_locked = (u32)atomic_read(&s->count);
+	count_locked = s->count;
 	level = sidtab_level_from_count(count_locked);
 
 	/* if count has changed before we acquired the lock, then catch up */
@@ -315,7 +311,7 @@  static int sidtab_reverse_lookup(struct sidtab *s, struct context *context,
 		}
 
 		/* at this point we know the insert won't fail */
-		atomic_set(&convert->target->count, count + 1);
+		convert->target->count = count + 1;
 	}
 
 	if (context->len)
@@ -326,9 +322,7 @@  static int sidtab_reverse_lookup(struct sidtab *s, struct context *context,
 	*index = count;
 
 	/* write entries before writing new count */
-	smp_wmb();
-
-	atomic_set(&s->count, count + 1);
+	smp_store_release(&s->count, count + 1);
 
 	rc = 0;
 out_unlock:
@@ -418,7 +412,7 @@  int sidtab_convert(struct sidtab *s, struct sidtab_convert_params *params)
 		return -EBUSY;
 	}
 
-	count = (u32)atomic_read(&s->count);
+	count = s->count;
 	level = sidtab_level_from_count(count);
 
 	/* allocate last leaf in the new sidtab (to avoid race with
@@ -431,7 +425,7 @@  int sidtab_convert(struct sidtab *s, struct sidtab_convert_params *params)
 	}
 
 	/* set count in case no new entries are added during conversion */
-	atomic_set(&params->target->count, count);
+	params->target->count = count;
 
 	/* enable live convert of new entries */
 	s->convert = params;
diff --git a/security/selinux/ss/sidtab.h b/security/selinux/ss/sidtab.h
index bbd5c0d1f3bd..b4561c5ec893 100644
--- a/security/selinux/ss/sidtab.h
+++ b/security/selinux/ss/sidtab.h
@@ -40,8 +40,8 @@  union sidtab_entry_inner {
 #define SIDTAB_LEAF_ENTRIES \
 	(SIDTAB_NODE_ALLOC_SIZE / sizeof(struct sidtab_entry_leaf))
 
-#define SIDTAB_MAX_BITS 31 /* limited to INT_MAX due to atomic_t range */
-#define SIDTAB_MAX (((u32)1 << SIDTAB_MAX_BITS) - 1)
+#define SIDTAB_MAX_BITS 32
+#define SIDTAB_MAX U32_MAX
 /* ensure enough tree levels for SIDTAB_MAX entries */
 #define SIDTAB_MAX_LEVEL \
 	DIV_ROUND_UP(SIDTAB_MAX_BITS - size_to_shift(SIDTAB_LEAF_ENTRIES), \
@@ -70,12 +70,12 @@  struct sidtab_convert_params {
 
 struct sidtab {
 	union sidtab_entry_inner roots[SIDTAB_MAX_LEVEL + 1];
-	atomic_t count;
+	u32 count;
 	struct sidtab_convert_params *convert;
 	spinlock_t lock;
 
 	/* reverse lookup cache */
-	atomic_t rcache[SIDTAB_RCACHE_SIZE];
+	u32 rcache[SIDTAB_RCACHE_SIZE];
 
 	/* index == SID - 1 (no entry for SECSID_NULL) */
 	struct sidtab_isid_entry isids[SECINITSID_NUM];