[v3] apparmor: global buffers spin lock may get contended

Message ID 4595e7b4-ea31-5b01-f636-259e84737dfc@canonical.com (mailing list archive)
State Handled Elsewhere
Series [v3] apparmor: global buffers spin lock may get contended

Commit Message

John Johansen Feb. 17, 2023, 12:08 a.m. UTC
From f44dee132b0b55386b7ea31e68c80d367b073ee0 Mon Sep 17 00:00:00 2001
From: John Johansen <john.johansen@canonical.com>
Date: Tue, 25 Oct 2022 01:18:41 -0700
Subject: [PATCH] apparmor: cache buffers on percpu list if there is lock
  contention

On a heavily loaded machine there can be lock contention on the
global buffers lock. Add a percpu list to cache buffers on when
lock contention is encountered.

When allocating buffers attempt to use cached buffers first,
before taking the global buffers lock. When freeing buffers
try to put them back to the global list but if contention is
encountered, put the buffer on the percpu list.

The length of time a buffer is held on the percpu list is dynamically
adjusted based on lock contention. The hold time is increased rapidly
and ramped down slowly.
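
For reference, the ramp condenses to roughly the following (taken from
update_contention() and the hold/contention updates in the patch below;
a sketch, not the exact code):

	/* lock contention seen: ramp up quickly, capped */
	cache->contention += 3;			/* max 9 */
	cache->hold += 1 << cache->contention;	/* +8, +64 or +512 */

	/* uncontended lock: contention--; buffer reused from percpu list: hold-- */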

v3:
- limit the number of buffers that can be pushed onto the percpu
   list. This avoids a problem on some kernels where one percpu
   list can inherit buffers from another cpu after a reschedule,
   causing more kernel memory to be used than is necessary. Under
   normal conditions this should eventually return to normal,
   but under pathological conditions the extra memory consumption
   may have been unbounded
v2:
- dynamically adjust buffer hold time on percpu list based on
   lock contention.
v1:
- cache buffers on percpu list on lock contention

Signed-off-by: John Johansen <john.johansen@canonical.com>
---
  security/apparmor/lsm.c | 81 ++++++++++++++++++++++++++++++++++++++---
  1 file changed, 76 insertions(+), 5 deletions(-)

Comments

Sebastian Andrzej Siewior Feb. 17, 2023, 10:44 a.m. UTC | #1
On 2023-02-16 16:08:10 [-0800], John Johansen wrote:
> --- a/security/apparmor/lsm.c
> +++ b/security/apparmor/lsm.c
> @@ -49,12 +49,19 @@ union aa_buffer {
>  	char buffer[1];
>  };
> +struct aa_local_cache {
> +	unsigned int contention;
> +	unsigned int hold;
> +	struct list_head head;
> +};

if you stick a local_lock_t into that struct, then you could replace
	cache = get_cpu_ptr(&aa_local_buffers);
with
	local_lock(&aa_local_buffers.lock);
	cache = this_cpu_ptr(&aa_local_buffers);

You would get the preempt_disable() based locking for the per-CPU
variable (as with get_cpu_ptr()) and additionally some lockdep
validation which would warn if it is used outside of task context (IRQ).
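
Untested, but roughly something like this (assuming the local_lock_t
member is named ->lock; needs <linux/local_lock.h>):

	struct aa_local_cache {
		local_lock_t lock;
		unsigned int contention;
		unsigned int hold;
		struct list_head head;
	};

	static DEFINE_PER_CPU(struct aa_local_cache, aa_local_buffers) = {
		.lock = INIT_LOCAL_LOCK(lock),
	};

	/* in aa_get_buffer() / aa_put_buffer() */
	local_lock(&aa_local_buffers.lock);
	cache = this_cpu_ptr(&aa_local_buffers);
	...
	local_unlock(&aa_local_buffers.lock);

On !PREEMPT_RT this still boils down to preempt_disable()/enable(), so
it shouldn't change behaviour, you just get the lockdep annotations on
top.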

I didn't completely parse the hold/contention logic but it seems to work
;)
You check "cache->count >= 2" twice but I don't see an inc/dec of it,
nor is it part of aa_local_cache.

I can't parse how many items can end up on the local list if the global
list is locked. My guess would be more than 2 due to the ->hold parameter.

Do you have any numbers on the machine and the performance improvement?
It sure will be a good selling point.

Sebastian
John Johansen Feb. 20, 2023, 8:42 a.m. UTC | #2
On 2/17/23 02:44, Sebastian Andrzej Siewior wrote:
> On 2023-02-16 16:08:10 [-0800], John Johansen wrote:
>> --- a/security/apparmor/lsm.c
>> +++ b/security/apparmor/lsm.c
>> @@ -49,12 +49,19 @@ union aa_buffer {
>>   	char buffer[1];
>>   };
>> +struct aa_local_cache {
>> +	unsigned int contention;
>> +	unsigned int hold;
>> +	struct list_head head;
>> +};
> 
> if you stick a local_lock_t into that struct, then you could replace
> 	cache = get_cpu_ptr(&aa_local_buffers);
> with
> 	local_lock(&aa_local_buffers.lock);
> 	cache = this_cpu_ptr(&aa_local_buffers);
> 
> You would get the preempt_disable() based locking for the per-CPU
> variable (as with get_cpu_ptr()) and additionally some lockdep
> validation which would warn if it is used outside of task context (IRQ).
> 
I did look at local_locks and there was a reason I didn't use them. I
can't recall what it was, as the original iteration of this is over a
year old now. I will have to dig into it again.

> I didn't parse completely the hold/contention logic but it seems to work
> ;)
> You check "cache->count >=  2" twice but I don't see an inc/ dec of it
> nor is it part of aa_local_cache.
> 
Sadly, I messed up the reordering of this and the debug patch. This
will be fixed in v4.

> I can't parse how many items can end up on the local list if the global
> list is locked. My guess would be more than 2 due the ->hold parameter.
> 
So this iteration forces pushing back to the global list if there are
already two buffers on the local list. The hold parameter just affects
how long buffers remain on the local list before we try to place them
back on the global list.

Originally, before the count was added, more than 2 buffers could end
up on the local list, and having too many local buffers is a waste of
memory. The count was added to address this. The value of 2 (which
should be switched to a define) was chosen because no mediation routine
currently uses more than 2 buffers.
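
i.e. something roughly like the following (placeholder name, whatever
it ends up being called in v4):

	/* no mediation routine currently needs more than 2 buffers */
	#define AA_MAX_LOCAL_CACHE 2

	if (!cache->hold || cache->count >= AA_MAX_LOCAL_CACHE) {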

Note that this doesn't mean that at most two buffers can be allocated
to tasks on a cpu. It's possible in some cases for a task to have
allocated buffers while there are still buffers on the local cache list.

> Do you have any numbers on the machine and performance it improved? It
> sure will be a good selling point.
> 

I can include some supporting info for a 16 core machine. But it will
take some time for me to get access to a bigger machine, where this is
much more important. Hence the call for some of the other people on
this thread to test.

thanks for the feedback
Anil Altinay Feb. 21, 2023, 9:27 p.m. UTC | #3
I can test the patch with 5.10 and 5.15 kernels on different machines.
Just let me know which machine types you would like me to test.

Anil Altinay June 26, 2023, 11:35 p.m. UTC | #4
Hi John,

I was wondering if you got a chance to work on patch v4. Please let me
know if you need help with testing.

Best,
Anil


John Johansen June 27, 2023, 12:31 a.m. UTC | #5
On 6/26/23 16:33, Anil Altinay wrote:
> Hi John,
> 
> I was wondering if you get a chance to work on patch v4. Please let me know if you need help with testing.
> 

Yeah, testing help is always much appreciated. I have a v4, and I am working on 3 alternate versions to compare against, to help give a better sense of whether we can get away with simplifying or tweaking the scaling. I should be able to post them out some time tonight.

Sergey Senozhatsky Oct. 6, 2023, 4:18 a.m. UTC | #6
On (23/06/26 17:31), John Johansen wrote:
> On 6/26/23 16:33, Anil Altinay wrote:
> > Hi John,
> > 
> > I was wondering if you get a chance to work on patch v4. Please let me know if you need help with testing.
> > 
> 
> yeah, testing help is always much appreciated. I have a v4, and I am
> working on 3 alternate version to compare against, to help give a better
> sense if we can get away with simplifying or tweak the scaling.
>
> I should be able to post them out some time tonight.

Hi John,

Did you get a chance to post v4? I may be able to give it some testing
on our real-life case.

Patch

diff --git a/security/apparmor/lsm.c b/security/apparmor/lsm.c
index 25114735bc11..21f5ea20e715 100644
--- a/security/apparmor/lsm.c
+++ b/security/apparmor/lsm.c
@@ -49,12 +49,19 @@  union aa_buffer {
  	char buffer[1];
  };
  
+struct aa_local_cache {
+	unsigned int contention;
+	unsigned int hold;
+	struct list_head head;
+};
+
  #define RESERVE_COUNT 2
  static int reserve_count = RESERVE_COUNT;
  static int buffer_count;
  
  static LIST_HEAD(aa_global_buffers);
  static DEFINE_SPINLOCK(aa_buffers_lock);
+static DEFINE_PER_CPU(struct aa_local_cache, aa_local_buffers);
  
  /*
   * LSM hook functions
@@ -1622,14 +1629,44 @@  static int param_set_mode(const char *val, const struct kernel_param *kp)
  	return 0;
  }
  
+static void update_contention(struct aa_local_cache *cache)
+{
+	cache->contention += 3;
+	if (cache->contention > 9)
+		cache->contention = 9;
+	cache->hold += 1 << cache->contention;		/* 8, 64, 512 */
+}
+
  char *aa_get_buffer(bool in_atomic)
  {
  	union aa_buffer *aa_buf;
+	struct aa_local_cache *cache;
  	bool try_again = true;
  	gfp_t flags = (GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
  
+	/* use per cpu cached buffers first */
+	cache = get_cpu_ptr(&aa_local_buffers);
+	if (!list_empty(&cache->head)) {
+		aa_buf = list_first_entry(&cache->head, union aa_buffer, list);
+		list_del(&aa_buf->list);
+		cache->hold--;
+		put_cpu_ptr(&aa_local_buffers);
+		return &aa_buf->buffer[0];
+	}
+	put_cpu_ptr(&aa_local_buffers);
+
+	if (!spin_trylock(&aa_buffers_lock)) {
+		cache = get_cpu_ptr(&aa_local_buffers);
+		update_contention(cache);
+		put_cpu_ptr(&aa_local_buffers);
+		spin_lock(&aa_buffers_lock);
+	} else {
+		cache = get_cpu_ptr(&aa_local_buffers);
+		if (cache->contention)
+			cache->contention--;
+		put_cpu_ptr(&aa_local_buffers);
+	}
  retry:
-	spin_lock(&aa_buffers_lock);
  	if (buffer_count > reserve_count ||
  	    (in_atomic && !list_empty(&aa_global_buffers))) {
  		aa_buf = list_first_entry(&aa_global_buffers, union aa_buffer,
@@ -1655,6 +1692,7 @@  char *aa_get_buffer(bool in_atomic)
  	if (!aa_buf) {
  		if (try_again) {
  			try_again = false;
+			spin_lock(&aa_buffers_lock);
  			goto retry;
  		}
  		pr_warn_once("AppArmor: Failed to allocate a memory buffer.\n");
@@ -1666,15 +1704,39 @@  char *aa_get_buffer(bool in_atomic)
  void aa_put_buffer(char *buf)
  {
  	union aa_buffer *aa_buf;
+	struct aa_local_cache *cache;
  
  	if (!buf)
  		return;
  	aa_buf = container_of(buf, union aa_buffer, buffer[0]);
  
-	spin_lock(&aa_buffers_lock);
-	list_add(&aa_buf->list, &aa_global_buffers);
-	buffer_count++;
-	spin_unlock(&aa_buffers_lock);
+	cache = get_cpu_ptr(&aa_local_buffers);
+	if (!cache->hold || cache->count >= 2) {
+		put_cpu_ptr(&aa_local_buffers);
+		if (spin_trylock(&aa_buffers_lock)) {
+		locked:
+			list_add(&aa_buf->list, &aa_global_buffers);
+			buffer_count++;
+			spin_unlock(&aa_buffers_lock);
+			cache = get_cpu_ptr(&aa_local_buffers);
+			if (cache->contention)
+				cache->contention--;
+			put_cpu_ptr(&aa_local_buffers);
+			return;
+		}
+		cache = get_cpu_ptr(&aa_local_buffers);
+		update_contention(cache);
+		if (cache->count >= 2) {
+			put_cpu_ptr(&aa_local_buffers);
+			spin_lock(&aa_buffers_lock);
+			/* force putting the buffer to global */
+			goto locked;
+		}
+	}
+
+	/* cache in percpu list */
+	list_add(&aa_buf->list, &cache->head);
+	put_cpu_ptr(&aa_local_buffers);
  }
  
  /*
@@ -1716,6 +1778,15 @@  static int __init alloc_buffers(void)
  	union aa_buffer *aa_buf;
  	int i, num;
  
+	/*
+	 * per cpu set of cached allocated buffers used to help reduce
+	 * lock contention
+	 */
+	for_each_possible_cpu(i) {
+		per_cpu(aa_local_buffers, i).contention = 0;
+		per_cpu(aa_local_buffers, i).hold = 0;
+		INIT_LIST_HEAD(&per_cpu(aa_local_buffers, i).head);
+	}
  	/*
  	 * A function may require two buffers at once. Usually the buffers are
  	 * used for a short period of time and are shared. On UP kernel buffers