Message ID | 1580379523-32272-1-git-send-email-vjitta@codeaurora.org (mailing list archive) |
---|---|
State | New, archived |
Series | mm: slub: reinitialize random sequence cache on slab object update |
On Thu, 30 Jan 2020, vjitta@codeaurora.org wrote:

> Random sequence cache is precomputed during slab object creation
> based upon the object size and number of objects per slab. These could
> be changed when flags like SLAB_STORE_USER, SLAB_POISON are updated
> from sysfs. So when shuffle_freelist is called during slab_alloc it

Sorry no. That cannot happen. Changing the size of the slab is only
possible if no slab pages are allocated. Any sysfs changes that affect the
object size must fail if object and slab pages are already allocated.

If you were able to change the object size then we need to prevent that
from happening.
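The guard Christopher refers to does exist in the slub.c of this era: each
of these store handlers begins by rejecting the write once any objects are
allocated. A paraphrased sketch rather than a verbatim quote of mm/slub.c:

static ssize_t red_zone_store(struct kmem_cache *s,
				const char *buf, size_t length)
{
	/* Refuse any size-affecting flag change once objects exist. */
	if (any_slab_objects(s))
		return -EBUSY;

	s->flags &= ~SLAB_RED_ZONE;
	if (buf[0] == '1')
		s->flags |= SLAB_RED_ZONE;
	calculate_sizes(s, -1);
	return length;
}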
On 1/30/2020 11:58 PM, Christopher Lameter wrote:
> On Thu, 30 Jan 2020, vjitta@codeaurora.org wrote:
>
>> Random sequence cache is precomputed during slab object creation
>> based upon the object size and number of objects per slab. These could
>> be changed when flags like SLAB_STORE_USER, SLAB_POISON are updated
>> from sysfs. So when shuffle_freelist is called during slab_alloc it
>
> Sorry no. That cannot happen. Changing the size of the slab is only
> possible if no slab pages are allocated. Any sysfs changes that affect the
> object size must fail if object and slab pages are already allocated.
>
> If you were able to change the object size then we need to prevent that
> from happening.
>

Yes, the size of a slab can't be changed after objects are allocated; that
holds true even with this change. Let me explain a bit more about the use
case here.

ZRAM compression uses the slub allocator. Enabling slub debug flags like
SLAB_STORE_USER increases memory consumption, which defeats the purpose of
ZRAM compression. So such flags have to be disabled before any allocations
happen, and this requires updating the random sequence cache, since the
object size and the number of objects per slab change once these flags are
disabled.

So the sequence will be:

1. Slab creation (this sets a precomputed random sequence cache)
2. Remove the debug flags
3. Update the random sequence cache
4. Mount zram and then start using it for allocations

(A simplified model of why step 3 matters follows this message.)

Thanks,
Vijay
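To see why step 3 is needed, recall that SLUB pre-scales every entry of the
random sequence by the object size, turning object indices into byte
offsets. Below is a minimal userspace model of that staleness; the
constants (a 4096-byte slab, a 256-byte debug object size, a 128-byte size
after clearing the flags) and the helper name build_seq are invented for
illustration, and this is not kernel code:

#include <stdio.h>

#define SLAB_BYTES 4096	/* illustrative slab size */

/* Models init_cache_random_seq(): entries are pre-scaled by the object
 * size, so the sequence stores byte offsets, not indices. */
static void build_seq(unsigned int *seq, unsigned int count,
		      unsigned int size)
{
	unsigned int i;

	for (i = 0; i < count; i++)
		seq[i] = i * size;
}

int main(void)
{
	unsigned int seq[64];
	unsigned int dbg_size = 256;	/* object size with debug flags set */
	unsigned int new_size = 128;	/* object size after clearing them */
	unsigned int dbg_count = SLAB_BYTES / dbg_size;	/* 16 objects */
	unsigned int new_count = SLAB_BYTES / new_size;	/* 32 objects */
	unsigned int i;

	/* Step 1: the sequence is built for the debug layout. */
	build_seq(seq, dbg_count, dbg_size);

	/* Steps 2 and 4 without step 3: the shuffle would keep consuming
	 * the old offsets, which land only on even-numbered objects of
	 * the new layout and cover just half of them. */
	for (i = 0; i < dbg_count; i++)
		printf("seq[%u] = %4u -> object %2u of %u\n",
		       i, seq[i], seq[i] / new_size, new_count);
	return 0;
}

Run it and the stale sequence reaches objects 0, 2, 4, ... 30 only: 16
offsets for 32 slots, with the odd-numbered objects never linked into the
freelist at all.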
On 2020-02-03 12:27, Vijayanand Jitta wrote:
> On 1/30/2020 11:58 PM, Christopher Lameter wrote:
>> On Thu, 30 Jan 2020, vjitta@codeaurora.org wrote:
>>
>>> Random sequence cache is precomputed during slab object creation
>>> based upon the object size and number of objects per slab. These could
>>> be changed when flags like SLAB_STORE_USER, SLAB_POISON are updated
>>> from sysfs. So when shuffle_freelist is called during slab_alloc it
>>
>> Sorry no. That cannot happen. Changing the size of the slab is only
>> possible if no slab pages are allocated. Any sysfs changes that affect
>> the object size must fail if object and slab pages are already
>> allocated.
>>
>> If you were able to change the object size then we need to prevent
>> that from happening.
>>
>
> Yes, the size of a slab can't be changed after objects are allocated;
> that holds true even with this change. Let me explain a bit more about
> the use case here.
>
> ZRAM compression uses the slub allocator. Enabling slub debug flags
> like SLAB_STORE_USER increases memory consumption, which defeats the
> purpose of ZRAM compression. So such flags have to be disabled before
> any allocations happen, and this requires updating the random sequence
> cache, since the object size and the number of objects per slab change
> once these flags are disabled.
>
> So the sequence will be:
>
> 1. Slab creation (this sets a precomputed random sequence cache)
> 2. Remove the debug flags
> 3. Update the random sequence cache
> 4. Mount zram and then start using it for allocations
>
> Thanks,
> Vijay

Waiting for your response.

Thanks,
Vijay
diff --git a/mm/slub.c b/mm/slub.c
index 0ab92ec..88abac5 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1533,6 +1533,24 @@ static int init_cache_random_seq(struct kmem_cache *s)
 	return 0;
 }
 
+/* re-initialize the random sequence cache */
+static int reinit_cache_random_seq(struct kmem_cache *s)
+{
+	int err;
+
+	if (s->random_seq) {
+		cache_random_seq_destroy(s);
+		err = init_cache_random_seq(s);
+
+		if (err) {
+			pr_err("SLUB: Unable to re-initialize random sequence cache for %s\n",
+				s->name);
+			return err;
+		}
+	}
+
+	return 0;
+}
 /* Initialize each random sequence freelist per cache */
 static void __init init_freelist_randomization(void)
 {
@@ -1607,6 +1625,10 @@ static inline int init_cache_random_seq(struct kmem_cache *s)
 {
 	return 0;
 }
+static int reinit_cache_random_seq(struct kmem_cache *s)
+{
+	return 0;
+}
 static inline void init_freelist_randomization(void) { }
 static inline bool shuffle_freelist(struct kmem_cache *s, struct page *page)
 {
@@ -5192,6 +5214,7 @@ static ssize_t red_zone_store(struct kmem_cache *s,
 		s->flags |= SLAB_RED_ZONE;
 	}
 	calculate_sizes(s, -1);
+	reinit_cache_random_seq(s);
 	return length;
 }
 SLAB_ATTR(red_zone);
@@ -5212,6 +5235,7 @@ static ssize_t poison_store(struct kmem_cache *s,
 		s->flags |= SLAB_POISON;
 	}
 	calculate_sizes(s, -1);
+	reinit_cache_random_seq(s);
 	return length;
 }
 SLAB_ATTR(poison);
@@ -5233,6 +5257,7 @@ static ssize_t store_user_store(struct kmem_cache *s,
 		s->flags |= SLAB_STORE_USER;
 	}
 	calculate_sizes(s, -1);
+	reinit_cache_random_seq(s);
 	return length;
 }
 SLAB_ATTR(store_user);
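For completeness, a hedged userspace sketch of the sequence from the use
case above, driving the patched store handlers through sysfs. The cache
name "zs_handle" is an assumption (one of zsmalloc's caches; booting with
slab_nomerge may be needed for it to appear under its own name), and the
program must run as root before zram is mounted:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int write_attr(const char *attr, const char *val)
{
	char path[128];
	ssize_t n;
	int fd;

	/* The cache name here is an assumption; adjust to the target. */
	snprintf(path, sizeof(path),
		 "/sys/kernel/slab/zs_handle/%s", attr);
	fd = open(path, O_WRONLY);
	if (fd < 0) {
		perror(path);
		return -1;
	}
	/* The store handlers reject this with EBUSY once any objects
	 * exist, so it must happen before allocations start. */
	n = write(fd, val, strlen(val));
	close(fd);
	return n < 0 ? -1 : 0;
}

int main(void)
{
	static const char * const attrs[] = {
		"store_user", "poison", "red_zone"
	};
	unsigned int i;

	for (i = 0; i < sizeof(attrs) / sizeof(attrs[0]); i++)
		if (write_attr(attrs[i], "0"))
			return 1;

	/* With the patch applied, each write above also re-ran
	 * calculate_sizes() and reinit_cache_random_seq(). */
	puts("debug flags cleared; random sequence re-initialized");
	return 0;
}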