Message ID | 20220327051853.57647-2-songmuchun@bytedance.com
---|---
State | New
Series | [1/2] mm: kfence: fix missing objcg housekeeping for SLAB
On Sun, Mar 27, 2022 at 1:19 PM Muchun Song <songmuchun@bytedance.com> wrote:
>
> If the kfence object is allocated to be used for an objects vector, then
> this slot of the pool ends up being occupied permanently, since the
> vector is never freed. The solutions could be 1) freeing the vector
> when the kfence object is freed, or 2) allocating all vectors statically.
> Since the memory consumption of object vectors is low, it is better to
> choose 2) to fix the issue; this also reduces the overhead of allocating
> vectors in the future.
>
> Fixes: d3fb45f370d9 ("mm, kfence: insert KFENCE hooks for SLAB")
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>

Since this cannot be compiled successfully when !CONFIG_MEMCG (the
following patch should be applied on top), I'll update this in the next
version if anyone agrees with this change. Thanks.

diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 9976b3f0d097..b5c4b62b5d2c 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -583,7 +583,9 @@ static bool __init kfence_init_pool(void)
 		struct kfence_metadata *meta = &kfence_metadata[i];
 
 		/* Initialize metadata. */
+#ifdef CONFIG_MEMCG
 		slab->memcg_data = (unsigned long)&meta->objcg | MEMCG_DATA_OBJCGS;
+#endif
 		INIT_LIST_HEAD(&meta->list);
 		raw_spin_lock_init(&meta->lock);
 		meta->state = KFENCE_OBJECT_UNUSED;
@@ -940,7 +942,9 @@ void __kfence_free(void *addr)
 {
 	struct kfence_metadata *meta = addr_to_metadata((unsigned long)addr);
 
+#ifdef CONFIG_MEMCG
 	KFENCE_WARN_ON(meta->objcg);
+#endif
 	/*
 	 * If the objects of the cache are SLAB_TYPESAFE_BY_RCU, defer freeing
 	 * the object, as the object page may be recycled for other-typed
diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
index 6f0e1aece3f8..9a6c4b1b12a8 100644
--- a/mm/kfence/kfence.h
+++ b/mm/kfence/kfence.h
@@ -89,7 +89,9 @@ struct kfence_metadata {
 	struct kfence_track free_track;
 	/* For updating alloc_covered on frees. */
 	u32 alloc_stack_hash;
+#ifdef CONFIG_MEMCG
 	struct obj_cgroup *objcg;
+#endif
 };
 
 extern struct kfence_metadata kfence_metadata[CONFIG_KFENCE_NUM_OBJECTS];
Hi Muchun,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on hnaz-mm/master]

url:    https://github.com/intel-lab-lkp/linux/commits/Muchun-Song/mm-kfence-fix-missing-objcg-housekeeping-for-SLAB/20220327-132038
base:   https://github.com/hnaz/linux-mm master
config: x86_64-randconfig-a012 (https://download.01.org/0day-ci/archive/20220327/202203271634.QymsHESG-lkp@intel.com/config)
compiler: clang version 15.0.0 (https://github.com/llvm/llvm-project 0f6d9501cf49ce02937099350d08f20c4af86f3d)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/a33cf78311711db98d9f77541d0a4b50bc466875
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Muchun-Song/mm-kfence-fix-missing-objcg-housekeeping-for-SLAB/20220327-132038
        git checkout a33cf78311711db98d9f77541d0a4b50bc466875
        # save the config file to linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash mm/kfence/

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

>> mm/kfence/core.c:593:36: warning: incompatible integer to pointer conversion passing 'unsigned long' to parameter of type 'const void *' [-Wint-conversion]
           struct slab *slab = virt_to_slab(addr);
                                            ^~~~
   mm/kfence/../slab.h:173:53: note: passing argument to parameter 'addr' here
   static inline struct slab *virt_to_slab(const void *addr)
                                                       ^
   mm/kfence/core.c:597:9: error: no member named 'memcg_data' in 'struct slab'
                   slab->memcg_data = (unsigned long)&meta->objcg | MEMCG_DATA_OBJCGS;
                   ~~~~  ^
   mm/kfence/core.c:597:52: error: use of undeclared identifier 'MEMCG_DATA_OBJCGS'
                   slab->memcg_data = (unsigned long)&meta->objcg | MEMCG_DATA_OBJCGS;
                                                                    ^
   1 warning and 2 errors generated.


vim +593 mm/kfence/core.c

   543	
   544	/*
   545	 * Initialization of the KFENCE pool after its allocation.
   546	 * Returns 0 on success; otherwise returns the address up to
   547	 * which partial initialization succeeded.
   548	 */
   549	static unsigned long kfence_init_pool(void)
   550	{
   551		unsigned long addr = (unsigned long)__kfence_pool;
   552		struct page *pages;
   553		int i;
   554	
   555		if (!arch_kfence_init_pool())
   556			return addr;
   557	
   558		pages = virt_to_page(addr);
   559	
   560		/*
   561		 * Set up object pages: they must have PG_slab set, to avoid freeing
   562		 * these as real pages.
   563		 *
   564		 * We also want to avoid inserting kfence_free() in the kfree()
   565		 * fast-path in SLUB, and therefore need to ensure kfree() correctly
   566		 * enters __slab_free() slow-path.
   567		 */
   568		for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
   569			if (!i || (i % 2))
   570				continue;
   571	
   572			/* Verify we do not have a compound head page. */
   573			if (WARN_ON(compound_head(&pages[i]) != &pages[i]))
   574				return addr;
   575	
   576			__SetPageSlab(&pages[i]);
   577		}
   578	
   579		/*
   580		 * Protect the first 2 pages. The first page is mostly unnecessary, and
   581		 * merely serves as an extended guard page. However, adding one
   582		 * additional page in the beginning gives us an even number of pages,
   583		 * which simplifies the mapping of address to metadata index.
   584		 */
   585		for (i = 0; i < 2; i++) {
   586			if (unlikely(!kfence_protect(addr)))
   587				return addr;
   588	
   589			addr += PAGE_SIZE;
   590		}
   591	
   592		for (i = 0; i < CONFIG_KFENCE_NUM_OBJECTS; i++) {
 > 593			struct slab *slab = virt_to_slab(addr);
   594			struct kfence_metadata *meta = &kfence_metadata[i];
   595	
   596			/* Initialize metadata. */
   597			slab->memcg_data = (unsigned long)&meta->objcg | MEMCG_DATA_OBJCGS;
   598			INIT_LIST_HEAD(&meta->list);
   599			raw_spin_lock_init(&meta->lock);
   600			meta->state = KFENCE_OBJECT_UNUSED;
   601			meta->addr = addr; /* Initialize for validation in metadata_to_pageaddr(). */
   602			list_add_tail(&meta->list, &kfence_freelist);
   603	
   604			/* Protect the right redzone. */
   605			if (unlikely(!kfence_protect(addr + PAGE_SIZE)))
   606				return addr;
   607	
   608			addr += 2 * PAGE_SIZE;
   609		}
   610	
   611		/*
   612		 * The pool is live and will never be deallocated from this point on.
   613		 * Remove the pool object from the kmemleak object tree, as it would
   614		 * otherwise overlap with allocations returned by kfence_alloc(), which
   615		 * are registered with kmemleak through the slab post-alloc hook.
   616		 */
   617		kmemleak_free(__kfence_pool);
   618	
   619		return 0;
   620	}
   621	
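Both compilers flag the same two problems: virt_to_slab() takes a const void * while addr here is an unsigned long, and slab->memcg_data plus MEMCG_DATA_OBJCGS only exist when CONFIG_MEMCG=y. A minimal sketch of a corrected loop body, combining an explicit cast with the #ifdef guard from the fix-up posted above (illustrative only, not necessarily what the next version will do):

	for (i = 0; i < CONFIG_KFENCE_NUM_OBJECTS; i++) {
		/* Cast needed: virt_to_slab() expects a const void *. */
		struct slab *slab = virt_to_slab((void *)addr);
		struct kfence_metadata *meta = &kfence_metadata[i];

		/* Initialize metadata. */
#ifdef CONFIG_MEMCG
		/* memcg_data and MEMCG_DATA_OBJCGS require CONFIG_MEMCG=y. */
		slab->memcg_data = (unsigned long)&meta->objcg | MEMCG_DATA_OBJCGS;
#endif
		INIT_LIST_HEAD(&meta->list);
		raw_spin_lock_init(&meta->lock);
		meta->state = KFENCE_OBJECT_UNUSED;
		meta->addr = addr; /* Initialize for validation in metadata_to_pageaddr(). */
		list_add_tail(&meta->list, &kfence_freelist);

		/* Protect the right redzone. */
		if (unlikely(!kfence_protect(addr + PAGE_SIZE)))
			return addr;

		addr += 2 * PAGE_SIZE;
	}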
Hi Muchun,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on hnaz-mm/master]

url:    https://github.com/intel-lab-lkp/linux/commits/Muchun-Song/mm-kfence-fix-missing-objcg-housekeeping-for-SLAB/20220327-132038
base:   https://github.com/hnaz/linux-mm master
config: x86_64-randconfig-c022 (https://download.01.org/0day-ci/archive/20220327/202203271619.Ni4lY7Mc-lkp@intel.com/config)
compiler: gcc-9 (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
reproduce (this is a W=1 build):
        # https://github.com/intel-lab-lkp/linux/commit/a33cf78311711db98d9f77541d0a4b50bc466875
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Muchun-Song/mm-kfence-fix-missing-objcg-housekeeping-for-SLAB/20220327-132038
        git checkout a33cf78311711db98d9f77541d0a4b50bc466875
        # save the config file to linux build tree
        mkdir build_dir
        make W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash mm/kfence/

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   mm/kfence/core.c: In function 'kfence_init_pool':
>> mm/kfence/core.c:593:36: warning: passing argument 1 of 'virt_to_slab' makes pointer from integer without a cast [-Wint-conversion]
     593 |   struct slab *slab = virt_to_slab(addr);
         |                                    ^~~~
         |                                    |
         |                                    long unsigned int
   In file included from mm/kfence/kfence.h:17,
                    from mm/kfence/core.c:35:
   mm/kfence/../slab.h:173:53: note: expected 'const void *' but argument is of type 'long unsigned int'
     173 | static inline struct slab *virt_to_slab(const void *addr)
         |                                          ~~~~~~~~~~~~^~~~
   mm/kfence/core.c:597:7: error: 'struct slab' has no member named 'memcg_data'
     597 |   slab->memcg_data = (unsigned long)&meta->objcg | MEMCG_DATA_OBJCGS;
         |       ^~
   mm/kfence/core.c:597:52: error: 'MEMCG_DATA_OBJCGS' undeclared (first use in this function)
     597 |   slab->memcg_data = (unsigned long)&meta->objcg | MEMCG_DATA_OBJCGS;
         |                                                    ^~~~~~~~~~~~~~~~~
   mm/kfence/core.c:597:52: note: each undeclared identifier is reported only once for each function it appears in


vim +/virt_to_slab +593 mm/kfence/core.c

   543	
   544	/*
   545	 * Initialization of the KFENCE pool after its allocation.
   546	 * Returns 0 on success; otherwise returns the address up to
   547	 * which partial initialization succeeded.
   548	 */
   549	static unsigned long kfence_init_pool(void)
   550	{
   551		unsigned long addr = (unsigned long)__kfence_pool;
   552		struct page *pages;
   553		int i;
   554	
   555		if (!arch_kfence_init_pool())
   556			return addr;
   557	
   558		pages = virt_to_page(addr);
   559	
   560		/*
   561		 * Set up object pages: they must have PG_slab set, to avoid freeing
   562		 * these as real pages.
   563		 *
   564		 * We also want to avoid inserting kfence_free() in the kfree()
   565		 * fast-path in SLUB, and therefore need to ensure kfree() correctly
   566		 * enters __slab_free() slow-path.
   567		 */
   568		for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
   569			if (!i || (i % 2))
   570				continue;
   571	
   572			/* Verify we do not have a compound head page. */
   573			if (WARN_ON(compound_head(&pages[i]) != &pages[i]))
   574				return addr;
   575	
   576			__SetPageSlab(&pages[i]);
   577		}
   578	
   579		/*
   580		 * Protect the first 2 pages. The first page is mostly unnecessary, and
   581		 * merely serves as an extended guard page. However, adding one
   582		 * additional page in the beginning gives us an even number of pages,
   583		 * which simplifies the mapping of address to metadata index.
   584		 */
   585		for (i = 0; i < 2; i++) {
   586			if (unlikely(!kfence_protect(addr)))
   587				return addr;
   588	
   589			addr += PAGE_SIZE;
   590		}
   591	
   592		for (i = 0; i < CONFIG_KFENCE_NUM_OBJECTS; i++) {
 > 593			struct slab *slab = virt_to_slab(addr);
   594			struct kfence_metadata *meta = &kfence_metadata[i];
   595	
   596			/* Initialize metadata. */
   597			slab->memcg_data = (unsigned long)&meta->objcg | MEMCG_DATA_OBJCGS;
   598			INIT_LIST_HEAD(&meta->list);
   599			raw_spin_lock_init(&meta->lock);
   600			meta->state = KFENCE_OBJECT_UNUSED;
   601			meta->addr = addr; /* Initialize for validation in metadata_to_pageaddr(). */
   602			list_add_tail(&meta->list, &kfence_freelist);
   603	
   604			/* Protect the right redzone. */
   605			if (unlikely(!kfence_protect(addr + PAGE_SIZE)))
   606				return addr;
   607	
   608			addr += 2 * PAGE_SIZE;
   609		}
   610	
   611		/*
   612		 * The pool is live and will never be deallocated from this point on.
   613		 * Remove the pool object from the kmemleak object tree, as it would
   614		 * otherwise overlap with allocations returned by kfence_alloc(), which
   615		 * are registered with kmemleak through the slab post-alloc hook.
   616		 */
   617		kmemleak_free(__kfence_pool);
   618	
   619		return 0;
   620	}
   621	
On Sun, 27 Mar 2022 at 07:19, Muchun Song <songmuchun@bytedance.com> wrote:
>
> If the kfence object is allocated to be used for an objects vector, then
> this slot of the pool ends up being occupied permanently, since the
> vector is never freed. The solutions could be 1) freeing the vector
> when the kfence object is freed, or 2) allocating all vectors statically.
> Since the memory consumption of object vectors is low, it is better to
> choose 2) to fix the issue; this also reduces the overhead of allocating
> vectors in the future.
>
> Fixes: d3fb45f370d9 ("mm, kfence: insert KFENCE hooks for SLAB")
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
>  mm/kfence/core.c   | 3 +++
>  mm/kfence/kfence.h | 1 +
>  2 files changed, 4 insertions(+)

Thanks for this -- mostly looks good. Minor comments below + also
please fix what the test robot reported.

> diff --git a/mm/kfence/core.c b/mm/kfence/core.c
> index 13128fa13062..9976b3f0d097 100644
> --- a/mm/kfence/core.c
> +++ b/mm/kfence/core.c
> @@ -579,9 +579,11 @@ static bool __init kfence_init_pool(void)
>  	}
> 
>  	for (i = 0; i < CONFIG_KFENCE_NUM_OBJECTS; i++) {
> +		struct slab *slab = virt_to_slab(addr);
>  		struct kfence_metadata *meta = &kfence_metadata[i];
> 
>  		/* Initialize metadata. */
> +		slab->memcg_data = (unsigned long)&meta->objcg | MEMCG_DATA_OBJCGS;

Maybe just move it to kfence_guarded_alloc(), see "/* Set required
slab fields */", where similar initialization on slab is done.

>  		INIT_LIST_HEAD(&meta->list);
>  		raw_spin_lock_init(&meta->lock);
>  		meta->state = KFENCE_OBJECT_UNUSED;
> @@ -938,6 +940,7 @@ void __kfence_free(void *addr)
>  {
>  	struct kfence_metadata *meta = addr_to_metadata((unsigned long)addr);
> 
> +	KFENCE_WARN_ON(meta->objcg);

This holds true for both SLAB and SLUB, right? (I think it does, but
just double-checking.)

>  	/*
>  	 * If the objects of the cache are SLAB_TYPESAFE_BY_RCU, defer freeing
>  	 * the object, as the object page may be recycled for other-typed
> diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
> index 2a2d5de9d379..6f0e1aece3f8 100644
> --- a/mm/kfence/kfence.h
> +++ b/mm/kfence/kfence.h
> @@ -89,6 +89,7 @@ struct kfence_metadata {
>  	struct kfence_track free_track;
>  	/* For updating alloc_covered on frees. */
>  	u32 alloc_stack_hash;
> +	struct obj_cgroup *objcg;
>  };
> 
>  extern struct kfence_metadata kfence_metadata[CONFIG_KFENCE_NUM_OBJECTS];
> --
> 2.11.0
>
On Mon, Mar 28, 2022 at 1:31 AM Marco Elver <elver@google.com> wrote:
>
> On Sun, 27 Mar 2022 at 07:19, Muchun Song <songmuchun@bytedance.com> wrote:
> >
> > If the kfence object is allocated to be used for an objects vector, then
> > this slot of the pool ends up being occupied permanently, since the
> > vector is never freed. The solutions could be 1) freeing the vector
> > when the kfence object is freed, or 2) allocating all vectors statically.
> > Since the memory consumption of object vectors is low, it is better to
> > choose 2) to fix the issue; this also reduces the overhead of allocating
> > vectors in the future.
> >
> > Fixes: d3fb45f370d9 ("mm, kfence: insert KFENCE hooks for SLAB")
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > ---
> >  mm/kfence/core.c   | 3 +++
> >  mm/kfence/kfence.h | 1 +
> >  2 files changed, 4 insertions(+)
>
> Thanks for this -- mostly looks good. Minor comments below + also
> please fix what the test robot reported.

Will do.

> > diff --git a/mm/kfence/core.c b/mm/kfence/core.c
> > index 13128fa13062..9976b3f0d097 100644
> > --- a/mm/kfence/core.c
> > +++ b/mm/kfence/core.c
> > @@ -579,9 +579,11 @@ static bool __init kfence_init_pool(void)
> >  	}
> >
> >  	for (i = 0; i < CONFIG_KFENCE_NUM_OBJECTS; i++) {
> > +		struct slab *slab = virt_to_slab(addr);
> >  		struct kfence_metadata *meta = &kfence_metadata[i];
> >
> >  		/* Initialize metadata. */
> > +		slab->memcg_data = (unsigned long)&meta->objcg | MEMCG_DATA_OBJCGS;
>
> Maybe just move it to kfence_guarded_alloc(), see "/* Set required
> slab fields */", where similar initialization on slab is done.

But slab->memcg_data is special, since it only needs to be
initialized once. I think it is better to move it to the place where
__SetPageSlab(&pages[i]) is done. What do you think?

> >  		INIT_LIST_HEAD(&meta->list);
> >  		raw_spin_lock_init(&meta->lock);
> >  		meta->state = KFENCE_OBJECT_UNUSED;
> > @@ -938,6 +940,7 @@ void __kfence_free(void *addr)
> >  {
> >  	struct kfence_metadata *meta = addr_to_metadata((unsigned long)addr);
> >
> > +	KFENCE_WARN_ON(meta->objcg);
>
> This holds true for both SLAB and SLUB, right? (I think it does, but
> just double-checking.)

Right.

Thanks.
On Mon, 28 Mar 2022 at 03:53, Muchun Song <songmuchun@bytedance.com> wrote:
>
> On Mon, Mar 28, 2022 at 1:31 AM Marco Elver <elver@google.com> wrote:
> >
> > On Sun, 27 Mar 2022 at 07:19, Muchun Song <songmuchun@bytedance.com> wrote:
> > >
> > > If the kfence object is allocated to be used for an objects vector, then
> > > this slot of the pool ends up being occupied permanently, since the
> > > vector is never freed. The solutions could be 1) freeing the vector
> > > when the kfence object is freed, or 2) allocating all vectors statically.
> > > Since the memory consumption of object vectors is low, it is better to
> > > choose 2) to fix the issue; this also reduces the overhead of allocating
> > > vectors in the future.
> > >
> > > Fixes: d3fb45f370d9 ("mm, kfence: insert KFENCE hooks for SLAB")
> > > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > > ---
> > >  mm/kfence/core.c   | 3 +++
> > >  mm/kfence/kfence.h | 1 +
> > >  2 files changed, 4 insertions(+)
> >
> > Thanks for this -- mostly looks good. Minor comments below + also
> > please fix what the test robot reported.
>
> Will do.
>
> > > diff --git a/mm/kfence/core.c b/mm/kfence/core.c
> > > index 13128fa13062..9976b3f0d097 100644
> > > --- a/mm/kfence/core.c
> > > +++ b/mm/kfence/core.c
> > > @@ -579,9 +579,11 @@ static bool __init kfence_init_pool(void)
> > >  	}
> > >
> > >  	for (i = 0; i < CONFIG_KFENCE_NUM_OBJECTS; i++) {
> > > +		struct slab *slab = virt_to_slab(addr);
> > >  		struct kfence_metadata *meta = &kfence_metadata[i];
> > >
> > >  		/* Initialize metadata. */
> > > +		slab->memcg_data = (unsigned long)&meta->objcg | MEMCG_DATA_OBJCGS;
> >
> > Maybe just move it to kfence_guarded_alloc(), see "/* Set required
> > slab fields */", where similar initialization on slab is done.
>
> But slab->memcg_data is special, since it only needs to be
> initialized once. I think it is better to move it to the place where
> __SetPageSlab(&pages[i]) is done. What do you think?

That's fair.
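For concreteness, a rough sketch of what initializing memcg_data next to __SetPageSlab() could look like (illustrative only; the exact shape is up to the next version of the patch, and it assumes the page_slab() helper from mm/slab.h). The index arithmetic follows from the pool layout described in kfence_init_pool(): two leading guard pages, then two pages per object, so object page i maps to metadata slot i / 2 - 1:

	for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
		struct slab *slab = page_slab(&pages[i]);

		if (!i || (i % 2))
			continue;

		/* Verify we do not have a compound head page. */
		if (WARN_ON(compound_head(&pages[i]) != &pages[i]))
			return addr;

		__SetPageSlab(&pages[i]);
#ifdef CONFIG_MEMCG
		/*
		 * Object page i belongs to kfence object i / 2 - 1: the pool
		 * starts with two guard pages, and each object occupies an
		 * object page plus a redzone page.
		 */
		slab->memcg_data = (unsigned long)&kfence_metadata[i / 2 - 1].objcg |
				   MEMCG_DATA_OBJCGS;
#endif
	}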
diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 13128fa13062..9976b3f0d097 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -579,9 +579,11 @@ static bool __init kfence_init_pool(void)
 	}
 
 	for (i = 0; i < CONFIG_KFENCE_NUM_OBJECTS; i++) {
+		struct slab *slab = virt_to_slab(addr);
 		struct kfence_metadata *meta = &kfence_metadata[i];
 
 		/* Initialize metadata. */
+		slab->memcg_data = (unsigned long)&meta->objcg | MEMCG_DATA_OBJCGS;
 		INIT_LIST_HEAD(&meta->list);
 		raw_spin_lock_init(&meta->lock);
 		meta->state = KFENCE_OBJECT_UNUSED;
@@ -938,6 +940,7 @@ void __kfence_free(void *addr)
 {
 	struct kfence_metadata *meta = addr_to_metadata((unsigned long)addr);
 
+	KFENCE_WARN_ON(meta->objcg);
 	/*
 	 * If the objects of the cache are SLAB_TYPESAFE_BY_RCU, defer freeing
 	 * the object, as the object page may be recycled for other-typed
diff --git a/mm/kfence/kfence.h b/mm/kfence/kfence.h
index 2a2d5de9d379..6f0e1aece3f8 100644
--- a/mm/kfence/kfence.h
+++ b/mm/kfence/kfence.h
@@ -89,6 +89,7 @@ struct kfence_metadata {
 	struct kfence_track free_track;
 	/* For updating alloc_covered on frees. */
 	u32 alloc_stack_hash;
+	struct obj_cgroup *objcg;
 };
 
 extern struct kfence_metadata kfence_metadata[CONFIG_KFENCE_NUM_OBJECTS];
If the kfence object is allocated to be used for an objects vector, then
this slot of the pool ends up being occupied permanently, since the
vector is never freed. The solutions could be 1) freeing the vector when
the kfence object is freed, or 2) allocating all vectors statically.
Since the memory consumption of object vectors is low, it is better to
choose 2) to fix the issue; this also reduces the overhead of allocating
vectors in the future.

Fixes: d3fb45f370d9 ("mm, kfence: insert KFENCE hooks for SLAB")
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/kfence/core.c   | 3 +++
 mm/kfence/kfence.h | 1 +
 2 files changed, 4 insertions(+)
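Why a single pointer per kfence object is enough: each kfence "slab" backs exactly one object, so the per-slab obj_cgroup vector that the memcg accounting code dereferences degenerates to a one-element array, and pointing memcg_data at meta->objcg lets the metadata field stand in for that array. A simplified sketch of the consumer side (modelled on slab_objcgs() in mm/slab.h; the exact definitions vary between kernel versions):

/* Simplified from mm/slab.h: the obj_cgroup vector attached to a slab. */
static inline struct obj_cgroup **slab_objcgs(struct slab *slab)
{
	unsigned long memcg_data = READ_ONCE(slab->memcg_data);

	/* Strip the MEMCG_DATA_OBJCGS flag bit to recover the pointer. */
	return (struct obj_cgroup **)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
}

/*
 * For a kfence allocation, obj_to_index() is always 0, so the charging
 * code effectively does:
 *
 *	slab_objcgs(slab)[0] = objcg;	// i.e. it writes meta->objcg
 */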