[v2,00/33] Separate struct slab from struct page

Message ID 20211201181510.18784-1-vbabka@suse.cz

Message

Vlastimil Babka Dec. 1, 2021, 6:14 p.m. UTC
Folks from non-slab subsystems are Cc'd only to patches affecting them, and
this cover letter.

Series also available in git, based on 5.16-rc3:
https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2

The plan: as with my SLUB PREEMPT_RT series in 5.15, I would prefer to again go
the git pull request route for eventually merging this, as it's also not a small
series. I will thus reply to this mail asking for my branch to be included in
linux-next.

As stated in the v1/RFC cover letter, I wouldn't mind then continuing to
maintain a git tree for all slab patches in general. It was apparently
already done that way before, by Pekka:
https://lore.kernel.org/linux-mm/alpine.DEB.2.00.1107221108190.2996@tiger/

Changes from v1/RFC:
https://lore.kernel.org/all/20211116001628.24216-1-vbabka@suse.cz/
- Added virt_to_folio() and folio_address() in the new Patch 1 (sketched below).
- Addressed feedback from Andrey Konovalov and Matthew Wilcox (Thanks!)
- Added Tested-by: Marco Elver for the KFENCE parts (Thanks!)
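
For reference, the two new helpers are thin wrappers along these lines
(a sketch; Patch 1 itself is authoritative):

  static inline struct folio *virt_to_folio(const void *x)
  {
  	struct page *page = virt_to_page(x);

  	return page_folio(page);
  }

  static inline void *folio_address(const struct folio *folio)
  {
  	return page_address(&folio->page);
  }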

Previous version from Matthew Wilcox:
https://lore.kernel.org/all/20211004134650.4031813-1-willy@infradead.org/

LWN coverage of the above:
https://lwn.net/Articles/871982/

This is originally an offshoot of the folio work by Matthew. One of the more
complex parts of the struct page definition is the part used by the slab
allocators. It would be good for the MM in general if struct slab were its own
data type, and it also helps to prevent tail pages from slipping in anywhere.
As Matthew requested in his proof-of-concept series, I have taken over the
development of this series, so it's a mix of patches from him (often modified
by me) and my own.

One big difference is the use of coccinelle to perform the relatively trivial
parts of the conversions automatically and at once, instead of a larger number
of smaller incremental reviewable steps. Thanks to Julia Lawall and Luis
Chamberlain for all their help!

Another notable difference (based also on review feedback) is that I don't
represent with a struct slab the large kmalloc allocations, which are not
really a slab but use the page allocator directly. When going from an object
address to a struct slab, the code first tests the folio slab flag, and only
if it's set does it convert to struct slab. This makes the struct slab type
stronger.
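
Schematically, the lookup works like this (a simplified sketch using helpers
from this series; the function name is made up for illustration, and
slab_cache applies to the SLAB/SLUB variants of struct slab):

  static struct kmem_cache *sketch_cache_from_obj(const void *object)
  {
  	struct folio *folio = virt_to_folio(object);

  	/* Large kmalloc: allocated by the page allocator, not a slab. */
  	if (!folio_test_slab(folio))
  		return NULL;

  	/* Only now is it valid to treat the memory as a struct slab. */
  	return folio_slab(folio)->slab_cache;
  }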

Finally, although Matthew's version didn't use any of the folio work, the
initial folio support has been merged in the meantime, so my version builds on
top of it where appropriate. This eliminates some of the redundant
compound_head() calls, e.g. when testing the slab flag.

To sum up, after this series, the struct page fields used by slab allocators
are moved from struct page to a new struct slab that uses the same physical
storage (the layout match is enforced by compile-time asserts, sketched after
the list below). The availability of the fields is further distinguished by
the selected slab allocator implementation. The advantages include:

- Similarly to folios, if the slab is of order > 0, struct slab is always
  guaranteed to be the head page. Additionally it's guaranteed to be an actual
  slab page, not a large kmalloc. This removes uncertainty and potential for
  bugs.
- It's not possible to accidentally use fields of the slab implementation that's
  not configured.
- Other subsystems cannot use slab's fields in struct page anymore (some
  existing non-slab usages had to be adjusted in this series), so slab
  implementations have more freedom in rearranging them within struct slab.
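
These guarantees rely on struct slab overlaying struct page exactly; the
layout is enforced at compile time in mm/slab.h with asserts along these
lines (abridged):

  #define SLAB_MATCH(pg, sl)						\
  	static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl))
  SLAB_MATCH(flags, __page_flags);
  SLAB_MATCH(compound_head, slab_list);	/* Ensure bit 0 is clear */
  SLAB_MATCH(_refcount, __page_refcount);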

Matthew Wilcox (Oracle) (16):
  mm: Split slab into its own type
  mm: Add account_slab() and unaccount_slab()
  mm: Convert virt_to_cache() to use struct slab
  mm: Convert __ksize() to struct slab
  mm: Use struct slab in kmem_obj_info()
  mm: Convert check_heap_object() to use struct slab
  mm/slub: Convert detached_freelist to use a struct slab
  mm/slub: Convert kfree() to use a struct slab
  mm/slub: Convert print_page_info() to print_slab_info()
  mm/slub: Convert pfmemalloc_match() to take a struct slab
  mm/slob: Convert SLOB to use struct slab
  mm/kasan: Convert to struct folio and struct slab
  zsmalloc: Stop using slab fields in struct page
  bootmem: Use page->index instead of page->freelist
  iommu: Use put_pages_list
  mm: Remove slab from struct page

Vlastimil Babka (17):
  mm: add virt_to_folio() and folio_address()
  mm/slab: Dissolve slab_map_pages() in its caller
  mm/slub: Make object_err() static
  mm/slub: Convert __slab_lock() and __slab_unlock() to struct slab
  mm/slub: Convert alloc_slab_page() to return a struct slab
  mm/slub: Convert __free_slab() to use struct slab
  mm/slub: Convert most struct page to struct slab by spatch
  mm/slub: Finish struct page to struct slab conversion
  mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab
  mm/slab: Convert most struct page to struct slab by spatch
  mm/slab: Finish struct page to struct slab conversion
  mm: Convert struct page to struct slab in functions used by other
    subsystems
  mm/memcg: Convert slab objcgs from struct page to struct slab
  mm/kfence: Convert kfence_guarded_alloc() to struct slab
  mm/sl*b: Differentiate struct slab fields by sl*b implementations
  mm/slub: Simplify struct slab slabs field definition
  mm/slub: Define struct slab fields for CONFIG_SLUB_CPU_PARTIAL only
    when enabled

 arch/x86/mm/init_64.c          |    2 +-
 drivers/iommu/amd/io_pgtable.c |   59 +-
 drivers/iommu/dma-iommu.c      |   11 +-
 drivers/iommu/intel/iommu.c    |   89 +--
 include/linux/bootmem_info.h   |    2 +-
 include/linux/iommu.h          |    3 +-
 include/linux/kasan.h          |    9 +-
 include/linux/memcontrol.h     |   48 --
 include/linux/mm.h             |   12 +
 include/linux/mm_types.h       |   38 +-
 include/linux/page-flags.h     |   37 -
 include/linux/slab.h           |    8 -
 include/linux/slab_def.h       |   16 +-
 include/linux/slub_def.h       |   29 +-
 mm/bootmem_info.c              |    7 +-
 mm/kasan/common.c              |   27 +-
 mm/kasan/generic.c             |    8 +-
 mm/kasan/kasan.h               |    1 +
 mm/kasan/quarantine.c          |    2 +-
 mm/kasan/report.c              |   13 +-
 mm/kasan/report_tags.c         |   10 +-
 mm/kfence/core.c               |   17 +-
 mm/kfence/kfence_test.c        |    6 +-
 mm/memcontrol.c                |   43 +-
 mm/slab.c                      |  455 ++++++-------
 mm/slab.h                      |  322 ++++++++-
 mm/slab_common.c               |    8 +-
 mm/slob.c                      |   46 +-
 mm/slub.c                      | 1164 ++++++++++++++++----------------
 mm/sparse.c                    |    2 +-
 mm/usercopy.c                  |   13 +-
 mm/zsmalloc.c                  |   18 +-
 32 files changed, 1317 insertions(+), 1208 deletions(-)

Comments

Vlastimil Babka Dec. 1, 2021, 6:39 p.m. UTC | #1
On 12/1/21 19:14, Vlastimil Babka wrote:
> Folks from non-slab subsystems are Cc'd only to patches affecting them, and
> this cover letter.
> 
> Series also available in git, based on 5.16-rc3:
> https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
> 
> The plan: as with my SLUB PREEMPT_RT series in 5.15, I would prefer to again go
> the git pull request route for eventually merging this, as it's also not a small
> series. I will thus reply to this mail asking for my branch to be included in
> linux-next.
> 
> As stated in the v1/RFC cover letter, I wouldn't mind then continuing to
> maintain a git tree for all slab patches in general. It was apparently
> already done that way before, by Pekka:
> https://lore.kernel.org/linux-mm/alpine.DEB.2.00.1107221108190.2996@tiger/

Hi Stephen,

Please include a new tree in linux-next:

git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git slab-next

i.e.
https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-next

which is now identical to the slab-struct_slab-v2r2 branch [1].

When I tried to merge this into next-20211201, there were minor conflicts with
two patches from mmotm:
zsmalloc-move-huge-compressed-obj-from-page-to-zspage.patch
mm-memcg-relocate-mod_objcg_mlstate-get_obj_stock-and-put_obj_stock.patch

Both appear to be just a change in context.

Thanks,
Vlastimil

[1] https://lore.kernel.org/all/20211201181510.18784-1-vbabka@suse.cz/

> [...]
Vlastimil Babka Dec. 1, 2021, 8:34 p.m. UTC | #2
On 12/1/21 19:39, Vlastimil Babka wrote:
> On 12/1/21 19:14, Vlastimil Babka wrote:
>> [...]
> 
> Hi Stephen,
> 
> Please include a new tree in linux-next:

OK, not yet, please:
https://lore.kernel.org/all/70fbdc70-6838-0740-43e3-ed20caff47bf@arm.com/

I will reorder the patches so that changes needed in other subsystems
and the dependent actual removal of slab fields from struct page will be
ordered last, and only ask for the first, mm/-contained part to be in
-next. The patches for other subsystems can then be reviewed and picked
up by their maintainers independently, while the slab implementations are
already using struct slab.

> [...]
Vlastimil Babka Dec. 2, 2021, 12:25 p.m. UTC | #3
On 12/1/21 19:14, Vlastimil Babka wrote:
> Folks from non-slab subsystems are Cc'd only to patches affecting them, and
> this cover letter.
> 
> Series also available in git, based on 5.16-rc3:
> https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2

I have pushed a v3, but I'm not going to resend it immediately, to avoid
unnecessary spamming; the difference is just that some patches are removed and
others reordered, so the current v2 posting should still be sufficient for
on-list review:

https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v3r1

patch 29/33 iommu: Use put_pages_list
- removed, as this version is broken and Robin Murphy has meanwhile
incorporated it partially into his series:
https://lore.kernel.org/lkml/cover.1637671820.git.robin.murphy@arm.com/

patch 30/33 mm: Remove slab from struct page
- removed and postponed for later, as it can only be applied after the
iommu use of page.freelist is resolved

patch 27/33 zsmalloc: Stop using slab fields in struct page
patch 28/33 bootmem: Use page->index instead of page->freelist
- moved towards the end of the series, to further separate out the part that
adjusts non-slab users of slab fields, working towards removing those fields
from struct page.
Vlastimil Babka Dec. 2, 2021, 4:36 p.m. UTC | #4
On 12/1/21 21:34, Vlastimil Babka wrote:
> On 12/1/21 19:39, Vlastimil Babka wrote:
>> On 12/1/21 19:14, Vlastimil Babka wrote:
>>> [...]
>> 
>> Hi Stephen,
>> 
>> Please include a new tree in linux-next:
> 
> OK, not yet, please:
> https://lore.kernel.org/all/70fbdc70-6838-0740-43e3-ed20caff47bf@arm.com/
> 
> I will reorder the patches so that changes needed in other subsystems
> and the dependent actual removal of slab fields from struct page will be
> ordered last, and only ask for the first, mm/-contained part to be in
> -next. The patches for other subsystems can then be reviewed and picked
> up by their maintainers independently, while the slab implementations are
> already using struct slab.

OK, I've done that and removed the iommu commit, and the dependent removal of
slab-specific struct page fields, until that is resolved. The slab
implementations are nevertheless converted to struct slab, and zsmalloc and
bootmem stop using the slab's struct page fields, so the struct page cleanup
can be finished any time later, once iommu is dealt with.

> 
>> git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git slab-next

So the 'slab-next' branch is now ready for linux-next; it is identical to
branch slab-struct_slab-v3r1, head d395d823b3ae.

>> [...]
Stephen Rothwell Dec. 2, 2021, 8:39 p.m. UTC | #5
Hi Vlastimil,

On Thu, 2 Dec 2021 17:36:45 +0100 Vlastimil Babka <vbabka@suse.cz> wrote:
> [...]
>
> So the 'slab-next' branch is now ready for linux-next; it is identical to
> branch slab-struct_slab-v3r1, head d395d823b3ae.
>
> [...]

Added from today.  It would be good to see some Reviewed-by/Tested-by
tags in these commits ...

Thanks for adding your subsystem tree as a participant of linux-next.  As
you may know, this is not a judgement of your code.  The purpose of
linux-next is for integration testing and to lower the impact of
conflicts between subsystems in the next merge window. 

You will need to ensure that the patches/commits in your tree/series have
been:
     * submitted under GPL v2 (or later) and include the Contributor's
        Signed-off-by,
     * posted to the relevant mailing list,
     * reviewed by you (or another maintainer of your subsystem tree),
     * successfully unit tested, and 
     * destined for the current or next Linux merge window.

Basically, this should be just what you would send to Linus (or ask him
to fetch).  It is allowed to be rebased if you deem it necessary.
Vlastimil Babka Dec. 14, 2021, 12:57 p.m. UTC | #6
On 12/1/21 19:14, Vlastimil Babka wrote:
> Folks from non-slab subsystems are Cc'd only to patches affecting them, and
> this cover letter.
> 
> Series also available in git, based on 5.16-rc3:
> https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2

Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:

 1:  10b656f9eb1e =  1:  10b656f9eb1e mm: add virt_to_folio() and folio_address()
 2:  5e6ad846acf1 =  2:  5e6ad846acf1 mm/slab: Dissolve slab_map_pages() in its caller
 3:  48d4e9407aa0 =  3:  48d4e9407aa0 mm/slub: Make object_err() static
 4:  fe1e19081321 =  4:  fe1e19081321 mm: Split slab into its own type
 5:  af7fd46fbb9b =  5:  af7fd46fbb9b mm: Add account_slab() and unaccount_slab()
 6:  7ed088d601d9 =  6:  7ed088d601d9 mm: Convert virt_to_cache() to use struct slab
 7:  1d41188b9401 =  7:  1d41188b9401 mm: Convert __ksize() to struct slab
 8:  5d9d1231461f !  8:  8fd22e0b086e mm: Use struct slab in kmem_obj_info()
    @@ Commit message
         slab type instead of the page type, we make it obvious that this can
         only be called for slabs.
     
    +    [ vbabka@suse.cz: also convert the related kmem_valid_obj() to folios ]
    +
         Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
         Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
     
    @@ mm/slab.h: struct kmem_obj_info {
      #endif /* MM_SLAB_H */
     
      ## mm/slab_common.c ##
    +@@ mm/slab_common.c: bool slab_is_available(void)
    +  */
    + bool kmem_valid_obj(void *object)
    + {
    +-	struct page *page;
    ++	struct folio *folio;
    + 
    + 	/* Some arches consider ZERO_SIZE_PTR to be a valid address. */
    + 	if (object < (void *)PAGE_SIZE || !virt_addr_valid(object))
    + 		return false;
    +-	page = virt_to_head_page(object);
    +-	return PageSlab(page);
    ++	folio = virt_to_folio(object);
    ++	return folio_test_slab(folio);
    + }
    + EXPORT_SYMBOL_GPL(kmem_valid_obj);
    + 
     @@ mm/slab_common.c: void kmem_dump_obj(void *object)
      {
      	char *cp = IS_ENABLED(CONFIG_MMU) ? "" : "/vmalloc";
    @@ mm/slub.c: int __kmem_cache_shutdown(struct kmem_cache *s)
      	objp = base + s->size * objnr;
      	kpp->kp_objp = objp;
     -	if (WARN_ON_ONCE(objp < base || objp >= base + page->objects * s->size || (objp - base) % s->size) ||
    -+	if (WARN_ON_ONCE(objp < base || objp >= base + slab->objects * s->size || (objp - base) % s->size) ||
    ++	if (WARN_ON_ONCE(objp < base || objp >= base + slab->objects * s->size
    ++			 || (objp - base) % s->size) ||
      	    !(s->flags & SLAB_STORE_USER))
      		return;
      #ifdef CONFIG_SLUB_DEBUG
 9:  3aef771be335 !  9:  c97e73c3b6c2 mm: Convert check_heap_object() to use struct slab
    @@ mm/slab.h: struct kmem_obj_info {
     +#else
     +static inline
     +void __check_heap_object(const void *ptr, unsigned long n,
    -+			 const struct slab *slab, bool to_user) { }
    ++			 const struct slab *slab, bool to_user)
    ++{
    ++}
     +#endif
     +
      #endif /* MM_SLAB_H */
10:  2253e45e6bef = 10:  da05e0f7179c mm/slub: Convert detached_freelist to use a struct slab
11:  f28202bc27ba = 11:  383887e77104 mm/slub: Convert kfree() to use a struct slab
12:  31b58b1e914f = 12:  c46be093c637 mm/slub: Convert __slab_lock() and __slab_unlock() to struct slab
13:  636406a3ad59 = 13:  49dbbf917052 mm/slub: Convert print_page_info() to print_slab_info()
14:  3b49efda3b6f = 14:  4bb0c932156a mm/slub: Convert alloc_slab_page() to return a struct slab
15:  61a195526d3b ! 15:  4b9761b5cfab mm/slub: Convert __free_slab() to use struct slab
    @@ mm/slub.c: static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int n
      
     -	__ClearPageSlabPfmemalloc(page);
     -	__ClearPageSlab(page);
    +-	/* In union with page->mapping where page allocator expects NULL */
    +-	page->slab_cache = NULL;
     +	__slab_clear_pfmemalloc(slab);
     +	__folio_clear_slab(folio);
    - 	/* In union with page->mapping where page allocator expects NULL */
    --	page->slab_cache = NULL;
    -+	slab->slab_cache = NULL;
    ++	folio->mapping = NULL;
      	if (current->reclaim_state)
      		current->reclaim_state->reclaimed_slab += pages;
     -	unaccount_slab(page_slab(page), order, s);
16:  987c7ed31580 = 16:  f384ec918065 mm/slub: Convert pfmemalloc_match() to take a struct slab
17:  cc742564237e ! 17:  06738ade4e17 mm/slub: Convert most struct page to struct slab by spatch
    @@ Commit message
     
         // Options: --include-headers --no-includes --smpl-spacing include/linux/slub_def.h mm/slub.c
         // Note: needs coccinelle 1.1.1 to avoid breaking whitespace, and ocaml for the
    -    // embedded script script
    +    // embedded script
     
         // build list of functions to exclude from applying the next rule
         @initialize:ocaml@
18:  b45acac9aace = 18:  1a4f69a4cced mm/slub: Finish struct page to struct slab conversion
19:  76c3eeb39684 ! 19:  1d62d706e884 mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab
    @@ mm/slab.c: slab_out_of_memory(struct kmem_cache *cachep, gfp_t gfpflags, int nod
     -	__ClearPageSlabPfmemalloc(page);
     -	__ClearPageSlab(page);
     -	page_mapcount_reset(page);
    +-	/* In union with page->mapping where page allocator expects NULL */
    +-	page->slab_cache = NULL;
     +	BUG_ON(!folio_test_slab(folio));
     +	__slab_clear_pfmemalloc(slab);
     +	__folio_clear_slab(folio);
     +	page_mapcount_reset(folio_page(folio, 0));
    - 	/* In union with page->mapping where page allocator expects NULL */
    --	page->slab_cache = NULL;
    -+	slab->slab_cache = NULL;
    ++	folio->mapping = NULL;
      
      	if (current->reclaim_state)
      		current->reclaim_state->reclaimed_slab += 1 << order;
20:  ed6144dbebce ! 20:  fd4c3aabacd3 mm/slab: Convert most struct page to struct slab by spatch
    @@ Commit message
     
         // Options: --include-headers --no-includes --smpl-spacing mm/slab.c
         // Note: needs coccinelle 1.1.1 to avoid breaking whitespace, and ocaml for the
    -    // embedded script script
    +    // embedded script
     
         // build list of functions for applying the next rule
         @initialize:ocaml@
21:  17fb81e601e6 = 21:  b59720b2edba mm/slab: Finish struct page to struct slab conversion
22:  4e8d1faebc24 ! 22:  65ced071c3e7 mm: Convert struct page to struct slab in functions used by other subsystems
    @@ Commit message
           ,...)
     
         Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
    +    Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
         Cc: Julia Lawall <julia.lawall@inria.fr>
         Cc: Luis Chamberlain <mcgrof@kernel.org>
         Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
23:  eefa12e18a92 = 23:  c9c8dee01e5d mm/memcg: Convert slab objcgs from struct page to struct slab
24:  fa5ba4107ce2 ! 24:  def731137335 mm/slob: Convert SLOB to use struct slab
    @@ Metadata
     Author: Matthew Wilcox (Oracle) <willy@infradead.org>
     
      ## Commit message ##
    -    mm/slob: Convert SLOB to use struct slab
    +    mm/slob: Convert SLOB to use struct slab and struct folio
     
    -    Use struct slab throughout the slob allocator.
    +    Use struct slab throughout the slob allocator. Where non-slab page can appear
    +    use struct folio instead of struct page.
     
         [ vbabka@suse.cz: don't introduce wrappers for PageSlobFree in mm/slab.h just
           for the single callers being wrappers in mm/slob.c ]
     
    +    [ Hyeonggon Yoo <42.hyeyoo@gmail.com>: fix NULL pointer deference ]
    +
         Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
         Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
     
      ## mm/slob.c ##
    +@@
    +  * If kmalloc is asked for objects of PAGE_SIZE or larger, it calls
    +  * alloc_pages() directly, allocating compound pages so the page order
    +  * does not have to be separately tracked.
    +- * These objects are detected in kfree() because PageSlab()
    ++ * These objects are detected in kfree() because folio_test_slab()
    +  * is false for them.
    +  *
    +  * SLAB is emulated on top of SLOB by simply calling constructors and
     @@ mm/slob.c: static LIST_HEAD(free_slob_large);
      /*
       * slob_page_free: true for pages on free_slob_pages list.
    @@ mm/slob.c: static void *slob_page_alloc(struct page *sp, size_t size, int align,
      							int align_offset)
      {
     -	struct page *sp;
    ++	struct folio *folio;
     +	struct slab *sp;
      	struct list_head *slob_list;
      	slob_t *b = NULL;
    @@ mm/slob.c: static void *slob_alloc(size_t size, gfp_t gfp, int align, int node,
      			return NULL;
     -		sp = virt_to_page(b);
     -		__SetPageSlab(sp);
    -+		sp = virt_to_slab(b);
    -+		__SetPageSlab(slab_page(sp));
    ++		folio = virt_to_folio(b);
    ++		__folio_set_slab(folio);
    ++		sp = folio_slab(folio);
      
      		spin_lock_irqsave(&slob_lock, flags);
      		sp->units = SLOB_UNITS(PAGE_SIZE);
    @@ mm/slob.c: static void slob_free(void *block, int size)
      		spin_unlock_irqrestore(&slob_lock, flags);
     -		__ClearPageSlab(sp);
     -		page_mapcount_reset(sp);
    -+		__ClearPageSlab(slab_page(sp));
    ++		__folio_clear_slab(slab_folio(sp));
     +		page_mapcount_reset(slab_page(sp));
      		slob_free_pages(b, 0);
      		return;
      	}
    +@@ mm/slob.c: EXPORT_SYMBOL(__kmalloc_node_track_caller);
    + 
    + void kfree(const void *block)
    + {
    +-	struct page *sp;
    ++	struct folio *sp;
    + 
    + 	trace_kfree(_RET_IP_, block);
    + 
    +@@ mm/slob.c: void kfree(const void *block)
    + 		return;
    + 	kmemleak_free(block);
    + 
    +-	sp = virt_to_page(block);
    +-	if (PageSlab(sp)) {
    ++	sp = virt_to_folio(block);
    ++	if (folio_test_slab(sp)) {
    + 		int align = max_t(size_t, ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
    + 		unsigned int *m = (unsigned int *)(block - align);
    + 		slob_free(m, *m + align);
    + 	} else {
    +-		unsigned int order = compound_order(sp);
    +-		mod_node_page_state(page_pgdat(sp), NR_SLAB_UNRECLAIMABLE_B,
    ++		unsigned int order = folio_order(sp);
    ++
    ++		mod_node_page_state(folio_pgdat(sp), NR_SLAB_UNRECLAIMABLE_B,
    + 				    -(PAGE_SIZE << order));
    +-		__free_pages(sp, order);
    ++		__free_pages(folio_page(sp, 0), order);
    + 
    + 	}
    + }
25:  aa4f573a4c96 ! 25:  466b9fb1f6e5 mm/kasan: Convert to struct folio and struct slab
    @@ Commit message
     
         Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
         Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
    +    Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
         Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
         Cc: Alexander Potapenko <glider@google.com>
         Cc: Andrey Konovalov <andreyknvl@gmail.com>
26:  67b7966d2fb6 = 26:  b8159ae8e5cd mm/kfence: Convert kfence_guarded_alloc() to struct slab
31:  d64dfe49c1e7 ! 27:  4525180926f9 mm/sl*b: Differentiate struct slab fields by sl*b implementations
    @@ Commit message
         possible.
     
         This should also prevent accidental use of fields that don't exist in given
    -    implementation. Before this patch virt_to_cache() and and cache_from_obj() was
    -    visible for SLOB (albeit not used), although it relies on the slab_cache field
    +    implementation. Before this patch virt_to_cache() and cache_from_obj() were
    +    visible for SLOB (albeit not used), although they rely on the slab_cache field
         that isn't set by SLOB. With this patch it's now a compile error, so these
         functions are now hidden behind #ifndef CONFIG_SLOB.
     
    @@ mm/kfence/core.c: static void *kfence_guarded_alloc(struct kmem_cache *cache, si
     -		slab->s_mem = addr;
     +#if defined(CONFIG_SLUB)
     +	slab->objects = 1;
    -+#elif defined (CONFIG_SLAB)
    ++#elif defined(CONFIG_SLAB)
     +	slab->s_mem = addr;
     +#endif
      
    @@ mm/slab.h
     +
     +#if defined(CONFIG_SLAB)
     +
    -+	union {
    -+		struct list_head slab_list;
    + 	union {
    + 		struct list_head slab_list;
    +-		struct {	/* Partial pages */
     +		struct rcu_head rcu_head;
     +	};
     +	struct kmem_cache *slab_cache;
     +	void *freelist;	/* array of free object indexes */
    -+	void * s_mem;	/* first object */
    ++	void *s_mem;	/* first object */
     +	unsigned int active;
     +
     +#elif defined(CONFIG_SLUB)
     +
    - 	union {
    - 		struct list_head slab_list;
    --		struct {	/* Partial pages */
    ++	union {
    ++		struct list_head slab_list;
     +		struct rcu_head rcu_head;
     +		struct {
      			struct slab *next;
    @@ mm/slab.h: struct slab {
     +#elif defined(CONFIG_SLOB)
     +
     +	struct list_head slab_list;
    -+	void * __unused_1;
    ++	void *__unused_1;
     +	void *freelist;		/* first free block */
    -+	void * __unused_2;
    ++	void *__unused_2;
     +	int units;
     +
     +#else
    @@ mm/slab.h: struct slab {
      #ifdef CONFIG_MEMCG
      	unsigned long memcg_data;
     @@ mm/slab.h: struct slab {
    - 	static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl))
      SLAB_MATCH(flags, __page_flags);
      SLAB_MATCH(compound_head, slab_list);	/* Ensure bit 0 is clear */
    + SLAB_MATCH(slab_list, slab_list);
     +#ifndef CONFIG_SLOB
      SLAB_MATCH(rcu_head, rcu_head);
    + SLAB_MATCH(slab_cache, slab_cache);
    ++#endif
    ++#ifdef CONFIG_SLAB
    + SLAB_MATCH(s_mem, s_mem);
    + SLAB_MATCH(active, active);
     +#endif
      SLAB_MATCH(_refcount, __page_refcount);
      #ifdef CONFIG_MEMCG
32:  0abf87bae67e = 28:  94b78948d53f mm/slub: Simplify struct slab slabs field definition
33:  813c304f18e4 = 29:  f5261e6375f0 mm/slub: Define struct slab fields for CONFIG_SLUB_CPU_PARTIAL only when enabled
27:  ebce4b5b5ced ! 30:  1414e8c87de6 zsmalloc: Stop using slab fields in struct page
    @@ Commit message
     
         Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
         Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
    -    Cc: Minchan Kim <minchan@kernel.org>
    +    Acked-by: Minchan Kim <minchan@kernel.org>
         Cc: Nitin Gupta <ngupta@vflare.org>
         Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
     
28:  f124425ae7de = 31:  8a3cda6b38eb bootmem: Use page->index instead of page->freelist
29:  82da48c73b2e <  -:  ------------ iommu: Use put_pages_list
30:  181e16dfefbb <  -:  ------------ mm: Remove slab from struct page
 -:  ------------ > 32:  91e069ba116b mm/slob: Remove unnecessary page_mapcount_reset() function call
Vlastimil Babka Dec. 14, 2021, 2:43 p.m. UTC | #7
On 12/14/21 15:38, Hyeonggon Yoo wrote:
> On Tue, Dec 14, 2021 at 01:57:22PM +0100, Vlastimil Babka wrote:
>> On 12/1/21 19:14, Vlastimil Babka wrote:
>> > Folks from non-slab subsystems are Cc'd only to patches affecting them, and
>> > this cover letter.
>> > 
>> > Series also available in git, based on 5.16-rc3:
>> > https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
>> 
>> Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
>> and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:
>> 
> 
> Hello Vlastimil, thank you for the nice work.
> I'm going to review and test the new version soon, in my free time.

Thanks!

> Btw, I gave you some review and test tags and they seem to be missing in the
> new series. Did I do the review/test process wrongly? It's my first time
> reviewing patches, so please let me know if I did it wrong.

You did it right, sorry! I didn't include them as those were for patches that I
was additionally changing after your review/test, and the decision of what
change is substantial enough to need a new test/review is often fuzzy. So if
you can recheck the new versions, that would be great, and then I will pick the
tags up, thanks!

> --
> Thank you.
> Hyeonggon.
Roman Gushchin Dec. 15, 2021, 1:03 a.m. UTC | #8
On Tue, Dec 14, 2021 at 01:57:22PM +0100, Vlastimil Babka wrote:
> On 12/1/21 19:14, Vlastimil Babka wrote:
> > Folks from non-slab subsystems are Cc'd only to patches affecting them, and
> > this cover letter.
> > 
> > Series also available in git, based on 5.16-rc3:
> > https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
> 
> Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
> and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:

Hi Vlastimil!

I've started to review this patchset (btw, really nice work, I like
the resulting code way more). Because I'm looking at v3 and I don't have
the whole v2 in my mailbox, here is what I have so far:

* mm: add virt_to_folio() and folio_address()
Reviewed-by: Roman Gushchin <guro@fb.com>

* mm/slab: Dissolve slab_map_pages() in its caller
Reviewed-by: Roman Gushchin <guro@fb.com>

* mm/slub: Make object_err() static
Reviewed-by: Roman Gushchin <guro@fb.com>

* mm: Split slab into its own type
1) Shouldn't SLAB_MATCH() macro use struct folio instead of struct page for the
comparison?
2) page_slab() is used only in kasan and only in one place, so maybe it's better
to not introduce it as a generic helper?
Other than that
Reviewed-by: Roman Gushchin <guro@fb.com>

* mm: Add account_slab() and unaccount_slab()
1) maybe change the title to convert/replace instead of add?
2) maybe move later changes to memcg_alloc_page_obj_cgroups() to this patch?
Reviewed-by: Roman Gushchin <guro@fb.com>

* mm: Convert virt_to_cache() to use struct slab
Reviewed-by: Roman Gushchin <guro@fb.com>

* mm: Convert __ksize() to struct slab
It looks like certain parts of __ksize() can be merged between slab, slub and slob?
Reviewed-by: Roman Gushchin <guro@fb.com>

* mm: Use struct slab in kmem_obj_info()
Reviewed-by: Roman Gushchin <guro@fb.com>


I'll try to finish reviewing the patchset by the end of the week.

Thanks!

Roman
Roman Gushchin Dec. 15, 2021, 11:38 p.m. UTC | #9
On Tue, Dec 14, 2021 at 05:03:12PM -0800, Roman Gushchin wrote:
> On Tue, Dec 14, 2021 at 01:57:22PM +0100, Vlastimil Babka wrote:
> > On 12/1/21 19:14, Vlastimil Babka wrote:
> > > Folks from non-slab subsystems are Cc'd only to patches affecting them, and
> > > this cover letter.
> > > 
> > > Series also available in git, based on 5.16-rc3:
> > > https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
> > 
> > Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
> > and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:
> 
> Hi Vlastimil!
> 
> I've started to review this patchset (btw, really nice work, I like
> the resulting code way more). Because I'm looking at v3 and I don't have
> the whole v2 in my mailbox, here is what I have so far:
> 
> * mm: add virt_to_folio() and folio_address()
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm/slab: Dissolve slab_map_pages() in its caller
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm/slub: Make object_err() static
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm: Split slab into its own type
> 1) Shouldn't SLAB_MATCH() macro use struct folio instead of struct page for the
> comparison?
> 2) page_slab() is used only in kasan and only in one place, so maybe it's better
> to not introduce it as a generic helper?
> Other than that
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm: Add account_slab() and unaccount_slab()
> 1) maybe change the title to convert/replace instead of add?
> 2) maybe move later changes to memcg_alloc_page_obj_cgroups() to this patch?
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm: Convert virt_to_cache() to use struct slab
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm: Convert __ksize() to struct slab
> It looks like certain parts of __ksize() can be merged between slab, slub and slob?
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm: Use struct slab in kmem_obj_info()
> Reviewed-by: Roman Gushchin <guro@fb.com>

Part 2:

* mm: Convert check_heap_object() to use struct slab
Reviewed-by: Roman Gushchin <guro@fb.com>

* mm/slub: Convert detached_freelist to use a struct slab
How about converting free_nonslab_page() to free_nonslab_folio()?
And maybe renaming it to something like free_large_kmalloc()?
If I'm not missing something, large kmallocs are the only way we can end up
there with a !slab folio/page.
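
Something like this minimal sketch, for illustration (the function name and
the stat accounting are assumptions extrapolated from the kfree() hunks
quoted elsewhere in this thread, not code from the series):

static void free_large_kmalloc(struct folio *folio)
{
	unsigned int order = folio_order(folio);

	/* large kmallocs are accounted as unreclaimable slab memory */
	mod_node_page_state(folio_pgdat(folio), NR_SLAB_UNRECLAIMABLE_B,
			    -(PAGE_SIZE << order));
	__free_pages(folio_page(folio, 0), order);
}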

* mm/slub: Convert kfree() to use a struct slab
Reviewed-by: Roman Gushchin <guro@fb.com>

* mm/slub: Convert __slab_lock() and __slab_unlock() to struct slab
Reviewed-by: Roman Gushchin <guro@fb.com>

* mm/slub: Convert print_page_info() to print_slab_info()
Do we really need to explicitly convert slab_folio()'s result to (struct folio *)?
Reviewed-by: Roman Gushchin <guro@fb.com>

* mm/slub: Convert alloc_slab_page() to return a struct slab
Reviewed-by: Roman Gushchin <guro@fb.com>

* mm/slub: Convert __free_slab() to use struct slab
Reviewed-by: Roman Gushchin <guro@fb.com>

* mm/slub: Convert pfmemalloc_match() to take a struct slab
Cool! Removing pfmemalloc_unsafe() is really nice.
Reviewed-by: Roman Gushchin <guro@fb.com>

* mm/slub: Convert most struct page to struct slab by spatch
Reviewed-by: Roman Gushchin <guro@fb.com>

* mm/slub: Finish struct page to struct slab conversion
Reviewed-by: Roman Gushchin <guro@fb.com>

* mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab
Reviewed-by: Roman Gushchin <guro@fb.com>

* mm/slab: Convert most struct page to struct slab by spatch

Another patch with the same title? Rebase error?

* mm/slab: Finish struct page to struct slab conversion

And this one too?


Thanks!

Roman
Vlastimil Babka Dec. 16, 2021, 9:19 a.m. UTC | #10
On 12/16/21 00:38, Roman Gushchin wrote:
> On Tue, Dec 14, 2021 at 05:03:12PM -0800, Roman Gushchin wrote:
>> On Tue, Dec 14, 2021 at 01:57:22PM +0100, Vlastimil Babka wrote:
>> > On 12/1/21 19:14, Vlastimil Babka wrote:
>> > > Folks from non-slab subsystems are Cc'd only to patches affecting them, and
>> > > this cover letter.
>> > > 
>> > > Series also available in git, based on 5.16-rc3:
>> > > https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
>> > 
>> > Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
>> > and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:
>> 
>> Hi Vlastimil!
>> 
>> I've started to review this patchset (btw, really nice work, I like
>> the resulting code way more). Because I'm looking at v3 and I don't have

Thanks a lot, Roman!

...

> 
> * mm/slab: Convert most struct page to struct slab by spatch
> 
> Another patch with the same title? Rebase error?
> 
> * mm/slab: Finish struct page to struct slab conversion
> 
> And this one too?

No, these are for mm/slab.c, the previous were for mm/slub.c :)

> 
> Thanks!
> 
> Roman
Hyeonggon Yoo Dec. 16, 2021, 3 p.m. UTC | #11
On Tue, Dec 14, 2021 at 01:57:22PM +0100, Vlastimil Babka wrote:
> On 12/1/21 19:14, Vlastimil Babka wrote:
> > Folks from non-slab subsystems are Cc'd only to patches affecting them, and
> > this cover letter.
> > 
> > Series also available in git, based on 5.16-rc3:
> > https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
> 
> Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
> and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:

Reviewing the whole patch series takes longer than I thought.
I'll try to review and test the rest of the patches when I have time.

I added Tested-by if the kernel builds okay and kselftests
do not break the kernel on my machine
(with CONFIG_SLAB/SLUB/SLOB depending on the patch).
Let me know if you know a better way to test a patch.

# mm/slub: Define struct slab fields for CONFIG_SLUB_CPU_PARTIAL only when enabled

Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

Comment:
Works on both SLUB_CPU_PARTIAL and !SLUB_CPU_PARTIAL.
btw, do we need slabs_cpu_partial attribute when we don't use
cpu partials? (!SLUB_CPU_PARTIAL)

# mm/slub: Simplify struct slab slabs field definition
Comment:

This is how struct page looks on the top of v3r3 branch:
struct page {
[...]
                struct {        /* slab, slob and slub */
                        union {
                                struct list_head slab_list;
                                struct {        /* Partial pages */
                                        struct page *next;
#ifdef CONFIG_64BIT
                                        int pages;      /* Nr of pages left */
#else
                                        short int pages;
#endif
                                };
                        };
[...]

It's not consistent with struct slab.
I think this is because "mm: Remove slab from struct page" was dropped.
Would you update some of the patches?

# mm/sl*b: Differentiate struct slab fields by sl*b implementations
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Works with SL[AUO]B on my machine and makes the code much better.

# mm/slob: Convert SLOB to use struct slab and struct folio
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
It still works fine on SLOB.

# mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

# mm/slub: Convert __free_slab() to use struct slab
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

Thanks,
Hyeonggon.

> 
>  1:  10b656f9eb1e =  1:  10b656f9eb1e mm: add virt_to_folio() and folio_address()
>  2:  5e6ad846acf1 =  2:  5e6ad846acf1 mm/slab: Dissolve slab_map_pages() in its caller
>  3:  48d4e9407aa0 =  3:  48d4e9407aa0 mm/slub: Make object_err() static
>  4:  fe1e19081321 =  4:  fe1e19081321 mm: Split slab into its own type
>  5:  af7fd46fbb9b =  5:  af7fd46fbb9b mm: Add account_slab() and unaccount_slab()
>  6:  7ed088d601d9 =  6:  7ed088d601d9 mm: Convert virt_to_cache() to use struct slab
>  7:  1d41188b9401 =  7:  1d41188b9401 mm: Convert __ksize() to struct slab
>  8:  5d9d1231461f !  8:  8fd22e0b086e mm: Use struct slab in kmem_obj_info()
>     @@ Commit message
>          slab type instead of the page type, we make it obvious that this can
>          only be called for slabs.
>      
>     +    [ vbabka@suse.cz: also convert the related kmem_valid_obj() to folios ]
>     +
>          Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
>          Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
>      
>     @@ mm/slab.h: struct kmem_obj_info {
>       #endif /* MM_SLAB_H */
>      
>       ## mm/slab_common.c ##
>     +@@ mm/slab_common.c: bool slab_is_available(void)
>     +  */
>     + bool kmem_valid_obj(void *object)
>     + {
>     +-	struct page *page;
>     ++	struct folio *folio;
>     + 
>     + 	/* Some arches consider ZERO_SIZE_PTR to be a valid address. */
>     + 	if (object < (void *)PAGE_SIZE || !virt_addr_valid(object))
>     + 		return false;
>     +-	page = virt_to_head_page(object);
>     +-	return PageSlab(page);
>     ++	folio = virt_to_folio(object);
>     ++	return folio_test_slab(folio);
>     + }
>     + EXPORT_SYMBOL_GPL(kmem_valid_obj);
>     + 
>      @@ mm/slab_common.c: void kmem_dump_obj(void *object)
>       {
>       	char *cp = IS_ENABLED(CONFIG_MMU) ? "" : "/vmalloc";
>     @@ mm/slub.c: int __kmem_cache_shutdown(struct kmem_cache *s)
>       	objp = base + s->size * objnr;
>       	kpp->kp_objp = objp;
>      -	if (WARN_ON_ONCE(objp < base || objp >= base + page->objects * s->size || (objp - base) % s->size) ||
>     -+	if (WARN_ON_ONCE(objp < base || objp >= base + slab->objects * s->size || (objp - base) % s->size) ||
>     ++	if (WARN_ON_ONCE(objp < base || objp >= base + slab->objects * s->size
>     ++			 || (objp - base) % s->size) ||
>       	    !(s->flags & SLAB_STORE_USER))
>       		return;
>       #ifdef CONFIG_SLUB_DEBUG
>  9:  3aef771be335 !  9:  c97e73c3b6c2 mm: Convert check_heap_object() to use struct slab
>     @@ mm/slab.h: struct kmem_obj_info {
>      +#else
>      +static inline
>      +void __check_heap_object(const void *ptr, unsigned long n,
>     -+			 const struct slab *slab, bool to_user) { }
>     ++			 const struct slab *slab, bool to_user)
>     ++{
>     ++}
>      +#endif
>      +
>       #endif /* MM_SLAB_H */
> 10:  2253e45e6bef = 10:  da05e0f7179c mm/slub: Convert detached_freelist to use a struct slab
> 11:  f28202bc27ba = 11:  383887e77104 mm/slub: Convert kfree() to use a struct slab
> 12:  31b58b1e914f = 12:  c46be093c637 mm/slub: Convert __slab_lock() and __slab_unlock() to struct slab
> 13:  636406a3ad59 = 13:  49dbbf917052 mm/slub: Convert print_page_info() to print_slab_info()
> 14:  3b49efda3b6f = 14:  4bb0c932156a mm/slub: Convert alloc_slab_page() to return a struct slab
> 15:  61a195526d3b ! 15:  4b9761b5cfab mm/slub: Convert __free_slab() to use struct slab
>     @@ mm/slub.c: static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int n
>       
>      -	__ClearPageSlabPfmemalloc(page);
>      -	__ClearPageSlab(page);
>     +-	/* In union with page->mapping where page allocator expects NULL */
>     +-	page->slab_cache = NULL;
>      +	__slab_clear_pfmemalloc(slab);
>      +	__folio_clear_slab(folio);
>     - 	/* In union with page->mapping where page allocator expects NULL */
>     --	page->slab_cache = NULL;
>     -+	slab->slab_cache = NULL;
>     ++	folio->mapping = NULL;
>       	if (current->reclaim_state)
>       		current->reclaim_state->reclaimed_slab += pages;
>      -	unaccount_slab(page_slab(page), order, s);
> 16:  987c7ed31580 = 16:  f384ec918065 mm/slub: Convert pfmemalloc_match() to take a struct slab
> 17:  cc742564237e ! 17:  06738ade4e17 mm/slub: Convert most struct page to struct slab by spatch
>     @@ Commit message
>      
>          // Options: --include-headers --no-includes --smpl-spacing include/linux/slub_def.h mm/slub.c
>          // Note: needs coccinelle 1.1.1 to avoid breaking whitespace, and ocaml for the
>     -    // embedded script script
>     +    // embedded script
>      
>          // build list of functions to exclude from applying the next rule
>          @initialize:ocaml@
> 18:  b45acac9aace = 18:  1a4f69a4cced mm/slub: Finish struct page to struct slab conversion
> 19:  76c3eeb39684 ! 19:  1d62d706e884 mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab
>     @@ mm/slab.c: slab_out_of_memory(struct kmem_cache *cachep, gfp_t gfpflags, int nod
>      -	__ClearPageSlabPfmemalloc(page);
>      -	__ClearPageSlab(page);
>      -	page_mapcount_reset(page);
>     +-	/* In union with page->mapping where page allocator expects NULL */
>     +-	page->slab_cache = NULL;
>      +	BUG_ON(!folio_test_slab(folio));
>      +	__slab_clear_pfmemalloc(slab);
>      +	__folio_clear_slab(folio);
>      +	page_mapcount_reset(folio_page(folio, 0));
>     - 	/* In union with page->mapping where page allocator expects NULL */
>     --	page->slab_cache = NULL;
>     -+	slab->slab_cache = NULL;
>     ++	folio->mapping = NULL;
>       
>       	if (current->reclaim_state)
>       		current->reclaim_state->reclaimed_slab += 1 << order;
> 20:  ed6144dbebce ! 20:  fd4c3aabacd3 mm/slab: Convert most struct page to struct slab by spatch
>     @@ Commit message
>      
>          // Options: --include-headers --no-includes --smpl-spacing mm/slab.c
>          // Note: needs coccinelle 1.1.1 to avoid breaking whitespace, and ocaml for the
>     -    // embedded script script
>     +    // embedded script
>      
>          // build list of functions for applying the next rule
>          @initialize:ocaml@
> 21:  17fb81e601e6 = 21:  b59720b2edba mm/slab: Finish struct page to struct slab conversion
> 22:  4e8d1faebc24 ! 22:  65ced071c3e7 mm: Convert struct page to struct slab in functions used by other subsystems
>     @@ Commit message
>            ,...)
>      
>          Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
>     +    Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
>          Cc: Julia Lawall <julia.lawall@inria.fr>
>          Cc: Luis Chamberlain <mcgrof@kernel.org>
>          Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
> 23:  eefa12e18a92 = 23:  c9c8dee01e5d mm/memcg: Convert slab objcgs from struct page to struct slab
> 24:  fa5ba4107ce2 ! 24:  def731137335 mm/slob: Convert SLOB to use struct slab
>     @@ Metadata
>      Author: Matthew Wilcox (Oracle) <willy@infradead.org>
>      
>       ## Commit message ##
>     -    mm/slob: Convert SLOB to use struct slab
>     +    mm/slob: Convert SLOB to use struct slab and struct folio
>      
>     -    Use struct slab throughout the slob allocator.
>     +    Use struct slab throughout the slob allocator. Where non-slab page can appear
>     +    use struct folio instead of struct page.
>      
>          [ vbabka@suse.cz: don't introduce wrappers for PageSlobFree in mm/slab.h just
>            for the single callers being wrappers in mm/slob.c ]
>      
>     +    [ Hyeonggon Yoo <42.hyeyoo@gmail.com>: fix NULL pointer deference ]
>     +
>          Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
>          Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
>      
>       ## mm/slob.c ##
>     +@@
>     +  * If kmalloc is asked for objects of PAGE_SIZE or larger, it calls
>     +  * alloc_pages() directly, allocating compound pages so the page order
>     +  * does not have to be separately tracked.
>     +- * These objects are detected in kfree() because PageSlab()
>     ++ * These objects are detected in kfree() because folio_test_slab()
>     +  * is false for them.
>     +  *
>     +  * SLAB is emulated on top of SLOB by simply calling constructors and
>      @@ mm/slob.c: static LIST_HEAD(free_slob_large);
>       /*
>        * slob_page_free: true for pages on free_slob_pages list.
>     @@ mm/slob.c: static void *slob_page_alloc(struct page *sp, size_t size, int align,
>       							int align_offset)
>       {
>      -	struct page *sp;
>     ++	struct folio *folio;
>      +	struct slab *sp;
>       	struct list_head *slob_list;
>       	slob_t *b = NULL;
>     @@ mm/slob.c: static void *slob_alloc(size_t size, gfp_t gfp, int align, int node,
>       			return NULL;
>      -		sp = virt_to_page(b);
>      -		__SetPageSlab(sp);
>     -+		sp = virt_to_slab(b);
>     -+		__SetPageSlab(slab_page(sp));
>     ++		folio = virt_to_folio(b);
>     ++		__folio_set_slab(folio);
>     ++		sp = folio_slab(folio);
>       
>       		spin_lock_irqsave(&slob_lock, flags);
>       		sp->units = SLOB_UNITS(PAGE_SIZE);
>     @@ mm/slob.c: static void slob_free(void *block, int size)
>       		spin_unlock_irqrestore(&slob_lock, flags);
>      -		__ClearPageSlab(sp);
>      -		page_mapcount_reset(sp);
>     -+		__ClearPageSlab(slab_page(sp));
>     ++		__folio_clear_slab(slab_folio(sp));
>      +		page_mapcount_reset(slab_page(sp));
>       		slob_free_pages(b, 0);
>       		return;
>       	}
>     +@@ mm/slob.c: EXPORT_SYMBOL(__kmalloc_node_track_caller);
>     + 
>     + void kfree(const void *block)
>     + {
>     +-	struct page *sp;
>     ++	struct folio *sp;
>     + 
>     + 	trace_kfree(_RET_IP_, block);
>     + 
>     +@@ mm/slob.c: void kfree(const void *block)
>     + 		return;
>     + 	kmemleak_free(block);
>     + 
>     +-	sp = virt_to_page(block);
>     +-	if (PageSlab(sp)) {
>     ++	sp = virt_to_folio(block);
>     ++	if (folio_test_slab(sp)) {
>     + 		int align = max_t(size_t, ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
>     + 		unsigned int *m = (unsigned int *)(block - align);
>     + 		slob_free(m, *m + align);
>     + 	} else {
>     +-		unsigned int order = compound_order(sp);
>     +-		mod_node_page_state(page_pgdat(sp), NR_SLAB_UNRECLAIMABLE_B,
>     ++		unsigned int order = folio_order(sp);
>     ++
>     ++		mod_node_page_state(folio_pgdat(sp), NR_SLAB_UNRECLAIMABLE_B,
>     + 				    -(PAGE_SIZE << order));
>     +-		__free_pages(sp, order);
>     ++		__free_pages(folio_page(sp, 0), order);
>     + 
>     + 	}
>     + }
> 25:  aa4f573a4c96 ! 25:  466b9fb1f6e5 mm/kasan: Convert to struct folio and struct slab
>     @@ Commit message
>      
>          Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
>          Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
>     +    Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
>          Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
>          Cc: Alexander Potapenko <glider@google.com>
>          Cc: Andrey Konovalov <andreyknvl@gmail.com>
> 26:  67b7966d2fb6 = 26:  b8159ae8e5cd mm/kfence: Convert kfence_guarded_alloc() to struct slab
> 31:  d64dfe49c1e7 ! 27:  4525180926f9 mm/sl*b: Differentiate struct slab fields by sl*b implementations
>     @@ Commit message
>          possible.
>      
>          This should also prevent accidental use of fields that don't exist in given
>     -    implementation. Before this patch virt_to_cache() and and cache_from_obj() was
>     -    visible for SLOB (albeit not used), although it relies on the slab_cache field
>     +    implementation. Before this patch virt_to_cache() and cache_from_obj() were
>     +    visible for SLOB (albeit not used), although they rely on the slab_cache field
>          that isn't set by SLOB. With this patch it's now a compile error, so these
>          functions are now hidden behind #ifndef CONFIG_SLOB.
>      
>     @@ mm/kfence/core.c: static void *kfence_guarded_alloc(struct kmem_cache *cache, si
>      -		slab->s_mem = addr;
>      +#if defined(CONFIG_SLUB)
>      +	slab->objects = 1;
>     -+#elif defined (CONFIG_SLAB)
>     ++#elif defined(CONFIG_SLAB)
>      +	slab->s_mem = addr;
>      +#endif
>       
>     @@ mm/slab.h
>      +
>      +#if defined(CONFIG_SLAB)
>      +
>     -+	union {
>     -+		struct list_head slab_list;
>     + 	union {
>     + 		struct list_head slab_list;
>     +-		struct {	/* Partial pages */
>      +		struct rcu_head rcu_head;
>      +	};
>      +	struct kmem_cache *slab_cache;
>      +	void *freelist;	/* array of free object indexes */
>     -+	void * s_mem;	/* first object */
>     ++	void *s_mem;	/* first object */
>      +	unsigned int active;
>      +
>      +#elif defined(CONFIG_SLUB)
>      +
>     - 	union {
>     - 		struct list_head slab_list;
>     --		struct {	/* Partial pages */
>     ++	union {
>     ++		struct list_head slab_list;
>      +		struct rcu_head rcu_head;
>      +		struct {
>       			struct slab *next;
>     @@ mm/slab.h: struct slab {
>      +#elif defined(CONFIG_SLOB)
>      +
>      +	struct list_head slab_list;
>     -+	void * __unused_1;
>     ++	void *__unused_1;
>      +	void *freelist;		/* first free block */
>     -+	void * __unused_2;
>     ++	void *__unused_2;
>      +	int units;
>      +
>      +#else
>     @@ mm/slab.h: struct slab {
>       #ifdef CONFIG_MEMCG
>       	unsigned long memcg_data;
>      @@ mm/slab.h: struct slab {
>     - 	static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl))
>       SLAB_MATCH(flags, __page_flags);
>       SLAB_MATCH(compound_head, slab_list);	/* Ensure bit 0 is clear */
>     + SLAB_MATCH(slab_list, slab_list);
>      +#ifndef CONFIG_SLOB
>       SLAB_MATCH(rcu_head, rcu_head);
>     + SLAB_MATCH(slab_cache, slab_cache);
>     ++#endif
>     ++#ifdef CONFIG_SLAB
>     + SLAB_MATCH(s_mem, s_mem);
>     + SLAB_MATCH(active, active);
>      +#endif
>       SLAB_MATCH(_refcount, __page_refcount);
>       #ifdef CONFIG_MEMCG
> 32:  0abf87bae67e = 28:  94b78948d53f mm/slub: Simplify struct slab slabs field definition
> 33:  813c304f18e4 = 29:  f5261e6375f0 mm/slub: Define struct slab fields for CONFIG_SLUB_CPU_PARTIAL only when enabled
> 27:  ebce4b5b5ced ! 30:  1414e8c87de6 zsmalloc: Stop using slab fields in struct page
>     @@ Commit message
>      
>          Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
>          Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
>     -    Cc: Minchan Kim <minchan@kernel.org>
>     +    Acked-by: Minchan Kim <minchan@kernel.org>
>          Cc: Nitin Gupta <ngupta@vflare.org>
>          Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
>      
> 28:  f124425ae7de = 31:  8a3cda6b38eb bootmem: Use page->index instead of page->freelist
> 29:  82da48c73b2e <  -:  ------------ iommu: Use put_pages_list
> 30:  181e16dfefbb <  -:  ------------ mm: Remove slab from struct page
>  -:  ------------ > 32:  91e069ba116b mm/slob: Remove unnecessary page_mapcount_reset() function call
Vlastimil Babka Dec. 20, 2021, 12:24 a.m. UTC | #12
On 12/15/21 02:03, Roman Gushchin wrote:
> On Tue, Dec 14, 2021 at 01:57:22PM +0100, Vlastimil Babka wrote:
>> On 12/1/21 19:14, Vlastimil Babka wrote:
>> > Folks from non-slab subsystems are Cc'd only to patches affecting them, and
>> > this cover letter.
>> > 
>> > Series also available in git, based on 5.16-rc3:
>> > https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
>> 
>> Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
>> and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:
> 
> Hi Vlastimil!
> 
> I've started to review this patchset (btw, really nice work, I like
> the resulting code way more). Because I'm looking at v3 and I don't have
> the whole v2 in my mailbox, here is what I have so far:

Thanks a lot, Roman!

> * mm: add virt_to_folio() and folio_address()
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm/slab: Dissolve slab_map_pages() in its caller
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm/slub: Make object_err() static
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm: Split slab into its own type
> 1) Shouldn't SLAB_MATCH() macro use struct folio instead of struct page for the
> comparison?

Folio doesn't define most of the fields, and matching some to page and
others to folio seems like unnecessary complication. Maybe as part of the
final struct page cleanup when the slab fields are gone from struct page,
the rest could all be in folio - I'll check once we get there.
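
For reference, the assertions are of this shape (a sketch following the
static_assert pattern visible in the range-diff above):

#define SLAB_MATCH(pg, sl)						\
	static_assert(offsetof(struct page, pg) == offsetof(struct slab, sl))

SLAB_MATCH(flags, __page_flags);
SLAB_MATCH(_refcount, __page_refcount);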

> 2) page_slab() is used only in kasan and only in one place, so maybe it's better
> to not introduce it as a generic helper?

Yeah that's the case after the series, but as part of the incremental steps,
page_slab() gets used in many places. I'll consider removing it on top though.

> Other than that
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm: Add account_slab() and unaccount_slab()
> 1) maybe change the title to convert/replace instead of add?

Done.

> 2) maybe move later changes to memcg_alloc_page_obj_cgroups() to this patch?

Maybe possible, but that would distort the series more than I'd like to at
this rc6 time.

> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm: Convert virt_to_cache() to use struct slab
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm: Convert __ksize() to struct slab
> It looks like certain parts of __ksize() can be merged between slab, slub and slob?
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm: Use struct slab in kmem_obj_info()
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> 
> I'll try to finish reviewing the patchset by the end of the week.
> 
> Thanks!
> 
> Roman
Vlastimil Babka Dec. 20, 2021, 12:47 a.m. UTC | #13
On 12/16/21 00:38, Roman Gushchin wrote:
> Part 2:
> 
> * mm: Convert check_heap_object() to use struct slab
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm/slub: Convert detached_freelist to use a struct slab
> How about converting free_nonslab_page() to free_nonslab_folio()?
> And maybe renaming it to something like free_large_kmalloc()?
> If I'm not missing something, large kmallocs are the only way we can end up
> there with a !slab folio/page.

Good point, thanks! But I did that as part of the following patch, where it
fits logically better.

> * mm/slub: Convert kfree() to use a struct slab
> Reviewed-by: Roman Gushchin <guro@fb.com>

I didn't add your tag here because of the free_large_kmalloc() addition.

> * mm/slub: Convert __slab_lock() and __slab_unlock() to struct slab
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm/slub: Convert print_page_info() to print_slab_info()
> Do we really need to explicitly convert slab_folio()'s result to (struct folio *)?

Unfortunately yes, as long as folio_flags() doesn't take a const struct folio *,
which will need some yak shaving.

> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm/slub: Convert alloc_slab_page() to return a struct slab
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm/slub: Convert __free_slab() to use struct slab
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm/slub: Convert pfmemalloc_match() to take a struct slab
> Cool! Removing pfmemalloc_unsafe() is really nice.
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm/slub: Convert most struct page to struct slab by spatch
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm/slub: Finish struct page to struct slab conversion
> Reviewed-by: Roman Gushchin <guro@fb.com>
> 
> * mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab
> Reviewed-by: Roman Gushchin <guro@fb.com>

Thanks again!

> * mm/slab: Convert most struct page to struct slab by spatch
> 
> Another patch with the same title? Rebase error?
> 
> * mm/slab: Finish struct page to struct slab conversion
> 
> And this one too?
> 
> 
> Thanks!
> 
> Roman
Matthew Wilcox Dec. 20, 2021, 1:42 a.m. UTC | #14
On Mon, Dec 20, 2021 at 01:47:54AM +0100, Vlastimil Babka wrote:
> > * mm/slub: Convert print_page_info() to print_slab_info()
> > Do we really need to explicitly convert slab_folio()'s result to (struct folio *)?
> 
> Unfortunately yes, as long as folio_flags() doesn't take a const struct folio *,
> which will need some yak shaving.

In case anyone's interested ...

folio_flags calls VM_BUG_ON_PGFLAGS() which would need its second
argument to be const.

That means dump_page() needs to take a const struct page, which
means __dump_page() needs its argument to be const.

That calls ...

is_migrate_cma_page()
page_mapping()
page_mapcount()
page_ref_count()
page_to_pgoff()
page_to_pfn()
hpage_pincount_available()
head_compound_mapcount()
head_compound_pincount()
compound_order()
PageKsm()
PageAnon()
PageCompound()

... and at that point, I ran out of motivation to track down some parts
of this tarbaby that could be fixed.  I did do:

    mm: constify page_count and page_ref_count
    mm: constify get_pfnblock_flags_mask and get_pfnblock_migratetype
    mm: make compound_head const-preserving
    mm/page_owner: constify dump_page_owner

so some of those are already done.  But a lot of them just need to be
done at the same time.  For example, page_mapping() calls
folio_mapping() which calls folio_test_slab() which calls folio_flags(),
so dump_page() and page_mapping() need to be done at the same time.

One bit that could be broken off easily (I think ...) is PageTransTail(),
PageTail(), PageCompound(), PageHuge(), page_to_pgoff() and
page_to_index().  One wrinkle is that a temporary
TESTPAGEFLAGS_FALSE_CONST is needed.  But I haven't tried it yet.
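
To illustrate the shape of the problem, a standalone sketch (plain C for
illustration, not the kernel code): once the outer function takes a const
pointer, every helper in its call chain must accept const too.

struct page { int _refcount; };

/* already constified helper, so the call below compiles */
static int page_ref_count(const struct page *page)
{
	return page->_refcount;
}

static void __dump_page(const struct page *page)
{
	/*
	 * If page_ref_count() still took a plain 'struct page *', the
	 * compiler would reject this call for discarding the 'const'
	 * qualifier.
	 */
	(void)page_ref_count(page);
}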
Vlastimil Babka Dec. 20, 2021, 11:58 p.m. UTC | #15
On 12/16/21 16:00, Hyeonggon Yoo wrote:
> On Tue, Dec 14, 2021 at 01:57:22PM +0100, Vlastimil Babka wrote:
>> On 12/1/21 19:14, Vlastimil Babka wrote:
>> > Folks from non-slab subsystems are Cc'd only to patches affecting them, and
>> > this cover letter.
>> > 
>> > Series also available in git, based on 5.16-rc3:
>> > https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
>> 
>> Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
>> and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:
> 
> Reviewing the whole patch series takes longer than I thought.
> I'll try to review and test the rest of the patches when I have time.
> 
> I added Tested-by if the kernel builds okay and kselftests
> do not break the kernel on my machine
> (with CONFIG_SLAB/SLUB/SLOB depending on the patch).

Thanks!

> Let me know if you know a better way to test a patch.

Testing on your machine is just fine.

> # mm/slub: Define struct slab fields for CONFIG_SLUB_CPU_PARTIAL only when enabled
> 
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> 
> Comment:
> Works on both SLUB_CPU_PARTIAL and !SLUB_CPU_PARTIAL.
> btw, do we need slabs_cpu_partial attribute when we don't use
> cpu partials? (!SLUB_CPU_PARTIAL)

The sysfs attribute? Yeah, we should stay consistent for userspace expecting to
read it (even with zeroes), regardless of config.
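
I.e. without SLUB_CPU_PARTIAL the attribute can simply keep reporting
zeroes, along these lines (a rough sketch, not the actual slub.c code):

static ssize_t slabs_cpu_partial_show(struct kmem_cache *s, char *buf)
{
	/* no cpu partial lists are configured, so report constant zeroes */
	return sysfs_emit(buf, "0(0)\n");
}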

> # mm/slub: Simplify struct slab slabs field definition
> Comment:
> 
> This is how struct page looks on the top of v3r3 branch:
> struct page {
> [...]
>                 struct {        /* slab, slob and slub */
>                         union {
>                                 struct list_head slab_list;
>                                 struct {        /* Partial pages */
>                                         struct page *next;
> #ifdef CONFIG_64BIT
>                                         int pages;      /* Nr of pages left */
> #else
>                                         short int pages;
> #endif
>                                 };
>                         };
> [...]
> 
> It's not consistent with struct slab.

Hm right. But as we don't actually use the struct page version anymore, and
it's not one of the fields checked by SLAB_MATCH(), we can ignore this.

> I think this is because "mm: Remove slab from struct page" was dropped.

That was just postponed until the iommu changes are in. Matthew mentioned those
might be merged too, so that final cleanup will happen too and take care of
the discrepancy above, so no need for extra churn to address it specifically.

> Would you update some of the patches?
> 
> # mm/sl*b: Differentiate struct slab fields by sl*b implementations
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> Works with SL[AUO]B on my machine and makes the code much better.
> 
> # mm/slob: Convert SLOB to use struct slab and struct folio
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> It still works fine on SLOB.
> 
> # mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> 
> # mm/slub: Convert __free_slab() to use struct slab
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> 
> Thanks,
> Hyeonggon.

Thanks again,
Vlastimil
Robin Murphy Dec. 21, 2021, 5:25 p.m. UTC | #16
On 2021-12-20 23:58, Vlastimil Babka wrote:
> On 12/16/21 16:00, Hyeonggon Yoo wrote:
>> On Tue, Dec 14, 2021 at 01:57:22PM +0100, Vlastimil Babka wrote:
>>> On 12/1/21 19:14, Vlastimil Babka wrote:
>>>> Folks from non-slab subsystems are Cc'd only to patches affecting them, and
>>>> this cover letter.
>>>>
>>>> Series also available in git, based on 5.16-rc3:
>>>> https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
>>>
>>> Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
>>> and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:
>>
>> Reviewing the whole patch series takes longer than I thought.
>> I'll try to review and test the rest of the patches when I have time.
>>
>> I added Tested-by if the kernel builds okay and kselftests
>> do not break the kernel on my machine
>> (with CONFIG_SLAB/SLUB/SLOB depending on the patch).
> 
> Thanks!
> 
>> Let me know if you know a better way to test a patch.
> 
> Testing on your machine is just fine.
> 
>> # mm/slub: Define struct slab fields for CONFIG_SLUB_CPU_PARTIAL only when enabled
>>
>> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
>> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
>>
>> Comment:
>> Works on both SLUB_CPU_PARTIAL and !SLUB_CPU_PARTIAL.
>> btw, do we need slabs_cpu_partial attribute when we don't use
>> cpu partials? (!SLUB_CPU_PARTIAL)
> 
> The sysfs attribute? Yeah, we should stay consistent for userspace expecting to
> read it (even with zeroes), regardless of config.
> 
>> # mm/slub: Simplify struct slab slabs field definition
>> Comment:
>>
>> This is how struct page looks on the top of v3r3 branch:
>> struct page {
>> [...]
>>                  struct {        /* slab, slob and slub */
>>                          union {
>>                                  struct list_head slab_list;
>>                                  struct {        /* Partial pages */
>>                                          struct page *next;
>> #ifdef CONFIG_64BIT
>>                                          int pages;      /* Nr of pages left */
>> #else
>>                                          short int pages;
>> #endif
>>                                  };
>>                          };
>> [...]
>>
>> It's not consistent with struct slab.
> 
> Hm right. But as we don't actually use the struct page version anymore, and
> it's not one of the fields checked by SLAB_MATCH(), we can ignore this.
> 
>> I think this is because "mm: Remove slab from struct page" was dropped.
> 
> That was just postponed until the iommu changes are in. Matthew mentioned those
> might be merged too, so that final cleanup will happen too and take care of
> the discrepancy above, so no need for extra churn to address it specifically.

FYI the IOMMU changes are now queued in linux-next, so if all goes well 
you might be able to sneak that final patch in too.

Robin.

> 
>> Would you update some of the patches?
>>
>> # mm/sl*b: Differentiate struct slab fields by sl*b implementations
>> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
>> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
>> Works with SL[AUO]B on my machine and makes the code much better.
>>
>> # mm/slob: Convert SLOB to use struct slab and struct folio
>> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
>> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
>> It still works fine on SLOB.
>>
>> # mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab
>> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
>> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
>>
>> # mm/slub: Convert __free_slab() to use struct slab
>> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
>> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
>>
>> Thanks,
>> Hyeonggon.
> 
> Thanks again,
> Vlastimil
> _______________________________________________
> iommu mailing list
> iommu@lists.linux-foundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/iommu
Hyeonggon Yoo Dec. 22, 2021, 7:36 a.m. UTC | #17
On Tue, Dec 21, 2021 at 12:58:14AM +0100, Vlastimil Babka wrote:
> On 12/16/21 16:00, Hyeonggon Yoo wrote:
> > On Tue, Dec 14, 2021 at 01:57:22PM +0100, Vlastimil Babka wrote:
> >> On 12/1/21 19:14, Vlastimil Babka wrote:
> >> > Folks from non-slab subsystems are Cc'd only to patches affecting them, and
> >> > this cover letter.
> >> > 
> >> > Series also available in git, based on 5.16-rc3:
> >> > https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
> >> 
> >> Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
> >> and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:
> > 
> > Reviewing the whole patch series takes longer than I thought.
> > I'll try to review and test the rest of the patches when I have time.
> > 
> > I added Tested-by if the kernel builds okay and kselftests
> > do not break the kernel on my machine
> > (with CONFIG_SLAB/SLUB/SLOB depending on the patch).
> 
> Thanks!
>

:)

> > Let me know if you know a better way to test a patch.
> 
> Testing on your machine is just fine.
>

Good!

> > # mm/slub: Define struct slab fields for CONFIG_SLUB_CPU_PARTIAL only when enabled
> > 
> > Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> > Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> > 
> > Comment:
> > Works on both SLUB_CPU_PARTIAL and !SLUB_CPU_PARTIAL.
> > btw, do we need slabs_cpu_partial attribute when we don't use
> > cpu partials? (!SLUB_CPU_PARTIAL)
> 
> The sysfs attribute? Yeah, we should stay consistent for userspace expecting to
> read it (even with zeroes), regardless of config.
> 

I thought entirely disabling the attribute would be simpler,
but okay if it should be exposed even when it's always zero.

> > # mm/slub: Simplify struct slab slabs field definition
> > Comment:
> > 
> > This is how struct page looks on the top of v3r3 branch:
> > struct page {
> > [...]
> >                 struct {        /* slab, slob and slub */
> >                         union {
> >                                 struct list_head slab_list;
> >                                 struct {        /* Partial pages */
> >                                         struct page *next;
> > #ifdef CONFIG_64BIT
> >                                         int pages;      /* Nr of pages left */
> > #else
> >                                         short int pages;
> > #endif
> >                                 };
> >                         };
> > [...]
> > 
> > It's not consistent with struct slab.
> 
> Hm right. But as we don't actually use the struct page version anymore, and
> it's not one of the fields checked by SLAB_MATCH(), we can ignore this.
>

Yeah, this is not a big problem. I just mentioned it because
it looked weird and I didn't know when the patch "mm: Remove slab from struct page"
would come back.

> > I think this is because "mm: Remove slab from struct page" was dropped.
>
> That was just postponed until the iommu changes are in. Matthew mentioned those
> might be merged too, so that final cleanup will happen too and take care of
> the discrepancy above, so no need for extra churn to address it specifically.
> 

Okay, it seems no extra work is needed until the iommu changes are in!

BTW, in the patch (that I sent) ("mm/slob: Remove unnecessary page_mapcount_reset()
function call"), it refers to commit 4525180926f9 ("mm/sl*b: Differentiate struct slab fields by
sl*b implementations"). But the commit hash 4525180926f9 changed after the
tree was updated.

It would be nice to have a script to handle situations like this.

> > Would you update some of the patches?
> > 
> > # mm/sl*b: Differentiate struct slab fields by sl*b implementations
> > Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> > Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> > Works with SL[AUO]B on my machine and makes the code much better.
> > 
> > # mm/slob: Convert SLOB to use struct slab and struct folio
> > Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> > Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> > It still works fine on SLOB.
> > 
> > # mm/slab: Convert kmem_getpages() and kmem_freepages() to struct slab
> > Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> > Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> >
> > # mm/slub: Convert __free_slab() to use struct slab
> > Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> > Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> > 
> > Thanks,
> > Hyeonggon.
> 
> Thanks again,
> Vlastimil

Have a nice day, thanks!
Hyeonggon.
Vlastimil Babka Dec. 22, 2021, 4:56 p.m. UTC | #18
On 12/14/21 13:57, Vlastimil Babka wrote:
> On 12/1/21 19:14, Vlastimil Babka wrote:
>> Folks from non-slab subsystems are Cc'd only to patches affecting them, and
>> this cover letter.
>>
>> Series also available in git, based on 5.16-rc3:
>> https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
> 
> Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
> and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:

Hi, I've pushed another update branch slab-struct_slab-v4r1, and also to
-next. I've shortened git commit log lines to make checkpatch happier,
so no range-diff as it would be too long. I believe it would be useless
spam to post the whole series now, shortly before xmas, so I will do it
at rc8 time, to hopefully collect remaining reviews. But if anyone wants
a mailed version, I can do that.

Changes in v4:
- rebase to 5.16-rc6 to avoid a conflict with mainline
- collect acks/reviews/tested-by from Johannes, Roman, Hyeonggon Yoo -
thanks!
- in patch "mm/slub: Convert detached_freelist to use a struct slab"
renamed free_nonslab_page() to free_large_kmalloc() and use folio there,
as suggested by Roman
- in "mm/memcg: Convert slab objcgs from struct page to struct slab"
change one caller of slab_objcgs_check() to slab_objcgs() as suggested
by Johannes, realize the other caller should be also changed, and remove
slab_objcgs_check() completely.
Hyeonggon Yoo Dec. 25, 2021, 9:16 a.m. UTC | #19
On Wed, Dec 22, 2021 at 05:56:50PM +0100, Vlastimil Babka wrote:
> On 12/14/21 13:57, Vlastimil Babka wrote:
> > On 12/1/21 19:14, Vlastimil Babka wrote:
> >> Folks from non-slab subsystems are Cc'd only to patches affecting them, and
> >> this cover letter.
> >>
> >> Series also available in git, based on 5.16-rc3:
> >> https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
> > 
> > Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
> > and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:
> 
> Hi, I've pushed another update branch slab-struct_slab-v4r1, and also to
> -next. I've shortened git commit log lines to make checkpatch happier,
> so no range-diff as it would be too long. I believe it would be useless
> spam to post the whole series now, shortly before xmas, so I will do it
> at rc8 time, to hopefully collect remaining reviews. But if anyone wants
> a mailed version, I can do that.

Hello Vlastimil, Merry Christmas!
This is part 2 of reviewing/testing patches.

# mm/kasan: Convert to struct folio and struct slab
I'm not familiar with kasan yet, but kasan runs well on my machine and
kasan's bug reporting functionality works fine too.
Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

# mm: Convert struct page to struct slab in functions used by other subsystems
I'm not familiar with kasan, but let me ask:
does ____kasan_slab_free() detect an invalid free if someone frees
an object that is not allocated from a slab?

@@ -341,7 +341,7 @@ static inline bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
-       if (unlikely(nearest_obj(cache, virt_to_head_page(object), object) !=
+       if (unlikely(nearest_obj(cache, virt_to_slab(object), object) !=
            object)) {
                kasan_report_invalid_free(tagged_object, ip);
                return true;

I'm asking this because virt_to_slab() will return NULL if folio_test_slab()
returns false. That will cause a NULL pointer dereference in nearest_obj().
I don't think this change is intended.

This makes me think some of the virt_to_head_page() -> virt_to_slab()
conversions need to be reviewed with caution.
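
The behaviour I mean is roughly this (a sketch based on the description
above, with the helper names used elsewhere in the series):

static inline struct slab *virt_to_slab(const void *addr)
{
	struct folio *folio = virt_to_folio(addr);

	/* not a slab folio (e.g. a large kmalloc): no struct slab here */
	if (!folio_test_slab(folio))
		return NULL;

	return folio_slab(folio);
}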

# mm/slab: Finish struct page to struct slab conversion
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

# mm/slab: Convert most struct page to struct slab by spatch
Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

I'll come back with part 3 :)
Enjoy your Christmas!
Hyeonggon

> Changes in v4:
> - rebase to 5.16-rc6 to avoid a conflict with mainline
> - collect acks/reviews/tested-by from Johannes, Roman, Hyeonggon Yoo -
> thanks!
> - in patch "mm/slub: Convert detached_freelist to use a struct slab"
> renamed free_nonslab_page() to free_large_kmalloc() and use folio there,
> as suggested by Roman
> - in "mm/memcg: Convert slab objcgs from struct page to struct slab"
> change one caller of slab_objcgs_check() to slab_objcgs() as suggested
> by Johannes, realize the other caller should be also changed, and remove
> slab_objcgs_check() completely.
Matthew Wilcox Dec. 25, 2021, 5:53 p.m. UTC | #20
On Sat, Dec 25, 2021 at 09:16:55AM +0000, Hyeonggon Yoo wrote:
> # mm: Convert struct page to struct slab in functions used by other subsystems
> I'm not familiar with kasan, but let me ask:
> does ____kasan_slab_free() detect an invalid free if someone frees
> an object that is not allocated from a slab?
> 
> @@ -341,7 +341,7 @@ static inline bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
> -       if (unlikely(nearest_obj(cache, virt_to_head_page(object), object) !=
> +       if (unlikely(nearest_obj(cache, virt_to_slab(object), object) !=
>             object)) {
>                 kasan_report_invalid_free(tagged_object, ip);
>                 return true;
> 
> I'm asking this because virt_to_slab() will return NULL if folio_test_slab()
> returns false. That will cause a NULL pointer dereference in nearest_obj().
> I don't think this change is intended.

You need to track down how this could happen.  As far as I can tell,
it's always called when we know the object is part of a slab.  That's
where the cachep pointer is deduced from.
Hyeonggon Yoo Dec. 27, 2021, 2:43 a.m. UTC | #21
On Sat, Dec 25, 2021 at 05:53:23PM +0000, Matthew Wilcox wrote:
> On Sat, Dec 25, 2021 at 09:16:55AM +0000, Hyeonggon Yoo wrote:
> > # mm: Convert struct page to struct slab in functions used by other subsystems
> > I'm not familiar with kasan, but let me ask:
> > does ____kasan_slab_free() detect an invalid free if someone frees
> > an object that is not allocated from a slab?
> > 
> > @@ -341,7 +341,7 @@ static inline bool ____kasan_slab_free(struct kmem_cache *cache, void *object,
> > -       if (unlikely(nearest_obj(cache, virt_to_head_page(object), object) !=
> > +       if (unlikely(nearest_obj(cache, virt_to_slab(object), object) !=
> >             object)) {
> >                 kasan_report_invalid_free(tagged_object, ip);
> >                 return true;
> > 
> > I'm asking this because virt_to_slab() will return NULL if folio_test_slab()
> > returns false. That will cause a NULL pointer dereference in nearest_obj().
> > I don't think this change is intended.
> 
> You need to track down how this could happen.  As far as I can tell,
> it's always called when we know the object is part of a slab.  That's
> where the cachep pointer is deduced from.

Thank you Matthew, you are right. I read the code too narrowly.
When we call the kasan hooks, we know that the object is allocated from
the slab cache (through cache_from_obj()).

I'll review that patch again in part 3!

Thanks,
Hyeonggon
Hyeonggon Yoo Dec. 29, 2021, 11:22 a.m. UTC | #22
On Wed, Dec 22, 2021 at 05:56:50PM +0100, Vlastimil Babka wrote:
> On 12/14/21 13:57, Vlastimil Babka wrote:
> > On 12/1/21 19:14, Vlastimil Babka wrote:
> >> Folks from non-slab subsystems are Cc'd only to patches affecting them, and
> >> this cover letter.
> >>
> >> Series also available in git, based on 5.16-rc3:
> >> https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
> > 
> > Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
> > and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:
> 
> Hi, I've pushed another update branch slab-struct_slab-v4r1, and also to
> -next. I've shortened git commit log lines to make checkpatch happier,
> so no range-diff as it would be too long. I believe it would be useless
> spam to post the whole series now, shortly before xmas, so I will do it
> at rc8 time, to hopefully collect remaining reviews. But if anyone wants
> a mailed version, I can do that.
>

Hello Matthew and Vlastimil,
this is part 3 of the review.

# mm: Convert struct page to struct slab in functions used by other subsystems
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>


# mm/slub: Convert most struct page to struct slab by spatch
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
with a question below.

-static int check_slab(struct kmem_cache *s, struct page *page)
+static int check_slab(struct kmem_cache *s, struct slab *slab)
 {
        int maxobj;
 
-       if (!PageSlab(page)) {
-               slab_err(s, page, "Not a valid slab page");
+       if (!folio_test_slab(slab_folio(slab))) {
+               slab_err(s, slab, "Not a valid slab page");
                return 0;
        }

Can't we guarantee that a struct slab * always points to a slab?

For a struct page * it can be !PageSlab(page), because a struct page *
can be something other than a slab. But a struct slab * can only be a
slab, unlike struct page. The code would be simpler if we guaranteed
that a struct slab * always points to a slab (or NULL).


# mm/slub: Convert pfmemalloc_match() to take a struct slab
It's confusing to me because the original pfmemalloc_match() is removed
and pfmemalloc_match_unsafe() was renamed to pfmemalloc_match() and
converted to use slab_test_pfmemalloc() helper.

But I agree with the resulting code. so:
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>


# mm/slub: Convert alloc_slab_page() to return a struct slab
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>


# mm/slub: Convert print_page_info() to print_slab_info()
Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>

I hope to review the rest of the patches in a week.

Thanks,
Hyeonggon

> Changes in v4:
> - rebase to 5.16-rc6 to avoid a conflict with mainline
> - collect acks/reviews/tested-by from Johannes, Roman, Hyeonggon Yoo -
> thanks!
> - in patch "mm/slub: Convert detached_freelist to use a struct slab"
> renamed free_nonslab_page() to free_large_kmalloc() and use folio there,
> as suggested by Roman
> - in "mm/memcg: Convert slab objcgs from struct page to struct slab"
> change one caller of slab_objcgs_check() to slab_objcgs() as suggested
> by Johannes, realize the other caller should be also changed, and remove
> slab_objcgs_check() completely.
Vlastimil Babka Jan. 3, 2022, 5:56 p.m. UTC | #23
On 12/29/21 12:22, Hyeonggon Yoo wrote:
> On Wed, Dec 22, 2021 at 05:56:50PM +0100, Vlastimil Babka wrote:
>> On 12/14/21 13:57, Vlastimil Babka wrote:
>> > On 12/1/21 19:14, Vlastimil Babka wrote:
>> >> Folks from non-slab subsystems are Cc'd only to patches affecting them, and
>> >> this cover letter.
>> >>
>> >> Series also available in git, based on 5.16-rc3:
>> >> https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
>> > 
>> > Pushed a new branch slab-struct-slab-v3r3 with accumulated fixes and small tweaks
>> > and a new patch from Hyeonggon Yoo on top. To avoid too much spam, here's a range diff:
>> 
>> Hi, I've pushed another update branch slab-struct_slab-v4r1, and also to
>> -next. I've shortened git commit log lines to make checkpatch happier,
>> so no range-diff as it would be too long. I believe it would be useless
>> spam to post the whole series now, shortly before xmas, so I will do it
>> at rc8 time, to hopefully collect remaining reviews. But if anyone wants
>> a mailed version, I can do that.
>>
> 
> Hello Matthew and Vlastimil.
> it's part 3 of review.
> 
> # mm: Convert struct page to struct slab in functions used by other subsystems
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> 
> 
> # mm/slub: Convert most struct page to struct slab by spatch
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> with a question below.
> 
> -static int check_slab(struct kmem_cache *s, struct page *page)
> +static int check_slab(struct kmem_cache *s, struct slab *slab)
>  {
>         int maxobj;
>  
> -       if (!PageSlab(page)) {
> -               slab_err(s, page, "Not a valid slab page");
> +       if (!folio_test_slab(slab_folio(slab))) {
> +               slab_err(s, slab, "Not a valid slab page");
>                 return 0;
>         }
> 
> Can't we guarantee that a struct slab * always points to a slab?

Normally, yes.

> For a struct page * it can be !PageSlab(page), because a struct page *
> can be something other than a slab. But a struct slab * can only be a
> slab, unlike struct page. The code would be simpler if we guaranteed
> that a struct slab * always points to a slab (or NULL).

That's what the code does, indeed. But check_slab() is called as part of
various consistency checks, so there we question all assumptions on purpose
in order to find a bug (e.g. memory corruption) - such as a page that's
still on the list of slabs while it was already freed and reused and thus
e.g. lacks the slab page flag.

But it's nice how using struct slab makes such a check immediately stand out
as suspicious, right?

> # mm/slub: Convert pfmemalloc_match() to take a struct slab
> It's confusing to me because the original pfmemalloc_match() is removed
> and pfmemalloc_match_unsafe() was renamed to pfmemalloc_match() and
> converted to use slab_test_pfmemalloc() helper.
> 
> But I agree with the resulting code. so:
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> 
> 
> # mm/slub: Convert alloc_slab_page() to return a struct slab
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> 
> 
> # mm/slub: Convert print_page_info() to print_slab_info()
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
> 
> I hope to review the rest of the patches in a week.

Thanks for your reviews/tests!
Vlastimil Babka Jan. 4, 2022, 12:21 a.m. UTC | #24
On 12/1/21 19:39, Vlastimil Babka wrote:
> On 12/1/21 19:14, Vlastimil Babka wrote:
>> Folks from non-slab subsystems are Cc'd only to patches affecting them, and
>> this cover letter.
>> 
>> Series also available in git, based on 5.16-rc3:
>> https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
>> 
>> The plan: as my SLUB PREEMPT_RT series in 5.15, I would prefer to go again with
>> the git pull request way of eventually merging this, as it's also not a small
>> series. I will thus reply to this mail with asking to include my branch in
>> linux-next.
>> 
>> As stated in the v1/RFC cover letter, I wouldn't mind to then continue with
>> maintaining a git tree for all slab patches in general. It was apparently
>> already done that way before, by Pekka:
>> https://lore.kernel.org/linux-mm/alpine.DEB.2.00.1107221108190.2996@tiger/
> 
> Hi Stephen,
> 
> Please include a new tree in linux-next:
> 
> git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git slab-next
> 
> i.e.
> https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-next

Hi Stephen,

can you please replace the slab for-next above with the new tree I made to
more clearly separate slab maintenance from my personal branches:

git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab.git branch for-next

Thanks,
Vlastimil
Stephen Rothwell Jan. 4, 2022, 8:44 a.m. UTC | #25
Hi Vlastimil,

On Tue, 4 Jan 2022 01:21:03 +0100 Vlastimil Babka <vbabka@suse.cz> wrote:
>
> can you please replace the slab for-next above with the new tree I made to
> more clearly separate slab maintenance from my personal branches:
> 
> git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab.git branch for-next

I have done that today.
Vlastimil Babka Aug. 29, 2023, 9:55 a.m. UTC | #26
On 1/4/22 01:21, Vlastimil Babka wrote:
> On 12/1/21 19:39, Vlastimil Babka wrote:
>> On 12/1/21 19:14, Vlastimil Babka wrote:
>>> Folks from non-slab subsystems are Cc'd only to patches affecting them, and
>>> this cover letter.
>>> 
>>> Series also available in git, based on 5.16-rc3:
>>> https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-struct_slab-v2r2
>>> 
>>> The plan: as my SLUB PREEMPT_RT series in 5.15, I would prefer to go again with
>>> the git pull request way of eventually merging this, as it's also not a small
>>> series. I will thus reply to this mail with asking to include my branch in
>>> linux-next.
>>> 
>>> As stated in the v1/RFC cover letter, I wouldn't mind to then continue with
>>> maintaining a git tree for all slab patches in general. It was apparently
>>> already done that way before, by Pekka:
>>> https://lore.kernel.org/linux-mm/alpine.DEB.2.00.1107221108190.2996@tiger/
>> 
>> Hi Stephen,
>> 
>> Please include a new tree in linux-next:
>> 
>> git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git slab-next
>> 
>> i.e.
>> https://git.kernel.org/pub/scm/linux/kernel/git/vbabka/linux.git/log/?h=slab-next
> 
> Hi Stephen,
> 
> can you please replace the slab for-next above with the new tree I made to
> more clearly separate slab maintenance from my personal branches:
> 
> git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab.git branch for-next

Hi Stephen,

can you please start using a branch named "slab/for-next" instead of
"for-next" as I've been consolidating the naming scheme locally.

Feel free to do that switch at your own convenience, i.e. after the merge
window. I don't plan to push anything there until rc1 and will keep both old
and new name available and in sync until you confirm the switch.

Thanks,
Vlastimil

> Thanks,
> Vlastimil
Stephen Rothwell Aug. 29, 2023, 9:33 p.m. UTC | #27
Hi Vlastimil,

On Tue, 29 Aug 2023 11:55:12 +0200 Vlastimil Babka <vbabka@suse.cz> wrote:
>
> can you please start using a branch named "slab/for-next" instead of
> "for-next" as I've been consolidating the naming scheme locally.
> 
> Feel free to do that switch at your own convenience, i.e. after the merge
> window. I don't plan to push anything there until rc1 and will keep both old
> and new name available and in sync until you confirm the switch.

Done.