
mm: kfence: fix handling discontiguous page

Message ID 20230323025003.94447-1-songmuchun@bytedance.com (mailing list archive)
State New
Series mm: kfence: fix handling discontiguous page

Commit Message

Muchun Song March 23, 2023, 2:50 a.m. UTC
The struct pages could be discontiguous when the kfence pool is allocated
via alloc_contig_pages() with CONFIG_SPARSEMEM and !CONFIG_SPARSEMEM_VMEMMAP.
So, the iteration should use nth_page().
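
For context, with CONFIG_SPARSEMEM and !CONFIG_SPARSEMEM_VMEMMAP the memmap is
allocated per memory section, so the struct pages backing a physically
contiguous range need not be virtually contiguous, and pointer arithmetic such
as &pages[i] may walk past the pool's section. A rough sketch of how
nth_page() accounts for this (modeled on include/linux/mm.h around this kernel
version; see the tree for the authoritative definition):

	/*
	 * Sketch: with a per-section memmap, index by pfn instead of doing
	 * pointer arithmetic on struct page across a possible section boundary.
	 */
	#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
	#define nth_page(page, n)	pfn_to_page(page_to_pfn((page)) + (n))
	#else
	#define nth_page(page, n)	((page) + (n))	/* memmap is virtually contiguous */
	#endif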

Fixes: 0ce20dd84089 ("mm: add Kernel Electric-Fence infrastructure")
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/kfence/core.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

Comments

Marco Elver March 23, 2023, 8:28 a.m. UTC | #1
On Thu, 23 Mar 2023 at 03:50, 'Muchun Song' via kasan-dev
<kasan-dev@googlegroups.com> wrote:
>
> The struct pages could be discontiguous when the kfence pool is allocated
> via alloc_contig_pages() with CONFIG_SPARSEMEM and !CONFIG_SPARSEMEM_VMEMMAP.
> So, the iteration should use nth_page().
>
> Fixes: 0ce20dd84089 ("mm: add Kernel Electric-Fence infrastructure")
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>

Reviewed-by: Marco Elver <elver@google.com>

> ---
>  mm/kfence/core.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/kfence/core.c b/mm/kfence/core.c
> index d66092dd187c..1065e0568d05 100644
> --- a/mm/kfence/core.c
> +++ b/mm/kfence/core.c
> @@ -556,7 +556,7 @@ static unsigned long kfence_init_pool(void)
>          * enters __slab_free() slow-path.
>          */
>         for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
> -               struct slab *slab = page_slab(&pages[i]);
> +               struct slab *slab = page_slab(nth_page(pages, i));
>
>                 if (!i || (i % 2))
>                         continue;
> @@ -602,7 +602,7 @@ static unsigned long kfence_init_pool(void)
>
>  reset_slab:
>         for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
> -               struct slab *slab = page_slab(&pages[i]);
> +               struct slab *slab = page_slab(nth_page(pages, i));
>
>                 if (!i || (i % 2))
>                         continue;
> --
> 2.11.0
>
Andrew Morton March 23, 2023, 10:18 p.m. UTC | #2
On Thu, 23 Mar 2023 10:50:03 +0800 Muchun Song <songmuchun@bytedance.com> wrote:

> The struct pages could be discontiguous when the kfence pool is allocated
> via alloc_contig_pages() with CONFIG_SPARSEMEM and !CONFIG_SPARSEMEM_VMEMMAP.
> So, the iteration should use nth_page().

What are the user-visible runtime effects of this flaw?

Thanks.
Muchun Song March 24, 2023, 1:59 a.m. UTC | #3
> On Mar 24, 2023, at 06:18, Andrew Morton <akpm@linux-foundation.org> wrote:
> 
> On Thu, 23 Mar 2023 10:50:03 +0800 Muchun Song <songmuchun@bytedance.com> wrote:
> 
>> The struct pages could be discontiguous when the kfence pool is allocated
>> via alloc_contig_pages() with CONFIG_SPARSEMEM and !CONFIG_SPARSEMEM_VMEMMAP.
>> So, the iteration should use nth_page().
> 
> What are the user-visible runtime effects of this flaw?

The loop sets PG_slab and memcg_data at an arbitrary address (one that may not
even be used as a struct page), so in the worst case it may corrupt the kernel.
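
Concretely, the difference in kfence_init_pool() looks roughly like the sketch
below (illustrative only; the PG_slab/memcg_data writes happen later in the
loop body and are not shown in the hunk):

	/*
	 * Illustrative sketch, assuming pages points at the struct page of the
	 * first page of the KFENCE pool (virt_to_page() of the pool start).
	 */
	struct page *pages = virt_to_page(__kfence_pool);

	struct slab *bad  = page_slab(&pages[i]);          /* pointer arithmetic; may run
	                                                      past the pool's memmap section */
	struct slab *good = page_slab(nth_page(pages, i)); /* pfn-based; always the i-th
	                                                      page of the pool */

With the first form, the subsequent PG_slab/memcg_data writes go through a slab
pointer that may not correspond to the pool at all, which is how unrelated
memory can get corrupted.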

Thanks.

> 
> Thanks.
Kefeng Wang March 24, 2023, 3:43 a.m. UTC | #4
On 2023/3/23 10:50, Muchun Song wrote:
> The struct pages could be discontiguous when the kfence pool is allocated
> via alloc_contig_pages() with CONFIG_SPARSEMEM and !CONFIG_SPARSEMEM_VMEMMAP.
> So, the iteration should use nth_page().
> 

Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>

> Fixes: 0ce20dd84089 ("mm: add Kernel Electric-Fence infrastructure")
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
>   mm/kfence/core.c | 4 ++--
>   1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/kfence/core.c b/mm/kfence/core.c
> index d66092dd187c..1065e0568d05 100644
> --- a/mm/kfence/core.c
> +++ b/mm/kfence/core.c
> @@ -556,7 +556,7 @@ static unsigned long kfence_init_pool(void)
>   	 * enters __slab_free() slow-path.
>   	 */
>   	for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
> -		struct slab *slab = page_slab(&pages[i]);
> +		struct slab *slab = page_slab(nth_page(pages, i));
>   
>   		if (!i || (i % 2))
>   			continue;
> @@ -602,7 +602,7 @@ static unsigned long kfence_init_pool(void)
>   
>   reset_slab:
>   	for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
> -		struct slab *slab = page_slab(&pages[i]);
> +		struct slab *slab = page_slab(nth_page(pages, i));
>   
>   		if (!i || (i % 2))
>   			continue;

Patch

diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index d66092dd187c..1065e0568d05 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -556,7 +556,7 @@ static unsigned long kfence_init_pool(void)
 	 * enters __slab_free() slow-path.
 	 */
 	for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
-		struct slab *slab = page_slab(&pages[i]);
+		struct slab *slab = page_slab(nth_page(pages, i));
 
 		if (!i || (i % 2))
 			continue;
@@ -602,7 +602,7 @@ static unsigned long kfence_init_pool(void)
 
 reset_slab:
 	for (i = 0; i < KFENCE_POOL_SIZE / PAGE_SIZE; i++) {
-		struct slab *slab = page_slab(&pages[i]);
+		struct slab *slab = page_slab(nth_page(pages, i));
 
 		if (!i || (i % 2))
 			continue;