Message ID | 20250415171954.3970818-1-jyescas@google.com (mailing list archive)
---|---
State | New
Series | dma-buf: heaps: Set allocation orders for larger page sizes
On Tue, Apr 15, 2025 at 10:20 AM Juan Yescas <jyescas@google.com> wrote:
>
> This change sets the allocation orders for the different page sizes
> (4k, 16k, 64k) based on PAGE_SHIFT. Before this change, the orders
> for large page sizes were calculated incorrectly, this caused system
> heap to allocate from 2% to 4% more memory on 16KiB page size kernels.
>
> This change was tested on 4k/16k page size kernels.
>
> Signed-off-by: Juan Yescas <jyescas@google.com>

Seems reasonable to me.

Acked-by: John Stultz <jstultz@google.com>

thanks
-john
On Tue, Apr 15, 2025 at 10:20 AM Juan Yescas <jyescas@google.com> wrote:
>
> This change sets the allocation orders for the different page sizes
> (4k, 16k, 64k) based on PAGE_SHIFT. Before this change, the orders
> for large page sizes were calculated incorrectly, this caused system
> heap to allocate from 2% to 4% more memory on 16KiB page size kernels.
>
> This change was tested on 4k/16k page size kernels.
>
> Signed-off-by: Juan Yescas <jyescas@google.com>

I think "dma-buf: system_heap:" would be better for the subject since
this is specific to the system heap.

Would you mind cleaning up the extra space on line 321 too?

@@ -318,7 +318,7 @@ static struct page *alloc_largest_available(unsigned long size,
 	int i;

 	for (i = 0; i < NUM_ORDERS; i++) {
-		if (size <  (PAGE_SIZE << orders[i]))
+		if (size < (PAGE_SIZE << orders[i]))

With that,
Reviewed-by: T.J. Mercier <tjmercier@google.com>

Fixes: d963ab0f15fb ("dma-buf: system_heap: Allocate higher order pages
if available") is also probably a good idea.

> ---
>  drivers/dma-buf/heaps/system_heap.c | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
> index 26d5dc89ea16..54674c02dcb4 100644
> --- a/drivers/dma-buf/heaps/system_heap.c
> +++ b/drivers/dma-buf/heaps/system_heap.c
> @@ -50,8 +50,15 @@ static gfp_t order_flags[] = {HIGH_ORDER_GFP, HIGH_ORDER_GFP, LOW_ORDER_GFP};
>   * to match with the sizes often found in IOMMUs. Using order 4 pages instead
>   * of order 0 pages can significantly improve the performance of many IOMMUs
>   * by reducing TLB pressure and time spent updating page tables.
> + *
> + * Note: When the order is 0, the minimum allocation is PAGE_SIZE. The possible
> + * page sizes for ARM devices could be 4K, 16K and 64K.
>   */
> -static const unsigned int orders[] = {8, 4, 0};
> +#define ORDER_1M (20 - PAGE_SHIFT)
> +#define ORDER_64K (16 - PAGE_SHIFT)
> +#define ORDER_FOR_PAGE_SIZE (0)
> +static const unsigned int orders[] = {ORDER_1M, ORDER_64K, ORDER_FOR_PAGE_SIZE};
> +
>  #define NUM_ORDERS ARRAY_SIZE(orders)
>
>  static struct sg_table *dup_sg_table(struct sg_table *table)
> --
> 2.49.0.604.gff1f9ca942-goog
>
> -----Original Message-----
> From: T.J. Mercier [mailto:tjmercier@google.com]
> Sent: Wednesday, April 16, 2025 5:57 AM
> To: Juan Yescas <jyescas@google.com>
> Cc: Sumit Semwal <sumit.semwal@linaro.org>; Benjamin Gaignard
> <benjamin.gaignard@collabora.com>; Brian Starkey <Brian.Starkey@arm.com>;
> John Stultz <jstultz@google.com>; Christian König
> <christian.koenig@amd.com>; linux-media@vger.kernel.org;
> dri-devel@lists.freedesktop.org; linaro-mm-sig@lists.linaro.org;
> linux-kernel@vger.kernel.org; baohua@kernel.org;
> dmitry.osipenko@collabora.com; jaewon31.kim@samsung.com;
> Guangming.Cao@mediatek.com; surenb@google.com; kaleshsingh@google.com
> Subject: Re: [PATCH] dma-buf: heaps: Set allocation orders for larger page sizes
>
> On Tue, Apr 15, 2025 at 10:20 AM Juan Yescas <jyescas@google.com> wrote:
> >
> > This change sets the allocation orders for the different page sizes
> > (4k, 16k, 64k) based on PAGE_SHIFT. Before this change, the orders for
> > large page sizes were calculated incorrectly, this caused system heap
> > to allocate from 2% to 4% more memory on 16KiB page size kernels.
> >
> > This change was tested on 4k/16k page size kernels.
> >
> > Signed-off-by: Juan Yescas <jyescas@google.com>
>
> I think "dma-buf: system_heap:" would be better for the subject since this
> is specific to the system heap.
>
> Would you mind cleaning up the extra space on line 321 too?
> @@ -318,7 +318,7 @@ static struct page *alloc_largest_available(unsigned long size,
>  	int i;
>
>  	for (i = 0; i < NUM_ORDERS; i++) {
> -		if (size <  (PAGE_SIZE << orders[i]))
> +		if (size < (PAGE_SIZE << orders[i]))
>
> With that,
> Reviewed-by: T.J. Mercier <tjmercier@google.com>
>
> Fixes: d963ab0f15fb ("dma-buf: system_heap: Allocate higher order pages if
> available") is also probably a good idea.
>

Hi Juan.

Yes. This system_heap change should be changed for 16KB page. Actually,
we may need to check other drivers using page order number. I guess
gpu drivers may be one of them.

> > ---
> >  drivers/dma-buf/heaps/system_heap.c | 9 ++++++++-
> >  1 file changed, 8 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
> > index 26d5dc89ea16..54674c02dcb4 100644
> > --- a/drivers/dma-buf/heaps/system_heap.c
> > +++ b/drivers/dma-buf/heaps/system_heap.c
> > @@ -50,8 +50,15 @@ static gfp_t order_flags[] = {HIGH_ORDER_GFP, HIGH_ORDER_GFP, LOW_ORDER_GFP};
> >   * to match with the sizes often found in IOMMUs. Using order 4 pages instead
> >   * of order 0 pages can significantly improve the performance of many IOMMUs
> >   * by reducing TLB pressure and time spent updating page tables.
> > + *
> > + * Note: When the order is 0, the minimum allocation is PAGE_SIZE. The possible
> > + * page sizes for ARM devices could be 4K, 16K and 64K.
> >   */
> > -static const unsigned int orders[] = {8, 4, 0};
> > +#define ORDER_1M (20 - PAGE_SHIFT)
> > +#define ORDER_64K (16 - PAGE_SHIFT)
> > +#define ORDER_FOR_PAGE_SIZE (0)
> > +static const unsigned int orders[] = {ORDER_1M, ORDER_64K, ORDER_FOR_PAGE_SIZE};
> > +
> >  #define NUM_ORDERS ARRAY_SIZE(orders)
> >
> >  static struct sg_table *dup_sg_table(struct sg_table *table)
> > --
> > 2.49.0.604.gff1f9ca942-goog
> >
On Tue, Apr 15, 2025 at 7:28 PM 김재원 <jaewon31.kim@samsung.com> wrote:
>
> > -----Original Message-----
> > From: T.J. Mercier [mailto:tjmercier@google.com]
> > Sent: Wednesday, April 16, 2025 5:57 AM
> > To: Juan Yescas <jyescas@google.com>
> > Cc: Sumit Semwal <sumit.semwal@linaro.org>; Benjamin Gaignard
> > <benjamin.gaignard@collabora.com>; Brian Starkey <Brian.Starkey@arm.com>;
> > John Stultz <jstultz@google.com>; Christian König
> > <christian.koenig@amd.com>; linux-media@vger.kernel.org;
> > dri-devel@lists.freedesktop.org; linaro-mm-sig@lists.linaro.org;
> > linux-kernel@vger.kernel.org; baohua@kernel.org;
> > dmitry.osipenko@collabora.com; jaewon31.kim@samsung.com;
> > Guangming.Cao@mediatek.com; surenb@google.com; kaleshsingh@google.com
> > Subject: Re: [PATCH] dma-buf: heaps: Set allocation orders for larger page sizes
> >
> > On Tue, Apr 15, 2025 at 10:20 AM Juan Yescas <jyescas@google.com> wrote:
> > >
> > > This change sets the allocation orders for the different page sizes
> > > (4k, 16k, 64k) based on PAGE_SHIFT. Before this change, the orders for
> > > large page sizes were calculated incorrectly, this caused system heap
> > > to allocate from 2% to 4% more memory on 16KiB page size kernels.
> > >
> > > This change was tested on 4k/16k page size kernels.
> > >
> > > Signed-off-by: Juan Yescas <jyescas@google.com>
> >
> > I think "dma-buf: system_heap:" would be better for the subject since this
> > is specific to the system heap.
> >
> > Would you mind cleaning up the extra space on line 321 too?
> > @@ -318,7 +318,7 @@ static struct page *alloc_largest_available(unsigned long size,
> >  	int i;
> >
> >  	for (i = 0; i < NUM_ORDERS; i++) {
> > -		if (size <  (PAGE_SIZE << orders[i]))
> > +		if (size < (PAGE_SIZE << orders[i]))
> >
> > With that,
> > Reviewed-by: T.J. Mercier <tjmercier@google.com>
> >
> > Fixes: d963ab0f15fb ("dma-buf: system_heap: Allocate higher order pages if
> > available") is also probably a good idea.
> >
>
> Hi Juan.
>
> Yes. This system_heap change should be changed for 16KB page. Actually,
> we may need to check other drivers using page order number. I guess
> gpu drivers may be one of them.
>

Thanks Jaewon for pointing it out. We'll take a look at the GPU drivers
to make sure that they are using the proper page order.

> > > ---
> > >  drivers/dma-buf/heaps/system_heap.c | 9 ++++++++-
> > >  1 file changed, 8 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
> > > index 26d5dc89ea16..54674c02dcb4 100644
> > > --- a/drivers/dma-buf/heaps/system_heap.c
> > > +++ b/drivers/dma-buf/heaps/system_heap.c
> > > @@ -50,8 +50,15 @@ static gfp_t order_flags[] = {HIGH_ORDER_GFP, HIGH_ORDER_GFP, LOW_ORDER_GFP};
> > >   * to match with the sizes often found in IOMMUs. Using order 4 pages instead
> > >   * of order 0 pages can significantly improve the performance of many IOMMUs
> > >   * by reducing TLB pressure and time spent updating page tables.
> > > + *
> > > + * Note: When the order is 0, the minimum allocation is PAGE_SIZE. The possible
> > > + * page sizes for ARM devices could be 4K, 16K and 64K.
> > >   */
> > > -static const unsigned int orders[] = {8, 4, 0};
> > > +#define ORDER_1M (20 - PAGE_SHIFT)
> > > +#define ORDER_64K (16 - PAGE_SHIFT)
> > > +#define ORDER_FOR_PAGE_SIZE (0)
> > > +static const unsigned int orders[] = {ORDER_1M, ORDER_64K, ORDER_FOR_PAGE_SIZE};
> > > +
> > >  #define NUM_ORDERS ARRAY_SIZE(orders)
> > >
> > >  static struct sg_table *dup_sg_table(struct sg_table *table)
> > > --
> > > 2.49.0.604.gff1f9ca942-goog
> > >
On 4/15/25 19:19, Juan Yescas wrote:
> This change sets the allocation orders for the different page sizes
> (4k, 16k, 64k) based on PAGE_SHIFT. Before this change, the orders
> for large page sizes were calculated incorrectly, this caused system
> heap to allocate from 2% to 4% more memory on 16KiB page size kernels.
>
> This change was tested on 4k/16k page size kernels.
>
> Signed-off-by: Juan Yescas <jyescas@google.com>
> ---
>  drivers/dma-buf/heaps/system_heap.c | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
> index 26d5dc89ea16..54674c02dcb4 100644
> --- a/drivers/dma-buf/heaps/system_heap.c
> +++ b/drivers/dma-buf/heaps/system_heap.c
> @@ -50,8 +50,15 @@ static gfp_t order_flags[] = {HIGH_ORDER_GFP, HIGH_ORDER_GFP, LOW_ORDER_GFP};
>   * to match with the sizes often found in IOMMUs. Using order 4 pages instead
>   * of order 0 pages can significantly improve the performance of many IOMMUs
>   * by reducing TLB pressure and time spent updating page tables.
> + *
> + * Note: When the order is 0, the minimum allocation is PAGE_SIZE. The possible
> + * page sizes for ARM devices could be 4K, 16K and 64K.
>   */
> -static const unsigned int orders[] = {8, 4, 0};
> +#define ORDER_1M (20 - PAGE_SHIFT)
> +#define ORDER_64K (16 - PAGE_SHIFT)
> +#define ORDER_FOR_PAGE_SIZE (0)
> +static const unsigned int orders[] = {ORDER_1M, ORDER_64K, ORDER_FOR_PAGE_SIZE};
> +

Good catch, but I think the defines are just overkill. What you should do
instead is subtract the page shift when using the array.

Apart from that, using 1M, 64K and then falling back to 4K just sounds
random to me. We have especially pushed back on 64K more than once because
it is actually not beneficial in almost all cases.

I suggest fixing the code in system_heap_allocate to not over-allocate
instead, and just trying the available orders like TTM does. That approach
has proven to work independently of the architecture.

Regards,
Christian.

>  #define NUM_ORDERS ARRAY_SIZE(orders)
>
>  static struct sg_table *dup_sg_table(struct sg_table *table)
diff --git a/drivers/dma-buf/heaps/system_heap.c b/drivers/dma-buf/heaps/system_heap.c
index 26d5dc89ea16..54674c02dcb4 100644
--- a/drivers/dma-buf/heaps/system_heap.c
+++ b/drivers/dma-buf/heaps/system_heap.c
@@ -50,8 +50,15 @@ static gfp_t order_flags[] = {HIGH_ORDER_GFP, HIGH_ORDER_GFP, LOW_ORDER_GFP};
  * to match with the sizes often found in IOMMUs. Using order 4 pages instead
  * of order 0 pages can significantly improve the performance of many IOMMUs
  * by reducing TLB pressure and time spent updating page tables.
+ *
+ * Note: When the order is 0, the minimum allocation is PAGE_SIZE. The possible
+ * page sizes for ARM devices could be 4K, 16K and 64K.
  */
-static const unsigned int orders[] = {8, 4, 0};
+#define ORDER_1M (20 - PAGE_SHIFT)
+#define ORDER_64K (16 - PAGE_SHIFT)
+#define ORDER_FOR_PAGE_SIZE (0)
+static const unsigned int orders[] = {ORDER_1M, ORDER_64K, ORDER_FOR_PAGE_SIZE};
+
 #define NUM_ORDERS ARRAY_SIZE(orders)

 static struct sg_table *dup_sg_table(struct sg_table *table)
This change sets the allocation orders for the different page sizes
(4k, 16k, 64k) based on PAGE_SHIFT. Before this change, the orders
for large page sizes were calculated incorrectly, which caused the
system heap to allocate from 2% to 4% more memory on 16KiB page size
kernels.

This change was tested on 4k/16k page size kernels.

Signed-off-by: Juan Yescas <jyescas@google.com>
---
 drivers/dma-buf/heaps/system_heap.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)