
[1/3] drm/ttm: set sensible pool size limit.

Message ID 1407901926-24516-2-git-send-email-j.glisse@gmail.com (mailing list archive)
State New, archived

Commit Message

Jerome Glisse Aug. 13, 2014, 3:52 a.m. UTC
From: Jérôme Glisse <jglisse@redhat.com>

Due to a bug in the code it appears that some of the pools were never
properly used and always empty. Before fixing that bug, this patch sets
a sensible limit on the pool size. The magic 64MB number was nominated.

This is obviously a somewhat arbitrary number, but the intent of the ttm
pool is to minimize page allocation cost, especially when allocating
pages that will be marked to be excluded from CPU cache mechanisms. We
assume that mostly small buffers that are constantly
allocated/deallocated might suffer from core memory allocation overhead
as well as cache status changes. These are the assumptions behind the
64MB value.

This obviously needs some serious testing, including monitoring of the pool size.

Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Cc: Mario Kleiner <mario.kleiner.de@gmail.com>
Cc: Michel Dänzer <michel@daenzer.net>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 drivers/gpu/drm/ttm/ttm_memory.c | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)
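
As a worked example of the new sizing (illustrative numbers, not taken from
the patch): on a machine where glob->zone_kernel->max_mem is 2 GiB and
PAGE_SIZE is 4 KiB, the old code initialized each pool with
2 GiB / (2 * 4 KiB) = 262144 pages, i.e. up to 1 GiB of pooled pages per
allocator. With the cap, max_pool_size = min(2 GiB >> 3, 64 MiB) = 64 MiB,
so each of the two allocators is initialized with
64 MiB / (2 * 4 KiB) = 8192 pages, i.e. at most 32 MiB of pooled pages each.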

Comments

Michel Dänzer Aug. 13, 2014, 6:24 a.m. UTC | #1
On 13.08.2014 12:52, Jérôme Glisse wrote:
> From: Jérôme Glisse <jglisse@redhat.com>
> 
> Due to a bug in the code it appears that some of the pools were never
> properly used and always empty. Before fixing that bug, this patch sets
> a sensible limit on the pool size. The magic 64MB number was nominated.
> 
> This is obviously a somewhat arbitrary number, but the intent of the
> ttm pool is to minimize page allocation cost, especially when
> allocating pages that will be marked to be excluded from CPU cache
> mechanisms. We assume that mostly small buffers that are constantly
> allocated/deallocated might suffer from core memory allocation overhead
> as well as cache status changes. These are the assumptions behind the
> 64MB value.
> 
> This obviously needs some serious testing, including monitoring of the
> pool size.
> 
> Signed-off-by: Jérôme Glisse <jglisse@redhat.com>

[...]

> @@ -393,8 +404,9 @@ int ttm_mem_global_init(struct ttm_mem_global *glob)
>  		pr_info("Zone %7s: Available graphics memory: %llu kiB\n",
>  			zone->name, (unsigned long long)zone->max_mem >> 10);
>  	}
> -	ttm_page_alloc_init(glob, glob->zone_kernel->max_mem/(2*PAGE_SIZE));
> -	ttm_dma_page_alloc_init(glob, glob->zone_kernel->max_mem/(2*PAGE_SIZE));
> +	max_pool_size = min(glob->zone_kernel->max_mem >> 3UL, MAX_POOL_SIZE);

This introduces a 'comparison of distinct pointer types lacks a cast'
warning for me.
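
The warning comes from the type check in the kernel's min() macro:
glob->zone_kernel->max_mem is a uint64_t (unsigned long long in the kernel)
while MAX_POOL_SIZE (64UL << 20UL) evaluates to unsigned long, and min()
refuses to compare distinct types silently. A minimal sketch of one way to
silence it (not necessarily the fix that was merged) is to force a common
type with min_t():

/* Sketch only: cast both operands of min() to the same 64-bit type so
 * the type check in the min() macro is satisfied.
 */
max_pool_size = min_t(uint64_t, glob->zone_kernel->max_mem >> 3, MAX_POOL_SIZE);

Alternatively, defining the constant as (64ULL << 20) would give both
operands a matching type without a cast at the call site.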

Patch

diff --git a/drivers/gpu/drm/ttm/ttm_memory.c b/drivers/gpu/drm/ttm/ttm_memory.c
index dbc2def..73b2ded 100644
--- a/drivers/gpu/drm/ttm/ttm_memory.c
+++ b/drivers/gpu/drm/ttm/ttm_memory.c
@@ -38,6 +38,16 @@ 
 #include <linux/slab.h>
 
 #define TTM_MEMORY_ALLOC_RETRIES 4
+/* Have a maximum of 64MB of memory inside the pool.
+ *
+ * This is obviously a somewhat arbitrary number, but the intent of the ttm
+ * pool is to minimize page allocation cost, especially when allocating pages
+ * that will be marked to be excluded from CPU cache mechanisms. We assume
+ * that mostly small buffers that are constantly allocated/deallocated might
+ * suffer from core memory allocation overhead as well as cache status
+ * changes. These are the assumptions behind the 64MB value.
+ */
+#define MAX_POOL_SIZE (64UL << 20UL)
 
 struct ttm_mem_zone {
 	struct kobject kobj;
@@ -363,6 +373,7 @@ int ttm_mem_global_init(struct ttm_mem_global *glob)
 	int ret;
 	int i;
 	struct ttm_mem_zone *zone;
+	unsigned long max_pool_size;
 
 	spin_lock_init(&glob->lock);
 	glob->swap_queue = create_singlethread_workqueue("ttm_swap");
@@ -393,8 +404,9 @@ int ttm_mem_global_init(struct ttm_mem_global *glob)
 		pr_info("Zone %7s: Available graphics memory: %llu kiB\n",
 			zone->name, (unsigned long long)zone->max_mem >> 10);
 	}
-	ttm_page_alloc_init(glob, glob->zone_kernel->max_mem/(2*PAGE_SIZE));
-	ttm_dma_page_alloc_init(glob, glob->zone_kernel->max_mem/(2*PAGE_SIZE));
+	max_pool_size = min(glob->zone_kernel->max_mem >> 3UL, MAX_POOL_SIZE);
+	ttm_page_alloc_init(glob, max_pool_size / (2 * PAGE_SIZE));
+	ttm_dma_page_alloc_init(glob, max_pool_size / (2 * PAGE_SIZE));
 	return 0;
 out_no_zone:
 	ttm_mem_global_release(glob);
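
On the "monitoring pool size" point: a minimal sketch of how a driver could
expose the pool statistics through debugfs, using the existing
ttm_page_alloc_debugfs() helper. The wrapper names and the debugfs file name
below are hypothetical and not part of this patch:

#include <linux/debugfs.h>
#include <linux/module.h>
#include <linux/seq_file.h>
#include <drm/ttm/ttm_page_alloc.h>

/* Print the per-pool page counts maintained by ttm_page_alloc.c. */
static int ttm_pool_debugfs_show(struct seq_file *m, void *unused)
{
	return ttm_page_alloc_debugfs(m, NULL);
}

static int ttm_pool_debugfs_open(struct inode *inode, struct file *file)
{
	return single_open(file, ttm_pool_debugfs_show, NULL);
}

static const struct file_operations ttm_pool_debugfs_fops = {
	.owner   = THIS_MODULE,
	.open    = ttm_pool_debugfs_open,
	.read    = seq_read,
	.llseek  = seq_lseek,
	.release = single_release,
};

/* e.g. from the driver's debugfs init:
 * debugfs_create_file("ttm_page_pool", 0444, minor->debugfs_root,
 *                     NULL, &ttm_pool_debugfs_fops);
 */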