Message ID | dc5c51aaba50906a92b9ba1a5137ed462484a7be.1707144953.git.robin.murphy@arm.com
---|---
State | New
Series | iommu/iova: use named kmem_cache for iova magazines
On 05/02/2024 15:32, Robin Murphy wrote:
> From: Pasha Tatashin <pasha.tatashin@soleen.com>
>
> The magazine buffers can take gigabytes of kmem memory, dominating all
> other allocations. For observability purpose create named slab cache so
> the iova magazine memory overhead can be clearly observed.
>
> With this change:
>
>> slabtop -o | head
> Active / Total Objects (% used)    : 869731 / 952904 (91.3%)
> Active / Total Slabs (% used)      : 103411 / 103974 (99.5%)
> Active / Total Caches (% used)     : 135 / 211 (64.0%)
> Active / Total Size (% used)       : 395389.68K / 411430.20K (96.1%)
> Minimum / Average / Maximum Object : 0.02K / 0.43K / 8.00K
>
>    OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
>  244412 244239  99%    1.00K  61103        4    244412K iommu_iova_magazine
>   91636  88343  96%    0.03K    739      124      2956K kmalloc-32
>   75744  74844  98%    0.12K   2367       32      9468K kernfs_node_cache
>
> On this machine it is now clear that magazine use 242M of kmem memory.

Those caches could do with a trimming ...

>
> Acked-by: David Rientjes <rientjes@google.com>
> Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
> [ rm: adjust to rework of iova_cache_{get,put} ]
> Signed-off-by: Robin Murphy <robin.murphy@arm.com>
> ---

FWIW:
Reviewed-by: John Garry <john.g.garry@oracle.com>
On 06/02/2024 11:24 am, John Garry wrote:
> On 05/02/2024 15:32, Robin Murphy wrote:
>> From: Pasha Tatashin <pasha.tatashin@soleen.com>
>>
>> The magazine buffers can take gigabytes of kmem memory, dominating all
>> other allocations. For observability purpose create named slab cache so
>> the iova magazine memory overhead can be clearly observed.
>>
>> With this change:
>>
>>> slabtop -o | head
>> Active / Total Objects (% used)    : 869731 / 952904 (91.3%)
>> Active / Total Slabs (% used)      : 103411 / 103974 (99.5%)
>> Active / Total Caches (% used)     : 135 / 211 (64.0%)
>> Active / Total Size (% used)       : 395389.68K / 411430.20K (96.1%)
>> Minimum / Average / Maximum Object : 0.02K / 0.43K / 8.00K
>>
>>    OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
>>  244412 244239  99%    1.00K  61103        4    244412K iommu_iova_magazine
>>   91636  88343  96%    0.03K    739      124      2956K kmalloc-32
>>   75744  74844  98%    0.12K   2367       32      9468K kernfs_node_cache
>>
>> On this machine it is now clear that magazine use 242M of kmem memory.
>
> Those caches could do with a trimming ...

See the discussion on v1 for more details:

https://lore.kernel.org/linux-iommu/20240201193014.2785570-1-tatashin@google.com/

but it seems this really is the idle baseline for lots of CPUs * lots of
domains - if all those devices get going in anger it's likely that combined
iova + iova_magazine usage truly will blow up into gigabytes.

Cheers,
Robin.

>
>>
>> Acked-by: David Rientjes <rientjes@google.com>
>> Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
>> [ rm: adjust to rework of iova_cache_{get,put} ]
>> Signed-off-by: Robin Murphy <robin.murphy@arm.com>
>> ---
>
> FWIW:
> Reviewed-by: John Garry <john.g.garry@oracle.com>
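For a rough sense of the "lots of CPUs * lots of domains" scaling Robin describes, here is a back-of-envelope sketch. It assumes the rcache layout in drivers/iommu/iova.c around this series (six size classes per domain, a loaded and a previous ~1 KiB magazine per CPU per class); the CPU and domain counts are illustrative assumptions, not figures taken from the report above.

```c
#include <stdio.h>

/*
 * Back-of-envelope estimate of the idle iova_magazine footprint.
 * The constants reflect drivers/iommu/iova.c around this series
 * (hedged): six rcache size classes per domain, a "loaded" and a
 * "prev" magazine per CPU per class, each magazine roughly 1 KiB.
 * nr_cpus and nr_domains below are illustrative assumptions only.
 */
#define RANGE_CLASSES	6	/* IOVA_RANGE_CACHE_MAX_SIZE */
#define MAGS_PER_CPU	2	/* loaded + prev */
#define MAG_BYTES	1024	/* ~1.00K per the slabtop output above */

int main(void)
{
	unsigned long nr_cpus = 256;	/* assumption */
	unsigned long nr_domains = 80;	/* assumption */

	unsigned long bytes = nr_domains * nr_cpus * RANGE_CLASSES *
			      MAGS_PER_CPU * MAG_BYTES;

	/*
	 * 80 domains * 256 CPUs * 6 * 2 * 1 KiB = 240 MiB while idle,
	 * the same order of magnitude as the 242M reported above,
	 * before counting depot magazines or the iova entries themselves.
	 */
	printf("~%lu MiB in per-CPU magazines alone\n", bytes >> 20);
	return 0;
}
```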
On Mon, Feb 05, 2024 at 03:32:41PM +0000, Robin Murphy wrote:
> From: Pasha Tatashin <pasha.tatashin@soleen.com>
>
> The magazine buffers can take gigabytes of kmem memory, dominating all
> other allocations. For observability purpose create named slab cache so
> the iova magazine memory overhead can be clearly observed.
>
> With this change:
>
> > slabtop -o | head
> Active / Total Objects (% used)    : 869731 / 952904 (91.3%)
> Active / Total Slabs (% used)      : 103411 / 103974 (99.5%)
> Active / Total Caches (% used)     : 135 / 211 (64.0%)
> Active / Total Size (% used)       : 395389.68K / 411430.20K (96.1%)
> Minimum / Average / Maximum Object : 0.02K / 0.43K / 8.00K
>
>    OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
>  244412 244239  99%    1.00K  61103        4    244412K iommu_iova_magazine
>   91636  88343  96%    0.03K    739      124      2956K kmalloc-32
>   75744  74844  98%    0.12K   2367       32      9468K kernfs_node_cache
>
> On this machine it is now clear that magazine use 242M of kmem memory.
>
> Acked-by: David Rientjes <rientjes@google.com>
> Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
> [ rm: adjust to rework of iova_cache_{get,put} ]
> Signed-off-by: Robin Murphy <robin.murphy@arm.com>
> ---

Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
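As an aside on the numbers being quoted: the 1.00K object size slabtop reports follows directly from the magazine layout. The sketch below mirrors, with hedging, how struct iova_magazine is defined in drivers/iommu/iova.c around this series; on a 64-bit build its 127 pfn slots plus one header word come to exactly 1024 bytes.

```c
#include <stdio.h>

/*
 * Approximate shape of the magazine as defined in drivers/iommu/iova.c
 * around this series (field names per the upstream file, hedged).
 */
#define IOVA_MAG_SIZE 127

struct iova_magazine {
	union {
		unsigned long size;		/* entry count while on a CPU rcache */
		struct iova_magazine *next;	/* link while parked on the depot */
	};
	unsigned long pfns[IOVA_MAG_SIZE];
};

int main(void)
{
	/*
	 * 128 unsigned longs: 1024 bytes on a 64-bit build, i.e. the
	 * 1.00K object size slabtop shows for iommu_iova_magazine.
	 */
	printf("sizeof(struct iova_magazine) = %zu\n",
	       sizeof(struct iova_magazine));
	return 0;
}
```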
diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index b5de865ee50b..d59d0ea2fd21 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -590,6 +590,8 @@ struct iova_rcache {
 	struct delayed_work work;
 };
 
+static struct kmem_cache *iova_magazine_cache;
+
 unsigned long iova_rcache_range(void)
 {
 	return PAGE_SIZE << (IOVA_RANGE_CACHE_MAX_SIZE - 1);
@@ -599,7 +601,7 @@ static struct iova_magazine *iova_magazine_alloc(gfp_t flags)
 {
 	struct iova_magazine *mag;
 
-	mag = kmalloc(sizeof(*mag), flags);
+	mag = kmem_cache_alloc(iova_magazine_cache, flags);
 	if (mag)
 		mag->size = 0;
 
@@ -608,7 +610,7 @@ static struct iova_magazine *iova_magazine_alloc(gfp_t flags)
 
 static void iova_magazine_free(struct iova_magazine *mag)
 {
-	kfree(mag);
+	kmem_cache_free(iova_magazine_cache, mag);
 }
 
 static void
@@ -953,6 +955,12 @@ int iova_cache_get(void)
 	if (!iova_cache)
 		goto out_err;
 
+	iova_magazine_cache = kmem_cache_create("iommu_iova_magazine",
+						sizeof(struct iova_magazine),
+						0, SLAB_HWCACHE_ALIGN, NULL);
+	if (!iova_magazine_cache)
+		goto out_err;
+
 	err = cpuhp_setup_state_multi(CPUHP_IOMMU_IOVA_DEAD, "iommu/iova:dead",
 				      NULL, iova_cpuhp_dead);
 	if (err) {
@@ -968,6 +976,7 @@ int iova_cache_get(void)
 
 out_err:
 	kmem_cache_destroy(iova_cache);
+	kmem_cache_destroy(iova_magazine_cache);
 	mutex_unlock(&iova_cache_mutex);
 	return err;
 }
@@ -984,6 +993,7 @@ void iova_cache_put(void)
 	if (!iova_cache_users) {
 		cpuhp_remove_multi_state(CPUHP_IOMMU_IOVA_DEAD);
 		kmem_cache_destroy(iova_cache);
+		kmem_cache_destroy(iova_magazine_cache);
 	}
 	mutex_unlock(&iova_cache_mutex);
 }
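One detail of the error handling worth noting: when iova_cache creation fails, the shared out_err label is reached before iova_magazine_cache is ever created, so the added kmem_cache_destroy(iova_magazine_cache) runs on a NULL pointer; that is fine, since kmem_cache_destroy() simply returns for a NULL cache. The minimal module sketch below (demo_obj and demo_cache are invented names, not part of this patch) shows the same create/alloc/free/destroy lifecycle and why a dedicated named cache gives the observability the commit message is after.

```c
/*
 * Hypothetical example module, not part of the patch: the same
 * lifecycle the patch applies to iova magazines, using made-up names.
 */
#include <linux/module.h>
#include <linux/slab.h>

struct demo_obj {
	unsigned long payload[16];
};

static struct kmem_cache *demo_cache;

static int __init demo_init(void)
{
	struct demo_obj *obj;

	demo_cache = kmem_cache_create("demo_obj", sizeof(struct demo_obj),
				       0, SLAB_HWCACHE_ALIGN, NULL);
	if (!demo_cache)
		return -ENOMEM;

	/*
	 * Objects now appear under their own "demo_obj" line in
	 * /proc/slabinfo and slabtop instead of being folded into a
	 * generic kmalloc-<size> cache.
	 */
	obj = kmem_cache_zalloc(demo_cache, GFP_KERNEL);
	if (!obj) {
		kmem_cache_destroy(demo_cache);
		return -ENOMEM;
	}
	kmem_cache_free(demo_cache, obj);
	return 0;
}

static void __exit demo_exit(void)
{
	/*
	 * Safe even if demo_cache is NULL, which is the property the
	 * shared out_err path above relies on.
	 */
	kmem_cache_destroy(demo_cache);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
```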