
vfio: account iommu allocations

Message ID 20231130200900.2320829-1-pasha.tatashin@soleen.com (mailing list archive)
State New, archived
Series vfio: account iommu allocations

Commit Message

Pasha Tatashin Nov. 30, 2023, 8:09 p.m. UTC
iommu allocations should be accounted in order to allow admins to
monitor and limit the amount of iommu memory.

Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
---
 drivers/vfio/vfio_iommu_type1.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

This patch is spun off from the series:
https://lore.kernel.org/all/20231128204938.1453583-1-pasha.tatashin@soleen.com
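
For context: GFP_KERNEL_ACCOUNT is GFP_KERNEL with __GFP_ACCOUNT set, which charges
the allocation (here, the IOMMU page-table memory built up by iommu_map()) to the
calling task's memory cgroup, where it can be observed and limited. The helper below
is a minimal illustrative sketch of the pattern the patch applies at each iommu_map()
call site; the helper name example_vfio_map_accounted is not part of the patch or the
kernel.

	#include <linux/iommu.h>
	#include <linux/gfp.h>

	/*
	 * Illustrative only: map a userspace-requested range with accounted
	 * allocations.  GFP_KERNEL_ACCOUNT == GFP_KERNEL | __GFP_ACCOUNT, so
	 * any page-table memory the IOMMU driver allocates for this mapping
	 * is charged to the current task's memory cgroup.
	 */
	static int example_vfio_map_accounted(struct iommu_domain *domain,
					      unsigned long iova,
					      phys_addr_t paddr,
					      size_t size, int prot)
	{
		return iommu_map(domain, iova, paddr, size,
				 prot | IOMMU_CACHE, GFP_KERNEL_ACCOUNT);
	}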

Comments

Jason Gunthorpe Dec. 4, 2023, 3:46 p.m. UTC | #1
On Thu, Nov 30, 2023 at 08:09:00PM +0000, Pasha Tatashin wrote:
> iommu allocations should be accounted in order to allow admins to
> monitor and limit the amount of iommu memory.
> 
> Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
> ---
>  drivers/vfio/vfio_iommu_type1.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
> 
> This patch is spun off from the series:
>
> https://lore.kernel.org/all/20231128204938.1453583-1-pasha.tatashin@soleen.com

Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>

Jason
Pasha Tatashin Dec. 4, 2023, 8:31 p.m. UTC | #2
On Mon, Dec 4, 2023 at 10:46 AM Jason Gunthorpe <jgg@ziepe.ca> wrote:
>
> On Thu, Nov 30, 2023 at 08:09:00PM +0000, Pasha Tatashin wrote:
> > iommu allocations should be accounted in order to allow admins to
> > monitor and limit the amount of iommu memory.
> >
> > Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
> > ---
> >  drivers/vfio/vfio_iommu_type1.c | 8 +++++---
> >  1 file changed, 5 insertions(+), 3 deletions(-)
> >
> > This patch is spun off from the series:
> >
> > https://lore.kernel.org/all/20231128204938.1453583-1-pasha.tatashin@soleen.com
>
> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>

Thank you,
Pasha

>
> Jason
Alex Williamson Dec. 5, 2023, midnight UTC | #3
On Thu, 30 Nov 2023 20:09:00 +0000
Pasha Tatashin <pasha.tatashin@soleen.com> wrote:

> iommu allocations should be accounted in order to allow admins to
> monitor and limit the amount of iommu memory.
> 
> Signed-off-by: Pasha Tatashin <pasha.tatashin@soleen.com>
> ---
>  drivers/vfio/vfio_iommu_type1.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
> 
> This patch is spun off from the series:
> https://lore.kernel.org/all/20231128204938.1453583-1-pasha.tatashin@soleen.com
> 
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index eacd6ec04de5..b2854d7939ce 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -1436,7 +1436,7 @@ static int vfio_iommu_map(struct vfio_iommu *iommu, dma_addr_t iova,
>  	list_for_each_entry(d, &iommu->domain_list, next) {
>  		ret = iommu_map(d->domain, iova, (phys_addr_t)pfn << PAGE_SHIFT,
>  				npage << PAGE_SHIFT, prot | IOMMU_CACHE,
> -				GFP_KERNEL);
> +				GFP_KERNEL_ACCOUNT);
>  		if (ret)
>  			goto unwind;
>  
> @@ -1750,7 +1750,8 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
>  			}
>  
>  			ret = iommu_map(domain->domain, iova, phys, size,
> -					dma->prot | IOMMU_CACHE, GFP_KERNEL);
> +					dma->prot | IOMMU_CACHE,
> +					GFP_KERNEL_ACCOUNT);
>  			if (ret) {
>  				if (!dma->iommu_mapped) {
>  					vfio_unpin_pages_remote(dma, iova,
> @@ -1845,7 +1846,8 @@ static void vfio_test_domain_fgsp(struct vfio_domain *domain, struct list_head *
>  			continue;
>  
>  		ret = iommu_map(domain->domain, start, page_to_phys(pages), PAGE_SIZE * 2,
> -				IOMMU_READ | IOMMU_WRITE | IOMMU_CACHE, GFP_KERNEL);
> +				IOMMU_READ | IOMMU_WRITE | IOMMU_CACHE,
> +				GFP_KERNEL_ACCOUNT);
>  		if (!ret) {
>  			size_t unmapped = iommu_unmap(domain->domain, start, PAGE_SIZE);
>  

Applied to vfio next branch for v6.8.  Thanks,

Alex

Patch

diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index eacd6ec04de5..b2854d7939ce 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -1436,7 +1436,7 @@  static int vfio_iommu_map(struct vfio_iommu *iommu, dma_addr_t iova,
 	list_for_each_entry(d, &iommu->domain_list, next) {
 		ret = iommu_map(d->domain, iova, (phys_addr_t)pfn << PAGE_SHIFT,
 				npage << PAGE_SHIFT, prot | IOMMU_CACHE,
-				GFP_KERNEL);
+				GFP_KERNEL_ACCOUNT);
 		if (ret)
 			goto unwind;
 
@@ -1750,7 +1750,8 @@  static int vfio_iommu_replay(struct vfio_iommu *iommu,
 			}
 
 			ret = iommu_map(domain->domain, iova, phys, size,
-					dma->prot | IOMMU_CACHE, GFP_KERNEL);
+					dma->prot | IOMMU_CACHE,
+					GFP_KERNEL_ACCOUNT);
 			if (ret) {
 				if (!dma->iommu_mapped) {
 					vfio_unpin_pages_remote(dma, iova,
@@ -1845,7 +1846,8 @@  static void vfio_test_domain_fgsp(struct vfio_domain *domain, struct list_head *
 			continue;
 
 		ret = iommu_map(domain->domain, start, page_to_phys(pages), PAGE_SIZE * 2,
-				IOMMU_READ | IOMMU_WRITE | IOMMU_CACHE, GFP_KERNEL);
+				IOMMU_READ | IOMMU_WRITE | IOMMU_CACHE,
+				GFP_KERNEL_ACCOUNT);
 		if (!ret) {
 			size_t unmapped = iommu_unmap(domain->domain, start, PAGE_SIZE);