
[V5,1/5] iommu/amd: Remove unnecessary locking from AMD iommu driver

Message ID 20190815110944.3579-2-murphyt7@tcd.ie (mailing list archive)
State New, archived
Series iommu/amd: Convert the AMD iommu driver to the dma-iommu api

Commit Message

Tom Murphy Aug. 15, 2019, 11:09 a.m. UTC
We can remove the mutex lock from amd_iommu_map and amd_iommu_unmap.
iommu_map doesn’t lock while mapping and so no two calls should touch
the same iova range. The AMD driver already handles the page table page
allocations without locks so we can safely remove the locks.

Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
---
 drivers/iommu/amd_iommu.c       | 10 +---------
 drivers/iommu/amd_iommu_types.h |  1 -
 2 files changed, 1 insertion(+), 10 deletions(-)

Comments

Christoph Hellwig Aug. 20, 2019, 9:41 a.m. UTC | #1
On Thu, Aug 15, 2019 at 12:09:39PM +0100, Tom Murphy wrote:
> We can remove the mutex lock from amd_iommu_map and amd_iommu_unmap.
> iommu_map doesn’t lock while mapping and so no two calls should touch
> the same iova range. The AMD driver already handles the page table page
> allocations without locks so we can safely remove the locks.

I've been looking over the code and trying to understand how the
synchronization works.  I guess the cmpxchg64 in free_clear_pte
is the important point here?  I have to admit I don't fully understand
the concurrency issues here, but neither do I understand what the
mutex you removed might have helped to start with.
Tom Murphy Aug. 24, 2019, 7:56 a.m. UTC | #2
>I have to admit I don't fully understand the concurrency issues here, but neither do I understand what the mutex you removed might have helped to start with.

Each range in the page tables is protected by the IO virtual address
allocator. The iommu driver allocates an IOVA range using locks before
it writes to a page table range. The IOVA allocator acts like a lock
on a specific range of the page tables. So we can handle most of the
concurrency issues in the IOVA allocator and avoid locking while
writing to a range in the page tables.
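
As a rough C sketch of that idea (hand-written for this mail, not the
actual driver code: alloc_iova_fast()/free_iova_fast() are the real
allocator API, but sketch_write_ptes() is a made-up stand-in for the
page table update):

#include <linux/iova.h>
#include <linux/dma-mapping.h>

/* hypothetical helper standing in for the driver's pte writer */
static int sketch_write_ptes(dma_addr_t iova, phys_addr_t paddr, size_t size);

static dma_addr_t sketch_map(struct iova_domain *iovad, phys_addr_t paddr,
			     size_t size, u64 dma_mask)
{
	unsigned long pfn;

	/* Reserve the IOVA range; the allocator does its own locking. */
	pfn = alloc_iova_fast(iovad, size >> PAGE_SHIFT,
			      dma_mask >> PAGE_SHIFT, true);
	if (!pfn)
		return DMA_MAPPING_ERROR;

	/*
	 * We now own this IOVA range exclusively, so the page table
	 * writes for it need no further locking.
	 */
	if (sketch_write_ptes(pfn << PAGE_SHIFT, paddr, size)) {
		free_iova_fast(iovad, pfn, size >> PAGE_SHIFT);
		return DMA_MAPPING_ERROR;
	}

	return (dma_addr_t)pfn << PAGE_SHIFT;
}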

However, because we have multiple levels of page tables, we might have
to allocate a middle page (a PMD) which covers more than the IOVA range
we have allocated, so two unrelated mappings can race to install the
same middle page.
To solve this we could use locks:

//pseudo code
lock_page_table()
if (we need to allocate middle pages) {
 //allocate the page
 //set the PMD value
}
unlock_page_table()

but we can actually avoid any locking by doing the following:

//pseudo code
if (we need to allocate middle pages) {
 //allocate the page
 //cmpxchg64 to set the PMD if it wasn't already set since we last checked
 if (the PMD was set since we last checked)
   //free the page we just allocated
}

In this case we can end up doing a pointless page allocation and free,
but it's worth it to avoid taking locks.

You can see this in the intel iommu code here:
https://github.com/torvalds/linux/blob/9140d8bdd4c5a04abe181bb300378355d56990a4/drivers/iommu/intel-iommu.c#L904
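
Condensed into C, the pattern looks roughly like this (a sketch
modeled on the AMD driver's alloc_pte() using its existing helpers,
not the exact code from either driver):

static bool install_middle_page(u64 *pte, int level, gfp_t gfp)
{
	u64 __pte, __npte;
	u64 *page;

	__pte = READ_ONCE(*pte);
	if (IOMMU_PTE_PRESENT(__pte))
		return true;	/* another CPU already installed it */

	page = (u64 *)get_zeroed_page(gfp);
	if (!page)
		return false;

	__npte = PM_LEVEL_PDE(level, iommu_virt_to_phys(page));

	/*
	 * Publish the new table level. Only one writer can win the
	 * cmpxchg64; a loser frees its freshly allocated page and
	 * uses the winner's, so no lock is needed.
	 */
	if (cmpxchg64(pte, __pte, __npte) != __pte)
		free_page((unsigned long)page);

	return true;
}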

>what the mutex you removed might have helped to start with.
The mutex I removed is arguably completely useless.

In the dma ops path we handle the IOVA allocations in the driver so we
can be sure a certain range is protected by the IOVA allocator.

Because the iommu ops path doesn't handle the IOVA allocations, it
seems reasonable to lock the page tables to avoid two writers writing
to the same range at the same time. Without the lock it's complete
chaos: every writer can be writing to the same range at the same time,
and the result is complete garbage.
BUT the locking doesn't actually make any real difference. Even with
locking we still have a race condition if two writers want to write to
the same range at the same time; the race just becomes whoever gets the
lock first, and we still can't be sure what the result will be. So the
result is still garbage, just slightly more usable garbage, because at
least the range is consistent for one writer.
It simply never makes sense to have two writers writing to the same
range, and adding a lock doesn't fix that.
The Intel iommu ops path already doesn't use locks for its page
tables, so this isn't a new idea; I'm just doing the same for the AMD
iommu driver.

Does all that make sense?

Christoph Hellwig Aug. 24, 2019, 10:41 p.m. UTC | #3
Thanks for the explanation, Tom. It might make sense to add a condensed
version of it to the commit log for the next iteration.

Patch

diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index 008da21a2592..1948be7ac8f8 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -2858,7 +2858,6 @@  static void protection_domain_free(struct protection_domain *domain)
 static int protection_domain_init(struct protection_domain *domain)
 {
 	spin_lock_init(&domain->lock);
-	mutex_init(&domain->api_lock);
 	domain->id = domain_id_alloc();
 	if (!domain->id)
 		return -ENOMEM;
@@ -3045,9 +3044,7 @@  static int amd_iommu_map(struct iommu_domain *dom, unsigned long iova,
 	if (iommu_prot & IOMMU_WRITE)
 		prot |= IOMMU_PROT_IW;
 
-	mutex_lock(&domain->api_lock);
 	ret = iommu_map_page(domain, iova, paddr, page_size, prot, GFP_KERNEL);
-	mutex_unlock(&domain->api_lock);
 
 	domain_flush_np_cache(domain, iova, page_size);
 
@@ -3058,16 +3055,11 @@  static size_t amd_iommu_unmap(struct iommu_domain *dom, unsigned long iova,
 			   size_t page_size)
 {
 	struct protection_domain *domain = to_pdomain(dom);
-	size_t unmap_size;
 
 	if (domain->mode == PAGE_MODE_NONE)
 		return 0;
 
-	mutex_lock(&domain->api_lock);
-	unmap_size = iommu_unmap_page(domain, iova, page_size);
-	mutex_unlock(&domain->api_lock);
-
-	return unmap_size;
+	return iommu_unmap_page(domain, iova, page_size);
 }
 
 static phys_addr_t amd_iommu_iova_to_phys(struct iommu_domain *dom,
diff --git a/drivers/iommu/amd_iommu_types.h b/drivers/iommu/amd_iommu_types.h
index 9ac229e92b07..b764e1a73dcf 100644
--- a/drivers/iommu/amd_iommu_types.h
+++ b/drivers/iommu/amd_iommu_types.h
@@ -468,7 +468,6 @@  struct protection_domain {
 	struct iommu_domain domain; /* generic domain handle used by
 				       iommu core code */
 	spinlock_t lock;	/* mostly used to lock the page table*/
-	struct mutex api_lock;	/* protect page tables in the iommu-api path */
 	u16 id;			/* the domain id written to the device table */
 	int mode;		/* paging mode (0-6 levels) */
 	u64 *pt_root;		/* page table root pointer */