Message ID | 1671141424-81853-4-git-send-email-steven.sistare@oracle.com (mailing list archive) |
---|---|
State | New, archived |
Series | fixes for virtual address update |
I just realized it makes more sense to directly count dma->locked_vm, which can
be done with a few lines in vfio_lock_acct. I will do that tomorrow, along with
addressing any new comments from this review. - Steve

On 12/15/2022 4:57 PM, Steve Sistare wrote:
> A pinned dma mapping may include reserved pages, which are not included
> in the task's locked_vm count. Maintain a count of reserved pages, for
> iommu capable devices, so that locked_vm can be restored after fork or
> exec in a subsequent patch.
>
> Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
> ---
>  drivers/vfio/vfio_iommu_type1.c | 14 +++++++++++---
>  1 file changed, 11 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index cd49b656..add87cd 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -101,6 +101,7 @@ struct vfio_dma {
>  	struct rb_root		pfn_list;	/* Ex-user pinned pfn list */
>  	unsigned long		*bitmap;
>  	struct mm_struct	*mm;
> +	long			reserved_pages;
>  };
>
>  struct vfio_batch {
> @@ -662,7 +663,7 @@ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
>  {
>  	unsigned long pfn;
>  	struct mm_struct *mm = current->mm;
> -	long ret, pinned = 0, lock_acct = 0;
> +	long ret, pinned = 0, lock_acct = 0, reserved_pages = 0;
>  	bool rsvd;
>  	dma_addr_t iova = vaddr - dma->vaddr + dma->iova;
>
> @@ -716,7 +717,9 @@ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
>  			 * externally pinned pages are already counted against
>  			 * the user.
>  			 */
> -			if (!rsvd && !vfio_find_vpfn(dma, iova)) {
> +			if (rsvd) {
> +				reserved_pages++;
> +			} else if (!vfio_find_vpfn(dma, iova)) {
>  				if (!dma->lock_cap &&
>  				    mm->locked_vm + lock_acct + 1 > limit) {
>  					pr_warn("%s: RLIMIT_MEMLOCK (%ld) exceeded\n",
> @@ -746,6 +749,8 @@ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
>
>  out:
>  	ret = vfio_lock_acct(dma, lock_acct, false);
> +	if (!ret)
> +		dma->reserved_pages += reserved_pages;
>
>  unpin_out:
>  	if (batch->size == 1 && !batch->offset) {
> @@ -771,7 +776,7 @@ static long vfio_unpin_pages_remote(struct vfio_dma *dma, dma_addr_t iova,
>  				    unsigned long pfn, long npage,
>  				    bool do_accounting)
>  {
> -	long unlocked = 0, locked = 0;
> +	long unlocked = 0, locked = 0, reserved_pages = 0;
>  	long i;
>
>  	for (i = 0; i < npage; i++, iova += PAGE_SIZE) {
> @@ -779,9 +784,12 @@ static long vfio_unpin_pages_remote(struct vfio_dma *dma, dma_addr_t iova,
>  			unlocked++;
>  			if (vfio_find_vpfn(dma, iova))
>  				locked++;
> +		} else {
> +			reserved_pages++;
>  		}
>  	}
>
> +	dma->reserved_pages -= reserved_pages;
>  	if (do_accounting)
>  		vfio_lock_acct(dma, locked - unlocked, true);
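For reference, a rough sketch of the alternative Steve describes above: letting vfio_lock_acct() itself accumulate the per-mapping charge. The dma->locked_vm field and the exact placement of the increment are assumptions for illustration only; they are not part of the posted patch, which adds dma->reserved_pages instead.

 static int vfio_lock_acct(struct vfio_dma *dma, long npage, bool async)
 {
 	...
+	/* assumed: remember this mapping's contribution to locked_vm,
+	 * so it can be replayed against a new mm after fork/exec
+	 */
+	if (!ret)
+		dma->locked_vm += npage;
 
 	return ret;
 }

Because npage is negative on unpin, accumulating it here would keep the per-mapping total correct in both directions with only these few lines.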
diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
index cd49b656..add87cd 100644
--- a/drivers/vfio/vfio_iommu_type1.c
+++ b/drivers/vfio/vfio_iommu_type1.c
@@ -101,6 +101,7 @@ struct vfio_dma {
 	struct rb_root		pfn_list;	/* Ex-user pinned pfn list */
 	unsigned long		*bitmap;
 	struct mm_struct	*mm;
+	long			reserved_pages;
 };
 
 struct vfio_batch {
@@ -662,7 +663,7 @@ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
 {
 	unsigned long pfn;
 	struct mm_struct *mm = current->mm;
-	long ret, pinned = 0, lock_acct = 0;
+	long ret, pinned = 0, lock_acct = 0, reserved_pages = 0;
 	bool rsvd;
 	dma_addr_t iova = vaddr - dma->vaddr + dma->iova;
 
@@ -716,7 +717,9 @@ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
 			 * externally pinned pages are already counted against
 			 * the user.
 			 */
-			if (!rsvd && !vfio_find_vpfn(dma, iova)) {
+			if (rsvd) {
+				reserved_pages++;
+			} else if (!vfio_find_vpfn(dma, iova)) {
 				if (!dma->lock_cap &&
 				    mm->locked_vm + lock_acct + 1 > limit) {
 					pr_warn("%s: RLIMIT_MEMLOCK (%ld) exceeded\n",
@@ -746,6 +749,8 @@ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
 
 out:
 	ret = vfio_lock_acct(dma, lock_acct, false);
+	if (!ret)
+		dma->reserved_pages += reserved_pages;
 
 unpin_out:
 	if (batch->size == 1 && !batch->offset) {
@@ -771,7 +776,7 @@ static long vfio_unpin_pages_remote(struct vfio_dma *dma, dma_addr_t iova,
 				    unsigned long pfn, long npage,
 				    bool do_accounting)
 {
-	long unlocked = 0, locked = 0;
+	long unlocked = 0, locked = 0, reserved_pages = 0;
 	long i;
 
 	for (i = 0; i < npage; i++, iova += PAGE_SIZE) {
@@ -779,9 +784,12 @@ static long vfio_unpin_pages_remote(struct vfio_dma *dma, dma_addr_t iova,
 			unlocked++;
 			if (vfio_find_vpfn(dma, iova))
 				locked++;
+		} else {
+			reserved_pages++;
 		}
 	}
 
+	dma->reserved_pages -= reserved_pages;
 	if (do_accounting)
 		vfio_lock_acct(dma, locked - unlocked, true);
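To make the new accounting concrete, here is a small standalone illustration (ordinary userspace C, not kernel code) of the per-page split that vfio_pin_pages_remote() performs in the hunk above; the page states and counts are invented for the example:

#include <stdio.h>

/* Toy model: reserved pages are tracked but never charged to locked_vm,
 * pages already in the vpfn list were charged by the external pin path,
 * and everything else is charged via lock_acct.
 */
enum page_state { NORMAL, RESERVED, ALREADY_VPFN };

int main(void)
{
	enum page_state map[8] = { NORMAL, NORMAL, RESERVED, NORMAL,
				   ALREADY_VPFN, NORMAL, RESERVED, NORMAL };
	long pinned = 0, lock_acct = 0, reserved_pages = 0;

	for (int i = 0; i < 8; i++) {
		pinned++;			/* counted toward the pin total */
		if (map[i] == RESERVED)
			reserved_pages++;	/* tracked, not charged */
		else if (map[i] != ALREADY_VPFN)
			lock_acct++;		/* charged to locked_vm */
	}

	/* prints: pinned=8 lock_acct=5 reserved=2 (one page already accounted) */
	printf("pinned=%ld lock_acct=%ld reserved=%ld\n",
	       pinned, lock_acct, reserved_pages);
	return 0;
}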
A pinned dma mapping may include reserved pages, which are not included
in the task's locked_vm count. Maintain a count of reserved pages, for
iommu capable devices, so that locked_vm can be restored after fork or
exec in a subsequent patch.

Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
---
 drivers/vfio/vfio_iommu_type1.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)
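The subsequent patch that consumes this counter is not part of this posting. Purely as a hedged illustration of why the counter is kept, re-deriving the charge for an iommu-mapped range after fork or exec might look roughly like the helper below; vfio_dma_locked_pages() is an assumed name, and the formula ignores pages accounted through the external vfio_pin_pages() (vpfn) path:

/* Assumed helper, for illustration only: every page of an iommu-mapped
 * range is pinned, so the locked_vm charge to re-apply against a new mm
 * is the mapping size minus the reserved pages that were never charged.
 */
static long vfio_dma_locked_pages(struct vfio_dma *dma)
{
	return (dma->size >> PAGE_SHIFT) - dma->reserved_pages;
}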