Message ID | 20220507085204.16914-1-yf.wang@mediatek.com (mailing list archive)
State      | New, archived
Series     | iommu/dma: Fix iova map result check bug
> The data type of the return value of iommu_map_sg_atomic is
> ssize_t, but the data type of the iova size is size_t, i.e. one
> is signed while the other is unsigned.
>
> When the iommu_map_sg_atomic return value is compared with the
> iova size, the signed value is implicitly converted to unsigned.
> If the iova map fails and iommu_map_sg_atomic returns an error
> code less than 0, then (ret < iova_len) is false, so the iova is
> not freed, and the master can still successfully get the iova of
> a failed map, which is not expected.
>
> Therefore, we need to check the return value of
> iommu_map_sg_atomic in two cases, according to whether it is
> less than 0.
>
> Fixes: ad8f36e4b6b1 ("iommu: return full error code from iommu_map_sg[_atomic]()")
> Signed-off-by: Yunfei Wang <yf.wang@mediatek.com>

Yes, we have to make sure ssize_t >= 0 before comparing ssize_t and size_t.

Reviewed-by: Miles Chen <miles.chen@mediatek.com>

> Cc: <stable@vger.kernel.org> # 5.15.*
On 2022-05-07 09:52, yf.wang@mediatek.com wrote:
> From: Yunfei Wang <yf.wang@mediatek.com>
>
> The data type of the return value of iommu_map_sg_atomic is
> ssize_t, but the data type of the iova size is size_t, i.e. one
> is signed while the other is unsigned.
>
> When the iommu_map_sg_atomic return value is compared with the
> iova size, the signed value is implicitly converted to unsigned.
> If the iova map fails and iommu_map_sg_atomic returns an error
> code less than 0, then (ret < iova_len) is false, so the iova is
> not freed, and the master can still successfully get the iova of
> a failed map, which is not expected.
>
> Therefore, we need to check the return value of
> iommu_map_sg_atomic in two cases, according to whether it is
> less than 0.

Heh, it's always a fun day when I have to go back to the C standard to remind myself of the usual arithmetic conversions. But indeed this seems correct, and even though the double comparisons look a little non-obvious on their own I can't think of an objectively better alternative, so:

Reviewed-by: Robin Murphy <robin.murphy@arm.com>

> Fixes: ad8f36e4b6b1 ("iommu: return full error code from iommu_map_sg[_atomic]()")
> Signed-off-by: Yunfei Wang <yf.wang@mediatek.com>
> Cc: <stable@vger.kernel.org> # 5.15.*
> ---
>  drivers/iommu/dma-iommu.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 09f6e1c0f9c0..2932281e93fc 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -776,6 +776,7 @@ static struct page **__iommu_dma_alloc_noncontiguous(struct device *dev,
>  	unsigned int count, min_size, alloc_sizes = domain->pgsize_bitmap;
>  	struct page **pages;
>  	dma_addr_t iova;
> +	ssize_t ret;
>
>  	if (static_branch_unlikely(&iommu_deferred_attach_enabled) &&
>  	    iommu_deferred_attach(dev, domain))
> @@ -813,8 +814,8 @@ static struct page **__iommu_dma_alloc_noncontiguous(struct device *dev,
>  			arch_dma_prep_coherent(sg_page(sg), sg->length);
>  	}
>
> -	if (iommu_map_sg_atomic(domain, iova, sgt->sgl, sgt->orig_nents, ioprot)
> -			< size)
> +	ret = iommu_map_sg_atomic(domain, iova, sgt->sgl, sgt->orig_nents, ioprot);
> +	if (ret < 0 || ret < size)
>  		goto out_free_sg;
>
>  	sgt->sgl->dma_address = iova;
> @@ -1209,7 +1210,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
>  	 * implementation - it knows better than we do.
>  	 */
>  	ret = iommu_map_sg_atomic(domain, iova, sg, nents, prot);
> -	if (ret < iova_len)
> +	if (ret < 0 || ret < iova_len)
>  		goto out_free_iova;
>
>  	return __finalise_sg(dev, sg, nents, iova);
On Sat, 2022-05-07 at 16:52 +0800, yf.wang@mediatek.com wrote:
> From: Yunfei Wang <yf.wang@mediatek.com>
>
> The data type of the return value of iommu_map_sg_atomic is
> ssize_t, but the data type of the iova size is size_t, i.e. one
> is signed while the other is unsigned.
>
> When the iommu_map_sg_atomic return value is compared with the
> iova size, the signed value is implicitly converted to unsigned.
> If the iova map fails and iommu_map_sg_atomic returns an error
> code less than 0, then (ret < iova_len) is false, so the iova is
> not freed, and the master can still successfully get the iova of
> a failed map, which is not expected.
>
> Therefore, we need to check the return value of
> iommu_map_sg_atomic in two cases, according to whether it is
> less than 0.
>
> Fixes: ad8f36e4b6b1 ("iommu: return full error code from iommu_map_sg[_atomic]()")
> Signed-off-by: Yunfei Wang <yf.wang@mediatek.com>
> Cc: <stable@vger.kernel.org> # 5.15.*
> ---
>  drivers/iommu/dma-iommu.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index 09f6e1c0f9c0..2932281e93fc 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -776,6 +776,7 @@ static struct page **__iommu_dma_alloc_noncontiguous(struct device *dev,
>  	unsigned int count, min_size, alloc_sizes = domain->pgsize_bitmap;
>  	struct page **pages;
>  	dma_addr_t iova;
> +	ssize_t ret;
>
>  	if (static_branch_unlikely(&iommu_deferred_attach_enabled) &&
>  	    iommu_deferred_attach(dev, domain))
> @@ -813,8 +814,8 @@ static struct page **__iommu_dma_alloc_noncontiguous(struct device *dev,
>  			arch_dma_prep_coherent(sg_page(sg), sg->length);
>  	}
>
> -	if (iommu_map_sg_atomic(domain, iova, sgt->sgl, sgt->orig_nents, ioprot)
> -			< size)
> +	ret = iommu_map_sg_atomic(domain, iova, sgt->sgl, sgt->orig_nents, ioprot);
> +	if (ret < 0 || ret < size)

	if (IS_ERR_VALUE(ret) || ret < size)

for readability?

>  		goto out_free_sg;
>
>  	sgt->sgl->dma_address = iova;
> @@ -1209,7 +1210,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
>  	 * implementation - it knows better than we do.
>  	 */
>  	ret = iommu_map_sg_atomic(domain, iova, sg, nents, prot);
> -	if (ret < iova_len)
> +	if (ret < 0 || ret < iova_len)
>  		goto out_free_iova;
>
>  	return __finalise_sg(dev, sg, nents, iova);
On Sat, May 07, 2022 at 04:52:03PM +0800, yf.wang@mediatek.com wrote:
>  drivers/iommu/dma-iommu.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)

Applied, thanks.
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 09f6e1c0f9c0..2932281e93fc 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -776,6 +776,7 @@ static struct page **__iommu_dma_alloc_noncontiguous(struct device *dev,
 	unsigned int count, min_size, alloc_sizes = domain->pgsize_bitmap;
 	struct page **pages;
 	dma_addr_t iova;
+	ssize_t ret;

 	if (static_branch_unlikely(&iommu_deferred_attach_enabled) &&
 	    iommu_deferred_attach(dev, domain))
@@ -813,8 +814,8 @@ static struct page **__iommu_dma_alloc_noncontiguous(struct device *dev,
 			arch_dma_prep_coherent(sg_page(sg), sg->length);
 	}

-	if (iommu_map_sg_atomic(domain, iova, sgt->sgl, sgt->orig_nents, ioprot)
-			< size)
+	ret = iommu_map_sg_atomic(domain, iova, sgt->sgl, sgt->orig_nents, ioprot);
+	if (ret < 0 || ret < size)
 		goto out_free_sg;

 	sgt->sgl->dma_address = iova;
@@ -1209,7 +1210,7 @@ static int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
 	 * implementation - it knows better than we do.
 	 */
 	ret = iommu_map_sg_atomic(domain, iova, sg, nents, prot);
-	if (ret < iova_len)
+	if (ret < 0 || ret < iova_len)
 		goto out_free_iova;

 	return __finalise_sg(dev, sg, nents, iova);