Message ID | 20200831203811.8494-7-nicoleotsuka@gmail.com (mailing list archive) |
---|---|
State | Awaiting Upstream |
Series | Avoid overflow at boundary_size |
diff --git a/arch/x86/kernel/amd_gart_64.c b/arch/x86/kernel/amd_gart_64.c
index e89031e9c847..7fa0bb490065 100644
--- a/arch/x86/kernel/amd_gart_64.c
+++ b/arch/x86/kernel/amd_gart_64.c
@@ -96,8 +96,8 @@ static unsigned long alloc_iommu(struct device *dev, int size,
 	base_index = ALIGN(iommu_bus_base & dma_get_seg_boundary(dev),
 			   PAGE_SIZE) >> PAGE_SHIFT;
-	boundary_size = ALIGN((u64)dma_get_seg_boundary(dev) + 1,
-			      PAGE_SIZE) >> PAGE_SHIFT;
+	/* Overflow-free shortcut for: ALIGN(b + 1, 1 << s) >> s */
+	boundary_size = (dma_get_seg_boundary(dev) >> PAGE_SHIFT) + 1;
 
 	spin_lock_irqsave(&iommu_bitmap_lock, flags);
 	offset = iommu_area_alloc(iommu_gart_bitmap, iommu_pages, next_bit,
The boundary_size might be as large as ULONG_MAX, which means that a
device has no specific boundary limit. So either "+ 1" or passing it
to ALIGN() could overflow.

According to kernel defines:
    #define ALIGN_MASK(x, mask) (((x) + (mask)) & ~(mask))
    #define ALIGN(x, a) ALIGN_MASK(x, (typeof(x))(a) - 1)

We can simplify the logic here:
      ALIGN(boundary + 1, 1 << shift) >> shift
    = ALIGN_MASK(b + 1, (1 << s) - 1) >> s
    = {[b + 1 + (1 << s) - 1] & ~[(1 << s) - 1]} >> s
    = [b + 1 + (1 << s) - 1] >> s
    = [b + (1 << s)] >> s
    = (b >> s) + 1

So use the safer shortcut to fix the potential overflow.

Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com>
Cc: Christoph Hellwig <hch@lst.de>
---
 arch/x86/kernel/amd_gart_64.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
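To make the overflow and the equivalence concrete, here is a minimal userspace sketch (not part of the patch) that mirrors the ALIGN()/ALIGN_MASK() definitions quoted in the commit message; PAGE_SHIFT = 12 and the test values are illustrative assumptions.

```c
/*
 * Userspace sketch only -- NOT kernel code. It copies the ALIGN()
 * macros quoted above to show why "b + 1" wraps when the boundary
 * mask is ULONG_MAX, and why (b >> s) + 1 gives the same result in
 * the non-overflowing cases. PAGE_SHIFT = 12 is assumed here.
 */
#include <stdio.h>
#include <limits.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

/* Same definitions as quoted in the commit message */
#define ALIGN_MASK(x, mask)	(((x) + (mask)) & ~(mask))
#define ALIGN(x, a)		ALIGN_MASK(x, (typeof(x))(a) - 1)

int main(void)
{
	/* A device with no boundary limit reports a mask of ULONG_MAX */
	unsigned long boundary = ULONG_MAX;

	/* Old form: "+ 1" wraps ULONG_MAX to 0, so the result is 0 pages */
	unsigned long old_size = ALIGN(boundary + 1, PAGE_SIZE) >> PAGE_SHIFT;

	/* New form: (b >> s) + 1 cannot wrap; result is 2^52 pages on LP64 */
	unsigned long new_size = (boundary >> PAGE_SHIFT) + 1;

	printf("old: %lu pages\n", old_size);
	printf("new: %lu pages\n", new_size);

	/* For boundary masks that do not wrap, the two forms agree */
	for (boundary = PAGE_SIZE - 1; boundary < ULONG_MAX / 2;
	     boundary = boundary * 2 + 1) {
		old_size = ALIGN(boundary + 1, PAGE_SIZE) >> PAGE_SHIFT;
		new_size = (boundary >> PAGE_SHIFT) + 1;
		if (old_size != new_size)
			printf("mismatch at %#lx\n", boundary);
	}

	return 0;
}
```

Built with gcc (typeof is a GNU extension), the old form prints 0 pages for a ULONG_MAX boundary while the new form prints 2^52, and the loop reports no mismatches for the smaller masks.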