| Message ID | 20250210193441.2414-1-mhklinux@outlook.com (mailing list archive) |
|---|---|
| State | New, archived |
| Series | [1/1] drm/hyperv: Fix address space leak when Hyper-V DRM device is removed |
On Mon, Feb 10, 2025 at 11:34:41AM -0800, mhkelley58@gmail.com wrote:
> From: Michael Kelley <mhklinux@outlook.com>
>
> When a Hyper-V DRM device is probed, the driver allocates MMIO space for
> the vram, and maps it cacheable. If the device is removed, or in the error
> path for device probing, the MMIO space is released but no unmap is done.
> Consequently the kernel address space for the mapping is leaked.
>
> Fix this by adding iounmap() calls in the device removal path, and in the
> error path during device probing.
>
> Fixes: f1f63cbb705d ("drm/hyperv: Fix an error handling path in hyperv_vmbus_probe()")
> Fixes: a0ab5abced55 ("drm/hyperv : Removing the restruction of VRAM allocation with PCI bar size")
> Signed-off-by: Michael Kelley <mhklinux@outlook.com>
> ---
>  drivers/gpu/drm/hyperv/hyperv_drm_drv.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/drivers/gpu/drm/hyperv/hyperv_drm_drv.c b/drivers/gpu/drm/hyperv/hyperv_drm_drv.c
> index e0953777a206..b491827941f1 100644
> --- a/drivers/gpu/drm/hyperv/hyperv_drm_drv.c
> +++ b/drivers/gpu/drm/hyperv/hyperv_drm_drv.c
> @@ -156,6 +156,7 @@ static int hyperv_vmbus_probe(struct hv_device *hdev,
>  	return 0;
>
>  err_free_mmio:
> +	iounmap(hv->vram);
>  	vmbus_free_mmio(hv->mem->start, hv->fb_size);
>  err_vmbus_close:
>  	vmbus_close(hdev->channel);
> @@ -174,6 +175,7 @@ static void hyperv_vmbus_remove(struct hv_device *hdev)
>  	vmbus_close(hdev->channel);
>  	hv_set_drvdata(hdev, NULL);
>
> +	iounmap(hv->vram);
>  	vmbus_free_mmio(hv->mem->start, hv->fb_size);
>  }
>
> --
> 2.25.1
>

Thanks for the fix. May I know how you found such issues?

Reviewed-by: Saurabh Sengar <ssengar@linux.microsoft.com>
Tested-by: Saurabh Sengar <ssengar@linux.microsoft.com>

- Saurabh
From: Saurabh Singh Sengar <ssengar@linux.microsoft.com> Sent: Monday, February 10, 2025 7:33 PM
>
> On Mon, Feb 10, 2025 at 11:34:41AM -0800, mhkelley58@gmail.com wrote:
> > From: Michael Kelley <mhklinux@outlook.com>
> >
> > When a Hyper-V DRM device is probed, the driver allocates MMIO space for
> > the vram, and maps it cacheable. If the device is removed, or in the error
> > path for device probing, the MMIO space is released but no unmap is done.
> > Consequently the kernel address space for the mapping is leaked.
> >
> > Fix this by adding iounmap() calls in the device removal path, and in the
> > error path during device probing.
> >
> > [...]
>
> Thanks for the fix. May I know how you found such issues?

I think it was that I was looking at the Hyper-V FB driver for the
vmbus_free_mmio() call sites, and realizing that such call sites
should probably also have an associated iounmap(). Then I was
looking at the same thing in the Hyper-V DRM driver, and
realizing there were no calls to iounmap()!

To confirm, the contents of /proc/vmallocinfo can be filtered
for ioremap calls with size 8 MiB (which actually show up as
8 MiB + 4 KiB because the address space allocator adds a guard
page to each allocation). When doing repeated unbind/bind
sequences on the DRM driver, those 8 MiB entries in
/proc/vmallocinfo kept accumulating and were never freed.

Michael

> Reviewed-by: Saurabh Sengar <ssengar@linux.microsoft.com>
> Tested-by: Saurabh Sengar <ssengar@linux.microsoft.com>
>
> - Saurabh
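[Editor's note: the /proc/vmallocinfo check described above can be sketched as a one-line filter. The vmallocinfo-style lines below are fabricated examples (hypothetical addresses and caller symbols); only the awk condition — field 2, the mapping size in bytes, equal to 8 MiB plus the 4 KiB guard page — reflects the actual check.]

```shell
# On a live system the check would be:
#   awk '$2 == 8*1024*1024 + 4096' /proc/vmallocinfo
# Here fabricated vmallocinfo-style lines demonstrate the filter;
# only the 8 MiB + 4 KiB (8392704-byte) entry should match.
printf '%s\n' \
  '0xffffc90000d00000-0xffffc90001501000 8392704 hyperv_vmbus_probe+0x8c/0x200 phys=0xf8000000 ioremap' \
  '0xffffc90000100000-0xffffc90000103000 12288 some_other_caller+0x10/0x80 ioremap' \
| awk '$2 == 8*1024*1024 + 4096 { print "leaked 8 MiB mapping:", $1 }'
```

Repeating an unbind/bind cycle and re-running the filter shows whether such entries accumulate.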
Hi

On 10.02.25 20:34, mhkelley58@gmail.com wrote:
> From: Michael Kelley <mhklinux@outlook.com>
>
> When a Hyper-V DRM device is probed, the driver allocates MMIO space for
> the vram, and maps it cacheable. If the device is removed, or in the error
> path for device probing, the MMIO space is released but no unmap is done.
> Consequently the kernel address space for the mapping is leaked.
>
> Fix this by adding iounmap() calls in the device removal path, and in the
> error path during device probing.

Could this driver use devm_ helpers for iomap operations? That should
fix the issue automatically.

Best regards
Thomas

> [...]
On Tue, Feb 11, 2025 at 03:46:51AM +0000, Michael Kelley wrote:
> From: Saurabh Singh Sengar <ssengar@linux.microsoft.com> Sent: Monday, February 10, 2025 7:33 PM
> >
> > [...]
> >
> > Thanks for the fix. May I know how you found such issues?
>
> I think it was that I was looking at the Hyper-V FB driver for the
> vmbus_free_mmio() call sites, and realizing that such call sites
> should probably also have an associated iounmap(). Then I was
> looking at the same thing in the Hyper-V DRM driver, and
> realizing there were no calls to iounmap()!
>
> To confirm, the contents of /proc/vmallocinfo can be filtered
> for ioremap calls with size 8 MiB (which actually show up as
> 8 MiB + 4 KiB because the address space allocator adds a guard
> page to each allocation). When doing repeated unbind/bind
> sequences on the DRM driver, those 8 MiB entries in
> /proc/vmallocinfo kept accumulating and were never freed.
>
> Michael

Thank you!

Regards,
Saurabh
From: Thomas Zimmermann <tzimmermann@suse.de> Sent: Monday, February 10, 2025 11:35 PM
>
> Hi
>
> On 10.02.25 20:34, mhkelley58@gmail.com wrote:
> > From: Michael Kelley <mhklinux@outlook.com>
> >
> > When a Hyper-V DRM device is probed, the driver allocates MMIO space for
> > the vram, and maps it cacheable. If the device is removed, or in the error
> > path for device probing, the MMIO space is released but no unmap is done.
> > Consequently the kernel address space for the mapping is leaked.
> >
> > Fix this by adding iounmap() calls in the device removal path, and in the
> > error path during device probing.
>
> Could this driver use devm_ helpers for iomap operations? That should
> fix the issue automatically.

Possibly. But there's no devm_ioremap_cache(). There are _uc and _wc
flavors, but not _wb. I'd guess that full writeback caching was deemed
nonsensical for a device. But it does make sense in this case of
framebuffer memory shared with a hypervisor that is also using WB
caching. Adding a _wb devm_ helper might be seen as something of an
abomination. :-)

Michael

> Best regards
> Thomas
>
> [...]
>
> --
> Thomas Zimmermann
> Graphics Driver Developer
> SUSE Software Solutions Germany GmbH
> Frankenstrasse 146, 90461 Nuernberg, Germany
> GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
> HRB 36809 (AG Nuernberg)
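[Editor's note: absent a devm_ioremap_cache(), the devm_ approach discussed above could be approximated with devm_add_action_or_reset(), which registers a cleanup callback that runs automatically on probe failure and on device removal. The following is an untested kernel-context sketch, not part of the posted patch; the field names follow the driver's probe context, and the surrounding error handling is elided.]

```c
/*
 * Hypothetical sketch only: pair ioremap_cache() with a managed
 * cleanup action so iounmap() runs automatically when the device
 * is unbound or probe fails after this point. Untested; not part
 * of the posted patch.
 */
static void hyperv_unmap_vram(void *vram)
{
	iounmap(vram);
}

/* In hyperv_vmbus_probe(), after mapping the vram: */
	hv->vram = ioremap_cache(hv->mem->start, hv->fb_size);
	if (!hv->vram) {
		ret = -ENOMEM;
		goto err_free_mmio;
	}

	/*
	 * On failure, devm_add_action_or_reset() calls the action
	 * itself (unmapping vram) before returning the error, so
	 * the caller must not unmap again on this path.
	 */
	ret = devm_add_action_or_reset(&hdev->device, hyperv_unmap_vram,
				       hv->vram);
	if (ret)
		goto err_vmbus_close;
```

A matching managed wrapper for vmbus_free_mmio() would be needed to make the whole teardown automatic, which is likely why the posted patch takes the simpler explicit-iounmap() route.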
diff --git a/drivers/gpu/drm/hyperv/hyperv_drm_drv.c b/drivers/gpu/drm/hyperv/hyperv_drm_drv.c
index e0953777a206..b491827941f1 100644
--- a/drivers/gpu/drm/hyperv/hyperv_drm_drv.c
+++ b/drivers/gpu/drm/hyperv/hyperv_drm_drv.c
@@ -156,6 +156,7 @@ static int hyperv_vmbus_probe(struct hv_device *hdev,
 	return 0;
 
 err_free_mmio:
+	iounmap(hv->vram);
 	vmbus_free_mmio(hv->mem->start, hv->fb_size);
 err_vmbus_close:
 	vmbus_close(hdev->channel);
@@ -174,6 +175,7 @@ static void hyperv_vmbus_remove(struct hv_device *hdev)
 	vmbus_close(hdev->channel);
 	hv_set_drvdata(hdev, NULL);
 
+	iounmap(hv->vram);
 	vmbus_free_mmio(hv->mem->start, hv->fb_size);
 }