| Message ID | 20191209225344.99740-18-jhubbard@nvidia.com (mailing list archive) |
|---|---|
| State | Superseded |
| Series | mm/gup: track dma-pinned pages: FOLL_PIN |
On Mon, 9 Dec 2019 14:53:35 -0800 John Hubbard <jhubbard@nvidia.com> wrote:

> After DMA is complete, and the device and CPU caches are synchronized,
> it's still required to mark the CPU pages as dirty, if the data was
> coming from the device. However, this driver was just issuing a
> bare put_page() call, without any set_page_dirty*() call.
>
> Fix the problem, by calling set_page_dirty_lock() if the CPU pages
> were potentially receiving data from the device.
>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Acked-by: Hans Verkuil <hverkuil-cisco@xs4all.nl>
> Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
> Cc: <stable@vger.kernel.org>

What are the user-visible effects of this change?

As it's cc:stable I'd normally send this to Linus within 1-2 weeks, or
sooner. Please confirm that this is a standalone fix, independent of
the rest of this series.
On 12/9/19 4:56 PM, Andrew Morton wrote:
> On Mon, 9 Dec 2019 14:53:35 -0800 John Hubbard <jhubbard@nvidia.com> wrote:
>
>> After DMA is complete, and the device and CPU caches are synchronized,
>> it's still required to mark the CPU pages as dirty, if the data was
>> coming from the device. However, this driver was just issuing a
>> bare put_page() call, without any set_page_dirty*() call.
>>
>> Fix the problem, by calling set_page_dirty_lock() if the CPU pages
>> were potentially receiving data from the device.
>>
>> Reviewed-by: Christoph Hellwig <hch@lst.de>
>> Acked-by: Hans Verkuil <hverkuil-cisco@xs4all.nl>
>> Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
>> Cc: <stable@vger.kernel.org>
>
> What are the user-visible effects of this change?

I'll have to defer to Hans or other experts, because I merely spotted
this by reading the code.

> As it's cc:stable I'd normally send this to Linus within 1-2 weeks, or
> sooner. Please confirm that this is a standalone fix, independent of
> the rest of this series.

Yes, this is a stand-alone fix. Of course, as part of this series, the
put_page() gets converted to put_user_pages_dirty() in the next patch,
and that in turn gets renamed to unpin_user_pages_dirty() in a later
patch. Just so we keep that in mind when moving patches around.

thanks,
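For readers tracking the series, here is a minimal sketch of where that release path ends up after the conversions John mentions. The helper shown is the one that ultimately landed in mainline as unpin_user_pages_dirty_lock(); the intermediate put_user_pages_dirty() spelling from the email differs only in name, so treat this as illustrative rather than the literal follow-up patch text:

```c
/*
 * Illustrative sketch, not the actual follow-up patch: the open-coded
 * dirty-then-release loop collapses into a single call. The helper
 * (mainline's unpin_user_pages_dirty_lock(), declared in <linux/mm.h>)
 * calls set_page_dirty_lock() on each page when make_dirty is true,
 * then drops the pin.
 */
if (dma->pages) {
	unpin_user_pages_dirty_lock(dma->pages, dma->nr_pages,
				    dma->direction == DMA_FROM_DEVICE);
	kfree(dma->pages);
	dma->pages = NULL;
}
```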
On Mon 09-12-19 16:56:27, Andrew Morton wrote:
> On Mon, 9 Dec 2019 14:53:35 -0800 John Hubbard <jhubbard@nvidia.com> wrote:
>
> > After DMA is complete, and the device and CPU caches are synchronized,
> > it's still required to mark the CPU pages as dirty, if the data was
> > coming from the device. However, this driver was just issuing a
> > bare put_page() call, without any set_page_dirty*() call.
> >
> > Fix the problem, by calling set_page_dirty_lock() if the CPU pages
> > were potentially receiving data from the device.
> >
> > Reviewed-by: Christoph Hellwig <hch@lst.de>
> > Acked-by: Hans Verkuil <hverkuil-cisco@xs4all.nl>
> > Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
> > Cc: <stable@vger.kernel.org>
>
> What are the user-visible effects of this change?

Presumably loss of captured video data if the page writeback hits in
the wrong moment (i.e., after the page was faulted in but before the
video HW stored data in the page) and the page then gets evicted from
the page cache.

								Honza
```diff
diff --git a/drivers/media/v4l2-core/videobuf-dma-sg.c b/drivers/media/v4l2-core/videobuf-dma-sg.c
index 66a6c6c236a7..28262190c3ab 100644
--- a/drivers/media/v4l2-core/videobuf-dma-sg.c
+++ b/drivers/media/v4l2-core/videobuf-dma-sg.c
@@ -349,8 +349,11 @@ int videobuf_dma_free(struct videobuf_dmabuf *dma)
 	BUG_ON(dma->sglen);
 
 	if (dma->pages) {
-		for (i = 0; i < dma->nr_pages; i++)
+		for (i = 0; i < dma->nr_pages; i++) {
+			if (dma->direction == DMA_FROM_DEVICE)
+				set_page_dirty_lock(dma->pages[i]);
 			put_page(dma->pages[i]);
+		}
 		kfree(dma->pages);
 		dma->pages = NULL;
 	}
```
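For context on why the fix checks DMA_FROM_DEVICE: the buffer setup path pins the user pages writable only when the device will be storing data into them. A heavily abridged sketch of that pinning step follows; the flag handling mirrors the driver of that era, but the exact arguments here are reconstructed rather than quoted, so treat them as assumptions:

```c
/*
 * Abridged sketch of the pinning side (videobuf_dma_init_user_locked()
 * in the same file, v5.5-era kernels; details reconstructed, not
 * quoted). Capture buffers are pinned with FOLL_WRITE because the
 * device writes into them, which is exactly why the release path must
 * redirty those pages before dropping the reference.
 */
unsigned int flags = FOLL_FORCE;
int err;

if (dma->direction == DMA_FROM_DEVICE)
	flags |= FOLL_WRITE;

/* 2019-era signature: start, nr_pages, gup_flags, pages, vmas */
err = get_user_pages(data & PAGE_MASK, dma->nr_pages,
		     flags, dma->pages, NULL);
```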