From patchwork Wed Dec 11 02:53:10 2019
X-Patchwork-Submitter: John Hubbard
X-Patchwork-Id: 11284031
From: John Hubbard
To: Andrew Morton
CC: Al Viro, Alex Williamson, Benjamin Herrenschmidt, Björn Töpel,
    Christoph Hellwig, Dan Williams, Daniel Vetter, Dave Chinner,
    David Airlie, David S. Miller,
Miller" , Ira Weiny , Jan Kara , Jason Gunthorpe , Jens Axboe , Jonathan Corbet , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Magnus Karlsson , "Mauro Carvalho Chehab" , Michael Ellerman , Michal Hocko , Mike Kravetz , "Paul Mackerras" , Shuah Khan , Vlastimil Babka , , , , , , , , , , , , , LKML , John Hubbard , "Christoph Hellwig" , Hans Verkuil , Subject: [PATCH v9 17/25] media/v4l2-core: set pages dirty upon releasing DMA buffers Date: Tue, 10 Dec 2019 18:53:10 -0800 Message-ID: <20191211025318.457113-18-jhubbard@nvidia.com> X-Mailer: git-send-email 2.24.0 In-Reply-To: <20191211025318.457113-1-jhubbard@nvidia.com> References: <20191211025318.457113-1-jhubbard@nvidia.com> MIME-Version: 1.0 X-NVConfidentiality: public DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nvidia.com; s=n1; t=1576032807; bh=icmjko9fP8b/hMTtebs5M97o7j6aV8LjauIhhN3vqys=; h=X-PGP-Universal:From:To:CC:Subject:Date:Message-ID:X-Mailer: In-Reply-To:References:MIME-Version:X-NVConfidentiality: Content-Transfer-Encoding:Content-Type; b=cifVc1Ww4iMHcwPZFE17sJkuzTCfp1LhU8i4d3cBz4KRFEJDZqU7mnJCHIto9hKg/ tjEh2XyuB9+xQvrp2gfY1X7qjh+jU9CIdrQZIiDX2uVSSfEpACfDf2Gm+bkzRfZVhy k0U4Nf2NcRHMnLRrQVncPgND+y6OpPcFP+imoirF54bXSodOiPmtcNb1k8xQ54u82S 6OWt8xeU1iSMHi6IAdXXCeXkJGheQnoY+aOD5D+o6JJsuRip4Wo1cXWXByGs0Cr5OW mT3bWjEVZiQHiPsrsiimn5rAmW2tUKFXhuDTx73N+6j3RQANaqbBZ7WdyIXmlY15N/ ygtMBw09poh8A== Sender: linux-rdma-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-rdma@vger.kernel.org After DMA is complete, and the device and CPU caches are synchronized, it's still required to mark the CPU pages as dirty, if the data was coming from the device. However, this driver was just issuing a bare put_page() call, without any set_page_dirty*() call. Fix the problem, by calling set_page_dirty_lock() if the CPU pages were potentially receiving data from the device. Reviewed-by: Christoph Hellwig Acked-by: Hans Verkuil Cc: Mauro Carvalho Chehab Cc: Signed-off-by: John Hubbard --- drivers/media/v4l2-core/videobuf-dma-sg.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/drivers/media/v4l2-core/videobuf-dma-sg.c b/drivers/media/v4l2-core/videobuf-dma-sg.c index 66a6c6c236a7..28262190c3ab 100644 --- a/drivers/media/v4l2-core/videobuf-dma-sg.c +++ b/drivers/media/v4l2-core/videobuf-dma-sg.c @@ -349,8 +349,11 @@ int videobuf_dma_free(struct videobuf_dmabuf *dma) BUG_ON(dma->sglen); if (dma->pages) { - for (i = 0; i < dma->nr_pages; i++) + for (i = 0; i < dma->nr_pages; i++) { + if (dma->direction == DMA_FROM_DEVICE) + set_page_dirty_lock(dma->pages[i]); put_page(dma->pages[i]); + } kfree(dma->pages); dma->pages = NULL; }