From patchwork Tue May 26 18:27:09 2020
X-Patchwork-Submitter: John Hubbard
X-Patchwork-Id: 11571037
From: John Hubbard
To: LKML
CC: Souptick Joarder, John Hubbard, "Kai Mäkisara (Kolumbus)",
    Bart Van Assche, "James E. J. Bottomley", "Martin K. Petersen",
    linux-scsi@vger.kernel.org
Subject: [PATCH v2] scsi: st: convert get_user_pages() --> pin_user_pages()
Date: Tue, 26 May 2020 11:27:09 -0700
Message-ID: <20200526182709.99599-1-jhubbard@nvidia.com>
X-Mailer: git-send-email 2.26.2
X-Mailing-List: linux-scsi@vger.kernel.org

This code was using get_user_pages*() in a "Case 1" scenario (Direct IO),
using the categorization from [1]. That means it's time to convert the
get_user_pages*() + put_page() calls to pin_user_pages*() +
unpin_user_pages() calls.
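To illustrate the general shape of such a conversion (a minimal sketch
only; the do_dio_example() helper is made up for this email and is not
part of the st driver):

#include <linux/mm.h>

/* Hypothetical example, not st driver code: pin pages for Direct IO. */
static int do_dio_example(unsigned long uaddr, int nr_pages, bool is_read,
                          struct page **pages)
{
        int res;

        /* FOLL_WRITE: the device will write into the user buffer. */
        res = pin_user_pages_fast(uaddr, nr_pages,
                                  is_read ? FOLL_WRITE : 0, pages);
        if (res < nr_pages) {
                /* Release any partial pin; this was a put_page() loop. */
                if (res > 0)
                        unpin_user_pages(pages, res);
                return res < 0 ? res : -ENOMEM;
        }

        /* ... perform the I/O against pages[] here ... */

        /* Unpin (and dirty, for reads from the device) when done. */
        unpin_user_pages_dirty_lock(pages, nr_pages, is_read);
        return 0;
}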
There is some helpful background in [2]: basically, this is a small part
of fixing a long-standing disconnect between pinning pages, and file
systems' use of those pages.

Note that this effectively changes the code's behavior as well: it now
ultimately calls set_page_dirty_lock() instead of SetPageDirty(). This
is probably more accurate. As Christoph Hellwig put it, "set_page_dirty()
is only safe if we are dealing with a file backed page where we have
reference on the inode it hangs off." [3]

Also, this deletes one of the two FIXME comments (about refcounting),
because there is nothing wrong with the refcounting at this point.

[1] Documentation/core-api/pin_user_pages.rst

[2] "Explicit pinning of user-space pages":
    https://lwn.net/Articles/807108/

[3] https://lore.kernel.org/r/20190723153640.GB720@lst.de

Cc: "Kai Mäkisara (Kolumbus)"
Cc: Bart Van Assche
Cc: James E.J. Bottomley
Cc: Martin K. Petersen
Cc: linux-scsi@vger.kernel.org
Signed-off-by: John Hubbard
Acked-by: Kai Mäkisara
---

Hi,

As mentioned in the v1 review thread, we probably still want/need this.
Or so I claim. :)  Please see what you think...

Changes since v1: changed the commit log to refer to Direct IO (Case 1)
instead of DMA/RDMA (Case 2), and added Bart to Cc.

v1: https://lore.kernel.org/linux-scsi/20200519045525.2446851-1-jhubbard@nvidia.com/

thanks,
John Hubbard
NVIDIA

 drivers/scsi/st.c | 20 +++++---------------
 1 file changed, 5 insertions(+), 15 deletions(-)

diff --git a/drivers/scsi/st.c b/drivers/scsi/st.c
index c5f9b348b438..1e3eda9fa231 100644
--- a/drivers/scsi/st.c
+++ b/drivers/scsi/st.c
@@ -4922,7 +4922,7 @@ static int sgl_map_user_pages(struct st_buffer *STbp,
 	unsigned long end = (uaddr + count + PAGE_SIZE - 1) >> PAGE_SHIFT;
 	unsigned long start = uaddr >> PAGE_SHIFT;
 	const int nr_pages = end - start;
-	int res, i, j;
+	int res, i;
 	struct page **pages;
 	struct rq_map_data *mdata = &STbp->map_data;
 
@@ -4944,7 +4944,7 @@ static int sgl_map_user_pages(struct st_buffer *STbp,
 
 	/* Try to fault in all of the necessary pages */
 	/* rw==READ means read from drive, write into memory area */
-	res = get_user_pages_fast(uaddr, nr_pages, rw == READ ? FOLL_WRITE : 0,
+	res = pin_user_pages_fast(uaddr, nr_pages, rw == READ ? FOLL_WRITE : 0,
 				  pages);
 
 	/* Errors and no page mapped should return here */
@@ -4964,8 +4964,7 @@ static int sgl_map_user_pages(struct st_buffer *STbp,
 	return nr_pages;
  out_unmap:
 	if (res > 0) {
-		for (j=0; j < res; j++)
-			put_page(pages[j]);
+		unpin_user_pages(pages, res);
 		res = 0;
 	}
 	kfree(pages);
@@ -4977,18 +4976,9 @@ static int sgl_map_user_pages(struct st_buffer *STbp,
 static int sgl_unmap_user_pages(struct st_buffer *STbp,
 				const unsigned int nr_pages, int dirtied)
 {
-	int i;
-
-	for (i=0; i < nr_pages; i++) {
-		struct page *page = STbp->mapped_pages[i];
+	/* FIXME: cache flush missing for rw==READ */
+	unpin_user_pages_dirty_lock(STbp->mapped_pages, nr_pages, dirtied);
 
-		if (dirtied)
-			SetPageDirty(page);
-		/* FIXME: cache flush missing for rw==READ
-		 * FIXME: call the correct reference counting function
-		 */
-		put_page(page);
-	}
 	kfree(STbp->mapped_pages);
 	STbp->mapped_pages = NULL;
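
For reference, unpin_user_pages_dirty_lock() behaves roughly like the
simplified sketch below. This is only the conceptual equivalent of the
old loop, with set_page_dirty_lock() in place of SetPageDirty(); the
real implementation in mm/gup.c adds further handling (for example,
skipping pages that are already dirty), so do not treat this as the
kernel source:

#include <linux/mm.h>

/* Simplified sketch of unpin_user_pages_dirty_lock(); not the real code. */
static void unpin_dirty_lock_sketch(struct page **pages,
                                    unsigned long npages, bool make_dirty)
{
        unsigned long i;

        for (i = 0; i < npages; i++) {
                if (make_dirty)
                        set_page_dirty_lock(pages[i]); /* not SetPageDirty() */
                unpin_user_page(pages[i]);             /* not put_page() */
        }
}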