From patchwork Wed Jul 24 04:25:13 2019
X-Patchwork-Submitter: john.hubbard@gmail.com
X-Patchwork-Id: 11055803
From: john.hubbard@gmail.com
X-Google-Original-From: jhubbard@nvidia.com
To: Andrew Morton
Miller" , Dominique Martinet , Eric Van Hensbergen , Jason Gunthorpe , Jason Wang , Jens Axboe , Latchesar Ionkov , "Michael S . Tsirkin" , Miklos Szeredi , Trond Myklebust , Christoph Hellwig , Matthew Wilcox , linux-mm@kvack.org, LKML , ceph-devel@vger.kernel.org, kvm@vger.kernel.org, linux-block@vger.kernel.org, linux-cifs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org, netdev@vger.kernel.org, samba-technical@lists.samba.org, v9fs-developer@lists.sourceforge.net, virtualization@lists.linux-foundation.org, =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , John Hubbard , Jan Kara , Dan Williams , Johannes Thumshirn , Ming Lei , Dave Chinner , Boaz Harrosh , Paolo Bonzini , Stefan Hajnoczi Subject: [PATCH 07/12] vhost-scsi: convert put_page() to put_user_page*() Date: Tue, 23 Jul 2019 21:25:13 -0700 Message-Id: <20190724042518.14363-8-jhubbard@nvidia.com> X-Mailer: git-send-email 2.22.0 In-Reply-To: <20190724042518.14363-1-jhubbard@nvidia.com> References: <20190724042518.14363-1-jhubbard@nvidia.com> MIME-Version: 1.0 X-NVConfidentiality: public Sender: ceph-devel-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: ceph-devel@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: Jérôme Glisse For pages that were retained via get_user_pages*(), release those pages via the new put_user_page*() routines, instead of via put_page(). This is part a tree-wide conversion, as described in commit fc1d8e7cca2d ("mm: introduce put_user_page*(), placeholder versions"). Changes from Jérôme's original patch: * Changed a WARN_ON to a BUG_ON. Signed-off-by: Jérôme Glisse Signed-off-by: John Hubbard Cc: virtualization@lists.linux-foundation.org Cc: linux-fsdevel@vger.kernel.org Cc: linux-block@vger.kernel.org Cc: linux-mm@kvack.org Cc: Jan Kara Cc: Dan Williams Cc: Alexander Viro Cc: Johannes Thumshirn Cc: Christoph Hellwig Cc: Jens Axboe Cc: Ming Lei Cc: Dave Chinner Cc: Jason Gunthorpe Cc: Matthew Wilcox Cc: Boaz Harrosh Cc: Miklos Szeredi Cc: "Michael S. Tsirkin" Cc: Jason Wang Cc: Paolo Bonzini Cc: Stefan Hajnoczi Acked-by: Michael S. Tsirkin --- drivers/vhost/scsi.c | 13 ++++++++++--- 1 file changed, 10 insertions(+), 3 deletions(-) diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c index a9caf1bc3c3e..282565ab5e3f 100644 --- a/drivers/vhost/scsi.c +++ b/drivers/vhost/scsi.c @@ -329,11 +329,11 @@ static void vhost_scsi_release_cmd(struct se_cmd *se_cmd) if (tv_cmd->tvc_sgl_count) { for (i = 0; i < tv_cmd->tvc_sgl_count; i++) - put_page(sg_page(&tv_cmd->tvc_sgl[i])); + put_user_page(sg_page(&tv_cmd->tvc_sgl[i])); } if (tv_cmd->tvc_prot_sgl_count) { for (i = 0; i < tv_cmd->tvc_prot_sgl_count; i++) - put_page(sg_page(&tv_cmd->tvc_prot_sgl[i])); + put_user_page(sg_page(&tv_cmd->tvc_prot_sgl[i])); } vhost_scsi_put_inflight(tv_cmd->inflight); @@ -630,6 +630,13 @@ vhost_scsi_map_to_sgl(struct vhost_scsi_cmd *cmd, size_t offset; unsigned int npages = 0; + /* + * Here in all cases we should have an IOVEC which use GUP. If that is + * not the case then we will wrongly call put_user_page() and the page + * refcount will go wrong (this is in vhost_scsi_release_cmd()) + */ + WARN_ON(!iov_iter_get_pages_use_gup(iter)); + bytes = iov_iter_get_pages(iter, pages, LONG_MAX, VHOST_SCSI_PREALLOC_UPAGES, &offset); /* No pages were pinned */ @@ -681,7 +688,7 @@ vhost_scsi_iov_to_sgl(struct vhost_scsi_cmd *cmd, bool write, while (p < sg) { struct page *page = sg_page(p++); if (page) - put_page(page); + put_user_page(page); } return ret; }