From patchwork Tue Aug 25 01:20:34 2020
X-Patchwork-Submitter: John Hubbard
X-Patchwork-Id: 11734607
From: John Hubbard
Subject: [PATCH v2] fs/ceph: use pipe_get_pages_alloc() for pipe
Date: Mon, 24 Aug 2020 18:20:34 -0700
Message-ID: <20200825012034.1962362-1-jhubbard@nvidia.com>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20200824185400.GE17456@casper.infradead.org>
References: <20200824185400.GE17456@casper.infradead.org>

This reduces, by one, the number of callers of iov_iter_get_pages_alloc().
That's helpful because these calls are being audited and converted over to
use iov_iter_pin_user_pages(), where applicable. And this one here is
already known by the caller to be only for ITER_PIPE, so let's just
simplify it now.

Signed-off-by: John Hubbard
---

OK, here's a v2 that does EXPORT_SYMBOL_GPL instead of EXPORT_SYMBOL;
that's the only change from v1. That should help give this patch a clear
bill of passage. :)

thanks,
John Hubbard
NVIDIA
 fs/ceph/file.c      | 3 +--
 include/linux/uio.h | 3 ++-
 lib/iov_iter.c      | 6 +++---
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/fs/ceph/file.c b/fs/ceph/file.c
index d51c3f2fdca0..d3d7dd957390 100644
--- a/fs/ceph/file.c
+++ b/fs/ceph/file.c
@@ -879,8 +879,7 @@ static ssize_t ceph_sync_read(struct kiocb *iocb, struct iov_iter *to,
 		more = len < iov_iter_count(to);
 
 		if (unlikely(iov_iter_is_pipe(to))) {
-			ret = iov_iter_get_pages_alloc(to, &pages, len,
-						       &page_off);
+			ret = pipe_get_pages_alloc(to, &pages, len, &page_off);
 			if (ret <= 0) {
 				ceph_osdc_put_request(req);
 				ret = -ENOMEM;
diff --git a/include/linux/uio.h b/include/linux/uio.h
index 3835a8a8e9ea..270a4dcf5453 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -226,7 +226,8 @@ ssize_t iov_iter_get_pages(struct iov_iter *i, struct page **pages,
 ssize_t iov_iter_get_pages_alloc(struct iov_iter *i, struct page ***pages,
 			size_t maxsize, size_t *start);
 int iov_iter_npages(const struct iov_iter *i, int maxpages);
-
+ssize_t pipe_get_pages_alloc(struct iov_iter *i, struct page ***pages,
+			size_t maxsize, size_t *start);
 const void *dup_iter(struct iov_iter *new, struct iov_iter *old, gfp_t flags);
 
 static inline size_t iov_iter_count(const struct iov_iter *i)
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 5e40786c8f12..6290998df480 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -1355,9 +1355,8 @@ static struct page **get_pages_array(size_t n)
 	return kvmalloc_array(n, sizeof(struct page *), GFP_KERNEL);
 }
 
-static ssize_t pipe_get_pages_alloc(struct iov_iter *i,
-		struct page ***pages, size_t maxsize,
-		size_t *start)
+ssize_t pipe_get_pages_alloc(struct iov_iter *i, struct page ***pages,
+			size_t maxsize, size_t *start)
 {
 	struct page **p;
 	unsigned int iter_head, npages;
@@ -1387,6 +1386,7 @@ static ssize_t pipe_get_pages_alloc(struct iov_iter *i,
 	kvfree(p);
 	return n;
 }
+EXPORT_SYMBOL_GPL(pipe_get_pages_alloc);
 
 ssize_t iov_iter_get_pages_alloc(struct iov_iter *i,
 		struct page ***pages, size_t maxsize,
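Not part of the patch itself: a hedged sketch of the calling pattern the
EXPORT_SYMBOL_GPL above enables for other GPL code that already knows its
iterator is pipe-backed. The helper name demo_fill_pipe_pages() is
hypothetical, and the cleanup assumes, as in lib/iov_iter.c, that the page
array comes from kvmalloc_array() and that each returned page carries a
reference taken on the caller's behalf:

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/uio.h>

/* Hypothetical caller of the newly exported helper (illustration only). */
static ssize_t demo_fill_pipe_pages(struct iov_iter *to, size_t len)
{
	struct page **pages;
	size_t page_off;
	ssize_t bytes;
	int i, npages;

	if (!iov_iter_is_pipe(to))	/* the helper is ITER_PIPE-only */
		return -EINVAL;

	bytes = pipe_get_pages_alloc(to, &pages, len, &page_off);
	if (bytes <= 0)
		return bytes ? bytes : -ENOMEM;

	/*
	 * ... copy up to 'bytes' bytes into pages[], starting at page_off,
	 * and advance the iterator (e.g. iov_iter_advance(to, bytes)) ...
	 */

	npages = DIV_ROUND_UP(page_off + bytes, PAGE_SIZE);
	for (i = 0; i < npages; i++)
		put_page(pages[i]);	/* drop the refs taken for us */
	kvfree(pages);			/* array was kvmalloc'ed */
	return bytes;
}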