From patchwork Mon Jan 23 17:29:59 2023
X-Patchwork-Submitter: David Howells
X-Patchwork-Id: 13112619
From: David Howells
To: Al Viro, Christoph Hellwig
Cc: David Howells, Matthew Wilcox, Jens Axboe, Jan Kara, Jeff Layton,
    Logan Gunthorpe, linux-fsdevel@vger.kernel.org,
    linux-block@vger.kernel.org, linux-kernel@vger.kernel.org,
    Christoph Hellwig, John Hubbard, David Hildenbrand,
    linux-mm@kvack.org
Subject: [PATCH v8 02/10] iov_iter: Add a function to extract a page list
 from an iterator
Date: Mon, 23 Jan 2023 17:29:59 +0000
Message-Id: <20230123173007.325544-3-dhowells@redhat.com>
In-Reply-To: <20230123173007.325544-1-dhowells@redhat.com>
References: <20230123173007.325544-1-dhowells@redhat.com>
MIME-Version: 1.0
Add a function, iov_iter_extract_pages(), to extract a list of pages from
an iterator.  The pages may be returned with a pin added or nothing,
depending on the type of iterator.

Add a second function, iov_iter_extract_mode(), to determine how the
cleanup should be done.
There are two cases:

 (1) ITER_IOVEC or ITER_UBUF iterator.

     Extracted pages will have pins (FOLL_PIN) obtained on them so that a
     concurrent fork() will forcibly copy the page so that DMA is done
     to/from the parent's buffer and is unavailable to/unaffected by the
     child process.

     iov_iter_extract_mode() will return FOLL_PIN for this case.  The
     caller should use something like folio_put_unpin() to dispose of the
     page.

 (2) Any other sort of iterator.

     No refs or pins are obtained on the page; the assumption is made that
     the caller will manage page retention.

     iov_iter_extract_mode() will return 0.  The pages don't need
     additional disposal.

Signed-off-by: David Howells
cc: Al Viro
cc: Christoph Hellwig
cc: John Hubbard
cc: David Hildenbrand
cc: Matthew Wilcox
cc: linux-fsdevel@vger.kernel.org
cc: linux-mm@kvack.org
Reviewed-by: Christoph Hellwig
---

Notes:
    ver #8)
     - It seems that all DIO is supposed to be done under FOLL_PIN now, and
       not FOLL_GET, so switch to only using pin_user_pages() for
       user-backed iters.
     - Wrap an argument in brackets in the iov_iter_extract_mode() macro.
     - Drop the extract_flags argument to iov_iter_extract_mode() for now
       [hch].

    ver #7)
     - Switch to passing in iter-specific flags rather than FOLL_* flags.
     - Drop the direction flags for now.
     - Use ITER_ALLOW_P2PDMA to request FOLL_PCI_P2PDMA.
     - Disallow use of ITER_ALLOW_P2PDMA with non-user-backed iter.
     - Add support for extraction from KVEC-type iters.
     - Use iov_iter_advance() rather than open-coding it.
     - Make BVEC- and KVEC-type skip over initial empty vectors.

    ver #6)
     - Add back the function to indicate the cleanup mode.
     - Drop the cleanup_mode return arg to iov_iter_extract_pages().
     - Pass FOLL_SOURCE/DEST_BUF in gup_flags.  Check this against the
       iter data_source.

    ver #4)
     - Use ITER_SOURCE/DEST instead of WRITE/READ.
     - Allow additional FOLL_* flags, such as FOLL_PCI_P2PDMA, to be
       passed in.
    ver #3)
     - Switch to using EXPORT_SYMBOL_GPL to prevent indirect 3rd-party
       access to get/pin_user_pages_fast()[1].

 include/linux/uio.h |  22 +++
 lib/iov_iter.c      | 320 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 342 insertions(+)

diff --git a/include/linux/uio.h b/include/linux/uio.h
index 46d5080314c6..a8165335f8da 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -363,4 +363,26 @@ static inline void iov_iter_ubuf(struct iov_iter *i, unsigned int direction,
 /* Flags for iov_iter_get/extract_pages*() */
 #define ITER_ALLOW_P2PDMA	0x01 /* Allow P2PDMA on the extracted pages */
 
+ssize_t iov_iter_extract_pages(struct iov_iter *i, struct page ***pages,
+			       size_t maxsize, unsigned int maxpages,
+			       unsigned int extract_flags, size_t *offset0);
+
+/**
+ * iov_iter_extract_mode - Indicate how pages from the iterator will be retained
+ * @iter: The iterator
+ *
+ * Examine the iterator and indicate by returning FOLL_PIN or 0 as to how, if
+ * at all, pages extracted from the iterator will be retained by the extraction
+ * function.
+ *
+ * FOLL_PIN indicates that the pages will have a pin placed in them that the
+ * caller must unpin.  This must be done for DMA/async DIO to force fork() to
+ * copy a page for the child (the parent must retain the original page).
+ *
+ * 0 indicates that no measures are taken and that it's up to the caller to
+ * retain the pages.
+ */
+#define iov_iter_extract_mode(iter) (user_backed_iter(iter) ? FOLL_PIN : 0)
+
 #endif
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index fb04abe7d746..57f3e9404160 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -1916,3 +1916,323 @@ void iov_iter_restore(struct iov_iter *i, struct iov_iter_state *state)
 	i->iov -= state->nr_segs - i->nr_segs;
 	i->nr_segs = state->nr_segs;
 }
+
+/*
+ * Extract a list of contiguous pages from an ITER_PIPE iterator.  This does
+ * not get references of its own on the pages, nor does it get a pin on them.
+ * If there's a partial page, it adds that first and will then allocate and
+ * add pages into the pipe to make up the buffer space to the amount required.
+ *
+ * The caller must hold the pipe locked and only transferring into a pipe is
+ * supported.
+ */
+static ssize_t iov_iter_extract_pipe_pages(struct iov_iter *i,
+					   struct page ***pages, size_t maxsize,
+					   unsigned int maxpages,
+					   unsigned int extract_flags,
+					   size_t *offset0)
+{
+	unsigned int nr, offset, chunk, j;
+	struct page **p;
+	size_t left;
+
+	if (!sanity(i))
+		return -EFAULT;
+
+	offset = pipe_npages(i, &nr);
+	if (!nr)
+		return -EFAULT;
+	*offset0 = offset;
+
+	maxpages = min_t(size_t, nr, maxpages);
+	maxpages = want_pages_array(pages, maxsize, offset, maxpages);
+	if (!maxpages)
+		return -ENOMEM;
+	p = *pages;
+
+	left = maxsize;
+	for (j = 0; j < maxpages; j++) {
+		struct page *page = append_pipe(i, left, &offset);
+		if (!page)
+			break;
+		chunk = min_t(size_t, left, PAGE_SIZE - offset);
+		left -= chunk;
+		*p++ = page;
+	}
+	if (!j)
+		return -EFAULT;
+	return maxsize - left;
+}
+
+/*
+ * Extract a list of contiguous pages from an ITER_XARRAY iterator.  This does
+ * not get references on the pages, nor does it get a pin on them.
+ */
+static ssize_t iov_iter_extract_xarray_pages(struct iov_iter *i,
+					     struct page ***pages,
+					     size_t maxsize,
+					     unsigned int maxpages,
+					     unsigned int extract_flags,
+					     size_t *offset0)
+{
+	struct page *page, **p;
+	unsigned int nr = 0, offset;
+	loff_t pos = i->xarray_start + i->iov_offset;
+	pgoff_t index = pos >> PAGE_SHIFT;
+	XA_STATE(xas, i->xarray, index);
+
+	offset = pos & ~PAGE_MASK;
+	*offset0 = offset;
+
+	maxpages = want_pages_array(pages, maxsize, offset, maxpages);
+	if (!maxpages)
+		return -ENOMEM;
+	p = *pages;
+
+	rcu_read_lock();
+	for (page = xas_load(&xas); page; page = xas_next(&xas)) {
+		if (xas_retry(&xas, page))
+			continue;
+
+		/* Has the page moved or been split? */
+		if (unlikely(page != xas_reload(&xas))) {
+			xas_reset(&xas);
+			continue;
+		}
+
+		p[nr++] = find_subpage(page, xas.xa_index);
+		if (nr == maxpages)
+			break;
+	}
+	rcu_read_unlock();
+
+	maxsize = min_t(size_t, nr * PAGE_SIZE - offset, maxsize);
+	iov_iter_advance(i, maxsize);
+	return maxsize;
+}
+
+/*
+ * Extract a list of contiguous pages from an ITER_BVEC iterator.  This does
+ * not get references on the pages, nor does it get a pin on them.
+ */
+static ssize_t iov_iter_extract_bvec_pages(struct iov_iter *i,
+					   struct page ***pages, size_t maxsize,
+					   unsigned int maxpages,
+					   unsigned int extract_flags,
+					   size_t *offset0)
+{
+	struct page **p, *page;
+	size_t skip = i->iov_offset, offset;
+	int k;
+
+	for (;;) {
+		if (i->nr_segs == 0)
+			return 0;
+		maxsize = min(maxsize, i->bvec->bv_len - skip);
+		if (maxsize)
+			break;
+		i->iov_offset = 0;
+		i->nr_segs--;
+		i->bvec++;
+		skip = 0;
+	}
+
+	skip += i->bvec->bv_offset;
+	page = i->bvec->bv_page + skip / PAGE_SIZE;
+	offset = skip % PAGE_SIZE;
+	*offset0 = offset;
+
+	maxpages = want_pages_array(pages, maxsize, offset, maxpages);
+	if (!maxpages)
+		return -ENOMEM;
+	p = *pages;
+	for (k = 0; k < maxpages; k++)
+		p[k] = page + k;
+
+	maxsize = min_t(size_t, maxsize, maxpages * PAGE_SIZE - offset);
+	iov_iter_advance(i, maxsize);
+	return maxsize;
+}
+
+/*
+ * Extract a list of virtually contiguous pages from an ITER_KVEC iterator.
+ * This does not get references on the pages, nor does it get a pin on them.
+ */
+static ssize_t iov_iter_extract_kvec_pages(struct iov_iter *i,
+					   struct page ***pages, size_t maxsize,
+					   unsigned int maxpages,
+					   unsigned int extract_flags,
+					   size_t *offset0)
+{
+	struct page **p, *page;
+	const void *kaddr;
+	size_t skip = i->iov_offset, offset, len;
+	int k;
+
+	for (;;) {
+		if (i->nr_segs == 0)
+			return 0;
+		maxsize = min(maxsize, i->kvec->iov_len - skip);
+		if (maxsize)
+			break;
+		i->iov_offset = 0;
+		i->nr_segs--;
+		i->kvec++;
+		skip = 0;
+	}
+
+	offset = skip % PAGE_SIZE;
+	*offset0 = offset;
+	kaddr = i->kvec->iov_base + skip;
+
+	maxpages = want_pages_array(pages, maxsize, offset, maxpages);
+	if (!maxpages)
+		return -ENOMEM;
+	p = *pages;
+
+	kaddr -= offset;
+	len = offset + maxsize;
+	for (k = 0; k < maxpages; k++) {
+		size_t seg = min_t(size_t, len, PAGE_SIZE);
+
+		if (is_vmalloc_or_module_addr(kaddr))
+			page = vmalloc_to_page(kaddr);
+		else
+			page = virt_to_page(kaddr);
+
+		p[k] = page;
+		len -= seg;
+		kaddr += PAGE_SIZE;
+	}
+
+	maxsize = min_t(size_t, maxsize, maxpages * PAGE_SIZE - offset);
+	iov_iter_advance(i, maxsize);
+	return maxsize;
+}
+
+/*
+ * Extract a list of contiguous pages from a user iterator and get a pin on
+ * each of them.  This should only be used if the iterator is user-backed
+ * (IOBUF/UBUF).
+ *
+ * It does not get refs on the pages, but the pages must be unpinned by the
+ * caller once the transfer is complete.
+ *
+ * This is safe to be used where background IO/DMA *is* going to be modifying
+ * the buffer; using a pin rather than a ref forces fork() to give the child
+ * a copy of the page.
+ */
+static ssize_t iov_iter_extract_user_pages(struct iov_iter *i,
+					   struct page ***pages,
+					   size_t maxsize,
+					   unsigned int maxpages,
+					   unsigned int extract_flags,
+					   size_t *offset0)
+{
+	unsigned long addr;
+	unsigned int gup_flags = FOLL_PIN;
+	size_t offset;
+	int res;
+
+	if (i->data_source == ITER_DEST)
+		gup_flags |= FOLL_WRITE;
+	if (extract_flags & ITER_ALLOW_P2PDMA)
+		gup_flags |= FOLL_PCI_P2PDMA;
+	if (i->nofault)
+		gup_flags |= FOLL_NOFAULT;
+
+	addr = first_iovec_segment(i, &maxsize);
+	*offset0 = offset = addr % PAGE_SIZE;
+	addr &= PAGE_MASK;
+	maxpages = want_pages_array(pages, maxsize, offset, maxpages);
+	if (!maxpages)
+		return -ENOMEM;
+	res = pin_user_pages_fast(addr, maxpages, gup_flags, *pages);
+	if (unlikely(res <= 0))
+		return res;
+	maxsize = min_t(size_t, maxsize, res * PAGE_SIZE - offset);
+	iov_iter_advance(i, maxsize);
+	return maxsize;
+}
+
+/**
+ * iov_iter_extract_pages - Extract a list of contiguous pages from an iterator
+ * @i: The iterator to extract from
+ * @pages: Where to return the list of pages
+ * @maxsize: The maximum amount of iterator to extract
+ * @maxpages: The maximum size of the list of pages
+ * @extract_flags: Flags to qualify request
+ * @offset0: Where to return the starting offset into (*@pages)[0]
+ *
+ * Extract a list of contiguous pages from the current point of the iterator,
+ * advancing the iterator.  The maximum number of pages and the maximum amount
+ * of page contents can be set.
+ *
+ * If *@pages is NULL, a page list will be allocated to the required size and
+ * *@pages will be set to its base.  If *@pages is not NULL, it will be
+ * assumed that the caller allocated a page list at least @maxpages in size
+ * and this will be filled in.
+ *
+ * @extract_flags can have ITER_ALLOW_P2PDMA set to request peer-to-peer DMA
+ * be allowed on the pages extracted.
+ *
+ * The iov_iter_extract_mode() function can be used to query how cleanup
+ * should be performed.
+ *
+ * Extra refs or pins on the pages may be obtained as follows:
+ *
+ *  (*) If the iterator is user-backed (ITER_IOVEC/ITER_UBUF), pins will be
+ *      added to the pages, but refs will not be taken.
+ *      iov_iter_extract_mode() will return FOLL_PIN.
+ *
+ *  (*) If the iterator is ITER_PIPE, this must describe a destination for
+ *      the data.  Additional pages may be allocated and added to the pipe
+ *      (which will hold the refs), but pins will not be obtained for the
+ *      caller.  The caller must hold the pipe lock.
+ *      iov_iter_extract_mode() will return 0.
+ *
+ *  (*) If the iterator is ITER_KVEC, ITER_BVEC or ITER_XARRAY, the pages are
+ *      merely listed; no extra refs or pins are obtained.
+ *      iov_iter_extract_mode() will return 0.
+ *
+ * Note also:
+ *
+ *  (*) Use with ITER_DISCARD is not supported as that has no content.
+ *
+ * On success, the function sets *@pages to the new pagelist, if allocated,
+ * and sets *offset0 to the offset into the first page.
+ *
+ * It may also return -ENOMEM and -EFAULT.
+ */
+ssize_t iov_iter_extract_pages(struct iov_iter *i,
+			       struct page ***pages,
+			       size_t maxsize,
+			       unsigned int maxpages,
+			       unsigned int extract_flags,
+			       size_t *offset0)
+{
+	maxsize = min_t(size_t, min_t(size_t, maxsize, i->count), MAX_RW_COUNT);
+	if (!maxsize)
+		return 0;
+
+	if (likely(user_backed_iter(i)))
+		return iov_iter_extract_user_pages(i, pages, maxsize,
+						   maxpages, extract_flags,
+						   offset0);
+	if (iov_iter_is_kvec(i))
+		return iov_iter_extract_kvec_pages(i, pages, maxsize,
+						   maxpages, extract_flags,
+						   offset0);
+	if (iov_iter_is_bvec(i))
+		return iov_iter_extract_bvec_pages(i, pages, maxsize,
+						   maxpages, extract_flags,
+						   offset0);
+	if (iov_iter_is_pipe(i))
+		return iov_iter_extract_pipe_pages(i, pages, maxsize,
+						   maxpages, extract_flags,
+						   offset0);
+	if (iov_iter_is_xarray(i))
+		return iov_iter_extract_xarray_pages(i, pages, maxsize,
+						     maxpages, extract_flags,
+						     offset0);
+	return -EFAULT;
+}
+EXPORT_SYMBOL_GPL(iov_iter_extract_pages);
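
[Editor's note: the following sketch is not part of the patch. It is a
hypothetical caller illustrating the intended pairing of
iov_iter_extract_pages() and iov_iter_extract_mode(); the function name,
the 1 MiB/256-page limits, and the kvfree() of the allocated page list are
assumptions for illustration, not taken from the patch.]

    /* Hypothetical DIO caller: extract pages for DMA, then clean up
     * according to the extraction mode reported for the iterator. */
    static ssize_t example_do_dio(struct iov_iter *iter)
    {
    	struct page **pages = NULL;	/* NULL => list is allocated for us */
    	size_t offset0;
    	ssize_t n;
    	int k;

    	n = iov_iter_extract_pages(iter, &pages, SZ_1M, 256, 0, &offset0);
    	if (n <= 0)
    		return n;

    	/* ... submit DMA covering offset0..offset0+n across pages[] ... */

    	/* Only user-backed iterators leave pins for us to drop. */
    	if (iov_iter_extract_mode(iter) == FOLL_PIN)
    		for (k = 0; k < DIV_ROUND_UP(offset0 + n, PAGE_SIZE); k++)
    			unpin_user_page(pages[k]);
    	kvfree(pages);
    	return n;
    }

For the non-user-backed cases (KVEC/BVEC/XARRAY/PIPE) the mode is 0 and the
cleanup loop is skipped, matching case (2) in the description above.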