From patchwork Tue Mar 21 20:54:33 2023
X-Patchwork-Submitter: Lorenzo Stoakes <lstoakes@gmail.com>
X-Patchwork-Id: 13183276
From: Lorenzo Stoakes <lstoakes@gmail.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	linux-fsdevel@vger.kernel.org, Andrew Morton
Cc: Baoquan He, Uladzislau Rezki, Matthew Wilcox, David Hildenbrand,
	Liu Shixin, Jiri Olsa, Jens Axboe, Alexander Viro, Lorenzo Stoakes
Subject: [PATCH v4 4/4] mm: vmalloc: convert vread() to vread_iter()
Date: Tue, 21 Mar 2023 20:54:33 +0000
Message-Id: <6b3899bbbf1f4bd6b7133c8b6f27b3a8791607b0.1679431886.git.lstoakes@gmail.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

Having previously laid the foundation for converting vread() to an iterator
function, pull the trigger and do so.

This patch attempts to provide minimal refactoring and to reflect the
existing logic as best we can; for example, we continue to zero portions of
memory not read, as before.

Overall, there should be no functional difference other than a performance
improvement in /proc/kcore access to vmalloc regions.

Now that we have eliminated the need for a bounce buffer in
read_kcore_iter(), we dispense with it.

We need to ensure userland pages are faulted in before proceeding, as we
take spin locks.

Additionally, we must account for the fact that at any point a copy may
fail; if this happens, we exit indicating fewer bytes retrieved than
expected.

Signed-off-by: Lorenzo Stoakes <lstoakes@gmail.com>
---
 fs/proc/kcore.c         |  26 ++---
 include/linux/vmalloc.h |   3 +-
 mm/nommu.c              |  10 +-
 mm/vmalloc.c            | 234 +++++++++++++++++++++++++--------------
 4 files changed, 160 insertions(+), 113 deletions(-)

diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
index 25e0eeb8d498..221e16f75ba5 100644
--- a/fs/proc/kcore.c
+++ b/fs/proc/kcore.c
@@ -307,13 +307,9 @@ static void append_kcore_note(char *notes, size_t *i, const char *name,
 	*i = ALIGN(*i + descsz, 4);
 }
 
-static ssize_t
-read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
+static ssize_t read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
 {
-	struct file *file = iocb->ki_filp;
-	char *buf = file->private_data;
 	loff_t *ppos = &iocb->ki_pos;
-
 	size_t phdrs_offset, notes_offset, data_offset;
 	size_t page_offline_frozen = 1;
 	size_t phdrs_len, notes_len;
@@ -507,9 +503,12 @@ read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
 
 		switch (m->type) {
 		case KCORE_VMALLOC:
-			vread(buf, (char *)start, tsz);
-			/* we have to zero-fill user buffer even if no read */
-			if (copy_to_iter(buf, tsz, iter) != tsz) {
+			/*
+			 * Make sure user pages are faulted in as we acquire
+			 * spinlocks in vread_iter().
+			 */
+			if (fault_in_iov_iter_writeable(iter, tsz) ||
+			    vread_iter(iter, (char *)start, tsz) != tsz) {
 				ret = -EFAULT;
 				goto out;
 			}
@@ -582,10 +581,6 @@ static int open_kcore(struct inode *inode, struct file *filp)
 	if (ret)
 		return ret;
 
-	filp->private_data = kmalloc(PAGE_SIZE, GFP_KERNEL);
-	if (!filp->private_data)
-		return -ENOMEM;
-
 	if (kcore_need_update)
 		kcore_update_ram();
 	if (i_size_read(inode) != proc_root_kcore->size) {
@@ -596,16 +591,9 @@ static int open_kcore(struct inode *inode, struct file *filp)
 	return 0;
 }
 
-static int release_kcore(struct inode *inode, struct file *file)
-{
-	kfree(file->private_data);
-	return 0;
-}
-
 static const struct proc_ops kcore_proc_ops = {
 	.proc_read_iter	= read_kcore_iter,
 	.proc_open	= open_kcore,
-	.proc_release	= release_kcore,
 	.proc_lseek	= default_llseek,
 };
 
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 69250efa03d1..461aa5637f65 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -9,6 +9,7 @@
 #include <asm/page.h>		/* pgprot_t */
 #include <linux/rbtree.h>
 #include <linux/overflow.h>
+#include <linux/uio.h>
 
 #include <asm/vmalloc.h>
 
@@ -251,7 +252,7 @@ static inline void set_vm_flush_reset_perms(void *addr)
 #endif
 
 /* for /proc/kcore */
-extern long vread(char *buf, char *addr, unsigned long count);
+extern long vread_iter(struct iov_iter *iter, const char *addr, size_t count);
 
 /*
  * Internals. Don't use..
diff --git a/mm/nommu.c b/mm/nommu.c
index 57ba243c6a37..e0fcd948096e 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -36,6 +36,7 @@
 #include <linux/printk.h>
 
 #include <linux/uaccess.h>
+#include <linux/uio.h>
 #include <asm/tlb.h>
 #include <asm/tlbflush.h>
 #include <asm/mmu_context.h>
@@ -198,14 +199,13 @@ unsigned long vmalloc_to_pfn(const void *addr)
 }
 EXPORT_SYMBOL(vmalloc_to_pfn);
 
-long vread(char *buf, char *addr, unsigned long count)
+long vread_iter(struct iov_iter *iter, char *addr, size_t count)
 {
 	/* Don't allow overflow */
-	if ((unsigned long) buf + count < count)
-		count = -(unsigned long) buf;
+	if ((unsigned long) addr + count < count)
+		count = -(unsigned long) addr;
 
-	memcpy(buf, addr, count);
-	return count;
+	return copy_to_iter(addr, count, iter);
 }
 
 /*
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 978194dc2bb8..ebfa1e9fe6f9 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -37,7 +37,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
@@ -3442,62 +3441,95 @@ void *vmalloc_32_user(unsigned long size)
 EXPORT_SYMBOL(vmalloc_32_user);
 
 /*
- * small helper routine , copy contents to buf from addr.
- * If the page is not present, fill zero.
+ * Atomically zero bytes in the iterator.
+ *
+ * Returns the number of zeroed bytes.
  */
+size_t zero_iter(struct iov_iter *iter, size_t count)
+{
+	size_t remains = count;
+
+	while (remains > 0) {
+		size_t num, copied;
+
+		num = remains < PAGE_SIZE ? remains : PAGE_SIZE;
+		copied = copy_page_to_iter_atomic(ZERO_PAGE(0), 0, num, iter);
+		remains -= copied;
+
+		if (copied < num)
+			break;
+	}
+
+	return count - remains;
+}
 
-static int aligned_vread(char *buf, char *addr, unsigned long count)
+/*
+ * small helper routine, copy contents to iter from addr.
+ * If the page is not present, fill zero.
+ *
+ * Returns the number of copied bytes.
+ */
+static size_t aligned_vread_iter(struct iov_iter *iter,
+				 const char *addr, size_t count)
 {
-	struct page *p;
-	int copied = 0;
+	size_t remains = count;
+	struct page *page;
 
-	while (count) {
+	while (remains > 0) {
 		unsigned long offset, length;
+		size_t copied = 0;
 
 		offset = offset_in_page(addr);
 		length = PAGE_SIZE - offset;
-		if (length > count)
-			length = count;
-		p = vmalloc_to_page(addr);
+		if (length > remains)
+			length = remains;
+		page = vmalloc_to_page(addr);
 		/*
-		 * To do safe access to this _mapped_ area, we need
-		 * lock. But adding lock here means that we need to add
-		 * overhead of vmalloc()/vfree() calls for this _debug_
-		 * interface, rarely used. Instead of that, we'll use
-		 * kmap() and get small overhead in this access function.
+		 * To do safe access to this _mapped_ area, we need lock. But
+		 * adding lock here means that we need to add overhead of
+		 * vmalloc()/vfree() calls for this _debug_ interface, rarely
+		 * used. Instead of that, we'll use an local mapping via
+		 * copy_page_to_iter_atomic() and accept a small overhead in
+		 * this access function.
 		 */
-		if (p) {
-			/* We can expect USER0 is not used -- see vread() */
-			void *map = kmap_atomic(p);
-			memcpy(buf, map + offset, length);
-			kunmap_atomic(map);
-		} else
-			memset(buf, 0, length);
+		if (page)
+			copied = copy_page_to_iter_atomic(page, offset, length,
+							  iter);
+
+		/* Zero anything we were unable to copy. */
+		copied += zero_iter(iter, length - copied);
+
+		addr += copied;
+		remains -= copied;
 
-		addr += length;
-		buf += length;
-		copied += length;
-		count -= length;
+		if (copied != length)
+			break;
 	}
-	return copied;
+
+	return count - remains;
 }
 
-static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags)
+/*
+ * Read from a vm_map_ram region of memory.
+ *
+ * Returns the number of copied bytes.
+ */
+static size_t vmap_ram_vread_iter(struct iov_iter *iter, const char *addr,
+				  size_t count, unsigned long flags)
 {
 	char *start;
 	struct vmap_block *vb;
 	unsigned long offset;
-	unsigned int rs, re, n;
+	unsigned int rs, re;
+	size_t remains, n;
 
 	/*
	 * If it's area created by vm_map_ram() interface directly, but
	 * not further subdividing and delegating management to vmap_block,
	 * handle it here.
	 */
-	if (!(flags & VMAP_BLOCK)) {
-		aligned_vread(buf, addr, count);
-		return;
-	}
+	if (!(flags & VMAP_BLOCK))
+		return aligned_vread_iter(iter, addr, count);
 
 	/*
	 * Area is split into regions and tracked with vmap_block, read out
@@ -3505,50 +3537,65 @@
	 */
 	vb = xa_load(&vmap_blocks, addr_to_vb_idx((unsigned long)addr));
 	if (!vb)
-		goto finished;
+		goto finished_zero;
 
 	spin_lock(&vb->lock);
 	if (bitmap_empty(vb->used_map, VMAP_BBMAP_BITS)) {
 		spin_unlock(&vb->lock);
-		goto finished;
+		goto finished_zero;
 	}
+
+	remains = count;
 	for_each_set_bitrange(rs, re, vb->used_map, VMAP_BBMAP_BITS) {
-		if (!count)
-			break;
+		size_t copied;
+
+		if (remains == 0)
+			goto finished;
+
 		start = vmap_block_vaddr(vb->va->va_start, rs);
-		while (addr < start) {
-			if (count == 0)
-				goto unlock;
-			*buf = '\0';
-			buf++;
-			addr++;
-			count--;
+
+		if (addr < start) {
+			size_t to_zero = min_t(size_t, start - addr, remains);
+			size_t zeroed = zero_iter(iter, to_zero);
+
+			addr += zeroed;
+			remains -= zeroed;
+
+			if (remains == 0 || zeroed != to_zero)
+				goto finished;
 		}
+
 		/*it could start reading from the middle of used region*/
 		offset = offset_in_page(addr);
 		n = ((re - rs + 1) << PAGE_SHIFT) - offset;
-		if (n > count)
-			n = count;
-		aligned_vread(buf, start+offset, n);
+		if (n > remains)
+			n = remains;
+
+		copied = aligned_vread_iter(iter, start + offset, n);
 
-		buf += n;
-		addr += n;
-		count -= n;
+		addr += copied;
+		remains -= copied;
+
+		if (copied != n)
+			goto finished;
 	}
-unlock:
+
 	spin_unlock(&vb->lock);
-finished:
+finished_zero:
 	/* zero-fill the left dirty or free regions */
-	if (count)
-		memset(buf, 0, count);
+	return count - remains + zero_iter(iter, remains);
+finished:
+	/* We couldn't copy/zero everything */
+	spin_unlock(&vb->lock);
+	return count - remains;
 }
 
 /**
- * vread() - read vmalloc area in a safe way.
- * @buf: buffer for reading data
- * @addr: vm address.
- * @count: number of bytes to be read.
+ * vread_iter() - read vmalloc area in a safe way to an iterator.
+ * @iter: the iterator to which data should be written.
+ * @addr: vm address.
+ * @count: number of bytes to be read.
  *
  * This function checks that addr is a valid vmalloc'ed area, and
  * copy data from that area to a given buffer. If the given memory range
@@ -3568,13 +3615,12 @@ static void vmap_ram_vread(char *buf, char *addr, int count, unsigned long flags
  * (same number as @count) or %0 if [addr...addr+count) doesn't
  * include any intersection with valid vmalloc area
  */
-long vread(char *buf, char *addr, unsigned long count)
+long vread_iter(struct iov_iter *iter, const char *addr, size_t count)
 {
 	struct vmap_area *va;
 	struct vm_struct *vm;
-	char *vaddr, *buf_start = buf;
-	unsigned long buflen = count;
-	unsigned long n, size, flags;
+	char *vaddr;
+	size_t n, size, flags, remains;
 
 	addr = kasan_reset_tag(addr);
 
@@ -3582,18 +3628,22 @@ long vread(char *buf, char *addr, unsigned long count)
 	if ((unsigned long) addr + count < count)
 		count = -(unsigned long) addr;
 
+	remains = count;
+
 	spin_lock(&vmap_area_lock);
 	va = find_vmap_area_exceed_addr((unsigned long)addr);
 	if (!va)
-		goto finished;
+		goto finished_zero;
 
 	/* no intersects with alive vmap_area */
-	if ((unsigned long)addr + count <= va->va_start)
-		goto finished;
+	if ((unsigned long)addr + remains <= va->va_start)
+		goto finished_zero;
 
 	list_for_each_entry_from(va, &vmap_area_list, list) {
-		if (!count)
-			break;
+		size_t copied;
+
+		if (remains == 0)
+			goto finished;
 
 		vm = va->vm;
 		flags = va->flags & VMAP_FLAGS_MASK;
@@ -3608,6 +3658,7 @@ long vread(char *buf, char *addr, unsigned long count)
 
 		if (vm && (vm->flags & VM_UNINITIALIZED))
 			continue;
+
 		/* Pair with smp_wmb() in clear_vm_uninitialized_flag() */
 		smp_rmb();
 
@@ -3616,38 +3667,45 @@ long vread(char *buf, char *addr, unsigned long count)
 
 		if (addr >= vaddr + size)
 			continue;
-		while (addr < vaddr) {
-			if (count == 0)
+
+		if (addr < vaddr) {
+			size_t to_zero = min_t(size_t, vaddr - addr, remains);
+			size_t zeroed = zero_iter(iter, to_zero);
+
+			addr += zeroed;
+			remains -= zeroed;
+
+			if (remains == 0 || zeroed != to_zero)
 				goto finished;
-			*buf = '\0';
-			buf++;
-			addr++;
-			count--;
 		}
+
 		n = vaddr + size - addr;
-		if (n > count)
-			n = count;
+		if (n > remains)
+			n = remains;
 
 		if (flags & VMAP_RAM)
-			vmap_ram_vread(buf, addr, n, flags);
+			copied = vmap_ram_vread_iter(iter, addr, n, flags);
 		else if (!(vm->flags & VM_IOREMAP))
-			aligned_vread(buf, addr, n);
+			copied = aligned_vread_iter(iter, addr, n);
 		else /* IOREMAP area is treated as memory hole */
-			memset(buf, 0, n);
-		buf += n;
-		addr += n;
-		count -= n;
+			copied = zero_iter(iter, n);
+
+		addr += copied;
+		remains -= copied;
+
+		if (copied != n)
+			goto finished;
 	}
-finished:
-	spin_unlock(&vmap_area_lock);
 
-	if (buf == buf_start)
-		return 0;
+finished_zero:
+	spin_unlock(&vmap_area_lock);
 	/* zero-fill memory holes */
-	if (buf != buf_start + buflen)
-		memset(buf, 0, buflen - (buf - buf_start));
+	return count - remains + zero_iter(iter, remains);
+finished:
+	/* Nothing remains, or We couldn't copy/zero everything. */
+	spin_unlock(&vmap_area_lock);
 
-	return buflen;
+	return count - remains;
 }
 
 /**
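
For illustration only (not part of the patch): the calling convention this
establishes for vread_iter() is the one visible in the read_kcore_iter()
hunk above. A minimal sketch, with a hypothetical helper name, might look
like this - fault in the destination pages first, since vread_iter() copies
while holding spinlocks and cannot handle faults itself, then treat a short
return value as a failed copy:

	/* Sketch: hypothetical caller of the new vread_iter() interface. */
	static ssize_t read_vmalloc_to_iter(struct iov_iter *iter,
					    const char *addr, size_t tsz)
	{
		/*
		 * A short return from vread_iter() means part of
		 * [addr, addr + tsz) could not be copied or zeroed.
		 */
		if (fault_in_iov_iter_writeable(iter, tsz) ||
		    vread_iter(iter, addr, tsz) != tsz)
			return -EFAULT;

		return tsz;
	}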