From patchwork Wed Jul 25 23:59:16 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Omar Sandoval
X-Patchwork-Id: 10544951
From: Omar Sandoval
To: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
        Andrew Morton
Cc: Alexey Dobriyan, Eric Biederman, James Morse, Bhupesh Sharma,
        kernel-team@fb.com
Subject: [PATCH v4 5/9] proc/kcore: hold lock during read
Date: Wed, 25 Jul 2018 16:59:16 -0700
Message-Id: 
X-Mailer: git-send-email 2.18.0
In-Reply-To: 
References: 
Sender: linux-fsdevel-owner@vger.kernel.org
Precedence: bulk
List-ID: 
X-Mailing-List: linux-fsdevel@vger.kernel.org
X-Virus-Scanned: ClamAV using ClamSMTP

From: Omar Sandoval

Now that we're using an rwsem, we can hold it during the entirety of
read_kcore() and have a common return path. This is preparation for the
next change.

Signed-off-by: Omar Sandoval
---
 fs/proc/kcore.c | 70 ++++++++++++++++++++++++++++---------------------
 1 file changed, 40 insertions(+), 30 deletions(-)

diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
index 95aa988c5b5d..dc34642bbdb7 100644
--- a/fs/proc/kcore.c
+++ b/fs/proc/kcore.c
@@ -440,19 +440,18 @@ static ssize_t
 read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 {
         char *buf = file->private_data;
-        ssize_t acc = 0;
         size_t size, tsz;
         size_t elf_buflen;
         int nphdr;
         unsigned long start;
+        size_t orig_buflen = buflen;
+        int ret = 0;
 
         down_read(&kclist_lock);
         size = get_kcore_size(&nphdr, &elf_buflen);
 
-        if (buflen == 0 || *fpos >= size) {
-                up_read(&kclist_lock);
-                return 0;
-        }
+        if (buflen == 0 || *fpos >= size)
+                goto out;
 
         /* trim buflen to not go beyond EOF */
         if (buflen > size - *fpos)
@@ -465,28 +464,26 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
                 tsz = elf_buflen - *fpos;
                 if (buflen < tsz)
                         tsz = buflen;
-                elf_buf = kzalloc(elf_buflen, GFP_ATOMIC);
+                elf_buf = kzalloc(elf_buflen, GFP_KERNEL);
                 if (!elf_buf) {
-                        up_read(&kclist_lock);
-                        return -ENOMEM;
+                        ret = -ENOMEM;
+                        goto out;
                 }
                 elf_kcore_store_hdr(elf_buf, nphdr, elf_buflen);
-                up_read(&kclist_lock);
                 if (copy_to_user(buffer, elf_buf + *fpos, tsz)) {
                         kfree(elf_buf);
-                        return -EFAULT;
+                        ret = -EFAULT;
+                        goto out;
                 }
                 kfree(elf_buf);
                 buflen -= tsz;
                 *fpos += tsz;
                 buffer += tsz;
-                acc += tsz;
 
                 /* leave now if filled buffer already */
                 if (buflen == 0)
-                        return acc;
-        } else
-                up_read(&kclist_lock);
+                        goto out;
+        }
 
         /*
          * Check to see if our file offset matches with any of
@@ -499,25 +496,29 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
         while (buflen) {
                 struct kcore_list *m;
 
-                down_read(&kclist_lock);
                 list_for_each_entry(m, &kclist_head, list) {
                         if (start >= m->addr && start < (m->addr+m->size))
                                 break;
                 }
-                up_read(&kclist_lock);
 
                 if (&m->list == &kclist_head) {
-                        if (clear_user(buffer, tsz))
-                                return -EFAULT;
+                        if (clear_user(buffer, tsz)) {
+                                ret = -EFAULT;
+                                goto out;
+                        }
                 } else if (m->type == KCORE_VMALLOC) {
                         vread(buf, (char *)start, tsz);
                         /* we have to zero-fill user buffer even if no read */
-                        if (copy_to_user(buffer, buf, tsz))
-                                return -EFAULT;
+                        if (copy_to_user(buffer, buf, tsz)) {
+                                ret = -EFAULT;
+                                goto out;
+                        }
                 } else if (m->type == KCORE_USER) {
                         /* User page is handled prior to normal kernel page: */
-                        if (copy_to_user(buffer, (char *)start, tsz))
-                                return -EFAULT;
+                        if (copy_to_user(buffer, (char *)start, tsz)) {
+                                ret = -EFAULT;
+                                goto out;
+                        }
                 } else {
                         if (kern_addr_valid(start)) {
                                 /*
@@ -525,26 +526,35 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
                                  * hardened user copy kernel text checks.
                                  */
                                 if (probe_kernel_read(buf, (void *) start, tsz)) {
-                                        if (clear_user(buffer, tsz))
-                                                return -EFAULT;
+                                        if (clear_user(buffer, tsz)) {
+                                                ret = -EFAULT;
+                                                goto out;
+                                        }
                                 } else {
-                                        if (copy_to_user(buffer, buf, tsz))
-                                                return -EFAULT;
+                                        if (copy_to_user(buffer, buf, tsz)) {
+                                                ret = -EFAULT;
+                                                goto out;
+                                        }
                                 }
                         } else {
-                                if (clear_user(buffer, tsz))
-                                        return -EFAULT;
+                                if (clear_user(buffer, tsz)) {
+                                        ret = -EFAULT;
+                                        goto out;
+                                }
                         }
                 }
                 buflen -= tsz;
                 *fpos += tsz;
                 buffer += tsz;
-                acc += tsz;
                 start += tsz;
                 tsz = (buflen > PAGE_SIZE ? PAGE_SIZE : buflen);
         }
 
-        return acc;
+out:
+        up_read(&kclist_lock);
+        if (ret)
+                return ret;
+        return orig_buflen - buflen;
 }
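
A minimal, self-contained userspace sketch of the pattern this patch adopts:
take the lock once, funnel every failure through a single "out" label that
drops it, and report progress as orig_buflen - buflen rather than carrying a
separate 'acc' accumulator. The fake_read() helper, its static payload, and
the pthread rwlock standing in for the kclist_lock rwsem are invented for the
illustration; nothing below is code from the patch.

/* Illustrative sketch only, not kernel code. Build with: cc -pthread sketch.c */
#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

static pthread_rwlock_t payload_lock = PTHREAD_RWLOCK_INITIALIZER;
static const char payload[] = "hello from the example payload";

static ssize_t fake_read(char *buffer, size_t buflen, size_t *fpos)
{
        size_t orig_buflen = buflen;            /* remember the original request */
        int ret = 0;

        pthread_rwlock_rdlock(&payload_lock);   /* held across the whole read */

        if (buflen == 0 || *fpos >= sizeof(payload))
                goto out;                       /* nothing to copy: report 0 bytes */

        /* copy in small chunks, decrementing buflen as we go */
        while (buflen && *fpos < sizeof(payload)) {
                size_t tsz = sizeof(payload) - *fpos;

                if (tsz > 8)
                        tsz = 8;                /* pretend PAGE_SIZE-sized chunking */
                if (tsz > buflen)
                        tsz = buflen;
                if (buffer == NULL) {           /* stand-in for copy_to_user() failing */
                        ret = -EFAULT;
                        goto out;
                }
                memcpy(buffer, payload + *fpos, tsz);
                buffer += tsz;
                *fpos += tsz;
                buflen -= tsz;
        }

out:
        pthread_rwlock_unlock(&payload_lock);   /* the one and only unlock site */
        if (ret)
                return ret;
        return orig_buflen - buflen;            /* bytes actually produced */
}

int main(void)
{
        char buf[64];
        size_t pos = 0;
        size_t bad_pos = 0;

        printf("first read:  %zd bytes\n", fake_read(buf, sizeof(buf), &pos));
        printf("second read: %zd bytes\n", fake_read(buf, sizeof(buf), &pos));
        printf("bad buffer:  %zd\n", fake_read(NULL, sizeof(buf), &bad_pos));
        return 0;
}

The point of the single exit is that the unlock happens exactly once no matter
which allocation or copy fails, which is what makes it safe to keep the lock
held for the whole read: the first call reports the payload size, the second
reports 0 at end of data, and the NULL-buffer call returns -EFAULT through the
same out: path that releases the lock.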