From patchwork Wed Jul 18 22:58:45 2018
X-Patchwork-Submitter: Omar Sandoval
X-Patchwork-Id: 10533409
From: Omar Sandoval
To: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org, Andrew Morton
Cc: Alexey Dobriyan, Eric Biederman, James Morse, Bhupesh Sharma, kernel-team@fb.com
Subject: [PATCH v3 5/8] proc/kcore: hold lock during read
Date: Wed, 18 Jul 2018 15:58:45 -0700
Message-Id: 
X-Mailer: git-send-email 2.18.0
In-Reply-To: 
References: 

From: Omar Sandoval

Now that we're using an rwsem, we can hold it during the entirety of
read_kcore() and have a common return path. This is preparation for the
next change.

Signed-off-by: Omar Sandoval
---
 fs/proc/kcore.c | 70 ++++++++++++++++++++++++++++---------------------
 1 file changed, 40 insertions(+), 30 deletions(-)

diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
index 95aa988c5b5d..e317ac890871 100644
--- a/fs/proc/kcore.c
+++ b/fs/proc/kcore.c
@@ -440,19 +440,18 @@ static ssize_t
 read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 {
 	char *buf = file->private_data;
-	ssize_t acc = 0;
 	size_t size, tsz;
 	size_t elf_buflen;
 	int nphdr;
 	unsigned long start;
+	size_t orig_buflen = buflen;
+	int ret = 0;
 
 	down_read(&kclist_lock);
 	size = get_kcore_size(&nphdr, &elf_buflen);
 
-	if (buflen == 0 || *fpos >= size) {
-		up_read(&kclist_lock);
-		return 0;
-	}
+	if (buflen == 0 || *fpos >= size)
+		goto out;
 
 	/* trim buflen to not go beyond EOF */
 	if (buflen > size - *fpos)
@@ -465,28 +464,26 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 		tsz = elf_buflen - *fpos;
 		if (buflen < tsz)
 			tsz = buflen;
-		elf_buf = kzalloc(elf_buflen, GFP_ATOMIC);
+		elf_buf = kzalloc(elf_buflen, GFP_KERNEL);
 		if (!elf_buf) {
-			up_read(&kclist_lock);
-			return -ENOMEM;
+			ret = -ENOMEM;
+			goto out;
 		}
 		elf_kcore_store_hdr(elf_buf, nphdr, elf_buflen);
-		up_read(&kclist_lock);
 		if (copy_to_user(buffer, elf_buf + *fpos, tsz)) {
 			kfree(elf_buf);
-			return -EFAULT;
+			ret = -EFAULT;
+			goto out;
 		}
 		kfree(elf_buf);
 		buflen -= tsz;
 		*fpos += tsz;
 		buffer += tsz;
-		acc += tsz;
 
 		/* leave now if filled buffer already */
 		if (buflen == 0)
-			return acc;
-	} else
-		up_read(&kclist_lock);
+			goto out;
+	}
 
 	/*
 	 * Check to see if our file offset matches with any of
@@ -499,25 +496,29 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 	while (buflen) {
 		struct kcore_list *m;
 
-		down_read(&kclist_lock);
 		list_for_each_entry(m, &kclist_head, list) {
 			if (start >= m->addr && start < (m->addr+m->size))
 				break;
 		}
-		up_read(&kclist_lock);
 
 		if (&m->list == &kclist_head) {
-			if (clear_user(buffer, tsz))
-				return -EFAULT;
+			if (clear_user(buffer, tsz)) {
+				ret = -EFAULT;
+				goto out;
+			}
 		} else if (m->type == KCORE_VMALLOC) {
 			vread(buf, (char *)start, tsz);
 			/* we have to zero-fill user buffer even if no read */
-			if (copy_to_user(buffer, buf, tsz))
-				return -EFAULT;
+			if (copy_to_user(buffer, buf, tsz)) {
+				ret = -EFAULT;
+				goto out;
+			}
 		} else if (m->type == KCORE_USER) {
 			/* User page is handled prior to normal kernel page: */
-			if (copy_to_user(buffer, (char *)start, tsz))
-				return -EFAULT;
+			if (copy_to_user(buffer, (char *)start, tsz)) {
+				ret = -EFAULT;
+				goto out;
+			}
 		} else {
 			if (kern_addr_valid(start)) {
 				/*
@@ -525,26 +526,35 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 				 * hardened user copy kernel text checks.
 				 */
 				if (probe_kernel_read(buf, (void *) start, tsz)) {
-					if (clear_user(buffer, tsz))
-						return -EFAULT;
+					if (clear_user(buffer, tsz)) {
+						ret = -EFAULT;
+						goto out;
+					}
 				} else {
-					if (copy_to_user(buffer, buf, tsz))
-						return -EFAULT;
+					if (copy_to_user(buffer, buf, tsz)) {
+						ret = -EFAULT;
+						goto out;
+					}
 				}
 			} else {
-				if (clear_user(buffer, tsz))
-					return -EFAULT;
+				if (clear_user(buffer, tsz)) {
+					ret = -EFAULT;
+					goto out;
+				}
 			}
 		}
 		buflen -= tsz;
 		*fpos += tsz;
 		buffer += tsz;
-		acc += tsz;
 		start += tsz;
 		tsz = (buflen > PAGE_SIZE ? PAGE_SIZE : buflen);
 	}
 
-	return acc;
+out:
+	up_read(&kclist_lock);
+	if (ret)
+		return ret;
+	return orig_buflen - buflen;
 }
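
For readers who want to see the resulting control flow outside of the diff
context, the sketch below reproduces the same pattern in plain userspace C:
take the reader side of a lock once, route every early exit through a single
out: label that drops the lock, and return either the error code or the number
of bytes consumed (orig_buflen - buflen). It is only an illustration, not code
from this patch; demo_lock, demo_read() and copy_out() are made-up names, a
pthread rwlock stands in for the kernel rwsem taken with down_read()/up_read()
on kclist_lock, and copy_out() stands in for copy_to_user().

#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

static pthread_rwlock_t demo_lock = PTHREAD_RWLOCK_INITIALIZER;
static char demo_data[] = "data guarded by demo_lock";
static size_t demo_size = sizeof(demo_data);

/* Stand-in for copy_to_user(): returns 0 on success, non-zero on failure. */
static int copy_out(char *dst, const char *src, size_t len)
{
	memcpy(dst, src, len);
	return 0;
}

static ssize_t demo_read(char *buffer, size_t buflen, size_t *fpos)
{
	size_t orig_buflen = buflen;
	size_t tsz;
	int ret = 0;

	pthread_rwlock_rdlock(&demo_lock);	/* like down_read(&kclist_lock) */

	if (buflen == 0 || *fpos >= demo_size)
		goto out;			/* "nothing to read" still unlocks below */

	tsz = demo_size - *fpos;		/* trim the chunk to EOF */
	if (tsz > buflen)
		tsz = buflen;

	if (copy_out(buffer, demo_data + *fpos, tsz)) {
		ret = -EFAULT;			/* every error shares the same exit */
		goto out;
	}
	*fpos += tsz;
	buflen -= tsz;

out:
	pthread_rwlock_unlock(&demo_lock);	/* single unlock, like up_read() at out: */
	if (ret)
		return ret;
	return orig_buflen - buflen;		/* bytes actually consumed */
}

int main(void)
{
	char buf[8];
	size_t pos = 0;
	ssize_t n;

	while ((n = demo_read(buf, sizeof(buf), &pos)) > 0)
		printf("read %zd bytes ending at offset %zu\n", n, pos);
	return 0;
}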