From patchwork Fri Jul 13 00:09:36 2018
X-Patchwork-Submitter: Omar Sandoval
X-Patchwork-Id: 10522441
From: Omar Sandoval
To: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 Andrew Morton
Cc: Alexey Dobriyan, Eric Biederman, James Morse, Bhupesh Sharma,
 kernel-team@fb.com
Subject: [PATCH v2 4/7] proc/kcore: hold lock during read
Date: Thu, 12 Jul 2018 17:09:36 -0700
X-Mailer: git-send-email 2.18.0
X-Mailing-List: linux-fsdevel@vger.kernel.org

From: Omar Sandoval

Now that we're using an rwsem, we can hold it during the entirety of
read_kcore() and have a common return
path. This is preparation for the next change.

Signed-off-by: Omar Sandoval
---
 fs/proc/kcore.c | 70 ++++++++++++++++++++++++++++---------------------
 1 file changed, 40 insertions(+), 30 deletions(-)

diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
index 33667db6e370..f1ae848c7bcc 100644
--- a/fs/proc/kcore.c
+++ b/fs/proc/kcore.c
@@ -440,19 +440,18 @@ static ssize_t
 read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 {
 	char *buf = file->private_data;
-	ssize_t acc = 0;
 	size_t size, tsz;
 	size_t elf_buflen;
 	int nphdr;
 	unsigned long start;
+	size_t orig_buflen = buflen;
+	int ret = 0;
 
 	down_read(&kclist_lock);
 	size = get_kcore_size(&nphdr, &elf_buflen);
 
-	if (buflen == 0 || *fpos >= size) {
-		up_read(&kclist_lock);
-		return 0;
-	}
+	if (buflen == 0 || *fpos >= size)
+		goto out;
 
 	/* trim buflen to not go beyond EOF */
 	if (buflen > size - *fpos)
@@ -465,28 +464,26 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 		tsz = elf_buflen - *fpos;
 		if (buflen < tsz)
 			tsz = buflen;
-		elf_buf = kzalloc(elf_buflen, GFP_ATOMIC);
+		elf_buf = kzalloc(elf_buflen, GFP_KERNEL);
 		if (!elf_buf) {
-			up_read(&kclist_lock);
-			return -ENOMEM;
+			ret = -ENOMEM;
+			goto out;
 		}
 		elf_kcore_store_hdr(elf_buf, nphdr, elf_buflen);
-		up_read(&kclist_lock);
 		if (copy_to_user(buffer, elf_buf + *fpos, tsz)) {
 			kfree(elf_buf);
-			return -EFAULT;
+			ret = -EFAULT;
+			goto out;
 		}
 		kfree(elf_buf);
 		buflen -= tsz;
 		*fpos += tsz;
 		buffer += tsz;
-		acc += tsz;
 
 		/* leave now if filled buffer already */
 		if (buflen == 0)
-			return acc;
-	} else
-		up_read(&kclist_lock);
+			goto out;
+	}
 
 	/*
 	 * Check to see if our file offset matches with any of
@@ -499,25 +496,29 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 	while (buflen) {
 		struct kcore_list *m;
 
-		down_read(&kclist_lock);
 		list_for_each_entry(m, &kclist_head, list) {
 			if (start >= m->addr && start < (m->addr+m->size))
 				break;
 		}
-		up_read(&kclist_lock);
 
 		if (&m->list == &kclist_head) {
-			if (clear_user(buffer, tsz))
-				return -EFAULT;
+			if (clear_user(buffer, tsz)) {
+				ret = -EFAULT;
+				goto out;
+			}
 		} else if (m->type == KCORE_VMALLOC) {
 			vread(buf, (char *)start, tsz);
 			/* we have to zero-fill user buffer even if no read */
-			if (copy_to_user(buffer, buf, tsz))
-				return -EFAULT;
+			if (copy_to_user(buffer, buf, tsz)) {
+				ret = -EFAULT;
+				goto out;
+			}
 		} else if (m->type == KCORE_USER) {
 			/* User page is handled prior to normal kernel page: */
-			if (copy_to_user(buffer, (char *)start, tsz))
-				return -EFAULT;
+			if (copy_to_user(buffer, (char *)start, tsz)) {
+				ret = -EFAULT;
+				goto out;
+			}
 		} else {
 			if (kern_addr_valid(start)) {
 				/*
@@ -525,26 +526,35 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
 				 * hardened user copy kernel text checks.
 				 */
 				if (probe_kernel_read(buf, (void *) start, tsz)) {
-					if (clear_user(buffer, tsz))
-						return -EFAULT;
+					if (clear_user(buffer, tsz)) {
+						ret = -EFAULT;
+						goto out;
+					}
 				} else {
-					if (copy_to_user(buffer, buf, tsz))
-						return -EFAULT;
+					if (copy_to_user(buffer, buf, tsz)) {
+						ret = -EFAULT;
+						goto out;
+					}
 				}
 			} else {
-				if (clear_user(buffer, tsz))
-					return -EFAULT;
+				if (clear_user(buffer, tsz)) {
+					ret = -EFAULT;
+					goto out;
+				}
 			}
 		}
 		buflen -= tsz;
 		*fpos += tsz;
 		buffer += tsz;
-		acc += tsz;
 		start += tsz;
 		tsz = (buflen > PAGE_SIZE ? PAGE_SIZE : buflen);
 	}
 
-	return acc;
+out:
+	up_read(&kclist_lock);
+	if (ret)
+		return ret;
+	return orig_buflen - buflen;
 }