From patchwork Mon Jul 16 16:51:54 2018
X-Patchwork-Submitter: Jann Horn
X-Patchwork-Id: 10527219
Date: Mon, 16 Jul 2018 18:51:54 +0200
Message-Id: <20180716165154.58794-1-jannh@google.com>
X-Mailer: git-send-email 2.18.0.203.gfac676dfb9-goog
Subject: [PATCH v4] bsg: mitigate read/write abuse, block uaccess in release
From: Jann Horn
To: Christoph Hellwig, Jens Axboe, FUJITA Tomonori,
    linux-block@vger.kernel.org, linux-scsi@vger.kernel.org, jannh@google.com
Cc: linux-kernel@vger.kernel.org, Douglas Gilbert, Al Viro,
    jejb@linux.vnet.ibm.com, martin.petersen@oracle.com,
    kernel-hardening@lists.openwall.com, security@kernel.org, Linus Torvalds

As Al Viro noted in commit 128394eff343 ("sg_write()/bsg_write() is not
fit to be called under KERNEL_DS"), bsg improperly accesses userspace
memory outside the provided buffer, permitting kernel memory corruption
via splice(). But bsg doesn't just do it on ->write(); it also does it
on ->read() and ->release().
As a band-aid, make sure that the ->read() and ->write() handlers cannot
be called in weird contexts (kernel context or credentials different from
the file opener), the same way ib_safe_file_access() does. Also,
completely prevent user memory accesses in ->release() context, and put
a deprecation warning in the read/write handlers.

This is similar to commit 26b5b874aff5 ("scsi: sg: mitigate read/write
abuse"), which deals with similar issues in /dev/sg*.

Fixes: 3d6392cfbd7d ("bsg: support for full generic block layer SG v3")
Cc: 
Signed-off-by: Jann Horn 
---
Resending for bsg as requested by Christoph Hellwig. ("PATCH v4" is a bit
of a misnomer, but probably less confusing than anything else I could
have put in the subject line? Is there a canonical way to deal with patch
series that have been split up?)

changes:
- fix control flow in bsg_transport_complete_rq (v1 had a bug there)
- extract bsg part, since the sg part has already landed separately
  (Christoph Hellwig)
- put deprecation warning in read/write handlers, similar to Linus'
  suggested patch for sg

 block/bsg-lib.c     |  7 +++++--
 block/bsg.c         | 43 ++++++++++++++++++++++++++++++++++---------
 include/linux/bsg.h |  3 ++-
 3 files changed, 41 insertions(+), 12 deletions(-)

diff --git a/block/bsg-lib.c b/block/bsg-lib.c
index 9419def8c017..e21f246526e2 100644
--- a/block/bsg-lib.c
+++ b/block/bsg-lib.c
@@ -53,7 +53,8 @@ static int bsg_transport_fill_hdr(struct request *rq, struct sg_io_v4 *hdr,
 	return 0;
 }
 
-static int bsg_transport_complete_rq(struct request *rq, struct sg_io_v4 *hdr)
+static int bsg_transport_complete_rq(struct request *rq, struct sg_io_v4 *hdr,
+				     bool cleaning_up)
 {
 	struct bsg_job *job = blk_mq_rq_to_pdu(rq);
 	int ret = 0;
@@ -79,7 +80,9 @@ static int bsg_transport_complete_rq(struct request *rq, struct sg_io_v4 *hdr)
 	if (job->reply_len && hdr->response) {
 		int len = min(hdr->max_response_len, job->reply_len);
 
-		if (copy_to_user(uptr64(hdr->response), job->reply, len))
+		if (cleaning_up)
+			ret = -EINVAL;
+		else if (copy_to_user(uptr64(hdr->response), job->reply, len))
 			ret = -EFAULT;
 		else
 			hdr->response_len = len;
diff --git a/block/bsg.c b/block/bsg.c
index 3da540faf673..deedce8c9ec2 100644
--- a/block/bsg.c
+++ b/block/bsg.c
@@ -21,6 +21,7 @@
 #include 
 #include 
 #include 
+#include  /* for bsg_check_file_access() */
 #include 
 #include 
@@ -159,7 +160,8 @@ static int bsg_scsi_fill_hdr(struct request *rq, struct sg_io_v4 *hdr,
 	return 0;
 }
 
-static int bsg_scsi_complete_rq(struct request *rq, struct sg_io_v4 *hdr)
+static int bsg_scsi_complete_rq(struct request *rq, struct sg_io_v4 *hdr,
+				bool cleaning_up)
 {
 	struct scsi_request *sreq = scsi_req(rq);
 	int ret = 0;
@@ -179,7 +181,9 @@ static int bsg_scsi_complete_rq(struct request *rq, struct sg_io_v4 *hdr)
 		int len = min_t(unsigned int, hdr->max_response_len,
 				sreq->sense_len);
 
-		if (copy_to_user(uptr64(hdr->response), sreq->sense, len))
+		if (cleaning_up)
+			ret = -EINVAL;
+		else if (copy_to_user(uptr64(hdr->response), sreq->sense, len))
 			ret = -EFAULT;
 		else
 			hdr->response_len = len;
@@ -381,11 +385,12 @@ static struct bsg_command *bsg_get_done_cmd(struct bsg_device *bd)
 }
 
 static int blk_complete_sgv4_hdr_rq(struct request *rq, struct sg_io_v4 *hdr,
-				    struct bio *bio, struct bio *bidi_bio)
+				    struct bio *bio, struct bio *bidi_bio,
+				    bool cleaning_up)
 {
 	int ret;
 
-	ret = rq->q->bsg_dev.ops->complete_rq(rq, hdr);
+	ret = rq->q->bsg_dev.ops->complete_rq(rq, hdr, cleaning_up);
 
 	if (rq->next_rq) {
 		blk_rq_unmap_user(bidi_bio);
@@ -451,7 +456,7 @@ static int bsg_complete_all_commands(struct bsg_device *bd)
 			break;
 
 		tret = blk_complete_sgv4_hdr_rq(bc->rq, &bc->hdr, bc->bio,
-						bc->bidi_bio);
+						bc->bidi_bio, true);
 		if (!ret)
 			ret = tret;
 
@@ -486,7 +491,7 @@ __bsg_read(char __user *buf, size_t count, struct bsg_device *bd,
 		 * bsg_complete_work() cannot do that for us
 		 */
 		ret = blk_complete_sgv4_hdr_rq(bc->rq, &bc->hdr, bc->bio,
-					       bc->bidi_bio);
+					       bc->bidi_bio, false);
 
 		if (copy_to_user(buf, &bc->hdr, sizeof(bc->hdr)))
 			ret = -EFAULT;
@@ -523,6 +528,15 @@ static inline int err_block_err(int ret)
 	return 0;
 }
 
+static int bsg_check_file_access(struct file *file, const char *caller)
+{
+	if (file->f_cred != current_real_cred())
+		return -EPERM;
+	if (uaccess_kernel())
+		return -EACCES;
+	return 0;
+}
+
 static ssize_t
 bsg_read(struct file *file, char __user *buf, size_t count, loff_t *ppos)
 {
@@ -532,6 +546,13 @@ bsg_read(struct file *file, char __user *buf, size_t count, loff_t *ppos)
 
 	bsg_dbg(bd, "read %zd bytes\n", count);
 
+	pr_err_once("process %d (%s) does direct read on /dev/bsg/*\n",
+		    task_tgid_vnr(current), current->comm);
+
+	ret = bsg_check_file_access(file, __func__);
+	if (ret)
+		return ret;
+
 	bsg_set_block(bd, file);
 
 	bytes_read = 0;
@@ -606,8 +627,12 @@ bsg_write(struct file *file, const char __user *buf, size_t count, loff_t *ppos)
 
 	bsg_dbg(bd, "write %zd bytes\n", count);
 
-	if (unlikely(uaccess_kernel()))
-		return -EINVAL;
+	pr_err_once("process %d (%s) does direct write on /dev/bsg/*\n",
+		    task_tgid_vnr(current), current->comm);
+
+	ret = bsg_check_file_access(file, __func__);
+	if (ret)
+		return ret;
 
 	bsg_set_block(bd, file);
 
@@ -857,7 +882,7 @@ static long bsg_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
 		at_head = (0 == (hdr.flags & BSG_FLAG_Q_AT_TAIL));
 		blk_execute_rq(bd->queue, NULL, rq, at_head);
 
-		ret = blk_complete_sgv4_hdr_rq(rq, &hdr, bio, bidi_bio);
+		ret = blk_complete_sgv4_hdr_rq(rq, &hdr, bio, bidi_bio, false);
 
 		if (copy_to_user(uarg, &hdr, sizeof(hdr)))
 			return -EFAULT;
diff --git a/include/linux/bsg.h b/include/linux/bsg.h
index dac37b6e00ec..c22bc359552a 100644
--- a/include/linux/bsg.h
+++ b/include/linux/bsg.h
@@ -11,7 +11,8 @@ struct bsg_ops {
 	int	(*check_proto)(struct sg_io_v4 *hdr);
 	int	(*fill_hdr)(struct request *rq, struct sg_io_v4 *hdr,
 			    fmode_t mode);
-	int	(*complete_rq)(struct request *rq, struct sg_io_v4 *hdr);
+	int	(*complete_rq)(struct request *rq, struct sg_io_v4 *hdr,
+			       bool cleaning_up);
 	void	(*free_rq)(struct request *rq);
 };