From patchwork Mon Apr 4 05:06:07 2016
X-Patchwork-Submitter: Shaun Tancheff
X-Patchwork-Id: 8737361
From: Shaun Tancheff
To: linux-ide@vger.kernel.org, dm-devel@redhat.com,
	linux-block@vger.kernel.org, linux-scsi@vger.kernel.org
Cc: Jens Axboe, Shaun Tancheff, Shaun Tancheff
Subject: [PATCH 03/12] BUG: Losing bits on request.cmd_flags
Date: Mon, 4 Apr 2016 12:06:07 +0700
Message-Id: <1459746376-27983-4-git-send-email-shaun@tancheff.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1459746376-27983-1-git-send-email-shaun@tancheff.com>
References: <1459746376-27983-1-git-send-email-shaun@tancheff.com>
X-Mailing-List: linux-scsi@vger.kernel.org
In a few places a temporary value smaller than cmd_flags is used to test for
bits and/or to build up a new cmd_flags, so the upper bits can be silently
lost. Change these temporaries to explicit u64 values where appropriate.

Signed-off-by: Shaun Tancheff
---
 block/blk-core.c         | 17 ++++++++++-------
 block/blk-merge.c        |  2 +-
 block/blk-mq.c           |  2 +-
 block/cfq-iosched.c      |  2 +-
 block/elevator.c         |  4 ++--
 include/linux/elevator.h |  4 ++--
 6 files changed, 17 insertions(+), 14 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index e4b1269..3413604 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -959,7 +959,7 @@ static void __freed_request(struct request_list *rl, int sync)
  * A request has just been released. Account for it, update the full and
  * congestion status, wake up any waiters. Called under q->queue_lock.
  */
-static void freed_request(struct request_list *rl, unsigned int flags)
+static void freed_request(struct request_list *rl, u64 flags)
 {
 	struct request_queue *q = rl->q;
 	int sync = rw_is_sync(flags);
@@ -1054,7 +1054,7 @@ static struct io_context *rq_ioc(struct bio *bio)
 /**
  * __get_request - get a free request
  * @rl: request list to allocate from
- * @rw_flags: RW and SYNC flags
+ * @rw: RW and SYNC flags
  * @bio: bio to allocate request for (can be %NULL)
  * @gfp_mask: allocation mask
  *
@@ -1065,7 +1065,7 @@ static struct io_context *rq_ioc(struct bio *bio)
  * Returns ERR_PTR on failure, with @q->queue_lock held.
  * Returns request pointer on success, with @q->queue_lock *not held*.
  */
-static struct request *__get_request(struct request_list *rl, int rw_flags,
+static struct request *__get_request(struct request_list *rl, unsigned long rw,
 				     struct bio *bio, gfp_t gfp_mask)
 {
 	struct request_queue *q = rl->q;
@@ -1073,6 +1073,7 @@ static struct request *__get_request(struct request_list *rl, int rw_flags,
 	struct elevator_type *et = q->elevator->type;
 	struct io_context *ioc = rq_ioc(bio);
 	struct io_cq *icq = NULL;
+	u64 rw_flags = rw;
 	const bool is_sync = rw_is_sync(rw_flags) != 0;
 	int may_queue;

@@ -1237,7 +1238,8 @@ rq_starved:
  * Returns ERR_PTR on failure, with @q->queue_lock held.
  * Returns request pointer on success, with @q->queue_lock *not held*.
  */
-static struct request *get_request(struct request_queue *q, int rw_flags,
+static struct request *get_request(struct request_queue *q,
+				   unsigned long rw_flags,
 				   struct bio *bio, gfp_t gfp_mask)
 {
 	const bool is_sync = rw_is_sync(rw_flags) != 0;
@@ -1490,7 +1492,7 @@ void __blk_put_request(struct request_queue *q, struct request *req)
 	 * it didn't come out of our reserved rq pools
 	 */
 	if (req->cmd_flags & REQ_ALLOCED) {
-		unsigned int flags = req->cmd_flags;
+		u64 flags = req->cmd_flags;
 		struct request_list *rl = blk_rq_rl(req);

 		BUG_ON(!list_empty(&req->queuelist));
@@ -1711,7 +1713,8 @@ static blk_qc_t blk_queue_bio(struct request_queue *q, struct bio *bio)
 {
 	const bool sync = !!(bio->bi_rw & REQ_SYNC);
 	struct blk_plug *plug;
-	int el_ret, rw_flags, where = ELEVATOR_INSERT_SORT;
+	u64 rw_flags;
+	int el_ret, where = ELEVATOR_INSERT_SORT;
 	struct request *req;
 	unsigned int request_count = 0;

@@ -2248,7 +2251,7 @@ EXPORT_SYMBOL_GPL(blk_insert_cloned_request);
  */
 unsigned int blk_rq_err_bytes(const struct request *rq)
 {
-	unsigned int ff = rq->cmd_flags & REQ_FAILFAST_MASK;
+	u64 ff = rq->cmd_flags & REQ_FAILFAST_MASK;
 	unsigned int bytes = 0;
 	struct bio *bio;

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 2613531..fec37e1 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -604,7 +604,7 @@ static int ll_merge_requests_fn(struct request_queue *q, struct request *req,
  */
 void blk_rq_set_mixed_merge(struct request *rq)
 {
-	unsigned int ff = rq->cmd_flags & REQ_FAILFAST_MASK;
+	u64 ff = rq->cmd_flags & REQ_FAILFAST_MASK;
 	struct bio *bio;

 	if (rq->cmd_flags & REQ_MIXED_MERGE)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 56c0a72..8b41ad6 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -159,7 +159,7 @@ bool blk_mq_can_queue(struct blk_mq_hw_ctx *hctx)
 EXPORT_SYMBOL(blk_mq_can_queue);

 static void blk_mq_rq_ctx_init(struct request_queue *q, struct blk_mq_ctx *ctx,
-			       struct request *rq, unsigned int rw_flags)
+			       struct request *rq, u64 rw_flags)
 {
 	if (blk_queue_io_stat(q))
 		rw_flags |= REQ_IO_STAT;
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index 1f9093e..e11a220 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -4262,7 +4262,7 @@ static inline int __cfq_may_queue(struct cfq_queue *cfqq)
 	return ELV_MQUEUE_MAY;
 }

-static int cfq_may_queue(struct request_queue *q, int rw)
+static int cfq_may_queue(struct request_queue *q, u64 rw)
 {
 	struct cfq_data *cfqd = q->elevator->elevator_data;
 	struct task_struct *tsk = current;
diff --git a/block/elevator.c b/block/elevator.c
index c3555c9..7c0a59c 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -511,7 +511,7 @@ void elv_merge_requests(struct request_queue *q, struct request *rq,
 			     struct request *next)
 {
 	struct elevator_queue *e = q->elevator;
-	const int next_sorted = next->cmd_flags & REQ_SORTED;
+	const int next_sorted = !!(next->cmd_flags & REQ_SORTED);

 	if (next_sorted && e->type->ops.elevator_merge_req_fn)
 		e->type->ops.elevator_merge_req_fn(q, rq, next);
@@ -717,7 +717,7 @@ void elv_put_request(struct request_queue *q, struct request *rq)
 		e->type->ops.elevator_put_req_fn(rq);
 }

-int elv_may_queue(struct request_queue *q, int rw)
+int elv_may_queue(struct request_queue *q, u64 rw)
 {
 	struct elevator_queue *e = q->elevator;

diff --git a/include/linux/elevator.h b/include/linux/elevator.h
index 638b324..a06cca4 100644
--- a/include/linux/elevator.h
+++ b/include/linux/elevator.h
@@ -26,7 +26,7 @@ typedef int (elevator_dispatch_fn) (struct request_queue *, int);
 typedef void (elevator_add_req_fn) (struct request_queue *, struct request *);
 typedef struct request *(elevator_request_list_fn) (struct request_queue *, struct request *);
 typedef void (elevator_completed_req_fn) (struct request_queue *, struct request *);
-typedef int (elevator_may_queue_fn) (struct request_queue *, int);
+typedef int (elevator_may_queue_fn) (struct request_queue *, u64);

 typedef void (elevator_init_icq_fn) (struct io_cq *);
 typedef void (elevator_exit_icq_fn) (struct io_cq *);
@@ -134,7 +134,7 @@ extern struct request *elv_former_request(struct request_queue *, struct request
 extern struct request *elv_latter_request(struct request_queue *, struct request *);
 extern int elv_register_queue(struct request_queue *q);
 extern void elv_unregister_queue(struct request_queue *q);
-extern int elv_may_queue(struct request_queue *, int);
+extern int elv_may_queue(struct request_queue *, u64);
 extern void elv_completed_request(struct request_queue *, struct request *);
 extern int elv_set_request(struct request_queue *q, struct request *rq,
 			   struct bio *bio, gfp_t gfp_mask);
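Not part of the patch, but for reviewers: a minimal stand-alone sketch of the
truncation the commit message describes. The flag names and the bit position
used here (REQ_EXAMPLE_HIGH at bit 32) are hypothetical, chosen only to show
how assigning a u64 cmd_flags value to a temporary narrower than u64 silently
drops any flag bits above bit 31; only the narrowing assignment mirrors the
kernel code being changed.

/* sketch.c - losing high cmd_flags bits in a too-narrow temporary */
#include <stdio.h>
#include <stdint.h>

typedef uint64_t u64;

/* Hypothetical flags; REQ_EXAMPLE_HIGH sits above bit 31. */
#define REQ_EXAMPLE_HIGH ((u64)1 << 32)
#define REQ_EXAMPLE_SYNC ((u64)1 << 4)

int main(void)
{
	u64 cmd_flags = REQ_EXAMPLE_HIGH | REQ_EXAMPLE_SYNC;

	/* Before the patch: a temporary smaller than cmd_flags. */
	unsigned int narrow = cmd_flags;	/* bit 32 is silently dropped */

	/* After the patch: an explicit u64 temporary keeps every bit. */
	u64 wide = cmd_flags;

	printf("narrow sees high bit: %d\n", (narrow & REQ_EXAMPLE_HIGH) != 0);	/* 0 */
	printf("wide sees high bit:   %d\n", (wide & REQ_EXAMPLE_HIGH) != 0);	/* 1 */
	return 0;
}

This is why the series widens the temporaries rather than masking: tests such
as rq->cmd_flags & REQ_FAILFAST_MASK must round-trip through the temporary
without discarding any flag bits that sit above bit 31.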