From patchwork Mon Jul 4 13:03:40 2016
X-Patchwork-Submitter: fangwei
X-Patchwork-Id: 9212575
From: Wei Fang
To: 
CC: , Wei Fang
Subject: [PATCH] block: fix a type conversion error in __get_request()
Date: Mon, 4 Jul 2016 21:03:40 +0800
Message-ID: 
<1467637420-4967-1-git-send-email-fangwei1@huawei.com>
X-Mailer: git-send-email 1.7.1
X-Mailing-List: linux-block@vger.kernel.org

Request-only flags in enum rq_flag_bits may theoretically occupy bit
positions above 31 once new flags are added in the future, so we can't
accumulate REQ_IO_STAT in op_flags, which is a plain int, in
__get_request(). Concretely, if REQ_IO_STAT ever moved to bit 31 it
would set the most-significant bit of op_flags, and ORing that negative
int into ->cmd_flags would sign-extend and set the top 32 bits of
->cmd_flags to 1.

Fix it by using a u64-typed local to accumulate the flags.

Signed-off-by: Wei Fang 
---
 block/blk-core.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index c94c7ad..3860b7d 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1077,6 +1077,7 @@ static struct request *__get_request(struct request_list *rl, int op,
 	struct io_cq *icq = NULL;
 	const bool is_sync = rw_is_sync(op, op_flags) != 0;
 	int may_queue;
+	u64 cmd_flags = (u64)(unsigned int)op_flags;
 
 	if (unlikely(blk_queue_dying(q)))
 		return ERR_PTR(-ENODEV);
@@ -1125,7 +1126,7 @@ static struct request *__get_request(struct request_list *rl, int op,
 
 	/*
 	 * Decide whether the new request will be managed by elevator. If
-	 * so, mark @op_flags and increment elvpriv. Non-zero elvpriv will
+	 * so, mark @cmd_flags and increment elvpriv. Non-zero elvpriv will
 	 * prevent the current elevator from being destroyed until the new
 	 * request is freed. This guarantees icq's won't be destroyed and
 	 * makes creating new ones safe.
@@ -1134,14 +1135,14 @@ static struct request *__get_request(struct request_list *rl, int op,
 	 * it will be created after releasing queue_lock.
 	 */
 	if (blk_rq_should_init_elevator(bio) && !blk_queue_bypass(q)) {
-		op_flags |= REQ_ELVPRIV;
+		cmd_flags |= REQ_ELVPRIV;
 		q->nr_rqs_elvpriv++;
 		if (et->icq_cache && ioc)
 			icq = ioc_lookup_icq(ioc, q);
 	}
 
 	if (blk_queue_io_stat(q))
-		op_flags |= REQ_IO_STAT;
+		cmd_flags |= REQ_IO_STAT;
 	spin_unlock_irq(q->queue_lock);
 
 	/* allocate and init request */
@@ -1151,10 +1152,10 @@ static struct request *__get_request(struct request_list *rl, int op,
 	blk_rq_init(q, rq);
 	blk_rq_set_rl(rq, rl);
-	req_set_op_attrs(rq, op, op_flags | REQ_ALLOCED);
+	req_set_op_attrs(rq, op, cmd_flags | REQ_ALLOCED);
 
 	/* init elvpriv */
-	if (op_flags & REQ_ELVPRIV) {
+	if (cmd_flags & REQ_ELVPRIV) {
 		if (unlikely(et->icq_cache && !icq)) {
 			if (ioc)
 				icq = ioc_create_icq(ioc, q, gfp_mask);