From patchwork Mon Oct 14 08:08:24 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yang Erkun
X-Patchwork-Id: 11187961
From: yangerkun <yangerkun@huawei.com>
Subject: [PATCH] io_uring: consider the overflow of sequence for timeout req
Date: Mon, 14 Oct 2019 16:08:24 +0800
Message-ID: <20191014080824.43260-1-yangerkun@huawei.com>
X-Mailer: git-send-email 2.17.2
X-Mailing-List: linux-block@vger.kernel.org

The sequence for a timeout req may overflow, which leads to a wrong
order in the timeout req list. Two situations have to be considered:

1. ctx->cached_sq_head + count - 1 may overflow;
2. the current cached_sq_head may have wrapped around compared with the
   cached_sq_head recorded when an earlier timeout req was queued.

Fix the wrong logic by recording count in each timeout req and by
widening the compared sequences to type long long so that a wrap of the
32-bit sequence can be compensated.

Signed-off-by: yangerkun <yangerkun@huawei.com>
---
 fs/io_uring.c | 31 +++++++++++++++++++++++++------
 1 file changed, 25 insertions(+), 6 deletions(-)
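For illustration only (this snippet is not part of the patch, and all
names in it are made-up userspace stand-ins): a minimal sketch of the
two overflow situations described above, assuming 32-bit unsigned
sequence counters like ctx->cached_sq_head.

#include <stdio.h>
#include <limits.h>

int main(void)
{
        /*
         * Situation 1: head + count - 1 wraps the 32-bit type, so the
         * stored target sequence looks tiny although it lies in the
         * future.
         */
        unsigned int head = UINT_MAX - 1;       /* stand-in for cached_sq_head */
        unsigned int count = 3;
        unsigned int target = head + count - 1; /* wraps to 0 */

        printf("wrapped target: %u\n", target);                 /* 0 */

        /*
         * A plain unsigned comparison now sorts this request before one
         * whose target is, say, 100, although it really expires later.
         */
        printf("naive compare is wrong: %d\n", target < 100u);  /* 1 */

        /*
         * Widening to long long, as the patch does, preserves the real
         * value and therefore the real order.
         */
        long long wide = (long long)head + count - 1;           /* 4294967296 */
        printf("widened target: %lld\n", wide);
        printf("widened compare is right: %d\n", wide >= 100);  /* 1 */

        /*
         * Situation 2: recovering the recorded head from sequence and
         * count still works in 32-bit arithmetic, because the
         * subtraction wraps back.
         */
        unsigned int recovered = target - count + 1;            /* UINT_MAX - 1 */
        printf("recovered head: %u == %u\n", recovered, head);
        return 0;
}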
diff --git a/fs/io_uring.c b/fs/io_uring.c
index 76fdbe84aff5..9cc96f68b370 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -288,6 +288,7 @@ struct io_poll_iocb {
 struct io_timeout {
         struct file                     *file;
         struct hrtimer                  timer;
+        unsigned                        count;
 };
 
 /*
@@ -1884,7 +1885,7 @@ static enum hrtimer_restart io_timeout_fn(struct hrtimer *timer)
 
 static int io_timeout(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 {
-        unsigned count, req_dist, tail_index;
+        unsigned count;
         struct io_ring_ctx *ctx = req->ctx;
         struct list_head *entry;
         struct timespec64 ts;
@@ -1907,21 +1908,39 @@ static int io_timeout(struct io_kiocb *req, const struct io_uring_sqe *sqe)
                 count = 1;
 
         req->sequence = ctx->cached_sq_head + count - 1;
+        req->timeout.count = count;
         req->flags |= REQ_F_TIMEOUT;
 
         /*
          * Insertion sort, ensuring the first entry in the list is always
          * the one we need first.
          */
-        tail_index = ctx->cached_cq_tail - ctx->rings->sq_dropped;
-        req_dist = req->sequence - tail_index;
         spin_lock_irq(&ctx->completion_lock);
         list_for_each_prev(entry, &ctx->timeout_list) {
                 struct io_kiocb *nxt = list_entry(entry, struct io_kiocb, list);
-                unsigned dist;
+                unsigned nxt_sq_head;
+                long long tmp, tmp_nxt;
 
-                dist = nxt->sequence - tail_index;
-                if (req_dist >= dist)
+                /* A count bigger than the previous one should break directly. */
+                if (count >= nxt->timeout.count)
+                        break;
+
+                /*
+                 * Since cached_sq_head + count - 1 can overflow, use type long
+                 * long to store it.
+                 */
+                tmp = (long long)ctx->cached_sq_head + count - 1;
+                nxt_sq_head = nxt->sequence - nxt->timeout.count + 1;
+                tmp_nxt = (long long)nxt_sq_head + nxt->timeout.count - 1;
+
+                /*
+                 * cached_sq_head may overflow, but it will never overflow twice
+                 * while some timeout req is still valid.
+                 */
+                if (ctx->cached_sq_head < nxt_sq_head)
+                        tmp_nxt += UINT_MAX;
+
+                if (tmp >= tmp_nxt)
                         break;
         }
         list_add(&req->list, entry);
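As a footnote, the insertion-sort comparison above can be transcribed
into a standalone userspace function for testing. This is only a sketch
that mirrors the patch; struct toy_req, sorts_at_or_after() and their
fields are hypothetical names, not kernel API.

#include <stdio.h>
#include <limits.h>

/* Hypothetical stand-in for the sequence/count pair a timeout req records. */
struct toy_req {
        unsigned int sequence;  /* head + count - 1 at submit time */
        unsigned int count;
};

/*
 * Mirrors the loop body above: nonzero means a new request submitted at
 * 'sq_head' with offset 'count' sorts at or after 'nxt', so the
 * backwards scan can stop and insert here.
 */
static int sorts_at_or_after(unsigned int sq_head, unsigned int count,
                             const struct toy_req *nxt)
{
        unsigned int nxt_sq_head;
        long long tmp, tmp_nxt;

        if (count >= nxt->count)
                return 1;

        tmp = (long long)sq_head + count - 1;
        nxt_sq_head = nxt->sequence - nxt->count + 1;
        tmp_nxt = (long long)nxt_sq_head + nxt->count - 1;

        /* compensate a single wrap of the 32-bit head, as the patch does */
        if (sq_head < nxt_sq_head)
                tmp_nxt += UINT_MAX;

        return tmp >= tmp_nxt;
}

int main(void)
{
        struct toy_req nxt = { .sequence = 9u, .count = 5u };

        /* same epoch, later target: 10 + 2 - 1 = 11 >= 9 */
        printf("%d\n", sorts_at_or_after(10u, 2u, &nxt));      /* 1 */
        return 0;
}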