From patchwork Sat Oct 25 19:20:40 2014
X-Patchwork-Submitter: Jens Axboe
X-Patchwork-Id: 5152131
Message-ID: <544BF808.2090800@kernel.dk>
Date: Sat, 25 Oct 2014 13:20:40 -0600
From: Jens Axboe
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.2.0
To: Mark Kirkwood, Mark Nelson, Mark Nelson, fio@vger.kernel.org
CC: "d.gollub@telekom.de >> Daniel Gollub", "xan.peng", "ceph-devel@vger.kernel.org"
Subject: Re: fio rbd completions (Was: fio rbd hang for block sizes > 1M)
In-Reply-To: <544B2C19.7070009@catalyst.net.nz>
References: <5449BBB3.7090109@catalyst.net.nz> <5449E50E.7000808@kernel.dk>
 <5449EEF1.1060407@catalyst.net.nz> <544A51C7.40803@gmail.com>
 <544A5DA6.2010709@gmail.com> <544AD67D.4030603@catalyst.net.nz>
 <544AEAE7.6080603@redhat.com> <544AF0D2.1050405@catalyst.net.nz>
 <544B0C7F.4080109@catalyst.net.nz> <544B1D50.4010101@kernel.dk>
 <544B2C19.7070009@catalyst.net.nz>
X-Mailing-List: ceph-devel@vger.kernel.org

On 10/24/2014 10:50 PM, Mark Kirkwood wrote:
> On 25/10/14 16:47, Jens Axboe wrote:
>>
>> Since you're running rbd tests... Mind giving this patch a go? I don't
>> have an easy way to test it myself. It has nothing to do with this
>> issue, it's just a potentially faster way to do the rbd completions.
>>
>
> Sure - but note I'm testing this on my i7 workstation (4x osd's running
> on 2x Crucial M550) so not exactly server grade :-)
>
> With that in mind, I'm seeing slightly *slower* performance with the
> patch applied: e.g: for 128k blocks - 2 runs, 1 uncached and the next
> cached.

Yeah, that doesn't look good. Mind trying this one out? I wonder if we
doubly wait on the completions - or perhaps rbd_aio_wait_for_complete()
isn't working correctly. If you try this one, we should know more... The
goal is to get rid of that usleep() in getevents.
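To illustrate the idea (a sketch only, not the patch below): instead of polling each io_u's io_complete flag and sleeping 100 usec between passes, the reaper can fall back to blocking on librbd's own completion object. Something along these lines, using the public librbd calls; the helper name reap_blocking and the comps/seen arrays are made up for this example:

#include <rbd/librbd.h>

/*
 * Sketch: reap at least 'min' completions by blocking on each
 * not-yet-seen completion, instead of usleep()-polling a flag that the
 * completion callback sets.  'comps' are completions created with
 * rbd_aio_create_completion() at submit time, 'seen' marks the ones
 * already handed back.
 */
static int reap_blocking(rbd_completion_t *comps, int *seen, int nr, int min)
{
	int i, events = 0;

	for (i = 0; i < nr && events < min; i++) {
		if (seen[i])
			continue;

		/* returns once librbd has run the completion callback */
		rbd_aio_wait_for_complete(comps[i]);

		if (rbd_aio_get_return_value(comps[i]) < 0)
			return -1;	/* I/O error */

		seen[i] = 1;
		events++;
	}

	return events;
}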
diff --git a/engines/rbd.c b/engines/rbd.c
index 6fe87b8d010c..2353b1f11caf 100644
--- a/engines/rbd.c
+++ b/engines/rbd.c
@@ -11,7 +11,9 @@
 
 struct fio_rbd_iou {
 	struct io_u *io_u;
+	rbd_completion_t completion;
 	int io_complete;
+	int io_seen;
 };
 
 struct rbd_data {
@@ -221,34 +223,69 @@ static struct io_u *fio_rbd_event(struct thread_data *td, int event)
 	return rbd_data->aio_events[event];
 }
 
-static int fio_rbd_getevents(struct thread_data *td, unsigned int min,
-			     unsigned int max, const struct timespec *t)
+static inline int fri_check_complete(struct rbd_data *rbd_data,
+				     struct io_u *io_u,
+				     unsigned int *events)
+{
+	struct fio_rbd_iou *fri = io_u->engine_data;
+
+	if (fri->io_complete) {
+		fri->io_complete = 0;
+		fri->io_seen = 1;
+		rbd_data->aio_events[*events] = io_u;
+		(*events)++;
+		return 1;
+	}
+
+	return 0;
+}
+
+static int rbd_iter_events(struct thread_data *td, unsigned int *events,
+			   unsigned int min_evts, int wait)
 {
 	struct rbd_data *rbd_data = td->io_ops->data;
-	unsigned int events = 0;
+	unsigned int this_events = 0;
 	struct io_u *io_u;
 	int i;
-	struct fio_rbd_iou *fov;
 
-	do {
-		io_u_qiter(&td->io_u_all, io_u, i) {
-			if (!(io_u->flags & IO_U_F_FLIGHT))
-				continue;
+	io_u_qiter(&td->io_u_all, io_u, i) {
+		struct fio_rbd_iou *fri = io_u->engine_data;
 
-			fov = (struct fio_rbd_iou *)io_u->engine_data;
+		if (!(io_u->flags & IO_U_F_FLIGHT))
+			continue;
+		if (fri->io_seen)
+			continue;
 
-			if (fov->io_complete) {
-				fov->io_complete = 0;
-				rbd_data->aio_events[events] = io_u;
-				events++;
-			}
+		if (fri_check_complete(rbd_data, io_u, events))
+			this_events++;
+		else if (wait) {
+			rbd_aio_wait_for_complete(fri->completion);
+			if (fri_check_complete(rbd_data, io_u, events))
+				this_events++;
 		}
 
-		if (events < min)
-			usleep(100);
-		else
+		if (*events >= min_evts)
+			break;
+	}
+
+	return this_events;
+}
+
+static int fio_rbd_getevents(struct thread_data *td, unsigned int min,
+			     unsigned int max, const struct timespec *t)
+{
+	unsigned int this_events, events = 0;
+	int wait = 0;
+
+	do {
+		this_events = rbd_iter_events(td, &events, min, wait);
+
+		if (events >= min)
 			break;
+		if (this_events)
+			continue;
 
+		wait = 1;
 	} while (1);
 
 	return events;
@@ -258,7 +295,7 @@ static int fio_rbd_queue(struct thread_data *td, struct io_u *io_u)
 {
 	int r = -1;
 	struct rbd_data *rbd_data = td->io_ops->data;
-	rbd_completion_t comp;
+	struct fio_rbd_iou *fri = io_u->engine_data;
 
 	fio_ro_check(td, io_u);
 
@@ -266,7 +303,7 @@ static int fio_rbd_queue(struct thread_data *td, struct io_u *io_u)
 		r = rbd_aio_create_completion(io_u,
 					      (rbd_callback_t)
 					      _fio_rbd_finish_write_aiocb,
-					      &comp);
+					      &fri->completion);
 		if (r < 0) {
 			log_err
 			    ("rbd_aio_create_completion for DDIR_WRITE failed.\n");
@@ -274,7 +311,8 @@ static int fio_rbd_queue(struct thread_data *td, struct io_u *io_u)
 		}
 
 		r = rbd_aio_write(rbd_data->image, io_u->offset,
-				  io_u->xfer_buflen, io_u->xfer_buf, comp);
+				  io_u->xfer_buflen, io_u->xfer_buf,
+				  fri->completion);
 		if (r < 0) {
 			log_err("rbd_aio_write failed.\n");
 			goto failed;
@@ -284,7 +322,7 @@ static int fio_rbd_queue(struct thread_data *td, struct io_u *io_u)
 		r = rbd_aio_create_completion(io_u,
 					      (rbd_callback_t)
 					      _fio_rbd_finish_read_aiocb,
-					      &comp);
+					      &fri->completion);
 		if (r < 0) {
 			log_err
 			    ("rbd_aio_create_completion for DDIR_READ failed.\n");
@@ -292,7 +330,8 @@ static int fio_rbd_queue(struct thread_data *td, struct io_u *io_u)
 		}
 
 		r = rbd_aio_read(rbd_data->image, io_u->offset,
-				 io_u->xfer_buflen, io_u->xfer_buf, comp);
+				 io_u->xfer_buflen, io_u->xfer_buf,
+				 fri->completion);
 		if (r < 0) {
 			log_err("rbd_aio_read failed.\n");
 			goto failed;
@@ -303,14 +342,14 @@ static int fio_rbd_queue(struct thread_data *td, struct io_u *io_u)
 		r = rbd_aio_create_completion(io_u,
 					      (rbd_callback_t)
 					      _fio_rbd_finish_sync_aiocb,
-					      &comp);
+					      &fri->completion);
 		if (r < 0) {
 			log_err
 			    ("rbd_aio_create_completion for DDIR_SYNC failed.\n");
 			goto failed;
 		}
 
-		r = rbd_aio_flush(rbd_data->image, comp);
+		r = rbd_aio_flush(rbd_data->image, fri->completion);
 		if (r < 0) {
 			log_err("rbd_flush failed.\n");
 			goto failed;
@@ -439,22 +478,21 @@ static int fio_rbd_invalidate(struct thread_data *td, struct fio_file *f)
 
 static void fio_rbd_io_u_free(struct thread_data *td, struct io_u *io_u)
 {
-	struct fio_rbd_iou *o = io_u->engine_data;
+	struct fio_rbd_iou *fri = io_u->engine_data;
 
-	if (o) {
+	if (fri) {
 		io_u->engine_data = NULL;
-		free(o);
+		free(fri);
 	}
 }
 
 static int fio_rbd_io_u_init(struct thread_data *td, struct io_u *io_u)
 {
-	struct fio_rbd_iou *o;
+	struct fio_rbd_iou *fri;
 
-	o = malloc(sizeof(*o));
-	o->io_complete = 0;
-	o->io_u = io_u;
-	io_u->engine_data = o;
+	fri = calloc(1, sizeof(*fri));
+	fri->io_u = io_u;
+	io_u->engine_data = fri;
 	return 0;
 }