From patchwork Thu Feb 7 12:22:26 2013
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 2110531
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org
Cc: Wanlong Gao, asias@redhat.com, Rusty Russell, mst@redhat.com,
	kvm@vger.kernel.org, virtualization@lists.linux-foundation.org
Subject: [RFC PATCH 2/8] virtio-blk: reorganize virtblk_add_req
Date: Thu, 7 Feb 2013 13:22:26 +0100
Message-Id: <1360239752-2470-3-git-send-email-pbonzini@redhat.com>
X-Mailer: git-send-email 1.8.1
In-Reply-To: <1360239752-2470-1-git-send-email-pbonzini@redhat.com>
References: <1360239752-2470-1-git-send-email-pbonzini@redhat.com>
X-Mailing-List: kvm@vger.kernel.org

Right now, both virtblk_add_req and virtblk_add_buf_wait call
virtqueue_add_buf.  To prepare for the next patches, abstract the call
to virtqueue_add_buf into a new function __virtblk_add_req, and include
the waiting logic directly in virtblk_add_req.
Signed-off-by: Paolo Bonzini
---
 drivers/block/virtio_blk.c | 55 ++++++++++++++++----------------------------
 1 files changed, 20 insertions(+), 35 deletions(-)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 8ad21a2..fd8a689 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -100,50 +100,39 @@ static inline struct virtblk_req *virtblk_alloc_req(struct virtio_blk *vblk,
 	return vbr;
 }
 
-static void virtblk_add_buf_wait(struct virtio_blk *vblk,
-				 struct virtblk_req *vbr,
-				 unsigned long out,
-				 unsigned long in)
+static inline int __virtblk_add_req(struct virtqueue *vq,
+				    struct virtblk_req *vbr,
+				    unsigned long out,
+				    unsigned long in)
 {
+	return virtqueue_add_buf(vq, vbr->sg, out, in, vbr, GFP_ATOMIC);
+}
+
+static void virtblk_add_req(struct virtblk_req *vbr,
+			    unsigned int out, unsigned int in)
+{
+	struct virtio_blk *vblk = vbr->vblk;
 	DEFINE_WAIT(wait);
+	int ret;
 
-	for (;;) {
+	spin_lock_irq(vblk->disk->queue->queue_lock);
+	while (unlikely((ret = __virtblk_add_req(vblk->vq, vbr,
+						 out, in)) < 0)) {
 		prepare_to_wait_exclusive(&vblk->queue_wait, &wait,
 					  TASK_UNINTERRUPTIBLE);
 
+		spin_unlock_irq(vblk->disk->queue->queue_lock);
+		io_schedule();
 		spin_lock_irq(vblk->disk->queue->queue_lock);
-		if (virtqueue_add_buf(vblk->vq, vbr->sg, out, in, vbr,
-				      GFP_ATOMIC) < 0) {
-			spin_unlock_irq(vblk->disk->queue->queue_lock);
-			io_schedule();
-		} else {
-			virtqueue_kick(vblk->vq);
-			spin_unlock_irq(vblk->disk->queue->queue_lock);
-			break;
-		}
 
+		finish_wait(&vblk->queue_wait, &wait);
 	}
 
-	finish_wait(&vblk->queue_wait, &wait);
-}
-
-static inline void virtblk_add_req(struct virtblk_req *vbr,
-				   unsigned int out, unsigned int in)
-{
-	struct virtio_blk *vblk = vbr->vblk;
-
-	spin_lock_irq(vblk->disk->queue->queue_lock);
-	if (unlikely(virtqueue_add_buf(vblk->vq, vbr->sg, out, in, vbr,
-				       GFP_ATOMIC) < 0)) {
-		spin_unlock_irq(vblk->disk->queue->queue_lock);
-		virtblk_add_buf_wait(vblk, vbr, out, in);
-		return;
-	}
 	virtqueue_kick(vblk->vq);
 	spin_unlock_irq(vblk->disk->queue->queue_lock);
 }
 
-static int virtblk_bio_send_flush(struct virtblk_req *vbr)
+static void virtblk_bio_send_flush(struct virtblk_req *vbr)
 {
 	unsigned int out = 0, in = 0;
 
@@ -155,11 +144,9 @@ static int virtblk_bio_send_flush(struct virtblk_req *vbr)
 	sg_set_buf(&vbr->sg[out + in++], &vbr->status, sizeof(vbr->status));
 
 	virtblk_add_req(vbr, out, in);
-
-	return 0;
 }
 
-static int virtblk_bio_send_data(struct virtblk_req *vbr)
+static void virtblk_bio_send_data(struct virtblk_req *vbr)
 {
 	struct virtio_blk *vblk = vbr->vblk;
 	unsigned int num, out = 0, in = 0;
@@ -188,8 +175,6 @@ static int virtblk_bio_send_data(struct virtblk_req *vbr)
 	}
 
 	virtblk_add_req(vbr, out, in);
-
-	return 0;
 }
 
 static void virtblk_bio_send_data_work(struct work_struct *work)
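
For reference, this is roughly how the add-request path reads once the patch
is applied, assembled from the hunks above (context and whitespace are
reconstructed, so treat it as a sketch rather than the exact file contents):
__virtblk_add_req is the thin wrapper around virtqueue_add_buf, and
virtblk_add_req retries it under the queue lock, sleeping on
vblk->queue_wait whenever the virtqueue is full.

/* Thin wrapper: hand the request's scatterlist to the virtqueue. */
static inline int __virtblk_add_req(struct virtqueue *vq,
				    struct virtblk_req *vbr,
				    unsigned long out,
				    unsigned long in)
{
	return virtqueue_add_buf(vq, vbr->sg, out, in, vbr, GFP_ATOMIC);
}

static void virtblk_add_req(struct virtblk_req *vbr,
			    unsigned int out, unsigned int in)
{
	struct virtio_blk *vblk = vbr->vblk;
	DEFINE_WAIT(wait);
	int ret;

	spin_lock_irq(vblk->disk->queue->queue_lock);
	while (unlikely((ret = __virtblk_add_req(vblk->vq, vbr,
						 out, in)) < 0)) {
		/* Virtqueue full: drop the lock and sleep until completions
		 * free descriptors and wake queue_wait, then retry. */
		prepare_to_wait_exclusive(&vblk->queue_wait, &wait,
					  TASK_UNINTERRUPTIBLE);

		spin_unlock_irq(vblk->disk->queue->queue_lock);
		io_schedule();
		spin_lock_irq(vblk->disk->queue->queue_lock);

		finish_wait(&vblk->queue_wait, &wait);
	}

	/* Request queued: notify the host and release the lock. */
	virtqueue_kick(vblk->vq);
	spin_unlock_irq(vblk->disk->queue->queue_lock);
}

Compared with the old virtblk_add_req/virtblk_add_buf_wait split, the fast
path and the slow path now share the single __virtblk_add_req helper, which
the following patches build on.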