From patchwork Mon Aug 13 20:18:39 2012
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 1316081
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Greg KH, torvalds@linux-foundation.org, akpm@linux-foundation.org,
    alan@lxorguk.ukuu.org.uk, Asias He, "Michael S. Tsirkin",
    Rusty Russell, virtualization@lists.linux-foundation.org,
    kvm@vger.kernel.org
Subject: [ 03/82] virtio-blk: Use block layer provided spinlock
Date: Mon, 13 Aug 2012 13:18:39 -0700
Message-Id: <20120813201746.757026394@linuxfoundation.org>
In-Reply-To: <20120813201746.448504360@linuxfoundation.org>
References: <20120813201746.448504360@linuxfoundation.org>
User-Agent: quilt/0.60-20.5

From: Greg KH

3.5-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Asias He

commit 2c95a3290919541b846bee3e0fbaa75860929f53 upstream.

The block layer allocates a spinlock for the queue if the driver does not
provide one in blk_init_queue().

The reason to use the block layer's internal spinlock is that
blk_cleanup_queue() switches to the internal spinlock in its cleanup path:

	if (q->queue_lock != &q->__queue_lock)
		q->queue_lock = &q->__queue_lock;

However, a process in D state might have taken the driver-provided
spinlock; when it wakes up, it releases the block-layer-provided spinlock
instead, producing the unlock balance report below:
=====================================
[ BUG: bad unlock balance detected! ]
3.4.0-rc7+ #238 Not tainted
-------------------------------------
fio/3587 is trying to release lock (&(&q->__queue_lock)->rlock) at:
[] blk_queue_bio+0x2a2/0x380
but there are no more locks to release!

other info that might help us debug this:
1 lock held by fio/3587:
 #0:  (&(&vblk->lock)->rlock){......}, at: [] get_request_wait+0x19a/0x250

Other drivers use the block layer provided spinlock as well, e.g. SCSI.
Switching to the block layer provided spinlock saves a bit of memory and
does not increase lock contention.  Performance testing shows no real
difference before and after this patch.

Changes in v2: Improve commit log as Michael suggested.

Signed-off-by: Asias He
Cc: virtualization@lists.linux-foundation.org
Cc: kvm@vger.kernel.org
Acked-by: Michael S. Tsirkin
Signed-off-by: Rusty Russell
Signed-off-by: Greg Kroah-Hartman

---
 drivers/block/virtio_blk.c |    9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -21,8 +21,6 @@ struct workqueue_struct *virtblk_wq;
 
 struct virtio_blk
 {
-	spinlock_t lock;
-
 	struct virtio_device *vdev;
 	struct virtqueue *vq;
 
@@ -65,7 +63,7 @@ static void blk_done(struct virtqueue *v
 	unsigned int len;
 	unsigned long flags;
 
-	spin_lock_irqsave(&vblk->lock, flags);
+	spin_lock_irqsave(vblk->disk->queue->queue_lock, flags);
 	while ((vbr = virtqueue_get_buf(vblk->vq, &len)) != NULL) {
 		int error;
 
@@ -99,7 +97,7 @@ static void blk_done(struct virtqueue *v
 	}
 	/* In case queue is stopped waiting for more buffers. */
 	blk_start_queue(vblk->disk->queue);
-	spin_unlock_irqrestore(&vblk->lock, flags);
+	spin_unlock_irqrestore(vblk->disk->queue->queue_lock, flags);
 }
 
 static bool do_req(struct request_queue *q, struct virtio_blk *vblk,
@@ -431,7 +429,6 @@ static int __devinit virtblk_probe(struc
 		goto out_free_index;
 	}
 
-	spin_lock_init(&vblk->lock);
 	vblk->vdev = vdev;
 	vblk->sg_elems = sg_elems;
 	sg_init_table(vblk->sg, vblk->sg_elems);
@@ -456,7 +453,7 @@ static int __devinit virtblk_probe(struc
 		goto out_mempool;
 	}
 
-	q = vblk->disk->queue = blk_init_queue(do_virtblk_request, &vblk->lock);
+	q = vblk->disk->queue = blk_init_queue(do_virtblk_request, NULL);
 	if (!q) {
 		err = -ENOMEM;
 		goto out_put_disk;
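
As a side note for reviewers, the lock-pointer switch described in the
commit message can be modelled entirely in userspace.  The sketch below is
an illustration only, not kernel code and not part of the patch (the names
fake_queue and driver_lock are invented for the example): it treats
q->queue_lock as a plain pointer and uses error-checking pthread mutexes,
so the final unlock through the switched pointer fails with EPERM, the
userspace analogue of the "bad unlock balance" lockdep report above.

	/*
	 * Userspace model of the queue_lock switch done by
	 * blk_cleanup_queue().  A path locks through the pointer while it
	 * still refers to the driver's lock; the pointer is then switched
	 * to the internal lock; the later unlock through the same pointer
	 * releases a lock that was never taken.
	 */
	#include <pthread.h>
	#include <stdio.h>
	#include <string.h>

	struct fake_queue {
		pthread_mutex_t __queue_lock;	/* "internal" lock */
		pthread_mutex_t *queue_lock;	/* lock actually used */
	};

	int main(void)
	{
		pthread_mutexattr_t attr;
		pthread_mutex_t driver_lock;	/* stands in for vblk->lock */
		struct fake_queue q;
		int ret;

		/* error-checking mutexes report a bad unlock as EPERM */
		pthread_mutexattr_init(&attr);
		pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
		pthread_mutex_init(&driver_lock, &attr);
		pthread_mutex_init(&q.__queue_lock, &attr);

		q.queue_lock = &driver_lock;	/* driver-provided lock */

		pthread_mutex_lock(q.queue_lock);	/* takes driver_lock */

		/* blk_cleanup_queue()-style switch to the internal lock */
		if (q.queue_lock != &q.__queue_lock)
			q.queue_lock = &q.__queue_lock;

		/* unlocking through the pointer now hits the wrong lock */
		ret = pthread_mutex_unlock(q.queue_lock);
		printf("unlock of __queue_lock: %s (driver_lock still held)\n",
		       strerror(ret));
		return 0;
	}

Building with "cc -pthread model.c" and running it prints the EPERM
result, which is the same imbalance the lockdep splat flags once the
driver stops owning the queue lock's lifetime.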