From patchwork Mon May 20 03:06:25 2013
X-Patchwork-Submitter: Qinchuanyu
X-Patchwork-Id: 2590581
From: Qinchuanyu
To: rusty@rustcorp.com.au, mst@redhat.com, dhowells@redhat.com,
	jasowang@redhat.com
Cc: kvm@vger.kernel.org, netdev@vger.kernel.org
Subject: [PATCH] vhost: get 2% performance improvement by reducing
	spin_lock contention in vhost_work_queue
Date: Mon, 20 May 2013 03:06:25 +0000
Message-ID: <5872DA217C2FF7488B20897D84F904E7338FD1E5@nkgeml511-mbx.china.huawei.com>

Right now wake_up_process() is called inside the spin_lock/unlock
critical section in vhost_work_queue(), but it can be done after the
lock is released: waking the worker while work_lock is still held means
the worker thread can start running and immediately contend on that
same lock. I have tested this with kernel 3.0.27 and a SUSE 11 SP2
guest; it gives a 2%-3% improvement in net performance.
Signed-off-by: Chuanyu Qin
---
--- a/drivers/vhost/vhost.c	2013-05-20 10:36:30.000000000 +0800
+++ b/drivers/vhost/vhost.c	2013-05-20 10:36:54.000000000 +0800
@@ -144,9 +144,10 @@
 	if (list_empty(&work->node)) {
 		list_add_tail(&work->node, &dev->work_list);
 		work->queue_seq++;
+		spin_unlock_irqrestore(&dev->work_lock, flags);
 		wake_up_process(dev->worker);
-	}
-	spin_unlock_irqrestore(&dev->work_lock, flags);
+	} else
+		spin_unlock_irqrestore(&dev->work_lock, flags);
 }
 
 void vhost_poll_queue(struct vhost_poll *poll)
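
For reference, this is how the whole of vhost_work_queue() would read
with the patch applied. It is a sketch reconstructed from the hunk above
plus the 3.0-era vhost.c; the surrounding lines (function signature,
lock acquisition) are assumed from that context rather than copied from
the exact tree that was tested:

void vhost_work_queue(struct vhost_dev *dev, struct vhost_work *work)
{
	unsigned long flags;

	spin_lock_irqsave(&dev->work_lock, flags);
	if (list_empty(&work->node)) {
		list_add_tail(&work->node, &dev->work_list);
		work->queue_seq++;
		/* Drop work_lock before waking the worker, so the
		 * worker does not wake up only to spin on work_lock
		 * while this CPU still holds it. */
		spin_unlock_irqrestore(&dev->work_lock, flags);
		wake_up_process(dev->worker);
	} else
		spin_unlock_irqrestore(&dev->work_lock, flags);
}

Note that the list insertion and queue_seq update still happen under
work_lock, so the worker is guaranteed to observe the queued work once
it runs; only the wakeup itself moves out of the critical section.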