From patchwork Wed Jan  5 20:35:14 2011
X-Patchwork-Submitter: David Dillow
X-Patchwork-Id: 454791
X-Patchwork-Delegate: dave@thedillows.org
From: David Dillow
To: linux-rdma@vger.kernel.org
Cc: linux-scsi@vger.kernel.org,
	bvanassche@acm.org
Subject: [PATCH v2 6/8] IB/srp: reduce lock coverage of command completion
Date: Wed, 5 Jan 2011 15:35:14 -0500
Message-Id: <1294259716-30706-7-git-send-email-dillowda@ornl.gov>
X-Mailer: git-send-email 1.7.3.4
In-Reply-To: <1294259716-30706-1-git-send-email-dillowda@ornl.gov>
References: <1294259716-30706-1-git-send-email-dillowda@ornl.gov>
X-Mailing-List: linux-rdma@vger.kernel.org

diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
index d320dd4..15c3ae3 100644
--- a/drivers/infiniband/ulp/srp/ib_srp.c
+++ b/drivers/infiniband/ulp/srp/ib_srp.c
@@ -549,18 +549,24 @@ static void srp_unmap_data(struct scsi_cmnd *scmnd,
 		     scsi_sg_count(scmnd), scmnd->sc_data_direction);
 }
 
-static void srp_remove_req(struct srp_target_port *target, struct srp_request *req)
+static void srp_remove_req(struct srp_target_port *target,
+			   struct srp_request *req, s32 req_lim_delta)
 {
+	unsigned long flags;
+
 	srp_unmap_data(req->scmnd, target, req);
+	spin_lock_irqsave(target->scsi_host->host_lock, flags);
+	target->req_lim += req_lim_delta;
 	req->scmnd = NULL;
 	list_add_tail(&req->list, &target->free_reqs);
+	spin_unlock_irqrestore(target->scsi_host->host_lock, flags);
 }
 
 static void srp_reset_req(struct srp_target_port *target, struct srp_request *req)
 {
 	req->scmnd->result = DID_RESET << 16;
 	req->scmnd->scsi_done(req->scmnd);
-	srp_remove_req(target, req);
+	srp_remove_req(target, req, 0);
 }
 
 static int srp_reconnect_target(struct srp_target_port *target)
@@ -595,13 +601,11 @@ static int srp_reconnect_target(struct srp_target_port *target)
 	while (ib_poll_cq(target->send_cq, 1, &wc) > 0)
 		; /* nothing */
 
-	spin_lock_irq(target->scsi_host->host_lock);
 	for (i = 0; i < SRP_CMD_SQ_SIZE; ++i) {
 		struct srp_request *req = &target->req_ring[i];
 		if (req->scmnd)
 			srp_reset_req(target, req);
 	}
-	spin_unlock_irq(target->scsi_host->host_lock);
 
 	list_del_init(&target->free_tx);
 	for (i = 0; i < SRP_SQ_SIZE; ++i)
@@ -914,15 +918,12 @@ static void srp_process_rsp(struct srp_target_port *target, struct srp_rsp *rsp)
 	struct srp_request *req;
 	struct scsi_cmnd *scmnd;
 	unsigned long flags;
-	s32 delta;
-
-	delta = (s32) be32_to_cpu(rsp->req_lim_delta);
-
-	spin_lock_irqsave(target->scsi_host->host_lock, flags);
-
-	target->req_lim += delta;
 
 	if (unlikely(rsp->tag & SRP_TAG_TSK_MGMT)) {
+		spin_lock_irqsave(target->scsi_host->host_lock, flags);
+		target->req_lim += be32_to_cpu(rsp->req_lim_delta);
+		spin_unlock_irqrestore(target->scsi_host->host_lock, flags);
+
 		target->tsk_mgmt_status = -1;
 		if (be32_to_cpu(rsp->resp_data_len) >= 4)
 			target->tsk_mgmt_status = rsp->data[3];
@@ -948,12 +949,10 @@ static void srp_process_rsp(struct srp_target_port *target, struct srp_rsp *rsp)
 		else if (rsp->flags & (SRP_RSP_FLAG_DIOVER | SRP_RSP_FLAG_DIUNDER))
 			scsi_set_resid(scmnd, be32_to_cpu(rsp->data_in_res_cnt));
 
+		srp_remove_req(target, req, be32_to_cpu(rsp->req_lim_delta));
+
 		scmnd->host_scribble = NULL;
 		scmnd->scsi_done(scmnd);
-		srp_remove_req(target, req);
 	}
-
-	spin_unlock_irqrestore(target->scsi_host->host_lock, flags);
 }
 
 static int srp_response_common(struct srp_target_port *target, s32 req_delta,
@@ -1498,18 +1497,14 @@ static int srp_abort(struct scsi_cmnd *scmnd)
 			     SRP_TSK_ABORT_TASK))
 		return FAILED;
 
-	spin_lock_irq(target->scsi_host->host_lock);
-
 	if (req->scmnd) {
 		if (!target->tsk_mgmt_status) {
-			srp_remove_req(target, req);
+			srp_remove_req(target, req, 0);
 			scmnd->result = DID_ABORT << 16;
 		} else
 			ret = FAILED;
 	}
-
-	spin_unlock_irq(target->scsi_host->host_lock);
-
 	return ret;
 }
 
@@ -1528,16 +1523,12 @@ static int srp_reset_device(struct scsi_cmnd *scmnd)
 	if (target->tsk_mgmt_status)
 		return FAILED;
 
-	spin_lock_irq(target->scsi_host->host_lock);
-
 	for (i = 0; i < SRP_CMD_SQ_SIZE; ++i) {
 		struct srp_request *req = &target->req_ring[i];
 		if (req->scmnd && req->scmnd->device == scmnd->device)
 			srp_reset_req(target, req);
 	}
-	spin_unlock_irq(target->scsi_host->host_lock);
-
 	return SUCCESS;
 }