From patchwork Tue Dec 28 15:28:24 2010
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Mike Marciniszyn
X-Patchwork-Id: 436411
Received: from vger.kernel.org (vger.kernel.org [209.132.180.67])
	by demeter1.kernel.org (8.14.4/8.14.3) with ESMTP id oBSFc3Ml018707
	for ; Tue, 28 Dec 2010 15:38:07 GMT
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752751Ab0L1PiG (ORCPT ); Tue, 28 Dec 2010 10:38:06 -0500
Received: from [198.186.4.11] ([198.186.4.11]:26749 "EHLO
	kop-dev-sles11-04.qlogic.org" rhost-flags-FAIL-FAIL-OK-FAIL)
	by vger.kernel.org with ESMTP id S1752415Ab0L1PiB (ORCPT );
	Tue, 28 Dec 2010 10:38:01 -0500
Received: from kop-dev-sles11-04.qlogic.org (localhost [127.0.0.1])
	by kop-dev-sles11-04.qlogic.org (Postfix) with ESMTP id 43DCA27E2D9;
	Tue, 28 Dec 2010 10:28:24 -0500 (EST)
Subject: [PATCH 18/24] IB/qib: adding fix missing from earlier patch
To: Roland Dreier
From: Mike Marciniszyn
Cc: linux-rdma@vger.kernel.org
Date: Tue, 28 Dec 2010 10:28:24 -0500
Message-ID: <20101228152824.19960.40529.stgit@kop-dev-sles11-04.qlogic.org>
In-Reply-To: <20101228152648.19960.84006.stgit@kop-dev-sles11-04.qlogic.org>
References: <20101228152648.19960.84006.stgit@kop-dev-sles11-04.qlogic.org>
User-Agent: StGit/0.15
MIME-Version: 1.0
Sender: linux-rdma-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-rdma@vger.kernel.org
X-Greylist: IP, sender and recipient auto-whitelisted, not delayed by
	milter-greylist-4.2.3 (demeter1.kernel.org [140.211.167.41]);
	Tue, 28 Dec 2010 15:38:07 +0000 (UTC)

diff --git a/drivers/infiniband/hw/qib/qib_ud.c b/drivers/infiniband/hw/qib/qib_ud.c
index a4b945d..4a51fd1 100644
--- a/drivers/infiniband/hw/qib/qib_ud.c
+++ b/drivers/infiniband/hw/qib/qib_ud.c
@@ -445,13 +445,14 @@ void qib_ud_rcv(struct qib_ibport *ibp, struct qib_ib_header *hdr,
 	qkey = be32_to_cpu(ohdr->u.ud.deth[0]);
 	src_qp = be32_to_cpu(ohdr->u.ud.deth[1]) & QIB_QPN_MASK;
 
-	/* Get the number of bytes the message was padded by. */
+	/*
+	 * Get the number of bytes the message was padded by
+	 * and drop incomplete packets.
+	 */
 	pad = (be32_to_cpu(ohdr->bth[0]) >> 20) & 3;
-	if (unlikely(tlen < (hdrsize + pad + 4))) {
-		/* Drop incomplete packets. */
-		ibp->n_pkt_drops++;
-		goto bail;
-	}
+	if (unlikely(tlen < (hdrsize + pad + 4)))
+		goto drop;
+
 	tlen -= hdrsize + pad + 4;
 
 	/*
@@ -460,10 +461,8 @@ void qib_ud_rcv(struct qib_ibport *ibp, struct qib_ib_header *hdr,
 	 */
 	if (qp->ibqp.qp_num) {
 		if (unlikely(hdr->lrh[1] == IB_LID_PERMISSIVE ||
-			     hdr->lrh[3] == IB_LID_PERMISSIVE)) {
-			ibp->n_pkt_drops++;
-			goto bail;
-		}
+			     hdr->lrh[3] == IB_LID_PERMISSIVE))
+			goto drop;
 		if (qp->ibqp.qp_num > 1) {
 			u16 pkey1, pkey2;
 
@@ -476,7 +475,7 @@ void qib_ud_rcv(struct qib_ibport *ibp, struct qib_ib_header *hdr,
 						0xF,
 					      src_qp, qp->ibqp.qp_num,
 					      hdr->lrh[3], hdr->lrh[1]);
-				goto bail;
+				return;
 			}
 		}
 		if (unlikely(qkey != qp->qkey)) {
@@ -484,30 +483,24 @@ void qib_ud_rcv(struct qib_ibport *ibp, struct qib_ib_header *hdr,
 				      (be16_to_cpu(hdr->lrh[0]) >> 4) & 0xF,
 				      src_qp, qp->ibqp.qp_num,
 				      hdr->lrh[3], hdr->lrh[1]);
-			goto bail;
+			return;
 		}
 		/* Drop invalid MAD packets (see 13.5.3.1). */
 		if (unlikely(qp->ibqp.qp_num == 1 &&
 			     (tlen != 256 ||
-			      (be16_to_cpu(hdr->lrh[0]) >> 12) == 15))) {
-			ibp->n_pkt_drops++;
-			goto bail;
-		}
+			      (be16_to_cpu(hdr->lrh[0]) >> 12) == 15)))
+			goto drop;
 	} else {
 		struct ib_smp *smp;
 
 		/* Drop invalid MAD packets (see 13.5.3.1). */
-		if (tlen != 256 || (be16_to_cpu(hdr->lrh[0]) >> 12) != 15) {
-			ibp->n_pkt_drops++;
-			goto bail;
-		}
+		if (tlen != 256 || (be16_to_cpu(hdr->lrh[0]) >> 12) != 15)
+			goto drop;
 		smp = (struct ib_smp *) data;
 		if ((hdr->lrh[1] == IB_LID_PERMISSIVE ||
 		     hdr->lrh[3] == IB_LID_PERMISSIVE) &&
-		    smp->mgmt_class != IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE) {
-			ibp->n_pkt_drops++;
-			goto bail;
-		}
+		    smp->mgmt_class != IB_MGMT_CLASS_SUBN_DIRECTED_ROUTE)
+			goto drop;
 	}
 
 	/*
@@ -523,10 +516,8 @@ void qib_ud_rcv(struct qib_ibport *ibp, struct qib_ib_header *hdr,
 	} else if (opcode == IB_OPCODE_UD_SEND_ONLY) {
 		wc.ex.imm_data = 0;
 		wc.wc_flags = 0;
-	} else {
-		ibp->n_pkt_drops++;
-		goto bail;
-	}
+	} else
+		goto drop;
 
 	/*
 	 * A GRH is expected to preceed the data even if not
@@ -556,8 +547,7 @@ void qib_ud_rcv(struct qib_ibport *ibp, struct qib_ib_header *hdr,
 	/* Silently drop packets which are too big. */
 	if (unlikely(wc.byte_len > qp->r_len)) {
 		qp->r_flags |= QIB_R_REUSE_SGE;
-		ibp->n_pkt_drops++;
-		return;
+		goto drop;
 	}
 	if (has_grh) {
 		qib_copy_sge(&qp->r_sge, &hdr->u.l.grh,
@@ -594,5 +584,8 @@ void qib_ud_rcv(struct qib_ibport *ibp, struct qib_ib_header *hdr,
 	qib_cq_enter(to_icq(qp->ibqp.recv_cq), &wc,
 		     (ohdr->bth[0] &
 		      cpu_to_be32(IB_BTH_SOLICITED)) != 0);
-bail:;
+	return;
+
+drop:
+	ibp->n_pkt_drops++;
 }
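
The net effect of the hunks above is that qib_ud_rcv() stops open-coding
"ibp->n_pkt_drops++; goto bail;" at every bail-out site and instead jumps
to a single drop: label at the end of the function, while the bad
pkey/qkey trap paths simply return without touching the counter. The
fragment below is a minimal, self-contained sketch of that
centralized-drop-label pattern; every name in it (rcv_counters, toy_rcv,
MIN_PKT_LEN, OPCODE_SEND) is invented for illustration and is not qib
code.

	/*
	 * Illustrative only -- not part of the patch above.  Every
	 * validation failure jumps to one "drop" label, which is the
	 * only place the drop counter is incremented.
	 */
	#include <stdio.h>
	#include <stddef.h>

	struct rcv_counters {
		unsigned long n_pkt_drops;	/* plays the role of ibp->n_pkt_drops */
	};

	#define MIN_PKT_LEN	16u	/* arbitrary minimum length for the demo */
	#define OPCODE_SEND	4u	/* the only opcode the demo accepts */

	/* Returns 0 if the packet is delivered, -1 if it is silently dropped. */
	static int toy_rcv(struct rcv_counters *cnt, const unsigned char *pkt,
			   size_t len, unsigned int opcode)
	{
		if (pkt == NULL)		/* nothing to deliver */
			goto drop;
		if (len < MIN_PKT_LEN)		/* incomplete packet */
			goto drop;
		if (opcode != OPCODE_SEND)	/* opcode we do not handle */
			goto drop;

		/* ...normal receive processing would go here... */
		return 0;

	drop:
		/* single accounting point for every dropped packet */
		cnt->n_pkt_drops++;
		return -1;
	}

	int main(void)
	{
		struct rcv_counters cnt = { 0 };
		unsigned char pkt[32] = { 0 };

		toy_rcv(&cnt, pkt, sizeof(pkt), OPCODE_SEND);	/* delivered */
		toy_rcv(&cnt, pkt, 4, OPCODE_SEND);		/* too short -> drop */
		toy_rcv(&cnt, NULL, sizeof(pkt), OPCODE_SEND);	/* no data  -> drop */

		printf("dropped %lu packets\n", cnt.n_pkt_drops);	/* prints 2 */
		return 0;
	}

Keeping the counter update behind one label is what lets the patch turn
each multi-line "{ ibp->n_pkt_drops++; goto bail; }" block into a single
"goto drop;" statement without changing which packets get counted.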