Subject: Re: [PATCH] IB/sa: replace GFP_KERNEL with GFP_ATOMIC
Date: Tue, 27 Oct 2015 14:12:36 -0400
From: "ira.weiny" <ira.weiny@intel.com>
To: Saurabh Sengar <saurabh.truth@gmail.com>
Cc: dledford@redhat.com, sean.hefty@intel.com, hal.rosenstock@gmail.com,
    jgunthorpe@obsidianresearch.com, yun.wang@profitbricks.com,
    kaike.wan@intel.com, linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org
Message-ID: <20151027181235.GA27038@phlsvsds.ph.intel.com>
In-Reply-To: <1445960860-3396-1-git-send-email-saurabh.truth@gmail.com>

On Tue, Oct 27, 2015 at 09:17:40PM +0530, Saurabh Sengar wrote:
> replace GFP_KERNEL with GFP_ATOMIC, as code while holding a spinlock
> should be atomic
> GFP_KERNEL may sleep and can cause deadlock, whereas GFP_ATOMIC may
> fail but certainly avoids deadlock

Great catch, thanks!

However, a gfp_t is already passed to send_mad(), so we should pass that
down and use it rather than hard-coding GFP_ATOMIC.
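
To make that concrete, here is a minimal sketch of the two points (the
example_* names and the lock below are made up for illustration; this is not
the sa_query.c code): an allocation made while a spinlock is held must not
sleep, and a helper that can be reached from more than one context should take
its gfp_t from the caller instead of hard-coding one.

#include <linux/gfp.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(example_lock);	/* hypothetical lock, illustration only */

/*
 * Hypothetical helper: the caller supplies gfp_mask, so the same code works
 * from process context (GFP_KERNEL) and from atomic context (GFP_ATOMIC)
 * without hard-coding either.
 */
static void *example_alloc(size_t len, gfp_t gfp_mask)
{
	return kzalloc(len, gfp_mask);
}

static int example_under_lock(size_t len)
{
	unsigned long flags;
	void *buf;

	spin_lock_irqsave(&example_lock, flags);
	/*
	 * With the spinlock held (and local IRQs off) we may not sleep, so
	 * the allocation here has to use GFP_ATOMIC; GFP_KERNEL may sleep
	 * waiting for reclaim and can deadlock.
	 */
	buf = example_alloc(len, GFP_ATOMIC);
	spin_unlock_irqrestore(&example_lock, flags);

	if (!buf)
		return -ENOMEM;
	/* ... use the buffer ... */
	kfree(buf);
	return 0;
}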

Compile tested only, suggestion below,
Ira

Reviewed-By: Ira Weiny <ira.weiny@intel.com>

14:09:12 > git di
diff --git a/drivers/infiniband/core/sa_query.c b/drivers/infiniband/core/sa_query.c
index 8c014b33d8e0..54d454042b28 100644
--- a/drivers/infiniband/core/sa_query.c
+++ b/drivers/infiniband/core/sa_query.c
@@ -512,7 +512,7 @@ static int ib_nl_get_path_rec_attrs_len(ib_sa_comp_mask comp_mask)
 	return len;
 }
 
-static int ib_nl_send_msg(struct ib_sa_query *query)
+static int ib_nl_send_msg(struct ib_sa_query *query, gfp_t gfp_mask)
 {
 	struct sk_buff *skb = NULL;
 	struct nlmsghdr *nlh;
@@ -526,7 +526,7 @@ static int ib_nl_send_msg(struct ib_sa_query *query)
 	if (len <= 0)
 		return -EMSGSIZE;
 
-	skb = nlmsg_new(len, GFP_KERNEL);
+	skb = nlmsg_new(len, gfp_mask);
 	if (!skb)
 		return -ENOMEM;
 
@@ -544,7 +544,7 @@ static int ib_nl_send_msg(struct ib_sa_query *query)
 	/* Repair the nlmsg header length */
 	nlmsg_end(skb, nlh);
 
-	ret = ibnl_multicast(skb, nlh, RDMA_NL_GROUP_LS, GFP_KERNEL);
+	ret = ibnl_multicast(skb, nlh, RDMA_NL_GROUP_LS, gfp_mask);
 	if (!ret)
 		ret = len;
 	else
@@ -553,7 +553,7 @@ static int ib_nl_send_msg(struct ib_sa_query *query)
 	return ret;
 }
 
-static int ib_nl_make_request(struct ib_sa_query *query)
+static int ib_nl_make_request(struct ib_sa_query *query, gfp_t gfp_mask)
 {
 	unsigned long flags;
 	unsigned long delay;
@@ -563,7 +563,7 @@ static int ib_nl_make_request(struct ib_sa_query *query)
 	query->seq = (u32)atomic_inc_return(&ib_nl_sa_request_seq);
 
 	spin_lock_irqsave(&ib_nl_request_lock, flags);
-	ret = ib_nl_send_msg(query);
+	ret = ib_nl_send_msg(query, gfp_mask);
 	if (ret <= 0) {
 		ret = -EIO;
 		goto request_out;
@@ -1105,7 +1105,7 @@ static int send_mad(struct ib_sa_query *query, int timeout_ms, gfp_t gfp_mask)
 
 	if (query->flags & IB_SA_ENABLE_LOCAL_SERVICE) {
 		if (!ibnl_chk_listeners(RDMA_NL_GROUP_LS)) {
-			if (!ib_nl_make_request(query))
+			if (!ib_nl_make_request(query, gfp_mask))
 				return id;
 		}
 		ib_sa_disable_local_svc(query);
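
As a usage note, a hypothetical caller-side sketch (names and bodies made up,
not from the tree): with the gfp_t threaded through as above, the allocation
policy stays with whoever issues the query, so a caller that cannot sleep
passes GFP_ATOMIC and the netlink path simply inherits it.

#include <linux/gfp.h>
#include <linux/slab.h>

/* Stand-in for an entry point in the spirit of send_mad(); illustration only. */
static int example_send_query(size_t len, gfp_t gfp_mask)
{
	void *req = kzalloc(len, gfp_mask);	/* inherits the caller's policy */

	if (!req)
		return -ENOMEM;
	/* ... build and post the request ... */
	kfree(req);
	return 0;
}

/* A caller in process context may sleep, so GFP_KERNEL is fine ... */
static int example_caller_may_sleep(void)
{
	return example_send_query(128, GFP_KERNEL);
}

/* ... while a caller in atomic context must not, so it passes GFP_ATOMIC. */
static int example_caller_atomic(void)
{
	return example_send_query(128, GFP_ATOMIC);
}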