From patchwork Tue Sep 7 14:54:34 2010
X-Patchwork-Submitter: Yevgeny Kliteynik
X-Patchwork-Id: 161051
Message-ID: <4C86522A.8090005@mellanox.co.il>
Date: Tue, 7 Sep 2010 17:54:34 +0300
From: Yevgeny Kliteynik
Reply-To: kliteyn@dev.mellanox.co.il
To: Sasha Khapyorsky, Linux RDMA
Subject: [PATCH] opensm/osm_ucast_cache.c: fix potential seg fault
X-Mailing-List: linux-rdma@vger.kernel.org

diff --git a/opensm/opensm/osm_ucast_cache.c b/opensm/opensm/osm_ucast_cache.c
index c611c38..be15508 100644
--- a/opensm/opensm/osm_ucast_cache.c
+++ b/opensm/opensm/osm_ucast_cache.c
@@ -931,6 +931,14 @@ void osm_ucast_cache_add_node(osm_ucast_mgr_t * p_mgr, osm_node_t * p_node)
 
 	p_cache_sw = cache_get_sw(p_mgr, lid_ho);
 	CL_ASSERT(p_cache_sw);
+	if (!p_cache_sw) {
+		/* something is wrong - forget about cache */
+		OSM_LOG(p_mgr->p_log, OSM_LOG_ERROR,
+			"ERR AD04: no cached switch with lid %u - "
+			"clearing cache\n", lid_ho);
+		osm_ucast_cache_invalidate(p_mgr);
+		goto Exit;
+	}
 
 	if (!cache_sw_is_leaf(p_cache_sw)) {
 		OSM_LOG(p_mgr->p_log, OSM_LOG_DEBUG,