From patchwork Wed Sep 16 12:33:06 2015
X-Patchwork-Submitter: Tadeusz Struk
X-Patchwork-Id: 7194501
X-Patchwork-Delegate: herbert@gondor.apana.org.au
Subject: [PATCH] crypto: qat - Add load balancing across devices
From: Tadeusz Struk <tadeusz.struk@intel.com>
To: herbert@gondor.apana.org.au
Cc: linux-crypto@vger.kernel.org, qat-linux@intel.com, tadeusz.struk@intel.com
Date: Wed, 16 Sep 2015 05:33:06 -0700
Message-ID: <20150916123306.9736.13694.stgit@tstruk-mobl1>

Load balancing of crypto instances only ever used a single device. That
was not a problem on the PF, but since a VF exposes only one or two
instances, we need to load balance across devices as well.
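
For anyone who wants to play with the selection policy outside the kernel,
here is a minimal userspace sketch of the two-level least-loaded pick this
patch implements: first choose the device with the fewest outstanding
references, then the least-used instance on that device. The names
(toy_device, toy_instance, toy_get_instance), the fixed-size arrays, and
the plain increments are illustrative stand-ins only, not the kernel's
adf_accel_dev/qat_crypto_instance types or atomic refcounting, and the
NUMA-node filtering done by the real code is omitted.

/*
 * Userspace sketch only: two-level least-loaded selection.
 * All types here are hypothetical stand-ins, not the kernel's
 * struct adf_accel_dev / struct qat_crypto_instance.
 */
#include <stdio.h>
#include <limits.h>

struct toy_instance {
	unsigned long refctr;		/* in-flight users of this instance */
};

struct toy_device {
	unsigned long ref_count;	/* in-flight users across the device */
	struct toy_instance inst[2];	/* a VF exposes only one or two */
	int ninst;
};

static struct toy_instance *toy_get_instance(struct toy_device *devs, int ndevs)
{
	struct toy_device *best_dev = NULL;
	struct toy_instance *best_inst = NULL;
	unsigned long best = ULONG_MAX;
	int i;

	/* Level 1: least-referenced device (the new ref_count scan). */
	for (i = 0; i < ndevs; i++) {
		if (devs[i].ref_count < best) {
			best_dev = &devs[i];
			best = devs[i].ref_count;
		}
	}
	if (!best_dev)
		return NULL;

	/* Level 2: least-referenced instance on that device. */
	best = ULONG_MAX;
	for (i = 0; i < best_dev->ninst; i++) {
		if (best_dev->inst[i].refctr < best) {
			best_inst = &best_dev->inst[i];
			best = best_dev->inst[i].refctr;
		}
	}
	if (best_inst) {
		best_dev->ref_count++;	/* plays the role of adf_dev_get() */
		best_inst->refctr++;	/* plays the role of atomic_inc(&inst->refctr) */
	}
	return best_inst;
}

int main(void)
{
	struct toy_device devs[2] = {
		{ .ref_count = 3, .inst = { { 0 }, { 2 } }, .ninst = 2 },
		{ .ref_count = 1, .inst = { { 1 } }, .ninst = 1 },
	};
	struct toy_instance *inst = toy_get_instance(devs, 2);

	/* Device 0 is skipped even though it has an idle instance,
	 * because the device-level count decides first. */
	printf("picked instance, refctr is now %lu\n", inst ? inst->refctr : 0UL);
	return 0;
}

Note that with this patch adf_dev_get()/adf_dev_put() are also called
unconditionally for every instance get/put, so the device-level ref_count
tracks in-flight users and can serve directly as the level-1 load metric.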
Signed-off-by: Tadeusz Struk <tadeusz.struk@intel.com>
---
 drivers/crypto/qat/qat_common/qat_crypto.c | 61 +++++++++++++++-------------
 1 file changed, 33 insertions(+), 28 deletions(-)

diff --git a/drivers/crypto/qat/qat_common/qat_crypto.c b/drivers/crypto/qat/qat_common/qat_crypto.c
index 07c2f9f..25db27c 100644
--- a/drivers/crypto/qat/qat_common/qat_crypto.c
+++ b/drivers/crypto/qat/qat_common/qat_crypto.c
@@ -60,8 +60,8 @@ static struct service_hndl qat_crypto;
 
 void qat_crypto_put_instance(struct qat_crypto_instance *inst)
 {
-	if (atomic_sub_return(1, &inst->refctr) == 0)
-		adf_dev_put(inst->accel_dev);
+	atomic_dec(&inst->refctr);
+	adf_dev_put(inst->accel_dev);
 }
 
 static int qat_crypto_free_instances(struct adf_accel_dev *accel_dev)
@@ -97,19 +97,26 @@ static int qat_crypto_free_instances(struct adf_accel_dev *accel_dev)
 struct qat_crypto_instance *qat_crypto_get_instance_node(int node)
 {
 	struct adf_accel_dev *accel_dev = NULL;
-	struct qat_crypto_instance *inst_best = NULL;
+	struct qat_crypto_instance *inst = NULL;
 	struct list_head *itr;
 	unsigned long best = ~0;
 
 	list_for_each(itr, adf_devmgr_get_head()) {
-		accel_dev = list_entry(itr, struct adf_accel_dev, list);
-
-		if ((node == dev_to_node(&GET_DEV(accel_dev)) ||
-		     dev_to_node(&GET_DEV(accel_dev)) < 0) &&
-		    adf_dev_started(accel_dev) &&
-		    !list_empty(&accel_dev->crypto_list))
-			break;
-		accel_dev = NULL;
+		struct adf_accel_dev *tmp_dev;
+		unsigned long ctr;
+
+		tmp_dev = list_entry(itr, struct adf_accel_dev, list);
+
+		if ((node == dev_to_node(&GET_DEV(tmp_dev)) ||
+		     dev_to_node(&GET_DEV(tmp_dev)) < 0) &&
+		    adf_dev_started(tmp_dev) &&
+		    !list_empty(&tmp_dev->crypto_list)) {
+			ctr = atomic_read(&tmp_dev->ref_count);
+			if (best > ctr) {
+				accel_dev = tmp_dev;
+				best = ctr;
+			}
+		}
 	}
 	if (!accel_dev) {
 		pr_err("QAT: Could not find a device on node %d\n", node);
@@ -118,28 +125,26 @@ struct qat_crypto_instance *qat_crypto_get_instance_node(int node)
 	if (!accel_dev || !adf_dev_started(accel_dev))
 		return NULL;
 
+	best = ~0;
 	list_for_each(itr, &accel_dev->crypto_list) {
-		struct qat_crypto_instance *inst;
-		unsigned long cur;
-
-		inst = list_entry(itr, struct qat_crypto_instance, list);
-		cur = atomic_read(&inst->refctr);
-		if (best > cur) {
-			inst_best = inst;
-			best = cur;
+		struct qat_crypto_instance *tmp_inst;
+		unsigned long ctr;
+
+		tmp_inst = list_entry(itr, struct qat_crypto_instance, list);
+		ctr = atomic_read(&tmp_inst->refctr);
+		if (best > ctr) {
+			inst = tmp_inst;
+			best = ctr;
 		}
 	}
-	if (inst_best) {
-		if (atomic_add_return(1, &inst_best->refctr) == 1) {
-			if (adf_dev_get(accel_dev)) {
-				atomic_dec(&inst_best->refctr);
-				dev_err(&GET_DEV(accel_dev),
-					"Could not increment dev refctr\n");
-				return NULL;
-			}
+	if (inst) {
+		if (adf_dev_get(accel_dev)) {
+			dev_err(&GET_DEV(accel_dev), "Could not increment dev refctr\n");
+			return NULL;
 		}
+		atomic_inc(&inst->refctr);
 	}
-	return inst_best;
+	return inst;
 }
 
 static int qat_crypto_create_instances(struct adf_accel_dev *accel_dev)