From patchwork Mon Apr 1 03:57:26 2024
X-Patchwork-Submitter: Pavan Chebbi
X-Patchwork-Id: 13612509
X-Patchwork-Delegate: kuba@kernel.org
From: Pavan Chebbi
To: michael.chan@broadcom.com
Cc: davem@davemloft.net, edumazet@google.com, gospo@broadcom.com, kuba@kernel.org, netdev@vger.kernel.org, pabeni@redhat.com, Somnath Kotur, Andy Gospodarek, Pavan Chebbi
Subject: [PATCH net-next 3/7] bnxt_en: Allocate page pool per numa node
Date: Sun, 31 Mar 2024 20:57:26 -0700
Message-Id: <20240401035730.306790-4-pavan.chebbi@broadcom.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20240401035730.306790-1-pavan.chebbi@broadcom.com>
References: <20240401035730.306790-1-pavan.chebbi@broadcom.com>

From: Somnath Kotur

The driver's page pool allocation code looks at the node local to the PCIe device to determine where to allocate memory. In scenarios where the core count per NUMA node is low (fewer than the default number of rings), it makes sense to exhaust the page pool allocations on node 0 first and then allocate the page pools for the remaining rings from node 1.
With this patch, and the following configuration on the NIC:

$ ethtool -L ens1f0np0 combined 16

(core count per node = 12, so the first 12 rings land on node#0 and the last 4 rings on node#1), with traffic redirected to a ring on node#1, we see a performance improvement of ~20%.

Signed-off-by: Somnath Kotur
Reviewed-by: Andy Gospodarek
Reviewed-by: Michael Chan
Signed-off-by: Pavan Chebbi
---
 drivers/net/ethernet/broadcom/bnxt/bnxt.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 54955f878b73..42b5825b0664 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -3559,14 +3559,15 @@ static void bnxt_free_rx_rings(struct bnxt *bp)
 }
 
 static int bnxt_alloc_rx_page_pool(struct bnxt *bp,
-				   struct bnxt_rx_ring_info *rxr)
+				   struct bnxt_rx_ring_info *rxr,
+				   int numa_node)
 {
 	struct page_pool_params pp = { 0 };
 
 	pp.pool_size = bp->rx_agg_ring_size;
 	if (BNXT_RX_PAGE_MODE(bp))
 		pp.pool_size += bp->rx_ring_size;
-	pp.nid = dev_to_node(&bp->pdev->dev);
+	pp.nid = numa_node;
 	pp.napi = &rxr->bnapi->napi;
 	pp.netdev = bp->dev;
 	pp.dev = &bp->pdev->dev;
@@ -3586,7 +3587,8 @@ static int bnxt_alloc_rx_page_pool(struct bnxt *bp,
 
 static int bnxt_alloc_rx_rings(struct bnxt *bp)
 {
-	int i, rc = 0, agg_rings = 0;
+	int numa_node = dev_to_node(&bp->pdev->dev);
+	int i, rc = 0, agg_rings = 0, cpu;
 
 	if (!bp->rx_ring)
 		return -ENOMEM;
@@ -3597,10 +3599,15 @@ static int bnxt_alloc_rx_rings(struct bnxt *bp)
 	for (i = 0; i < bp->rx_nr_rings; i++) {
 		struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i];
 		struct bnxt_ring_struct *ring;
+		int cpu_node;
 
 		ring = &rxr->rx_ring_struct;
-		rc = bnxt_alloc_rx_page_pool(bp, rxr);
+		cpu = cpumask_local_spread(i, numa_node);
+		cpu_node = cpu_to_node(cpu);
+		netdev_dbg(bp->dev, "Allocating page pool for rx_ring[%d] on numa_node: %d\n",
+			   i, cpu_node);
+		rc = bnxt_alloc_rx_page_pool(bp, rxr, cpu_node);
 		if (rc)
 			return rc;