From patchwork Fri Nov 24 15:47:19 2023
X-Patchwork-Submitter: Alexander Lobakin
X-Patchwork-Id: 13467813
X-Patchwork-Delegate: kuba@kernel.org
From: Alexander Lobakin
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni
Cc: Alexander Lobakin, Maciej Fijalkowski, Michal Kubiak, Larysa Zaremba,
 Alexander Duyck, Yunsheng Lin, David Christensen, Jesper Dangaard Brouer,
 Ilias Apalodimas, Paul Menzel, netdev@vger.kernel.org,
 intel-wired-lan@lists.osuosl.org, linux-kernel@vger.kernel.org
Subject: [PATCH net-next v5 01/14] page_pool: make sure frag API fields don't span between cachelines
Date: Fri, 24 Nov 2023 16:47:19 +0100
Message-ID: <20231124154732.1623518-2-aleksander.lobakin@intel.com>
In-Reply-To: <20231124154732.1623518-1-aleksander.lobakin@intel.com>
References: <20231124154732.1623518-1-aleksander.lobakin@intel.com>

After commit 5027ec19f104 ("net: page_pool: split the page_pool_params
into fast and slow") made &page_pool contain only "hot" params at the
start, the cacheline boundary chops the frag API fields group in the
middle again.
To avoid redoing this each time the fast params get expanded or shrunk,
align the frag fields to `4 * sizeof(long)`, the closest power of 2
above their actual size (2 longs + 2 ints). This ensures 16-byte
alignment on 32-bit architectures and 32-byte alignment on 64-bit ones,
avoiding unnecessary false sharing.
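
For illustration only (this snippet is not part of the patch; the struct
and field names below are made up), a minimal userspace sketch of the same
trick: aligning the first member of the group pushes the whole group onto
a fresh `4 * sizeof(long)` boundary, which can be checked at build time:

	/*
	 * Not the kernel struct -- demo names only.  Aligning the first
	 * member of the frag group to 4 * sizeof(long) forces the group
	 * onto a fresh 16-byte (32-bit) / 32-byte (64-bit) boundary, so
	 * 2 longs + 2 ints can never straddle a 64-byte cacheline.
	 */
	#include <stddef.h>

	struct demo_pool {
		/* ... fast/"hot" params live here and may grow or shrink ... */
		_Bool has_init_callback;

		/* frag API group: 2 longs + 2 ints <= 4 * sizeof(long) */
		long frag_users __attribute__((aligned(4 * sizeof(long))));
		void *frag_page;
		unsigned int frag_offset;
		unsigned int pages_state_hold_cnt;
	};

	/* The group starts on a 4 * sizeof(long) boundary... */
	_Static_assert(offsetof(struct demo_pool, frag_users) %
		       (4 * sizeof(long)) == 0, "frag group is misaligned");
	/* ...and ends before the next one, hence within one cacheline. */
	_Static_assert(offsetof(struct demo_pool, pages_state_hold_cnt) +
		       sizeof(unsigned int) -
		       offsetof(struct demo_pool, frag_users) <= 4 * sizeof(long),
		       "frag group spans an alignment boundary");

The same effect could be achieved with manual padding, but an explicit
alignment attribute keeps working when the fields in front of the group
change.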
Signed-off-by: Alexander Lobakin
---
 include/net/page_pool/types.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/net/page_pool/types.h b/include/net/page_pool/types.h
index e1bb92c192de..989d07b831fc 100644
--- a/include/net/page_pool/types.h
+++ b/include/net/page_pool/types.h
@@ -127,7 +127,7 @@ struct page_pool {
 
 	bool has_init_callback;
 
-	long frag_users;
+	long frag_users __aligned(4 * sizeof(long));
 	struct page *frag_page;
 	unsigned int frag_offset;
 	u32 pages_state_hold_cnt;