From patchwork Thu Dec 22 11:46:33 2022
X-Patchwork-Submitter: Andrzej Hajda
X-Patchwork-Id: 13079649
From: Andrzej Hajda
To: linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-snps-arc@lists.infradead.org, linux-arm-kernel@lists.infradead.org,
	linux-hexagon@vger.kernel.org, linux-ia64@vger.kernel.org,
	loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org,
	linux-mips@vger.kernel.org, openrisc@lists.librecores.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
	linux-xtensa@linux-xtensa.org, intel-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org
Cc: Mark Rutland, Arnd Bergmann, Peter Zijlstra, Boqun Feng,
	Andrzej Hajda, Rodrigo Vivi, Andrew Morton, Andy Shevchenko
Date: Thu, 22 Dec 2022 12:46:33 +0100
Message-Id: <20221222114635.1251934-18-andrzej.hajda@intel.com>
In-Reply-To: <20221222114635.1251934-1-andrzej.hajda@intel.com>
References: <20221222114635.1251934-1-andrzej.hajda@intel.com>
Organization: Intel Technology Poland sp. z o.o. - ul. Slowackiego 173,
	80-298 Gdansk - KRS 101882 - NIP 957-07-52-316
Subject: [Intel-gfx] [PATCH 17/19] arch/xtensa: rename internal name __xchg
	to __arch_xchg

__xchg will be used for non-atomic xchg macro.

Signed-off-by: Andrzej Hajda
---
 arch/xtensa/include/asm/cmpxchg.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/xtensa/include/asm/cmpxchg.h b/arch/xtensa/include/asm/cmpxchg.h
index eb87810357ad88..675a11ea8de76b 100644
--- a/arch/xtensa/include/asm/cmpxchg.h
+++ b/arch/xtensa/include/asm/cmpxchg.h
@@ -170,7 +170,7 @@ static inline unsigned long xchg_u32(volatile int * m, unsigned long val)
 }
 
 #define arch_xchg(ptr,x) \
-	((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr))))
+	((__typeof__(*(ptr)))__arch_xchg((unsigned long)(x),(ptr),sizeof(*(ptr))))
 
 static inline u32 xchg_small(volatile void *ptr, u32 x, int size)
 {
@@ -203,7 +203,7 @@ static inline u32 xchg_small(volatile void *ptr, u32 x, int size)
 extern void __xchg_called_with_bad_pointer(void);
 
 static __inline__ unsigned long
-__xchg(unsigned long x, volatile void * ptr, int size)
+__arch_xchg(unsigned long x, volatile void * ptr, int size)
 {
 	switch (size) {
 	case 1:
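
Note for reviewers (not part of the commit message): the point of freeing up
the plain __xchg name is that it can later be defined as a generic,
non-atomic exchange helper. The exact definition is introduced elsewhere in
this series, not in this patch; as a rough illustration only, such a macro
could look roughly like:

	/* illustrative sketch only; not the definition added by this series */
	#define __xchg(ptr, val) ({		\
		__auto_type __ptr = (ptr);	\
		__auto_type __old = *__ptr;	\
		*__ptr = (val);			\
		__old;				\
	})

i.e. a plain load/store swap with no atomicity or ordering guarantees, which
is why the arch-internal atomic helper here moves to the __arch_xchg name.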