From patchwork Tue Nov 13 23:39:21 2018
X-Patchwork-Submitter: Ard Biesheuvel
X-Patchwork-Id: 10681651
From: Ard Biesheuvel
To: linux-arm-kernel@lists.infradead.org
Cc: mark.rutland@arm.com, catalin.marinas@arm.com, Ard Biesheuvel,
 will.deacon@arm.com, marc.zyngier@arm.com
Subject: [PATCH 1/3] arm64/atomics: refactor LL/SC base asm templates
Date: Tue, 13 Nov 2018 15:39:21 -0800
Message-Id: <20181113233923.20098-2-ard.biesheuvel@linaro.org>
In-Reply-To: <20181113233923.20098-1-ard.biesheuvel@linaro.org>
References: <20181113233923.20098-1-ard.biesheuvel@linaro.org>

Refactor the asm templates that emit the LL/SC instruction sequences so
that we will be able to reuse them in the LSE code, which will emit them
out of line, but without the use of function calls. This involves
factoring out the core instruction sequences and using named operands
throughout.
Signed-off-by: Ard Biesheuvel
---
 arch/arm64/include/asm/atomic_ll_sc.h | 139 ++++++++++----
 1 file changed, 72 insertions(+), 67 deletions(-)

diff --git a/arch/arm64/include/asm/atomic_ll_sc.h b/arch/arm64/include/asm/atomic_ll_sc.h
index f5a2d09afb38..5f55f6b8dd7e 100644
--- a/arch/arm64/include/asm/atomic_ll_sc.h
+++ b/arch/arm64/include/asm/atomic_ll_sc.h
@@ -36,6 +36,51 @@
  * this file, which unfortunately don't work on a per-function basis
  * (the optimize attribute silently ignores these options).
  */
+#define __LL_SC_ATOMIC_OP(asm_op, w)					\
+"	prfm	pstl1strm, %[v]					\n"	\
+"1:	ldxr	%" #w "[res], %[v]				\n"	\
+"	" #asm_op "	%" #w "[res], %" #w "[res], %" #w "[i]	\n"	\
+"	stxr	%w[tmp], %" #w "[res], %[v]			\n"	\
+"	cbnz	%w[tmp], 1b"
+
+#define __LL_SC_ATOMIC_OP_RETURN(asm_op, mb, acq, rel, w)		\
+"	prfm	pstl1strm, %[v]					\n"	\
+"1:	ld" #acq "xr	%" #w "[res], %[v]			\n"	\
+"	" #asm_op "	%" #w "[res], %" #w "[res], %" #w "[i]	\n"	\
+"	st" #rel "xr	%w[tmp], %" #w "[res], %[v]		\n"	\
+"	cbnz	%w[tmp], 1b					\n"	\
+"	" #mb
+
+#define __LL_SC_ATOMIC_FETCH_OP(asm_op, mb, acq, rel, w)		\
+"	prfm	pstl1strm, %[v]					\n"	\
+"1:	ld" #acq "xr	%" #w "[res], %[v]			\n"	\
+"	" #asm_op "	%" #w "[val], %" #w "[res], %" #w "[i]	\n"	\
+"	st" #rel "xr	%w[tmp], %" #w "[val], %[v]		\n"	\
+"	cbnz	%w[tmp], 1b					\n"	\
+"	" #mb								\
+
+#define __LL_SC_CMPXCHG_BASE_OP(w, sz, name, mb, acq, rel)		\
+"	prfm	pstl1strm, %[v]					\n"	\
+"1:	ld" #acq "xr" #sz "	%" #w "[oldval], %[v]		\n"	\
+"	eor	%" #w "[tmp], %" #w "[oldval], "			\
+"	%" #w "[old]						\n"	\
+"	cbnz	%" #w "[tmp], 2f				\n"	\
+"	st" #rel "xr" #sz "	%w[tmp], %" #w "[new], %[v]	\n"	\
+"	cbnz	%w[tmp], 1b					\n"	\
+"	" #mb "							\n"	\
+"2:"
+
+#define __LL_SC_CMPXCHG_DBL_OP(mb, rel)					\
+"	prfm	pstl1strm, %[v]					\n"	\
+"1:	ldxp	%[tmp], %[ret], %[v]				\n"	\
+"	eor	%[tmp], %[tmp], %[old1]				\n"	\
+"	eor	%[ret], %[ret], %[old2]				\n"	\
+"	orr	%[ret], %[tmp], %[ret]				\n"	\
+"	cbnz	%[ret], 2f					\n"	\
+"	st" #rel "xp	%w[tmp], %[new1], %[new2], %[v]		\n"	\
+"	cbnz	%w[tmp], 1b					\n"	\
+"	" #mb "							\n"	\
+"2:"								\
 
 #define ATOMIC_OP(op, asm_op)						\
 __LL_SC_INLINE void							\
@@ -44,14 +89,10 @@ __LL_SC_PREFIX(atomic_##op(int i, atomic_t *v))		\
 	unsigned long tmp;						\
 	int result;							\
 									\
-	asm volatile("// atomic_" #op "\n"				\
-"	prfm	pstl1strm, %2\n"					\
-"1:	ldxr	%w0, %2\n"						\
-"	" #asm_op "	%w0, %w0, %w3\n"				\
-"	stxr	%w1, %w0, %2\n"						\
-"	cbnz	%w1, 1b"						\
-	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)		\
-	: "Ir" (i));							\
+	asm volatile("	// atomic_" #op "\n"				\
+	__LL_SC_ATOMIC_OP(asm_op, w)					\
+	: [res]"=&r" (result), [tmp]"=&r" (tmp), [v]"+Q" (v->counter)	\
+	: [i]"Ir" (i));							\
 }									\
 __LL_SC_EXPORT(atomic_##op);
 
@@ -63,14 +104,9 @@ __LL_SC_PREFIX(atomic_##op##_return##name(int i, atomic_t *v))	\
 	int result;							\
 									\
 	asm volatile("// atomic_" #op "_return" #name "\n"		\
-"	prfm	pstl1strm, %2\n"					\
-"1:	ld" #acq "xr	%w0, %2\n"					\
-"	" #asm_op "	%w0, %w0, %w3\n"				\
-"	st" #rel "xr	%w1, %w0, %2\n"					\
-"	cbnz	%w1, 1b\n"						\
-"	" #mb								\
-	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)		\
-	: "Ir" (i)							\
+	__LL_SC_ATOMIC_OP_RETURN(asm_op, mb, acq, rel, w)		\
+	: [res]"=&r" (result), [tmp]"=&r" (tmp), [v]"+Q" (v->counter)	\
+	: [i]"Ir" (i)							\
 	: cl);								\
 									\
 	return result;							\
@@ -85,14 +121,10 @@ __LL_SC_PREFIX(atomic_fetch_##op##name(int i, atomic_t *v))	\
 	int val, result;						\
 									\
 	asm volatile("// atomic_fetch_" #op #name "\n"			\
-"	prfm	pstl1strm, %3\n"					\
-"1:	ld" #acq "xr	%w0, %3\n"					\
-"	" #asm_op "	%w1, %w0, %w4\n"				\
-"	st" #rel "xr	%w2, %w1, %3\n"					\
-"	cbnz	%w2, 1b\n"						\
-"	" #mb								\
-	: "=&r" (result), "=&r" (val), "=&r" (tmp), "+Q" (v->counter)	\
-	: "Ir" (i)							\
+	__LL_SC_ATOMIC_FETCH_OP(asm_op, mb, acq, rel, w)		\
+	: [res]"=&r" (result), [val]"=&r" (val), [tmp]"=&r" (tmp),	\
+	  [v]"+Q" (v->counter)						\
+	: [i]"Ir" (i)							\
 	: cl);								\
 									\
 	return result;							\
@@ -139,13 +171,9 @@ __LL_SC_PREFIX(atomic64_##op(long i, atomic64_t *v))		\
 	unsigned long tmp;						\
 									\
 	asm volatile("// atomic64_" #op "\n"				\
-"	prfm	pstl1strm, %2\n"					\
-"1:	ldxr	%0, %2\n"						\
-"	" #asm_op "	%0, %0, %3\n"					\
-"	stxr	%w1, %0, %2\n"						\
-"	cbnz	%w1, 1b"						\
-	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)		\
-	: "Ir" (i));							\
+	__LL_SC_ATOMIC_OP(asm_op, )					\
+	: [res]"=&r" (result), [tmp]"=&r" (tmp), [v]"+Q" (v->counter)	\
+	: [i]"Ir" (i));							\
 }									\
 __LL_SC_EXPORT(atomic64_##op);
 
@@ -157,14 +185,9 @@ __LL_SC_PREFIX(atomic64_##op##_return##name(long i, atomic64_t *v))\
 	unsigned long tmp;						\
 									\
 	asm volatile("// atomic64_" #op "_return" #name "\n"		\
-"	prfm	pstl1strm, %2\n"					\
-"1:	ld" #acq "xr	%0, %2\n"					\
-"	" #asm_op "	%0, %0, %3\n"					\
-"	st" #rel "xr	%w1, %0, %2\n"					\
-"	cbnz	%w1, 1b\n"						\
-"	" #mb								\
-	: "=&r" (result), "=&r" (tmp), "+Q" (v->counter)		\
-	: "Ir" (i)							\
+	__LL_SC_ATOMIC_OP_RETURN(asm_op, mb, acq, rel, )		\
+	: [res]"=&r" (result), [tmp]"=&r" (tmp), [v]"+Q" (v->counter)	\
+	: [i]"Ir" (i)							\
 	: cl);								\
 									\
 	return result;							\
@@ -179,14 +202,10 @@ __LL_SC_PREFIX(atomic64_fetch_##op##name(long i, atomic64_t *v))\
 	unsigned long tmp;						\
 									\
 	asm volatile("// atomic64_fetch_" #op #name "\n"		\
-"	prfm	pstl1strm, %3\n"					\
-"1:	ld" #acq "xr	%0, %3\n"					\
-"	" #asm_op "	%1, %0, %4\n"					\
-"	st" #rel "xr	%w2, %1, %3\n"					\
-"	cbnz	%w2, 1b\n"						\
-"	" #mb								\
-	: "=&r" (result), "=&r" (val), "=&r" (tmp), "+Q" (v->counter)	\
-	: "Ir" (i)							\
+	__LL_SC_ATOMIC_FETCH_OP(asm_op, mb, acq, rel, )			\
+	: [res]"=&r" (result), [val]"=&r" (val), [tmp]"=&r" (tmp),	\
+	  [v]"+Q" (v->counter)						\
+	: [i]"Ir" (i)							\
 	: cl);								\
 									\
 	return result;							\
@@ -257,14 +276,7 @@ __LL_SC_PREFIX(__cmpxchg_case_##name(volatile void *ptr,	\
 	unsigned long tmp, oldval;					\
 									\
 	asm volatile(							\
-	"	prfm	pstl1strm, %[v]\n"				\
-	"1:	ld" #acq "xr" #sz "\t%" #w "[oldval], %[v]\n"		\
-	"	eor	%" #w "[tmp], %" #w "[oldval], %" #w "[old]\n"	\
-	"	cbnz	%" #w "[tmp], 2f\n"				\
-	"	st" #rel "xr" #sz "\t%w[tmp], %" #w "[new], %[v]\n"	\
-	"	cbnz	%w[tmp], 1b\n"					\
-	"	" #mb "\n"						\
-	"2:"								\
+	__LL_SC_CMPXCHG_BASE_OP(w, sz, name, mb, acq, rel)		\
 	: [tmp] "=&r" (tmp), [oldval] "=&r" (oldval),			\
 	  [v] "+Q" (*(unsigned long *)ptr)				\
 	: [old] "Lr" (old), [new] "r" (new)				\
@@ -304,18 +316,11 @@ __LL_SC_PREFIX(__cmpxchg_double##name(unsigned long old1,	\
 	unsigned long tmp, ret;						\
 									\
 	asm volatile("// __cmpxchg_double" #name "\n"			\
-	"	prfm	pstl1strm, %2\n"				\
-	"1:	ldxp	%0, %1, %2\n"					\
-	"	eor	%0, %0, %3\n"					\
-	"	eor	%1, %1, %4\n"					\
-	"	orr	%1, %0, %1\n"					\
-	"	cbnz	%1, 2f\n"					\
-	"	st" #rel "xp	%w0, %5, %6, %2\n"			\
-	"	cbnz	%w0, 1b\n"					\
-	"	" #mb "\n"						\
-	"2:"								\
-	: "=&r" (tmp), "=&r" (ret), "+Q" (*(unsigned long *)ptr)	\
-	: "r" (old1), "r" (old2), "r" (new1), "r" (new2)		\
+	__LL_SC_CMPXCHG_DBL_OP(mb, rel)					\
+	: [tmp]"=&r" (tmp), [ret]"=&r" (ret),				\
+	  [v]"+Q" (*(unsigned long *)ptr)				\
+	: [old1]"r" (old1), [old2]"r" (old2), [new1]"r" (new1),		\
+	  [new2]"r" (new2)						\
 	: cl);								\
 									\
 	return ret;							\