From patchwork Thu Apr 17 08:38:01 2014
X-Patchwork-Submitter: Vladimir Murzin
X-Patchwork-Id: 4006091
From: Vladimir Murzin <murzin.v@gmail.com>
To: xen-devel@lists.xenproject.org, linux-arm-kernel@lists.infradead.org
Subject: [PATCH RFC] arm64: 32-bit tolerant sync bitops
Date: Thu, 17 Apr 2014 09:38:01 +0100
Message-Id: <1397723881-31648-1-git-send-email-murzin.v@gmail.com>
X-Mailer: git-send-email 1.8.3.2
Cc: Ian.Campbell@citrix.com, konrad.wilk@oracle.com, catalin.marinas@arm.com,
	Vladimir Murzin <murzin.v@gmail.com>, will.deacon@arm.com,
	david.vrabel@citrix.com, JBeulich@suse.com, boris.ostrovsky@oracle.com

Xen assumes that bit operations can be performed on quantities that are
32-bit sized and aligned [1]. On arm64, bitops are based on exclusive
load/store instructions, which guarantee that changes are made atomically
but require the address to be aligned to the data size. Because bitops
operate on 64-bit quantities by default, the address must be 64-bit
aligned, which breaks Xen's assumption about bitops.

This patch implements 32-bit sized/aligned sync bitops.

[1] http://www.gossamer-threads.com/lists/xen/devel/325613

Signed-off-by: Vladimir Murzin <murzin.v@gmail.com>
---
Apart from this patch, other approaches were implemented:

1. Make the existing bitops tolerant of 32-bit size/alignment. The
   changes are minimal, but I'm not sure how broad the side effects
   might be.
2. Provide separate 32-bit sized/aligned operations. This exports a new
   API, which might not be desirable.

All implementations are based on the arm64 version of bitops and were
boot tested only. Hope I didn't miss something ;)

 arch/arm64/include/asm/sync_bitops.h | 60 ++++++++++++++++++++++++++++++++----
 1 file changed, 54 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/sync_bitops.h b/arch/arm64/include/asm/sync_bitops.h
index 8da0bf4..809926f 100644
--- a/arch/arm64/include/asm/sync_bitops.h
+++ b/arch/arm64/include/asm/sync_bitops.h
@@ -3,6 +3,7 @@
 
 #include <asm/bitops.h>
 #include <asm/cmpxchg.h>
+#include <linux/stringify.h>
 
 /* sync_bitops functions are equivalent to the SMP implementation of the
  * original functions, independently from CONFIG_SMP being defined.
@@ -12,14 +13,61 @@
  * who might be on another CPU (e.g. two uniprocessor guests communicating
  * via event channels and grant tables). So we need a variant of the bit
  * ops which are SMP safe even on a UP kernel.
+ *
+ * Xen assumes that bitops are 32-bit sized/aligned
  */
 
-#define sync_set_bit(nr, p)		set_bit(nr, p)
-#define sync_clear_bit(nr, p)		clear_bit(nr, p)
-#define sync_change_bit(nr, p)		change_bit(nr, p)
-#define sync_test_and_set_bit(nr, p)	test_and_set_bit(nr, p)
-#define sync_test_and_clear_bit(nr, p)	test_and_clear_bit(nr, p)
-#define sync_test_and_change_bit(nr, p)	test_and_change_bit(nr, p)
+#define sync_bitop32(name, instr)					\
+static inline void sync_##name(int nr, volatile unsigned long *addr)	\
+{									\
+	unsigned tmp1, tmp2;						\
+	asm volatile(							\
+	"	and	%w1, %w2, #31\n"				\
+	"	eor	%w2, %w2, %w1\n"				\
+	"	mov	%w0, #1\n"					\
+	"	add	%3, %3, %2, lsr #2\n"				\
+	"	lsl	%w1, %w0, %w1\n"				\
+	"1:	ldxr	%w0, [%3]\n"					\
+	__stringify(instr)"	%w0, %w0, %w1\n"			\
+	"	stxr	%w2, %w0, [%3]\n"				\
+	"	cbnz	%w2, 1b\n"					\
+	: "=&r"(tmp1), "=&r"(tmp2)					\
+	: "r"(nr), "r"(addr)						\
+	: "memory");							\
+}
+
+#define sync_testop32(name, instr)					\
+static inline int sync_##name(int nr, volatile unsigned long *addr)	\
+{									\
+	int oldbit;							\
+	unsigned tmp1, tmp2, tmp3;					\
+	asm volatile(							\
+	"	and	%w1, %w4, #31\n"				\
+	"	eor	%w4, %w4, %w1\n"				\
+	"	mov	%w0, #1\n"					\
+	"	add	%5, %5, %4, lsr #2\n"				\
+	"	lsl	%w2, %w0, %w1\n"				\
+	"1:	ldxr	%w0, [%5]\n"					\
+	"	lsr	%w3, %w0, %w1\n"				\
+	__stringify(instr)"	%w0, %w0, %w2\n"			\
+	"	stlxr	%w4, %w0, [%5]\n"				\
+	"	cbnz	%w4, 1b\n"					\
+	"	dmb	ish\n"						\
+	"	and	%w3, %w3, #1\n"					\
+	: "=&r"(tmp1), "=&r"(tmp2), "=&r"(tmp3), "=&r"(oldbit)		\
+	: "r"(nr), "r"(addr)						\
+	: "memory");							\
+	return oldbit;							\
+}
+
+sync_bitop32(set_bit, orr)
+sync_bitop32(clear_bit, bic)
+sync_bitop32(change_bit, eor)
+
+sync_testop32(test_and_set_bit, orr)
+sync_testop32(test_and_clear_bit, bic)
+sync_testop32(test_and_change_bit, eor)
+
 #define sync_test_bit(nr, addr)	test_bit(nr, addr)
 #define sync_cmpxchg			cmpxchg
 
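To illustrate the alignment problem described above (this is not part of
the patch, just a standalone userspace sketch with made-up helper names):
with 64-bit bitops the word holding a given bit is located at a 64-bit
stride, so a bitmap word that is only 32-bit aligned, such as a 32-bit
field at offset 4 inside a shared page, is reached through an address
that is not 8-byte aligned, which ldxr/stxr on a doubleword faults on:

#include <stdint.h>
#include <stdio.h>

/* Byte offset of the word that holds bit 'nr' when the bitmap is
 * addressed as 64-bit words (the arm64 default bitops)... */
static size_t word_off64(unsigned int nr)
{
	return (nr / 64) * 8;
}

/* ...and when it is addressed as 32-bit words, as Xen expects. */
static size_t word_off32(unsigned int nr)
{
	return (nr / 32) * 4;
}

int main(void)
{
	/* Hypothetical bitmap field that is 4-byte but not 8-byte
	 * aligned, e.g. a 32-bit word at offset 4 in a shared page. */
	size_t base = 4;
	unsigned int nr;

	for (nr = 0; nr < 128; nr += 32)
		printf("bit %3u: 64-bit word at offset %zu, 32-bit word at offset %zu\n",
		       nr, base + word_off64(nr), base + word_off32(nr));

	/* Every 64-bit word offset printed above is misaligned for an
	 * 8-byte exclusive access, while the 32-bit word offsets stay
	 * naturally aligned. */
	return 0;
}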
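For reference, a rough plain-C sketch of the behaviour the
sync_bitop32()/sync_testop32() macros are intended to have: atomically
update the 32-bit word that holds bit 'nr' without requiring 64-bit
alignment. The *_sketch names are invented here and GCC atomic builtins
stand in for the exclusive load/store loop, so this is only an
illustration, not the implementation in the patch:

#include <stdint.h>

static inline void sync_set_bit_sketch(int nr, volatile unsigned long *addr)
{
	/* Index the bitmap as 32-bit words, so only 4-byte alignment
	 * of the underlying word is required. */
	volatile uint32_t *word = (volatile uint32_t *)addr + (nr >> 5);
	uint32_t mask = 1u << (nr & 31);

	/* Unordered atomic OR, like an exclusive load/store loop with
	 * no barrier. */
	__atomic_fetch_or(word, mask, __ATOMIC_RELAXED);
}

static inline int sync_test_and_set_bit_sketch(int nr, volatile unsigned long *addr)
{
	volatile uint32_t *word = (volatile uint32_t *)addr + (nr >> 5);
	uint32_t mask = 1u << (nr & 31);
	uint32_t old;

	/* The test-and-op variants in the patch are fully ordered
	 * (store-release plus dmb ish); SEQ_CST is the closest builtin
	 * equivalent in this sketch. */
	old = __atomic_fetch_or(word, mask, __ATOMIC_SEQ_CST);
	return (old & mask) != 0;
}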