From patchwork Thu Nov 1 09:58:39 2018
X-Patchwork-Submitter: Mathieu Desnoyers
X-Patchwork-Id: 10663735
From: Mathieu Desnoyers
To: Peter Zijlstra, "Paul E. McKenney", Boqun Feng
Cc: linux-kernel@vger.kernel.org, linux-api@vger.kernel.org,
    Thomas Gleixner, Andy Lutomirski, Dave Watson, Paul Turner,
    Andrew Morton, Russell King, Ingo Molnar,
    "H. Peter Anvin", Andi Kleen, Chris Lameter, Ben Maurer,
    Steven Rostedt, Josh Triplett, Linus Torvalds, Catalin Marinas,
    Will Deacon, Michael Kerrisk, Joel Fernandes, Mathieu Desnoyers,
    Shuah Khan, linux-kselftest@vger.kernel.org
Subject: [RFC PATCH for 4.21 11/16] cpu-opv/selftests: Provide cpu-op library
Date: Thu, 1 Nov 2018 10:58:39 +0100
Message-Id: <20181101095844.24462-12-mathieu.desnoyers@efficios.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20181101095844.24462-1-mathieu.desnoyers@efficios.com>
References: <20181101095844.24462-1-mathieu.desnoyers@efficios.com>

This cpu-op helper library provides a user-space API to the cpu_opv()
system call.

Signed-off-by: Mathieu Desnoyers
Cc: Thomas Gleixner
Cc: Joel Fernandes
Cc: Peter Zijlstra
Cc: Catalin Marinas
Cc: Dave Watson
Cc: Will Deacon
Cc: Shuah Khan
Cc: Andi Kleen
Cc: linux-kselftest@vger.kernel.org
Cc: "H. Peter Anvin"
Cc: Chris Lameter
Cc: Russell King
Cc: Michael Kerrisk
Cc: "Paul E. McKenney"
Cc: Paul Turner
Cc: Boqun Feng
Cc: Josh Triplett
Cc: Steven Rostedt
Cc: Ben Maurer
Cc: linux-api@vger.kernel.org
Cc: Andy Lutomirski
Cc: Andrew Morton
Cc: Linus Torvalds
---
 tools/testing/selftests/cpu-opv/cpu-op.c | 362 +++++++++++++++++++++++++++++++
 tools/testing/selftests/cpu-opv/cpu-op.h |  43 ++++
 2 files changed, 405 insertions(+)
 create mode 100644 tools/testing/selftests/cpu-opv/cpu-op.c
 create mode 100644 tools/testing/selftests/cpu-opv/cpu-op.h

diff --git a/tools/testing/selftests/cpu-opv/cpu-op.c b/tools/testing/selftests/cpu-opv/cpu-op.c
new file mode 100644
index 000000000000..0cdc39bffbbf
--- /dev/null
+++ b/tools/testing/selftests/cpu-opv/cpu-op.c
@@ -0,0 +1,362 @@
+// SPDX-License-Identifier: LGPL-2.1
+/*
+ * cpu-op.c
+ *
+ * Copyright (C) 2017 Mathieu Desnoyers
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; only
+ * version 2.1 of the License.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ */
+
+#define _GNU_SOURCE
+#include <errno.h>
+#include <sched.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <syscall.h>
+#include <unistd.h>
+
+#include "cpu-op.h"
+
+#define ARRAY_SIZE(arr)	(sizeof(arr) / sizeof((arr)[0]))
+
+#define ACCESS_ONCE(x)		(*(__volatile__ __typeof__(x) *)&(x))
+#define WRITE_ONCE(x, v)	__extension__ ({ ACCESS_ONCE(x) = (v); })
+#define READ_ONCE(x)		ACCESS_ONCE(x)
+
+int cpu_opv(struct cpu_op *cpu_opv, int cpuopcnt, int cpu, int flags)
+{
+	return syscall(__NR_cpu_opv, cpu_opv, cpuopcnt, cpu, flags);
+}
+
+int cpu_op_get_current_cpu(void)
+{
+	int cpu;
+
+	cpu = sched_getcpu();
+	if (cpu < 0) {
+		perror("sched_getcpu()");
+		abort();
+	}
+	return cpu;
+}
+
+int cpu_op_cmpxchg(void *v, void *expect, void *old, void *n, size_t len,
+		   int cpu)
+{
+	struct cpu_op opvec[] = {
+		[0] = {
+			.op = CPU_MEMCPY_OP,
+			.len = len,
+			.u.memcpy_op.dst = (unsigned long)old,
+			.u.memcpy_op.src = (unsigned long)v,
+			.u.memcpy_op.expect_fault_dst = 0,
+			.u.memcpy_op.expect_fault_src = 0,
+		},
+		[1] = {
+			.op = CPU_COMPARE_EQ_OP,
+			.len = len,
+			.u.compare_op.a = (unsigned long)v,
+			.u.compare_op.b = (unsigned long)expect,
+			.u.compare_op.expect_fault_a = 0,
+			.u.compare_op.expect_fault_b = 0,
+		},
+		[2] = {
+			.op = CPU_MEMCPY_OP,
+			.len = len,
+			.u.memcpy_op.dst = (unsigned long)v,
+			.u.memcpy_op.src = (unsigned long)n,
+			.u.memcpy_op.expect_fault_dst = 0,
+			.u.memcpy_op.expect_fault_src = 0,
+		},
+	};
+
+	return cpu_opv(opvec, ARRAY_SIZE(opvec), cpu, 0);
+}
+
+int cpu_op_add(void *v, int64_t count, size_t len, int cpu)
+{
+	struct cpu_op opvec[] = {
+		[0] = {
+			.op = CPU_ADD_OP,
+			.len = len,
+			.u.arithmetic_op.p = (unsigned long)v,
+			.u.arithmetic_op.count = count,
+			.u.arithmetic_op.expect_fault_p = 0,
+		},
+	};
+
+	return cpu_opv(opvec, ARRAY_SIZE(opvec), cpu, 0);
+}
+
+int cpu_op_add_release(void *v, int64_t count, size_t len, int cpu)
+{
+	struct cpu_op opvec[] = {
+		[0] = {
+			.op = CPU_ADD_RELEASE_OP,
+			.len = len,
+			.u.arithmetic_op.p = (unsigned long)v,
+			.u.arithmetic_op.count = count,
+			.u.arithmetic_op.expect_fault_p = 0,
+		},
+	};
+
+	return cpu_opv(opvec, ARRAY_SIZE(opvec), cpu, 0);
+}
+
+int cpu_op_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv,
+			 int cpu)
+{
+	struct cpu_op opvec[] = {
+		[0] = {
+			.op = CPU_COMPARE_EQ_OP,
+			.len = sizeof(intptr_t),
+			.u.compare_op.a = (unsigned long)v,
+			.u.compare_op.b = (unsigned long)&expect,
+			.u.compare_op.expect_fault_a = 0,
+			.u.compare_op.expect_fault_b = 0,
+		},
+		[1] = {
+			.op = CPU_MEMCPY_OP,
+			.len = sizeof(intptr_t),
+			.u.memcpy_op.dst = (unsigned long)v,
+			.u.memcpy_op.src = (unsigned long)&newv,
+			.u.memcpy_op.expect_fault_dst = 0,
+			.u.memcpy_op.expect_fault_src = 0,
+		},
+	};
+
+	return cpu_opv(opvec, ARRAY_SIZE(opvec), cpu, 0);
+}
+
+static int cpu_op_cmpeqv_storep_expect_fault(intptr_t *v, intptr_t expect,
+					     intptr_t *newp, int cpu)
+{
+	struct cpu_op opvec[] = {
+		[0] = {
+			.op = CPU_COMPARE_EQ_OP,
+			.len = sizeof(intptr_t),
+			.u.compare_op.a = (unsigned long)v,
+			.u.compare_op.b = (unsigned long)&expect,
+			.u.compare_op.expect_fault_a = 0,
+			.u.compare_op.expect_fault_b = 0,
+		},
+		[1] = {
+			.op = CPU_MEMCPY_OP,
+			.len = sizeof(intptr_t),
+			.u.memcpy_op.dst = (unsigned long)v,
+			.u.memcpy_op.src = (unsigned long)newp,
+			.u.memcpy_op.expect_fault_dst = 0,
+			/* Return EAGAIN on src fault. */
+			.u.memcpy_op.expect_fault_src = 1,
+		},
+	};
+
+	return cpu_opv(opvec, ARRAY_SIZE(opvec), cpu, 0);
+}
+
+int cpu_op_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot,
+				 off_t voffp, intptr_t *load, int cpu)
+{
+	int ret;
+
+	do {
+		intptr_t oldv = READ_ONCE(*v);
+		intptr_t *newp = (intptr_t *)(oldv + voffp);
+
+		if (oldv == expectnot)
+			return 1;
+		ret = cpu_op_cmpeqv_storep_expect_fault(v, oldv, newp, cpu);
+		if (!ret) {
+			*load = oldv;
+			return 0;
+		}
+	} while (ret > 0);
+
+	return -1;
+}
+
+int cpu_op_cmpeqv_storev_storev(intptr_t *v, intptr_t expect,
+				intptr_t *v2, intptr_t newv2,
+				intptr_t newv, int cpu)
+{
+	struct cpu_op opvec[] = {
+		[0] = {
+			.op = CPU_COMPARE_EQ_OP,
+			.len = sizeof(intptr_t),
+			.u.compare_op.a = (unsigned long)v,
+			.u.compare_op.b = (unsigned long)&expect,
+			.u.compare_op.expect_fault_a = 0,
+			.u.compare_op.expect_fault_b = 0,
+		},
+		[1] = {
+			.op = CPU_MEMCPY_OP,
+			.len = sizeof(intptr_t),
+			.u.memcpy_op.dst = (unsigned long)v2,
+			.u.memcpy_op.src = (unsigned long)&newv2,
+			.u.memcpy_op.expect_fault_dst = 0,
+			.u.memcpy_op.expect_fault_src = 0,
+		},
+		[2] = {
+			.op = CPU_MEMCPY_OP,
+			.len = sizeof(intptr_t),
+			.u.memcpy_op.dst = (unsigned long)v,
+			.u.memcpy_op.src = (unsigned long)&newv,
+			.u.memcpy_op.expect_fault_dst = 0,
+			.u.memcpy_op.expect_fault_src = 0,
+		},
+	};
+
+	return cpu_opv(opvec, ARRAY_SIZE(opvec), cpu, 0);
+}
+
+int cpu_op_cmpeqv_storev_storev_release(intptr_t *v, intptr_t expect,
+					intptr_t *v2, intptr_t newv2,
+					intptr_t newv, int cpu)
+{
+	struct cpu_op opvec[] = {
+		[0] = {
+			.op = CPU_COMPARE_EQ_OP,
+			.len = sizeof(intptr_t),
+			.u.compare_op.a = (unsigned long)v,
+			.u.compare_op.b = (unsigned long)&expect,
+			.u.compare_op.expect_fault_a = 0,
+			.u.compare_op.expect_fault_b = 0,
+		},
+		[1] = {
+			.op = CPU_MEMCPY_OP,
+			.len = sizeof(intptr_t),
+			.u.memcpy_op.dst = (unsigned long)v2,
+			.u.memcpy_op.src = (unsigned long)&newv2,
+			.u.memcpy_op.expect_fault_dst = 0,
+			.u.memcpy_op.expect_fault_src = 0,
+		},
+		[2] = {
+			.op = CPU_MEMCPY_RELEASE_OP,
+			.len = sizeof(intptr_t),
+			.u.memcpy_op.dst = (unsigned long)v,
+			.u.memcpy_op.src = (unsigned long)&newv,
+			.u.memcpy_op.expect_fault_dst = 0,
+			.u.memcpy_op.expect_fault_src = 0,
+		},
+	};
+
+	return cpu_opv(opvec, ARRAY_SIZE(opvec), cpu, 0);
+}
+
+int cpu_op_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect,
+				intptr_t *v2, intptr_t expect2,
+				intptr_t newv, int cpu)
+{
+	struct cpu_op opvec[] = {
+		[0] = {
+			.op = CPU_COMPARE_EQ_OP,
+			.len = sizeof(intptr_t),
+			.u.compare_op.a = (unsigned long)v,
+			.u.compare_op.b = (unsigned long)&expect,
+			.u.compare_op.expect_fault_a = 0,
+			.u.compare_op.expect_fault_b = 0,
+		},
+		[1] = {
+			.op = CPU_COMPARE_EQ_OP,
+			.len = sizeof(intptr_t),
+			.u.compare_op.a = (unsigned long)v2,
+			.u.compare_op.b = (unsigned long)&expect2,
+			.u.compare_op.expect_fault_a = 0,
+			.u.compare_op.expect_fault_b = 0,
+		},
+		[2] = {
+			.op = CPU_MEMCPY_OP,
+			.len = sizeof(intptr_t),
+			.u.memcpy_op.dst = (unsigned long)v,
+			.u.memcpy_op.src = (unsigned long)&newv,
+			.u.memcpy_op.expect_fault_dst = 0,
+			.u.memcpy_op.expect_fault_src = 0,
+		},
+	};
+
+	return cpu_opv(opvec, ARRAY_SIZE(opvec), cpu, 0);
+}
+
+int cpu_op_cmpeqv_memcpy_storev(intptr_t *v, intptr_t expect,
+				void *dst, void *src, size_t len,
+				intptr_t newv, int cpu)
+{
+	struct cpu_op opvec[] = {
+		[0] = {
+			.op = CPU_COMPARE_EQ_OP,
+			.len = sizeof(intptr_t),
+			.u.compare_op.a = (unsigned long)v,
+			.u.compare_op.b = (unsigned long)&expect,
+			.u.compare_op.expect_fault_a = 0,
+			.u.compare_op.expect_fault_b = 0,
+		},
+		[1] = {
+			.op = CPU_MEMCPY_OP,
+			.len = len,
+			.u.memcpy_op.dst = (unsigned long)dst,
+			.u.memcpy_op.src = (unsigned long)src,
+			.u.memcpy_op.expect_fault_dst = 0,
+			.u.memcpy_op.expect_fault_src = 0,
+		},
+		[2] = {
+			.op = CPU_MEMCPY_OP,
+			.len = sizeof(intptr_t),
+			.u.memcpy_op.dst = (unsigned long)v,
+			.u.memcpy_op.src = (unsigned long)&newv,
+			.u.memcpy_op.expect_fault_dst = 0,
+			.u.memcpy_op.expect_fault_src = 0,
+		},
+	};
+
+	return cpu_opv(opvec, ARRAY_SIZE(opvec), cpu, 0);
+}
+
+int cpu_op_cmpeqv_memcpy_storev_release(intptr_t *v, intptr_t expect,
+					void *dst, void *src, size_t len,
+					intptr_t newv, int cpu)
+{
+	struct cpu_op opvec[] = {
+		[0] = {
+			.op = CPU_COMPARE_EQ_OP,
+			.len = sizeof(intptr_t),
+			.u.compare_op.a = (unsigned long)v,
+			.u.compare_op.b = (unsigned long)&expect,
+			.u.compare_op.expect_fault_a = 0,
+			.u.compare_op.expect_fault_b = 0,
+		},
+		[1] = {
+			.op = CPU_MEMCPY_OP,
+			.len = len,
+			.u.memcpy_op.dst = (unsigned long)dst,
+			.u.memcpy_op.src = (unsigned long)src,
+			.u.memcpy_op.expect_fault_dst = 0,
+			.u.memcpy_op.expect_fault_src = 0,
+		},
+		[2] = {
+			.op = CPU_MEMCPY_RELEASE_OP,
+			.len = sizeof(intptr_t),
+			.u.memcpy_op.dst = (unsigned long)v,
+			.u.memcpy_op.src = (unsigned long)&newv,
+			.u.memcpy_op.expect_fault_dst = 0,
+			.u.memcpy_op.expect_fault_src = 0,
+		},
+	};
+
+	return cpu_opv(opvec, ARRAY_SIZE(opvec), cpu, 0);
+}
+
+int cpu_op_addv(intptr_t *v, int64_t count, int cpu)
+{
+	return cpu_op_add(v, count, sizeof(intptr_t), cpu);
+}
diff --git a/tools/testing/selftests/cpu-opv/cpu-op.h b/tools/testing/selftests/cpu-opv/cpu-op.h
new file mode 100644
index 000000000000..853fcb302516
--- /dev/null
+++ b/tools/testing/selftests/cpu-opv/cpu-op.h
@@ -0,0 +1,43 @@
+/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
+/*
+ * cpu-op.h
+ *
+ * (C) Copyright 2017-2018 - Mathieu Desnoyers
+ */
+
+#ifndef CPU_OPV_H
+#define CPU_OPV_H
+
+#include <stdint.h>
+#include <sys/types.h>
+#include <linux/cpu_opv.h>
+
+int cpu_opv(struct cpu_op *cpuopv, int cpuopcnt, int cpu, int flags);
+int cpu_op_get_current_cpu(void);
+
+int cpu_op_cmpxchg(void *v, void *expect, void *old, void *_new, size_t len,
+		   int cpu);
+int cpu_op_add(void *v, int64_t count, size_t len, int cpu);
+int cpu_op_add_release(void *v, int64_t count, size_t len, int cpu);
+
+int cpu_op_cmpeqv_storev(intptr_t *v, intptr_t expect, intptr_t newv, int cpu);
+int cpu_op_cmpnev_storeoffp_load(intptr_t *v, intptr_t expectnot,
+				 off_t voffp, intptr_t *load, int cpu);
+int cpu_op_cmpeqv_storev_storev(intptr_t *v, intptr_t expect,
+				intptr_t *v2, intptr_t newv2,
+				intptr_t newv, int cpu);
+int cpu_op_cmpeqv_storev_storev_release(intptr_t *v, intptr_t expect,
+					intptr_t *v2, intptr_t newv2,
+					intptr_t newv, int cpu);
+int cpu_op_cmpeqv_cmpeqv_storev(intptr_t *v, intptr_t expect,
+				intptr_t *v2, intptr_t expect2,
+				intptr_t newv, int cpu);
+int cpu_op_cmpeqv_memcpy_storev(intptr_t *v, intptr_t expect,
+				void *dst, void *src, size_t len,
+				intptr_t newv, int cpu);
+int cpu_op_cmpeqv_memcpy_storev_release(intptr_t *v, intptr_t expect,
+					void *dst, void *src, size_t len,
+					intptr_t newv, int cpu);
+int cpu_op_addv(intptr_t *v, int64_t count, int cpu);
+
+#endif /* CPU_OPV_H */
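
For illustration only (not part of the patch): a caller might use this library
to maintain per-CPU counters roughly as sketched below. The counter array, its
size, and the error handling are assumptions made for the example; running it
requires a kernel that implements the cpu_opv() system call, and cpu_op_add()
returns 0 on success or -1 with errno set, following the syscall convention.

	/* Example only: per-CPU counter increment through the cpu-op library. */
	#include <stdint.h>
	#include <stdio.h>
	#include <stdlib.h>

	#include "cpu-op.h"

	#define NR_CPUS_MAX	4096		/* illustrative upper bound */

	static intptr_t counters[NR_CPUS_MAX];	/* hypothetical per-CPU counters */

	int main(void)
	{
		int cpu, ret;

		/* Operate on the CPU this thread currently runs on. */
		cpu = cpu_op_get_current_cpu();

		/* Ask the kernel to add 1 to that CPU's counter slot. */
		ret = cpu_op_add(&counters[cpu], 1, sizeof(intptr_t), cpu);
		if (ret) {
			perror("cpu_op_add");
			exit(EXIT_FAILURE);
		}
		printf("counter[%d] = %ld\n", cpu, (long)counters[cpu]);
		return 0;
	}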