From patchwork Tue Sep 12 18:35:50 2023
X-Patchwork-Submitter: Shawn Anastasio
X-Patchwork-Id: 13382033
From: Shawn Anastasio
To: xen-devel@lists.xenproject.org
Cc: Timothy Pearson , Jan Beulich , Shawn Anastasio
Subject: [PATCH v5 1/5] xen/ppc: Implement atomic.h
Date: Tue, 12 Sep 2023 13:35:50 -0500
Message-Id: 
X-Mailer: git-send-email 2.30.2
In-Reply-To: 
References: 
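Before the patch body, a short caller-side sketch of the interface this patch introduces. The atomic_set(), atomic_add_return(), atomic_cmpxchg(), read_atomic() and write_atomic() operations are the ones defined in the diff below; the enclosing function, the variable names, and the <xen/atomic.h>/<xen/types.h> includes are illustrative assumptions on the editor's part, not part of the submission, and the cmpxchg-based calls only work once the __cmpxchg dependency noted in the commit message is in place.

/*
 * Hypothetical usage sketch -- not part of the patch. Assumes the common
 * <xen/atomic.h> wrapper pulls in this asm/atomic.h implementation.
 */
#include <xen/types.h>
#include <xen/atomic.h>

static atomic_t refcount;
static uint32_t status;

static void atomic_usage_sketch(void)
{
    uint32_t s;

    atomic_set(&refcount, 1);

    /* lwarx/stwcx. read-modify-write loop; returns the updated value. */
    if ( atomic_add_return(1, &refcount) == 2 )
    {
        /* Compare-and-swap back to 0 only if the counter is still 2. */
        atomic_cmpxchg(&refcount, 2, 0);
    }

    /* Single-copy atomic load/store of an aligned scalar. */
    s = read_atomic(&status);
    write_atomic(&status, s | 1);
}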
MIME-Version: 1.0 Implement atomic.h for PPC, based off of the original Xen 3.2 implementation. This implementation depends on some functions that are not yet defined (notably __cmpxchg), so it can't yet be used. Signed-off-by: Shawn Anastasio Acked-by: Jan Beulich --- v5: No changes. v4: - Clarify dependency on __cmpxchg which doesn't get implemented until future patch - Drop !CONFIG_SMP case for PPC_ATOMIC_{ENTRY,EXIT}_BARRIER macro definitions - Add missing newlines to inline asm instructions preceeding PPC_ATOMIC_EXIT_BARRIER. This was only discovered by the change to drop the !CONFIG_SMP definitions of those macros. - Fix line wrapping in atomic_compareandswap - Fix formatting of arch_cmpxchg macro v3: - Drop full copyright text headers - Drop unnecessary spaces after casts - const-ize write_atomic_size - Don't use GNU 0-length array extension in read_atomic - Use "+m" asm constraint instead of separate "=m" and "m" - Fix line-continuing backslash formatting in arch_cmpxchg v2: - Fix style of asm block constraints to include required spaces - Fix macro local variable naming (use trailing underscore instead of leading) - Drop unnecessary parens in __atomic_add_unless xen/arch/ppc/include/asm/atomic.h | 385 ++++++++++++++++++++++++++++++ xen/arch/ppc/include/asm/memory.h | 14 ++ 2 files changed, 399 insertions(+) create mode 100644 xen/arch/ppc/include/asm/atomic.h create mode 100644 xen/arch/ppc/include/asm/memory.h -- 2.30.2 diff --git a/xen/arch/ppc/include/asm/atomic.h b/xen/arch/ppc/include/asm/atomic.h new file mode 100644 index 0000000000..64168aa3f1 --- /dev/null +++ b/xen/arch/ppc/include/asm/atomic.h @@ -0,0 +1,385 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +/* + * PowerPC64 atomic operations + * + * Copyright (C) 2001 Paul Mackerras , IBM + * Copyright (C) 2001 Anton Blanchard , IBM + * Copyright Raptor Engineering LLC + */ + +#ifndef _ASM_PPC64_ATOMIC_H_ +#define _ASM_PPC64_ATOMIC_H_ + +#include + +#include + +static inline int atomic_read(const atomic_t *v) +{ + return *(volatile int *)&v->counter; +} + +static inline int _atomic_read(atomic_t v) +{ + return v.counter; +} + +static inline void atomic_set(atomic_t *v, int i) +{ + v->counter = i; +} + +static inline void _atomic_set(atomic_t *v, int i) +{ + v->counter = i; +} + +void __bad_atomic_read(const volatile void *p, void *res); +void __bad_atomic_size(void); + +#define build_atomic_read(name, insn, type) \ + static inline type name(const volatile type *addr) \ + { \ + type ret; \ + asm volatile ( insn "%U1%X1 %0,%1" : "=r" (ret) : "m<>" (*addr) ); \ + return ret; \ + } + +#define build_atomic_write(name, insn, type) \ + static inline void name(volatile type *addr, type val) \ + { \ + asm volatile ( insn "%U0%X0 %1,%0" : "=m<>" (*addr) : "r" (val) ); \ + } + +#define build_add_sized(name, ldinsn, stinsn, type) \ + static inline void name(volatile type *addr, type val) \ + { \ + type t; \ + asm volatile ( "1: " ldinsn " %0,0,%3\n" \ + "add%I2 %0,%0,%2\n" \ + stinsn " %0,0,%3 \n" \ + "bne- 1b\n" \ + : "=&r" (t), "+m" (*addr) \ + : "r" (val), "r" (addr) \ + : "cc" ); \ + } + +build_atomic_read(read_u8_atomic, "lbz", uint8_t) +build_atomic_read(read_u16_atomic, "lhz", uint16_t) +build_atomic_read(read_u32_atomic, "lwz", uint32_t) +build_atomic_read(read_u64_atomic, "ldz", uint64_t) + +build_atomic_write(write_u8_atomic, "stb", uint8_t) +build_atomic_write(write_u16_atomic, "sth", uint16_t) +build_atomic_write(write_u32_atomic, "stw", uint32_t) +build_atomic_write(write_u64_atomic, "std", uint64_t) + 
+build_add_sized(add_u8_sized, "lbarx", "stbcx.",uint8_t) +build_add_sized(add_u16_sized, "lharx", "sthcx.", uint16_t) +build_add_sized(add_u32_sized, "lwarx", "stwcx.", uint32_t) + +#undef build_atomic_read +#undef build_atomic_write +#undef build_add_sized + +static always_inline void read_atomic_size(const volatile void *p, void *res, + unsigned int size) +{ + ASSERT(IS_ALIGNED((vaddr_t)p, size)); + switch ( size ) + { + case 1: + *(uint8_t *)res = read_u8_atomic(p); + break; + case 2: + *(uint16_t *)res = read_u16_atomic(p); + break; + case 4: + *(uint32_t *)res = read_u32_atomic(p); + break; + case 8: + *(uint64_t *)res = read_u64_atomic(p); + break; + default: + __bad_atomic_read(p, res); + break; + } +} + +static always_inline void write_atomic_size(volatile void *p, const void *val, + unsigned int size) +{ + ASSERT(IS_ALIGNED((vaddr_t)p, size)); + switch ( size ) + { + case 1: + write_u8_atomic(p, *(const uint8_t *)val); + break; + case 2: + write_u16_atomic(p, *(const uint16_t *)val); + break; + case 4: + write_u32_atomic(p, *(const uint32_t *)val); + break; + case 8: + write_u64_atomic(p, *(const uint64_t *)val); + break; + default: + __bad_atomic_size(); + break; + } +} + +#define read_atomic(p) \ + ({ \ + union { \ + typeof(*(p)) val; \ + char c[sizeof(*(p))]; \ + } x_; \ + read_atomic_size(p, x_.c, sizeof(*(p))); \ + x_.val; \ + }) + +#define write_atomic(p, x) \ + do \ + { \ + typeof(*(p)) x_ = (x); \ + write_atomic_size(p, &x_, sizeof(*(p))); \ + } while ( 0 ) + +#define add_sized(p, x) \ + ({ \ + typeof(*(p)) x_ = (x); \ + switch ( sizeof(*(p)) ) \ + { \ + case 1: \ + add_u8_sized((uint8_t *)(p), x_); \ + break; \ + case 2: \ + add_u16_sized((uint16_t *)(p), x_); \ + break; \ + case 4: \ + add_u32_sized((uint32_t *)(p), x_); \ + break; \ + default: \ + __bad_atomic_size(); \ + break; \ + } \ + }) + +static inline void atomic_add(int a, atomic_t *v) +{ + int t; + + asm volatile ( "1: lwarx %0,0,%3\n" + "add %0,%2,%0\n" + "stwcx. %0,0,%3\n" + "bne- 1b" + : "=&r" (t), "+m" (v->counter) + : "r" (a), "r" (&v->counter) + : "cc" ); +} + +static inline int atomic_add_return(int a, atomic_t *v) +{ + int t; + + asm volatile ( PPC_ATOMIC_ENTRY_BARRIER + "1: lwarx %0,0,%2\n" + "add %0,%1,%0\n" + "stwcx. %0,0,%2\n" + "bne- 1b\n" + PPC_ATOMIC_EXIT_BARRIER + : "=&r" (t) + : "r" (a), "r" (&v->counter) + : "cc", "memory" ); + + return t; +} + +static inline void atomic_sub(int a, atomic_t *v) +{ + int t; + + asm volatile ( "1: lwarx %0,0,%3\n" + "subf %0,%2,%0\n" + "stwcx. %0,0,%3\n" + "bne- 1b" + : "=&r" (t), "+m" (v->counter) + : "r" (a), "r" (&v->counter) + : "cc" ); +} + +static inline int atomic_sub_return(int a, atomic_t *v) +{ + int t; + + asm volatile ( PPC_ATOMIC_ENTRY_BARRIER + "1: lwarx %0,0,%2\n" + "subf %0,%1,%0\n" + "stwcx. %0,0,%2\n" + "bne- 1b\n" + PPC_ATOMIC_EXIT_BARRIER + : "=&r" (t) + : "r" (a), "r" (&v->counter) + : "cc", "memory" ); + + return t; +} + +static inline void atomic_inc(atomic_t *v) +{ + int t; + + asm volatile ( "1: lwarx %0,0,%2\n" + "addic %0,%0,1\n" + "stwcx. %0,0,%2\n" + "bne- 1b" + : "=&r" (t), "+m" (v->counter) + : "r" (&v->counter) + : "cc" ); +} + +static inline int atomic_inc_return(atomic_t *v) +{ + int t; + + asm volatile ( PPC_ATOMIC_ENTRY_BARRIER + "1: lwarx %0,0,%1\n" + "addic %0,%0,1\n" + "stwcx. 
%0,0,%1\n" + "bne- 1b\n" + PPC_ATOMIC_EXIT_BARRIER + : "=&r" (t) + : "r" (&v->counter) + : "cc", "memory" ); + + return t; +} + +static inline void atomic_dec(atomic_t *v) +{ + int t; + + asm volatile ( "1: lwarx %0,0,%2\n" + "addic %0,%0,-1\n" + "stwcx. %0,0,%2\n" + "bne- 1b" + : "=&r" (t), "+m" (v->counter) + : "r" (&v->counter) + : "cc" ); +} + +static inline int atomic_dec_return(atomic_t *v) +{ + int t; + + asm volatile ( PPC_ATOMIC_ENTRY_BARRIER + "1: lwarx %0,0,%1\n" + "addic %0,%0,-1\n" + "stwcx. %0,0,%1\n" + "bne- 1b\n" + PPC_ATOMIC_EXIT_BARRIER + : "=&r" (t) + : "r" (&v->counter) + : "cc", "memory" ); + + return t; +} + +/* + * Atomically test *v and decrement if it is greater than 0. + * The function returns the old value of *v minus 1. + */ +static inline int atomic_dec_if_positive(atomic_t *v) +{ + int t; + + asm volatile( PPC_ATOMIC_ENTRY_BARRIER + "1: lwarx %0,0,%1 # atomic_dec_if_positive\n" + "addic. %0,%0,-1\n" + "blt- 2f\n" + "stwcx. %0,0,%1\n" + "bne- 1b\n" + PPC_ATOMIC_EXIT_BARRIER + "2:" + : "=&r" (t) + : "r" (&v->counter) + : "cc", "memory" ); + + return t; +} + +static inline atomic_t atomic_compareandswap(atomic_t old, atomic_t new, + atomic_t *v) +{ + atomic_t rc; + rc.counter = __cmpxchg(&v->counter, old.counter, new.counter, + sizeof(v->counter)); + return rc; +} + +#define arch_cmpxchg(ptr, o, n) \ + ({ \ + __typeof__(*(ptr)) o_ = (o); \ + __typeof__(*(ptr)) n_ = (n); \ + (__typeof__(*(ptr))) __cmpxchg((ptr), (unsigned long)o_, \ + (unsigned long)n_, sizeof(*(ptr))); \ + }) + +static inline int atomic_cmpxchg(atomic_t *v, int old, int new) +{ + return arch_cmpxchg(&v->counter, old, new); +} + +#define ATOMIC_OP(op, insn, suffix, sign) \ + static inline void atomic_##op(int a, atomic_t *v) \ + { \ + int t; \ + asm volatile ( "1: lwarx %0,0,%3\n" \ + insn "%I2" suffix " %0,%0,%2\n" \ + "stwcx. %0,0,%3 \n" \ + "bne- 1b\n" \ + : "=&r" (t), "+m" (v->counter) \ + : "r" #sign (a), "r" (&v->counter) \ + : "cc" ); \ + } + +ATOMIC_OP(and, "and", ".", K) + +static inline int atomic_sub_and_test(int i, atomic_t *v) +{ + return atomic_sub_return(i, v) == 0; +} + +static inline int atomic_inc_and_test(atomic_t *v) +{ + return atomic_add_return(1, v) == 0; +} + +static inline int atomic_dec_and_test(atomic_t *v) +{ + return atomic_sub_return(1, v) == 0; +} + +static inline int atomic_add_negative(int i, atomic_t *v) +{ + return atomic_add_return(i, v) < 0; +} + +static inline int __atomic_add_unless(atomic_t *v, int a, int u) +{ + int c, old; + + c = atomic_read(v); + while (c != u && (old = atomic_cmpxchg(v, c, c + a)) != c) + c = old; + return c; +} + +static inline int atomic_add_unless(atomic_t *v, int a, int u) +{ + return __atomic_add_unless(v, a, u); +} + +#endif /* _ASM_PPC64_ATOMIC_H_ */ diff --git a/xen/arch/ppc/include/asm/memory.h b/xen/arch/ppc/include/asm/memory.h new file mode 100644 index 0000000000..57310eb690 --- /dev/null +++ b/xen/arch/ppc/include/asm/memory.h @@ -0,0 +1,14 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +/* + * Copyright (C) IBM Corp. 
2005
+ *
+ * Authors: Jimi Xenidis
+ */
+
+#ifndef _ASM_MEMORY_H_
+#define _ASM_MEMORY_H_
+
+#define PPC_ATOMIC_ENTRY_BARRIER "sync\n"
+#define PPC_ATOMIC_EXIT_BARRIER "sync\n"
+
+#endif

From patchwork Tue Sep 12 18:35:51 2023
X-Patchwork-Submitter: Shawn Anastasio
X-Patchwork-Id: 13382034
From: Shawn Anastasio
To: xen-devel@lists.xenproject.org
Cc: Timothy Pearson , Jan
Beulich , Shawn Anastasio Subject: [PATCH v5 2/5] xen/ppc: Implement bitops.h Date: Tue, 12 Sep 2023 13:35:51 -0500 Message-Id: <06892692342540b6dc1af4d530fe3c2c25cf4a2e.1694543103.git.sanastasio@raptorengineering.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: References: MIME-Version: 1.0 Implement bitops.h, based on Linux's implementation as of commit 5321d1b1afb9a17302c6cec79f0cbf823eb0d3fc. Though it is based off of Linux's implementation, this code diverges significantly in a number of ways: - Bitmap entries changed to 32-bit words to match X86 and Arm on Xen - PPC32-specific code paths dropped - Formatting completely re-done to more closely line up with Xen. Including 4 space indentation. - Use GCC's __builtin_popcount* for hweight* implementation Signed-off-by: Shawn Anastasio Acked-by: Jan Beulich --- v5: - Switch lingering ulong bitop parameters/return values to uint. v4: - Mention __builtin_popcount impelmentation of hweight* in message - Change type of BITOP_MASK expression from unsigned long to unsigned int - Fix remaining underscore-prefixed macro params/vars - Fix line wrapping in test_and_clear_bit{,s} - Change type of test_and_clear_bits' pointer parameter from void * to unsigned int *. This was already done for other functions but missed here. - De-macroize test_and_set_bits. - Fix formatting of ffs{,l} macro's unary operators - Drop extra blank in ffz() macro definition v3: - Fix style of inline asm blocks. - Fix underscore-prefixed macro parameters/variables - Use __builtin_popcount for hweight* implementation - Update C functions to use proper Xen coding style v2: - Clarify changes from Linux implementation in commit message - Use PPC_ATOMIC_{ENTRY,EXIT}_BARRIER macros from instead of hardcoded "sync" instructions in inline assembly blocks. - Fix macro line-continuing backslash spacing - Fix hard-tab usage in find_*_bit C functions. xen/arch/ppc/include/asm/bitops.h | 332 +++++++++++++++++++++++++++++- 1 file changed, 329 insertions(+), 3 deletions(-) -- 2.30.2 diff --git a/xen/arch/ppc/include/asm/bitops.h b/xen/arch/ppc/include/asm/bitops.h index 548e97b414..0f75ff3f9d 100644 --- a/xen/arch/ppc/include/asm/bitops.h +++ b/xen/arch/ppc/include/asm/bitops.h @@ -1,9 +1,335 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +/* + * Adapted from Linux's arch/powerpc/include/asm/bitops.h. + * + * Merged version by David Gibson . + * Based on ppc64 versions by: Dave Engebretsen, Todd Inglett, Don + * Reed, Pat McCarthy, Peter Bergner, Anton Blanchard. They + * originally took it from the ppc32 code. 
+ */ #ifndef _ASM_PPC_BITOPS_H #define _ASM_PPC_BITOPS_H +#include + +#define __set_bit(n, p) set_bit(n, p) +#define __clear_bit(n, p) clear_bit(n, p) + +#define BITOP_BITS_PER_WORD 32 +#define BITOP_MASK(nr) (1U << ((nr) % BITOP_BITS_PER_WORD)) +#define BITOP_WORD(nr) ((nr) / BITOP_BITS_PER_WORD) +#define BITS_PER_BYTE 8 + /* PPC bit number conversion */ -#define PPC_BITLSHIFT(be) (BITS_PER_LONG - 1 - (be)) -#define PPC_BIT(bit) (1UL << PPC_BITLSHIFT(bit)) -#define PPC_BITMASK(bs, be) ((PPC_BIT(bs) - PPC_BIT(be)) | PPC_BIT(bs)) +#define PPC_BITLSHIFT(be) (BITS_PER_LONG - 1 - (be)) +#define PPC_BIT(bit) (1UL << PPC_BITLSHIFT(bit)) +#define PPC_BITMASK(bs, be) ((PPC_BIT(bs) - PPC_BIT(be)) | PPC_BIT(bs)) + +/* Macro for generating the ***_bits() functions */ +#define DEFINE_BITOP(fn, op, prefix) \ +static inline void fn(unsigned int mask, \ + volatile unsigned int *p_) \ +{ \ + unsigned int old; \ + unsigned int *p = (unsigned int *)p_; \ + asm volatile ( prefix \ + "1: lwarx %0,0,%3,0\n" \ + #op "%I2 %0,%0,%2\n" \ + "stwcx. %0,0,%3\n" \ + "bne- 1b\n" \ + : "=&r" (old), "+m" (*p) \ + : "rK" (mask), "r" (p) \ + : "cc", "memory" ); \ +} + +DEFINE_BITOP(set_bits, or, "") +DEFINE_BITOP(change_bits, xor, "") + +#define DEFINE_CLROP(fn, prefix) \ +static inline void fn(unsigned int mask, volatile unsigned int *p_) \ +{ \ + unsigned int old; \ + unsigned int *p = (unsigned int *)p_; \ + asm volatile ( prefix \ + "1: lwarx %0,0,%3,0\n" \ + "andc %0,%0,%2\n" \ + "stwcx. %0,0,%3\n" \ + "bne- 1b\n" \ + : "=&r" (old), "+m" (*p) \ + : "r" (mask), "r" (p) \ + : "cc", "memory" ); \ +} + +DEFINE_CLROP(clear_bits, "") + +static inline void set_bit(int nr, volatile void *addr) +{ + set_bits(BITOP_MASK(nr), (volatile unsigned int *)addr + BITOP_WORD(nr)); +} +static inline void clear_bit(int nr, volatile void *addr) +{ + clear_bits(BITOP_MASK(nr), (volatile unsigned int *)addr + BITOP_WORD(nr)); +} + +/** + * test_bit - Determine whether a bit is set + * @nr: bit number to test + * @addr: Address to start counting from + */ +static inline int test_bit(int nr, const volatile void *addr) +{ + const volatile unsigned int *p = addr; + return 1 & (p[BITOP_WORD(nr)] >> (nr & (BITOP_BITS_PER_WORD - 1))); +} + +static inline unsigned int test_and_clear_bits( + unsigned int mask, + volatile unsigned int *p) +{ + unsigned int old, t; + + asm volatile ( PPC_ATOMIC_ENTRY_BARRIER + "1: lwarx %0,0,%3,0\n" + "andc %1,%0,%2\n" + "stwcx. %1,0,%3\n" + "bne- 1b\n" + PPC_ATOMIC_EXIT_BARRIER + : "=&r" (old), "=&r" (t) + : "r" (mask), "r" (p) + : "cc", "memory" ); + + return (old & mask); +} + +static inline int test_and_clear_bit(unsigned int nr, + volatile void *addr) +{ + return test_and_clear_bits(BITOP_MASK(nr), (volatile unsigned int *)addr + + BITOP_WORD(nr)) != 0; +} + +static inline unsigned int test_and_set_bits( + unsigned int mask, + volatile unsigned int *p) +{ + unsigned int old, t; + + asm volatile ( PPC_ATOMIC_ENTRY_BARRIER + "1: lwarx %0,0,%3,0\n" + "or%I2 %1,%0,%2\n" + "stwcx. %1,0,%3\n" + "bne- 1b\n" + PPC_ATOMIC_EXIT_BARRIER + : "=&r" (old), "=&r" (t) + : "rK" (mask), "r" (p) + : "cc", "memory" ); + + return (old & mask); +} + +static inline int test_and_set_bit(unsigned int nr, volatile void *addr) +{ + return test_and_set_bits(BITOP_MASK(nr), (volatile unsigned int *)addr + + BITOP_WORD(nr)) != 0; +} + +/** + * __test_and_set_bit - Set a bit and return its old value + * @nr: Bit to set + * @addr: Address to count from + * + * This operation is non-atomic and can be reordered. 
+ * If two examples of this operation race, one can appear to succeed + * but actually fail. You must protect multiple accesses with a lock. + */ +static inline int __test_and_set_bit(int nr, volatile void *addr) +{ + unsigned int mask = BITOP_MASK(nr); + volatile unsigned int *p = ((volatile unsigned int *)addr) + BITOP_WORD(nr); + unsigned int old = *p; + + *p = old | mask; + return (old & mask) != 0; +} + +/** + * __test_and_clear_bit - Clear a bit and return its old value + * @nr: Bit to clear + * @addr: Address to count from + * + * This operation is non-atomic and can be reordered. + * If two examples of this operation race, one can appear to succeed + * but actually fail. You must protect multiple accesses with a lock. + */ +static inline int __test_and_clear_bit(int nr, volatile void *addr) +{ + unsigned int mask = BITOP_MASK(nr); + volatile unsigned int *p = ((volatile unsigned int *)addr) + BITOP_WORD(nr); + unsigned int old = *p; + + *p = old & ~mask; + return (old & mask) != 0; +} + +#define flsl(x) generic_flsl(x) +#define fls(x) generic_fls(x) +#define ffs(x) ({ unsigned int t_ = (x); fls(t_ & -t_); }) +#define ffsl(x) ({ unsigned long t_ = (x); flsl(t_ & -t_); }) + +/* Based on linux/include/asm-generic/bitops/ffz.h */ +/* + * ffz - find first zero in word. + * @word: The word to search + * + * Undefined if no zero exists, so code should check against ~0UL first. + */ +#define ffz(x) __ffs(~(x)) + +/** + * hweightN - returns the hamming weight of a N-bit word + * @x: the word to weigh + * + * The Hamming Weight of a number is the total number of bits set in it. + */ +#define hweight64(x) __builtin_popcountll(x) +#define hweight32(x) __builtin_popcount(x) +#define hweight16(x) __builtin_popcount((uint16_t)(x)) +#define hweight8(x) __builtin_popcount((uint8_t)(x)) + +/* Based on linux/include/asm-generic/bitops/builtin-__ffs.h */ +/** + * __ffs - find first bit in word. + * @word: The word to search + * + * Undefined if no bit exists, so code should check against 0 first. + */ +static always_inline unsigned long __ffs(unsigned long word) +{ + return __builtin_ctzl(word); +} + +/** + * find_first_set_bit - find the first set bit in @word + * @word: the word to search + * + * Returns the bit-number of the first set bit (first bit being 0). + * The input must *not* be zero. + */ +#define find_first_set_bit(x) (ffsl(x) - 1) + +/* + * Find the first set bit in a memory region. + */ +static inline unsigned long find_first_bit(const unsigned long *addr, + unsigned long size) +{ + const unsigned long *p = addr; + unsigned long result = 0; + unsigned long tmp; + + while ( size & ~(BITS_PER_LONG - 1) ) + { + if ( (tmp = *(p++)) ) + goto found; + result += BITS_PER_LONG; + size -= BITS_PER_LONG; + } + if ( !size ) + return result; + + tmp = (*p) & (~0UL >> (BITS_PER_LONG - size)); + if ( tmp == 0UL ) /* Are any bits set? */ + return result + size; /* Nope. 
*/ + found: + return result + __ffs(tmp); +} + +static inline unsigned long find_next_bit(const unsigned long *addr, + unsigned long size, + unsigned long offset) +{ + const unsigned long *p = addr + BITOP_WORD(offset); + unsigned long result = offset & ~(BITS_PER_LONG - 1); + unsigned long tmp; + + if ( offset >= size ) + return size; + size -= result; + offset %= BITS_PER_LONG; + if ( offset ) + { + tmp = *(p++); + tmp &= (~0UL << offset); + if ( size < BITS_PER_LONG ) + goto found_first; + if ( tmp ) + goto found_middle; + size -= BITS_PER_LONG; + result += BITS_PER_LONG; + } + while ( size & ~(BITS_PER_LONG - 1) ) + { + if ( (tmp = *(p++)) ) + goto found_middle; + result += BITS_PER_LONG; + size -= BITS_PER_LONG; + } + if ( !size ) + return result; + tmp = *p; + + found_first: + tmp &= (~0UL >> (BITS_PER_LONG - size)); + if ( tmp == 0UL ) /* Are any bits set? */ + return result + size; /* Nope. */ + found_middle: + return result + __ffs(tmp); +} + +/* + * This implementation of find_{first,next}_zero_bit was stolen from + * Linus' asm-alpha/bitops.h. + */ +static inline unsigned long find_next_zero_bit(const unsigned long *addr, + unsigned long size, + unsigned long offset) +{ + const unsigned long *p = addr + BITOP_WORD(offset); + unsigned long result = offset & ~(BITS_PER_LONG - 1); + unsigned long tmp; + + if ( offset >= size ) + return size; + size -= result; + offset %= BITS_PER_LONG; + if ( offset ) + { + tmp = *(p++); + tmp |= ~0UL >> (BITS_PER_LONG - offset); + if ( size < BITS_PER_LONG ) + goto found_first; + if ( ~tmp ) + goto found_middle; + size -= BITS_PER_LONG; + result += BITS_PER_LONG; + } + while ( size & ~(BITS_PER_LONG - 1) ) + { + if ( ~(tmp = *(p++)) ) + goto found_middle; + result += BITS_PER_LONG; + size -= BITS_PER_LONG; + } + if ( !size ) + return result; + tmp = *p; + + found_first: + tmp |= ~0UL << size; + if ( tmp == ~0UL ) /* Are any bits zero? */ + return result + size; /* Nope. 
*/
+ found_middle:
+    return result + ffz(tmp);
+}
 #endif /* _ASM_PPC_BITOPS_H */

From patchwork Tue Sep 12 18:35:52 2023
X-Patchwork-Submitter: Shawn Anastasio
X-Patchwork-Id: 13382035
From: Shawn Anastasio
To: xen-devel@lists.xenproject.org
Cc: Timothy Pearson , Jan Beulich , Shawn Anastasio , Andrew Cooper , George Dunlap , Julien Grall , Stefano Stabellini , Wei Liu ,
Tamas K Lengyel , Alexandru Isaila , Petre Pircalabu Subject: [PATCH v5 3/5] xen/ppc: Define minimal stub headers required for full build Date: Tue, 12 Sep 2023 13:35:52 -0500 Message-Id: <9f11482dbcd1eb345c4976763204a086e7f59b97.1694543103.git.sanastasio@raptorengineering.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: References: MIME-Version: 1.0 Additionally, change inclusion of asm/ headers to corresponding xen/ ones throughout arch/ppc now that they work. Signed-off-by: Shawn Anastasio Acked-by: Jan Beulich --- v5: - Drop vm_event.h in favor of asm-generic variant - (numa.h) Fix lingering reference to Arm v4: - (device.h) Fix underscore prefixes in DT_DEVICE_START macro - (mm.h) Fix padding blanks in newly added *_to_* macros - (mm.h) Drop inaccurate comment for virt_to_mfn/mfn_to_virt. The implementation of these macros remains unchanged. - (mm.h) s/zero/NULL/g in comment of page_info::v::inuse::domain - (mm.h) Use uint_t instead of u types in struct page_info - (p2m.h) Drop ARM-specific p2m_type_t enumerations - (system.h) Fix hard tabs in xchg() macro - (system.h) Add missing '\n' in build_xchg inline assembly before PPC_ATOMIC_EXIT_BARRIER. - (system.h) Fix formatting of switch statements. - (system.h) Fix formatting of cmpxchg() macro. - (system.h) Fix formatting of mb/rmb/wmb macros - (system.h) Drop dead !CONFIG_SMP definition of smp memory barriers - (system.h) Replace hand-coded asm memory barriers w/ barrier() - (time.h) Fix overly-long definition of force_update_vcpu_system_time v3: - Drop procarea.h - Add SPDX headers to all headers touched or added by this patch - (altp2m.h) Use ASSERT_UNREACHABLE for function that is meant to stay unimplemented - (current.h) Consistently use plain C types in struct cpu_info - (current.h) Fix formatting of inline asm in get_cpu_info() - (device.h) Drop unnecessary DEVICE_GIC enumeration inherited from ARM - (div64.h) Drop underscore-prefixed macro variables, fix formatting - (domain.h) Drop unnecessary space after cast in guest_mode() - (guest_access.h) Clean up line wrapping of functions w/ many parameters - (guest_atomics.h) Avoid overly-long stub macros - (grant_table.h) Add include guard + SPDX - (hypercall.h) Add include guard + SPDX - (mem_access.h) Add include guard + SPDX - (mm.h) BUG_ON("unimplemented") for mfn_to_gfn/set_gpfn_from_mfn - (mm.h) Define PDX_GROUP_SHIFT in terms of XEN_PT_SHIFT_LVL_3 - (monitor.h) BUG_ON("unimplemented") for arch_monitor_get_capabilities - (system.h) Fix formatting of inline assembly + macros - (time.h) Fix too-deep indentation v2: - Use BUG_ON("unimplemented") instead of BUG() for unimplemented functions to make searching easier. - (altp2m.h) Drop Intel license in favor of an SPDX header - (altp2m.h) Drop include in favor of and forward declarations of struct domain and struct vcpu. 
- (bug.h) Add TODO comments above BUG and BUG_FRAME implementations - (desc.h) Drop desc.h - (mm.h) Drop include - (mm.h) Drop PGC_static definition - (mm.h) Drop max_page definition - (mm.h/mm-radix.c) Drop total_pages definition - (monitor.h) Drop and includes in favor of just - (page.h) Drop PAGE_ALIGN definition - (percpu.h) Drop include - (procarea.h) Drop license text in favor of SPDX header - (procarea.h) Drop unnecessary forward declarations and include - (processor.h) Fix macro parameter naming (drop leading underscore) - (processor.h) Add explanation comment to cpu_relax() - (regs.h) Drop stray hunk adding include - (system.h) Drop license text in favor of SPDX header - (system.h) Drop include - (opal.c) Drop stray hunk changing opal.c includes xen/arch/ppc/Kconfig | 1 + xen/arch/ppc/include/asm/Makefile | 2 + xen/arch/ppc/include/asm/altp2m.h | 25 +++ xen/arch/ppc/include/asm/bug.h | 9 + xen/arch/ppc/include/asm/cache.h | 2 + xen/arch/ppc/include/asm/config.h | 10 + xen/arch/ppc/include/asm/cpufeature.h | 10 + xen/arch/ppc/include/asm/current.h | 43 ++++ xen/arch/ppc/include/asm/delay.h | 12 ++ xen/arch/ppc/include/asm/device.h | 53 +++++ xen/arch/ppc/include/asm/div64.h | 14 ++ xen/arch/ppc/include/asm/domain.h | 47 +++++ xen/arch/ppc/include/asm/event.h | 36 ++++ xen/arch/ppc/include/asm/flushtlb.h | 24 +++ xen/arch/ppc/include/asm/grant_table.h | 5 + xen/arch/ppc/include/asm/guest_access.h | 68 +++++++ xen/arch/ppc/include/asm/guest_atomics.h | 23 +++ xen/arch/ppc/include/asm/hardirq.h | 19 ++ xen/arch/ppc/include/asm/hypercall.h | 5 + xen/arch/ppc/include/asm/io.h | 16 ++ xen/arch/ppc/include/asm/iocap.h | 8 + xen/arch/ppc/include/asm/iommu.h | 8 + xen/arch/ppc/include/asm/irq.h | 33 +++ xen/arch/ppc/include/asm/mem_access.h | 5 + xen/arch/ppc/include/asm/mm.h | 243 ++++++++++++++++++++++- xen/arch/ppc/include/asm/monitor.h | 43 ++++ xen/arch/ppc/include/asm/nospec.h | 15 ++ xen/arch/ppc/include/asm/numa.h | 26 +++ xen/arch/ppc/include/asm/p2m.h | 95 +++++++++ xen/arch/ppc/include/asm/page.h | 18 ++ xen/arch/ppc/include/asm/paging.h | 7 + xen/arch/ppc/include/asm/pci.h | 7 + xen/arch/ppc/include/asm/percpu.h | 24 +++ xen/arch/ppc/include/asm/processor.h | 10 + xen/arch/ppc/include/asm/random.h | 9 + xen/arch/ppc/include/asm/setup.h | 6 + xen/arch/ppc/include/asm/smp.h | 18 ++ xen/arch/ppc/include/asm/softirq.h | 8 + xen/arch/ppc/include/asm/spinlock.h | 15 ++ xen/arch/ppc/include/asm/system.h | 219 +++++++++++++++++++- xen/arch/ppc/include/asm/time.h | 23 +++ xen/arch/ppc/include/asm/xenoprof.h | 0 xen/arch/ppc/mm-radix.c | 2 +- xen/arch/ppc/tlb-radix.c | 2 +- xen/include/public/hvm/save.h | 2 + xen/include/public/pmu.h | 2 + xen/include/public/xen.h | 2 + 47 files changed, 1270 insertions(+), 4 deletions(-) create mode 100644 xen/arch/ppc/include/asm/Makefile create mode 100644 xen/arch/ppc/include/asm/altp2m.h create mode 100644 xen/arch/ppc/include/asm/cpufeature.h create mode 100644 xen/arch/ppc/include/asm/current.h create mode 100644 xen/arch/ppc/include/asm/delay.h create mode 100644 xen/arch/ppc/include/asm/device.h create mode 100644 xen/arch/ppc/include/asm/div64.h create mode 100644 xen/arch/ppc/include/asm/domain.h create mode 100644 xen/arch/ppc/include/asm/event.h create mode 100644 xen/arch/ppc/include/asm/flushtlb.h create mode 100644 xen/arch/ppc/include/asm/grant_table.h create mode 100644 xen/arch/ppc/include/asm/guest_access.h create mode 100644 xen/arch/ppc/include/asm/guest_atomics.h create mode 100644 xen/arch/ppc/include/asm/hardirq.h create mode 
100644 xen/arch/ppc/include/asm/hypercall.h create mode 100644 xen/arch/ppc/include/asm/io.h create mode 100644 xen/arch/ppc/include/asm/iocap.h create mode 100644 xen/arch/ppc/include/asm/iommu.h create mode 100644 xen/arch/ppc/include/asm/irq.h create mode 100644 xen/arch/ppc/include/asm/mem_access.h create mode 100644 xen/arch/ppc/include/asm/monitor.h create mode 100644 xen/arch/ppc/include/asm/nospec.h create mode 100644 xen/arch/ppc/include/asm/numa.h create mode 100644 xen/arch/ppc/include/asm/p2m.h create mode 100644 xen/arch/ppc/include/asm/paging.h create mode 100644 xen/arch/ppc/include/asm/pci.h create mode 100644 xen/arch/ppc/include/asm/percpu.h create mode 100644 xen/arch/ppc/include/asm/random.h create mode 100644 xen/arch/ppc/include/asm/setup.h create mode 100644 xen/arch/ppc/include/asm/smp.h create mode 100644 xen/arch/ppc/include/asm/softirq.h create mode 100644 xen/arch/ppc/include/asm/spinlock.h create mode 100644 xen/arch/ppc/include/asm/time.h create mode 100644 xen/arch/ppc/include/asm/xenoprof.h -- 2.30.2 diff --git a/xen/arch/ppc/Kconfig b/xen/arch/ppc/Kconfig index ab116ffb2a..a6eae597af 100644 --- a/xen/arch/ppc/Kconfig +++ b/xen/arch/ppc/Kconfig @@ -1,6 +1,7 @@ config PPC def_bool y select HAS_DEVICE_TREE + select HAS_PDX config PPC64 def_bool y diff --git a/xen/arch/ppc/include/asm/Makefile b/xen/arch/ppc/include/asm/Makefile new file mode 100644 index 0000000000..821addb0bf --- /dev/null +++ b/xen/arch/ppc/include/asm/Makefile @@ -0,0 +1,2 @@ +# SPDX-License-Identifier: GPL-2.0-only +generic-y += vm_event.h diff --git a/xen/arch/ppc/include/asm/altp2m.h b/xen/arch/ppc/include/asm/altp2m.h new file mode 100644 index 0000000000..bd5c9aa415 --- /dev/null +++ b/xen/arch/ppc/include/asm/altp2m.h @@ -0,0 +1,25 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef __ASM_PPC_ALTP2M_H__ +#define __ASM_PPC_ALTP2M_H__ + +#include + +struct domain; +struct vcpu; + +/* Alternate p2m on/off per domain */ +static inline bool altp2m_active(const struct domain *d) +{ + /* Not implemented on PPC. */ + return false; +} + +/* Alternate p2m VCPU */ +static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v) +{ + /* Not implemented on PPC, should not be reached. 
*/ + ASSERT_UNREACHABLE(); + return 0; +} + +#endif /* __ASM_PPC_ALTP2M_H__ */ diff --git a/xen/arch/ppc/include/asm/bug.h b/xen/arch/ppc/include/asm/bug.h index e5e874b31c..35d4568402 100644 --- a/xen/arch/ppc/include/asm/bug.h +++ b/xen/arch/ppc/include/asm/bug.h @@ -4,6 +4,7 @@ #define _ASM_PPC_BUG_H #include +#include /* * Power ISA guarantees that an instruction consisting of all zeroes is @@ -15,4 +16,12 @@ #define BUG_FN_REG r0 +/* TODO: implement this properly */ +#define BUG() do { \ + die(); \ +} while (0) + +/* TODO: implement this properly */ +#define BUG_FRAME(type, line, ptr, second_frame, msg) do { } while (0) + #endif /* _ASM_PPC_BUG_H */ diff --git a/xen/arch/ppc/include/asm/cache.h b/xen/arch/ppc/include/asm/cache.h index 8a0a6b7b17..0d7323d789 100644 --- a/xen/arch/ppc/include/asm/cache.h +++ b/xen/arch/ppc/include/asm/cache.h @@ -3,4 +3,6 @@ #ifndef _ASM_PPC_CACHE_H #define _ASM_PPC_CACHE_H +#define __read_mostly __section(".data.read_mostly") + #endif /* _ASM_PPC_CACHE_H */ diff --git a/xen/arch/ppc/include/asm/config.h b/xen/arch/ppc/include/asm/config.h index 30438d22d2..a11a09c570 100644 --- a/xen/arch/ppc/include/asm/config.h +++ b/xen/arch/ppc/include/asm/config.h @@ -1,3 +1,4 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ #ifndef __PPC_CONFIG_H__ #define __PPC_CONFIG_H__ @@ -41,6 +42,15 @@ #define XEN_VIRT_START _AC(0xc000000000000000, UL) +#define VMAP_VIRT_START (XEN_VIRT_START + GB(1)) +#define VMAP_VIRT_SIZE GB(1) + +#define FRAMETABLE_VIRT_START (XEN_VIRT_START + GB(32)) +#define FRAMETABLE_SIZE GB(32) +#define FRAMETABLE_NR (FRAMETABLE_SIZE / sizeof(*frame_table)) + +#define HYPERVISOR_VIRT_START XEN_VIRT_START + #define SMP_CACHE_BYTES (1 << 6) #define STACK_ORDER 0 diff --git a/xen/arch/ppc/include/asm/cpufeature.h b/xen/arch/ppc/include/asm/cpufeature.h new file mode 100644 index 0000000000..1c5946512b --- /dev/null +++ b/xen/arch/ppc/include/asm/cpufeature.h @@ -0,0 +1,10 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef __ASM_PPC_CPUFEATURE_H__ +#define __ASM_PPC_CPUFEATURE_H__ + +static inline int cpu_nr_siblings(unsigned int cpu) +{ + return 1; +} + +#endif /* __ASM_PPC_CPUFEATURE_H__ */ diff --git a/xen/arch/ppc/include/asm/current.h b/xen/arch/ppc/include/asm/current.h new file mode 100644 index 0000000000..0ca06033f9 --- /dev/null +++ b/xen/arch/ppc/include/asm/current.h @@ -0,0 +1,43 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef __ASM_PPC_CURRENT_H__ +#define __ASM_PPC_CURRENT_H__ + +#include + +#ifndef __ASSEMBLY__ + +struct vcpu; + +/* Which VCPU is "current" on this PCPU. 
*/ +DECLARE_PER_CPU(struct vcpu *, curr_vcpu); + +#define current (this_cpu(curr_vcpu)) +#define set_current(vcpu) do { current = (vcpu); } while (0) +#define get_cpu_current(cpu) (per_cpu(curr_vcpu, cpu)) + +/* Per-VCPU state that lives at the top of the stack */ +struct cpu_info { + struct cpu_user_regs guest_cpu_user_regs; + unsigned long elr; + unsigned int flags; +}; + +static inline struct cpu_info *get_cpu_info(void) +{ +#ifdef __clang__ + unsigned long sp; + + asm ( "mr %0, 1" : "=r" (sp) ); +#else + register unsigned long sp asm ("r1"); +#endif + + return (struct cpu_info *)((sp & ~(STACK_SIZE - 1)) + + STACK_SIZE - sizeof(struct cpu_info)); +} + +#define guest_cpu_user_regs() (&get_cpu_info()->guest_cpu_user_regs) + +#endif /* __ASSEMBLY__ */ + +#endif /* __ASM_PPC_CURRENT_H__ */ diff --git a/xen/arch/ppc/include/asm/delay.h b/xen/arch/ppc/include/asm/delay.h new file mode 100644 index 0000000000..da6635888b --- /dev/null +++ b/xen/arch/ppc/include/asm/delay.h @@ -0,0 +1,12 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef __ASM_PPC_DELAY_H__ +#define __ASM_PPC_DELAY_H__ + +#include + +static inline void udelay(unsigned long usecs) +{ + BUG_ON("unimplemented"); +} + +#endif /* __ASM_PPC_DELAY_H__ */ diff --git a/xen/arch/ppc/include/asm/device.h b/xen/arch/ppc/include/asm/device.h new file mode 100644 index 0000000000..8253e61d51 --- /dev/null +++ b/xen/arch/ppc/include/asm/device.h @@ -0,0 +1,53 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef __ASM_PPC_DEVICE_H__ +#define __ASM_PPC_DEVICE_H__ + +enum device_type +{ + DEV_DT, + DEV_PCI, +}; + +struct device { + enum device_type type; +#ifdef CONFIG_HAS_DEVICE_TREE + struct dt_device_node *of_node; /* Used by drivers imported from Linux */ +#endif +}; + +enum device_class +{ + DEVICE_SERIAL, + DEVICE_IOMMU, + DEVICE_PCI_HOSTBRIDGE, + /* Use for error */ + DEVICE_UNKNOWN, +}; + +struct device_desc { + /* Device name */ + const char *name; + /* Device class */ + enum device_class class; + /* List of devices supported by this driver */ + const struct dt_device_match *dt_match; + /* + * Device initialization. + * + * -EAGAIN is used to indicate that device probing is deferred. 
+ */ + int (*init)(struct dt_device_node *dev, const void *data); +}; + +typedef struct device device_t; + +#define DT_DEVICE_START(name_, namestr_, class_) \ +static const struct device_desc __dev_desc_##name_ __used \ +__section(".dev.info") = { \ + .name = namestr_, \ + .class = class_, \ + +#define DT_DEVICE_END \ +}; + +#endif /* __ASM_PPC_DEVICE_H__ */ diff --git a/xen/arch/ppc/include/asm/div64.h b/xen/arch/ppc/include/asm/div64.h new file mode 100644 index 0000000000..d213e50585 --- /dev/null +++ b/xen/arch/ppc/include/asm/div64.h @@ -0,0 +1,14 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef __ASM_PPC_DIV64_H__ +#define __ASM_PPC_DIV64_H__ + +#include + +#define do_div(n, base) ({ \ + uint32_t base_ = (base); \ + uint32_t rem_ = (uint64_t)(n) % base_; \ + (n) = (uint64_t)(n) / base_; \ + rem_; \ +}) + +#endif /* __ASM_PPC_DIV64_H__ */ diff --git a/xen/arch/ppc/include/asm/domain.h b/xen/arch/ppc/include/asm/domain.h new file mode 100644 index 0000000000..573276d0a8 --- /dev/null +++ b/xen/arch/ppc/include/asm/domain.h @@ -0,0 +1,47 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef __ASM_PPC_DOMAIN_H__ +#define __ASM_PPC_DOMAIN_H__ + +#include +#include + +struct hvm_domain +{ + uint64_t params[HVM_NR_PARAMS]; +}; + +#define is_domain_direct_mapped(d) ((void)(d), 0) + +/* TODO: Implement */ +#define guest_mode(r) ({ (void)(r); BUG_ON("unimplemented"); 0; }) + +struct arch_vcpu_io { +}; + +struct arch_vcpu { +}; + +struct arch_domain { + struct hvm_domain hvm; +}; + +#include + +static inline struct vcpu_guest_context *alloc_vcpu_guest_context(void) +{ + return xmalloc(struct vcpu_guest_context); +} + +static inline void free_vcpu_guest_context(struct vcpu_guest_context *vgc) +{ + xfree(vgc); +} + +struct guest_memory_policy {}; +static inline void update_guest_memory_policy(struct vcpu *v, + struct guest_memory_policy *gmp) +{} + +static inline void arch_vcpu_block(struct vcpu *v) {} + +#endif /* __ASM_PPC_DOMAIN_H__ */ diff --git a/xen/arch/ppc/include/asm/event.h b/xen/arch/ppc/include/asm/event.h new file mode 100644 index 0000000000..1b95ee4f61 --- /dev/null +++ b/xen/arch/ppc/include/asm/event.h @@ -0,0 +1,36 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef __ASM_PPC_EVENT_H__ +#define __ASM_PPC_EVENT_H__ + +#include + +/* TODO: implement */ +static inline void vcpu_kick(struct vcpu *v) { BUG_ON("unimplemented"); } +static inline void vcpu_mark_events_pending(struct vcpu *v) { BUG_ON("unimplemented"); } +static inline void vcpu_update_evtchn_irq(struct vcpu *v) { BUG_ON("unimplemented"); } +static inline void vcpu_block_unless_event_pending(struct vcpu *v) { BUG_ON("unimplemented"); } + +static inline int vcpu_event_delivery_is_enabled(struct vcpu *v) +{ + BUG_ON("unimplemented"); + return 0; +} + +/* No arch specific virq definition now. Default to global. 
*/ +static inline bool arch_virq_is_global(unsigned int virq) +{ + return true; +} + +static inline int local_events_need_delivery(void) +{ + BUG_ON("unimplemented"); + return 0; +} + +static inline void local_event_delivery_enable(void) +{ + BUG_ON("unimplemented"); +} + +#endif /* __ASM_PPC_EVENT_H__ */ diff --git a/xen/arch/ppc/include/asm/flushtlb.h b/xen/arch/ppc/include/asm/flushtlb.h new file mode 100644 index 0000000000..afcb740783 --- /dev/null +++ b/xen/arch/ppc/include/asm/flushtlb.h @@ -0,0 +1,24 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef __ASM_PPC_FLUSHTLB_H__ +#define __ASM_PPC_FLUSHTLB_H__ + +#include + +/* + * Filter the given set of CPUs, removing those that definitely flushed their + * TLB since @page_timestamp. + */ +/* XXX lazy implementation just doesn't clear anything.... */ +static inline void tlbflush_filter(cpumask_t *mask, uint32_t page_timestamp) {} + +#define tlbflush_current_time() (0) + +static inline void page_set_tlbflush_timestamp(struct page_info *page) +{ + page->tlbflush_timestamp = tlbflush_current_time(); +} + +/* Flush specified CPUs' TLBs */ +void arch_flush_tlb_mask(const cpumask_t *mask); + +#endif /* __ASM_PPC_FLUSHTLB_H__ */ diff --git a/xen/arch/ppc/include/asm/grant_table.h b/xen/arch/ppc/include/asm/grant_table.h new file mode 100644 index 0000000000..d0ff58dd3d --- /dev/null +++ b/xen/arch/ppc/include/asm/grant_table.h @@ -0,0 +1,5 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef __ASM_PPC_GRANT_TABLE_H__ +#define __ASM_PPC_GRANT_TABLE_H__ + +#endif /* __ASM_PPC_GRANT_TABLE_H__ */ diff --git a/xen/arch/ppc/include/asm/guest_access.h b/xen/arch/ppc/include/asm/guest_access.h new file mode 100644 index 0000000000..6546931911 --- /dev/null +++ b/xen/arch/ppc/include/asm/guest_access.h @@ -0,0 +1,68 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef __ASM_PPC_GUEST_ACCESS_H__ +#define __ASM_PPC_GUEST_ACCESS_H__ + +#include + +/* TODO */ + +static inline unsigned long raw_copy_to_guest( + void *to, + const void *from, + unsigned int len) +{ + BUG_ON("unimplemented"); +} +static inline unsigned long raw_copy_to_guest_flush_dcache( + void *to, + const void *from, + unsigned int len) +{ + BUG_ON("unimplemented"); +} +static inline unsigned long raw_copy_from_guest( + void *to, + const void *from, + unsigned int len) +{ + BUG_ON("unimplemented"); +} +static inline unsigned long raw_clear_guest(void *to, unsigned int len) +{ + BUG_ON("unimplemented"); +} + +/* Copy data to guest physical address, then clean the region. */ +static inline unsigned long copy_to_guest_phys_flush_dcache( + struct domain *d, + paddr_t gpa, + void *buf, + unsigned int len) +{ + BUG_ON("unimplemented"); +} + +static inline int access_guest_memory_by_gpa( + struct domain *d, + paddr_t gpa, + void *buf, + uint32_t size, + bool is_write) +{ + BUG_ON("unimplemented"); +} + + +#define __raw_copy_to_guest raw_copy_to_guest +#define __raw_copy_from_guest raw_copy_from_guest +#define __raw_clear_guest raw_clear_guest + +/* + * Pre-validate a guest handle. + * Allows use of faster __copy_* functions. 
+ */ +/* All PPC guests are paging mode external and hence safe */ +#define guest_handle_okay(hnd, nr) (1) +#define guest_handle_subrange_okay(hnd, first, last) (1) + +#endif /* __ASM_PPC_GUEST_ACCESS_H__ */ diff --git a/xen/arch/ppc/include/asm/guest_atomics.h b/xen/arch/ppc/include/asm/guest_atomics.h new file mode 100644 index 0000000000..1190e10bbb --- /dev/null +++ b/xen/arch/ppc/include/asm/guest_atomics.h @@ -0,0 +1,23 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef __ASM_PPC_GUEST_ATOMICS_H__ +#define __ASM_PPC_GUEST_ATOMICS_H__ + +#include + +/* TODO: implement */ +#define unimplemented_guest_bit_op(d, nr, p) ({ \ + (void)(d); \ + (void)(nr); \ + (void)(p); \ + BUG_ON("unimplemented"); \ + false; \ +}) + +#define guest_test_bit(d, nr, p) unimplemented_guest_bit_op(d, nr, p) +#define guest_clear_bit(d, nr, p) unimplemented_guest_bit_op(d, nr, p) +#define guest_set_bit(d, nr, p) unimplemented_guest_bit_op(d, nr, p) +#define guest_test_and_set_bit(d, nr, p) unimplemented_guest_bit_op(d, nr, p) +#define guest_test_and_clear_bit(d, nr, p) unimplemented_guest_bit_op(d, nr, p) +#define guest_test_and_change_bit(d, nr, p) unimplemented_guest_bit_op(d, nr, p) + +#endif /* __ASM_PPC_GUEST_ATOMICS_H__ */ diff --git a/xen/arch/ppc/include/asm/hardirq.h b/xen/arch/ppc/include/asm/hardirq.h new file mode 100644 index 0000000000..343efc7e69 --- /dev/null +++ b/xen/arch/ppc/include/asm/hardirq.h @@ -0,0 +1,19 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef __ASM_PPC_HARDIRQ_H__ +#define __ASM_PPC_HARDIRQ_H__ + +#include + +typedef struct { + unsigned long __softirq_pending; + unsigned int __local_irq_count; +} __cacheline_aligned irq_cpustat_t; + +#include /* Standard mappings for irq_cpustat_t above */ + +#define in_irq() (local_irq_count(smp_processor_id()) != 0) + +#define irq_enter() (local_irq_count(smp_processor_id())++) +#define irq_exit() (local_irq_count(smp_processor_id())--) + +#endif /* __ASM_PPC_HARDIRQ_H__ */ diff --git a/xen/arch/ppc/include/asm/hypercall.h b/xen/arch/ppc/include/asm/hypercall.h new file mode 100644 index 0000000000..1e8ca0ce9c --- /dev/null +++ b/xen/arch/ppc/include/asm/hypercall.h @@ -0,0 +1,5 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef __ASM_PPC_HYPERCALL_H__ +#define __ASM_PPC_HYPERCALL_H__ + +#endif /* __ASM_PPC_HYPERCALL_H__ */ diff --git a/xen/arch/ppc/include/asm/io.h b/xen/arch/ppc/include/asm/io.h new file mode 100644 index 0000000000..85b5b27157 --- /dev/null +++ b/xen/arch/ppc/include/asm/io.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef __ASM_PPC_IO_H__ +#define __ASM_PPC_IO_H__ + +#include + +/* TODO */ +#define readb(c) ({ (void)(c); BUG_ON("unimplemented"); 0; }) +#define readw(c) ({ (void)(c); BUG_ON("unimplemented"); 0; }) +#define readl(c) ({ (void)(c); BUG_ON("unimplemented"); 0; }) + +#define writeb(v,c) ({ (void)(v); (void)(c); BUG_ON("unimplemented"); }) +#define writew(v,c) ({ (void)(v); (void)(c); BUG_ON("unimplemented"); }) +#define writel(v,c) ({ (void)(v); (void)(c); BUG_ON("unimplemented"); }) + +#endif /* __ASM_PPC_IO_H__ */ diff --git a/xen/arch/ppc/include/asm/iocap.h b/xen/arch/ppc/include/asm/iocap.h new file mode 100644 index 0000000000..76bf13a70f --- /dev/null +++ b/xen/arch/ppc/include/asm/iocap.h @@ -0,0 +1,8 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef __ASM_PPC_IOCAP_H__ +#define __ASM_PPC_IOCAP_H__ + +#define cache_flush_permitted(d) \ + (!rangeset_is_empty((d)->iomem_caps)) + +#endif /* __ASM_PPC_IOCAP_H__ */ diff --git a/xen/arch/ppc/include/asm/iommu.h 
b/xen/arch/ppc/include/asm/iommu.h new file mode 100644 index 0000000000..024ead3473 --- /dev/null +++ b/xen/arch/ppc/include/asm/iommu.h @@ -0,0 +1,8 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef __ASM_PPC_IOMMU_H__ +#define __ASM_PPC_IOMMU_H__ + +struct arch_iommu { +}; + +#endif /* __ASM_PPC_IOMMU_H__ */ diff --git a/xen/arch/ppc/include/asm/irq.h b/xen/arch/ppc/include/asm/irq.h new file mode 100644 index 0000000000..5c37d0cf25 --- /dev/null +++ b/xen/arch/ppc/include/asm/irq.h @@ -0,0 +1,33 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef __ASM_PPC_IRQ_H__ +#define __ASM_PPC_IRQ_H__ + +#include +#include +#include + +/* TODO */ +#define nr_irqs 0U +#define nr_static_irqs 0 +#define arch_hwdom_irqs(domid) 0U + +#define domain_pirq_to_irq(d, pirq) (pirq) + +struct arch_pirq { +}; + +struct arch_irq_desc { + unsigned int type; +}; + +static inline void arch_move_irqs(struct vcpu *v) +{ + BUG_ON("unimplemented"); +} + +static inline int platform_get_irq(const struct dt_device_node *device, int index) +{ + BUG_ON("unimplemented"); +} + +#endif /* __ASM_PPC_IRQ_H__ */ diff --git a/xen/arch/ppc/include/asm/mem_access.h b/xen/arch/ppc/include/asm/mem_access.h new file mode 100644 index 0000000000..e7986dfdbd --- /dev/null +++ b/xen/arch/ppc/include/asm/mem_access.h @@ -0,0 +1,5 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef __ASM_PPC_MEM_ACCESS_H__ +#define __ASM_PPC_MEM_ACCESS_H__ + +#endif /* __ASM_PPC_MEM_ACCESS_H__ */ diff --git a/xen/arch/ppc/include/asm/mm.h b/xen/arch/ppc/include/asm/mm.h index c85a7ed686..a433936076 100644 --- a/xen/arch/ppc/include/asm/mm.h +++ b/xen/arch/ppc/include/asm/mm.h @@ -1,10 +1,25 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ #ifndef _ASM_PPC_MM_H #define _ASM_PPC_MM_H +#include +#include +#include +#include #include +#include + +void setup_initial_pagetables(void); #define pfn_to_paddr(pfn) ((paddr_t)(pfn) << PAGE_SHIFT) #define paddr_to_pfn(pa) ((unsigned long)((pa) >> PAGE_SHIFT)) +#define paddr_to_pdx(pa) mfn_to_pdx(maddr_to_mfn(pa)) +#define gfn_to_gaddr(gfn) pfn_to_paddr(gfn_x(gfn)) +#define gaddr_to_gfn(ga) _gfn(paddr_to_pfn(ga)) +#define mfn_to_maddr(mfn) pfn_to_paddr(mfn_x(mfn)) +#define maddr_to_mfn(ma) _mfn(paddr_to_pfn(ma)) +#define vmap_to_mfn(va) maddr_to_mfn(virt_to_maddr((vaddr_t)va)) +#define vmap_to_page(va) mfn_to_page(vmap_to_mfn(va)) #define virt_to_maddr(va) ((paddr_t)((vaddr_t)(va) & PADDR_MASK)) #define maddr_to_virt(pa) ((void *)((paddr_t)(pa) | XEN_VIRT_START)) @@ -13,6 +28,232 @@ #define __pa(x) (virt_to_maddr(x)) #define __va(x) (maddr_to_virt(x)) -void setup_initial_pagetables(void); +/* Convert between Xen-heap virtual addresses and machine frame numbers. */ +#define __virt_to_mfn(va) (virt_to_maddr(va) >> PAGE_SHIFT) +#define __mfn_to_virt(mfn) (maddr_to_virt((paddr_t)(mfn) << PAGE_SHIFT)) + +/* Convert between Xen-heap virtual addresses and page-info structures. */ +static inline struct page_info *virt_to_page(const void *v) +{ + BUG_ON("unimplemented"); + return NULL; +} + +#define virt_to_mfn(va) __virt_to_mfn(va) +#define mfn_to_virt(mfn) __mfn_to_virt(mfn) + +#define PG_shift(idx) (BITS_PER_LONG - (idx)) +#define PG_mask(x, idx) (x ## UL << PG_shift(idx)) + +#define PGT_none PG_mask(0, 1) /* no special uses of this page */ +#define PGT_writable_page PG_mask(1, 1) /* has writable mappings? */ +#define PGT_type_mask PG_mask(1, 1) /* Bits 31 or 63. */ + + /* 2-bit count of uses of this frame as its current type. 
*/ +#define PGT_count_mask PG_mask(3, 3) + +/* Cleared when the owning guest 'frees' this page. */ +#define _PGC_allocated PG_shift(1) +#define PGC_allocated PG_mask(1, 1) +/* Page is Xen heap? */ +#define _PGC_xen_heap PG_shift(2) +#define PGC_xen_heap PG_mask(1, 2) +/* Page is broken? */ +#define _PGC_broken PG_shift(7) +#define PGC_broken PG_mask(1, 7) + /* Mutually-exclusive page states: { inuse, offlining, offlined, free }. */ +#define PGC_state PG_mask(3, 9) +#define PGC_state_inuse PG_mask(0, 9) +#define PGC_state_offlining PG_mask(1, 9) +#define PGC_state_offlined PG_mask(2, 9) +#define PGC_state_free PG_mask(3, 9) +#define page_state_is(pg, st) (((pg)->count_info&PGC_state) == PGC_state_##st) +/* Page is not reference counted */ +#define _PGC_extra PG_shift(10) +#define PGC_extra PG_mask(1, 10) + +/* Count of references to this frame. */ +#define PGC_count_width PG_shift(10) +#define PGC_count_mask ((1UL<<PGC_count_width)-1) + +#define is_xen_heap_page(page) ((page)->count_info & PGC_xen_heap) +#define is_xen_heap_mfn(mfn) \ + (mfn_valid(mfn) && is_xen_heap_page(mfn_to_page(mfn))) + +#define is_xen_fixed_mfn(mfn) \ + ((mfn_to_maddr(mfn) >= virt_to_maddr((vaddr_t)_start)) && \ + (mfn_to_maddr(mfn) <= virt_to_maddr((vaddr_t)_end - 1))) + +#define page_get_owner(_p) (_p)->v.inuse.domain +#define page_set_owner(_p,_d) ((_p)->v.inuse.domain = (_d)) + +/* TODO: implement */ +#define mfn_valid(mfn) ({ (void) (mfn); 0; }) + +#define domain_set_alloc_bitsize(d) ((void)(d)) +#define domain_clamp_alloc_bitsize(d, b) (b) + +#define PFN_ORDER(pfn_) ((pfn_)->v.free.order) + +struct page_info +{ + /* Each frame can be threaded onto a doubly-linked list. */ + struct page_list_entry list; + + /* Reference count and various PGC_xxx flags and fields. */ + unsigned long count_info; + + /* Context-dependent fields follow... */ + union { + /* Page is in use: ((count_info & PGC_count_mask) != 0). */ + struct { + /* Type reference count and various PGT_xxx flags and fields. */ + unsigned long type_info; + } inuse; + /* Page is on a free list: ((count_info & PGC_count_mask) == 0). */ + union { + struct { + /* + * Index of the first *possibly* unscrubbed page in the buddy. + * One more bit than maximum possible order to accommodate + * INVALID_DIRTY_IDX. + */ +#define INVALID_DIRTY_IDX ((1UL << (MAX_ORDER + 1)) - 1) + unsigned long first_dirty:MAX_ORDER + 1; + + /* Do TLBs need flushing for safety before next page use? */ + bool need_tlbflush:1; + +#define BUDDY_NOT_SCRUBBING 0 +#define BUDDY_SCRUBBING 1 +#define BUDDY_SCRUB_ABORT 2 + unsigned long scrub_state:2; + }; + + unsigned long val; + } free; + + } u; + + union { + /* Page is in use, but not as a shadow. */ + struct { + /* Owner of this page (NULL if page is anonymous). */ + struct domain *domain; + } inuse; + + /* Page is on a free list. */ + struct { + /* Order-size of the free chunk this page is the head of. */ + unsigned int order; + } free; + + } v; + + union { + /* + * Timestamp from 'TLB clock', used to avoid extra safety flushes. + * Only valid for: a) free pages, and b) pages with zero type count + */ + uint32_t tlbflush_timestamp; + }; + uint64_t pad; +}; + + +#define FRAMETABLE_VIRT_START (XEN_VIRT_START + GB(32)) +#define frame_table ((struct page_info *)FRAMETABLE_VIRT_START) + +/* PDX of the first page in the frame table. */ +extern unsigned long frametable_base_pdx; + +/* Convert between machine frame numbers and page-info structures.
*/ +#define mfn_to_page(mfn) \ + (frame_table + (mfn_to_pdx(mfn) - frametable_base_pdx)) +#define page_to_mfn(pg) \ + pdx_to_mfn((unsigned long)((pg) - frame_table) + frametable_base_pdx) + +static inline void *page_to_virt(const struct page_info *pg) +{ + return mfn_to_virt(mfn_x(page_to_mfn(pg))); +} + +/* + * Common code requires get_page_type and put_page_type. + * We don't care about typecounts so we just do the minimum to make it + * happy. + */ +static inline int get_page_type(struct page_info *page, unsigned long type) +{ + return 1; +} + +static inline void put_page_type(struct page_info *page) +{ + return; +} + +/* TODO */ +static inline bool get_page_nr(struct page_info *page, const struct domain *domain, + unsigned long nr) +{ + BUG_ON("unimplemented"); +} +static inline void put_page_nr(struct page_info *page, unsigned long nr) +{ + BUG_ON("unimplemented"); +} + +static inline void put_page_and_type(struct page_info *page) +{ + put_page_type(page); + put_page(page); +} + +/* + * PPC does not have an M2P, but common code expects a handful of + * M2P-related defines and functions. Provide dummy versions of these. + */ +#define INVALID_M2P_ENTRY (~0UL) +#define SHARED_M2P_ENTRY (~0UL - 1UL) +#define SHARED_M2P(_e) ((_e) == SHARED_M2P_ENTRY) + +#define set_gpfn_from_mfn(mfn, pfn) BUG_ON("unimplemented") +#define mfn_to_gfn(d, mfn) ({ BUG_ON("unimplemented"); _gfn(0); }) + +#define PDX_GROUP_SHIFT XEN_PT_SHIFT_LVL_3 + +static inline unsigned long domain_get_maximum_gpfn(struct domain *d) +{ + BUG_ON("unimplemented"); + return 0; +} + +static inline long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg) +{ + BUG_ON("unimplemented"); + return 0; +} + +static inline unsigned int arch_get_dma_bitsize(void) +{ + return 32; /* TODO */ +} + +/* + * On PPC, all the RAM is currently direct mapped in Xen. + * Hence return always true. + */ +static inline bool arch_mfns_in_directmap(unsigned long mfn, unsigned long nr) +{ + return true; +} #endif /* _ASM_PPC_MM_H */ diff --git a/xen/arch/ppc/include/asm/monitor.h b/xen/arch/ppc/include/asm/monitor.h new file mode 100644 index 0000000000..e5b0282bf1 --- /dev/null +++ b/xen/arch/ppc/include/asm/monitor.h @@ -0,0 +1,43 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Derived from xen/arch/arm/include/asm/monitor.h */ +#ifndef __ASM_PPC_MONITOR_H__ +#define __ASM_PPC_MONITOR_H__ + +#include +#include + +static inline +void arch_monitor_allow_userspace(struct domain *d, bool allow_userspace) +{ +} + +static inline +int arch_monitor_domctl_op(struct domain *d, struct xen_domctl_monitor_op *mop) +{ + /* No arch-specific monitor ops on PPC. */ + return -EOPNOTSUPP; +} + +int arch_monitor_domctl_event(struct domain *d, + struct xen_domctl_monitor_op *mop); + +static inline +int arch_monitor_init_domain(struct domain *d) +{ + /* No arch-specific domain initialization on PPC. */ + return 0; +} + +static inline +void arch_monitor_cleanup_domain(struct domain *d) +{ + /* No arch-specific domain cleanup on PPC. */ +} + +static inline uint32_t arch_monitor_get_capabilities(struct domain *d) +{ + BUG_ON("unimplemented"); + return 0; +} + +#endif /* __ASM_PPC_MONITOR_H__ */ diff --git a/xen/arch/ppc/include/asm/nospec.h b/xen/arch/ppc/include/asm/nospec.h new file mode 100644 index 0000000000..b97322e48d --- /dev/null +++ b/xen/arch/ppc/include/asm/nospec.h @@ -0,0 +1,15 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* From arch/arm/include/asm/nospec.h. 
*/ +#ifndef __ASM_PPC_NOSPEC_H__ +#define __ASM_PPC_NOSPEC_H__ + +static inline bool evaluate_nospec(bool condition) +{ + return condition; +} + +static inline void block_speculation(void) +{ +} + +#endif /* __ASM_PPC_NOSPEC_H__ */ diff --git a/xen/arch/ppc/include/asm/numa.h b/xen/arch/ppc/include/asm/numa.h new file mode 100644 index 0000000000..7fdf66c3da --- /dev/null +++ b/xen/arch/ppc/include/asm/numa.h @@ -0,0 +1,26 @@ +#ifndef __ASM_PPC_NUMA_H__ +#define __ASM_PPC_NUMA_H__ + +#include +#include + +typedef uint8_t nodeid_t; + +/* Fake one node for now. See also node_online_map. */ +#define cpu_to_node(cpu) 0 +#define node_to_cpumask(node) (cpu_online_map) + +/* + * TODO: make first_valid_mfn static when NUMA is supported on PPC, this + * is required because the dummy helpers are using it. + */ +extern mfn_t first_valid_mfn; + +/* XXX: implement NUMA support */ +#define node_spanned_pages(nid) (max_page - mfn_x(first_valid_mfn)) +#define node_start_pfn(nid) (mfn_x(first_valid_mfn)) +#define __node_distance(a, b) (20) + +#define arch_want_default_dmazone() (false) + +#endif /* __ASM_PPC_NUMA_H__ */ diff --git a/xen/arch/ppc/include/asm/p2m.h b/xen/arch/ppc/include/asm/p2m.h new file mode 100644 index 0000000000..25ba054668 --- /dev/null +++ b/xen/arch/ppc/include/asm/p2m.h @@ -0,0 +1,95 @@ +#ifndef __ASM_PPC_P2M_H__ +#define __ASM_PPC_P2M_H__ + +#include + +#define paddr_bits PADDR_BITS + +/* + * List of possible type for each page in the p2m entry. + * The number of available bit per page in the pte for this purpose is 4 bits. + * So it's possible to only have 16 fields. If we run out of value in the + * future, it's possible to use higher value for pseudo-type and don't store + * them in the p2m entry. + */ +typedef enum { + p2m_invalid = 0, /* Nothing mapped here */ + p2m_ram_rw, /* Normal read/write guest RAM */ + p2m_ram_ro, /* Read-only; writes are silently dropped */ + p2m_max_real_type, /* Types after this won't be store in the p2m */ +} p2m_type_t; + +#include + +static inline int get_page_and_type(struct page_info *page, + struct domain *domain, + unsigned long type) +{ + BUG_ON("unimplemented"); + return 1; +} + +/* Look up a GFN and take a reference count on the backing page. */ +typedef unsigned int p2m_query_t; +#define P2M_ALLOC (1u<<0) /* Populate PoD and paged-out entries */ +#define P2M_UNSHARE (1u<<1) /* Break CoW sharing */ + +static inline struct page_info *get_page_from_gfn( + struct domain *d, unsigned long gfn, p2m_type_t *t, p2m_query_t q) +{ + BUG_ON("unimplemented"); + return NULL; +} + +static inline void memory_type_changed(struct domain *d) +{ + BUG_ON("unimplemented"); +} + + +static inline int guest_physmap_mark_populate_on_demand(struct domain *d, unsigned long gfn, + unsigned int order) +{ + BUG_ON("unimplemented"); + return 1; +} + +static inline int guest_physmap_add_entry(struct domain *d, + gfn_t gfn, + mfn_t mfn, + unsigned long page_order, + p2m_type_t t) +{ + BUG_ON("unimplemented"); + return 1; +} + +/* Untyped version for RAM only, for compatibility */ +static inline int __must_check +guest_physmap_add_page(struct domain *d, gfn_t gfn, mfn_t mfn, + unsigned int page_order) +{ + return guest_physmap_add_entry(d, gfn, mfn, page_order, p2m_ram_rw); +} + +static inline mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn) +{ + BUG_ON("unimplemented"); + return _mfn(0); +} + +static inline bool arch_acquire_resource_check(struct domain *d) +{ + /* + * The reference counting of foreign entries in set_foreign_p2m_entry() + * is supported on PPC. 
+ */ + return true; +} + +static inline void p2m_altp2m_check(struct vcpu *v, uint16_t idx) +{ + /* Not supported on PPC. */ +} + +#endif /* __ASM_PPC_P2M_H__ */ diff --git a/xen/arch/ppc/include/asm/page.h b/xen/arch/ppc/include/asm/page.h index c5ee047df7..890e285051 100644 --- a/xen/arch/ppc/include/asm/page.h +++ b/xen/arch/ppc/include/asm/page.h @@ -1,3 +1,4 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ #ifndef _ASM_PPC_PAGE_H #define _ASM_PPC_PAGE_H @@ -36,6 +37,9 @@ #define PTE_XEN_RO (PTE_XEN_BASE | PTE_EAA_READ) #define PTE_XEN_RX (PTE_XEN_BASE | PTE_EAA_READ | PTE_EAA_EXECUTE) +/* TODO */ +#define PAGE_HYPERVISOR 0 + /* * Radix Tree layout for 64KB pages: * @@ -178,4 +182,18 @@ struct prtb_entry { void tlbie_all(void); +static inline void invalidate_icache(void) +{ + BUG_ON("unimplemented"); +} + +#define clear_page(page) memset(page, 0, PAGE_SIZE) +#define copy_page(dp, sp) memcpy(dp, sp, PAGE_SIZE) + +/* TODO: Flush the dcache for an entire page. */ +static inline void flush_page_to_ram(unsigned long mfn, bool sync_icache) +{ + BUG_ON("unimplemented"); +} + #endif /* _ASM_PPC_PAGE_H */ diff --git a/xen/arch/ppc/include/asm/paging.h b/xen/arch/ppc/include/asm/paging.h new file mode 100644 index 0000000000..eccacece29 --- /dev/null +++ b/xen/arch/ppc/include/asm/paging.h @@ -0,0 +1,7 @@ +#ifndef __ASM_PPC_PAGING_H__ +#define __ASM_PPC_PAGING_H__ + +#define paging_mode_translate(d) (1) +#define paging_mode_external(d) (1) + +#endif /* __ASM_PPC_PAGING_H__ */ diff --git a/xen/arch/ppc/include/asm/pci.h b/xen/arch/ppc/include/asm/pci.h new file mode 100644 index 0000000000..e76c8e5475 --- /dev/null +++ b/xen/arch/ppc/include/asm/pci.h @@ -0,0 +1,7 @@ +#ifndef __ASM_PPC_PCI_H__ +#define __ASM_PPC_PCI_H__ + +struct arch_pci_dev { +}; + +#endif /* __ASM_PPC_PCI_H__ */ diff --git a/xen/arch/ppc/include/asm/percpu.h b/xen/arch/ppc/include/asm/percpu.h new file mode 100644 index 0000000000..e7c40c0f03 --- /dev/null +++ b/xen/arch/ppc/include/asm/percpu.h @@ -0,0 +1,24 @@ +#ifndef __PPC_PERCPU_H__ +#define __PPC_PERCPU_H__ + +#ifndef __ASSEMBLY__ + +extern char __per_cpu_start[], __per_cpu_data_end[]; +extern unsigned long __per_cpu_offset[NR_CPUS]; +void percpu_init_areas(void); + +#define smp_processor_id() 0 /* TODO: Fix this */ + +#define per_cpu(var, cpu) \ + (*RELOC_HIDE(&per_cpu__##var, __per_cpu_offset[cpu])) +#define this_cpu(var) \ + (*RELOC_HIDE(&per_cpu__##var, smp_processor_id())) + +#define per_cpu_ptr(var, cpu) \ + (*RELOC_HIDE(var, __per_cpu_offset[cpu])) +#define this_cpu_ptr(var) \ + (*RELOC_HIDE(var, smp_processor_id())) + +#endif + +#endif /* __PPC_PERCPU_H__ */ diff --git a/xen/arch/ppc/include/asm/processor.h b/xen/arch/ppc/include/asm/processor.h index edb8a6088d..d3dd943c20 100644 --- a/xen/arch/ppc/include/asm/processor.h +++ b/xen/arch/ppc/include/asm/processor.h @@ -110,6 +110,10 @@ /* Macro to adjust thread priority for hardware multithreading */ #define HMT_very_low() asm volatile ( "or %r31, %r31, %r31" ) +/* TODO: This isn't correct */ +#define cpu_to_core(cpu) (0) +#define cpu_to_socket(cpu) (0) + /* * User-accessible registers: most of these need to be saved/restored * for every nested Xen invocation. @@ -178,6 +182,12 @@ static inline void noreturn die(void) HMT_very_low(); } +/* + * Implemented on pre-POWER10 by setting HMT to low then to medium using + * the special OR forms. See HMT_very_low above. 
+ */ +#define cpu_relax() asm volatile ( "or %r1, %r1, %r1; or %r2, %r2, %r2" ) + #endif /* __ASSEMBLY__ */ #endif /* _ASM_PPC_PROCESSOR_H */ diff --git a/xen/arch/ppc/include/asm/random.h b/xen/arch/ppc/include/asm/random.h new file mode 100644 index 0000000000..2f9e9bbae4 --- /dev/null +++ b/xen/arch/ppc/include/asm/random.h @@ -0,0 +1,9 @@ +#ifndef __ASM_PPC_RANDOM_H__ +#define __ASM_PPC_RANDOM_H__ + +static inline unsigned int arch_get_random(void) +{ + return 0; +} + +#endif /* __ASM_PPC_RANDOM_H__ */ diff --git a/xen/arch/ppc/include/asm/setup.h b/xen/arch/ppc/include/asm/setup.h new file mode 100644 index 0000000000..e4f64879b6 --- /dev/null +++ b/xen/arch/ppc/include/asm/setup.h @@ -0,0 +1,6 @@ +#ifndef __ASM_PPC_SETUP_H__ +#define __ASM_PPC_SETUP_H__ + +#define max_init_domid (0) + +#endif /* __ASM_PPC_SETUP_H__ */ diff --git a/xen/arch/ppc/include/asm/smp.h b/xen/arch/ppc/include/asm/smp.h new file mode 100644 index 0000000000..eca43f0e6c --- /dev/null +++ b/xen/arch/ppc/include/asm/smp.h @@ -0,0 +1,18 @@ +#ifndef __ASM_SMP_H +#define __ASM_SMP_H + +#include +#include + +DECLARE_PER_CPU(cpumask_var_t, cpu_sibling_mask); +DECLARE_PER_CPU(cpumask_var_t, cpu_core_mask); + +#define cpu_is_offline(cpu) unlikely(!cpu_online(cpu)) + +/* + * Do we, for platform reasons, need to actually keep CPUs online when we + * would otherwise prefer them to be off? + */ +#define park_offline_cpus false + +#endif diff --git a/xen/arch/ppc/include/asm/softirq.h b/xen/arch/ppc/include/asm/softirq.h new file mode 100644 index 0000000000..a0b28a5e51 --- /dev/null +++ b/xen/arch/ppc/include/asm/softirq.h @@ -0,0 +1,8 @@ +#ifndef __ASM_PPC_SOFTIRQ_H__ +#define __ASM_PPC_SOFTIRQ_H__ + +#define NR_ARCH_SOFTIRQS 0 + +#define arch_skip_send_event_check(cpu) 0 + +#endif /* __ASM_PPC_SOFTIRQ_H__ */ diff --git a/xen/arch/ppc/include/asm/spinlock.h b/xen/arch/ppc/include/asm/spinlock.h new file mode 100644 index 0000000000..4bdb4b1e98 --- /dev/null +++ b/xen/arch/ppc/include/asm/spinlock.h @@ -0,0 +1,15 @@ +#ifndef __ASM_SPINLOCK_H +#define __ASM_SPINLOCK_H + +#define arch_lock_acquire_barrier() smp_mb() +#define arch_lock_release_barrier() smp_mb() + +#define arch_lock_relax() cpu_relax() +#define arch_lock_signal() +#define arch_lock_signal_wmb() \ +({ \ + smp_wmb(); \ + arch_lock_signal(); \ +}) + +#endif /* __ASM_SPINLOCK_H */ diff --git a/xen/arch/ppc/include/asm/system.h b/xen/arch/ppc/include/asm/system.h index 94091df644..a17072bafd 100644 --- a/xen/arch/ppc/include/asm/system.h +++ b/xen/arch/ppc/include/asm/system.h @@ -1,6 +1,223 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +/* + * Copyright (C) IBM Corp. 
2005 + * Copyright (C) Raptor Engineering LLC + * + * Authors: Jimi Xenidis + * Shawn Anastasio + */ + #ifndef _ASM_SYSTEM_H_ #define _ASM_SYSTEM_H_ -#define smp_wmb() __asm__ __volatile__ ( "lwsync" : : : "memory" ) +#include +#include +#include +#include +#include + +#define xchg(ptr,x) \ +({ \ + __typeof__(*(ptr)) _x_ = (x); \ + (__typeof__(*(ptr))) __xchg((ptr), (unsigned long)_x_, sizeof(*(ptr))); \ +}) + +#define build_xchg(fn, type, ldinsn, stinsn) \ +static inline unsigned long fn(volatile type *m, unsigned long val) \ +{ \ + unsigned long dummy; \ + asm volatile ( PPC_ATOMIC_ENTRY_BARRIER \ + "1: " ldinsn " %0,0,%3\n" \ + stinsn " %2,0,%3\n" \ + "2: bne- 1b\n" \ + PPC_ATOMIC_EXIT_BARRIER \ + : "=&r" (dummy), "=m" (*m) \ + : "r" (val), "r" (m) \ + : "cc", "memory" ); \ + return dummy; \ +} + +build_xchg(__xchg_u8, uint8_t, "lbarx", "stbcx.") +build_xchg(__xchg_u16, uint16_t, "lharx", "sthcx.") +build_xchg(__xchg_u32, uint32_t, "lwarx", "stwcx.") +build_xchg(__xchg_u64, uint64_t, "ldarx", "stdcx.") + +#undef build_xchg + +/* + * This function doesn't exist, so you'll get a linker error + * if something tries to do an invalid xchg(). + */ +extern void __xchg_called_with_bad_pointer(void); + +static inline unsigned long __xchg(volatile void *ptr, unsigned long x, + int size) +{ + switch ( size ) + { + case 1: + return __xchg_u8(ptr, x); + case 2: + return __xchg_u16(ptr, x); + case 4: + return __xchg_u32(ptr, x); + case 8: + return __xchg_u64(ptr, x); + } + __xchg_called_with_bad_pointer(); + return x; +} + + +static inline unsigned long __cmpxchg_u32(volatile int *p, int old, int new) +{ + unsigned int prev; + + asm volatile ( PPC_ATOMIC_ENTRY_BARRIER + "1: lwarx %0,0,%2\n" + "cmpw 0,%0,%3\n" + "bne- 2f\n " + "stwcx. %4,0,%2\n" + "bne- 1b\n" + PPC_ATOMIC_EXIT_BARRIER "\n" + "2:" + : "=&r" (prev), "=m" (*p) + : "r" (p), "r" (old), "r" (new), "m" (*p) + : "cc", "memory" ); + + return prev; +} + +static inline unsigned long __cmpxchg_u64(volatile long *p, unsigned long old, + unsigned long new) +{ + unsigned long prev; + + asm volatile ( PPC_ATOMIC_ENTRY_BARRIER + "1: ldarx %0,0,%2\n" + "cmpd 0,%0,%3\n" + "bne- 2f\n" + "stdcx. %4,0,%2\n" + "bne- 1b\n" + PPC_ATOMIC_EXIT_BARRIER "\n" + "2:" + : "=&r" (prev), "=m" (*p) + : "r" (p), "r" (old), "r" (new), "m" (*p) + : "cc", "memory" ); + + return prev; +} + +/* This function doesn't exist, so you'll get a linker error + if something tries to do an invalid cmpxchg(). */ +extern void __cmpxchg_called_with_bad_pointer(void); + +static always_inline unsigned long __cmpxchg( + volatile void *ptr, + unsigned long old, + unsigned long new, + int size) +{ + switch ( size ) + { + case 2: + BUG_ON("unimplemented"); return 0; /* XXX implement __cmpxchg_u16 ? */ + case 4: + return __cmpxchg_u32(ptr, old, new); + case 8: + return __cmpxchg_u64(ptr, old, new); + } + __cmpxchg_called_with_bad_pointer(); + return old; +} + +#define cmpxchg_user(ptr,o,n) cmpxchg(ptr,o,n) + +#define cmpxchg(ptr,o,n) \ + ({ \ + __typeof__(*(ptr)) _o_ = (o); \ + __typeof__(*(ptr)) _n_ = (n); \ + (__typeof__(*(ptr)))__cmpxchg((ptr), (unsigned long)_o_, \ + (unsigned long)_n_, sizeof(*(ptr))); \ + }) + + +/* + * Memory barrier. + * The sync instruction guarantees that all memory accesses initiated + * by this processor have been performed (with respect to all other + * mechanisms that access memory). The eieio instruction is a barrier + * providing an ordering (separately) for (a) cacheable stores and (b) + * loads and stores to non-cacheable memory (e.g. I/O devices). 
+ * + * mb() prevents loads and stores being reordered across this point. + * rmb() prevents loads being reordered across this point. + * wmb() prevents stores being reordered across this point. + * read_barrier_depends() prevents data-dependent loads being reordered + * across this point (nop on PPC). + * + * We have to use the sync instructions for mb(), since lwsync doesn't + * order loads with respect to previous stores. Lwsync is fine for + * rmb(), though. + * For wmb(), we use sync since wmb is used in drivers to order + * stores to system memory with respect to writes to the device. + * However, smp_wmb() can be a lighter-weight eieio barrier on + * SMP since it is only used to order updates to system memory. + */ +#define mb() __asm__ __volatile__ ( "sync" : : : "memory" ) +#define rmb() __asm__ __volatile__ ( "lwsync" : : : "memory" ) +#define wmb() __asm__ __volatile__ ( "sync" : : : "memory" ) +#define read_barrier_depends() do { } while(0) + +#define set_mb(var, value) do { var = value; smp_mb(); } while (0) +#define set_wmb(var, value) do { var = value; smp_wmb(); } while (0) + +#define smp_mb__before_atomic() smp_mb() +#define smp_mb__after_atomic() smp_mb() + +#define smp_mb() mb() +#define smp_rmb() rmb() +#define smp_wmb() __asm__ __volatile__ ("lwsync" : : : "memory") +#define smp_read_barrier_depends() read_barrier_depends() + +#define local_save_flags(flags) ((flags) = mfmsr()) +#define local_irq_restore(flags) do { \ + __asm__ __volatile__("": : :"memory"); \ + mtmsrd((flags)); \ +} while(0) + +static inline void local_irq_disable(void) +{ + unsigned long msr; + msr = mfmsr(); + mtmsrd(msr & ~MSR_EE); + barrier(); +} + +static inline void local_irq_enable(void) +{ + unsigned long msr; + barrier(); + msr = mfmsr(); + mtmsrd(msr | MSR_EE); +} + +static inline void __do_save_and_cli(unsigned long *flags) +{ + unsigned long msr; + msr = mfmsr(); + *flags = msr; + mtmsrd(msr & ~MSR_EE); + barrier(); +} + +#define local_irq_save(flags) __do_save_and_cli(&flags) + +static inline int local_irq_is_enabled(void) +{ + return !!(mfmsr() & MSR_EE); +} + +#define arch_fetch_and_add(x, v) __sync_fetch_and_add(x, v) #endif /* _ASM_SYSTEM_H */ diff --git a/xen/arch/ppc/include/asm/time.h b/xen/arch/ppc/include/asm/time.h new file mode 100644 index 0000000000..aa9dda82a3 --- /dev/null +++ b/xen/arch/ppc/include/asm/time.h @@ -0,0 +1,23 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#ifndef __ASM_PPC_TIME_H__ +#define __ASM_PPC_TIME_H__ + +#include +#include +#include + +struct vcpu; + +/* TODO: implement */ +static inline void force_update_vcpu_system_time(struct vcpu *v) { + BUG_ON("unimplemented"); +} + +typedef unsigned long cycles_t; + +static inline cycles_t get_cycles(void) +{ + return mfspr(SPRN_TBRL); +} + +#endif /* __ASM_PPC_TIME_H__ */ diff --git a/xen/arch/ppc/include/asm/xenoprof.h b/xen/arch/ppc/include/asm/xenoprof.h new file mode 100644 index 0000000000..e69de29bb2 diff --git a/xen/arch/ppc/mm-radix.c b/xen/arch/ppc/mm-radix.c index 3feeb90ebc..06129cef9c 100644 --- a/xen/arch/ppc/mm-radix.c +++ b/xen/arch/ppc/mm-radix.c @@ -1,13 +1,13 @@ /* SPDX-License-Identifier: GPL-2.0-or-later */ #include #include +#include #include #include #include #include #include -#include #include #include #include diff --git a/xen/arch/ppc/tlb-radix.c b/xen/arch/ppc/tlb-radix.c index 3dde102c62..74213113e0 100644 --- a/xen/arch/ppc/tlb-radix.c +++ b/xen/arch/ppc/tlb-radix.c @@ -5,9 +5,9 @@ * * Copyright 2015-2016, Aneesh Kumar K.V, IBM Corporation. 
*/ +#include #include -#include #include #include diff --git a/xen/include/public/hvm/save.h b/xen/include/public/hvm/save.h index 464ebdb0da..2cf4238daa 100644 --- a/xen/include/public/hvm/save.h +++ b/xen/include/public/hvm/save.h @@ -89,6 +89,8 @@ DECLARE_HVM_SAVE_TYPE(END, 0, struct hvm_save_end); #include "../arch-x86/hvm/save.h" #elif defined(__arm__) || defined(__aarch64__) #include "../arch-arm/hvm/save.h" +#elif defined(__powerpc64__) +#include "../arch-ppc.h" #else #error "unsupported architecture" #endif diff --git a/xen/include/public/pmu.h b/xen/include/public/pmu.h index eb87a81e7b..5a176b6ac3 100644 --- a/xen/include/public/pmu.h +++ b/xen/include/public/pmu.h @@ -11,6 +11,8 @@ #include "arch-x86/pmu.h" #elif defined (__arm__) || defined (__aarch64__) #include "arch-arm.h" +#elif defined (__powerpc64__) +#include "arch-ppc.h" #else #error "Unsupported architecture" #endif diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h index 920567e006..b812a0a324 100644 --- a/xen/include/public/xen.h +++ b/xen/include/public/xen.h @@ -16,6 +16,8 @@ #include "arch-x86/xen.h" #elif defined(__arm__) || defined (__aarch64__) #include "arch-arm.h" +#elif defined(__powerpc64__) +#include "arch-ppc.h" #else #error "Unsupported architecture" #endif From patchwork Tue Sep 12 18:35:53 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shawn Anastasio X-Patchwork-Id: 13382032 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 349A4EE3F10 for ; Tue, 12 Sep 2023 18:36:24 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.600739.936529 (Exim 4.92) (envelope-from ) id 1qg8F8-0005E4-P5; Tue, 12 Sep 2023 18:36:10 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 600739.936529; Tue, 12 Sep 2023 18:36:10 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qg8F8-0005DL-KA; Tue, 12 Sep 2023 18:36:10 +0000 Received: by outflank-mailman (input) for mailman id 600739; Tue, 12 Sep 2023 18:36:09 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qg8F7-0004gy-Es for xen-devel@lists.xenproject.org; Tue, 12 Sep 2023 18:36:09 +0000 Received: from raptorengineering.com (mail.raptorengineering.com [23.155.224.40]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS id 3aef5145-519b-11ee-8786-cb3800f73035; Tue, 12 Sep 2023 20:36:05 +0200 (CEST) Received: from localhost (localhost [127.0.0.1]) by mail.rptsys.com (Postfix) with ESMTP id 56062828588C; Tue, 12 Sep 2023 13:36:04 -0500 (CDT) Received: from mail.rptsys.com ([127.0.0.1]) by localhost (vali.starlink.edu [127.0.0.1]) (amavisd-new, port 10032) with ESMTP id UyJ4rMT5KA0S; Tue, 12 Sep 2023 13:36:02 -0500 (CDT) Received: from localhost (localhost [127.0.0.1]) by mail.rptsys.com (Postfix) with ESMTP id 675458286999; Tue, 12 Sep 2023 13:36:02 -0500 (CDT) Received: from mail.rptsys.com ([127.0.0.1]) by localhost (vali.starlink.edu [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id 
28PoQVfEyAXc; Tue, 12 Sep 2023 13:36:02 -0500 (CDT) Received: from raptor-ewks-026.rptsys.com (5.edge.rptsys.com [23.155.224.38]) by mail.rptsys.com (Postfix) with ESMTPSA id 1207C82869A9; Tue, 12 Sep 2023 13:36:02 -0500 (CDT) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 3aef5145-519b-11ee-8786-cb3800f73035 DKIM-Filter: OpenDKIM Filter v2.10.3 mail.rptsys.com 675458286999 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=raptorengineering.com; s=B8E824E6-0BE2-11E6-931D-288C65937AAD; t=1694543762; bh=6LM95rrDoqCiSn6LemOLiswbsaDeEh5eaAd90MA8xic=; h=From:To:Date:Message-Id:MIME-Version; b=I8kABtDzdiCGYaKXRW+75tPCf/60BW7gNz2jXOBceZiShezZv0TPCEk7lYeLff1Pg mXAGyspx8qBci3qd9Bpm4tvybIX0a2xmirIoHO2FGAJd2tPci31MkhlhlB99wYbKN/ aQGgLMmqqV3ai6pqWq+7BxIdpLiYH6RDXAq8nOhc= X-Virus-Scanned: amavisd-new at rptsys.com From: Shawn Anastasio To: xen-devel@lists.xenproject.org Cc: Timothy Pearson , Jan Beulich , Shawn Anastasio Subject: [PATCH v5 4/5] xen/ppc: Add stub function and symbol definitions Date: Tue, 12 Sep 2023 13:35:53 -0500 Message-Id: <26d561b1878082a1666935fd8c9d477de423e8ed.1694543103.git.sanastasio@raptorengineering.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: References: MIME-Version: 1.0 Add stub function and symbol definitions required by common code. If the file that the definition is supposed to be located in doesn't already exist yet, temporarily place its definition in the new stubs.c Signed-off-by: Shawn Anastasio Acked-by: Jan Beulich --- v5: No changes. v4: No changes. v3: - (stubs.c) Drop ack_none hook definition v2: - Use BUG_ON("unimplemented") instead of BUG() for unimplemented functions to make searching easier. 
- (mm-radix.c) Drop total_pages definition - (mm-radix.c) Move __read_mostly from after variable name to before it in declaration of `frametable_base_pdx` - (setup.c) Fix include order - (stubs.c) Drop stub .end hw_irq_controller hook xen/arch/ppc/Makefile | 1 + xen/arch/ppc/mm-radix.c | 42 +++++ xen/arch/ppc/setup.c | 8 + xen/arch/ppc/stubs.c | 339 ++++++++++++++++++++++++++++++++++++++++ 4 files changed, 390 insertions(+) create mode 100644 xen/arch/ppc/stubs.c -- 2.30.2 diff --git a/xen/arch/ppc/Makefile b/xen/arch/ppc/Makefile index b3205b8f7a..6d5569ff64 100644 --- a/xen/arch/ppc/Makefile +++ b/xen/arch/ppc/Makefile @@ -4,6 +4,7 @@ obj-$(CONFIG_EARLY_PRINTK) += early_printk.init.o obj-y += mm-radix.o obj-y += opal.o obj-y += setup.o +obj-y += stubs.o obj-y += tlb-radix.o $(TARGET): $(TARGET)-syms diff --git a/xen/arch/ppc/mm-radix.c b/xen/arch/ppc/mm-radix.c index 06129cef9c..11d0f27b60 100644 --- a/xen/arch/ppc/mm-radix.c +++ b/xen/arch/ppc/mm-radix.c @@ -265,3 +265,45 @@ void __init setup_initial_pagetables(void) /* Turn on the MMU */ enable_mmu(); } + +/* + * TODO: Implement the functions below + */ +unsigned long __read_mostly frametable_base_pdx; + +void put_page(struct page_info *p) +{ + BUG_ON("unimplemented"); +} + +void arch_dump_shared_mem_info(void) +{ + BUG_ON("unimplemented"); +} + +int xenmem_add_to_physmap_one(struct domain *d, + unsigned int space, + union add_to_physmap_extra extra, + unsigned long idx, + gfn_t gfn) +{ + BUG_ON("unimplemented"); +} + +int destroy_xen_mappings(unsigned long s, unsigned long e) +{ + BUG_ON("unimplemented"); +} + +int map_pages_to_xen(unsigned long virt, + mfn_t mfn, + unsigned long nr_mfns, + unsigned int flags) +{ + BUG_ON("unimplemented"); +} + +int __init populate_pt_range(unsigned long virt, unsigned long nr_mfns) +{ + BUG_ON("unimplemented"); +} diff --git a/xen/arch/ppc/setup.c b/xen/arch/ppc/setup.c index 3fc7705d9b..959c1454a0 100644 --- a/xen/arch/ppc/setup.c +++ b/xen/arch/ppc/setup.c @@ -1,5 +1,8 @@ /* SPDX-License-Identifier: GPL-2.0-or-later */ #include +#include +#include +#include #include #include #include @@ -33,3 +36,8 @@ void __init noreturn start_xen(unsigned long r3, unsigned long r4, unreachable(); } + +void arch_get_xen_caps(xen_capabilities_info_t *info) +{ + BUG_ON("unimplemented"); +} diff --git a/xen/arch/ppc/stubs.c b/xen/arch/ppc/stubs.c new file mode 100644 index 0000000000..4c276b0e39 --- /dev/null +++ b/xen/arch/ppc/stubs.c @@ -0,0 +1,339 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#include +#include +#include +#include +#include +#include +#include + +#include + +/* smpboot.c */ + +cpumask_t cpu_online_map; +cpumask_t cpu_present_map; +cpumask_t cpu_possible_map; + +/* ID of the PCPU we're running on */ +DEFINE_PER_CPU(unsigned int, cpu_id); +/* XXX these seem awfully x86ish... 
*/ +/* representing HT siblings of each logical CPU */ +DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_sibling_mask); +/* representing HT and core siblings of each logical CPU */ +DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_core_mask); + +nodemask_t __read_mostly node_online_map = { { [0] = 1UL } }; + +/* time.c */ + +s_time_t get_s_time(void) +{ + BUG_ON("unimplemented"); +} + +int reprogram_timer(s_time_t timeout) +{ + BUG_ON("unimplemented"); +} + +void send_timer_event(struct vcpu *v) +{ + BUG_ON("unimplemented"); +} + +/* traps.c */ + +void show_execution_state(const struct cpu_user_regs *regs) +{ + BUG_ON("unimplemented"); +} + +void arch_hypercall_tasklet_result(struct vcpu *v, long res) +{ + BUG_ON("unimplemented"); +} + +void vcpu_show_execution_state(struct vcpu *v) +{ + BUG_ON("unimplemented"); +} + +/* shutdown.c */ + +void machine_restart(unsigned int delay_millisecs) +{ + BUG_ON("unimplemented"); +} + +void machine_halt(void) +{ + BUG_ON("unimplemented"); +} + +/* vm_event.c */ + +void vm_event_fill_regs(vm_event_request_t *req) +{ + BUG_ON("unimplemented"); +} + +void vm_event_set_registers(struct vcpu *v, vm_event_response_t *rsp) +{ + BUG_ON("unimplemented"); +} + +void vm_event_monitor_next_interrupt(struct vcpu *v) +{ + /* Not supported on PPC. */ +} + +/* domctl.c */ +void arch_get_domain_info(const struct domain *d, + struct xen_domctl_getdomaininfo *info) +{ + BUG_ON("unimplemented"); +} + +/* monitor.c */ + +int arch_monitor_domctl_event(struct domain *d, + struct xen_domctl_monitor_op *mop) +{ + BUG_ON("unimplemented"); +} + +/* smp.c */ + +void arch_flush_tlb_mask(const cpumask_t *mask) +{ + BUG_ON("unimplemented"); +} + +void smp_send_event_check_mask(const cpumask_t *mask) +{ + BUG_ON("unimplemented"); +} + +void smp_send_call_function_mask(const cpumask_t *mask) +{ + BUG_ON("unimplemented"); +} + +/* irq.c */ + +struct pirq *alloc_pirq_struct(struct domain *d) +{ + BUG_ON("unimplemented"); +} + +int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share) +{ + BUG_ON("unimplemented"); +} + +void pirq_guest_unbind(struct domain *d, struct pirq *pirq) +{ + BUG_ON("unimplemented"); +} + +void pirq_set_affinity(struct domain *d, int pirq, const cpumask_t *mask) +{ + BUG_ON("unimplemented"); +} + +hw_irq_controller no_irq_type = { + .typename = "none", + .startup = irq_startup_none, + .shutdown = irq_shutdown_none, + .enable = irq_enable_none, + .disable = irq_disable_none, +}; + +int arch_init_one_irq_desc(struct irq_desc *desc) +{ + BUG_ON("unimplemented"); +} + +void smp_send_state_dump(unsigned int cpu) +{ + BUG_ON("unimplemented"); +} + +/* domain.c */ + +DEFINE_PER_CPU(struct vcpu *, curr_vcpu); +unsigned long __per_cpu_offset[NR_CPUS]; + +void context_switch(struct vcpu *prev, struct vcpu *next) +{ + BUG_ON("unimplemented"); +} + +void continue_running(struct vcpu *same) +{ + BUG_ON("unimplemented"); +} + +void sync_local_execstate(void) +{ + BUG_ON("unimplemented"); +} + +void sync_vcpu_execstate(struct vcpu *v) +{ + BUG_ON("unimplemented"); +} + +void startup_cpu_idle_loop(void) +{ + BUG_ON("unimplemented"); +} + +void free_domain_struct(struct domain *d) +{ + BUG_ON("unimplemented"); +} + +void dump_pageframe_info(struct domain *d) +{ + BUG_ON("unimplemented"); +} + +void free_vcpu_struct(struct vcpu *v) +{ + BUG_ON("unimplemented"); +} + +int arch_vcpu_create(struct vcpu *v) +{ + BUG_ON("unimplemented"); +} + +void arch_vcpu_destroy(struct vcpu *v) +{ + BUG_ON("unimplemented"); +} + +void vcpu_switch_to_aarch64_mode(struct vcpu *v) +{ + 
BUG_ON("unimplemented"); +} + +int arch_sanitise_domain_config(struct xen_domctl_createdomain *config) +{ + BUG_ON("unimplemented"); +} + +int arch_domain_create(struct domain *d, + struct xen_domctl_createdomain *config, + unsigned int flags) +{ + BUG_ON("unimplemented"); +} + +int arch_domain_teardown(struct domain *d) +{ + BUG_ON("unimplemented"); +} + +void arch_domain_destroy(struct domain *d) +{ + BUG_ON("unimplemented"); +} + +void arch_domain_shutdown(struct domain *d) +{ + BUG_ON("unimplemented"); +} + +void arch_domain_pause(struct domain *d) +{ + BUG_ON("unimplemented"); +} + +void arch_domain_unpause(struct domain *d) +{ + BUG_ON("unimplemented"); +} + +int arch_domain_soft_reset(struct domain *d) +{ + BUG_ON("unimplemented"); +} + +void arch_domain_creation_finished(struct domain *d) +{ + BUG_ON("unimplemented"); +} + +int arch_set_info_guest(struct vcpu *v, vcpu_guest_context_u c) +{ + BUG_ON("unimplemented"); +} + +int arch_initialise_vcpu(struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg) +{ + BUG_ON("unimplemented"); +} + +int arch_vcpu_reset(struct vcpu *v) +{ + BUG_ON("unimplemented"); +} + +int domain_relinquish_resources(struct domain *d) +{ + BUG_ON("unimplemented"); +} + +void arch_dump_domain_info(struct domain *d) +{ + BUG_ON("unimplemented"); +} + +void arch_dump_vcpu_info(struct vcpu *v) +{ + BUG_ON("unimplemented"); +} + +void vcpu_mark_events_pending(struct vcpu *v) +{ + BUG_ON("unimplemented"); +} + +void vcpu_update_evtchn_irq(struct vcpu *v) +{ + BUG_ON("unimplemented"); +} + +void vcpu_block_unless_event_pending(struct vcpu *v) +{ + BUG_ON("unimplemented"); +} + +void vcpu_kick(struct vcpu *v) +{ + BUG_ON("unimplemented"); +} + +struct domain *alloc_domain_struct(void) +{ + BUG_ON("unimplemented"); +} + +struct vcpu *alloc_vcpu_struct(const struct domain *d) +{ + BUG_ON("unimplemented"); +} + +unsigned long +hypercall_create_continuation(unsigned int op, const char *format, ...) 
+{ + BUG_ON("unimplemented"); +} + +int __init parse_arch_dom0_param(const char *s, const char *e) +{ + BUG_ON("unimplemented"); +} From patchwork Tue Sep 12 18:35:54 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shawn Anastasio X-Patchwork-Id: 13382031 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 250BEEE3F0B for ; Tue, 12 Sep 2023 18:36:23 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.600738.936523 (Exim 4.92) (envelope-from ) id 1qg8F8-0005AR-Ei; Tue, 12 Sep 2023 18:36:10 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 600738.936523; Tue, 12 Sep 2023 18:36:10 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qg8F8-0005AC-BH; Tue, 12 Sep 2023 18:36:10 +0000 Received: by outflank-mailman (input) for mailman id 600738; Tue, 12 Sep 2023 18:36:09 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qg8F6-0004Nn-UC for xen-devel@lists.xenproject.org; Tue, 12 Sep 2023 18:36:08 +0000 Received: from raptorengineering.com (mail.raptorengineering.com [23.155.224.40]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS id 3ad72a20-519b-11ee-9b0d-b553b5be7939; Tue, 12 Sep 2023 20:36:05 +0200 (CEST) Received: from localhost (localhost [127.0.0.1]) by mail.rptsys.com (Postfix) with ESMTP id 34E8E8285940; Tue, 12 Sep 2023 13:36:04 -0500 (CDT) Received: from mail.rptsys.com ([127.0.0.1]) by localhost (vali.starlink.edu [127.0.0.1]) (amavisd-new, port 10032) with ESMTP id wiCnQFjRUYTc; Tue, 12 Sep 2023 13:36:03 -0500 (CDT) Received: from localhost (localhost [127.0.0.1]) by mail.rptsys.com (Postfix) with ESMTP id CB83A828588C; Tue, 12 Sep 2023 13:36:02 -0500 (CDT) Received: from mail.rptsys.com ([127.0.0.1]) by localhost (vali.starlink.edu [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id G-6wvpSRwke5; Tue, 12 Sep 2023 13:36:02 -0500 (CDT) Received: from raptor-ewks-026.rptsys.com (5.edge.rptsys.com [23.155.224.38]) by mail.rptsys.com (Postfix) with ESMTPSA id 5489882869AC; Tue, 12 Sep 2023 13:36:02 -0500 (CDT) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 3ad72a20-519b-11ee-9b0d-b553b5be7939 DKIM-Filter: OpenDKIM Filter v2.10.3 mail.rptsys.com CB83A828588C DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=raptorengineering.com; s=B8E824E6-0BE2-11E6-931D-288C65937AAD; t=1694543762; bh=cF5IvHr5u7imGadcwYNiu5y563SLZGH4suAc9S6xZk0=; h=From:To:Date:Message-Id:MIME-Version; b=sRurfK0c/cp+pmQdGiETrI7rVpVS3xanOQQqk86LrHVno6CDvUHZWxe+OXpg0/w5S xOseXb95hE87vAwsi+mIfErvx98ChQw2f5VnT5zUpvyPdsZIAOj2ffpQknGzlaZKDz 18dQIUmIMGwTix7YCkDDdnYAWgi7y3vX7/eiFAbY= X-Virus-Scanned: amavisd-new at rptsys.com From: Shawn Anastasio To: xen-devel@lists.xenproject.org Cc: Timothy Pearson , Jan Beulich , Shawn Anastasio Subject: [PATCH v5 5/5] xen/ppc: 
Enable full Xen build Date: Tue, 12 Sep 2023 13:35:54 -0500 Message-Id: <98a4fe2a4a2aee0a33b6b2110cd6ee906d4e0fe1.1694543103.git.sanastasio@raptorengineering.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: References: MIME-Version: 1.0 Bring ppc's Makefile and arch.mk in line with arm and x86 to disable the build overrides and enable the full Xen build. Signed-off-by: Shawn Anastasio Reviewed-by: Jan Beulich --- xen/arch/ppc/Makefile | 16 +++++++++++++++- xen/arch/ppc/arch.mk | 3 --- 2 files changed, 15 insertions(+), 4 deletions(-) diff --git a/xen/arch/ppc/Makefile b/xen/arch/ppc/Makefile index 6d5569ff64..71feb5e2c4 100644 --- a/xen/arch/ppc/Makefile +++ b/xen/arch/ppc/Makefile @@ -11,10 +11,24 @@ $(TARGET): $(TARGET)-syms cp -f $< $@ $(TARGET)-syms: $(objtree)/prelink.o $(obj)/xen.lds - $(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) -o $@ + $(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< \ + $(objtree)/common/symbols-dummy.o -o $(dot-target).0 + $(NM) -pa --format=sysv $(dot-target).0 \ + | $(objtree)/tools/symbols $(all_symbols) --sysv --sort \ + > $(dot-target).0.S + $(MAKE) $(build)=$(@D) $(dot-target).0.o + $(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< \ + $(dot-target).0.o -o $(dot-target).1 + $(NM) -pa --format=sysv $(dot-target).1 \ + | $(objtree)/tools/symbols $(all_symbols) --sysv --sort \ + > $(dot-target).1.S + $(MAKE) $(build)=$(@D) $(dot-target).1.o + $(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) \ + $(dot-target).1.o -o $@ $(NM) -pa --format=sysv $@ \ | $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \ > $@.map + rm -f $(@D)/.$(@F).[0-9]* $(obj)/xen.lds: $(src)/xen.lds.S FORCE $(call if_changed_dep,cpp_lds_S) diff --git a/xen/arch/ppc/arch.mk b/xen/arch/ppc/arch.mk index d05cbf1df5..917ad0e6a8 100644 --- a/xen/arch/ppc/arch.mk +++ b/xen/arch/ppc/arch.mk @@ -7,6 +7,3 @@ CFLAGS += -m64 -mlittle-endian -mcpu=$(ppc-march-y) CFLAGS += -mstrict-align -mcmodel=medium -mabi=elfv2 -fPIC -mno-altivec -mno-vsx -msoft-float LDFLAGS += -m elf64lppc - -# TODO: Drop override when more of the build is working -override ALL_OBJS-y = arch/$(SRCARCH)/built_in.o common/libfdt/built_in.o lib/built_in.o
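
For reference, the two-stage link performed by the new $(TARGET)-syms rule above boils down to roughly the sequence sketched below. This is an illustrative outline only, not part of the patch: the real rule drives everything through the Makefile variables shown in the diff ($(LD), $(NM), $(XEN_LDFLAGS), $(dot-target), $(objtree), ...), and the concrete paths and filenames here are simplified assumptions.

# Pass 0: link against a dummy symbol table so the image layout is roughly right.
ld -T xen.lds -N prelink.o common/symbols-dummy.o -o .xen-syms.0
nm -pa --format=sysv .xen-syms.0 | tools/symbols --sysv --sort > .xen-syms.0.S
# (assemble .xen-syms.0.S into .xen-syms.0.o)

# Pass 1: relink with the generated table; addresses shift slightly, so regenerate it.
ld -T xen.lds -N prelink.o .xen-syms.0.o -o .xen-syms.1
nm -pa --format=sysv .xen-syms.1 | tools/symbols --sysv --sort > .xen-syms.1.S
# (assemble .xen-syms.1.S into .xen-syms.1.o)

# Final link: embed the now-stable table and emit the external symbol map.
ld -T xen.lds -N prelink.o .xen-syms.1.o -o xen-syms
nm -pa --format=sysv xen-syms | tools/symbols --all-symbols --xensyms --sysv --sort > xen-syms.map

The second pass exists because embedding the generated symbol table changes section sizes and therefore symbol addresses; regenerating the table from an intermediate image before the final link keeps the embedded addresses accurate.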