From patchwork Wed Aug 2 23:02:57 2023
X-Patchwork-Submitter: Shawn Anastasio
X-Patchwork-Id: 13338908
From: Shawn Anastasio
To: xen-devel@lists.xenproject.org
Cc: Timothy Pearson, Andrew Cooper, Jan Beulich, Shawn Anastasio
Subject: [PATCH 1/9] xen/common: Add missing #includes treewide
Date: Wed, 2 Aug 2023 18:02:57 -0500
Message-Id:
<2c9eb4fc175a1bdd21293f2e2611d8e21991636d.1691016993.git.sanastasio@raptorengineering.com>
X-Mailer: git-send-email 2.30.2 In-Reply-To: References: MIME-Version: 1.0

A few files treewide depend on definitions in headers that they don't include. This works when arch headers end up including the required headers by chance, but broke on ppc64 with only minimal/stub arch headers.

Signed-off-by: Shawn Anastasio
---
 xen/common/memory.c | 1 +
 xen/common/symbols.c | 1 +
 xen/common/xmalloc_tlsf.c | 1 +
 xen/include/xen/domain.h | 1 +
 xen/include/xen/iommu.h | 1 +
 xen/include/xen/sched.h | 1 +
 6 files changed, 6 insertions(+)

diff --git a/xen/common/memory.c b/xen/common/memory.c index c206fa4808..1b185b00e4 100644 --- a/xen/common/memory.c +++ b/xen/common/memory.c @@ -28,6 +28,7 @@ #include #include #include +#include #include #include
diff --git a/xen/common/symbols.c b/xen/common/symbols.c index 9377f41424..691e617925 100644 --- a/xen/common/symbols.c +++ b/xen/common/symbols.c @@ -19,6 +19,7 @@ #include #include #include +#include #ifdef SYMBOLS_ORIGIN extern const unsigned int symbols_offsets[];
diff --git a/xen/common/xmalloc_tlsf.c b/xen/common/xmalloc_tlsf.c index c603c39bb9..349b31cb4c 100644 --- a/xen/common/xmalloc_tlsf.c +++ b/xen/common/xmalloc_tlsf.c @@ -27,6 +27,7 @@ #include #include #include +#include #define MAX_POOL_NAME_LEN 16
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h index d35af34841..767127b440 100644 --- a/xen/include/xen/domain.h +++ b/xen/include/xen/domain.h @@ -4,6 +4,7 @@ #include +#include #include #include #include
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h index 405db59971..ef817efec9 100644 --- a/xen/include/xen/iommu.h +++ b/xen/include/xen/iommu.h @@ -24,6 +24,7 @@ #include #include #include +#include #include #include #include
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h index 854f3e32c0..6a96534a45 100644 --- a/xen/include/xen/sched.h +++ b/xen/include/xen/sched.h @@ -21,6 +21,7 @@ #include #include #include +#include #include #include #include
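The breakage this patch describes is indirect inclusion: a file uses a definition whose header it never names, and only builds because some arch header pulls that header in transitively. A minimal illustration of the failure mode, with a hypothetical file name that is not one of the files touched by the patch:

    /* common/example.c -- illustration only, not part of the series */
    #include <xen/types.h>  /* may transitively pull in other xen/ headers via asm/ ones */

    void example_sanity_check(unsigned int order)
    {
        /*
         * BUG_ON() is defined in a header this file never includes directly.
         * The build works only while the transitive include chain happens to
         * supply it; with ppc64's minimal stub arch headers that chain is
         * broken and compilation fails until the missing #include is added
         * explicitly, which is what the hunks above do for six files.
         */
        BUG_ON(order > 20);
    }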
From patchwork Wed Aug 2 23:02:58 2023
X-Patchwork-Submitter: Shawn Anastasio
X-Patchwork-Id: 13338915
From: Shawn Anastasio
To: xen-devel@lists.xenproject.org
Cc: Timothy Pearson, Andrew Cooper, Jan Beulich, Shawn Anastasio
Subject: [PATCH 2/9] xen/ppc: Add public/arch-ppc.h
Date: Wed, 2 Aug 2023 18:02:58 -0500
Message-Id: <14d8455ca49f69a56e87aad5d4e20cf8f77e55cd.1691016993.git.sanastasio@raptorengineering.com>
X-Mailer: git-send-email 2.30.2 In-Reply-To: References: MIME-Version: 1.0

Signed-off-by: Shawn Anastasio
---
 xen/include/public/arch-ppc.h | 140 ++++++++++++++++++++++++++++++++++
 1 file changed, 140 insertions(+)
 create mode 100644 xen/include/public/arch-ppc.h

diff --git a/xen/include/public/arch-ppc.h b/xen/include/public/arch-ppc.h new file mode 100644 index 0000000000..0eb7ce4208 --- /dev/null +++ b/xen/include/public/arch-ppc.h @@ -0,0 +1,140 @@ +/* + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to + * deal in the Software without restriction, including without limitation the + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or + * sell copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER + * DEALINGS IN THE SOFTWARE. + * + * Copyright (C) IBM Corp. 2005, 2006 + * Copyright (C) Raptor Engineering, LLC 2023 + * + * Authors: Hollis Blanchard + * Timothy Pearson + */ + +#ifndef __XEN_PUBLIC_ARCH_PPC64_H__ +#define __XEN_PUBLIC_ARCH_PPC64_H__ + +#define int64_aligned_t int64_t __attribute__((aligned(8))) +#define uint64_aligned_t uint64_t __attribute__((aligned(8))) + +#ifndef __ASSEMBLY__ +#define ___DEFINE_XEN_GUEST_HANDLE(name, type) \ + typedef union { type *p; unsigned long q; } \ + __guest_handle_ ## name; \ + typedef union { type *p; uint64_aligned_t q; } \ + __guest_handle_64_ ## name + +#define __DEFINE_XEN_GUEST_HANDLE(name, type) \ + ___DEFINE_XEN_GUEST_HANDLE(name, type); \ + ___DEFINE_XEN_GUEST_HANDLE(const_##name, const type) +#define DEFINE_XEN_GUEST_HANDLE(name) __DEFINE_XEN_GUEST_HANDLE(name, name) +#define __XEN_GUEST_HANDLE(name) __guest_handle_64_ ## name +#define XEN_GUEST_HANDLE(name) __XEN_GUEST_HANDLE(name) +#define XEN_GUEST_HANDLE_PARAM(name) __guest_handle_ ## name +#define set_xen_guest_handle_raw(hnd, val) \ + do { \ + __typeof__(&(hnd)) _sxghr_tmp = &(hnd); \ + _sxghr_tmp->q = 0; \ + _sxghr_tmp->p = val; \ + } while ( 0 ) +#define set_xen_guest_handle(hnd, val) set_xen_guest_handle_raw(hnd, val) + +#ifdef __XEN_TOOLS__ +#define get_xen_guest_handle(val, hnd) do { val = (hnd).p; } while (0) +#endif + +typedef uint64_t xen_pfn_t; +#define PRI_xen_pfn PRIx64 +#define PRIu_xen_pfn PRIu64 + +/* + * Maximum number of virtual CPUs in legacy multi-processor guests. + * Only one. All other VCPUS must use VCPUOP_register_vcpu_info. + */ +#define XEN_LEGACY_MAX_VCPUS 1 + +typedef uint64_t xen_ulong_t; +#define PRI_xen_ulong PRIx64 +#endif + +/* + * Pointers and other address fields inside interface structures are padded to + * 64 bits. This means that field alignments aren't different between 32- and + * 64-bit architectures. + */ +/* NB. Multi-level macro ensures __LINE__ is expanded before concatenation. */ +#define __MEMORY_PADDING(_X) +#define _MEMORY_PADDING(_X) __MEMORY_PADDING(_X) +#define MEMORY_PADDING _MEMORY_PADDING(__LINE__) + +/* And the trap vector is... */ +#define TRAP_INSTR "li 0,-1; sc" /* XXX just "sc"? */ + +#ifndef __ASSEMBLY__ + +#define XENCOMM_INLINE_FLAG (1UL << 63) + +typedef uint64_t xen_ulong_t; + +/* User-accessible registers: nost of these need to be saved/restored + * for every nested Xen invocation. */ +struct vcpu_guest_core_regs +{ + uint64_t gprs[32]; + uint64_t lr; + uint64_t ctr; + uint64_t srr0; + uint64_t srr1; + uint64_t pc; + uint64_t msr; + uint64_t fpscr; /* XXX Is this necessary */ + uint64_t xer; + uint64_t hid4; /* debug only */ + uint64_t dar; /* debug only */ + uint32_t dsisr; /* debug only */ + uint32_t cr; + uint32_t __pad; /* good spot for another 32bit reg */ + uint32_t entry_vector; +}; +typedef struct vcpu_guest_core_regs vcpu_guest_core_regs_t; + +typedef uint64_t tsc_timestamp_t; /* RDTSC timestamp */ /* XXX timebase */ + +/* ONLY used to communicate with dom0! See also struct exec_domain. 
*/ +struct vcpu_guest_context { + vcpu_guest_core_regs_t user_regs; /* User-level CPU registers */ + uint64_t sdr1; /* Pagetable base */ + /* XXX etc */ +}; +typedef struct vcpu_guest_context vcpu_guest_context_t; +DEFINE_XEN_GUEST_HANDLE(vcpu_guest_context_t); + +struct arch_shared_info { + uint64_t boot_timebase; +}; + +struct arch_vcpu_info { +}; + +struct xen_arch_domainconfig { +}; + +typedef struct xen_pmu_arch { uint8_t dummy; } xen_pmu_arch_t; + +/* Support for multi-processor guests. */ +#endif + +#endif From patchwork Wed Aug 2 23:02:59 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shawn Anastasio X-Patchwork-Id: 13338911 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id D99FDC04A94 for ; Wed, 2 Aug 2023 23:03:37 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.575862.901350 (Exim 4.92) (envelope-from ) id 1qRKsG-0002zg-Bm; Wed, 02 Aug 2023 23:03:24 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 575862.901350; Wed, 02 Aug 2023 23:03:24 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qRKsG-0002y1-4C; Wed, 02 Aug 2023 23:03:24 +0000 Received: by outflank-mailman (input) for mailman id 575862; Wed, 02 Aug 2023 23:03:23 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qRKsE-0002ha-Vq for xen-devel@lists.xenproject.org; Wed, 02 Aug 2023 23:03:23 +0000 Received: from raptorengineering.com (mail.raptorengineering.com [23.155.224.40]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS id c52108b4-3188-11ee-8613-37d641c3527e; Thu, 03 Aug 2023 01:03:19 +0200 (CEST) Received: from localhost (localhost [127.0.0.1]) by mail.rptsys.com (Postfix) with ESMTP id 8525C82857CC; Wed, 2 Aug 2023 18:03:18 -0500 (CDT) Received: from mail.rptsys.com ([127.0.0.1]) by localhost (vali.starlink.edu [127.0.0.1]) (amavisd-new, port 10032) with ESMTP id rrykhtEJfngC; Wed, 2 Aug 2023 18:03:16 -0500 (CDT) Received: from localhost (localhost [127.0.0.1]) by mail.rptsys.com (Postfix) with ESMTP id B5F548285755; Wed, 2 Aug 2023 18:03:16 -0500 (CDT) Received: from mail.rptsys.com ([127.0.0.1]) by localhost (vali.starlink.edu [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id h0xsFUFCJUW2; Wed, 2 Aug 2023 18:03:16 -0500 (CDT) Received: from raptor-ewks-026.lan (5.edge.rptsys.com [23.155.224.38]) by mail.rptsys.com (Postfix) with ESMTPSA id 549648285570; Wed, 2 Aug 2023 18:03:16 -0500 (CDT) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: c52108b4-3188-11ee-8613-37d641c3527e DKIM-Filter: OpenDKIM Filter v2.10.3 mail.rptsys.com B5F548285755 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=raptorengineering.com; s=B8E824E6-0BE2-11E6-931D-288C65937AAD; t=1691017396; bh=lLrkou/Ji/TdfUO/m/EFd+jofhnUCIg/laAuoA5SU4Q=; 
h=From:To:Date:Message-Id:MIME-Version; b=Vh2PyYIMAFUm+xC0GS7Jt5f0z3KnWMNVFZuA7r8xSv0XD/Dvhd/d5I0xv9QAAM4pv S7EW6eY/okJCgkuN22nMnQHTwkQkRONvfxIQyJnz+Hj91a6oLD/QmSjO5R93Cazxsz EClDd2U0j6w/eUVvAKOZvZV88SdeXiPVMAzSp498= X-Virus-Scanned: amavisd-new at rptsys.com From: Shawn Anastasio To: xen-devel@lists.xenproject.org Cc: Timothy Pearson , Andrew Cooper , Jan Beulich , Shawn Anastasio Subject: [PATCH 3/9] xen/ppc: Implement atomic.h Date: Wed, 2 Aug 2023 18:02:59 -0500 Message-Id: <8035b9659a5930c00802dd36ab52574695d6b419.1691016993.git.sanastasio@raptorengineering.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: References: MIME-Version: 1.0 Implement atomic.h for PPC, based off of the original Xen 3.2 implementation. Signed-off-by: Shawn Anastasio --- xen/arch/ppc/include/asm/atomic.h | 387 ++++++++++++++++++++++++++++++ xen/arch/ppc/include/asm/memory.h | 34 +++ 2 files changed, 421 insertions(+) create mode 100644 xen/arch/ppc/include/asm/atomic.h create mode 100644 xen/arch/ppc/include/asm/memory.h diff --git a/xen/arch/ppc/include/asm/atomic.h b/xen/arch/ppc/include/asm/atomic.h new file mode 100644 index 0000000000..336dedc476 --- /dev/null +++ b/xen/arch/ppc/include/asm/atomic.h @@ -0,0 +1,387 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +/* + * PowerPC64 atomic operations + * + * Copyright (C) 2001 Paul Mackerras , IBM + * Copyright (C) 2001 Anton Blanchard , IBM + * Copyright Raptor Engineering LLC + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License + * as published by the Free Software Foundation; either version + * 2 of the License, or (at your option) any later version. + */ + +#ifndef _ASM_PPC64_ATOMIC_H_ +#define _ASM_PPC64_ATOMIC_H_ + +#include + +#include +#include + +static inline int atomic_read(const atomic_t *v) +{ + return *(volatile int *)&v->counter; +} + +static inline int _atomic_read(atomic_t v) +{ + return v.counter; +} + +static inline void atomic_set(atomic_t *v, int i) +{ + v->counter = i; +} + +static inline void _atomic_set(atomic_t *v, int i) +{ + v->counter = i; +} + +void __bad_atomic_read(const volatile void *p, void *res); +void __bad_atomic_size(void); + +#define build_atomic_read(name, insn, type) \ + static inline type name(const volatile type *addr) \ + { \ + type ret; \ + asm volatile ( insn "%U1%X1 %0,%1" : "=r"(ret) : "m<>"(*addr) ); \ + return ret; \ + } + +#define build_atomic_write(name, insn, type) \ + static inline void name(volatile type *addr, type val) \ + { \ + asm volatile ( insn "%U0%X0 %1,%0" : "=m<>"(*addr) : "r"(val) ); \ + } + +#define build_add_sized(name, ldinsn, stinsn, type) \ + static inline void name(volatile type *addr, type val) \ + { \ + type t; \ + asm volatile ( "1: " ldinsn " %0,0,%3\n" \ + "add%I2 %0,%0,%2\n" \ + stinsn " %0,0,%3 \n" \ + "bne- 1b\n" \ + : "=&r"(t), "+m"(*addr) \ + : "r"(val), "r"(addr) \ + : "cc" ); \ + } + +build_atomic_read(read_u8_atomic, "lbz", uint8_t) +build_atomic_read(read_u16_atomic, "lhz", uint16_t) +build_atomic_read(read_u32_atomic, "lwz", uint32_t) +build_atomic_read(read_u64_atomic, "ldz", uint64_t) + +build_atomic_write(write_u8_atomic, "stb", uint8_t) +build_atomic_write(write_u16_atomic, "sth", uint16_t) +build_atomic_write(write_u32_atomic, "stw", uint32_t) +build_atomic_write(write_u64_atomic, "std", uint64_t) + +build_add_sized(add_u8_sized, "lbarx", "stbcx.",uint8_t) +build_add_sized(add_u16_sized, "lharx", "sthcx.", uint16_t) +build_add_sized(add_u32_sized, "lwarx", "stwcx.", uint32_t) + 
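/*
 * An illustrative aside on the generated helpers above (a sketch, not taken
 * from the posted patch; the callers and variable below are hypothetical,
 * only add_u32_sized() and read_u32_atomic() come from the build_* macros).
 * build_add_sized() emits a load-and-reserve / store-conditional loop: lwarx
 * takes a reservation on the word, the add is done in a register, and stwcx.
 * stores only if the reservation is still intact, otherwise the bne- branch
 * retries, which is what makes the update atomic without a lock. Note that
 * the Power ISA spells the 64-bit load "ld"; the "ldz" mnemonic passed to
 * build_atomic_read() for read_u64_atomic above does not exist, so that line
 * presumably wants "ld".
 */
static uint32_t example_event_count;              /* hypothetical counter */

static inline void example_count_event(void)
{
    add_u32_sized(&example_event_count, 1);       /* atomic increment */
}

static inline uint32_t example_read_events(void)
{
    return read_u32_atomic(&example_event_count); /* single atomic load */
}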
+#undef build_atomic_read +#undef build_atomic_write +#undef build_add_sized + +static always_inline void read_atomic_size(const volatile void *p, void *res, + unsigned int size) +{ + ASSERT(IS_ALIGNED((vaddr_t) p, size)); + switch ( size ) + { + case 1: + *(uint8_t *)res = read_u8_atomic(p); + break; + case 2: + *(uint16_t *)res = read_u16_atomic(p); + break; + case 4: + *(uint32_t *)res = read_u32_atomic(p); + break; + case 8: + *(uint64_t *)res = read_u64_atomic(p); + break; + default: + __bad_atomic_read(p, res); + break; + } +} + +static always_inline void write_atomic_size(volatile void *p, void *val, + unsigned int size) +{ + ASSERT(IS_ALIGNED((vaddr_t) p, size)); + switch ( size ) + { + case 1: + write_u8_atomic(p, *(uint8_t *)val); + break; + case 2: + write_u16_atomic(p, *(uint16_t *)val); + break; + case 4: + write_u32_atomic(p, *(uint32_t *)val); + break; + case 8: + write_u64_atomic(p, *(uint64_t *)val); + break; + default: + __bad_atomic_size(); + break; + } +} + +#define read_atomic(p) \ + ({ \ + union { \ + typeof(*(p)) val; \ + char c[0]; \ + } x_; \ + read_atomic_size(p, x_.c, sizeof(*(p))); \ + x_.val; \ + }) + +#define write_atomic(p, x) \ + do \ + { \ + typeof(*(p)) x_ = (x); \ + write_atomic_size(p, &x_, sizeof(*(p))); \ + } while ( 0 ) + +#define add_sized(p, x) \ + ({ \ + typeof(*(p)) __x = (x); \ + switch ( sizeof(*(p)) ) \ + { \ + case 1: \ + add_u8_sized((uint8_t *) (p), __x); \ + break; \ + case 2: \ + add_u16_sized((uint16_t *) (p), __x); \ + break; \ + case 4: \ + add_u32_sized((uint32_t *) (p), __x); \ + break; \ + default: \ + __bad_atomic_size(); \ + break; \ + } \ + }) + +static inline void atomic_add(int a, atomic_t *v) +{ + int t; + + asm volatile ( "1: lwarx %0,0,%3\n" + "add %0,%2,%0\n" + "stwcx. %0,0,%3\n" + "bne- 1b" + : "=&r"(t), "=m"(v->counter) + : "r"(a), "r"(&v->counter), "m"(v->counter) : "cc" ); +} + +static inline int atomic_add_return(int a, atomic_t *v) +{ + int t; + + asm volatile ( PPC_ATOMIC_ENTRY_BARRIER + "1: lwarx %0,0,%2\n" + "add %0,%1,%0\n" + "stwcx. %0,0,%2\n" + "bne- 1b" + PPC_ATOMIC_EXIT_BARRIER + : "=&r"(t) + : "r"(a), "r"(&v->counter) + : "cc", "memory" ); + + return t; +} + +static inline void atomic_sub(int a, atomic_t *v) +{ + int t; + + asm volatile ( "1: lwarx %0,0,%3\n" + "subf %0,%2,%0\n" + "stwcx. %0,0,%3\n" + "bne- 1b" + : "=&r"(t), "=m"(v->counter) + : "r"(a), "r"(&v->counter), "m"(v->counter) + : "cc" ); +} + +static inline int atomic_sub_return(int a, atomic_t *v) +{ + int t; + + asm volatile ( PPC_ATOMIC_ENTRY_BARRIER + "1: lwarx %0,0,%2\n" + "subf %0,%1,%0\n" + "stwcx. %0,0,%2\n" + "bne- 1b" + PPC_ATOMIC_EXIT_BARRIER + : "=&r"(t) + : "r"(a), "r"(&v->counter) + : "cc", "memory" ); + + return t; +} + +static inline void atomic_inc(atomic_t *v) +{ + int t; + + asm volatile ( "1: lwarx %0,0,%2\n" + "addic %0,%0,1\n" + "stwcx. %0,0,%2\n" + "bne- 1b" + : "=&r"(t), "=m"(v->counter) + : "r"(&v->counter), "m"(v->counter) + : "cc" ); +} + +static inline int atomic_inc_return(atomic_t *v) +{ + int t; + + asm volatile ( PPC_ATOMIC_ENTRY_BARRIER + "1: lwarx %0,0,%1\n" + "addic %0,%0,1\n" + "stwcx. %0,0,%1\n" + "bne- 1b" + PPC_ATOMIC_EXIT_BARRIER + : "=&r"(t) + : "r"(&v->counter) + : "cc", "memory" ); + + return t; +} + +static inline void atomic_dec(atomic_t *v) +{ + int t; + + asm volatile ( "1: lwarx %0,0,%2\n" + "addic %0,%0,-1\n" + "stwcx. 
%0,0,%2\n" + "bne- 1b" + : "=&r"(t), "=m"(v->counter) + : "r"(&v->counter), "m"(v->counter) + : "cc" ); +} + +static inline int atomic_dec_return(atomic_t *v) +{ + int t; + + asm volatile ( PPC_ATOMIC_ENTRY_BARRIER + "1: lwarx %0,0,%1\n" + "addic %0,%0,-1\n" + "stwcx. %0,0,%1\n" + "bne- 1b" + PPC_ATOMIC_EXIT_BARRIER + : "=&r"(t) + : "r"(&v->counter) + : "cc", "memory" ); + + return t; +} + +/* + * Atomically test *v and decrement if it is greater than 0. + * The function returns the old value of *v minus 1. + */ +static inline int atomic_dec_if_positive(atomic_t *v) +{ + int t; + + asm volatile(PPC_ATOMIC_ENTRY_BARRIER + "1: lwarx %0,0,%1 # atomic_dec_if_positive\n" + "addic. %0,%0,-1\n" + "blt- 2f\n" + "stwcx. %0,0,%1\n" + "bne- 1b\n" + PPC_ATOMIC_EXIT_BARRIER + "2:" : "=&r"(t) + : "r"(&v->counter) : "cc", "memory"); + + return t; +} + +static inline atomic_t atomic_compareandswap(atomic_t old, atomic_t new, + atomic_t *v) +{ + atomic_t rc; + rc.counter = __cmpxchg(&v->counter, old.counter, new.counter, sizeof(int)); + return rc; +} + +#define arch_cmpxchg(ptr, o, n) \ + ({ \ + __typeof__(*(ptr)) _o_ = (o); \ + __typeof__(*(ptr)) _n_ = (n); \ + (__typeof__(*(ptr))) __cmpxchg((ptr), (unsigned long) _o_, \ + (unsigned long) _n_, sizeof(*(ptr))); \ + }) + +static inline int atomic_cmpxchg(atomic_t *v, int old, int new) +{ + return arch_cmpxchg(&v->counter, old, new); +} + +#define ATOMIC_OP(op, insn, suffix, sign) \ + static inline void atomic_##op(int a, atomic_t *v) \ + { \ + int t; \ + asm volatile ( "1: lwarx %0,0,%3\n" \ + insn "%I2" suffix " %0,%0,%2\n" \ + "stwcx. %0,0,%3 \n" \ + "bne- 1b\n" \ + : "=&r"(t), "+m"(v->counter) \ + : "r" #sign(a), "r"(&v->counter) \ + : "cc" ); \ + } + +ATOMIC_OP(and, "and", ".", K) + +static inline int atomic_sub_and_test(int i, atomic_t *v) +{ + return atomic_sub_return(i, v) == 0; +} + +static inline int atomic_inc_and_test(atomic_t *v) +{ + return atomic_add_return(1, v) == 0; +} + +static inline int atomic_dec_and_test(atomic_t *v) +{ + return atomic_sub_return(1, v) == 0; +} + +static inline int atomic_add_negative(int i, atomic_t *v) +{ + return atomic_add_return(i, v) < 0; +} + +static inline int __atomic_add_unless(atomic_t *v, int a, int u) +{ + int c, old; + + c = atomic_read(v); + while (c != u && (old = atomic_cmpxchg((v), c, c + a)) != c) + c = old; + return c; +} + +static inline int atomic_add_unless(atomic_t *v, int a, int u) +{ + return __atomic_add_unless(v, a, u); +} + +#endif /* _ASM_PPC64_ATOMIC_H_ */ diff --git a/xen/arch/ppc/include/asm/memory.h b/xen/arch/ppc/include/asm/memory.h new file mode 100644 index 0000000000..7b12e01b1a --- /dev/null +++ b/xen/arch/ppc/include/asm/memory.h @@ -0,0 +1,34 @@ +/* + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. + * + * Copyright (C) IBM Corp. 
2005 + * + * Authors: Jimi Xenidis + */ + +#ifndef _ASM_MEMORY_H_ +#define _ASM_MEMORY_H_ + +#include + +#ifdef CONFIG_SMP +#define PPC_ATOMIC_ENTRY_BARRIER "sync\n" +#define PPC_ATOMIC_EXIT_BARRIER "sync\n" +#else +#define PPC_ATOMIC_ENTRY_BARRIER +#define PPC_ATOMIC_EXIT_BARRIER +#endif + +#endif From patchwork Wed Aug 2 23:03:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shawn Anastasio X-Patchwork-Id: 13338912 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 54CC8C04FDF for ; Wed, 2 Aug 2023 23:03:38 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.575861.901345 (Exim 4.92) (envelope-from ) id 1qRKsF-0002vU-WD; Wed, 02 Aug 2023 23:03:24 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 575861.901345; Wed, 02 Aug 2023 23:03:23 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qRKsF-0002uE-S9; Wed, 02 Aug 2023 23:03:23 +0000 Received: by outflank-mailman (input) for mailman id 575861; Wed, 02 Aug 2023 23:03:22 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qRKsE-0002Xg-Kp for xen-devel@lists.xenproject.org; Wed, 02 Aug 2023 23:03:22 +0000 Received: from raptorengineering.com (mail.raptorengineering.com [23.155.224.40]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS id c556464b-3188-11ee-b265-6b7b168915f2; Thu, 03 Aug 2023 01:03:19 +0200 (CEST) Received: from localhost (localhost [127.0.0.1]) by mail.rptsys.com (Postfix) with ESMTP id 9B5FD8286987; Wed, 2 Aug 2023 18:03:18 -0500 (CDT) Received: from mail.rptsys.com ([127.0.0.1]) by localhost (vali.starlink.edu [127.0.0.1]) (amavisd-new, port 10032) with ESMTP id EoR4xdmHyltq; Wed, 2 Aug 2023 18:03:17 -0500 (CDT) Received: from localhost (localhost [127.0.0.1]) by mail.rptsys.com (Postfix) with ESMTP id 414C582853DD; Wed, 2 Aug 2023 18:03:17 -0500 (CDT) Received: from mail.rptsys.com ([127.0.0.1]) by localhost (vali.starlink.edu [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id NuW5S9AYKLyX; Wed, 2 Aug 2023 18:03:17 -0500 (CDT) Received: from raptor-ewks-026.lan (5.edge.rptsys.com [23.155.224.38]) by mail.rptsys.com (Postfix) with ESMTPSA id BBCF98285801; Wed, 2 Aug 2023 18:03:16 -0500 (CDT) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: c556464b-3188-11ee-b265-6b7b168915f2 DKIM-Filter: OpenDKIM Filter v2.10.3 mail.rptsys.com 414C582853DD DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=raptorengineering.com; s=B8E824E6-0BE2-11E6-931D-288C65937AAD; t=1691017397; bh=rZAldAQK00ovSXSpgF7Lp58mw77D/MWNT5l6iW6CChk=; h=From:To:Date:Message-Id:MIME-Version; b=dZMQ3XBNkxo4errxPUnFLgXNh0mDY/dATXwqgJfpw05xrpStnw9JRhdyZvtUFagds PGlKVK3mZbgOp0UP5hJhSluMMnMYbMynO9J3iqXHsR4OTlM0HvA/6MaaJQw9PcHDx7 J62CWVRpYzH6B40yO3xehHkKSu2mJKLioyM3XGQs= X-Virus-Scanned: amavisd-new 
at rptsys.com From: Shawn Anastasio To: xen-devel@lists.xenproject.org Cc: Timothy Pearson , Andrew Cooper , Jan Beulich , Shawn Anastasio Subject: [PATCH 4/9] xen/ppc: Implement bitops.h Date: Wed, 2 Aug 2023 18:03:00 -0500 Message-Id: <63b89c627948015cc9a1c1e5d1775ee9e41508ba.1691016993.git.sanastasio@raptorengineering.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: References: MIME-Version: 1.0 Implement bitops.h, based on Linux's implementation as of commit 5321d1b1afb9a17302c6cec79f0cbf823eb0d3fc Signed-off-by: Shawn Anastasio --- xen/arch/ppc/include/asm/bitops.h | 331 +++++++++++++++++++++++++++++- 1 file changed, 327 insertions(+), 4 deletions(-) diff --git a/xen/arch/ppc/include/asm/bitops.h b/xen/arch/ppc/include/asm/bitops.h index a7cd8ec7c5..286f68cbfe 100644 --- a/xen/arch/ppc/include/asm/bitops.h +++ b/xen/arch/ppc/include/asm/bitops.h @@ -1,11 +1,334 @@ +/* SPDX-License-Identifier: GPL-2.0-or-later */ +/* + * Adapted from Linux's arch/powerpc/include/asm/bitops.h. + * + * Merged version by David Gibson . + * Based on ppc64 versions by: Dave Engebretsen, Todd Inglett, Don + * Reed, Pat McCarthy, Peter Bergner, Anton Blanchard. They + * originally took it from the ppc32 code. + */ #ifndef _ASM_PPC_BITOPS_H #define _ASM_PPC_BITOPS_H -#include +#define __set_bit(n,p) set_bit(n,p) +#define __clear_bit(n,p) clear_bit(n,p) + +#define BITOP_BITS_PER_WORD 32 +#define BITOP_MASK(nr) (1UL << ((nr) % BITOP_BITS_PER_WORD)) +#define BITOP_WORD(nr) ((nr) / BITOP_BITS_PER_WORD) +#define BITS_PER_BYTE 8 /* PPC bit number conversion */ -#define PPC_BITLSHIFT(be) (BITS_PER_LONG - 1 - (be)) -#define PPC_BIT(bit) (1UL << PPC_BITLSHIFT(bit)) -#define PPC_BITMASK(bs, be) ((PPC_BIT(bs) - PPC_BIT(be)) | PPC_BIT(bs)) +#define PPC_BITLSHIFT(be) (BITS_PER_LONG - 1 - (be)) +#define PPC_BIT(bit) (1UL << PPC_BITLSHIFT(bit)) +#define PPC_BITMASK(bs, be) ((PPC_BIT(bs) - PPC_BIT(be)) | PPC_BIT(bs)) + +/* Macro for generating the ***_bits() functions */ +#define DEFINE_BITOP(fn, op, prefix) \ +static inline void fn(unsigned long mask, \ + volatile unsigned int *_p) \ +{ \ + unsigned long old; \ + unsigned int *p = (unsigned int *)_p; \ + __asm__ __volatile__ ( \ + prefix \ +"1: lwarx %0,0,%3,0\n" \ + #op "%I2 %0,%0,%2\n" \ + "stwcx. %0,0,%3\n" \ + "bne- 1b\n" \ + : "=&r" (old), "+m" (*p) \ + : "rK" (mask), "r" (p) \ + : "cc", "memory"); \ +} + +DEFINE_BITOP(set_bits, or, "") +DEFINE_BITOP(change_bits, xor, "") + +#define DEFINE_CLROP(fn, prefix) \ +static inline void fn(unsigned long mask, volatile unsigned int *_p) \ +{ \ + unsigned long old; \ + unsigned int *p = (unsigned int *)_p; \ + \ + asm volatile ( \ + prefix \ + "1: lwarx %0,0,%3,0\n" \ + "andc %0,%0,%2\n" \ + "stwcx. 
%0,0,%3\n" \ + "bne- 1b\n" \ + : "=&r" (old), "+m" (*p) \ + : "r" (mask), "r" (p) \ + : "cc", "memory"); \ +} + +DEFINE_CLROP(clear_bits, "") + +static inline void set_bit(int nr, volatile void *addr) +{ + set_bits(BITOP_MASK(nr), (volatile unsigned int *)addr + BITOP_WORD(nr)); +} +static inline void clear_bit(int nr, volatile void *addr) +{ + clear_bits(BITOP_MASK(nr), (volatile unsigned int *)addr + BITOP_WORD(nr)); +} + +/** + * test_bit - Determine whether a bit is set + * @nr: bit number to test + * @addr: Address to start counting from + */ +static inline int test_bit(int nr, const volatile void *addr) +{ + const volatile unsigned long *p = (const volatile unsigned long *)addr; + return 1UL & (p[BITOP_WORD(nr)] >> (nr & (BITOP_BITS_PER_WORD-1))); +} + +static inline unsigned long test_and_clear_bits(unsigned long mask, volatile void *_p) +{ + unsigned long old, t; + unsigned int *p = (unsigned int *)_p; + + asm volatile ( + "sync\n" + "1:" "lwarx %0,0,%3,0\n" + "andc %1,%0,%2\n" + "stwcx. %1,0,%3\n" + "bne- 1b\n" + "sync\n" + : "=&r" (old), "=&r" (t) + : "r" (mask), "r" (p) + : "cc", "memory"); + + return (old & mask); +} + +static inline int test_and_clear_bit(unsigned int nr, + volatile void *addr) +{ + return test_and_clear_bits(BITOP_MASK(nr), addr + BITOP_WORD(nr)) != 0; +} + +#define DEFINE_TESTOP(fn, op, eh) \ +static inline unsigned long fn( \ + unsigned long mask, \ + volatile unsigned int *_p) \ +{ \ + unsigned long old, t; \ + unsigned int *p = (unsigned int *)_p; \ + __asm__ __volatile__ ( \ + "sync\n" \ +"1:" "lwarx %0,0,%3,%4\n" \ + #op "%I2 %1,%0,%2\n" \ + "stwcx. %1,0,%3\n" \ + "bne- 1b\n" \ + "sync\n" \ + : "=&r" (old), "=&r" (t) \ + : "rK" (mask), "r" (p), "n" (eh) \ + : "cc", "memory"); \ + return (old & mask); \ +} + +DEFINE_TESTOP(test_and_set_bits, or, 0) + +static inline int test_and_set_bit(unsigned long nr, volatile void *addr) +{ + return test_and_set_bits(BITOP_MASK(nr), (volatile unsigned int *)addr + BITOP_WORD(nr)) != 0; +} + +/** + * __test_and_set_bit - Set a bit and return its old value + * @nr: Bit to set + * @addr: Address to count from + * + * This operation is non-atomic and can be reordered. + * If two examples of this operation race, one can appear to succeed + * but actually fail. You must protect multiple accesses with a lock. + */ +static inline int __test_and_set_bit(int nr, volatile void *addr) +{ + unsigned int mask = BITOP_MASK(nr); + volatile unsigned int *p = + ((volatile unsigned int *)addr) + BITOP_WORD(nr); + unsigned int old = *p; + + *p = old | mask; + return (old & mask) != 0; +} + +/** + * __test_and_clear_bit - Clear a bit and return its old value + * @nr: Bit to clear + * @addr: Address to count from + * + * This operation is non-atomic and can be reordered. + * If two examples of this operation race, one can appear to succeed + * but actually fail. You must protect multiple accesses with a lock. + */ +static inline int __test_and_clear_bit(int nr, volatile void *addr) +{ + unsigned int mask = BITOP_MASK(nr); + volatile unsigned int *p = + ((volatile unsigned int *)addr) + BITOP_WORD(nr); + unsigned int old = *p; + + *p = old & ~mask; + return (old & mask) != 0; +} + +#define flsl(x) generic_flsl(x) +#define fls(x) generic_fls(x) +#define ffs(x) ({ unsigned int __t = (x); fls(__t & -__t); }) +#define ffsl(x) ({ unsigned long __t = (x); flsl(__t & -__t); }) + +/* Based on linux/include/asm-generic/bitops/ffz.h */ +/* + * ffz - find first zero in word. 
+ * @word: The word to search + * + * Undefined if no zero exists, so code should check against ~0UL first. + */ +#define ffz(x) __ffs(~(x)) + +/** + * hweightN - returns the hamming weight of a N-bit word + * @x: the word to weigh + * + * The Hamming Weight of a number is the total number of bits set in it. + */ +#define hweight64(x) generic_hweight64(x) +#define hweight32(x) generic_hweight32(x) +#define hweight16(x) generic_hweight16(x) +#define hweight8(x) generic_hweight8(x) + +/* Based on linux/include/asm-generic/bitops/builtin-__ffs.h */ +/** + * __ffs - find first bit in word. + * @word: The word to search + * + * Undefined if no bit exists, so code should check against 0 first. + */ +static /*__*/always_inline unsigned long __ffs(unsigned long word) +{ + return __builtin_ctzl(word); +} + +/** + * find_first_set_bit - find the first set bit in @word + * @word: the word to search + * + * Returns the bit-number of the first set bit (first bit being 0). + * The input must *not* be zero. + */ +#define find_first_set_bit(x) ({ ffsl(x) - 1; }) + +/* + * Find the first set bit in a memory region. + */ +static inline unsigned long find_first_bit(const unsigned long *addr, + unsigned long size) +{ + const unsigned long *p = addr; + unsigned long result = 0; + unsigned long tmp; + + while (size & ~(BITS_PER_LONG-1)) { + if ((tmp = *(p++))) + goto found; + result += BITS_PER_LONG; + size -= BITS_PER_LONG; + } + if (!size) + return result; + + tmp = (*p) & (~0UL >> (BITS_PER_LONG - size)); + if (tmp == 0UL) /* Are any bits set? */ + return result + size; /* Nope. */ +found: + return result + __ffs(tmp); +} + +static inline unsigned long find_next_bit(const unsigned long *addr, + unsigned long size, + unsigned long offset) +{ + const unsigned long *p = addr + BITOP_WORD(offset); + unsigned long result = offset & ~(BITS_PER_LONG-1); + unsigned long tmp; + + if (offset >= size) + return size; + size -= result; + offset %= BITS_PER_LONG; + if (offset) { + tmp = *(p++); + tmp &= (~0UL << offset); + if (size < BITS_PER_LONG) + goto found_first; + if (tmp) + goto found_middle; + size -= BITS_PER_LONG; + result += BITS_PER_LONG; + } + while (size & ~(BITS_PER_LONG-1)) { + if ((tmp = *(p++))) + goto found_middle; + result += BITS_PER_LONG; + size -= BITS_PER_LONG; + } + if (!size) + return result; + tmp = *p; + +found_first: + tmp &= (~0UL >> (BITS_PER_LONG - size)); + if (tmp == 0UL) /* Are any bits set? */ + return result + size; /* Nope. */ +found_middle: + return result + __ffs(tmp); +} + +/* + * This implementation of find_{first,next}_zero_bit was stolen from + * Linus' asm-alpha/bitops.h. + */ +static inline unsigned long find_next_zero_bit(const unsigned long *addr, + unsigned long size, + unsigned long offset) +{ + const unsigned long *p = addr + BITOP_WORD(offset); + unsigned long result = offset & ~(BITS_PER_LONG-1); + unsigned long tmp; + + if (offset >= size) + return size; + size -= result; + offset %= BITS_PER_LONG; + if (offset) { + tmp = *(p++); + tmp |= ~0UL >> (BITS_PER_LONG - offset); + if (size < BITS_PER_LONG) + goto found_first; + if (~tmp) + goto found_middle; + size -= BITS_PER_LONG; + result += BITS_PER_LONG; + } + while (size & ~(BITS_PER_LONG-1)) { + if (~(tmp = *(p++))) + goto found_middle; + result += BITS_PER_LONG; + size -= BITS_PER_LONG; + } + if (!size) + return result; + tmp = *p; + +found_first: + tmp |= ~0UL << size; + if (tmp == ~0UL) /* Are any bits zero? */ + return result + size; /* Nope. 
*/ +found_middle: return result + ffz(tmp); } #endif /* _ASM_PPC_BITOPS_H */
From patchwork Wed Aug 2 23:03:01 2023
X-Patchwork-Submitter: Shawn Anastasio
X-Patchwork-Id: 13338916
From: Shawn Anastasio
To: xen-devel@lists.xenproject.org
Cc: Timothy Pearson, Andrew Cooper, Jan Beulich, Shawn Anastasio
Subject: [PATCH 5/9] xen/ppc: Define minimal stub headers required for full build
Date: Wed, 2 Aug 2023 18:03:01 -0500 Message-Id: <9cd545a4a9ef55d06ea0b81044e22f64ee42823e.1691016993.git.sanastasio@raptorengineering.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: References: MIME-Version: 1.0 Additionally, change inclusion of asm/ headers to corresponding xen/ ones throughout arch/ppc now that they work. Signed-off-by: Shawn Anastasio --- xen/arch/ppc/Kconfig | 1 + xen/arch/ppc/include/asm/altp2m.h | 39 ++++ xen/arch/ppc/include/asm/bug.h | 7 + xen/arch/ppc/include/asm/cache.h | 2 + xen/arch/ppc/include/asm/config.h | 9 + xen/arch/ppc/include/asm/cpufeature.h | 9 + xen/arch/ppc/include/asm/current.h | 42 ++++ xen/arch/ppc/include/asm/delay.h | 11 + xen/arch/ppc/include/asm/desc.h | 0 xen/arch/ppc/include/asm/device.h | 53 +++++ xen/arch/ppc/include/asm/div64.h | 14 ++ xen/arch/ppc/include/asm/domain.h | 46 ++++ xen/arch/ppc/include/asm/event.h | 35 ++++ xen/arch/ppc/include/asm/flushtlb.h | 23 ++ xen/arch/ppc/include/asm/grant_table.h | 0 xen/arch/ppc/include/asm/guest_access.h | 54 +++++ xen/arch/ppc/include/asm/guest_atomics.h | 14 ++ xen/arch/ppc/include/asm/hardirq.h | 18 ++ xen/arch/ppc/include/asm/hypercall.h | 0 xen/arch/ppc/include/asm/io.h | 15 ++ xen/arch/ppc/include/asm/iocap.h | 7 + xen/arch/ppc/include/asm/iommu.h | 7 + xen/arch/ppc/include/asm/irq.h | 32 +++ xen/arch/ppc/include/asm/mem_access.h | 0 xen/arch/ppc/include/asm/mm.h | 255 ++++++++++++++++++++++- xen/arch/ppc/include/asm/monitor.h | 48 +++++ xen/arch/ppc/include/asm/nospec.h | 18 ++ xen/arch/ppc/include/asm/numa.h | 26 +++ xen/arch/ppc/include/asm/p2m.h | 105 ++++++++++ xen/arch/ppc/include/asm/page.h | 19 ++ xen/arch/ppc/include/asm/paging.h | 7 + xen/arch/ppc/include/asm/pci.h | 7 + xen/arch/ppc/include/asm/percpu.h | 26 +++ xen/arch/ppc/include/asm/procarea.h | 38 ++++ xen/arch/ppc/include/asm/processor.h | 6 + xen/arch/ppc/include/asm/random.h | 9 + xen/arch/ppc/include/asm/regs.h | 2 + xen/arch/ppc/include/asm/setup.h | 6 + xen/arch/ppc/include/asm/smp.h | 18 ++ xen/arch/ppc/include/asm/softirq.h | 8 + xen/arch/ppc/include/asm/spinlock.h | 15 ++ xen/arch/ppc/include/asm/system.h | 243 ++++++++++++++++++++- xen/arch/ppc/include/asm/time.h | 20 ++ xen/arch/ppc/include/asm/vm_event.h | 49 +++++ xen/arch/ppc/include/asm/xenoprof.h | 0 xen/arch/ppc/mm-radix.c | 2 +- xen/arch/ppc/opal.c | 2 - xen/arch/ppc/tlb-radix.c | 2 +- xen/include/public/hvm/save.h | 2 + xen/include/public/pmu.h | 2 + xen/include/public/xen.h | 2 + 51 files changed, 1368 insertions(+), 7 deletions(-) create mode 100644 xen/arch/ppc/include/asm/altp2m.h create mode 100644 xen/arch/ppc/include/asm/cpufeature.h create mode 100644 xen/arch/ppc/include/asm/current.h create mode 100644 xen/arch/ppc/include/asm/delay.h create mode 100644 xen/arch/ppc/include/asm/desc.h create mode 100644 xen/arch/ppc/include/asm/device.h create mode 100644 xen/arch/ppc/include/asm/div64.h create mode 100644 xen/arch/ppc/include/asm/domain.h create mode 100644 xen/arch/ppc/include/asm/event.h create mode 100644 xen/arch/ppc/include/asm/flushtlb.h create mode 100644 xen/arch/ppc/include/asm/grant_table.h create mode 100644 xen/arch/ppc/include/asm/guest_access.h create mode 100644 xen/arch/ppc/include/asm/guest_atomics.h create mode 100644 xen/arch/ppc/include/asm/hardirq.h create mode 100644 xen/arch/ppc/include/asm/hypercall.h create mode 100644 xen/arch/ppc/include/asm/io.h create mode 100644 xen/arch/ppc/include/asm/iocap.h create mode 100644 xen/arch/ppc/include/asm/iommu.h create mode 100644 xen/arch/ppc/include/asm/irq.h create mode 100644 
xen/arch/ppc/include/asm/mem_access.h create mode 100644 xen/arch/ppc/include/asm/monitor.h create mode 100644 xen/arch/ppc/include/asm/nospec.h create mode 100644 xen/arch/ppc/include/asm/numa.h create mode 100644 xen/arch/ppc/include/asm/p2m.h create mode 100644 xen/arch/ppc/include/asm/paging.h create mode 100644 xen/arch/ppc/include/asm/pci.h create mode 100644 xen/arch/ppc/include/asm/percpu.h create mode 100644 xen/arch/ppc/include/asm/procarea.h create mode 100644 xen/arch/ppc/include/asm/random.h create mode 100644 xen/arch/ppc/include/asm/setup.h create mode 100644 xen/arch/ppc/include/asm/smp.h create mode 100644 xen/arch/ppc/include/asm/softirq.h create mode 100644 xen/arch/ppc/include/asm/spinlock.h create mode 100644 xen/arch/ppc/include/asm/time.h create mode 100644 xen/arch/ppc/include/asm/vm_event.h create mode 100644 xen/arch/ppc/include/asm/xenoprof.h diff --git a/xen/arch/ppc/Kconfig b/xen/arch/ppc/Kconfig index ab116ffb2a..a6eae597af 100644 --- a/xen/arch/ppc/Kconfig +++ b/xen/arch/ppc/Kconfig @@ -1,6 +1,7 @@ config PPC def_bool y select HAS_DEVICE_TREE + select HAS_PDX config PPC64 def_bool y diff --git a/xen/arch/ppc/include/asm/altp2m.h b/xen/arch/ppc/include/asm/altp2m.h new file mode 100644 index 0000000000..73a1e3a114 --- /dev/null +++ b/xen/arch/ppc/include/asm/altp2m.h @@ -0,0 +1,39 @@ +/* + * Alternate p2m + * + * Copyright (c) 2014, Intel Corporation. + * + * This program is free software; you can redistribute it and/or modify it + * under the terms and conditions of the GNU General Public License, + * version 2, as published by the Free Software Foundation. + * + * This program is distributed in the hope it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for + * more details. + * + * You should have received a copy of the GNU General Public License along with + * this program; If not, see . + */ + +#ifndef __ASM_PPC_ALTP2M_H__ +#define __ASM_PPC_ALTP2M_H__ + +#include + +/* Alternate p2m on/off per domain */ +static inline bool altp2m_active(const struct domain *d) +{ + /* Not implemented on PPC. */ + return false; +} + +/* Alternate p2m VCPU */ +static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v) +{ + /* Not implemented on PPC, should not be reached. 
*/ + BUG(); + return 0; +} + +#endif /* __ASM_PPC_ALTP2M_H__ */ diff --git a/xen/arch/ppc/include/asm/bug.h b/xen/arch/ppc/include/asm/bug.h index e5e874b31c..d02e8e2898 100644 --- a/xen/arch/ppc/include/asm/bug.h +++ b/xen/arch/ppc/include/asm/bug.h @@ -4,6 +4,7 @@ #define _ASM_PPC_BUG_H #include +#include /* * Power ISA guarantees that an instruction consisting of all zeroes is @@ -15,4 +16,10 @@ #define BUG_FN_REG r0 +#define BUG() do { \ + die(); \ +} while (0) + +#define BUG_FRAME(type, line, ptr, second_frame, msg) do { } while (0) + #endif /* _ASM_PPC_BUG_H */ diff --git a/xen/arch/ppc/include/asm/cache.h b/xen/arch/ppc/include/asm/cache.h index 8a0a6b7b17..0d7323d789 100644 --- a/xen/arch/ppc/include/asm/cache.h +++ b/xen/arch/ppc/include/asm/cache.h @@ -3,4 +3,6 @@ #ifndef _ASM_PPC_CACHE_H #define _ASM_PPC_CACHE_H +#define __read_mostly __section(".data.read_mostly") + #endif /* _ASM_PPC_CACHE_H */ diff --git a/xen/arch/ppc/include/asm/config.h b/xen/arch/ppc/include/asm/config.h index b76f940431..965b79aa64 100644 --- a/xen/arch/ppc/include/asm/config.h +++ b/xen/arch/ppc/include/asm/config.h @@ -41,6 +41,15 @@ #define XEN_VIRT_START _AT(vaddr_t, 0xc000000000000000) +#define VMAP_VIRT_START (XEN_VIRT_START + GB(1)) +#define VMAP_VIRT_SIZE GB(1) + +#define FRAMETABLE_VIRT_START (XEN_VIRT_START + GB(32)) +#define FRAMETABLE_SIZE GB(32) +#define FRAMETABLE_NR (FRAMETABLE_SIZE / sizeof(*frame_table)) + +#define HYPERVISOR_VIRT_START XEN_VIRT_START + #define SMP_CACHE_BYTES (1 << 6) #define STACK_ORDER 0 diff --git a/xen/arch/ppc/include/asm/cpufeature.h b/xen/arch/ppc/include/asm/cpufeature.h new file mode 100644 index 0000000000..3552b9231d --- /dev/null +++ b/xen/arch/ppc/include/asm/cpufeature.h @@ -0,0 +1,9 @@ +#ifndef __ASM_PPC_CPUFEATURE_H__ +#define __ASM_PPC_CPUFEATURE_H__ + +static inline int cpu_nr_siblings(unsigned int cpu) +{ + return 1; +} + +#endif /* __ASM_PPC_CPUFEATURE_H__ */ diff --git a/xen/arch/ppc/include/asm/current.h b/xen/arch/ppc/include/asm/current.h new file mode 100644 index 0000000000..87a854d6b0 --- /dev/null +++ b/xen/arch/ppc/include/asm/current.h @@ -0,0 +1,42 @@ +#ifndef __ASM_PPC_CURRENT_H__ +#define __ASM_PPC_CURRENT_H__ + +#include + +#ifndef __ASSEMBLY__ + +struct vcpu; + +/* Which VCPU is "current" on this PCPU. 
*/ +DECLARE_PER_CPU(struct vcpu *, curr_vcpu); + +#define current (this_cpu(curr_vcpu)) +#define set_current(vcpu) do { current = (vcpu); } while (0) +#define get_cpu_current(cpu) (per_cpu(curr_vcpu, cpu)) + +/* Per-VCPU state that lives at the top of the stack */ +struct cpu_info { + struct cpu_user_regs guest_cpu_user_regs; + unsigned long elr; + uint32_t flags; +}; + +static inline struct cpu_info *get_cpu_info(void) +{ +#ifdef __clang__ + unsigned long sp; + + asm ("mr %0, 1" : "=r" (sp)); +#else + register unsigned long sp asm ("r1"); +#endif + + return (struct cpu_info *)((sp & ~(STACK_SIZE - 1)) + + STACK_SIZE - sizeof(struct cpu_info)); +} + +#define guest_cpu_user_regs() (&get_cpu_info()->guest_cpu_user_regs) + +#endif /* __ASSEMBLY__ */ + +#endif /* __ASM_PPC_CURRENT_H__ */ diff --git a/xen/arch/ppc/include/asm/delay.h b/xen/arch/ppc/include/asm/delay.h new file mode 100644 index 0000000000..36be1775f8 --- /dev/null +++ b/xen/arch/ppc/include/asm/delay.h @@ -0,0 +1,11 @@ +#ifndef __ASM_PPC_DELAY_H__ +#define __ASM_PPC_DELAY_H__ + +#include + +static inline void udelay(unsigned long usecs) +{ + BUG(); +} + +#endif /* __ASM_PPC_DELAY_H__ */ diff --git a/xen/arch/ppc/include/asm/desc.h b/xen/arch/ppc/include/asm/desc.h new file mode 100644 index 0000000000..e69de29bb2 diff --git a/xen/arch/ppc/include/asm/device.h b/xen/arch/ppc/include/asm/device.h new file mode 100644 index 0000000000..cb8454f605 --- /dev/null +++ b/xen/arch/ppc/include/asm/device.h @@ -0,0 +1,53 @@ +#ifndef __ASM_PPC_DEVICE_H__ +#define __ASM_PPC_DEVICE_H__ + +enum device_type +{ + DEV_DT, + DEV_PCI, +}; + +struct device { + enum device_type type; +#ifdef CONFIG_HAS_DEVICE_TREE + struct dt_device_node *of_node; /* Used by drivers imported from Linux */ +#endif +}; + +enum device_class +{ + DEVICE_SERIAL, + DEVICE_IOMMU, + DEVICE_GIC, + DEVICE_PCI_HOSTBRIDGE, + /* Use for error */ + DEVICE_UNKNOWN, +}; + +struct device_desc { + /* Device name */ + const char *name; + /* Device class */ + enum device_class class; + /* List of devices supported by this driver */ + const struct dt_device_match *dt_match; + /* + * Device initialization. + * + * -EAGAIN is used to indicate that device probing is deferred. 
+ */ + int (*init)(struct dt_device_node *dev, const void *data); +}; + +typedef struct device device_t; + +#define DT_DEVICE_START(_name, _namestr, _class) \ +static const struct device_desc __dev_desc_##_name __used \ +__section(".dev.info") = { \ + .name = _namestr, \ + .class = _class, \ + +#define DT_DEVICE_END \ +}; + +#endif /* __ASM_PPC_DEVICE_H__ */ diff --git a/xen/arch/ppc/include/asm/div64.h b/xen/arch/ppc/include/asm/div64.h new file mode 100644 index 0000000000..6959c3fb26 --- /dev/null +++ b/xen/arch/ppc/include/asm/div64.h @@ -0,0 +1,14 @@ +#ifndef __ASM_PPC_DIV64_H__ +#define __ASM_PPC_DIV64_H__ + +#include + +#define do_div(n,base) ({ \ + uint32_t __base = (base); \ + uint32_t __rem; \ + __rem = ((uint64_t)(n)) % __base; \ + (n) = ((uint64_t)(n)) / __base; \ + __rem; \ +}) + +#endif /* __ASM_PPC_DIV64_H__ */ diff --git a/xen/arch/ppc/include/asm/domain.h b/xen/arch/ppc/include/asm/domain.h new file mode 100644 index 0000000000..4ade3d484e --- /dev/null +++ b/xen/arch/ppc/include/asm/domain.h @@ -0,0 +1,46 @@ +#ifndef __ASM_PPC_DOMAIN_H__ +#define __ASM_PPC_DOMAIN_H__ + +#include +#include + +struct hvm_domain +{ + uint64_t params[HVM_NR_PARAMS]; +}; + +#define is_domain_direct_mapped(d) ((void)(d), 0) + +/* TODO: Implement */ +#define guest_mode(r) ({ (void) (r); BUG(); 0; }) + +struct arch_vcpu_io { +}; + +struct arch_vcpu { +}; + +struct arch_domain { + struct hvm_domain hvm; +}; + +#include + +static inline struct vcpu_guest_context *alloc_vcpu_guest_context(void) +{ + return xmalloc(struct vcpu_guest_context); +} + +static inline void free_vcpu_guest_context(struct vcpu_guest_context *vgc) +{ + xfree(vgc); +} + +struct guest_memory_policy {}; +static inline void update_guest_memory_policy(struct vcpu *v, + struct guest_memory_policy *gmp) +{} + +static inline void arch_vcpu_block(struct vcpu *v) {} + +#endif /* __ASM_PPC_DOMAIN_H__ */ diff --git a/xen/arch/ppc/include/asm/event.h b/xen/arch/ppc/include/asm/event.h new file mode 100644 index 0000000000..3141127f31 --- /dev/null +++ b/xen/arch/ppc/include/asm/event.h @@ -0,0 +1,35 @@ +#ifndef __ASM_PPC_EVENT_H__ +#define __ASM_PPC_EVENT_H__ + +#include + +/* TODO: implement */ +static inline void vcpu_kick(struct vcpu *v) { BUG(); } +static inline void vcpu_mark_events_pending(struct vcpu *v) { BUG(); } +static inline void vcpu_update_evtchn_irq(struct vcpu *v) { BUG(); } +static inline void vcpu_block_unless_event_pending(struct vcpu *v) { BUG(); } + +static inline int vcpu_event_delivery_is_enabled(struct vcpu *v) +{ + BUG(); + return 0; +} + +/* No arch specific virq definition now. Default to global. */ +static inline bool arch_virq_is_global(unsigned int virq) +{ + return true; +} + +static inline int local_events_need_delivery(void) +{ + BUG(); + return 0; +} + +static inline void local_event_delivery_enable(void) +{ + BUG(); +} + +#endif /* __ASM_PPC_EVENT_H__ */ diff --git a/xen/arch/ppc/include/asm/flushtlb.h b/xen/arch/ppc/include/asm/flushtlb.h new file mode 100644 index 0000000000..1af3bd2301 --- /dev/null +++ b/xen/arch/ppc/include/asm/flushtlb.h @@ -0,0 +1,23 @@ +#ifndef __ASM_PPC_FLUSHTLB_H__ +#define __ASM_PPC_FLUSHTLB_H__ + +#include + +/* + * Filter the given set of CPUs, removing those that definitely flushed their + * TLB since @page_timestamp. + */ +/* XXX lazy implementation just doesn't clear anything.... 
*/ +static inline void tlbflush_filter(cpumask_t *mask, uint32_t page_timestamp) {} + +#define tlbflush_current_time() (0) + +static inline void page_set_tlbflush_timestamp(struct page_info *page) +{ + page->tlbflush_timestamp = tlbflush_current_time(); +} + +/* Flush specified CPUs' TLBs */ +void arch_flush_tlb_mask(const cpumask_t *mask); + +#endif /* __ASM_PPC_FLUSHTLB_H__ */ diff --git a/xen/arch/ppc/include/asm/grant_table.h b/xen/arch/ppc/include/asm/grant_table.h new file mode 100644 index 0000000000..e69de29bb2 diff --git a/xen/arch/ppc/include/asm/guest_access.h b/xen/arch/ppc/include/asm/guest_access.h new file mode 100644 index 0000000000..1919e0566f --- /dev/null +++ b/xen/arch/ppc/include/asm/guest_access.h @@ -0,0 +1,54 @@ +#ifndef __ASM_PPC_GUEST_ACCESS_H__ +#define __ASM_PPC_GUEST_ACCESS_H__ + +#include + +/* TODO */ + +static inline unsigned long raw_copy_to_guest(void *to, const void *from, unsigned int len) +{ + BUG(); +} +static inline unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from, + unsigned int len) +{ + BUG(); +} +static inline unsigned long raw_copy_from_guest(void *to, const void *from, unsigned int len) +{ + BUG(); +} +static inline unsigned long raw_clear_guest(void *to, unsigned int len) +{ + BUG(); +} + +/* Copy data to guest physical address, then clean the region. */ +static inline unsigned long copy_to_guest_phys_flush_dcache(struct domain *d, + paddr_t gpa, + void *buf, + unsigned int len) +{ + BUG(); +} + +static inline int access_guest_memory_by_gpa(struct domain *d, paddr_t gpa, void *buf, + uint32_t size, bool is_write) +{ + BUG(); +} + + +#define __raw_copy_to_guest raw_copy_to_guest +#define __raw_copy_from_guest raw_copy_from_guest +#define __raw_clear_guest raw_clear_guest + +/* + * Pre-validate a guest handle. + * Allows use of faster __copy_* functions. 
+ */ +/* All PPC guests are paging mode external and hence safe */ +#define guest_handle_okay(hnd, nr) (1) +#define guest_handle_subrange_okay(hnd, first, last) (1) + +#endif /* __ASM_PPC_GUEST_ACCESS_H__ */ diff --git a/xen/arch/ppc/include/asm/guest_atomics.h b/xen/arch/ppc/include/asm/guest_atomics.h new file mode 100644 index 0000000000..bf4f802a15 --- /dev/null +++ b/xen/arch/ppc/include/asm/guest_atomics.h @@ -0,0 +1,14 @@ +#ifndef __ASM_PPC_GUEST_ATOMICS_H__ +#define __ASM_PPC_GUEST_ATOMICS_H__ + +#include + +/* TODO: implement */ +#define guest_test_bit(d, nr, p) ({ (void) (d); (void) (nr); (void) (p); BUG(); false; }) +#define guest_clear_bit(d, nr, p) ({ (void) (d); (void) (nr); (void) (p); BUG(); false; }) +#define guest_set_bit(d, nr, p) ({ (void) (d); (void) (nr); (void) (p); BUG(); false; }) +#define guest_test_and_set_bit(d, nr, p) ({ (void) (d); (void) (nr); (void) (p); BUG(); false; }) +#define guest_test_and_clear_bit(d, nr, p) ({ (void) (d); (void) (nr); (void) (p); BUG(); false; }) +#define guest_test_and_change_bit(d, nr, p) ({ (void) (d); (void) (nr); (void) (p); BUG(); false; }) + +#endif /* __ASM_PPC_GUEST_ATOMICS_H__ */ diff --git a/xen/arch/ppc/include/asm/hardirq.h b/xen/arch/ppc/include/asm/hardirq.h new file mode 100644 index 0000000000..51ef290961 --- /dev/null +++ b/xen/arch/ppc/include/asm/hardirq.h @@ -0,0 +1,18 @@ +#ifndef __ASM_PPC_HARDIRQ_H__ +#define __ASM_PPC_HARDIRQ_H__ + +#include + +typedef struct { + unsigned long __softirq_pending; + unsigned int __local_irq_count; +} __cacheline_aligned irq_cpustat_t; + +#include /* Standard mappings for irq_cpustat_t above */ + +#define in_irq() (local_irq_count(smp_processor_id()) != 0) + +#define irq_enter() (local_irq_count(smp_processor_id())++) +#define irq_exit() (local_irq_count(smp_processor_id())--) + +#endif /* __ASM_PPC_HARDIRQ_H__ */ diff --git a/xen/arch/ppc/include/asm/hypercall.h b/xen/arch/ppc/include/asm/hypercall.h new file mode 100644 index 0000000000..e69de29bb2 diff --git a/xen/arch/ppc/include/asm/io.h b/xen/arch/ppc/include/asm/io.h new file mode 100644 index 0000000000..f8f3bd6ff7 --- /dev/null +++ b/xen/arch/ppc/include/asm/io.h @@ -0,0 +1,15 @@ +#ifndef __ASM_PPC_IO_H__ +#define __ASM_PPC_IO_H__ + +#include + +/* TODO */ +#define readb(c) ({ (void)(c); BUG(); 0; }) +#define readw(c) ({ (void)(c); BUG(); 0; }) +#define readl(c) ({ (void)(c); BUG(); 0; }) + +#define writeb(v,c) ({ (void)(v); (void)(c); BUG(); }) +#define writew(v,c) ({ (void)(v); (void)(c); BUG(); }) +#define writel(v,c) ({ (void)(v); (void)(c); BUG(); }) + +#endif /* __ASM_PPC_IO_H__ */ diff --git a/xen/arch/ppc/include/asm/iocap.h b/xen/arch/ppc/include/asm/iocap.h new file mode 100644 index 0000000000..16ae0cf1c8 --- /dev/null +++ b/xen/arch/ppc/include/asm/iocap.h @@ -0,0 +1,7 @@ +#ifndef __ASM_PPC_IOCAP_H__ +#define __ASM_PPC_IOCAP_H__ + +#define cache_flush_permitted(d) \ + (!rangeset_is_empty((d)->iomem_caps)) + +#endif /* __ASM_PPC_IOCAP_H__ */ diff --git a/xen/arch/ppc/include/asm/iommu.h b/xen/arch/ppc/include/asm/iommu.h new file mode 100644 index 0000000000..fb1a381518 --- /dev/null +++ b/xen/arch/ppc/include/asm/iommu.h @@ -0,0 +1,7 @@ +#ifndef __ASM_PPC_IOMMU_H__ +#define __ASM_PPC_IOMMU_H__ + +struct arch_iommu { +}; + +#endif /* __ASM_PPC_IOMMU_H__ */ diff --git a/xen/arch/ppc/include/asm/irq.h b/xen/arch/ppc/include/asm/irq.h new file mode 100644 index 0000000000..99d30dd2bf --- /dev/null +++ b/xen/arch/ppc/include/asm/irq.h @@ -0,0 +1,32 @@ +#ifndef __ASM_PPC_IRQ_H__ +#define __ASM_PPC_IRQ_H__ + 
+#include +#include +#include + +/* TODO */ +#define nr_irqs 0U +#define nr_static_irqs 0 +#define arch_hwdom_irqs(domid) 0U + +#define domain_pirq_to_irq(d, pirq) (pirq) + +struct arch_pirq { +}; + +struct arch_irq_desc { + unsigned int type; +}; + +static inline void arch_move_irqs(struct vcpu *v) +{ + BUG(); +} + +static inline int platform_get_irq(const struct dt_device_node *device, int index) +{ + BUG(); +} + +#endif /* __ASM_PPC_IRQ_H__ */ diff --git a/xen/arch/ppc/include/asm/mem_access.h b/xen/arch/ppc/include/asm/mem_access.h new file mode 100644 index 0000000000..e69de29bb2 diff --git a/xen/arch/ppc/include/asm/mm.h b/xen/arch/ppc/include/asm/mm.h index 36e44a4356..86e5cbf609 100644 --- a/xen/arch/ppc/include/asm/mm.h +++ b/xen/arch/ppc/include/asm/mm.h @@ -1,19 +1,270 @@ #ifndef _ASM_PPC_MM_H #define _ASM_PPC_MM_H +#include +#include +#include #include #include +void setup_initial_pagetables(void); + +extern unsigned long total_pages; + #define pfn_to_paddr(pfn) ((paddr_t)(pfn) << PAGE_SHIFT) #define paddr_to_pfn(pa) ((unsigned long)((pa) >> PAGE_SHIFT)) +#define paddr_to_pdx(pa) mfn_to_pdx(maddr_to_mfn(pa)) +#define gfn_to_gaddr(gfn) pfn_to_paddr(gfn_x(gfn)) +#define gaddr_to_gfn(ga) _gfn(paddr_to_pfn(ga)) +#define mfn_to_maddr(mfn) pfn_to_paddr(mfn_x(mfn)) +#define maddr_to_mfn(ma) _mfn(paddr_to_pfn(ma)) +#define vmap_to_mfn(va) maddr_to_mfn(virt_to_maddr((vaddr_t)va)) +#define vmap_to_page(va) mfn_to_page(vmap_to_mfn(va)) #define virt_to_maddr(va) ((paddr_t)((vaddr_t)(va) & PADDR_MASK)) -#define maddr_to_virt(pa) ((void *)((paddr_t)(pa) | XEN_VIRT_START) +#define maddr_to_virt(pa) ((void *)((paddr_t)(pa) | XEN_VIRT_START)) /* Convert between Xen-heap virtual addresses and machine addresses. */ #define __pa(x) (virt_to_maddr(x)) #define __va(x) (maddr_to_virt(x)) -void setup_initial_pagetables(void); +/* Convert between Xen-heap virtual addresses and machine frame numbers. */ +#define __virt_to_mfn(va) (virt_to_maddr(va) >> PAGE_SHIFT) +#define __mfn_to_virt(mfn) (maddr_to_virt((paddr_t)(mfn) << PAGE_SHIFT)) + +/* Convert between Xen-heap virtual addresses and page-info structures. */ +static inline struct page_info *virt_to_page(const void *v) +{ + BUG(); + return NULL; +} + +/* + * We define non-underscored wrappers for above conversion functions. + * These are overriden in various source files while underscored version + * remain intact. + */ +#define virt_to_mfn(va) __virt_to_mfn(va) +#define mfn_to_virt(mfn) __mfn_to_virt(mfn) + +#define PG_shift(idx) (BITS_PER_LONG - (idx)) +#define PG_mask(x, idx) (x ## UL << PG_shift(idx)) + +#define PGT_none PG_mask(0, 1) /* no special uses of this page */ +#define PGT_writable_page PG_mask(1, 1) /* has writable mappings? */ +#define PGT_type_mask PG_mask(1, 1) /* Bits 31 or 63. */ + + /* 2-bit count of uses of this frame as its current type. */ +#define PGT_count_mask PG_mask(3, 3) + +/* Cleared when the owning guest 'frees' this page. */ +#define _PGC_allocated PG_shift(1) +#define PGC_allocated PG_mask(1, 1) +/* Page is Xen heap? */ +#define _PGC_xen_heap PG_shift(2) +#define PGC_xen_heap PG_mask(1, 2) +/* Page is static memory */ +#define PGC_static 0 +/* Page is broken? */ +#define _PGC_broken PG_shift(7) +#define PGC_broken PG_mask(1, 7) + /* Mutually-exclusive page states: { inuse, offlining, offlined, free }. 
*/
+#define PGC_state PG_mask(3, 9)
+#define PGC_state_inuse PG_mask(0, 9)
+#define PGC_state_offlining PG_mask(1, 9)
+#define PGC_state_offlined PG_mask(2, 9)
+#define PGC_state_free PG_mask(3, 9)
+#define page_state_is(pg, st) (((pg)->count_info&PGC_state) == PGC_state_##st)
+/* Page is not reference counted */
+#define _PGC_extra PG_shift(10)
+#define PGC_extra PG_mask(1, 10)
+
+/* Count of references to this frame. */
+#define PGC_count_width PG_shift(10)
+#define PGC_count_mask ((1UL<<PGC_count_width)-1)
+
+#define is_xen_heap_page(page) ((page)->count_info & PGC_xen_heap)
+#define is_xen_heap_mfn(mfn) \
+ (mfn_valid(mfn) && is_xen_heap_page(mfn_to_page(mfn)))
+
+#define is_xen_fixed_mfn(mfn) \
+ ((mfn_to_maddr(mfn) >= virt_to_maddr(&_start)) && \
+ (mfn_to_maddr(mfn) <= virt_to_maddr((vaddr_t)_end - 1)))
+
+#define page_get_owner(_p) (_p)->v.inuse.domain
+#define page_set_owner(_p,_d) ((_p)->v.inuse.domain = (_d))
+
+/* TODO: implement */
+#define mfn_valid(mfn) ({ (void) (mfn); 0; })
+#define max_page ((unsigned long )0)
+
+#define mfn_to_gfn(d, mfn) ((void)(d), _gfn(mfn_x(mfn)))
+
+#define domain_set_alloc_bitsize(d) ((void)0)
+#define domain_clamp_alloc_bitsize(d, b) (b)
+
+#define PFN_ORDER(_pfn) ((_pfn)->v.free.order)
+
+struct page_info
+{
+ /* Each frame can be threaded onto a doubly-linked list. */
+ struct page_list_entry list;
+
+ /* Reference count and various PGC_xxx flags and fields. */
+ unsigned long count_info;
+
+ /* Context-dependent fields follow... */
+ union {
+ /* Page is in use: ((count_info & PGC_count_mask) != 0). */
+ struct {
+ /* Type reference count and various PGT_xxx flags and fields. */
+ unsigned long type_info;
+ } inuse;
+ /* Page is on a free list: ((count_info & PGC_count_mask) == 0). */
+ union {
+ struct {
+ /*
+ * Index of the first *possibly* unscrubbed page in the buddy.
+ * One more bit than maximum possible order to accommodate
+ * INVALID_DIRTY_IDX.
+ */
+#define INVALID_DIRTY_IDX ((1UL << (MAX_ORDER + 1)) - 1)
+ unsigned long first_dirty:MAX_ORDER + 1;
+
+ /* Do TLBs need flushing for safety before next page use? */
+ bool need_tlbflush:1;
+
+#define BUDDY_NOT_SCRUBBING 0
+#define BUDDY_SCRUBBING 1
+#define BUDDY_SCRUB_ABORT 2
+ unsigned long scrub_state:2;
+ };
+
+ unsigned long val;
+ } free;
+
+ } u;
+
+ union {
+ /* Page is in use, but not as a shadow. */
+ struct {
+ /* Owner of this page (zero if page is anonymous). */
+ struct domain *domain;
+ } inuse;
+
+ /* Page is on a free list. */
+ struct {
+ /* Order-size of the free chunk this page is the head of. */
+ unsigned int order;
+ } free;
+
+ } v;
+
+ union {
+ /*
+ * Timestamp from 'TLB clock', used to avoid extra safety flushes.
+ * Only valid for: a) free pages, and b) pages with zero type count
+ */
+ u32 tlbflush_timestamp;
+ };
+ u64 pad;
+};
+
+
+#define FRAMETABLE_VIRT_START (XEN_VIRT_START + GB(32))
+#define frame_table ((struct page_info *)FRAMETABLE_VIRT_START)
+
+/* PDX of the first page in the frame table. */
+extern unsigned long frametable_base_pdx;
+
+/* Convert between machine frame numbers and page-info structures. */
+#define mfn_to_page(mfn) \
+ (frame_table + (mfn_to_pdx(mfn) - frametable_base_pdx))
+#define page_to_mfn(pg) \
+ pdx_to_mfn((unsigned long)((pg) - frame_table) + frametable_base_pdx)
+
+static inline void *page_to_virt(const struct page_info *pg)
+{
+ return mfn_to_virt(mfn_x(page_to_mfn(pg)));
+}
+
+/*
+ * Common code requires get_page_type and put_page_type.
+ * We don't care about typecounts so we just do the minimum to make it
+ * happy.
+ */ +static inline int get_page_type(struct page_info *page, unsigned long type) +{ + return 1; +} + +static inline void put_page_type(struct page_info *page) +{ + return; +} + +/* TODO */ +static inline bool get_page_nr(struct page_info *page, const struct domain *domain, + unsigned long nr) +{ + BUG(); +} +static inline void put_page_nr(struct page_info *page, unsigned long nr) +{ + BUG(); +} + +static inline void put_page_and_type(struct page_info *page) +{ + put_page_type(page); + put_page(page); +} + +/* + * PPC does not have an M2P, but common code expects a handful of + * M2P-related defines and functions. Provide dummy versions of these. + */ +#define INVALID_M2P_ENTRY (~0UL) +#define SHARED_M2P_ENTRY (~0UL - 1UL) +#define SHARED_M2P(_e) ((_e) == SHARED_M2P_ENTRY) + +/* Xen always owns P2M on PPC */ +#define set_gpfn_from_mfn(mfn, pfn) do { (void) (mfn), (void)(pfn); } while (0) +#define mfn_to_gfn(d, mfn) ((void)(d), _gfn(mfn_x(mfn))) + +#define PDX_GROUP_SHIFT (16 + 5) + +static inline unsigned long domain_get_maximum_gpfn(struct domain *d) +{ + BUG(); + return 0; +} + +static inline long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg) +{ + BUG(); + return 0; +} + +static inline unsigned int arch_get_dma_bitsize(void) +{ + return 32; /* TODO */ +} + +/* + * On PPC, all the RAM is currently direct mapped in Xen. + * Hence return always true. + */ +static inline bool arch_mfns_in_directmap(unsigned long mfn, unsigned long nr) +{ + return true; +} #endif /* _ASM_PPC_MM_H */ diff --git a/xen/arch/ppc/include/asm/monitor.h b/xen/arch/ppc/include/asm/monitor.h new file mode 100644 index 0000000000..f810a367c7 --- /dev/null +++ b/xen/arch/ppc/include/asm/monitor.h @@ -0,0 +1,48 @@ +/* Derived from xen/arch/arm/include/asm/monitor.h */ +#ifndef __ASM_PPC_MONITOR_H__ +#define __ASM_PPC_MONITOR_H__ + +#include + +#include +#include + +static inline +void arch_monitor_allow_userspace(struct domain *d, bool allow_userspace) +{ +} + +static inline +int arch_monitor_domctl_op(struct domain *d, struct xen_domctl_monitor_op *mop) +{ + /* No arch-specific monitor ops on PPC. */ + return -EOPNOTSUPP; +} + +int arch_monitor_domctl_event(struct domain *d, + struct xen_domctl_monitor_op *mop); + +static inline +int arch_monitor_init_domain(struct domain *d) +{ + /* No arch-specific domain initialization on PPC. */ + return 0; +} + +static inline +void arch_monitor_cleanup_domain(struct domain *d) +{ + /* No arch-specific domain cleanup on PPC. */ +} + +static inline uint32_t arch_monitor_get_capabilities(struct domain *d) +{ + uint32_t capabilities = 0; + + capabilities = (1U << XEN_DOMCTL_MONITOR_EVENT_GUEST_REQUEST | + 1U << XEN_DOMCTL_MONITOR_EVENT_PRIVILEGED_CALL); + + return capabilities; +} + +#endif /* __ASM_PPC_MONITOR_H__ */ diff --git a/xen/arch/ppc/include/asm/nospec.h b/xen/arch/ppc/include/asm/nospec.h new file mode 100644 index 0000000000..4d8ec923e9 --- /dev/null +++ b/xen/arch/ppc/include/asm/nospec.h @@ -0,0 +1,18 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * From arch/arm/include/asm/nospec.h. + * Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 
+ */ +#ifndef __ASM_PPC_NOSPEC_H__ +#define __ASM_PPC_NOSPEC_H__ + +static inline bool evaluate_nospec(bool condition) +{ + return condition; +} + +static inline void block_speculation(void) +{ +} + +#endif /* __ASM_PPC_NOSPEC_H__ */ diff --git a/xen/arch/ppc/include/asm/numa.h b/xen/arch/ppc/include/asm/numa.h new file mode 100644 index 0000000000..d857bba2ba --- /dev/null +++ b/xen/arch/ppc/include/asm/numa.h @@ -0,0 +1,26 @@ +#ifndef __ASM_PPC_NUMA_H__ +#define __ASM_PPC_NUMA_H__ + +#include +#include + +typedef uint8_t nodeid_t; + +/* Fake one node for now. See also node_online_map. */ +#define cpu_to_node(cpu) 0 +#define node_to_cpumask(node) (cpu_online_map) + +/* + * TODO: make first_valid_mfn static when NUMA is supported on Arm, this + * is required because the dummy helpers are using it. + */ +extern mfn_t first_valid_mfn; + +/* XXX: implement NUMA support */ +#define node_spanned_pages(nid) (max_page - mfn_x(first_valid_mfn)) +#define node_start_pfn(nid) (mfn_x(first_valid_mfn)) +#define __node_distance(a, b) (20) + +#define arch_want_default_dmazone() (false) + +#endif /* __ASM_PPC_NUMA_H__ */ diff --git a/xen/arch/ppc/include/asm/p2m.h b/xen/arch/ppc/include/asm/p2m.h new file mode 100644 index 0000000000..851e9f011a --- /dev/null +++ b/xen/arch/ppc/include/asm/p2m.h @@ -0,0 +1,105 @@ +#ifndef __ASM_PPC_P2M_H__ +#define __ASM_PPC_P2M_H__ + +#include + +#define paddr_bits PADDR_BITS + +/* + * List of possible type for each page in the p2m entry. + * The number of available bit per page in the pte for this purpose is 4 bits. + * So it's possible to only have 16 fields. If we run out of value in the + * future, it's possible to use higher value for pseudo-type and don't store + * them in the p2m entry. + */ +typedef enum { + p2m_invalid = 0, /* Nothing mapped here */ + p2m_ram_rw, /* Normal read/write guest RAM */ + p2m_ram_ro, /* Read-only; writes are silently dropped */ + p2m_mmio_direct_dev,/* Read/write mapping of genuine Device MMIO area */ + p2m_mmio_direct_nc, /* Read/write mapping of genuine MMIO area non-cacheable */ + p2m_mmio_direct_c, /* Read/write mapping of genuine MMIO area cacheable */ + p2m_map_foreign_rw, /* Read/write RAM pages from foreign domain */ + p2m_map_foreign_ro, /* Read-only RAM pages from foreign domain */ + p2m_grant_map_rw, /* Read/write grant mapping */ + p2m_grant_map_ro, /* Read-only grant mapping */ + /* The types below are only used to decide the page attribute in the P2M */ + p2m_iommu_map_rw, /* Read/write iommu mapping */ + p2m_iommu_map_ro, /* Read-only iommu mapping */ + p2m_max_real_type, /* Types after this won't be store in the p2m */ +} p2m_type_t; + +#include + +static inline int get_page_and_type(struct page_info *page, + struct domain *domain, + unsigned long type) +{ + BUG(); + return 1; +} + +/* Look up a GFN and take a reference count on the backing page. 
*/ +typedef unsigned int p2m_query_t; +#define P2M_ALLOC (1u<<0) /* Populate PoD and paged-out entries */ +#define P2M_UNSHARE (1u<<1) /* Break CoW sharing */ + +static inline struct page_info *get_page_from_gfn( + struct domain *d, unsigned long gfn, p2m_type_t *t, p2m_query_t q) +{ + BUG(); + return NULL; +} + +static inline void memory_type_changed(struct domain *d) +{ + BUG(); +} + + +static inline int guest_physmap_mark_populate_on_demand(struct domain *d, unsigned long gfn, + unsigned int order) +{ + BUG(); + return 1; +} + +static inline int guest_physmap_add_entry(struct domain *d, + gfn_t gfn, + mfn_t mfn, + unsigned long page_order, + p2m_type_t t) +{ + BUG(); + return 1; +} + +/* Untyped version for RAM only, for compatibility */ +static inline int __must_check +guest_physmap_add_page(struct domain *d, gfn_t gfn, mfn_t mfn, + unsigned int page_order) +{ + return guest_physmap_add_entry(d, gfn, mfn, page_order, p2m_ram_rw); +} + +static inline mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn) +{ + BUG(); + return _mfn(0); +} + +static inline bool arch_acquire_resource_check(struct domain *d) +{ + /* + * The reference counting of foreign entries in set_foreign_p2m_entry() + * is supported on PPC. + */ + return true; +} + +static inline void p2m_altp2m_check(struct vcpu *v, uint16_t idx) +{ + /* Not supported on PPC. */ +} + +#endif /* __ASM_PPC_P2M_H__ */ diff --git a/xen/arch/ppc/include/asm/page.h b/xen/arch/ppc/include/asm/page.h index e58b0a7354..713630c393 100644 --- a/xen/arch/ppc/include/asm/page.h +++ b/xen/arch/ppc/include/asm/page.h @@ -39,6 +39,9 @@ #define PTE_XEN_RO (PTE_XEN_BASE | PTE_EAA_READ) #define PTE_XEN_RX (PTE_XEN_BASE | PTE_EAA_READ | PTE_EAA_EXECUTE) +/* TODO */ +#define PAGE_HYPERVISOR 0 + /* * Radix Tree layout for 64KB pages: * @@ -175,4 +178,20 @@ struct prtb_entry { void tlbie_all(void); +static inline void invalidate_icache(void) +{ + BUG(); +} + +#define clear_page(page) memset((void *)(page), 0, PAGE_SIZE) +#define copy_page(dp, sp) memcpy(dp, sp, PAGE_SIZE) + +/* TODO: Flush the dcache for an entire page. 
*/ +static inline void flush_page_to_ram(unsigned long mfn, bool sync_icache) +{ + BUG(); +} + +#define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & PAGE_MASK) + #endif /* _ASM_PPC_PAGE_H */ diff --git a/xen/arch/ppc/include/asm/paging.h b/xen/arch/ppc/include/asm/paging.h new file mode 100644 index 0000000000..eccacece29 --- /dev/null +++ b/xen/arch/ppc/include/asm/paging.h @@ -0,0 +1,7 @@ +#ifndef __ASM_PPC_PAGING_H__ +#define __ASM_PPC_PAGING_H__ + +#define paging_mode_translate(d) (1) +#define paging_mode_external(d) (1) + +#endif /* __ASM_PPC_PAGING_H__ */ diff --git a/xen/arch/ppc/include/asm/pci.h b/xen/arch/ppc/include/asm/pci.h new file mode 100644 index 0000000000..e76c8e5475 --- /dev/null +++ b/xen/arch/ppc/include/asm/pci.h @@ -0,0 +1,7 @@ +#ifndef __ASM_PPC_PCI_H__ +#define __ASM_PPC_PCI_H__ + +struct arch_pci_dev { +}; + +#endif /* __ASM_PPC_PCI_H__ */ diff --git a/xen/arch/ppc/include/asm/percpu.h b/xen/arch/ppc/include/asm/percpu.h new file mode 100644 index 0000000000..faae72ee41 --- /dev/null +++ b/xen/arch/ppc/include/asm/percpu.h @@ -0,0 +1,26 @@ +#ifndef __PPC_PERCPU_H__ +#define __PPC_PERCPU_H__ + +#ifndef __ASSEMBLY__ + +#include + +extern char __per_cpu_start[], __per_cpu_data_end[]; +extern unsigned long __per_cpu_offset[NR_CPUS]; +void percpu_init_areas(void); + +#define smp_processor_id() 0 /* TODO: Fix this */ + +#define per_cpu(var, cpu) \ + (*RELOC_HIDE(&per_cpu__##var, __per_cpu_offset[cpu])) +#define this_cpu(var) \ + (*RELOC_HIDE(&per_cpu__##var, smp_processor_id())) + +#define per_cpu_ptr(var, cpu) \ + (*RELOC_HIDE(var, __per_cpu_offset[cpu])) +#define this_cpu_ptr(var) \ + (*RELOC_HIDE(var, smp_processor_id())) + +#endif + +#endif /* __PPC_PERCPU_H__ */ diff --git a/xen/arch/ppc/include/asm/procarea.h b/xen/arch/ppc/include/asm/procarea.h new file mode 100644 index 0000000000..55a358f1e4 --- /dev/null +++ b/xen/arch/ppc/include/asm/procarea.h @@ -0,0 +1,38 @@ +/* + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. + * + * Copyright (C) IBM Corp. 
2005 + * + * Authors: Hollis Blanchard + */ + +#ifndef _ASM_PROCAREA_H_ +#define _ASM_PROCAREA_H_ + +#include + +struct vcpu; +struct gdb_state; + +struct processor_area +{ + unsigned int whoami; + unsigned int hard_id; + struct vcpu *cur_vcpu; + void *hyp_stack_base; + unsigned long saved_regs[2]; +}; + +#endif diff --git a/xen/arch/ppc/include/asm/processor.h b/xen/arch/ppc/include/asm/processor.h index 417288738c..695f98af5b 100644 --- a/xen/arch/ppc/include/asm/processor.h +++ b/xen/arch/ppc/include/asm/processor.h @@ -110,6 +110,10 @@ /* Macro to adjust thread priority for hardware multithreading */ #define HMT_very_low() asm volatile ( "or %r31, %r31, %r31" ) +/* TODO: This isn't correct */ +#define cpu_to_core(_cpu) (0) +#define cpu_to_socket(_cpu) (0) + /* * User-accessible registers: most of these need to be saved/restored * for every nested Xen invocation. @@ -175,6 +179,8 @@ static inline void noreturn die(void) HMT_very_low(); } +#define cpu_relax() asm volatile ( "or %r1, %r1, %r1; or %r2, %r2, %r2" ) + #endif /* __ASSEMBLY__ */ #endif /* _ASM_PPC_PROCESSOR_H */ diff --git a/xen/arch/ppc/include/asm/random.h b/xen/arch/ppc/include/asm/random.h new file mode 100644 index 0000000000..2f9e9bbae4 --- /dev/null +++ b/xen/arch/ppc/include/asm/random.h @@ -0,0 +1,9 @@ +#ifndef __ASM_PPC_RANDOM_H__ +#define __ASM_PPC_RANDOM_H__ + +static inline unsigned int arch_get_random(void) +{ + return 0; +} + +#endif /* __ASM_PPC_RANDOM_H__ */ diff --git a/xen/arch/ppc/include/asm/regs.h b/xen/arch/ppc/include/asm/regs.h index d5f47a6ff1..68135ca584 100644 --- a/xen/arch/ppc/include/asm/regs.h +++ b/xen/arch/ppc/include/asm/regs.h @@ -23,6 +23,8 @@ #ifndef _ASM_REG_DEFS_H_ #define _ASM_REG_DEFS_H_ +#include + /* Special Purpose Registers */ #define SPRN_VRSAVE 256 #define SPRN_DSISR 18 diff --git a/xen/arch/ppc/include/asm/setup.h b/xen/arch/ppc/include/asm/setup.h new file mode 100644 index 0000000000..e4f64879b6 --- /dev/null +++ b/xen/arch/ppc/include/asm/setup.h @@ -0,0 +1,6 @@ +#ifndef __ASM_PPC_SETUP_H__ +#define __ASM_PPC_SETUP_H__ + +#define max_init_domid (0) + +#endif /* __ASM_PPC_SETUP_H__ */ diff --git a/xen/arch/ppc/include/asm/smp.h b/xen/arch/ppc/include/asm/smp.h new file mode 100644 index 0000000000..eca43f0e6c --- /dev/null +++ b/xen/arch/ppc/include/asm/smp.h @@ -0,0 +1,18 @@ +#ifndef __ASM_SMP_H +#define __ASM_SMP_H + +#include +#include + +DECLARE_PER_CPU(cpumask_var_t, cpu_sibling_mask); +DECLARE_PER_CPU(cpumask_var_t, cpu_core_mask); + +#define cpu_is_offline(cpu) unlikely(!cpu_online(cpu)) + +/* + * Do we, for platform reasons, need to actually keep CPUs online when we + * would otherwise prefer them to be off? 
+ */ +#define park_offline_cpus false + +#endif diff --git a/xen/arch/ppc/include/asm/softirq.h b/xen/arch/ppc/include/asm/softirq.h new file mode 100644 index 0000000000..a0b28a5e51 --- /dev/null +++ b/xen/arch/ppc/include/asm/softirq.h @@ -0,0 +1,8 @@ +#ifndef __ASM_PPC_SOFTIRQ_H__ +#define __ASM_PPC_SOFTIRQ_H__ + +#define NR_ARCH_SOFTIRQS 0 + +#define arch_skip_send_event_check(cpu) 0 + +#endif /* __ASM_PPC_SOFTIRQ_H__ */ diff --git a/xen/arch/ppc/include/asm/spinlock.h b/xen/arch/ppc/include/asm/spinlock.h new file mode 100644 index 0000000000..4bdb4b1e98 --- /dev/null +++ b/xen/arch/ppc/include/asm/spinlock.h @@ -0,0 +1,15 @@ +#ifndef __ASM_SPINLOCK_H +#define __ASM_SPINLOCK_H + +#define arch_lock_acquire_barrier() smp_mb() +#define arch_lock_release_barrier() smp_mb() + +#define arch_lock_relax() cpu_relax() +#define arch_lock_signal() +#define arch_lock_signal_wmb() \ +({ \ + smp_wmb(); \ + arch_lock_signal(); \ +}) + +#endif /* __ASM_SPINLOCK_H */ diff --git a/xen/arch/ppc/include/asm/system.h b/xen/arch/ppc/include/asm/system.h index 94091df644..5adc321e88 100644 --- a/xen/arch/ppc/include/asm/system.h +++ b/xen/arch/ppc/include/asm/system.h @@ -1,6 +1,247 @@ +/* + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. + * + * Copyright (C) IBM Corp. 2005 + * Copyright (C) Raptor Engineering LLC + * + * Authors: Jimi Xenidis + * Shawn Anastasio + */ + #ifndef _ASM_SYSTEM_H_ #define _ASM_SYSTEM_H_ -#define smp_wmb() __asm__ __volatile__ ( "lwsync" : : : "memory" ) +#include +#include +#include +#include +#include +#include + +#define xchg(ptr,x) \ +({ \ + __typeof__(*(ptr)) _x_ = (x); \ + (__typeof__(*(ptr))) __xchg((ptr), (unsigned long)_x_, sizeof(*(ptr))); \ +}) + +#define build_xchg(fn, type, ldinsn, stinsn) \ +static inline unsigned long \ +fn(volatile type *m, unsigned long val) \ +{ \ + unsigned long dummy; \ + \ + __asm__ __volatile__( \ + PPC_ATOMIC_ENTRY_BARRIER \ +"1: " ldinsn " %0,0,%3\n" \ + stinsn " %2,0,%3\n" \ +"2: bne- 1b" \ + PPC_ATOMIC_EXIT_BARRIER \ + : "=&r" (dummy), "=m" (*m) \ + : "r" (val), "r" (m) \ + : "cc", "memory"); \ + return dummy; \ +} + +build_xchg(__xchg_u8, uint8_t, "lbarx", "stbcx.") +build_xchg(__xchg_u16, uint16_t, "lharx", "sthcx.") +build_xchg(__xchg_u32, uint32_t, "lwarx", "stwcx.") +build_xchg(__xchg_u64, uint64_t, "ldarx", "stdcx.") + +#undef build_xchg + +/* + * This function doesn't exist, so you'll get a linker error + * if something tries to do an invalid xchg(). 
+ */ +extern void __xchg_called_with_bad_pointer(void); + +static inline unsigned long +__xchg(volatile void *ptr, unsigned long x, int size) +{ + switch (size) { + case 1: + return __xchg_u8(ptr, x); + case 2: + return __xchg_u16(ptr, x); + case 4: + return __xchg_u32(ptr, x); + case 8: + return __xchg_u64(ptr, x); + } + __xchg_called_with_bad_pointer(); + return x; +} + + +static inline unsigned long +__cmpxchg_u32(volatile int *p, int old, int new) +{ + unsigned int prev; + + __asm__ __volatile__ ( + PPC_ATOMIC_ENTRY_BARRIER +"1: lwarx %0,0,%2 # __cmpxchg_u32\n\ + cmpw 0,%0,%3\n\ + bne- 2f\n\ + stwcx. %4,0,%2\n\ + bne- 1b" + PPC_ATOMIC_EXIT_BARRIER + "\n\ +2:" + : "=&r" (prev), "=m" (*p) + : "r" (p), "r" (old), "r" (new), "m" (*p) + : "cc", "memory"); + + return prev; +} + +static inline unsigned long +__cmpxchg_u64(volatile long *p, unsigned long old, unsigned long new) +{ + unsigned long prev; + + __asm__ __volatile__ ( + PPC_ATOMIC_ENTRY_BARRIER +"1: ldarx %0,0,%2 # __cmpxchg_u64\n\ + cmpd 0,%0,%3\n\ + bne- 2f\n\ + stdcx. %4,0,%2\n\ + bne- 1b" + PPC_ATOMIC_EXIT_BARRIER + "\n\ +2:" + : "=&r" (prev), "=m" (*p) + : "r" (p), "r" (old), "r" (new), "m" (*p) + : "cc", "memory"); + + return prev; +} + +/* This function doesn't exist, so you'll get a linker error + if something tries to do an invalid cmpxchg(). */ +extern void __cmpxchg_called_with_bad_pointer(void); + +static always_inline unsigned long +__cmpxchg(volatile void *ptr, unsigned long old, unsigned long new, int size) +{ + switch (size) { + case 2: + BUG(); return 0; /* XXX implement __cmpxchg_u16 ? */ + case 4: + return __cmpxchg_u32(ptr, old, new); + case 8: + return __cmpxchg_u64(ptr, old, new); + } + __cmpxchg_called_with_bad_pointer(); + return old; +} + +#define cmpxchg_user(ptr,o,n) cmpxchg(ptr,o,n) + +#define cmpxchg(ptr,o,n) \ + ({ \ + __typeof__(*(ptr)) _o_ = (o); \ + __typeof__(*(ptr)) _n_ = (n); \ + (__typeof__(*(ptr))) __cmpxchg((ptr), (unsigned long)_o_, \ + (unsigned long)_n_, sizeof(*(ptr))); \ + }) + + +/* + * Memory barrier. + * The sync instruction guarantees that all memory accesses initiated + * by this processor have been performed (with respect to all other + * mechanisms that access memory). The eieio instruction is a barrier + * providing an ordering (separately) for (a) cacheable stores and (b) + * loads and stores to non-cacheable memory (e.g. I/O devices). + * + * mb() prevents loads and stores being reordered across this point. + * rmb() prevents loads being reordered across this point. + * wmb() prevents stores being reordered across this point. + * read_barrier_depends() prevents data-dependent loads being reordered + * across this point (nop on PPC). + * + * We have to use the sync instructions for mb(), since lwsync doesn't + * order loads with respect to previous stores. Lwsync is fine for + * rmb(), though. + * For wmb(), we use sync since wmb is used in drivers to order + * stores to system memory with respect to writes to the device. + * However, smp_wmb() can be a lighter-weight eieio barrier on + * SMP since it is only used to order updates to system memory. 
+ */ +#define mb() __asm__ __volatile__ ("sync" : : : "memory") +#define rmb() __asm__ __volatile__ ("lwsync" : : : "memory") +#define wmb() __asm__ __volatile__ ("sync" : : : "memory") +#define read_barrier_depends() do { } while(0) + +#define set_mb(var, value) do { var = value; smp_mb(); } while (0) +#define set_wmb(var, value) do { var = value; smp_wmb(); } while (0) + +#define smp_mb__before_atomic() smp_mb() +#define smp_mb__after_atomic() smp_mb() + +#ifdef CONFIG_SMP +#define smp_mb() mb() +#define smp_rmb() rmb() +#define smp_wmb() __asm__ __volatile__ ("lwsync" : : : "memory") +#define smp_read_barrier_depends() read_barrier_depends() +#else +#define smp_mb() __asm__ __volatile__("": : :"memory") +#define smp_rmb() __asm__ __volatile__("": : :"memory") +#define smp_wmb() __asm__ __volatile__("": : :"memory") +#define smp_read_barrier_depends() do { } while(0) +#endif /* CONFIG_SMP */ + +#define local_save_flags(flags) ((flags) = mfmsr()) +#define local_irq_restore(flags) do { \ + __asm__ __volatile__("": : :"memory"); \ + mtmsrd((flags)); \ +} while(0) + +static inline void local_irq_disable(void) +{ + unsigned long msr; + msr = mfmsr(); + mtmsrd(msr & ~MSR_EE); + __asm__ __volatile__("" : : : "memory"); +} + +static inline void local_irq_enable(void) +{ + unsigned long msr; + __asm__ __volatile__("" : : : "memory"); + msr = mfmsr(); + mtmsrd(msr | MSR_EE); +} + +static inline void __do_save_and_cli(unsigned long *flags) +{ + unsigned long msr; + msr = mfmsr(); + *flags = msr; + mtmsrd(msr & ~MSR_EE); + __asm__ __volatile__("" : : : "memory"); +} + +#define local_irq_save(flags) __do_save_and_cli(&flags) + +static inline int local_irq_is_enabled(void) +{ + return !!(mfmsr() & MSR_EE); +} + +#define arch_fetch_and_add(x, v) __sync_fetch_and_add(x, v) #endif /* _ASM_SYSTEM_H */ diff --git a/xen/arch/ppc/include/asm/time.h b/xen/arch/ppc/include/asm/time.h new file mode 100644 index 0000000000..7872d3c15b --- /dev/null +++ b/xen/arch/ppc/include/asm/time.h @@ -0,0 +1,20 @@ +#ifndef __ASM_PPC_TIME_H__ +#define __ASM_PPC_TIME_H__ + +#include +#include +#include + +struct vcpu; + +/* TODO: implement */ +static inline void force_update_vcpu_system_time(struct vcpu *v) { BUG(); } + +typedef unsigned long cycles_t; + +static inline cycles_t get_cycles(void) +{ + return mfspr(SPRN_TBRL); +} + +#endif /* __ASM_PPC_TIME_H__ */ diff --git a/xen/arch/ppc/include/asm/vm_event.h b/xen/arch/ppc/include/asm/vm_event.h new file mode 100644 index 0000000000..346653f32a --- /dev/null +++ b/xen/arch/ppc/include/asm/vm_event.h @@ -0,0 +1,49 @@ +#ifndef __ASM_PPC_VM_EVENT_H__ +#define __ASM_PPC_VM_EVENT_H__ + +#include +#include +#include + +static inline int vm_event_init_domain(struct domain *d) +{ + /* Nothing to do. */ + return 0; +} + +static inline void vm_event_cleanup_domain(struct domain *d) +{ + memset(&d->monitor, 0, sizeof(d->monitor)); +} + +static inline void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v, + vm_event_response_t *rsp) +{ + /* Not supported on PPC. */ +} + +static inline +void vm_event_register_write_resume(struct vcpu *v, vm_event_response_t *rsp) +{ + /* Not supported on PPC. */ +} + +static inline +void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp) +{ + /* Not supported on PPC. */ +} + +static inline +void vm_event_sync_event(struct vcpu *v, bool value) +{ + /* Not supported on PPC. */ +} + +static inline +void vm_event_reset_vmtrace(struct vcpu *v) +{ + /* Not supported on PPC. 
*/ +} + +#endif /* __ASM_PPC_VM_EVENT_H__ */ diff --git a/xen/arch/ppc/include/asm/xenoprof.h b/xen/arch/ppc/include/asm/xenoprof.h new file mode 100644 index 0000000000..e69de29bb2 diff --git a/xen/arch/ppc/mm-radix.c b/xen/arch/ppc/mm-radix.c index 071e71b73e..399898a36d 100644 --- a/xen/arch/ppc/mm-radix.c +++ b/xen/arch/ppc/mm-radix.c @@ -1,13 +1,13 @@ /* SPDX-License-Identifier: GPL-2.0-or-later */ #include #include +#include #include #include #include #include #include -#include #include #include #include diff --git a/xen/arch/ppc/opal.c b/xen/arch/ppc/opal.c index 396a9e403b..1183b7d5ef 100644 --- a/xen/arch/ppc/opal.c +++ b/xen/arch/ppc/opal.c @@ -1,10 +1,8 @@ /* SPDX-License-Identifier: GPL-2.0-or-later */ -#include "xen/compiler.h" #include #include #include #include -#include #include #include #include diff --git a/xen/arch/ppc/tlb-radix.c b/xen/arch/ppc/tlb-radix.c index 69934076a7..551e8280de 100644 --- a/xen/arch/ppc/tlb-radix.c +++ b/xen/arch/ppc/tlb-radix.c @@ -5,9 +5,9 @@ * * Copyright 2015-2016, Aneesh Kumar K.V, IBM Corporation. */ +#include #include -#include #include #include diff --git a/xen/include/public/hvm/save.h b/xen/include/public/hvm/save.h index 464ebdb0da..2cf4238daa 100644 --- a/xen/include/public/hvm/save.h +++ b/xen/include/public/hvm/save.h @@ -89,6 +89,8 @@ DECLARE_HVM_SAVE_TYPE(END, 0, struct hvm_save_end); #include "../arch-x86/hvm/save.h" #elif defined(__arm__) || defined(__aarch64__) #include "../arch-arm/hvm/save.h" +#elif defined(__powerpc64__) +#include "../arch-ppc.h" #else #error "unsupported architecture" #endif diff --git a/xen/include/public/pmu.h b/xen/include/public/pmu.h index eb87a81e7b..5a176b6ac3 100644 --- a/xen/include/public/pmu.h +++ b/xen/include/public/pmu.h @@ -11,6 +11,8 @@ #include "arch-x86/pmu.h" #elif defined (__arm__) || defined (__aarch64__) #include "arch-arm.h" +#elif defined (__powerpc64__) +#include "arch-ppc.h" #else #error "Unsupported architecture" #endif diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h index 920567e006..b812a0a324 100644 --- a/xen/include/public/xen.h +++ b/xen/include/public/xen.h @@ -16,6 +16,8 @@ #include "arch-x86/xen.h" #elif defined(__arm__) || defined (__aarch64__) #include "arch-arm.h" +#elif defined(__powerpc64__) +#include "arch-ppc.h" #else #error "Unsupported architecture" #endif From patchwork Wed Aug 2 23:03:02 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shawn Anastasio X-Patchwork-Id: 13338907 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 3B0D8C001DF for ; Wed, 2 Aug 2023 23:03:37 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.575866.901392 (Exim 4.92) (envelope-from ) id 1qRKsJ-00049W-LS; Wed, 02 Aug 2023 23:03:27 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 575866.901392; Wed, 02 Aug 2023 23:03:27 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qRKsJ-00047Z-Hd; Wed, 02 Aug 2023 23:03:27 +0000 Received: by outflank-mailman (input) for mailman id 575866; Wed, 02 Aug 2023 23:03:25 
+0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qRKsH-0002ha-KG for xen-devel@lists.xenproject.org; Wed, 02 Aug 2023 23:03:25 +0000 Received: from raptorengineering.com (mail.raptorengineering.com [23.155.224.40]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS id c630955c-3188-11ee-8613-37d641c3527e; Thu, 03 Aug 2023 01:03:21 +0200 (CEST) Received: from localhost (localhost [127.0.0.1]) by mail.rptsys.com (Postfix) with ESMTP id D90A58285755; Wed, 2 Aug 2023 18:03:18 -0500 (CDT) Received: from mail.rptsys.com ([127.0.0.1]) by localhost (vali.starlink.edu [127.0.0.1]) (amavisd-new, port 10032) with ESMTP id q5S3udTYLjJ5; Wed, 2 Aug 2023 18:03:18 -0500 (CDT) Received: from localhost (localhost [127.0.0.1]) by mail.rptsys.com (Postfix) with ESMTP id 3CF748286985; Wed, 2 Aug 2023 18:03:18 -0500 (CDT) Received: from mail.rptsys.com ([127.0.0.1]) by localhost (vali.starlink.edu [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id EVp_UIJ6pOb8; Wed, 2 Aug 2023 18:03:18 -0500 (CDT) Received: from raptor-ewks-026.lan (5.edge.rptsys.com [23.155.224.38]) by mail.rptsys.com (Postfix) with ESMTPSA id BEC5182857CC; Wed, 2 Aug 2023 18:03:17 -0500 (CDT) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: c630955c-3188-11ee-8613-37d641c3527e DKIM-Filter: OpenDKIM Filter v2.10.3 mail.rptsys.com 3CF748286985 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=raptorengineering.com; s=B8E824E6-0BE2-11E6-931D-288C65937AAD; t=1691017398; bh=mTHIJoDesvUL/1mvvl/fu0GmqgkC6yCd3GyR+DEjPQM=; h=From:To:Date:Message-Id:MIME-Version; b=gyhBMPrbZOxx2NbfavaS5idAmmnHNgceVgOSB2PqpVWV1kD5ciH4ZT6KQra/sBk0f ZKanThnkAWEw2pqLbkB6W2y9fYkk8hbs905OBqMGWFFAi4CoZL+Wxxo4K/PFThpH2e 4BXUcx1chDr7AJ6PQb5zo+eJnfZwpg2072AmoJAg= X-Virus-Scanned: amavisd-new at rptsys.com From: Shawn Anastasio To: xen-devel@lists.xenproject.org Cc: Timothy Pearson , Andrew Cooper , Jan Beulich , Shawn Anastasio Subject: [PATCH 6/9] ns16550: Guard accesses to `vuart` field by CONFIG_ARM Date: Wed, 2 Aug 2023 18:03:02 -0500 Message-Id: <1bee2fe06c4a9262b05cb4b8cd392d9630279f30.1691016993.git.sanastasio@raptorengineering.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: References: MIME-Version: 1.0 When CONFIG_ARM is defined, a `vuart` field is added to struct ns16550, but accesses to this field in ns16550_uart_dt_init weren't correctly guarded. Fix build on !ARM platforms that define CONFIG_HAS_DEVICE_TREE by correctly guarding access to the vuart field in this function. Signed-off-by: Shawn Anastasio Reviewed-by: Jan Beulich --- xen/drivers/char/ns16550.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c index 212a9c49ae..c0598efb27 100644 --- a/xen/drivers/char/ns16550.c +++ b/xen/drivers/char/ns16550.c @@ -1797,11 +1797,13 @@ static int __init ns16550_uart_dt_init(struct dt_device_node *dev, uart->dw_usr_bsy = dt_device_is_compatible(dev, "snps,dw-apb-uart"); +#ifdef CONFIG_ARM uart->vuart.base_addr = uart->io_base; uart->vuart.size = uart->io_size; uart->vuart.data_off = UART_THR <reg_shift; uart->vuart.status_off = UART_LSR<reg_shift; uart->vuart.status = UART_LSR_THRE|UART_LSR_TEMT; +#endif /* Register with generic serial driver. 
*/ serial_register_uart(uart - ns16550_com, &ns16550_driver, uart); From patchwork Wed Aug 2 23:03:03 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shawn Anastasio X-Patchwork-Id: 13338910 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 21884C04FE0 for ; Wed, 2 Aug 2023 23:03:39 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.575865.901379 (Exim 4.92) (envelope-from ) id 1qRKsI-0003g9-GY; Wed, 02 Aug 2023 23:03:26 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 575865.901379; Wed, 02 Aug 2023 23:03:26 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qRKsI-0003d6-5X; Wed, 02 Aug 2023 23:03:26 +0000 Received: by outflank-mailman (input) for mailman id 575865; Wed, 02 Aug 2023 23:03:24 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qRKsG-0002ha-KE for xen-devel@lists.xenproject.org; Wed, 02 Aug 2023 23:03:24 +0000 Received: from raptorengineering.com (mail.raptorengineering.com [23.155.224.40]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS id c630ba31-3188-11ee-8613-37d641c3527e; Thu, 03 Aug 2023 01:03:21 +0200 (CEST) Received: from localhost (localhost [127.0.0.1]) by mail.rptsys.com (Postfix) with ESMTP id 3BCD58286985; Wed, 2 Aug 2023 18:03:19 -0500 (CDT) Received: from mail.rptsys.com ([127.0.0.1]) by localhost (vali.starlink.edu [127.0.0.1]) (amavisd-new, port 10032) with ESMTP id stl2bWSDymwU; Wed, 2 Aug 2023 18:03:18 -0500 (CDT) Received: from localhost (localhost [127.0.0.1]) by mail.rptsys.com (Postfix) with ESMTP id CD8ED8286986; Wed, 2 Aug 2023 18:03:18 -0500 (CDT) Received: from mail.rptsys.com ([127.0.0.1]) by localhost (vali.starlink.edu [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id awEEu_sSYccA; Wed, 2 Aug 2023 18:03:18 -0500 (CDT) Received: from raptor-ewks-026.lan (5.edge.rptsys.com [23.155.224.38]) by mail.rptsys.com (Postfix) with ESMTPSA id 3AB2A8285801; Wed, 2 Aug 2023 18:03:18 -0500 (CDT) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: c630ba31-3188-11ee-8613-37d641c3527e DKIM-Filter: OpenDKIM Filter v2.10.3 mail.rptsys.com CD8ED8286986 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=raptorengineering.com; s=B8E824E6-0BE2-11E6-931D-288C65937AAD; t=1691017398; bh=IRB/k9AAEsYW3GbnY7XYEQDTmCQcNBCKgPHrjMuNqBM=; h=From:To:Date:Message-Id:MIME-Version; b=fVtp3qJ4toRy7w4GJ53dyAY6jo82bfhuX8WVP11rW4I0gi3pk+YORa4cvIfurrRXQ 4ShGAVQcjKAz7qxmUpFWA3zjko7bewpCdf4gcgv1yF93fuoipyQXvwYMltFPxC2n3A BICoRsAchBTkZGHBYusdrJ+kXEuk/uyjA+y2kxM0= X-Virus-Scanned: amavisd-new at rptsys.com From: Shawn Anastasio To: xen-devel@lists.xenproject.org Cc: Timothy Pearson , Andrew Cooper , Jan Beulich , Shawn Anastasio Subject: [PATCH 7/9] xen/ppc: Define bug frames table in linker script Date: Wed, 2 Aug 2023 
18:03:03 -0500 Message-Id: <65e26bb63977749cc50daad4ecaf891bf7cc6b0a.1691016993.git.sanastasio@raptorengineering.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: References: MIME-Version: 1.0 Define the bug frames table in ppc's linker script as is done by other architectures. Signed-off-by: Shawn Anastasio Acked-by: Jan Beulich --- xen/arch/ppc/xen.lds.S | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/xen/arch/ppc/xen.lds.S b/xen/arch/ppc/xen.lds.S index 2fa81d5a83..692aa34add 100644 --- a/xen/arch/ppc/xen.lds.S +++ b/xen/arch/ppc/xen.lds.S @@ -41,6 +41,16 @@ SECTIONS . = ALIGN(PAGE_SIZE); .rodata : { _srodata = .; /* Read-only data */ + /* Bug frames table */ + __start_bug_frames = .; + *(.bug_frames.0) + __stop_bug_frames_0 = .; + *(.bug_frames.1) + __stop_bug_frames_1 = .; + *(.bug_frames.2) + __stop_bug_frames_2 = .; + *(.bug_frames.3) + __stop_bug_frames_3 = .; *(.rodata) *(.rodata.*) *(.data.rel.ro) From patchwork Wed Aug 2 23:03:04 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Shawn Anastasio X-Patchwork-Id: 13338909 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id 3E886C41513 for ; Wed, 2 Aug 2023 23:03:37 +0000 (UTC) Received: from list by lists.xenproject.org with outflank-mailman.575864.901373 (Exim 4.92) (envelope-from ) id 1qRKsI-0003ZY-29; Wed, 02 Aug 2023 23:03:26 +0000 X-Outflank-Mailman: Message body and most headers restored to incoming version Received: by outflank-mailman (output) from mailman id 575864.901373; Wed, 02 Aug 2023 23:03:26 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qRKsH-0003Yp-SX; Wed, 02 Aug 2023 23:03:25 +0000 Received: by outflank-mailman (input) for mailman id 575864; Wed, 02 Aug 2023 23:03:23 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1qRKsF-0002Xg-L9 for xen-devel@lists.xenproject.org; Wed, 02 Aug 2023 23:03:23 +0000 Received: from raptorengineering.com (mail.raptorengineering.com [23.155.224.40]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS id c6497382-3188-11ee-b265-6b7b168915f2; Thu, 03 Aug 2023 01:03:21 +0200 (CEST) Received: from localhost (localhost [127.0.0.1]) by mail.rptsys.com (Postfix) with ESMTP id 7F86B82867D6; Wed, 2 Aug 2023 18:03:20 -0500 (CDT) Received: from mail.rptsys.com ([127.0.0.1]) by localhost (vali.starlink.edu [127.0.0.1]) (amavisd-new, port 10032) with ESMTP id 5_ulkB8bEAwK; Wed, 2 Aug 2023 18:03:19 -0500 (CDT) Received: from localhost (localhost [127.0.0.1]) by mail.rptsys.com (Postfix) with ESMTP id 18B8682853DD; Wed, 2 Aug 2023 18:03:19 -0500 (CDT) Received: from mail.rptsys.com ([127.0.0.1]) by localhost (vali.starlink.edu [127.0.0.1]) (amavisd-new, port 10026) with ESMTP id PKJPQVnVlc8g; Wed, 2 Aug 2023 18:03:18 -0500 (CDT) Received: from raptor-ewks-026.lan (5.edge.rptsys.com [23.155.224.38]) by mail.rptsys.com (Postfix) with ESMTPSA id A87368286988; Wed, 2 Aug 2023 18:03:18 -0500 (CDT) X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: 
List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: c6497382-3188-11ee-b265-6b7b168915f2 DKIM-Filter: OpenDKIM Filter v2.10.3 mail.rptsys.com 18B8682853DD DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=raptorengineering.com; s=B8E824E6-0BE2-11E6-931D-288C65937AAD; t=1691017399; bh=1Z3BHp5C4S6qyWtJBZVL9zKOQ4J/V94ak8r3E6OfMhE=; h=From:To:Date:Message-Id:MIME-Version; b=OxSSE0AbnR2XSjQTFUqjkz660FTxvImjkTr0IhY5FmyQTiHcq2I12bHlbSICfLs8u qISW10KsmeJ5MHH9muogX6u82BMmVh0MMDKfGBlpEtP1MCgDHNqrUNb39a24/nWMyv hMyXh8BPQfu7Xbc0/7iaPfVYwpnqTkuuuTePSHO0= X-Virus-Scanned: amavisd-new at rptsys.com From: Shawn Anastasio To: xen-devel@lists.xenproject.org Cc: Timothy Pearson , Andrew Cooper , Jan Beulich , Shawn Anastasio Subject: [PATCH 8/9] xen/ppc: Add stub function and symbol definitions Date: Wed, 2 Aug 2023 18:03:04 -0500 Message-Id: <2702cfa486aa92e82fccd6393519073f10f4c40c.1691016993.git.sanastasio@raptorengineering.com> X-Mailer: git-send-email 2.30.2 In-Reply-To: References: MIME-Version: 1.0 Add stub function and symbol definitions required by common code. If the file that the definition is supposed to be located in doesn't already exist yet, temporarily place its definition in the new stubs.c Signed-off-by: Shawn Anastasio --- xen/arch/ppc/Makefile | 1 + xen/arch/ppc/mm-radix.c | 44 +++++ xen/arch/ppc/setup.c | 10 +- xen/arch/ppc/stubs.c | 351 ++++++++++++++++++++++++++++++++++++++++ 4 files changed, 404 insertions(+), 2 deletions(-) create mode 100644 xen/arch/ppc/stubs.c diff --git a/xen/arch/ppc/Makefile b/xen/arch/ppc/Makefile index a059ac4c0a..969910b3b6 100644 --- a/xen/arch/ppc/Makefile +++ b/xen/arch/ppc/Makefile @@ -5,6 +5,7 @@ obj-$(CONFIG_EARLY_PRINTK) += early_printk.init.o obj-y += mm-radix.o obj-y += opal.o obj-y += setup.o +obj-y += stubs.o obj-y += tlb-radix.o $(TARGET): $(TARGET)-syms diff --git a/xen/arch/ppc/mm-radix.c b/xen/arch/ppc/mm-radix.c index 399898a36d..1ed6897366 100644 --- a/xen/arch/ppc/mm-radix.c +++ b/xen/arch/ppc/mm-radix.c @@ -266,3 +266,47 @@ void __init setup_initial_pagetables(void) /* Turn on the MMU */ enable_mmu(); } + + +/* + * TODO: Implement the functions below + */ +unsigned long total_pages; +unsigned long frametable_base_pdx __read_mostly; + +void put_page(struct page_info *p) +{ + BUG(); +} + +void arch_dump_shared_mem_info(void) +{ + BUG(); +} + +int xenmem_add_to_physmap_one(struct domain *d, + unsigned int space, + union add_to_physmap_extra extra, + unsigned long idx, + gfn_t gfn) +{ + BUG(); +} + +int destroy_xen_mappings(unsigned long s, unsigned long e) +{ + BUG(); +} + +int map_pages_to_xen(unsigned long virt, + mfn_t mfn, + unsigned long nr_mfns, + unsigned int flags) +{ + BUG(); +} + +int __init populate_pt_range(unsigned long virt, unsigned long nr_mfns) +{ + BUG(); +} diff --git a/xen/arch/ppc/setup.c b/xen/arch/ppc/setup.c index 466993987b..06c0a5fa80 100644 --- a/xen/arch/ppc/setup.c +++ b/xen/arch/ppc/setup.c @@ -1,5 +1,8 @@ /* SPDX-License-Identifier: GPL-2.0-or-later */ +#include #include +#include +#include #include #include #include @@ -7,8 +10,6 @@ /* Xen stack for bringing up the first CPU. 
*/ unsigned char __initdata cpu0_boot_stack[STACK_SIZE] __aligned(STACK_SIZE); -void __init setup_initial_pagetables(void); - void __init noreturn start_xen(unsigned long r3, unsigned long r4, unsigned long r5, unsigned long r6, unsigned long r7) @@ -39,3 +40,8 @@ void __init noreturn start_xen(unsigned long r3, unsigned long r4, unreachable(); } + +void arch_get_xen_caps(xen_capabilities_info_t *info) +{ + BUG(); +} diff --git a/xen/arch/ppc/stubs.c b/xen/arch/ppc/stubs.c new file mode 100644 index 0000000000..2b3ee94115 --- /dev/null +++ b/xen/arch/ppc/stubs.c @@ -0,0 +1,351 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +#include +#include +#include +#include +#include +#include +#include + +#include + +/* smpboot.c */ + +cpumask_t cpu_online_map; +cpumask_t cpu_present_map; +cpumask_t cpu_possible_map; + +/* ID of the PCPU we're running on */ +DEFINE_PER_CPU(unsigned int, cpu_id); +/* XXX these seem awfully x86ish... */ +/* representing HT siblings of each logical CPU */ +DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_sibling_mask); +/* representing HT and core siblings of each logical CPU */ +DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_core_mask); + +nodemask_t __read_mostly node_online_map = { { [0] = 1UL } }; + +/* time.c */ + +s_time_t get_s_time(void) +{ + BUG(); +} + +int reprogram_timer(s_time_t timeout) +{ + BUG(); +} + +void send_timer_event(struct vcpu *v) +{ + BUG(); +} + +/* traps.c */ + +void show_execution_state(const struct cpu_user_regs *regs) +{ + BUG(); +} + +void arch_hypercall_tasklet_result(struct vcpu *v, long res) +{ + BUG(); +} + +void vcpu_show_execution_state(struct vcpu *v) +{ + BUG(); +} + +/* shutdown.c */ + +void machine_restart(unsigned int delay_millisecs) +{ + BUG(); +} + +void machine_halt(void) +{ + BUG(); +} + +/* vm_event.c */ + +void vm_event_fill_regs(vm_event_request_t *req) +{ + BUG(); +} + +void vm_event_set_registers(struct vcpu *v, vm_event_response_t *rsp) +{ + BUG(); +} + +void vm_event_monitor_next_interrupt(struct vcpu *v) +{ + /* Not supported on PPC. 
*/ +} + +/* domctl.c */ +void arch_get_domain_info(const struct domain *d, + struct xen_domctl_getdomaininfo *info) +{ + BUG(); +} + +/* monitor.c */ + +int arch_monitor_domctl_event(struct domain *d, + struct xen_domctl_monitor_op *mop) +{ + BUG(); +} + +/* smp.c */ + +void arch_flush_tlb_mask(const cpumask_t *mask) +{ + BUG(); +} + +void smp_send_event_check_mask(const cpumask_t *mask) +{ + BUG(); +} + +void smp_send_call_function_mask(const cpumask_t *mask) +{ + BUG(); +} + +/* irq.c */ + +struct pirq *alloc_pirq_struct(struct domain *d) +{ + BUG(); +} + +int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share) +{ + BUG(); +} + +void pirq_guest_unbind(struct domain *d, struct pirq *pirq) +{ + BUG(); +} + +void pirq_set_affinity(struct domain *d, int pirq, const cpumask_t *mask) +{ + BUG(); +} + +static void ack_none(struct irq_desc *irq) +{ + BUG(); +} + +static void end_none(struct irq_desc *irq) +{ + BUG(); +} + +hw_irq_controller no_irq_type = { + .typename = "none", + .startup = irq_startup_none, + .shutdown = irq_shutdown_none, + .enable = irq_enable_none, + .disable = irq_disable_none, + .ack = ack_none, + .end = end_none +}; + +int arch_init_one_irq_desc(struct irq_desc *desc) +{ + BUG(); +} + +void smp_send_state_dump(unsigned int cpu) +{ + BUG(); +} + +/* domain.c */ + +DEFINE_PER_CPU(struct vcpu *, curr_vcpu); +unsigned long __per_cpu_offset[NR_CPUS]; + +void context_switch(struct vcpu *prev, struct vcpu *next) +{ + BUG(); +} + +void continue_running(struct vcpu *same) +{ + BUG(); +} + +void sync_local_execstate(void) +{ + BUG(); +} + +void sync_vcpu_execstate(struct vcpu *v) +{ + BUG(); +} + +void startup_cpu_idle_loop(void) +{ + BUG(); +} + +void free_domain_struct(struct domain *d) +{ + BUG(); +} + +void dump_pageframe_info(struct domain *d) +{ + BUG(); +} + +void free_vcpu_struct(struct vcpu *v) +{ + BUG(); +} + +int arch_vcpu_create(struct vcpu *v) +{ + BUG(); +} + +void arch_vcpu_destroy(struct vcpu *v) +{ + BUG(); +} + +void vcpu_switch_to_aarch64_mode(struct vcpu *v) +{ + BUG(); +} + +int arch_sanitise_domain_config(struct xen_domctl_createdomain *config) +{ + BUG(); +} + +int arch_domain_create(struct domain *d, + struct xen_domctl_createdomain *config, + unsigned int flags) +{ + BUG(); +} + +int arch_domain_teardown(struct domain *d) +{ + BUG(); +} + +void arch_domain_destroy(struct domain *d) +{ + BUG(); +} + +void arch_domain_shutdown(struct domain *d) +{ + BUG(); +} + +void arch_domain_pause(struct domain *d) +{ + BUG(); +} + +void arch_domain_unpause(struct domain *d) +{ + BUG(); +} + +int arch_domain_soft_reset(struct domain *d) +{ + BUG(); +} + +void arch_domain_creation_finished(struct domain *d) +{ + BUG(); +} + +int arch_set_info_guest(struct vcpu *v, vcpu_guest_context_u c) +{ + BUG(); +} + +int arch_initialise_vcpu(struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg) +{ + BUG(); +} + +int arch_vcpu_reset(struct vcpu *v) +{ + BUG(); +} + +int domain_relinquish_resources(struct domain *d) +{ + BUG(); +} + +void arch_dump_domain_info(struct domain *d) +{ + BUG(); +} + +void arch_dump_vcpu_info(struct vcpu *v) +{ + BUG(); +} + +void vcpu_mark_events_pending(struct vcpu *v) +{ + BUG(); +} + +void vcpu_update_evtchn_irq(struct vcpu *v) +{ + BUG(); +} + +void vcpu_block_unless_event_pending(struct vcpu *v) +{ + BUG(); +} + +void vcpu_kick(struct vcpu *v) +{ + BUG(); +} + +struct domain *alloc_domain_struct(void) +{ + BUG(); +} + +struct vcpu *alloc_vcpu_struct(const struct domain *d) +{ + BUG(); +} + +unsigned long 
+hypercall_create_continuation(unsigned int op, const char *format, ...)
+{
+    BUG();
+}
+
+int __init parse_arch_dom0_param(const char *s, const char *e)
+{
+    BUG();
+}

From patchwork Wed Aug 2 23:03:05 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Shawn Anastasio
X-Patchwork-Id: 13338913
From: Shawn Anastasio
To: xen-devel@lists.xenproject.org
Cc: Timothy Pearson , Andrew Cooper , Jan Beulich , Shawn Anastasio
Subject: [PATCH 9/9] xen/ppc: Enable full Xen build
Date: Wed, 2 Aug 2023 18:03:05 -0500
Message-Id: <44fe62a196e2621ac88d0f4b9e7ee97bf1874d59.1691016993.git.sanastasio@raptorengineering.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To:
References:
MIME-Version: 1.0

Bring ppc's Makefile and arch.mk in line with arm and x86 to disable the
build overrides and enable the full Xen build.

Signed-off-by: Shawn Anastasio
Reviewed-by: Jan Beulich
---
 xen/arch/ppc/Makefile | 16 +++++++++++++++-
 xen/arch/ppc/arch.mk  |  3 ---
 2 files changed, 15 insertions(+), 4 deletions(-)

diff --git a/xen/arch/ppc/Makefile b/xen/arch/ppc/Makefile
index 969910b3b6..7b68b5ace2 100644
--- a/xen/arch/ppc/Makefile
+++ b/xen/arch/ppc/Makefile
@@ -12,10 +12,24 @@ $(TARGET): $(TARGET)-syms
         cp -f $< $@
 
 $(TARGET)-syms: $(objtree)/prelink.o $(obj)/xen.lds
-        $(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) -o $@
+        $(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< \
+            $(objtree)/common/symbols-dummy.o -o $(dot-target).0
+        $(NM) -pa --format=sysv $(dot-target).0 \
+            | $(objtree)/tools/symbols $(all_symbols) --sysv --sort \
+            > $(dot-target).0.S
+        $(MAKE) $(build)=$(@D) $(dot-target).0.o
+        $(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< \
+            $(dot-target).0.o -o $(dot-target).1
+        $(NM) -pa --format=sysv $(dot-target).1 \
+            | $(objtree)/tools/symbols $(all_symbols) --sysv --sort \
+            > $(dot-target).1.S
+        $(MAKE) $(build)=$(@D) $(dot-target).1.o
+        $(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) \
+            $(dot-target).1.o -o $@
         $(NM) -pa --format=sysv $@ \
             | $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \
             > $@.map
+        rm -f $(@D)/.$(@F).[0-9]*
 
 $(obj)/xen.lds: $(src)/xen.lds.S FORCE
         $(call if_changed_dep,cpp_lds_S)
diff --git a/xen/arch/ppc/arch.mk b/xen/arch/ppc/arch.mk
index d05cbf1df5..917ad0e6a8 100644
--- a/xen/arch/ppc/arch.mk
+++ b/xen/arch/ppc/arch.mk
@@ -7,6 +7,3 @@ CFLAGS += -m64 -mlittle-endian -mcpu=$(ppc-march-y)
 CFLAGS += -mstrict-align -mcmodel=medium -mabi=elfv2 -fPIC -mno-altivec -mno-vsx -msoft-float
 
 LDFLAGS += -m elf64lppc
-
-# TODO: Drop override when more of the build is working
-override ALL_OBJS-y = arch/$(SRCARCH)/built_in.o common/libfdt/built_in.o lib/built_in.o
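
For context on the $(TARGET)-syms rule above: it follows the same multi-pass symbol-table scheme used by arm and x86, which the commit message says ppc is being brought in line with. Xen is first linked against the dummy table in common/symbols-dummy.o so that the image layout is fixed, the real symbol table is generated from that image by tools/symbols, and the result is relinked; a second iteration is needed because embedding the table shifts addresses, and only then does the final link (which also embeds the build id) take place. A rough shell sketch of the idea, with the Makefile variables spelled out, paths abbreviated, and the exact flags simplified (this is not the literal build command sequence):

    # pass 0: link against a dummy symbol table so section sizes are final
    ld -T xen.lds -N prelink.o symbols-dummy.o -o .xen-syms.0
    nm -pa --format=sysv .xen-syms.0 | tools/symbols --sysv --sort > .xen-syms.0.S
    cc -c .xen-syms.0.S -o .xen-syms.0.o
    # pass 1: relink with the real table; embedding it shifts addresses,
    # so one more pass is needed before the table is self-consistent
    ld -T xen.lds -N prelink.o .xen-syms.0.o -o .xen-syms.1
    nm -pa --format=sysv .xen-syms.1 | tools/symbols --sysv --sort > .xen-syms.1.S
    cc -c .xen-syms.1.S -o .xen-syms.1.o
    # final link with the converged symbol table
    ld -T xen.lds -N prelink.o .xen-syms.1.o -o xen-syms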