From patchwork Fri Jun 9 17:41:26 2017
X-Patchwork-Submitter: Andre Przywara
X-Patchwork-Id: 9779009
From: Andre Przywara
To: Julien Grall, Stefano Stabellini
Date: Fri, 9 Jun 2017 18:41:26 +0100
Message-Id: <20170609174141.5068-20-andre.przywara@arm.com>
X-Mailer: git-send-email 2.9.0
In-Reply-To: <20170609174141.5068-1-andre.przywara@arm.com>
References: <20170609174141.5068-1-andre.przywara@arm.com>
Cc: xen-devel@lists.xenproject.org, Vijaya Kumar K, Vijay Kilari, Shanker Donthineni, Manish Jaggi
Subject: [Xen-devel] [PATCH v11 19/34] ARM: vITS: add command handling stub and MMIO emulation

Emulate the memory-mapped ITS registers and provide a stub to introduce
the ITS command handling framework (but without actually emulating any
commands at this time).

This also fixes a misnomer in our virtual ITS structure: the spec
confusingly uses ID_bits in GITS_TYPER to denote the number of event IDs
(in contrast to GICD_TYPER, where it means the number of LPIs), so the
structure member is renamed from intid_bits to evid_bits.

Signed-off-by: Andre Przywara
Acked-by: Julien Grall
---
 xen/arch/arm/vgic-v3-its.c       | 588 ++++++++++++++++++++++++++++++++++++++-
 xen/include/asm-arm/gic_v3_its.h |   3 +
 2 files changed, 590 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 065ffe2..5481791 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -19,6 +19,16 @@
  * along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+/*
+ * Locking order:
+ *
+ * its->vcmd_lock          (protects the command queue)
+ * its->its_lock           (protects the translation tables)
+ * d->its_devices_lock     (protects the device RB tree)
+ * v->vgic.lock            (protects the struct pending_irq)
+ * d->pend_lpi_tree_lock   (protects the radix tree)
+ */
+
 #include
 #include
 #include
@@ -43,7 +53,7 @@ struct virt_its {
     struct domain *d;
     unsigned int devid_bits;
-    unsigned int intid_bits;
+    unsigned int evid_bits;
     spinlock_t vcmd_lock;       /* Protects the virtual command buffer, which */
     uint64_t cwriter;           /* consists of CWRITER and CREADR and those   */
     uint64_t creadr;            /* shadow variables cwriter and creadr.       */
@@ -53,6 +63,7 @@ struct virt_its {
     uint64_t baser_dev, baser_coll;     /* BASER0 and BASER1 for the guest */
     unsigned int max_collections;
     unsigned int max_devices;
+    /* Changing "enabled" requires holding *both* the vcmd_lock and its_lock. */
     bool enabled;
 };
 
@@ -67,6 +78,581 @@ struct vits_itte
     uint16_t pad;
 };
 
+/*
+ * Our collection table encoding:
+ * Each entry just contains the VCPU ID of the respective vCPU.
+ */
+typedef uint16_t coll_table_entry_t;
+
+/*
+ * Our device table encodings:
+ * Contains the guest physical address of the Interrupt Translation Table in
+ * bits [51:8], and the size of it is encoded as the number of bits minus one
+ * in the lowest 5 bits of the word.
+ */
+typedef uint64_t dev_table_entry_t;
+#define DEV_TABLE_ITT_ADDR(x)   ((x) & GENMASK(51, 8))
+#define DEV_TABLE_ITT_SIZE(x)   (BIT(((x) & GENMASK(4, 0)) + 1))
+#define DEV_TABLE_ENTRY(addr, bits)                     \
+        (((addr) & GENMASK(51, 8)) | (((bits) - 1) & GENMASK(4, 0)))
+
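To make the encoding above concrete, here is a minimal sketch (an illustration only,
not part of the diff) of how an entry round-trips through these macros, assuming
Xen's GENMASK()/BIT() helpers and a made-up ITT address covering 8 event ID bits:

    dev_table_entry_t dte = DEV_TABLE_ENTRY(0x8f000000ULL, 8);

    /* Bits [51:8] hand back the ITT base address unchanged. */
    ASSERT(DEV_TABLE_ITT_ADDR(dte) == 0x8f000000ULL);

    /* The low 5 bits hold (bits - 1), so the size decodes to BIT(8) = 256. */
    ASSERT(DEV_TABLE_ITT_SIZE(dte) == 256);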
+#define GITS_BASER_RO_MASK      (GITS_BASER_TYPE_MASK | \
+                                 (0x1fL << GITS_BASER_ENTRY_SIZE_SHIFT))
+
+/**************************************
+ * Functions that handle ITS commands *
+ **************************************/
+
+static uint64_t its_cmd_mask_field(uint64_t *its_cmd, unsigned int word,
+                                   unsigned int shift, unsigned int size)
+{
+    return (its_cmd[word] >> shift) & GENMASK(size - 1, 0);
+}
+
+#define its_cmd_get_command(cmd)        its_cmd_mask_field(cmd, 0, 0, 8)
+#define its_cmd_get_deviceid(cmd)       its_cmd_mask_field(cmd, 0, 32, 32)
+#define its_cmd_get_size(cmd)           its_cmd_mask_field(cmd, 1, 0, 5)
+#define its_cmd_get_id(cmd)             its_cmd_mask_field(cmd, 1, 0, 32)
+#define its_cmd_get_physical_id(cmd)    its_cmd_mask_field(cmd, 1, 32, 32)
+#define its_cmd_get_collection(cmd)     its_cmd_mask_field(cmd, 2, 0, 16)
+#define its_cmd_get_target_addr(cmd)    its_cmd_mask_field(cmd, 2, 16, 32)
+#define its_cmd_get_validbit(cmd)       its_cmd_mask_field(cmd, 2, 63, 1)
+#define its_cmd_get_ittaddr(cmd)        (its_cmd_mask_field(cmd, 2, 8, 44) << 8)
+
+#define ITS_CMD_BUFFER_SIZE(baser)      ((((baser) & 0xff) + 1) << 12)
+#define ITS_CMD_OFFSET(reg)             ((reg) & GENMASK(19, 5))
+
+static void dump_its_command(uint64_t *command)
+{
+    gdprintk(XENLOG_WARNING, " cmd 0x%02lx: %016lx %016lx %016lx %016lx\n",
+             its_cmd_get_command(command),
+             command[0], command[1], command[2], command[3]);
+}
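As an illustration of the field layout these accessors encode (a sketch only, not
part of the patch; the GITS_CMD_MAPTI opcode constant is assumed to come from
gic_v3_its.h and all IDs are made up):

    uint64_t cmd[4] = {
        /* DW0: DeviceID in bits [63:32], the opcode in bits [7:0]. */
        (0x12ULL << 32) | GITS_CMD_MAPTI,
        /* DW1: physical LPI number in bits [63:32], EventID in bits [31:0]. */
        (8192ULL << 32) | 0x5,
        /* DW2: collection ID in bits [15:0]. */
        0x3,
        0
    };

    ASSERT(its_cmd_get_command(cmd)     == GITS_CMD_MAPTI);
    ASSERT(its_cmd_get_deviceid(cmd)    == 0x12);
    ASSERT(its_cmd_get_id(cmd)          == 0x5);    /* the EventID */
    ASSERT(its_cmd_get_physical_id(cmd) == 8192);   /* the host LPI */
    ASSERT(its_cmd_get_collection(cmd)  == 0x3);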
+
+/*
+ * Must be called with the vcmd_lock held.
+ * TODO: Investigate whether we can be smarter here and don't need to hold
+ * the lock all of the time.
+ */
+static int vgic_its_handle_cmds(struct domain *d, struct virt_its *its)
+{
+    paddr_t addr = its->cbaser & GENMASK(51, 12);
+    uint64_t command[4];
+
+    ASSERT(spin_is_locked(&its->vcmd_lock));
+
+    if ( its->cwriter >= ITS_CMD_BUFFER_SIZE(its->cbaser) )
+        return -1;
+
+    while ( its->creadr != its->cwriter )
+    {
+        int ret;
+
+        ret = vgic_access_guest_memory(d, addr + its->creadr,
+                                       command, sizeof(command), false);
+        if ( ret )
+            return ret;
+
+        switch ( its_cmd_get_command(command) )
+        {
+        case GITS_CMD_SYNC:
+            /* We handle ITS commands synchronously, so we ignore SYNC. */
+            break;
+        default:
+            gdprintk(XENLOG_WARNING, "vGITS: unhandled ITS command\n");
+            dump_its_command(command);
+            break;
+        }
+
+        write_u64_atomic(&its->creadr, (its->creadr + ITS_CMD_SIZE) %
+                         ITS_CMD_BUFFER_SIZE(its->cbaser));
+
+        if ( ret )
+        {
+            gdprintk(XENLOG_WARNING,
+                     "vGITS: ITS command error %d while handling command\n",
+                     ret);
+            dump_its_command(command);
+        }
+    }
+
+    return 0;
+}
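The ring-buffer arithmetic above treats GITS_CBASER bits [7:0] as the number of 4K
pages minus one and steps CREADR in ITS_CMD_SIZE increments (32 bytes per command,
per the GICv3 spec). A minimal sketch of the wrap-around, not part of the patch:

    uint64_t cbaser = 1;                            /* Size field = 1 -> two 4K pages */
    uint64_t bufsz  = ITS_CMD_BUFFER_SIZE(cbaser);  /* (1 + 1) << 12 = 8192 bytes     */
    uint64_t creadr = bufsz - ITS_CMD_SIZE;         /* the very last command slot     */

    creadr = (creadr + ITS_CMD_SIZE) % bufsz;       /* consuming it wraps back to 0   */
    ASSERT(creadr == 0);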
+
+/*****************************
+ * ITS registers read access *
+ *****************************/
+
+/* Identifying as an ARM IP, using "X" as the product ID. */
+#define GITS_IIDR_VALUE         0x5800034c
+
+static int vgic_v3_its_mmio_read(struct vcpu *v, mmio_info_t *info,
+                                 register_t *r, void *priv)
+{
+    struct virt_its *its = priv;
+    uint64_t reg;
+
+    switch ( info->gpa & 0xffff )
+    {
+    case VREG32(GITS_CTLR):
+    {
+        /*
+         * We try to avoid waiting for the command queue lock and report
+         * non-quiescent if that lock is already taken.
+         */
+        bool have_cmd_lock;
+
+        if ( info->dabt.size != DABT_WORD ) goto bad_width;
+
+        have_cmd_lock = spin_trylock(&its->vcmd_lock);
+        reg = its->enabled ? GITS_CTLR_ENABLE : 0;
+
+        if ( have_cmd_lock && its->cwriter == its->creadr )
+            reg |= GITS_CTLR_QUIESCENT;
+
+        if ( have_cmd_lock )
+            spin_unlock(&its->vcmd_lock);
+
+        *r = vgic_reg32_extract(reg, info);
+        break;
+    }
+
+    case VREG32(GITS_IIDR):
+        if ( info->dabt.size != DABT_WORD ) goto bad_width;
+        *r = vgic_reg32_extract(GITS_IIDR_VALUE, info);
+        break;
+
+    case VREG64(GITS_TYPER):
+        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
+
+        reg = GITS_TYPER_PHYSICAL;
+        reg |= (sizeof(struct vits_itte) - 1) << GITS_TYPER_ITT_SIZE_SHIFT;
+        reg |= (its->evid_bits - 1) << GITS_TYPER_IDBITS_SHIFT;
+        reg |= (its->devid_bits - 1) << GITS_TYPER_DEVIDS_SHIFT;
+        *r = vgic_reg64_extract(reg, info);
+        break;
+
+    case VRANGE32(0x0018, 0x001C):
+        goto read_reserved;
+    case VRANGE32(0x0020, 0x003C):
+        goto read_impl_defined;
+    case VRANGE32(0x0040, 0x007C):
+        goto read_reserved;
+
+    case VREG64(GITS_CBASER):
+        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
+        spin_lock(&its->its_lock);
+        *r = vgic_reg64_extract(its->cbaser, info);
+        spin_unlock(&its->its_lock);
+        break;
+
+    case VREG64(GITS_CWRITER):
+        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
+
+        /* CWRITER is only written by the guest, so no extra locking here. */
+        reg = its->cwriter;
+        *r = vgic_reg64_extract(reg, info);
+        break;
+
+    case VREG64(GITS_CREADR):
+        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
+
+        /*
+         * Lockless access, to avoid waiting for the whole command queue to be
+         * finished completely. Xen updates its->creadr atomically after each
+         * command has been handled, this allows other VCPUs to monitor the
+         * progress.
+         */
+        reg = read_u64_atomic(&its->creadr);
+        *r = vgic_reg64_extract(reg, info);
+        break;
+
+    case VRANGE64(0x0098, 0x00F8):
+        goto read_reserved;
+
+    case VREG64(GITS_BASER0):           /* device table */
+        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
+        spin_lock(&its->its_lock);
+        *r = vgic_reg64_extract(its->baser_dev, info);
+        spin_unlock(&its->its_lock);
+        break;
+
+    case VREG64(GITS_BASER1):           /* collection table */
+        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
+        spin_lock(&its->its_lock);
+        *r = vgic_reg64_extract(its->baser_coll, info);
+        spin_unlock(&its->its_lock);
+        break;
+
+    case VRANGE64(GITS_BASER2, GITS_BASER7):
+        goto read_as_zero_64;
+    case VRANGE32(0x0140, 0xBFFC):
+        goto read_reserved;
+    case VRANGE32(0xC000, 0xFFCC):
+        goto read_impl_defined;
+    case VRANGE32(0xFFD0, 0xFFE4):
+        goto read_impl_defined;
+
+    case VREG32(GITS_PIDR2):
+        if ( info->dabt.size != DABT_WORD ) goto bad_width;
+        *r = vgic_reg32_extract(GIC_PIDR2_ARCH_GICv3, info);
+        break;
+
+    case VRANGE32(0xFFEC, 0xFFFC):
+        goto read_impl_defined;
+
+    default:
+        printk(XENLOG_G_ERR
+               "%pv: vGITS: unhandled read r%d offset %#04lx\n",
+               v, info->dabt.reg, (unsigned long)info->gpa & 0xffff);
+        return 0;
+    }
+
+    return 1;
+
+read_as_zero_64:
+    if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
+    *r = 0;
+
+    return 1;
+
+read_impl_defined:
+    printk(XENLOG_G_DEBUG
+           "%pv: vGITS: RAZ on implementation defined register offset %#04lx\n",
+           v, info->gpa & 0xffff);
+    *r = 0;
+    return 1;
+
+read_reserved:
+    printk(XENLOG_G_DEBUG
+           "%pv: vGITS: RAZ on reserved register offset %#04lx\n",
+           v, info->gpa & 0xffff);
+    *r = 0;
+    return 1;
+
+bad_width:
+    printk(XENLOG_G_ERR "vGITS: bad read width %d r%d offset %#04lx\n",
+           info->dabt.size, info->dabt.reg, (unsigned long)info->gpa & 0xffff);
+    domain_crash_synchronous();
+
+    return 0;
+}
+
+/******************************
+ * ITS registers write access *
+ ******************************/
+
+static unsigned int its_baser_table_size(uint64_t baser)
+{
+    unsigned int ret, page_size[4] = {SZ_4K, SZ_16K, SZ_64K, SZ_64K};
+
+    ret = page_size[(baser >> GITS_BASER_PAGE_SIZE_SHIFT) & 3];
+
+    return ret * ((baser & GITS_BASER_SIZE_MASK) + 1);
+}
+
+static unsigned int its_baser_nr_entries(uint64_t baser)
+{
+    unsigned int entry_size = GITS_BASER_ENTRY_SIZE(baser);
+
+    return its_baser_table_size(baser) / entry_size;
+}
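A worked example for the two helpers above (illustration only, not part of the
patch): a BASER value selecting 64K pages with a Size field of 1 describes a
two-page table, which with the 8-byte dev_table_entry_t entries advertised by
this emulation leaves room for 16384 device IDs.

    /* Page_Size = 0b10 (64K), Size = 1, entry size field set for 8-byte entries. */
    uint64_t baser = (2ULL << GITS_BASER_PAGE_SIZE_SHIFT) | 1;

    baser |= (uint64_t)(sizeof(dev_table_entry_t) - 1) << GITS_BASER_ENTRY_SIZE_SHIFT;

    ASSERT(its_baser_table_size(baser) == 2 * SZ_64K);      /* 131072 bytes  */
    ASSERT(its_baser_nr_entries(baser) == 2 * SZ_64K / 8);  /* 16384 entries */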
+
+/* Must be called with the ITS lock held. */
+static bool vgic_v3_verify_its_status(struct virt_its *its, bool status)
+{
+    ASSERT(spin_is_locked(&its->its_lock));
+
+    if ( !status )
+        return false;
+
+    if ( !(its->cbaser & GITS_VALID_BIT) ||
+         !(its->baser_dev & GITS_VALID_BIT) ||
+         !(its->baser_coll & GITS_VALID_BIT) )
+    {
+        printk(XENLOG_G_WARNING "d%d tried to enable ITS without having the tables configured.\n",
+               its->d->domain_id);
+        return false;
+    }
+
+    /*
+     * TODO: Protect against a guest crafting ITS tables.
+     * The spec says that "at the time of the new allocation for use by the ITS"
+     * all tables must contain zeroes. We could enforce this here by clearing
+     * all the tables, but this would be moot since at the moment the guest
+     * can change the tables at any point in time anyway. Right now there are
+     * expectations about the tables being consistent (a VCPU lock protecting
+     * an LPI), which should go away with proper per-IRQ locking.
+     * So for now we ignore this issue and rely on Dom0 not doing bad things.
+     */
+    ASSERT(is_hardware_domain(its->d));
+
+    return true;
+}
+
+static void sanitize_its_base_reg(uint64_t *reg)
+{
+    uint64_t r = *reg;
+
+    /* Avoid outer shareable. */
+    switch ( (r >> GITS_BASER_SHAREABILITY_SHIFT) & 0x03 )
+    {
+    case GIC_BASER_OuterShareable:
+        r &= ~GITS_BASER_SHAREABILITY_MASK;
+        r |= GIC_BASER_InnerShareable << GITS_BASER_SHAREABILITY_SHIFT;
+        break;
+    default:
+        break;
+    }
+
+    /* Avoid any inner non-cacheable mapping. */
+    switch ( (r >> GITS_BASER_INNER_CACHEABILITY_SHIFT) & 0x07 )
+    {
+    case GIC_BASER_CACHE_nCnB:
+    case GIC_BASER_CACHE_nC:
+        r &= ~GITS_BASER_INNER_CACHEABILITY_MASK;
+        r |= GIC_BASER_CACHE_RaWb << GITS_BASER_INNER_CACHEABILITY_SHIFT;
+        break;
+    default:
+        break;
+    }
+
+    /* Only allow non-cacheable or same-as-inner. */
+    switch ( (r >> GITS_BASER_OUTER_CACHEABILITY_SHIFT) & 0x07 )
+    {
+    case GIC_BASER_CACHE_SameAsInner:
+    case GIC_BASER_CACHE_nC:
+        break;
+    default:
+        r &= ~GITS_BASER_OUTER_CACHEABILITY_MASK;
+        r |= GIC_BASER_CACHE_nC << GITS_BASER_OUTER_CACHEABILITY_SHIFT;
+        break;
+    }
+
+    *reg = r;
+}
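For instance (a sketch only, not part of the patch, reusing the GIC_BASER_*
attribute encodings referenced above): a guest asking for an OuterShareable,
inner non-cacheable mapping gets silently downgraded.

    uint64_t reg =
        ((uint64_t)GIC_BASER_OuterShareable << GITS_BASER_SHAREABILITY_SHIFT) |
        ((uint64_t)GIC_BASER_CACHE_nC << GITS_BASER_INNER_CACHEABILITY_SHIFT);

    sanitize_its_base_reg(&reg);

    /* Shareability was forced to InnerShareable ... */
    ASSERT(((reg >> GITS_BASER_SHAREABILITY_SHIFT) & 0x03) ==
           GIC_BASER_InnerShareable);
    /* ... and the inner attribute to read-allocate, write-back cacheable. */
    ASSERT(((reg >> GITS_BASER_INNER_CACHEABILITY_SHIFT) & 0x07) ==
           GIC_BASER_CACHE_RaWb);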
+
+static int vgic_v3_its_mmio_write(struct vcpu *v, mmio_info_t *info,
+                                  register_t r, void *priv)
+{
+    struct domain *d = v->domain;
+    struct virt_its *its = priv;
+    uint64_t reg;
+    uint32_t reg32;
+
+    switch ( info->gpa & 0xffff )
+    {
+    case VREG32(GITS_CTLR):
+    {
+        uint32_t ctlr;
+
+        if ( info->dabt.size != DABT_WORD ) goto bad_width;
+
+        /*
+         * We need to take the vcmd_lock to prevent a guest from disabling
+         * the ITS while commands are still processed.
+         */
+        spin_lock(&its->vcmd_lock);
+        spin_lock(&its->its_lock);
+        ctlr = its->enabled ? GITS_CTLR_ENABLE : 0;
+        reg32 = ctlr;
+        vgic_reg32_update(&reg32, r, info);
+
+        if ( ctlr ^ reg32 )
+            its->enabled = vgic_v3_verify_its_status(its,
+                                                     reg32 & GITS_CTLR_ENABLE);
+        spin_unlock(&its->its_lock);
+        spin_unlock(&its->vcmd_lock);
+        return 1;
+    }
+
+    case VREG32(GITS_IIDR):
+        goto write_ignore_32;
+
+    case VREG32(GITS_TYPER):
+        goto write_ignore_32;
+
+    case VRANGE32(0x0018, 0x001C):
+        goto write_reserved;
+    case VRANGE32(0x0020, 0x003C):
+        goto write_impl_defined;
+    case VRANGE32(0x0040, 0x007C):
+        goto write_reserved;
+
+    case VREG64(GITS_CBASER):
+        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
+
+        spin_lock(&its->its_lock);
+        /* Changing base registers with the ITS enabled is UNPREDICTABLE. */
+        if ( its->enabled )
+        {
+            spin_unlock(&its->its_lock);
+            gdprintk(XENLOG_WARNING,
+                     "vGITS: tried to change CBASER with the ITS enabled.\n");
+            return 1;
+        }
+
+        reg = its->cbaser;
+        vgic_reg64_update(&reg, r, info);
+        sanitize_its_base_reg(&reg);
+
+        its->cbaser = reg;
+        its->creadr = 0;
+        spin_unlock(&its->its_lock);
+
+        return 1;
+
+    case VREG64(GITS_CWRITER):
+        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
+
+        spin_lock(&its->vcmd_lock);
+        reg = ITS_CMD_OFFSET(its->cwriter);
+        vgic_reg64_update(&reg, r, info);
+        its->cwriter = ITS_CMD_OFFSET(reg);
+
+        if ( its->enabled )
+            if ( vgic_its_handle_cmds(d, its) )
+                gdprintk(XENLOG_WARNING, "error handling ITS commands\n");
+
+        spin_unlock(&its->vcmd_lock);
+
+        return 1;
+
+    case VREG64(GITS_CREADR):
+        goto write_ignore_64;
+
+    case VRANGE32(0x0098, 0x00FC):
+        goto write_reserved;
+
+    case VREG64(GITS_BASER0):           /* device table */
+        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
+
+        spin_lock(&its->its_lock);
+
+        /*
+         * Changing base registers with the ITS enabled is UNPREDICTABLE,
+         * we choose to ignore it, but warn.
+         */
+        if ( its->enabled )
+        {
+            spin_unlock(&its->its_lock);
+            gdprintk(XENLOG_WARNING, "vGITS: tried to change BASER with the ITS enabled.\n");
+
+            return 1;
+        }
+
+        reg = its->baser_dev;
+        vgic_reg64_update(&reg, r, info);
+
+        /* We don't support indirect tables for now. */
+        reg &= ~(GITS_BASER_RO_MASK | GITS_BASER_INDIRECT);
+        reg |= (sizeof(dev_table_entry_t) - 1) << GITS_BASER_ENTRY_SIZE_SHIFT;
+        reg |= GITS_BASER_TYPE_DEVICE << GITS_BASER_TYPE_SHIFT;
+        sanitize_its_base_reg(&reg);
+
+        if ( reg & GITS_VALID_BIT )
+        {
+            its->max_devices = its_baser_nr_entries(reg);
+            if ( its->max_devices > BIT(its->devid_bits) )
+                its->max_devices = BIT(its->devid_bits);
+        }
+        else
+            its->max_devices = 0;
+
+        its->baser_dev = reg;
+        spin_unlock(&its->its_lock);
+        return 1;
+
+    case VREG64(GITS_BASER1):           /* collection table */
+        if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
+
+        spin_lock(&its->its_lock);
+        /*
+         * Changing base registers with the ITS enabled is UNPREDICTABLE,
+         * we choose to ignore it, but warn.
+         */
+        if ( its->enabled )
+        {
+            spin_unlock(&its->its_lock);
+            gdprintk(XENLOG_INFO, "vGITS: tried to change BASER with the ITS enabled.\n");
+            return 1;
+        }
+
+        reg = its->baser_coll;
+        vgic_reg64_update(&reg, r, info);
+        /* No indirect tables for the collection table. */
+        reg &= ~(GITS_BASER_RO_MASK | GITS_BASER_INDIRECT);
+        reg |= (sizeof(coll_table_entry_t) - 1) << GITS_BASER_ENTRY_SIZE_SHIFT;
+        reg |= GITS_BASER_TYPE_COLLECTION << GITS_BASER_TYPE_SHIFT;
+        sanitize_its_base_reg(&reg);
+
+        if ( reg & GITS_VALID_BIT )
+            its->max_collections = its_baser_nr_entries(reg);
+        else
+            its->max_collections = 0;
+        its->baser_coll = reg;
+        spin_unlock(&its->its_lock);
+        return 1;
+
+    case VRANGE64(GITS_BASER2, GITS_BASER7):
+        goto write_ignore_64;
+
+    case VRANGE32(0x0140, 0xBFFC):
+        goto write_reserved;
+    case VRANGE32(0xC000, 0xFFCC):
+        goto write_impl_defined;
+    case VRANGE32(0xFFD0, 0xFFE4):      /* IMPDEF identification registers */
+        goto write_impl_defined;
+
+    case VREG32(GITS_PIDR2):
+        goto write_ignore_32;
+
+    case VRANGE32(0xFFEC, 0xFFFC):      /* IMPDEF identification registers */
+        goto write_impl_defined;
+
+    default:
+        printk(XENLOG_G_ERR
+               "%pv: vGITS: unhandled write r%d offset %#04lx\n",
+               v, info->dabt.reg, (unsigned long)info->gpa & 0xffff);
+        return 0;
+    }
+
+    return 1;
+
+write_ignore_64:
+    if ( !vgic_reg64_check_access(info->dabt) ) goto bad_width;
+    return 1;
+
+write_ignore_32:
+    if ( info->dabt.size != DABT_WORD ) goto bad_width;
+    return 1;
+
+write_impl_defined:
+    printk(XENLOG_G_DEBUG
+           "%pv: vGITS: WI on implementation defined register offset %#04lx\n",
+           v, info->gpa & 0xffff);
+    return 1;
+
+write_reserved:
+    printk(XENLOG_G_DEBUG
+           "%pv: vGITS: WI on reserved register offset %#04lx\n",
+           v, info->gpa & 0xffff);
+    return 1;
+
+bad_width:
+    printk(XENLOG_G_ERR "vGITS: bad write width %d r%d offset %#08lx\n",
+           info->dabt.size, info->dabt.reg, (unsigned long)info->gpa & 0xffff);
+
+    domain_crash_synchronous();
+
+    return 0;
+}
+
+static const struct mmio_handler_ops vgic_its_mmio_handler = {
+    .read  = vgic_v3_its_mmio_read,
+    .write = vgic_v3_its_mmio_write,
+};
+
 int vgic_v3_its_init_domain(struct domain *d)
 {
     spin_lock_init(&d->arch.vgic.its_devices_lock);
diff --git a/xen/include/asm-arm/gic_v3_its.h b/xen/include/asm-arm/gic_v3_its.h
index a659184..5db7d04 100644
--- a/xen/include/asm-arm/gic_v3_its.h
+++ b/xen/include/asm-arm/gic_v3_its.h
@@ -35,6 +35,7 @@
 #define GITS_BASER5                     0x128
 #define GITS_BASER6                     0x130
 #define GITS_BASER7                     0x138
+#define GITS_PIDR2                      GICR_PIDR2
 
 /* Register bits */
 #define GITS_VALID_BIT                  BIT(63)
@@ -57,6 +58,7 @@
 #define GITS_TYPER_ITT_SIZE_MASK        (0xfUL << GITS_TYPER_ITT_SIZE_SHIFT)
 #define GITS_TYPER_ITT_SIZE(r)          ((((r) & GITS_TYPER_ITT_SIZE_MASK) >> \
                                                  GITS_TYPER_ITT_SIZE_SHIFT) + 1)
+#define GITS_TYPER_PHYSICAL             (1U << 0)
 
 #define GITS_BASER_INDIRECT             BIT(62)
 #define GITS_BASER_INNER_CACHEABILITY_SHIFT        59
@@ -76,6 +78,7 @@
                         (((reg >> GITS_BASER_ENTRY_SIZE_SHIFT) & 0x1f) + 1)
 #define GITS_BASER_SHAREABILITY_SHIFT   10
 #define GITS_BASER_PAGE_SIZE_SHIFT      8
+#define GITS_BASER_SIZE_MASK            0xff
 #define GITS_BASER_SHAREABILITY_MASK   (0x3ULL << GITS_BASER_SHAREABILITY_SHIFT)
 #define GITS_BASER_OUTER_CACHEABILITY_MASK   (0x7ULL << GITS_BASER_OUTER_CACHEABILITY_SHIFT)
 #define GITS_BASER_INNER_CACHEABILITY_MASK   (0x7ULL << GITS_BASER_INNER_CACHEABILITY_SHIFT)