From patchwork Fri Jun 2 10:22:49 2017
X-Patchwork-Id: 9762511
Subject: [Qemu-devel] Target AVR (patch)
Date: Fri, 02 Jun 2017 06:22:49 -0400
From: Anichang via Qemu-devel
Reply-To: Anichang
To: qemu-devel@nongnu.org
Attached is the patch, against git://git.qemu.org/qemu.git at commit
43771d5d92312504305c19abe29ec5bfabd55f01.

-------- Original Message --------
Date: Thu, 01 Jun 2017 19:58:28 -0400
From: Anichang
To: qemu-devel@nongnu.org
Subject: [Qemu-devel] Target AVR

Hi all,

I just resurrected the target-avr patchset from Michael Rolnik. Here are the
details:

commit f2bca179dbfc3f378b131ed619d07db946bae598
Merge: 43771d5 ed250c0
Author: Ani Chang
Date:   Fri Jun 2 01:17:34 2017 +0200

    target/avr: resurrected (see the qemu-devel mailing list, Richard Henderson
    on Sep 20, 2016 at 8:35pm) and fixed (it builds).

    Details:
    - merge remote git://github.com/rth7680/qemu.git tags/pull-avr-20160920
      into master
    - fixed include/sysemu/arch_init.h (i.e.: bump QEMU_ARCH_AVR from 1<<17
      to 1<<18)
    - fixed target/avr/cpu.c (i.e.: remove one function arg)
    - fixed target/avr/machine.c (i.e.: fix a bunch of getter/setter signatures)

    Running the sample board outputs:

    $ ./qemu-system-avr
    Unexpected error in object_property_add() at qom/object.c:940:
    qemu-system-avr: attempt to add duplicate property 'memory' to object (type 'avr5-avr')
    Aborted (core dumped)
    $

    Signed-off-by: Ani Chang

commit 43771d5d92312504305c19abe29ec5bfabd55f01
Merge: c077a99 c064477
Author: Peter Maydell
Date:   Thu Jun 1 16:39:16 2017 +0100

    Merge remote-tracking branch 'remotes/armbru/tags/pull-qapi-2017-05-31'
    into staging
    ...

---

Below is the output of 'make check':

...
  GTESTER check-qtest-avr
Unexpected error in object_property_add() at qom/object.c:940:
attempt to add duplicate property 'memory' to object (type 'xmega7-avr')
Broken pipe
GTester: last random seed: R02Sb7127f88337efa767b5e96a88046ebc1
Unexpected error in object_property_add() at qom/object.c:940:
qemu-system-avr: attempt to add duplicate property 'memory' to object (type 'avr5-avr')
Broken pipe
GTester: last random seed: R02S94aa640298a8d5a71d11208b95363edd
Unexpected error in object_property_add() at qom/object.c:940:
qemu-system-avr: attempt to add duplicate property 'memory' to object (type 'avr5-avr')
Broken pipe
GTester: last random seed: R02S76c62d67e22fbb237a3431358e65d6c2
/qemu-test/tests/Makefile.include:824: recipe for target 'check-qtest-avr' failed
make: *** [check-qtest-avr] Error 1
$

---

I have no idea what to do from here. How do I solve the "attempt to add
duplicate property 'memory' to object" error?

Regards
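A note on the error itself: "attempt to add duplicate property" is raised by
object_property_add() in qom/object.c when a property with that name is
already registered on the object, and the &error_abort path turns it into the
abort seen above. In other words, two different init paths are both
registering a 'memory' property on the AVR CPU object. One way to find the two
call sites (a debugging sketch only, not part of the attached patch, assuming
the QEMU 2.9-era QOM API) is to log the second registration just before the
existing duplicate check in object_property_add(), then grab both backtraces
under gdb:

    /* Temporary debugging aid (sketch, not part of the patch): place at the
     * top of object_property_add() in qom/object.c.  When a property is about
     * to be registered a second time, this prints a marker; a gdb breakpoint
     * on the fprintf then yields the offending call stack. */
    if (object_property_find(obj, name, NULL) != NULL) {
        fprintf(stderr, "qom: duplicate registration of property '%s' on '%s'\n",
                name, object_get_typename(obj));
    }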
commit f2bca179dbfc3f378b131ed619d07db946bae598
Merge: 43771d5 ed250c0
Author: Ani Chang
Date:   Fri Jun 2 01:17:34 2017 +0200

    target/avr: resurrected (see the qemu-devel mailing list, Richard Henderson
    on Sep 20, 2016 at 8:35pm) and fixed (it builds).

    Details:
    - merge remote git://github.com/rth7680/qemu.git tags/pull-avr-20160920
      into master
    - fixed include/sysemu/arch_init.h (i.e.: bump QEMU_ARCH_AVR from 1<<17
      to 1<<18)
    - fixed target/avr/cpu.c (i.e.: remove one function arg)
    - fixed target/avr/machine.c (i.e.: fix a bunch of getter/setter signatures)

    Running the sample board outputs:

    $ ./qemu-system-avr
    Unexpected error in object_property_add() at qom/object.c:940:
    qemu-system-avr: attempt to add duplicate property 'memory' to object (type 'avr5-avr')
    Aborted (core dumped)
    $

    Signed-off-by: Ani Chang

diff --cc configure
index 0586ec9,737a22e..74ec6cb
--- a/configure
+++ b/configure
@@@ -6040,13 -5694,12 +6040,15 @@@ case "$target_name" i
  aarch64)
    TARGET_BASE_ARCH=arm
    bflt="yes"
+   mttcg="yes"
    gdb_xml_files="aarch64-core.xml aarch64-fpu.xml arm-core.xml arm-vfp.xml arm-vfp3.xml arm-neon.xml"
  ;;
+ avr)
+ ;;
  cris)
  ;;
+ hppa)
+ ;;
  lm32)
  ;;
  m68k)
diff --cc include/sysemu/arch_init.h
index 8751c46,7c9edbf..1bf565f
--- a/include/sysemu/arch_init.h
+++ b/include/sysemu/arch_init.h
@@@ -23,7 -23,7 +23,8 @@@ enum
      QEMU_ARCH_UNICORE32 = (1 << 14),
      QEMU_ARCH_MOXIE = (1 << 15),
      QEMU_ARCH_TRICORE = (1 << 16),
-     QEMU_ARCH_AVR = (1 << 17),
+     QEMU_ARCH_NIOS2 = (1 << 17),
++    QEMU_ARCH_AVR = (1 << 18),
  };

  extern const uint32_t arch_type;
diff --cc target/avr/Makefile.objs
index 0000000,0000000..48233ef
new file mode 100644
--- /dev/null
+++ b/target/avr/Makefile.objs
@@@ -1,0 -1,0 +1,23 @@@
++#
++# QEMU AVR CPU
++#
++# Copyright (c) 2016 Michael Rolnik
++#
++# This library is free software; you can redistribute it and/or
++# modify it under the terms of the GNU Lesser General Public
++# License as published by the Free Software Foundation; either
++# version 2.1 of the License, or (at your option) any later version.
++#
++# This library is distributed in the hope that it will be useful,
++# but WITHOUT ANY WARRANTY; without even the implied warranty of
++# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
++# Lesser General Public License for more details.
++#
++# You should have received a copy of the GNU Lesser General Public
++# License along with this library; if not, see
++#
++#
++
++obj-y += translate.o cpu.o helper.o
++obj-y += gdbstub.o
++obj-$(CONFIG_SOFTMMU) += machine.o
diff --cc target/avr/cpu-qom.h
index 0000000,0000000..b5cd5a7
new file mode 100644
--- /dev/null
+++ b/target/avr/cpu-qom.h
@@@ -1,0 -1,0 +1,84 @@@
++/*
++ * QEMU AVR CPU
++ *
++ * Copyright (c) 2016 Michael Rolnik
++ *
++ * This library is free software; you can redistribute it and/or
++ * modify it under the terms of the GNU Lesser General Public
++ * License as published by the Free Software Foundation; either
++ * version 2.1 of the License, or (at your option) any later version.
++ *
++ * This library is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
++ * Lesser General Public License for more details.
++ * ++ * You should have received a copy of the GNU Lesser General Public ++ * License along with this library; if not, see ++ * ++ */ ++ ++#ifndef QEMU_AVR_CPU_QOM_H ++#define QEMU_AVR_CPU_QOM_H ++ ++#include "qom/cpu.h" ++ ++#define TYPE_AVR_CPU "avr" ++ ++#define AVR_CPU_CLASS(klass) \ ++ OBJECT_CLASS_CHECK(AVRCPUClass, (klass), TYPE_AVR_CPU) ++#define AVR_CPU(obj) \ ++ OBJECT_CHECK(AVRCPU, (obj), TYPE_AVR_CPU) ++#define AVR_CPU_GET_CLASS(obj) \ ++ OBJECT_GET_CLASS(AVRCPUClass, (obj), TYPE_AVR_CPU) ++ ++/** ++ * AVRCPUClass: ++ * @parent_realize: The parent class' realize handler. ++ * @parent_reset: The parent class' reset handler. ++ * @vr: Version Register value. ++ * ++ * A AVR CPU model. ++ */ ++typedef struct AVRCPUClass { ++ CPUClass parent_class; ++ ++ DeviceRealize parent_realize; ++ void (*parent_reset)(CPUState *cpu); ++} AVRCPUClass; ++ ++/** ++ * AVRCPU: ++ * @env: #CPUAVRState ++ * ++ * A AVR CPU. ++ */ ++typedef struct AVRCPU { ++ /*< private >*/ ++ CPUState parent_obj; ++ /*< public >*/ ++ ++ CPUAVRState env; ++} AVRCPU; ++ ++static inline AVRCPU *avr_env_get_cpu(CPUAVRState *env) ++{ ++ return container_of(env, AVRCPU, env); ++} ++ ++#define ENV_GET_CPU(e) CPU(avr_env_get_cpu(e)) ++#define ENV_OFFSET offsetof(AVRCPU, env) ++ ++#ifndef CONFIG_USER_ONLY ++extern const struct VMStateDescription vms_avr_cpu; ++#endif ++ ++void avr_cpu_do_interrupt(CPUState *cpu); ++bool avr_cpu_exec_interrupt(CPUState *cpu, int int_req); ++void avr_cpu_dump_state(CPUState *cs, FILE *f, ++ fprintf_function cpu_fprintf, int flags); ++hwaddr avr_cpu_get_phys_page_debug(CPUState *cpu, vaddr addr); ++int avr_cpu_gdb_read_register(CPUState *cpu, uint8_t *buf, int reg); ++int avr_cpu_gdb_write_register(CPUState *cpu, uint8_t *buf, int reg); ++ ++#endif diff --cc target/avr/cpu.c index 0000000,0000000..d97d43a new file mode 100644 --- /dev/null +++ b/target/avr/cpu.c @@@ -1,0 -1,0 +1,595 @@@ ++/* ++ * QEMU AVR CPU ++ * ++ * Copyright (c) 2016 Michael Rolnik ++ * ++ * This library is free software; you can redistribute it and/or ++ * modify it under the terms of the GNU Lesser General Public ++ * License as published by the Free Software Foundation; either ++ * version 2.1 of the License, or (at your option) any later version. ++ * ++ * This library is distributed in the hope that it will be useful, ++ * but WITHOUT ANY WARRANTY; without even the implied warranty of ++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU ++ * Lesser General Public License for more details. 
++ * ++ * You should have received a copy of the GNU Lesser General Public ++ * License along with this library; if not, see ++ * ++ */ ++ ++#include "qemu/osdep.h" ++#include "qapi/error.h" ++#include "cpu.h" ++#include "qemu-common.h" ++#include "migration/vmstate.h" ++ ++static void avr_cpu_set_pc(CPUState *cs, vaddr value) ++{ ++ AVRCPU *cpu = AVR_CPU(cs); ++ ++ cpu->env.pc_w = value / 2; /* internally PC points to words */ ++} ++ ++static bool avr_cpu_has_work(CPUState *cs) ++{ ++ AVRCPU *cpu = AVR_CPU(cs); ++ CPUAVRState *env = &cpu->env; ++ ++ return (cs->interrupt_request & (CPU_INTERRUPT_HARD | CPU_INTERRUPT_RESET)) ++ && cpu_interrupts_enabled(env); ++} ++ ++static void avr_cpu_synchronize_from_tb(CPUState *cs, TranslationBlock *tb) ++{ ++ AVRCPU *cpu = AVR_CPU(cs); ++ CPUAVRState *env = &cpu->env; ++ ++ env->pc_w = tb->pc / 2; /* internally PC points to words */ ++} ++ ++static void avr_cpu_reset(CPUState *s) ++{ ++ AVRCPU *cpu = AVR_CPU(s); ++ AVRCPUClass *mcc = AVR_CPU_GET_CLASS(cpu); ++ CPUAVRState *env = &cpu->env; ++ ++ mcc->parent_reset(s); ++ ++ env->pc_w = 0; ++ env->sregI = 1; ++ env->sregC = 0; ++ env->sregZ = 0; ++ env->sregN = 0; ++ env->sregV = 0; ++ env->sregS = 0; ++ env->sregH = 0; ++ env->sregT = 0; ++ ++ env->rampD = 0; ++ env->rampX = 0; ++ env->rampY = 0; ++ env->rampZ = 0; ++ env->eind = 0; ++ env->sp = 0; ++ ++ memset(env->r, 0, sizeof(env->r)); ++ ++ tlb_flush(s); ++} ++ ++static void avr_cpu_disas_set_info(CPUState *cpu, disassemble_info *info) ++{ ++ info->mach = bfd_arch_avr; ++ info->print_insn = NULL; ++} ++ ++static void avr_cpu_realizefn(DeviceState *dev, Error **errp) ++{ ++ CPUState *cs = CPU(dev); ++ AVRCPUClass *mcc = AVR_CPU_GET_CLASS(dev); ++ ++ qemu_init_vcpu(cs); ++ cpu_reset(cs); ++ ++ mcc->parent_realize(dev, errp); ++} ++ ++static void avr_cpu_set_int(void *opaque, int irq, int level) ++{ ++ AVRCPU *cpu = opaque; ++ CPUAVRState *env = &cpu->env; ++ CPUState *cs = CPU(cpu); ++ ++ uint64_t mask = (1ull << irq); ++ if (level) { ++ env->intsrc |= mask; ++ cpu_interrupt(cs, CPU_INTERRUPT_HARD); ++ } else { ++ env->intsrc &= ~mask; ++ if (env->intsrc == 0) { ++ cpu_reset_interrupt(cs, CPU_INTERRUPT_HARD); ++ } ++ } ++} ++ ++static void avr_cpu_initfn(Object *obj) ++{ ++ CPUState *cs = CPU(obj); ++ AVRCPU *cpu = AVR_CPU(obj); ++ static int inited; ++ ++ cs->env_ptr = &cpu->env; ++ cpu_exec_initfn(cs); ++ ++#ifndef CONFIG_USER_ONLY ++ qdev_init_gpio_in(DEVICE(cpu), avr_cpu_set_int, 37); ++#endif ++ ++ if (tcg_enabled() && !inited) { ++ inited = 1; ++ avr_translate_init(); ++ } ++} ++ ++static ObjectClass *avr_cpu_class_by_name(const char *cpu_model) ++{ ++ ObjectClass *oc; ++ char *typename; ++ char **cpuname; ++ ++ if (!cpu_model) { ++ return NULL; ++ } ++ ++ cpuname = g_strsplit(cpu_model, ",", 1); ++ typename = g_strdup_printf("%s-" TYPE_AVR_CPU, cpuname[0]); ++ oc = object_class_by_name(typename); ++ ++ g_strfreev(cpuname); ++ g_free(typename); ++ ++ if (!oc ++ || !object_class_dynamic_cast(oc, TYPE_AVR_CPU) ++ || object_class_is_abstract(oc)) { ++ return NULL; ++ } ++ ++ return oc; ++} ++ ++static void avr_cpu_class_init(ObjectClass *oc, void *data) ++{ ++ DeviceClass *dc = DEVICE_CLASS(oc); ++ CPUClass *cc = CPU_CLASS(oc); ++ AVRCPUClass *mcc = AVR_CPU_CLASS(oc); ++ ++ mcc->parent_realize = dc->realize; ++ dc->realize = avr_cpu_realizefn; ++ ++ mcc->parent_reset = cc->reset; ++ cc->reset = avr_cpu_reset; ++ ++ cc->class_by_name = avr_cpu_class_by_name; ++ ++ cc->has_work = avr_cpu_has_work; ++ cc->do_interrupt = avr_cpu_do_interrupt; ++ 
cc->cpu_exec_interrupt = avr_cpu_exec_interrupt; ++ cc->dump_state = avr_cpu_dump_state; ++ cc->set_pc = avr_cpu_set_pc; ++#if !defined(CONFIG_USER_ONLY) ++ cc->memory_rw_debug = avr_cpu_memory_rw_debug; ++#endif ++#ifdef CONFIG_USER_ONLY ++ cc->handle_mmu_fault = avr_cpu_handle_mmu_fault; ++#else ++ cc->get_phys_page_debug = avr_cpu_get_phys_page_debug; ++ cc->vmsd = &vms_avr_cpu; ++#endif ++ cc->disas_set_info = avr_cpu_disas_set_info; ++ cc->synchronize_from_tb = avr_cpu_synchronize_from_tb; ++ cc->gdb_read_register = avr_cpu_gdb_read_register; ++ cc->gdb_write_register = avr_cpu_gdb_write_register; ++ cc->gdb_num_core_regs = 35; ++} ++ ++static void avr_avr1_initfn(Object *obj) ++{ ++ AVRCPU *cpu = AVR_CPU(obj); ++ CPUAVRState *env = &cpu->env; ++ ++ avr_set_feature(env, AVR_FEATURE_LPM); ++ avr_set_feature(env, AVR_FEATURE_2_BYTE_SP); ++ avr_set_feature(env, AVR_FEATURE_2_BYTE_PC); ++} ++ ++static void avr_avr2_initfn(Object *obj) ++{ ++ AVRCPU *cpu = AVR_CPU(obj); ++ CPUAVRState *env = &cpu->env; ++ ++ avr_set_feature(env, AVR_FEATURE_LPM); ++ avr_set_feature(env, AVR_FEATURE_IJMP_ICALL); ++ avr_set_feature(env, AVR_FEATURE_ADIW_SBIW); ++ avr_set_feature(env, AVR_FEATURE_SRAM); ++ avr_set_feature(env, AVR_FEATURE_BREAK); ++ ++ avr_set_feature(env, AVR_FEATURE_2_BYTE_PC); ++ avr_set_feature(env, AVR_FEATURE_2_BYTE_SP); ++} ++ ++static void avr_avr25_initfn(Object *obj) ++{ ++ AVRCPU *cpu = AVR_CPU(obj); ++ CPUAVRState *env = &cpu->env; ++ ++ avr_set_feature(env, AVR_FEATURE_LPM); ++ avr_set_feature(env, AVR_FEATURE_IJMP_ICALL); ++ avr_set_feature(env, AVR_FEATURE_ADIW_SBIW); ++ avr_set_feature(env, AVR_FEATURE_SRAM); ++ avr_set_feature(env, AVR_FEATURE_BREAK); ++ ++ avr_set_feature(env, AVR_FEATURE_2_BYTE_PC); ++ avr_set_feature(env, AVR_FEATURE_2_BYTE_SP); ++ avr_set_feature(env, AVR_FEATURE_LPMX); ++ avr_set_feature(env, AVR_FEATURE_MOVW); ++} ++ ++static void avr_avr3_initfn(Object *obj) ++{ ++ AVRCPU *cpu = AVR_CPU(obj); ++ CPUAVRState *env = &cpu->env; ++ ++ avr_set_feature(env, AVR_FEATURE_LPM); ++ avr_set_feature(env, AVR_FEATURE_IJMP_ICALL); ++ avr_set_feature(env, AVR_FEATURE_ADIW_SBIW); ++ avr_set_feature(env, AVR_FEATURE_SRAM); ++ avr_set_feature(env, AVR_FEATURE_BREAK); ++ ++ avr_set_feature(env, AVR_FEATURE_2_BYTE_PC); ++ avr_set_feature(env, AVR_FEATURE_2_BYTE_SP); ++ avr_set_feature(env, AVR_FEATURE_JMP_CALL); ++} ++ ++static void avr_avr31_initfn(Object *obj) ++{ ++ AVRCPU *cpu = AVR_CPU(obj); ++ CPUAVRState *env = &cpu->env; ++ ++ avr_set_feature(env, AVR_FEATURE_LPM); ++ avr_set_feature(env, AVR_FEATURE_IJMP_ICALL); ++ avr_set_feature(env, AVR_FEATURE_ADIW_SBIW); ++ avr_set_feature(env, AVR_FEATURE_SRAM); ++ avr_set_feature(env, AVR_FEATURE_BREAK); ++ ++ avr_set_feature(env, AVR_FEATURE_2_BYTE_PC); ++ avr_set_feature(env, AVR_FEATURE_2_BYTE_SP); ++ avr_set_feature(env, AVR_FEATURE_RAMPZ); ++ avr_set_feature(env, AVR_FEATURE_ELPM); ++ avr_set_feature(env, AVR_FEATURE_JMP_CALL); ++} ++ ++static void avr_avr35_initfn(Object *obj) ++{ ++ AVRCPU *cpu = AVR_CPU(obj); ++ CPUAVRState *env = &cpu->env; ++ ++ avr_set_feature(env, AVR_FEATURE_LPM); ++ avr_set_feature(env, AVR_FEATURE_IJMP_ICALL); ++ avr_set_feature(env, AVR_FEATURE_ADIW_SBIW); ++ avr_set_feature(env, AVR_FEATURE_SRAM); ++ avr_set_feature(env, AVR_FEATURE_BREAK); ++ ++ avr_set_feature(env, AVR_FEATURE_2_BYTE_PC); ++ avr_set_feature(env, AVR_FEATURE_2_BYTE_SP); ++ avr_set_feature(env, AVR_FEATURE_JMP_CALL); ++ avr_set_feature(env, AVR_FEATURE_LPMX); ++ avr_set_feature(env, AVR_FEATURE_MOVW); ++} ++ ++static void 
avr_avr4_initfn(Object *obj) ++{ ++ AVRCPU *cpu = AVR_CPU(obj); ++ CPUAVRState *env = &cpu->env; ++ ++ avr_set_feature(env, AVR_FEATURE_LPM); ++ avr_set_feature(env, AVR_FEATURE_IJMP_ICALL); ++ avr_set_feature(env, AVR_FEATURE_ADIW_SBIW); ++ avr_set_feature(env, AVR_FEATURE_SRAM); ++ avr_set_feature(env, AVR_FEATURE_BREAK); ++ ++ avr_set_feature(env, AVR_FEATURE_2_BYTE_PC); ++ avr_set_feature(env, AVR_FEATURE_2_BYTE_SP); ++ avr_set_feature(env, AVR_FEATURE_LPMX); ++ avr_set_feature(env, AVR_FEATURE_MOVW); ++ avr_set_feature(env, AVR_FEATURE_MUL); ++} ++ ++static void avr_avr5_initfn(Object *obj) ++{ ++ AVRCPU *cpu = AVR_CPU(obj); ++ CPUAVRState *env = &cpu->env; ++ ++ avr_set_feature(env, AVR_FEATURE_LPM); ++ avr_set_feature(env, AVR_FEATURE_IJMP_ICALL); ++ avr_set_feature(env, AVR_FEATURE_ADIW_SBIW); ++ avr_set_feature(env, AVR_FEATURE_SRAM); ++ avr_set_feature(env, AVR_FEATURE_BREAK); ++ ++ avr_set_feature(env, AVR_FEATURE_2_BYTE_PC); ++ avr_set_feature(env, AVR_FEATURE_2_BYTE_SP); ++ avr_set_feature(env, AVR_FEATURE_JMP_CALL); ++ avr_set_feature(env, AVR_FEATURE_LPMX); ++ avr_set_feature(env, AVR_FEATURE_MOVW); ++ avr_set_feature(env, AVR_FEATURE_MUL); ++} ++ ++static void avr_avr51_initfn(Object *obj) ++{ ++ AVRCPU *cpu = AVR_CPU(obj); ++ CPUAVRState *env = &cpu->env; ++ ++ avr_set_feature(env, AVR_FEATURE_LPM); ++ avr_set_feature(env, AVR_FEATURE_IJMP_ICALL); ++ avr_set_feature(env, AVR_FEATURE_ADIW_SBIW); ++ avr_set_feature(env, AVR_FEATURE_SRAM); ++ avr_set_feature(env, AVR_FEATURE_BREAK); ++ ++ avr_set_feature(env, AVR_FEATURE_2_BYTE_PC); ++ avr_set_feature(env, AVR_FEATURE_2_BYTE_SP); ++ avr_set_feature(env, AVR_FEATURE_RAMPZ); ++ avr_set_feature(env, AVR_FEATURE_ELPMX); ++ avr_set_feature(env, AVR_FEATURE_ELPM); ++ avr_set_feature(env, AVR_FEATURE_JMP_CALL); ++ avr_set_feature(env, AVR_FEATURE_LPMX); ++ avr_set_feature(env, AVR_FEATURE_MOVW); ++ avr_set_feature(env, AVR_FEATURE_MUL); ++} ++ ++static void avr_avr6_initfn(Object *obj) ++{ ++ AVRCPU *cpu = AVR_CPU(obj); ++ CPUAVRState *env = &cpu->env; ++ ++ avr_set_feature(env, AVR_FEATURE_LPM); ++ avr_set_feature(env, AVR_FEATURE_IJMP_ICALL); ++ avr_set_feature(env, AVR_FEATURE_ADIW_SBIW); ++ avr_set_feature(env, AVR_FEATURE_SRAM); ++ avr_set_feature(env, AVR_FEATURE_BREAK); ++ ++ avr_set_feature(env, AVR_FEATURE_3_BYTE_PC); ++ avr_set_feature(env, AVR_FEATURE_2_BYTE_SP); ++ avr_set_feature(env, AVR_FEATURE_RAMPZ); ++ avr_set_feature(env, AVR_FEATURE_EIJMP_EICALL); ++ avr_set_feature(env, AVR_FEATURE_ELPMX); ++ avr_set_feature(env, AVR_FEATURE_ELPM); ++ avr_set_feature(env, AVR_FEATURE_JMP_CALL); ++ avr_set_feature(env, AVR_FEATURE_LPMX); ++ avr_set_feature(env, AVR_FEATURE_MOVW); ++ avr_set_feature(env, AVR_FEATURE_MUL); ++} ++ ++static void avr_xmega2_initfn(Object *obj) ++{ ++ AVRCPU *cpu = AVR_CPU(obj); ++ CPUAVRState *env = &cpu->env; ++ ++ avr_set_feature(env, AVR_FEATURE_LPM); ++ avr_set_feature(env, AVR_FEATURE_IJMP_ICALL); ++ avr_set_feature(env, AVR_FEATURE_ADIW_SBIW); ++ avr_set_feature(env, AVR_FEATURE_SRAM); ++ avr_set_feature(env, AVR_FEATURE_BREAK); ++ ++ avr_set_feature(env, AVR_FEATURE_2_BYTE_PC); ++ avr_set_feature(env, AVR_FEATURE_2_BYTE_SP); ++ avr_set_feature(env, AVR_FEATURE_JMP_CALL); ++ avr_set_feature(env, AVR_FEATURE_LPMX); ++ avr_set_feature(env, AVR_FEATURE_MOVW); ++ avr_set_feature(env, AVR_FEATURE_MUL); ++ avr_set_feature(env, AVR_FEATURE_RMW); ++} ++ ++static void avr_xmega4_initfn(Object *obj) ++{ ++ AVRCPU *cpu = AVR_CPU(obj); ++ CPUAVRState *env = &cpu->env; ++ ++ avr_set_feature(env, 
AVR_FEATURE_LPM); ++ avr_set_feature(env, AVR_FEATURE_IJMP_ICALL); ++ avr_set_feature(env, AVR_FEATURE_ADIW_SBIW); ++ avr_set_feature(env, AVR_FEATURE_SRAM); ++ avr_set_feature(env, AVR_FEATURE_BREAK); ++ ++ avr_set_feature(env, AVR_FEATURE_2_BYTE_PC); ++ avr_set_feature(env, AVR_FEATURE_2_BYTE_SP); ++ avr_set_feature(env, AVR_FEATURE_RAMPZ); ++ avr_set_feature(env, AVR_FEATURE_ELPMX); ++ avr_set_feature(env, AVR_FEATURE_ELPM); ++ avr_set_feature(env, AVR_FEATURE_JMP_CALL); ++ avr_set_feature(env, AVR_FEATURE_LPMX); ++ avr_set_feature(env, AVR_FEATURE_MOVW); ++ avr_set_feature(env, AVR_FEATURE_MUL); ++ avr_set_feature(env, AVR_FEATURE_RMW); ++} ++ ++static void avr_xmega5_initfn(Object *obj) ++{ ++ AVRCPU *cpu = AVR_CPU(obj); ++ CPUAVRState *env = &cpu->env; ++ ++ avr_set_feature(env, AVR_FEATURE_LPM); ++ avr_set_feature(env, AVR_FEATURE_IJMP_ICALL); ++ avr_set_feature(env, AVR_FEATURE_ADIW_SBIW); ++ avr_set_feature(env, AVR_FEATURE_SRAM); ++ avr_set_feature(env, AVR_FEATURE_BREAK); ++ ++ avr_set_feature(env, AVR_FEATURE_2_BYTE_PC); ++ avr_set_feature(env, AVR_FEATURE_2_BYTE_SP); ++ avr_set_feature(env, AVR_FEATURE_RAMPD); ++ avr_set_feature(env, AVR_FEATURE_RAMPX); ++ avr_set_feature(env, AVR_FEATURE_RAMPY); ++ avr_set_feature(env, AVR_FEATURE_RAMPZ); ++ avr_set_feature(env, AVR_FEATURE_ELPMX); ++ avr_set_feature(env, AVR_FEATURE_ELPM); ++ avr_set_feature(env, AVR_FEATURE_JMP_CALL); ++ avr_set_feature(env, AVR_FEATURE_LPMX); ++ avr_set_feature(env, AVR_FEATURE_MOVW); ++ avr_set_feature(env, AVR_FEATURE_MUL); ++ avr_set_feature(env, AVR_FEATURE_RMW); ++} ++ ++static void avr_xmega6_initfn(Object *obj) ++{ ++ AVRCPU *cpu = AVR_CPU(obj); ++ CPUAVRState *env = &cpu->env; ++ ++ avr_set_feature(env, AVR_FEATURE_LPM); ++ avr_set_feature(env, AVR_FEATURE_IJMP_ICALL); ++ avr_set_feature(env, AVR_FEATURE_ADIW_SBIW); ++ avr_set_feature(env, AVR_FEATURE_SRAM); ++ avr_set_feature(env, AVR_FEATURE_BREAK); ++ ++ avr_set_feature(env, AVR_FEATURE_3_BYTE_PC); ++ avr_set_feature(env, AVR_FEATURE_2_BYTE_SP); ++ avr_set_feature(env, AVR_FEATURE_RAMPZ); ++ avr_set_feature(env, AVR_FEATURE_EIJMP_EICALL); ++ avr_set_feature(env, AVR_FEATURE_ELPMX); ++ avr_set_feature(env, AVR_FEATURE_ELPM); ++ avr_set_feature(env, AVR_FEATURE_JMP_CALL); ++ avr_set_feature(env, AVR_FEATURE_LPMX); ++ avr_set_feature(env, AVR_FEATURE_MOVW); ++ avr_set_feature(env, AVR_FEATURE_MUL); ++ avr_set_feature(env, AVR_FEATURE_RMW); ++} ++ ++static void avr_xmega7_initfn(Object *obj) ++{ ++ AVRCPU *cpu = AVR_CPU(obj); ++ CPUAVRState *env = &cpu->env; ++ ++ avr_set_feature(env, AVR_FEATURE_LPM); ++ avr_set_feature(env, AVR_FEATURE_IJMP_ICALL); ++ avr_set_feature(env, AVR_FEATURE_ADIW_SBIW); ++ avr_set_feature(env, AVR_FEATURE_SRAM); ++ avr_set_feature(env, AVR_FEATURE_BREAK); ++ ++ avr_set_feature(env, AVR_FEATURE_3_BYTE_PC); ++ avr_set_feature(env, AVR_FEATURE_2_BYTE_SP); ++ avr_set_feature(env, AVR_FEATURE_RAMPD); ++ avr_set_feature(env, AVR_FEATURE_RAMPX); ++ avr_set_feature(env, AVR_FEATURE_RAMPY); ++ avr_set_feature(env, AVR_FEATURE_RAMPZ); ++ avr_set_feature(env, AVR_FEATURE_EIJMP_EICALL); ++ avr_set_feature(env, AVR_FEATURE_ELPMX); ++ avr_set_feature(env, AVR_FEATURE_ELPM); ++ avr_set_feature(env, AVR_FEATURE_JMP_CALL); ++ avr_set_feature(env, AVR_FEATURE_LPMX); ++ avr_set_feature(env, AVR_FEATURE_MOVW); ++ avr_set_feature(env, AVR_FEATURE_MUL); ++ avr_set_feature(env, AVR_FEATURE_RMW); ++} ++ ++typedef struct AVRCPUInfo { ++ const char *name; ++ void (*initfn)(Object *obj); ++} AVRCPUInfo; ++ ++static const AVRCPUInfo avr_cpus[] = { 
++ {.name = "avr1", .initfn = avr_avr1_initfn}, ++ {.name = "avr2", .initfn = avr_avr2_initfn}, ++ {.name = "avr25", .initfn = avr_avr25_initfn}, ++ {.name = "avr3", .initfn = avr_avr3_initfn}, ++ {.name = "avr31", .initfn = avr_avr31_initfn}, ++ {.name = "avr35", .initfn = avr_avr35_initfn}, ++ {.name = "avr4", .initfn = avr_avr4_initfn}, ++ {.name = "avr5", .initfn = avr_avr5_initfn}, ++ {.name = "avr51", .initfn = avr_avr51_initfn}, ++ {.name = "avr6", .initfn = avr_avr6_initfn}, ++ {.name = "xmega2", .initfn = avr_xmega2_initfn}, ++ {.name = "xmega4", .initfn = avr_xmega4_initfn}, ++ {.name = "xmega5", .initfn = avr_xmega5_initfn}, ++ {.name = "xmega6", .initfn = avr_xmega6_initfn}, ++ {.name = "xmega7", .initfn = avr_xmega7_initfn}, ++}; ++ ++static gint avr_cpu_list_compare(gconstpointer a, gconstpointer b) ++{ ++ ObjectClass *class_a = (ObjectClass *)a; ++ ObjectClass *class_b = (ObjectClass *)b; ++ const char *name_a; ++ const char *name_b; ++ ++ name_a = object_class_get_name(class_a); ++ name_b = object_class_get_name(class_b); ++ ++ return strcmp(name_a, name_b); ++} ++ ++static void avr_cpu_list_entry(gpointer data, gpointer user_data) ++{ ++ ObjectClass *oc = data; ++ CPUListState *s = user_data; ++ const char *typename; ++ char *name; ++ ++ typename = object_class_get_name(oc); ++ name = g_strndup(typename, strlen(typename) - strlen("-" TYPE_AVR_CPU)); ++ (*s->cpu_fprintf)(s->file, " %s\n", name); ++ g_free(name); ++} ++ ++void avr_cpu_list(FILE *f, fprintf_function cpu_fprintf) ++{ ++ CPUListState s = { ++ .file = f, ++ .cpu_fprintf = cpu_fprintf, ++ }; ++ GSList *list; ++ ++ list = object_class_get_list(TYPE_AVR_CPU, false); ++ list = g_slist_sort(list, avr_cpu_list_compare); ++ (*cpu_fprintf)(f, "Available CPUs:\n"); ++ g_slist_foreach(list, avr_cpu_list_entry, &s); ++ g_slist_free(list); ++} ++ ++AVRCPU *cpu_avr_init(const char *cpu_model) ++{ ++ return AVR_CPU(cpu_generic_init(TYPE_AVR_CPU, cpu_model)); ++} ++ ++static void cpu_register(const AVRCPUInfo *info) ++{ ++ TypeInfo type_info = { ++ .parent = TYPE_AVR_CPU, ++ .instance_size = sizeof(AVRCPU), ++ .instance_init = info->initfn, ++ .class_size = sizeof(AVRCPUClass), ++ }; ++ ++ type_info.name = g_strdup_printf("%s-" TYPE_AVR_CPU, info->name); ++ type_register(&type_info); ++ g_free((void *)type_info.name); ++} ++ ++static const TypeInfo avr_cpu_type_info = { ++ .name = TYPE_AVR_CPU, ++ .parent = TYPE_CPU, ++ .instance_size = sizeof(AVRCPU), ++ .instance_init = avr_cpu_initfn, ++ .class_size = sizeof(AVRCPUClass), ++ .class_init = avr_cpu_class_init, ++ .abstract = true, ++}; ++ ++static void avr_cpu_register_types(void) ++{ ++ int i; ++ type_register_static(&avr_cpu_type_info); ++ ++ for (i = 0; i < ARRAY_SIZE(avr_cpus); i++) { ++ cpu_register(&avr_cpus[i]); ++ } ++} ++ ++type_init(avr_cpu_register_types) diff --cc target/avr/cpu.h index 0000000,0000000..9214324 new file mode 100644 --- /dev/null +++ b/target/avr/cpu.h @@@ -1,0 -1,0 +1,237 @@@ ++/* ++ * QEMU AVR CPU ++ * ++ * Copyright (c) 2016 Michael Rolnik ++ * ++ * This library is free software; you can redistribute it and/or ++ * modify it under the terms of the GNU Lesser General Public ++ * License as published by the Free Software Foundation; either ++ * version 2.1 of the License, or (at your option) any later version. ++ * ++ * This library is distributed in the hope that it will be useful, ++ * but WITHOUT ANY WARRANTY; without even the implied warranty of ++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU ++ * Lesser General Public License for more details. ++ * ++ * You should have received a copy of the GNU Lesser General Public ++ * License along with this library; if not, see ++ * ++ */ ++ ++#if !defined(CPU_AVR_H) ++#define CPU_AVR_H ++ ++#include "qemu-common.h" ++ ++#define TARGET_LONG_BITS 32 ++ ++#define CPUArchState struct CPUAVRState ++ ++#include "exec/cpu-defs.h" ++#include "fpu/softfloat.h" ++ ++/* ++ * TARGET_PAGE_BITS cannot be more than 8 bits because ++ * 1. all IO registers occupy [0x0000 .. 0x00ff] address range, and they ++ * should be implemented as a device and not memory ++ * 2. SRAM starts at the address 0x0100 ++ */ ++#define TARGET_PAGE_BITS 8 ++#define TARGET_PHYS_ADDR_SPACE_BITS 24 ++#define TARGET_VIRT_ADDR_SPACE_BITS 24 ++#define NB_MMU_MODES 2 ++ ++/* ++ * AVR has two memory spaces, data & code. ++ * e.g. both have 0 address ++ * ST/LD instructions access data space ++ * LPM/SPM and instruction fetching access code memory space ++ */ ++#define MMU_CODE_IDX 0 ++#define MMU_DATA_IDX 1 ++ ++#define EXCP_RESET 1 ++#define EXCP_INT(n) (EXCP_RESET + (n) + 1) ++ ++#define PHYS_ADDR_MASK 0xfff00000 ++ ++#define PHYS_BASE_CODE 0x00000000 ++#define PHYS_BASE_DATA 0x00800000 ++#define PHYS_BASE_REGS 0x10000000 ++ ++#define VIRT_BASE_CODE 0x00000000 ++#define VIRT_BASE_DATA 0x00000000 ++#define VIRT_BASE_REGS 0x00000000 ++ ++/* ++ * there are two groups of registers ++ * 1. CPU regs - accessible by LD/ST and CPU itself ++ * 2. CPU IO regs - accessible by LD/ST and IN/OUT ++ */ ++#define AVR_CPU_REGS 0x0020 ++#define AVR_CPU_IO_REGS 0x0040 ++#define AVR_REGS (AVR_CPU_IO_REGS + AVR_CPU_REGS) ++ ++#define AVR_CPU_REGS_BASE 0x0000 ++#define AVR_CPU_IO_REGS_BASE (AVR_CPU_REGS_BASE + AVR_CPU_REGS) ++ ++#define AVR_CPU_REGS_LAST (AVR_CPU_REGS_BASE + AVR_CPU_REGS - 1) ++#define AVR_CPU_IO_REGS_LAST (AVR_CPU_IO_REGS_BASE + AVR_CPU_IO_REGS - 1) ++ ++enum avr_features { ++ AVR_FEATURE_SRAM, ++ ++ AVR_FEATURE_1_BYTE_PC, ++ AVR_FEATURE_2_BYTE_PC, ++ AVR_FEATURE_3_BYTE_PC, ++ ++ AVR_FEATURE_1_BYTE_SP, ++ AVR_FEATURE_2_BYTE_SP, ++ ++ AVR_FEATURE_BREAK, ++ AVR_FEATURE_DES, ++ AVR_FEATURE_RMW, /* Read Modify Write - XCH LAC LAS LAT */ ++ ++ AVR_FEATURE_EIJMP_EICALL, ++ AVR_FEATURE_IJMP_ICALL, ++ AVR_FEATURE_JMP_CALL, ++ ++ AVR_FEATURE_ADIW_SBIW, ++ ++ AVR_FEATURE_SPM, ++ AVR_FEATURE_SPMX, ++ ++ AVR_FEATURE_ELPMX, ++ AVR_FEATURE_ELPM, ++ AVR_FEATURE_LPMX, ++ AVR_FEATURE_LPM, ++ ++ AVR_FEATURE_MOVW, ++ AVR_FEATURE_MUL, ++ AVR_FEATURE_RAMPD, ++ AVR_FEATURE_RAMPX, ++ AVR_FEATURE_RAMPY, ++ AVR_FEATURE_RAMPZ, ++}; ++ ++typedef struct CPUAVRState CPUAVRState; ++ ++struct CPUAVRState { ++ uint32_t pc_w; /* 0x003fffff up to 22 bits */ ++ ++ uint32_t sregC; /* 0x00000001 1 bits */ ++ uint32_t sregZ; /* 0x0000ffff 16 bits, negative logic */ ++ uint32_t sregN; /* 0x00000001 1 bits */ ++ uint32_t sregV; /* 0x00000001 1 bits */ ++ uint32_t sregS; /* 0x00000001 1 bits */ ++ uint32_t sregH; /* 0x00000001 1 bits */ ++ uint32_t sregT; /* 0x00000001 1 bits */ ++ uint32_t sregI; /* 0x00000001 1 bits */ ++ ++ uint32_t rampD; /* 0x00ff0000 8 bits */ ++ uint32_t rampX; /* 0x00ff0000 8 bits */ ++ uint32_t rampY; /* 0x00ff0000 8 bits */ ++ uint32_t rampZ; /* 0x00ff0000 8 bits */ ++ uint32_t eind; /* 0x00ff0000 8 bits */ ++ ++ uint32_t r[AVR_CPU_REGS]; ++ /* 8 bits each */ ++ uint32_t sp; /* 16 bits */ ++ ++ uint64_t intsrc; /* interrupt sources */ ++ bool fullacc;/* CPU/MEM if true MEM only otherwise */ ++ ++ uint32_t features; ++ ++ /* Those resources are used only in QEMU core */ ++ CPU_COMMON ++}; 
++ ++static inline int avr_feature(CPUAVRState *env, int feature) ++{ ++ return (env->features & (1U << feature)) != 0; ++} ++ ++static inline void avr_set_feature(CPUAVRState *env, int feature) ++{ ++ env->features |= (1U << feature); ++} ++ ++#define cpu_list avr_cpu_list ++#define cpu_signal_handler cpu_avr_signal_handler ++ ++#include "exec/cpu-all.h" ++#include "cpu-qom.h" ++ ++static inline int cpu_mmu_index(CPUAVRState *env, bool ifetch) ++{ ++ return ifetch ? MMU_CODE_IDX : MMU_DATA_IDX; ++} ++ ++void avr_translate_init(void); ++ ++AVRCPU *cpu_avr_init(const char *cpu_model); ++ ++#define cpu_init(cpu_model) CPU(cpu_avr_init(cpu_model)) ++ ++void avr_cpu_list(FILE *f, fprintf_function cpu_fprintf); ++int cpu_avr_exec(CPUState *cpu); ++int cpu_avr_signal_handler(int host_signum, void *pinfo, void *puc); ++int avr_cpu_handle_mmu_fault(CPUState *cpu, vaddr address, int rw, ++ int mmu_idx); ++int avr_cpu_memory_rw_debug(CPUState *cs, vaddr address, uint8_t *buf, ++ int len, bool is_write); ++ ++enum { ++ TB_FLAGS_FULL_ACCESS = 1, ++}; ++ ++static inline void cpu_get_tb_cpu_state(CPUAVRState *env, target_ulong *pc, ++ target_ulong *cs_base, uint32_t *pflags) ++{ ++ uint32_t flags = 0; ++ ++ *pc = env->pc_w * 2; ++ *cs_base = 0; ++ ++ if (env->fullacc) { ++ flags |= TB_FLAGS_FULL_ACCESS; ++ } ++ ++ *pflags = flags; ++} ++ ++static inline int cpu_interrupts_enabled(CPUAVRState *env) ++{ ++ return env->sregI != 0; ++} ++ ++static inline uint8_t cpu_get_sreg(CPUAVRState *env) ++{ ++ uint8_t sreg; ++ sreg = (env->sregC & 0x01) << 0 ++ | (env->sregZ == 0 ? 1 : 0) << 1 ++ | (env->sregN) << 2 ++ | (env->sregV) << 3 ++ | (env->sregS) << 4 ++ | (env->sregH) << 5 ++ | (env->sregT) << 6 ++ | (env->sregI) << 7; ++ return sreg; ++} ++ ++static inline void cpu_set_sreg(CPUAVRState *env, uint8_t sreg) ++{ ++ env->sregC = (sreg >> 0) & 0x01; ++ env->sregZ = (sreg >> 1) & 0x01 ? 0 : 1; ++ env->sregN = (sreg >> 2) & 0x01; ++ env->sregV = (sreg >> 3) & 0x01; ++ env->sregS = (sreg >> 4) & 0x01; ++ env->sregH = (sreg >> 5) & 0x01; ++ env->sregT = (sreg >> 6) & 0x01; ++ env->sregI = (sreg >> 7) & 0x01; ++} ++ ++#include "exec/exec-all.h" ++ ++#endif /* !defined (CPU_AVR_H) */ diff --cc target/avr/cpugen/CMakeLists.txt index 0000000,0000000..ded391c new file mode 100644 --- /dev/null +++ b/target/avr/cpugen/CMakeLists.txt @@@ -1,0 -1,0 +1,38 @@@ ++cmake_minimum_required(VERSION 2.8) ++ ++project(cpugen) ++ ++set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -g -ggdb -g3") ++set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++0x") ++ ++set(Boost_USE_STATIC_LIBS ON) ++find_package( ++ Boost 1.60.0 ++ REQUIRED ++ COMPONENTS ++ system ++ regex) ++#set(BUILD_SHARED_LIBS OFF) ++#set(BUILD_STATIC_LIBS ON) ++add_subdirectory(tinyxml2) ++add_subdirectory(yaml-cpp) ++ ++include_directories( ++ ${CMAKE_CURRENT_SOURCE_DIR} ++ ${CMAKE_CURRENT_SOURCE_DIR}/.. ++ ${CMAKE_CURRENT_SOURCE_DIR}/../yaml-cpp/include ++ ${Boost_INCLUDE_DIRS} ++) ++ ++add_executable( ++ cpugen ++ src/cpugen.cpp ++ src/utils.cpp ++) ++ ++target_link_libraries( ++ cpugen ++ yaml-cpp ++ tinyxml2 ++ ${Boost_LIBRARIES} ++) diff --cc target/avr/cpugen/README.md index 0000000,0000000..f0caa8b new file mode 100644 --- /dev/null +++ b/target/avr/cpugen/README.md @@@ -1,0 -1,0 +1,17 @@@ ++# CPUGEN ++## How to build ++within ```cpugen``` directory do ++``` ++git clone https://github.com/leethomason/tinyxml2 ++git clone https://github.com/jbeder/yaml-cpp ++mkdir build ++cd build ++cmake .. 
++make ++``` ++## How to use ++``` ++cpugen ../cpu/avr.yaml ++xsltproc ../xsl/decode.c.xsl output.xml > ../../decode.c ++xsltproc ../xsl/translate-inst.h.xsl output.xml > ../../translate-inst.h ++``` diff --cc target/avr/cpugen/cpu/avr.yaml index 0000000,0000000..c36b628 new file mode 100644 --- /dev/null +++ b/target/avr/cpugen/cpu/avr.yaml @@@ -1,0 -1,0 +1,213 @@@ ++cpu: ++ name: avr ++ instructions: ++ - ADC: ++ opcode: 0001 11 hRr[1] Rd[5] lRr[4] ++ - ADD: ++ opcode: 0000 11 hRr[1] Rd[5] lRr[4] ++ - ADIW: ++ opcode: 1001 0110 hImm[2] Rd[2] lImm[4] ++ - AND: ++ opcode: 0010 00 hRr[1] Rd[5] lRr[4] ++ - ANDI: ++ opcode: 0111 hImm[4] Rd[4] lImm[4] ++ - ASR: ++ opcode: 1001 010 Rd[5] 0101 ++ - BCLR: ++ opcode: 1001 0100 1 Bit[3] 1000 ++ - BLD: ++ opcode: 1111 100 Rd[5] 0 Bit[3] ++ - BRBC: ++ opcode: 1111 01 Imm[7] Bit[3] ++ - BRBS: ++ opcode: 1111 00 Imm[7] Bit[3] ++ - BREAK: ++ opcode: 1001 0101 1001 1000 ++ - BSET: ++ opcode: 1001 0100 0 Bit[3] 1000 ++ - BST: ++ opcode: 1111 101 Rd[5] 0 Bit[3] ++ - CALL: ++ opcode: 1001 010 hImm[5] 111 lImm[17] ++ - CBI: ++ opcode: 1001 1000 Imm[5] Bit[3] ++ - COM: ++ opcode: 1001 010 Rd[5] 0000 ++ - CP: ++ opcode: 0001 01 hRr[1] Rd[5] lRr[4] ++ - CPC: ++ opcode: 0000 01 hRr[1] Rd[5] lRr[4] ++ - CPI: ++ opcode: 0011 hImm[4] Rd[4] lImm[4] ++ - CPSE: ++ opcode: 0001 00 hRr[1] Rd[5] lRr[4] ++ - DEC: ++ opcode: 1001 010 Rd[5] 1010 ++ - DES: ++ opcode: 1001 0100 Imm[4] 1011 ++ - EICALL: ++ opcode: 1001 0101 0001 1001 ++ - EIJMP: ++ opcode: 1001 0100 0001 1001 ++ - ELPM1: ++ opcode: 1001 0101 1101 1000 ++ - ELPM2: ++ opcode: 1001 000 Rd[5] 0110 ++ - ELPMX: ++ opcode: 1001 000 Rd[5] 0111 ++ - EOR: ++ opcode: 0010 01 hRr[1] Rd[5] lRr[4] ++ - FMUL: ++ opcode: 0000 0011 0 Rd[3] 1 Rr[3] ++ - FMULS: ++ opcode: 0000 0011 1 Rd[3] 0 Rr[3] ++ - FMULSU: ++ opcode: 0000 0011 1 Rd[3] 1 Rr[3] ++ - ICALL: ++ opcode: 1001 0101 0000 1001 ++ - IJMP: ++ opcode: 1001 0100 0000 1001 ++ - IN: ++ opcode: 1011 0 hImm[2] Rd[5] lImm[4] ++ - INC: ++ opcode: 1001 010 Rd[5] 0011 ++ - JMP: ++ opcode: 1001 010 hImm[5] 110 lImm[17] ++ - LAC: ++ opcode: 1001 001 Rr[5] 0110 ++ - LAS: ++ opcode: 1001 001 Rr[5] 0101 ++ - LAT: ++ opcode: 1001 001 Rr[5] 0111 ++ - LDX1: ++ opcode: 1001 000 Rd[5] 1100 ++ - LDX2: ++ opcode: 1001 000 Rd[5] 1101 ++ - LDX3: ++ opcode: 1001 000 Rd[5] 1110 ++# - LDY1: ++# opcode: 1000 000 Rd[5] 1000 ++ - LDY2: ++ opcode: 1001 000 Rd[5] 1001 ++ - LDY3: ++ opcode: 1001 000 Rd[5] 1010 ++ - LDDY: ++ opcode: 10 hImm[1] 0 mImm[2] 0 Rd[5] 1 lImm[3] ++# - LDZ1: ++# opcode: 1000 000 Rd[5] 0000 ++ - LDZ2: ++ opcode: 1001 000 Rd[5] 0001 ++ - LDZ3: ++ opcode: 1001 000 Rd[5] 0010 ++ - LDDZ: ++ opcode: 10 hImm[1] 0 mImm[2] 0 Rd[5] 0 lImm[3] ++ - LDI: ++ opcode: 1110 hImm[4] Rd[4] lImm[4] ++ - LDS: ++ opcode: 1001 000 Rd[5] 0000 Imm[16] ++# - LDS16: ++# opcode: 1010 0 hImm[3] Rd[4] lImm[4] ++ - LPM1: ++ opcode: 1001 0101 1100 1000 ++ - LPM2: ++ opcode: 1001 000 Rd[5] 0100 ++ - LPMX: ++ opcode: 1001 000 Rd[5] 0101 ++ - LSR: ++ opcode: 1001 010 Rd[5] 0110 ++ - MOV: ++ opcode: 0010 11 hRr[1] Rd[5] lRr[4] ++ - MOVW: ++ opcode: 0000 0001 Rd[4] Rr[4] ++ - MUL: ++ opcode: 1001 11 hRr[1] Rd[5] lRr[4] ++ - MULS: ++ opcode: 0000 0010 Rd[4] Rr[4] ++ - MULSU: ++ opcode: 0000 0011 0 Rd[3] 0 Rr[3] ++ - NEG: ++ opcode: 1001 010 Rd[5] 0001 ++ - NOP: ++ opcode: 0000 0000 0000 0000 ++ - OR: ++ opcode: 0010 10 hRr[1] Rd[5] lRr[4] ++ - ORI: ++ opcode: 0110 hImm[4] Rd[4] lImm[4] ++ - OUT: ++ opcode: 1011 1 hImm[2] Rd[5] lImm[4] ++ - POP: ++ opcode: 1001 000 Rd[5] 1111 ++ - PUSH: ++ opcode: 1001 001 Rd[5] 
1111 ++ - RCALL: ++ opcode: 1101 Imm[12] ++ - RET: ++ opcode: 1001 0101 0000 1000 ++ - RETI: ++ opcode: 1001 0101 0001 1000 ++ - RJMP: ++ opcode: 1100 Imm[12] ++ - ROR: ++ opcode: 1001 010 Rd[5] 0111 ++ - SBC: ++ opcode: 0000 10 hRr[1] Rd[5] lRr[4] ++ - SBCI: ++ opcode: 0100 hImm[4] Rd[4] lImm[4] ++ - SBI: ++ opcode: 1001 1010 Imm[5] Bit[3] ++ - SBIC: ++ opcode: 1001 1001 Imm[5] Bit[3] ++ - SBIS: ++ opcode: 1001 1011 Imm[5] Bit[3] ++ - SBIW: ++ opcode: 1001 0111 hImm[2] Rd[2] lImm[4] ++# - SBR: ++# opcode: 0110 hImm[4] Rd[4] lImm[4] ++ - SBRC: ++ opcode: 1111 110 Rr[5] 0 Bit[3] ++ - SBRS: ++ opcode: 1111 111 Rr[5] 0 Bit[3] ++ - SLEEP: ++ opcode: 1001 0101 1000 1000 ++ - SPM: ++ opcode: 1001 0101 1110 1000 ++ - SPMX: ++ opcode: 1001 0101 1111 1000 ++ - STX1: ++ opcode: 1001 001 Rr[5] 1100 ++ - STX2: ++ opcode: 1001 001 Rr[5] 1101 ++ - STX3: ++ opcode: 1001 001 Rr[5] 1110 ++# - STY1: ++# opcode: 1000 001 Rd[5] 1000 ++ - STY2: ++ opcode: 1001 001 Rd[5] 1001 ++ - STY3: ++ opcode: 1001 001 Rd[5] 1010 ++ - STDY: ++ opcode: 10 hImm[1] 0 mImm[2] 1 Rd[5] 1 lImm[3] ++# - STZ1: ++# opcode: 1000 001 Rd[5] 0000 ++ - STZ2: ++ opcode: 1001 001 Rd[5] 0001 ++ - STZ3: ++ opcode: 1001 001 Rd[5] 0010 ++ - STDZ: ++ opcode: 10 hImm[1] 0 mImm[2] 1 Rd[5] 0 lImm[3] ++ - STS: ++ opcode: 1001 001 Rd[5] 0000 Imm[16] ++# - STS16: ++# opcode: 1010 1 hImm[3] Rd[4] lImm[4] ++ - SUB: ++ opcode: 0001 10 hRr[1] Rd[5] lRr[4] ++ - SUBI: ++ opcode: 0101 hImm[4] Rd[4] lImm[4] ++ - SWAP: ++ opcode: 1001 010 Rd[5] 0010 ++# - TST: ++# opcode: 0010 00 Rd[10] ++ - WDR: ++ opcode: 1001 0101 1010 1000 ++ - XCH: ++ opcode: 1001 001 Rd[5] 0100 diff --cc target/avr/cpugen/src/CMakeLists.txt index 0000000,0000000..5f08761 new file mode 100644 --- /dev/null +++ b/target/avr/cpugen/src/CMakeLists.txt @@@ -1,0 -1,0 +1,62 @@@ ++# ++# CPUGEN ++# ++# Copyright (c) 2016 Michael Rolnik ++# ++# Permission is hereby granted, free of charge, to any person obtaining a copy ++# of this software and associated documentation files (the "Software"), to deal ++# in the Software without restriction, including without limitation the rights ++# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell ++# copies of the Software, and to permit persons to whom the Software is ++# furnished to do so, subject to the following conditions: ++# ++# The above copyright notice and this permission notice shall be included in ++# all copies or substantial portions of the Software. ++# ++# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR ++# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, ++# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL ++# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER ++# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, ++# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN ++# THE SOFTWARE. ++# ++ ++cmake_minimum_required(VERSION 2.8) ++ ++project(cpugen) ++ ++set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -g -ggdb -g3") ++set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++0x") ++ ++set(Boost_USE_STATIC_LIBS ON) ++find_package( ++ Boost 1.60.0 ++ REQUIRED ++ COMPONENTS ++ system ++ regex) ++set(BUILD_SHARED_LIBS OFF) ++set(BUILD_STATIC_LIBS ON) ++add_subdirectory(../tinyxml2 ${CMAKE_CURRENT_BINARY_DIR}/tinyxml2) ++add_subdirectory(../yaml-cpp ${CMAKE_CURRENT_BINARY_DIR}/yaml-cpp) ++ ++include_directories( ++ ${CMAKE_CURRENT_SOURCE_DIR} ++ ${CMAKE_CURRENT_SOURCE_DIR}/.. 
++ ${CMAKE_CURRENT_SOURCE_DIR}/../yaml-cpp/include ++ ${Boost_INCLUDE_DIRS} ++) ++ ++add_executable( ++ cpugen ++ cpugen.cpp ++ utils.cpp ++) ++ ++target_link_libraries( ++ cpugen ++ yaml-cpp ++ tinyxml2_static ++ ${Boost_LIBRARIES} ++) diff --cc target/avr/cpugen/src/cpugen.cpp index 0000000,0000000..e479b08 new file mode 100644 --- /dev/null +++ b/target/avr/cpugen/src/cpugen.cpp @@@ -1,0 -1,0 +1,457 @@@ ++/* ++ * CPUGEN ++ * ++ * Copyright (c) 2016 Michael Rolnik ++ * ++ * Permission is hereby granted, free of charge, to any person obtaining a copy ++ * of this software and associated documentation files (the "Software"), to deal ++ * in the Software without restriction, including without limitation the rights ++ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell ++ * copies of the Software, and to permit persons to whom the Software is ++ * furnished to do so, subject to the following conditions: ++ * ++ * The above copyright notice and this permission notice shall be included in ++ * all copies or substantial portions of the Software. ++ * ++ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR ++ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, ++ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL ++ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER ++ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, ++ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN ++ * THE SOFTWARE. ++ */ ++ ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++ ++#include "yaml-cpp/yaml.h" ++#include "tinyxml2/tinyxml2.h" ++ ++#include "utils.h" ++ ++#include ++ ++struct inst_info_t { ++ std::string name; ++ std::string opcode; ++ ++ tinyxml2::XMLElement *nodeFields; ++}; ++ ++struct cpu_info_t { ++ std::string name; ++ std::vector instructions; ++}; ++ ++int countbits(uint64_t value) ++{ ++ int counter = 0; ++ uint64_t mask = 1; ++ ++ for (size_t i = 0; i < sizeof(value) * 8; ++i) { ++ if (value & mask) { ++ counter++; ++ } ++ ++ mask <<= 1; ++ } ++ ++ return counter; ++} ++ ++int encode(uint64_t mask, uint64_t value) ++{ ++ uint64_t i = 0x0000000000000001; ++ uint64_t j = 0x0000000000000001; ++ uint64_t v = 0x0000000000000000; ++ ++ for (size_t it = 0; it < sizeof(value) * 8; ++it) { ++ if (mask & i) { ++ if (value & j) { ++ v |= i; ++ } ++ j <<= 1; ++ } ++ ++ i <<= 1; ++ } ++ ++ return v; ++} ++ ++std::string num2hex(uint64_t value) ++{ ++ std::ostringstream str; ++ str << "0x" << std::hex << std::setw(8) << std::setfill('0') << value; ++ ++ return str.str(); ++} ++ ++tinyxml2::XMLDocument doc; ++ ++void operator >> (const YAML::Node & node, inst_info_t & info) ++{ ++ for (auto it = node.begin(); it != node.end(); ++it) { ++ const YAML::Node & curr = it->second; ++ std::string name = it->first.as(); ++ ++ info.opcode = curr["opcode"].as(); ++ ++ const char *response; ++ std::vector fields; ++ std::string opcode = ""; ++ int offset; ++ tinyxml2::XMLElement *nodeFields = doc.NewElement("fields"); ++ uint32_t bitoffset = 0; ++ ++ do { ++ opcode = info.opcode; ++ boost::replace_all(info.opcode, " ", " "); ++ boost::replace_all(info.opcode, "0 0", "00"); ++ boost::replace_all(info.opcode, "0 1", "01"); ++ boost::replace_all(info.opcode, "1 0", "10"); ++ boost::replace_all(info.opcode, "1 1", "11"); ++ } while (opcode != info.opcode); ++ ++ 
boost::replace_all(info.opcode, "- -", "--"); ++ ++ fields = boost::split(fields, info.opcode, boost::is_any_of(" ")); ++ ++ opcode = ""; ++ info.opcode = ""; ++ unsigned f = 0; ++ for (int i = 0; i < fields.size(); i++) { ++ std::string field = fields[i]; ++ ++ if (field.empty()) { ++ continue; ++ } ++ ++ size_t len = field.length(); ++ boost::cmatch match; ++ tinyxml2::XMLElement *nodeField = doc.NewElement("field"); ++ ++ nodeFields->LinkEndChild(nodeField); ++ ++ if (boost::regex_match(field.c_str(), ++ match, ++ boost::regex("^[01]+$"))) { ++ int length = field.length(); ++ ++ nodeField->SetAttribute("name", field.c_str()); ++ nodeField->SetAttribute("length", length); ++ nodeField->SetAttribute("offset", bitoffset); ++ ++ info.opcode += field; ++ ++ bitoffset += len; ++ } else if (boost::regex_match( ++ field.c_str(), ++ match, ++ boost::regex("^[-]+$"))) ++ { ++ int length = field.length(); ++ ++ nodeField->SetAttribute("name", "RESERVED"); ++ nodeField->SetAttribute("length", length); ++ nodeField->SetAttribute("offset", bitoffset); ++ ++ info.opcode += field; ++ ++ bitoffset += len; ++ } else if (boost::regex_match(field.c_str(), ++ match, ++ boost::regex("^([a-zA-Z][a-zA-Z0-9]*)\\[([0-9]+)\\]"))) { ++ int length = std::atoi(match[2].first); ++ std::string name = std::string(match[1].first, match[1].second); ++ ++ nodeField->SetAttribute("name", name.c_str()); ++ nodeField->SetAttribute("length", length); ++ nodeField->SetAttribute("offset", bitoffset); ++ ++ for (int j = 0; j < length; j++) { ++ info.opcode += 'a' + f; ++ } ++ ++ f++; ++ ++ bitoffset += length; ++ } else if (field == "~") { ++ /* nothing */ ++ } else { ++ std::cout << "cannot parse " << name ++ << ": '" << field << "'" << std::endl; ++ exit(0); ++ } ++ } ++ ++ info.nodeFields = nodeFields; ++ info.name = name; ++ } ++} ++ ++void operator >> (inst_info_t & info, tinyxml2::XMLElement & node) ++{ ++ node.SetAttribute("length", (unsigned)info.opcode.length()); ++ node.SetAttribute("name", info.name.c_str()); ++ node.SetAttribute("opcode", info.opcode.c_str()); ++} ++ ++void operator >> (const YAML::Node & node, cpu_info_t & cpu) ++{ ++ const YAML::Node & insts = node["instructions"]; ++ ++ cpu.name = node["name"].as(); ++ ++ for (unsigned i = 0; i < insts.size(); i++) { ++ inst_info_t *inst = new inst_info_t(); ++ ++ insts[i] >> (*inst); ++ ++ if (inst->opcode != "" &&inst->opcode != "~") { ++ cpu.instructions.push_back(inst); ++ } ++ } ++} ++ ++std::pair getMinMaxInstructionLength( ++ std::vector &instructions) ++{ ++ size_t min = std::numeric_limits::max(); ++ size_t max = std::numeric_limits::min(); ++ ++ for (size_t i = 0; i < instructions.size(); i++) { ++ inst_info_t *inst = instructions[i]; ++ std::string opcode = inst->opcode; ++ size_t length = opcode.length(); ++ ++ if (opcode != "~") { ++ min = std::min(min, length); ++ max = std::max(max, length); ++ } ++ } ++ ++ return std::make_pair(min, max); ++} ++ ++uint64_t getXs(std::string const &opcode, size_t len, char chr) ++{ ++ uint64_t result = 0; ++ size_t cur; ++ uint64_t bit = 1ull << (len - 1); ++ ++ for (cur = 0; cur < len; cur++) { ++ if (opcode[cur] == chr) { ++ result |= bit; ++ } ++ ++ bit >>= 1; ++ } ++ ++ return result; ++} ++ ++uint64_t get0s(std::string const &opcode, size_t len) ++{ ++ return getXs(opcode, len, '0'); ++} ++ ++uint64_t get1s(std::string const &opcode, size_t len) ++{ ++ return getXs(opcode, len, '1'); ++} ++ ++class InstSorter ++{ ++ public: ++ InstSorter(size_t offset, size_t length) ++ : offset(offset), length(length) ++ { ++ 
++ } ++ ++ bool operator()(inst_info_t *a, inst_info_t *b) ++ { ++ uint64_t field0; ++ uint64_t field1; ++ uint64_t fieldA; ++ uint64_t fieldB; ++ ++ field0 = get0s(a->opcode, length); ++ field1 = get1s(a->opcode, length); ++ fieldA = field0 | field1; ++ ++ field0 = get0s(b->opcode, length); ++ field1 = get1s(b->opcode, length); ++ fieldB = field0 | field1; ++ ++ return fieldB < fieldA; ++ } ++ ++ private: ++ size_t offset; ++ size_t length; ++ ++}; ++ ++void divide(uint64_t select0, uint64_t select1, ++ std::vector &info, ++ size_t level, tinyxml2::XMLElement *root) ++{ ++ std::pair minmaxSize; ++ ++ minmaxSize = getMinMaxInstructionLength(info); ++ ++ size_t minlen = minmaxSize.first; ++ size_t maxlen = minmaxSize.second; ++ size_t bits = std::min(minlen, sizeof(select0) * 8); ++ uint64_t all1 = (1ULL << bits) - 1; ++ uint64_t all0 = (1ULL << bits) - 1; ++ uint64_t allx = (1ULL << bits) - 1; ++ uint64_t diff; ++ ++ for (size_t i = 0; i < info.size(); ++i) { ++ std::string opcode = info[i]->opcode; ++ uint64_t field0 = get0s(opcode, minlen); ++ uint64_t field1 = get1s(opcode, minlen); ++ uint64_t fieldx = field0 | field1; ++ ++ if (opcode == "~") { ++ continue; ++ } ++ all0 &= field0; ++ all1 &= field1; ++ allx &= fieldx; ++ } ++ ++ diff = allx ^ (all0 | all1); ++ ++ if (diff == 0) { ++ tinyxml2::XMLElement *oopsNode = doc.NewElement("oops"); ++ oopsNode->SetAttribute("bits", (unsigned)bits); ++ oopsNode->SetAttribute("maxlen", (unsigned)maxlen); ++ oopsNode->SetAttribute("allx", num2hex(allx).c_str()); ++ oopsNode->SetAttribute("all0", num2hex(all0).c_str()); ++ oopsNode->SetAttribute("all1", num2hex(all1).c_str()); ++ oopsNode->SetAttribute("select0", num2hex(select0).c_str()); ++ oopsNode->SetAttribute("select1", num2hex(select1).c_str()); ++ root->LinkEndChild(oopsNode); ++ ++ std::sort(info.begin(), info.end(), InstSorter(0, minlen)); ++ ++ for (size_t i = 0; i < info.size(); ++i) { ++ inst_info_t *inst = info[i]; ++ tinyxml2::XMLElement *instNode = doc.NewElement("instruction"); ++ tinyxml2::XMLElement *matchNode = doc.NewElement("match01"); ++ ++ uint64_t field0 = get0s(inst->opcode, minlen); ++ uint64_t field1 = get1s(inst->opcode, minlen); ++ uint64_t fieldx = field0 | field1; ++ ++ root->LinkEndChild(matchNode); ++ matchNode->LinkEndChild(instNode); ++ ++ matchNode->SetAttribute("mask", num2hex(fieldx).c_str()); ++ matchNode->SetAttribute("value", num2hex(field1).c_str()); ++ ++ *inst >> *instNode; ++ ++ instNode->LinkEndChild(inst->nodeFields); ++ } ++ ++ return; ++ } ++ ++ uint64_t bitsN = countbits(diff); /* number of meaningfull bits */ ++ ++ tinyxml2::XMLElement *switchNode = doc.NewElement("switch"); ++ switchNode->SetAttribute("bits", (unsigned)bits); ++ switchNode->SetAttribute("bitoffset", (unsigned)0); ++ switchNode->SetAttribute("mask", num2hex(diff).c_str()); ++ root->LinkEndChild(switchNode); ++ ++ /* there are at most 1 << length subsets */ ++ for (size_t s = 0; s < (1 << bitsN); ++s) { ++ std::vector subset; ++ uint64_t index = encode(diff, s); ++ ++ tinyxml2::XMLElement *caseNode = doc.NewElement("case"); ++ caseNode->SetAttribute("value", num2hex(index).c_str()); ++ switchNode->LinkEndChild(caseNode); ++ ++ for (size_t i = 0; i < info.size(); ++i) { ++ std::string opcode = info[i]->opcode; ++ uint64_t field0 = get0s(opcode, minlen); ++ uint64_t field1 = get1s(opcode, minlen); ++ ++ if (((field0 & diff) == (~index & diff)) ++ && ((field1 & diff) == (index & diff))) { ++ subset.push_back(info[i]); ++ } ++ } ++ ++ if (subset.size() == 1) { ++ inst_info_t *inst = 
subset[0]; ++ tinyxml2::XMLElement *instNode = doc.NewElement("instruction"); ++ ++ *inst >> *instNode; ++ ++ instNode->LinkEndChild(inst->nodeFields); ++ ++ caseNode->LinkEndChild(instNode); ++ } else if (subset.size() > 1) { ++ /* this is a set of instructions, continue dividing */ ++ divide(select0 | (diff & ~index), ++ select1 | (diff & index), ++ subset, ++ level + 2, ++ caseNode); ++ } ++ } ++} ++ ++void generateParser(cpu_info_t & cpu) ++{ ++ tinyxml2::XMLElement *cpuNode = doc.NewElement("cpu"); ++ tinyxml2::XMLElement *instNode = doc.NewElement("instructions"); ++ ++ cpuNode->SetAttribute("name", cpu.name.c_str()); ++ cpuNode->LinkEndChild(instNode); ++ ++ doc.LinkEndChild(cpuNode); ++ ++ divide(0, 0, cpu.instructions, 1, instNode); ++ ++ doc.SaveFile("output.xml"); ++} ++ ++int main(int argc, char *argv[]) ++{ ++ if (argc != 2) { ++ std::cerr << "error: usage: cpuarg [input.yaml]" << std::endl; ++ std::exit(0); ++ } ++ ++ try { ++ const char *filename = argv[1]; ++ std::ifstream input(filename); ++ YAML::Node doc = YAML::Load(input); ++ cpu_info_t cpu; ++ ++ doc["cpu"] >> cpu; ++ ++ generateParser(cpu); ++ } catch(const YAML::Exception & e) { ++ std::cerr << e.what() << "\n"; ++ } ++} diff --cc target/avr/cpugen/src/utils.cpp index 0000000,0000000..5ef1961 new file mode 100644 --- /dev/null +++ b/target/avr/cpugen/src/utils.cpp @@@ -1,0 -1,0 +1,26 @@@ ++/* ++ * CPUGEN ++ * ++ * Copyright (c) 2016 Michael Rolnik ++ * ++ * Permission is hereby granted, free of charge, to any person obtaining a copy ++ * of this software and associated documentation files (the "Software"), to deal ++ * in the Software without restriction, including without limitation the rights ++ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell ++ * copies of the Software, and to permit persons to whom the Software is ++ * furnished to do so, subject to the following conditions: ++ * ++ * The above copyright notice and this permission notice shall be included in ++ * all copies or substantial portions of the Software. ++ * ++ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR ++ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, ++ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL ++ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER ++ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, ++ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN ++ * THE SOFTWARE. ++ */ ++ ++#include "utils.h" ++#include diff --cc target/avr/cpugen/src/utils.h index 0000000,0000000..0efaa3e new file mode 100644 --- /dev/null +++ b/target/avr/cpugen/src/utils.h @@@ -1,0 -1,0 +1,78 @@@ ++/* ++ * CPUGEN ++ * ++ * Copyright (c) 2016 Michael Rolnik ++ * ++ * Permission is hereby granted, free of charge, to any person obtaining a copy ++ * of this software and associated documentation files (the "Software"), to deal ++ * in the Software without restriction, including without limitation the rights ++ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell ++ * copies of the Software, and to permit persons to whom the Software is ++ * furnished to do so, subject to the following conditions: ++ * ++ * The above copyright notice and this permission notice shall be included in ++ * all copies or substantial portions of the Software. 
++ * ++ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR ++ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, ++ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL ++ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER ++ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, ++ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN ++ * THE SOFTWARE. ++ */ ++ ++#ifndef UTILS_H_ ++#define UTILS_H_ ++ ++#include ++#include ++#include ++#include ++ ++typedef std::vector string_vector_t; ++ ++std::string extract(std::string & str, std::string delimiter); ++std::string rextract(std::string & str, std::string del); ++string_vector_t split(std::string str, std::string delimeter); ++std::string join(string_vector_t const &vec, std::string delimeter); ++ ++int countbits(uint64_t value); ++int encode(uint64_t mask, uint64_t value); ++std::string num2hex(uint64_t value); ++ ++class multi ++{ ++/* ++ http://www.angelikalanger.com/Articles/Cuj/05.Manipulators/Manipulators.html ++*/ ++ public: ++ multi(char c, size_t n) ++ : how_many_(n) ++ , what_(c) ++ { ++ } ++ ++ private: ++ const size_t how_many_; ++ const char what_; ++ ++ public: ++ template ++ Ostream & apply(Ostream & os) const ++ { ++ for (unsigned int i = 0; i < how_many_; ++i) { ++ os.put(what_); ++ } ++ os.flush(); ++ return os; ++ } ++}; ++ ++template ++Ostream & operator << (Ostream & os, const multi & m) ++{ ++ return m.apply(os); ++} ++ ++#endif diff --cc target/avr/cpugen/xsl/decode.c.xsl index 0000000,0000000..b8aa02c new file mode 100644 --- /dev/null +++ b/target/avr/cpugen/xsl/decode.c.xsl @@@ -1,0 -1,0 +1,103 @@@ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++#include <stdint.h> ++#include "translate.h" ++ ++void _decode(uint32_t pc, uint32_t *l, uint32_t c, translate_function_t *t) ++{ ++ ++ ++ ++ ++ ++} ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ diff --cc target/avr/cpugen/xsl/translate-inst.h.xsl index 0000000,0000000..2830ce3 new file mode 100644 --- /dev/null +++ b/target/avr/cpugen/xsl/translate-inst.h.xsl @@@ -1,0 -1,0 +1,118 @@@ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++#ifndef AVR_TRANSLATE_INST_H_ ++#define AVR_TRANSLATE_INST_H_ ++ ++typedef struct DisasContext DisasContext; ++ ++ ++ ++ ++ ++ ++#endif ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ diff --cc target/avr/cpugen/xsl/utils.xsl index 0000000,0000000..b4511b6 new file mode 100644 --- /dev/null +++ b/target/avr/cpugen/xsl/utils.xsl @@@ -1,0 -1,0 +1,108 @@@ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ /* ++ * QEMU AVR CPU ++ * ++ * Copyright (c) 2016 Michael Rolnik ++ * ++ * This library is free software; you can redistribute it and/or ++ * modify it under the terms of the GNU Lesser General Public ++ * License as published by the Free Software Foundation; either ++ * version 2.1 of the License, or (at your option) any later version. ++ * ++ * This library is distributed in the hope that it will be useful, ++ * but WITHOUT ANY WARRANTY; without even the implied warranty of ++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU ++ * Lesser General Public License for more details. 
++ * ++ * You should have received a copy of the GNU Lesser General Public ++ * License along with this library; if not, see ++ * <http://www.gnu.org/licenses/lgpl-2.1.html> ++ */ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ ++ diff --cc target/avr/decode.inc.c index 0000000,0000000..576dd83 new file mode 100644 --- /dev/null +++ b/target/avr/decode.inc.c @@@ -1,0 -1,0 +1,689 @@@ ++/* ++ * QEMU AVR CPU ++ * ++ * Copyright (c) 2016 Michael Rolnik ++ * ++ * This library is free software; you can redistribute it and/or ++ * modify it under the terms of the GNU Lesser General Public ++ * License as published by the Free Software Foundation; either ++ * version 2.1 of the License, or (at your option) any later version. ++ * ++ * This library is distributed in the hope that it will be useful, ++ * but WITHOUT ANY WARRANTY; without even the implied warranty of ++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU ++ * Lesser General Public License for more details. ++ * ++ * You should have received a copy of the GNU Lesser General Public ++ * License along with this library; if not, see ++ * ++ */ ++ ++static void avr_decode(uint32_t pc, uint32_t *l, uint32_t c, ++ translate_function_t *t) ++{ ++ uint32_t opc = extract32(c, 0, 16); ++ switch (opc & 0x0000d000) { ++ case 0x00000000: { ++ switch (opc & 0x00002c00) { ++ case 0x00000000: { ++ switch (opc & 0x00000300) { ++ case 0x00000000: { ++ *l = 16; ++ *t = &avr_translate_NOP; ++ break; ++ } ++ case 0x00000100: { ++ *l = 16; ++ *t = &avr_translate_MOVW; ++ break; ++ } ++ case 0x00000200: { ++ *l = 16; ++ *t = &avr_translate_MULS; ++ break; ++ } ++ case 0x00000300: { ++ switch (opc & 0x00000088) { ++ case 0x00000000: { ++ *l = 16; ++ *t = &avr_translate_MULSU; ++ break; ++ } ++ case 0x00000008: { ++ *l = 16; ++ *t = &avr_translate_FMUL; ++ break; ++ } ++ case 0x00000080: { ++ *l = 16; ++ *t = &avr_translate_FMULS; ++ break; ++ } ++ case 0x00000088: { ++ *l = 16; ++ *t = &avr_translate_FMULSU; ++ break; ++ } ++ } ++ break; ++ } ++ } ++ break; ++ } ++ case 0x00000400: { ++ *l = 16; ++ *t = &avr_translate_CPC; ++ break; ++ } ++ case 0x00000800: { ++ *l = 16; ++ *t = &avr_translate_SBC; ++ break; ++ } ++ case 0x00000c00: { ++ *l = 16; ++ *t = &avr_translate_ADD; ++ break; ++ } ++ case 0x00002000: { ++ *l = 16; ++ *t = &avr_translate_AND; ++ break; ++ } ++ case 0x00002400: { ++ *l = 16; ++ *t = &avr_translate_EOR; ++ break; ++ } ++ case 0x00002800: { ++ *l = 16; ++ *t = &avr_translate_OR; ++ break; ++ } ++ case 0x00002c00: { ++ *l = 16; ++ *t = &avr_translate_MOV; ++ break; ++ } ++ } ++ break; ++ } ++ case 0x00001000: { ++ switch (opc & 0x00002000) { ++ case 0x00000000: { ++ switch (opc & 0x00000c00) { ++ case 0x00000000: { ++ *l = 16; ++ *t = &avr_translate_CPSE; ++ break; ++ } ++ case 0x00000400: { ++ *l = 16; ++ *t = &avr_translate_CP; ++ break; ++ } ++ case 0x00000800: { ++ *l = 16; ++ *t = &avr_translate_SUB; ++ break; ++ } ++ case 0x00000c00: { ++ *l = 16; ++ *t = &avr_translate_ADC; ++ break; ++ } ++ } ++ break; ++ } ++ case 0x00002000: { ++ *l = 16; ++ *t = &avr_translate_CPI; ++ break; ++ } ++ } ++ break; ++ } ++ case 0x00004000: { ++ switch (opc & 0x00002000) { ++ case 0x00000000: { ++ *l = 16; ++ *t = &avr_translate_SBCI; ++ break; ++ } ++ case 0x00002000: { ++ *l = 16; ++ *t = &avr_translate_ORI; ++ break; ++ } ++ } ++ break; ++ } ++ case 0x00005000: { ++ switch (opc & 0x00002000) { ++ case 0x00000000: { ++ *l = 16; ++ *t = 
&avr_translate_SUBI; ++ break; ++ } ++ case 0x00002000: { ++ *l = 16; ++ *t = &avr_translate_ANDI; ++ break; ++ } ++ } ++ break; ++ } ++ case 0x00008000: { ++ switch (opc & 0x00000208) { ++ case 0x00000000: { ++ *l = 16; ++ *t = &avr_translate_LDDZ; ++ break; ++ } ++ case 0x00000008: { ++ *l = 16; ++ *t = &avr_translate_LDDY; ++ break; ++ } ++ case 0x00000200: { ++ *l = 16; ++ *t = &avr_translate_STDZ; ++ break; ++ } ++ case 0x00000208: { ++ *l = 16; ++ *t = &avr_translate_STDY; ++ break; ++ } ++ } ++ break; ++ } ++ case 0x00009000: { ++ switch (opc & 0x00002800) { ++ case 0x00000000: { ++ switch (opc & 0x00000600) { ++ case 0x00000000: { ++ switch (opc & 0x0000000f) { ++ case 0x00000000: { ++ *l = 32; ++ *t = &avr_translate_LDS; ++ break; ++ } ++ case 0x00000001: { ++ *l = 16; ++ *t = &avr_translate_LDZ2; ++ break; ++ } ++ case 0x00000002: { ++ *l = 16; ++ *t = &avr_translate_LDZ3; ++ break; ++ } ++ case 0x00000003: { ++ break; ++ } ++ case 0x00000004: { ++ *l = 16; ++ *t = &avr_translate_LPM2; ++ break; ++ } ++ case 0x00000005: { ++ *l = 16; ++ *t = &avr_translate_LPMX; ++ break; ++ } ++ case 0x00000006: { ++ *l = 16; ++ *t = &avr_translate_ELPM2; ++ break; ++ } ++ case 0x00000007: { ++ *l = 16; ++ *t = &avr_translate_ELPMX; ++ break; ++ } ++ case 0x00000008: { ++ break; ++ } ++ case 0x00000009: { ++ *l = 16; ++ *t = &avr_translate_LDY2; ++ break; ++ } ++ case 0x0000000a: { ++ *l = 16; ++ *t = &avr_translate_LDY3; ++ break; ++ } ++ case 0x0000000b: { ++ break; ++ } ++ case 0x0000000c: { ++ *l = 16; ++ *t = &avr_translate_LDX1; ++ break; ++ } ++ case 0x0000000d: { ++ *l = 16; ++ *t = &avr_translate_LDX2; ++ break; ++ } ++ case 0x0000000e: { ++ *l = 16; ++ *t = &avr_translate_LDX3; ++ break; ++ } ++ case 0x0000000f: { ++ *l = 16; ++ *t = &avr_translate_POP; ++ break; ++ } ++ } ++ break; ++ } ++ case 0x00000200: { ++ switch (opc & 0x0000000f) { ++ case 0x00000000: { ++ *l = 32; ++ *t = &avr_translate_STS; ++ break; ++ } ++ case 0x00000001: { ++ *l = 16; ++ *t = &avr_translate_STZ2; ++ break; ++ } ++ case 0x00000002: { ++ *l = 16; ++ *t = &avr_translate_STZ3; ++ break; ++ } ++ case 0x00000003: { ++ break; ++ } ++ case 0x00000004: { ++ *l = 16; ++ *t = &avr_translate_XCH; ++ break; ++ } ++ case 0x00000005: { ++ *l = 16; ++ *t = &avr_translate_LAS; ++ break; ++ } ++ case 0x00000006: { ++ *l = 16; ++ *t = &avr_translate_LAC; ++ break; ++ } ++ case 0x00000007: { ++ *l = 16; ++ *t = &avr_translate_LAT; ++ break; ++ } ++ case 0x00000008: { ++ break; ++ } ++ case 0x00000009: { ++ *l = 16; ++ *t = &avr_translate_STY2; ++ break; ++ } ++ case 0x0000000a: { ++ *l = 16; ++ *t = &avr_translate_STY3; ++ break; ++ } ++ case 0x0000000b: { ++ break; ++ } ++ case 0x0000000c: { ++ *l = 16; ++ *t = &avr_translate_STX1; ++ break; ++ } ++ case 0x0000000d: { ++ *l = 16; ++ *t = &avr_translate_STX2; ++ break; ++ } ++ case 0x0000000e: { ++ *l = 16; ++ *t = &avr_translate_STX3; ++ break; ++ } ++ case 0x0000000f: { ++ *l = 16; ++ *t = &avr_translate_PUSH; ++ break; ++ } ++ } ++ break; ++ } ++ case 0x00000400: { ++ switch (opc & 0x0000000e) { ++ case 0x00000000: { ++ switch (opc & 0x00000001) { ++ case 0x00000000: { ++ *l = 16; ++ *t = &avr_translate_COM; ++ break; ++ } ++ case 0x00000001: { ++ *l = 16; ++ *t = &avr_translate_NEG; ++ break; ++ } ++ } ++ break; ++ } ++ case 0x00000002: { ++ switch (opc & 0x00000001) { ++ case 0x00000000: { ++ *l = 16; ++ *t = &avr_translate_SWAP; ++ break; ++ } ++ case 0x00000001: { ++ *l = 16; ++ *t = &avr_translate_INC; ++ break; ++ } ++ } ++ break; ++ } ++ case 0x00000004: { ++ *l = 
16; ++ *t = &avr_translate_ASR; ++ break; ++ } ++ case 0x00000006: { ++ switch (opc & 0x00000001) { ++ case 0x00000000: { ++ *l = 16; ++ *t = &avr_translate_LSR; ++ break; ++ } ++ case 0x00000001: { ++ *l = 16; ++ *t = &avr_translate_ROR; ++ break; ++ } ++ } ++ break; ++ } ++ case 0x00000008: { ++ switch (opc & 0x00000181) { ++ case 0x00000000: { ++ *l = 16; ++ *t = &avr_translate_BSET; ++ break; ++ } ++ case 0x00000001: { ++ switch (opc & 0x00000010) { ++ case 0x00000000: { ++ *l = 16; ++ *t = &avr_translate_IJMP; ++ break; ++ } ++ case 0x00000010: { ++ *l = 16; ++ *t = &avr_translate_EIJMP; ++ break; ++ } ++ } ++ break; ++ } ++ case 0x00000080: { ++ *l = 16; ++ *t = &avr_translate_BCLR; ++ break; ++ } ++ case 0x00000081: { ++ break; ++ } ++ case 0x00000100: { ++ switch (opc & 0x00000010) { ++ case 0x00000000: { ++ *l = 16; ++ *t = &avr_translate_RET; ++ break; ++ } ++ case 0x00000010: { ++ *l = 16; ++ *t = &avr_translate_RETI; ++ break; ++ } ++ } ++ break; ++ } ++ case 0x00000101: { ++ switch (opc & 0x00000010) { ++ case 0x00000000: { ++ *l = 16; ++ *t = &avr_translate_ICALL; ++ break; ++ } ++ case 0x00000010: { ++ *l = 16; ++ *t = &avr_translate_EICALL; ++ break; ++ } ++ } ++ break; ++ } ++ case 0x00000180: { ++ switch (opc & 0x00000070) { ++ case 0x00000000: { ++ *l = 16; ++ *t = &avr_translate_SLEEP; ++ break; ++ } ++ case 0x00000010: { ++ *l = 16; ++ *t = &avr_translate_BREAK; ++ break; ++ } ++ case 0x00000020: { ++ *l = 16; ++ *t = &avr_translate_WDR; ++ break; ++ } ++ case 0x00000030: { ++ break; ++ } ++ case 0x00000040: { ++ *l = 16; ++ *t = &avr_translate_LPM1; ++ break; ++ } ++ case 0x00000050: { ++ *l = 16; ++ *t = &avr_translate_ELPM1; ++ break; ++ } ++ case 0x00000060: { ++ *l = 16; ++ *t = &avr_translate_SPM; ++ break; ++ } ++ case 0x00000070: { ++ *l = 16; ++ *t = &avr_translate_SPMX; ++ break; ++ } ++ } ++ break; ++ } ++ case 0x00000181: { ++ break; ++ } ++ } ++ break; ++ } ++ case 0x0000000a: { ++ switch (opc & 0x00000001) { ++ case 0x00000000: { ++ *l = 16; ++ *t = &avr_translate_DEC; ++ break; ++ } ++ case 0x00000001: { ++ *l = 16; ++ *t = &avr_translate_DES; ++ break; ++ } ++ } ++ break; ++ } ++ case 0x0000000c: { ++ *l = 32; ++ *t = &avr_translate_JMP; ++ break; ++ } ++ case 0x0000000e: { ++ *l = 32; ++ *t = &avr_translate_CALL; ++ break; ++ } ++ } ++ break; ++ } ++ case 0x00000600: { ++ switch (opc & 0x00000100) { ++ case 0x00000000: { ++ *l = 16; ++ *t = &avr_translate_ADIW; ++ break; ++ } ++ case 0x00000100: { ++ *l = 16; ++ *t = &avr_translate_SBIW; ++ break; ++ } ++ } ++ break; ++ } ++ } ++ break; ++ } ++ case 0x00000800: { ++ switch (opc & 0x00000400) { ++ case 0x00000000: { ++ switch (opc & 0x00000300) { ++ case 0x00000000: { ++ *l = 16; ++ *t = &avr_translate_CBI; ++ break; ++ } ++ case 0x00000100: { ++ *l = 16; ++ *t = &avr_translate_SBIC; ++ break; ++ } ++ case 0x00000200: { ++ *l = 16; ++ *t = &avr_translate_SBI; ++ break; ++ } ++ case 0x00000300: { ++ *l = 16; ++ *t = &avr_translate_SBIS; ++ break; ++ } ++ } ++ break; ++ } ++ case 0x00000400: { ++ *l = 16; ++ *t = &avr_translate_MUL; ++ break; ++ } ++ } ++ break; ++ } ++ case 0x00002000: { ++ *l = 16; ++ *t = &avr_translate_IN; ++ break; ++ } ++ case 0x00002800: { ++ *l = 16; ++ *t = &avr_translate_OUT; ++ break; ++ } ++ } ++ break; ++ } ++ case 0x0000c000: { ++ switch (opc & 0x00002000) { ++ case 0x00000000: { ++ *l = 16; ++ *t = &avr_translate_RJMP; ++ break; ++ } ++ case 0x00002000: { ++ *l = 16; ++ *t = &avr_translate_LDI; ++ break; ++ } ++ } ++ break; ++ } ++ case 0x0000d000: { ++ switch (opc & 
0x00002000) { ++ case 0x00000000: { ++ *l = 16; ++ *t = &avr_translate_RCALL; ++ break; ++ } ++ case 0x00002000: { ++ switch (opc & 0x00000c00) { ++ case 0x00000000: { ++ *l = 16; ++ *t = &avr_translate_BRBS; ++ break; ++ } ++ case 0x00000400: { ++ *l = 16; ++ *t = &avr_translate_BRBC; ++ break; ++ } ++ case 0x00000800: { ++ switch (opc & 0x00000200) { ++ case 0x00000000: { ++ *l = 16; ++ *t = &avr_translate_BLD; ++ break; ++ } ++ case 0x00000200: { ++ *l = 16; ++ *t = &avr_translate_BST; ++ break; ++ } ++ } ++ break; ++ } ++ case 0x00000c00: { ++ switch (opc & 0x00000200) { ++ case 0x00000000: { ++ *l = 16; ++ *t = &avr_translate_SBRC; ++ break; ++ } ++ case 0x00000200: { ++ *l = 16; ++ *t = &avr_translate_SBRS; ++ break; ++ } ++ } ++ break; ++ } ++ } ++ break; ++ } ++ } ++ break; ++ } ++ } ++} diff --cc target/avr/gdbstub.c index 0000000,0000000..537dc72 new file mode 100644 --- /dev/null +++ b/target/avr/gdbstub.c @@@ -1,0 -1,0 +1,85 @@@ ++/* ++ * QEMU AVR CPU ++ * ++ * Copyright (c) 2016 Michael Rolnik ++ * ++ * This library is free software; you can redistribute it and/or ++ * modify it under the terms of the GNU Lesser General Public ++ * License as published by the Free Software Foundation; either ++ * version 2.1 of the License, or (at your option) any later version. ++ * ++ * This library is distributed in the hope that it will be useful, ++ * but WITHOUT ANY WARRANTY; without even the implied warranty of ++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU ++ * Lesser General Public License for more details. ++ * ++ * You should have received a copy of the GNU Lesser General Public ++ * License along with this library; if not, see ++ * ++ */ ++ ++#include "qemu/osdep.h" ++#include "qemu-common.h" ++#include "exec/gdbstub.h" ++ ++int avr_cpu_gdb_read_register(CPUState *cs, uint8_t *mem_buf, int n) ++{ ++ AVRCPU *cpu = AVR_CPU(cs); ++ CPUAVRState *env = &cpu->env; ++ ++ /* R */ ++ if (n < 32) { ++ return gdb_get_reg8(mem_buf, env->r[n]); ++ } ++ ++ /* SREG */ ++ if (n == 32) { ++ uint8_t sreg = cpu_get_sreg(env); ++ ++ return gdb_get_reg8(mem_buf, sreg); ++ } ++ ++ /* SP */ ++ if (n == 33) { ++ return gdb_get_reg16(mem_buf, env->sp & 0x0000ffff); ++ } ++ ++ /* PC */ ++ if (n == 34) { ++ return gdb_get_reg32(mem_buf, env->pc_w * 2); ++ } ++ ++ return 0; ++} ++ ++int avr_cpu_gdb_write_register(CPUState *cs, uint8_t *mem_buf, int n) ++{ ++ AVRCPU *cpu = AVR_CPU(cs); ++ CPUAVRState *env = &cpu->env; ++ ++ /* R */ ++ if (n < 32) { ++ env->r[n] = *mem_buf; ++ return 1; ++ } ++ ++ /* SREG */ ++ if (n == 32) { ++ cpu_set_sreg(env, *mem_buf); ++ return 1; ++ } ++ ++ /* SP */ ++ if (n == 33) { ++ env->sp = lduw_p(mem_buf); ++ return 2; ++ } ++ ++ /* PC */ ++ if (n == 34) { ++ env->pc_w = ldl_p(mem_buf) / 2; ++ return 4; ++ } ++ ++ return 0; ++} diff --cc target/avr/helper.c index 0000000,0000000..bc53053 new file mode 100644 --- /dev/null +++ b/target/avr/helper.c @@@ -1,0 -1,0 +1,355 @@@ ++/* ++ * QEMU AVR CPU ++ * ++ * Copyright (c) 2016 Michael Rolnik ++ * ++ * This library is free software; you can redistribute it and/or ++ * modify it under the terms of the GNU Lesser General Public ++ * License as published by the Free Software Foundation; either ++ * version 2.1 of the License, or (at your option) any later version. ++ * ++ * This library is distributed in the hope that it will be useful, ++ * but WITHOUT ANY WARRANTY; without even the implied warranty of ++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU ++ * Lesser General Public License for more details. ++ * ++ * You should have received a copy of the GNU Lesser General Public ++ * License along with this library; if not, see ++ * ++ */ ++ ++#include "qemu/osdep.h" ++ ++#include "cpu.h" ++#include "hw/irq.h" ++#include "include/hw/sysbus.h" ++#include "include/sysemu/sysemu.h" ++#include "exec/exec-all.h" ++#include "exec/cpu_ldst.h" ++#include "qemu/host-utils.h" ++#include "exec/helper-proto.h" ++#include "exec/ioport.h" ++ ++bool avr_cpu_exec_interrupt(CPUState *cs, int interrupt_request) ++{ ++ bool ret = false; ++ CPUClass *cc = CPU_GET_CLASS(cs); ++ AVRCPU *cpu = AVR_CPU(cs); ++ CPUAVRState *env = &cpu->env; ++ ++ if (interrupt_request & CPU_INTERRUPT_RESET) { ++ if (cpu_interrupts_enabled(env)) { ++ cs->exception_index = EXCP_RESET; ++ cc->do_interrupt(cs); ++ ++ cs->interrupt_request &= ~CPU_INTERRUPT_RESET; ++ ++ ret = true; ++ } ++ } ++ if (interrupt_request & CPU_INTERRUPT_HARD) { ++ if (cpu_interrupts_enabled(env) && env->intsrc != 0) { ++ int index = ctz32(env->intsrc); ++ cs->exception_index = EXCP_INT(index); ++ cc->do_interrupt(cs); ++ ++ env->intsrc &= env->intsrc - 1; /* clear the interrupt */ ++ cs->interrupt_request &= ~CPU_INTERRUPT_HARD; ++ ++ ret = true; ++ } ++ } ++ return ret; ++} ++ ++void avr_cpu_do_interrupt(CPUState *cs) ++{ ++ AVRCPU *cpu = AVR_CPU(cs); ++ CPUAVRState *env = &cpu->env; ++ ++ uint32_t ret = env->pc_w; ++ int vector = 0; ++ int size = avr_feature(env, AVR_FEATURE_JMP_CALL) ? 2 : 1; ++ int base = 0; /* TODO: where to get it */ ++ ++ if (cs->exception_index == EXCP_RESET) { ++ vector = 0; ++ } else if (env->intsrc != 0) { ++ vector = ctz32(env->intsrc) + 1; ++ } ++ ++ if (avr_feature(env, AVR_FEATURE_3_BYTE_PC)) { ++ cpu_stb_data(env, env->sp--, (ret & 0x0000ff)); ++ cpu_stb_data(env, env->sp--, (ret & 0x00ff00) >> 8); ++ cpu_stb_data(env, env->sp--, (ret & 0xff0000) >> 16); ++ } else if (avr_feature(env, AVR_FEATURE_2_BYTE_PC)) { ++ cpu_stb_data(env, env->sp--, (ret & 0x0000ff)); ++ cpu_stb_data(env, env->sp--, (ret & 0x00ff00) >> 8); ++ } else { ++ cpu_stb_data(env, env->sp--, (ret & 0x0000ff)); ++ } ++ ++ env->pc_w = base + vector * size; ++ env->sregI = 0; /* clear Global Interrupt Flag */ ++ ++ cs->exception_index = -1; ++} ++ ++int avr_cpu_memory_rw_debug(CPUState *cs, vaddr addr, uint8_t *buf, ++ int len, bool is_write) ++{ ++ return cpu_memory_rw_debug(cs, addr, buf, len, is_write); ++} ++ ++hwaddr avr_cpu_get_phys_page_debug(CPUState *cs, vaddr addr) ++{ ++ return addr; /* I assume 1:1 address correspondance */ ++} ++ ++int avr_cpu_handle_mmu_fault(CPUState *cs, vaddr address, int rw, int mmu_idx) ++{ ++ /* currently it's assumed that this will never happen */ ++ cs->exception_index = EXCP_DEBUG; ++ cpu_dump_state(cs, stderr, fprintf, 0); ++ return 1; ++} ++ ++void tlb_fill(CPUState *cs, target_ulong vaddr, MMUAccessType access_type, ++ int mmu_idx, uintptr_t retaddr) ++{ ++ target_ulong page_size = TARGET_PAGE_SIZE; ++ int prot = 0; ++ MemTxAttrs attrs = {}; ++ uint32_t paddr; ++ ++ vaddr &= TARGET_PAGE_MASK; ++ ++ if (mmu_idx == MMU_CODE_IDX) { ++ paddr = PHYS_BASE_CODE + vaddr - VIRT_BASE_CODE; ++ prot = PAGE_READ | PAGE_EXEC; ++ } else if (vaddr - VIRT_BASE_REGS < AVR_REGS) { ++ /* ++ * this is a write into CPU registers, exit and rebuilt this TB ++ * to use full write ++ */ ++ AVRCPU *cpu = AVR_CPU(cs); ++ CPUAVRState *env = &cpu->env; ++ env->fullacc = 1; ++ cpu_loop_exit_restore(cs, retaddr); ++ } else { ++ /* ++ * this is a write into memory. 
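++ * (anything outside the code space and the CPU register window is
++ * plain data memory, so the virtual address is simply rebased onto
++ * PHYS_BASE_DATA below)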
nothing special ++ */ ++ paddr = PHYS_BASE_DATA + vaddr - VIRT_BASE_DATA; ++ prot = PAGE_READ | PAGE_WRITE; ++ } ++ ++ tlb_set_page_with_attrs(cs, vaddr, paddr, attrs, prot, mmu_idx, page_size); ++} ++ ++void helper_sleep(CPUAVRState *env) ++{ ++ CPUState *cs = CPU(avr_env_get_cpu(env)); ++ ++ cs->exception_index = EXCP_HLT; ++ cpu_loop_exit(cs); ++} ++ ++void helper_unsupported(CPUAVRState *env) ++{ ++ CPUState *cs = CPU(avr_env_get_cpu(env)); ++ ++ /* ++ * I count not find what happens on the real platform, so ++ * it's EXCP_DEBUG for meanwhile ++ */ ++ cs->exception_index = EXCP_DEBUG; ++ if (qemu_loglevel_mask(LOG_UNIMP)) { ++ qemu_log("UNSUPPORTED\n"); ++ cpu_dump_state(cs, qemu_logfile, fprintf, 0); ++ } ++ cpu_loop_exit(cs); ++} ++ ++void helper_debug(CPUAVRState *env) ++{ ++ CPUState *cs = CPU(avr_env_get_cpu(env)); ++ ++ cs->exception_index = EXCP_DEBUG; ++ cpu_loop_exit(cs); ++} ++ ++void helper_wdr(CPUAVRState *env) ++{ ++ CPUState *cs = CPU(avr_env_get_cpu(env)); ++ ++ /* WD is not implemented yet, placeholder */ ++ cs->exception_index = EXCP_DEBUG; ++ cpu_loop_exit(cs); ++} ++ ++/* ++ * This function implements IN instruction ++ * ++ * It does the following ++ * a. if an IO register belongs to CPU, its value is read and returned ++ * b. otherwise io address is translated to mem address and physical memory ++ * is read. ++ * c. it caches the value for sake of SBI, SBIC, SBIS & CBI implementation ++ * ++ */ ++target_ulong helper_inb(CPUAVRState *env, uint32_t port) ++{ ++ target_ulong data = 0; ++ ++ switch (port) { ++ case 0x38: /* RAMPD */ ++ data = 0xff & (env->rampD >> 16); ++ break; ++ case 0x39: /* RAMPX */ ++ data = 0xff & (env->rampX >> 16); ++ break; ++ case 0x3a: /* RAMPY */ ++ data = 0xff & (env->rampY >> 16); ++ break; ++ case 0x3b: /* RAMPZ */ ++ data = 0xff & (env->rampZ >> 16); ++ break; ++ case 0x3c: /* EIND */ ++ data = 0xff & (env->eind >> 16); ++ break; ++ case 0x3d: /* SPL */ ++ data = env->sp & 0x00ff; ++ break; ++ case 0x3e: /* SPH */ ++ data = env->sp >> 8; ++ break; ++ case 0x3f: /* SREG */ ++ data = cpu_get_sreg(env); ++ break; ++ default: ++ /* ++ * CPU does not know how to read this register, pass it to the ++ * device/board ++ */ ++ cpu_physical_memory_read(PHYS_BASE_REGS + port + AVR_CPU_IO_REGS_BASE, ++ &data, 1); ++ } ++ ++ return data; ++} ++ ++/* ++ * This function implements OUT instruction ++ * ++ * It does the following ++ * a. if an IO register belongs to CPU, its value is written into the register ++ * b. otherwise io address is translated to mem address and physical memory ++ * is written. ++ * c. 
it caches the value for sake of SBI, SBIC, SBIS & CBI implementation ++ * ++ */ ++void helper_outb(CPUAVRState *env, uint32_t port, uint32_t data) ++{ ++ data &= 0x000000ff; ++ ++ switch (port) { ++ case 0x04: ++ { ++ CPUState *cpu = CPU(avr_env_get_cpu(env)); ++ qemu_irq irq = qdev_get_gpio_in(DEVICE(cpu), 3); ++ qemu_set_irq(irq, 1); ++ } ++ break; ++ case 0x38: /* RAMPD */ ++ if (avr_feature(env, AVR_FEATURE_RAMPD)) { ++ env->rampD = (data & 0xff) << 16; ++ } ++ break; ++ case 0x39: /* RAMPX */ ++ if (avr_feature(env, AVR_FEATURE_RAMPX)) { ++ env->rampX = (data & 0xff) << 16; ++ } ++ break; ++ case 0x3a: /* RAMPY */ ++ if (avr_feature(env, AVR_FEATURE_RAMPY)) { ++ env->rampY = (data & 0xff) << 16; ++ } ++ break; ++ case 0x3b: /* RAMPZ */ ++ if (avr_feature(env, AVR_FEATURE_RAMPZ)) { ++ env->rampZ = (data & 0xff) << 16; ++ } ++ break; ++ case 0x3c: /* EIDN */ ++ env->eind = (data & 0xff) << 16; ++ break; ++ case 0x3d: /* SPL */ ++ env->sp = (env->sp & 0xff00) | (data); ++ break; ++ case 0x3e: /* SPH */ ++ if (avr_feature(env, AVR_FEATURE_2_BYTE_SP)) { ++ env->sp = (env->sp & 0x00ff) | (data << 8); ++ } ++ break; ++ case 0x3f: /* SREG */ ++ cpu_set_sreg(env, data); ++ break; ++ default: ++ /* ++ * CPU does not know how to write this register, pass it to the ++ * device/board ++ */ ++ cpu_physical_memory_write(PHYS_BASE_REGS + port + AVR_CPU_IO_REGS_BASE, ++ &data, 1); ++ } ++} ++ ++/* ++ * this function implements LD instruction when there is a posibility to read ++ * from a CPU register ++ */ ++target_ulong helper_fullrd(CPUAVRState *env, uint32_t addr) ++{ ++ uint8_t data; ++ ++ env->fullacc = false; ++ switch (addr) { ++ case AVR_CPU_REGS_BASE ... AVR_CPU_REGS_LAST: ++ /* CPU registers */ ++ data = env->r[addr - AVR_CPU_REGS_BASE]; ++ break; ++ case AVR_CPU_IO_REGS_BASE ... AVR_CPU_IO_REGS_LAST: ++ /* CPU IO registers */ ++ data = helper_inb(env, addr); ++ break; ++ default: ++ /* memory */ ++ cpu_physical_memory_read(PHYS_BASE_DATA + addr - VIRT_BASE_DATA, ++ &data, 1); ++ } ++ return data; ++} ++ ++/* ++ * this function implements LD instruction when there is a posibility to write ++ * into a CPU register ++ */ ++void helper_fullwr(CPUAVRState *env, uint32_t data, uint32_t addr) ++{ ++ env->fullacc = false; ++ switch (addr) { ++ case AVR_CPU_REGS_BASE ... AVR_CPU_REGS_LAST: ++ /* CPU registers */ ++ env->r[addr - AVR_CPU_REGS_BASE] = data; ++ break; ++ case AVR_CPU_IO_REGS_BASE ... AVR_CPU_IO_REGS_LAST: ++ /* CPU IO registers */ ++ helper_outb(env, data, addr); ++ break; ++ default: ++ /* memory */ ++ cpu_physical_memory_write(PHYS_BASE_DATA + addr - VIRT_BASE_DATA, ++ &data, 1); ++ } ++} diff --cc target/avr/helper.h index 0000000,0000000..6036315 new file mode 100644 --- /dev/null +++ b/target/avr/helper.h @@@ -1,0 -1,0 +1,28 @@@ ++/* ++ * QEMU AVR CPU ++ * ++ * Copyright (c) 2016 Michael Rolnik ++ * ++ * This library is free software; you can redistribute it and/or ++ * modify it under the terms of the GNU Lesser General Public ++ * License as published by the Free Software Foundation; either ++ * version 2.1 of the License, or (at your option) any later version. ++ * ++ * This library is distributed in the hope that it will be useful, ++ * but WITHOUT ANY WARRANTY; without even the implied warranty of ++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU ++ * Lesser General Public License for more details. 
++ * ++ * You should have received a copy of the GNU Lesser General Public ++ * License along with this library; if not, see ++ * ++ */ ++ ++DEF_HELPER_1(wdr, void, env) ++DEF_HELPER_1(debug, void, env) ++DEF_HELPER_1(sleep, void, env) ++DEF_HELPER_1(unsupported, void, env) ++DEF_HELPER_3(outb, void, env, i32, i32) ++DEF_HELPER_2(inb, tl, env, i32) ++DEF_HELPER_3(fullwr, void, env, i32, i32) ++DEF_HELPER_2(fullrd, tl, env, i32) diff --cc target/avr/machine.c index 0000000,0000000..56706c4 new file mode 100644 --- /dev/null +++ b/target/avr/machine.c @@@ -1,0 -1,0 +1,116 @@@ ++/* ++ * QEMU AVR CPU ++ * ++ * Copyright (c) 2016 Michael Rolnik ++ * ++ * This library is free software; you can redistribute it and/or ++ * modify it under the terms of the GNU Lesser General Public ++ * License as published by the Free Software Foundation; either ++ * version 2.1 of the License, or (at your option) any later version. ++ * ++ * This library is distributed in the hope that it will be useful, ++ * but WITHOUT ANY WARRANTY; without even the implied warranty of ++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU ++ * Lesser General Public License for more details. ++ * ++ * You should have received a copy of the GNU Lesser General Public ++ * License along with this library; if not, see ++ * ++ */ ++ ++#include "qemu/osdep.h" ++#include "hw/hw.h" ++#include "cpu.h" ++#include "hw/boards.h" ++#include "migration/qemu-file.h" ++ ++static int get_sreg(QEMUFile *f, void *opaque, size_t size, VMStateField *field) ++{ ++ CPUAVRState *env = opaque; ++ uint8_t sreg; ++ ++ sreg = qemu_get_ubyte(f); ++ cpu_set_sreg(env, sreg); ++ return 0; ++} ++ ++static int put_sreg(QEMUFile *f, void *opaque, size_t size, VMStateField *field, QJSON *vmdesc) ++{ ++ CPUAVRState *env = opaque; ++ uint8_t sreg = cpu_get_sreg(env); ++ ++ qemu_put_ubyte(f, sreg); ++ return 0; ++} ++ ++static const VMStateInfo vms_sreg = { ++ .name = "sreg", ++ .get = get_sreg, ++ .put = put_sreg, ++}; ++ ++static int get_segment(QEMUFile *f, void *opaque, size_t size, VMStateField *field) ++{ ++ uint32_t *ramp = opaque; ++ uint8_t temp; ++ ++ temp = qemu_get_ubyte(f); ++ *ramp = ((uint32_t)temp) << 16; ++ return 0; ++} ++ ++static int put_segment(QEMUFile *f, void *opaque, size_t size, VMStateField *field, QJSON *vmdesc) ++{ ++ uint32_t *ramp = opaque; ++ uint8_t temp = *ramp >> 16; ++ ++ qemu_put_ubyte(f, temp); ++ return 0; ++} ++ ++static const VMStateInfo vms_rampD = { ++ .name = "rampD", ++ .get = get_segment, ++ .put = put_segment, ++}; ++static const VMStateInfo vms_rampX = { ++ .name = "rampX", ++ .get = get_segment, ++ .put = put_segment, ++}; ++static const VMStateInfo vms_rampY = { ++ .name = "rampY", ++ .get = get_segment, ++ .put = put_segment, ++}; ++static const VMStateInfo vms_rampZ = { ++ .name = "rampZ", ++ .get = get_segment, ++ .put = put_segment, ++}; ++static const VMStateInfo vms_eind = { ++ .name = "eind", ++ .get = get_segment, ++ .put = put_segment, ++}; ++ ++const VMStateDescription vms_avr_cpu = { ++ .name = "cpu", ++ .version_id = 0, ++ .minimum_version_id = 0, ++ .fields = (VMStateField[]) { ++ VMSTATE_UINT32(env.pc_w, AVRCPU), ++ VMSTATE_UINT32(env.sp, AVRCPU), ++ ++ VMSTATE_UINT32_ARRAY(env.r, AVRCPU, AVR_CPU_REGS), ++ ++ VMSTATE_SINGLE(env, AVRCPU, 0, vms_sreg, CPUAVRState), ++ VMSTATE_SINGLE(env.rampD, AVRCPU, 0, vms_rampD, uint32_t), ++ VMSTATE_SINGLE(env.rampX, AVRCPU, 0, vms_rampX, uint32_t), ++ VMSTATE_SINGLE(env.rampY, AVRCPU, 0, vms_rampY, uint32_t), ++ VMSTATE_SINGLE(env.rampZ, AVRCPU, 0, 
vms_rampZ, uint32_t), ++ VMSTATE_SINGLE(env.eind, AVRCPU, 0, vms_eind, uint32_t), ++ ++ VMSTATE_END_OF_LIST() ++ } ++}; diff --cc target/avr/translate-inst.h index 0000000,0000000..7371c6f new file mode 100644 --- /dev/null +++ b/target/avr/translate-inst.h @@@ -1,0 -1,0 +1,691 @@@ ++/* ++ * QEMU AVR CPU ++ * ++ * Copyright (c) 2016 Michael Rolnik ++ * ++ * This library is free software; you can redistribute it and/or ++ * modify it under the terms of the GNU Lesser General Public ++ * License as published by the Free Software Foundation; either ++ * version 2.1 of the License, or (at your option) any later version. ++ * ++ * This library is distributed in the hope that it will be useful, ++ * but WITHOUT ANY WARRANTY; without even the implied warranty of ++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU ++ * Lesser General Public License for more details. ++ * ++ * You should have received a copy of the GNU Lesser General Public ++ * License along with this library; if not, see ++ * ++ */ ++ ++#ifndef AVR_TRANSLATE_INST_H_ ++#define AVR_TRANSLATE_INST_H_ ++ ++static inline uint32_t MOVW_Rr(uint32_t opcode) ++{ ++ return extract32(opcode, 0, 4); ++} ++ ++static inline uint32_t MOVW_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 4); ++} ++ ++static inline uint32_t MULS_Rr(uint32_t opcode) ++{ ++ return extract32(opcode, 0, 4); ++} ++ ++static inline uint32_t MULS_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 4); ++} ++ ++static inline uint32_t MULSU_Rr(uint32_t opcode) ++{ ++ return extract32(opcode, 0, 3); ++} ++ ++static inline uint32_t MULSU_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 3); ++} ++ ++static inline uint32_t FMUL_Rr(uint32_t opcode) ++{ ++ return extract32(opcode, 0, 3); ++} ++ ++static inline uint32_t FMUL_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 3); ++} ++ ++static inline uint32_t FMULS_Rr(uint32_t opcode) ++{ ++ return extract32(opcode, 0, 3); ++} ++ ++static inline uint32_t FMULS_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 3); ++} ++ ++static inline uint32_t FMULSU_Rr(uint32_t opcode) ++{ ++ return extract32(opcode, 0, 3); ++} ++ ++static inline uint32_t FMULSU_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 3); ++} ++ ++static inline uint32_t CPC_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t CPC_Rr(uint32_t opcode) ++{ ++ return (extract32(opcode, 9, 1) << 4) | ++ (extract32(opcode, 0, 4)); ++} ++ ++static inline uint32_t SBC_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t SBC_Rr(uint32_t opcode) ++{ ++ return (extract32(opcode, 9, 1) << 4) | ++ (extract32(opcode, 0, 4)); ++} ++ ++static inline uint32_t ADD_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t ADD_Rr(uint32_t opcode) ++{ ++ return (extract32(opcode, 9, 1) << 4) | ++ (extract32(opcode, 0, 4)); ++} ++ ++static inline uint32_t AND_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t AND_Rr(uint32_t opcode) ++{ ++ return (extract32(opcode, 9, 1) << 4) | ++ (extract32(opcode, 0, 4)); ++} ++ ++static inline uint32_t EOR_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t EOR_Rr(uint32_t opcode) ++{ ++ return (extract32(opcode, 9, 1) << 4) | ++ (extract32(opcode, 0, 4)); ++} ++ ++static inline uint32_t OR_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t OR_Rr(uint32_t opcode) ++{ ++ return 
(extract32(opcode, 9, 1) << 4) | ++ (extract32(opcode, 0, 4)); ++} ++ ++static inline uint32_t MOV_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t MOV_Rr(uint32_t opcode) ++{ ++ return (extract32(opcode, 9, 1) << 4) | ++ (extract32(opcode, 0, 4)); ++} ++ ++static inline uint32_t CPSE_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t CPSE_Rr(uint32_t opcode) ++{ ++ return (extract32(opcode, 9, 1) << 4) | ++ (extract32(opcode, 0, 4)); ++} ++ ++static inline uint32_t CP_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t CP_Rr(uint32_t opcode) ++{ ++ return (extract32(opcode, 9, 1) << 4) | ++ (extract32(opcode, 0, 4)); ++} ++ ++static inline uint32_t SUB_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t SUB_Rr(uint32_t opcode) ++{ ++ return (extract32(opcode, 9, 1) << 4) | ++ (extract32(opcode, 0, 4)); ++} ++ ++static inline uint32_t ADC_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t ADC_Rr(uint32_t opcode) ++{ ++ return (extract32(opcode, 9, 1) << 4) | ++ (extract32(opcode, 0, 4)); ++} ++ ++static inline uint32_t CPI_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 4); ++} ++ ++static inline uint32_t CPI_Imm(uint32_t opcode) ++{ ++ return (extract32(opcode, 8, 4) << 4) | ++ (extract32(opcode, 0, 4)); ++} ++ ++static inline uint32_t SBCI_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 4); ++} ++ ++static inline uint32_t SBCI_Imm(uint32_t opcode) ++{ ++ return (extract32(opcode, 8, 4) << 4) | ++ (extract32(opcode, 0, 4)); ++} ++ ++static inline uint32_t ORI_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 4); ++} ++ ++static inline uint32_t ORI_Imm(uint32_t opcode) ++{ ++ return (extract32(opcode, 8, 4) << 4) | ++ (extract32(opcode, 0, 4)); ++} ++ ++static inline uint32_t SUBI_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 4); ++} ++ ++static inline uint32_t SUBI_Imm(uint32_t opcode) ++{ ++ return (extract32(opcode, 8, 4) << 4) | ++ (extract32(opcode, 0, 4)); ++} ++ ++static inline uint32_t ANDI_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 4); ++} ++ ++static inline uint32_t ANDI_Imm(uint32_t opcode) ++{ ++ return (extract32(opcode, 8, 4) << 4) | ++ (extract32(opcode, 0, 4)); ++} ++ ++static inline uint32_t LDDZ_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t LDDZ_Imm(uint32_t opcode) ++{ ++ return (extract32(opcode, 13, 1) << 5) | ++ (extract32(opcode, 10, 2) << 3) | ++ (extract32(opcode, 0, 3)); ++} ++ ++static inline uint32_t LDDY_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t LDDY_Imm(uint32_t opcode) ++{ ++ return (extract32(opcode, 13, 1) << 5) | ++ (extract32(opcode, 10, 2) << 3) | ++ (extract32(opcode, 0, 3)); ++} ++ ++static inline uint32_t STDZ_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t STDZ_Imm(uint32_t opcode) ++{ ++ return (extract32(opcode, 13, 1) << 5) | ++ (extract32(opcode, 10, 2) << 3) | ++ (extract32(opcode, 0, 3)); ++} ++ ++static inline uint32_t STDY_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t STDY_Imm(uint32_t opcode) ++{ ++ return (extract32(opcode, 13, 1) << 5) | ++ (extract32(opcode, 10, 2) << 3) | ++ (extract32(opcode, 0, 3)); ++} ++ ++static inline uint32_t LDS_Imm(uint32_t opcode) ++{ ++ return extract32(opcode, 0, 16); ++} ++ ++static inline 
uint32_t LDS_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 20, 5); ++} ++ ++static inline uint32_t LDZ2_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t LDZ3_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t LPM2_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t LPMX_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t ELPM2_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t ELPMX_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t LDY2_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t LDY3_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t LDX1_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t LDX2_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t LDX3_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t POP_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t STS_Imm(uint32_t opcode) ++{ ++ return extract32(opcode, 0, 16); ++} ++ ++static inline uint32_t STS_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 20, 5); ++} ++ ++static inline uint32_t STZ2_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t STZ3_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t XCH_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t LAS_Rr(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t LAC_Rr(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t LAT_Rr(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t STY2_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t STY3_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t STX1_Rr(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t STX2_Rr(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t STX3_Rr(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t PUSH_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t COM_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t NEG_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t SWAP_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t INC_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t ASR_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t LSR_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t ROR_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t BSET_Bit(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 3); ++} ++ ++static inline uint32_t BCLR_Bit(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 3); ++} ++ ++static inline uint32_t DEC_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t 
DES_Imm(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 4); ++} ++ ++static inline uint32_t JMP_Imm(uint32_t opcode) ++{ ++ return (extract32(opcode, 20, 5) << 17) | ++ (extract32(opcode, 0, 17)); ++} ++ ++static inline uint32_t CALL_Imm(uint32_t opcode) ++{ ++ return (extract32(opcode, 20, 5) << 17) | ++ (extract32(opcode, 0, 17)); ++} ++ ++static inline uint32_t ADIW_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 2); ++} ++ ++static inline uint32_t ADIW_Imm(uint32_t opcode) ++{ ++ return (extract32(opcode, 6, 2) << 4) | ++ (extract32(opcode, 0, 4)); ++} ++ ++static inline uint32_t SBIW_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 2); ++} ++ ++static inline uint32_t SBIW_Imm(uint32_t opcode) ++{ ++ return (extract32(opcode, 6, 2) << 4) | ++ (extract32(opcode, 0, 4)); ++} ++ ++static inline uint32_t CBI_Bit(uint32_t opcode) ++{ ++ return extract32(opcode, 0, 3); ++} ++ ++static inline uint32_t CBI_Imm(uint32_t opcode) ++{ ++ return extract32(opcode, 3, 5); ++} ++ ++static inline uint32_t SBIC_Bit(uint32_t opcode) ++{ ++ return extract32(opcode, 0, 3); ++} ++ ++static inline uint32_t SBIC_Imm(uint32_t opcode) ++{ ++ return extract32(opcode, 3, 5); ++} ++ ++static inline uint32_t SBI_Bit(uint32_t opcode) ++{ ++ return extract32(opcode, 0, 3); ++} ++ ++static inline uint32_t SBI_Imm(uint32_t opcode) ++{ ++ return extract32(opcode, 3, 5); ++} ++ ++static inline uint32_t SBIS_Bit(uint32_t opcode) ++{ ++ return extract32(opcode, 0, 3); ++} ++ ++static inline uint32_t SBIS_Imm(uint32_t opcode) ++{ ++ return extract32(opcode, 3, 5); ++} ++ ++static inline uint32_t MUL_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t MUL_Rr(uint32_t opcode) ++{ ++ return (extract32(opcode, 9, 1) << 4) | ++ (extract32(opcode, 0, 4)); ++} ++ ++static inline uint32_t IN_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t IN_Imm(uint32_t opcode) ++{ ++ return (extract32(opcode, 9, 2) << 4) | ++ (extract32(opcode, 0, 4)); ++} ++ ++static inline uint32_t OUT_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t OUT_Imm(uint32_t opcode) ++{ ++ return (extract32(opcode, 9, 2) << 4) | ++ (extract32(opcode, 0, 4)); ++} ++ ++static inline uint32_t RJMP_Imm(uint32_t opcode) ++{ ++ return extract32(opcode, 0, 12); ++} ++ ++static inline uint32_t LDI_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 4); ++} ++ ++static inline uint32_t LDI_Imm(uint32_t opcode) ++{ ++ return (extract32(opcode, 8, 4) << 4) | ++ (extract32(opcode, 0, 4)); ++} ++ ++static inline uint32_t RCALL_Imm(uint32_t opcode) ++{ ++ return extract32(opcode, 0, 12); ++} ++ ++static inline uint32_t BRBS_Bit(uint32_t opcode) ++{ ++ return extract32(opcode, 0, 3); ++} ++ ++static inline uint32_t BRBS_Imm(uint32_t opcode) ++{ ++ return extract32(opcode, 3, 7); ++} ++ ++static inline uint32_t BRBC_Bit(uint32_t opcode) ++{ ++ return extract32(opcode, 0, 3); ++} ++ ++static inline uint32_t BRBC_Imm(uint32_t opcode) ++{ ++ return extract32(opcode, 3, 7); ++} ++ ++static inline uint32_t BLD_Bit(uint32_t opcode) ++{ ++ return extract32(opcode, 0, 3); ++} ++ ++static inline uint32_t BLD_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t BST_Bit(uint32_t opcode) ++{ ++ return extract32(opcode, 0, 3); ++} ++ ++static inline uint32_t BST_Rd(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t SBRC_Bit(uint32_t opcode) ++{ ++ return extract32(opcode, 0, 3); ++} 
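++
++/*
++ * A worked example of the accessor pattern used throughout this header:
++ * each helper extracts one operand field from the raw opcode with
++ * extract32(), and fields that the encoding splits across the word are
++ * reassembled by shifting.  The 16-bit opcode 0x0e5a (0000 1110 0101 1010)
++ * decodes as ADD: ADD_Rd() returns bits 4..8 = 5, and ADD_Rr() returns
++ * (bit 9 << 4) | bits 0..3 = 0x10 | 0x0a = 26, i.e. "add r5, r26".
++ */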
++ ++static inline uint32_t SBRC_Rr(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++static inline uint32_t SBRS_Bit(uint32_t opcode) ++{ ++ return extract32(opcode, 0, 3); ++} ++ ++static inline uint32_t SBRS_Rr(uint32_t opcode) ++{ ++ return extract32(opcode, 4, 5); ++} ++ ++#endif diff --cc target/avr/translate.c index 0000000,0000000..b25f1c2 new file mode 100644 --- /dev/null +++ b/target/avr/translate.c @@@ -1,0 -1,0 +1,2911 @@@ ++/* ++ * QEMU AVR CPU ++ * ++ * Copyright (c) 2016 Michael Rolnik ++ * ++ * This library is free software; you can redistribute it and/or ++ * modify it under the terms of the GNU Lesser General Public ++ * License as published by the Free Software Foundation; either ++ * version 2.1 of the License, or (at your option) any later version. ++ * ++ * This library is distributed in the hope that it will be useful, ++ * but WITHOUT ANY WARRANTY; without even the implied warranty of ++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU ++ * Lesser General Public License for more details. ++ * ++ * You should have received a copy of the GNU Lesser General Public ++ * License along with this library; if not, see ++ * ++ */ ++ ++ ++#include "qemu/osdep.h" ++#include "tcg/tcg.h" ++#include "cpu.h" ++#include "exec/exec-all.h" ++#include "disas/disas.h" ++#include "tcg-op.h" ++#include "exec/cpu_ldst.h" ++#include "exec/helper-proto.h" ++#include "exec/helper-gen.h" ++#include "exec/log.h" ++ ++static TCGv_env cpu_env; ++ ++static TCGv cpu_pc; ++ ++static TCGv cpu_Cf; ++static TCGv cpu_Zf; ++static TCGv cpu_Nf; ++static TCGv cpu_Vf; ++static TCGv cpu_Sf; ++static TCGv cpu_Hf; ++static TCGv cpu_Tf; ++static TCGv cpu_If; ++ ++static TCGv cpu_rampD; ++static TCGv cpu_rampX; ++static TCGv cpu_rampY; ++static TCGv cpu_rampZ; ++ ++static TCGv cpu_r[32]; ++static TCGv cpu_eind; ++static TCGv cpu_sp; ++ ++#define REG(x) (cpu_r[x]) ++ ++enum { ++ BS_NONE = 0, /* Nothing special (none of the below) */ ++ BS_STOP = 1, /* We want to stop translation for any reason */ ++ BS_BRANCH = 2, /* A branch condition is reached */ ++ BS_EXCP = 3, /* An exception condition is reached */ ++}; ++ ++uint32_t get_opcode(uint8_t const *code, unsigned bitBase, unsigned bitSize); ++ ++typedef struct DisasContext DisasContext; ++typedef struct InstInfo InstInfo; ++ ++typedef int (*translate_function_t)(DisasContext *ctx, uint32_t opcode); ++struct InstInfo { ++ target_long cpc; ++ target_long npc; ++ uint32_t opcode; ++ translate_function_t translate; ++ unsigned length; ++}; ++ ++/* This is the state at translation time. 
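++ * The inst[] array caches the two consecutive instructions at the current
++ * translation point; keeping the second entry lets skip instructions
++ * (CPSE, SBRC, SBRS), which jump over the following instruction, see how
++ * long that instruction is.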
*/ ++struct DisasContext { ++ struct TranslationBlock *tb; ++ CPUAVRState *env; ++ ++ InstInfo inst[2];/* two consecutive instructions */ ++ ++ /* Routine used to access memory */ ++ int memidx; ++ int bstate; ++ int singlestep; ++}; ++ ++static void gen_goto_tb(DisasContext *ctx, int n, target_ulong dest) ++{ ++ TranslationBlock *tb = ctx->tb; ++ ++ if (ctx->singlestep == 0) { ++ tcg_gen_goto_tb(n); ++ tcg_gen_movi_i32(cpu_pc, dest); ++ tcg_gen_exit_tb((uintptr_t)tb + n); ++ } else { ++ tcg_gen_movi_i32(cpu_pc, dest); ++ gen_helper_debug(cpu_env); ++ tcg_gen_exit_tb(0); ++ } ++} ++ ++#include "exec/gen-icount.h" ++#include "translate-inst.h" ++ ++static void gen_add_CHf(TCGv R, TCGv Rd, TCGv Rr) ++{ ++ TCGv t1 = tcg_temp_new_i32(); ++ TCGv t2 = tcg_temp_new_i32(); ++ TCGv t3 = tcg_temp_new_i32(); ++ ++ tcg_gen_and_tl(t1, Rd, Rr); /* t1 = Rd & Rr */ ++ tcg_gen_andc_tl(t2, Rd, R); /* t2 = Rd & ~R */ ++ tcg_gen_andc_tl(t3, Rr, R); /* t3 = Rr & ~R */ ++ tcg_gen_or_tl(t1, t1, t2); /* t1 = t1 | t2 | t3 */ ++ tcg_gen_or_tl(t1, t1, t3); ++ ++ tcg_gen_shri_tl(cpu_Cf, t1, 7); /* Cf = t1(7) */ ++ tcg_gen_shri_tl(cpu_Hf, t1, 3); /* Hf = t1(3) */ ++ tcg_gen_andi_tl(cpu_Hf, cpu_Hf, 1); ++ ++ tcg_temp_free_i32(t3); ++ tcg_temp_free_i32(t2); ++ tcg_temp_free_i32(t1); ++} ++ ++static void gen_add_Vf(TCGv R, TCGv Rd, TCGv Rr) ++{ ++ TCGv t1 = tcg_temp_new_i32(); ++ TCGv t2 = tcg_temp_new_i32(); ++ ++ /* t1 = Rd & Rr & ~R | ~Rd & ~Rr & R = (Rd ^ R) & ~(Rd ^ Rr) */ ++ tcg_gen_xor_tl(t1, Rd, R); ++ tcg_gen_xor_tl(t2, Rd, Rr); ++ tcg_gen_andc_tl(t1, t1, t2); ++ ++ tcg_gen_shri_tl(cpu_Vf, t1, 7); /* Vf = t1(7) */ ++ ++ tcg_temp_free_i32(t2); ++ tcg_temp_free_i32(t1); ++} ++ ++static void gen_sub_CHf(TCGv R, TCGv Rd, TCGv Rr) ++{ ++ TCGv t1 = tcg_temp_new_i32(); ++ TCGv t2 = tcg_temp_new_i32(); ++ TCGv t3 = tcg_temp_new_i32(); ++ ++ /* Cf & Hf */ ++ tcg_gen_not_tl(t1, Rd); /* t1 = ~Rd */ ++ tcg_gen_and_tl(t2, t1, Rr); /* t2 = ~Rd & Rr */ ++ tcg_gen_or_tl(t3, t1, Rr); /* t3 = (~Rd | Rr) & R */ ++ tcg_gen_and_tl(t3, t3, R); ++ tcg_gen_or_tl(t2, t2, t3); /* t2 = ~Rd & Rr | ~Rd & R | R & Rr */ ++ tcg_gen_shri_tl(cpu_Cf, t2, 7); /* Cf = t2(7) */ ++ tcg_gen_shri_tl(cpu_Hf, t2, 3); /* Hf = t2(3) */ ++ tcg_gen_andi_tl(cpu_Hf, cpu_Hf, 1); ++ ++ tcg_temp_free_i32(t3); ++ tcg_temp_free_i32(t2); ++ tcg_temp_free_i32(t1); ++} ++ ++static void gen_sub_Vf(TCGv R, TCGv Rd, TCGv Rr) ++{ ++ TCGv t1 = tcg_temp_new_i32(); ++ TCGv t2 = tcg_temp_new_i32(); ++ ++ /* Vf */ ++ /* t1 = Rd & ~Rr & ~R | ~Rd & Rr & R = (Rd ^ R) & (Rd ^ R) */ ++ tcg_gen_xor_tl(t1, Rd, R); ++ tcg_gen_xor_tl(t2, Rd, Rr); ++ tcg_gen_and_tl(t1, t1, t2); ++ tcg_gen_shri_tl(cpu_Vf, t1, 7); /* Vf = t1(7) */ ++ ++ tcg_temp_free_i32(t2); ++ tcg_temp_free_i32(t1); ++} ++ ++static void gen_NSf(TCGv R) ++{ ++ tcg_gen_shri_tl(cpu_Nf, R, 7); /* Nf = R(7) */ ++ tcg_gen_xor_tl(cpu_Sf, cpu_Nf, cpu_Vf); /* Sf = Nf ^ Vf */ ++} ++ ++static void gen_ZNSf(TCGv R) ++{ ++ tcg_gen_mov_tl(cpu_Zf, R); /* Zf = R */ ++ tcg_gen_shri_tl(cpu_Nf, R, 7); /* Nf = R(7) */ ++ tcg_gen_xor_tl(cpu_Sf, cpu_Nf, cpu_Vf); /* Sf = Nf ^ Vf */ ++} ++ ++static void gen_push_ret(DisasContext *ctx, int ret) ++{ ++ if (avr_feature(ctx->env, AVR_FEATURE_1_BYTE_PC)) { ++ ++ TCGv t0 = tcg_const_i32((ret & 0x0000ff)); ++ ++ tcg_gen_qemu_st_tl(t0, cpu_sp, MMU_DATA_IDX, MO_UB); ++ tcg_gen_subi_tl(cpu_sp, cpu_sp, 1); ++ ++ tcg_temp_free_i32(t0); ++ } else if (avr_feature(ctx->env, AVR_FEATURE_2_BYTE_PC)) { ++ ++ TCGv t0 = tcg_const_i32((ret & 0x00ffff)); ++ ++ tcg_gen_subi_tl(cpu_sp, cpu_sp, 1); ++ 
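++        /*
++         * After the first decrement the big-endian store places the high
++         * byte at the new SP and the low byte at SP + 1 (the caller's
++         * original SP); the second decrement below then leaves SP pointing
++         * just below the saved return address, matching the AVR push
++         * layout.
++         */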
tcg_gen_qemu_st_tl(t0, cpu_sp, MMU_DATA_IDX, MO_BEUW); ++ tcg_gen_subi_tl(cpu_sp, cpu_sp, 1); ++ ++ tcg_temp_free_i32(t0); ++ ++ } else if (avr_feature(ctx->env, AVR_FEATURE_3_BYTE_PC)) { ++ ++ TCGv lo = tcg_const_i32((ret & 0x0000ff)); ++ TCGv hi = tcg_const_i32((ret & 0xffff00) >> 8); ++ ++ tcg_gen_qemu_st_tl(lo, cpu_sp, MMU_DATA_IDX, MO_UB); ++ tcg_gen_subi_tl(cpu_sp, cpu_sp, 2); ++ tcg_gen_qemu_st_tl(hi, cpu_sp, MMU_DATA_IDX, MO_BEUW); ++ tcg_gen_subi_tl(cpu_sp, cpu_sp, 1); ++ ++ tcg_temp_free_i32(lo); ++ tcg_temp_free_i32(hi); ++ } ++} ++ ++static void gen_pop_ret(DisasContext *ctx, TCGv ret) ++{ ++ if (avr_feature(ctx->env, AVR_FEATURE_1_BYTE_PC)) { ++ ++ tcg_gen_addi_tl(cpu_sp, cpu_sp, 1); ++ tcg_gen_qemu_ld_tl(ret, cpu_sp, MMU_DATA_IDX, MO_UB); ++ ++ } else if (avr_feature(ctx->env, AVR_FEATURE_2_BYTE_PC)) { ++ ++ tcg_gen_addi_tl(cpu_sp, cpu_sp, 1); ++ tcg_gen_qemu_ld_tl(ret, cpu_sp, MMU_DATA_IDX, MO_BEUW); ++ tcg_gen_addi_tl(cpu_sp, cpu_sp, 1); ++ ++ } else if (avr_feature(ctx->env, AVR_FEATURE_3_BYTE_PC)) { ++ ++ TCGv lo = tcg_temp_new_i32(); ++ TCGv hi = tcg_temp_new_i32(); ++ ++ tcg_gen_addi_tl(cpu_sp, cpu_sp, 1); ++ tcg_gen_qemu_ld_tl(hi, cpu_sp, MMU_DATA_IDX, MO_BEUW); ++ ++ tcg_gen_addi_tl(cpu_sp, cpu_sp, 2); ++ tcg_gen_qemu_ld_tl(lo, cpu_sp, MMU_DATA_IDX, MO_UB); ++ ++ tcg_gen_deposit_tl(ret, lo, hi, 8, 16); ++ ++ tcg_temp_free_i32(lo); ++ tcg_temp_free_i32(hi); ++ } ++} ++ ++static void gen_jmp_ez(void) ++{ ++ tcg_gen_deposit_tl(cpu_pc, cpu_r[30], cpu_r[31], 8, 8); ++ tcg_gen_or_tl(cpu_pc, cpu_pc, cpu_eind); ++ tcg_gen_exit_tb(0); ++} ++ ++static void gen_jmp_z(void) ++{ ++ tcg_gen_deposit_tl(cpu_pc, cpu_r[30], cpu_r[31], 8, 8); ++ tcg_gen_exit_tb(0); ++} ++ ++/* ++ * in the gen_set_addr & gen_get_addr functions ++ * H assumed to be in 0x00ff0000 format ++ * M assumed to be in 0x000000ff format ++ * L assumed to be in 0x000000ff format ++ */ ++static void gen_set_addr(TCGv addr, TCGv H, TCGv M, TCGv L) ++{ ++ ++ tcg_gen_andi_tl(L, addr, 0x000000ff); ++ ++ tcg_gen_andi_tl(M, addr, 0x0000ff00); ++ tcg_gen_shri_tl(M, M, 8); ++ ++ tcg_gen_andi_tl(H, addr, 0x00ff0000); ++} ++ ++static void gen_set_xaddr(TCGv addr) ++{ ++ gen_set_addr(addr, cpu_rampX, cpu_r[27], cpu_r[26]); ++} ++ ++static void gen_set_yaddr(TCGv addr) ++{ ++ gen_set_addr(addr, cpu_rampY, cpu_r[29], cpu_r[28]); ++} ++ ++static void gen_set_zaddr(TCGv addr) ++{ ++ gen_set_addr(addr, cpu_rampZ, cpu_r[31], cpu_r[30]); ++} ++ ++static TCGv gen_get_addr(TCGv H, TCGv M, TCGv L) ++{ ++ TCGv addr = tcg_temp_new_i32(); ++ ++ tcg_gen_deposit_tl(addr, M, H, 8, 8); ++ tcg_gen_deposit_tl(addr, L, addr, 8, 16); ++ ++ return addr; ++} ++ ++static TCGv gen_get_xaddr(void) ++{ ++ return gen_get_addr(cpu_rampX, cpu_r[27], cpu_r[26]); ++} ++ ++static TCGv gen_get_yaddr(void) ++{ ++ return gen_get_addr(cpu_rampY, cpu_r[29], cpu_r[28]); ++} ++ ++static TCGv gen_get_zaddr(void) ++{ ++ return gen_get_addr(cpu_rampZ, cpu_r[31], cpu_r[30]); ++} ++ ++/* ++ * Adds two registers and the contents of the C Flag and places the result in ++ * the destination register Rd. 
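++ * For example, with Rd = 0xff, Rr = 0x00 and C = 1 the sum wraps to
++ * R = 0x00, leaving C = 1, H = 1 and Z set, while V (signed overflow),
++ * N and S all end up clear.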
++ */ ++static int avr_translate_ADC(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[ADC_Rd(opcode)]; ++ TCGv Rr = cpu_r[ADC_Rr(opcode)]; ++ TCGv R = tcg_temp_new_i32(); ++ ++ /* op */ ++ tcg_gen_add_tl(R, Rd, Rr); /* R = Rd + Rr + Cf */ ++ tcg_gen_add_tl(R, R, cpu_Cf); ++ tcg_gen_andi_tl(R, R, 0xff); /* make it 8 bits */ ++ ++ gen_add_CHf(R, Rd, Rr); ++ gen_add_Vf(R, Rd, Rr); ++ gen_ZNSf(R); ++ ++ /* R */ ++ tcg_gen_mov_tl(Rd, R); ++ ++ tcg_temp_free_i32(R); ++ ++ return BS_NONE; ++} ++ ++/* ++ * Adds two registers without the C Flag and places the result in the ++ * destination register Rd. ++ */ ++static int avr_translate_ADD(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[ADD_Rd(opcode)]; ++ TCGv Rr = cpu_r[ADD_Rr(opcode)]; ++ TCGv R = tcg_temp_new_i32(); ++ ++ /* op */ ++ tcg_gen_add_tl(R, Rd, Rr); /* Rd = Rd + Rr */ ++ tcg_gen_andi_tl(R, R, 0xff); /* make it 8 bits */ ++ ++ gen_add_CHf(R, Rd, Rr); ++ gen_add_Vf(R, Rd, Rr); ++ gen_ZNSf(R); ++ ++ /* R */ ++ tcg_gen_mov_tl(Rd, R); ++ ++ tcg_temp_free_i32(R); ++ ++ return BS_NONE; ++} ++ ++/* ++ * Adds an immediate value (0 - 63) to a register pair and places the result ++ * in the register pair. This instruction operates on the upper four register ++ * pairs, and is well suited for operations on the pointer registers. This ++ * instruction is not available in all devices. Refer to the device specific ++ * instruction set summary. ++ */ ++static int avr_translate_ADIW(DisasContext *ctx, uint32_t opcode) ++{ ++ if (avr_feature(ctx->env, AVR_FEATURE_ADIW_SBIW) == false) { ++ gen_helper_unsupported(cpu_env); ++ ++ return BS_EXCP; ++ } ++ ++ TCGv RdL = cpu_r[24 + 2 * ADIW_Rd(opcode)]; ++ TCGv RdH = cpu_r[25 + 2 * ADIW_Rd(opcode)]; ++ int Imm = (ADIW_Imm(opcode)); ++ TCGv R = tcg_temp_new_i32(); ++ TCGv Rd = tcg_temp_new_i32(); ++ ++ /* op */ ++ tcg_gen_deposit_tl(Rd, RdL, RdH, 8, 8); /* Rd = RdH:RdL */ ++ tcg_gen_addi_tl(R, Rd, Imm); /* R = Rd + Imm */ ++ tcg_gen_andi_tl(R, R, 0xffff); /* make it 16 bits */ ++ ++ /* Cf */ ++ tcg_gen_andc_tl(cpu_Cf, Rd, R); /* Cf = Rd & ~R */ ++ tcg_gen_shri_tl(cpu_Cf, cpu_Cf, 15); ++ ++ /* Vf */ ++ tcg_gen_andc_tl(cpu_Vf, R, Rd); /* Vf = R & ~Rd */ ++ tcg_gen_shri_tl(cpu_Vf, cpu_Vf, 15); ++ ++ /* Zf */ ++ tcg_gen_mov_tl(cpu_Zf, R); /* Zf = R */ ++ ++ /* Nf */ ++ tcg_gen_shri_tl(cpu_Nf, R, 15); /* Nf = R(15) */ ++ ++ /* Sf */ ++ tcg_gen_xor_tl(cpu_Sf, cpu_Nf, cpu_Vf);/* Sf = Nf ^ Vf */ ++ ++ /* R */ ++ tcg_gen_andi_tl(RdL, R, 0xff); ++ tcg_gen_shri_tl(RdH, R, 8); ++ ++ tcg_temp_free_i32(Rd); ++ tcg_temp_free_i32(R); ++ ++ return BS_NONE; ++} ++ ++/* ++ * Performs the logical AND between the contents of register Rd and register ++ * Rr and places the result in the destination register Rd. ++ */ ++static int avr_translate_AND(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[AND_Rd(opcode)]; ++ TCGv Rr = cpu_r[AND_Rr(opcode)]; ++ TCGv R = tcg_temp_new_i32(); ++ ++ /* op */ ++ tcg_gen_and_tl(R, Rd, Rr); /* Rd = Rd and Rr */ ++ ++ /* Vf */ ++ tcg_gen_movi_tl(cpu_Vf, 0x00); /* Vf = 0 */ ++ ++ /* Zf */ ++ tcg_gen_mov_tl(cpu_Zf, R); /* Zf = R */ ++ ++ gen_ZNSf(R); ++ ++ /* R */ ++ tcg_gen_mov_tl(Rd, R); ++ ++ tcg_temp_free_i32(R); ++ ++ return BS_NONE; ++} ++ ++/* ++ * Performs the logical AND between the contents of register Rd and a constant ++ * and places the result in the destination register Rd. 
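++ * Only the upper half of the register file (r16..r31) is reachable
++ * through the 4-bit destination field, which is why the translator below
++ * indexes cpu_r[16 + ANDI_Rd(opcode)]; the 8-bit constant is split across
++ * opcode bits 11..8 and 3..0 and reassembled by ANDI_Imm().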
++ */ ++static int avr_translate_ANDI(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[16 + ANDI_Rd(opcode)]; ++ int Imm = (ANDI_Imm(opcode)); ++ ++ /* op */ ++ tcg_gen_andi_tl(Rd, Rd, Imm); /* Rd = Rd & Imm */ ++ ++ tcg_gen_movi_tl(cpu_Vf, 0x00); /* Vf = 0 */ ++ gen_ZNSf(Rd); ++ ++ return BS_NONE; ++} ++ ++/* ++ * Shifts all bits in Rd one place to the right. Bit 7 is held constant. Bit 0 ++ * is loaded into the C Flag of the SREG. This operation effectively divides a ++ * signed value by two without changing its sign. The Carry Flag can be used to ++ * round the result. ++ */ ++static int avr_translate_ASR(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[ASR_Rd(opcode)]; ++ TCGv t1 = tcg_temp_new_i32(); ++ TCGv t2 = tcg_temp_new_i32(); ++ ++ /* op */ ++ tcg_gen_andi_tl(t1, Rd, 0x80); /* t1 = (Rd & 0x80) | (Rd >> 1) */ ++ tcg_gen_shri_tl(t2, Rd, 1); ++ tcg_gen_or_tl(t1, t1, t2); ++ ++ /* Cf */ ++ tcg_gen_andi_tl(cpu_Cf, Rd, 1); /* Cf = Rd(0) */ ++ ++ /* Vf */ ++ tcg_gen_and_tl(cpu_Vf, cpu_Nf, cpu_Cf);/* Vf = Nf & Cf */ ++ ++ gen_ZNSf(t1); ++ ++ /* op */ ++ tcg_gen_mov_tl(Rd, t1); ++ ++ tcg_temp_free_i32(t2); ++ tcg_temp_free_i32(t1); ++ ++ return BS_NONE; ++} ++ ++/* ++ * Clears a single Flag in SREG. ++ */ ++static int avr_translate_BCLR(DisasContext *ctx, uint32_t opcode) ++{ ++ switch (BCLR_Bit(opcode)) { ++ case 0x00: ++ tcg_gen_movi_tl(cpu_Cf, 0x00); ++ break; ++ case 0x01: ++ tcg_gen_movi_tl(cpu_Zf, 0x01); ++ break; ++ case 0x02: ++ tcg_gen_movi_tl(cpu_Nf, 0x00); ++ break; ++ case 0x03: ++ tcg_gen_movi_tl(cpu_Vf, 0x00); ++ break; ++ case 0x04: ++ tcg_gen_movi_tl(cpu_Sf, 0x00); ++ break; ++ case 0x05: ++ tcg_gen_movi_tl(cpu_Hf, 0x00); ++ break; ++ case 0x06: ++ tcg_gen_movi_tl(cpu_Tf, 0x00); ++ break; ++ case 0x07: ++ tcg_gen_movi_tl(cpu_If, 0x00); ++ break; ++ } ++ ++ return BS_NONE; ++} ++ ++/* ++ * Copies the T Flag in the SREG (Status Register) to bit b in register Rd. ++ */ ++static int avr_translate_BLD(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[BLD_Rd(opcode)]; ++ TCGv t1 = tcg_temp_new_i32(); ++ ++ tcg_gen_andi_tl(Rd, Rd, ~(1u << BLD_Bit(opcode))); /* clear bit */ ++ tcg_gen_shli_tl(t1, cpu_Tf, BLD_Bit(opcode)); /* create mask */ ++ tcg_gen_or_tl(Rd, Rd, t1); ++ ++ tcg_temp_free_i32(t1); ++ ++ return BS_NONE; ++} ++ ++/* ++ * Conditional relative branch. Tests a single bit in SREG and branches ++ * relatively to PC if the bit is cleared. This instruction branches relatively ++ * to PC in either direction (PC - 63 < = destination <= PC + 64). The ++ * parameter k is the offset from PC and is represented in two's complement ++ * form. 
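A note on the flag representation that BCLR above (and BSET below) relies on: the C, N, V, S, H, T and I bits are each stored as 0 or 1 in their own TCG global, but cpu_Zf caches the last result value and the Z flag is considered set when cpu_Zf == 0. That is why clearing Z writes 1 into cpu_Zf while clearing every other flag writes 0, and why the branch translations below test cpu_Zf with the condition inverted relative to the other flags. A small illustrative sketch of the convention (the struct and helpers are not part of the patch):

#include <stdint.h>
#include <stdbool.h>

/* Illustrative model of how this translator stores SREG. */
struct avr_sreg_model {
    uint32_t Cf, Nf, Vf, Sf, Hf, Tf, If;  /* held as 0 or 1 */
    uint32_t Zf;                          /* holds the last result; Z set <=> Zf == 0 */
};

static bool z_is_set(const struct avr_sreg_model *s) { return s->Zf == 0; }
static void z_set(struct avr_sreg_model *s)   { s->Zf = 0; }  /* BSET 1 */
static void z_clear(struct avr_sreg_model *s) { s->Zf = 1; }  /* BCLR 1 */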
++ */ ++static int avr_translate_BRBC(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGLabel *taken = gen_new_label(); ++ int Imm = sextract32(BRBC_Imm(opcode), 0, 7); ++ ++ switch (BRBC_Bit(opcode)) { ++ case 0x00: ++ tcg_gen_brcondi_i32(TCG_COND_EQ, cpu_Cf, 0, taken); ++ break; ++ case 0x01: ++ tcg_gen_brcondi_i32(TCG_COND_NE, cpu_Zf, 0, taken); ++ break; ++ case 0x02: ++ tcg_gen_brcondi_i32(TCG_COND_EQ, cpu_Nf, 0, taken); ++ break; ++ case 0x03: ++ tcg_gen_brcondi_i32(TCG_COND_EQ, cpu_Vf, 0, taken); ++ break; ++ case 0x04: ++ tcg_gen_brcondi_i32(TCG_COND_EQ, cpu_Sf, 0, taken); ++ break; ++ case 0x05: ++ tcg_gen_brcondi_i32(TCG_COND_EQ, cpu_Hf, 0, taken); ++ break; ++ case 0x06: ++ tcg_gen_brcondi_i32(TCG_COND_EQ, cpu_Tf, 0, taken); ++ break; ++ case 0x07: ++ tcg_gen_brcondi_i32(TCG_COND_EQ, cpu_If, 0, taken); ++ break; ++ } ++ ++ gen_goto_tb(ctx, 1, ctx->inst[0].npc); ++ gen_set_label(taken); ++ gen_goto_tb(ctx, 0, ctx->inst[0].npc + Imm); ++ ++ return BS_BRANCH; ++} ++ ++/* ++ * Conditional relative branch. Tests a single bit in SREG and branches ++ * relatively to PC if the bit is set. This instruction branches relatively to ++ * PC in either direction (PC - 63 < = destination <= PC + 64). The parameter k ++ * is the offset from PC and is represented in two's complement form. ++ */ ++static int avr_translate_BRBS(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGLabel *taken = gen_new_label(); ++ int Imm = sextract32(BRBS_Imm(opcode), 0, 7); ++ ++ switch (BRBS_Bit(opcode)) { ++ case 0x00: ++ tcg_gen_brcondi_i32(TCG_COND_EQ, cpu_Cf, 1, taken); ++ break; ++ case 0x01: ++ tcg_gen_brcondi_i32(TCG_COND_EQ, cpu_Zf, 0, taken); ++ break; ++ case 0x02: ++ tcg_gen_brcondi_i32(TCG_COND_EQ, cpu_Nf, 1, taken); ++ break; ++ case 0x03: ++ tcg_gen_brcondi_i32(TCG_COND_EQ, cpu_Vf, 1, taken); ++ break; ++ case 0x04: ++ tcg_gen_brcondi_i32(TCG_COND_EQ, cpu_Sf, 1, taken); ++ break; ++ case 0x05: ++ tcg_gen_brcondi_i32(TCG_COND_EQ, cpu_Hf, 1, taken); ++ break; ++ case 0x06: ++ tcg_gen_brcondi_i32(TCG_COND_EQ, cpu_Tf, 1, taken); ++ break; ++ case 0x07: ++ tcg_gen_brcondi_i32(TCG_COND_EQ, cpu_If, 1, taken); ++ break; ++ } ++ ++ gen_goto_tb(ctx, 1, ctx->inst[0].npc); ++ gen_set_label(taken); ++ gen_goto_tb(ctx, 0, ctx->inst[0].npc + Imm); ++ ++ return BS_BRANCH; ++} ++ ++/* ++ * Sets a single Flag or bit in SREG. ++ */ ++static int avr_translate_BSET(DisasContext *ctx, uint32_t opcode) ++{ ++ switch (BSET_Bit(opcode)) { ++ case 0x00: ++ tcg_gen_movi_tl(cpu_Cf, 0x01); ++ break; ++ case 0x01: ++ tcg_gen_movi_tl(cpu_Zf, 0x00); ++ break; ++ case 0x02: ++ tcg_gen_movi_tl(cpu_Nf, 0x01); ++ break; ++ case 0x03: ++ tcg_gen_movi_tl(cpu_Vf, 0x01); ++ break; ++ case 0x04: ++ tcg_gen_movi_tl(cpu_Sf, 0x01); ++ break; ++ case 0x05: ++ tcg_gen_movi_tl(cpu_Hf, 0x01); ++ break; ++ case 0x06: ++ tcg_gen_movi_tl(cpu_Tf, 0x01); ++ break; ++ case 0x07: ++ tcg_gen_movi_tl(cpu_If, 0x01); ++ break; ++ } ++ ++ return BS_NONE; ++} ++ ++/* ++ * The BREAK instruction is used by the On-chip Debug system, and is ++ * normally not used in the application software. When the BREAK instruction is ++ * executed, the AVR CPU is set in the Stopped Mode. This gives the On-chip ++ * Debugger access to internal resources. If any Lock bits are set, or either ++ * the JTAGEN or OCDEN Fuses are unprogrammed, the CPU will treat the BREAK ++ * instruction as a NOP and will not enter the Stopped mode. This instruction ++ * is not available in all devices. Refer to the device specific instruction ++ * set summary. 
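Both BRBC and BRBS above take the 7-bit displacement as a signed word offset from ctx->inst[0].npc, the address of the next instruction, which gives the PC-63..PC+64 reach quoted in the comments. A stand-alone sketch of that address arithmetic, assuming a sextract32-style sign extension and a two's complement host:

#include <stdint.h>

/* Sign-extend the low 'len' bits of 'value' (same contract as sextract32). */
static int32_t sext_bits(uint32_t value, unsigned len)
{
    return (int32_t)(value << (32 - len)) >> (32 - len);
}

/* Taken target of BRBC/BRBS: next PC plus the signed 7-bit word offset. */
static uint32_t brbx_target(uint32_t npc, uint32_t imm7)
{
    return npc + (uint32_t)sext_bits(imm7, 7);
}

The not-taken path simply continues at npc, which is the first gen_goto_tb() call in both translators.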
++ */ ++static int avr_translate_BREAK(DisasContext *ctx, uint32_t opcode) ++{ ++ if (avr_feature(ctx->env, AVR_FEATURE_BREAK) == false) { ++ gen_helper_unsupported(cpu_env); ++ ++ return BS_EXCP; ++ } ++ ++ /* TODO: ??? */ ++ return BS_NONE; ++} ++ ++/* ++ * Stores bit b from Rd to the T Flag in SREG (Status Register). ++ */ ++static int avr_translate_BST(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[BST_Rd(opcode)]; ++ ++ tcg_gen_andi_tl(cpu_Tf, Rd, 1 << BST_Bit(opcode)); ++ tcg_gen_shri_tl(cpu_Tf, cpu_Tf, BST_Bit(opcode)); ++ ++ return BS_NONE; ++} ++ ++/* ++ * Calls to a subroutine within the entire Program memory. The return ++ * address (to the instruction after the CALL) will be stored onto the Stack. ++ * (See also RCALL). The Stack Pointer uses a post-decrement scheme during ++ * CALL. This instruction is not available in all devices. Refer to the device ++ * specific instruction set summary. ++ */ ++static int avr_translate_CALL(DisasContext *ctx, uint32_t opcode) ++{ ++ if (avr_feature(ctx->env, AVR_FEATURE_JMP_CALL) == false) { ++ gen_helper_unsupported(cpu_env); ++ ++ return BS_EXCP; ++ } ++ ++ int Imm = CALL_Imm(opcode); ++ int ret = ctx->inst[0].npc; ++ ++ gen_push_ret(ctx, ret); ++ gen_goto_tb(ctx, 0, Imm); ++ ++ return BS_BRANCH; ++} ++ ++/* ++ * Clears a specified bit in an I/O Register. This instruction operates on ++ * the lower 32 I/O Registers -- addresses 0-31. ++ */ ++static int avr_translate_CBI(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv data = tcg_temp_new_i32(); ++ TCGv port = tcg_const_i32(CBI_Imm(opcode)); ++ ++ gen_helper_inb(data, cpu_env, port); ++ tcg_gen_andi_tl(data, data, ~(1 << CBI_Bit(opcode))); ++ gen_helper_outb(cpu_env, port, data); ++ ++ tcg_temp_free_i32(data); ++ tcg_temp_free_i32(port); ++ ++ return BS_NONE; ++} ++ ++/* ++ * Clears the specified bits in register Rd. Performs the logical AND ++ * between the contents of register Rd and the complement of the constant mask ++ * K. The result will be placed in register Rd. ++ */ ++static int avr_translate_COM(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[COM_Rd(opcode)]; ++ TCGv R = tcg_temp_new_i32(); ++ ++ tcg_gen_xori_tl(Rd, Rd, 0xff); ++ ++ tcg_gen_movi_tl(cpu_Cf, 1); /* Cf = 1 */ ++ tcg_gen_movi_tl(cpu_Vf, 0); /* Vf = 0 */ ++ gen_ZNSf(Rd); ++ ++ tcg_temp_free_i32(R); ++ ++ return BS_NONE; ++} ++ ++/* ++ * This instruction performs a compare between two registers Rd and Rr. ++ * None of the registers are changed. All conditional branches can be used ++ * after this instruction. ++ */ ++static int avr_translate_CP(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[CP_Rd(opcode)]; ++ TCGv Rr = cpu_r[CP_Rr(opcode)]; ++ TCGv R = tcg_temp_new_i32(); ++ ++ /* op */ ++ tcg_gen_sub_tl(R, Rd, Rr); /* R = Rd - Rr */ ++ tcg_gen_andi_tl(R, R, 0xff); /* make it 8 bits */ ++ ++ gen_sub_CHf(R, Rd, Rr); ++ gen_sub_Vf(R, Rd, Rr); ++ gen_ZNSf(R); ++ ++ tcg_temp_free_i32(R); ++ ++ return BS_NONE; ++} ++ ++/* ++ * This instruction performs a compare between two registers Rd and Rr and ++ * also takes into account the previous carry. None of the registers are ++ * changed. All conditional branches can be used after this instruction. 
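CBI above (and SBI later in the file) is translated as a read-modify-write through the inb/outb helpers rather than as a single operation on guest memory. The intended effect, written as a host-side sketch with hypothetical I/O callbacks standing in for gen_helper_inb/gen_helper_outb:

#include <stdint.h>

typedef uint8_t (*io_read_fn)(unsigned port);
typedef void (*io_write_fn)(unsigned port, uint8_t val);

/* CBI port,bit: clear one bit in one of the lower 32 I/O registers. */
static void cbi_ref(io_read_fn rd, io_write_fn wr, unsigned port, unsigned bit)
{
    uint8_t v = rd(port);
    wr(port, (uint8_t)(v & ~(1u << bit)));
}

/* SBI port,bit: set one bit in one of the lower 32 I/O registers. */
static void sbi_ref(io_read_fn rd, io_write_fn wr, unsigned port, unsigned bit)
{
    uint8_t v = rd(port);
    wr(port, (uint8_t)(v | (1u << bit)));
}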
++ */ ++static int avr_translate_CPC(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[CPC_Rd(opcode)]; ++ TCGv Rr = cpu_r[CPC_Rr(opcode)]; ++ TCGv R = tcg_temp_new_i32(); ++ ++ /* op */ ++ tcg_gen_sub_tl(R, Rd, Rr); /* R = Rd - Rr - Cf */ ++ tcg_gen_sub_tl(R, R, cpu_Cf); ++ tcg_gen_andi_tl(R, R, 0xff); /* make it 8 bits */ ++ ++ gen_sub_CHf(R, Rd, Rr); ++ gen_sub_Vf(R, Rd, Rr); ++ gen_NSf(R); ++ ++ /* Previous value remains unchanged when the result is zero; ++ * cleared otherwise. ++ */ ++ tcg_gen_or_tl(cpu_Zf, cpu_Zf, R); ++ ++ tcg_temp_free_i32(R); ++ ++ return BS_NONE; ++} ++ ++/* ++ * This instruction performs a compare between register Rd and a constant. ++ * The register is not changed. All conditional branches can be used after this ++ * instruction. ++ */ ++static int avr_translate_CPI(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[16 + CPI_Rd(opcode)]; ++ int Imm = CPI_Imm(opcode); ++ TCGv Rr = tcg_const_i32(Imm); ++ TCGv R = tcg_temp_new_i32(); ++ ++ /* op */ ++ tcg_gen_sub_tl(R, Rd, Rr); /* R = Rd - Rr */ ++ tcg_gen_andi_tl(R, R, 0xff); /* make it 8 bits */ ++ ++ gen_sub_CHf(R, Rd, Rr); ++ gen_sub_Vf(R, Rd, Rr); ++ gen_ZNSf(R); ++ ++ tcg_temp_free_i32(R); ++ tcg_temp_free_i32(Rr); ++ ++ return BS_NONE; ++} ++ ++/* ++ * This instruction performs a compare between two registers Rd and Rr, and ++ * skips the next instruction if Rd = Rr. ++ */ ++static int avr_translate_CPSE(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[CPSE_Rd(opcode)]; ++ TCGv Rr = cpu_r[CPSE_Rr(opcode)]; ++ TCGLabel *skip = gen_new_label(); ++ ++ /* PC if next inst is skipped */ ++ tcg_gen_movi_tl(cpu_pc, ctx->inst[1].npc); ++ tcg_gen_brcond_i32(TCG_COND_EQ, Rd, Rr, skip); ++ /* PC if next inst is not skipped */ ++ tcg_gen_movi_tl(cpu_pc, ctx->inst[0].npc); ++ gen_set_label(skip); ++ ++ return BS_BRANCH; ++} ++ ++/* ++ * Subtracts one -1- from the contents of register Rd and places the result ++ * in the destination register Rd. The C Flag in SREG is not affected by the ++ * operation, thus allowing the DEC instruction to be used on a loop counter in ++ * multiple-precision computations. When operating on unsigned values, only ++ * BREQ and BRNE branches can be expected to perform consistently. When ++ * operating on two's complement values, all signed branches are available. ++ */ ++static int avr_translate_DEC(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[DEC_Rd(opcode)]; ++ ++ tcg_gen_subi_tl(Rd, Rd, 1); /* Rd = Rd - 1 */ ++ tcg_gen_andi_tl(Rd, Rd, 0xff); /* make it 8 bits */ ++ ++ /* cpu_Vf = Rd == 0x7f */ ++ tcg_gen_setcondi_tl(TCG_COND_EQ, cpu_Vf, Rd, 0x7f); ++ gen_ZNSf(Rd); ++ ++ return BS_NONE; ++} ++ ++/* ++ * The module is an instruction set extension to the AVR CPU, performing ++ * DES iterations. The 64-bit data block (plaintext or ciphertext) is placed in ++ * the CPU register file, registers R0-R7, where LSB of data is placed in LSB ++ * of R0 and MSB of data is placed in MSB of R7. The full 64-bit key (including ++ * parity bits) is placed in registers R8- R15, organized in the register file ++ * with LSB of key in LSB of R8 and MSB of key in MSB of R15. Executing one DES ++ * instruction performs one round in the DES algorithm. Sixteen rounds must be ++ * executed in increasing order to form the correct DES ciphertext or ++ * plaintext. Intermediate results are stored in the register file (R0-R15) ++ * after each DES instruction. 
The instruction's operand (K) determines which ++ * round is executed, and the half carry flag (H) determines whether encryption ++ * or decryption is performed. The DES algorithm is described in ++ * "Specifications for the Data Encryption Standard" (Federal Information ++ * Processing Standards Publication 46). Intermediate results in this ++ * implementation differ from the standard because the initial permutation and ++ * the inverse initial permutation are performed each iteration. This does not ++ * affect the result in the final ciphertext or plaintext, but reduces ++ * execution time. ++ */ ++static int avr_translate_DES(DisasContext *ctx, uint32_t opcode) ++{ ++ /* TODO: */ ++ if (avr_feature(ctx->env, AVR_FEATURE_DES) == false) { ++ gen_helper_unsupported(cpu_env); ++ ++ return BS_EXCP; ++ } ++ ++ return BS_NONE; ++} ++ ++/* ++ * Indirect call of a subroutine pointed to by the Z (16 bits) Pointer ++ * Register in the Register File and the EIND Register in the I/O space. This ++ * instruction allows for indirect calls to the entire 4M (words) Program ++ * memory space. See also ICALL. The Stack Pointer uses a post-decrement scheme ++ * during EICALL. This instruction is not available in all devices. Refer to ++ * the device specific instruction set summary. ++ */ ++static int avr_translate_EICALL(DisasContext *ctx, uint32_t opcode) ++{ ++ if (avr_feature(ctx->env, AVR_FEATURE_EIJMP_EICALL) == false) { ++ gen_helper_unsupported(cpu_env); ++ ++ return BS_EXCP; ++ } ++ ++ int ret = ctx->inst[0].npc; ++ ++ gen_push_ret(ctx, ret); ++ ++ gen_jmp_ez(); ++ ++ return BS_BRANCH; ++} ++ ++/* ++ * Indirect jump to the address pointed to by the Z (16 bits) Pointer ++ * Register in the Register File and the EIND Register in the I/O space. This ++ * instruction allows for indirect jumps to the entire 4M (words) Program ++ * memory space. See also IJMP. This instruction is not available in all ++ * devices. Refer to the device specific instruction set summary. ++ */ ++static int avr_translate_EIJMP(DisasContext *ctx, uint32_t opcode) ++{ ++ if (avr_feature(ctx->env, AVR_FEATURE_EIJMP_EICALL) == false) { ++ gen_helper_unsupported(cpu_env); ++ ++ return BS_EXCP; ++ } ++ ++ gen_jmp_ez(); ++ ++ return BS_BRANCH; ++} ++ ++/* ++ * Loads one byte pointed to by the Z-register and the RAMPZ Register in ++ * the I/O space, and places this byte in the destination register Rd. This ++ * instruction features a 100% space effective constant initialization or ++ * constant data fetch. The Program memory is organized in 16-bit words while ++ * the Z-pointer is a byte address. Thus, the least significant bit of the ++ * Z-pointer selects either low byte (ZLSB = 0) or high byte (ZLSB = 1). This ++ * instruction can address the entire Program memory space. The Z-pointer ++ * Register can either be left unchanged by the operation, or it can be ++ * incremented. The incrementation applies to the entire 24-bit concatenation ++ * of the RAMPZ and Z-pointer Registers. Devices with Self-Programming ++ * capability can use the ELPM instruction to read the Fuse and Lock bit value. ++ * Refer to the device documentation for a detailed description. This ++ * instruction is not available in all devices. Refer to the device specific ++ * instruction set summary. 
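gen_jmp_ez(), used by EICALL and EIJMP above, deposits ZH:ZL into bits 0..15 of the PC and then ORs in cpu_eind with no shift. This sketch therefore assumes cpu_eind is kept pre-shifted into bits 16..23, the same 0x00ff0000 convention the gen_set_addr comment near the top of the file documents for the RAMP registers; that assumption, and the helper names, are mine rather than the patch's.

#include <stdint.h>

/* Word address reached by EIJMP/EICALL, with 'eind_shifted' already held in
 * bits 16..23 (the 0x00ff0000 convention used elsewhere in this file). */
static uint32_t eijmp_target(uint32_t eind_shifted, uint8_t zh, uint8_t zl)
{
    return eind_shifted | ((uint32_t)zh << 8) | zl;
}

/* IJMP/ICALL use only the 16-bit Z pointer. */
static uint32_t ijmp_target(uint8_t zh, uint8_t zl)
{
    return (uint32_t)((zh << 8) | zl);
}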
++ */ ++static int avr_translate_ELPM1(DisasContext *ctx, uint32_t opcode) ++{ ++ if (avr_feature(ctx->env, AVR_FEATURE_ELPM) == false) { ++ gen_helper_unsupported(cpu_env); ++ ++ return BS_EXCP; ++ } ++ ++ TCGv Rd = cpu_r[0]; ++ TCGv addr = gen_get_zaddr(); ++ ++ tcg_gen_qemu_ld8u(Rd, addr, MMU_CODE_IDX); /* Rd = mem[addr] */ ++ ++ tcg_temp_free_i32(addr); ++ ++ return BS_NONE; ++} ++ ++static int avr_translate_ELPM2(DisasContext *ctx, uint32_t opcode) ++{ ++ if (avr_feature(ctx->env, AVR_FEATURE_ELPM) == false) { ++ gen_helper_unsupported(cpu_env); ++ ++ return BS_EXCP; ++ } ++ ++ TCGv Rd = cpu_r[ELPM2_Rd(opcode)]; ++ TCGv addr = gen_get_zaddr(); ++ ++ tcg_gen_qemu_ld8u(Rd, addr, MMU_CODE_IDX); /* Rd = mem[addr] */ ++ ++ tcg_temp_free_i32(addr); ++ ++ return BS_NONE; ++} ++ ++static int avr_translate_ELPMX(DisasContext *ctx, uint32_t opcode) ++{ ++ if (avr_feature(ctx->env, AVR_FEATURE_ELPMX) == false) { ++ gen_helper_unsupported(cpu_env); ++ ++ return BS_EXCP; ++ } ++ ++ TCGv Rd = cpu_r[ELPMX_Rd(opcode)]; ++ TCGv addr = gen_get_zaddr(); ++ ++ tcg_gen_qemu_ld8u(Rd, addr, MMU_CODE_IDX); /* Rd = mem[addr] */ ++ ++ tcg_gen_addi_tl(addr, addr, 1); /* addr = addr + 1 */ ++ ++ gen_set_zaddr(addr); ++ ++ tcg_temp_free_i32(addr); ++ ++ return BS_NONE; ++} ++ ++/* ++ * Performs the logical EOR between the contents of register Rd and ++ * register Rr and places the result in the destination register Rd. ++ */ ++static int avr_translate_EOR(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[EOR_Rd(opcode)]; ++ TCGv Rr = cpu_r[EOR_Rr(opcode)]; ++ ++ tcg_gen_xor_tl(Rd, Rd, Rr); ++ ++ tcg_gen_movi_tl(cpu_Vf, 0); ++ gen_ZNSf(Rd); ++ ++ return BS_NONE; ++} ++ ++/* ++ * This instruction performs 8-bit x 8-bit -> 16-bit unsigned ++ * multiplication and shifts the result one bit left. ++ */ ++static int avr_translate_FMUL(DisasContext *ctx, uint32_t opcode) ++{ ++ if (avr_feature(ctx->env, AVR_FEATURE_MUL) == false) { ++ gen_helper_unsupported(cpu_env); ++ ++ return BS_EXCP; ++ } ++ ++ TCGv R0 = cpu_r[0]; ++ TCGv R1 = cpu_r[1]; ++ TCGv Rd = cpu_r[16 + FMUL_Rd(opcode)]; ++ TCGv Rr = cpu_r[16 + FMUL_Rr(opcode)]; ++ TCGv R = tcg_temp_new_i32(); ++ ++ tcg_gen_mul_tl(R, Rd, Rr); /* R = Rd *Rr */ ++ tcg_gen_shli_tl(R, R, 1); ++ ++ tcg_gen_andi_tl(R0, R, 0xff); ++ tcg_gen_shri_tl(R, R, 8); ++ tcg_gen_andi_tl(R1, R, 0xff); ++ ++ tcg_gen_shri_tl(cpu_Cf, R, 16); /* Cf = R(16) */ ++ tcg_gen_andi_tl(cpu_Zf, R, 0x0000ffff); ++ ++ tcg_temp_free_i32(R); ++ ++ return BS_NONE; ++} ++ ++/* ++ * This instruction performs 8-bit x 8-bit -> 16-bit signed multiplication ++ * and shifts the result one bit left. 
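FMUL above implements the unsigned 1.7 x 1.7 fixed-point multiply: the 16-bit product is shifted left once so that R1:R0 holds a 1.15 result, C receives the bit shifted out at the top and Z reflects the full 16-bit result. A host-side sketch of that data path per the instruction description (the helper is illustrative, not part of the patch); FMULS and FMULSU below differ only in treating one or both operands as signed.

#include <stdint.h>
#include <stdbool.h>

/* Reference model of FMUL Rd,Rr: unsigned fractional multiply. */
static void fmul_ref(uint8_t rd, uint8_t rr,
                     uint8_t *r0, uint8_t *r1, bool *C, bool *Z)
{
    uint32_t p = (uint32_t)rd * rr;  /* 16-bit product */
    uint32_t s = p << 1;             /* fractional adjust */

    *r0 = (uint8_t)s;
    *r1 = (uint8_t)(s >> 8);
    *C  = (s >> 16) & 1;             /* bit shifted out on the left */
    *Z  = (uint16_t)s == 0;
}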
++ */ ++static int avr_translate_FMULS(DisasContext *ctx, uint32_t opcode) ++{ ++ if (avr_feature(ctx->env, AVR_FEATURE_MUL) == false) { ++ gen_helper_unsupported(cpu_env); ++ ++ return BS_EXCP; ++ } ++ ++ TCGv R0 = cpu_r[0]; ++ TCGv R1 = cpu_r[1]; ++ TCGv Rd = cpu_r[16 + FMULS_Rd(opcode)]; ++ TCGv Rr = cpu_r[16 + FMULS_Rr(opcode)]; ++ TCGv R = tcg_temp_new_i32(); ++ TCGv t0 = tcg_temp_new_i32(); ++ TCGv t1 = tcg_temp_new_i32(); ++ ++ tcg_gen_ext8s_tl(t0, Rd); /* make Rd full 32 bit signed */ ++ tcg_gen_ext8s_tl(t1, Rr); /* make Rr full 32 bit signed */ ++ tcg_gen_mul_tl(R, t0, t1); /* R = Rd *Rr */ ++ tcg_gen_shli_tl(R, R, 1); ++ ++ tcg_gen_andi_tl(R0, R, 0xff); ++ tcg_gen_shri_tl(R, R, 8); ++ tcg_gen_andi_tl(R1, R, 0xff); ++ ++ tcg_gen_shri_tl(cpu_Cf, R, 16); /* Cf = R(16) */ ++ tcg_gen_andi_tl(cpu_Zf, R, 0x0000ffff); ++ ++ tcg_temp_free_i32(t1); ++ tcg_temp_free_i32(t0); ++ tcg_temp_free_i32(R); ++ ++ return BS_NONE; ++} ++ ++/* ++ * This instruction performs 8-bit x 8-bit -> 16-bit signed multiplication ++ * and shifts the result one bit left. ++ */ ++static int avr_translate_FMULSU(DisasContext *ctx, uint32_t opcode) ++{ ++ if (avr_feature(ctx->env, AVR_FEATURE_MUL) == false) { ++ gen_helper_unsupported(cpu_env); ++ ++ return BS_EXCP; ++ } ++ ++ TCGv R0 = cpu_r[0]; ++ TCGv R1 = cpu_r[1]; ++ TCGv Rd = cpu_r[16 + FMULSU_Rd(opcode)]; ++ TCGv Rr = cpu_r[16 + FMULSU_Rr(opcode)]; ++ TCGv R = tcg_temp_new_i32(); ++ TCGv t0 = tcg_temp_new_i32(); ++ ++ tcg_gen_ext8s_tl(t0, Rd); /* make Rd full 32 bit signed */ ++ tcg_gen_mul_tl(R, t0, Rr); /* R = Rd *Rr */ ++ tcg_gen_shli_tl(R, R, 1); ++ ++ tcg_gen_andi_tl(R0, R, 0xff); ++ tcg_gen_shri_tl(R, R, 8); ++ tcg_gen_andi_tl(R1, R, 0xff); ++ ++ tcg_gen_shri_tl(cpu_Cf, R, 16); /* Cf = R(16) */ ++ tcg_gen_andi_tl(cpu_Zf, R, 0x0000ffff); ++ ++ tcg_temp_free_i32(t0); ++ tcg_temp_free_i32(R); ++ ++ return BS_NONE; ++} ++ ++/* ++ * Calls to a subroutine within the entire 4M (words) Program memory. The ++ * return address (to the instruction after the CALL) will be stored onto the ++ * Stack. See also RCALL. The Stack Pointer uses a post-decrement scheme during ++ * CALL. This instruction is not available in all devices. Refer to the device ++ * specific instruction set summary. ++ */ ++static int avr_translate_ICALL(DisasContext *ctx, uint32_t opcode) ++{ ++ if (avr_feature(ctx->env, AVR_FEATURE_IJMP_ICALL) == false) { ++ gen_helper_unsupported(cpu_env); ++ ++ return BS_EXCP; ++ } ++ ++ int ret = ctx->inst[0].npc; ++ ++ gen_push_ret(ctx, ret); ++ gen_jmp_z(); ++ ++ return BS_BRANCH; ++} ++ ++/* ++ * Indirect jump to the address pointed to by the Z (16 bits) Pointer ++ * Register in the Register File. The Z-pointer Register is 16 bits wide and ++ * allows jump within the lowest 64K words (128KB) section of Program memory. ++ * This instruction is not available in all devices. Refer to the device ++ * specific instruction set summary. ++ */ ++static int avr_translate_IJMP(DisasContext *ctx, uint32_t opcode) ++{ ++ if (avr_feature(ctx->env, AVR_FEATURE_IJMP_ICALL) == false) { ++ gen_helper_unsupported(cpu_env); ++ ++ return BS_EXCP; ++ } ++ ++ gen_jmp_z(); ++ ++ return BS_BRANCH; ++} ++ ++/* ++ * Loads data from the I/O Space (Ports, Timers, Configuration Registers, ++ * etc.) into register Rd in the Register File. 
++ */ ++static int avr_translate_IN(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[IN_Rd(opcode)]; ++ int Imm = IN_Imm(opcode); ++ TCGv port = tcg_const_i32(Imm); ++ ++ gen_helper_inb(Rd, cpu_env, port); ++ ++ tcg_temp_free_i32(port); ++ ++ return BS_NONE; ++} ++ ++/* ++ * Adds one -1- to the contents of register Rd and places the result in the ++ * destination register Rd. The C Flag in SREG is not affected by the ++ * operation, thus allowing the INC instruction to be used on a loop counter in ++ * multiple-precision computations. When operating on unsigned numbers, only ++ * BREQ and BRNE branches can be expected to perform consistently. When ++ * operating on two's complement values, all signed branches are available. ++ */ ++static int avr_translate_INC(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[INC_Rd(opcode)]; ++ ++ tcg_gen_addi_tl(Rd, Rd, 1); ++ tcg_gen_andi_tl(Rd, Rd, 0xff); ++ ++ /* cpu_Vf = Rd == 0x80 */ ++ tcg_gen_setcondi_tl(TCG_COND_EQ, cpu_Vf, Rd, 0x80); ++ gen_ZNSf(Rd); ++ return BS_NONE; ++} ++ ++/* ++ * Jump to an address within the entire 4M (words) Program memory. See also ++ * RJMP. This instruction is not available in all devices. Refer to the device ++ * specific instruction set summary.0 ++ */ ++static int avr_translate_JMP(DisasContext *ctx, uint32_t opcode) ++{ ++ if (avr_feature(ctx->env, AVR_FEATURE_JMP_CALL) == false) { ++ gen_helper_unsupported(cpu_env); ++ ++ return BS_EXCP; ++ } ++ ++ gen_goto_tb(ctx, 0, JMP_Imm(opcode)); ++ return BS_BRANCH; ++} ++ ++/* ++ * Load one byte indirect from data space to register and stores an clear ++ * the bits in data space specified by the register. The instruction can only ++ * be used towards internal SRAM. The data location is pointed to by the Z (16 ++ * bits) Pointer Register in the Register File. Memory access is limited to the ++ * current data segment of 64KB. To access another data segment in devices with ++ * more than 64KB data space, the RAMPZ in register in the I/O area has to be ++ * changed. The Z-pointer Register is left unchanged by the operation. This ++ * instruction is especially suited for clearing status bits stored in SRAM. ++ */ ++static void gen_data_store(DisasContext *ctx, TCGv data, TCGv addr) ++{ ++ if (ctx->tb->flags & TB_FLAGS_FULL_ACCESS) { ++ gen_helper_fullwr(cpu_env, data, addr); ++ } else { ++ tcg_gen_qemu_st8(data, addr, MMU_DATA_IDX); /* mem[addr] = data */ ++ } ++} ++ ++static void gen_data_load(DisasContext *ctx, TCGv data, TCGv addr) ++{ ++ if (ctx->tb->flags & TB_FLAGS_FULL_ACCESS) { ++ gen_helper_fullrd(data, cpu_env, addr); ++ } else { ++ tcg_gen_qemu_ld8u(data, addr, MMU_DATA_IDX); /* data = mem[addr] */ ++ } ++} ++ ++static int avr_translate_LAC(DisasContext *ctx, uint32_t opcode) ++{ ++ if (avr_feature(ctx->env, AVR_FEATURE_RMW) == false) { ++ gen_helper_unsupported(cpu_env); ++ ++ return BS_EXCP; ++ } ++ ++ TCGv Rr = cpu_r[LAC_Rr(opcode)]; ++ TCGv addr = gen_get_zaddr(); ++ TCGv t0 = tcg_temp_new_i32(); ++ TCGv t1 = tcg_temp_new_i32(); ++ ++ gen_data_load(ctx, t0, addr); /* t0 = mem[addr] */ ++ /* t1 = t0 & (0xff - Rr) = t0 and ~Rr */ ++ tcg_gen_andc_tl(t1, t0, Rr); ++ ++ tcg_gen_mov_tl(Rr, t0); /* Rr = t0 */ ++ gen_data_store(ctx, t1, addr); /* mem[addr] = t1 */ ++ ++ tcg_temp_free_i32(t1); ++ tcg_temp_free_i32(t0); ++ tcg_temp_free_i32(addr); ++ ++ return BS_NONE; ++} ++ ++/* ++ * Load one byte indirect from data space to register and set bits in data ++ * space specified by the register. The instruction can only be used towards ++ * internal SRAM. 
The data location is pointed to by the Z (16 bits) Pointer ++ * Register in the Register File. Memory access is limited to the current data ++ * segment of 64KB. To access another data segment in devices with more than ++ * 64KB data space, the RAMPZ in register in the I/O area has to be changed. ++ * The Z-pointer Register is left unchanged by the operation. This instruction ++ * is especially suited for setting status bits stored in SRAM. ++ */ ++static int avr_translate_LAS(DisasContext *ctx, uint32_t opcode) ++{ ++ if (avr_feature(ctx->env, AVR_FEATURE_RMW) == false) { ++ gen_helper_unsupported(cpu_env); ++ ++ return BS_EXCP; ++ } ++ ++ TCGv Rr = cpu_r[LAS_Rr(opcode)]; ++ TCGv addr = gen_get_zaddr(); ++ TCGv t0 = tcg_temp_new_i32(); ++ TCGv t1 = tcg_temp_new_i32(); ++ ++ gen_data_load(ctx, t0, addr); /* t0 = mem[addr] */ ++ tcg_gen_or_tl(t1, t0, Rr); ++ ++ tcg_gen_mov_tl(Rr, t0); /* Rr = t0 */ ++ gen_data_store(ctx, t1, addr); /* mem[addr] = t1 */ ++ ++ tcg_temp_free_i32(t1); ++ tcg_temp_free_i32(t0); ++ tcg_temp_free_i32(addr); ++ ++ return BS_NONE; ++} ++ ++/* ++ * Load one byte indirect from data space to register and toggles bits in ++ * the data space specified by the register. The instruction can only be used ++ * towards SRAM. The data location is pointed to by the Z (16 bits) Pointer ++ * Register in the Register File. Memory access is limited to the current data ++ * segment of 64KB. To access another data segment in devices with more than ++ * 64KB data space, the RAMPZ in register in the I/O area has to be changed. ++ * The Z-pointer Register is left unchanged by the operation. This instruction ++ * is especially suited for changing status bits stored in SRAM. ++ */ ++static int avr_translate_LAT(DisasContext *ctx, uint32_t opcode) ++{ ++ if (avr_feature(ctx->env, AVR_FEATURE_RMW) == false) { ++ gen_helper_unsupported(cpu_env); ++ ++ return BS_EXCP; ++ } ++ ++ TCGv Rr = cpu_r[LAT_Rr(opcode)]; ++ TCGv addr = gen_get_zaddr(); ++ TCGv t0 = tcg_temp_new_i32(); ++ TCGv t1 = tcg_temp_new_i32(); ++ ++ gen_data_load(ctx, t0, addr); /* t0 = mem[addr] */ ++ tcg_gen_xor_tl(t1, t0, Rr); ++ ++ tcg_gen_mov_tl(Rr, t0); /* Rr = t0 */ ++ gen_data_store(ctx, t1, addr); /* mem[addr] = t1 */ ++ ++ tcg_temp_free_i32(t1); ++ tcg_temp_free_i32(t0); ++ tcg_temp_free_i32(addr); ++ ++ return BS_NONE; ++} ++ ++/* ++ * Loads one byte indirect from the data space to a register. For parts ++ * with SRAM, the data space consists of the Register File, I/O memory and ++ * internal SRAM (and external SRAM if applicable). For parts without SRAM, the ++ * data space consists of the Register File only. In some parts the Flash ++ * Memory has been mapped to the data space and can be read using this command. ++ * The EEPROM has a separate address space. The data location is pointed to by ++ * the X (16 bits) Pointer Register in the Register File. Memory access is ++ * limited to the current data segment of 64KB. To access another data segment ++ * in devices with more than 64KB data space, the RAMPX in register in the I/O ++ * area has to be changed. The X-pointer Register can either be left unchanged ++ * by the operation, or it can be post-incremented or predecremented. These ++ * features are especially suited for accessing arrays, tables, and Stack ++ * Pointer usage of the X-pointer Register. Note that only the low byte of the ++ * X-pointer is updated in devices with no more than 256 bytes data space. 
For ++ * such devices, the high byte of the pointer is not used by this instruction ++ * and can be used for other purposes. The RAMPX Register in the I/O area is ++ * updated in parts with more than 64KB data space or more than 64KB Program ++ * memory, and the increment/decrement is added to the entire 24-bit address on ++ * such devices. Not all variants of this instruction is available in all ++ * devices. Refer to the device specific instruction set summary. In the ++ * Reduced Core tinyAVR the LD instruction can be used to achieve the same ++ * operation as LPM since the program memory is mapped to the data memory ++ * space. ++ */ ++static int avr_translate_LDX1(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[LDX1_Rd(opcode)]; ++ TCGv addr = gen_get_xaddr(); ++ ++ gen_data_load(ctx, Rd, addr); ++ ++ tcg_temp_free_i32(addr); ++ ++ return BS_NONE; ++} ++ ++static int avr_translate_LDX2(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[LDX2_Rd(opcode)]; ++ TCGv addr = gen_get_xaddr(); ++ ++ gen_data_load(ctx, Rd, addr); ++ tcg_gen_addi_tl(addr, addr, 1); /* addr = addr + 1 */ ++ ++ gen_set_xaddr(addr); ++ ++ tcg_temp_free_i32(addr); ++ ++ return BS_NONE; ++} ++ ++static int avr_translate_LDX3(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[LDX3_Rd(opcode)]; ++ TCGv addr = gen_get_xaddr(); ++ ++ tcg_gen_subi_tl(addr, addr, 1); /* addr = addr - 1 */ ++ gen_data_load(ctx, Rd, addr); ++ gen_set_xaddr(addr); ++ ++ tcg_temp_free_i32(addr); ++ ++ return BS_NONE; ++} ++ ++/* ++ * Loads one byte indirect with or without displacement from the data space ++ * to a register. For parts with SRAM, the data space consists of the Register ++ * File, I/O memory and internal SRAM (and external SRAM if applicable). For ++ * parts without SRAM, the data space consists of the Register File only. In ++ * some parts the Flash Memory has been mapped to the data space and can be ++ * read using this command. The EEPROM has a separate address space. The data ++ * location is pointed to by the Y (16 bits) Pointer Register in the Register ++ * File. Memory access is limited to the current data segment of 64KB. To ++ * access another data segment in devices with more than 64KB data space, the ++ * RAMPY in register in the I/O area has to be changed. The Y-pointer Register ++ * can either be left unchanged by the operation, or it can be post-incremented ++ * or predecremented. These features are especially suited for accessing ++ * arrays, tables, and Stack Pointer usage of the Y-pointer Register. Note that ++ * only the low byte of the Y-pointer is updated in devices with no more than ++ * 256 bytes data space. For such devices, the high byte of the pointer is not ++ * used by this instruction and can be used for other purposes. The RAMPY ++ * Register in the I/O area is updated in parts with more than 64KB data space ++ * or more than 64KB Program memory, and the increment/decrement/displacement ++ * is added to the entire 24-bit address on such devices. Not all variants of ++ * this instruction is available in all devices. Refer to the device specific ++ * instruction set summary. In the Reduced Core tinyAVR the LD instruction can ++ * be used to achieve the same operation as LPM since the program memory is ++ * mapped to the data memory space. 
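The three LD-via-X forms above differ only in when the pointer moves: LD Rd,X leaves it untouched, LD Rd,X+ post-increments and LD Rd,-X pre-decrements, with gen_get_xaddr/gen_set_xaddr combining RAMPX with R27:R26. A host-side sketch of the three modes; the memory callback and the 24-bit wrap-around are illustrative simplifications, not taken from the patch.

#include <stdint.h>

typedef uint8_t (*data_read_fn)(uint32_t addr);

enum ld_mode { LD_PLAIN, LD_POSTINC, LD_PREDEC };

/* Reference model of LD Rd,X / X+ / -X on a 24-bit pointer. */
static uint8_t ldx_ref(data_read_fn mem, uint32_t *ptr, enum ld_mode mode)
{
    if (mode == LD_PREDEC) {
        *ptr = (*ptr - 1) & 0xffffff;
    }
    uint8_t v = mem(*ptr);
    if (mode == LD_POSTINC) {
        *ptr = (*ptr + 1) & 0xffffff;
    }
    return v;
}

The Y and Z based loads that follow work the same way, with the LDD forms additionally adding a 6-bit displacement to the base address and leaving the pointer unchanged.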
++ */ ++static int avr_translate_LDY2(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[LDY2_Rd(opcode)]; ++ TCGv addr = gen_get_yaddr(); ++ ++ gen_data_load(ctx, Rd, addr); ++ tcg_gen_addi_tl(addr, addr, 1); /* addr = addr + 1 */ ++ ++ gen_set_yaddr(addr); ++ ++ tcg_temp_free_i32(addr); ++ ++ return BS_NONE; ++} ++ ++static int avr_translate_LDY3(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[LDY3_Rd(opcode)]; ++ TCGv addr = gen_get_yaddr(); ++ ++ tcg_gen_subi_tl(addr, addr, 1); /* addr = addr - 1 */ ++ gen_data_load(ctx, Rd, addr); ++ gen_set_yaddr(addr); ++ ++ tcg_temp_free_i32(addr); ++ ++ return BS_NONE; ++} ++ ++static int avr_translate_LDDY(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[LDDY_Rd(opcode)]; ++ TCGv addr = gen_get_yaddr(); ++ ++ tcg_gen_addi_tl(addr, addr, LDDY_Imm(opcode)); /* addr = addr + q */ ++ gen_data_load(ctx, Rd, addr); ++ ++ tcg_temp_free_i32(addr); ++ ++ return BS_NONE; ++} ++ ++/* ++ * Loads one byte indirect with or without displacement from the data space ++ * to a register. For parts with SRAM, the data space consists of the Register ++ * File, I/O memory and internal SRAM (and external SRAM if applicable). For ++ * parts without SRAM, the data space consists of the Register File only. In ++ * some parts the Flash Memory has been mapped to the data space and can be ++ * read using this command. The EEPROM has a separate address space. The data ++ * location is pointed to by the Z (16 bits) Pointer Register in the Register ++ * File. Memory access is limited to the current data segment of 64KB. To ++ * access another data segment in devices with more than 64KB data space, the ++ * RAMPZ in register in the I/O area has to be changed. The Z-pointer Register ++ * can either be left unchanged by the operation, or it can be post-incremented ++ * or predecremented. These features are especially suited for Stack Pointer ++ * usage of the Z-pointer Register, however because the Z-pointer Register can ++ * be used for indirect subroutine calls, indirect jumps and table lookup, it ++ * is often more convenient to use the X or Y-pointer as a dedicated Stack ++ * Pointer. Note that only the low byte of the Z-pointer is updated in devices ++ * with no more than 256 bytes data space. For such devices, the high byte of ++ * the pointer is not used by this instruction and can be used for other ++ * purposes. The RAMPZ Register in the I/O area is updated in parts with more ++ * than 64KB data space or more than 64KB Program memory, and the ++ * increment/decrement/displacement is added to the entire 24-bit address on ++ * such devices. Not all variants of this instruction is available in all ++ * devices. Refer to the device specific instruction set summary. In the ++ * Reduced Core tinyAVR the LD instruction can be used to achieve the same ++ * operation as LPM since the program memory is mapped to the data memory ++ * space. For using the Z-pointer for table lookup in Program memory see the ++ * LPM and ELPM instructions. 
++ */ ++static int avr_translate_LDZ2(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[LDZ2_Rd(opcode)]; ++ TCGv addr = gen_get_zaddr(); ++ ++ gen_data_load(ctx, Rd, addr); ++ tcg_gen_addi_tl(addr, addr, 1); /* addr = addr + 1 */ ++ ++ gen_set_zaddr(addr); ++ ++ tcg_temp_free_i32(addr); ++ ++ return BS_NONE; ++} ++ ++static int avr_translate_LDZ3(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[LDZ3_Rd(opcode)]; ++ TCGv addr = gen_get_zaddr(); ++ ++ tcg_gen_subi_tl(addr, addr, 1); /* addr = addr - 1 */ ++ gen_data_load(ctx, Rd, addr); ++ ++ gen_set_zaddr(addr); ++ ++ tcg_temp_free_i32(addr); ++ ++ return BS_NONE; ++} ++ ++static int avr_translate_LDDZ(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[LDDZ_Rd(opcode)]; ++ TCGv addr = gen_get_zaddr(); ++ ++ tcg_gen_addi_tl(addr, addr, LDDZ_Imm(opcode)); ++ /* addr = addr + q */ ++ gen_data_load(ctx, Rd, addr); ++ ++ tcg_temp_free_i32(addr); ++ ++ return BS_NONE; ++} ++ ++/* ++ Loads an 8 bit constant directly to register 16 to 31. ++ */ ++static int avr_translate_LDI(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[16 + LDI_Rd(opcode)]; ++ int imm = LDI_Imm(opcode); ++ ++ tcg_gen_movi_tl(Rd, imm); ++ ++ return BS_NONE; ++} ++ ++/* ++ * Loads one byte from the data space to a register. For parts with SRAM, ++ * the data space consists of the Register File, I/O memory and internal SRAM ++ * (and external SRAM if applicable). For parts without SRAM, the data space ++ * consists of the register file only. The EEPROM has a separate address space. ++ * A 16-bit address must be supplied. Memory access is limited to the current ++ * data segment of 64KB. The LDS instruction uses the RAMPD Register to access ++ * memory above 64KB. To access another data segment in devices with more than ++ * 64KB data space, the RAMPD in register in the I/O area has to be changed. ++ * This instruction is not available in all devices. Refer to the device ++ * specific instruction set summary. ++ */ ++static int avr_translate_LDS(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[LDS_Rd(opcode)]; ++ TCGv addr = tcg_temp_new_i32(); ++ TCGv H = cpu_rampD; ++ ++ tcg_gen_mov_tl(addr, H); /* addr = H:M:L */ ++ tcg_gen_shli_tl(addr, addr, 16); ++ tcg_gen_ori_tl(addr, addr, LDS_Imm(opcode)); ++ ++ gen_data_load(ctx, Rd, addr); ++ ++ tcg_temp_free_i32(addr); ++ ++ return BS_NONE; ++} ++ ++/* ++ * Loads one byte pointed to by the Z-register into the destination ++ * register Rd. This instruction features a 100% space effective constant ++ * initialization or constant data fetch. The Program memory is organized in ++ * 16-bit words while the Z-pointer is a byte address. Thus, the least ++ * significant bit of the Z-pointer selects either low byte (ZLSB = 0) or high ++ * byte (ZLSB = 1). This instruction can address the first 64KB (32K words) of ++ * Program memory. The Zpointer Register can either be left unchanged by the ++ * operation, or it can be incremented. The incrementation does not apply to ++ * the RAMPZ Register. Devices with Self-Programming capability can use the ++ * LPM instruction to read the Fuse and Lock bit values. Refer to the device ++ * documentation for a detailed description. The LPM instruction is not ++ * available in all devices. 
Refer to the device specific instruction set ++ * summary ++ */ ++static int avr_translate_LPM1(DisasContext *ctx, uint32_t opcode) ++{ ++ if (avr_feature(ctx->env, AVR_FEATURE_LPM) == false) { ++ gen_helper_unsupported(cpu_env); ++ ++ return BS_EXCP; ++ } ++ ++ TCGv Rd = cpu_r[0]; ++ TCGv addr = tcg_temp_new_i32(); ++ TCGv H = cpu_r[31]; ++ TCGv L = cpu_r[30]; ++ ++ tcg_gen_shli_tl(addr, H, 8); /* addr = H:L */ ++ tcg_gen_or_tl(addr, addr, L); ++ ++ tcg_gen_qemu_ld8u(Rd, addr, MMU_CODE_IDX); /* Rd = mem[addr] */ ++ ++ tcg_temp_free_i32(addr); ++ ++ return BS_NONE; ++} ++ ++static int avr_translate_LPM2(DisasContext *ctx, uint32_t opcode) ++{ ++ if (avr_feature(ctx->env, AVR_FEATURE_LPM) == false) { ++ gen_helper_unsupported(cpu_env); ++ ++ return BS_EXCP; ++ } ++ ++ TCGv Rd = cpu_r[LPM2_Rd(opcode)]; ++ TCGv addr = tcg_temp_new_i32(); ++ TCGv H = cpu_r[31]; ++ TCGv L = cpu_r[30]; ++ ++ tcg_gen_shli_tl(addr, H, 8); /* addr = H:L */ ++ tcg_gen_or_tl(addr, addr, L); ++ ++ tcg_gen_qemu_ld8u(Rd, addr, MMU_CODE_IDX); /* Rd = mem[addr] */ ++ ++ tcg_temp_free_i32(addr); ++ ++ return BS_NONE; ++} ++ ++static int avr_translate_LPMX(DisasContext *ctx, uint32_t opcode) ++{ ++ if (avr_feature(ctx->env, AVR_FEATURE_LPMX) == false) { ++ gen_helper_unsupported(cpu_env); ++ ++ return BS_EXCP; ++ } ++ ++ TCGv Rd = cpu_r[LPMX_Rd(opcode)]; ++ TCGv addr = tcg_temp_new_i32(); ++ TCGv H = cpu_r[31]; ++ TCGv L = cpu_r[30]; ++ ++ tcg_gen_shli_tl(addr, H, 8); /* addr = H:L */ ++ tcg_gen_or_tl(addr, addr, L); ++ ++ tcg_gen_qemu_ld8u(Rd, addr, MMU_CODE_IDX); /* Rd = mem[addr] */ ++ ++ tcg_gen_addi_tl(addr, addr, 1); /* addr = addr + 1 */ ++ ++ tcg_gen_andi_tl(L, addr, 0xff); ++ ++ tcg_gen_shri_tl(addr, addr, 8); ++ tcg_gen_andi_tl(H, addr, 0xff); ++ ++ tcg_temp_free_i32(addr); ++ ++ return BS_NONE; ++} ++ ++/* ++ * Shifts all bits in Rd one place to the right. Bit 7 is cleared. Bit 0 is ++ * loaded into the C Flag of the SREG. This operation effectively divides an ++ * unsigned value by two. The C Flag can be used to round the result. ++ */ ++static int avr_translate_LSR(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[LSR_Rd(opcode)]; ++ ++ tcg_gen_andi_tl(cpu_Cf, Rd, 1); ++ ++ tcg_gen_shri_tl(Rd, Rd, 1); ++ ++ gen_ZNSf(Rd); ++ tcg_gen_xor_tl(cpu_Vf, cpu_Nf, cpu_Cf); ++ return BS_NONE; ++} ++ ++/* ++ * This instruction makes a copy of one register into another. The source ++ * register Rr is left unchanged, while the destination register Rd is loaded ++ * with a copy of Rr. ++ */ ++static int avr_translate_MOV(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[MOV_Rd(opcode)]; ++ TCGv Rr = cpu_r[MOV_Rr(opcode)]; ++ ++ tcg_gen_mov_tl(Rd, Rr); ++ ++ return BS_NONE; ++} ++ ++/* ++ * This instruction makes a copy of one register pair into another register ++ * pair. The source register pair Rr+1:Rr is left unchanged, while the ++ * destination register pair Rd+1:Rd is loaded with a copy of Rr + 1:Rr. This ++ * instruction is not available in all devices. Refer to the device specific ++ * instruction set summary. 
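The LPM variants above form the byte address directly from ZH:ZL and load it through MMU_CODE_IDX, so they rely on program memory being byte addressable in that address space; LPM Rd,Z+ then splits the incremented address back into R31:R30. Architecturally the flash is word organized and bit 0 of Z picks the low or high byte of a program word, as in this host-side sketch (the flash accessor is hypothetical):

#include <stdint.h>

typedef uint16_t (*flash_word_fn)(uint32_t word_addr);

/* Reference model of LPM: Z is a byte address into word-organized flash. */
static uint8_t lpm_ref(flash_word_fn flash, uint8_t *zh, uint8_t *zl, int postinc)
{
    uint16_t z = (uint16_t)((*zh << 8) | *zl);
    uint16_t w = flash(z >> 1);                       /* program word */
    uint8_t  v = (z & 1) ? (uint8_t)(w >> 8) : (uint8_t)w;

    if (postinc) {                                    /* LPM Rd,Z+ */
        z++;
        *zl = (uint8_t)z;
        *zh = (uint8_t)(z >> 8);
    }
    return v;
}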
++ */ ++static int avr_translate_MOVW(DisasContext *ctx, uint32_t opcode) ++{ ++ if (avr_feature(ctx->env, AVR_FEATURE_MOVW) == false) { ++ gen_helper_unsupported(cpu_env); ++ ++ return BS_EXCP; ++ } ++ ++ TCGv RdL = cpu_r[MOVW_Rd(opcode) * 2 + 0]; ++ TCGv RdH = cpu_r[MOVW_Rd(opcode) * 2 + 1]; ++ TCGv RrL = cpu_r[MOVW_Rr(opcode) * 2 + 0]; ++ TCGv RrH = cpu_r[MOVW_Rr(opcode) * 2 + 1]; ++ ++ tcg_gen_mov_tl(RdH, RrH); ++ tcg_gen_mov_tl(RdL, RrL); ++ ++ return BS_NONE; ++} ++ ++/* ++ * This instruction performs 8-bit x 8-bit -> 16-bit unsigned multiplication. ++ */ ++static int avr_translate_MUL(DisasContext *ctx, uint32_t opcode) ++{ ++ if (avr_feature(ctx->env, AVR_FEATURE_MUL) == false) { ++ gen_helper_unsupported(cpu_env); ++ ++ return BS_EXCP; ++ } ++ ++ TCGv R0 = cpu_r[0]; ++ TCGv R1 = cpu_r[1]; ++ TCGv Rd = cpu_r[MUL_Rd(opcode)]; ++ TCGv Rr = cpu_r[MUL_Rr(opcode)]; ++ TCGv R = tcg_temp_new_i32(); ++ ++ tcg_gen_mul_tl(R, Rd, Rr); /* R = Rd *Rr */ ++ ++ tcg_gen_mov_tl(R0, R); ++ tcg_gen_andi_tl(R0, R0, 0xff); ++ tcg_gen_shri_tl(R, R, 8); ++ tcg_gen_mov_tl(R1, R); ++ ++ tcg_gen_shri_tl(cpu_Cf, R, 15); /* Cf = R(16) */ ++ tcg_gen_mov_tl(cpu_Zf, R); ++ ++ tcg_temp_free_i32(R); ++ ++ return BS_NONE; ++} ++ ++/* ++ * This instruction performs 8-bit x 8-bit -> 16-bit signed multiplication. ++ */ ++static int avr_translate_MULS(DisasContext *ctx, uint32_t opcode) ++{ ++ if (avr_feature(ctx->env, AVR_FEATURE_MUL) == false) { ++ gen_helper_unsupported(cpu_env); ++ ++ return BS_EXCP; ++ } ++ ++ TCGv R0 = cpu_r[0]; ++ TCGv R1 = cpu_r[1]; ++ TCGv Rd = cpu_r[16 + MULS_Rd(opcode)]; ++ TCGv Rr = cpu_r[16 + MULS_Rr(opcode)]; ++ TCGv R = tcg_temp_new_i32(); ++ TCGv t0 = tcg_temp_new_i32(); ++ TCGv t1 = tcg_temp_new_i32(); ++ ++ tcg_gen_ext8s_tl(t0, Rd); /* make Rd full 32 bit signed */ ++ tcg_gen_ext8s_tl(t1, Rr); /* make Rr full 32 bit signed */ ++ tcg_gen_mul_tl(R, t0, t1); /* R = Rd * Rr */ ++ ++ tcg_gen_mov_tl(R0, R); ++ tcg_gen_andi_tl(R0, R0, 0xff); ++ tcg_gen_shri_tl(R, R, 8); ++ tcg_gen_mov_tl(R1, R); ++ tcg_gen_andi_tl(R1, R0, 0xff); ++ ++ tcg_gen_shri_tl(cpu_Cf, R, 15); /* Cf = R(16) */ ++ tcg_gen_mov_tl(cpu_Zf, R); ++ ++ tcg_temp_free_i32(t1); ++ tcg_temp_free_i32(t0); ++ tcg_temp_free_i32(R); ++ ++ return BS_NONE; ++} ++ ++/* ++ * This instruction performs 8-bit x 8-bit -> 16-bit multiplication of a ++ * signed and an unsigned number. ++ */ ++static int avr_translate_MULSU(DisasContext *ctx, uint32_t opcode) ++{ ++ if (avr_feature(ctx->env, AVR_FEATURE_MUL) == false) { ++ gen_helper_unsupported(cpu_env); ++ ++ return BS_EXCP; ++ } ++ ++ TCGv R0 = cpu_r[0]; ++ TCGv R1 = cpu_r[1]; ++ TCGv Rd = cpu_r[16 + MULSU_Rd(opcode)]; ++ TCGv Rr = cpu_r[16 + MULSU_Rr(opcode)]; ++ TCGv R = tcg_temp_new_i32(); ++ TCGv t0 = tcg_temp_new_i32(); ++ ++ tcg_gen_ext8s_tl(t0, Rd); /* make Rd full 32 bit signed */ ++ tcg_gen_mul_tl(R, t0, Rr); /* R = Rd *Rr */ ++ ++ tcg_gen_mov_tl(R0, R); ++ tcg_gen_andi_tl(R0, R0, 0xff); ++ tcg_gen_shri_tl(R, R, 8); ++ tcg_gen_mov_tl(R1, R); ++ tcg_gen_andi_tl(R1, R0, 0xff); ++ ++ tcg_gen_shri_tl(cpu_Cf, R, 16); /* Cf = R(16) */ ++ tcg_gen_mov_tl(cpu_Zf, R); ++ ++ tcg_temp_free_i32(t0); ++ tcg_temp_free_i32(R); ++ ++ return BS_NONE; ++} ++ ++/* ++ * Replaces the contents of register Rd with its two's complement; the ++ * value $80 is left unchanged. 
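For the multiply family above, the architectural contract is the same in all three cases: the 16-bit product lands in R1:R0, C is bit 15 of the product and Z is set when the whole 16-bit product is zero; MUL treats both operands as unsigned, MULS both as signed and MULSU mixes them. A compact host-side reference of those rules (the enum and helper are illustrative, not part of the patch):

#include <stdint.h>
#include <stdbool.h>

enum mul_kind { MUL_UU, MUL_SS, MUL_SU };  /* unsigned, signed, signed x unsigned */

/* Reference model of MUL/MULS/MULSU: R1:R0 = product, C = bit 15, Z = (product == 0). */
static void mul_ref(enum mul_kind kind, uint8_t rd, uint8_t rr,
                    uint8_t *r0, uint8_t *r1, bool *C, bool *Z)
{
    int32_t a = (kind == MUL_UU) ? rd : (int8_t)rd;
    int32_t b = (kind == MUL_SS) ? (int8_t)rr : rr;
    uint16_t p = (uint16_t)(a * b);

    *r0 = (uint8_t)p;
    *r1 = (uint8_t)(p >> 8);
    *C  = (p >> 15) & 1;
    *Z  = (p == 0);
}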
++ */ ++static int avr_translate_NEG(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[SUB_Rd(opcode)]; ++ TCGv t0 = tcg_const_i32(0); ++ TCGv R = tcg_temp_new_i32(); ++ ++ /* op */ ++ tcg_gen_sub_tl(R, t0, Rd); /* R = 0 - Rd */ ++ tcg_gen_andi_tl(R, R, 0xff); /* make it 8 bits */ ++ ++ gen_sub_CHf(R, t0, Rd); ++ gen_sub_Vf(R, t0, Rd); ++ gen_ZNSf(R); ++ ++ /* R */ ++ tcg_gen_mov_tl(Rd, R); ++ ++ tcg_temp_free_i32(t0); ++ tcg_temp_free_i32(R); ++ ++ return BS_NONE; ++} ++ ++/* ++ * This instruction performs a single cycle No Operation. ++ */ ++static int avr_translate_NOP(DisasContext *ctx, uint32_t opcode) ++{ ++ ++ /* NOP */ ++ ++ return BS_NONE; ++} ++ ++/* ++ * Performs the logical OR between the contents of register Rd and register ++ * Rr and places the result in the destination register Rd. ++ */ ++static int avr_translate_OR(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[OR_Rd(opcode)]; ++ TCGv Rr = cpu_r[OR_Rr(opcode)]; ++ TCGv R = tcg_temp_new_i32(); ++ ++ tcg_gen_or_tl(R, Rd, Rr); ++ ++ tcg_gen_movi_tl(cpu_Vf, 0); ++ gen_ZNSf(R); ++ ++ tcg_temp_free_i32(R); ++ ++ return BS_NONE; ++} ++ ++/* ++ * Performs the logical OR between the contents of register Rd and a ++ * constant and places the result in the destination register Rd. ++ */ ++static int avr_translate_ORI(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[16 + ORI_Rd(opcode)]; ++ int Imm = (ORI_Imm(opcode)); ++ ++ tcg_gen_ori_tl(Rd, Rd, Imm); /* Rd = Rd | Imm */ ++ ++ tcg_gen_movi_tl(cpu_Vf, 0x00); /* Vf = 0 */ ++ gen_ZNSf(Rd); ++ ++ return BS_NONE; ++} ++ ++/* ++ * Stores data from register Rr in the Register File to I/O Space (Ports, ++ * Timers, Configuration Registers, etc.). ++ */ ++static int avr_translate_OUT(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[OUT_Rd(opcode)]; ++ int Imm = OUT_Imm(opcode); ++ TCGv port = tcg_const_i32(Imm); ++ ++ gen_helper_outb(cpu_env, port, Rd); ++ ++ tcg_temp_free_i32(port); ++ ++ return BS_NONE; ++} ++ ++/* ++ * This instruction loads register Rd with a byte from the STACK. The Stack ++ * Pointer is pre-incremented by 1 before the POP. This instruction is not ++ * available in all devices. Refer to the device specific instruction set ++ * summary. ++ */ ++static int avr_translate_POP(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[POP_Rd(opcode)]; ++ ++ tcg_gen_addi_tl(cpu_sp, cpu_sp, 1); ++ gen_data_load(ctx, Rd, cpu_sp); ++ ++ return BS_NONE; ++} ++ ++/* ++ * This instruction stores the contents of register Rr on the STACK. The ++ * Stack Pointer is post-decremented by 1 after the PUSH. This instruction is ++ * not available in all devices. Refer to the device specific instruction set ++ * summary. ++ */ ++static int avr_translate_PUSH(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[PUSH_Rd(opcode)]; ++ ++ gen_data_store(ctx, Rd, cpu_sp); ++ tcg_gen_subi_tl(cpu_sp, cpu_sp, 1); ++ ++ return BS_NONE; ++} ++ ++/* ++ * Relative call to an address within PC - 2K + 1 and PC + 2K (words). The ++ * return address (the instruction after the RCALL) is stored onto the Stack. ++ * See also CALL. For AVR microcontrollers with Program memory not exceeding 4K ++ * words (8KB) this instruction can address the entire memory from every ++ * address location. The Stack Pointer uses a post-decrement scheme during ++ * RCALL. 
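PUSH and POP above follow the same stack discipline that gen_push_ret/gen_pop_ret use for return addresses: the stack grows towards lower addresses, PUSH stores at SP and then decrements, POP increments first and then loads. As a host-side sketch (the data-memory callbacks stand in for gen_data_store/gen_data_load and are not part of the patch):

#include <stdint.h>

typedef uint8_t (*ld8_fn)(uint32_t addr);
typedef void (*st8_fn)(uint32_t addr, uint8_t val);

/* PUSH Rr: store at SP, then post-decrement. */
static void push_ref(st8_fn st, uint32_t *sp, uint8_t v)
{
    st(*sp, v);
    (*sp)--;
}

/* POP Rd: pre-increment, then load from SP. */
static uint8_t pop_ref(ld8_fn ld, uint32_t *sp)
{
    (*sp)++;
    return ld(*sp);
}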
++ */ ++static int avr_translate_RCALL(DisasContext *ctx, uint32_t opcode) ++{ ++ int ret = ctx->inst[0].npc; ++ int dst = ctx->inst[0].npc + sextract32(RCALL_Imm(opcode), 0, 12); ++ ++ gen_push_ret(ctx, ret); ++ gen_goto_tb(ctx, 0, dst); ++ ++ return BS_BRANCH; ++} ++ ++/* ++ * Returns from subroutine. The return address is loaded from the STACK. ++ * The Stack Pointer uses a preincrement scheme during RET. ++ */ ++static int avr_translate_RET(DisasContext *ctx, uint32_t opcode) ++{ ++ gen_pop_ret(ctx, cpu_pc); ++ ++ tcg_gen_exit_tb(0); ++ ++ return BS_BRANCH; ++} ++ ++/* ++ * Returns from interrupt. The return address is loaded from the STACK and ++ * the Global Interrupt Flag is set. Note that the Status Register is not ++ * automatically stored when entering an interrupt routine, and it is not ++ * restored when returning from an interrupt routine. This must be handled by ++ * the application program. The Stack Pointer uses a pre-increment scheme ++ * during RETI. ++ */ ++static int avr_translate_RETI(DisasContext *ctx, uint32_t opcode) ++{ ++ gen_pop_ret(ctx, cpu_pc); ++ ++ tcg_gen_movi_tl(cpu_If, 1); ++ ++ tcg_gen_exit_tb(0); ++ ++ return BS_BRANCH; ++} ++ ++/* ++ * Relative jump to an address within PC - 2K +1 and PC + 2K (words). For ++ * AVR microcontrollers with Program memory not exceeding 4K words (8KB) this ++ * instruction can address the entire memory from every address location. See ++ * also JMP. ++ */ ++static int avr_translate_RJMP(DisasContext *ctx, uint32_t opcode) ++{ ++ int dst = ctx->inst[0].npc + sextract32(RJMP_Imm(opcode), 0, 12); ++ ++ gen_goto_tb(ctx, 0, dst); ++ ++ return BS_BRANCH; ++} ++ ++/* ++ * Shifts all bits in Rd one place to the right. The C Flag is shifted into ++ * bit 7 of Rd. Bit 0 is shifted into the C Flag. This operation, combined ++ * with ASR, effectively divides multi-byte signed values by two. Combined with ++ * LSR it effectively divides multi-byte unsigned values by two. The Carry Flag ++ * can be used to round the result. ++ */ ++static int avr_translate_ROR(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[ROR_Rd(opcode)]; ++ TCGv t0 = tcg_temp_new_i32(); ++ ++ tcg_gen_shli_tl(t0, cpu_Cf, 7); ++ tcg_gen_andi_tl(cpu_Cf, Rd, 1); /* Cf = Rd(0) */ ++ tcg_gen_shri_tl(Rd, Rd, 1); ++ tcg_gen_or_tl(Rd, Rd, t0); ++ ++ gen_ZNSf(Rd); ++ tcg_gen_xor_tl(cpu_Vf, cpu_Nf, cpu_Cf); ++ ++ tcg_temp_free_i32(t0); ++ ++ return BS_NONE; ++} ++ ++/* ++ * Subtracts two registers and subtracts with the C Flag and places the ++ * result in the destination register Rd.
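ROR above, together with LSR and ASR earlier in the file, forms the usual shift/rotate group: bit 0 always ends up in C, the difference is what enters bit 7 (the old C for ROR, zero for LSR, a copy of the old bit 7 for ASR), and V is recomputed as N xor C afterwards. A host-side sketch of ROR under those rules (illustrative helper, not part of the patch):

#include <stdint.h>
#include <stdbool.h>

/* Reference model of ROR Rd: rotate right through carry. */
static uint8_t ror_ref(uint8_t rd, bool *C, bool *N, bool *V, bool *Z, bool *S)
{
    bool cin = *C;
    *C = rd & 1;                                /* bit 0 goes to carry */
    uint8_t r = (uint8_t)((rd >> 1) | (cin << 7));

    *N = r & 0x80;
    *Z = (r == 0);
    *V = *N ^ *C;
    *S = *N ^ *V;
    return r;
}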
++ */ ++static int avr_translate_SBC(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[SBC_Rd(opcode)]; ++ TCGv Rr = cpu_r[SBC_Rr(opcode)]; ++ TCGv R = tcg_temp_new_i32(); ++ ++ /* op */ ++ tcg_gen_sub_tl(R, Rd, Rr); /* R = Rd - Rr - Cf */ ++ tcg_gen_sub_tl(R, R, cpu_Cf); ++ tcg_gen_andi_tl(R, R, 0xff); /* make it 8 bits */ ++ ++ gen_sub_CHf(R, Rd, Rr); ++ gen_sub_Vf(R, Rd, Rr); ++ gen_ZNSf(R); ++ ++ /* R */ ++ tcg_gen_mov_tl(Rd, R); ++ ++ tcg_temp_free_i32(R); ++ ++ return BS_NONE; ++} ++ ++/* ++ * SBCI -- Subtract Immediate with Carry ++ */ ++static int avr_translate_SBCI(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rd = cpu_r[16 + SBCI_Rd(opcode)]; ++ TCGv Rr = tcg_const_i32(SBCI_Imm(opcode)); ++ TCGv R = tcg_temp_new_i32(); ++ ++ /* op */ ++ tcg_gen_sub_tl(R, Rd, Rr); /* R = Rd - Rr - Cf */ ++ tcg_gen_sub_tl(R, R, cpu_Cf); ++ tcg_gen_andi_tl(R, R, 0xff); /* make it 8 bits */ ++ ++ gen_sub_CHf(R, Rd, Rr); ++ gen_sub_Vf(R, Rd, Rr); ++ gen_ZNSf(R); ++ ++ /* R */ ++ tcg_gen_mov_tl(Rd, R); ++ ++ tcg_temp_free_i32(R); ++ tcg_temp_free_i32(Rr); ++ ++ return BS_NONE; ++} ++ ++/* ++ * Sets a specified bit in an I/O Register. This instruction operates on ++ * the lower 32 I/O Registers -- addresses 0-31. ++ */ ++static int avr_translate_SBI(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv data = tcg_temp_new_i32(); ++ TCGv port = tcg_const_i32(SBI_Imm(opcode)); ++ ++ gen_helper_inb(data, cpu_env, port); ++ tcg_gen_ori_tl(data, data, 1 << SBI_Bit(opcode)); ++ gen_helper_outb(cpu_env, port, data); ++ ++ tcg_temp_free_i32(port); ++ tcg_temp_free_i32(data); ++ ++ return BS_NONE; ++} ++ ++/* ++ * This instruction tests a single bit in an I/O Register and skips the ++ * next instruction if the bit is cleared. This instruction operates on the ++ * lower 32 I/O Registers -- addresses 0-31. ++ */ ++static int avr_translate_SBIC(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv data = tcg_temp_new_i32(); ++ TCGv port = tcg_const_i32(SBIC_Imm(opcode)); ++ TCGLabel *skip = gen_new_label(); ++ ++ gen_helper_inb(data, cpu_env, port); ++ ++ /* PC if next inst is skipped */ ++ tcg_gen_movi_tl(cpu_pc, ctx->inst[1].npc); ++ tcg_gen_andi_tl(data, data, 1 << SBIC_Bit(opcode)); ++ tcg_gen_brcondi_i32(TCG_COND_EQ, data, 0, skip); ++ /* PC if next inst is not skipped */ ++ tcg_gen_movi_tl(cpu_pc, ctx->inst[0].npc); ++ gen_set_label(skip); ++ ++ tcg_temp_free_i32(port); ++ tcg_temp_free_i32(data); ++ ++ return BS_BRANCH; ++} ++ ++/* ++ * This instruction tests a single bit in an I/O Register and skips the ++ * next instruction if the bit is set. This instruction operates on the lower ++ * 32 I/O Registers -- addresses 0-31. ++ */ ++static int avr_translate_SBIS(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv data = tcg_temp_new_i32(); ++ TCGv port = tcg_const_i32(SBIS_Imm(opcode)); ++ TCGLabel *skip = gen_new_label(); ++ ++ gen_helper_inb(data, cpu_env, port); ++ ++ /* PC if next inst is skipped */ ++ tcg_gen_movi_tl(cpu_pc, ctx->inst[1].npc); ++ tcg_gen_andi_tl(data, data, 1 << SBIS_Bit(opcode)); ++ tcg_gen_brcondi_i32(TCG_COND_NE, data, 0, skip); ++ /* PC if next inst is not skipped */ ++ tcg_gen_movi_tl(cpu_pc, ctx->inst[0].npc); ++ gen_set_label(skip); ++ ++ tcg_temp_free_i32(port); ++ tcg_temp_free_i32(data); ++ ++ return BS_BRANCH; ++} ++ ++/* ++ * Subtracts an immediate value (0-63) from a register pair and places the ++ * result in the register pair. This instruction operates on the upper four ++ * register pairs, and is well suited for operations on the Pointer Registers. 
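SBIC and SBIS above share one skip idiom with SBRC, SBRS and CPSE: cpu_pc is first set to ctx->inst[1].npc, the address past the instruction that may be skipped, the tested condition then branches over a fallback store of ctx->inst[0].npc, and the translator returns BS_BRANCH so the block ends there. The observable effect, as a small host-side sketch (names are illustrative):

#include <stdint.h>
#include <stdbool.h>

/* Skip idiom used by SBIC/SBIS/SBRC/SBRS/CPSE: resume after the next
 * instruction when 'skip' holds, otherwise at the next instruction.
 * npc0 is the next PC, npc1 the PC past the possibly skipped instruction. */
static uint32_t skip_target(bool skip, uint32_t npc0, uint32_t npc1)
{
    return skip ? npc1 : npc0;
}

This is also why ctx->inst[] holds more than one entry: the skip target depends on the length of the instruction that follows.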
++ * This instruction is not available in all devices. Refer to the device ++ * specific instruction set summary. ++ */ ++static int avr_translate_SBIW(DisasContext *ctx, uint32_t opcode) ++{ ++ if (avr_feature(ctx->env, AVR_FEATURE_ADIW_SBIW) == false) { ++ gen_helper_unsupported(cpu_env); ++ ++ return BS_EXCP; ++ } ++ ++ TCGv RdL = cpu_r[24 + 2 * SBIW_Rd(opcode)]; ++ TCGv RdH = cpu_r[25 + 2 * SBIW_Rd(opcode)]; ++ int Imm = (SBIW_Imm(opcode)); ++ TCGv R = tcg_temp_new_i32(); ++ TCGv Rd = tcg_temp_new_i32(); ++ ++ /* op */ ++ tcg_gen_deposit_tl(Rd, RdL, RdH, 8, 8); /* Rd = RdH:RdL */ ++ tcg_gen_subi_tl(R, Rd, Imm); /* R = Rd - Imm */ ++ tcg_gen_andi_tl(R, R, 0xffff); /* make it 16 bits */ ++ ++ /* Cf */ ++ tcg_gen_andc_tl(cpu_Cf, R, Rd); ++ tcg_gen_shri_tl(cpu_Cf, cpu_Cf, 15); /* Cf = R & ~Rd */ ++ ++ /* Vf */ ++ tcg_gen_andc_tl(cpu_Vf, Rd, R); ++ tcg_gen_shri_tl(cpu_Vf, cpu_Vf, 15); /* Vf = Rd & ~R */ ++ ++ /* Zf */ ++ tcg_gen_mov_tl(cpu_Zf, R); /* Zf = R */ ++ ++ /* Nf */ ++ tcg_gen_shri_tl(cpu_Nf, R, 15); /* Nf = R(15) */ ++ ++ /* Sf */ ++ tcg_gen_xor_tl(cpu_Sf, cpu_Nf, cpu_Vf); /* Sf = Nf ^ Vf */ ++ ++ /* R */ ++ tcg_gen_andi_tl(RdL, R, 0xff); ++ tcg_gen_shri_tl(RdH, R, 8); ++ ++ tcg_temp_free_i32(Rd); ++ tcg_temp_free_i32(R); ++ ++ return BS_NONE; ++} ++ ++/* ++ * This instruction tests a single bit in a register and skips the next ++ * instruction if the bit is cleared. ++ */ ++static int avr_translate_SBRC(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rr = cpu_r[SBRC_Rr(opcode)]; ++ TCGv t0 = tcg_temp_new_i32(); ++ TCGLabel *skip = gen_new_label(); ++ ++ /* PC if next inst is skipped */ ++ tcg_gen_movi_tl(cpu_pc, ctx->inst[1].npc); ++ tcg_gen_andi_tl(t0, Rr, 1 << SBRC_Bit(opcode)); ++ tcg_gen_brcondi_i32(TCG_COND_EQ, t0, 0, skip); ++ /* PC if next inst is not skipped */ ++ tcg_gen_movi_tl(cpu_pc, ctx->inst[0].npc); ++ gen_set_label(skip); ++ ++ tcg_temp_free_i32(t0); ++ ++ return BS_BRANCH; ++} ++ ++/* ++ * This instruction tests a single bit in a register and skips the next ++ * instruction if the bit is set. ++ */ ++static int avr_translate_SBRS(DisasContext *ctx, uint32_t opcode) ++{ ++ TCGv Rr = cpu_r[SBRS_Rr(opcode)]; ++ TCGv t0 = tcg_temp_new_i32(); ++ TCGLabel *skip = gen_new_label(); ++ ++ /* PC if next inst is skipped */ ++ tcg_gen_movi_tl(cpu_pc, ctx->inst[1].npc); ++ tcg_gen_andi_tl(t0, Rr, 1 << SBRS_Bit(opcode)); ++ tcg_gen_brcondi_i32(TCG_COND_NE, t0, 0, skip); ++ /* PC if next inst is not skipped */ ++ tcg_gen_movi_tl(cpu_pc, ctx->inst[0].npc); ++ gen_set_label(skip); ++ ++ tcg_temp_free_i32(t0); ++ ++ return BS_BRANCH; ++} ++ ++/* ++ * This instruction sets the circuit in sleep mode defined by the MCU ++ * Control Register. ++ */ ++static int avr_translate_SLEEP(DisasContext *ctx, uint32_t opcode) ++{ ++ gen_helper_sleep(cpu_env); ++ ++ return BS_EXCP; ++} ++ ++/* ++ * SPM can be used to erase a page in the Program memory, to write a page ++ * in the Program memory (that is already erased), and to set Boot Loader Lock ++ * bits. In some devices, the Program memory can be written one word at a time, ++ * in other devices an entire page can be programmed simultaneously after first ++ * filling a temporary page buffer. In all cases, the Program memory must be ++ * erased one page at a time. When erasing the Program memory, the RAMPZ and ++ * Z-register are used as page address. When writing the Program memory, the ++ * RAMPZ and Z-register are used as page or word address, and the R1:R0 ++ * register pair is used as data(1). 
++/*
++ * SPM can be used to erase a page in the Program memory, to write a page
++ * in the Program memory (that is already erased), and to set Boot Loader Lock
++ * bits. In some devices, the Program memory can be written one word at a time,
++ * in other devices an entire page can be programmed simultaneously after first
++ * filling a temporary page buffer. In all cases, the Program memory must be
++ * erased one page at a time. When erasing the Program memory, the RAMPZ and
++ * Z-register are used as page address. When writing the Program memory, the
++ * RAMPZ and Z-register are used as page or word address, and the R1:R0
++ * register pair is used as data(1). When setting the Boot Loader Lock bits,
++ * the R1:R0 register pair is used as data. Refer to the device documentation
++ * for detailed description of SPM usage. This instruction can address the
++ * entire Program memory. The SPM instruction is not available in all devices.
++ * Refer to the device specific instruction set summary. Note: 1. R1
++ * determines the instruction high byte, and R0 determines the instruction low
++ * byte.
++ */
++static int avr_translate_SPM(DisasContext *ctx, uint32_t opcode)
++{
++    if (avr_feature(ctx->env, AVR_FEATURE_SPM) == false) {
++        gen_helper_unsupported(cpu_env);
++
++        return BS_EXCP;
++    }
++
++    /* TODO: ??? */
++    return BS_NONE;
++}
++
++static int avr_translate_SPMX(DisasContext *ctx, uint32_t opcode)
++{
++    if (avr_feature(ctx->env, AVR_FEATURE_SPMX) == false) {
++        gen_helper_unsupported(cpu_env);
++
++        return BS_EXCP;
++    }
++
++    /* TODO: ??? */
++    return BS_NONE;
++}
++
++static int avr_translate_STX1(DisasContext *ctx, uint32_t opcode)
++{
++    TCGv Rd = cpu_r[STX1_Rr(opcode)];
++    TCGv addr = gen_get_xaddr();
++
++    gen_data_store(ctx, Rd, addr);
++
++    tcg_temp_free_i32(addr);
++
++    return BS_NONE;
++}
++
++static int avr_translate_STX2(DisasContext *ctx, uint32_t opcode)
++{
++    TCGv Rd = cpu_r[STX2_Rr(opcode)];
++    TCGv addr = gen_get_xaddr();
++
++    gen_data_store(ctx, Rd, addr);
++    tcg_gen_addi_tl(addr, addr, 1); /* addr = addr + 1 */
++    gen_set_xaddr(addr);
++
++    tcg_temp_free_i32(addr);
++
++    return BS_NONE;
++}
++
++static int avr_translate_STX3(DisasContext *ctx, uint32_t opcode)
++{
++    TCGv Rd = cpu_r[STX3_Rr(opcode)];
++    TCGv addr = gen_get_xaddr();
++
++    tcg_gen_subi_tl(addr, addr, 1); /* addr = addr - 1 */
++    gen_data_store(ctx, Rd, addr);
++    gen_set_xaddr(addr);
++
++    tcg_temp_free_i32(addr);
++
++    return BS_NONE;
++}
++
++static int avr_translate_STY2(DisasContext *ctx, uint32_t opcode)
++{
++    TCGv Rd = cpu_r[STY2_Rd(opcode)];
++    TCGv addr = gen_get_yaddr();
++
++    gen_data_store(ctx, Rd, addr);
++    tcg_gen_addi_tl(addr, addr, 1); /* addr = addr + 1 */
++    gen_set_yaddr(addr);
++
++    tcg_temp_free_i32(addr);
++
++    return BS_NONE;
++}
++
++static int avr_translate_STY3(DisasContext *ctx, uint32_t opcode)
++{
++    TCGv Rd = cpu_r[STY3_Rd(opcode)];
++    TCGv addr = gen_get_yaddr();
++
++    tcg_gen_subi_tl(addr, addr, 1); /* addr = addr - 1 */
++    gen_data_store(ctx, Rd, addr);
++    gen_set_yaddr(addr);
++
++    tcg_temp_free_i32(addr);
++
++    return BS_NONE;
++}
++
++static int avr_translate_STDY(DisasContext *ctx, uint32_t opcode)
++{
++    TCGv Rd = cpu_r[STDY_Rd(opcode)];
++    TCGv addr = gen_get_yaddr();
++
++    tcg_gen_addi_tl(addr, addr, STDY_Imm(opcode)); /* addr = addr + q */
++    gen_data_store(ctx, Rd, addr);
++
++    tcg_temp_free_i32(addr);
++
++    return BS_NONE;
++}
++
++static int avr_translate_STZ2(DisasContext *ctx, uint32_t opcode)
++{
++    TCGv Rd = cpu_r[STZ2_Rd(opcode)];
++    TCGv addr = gen_get_zaddr();
++
++    gen_data_store(ctx, Rd, addr);
++    tcg_gen_addi_tl(addr, addr, 1); /* addr = addr + 1 */
++
++    gen_set_zaddr(addr);
++
++    tcg_temp_free_i32(addr);
++
++    return BS_NONE;
++}
++
++static int avr_translate_STZ3(DisasContext *ctx, uint32_t opcode)
++{
++    TCGv Rd = cpu_r[STZ3_Rd(opcode)];
++    TCGv addr = gen_get_zaddr();
++
++    tcg_gen_subi_tl(addr, addr, 1); /* addr = addr - 1 */
++    gen_data_store(ctx, Rd, addr);
++
++    gen_set_zaddr(addr);
++
++    tcg_temp_free_i32(addr);
++
++    return BS_NONE;
++}
++
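++/*
++ * The ST/STD variants above and STDZ below differ only in how the pointer
++ * register is updated around gen_data_store():
++ *
++ *     STX1              : ST  X,  Rr   -- store, pointer unchanged
++ *     STX2/STY2/STZ2    : ST  X+, Rr   -- store, then pointer incremented
++ *     STX3/STY3/STZ3    : ST  -X, Rr   -- pointer decremented, then store
++ *     STDY/STDZ         : STD Y+q, Rr  -- store at pointer + q, unchanged
++ *
++ * gen_get_xaddr()/gen_set_xaddr() and the Y/Z counterparts, defined earlier
++ * in this file, are expected to read and write back the RAMP-extended
++ * X/Y/Z pointer.
++ */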
++static int avr_translate_STDZ(DisasContext *ctx, uint32_t opcode)
++{
++    TCGv Rd = cpu_r[STDZ_Rd(opcode)];
++    TCGv addr = gen_get_zaddr();
++
++    tcg_gen_addi_tl(addr, addr, STDZ_Imm(opcode)); /* addr = addr + q */
++    gen_data_store(ctx, Rd, addr);
++
++    tcg_temp_free_i32(addr);
++
++    return BS_NONE;
++}
++
++/*
++ * Stores one byte from a Register to the data space. For parts with SRAM,
++ * the data space consists of the Register File, I/O memory and internal SRAM
++ * (and external SRAM if applicable). For parts without SRAM, the data space
++ * consists of the Register File only. The EEPROM has a separate address space.
++ * A 16-bit address must be supplied. Memory access is limited to the current
++ * data segment of 64KB. The STS instruction uses the RAMPD Register to access
++ * memory above 64KB. To access another data segment in devices with more than
++ * 64KB data space, the RAMPD Register in the I/O area has to be changed.
++ * This instruction is not available in all devices. Refer to the device
++ * specific instruction set summary.
++ */
++static int avr_translate_STS(DisasContext *ctx, uint32_t opcode)
++{
++    TCGv Rd = cpu_r[STS_Rd(opcode)];
++    TCGv addr = tcg_temp_new_i32();
++    TCGv H = cpu_rampD;
++
++    tcg_gen_mov_tl(addr, H); /* addr = H:M:L */
++    tcg_gen_shli_tl(addr, addr, 16);
++    tcg_gen_ori_tl(addr, addr, STS_Imm(opcode));
++
++    gen_data_store(ctx, Rd, addr);
++
++    tcg_temp_free_i32(addr);
++
++    return BS_NONE;
++}
++
++/*
++ * Subtracts two registers and places the result in the destination
++ * register Rd.
++ */
++static int avr_translate_SUB(DisasContext *ctx, uint32_t opcode)
++{
++    TCGv Rd = cpu_r[SUB_Rd(opcode)];
++    TCGv Rr = cpu_r[SUB_Rr(opcode)];
++    TCGv R = tcg_temp_new_i32();
++
++    /* op */
++    tcg_gen_sub_tl(R, Rd, Rr); /* R = Rd - Rr */
++    tcg_gen_andi_tl(R, R, 0xff); /* make it 8 bits */
++
++    gen_sub_CHf(R, Rd, Rr);
++    gen_sub_Vf(R, Rd, Rr);
++    gen_ZNSf(R);
++
++    /* R */
++    tcg_gen_mov_tl(Rd, R);
++
++    tcg_temp_free_i32(R);
++
++    return BS_NONE;
++}
++
++/*
++ * Subtracts a register and a constant and places the result in the
++ * destination register Rd. This instruction operates on registers R16 to R31
++ * and is very well suited for operations on the X, Y, and Z-pointers.
++ */
++static int avr_translate_SUBI(DisasContext *ctx, uint32_t opcode)
++{
++    TCGv Rd = cpu_r[16 + SUBI_Rd(opcode)];
++    TCGv Rr = tcg_const_i32(SUBI_Imm(opcode));
++    TCGv R = tcg_temp_new_i32();
++
++    /* op */
++    tcg_gen_sub_tl(R, Rd, Rr); /* R = Rd - Imm */
++    tcg_gen_andi_tl(R, R, 0xff); /* make it 8 bits */
++
++    gen_sub_CHf(R, Rd, Rr);
++    gen_sub_Vf(R, Rd, Rr);
++    gen_ZNSf(R);
++
++    /* R */
++    tcg_gen_mov_tl(Rd, R);
++
++    tcg_temp_free_i32(R);
++    tcg_temp_free_i32(Rr);
++
++    return BS_NONE;
++}
++
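++/*
++ * For reference: SUB/SUBI above (and SBC/SBCI earlier) leave R = Rd - Rr
++ * and derive SREG through gen_sub_CHf(), gen_sub_Vf() and gen_ZNSf(), which
++ * are defined earlier in this file.  Per the AVR instruction set manual the
++ * borrow/overflow bits of an 8-bit subtract are expected to be:
++ *
++ *     Cf = bit 7 of (~Rd & Rr) | (Rr & R) | (R & ~Rd)    borrow out
++ *     Hf = bit 3 of (~Rd & Rr) | (Rr & R) | (R & ~Rd)    borrow from bit 3
++ *     Vf = bit 7 of (Rd & ~Rr & ~R) | (~Rd & Rr & R)     signed overflow
++ *     Nf = bit 7 of R,  Sf = Nf ^ Vf,  Z set when R == 0
++ *
++ * (a sketch of the intended semantics, not a quote of those helpers).
++ */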
++/*
++ * Swaps high and low nibbles in a register.
++ */
++static int avr_translate_SWAP(DisasContext *ctx, uint32_t opcode)
++{
++    TCGv Rd = cpu_r[SWAP_Rd(opcode)];
++    TCGv t0 = tcg_temp_new_i32();
++    TCGv t1 = tcg_temp_new_i32();
++
++    tcg_gen_andi_tl(t0, Rd, 0x0f);
++    tcg_gen_shli_tl(t0, t0, 4);
++    tcg_gen_andi_tl(t1, Rd, 0xf0);
++    tcg_gen_shri_tl(t1, t1, 4);
++    tcg_gen_or_tl(Rd, t0, t1);
++
++    tcg_temp_free_i32(t1);
++    tcg_temp_free_i32(t0);
++
++    return BS_NONE;
++}
++
++/*
++ * This instruction resets the Watchdog Timer. This instruction must be
++ * executed within a limited time given by the WD prescaler. See the Watchdog
++ * Timer hardware specification.
++ */
++static int avr_translate_WDR(DisasContext *ctx, uint32_t opcode)
++{
++    gen_helper_wdr(cpu_env);
++
++    return BS_NONE;
++}
++
++/*
++ * Exchanges one byte indirect between register and data space. The data
++ * location is pointed to by the Z (16 bits) Pointer Register in the Register
++ * File. Memory access is limited to the current data segment of 64KB. To
++ * access another data segment in devices with more than 64KB data space, the
++ * RAMPZ Register in the I/O area has to be changed. The Z-pointer Register
++ * is left unchanged by the operation. This instruction is especially suited
++ * for writing/reading status bits stored in SRAM.
++ */
++static int avr_translate_XCH(DisasContext *ctx, uint32_t opcode)
++{
++    if (avr_feature(ctx->env, AVR_FEATURE_RMW) == false) {
++        gen_helper_unsupported(cpu_env);
++
++        return BS_EXCP;
++    }
++
++    TCGv Rd = cpu_r[XCH_Rd(opcode)];
++    TCGv t0 = tcg_temp_new_i32();
++    TCGv addr = gen_get_zaddr();
++
++    gen_data_load(ctx, t0, addr);
++    gen_data_store(ctx, Rd, addr);
++    tcg_gen_mov_tl(Rd, t0);
++
++    tcg_temp_free_i32(t0);
++    tcg_temp_free_i32(addr);
++
++    return BS_NONE;
++}
++
++#include "decode.inc.c"
++
++void avr_translate_init(void)
++{
++    int i;
++    static int done_init;
++
++    if (done_init) {
++        return;
++    }
++#define AVR_REG_OFFS(x) offsetof(CPUAVRState, x)
++    cpu_env = tcg_global_reg_new_ptr(TCG_AREG0, "env");
++    cpu_pc = tcg_global_mem_new_i32(cpu_env, AVR_REG_OFFS(pc_w), "pc");
++    cpu_Cf = tcg_global_mem_new_i32(cpu_env, AVR_REG_OFFS(sregC), "Cf");
++    cpu_Zf = tcg_global_mem_new_i32(cpu_env, AVR_REG_OFFS(sregZ), "Zf");
++    cpu_Nf = tcg_global_mem_new_i32(cpu_env, AVR_REG_OFFS(sregN), "Nf");
++    cpu_Vf = tcg_global_mem_new_i32(cpu_env, AVR_REG_OFFS(sregV), "Vf");
++    cpu_Sf = tcg_global_mem_new_i32(cpu_env, AVR_REG_OFFS(sregS), "Sf");
++    cpu_Hf = tcg_global_mem_new_i32(cpu_env, AVR_REG_OFFS(sregH), "Hf");
++    cpu_Tf = tcg_global_mem_new_i32(cpu_env, AVR_REG_OFFS(sregT), "Tf");
++    cpu_If = tcg_global_mem_new_i32(cpu_env, AVR_REG_OFFS(sregI), "If");
++    cpu_rampD = tcg_global_mem_new_i32(cpu_env, AVR_REG_OFFS(rampD), "rampD");
++    cpu_rampX = tcg_global_mem_new_i32(cpu_env, AVR_REG_OFFS(rampX), "rampX");
++    cpu_rampY = tcg_global_mem_new_i32(cpu_env, AVR_REG_OFFS(rampY), "rampY");
++    cpu_rampZ = tcg_global_mem_new_i32(cpu_env, AVR_REG_OFFS(rampZ), "rampZ");
++    cpu_eind = tcg_global_mem_new_i32(cpu_env, AVR_REG_OFFS(eind), "eind");
++    cpu_sp = tcg_global_mem_new_i32(cpu_env, AVR_REG_OFFS(sp), "sp");
++
++    for (i = 0; i < 32; i++) {
++        char name[16];
++
++        sprintf(name, "r[%d]", i);
++
++        cpu_r[i] = tcg_global_mem_new_i32(cpu_env, AVR_REG_OFFS(r[i]), name);
++    }
++
++    done_init = 1;
++}
++
++static void decode_opc(DisasContext *ctx, InstInfo *inst)
++{
++    /* PC points to words. */
++    inst->opcode = cpu_ldl_code(ctx->env, inst->cpc * 2);
++    inst->length = 16;
++    inst->translate = NULL;
++
++    avr_decode(inst->cpc, &inst->length, inst->opcode, &inst->translate);
++
++    if (inst->length == 16) {
++        inst->npc = inst->cpc + 1;
++        /* get opcode as 16bit value */
++        inst->opcode = inst->opcode & 0x0000ffff;
++    }
++    if (inst->length == 32) {
++        inst->npc = inst->cpc + 2;
++        /* get opcode as 32bit value */
++        inst->opcode = (inst->opcode << 16)
++                     | (inst->opcode >> 16);
++    }
++}
++
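++/*
++ * decode_opc() works on word addresses: cpc/npc count 16-bit opcode words,
++ * so code bytes live at cpc * 2 and npc is cpc + 1 or cpc + 2 depending on
++ * the opcode length.  gen_intermediate_code() below always keeps two decoded
++ * slots, inst[0] (current) and inst[1] (next), so that skip instructions
++ * such as SBIC/SBRS can use inst[1].npc as their skip target.
++ */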
++/* generate intermediate code for basic block 'tb'. */
++void gen_intermediate_code(CPUAVRState *env, struct TranslationBlock *tb)
++{
++    AVRCPU *cpu = avr_env_get_cpu(env);
++    CPUState *cs = CPU(cpu);
++    DisasContext ctx;
++    target_ulong pc_start;
++    int num_insns, max_insns;
++    target_ulong cpc;
++    target_ulong npc;
++
++    pc_start = tb->pc / 2;
++    ctx.tb = tb;
++    ctx.env = env;
++    ctx.memidx = 0;
++    ctx.bstate = BS_NONE;
++    ctx.singlestep = cs->singlestep_enabled;
++    num_insns = 0;
++    max_insns = tb->cflags & CF_COUNT_MASK;
++
++    if (max_insns == 0) {
++        max_insns = CF_COUNT_MASK;
++    }
++    if (max_insns > TCG_MAX_INSNS) {
++        max_insns = TCG_MAX_INSNS;
++    }
++    if (tb->flags & TB_FLAGS_FULL_ACCESS) {
++        /*
++         * This flag is set by ST/LD instructions; we will regenerate the TB
++         * ONLY with cpu/memory access instead of plain mem access.
++         */
++        max_insns = 1;
++    }
++
++    gen_tb_start(tb);
++
++    /* decode first instruction */
++    ctx.inst[0].cpc = pc_start;
++    decode_opc(&ctx, &ctx.inst[0]);
++    do {
++        /* set curr/next PCs */
++        cpc = ctx.inst[0].cpc;
++        npc = ctx.inst[0].npc;
++
++        /* decode next instruction */
++        ctx.inst[1].cpc = ctx.inst[0].npc;
++        decode_opc(&ctx, &ctx.inst[1]);
++
++        /* translate current instruction */
++        tcg_gen_insn_start(cpc);
++        num_insns++;
++
++        /*
++         * This is due to some strange GDB behavior: let's assume main has
++         * address 0x100.
++         *     b main   - sets a breakpoint at address 0x00000100 (code)
++         *     b *0x100 - sets a breakpoint at address 0x00800100 (data)
++         */
++        if (unlikely(cpu_breakpoint_test(cs, PHYS_BASE_CODE + cpc * 2, BP_ANY))
++                || cpu_breakpoint_test(cs, PHYS_BASE_DATA + cpc * 2, BP_ANY)) {
++            tcg_gen_movi_i32(cpu_pc, cpc);
++            gen_helper_debug(cpu_env);
++            ctx.bstate = BS_EXCP;
++            goto done_generating;
++        }
++
++        if (ctx.inst[0].translate) {
++            ctx.bstate = ctx.inst[0].translate(&ctx, ctx.inst[0].opcode);
++        }
++
++        if (num_insns >= max_insns) {
++            break; /* max translated instructions limit reached */
++        }
++        if (ctx.singlestep) {
++            break; /* single step */
++        }
++        if ((cpc & (TARGET_PAGE_SIZE - 1)) == 0) {
++            break; /* page boundary */
++        }
++
++        ctx.inst[0] = ctx.inst[1]; /* make next inst curr */
++    } while (ctx.bstate == BS_NONE && !tcg_op_buf_full());
++
++    if (tb->cflags & CF_LAST_IO) {
++        gen_io_end();
++    }
++
++    if (ctx.singlestep) {
++        if (ctx.bstate == BS_STOP || ctx.bstate == BS_NONE) {
++            tcg_gen_movi_tl(cpu_pc, npc);
++        }
++        gen_helper_debug(cpu_env);
++        tcg_gen_exit_tb(0);
++    } else {
++        switch (ctx.bstate) {
++        case BS_STOP:
++        case BS_NONE:
++            gen_goto_tb(&ctx, 0, npc);
++            break;
++        case BS_EXCP:
++            tcg_gen_exit_tb(0);
++            break;
++        default:
++            break;
++        }
++    }
++
++done_generating:
++    gen_tb_end(tb, num_insns);
++
++    tb->size = (npc - pc_start) * 2;
++    tb->icount = num_insns;
++}
++
++void restore_state_to_opc(CPUAVRState *env, TranslationBlock *tb,
++                          target_ulong *data)
++{
++    env->pc_w = data[0];
++}
++
++void avr_cpu_dump_state(CPUState *cs, FILE *f, fprintf_function cpu_fprintf,
++                        int flags)
++{
++    AVRCPU *cpu = AVR_CPU(cs);
++    CPUAVRState *env = &cpu->env;
++    int i;
++
++    cpu_fprintf(f, "\n");
++    cpu_fprintf(f, "PC: %06x\n", env->pc_w);
++    cpu_fprintf(f, "SP: %04x\n", env->sp);
++    cpu_fprintf(f, "rampD: %02x\n", env->rampD >> 16);
++    cpu_fprintf(f, "rampX: %02x\n", env->rampX >> 16);
++    cpu_fprintf(f, "rampY: %02x\n", env->rampY >> 16);
++    cpu_fprintf(f, "rampZ: %02x\n", env->rampZ >> 16);
++    cpu_fprintf(f, "EIND: %02x\n", env->eind);
++    cpu_fprintf(f, "X: %02x%02x\n", env->r[27], env->r[26]);
++    cpu_fprintf(f, "Y: %02x%02x\n", env->r[29], env->r[28]);
++    cpu_fprintf(f, "Z: %02x%02x\n", env->r[31], env->r[30]);
++    cpu_fprintf(f, "SREG: [ %c %c %c %c %c %c %c %c ]\n",
++                env->sregI ? 'I' : '-',
++                env->sregT ? 'T' : '-',
++                env->sregH ? 'H' : '-',
++                env->sregS ? 'S' : '-',
++                env->sregV ? 'V' : '-',
++                env->sregN ? 'N' : '-',
++                env->sregZ ? '-' : 'Z', /* Zf has negative logic */
++                env->sregC ? 'C' : '-');
++
++    cpu_fprintf(f, "\n");
++    for (i = 0; i < ARRAY_SIZE(env->r); i++) {
++        cpu_fprintf(f, "R[%02d]: %02x ", i, env->r[i]);
++
++        if ((i % 8) == 7) {
++            cpu_fprintf(f, "\n");
++        }
++    }
++    cpu_fprintf(f, "\n");
++}
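++/*
++ * Note on the SREG display above: the flags are kept as separate 32-bit
++ * fields, and Zf holds the last result value, so the Z flag counts as set
++ * when sregZ == 0 (negative logic); the other fields are plain 0/1 values.
++ * A sketch of how an architectural 8-bit SREG value could be assembled from
++ * these fields (illustrative only -- the real conversion, if any, is done
++ * elsewhere in the target and is not shown here):
++ *
++ *     uint8_t sreg = (env->sregC & 1) << 0
++ *                  | (env->sregZ == 0) << 1
++ *                  | (env->sregN & 1) << 2
++ *                  | (env->sregV & 1) << 3
++ *                  | (env->sregS & 1) << 4
++ *                  | (env->sregH & 1) << 5
++ *                  | (env->sregT & 1) << 6
++ *                  | (env->sregI & 1) << 7;
++ */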