From patchwork Mon Apr 7 14:45:52 2025
X-Patchwork-Submitter: Kohei Tokunaga
X-Patchwork-Id: 14041097
From: Kohei Tokunaga <ktokunaga.mail@gmail.com>
To: qemu-devel@nongnu.org
Cc: Alex Bennée, Philippe Mathieu-Daudé, Thomas Huth, Richard Henderson,
    Paolo Bonzini, Kevin Wolf, Hanna Reitz, Kohei Tokunaga,
    Christian Schoenebeck, Greg Kurz, Palmer Dabbelt, Alistair Francis,
    Weiwei Li, Daniel Henrique Barboza, Liu Zhiwei, Marc-André Lureau,
    Daniel P. Berrangé, Eduardo Habkost, Peter Maydell, Stefan Hajnoczi,
    qemu-block@nongnu.org, qemu-riscv@nongnu.org, qemu-arm@nongnu.org
Subject: [PATCH 01/10] various: Fix type conflict of GLib function pointers
Date: Mon, 7 Apr 2025 23:45:52 +0900
Message-Id: <2be81d2f86704662c9fa33ceb46077804e34ac77.1744032780.git.ktokunaga.mail@gmail.com>

On emscripten, calling a function through a pointer cast to a mismatched type causes the call to fail. This commit fixes the function definitions to match the types expected at the call sites:
- qtest_set_command_cb passed to g_once should match GThreadFunc
- object_class_cmp and cpreg_key_compare are passed to g_slist_sort/g_list_sort
  as GCompareFunc, but GLib casts them to GCompareDataFunc.

Signed-off-by: Kohei Tokunaga <ktokunaga.mail@gmail.com>
---
 hw/riscv/riscv_hart.c | 9 ++++++++-
 qom/object.c          | 5 +++--
 target/arm/helper.c   | 4 ++--
 3 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/hw/riscv/riscv_hart.c b/hw/riscv/riscv_hart.c
index a55d156668..e37317dcbd 100644
--- a/hw/riscv/riscv_hart.c
+++ b/hw/riscv/riscv_hart.c
@@ -102,10 +102,17 @@ static bool csr_qtest_callback(CharBackend *chr, gchar **words)
     return false;
 }
 
+static gpointer g_qtest_set_command_cb(
+    bool (*pc_cb)(CharBackend *chr, gchar **words))
+{
+    qtest_set_command_cb(pc_cb);
+    return NULL;
+}
+
 static void riscv_cpu_register_csr_qtest_callback(void)
 {
     static GOnce once;
-    g_once(&once, (GThreadFunc)qtest_set_command_cb, csr_qtest_callback);
+    g_once(&once, (GThreadFunc)g_qtest_set_command_cb, csr_qtest_callback);
 }
 #endif

diff --git a/qom/object.c b/qom/object.c
index 01618d06bd..19698aae4c 100644
--- a/qom/object.c
+++ b/qom/object.c
@@ -1191,7 +1191,8 @@ GSList *object_class_get_list(const char *implements_type,
     return list;
 }
 
-static gint object_class_cmp(gconstpointer a, gconstpointer b)
+static gint object_class_cmp(gconstpointer a, gconstpointer b,
+                             gpointer user_data)
 {
     return strcasecmp(object_class_get_name((ObjectClass *)a),
                       object_class_get_name((ObjectClass *)b));
@@ -1201,7 +1202,7 @@ GSList *object_class_get_list_sorted(const char *implements_type,
                                      bool include_abstract)
 {
     return g_slist_sort(object_class_get_list(implements_type, include_abstract),
-                        object_class_cmp);
+                        (GCompareFunc)object_class_cmp);
 }
 
 Object *object_ref(void *objptr)

diff --git a/target/arm/helper.c b/target/arm/helper.c
index bb445e30cd..68f81fadfc 100644
--- a/target/arm/helper.c
+++ b/target/arm/helper.c
@@ -220,7 +220,7 @@ static void count_cpreg(gpointer key, gpointer opaque)
     }
 }
 
-static gint cpreg_key_compare(gconstpointer a, gconstpointer b)
+static gint cpreg_key_compare(gconstpointer a, gconstpointer b, void *d)
 {
     uint64_t aidx = cpreg_to_kvm_id((uintptr_t)a);
     uint64_t bidx = cpreg_to_kvm_id((uintptr_t)b);
@@ -244,7 +244,7 @@ void init_cpreg_list(ARMCPU *cpu)
     int arraylen;
 
     keys = g_hash_table_get_keys(cpu->cp_regs);
-    keys = g_list_sort(keys, cpreg_key_compare);
+    keys = g_list_sort(keys, (GCompareFunc)cpreg_key_compare);
 
     cpu->cpreg_array_len = 0;

From patchwork Mon Apr 7 14:45:53 2025
X-Patchwork-Submitter: Kohei Tokunaga
X-Patchwork-Id: 14041061
From: Kohei Tokunaga <ktokunaga.mail@gmail.com>
To: qemu-devel@nongnu.org
Subject: [PATCH 02/10] various: Define macros for dependencies on emscripten
Date: Mon, 7 Apr 2025 23:45:53 +0900
Message-Id: <5f2a8fa2d7116b1d65b79fbb3a95244096fb7308.1744032780.git.ktokunaga.mail@gmail.com>

Signed-off-by: Kohei Tokunaga <ktokunaga.mail@gmail.com>
---
 block/file-posix.c        | 18 ++++++++++++++++++
 include/qemu/cacheflush.h |  3 ++-
 os-posix.c                |  5 +++++
 util/cacheflush.c         |  3 ++-
 4 files changed, 27 insertions(+), 2 deletions(-)

diff --git a/block/file-posix.c b/block/file-posix.c
index 56d1972d15..69f54505bd 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -110,6 +110,10 @@
 #include
 #endif
 
+#ifdef EMSCRIPTEN
+#include
+#endif
+
 /* OS X does not have O_DSYNC */
 #ifndef O_DSYNC
 #ifdef O_SYNC
@@ -2011,6 +2015,19 @@ static int handle_aiocb_write_zeroes_unmap(void *opaque)
 }
 
 #ifndef HAVE_COPY_FILE_RANGE
+#ifdef EMSCRIPTEN
+/*
+ * emscripten exposes copy_file_range declaration but doesn't provide the
+ * implementation in the final link. Define the stub here but avoid type
+ * conflict with the emscripten's header.
+ */
+ssize_t copy_file_range(int in_fd, off_t *in_off, int out_fd,
+                        off_t *out_off, size_t len, unsigned int flags)
+{
+    errno = ENOSYS;
+    return -1;
+}
+#else
 static off_t copy_file_range(int in_fd, off_t *in_off, int out_fd,
                              off_t *out_off, size_t len, unsigned int flags)
 {
@@ -2023,6 +2040,7 @@ static off_t copy_file_range(int in_fd, off_t *in_off, int out_fd,
 #endif
 }
 #endif
+#endif
 
 /*
  * parse_zone - Fill a zone descriptor

diff --git a/include/qemu/cacheflush.h b/include/qemu/cacheflush.h
index ae20bcda73..84969801e3 100644
--- a/include/qemu/cacheflush.h
+++ b/include/qemu/cacheflush.h
@@ -19,7 +19,8 @@
  * mappings of the same physical page(s).
  */
 
-#if defined(__i386__) || defined(__x86_64__) || defined(__s390__)
+#if defined(__i386__) || defined(__x86_64__) || defined(__s390__) \
+    || defined(EMSCRIPTEN)
 
 static inline void flush_idcache_range(uintptr_t rx, uintptr_t rw, size_t len)
 {

diff --git a/os-posix.c b/os-posix.c
index 52925c23d3..9a7099e279 100644
--- a/os-posix.c
+++ b/os-posix.c
@@ -148,11 +148,16 @@ static void change_process_uid(void)
         exit(1);
     }
     if (user_pwd) {
+#ifdef EMSCRIPTEN
+        error_report("initgroups unsupported");
+        exit(1);
+#else
         if (initgroups(user_pwd->pw_name, user_pwd->pw_gid) < 0) {
             error_report("Failed to initgroups(\"%s\", %d)",
                          user_pwd->pw_name, user_pwd->pw_gid);
             exit(1);
         }
+#endif
     } else {
         if (setgroups(1, &user_gid) < 0) {
             error_report("Failed to setgroups(1, [%d])",

diff --git a/util/cacheflush.c b/util/cacheflush.c
index 1d12899a39..e5aa256cd8 100644
--- a/util/cacheflush.c
+++ b/util/cacheflush.c
@@ -225,7 +225,8 @@ static void __attribute__((constructor)) init_cache_info(void)
 /*
  * Architecture (+ OS) specific cache flushing mechanisms.
  */
 
-#if defined(__i386__) || defined(__x86_64__) || defined(__s390__)
+#if defined(__i386__) || defined(__x86_64__) || defined(__s390__) || \
+    defined(EMSCRIPTEN)
 
 /* Caches are coherent and do not require flushing; symbol inline. */

From patchwork Mon Apr 7 14:45:54 2025
X-Patchwork-Submitter: Kohei Tokunaga
X-Patchwork-Id: 14041102
From: Kohei Tokunaga <ktokunaga.mail@gmail.com>
To: qemu-devel@nongnu.org
Subject: [PATCH 03/10] util/mmap-alloc: Add qemu_ram_mmap implementation for emscripten
Date: Mon, 7 Apr 2025 23:45:54 +0900
Message-Id: <8c2b176bd4c499233a88dcd18e62d8cf94e08f56.1744032780.git.ktokunaga.mail@gmail.com>

Signed-off-by: Kohei Tokunaga <ktokunaga.mail@gmail.com>
---
 util/mmap-alloc.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/util/mmap-alloc.c b/util/mmap-alloc.c
index ed14f9c64d..91f33682e8 100644
--- a/util/mmap-alloc.c
+++ b/util/mmap-alloc.c
@@ -145,6 +145,7 @@ static bool map_noreserve_effective(int fd, uint32_t qemu_map_flags)
     return false;
 }
 
+#ifndef EMSCRIPTEN
 /*
  * Reserve a new memory region of the requested size to be used for mapping
  * from the given fd (if any).
@@ -176,6 +177,7 @@ static void *mmap_reserve(size_t size, int fd)
 
     return mmap(0, size, PROT_NONE, flags, fd, 0);
 }
+#endif
 
 /*
  * Activate memory in a reserved region from the given fd (if any), to make
@@ -244,6 +246,21 @@ static inline size_t mmap_guard_pagesize(int fd)
 #endif
 }
 
+#ifdef EMSCRIPTEN
+void *qemu_ram_mmap(int fd,
+                    size_t size,
+                    size_t align,
+                    uint32_t qemu_map_flags,
+                    off_t map_offset)
+{
+    /*
+     * emscripten doesn't support non-zero first argument for mmap so
+     * mmap a larger region without the hint and return an aligned pointer.
+     */
+    void *ptr = mmap_activate(0, size + align, fd, qemu_map_flags, map_offset);
+    return (void *)QEMU_ALIGN_UP((uintptr_t)ptr, align);
+}
+#else
 void *qemu_ram_mmap(int fd,
                     size_t size,
                     size_t align,
@@ -293,6 +310,7 @@ void *qemu_ram_mmap(int fd,
 
     return ptr;
 }
+#endif /* EMSCRIPTEN */
 
 void qemu_ram_munmap(int fd, void *ptr, size_t size)
 {

From patchwork Mon Apr 7 14:45:55 2025
X-Patchwork-Submitter: Kohei Tokunaga
X-Patchwork-Id: 14041103
From: Kohei Tokunaga <ktokunaga.mail@gmail.com>
To: qemu-devel@nongnu.org
Subject: [PATCH 04/10] util: Add coroutine backend for emscripten
Date: Mon, 7 Apr 2025 23:45:55 +0900

Emscripten does not support the coroutine methods currently used by QEMU, but it provides a coroutine implementation called "fiber". This commit introduces a coroutine backend using fiber. Note that fiber does not support submitting coroutines to other threads.

Signed-off-by: Kohei Tokunaga <ktokunaga.mail@gmail.com>
---
 util/coroutine-fiber.c | 127 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 127 insertions(+)
 create mode 100644 util/coroutine-fiber.c

diff --git a/util/coroutine-fiber.c b/util/coroutine-fiber.c
new file mode 100644
index 0000000000..cb1ec92509
--- /dev/null
+++ b/util/coroutine-fiber.c
@@ -0,0 +1,127 @@
+/*
+ * emscripten fiber coroutine initialization code
+ * based on coroutine-ucontext.c
+ *
+ * Copyright (C) 2006 Anthony Liguori
+ * Copyright (C) 2011 Kevin Wolf
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.0 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, see .
+ */
+
+#include "qemu/osdep.h"
+#include "qemu/coroutine_int.h"
+#include "qemu/coroutine-tls.h"
+
+#include
+
+typedef struct {
+    Coroutine base;
+    void *stack;
+    size_t stack_size;
+
+    void *asyncify_stack;
+    size_t asyncify_stack_size;
+
+    CoroutineAction action;
+
+    emscripten_fiber_t fiber;
+} CoroutineEmscripten;
+
+/**
+ * Per-thread coroutine bookkeeping
+ */
+QEMU_DEFINE_STATIC_CO_TLS(Coroutine *, current);
+QEMU_DEFINE_STATIC_CO_TLS(CoroutineEmscripten *, leader);
+size_t leader_asyncify_stack_size = COROUTINE_STACK_SIZE;
+
+static void coroutine_trampoline(void *co_)
+{
+    Coroutine *co = co_;
+
+    while (true) {
+        co->entry(co->entry_arg);
+        qemu_coroutine_switch(co, co->caller, COROUTINE_TERMINATE);
+    }
+}
+
+Coroutine *qemu_coroutine_new(void)
+{
+    CoroutineEmscripten *co;
+
+    co = g_malloc0(sizeof(*co));
+
+    co->stack_size = COROUTINE_STACK_SIZE;
+    co->stack = qemu_alloc_stack(&co->stack_size);
+
+    co->asyncify_stack_size = COROUTINE_STACK_SIZE;
+    co->asyncify_stack = g_malloc0(co->asyncify_stack_size);
+    emscripten_fiber_init(&co->fiber, coroutine_trampoline, &co->base,
+                          co->stack, co->stack_size, co->asyncify_stack,
+                          co->asyncify_stack_size);
+
+    return &co->base;
+}
+
+void qemu_coroutine_delete(Coroutine *co_)
+{
+    CoroutineEmscripten *co = DO_UPCAST(CoroutineEmscripten, base, co_);
+
+    qemu_free_stack(co->stack, co->stack_size);
+    g_free(co->asyncify_stack);
+    g_free(co);
+}
+
+CoroutineAction qemu_coroutine_switch(Coroutine *from_, Coroutine *to_,
+                                      CoroutineAction action)
+{
+    CoroutineEmscripten *from = DO_UPCAST(CoroutineEmscripten, base, from_);
+    CoroutineEmscripten *to = DO_UPCAST(CoroutineEmscripten, base, to_);
+
+    set_current(to_);
+    to->action = action;
+    emscripten_fiber_swap(&from->fiber, &to->fiber);
+    return from->action;
+}
+
+Coroutine *qemu_coroutine_self(void)
+{
+    Coroutine *self = get_current();
+
+    if (!self) {
+        CoroutineEmscripten *leaderp = get_leader();
+
+        if (!leaderp) {
+            leaderp = g_malloc0(sizeof(*leaderp));
+            leaderp->asyncify_stack = g_malloc0(leader_asyncify_stack_size);
+            leaderp->asyncify_stack_size = leader_asyncify_stack_size;
+            emscripten_fiber_init_from_current_context(
+                &leaderp->fiber,
+                leaderp->asyncify_stack,
+                leaderp->asyncify_stack_size);
+            leaderp->stack = leaderp->fiber.stack_limit;
+            leaderp->stack_size =
+                leaderp->fiber.stack_base - leaderp->fiber.stack_limit;
+            set_leader(leaderp);
+        }
+        self = &leaderp->base;
+        set_current(self);
+    }
+    return self;
+}
+
+bool qemu_in_coroutine(void)
+{
+    Coroutine *self = get_current();
+
+    return self && self->caller;
+}

From patchwork Mon Apr 7 14:45:56 2025
X-Patchwork-Submitter: Kohei Tokunaga
X-Patchwork-Id: 14041100
From: Kohei Tokunaga
To: qemu-devel@nongnu.org
Cc: Alex Bennée, Philippe Mathieu-Daudé, Thomas Huth, Richard Henderson,
    Paolo Bonzini, Kevin Wolf, Hanna Reitz, Kohei Tokunaga,
    Christian Schoenebeck, Greg Kurz, Palmer Dabbelt, Alistair Francis,
    Weiwei Li, Daniel Henrique Barboza, Liu Zhiwei, Marc-André Lureau,
    Daniel P. Berrangé, Eduardo Habkost, Peter Maydell, Stefan Hajnoczi,
    qemu-block@nongnu.org, qemu-riscv@nongnu.org, qemu-arm@nongnu.org
Subject: [PATCH 05/10] meson: Add wasm build in build scripts
Date: Mon, 7 Apr 2025 23:45:56 +0900
Message-Id: <04b7137a464e0925e2ae533bbde4fcdfe0dfe069.1744032780.git.ktokunaga.mail@gmail.com>

has_int128_type is set to false on Emscripten for now to avoid errors
caused by libffi. Tests are not yet integrated with the Wasm execution
environment, so this commit disables them.

Signed-off-by: Kohei Tokunaga
---
 configs/meson/emscripten.txt  |  6 ++++++
 configure                     |  7 +++++++
 meson.build                   | 14 ++++++++++----
 meson_options.txt             |  2 +-
 scripts/meson-buildoptions.sh |  2 +-
 5 files changed, 25 insertions(+), 6 deletions(-)
 create mode 100644 configs/meson/emscripten.txt

diff --git a/configs/meson/emscripten.txt b/configs/meson/emscripten.txt
new file mode 100644
index 0000000000..054b263814
--- /dev/null
+++ b/configs/meson/emscripten.txt
@@ -0,0 +1,6 @@
+[built-in options]
+c_args = ['-Wno-unused-command-line-argument','-g','-O3','-pthread']
+cpp_args = ['-Wno-unused-command-line-argument','-g','-O3','-pthread']
+objc_args = ['-Wno-unused-command-line-argument','-g','-O3','-pthread']
+c_link_args = ['-Wno-unused-command-line-argument','-g','-O3','-pthread','-sASYNCIFY=1','-sPROXY_TO_PTHREAD=1','-sFORCE_FILESYSTEM','-sALLOW_TABLE_GROWTH','-sTOTAL_MEMORY=2GB','-sWASM_BIGINT','-sEXPORT_ES6=1','-sASYNCIFY_IMPORTS=ffi_call_js','-sEXPORTED_RUNTIME_METHODS=addFunction,removeFunction,TTY,FS']
+cpp_link_args = ['-Wno-unused-command-line-argument','-g','-O3','-pthread','-sASYNCIFY=1','-sPROXY_TO_PTHREAD=1','-sFORCE_FILESYSTEM','-sALLOW_TABLE_GROWTH','-sTOTAL_MEMORY=2GB','-sWASM_BIGINT','-sEXPORT_ES6=1','-sASYNCIFY_IMPORTS=ffi_call_js','-sEXPORTED_RUNTIME_METHODS=addFunction,removeFunction,TTY,FS']

diff --git a/configure b/configure
index 02f1dd2311..a1fe6e11cd 100755
--- a/configure
+++ b/configure
@@ -360,6 +360,10 @@ elif check_define __NetBSD__; then
   host_os=netbsd
 elif check_define __APPLE__; then
   host_os=darwin
+elif check_define EMSCRIPTEN ; then
+  host_os=emscripten
+  cpu=wasm32
+  cross_compile="yes"
 else
   # This is a fatal error, but don't report it yet, because we
   # might be going to just print the --help text, or it might

@@ -526,6 +530,9 @@ case "$cpu" in
     linux_arch=x86
     CPU_CFLAGS="-m64" ;;
+  wasm32)
+    CPU_CFLAGS="-m32"
+    ;;
 esac

 if test -n "$host_arch" && {

diff --git a/meson.build b/meson.build
index 41f68d3806..bcf1e33ddf 100644
--- a/meson.build
+++ b/meson.build
@@ -50,9 +50,9 @@ genh = []
 qapi_trace_events = []

 bsd_oses = ['gnu/kfreebsd', 'freebsd', 'netbsd', 'openbsd', 'dragonfly', 'darwin']
-supported_oses = ['windows', 'freebsd', 'netbsd', 'openbsd', 'darwin', 'sunos', 'linux']
+supported_oses = ['windows', 'freebsd', 'netbsd', 'openbsd', 'darwin', 'sunos', 'linux', 'emscripten']
 supported_cpus = ['ppc', 'ppc64', 's390x', 'riscv32', 'riscv64', 'x86', 'x86_64',
-                  'arm', 'aarch64', 'loongarch64', 'mips', 'mips64', 'sparc64']
+                  'arm', 'aarch64', 'loongarch64', 'mips', 'mips64', 'sparc64', 'wasm32']

 cpu = host_machine.cpu_family()

@@ -353,6 +353,8 @@ foreach lang : all_languages
     #  endif
     #endif''')
     # ok
+  elif compiler.get_id() == 'emscripten'
+    # ok
   else
     error('You either need GCC v7.4 or Clang v10.0 (or XCode Clang v15.0) to compile QEMU')
   endif

@@ -514,6 +516,8 @@ ucontext_probe = '''
 supported_backends = []
 if host_os == 'windows'
   supported_backends += ['windows']
+elif host_os == 'emscripten'
+  supported_backends += ['fiber']
 else
   if host_os != 'darwin' and cc.links(ucontext_probe)
     supported_backends += ['ucontext']

@@ -2962,7 +2966,7 @@ config_host_data.set('CONFIG_ATOMIC64', cc.links('''
     return 0;
   }''', args: qemu_isa_flags))

-has_int128_type = cc.compiles('''
+has_int128_type = host_os != 'emscripten' and cc.compiles('''
   __int128_t a;
   __uint128_t b;
   int main(void) { b = a; }''')

@@ -4456,7 +4460,9 @@ subdir('scripts')
 subdir('tools')
 subdir('pc-bios')
 subdir('docs')
-subdir('tests')
+if host_os != 'emscripten'
+  subdir('tests')
+endif
 if gtk.found()
   subdir('po')
 endif

diff --git a/meson_options.txt b/meson_options.txt
index 59d973bca0..6d73aafe91 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -34,7 +34,7 @@ option('fuzzing_engine', type : 'string', value : '',
 option('trace_file', type: 'string', value: 'trace',
        description: 'Trace file prefix for simple backend')
 option('coroutine_backend', type: 'combo',
-       choices: ['ucontext', 'sigaltstack', 'windows', 'auto'],
+       choices: ['ucontext', 'sigaltstack', 'windows', 'auto', 'fiber'],
        value: 'auto', description: 'coroutine backend to use')

 # Everything else can be set via --enable/--disable-* option

diff --git a/scripts/meson-buildoptions.sh b/scripts/meson-buildoptions.sh
index 3e8e00852b..cbba2f248c 100644
--- a/scripts/meson-buildoptions.sh
+++ b/scripts/meson-buildoptions.sh
@@ -80,7 +80,7 @@ meson_options_help() {
   printf "%s\n" '  --tls-priority=VALUE     Default TLS protocol/cipher priority string'
   printf "%s\n" '                           [NORMAL]'
   printf "%s\n" '  --with-coroutine=CHOICE  coroutine backend to use (choices:'
-  printf "%s\n" '                           auto/sigaltstack/ucontext/windows)'
+  printf "%s\n" '                           auto/fiber/sigaltstack/ucontext/windows)'
   printf "%s\n" '  --with-pkgversion=VALUE  use specified string as sub-version of the'
   printf "%s\n" '                           package'
   printf "%s\n" '  --with-suffix=VALUE      Suffix for QEMU data/modules/config directories'

From patchwork Mon Apr 7 14:45:57 2025
X-Patchwork-Submitter: Kohei Tokunaga
X-Patchwork-Id: 14041117
From: Kohei Tokunaga
To: qemu-devel@nongnu.org
Subject: [PATCH 06/10] include/exec: Allow using 64bit guest addresses on emscripten
Date: Mon, 7 Apr 2025 23:45:57 +0900
Message-Id: <04ab0a8c2ab61c47530f77b149ad29123a0ee382.1744032780.git.ktokunaga.mail@gmail.com>

To enable 64-bit guest support with Wasm's 32-bit memory model today, it
was necessary to partially revert recent changes that removed support for
differing pointer widths between the host and guest (e.g., commits
a70af12addd9060fdf8f3dbd42b42e3072c3914f and
bf455ec50b6fea15b4d2493059365bf94c706273) when compiling with Emscripten.
While this serves as a temporary workaround, a long-term solution could be
to adopt Wasm's 64-bit memory model once it gains broader support; it is
currently not widely available (e.g., unsupported by Safari and libffi).
Signed-off-by: Kohei Tokunaga
---
 accel/tcg/cputlb.c        |  8 ++++----
 include/exec/tlb-common.h | 14 ++++++++++----
 include/exec/vaddr.h      | 11 +++++++++++
 include/qemu/atomic.h     |  4 ++++
 meson.build               |  8 +++++---
 5 files changed, 34 insertions(+), 11 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index fb22048876..8f8f5c19c4 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -104,13 +104,13 @@ static inline uint64_t tlb_read_idx(const CPUTLBEntry *entry,
 {
     /* Do not rearrange the CPUTLBEntry structure members. */
     QEMU_BUILD_BUG_ON(offsetof(CPUTLBEntry, addr_read) !=
-                      MMU_DATA_LOAD * sizeof(uintptr_t));
+                      MMU_DATA_LOAD * sizeof(tlb_addr));
     QEMU_BUILD_BUG_ON(offsetof(CPUTLBEntry, addr_write) !=
-                      MMU_DATA_STORE * sizeof(uintptr_t));
+                      MMU_DATA_STORE * sizeof(tlb_addr));
     QEMU_BUILD_BUG_ON(offsetof(CPUTLBEntry, addr_code) !=
-                      MMU_INST_FETCH * sizeof(uintptr_t));
+                      MMU_INST_FETCH * sizeof(tlb_addr));

-    const uintptr_t *ptr = &entry->addr_idx[access_type];
+    const tlb_addr *ptr = &entry->addr_idx[access_type];
     /* ofs might correspond to .addr_write, so use qatomic_read */
     return qatomic_read(ptr);
 }

diff --git a/include/exec/tlb-common.h b/include/exec/tlb-common.h
index 03b5a8ffc7..679054bb44 100644
--- a/include/exec/tlb-common.h
+++ b/include/exec/tlb-common.h
@@ -19,14 +19,20 @@
 #ifndef EXEC_TLB_COMMON_H
 #define EXEC_TLB_COMMON_H 1

+#ifndef EMSCRIPTEN
 #define CPU_TLB_ENTRY_BITS (HOST_LONG_BITS == 32 ? 4 : 5)
+typedef uintptr_t tlb_addr;
+#else
+#define CPU_TLB_ENTRY_BITS 5
+typedef uint64_t tlb_addr;
+#endif

 /* Minimalized TLB entry for use by TCG fast path. */
 typedef union CPUTLBEntry {
     struct {
-        uintptr_t addr_read;
-        uintptr_t addr_write;
-        uintptr_t addr_code;
+        tlb_addr addr_read;
+        tlb_addr addr_write;
+        tlb_addr addr_code;
         /*
          * Addend to virtual address to get host address.  IO accesses
          * use the corresponding iotlb value.

@@ -37,7 +43,7 @@ typedef union CPUTLBEntry {
      * Padding to get a power of two size, as well as index
      * access to addr_{read,write,code}.
      */
-    uintptr_t addr_idx[(1 << CPU_TLB_ENTRY_BITS) / sizeof(uintptr_t)];
+    tlb_addr addr_idx[(1 << CPU_TLB_ENTRY_BITS) / sizeof(tlb_addr)];
 } CPUTLBEntry;

 QEMU_BUILD_BUG_ON(sizeof(CPUTLBEntry) != (1 << CPU_TLB_ENTRY_BITS));

diff --git a/include/exec/vaddr.h b/include/exec/vaddr.h
index 28bec632fb..ff57f944dd 100644
--- a/include/exec/vaddr.h
+++ b/include/exec/vaddr.h
@@ -9,6 +9,7 @@
  * We do not support 64-bit guest on 32-host and detect at configure time.
  * Therefore, a host pointer width will always fit a guest pointer.
  */
+#ifndef EMSCRIPTEN
 typedef uintptr_t vaddr;
 #define VADDR_PRId PRIdPTR
 #define VADDR_PRIu PRIuPTR
@@ -16,5 +17,15 @@ typedef uintptr_t vaddr;
 #define VADDR_PRIx PRIxPTR
 #define VADDR_PRIX PRIXPTR
 #define VADDR_MAX UINTPTR_MAX
+#else
+/* Explicitly define this as 64bit on emscripten */
+typedef uint64_t vaddr;
+#define VADDR_PRId PRId64
+#define VADDR_PRIu PRIu64
+#define VADDR_PRIo PRIo64
+#define VADDR_PRIx PRIx64
+#define VADDR_PRIX PRIX64
+#define VADDR_MAX UINT64_MAX
+#endif

 #endif

diff --git a/include/qemu/atomic.h b/include/qemu/atomic.h
index f80cba24cf..76a8fbcd8c 100644
--- a/include/qemu/atomic.h
+++ b/include/qemu/atomic.h
@@ -56,6 +56,7 @@
  */
 #define signal_barrier()    __atomic_signal_fence(__ATOMIC_SEQ_CST)

+#ifndef EMSCRIPTEN
 /*
  * Sanity check that the size of an atomic operation isn't "overly large".
  * Despite the fact that e.g. i686 has 64-bit atomic operations, we do not
@@ -63,6 +64,9 @@
  * bit of sanity checking that other 32-bit hosts might build.
  */
 #define ATOMIC_REG_SIZE  sizeof(void *)
+#else
+#define ATOMIC_REG_SIZE  8 /* wasm supports 64bit atomics */
+#endif

 /* Weak atomic operations prevent the compiler moving other
  * loads/stores past the atomic operation load/store.  However there is

diff --git a/meson.build b/meson.build
index bcf1e33ddf..343408636b 100644
--- a/meson.build
+++ b/meson.build
@@ -3304,9 +3304,11 @@ foreach target : target_dirs
     target_kconfig = []
     foreach sym: accelerators
-      # Disallow 64-bit on 32-bit emulation and virtualization
-      if host_long_bits < config_target['TARGET_LONG_BITS'].to_int()
-        continue
+      if host_arch != 'wasm32'
+        # Disallow 64-bit on 32-bit emulation and virtualization
+        if host_long_bits < config_target['TARGET_LONG_BITS'].to_int()
+          continue
+        endif
       endif
       if sym == 'CONFIG_TCG' or target in accelerator_targets.get(sym, [])
         config_target += { sym: 'y' }

From patchwork Mon Apr 7 14:45:58 2025
X-Patchwork-Submitter: Kohei Tokunaga
X-Patchwork-Id: 14041060
From: Kohei Tokunaga
To: qemu-devel@nongnu.org
Subject: [PATCH 07/10] tcg: Add a TCG backend for WebAssembly
Date: Mon, 7 Apr 2025 23:45:58 +0900
Message-Id: <24b5ff124d70043aff97dc30aa45f8a502676989.1744032780.git.ktokunaga.mail@gmail.com>

A TB consists of a wasmTBHeader followed by the data listed below. The
wasmTBHeader contains pointers for each element:

- TCI code
- Wasm code
- Array of function indices imported into the Wasm instance
- Counter tracking the number of TB executions
- Pointer to the Wasm instance information

The Wasm backend (tcg/wasm32.c) and Wasm instances running on the same
thread share information, such as CPUArchState, through a wasmContext
structure.

The Wasm backend defines tcg_qemu_tb_exec as a common entry point for
TBs, similar to the TCI backend. tcg_qemu_tb_exec runs TBs on a forked
TCI interpreter by default, while frequently executed TBs are compiled
to Wasm and executed.

The code generator (tcg/wasm32) receives TCG IR and generates both Wasm
and TCI instructions. Since Wasm cannot directly jump to specific
addresses, labels are implemented using Wasm control flow instructions.
As shown in the pseudo-code below, a TB wraps instructions in a large
loop, where code is placed within if blocks separated by labels.
Branching is handled by breaking out of the current block and entering
the target block.

  loop
    if
      ... code after label1
    end
    if
      ... code after label2
    end
    ...
  end

Additionally, the Wasm backend differs from other backends in several
ways:

- goto_tb and goto_ptr return control to tcg_qemu_tb_exec, which runs
  the target TB
- Helper function pointers are stored in an array in the TB and imported
  into the Wasm instance on execution
- Wasm TBs lack a prologue and epilogue; TBs are executed via
  tcg_qemu_tb_exec

Browsers raise an out-of-memory error if too many Wasm instances are
created. To prevent this, the Wasm backend tracks active instances using
an array.
When instantiating a new instance would risk exceeding the limit, the
backend removes older instances to avoid browser errors. These removed
instances are re-instantiated when needed.

Signed-off-by: Kohei Tokunaga
---
 include/accel/tcg/getpc.h        |    2 +-
 include/tcg/helper-info.h        |    4 +-
 include/tcg/tcg.h                |    2 +-
 meson.build                      |    2 +
 tcg/meson.build                  |    5 +
 tcg/tcg.c                        |   26 +-
 tcg/wasm32.c                     | 1260 +++++++++
 tcg/wasm32.h                     |   39 +
 tcg/wasm32/tcg-target-con-set.h  |   18 +
 tcg/wasm32/tcg-target-con-str.h  |    8 +
 tcg/wasm32/tcg-target-has.h      |  102 +
 tcg/wasm32/tcg-target-mo.h       |   12 +
 tcg/wasm32/tcg-target-opc.h.inc  |    4 +
 tcg/wasm32/tcg-target-reg-bits.h |   12 +
 tcg/wasm32/tcg-target.c.inc      | 4484 ++++++++++++++++++++++++++++++
 tcg/wasm32/tcg-target.h          |   65 +
 16 files changed, 6035 insertions(+), 10 deletions(-)
 create mode 100644 tcg/wasm32.c
 create mode 100644 tcg/wasm32.h
 create mode 100644 tcg/wasm32/tcg-target-con-set.h
 create mode 100644 tcg/wasm32/tcg-target-con-str.h
 create mode 100644 tcg/wasm32/tcg-target-has.h
 create mode 100644 tcg/wasm32/tcg-target-mo.h
 create mode 100644 tcg/wasm32/tcg-target-opc.h.inc
 create mode 100644 tcg/wasm32/tcg-target-reg-bits.h
 create mode 100644 tcg/wasm32/tcg-target.c.inc
 create mode 100644 tcg/wasm32/tcg-target.h

diff --git a/include/accel/tcg/getpc.h b/include/accel/tcg/getpc.h
index 8a97ce34e7..78acb4a3cf 100644
--- a/include/accel/tcg/getpc.h
+++ b/include/accel/tcg/getpc.h
@@ -13,7 +13,7 @@
 #endif

 /* GETPC is the true target of the return instruction that we'll execute.  */
-#ifdef CONFIG_TCG_INTERPRETER
+#if defined(CONFIG_TCG_INTERPRETER) || defined(EMSCRIPTEN)
 extern __thread uintptr_t tci_tb_ptr;
 # define GETPC() tci_tb_ptr
 #else

diff --git a/include/tcg/helper-info.h b/include/tcg/helper-info.h
index 909fe73afa..9b4e8832a8 100644
--- a/include/tcg/helper-info.h
+++ b/include/tcg/helper-info.h
@@ -9,7 +9,7 @@
 #ifndef TCG_HELPER_INFO_H
 #define TCG_HELPER_INFO_H

-#ifdef CONFIG_TCG_INTERPRETER
+#if defined(CONFIG_TCG_INTERPRETER) || defined(EMSCRIPTEN)
 #include 
 #endif
 #include "tcg-target-reg-bits.h"

@@ -48,7 +48,7 @@ struct TCGHelperInfo {
     const char *name;

     /* Used with g_once_init_enter. */
-#ifdef CONFIG_TCG_INTERPRETER
+#if defined(CONFIG_TCG_INTERPRETER) || defined(EMSCRIPTEN)
     ffi_cif *cif;
 #else
     uintptr_t init;

diff --git a/include/tcg/tcg.h b/include/tcg/tcg.h
index 84d99508b6..c9ab6c838a 100644
--- a/include/tcg/tcg.h
+++ b/include/tcg/tcg.h
@@ -940,7 +940,7 @@ static inline size_t tcg_current_code_size(TCGContext *s)
 #define TB_EXIT_IDXMAX    1
 #define TB_EXIT_REQUESTED 3

-#ifdef CONFIG_TCG_INTERPRETER
+#if defined(CONFIG_TCG_INTERPRETER) || defined(EMSCRIPTEN)
 uintptr_t tcg_qemu_tb_exec(CPUArchState *env, const void *tb_ptr);
 #else
 typedef uintptr_t tcg_prologue_fn(CPUArchState *env, const void *tb_ptr);

diff --git a/meson.build b/meson.build
index 343408636b..ab84820bc5 100644
--- a/meson.build
+++ b/meson.build
@@ -920,6 +920,8 @@ if get_option('tcg').allowed()
     tcg_arch = 'i386'
   elif host_arch == 'ppc64'
     tcg_arch = 'ppc'
+  elif host_arch == 'wasm32'
+    tcg_arch = 'wasm32'
   endif
   add_project_arguments('-iquote', meson.current_source_dir() / 'tcg' / tcg_arch,
                         language: all_languages)

diff --git a/tcg/meson.build b/tcg/meson.build
index 69ebb4908a..f1a1f9485d 100644
--- a/tcg/meson.build
+++ b/tcg/meson.build
@@ -20,6 +20,11 @@ if get_option('tcg_interpreter')
                      method: 'pkg-config')
   tcg_ss.add(libffi)
   tcg_ss.add(files('tci.c'))
+elif host_os == 'emscripten'
+  libffi = dependency('libffi', version: '>=3.0', required: true,
+                      method: 'pkg-config')
+  specific_ss.add(libffi)
+  specific_ss.add(files('wasm32.c'))
 endif

 tcg_ss.add(when: libdw, if_true: files('debuginfo.c'))

diff --git a/tcg/tcg.c b/tcg/tcg.c
index dfd48b8264..154a4dafa7 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -136,6 +136,10 @@ static void tcg_out_goto_tb(TCGContext *s, int which);
 static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type,
                        const TCGArg args[TCG_MAX_OP_ARGS],
                        const int const_args[TCG_MAX_OP_ARGS]);
+#if defined(EMSCRIPTEN)
+static void tcg_out_label_cb(TCGContext *s, TCGLabel *l);
+static int tcg_out_tb_end(TCGContext *s);
+#endif
 #if TCG_TARGET_MAYBE_vec
 static bool tcg_out_dup_vec(TCGContext *s, TCGType type, unsigned vece,
                             TCGReg dst, TCGReg src);

@@ -251,7 +255,7 @@ TCGv_env tcg_env;
 const void *tcg_code_gen_epilogue;
 uintptr_t tcg_splitwx_diff;

-#ifndef CONFIG_TCG_INTERPRETER
+#if !defined(CONFIG_TCG_INTERPRETER) && !defined(EMSCRIPTEN)
 tcg_prologue_fn *tcg_qemu_tb_exec;
 #endif

@@ -358,6 +362,9 @@ static void tcg_out_label(TCGContext *s, TCGLabel *l)
     tcg_debug_assert(!l->has_value);
     l->has_value = 1;
     l->u.value_ptr = tcg_splitwx_to_rx(s->code_ptr);
+#if defined(EMSCRIPTEN)
+    tcg_out_label_cb(s, l);
+#endif
 }

 TCGLabel *gen_new_label(void)

@@ -1139,7 +1146,7 @@ static TCGHelperInfo info_helper_st128_mmu = {
               | dh_typemask(ptr, 5)  /* uintptr_t ra */
 };

-#ifdef CONFIG_TCG_INTERPRETER
+#if defined(CONFIG_TCG_INTERPRETER) || defined(EMSCRIPTEN)
 static ffi_type *typecode_to_ffi(int argmask)
 {
     /*

@@ -1593,7 +1600,7 @@ void tcg_prologue_init(void)
     s->code_buf = s->code_gen_ptr;
     s->data_gen_ptr = NULL;

-#ifndef CONFIG_TCG_INTERPRETER
+#if !defined(CONFIG_TCG_INTERPRETER) && !defined(EMSCRIPTEN)
     tcg_qemu_tb_exec = (tcg_prologue_fn *)tcg_splitwx_to_rx(s->code_ptr);
 #endif

@@ -1649,11 +1656,11 @@ void tcg_prologue_init(void)
         }
     }

-#ifndef CONFIG_TCG_INTERPRETER
+#if !defined(CONFIG_TCG_INTERPRETER) && !defined(EMSCRIPTEN)
     /*
      * Assert that goto_ptr is implemented completely, setting an epilogue.
-     * For tci, we use NULL as the signal to return from the interpreter,
-     * so skip this check.
+     * For tci and wasm backend, we use NULL as the signal to return from the
+     * interpreter, so skip this check.
      */
     tcg_debug_assert(tcg_code_gen_epilogue != NULL);
 #endif

@@ -6505,6 +6512,13 @@ int tcg_gen_code(TCGContext *s, TranslationBlock *tb, uint64_t pc_start)
                  tcg_ptr_byte_diff(s->code_ptr, s->code_buf));
 #endif

+#if defined(EMSCRIPTEN)
+    i = tcg_out_tb_end(s);
+    if (i < 0) {
+        return i;
+    }
+#endif
+
     return tcg_current_code_size(s);
 }

diff --git a/tcg/wasm32.c b/tcg/wasm32.c
new file mode 100644
index 0000000000..3dfd98c570
--- /dev/null
+++ b/tcg/wasm32.c
@@ -0,0 +1,1260 @@
+/*
+ * Tiny Code Generator for QEMU
+ *
+ * Wasm integration + ported TCI interpreter from tci.c
+ *
+ * Copyright (c) 2009, 2011, 2016 Stefan Weil
+ *
+ * This program is free software: you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation, either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see .
+ */
+
+/*
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#include "qemu/osdep.h"
+#include 
+#include 
+#include 
+#include "exec/cpu_ldst.h"
+#include "tcg/tcg-op.h"
+#include "tcg/tcg.h"
+#include "tcg/helper-info.h"
+#include "tcg/tcg-ldst.h"
+#include "disas/dis-asm.h"
+#include "tcg-has.h"
+#include "wasm32.h"
+
+/* TBs executed more than this value will be compiled to wasm */
+#define INSTANTIATE_NUM 1500
+
+__thread uintptr_t tci_tb_ptr;
+
+/* Disassemble TCI bytecode.
*/ +int print_insn_tci(bfd_vma addr, disassemble_info *info) +{ + return 0; /* nop */ +} + +EM_JS(int, instantiate_wasm, (int wasm_begin, + int wasm_size, + int import_vec_begin, + int import_vec_size), +{ + const memory_v = new DataView(HEAP8.buffer); + const wasm = HEAP8.subarray(wasm_begin, wasm_begin + wasm_size); + var helper = {}; + helper.u = () => { + return (Asyncify.state != Asyncify.State.Unwinding) ? 1 : 0; + }; + for (var i = 0; i < import_vec_size / 4; i++) { + helper[i] = wasmTable.get( + memory_v.getInt32(import_vec_begin + i * 4, true)); + } + const mod = new WebAssembly.Module(wasm); + const inst = new WebAssembly.Instance(mod, { + "env" : { + "buffer" : wasmMemory, + }, + "helper" : helper, + }); + + Module.__wasm32_tb.inst_gc_registry.register(inst, "instance"); + + return addFunction(inst.exports.start, 'ii'); +}); + +__thread int cur_core_num; + +static inline int32_t *get_counter_ptr(void *tb_ptr) +{ + return (int32_t *)(((struct wasmTBHeader *)tb_ptr)->counter_ptr + + cur_core_num * 4); +} + +static inline uint32_t *get_info_ptr(void *tb_ptr) +{ + return (uint32_t *)(((struct wasmTBHeader *)tb_ptr)->info_ptr + + cur_core_num * 4); +} + +static inline uint32_t *get_tci_ptr(void *tb_ptr) +{ + return (uint32_t *)(((struct wasmTBHeader *)tb_ptr)->tci_ptr); +} + +__thread struct wasmContext ctx = { + .tb_ptr = 0, + .stack = NULL, + .do_init = 1, + .stack128 = NULL, +}; + +static void tci_write_reg64(tcg_target_ulong *regs, uint32_t high_index, + uint32_t low_index, uint64_t value) +{ + regs[low_index] = (uint32_t)value; + regs[high_index] = value >> 32; +} + +/* Create a 64 bit value from two 32 bit values. */ +static uint64_t tci_uint64(uint32_t high, uint32_t low) +{ + return ((uint64_t)high << 32) + low; +} + +static void tci_args_ldst(uint32_t insn, + TCGReg *r0, + TCGReg *r1, + MemOpIdx *m2, + const void *tb_ptr, + void **l0) +{ + int diff = sextract32(insn, 12, 20); + *l0 = diff ? 
(uint8_t *)tb_ptr + diff : NULL; + + uint64_t *data64 = (uint64_t *)*l0; + *r0 = (TCGReg)data64[0]; + *r1 = (TCGReg)data64[1]; + *m2 = (MemOpIdx)data64[2]; +} + +/* + * Load sets of arguments all at once. The naming convention is: + * tci_args_ + * where arguments is a sequence of + * + * b = immediate (bit position) + * c = condition (TCGCond) + * i = immediate (uint32_t) + * I = immediate (tcg_target_ulong) + * l = label or pointer + * m = immediate (MemOpIdx) + * n = immediate (call return length) + * r = register + * s = signed ldst offset + */ + +static void tci_args_l(uint32_t insn, const void *tb_ptr, void **l0) +{ + int diff = sextract32(insn, 12, 20); + *l0 = diff ? (uint8_t *)tb_ptr + diff : NULL; +} + +static void tci_args_r(uint32_t insn, TCGReg *r0) +{ + *r0 = extract32(insn, 8, 4); +} + +static void tci_args_nl(uint32_t insn, const void *tb_ptr, + uint8_t *n0, void **l1) +{ + *n0 = extract32(insn, 8, 4); + *l1 = sextract32(insn, 12, 20) + (void *)tb_ptr; +} + +static void tci_args_rl(uint32_t insn, const void *tb_ptr, + TCGReg *r0, void **l1) +{ + *r0 = extract32(insn, 8, 4); + *l1 = sextract32(insn, 12, 20) + (void *)tb_ptr; +} + +static void tci_args_rr(uint32_t insn, TCGReg *r0, TCGReg *r1) +{ + *r0 = extract32(insn, 8, 4); + *r1 = extract32(insn, 12, 4); +} + +static void tci_args_ri(uint32_t insn, TCGReg *r0, tcg_target_ulong *i1) +{ + *r0 = extract32(insn, 8, 4); + *i1 = sextract32(insn, 12, 20); +} + +static void tci_args_rrr(uint32_t insn, TCGReg *r0, TCGReg *r1, TCGReg *r2) +{ + *r0 = extract32(insn, 8, 4); + *r1 = extract32(insn, 12, 4); + *r2 = extract32(insn, 16, 4); +} + +static void tci_args_rrs(uint32_t insn, TCGReg *r0, TCGReg *r1, int32_t *i2) +{ + *r0 = extract32(insn, 8, 4); + *r1 = extract32(insn, 12, 4); + *i2 = sextract32(insn, 16, 16); +} + +static void tci_args_rrbb(uint32_t insn, TCGReg *r0, TCGReg *r1, + uint8_t *i2, uint8_t *i3) +{ + *r0 = extract32(insn, 8, 4); + *r1 = extract32(insn, 12, 4); + *i2 = extract32(insn, 16, 6); 
+ *i3 = extract32(insn, 22, 6); +} + +static void tci_args_rrrc(uint32_t insn, + TCGReg *r0, TCGReg *r1, TCGReg *r2, TCGCond *c3) +{ + *r0 = extract32(insn, 8, 4); + *r1 = extract32(insn, 12, 4); + *r2 = extract32(insn, 16, 4); + *c3 = extract32(insn, 20, 4); +} + +static void tci_args_rrrbb(uint32_t insn, TCGReg *r0, TCGReg *r1, + TCGReg *r2, uint8_t *i3, uint8_t *i4) +{ + *r0 = extract32(insn, 8, 4); + *r1 = extract32(insn, 12, 4); + *r2 = extract32(insn, 16, 4); + *i3 = extract32(insn, 20, 6); + *i4 = extract32(insn, 26, 6); +} + +static void tci_args_rrrr(uint32_t insn, + TCGReg *r0, TCGReg *r1, TCGReg *r2, TCGReg *r3) +{ + *r0 = extract32(insn, 8, 4); + *r1 = extract32(insn, 12, 4); + *r2 = extract32(insn, 16, 4); + *r3 = extract32(insn, 20, 4); +} + +static void tci_args_rrrrrc(uint32_t insn, TCGReg *r0, TCGReg *r1, + TCGReg *r2, TCGReg *r3, TCGReg *r4, TCGCond *c5) +{ + *r0 = extract32(insn, 8, 4); + *r1 = extract32(insn, 12, 4); + *r2 = extract32(insn, 16, 4); + *r3 = extract32(insn, 20, 4); + *r4 = extract32(insn, 24, 4); + *c5 = extract32(insn, 28, 4); +} + +static void tci_args_rrrrrr(uint32_t insn, TCGReg *r0, TCGReg *r1, + TCGReg *r2, TCGReg *r3, TCGReg *r4, TCGReg *r5) +{ + *r0 = extract32(insn, 8, 4); + *r1 = extract32(insn, 12, 4); + *r2 = extract32(insn, 16, 4); + *r3 = extract32(insn, 20, 4); + *r4 = extract32(insn, 24, 4); + *r5 = extract32(insn, 28, 4); +} + +static bool tci_compare32(uint32_t u0, uint32_t u1, TCGCond condition) +{ + bool result = false; + int32_t i0 = u0; + int32_t i1 = u1; + switch (condition) { + case TCG_COND_EQ: + result = (u0 == u1); + break; + case TCG_COND_NE: + result = (u0 != u1); + break; + case TCG_COND_LT: + result = (i0 < i1); + break; + case TCG_COND_GE: + result = (i0 >= i1); + break; + case TCG_COND_LE: + result = (i0 <= i1); + break; + case TCG_COND_GT: + result = (i0 > i1); + break; + case TCG_COND_LTU: + result = (u0 < u1); + break; + case TCG_COND_GEU: + result = (u0 >= u1); + break; + case TCG_COND_LEU: + 
result = (u0 <= u1); + break; + case TCG_COND_GTU: + result = (u0 > u1); + break; + case TCG_COND_TSTEQ: + result = (u0 & u1) == 0; + break; + case TCG_COND_TSTNE: + result = (u0 & u1) != 0; + break; + default: + g_assert_not_reached(); + } + return result; +} + +static bool tci_compare64(uint64_t u0, uint64_t u1, TCGCond condition) +{ + bool result = false; + int64_t i0 = u0; + int64_t i1 = u1; + switch (condition) { + case TCG_COND_EQ: + result = (u0 == u1); + break; + case TCG_COND_NE: + result = (u0 != u1); + break; + case TCG_COND_LT: + result = (i0 < i1); + break; + case TCG_COND_GE: + result = (i0 >= i1); + break; + case TCG_COND_LE: + result = (i0 <= i1); + break; + case TCG_COND_GT: + result = (i0 > i1); + break; + case TCG_COND_LTU: + result = (u0 < u1); + break; + case TCG_COND_GEU: + result = (u0 >= u1); + break; + case TCG_COND_LEU: + result = (u0 <= u1); + break; + case TCG_COND_GTU: + result = (u0 > u1); + break; + case TCG_COND_TSTEQ: + result = (u0 & u1) == 0; + break; + case TCG_COND_TSTNE: + result = (u0 & u1) != 0; + break; + default: + g_assert_not_reached(); + } + return result; +} + +static uint32_t tlb_load( + CPUArchState *env, uint64_t taddr, MemOp mop, uint64_t *ptr, bool is_ld) +{ + unsigned a_mask = (unsigned)ptr[3]; + int mask_ofs = (int)ptr[4]; + int8_t page_bits = (int8_t)ptr[5]; + uint64_t page_mask = ptr[6]; + int table_ofs = (uint64_t)ptr[7]; + + unsigned s_mask = (1u << (mop & MO_SIZE)) - 1; + tcg_target_long compare_mask; + + uintptr_t table = *(uintptr_t *)((int)env + table_ofs); + uintptr_t mask = *(uintptr_t *)((int)env + mask_ofs); + uintptr_t entry = table + + ((taddr >> (page_bits - CPU_TLB_ENTRY_BITS)) & mask); + int off = is_ld ? 
offsetof(CPUTLBEntry, addr_read) + : offsetof(CPUTLBEntry, addr_write); + uint64_t target = *(uint64_t *)(entry + off); + uint64_t c_addr = taddr; + if (a_mask < s_mask) { + c_addr += s_mask - a_mask; + } + compare_mask = page_mask | a_mask; + c_addr &= compare_mask; + + if (c_addr == target) { + return taddr + *(uintptr_t *)(entry + offsetof(CPUTLBEntry, addend)); + } + return 0; +} + +static uint64_t tci_qemu_ld(CPUArchState *env, uint64_t taddr, + MemOpIdx oi, const void *tb_ptr, uint64_t *ptr) +{ + MemOp mop = get_memop(oi); + uintptr_t ra = (uintptr_t)tb_ptr; + + uint32_t target_addr = tlb_load(env, taddr, mop, ptr, true); + if (target_addr != 0) { + switch (mop & MO_SSIZE) { + case MO_UB: + return *(uint8_t *)target_addr; + case MO_SB: + return *(int8_t *)target_addr; + case MO_UW: + return *(uint16_t *)target_addr; + case MO_SW: + return *(int16_t *)target_addr; + case MO_UL: + return *(uint32_t *)target_addr; + case MO_SL: + return *(int32_t *)target_addr; + case MO_UQ: + return *(uint64_t *)target_addr; + default: + g_assert_not_reached(); + } + } + + switch (mop & MO_SSIZE) { + case MO_UB: + return helper_ldub_mmu(env, taddr, oi, ra); + case MO_SB: + return helper_ldsb_mmu(env, taddr, oi, ra); + case MO_UW: + return helper_lduw_mmu(env, taddr, oi, ra); + case MO_SW: + return helper_ldsw_mmu(env, taddr, oi, ra); + case MO_UL: + return helper_ldul_mmu(env, taddr, oi, ra); + case MO_SL: + return helper_ldsl_mmu(env, taddr, oi, ra); + case MO_UQ: + return helper_ldq_mmu(env, taddr, oi, ra); + default: + g_assert_not_reached(); + } +} + +static void tci_qemu_st(CPUArchState *env, uint64_t taddr, uint64_t val, + MemOpIdx oi, const void *tb_ptr, uint64_t *ptr) +{ + MemOp mop = get_memop(oi); + uintptr_t ra = (uintptr_t)tb_ptr; + + uint32_t target_addr = tlb_load(env, taddr, mop, ptr, false); + if (target_addr != 0) { + switch (mop & MO_SIZE) { + case MO_UB: + *(uint8_t *)target_addr = (uint8_t)val; + break; + case MO_UW: + *(uint16_t *)target_addr = 
(uint16_t)val; + break; + case MO_UL: + *(uint32_t *)target_addr = (uint32_t)val; + break; + case MO_UQ: + *(uint64_t *)target_addr = (uint64_t)val; + break; + default: + g_assert_not_reached(); + } + return; + } + + switch (mop & MO_SIZE) { + case MO_UB: + helper_stb_mmu(env, taddr, val, oi, ra); + break; + case MO_UW: + helper_stw_mmu(env, taddr, val, oi, ra); + break; + case MO_UL: + helper_stl_mmu(env, taddr, val, oi, ra); + break; + case MO_UQ: + helper_stq_mmu(env, taddr, val, oi, ra); + break; + default: + g_assert_not_reached(); + } +} + +#define CASE_32_64(x) \ + case glue(glue(INDEX_op_, x), _i64): \ + case glue(glue(INDEX_op_, x), _i32): +# define CASE_64(x) \ + case glue(glue(INDEX_op_, x), _i64): + +__thread tcg_target_ulong regs[TCG_TARGET_NB_REGS]; + +static inline uintptr_t tcg_qemu_tb_exec_tci(CPUArchState *env) +{ + uint32_t *tb_ptr = get_tci_ptr(ctx.tb_ptr); + uint64_t *stack = ctx.stack; + + regs[TCG_AREG0] = (tcg_target_ulong)env; + regs[TCG_REG_CALL_STACK] = (uintptr_t)stack; + + for (;;) { + uint32_t insn; + TCGOpcode opc; + TCGReg r0, r1, r2, r3, r4, r5; + tcg_target_ulong t1; + TCGCond condition; + uint8_t pos, len; + uint32_t tmp32; + uint64_t tmp64, taddr; + uint64_t T1, T2; + MemOpIdx oi; + int32_t ofs; + void *ptr; + int32_t *counter_ptr; + + insn = *tb_ptr++; + opc = extract32(insn, 0, 8); + + switch (opc) { + case INDEX_op_call: + { + void *call_slots[MAX_CALL_IARGS]; + ffi_cif *cif; + void *func; + unsigned i, s, n; + + tci_args_nl(insn, tb_ptr, &len, &ptr); + uint64_t *data64 = (uint64_t *)ptr; + func = (void *)data64[0]; + cif = (void *)data64[1]; + + int reg_iarg_base = 8; + int reg_idx = 0; + int reg_idx_end = 5; /* NUM_OF_IARG_REGS */ + int stack_idx = 0; + n = cif->nargs; + for (i = s = 0; i < n; ++i) { + ffi_type *t = cif->arg_types[i]; + if (reg_idx < reg_idx_end) { + call_slots[i] = &regs[reg_iarg_base + reg_idx]; + reg_idx += DIV_ROUND_UP(t->size, 8); + } else { + call_slots[i] = &stack[stack_idx]; + stack_idx +=
DIV_ROUND_UP(t->size, 8); + } + } + + /* Helper functions may need to access the "return address" */ + tci_tb_ptr = (uintptr_t)tb_ptr; + ffi_call(cif, func, stack, call_slots); + } + + switch (len) { + case 0: /* void */ + break; + case 1: /* uint32_t */ + /* + * The result winds up "left-aligned" in the stack[0] slot. + * Note that libffi has an odd special case in that it will + * always widen an integral result to ffi_arg. + */ + if (sizeof(ffi_arg) == 8) { + regs[TCG_REG_R0] = (uint32_t)stack[0]; + } else { + regs[TCG_REG_R0] = *(uint32_t *)stack; + } + break; + case 2: /* uint64_t */ + /* + * For TCG_TARGET_REG_BITS == 32, the register pair + * must stay in host memory order. + */ + memcpy(&regs[TCG_REG_R0], stack, 8); + break; + case 3: /* Int128 */ + memcpy(&regs[TCG_REG_R0], stack, 16); + break; + default: + g_assert_not_reached(); + } + break; + + case INDEX_op_br: + tci_args_l(insn, tb_ptr, &ptr); + tb_ptr = ptr; + continue; + case INDEX_op_setcond_i32: + tci_args_rrrc(insn, &r0, &r1, &r2, &condition); + regs[r0] = tci_compare32(regs[r1], regs[r2], condition); + break; + case INDEX_op_movcond_i32: + tci_args_rrrrrc(insn, &r0, &r1, &r2, &r3, &r4, &condition); + tmp32 = tci_compare32(regs[r1], regs[r2], condition); + regs[r0] = regs[tmp32 ? r3 : r4]; + break; + case INDEX_op_setcond2_i32: + tci_args_rrrrrc(insn, &r0, &r1, &r2, &r3, &r4, &condition); + T1 = tci_uint64(regs[r2], regs[r1]); + T2 = tci_uint64(regs[r4], regs[r3]); + regs[r0] = tci_compare64(T1, T2, condition); + break; + case INDEX_op_setcond_i64: + tci_args_rrrc(insn, &r0, &r1, &r2, &condition); + regs[r0] = tci_compare64(regs[r1], regs[r2], condition); + break; + case INDEX_op_movcond_i64: + tci_args_rrrrrc(insn, &r0, &r1, &r2, &r3, &r4, &condition); + tmp32 = tci_compare64(regs[r1], regs[r2], condition); + regs[r0] = regs[tmp32 ?
r3 : r4]; + break; + CASE_32_64(mov) + tci_args_rr(insn, &r0, &r1); + regs[r0] = regs[r1]; + break; + case INDEX_op_tci_movi: + tci_args_ri(insn, &r0, &t1); + regs[r0] = t1; + break; + case INDEX_op_tci_movl: + tci_args_rl(insn, tb_ptr, &r0, &ptr); + regs[r0] = *(tcg_target_ulong *)ptr; + break; + + /* Load/store operations (32 bit). */ + + CASE_32_64(ld8u) + tci_args_rrs(insn, &r0, &r1, &ofs); + ptr = (void *)(regs[r1] + ofs); + regs[r0] = *(uint8_t *)ptr; + break; + CASE_32_64(ld8s) + tci_args_rrs(insn, &r0, &r1, &ofs); + ptr = (void *)(regs[r1] + ofs); + regs[r0] = *(int8_t *)ptr; + break; + CASE_32_64(ld16u) + tci_args_rrs(insn, &r0, &r1, &ofs); + ptr = (void *)(regs[r1] + ofs); + regs[r0] = *(uint16_t *)ptr; + break; + CASE_32_64(ld16s) + tci_args_rrs(insn, &r0, &r1, &ofs); + ptr = (void *)(regs[r1] + ofs); + regs[r0] = *(int16_t *)ptr; + break; + case INDEX_op_ld_i32: + CASE_64(ld32u) + tci_args_rrs(insn, &r0, &r1, &ofs); + ptr = (void *)(regs[r1] + ofs); + regs[r0] = *(uint32_t *)ptr; + break; + CASE_32_64(st8) + tci_args_rrs(insn, &r0, &r1, &ofs); + ptr = (void *)(regs[r1] + ofs); + *(uint8_t *)ptr = regs[r0]; + break; + CASE_32_64(st16) + tci_args_rrs(insn, &r0, &r1, &ofs); + ptr = (void *)(regs[r1] + ofs); + *(uint16_t *)ptr = regs[r0]; + break; + case INDEX_op_st_i32: + CASE_64(st32) + tci_args_rrs(insn, &r0, &r1, &ofs); + ptr = (void *)(regs[r1] + ofs); + *(uint32_t *)ptr = regs[r0]; + break; + + /* Arithmetic operations (mixed 32/64 bit). 
*/ + + CASE_32_64(add) + tci_args_rrr(insn, &r0, &r1, &r2); + regs[r0] = regs[r1] + regs[r2]; + break; + CASE_32_64(sub) + tci_args_rrr(insn, &r0, &r1, &r2); + regs[r0] = regs[r1] - regs[r2]; + break; + CASE_32_64(mul) + tci_args_rrr(insn, &r0, &r1, &r2); + regs[r0] = regs[r1] * regs[r2]; + break; + CASE_32_64(and) + tci_args_rrr(insn, &r0, &r1, &r2); + regs[r0] = regs[r1] & regs[r2]; + break; + CASE_32_64(or) + tci_args_rrr(insn, &r0, &r1, &r2); + regs[r0] = regs[r1] | regs[r2]; + break; + CASE_32_64(xor) + tci_args_rrr(insn, &r0, &r1, &r2); + regs[r0] = regs[r1] ^ regs[r2]; + break; +#if TCG_TARGET_HAS_andc_i32 || TCG_TARGET_HAS_andc_i64 + CASE_32_64(andc) + tci_args_rrr(insn, &r0, &r1, &r2); + regs[r0] = regs[r1] & ~regs[r2]; + break; +#endif +#if TCG_TARGET_HAS_orc_i32 || TCG_TARGET_HAS_orc_i64 + CASE_32_64(orc) + tci_args_rrr(insn, &r0, &r1, &r2); + regs[r0] = regs[r1] | ~regs[r2]; + break; +#endif +#if TCG_TARGET_HAS_eqv_i32 || TCG_TARGET_HAS_eqv_i64 + CASE_32_64(eqv) + tci_args_rrr(insn, &r0, &r1, &r2); + regs[r0] = ~(regs[r1] ^ regs[r2]); + break; +#endif +#if TCG_TARGET_HAS_nand_i32 || TCG_TARGET_HAS_nand_i64 + CASE_32_64(nand) + tci_args_rrr(insn, &r0, &r1, &r2); + regs[r0] = ~(regs[r1] & regs[r2]); + break; +#endif +#if TCG_TARGET_HAS_nor_i32 || TCG_TARGET_HAS_nor_i64 + CASE_32_64(nor) + tci_args_rrr(insn, &r0, &r1, &r2); + regs[r0] = ~(regs[r1] | regs[r2]); + break; +#endif + + /* Arithmetic operations (32 bit). 
*/ + + case INDEX_op_div_i32: + tci_args_rrr(insn, &r0, &r1, &r2); + regs[r0] = (int32_t)regs[r1] / (int32_t)regs[r2]; + break; + case INDEX_op_divu_i32: + tci_args_rrr(insn, &r0, &r1, &r2); + regs[r0] = (uint32_t)regs[r1] / (uint32_t)regs[r2]; + break; + case INDEX_op_rem_i32: + tci_args_rrr(insn, &r0, &r1, &r2); + regs[r0] = (int32_t)regs[r1] % (int32_t)regs[r2]; + break; + case INDEX_op_remu_i32: + tci_args_rrr(insn, &r0, &r1, &r2); + regs[r0] = (uint32_t)regs[r1] % (uint32_t)regs[r2]; + break; +#if TCG_TARGET_HAS_clz_i32 + case INDEX_op_clz_i32: + tci_args_rrr(insn, &r0, &r1, &r2); + tmp32 = regs[r1]; + regs[r0] = tmp32 ? clz32(tmp32) : regs[r2]; + break; +#endif +#if TCG_TARGET_HAS_ctz_i32 + case INDEX_op_ctz_i32: + tci_args_rrr(insn, &r0, &r1, &r2); + tmp32 = regs[r1]; + regs[r0] = tmp32 ? ctz32(tmp32) : regs[r2]; + break; +#endif +#if TCG_TARGET_HAS_ctpop_i32 + case INDEX_op_ctpop_i32: + tci_args_rr(insn, &r0, &r1); + regs[r0] = ctpop32(regs[r1]); + break; +#endif + + /* Shift/rotate operations (32 bit). 
*/ + + case INDEX_op_shl_i32: + tci_args_rrr(insn, &r0, &r1, &r2); + regs[r0] = (uint32_t)regs[r1] << (regs[r2] & 31); + break; + case INDEX_op_shr_i32: + tci_args_rrr(insn, &r0, &r1, &r2); + regs[r0] = (uint32_t)regs[r1] >> (regs[r2] & 31); + break; + case INDEX_op_sar_i32: + tci_args_rrr(insn, &r0, &r1, &r2); + regs[r0] = (int32_t)regs[r1] >> (regs[r2] & 31); + break; +#if TCG_TARGET_HAS_rot_i32 + case INDEX_op_rotl_i32: + tci_args_rrr(insn, &r0, &r1, &r2); + regs[r0] = rol32(regs[r1], regs[r2] & 31); + break; + case INDEX_op_rotr_i32: + tci_args_rrr(insn, &r0, &r1, &r2); + regs[r0] = ror32(regs[r1], regs[r2] & 31); + break; +#endif + case INDEX_op_deposit_i32: + tci_args_rrrbb(insn, &r0, &r1, &r2, &pos, &len); + regs[r0] = deposit32(regs[r1], pos, len, regs[r2]); + break; + case INDEX_op_extract_i32: + tci_args_rrbb(insn, &r0, &r1, &pos, &len); + regs[r0] = extract32(regs[r1], pos, len); + break; + case INDEX_op_sextract_i32: + tci_args_rrbb(insn, &r0, &r1, &pos, &len); + regs[r0] = sextract32(regs[r1], pos, len); + break; + case INDEX_op_brcond_i32: + tci_args_rl(insn, tb_ptr, &r0, &ptr); + if ((uint32_t)regs[r0]) { + tb_ptr = ptr; + } + break; +#if TCG_TARGET_HAS_add2_i32 + case INDEX_op_add2_i32: + tci_args_rrrrrr(insn, &r0, &r1, &r2, &r3, &r4, &r5); + T1 = tci_uint64(regs[r3], regs[r2]); + T2 = tci_uint64(regs[r5], regs[r4]); + tci_write_reg64(regs, r1, r0, T1 + T2); + break; +#endif +#if TCG_TARGET_HAS_sub2_i32 + case INDEX_op_sub2_i32: + tci_args_rrrrrr(insn, &r0, &r1, &r2, &r3, &r4, &r5); + T1 = tci_uint64(regs[r3], regs[r2]); + T2 = tci_uint64(regs[r5], regs[r4]); + tci_write_reg64(regs, r1, r0, T1 - T2); + break; +#endif +#if TCG_TARGET_HAS_mulu2_i32 + case INDEX_op_mulu2_i32: + tci_args_rrrr(insn, &r0, &r1, &r2, &r3); + tmp64 = (uint64_t)(uint32_t)regs[r2] * (uint32_t)regs[r3]; + tci_write_reg64(regs, r1, r0, tmp64); + break; +#endif +#if TCG_TARGET_HAS_muls2_i32 + case INDEX_op_muls2_i32: + tci_args_rrrr(insn, &r0, &r1, &r2, &r3); + tmp64 = 
(int64_t)(int32_t)regs[r2] * (int32_t)regs[r3]; + tci_write_reg64(regs, r1, r0, tmp64); + break; +#endif +#if TCG_TARGET_HAS_ext8s_i32 || TCG_TARGET_HAS_ext8s_i64 + CASE_32_64(ext8s) + tci_args_rr(insn, &r0, &r1); + regs[r0] = (int8_t)regs[r1]; + break; +#endif +#if TCG_TARGET_HAS_ext16s_i32 || TCG_TARGET_HAS_ext16s_i64 || \ + TCG_TARGET_HAS_bswap16_i32 || TCG_TARGET_HAS_bswap16_i64 + CASE_32_64(ext16s) + tci_args_rr(insn, &r0, &r1); + regs[r0] = (int16_t)regs[r1]; + break; +#endif +#if TCG_TARGET_HAS_ext8u_i32 || TCG_TARGET_HAS_ext8u_i64 + CASE_32_64(ext8u) + tci_args_rr(insn, &r0, &r1); + regs[r0] = (uint8_t)regs[r1]; + break; +#endif +#if TCG_TARGET_HAS_ext16u_i32 || TCG_TARGET_HAS_ext16u_i64 + CASE_32_64(ext16u) + tci_args_rr(insn, &r0, &r1); + regs[r0] = (uint16_t)regs[r1]; + break; +#endif +#if TCG_TARGET_HAS_bswap16_i32 || TCG_TARGET_HAS_bswap16_i64 + CASE_32_64(bswap16) + tci_args_rr(insn, &r0, &r1); + regs[r0] = bswap16(regs[r1]); + break; +#endif +#if TCG_TARGET_HAS_bswap32_i32 || TCG_TARGET_HAS_bswap32_i64 + CASE_32_64(bswap32) + tci_args_rr(insn, &r0, &r1); + regs[r0] = bswap32(regs[r1]); + break; +#endif +#if TCG_TARGET_HAS_not_i32 || TCG_TARGET_HAS_not_i64 + CASE_32_64(not) + tci_args_rr(insn, &r0, &r1); + regs[r0] = ~regs[r1]; + break; +#endif + CASE_32_64(neg) + tci_args_rr(insn, &r0, &r1); + regs[r0] = -regs[r1]; + break; + + /* Load/store operations (64 bit). */ + + case INDEX_op_ld32s_i64: + tci_args_rrs(insn, &r0, &r1, &ofs); + ptr = (void *)(regs[r1] + ofs); + regs[r0] = *(int32_t *)ptr; + break; + case INDEX_op_ld_i64: + tci_args_rrs(insn, &r0, &r1, &ofs); + ptr = (void *)(regs[r1] + ofs); + regs[r0] = *(uint64_t *)ptr; + break; + case INDEX_op_st_i64: + tci_args_rrs(insn, &r0, &r1, &ofs); + ptr = (void *)(regs[r1] + ofs); + *(uint64_t *)ptr = regs[r0]; + break; + + /* Arithmetic operations (64 bit). 
*/ + + case INDEX_op_div_i64: + tci_args_rrr(insn, &r0, &r1, &r2); + regs[r0] = (int64_t)regs[r1] / (int64_t)regs[r2]; + break; + case INDEX_op_divu_i64: + tci_args_rrr(insn, &r0, &r1, &r2); + regs[r0] = (uint64_t)regs[r1] / (uint64_t)regs[r2]; + break; + case INDEX_op_rem_i64: + tci_args_rrr(insn, &r0, &r1, &r2); + regs[r0] = (int64_t)regs[r1] % (int64_t)regs[r2]; + break; + case INDEX_op_remu_i64: + tci_args_rrr(insn, &r0, &r1, &r2); + regs[r0] = (uint64_t)regs[r1] % (uint64_t)regs[r2]; + break; +#if TCG_TARGET_HAS_clz_i64 + case INDEX_op_clz_i64: + tci_args_rrr(insn, &r0, &r1, &r2); + regs[r0] = regs[r1] ? clz64(regs[r1]) : regs[r2]; + break; +#endif +#if TCG_TARGET_HAS_ctz_i64 + case INDEX_op_ctz_i64: + tci_args_rrr(insn, &r0, &r1, &r2); + regs[r0] = regs[r1] ? ctz64(regs[r1]) : regs[r2]; + break; +#endif +#if TCG_TARGET_HAS_ctpop_i64 + case INDEX_op_ctpop_i64: + tci_args_rr(insn, &r0, &r1); + regs[r0] = ctpop64(regs[r1]); + break; +#endif +#if TCG_TARGET_HAS_mulu2_i64 + case INDEX_op_mulu2_i64: + tci_args_rrrr(insn, &r0, &r1, &r2, &r3); + mulu64(&regs[r0], &regs[r1], regs[r2], regs[r3]); + break; +#endif +#if TCG_TARGET_HAS_muls2_i64 + case INDEX_op_muls2_i64: + tci_args_rrrr(insn, &r0, &r1, &r2, &r3); + muls64(&regs[r0], &regs[r1], regs[r2], regs[r3]); + break; +#endif +#if TCG_TARGET_HAS_add2_i64 + case INDEX_op_add2_i64: + tci_args_rrrrrr(insn, &r0, &r1, &r2, &r3, &r4, &r5); + T1 = regs[r2] + regs[r4]; + T2 = regs[r3] + regs[r5] + (T1 < regs[r2]); + regs[r0] = T1; + regs[r1] = T2; + break; +#endif +#if TCG_TARGET_HAS_sub2_i64 + case INDEX_op_sub2_i64: + tci_args_rrrrrr(insn, &r0, &r1, &r2, &r3, &r4, &r5); + T1 = regs[r2] - regs[r4]; + T2 = regs[r3] - regs[r5] - (regs[r2] < regs[r4]); + regs[r0] = T1; + regs[r1] = T2; + break; +#endif + + /* Shift/rotate operations (64 bit). 
*/ + + case INDEX_op_shl_i64: + tci_args_rrr(insn, &r0, &r1, &r2); + regs[r0] = regs[r1] << (regs[r2] & 63); + break; + case INDEX_op_shr_i64: + tci_args_rrr(insn, &r0, &r1, &r2); + regs[r0] = regs[r1] >> (regs[r2] & 63); + break; + case INDEX_op_sar_i64: + tci_args_rrr(insn, &r0, &r1, &r2); + regs[r0] = (int64_t)regs[r1] >> (regs[r2] & 63); + break; +#if TCG_TARGET_HAS_rot_i64 + case INDEX_op_rotl_i64: + tci_args_rrr(insn, &r0, &r1, &r2); + regs[r0] = rol64(regs[r1], regs[r2] & 63); + break; + case INDEX_op_rotr_i64: + tci_args_rrr(insn, &r0, &r1, &r2); + regs[r0] = ror64(regs[r1], regs[r2] & 63); + break; +#endif + case INDEX_op_deposit_i64: + tci_args_rrrbb(insn, &r0, &r1, &r2, &pos, &len); + regs[r0] = deposit64(regs[r1], pos, len, regs[r2]); + break; + case INDEX_op_extract_i64: + tci_args_rrbb(insn, &r0, &r1, &pos, &len); + regs[r0] = extract64(regs[r1], pos, len); + break; + case INDEX_op_sextract_i64: + tci_args_rrbb(insn, &r0, &r1, &pos, &len); + regs[r0] = sextract64(regs[r1], pos, len); + break; + case INDEX_op_brcond_i64: + tci_args_rl(insn, tb_ptr, &r0, &ptr); + if (regs[r0]) { + tb_ptr = ptr; + } + break; + case INDEX_op_ext32s_i64: + case INDEX_op_ext_i32_i64: + tci_args_rr(insn, &r0, &r1); + regs[r0] = (int32_t)regs[r1]; + break; + case INDEX_op_ext32u_i64: + case INDEX_op_extu_i32_i64: + tci_args_rr(insn, &r0, &r1); + regs[r0] = (uint32_t)regs[r1]; + break; +#if TCG_TARGET_HAS_bswap64_i64 + case INDEX_op_bswap64_i64: + tci_args_rr(insn, &r0, &r1); + regs[r0] = bswap64(regs[r1]); + break; +#endif + + /* QEMU specific operations. 
*/ + + case INDEX_op_exit_tb: + tci_args_l(insn, tb_ptr, &ptr); + ctx.tb_ptr = 0; + return (uintptr_t)ptr; + + case INDEX_op_goto_tb: + tci_args_l(insn, tb_ptr, &ptr); + if (*(uint32_t **)ptr != tb_ptr) { + tb_ptr = *(uint32_t **)ptr; + ctx.tb_ptr = tb_ptr; + counter_ptr = get_counter_ptr(tb_ptr); + if ((*counter_ptr >= 0) && (*counter_ptr < INSTANTIATE_NUM)) { + *counter_ptr += 1; + } else { + return 0; /* enter the wasm TB */ + } + tb_ptr = get_tci_ptr(tb_ptr); + } + break; + + case INDEX_op_goto_ptr: + tci_args_r(insn, &r0); + ptr = (void *)regs[r0]; + if (!ptr) { + ctx.tb_ptr = 0; + return 0; + } + + tb_ptr = ptr; + ctx.tb_ptr = tb_ptr; + counter_ptr = get_counter_ptr(tb_ptr); + if ((*counter_ptr >= 0) && (*counter_ptr < INSTANTIATE_NUM)) { + *counter_ptr += 1; + } else { + return 0; /* enter the wasm TB */ + } + tb_ptr = get_tci_ptr(tb_ptr); + + break; + + case INDEX_op_qemu_ld_i32: + case INDEX_op_qemu_ld_i64: + tci_args_ldst(insn, &r0, &r1, &oi, tb_ptr, &ptr); + taddr = regs[r1]; + regs[r0] = tci_qemu_ld(env, taddr, oi, tb_ptr, ptr); + break; + + case INDEX_op_qemu_st_i32: + case INDEX_op_qemu_st_i64: + tci_args_ldst(insn, &r0, &r1, &oi, tb_ptr, &ptr); + taddr = regs[r1]; + tci_qemu_st(env, taddr, regs[r0], oi, tb_ptr, ptr); + break; + + case INDEX_op_mb: + /* Ensure ordering for all kinds */ + smp_mb(); + break; + default: + g_assert_not_reached(); + } + } +} + +/* + * Max number of instances that can exist simultaneously. + * + * If the number of instances reaches this and a new instance needs to be + * created, old instances are removed so that new instances can be created + * without hitting the browser's limit. + */ +#define MAX_INSTANCES 15000 +#define INSTANCES_BUF_MAX (MAX_INSTANCES + 1) + +int instances_global; + +/* Holds the relationship between TB and Wasm instance. 
*/ +struct instance_info { + void *tb; + int func_idx; +}; +__thread struct instance_info instances[INSTANCES_BUF_MAX]; +__thread int instances_begin; +__thread int instances_end; + +static void add_instance(int fidx, void *tb_ptr) +{ + uint32_t *info_ptr = get_info_ptr(tb_ptr); + + instances[instances_end].tb = tb_ptr; + instances[instances_end].func_idx = fidx; + *info_ptr = (uint32_t)(&(instances[instances_end])); + instances_end = (instances_end + 1) % INSTANCES_BUF_MAX; + + qatomic_inc(&instances_global); +} + +static int get_instance(void *tb_ptr) +{ + uint32_t *info_ptr = get_info_ptr(tb_ptr); + struct instance_info *elm = (struct instance_info *)(*info_ptr); + if (elm == NULL) { + return 0; + } + if (elm->tb != tb_ptr) { + /* invalidated */ + int32_t *counter_ptr = get_counter_ptr(tb_ptr); + *counter_ptr = INSTANTIATE_NUM; /* instantiated immediately */ + *info_ptr = 0; + return 0; + } + return elm->func_idx; +} + +__thread int instance_pending_gc; +__thread int instance_done_gc; + +static void remove_instance(void) +{ + int num; + if (instance_pending_gc > 0) { + return; + } + if (instances_begin <= instances_end) { + num = instances_end - instances_begin; + } else { + num = instances_end + (INSTANCES_BUF_MAX - instances_begin); + } + num /= 2; + for (int i = 0; i < num; i++) { + EM_ASM({ removeFunction($0); }, instances[instances_begin].func_idx); + instances[instances_begin].tb = NULL; + instances_begin = (instances_begin + 1) % INSTANCES_BUF_MAX; + } + instance_pending_gc += num; +} + +static bool can_add_instance(void) +{ + return qatomic_read(&instances_global) < MAX_INSTANCES; +} + +static void check_instance_garbage_collected(void) +{ + if (instance_done_gc > 0) { + qatomic_sub(&instances_global, instance_done_gc); + instance_pending_gc -= instance_done_gc; + instance_done_gc = 0; + } +} + +#define MAX_EXEC_NUM 50000 +__thread int exec_cnt = MAX_EXEC_NUM; +static inline void trysleep(void) +{ + if (--exec_cnt == 0) { + if (!can_add_instance()) { + 
emscripten_sleep(0); /* return to the browser main loop */ + check_instance_garbage_collected(); + } + exec_cnt = MAX_EXEC_NUM; + } +} + +EM_JS(void, init_wasm32_js, (int instance_done_gc_ptr), +{ + Module.__wasm32_tb = { + inst_gc_registry: new FinalizationRegistry((i) => { + if (i == "instance") { + const memory_v = new DataView(HEAP8.buffer); + let v = memory_v.getInt32(instance_done_gc_ptr, true); + memory_v.setInt32(instance_done_gc_ptr, v + 1, true); + } + }) + }; +}); + +int get_core_nums(void) +{ + return emscripten_num_logical_cores(); +} + +int cur_core_num_max; + +static void init_wasm32(void) +{ + cur_core_num = qatomic_fetch_inc(&cur_core_num_max); + ctx.stack = g_malloc(TCG_STATIC_CALL_ARGS_SIZE + TCG_STATIC_FRAME_SIZE); + ctx.stack128 = g_malloc(TCG_STATIC_CALL_ARGS_SIZE); + ctx.tci_tb_ptr = (uint32_t *)&tci_tb_ptr; + init_wasm32_js((int)&instance_done_gc); +} + +__thread bool initdone; + +typedef uint32_t (*wasm_func_ptr)(struct wasmContext *); + +uintptr_t QEMU_DISABLE_CFI tcg_qemu_tb_exec(CPUArchState *env, + const void *v_tb_ptr) +{ + if (!initdone) { + init_wasm32(); + initdone = true; + } + ctx.env = env; + ctx.tb_ptr = (void *)v_tb_ptr; + ctx.do_init = 1; + while (true) { + trysleep(); + struct wasmTBHeader *header = (struct wasmTBHeader *)ctx.tb_ptr; + int32_t *counter_ptr = get_counter_ptr(header); + uint32_t res; + int fidx = get_instance(ctx.tb_ptr); + if (fidx > 0) { + res = ((wasm_func_ptr)(fidx))(&ctx); + } else if (*counter_ptr < INSTANTIATE_NUM) { + *counter_ptr += 1; + res = tcg_qemu_tb_exec_tci(env); + } else if (!can_add_instance()) { + remove_instance(); + check_instance_garbage_collected(); + res = tcg_qemu_tb_exec_tci(env); + } else { + int fidx = instantiate_wasm((int)header->wasm_ptr, + header->wasm_size, + (int)header->import_ptr, + header->import_size); + add_instance(fidx, ctx.tb_ptr); + res = ((wasm_func_ptr)(fidx))(&ctx); + } + if ((uint32_t)ctx.tb_ptr == 0) { + return res; + } + } +} diff --git a/tcg/wasm32.h 
b/tcg/wasm32.h new file mode 100644 index 0000000000..cbeb281a7d --- /dev/null +++ b/tcg/wasm32.h @@ -0,0 +1,39 @@ +/* + * SPDX-License-Identifier: GPL-2.0-or-later + */ +#ifndef TCG_WASM32_H +#define TCG_WASM32_H + +struct wasmContext { + CPUArchState *env; + uint64_t *stack; + void *tb_ptr; + void *tci_tb_ptr; + uint32_t do_init; + void *stack128; +}; + +#define ENV_OFF 0 +#define STACK_OFF 4 +#define TB_PTR_OFF 8 +#define HELPER_RET_TB_PTR_OFF 12 +#define DO_INIT_OFF 16 +#define STACK128_OFF 20 + +int get_core_nums(void); + +/* + * A TB of the wasm backend starts with a header which stores pointers to each + * piece of data stored in the following region of the TB. + */ +struct wasmTBHeader { + void *tci_ptr; + void *wasm_ptr; + int wasm_size; + void *import_ptr; + int import_size; + void *counter_ptr; + void *info_ptr; +}; + +#endif diff --git a/tcg/wasm32/tcg-target-con-set.h b/tcg/wasm32/tcg-target-con-set.h new file mode 100644 index 0000000000..093c8e8c3b --- /dev/null +++ b/tcg/wasm32/tcg-target-con-set.h @@ -0,0 +1,18 @@ +/* + * SPDX-License-Identifier: GPL-2.0-or-later + */ +/* + * C_On_Im(...) defines a constraint set with outputs and inputs. + * Each operand should be a sequence of constraint letters as defined by + * tcg-target-con-str.h; the constraint combination is inclusive or.
+ */ +C_O0_I1(r) +C_O0_I2(r, r) +C_O0_I3(r, r, r) +C_O0_I4(r, r, r, r) +C_O1_I1(r, r) +C_O1_I2(r, r, r) +C_O1_I4(r, r, r, r, r) +C_O2_I1(r, r, r) +C_O2_I2(r, r, r, r) +C_O2_I4(r, r, r, r, r, r) diff --git a/tcg/wasm32/tcg-target-con-str.h b/tcg/wasm32/tcg-target-con-str.h new file mode 100644 index 0000000000..f17f2e850f --- /dev/null +++ b/tcg/wasm32/tcg-target-con-str.h @@ -0,0 +1,8 @@ +/* + * SPDX-License-Identifier: GPL-2.0-or-later + */ +/* + * Define constraint letters for register sets: + * REGS(letter, register_mask) + */ +REGS('r', MAKE_64BIT_MASK(0, TCG_TARGET_NB_REGS)) diff --git a/tcg/wasm32/tcg-target-has.h b/tcg/wasm32/tcg-target-has.h new file mode 100644 index 0000000000..124d6bd54c --- /dev/null +++ b/tcg/wasm32/tcg-target-has.h @@ -0,0 +1,102 @@ +/* + * SPDX-License-Identifier: GPL-2.0-or-later + */ +/* + * Tiny Code Generator for QEMU + * + * Copyright (c) 2009, 2011 Stefan Weil + * + * Based on tci/tcg-target.h + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to deal + * in the Software without restriction, including without limitation the rights + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell + * copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN + * THE SOFTWARE. + */ + +#ifndef TCG_TARGET_HAS_H +#define TCG_TARGET_HAS_H + +#define TCG_TARGET_HAS_bswap16_i32 1 +#define TCG_TARGET_HAS_bswap32_i32 1 +#define TCG_TARGET_HAS_div_i32 1 +#define TCG_TARGET_HAS_rem_i32 1 +#define TCG_TARGET_HAS_ext8s_i32 1 +#define TCG_TARGET_HAS_ext16s_i32 1 +#define TCG_TARGET_HAS_ext8u_i32 1 +#define TCG_TARGET_HAS_ext16u_i32 1 +#define TCG_TARGET_HAS_andc_i32 1 +#define TCG_TARGET_HAS_extract2_i32 0 +#define TCG_TARGET_HAS_eqv_i32 1 +#define TCG_TARGET_HAS_nand_i32 1 +#define TCG_TARGET_HAS_nor_i32 1 +#define TCG_TARGET_HAS_clz_i32 1 +#define TCG_TARGET_HAS_ctz_i32 1 +#define TCG_TARGET_HAS_ctpop_i32 1 +#define TCG_TARGET_HAS_not_i32 1 +#define TCG_TARGET_HAS_orc_i32 1 +#define TCG_TARGET_HAS_rot_i32 1 +#define TCG_TARGET_HAS_negsetcond_i32 0 +#define TCG_TARGET_HAS_muls2_i32 1 +#define TCG_TARGET_HAS_muluh_i32 0 +#define TCG_TARGET_HAS_mulsh_i32 0 +#define TCG_TARGET_HAS_qemu_st8_i32 0 + +#define TCG_TARGET_HAS_extr_i64_i32 0 +#define TCG_TARGET_HAS_extrl_i64_i32 1 +#define TCG_TARGET_HAS_extrh_i64_i32 0 +#define TCG_TARGET_HAS_bswap16_i64 1 +#define TCG_TARGET_HAS_bswap32_i64 1 +#define TCG_TARGET_HAS_bswap64_i64 1 +#define TCG_TARGET_HAS_extract2_i64 0 +#define TCG_TARGET_HAS_div_i64 1 +#define TCG_TARGET_HAS_rem_i64 1 +#define TCG_TARGET_HAS_ext8s_i64 1 +#define TCG_TARGET_HAS_ext16s_i64 1 +#define TCG_TARGET_HAS_ext32s_i64 1 +#define TCG_TARGET_HAS_ext8u_i64 1 +#define TCG_TARGET_HAS_ext16u_i64 1 +#define TCG_TARGET_HAS_ext32u_i64 1 +#define TCG_TARGET_HAS_andc_i64 1 +#define TCG_TARGET_HAS_eqv_i64 1 +#define TCG_TARGET_HAS_nand_i64 1 +#define TCG_TARGET_HAS_nor_i64 1 +#define TCG_TARGET_HAS_clz_i64 1 +#define TCG_TARGET_HAS_ctz_i64 1 +#define TCG_TARGET_HAS_ctpop_i64 
1 +#define TCG_TARGET_HAS_not_i64 1 +#define TCG_TARGET_HAS_orc_i64 1 +#define TCG_TARGET_HAS_rot_i64 1 +#define TCG_TARGET_HAS_negsetcond_i64 0 +#define TCG_TARGET_HAS_muls2_i64 0 +#define TCG_TARGET_HAS_add2_i32 1 +#define TCG_TARGET_HAS_sub2_i32 1 +#define TCG_TARGET_HAS_mulu2_i32 1 +#define TCG_TARGET_HAS_add2_i64 1 +#define TCG_TARGET_HAS_sub2_i64 1 +#define TCG_TARGET_HAS_mulu2_i64 0 +#define TCG_TARGET_HAS_muluh_i64 0 +#define TCG_TARGET_HAS_mulsh_i64 0 + +#define TCG_TARGET_HAS_qemu_ldst_i128 0 + +#define TCG_TARGET_HAS_tst 0 + +#define TCG_TARGET_extract_valid(type, ofs, len) 1 +#define TCG_TARGET_sextract_valid(type, ofs, len) 1 +#define TCG_TARGET_deposit_valid(type, ofs, len) 1 + +#endif /* TCG_TARGET_HAS_H */ diff --git a/tcg/wasm32/tcg-target-mo.h b/tcg/wasm32/tcg-target-mo.h new file mode 100644 index 0000000000..0865185c9a --- /dev/null +++ b/tcg/wasm32/tcg-target-mo.h @@ -0,0 +1,12 @@ +/* + * SPDX-License-Identifier: GPL-2.0-or-later + */ +/* + * Define target-specific memory model + */ +#ifndef TCG_TARGET_MO_H +#define TCG_TARGET_MO_H + +#define TCG_TARGET_DEFAULT_MO 0 + +#endif diff --git a/tcg/wasm32/tcg-target-opc.h.inc b/tcg/wasm32/tcg-target-opc.h.inc new file mode 100644 index 0000000000..ecc8c4e55e --- /dev/null +++ b/tcg/wasm32/tcg-target-opc.h.inc @@ -0,0 +1,4 @@ +/* SPDX-License-Identifier: MIT */ +/* These opcodes are for use between the tci generator and interpreter.
*/ +DEF(tci_movi, 1, 0, 1, TCG_OPF_NOT_PRESENT) +DEF(tci_movl, 1, 0, 1, TCG_OPF_NOT_PRESENT) diff --git a/tcg/wasm32/tcg-target-reg-bits.h b/tcg/wasm32/tcg-target-reg-bits.h new file mode 100644 index 0000000000..4f60ae9166 --- /dev/null +++ b/tcg/wasm32/tcg-target-reg-bits.h @@ -0,0 +1,12 @@ +/* + * SPDX-License-Identifier: GPL-2.0-or-later + */ +/* + * Define target-specific register size + */ +#ifndef TCG_TARGET_REG_BITS_H +#define TCG_TARGET_REG_BITS_H + +#define TCG_TARGET_REG_BITS 64 + +#endif diff --git a/tcg/wasm32/tcg-target.c.inc b/tcg/wasm32/tcg-target.c.inc new file mode 100644 index 0000000000..6a31d33f71 --- /dev/null +++ b/tcg/wasm32/tcg-target.c.inc @@ -0,0 +1,4484 @@ +/* + * SPDX-License-Identifier: GPL-2.0-or-later + */ +/* + * Tiny Code Generator for QEMU + * + * Copyright (c) 2018 SiFive, Inc + * Copyright (c) 2008-2009 Arnaud Patard + * Copyright (c) 2009 Aurelien Jarno + * Copyright (c) 2008 Fabrice Bellard + * Copyright (c) 2009, 2011 Stefan Weil + * + * Based on riscv/tcg-target.c.inc and tci/tcg-target.c + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to deal + * in the Software without restriction, including without limitation the rights + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell + * copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN + * THE SOFTWARE. + */ + +#include "qapi/error.h" +#include +#include +#include "../wasm32.h" + +/* Used for function call generation. */ +#define TCG_TARGET_CALL_STACK_OFFSET 0 +#define TCG_TARGET_STACK_ALIGN 8 +#define TCG_TARGET_CALL_ARG_I32 TCG_CALL_ARG_NORMAL +#define TCG_TARGET_CALL_ARG_I64 TCG_CALL_ARG_NORMAL +#define TCG_TARGET_CALL_ARG_I128 TCG_CALL_ARG_NORMAL +#define TCG_TARGET_CALL_RET_I128 TCG_CALL_RET_NORMAL + +static TCGConstraintSetIndex +tcg_target_op_def(TCGOpcode op, TCGType type, unsigned flags) +{ + switch (op) { + case INDEX_op_goto_ptr: + return C_O0_I1(r); + + case INDEX_op_ld8u_i32: + case INDEX_op_ld8s_i32: + case INDEX_op_ld16u_i32: + case INDEX_op_ld16s_i32: + case INDEX_op_ld_i32: + case INDEX_op_ld8u_i64: + case INDEX_op_ld8s_i64: + case INDEX_op_ld16u_i64: + case INDEX_op_ld16s_i64: + case INDEX_op_ld32u_i64: + case INDEX_op_ld32s_i64: + case INDEX_op_ld_i64: + case INDEX_op_not_i32: + case INDEX_op_not_i64: + case INDEX_op_neg_i32: + case INDEX_op_neg_i64: + case INDEX_op_ext8s_i32: + case INDEX_op_ext8s_i64: + case INDEX_op_ext16s_i32: + case INDEX_op_ext16s_i64: + case INDEX_op_ext8u_i32: + case INDEX_op_ext8u_i64: + case INDEX_op_ext16u_i32: + case INDEX_op_ext16u_i64: + case INDEX_op_ext32s_i64: + case INDEX_op_ext32u_i64: + case INDEX_op_ext_i32_i64: + case INDEX_op_extu_i32_i64: + case INDEX_op_bswap16_i32: + case INDEX_op_bswap16_i64: + case INDEX_op_bswap32_i32: + case INDEX_op_bswap32_i64: + case INDEX_op_bswap64_i64: + case INDEX_op_extract_i32: + case INDEX_op_extract_i64: + case INDEX_op_sextract_i32: + case INDEX_op_sextract_i64: + case INDEX_op_extrl_i64_i32: + case INDEX_op_extrh_i64_i32: + case INDEX_op_ctpop_i32: + case INDEX_op_ctpop_i64: + return C_O1_I1(r, r); + 
+ case INDEX_op_st8_i32: + case INDEX_op_st16_i32: + case INDEX_op_st_i32: + case INDEX_op_st8_i64: + case INDEX_op_st16_i64: + case INDEX_op_st32_i64: + case INDEX_op_st_i64: + return C_O0_I2(r, r); + + case INDEX_op_div_i32: + case INDEX_op_div_i64: + case INDEX_op_divu_i32: + case INDEX_op_divu_i64: + case INDEX_op_rem_i32: + case INDEX_op_rem_i64: + case INDEX_op_remu_i32: + case INDEX_op_remu_i64: + case INDEX_op_add_i32: + case INDEX_op_add_i64: + case INDEX_op_sub_i32: + case INDEX_op_sub_i64: + case INDEX_op_mul_i32: + case INDEX_op_mul_i64: + case INDEX_op_and_i32: + case INDEX_op_and_i64: + case INDEX_op_andc_i32: + case INDEX_op_andc_i64: + case INDEX_op_eqv_i32: + case INDEX_op_eqv_i64: + case INDEX_op_nand_i32: + case INDEX_op_nand_i64: + case INDEX_op_nor_i32: + case INDEX_op_nor_i64: + case INDEX_op_or_i32: + case INDEX_op_or_i64: + case INDEX_op_orc_i32: + case INDEX_op_orc_i64: + case INDEX_op_xor_i32: + case INDEX_op_xor_i64: + case INDEX_op_shl_i32: + case INDEX_op_shl_i64: + case INDEX_op_shr_i32: + case INDEX_op_shr_i64: + case INDEX_op_sar_i32: + case INDEX_op_sar_i64: + case INDEX_op_rotl_i32: + case INDEX_op_rotl_i64: + case INDEX_op_rotr_i32: + case INDEX_op_rotr_i64: + case INDEX_op_setcond_i32: + case INDEX_op_setcond_i64: + case INDEX_op_deposit_i32: + case INDEX_op_deposit_i64: + case INDEX_op_clz_i32: + case INDEX_op_clz_i64: + case INDEX_op_ctz_i32: + case INDEX_op_ctz_i64: + return C_O1_I2(r, r, r); + + case INDEX_op_brcond_i32: + case INDEX_op_brcond_i64: + return C_O0_I2(r, r); + + case INDEX_op_add2_i32: + case INDEX_op_add2_i64: + case INDEX_op_sub2_i32: + case INDEX_op_sub2_i64: + return C_O2_I4(r, r, r, r, r, r); + + case INDEX_op_mulu2_i32: + case INDEX_op_mulu2_i64: + case INDEX_op_muls2_i32: + case INDEX_op_muls2_i64: + return C_O2_I2(r, r, r, r); + + case INDEX_op_movcond_i32: + case INDEX_op_movcond_i64: + return C_O1_I4(r, r, r, r, r); + + case INDEX_op_setcond2_i32: + return C_O1_I4(r, r, r, r, r); + case 
INDEX_op_brcond2_i32: + return C_O0_I4(r, r, r, r); + + case INDEX_op_qemu_ld_i32: + return C_O1_I1(r, r); + + case INDEX_op_qemu_ld_i64: + return C_O1_I1(r, r); + case INDEX_op_qemu_st_i32: + return C_O0_I2(r, r); + case INDEX_op_qemu_st_i64: + return C_O0_I2(r, r); + + case INDEX_op_muluh_i32: + case INDEX_op_mulsh_i32: + return C_O1_I2(r, r, r); + case INDEX_op_extract2_i32: + case INDEX_op_extract2_i64: + return C_O1_I2(r, r, r); + + default: + return C_NotImplemented; + } +} + +static const int tcg_target_reg_alloc_order[TCG_TARGET_NB_REGS] = { + TCG_REG_R0, + TCG_REG_R1, + TCG_REG_R2, + TCG_REG_R3, + TCG_REG_R4, + TCG_REG_R5, + TCG_REG_R6, + TCG_REG_R7, + TCG_REG_R8, + TCG_REG_R9, + TCG_REG_R10, + TCG_REG_R11, + TCG_REG_R12, + TCG_REG_R13, + TCG_REG_R14, + TCG_REG_R15, +}; + +#define NUM_OF_IARG_REGS 5 +static const int tcg_target_call_iarg_regs[NUM_OF_IARG_REGS] = { + TCG_REG_R8, + TCG_REG_R9, + TCG_REG_R10, + TCG_REG_R11, + TCG_REG_R12, +}; + +static TCGReg tcg_target_call_oarg_reg(TCGCallReturnKind kind, int slot) +{ + tcg_debug_assert(kind == TCG_CALL_RET_NORMAL); + tcg_debug_assert(slot >= 0 && slot < 128 / TCG_TARGET_REG_BITS); + return TCG_REG_R0 + slot; +} + +#ifdef CONFIG_DEBUG_TCG +static const char *const tcg_target_reg_names[TCG_TARGET_NB_REGS] = { + "r00", + "r01", + "r02", + "r03", + "r04", + "r05", + "r06", + "r07", + "r08", + "r09", + "r10", + "r11", + "r12", + "r13", + "r14", + "r15", +}; +#endif + +#define REG_INDEX_IARG_BASE 8 +static const uint8_t tcg_target_reg_index[TCG_TARGET_NB_REGS] = { + 0, /* TCG_REG_R0 */ + 1, /* TCG_REG_R1 */ + 2, /* TCG_REG_R2 */ + 3, /* TCG_REG_R3 */ + 4, /* TCG_REG_R4 */ + 5, /* TCG_REG_R5 */ + 6, /* TCG_REG_R6 */ + 7, /* TCG_REG_R7 */ + 8, /* TCG_REG_R8 */ + 9, /* TCG_REG_R9 */ + 10, /* TCG_REG_R10 */ + 11, /* TCG_REG_R11 */ + 12, /* TCG_REG_R12 */ + 13, /* TCG_REG_R13 */ + 14, /* TCG_REG_R14 */ + 15, /* TCG_REG_R15 */ +}; + +#define BLOCK_PTR_IDX 16 + +#define CTX_IDX 0 +#define TMP32_LOCAL_0_IDX 1 +#define 
TMP32_LOCAL_1_IDX 2 +#define TMP64_LOCAL_0_IDX 3 + +/* function index */ +#define CHECK_UNWINDING_IDX 0 /* a function for checking the Asyncify status */ +#define HELPER_IDX_START 1 /* other helper functions */ + +/* Test if a constant matches the constraint. */ +static bool tcg_target_const_match(int64_t val, int ct, + TCGType type, TCGCond cond, int vece) +{ + return ct & TCG_CT_CONST; +} + +static void fill_uint32_leb128(uint8_t *b, uint32_t v) +{ + uint32_t low7 = 0x7f; + do { + *b |= v & low7; + v >>= 7; + b++; + } while (v != 0); +} + +static int write_uint32_leb128(uint8_t *b, uint32_t v) +{ + uint8_t *base = b; + uint32_t low7 = 0x7f; + do { + *b = (uint8_t)(v & low7); + v >>= 7; + if (v != 0) { + *b |= 0x80; + } + b++; + } while (v != 0); + + return (int)(b - base); +} + +#define BUF_MAX 4096 +typedef struct LinkedBuf { + struct LinkedBuf *next; + uint8_t data[BUF_MAX]; + uint32_t size; +} LinkedBuf; + +static LinkedBuf *new_linked_buf(void) +{ + LinkedBuf *p = tcg_malloc(sizeof(LinkedBuf)); + p->size = 0; + p->next = NULL; + return p; +} + +static inline LinkedBuf *linked_buf_out8(LinkedBuf *buf, uint8_t v) +{ + if (buf->size == BUF_MAX) { + buf->next = new_linked_buf(); + buf = buf->next; + } + *(buf->data + buf->size++) = v; + return buf; +} + +static inline int linked_buf_len(LinkedBuf *buf) +{ + int total = 0; + for (LinkedBuf *p = buf; p; p = p->next) { + total += p->size; + } + return total; +} + +static inline void linked_buf_write(LinkedBuf *buf, void *dst) +{ + for (LinkedBuf *p = buf; p; p = p->next) { + memcpy(dst, p->data, p->size); + dst += p->size; + } +} + +/* + * wasm code is generated in dynamically allocated buffers which + * are managed as a linked list.
+ */ +__thread LinkedBuf *sub_buf_root; +__thread LinkedBuf *sub_buf_cur; + +static void init_sub_buf(void) +{ + sub_buf_root = new_linked_buf(); + sub_buf_cur = sub_buf_root; +} + +static inline int sub_buf_len(void) +{ + return linked_buf_len(sub_buf_root); +} + +static inline void tcg_wasm_out8(TCGContext *s, uint32_t v) +{ + sub_buf_cur = linked_buf_out8(sub_buf_cur, v); +} + +static void tcg_wasm_out_leb128_sint32_t(TCGContext *s, int32_t v) +{ + bool more = true; + uint8_t b; + uint32_t low7 = 0x7f; + while (more) { + b = v & low7; + v >>= 7; + if (((v == 0) && ((b & 0x40) == 0)) || + ((v == -1) && ((b & 0x40) != 0))) { + more = false; + } else { + b |= 0x80; + } + tcg_wasm_out8(s, b); + } +} + +static void tcg_wasm_out_leb128_sint64_t(TCGContext *s, int64_t v) +{ + bool more = true; + uint8_t b; + uint64_t low7 = 0x7f; + while (more) { + b = v & low7; + v >>= 7; + if (((v == 0) && ((b & 0x40) == 0)) || + ((v == -1) && ((b & 0x40) != 0))) { + more = false; + } else { + b |= 0x80; + } + tcg_wasm_out8(s, b); + } +} + +static void tcg_wasm_out_leb128_uint32_t(TCGContext *s, uint32_t v) +{ + uint32_t low7 = 0x7f; + uint8_t b; + do { + b = v & low7; + v >>= 7; + if (v != 0) { + b |= 0x80; + } + tcg_wasm_out8(s, b); + } while (v != 0); +} + +static void tcg_wasm_out_op_br(TCGContext *s, int i) +{ + tcg_wasm_out8(s, 0x0c); + tcg_wasm_out8(s, i); +} + +static void tcg_wasm_out_op_loop_noret(TCGContext *s) +{ + tcg_wasm_out8(s, 0x03); + tcg_wasm_out8(s, 0x40); +} + +static void tcg_wasm_out_op_if_noret(TCGContext *s) +{ + tcg_wasm_out8(s, 0x04); + tcg_wasm_out8(s, 0x40); +} + +static void tcg_wasm_out_op_if_ret_i64(TCGContext *s) +{ + tcg_wasm_out8(s, 0x04); + tcg_wasm_out8(s, 0x7e); +} + +static void tcg_wasm_out_op_if_ret_i32(TCGContext *s) +{ + tcg_wasm_out8(s, 0x04); + tcg_wasm_out8(s, 0x7f); +} + +static void tcg_wasm_out_op_else(TCGContext *s) +{ + tcg_wasm_out8(s, 0x05); +} + +static void tcg_wasm_out_op_end(TCGContext *s) +{ + tcg_wasm_out8(s, 0x0b); +} + + 
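The `tcg_wasm_out_leb128_*` emitters above implement standard LEB128 variable-length integers as used throughout the Wasm binary format. A standalone sketch of the same two loops (the function names here are hypothetical; the real emitters stream bytes through `tcg_wasm_out8()` rather than a caller-supplied buffer):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Unsigned LEB128: 7 payload bits per byte, bit 7 set while more remain. */
static size_t leb128_encode_u32(uint8_t *out, uint32_t v)
{
    size_t n = 0;
    do {
        uint8_t b = v & 0x7f;
        v >>= 7;
        if (v != 0) {
            b |= 0x80;   /* continuation bit */
        }
        out[n++] = b;
    } while (v != 0);
    return n;
}

/*
 * Signed LEB128: stop once the remaining bits are pure sign extension,
 * i.e. v is 0 with the sign bit of the last byte clear, or -1 with it set.
 */
static size_t leb128_encode_s32(uint8_t *out, int32_t v)
{
    size_t n = 0;
    bool more = true;
    while (more) {
        uint8_t b = v & 0x7f;
        v >>= 7;    /* arithmetic shift, matching the emitter above */
        if ((v == 0 && !(b & 0x40)) || (v == -1 && (b & 0x40))) {
            more = false;
        } else {
            b |= 0x80;
        }
        out[n++] = b;
    }
    return n;
}
```

For example, 624485 encodes to the bytes 0xE5 0x8E 0x26 and -123456 to 0xC0 0xBB 0x78, the same sequences `tcg_wasm_out_leb128_uint32_t` and `tcg_wasm_out_leb128_sint32_t` emit.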
+static void tcg_wasm_out_op_i32_eqz(TCGContext *s) +{ + tcg_wasm_out8(s, 0x45); +} +static void tcg_wasm_out_op_i32_eq(TCGContext *s) +{ + tcg_wasm_out8(s, 0x46); +} +static void tcg_wasm_out_op_i32_and(TCGContext *s) +{ + tcg_wasm_out8(s, 0x71); +} +static void tcg_wasm_out_op_i32_or(TCGContext *s) +{ + tcg_wasm_out8(s, 0x72); +} +static void tcg_wasm_out_op_i32_shl(TCGContext *s) +{ + tcg_wasm_out8(s, 0x74); +} +static void tcg_wasm_out_op_i32_shr_s(TCGContext *s) +{ + tcg_wasm_out8(s, 0x75); +} +static void tcg_wasm_out_op_i32_shr_u(TCGContext *s) +{ + tcg_wasm_out8(s, 0x76); +} +static void tcg_wasm_out_op_i32_rotl(TCGContext *s) +{ + tcg_wasm_out8(s, 0x77); +} +static void tcg_wasm_out_op_i32_rotr(TCGContext *s) +{ + tcg_wasm_out8(s, 0x78); +} +static void tcg_wasm_out_op_i32_clz(TCGContext *s) +{ + tcg_wasm_out8(s, 0x67); +} +static void tcg_wasm_out_op_i32_ctz(TCGContext *s) +{ + tcg_wasm_out8(s, 0x68); +} +static void tcg_wasm_out_op_i32_popcnt(TCGContext *s) +{ + tcg_wasm_out8(s, 0x69); +} +static void tcg_wasm_out_op_i32_add(TCGContext *s) +{ + tcg_wasm_out8(s, 0x6a); +} +static void tcg_wasm_out_op_i32_ne(TCGContext *s) +{ + tcg_wasm_out8(s, 0x47); +} +static void tcg_wasm_out_op_i64_eqz(TCGContext *s) +{ + tcg_wasm_out8(s, 0x50); +} +static void tcg_wasm_out_op_i64_eq(TCGContext *s) +{ + tcg_wasm_out8(s, 0x51); +} +static void tcg_wasm_out_op_i64_and(TCGContext *s) +{ + tcg_wasm_out8(s, 0x83); +} +static void tcg_wasm_out_op_i64_or(TCGContext *s) +{ + tcg_wasm_out8(s, 0x84); +} +static void tcg_wasm_out_op_i64_xor(TCGContext *s) +{ + tcg_wasm_out8(s, 0x85); +} +static void tcg_wasm_out_op_i64_shl(TCGContext *s) +{ + tcg_wasm_out8(s, 0x86); +} +static void tcg_wasm_out_op_i64_shr_s(TCGContext *s) +{ + tcg_wasm_out8(s, 0x87); +} +static void tcg_wasm_out_op_i64_shr_u(TCGContext *s) +{ + tcg_wasm_out8(s, 0x88); +} +static void tcg_wasm_out_op_i64_rotl(TCGContext *s) +{ + tcg_wasm_out8(s, 0x89); +} +static void tcg_wasm_out_op_i64_rotr(TCGContext *s) +{ + 
tcg_wasm_out8(s, 0x8a); +} +static void tcg_wasm_out_op_i64_clz(TCGContext *s) +{ + tcg_wasm_out8(s, 0x79); +} +static void tcg_wasm_out_op_i64_ctz(TCGContext *s) +{ + tcg_wasm_out8(s, 0x7a); +} +static void tcg_wasm_out_op_i64_popcnt(TCGContext *s) +{ + tcg_wasm_out8(s, 0x7b); +} +static void tcg_wasm_out_op_i64_add(TCGContext *s) +{ + tcg_wasm_out8(s, 0x7c); +} +static void tcg_wasm_out_op_i64_sub(TCGContext *s) +{ + tcg_wasm_out8(s, 0x7d); +} +static void tcg_wasm_out_op_i64_mul(TCGContext *s) +{ + tcg_wasm_out8(s, 0x7e); +} +static void tcg_wasm_out_op_i64_div_s(TCGContext *s) +{ + tcg_wasm_out8(s, 0x7f); +} +static void tcg_wasm_out_op_i64_div_u(TCGContext *s) +{ + tcg_wasm_out8(s, 0x80); +} +static void tcg_wasm_out_op_i64_rem_s(TCGContext *s) +{ + tcg_wasm_out8(s, 0x81); +} +static void tcg_wasm_out_op_i64_rem_u(TCGContext *s) +{ + tcg_wasm_out8(s, 0x82); +} +static void tcg_wasm_out_op_i64_le_u(TCGContext *s) +{ + tcg_wasm_out8(s, 0x58); +} +static void tcg_wasm_out_op_i64_lt_u(TCGContext *s) +{ + tcg_wasm_out8(s, 0x54); +} +static void tcg_wasm_out_op_i64_gt_u(TCGContext *s) +{ + tcg_wasm_out8(s, 0x56); +} + +static void tcg_wasm_out_op_i32_wrap_i64(TCGContext *s) +{ + tcg_wasm_out8(s, 0xa7); +} + +static void tcg_wasm_out_op_var(TCGContext *s, uint8_t instr, uint8_t i) +{ + tcg_wasm_out8(s, instr); + tcg_wasm_out8(s, i); +} + +static void tcg_wasm_out_op_local_get(TCGContext *s, uint8_t i) +{ + tcg_wasm_out_op_var(s, 0x20, i); +} + +static void tcg_wasm_out_op_local_set(TCGContext *s, uint8_t i) +{ + tcg_wasm_out_op_var(s, 0x21, i); +} + +static void tcg_wasm_out_op_local_tee(TCGContext *s, uint8_t i) +{ + tcg_wasm_out_op_var(s, 0x22, i); +} + +static void tcg_wasm_out_op_global_get(TCGContext *s, uint8_t i) +{ + tcg_wasm_out_op_var(s, 0x23, i); +} + +static void tcg_wasm_out_op_global_set(TCGContext *s, uint8_t i) +{ + tcg_wasm_out_op_var(s, 0x24, i); +} + +static void tcg_wasm_out_op_global_get_r_i32(TCGContext *s, TCGReg r0) +{ + 
tcg_wasm_out_op_global_get(s, tcg_target_reg_index[r0]); + tcg_wasm_out_op_i32_wrap_i64(s); +} + +static void tcg_wasm_out_op_global_get_r(TCGContext *s, TCGReg r0) +{ + tcg_wasm_out_op_global_get(s, tcg_target_reg_index[r0]); +} + +static void tcg_wasm_out_op_global_set_r(TCGContext *s, TCGReg r0) +{ + tcg_wasm_out_op_global_set(s, tcg_target_reg_index[r0]); +} + +static void tcg_wasm_out_op_i32_const(TCGContext *s, int32_t v) +{ + tcg_wasm_out8(s, 0x41); + tcg_wasm_out_leb128_sint32_t(s, v); +} + +static void tcg_wasm_out_op_i64_const(TCGContext *s, int64_t v) +{ + tcg_wasm_out8(s, 0x42); + tcg_wasm_out_leb128_sint64_t(s, v); +} + +static void tcg_wasm_out_op_loadstore( + TCGContext *s, uint8_t instr, uint32_t a, uint32_t o) +{ + tcg_wasm_out8(s, instr); + tcg_wasm_out_leb128_uint32_t(s, a); + tcg_wasm_out_leb128_uint32_t(s, o); +} + +static void tcg_wasm_out_op_i64_store(TCGContext *s, uint32_t a, uint32_t o) +{ + tcg_wasm_out_op_loadstore(s, 0x37, a, o); +} + +static void tcg_wasm_out_op_i32_store(TCGContext *s, uint32_t a, uint32_t o) +{ + tcg_wasm_out_op_loadstore(s, 0x36, a, o); +} + +static void tcg_wasm_out_op_i64_store8(TCGContext *s, uint32_t a, uint32_t o) +{ + tcg_wasm_out_op_loadstore(s, 0x3c, a, o); +} + +static void tcg_wasm_out_op_i64_store16(TCGContext *s, uint32_t a, uint32_t o) +{ + tcg_wasm_out_op_loadstore(s, 0x3d, a, o); +} + +static void tcg_wasm_out_op_i64_store32(TCGContext *s, uint32_t a, uint32_t o) +{ + tcg_wasm_out_op_loadstore(s, 0x3e, a, o); +} + +static void tcg_wasm_out_op_i64_load(TCGContext *s, uint32_t a, uint32_t o) +{ + tcg_wasm_out_op_loadstore(s, 0x29, a, o); +} + +static void tcg_wasm_out_op_i32_load(TCGContext *s, uint32_t a, uint32_t o) +{ + tcg_wasm_out_op_loadstore(s, 0x28, a, o); +} + +static void tcg_wasm_out_op_i64_load8_s(TCGContext *s, uint32_t a, uint32_t o) +{ + tcg_wasm_out_op_loadstore(s, 0x30, a, o); +} + +static void tcg_wasm_out_op_i64_load8_u(TCGContext *s, uint32_t a, uint32_t o) +{ +
tcg_wasm_out_op_loadstore(s, 0x31, a, o); +} + +static void tcg_wasm_out_op_i64_load16_s(TCGContext *s, uint32_t a, uint32_t o) +{ + tcg_wasm_out_op_loadstore(s, 0x32, a, o); +} + +static void tcg_wasm_out_op_i64_load16_u(TCGContext *s, uint32_t a, uint32_t o) +{ + tcg_wasm_out_op_loadstore(s, 0x33, a, o); +} + +static void tcg_wasm_out_op_i64_load32_u(TCGContext *s, uint32_t a, uint32_t o) +{ + tcg_wasm_out_op_loadstore(s, 0x35, a, o); +} + +static void tcg_wasm_out_op_i64_load32_s(TCGContext *s, uint32_t a, uint32_t o) +{ + tcg_wasm_out_op_loadstore(s, 0x34, a, o); +} + +static void tcg_wasm_out_op_return(TCGContext *s) +{ + tcg_wasm_out8(s, 0x0f); +} + +static void tcg_wasm_out_op_call(TCGContext *s, uint32_t func_idx) +{ + tcg_wasm_out8(s, 0x10); + tcg_wasm_out_leb128_uint32_t(s, func_idx); +} + +static void tcg_wasm_out_op_i64_extend_i32_u(TCGContext *s) +{ + tcg_wasm_out8(s, 0xad); +} + +static void tcg_wasm_out_op_i64_extend_i32_s(TCGContext *s) +{ + tcg_wasm_out8(s, 0xac); +} + +static void tcg_wasm_out_op_i64_extend8_s(TCGContext *s) +{ + tcg_wasm_out8(s, 0xc2); +} + +static void tcg_wasm_out_op_i64_extend16_s(TCGContext *s) +{ + tcg_wasm_out8(s, 0xc3); +} + +static void tcg_wasm_out_op_not(TCGContext *s) +{ + tcg_wasm_out_op_i64_const(s, -1); + tcg_wasm_out_op_i64_xor(s); +} + +static void tcg_wasm_out_op_set_r_as_i64(TCGContext *s, TCGReg al, TCGReg ah) +{ + tcg_wasm_out_op_local_set(s, TMP64_LOCAL_0_IDX); + + /* set lower bits */ + tcg_wasm_out_op_local_get(s, TMP64_LOCAL_0_IDX); + tcg_wasm_out_op_i64_const(s, 0xffffffff); + tcg_wasm_out_op_i64_and(s); + tcg_wasm_out_op_global_set_r(s, al); + + /* set higher bits */ + tcg_wasm_out_op_local_get(s, TMP64_LOCAL_0_IDX); + tcg_wasm_out_op_i64_const(s, 32); + tcg_wasm_out_op_i64_shr_u(s); + tcg_wasm_out_op_i64_const(s, 0xffffffff); + tcg_wasm_out_op_i64_and(s); + tcg_wasm_out_op_global_set_r(s, ah); +} + +static const struct { + uint8_t i32; + uint8_t i64; +} tcg_cond_to_inst[] = { + [TCG_COND_EQ] = { 0x46 /* 
i32.eq */ , 0x51 /* i64.eq */}, + [TCG_COND_NE] = { 0x47 /* i32.ne */ , 0x52 /* i64.ne */}, + [TCG_COND_LT] = { 0x48 /* i32.lt_s */ , 0x53 /* i64.lt_s */}, + [TCG_COND_GE] = { 0x4e /* i32.ge_s */ , 0x59 /* i64.ge_s */}, + [TCG_COND_LE] = { 0x4c /* i32.le_s */ , 0x57 /* i64.le_s */}, + [TCG_COND_GT] = { 0x4a /* i32.gt_s */ , 0x55 /* i64.gt_s */}, + [TCG_COND_LTU] = { 0x49 /* i32.lt_u */ , 0x54 /* i64.lt_u */}, + [TCG_COND_GEU] = { 0x4f /* i32.ge_u */ , 0x5a /* i64.ge_u */}, + [TCG_COND_LEU] = { 0x4d /* i32.le_u */ , 0x58 /* i64.le_u */}, + [TCG_COND_GTU] = { 0x4b /* i32.gt_u */ , 0x56 /* i64.gt_u */} +}; + +static void tcg_wasm_out_op_cond_i64( + TCGContext *s, TCGCond cond, TCGReg arg1, TCGReg arg2) +{ + uint8_t op = tcg_cond_to_inst[cond].i64; + tcg_wasm_out_op_global_get_r(s, arg1); + tcg_wasm_out_op_global_get_r(s, arg2); + tcg_wasm_out8(s, op); +} + +static void tcg_wasm_out_op_cond_i32( + TCGContext *s, TCGCond cond, TCGReg arg1, TCGReg arg2) +{ + uint8_t op = tcg_cond_to_inst[cond].i32; + tcg_wasm_out_op_global_get_r(s, arg1); + tcg_wasm_out_op_i32_wrap_i64(s); + tcg_wasm_out_op_global_get_r(s, arg2); + tcg_wasm_out_op_i32_wrap_i64(s); + tcg_wasm_out8(s, op); +} + +#define tcg_wasm_out_i64_calc(op) \ + static void tcg_wasm_out_i64_calc_##op( \ + TCGContext *s, TCGReg ret, TCGReg arg1, TCGReg arg2) \ + { \ + tcg_wasm_out_op_global_get_r(s, arg1); \ + tcg_wasm_out_op_global_get_r(s, arg2); \ + tcg_wasm_out_op_i64_##op(s); \ + tcg_wasm_out_op_global_set_r(s, ret); \ + } +tcg_wasm_out_i64_calc(and); +tcg_wasm_out_i64_calc(or); +tcg_wasm_out_i64_calc(xor); +tcg_wasm_out_i64_calc(rotl); +tcg_wasm_out_i64_calc(rotr); +tcg_wasm_out_i64_calc(add); +tcg_wasm_out_i64_calc(sub); +tcg_wasm_out_i64_calc(mul); + +static void tcg_wasm_out_rem_s( + TCGContext *s, TCGType type, TCGReg ret, TCGReg arg1, TCGReg arg2) +{ + switch (type) { + case TCG_TYPE_I32: + tcg_wasm_out_op_global_get_r(s, arg1); + tcg_wasm_out_op_i32_wrap_i64(s); + tcg_wasm_out_op_i64_extend_i32_s(s); + 
tcg_wasm_out_op_global_get_r(s, arg2); + tcg_wasm_out_op_i32_wrap_i64(s); + tcg_wasm_out_op_i64_extend_i32_s(s); + tcg_wasm_out_op_i64_rem_s(s); + tcg_wasm_out_op_global_set_r(s, ret); + break; + case TCG_TYPE_I64: + tcg_wasm_out_op_global_get_r(s, arg1); + tcg_wasm_out_op_global_get_r(s, arg2); + tcg_wasm_out_op_i64_rem_s(s); + tcg_wasm_out_op_global_set_r(s, ret); + break; + default: + g_assert_not_reached(); + } +} + +static void tcg_wasm_out_rem_u( + TCGContext *s, TCGType type, TCGReg ret, TCGReg arg1, TCGReg arg2) +{ + switch (type) { + case TCG_TYPE_I32: + tcg_wasm_out_op_global_get_r(s, arg1); + tcg_wasm_out_op_i64_const(s, 0xffffffff); + tcg_wasm_out_op_i64_and(s); + tcg_wasm_out_op_global_get_r(s, arg2); + tcg_wasm_out_op_i64_const(s, 0xffffffff); + tcg_wasm_out_op_i64_and(s); + tcg_wasm_out_op_i64_rem_u(s); + tcg_wasm_out_op_global_set_r(s, ret); + break; + case TCG_TYPE_I64: + tcg_wasm_out_op_global_get_r(s, arg1); + tcg_wasm_out_op_global_get_r(s, arg2); + tcg_wasm_out_op_i64_rem_u(s); + tcg_wasm_out_op_global_set_r(s, ret); + break; + default: + g_assert_not_reached(); + } +} + +static void tcg_wasm_out_div_s( + TCGContext *s, TCGType type, TCGReg ret, TCGReg arg1, TCGReg arg2) +{ + switch (type) { + case TCG_TYPE_I32: + tcg_wasm_out_op_global_get_r(s, arg1); + tcg_wasm_out_op_i32_wrap_i64(s); + tcg_wasm_out_op_i64_extend_i32_s(s); + tcg_wasm_out_op_global_get_r(s, arg2); + tcg_wasm_out_op_i32_wrap_i64(s); + tcg_wasm_out_op_i64_extend_i32_s(s); + tcg_wasm_out_op_i64_div_s(s); + tcg_wasm_out_op_global_set_r(s, ret); + break; + case TCG_TYPE_I64: + tcg_wasm_out_op_global_get_r(s, arg1); + tcg_wasm_out_op_global_get_r(s, arg2); + tcg_wasm_out_op_i64_div_s(s); + tcg_wasm_out_op_global_set_r(s, ret); + break; + default: + g_assert_not_reached(); + } +} + +static void tcg_wasm_out_div_u( + TCGContext *s, TCGType type, TCGReg ret, TCGReg arg1, TCGReg arg2) +{ + switch (type) { + case TCG_TYPE_I32: + tcg_wasm_out_op_global_get_r(s, arg1); + 
tcg_wasm_out_op_i64_const(s, 0xffffffff); + tcg_wasm_out_op_i64_and(s); + tcg_wasm_out_op_global_get_r(s, arg2); + tcg_wasm_out_op_i64_const(s, 0xffffffff); + tcg_wasm_out_op_i64_and(s); + tcg_wasm_out_op_i64_div_u(s); + tcg_wasm_out_op_global_set_r(s, ret); + break; + case TCG_TYPE_I64: + tcg_wasm_out_op_global_get_r(s, arg1); + tcg_wasm_out_op_global_get_r(s, arg2); + tcg_wasm_out_op_i64_div_u(s); + tcg_wasm_out_op_global_set_r(s, ret); + break; + default: + g_assert_not_reached(); + } +} + +static void tcg_wasm_out_shl( + TCGContext *s, TCGType type, TCGReg ret, TCGReg arg1, TCGReg arg2) +{ + switch (type) { + case TCG_TYPE_I32: + tcg_wasm_out_op_global_get_r(s, arg1); + tcg_wasm_out_op_i64_const(s, 0xffffffff); + tcg_wasm_out_op_i64_and(s); + tcg_wasm_out_op_global_get_r(s, arg2); + tcg_wasm_out_op_i64_const(s, 31); + tcg_wasm_out_op_i64_and(s); + tcg_wasm_out_op_i64_shl(s); + tcg_wasm_out_op_global_set_r(s, ret); + break; + case TCG_TYPE_I64: + tcg_wasm_out_op_global_get_r(s, arg1); + tcg_wasm_out_op_global_get_r(s, arg2); + tcg_wasm_out_op_i64_shl(s); + tcg_wasm_out_op_global_set_r(s, ret); + break; + default: + g_assert_not_reached(); + } +} + +static void tcg_wasm_out_shr_u( + TCGContext *s, TCGType type, TCGReg ret, TCGReg arg1, TCGReg arg2) +{ + switch (type) { + case TCG_TYPE_I32: + tcg_wasm_out_op_global_get_r(s, arg1); + tcg_wasm_out_op_i64_const(s, 0xffffffff); + tcg_wasm_out_op_i64_and(s); + tcg_wasm_out_op_global_get_r(s, arg2); + tcg_wasm_out_op_i64_const(s, 31); + tcg_wasm_out_op_i64_and(s); + tcg_wasm_out_op_i64_shr_u(s); + tcg_wasm_out_op_global_set_r(s, ret); + break; + case TCG_TYPE_I64: + tcg_wasm_out_op_global_get_r(s, arg1); + tcg_wasm_out_op_global_get_r(s, arg2); + tcg_wasm_out_op_i64_shr_u(s); + tcg_wasm_out_op_global_set_r(s, ret); + break; + default: + g_assert_not_reached(); + } +} + +static void tcg_wasm_out_shr_s( + TCGContext *s, TCGType type, TCGReg ret, TCGReg arg1, TCGReg arg2) +{ + switch (type) { + case TCG_TYPE_I32: + 
+        tcg_wasm_out_op_global_get_r(s, arg1);
+        tcg_wasm_out_op_i32_wrap_i64(s);
+        tcg_wasm_out_op_i64_extend_i32_s(s);
+        tcg_wasm_out_op_global_get_r(s, arg2);
+        tcg_wasm_out_op_i64_const(s, 31);
+        tcg_wasm_out_op_i64_and(s);
+        tcg_wasm_out_op_i64_shr_s(s);
+        tcg_wasm_out_op_global_set_r(s, ret);
+        break;
+    case TCG_TYPE_I64:
+        tcg_wasm_out_op_global_get_r(s, arg1);
+        tcg_wasm_out_op_global_get_r(s, arg2);
+        tcg_wasm_out_op_i64_shr_s(s);
+        tcg_wasm_out_op_global_set_r(s, ret);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
+static void tcg_wasm_out_i32_rotl(
+    TCGContext *s, TCGReg ret, TCGReg arg1, TCGReg arg2)
+{
+    tcg_wasm_out_op_global_get_r(s, arg1);
+    tcg_wasm_out_op_i32_wrap_i64(s);
+    tcg_wasm_out_op_global_get_r(s, arg2);
+    tcg_wasm_out_op_i32_wrap_i64(s);
+    tcg_wasm_out_op_i32_rotl(s);
+    tcg_wasm_out_op_i64_extend_i32_s(s);
+    tcg_wasm_out_op_global_set_r(s, ret);
+}
+
+static void tcg_wasm_out_i32_rotr(
+    TCGContext *s, TCGReg ret, TCGReg arg1, TCGReg arg2)
+{
+    tcg_wasm_out_op_global_get_r(s, arg1);
+    tcg_wasm_out_op_i32_wrap_i64(s);
+    tcg_wasm_out_op_global_get_r(s, arg2);
+    tcg_wasm_out_op_i32_wrap_i64(s);
+    tcg_wasm_out_op_i32_rotr(s);
+    tcg_wasm_out_op_i64_extend_i32_s(s);
+    tcg_wasm_out_op_global_set_r(s, ret);
+}
+
+static void tcg_wasm_out_clz64(
+    TCGContext *s, TCGReg ret, TCGReg arg1, TCGReg arg2)
+{
+    tcg_wasm_out_op_global_get_r(s, arg1);
+    tcg_wasm_out_op_i64_eqz(s);
+    tcg_wasm_out_op_if_ret_i64(s);
+    tcg_wasm_out_op_global_get_r(s, arg2);
+    tcg_wasm_out_op_else(s);
+    tcg_wasm_out_op_global_get_r(s, arg1);
+    tcg_wasm_out_op_i64_clz(s);
+    tcg_wasm_out_op_end(s);
+    tcg_wasm_out_op_global_set_r(s, ret);
+}
+
+static void tcg_wasm_out_clz32(
+    TCGContext *s, TCGReg ret, TCGReg arg1, TCGReg arg2)
+{
+    tcg_wasm_out_op_global_get_r(s, arg1);
+    tcg_wasm_out_op_i64_eqz(s);
+    tcg_wasm_out_op_if_ret_i32(s);
+    tcg_wasm_out_op_global_get_r(s, arg2);
+    tcg_wasm_out_op_i32_wrap_i64(s);
+    tcg_wasm_out_op_else(s);
+    tcg_wasm_out_op_global_get_r(s, arg1);
+    tcg_wasm_out_op_i32_wrap_i64(s);
+    tcg_wasm_out_op_i32_clz(s);
+    tcg_wasm_out_op_end(s);
+    tcg_wasm_out_op_i64_extend_i32_s(s);
+    tcg_wasm_out_op_global_set_r(s, ret);
+}
+
+static void tcg_wasm_out_ctz64(
+    TCGContext *s, TCGReg ret, TCGReg arg1, TCGReg arg2)
+{
+    tcg_wasm_out_op_global_get_r(s, arg1);
+    tcg_wasm_out_op_i64_eqz(s);
+    tcg_wasm_out_op_if_ret_i64(s);
+    tcg_wasm_out_op_global_get_r(s, arg2);
+    tcg_wasm_out_op_else(s);
+    tcg_wasm_out_op_global_get_r(s, arg1);
+    tcg_wasm_out_op_i64_ctz(s);
+    tcg_wasm_out_op_end(s);
+    tcg_wasm_out_op_global_set_r(s, ret);
+}
+
+static void tcg_wasm_out_ctz32(
+    TCGContext *s, TCGReg ret, TCGReg arg1, TCGReg arg2)
+{
+    tcg_wasm_out_op_global_get_r(s, arg1);
+    tcg_wasm_out_op_i64_eqz(s);
+    tcg_wasm_out_op_if_ret_i32(s);
+    tcg_wasm_out_op_global_get_r(s, arg2);
+    tcg_wasm_out_op_i32_wrap_i64(s);
+    tcg_wasm_out_op_else(s);
+    tcg_wasm_out_op_global_get_r(s, arg1);
+    tcg_wasm_out_op_i32_wrap_i64(s);
+    tcg_wasm_out_op_i32_ctz(s);
+    tcg_wasm_out_op_end(s);
+    tcg_wasm_out_op_i64_extend_i32_s(s);
+    tcg_wasm_out_op_global_set_r(s, ret);
+}
+
+static void tcg_wasm_out_not(TCGContext *s, TCGReg ret, TCGReg arg)
+{
+    tcg_wasm_out_op_global_get_r(s, arg);
+    tcg_wasm_out_op_not(s);
+    tcg_wasm_out_op_global_set_r(s, ret);
+}
+
+static void tcg_wasm_out_andc(
+    TCGContext *s, TCGReg ret, TCGReg arg1, TCGReg arg2)
+{
+    tcg_wasm_out_op_global_get_r(s, arg1);
+    tcg_wasm_out_op_global_get_r(s, arg2);
+    tcg_wasm_out_op_not(s);
+    tcg_wasm_out_op_i64_and(s);
+    tcg_wasm_out_op_global_set_r(s, ret);
+}
+
+static void tcg_wasm_out_orc(
+    TCGContext *s, TCGReg ret, TCGReg arg1, TCGReg arg2)
+{
+    tcg_wasm_out_op_global_get_r(s, arg1);
+    tcg_wasm_out_op_global_get_r(s, arg2);
+    tcg_wasm_out_op_not(s);
+    tcg_wasm_out_op_i64_or(s);
+    tcg_wasm_out_op_global_set_r(s, ret);
+}
+
+static void tcg_wasm_out_eqv(
+    TCGContext *s, TCGReg ret, TCGReg arg1, TCGReg arg2)
+{
+    tcg_wasm_out_op_global_get_r(s, arg1);
+    tcg_wasm_out_op_global_get_r(s, arg2);
+    tcg_wasm_out_op_i64_xor(s);
+    tcg_wasm_out_op_not(s);
+    tcg_wasm_out_op_global_set_r(s, ret);
+}
+
+static void tcg_wasm_out_nand(
+    TCGContext *s, TCGReg ret, TCGReg arg1, TCGReg arg2)
+{
+    tcg_wasm_out_op_global_get_r(s, arg1);
+    tcg_wasm_out_op_global_get_r(s, arg2);
+    tcg_wasm_out_op_i64_and(s);
+    tcg_wasm_out_op_not(s);
+    tcg_wasm_out_op_global_set_r(s, ret);
+}
+
+static void tcg_wasm_out_nor(
+    TCGContext *s, TCGReg ret, TCGReg arg1, TCGReg arg2)
+{
+    tcg_wasm_out_op_global_get_r(s, arg1);
+    tcg_wasm_out_op_global_get_r(s, arg2);
+    tcg_wasm_out_op_i64_or(s);
+    tcg_wasm_out_op_not(s);
+    tcg_wasm_out_op_global_set_r(s, ret);
+}
+
+static void tcg_wasm_out_neg(TCGContext *s, TCGReg ret, TCGReg arg)
+{
+    tcg_wasm_out_op_global_get_r(s, arg);
+    tcg_wasm_out_op_not(s);
+    tcg_wasm_out_op_i64_const(s, 1);
+    tcg_wasm_out_op_i64_add(s);
+    tcg_wasm_out_op_global_set_r(s, ret);
+}
+
+static void tcg_wasm_out_ld(
+    TCGContext *s, TCGType type, TCGReg val, TCGReg base, intptr_t offset)
+{
+    switch (type) {
+    case TCG_TYPE_I32:
+        tcg_wasm_out_op_global_get_r_i32(s, base);
+        if ((int32_t)offset < 0) {
+            tcg_wasm_out_op_i32_const(s, (int32_t)offset);
+            tcg_wasm_out_op_i32_add(s);
+            offset = 0;
+        }
+        tcg_wasm_out_op_i64_load32_u(s, 0, (uint32_t)offset);
+        tcg_wasm_out_op_global_set_r(s, val);
+        break;
+    case TCG_TYPE_I64:
+        tcg_wasm_out_op_global_get_r_i32(s, base);
+        if ((int32_t)offset < 0) {
+            tcg_wasm_out_op_i32_const(s, (int32_t)offset);
+            tcg_wasm_out_op_i32_add(s);
+            offset = 0;
+        }
+        tcg_wasm_out_op_i64_load(s, 0, (uint32_t)offset);
+        tcg_wasm_out_op_global_set_r(s, val);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
+static void tcg_wasm_out_ld8s(
+    TCGContext *s, TCGType type, TCGReg val, TCGReg base, intptr_t offset)
+{
+    switch (type) {
+    case TCG_TYPE_I32:
+    case TCG_TYPE_I64:
+        tcg_wasm_out_op_global_get_r_i32(s, base);
+        if ((int32_t)offset < 0) {
+            tcg_wasm_out_op_i32_const(s, (int32_t)offset);
+            tcg_wasm_out_op_i32_add(s);
+            offset = 0;
+        }
+        tcg_wasm_out_op_i64_load8_s(s, 0, (uint32_t)offset);
+        tcg_wasm_out_op_global_set_r(s, val);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
+static void tcg_wasm_out_ld8u(
+    TCGContext *s, TCGType type, TCGReg val, TCGReg base, intptr_t offset)
+{
+    switch (type) {
+    case TCG_TYPE_I32:
+    case TCG_TYPE_I64:
+        tcg_wasm_out_op_global_get_r_i32(s, base);
+        if ((int32_t)offset < 0) {
+            tcg_wasm_out_op_i32_const(s, (int32_t)offset);
+            tcg_wasm_out_op_i32_add(s);
+            offset = 0;
+        }
+        tcg_wasm_out_op_i64_load8_u(s, 0, (uint32_t)offset);
+        tcg_wasm_out_op_global_set_r(s, val);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
+static void tcg_wasm_out_ld16s(
+    TCGContext *s, TCGType type, TCGReg val, TCGReg base, intptr_t offset)
+{
+    switch (type) {
+    case TCG_TYPE_I32:
+    case TCG_TYPE_I64:
+        tcg_wasm_out_op_global_get_r_i32(s, base);
+        if ((int32_t)offset < 0) {
+            tcg_wasm_out_op_i32_const(s, (int32_t)offset);
+            tcg_wasm_out_op_i32_add(s);
+            offset = 0;
+        }
+        tcg_wasm_out_op_i64_load16_s(s, 0, (uint32_t)offset);
+        tcg_wasm_out_op_global_set_r(s, val);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
+static void tcg_wasm_out_ld16u(
+    TCGContext *s, TCGType type, TCGReg val, TCGReg base, intptr_t offset)
+{
+    switch (type) {
+    case TCG_TYPE_I32:
+    case TCG_TYPE_I64:
+        tcg_wasm_out_op_global_get_r_i32(s, base);
+        if ((int32_t)offset < 0) {
+            tcg_wasm_out_op_i32_const(s, (int32_t)offset);
+            tcg_wasm_out_op_i32_add(s);
+            offset = 0;
+        }
+        tcg_wasm_out_op_i64_load16_u(s, 0, (uint32_t)offset);
+        tcg_wasm_out_op_global_set_r(s, val);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
+static void tcg_wasm_out_ld32s(
+    TCGContext *s, TCGType type, TCGReg val, TCGReg base, intptr_t offset)
+{
+    switch (type) {
+    case TCG_TYPE_I32:
+    case TCG_TYPE_I64:
+        tcg_wasm_out_op_global_get_r_i32(s, base);
+        if ((int32_t)offset < 0) {
+            tcg_wasm_out_op_i32_const(s, (int32_t)offset);
+            tcg_wasm_out_op_i32_add(s);
+            offset = 0;
+        }
+        tcg_wasm_out_op_i64_load32_s(s, 0, (uint32_t)offset);
+        tcg_wasm_out_op_global_set_r(s, val);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
+static void tcg_wasm_out_ld32u(TCGContext *s, TCGType type, TCGReg val,
+                               TCGReg base, intptr_t offset)
+{
+    switch (type) {
+    case TCG_TYPE_I32:
+    case TCG_TYPE_I64:
+        tcg_wasm_out_op_global_get_r_i32(s, base);
+        if ((int32_t)offset < 0) {
+            tcg_wasm_out_op_i32_const(s, (int32_t)offset);
+            tcg_wasm_out_op_i32_add(s);
+            offset = 0;
+        }
+        tcg_wasm_out_op_i64_load32_u(s, 0, (uint32_t)offset);
+        tcg_wasm_out_op_global_set_r(s, val);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
+static void tcg_wasm_out_st(TCGContext *s, TCGType type, TCGReg val,
+                            TCGReg base, intptr_t offset)
+{
+    tcg_wasm_out_op_global_get_r_i32(s, base);
+    if ((int32_t)offset < 0) {
+        tcg_wasm_out_op_i32_const(s, (int32_t)offset);
+        tcg_wasm_out_op_i32_add(s);
+        offset = 0;
+    }
+    tcg_wasm_out_op_global_get_r(s, val);
+    switch (type) {
+    case TCG_TYPE_I32:
+        tcg_wasm_out_op_i64_store32(s, 0, (uint32_t)offset);
+        break;
+    case TCG_TYPE_I64:
+        tcg_wasm_out_op_i64_store(s, 0, (uint32_t)offset);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
+static void tcg_wasm_out_st8(TCGContext *s, TCGType type, TCGReg val,
+                             TCGReg base, intptr_t offset)
+{
+    switch (type) {
+    case TCG_TYPE_I32:
+    case TCG_TYPE_I64:
+        tcg_wasm_out_op_global_get_r_i32(s, base);
+        if ((int32_t)offset < 0) {
+            tcg_wasm_out_op_i32_const(s, (int32_t)offset);
+            tcg_wasm_out_op_i32_add(s);
+            offset = 0;
+        }
+        tcg_wasm_out_op_global_get_r(s, val);
+        tcg_wasm_out_op_i64_store8(s, 0, (uint32_t)offset);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
+static void tcg_wasm_out_st16(TCGContext *s, TCGType type, TCGReg val,
+                              TCGReg base, intptr_t offset)
+{
+    switch (type) {
+    case TCG_TYPE_I32:
+    case TCG_TYPE_I64:
+        tcg_wasm_out_op_global_get_r_i32(s, base);
+        if ((int32_t)offset < 0) {
+            tcg_wasm_out_op_i32_const(s, (int32_t)offset);
+            tcg_wasm_out_op_i32_add(s);
+            offset = 0;
+        }
+        tcg_wasm_out_op_global_get_r(s, val);
+        tcg_wasm_out_op_i64_store16(s, 0, (uint32_t)offset);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
+static void tcg_wasm_out_st32(TCGContext *s, TCGType type, TCGReg val,
+                              TCGReg base, intptr_t offset)
+{
+    switch (type) {
+    case TCG_TYPE_I32:
+    case TCG_TYPE_I64:
+        tcg_wasm_out_op_global_get_r_i32(s, base);
+        if ((int32_t)offset < 0) {
+            tcg_wasm_out_op_i32_const(s, (int32_t)offset);
+            tcg_wasm_out_op_i32_add(s);
+            offset = 0;
+        }
+        tcg_wasm_out_op_global_get_r(s, val);
+        tcg_wasm_out_op_i64_store32(s, 0, (uint32_t)offset);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
+static inline bool tcg_wasm_out_sti(TCGContext *s, TCGType type, TCGArg val,
+                                    TCGReg base, intptr_t offset)
+{
+    return false;
+}
+
+static bool tcg_wasm_out_mov(TCGContext *s, TCGType type, TCGReg ret,
+                             TCGReg arg)
+{
+    switch (type) {
+    case TCG_TYPE_I32:
+        tcg_wasm_out_op_global_get_r(s, arg);
+        tcg_wasm_out_op_i32_wrap_i64(s);
+        tcg_wasm_out_op_i64_extend_i32_u(s);
+        tcg_wasm_out_op_global_set_r(s, ret);
+        break;
+    case TCG_TYPE_I64:
+        tcg_wasm_out_op_global_get_r(s, arg);
+        tcg_wasm_out_op_global_set_r(s, ret);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+    return true;
+}
+
+static void tcg_wasm_out_movi(TCGContext *s, TCGType type,
+                              TCGReg ret, tcg_target_long arg)
+{
+    switch (type) {
+    case TCG_TYPE_I32:
+        tcg_wasm_out_op_i64_const(s, (int32_t)arg);
+        break;
+    case TCG_TYPE_I64:
+        tcg_wasm_out_op_i64_const(s, arg);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+    tcg_wasm_out_op_global_set_r(s, ret);
+}
+
+static void tcg_wasm_out_ext8s(TCGContext *s, TCGType type,
+                               TCGReg rd, TCGReg rs)
+{
+    switch (type) {
+    case TCG_TYPE_I32:
+    case TCG_TYPE_I64:
+        tcg_wasm_out_op_global_get_r(s, rs);
+        tcg_wasm_out_op_i64_extend8_s(s);
+        tcg_wasm_out_op_global_set_r(s, rd);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
+static void tcg_wasm_out_ext8u(TCGContext *s, TCGReg rd, TCGReg rs)
+{
+    tcg_wasm_out_op_global_get_r(s, rs);
+    tcg_wasm_out_op_i64_const(s, 0xff);
+    tcg_wasm_out_op_i64_and(s);
+    tcg_wasm_out_op_global_set_r(s, rd);
+}
+
+static void tcg_wasm_out_ext16s(TCGContext *s, TCGType type,
+                                TCGReg rd, TCGReg rs)
+{
+    switch (type) {
+    case TCG_TYPE_I32:
+    case TCG_TYPE_I64:
+        tcg_wasm_out_op_global_get_r(s, rs);
+        tcg_wasm_out_op_i64_extend16_s(s);
+        tcg_wasm_out_op_global_set_r(s, rd);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
+static void tcg_wasm_out_ext16u(TCGContext *s, TCGReg rd, TCGReg rs)
+{
+    tcg_wasm_out_op_global_get_r(s, rs);
+    tcg_wasm_out_op_i64_const(s, 0xffff);
+    tcg_wasm_out_op_i64_and(s);
+    tcg_wasm_out_op_global_set_r(s, rd);
+}
+
+static void tcg_wasm_out_ext32s(TCGContext *s, TCGReg rd, TCGReg rs)
+{
+    tcg_wasm_out_op_global_get_r(s, rs);
+    tcg_wasm_out_op_i32_wrap_i64(s);
+    tcg_wasm_out_op_i64_extend_i32_s(s);
+    tcg_wasm_out_op_global_set_r(s, rd);
+}
+
+static void tcg_wasm_out_ext32u(TCGContext *s, TCGReg rd, TCGReg rs)
+{
+    tcg_wasm_out_op_global_get_r(s, rs);
+    tcg_wasm_out_op_i64_const(s, 0xffffffff);
+    tcg_wasm_out_op_i64_and(s);
+    tcg_wasm_out_op_global_set_r(s, rd);
+}
+
+static void tcg_wasm_out_exts_i32_i64(TCGContext *s, TCGReg rd, TCGReg rs)
+{
+    tcg_wasm_out_ext32s(s, rd, rs);
+}
+
+static void tcg_wasm_out_extu_i32_i64(TCGContext *s, TCGReg rd, TCGReg rs)
+{
+    tcg_wasm_out_ext32u(s, rd, rs);
+}
+
+static void tcg_wasm_out_extrl_i64_i32(TCGContext *s, TCGReg rd, TCGReg rs)
+{
+    tcg_wasm_out_op_global_get_r(s, rs);
+    tcg_wasm_out_op_i64_const(s, 0xffffffff);
+    tcg_wasm_out_op_i64_and(s);
+    tcg_wasm_out_op_global_set_r(s, rd);
+}
+
+static void tcg_wasm_out_setcond_i32(TCGContext *s, TCGCond cond, TCGReg ret,
+                                     TCGReg arg1, TCGReg arg2)
+{
+    tcg_wasm_out_op_cond_i32(s, cond, arg1, arg2);
+    tcg_wasm_out_op_i64_extend_i32_u(s);
+    tcg_wasm_out_op_global_set_r(s, ret);
+}
+
+static void tcg_wasm_out_setcond_i64(TCGContext *s, TCGCond cond, TCGReg ret,
+                                     TCGReg arg1, TCGReg arg2)
+{
+    tcg_wasm_out_op_cond_i64(s, cond, arg1, arg2);
+    tcg_wasm_out_op_i64_extend_i32_u(s);
+    tcg_wasm_out_op_global_set_r(s, ret);
+}
+
+static void tcg_wasm_out_movcond_i32(TCGContext *s, TCGCond cond, TCGReg ret,
+                                     TCGReg c1, TCGReg c2, TCGReg v1, TCGReg v2)
+{
+    tcg_wasm_out_op_cond_i32(s, cond, c1, c2);
+    tcg_wasm_out_op_if_ret_i64(s);
+    tcg_wasm_out_op_global_get_r(s, v1);
+    tcg_wasm_out_op_else(s);
+    tcg_wasm_out_op_global_get_r(s, v2);
+    tcg_wasm_out_op_end(s);
+    tcg_wasm_out_op_global_set_r(s, ret);
+}
+
+static void tcg_wasm_out_movcond_i64(TCGContext *s, TCGCond cond, TCGReg ret,
+                                     TCGReg c1, TCGReg c2, TCGReg v1, TCGReg v2)
+{
+    tcg_wasm_out_op_cond_i64(s, cond, c1, c2);
+    tcg_wasm_out_op_if_ret_i64(s);
+    tcg_wasm_out_op_global_get_r(s, v1);
+    tcg_wasm_out_op_else(s);
+    tcg_wasm_out_op_global_get_r(s, v2);
+    tcg_wasm_out_op_end(s);
+    tcg_wasm_out_op_global_set_r(s, ret);
+}
+
+
+static void tcg_wasm_out_add2_i32(TCGContext *s, TCGReg retl, TCGReg reth,
+                                  TCGReg al, TCGReg ah, TCGReg bl, TCGReg bh)
+{
+    tcg_wasm_out_op_global_get_r(s, al);
+    tcg_wasm_out_op_i64_const(s, 0xffffffff);
+    tcg_wasm_out_op_i64_and(s);
+    tcg_wasm_out_op_global_get_r(s, ah);
+    tcg_wasm_out_op_i64_const(s, 32);
+    tcg_wasm_out_op_i64_shl(s);
+    tcg_wasm_out_op_i64_add(s);
+
+    tcg_wasm_out_op_global_get_r(s, bl);
+    tcg_wasm_out_op_i64_const(s, 0xffffffff);
+    tcg_wasm_out_op_i64_and(s);
+    tcg_wasm_out_op_global_get_r(s, bh);
+    tcg_wasm_out_op_i64_const(s, 32);
+    tcg_wasm_out_op_i64_shl(s);
+    tcg_wasm_out_op_i64_add(s);
+
+    tcg_wasm_out_op_i64_add(s);
+    tcg_wasm_out_op_local_set(s, TMP64_LOCAL_0_IDX);
+
+    tcg_wasm_out_op_local_get(s, TMP64_LOCAL_0_IDX);
+    tcg_wasm_out_op_i64_const(s, 32);
+    tcg_wasm_out_op_i64_shr_u(s);
+    tcg_wasm_out_op_global_set_r(s, reth);
+
+    tcg_wasm_out_op_local_get(s, TMP64_LOCAL_0_IDX);
+    tcg_wasm_out_op_i64_const(s, 0xffffffff);
+    tcg_wasm_out_op_i64_and(s);
+    tcg_wasm_out_op_global_set_r(s, retl);
+}
+
+static void tcg_wasm_out_add2_i64(TCGContext *s, TCGReg retl, TCGReg reth,
+                                  TCGReg al, TCGReg ah, TCGReg bl, TCGReg bh)
+{
+    /* add higher */
+    tcg_wasm_out_op_global_get_r(s, ah);
+    tcg_wasm_out_op_global_get_r(s, bh);
+    tcg_wasm_out_op_i64_add(s);
+
+    /* add lower */
+    tcg_wasm_out_op_global_get_r(s, al);
+    tcg_wasm_out_op_global_get_r(s, bl);
+    tcg_wasm_out_op_i64_add(s);
+
+    /* get carry */
+    if ((al == retl) && (bl == retl)) {
+        tcg_wasm_out_op_local_set(s, TMP64_LOCAL_0_IDX);
+        tcg_wasm_out_op_local_get(s, TMP64_LOCAL_0_IDX);
+        tcg_wasm_out_op_global_get_r(s, al);
+        tcg_wasm_out_op_i64_lt_u(s);
+        tcg_wasm_out_op_i64_extend_i32_s(s);
+        tcg_wasm_out_op_local_get(s, TMP64_LOCAL_0_IDX);
+        tcg_wasm_out_op_global_set_r(s, retl);
+    } else {
+        tcg_wasm_out_op_global_set_r(s, retl);
+        tcg_wasm_out_op_global_get_r(s, retl);
+        if (al == retl) {
+            tcg_wasm_out_op_global_get_r(s, bl);
+        } else {
+            tcg_wasm_out_op_global_get_r(s, al);
+        }
+        tcg_wasm_out_op_i64_lt_u(s);
+        tcg_wasm_out_op_i64_extend_i32_s(s);
+    }
+
+    /* add carry to higher */
+    tcg_wasm_out_op_i64_add(s);
+    tcg_wasm_out_op_global_set_r(s, reth);
+}
+
+static void tcg_wasm_out_sub2_i32(TCGContext *s, TCGReg retl, TCGReg reth,
+                                  TCGReg al, TCGReg ah, TCGReg bl, TCGReg bh)
+{
+    tcg_wasm_out_op_global_get_r(s, al);
+    tcg_wasm_out_op_i64_const(s, 0xffffffff);
+    tcg_wasm_out_op_i64_and(s);
+    tcg_wasm_out_op_global_get_r(s, ah);
+    tcg_wasm_out_op_i64_const(s, 32);
+    tcg_wasm_out_op_i64_shl(s);
+    tcg_wasm_out_op_i64_add(s);
+
+    tcg_wasm_out_op_global_get_r(s, bl);
+    tcg_wasm_out_op_i64_const(s, 0xffffffff);
+    tcg_wasm_out_op_i64_and(s);
+    tcg_wasm_out_op_global_get_r(s, bh);
+    tcg_wasm_out_op_i64_const(s, 32);
+    tcg_wasm_out_op_i64_shl(s);
+    tcg_wasm_out_op_i64_add(s);
+
+    tcg_wasm_out_op_i64_sub(s);
+    tcg_wasm_out_op_local_set(s, TMP64_LOCAL_0_IDX);
+
+    tcg_wasm_out_op_local_get(s, TMP64_LOCAL_0_IDX);
+    tcg_wasm_out_op_i64_const(s, 32);
+    tcg_wasm_out_op_i64_shr_u(s);
+    tcg_wasm_out_op_global_set_r(s, reth);
+
+    tcg_wasm_out_op_local_get(s, TMP64_LOCAL_0_IDX);
+    tcg_wasm_out_op_i64_const(s, 0xffffffff);
+    tcg_wasm_out_op_i64_and(s);
+    tcg_wasm_out_op_global_set_r(s, retl);
+}
+
+
+static void tcg_wasm_out_sub2_i64(TCGContext *s, TCGReg retl, TCGReg reth,
+                                  TCGReg al, TCGReg ah, TCGReg bl, TCGReg bh)
+{
+    /* sub higher */
+    tcg_wasm_out_op_global_get_r(s, ah);
+    tcg_wasm_out_op_global_get_r(s, bh);
+    tcg_wasm_out_op_i64_sub(s);
+
+    /* sub lower */
+    tcg_wasm_out_op_global_get_r(s, al);
+    tcg_wasm_out_op_global_get_r(s, bl);
+    tcg_wasm_out_op_i64_sub(s);
+
+    /* get underflow; i64.gt_u yields an i32 flag, widen it before use */
+    if (al == retl) {
+        tcg_wasm_out_op_local_set(s, TMP64_LOCAL_0_IDX);
+
+        tcg_wasm_out_op_local_get(s, TMP64_LOCAL_0_IDX);
+        tcg_wasm_out_op_global_get_r(s, al);
+        tcg_wasm_out_op_i64_gt_u(s);
+        tcg_wasm_out_op_i64_extend_i32_s(s);
+
+        tcg_wasm_out_op_local_get(s, TMP64_LOCAL_0_IDX);
+        tcg_wasm_out_op_global_set_r(s, retl);
+    } else {
+        tcg_wasm_out_op_global_set_r(s, retl);
+
+        tcg_wasm_out_op_global_get_r(s, retl);
+        tcg_wasm_out_op_global_get_r(s, al);
+        tcg_wasm_out_op_i64_gt_u(s);
+        tcg_wasm_out_op_i64_extend_i32_s(s);
+    }
+
+    tcg_wasm_out_op_i64_sub(s);
+    tcg_wasm_out_op_global_set_r(s, reth);
+}
+
+static void tcg_wasm_out_mulu2_i32(
+    TCGContext *s, TCGReg retl, TCGReg reth, TCGReg arg1, TCGReg arg2)
+{
+    tcg_wasm_out_op_global_get_r(s, arg1);
+    tcg_wasm_out_op_global_get_r(s, arg2);
+    tcg_wasm_out_op_i64_mul(s);
+    tcg_wasm_out_op_set_r_as_i64(s, retl, reth);
+}
+
+static void tcg_wasm_out_muls2_i32(
+    TCGContext *s, TCGReg retl, TCGReg reth, TCGReg arg1, TCGReg arg2)
+{
+    tcg_wasm_out_op_global_get_r(s, arg1);
+    tcg_wasm_out_op_global_get_r(s, arg2);
+    tcg_wasm_out_op_i64_mul(s);
+    tcg_wasm_out_op_set_r_as_i64(s, retl, reth);
+}
+
+static void tcg_wasm_out_ctpop_i32(TCGContext *s, TCGReg dest, TCGReg src)
+{
+    tcg_wasm_out_op_global_get_r(s, src);
+    tcg_wasm_out_op_i32_wrap_i64(s);
+    tcg_wasm_out_op_i32_popcnt(s);
+    tcg_wasm_out_op_i64_extend_i32_u(s);
+    tcg_wasm_out_op_global_set_r(s, dest);
+}
+
+static void tcg_wasm_out_ctpop_i64(TCGContext *s, TCGReg dest, TCGReg src)
+{
+    tcg_wasm_out_op_global_get_r(s, src);
+    tcg_wasm_out_op_i64_popcnt(s);
+    tcg_wasm_out_op_global_set_r(s, dest);
+}
+
+static void tcg_wasm_out_deposit_i32(
+    TCGContext *s, TCGReg dest, TCGReg arg1, TCGReg arg2, int pos, int len)
+{
+    int32_t mask = ((1 << len) - 1) << pos;
+    tcg_wasm_out_op_global_get_r(s, arg1);
+    tcg_wasm_out_op_i32_wrap_i64(s);
+    tcg_wasm_out_op_i32_const(s, ~mask);
+    tcg_wasm_out_op_i32_and(s);
+
+    tcg_wasm_out_op_global_get_r(s, arg2);
+    tcg_wasm_out_op_i32_wrap_i64(s);
+    tcg_wasm_out_op_i32_const(s, pos);
+    tcg_wasm_out_op_i32_shl(s);
+    tcg_wasm_out_op_i32_const(s, mask);
+    tcg_wasm_out_op_i32_and(s);
+
+    tcg_wasm_out_op_i32_or(s);
+
+    tcg_wasm_out_op_i64_extend_i32_u(s);
+    tcg_wasm_out_op_global_set_r(s, dest);
+}
+
+static void tcg_wasm_out_deposit_i64(
+    TCGContext *s, TCGReg dest, TCGReg arg1, TCGReg arg2, int pos, int len)
+{
+    int64_t mask = (((int64_t)1 << len) - 1) << pos;
+    tcg_wasm_out_op_global_get_r(s, arg1);
+    tcg_wasm_out_op_i64_const(s, ~mask);
+    tcg_wasm_out_op_i64_and(s);
+
+    tcg_wasm_out_op_global_get_r(s, arg2);
+    tcg_wasm_out_op_i64_const(s, pos);
+    tcg_wasm_out_op_i64_shl(s);
+    tcg_wasm_out_op_i64_const(s, mask);
+    tcg_wasm_out_op_i64_and(s);
+
+    tcg_wasm_out_op_i64_or(s);
+
+    tcg_wasm_out_op_global_set_r(s, dest);
+}
+
+static void tcg_wasm_out_extract(
+    TCGContext *s, TCGReg dest, TCGReg arg1,
+    int pos, int len, TCGType type)
+{
+    int64_t mask;
+    switch (type) {
+    case TCG_TYPE_I32:
+        mask = 0xffffffff >> (32 - len);
+        break;
+    case TCG_TYPE_I64:
+        mask = ~0ULL >> (64 - len);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+    tcg_wasm_out_op_global_get_r(s, arg1);
+    tcg_wasm_out_op_i64_const(s, pos);
+    tcg_wasm_out_op_i64_shr_u(s);
+    tcg_wasm_out_op_i64_const(s, mask);
+    tcg_wasm_out_op_i64_and(s);
+    tcg_wasm_out_op_global_set_r(s, dest);
+}
+
+static void tcg_wasm_out_sextract(
+    TCGContext *s, TCGReg dest, TCGReg arg1,
+    int pos, int len, TCGType type)
+{
+    int rs, sl;
+    switch (type) {
+    case TCG_TYPE_I32:
+        rs = 32 - len;
+        break;
+    case TCG_TYPE_I64:
+        rs = 64 - len;
+        break;
+    default:
+        g_assert_not_reached();
+    }
+    tcg_wasm_out_op_global_get_r(s, arg1);
+    sl = rs - pos;
+    if (sl > 0) {
+        tcg_wasm_out_op_i64_const(s, sl);
+        tcg_wasm_out_op_i64_shl(s);
+    }
+    tcg_wasm_out_op_i64_const(s, rs);
+    tcg_wasm_out_op_i64_shr_s(s);
+    tcg_wasm_out_op_global_set_r(s, dest);
+}
+
+static void tcg_wasm_out_bswap64(
+    TCGContext *s, TCGReg dest, TCGReg src, int flags)
+{
+    tcg_wasm_out_op_global_get_r(s, src); /* ABCDEFGH */
+    tcg_wasm_out_op_i64_const(s, 32);
+    tcg_wasm_out_op_i64_rotr(s);
+    tcg_wasm_out_op_local_set(s, TMP64_LOCAL_0_IDX); /* EFGHABCD */
+
+    tcg_wasm_out_op_local_get(s, TMP64_LOCAL_0_IDX);
+    tcg_wasm_out_op_i64_const(s, 0xff000000ff000000);
+    tcg_wasm_out_op_i64_and(s);
+    tcg_wasm_out_op_i64_const(s, 24);
+    tcg_wasm_out_op_i64_shr_u(s); /* ___E___A */
+
+    tcg_wasm_out_op_local_get(s, TMP64_LOCAL_0_IDX);
+    tcg_wasm_out_op_i64_const(s, 0x00ff000000ff0000);
+    tcg_wasm_out_op_i64_and(s);
+    tcg_wasm_out_op_i64_const(s, 8);
+    tcg_wasm_out_op_i64_shr_u(s); /* __F___B_ */
+
+    tcg_wasm_out_op_i64_or(s);
+
+    tcg_wasm_out_op_local_get(s, TMP64_LOCAL_0_IDX);
+    tcg_wasm_out_op_i64_const(s, 0x0000ff000000ff00);
+    tcg_wasm_out_op_i64_and(s);
+    tcg_wasm_out_op_i64_const(s, 8);
+    tcg_wasm_out_op_i64_shl(s); /* _G___C__ */
+
+    tcg_wasm_out_op_local_get(s, TMP64_LOCAL_0_IDX);
+    tcg_wasm_out_op_i64_const(s, 0x000000ff000000ff);
+    tcg_wasm_out_op_i64_and(s);
+    tcg_wasm_out_op_i64_const(s, 24);
+    tcg_wasm_out_op_i64_shl(s); /* H___D___ */
+
+    tcg_wasm_out_op_i64_or(s);
+
+    tcg_wasm_out_op_i64_or(s); /* HGFEDCBA */
+    tcg_wasm_out_op_global_set_r(s, dest);
+}
+
+static void tcg_wasm_out_bswap32(
+    TCGContext *s, TCGReg dest, TCGReg src, int flags)
+{
+    tcg_wasm_out_op_global_get_r(s, src);
+    tcg_wasm_out_op_i32_wrap_i64(s);
+    tcg_wasm_out_op_local_set(s, TMP32_LOCAL_0_IDX);
+
+    tcg_wasm_out_op_local_get(s, TMP32_LOCAL_0_IDX); /* ABCD */
+    tcg_wasm_out_op_i32_const(s, 16);
+    tcg_wasm_out_op_i32_rotr(s);
+    tcg_wasm_out_op_local_set(s, TMP32_LOCAL_0_IDX); /* CDAB */
+
+    tcg_wasm_out_op_local_get(s, TMP32_LOCAL_0_IDX);
+    tcg_wasm_out_op_i32_const(s, 0xff00ff00);
+    tcg_wasm_out_op_i32_and(s);
+    tcg_wasm_out_op_i32_const(s, 8);
+    tcg_wasm_out_op_i32_shr_u(s); /* _C_A */
+
+    tcg_wasm_out_op_local_get(s, TMP32_LOCAL_0_IDX);
+    tcg_wasm_out_op_i32_const(s, 0x00ff00ff);
+    tcg_wasm_out_op_i32_and(s);
+    tcg_wasm_out_op_i32_const(s, 8);
+    tcg_wasm_out_op_i32_shl(s); /* D_B_ */
+
+    tcg_wasm_out_op_i32_or(s); /* DCBA */
+    tcg_wasm_out_op_i64_extend_i32_u(s);
+    tcg_wasm_out_op_global_set_r(s, dest);
+}
+
+static void tcg_wasm_out_bswap16(
+    TCGContext *s, TCGReg dest, TCGReg src, int flags)
+{
+    tcg_wasm_out_op_global_get_r(s, src);
+    tcg_wasm_out_op_i32_wrap_i64(s);
+    tcg_wasm_out_op_local_set(s, TMP32_LOCAL_0_IDX);
+
+    tcg_wasm_out_op_local_get(s, TMP32_LOCAL_0_IDX); /* __AB */
+    tcg_wasm_out_op_i32_const(s, 8);
+    tcg_wasm_out_op_i32_rotr(s);
+    tcg_wasm_out_op_local_set(s, TMP32_LOCAL_0_IDX); /* B__A */
+
+    tcg_wasm_out_op_local_get(s, TMP32_LOCAL_0_IDX);
+    tcg_wasm_out_op_i32_const(s, 0x000000ff);
+    tcg_wasm_out_op_i32_and(s); /* ___A */
+
+    tcg_wasm_out_op_local_get(s, TMP32_LOCAL_0_IDX);
+    tcg_wasm_out_op_i32_const(s, 0xff000000);
+    tcg_wasm_out_op_i32_and(s);
+    tcg_wasm_out_op_i32_const(s, 16);
+    if (flags & TCG_BSWAP_OS) {
+        tcg_wasm_out_op_i32_shr_s(s); /* SSB_ */
+    } else {
+        tcg_wasm_out_op_i32_shr_u(s); /* 00B_ */
+    }
+
+    tcg_wasm_out_op_i32_or(s); /* **BA */
+    tcg_wasm_out_op_i64_extend_i32_u(s);
+    tcg_wasm_out_op_global_set_r(s, dest);
+}
+
+static void tcg_wasm_out_ctx_i32_store_const(TCGContext *s, int off, int32_t v)
+{
+    tcg_wasm_out_op_local_get(s, CTX_IDX);
+    tcg_wasm_out_op_i32_const(s, v);
+    tcg_wasm_out_op_i32_store(s, 0, off);
+}
+
+static void tcg_wasm_out_ctx_i32_store_r(TCGContext *s, int off, TCGReg r0)
+{
+    tcg_wasm_out_op_local_get(s, CTX_IDX);
+    tcg_wasm_out_op_global_get_r(s, r0);
+    tcg_wasm_out_op_i32_wrap_i64(s);
+    tcg_wasm_out_op_i32_store(s, 0, off);
+}
+
+static void tcg_wasm_out_ctx_i32_load(TCGContext *s, int off)
+{
+    tcg_wasm_out_op_local_get(s, CTX_IDX);
+    tcg_wasm_out_op_i32_load(s, 0, off);
+}
+
+typedef struct LabelInfo {
+    struct LabelInfo *next;
+    int label;
+    int block;
+} LabelInfo;
+
+__thread LabelInfo *label_info;
+
+static void init_label_info(void)
+{
+    label_info = NULL;
+}
+
+static void add_label(int label, int block)
+{
+    LabelInfo *e = tcg_malloc(sizeof(LabelInfo));
+    e->label = label;
+    e->block = block;
+    e->next = NULL;
+    if (label_info == NULL) {
+        label_info = e;
+        return;
+    }
+    LabelInfo *last = label_info;
+    for (LabelInfo *p = last; p; p = p->next) {
+        last = p;
+    }
+    last->next = e;
+}
+
+typedef struct BlockPlaceholder {
+    struct BlockPlaceholder *next;
+    int label;
+    int pos;
+} BlockPlaceholder;
+
+__thread BlockPlaceholder *block_placeholder;
+
+__thread int block_idx;
+
+static void tcg_wasm_out_new_block(TCGContext *s)
+{
+    tcg_wasm_out_op_end(s); /* close this block */
+
+    /* next block */
+    tcg_wasm_out_op_global_get(s, BLOCK_PTR_IDX);
+    tcg_wasm_out_op_i64_const(s, ++block_idx);
+    tcg_wasm_out_op_i64_le_u(s);
+    tcg_wasm_out_op_if_noret(s);
+}
+
+static void init_blocks(void)
+{
+    block_placeholder = NULL;
+    block_idx = 0;
+}
+
+static void add_block_placeholder(int label, int pos)
+{
+    BlockPlaceholder *e = tcg_malloc(sizeof(BlockPlaceholder));
+    e->label = label;
+    e->pos = pos;
+    e->next = NULL;
+    if (block_placeholder == NULL) {
+        block_placeholder = e;
+        return;
+    }
+    BlockPlaceholder *last = block_placeholder;
+    for (BlockPlaceholder *p = last; p; p = p->next) {
+        last = p;
+    }
+    last->next = e;
+}
+
+static int get_block_of_label(int label)
+{
+    for (LabelInfo *p = label_info; p; p = p->next) {
+        if (p->label == label) {
+            return p->block;
+        }
+    }
+    return -1;
+}
+
+static void tcg_wasm_out_label_idx(TCGContext *s, int label)
+{
+    int blk = ++block_idx;
+    add_label(label, blk);
+    tcg_wasm_out_new_block(s);
+}
+
+static void tcg_out_label_cb(TCGContext *s, TCGLabel *l)
+{
+    tcg_wasm_out_label_idx(s, l->id);
+}
+
+static void tcg_wasm_out_op_br_to_label(TCGContext *s, TCGLabel *l, bool br_if)
+{
+    int toploop_depth = 1;
+    if (br_if) {
+        tcg_wasm_out_op_if_noret(s);
+        toploop_depth++;
+    }
+    tcg_wasm_out8(s, 0x42); /* i64.const */
+
+    add_block_placeholder(l->id, sub_buf_len());
+
+    tcg_wasm_out8(s, 0x80); /* filled before instantiation */
+    tcg_wasm_out8(s, 0x80);
+    tcg_wasm_out8(s, 0x80);
+    tcg_wasm_out8(s, 0x80);
+    tcg_wasm_out8(s, 0x00);
+    tcg_wasm_out_op_global_set(s, BLOCK_PTR_IDX);
+    if (get_block_of_label(l->id) != -1) {
+        /* br to the top of loop */
+        tcg_wasm_out_op_br(s, toploop_depth);
+    } else {
+        /* br to the end of the current block */
+        tcg_wasm_out_op_br(s, toploop_depth - 1);
+    }
+    if (br_if) {
+        tcg_wasm_out_op_end(s);
+    }
+}
+
+static void tcg_wasm_out_br(TCGContext *s, TCGLabel *l)
+{
+    tcg_wasm_out_op_br_to_label(s, l, false);
+}
+
+static void tcg_wasm_out_brcond_i32(TCGContext *s, TCGCond cond, TCGReg arg1,
+                                    TCGReg arg2, TCGLabel *l)
+{
+    tcg_wasm_out_op_cond_i32(s, cond, arg1, arg2);
+    tcg_wasm_out_op_br_to_label(s, l, true);
+}
+
+static void tcg_wasm_out_brcond_i64(TCGContext *s, TCGCond cond, TCGReg arg1,
+                                    TCGReg arg2, TCGLabel *l)
+{
+    tcg_wasm_out_op_cond_i64(s, cond, arg1, arg2);
+    tcg_wasm_out_op_br_to_label(s, l, true);
+}
+
+static void tcg_wasm_out_exit_tb(TCGContext *s, uintptr_t arg)
+{
+    tcg_wasm_out_ctx_i32_store_const(s, TB_PTR_OFF, 0);
+    tcg_wasm_out_op_i32_const(s, (int32_t)arg);
+    tcg_wasm_out_op_return(s);
+}
+
+static void tcg_wasm_out_goto_ptr(TCGContext *s, TCGReg arg)
+{
+    tcg_wasm_out_op_global_get_r(s, arg);
+    tcg_wasm_out_op_i32_wrap_i64(s);
+    tcg_wasm_out_ctx_i32_load(s, TB_PTR_OFF);
+    tcg_wasm_out_op_i32_eq(s);
+    tcg_wasm_out_op_if_noret(s);
+    tcg_wasm_out_op_i64_const(s, 0);
+    tcg_wasm_out_op_global_set(s, BLOCK_PTR_IDX);
+    tcg_wasm_out_op_br(s, 2); /* br to the top of loop */
+    tcg_wasm_out_op_end(s);
+
+    tcg_wasm_out_ctx_i32_store_r(s, TB_PTR_OFF, arg);
+    tcg_wasm_out_ctx_i32_store_const(s, DO_INIT_OFF, 1);
+    tcg_wasm_out_op_i32_const(s, 0);
+    tcg_wasm_out_op_return(s);
+}
+
+static void tcg_wasm_out_goto_tb(
+    TCGContext *s, int which, uint32_t cur_reset_ptr)
+{
+    tcg_wasm_out_op_i32_const(s, (int32_t)get_jmp_target_addr(s, which));
+    tcg_wasm_out_op_i32_load(s, 0, 0);
+    tcg_wasm_out_op_local_set(s, TMP32_LOCAL_0_IDX);
+
+    tcg_wasm_out_op_local_get(s, TMP32_LOCAL_0_IDX);
+    tcg_wasm_out_op_i32_const(s, cur_reset_ptr);
+    tcg_wasm_out_op_i32_ne(s);
+    tcg_wasm_out_op_if_noret(s);
+
+    tcg_wasm_out_op_local_get(s, TMP32_LOCAL_0_IDX);
+    tcg_wasm_out_ctx_i32_load(s, TB_PTR_OFF);
+    tcg_wasm_out_op_i32_eq(s);
+    tcg_wasm_out_op_if_noret(s);
+    tcg_wasm_out_op_i64_const(s, 0);
+    tcg_wasm_out_op_global_set(s, BLOCK_PTR_IDX);
+    tcg_wasm_out_op_br(s, 3); /* br to the top of loop */
+    tcg_wasm_out_op_end(s);
+
+    tcg_wasm_out_op_local_get(s, CTX_IDX);
+    tcg_wasm_out_op_local_get(s, TMP32_LOCAL_0_IDX);
+    tcg_wasm_out_op_i32_store(s, 0, TB_PTR_OFF);
+    tcg_wasm_out_ctx_i32_store_const(s, DO_INIT_OFF, 1);
+
+    tcg_wasm_out_op_i32_const(s, 0);
+    tcg_wasm_out_op_return(s);
+    tcg_wasm_out_op_end(s);
+}
+
+static void push_arg_i64(TCGContext *s, int *reg_idx, int *stack_offset)
+{
+    if (*reg_idx < NUM_OF_IARG_REGS) {
+        tcg_wasm_out_op_global_get_r(s, REG_INDEX_IARG_BASE + *reg_idx);
+        int addend = 1;
+        *reg_idx = *reg_idx + addend;
+    } else {
+        tcg_wasm_out_op_global_get_r(s, TCG_REG_CALL_STACK);
+        tcg_wasm_out_op_i32_wrap_i64(s);
+        tcg_wasm_out_op_i64_load(s, 0, *stack_offset);
+        int addend = 8;
+        *stack_offset = *stack_offset + addend;
+    }
+}
+
+static void gen_func_wrapper_code(
+    TCGContext *s, const tcg_insn_unit *func,
+    const TCGHelperInfo *info, int func_idx)
+{
+    int nargs;
+    unsigned typemask = info->typemask;
+    int rettype = typemask & 7;
+    int stack_offset = 0;
+    int reg_idx = 0;
+    int stack128_base = 0;
+    bool cached_128base = false;
+
+    if (rettype == dh_typecode_i128) {
+        /* receive 128bit return value via the stack buffer */
+        tcg_wasm_out_op_global_get_r(s, TCG_REG_CALL_STACK);
+        tcg_wasm_out_op_i32_wrap_i64(s);
+    }
+
+    nargs = 32 - clz32(typemask >> 3);
+    nargs = DIV_ROUND_UP(nargs, 3);
+    for (int j = 0; j < nargs; ++j) {
+        int typecode = extract32(typemask, (j + 1) * 3, 3);
+        if (typecode == dh_typecode_void) {
+            continue;
+        }
+        switch (typecode) {
+        case dh_typecode_i32:
+        case dh_typecode_s32:
+        case dh_typecode_ptr:
+            push_arg_i64(s, &reg_idx, &stack_offset);
+            tcg_wasm_out_op_i32_wrap_i64(s);
+            break;
+        case dh_typecode_i64:
+        case dh_typecode_s64:
+            push_arg_i64(s, &reg_idx, &stack_offset);
+            break;
+        case dh_typecode_i128:
+            /* copy data to 128stack */
+            if (!cached_128base) {
+                tcg_wasm_out_ctx_i32_load(s, STACK128_OFF);
+                tcg_wasm_out_op_local_set(s, TMP32_LOCAL_0_IDX);
+                cached_128base = true;
+            }
+
+            /* push current stack128 pointer */
+            tcg_wasm_out_op_local_get(s, TMP32_LOCAL_0_IDX);
+            tcg_wasm_out_op_i32_const(s, stack128_base);
+            tcg_wasm_out_op_i32_add(s);
+
+            /* write the argument to the buffer */
+            tcg_wasm_out_op_local_get(s, TMP32_LOCAL_0_IDX);
+            push_arg_i64(s, &reg_idx, &stack_offset);
+            tcg_wasm_out_op_i64_store(s, 0, stack128_base);
+            stack128_base += 8;
+
+            tcg_wasm_out_op_local_get(s, TMP32_LOCAL_0_IDX);
+            push_arg_i64(s, &reg_idx, &stack_offset);
+            tcg_wasm_out_op_i64_store(s, 0, stack128_base);
+            stack128_base += 8;
+            break;
+        default:
+            g_assert_not_reached();
+        }
+    }
+
+    tcg_wasm_out_op_call(s, func_idx);
+
+    stack_offset = 0;
+    if (rettype != dh_typecode_void) {
+        switch (rettype) {
+        case dh_typecode_i32:
+        case dh_typecode_s32:
+        case dh_typecode_ptr:
+            tcg_wasm_out_op_i64_extend_i32_s(s);
+            tcg_wasm_out_op_global_set_r(s, TCG_REG_R0);
+            break;
+        case dh_typecode_i64:
+        case dh_typecode_s64:
+            tcg_wasm_out_op_global_set_r(s, TCG_REG_R0);
+            break;
+        case dh_typecode_i128:
+            tcg_wasm_out_op_global_get_r(s, TCG_REG_CALL_STACK);
+            tcg_wasm_out_op_i32_wrap_i64(s);
+            tcg_wasm_out_op_i64_load(s, 0, stack_offset);
+            tcg_wasm_out_op_global_set_r(s, TCG_REG_R0);
+            stack_offset += 8;
+
+            tcg_wasm_out_op_global_get_r(s, TCG_REG_CALL_STACK);
+            tcg_wasm_out_op_i32_wrap_i64(s);
+            tcg_wasm_out_op_i64_load(s, 0, stack_offset);
+            tcg_wasm_out_op_global_set_r(s, TCG_REG_R1);
+            stack_offset += 8;
+            break;
+        default:
+            g_assert_not_reached();
+        }
+    }
+}
+
+__thread LinkedBuf *types_buf_root;
+__thread LinkedBuf *types_buf_cur;
+
+static void init_types_buf(void)
+{
+    types_buf_root = new_linked_buf();
+    types_buf_cur = types_buf_root;
+}
+
+static inline void types_buf_out8(uint8_t v)
+{
+    types_buf_cur = linked_buf_out8(types_buf_cur, v);
+}
+
+static inline int types_buf_len(void)
+{
+    return linked_buf_len(types_buf_root);
+}
+
+static void types_out_leb128_uint32(uint32_t v)
+{
+    uint32_t low7 = 0x7f;
+    uint8_t b;
+    do {
+        b = v & low7;
+        v >>= 7;
+        if (v != 0) {
+            b |= 0x80;
+        }
+        types_buf_out8(b);
+    } while (v != 0);
+}
+
+static void gen_func_type(TCGContext *s, const TCGHelperInfo *info)
+{
+    int nargs;
+    unsigned typemask = info->typemask;
+    int rettype = typemask & 7;
+    int vec_size = 0;
+
+    nargs = 32 - clz32(typemask >> 3);
+    nargs = DIV_ROUND_UP(nargs, 3);
+
+    if (rettype == dh_typecode_i128) {
+        vec_size++;
+    }
+    for (int j = 0; j < nargs; ++j) {
+        int typecode = extract32(typemask, (j + 1) * 3, 3);
+        if (typecode != dh_typecode_void) {
+            vec_size++;
+        }
+    }
+
+    types_buf_out8(0x60);
+    types_out_leb128_uint32(vec_size);
+
+    if (rettype == dh_typecode_i128) {
+        types_buf_out8(0x7f);
+    }
+
+    for (int j = 0; j < nargs; ++j) {
+        int typecode = extract32(typemask, (j + 1) * 3, 3);
+        if (typecode == dh_typecode_void) {
+            continue;
+        }
+        switch (typecode) {
+        case dh_typecode_i32:
+        case dh_typecode_s32:
+        case dh_typecode_ptr:
+            types_buf_out8(0x7f);
+            break;
+        case dh_typecode_i64:
+        case dh_typecode_s64:
+            types_buf_out8(0x7e);
+            break;
+        case dh_typecode_i128:
+            types_buf_out8(0x7f);
+            break;
+        default:
+            g_assert_not_reached();
+        }
+    }
+
+    if ((rettype == dh_typecode_void) || (rettype == dh_typecode_i128)) {
+        types_buf_out8(0x0);
+    } else {
+        types_buf_out8(0x1);
+        switch (rettype) {
+        case dh_typecode_i32:
+        case dh_typecode_s32:
+        case dh_typecode_ptr:
+            types_buf_out8(0x7f);
+            break;
+        case dh_typecode_i64:
+        case dh_typecode_s64:
+            types_buf_out8(0x7e);
+            break;
+        default:
+            g_assert_not_reached();
+        }
+    }
+}
+
+static void gen_func_type_qemu_ld(TCGContext *s, uint32_t oi)
+{
+    types_buf_out8(0x60);
+    types_buf_out8(0x4);
+    types_buf_out8(0x7f);
+    types_buf_out8(0x7e);
+    types_buf_out8(0x7f);
+    types_buf_out8(0x7f);
+    types_buf_out8(0x1);
+    types_buf_out8(0x7e);
+}
+
+static void gen_func_type_qemu_st(TCGContext *s, uint32_t oi)
+{
+    MemOp mop = get_memop(oi);
+
+    types_buf_out8(0x60);
+    types_buf_out8(0x5);
+    types_buf_out8(0x7f);
+    types_buf_out8(0x7e);
+    switch (mop & MO_SSIZE) {
+    case MO_UQ:
+        types_buf_out8(0x7e);
+        break;
+    default:
+        types_buf_out8(0x7f);
+        break;
+    }
+    types_buf_out8(0x7f);
+    types_buf_out8(0x7f);
+    types_buf_out8(0x0);
+}
+
+typedef struct HelperInfo {
+    struct HelperInfo *next;
+    uint32_t idx_on_qemu;
+} HelperInfo;
+
+__thread HelperInfo *helpers;
+
+static void init_helpers(void)
+{
+    helpers = NULL;
+}
+
+static int register_helper(TCGContext *s, int helper_idx_on_qemu)
+{
+    int idx = HELPER_IDX_START;
+
+    tcg_debug_assert(helper_idx_on_qemu >= 0);
+
+    HelperInfo *e = tcg_malloc(sizeof(HelperInfo));
+    e->idx_on_qemu = helper_idx_on_qemu;
+    e->next = NULL;
+    if (helpers == NULL) {
+        helpers = e;
+        return idx;
+    }
+    HelperInfo *last = helpers;
+    for (HelperInfo *p = last; p; p = p->next) {
+        last = p;
+        idx++;
+    }
+    last->next = e;
+    return idx;
+}
+
+static int helpers_len(void)
+{
+    int n = 0;
+    for (HelperInfo *p = helpers; p; p = p->next) {
+        n++;
+    }
+    return n;
+}
+
+static inline int helpers_copy(uint32_t *dst)
+{
+    void *start = dst;
+    for (HelperInfo *p = helpers; p; p = p->next) {
+        *dst++ = p->idx_on_qemu;
+    }
+    return (int)dst - (int)start;
+}
+
+static int get_helper_idx(TCGContext *s, int helper_idx_on_qemu)
+{
+    int idx = HELPER_IDX_START;
+
+    for (HelperInfo *p = helpers; p; p = p->next) {
+        if (p->idx_on_qemu == helper_idx_on_qemu) {
+            return idx;
+        }
+        idx++;
+    }
+    return -1;
+}
+
+static void tcg_wasm_out_handle_unwinding(TCGContext *s)
+{
+    tcg_wasm_out_op_call(s, CHECK_UNWINDING_IDX);
+    tcg_wasm_out_op_i32_eqz(s);
+    tcg_wasm_out_op_if_noret(s);
+    tcg_wasm_out_op_i32_const(s, 0);
+    /* returns if unwinding */
+    tcg_wasm_out_op_return(s);
+    tcg_wasm_out_op_end(s);
+}
+
+static void tcg_wasm_out_call(TCGContext *s, const tcg_insn_unit *func,
+                              const TCGHelperInfo *info)
+{
+    int func_idx = get_helper_idx(s, (int)func);
+    if (func_idx < 0) {
+        func_idx = register_helper(s, (int)func);
+        gen_func_type(s, info);
+    }
+
+    tcg_wasm_out_ctx_i32_load(s, HELPER_RET_TB_PTR_OFF);
+    tcg_wasm_out_op_i32_const(s, (int32_t)s->code_ptr);
+    tcg_wasm_out_op_i32_store(s, 0, 0);
+
+    /*
+     * update the block index so that the possible rewinding will
+     * skip this block
+     */
+    tcg_wasm_out_op_i64_const(s, block_idx + 1);
+    tcg_wasm_out_op_global_set(s, BLOCK_PTR_IDX);
+
+    tcg_wasm_out_new_block(s);
+
+    gen_func_wrapper_code(s, func, info, func_idx);
+    tcg_wasm_out_handle_unwinding(s);
+}
+
+void tb_target_set_jmp_target(const TranslationBlock *tb, int n,
+                              uintptr_t jmp_rx, uintptr_t jmp_rw)
+{
+    /* Always indirect, nothing to do */
+}
+
+static void tcg_wasm_out_i32_load_s(TCGContext *s, int off)
+{
+    if (off < 0) {
+        tcg_wasm_out_op_i32_const(s, off);
+        tcg_wasm_out_op_i32_add(s);
+        off = 0;
+    }
+    tcg_wasm_out_op_i32_load(s, 0, off);
+}
+
+static void tcg_wasm_out_i64_load_s(TCGContext *s, int off)
+{
+    if (off < 0) {
+        tcg_wasm_out_op_i32_const(s, off);
+        tcg_wasm_out_op_i32_add(s);
+        off = 0;
+    }
+    tcg_wasm_out_op_i64_load(s, 0, off);
+}
+
+#define MIN_TLB_MASK_TABLE_OFS INT_MIN
+
+static uint8_t tcg_wasm_out_tlb_load(
+    TCGContext *s, TCGReg addr, MemOpIdx oi, bool is_ld)
+{
+    MemOp opc = get_memop(oi);
+    TCGAtomAlign aa;
+    unsigned a_mask;
+    unsigned s_bits = opc & MO_SIZE;
+    unsigned s_mask = (1u << s_bits) - 1;
+    int mem_index = get_mmuidx(oi);
+    int fast_ofs = tlb_mask_table_ofs(s, mem_index);
+    int mask_ofs = fast_ofs + offsetof(CPUTLBDescFast, mask);
+    int table_ofs = fast_ofs + offsetof(CPUTLBDescFast, table);
+    int add_off = offsetof(CPUTLBEntry, addend);
+    tcg_target_long compare_mask;
+
+    aa = atom_and_align_for_opc(s, opc, MO_ATOM_IFALIGN, false);
+    a_mask = (1u << aa.align) - 1;
+
+    tcg_wasm_out_op_global_get_r(s, addr);
+    tcg_wasm_out_op_i64_const(s, s->page_bits - CPU_TLB_ENTRY_BITS);
+    tcg_wasm_out_op_i64_shr_u(s);
+    tcg_wasm_out_op_i32_wrap_i64(s);
+
+    tcg_wasm_out_op_global_get_r_i32(s, TCG_AREG0);
+    tcg_wasm_out_i32_load_s(s, mask_ofs);
+
+    tcg_wasm_out_op_i32_and(s);
+
+    tcg_wasm_out_op_global_get_r_i32(s, TCG_AREG0);
+    tcg_wasm_out_i32_load_s(s, table_ofs);
+    tcg_wasm_out_op_i32_add(s);
+
+    tcg_wasm_out_op_local_tee(s, TMP32_LOCAL_0_IDX);
+    tcg_wasm_out_i64_load_s(
+        s, is_ld ? offsetof(CPUTLBEntry, addr_read)
+                 : offsetof(CPUTLBEntry, addr_write));
+
+    tcg_wasm_out_op_global_get_r(s, addr);
+    if (a_mask < s_mask) {
+        tcg_wasm_out_op_i64_const(s, s_mask - a_mask);
+        tcg_wasm_out_op_i64_add(s);
+    }
+    compare_mask = (uint64_t)s->page_mask | a_mask;
+    tcg_wasm_out_op_i64_const(s, compare_mask);
+    tcg_wasm_out_op_i64_and(s);
+
+    tcg_wasm_out_op_i64_eq(s);
+
+    tcg_wasm_out_op_i32_const(s, 0);
+    tcg_wasm_out_op_local_set(s, TMP32_LOCAL_1_IDX);
+
+    tcg_wasm_out_op_if_noret(s);
+    tcg_wasm_out_op_local_get(s, TMP32_LOCAL_0_IDX);
+    tcg_wasm_out_i32_load_s(s, add_off);
+    tcg_wasm_out_op_global_get_r(s, addr);
+    tcg_wasm_out_op_i32_wrap_i64(s);
+    tcg_wasm_out_op_i32_add(s);
+    tcg_wasm_out_op_local_set(s, TMP32_LOCAL_1_IDX);
+
+    tcg_wasm_out_op_end(s);
+
+    return TMP32_LOCAL_1_IDX;
+}
+
+static void tcg_wasm_out_qemu_ld_direct(
+    TCGContext *s, TCGReg r, uint8_t base, MemOp opc)
+{
+    switch (opc & MO_SSIZE) {
+    case MO_UB:
+        tcg_wasm_out_op_local_get(s, base);
+        tcg_wasm_out_op_i64_load8_u(s, 0, 0);
+        tcg_wasm_out_op_global_set_r(s, r);
+        break;
+    case MO_SB:
+        tcg_wasm_out_op_local_get(s, base);
+        tcg_wasm_out_op_i64_load8_s(s, 0, 0);
+        tcg_wasm_out_op_global_set_r(s, r);
+        break;
+    case MO_UW:
+        tcg_wasm_out_op_local_get(s, base);
+        tcg_wasm_out_op_i64_load16_u(s, 0, 0);
+        tcg_wasm_out_op_global_set_r(s, r);
+        break;
+    case MO_SW:
+        tcg_wasm_out_op_local_get(s, base);
+        tcg_wasm_out_op_i64_load16_s(s, 0, 0);
+        tcg_wasm_out_op_global_set_r(s, r);
+        break;
+    case MO_UL:
+        tcg_wasm_out_op_local_get(s, base);
+        tcg_wasm_out_op_i64_load32_u(s, 0, 0);
+        tcg_wasm_out_op_global_set_r(s, r);
+        break;
+    case MO_SL:
+        tcg_wasm_out_op_local_get(s, base);
+        tcg_wasm_out_op_i64_load32_s(s, 0, 0);
+        tcg_wasm_out_op_global_set_r(s, r);
+        break;
+    case MO_UQ:
+        tcg_wasm_out_op_local_get(s, base);
+        tcg_wasm_out_op_i64_load(s, 0, 0);
+        tcg_wasm_out_op_global_set_r(s, r);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
+static void *qemu_ld_helper_ptr(uint32_t oi)
+{
+    MemOp mop = get_memop(oi);
+    switch (mop & MO_SSIZE) {
+    case MO_UB:
+        return helper_ldub_mmu;
+    case MO_SB:
+        return helper_ldsb_mmu;
+    case MO_UW:
+        return helper_lduw_mmu;
+    case MO_SW:
+        return helper_ldsw_mmu;
+    case MO_UL:
+        return helper_ldul_mmu;
+    case MO_SL:
+        return helper_ldsl_mmu;
+    case MO_UQ:
+        return helper_ldq_mmu;
+    default:
+        g_assert_not_reached();
+    }
+}
+
+static void tcg_wasm_out_qemu_ld(TCGContext *s, const TCGArg *args, bool addr64)
+{
+    TCGReg addr_reg;
+    TCGReg data_reg;
+    MemOpIdx oi;
+    MemOp mop;
+    uint8_t base;
+    int helper_idx;
+    int func_idx;
+
+    data_reg = *args++;
+    addr_reg = *args++;
+    oi = *args++;
+    mop = get_memop(oi);
+
+    helper_idx = (uint32_t)qemu_ld_helper_ptr(oi);
+    func_idx = get_helper_idx(s, helper_idx);
+    if (func_idx < 0) {
+        func_idx = register_helper(s, helper_idx);
+        gen_func_type_qemu_ld(s, oi);
+    }
+
+    if (!addr64) {
+        tcg_wasm_out_ext32u(s, TCG_REG_TMP, addr_reg);
+        addr_reg = TCG_REG_TMP;
+    }
+
+    base = tcg_wasm_out_tlb_load(s, addr_reg, oi, true);
+
+    tcg_wasm_out_op_local_get(s, base);
+    tcg_wasm_out_op_i32_const(s, 0);
+    tcg_wasm_out_op_i32_ne(s);
+    tcg_wasm_out_op_if_noret(s);
+
+    /* fast path */
+    tcg_wasm_out_qemu_ld_direct(s, data_reg, base, mop);
+
+    tcg_wasm_out_op_end(s);
+
+    /*
+     * update the block index so that the possible rewinding will
+     * skip this block
+     */
+    tcg_wasm_out_op_i64_const(s, block_idx + 1);
+    tcg_wasm_out_op_global_set(s, BLOCK_PTR_IDX);
+
+    tcg_wasm_out_new_block(s);
+
+    tcg_wasm_out_op_local_get(s, base);
+    tcg_wasm_out_op_i32_eqz(s);
+    tcg_wasm_out_op_if_noret(s);
+
+    /* call helper */
+    tcg_wasm_out_op_global_get_r(s, TCG_AREG0);
+    tcg_wasm_out_op_i32_wrap_i64(s);
+    tcg_wasm_out_op_global_get_r(s, addr_reg);
+    tcg_wasm_out_op_i32_const(s, oi);
+    tcg_wasm_out_op_i32_const(s, (int32_t)s->code_ptr);
+
+    tcg_wasm_out_op_call(s, func_idx);
+    tcg_wasm_out_op_global_set_r(s, data_reg);
+    tcg_wasm_out_handle_unwinding(s);
+
+    tcg_wasm_out_op_end(s);
+}
+
+static void tcg_wasm_out_qemu_st_direct(
+    TCGContext *s, TCGReg lo, uint8_t base, MemOp opc)
+{
+    switch (opc & MO_SSIZE) {
+    case MO_8:
+        tcg_wasm_out_op_local_get(s, base);
+        tcg_wasm_out_op_global_get_r(s, lo);
+        tcg_wasm_out_op_i64_store8(s, 0, 0);
+        break;
+    case MO_16:
+        tcg_wasm_out_op_local_get(s, base);
+        tcg_wasm_out_op_global_get_r(s, lo);
+        tcg_wasm_out_op_i64_store16(s, 0, 0);
+        break;
+    case MO_32:
+        tcg_wasm_out_op_local_get(s, base);
+        tcg_wasm_out_op_global_get_r(s, lo);
+        tcg_wasm_out_op_i64_store32(s, 0, 0);
+        break;
+    case MO_64:
+        tcg_wasm_out_op_local_get(s, base);
+        tcg_wasm_out_op_global_get_r(s, lo);
+        tcg_wasm_out_op_i64_store(s, 0, 0);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
+static void *qemu_st_helper_ptr(uint32_t oi)
+{
+    MemOp mop = get_memop(oi);
+    switch (mop & MO_SIZE) {
+    case MO_8:
+        return helper_stb_mmu;
+    case MO_16:
+        return helper_stw_mmu;
+    case MO_32:
+        return helper_stl_mmu;
+    case MO_64:
+        return helper_stq_mmu;
+    case MO_128:
+        return helper_st16_mmu;
+    default:
+        g_assert_not_reached();
+    }
+}
+
+static void tcg_wasm_out_qemu_st(TCGContext *s, const TCGArg *args, bool addr64)
+{
+    TCGReg addr_reg;
+    TCGReg data_reg;
+    MemOpIdx oi;
+    MemOp mop;
+    uint8_t base;
+    int helper_idx;
+    int func_idx;
+
+    data_reg = *args++;
+    addr_reg = *args++;
+    oi = *args++;
+    mop = get_memop(oi);
+
+    helper_idx = (uint32_t)qemu_st_helper_ptr(oi);
+    func_idx = get_helper_idx(s, helper_idx);
+    if (func_idx < 0) {
+        func_idx = register_helper(s, helper_idx);
+        gen_func_type_qemu_st(s, oi);
+    }
+
+    if (!addr64) {
+        tcg_wasm_out_ext32u(s, TCG_REG_TMP, addr_reg);
+        addr_reg = TCG_REG_TMP;
+    }
+
+    base = tcg_wasm_out_tlb_load(s, addr_reg, oi, false);
+
+    tcg_wasm_out_op_local_get(s, base);
+    tcg_wasm_out_op_i32_const(s, 0);
+    tcg_wasm_out_op_i32_ne(s);
+    tcg_wasm_out_op_if_noret(s);
+
+    /* fast path */
+    tcg_wasm_out_qemu_st_direct(s, data_reg, base, mop);
+
+    tcg_wasm_out_op_end(s);
+
+    /*
+     * update the block index so that the possible rewinding will
+     * skip this block
+     */
+    tcg_wasm_out_op_i64_const(s, block_idx + 1);
+    tcg_wasm_out_op_global_set(s, BLOCK_PTR_IDX);
+
+    tcg_wasm_out_new_block(s);
+
+    tcg_wasm_out_op_local_get(s, base);
+    tcg_wasm_out_op_i32_eqz(s);
+    tcg_wasm_out_op_if_noret(s);
+
+    /* call helper */
+    tcg_wasm_out_op_global_get_r(s, TCG_AREG0);
+    tcg_wasm_out_op_i32_wrap_i64(s);
+    tcg_wasm_out_op_global_get_r(s, addr_reg);
+    switch (mop & MO_SSIZE) {
+    case MO_UQ:
+        tcg_wasm_out_op_global_get_r(s, data_reg);
+        break;
+    default:
+        tcg_wasm_out_op_global_get_r(s, data_reg);
+        tcg_wasm_out_op_i32_wrap_i64(s);
+        break;
+    }
+    tcg_wasm_out_op_i32_const(s, oi);
+    tcg_wasm_out_op_i32_const(s, (int32_t)s->code_ptr);
+
+    tcg_wasm_out_op_call(s, func_idx);
+    tcg_wasm_out_handle_unwinding(s);
+
+    tcg_wasm_out_op_end(s);
+}
+
+static bool patch_reloc(tcg_insn_unit *code_ptr_i, int type,
+                        intptr_t value, intptr_t addend)
+{
+    int32_t *code_ptr = (int32_t *)code_ptr_i;
+    intptr_t diff = value - (intptr_t)(code_ptr + 1);
+
+    tcg_debug_assert(addend == 0);
+    tcg_debug_assert(type == 20);
+
+    if (diff == sextract32(diff, 0, type)) {
+        tcg_patch32((tcg_insn_unit *)code_ptr,
+                    deposit32(*code_ptr, 32 - type, type, diff));
+        return true;
+    }
+    return false;
+}
+
+static void stack_bounds_check(TCGReg base, intptr_t offset)
+{
+    if (base == TCG_REG_CALL_STACK) {
+        tcg_debug_assert(offset >= 0);
+        tcg_debug_assert(offset < (TCG_STATIC_CALL_ARGS_SIZE +
+                                   TCG_STATIC_FRAME_SIZE));
+    }
+}
+
+static inline void tcg_tci_out32(TCGContext *s, uint32_t v)
+{
+    tcg_out32(s, v);
+}
+
+static void *cur_tci_ptr(TCGContext *s)
+{
+    return s->code_ptr;
+}
+
+static void tcg_tci_out_op_l(TCGContext *s, TCGOpcode op, TCGLabel *l0)
+{
+    uint32_t insn = 0;
+
+    tcg_out_reloc(s, cur_tci_ptr(s), 20, l0, 0);
+    insn = deposit32(insn, 0, 8, op);
+    tcg_tci_out32(s, insn);
+}
+
+static void tcg_tci_out_op_p(TCGContext *s, TCGOpcode op, void *p0)
+{
+    uint32_t insn = 0;
+    intptr_t diff;
+
+    /* Special case for exit_tb: map null -> 0. */
+    if (p0 == NULL) {
+        diff = 0;
+    } else {
+        diff = p0 - (cur_tci_ptr(s) + 4);
+        tcg_debug_assert(diff != 0);
+        if (diff != sextract32(diff, 0, 20)) {
+            tcg_raise_tb_overflow(s);
+        }
+    }
+    insn = deposit32(insn, 0, 8, op);
+    insn = deposit32(insn, 12, 20, diff);
+    tcg_tci_out32(s, insn);
+}
+
+static void tcg_tci_out_op_r(TCGContext *s, TCGOpcode op, TCGReg r0)
+{
+    uint32_t insn = 0;
+
+    insn = deposit32(insn, 0, 8, op);
+    insn = deposit32(insn, 8, 4, r0);
+    tcg_tci_out32(s, insn);
+}
+
+static void tcg_tci_out_op_v(TCGContext *s, TCGOpcode op)
+{
+    tcg_tci_out32(s, (uint8_t)op);
+}
+
+static void tcg_tci_out_op_ri(TCGContext *s, TCGOpcode op,
+                              TCGReg r0, int32_t i1)
+{
+    uint32_t insn = 0;
+
+    tcg_debug_assert(i1 == sextract32(i1, 0, 20));
+    insn = deposit32(insn, 0, 8, op);
+    insn = deposit32(insn, 8, 4, r0);
+    insn = deposit32(insn, 12, 20, i1);
+    tcg_tci_out32(s, insn);
+}
+
+static void tcg_tci_out_op_rl(TCGContext *s, TCGOpcode op,
+                              TCGReg r0, TCGLabel *l1)
+{
+    uint32_t insn = 0;
+
+    tcg_out_reloc(s, cur_tci_ptr(s), 20, l1, 0);
+    insn = deposit32(insn, 0, 8, op);
+    insn = deposit32(insn, 8, 4, r0);
+    tcg_tci_out32(s, insn);
+}
+
+static void tcg_tci_out_op_rr(TCGContext *s, TCGOpcode op,
+                              TCGReg r0, TCGReg r1)
+{
+    uint32_t insn = 0;
+
+    insn = deposit32(insn, 0, 8, op);
+    insn = deposit32(insn, 8, 4, r0);
+    insn = deposit32(insn, 12, 4, r1);
+    tcg_tci_out32(s, insn);
+}
+
+static void tcg_tci_out_op_rrr(TCGContext *s, TCGOpcode op,
+                               TCGReg r0, TCGReg r1, TCGReg r2)
+{
+    uint32_t insn = 0;
+
+    insn = deposit32(insn, 0, 8, op);
+    insn = deposit32(insn, 8, 4, r0);
+    insn = deposit32(insn, 12, 4, r1);
+    insn = deposit32(insn, 16, 4, r2);
+    tcg_tci_out32(s, insn);
+}
+
+static void tcg_tci_out_op_rrs(TCGContext *s, TCGOpcode op,
+                               TCGReg r0, TCGReg r1, intptr_t i2)
+{
+    uint32_t insn = 0;
+
+    tcg_debug_assert(i2 == sextract32(i2, 0, 16));
+    insn = deposit32(insn, 0, 8, op);
+    insn = deposit32(insn, 8, 4, r0);
+    insn = deposit32(insn, 12, 4, r1);
+    insn = deposit32(insn, 16, 16, i2);
+    tcg_tci_out32(s, insn);
+}
+
+static void tcg_tci_out_op_rrbb(TCGContext *s, TCGOpcode op, TCGReg r0,
+                                TCGReg r1, uint8_t b2, uint8_t b3)
+{
+    uint32_t insn = 0;
+
+    tcg_debug_assert(b2 == extract32(b2, 0, 6));
+    tcg_debug_assert(b3 == extract32(b3, 0, 6));
+    insn = deposit32(insn, 0, 8, op);
+    insn = deposit32(insn, 8, 4, r0);
+    insn = deposit32(insn, 12, 4, r1);
+    insn = deposit32(insn, 16, 6, b2);
+    insn = deposit32(insn, 22, 6, b3);
+    tcg_tci_out32(s, insn);
+}
+
+static void tcg_tci_out_op_rrrc(TCGContext *s, TCGOpcode op,
+                                TCGReg r0, TCGReg r1, TCGReg r2, TCGCond c3)
+{
+    uint32_t insn = 0;
+
+    insn = deposit32(insn, 0, 8, op);
+    insn = deposit32(insn, 8, 4, r0);
+    insn = deposit32(insn, 12, 4, r1);
+    insn = deposit32(insn, 16, 4, r2);
+    insn = deposit32(insn, 20, 4, c3);
+    tcg_tci_out32(s, insn);
+}
+
+static void tcg_tci_out_op_rrrbb(TCGContext *s, TCGOpcode op, TCGReg r0,
+                                 TCGReg r1, TCGReg r2, uint8_t b3, uint8_t b4)
+{
+    uint32_t insn = 0;
+
+    tcg_debug_assert(b3 == extract32(b3, 0, 6));
+    tcg_debug_assert(b4 == extract32(b4, 0, 6));
+    insn = deposit32(insn, 0, 8, op);
+    insn = deposit32(insn, 8, 4, r0);
+    insn = deposit32(insn, 12, 4, r1);
+    insn = deposit32(insn, 16, 4, r2);
+    insn = deposit32(insn, 20, 6, b3);
+    insn = deposit32(insn, 26, 6, b4);
+    tcg_tci_out32(s, insn);
+}
+
+static void tcg_tci_out_op_rrrr(TCGContext *s, TCGOpcode op,
+                                TCGReg r0, TCGReg r1, TCGReg r2, TCGReg r3)
+{
+    uint32_t insn = 0;
+
+    insn = deposit32(insn, 0, 8, op);
+    insn = deposit32(insn, 8, 4, r0);
+    insn = deposit32(insn, 12, 4, r1);
+    insn = deposit32(insn, 16, 4, r2);
+    insn = deposit32(insn, 20, 4, r3);
+    tcg_tci_out32(s, insn);
+}
+
+static void tcg_tci_out_op_rrrrrc(TCGContext *s, TCGOpcode op,
+                                  TCGReg r0, TCGReg r1, TCGReg r2,
+                                  TCGReg r3, TCGReg r4, TCGCond c5)
+{
+    uint32_t insn = 0;
+
+    insn = deposit32(insn, 0, 8, op);
+    insn = deposit32(insn, 8, 4, r0);
+    insn = deposit32(insn, 12, 4, r1);
+    insn = deposit32(insn, 16, 4, r2);
+    insn = deposit32(insn, 20, 4, r3);
+    insn = deposit32(insn, 24, 4, r4);
+    insn = deposit32(insn, 28, 4, c5);
+    tcg_tci_out32(s, insn);
+}
+
+static void tcg_tci_out_op_rrrrrr(TCGContext *s, TCGOpcode op,
+                                  TCGReg r0, TCGReg r1, TCGReg r2,
+                                  TCGReg r3, TCGReg r4, TCGReg r5)
+{
+    uint32_t insn = 0;
+
+    insn = deposit32(insn, 0, 8, op);
+    insn = deposit32(insn, 8, 4, r0);
+    insn = deposit32(insn, 12, 4, r1);
+    insn = deposit32(insn, 16, 4, r2);
+    insn = deposit32(insn, 20, 4, r3);
+    insn = deposit32(insn, 24, 4, r4);
+    insn = deposit32(insn, 28, 4, r5);
+    tcg_tci_out32(s, insn);
+}
+
+static void tcg_tci_out_movi(TCGContext *s, TCGType type,
+                             TCGReg ret, tcg_target_long arg)
+{
+    switch (type) {
+    case TCG_TYPE_I32:
+        arg = (int32_t)arg;
+        /* fall through */
+    case TCG_TYPE_I64:
+        break;
+    default:
+        g_assert_not_reached();
+    }
+
+    if (arg == sextract32(arg, 0, 20)) {
+        tcg_tci_out_op_ri(s, INDEX_op_tci_movi, ret, arg);
+    } else {
+        uint32_t insn = 0;
+
+        new_pool_label(s, arg, 20, cur_tci_ptr(s), 0);
+        insn = deposit32(insn, 0, 8, INDEX_op_tci_movl);
+        insn = deposit32(insn, 8, 4, ret);
+        tcg_tci_out32(s, insn);
+    }
+}
+
+static void tcg_tci_out_ldst(TCGContext *s, TCGOpcode op, TCGReg val,
+                             TCGReg base, intptr_t offset)
+{
+    stack_bounds_check(base, offset);
+    if (offset != sextract32(offset, 0, 16)) {
+        tcg_tci_out_movi(s, TCG_TYPE_PTR, TCG_REG_TMP, offset);
+        tcg_tci_out_op_rrr(s, INDEX_op_add_i64,
+                           TCG_REG_TMP, TCG_REG_TMP, base);
+        base = TCG_REG_TMP;
+        offset = 0;
+    }
+    tcg_tci_out_op_rrs(s, op, val, base, offset);
+}
+
+static bool tcg_tci_out_mov(TCGContext *s, TCGType type, TCGReg ret, TCGReg arg)
+{
+    switch (type) {
+    case TCG_TYPE_I32:
+        tcg_tci_out_op_rr(s, INDEX_op_mov_i32, ret, arg);
+        break;
+    case TCG_TYPE_I64:
+        tcg_tci_out_op_rr(s, INDEX_op_mov_i64, ret, arg);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+    return true;
+}
+
+static void tcg_tci_out_ext8s(TCGContext *s, TCGType type, TCGReg rd, TCGReg rs)
+{
+    switch (type) {
+    case TCG_TYPE_I32:
+        tcg_debug_assert(TCG_TARGET_HAS_ext8s_i32);
+        tcg_tci_out_op_rr(s, INDEX_op_ext8s_i32, rd, rs);
+        break;
+    case TCG_TYPE_I64:
+        tcg_debug_assert(TCG_TARGET_HAS_ext8s_i64);
+        tcg_tci_out_op_rr(s, INDEX_op_ext8s_i64, rd, rs);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
+static void tcg_tci_out_ext8u(TCGContext *s, TCGReg rd, TCGReg rs)
+{
+    tcg_debug_assert(TCG_TARGET_HAS_ext8u_i64);
+    tcg_tci_out_op_rr(s, INDEX_op_ext8u_i64, rd, rs);
+}
+
+static void tcg_tci_out_ext16s(TCGContext *s, TCGType type,
+                               TCGReg rd, TCGReg rs)
+{
+    switch (type) {
+    case TCG_TYPE_I32:
+        tcg_debug_assert(TCG_TARGET_HAS_ext16s_i32);
+        tcg_tci_out_op_rr(s, INDEX_op_ext16s_i32, rd, rs);
+        break;
+    case TCG_TYPE_I64:
+        tcg_debug_assert(TCG_TARGET_HAS_ext16s_i64);
+        tcg_tci_out_op_rr(s, INDEX_op_ext16s_i64, rd, rs);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+}
+
+static void tcg_tci_out_ext16u(TCGContext *s, TCGReg rd, TCGReg rs)
+{
+    tcg_debug_assert(TCG_TARGET_HAS_ext16u_i64);
+    tcg_tci_out_op_rr(s, INDEX_op_ext16u_i64, rd, rs);
+}
+
+static void tcg_tci_out_ext32s(TCGContext *s, TCGReg rd, TCGReg rs)
+{
+    tcg_debug_assert(TCG_TARGET_REG_BITS == 64);
+    tcg_debug_assert(TCG_TARGET_HAS_ext32s_i64);
+    tcg_tci_out_op_rr(s, INDEX_op_ext32s_i64, rd, rs);
+}
+
+static void tcg_tci_out_ext32u(TCGContext *s, TCGReg rd, TCGReg rs)
+{
+    tcg_debug_assert(TCG_TARGET_REG_BITS == 64);
+    tcg_debug_assert(TCG_TARGET_HAS_ext32u_i64);
+    tcg_tci_out_op_rr(s, INDEX_op_ext32u_i64, rd, rs);
+}
+
+static void tcg_tci_out_exts_i32_i64(TCGContext *s, TCGReg rd, TCGReg rs)
+{
+    tcg_tci_out_ext32s(s, rd, rs);
+}
+
+static void tcg_tci_out_extu_i32_i64(TCGContext *s, TCGReg rd, TCGReg rs)
+{
+    tcg_tci_out_ext32u(s, rd, rs);
+}
+
+static void tcg_tci_out_extrl_i64_i32(TCGContext *s, TCGReg rd, TCGReg rs)
+{
+    tcg_debug_assert(TCG_TARGET_REG_BITS == 64);
+    tcg_tci_out_mov(s, TCG_TYPE_I32, rd, rs);
+}
+
+static void tcg_tci_out_call(TCGContext *s, const tcg_insn_unit *func,
+                             const TCGHelperInfo *info)
+{
+    ffi_cif *cif = info->cif;
+    uint32_t insn = 0;
+    uint8_t which;
+
+    if (cif->rtype == &ffi_type_void) {
+        which = 0;
+    } else {
+        tcg_debug_assert(cif->rtype->size == 4 ||
+                         cif->rtype->size == 8 ||
+                         cif->rtype->size == 16);
+        which = ctz32(cif->rtype->size) - 1;
+    }
+    new_pool_l2(s, 20, cur_tci_ptr(s),
+                0, (uintptr_t)func, (uintptr_t)cif);
+    insn = deposit32(insn, 0, 8, INDEX_op_call);
+    insn = deposit32(insn, 8, 4, which);
+    tcg_tci_out32(s, insn);
+}
+
+static void tcg_tci_out_exit_tb(TCGContext *s, uintptr_t arg)
+{
+    tcg_tci_out_op_p(s, INDEX_op_exit_tb, (void *)arg);
+}
+
+static void tcg_tci_out_goto_tb(TCGContext *s, int which)
+{
+    /* indirect jump method. */
+    tcg_tci_out_op_p(s, INDEX_op_goto_tb,
+                     (void *)get_jmp_target_addr(s, which));
+    set_jmp_reset_offset(s, which);
+}
+
+static void tcg_out_nop_fill(tcg_insn_unit *p, int count)
+{
+    int32_t *p2 = (int32_t *)p;
+    memset(p2, 0, sizeof(*p2) * count);
+}
+
+static void tcg_out_goto_ptr(TCGContext *s, TCGOpcode opc, TCGReg arg)
+{
+    tcg_tci_out_op_r(s, opc, arg);
+    tcg_wasm_out_goto_ptr(s, arg);
+}
+
+static void tcg_out_br(TCGContext *s, TCGOpcode opc, TCGLabel *l)
+{
+    tcg_tci_out_op_l(s, opc, l);
+    tcg_wasm_out_br(s, l);
+}
+
+static void tcg_out_setcond_i32(TCGContext *s, TCGOpcode opc, TCGCond cond,
+                                TCGReg ret, TCGReg arg1, TCGReg arg2)
+{
+    tcg_tci_out_op_rrrc(s, opc, ret, arg1, arg2, cond);
+    tcg_wasm_out_setcond_i32(s, cond, ret, arg1, arg2);
+}
+
+static void tcg_out_setcond_i64(TCGContext *s, TCGOpcode opc, TCGCond cond,
+                                TCGReg ret, TCGReg arg1, TCGReg arg2)
+{
+    tcg_tci_out_op_rrrc(s, opc, ret, arg1, arg2, cond);
+    tcg_wasm_out_setcond_i64(s, cond, ret, arg1, arg2);
+}
+
+static void tcg_out_movcond_i32(TCGContext *s, TCGOpcode opc, TCGCond cond,
+                                TCGReg ret, TCGReg c1, TCGReg c2,
+                                TCGReg v1, TCGReg v2)
+{
+    tcg_tci_out_op_rrrrrc(s, opc, ret, c1, c2, v1, v2, cond);
+    tcg_wasm_out_movcond_i32(s, cond, ret, c1, c2, v1, v2);
+}
+
+static void tcg_out_movcond_i64(TCGContext *s, TCGOpcode opc, TCGCond cond,
+                                TCGReg ret, TCGReg c1, TCGReg c2,
+                                TCGReg v1, TCGReg v2)
+{
+    tcg_tci_out_op_rrrrrc(s, opc, ret, c1, c2, v1, v2, cond);
+    tcg_wasm_out_movcond_i64(s, cond, ret, c1, c2, v1, v2);
+}
+
+static void tcg_out_ld(TCGContext *s, TCGType type, TCGReg val, TCGReg base,
+                       intptr_t offset)
+{
+    switch (type) {
+    case TCG_TYPE_I32:
+        tcg_tci_out_ldst(s, INDEX_op_ld_i32, val, base, offset);
+        break;
+    case TCG_TYPE_I64:
+        tcg_tci_out_ldst(s, INDEX_op_ld_i64, val, base, offset);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+    tcg_wasm_out_ld(s, type, val, base, offset);
+}
+
+static void tcg_out_ld8s(TCGContext *s, TCGOpcode opc, TCGType type,
+                         TCGReg val, TCGReg base, intptr_t offset)
+{
+    tcg_tci_out_ldst(s, opc, val, base, offset);
+    tcg_wasm_out_ld8s(s, type, val, base, offset);
+}
+
+static void tcg_out_ld8u(TCGContext *s, TCGOpcode opc, TCGType type,
+                         TCGReg val, TCGReg base, intptr_t offset)
+{
+    tcg_tci_out_ldst(s, opc, val, base, offset);
+    tcg_wasm_out_ld8u(s, type, val, base, offset);
+}
+
+static void tcg_out_ld16s(TCGContext *s, TCGOpcode opc, TCGType type,
+                          TCGReg val, TCGReg base, intptr_t offset)
+{
+    tcg_tci_out_ldst(s, opc, val, base, offset);
+    tcg_wasm_out_ld16s(s, type, val, base, offset);
+}
+
+static void tcg_out_ld16u(TCGContext *s, TCGOpcode opc, TCGType type,
+                          TCGReg val, TCGReg base, intptr_t offset)
+{
+    tcg_tci_out_ldst(s, opc, val, base, offset);
+    tcg_wasm_out_ld16u(s, type, val, base, offset);
+}
+
+static void tcg_out_ld32s(TCGContext *s, TCGOpcode opc, TCGType type,
+                          TCGReg val, TCGReg base, intptr_t offset)
+{
+    tcg_tci_out_ldst(s, opc, val, base, offset);
+    tcg_wasm_out_ld32s(s, type, val, base, offset);
+}
+
+static void tcg_out_ld32u(TCGContext *s, TCGOpcode opc, TCGType type,
+                          TCGReg val, TCGReg base, intptr_t offset)
+{
+    tcg_tci_out_ldst(s, opc, val, base, offset);
+    tcg_wasm_out_ld32u(s, type, val, base, offset);
+}
+
+static void tcg_out_st(TCGContext *s, TCGType type, TCGReg val, TCGReg base,
+                       intptr_t offset)
+{
+    switch (type) {
+    case TCG_TYPE_I32:
+        tcg_tci_out_ldst(s, INDEX_op_st_i32, val, base, offset);
+        break;
+    case TCG_TYPE_I64:
+        tcg_tci_out_ldst(s, INDEX_op_st_i64, val, base, offset);
+        break;
+    default:
+        g_assert_not_reached();
+    }
+    tcg_wasm_out_st(s, type, val, base, offset);
+}
+
+static void tcg_out_st8(TCGContext *s, TCGOpcode opc, TCGType type, TCGReg val,
+                        TCGReg base, intptr_t offset)
+{
+    tcg_tci_out_ldst(s, opc, val, base, offset);
+    tcg_wasm_out_st8(s, type, val, base, offset);
+}
+
+static void tcg_out_st16(TCGContext *s, TCGOpcode opc, TCGType type, TCGReg val,
+                         TCGReg base, intptr_t offset)
+{
+    tcg_tci_out_ldst(s, opc, val, base, offset);
+    tcg_wasm_out_st16(s, type, val, base, offset);
+}
+
+static void tcg_out_st32(TCGContext *s, TCGOpcode opc, TCGType type, TCGReg val,
+                         TCGReg base, intptr_t offset)
+{
+    tcg_tci_out_ldst(s, opc, val, base, offset);
+    tcg_wasm_out_st32(s, type, val, base, offset);
+}
+
+static void tcg_out_add(TCGContext *s, TCGOpcode opc,
+                        TCGReg ret, TCGReg arg1, TCGReg arg2)
+{
+    tcg_tci_out_op_rrr(s, opc, ret, arg1, arg2);
+    tcg_wasm_out_i64_calc_add(s, ret, arg1, arg2);
+}
+
+static void tcg_out_sub(TCGContext *s, TCGOpcode opc,
+                        TCGReg ret, TCGReg arg1, TCGReg arg2)
+{
+    tcg_tci_out_op_rrr(s, opc, ret, arg1, arg2);
+    tcg_wasm_out_i64_calc_sub(s, ret, arg1, arg2);
+}
+
+static void tcg_out_mul(TCGContext *s, TCGOpcode opc,
+                        TCGReg ret, TCGReg arg1, TCGReg arg2)
+{
+    tcg_tci_out_op_rrr(s, opc, ret, arg1, arg2);
+    tcg_wasm_out_i64_calc_mul(s, ret, arg1, arg2);
+}
+
+static void tcg_out_and(TCGContext *s, TCGOpcode opc,
+                        TCGReg ret, TCGReg arg1, TCGReg arg2)
+{
+    tcg_tci_out_op_rrr(s, opc, ret, arg1, arg2);
+    tcg_wasm_out_i64_calc_and(s, ret, arg1, arg2);
+}
+
+static void tcg_out_or(TCGContext *s, TCGOpcode opc,
+                       TCGReg ret, TCGReg arg1, TCGReg arg2)
+{
+    tcg_tci_out_op_rrr(s, opc, ret, arg1, arg2);
+    tcg_wasm_out_i64_calc_or(s, ret, arg1, arg2);
+}
+
+static void tcg_out_xor(TCGContext *s, TCGOpcode opc,
+                        TCGReg ret, TCGReg arg1, TCGReg arg2)
+{
+    tcg_tci_out_op_rrr(s, opc, ret, arg1, arg2);
+    tcg_wasm_out_i64_calc_xor(s, ret, arg1, arg2);
+}
+
+static void tcg_out_shl(TCGContext *s, TCGOpcode opc, TCGType type,
+                        TCGReg ret, TCGReg arg1, TCGReg arg2)
+{
+    tcg_tci_out_op_rrr(s, opc, ret, arg1, arg2);
+    tcg_wasm_out_shl(s, type, ret, arg1, arg2);
+}
+
+static void tcg_out_shr_u(TCGContext *s, TCGOpcode opc, TCGType type,
+                          TCGReg ret, TCGReg arg1, TCGReg arg2)
+{
+    tcg_tci_out_op_rrr(s, opc, ret, arg1, arg2);
+    tcg_wasm_out_shr_u(s, type, ret, arg1, arg2);
+}
+
+static void tcg_out_shr_s(TCGContext *s, TCGOpcode opc, TCGType type,
+                          TCGReg ret, TCGReg arg1, TCGReg arg2)
+{
+    tcg_tci_out_op_rrr(s, opc, ret, arg1, arg2);
+    tcg_wasm_out_shr_s(s, type, ret, arg1, arg2);
+}
+
+static void tcg_out_i64_rotl(TCGContext *s, TCGOpcode opc, TCGReg ret,
+                             TCGReg arg1, TCGReg arg2)
+{
+    tcg_tci_out_op_rrr(s, opc, ret, arg1, arg2);
+    tcg_wasm_out_i64_calc_rotl(s, ret, arg1, arg2);
+}
+
+static void tcg_out_i32_rotl(TCGContext *s, TCGOpcode opc, TCGReg ret,
+                             TCGReg arg1, TCGReg arg2)
+{
+    tcg_tci_out_op_rrr(s, opc, ret, arg1, arg2);
+    tcg_wasm_out_i32_rotl(s, ret, arg1, arg2);
+}
+
+static void tcg_out_i32_rotr(TCGContext *s, TCGOpcode opc, TCGReg ret,
+                             TCGReg arg1, TCGReg arg2)
+{
+    tcg_tci_out_op_rrr(s, opc, ret, arg1, arg2);
+    tcg_wasm_out_i32_rotr(s, ret, arg1, arg2);
+}
+
+static void tcg_out_i64_rotr(TCGContext *s, TCGOpcode opc, TCGReg ret,
+                             TCGReg arg1, TCGReg arg2)
+{
+    tcg_tci_out_op_rrr(s, opc, ret, arg1, arg2);
+    tcg_wasm_out_i64_calc_rotr(s, ret, arg1, arg2);
+}
+
+static void tcg_out_div_s(TCGContext *s, TCGOpcode opc, TCGType type,
+                          TCGReg ret, TCGReg arg1, TCGReg arg2)
+{
+    tcg_tci_out_op_rrr(s, opc, ret, arg1, arg2);
+    tcg_wasm_out_div_s(s, type, ret, arg1, arg2);
+}
+
+static void tcg_out_div_u(TCGContext *s, TCGOpcode opc, TCGType type,
+                          TCGReg ret, TCGReg arg1, TCGReg arg2)
+{
+    tcg_tci_out_op_rrr(s, opc, ret, arg1, arg2);
+    tcg_wasm_out_div_u(s, type, ret, arg1, arg2);
+}
+
+static void tcg_out_rem_s(TCGContext *s, TCGOpcode opc, TCGType type,
+                          TCGReg ret, TCGReg arg1, TCGReg arg2)
+{
+    tcg_tci_out_op_rrr(s, opc, ret, arg1, arg2);
+    tcg_wasm_out_rem_s(s, type, ret, arg1, arg2);
+}
+
+static void tcg_out_rem_u(TCGContext *s, TCGOpcode opc, TCGType type,
+                          TCGReg ret, TCGReg arg1, TCGReg arg2)
+{
+    tcg_tci_out_op_rrr(s, opc, ret, arg1, arg2);
+    tcg_wasm_out_rem_u(s, type, ret, arg1, arg2);
+}
+
+static void tcg_out_andc(TCGContext *s, TCGOpcode opc, TCGReg ret,
+                         TCGReg arg1, TCGReg arg2)
+{
+    tcg_tci_out_op_rrr(s, opc, ret, arg1, arg2);
+    tcg_wasm_out_andc(s, ret, arg1, arg2);
+}
+
+static void tcg_out_orc(TCGContext *s, TCGOpcode opc, TCGReg ret,
+                        TCGReg arg1, TCGReg arg2)
+{
+    tcg_tci_out_op_rrr(s, opc, ret, arg1, arg2);
+    tcg_wasm_out_orc(s, ret, arg1, arg2);
+}
+
+static void tcg_out_eqv(TCGContext *s, TCGOpcode opc, TCGReg ret,
+                        TCGReg arg1, TCGReg arg2)
+{
+    tcg_tci_out_op_rrr(s, opc, ret, arg1, arg2);
+    tcg_wasm_out_eqv(s, ret, arg1, arg2);
+}
+
+static void tcg_out_nand(TCGContext *s, TCGOpcode opc, TCGReg ret,
+                         TCGReg arg1, TCGReg arg2)
+{
+    tcg_tci_out_op_rrr(s, opc, ret, arg1, arg2);
+    tcg_wasm_out_nand(s, ret, arg1, arg2);
+}
+
+static void tcg_out_nor(TCGContext *s, TCGOpcode opc, TCGReg ret,
+                        TCGReg arg1, TCGReg arg2)
+{
+    tcg_tci_out_op_rrr(s, opc, ret, arg1, arg2);
+    tcg_wasm_out_nor(s, ret, arg1, arg2);
+}
+
+static void tcg_out_clz32(TCGContext *s, TCGOpcode opc, TCGReg ret,
+                          TCGReg arg1, TCGReg arg2)
+{
+    tcg_tci_out_op_rrr(s, opc, ret, arg1, arg2);
+    tcg_wasm_out_clz32(s, ret, arg1, arg2);
+}
+
+static void tcg_out_clz64(TCGContext *s, TCGOpcode opc, TCGReg ret,
+                          TCGReg arg1, TCGReg arg2)
+{
+    tcg_tci_out_op_rrr(s, opc, ret, arg1, arg2);
+    tcg_wasm_out_clz64(s, ret, arg1, arg2);
+}
+
+static void tcg_out_ctz32(TCGContext *s, TCGOpcode opc, TCGReg ret,
+                          TCGReg arg1, TCGReg arg2)
+{
+    tcg_tci_out_op_rrr(s, opc, ret, arg1, arg2);
+    tcg_wasm_out_ctz32(s, ret, arg1, arg2);
+}
+
+static void tcg_out_ctz64(TCGContext *s, TCGOpcode opc, TCGReg ret,
+                          TCGReg arg1, TCGReg arg2)
+{
+    tcg_tci_out_op_rrr(s, opc, ret, arg1, arg2);
+    tcg_wasm_out_ctz64(s, ret, arg1, arg2);
+}
+
+static void tcg_out_brcond_i32(TCGContext *s, TCGOpcode opc, TCGCond cond,
+                               TCGReg arg1, TCGReg arg2, TCGLabel *l)
+{
+    tcg_tci_out_op_rrrc(s, (opc == INDEX_op_brcond_i32 ?
+                            INDEX_op_setcond_i32 : INDEX_op_setcond_i64),
+                        TCG_REG_TMP, arg1, arg2, cond);
+    tcg_tci_out_op_rl(s, opc, TCG_REG_TMP, l);
+    tcg_wasm_out_brcond_i32(s, cond, arg1, arg2, l);
+}
+
+static void tcg_out_brcond_i64(TCGContext *s, TCGOpcode opc, TCGCond cond,
+                               TCGReg arg1, TCGReg arg2, TCGLabel *l)
+{
+    tcg_tci_out_op_rrrc(s, (opc == INDEX_op_brcond_i32 ?
+                            INDEX_op_setcond_i32 : INDEX_op_setcond_i64),
+                        TCG_REG_TMP, arg1, arg2, cond);
+    tcg_tci_out_op_rl(s, opc, TCG_REG_TMP, l);
+    tcg_wasm_out_brcond_i64(s, cond, arg1, arg2, l);
+}
+
+static void tcg_out_neg(TCGContext *s, TCGOpcode opc, TCGReg ret, TCGReg arg)
+{
+    tcg_tci_out_op_rr(s, opc, ret, arg);
+    tcg_wasm_out_neg(s, ret, arg);
+}
+
+static void tcg_out_not(TCGContext *s, TCGOpcode opc, TCGReg ret, TCGReg arg)
+{
+    tcg_tci_out_op_rr(s, opc, ret, arg);
+    tcg_wasm_out_not(s, ret, arg);
+}
+
+static void tcg_out_ctpop_i32(TCGContext *s, TCGOpcode opc,
+                              TCGReg dest, TCGReg src)
+{
+    tcg_tci_out_op_rr(s, opc, dest, src);
+    tcg_wasm_out_ctpop_i32(s, dest, src);
+}
+
+static void tcg_out_ctpop_i64(TCGContext *s, TCGOpcode opc,
+                              TCGReg dest, TCGReg src)
+{
+    tcg_tci_out_op_rr(s, opc, dest, src);
+    tcg_wasm_out_ctpop_i64(s, dest, src);
+}
+
+static void tcg_out_add2_i32(TCGContext *s, TCGOpcode opc,
+                             TCGReg retl, TCGReg reth,
+                             TCGReg al, TCGReg ah, TCGReg bl, TCGReg bh)
+{
+    tcg_tci_out_op_rrrrrr(s, opc, retl, reth, al, ah, bl, bh);
+    tcg_wasm_out_add2_i32(s, retl, reth, al, ah, bl, bh);
+}
+
+static void tcg_out_add2_i64(TCGContext *s, TCGOpcode opc,
+                             TCGReg retl, TCGReg reth,
+                             TCGReg al, TCGReg ah, TCGReg bl, TCGReg bh)
+{
+    tcg_tci_out_op_rrrrrr(s, opc, retl, reth, al, ah, bl, bh);
+    tcg_wasm_out_add2_i64(s, retl, reth, al, ah, bl, bh);
+}
+
+static void tcg_out_sub2_i32(TCGContext *s, TCGOpcode opc,
+                             TCGReg retl, TCGReg reth,
+                             TCGReg al, TCGReg ah, TCGReg bl, TCGReg bh)
+{
+    tcg_tci_out_op_rrrrrr(s, opc, retl, reth, al, ah, bl, bh);
+    tcg_wasm_out_sub2_i32(s, retl, reth, al, ah, bl, bh);
+}
+
+static void tcg_out_sub2_i64(TCGContext *s, TCGOpcode opc,
+                             TCGReg retl, TCGReg reth,
+                             TCGReg al, TCGReg ah, TCGReg bl, TCGReg bh)
+{
+    tcg_tci_out_op_rrrrrr(s, opc, retl, reth, al, ah, bl, bh);
+    tcg_wasm_out_sub2_i64(s, retl, reth, al, ah, bl, bh);
+}
+
+static void tcg_out_mulu2_i32(TCGContext *s, TCGOpcode opc, TCGReg retl,
+                              TCGReg reth, TCGReg arg1, TCGReg arg2)
+{
+    tcg_tci_out_op_rrrr(s, opc, retl, reth, arg1, arg2);
+    tcg_wasm_out_mulu2_i32(s, retl, reth, arg1, arg2);
+}
+
+static void tcg_out_muls2_i32(TCGContext *s, TCGOpcode opc, TCGReg retl,
+                              TCGReg reth, TCGReg arg1, TCGReg arg2)
+{
+    tcg_tci_out_op_rrrr(s, opc, retl, reth, arg1, arg2);
+    tcg_wasm_out_muls2_i32(s, retl, reth, arg1, arg2);
+}
+
+static void tcg_out_bswap16_i32(TCGContext *s, TCGOpcode opc,
+                                TCGReg dest, TCGReg src, int flags)
+{
+    tcg_tci_out_op_rr(s, opc, dest, src);
+    if (flags & TCG_BSWAP_OS) {
+        tcg_tci_out_op_rr(s, INDEX_op_ext16s_i32, dest, dest);
+    }
+    tcg_wasm_out_bswap16(s, dest, src, flags);
+    if (flags & TCG_BSWAP_OS) {
+        tcg_wasm_out_ext16s(s, TCG_TYPE_I32, dest, dest);
+    }
+}
+
+static void tcg_out_bswap16_i64(TCGContext *s, TCGOpcode opc,
+                                TCGReg dest, TCGReg src, int flags)
+{
+    tcg_tci_out_op_rr(s, opc, dest, src);
+    if (flags & TCG_BSWAP_OS) {
+        tcg_tci_out_op_rr(s, INDEX_op_ext16s_i64, dest, dest);
+    }
+    tcg_wasm_out_bswap16(s, dest, src, flags);
+    if (flags & TCG_BSWAP_OS) {
+        tcg_wasm_out_ext16s(s, TCG_TYPE_I64, dest, dest);
+    }
+}
+
+static void tcg_out_bswap32_i32(TCGContext *s, TCGOpcode opc, TCGReg dest,
+                                TCGReg src, int flags)
+{
+    tcg_tci_out_op_rr(s, opc, dest, src);
+    tcg_wasm_out_bswap32(s, dest, src, flags);
+}
+
+static void tcg_out_bswap32_i64(TCGContext *s, TCGOpcode opc, TCGReg dest,
+                                TCGReg src, int flags)
+{
+    tcg_tci_out_op_rr(s, opc, dest, src);
+    if (flags & TCG_BSWAP_OS) {
+        tcg_tci_out_op_rr(s, INDEX_op_ext32s_i64, dest, dest);
+    }
+    tcg_wasm_out_bswap32(s, dest, src, flags);
+    if (flags & TCG_BSWAP_OS) {
+        tcg_wasm_out_ext32s(s, dest, dest);
+    }
+}
+
+static void tcg_out_bswap64_i64(TCGContext *s, TCGOpcode opc, TCGReg dest,
+                                TCGReg src, int flags)
+{
+    tcg_tci_out_op_rr(s, opc, dest, src);
+    tcg_wasm_out_bswap64(s, dest, src, flags);
+}
+
+static void tcg_tci_out_qemu_ldst(TCGContext *s, TCGOpcode opc,
+                                  const TCGArg *args, bool addr64)
+{
+    TCGReg addr = args[1];
+    MemOpIdx oi = args[2];
+
+    MemOp mopc =
get_memop(oi); + TCGAtomAlign aa = atom_and_align_for_opc(s, mopc, MO_ATOM_IFALIGN, false); + unsigned a_mask = (1u << aa.align) - 1; + + int mem_index = get_mmuidx(oi); + int fast_ofs = tlb_mask_table_ofs(s, mem_index); + int mask_ofs = fast_ofs + offsetof(CPUTLBDescFast, mask); + int table_ofs = fast_ofs + offsetof(CPUTLBDescFast, table); + uint32_t insn = 0; + + if (!addr64) { + tcg_tci_out_ext32u(s, TCG_REG_TMP, addr); + addr = TCG_REG_TMP; + } + + new_pool_l8(s, 20, cur_tci_ptr(s), 0, + (TCGReg)args[0], addr, (TCGArg)args[2], + (int32_t)a_mask, (int32_t)mask_ofs, + (uint64_t)s->page_bits, s->page_mask, table_ofs); + + insn = deposit32(insn, 0, 8, opc); + tcg_tci_out32(s, insn); +} +static void tcg_out_qemu_ld(TCGContext *s, TCGOpcode opc, + const TCGArg *args, bool addr64) +{ + tcg_tci_out_qemu_ldst(s, opc, args, addr64); + tcg_wasm_out_qemu_ld(s, args, addr64); +} +static void tcg_out_qemu_st(TCGContext *s, TCGOpcode opc, + const TCGArg *args, bool addr64) +{ + tcg_tci_out_qemu_ldst(s, opc, args, addr64); + tcg_wasm_out_qemu_st(s, args, addr64); +} +static void tcg_out_deposit_i32(TCGContext *s, TCGOpcode opc, TCGReg dest, + TCGReg arg1, TCGReg arg2, int pos, int len) +{ + tcg_tci_out_op_rrrbb(s, opc, dest, arg1, arg2, pos, len); + tcg_wasm_out_deposit_i32(s, dest, arg1, arg2, pos, len); +} +static void tcg_out_deposit_i64(TCGContext *s, TCGOpcode opc, TCGReg dest, + TCGReg arg1, TCGReg arg2, int pos, int len) +{ + tcg_tci_out_op_rrrbb(s, opc, dest, arg1, arg2, pos, len); + tcg_wasm_out_deposit_i64(s, dest, arg1, arg2, pos, len); +} +static void tcg_out_extract_i32(TCGContext *s, TCGOpcode opc, TCGReg dest, + TCGReg arg1, int pos, int len) +{ + tcg_tci_out_op_rrbb(s, opc, dest, arg1, pos, len); + tcg_wasm_out_extract(s, dest, arg1, pos, len, TCG_TYPE_I32); +} +static void tcg_out_extract_i64(TCGContext *s, TCGOpcode opc, TCGReg dest, + TCGReg arg1, int pos, int len) +{ + tcg_tci_out_op_rrbb(s, opc, dest, arg1, pos, len); + tcg_wasm_out_extract(s, dest, arg1, 
pos, len, TCG_TYPE_I64); +} +static void tcg_out_sextract_i32(TCGContext *s, TCGOpcode opc, TCGReg dest, + TCGReg arg1, int pos, int len) +{ + tcg_tci_out_op_rrbb(s, opc, dest, arg1, pos, len); + tcg_wasm_out_sextract(s, dest, arg1, pos, len, TCG_TYPE_I32); +} +static void tcg_out_sextract_i64(TCGContext *s, TCGOpcode opc, TCGReg dest, + TCGReg arg1, int pos, int len) +{ + tcg_tci_out_op_rrbb(s, opc, dest, arg1, pos, len); + tcg_wasm_out_sextract(s, dest, arg1, pos, len, TCG_TYPE_I64); +} +static void tcg_out_ext8s(TCGContext *s, TCGType type, TCGReg ret, TCGReg arg) +{ + tcg_tci_out_ext8s(s, type, ret, arg); + tcg_wasm_out_ext8s(s, type, ret, arg); +} +static void tcg_out_ext16s(TCGContext *s, TCGType type, TCGReg ret, TCGReg arg) +{ + tcg_tci_out_ext16s(s, type, ret, arg); + tcg_wasm_out_ext16s(s, type, ret, arg); +} +static void tcg_out_ext8u(TCGContext *s, TCGReg ret, TCGReg arg) +{ + tcg_tci_out_ext8u(s, ret, arg); + tcg_wasm_out_ext8u(s, ret, arg); +} +static void tcg_out_ext16u(TCGContext *s, TCGReg ret, TCGReg arg) +{ + tcg_tci_out_ext16u(s, ret, arg); + tcg_wasm_out_ext16u(s, ret, arg); +} +static void tcg_out_ext32s(TCGContext *s, TCGReg ret, TCGReg arg) +{ + tcg_tci_out_ext32s(s, ret, arg); + tcg_wasm_out_ext32s(s, ret, arg); +} +static void tcg_out_ext32u(TCGContext *s, TCGReg ret, TCGReg arg) +{ + tcg_tci_out_ext32u(s, ret, arg); + tcg_wasm_out_ext32u(s, ret, arg); +} +static void tcg_out_exts_i32_i64(TCGContext *s, TCGReg ret, TCGReg arg) +{ + tcg_tci_out_exts_i32_i64(s, ret, arg); + tcg_wasm_out_exts_i32_i64(s, ret, arg); +} +static void tcg_out_extu_i32_i64(TCGContext *s, TCGReg ret, TCGReg arg) +{ + tcg_tci_out_extu_i32_i64(s, ret, arg); + tcg_wasm_out_extu_i32_i64(s, ret, arg); +} + +static void tcg_out_extrl_i64_i32(TCGContext *s, TCGReg rd, TCGReg rs) +{ + tcg_tci_out_extrl_i64_i32(s, rd, rs); + tcg_wasm_out_extrl_i64_i32(s, rd, rs); +} + +static bool tcg_out_mov(TCGContext *s, TCGType type, TCGReg ret, TCGReg arg) +{ + tcg_tci_out_mov(s, type, 
ret, arg); + tcg_wasm_out_mov(s, type, ret, arg); + return true; +} +static void tcg_out_movi(TCGContext *s, TCGType type, + TCGReg ret, tcg_target_long arg) +{ + tcg_tci_out_movi(s, type, ret, arg); + tcg_wasm_out_movi(s, type, ret, arg); +} +static void tcg_out_addi_ptr(TCGContext *s, TCGReg rd, TCGReg rs, + tcg_target_long imm) +{ + g_assert_not_reached(); +} +static bool tcg_out_xchg(TCGContext *s, TCGType type, TCGReg r1, TCGReg r2) +{ + return false; +} +static void tcg_out_exit_tb(TCGContext *s, uintptr_t arg) +{ + tcg_tci_out_exit_tb(s, arg); + tcg_wasm_out_exit_tb(s, arg); +} +static void tcg_out_goto_tb(TCGContext *s, int which) +{ + tcg_tci_out_goto_tb(s, which); + tcg_wasm_out_goto_tb(s, which, (uint32_t)s->code_ptr); +} +static bool tcg_out_sti(TCGContext *s, TCGType type, TCGArg val, + TCGReg base, intptr_t ofs) +{ + return false; +} +static void tcg_out_call(TCGContext *s, const tcg_insn_unit *target, + const TCGHelperInfo *info) +{ + tcg_tci_out_call(s, target, info); + tcg_wasm_out_call(s, target, info); +} + +static void tcg_out_op(TCGContext *s, TCGOpcode opc, TCGType type, + const TCGArg args[TCG_MAX_OP_ARGS], + const int const_args[TCG_MAX_OP_ARGS]) +{ + switch (opc) { + case INDEX_op_goto_ptr: + tcg_out_goto_ptr(s, opc, args[0]); + break; + case INDEX_op_br: + tcg_out_br(s, opc, arg_label(args[0])); + break; + case INDEX_op_setcond_i32: + tcg_out_setcond_i32(s, opc, args[3], args[0], args[1], args[2]); + break; + case INDEX_op_setcond_i64: + tcg_out_setcond_i64(s, opc, args[3], args[0], args[1], args[2]); + break; + case INDEX_op_movcond_i32: + tcg_out_movcond_i32(s, opc, args[5], args[0], args[1], args[2], + args[3], args[4]); + break; + case INDEX_op_movcond_i64: + tcg_out_movcond_i64(s, opc, args[5], args[0], args[1], args[2], + args[3], args[4]); + break; + case INDEX_op_ld_i64: + tcg_out_ld(s, TCG_TYPE_I64, args[0], args[1], args[2]); + break; + case INDEX_op_ld8s_i32: + case INDEX_op_ld8s_i64: + tcg_out_ld8s(s, opc, TCG_TYPE_I64, 
args[0], args[1], args[2]); + break; + case INDEX_op_ld8u_i32: + case INDEX_op_ld8u_i64: + tcg_out_ld8u(s, opc, TCG_TYPE_I64, args[0], args[1], args[2]); + break; + case INDEX_op_ld16s_i32: + case INDEX_op_ld16s_i64: + tcg_out_ld16s(s, opc, TCG_TYPE_I64, args[0], args[1], args[2]); + break; + case INDEX_op_ld16u_i32: + case INDEX_op_ld16u_i64: + tcg_out_ld16u(s, opc, TCG_TYPE_I64, args[0], args[1], args[2]); + break; + case INDEX_op_ld32u_i64: + tcg_out_ld32u(s, opc, TCG_TYPE_I64, args[0], args[1], args[2]); + break; + case INDEX_op_ld_i32: + case INDEX_op_ld32s_i64: + tcg_out_ld32s(s, opc, TCG_TYPE_I64, args[0], args[1], args[2]); + break; + case INDEX_op_st_i64: + tcg_out_st(s, TCG_TYPE_I64, args[0], args[1], args[2]); + break; + case INDEX_op_st8_i32: + case INDEX_op_st8_i64: + tcg_out_st8(s, opc, TCG_TYPE_I64, args[0], args[1], args[2]); + break; + case INDEX_op_st16_i32: + case INDEX_op_st16_i64: + tcg_out_st16(s, opc, TCG_TYPE_I64, args[0], args[1], args[2]); + break; + case INDEX_op_st_i32: + case INDEX_op_st32_i64: + tcg_out_st32(s, opc, TCG_TYPE_I64, args[0], args[1], args[2]); + break; + case INDEX_op_add_i32: + case INDEX_op_add_i64: + tcg_out_add(s, opc, args[0], args[1], args[2]); + break; + case INDEX_op_sub_i32: + case INDEX_op_sub_i64: + tcg_out_sub(s, opc, args[0], args[1], args[2]); + break; + case INDEX_op_mul_i32: + case INDEX_op_mul_i64: + tcg_out_mul(s, opc, args[0], args[1], args[2]); + break; + case INDEX_op_and_i32: + case INDEX_op_and_i64: + tcg_out_and(s, opc, args[0], args[1], args[2]); + break; + case INDEX_op_or_i32: + case INDEX_op_or_i64: + tcg_out_or(s, opc, args[0], args[1], args[2]); + break; + case INDEX_op_xor_i32: + case INDEX_op_xor_i64: + tcg_out_xor(s, opc, args[0], args[1], args[2]); + break; + case INDEX_op_shl_i32: + tcg_out_shl(s, opc, TCG_TYPE_I32, args[0], args[1], args[2]); + break; + case INDEX_op_shl_i64: + tcg_out_shl(s, opc, TCG_TYPE_I64, args[0], args[1], args[2]); + break; + case INDEX_op_shr_i32: + 
tcg_out_shr_u(s, opc, TCG_TYPE_I32, args[0], args[1], args[2]); + break; + case INDEX_op_shr_i64: + tcg_out_shr_u(s, opc, TCG_TYPE_I64, args[0], args[1], args[2]); + break; + case INDEX_op_sar_i32: + tcg_out_shr_s(s, opc, TCG_TYPE_I32, args[0], args[1], args[2]); + break; + case INDEX_op_sar_i64: + tcg_out_shr_s(s, opc, TCG_TYPE_I64, args[0], args[1], args[2]); + break; + case INDEX_op_rotl_i32: + tcg_out_i32_rotl(s, opc, args[0], args[1], args[2]); + break; + case INDEX_op_rotl_i64: + tcg_out_i64_rotl(s, opc, args[0], args[1], args[2]); + break; + case INDEX_op_rotr_i32: + tcg_out_i32_rotr(s, opc, args[0], args[1], args[2]); + break; + case INDEX_op_rotr_i64: + tcg_out_i64_rotr(s, opc, args[0], args[1], args[2]); + break; + case INDEX_op_div_i32: + tcg_out_div_s(s, opc, TCG_TYPE_I32, args[0], args[1], args[2]); + break; + case INDEX_op_div_i64: + tcg_out_div_s(s, opc, TCG_TYPE_I64, args[0], args[1], args[2]); + break; + case INDEX_op_divu_i32: + tcg_out_div_u(s, opc, TCG_TYPE_I32, args[0], args[1], args[2]); + break; + case INDEX_op_divu_i64: + tcg_out_div_u(s, opc, TCG_TYPE_I64, args[0], args[1], args[2]); + break; + case INDEX_op_rem_i32: + tcg_out_rem_s(s, opc, TCG_TYPE_I32, args[0], args[1], args[2]); + break; + case INDEX_op_rem_i64: + tcg_out_rem_s(s, opc, TCG_TYPE_I64, args[0], args[1], args[2]); + break; + case INDEX_op_remu_i32: + tcg_out_rem_u(s, opc, TCG_TYPE_I32, args[0], args[1], args[2]); + break; + case INDEX_op_remu_i64: + tcg_out_rem_u(s, opc, TCG_TYPE_I64, args[0], args[1], args[2]); + break; + case INDEX_op_andc_i32: + case INDEX_op_andc_i64: + tcg_out_andc(s, opc, args[0], args[1], args[2]); + break; + case INDEX_op_orc_i32: + case INDEX_op_orc_i64: + tcg_out_orc(s, opc, args[0], args[1], args[2]); + break; + case INDEX_op_eqv_i32: + case INDEX_op_eqv_i64: + tcg_out_eqv(s, opc, args[0], args[1], args[2]); + break; + case INDEX_op_nand_i32: + case INDEX_op_nand_i64: + tcg_out_nand(s, opc, args[0], args[1], args[2]); + break; + case 
INDEX_op_nor_i32: + case INDEX_op_nor_i64: + tcg_out_nor(s, opc, args[0], args[1], args[2]); + break; + case INDEX_op_clz_i32: + tcg_out_clz32(s, opc, args[0], args[1], args[2]); + break; + case INDEX_op_clz_i64: + tcg_out_clz64(s, opc, args[0], args[1], args[2]); + break; + case INDEX_op_ctz_i32: + tcg_out_ctz32(s, opc, args[0], args[1], args[2]); + break; + case INDEX_op_ctz_i64: + tcg_out_ctz64(s, opc, args[0], args[1], args[2]); + break; + case INDEX_op_brcond_i32: + tcg_out_brcond_i32(s, opc, args[2], args[0], args[1], + arg_label(args[3])); + break; + case INDEX_op_brcond_i64: + tcg_out_brcond_i64(s, opc, args[2], args[0], args[1], + arg_label(args[3])); + break; + case INDEX_op_neg_i32: + case INDEX_op_neg_i64: + tcg_out_neg(s, opc, args[0], args[1]); + break; + case INDEX_op_not_i32: + case INDEX_op_not_i64: + tcg_out_not(s, opc, args[0], args[1]); + break; + case INDEX_op_ctpop_i32: + /* ctpop carries dest in args[0] and src in args[1] */ + tcg_out_ctpop_i32(s, opc, args[0], args[1]); + break; + case INDEX_op_ctpop_i64: + tcg_out_ctpop_i64(s, opc, args[0], args[1]); + break; + case INDEX_op_add2_i32: + tcg_out_add2_i32(s, opc, args[0], args[1], args[2], args[3], + args[4], args[5]); + break; + case INDEX_op_add2_i64: + tcg_out_add2_i64(s, opc, args[0], args[1], args[2], args[3], + args[4], args[5]); + break; + case INDEX_op_sub2_i32: + tcg_out_sub2_i32(s, opc, args[0], args[1], args[2], args[3], + args[4], args[5]); + break; + case INDEX_op_sub2_i64: + tcg_out_sub2_i64(s, opc, args[0], args[1], args[2], args[3], + args[4], args[5]); + break; + case INDEX_op_qemu_ld_i32: + case INDEX_op_qemu_ld_i64: + tcg_out_qemu_ld(s, opc, args, s->addr_type == TCG_TYPE_I64); + break; + case INDEX_op_qemu_st_i32: + case INDEX_op_qemu_st_i64: + tcg_out_qemu_st(s, opc, args, s->addr_type == TCG_TYPE_I64); + break; + + case INDEX_op_extrl_i64_i32: + tcg_out_extrl_i64_i32(s, args[0], args[1]); + break; + case INDEX_op_mb: + tcg_tci_out_op_v(s, opc); + tcg_wasm_out8(s, 0x01); /* nop */ + break; + case INDEX_op_extract_i32: +
tcg_out_extract_i32(s, opc, args[0], args[1], args[2], args[3]); + break; + case INDEX_op_extract_i64: + tcg_out_extract_i64(s, opc, args[0], args[1], args[2], args[3]); + break; + case INDEX_op_sextract_i32: + tcg_out_sextract_i32(s, opc, args[0], args[1], args[2], args[3]); + break; + case INDEX_op_sextract_i64: + tcg_out_sextract_i64(s, opc, args[0], args[1], args[2], args[3]); + break; + case INDEX_op_deposit_i32: + tcg_out_deposit_i32(s, opc, args[0], args[1], args[2], args[3], + args[4]); + break; + case INDEX_op_deposit_i64: + tcg_out_deposit_i64(s, opc, args[0], args[1], args[2], args[3], + args[4]); + break; + case INDEX_op_bswap16_i32: + tcg_out_bswap16_i32(s, opc, args[0], args[1], args[2]); + break; + case INDEX_op_bswap16_i64: + tcg_out_bswap16_i64(s, opc, args[0], args[1], args[2]); + break; + case INDEX_op_bswap32_i32: + tcg_out_bswap32_i32(s, opc, args[0], args[1], args[2]); + break; + case INDEX_op_bswap32_i64: + tcg_out_bswap32_i64(s, opc, args[0], args[1], args[2]); + break; + case INDEX_op_bswap64_i64: + tcg_out_bswap64_i64(s, opc, args[0], args[1], args[2]); + break; + case INDEX_op_muls2_i32: + tcg_out_muls2_i32(s, opc, args[0], args[1], args[2], args[3]); + break; + case INDEX_op_mulu2_i32: + tcg_out_mulu2_i32(s, opc, args[0], args[1], args[2], args[3]); + break; + default: + g_assert_not_reached(); + break; + } + return; +} + +bool tcg_target_has_memory_bswap(MemOp memop) +{ + return false; +} + +static bool tcg_out_qemu_ld_slow_path(TCGContext *s, TCGLabelQemuLdst *l) +{ + g_assert_not_reached(); +} + +static bool tcg_out_qemu_st_slow_path(TCGContext *s, TCGLabelQemuLdst *l) +{ + g_assert_not_reached(); +} + +static void tcg_target_init(TCGContext *s) +{ + /* The current code uses uint8_t for tcg operations. */ + tcg_debug_assert(tcg_op_defs_max <= UINT8_MAX); + + /* Registers available for 32 bit operations. */ + tcg_target_available_regs[TCG_TYPE_I32] = BIT(TCG_TARGET_NB_REGS) - 1; + /* Registers available for 64 bit operations. 
*/ + tcg_target_available_regs[TCG_TYPE_I64] = BIT(TCG_TARGET_NB_REGS) - 1; + /* + * The interpreter "registers" are in the local stack frame and + * cannot be clobbered by the called helper functions. However, + * the interpreter assumes a 64-bit return value and assigns to + * the return value registers. + */ + tcg_target_call_clobber_regs = + MAKE_64BIT_MASK(TCG_REG_R0, 128 / TCG_TARGET_REG_BITS); + + s->reserved_regs = 0; + tcg_regset_set_reg(s->reserved_regs, TCG_REG_TMP); + tcg_regset_set_reg(s->reserved_regs, TCG_REG_CALL_STACK); + + /* The call arguments come first, followed by the temp storage. */ + tcg_set_frame(s, TCG_REG_CALL_STACK, TCG_STATIC_CALL_ARGS_SIZE, + TCG_STATIC_FRAME_SIZE); +} + +/* Generate global QEMU prologue and epilogue code. */ +static inline void tcg_target_qemu_prologue(TCGContext *s) +{ +} + +static const uint8_t mod_1[] = { + 0x0, 0x61, 0x73, 0x6d, /* magic */ + 0x01, 0x0, 0x0, 0x0, /* version */ + /* type section */ + 0x01, 0x80, 0x80, 0x80, 0x80, 0x00, + 0x80, 0x80, 0x80, 0x80, 0x00, + /* "start" function */ + 0x60, + 0x01, 0x7f, + 0x01, 0x7f, + /* function to check asyncify state */ + 0x60, + 0x0, + 0x01, 0x7f, +}; + +static const uint8_t mod_2[] = { + /* import section */ + 0x02, 0x80, 0x80, 0x80, 0x80, 0x00, + 0x80, 0x80, 0x80, 0x80, 0x00, + /* env.buffer */ + 0x03, 0x65, 0x6e, 0x76, + 0x06, 0x62, 0x75, 0x66, 0x66, 0x65, 0x72, + 0x02, 0x03, 0x00, 0x80, 0x80, 0x80, 0x80, 0x00, + /* helper.u */ + 0x06, 0x68, 0x65, 0x6c, 0x70, 0x65, 0x72, + 0x01, 0x75, + 0x00, 0x01, +}; + +static const uint8_t mod_3[] = { + /* function section */ + 0x03, 2, 1, 0x00, + /* global section */ + 0x06, 0x7e, + 25, + 0x7e, 0x01, 0x42, 0x00, 0x0b, + 0x7e, 0x01, 0x42, 0x00, 0x0b, + 0x7e, 0x01, 0x42, 0x00, 0x0b, + 0x7e, 0x01, 0x42, 0x00, 0x0b, + 0x7e, 0x01, 0x42, 0x00, 0x0b, + 0x7e, 0x01, 0x42, 0x00, 0x0b, + 0x7e, 0x01, 0x42, 0x00, 0x0b, + 0x7e, 0x01, 0x42, 0x00, 0x0b, + 0x7e, 0x01, 0x42, 0x00, 0x0b, + 0x7e, 0x01, 0x42, 0x00, 0x0b, + 0x7e, 0x01, 0x42, 0x00, 
0x0b, + 0x7e, 0x01, 0x42, 0x00, 0x0b, + 0x7e, 0x01, 0x42, 0x00, 0x0b, + 0x7e, 0x01, 0x42, 0x00, 0x0b, + 0x7e, 0x01, 0x42, 0x00, 0x0b, + 0x7e, 0x01, 0x42, 0x00, 0x0b, + 0x7e, 0x01, 0x42, 0x00, 0x0b, + 0x7e, 0x01, 0x42, 0x00, 0x0b, + 0x7e, 0x01, 0x42, 0x00, 0x0b, + 0x7e, 0x01, 0x42, 0x00, 0x0b, + 0x7e, 0x01, 0x42, 0x00, 0x0b, + 0x7e, 0x01, 0x42, 0x00, 0x0b, + 0x7e, 0x01, 0x42, 0x00, 0x0b, + 0x7e, 0x01, 0x42, 0x00, 0x0b, + 0x7e, 0x01, 0x42, 0x00, 0x0b, + /* export section */ + 0x07, 13, + 1, + /* "start" function */ + 0x05, 0x73, 0x74, 0x61, 0x72, 0x74, + 0x00, 0x80, 0x80, 0x80, 0x80, 0x00, +}; + +static const uint8_t mod_4[] = { + /* code section */ + 0x0a, 0x80, 0x80, 0x80, 0x80, 0x00, + 1, + 0x80, 0x80, 0x80, 0x80, 0x00, + /* variables */ + 0x2, 0x2, 0x7f, 0x1, 0x7e, +}; + +static int write_mod_1(TCGContext *s) +{ + void *base = s->code_ptr; + int helpers_num = helpers_len(); + + if (unlikely(((void *)s->code_ptr + sizeof(mod_1) + types_buf_len()) + > s->code_gen_highwater)) { + return -1; + } + + memcpy(s->code_ptr, mod_1, sizeof(mod_1)); + s->code_ptr += sizeof(mod_1); + linked_buf_write(types_buf_root, s->code_ptr); + s->code_ptr += types_buf_len(); + + uint32_t type_section_size = types_buf_len() + 14; + fill_uint32_leb128(base + 9, type_section_size); + fill_uint32_leb128(base + 14, HELPER_IDX_START + helpers_num + 1); + + return 0; +} + +static int write_mod_2(TCGContext *s) +{ + void *base = s->code_ptr; + int helpers_num = helpers_len(); + void *section_base; + + if (unlikely(((void *)s->code_ptr + sizeof(mod_2)) + > s->code_gen_highwater)) { + return -1; + } + + memcpy(s->code_ptr, mod_2, sizeof(mod_2)); + s->code_ptr += sizeof(mod_2); + section_base = s->code_ptr; + for (int i = 0; i < helpers_num; i++) { + int typeidx = HELPER_IDX_START + i + 1; + char buf[100]; + int n; + *(uint8_t *)s->code_ptr++ = 6; /* helper */ + *(uint8_t *)s->code_ptr++ = 0x68; + *(uint8_t *)s->code_ptr++ = 0x65; + *(uint8_t *)s->code_ptr++ = 0x6c; + *(uint8_t *)s->code_ptr++ = 
0x70; + *(uint8_t *)s->code_ptr++ = 0x65; + *(uint8_t *)s->code_ptr++ = 0x72; + n = snprintf(buf, sizeof(buf), "%d", i); + s->code_ptr += write_uint32_leb128(s->code_ptr, n); + memcpy(s->code_ptr, buf, n); + s->code_ptr += n; + *(uint8_t *)s->code_ptr++ = 0x00; /* type(0) */ + s->code_ptr += write_uint32_leb128(s->code_ptr, typeidx); + } + + uint32_t import_section_size = 35 + (int)s->code_ptr - (int)section_base; + fill_uint32_leb128(base + 1, import_section_size); + fill_uint32_leb128(base + 6, HELPER_IDX_START + helpers_num + 1); + fill_uint32_leb128(base + 25, (uint32_t)(~0) / 65536); + + return 0; +} + +static int write_mod_3(TCGContext *s) +{ + void *base = s->code_ptr; + + if (unlikely(((void *)s->code_ptr + sizeof(mod_3)) + > s->code_gen_highwater)) { + return -1; + } + + memcpy(s->code_ptr, mod_3, sizeof(mod_3)); + s->code_ptr += sizeof(mod_3); + + int startidx = HELPER_IDX_START + helpers_len(); + fill_uint32_leb128(base + 142, startidx); + + return 0; +} + +static int write_mod_4(TCGContext *s) +{ + void *base = s->code_ptr; + + if (unlikely(((void *)s->code_ptr + sizeof(mod_4)) + > s->code_gen_highwater)) { + return -1; + } + + memcpy(s->code_ptr, mod_4, sizeof(mod_4)); + s->code_ptr += sizeof(mod_4); + + int code_size = sub_buf_len() + 5; + fill_uint32_leb128(base + 1, code_size + 6); + fill_uint32_leb128(base + 7, code_size); + + return 0; +} + +static int write_mod_code(TCGContext *s) +{ + void *base = s->code_ptr; + int code_size = sub_buf_len(); + + if (unlikely(((void *)s->code_ptr + code_size) > s->code_gen_highwater)) { + return -1; + } + linked_buf_write(sub_buf_root, s->code_ptr); + s->code_ptr += code_size; + for (BlockPlaceholder *p = block_placeholder; p; p = p->next) { + uint8_t *ph = p->pos + base; + int blk = get_block_of_label(p->label); + tcg_debug_assert(blk >= 0); + *ph = 0x80; + fill_uint32_leb128(ph, blk); + } + + return 0; +} + +static void tcg_out_tb_start(TCGContext *s) +{ + int size; + + init_sub_buf(); + init_types_buf(); + 
init_blocks(); + init_label_info(); + init_helpers(); + + /* TB starts from a header */ + struct wasmTBHeader *h = (struct wasmTBHeader *)(s->code_buf); + s->code_ptr += sizeof(struct wasmTBHeader); + + /* region to record the instance information */ + h->info_ptr = s->code_ptr; + size = get_core_nums() * 4; + memset(s->code_ptr, 0, size); + s->code_ptr += size; + + /* region to store counters */ + h->counter_ptr = s->code_ptr; + size = get_core_nums() * 4; + memset(s->code_ptr, 0, size); + s->code_ptr += size; + + /* TCI code starts here */ + h->tci_ptr = s->code_ptr; + + /* generate wasm code to initialize fundamental registers */ + tcg_wasm_out_ctx_i32_load(s, DO_INIT_OFF); + tcg_wasm_out_op_i32_const(s, 0); + tcg_wasm_out_op_i32_ne(s); + tcg_wasm_out_op_if_noret(s); + + tcg_wasm_out_op_global_get_r(s, TCG_AREG0); + tcg_wasm_out_op_i64_eqz(s); + tcg_wasm_out_op_if_noret(s); + + tcg_wasm_out_ctx_i32_load(s, ENV_OFF); + tcg_wasm_out_op_i64_extend_i32_u(s); + tcg_wasm_out_op_global_set_r(s, TCG_AREG0); + + tcg_wasm_out_ctx_i32_load(s, STACK_OFF); + tcg_wasm_out_op_i64_extend_i32_u(s); + tcg_wasm_out_op_global_set_r(s, TCG_REG_CALL_STACK); + tcg_wasm_out_op_end(s); + + tcg_wasm_out_ctx_i32_store_const(s, DO_INIT_OFF, 0); + tcg_wasm_out_op_i64_const(s, 0); + tcg_wasm_out_op_global_set(s, BLOCK_PTR_IDX); + tcg_wasm_out_op_end(s); + + tcg_wasm_out_op_loop_noret(s); + tcg_wasm_out_op_global_get(s, BLOCK_PTR_IDX); + tcg_wasm_out_op_i64_eqz(s); + tcg_wasm_out_op_if_noret(s); +} + +static int tcg_out_tb_end(TCGContext *s) +{ + int res; + struct wasmTBHeader *h = (struct wasmTBHeader *)(s->code_buf); + + tcg_wasm_out_op_end(s); /* end if */ + tcg_wasm_out_op_end(s); /* end loop */ + tcg_wasm_out8(s, 0x0); /* unreachable */ + tcg_wasm_out_op_end(s); /* end func */ + + /* write wasm blob */ + h->wasm_ptr = s->code_ptr; + res = write_mod_1(s); + if (res < 0) { + return res; + } + res = write_mod_2(s); + if (res < 0) { + return res; + } + res = write_mod_3(s); + if (res < 0) { 
+ return res; + } + res = write_mod_4(s); + if (res < 0) { + return res; + } + res = write_mod_code(s); + if (res < 0) { + return res; + } + h->wasm_size = (int)s->code_ptr - (int)h->wasm_ptr; + + /* record imported helper functions */ + if (unlikely(((void *)s->code_ptr + 4 + helpers_len() * 4) + > s->code_gen_highwater)) { + return -1; + } + h->import_ptr = s->code_ptr; + s->code_ptr += helpers_copy((uint32_t *)s->code_ptr); + h->import_size = (int)s->code_ptr - (int)h->import_ptr; + + return 0; +} diff --git a/tcg/wasm32/tcg-target.h b/tcg/wasm32/tcg-target.h new file mode 100644 index 0000000000..5690eec663 --- /dev/null +++ b/tcg/wasm32/tcg-target.h @@ -0,0 +1,65 @@ +/* + * SPDX-License-Identifier: GPL-2.0-or-later + */ +/* + * Tiny Code Generator for QEMU + * + * Copyright (c) 2009, 2011 Stefan Weil + * + * Based on tci/tcg-target.h + * + * Permission is hereby granted, free of charge, to any person obtaining a copy + * of this software and associated documentation files (the "Software"), to deal + * in the Software without restriction, including without limitation the rights + * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell + * copies of the Software, and to permit persons to whom the Software is + * furnished to do so, subject to the following conditions: + * + * The above copyright notice and this permission notice shall be included in + * all copies or substantial portions of the Software. + * + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN + * THE SOFTWARE. 
+ */ + +#ifndef TCG_TARGET_H +#define TCG_TARGET_H + +#define TCG_TARGET_INSN_UNIT_SIZE 1 +#define MAX_CODE_GEN_BUFFER_SIZE ((size_t)-1) + +/* Number of registers available. */ +#define TCG_TARGET_NB_REGS 16 + +/* List of registers which are used by TCG. */ +typedef enum { + TCG_REG_R0 = 0, + TCG_REG_R1, + TCG_REG_R2, + TCG_REG_R3, + TCG_REG_R4, + TCG_REG_R5, + TCG_REG_R6, + TCG_REG_R7, + TCG_REG_R8, + TCG_REG_R9, + TCG_REG_R10, + TCG_REG_R11, + TCG_REG_R12, + TCG_REG_R13, + TCG_REG_R14, + TCG_REG_R15, + + TCG_REG_TMP = TCG_REG_R13, + TCG_AREG0 = TCG_REG_R14, + TCG_REG_CALL_STACK = TCG_REG_R15, +} TCGReg; + +#define HAVE_TCG_QEMU_TB_EXEC + +#endif /* TCG_TARGET_H */ From patchwork Mon Apr 7 14:45:59 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kohei Tokunaga X-Patchwork-Id: 14041062
From: Kohei Tokunaga To: qemu-devel@nongnu.org Cc: Alex Bennée, Philippe Mathieu-Daudé, Thomas Huth, Richard Henderson, Paolo Bonzini, Kevin Wolf, Hanna Reitz, Kohei Tokunaga, Christian Schoenebeck, Greg Kurz, Palmer Dabbelt, Alistair Francis, Weiwei Li, Daniel Henrique Barboza, Liu Zhiwei, Marc-André Lureau, Daniel P. Berrangé, Eduardo Habkost, Peter Maydell, Stefan Hajnoczi, qemu-block@nongnu.org, qemu-riscv@nongnu.org, qemu-arm@nongnu.org Subject: [PATCH 08/10] hw/9pfs: Allow using hw/9pfs with emscripten Date: Mon, 7 Apr 2025 23:45:59 +0900 Message-Id: <16376e4b63fad6f847ceadb39b8f9780fc288198.1744032780.git.ktokunaga.mail@gmail.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: References: MIME-Version: 1.0
X-Spam_action: no action X-Mailman-Approved-At: Mon, 07 Apr 2025 11:14:07 -0400 X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Sender: qemu-devel-bounces+qemu-devel=archiver.kernel.org@nongnu.org Emscripten's fiber does not support submitting coroutines to other threads. So this commit modifies hw/9pfs/coth.h to disable this behavior when compiled with Emscripten. Signed-off-by: Kohei Tokunaga --- fsdev/file-op-9p.h | 3 +++ fsdev/meson.build | 2 +- hw/9pfs/9p-util-stub.c | 43 ++++++++++++++++++++++++++++++++++++++++++ hw/9pfs/9p-util.h | 18 ++++++++++++++++++ hw/9pfs/9p.c | 3 +++ hw/9pfs/coth.h | 12 ++++++++++++ hw/9pfs/meson.build | 2 ++ meson.build | 6 +++--- 8 files changed, 85 insertions(+), 4 deletions(-) create mode 100644 hw/9pfs/9p-util-stub.c diff --git a/fsdev/file-op-9p.h b/fsdev/file-op-9p.h index 4997677460..b7ca2640ce 100644 --- a/fsdev/file-op-9p.h +++ b/fsdev/file-op-9p.h @@ -26,6 +26,9 @@ # include # include #endif +#ifdef EMSCRIPTEN +#include +#endif #define SM_LOCAL_MODE_BITS 0600 #define SM_LOCAL_DIR_MODE_BITS 0700 diff --git a/fsdev/meson.build b/fsdev/meson.build index c751d8cb62..c3e92a29d7 100644 --- a/fsdev/meson.build +++ b/fsdev/meson.build @@ -5,6 +5,6 @@ fsdev_ss.add(when: ['CONFIG_FSDEV_9P'], if_true: files( '9p-marshal.c', 'qemu-fsdev.c', ), if_false: files('qemu-fsdev-dummy.c')) -if host_os in ['linux', 'darwin'] +if host_os in ['linux', 'darwin', 'emscripten'] system_ss.add_all(fsdev_ss) endif diff --git a/hw/9pfs/9p-util-stub.c b/hw/9pfs/9p-util-stub.c new file mode 100644 index 0000000000..57c89902ab --- /dev/null +++ b/hw/9pfs/9p-util-stub.c @@ -0,0 +1,43 @@ +/* + * 9p utilities stub functions + * + * SPDX-License-Identifier: GPL-2.0-or-later + */ + +#include "qemu/osdep.h" +#include "9p-util.h" + +ssize_t fgetxattrat_nofollow(int dirfd, const 
char *path, const char *name, + void *value, size_t size) +{ + return -1; +} + +ssize_t flistxattrat_nofollow(int dirfd, const char *filename, + char *list, size_t size) +{ + return -1; +} + +ssize_t fremovexattrat_nofollow(int dirfd, const char *filename, + const char *name) +{ + return -1; +} + +int fsetxattrat_nofollow(int dirfd, const char *path, const char *name, + void *value, size_t size, int flags) +{ + return -1; + +} + +int qemu_mknodat(int dirfd, const char *filename, mode_t mode, dev_t dev) +{ + return -1; +} + +ssize_t fgetxattr(int fd, const char *name, void *value, size_t size) +{ + return -1; +} diff --git a/hw/9pfs/9p-util.h b/hw/9pfs/9p-util.h index 7bc4ec8e85..8c5006fcdc 100644 --- a/hw/9pfs/9p-util.h +++ b/hw/9pfs/9p-util.h @@ -84,6 +84,24 @@ static inline int errno_to_dotl(int err) { } else if (err == EOPNOTSUPP) { err = 95; /* ==EOPNOTSUPP on Linux */ } +#elif defined(EMSCRIPTEN) + /* + * FIXME: Only most important errnos translated here yet, this should be + * extended to as many errnos being translated as possible in future. + */ + if (err == ENAMETOOLONG) { + err = 36; /* ==ENAMETOOLONG on Linux */ + } else if (err == ENOTEMPTY) { + err = 39; /* ==ENOTEMPTY on Linux */ + } else if (err == ELOOP) { + err = 40; /* ==ELOOP on Linux */ + } else if (err == ENODATA) { + err = 61; /* ==ENODATA on Linux */ + } else if (err == ENOTSUP) { + err = 95; /* ==EOPNOTSUPP on Linux */ + } else if (err == EOPNOTSUPP) { + err = 95; /* ==EOPNOTSUPP on Linux */ + } #else #error Missing errno translation to Linux for this host system #endif diff --git a/hw/9pfs/9p.c b/hw/9pfs/9p.c index 7cad2bce62..4f45f0edd3 100644 --- a/hw/9pfs/9p.c +++ b/hw/9pfs/9p.c @@ -4013,6 +4013,9 @@ out_nofid: * Linux guests. 
*/ #define P9_XATTR_SIZE_MAX 65536 +#elif defined(EMSCRIPTEN) +/* No support for xattr */ +#define P9_XATTR_SIZE_MAX 0 #else #error Missing definition for P9_XATTR_SIZE_MAX for this host system #endif diff --git a/hw/9pfs/coth.h b/hw/9pfs/coth.h index 2c54249b35..7b0d05ba1b 100644 --- a/hw/9pfs/coth.h +++ b/hw/9pfs/coth.h @@ -19,6 +19,7 @@ #include "qemu/coroutine-core.h" #include "9p.h" +#ifndef EMSCRIPTEN /* * we want to use bottom half because we want to make sure the below * sequence of events. @@ -57,6 +58,17 @@ /* re-enter back to qemu thread */ \ qemu_coroutine_yield(); \ } while (0) +#else +/* + * FIXME: implement this on emscripten but emscripten's coroutine + * implementation (fiber) doesn't support submitting a coroutine to other + * threads. + */ +#define v9fs_co_run_in_worker(code_block) \ + do { \ + code_block; \ + } while (0) +#endif void co_run_in_worker_bh(void *); int coroutine_fn v9fs_co_readlink(V9fsPDU *, V9fsPath *, V9fsString *); diff --git a/hw/9pfs/meson.build b/hw/9pfs/meson.build index d35d4f44ff..04f85fb9e9 100644 --- a/hw/9pfs/meson.build +++ b/hw/9pfs/meson.build @@ -17,6 +17,8 @@ if host_os == 'darwin' fs_ss.add(files('9p-util-darwin.c')) elif host_os == 'linux' fs_ss.add(files('9p-util-linux.c')) +elif host_os == 'emscripten' + fs_ss.add(files('9p-util-stub.c')) endif fs_ss.add(when: 'CONFIG_XEN_BUS', if_true: files('xen-9p-backend.c')) system_ss.add_all(when: 'CONFIG_FSDEV_9P', if_true: fs_ss) diff --git a/meson.build b/meson.build index ab84820bc5..a3aadf8b59 100644 --- a/meson.build +++ b/meson.build @@ -2356,11 +2356,11 @@ dbus_display = get_option('dbus_display') \ .allowed() have_virtfs = get_option('virtfs') \ - .require(host_os == 'linux' or host_os == 'darwin', + .require(host_os == 'linux' or host_os == 'darwin' or host_os == 'emscripten', error_message: 'virtio-9p (virtfs) requires Linux or macOS') \ - .require(host_os == 'linux' or cc.has_function('pthread_fchdir_np'), + .require(host_os == 'linux' or host_os == 
'emscripten' or cc.has_function('pthread_fchdir_np'), error_message: 'virtio-9p (virtfs) on macOS requires the presence of pthread_fchdir_np') \ - .require(host_os == 'darwin' or libattr.found(), + .require(host_os == 'darwin' or host_os == 'emscripten' or libattr.found(), error_message: 'virtio-9p (virtfs) on Linux requires libattr-devel') \ .disable_auto_if(not have_tools and not have_system) \ .allowed() From patchwork Mon Apr 7 14:46:00 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kohei Tokunaga X-Patchwork-Id: 14041099 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.lore.kernel.org (Postfix) with ESMTPS id B2354C36010 for ; Mon, 7 Apr 2025 15:16:31 +0000 (UTC) Received: from localhost ([::1] helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1u1oBs-00038k-Q4; Mon, 07 Apr 2025 11:15:13 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1u1nmE-0002J1-EP; Mon, 07 Apr 2025 10:48:43 -0400 Received: from mail-pf1-x42f.google.com ([2607:f8b0:4864:20::42f]) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1u1nmC-0001lR-4X; Mon, 07 Apr 2025 10:48:42 -0400 Received: by mail-pf1-x42f.google.com with SMTP id d2e1a72fcca58-736bfa487c3so3591840b3a.1; Mon, 07 Apr 2025 07:48:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1744037317; x=1744642117; darn=nongnu.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; 
From: Kohei Tokunaga
To: qemu-devel@nongnu.org
Cc: Alex Bennée, Philippe Mathieu-Daudé, Thomas Huth, Richard Henderson, Paolo Bonzini, Kevin Wolf, Hanna Reitz, Kohei Tokunaga, Christian Schoenebeck, Greg Kurz, Palmer Dabbelt, Alistair Francis, Weiwei Li, Daniel Henrique Barboza, Liu Zhiwei, Marc-André Lureau, Daniel P. Berrangé, Eduardo Habkost, Peter Maydell, Stefan Hajnoczi, qemu-block@nongnu.org, qemu-riscv@nongnu.org, qemu-arm@nongnu.org
Subject: [PATCH 09/10] gitlab: Enable CI for wasm build
Date: Mon, 7 Apr 2025 23:46:00 +0900
Message-Id: <3cf17a9fb1ead58fb8be2d8782c793530cad07e2.1744032780.git.ktokunaga.mail@gmail.com>
Add a GitLab CI job that builds QEMU using Emscripten. The build runs in
the added Dockerfile, which contains the dependencies (glib, libffi,
pixman, zlib) compiled with Emscripten.

Signed-off-by: Kohei Tokunaga
---
 .gitlab-ci.d/buildtest-template.yml             |  27 ++++
 .gitlab-ci.d/buildtest.yml                      |   9 ++
 .gitlab-ci.d/container-cross.yml                |   5 +
 .../dockerfiles/emsdk-wasm32-cross.docker       | 145 ++++++++++++++++++
 4 files changed, 186 insertions(+)
 create mode 100644 tests/docker/dockerfiles/emsdk-wasm32-cross.docker

diff --git a/.gitlab-ci.d/buildtest-template.yml b/.gitlab-ci.d/buildtest-template.yml
index 39da7698b0..67167d68a5 100644
--- a/.gitlab-ci.d/buildtest-template.yml
+++ b/.gitlab-ci.d/buildtest-template.yml
@@ -126,3 +126,30 @@
     - du -chs ${CI_PROJECT_DIR}/*-cache
   variables:
     QEMU_JOB_AVOCADO: 1
+
+.wasm_build_job_template:
+  extends: .base_job_template
+  stage: build
+  image: $CI_REGISTRY_IMAGE/qemu/$IMAGE:$QEMU_CI_CONTAINER_TAG
+  before_script:
+    - source scripts/ci/gitlab-ci-section
+    - section_start setup "Pre-script setup"
+    - JOBS=$(expr $(nproc) + 1)
+    - section_end setup
+  script:
+    - du -sh .git
+    - mkdir build
+    - cd build
+    - section_start configure "Running configure"
+    - emconfigure ../configure --disable-docs
+        ${TARGETS:+--target-list="$TARGETS"}
+        $CONFIGURE_ARGS ||
+      { cat config.log meson-logs/meson-log.txt && exit 1; }
+    - if test -n "$LD_JOBS";
+      then
+        pyvenv/bin/meson configure . -Dbackend_max_links="$LD_JOBS" ;
+      fi || exit 1;
+    - section_end configure
+    - section_start build "Building QEMU"
+    - emmake make -j"$JOBS"
+    - section_end build
diff --git a/.gitlab-ci.d/buildtest.yml b/.gitlab-ci.d/buildtest.yml
index 00f4bfcd9f..0f4d15021f 100644
--- a/.gitlab-ci.d/buildtest.yml
+++ b/.gitlab-ci.d/buildtest.yml
@@ -801,3 +801,12 @@ coverity:
       when: never
     # Always manual on forks even if $QEMU_CI == "2"
     - when: manual
+
+build-wasm:
+  extends: .wasm_build_job_template
+  timeout: 2h
+  needs:
+    job: wasm-emsdk-cross-container
+  variables:
+    IMAGE: emsdk-wasm32-cross
+    CONFIGURE_ARGS: --static --disable-tools --enable-debug
diff --git a/.gitlab-ci.d/container-cross.yml b/.gitlab-ci.d/container-cross.yml
index 34c0e729ad..3ea4971950 100644
--- a/.gitlab-ci.d/container-cross.yml
+++ b/.gitlab-ci.d/container-cross.yml
@@ -94,3 +94,8 @@ win64-fedora-cross-container:
   extends: .container_job_template
   variables:
     NAME: fedora-win64-cross
+
+wasm-emsdk-cross-container:
+  extends: .container_job_template
+  variables:
+    NAME: emsdk-wasm32-cross
diff --git a/tests/docker/dockerfiles/emsdk-wasm32-cross.docker b/tests/docker/dockerfiles/emsdk-wasm32-cross.docker
new file mode 100644
index 0000000000..60a7d02f56
--- /dev/null
+++ b/tests/docker/dockerfiles/emsdk-wasm32-cross.docker
@@ -0,0 +1,145 @@
+# syntax = docker/dockerfile:1.5
+
+ARG EMSDK_VERSION_QEMU=3.1.50
+ARG ZLIB_VERSION=1.3.1
+ARG GLIB_MINOR_VERSION=2.84
+ARG GLIB_VERSION=${GLIB_MINOR_VERSION}.0
+ARG PIXMAN_VERSION=0.44.2
+ARG FFI_VERSION=v3.4.7
+ARG MESON_VERSION=1.5.0
+
+FROM emscripten/emsdk:$EMSDK_VERSION_QEMU AS build-base
+ARG MESON_VERSION
+ENV TARGET=/builddeps/target
+ENV CPATH="$TARGET/include"
+ENV PKG_CONFIG_PATH="$TARGET/lib/pkgconfig"
+ENV EM_PKG_CONFIG_PATH="$PKG_CONFIG_PATH"
+ENV CFLAGS="-O3 -pthread -DWASM_BIGINT"
+ENV CXXFLAGS="$CFLAGS"
+ENV LDFLAGS="-sWASM_BIGINT -sASYNCIFY=1 -L$TARGET/lib"
+RUN apt-get update && apt-get install -y \
+    autoconf \
+    build-essential \
+    libglib2.0-dev \
+    libtool \
+    pkgconf \
+    ninja-build \
+    python3-pip
+RUN pip3 install meson==${MESON_VERSION} tomli
+RUN mkdir /build
+WORKDIR /build
+RUN mkdir -p $TARGET
+RUN <<EOF
+cat <<EOT > /cross.meson
+[host_machine]
+system = 'emscripten'
+cpu_family = 'wasm32'
+cpu = 'wasm32'
+endian = 'little'
+
+[binaries]
+c = 'emcc'
+cpp = 'em++'
+ar = 'emar'
+ranlib = 'emranlib'
+pkgconfig = ['pkg-config', '--static']
+EOT
+EOF
+
+FROM build-base AS zlib-dev
+ARG ZLIB_VERSION
+RUN mkdir -p /zlib
+RUN curl -Ls https://zlib.net/zlib-$ZLIB_VERSION.tar.xz | \
+    tar xJC /zlib --strip-components=1
+WORKDIR /zlib
+RUN emconfigure ./configure --prefix=$TARGET --static
+RUN emmake make install -j$(nproc)
+
+FROM build-base AS libffi-dev
+ARG FFI_VERSION
+RUN mkdir -p /libffi
+RUN git clone https://github.com/libffi/libffi /libffi
+WORKDIR /libffi
+RUN git checkout $FFI_VERSION
+RUN autoreconf -fiv
+RUN emconfigure ./configure --host=wasm32-unknown-linux \
+    --prefix=$TARGET --enable-static \
+    --disable-shared --disable-dependency-tracking \
+    --disable-builddir --disable-multi-os-directory \
+    --disable-raw-api --disable-docs
+RUN emmake make install SUBDIRS='include' -j$(nproc)
+
+FROM build-base AS pixman-dev
+ARG PIXMAN_VERSION
+RUN mkdir /pixman/
+RUN git clone https://gitlab.freedesktop.org/pixman/pixman /pixman/
+WORKDIR /pixman
+RUN git checkout pixman-$PIXMAN_VERSION
+RUN <<EOF
+cat <<EOT >> /cross.meson
+[built-in options]
+c_args = [$(printf "'%s', " $CFLAGS | sed 's/, $//')]
+cpp_args = [$(printf "'%s', " $CFLAGS | sed 's/, $//')]
+objc_args = [$(printf "'%s', " $CFLAGS | sed 's/, $//')]
+c_link_args = [$(printf "'%s', " $LDFLAGS | sed 's/, $//')]
+cpp_link_args = [$(printf "'%s', " $LDFLAGS | sed 's/, $//')]
+EOT
+EOF
+RUN meson setup _build --prefix=$TARGET --cross-file=/cross.meson \
+    --default-library=static \
+    --buildtype=release -Dtests=disabled -Ddemos=disabled
+RUN meson install -C _build
+
+FROM build-base AS glib-dev
+ARG GLIB_VERSION
+ARG GLIB_MINOR_VERSION
+RUN mkdir -p /stub
+WORKDIR /stub
+RUN <<EOF
+cat <<EOT > res_query.c
+#include
+int res_query(const char *name, int class,
+              int type, unsigned char *dest, int len)
+{
+    h_errno = HOST_NOT_FOUND;
+    return -1;
+}
+EOT
+EOF
+RUN emcc ${CFLAGS} -c res_query.c -fPIC -o libresolv.o
+RUN ar rcs libresolv.a libresolv.o
+RUN mkdir -p $TARGET/lib/
+RUN cp libresolv.a $TARGET/lib/
+
+RUN mkdir -p /glib
+RUN curl -Lks https://download.gnome.org/sources/glib/${GLIB_MINOR_VERSION}/glib-$GLIB_VERSION.tar.xz | \
+    tar xJC /glib --strip-components=1
+
+COPY --link --from=zlib-dev /builddeps/ /builddeps/
+COPY --link --from=libffi-dev /builddeps/ /builddeps/
+
+WORKDIR /glib
+RUN <<EOF
+cat <<EOT >> /cross.meson
+[built-in options]
+c_args = [$(printf "'%s', " $CFLAGS | sed 's/, $//')]
+cpp_args = [$(printf "'%s', " $CFLAGS | sed 's/, $//')]
+objc_args = [$(printf "'%s', " $CFLAGS | sed 's/, $//')]
+c_link_args = [$(printf "'%s', " $LDFLAGS | sed 's/, $//')]
+cpp_link_args = [$(printf "'%s', " $LDFLAGS | sed 's/, $//')]
+EOT
+EOF
+RUN meson setup _build --prefix=$TARGET --cross-file=/cross.meson \
+    --default-library=static --buildtype=release --force-fallback-for=pcre2 \
+    -Dselinux=disabled -Dxattr=false -Dlibmount=disabled -Dnls=disabled \
+    -Dtests=false -Dglib_debug=disabled -Dglib_assert=false -Dglib_checks=false
+# FIXME: emscripten doesn't provide some pthread functions in the final link,
+# which isn't detected during meson setup.
+RUN sed -i -E "/#define HAVE_POSIX_SPAWN 1/d" ./_build/config.h
+RUN sed -i -E "/#define HAVE_PTHREAD_GETNAME_NP 1/d" ./_build/config.h
+RUN meson install -C _build
+
+FROM build-base
+COPY --link --from=glib-dev /builddeps/ /builddeps/
+COPY --link --from=pixman-dev /builddeps/ /builddeps/

From patchwork Mon Apr 7 14:46:01 2025
From: Kohei Tokunaga
To: qemu-devel@nongnu.org
Cc: Alex Bennée, Philippe Mathieu-Daudé, Thomas Huth, Richard Henderson, Paolo Bonzini, Kevin Wolf, Hanna Reitz, Kohei Tokunaga, Christian Schoenebeck, Greg Kurz, Palmer Dabbelt, Alistair Francis, Weiwei Li, Daniel Henrique Barboza, Liu Zhiwei, Marc-André Lureau, Daniel P. Berrangé, Eduardo Habkost, Peter Maydell, Stefan Hajnoczi, qemu-block@nongnu.org, qemu-riscv@nongnu.org, qemu-arm@nongnu.org
Subject: [PATCH 10/10] MAINTAINERS: Update MAINTAINERS file for wasm-related files
Date: Mon, 7 Apr 2025 23:46:01 +0900

Signed-off-by: Kohei Tokunaga
---
 MAINTAINERS | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index d54b5578f8..ea5fde475c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3903,6 +3903,17 @@ F: tcg/tci/
 F: tcg/tci.c
 F: disas/tci.c

+WebAssembly TCG target
+M: Kohei Tokunaga
+S: Maintained
+F: configs/meson/emscripten.txt
+F: hw/9pfs/9p-util-stub.c
+F: tcg/wasm32/
+F: tcg/wasm32.c
+F: tcg/wasm32.h
+F: tests/docker/dockerfiles/emsdk-wasm32-cross.docker
+F: util/coroutine-fiber.c
+
 Block drivers
 -------------
 VMDK