From patchwork Fri Jan 29 09:32:31 2016
X-Patchwork-Submitter: alvise rigo
X-Patchwork-Id: 8160911
From: Alvise Rigo
To: qemu-devel@nongnu.org, mttcg@listserver.greensocs.com
Cc: claudio.fontana@huawei.com, pbonzini@redhat.com, jani.kokkonen@huawei.com,
    tech@virtualopensystems.com, alex.bennee@linaro.org, rth@twiddle.net
Date: Fri, 29 Jan 2016 10:32:31 +0100
Message-Id: <1454059965-23402-3-git-send-email-a.rigo@virtualopensystems.com>
In-Reply-To: <1454059965-23402-1-git-send-email-a.rigo@virtualopensystems.com>
References: <1454059965-23402-1-git-send-email-a.rigo@virtualopensystems.com>
Subject: [Qemu-devel] [RFC v7 02/16] softmmu: Simplify helper_*_st_name,
 wrap unaligned code

To simplify the helper_*_st_name helpers, wrap the do_unaligned_access
code in an inline function and also remove the goto statement.

Based on this work, Alex proposed the following patch series, which
further reduces code duplication among the softmmu helpers:
https://lists.gnu.org/archive/html/qemu-devel/2016-01/msg01136.html
Suggested-by: Jani Kokkonen
Suggested-by: Claudio Fontana
Signed-off-by: Alvise Rigo
Reviewed-by: Alex Bennée
---
 softmmu_template.h | 96 ++++++++++++++++++++++++++++++++++--------------------
 1 file changed, 60 insertions(+), 36 deletions(-)

diff --git a/softmmu_template.h b/softmmu_template.h
index 208f808..7029a03 100644
--- a/softmmu_template.h
+++ b/softmmu_template.h
@@ -370,6 +370,32 @@ static inline void glue(io_write, SUFFIX)(CPUArchState *env,
                                          iotlbentry->attrs);
 }
 
+static inline void glue(helper_le_st_name, _do_unl_access)(CPUArchState *env,
+                                                           DATA_TYPE val,
+                                                           target_ulong addr,
+                                                           TCGMemOpIdx oi,
+                                                           unsigned mmu_idx,
+                                                           uintptr_t retaddr)
+{
+    int i;
+
+    if ((get_memop(oi) & MO_AMASK) == MO_ALIGN) {
+        cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE,
+                             mmu_idx, retaddr);
+    }
+    /* XXX: not efficient, but simple */
+    /* Note: relies on the fact that tlb_fill() does not remove the
+     * previous page from the TLB cache. */
+    for (i = DATA_SIZE - 1; i >= 0; i--) {
+        /* Little-endian extract.  */
+        uint8_t val8 = val >> (i * 8);
+        /* Note the adjustment at the beginning of the function.
+           Undo that for the recursion.  */
+        glue(helper_ret_stb, MMUSUFFIX)(env, addr + i, val8,
+                                        oi, retaddr + GETPC_ADJ);
+    }
+}
+
 void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
                        TCGMemOpIdx oi, uintptr_t retaddr)
 {
@@ -399,7 +425,8 @@ void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
     if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
         CPUIOTLBEntry *iotlbentry;
         if ((addr & (DATA_SIZE - 1)) != 0) {
-            goto do_unaligned_access;
+            glue(helper_le_st_name, _do_unl_access)(env, val, addr, oi,
+                                                    mmu_idx, retaddr);
         }
         iotlbentry = &env->iotlb[mmu_idx][index];
@@ -414,23 +441,8 @@ void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
     if (DATA_SIZE > 1
         && unlikely((addr & ~TARGET_PAGE_MASK) + DATA_SIZE - 1
                     >= TARGET_PAGE_SIZE)) {
-        int i;
-    do_unaligned_access:
-        if ((get_memop(oi) & MO_AMASK) == MO_ALIGN) {
-            cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE,
-                                 mmu_idx, retaddr);
-        }
-        /* XXX: not efficient, but simple */
-        /* Note: relies on the fact that tlb_fill() does not remove the
-         * previous page from the TLB cache. */
-        for (i = DATA_SIZE - 1; i >= 0; i--) {
-            /* Little-endian extract.  */
-            uint8_t val8 = val >> (i * 8);
-            /* Note the adjustment at the beginning of the function.
-               Undo that for the recursion.  */
-            glue(helper_ret_stb, MMUSUFFIX)(env, addr + i, val8,
-                                            oi, retaddr + GETPC_ADJ);
-        }
+        glue(helper_le_st_name, _do_unl_access)(env, val, addr, oi,
+                                                mmu_idx, retaddr);
         return;
     }
@@ -450,6 +462,32 @@ void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
 }
 
 #if DATA_SIZE > 1
+static inline void glue(helper_be_st_name, _do_unl_access)(CPUArchState *env,
+                                                           DATA_TYPE val,
+                                                           target_ulong addr,
+                                                           TCGMemOpIdx oi,
+                                                           unsigned mmu_idx,
+                                                           uintptr_t retaddr)
+{
+    int i;
+
+    if ((get_memop(oi) & MO_AMASK) == MO_ALIGN) {
+        cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE,
+                             mmu_idx, retaddr);
+    }
+    /* XXX: not efficient, but simple */
+    /* Note: relies on the fact that tlb_fill() does not remove the
+     * previous page from the TLB cache. */
+    for (i = DATA_SIZE - 1; i >= 0; i--) {
+        /* Big-endian extract.  */
+        uint8_t val8 = val >> (((DATA_SIZE - 1) * 8) - (i * 8));
+        /* Note the adjustment at the beginning of the function.
+           Undo that for the recursion.  */
+        glue(helper_ret_stb, MMUSUFFIX)(env, addr + i, val8,
+                                        oi, retaddr + GETPC_ADJ);
+    }
+}
+
 void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
                        TCGMemOpIdx oi, uintptr_t retaddr)
 {
@@ -479,7 +517,8 @@ void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
     if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
         CPUIOTLBEntry *iotlbentry;
         if ((addr & (DATA_SIZE - 1)) != 0) {
-            goto do_unaligned_access;
+            glue(helper_be_st_name, _do_unl_access)(env, val, addr, oi,
+                                                    mmu_idx, retaddr);
         }
         iotlbentry = &env->iotlb[mmu_idx][index];
@@ -494,23 +533,8 @@ void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
     if (DATA_SIZE > 1
         && unlikely((addr & ~TARGET_PAGE_MASK) + DATA_SIZE - 1
                     >= TARGET_PAGE_SIZE)) {
-        int i;
-    do_unaligned_access:
-        if ((get_memop(oi) & MO_AMASK) == MO_ALIGN) {
-            cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE,
-                                 mmu_idx, retaddr);
-        }
-        /* XXX: not efficient, but simple */
-        /* Note: relies on the fact that tlb_fill() does not remove the
-         * previous page from the TLB cache. */
-        for (i = DATA_SIZE - 1; i >= 0; i--) {
-            /* Big-endian extract.  */
-            uint8_t val8 = val >> (((DATA_SIZE - 1) * 8) - (i * 8));
-            /* Note the adjustment at the beginning of the function.
-               Undo that for the recursion.  */
-            glue(helper_ret_stb, MMUSUFFIX)(env, addr + i, val8,
-                                            oi, retaddr + GETPC_ADJ);
-        }
+        glue(helper_be_st_name, _do_unl_access)(env, val, addr, oi,
+                                                mmu_idx, retaddr);
         return;
     }