From patchwork Fri Jul  8 20:38:38 2016
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 9221787
From: Richard Henderson
To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, Samuel Damashek
Date: Fri, 8 Jul 2016 13:38:38 -0700
Message-Id: <1468010319-16494-4-git-send-email-rth@twiddle.net>
In-Reply-To: <1468010319-16494-1-git-send-email-rth@twiddle.net>
References: <1468010319-16494-1-git-send-email-rth@twiddle.net>
Subject: [Qemu-devel] [PULL 3/4] cputlb: Fix for self-modifying writes across page boundaries

From: Samuel Damashek

As it currently stands, QEMU does not properly handle self-modifying code when the write is unaligned and crosses a page boundary. The procedure for handling a write to the current translation block (TB) is: write-protect the current TB, catch the write, split the TB into the current instruction (which remains write-protected so that the current instruction is not modified) and the remaining instructions of the TB, and then restore the CPU state to before the write occurred, so that the write is retried and executes successfully.
However, since unaligned writes across pages are split into one-byte writes for simplicity, writes to the second page (which is not the current TB) may succeed before a write to the current TB is attempted. Because these writes are not invalidated before the CPU state is restored after splitting the TB, they are performed a second time when the write is retried, corrupting the second page. Credit goes to Patrick Hulin for discovering this.

In recent 64-bit versions of Windows running in emulated mode, this makes the guest either very unstable (a BSOD after a couple of minutes of uptime) or entirely unable to boot. Windows performs one or more 8-byte unaligned self-modifying writes (xors) which intersect the end of the current TB and the beginning of the next TB, hitting the issue described above.

This commit fixes the issue by making the unaligned write loop perform the writes in forward order instead of reverse order. That way, QEMU immediately attempts the write to the current TB and splits the TB before any write to the second page is executed; the write then proceeds as intended. With this patch applied, I am able to boot and use Windows 7 64-bit and Windows 10 64-bit in QEMU without KVM.

Per Richard Henderson's input, this patch also fills the TLB for the second page before executing the write loop, ensuring that the second page is mapped.

The original discussion of the issue is located at http://lists.nongnu.org/archive/html/qemu-devel/2014-08/msg02161.html.
Signed-off-by: Samuel Damashek
Message-Id: <20160706182652.16190-1-samuel.damashek@invincea.com>
Signed-off-by: Richard Henderson
---
 softmmu_template.h | 44 +++++++++++++++++++++++++++++++++++---------
 1 file changed, 35 insertions(+), 9 deletions(-)

diff --git a/softmmu_template.h b/softmmu_template.h
index aeab016..284ab2c 100644
--- a/softmmu_template.h
+++ b/softmmu_template.h
@@ -370,12 +370,25 @@ void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
     if (DATA_SIZE > 1
         && unlikely((addr & ~TARGET_PAGE_MASK) + DATA_SIZE - 1
                     >= TARGET_PAGE_SIZE)) {
-        int i;
+        int i, index2;
+        target_ulong page2, tlb_addr2;
     do_unaligned_access:
-        /* XXX: not efficient, but simple */
-        /* Note: relies on the fact that tlb_fill() does not remove the
-         * previous page from the TLB cache.  */
-        for (i = DATA_SIZE - 1; i >= 0; i--) {
+        /* Ensure the second page is in the TLB.  Note that the first page
+           is already guaranteed to be filled, and that the second page
+           cannot evict the first.  */
+        page2 = (addr + DATA_SIZE) & TARGET_PAGE_MASK;
+        index2 = (page2 >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
+        tlb_addr2 = env->tlb_table[mmu_idx][index2].addr_write;
+        if (page2 != (tlb_addr2 & (TARGET_PAGE_MASK | TLB_INVALID_MASK))
+            && !VICTIM_TLB_HIT(addr_write, page2)) {
+            tlb_fill(ENV_GET_CPU(env), page2, MMU_DATA_STORE,
+                     mmu_idx, retaddr);
+        }
+
+        /* XXX: not efficient, but simple.  */
+        /* This loop must go in the forward direction to avoid issues
+           with self-modifying code in Windows 64-bit.  */
+        for (i = 0; i < DATA_SIZE; ++i) {
             /* Little-endian extract.  */
             uint8_t val8 = val >> (i * 8);
             /* Note the adjustment at the beginning of the function.
@@ -440,12 +453,25 @@ void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
     if (DATA_SIZE > 1
         && unlikely((addr & ~TARGET_PAGE_MASK) + DATA_SIZE - 1
                     >= TARGET_PAGE_SIZE)) {
-        int i;
+        int i, index2;
+        target_ulong page2, tlb_addr2;
     do_unaligned_access:
+        /* Ensure the second page is in the TLB.  Note that the first page
+           is already guaranteed to be filled, and that the second page
+           cannot evict the first.  */
+        page2 = (addr + DATA_SIZE) & TARGET_PAGE_MASK;
+        index2 = (page2 >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
+        tlb_addr2 = env->tlb_table[mmu_idx][index2].addr_write;
+        if (page2 != (tlb_addr2 & (TARGET_PAGE_MASK | TLB_INVALID_MASK))
+            && !VICTIM_TLB_HIT(addr_write, page2)) {
+            tlb_fill(ENV_GET_CPU(env), page2, MMU_DATA_STORE,
+                     mmu_idx, retaddr);
+        }
+
         /* XXX: not efficient, but simple */
-        /* Note: relies on the fact that tlb_fill() does not remove the
-         * previous page from the TLB cache.  */
-        for (i = DATA_SIZE - 1; i >= 0; i--) {
+        /* This loop must go in the forward direction to avoid issues
+           with self-modifying code.  */
+        for (i = 0; i < DATA_SIZE; ++i) {
             /* Big-endian extract.  */
             uint8_t val8 = val >> (((DATA_SIZE - 1) * 8) - (i * 8));
             /* Note the adjustment at the beginning of the function.