From patchwork Fri Jun  3 20:40:25 2016
X-Patchwork-Submitter: Alex Bennée <alex.bennee@linaro.org>
X-Patchwork-Id: 9154051
From: Alex Bennée <alex.bennee@linaro.org>
To: mttcg@listserver.greensocs.com, qemu-devel@nongnu.org,
    fred.konrad@greensocs.com, a.rigo@virtualopensystems.com,
    serge.fdrv@gmail.com, cota@braap.org, bobby.prani@gmail.com
Cc: peter.maydell@linaro.org, Peter Crosthwaite,
    claudio.fontana@huawei.com, mark.burton@greensocs.com,
    jan.kiszka@siemens.com, pbonzini@redhat.com,
    Alex Bennée <alex.bennee@linaro.org>, rth@twiddle.net
Date: Fri, 3 Jun 2016 21:40:25 +0100
Message-Id: <1464986428-6739-17-git-send-email-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1464986428-6739-1-git-send-email-alex.bennee@linaro.org>
References: <1464986428-6739-1-git-send-email-alex.bennee@linaro.org>
Subject: [Qemu-devel] [RFC v3 16/19] tcg: move locking for tb_invalidate_phys_page_range up

We previously assumed an existing memory lock protected the page
look-up; in the MTTCG SoftMMU case, however, that protection is
provided by the tb_lock. As a result we push the taking of this lock
up the call tree. This requires slightly different entry points into
tb_invalidate_phys_range for the SoftMMU and user-mode cases.

This also means user-mode breakpoint insertion now needs to take two
locks, but as it previously took none this is an improvement.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
---
 exec.c          | 16 ++++++++++++++++
 translate-all.c | 37 +++++++++++++++++++++++++++++--------
 2 files changed, 45 insertions(+), 8 deletions(-)

diff --git a/exec.c b/exec.c
index b7744b9..8bb7481 100644
--- a/exec.c
+++ b/exec.c
@@ -734,7 +734,11 @@ void cpu_exec_init(CPUState *cpu, Error **errp)
 #if defined(CONFIG_USER_ONLY)
 static void breakpoint_invalidate(CPUState *cpu, target_ulong pc)
 {
+    mmap_lock();
+    tb_lock();
     tb_invalidate_phys_page_range(pc, pc + 1, 0);
+    tb_unlock();
+    mmap_unlock();
 }
 #else
 static void breakpoint_invalidate(CPUState *cpu, target_ulong pc)
@@ -743,6 +747,7 @@ static void breakpoint_invalidate(CPUState *cpu, target_ulong pc)
     hwaddr phys = cpu_get_phys_page_attrs_debug(cpu, pc, &attrs);
     int asidx = cpu_asidx_from_attrs(cpu, attrs);
     if (phys != -1) {
+        /* Locks grabbed by tb_invalidate_phys_addr */
         tb_invalidate_phys_addr(cpu->cpu_ases[asidx].as,
                                 phys | (pc & ~TARGET_PAGE_MASK));
     }
@@ -2072,7 +2077,11 @@ MemoryRegion *qemu_ram_addr_from_host(void *ptr, ram_addr_t *ram_addr)
 static void notdirty_mem_write(void *opaque, hwaddr ram_addr,
                                uint64_t val, unsigned size)
 {
+    bool locked = false;
+
     if (!cpu_physical_memory_get_dirty_flag(ram_addr, DIRTY_MEMORY_CODE)) {
+        locked = true;
+        tb_lock();
         tb_invalidate_phys_page_fast(ram_addr, size);
     }
     switch (size) {
@@ -2088,6 +2097,11 @@ static void notdirty_mem_write(void *opaque, hwaddr ram_addr,
     default:
         abort();
     }
+
+    if (locked) {
+        tb_unlock();
+    }
+
     /* Set both VGA and migration bits for simplicity and to remove
      * the notdirty callback faster.
      */
@@ -2566,7 +2580,9 @@ static void invalidate_and_set_dirty(MemoryRegion *mr, hwaddr addr,
         cpu_physical_memory_range_includes_clean(addr, length, dirty_log_mask);
     }
     if (dirty_log_mask & (1 << DIRTY_MEMORY_CODE)) {
+        tb_lock();
         tb_invalidate_phys_range(addr, addr + length);
+        tb_unlock();
         dirty_log_mask &= ~(1 << DIRTY_MEMORY_CODE);
     }
     cpu_physical_memory_set_dirty_range(addr, length, dirty_log_mask);
diff --git a/translate-all.c b/translate-all.c
index 818520e..4bc5718 100644
--- a/translate-all.c
+++ b/translate-all.c
@@ -1355,12 +1355,11 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
  * access: the virtual CPU will exit the current TB if code is modified inside
  * this TB.
  *
- * Called with mmap_lock held for user-mode emulation
+ * Called with mmap_lock held for user-mode emulation, grabs tb_lock
+ * Called with tb_lock held for system-mode emulation
  */
-void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
+static void tb_invalidate_phys_range_1(tb_page_addr_t start, tb_page_addr_t end)
 {
-    assert_memory_lock();
-
     while (start < end) {
         tb_invalidate_phys_page_range(start, end, 0);
         start &= TARGET_PAGE_MASK;
@@ -1368,6 +1367,21 @@ void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
     }
 }
 
+#ifdef CONFIG_SOFTMMU
+void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
+{
+    assert_tb_lock();
+    tb_invalidate_phys_range_1(start, end);
+}
+#else
+void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
+{
+    assert_memory_lock();
+    tb_lock();
+    tb_invalidate_phys_range_1(start, end);
+    tb_unlock();
+}
+#endif
 /*
  * Invalidate all TBs which intersect with the target physical address range
  * [start;end[. NOTE: start and end must refer to the *same* physical page.
@@ -1375,7 +1389,8 @@ void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
  * access: the virtual CPU will exit the current TB if code is modified inside
  * this TB.
  *
- * Called with mmap_lock held for user-mode emulation
+ * Called with tb_lock/mmap_lock held for user-mode emulation
+ * Called with tb_lock held for system-mode emulation
  */
 void tb_invalidate_phys_page_range(tb_page_addr_t start, tb_page_addr_t end,
                                    int is_cpu_write_access)
@@ -1398,6 +1413,7 @@ void tb_invalidate_phys_page_range(tb_page_addr_t start, tb_page_addr_t end,
 #endif /* TARGET_HAS_PRECISE_SMC */
 
     assert_memory_lock();
+    assert_tb_lock();
 
     p = page_find(start >> TARGET_PAGE_BITS);
     if (!p) {
@@ -1412,7 +1428,6 @@ void tb_invalidate_phys_page_range(tb_page_addr_t start, tb_page_addr_t end,
     /* we remove all the TBs in the range [start, end[ */
     /* XXX: see if in some cases it could be faster to
        invalidate all the code */
-    tb_lock();
     tb = p->first_tb;
     while (tb != NULL) {
         n = (uintptr_t)tb & 3;
@@ -1472,12 +1487,12 @@ void tb_invalidate_phys_page_range(tb_page_addr_t start, tb_page_addr_t end,
         cpu_resume_from_signal(cpu, NULL);
     }
 #endif
-    tb_unlock();
 }
 
 #ifdef CONFIG_SOFTMMU
 /* len must be <= 8 and start must be a multiple of len.
- * Called via softmmu_template.h, with iothread mutex not held.
+ * Called via softmmu_template.h when code areas are written to with
+ * tb_lock held.
  */
 void tb_invalidate_phys_page_fast(tb_page_addr_t start, int len)
 {
@@ -1492,6 +1507,8 @@ void tb_invalidate_phys_page_fast(tb_page_addr_t start, int len)
                   (intptr_t)cpu_single_env->segs[R_CS].base);
     }
 #endif
+    assert_memory_lock();
+
     p = page_find(start >> TARGET_PAGE_BITS);
     if (!p) {
         return;
@@ -1536,6 +1553,8 @@ static void tb_invalidate_phys_page(tb_page_addr_t addr,
     uint32_t current_flags = 0;
 #endif
 
+    assert_memory_lock();
+
     addr &= TARGET_PAGE_MASK;
     p = page_find(addr >> TARGET_PAGE_BITS);
     if (!p) {
@@ -1641,7 +1660,9 @@ void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr)
         return;
     }
     ram_addr = memory_region_get_ram_addr(mr) + addr;
+    tb_lock();
     tb_invalidate_phys_page_range(ram_addr, ram_addr + 1, 0);
+    tb_unlock();
     rcu_read_unlock();
 }
 #endif /* !defined(CONFIG_USER_ONLY) */
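
For readers following the series, the lock discipline this patch converges on can be summarised in a small stand-alone sketch. This is not QEMU code: the two pthread mutexes merely stand in for mmap_lock and tb_lock, and drop_translations_in_range() is a hypothetical placeholder for tb_invalidate_phys_page_range(), which after this patch asserts the locks rather than taking tb_lock itself.

/* Minimal sketch of the lock ordering established by this patch.
 * Assumptions: plain pthreads; all names below are illustrative
 * stand-ins, not QEMU APIs. */
#include <pthread.h>

static pthread_mutex_t mmap_mutex = PTHREAD_MUTEX_INITIALIZER; /* ~ mmap_lock */
static pthread_mutex_t tb_mutex   = PTHREAD_MUTEX_INITIALIZER; /* ~ tb_lock */

/* Placeholder for tb_invalidate_phys_page_range(): the callee no
 * longer takes tb_lock, it relies on the caller holding it
 * (cf. assert_tb_lock() in the patch). */
static void drop_translations_in_range(unsigned long start, unsigned long end)
{
    (void)start;
    (void)end;
    /* ... unlink and free the translation blocks in [start, end[ ... */
}

/* User-mode path, cf. breakpoint_invalidate(): mmap lock first, then
 * the tb lock, released in reverse order. */
static void user_mode_invalidate(unsigned long pc)
{
    pthread_mutex_lock(&mmap_mutex);
    pthread_mutex_lock(&tb_mutex);
    drop_translations_in_range(pc, pc + 1);
    pthread_mutex_unlock(&tb_mutex);
    pthread_mutex_unlock(&mmap_mutex);
}

/* SoftMMU path, cf. notdirty_mem_write(): only the tb lock is needed,
 * as system-mode emulation has no guest mmap lock. */
static void softmmu_invalidate(unsigned long addr, unsigned long len)
{
    pthread_mutex_lock(&tb_mutex);
    drop_translations_in_range(addr, addr + len);
    pthread_mutex_unlock(&tb_mutex);
}

The key invariant is that mmap_lock, where it exists, is always acquired before tb_lock and released after it, so the user-mode and SoftMMU entry points cannot deadlock against each other.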