From patchwork Fri Jan 29 09:32:39 2016
X-Patchwork-Submitter: alvise rigo
X-Patchwork-Id: 8161021
From: Alvise Rigo
To: qemu-devel@nongnu.org, mttcg@listserver.greensocs.com
Cc: claudio.fontana@huawei.com, pbonzini@redhat.com, jani.kokkonen@huawei.com,
 tech@virtualopensystems.com, alex.bennee@linaro.org, rth@twiddle.net
Date: Fri, 29 Jan 2016 10:32:39 +0100
Message-Id: <1454059965-23402-11-git-send-email-a.rigo@virtualopensystems.com>
In-Reply-To: <1454059965-23402-1-git-send-email-a.rigo@virtualopensystems.com>
References: <1454059965-23402-1-git-send-email-a.rigo@virtualopensystems.com>
Subject: [Qemu-devel] [RFC v7 10/16] softmmu: Protect MMIO exclusive range

As in the RAM case, MMIO exclusive ranges also have to be protected from
other CPUs' accesses. To do so, we flag the accessed MemoryRegion to mark
that an exclusive access has been performed and has not yet concluded.
This flag forces the other CPUs to invalidate the exclusive range in case
of collision.
Suggested-by: Jani Kokkonen
Suggested-by: Claudio Fontana
Signed-off-by: Alvise Rigo
---
 cputlb.c                | 20 +++++++++++++-------
 include/exec/memory.h   |  1 +
 softmmu_llsc_template.h | 11 +++++++----
 softmmu_template.h      | 22 ++++++++++++++++++++++
 4 files changed, 43 insertions(+), 11 deletions(-)

diff --git a/cputlb.c b/cputlb.c
index 87d09c8..06ce2da 100644
--- a/cputlb.c
+++ b/cputlb.c
@@ -496,19 +496,25 @@ tb_page_addr_t get_page_addr_code(CPUArchState *env1, target_ulong addr)
 /* For every vCPU compare the exclusive address and reset it in case of a
  * match. Since only one vCPU is running at once, no lock has to be held to
  * guard this operation. */
-static inline void lookup_and_reset_cpus_ll_addr(hwaddr addr, hwaddr size)
+static inline bool lookup_and_reset_cpus_ll_addr(hwaddr addr, hwaddr size)
 {
     CPUState *cpu;
+    bool ret = false;
 
     CPU_FOREACH(cpu) {
-        if (cpu->excl_protected_range.begin != EXCLUSIVE_RESET_ADDR &&
-            ranges_overlap(cpu->excl_protected_range.begin,
-                           cpu->excl_protected_range.end -
-                           cpu->excl_protected_range.begin,
-                           addr, size)) {
-            cpu->excl_protected_range.begin = EXCLUSIVE_RESET_ADDR;
+        if (current_cpu != cpu) {
+            if (cpu->excl_protected_range.begin != EXCLUSIVE_RESET_ADDR &&
+                ranges_overlap(cpu->excl_protected_range.begin,
+                               cpu->excl_protected_range.end -
+                               cpu->excl_protected_range.begin,
+                               addr, size)) {
+                cpu->excl_protected_range.begin = EXCLUSIVE_RESET_ADDR;
+                ret = true;
+            }
         }
     }
+
+    return ret;
 }
 
 #define MMUSUFFIX _mmu

diff --git a/include/exec/memory.h b/include/exec/memory.h
index 71e0480..bacb3ad 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -171,6 +171,7 @@ struct MemoryRegion {
     bool rom_device;
     bool flush_coalesced_mmio;
     bool global_locking;
+    bool pending_excl_access; /* A vCPU issued an exclusive access */
     uint8_t dirty_log_mask;
     ram_addr_t ram_addr;
     Object *owner;

diff --git a/softmmu_llsc_template.h b/softmmu_llsc_template.h
index 101f5e8..b4712ba 100644
--- a/softmmu_llsc_template.h
+++ b/softmmu_llsc_template.h
@@ -81,15 +81,18 @@ WORD_TYPE helper_ldlink_name(CPUArchState *env, target_ulong addr,
             }
         }
     }
+        /* For this vCPU, just update the TLB entry, no need to flush. */
+        env->tlb_table[mmu_idx][index].addr_write |= TLB_EXCL;
     } else {
-        hw_error("EXCL accesses to MMIO regions not supported yet.");
+        /* Set a pending exclusive access in the MemoryRegion */
+        MemoryRegion *mr = iotlb_to_region(this,
+                                           env->iotlb[mmu_idx][index].addr,
+                                           env->iotlb[mmu_idx][index].attrs);
+        mr->pending_excl_access = true;
     }
 
     cc->cpu_set_excl_protected_range(this, hw_addr, DATA_SIZE);
 
-    /* For this vCPU, just update the TLB entry, no need to flush. */
-    env->tlb_table[mmu_idx][index].addr_write |= TLB_EXCL;
-
     /* From now on we are in LL/SC context */
     this->ll_sc_context = true;

diff --git a/softmmu_template.h b/softmmu_template.h
index c54bdc9..71c5152 100644
--- a/softmmu_template.h
+++ b/softmmu_template.h
@@ -360,6 +360,14 @@ static inline void glue(io_write, SUFFIX)(CPUArchState *env,
     MemoryRegion *mr = iotlb_to_region(cpu, physaddr, iotlbentry->attrs);
 
     physaddr = (physaddr & TARGET_PAGE_MASK) + addr;
+
+    /* Invalidate the exclusive range that overlaps this access */
+    if (mr->pending_excl_access) {
+        if (lookup_and_reset_cpus_ll_addr(physaddr, 1 << SHIFT)) {
+            mr->pending_excl_access = false;
+        }
+    }
+
     if (mr != &io_mem_rom && mr != &io_mem_notdirty && !cpu->can_do_io) {
         cpu_io_recompile(cpu, retaddr);
     }
@@ -504,6 +512,13 @@ void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
         glue(helper_le_st_name, _do_mmio_access)(env, val, addr, oi,
                                                  mmu_idx, index, retaddr);
+        /* N.B.: Here excl_succeeded == true means that this access
+         * comes from an exclusive instruction. */
+        if (cpu->excl_succeeded) {
+            MemoryRegion *mr = iotlb_to_region(cpu, iotlbentry->addr,
+                                               iotlbentry->attrs);
+            mr->pending_excl_access = false;
+        }
     } else {
         glue(helper_le_st_name, _do_ram_access)(env, val, addr, oi,
                                                 mmu_idx, index,
@@ -655,6 +670,13 @@ void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
         glue(helper_be_st_name, _do_mmio_access)(env, val, addr, oi,
                                                  mmu_idx, index, retaddr);
+        /* N.B.: Here excl_succeeded == true means that this access
+         * comes from an exclusive instruction. */
+        if (cpu->excl_succeeded) {
+            MemoryRegion *mr = iotlb_to_region(cpu, iotlbentry->addr,
+                                               iotlbentry->attrs);
+            mr->pending_excl_access = false;
+        }
     } else {
         glue(helper_be_st_name, _do_ram_access)(env, val, addr, oi,
                                                 mmu_idx, index,