From patchwork Tue May 24 09:09:05 2016
X-Patchwork-Submitter: Christoffer Dall
X-Patchwork-Id: 9133165
From: Christoffer Dall
To: Paolo Bonzini, Radim Krčmář
Cc: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
	kvm@vger.kernel.org, Marc Zyngier, Christoffer Dall, Andre Przywara
Subject: [PULL 11/59] KVM: arm/arm64: Fix MMIO emulation data handling
Date: Tue, 24 May 2016 11:09:05 +0200
Message-Id: <1464080993-10884-12-git-send-email-christoffer.dall@linaro.org>
X-Mailer: git-send-email 2.1.2.330.g565301e.dirty
In-Reply-To: <1464080993-10884-1-git-send-email-christoffer.dall@linaro.org>
References: <1464080993-10884-1-git-send-email-christoffer.dall@linaro.org>

When the kernel handles a guest MMIO read access internally, we need to
copy the emulation result into the run->mmio structure in order for the
kvm_handle_mmio_return() function to pick it up and inject the result
back into the guest.

Currently the only user of kvm_io_bus for ARM is the VGIC, which did this
copying itself, so this has not caused issues so far. But with the
upcoming new vgic implementation we need this done properly.

Update the kvm_handle_mmio_return() description and clean up the code so
that a single copy is performed, and only when needed.

Code and commit message inspired by Andre Przywara.

Reported-by: Andre Przywara
Signed-off-by: Christoffer Dall
Signed-off-by: Andre Przywara
Reviewed-by: Marc Zyngier
Reviewed-by: Andre Przywara
---
 arch/arm/kvm/mmio.c | 14 +++++++-------
 virt/kvm/arm/vgic.c |  7 -------
 2 files changed, 7 insertions(+), 14 deletions(-)

diff --git a/arch/arm/kvm/mmio.c b/arch/arm/kvm/mmio.c
index 0f6600f..0158e9e 100644
--- a/arch/arm/kvm/mmio.c
+++ b/arch/arm/kvm/mmio.c
@@ -87,11 +87,10 @@ static unsigned long mmio_read_buf(char *buf, unsigned int len)
 
 /**
  * kvm_handle_mmio_return -- Handle MMIO loads after user space emulation
+ *                           or in-kernel IO emulation
+ *
  * @vcpu: The VCPU pointer
  * @run:  The VCPU run struct containing the mmio data
- *
- * This should only be called after returning from userspace for MMIO load
- * emulation.
  */
 int kvm_handle_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
@@ -206,18 +205,19 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
         run->mmio.is_write      = is_write;
         run->mmio.phys_addr     = fault_ipa;
         run->mmio.len           = len;
-        if (is_write)
-                memcpy(run->mmio.data, data_buf, len);
 
         if (!ret) {
                 /* We handled the access successfully in the kernel. */
+                if (!is_write)
+                        memcpy(run->mmio.data, data_buf, len);
                 vcpu->stat.mmio_exit_kernel++;
                 kvm_handle_mmio_return(vcpu, run);
                 return 1;
-        } else {
-                vcpu->stat.mmio_exit_user++;
         }
 
+        if (is_write)
+                memcpy(run->mmio.data, data_buf, len);
+        vcpu->stat.mmio_exit_user++;
         run->exit_reason        = KVM_EXIT_MMIO;
         return 0;
 }
diff --git a/virt/kvm/arm/vgic.c b/virt/kvm/arm/vgic.c
index f76bb64..c3bfbb9 100644
--- a/virt/kvm/arm/vgic.c
+++ b/virt/kvm/arm/vgic.c
@@ -819,7 +819,6 @@ static int vgic_handle_mmio_access(struct kvm_vcpu *vcpu,
         struct vgic_dist *dist = &vcpu->kvm->arch.vgic;
         struct vgic_io_device *iodev = container_of(this,
                                                     struct vgic_io_device, dev);
-        struct kvm_run *run = vcpu->run;
         const struct vgic_io_range *range;
         struct kvm_exit_mmio mmio;
         bool updated_state;
@@ -848,12 +847,6 @@ static int vgic_handle_mmio_access(struct kvm_vcpu *vcpu,
                 updated_state = false;
         }
         spin_unlock(&dist->lock);
 
-        run->mmio.is_write      = is_write;
-        run->mmio.len           = len;
-        run->mmio.phys_addr     = addr;
-        memcpy(run->mmio.data, val, len);
-
-        kvm_handle_mmio_return(vcpu, run);
         if (updated_state)
                 vgic_kick_vcpus(vcpu->kvm);
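
A note for anyone tracing the data flow rather than the diff: the rule the
patch establishes is that run->mmio.data carries whatever payload the
*other* side still needs -- the in-kernel emulation result of a read, which
kvm_handle_mmio_return() injects straight back into the guest, or the
guest's store data for an access that is handed off to user space
(KVM_EXIT_MMIO). The stand-alone sketch below only illustrates that copy
policy; struct mmio_exit, handle_in_kernel() and abort_to_mmio() are
simplified, hypothetical stand-ins, not the kernel structures or the
kvm_io_bus API, and it is not part of the patch.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Simplified stand-in for the mmio field of struct kvm_run. */
struct mmio_exit {
        unsigned long phys_addr;
        unsigned char data[8];
        unsigned int len;
        bool is_write;
};

/* Hypothetical stand-in for the kvm_io_bus lookup: 0 = handled in kernel. */
static int handle_in_kernel(unsigned long addr, bool is_write,
                            unsigned char *buf, unsigned int len)
{
        (void)addr; (void)is_write; (void)buf; (void)len;
        return -1;              /* pretend no in-kernel device claimed it */
}

static int abort_to_mmio(struct mmio_exit *mmio, unsigned long fault_ipa,
                         bool is_write, unsigned char *data_buf,
                         unsigned int len)
{
        int ret = handle_in_kernel(fault_ipa, is_write, data_buf, len);

        mmio->is_write  = is_write;
        mmio->phys_addr = fault_ipa;
        mmio->len       = len;

        if (!ret) {
                /*
                 * Emulated in the kernel: for a read, the emulation result
                 * must land in mmio->data so the return path can inject it
                 * into the guest register.
                 */
                if (!is_write)
                        memcpy(mmio->data, data_buf, len);
                return 1;       /* resume the guest immediately */
        }

        /*
         * Going to user space: for a write, the guest's data must be handed
         * over in mmio->data so the VMM (e.g. QEMU) can perform the store.
         */
        if (is_write)
                memcpy(mmio->data, data_buf, len);
        return 0;               /* exit to user space with KVM_EXIT_MMIO */
}

int main(void)
{
        struct mmio_exit mmio;
        unsigned char buf[4] = { 0xaa, 0xbb, 0xcc, 0xdd };

        /* Example: a 4-byte guest write that no in-kernel device claims. */
        int resume = abort_to_mmio(&mmio, 0x3f200000UL, true, buf, sizeof(buf));

        printf("resume=%d is_write=%d data[0]=%#x\n",
               resume, mmio.is_write, mmio.data[0]);
        return 0;
}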