From patchwork Tue Apr  1 15:26:42 2014
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 3923861
From: Paolo Bonzini <pbonzini@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: kvm@vger.kernel.org
Subject: [PATCH 2/6] KVM: emulate: abstract handling of memory operands
Date: Tue, 1 Apr 2014 17:26:42 +0200
Message-Id: <1396366006-22227-3-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1396366006-22227-1-git-send-email-pbonzini@redhat.com>
References: <1396366006-22227-1-git-send-email-pbonzini@redhat.com>

Abstract the pre-execution processing and the writeback of memory
operands into new functions.  We will soon do some work before execution
even for the destination of a move, so call the new function in that
case too; but not for the memory operand of lea, invlpg, etc., which is
marked NoAccess and never actually read or written.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/emulate.c | 43 ++++++++++++++++++++++++++++---------------
 1 file changed, 28 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index b42184eccbcc..c7ef72c1289e 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -1545,6 +1545,29 @@ exception:
 	return X86EMUL_PROPAGATE_FAULT;
 }
 
+static int prepare_memory_operand(struct x86_emulate_ctxt *ctxt,
+				  struct operand *op)
+{
+	return segmented_read(ctxt, op->addr.mem, &op->val, op->bytes);
+}
+
+static int cmpxchg_memory_operand(struct x86_emulate_ctxt *ctxt,
+				  struct operand *op)
+{
+	return segmented_cmpxchg(ctxt, op->addr.mem,
+				 &op->orig_val,
+				 &op->val,
+				 op->bytes);
+}
+
+static int write_memory_operand(struct x86_emulate_ctxt *ctxt,
+				struct operand *op)
+{
+	return segmented_write(ctxt, op->addr.mem,
+			       &op->val,
+			       op->bytes);
+}
+
 static void write_register_operand(struct operand *op)
 {
 	/* The 4-byte case *is* correct: in 64-bit mode we zero-extend. */
@@ -1572,16 +1595,9 @@ static int writeback(struct x86_emulate_ctxt *ctxt, struct operand *op)
 		break;
 	case OP_MEM:
 		if (ctxt->lock_prefix)
-			return segmented_cmpxchg(ctxt,
-						 op->addr.mem,
-						 &op->orig_val,
-						 &op->val,
-						 op->bytes);
+			return cmpxchg_memory_operand(ctxt, op);
 		else
-			return segmented_write(ctxt,
-					       op->addr.mem,
-					       &op->val,
-					       op->bytes);
+			return write_memory_operand(ctxt, op);
 		break;
 	case OP_MEM_STR:
 		return segmented_write(ctxt,
@@ -4588,16 +4604,14 @@ int x86_emulate_insn(struct x86_emulate_ctxt *ctxt)
 	}
 
 	if ((ctxt->src.type == OP_MEM) && !(ctxt->d & NoAccess)) {
-		rc = segmented_read(ctxt, ctxt->src.addr.mem,
-				    ctxt->src.valptr, ctxt->src.bytes);
+		rc = prepare_memory_operand(ctxt, &ctxt->src);
 		if (rc != X86EMUL_CONTINUE)
 			goto done;
 		ctxt->src.orig_val64 = ctxt->src.val64;
 	}
 
 	if (ctxt->src2.type == OP_MEM) {
-		rc = segmented_read(ctxt, ctxt->src2.addr.mem,
-				    &ctxt->src2.val, ctxt->src2.bytes);
+		rc = prepare_memory_operand(ctxt, &ctxt->src2);
 		if (rc != X86EMUL_CONTINUE)
 			goto done;
 	}
@@ -4608,8 +4622,7 @@ int x86_emulate_insn(struct x86_emulate_ctxt *ctxt)
 
 	if ((ctxt->dst.type == OP_MEM) && !(ctxt->d & Mov)) {
 		/* optimisation - avoid slow emulated read if Mov */
-		rc = segmented_read(ctxt, ctxt->dst.addr.mem,
-				    &ctxt->dst.val, ctxt->dst.bytes);
+		rc = prepare_memory_operand(ctxt, &ctxt->dst);
 		if (rc != X86EMUL_CONTINUE)
 			goto done;
 	}
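
As an aside for reviewers who want to play with the dispatch shape
outside the kernel tree, below is a rough user-space sketch of the
pattern this patch sets up.  It is only an illustration: struct operand,
struct toy_ctxt and the toy_* accessors are invented stand-ins for the
real emulator types and for segmented_read/segmented_write/
segmented_cmpxchg; error handling and the non-memory operand types are
elided.

	/* toy_memop.c - standalone sketch, NOT kernel code */
	#include <stdio.h>
	#include <string.h>

	#define X86EMUL_CONTINUE 0

	struct operand {
		unsigned long addr;	/* stands in for op->addr.mem */
		unsigned long val;
		unsigned long orig_val;
		unsigned int bytes;
	};

	struct toy_ctxt {
		int lock_prefix;
		unsigned char mem[64];	/* fake guest memory */
	};

	/* Stand-ins for the segmented_* accessors. */
	static int toy_read(struct toy_ctxt *c, unsigned long a,
			    void *v, unsigned int n)
	{
		memcpy(v, c->mem + a, n);
		return X86EMUL_CONTINUE;
	}

	static int toy_write(struct toy_ctxt *c, unsigned long a,
			     void *v, unsigned int n)
	{
		memcpy(c->mem + a, v, n);
		return X86EMUL_CONTINUE;
	}

	static int toy_cmpxchg(struct toy_ctxt *c, unsigned long a,
			       void *old, void *new, unsigned int n)
	{
		if (!memcmp(c->mem + a, old, n))
			memcpy(c->mem + a, new, n);
		return X86EMUL_CONTINUE;
	}

	/* Same shape as the three helpers added by the patch. */
	static int prepare_memory_operand(struct toy_ctxt *c, struct operand *op)
	{
		return toy_read(c, op->addr, &op->val, op->bytes);
	}

	static int cmpxchg_memory_operand(struct toy_ctxt *c, struct operand *op)
	{
		return toy_cmpxchg(c, op->addr, &op->orig_val, &op->val,
				   op->bytes);
	}

	static int write_memory_operand(struct toy_ctxt *c, struct operand *op)
	{
		return toy_write(c, op->addr, &op->val, op->bytes);
	}

	/* writeback() picks a helper instead of open-coding the accessor. */
	static int writeback(struct toy_ctxt *c, struct operand *op)
	{
		if (c->lock_prefix)
			return cmpxchg_memory_operand(c, op);
		return write_memory_operand(c, op);
	}

	int main(void)
	{
		struct toy_ctxt c = { .lock_prefix = 1 };
		struct operand dst = { .addr = 8, .bytes = 4 };

		prepare_memory_operand(&c, &dst);  /* pre-execution read */
		dst.orig_val = dst.val;            /* remember old value */
		dst.val = 0x12345678;              /* pretend we executed */
		writeback(&c, &dst);               /* locked -> cmpxchg path */

		printf("%02x %02x %02x %02x\n",
		       c.mem[8], c.mem[9], c.mem[10], c.mem[11]);
		return 0;
	}

The point of the indirection is that the pre-execution read and the
writeback each get a single seam: whatever per-operand work we end up
doing before execution (as hinted in the commit message) can be added
inside prepare_memory_operand() without touching x86_emulate_insn() or
writeback() again.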