From patchwork Mon Jul 14 11:38:42 2014
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 4545011
X-Original-To: patchwork-kvm@patchwork.kernel.org
From: Paolo Bonzini
To: linux-kernel@vger.kernel.org
Cc: kvm@vger.kernel.org
Subject: [PATCH 18/19] KVM: x86: ensure emulator fetches do not span multiple pages
Date: Mon, 14 Jul 2014 13:38:42 +0200
Message-Id: <1405337923-4776-19-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1405337923-4776-1-git-send-email-pbonzini@redhat.com>
References: <1405337923-4776-1-git-send-email-pbonzini@redhat.com>
List-ID: X-Mailing-List: kvm@vger.kernel.org

When the CS base is not page-aligned, the linear address of the code
could get close to the page boundary (e.g. 0x...ffe) even if the EIP
value is not.  So we need to first linearize the address, and only then
compute the number of valid bytes that can be fetched.

This happens relatively often when executing real mode code.
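The scenario the commit message describes can be sketched numerically. This is a standalone illustration (not code from the patch), using an example real-mode CS value chosen so the segment base lands two bytes short of a page boundary:

```python
PAGE_SIZE = 4096

def room_in_page(addr):
    # Mirrors PAGE_SIZE - offset_in_page(addr) from the kernel.
    return PAGE_SIZE - (addr & (PAGE_SIZE - 1))

cs_base = 0x01FF << 4   # real-mode CS = 0x01FF -> base 0x1FF0, not page-aligned
eip = 0x000E

linear = cs_base + eip  # 0x1FFE: two bytes before the page boundary

# Old order: clamp the fetch size using the effective address alone.
# EIP is far from a boundary, so this happily allows a full 15-byte fetch
# that actually straddles two pages.
buggy_size = min(15, room_in_page(eip))

# New order: linearize first, then clamp to the page the fetch starts in.
fixed_size = min(15, room_in_page(linear))

print(hex(linear), buggy_size, fixed_size)  # 0x1ffe 15 2
```

With the fix, the emulator only asks `ctxt->ops->fetch` for the 2 bytes remaining in the current page, and fetches the remainder in a second call, so a fetch can never span a page boundary.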
Signed-off-by: Paolo Bonzini
---
 arch/x86/kvm/emulate.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index c16314807756..6a1d60956d63 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -711,14 +711,18 @@ static int segmented_read_std(struct x86_emulate_ctxt *ctxt,
 static int __do_insn_fetch_bytes(struct x86_emulate_ctxt *ctxt, int op_size)
 {
 	int rc;
-	int size;
+	unsigned size;
 	unsigned long linear;
 	int cur_size = ctxt->fetch.end - ctxt->fetch.data;
 	struct segmented_address addr = { .seg = VCPU_SREG_CS,
 					   .ea = ctxt->eip + cur_size };
 
-	size = min(15UL ^ cur_size,
-		   PAGE_SIZE - offset_in_page(addr.ea));
+	size = 15UL ^ cur_size;
+	rc = __linearize(ctxt, addr, size, false, true, &linear);
+	if (unlikely(rc != X86EMUL_CONTINUE))
+		return rc;
+
+	size = min_t(unsigned, size, PAGE_SIZE - offset_in_page(linear));
 
 	/*
 	 * One instruction can only straddle two pages,
@@ -728,9 +732,6 @@ static int __do_insn_fetch_bytes(struct x86_emulate_ctxt *ctxt, int op_size)
 	 */
 	if (unlikely(size < op_size))
 		return X86EMUL_UNHANDLEABLE;
-	rc = __linearize(ctxt, addr, size, false, true, &linear);
-	if (unlikely(rc != X86EMUL_CONTINUE))
-		return rc;
 	rc = ctxt->ops->fetch(ctxt, linear, ctxt->fetch.end, size,
 			      &ctxt->exception);
 	if (unlikely(rc != X86EMUL_CONTINUE))
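A side note on the `15UL ^ cur_size` expression the patch keeps: an x86 instruction is at most 15 bytes long, so `cur_size` (bytes already fetched) never exceeds 15, and for values in that range XOR with 15 is equivalent to the subtraction `15 - cur_size` (it just flips the low four bits). A quick standalone sanity check of that identity:

```python
# For 0 <= cur_size <= 15, all of cur_size fits in the low four bits,
# so XOR with 0b1111 complements them: 15 ^ cur_size == 15 - cur_size,
# i.e. the number of bytes still missing from a maximal 15-byte insn.
results = [(c, 15 ^ c, 15 - c) for c in range(16)]
for cur_size, xor_val, sub_val in results:
    assert xor_val == sub_val
print("identity holds for cur_size in 0..15")
```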