From patchwork Thu Dec 25 00:52:22 2014
X-Patchwork-Submitter: Nadav Amit
X-Patchwork-Id: 5540731
From: Nadav Amit
To: pbonzini@redhat.com
Cc: kvm@vger.kernel.org, Nadav Amit
Subject: [PATCH 7/8] KVM: x86: Do not set access bit on accessed segments
Date: Thu, 25 Dec 2014 02:52:22 +0200
Message-Id: <1419468743-23732-8-git-send-email-namit@cs.technion.ac.il>
In-Reply-To: <1419468743-23732-1-git-send-email-namit@cs.technion.ac.il>
References: <1419468743-23732-1-git-send-email-namit@cs.technion.ac.il>
X-Mailing-List: kvm@vger.kernel.org

When a segment is loaded, the accessed bit in its descriptor is currently
set unconditionally. It should only be set if the segment did not already
have the accessed bit set; skipping the redundant descriptor write-back in
that case also improves performance.
Signed-off-by: Nadav Amit
---
 arch/x86/kvm/emulate.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 7bf3548..4fcd9fd 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -1620,10 +1620,13 @@ static int __load_segment_descriptor(struct x86_emulate_ctxt *ctxt,
 
 	if (seg_desc.s) {
 		/* mark segment as accessed */
-		seg_desc.type |= 1;
-		ret = write_segment_descriptor(ctxt, selector, &seg_desc);
-		if (ret != X86EMUL_CONTINUE)
-			return ret;
+		if (!(seg_desc.type & 1)) {
+			seg_desc.type |= 1;
+			ret = write_segment_descriptor(ctxt, selector,
+						       &seg_desc);
+			if (ret != X86EMUL_CONTINUE)
+				return ret;
+		}
 	} else if (ctxt->mode == X86EMUL_MODE_PROT64) {
 		ret = ctxt->ops->read_std(ctxt, desc_addr+8, &base3,
 				sizeof(base3), &ctxt->exception);
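
For reference, below is a small self-contained sketch (illustrative only, not
the KVM emulator code: seg_desc_example, write_back_descriptor() and
mark_accessed_if_needed() are made-up names) of the pattern the hunk
introduces: the accessed bit is bit 0 of the type field of a code/data (S=1)
descriptor, and the descriptor only needs to be written back when that bit was
still clear.

/*
 * Illustrative sketch only (not the KVM emulator code): set the accessed
 * bit and write the descriptor back only when the bit was previously clear.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical, heavily simplified descriptor for the example. */
struct seg_desc_example {
	uint8_t type;	/* type/access bits; bit 0 = accessed */
	bool s;		/* S flag: 1 = code/data segment */
};

/* Stand-in for a descriptor write-back; just reports that it happened. */
static int write_back_descriptor(const struct seg_desc_example *d)
{
	printf("descriptor written back, type=0x%x\n", d->type);
	return 0;
}

static int mark_accessed_if_needed(struct seg_desc_example *d)
{
	if (d->s && !(d->type & 1)) {	/* code/data segment, not yet accessed */
		d->type |= 1;		/* set the accessed bit */
		return write_back_descriptor(d);
	}
	return 0;			/* already accessed: skip the write */
}

int main(void)
{
	struct seg_desc_example d = { .type = 0xa, .s = true };	/* code, not accessed */

	mark_accessed_if_needed(&d);	/* first load: triggers one write-back */
	mark_accessed_if_needed(&d);	/* second load: no write-back needed */
	return 0;
}

Running it prints a single write-back for the first load and none for the
second, which is exactly the redundant write the patch eliminates.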