From patchwork Tue May 4 18:35:27 2010
X-Patchwork-Submitter: Glauber Costa
X-Patchwork-Id: 96850
From: Glauber Costa
To: kvm@vger.kernel.org
Cc: avi@redhat.com, zamsden@redhat.com, mtosatti@redhat.com
Subject: [PATCH 1/2] replace set_msr_entry with kvm_msr_entry
Date: Tue, 4 May 2010 14:35:27 -0400
Message-Id: <1272998128-30384-2-git-send-email-glommer@redhat.com>
In-Reply-To: <1272998128-30384-1-git-send-email-glommer@redhat.com>
References: <1272998128-30384-1-git-send-email-glommer@redhat.com>

diff --git a/qemu-kvm-x86.c b/qemu-kvm-x86.c
index 748ff69..439c31a 100644
--- a/qemu-kvm-x86.c
+++ b/qemu-kvm-x86.c
@@ -693,13 +693,6 @@ int kvm_arch_qemu_create_context(void)
     return 0;
 }
 
-static void set_msr_entry(struct kvm_msr_entry *entry, uint32_t index,
-                          uint64_t data)
-{
-    entry->index = index;
-    entry->data = data;
-}
-
 /* returns 0 on success, non-0 on failure */
 static int get_msr_entry(struct kvm_msr_entry *entry, CPUState *env)
 {
@@ -960,19 +953,19 @@ void kvm_arch_load_regs(CPUState *env, int level)
     /* msrs */
     n = 0;
     /* Remember to increase msrs size if you add new registers below */
-    set_msr_entry(&msrs[n++], MSR_IA32_SYSENTER_CS, env->sysenter_cs);
-    set_msr_entry(&msrs[n++], MSR_IA32_SYSENTER_ESP, env->sysenter_esp);
-    set_msr_entry(&msrs[n++], MSR_IA32_SYSENTER_EIP, env->sysenter_eip);
+    kvm_msr_entry_set(&msrs[n++], MSR_IA32_SYSENTER_CS, env->sysenter_cs);
+    kvm_msr_entry_set(&msrs[n++], MSR_IA32_SYSENTER_ESP, env->sysenter_esp);
+    kvm_msr_entry_set(&msrs[n++], MSR_IA32_SYSENTER_EIP, env->sysenter_eip);
     if (kvm_has_msr_star)
-        set_msr_entry(&msrs[n++], MSR_STAR, env->star);
+        kvm_msr_entry_set(&msrs[n++], MSR_STAR, env->star);
     if (kvm_has_vm_hsave_pa)
-        set_msr_entry(&msrs[n++], MSR_VM_HSAVE_PA, env->vm_hsave);
+        kvm_msr_entry_set(&msrs[n++], MSR_VM_HSAVE_PA, env->vm_hsave);
 #ifdef TARGET_X86_64
     if (lm_capable_kernel) {
-        set_msr_entry(&msrs[n++], MSR_CSTAR, env->cstar);
-        set_msr_entry(&msrs[n++], MSR_KERNELGSBASE, env->kernelgsbase);
-        set_msr_entry(&msrs[n++], MSR_FMASK, env->fmask);
-        set_msr_entry(&msrs[n++], MSR_LSTAR , env->lstar);
+        kvm_msr_entry_set(&msrs[n++], MSR_CSTAR, env->cstar);
+        kvm_msr_entry_set(&msrs[n++], MSR_KERNELGSBASE, env->kernelgsbase);
+        kvm_msr_entry_set(&msrs[n++], MSR_FMASK, env->fmask);
+        kvm_msr_entry_set(&msrs[n++], MSR_LSTAR , env->lstar);
     }
 #endif
     if (level == KVM_PUT_FULL_STATE) {
@@ -983,20 +976,20 @@ void kvm_arch_load_regs(CPUState *env, int level)
          * huge jump-backs that would occur without any writeback at all.
          */
         if (smp_cpus == 1 || env->tsc != 0) {
-            set_msr_entry(&msrs[n++], MSR_IA32_TSC, env->tsc);
+            kvm_msr_entry_set(&msrs[n++], MSR_IA32_TSC, env->tsc);
         }
-        set_msr_entry(&msrs[n++], MSR_KVM_SYSTEM_TIME, env->system_time_msr);
-        set_msr_entry(&msrs[n++], MSR_KVM_WALL_CLOCK, env->wall_clock_msr);
+        kvm_msr_entry_set(&msrs[n++], MSR_KVM_SYSTEM_TIME, env->system_time_msr);
+        kvm_msr_entry_set(&msrs[n++], MSR_KVM_WALL_CLOCK, env->wall_clock_msr);
     }
 #ifdef KVM_CAP_MCE
     if (env->mcg_cap) {
         if (level == KVM_PUT_RESET_STATE)
-            set_msr_entry(&msrs[n++], MSR_MCG_STATUS, env->mcg_status);
+            kvm_msr_entry_set(&msrs[n++], MSR_MCG_STATUS, env->mcg_status);
         else if (level == KVM_PUT_FULL_STATE) {
-            set_msr_entry(&msrs[n++], MSR_MCG_STATUS, env->mcg_status);
-            set_msr_entry(&msrs[n++], MSR_MCG_CTL, env->mcg_ctl);
+            kvm_msr_entry_set(&msrs[n++], MSR_MCG_STATUS, env->mcg_status);
+            kvm_msr_entry_set(&msrs[n++], MSR_MCG_CTL, env->mcg_ctl);
             for (i = 0; i < (env->mcg_cap & 0xff); i++)
-                set_msr_entry(&msrs[n++], MSR_MC0_CTL + i, env->mce_banks[i]);
+                kvm_msr_entry_set(&msrs[n++], MSR_MC0_CTL + i, env->mce_banks[i]);
         }
     }
 #endif
diff --git a/target-i386/kvm.c b/target-i386/kvm.c
index 5239eaf..56740bd 100644
--- a/target-i386/kvm.c
+++ b/target-i386/kvm.c
@@ -552,6 +552,8 @@ static int kvm_put_sregs(CPUState *env)
     return kvm_vcpu_ioctl(env, KVM_SET_SREGS, &sregs);
 }
 
+#endif
+
 static void kvm_msr_entry_set(struct kvm_msr_entry *entry,
                               uint32_t index, uint64_t value)
 {
@@ -559,6 +561,7 @@ static void kvm_msr_entry_set(struct kvm_msr_entry *entry,
     entry->data = value;
 }
 
+#ifdef KVM_UPSTREAM
 static int kvm_put_msrs(CPUState *env, int level)
 {
     struct {
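
For readers following along: the helper the patch consolidates on does nothing more than fill one slot of the kvm_msr_entry array that the KVM_SET_MSRS ioctl consumes, which is why qemu-kvm-x86.c does not need its own copy. Below is a minimal standalone sketch of that pattern, not QEMU code: vcpu_fd, put_example_msrs() and the locally defined MSR constants are illustrative placeholders.

/* Standalone sketch of the kvm_msr_entry_set() pattern; error handling is
 * reduced to a return code. */
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#define MSR_IA32_SYSENTER_CS  0x174
#define MSR_IA32_SYSENTER_ESP 0x175

/* Same body as the shared helper: fill one slot of the kvm_msr_entry
 * array that KVM_SET_MSRS consumes. */
static void kvm_msr_entry_set(struct kvm_msr_entry *entry,
                              uint32_t index, uint64_t value)
{
    entry->index = index;
    entry->data = value;
}

/* Pack a couple of MSRs and hand them to the vCPU in one ioctl. */
static int put_example_msrs(int vcpu_fd, uint64_t sysenter_cs,
                            uint64_t sysenter_esp)
{
    struct {
        struct kvm_msrs info;
        struct kvm_msr_entry entries[2];
    } msr_data;
    int n = 0;

    memset(&msr_data, 0, sizeof(msr_data));
    kvm_msr_entry_set(&msr_data.entries[n++], MSR_IA32_SYSENTER_CS, sysenter_cs);
    kvm_msr_entry_set(&msr_data.entries[n++], MSR_IA32_SYSENTER_ESP, sysenter_esp);
    msr_data.info.nmsrs = n;

    /* KVM_SET_MSRS returns the number of MSRs successfully written. */
    return ioctl(vcpu_fd, KVM_SET_MSRS, &msr_data) == n ? 0 : -1;
}

Packing every entry into one buffer behind struct kvm_msrs is what lets the full MSR state be pushed with a single vcpu ioctl; the patch simply makes both code paths share the one small helper that fills those slots.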