Message-ID: <5524CC0E.8020208@linux.intel.com>
Date: Wed, 08 Apr 2015 14:34:54 +0800
From: Xiao Guangrong
To: Paolo Bonzini
Cc: Marcelo Tosatti, kvm@vger.kernel.org, qemu-devel@nongnu.org, "Li, Wanpeng"
Subject: [PATCH] kvm: fix slot flags sync between Qemu and KVM

We noticed that KVM keeps dirty tracking enabled for memslots after a live
migration has failed, which causes bad performance because huge page mappings
are disallowed for such memslots.

It is caused by the slot flags not being properly synced between Qemu and KVM.
The current slot-update code relies on slot->flags in the hope of omitting
unnecessary ioctls. However, slot->flags only reflects the status of the
corresponding memory region; vmsave and live migration do dirty tracking,
which sets KVM_MEM_LOG_DIRTY_PAGES for the slot. As a result, the slot status
recorded in the flags does not exactly match the status in the kernel.

We fix it by introducing slot->is_dirty_logging, which records the
dirty-logging status in the kernel and helps us sync the status between
userspace and kernel.

Signed-off-by: Xiao Guangrong
Reported-by: Wanpeng Li
---
 kvm-all.c | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)

diff --git a/kvm-all.c b/kvm-all.c
index dd44f8c..69fa233 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -60,6 +60,15 @@
 
 #define KVM_MSI_HASHTAB_SIZE 256
 
+/*
+ * @flags only reflects the status of the corresponding memory region;
+ * however, vmsave and live migration do dirty tracking, which sets
+ * KVM_MEM_LOG_DIRTY_PAGES for the slot. That causes the slot status
+ * recorded in @flags to not exactly match the status in the kernel.
+ *
+ * @is_dirty_logging, which indicates the dirty-logging status in the kernel,
+ * helps us sync the status between userspace and kernel.
+ */
 typedef struct KVMSlot
 {
     hwaddr start_addr;
@@ -67,6 +76,7 @@ typedef struct KVMSlot
     void *ram;
     int slot;
     int flags;
+    bool is_dirty_logging;
 } KVMSlot;
 
 typedef struct kvm_dirty_log KVMDirtyLog;
@@ -245,6 +255,7 @@ static int kvm_set_user_memory_region(KVMState *s, KVMSlot *slot)
         kvm_vm_ioctl(s, KVM_SET_USER_MEMORY_REGION, &mem);
     }
     mem.memory_size = slot->memory_size;
+    slot->is_dirty_logging = !!(mem.flags & KVM_MEM_LOG_DIRTY_PAGES);
     return kvm_vm_ioctl(s, KVM_SET_USER_MEMORY_REGION, &mem);
 }
 
@@ -312,6 +323,7 @@ static int kvm_slot_dirty_pages_log_change(KVMSlot *mem, bool log_dirty)
     int old_flags;
 
     old_flags = mem->flags;
+    old_flags |= mem->is_dirty_logging ? KVM_MEM_LOG_DIRTY_PAGES : 0;
 
     flags = (mem->flags & ~mask) | kvm_mem_flags(s, log_dirty, false);
     mem->flags = flags;
@@ -376,12 +388,17 @@ static int kvm_set_migration_log(bool enable)
     s->migration_log = enable;
 
     for (i = 0; i < s->nr_slots; i++) {
+        int dirty_enable;
+
         mem = &s->slots[i];
 
         if (!mem->memory_size) {
            continue;
         }
-        if (!!(mem->flags & KVM_MEM_LOG_DIRTY_PAGES) == enable) {
+
+        /* Keep the dirty bit if it is tracked by the memory region. */
+        dirty_enable = enable | (mem->flags & KVM_MEM_LOG_DIRTY_PAGES);
+        if (mem->is_dirty_logging == dirty_enable) {
             continue;
         }
         err = kvm_set_user_memory_region(s, mem);
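
For reference, here is a minimal standalone sketch (not QEMU code; the struct,
helper names, and the simplified KVM_MEM_LOG_DIRTY_PAGES value are illustrative
assumptions) of the mismatch the patch addresses: after a failed migration the
kernel slot still has dirty logging enabled while the MemoryRegion-derived
flags say it is off, so the old flags-based check wrongly skips the
KVM_SET_USER_MEMORY_REGION ioctl, while a check against the kernel-side state
does not:

/* Standalone illustration only; simplified stand-ins for the real
 * KVM/QEMU definitions. */
#include <stdbool.h>
#include <stdio.h>

#define KVM_MEM_LOG_DIRTY_PAGES 1

struct fake_slot {
    int flags;              /* QEMU's view, derived from the memory region */
    bool is_dirty_logging;  /* what the kernel slot actually has */
};

/* Old check: skip the ioctl when the region-derived flags already match. */
static bool old_skip(struct fake_slot *s, bool enable)
{
    return !!(s->flags & KVM_MEM_LOG_DIRTY_PAGES) == enable;
}

/* New check: compare against what was really programmed into the kernel. */
static bool new_skip(struct fake_slot *s, bool enable)
{
    int dirty_enable = enable | (s->flags & KVM_MEM_LOG_DIRTY_PAGES);
    return s->is_dirty_logging == dirty_enable;
}

int main(void)
{
    /* After a failed migration: the region says "no dirty logging", but
     * the kernel slot was left with KVM_MEM_LOG_DIRTY_PAGES set. */
    struct fake_slot slot = { .flags = 0, .is_dirty_logging = true };

    printf("old check skips the ioctl: %d\n", old_skip(&slot, false));
    printf("new check skips the ioctl: %d\n", new_skip(&slot, false));
    return 0;
}

With this scenario the old check prints 1 (the ioctl is skipped and dirty
logging stays on in the kernel), while the new check prints 0 (the slot is
updated and logging is turned off), which is the behaviour the patch restores.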