From patchwork Thu Sep 14 03:51:13 2023
X-Patchwork-Submitter: Xiaoyao Li
X-Patchwork-Id: 13384399
From: Xiaoyao Li <xiaoyao.li@intel.com>
To: Paolo Bonzini, David Hildenbrand, Igor Mammedov, "Michael S. Tsirkin",
    Marcel Apfelbaum, Richard Henderson, Peter Xu, Philippe Mathieu-Daudé,
    Cornelia Huck, Daniel P. Berrangé, Eric Blake, Markus Armbruster,
    Marcelo Tosatti
Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org, xiaoyao.li@intel.com,
    Michael Roth, isaku.yamahata@gmail.com, Sean Christopherson,
    Claudio Fontana
Subject: [RFC PATCH v2 17/21] kvm: handle KVM_EXIT_MEMORY_FAULT
Date: Wed, 13 Sep 2023 23:51:13 -0400
Message-Id: <20230914035117.3285885-18-xiaoyao.li@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230914035117.3285885-1-xiaoyao.li@intel.com>
References: <20230914035117.3285885-1-xiaoyao.li@intel.com>
X-Mailing-List: kvm@vger.kernel.org

From: Chao Peng

Currently, KVM_MEMORY_EXIT_FLAG_PRIVATE is the only valid flag when
KVM_EXIT_MEMORY_FAULT happens. It indicates that userspace needs to
perform a memory conversion on the RAMBlock to turn the memory into
the desired attribute, i.e., private or shared.

Note that KVM_EXIT_MEMORY_FAULT only makes sense when the RAMBlock
has a gmem (guest memfd) memory backend.
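For reference, the exit payload consumed by this patch is the addition
to struct kvm_run from the linux-headers update earlier in this series.
The layout below is a sketch of that UAPI as of this RFC, not a
verbatim quote; the __u64 field types are assumed by analogy with the
other kvm_run exit payloads:

    union {
            ...
            /* KVM_EXIT_MEMORY_FAULT */
            struct {
                    __u64 flags;  /* KVM_MEMORY_EXIT_FLAG_PRIVATE set when
                                   * the fault wants the range private */
                    __u64 gpa;    /* start of the faulting GPA range */
                    __u64 size;   /* length of the range, in bytes */
            } memory;
            ...
    };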
Signed-off-by: Chao Peng
Signed-off-by: Xiaoyao Li
---
 accel/kvm/kvm-all.c | 54 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 54 insertions(+)

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index 7e32ee83b258..c67aa66b0559 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -3040,6 +3040,50 @@ static void kvm_eat_signals(CPUState *cpu)
     } while (sigismember(&chkset, SIG_IPI));
 }
 
+static int kvm_convert_memory(hwaddr start, hwaddr size, bool to_private)
+{
+    MemoryRegionSection section;
+    ram_addr_t offset;
+    RAMBlock *rb;
+    void *addr;
+    int ret = -1;
+
+    section = memory_region_find(get_system_memory(), start, size);
+    if (!section.mr) {
+        return ret;
+    }
+
+    if (memory_region_has_gmem_fd(section.mr)) {
+        if (to_private) {
+            ret = kvm_set_memory_attributes_private(start, size);
+        } else {
+            ret = kvm_set_memory_attributes_shared(start, size);
+        }
+
+        if (ret) {
+            memory_region_unref(section.mr);
+            return ret;
+        }
+
+        addr = memory_region_get_ram_ptr(section.mr) +
+               section.offset_within_region;
+        rb = qemu_ram_block_from_host(addr, false, &offset);
+        /*
+         * After KVM_SET_MEMORY_ATTRIBUTES has been issued via
+         * kvm_set_memory_attributes(), the operation on the underlying
+         * file descriptor only releases the no-longer-needed pages.
+         */
+        ram_block_convert_range(rb, offset, size, to_private);
+    } else {
+        warn_report("Convert non-guest-memfd-backed memory region "
+                    "(0x%"HWADDR_PRIx", +0x%"HWADDR_PRIx") to %s",
+                    start, size, to_private ? "private" : "shared");
+    }
+
+    memory_region_unref(section.mr);
+    return ret;
+}
+
 int kvm_cpu_exec(CPUState *cpu)
 {
     struct kvm_run *run = cpu->kvm_run;
@@ -3198,6 +3242,16 @@ int kvm_cpu_exec(CPUState *cpu)
             break;
         }
         break;
+    case KVM_EXIT_MEMORY_FAULT:
+        if (run->memory.flags & ~KVM_MEMORY_EXIT_FLAG_PRIVATE) {
+            error_report("KVM_EXIT_MEMORY_FAULT: Unknown flag 0x%" PRIx64,
+                         (uint64_t)run->memory.flags);
+            ret = -1;
+            break;
+        }
+        ret = kvm_convert_memory(run->memory.gpa, run->memory.size,
+                                 run->memory.flags & KVM_MEMORY_EXIT_FLAG_PRIVATE);
+        break;
     default:
         DPRINTF("kvm_arch_handle_exit\n");
         ret = kvm_arch_handle_exit(cpu, run);
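
A note for readers without the rest of this series applied: the
kvm_set_memory_attributes_private()/_shared() helpers used above are
introduced by an earlier patch in this series and wrap the
KVM_SET_MEMORY_ATTRIBUTES vm ioctl. The sketch below shows roughly what
such a wrapper boils down to; the function name, error message, and
error handling are illustrative assumptions, not the series' exact code:

    static int kvm_set_memory_attributes(hwaddr start, hwaddr size,
                                         uint64_t attributes)
    {
        /* KVM_SET_MEMORY_ATTRIBUTES takes a GPA range plus the new
         * attributes: e.g. KVM_MEMORY_ATTRIBUTE_PRIVATE marks the
         * range private, 0 makes it shared again. */
        struct kvm_memory_attributes attrs = {
            .address    = start,
            .size       = size,
            .attributes = attributes,
        };
        int r;

        r = kvm_vm_ioctl(kvm_state, KVM_SET_MEMORY_ATTRIBUTES, &attrs);
        if (r) {
            error_report("failed to set memory attributes 0x%" PRIx64
                         " on GPA range [0x%" PRIx64 ", 0x%" PRIx64 ")",
                         attributes, (uint64_t)start,
                         (uint64_t)(start + size));
        }
        return r;
    }

With such a wrapper, kvm_set_memory_attributes_private(start, size)
amounts to kvm_set_memory_attributes(start, size,
KVM_MEMORY_ATTRIBUTE_PRIVATE), and the _shared() variant passes 0.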