From patchwork Fri Oct 28 16:35:37 2016
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 9402401
From: Xiao Guangrong
To: pbonzini@redhat.com, imammedo@redhat.com
Date: Sat, 29 Oct 2016 00:35:37 +0800
Message-Id: <1477672540-27952-2-git-send-email-guangrong.xiao@linux.intel.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1477672540-27952-1-git-send-email-guangrong.xiao@linux.intel.com>
References: <1477672540-27952-1-git-send-email-guangrong.xiao@linux.intel.com>
Subject: [Qemu-devel] [PATCH v3 1/4] nvdimm acpi: prebuild nvdimm devices for available slots
Cc: Xiao Guangrong, ehabkost@redhat.com, kvm@vger.kernel.org, mst@redhat.com,
    gleb@kernel.org, mtosatti@redhat.com, qemu-devel@nongnu.org,
    stefanha@redhat.com, dan.j.williams@intel.com, rth@twiddle.net

For each NVDIMM present or intended to be supported by the platform, the
platform firmware also exposes an ACPI Namespace Device under the root
device. This patch therefore builds nvdimm devices for all slots to
support vNVDIMM hotplug.

Reviewed-by: Stefan Hajnoczi
Signed-off-by: Xiao Guangrong
---
 hw/acpi/nvdimm.c        | 41 ++++++++++++++++++++++++-----------------
 hw/i386/acpi-build.c    |  2 +-
 include/hw/mem/nvdimm.h |  3 ++-
 3 files changed, 27 insertions(+), 19 deletions(-)

diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
index bb896c9..b8a2e62 100644
--- a/hw/acpi/nvdimm.c
+++ b/hw/acpi/nvdimm.c
@@ -961,12 +961,11 @@ static void nvdimm_build_device_dsm(Aml *dev, uint32_t handle)
     aml_append(dev, method);
 }
 
-static void nvdimm_build_nvdimm_devices(GSList *device_list, Aml *root_dev)
+static void nvdimm_build_nvdimm_devices(Aml *root_dev, uint32_t ram_slots)
 {
-    for (; device_list; device_list = device_list->next) {
-        DeviceState *dev = device_list->data;
-        int slot = object_property_get_int(OBJECT(dev), PC_DIMM_SLOT_PROP,
-                                           NULL);
+    uint32_t slot;
+
+    for (slot = 0; slot < ram_slots; slot++) {
         uint32_t handle = nvdimm_slot_to_handle(slot);
         Aml *nvdimm_dev;
 
@@ -987,9 +986,9 @@ static void nvdimm_build_nvdimm_devices(GSList *device_list, Aml *root_dev)
     }
 }
 
-static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
-                              GArray *table_data, BIOSLinker *linker,
-                              GArray *dsm_dma_arrea)
+static void nvdimm_build_ssdt(GArray *table_offsets, GArray *table_data,
+                              BIOSLinker *linker, GArray *dsm_dma_arrea,
+                              uint32_t ram_slots)
 {
     Aml *ssdt, *sb_scope, *dev;
     int mem_addr_offset, nvdimm_ssdt;
@@ -1021,7 +1020,7 @@ static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
     /* 0 is reserved for root device. */
     nvdimm_build_device_dsm(dev, 0);
 
-    nvdimm_build_nvdimm_devices(device_list, dev);
+    nvdimm_build_nvdimm_devices(dev, ram_slots);
 
     aml_append(sb_scope, dev);
     aml_append(ssdt, sb_scope);
@@ -1046,17 +1045,25 @@ static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
 }
 
 void nvdimm_build_acpi(GArray *table_offsets, GArray *table_data,
-                       BIOSLinker *linker, GArray *dsm_dma_arrea)
+                       BIOSLinker *linker, GArray *dsm_dma_arrea,
+                       uint32_t ram_slots)
 {
     GSList *device_list;
 
-    /* no NVDIMM device is plugged. */
     device_list = nvdimm_get_plugged_device_list();
-    if (!device_list) {
-        return;
+
+    /* NVDIMM device is plugged. */
+    if (device_list) {
+        nvdimm_build_nfit(device_list, table_offsets, table_data, linker);
+        g_slist_free(device_list);
+    }
+
+    /*
+     * NVDIMM device is allowed to be plugged only if there is available
+     * slot.
+     */
+    if (ram_slots) {
+        nvdimm_build_ssdt(table_offsets, table_data, linker, dsm_dma_arrea,
+                          ram_slots);
     }
-    nvdimm_build_nfit(device_list, table_offsets, table_data, linker);
-    nvdimm_build_ssdt(device_list, table_offsets, table_data, linker,
-                      dsm_dma_arrea);
-    g_slist_free(device_list);
 }
diff --git a/hw/i386/acpi-build.c b/hw/i386/acpi-build.c
index e999654..6ae4769 100644
--- a/hw/i386/acpi-build.c
+++ b/hw/i386/acpi-build.c
@@ -2767,7 +2767,7 @@ void acpi_build(AcpiBuildTables *tables, MachineState *machine)
     }
     if (pcms->acpi_nvdimm_state.is_enabled) {
         nvdimm_build_acpi(table_offsets, tables_blob, tables->linker,
-                          pcms->acpi_nvdimm_state.dsm_mem);
+                          pcms->acpi_nvdimm_state.dsm_mem, machine->ram_slots);
     }
 
     /* Add tables supplied by user (if any) */
diff --git a/include/hw/mem/nvdimm.h b/include/hw/mem/nvdimm.h
index 1cfe9e0..63a2b20 100644
--- a/include/hw/mem/nvdimm.h
+++ b/include/hw/mem/nvdimm.h
@@ -112,5 +112,6 @@ typedef struct AcpiNVDIMMState AcpiNVDIMMState;
 void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
                             FWCfgState *fw_cfg, Object *owner);
 void nvdimm_build_acpi(GArray *table_offsets, GArray *table_data,
-                       BIOSLinker *linker, GArray *dsm_dma_arrea);
+                       BIOSLinker *linker, GArray *dsm_dma_arrea,
+                       uint32_t ram_slots);
 #endif
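To make the intent concrete: the prebuild approach iterates over every memory
slot the machine was configured with and derives a stable handle per slot,
instead of walking only the NVDIMMs plugged at boot, so a device hot-plugged
later already finds its ACPI namespace device in place. The standalone sketch
below illustrates just that loop; the slot + 1 handle mapping and the NV%02X
naming are assumptions, not code taken from this patch.

/*
 * Illustrative sketch only, not QEMU code: models the per-slot prebuild
 * loop.  The slot + 1 handle mapping (handle 0 reserved for the root
 * device) and the NV%02X naming are assumptions, not taken from the diff.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t slot_to_handle(uint32_t slot)
{
    return slot + 1;    /* assumed: handle 0 is reserved for the root device */
}

static void build_nvdimm_device(uint32_t slot, uint32_t handle)
{
    /* Stand-in for emitting an AML Device() with its _DSM method. */
    printf("Device(NV%02" PRIX32 ") -> handle %" PRIu32 "\n", slot, handle);
}

int main(void)
{
    uint32_t ram_slots = 4;    /* hypothetical "-m ...,slots=4" machine */

    /*
     * Build a namespace device for every possible slot, plugged or not,
     * mirroring nvdimm_build_nvdimm_devices(dev, ram_slots) above.
     */
    for (uint32_t slot = 0; slot < ram_slots; slot++) {
        build_nvdimm_device(slot, slot_to_handle(slot));
    }
    return 0;
}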