From patchwork Tue Mar  1 10:56:09 2016
X-Patchwork-Submitter: Xiao Guangrong
X-Patchwork-Id: 8464321
From: Xiao Guangrong <guangrong.xiao@linux.intel.com>
To: pbonzini@redhat.com, imammedo@redhat.com
Date: Tue, 1 Mar 2016 18:56:09 +0800
Message-Id: <1456829771-71553-8-git-send-email-guangrong.xiao@linux.intel.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1456829771-71553-1-git-send-email-guangrong.xiao@linux.intel.com>
References: <1456829771-71553-1-git-send-email-guangrong.xiao@linux.intel.com>
Cc: Xiao Guangrong <guangrong.xiao@linux.intel.com>, ehabkost@redhat.com,
    kvm@vger.kernel.org, mst@redhat.com, gleb@kernel.org, mtosatti@redhat.com,
    qemu-devel@nongnu.org, stefanha@redhat.com, dan.j.williams@intel.com,
    rth@twiddle.net
Subject: [Qemu-devel] [PATCH 7/9] nvdimm acpi: let qemu handle _DSM method

If the DSM memory is successfully patched, we let QEMU fully emulate
the _DSM method.

This patch saves the _DSM input parameters into the DSM memory, tells
QEMU the DSM memory address, then fetches the result from the DSM
memory.

Signed-off-by: Xiao Guangrong <guangrong.xiao@linux.intel.com>
---
 hw/acpi/nvdimm.c | 117 ++++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 112 insertions(+), 5 deletions(-)

diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c
index 90032e5..781f6c1 100644
--- a/hw/acpi/nvdimm.c
+++ b/hw/acpi/nvdimm.c
@@ -372,6 +372,24 @@ static void nvdimm_build_nfit(GSList *device_list, GArray *table_offsets,
     g_array_free(structures, true);
 }
 
+struct NvdimmDsmIn {
+    uint32_t handle;
+    uint32_t revision;
+    uint32_t function;
+    /* the remaining size in the page is used by arg3. */
+    union {
+        uint8_t arg3[0];
+    };
+} QEMU_PACKED;
+typedef struct NvdimmDsmIn NvdimmDsmIn;
+
+struct NvdimmDsmOut {
+    /* the size of the buffer filled by QEMU. */
+    uint32_t len;
+    uint8_t data[0];
+} QEMU_PACKED;
+typedef struct NvdimmDsmOut NvdimmDsmOut;
+
 static uint64_t
 nvdimm_dsm_read(void *opaque, hwaddr addr, unsigned size)
 {
@@ -411,11 +429,18 @@ void nvdimm_init_acpi_state(AcpiNVDIMMState *state, MemoryRegion *io,
 
 static void nvdimm_build_common_dsm(Aml *dev)
 {
-    Aml *method, *ifctx, *function;
+    Aml *method, *ifctx, *function, *dsm_mem, *unpatched, *result_size;
     uint8_t byte_list[1];
 
-    method = aml_method(NVDIMM_COMMON_DSM, 4, AML_NOTSERIALIZED);
+    method = aml_method(NVDIMM_COMMON_DSM, 4, AML_SERIALIZED);
     function = aml_arg(2);
+    dsm_mem = aml_name(NVDIMM_ACPI_MEM_ADDR);
+
+    /*
+     * do not support any method if the DSM memory address has not been
+     * patched.
+     */
+    unpatched = aml_if(aml_equal(dsm_mem, aml_int(0x0)));
 
     /*
      * function 0 is called to inquire what functions are supported by
@@ -424,12 +449,36 @@ static void nvdimm_build_common_dsm(Aml *dev)
     ifctx = aml_if(aml_equal(function, aml_int(0)));
     byte_list[0] = 0 /* No function Supported */;
     aml_append(ifctx, aml_return(aml_buffer(1, byte_list)));
-    aml_append(method, ifctx);
+    aml_append(unpatched, ifctx);
 
     /* No function is supported yet. */
     byte_list[0] = 1 /* Not Supported */;
-    aml_append(method, aml_return(aml_buffer(1, byte_list)));
+    aml_append(unpatched, aml_return(aml_buffer(1, byte_list)));
+    aml_append(method, unpatched);
+
+    /*
+     * Currently no function is supported for either the root device or
+     * the NVDIMM devices, so temporarily set the handle to 0x0.
+     */
+    aml_append(method, aml_store(aml_int(0x0), aml_name("HDLE")));
+    aml_append(method, aml_store(aml_arg(1), aml_name("REVS")));
+    aml_append(method, aml_store(aml_arg(2), aml_name("FUNC")));
+
+    /*
+     * tell QEMU the real address of the DSM memory; QEMU then emulates
+     * the method and fills the result into the DSM memory.
+     */
+    aml_append(method, aml_store(dsm_mem, aml_name("NTFI")));
+
+    result_size = aml_local(1);
+    aml_append(method, aml_store(aml_name("RLEN"), result_size));
+    aml_append(method, aml_store(aml_shiftleft(result_size, aml_int(3)),
+                                 result_size));
+    aml_append(method, aml_create_field(aml_name("ODAT"), aml_int(0),
+                                        result_size, "OBUF"));
+    aml_append(method, aml_concatenate(aml_buffer(0, NULL), aml_name("OBUF"),
+                                       aml_arg(6)));
+    aml_append(method, aml_return(aml_arg(6)));
 
     aml_append(dev, method);
 }
@@ -472,7 +521,7 @@ static void nvdimm_build_nvdimm_devices(GSList *device_list, Aml *root_dev)
 static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
                               GArray *table_data, GArray *linker)
 {
-    Aml *ssdt, *sb_scope, *dev;
+    Aml *ssdt, *sb_scope, *dev, *field;
     int mem_addr_offset, nvdimm_ssdt;
 
     acpi_add_table(table_offsets, table_data);
@@ -497,6 +546,64 @@ static void nvdimm_build_ssdt(GSList *device_list, GArray *table_offsets,
      */
     aml_append(dev, aml_name_decl("_HID", aml_string("ACPI0012")));
 
+    /* map DSM memory and IO into ACPI namespace. */
+    aml_append(dev, aml_operation_region("NPIO", AML_SYSTEM_IO,
+               aml_int(NVDIMM_ACPI_IO_BASE), NVDIMM_ACPI_IO_LEN));
+    aml_append(dev, aml_operation_region("NRAM", AML_SYSTEM_MEMORY,
+               aml_name(NVDIMM_ACPI_MEM_ADDR), TARGET_PAGE_SIZE));
+
+    /*
+     * DSM notifier:
+     * NTFI: write the address of the DSM memory and notify QEMU to
+     *       emulate the access.
+     *
+     * It is an IO port, so accessing it causes a VM-exit and control is
+     * transferred to QEMU.
+     */
+    field = aml_field("NPIO", AML_DWORD_ACC, AML_NOLOCK, AML_PRESERVE);
+    aml_append(field, aml_named_field("NTFI",
+               sizeof(uint32_t) * BITS_PER_BYTE));
+    aml_append(dev, field);
+
+    /*
+     * DSM input:
+     * @HDLE: store the device's handle; it is zero if the _DSM call
+     *        happens on the NVDIMM Root Device.
+     * @REVS: store the Arg1 of the _DSM call.
+     * @FUNC: store the Arg2 of the _DSM call.
+     * @ARG3: store the Arg3 of the _DSM call.
+     *
+     * They are a RAM mapping on the host, so these accesses never cause
+     * a VM-exit.
+     */
+    field = aml_field("NRAM", AML_DWORD_ACC, AML_NOLOCK, AML_PRESERVE);
+    aml_append(field, aml_named_field("HDLE",
+               sizeof(typeof_field(NvdimmDsmIn, handle)) * BITS_PER_BYTE));
+    aml_append(field, aml_named_field("REVS",
+               sizeof(typeof_field(NvdimmDsmIn, revision)) * BITS_PER_BYTE));
+    aml_append(field, aml_named_field("FUNC",
+               sizeof(typeof_field(NvdimmDsmIn, function)) * BITS_PER_BYTE));
+    aml_append(field, aml_named_field("ARG3",
+               (TARGET_PAGE_SIZE - offsetof(NvdimmDsmIn, arg3)) *
+               BITS_PER_BYTE));
+    aml_append(dev, field);
+
+    /*
+     * DSM output:
+     * @RLEN: the size of the buffer filled by QEMU.
+     * @ODAT: the buffer QEMU uses to store the result.
+     *
+     * Since the page is reused for both input and output, the input data
+     * is lost once the new result is stored into @ODAT.
+     */
+    field = aml_field("NRAM", AML_DWORD_ACC, AML_NOLOCK, AML_PRESERVE);
+    aml_append(field, aml_named_field("RLEN",
+               sizeof(typeof_field(NvdimmDsmOut, len)) * BITS_PER_BYTE));
+    aml_append(field, aml_named_field("ODAT",
+               (TARGET_PAGE_SIZE - offsetof(NvdimmDsmOut, data)) *
+               BITS_PER_BYTE));
+    aml_append(dev, field);
+
     nvdimm_build_common_dsm(dev);
     nvdimm_build_device_dsm(dev);
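
For reference, the QEMU side that consumes the NTFI write is not part of this
patch. The sketch below only illustrates how such a handler could read the
NvdimmDsmIn page saved by the AML and fill NvdimmDsmOut; the function name
nvdimm_dsm_write() and the "Not Supported" reply are assumptions for
illustration, not the series' actual implementation. It relies on the
NvdimmDsmIn/NvdimmDsmOut types added above and on QEMU's existing
cpu_physical_memory_read()/cpu_physical_memory_write() and byte-swap helpers,
with the usual hw/acpi includes in place.

/*
 * Illustrative sketch only (not part of this patch): a possible write
 * handler for the NTFI IO port.  The AML stores the guest-physical
 * address of the DSM page into NTFI; on that VM-exit QEMU reads the
 * input saved by the AML (HDLE/REVS/FUNC/ARG3), emulates the function,
 * and writes the result (RLEN/ODAT) back into the same page.
 */
static void nvdimm_dsm_write(void *opaque, hwaddr addr,
                             uint64_t val, unsigned size)
{
    hwaddr dsm_mem_addr = val;   /* the value the AML stored into NTFI */
    NvdimmDsmIn in;
    uint8_t buf[sizeof(NvdimmDsmOut) + sizeof(uint8_t)];
    NvdimmDsmOut *out = (NvdimmDsmOut *)buf;

    /* fetch the _DSM input parameters saved by the AML code. */
    cpu_physical_memory_read(dsm_mem_addr, &in, sizeof(in));
    le32_to_cpus(&in.handle);
    le32_to_cpus(&in.revision);
    le32_to_cpus(&in.function);

    /*
     * No function is emulated yet, so reply with a one-byte status,
     * mirroring the "Not Supported" buffer returned by the unpatched AML.
     */
    out->len = cpu_to_le32(sizeof(uint8_t));
    out->data[0] = 1 /* Not Supported */;

    /* fill the result; the AML then reads RLEN/ODAT and returns it. */
    cpu_physical_memory_write(dsm_mem_addr, buf, sizeof(buf));
}

This mirrors the protocol described in the comments above: the RAM-mapped
NRAM page carries the bulk data in both directions, while the single NTFI
IO write is the only access that traps to QEMU.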