From patchwork Wed Feb 6 05:25:54 2019
X-Patchwork-Submitter: Shivaprasad G Bhat
X-Patchwork-Id: 10799433
From: Shivaprasad G Bhat
To: qemu-devel@nongnu.org
Cc: xiaoguangrong.eric@gmail.com, sbhat@linux.ibm.com, mst@redhat.com, bharata@linux.ibm.com, qemu-ppc@nongnu.org, imammedo@redhat.com, vaibhav@linux.ibm.com, david@gibson.dropbear.id.au
Date: Tue, 05 Feb 2019 23:25:54 -0600
Message-Id: <154943065253.27958.18316807886952418325.stgit@lep8c.aus.stglabs.ibm.com>
In-Reply-To: <154943058200.27958.11497653677605446596.stgit@lep8c.aus.stglabs.ibm.com>
Subject: [Qemu-devel] [RFC PATCH 1/4] mem: make nvdimm_device_list global

nvdimm_device_list is required by subsequent patches for walking the list of NVDIMM devices. Move it to the common nvdimm code.

Signed-off-by: Shivaprasad G Bhat
Reviewed-by: Igor Mammedov

--- hw/acpi/nvdimm.c | 27 --------------------------- hw/mem/nvdimm.c | 27 +++++++++++++++++++++++++++ include/hw/mem/nvdimm.h | 2 ++ 3 files changed, 29 insertions(+), 27 deletions(-) diff --git a/hw/acpi/nvdimm.c b/hw/acpi/nvdimm.c index e53b2cb681..34322298c2 100644 --- a/hw/acpi/nvdimm.c +++ b/hw/acpi/nvdimm.c @@ -33,33 +33,6 @@ #include "hw/nvram/fw_cfg.h" #include "hw/mem/nvdimm.h" -static int nvdimm_device_list(Object *obj, void *opaque) -{ - GSList **list = opaque; - - if (object_dynamic_cast(obj, TYPE_NVDIMM)) { - *list = g_slist_append(*list, DEVICE(obj)); - } - - object_child_foreach(obj, nvdimm_device_list, opaque); - return 0; -} - -/* - * inquire NVDIMM devices and link them into the list which is - * returned to the caller.
- * - * Note: it is the caller's responsibility to free the list to avoid - * memory leak. - */ -static GSList *nvdimm_get_device_list(void) -{ - GSList *list = NULL; - - object_child_foreach(qdev_get_machine(), nvdimm_device_list, &list); - return list; -} - #define NVDIMM_UUID_LE(a, b, c, d0, d1, d2, d3, d4, d5, d6, d7) \ { (a) & 0xff, ((a) >> 8) & 0xff, ((a) >> 16) & 0xff, ((a) >> 24) & 0xff, \ (b) & 0xff, ((b) >> 8) & 0xff, (c) & 0xff, ((c) >> 8) & 0xff, \ diff --git a/hw/mem/nvdimm.c b/hw/mem/nvdimm.c index bf2adf5e16..f221ec7a9a 100644 --- a/hw/mem/nvdimm.c +++ b/hw/mem/nvdimm.c @@ -29,6 +29,33 @@ #include "hw/mem/nvdimm.h" #include "hw/mem/memory-device.h" +static int nvdimm_device_list(Object *obj, void *opaque) +{ + GSList **list = opaque; + + if (object_dynamic_cast(obj, TYPE_NVDIMM)) { + *list = g_slist_append(*list, DEVICE(obj)); + } + + object_child_foreach(obj, nvdimm_device_list, opaque); + return 0; +} + +/* + * inquire NVDIMM devices and link them into the list which is + * returned to the caller. + * + * Note: it is the caller's responsibility to free the list to avoid + * memory leak. + */ +GSList *nvdimm_get_device_list(void) +{ + GSList *list = NULL; + + object_child_foreach(qdev_get_machine(), nvdimm_device_list, &list); + return list; +} + static void nvdimm_get_label_size(Object *obj, Visitor *v, const char *name, void *opaque, Error **errp) { diff --git a/include/hw/mem/nvdimm.h b/include/hw/mem/nvdimm.h index c5c9b3c7f8..e8b086f2df 100644 --- a/include/hw/mem/nvdimm.h +++ b/include/hw/mem/nvdimm.h @@ -150,4 +150,6 @@ void nvdimm_build_acpi(GArray *table_offsets, GArray *table_data, uint32_t ram_slots); void nvdimm_plug(AcpiNVDIMMState *state); void nvdimm_acpi_plug_cb(HotplugHandler *hotplug_dev, DeviceState *dev); +GSList *nvdimm_get_device_list(void); + #endif
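
As a reader's aid between the patches: a minimal sketch (not part of the series) of how a caller elsewhere in QEMU might consume the now-exported nvdimm_get_device_list(). The helper name example_count_nvdimms is hypothetical; the list walk and g_slist_free() pattern mirrors how the sPAPR patches later in this series use the function.

/* Minimal sketch: walk all NVDIMM devices and free the list afterwards,
 * as the function's comment requires.  Keep the list head in a separate
 * variable so the whole list can still be freed after iterating. */
#include "qemu/osdep.h"
#include "hw/mem/nvdimm.h"

static unsigned example_count_nvdimms(void)   /* hypothetical helper */
{
    GSList *list = nvdimm_get_device_list();
    GSList *item;
    unsigned count = 0;

    for (item = list; item; item = item->next) {
        NVDIMMDevice *nvdimm = item->data;

        (void)nvdimm;   /* a real caller would inspect device properties here */
        count++;
    }
    g_slist_free(list);
    return count;
}
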
From patchwork Wed Feb 6 05:26:14 2019
X-Patchwork-Submitter: Shivaprasad G Bhat
X-Patchwork-Id: 10799441
From: Shivaprasad G Bhat
To: qemu-devel@nongnu.org
Cc: xiaoguangrong.eric@gmail.com, sbhat@linux.ibm.com, mst@redhat.com, bharata@linux.ibm.com, qemu-ppc@nongnu.org, imammedo@redhat.com, vaibhav@linux.ibm.com, david@gibson.dropbear.id.au
Date: Tue, 05 Feb 2019 23:26:14 -0600
Message-Id: <154943076146.27958.8619995020189724984.stgit@lep8c.aus.stglabs.ibm.com>
In-Reply-To: <154943058200.27958.11497653677605446596.stgit@lep8c.aus.stglabs.ibm.com>
Subject: [Qemu-devel] [RFC PATCH 2/4] mem: implement memory_device_set_region_size

memory_device_set_region_size is required by the PAPR NVDIMM implementation to align the region size to the SCM block size.

Signed-off-by: Shivaprasad G Bhat

--- hw/mem/memory-device.c | 15 +++++++++++++++ include/hw/mem/memory-device.h | 2 ++ 2 files changed, 17 insertions(+) diff --git a/hw/mem/memory-device.c b/hw/mem/memory-device.c index 5f2c408036..ad0419e203 100644 --- a/hw/mem/memory-device.c +++ b/hw/mem/memory-device.c @@ -330,6 +330,21 @@ uint64_t memory_device_get_region_size(const MemoryDeviceState *md, return memory_region_size(mr); } +void memory_device_set_region_size(const MemoryDeviceState *md, + uint64_t size, Error **errp) +{ + const MemoryDeviceClass *mdc = MEMORY_DEVICE_GET_CLASS(md); + MemoryRegion *mr; + + /* dropping const here is fine as we don't touch the memory region */ + mr = mdc->get_memory_region((MemoryDeviceState *)md, errp); + if (!mr) { + return; + } + + memory_region_set_size(mr, size); +} + static const TypeInfo memory_device_info = { .name = TYPE_MEMORY_DEVICE, .parent = TYPE_INTERFACE, diff --git a/include/hw/mem/memory-device.h b/include/hw/mem/memory-device.h index 0293a96abb..ba9b72fd28 100644 --- a/include/hw/mem/memory-device.h +++ b/include/hw/mem/memory-device.h @@ -103,5 +103,7 @@ void memory_device_plug(MemoryDeviceState *md, MachineState *ms); void memory_device_unplug(MemoryDeviceState *md, MachineState *ms); uint64_t memory_device_get_region_size(const MemoryDeviceState *md, Error **errp); +void memory_device_set_region_size(const MemoryDeviceState *md, + uint64_t size, Error **errp); #endif
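
A short sketch of how the new memory_device_set_region_size() is intended to be called: shrink a device's memory region to an aligned size and propagate any error from the class hook. The helper name and the generic block_size parameter are illustrative; patch 3 makes the equivalent call with SPAPR_MINIMUM_SCM_BLOCK_SIZE.

/* Illustrative caller, not part of the series: align a memory device's
 * region down to a block size and report failures to the caller. */
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "hw/mem/memory-device.h"

static void example_align_region_size(MemoryDeviceState *md, uint64_t size,
                                      uint64_t block_size, Error **errp)
{
    Error *local_err = NULL;

    memory_device_set_region_size(md, QEMU_ALIGN_DOWN(size, block_size),
                                  &local_err);
    if (local_err) {
        error_propagate(errp, local_err);
    }
}
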
From patchwork Wed Feb 6 05:26:27 2019
X-Patchwork-Submitter: Shivaprasad G Bhat
X-Patchwork-Id: 10799439
From: Shivaprasad G Bhat
To: qemu-devel@nongnu.org
Cc: xiaoguangrong.eric@gmail.com, sbhat@linux.ibm.com, mst@redhat.com, bharata@linux.ibm.com, qemu-ppc@nongnu.org, imammedo@redhat.com, vaibhav@linux.ibm.com, david@gibson.dropbear.id.au
Date: Tue, 05 Feb 2019 23:26:27 -0600
Message-Id: <154943078167.27958.5009288263168039462.stgit@lep8c.aus.stglabs.ibm.com>
In-Reply-To: <154943058200.27958.11497653677605446596.stgit@lep8c.aus.stglabs.ibm.com>
Subject: [Qemu-devel] [RFC PATCH 3/4] spapr: Add NVDIMM device support

Add support for NVDIMM devices on sPAPR. Piggyback on the existing nvdimm device interface in QEMU to support virtual NVDIMM devices for Power (this may have to be revisited later). Create the required DT entries for the device (some entries have dummy values right now). The patch creates the required DT node and sends a hotplug interrupt to the guest. The guest is expected to take the normal DR resource add path in response and start issuing PAPR SCM hcalls.

Usage: add nvdimm=on to the QEMU machine argument, e.g.:

 -machine pseries,nvdimm=on

For coldplug, add the device on the QEMU command line as shown below:

 -object memory-backend-file,id=memnvdimm0,prealloc=yes,mem-path=/tmp/nvdimm0.img,share=yes,size=512m
 -device nvdimm,label-size=128k,memdev=memnvdimm0,id=nvdimm0,slot=0

For hotplug, add the device from the monitor as below:

 object_add memory-backend-file,id=memnvdimm0,prealloc=yes,mem-path=/tmp/nvdimm0.img,share=yes,size=512m
 device_add nvdimm,label-size=128k,memdev=memnvdimm0,id=nvdimm0,slot=0

Signed-off-by: Shivaprasad G Bhat
Signed-off-by: Bharata B Rao [Early implementation]

--- default-configs/ppc64-softmmu.mak | 1 hw/ppc/spapr.c | 212 +++++++++++++++++++++++++++++++++++-- hw/ppc/spapr_drc.c | 17 +++ hw/ppc/spapr_events.c | 4 + include/hw/ppc/spapr.h | 10 ++ include/hw/ppc/spapr_drc.h | 9 ++ 6 files changed, 241 insertions(+), 12 deletions(-) diff --git a/default-configs/ppc64-softmmu.mak b/default-configs/ppc64-softmmu.mak index 7f34ad0528..b6e1aa5125 100644 --- a/default-configs/ppc64-softmmu.mak +++ b/default-configs/ppc64-softmmu.mak @@ -20,4 +20,5 @@ CONFIG_XIVE=$(CONFIG_PSERIES) CONFIG_XIVE_SPAPR=$(CONFIG_PSERIES) CONFIG_MEM_DEVICE=y CONFIG_DIMM=y +CONFIG_NVDIMM=y CONFIG_SPAPR_RNG=y diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c index 0fcdd35cbe..7e7a1a8041 100644 --- a/hw/ppc/spapr.c +++ b/hw/ppc/spapr.c @@ -73,6 +73,7 @@ #include "qemu/cutils.h" #include "hw/ppc/spapr_cpu_core.h" #include "hw/mem/memory-device.h" +#include "hw/mem/nvdimm.h" #include @@ -690,6 +691,7 @@ static int spapr_populate_drmem_v2(sPAPRMachineState *spapr, void *fdt, uint8_t *int_buf, *cur_index, buf_len; int ret; uint64_t lmb_size = SPAPR_MEMORY_BLOCK_SIZE; + uint64_t scm_block_size = SPAPR_MINIMUM_SCM_BLOCK_SIZE; uint64_t addr, cur_addr, size; uint32_t nr_boot_lmbs = (machine->device_memory->base / lmb_size); uint64_t mem_end = machine->device_memory->base + @@ -726,15 +728,24 @@ static int spapr_populate_drmem_v2(sPAPRMachineState *spapr, void *fdt, nr_entries++; } - /* Entry for DIMM */ - drc = spapr_drc_by_id(TYPE_SPAPR_DRC_LMB, addr / lmb_size); - g_assert(drc); - elem = spapr_get_drconf_cell(size / lmb_size, addr, - spapr_drc_index(drc), node, - SPAPR_LMB_FLAGS_ASSIGNED); + if (info->value->type == MEMORY_DEVICE_INFO_KIND_NVDIMM) { + /* Entry for NVDIMM */ + drc = spapr_drc_by_id(TYPE_SPAPR_DRC_PMEM, addr / scm_block_size); + g_assert(drc); + elem = spapr_get_drconf_cell(size / scm_block_size, addr, + spapr_drc_index(drc), -1, 0); + cur_addr = ROUND_UP(addr + size, scm_block_size); + } else { + /* Entry for DIMM */ + drc = spapr_drc_by_id(TYPE_SPAPR_DRC_LMB, addr / lmb_size); + g_assert(drc); + elem = spapr_get_drconf_cell(size / lmb_size, addr, + spapr_drc_index(drc), node, + SPAPR_LMB_FLAGS_ASSIGNED); + cur_addr = addr + size; + } QSIMPLEQ_INSERT_TAIL(&drconf_queue, elem, entry); nr_entries++; - cur_addr = addr + size; } /* Entry for remaining hotpluggable area */ @@ -1225,6 +1236,42 @@ static void spapr_dt_hypervisor(sPAPRMachineState *spapr, void *fdt) } } +static int spapr_populate_nvdimm_node(void *fdt, int fdt_offset, + uint32_t node, uint64_t addr, + uint64_t size, uint64_t label_size); +static void spapr_create_nvdimm(void *fdt) +{ +
int offset = fdt_subnode_offset(fdt, 0, "persistent-memory"); + GSList *dimms = NULL; + + if (offset < 0) { + offset = fdt_add_subnode(fdt, 0, "persistent-memory"); + _FDT(offset); + _FDT((fdt_setprop_cell(fdt, offset, "#address-cells", 0x2))); + _FDT((fdt_setprop_cell(fdt, offset, "#size-cells", 0x0))); + _FDT((fdt_setprop_string(fdt, offset, "name", "persistent-memory"))); + _FDT((fdt_setprop_string(fdt, offset, "device_type", + "ibm,persistent-memory"))); + } + + /*NB : Add drc-info array here */ + + /* Create DT entries for cold plugged NVDIMM devices */ + dimms = nvdimm_get_device_list(); + for (; dimms; dimms = dimms->next) { + NVDIMMDevice *nvdimm = dimms->data; + PCDIMMDevice *di = PC_DIMM(nvdimm); + uint64_t lsize = nvdimm->label_size; + int size = object_property_get_int(OBJECT(nvdimm), PC_DIMM_SIZE_PROP, + NULL); + + spapr_populate_nvdimm_node(fdt, offset, di->node, di->addr, + size, lsize); + } + g_slist_free(dimms); + return; +} + static void *spapr_build_fdt(sPAPRMachineState *spapr) { MachineState *machine = MACHINE(spapr); @@ -1348,6 +1395,11 @@ static void *spapr_build_fdt(sPAPRMachineState *spapr) exit(1); } + /* NVDIMM devices */ + if (spapr->nvdimm_enabled) { + spapr_create_nvdimm(fdt); + } + return fdt; } @@ -3143,6 +3195,20 @@ static void spapr_set_ic_mode(Object *obj, const char *value, Error **errp) } } +static bool spapr_get_nvdimm(Object *obj, Error **errp) +{ + sPAPRMachineState *spapr = SPAPR_MACHINE(obj); + + return spapr->nvdimm_enabled; +} + +static void spapr_set_nvdimm(Object *obj, bool value, Error **errp) +{ + sPAPRMachineState *spapr = SPAPR_MACHINE(obj); + + spapr->nvdimm_enabled = value; +} + static void spapr_instance_init(Object *obj) { sPAPRMachineState *spapr = SPAPR_MACHINE(obj); @@ -3188,6 +3254,11 @@ static void spapr_instance_init(Object *obj) object_property_set_description(obj, "ic-mode", "Specifies the interrupt controller mode (xics, xive, dual)", NULL); + object_property_add_bool(obj, "nvdimm", + spapr_get_nvdimm, spapr_set_nvdimm, NULL); + object_property_set_description(obj, "nvdimm", + "Enable support for nvdimm devices", + NULL); } static void spapr_machine_finalizefn(Object *obj) @@ -3267,12 +3338,103 @@ static void spapr_add_lmbs(DeviceState *dev, uint64_t addr_start, uint64_t size, } } +static int spapr_populate_nvdimm_node(void *fdt, int fdt_offset, uint32_t node, + uint64_t addr, uint64_t size, + uint64_t label_size) +{ + int offset; + char buf[40]; + GString *lcode = g_string_sized_new(10); + sPAPRDRConnector *drc; + QemuUUID uuid; + uint32_t drc_idx; + uint32_t associativity[] = { + cpu_to_be32(0x4), /* length */ + cpu_to_be32(0x0), cpu_to_be32(0x0), + cpu_to_be32(0x0), cpu_to_be32(node) + }; + + drc = spapr_drc_by_id(TYPE_SPAPR_DRC_PMEM, + addr / SPAPR_MINIMUM_SCM_BLOCK_SIZE); + g_assert(drc); + + drc_idx = spapr_drc_index(drc); + + sprintf(buf, "pmem@%x", drc_idx); + offset = fdt_add_subnode(fdt, fdt_offset, buf); + _FDT(offset); + + _FDT((fdt_setprop_cell(fdt, offset, "reg", drc_idx))); + _FDT((fdt_setprop_string(fdt, offset, "compatible", "ibm,pmemory"))); + _FDT((fdt_setprop_string(fdt, offset, "name", "pmem"))); + _FDT((fdt_setprop_string(fdt, offset, "device_type", "ibm,pmemory"))); + + /*NB : Supposed to be random strings. Currently empty 10 strings! 
*/ + _FDT((fdt_setprop(fdt, offset, "ibm,loc-code", lcode->str, lcode->len))); + g_string_free(lcode, TRUE); + + _FDT((fdt_setprop(fdt, offset, "ibm,associativity", associativity, + sizeof(associativity)))); + g_random_set_seed(drc_idx); + qemu_uuid_generate(&uuid); + + qemu_uuid_unparse(&uuid, buf); + _FDT((fdt_setprop_string(fdt, offset, "ibm,unit-guid", buf))); + + _FDT((fdt_setprop_cell(fdt, offset, "ibm,my-drc-index", drc_idx))); + + /*NB : What it should be? */ + _FDT(fdt_setprop_cell(fdt, offset, "ibm,latency-attribute", 828)); + + _FDT((fdt_setprop_u64(fdt, offset, "ibm,block-size", + SPAPR_MINIMUM_SCM_BLOCK_SIZE))); + _FDT((fdt_setprop_u64(fdt, offset, "ibm,number-of-blocks", + size / SPAPR_MINIMUM_SCM_BLOCK_SIZE))); + _FDT((fdt_setprop_cell(fdt, offset, "ibm,metadata-size", label_size))); + + return offset; +} + +static void spapr_add_nvdimm(DeviceState *dev, uint64_t addr, + uint64_t size, uint32_t node, + Error **errp) +{ + sPAPRMachineState *spapr = SPAPR_MACHINE(qdev_get_hotplug_handler(dev)); + sPAPRDRConnector *drc; + bool hotplugged = spapr_drc_hotplugged(dev); + NVDIMMDevice *nvdimm = NVDIMM(OBJECT(dev)); + void *fdt; + int fdt_offset, fdt_size; + Error *local_err = NULL; + + spapr_dr_connector_new(OBJECT(spapr), TYPE_SPAPR_DRC_PMEM, + addr / SPAPR_MINIMUM_SCM_BLOCK_SIZE); + drc = spapr_drc_by_id(TYPE_SPAPR_DRC_PMEM, + addr / SPAPR_MINIMUM_SCM_BLOCK_SIZE); + g_assert(drc); + + fdt = create_device_tree(&fdt_size); + fdt_offset = spapr_populate_nvdimm_node(fdt, 0, node, addr, + size, nvdimm->label_size); + + spapr_drc_attach(drc, dev, fdt, fdt_offset, &local_err); + if (local_err) { + error_propagate(errp, local_err); + return; + } + + if (hotplugged) { + spapr_hotplug_req_add_by_index(drc); + } +} + static void spapr_memory_plug(HotplugHandler *hotplug_dev, DeviceState *dev, Error **errp) { Error *local_err = NULL; sPAPRMachineState *ms = SPAPR_MACHINE(hotplug_dev); PCDIMMDevice *dimm = PC_DIMM(dev); + bool is_nvdimm = object_dynamic_cast(OBJECT(dev), TYPE_NVDIMM); uint64_t size, addr; uint32_t node; @@ -3291,9 +3453,14 @@ static void spapr_memory_plug(HotplugHandler *hotplug_dev, DeviceState *dev, node = object_property_get_uint(OBJECT(dev), PC_DIMM_NODE_PROP, &error_abort); - spapr_add_lmbs(dev, addr, size, node, - spapr_ovec_test(ms->ov5_cas, OV5_HP_EVT), - &local_err); + if (!is_nvdimm) { + spapr_add_lmbs(dev, addr, size, node, + spapr_ovec_test(ms->ov5_cas, OV5_HP_EVT), + &local_err); + } else { + spapr_add_nvdimm(dev, addr, size, node, &local_err); + } + if (local_err) { goto out_unplug; } @@ -3311,6 +3478,7 @@ static void spapr_memory_pre_plug(HotplugHandler *hotplug_dev, DeviceState *dev, { const sPAPRMachineClass *smc = SPAPR_MACHINE_GET_CLASS(hotplug_dev); sPAPRMachineState *spapr = SPAPR_MACHINE(hotplug_dev); + bool is_nvdimm = object_dynamic_cast(OBJECT(dev), TYPE_NVDIMM); PCDIMMDevice *dimm = PC_DIMM(dev); Error *local_err = NULL; uint64_t size; @@ -3328,10 +3496,30 @@ static void spapr_memory_pre_plug(HotplugHandler *hotplug_dev, DeviceState *dev, return; } - if (size % SPAPR_MEMORY_BLOCK_SIZE) { + if (!is_nvdimm && size % SPAPR_MEMORY_BLOCK_SIZE) { error_setg(errp, "Hotplugged memory size must be a multiple of " - "%" PRIu64 " MB", SPAPR_MEMORY_BLOCK_SIZE / MiB); + "%" PRIu64 " MB", SPAPR_MEMORY_BLOCK_SIZE / MiB); return; + } else if (is_nvdimm) { + NVDIMMDevice *nvdimm = NVDIMM(OBJECT(dev)); + if ((nvdimm->label_size + size) % SPAPR_MINIMUM_SCM_BLOCK_SIZE) { + error_setg(errp, "NVDIMM memory size must be a multiple of " + "%" PRIu64 "MB", 
SPAPR_MINIMUM_SCM_BLOCK_SIZE / MiB); + return; + } + if (((nvdimm->label_size + size) / SPAPR_MINIMUM_SCM_BLOCK_SIZE) == 1) { + error_setg(errp, "NVDIMM size must be atleast " + "%" PRIu64 "MB", 2 * SPAPR_MINIMUM_SCM_BLOCK_SIZE / MiB); + return; + } + + /* Align to scm block size, exclude the label */ + memory_device_set_region_size(MEMORY_DEVICE(nvdimm), + QEMU_ALIGN_DOWN(size, SPAPR_MINIMUM_SCM_BLOCK_SIZE), &local_err); + if (local_err) { + error_propagate(errp, local_err); + return; + } } memdev = object_property_get_link(OBJECT(dimm), PC_DIMM_MEMDEV_PROP, diff --git a/hw/ppc/spapr_drc.c b/hw/ppc/spapr_drc.c index 2edb7d1e9c..94ddd102cc 100644 --- a/hw/ppc/spapr_drc.c +++ b/hw/ppc/spapr_drc.c @@ -696,6 +696,16 @@ static void spapr_drc_lmb_class_init(ObjectClass *k, void *data) drck->release = spapr_lmb_release; } +static void spapr_drc_pmem_class_init(ObjectClass *k, void *data) +{ + sPAPRDRConnectorClass *drck = SPAPR_DR_CONNECTOR_CLASS(k); + + drck->typeshift = SPAPR_DR_CONNECTOR_TYPE_SHIFT_PMEM; + drck->typename = "MEM"; + drck->drc_name_prefix = "PMEM "; + drck->release = NULL; +} + static const TypeInfo spapr_dr_connector_info = { .name = TYPE_SPAPR_DR_CONNECTOR, .parent = TYPE_DEVICE, @@ -739,6 +749,12 @@ static const TypeInfo spapr_drc_lmb_info = { .class_init = spapr_drc_lmb_class_init, }; +static const TypeInfo spapr_drc_pmem_info = { + .name = TYPE_SPAPR_DRC_PMEM, + .parent = TYPE_SPAPR_DRC_LOGICAL, + .class_init = spapr_drc_pmem_class_init, +}; + /* helper functions for external users */ sPAPRDRConnector *spapr_drc_by_index(uint32_t index) @@ -1189,6 +1205,7 @@ static void spapr_drc_register_types(void) type_register_static(&spapr_drc_cpu_info); type_register_static(&spapr_drc_pci_info); type_register_static(&spapr_drc_lmb_info); + type_register_static(&spapr_drc_pmem_info); spapr_rtas_register(RTAS_SET_INDICATOR, "set-indicator", rtas_set_indicator); diff --git a/hw/ppc/spapr_events.c b/hw/ppc/spapr_events.c index 32719a1b72..a4fed84346 100644 --- a/hw/ppc/spapr_events.c +++ b/hw/ppc/spapr_events.c @@ -193,6 +193,7 @@ struct rtas_event_log_v6_hp { #define RTAS_LOG_V6_HP_TYPE_SLOT 3 #define RTAS_LOG_V6_HP_TYPE_PHB 4 #define RTAS_LOG_V6_HP_TYPE_PCI 5 +#define RTAS_LOG_V6_HP_TYPE_PMEM 6 uint8_t hotplug_action; #define RTAS_LOG_V6_HP_ACTION_ADD 1 #define RTAS_LOG_V6_HP_ACTION_REMOVE 2 @@ -526,6 +527,9 @@ static void spapr_hotplug_req_event(uint8_t hp_id, uint8_t hp_action, case SPAPR_DR_CONNECTOR_TYPE_CPU: hp->hotplug_type = RTAS_LOG_V6_HP_TYPE_CPU; break; + case SPAPR_DR_CONNECTOR_TYPE_PMEM: + hp->hotplug_type = RTAS_LOG_V6_HP_TYPE_PMEM; + break; default: /* we shouldn't be signaling hotplug events for resources * that don't support them diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h index a947a0a0dc..21a9709afe 100644 --- a/include/hw/ppc/spapr.h +++ b/include/hw/ppc/spapr.h @@ -187,6 +187,7 @@ struct sPAPRMachineState { bool cmd_line_caps[SPAPR_CAP_NUM]; sPAPRCapabilities def, eff, mig; + bool nvdimm_enabled; }; #define H_SUCCESS 0 @@ -798,6 +799,15 @@ int spapr_rtc_import_offset(sPAPRRTCState *rtc, int64_t legacy_offset); #define SPAPR_LMB_FLAGS_DRC_INVALID 0x00000020 #define SPAPR_LMB_FLAGS_RESERVED 0x00000080 +/* + * The nvdimm size should be aligned to SCM block size. + * The SCM block size should be aligned to SPAPR_MEMORY_BLOCK_SIZE + * inorder to have SCM regions not to overlap with dimm memory regions. + * The SCM devices can have variable block sizes. For now, fixing the + * block size to the minimum value. 
+ */ +#define SPAPR_MINIMUM_SCM_BLOCK_SIZE SPAPR_MEMORY_BLOCK_SIZE + void spapr_do_system_reset_on_cpu(CPUState *cs, run_on_cpu_data arg); #define HTAB_SIZE(spapr) (1ULL << ((spapr)->htab_shift)) diff --git a/include/hw/ppc/spapr_drc.h b/include/hw/ppc/spapr_drc.h index f6ff32e7e2..65925d00b1 100644 --- a/include/hw/ppc/spapr_drc.h +++ b/include/hw/ppc/spapr_drc.h @@ -70,6 +70,13 @@ #define SPAPR_DRC_LMB(obj) OBJECT_CHECK(sPAPRDRConnector, (obj), \ TYPE_SPAPR_DRC_LMB) +#define TYPE_SPAPR_DRC_PMEM "spapr-drc-pmem" +#define SPAPR_DRC_PMEM_GET_CLASS(obj) \ + OBJECT_GET_CLASS(sPAPRDRConnectorClass, obj, TYPE_SPAPR_DRC_PMEM) +#define SPAPR_DRC_PMEM_CLASS(klass) \ + OBJECT_CLASS_CHECK(sPAPRDRConnectorClass, klass, TYPE_SPAPR_DRC_PMEM) +#define SPAPR_DRC_PMEM(obj) OBJECT_CHECK(sPAPRDRConnector, (obj), \ + TYPE_SPAPR_DRC_PMEM) /* * Various hotplug types managed by sPAPRDRConnector * @@ -87,6 +94,7 @@ typedef enum { SPAPR_DR_CONNECTOR_TYPE_SHIFT_VIO = 3, SPAPR_DR_CONNECTOR_TYPE_SHIFT_PCI = 4, SPAPR_DR_CONNECTOR_TYPE_SHIFT_LMB = 8, + SPAPR_DR_CONNECTOR_TYPE_SHIFT_PMEM = 9, } sPAPRDRConnectorTypeShift; typedef enum { @@ -96,6 +104,7 @@ typedef enum { SPAPR_DR_CONNECTOR_TYPE_VIO = 1 << SPAPR_DR_CONNECTOR_TYPE_SHIFT_VIO, SPAPR_DR_CONNECTOR_TYPE_PCI = 1 << SPAPR_DR_CONNECTOR_TYPE_SHIFT_PCI, SPAPR_DR_CONNECTOR_TYPE_LMB = 1 << SPAPR_DR_CONNECTOR_TYPE_SHIFT_LMB, + SPAPR_DR_CONNECTOR_TYPE_PMEM = 1 << SPAPR_DR_CONNECTOR_TYPE_SHIFT_PMEM, } sPAPRDRConnectorType; /*
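
Between the patches, a small standalone illustration of the NVDIMM sizing rule that spapr_memory_pre_plug() enforces above: label plus data must be a multiple of the SCM block size and must span at least two SCM blocks. This is a reader's sketch of the constraint, not code from the series; example_nvdimm_size_ok is a hypothetical name.

/* Illustrative only: mirrors the NVDIMM size validation in patch 3.
 * SPAPR_MINIMUM_SCM_BLOCK_SIZE is defined in include/hw/ppc/spapr.h as
 * SPAPR_MEMORY_BLOCK_SIZE. */
#include "qemu/osdep.h"
#include "hw/ppc/spapr.h"

static bool example_nvdimm_size_ok(uint64_t label_size, uint64_t size)
{
    uint64_t total = label_size + size;

    if (total % SPAPR_MINIMUM_SCM_BLOCK_SIZE) {
        return false;           /* not a multiple of the SCM block size */
    }
    if (total / SPAPR_MINIMUM_SCM_BLOCK_SIZE < 2) {
        return false;           /* needs at least two SCM blocks */
    }
    return true;
}
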
From patchwork Wed Feb 6 05:26:41 2019
X-Patchwork-Submitter: Shivaprasad G Bhat
X-Patchwork-Id: 10799437
From: Shivaprasad G Bhat
To: qemu-devel@nongnu.org
Cc: xiaoguangrong.eric@gmail.com, sbhat@linux.ibm.com, mst@redhat.com, bharata@linux.ibm.com, qemu-ppc@nongnu.org, imammedo@redhat.com, vaibhav@linux.ibm.com, david@gibson.dropbear.id.au
Date: Tue, 05 Feb 2019 23:26:41 -0600
Message-Id: <154943079488.27958.9812294887340963535.stgit@lep8c.aus.stglabs.ibm.com>
In-Reply-To: <154943058200.27958.11497653677605446596.stgit@lep8c.aus.stglabs.ibm.com>
Subject: [Qemu-devel] [RFC PATCH 4/4] spapr: Add Hcalls to support PAPR NVDIMM device

This patch implements a few of the hcalls necessary for NVDIMM support. In PAPR, each NVDIMM device comprises multiple SCM (Storage Class Memory) blocks. The guest requests the hypervisor to bind each SCM block of the NVDIMM device using hcalls. SCM block unbind requests can also arrive, for driver errors or unplug (not supported yet). NVDIMM label reads and writes are done through hcalls.

Since each virtual NVDIMM device is divided into multiple SCM blocks, the bind, unbind, and query hcalls can arrive for individual blocks. This doesn't fit well with the QEMU device semantics, where map/unmap happen at whole device/object granularity. The patch therefore doesn't actually bind/unbind in the hcalls, but lets that happen at object_add/object_del time instead. The guest kernel issues bind/unbind requests for a virtual NVDIMM at region granularity, and without interleaving each virtual NVDIMM is presented as a separate region. There is currently no way to configure virtual NVDIMM interleaving for guests, so a partial bind/unbind request for a subset of a virtual NVDIMM's SCM blocks cannot occur, and it is safe to bind/unbind everything during object_add/object_del.

The kernel today does not use the hcalls h_scm_mem_query, h_scm_mem_clear, h_scm_query_logical_mem_binding and h_scm_query_block_mem_binding; they are just stubs in this patch.

Signed-off-by: Shivaprasad G Bhat

--- hw/ppc/spapr_hcall.c | 230 ++++++++++++++++++++++++++++++++++++++++++++++++ include/hw/ppc/spapr.h | 12 ++- 2 files changed, 240 insertions(+), 2 deletions(-) diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c index 17bcaa3822..40553e80d6 100644 --- a/hw/ppc/spapr_hcall.c +++ b/hw/ppc/spapr_hcall.c @@ -3,11 +3,13 @@ #include "sysemu/hw_accel.h" #include "sysemu/sysemu.h" #include "qemu/log.h" +#include "qemu/range.h" #include "qemu/error-report.h" #include "cpu.h" #include "exec/exec-all.h" #include "helper_regs.h" #include "hw/ppc/spapr.h" +#include "hw/ppc/spapr_drc.h" #include "hw/ppc/spapr_cpu_core.h" #include "mmu-hash64.h" #include "cpu-models.h" @@ -16,6 +18,7 @@ #include "hw/ppc/spapr_ovec.h" #include "mmu-book3s-v3.h" #include "hw/mem/memory-device.h" +#include "hw/mem/nvdimm.h" struct LPCRSyncState { target_ulong value; @@ -1808,6 +1811,222 @@ static target_ulong h_update_dt(PowerPCCPU *cpu, sPAPRMachineState *spapr, return H_SUCCESS; } +static target_ulong h_scm_read_metadata(PowerPCCPU *cpu, + sPAPRMachineState *spapr, + target_ulong opcode, + target_ulong *args) +{ + uint32_t drc_index = args[0]; + uint64_t offset = args[1]; + uint8_t numBytesToRead = args[2]; + sPAPRDRConnector *drc = spapr_drc_by_index(drc_index); + NVDIMMDevice *nvdimm = NULL; + NVDIMMClass *ddc = NULL; + + if (numBytesToRead != 1 && numBytesToRead != 2 && + numBytesToRead != 4 && numBytesToRead != 8) { + return H_P3; + } + + if (offset & (numBytesToRead - 1)) { + return H_P2; + } + + if (drc && spapr_drc_type(drc) != SPAPR_DR_CONNECTOR_TYPE_PMEM) { + return H_PARAMETER; + } + + nvdimm = NVDIMM(drc->dev); + ddc = NVDIMM_GET_CLASS(nvdimm); + + ddc->read_label_data(nvdimm, &args[0], numBytesToRead, offset); + + return H_SUCCESS; +} + + +static target_ulong h_scm_write_metadata(PowerPCCPU *cpu, + sPAPRMachineState *spapr, + target_ulong opcode, + target_ulong *args) +{ + uint32_t drc_index = args[0]; + uint64_t offset = args[1]; + uint64_t data = args[2]; + int8_t numBytesToWrite = args[3]; + sPAPRDRConnector *drc = spapr_drc_by_index(drc_index); + NVDIMMDevice *nvdimm = NULL; + DeviceState *dev = NULL; + NVDIMMClass *ddc = NULL; + + if (numBytesToWrite != 1 && numBytesToWrite != 2
&& + numBytesToWrite != 4 && numBytesToWrite != 8) { + return H_P4; + } + + if (offset & (numBytesToWrite - 1)) { + return H_P2; + } + + if (drc && spapr_drc_type(drc) != SPAPR_DR_CONNECTOR_TYPE_PMEM) { + return H_PARAMETER; + } + + dev = drc->dev; + nvdimm = NVDIMM(dev); + if (offset >= nvdimm->label_size) { + return H_P3; + } + + ddc = NVDIMM_GET_CLASS(nvdimm); + + ddc->write_label_data(nvdimm, &data, numBytesToWrite, offset); + + return H_SUCCESS; +} + +static target_ulong h_scm_bind_mem(PowerPCCPU *cpu, sPAPRMachineState *spapr, + target_ulong opcode, + target_ulong *args) +{ + uint32_t drc_index = args[0]; + uint64_t starting_index = args[1]; + uint64_t no_of_scm_blocks_to_bind = args[2]; + uint64_t target_logical_mem_addr = args[3]; + uint64_t continue_token = args[4]; + uint64_t size; + uint64_t total_no_of_scm_blocks; + + sPAPRDRConnector *drc = spapr_drc_by_index(drc_index); + hwaddr addr; + DeviceState *dev = NULL; + PCDIMMDevice *dimm = NULL; + Error *local_err = NULL; + + if (drc && spapr_drc_type(drc) != SPAPR_DR_CONNECTOR_TYPE_PMEM) { + return H_PARAMETER; + } + + dev = drc->dev; + dimm = PC_DIMM(dev); + + size = object_property_get_uint(OBJECT(dimm), + PC_DIMM_SIZE_PROP, &local_err); + if (local_err) { + error_report_err(local_err); + return H_PARAMETER; + } + + total_no_of_scm_blocks = size / SPAPR_MINIMUM_SCM_BLOCK_SIZE; + + if (starting_index > total_no_of_scm_blocks) { + return H_P2; + } + + if ((starting_index + no_of_scm_blocks_to_bind) > total_no_of_scm_blocks) { + return H_P3; + } + + /* Currently qemu assigns the address. */ + if (target_logical_mem_addr != 0xffffffffffffffff) { + return H_OVERLAP; + } + + /* + * Currently continue token should be zero qemu has already bound + * everything and this hcall doesnt return H_BUSY. + */ + if (continue_token > 0) { + return H_P5; + } + + /* NB : Already bound, Return target logical address in R4 */ + addr = object_property_get_uint(OBJECT(dimm), + PC_DIMM_ADDR_PROP, &local_err); + if (local_err) { + error_report_err(local_err); + return H_PARAMETER; + } + + args[1] = addr; + + return H_SUCCESS; +} + +static target_ulong h_scm_unbind_mem(PowerPCCPU *cpu, sPAPRMachineState *spapr, + target_ulong opcode, + target_ulong *args) +{ + uint64_t starting_scm_logical_addr = args[0]; + uint64_t no_of_scm_blocks_to_unbind = args[1]; + uint64_t size_to_unbind; + uint64_t continue_token = args[2]; + Range as = range_empty; + GSList *dimms = NULL; + bool valid = false; + + size_to_unbind = no_of_scm_blocks_to_unbind * SPAPR_MINIMUM_SCM_BLOCK_SIZE; + + /* Check if starting_scm_logical_addr is block aligned */ + if (!QEMU_IS_ALIGNED(starting_scm_logical_addr, + SPAPR_MINIMUM_SCM_BLOCK_SIZE)) { + return H_PARAMETER; + } + + range_init_nofail(&as, starting_scm_logical_addr, size_to_unbind); + + dimms = nvdimm_get_device_list(); + for (; dimms; dimms = dimms->next) { + NVDIMMDevice *nvdimm = dimms->data; + Range tmp; + int size = object_property_get_int(OBJECT(nvdimm), PC_DIMM_SIZE_PROP, + NULL); + int addr = object_property_get_int(OBJECT(nvdimm), PC_DIMM_ADDR_PROP, + NULL); + range_init_nofail(&tmp, addr, size); + + if (range_contains_range(&tmp, &as)) { + valid = true; + break; + } + } + + if (!valid) { + return H_P2; + } + + if (continue_token > 0) { + return H_P3; + } + + /*NB : dont do anything, let object_del take care of this for now. 
*/ + + return H_SUCCESS; +} + +static target_ulong h_scm_query_block_mem_binding(PowerPCCPU *cpu, + sPAPRMachineState *spapr, + target_ulong opcode, + target_ulong *args) +{ + return H_SUCCESS; +} + +static target_ulong h_scm_query_logical_mem_binding(PowerPCCPU *cpu, + sPAPRMachineState *spapr, + target_ulong opcode, + target_ulong *args) +{ + return H_SUCCESS; +} + +static target_ulong h_scm_mem_query(PowerPCCPU *cpu, sPAPRMachineState *spapr, + target_ulong opcode, + target_ulong *args) +{ + return H_SUCCESS; +} + static spapr_hcall_fn papr_hypercall_table[(MAX_HCALL_OPCODE / 4) + 1]; static spapr_hcall_fn kvmppc_hypercall_table[KVMPPC_HCALL_MAX - KVMPPC_HCALL_BASE + 1]; @@ -1907,6 +2126,17 @@ static void hypercall_register_types(void) /* qemu/KVM-PPC specific hcalls */ spapr_register_hypercall(KVMPPC_H_RTAS, h_rtas); + /* qemu/scm specific hcalls */ + spapr_register_hypercall(H_SCM_READ_METADATA, h_scm_read_metadata); + spapr_register_hypercall(H_SCM_WRITE_METADATA, h_scm_write_metadata); + spapr_register_hypercall(H_SCM_BIND_MEM, h_scm_bind_mem); + spapr_register_hypercall(H_SCM_UNBIND_MEM, h_scm_unbind_mem); + spapr_register_hypercall(H_SCM_QUERY_BLOCK_MEM_BINDING, + h_scm_query_block_mem_binding); + spapr_register_hypercall(H_SCM_QUERY_LOGICAL_MEM_BINDING, + h_scm_query_logical_mem_binding); + spapr_register_hypercall(H_SCM_MEM_QUERY, h_scm_mem_query); + /* ibm,client-architecture-support support */ spapr_register_hypercall(KVMPPC_H_CAS, h_client_architecture_support); diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h index 21a9709afe..28249567f4 100644 --- a/include/hw/ppc/spapr.h +++ b/include/hw/ppc/spapr.h @@ -268,6 +268,7 @@ struct sPAPRMachineState { #define H_P7 -60 #define H_P8 -61 #define H_P9 -62 +#define H_OVERLAP -68 #define H_UNSUPPORTED_FLAG -256 #define H_MULTI_THREADS_ACTIVE -9005 @@ -473,8 +474,15 @@ struct sPAPRMachineState { #define H_INT_ESB 0x3C8 #define H_INT_SYNC 0x3CC #define H_INT_RESET 0x3D0 - -#define MAX_HCALL_OPCODE H_INT_RESET +#define H_SCM_READ_METADATA 0x3E4 +#define H_SCM_WRITE_METADATA 0x3E8 +#define H_SCM_BIND_MEM 0x3EC +#define H_SCM_UNBIND_MEM 0x3F0 +#define H_SCM_QUERY_BLOCK_MEM_BINDING 0x3F4 +#define H_SCM_QUERY_LOGICAL_MEM_BINDING 0x3F8 +#define H_SCM_MEM_QUERY 0x3FC + +#define MAX_HCALL_OPCODE H_SCM_MEM_QUERY /* The hcalls above are standardized in PAPR and implemented by pHyp * as well.
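
For readers following the hcall flow, here is a heavily hedged sketch of a guest-side wrapper for H_SCM_READ_METADATA as the handler above interprets its arguments. The hcall() primitive and the wrapper name are stand-ins (real guests use their own plpar_hcall-style interfaces); only the argument layout is taken from h_scm_read_metadata().

/* Hypothetical guest-side view, not part of the series.  Argument layout
 * expected by h_scm_read_metadata(): args[0] = DRC index, args[1] = byte
 * offset into the label area, args[2] = bytes to read (1, 2, 4 or 8); the
 * value read is returned in the first argument on H_SUCCESS.  "hcall" is a
 * stand-in for the guest's hypervisor-call primitive. */
#include <stdint.h>

#define H_SCM_READ_METADATA 0x3E4

extern long hcall(unsigned long opcode, unsigned long args[]); /* stand-in */

static long scm_read_label(uint32_t drc_index, uint64_t offset,
                           uint8_t nbytes, uint64_t *value)
{
    unsigned long args[5] = { drc_index, offset, nbytes };
    long rc = hcall(H_SCM_READ_METADATA, args);

    if (rc == 0) {          /* H_SUCCESS */
        *value = args[0];   /* handler returns the data in the first arg */
    }
    return rc;
}
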