From patchwork Wed Feb  2 14:10:28 2022
From: Jonathan Cameron <Jonathan.Cameron@huawei.com>
To: Alex Bennée, Marcel Apfelbaum, Michael S. Tsirkin, Igor Mammedov
Cc: Ben Widawsky, Peter Maydell, Shameerali Kolothum Thodi,
    Philippe Mathieu-Daudé, Saransh Gupta1, Shreyas Shah, Chris Browy,
    Samarth Saxena, Dan Williams
Subject: [PATCH v5 34/43] mem/cxl_type3: Add read and write functions for
 associated hostmem.
Date: Wed, 2 Feb 2022 14:10:28 +0000
Message-ID: <20220202141037.17352-35-Jonathan.Cameron@huawei.com>
In-Reply-To: <20220202141037.17352-1-Jonathan.Cameron@huawei.com>
References: <20220202141037.17352-1-Jonathan.Cameron@huawei.com>

Once a read or write reaches a CXL type 3 device, the HDM decoders on the
device are used to establish the Device Physical Address (DPA) which should
be accessed. These functions perform the required maths and then directly
access hostmem->mr to fulfil the actual operation. Note that failed writes
are silent, but failed reads return poison.

Note this is loosely based on:
https://lore.kernel.org/qemu-devel/20200817161853.593247-6-f4bug@amsat.org/
[RFC PATCH 0/9] hw/misc: Add support for interleaved memory accesses

Only lightly tested so far. More complex test cases are yet to be written.
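Since the interleave decode is the subtle part of this patch, what follows is
a minimal standalone sketch (not part of the patch) of the hpa_offset -> DPA
arithmetic performed by cxl_type3_dpa() in the diff below. MAKE_64BIT_MASK is
re-derived locally to match QEMU's bitops.h definition; hpa_offset_to_dpa()
and the ig/iw values in main() are hypothetical illustrations only:

/*
 * Standalone sketch of the hpa_offset -> DPA arithmetic in
 * cxl_type3_dpa(). Not QEMU code: MAKE_64BIT_MASK is re-derived here and
 * the decoder programming in main() is a hypothetical example.
 */
#include <inttypes.h>
#include <stdio.h>

/* Mask of 'length' contiguous bits starting at bit 'shift'. */
#define MAKE_64BIT_MASK(shift, length) \
    ((~0ULL >> (64 - (length))) << (shift))

static uint64_t hpa_offset_to_dpa(uint64_t hpa_offset, int ig, int iw)
{
    /*
     * ig encodes the interleave granularity as 2^(8 + ig) bytes (ig == 0
     * is a 256-byte granule); iw encodes the interleave ways as 2^iw
     * devices. Bits [0, 8 + ig) address within a granule and pass through
     * unchanged; bits [8 + ig, 8 + ig + iw) select which device owns the
     * granule and are dropped; the remaining upper bits shift down by iw
     * so this device's granules pack contiguously in its DPA space.
     */
    return (MAKE_64BIT_MASK(0, 8 + ig) & hpa_offset) |
           ((MAKE_64BIT_MASK(8 + ig + iw, 64 - 8 - ig - iw) & hpa_offset) >> iw);
}

int main(void)
{
    /*
     * Hypothetical decoder: 256-byte granules (ig = 0), 4-way interleave
     * (iw = 2). HPA offset 0x1234 is granule 0x12, byte 0x34; granule 0x12
     * of the 4-way set is this device's granule 0x12 >> 2 = 0x4, so the
     * DPA should be 0x4 * 256 + 0x34 = 0x434.
     */
    printf("dpa = 0x%" PRIx64 "\n", hpa_offset_to_dpa(0x1234, 0, 2));
    return 0;
}

Compiled standalone, this prints dpa = 0x434, matching the hand calculation
in the comment.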
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
---
 hw/mem/cxl_type3.c          | 81 +++++++++++++++++++++++++++++++++++++
 include/hw/cxl/cxl_device.h |  5 +++
 2 files changed, 86 insertions(+)

diff --git a/hw/mem/cxl_type3.c b/hw/mem/cxl_type3.c
index b1ba4bf0de..064e8c942c 100644
--- a/hw/mem/cxl_type3.c
+++ b/hw/mem/cxl_type3.c
@@ -161,6 +161,87 @@ static void ct3_realize(PCIDevice *pci_dev, Error **errp)
                      &ct3d->cxl_dstate.device_registers);
 }
 
+/* TODO: Support multiple HDM decoders and DPA skip */
+static bool cxl_type3_dpa(CXLType3Dev *ct3d, hwaddr host_addr, uint64_t *dpa)
+{
+    uint32_t *cache_mem = ct3d->cxl_cstate.crb.cache_mem_registers;
+    uint64_t decoder_base, decoder_size, hpa_offset;
+    uint32_t hdm0_ctrl;
+    int ig, iw;
+
+    decoder_base = (((uint64_t)cache_mem[R_CXL_HDM_DECODER0_BASE_HI] << 32) |
+                    cache_mem[R_CXL_HDM_DECODER0_BASE_LO]);
+    if ((uint64_t)host_addr < decoder_base) {
+        return false;
+    }
+
+    hpa_offset = (uint64_t)host_addr - decoder_base;
+
+    decoder_size = ((uint64_t)cache_mem[R_CXL_HDM_DECODER0_SIZE_HI] << 32) |
+        cache_mem[R_CXL_HDM_DECODER0_SIZE_LO];
+    if (hpa_offset >= decoder_size) {
+        return false;
+    }
+
+    hdm0_ctrl = cache_mem[R_CXL_HDM_DECODER0_CTRL];
+    iw = FIELD_EX32(hdm0_ctrl, CXL_HDM_DECODER0_CTRL, IW);
+    ig = FIELD_EX32(hdm0_ctrl, CXL_HDM_DECODER0_CTRL, IG);
+
+    *dpa = (MAKE_64BIT_MASK(0, 8 + ig) & hpa_offset) |
+        ((MAKE_64BIT_MASK(8 + ig + iw, 64 - 8 - ig - iw) & hpa_offset) >> iw);
+
+    return true;
+}
+
+MemTxResult cxl_type3_read(PCIDevice *d, hwaddr host_addr, uint64_t *data,
+                           unsigned size, MemTxAttrs attrs)
+{
+    CXLType3Dev *ct3d = CT3(d);
+    uint64_t dpa_offset;
+    MemoryRegion *mr;
+
+    /* TODO support volatile region */
+    mr = host_memory_backend_get_memory(ct3d->hostmem);
+    if (!mr) {
+        return MEMTX_ERROR;
+    }
+
+    if (!cxl_type3_dpa(ct3d, host_addr, &dpa_offset)) {
+        return MEMTX_ERROR;
+    }
+
+    if (dpa_offset > int128_get64(mr->size)) {
+        return MEMTX_ERROR;
+    }
+
+    return memory_region_dispatch_read(mr, dpa_offset, data,
+                                       size_memop(size), attrs);
+}
+
+MemTxResult cxl_type3_write(PCIDevice *d, hwaddr host_addr, uint64_t data,
+                            unsigned size, MemTxAttrs attrs)
+{
+    CXLType3Dev *ct3d = CT3(d);
+    uint64_t dpa_offset;
+    MemoryRegion *mr;
+
+    mr = host_memory_backend_get_memory(ct3d->hostmem);
+    if (!mr) {
+        return MEMTX_OK;
+    }
+
+    if (!cxl_type3_dpa(ct3d, host_addr, &dpa_offset)) {
+        return MEMTX_OK;
+    }
+
+    if (dpa_offset > int128_get64(mr->size)) {
+        return MEMTX_OK;
+    }
+
+    return memory_region_dispatch_write(mr, dpa_offset, data,
+                                        size_memop(size), attrs);
+}
+
 static void ct3d_reset(DeviceState *dev)
 {
     CXLType3Dev *ct3d = CT3(dev);
diff --git a/include/hw/cxl/cxl_device.h b/include/hw/cxl/cxl_device.h
index 43908f161b..83da5d4e8f 100644
--- a/include/hw/cxl/cxl_device.h
+++ b/include/hw/cxl/cxl_device.h
@@ -264,4 +264,9 @@ struct CXLType3Class {
                      uint64_t offset);
 };
 
+MemTxResult cxl_type3_read(PCIDevice *d, hwaddr host_addr, uint64_t *data,
+                           unsigned size, MemTxAttrs attrs);
+MemTxResult cxl_type3_write(PCIDevice *d, hwaddr host_addr, uint64_t data,
+                            unsigned size, MemTxAttrs attrs);
+
 #endif
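As a closing note on how the declarations added to cxl_device.h are meant to
be consumed: below is a rough sketch, not this series' actual wiring, of a
host-bridge MemoryRegionOps forwarding accesses into an already-resolved
type 3 device via read_with_attrs/write_with_attrs. The cxl_host_* names and
the opaque wiring are hypothetical, and a real caller must first translate
the region-relative address into a host physical address (e.g. by adding the
decoder window base) before calling in:

/*
 * Rough sketch only: one way a host bridge could forward accesses to the
 * functions declared above once routing has picked a target type 3 device.
 * The cxl_host_* names are hypothetical; the real caller lands elsewhere
 * in this series.
 */
#include "qemu/osdep.h"
#include "exec/memory.h"
#include "hw/pci/pci.h"
#include "hw/cxl/cxl_device.h"

static MemTxResult cxl_host_read(void *opaque, hwaddr addr, uint64_t *data,
                                 unsigned size, MemTxAttrs attrs)
{
    PCIDevice *d = opaque; /* assume routing already resolved the device */

    /* Assumes addr has already been converted to a host physical address. */
    return cxl_type3_read(d, addr, data, size, attrs);
}

static MemTxResult cxl_host_write(void *opaque, hwaddr addr, uint64_t data,
                                  unsigned size, MemTxAttrs attrs)
{
    PCIDevice *d = opaque;

    return cxl_type3_write(d, addr, data, size, attrs);
}

static const MemoryRegionOps cxl_host_ops = {
    .read_with_attrs = cxl_host_read,
    .write_with_attrs = cxl_host_write,
    .endianness = DEVICE_LITTLE_ENDIAN,
};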