From patchwork Mon Feb 12 08:06:13 2024
From: Mattias Nissler <mnissler@rivosinc.com>
To: qemu-devel@nongnu.org, jag.raman@oracle.com, peterx@redhat.com,
 stefanha@redhat.com
Cc: Marcel Apfelbaum, Philippe Mathieu-Daudé, john.levon@nutanix.com,
 David Hildenbrand, Paolo Bonzini, "Michael S. Tsirkin", Richard Henderson,
 Elena Ufimtseva, Mattias Nissler
Subject: [PATCH v7 1/5] softmmu: Per-AddressSpace bounce buffering
Date: Mon, 12 Feb 2024 00:06:13 -0800
Message-Id: <20240212080617.2559498-2-mnissler@rivosinc.com>
In-Reply-To: <20240212080617.2559498-1-mnissler@rivosinc.com>
References: <20240212080617.2559498-1-mnissler@rivosinc.com>

Instead of using a single global bounce buffer, give each AddressSpace its
own bounce buffer. The MapClient callback mechanism moves to AddressSpace
accordingly.

This is in preparation for generalizing bounce buffer handling further to
allow multiple bounce buffers, with a total allocation limit configured per
AddressSpace.
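As an illustration of the intended usage (this sketch is not part of the
patch; MyDMAState and the my_dma_* helpers are made-up names), a caller that
cannot obtain a mapping right away can park a bottom half and retry once
resources free up, mirroring what the dma-helpers.c hunk in this patch does:

#include "qemu/osdep.h"
#include "block/aio.h"
#include "exec/memory.h"

/* Illustrative caller state; not part of the patch. */
typedef struct MyDMAState {
    AddressSpace *as;
    hwaddr addr;
    hwaddr len;
    QEMUBH *bh;
} MyDMAState;

static void my_dma_retry(void *opaque);

/* Try to map guest memory; if bounce buffer resources are exhausted,
 * arrange for a retry once address_space_unmap() frees them again. */
static void my_dma_start(MyDMAState *s)
{
    hwaddr plen = s->len;
    void *buf = address_space_map(s->as, s->addr, &plen, false,
                                  MEMTXATTRS_UNSPECIFIED);

    if (!buf) {
        /* The callback is removed automatically after it fires. */
        s->bh = aio_bh_new(qemu_get_aio_context(), my_dma_retry, s);
        address_space_register_map_client(s->as, s->bh);
        return;
    }

    /* ... use buf[0..plen) ... */
    address_space_unmap(s->as, buf, plen, false, plen);
}

static void my_dma_retry(void *opaque)
{
    MyDMAState *s = opaque;

    qemu_bh_delete(s->bh);
    s->bh = NULL;
    my_dma_start(s);
}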
Signed-off-by: Mattias Nissler
Tested-by: Jonathan Cameron
---
 include/exec/cpu-common.h |   2 -
 include/exec/memory.h     |  45 ++++++++++++++++-
 system/dma-helpers.c      |   4 +-
 system/memory.c           |   7 +++
 system/physmem.c          | 101 ++++++++++++++++----------------
 5 files changed, 93 insertions(+), 66 deletions(-)

diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
index 9ead1be100..bd6999fa35 100644
--- a/include/exec/cpu-common.h
+++ b/include/exec/cpu-common.h
@@ -148,8 +148,6 @@ void *cpu_physical_memory_map(hwaddr addr,
                               bool is_write);
 void cpu_physical_memory_unmap(void *buffer, hwaddr len,
                                bool is_write, hwaddr access_len);
-void cpu_register_map_client(QEMUBH *bh);
-void cpu_unregister_map_client(QEMUBH *bh);
 
 bool cpu_physical_memory_is_io(hwaddr phys_addr);
 
diff --git a/include/exec/memory.h b/include/exec/memory.h
index 177be23db7..6995a443d3 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -1106,6 +1106,19 @@ struct MemoryListener {
     QTAILQ_ENTRY(MemoryListener) link_as;
 };
 
+typedef struct AddressSpaceMapClient {
+    QEMUBH *bh;
+    QLIST_ENTRY(AddressSpaceMapClient) link;
+} AddressSpaceMapClient;
+
+typedef struct {
+    MemoryRegion *mr;
+    void *buffer;
+    hwaddr addr;
+    hwaddr len;
+    bool in_use;
+} BounceBuffer;
+
 /**
  * struct AddressSpace: describes a mapping of addresses to #MemoryRegion objects
  */
@@ -1123,6 +1136,12 @@ struct AddressSpace {
     struct MemoryRegionIoeventfd *ioeventfds;
     QTAILQ_HEAD(, MemoryListener) listeners;
     QTAILQ_ENTRY(AddressSpace) address_spaces_link;
+
+    /* Bounce buffer to use for this address space. */
+    BounceBuffer bounce;
+    /* List of callbacks to invoke when buffers free up */
+    QemuMutex map_client_list_lock;
+    QLIST_HEAD(, AddressSpaceMapClient) map_client_list;
 };
 
 typedef struct AddressSpaceDispatch AddressSpaceDispatch;
@@ -2926,8 +2945,8 @@ bool address_space_access_valid(AddressSpace *as, hwaddr addr, hwaddr len,
  * May return %NULL and set *@plen to zero(0), if resources needed to perform
  * the mapping are exhausted.
  * Use only for reads OR writes - not for read-modify-write operations.
- * Use cpu_register_map_client() to know when retrying the map operation is
- * likely to succeed.
+ * Use address_space_register_map_client() to know when retrying the map
+ * operation is likely to succeed.
  *
  * @as: #AddressSpace to be accessed
  * @addr: address within that address space
@@ -2952,6 +2971,28 @@ void *address_space_map(AddressSpace *as, hwaddr addr,
 void address_space_unmap(AddressSpace *as, void *buffer, hwaddr len,
                          bool is_write, hwaddr access_len);
 
+/*
+ * address_space_register_map_client: Register a callback to invoke when
+ * resources for address_space_map() are available again.
+ *
+ * address_space_map may fail when there are not enough resources available,
+ * such as when bounce buffer memory would exceed the limit. The callback can
+ * be used to retry the address_space_map operation. Note that the callback
+ * gets automatically removed after firing.
+ *
+ * @as: #AddressSpace to be accessed
+ * @bh: callback to invoke when address_space_map() retry is appropriate
+ */
+void address_space_register_map_client(AddressSpace *as, QEMUBH *bh);
+
+/*
+ * address_space_unregister_map_client: Unregister a callback that has
+ * previously been registered and not fired yet.
+ *
+ * @as: #AddressSpace to be accessed
+ * @bh: callback to unregister
+ */
+void address_space_unregister_map_client(AddressSpace *as, QEMUBH *bh);
 
 /* Internal functions, part of the implementation of address_space_read.
  */
 MemTxResult address_space_read_full(AddressSpace *as, hwaddr addr,
 
diff --git a/system/dma-helpers.c b/system/dma-helpers.c
index 9b221cf94e..74013308f5 100644
--- a/system/dma-helpers.c
+++ b/system/dma-helpers.c
@@ -169,7 +169,7 @@ static void dma_blk_cb(void *opaque, int ret)
     if (dbs->iov.size == 0) {
         trace_dma_map_wait(dbs);
         dbs->bh = aio_bh_new(ctx, reschedule_dma, dbs);
-        cpu_register_map_client(dbs->bh);
+        address_space_register_map_client(dbs->sg->as, dbs->bh);
         return;
     }
 
@@ -197,7 +197,7 @@ static void dma_aio_cancel(BlockAIOCB *acb)
     }
 
     if (dbs->bh) {
-        cpu_unregister_map_client(dbs->bh);
+        address_space_unregister_map_client(dbs->sg->as, dbs->bh);
         qemu_bh_delete(dbs->bh);
         dbs->bh = NULL;
     }
diff --git a/system/memory.c b/system/memory.c
index a229a79988..ad0caef1b8 100644
--- a/system/memory.c
+++ b/system/memory.c
@@ -3133,6 +3133,9 @@ void address_space_init(AddressSpace *as, MemoryRegion *root, const char *name)
     as->ioeventfds = NULL;
     QTAILQ_INIT(&as->listeners);
     QTAILQ_INSERT_TAIL(&address_spaces, as, address_spaces_link);
+    as->bounce.in_use = false;
+    qemu_mutex_init(&as->map_client_list_lock);
+    QLIST_INIT(&as->map_client_list);
     as->name = g_strdup(name ? name : "anonymous");
     address_space_update_topology(as);
     address_space_update_ioeventfds(as);
@@ -3140,6 +3143,10 @@ void address_space_init(AddressSpace *as, MemoryRegion *root, const char *name)
 
 static void do_address_space_destroy(AddressSpace *as)
 {
+    assert(!qatomic_read(&as->bounce.in_use));
+    assert(QLIST_EMPTY(&as->map_client_list));
+    qemu_mutex_destroy(&as->map_client_list_lock);
+
     assert(QTAILQ_EMPTY(&as->listeners));
 
     flatview_unref(as->current_map);
diff --git a/system/physmem.c b/system/physmem.c
index 5e66d9ae36..7170e3473f 100644
--- a/system/physmem.c
+++ b/system/physmem.c
@@ -2974,55 +2974,37 @@ void cpu_flush_icache_range(hwaddr start, hwaddr len)
                                              NULL, len, FLUSH_CACHE);
 }
 
-typedef struct {
-    MemoryRegion *mr;
-    void *buffer;
-    hwaddr addr;
-    hwaddr len;
-    bool in_use;
-} BounceBuffer;
-
-static BounceBuffer bounce;
-
-typedef struct MapClient {
-    QEMUBH *bh;
-    QLIST_ENTRY(MapClient) link;
-} MapClient;
-
-QemuMutex map_client_list_lock;
-static QLIST_HEAD(, MapClient) map_client_list
-    = QLIST_HEAD_INITIALIZER(map_client_list);
-
-static void cpu_unregister_map_client_do(MapClient *client)
+static void
+address_space_unregister_map_client_do(AddressSpaceMapClient *client)
 {
     QLIST_REMOVE(client, link);
     g_free(client);
 }
 
-static void cpu_notify_map_clients_locked(void)
+static void address_space_notify_map_clients_locked(AddressSpace *as)
 {
-    MapClient *client;
+    AddressSpaceMapClient *client;
 
-    while (!QLIST_EMPTY(&map_client_list)) {
-        client = QLIST_FIRST(&map_client_list);
+    while (!QLIST_EMPTY(&as->map_client_list)) {
+        client = QLIST_FIRST(&as->map_client_list);
         qemu_bh_schedule(client->bh);
-        cpu_unregister_map_client_do(client);
+        address_space_unregister_map_client_do(client);
     }
 }
 
-void cpu_register_map_client(QEMUBH *bh)
+void address_space_register_map_client(AddressSpace *as, QEMUBH *bh)
 {
-    MapClient *client = g_malloc(sizeof(*client));
+    AddressSpaceMapClient *client = g_malloc(sizeof(*client));
 
-    qemu_mutex_lock(&map_client_list_lock);
+    qemu_mutex_lock(&as->map_client_list_lock);
     client->bh = bh;
-    QLIST_INSERT_HEAD(&map_client_list, client, link);
+    QLIST_INSERT_HEAD(&as->map_client_list, client, link);
     /* Write map_client_list before reading in_use. */
     smp_mb();
-    if (!qatomic_read(&bounce.in_use)) {
-        cpu_notify_map_clients_locked();
+    if (!qatomic_read(&as->bounce.in_use)) {
+        address_space_notify_map_clients_locked(as);
     }
-    qemu_mutex_unlock(&map_client_list_lock);
+    qemu_mutex_unlock(&as->map_client_list_lock);
 }
 
 void cpu_exec_init_all(void)
@@ -3038,28 +3020,27 @@ void cpu_exec_init_all(void)
     finalize_target_page_bits();
     io_mem_init();
     memory_map_init();
-    qemu_mutex_init(&map_client_list_lock);
 }
 
-void cpu_unregister_map_client(QEMUBH *bh)
+void address_space_unregister_map_client(AddressSpace *as, QEMUBH *bh)
 {
-    MapClient *client;
+    AddressSpaceMapClient *client;
 
-    qemu_mutex_lock(&map_client_list_lock);
-    QLIST_FOREACH(client, &map_client_list, link) {
+    qemu_mutex_lock(&as->map_client_list_lock);
+    QLIST_FOREACH(client, &as->map_client_list, link) {
         if (client->bh == bh) {
-            cpu_unregister_map_client_do(client);
+            address_space_unregister_map_client_do(client);
             break;
         }
     }
-    qemu_mutex_unlock(&map_client_list_lock);
+    qemu_mutex_unlock(&as->map_client_list_lock);
 }
 
-static void cpu_notify_map_clients(void)
+static void address_space_notify_map_clients(AddressSpace *as)
 {
-    qemu_mutex_lock(&map_client_list_lock);
-    cpu_notify_map_clients_locked();
-    qemu_mutex_unlock(&map_client_list_lock);
+    qemu_mutex_lock(&as->map_client_list_lock);
+    address_space_notify_map_clients_locked(as);
+    qemu_mutex_unlock(&as->map_client_list_lock);
 }
 
 static bool flatview_access_valid(FlatView *fv, hwaddr addr, hwaddr len,
@@ -3126,8 +3107,8 @@ flatview_extend_translation(FlatView *fv, hwaddr addr,
  * May map a subset of the requested range, given by and returned in *plen.
  * May return NULL if resources needed to perform the mapping are exhausted.
  * Use only for reads OR writes - not for read-modify-write operations.
- * Use cpu_register_map_client() to know when retrying the map operation is
- * likely to succeed.
+ * Use address_space_register_map_client() to know when retrying the map
+ * operation is likely to succeed.
  */
 void *address_space_map(AddressSpace *as,
                         hwaddr addr,
@@ -3150,25 +3131,25 @@ void *address_space_map(AddressSpace *as,
     mr = flatview_translate(fv, addr, &xlat, &l, is_write, attrs);
 
     if (!memory_access_is_direct(mr, is_write)) {
-        if (qatomic_xchg(&bounce.in_use, true)) {
+        if (qatomic_xchg(&as->bounce.in_use, true)) {
             *plen = 0;
             return NULL;
         }
         /* Avoid unbounded allocations */
         l = MIN(l, TARGET_PAGE_SIZE);
-        bounce.buffer = qemu_memalign(TARGET_PAGE_SIZE, l);
-        bounce.addr = addr;
-        bounce.len = l;
+        as->bounce.buffer = qemu_memalign(TARGET_PAGE_SIZE, l);
+        as->bounce.addr = addr;
+        as->bounce.len = l;
         memory_region_ref(mr);
-        bounce.mr = mr;
+        as->bounce.mr = mr;
         if (!is_write) {
             flatview_read(fv, addr, MEMTXATTRS_UNSPECIFIED,
-                          bounce.buffer, l);
+                          as->bounce.buffer, l);
         }
 
         *plen = l;
-        return bounce.buffer;
+        return as->bounce.buffer;
     }
 
@@ -3186,7 +3167,7 @@ void *address_space_map(AddressSpace *as,
 void address_space_unmap(AddressSpace *as, void *buffer, hwaddr len,
                          bool is_write, hwaddr access_len)
 {
-    if (buffer != bounce.buffer) {
+    if (buffer != as->bounce.buffer) {
         MemoryRegion *mr;
         ram_addr_t addr1;
 
@@ -3202,15 +3183,15 @@ void address_space_unmap(AddressSpace *as, void *buffer, hwaddr len,
         return;
     }
     if (is_write) {
-        address_space_write(as, bounce.addr, MEMTXATTRS_UNSPECIFIED,
-                            bounce.buffer, access_len);
+        address_space_write(as, as->bounce.addr, MEMTXATTRS_UNSPECIFIED,
+                            as->bounce.buffer, access_len);
     }
-    qemu_vfree(bounce.buffer);
-    bounce.buffer = NULL;
-    memory_region_unref(bounce.mr);
+    qemu_vfree(as->bounce.buffer);
+    as->bounce.buffer = NULL;
+    memory_region_unref(as->bounce.mr);
     /* Clear in_use before reading map_client_list. */
-    qatomic_set_mb(&bounce.in_use, false);
-    cpu_notify_map_clients();
+    qatomic_set_mb(&as->bounce.in_use, false);
+    address_space_notify_map_clients(as);
 }
 
 void *cpu_physical_memory_map(hwaddr addr,

From patchwork Mon Feb 12 08:06:14 2024
From: Mattias Nissler <mnissler@rivosinc.com>
To: qemu-devel@nongnu.org, jag.raman@oracle.com, peterx@redhat.com,
 stefanha@redhat.com
Cc: Marcel Apfelbaum, Philippe Mathieu-Daudé, john.levon@nutanix.com,
 David Hildenbrand, Paolo Bonzini, "Michael S. Tsirkin", Richard Henderson,
 Elena Ufimtseva, Mattias Nissler
Subject: [PATCH v7 2/5] softmmu: Support concurrent bounce buffers
Date: Mon, 12 Feb 2024 00:06:14 -0800
Message-Id: <20240212080617.2559498-3-mnissler@rivosinc.com>
In-Reply-To: <20240212080617.2559498-1-mnissler@rivosinc.com>
References: <20240212080617.2559498-1-mnissler@rivosinc.com>

When DMA memory can't be directly accessed, as is the case when running
the device model in a separate process without shareable DMA file
descriptors, bounce buffering is used.

It is not uncommon for device models to request mapping of several DMA
regions at the same time. Examples include:
 * net devices, e.g. when transmitting a packet that is split across
   several TX descriptors (observed with igb)
 * USB host controllers, when handling a packet with multiple data TRBs
   (observed with xhci)

Previously, QEMU only provided a single bounce buffer per AddressSpace and
would fail DMA map requests while the buffer was already in use. In turn,
this would cause DMA failures that ultimately manifest as hardware errors
from the guest perspective.

This change allocates DMA bounce buffers dynamically instead of supporting
only a single buffer. Thus, multiple DMA mappings work correctly even when
RAM can't be mmap()-ed.

The total bounce buffer allocation size is limited individually for each
AddressSpace. The default limit is 4096 bytes, matching the previous
maximum buffer size. A new x-max-bounce-buffer-size parameter is provided
to configure the limit for PCI devices.
Signed-off-by: Mattias Nissler
Tested-by: Jonathan Cameron
---
 hw/pci/pci.c                |  8 ++++
 include/exec/memory.h       | 14 +++----
 include/hw/pci/pci_device.h |  3 ++
 system/memory.c             |  5 ++-
 system/physmem.c            | 80 +++++++++++++++++++++++++------------
 5 files changed, 74 insertions(+), 36 deletions(-)

diff --git a/hw/pci/pci.c b/hw/pci/pci.c
index 6496d027ca..036b3ff822 100644
--- a/hw/pci/pci.c
+++ b/hw/pci/pci.c
@@ -85,6 +85,8 @@ static Property pci_props[] = {
                     QEMU_PCIE_ERR_UNC_MASK_BITNR, true),
     DEFINE_PROP_BIT("x-pcie-ari-nextfn-1", PCIDevice, cap_present,
                     QEMU_PCIE_ARI_NEXTFN_1_BITNR, false),
+    DEFINE_PROP_SIZE("x-max-bounce-buffer-size", PCIDevice,
+                     max_bounce_buffer_size, DEFAULT_MAX_BOUNCE_BUFFER_SIZE),
     DEFINE_PROP_END_OF_LIST()
 };
 
@@ -1203,6 +1205,8 @@ static PCIDevice *do_pci_register_device(PCIDevice *pci_dev,
                        "bus master container", UINT64_MAX);
     address_space_init(&pci_dev->bus_master_as,
                        &pci_dev->bus_master_container_region, pci_dev->name);
+    pci_dev->bus_master_as.max_bounce_buffer_size =
+        pci_dev->max_bounce_buffer_size;
 
     if (phase_check(PHASE_MACHINE_READY)) {
         pci_init_bus_master(pci_dev);
@@ -2632,6 +2636,10 @@ static void pci_device_class_init(ObjectClass *klass, void *data)
     k->unrealize = pci_qdev_unrealize;
     k->bus_type = TYPE_PCI_BUS;
     device_class_set_props(k, pci_props);
+    object_class_property_set_description(
+        klass, "x-max-bounce-buffer-size",
+        "Maximum buffer size allocated for bounce buffers used for mapped "
+        "access to indirect DMA memory");
 }
 
 static void pci_device_class_base_init(ObjectClass *klass, void *data)
diff --git a/include/exec/memory.h b/include/exec/memory.h
index 6995a443d3..e7bc4717ea 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -1111,13 +1111,7 @@ typedef struct AddressSpaceMapClient {
     QLIST_ENTRY(AddressSpaceMapClient) link;
 } AddressSpaceMapClient;
 
-typedef struct {
-    MemoryRegion *mr;
-    void *buffer;
-    hwaddr addr;
-    hwaddr len;
-    bool in_use;
-} BounceBuffer;
+#define DEFAULT_MAX_BOUNCE_BUFFER_SIZE (4096)
 
 /**
  * struct AddressSpace: describes a mapping of addresses to #MemoryRegion objects
@@ -1137,8 +1131,10 @@ struct AddressSpace {
     QTAILQ_HEAD(, MemoryListener) listeners;
     QTAILQ_ENTRY(AddressSpace) address_spaces_link;
 
-    /* Bounce buffer to use for this address space. */
-    BounceBuffer bounce;
+    /* Maximum DMA bounce buffer size used for indirect memory map requests */
+    uint64_t max_bounce_buffer_size;
+    /* Total size of bounce buffers currently allocated, atomically accessed */
+    uint64_t bounce_buffer_size;
     /* List of callbacks to invoke when buffers free up */
     QemuMutex map_client_list_lock;
     QLIST_HEAD(, AddressSpaceMapClient) map_client_list;
diff --git a/include/hw/pci/pci_device.h b/include/hw/pci/pci_device.h
index d3dd0f64b2..f4027c5379 100644
--- a/include/hw/pci/pci_device.h
+++ b/include/hw/pci/pci_device.h
@@ -160,6 +160,9 @@ struct PCIDevice {
     /* ID of standby device in net_failover pair */
     char *failover_pair_id;
     uint32_t acpi_index;
+
+    /* Maximum DMA bounce buffer size used for indirect memory map requests */
+    uint64_t max_bounce_buffer_size;
 };
 
 static inline int pci_intx(PCIDevice *pci_dev)
diff --git a/system/memory.c b/system/memory.c
index ad0caef1b8..1cf89654a1 100644
--- a/system/memory.c
+++ b/system/memory.c
@@ -3133,7 +3133,8 @@ void address_space_init(AddressSpace *as, MemoryRegion *root, const char *name)
     as->ioeventfds = NULL;
     QTAILQ_INIT(&as->listeners);
     QTAILQ_INSERT_TAIL(&address_spaces, as, address_spaces_link);
-    as->bounce.in_use = false;
+    as->max_bounce_buffer_size = DEFAULT_MAX_BOUNCE_BUFFER_SIZE;
+    as->bounce_buffer_size = 0;
     qemu_mutex_init(&as->map_client_list_lock);
     QLIST_INIT(&as->map_client_list);
     as->name = g_strdup(name ? name : "anonymous");
@@ -3143,7 +3144,7 @@ void address_space_init(AddressSpace *as, MemoryRegion *root, const char *name)
 
 static void do_address_space_destroy(AddressSpace *as)
 {
-    assert(!qatomic_read(&as->bounce.in_use));
+    assert(qatomic_read(&as->bounce_buffer_size) == 0);
     assert(QLIST_EMPTY(&as->map_client_list));
     qemu_mutex_destroy(&as->map_client_list_lock);
 
diff --git a/system/physmem.c b/system/physmem.c
index 7170e3473f..6a3c9de512 100644
--- a/system/physmem.c
+++ b/system/physmem.c
@@ -2974,6 +2974,20 @@ void cpu_flush_icache_range(hwaddr start, hwaddr len)
                                              NULL, len, FLUSH_CACHE);
 }
 
+/*
+ * A magic value stored in the first 8 bytes of the bounce buffer struct. Used
+ * to detect illegal pointers passed to address_space_unmap.
+ */
+#define BOUNCE_BUFFER_MAGIC 0xb4017ceb4ffe12ed
+
+typedef struct {
+    uint64_t magic;
+    MemoryRegion *mr;
+    hwaddr addr;
+    size_t len;
+    uint8_t buffer[];
+} BounceBuffer;
+
 static void
 address_space_unregister_map_client_do(AddressSpaceMapClient *client)
 {
@@ -2999,9 +3013,9 @@ void address_space_register_map_client(AddressSpace *as, QEMUBH *bh)
     qemu_mutex_lock(&as->map_client_list_lock);
     client->bh = bh;
     QLIST_INSERT_HEAD(&as->map_client_list, client, link);
-    /* Write map_client_list before reading in_use. */
+    /* Write map_client_list before reading bounce_buffer_size. */
     smp_mb();
-    if (!qatomic_read(&as->bounce.in_use)) {
+    if (qatomic_read(&as->bounce_buffer_size) < as->max_bounce_buffer_size) {
         address_space_notify_map_clients_locked(as);
     }
     qemu_mutex_unlock(&as->map_client_list_lock);
@@ -3131,28 +3145,38 @@ void *address_space_map(AddressSpace *as,
     mr = flatview_translate(fv, addr, &xlat, &l, is_write, attrs);
 
     if (!memory_access_is_direct(mr, is_write)) {
-        if (qatomic_xchg(&as->bounce.in_use, true)) {
+        size_t size = qatomic_add_fetch(&as->bounce_buffer_size, l);
+        if (size > as->max_bounce_buffer_size) {
+            /*
+             * Note that the overshot might be larger than l if threads are
+             * racing and bump bounce_buffer_size at the same time.
+             */
+            size_t excess = MIN(size - as->max_bounce_buffer_size, l);
+            l -= excess;
+            qatomic_sub(&as->bounce_buffer_size, excess);
+        }
+
+        if (l == 0) {
             *plen = 0;
             return NULL;
         }
-        /* Avoid unbounded allocations */
-        l = MIN(l, TARGET_PAGE_SIZE);
-        as->bounce.buffer = qemu_memalign(TARGET_PAGE_SIZE, l);
-        as->bounce.addr = addr;
-        as->bounce.len = l;
+
+        BounceBuffer *bounce = g_malloc0(l + sizeof(BounceBuffer));
+        bounce->magic = BOUNCE_BUFFER_MAGIC;
         memory_region_ref(mr);
-        as->bounce.mr = mr;
+        bounce->mr = mr;
+        bounce->addr = addr;
+        bounce->len = l;
+
         if (!is_write) {
             flatview_read(fv, addr, MEMTXATTRS_UNSPECIFIED,
-                          as->bounce.buffer, l);
+                          bounce->buffer, l);
         }
 
         *plen = l;
-        return as->bounce.buffer;
+        return bounce->buffer;
     }
 
-    memory_region_ref(mr);
     *plen = flatview_extend_translation(fv, addr, len, mr, xlat,
                                         l, is_write, attrs);
@@ -3167,12 +3191,11 @@ void *address_space_map(AddressSpace *as,
 void address_space_unmap(AddressSpace *as, void *buffer, hwaddr len,
                          bool is_write, hwaddr access_len)
 {
-    if (buffer != as->bounce.buffer) {
-        MemoryRegion *mr;
-        ram_addr_t addr1;
+    MemoryRegion *mr;
+    ram_addr_t addr1;
 
-        mr = memory_region_from_host(buffer, &addr1);
-        assert(mr != NULL);
+    mr = memory_region_from_host(buffer, &addr1);
+    if (mr != NULL) {
         if (is_write) {
             invalidate_and_set_dirty(mr, addr1, access_len);
         }
@@ -3182,15 +3205,22 @@ void address_space_unmap(AddressSpace *as, void *buffer, hwaddr len,
         memory_region_unref(mr);
         return;
     }
+
+    BounceBuffer *bounce = container_of(buffer, BounceBuffer, buffer);
+    assert(bounce->magic == BOUNCE_BUFFER_MAGIC);
+
     if (is_write) {
-        address_space_write(as, as->bounce.addr, MEMTXATTRS_UNSPECIFIED,
-                            as->bounce.buffer, access_len);
-    }
-    qemu_vfree(as->bounce.buffer);
-    as->bounce.buffer = NULL;
-    memory_region_unref(as->bounce.mr);
-    /* Clear in_use before reading map_client_list. */
-    qatomic_set_mb(&as->bounce.in_use, false);
+        address_space_write(as, bounce->addr, MEMTXATTRS_UNSPECIFIED,
+                            bounce->buffer, access_len);
+    }
+
+    qatomic_sub(&as->bounce_buffer_size, bounce->len);
+    bounce->magic = ~BOUNCE_BUFFER_MAGIC;
+    memory_region_unref(bounce->mr);
+    g_free(bounce);
+    /* Write bounce_buffer_size before reading map_client_list. */
+    smp_mb();
     address_space_notify_map_clients(as);
 }

From patchwork Mon Feb 12 08:06:15 2024
From: Mattias Nissler <mnissler@rivosinc.com>
To: qemu-devel@nongnu.org, jag.raman@oracle.com, peterx@redhat.com,
 stefanha@redhat.com
Cc: Marcel Apfelbaum, Philippe Mathieu-Daudé, john.levon@nutanix.com,
 David Hildenbrand, Paolo Bonzini, "Michael S. Tsirkin", Richard Henderson,
 Elena Ufimtseva, Mattias Nissler
Subject: [PATCH v7 3/5] Update subprojects/libvfio-user
Date: Mon, 12 Feb 2024 00:06:15 -0800
Message-Id: <20240212080617.2559498-4-mnissler@rivosinc.com>
In-Reply-To: <20240212080617.2559498-1-mnissler@rivosinc.com>
References: <20240212080617.2559498-1-mnissler@rivosinc.com>

Brings in assorted bug fixes. The following are of particular interest
with respect to message-based DMA support:

* bb308a2 "Fix address calculation for message-based DMA"
  Corrects a bug in DMA address calculation.

* 1569a37 "Pass server->client command over a separate socket pair"
  Adds support for separate sockets for either command direction,
  addressing a bug where libvfio-user gets confused if both client and
  server send commands concurrently.
Signed-off-by: Mattias Nissler
---
 subprojects/libvfio-user.wrap | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/subprojects/libvfio-user.wrap b/subprojects/libvfio-user.wrap
index 416955ca45..cdf0a7a375 100644
--- a/subprojects/libvfio-user.wrap
+++ b/subprojects/libvfio-user.wrap
@@ -1,4 +1,4 @@
 [wrap-git]
 url = https://gitlab.com/qemu-project/libvfio-user.git
-revision = 0b28d205572c80b568a1003db2c8f37ca333e4d7
+revision = 1569a37a54ecb63bd4008708c76339ccf7d06115
 depth = 1

From patchwork Mon Feb 12 08:06:16 2024
From: Mattias Nissler <mnissler@rivosinc.com>
To: qemu-devel@nongnu.org, jag.raman@oracle.com, peterx@redhat.com,
 stefanha@redhat.com
Cc: Marcel Apfelbaum, Philippe Mathieu-Daudé, john.levon@nutanix.com,
 David Hildenbrand, Paolo Bonzini, "Michael S. Tsirkin", Richard Henderson,
 Elena Ufimtseva, Mattias Nissler
Subject: [PATCH v7 4/5] vfio-user: Message-based DMA support
Date: Mon, 12 Feb 2024 00:06:16 -0800
Message-Id: <20240212080617.2559498-5-mnissler@rivosinc.com>
In-Reply-To: <20240212080617.2559498-1-mnissler@rivosinc.com>
References: <20240212080617.2559498-1-mnissler@rivosinc.com>

Wire up support for DMA for the case where the vfio-user client does not
provide mmap()-able file descriptors, but DMA requests must be performed
via the VFIO-user protocol. This installs an indirect memory region, which
already works for pci_dma_{read,write}, and pci_dma_map works thanks to
the existing DMA bounce buffering support.

Note that while simple scenarios work with this patch, there's a known
race condition in libvfio-user that will mess up the communication
channel. See https://github.com/nutanix/libvfio-user/issues/279 for
details as well as a proposed fix.
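For illustration (this sketch is not part of the patch; the helper name is
made up), nothing changes from the device model's point of view: a guest
descriptor read issued through the regular PCI DMA API is routed through the
device's bus-master address space, and when the client supplied no
mmap()-able file descriptor it lands on the indirect MemoryRegion installed
by this patch and is carried out as vfio-user DMA messages behind the scenes:

#include "qemu/osdep.h"
#include "sysemu/dma.h"
#include "hw/pci/pci_device.h"

/* Sketch: read a descriptor from guest memory via ordinary PCI DMA. */
static int read_descriptor(PCIDevice *pdev, dma_addr_t desc_addr,
                           void *desc, size_t desc_len)
{
    if (pci_dma_read(pdev, desc_addr, desc, desc_len) != MEMTX_OK) {
        return -1;
    }
    return 0;
}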
Signed-off-by: Mattias Nissler
---
 hw/remote/trace-events    |   2 +
 hw/remote/vfio-user-obj.c | 100 ++++++++++++++++++++++++++++++++------
 2 files changed, 87 insertions(+), 15 deletions(-)

diff --git a/hw/remote/trace-events b/hw/remote/trace-events
index 0d1b7d56a5..358a68fb34 100644
--- a/hw/remote/trace-events
+++ b/hw/remote/trace-events
@@ -9,6 +9,8 @@ vfu_cfg_read(uint32_t offset, uint32_t val) "vfu: cfg: 0x%x -> 0x%x"
 vfu_cfg_write(uint32_t offset, uint32_t val) "vfu: cfg: 0x%x <- 0x%x"
 vfu_dma_register(uint64_t gpa, size_t len) "vfu: registering GPA 0x%"PRIx64", %zu bytes"
 vfu_dma_unregister(uint64_t gpa) "vfu: unregistering GPA 0x%"PRIx64""
+vfu_dma_read(uint64_t gpa, size_t len) "vfu: DMA read 0x%"PRIx64", %zu bytes"
+vfu_dma_write(uint64_t gpa, size_t len) "vfu: DMA write 0x%"PRIx64", %zu bytes"
 vfu_bar_register(int i, uint64_t addr, uint64_t size) "vfu: BAR %d: addr 0x%"PRIx64" size 0x%"PRIx64""
 vfu_bar_rw_enter(const char *op, uint64_t addr) "vfu: %s request for BAR address 0x%"PRIx64""
 vfu_bar_rw_exit(const char *op, uint64_t addr) "vfu: Finished %s of BAR address 0x%"PRIx64""
diff --git a/hw/remote/vfio-user-obj.c b/hw/remote/vfio-user-obj.c
index d9b879e056..a15e291c9a 100644
--- a/hw/remote/vfio-user-obj.c
+++ b/hw/remote/vfio-user-obj.c
@@ -300,6 +300,63 @@ static ssize_t vfu_object_cfg_access(vfu_ctx_t *vfu_ctx, char * const buf,
     return count;
 }
 
+static MemTxResult vfu_dma_read(void *opaque, hwaddr addr, uint64_t *val,
+                                unsigned size, MemTxAttrs attrs)
+{
+    MemoryRegion *region = opaque;
+    vfu_ctx_t *vfu_ctx = VFU_OBJECT(region->owner)->vfu_ctx;
+    uint8_t buf[sizeof(uint64_t)];
+
+    trace_vfu_dma_read(region->addr + addr, size);
+
+    g_autofree dma_sg_t *sg = g_malloc0(dma_sg_size());
+    vfu_dma_addr_t vfu_addr = (vfu_dma_addr_t)(region->addr + addr);
+    if (vfu_addr_to_sgl(vfu_ctx, vfu_addr, size, sg, 1, PROT_READ) < 0 ||
+        vfu_sgl_read(vfu_ctx, sg, 1, buf) != 0) {
+        return MEMTX_ERROR;
+    }
+
+    *val = ldn_he_p(buf, size);
+
+    return MEMTX_OK;
+}
+
+static MemTxResult vfu_dma_write(void *opaque, hwaddr addr, uint64_t val,
+                                 unsigned size, MemTxAttrs attrs)
+{
+    MemoryRegion *region = opaque;
+    vfu_ctx_t *vfu_ctx = VFU_OBJECT(region->owner)->vfu_ctx;
+    uint8_t buf[sizeof(uint64_t)];
+
+    trace_vfu_dma_write(region->addr + addr, size);
+
+    stn_he_p(buf, size, val);
+
+    g_autofree dma_sg_t *sg = g_malloc0(dma_sg_size());
+    vfu_dma_addr_t vfu_addr = (vfu_dma_addr_t)(region->addr + addr);
+    if (vfu_addr_to_sgl(vfu_ctx, vfu_addr, size, sg, 1, PROT_WRITE) < 0 ||
+        vfu_sgl_write(vfu_ctx, sg, 1, buf) != 0) {
+        return MEMTX_ERROR;
+    }
+
+    return MEMTX_OK;
+}
+
+static const MemoryRegionOps vfu_dma_ops = {
+    .read_with_attrs = vfu_dma_read,
+    .write_with_attrs = vfu_dma_write,
+    .endianness = DEVICE_HOST_ENDIAN,
+    .valid = {
+        .min_access_size = 1,
+        .max_access_size = 8,
+        .unaligned = true,
+    },
+    .impl = {
+        .min_access_size = 1,
+        .max_access_size = 8,
+    },
+};
+
 static void dma_register(vfu_ctx_t *vfu_ctx, vfu_dma_info_t *info)
 {
     VfuObject *o = vfu_get_private(vfu_ctx);
@@ -308,17 +365,30 @@ static void dma_register(vfu_ctx_t *vfu_ctx, vfu_dma_info_t *info)
     g_autofree char *name = NULL;
     struct iovec *iov = &info->iova;
 
-    if (!info->vaddr) {
-        return;
-    }
-
     name = g_strdup_printf("mem-%s-%"PRIx64"", o->device,
-                           (uint64_t)info->vaddr);
+                           (uint64_t)iov->iov_base);
 
     subregion = g_new0(MemoryRegion, 1);
 
-    memory_region_init_ram_ptr(subregion, NULL, name,
-                               iov->iov_len, info->vaddr);
+    if (info->vaddr) {
+        memory_region_init_ram_ptr(subregion, OBJECT(o), name,
+                                   iov->iov_len, info->vaddr);
+    } else {
+        /*
+         * Note that I/O regions' MemoryRegionOps handle accesses of at most 8
+         * bytes at a time, and larger accesses are broken down. However,
+         * many/most DMA accesses are larger than 8 bytes and VFIO-user can
+         * handle large DMA accesses just fine, thus this size restriction
+         * unnecessarily hurts performance, in particular given that each
+         * access causes a round trip on the VFIO-user socket.
+         *
+         * TODO: Investigate how to plumb larger accesses through memory
+         * regions, possibly by amending MemoryRegionOps or by creating a new
+         * memory region type.
+         */
+        memory_region_init_io(subregion, OBJECT(o), &vfu_dma_ops, subregion,
+                              name, iov->iov_len);
+    }
 
     dma_as = pci_device_iommu_address_space(o->pci_dev);
 
@@ -330,20 +400,20 @@ static void dma_register(vfu_ctx_t *vfu_ctx, vfu_dma_info_t *info)
 static void dma_unregister(vfu_ctx_t *vfu_ctx, vfu_dma_info_t *info)
 {
     VfuObject *o = vfu_get_private(vfu_ctx);
+    MemoryRegionSection mr_section;
     AddressSpace *dma_as = NULL;
-    MemoryRegion *mr = NULL;
-    ram_addr_t offset;
 
-    mr = memory_region_from_host(info->vaddr, &offset);
-    if (!mr) {
+    dma_as = pci_device_iommu_address_space(o->pci_dev);
+
+    mr_section =
+        memory_region_find(dma_as->root, (hwaddr)info->iova.iov_base, 1);
+    if (!mr_section.mr) {
         return;
     }
 
-    dma_as = pci_device_iommu_address_space(o->pci_dev);
-
-    memory_region_del_subregion(dma_as->root, mr);
+    memory_region_del_subregion(dma_as->root, mr_section.mr);
 
-    object_unparent((OBJECT(mr)));
+    object_unparent((OBJECT(mr_section.mr)));
 
     trace_vfu_dma_unregister((uint64_t)info->iova.iov_base);
 }

From patchwork Mon Feb 12 08:06:17 2024
From: Mattias Nissler <mnissler@rivosinc.com>
To: qemu-devel@nongnu.org, jag.raman@oracle.com, peterx@redhat.com,
 stefanha@redhat.com
Cc: Marcel Apfelbaum, Philippe Mathieu-Daudé, john.levon@nutanix.com,
 David Hildenbrand, Paolo Bonzini, "Michael S. Tsirkin", Richard Henderson,
 Elena Ufimtseva, Mattias Nissler
Subject: [PATCH v7 5/5] vfio-user: Fix config space access byte order
Date: Mon, 12 Feb 2024 00:06:17 -0800
Message-Id: <20240212080617.2559498-6-mnissler@rivosinc.com>
In-Reply-To: <20240212080617.2559498-1-mnissler@rivosinc.com>
References: <20240212080617.2559498-1-mnissler@rivosinc.com>

PCI config space is little-endian, so on a big-endian host we need to
perform byte swaps for values as they are passed to and received from the
generic PCI config space access machinery.
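To illustrate the underlying problem (this sketch is not part of the patch;
the helper names and values are made up): if the client hands us the four
config space bytes 0x34 0x12 0x00 0x00 (little-endian 0x1234), memcpy()-ing
them into an integer on a big-endian host yields 0x34120000, whereas QEMU's
ldn_le_p()/stn_le_p() helpers always interpret and store the bytes in
little-endian order, independent of host byte order:

#include "qemu/osdep.h"
#include "qemu/bswap.h"

/* Decode a value received from the wire in PCI (little-endian) order. */
static uint32_t config_value_from_wire(const uint8_t *ptr, int len)
{
    return ldn_le_p(ptr, len);    /* correct on both LE and BE hosts */
}

/* Encode a value for the wire in PCI (little-endian) order. */
static void config_value_to_wire(uint8_t *ptr, int len, uint32_t val)
{
    stn_le_p(ptr, len, val);
}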
Signed-off-by: Mattias Nissler
---
 hw/remote/vfio-user-obj.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/remote/vfio-user-obj.c b/hw/remote/vfio-user-obj.c
index a15e291c9a..0e93d7a7b4 100644
--- a/hw/remote/vfio-user-obj.c
+++ b/hw/remote/vfio-user-obj.c
@@ -281,7 +281,7 @@ static ssize_t vfu_object_cfg_access(vfu_ctx_t *vfu_ctx, char * const buf,
     while (bytes > 0) {
         len = (bytes > pci_access_width) ? pci_access_width : bytes;
         if (is_write) {
-            memcpy(&val, ptr, len);
+            val = ldn_le_p(ptr, len);
             pci_host_config_write_common(o->pci_dev, offset,
                                          pci_config_size(o->pci_dev),
                                          val, len);
@@ -289,7 +289,7 @@ static ssize_t vfu_object_cfg_access(vfu_ctx_t *vfu_ctx, char * const buf,
         } else {
             val = pci_host_config_read_common(o->pci_dev, offset,
                                               pci_config_size(o->pci_dev), len);
-            memcpy(ptr, &val, len);
+            stn_le_p(ptr, len, val);
             trace_vfu_cfg_read(offset, val);
         }
         offset += len;