From patchwork Wed Jun  1 08:23:42 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Dmitry Fleytman
X-Patchwork-Id: 9146361
From: Dmitry Fleytman
To: qemu-devel@nongnu.org
Cc: Yan Vugenfirer, Jason Wang, Leonid Bloch, Shmulik Ladkani,
 "Michael S. Tsirkin"
Date: Wed, 1 Jun 2016 11:23:42 +0300
Message-Id: <1464769426-22276-14-git-send-email-dmitry@daynix.com>
X-Mailer: git-send-email 2.5.5
In-Reply-To: <1464769426-22276-1-git-send-email-dmitry@daynix.com>
References: <1464769426-22276-1-git-send-email-dmitry@daynix.com>
Subject: [Qemu-devel] [PATCH v8 13/17] vmxnet3: Use pci_dma_* API instead of cpu_physical_memory_*

From: Dmitry Fleytman

Make this device and the network packet abstractions ready for IOMMU.

Signed-off-by: Dmitry Fleytman
Signed-off-by: Leonid Bloch
Reviewed-by: Michael S. Tsirkin
---
 hw/net/net_tx_pkt.c | 16 +++++++++++-----
 hw/net/net_tx_pkt.h |  5 +++--
 hw/net/vmxnet3.c    | 51 ++++++++++++++++++++++++++++++---------------------
 3 files changed, 44 insertions(+), 28 deletions(-)

diff --git a/hw/net/net_tx_pkt.c b/hw/net/net_tx_pkt.c
index a64f51c..e4478be 100644
--- a/hw/net/net_tx_pkt.c
+++ b/hw/net/net_tx_pkt.c
@@ -20,6 +20,7 @@
 #include "net/checksum.h"
 #include "net/tap.h"
 #include "net/net.h"
+#include "hw/pci/pci.h"
 
 enum {
     NET_TX_PKT_VHDR_FRAG = 0,
@@ -30,6 +31,8 @@ enum {
 
 /* TX packet private context */
 struct NetTxPkt {
+    PCIDevice *pci_dev;
+
     struct virtio_net_hdr virt_hdr;
     bool has_virt_hdr;
 
@@ -54,11 +57,13 @@ struct NetTxPkt {
     bool is_loopback;
 };
 
-void net_tx_pkt_init(struct NetTxPkt **pkt, uint32_t max_frags,
-    bool has_virt_hdr)
+void net_tx_pkt_init(struct NetTxPkt **pkt, PCIDevice *pci_dev,
+    uint32_t max_frags, bool has_virt_hdr)
 {
     struct NetTxPkt *p = g_malloc0(sizeof *p);
 
+    p->pci_dev = pci_dev;
+
     p->vec = g_malloc((sizeof *p->vec) *
         (max_frags + NET_TX_PKT_PL_START_FRAG));
 
@@ -383,7 +388,8 @@ bool net_tx_pkt_add_raw_fragment(struct NetTxPkt *pkt, hwaddr pa,
     ventry = &pkt->raw[pkt->raw_frags];
     mapped_len = len;
 
-    ventry->iov_base = cpu_physical_memory_map(pa, &mapped_len, false);
+    ventry->iov_base = pci_dma_map(pkt->pci_dev, pa,
+                                   &mapped_len, DMA_DIRECTION_TO_DEVICE);
 
     if ((ventry->iov_base != NULL) && (len == mapped_len)) {
         ventry->iov_len = mapped_len;
@@ -444,8 +450,8 @@ void net_tx_pkt_reset(struct NetTxPkt *pkt)
     assert(pkt->raw);
     for (i = 0; i < pkt->raw_frags; i++) {
         assert(pkt->raw[i].iov_base);
-        cpu_physical_memory_unmap(pkt->raw[i].iov_base, pkt->raw[i].iov_len,
-                                  false, pkt->raw[i].iov_len);
+        pci_dma_unmap(pkt->pci_dev, pkt->raw[i].iov_base, pkt->raw[i].iov_len,
+                      DMA_DIRECTION_TO_DEVICE, 0);
     }
 
     pkt->raw_frags = 0;
diff --git a/hw/net/net_tx_pkt.h b/hw/net/net_tx_pkt.h
index e49772d..07b9a20 100644
--- a/hw/net/net_tx_pkt.h
+++ b/hw/net/net_tx_pkt.h
@@ -31,11 +31,12 @@ struct NetTxPkt;
  * Init function for tx packet functionality
  *
  * @pkt:            packet pointer
+ * @pci_dev:        PCI device processing this packet
  * @max_frags:      max tx ip fragments
  * @has_virt_hdr:   device uses virtio header.
  */
-void net_tx_pkt_init(struct NetTxPkt **pkt, uint32_t max_frags,
-    bool has_virt_hdr);
+void net_tx_pkt_init(struct NetTxPkt **pkt, PCIDevice *pci_dev,
+    uint32_t max_frags, bool has_virt_hdr);
 
 /**
  * Clean all tx packet resources.
diff --git a/hw/net/vmxnet3.c b/hw/net/vmxnet3.c
index 33cd07d..16645e6 100644
--- a/hw/net/vmxnet3.c
+++ b/hw/net/vmxnet3.c
@@ -802,7 +802,9 @@ vmxnet3_pop_rxc_descr(VMXNET3State *s, int qidx, uint32_t *descr_gen)
     hwaddr daddr =
         vmxnet3_ring_curr_cell_pa(&s->rxq_descr[qidx].comp_ring);
 
-    cpu_physical_memory_read(daddr, &rxcd, sizeof(struct Vmxnet3_RxCompDesc));
+    pci_dma_read(PCI_DEVICE(s), daddr,
+                 &rxcd, sizeof(struct Vmxnet3_RxCompDesc));
+
     ring_gen = vmxnet3_ring_curr_gen(&s->rxq_descr[qidx].comp_ring);
 
     if (rxcd.gen != ring_gen) {
@@ -1023,10 +1025,11 @@ nocsum:
 }
 
 static void
-vmxnet3_physical_memory_writev(const struct iovec *iov,
-                               size_t start_iov_off,
-                               hwaddr target_addr,
-                               size_t bytes_to_copy)
+vmxnet3_pci_dma_writev(PCIDevice *pci_dev,
+                       const struct iovec *iov,
+                       size_t start_iov_off,
+                       hwaddr target_addr,
+                       size_t bytes_to_copy)
 {
     size_t curr_off = 0;
     size_t copied = 0;
@@ -1036,9 +1039,9 @@ vmxnet3_physical_memory_writev(const struct iovec *iov,
         size_t chunk_len = MIN((curr_off + iov->iov_len) - start_iov_off,
                                bytes_to_copy);
 
-        cpu_physical_memory_write(target_addr + copied,
-                                  iov->iov_base + start_iov_off - curr_off,
-                                  chunk_len);
+        pci_dma_write(pci_dev, target_addr + copied,
+                      iov->iov_base + start_iov_off - curr_off,
+                      chunk_len);
 
         copied += chunk_len;
         start_iov_off += chunk_len;
@@ -1088,15 +1091,15 @@ vmxnet3_indicate_packet(VMXNET3State *s)
         }
 
         chunk_size = MIN(bytes_left, rxd.len);
-        vmxnet3_physical_memory_writev(data, bytes_copied,
-                                       le64_to_cpu(rxd.addr), chunk_size);
+        vmxnet3_pci_dma_writev(PCI_DEVICE(s), data, bytes_copied,
+                               le64_to_cpu(rxd.addr), chunk_size);
         bytes_copied += chunk_size;
         bytes_left -= chunk_size;
 
         vmxnet3_dump_rx_descr(&rxd);
 
         if (ready_rxcd_pa != 0) {
-            cpu_physical_memory_write(ready_rxcd_pa, &rxcd, sizeof(rxcd));
+            pci_dma_write(PCI_DEVICE(s), ready_rxcd_pa, &rxcd, sizeof(rxcd));
         }
 
         memset(&rxcd, 0, sizeof(struct Vmxnet3_RxCompDesc));
@@ -1127,7 +1130,8 @@ vmxnet3_indicate_packet(VMXNET3State *s)
     if (ready_rxcd_pa != 0) {
         rxcd.eop = 1;
         rxcd.err = (bytes_left != 0);
-        cpu_physical_memory_write(ready_rxcd_pa, &rxcd, sizeof(rxcd));
+
+        pci_dma_write(PCI_DEVICE(s), ready_rxcd_pa, &rxcd, sizeof(rxcd));
 
         /* Flush RX descriptor changes */
         smp_wmb();
@@ -1298,7 +1302,8 @@ static void vmxnet3_update_mcast_filters(VMXNET3State *s)
         VMXNET3_READ_DRV_SHARED64(s->drv_shmem,
                                   devRead.rxFilterConf.mfTablePA);
 
-        cpu_physical_memory_read(mcast_list_pa, s->mcast_list, list_bytes);
+        pci_dma_read(PCI_DEVICE(s), mcast_list_pa, s->mcast_list, list_bytes);
+
         VMW_CFPRN("Current multicast list len is %d:", s->mcast_list_len);
         for (i = 0; i < s->mcast_list_len; i++) {
             VMW_CFPRN("\t" MAC_FMT, MAC_ARG(s->mcast_list[i].a));
@@ -1328,15 +1333,17 @@ static void vmxnet3_fill_stats(VMXNET3State *s)
         return;
 
     for (i = 0; i < s->txq_num; i++) {
-        cpu_physical_memory_write(s->txq_descr[i].tx_stats_pa,
-                                  &s->txq_descr[i].txq_stats,
-                                  sizeof(s->txq_descr[i].txq_stats));
+        pci_dma_write(PCI_DEVICE(s),
+                      s->txq_descr[i].tx_stats_pa,
+                      &s->txq_descr[i].txq_stats,
+                      sizeof(s->txq_descr[i].txq_stats));
     }
 
     for (i = 0; i < s->rxq_num; i++) {
-        cpu_physical_memory_write(s->rxq_descr[i].rx_stats_pa,
-                                  &s->rxq_descr[i].rxq_stats,
-                                  sizeof(s->rxq_descr[i].rxq_stats));
+        pci_dma_write(PCI_DEVICE(s),
+                      s->rxq_descr[i].rx_stats_pa,
+                      &s->rxq_descr[i].rxq_stats,
+                      sizeof(s->rxq_descr[i].rxq_stats));
     }
 }
 
@@ -1558,7 +1565,8 @@ static void vmxnet3_activate_device(VMXNET3State *s)
 
     /* Preallocate TX packet wrapper */
     VMW_CFPRN("Max TX fragments is %u", s->max_tx_frags);
-    net_tx_pkt_init(&s->tx_pkt, s->max_tx_frags, s->peer_has_vhdr);
+    net_tx_pkt_init(&s->tx_pkt, PCI_DEVICE(s),
+                    s->max_tx_frags, s->peer_has_vhdr);
     net_rx_pkt_init(&s->rx_pkt, s->peer_has_vhdr);
 
     /* Read rings memory locations for RX queues */
@@ -2536,7 +2544,8 @@ static int vmxnet3_post_load(void *opaque, int version_id)
     VMXNET3State *s = opaque;
    PCIDevice *d = PCI_DEVICE(s);
 
-    net_tx_pkt_init(&s->tx_pkt, s->max_tx_frags, s->peer_has_vhdr);
+    net_tx_pkt_init(&s->tx_pkt, PCI_DEVICE(s),
+                    s->max_tx_frags, s->peer_has_vhdr);
     net_rx_pkt_init(&s->rx_pkt, s->peer_has_vhdr);
 
     if (s->msix_used) {
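
For reference, a minimal sketch (not part of the patch) of the map/use/unmap
pattern that net_tx_pkt.c adopts above, written against the pci_dma_* helpers
from hw/pci/pci.h; demo_read_guest_buffer() is a hypothetical helper used only
for this illustration, not a function in the tree:

#include "qemu/osdep.h"
#include "hw/pci/pci.h"

static void demo_read_guest_buffer(PCIDevice *d, dma_addr_t pa, dma_addr_t len)
{
    dma_addr_t mapped_len = len;

    /* Map through the PCI device so an IOMMU, when present, translates pa. */
    void *buf = pci_dma_map(d, pa, &mapped_len, DMA_DIRECTION_TO_DEVICE);
    if (buf == NULL || mapped_len != len) {
        if (buf != NULL) {
            pci_dma_unmap(d, buf, mapped_len, DMA_DIRECTION_TO_DEVICE, 0);
        }
        return;
    }

    /* ... consume the packet bytes in buf ... */

    /* access_len is 0 because the device only read the buffer. */
    pci_dma_unmap(d, buf, mapped_len, DMA_DIRECTION_TO_DEVICE, 0);
}

As in net_tx_pkt_reset() above, passing access_len == 0 to pci_dma_unmap()
tells the memory layer that nothing was written back to guest memory.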