From patchwork Wed Jul  7 15:34:51 2021
X-Patchwork-Submitter: Tianyu Lan
X-Patchwork-Id: 12362869
From: Tianyu Lan <ltykernel@gmail.com>
To: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com,
    wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de,
    mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
    dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
    konrad.wilk@oracle.com, boris.ostrovsky@oracle.com, jgross@suse.com,
    sstabellini@kernel.org, joro@8bytes.org, will@kernel.org,
    davem@davemloft.net, kuba@kernel.org, jejb@linux.ibm.com,
    martin.petersen@oracle.com, arnd@arndb.de, hch@lst.de,
    m.szyprowski@samsung.com, robin.murphy@arm.com, rppt@kernel.org,
    akpm@linux-foundation.org, kirill.shutemov@linux.intel.com,
    Tianyu.Lan@microsoft.com, thomas.lendacky@amd.com, ardb@kernel.org,
    nramas@linux.microsoft.com, robh@kernel.org, keescook@chromium.org,
    rientjes@google.com, pgonda@google.com, martin.b.radev@gmail.com,
    hannes@cmpxchg.org, saravanand@fb.com, krish.sadhukhan@oracle.com,
    xen-devel@lists.xenproject.org, tj@kernel.org,
    michael.h.kelley@microsoft.com
Cc: iommu@lists.linux-foundation.org, linux-arch@vger.kernel.org,
    linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-scsi@vger.kernel.org, netdev@vger.kernel.org,
    vkuznets@redhat.com, brijesh.singh@amd.com, anparri@microsoft.com
Subject: [RFC PATCH V4 10/12] HV/Netvsc: Add Isolation VM support for netvsc driver
Date: Wed,  7 Jul 2021 11:34:51 -0400
Message-Id: <20210707153456.3976348-11-ltykernel@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210707153456.3976348-1-ltykernel@gmail.com>
References: <20210707153456.3976348-1-ltykernel@gmail.com>

From: Tianyu Lan <Tianyu.Lan@microsoft.com>

In an Isolation VM, all memory shared with the host must be marked
visible to the host via a hypercall. vmbus_establish_gpadl() already
does this for the netvsc rx/tx ring buffers, but the page buffers
passed to vmbus_sendpacket_pagebuffer() still need to be handled.
Use the DMA API to map/unmap this memory when sending/receiving
packets; the Hyper-V DMA ops callback will use swiotlb to allocate a
bounce buffer and copy data from/to the bounce buffer.
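
To illustrate the idea (this sketch is not part of the patch): for a
DMA_TO_DEVICE page, an swiotlb-backed mapping boils down to copying the
payload into a bounce buffer the host is allowed to read, and handing
the device the bounce buffer's address instead of the original page's.
The pool and helper below are made-up userspace stand-ins for the
kernel's swiotlb machinery, not its real API:

#include <stdio.h>
#include <string.h>

/* Stand-in for the host-visible swiotlb pool (illustrative only). */
static char bounce_pool[4096];

/* "Map" for DMA_TO_DEVICE: copy the payload into the shared pool and
 * return the pool address, which is what the device will read from.
 */
static void *bounce_map_to_device(const void *src, size_t len)
{
	memcpy(bounce_pool, src, len);
	return bounce_pool;
}

int main(void)
{
	char pkt[] = "netvsc payload";
	void *dev_addr = bounce_map_to_device(pkt, sizeof(pkt));

	/* The host/device only ever touches the bounce copy. */
	printf("device reads \"%s\" from bounce buffer at %p\n",
	       (char *)dev_addr, dev_addr);
	return 0;
}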
Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
---
 drivers/net/hyperv/hyperv_net.h   |   6 ++
 drivers/net/hyperv/netvsc.c       | 144 +++++++++++++++++++++++++++++-
 drivers/net/hyperv/rndis_filter.c |   2 +
 include/linux/hyperv.h            |   5 ++
 4 files changed, 154 insertions(+), 3 deletions(-)

diff --git a/drivers/net/hyperv/hyperv_net.h b/drivers/net/hyperv/hyperv_net.h
index b11aa68b44ec..c2fbb9d4df2c 100644
--- a/drivers/net/hyperv/hyperv_net.h
+++ b/drivers/net/hyperv/hyperv_net.h
@@ -164,6 +164,7 @@ struct hv_netvsc_packet {
 	u32 total_bytes;
 	u32 send_buf_index;
 	u32 total_data_buflen;
+	struct hv_dma_range *dma_range;
 };
 
 #define NETVSC_HASH_KEYLEN 40
@@ -1074,6 +1075,7 @@ struct netvsc_device {
 
 	/* Receive buffer allocated by us but manages by NetVSP */
 	void *recv_buf;
+	void *recv_original_buf;
 	u32 recv_buf_size; /* allocated bytes */
 	u32 recv_buf_gpadl_handle;
 	u32 recv_section_cnt;
@@ -1082,6 +1084,8 @@ struct netvsc_device {
 
 	/* Send buffer allocated by us */
 	void *send_buf;
+	void *send_original_buf;
+	u32 send_buf_size;
 	u32 send_buf_gpadl_handle;
 	u32 send_section_cnt;
 	u32 send_section_size;
@@ -1729,4 +1733,6 @@ struct rndis_message {
 #define RETRY_US_HI	10000
 #define RETRY_MAX	2000	/* >10 sec */
 
+void netvsc_dma_unmap(struct hv_device *hv_dev,
+		      struct hv_netvsc_packet *packet);
 #endif /* _HYPERV_NET_H */
diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
index 7bd935412853..fc312e5db4d5 100644
--- a/drivers/net/hyperv/netvsc.c
+++ b/drivers/net/hyperv/netvsc.c
@@ -153,8 +153,21 @@ static void free_netvsc_device(struct rcu_head *head)
 	int i;
 
 	kfree(nvdev->extension);
-	vfree(nvdev->recv_buf);
-	vfree(nvdev->send_buf);
+
+	if (nvdev->recv_original_buf) {
+		vunmap(nvdev->recv_buf);
+		vfree(nvdev->recv_original_buf);
+	} else {
+		vfree(nvdev->recv_buf);
+	}
+
+	if (nvdev->send_original_buf) {
+		vunmap(nvdev->send_buf);
+		vfree(nvdev->send_original_buf);
+	} else {
+		vfree(nvdev->send_buf);
+	}
+
 	kfree(nvdev->send_section_map);
 
 	for (i = 0; i < VRSS_CHANNEL_MAX; i++) {
@@ -330,6 +343,27 @@ int netvsc_alloc_recv_comp_ring(struct netvsc_device *net_device, u32 q_idx)
 	return nvchan->mrc.slots ? 0 : -ENOMEM;
 }
 
+static void *netvsc_remap_buf(void *buf, unsigned long size)
+{
+	unsigned long *pfns;
+	void *vaddr;
+	int i;
+
+	pfns = kcalloc(size / HV_HYP_PAGE_SIZE, sizeof(unsigned long),
+		       GFP_KERNEL);
+	if (!pfns)
+		return NULL;
+
+	for (i = 0; i < size / HV_HYP_PAGE_SIZE; i++)
+		pfns[i] = virt_to_hvpfn(buf + i * HV_HYP_PAGE_SIZE)
+			+ (ms_hyperv.shared_gpa_boundary >> HV_HYP_PAGE_SHIFT);
+
+	vaddr = vmap_pfn(pfns, size / HV_HYP_PAGE_SIZE, PAGE_KERNEL_IO);
+	kfree(pfns);
+
+	return vaddr;
+}
+
 static int netvsc_init_buf(struct hv_device *device,
 			   struct netvsc_device *net_device,
 			   const struct netvsc_device_info *device_info)
@@ -340,6 +374,7 @@ static int netvsc_init_buf(struct hv_device *device,
 	unsigned int buf_size;
 	size_t map_words;
 	int i, ret = 0;
+	void *vaddr;
 
 	/* Get receive buffer area. */
 	buf_size = device_info->recv_sections * device_info->recv_section_size;
@@ -375,6 +410,15 @@ static int netvsc_init_buf(struct hv_device *device,
 		goto cleanup;
 	}
 
+	if (hv_isolation_type_snp()) {
+		vaddr = netvsc_remap_buf(net_device->recv_buf, buf_size);
+		if (!vaddr)
+			goto cleanup;
+
+		net_device->recv_original_buf = net_device->recv_buf;
+		net_device->recv_buf = vaddr;
+	}
+
 	/* Notify the NetVsp of the gpadl handle */
 	init_packet = &net_device->channel_init_pkt;
 	memset(init_packet, 0, sizeof(struct nvsp_message));
@@ -477,6 +521,15 @@ static int netvsc_init_buf(struct hv_device *device,
 		goto cleanup;
 	}
 
+	if (hv_isolation_type_snp()) {
+		vaddr = netvsc_remap_buf(net_device->send_buf, buf_size);
+		if (!vaddr)
+			goto cleanup;
+
+		net_device->send_original_buf = net_device->send_buf;
+		net_device->send_buf = vaddr;
+	}
+
 	/* Notify the NetVsp of the gpadl handle */
 	init_packet = &net_device->channel_init_pkt;
 	memset(init_packet, 0, sizeof(struct nvsp_message));
@@ -767,7 +820,7 @@ static void netvsc_send_tx_complete(struct net_device *ndev,
 
 	/* Notify the layer above us */
 	if (likely(skb)) {
-		const struct hv_netvsc_packet *packet
+		struct hv_netvsc_packet *packet
 			= (struct hv_netvsc_packet *)skb->cb;
 		u32 send_index = packet->send_buf_index;
 		struct netvsc_stats *tx_stats;
@@ -783,6 +836,7 @@ static void netvsc_send_tx_complete(struct net_device *ndev,
 		tx_stats->bytes += packet->total_bytes;
 		u64_stats_update_end(&tx_stats->syncp);
 
+		netvsc_dma_unmap(ndev_ctx->device_ctx, packet);
 		napi_consume_skb(skb, budget);
 	}
 
@@ -947,6 +1001,82 @@ static void netvsc_copy_to_send_buf(struct netvsc_device *net_device,
 		memset(dest, 0, padding);
 }
 
+void netvsc_dma_unmap(struct hv_device *hv_dev,
+		      struct hv_netvsc_packet *packet)
+{
+	u32 page_count = packet->cp_partial ?
+		packet->page_buf_cnt - packet->rmsg_pgcnt :
+		packet->page_buf_cnt;
+	int i;
+
+	if (!hv_is_isolation_supported())
+		return;
+
+	if (!packet->dma_range)
+		return;
+
+	for (i = 0; i < page_count; i++)
+		dma_unmap_single(&hv_dev->device, packet->dma_range[i].dma,
+				 packet->dma_range[i].mapping_size,
+				 DMA_TO_DEVICE);
+
+	kfree(packet->dma_range);
+}
+
+/* netvsc_dma_map - Map the data pages of a packet sent by
+ * vmbus_sendpacket_pagebuffer() to the swiotlb bounce buffer in an
+ * Isolation VM.
+ *
+ * In an Isolation VM, the netvsc send buffer has already been marked
+ * visible to the host, so data copied into the send buffer needs no
+ * bounce buffer. The data pages handled by vmbus_sendpacket_pagebuffer()
+ * may not have been copied into the send buffer, so those pages must be
+ * mapped through the swiotlb bounce buffer; netvsc_dma_map() does that,
+ * rewriting the pfns in the struct hv_page_buffer entries to the bounce
+ * buffer's pfns. Each entry must be converted individually, which is
+ * why a loop is used here instead of dma_map_sg().
+ */
+int netvsc_dma_map(struct hv_device *hv_dev,
+		   struct hv_netvsc_packet *packet,
+		   struct hv_page_buffer *pb)
+{
+	u32 page_count = packet->cp_partial ?
+		packet->page_buf_cnt - packet->rmsg_pgcnt :
+		packet->page_buf_cnt;
+	dma_addr_t dma;
+	int i;
+
+	if (!hv_is_isolation_supported())
+		return 0;
+
+	packet->dma_range = kcalloc(page_count,
+				    sizeof(*packet->dma_range),
+				    GFP_KERNEL);
+	if (!packet->dma_range)
+		return -ENOMEM;
+
+	for (i = 0; i < page_count; i++) {
+		char *src = phys_to_virt((pb[i].pfn << HV_HYP_PAGE_SHIFT)
+					 + pb[i].offset);
+		u32 len = pb[i].len;
+
+		dma = dma_map_single(&hv_dev->device, src, len,
+				     DMA_TO_DEVICE);
+		if (dma_mapping_error(&hv_dev->device, dma)) {
+			kfree(packet->dma_range);
+			return -ENOMEM;
+		}
+
+		packet->dma_range[i].dma = dma;
+		packet->dma_range[i].mapping_size = len;
+		pb[i].pfn = dma >> HV_HYP_PAGE_SHIFT;
+		pb[i].offset = offset_in_hvpage(dma);
+		pb[i].len = len;
+	}
+
+	return 0;
+}
+
 static inline int netvsc_send_pkt(
 	struct hv_device *device,
 	struct hv_netvsc_packet *packet,
@@ -987,14 +1117,22 @@ static inline int netvsc_send_pkt(
 
 	trace_nvsp_send_pkt(ndev, out_channel, rpkt);
 
+	packet->dma_range = NULL;
 	if (packet->page_buf_cnt) {
 		if (packet->cp_partial)
 			pb += packet->rmsg_pgcnt;
 
+		ret = netvsc_dma_map(ndev_ctx->device_ctx, packet, pb);
+		if (ret)
+			return ret;
+
 		ret = vmbus_sendpacket_pagebuffer(out_channel,
 						  pb, packet->page_buf_cnt,
 						  &nvmsg, sizeof(nvmsg),
 						  req_id);
+
+		if (ret)
+			netvsc_dma_unmap(ndev_ctx->device_ctx, packet);
 	} else {
 		ret = vmbus_sendpacket(out_channel,
 				       &nvmsg, sizeof(nvmsg),
diff --git a/drivers/net/hyperv/rndis_filter.c b/drivers/net/hyperv/rndis_filter.c
index 983bf362466a..9425fee85aa0 100644
--- a/drivers/net/hyperv/rndis_filter.c
+++ b/drivers/net/hyperv/rndis_filter.c
@@ -361,6 +361,8 @@ static void rndis_filter_receive_response(struct net_device *ndev,
 			}
 		}
 
+		netvsc_dma_unmap(((struct net_device_context *)
+			netdev_priv(ndev))->device_ctx, &request->pkt);
 		complete(&request->wait_event);
 	} else {
 		netdev_err(ndev,
diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
index babbe19f57e2..90abff664495 100644
--- a/include/linux/hyperv.h
+++ b/include/linux/hyperv.h
@@ -1616,6 +1616,11 @@ struct hyperv_service_callback {
 	void (*callback)(void *context);
 };
 
+struct hv_dma_range {
+	dma_addr_t dma;
+	u32 mapping_size;
+};
+
 #define MAX_SRV_VER	0x7ffffff
 extern bool vmbus_prep_negotiate_resp(struct icmsg_hdr *icmsghdrp, u8 *buf,
 				u32 buflen, const int *fw_version, int fw_vercnt,
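
As a footnote to netvsc_dma_map() above: the hv_page_buffer rewrite
splits the DMA address returned by dma_map_single() into a Hyper-V page
frame and an in-page offset. A minimal userspace sketch of that
arithmetic follows; the DMA address value and the reduced structure are
made-up stand-ins, not the kernel definitions:

#include <stdio.h>
#include <stdint.h>

#define HV_HYP_PAGE_SHIFT 12			/* Hyper-V pages are 4 KiB */
#define HV_HYP_PAGE_SIZE  (1UL << HV_HYP_PAGE_SHIFT)

/* Reduced stand-in for the VMBus struct hv_page_buffer. */
struct hv_page_buffer {
	uint32_t len;
	uint32_t offset;
	uint64_t pfn;
};

int main(void)
{
	/* Pretend dma_map_single() returned this bounce-buffer address. */
	uint64_t dma = 0x7f001a34;
	struct hv_page_buffer pb = { .len = 0x100 };

	/* The rewrite the patch performs on each entry: */
	pb.pfn = dma >> HV_HYP_PAGE_SHIFT;		/* page frame of bounce slot */
	pb.offset = dma & (HV_HYP_PAGE_SIZE - 1);	/* offset_in_hvpage(dma)     */

	printf("pfn=%#llx offset=%#x len=%#x\n",
	       (unsigned long long)pb.pfn, pb.offset, pb.len);
	return 0;
}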