From patchwork Wed Sep 2 03:00:57 2020
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 11749769
From: Boqun Feng
To: linux-hyperv@vger.kernel.org, linux-input@vger.kernel.org,
    linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
    linux-scsi@vger.kernel.org
Cc: "K. Y. Srinivasan", Haiyang Zhang, Stephen Hemminger, Wei Liu,
    Jiri Kosina, Benjamin Tissoires, Dmitry Torokhov, "David S. Miller",
    Jakub Kicinski, "James E.J. Bottomley", "Martin K. Petersen",
    Michael Kelley, Boqun Feng
Subject: [RFC v2 01/11] Drivers: hv: vmbus: Always use HV_HYP_PAGE_SIZE for gpadl
Date: Wed, 2 Sep 2020 11:00:57 +0800
Message-Id: <20200902030107.33380-2-boqun.feng@gmail.com>
In-Reply-To: <20200902030107.33380-1-boqun.feng@gmail.com>

Since the hypervisor always uses 4K as its page size, the PFNs used for
gpadl should be in units of HV_HYP_PAGE_SIZE rather than PAGE_SIZE, so
adjust this accordingly in preparation for supporting guests with 16K/64K
page sizes. No functional change on x86, where PAGE_SIZE is always 4K and
equal to HV_HYP_PAGE_SIZE.
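To illustrate the unit mismatch this series prepares for, here is a small
standalone C sketch (not part of any patch; the 64K guest page size is an
assumed example):

/*
 * Illustrative sketch: Hyper-V always uses 4K pages, so a PFN passed to
 * the hypervisor must be a 4K-page frame number even when the guest's
 * own PAGE_SIZE is larger.
 */
#include <stdio.h>
#include <stdint.h>

#define HV_HYP_PAGE_SHIFT	12	/* Hyper-V page size: 4K */
#define GUEST_PAGE_SHIFT	16	/* assumed guest PAGE_SIZE: 64K */

int main(void)
{
	uint64_t paddr = 3UL << GUEST_PAGE_SHIFT;	/* start of the 4th guest page */

	/* Wrong for a 64K guest: this is a guest-page frame number. */
	printf("paddr >> PAGE_SHIFT        = %llu\n",
	       (unsigned long long)(paddr >> GUEST_PAGE_SHIFT));
	/* What Hyper-V expects: a 4K hypervisor-page frame number. */
	printf("paddr >> HV_HYP_PAGE_SHIFT = %llu\n",
	       (unsigned long long)(paddr >> HV_HYP_PAGE_SHIFT));
	/* One 64K guest page covers 16 hypervisor pages. */
	printf("HV pages per guest page    = %d\n",
	       1 << (GUEST_PAGE_SHIFT - HV_HYP_PAGE_SHIFT));
	return 0;
}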
Signed-off-by: Boqun Feng
---
 drivers/hv/channel.c | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
index 3ebda7707e46..4d0f8e5a88d6 100644
--- a/drivers/hv/channel.c
+++ b/drivers/hv/channel.c
@@ -22,9 +22,6 @@
 
 #include "hyperv_vmbus.h"
 
-#define NUM_PAGES_SPANNED(addr, len) \
-((PAGE_ALIGN(addr + len) >> PAGE_SHIFT) - (addr >> PAGE_SHIFT))
-
 static unsigned long virt_to_hvpfn(void *addr)
 {
 	phys_addr_t paddr;
@@ -35,7 +32,7 @@ static unsigned long virt_to_hvpfn(void *addr)
 	else
 		paddr = __pa(addr);
 
-	return paddr >> PAGE_SHIFT;
+	return paddr >> HV_HYP_PAGE_SHIFT;
 }
 
 /*
@@ -330,7 +327,7 @@ static int create_gpadl_header(void *kbuffer, u32 size,
 	int pfnsum, pfncount, pfnleft, pfncurr, pfnsize;
 
-	pagecount = size >> PAGE_SHIFT;
+	pagecount = size >> HV_HYP_PAGE_SHIFT;
 
 	/* do we need a gpadl body msg */
 	pfnsize = MAX_SIZE_CHANNEL_MESSAGE -
@@ -360,7 +357,7 @@ static int create_gpadl_header(void *kbuffer, u32 size,
 		gpadl_header->range[0].byte_count = size;
 		for (i = 0; i < pfncount; i++)
 			gpadl_header->range[0].pfn_array[i] = virt_to_hvpfn(
-				kbuffer + PAGE_SIZE * i);
+				kbuffer + HV_HYP_PAGE_SIZE * i);
 		*msginfo = msgheader;
 
 		pfnsum = pfncount;
@@ -412,7 +409,7 @@ static int create_gpadl_header(void *kbuffer, u32 size,
 			 */
 			for (i = 0; i < pfncurr; i++)
 				gpadl_body->pfn[i] = virt_to_hvpfn(
-					kbuffer + PAGE_SIZE * (pfnsum + i));
+					kbuffer + HV_HYP_PAGE_SIZE * (pfnsum + i));
 
 			/* add to msg header */
 			list_add_tail(&msgbody->msglistentry,
@@ -441,7 +438,7 @@ static int create_gpadl_header(void *kbuffer, u32 size,
 		gpadl_header->range[0].byte_count = size;
 		for (i = 0; i < pagecount; i++)
 			gpadl_header->range[0].pfn_array[i] = virt_to_hvpfn(
-				kbuffer + PAGE_SIZE * i);
+				kbuffer + HV_HYP_PAGE_SIZE * i);
 		*msginfo = msgheader;
 	}

From patchwork Wed Sep 2 03:00:58 2020
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 11749751
From: Boqun Feng
Subject: [RFC v2 02/11] Drivers: hv: vmbus: Move __vmbus_open()
Date: Wed, 2 Sep 2020 11:00:58 +0800
Message-Id: <20200902030107.33380-3-boqun.feng@gmail.com>
In-Reply-To: <20200902030107.33380-1-boqun.feng@gmail.com>

Pure function movement; no functional changes.
The move is made because, in a later change, __vmbus_open() will rely on
some static functions, so we separate the move and the modification of
__vmbus_open() into two patches to make review easier.

Signed-off-by: Boqun Feng
Reviewed-by: Wei Liu
---
 drivers/hv/channel.c | 309 ++++++++++++++++++++++---------------------
 1 file changed, 155 insertions(+), 154 deletions(-)

diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
index 4d0f8e5a88d6..1cbe8fc931fc 100644
--- a/drivers/hv/channel.c
+++ b/drivers/hv/channel.c
@@ -109,160 +109,6 @@ int vmbus_alloc_ring(struct vmbus_channel *newchannel,
 }
 EXPORT_SYMBOL_GPL(vmbus_alloc_ring);
 
-static int __vmbus_open(struct vmbus_channel *newchannel,
-		       void *userdata, u32 userdatalen,
-		       void (*onchannelcallback)(void *context), void *context)
-{
-	struct vmbus_channel_open_channel *open_msg;
-	struct vmbus_channel_msginfo *open_info = NULL;
-	struct page *page = newchannel->ringbuffer_page;
-	u32 send_pages, recv_pages;
-	unsigned long flags;
-	int err;
-
-	if (userdatalen > MAX_USER_DEFINED_BYTES)
-		return -EINVAL;
-
-	send_pages = newchannel->ringbuffer_send_offset;
-	recv_pages = newchannel->ringbuffer_pagecount - send_pages;
-
-	if (newchannel->state != CHANNEL_OPEN_STATE)
-		return -EINVAL;
-
-	newchannel->state = CHANNEL_OPENING_STATE;
-	newchannel->onchannel_callback = onchannelcallback;
-	newchannel->channel_callback_context = context;
-
-	err = hv_ringbuffer_init(&newchannel->outbound, page, send_pages);
-	if (err)
-		goto error_clean_ring;
-
-	err = hv_ringbuffer_init(&newchannel->inbound,
-				 &page[send_pages], recv_pages);
-	if (err)
-		goto error_clean_ring;
-
-	/* Establish the gpadl for the ring buffer */
-	newchannel->ringbuffer_gpadlhandle = 0;
-
-	err = vmbus_establish_gpadl(newchannel,
-				    page_address(newchannel->ringbuffer_page),
-				    (send_pages + recv_pages) << PAGE_SHIFT,
-				    &newchannel->ringbuffer_gpadlhandle);
-	if (err)
-		goto error_clean_ring;
-
-	/* Create and init the channel open message */
-	open_info = kmalloc(sizeof(*open_info) +
-			   sizeof(struct vmbus_channel_open_channel),
-			   GFP_KERNEL);
-	if (!open_info) {
-		err = -ENOMEM;
-		goto error_free_gpadl;
-	}
-
-	init_completion(&open_info->waitevent);
-	open_info->waiting_channel = newchannel;
-
-	open_msg = (struct vmbus_channel_open_channel *)open_info->msg;
-	open_msg->header.msgtype = CHANNELMSG_OPENCHANNEL;
-	open_msg->openid = newchannel->offermsg.child_relid;
-	open_msg->child_relid = newchannel->offermsg.child_relid;
-	open_msg->ringbuffer_gpadlhandle = newchannel->ringbuffer_gpadlhandle;
-	open_msg->downstream_ringbuffer_pageoffset = newchannel->ringbuffer_send_offset;
-	open_msg->target_vp = hv_cpu_number_to_vp_number(newchannel->target_cpu);
-
-	if (userdatalen)
-		memcpy(open_msg->userdata, userdata, userdatalen);
-
-	spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags);
-	list_add_tail(&open_info->msglistentry,
-		      &vmbus_connection.chn_msg_list);
-	spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
-
-	if (newchannel->rescind) {
-		err = -ENODEV;
-		goto error_free_info;
-	}
-
-	err = vmbus_post_msg(open_msg,
-			     sizeof(struct vmbus_channel_open_channel), true);
-
-	trace_vmbus_open(open_msg, err);
-
-	if (err != 0)
-		goto error_clean_msglist;
-
-	wait_for_completion(&open_info->waitevent);
-
-	spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags);
-	list_del(&open_info->msglistentry);
-	spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
-
-	if (newchannel->rescind) {
-		err = -ENODEV;
-		goto error_free_info;
-	}
-
-	if (open_info->response.open_result.status) {
-		err = -EAGAIN;
-		goto error_free_info;
-	}
-
-	newchannel->state = CHANNEL_OPENED_STATE;
-	kfree(open_info);
-	return 0;
-
-error_clean_msglist:
-	spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags);
-	list_del(&open_info->msglistentry);
-	spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
-error_free_info:
-	kfree(open_info);
-error_free_gpadl:
-	vmbus_teardown_gpadl(newchannel, newchannel->ringbuffer_gpadlhandle);
-	newchannel->ringbuffer_gpadlhandle = 0;
-error_clean_ring:
-	hv_ringbuffer_cleanup(&newchannel->outbound);
-	hv_ringbuffer_cleanup(&newchannel->inbound);
-	newchannel->state = CHANNEL_OPEN_STATE;
-	return err;
-}
-
-/*
- * vmbus_connect_ring - Open the channel but reuse ring buffer
- */
-int vmbus_connect_ring(struct vmbus_channel *newchannel,
-		       void (*onchannelcallback)(void *context), void *context)
-{
-	return __vmbus_open(newchannel, NULL, 0, onchannelcallback, context);
-}
-EXPORT_SYMBOL_GPL(vmbus_connect_ring);
-
-/*
- * vmbus_open - Open the specified channel.
- */
-int vmbus_open(struct vmbus_channel *newchannel,
-	       u32 send_ringbuffer_size, u32 recv_ringbuffer_size,
-	       void *userdata, u32 userdatalen,
-	       void (*onchannelcallback)(void *context), void *context)
-{
-	int err;
-
-	err = vmbus_alloc_ring(newchannel, send_ringbuffer_size,
-			       recv_ringbuffer_size);
-	if (err)
-		return err;
-
-	err = __vmbus_open(newchannel, userdata, userdatalen,
-			   onchannelcallback, context);
-	if (err)
-		vmbus_free_ring(newchannel);
-
-	return err;
-}
-EXPORT_SYMBOL_GPL(vmbus_open);
-
 /* Used for Hyper-V Socket: a guest client's connect() to the host */
 int vmbus_send_tl_connect_request(const guid_t *shv_guest_servie_id,
 				  const guid_t *shv_host_servie_id)
@@ -556,6 +402,161 @@ int vmbus_establish_gpadl(struct vmbus_channel *channel, void *kbuffer,
 }
 EXPORT_SYMBOL_GPL(vmbus_establish_gpadl);
 
+static int __vmbus_open(struct vmbus_channel *newchannel,
+		       void *userdata, u32 userdatalen,
+		       void (*onchannelcallback)(void *context), void *context)
+{
+	struct vmbus_channel_open_channel *open_msg;
+	struct vmbus_channel_msginfo *open_info = NULL;
+	struct page *page = newchannel->ringbuffer_page;
+	u32 send_pages, recv_pages;
+	unsigned long flags;
+	int err;
+
+	if (userdatalen > MAX_USER_DEFINED_BYTES)
+		return -EINVAL;
+
+	send_pages = newchannel->ringbuffer_send_offset;
+	recv_pages = newchannel->ringbuffer_pagecount - send_pages;
+
+	if (newchannel->state != CHANNEL_OPEN_STATE)
+		return -EINVAL;
+
+	newchannel->state = CHANNEL_OPENING_STATE;
+	newchannel->onchannel_callback = onchannelcallback;
+	newchannel->channel_callback_context = context;
+
+	err = hv_ringbuffer_init(&newchannel->outbound, page, send_pages);
+	if (err)
+		goto error_clean_ring;
+
+	err = hv_ringbuffer_init(&newchannel->inbound,
+				 &page[send_pages], recv_pages);
+	if (err)
+		goto error_clean_ring;
+
+	/* Establish the gpadl for the ring buffer */
+	newchannel->ringbuffer_gpadlhandle = 0;
+
+	err = vmbus_establish_gpadl(newchannel,
+				    page_address(newchannel->ringbuffer_page),
+				    (send_pages + recv_pages) << PAGE_SHIFT,
+				    &newchannel->ringbuffer_gpadlhandle);
+	if (err)
+		goto error_clean_ring;
+
+	/* Create and init the channel open message */
+	open_info = kmalloc(sizeof(*open_info) +
+			   sizeof(struct vmbus_channel_open_channel),
+			   GFP_KERNEL);
+	if (!open_info) {
+		err = -ENOMEM;
+		goto error_free_gpadl;
+	}
+
+	init_completion(&open_info->waitevent);
+	open_info->waiting_channel = newchannel;
+
+	open_msg = (struct vmbus_channel_open_channel *)open_info->msg;
+	open_msg->header.msgtype = CHANNELMSG_OPENCHANNEL;
+	open_msg->openid = newchannel->offermsg.child_relid;
+	open_msg->child_relid = newchannel->offermsg.child_relid;
+	open_msg->ringbuffer_gpadlhandle = newchannel->ringbuffer_gpadlhandle;
+	open_msg->downstream_ringbuffer_pageoffset = newchannel->ringbuffer_send_offset;
+	open_msg->target_vp = hv_cpu_number_to_vp_number(newchannel->target_cpu);
+
+	if (userdatalen)
+		memcpy(open_msg->userdata, userdata, userdatalen);
+
+	spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags);
+	list_add_tail(&open_info->msglistentry,
+		      &vmbus_connection.chn_msg_list);
+	spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
+
+	if (newchannel->rescind) {
+		err = -ENODEV;
+		goto error_free_info;
+	}
+
+	err = vmbus_post_msg(open_msg,
+			     sizeof(struct vmbus_channel_open_channel), true);
+
+	trace_vmbus_open(open_msg, err);
+
+	if (err != 0)
+		goto error_clean_msglist;
+
+	wait_for_completion(&open_info->waitevent);
+
+	spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags);
+	list_del(&open_info->msglistentry);
+	spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
+
+	if (newchannel->rescind) {
+		err = -ENODEV;
+		goto error_free_info;
+	}
+
+	if (open_info->response.open_result.status) {
+		err = -EAGAIN;
+		goto error_free_info;
+	}
+
+	newchannel->state = CHANNEL_OPENED_STATE;
+	kfree(open_info);
+	return 0;
+
+error_clean_msglist:
+	spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags);
+	list_del(&open_info->msglistentry);
+	spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
+error_free_info:
+	kfree(open_info);
+error_free_gpadl:
+	vmbus_teardown_gpadl(newchannel, newchannel->ringbuffer_gpadlhandle);
+	newchannel->ringbuffer_gpadlhandle = 0;
+error_clean_ring:
+	hv_ringbuffer_cleanup(&newchannel->outbound);
+	hv_ringbuffer_cleanup(&newchannel->inbound);
+	newchannel->state = CHANNEL_OPEN_STATE;
+	return err;
+}
+
+/*
+ * vmbus_connect_ring - Open the channel but reuse ring buffer
+ */
+int vmbus_connect_ring(struct vmbus_channel *newchannel,
+		       void (*onchannelcallback)(void *context), void *context)
+{
+	return __vmbus_open(newchannel, NULL, 0, onchannelcallback, context);
+}
+EXPORT_SYMBOL_GPL(vmbus_connect_ring);
+
+/*
+ * vmbus_open - Open the specified channel.
+ */
+int vmbus_open(struct vmbus_channel *newchannel,
+	       u32 send_ringbuffer_size, u32 recv_ringbuffer_size,
+	       void *userdata, u32 userdatalen,
+	       void (*onchannelcallback)(void *context), void *context)
+{
+	int err;
+
+	err = vmbus_alloc_ring(newchannel, send_ringbuffer_size,
+			       recv_ringbuffer_size);
+	if (err)
+		return err;
+
+	err = __vmbus_open(newchannel, userdata, userdatalen,
+			   onchannelcallback, context);
+	if (err)
+		vmbus_free_ring(newchannel);
+
+	return err;
+}
+EXPORT_SYMBOL_GPL(vmbus_open);
+
+
 /*
  * vmbus_teardown_gpadl -Teardown the specified GPADL handle
  */

From patchwork Wed Sep 2 03:00:59 2020
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 11749755
From: Boqun Feng
Subject: [RFC v2 03/11] Drivers: hv: vmbus: Introduce types of GPADL
Date: Wed, 2 Sep 2020 11:00:59 +0800
Message-Id: <20200902030107.33380-4-boqun.feng@gmail.com>
In-Reply-To: <20200902030107.33380-1-boqun.feng@gmail.com>

This patch introduces two types of GPADL: HV_GPADL_{BUFFER, RING}. The
types are purely a guest-side concept; the hypervisor treats them the
same.

The reason for introducing GPADL types is to support guests whose page
size is not 4k (the page size of the Hyper-V hypervisor). In such guests,
both the headers and the data parts of the ringbuffers need to be aligned
to PAGE_SIZE, because 1) some of the ringbuffers will be mapped into
userspace and 2) we use a "double mapping" mechanism to support fast
wrap-around, and "double mapping" relies on the ringbuffers being
page-aligned. However, the Hyper-V hypervisor only uses 4k
(HV_HYP_PAGE_SIZE) headers. Our solution is to always make the headers of
the ringbuffers take one guest page and, when the GPADL is established
between the guest and the hypervisor, use only the first 4k of each
header. To handle this special case, the GPADL type is needed to
distinguish the different guest memory layouts. The type enum is
introduced along with several generic interfaces that describe the
differences between a normal buffer GPADL and a ringbuffer GPADL.
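To make the header-gap arithmetic above concrete, here is a small
standalone sketch (the 64K guest page size and the ring sizes are assumed
examples; this is not code from the patch):

#include <stdio.h>
#include <stdint.h>

#define HV_HYP_PAGE_SIZE	4096UL		/* Hyper-V page size */
#define GUEST_PAGE_SIZE		65536UL		/* assumed 64K guest PAGE_SIZE */

/*
 * Each of the two rings starts with a PAGE_SIZE header in the guest, but
 * the hypervisor only maps the first HV_HYP_PAGE_SIZE of each header, so
 * the size Hyper-V sees is smaller by two "gaps".
 */
static uint64_t ring_gpadl_size(uint64_t guest_size)
{
	return guest_size - 2 * (GUEST_PAGE_SIZE - HV_HYP_PAGE_SIZE);
}

int main(void)
{
	/* e.g. one header page plus 15 data pages per ring, two rings */
	uint64_t guest_size = 2 * 16 * GUEST_PAGE_SIZE;
	uint64_t hv_size = ring_gpadl_size(guest_size);

	printf("guest view  : %llu bytes\n", (unsigned long long)guest_size);
	printf("Hyper-V view: %llu bytes = %llu 4K PFNs in the GPADL\n",
	       (unsigned long long)hv_size,
	       (unsigned long long)(hv_size / HV_HYP_PAGE_SIZE));
	return 0;
}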
Signed-off-by: Boqun Feng
---
 drivers/hv/channel.c   | 159 +++++++++++++++++++++++++++++++++++------
 include/linux/hyperv.h |  44 +++++++++++-
 2 files changed, 182 insertions(+), 21 deletions(-)

diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
index 1cbe8fc931fc..7c443fd567e4 100644
--- a/drivers/hv/channel.c
+++ b/drivers/hv/channel.c
@@ -35,6 +35,98 @@ static unsigned long virt_to_hvpfn(void *addr)
 	return paddr >> HV_HYP_PAGE_SHIFT;
 }
 
+/*
+ * hv_gpadl_size - Return the real size of a gpadl, the size that Hyper-V uses
+ *
+ * For BUFFER gpadl, Hyper-V uses the exact same size as the guest does.
+ *
+ * For RING gpadl, in each ring, the guest uses one PAGE_SIZE as the header
+ * (because of the alignment requirement), however, the hypervisor only
+ * uses the first HV_HYP_PAGE_SIZE as the header, therefore leaving a
+ * (PAGE_SIZE - HV_HYP_PAGE_SIZE) gap. And since there are two rings in a
+ * ringbuffer, So the total size for a RING gpadl that Hyper-V uses is the
+ * total size that the guest uses minus twice of the gap size.
+ */
+static inline u32 hv_gpadl_size(enum hv_gpadl_type type, u32 size)
+{
+	switch (type) {
+	case HV_GPADL_BUFFER:
+		return size;
+	case HV_GPADL_RING:
+		/* The size of a ringbuffer must be page-aligned */
+		BUG_ON(size % PAGE_SIZE);
+		/*
+		 * Two things to notice here:
+		 * 1) We're processing two ring buffers as a unit
+		 * 2) We're skipping any space larger than HV_HYP_PAGE_SIZE in
+		 * the first guest-size page of each of the two ring buffers.
+		 * So we effectively subtract out two guest-size pages, and add
+		 * back two Hyper-V size pages.
+		 */
+		return size - 2 * (PAGE_SIZE - HV_HYP_PAGE_SIZE);
+	}
+	BUG();
+	return 0;
+}
+
+/*
+ * hv_ring_gpadl_send_offset - Calculate the send offset in a ring gpadl based
+ *                             on the offset in the guest
+ *
+ * @send_offset: the offset (in bytes) where the send ringbuffer starts in the
+ *               virtual address space of the guest
+ */
+static inline u32 hv_ring_gpadl_send_offset(u32 send_offset)
+{
+
+	/*
+	 * For RING gpadl, in each ring, the guest uses one PAGE_SIZE as the
+	 * header (because of the alignment requirement), however, the
+	 * hypervisor only uses the first HV_HYP_PAGE_SIZE as the header,
+	 * therefore leaving a (PAGE_SIZE - HV_HYP_PAGE_SIZE) gap.
+	 *
+	 * And to calculate the effective send offset in gpadl, we need to
+	 * substract this gap.
+	 */
+	return send_offset - (PAGE_SIZE - HV_HYP_PAGE_SIZE);
+}
+
+/*
+ * hv_gpadl_hvpfn - Return the Hyper-V page PFN of the @i th Hyper-V page in
+ *                  the gpadl
+ *
+ * @type: the type of the gpadl
+ * @kbuffer: the pointer to the gpadl in the guest
+ * @size: the total size (in bytes) of the gpadl
+ * @send_offset: the offset (in bytes) where the send ringbuffer starts in the
+ *               virtual address space of the guest
+ * @i: the index
+ */
+static inline u64 hv_gpadl_hvpfn(enum hv_gpadl_type type, void *kbuffer,
+				 u32 size, u32 send_offset, int i)
+{
+	int send_idx = hv_ring_gpadl_send_offset(send_offset) >> HV_HYP_PAGE_SHIFT;
+	unsigned long delta = 0UL;
+
+	switch (type) {
+	case HV_GPADL_BUFFER:
+		break;
+	case HV_GPADL_RING:
+		if (i == 0)
+			delta = 0;
+		else if (i <= send_idx)
+			delta = PAGE_SIZE - HV_HYP_PAGE_SIZE;
+		else
+			delta = 2 * (PAGE_SIZE - HV_HYP_PAGE_SIZE);
+		break;
+	default:
+		BUG();
+		break;
+	}
+
+	return virt_to_hvpfn(kbuffer + delta + (HV_HYP_PAGE_SIZE * i));
+}
+
 /*
  * vmbus_setevent- Trigger an event notification on the specified
  * channel.
@@ -160,7 +252,8 @@ EXPORT_SYMBOL_GPL(vmbus_send_modifychannel);
 /*
  * create_gpadl_header - Creates a gpadl for the specified buffer
  */
-static int create_gpadl_header(void *kbuffer, u32 size,
+static int create_gpadl_header(enum hv_gpadl_type type, void *kbuffer,
+			       u32 size, u32 send_offset,
 			       struct vmbus_channel_msginfo **msginfo)
 {
 	int i;
@@ -173,7 +266,7 @@ static int create_gpadl_header(void *kbuffer, u32 size,
 	int pfnsum, pfncount, pfnleft, pfncurr, pfnsize;
 
-	pagecount = size >> HV_HYP_PAGE_SHIFT;
+	pagecount = hv_gpadl_size(type, size) >> HV_HYP_PAGE_SHIFT;
 
 	/* do we need a gpadl body msg */
 	pfnsize = MAX_SIZE_CHANNEL_MESSAGE -
@@ -200,10 +293,10 @@ static int create_gpadl_header(void *kbuffer, u32 size,
 		gpadl_header->range_buflen = sizeof(struct gpa_range) +
 					 pagecount * sizeof(u64);
 		gpadl_header->range[0].byte_offset = 0;
-		gpadl_header->range[0].byte_count = size;
+		gpadl_header->range[0].byte_count = hv_gpadl_size(type, size);
 		for (i = 0; i < pfncount; i++)
-			gpadl_header->range[0].pfn_array[i] = virt_to_hvpfn(
-				kbuffer + HV_HYP_PAGE_SIZE * i);
+			gpadl_header->range[0].pfn_array[i] = hv_gpadl_hvpfn(
+				type, kbuffer, size, send_offset, i);
 		*msginfo = msgheader;
 
 		pfnsum = pfncount;
@@ -254,8 +347,8 @@ static int create_gpadl_header(void *kbuffer, u32 size,
 			 * so the hypervisor guarantees that this is ok.
 			 */
 			for (i = 0; i < pfncurr; i++)
-				gpadl_body->pfn[i] = virt_to_hvpfn(
-					kbuffer + HV_HYP_PAGE_SIZE * (pfnsum + i));
+				gpadl_body->pfn[i] = hv_gpadl_hvpfn(type,
+					kbuffer, size, send_offset, pfnsum + i);
 
 			/* add to msg header */
 			list_add_tail(&msgbody->msglistentry,
@@ -281,10 +374,10 @@ static int create_gpadl_header(void *kbuffer, u32 size,
 		gpadl_header->range_buflen = sizeof(struct gpa_range) +
 					 pagecount * sizeof(u64);
 		gpadl_header->range[0].byte_offset = 0;
-		gpadl_header->range[0].byte_count = size;
+		gpadl_header->range[0].byte_count = hv_gpadl_size(type, size);
 		for (i = 0; i < pagecount; i++)
-			gpadl_header->range[0].pfn_array[i] = virt_to_hvpfn(
-				kbuffer + HV_HYP_PAGE_SIZE * i);
+			gpadl_header->range[0].pfn_array[i] = hv_gpadl_hvpfn(
+				type, kbuffer, size, send_offset, i);
 		*msginfo = msgheader;
 	}
 
@@ -297,15 +390,20 @@ static int create_gpadl_header(void *kbuffer, u32 size,
 }
 
 /*
- * vmbus_establish_gpadl - Establish a GPADL for the specified buffer
+ * __vmbus_establish_gpadl - Establish a GPADL for a buffer or ringbuffer
 *
 * @channel: a channel
+ * @type: the type of the corresponding GPADL, only meaningful for the guest.
 * @kbuffer: from kmalloc or vmalloc
 * @size: page-size multiple
+ * @send_offset: the offset (in bytes) where the send ring buffer starts,
+ *               should be 0 for BUFFER type gpadl
 * @gpadl_handle: some funky thing
 */
-int vmbus_establish_gpadl(struct vmbus_channel *channel, void *kbuffer,
-			  u32 size, u32 *gpadl_handle)
+static int __vmbus_establish_gpadl(struct vmbus_channel *channel,
+				   enum hv_gpadl_type type, void *kbuffer,
+				   u32 size, u32 send_offset,
+				   u32 *gpadl_handle)
 {
 	struct vmbus_channel_gpadl_header *gpadlmsg;
 	struct vmbus_channel_gpadl_body *gpadl_body;
@@ -319,7 +417,7 @@ int vmbus_establish_gpadl(struct vmbus_channel *channel, void *kbuffer,
 	next_gpadl_handle =
 		(atomic_inc_return(&vmbus_connection.next_gpadl_handle) - 1);
 
-	ret = create_gpadl_header(kbuffer, size, &msginfo);
+	ret = create_gpadl_header(type, kbuffer, size, send_offset, &msginfo);
 	if (ret)
 		return ret;
 
@@ -400,6 +498,21 @@ int vmbus_establish_gpadl(struct vmbus_channel *channel, void *kbuffer,
 	kfree(msginfo);
 	return ret;
 }
+
+/*
+ * vmbus_establish_gpadl - Establish a GPADL for the specified buffer
+ *
+ * @channel: a channel
+ * @kbuffer: from kmalloc or vmalloc
+ * @size: page-size multiple
+ * @gpadl_handle: some funky thing
+ */
+int vmbus_establish_gpadl(struct vmbus_channel *channel, void *kbuffer,
+			  u32 size, u32 *gpadl_handle)
+{
+	return __vmbus_establish_gpadl(channel, HV_GPADL_BUFFER, kbuffer, size,
+				       0U, gpadl_handle);
+}
 EXPORT_SYMBOL_GPL(vmbus_establish_gpadl);
 
 static int __vmbus_open(struct vmbus_channel *newchannel,
@@ -438,10 +551,11 @@ static int __vmbus_open(struct vmbus_channel *newchannel,
 	/* Establish the gpadl for the ring buffer */
 	newchannel->ringbuffer_gpadlhandle = 0;
 
-	err = vmbus_establish_gpadl(newchannel,
-				    page_address(newchannel->ringbuffer_page),
-				    (send_pages + recv_pages) << PAGE_SHIFT,
-				    &newchannel->ringbuffer_gpadlhandle);
+	err = __vmbus_establish_gpadl(newchannel, HV_GPADL_RING,
+				      page_address(newchannel->ringbuffer_page),
+				      (send_pages + recv_pages) << PAGE_SHIFT,
+				      newchannel->ringbuffer_send_offset << PAGE_SHIFT,
+				      &newchannel->ringbuffer_gpadlhandle);
 	if (err)
 		goto error_clean_ring;
 
@@ -462,7 +576,13 @@ static int __vmbus_open(struct vmbus_channel *newchannel,
 	open_msg->openid = newchannel->offermsg.child_relid;
 	open_msg->child_relid = newchannel->offermsg.child_relid;
 	open_msg->ringbuffer_gpadlhandle = newchannel->ringbuffer_gpadlhandle;
-	open_msg->downstream_ringbuffer_pageoffset = newchannel->ringbuffer_send_offset;
+	/*
+	 * The unit of ->downstream_ringbuffer_pageoffset is HV_HYP_PAGE and
+	 * the unit of ->ringbuffer_send_offset is PAGE, so here we first
+	 * calculate it into bytes and then convert into HV_HYP_PAGE.
+	 */
+	open_msg->downstream_ringbuffer_pageoffset =
+		hv_ring_gpadl_send_offset(newchannel->ringbuffer_send_offset << PAGE_SHIFT) >> HV_HYP_PAGE_SHIFT;
 	open_msg->target_vp = hv_cpu_number_to_vp_number(newchannel->target_cpu);
 
 	if (userdatalen)
@@ -556,7 +676,6 @@ int vmbus_open(struct vmbus_channel *newchannel,
 }
 EXPORT_SYMBOL_GPL(vmbus_open);
 
-
 /*
  * vmbus_teardown_gpadl -Teardown the specified GPADL handle
  */
diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
index 38100e80360a..7d16dd28aa48 100644
--- a/include/linux/hyperv.h
+++ b/include/linux/hyperv.h
@@ -29,6 +29,48 @@
 
 #pragma pack(push, 1)
 
+/*
+ * Types for GPADL, decides is how GPADL header is created.
+ *
+ * It doesn't make much difference between BUFFER and RING if PAGE_SIZE is the
+ * same as HV_HYP_PAGE_SIZE.
+ *
+ * If PAGE_SIZE is bigger than HV_HYP_PAGE_SIZE, the headers of ring buffers
+ * will be of PAGE_SIZE, however, only the first HV_HYP_PAGE will be put
+ * into gpadl, therefore the number for HV_HYP_PAGE and the indexes of each
+ * HV_HYP_PAGE will be different between different types of GPADL, for example
+ * if PAGE_SIZE is 64K:
+ *
+ * BUFFER:
+ *
+ * gva:    |-- 64k --|-- 64k --| ... |
+ * gpa:    | 4k | 4k | ... | 4k | 4k | 4k | ... | 4k |
+ * index:  0    1    2     15   16   17   18 .. 31   32 ...
+ *         |    |    ...   |    |    |    ...  |    ...
+ *         v    V          V    V    V         V
+ * gpadl:  | 4k | 4k | ... | 4k | 4k | 4k | ... | 4k | ... |
+ * index:  0    1    2 ... 15   16   17   18 .. 31   32 ...
+ *
+ * RING:
+ *
+ *         | header  |           data           | header  |     data      |
+ * gva:    |-- 64k --|-- 64k --| ... |-- 64k --|-- 64k --| ... |
+ * gpa:    | 4k | .. | 4k | 4k | ... | 4k | ... | 4k | .. | 4k | .. | ... |
+ * index:  0    1    16   17   18    31   ...   n   n+1  n+16 ...   2n
+ *         |         /    /          /          |         /         /
+ *         |        /    /          /           |        /         /
+ *         |       /    /   ...    /    ...     |       /   ...   /
+ *         |      /    /          /             |      /         /
+ *         |     /    /          /              |     /         /
+ *         V    V    V          V               V    V         v
+ * gpadl:  | 4k | 4k |   ...    |    ...        | 4k | 4k  ... |
+ * index:  0    1    2   ...    16   ...       n-15 n-14 n-13  ...  2n-30
+ */
+enum hv_gpadl_type {
+	HV_GPADL_BUFFER,
+	HV_GPADL_RING
+};
+
 /* Single-page buffer */
 struct hv_page_buffer {
 	u32 len;
@@ -111,7 +153,7 @@ struct hv_ring_buffer {
 	} feature_bits;
 
 	/* Pad it to PAGE_SIZE so that data starts on page boundary */
-	u8	reserved2[4028];
+	u8	reserved2[PAGE_SIZE - 68];
 
 	/*
 	 * Ring data starts here + RingDataStartOffset

From patchwork Wed Sep 2 03:01:00 2020
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 11749759
From: Boqun Feng
Subject: [RFC v2 04/11] Drivers: hv: Use HV_HYP_PAGE in hv_synic_enable_regs()
Date: Wed, 2 Sep 2020 11:01:00 +0800
Message-Id: <20200902030107.33380-5-boqun.feng@gmail.com>
In-Reply-To: <20200902030107.33380-1-boqun.feng@gmail.com>

Both of the base_*_gpa fields should hold guest page numbers in units of
the Hyper-V page, so use HV_HYP_PAGE instead of PAGE.
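As a standalone illustration of why the shift matters (the bitfield below
loosely mirrors the SIMP register layout described in the TLFS, with bit 0
as the enable bit and bits 63:12 holding a 4K-page number; the field
packing and the 64K guest page size are assumptions of this example, not
code from the patch):

#include <stdio.h>
#include <stdint.h>

/* Illustrative SIMP-style register: bit 0 = enable, bits 63:12 = 4K page number */
union synic_simp {
	uint64_t as_uint64;
	struct {
		uint64_t simp_enabled:1;
		uint64_t preserved:11;
		uint64_t base_simp_gpa:52;
	};
};

int main(void)
{
	uint64_t phys = 0x12340000;	/* physical address of the message page */
	union synic_simp simp = { 0 };

	simp.simp_enabled = 1;
	/* Correct: the field is in 4K units regardless of the guest PAGE_SIZE */
	simp.base_simp_gpa = phys >> 12;
	printf("as_uint64 = 0x%llx\n", (unsigned long long)simp.as_uint64);

	/* With a 64K PAGE_SHIFT the programmed page number would be off by 16x */
	printf("gpa field with >> 16: 0x%llx, with >> 12: 0x%llx\n",
	       (unsigned long long)(phys >> 16), (unsigned long long)(phys >> 12));
	return 0;
}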
Signed-off-by: Boqun Feng
---
 drivers/hv/hv.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/hv/hv.c b/drivers/hv/hv.c
index 7499079f4077..8ac8bbf5b5aa 100644
--- a/drivers/hv/hv.c
+++ b/drivers/hv/hv.c
@@ -165,7 +165,7 @@ void hv_synic_enable_regs(unsigned int cpu)
 	hv_get_simp(simp.as_uint64);
 	simp.simp_enabled = 1;
 	simp.base_simp_gpa = virt_to_phys(hv_cpu->synic_message_page)
-		>> PAGE_SHIFT;
+		>> HV_HYP_PAGE_SHIFT;
 
 	hv_set_simp(simp.as_uint64);
 
@@ -173,7 +173,7 @@ void hv_synic_enable_regs(unsigned int cpu)
 	hv_get_siefp(siefp.as_uint64);
 	siefp.siefp_enabled = 1;
 	siefp.base_siefp_gpa = virt_to_phys(hv_cpu->synic_event_page)
-		>> PAGE_SHIFT;
+		>> HV_HYP_PAGE_SHIFT;
 
 	hv_set_siefp(siefp.as_uint64);

From patchwork Wed Sep 2 03:01:01 2020
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 11749725
From: Boqun Feng
Subject: [RFC v2 05/11] Drivers: hv: vmbus: Move virt_to_hvpfn() to hyperv header
Date: Wed, 2 Sep 2020 11:01:01 +0800
Message-Id: <20200902030107.33380-6-boqun.feng@gmail.com>
In-Reply-To: <20200902030107.33380-1-boqun.feng@gmail.com>

There will be more places than just vmbus where we need to calculate the
Hyper-V page PFN from a virtual address, so move virt_to_hvpfn() to the
generic hyperv header.
Signed-off-by: Boqun Feng
---
 drivers/hv/channel.c   | 13 -------------
 include/linux/hyperv.h | 15 +++++++++++++++
 2 files changed, 15 insertions(+), 13 deletions(-)

diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
index 7c443fd567e4..74a8f49ab76a 100644
--- a/drivers/hv/channel.c
+++ b/drivers/hv/channel.c
@@ -22,19 +22,6 @@
 
 #include "hyperv_vmbus.h"
 
-static unsigned long virt_to_hvpfn(void *addr)
-{
-	phys_addr_t paddr;
-
-	if (is_vmalloc_addr(addr))
-		paddr = page_to_phys(vmalloc_to_page(addr)) +
-			offset_in_page(addr);
-	else
-		paddr = __pa(addr);
-
-	return paddr >> HV_HYP_PAGE_SHIFT;
-}
-
 /*
  * hv_gpadl_size - Return the real size of a gpadl, the size that Hyper-V uses
  *
diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
index 7d16dd28aa48..6f4831212979 100644
--- a/include/linux/hyperv.h
+++ b/include/linux/hyperv.h
@@ -14,6 +14,7 @@
 #include
+#include
 #include
 #include
 #include
@@ -23,6 +24,7 @@
 #include
 #include
 #include
+#include
 
 #define MAX_PAGE_BUFFER_COUNT			32
 #define MAX_MULTIPAGE_BUFFER_COUNT		32 /* 128K */
@@ -1672,4 +1674,17 @@ struct hyperv_pci_block_ops {
 
 extern struct hyperv_pci_block_ops hvpci_block_ops;
 
+static inline unsigned long virt_to_hvpfn(void *addr)
+{
+	phys_addr_t paddr;
+
+	if (is_vmalloc_addr(addr))
+		paddr = page_to_phys(vmalloc_to_page(addr)) +
+			offset_in_page(addr);
+	else
+		paddr = __pa(addr);
+
+	return paddr >> HV_HYP_PAGE_SHIFT;
+}
+
 #endif /* _HYPERV_H */

From patchwork Wed Sep 2 03:01:02 2020
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 11749717
From: Boqun Feng
Subject: [RFC v2 06/11] hv: hyperv.h: Introduce some hvpfn helper functions
Date: Wed, 2 Sep 2020 11:01:02 +0800
Message-Id: <20200902030107.33380-7-boqun.feng@gmail.com>
In-Reply-To: <20200902030107.33380-1-boqun.feng@gmail.com>

When a guest communicates with the hypervisor, it must use HV_HYP_PAGE to
calculate PFNs, so introduce a few hvpfn helper functions as counterparts
of the existing page helper functions. This is preparation for supporting
guests whose PAGE_SIZE is not 4k.
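For a concrete feel of how these helpers are meant to be used, a
standalone sketch follows (the two macros are copied from the patch;
HV_HYP_PAGE_MASK, the sample buffer address and length are assumptions of
the example):

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#define HV_HYP_PAGE_SHIFT	12
#define HV_HYP_PAGE_SIZE	(1UL << HV_HYP_PAGE_SHIFT)
#define HV_HYP_PAGE_MASK	(~(HV_HYP_PAGE_SIZE - 1))

/* Helpers as introduced by the patch */
#define offset_in_hvpage(ptr)	((unsigned long)(ptr) & ~HV_HYP_PAGE_MASK)
#define HVPFN_UP(x)		(((x) + HV_HYP_PAGE_SIZE - 1) >> HV_HYP_PAGE_SHIFT)

int main(void)
{
	/* A buffer that starts 0x100 bytes into a Hyper-V page, 9000 bytes long */
	uintptr_t buf = 0x7f0000001100UL;
	size_t len = 9000;

	/* Number of 4K Hyper-V pages the range [buf, buf + len) touches */
	unsigned long hvpages = HVPFN_UP(offset_in_hvpage(buf) + len);

	printf("offset in hv page: %lu\n", offset_in_hvpage(buf));
	printf("hv pages spanned : %lu\n", hvpages);
	return 0;
}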
Signed-off-by: Boqun Feng
---
 include/linux/hyperv.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
index 6f4831212979..54e84ba6b554 100644
--- a/include/linux/hyperv.h
+++ b/include/linux/hyperv.h
@@ -1687,4 +1687,8 @@ static inline unsigned long virt_to_hvpfn(void *addr)
 	return paddr >> HV_HYP_PAGE_SHIFT;
 }
 
+#define offset_in_hvpage(ptr) ((unsigned long)(ptr) & ~HV_HYP_PAGE_MASK)
+#define HVPFN_UP(x) (((x) + HV_HYP_PAGE_SIZE-1) >> HV_HYP_PAGE_SHIFT)
+#define page_to_hvpfn(page) ((page_to_pfn(page) << PAGE_SHIFT) >> HV_HYP_PAGE_SHIFT)
+
 #endif /* _HYPERV_H */

From patchwork Wed Sep 2 03:01:03 2020
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 11749719
From: Boqun Feng
Subject: [RFC v2 07/11] hv_netvsc: Use HV_HYP_PAGE_SIZE for Hyper-V communication
Date: Wed, 2 Sep 2020 11:01:03 +0800
Message-Id: <20200902030107.33380-8-boqun.feng@gmail.com>
In-Reply-To: <20200902030107.33380-1-boqun.feng@gmail.com>

When communicating with Hyper-V, HV_HYP_PAGE_SIZE should be used, since that
is the page size used by Hyper-V and Hyper-V expects all page-related data in
units of HV_HYP_PAGE_SIZE; for example, the "pfn" in hv_page_buffer is actually
the HV_HYP_PAGE (i.e. Hyper-V page) number. In order to support guests whose
page size is not 4k, make hv_netvsc always use HV_HYP_PAGE_SIZE when
communicating with Hyper-V.
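For illustration, a standalone model of the splitting that the reworked
fill_pg_buf() below performs; fill_pb() and struct pb_entry are stand-ins for
the kernel's fill_pg_buf() and hv_page_buffer, and the pfn, offset, and length
inputs are made-up values:

#include <stdio.h>

#define HV_HYP_PAGE_SHIFT	12
#define HV_HYP_PAGE_SIZE	(1UL << HV_HYP_PAGE_SHIFT)
#define HV_HYP_PAGE_MASK	(~(HV_HYP_PAGE_SIZE - 1))

struct pb_entry {	/* stand-in for the pfn/offset/len fields of hv_page_buffer */
	unsigned long pfn;
	unsigned int offset;
	unsigned int len;
};

/* Split (hvpfn, offset, len) into per-Hyper-V-page entries. */
static unsigned int fill_pb(unsigned long hvpfn, unsigned int offset,
			    unsigned int len, struct pb_entry *pb)
{
	unsigned int j = 0;

	hvpfn += offset >> HV_HYP_PAGE_SHIFT;
	offset &= ~HV_HYP_PAGE_MASK;

	while (len > 0) {
		unsigned int bytes = HV_HYP_PAGE_SIZE - offset;

		if (bytes > len)
			bytes = len;
		pb[j].pfn = hvpfn;
		pb[j].offset = offset;
		pb[j].len = bytes;

		offset += bytes;
		len -= bytes;
		if (offset == HV_HYP_PAGE_SIZE && len) {
			hvpfn++;
			offset = 0;
			j++;
		}
	}
	return j + 1;
}

int main(void)
{
	struct pb_entry pb[4];
	/* 6000 bytes starting 100 bytes into Hyper-V page 10 -> two entries. */
	unsigned int i, n = fill_pb(10, 100, 6000, pb);

	for (i = 0; i < n; i++)
		printf("pfn=%lu offset=%u len=%u\n", pb[i].pfn, pb[i].offset, pb[i].len);
	return 0;
}

On a 4k-page guest this produces the same entries as the old page-based code;
on a larger-page guest every entry still stays within one 4k Hyper-V page.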
Signed-off-by: Boqun Feng
---
 drivers/net/hyperv/netvsc.c       |  2 +-
 drivers/net/hyperv/netvsc_drv.c   | 46 +++++++++++++++----------------
 drivers/net/hyperv/rndis_filter.c | 12 ++++----
 3 files changed, 30 insertions(+), 30 deletions(-)

diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
index 41f5cf0bb997..1d6f2256da6b 100644
--- a/drivers/net/hyperv/netvsc.c
+++ b/drivers/net/hyperv/netvsc.c
@@ -794,7 +794,7 @@ static void netvsc_copy_to_send_buf(struct netvsc_device *net_device,
 	}
 
 	for (i = 0; i < page_count; i++) {
-		char *src = phys_to_virt(pb[i].pfn << PAGE_SHIFT);
+		char *src = phys_to_virt(pb[i].pfn << HV_HYP_PAGE_SHIFT);
 		u32 offset = pb[i].offset;
 		u32 len = pb[i].len;
 
diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
index 64b0a74c1523..61ea568e1ddf 100644
--- a/drivers/net/hyperv/netvsc_drv.c
+++ b/drivers/net/hyperv/netvsc_drv.c
@@ -373,32 +373,29 @@ static u16 netvsc_select_queue(struct net_device *ndev, struct sk_buff *skb,
 	return txq;
 }
 
-static u32 fill_pg_buf(struct page *page, u32 offset, u32 len,
+static u32 fill_pg_buf(unsigned long hvpfn, u32 offset, u32 len,
 		       struct hv_page_buffer *pb)
 {
 	int j = 0;
 
-	/* Deal with compound pages by ignoring unused part
-	 * of the page.
-	 */
-	page += (offset >> PAGE_SHIFT);
-	offset &= ~PAGE_MASK;
+	hvpfn += offset >> HV_HYP_PAGE_SHIFT;
+	offset = offset & ~HV_HYP_PAGE_MASK;
 
 	while (len > 0) {
 		unsigned long bytes;
 
-		bytes = PAGE_SIZE - offset;
+		bytes = HV_HYP_PAGE_SIZE - offset;
 		if (bytes > len)
 			bytes = len;
-		pb[j].pfn = page_to_pfn(page);
+		pb[j].pfn = hvpfn;
 		pb[j].offset = offset;
 		pb[j].len = bytes;
 
 		offset += bytes;
 		len -= bytes;
 
-		if (offset == PAGE_SIZE && len) {
-			page++;
+		if (offset == HV_HYP_PAGE_SIZE && len) {
+			hvpfn++;
 			offset = 0;
 			j++;
 		}
@@ -421,23 +418,26 @@ static u32 init_page_array(void *hdr, u32 len, struct sk_buff *skb,
 	 * 2. skb linear data
 	 * 3. skb fragment data
 	 */
-	slots_used += fill_pg_buf(virt_to_page(hdr),
-				  offset_in_page(hdr),
-				  len, &pb[slots_used]);
+	slots_used += fill_pg_buf(virt_to_hvpfn(hdr),
+				  offset_in_hvpage(hdr),
+				  len,
+				  &pb[slots_used]);
 
 	packet->rmsg_size = len;
 	packet->rmsg_pgcnt = slots_used;
 
-	slots_used += fill_pg_buf(virt_to_page(data),
-				  offset_in_page(data),
-				  skb_headlen(skb), &pb[slots_used]);
+	slots_used += fill_pg_buf(virt_to_hvpfn(data),
+				  offset_in_hvpage(data),
+				  skb_headlen(skb),
+				  &pb[slots_used]);
 
 	for (i = 0; i < frags; i++) {
 		skb_frag_t *frag = skb_shinfo(skb)->frags + i;
 
-		slots_used += fill_pg_buf(skb_frag_page(frag),
-					  skb_frag_off(frag),
-					  skb_frag_size(frag), &pb[slots_used]);
+		slots_used += fill_pg_buf(page_to_hvpfn(skb_frag_page(frag)),
+					  skb_frag_off(frag),
+					  skb_frag_size(frag),
+					  &pb[slots_used]);
 	}
 	return slots_used;
 }
@@ -453,8 +453,8 @@ static int count_skb_frag_slots(struct sk_buff *skb)
 		unsigned long offset = skb_frag_off(frag);
 
 		/* Skip unused frames from start of page */
-		offset &= ~PAGE_MASK;
-		pages += PFN_UP(offset + size);
+		offset &= ~HV_HYP_PAGE_MASK;
+		pages += HVPFN_UP(offset + size);
 	}
 	return pages;
 }
@@ -462,12 +462,12 @@
 static int netvsc_get_slots(struct sk_buff *skb)
 {
 	char *data = skb->data;
-	unsigned int offset = offset_in_page(data);
+	unsigned int offset = offset_in_hvpage(data);
 	unsigned int len = skb_headlen(skb);
 	int slots;
 	int frag_slots;
 
-	slots = DIV_ROUND_UP(offset + len, PAGE_SIZE);
+	slots = DIV_ROUND_UP(offset + len, HV_HYP_PAGE_SIZE);
 	frag_slots = count_skb_frag_slots(skb);
 	return slots + frag_slots;
 }
diff --git a/drivers/net/hyperv/rndis_filter.c b/drivers/net/hyperv/rndis_filter.c
index b81ceba38218..acc8d957bbfc 100644
--- a/drivers/net/hyperv/rndis_filter.c
+++ b/drivers/net/hyperv/rndis_filter.c
@@ -25,7 +25,7 @@
 
 static void rndis_set_multicast(struct work_struct *w);
 
-#define RNDIS_EXT_LEN PAGE_SIZE
+#define RNDIS_EXT_LEN HV_HYP_PAGE_SIZE
 struct rndis_request {
 	struct list_head list_ent;
 	struct completion wait_event;
@@ -215,18 +215,18 @@ static int rndis_filter_send_request(struct rndis_device *dev,
 		packet->page_buf_cnt = 1;
 
 		pb[0].pfn = virt_to_phys(&req->request_msg) >>
-			PAGE_SHIFT;
+			HV_HYP_PAGE_SHIFT;
 		pb[0].len = req->request_msg.msg_len;
 		pb[0].offset =
-			(unsigned long)&req->request_msg & (PAGE_SIZE - 1);
+			(unsigned long)&req->request_msg & (HV_HYP_PAGE_SIZE - 1);
 
 		/* Add one page_buf when request_msg crossing page boundary */
-		if (pb[0].offset + pb[0].len > PAGE_SIZE) {
+		if (pb[0].offset + pb[0].len > HV_HYP_PAGE_SIZE) {
 			packet->page_buf_cnt++;
-			pb[0].len = PAGE_SIZE -
+			pb[0].len = HV_HYP_PAGE_SIZE -
 				pb[0].offset;
 			pb[1].pfn = virt_to_phys((void *)&req->request_msg
-				+ pb[0].len) >> PAGE_SHIFT;
+				+ pb[0].len) >> HV_HYP_PAGE_SHIFT;
 			pb[1].offset = 0;
 			pb[1].len = req->request_msg.msg_len -
 				pb[0].len;

From patchwork Wed Sep 2 03:01:04 2020
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 11749763
From: Boqun Feng
Subject: [RFC v2 08/11] Input: hyperv-keyboard: Make ringbuffer at least take two pages
Date: Wed, 2 Sep 2020 11:01:04 +0800
Message-Id: <20200902030107.33380-9-boqun.feng@gmail.com>
In-Reply-To: <20200902030107.33380-1-boqun.feng@gmail.com>

When PAGE_SIZE > HV_HYP_PAGE_SIZE, the ringbuffer size needs to be at least
2 * PAGE_SIZE: one page for the header and at least one page for the data part
(because of the alignment requirement for double mapping). So make sure the
ringbuffer sizes are at least 2 * PAGE_SIZE when calling vmbus_open() to
establish the vmbus connection.
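A quick standalone check of the sizing expression used below; ring_size() is a
hypothetical helper that mirrors max(40 * 1024, 2 * PAGE_SIZE), and the 4k and
64k page sizes are just example guest configurations:

#include <stdio.h>

/* Mirrors max(40 * 1024, 2 * PAGE_SIZE) for two example guest page sizes. */
static unsigned long ring_size(unsigned long page_size)
{
	unsigned long legacy = 40 * 1024;
	unsigned long min_sz = 2 * page_size;

	return legacy > min_sz ? legacy : min_sz;
}

int main(void)
{
	printf("4k pages:  %lu bytes\n", ring_size(4096));	/* stays 40960 */
	printf("64k pages: %lu bytes\n", ring_size(65536));	/* grows to 131072 */
	return 0;
}

So the ring sizes are unchanged on 4k (and 16k) page guests and only grow once
2 * PAGE_SIZE exceeds 40 KiB.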
Signed-off-by: Boqun Feng
---
 drivers/input/serio/hyperv-keyboard.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/input/serio/hyperv-keyboard.c b/drivers/input/serio/hyperv-keyboard.c
index df4e9f6f4529..77ba57ba2691 100644
--- a/drivers/input/serio/hyperv-keyboard.c
+++ b/drivers/input/serio/hyperv-keyboard.c
@@ -75,8 +75,8 @@ struct synth_kbd_keystroke {
 
 #define HK_MAXIMUM_MESSAGE_SIZE 256
 
-#define KBD_VSC_SEND_RING_BUFFER_SIZE (40 * 1024)
-#define KBD_VSC_RECV_RING_BUFFER_SIZE (40 * 1024)
+#define KBD_VSC_SEND_RING_BUFFER_SIZE max(40 * 1024, 2 * PAGE_SIZE)
+#define KBD_VSC_RECV_RING_BUFFER_SIZE max(40 * 1024, 2 * PAGE_SIZE)
 
 #define XTKBD_EMUL0 0xe0
 #define XTKBD_EMUL1 0xe1

From patchwork Wed Sep 2 03:01:05 2020
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 11749739
X-Patchwork-Delegate: jikos@jikos.cz
From: Boqun Feng
Subject: [RFC v2 09/11] HID: hyperv: Make ringbuffer at least take two pages
Date: Wed, 2 Sep 2020 11:01:05 +0800
Message-Id: <20200902030107.33380-10-boqun.feng@gmail.com>
In-Reply-To: <20200902030107.33380-1-boqun.feng@gmail.com>

When PAGE_SIZE > HV_HYP_PAGE_SIZE, the ringbuffer size needs to be at least
2 * PAGE_SIZE: one page for the header and at least one page for the data part
(because of the alignment requirement for double mapping). So make sure the
ringbuffer sizes are at least 2 * PAGE_SIZE when calling vmbus_open() to
establish the vmbus connection.
Signed-off-by: Boqun Feng
Acked-by: Jiri Kosina
---
 drivers/hid/hid-hyperv.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/hid/hid-hyperv.c b/drivers/hid/hid-hyperv.c
index 0b6ee1dee625..dff032f17ad0 100644
--- a/drivers/hid/hid-hyperv.c
+++ b/drivers/hid/hid-hyperv.c
@@ -104,8 +104,8 @@ struct synthhid_input_report {
 
 #pragma pack(pop)
 
-#define INPUTVSC_SEND_RING_BUFFER_SIZE (40 * 1024)
-#define INPUTVSC_RECV_RING_BUFFER_SIZE (40 * 1024)
+#define INPUTVSC_SEND_RING_BUFFER_SIZE max(40 * 1024, 2 * PAGE_SIZE)
+#define INPUTVSC_RECV_RING_BUFFER_SIZE max(40 * 1024, 2 * PAGE_SIZE)
 
 enum pipe_prot_msg_type {

From patchwork Wed Sep 2 03:01:06 2020
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 11749741
From: Boqun Feng
Subject: [RFC v2 10/11] Driver: hv: util: Make ringbuffer at least take two pages
Date: Wed, 2 Sep 2020 11:01:06 +0800
Message-Id: <20200902030107.33380-11-boqun.feng@gmail.com>
In-Reply-To: <20200902030107.33380-1-boqun.feng@gmail.com>

When PAGE_SIZE > HV_HYP_PAGE_SIZE, the ringbuffer size needs to be at least
2 * PAGE_SIZE: one page for the header and at least one page for the data part
(because of the alignment requirement for double mapping). So make sure the
ringbuffer sizes are at least 2 * PAGE_SIZE when calling vmbus_open() to
establish the vmbus connection.

Signed-off-by: Boqun Feng
---
 drivers/hv/hv_util.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/drivers/hv/hv_util.c b/drivers/hv/hv_util.c
index 92ee0fe4c919..73a77bead2be 100644
--- a/drivers/hv/hv_util.c
+++ b/drivers/hv/hv_util.c
@@ -461,6 +461,14 @@ static void heartbeat_onchannelcallback(void *context)
 	}
 }
 
+/*
+ * The size of each ring should be at least 2 * PAGE_SIZE, because we need one
+ * page for the header and at least another page (because of the alignment
+ * requirement for double mapping) for data part.
+ */
+#define HV_UTIL_RING_SEND_SIZE max(4 * HV_HYP_PAGE_SIZE, 2 * PAGE_SIZE)
+#define HV_UTIL_RING_RECV_SIZE max(4 * HV_HYP_PAGE_SIZE, 2 * PAGE_SIZE)
+
 static int util_probe(struct hv_device *dev,
 		      const struct hv_vmbus_device_id *dev_id)
 {
@@ -491,8 +499,8 @@ static int util_probe(struct hv_device *dev,
 
 	hv_set_drvdata(dev, srv);
 
-	ret = vmbus_open(dev->channel, 4 * HV_HYP_PAGE_SIZE,
-			 4 * HV_HYP_PAGE_SIZE, NULL, 0, srv->util_cb,
+	ret = vmbus_open(dev->channel, HV_UTIL_RING_SEND_SIZE,
+			 HV_UTIL_RING_RECV_SIZE, NULL, 0, srv->util_cb,
 			 dev->channel);
 	if (ret)
 		goto error;
@@ -551,8 +559,8 @@ static int util_resume(struct hv_device *dev)
 		return ret;
 	}
 
-	ret = vmbus_open(dev->channel, 4 * HV_HYP_PAGE_SIZE,
-			 4 * HV_HYP_PAGE_SIZE, NULL, 0, srv->util_cb,
+	ret = vmbus_open(dev->channel, HV_UTIL_RING_SEND_SIZE,
+			 HV_UTIL_RING_RECV_SIZE, NULL, 0, srv->util_cb,
 			 dev->channel);
 	return ret;
 }

From patchwork Wed Sep 2 03:01:07 2020
X-Patchwork-Submitter: Boqun Feng
X-Patchwork-Id: 11749729
From: Boqun Feng
Subject: [RFC v2 11/11] scsi: storvsc: Support PAGE_SIZE larger than 4K
Date: Wed, 2 Sep 2020 11:01:07 +0800
Message-Id: <20200902030107.33380-12-boqun.feng@gmail.com>
In-Reply-To: <20200902030107.33380-1-boqun.feng@gmail.com>

Hyper-V always uses a 4k page size (HV_HYP_PAGE_SIZE), so when communicating
with Hyper-V, a guest should always use HV_HYP_PAGE_SIZE as the unit for
page-related data. For storvsc, that data is the vmbus_packet_mpb_array. And
since scsi_cmnd uses an sglist of pages (in units of PAGE_SIZE), we need to
convert the pages in the sglist of a scsi_cmnd into Hyper-V pages in the
vmbus_packet_mpb_array. This patch does the conversion by dividing the pages
in the sglist into Hyper-V pages; the offset and indexes in
vmbus_packet_mpb_array are recalculated accordingly.
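An illustrative, userspace-only walk-through of the PAGE to HV_HYP_PAGE
expansion described above, for a single scatterlist entry that fits in one
guest page; the 64k guest page, hvpfn_base, sg_offset, and length are assumed
example values rather than anything taken from the patch:

#include <stdio.h>

#define HV_HYP_PAGE_SHIFT	12
#define HV_HYP_PAGE_SIZE	(1UL << HV_HYP_PAGE_SHIFT)
#define HV_HYP_PAGE_MASK	(~(HV_HYP_PAGE_SIZE - 1))
#define HVPFN_UP(x)		(((x) + HV_HYP_PAGE_SIZE - 1) >> HV_HYP_PAGE_SHIFT)

int main(void)
{
	/* Assumed example: a 64k guest page whose first Hyper-V pfn is 0x100. */
	unsigned long hvpfn_base = 0x100;	/* would come from page_to_hvpfn(sg_page(sgl)) */
	unsigned long sg_offset = 0x5100;	/* data starts 0x5100 bytes into the guest page */
	unsigned long length = 0x3000;		/* bytes to transfer */

	unsigned long offset = sg_offset & ~HV_HYP_PAGE_MASK;		/* offset inside a 4k page */
	unsigned long hvpg_idx = sg_offset >> HV_HYP_PAGE_SHIFT;	/* first 4k sub-page used */
	unsigned long hvpg_count = HVPFN_UP(offset + length);		/* 4k pages to describe */
	unsigned long j;

	printf("range.offset = 0x%lx, %lu Hyper-V pages:\n", offset, hvpg_count);
	for (j = 0; j < hvpg_count; j++)
		printf("  pfn_array[%lu] = 0x%lx\n", j, hvpfn_base + hvpg_idx + j);
	return 0;
}

The loop in the actual patch additionally walks sg_next() and resets hvpg_idx
to 0 for each subsequent guest page, as shown in the diff below.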
Signed-off-by: Boqun Feng
---
 drivers/scsi/storvsc_drv.c | 60 ++++++++++++++++++++++++++++++++++----
 1 file changed, 54 insertions(+), 6 deletions(-)

diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
index 8f5f5dc863a4..3f6610717d4e 100644
--- a/drivers/scsi/storvsc_drv.c
+++ b/drivers/scsi/storvsc_drv.c
@@ -1739,23 +1739,71 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
 	payload_sz = sizeof(cmd_request->mpb);
 
 	if (sg_count) {
-		if (sg_count > MAX_PAGE_BUFFER_COUNT) {
+		unsigned int hvpg_idx = 0;
+		unsigned int j = 0;
+		unsigned long hvpg_offset = sgl->offset & ~HV_HYP_PAGE_MASK;
+		unsigned int hvpg_count = HVPFN_UP(hvpg_offset + length);
 
-			payload_sz = (sg_count * sizeof(u64) +
+		if (hvpg_count > MAX_PAGE_BUFFER_COUNT) {
+
+			payload_sz = (hvpg_count * sizeof(u64) +
 				      sizeof(struct vmbus_packet_mpb_array));
 			payload = kzalloc(payload_sz, GFP_ATOMIC);
 			if (!payload)
 				return SCSI_MLQUEUE_DEVICE_BUSY;
 		}
 
+		/*
+		 * sgl is a list of PAGEs, and payload->range.pfn_array
+		 * expects the page number in the unit of HV_HYP_PAGE_SIZE (the
+		 * page size that Hyper-V uses, so here we need to divide PAGEs
+		 * into HV_HYP_PAGE in case that PAGE_SIZE > HV_HYP_PAGE_SIZE.
+		 */
 		payload->range.len = length;
-		payload->range.offset = sgl[0].offset;
+		payload->range.offset = sgl[0].offset & ~HV_HYP_PAGE_MASK;
+		hvpg_idx = sgl[0].offset >> HV_HYP_PAGE_SHIFT;
 
 		cur_sgl = sgl;
-		for (i = 0; i < sg_count; i++) {
-			payload->range.pfn_array[i] =
-				page_to_pfn(sg_page((cur_sgl)));
+		for (i = 0, j = 0; i < sg_count; i++) {
+			/*
+			 * "PAGE_SIZE / HV_HYP_PAGE_SIZE - hvpg_idx" is the #
+			 * of HV_HYP_PAGEs in the current PAGE.
+			 *
+			 * "hvpg_count - j" is the # of unhandled HV_HYP_PAGEs.
+			 *
+			 * As shown in the following, the minimal of both is
+			 * the # of HV_HYP_PAGEs, we need to handle in this
+			 * PAGE.
+			 *
+			 * |------------------ PAGE ----------------------|
+			 * |   PAGE_SIZE / HV_HYP_PAGE_SIZE in total      |
+			 * |hvpg|hvpg| ...                 |hvpg|... |hvpg|
+			 *      ^                                        ^
+			 *   hvpg_idx                                    |
+			 *      ^                                        |
+			 *      +---(hvpg_count - j)--+
+			 *
+			 * or
+			 *
+			 * |------------------ PAGE ----------------------|
+			 * |   PAGE_SIZE / HV_HYP_PAGE_SIZE in total      |
+			 * |hvpg|hvpg| ...                 |hvpg|... |hvpg|
+			 *      ^                                        ^
+			 *   hvpg_idx                                    |
+			 *      ^                                        |
+			 *      +---(hvpg_count - j)------------------------+
+			 */
+			unsigned int nr_hvpg = min((unsigned int)(PAGE_SIZE / HV_HYP_PAGE_SIZE) - hvpg_idx,
+						   hvpg_count - j);
+			unsigned int k;
+
+			for (k = 0; k < nr_hvpg; k++) {
+				payload->range.pfn_array[j] =
+					page_to_hvpfn(sg_page((cur_sgl))) + hvpg_idx + k;
+				j++;
+			}
 			cur_sgl = sg_next(cur_sgl);
+			hvpg_idx = 0;
 		}
 	}