From patchwork Sat Dec 17 19:42:04 2022
X-Patchwork-Submitter: Arseniy Krasnov
X-Patchwork-Id: 13075920
From: Arseniy Krasnov
To: Stefano Garzarella, Stefan Hajnoczi, "edumazet@google.com", "David S. Miller", Jakub Kicinski, Paolo Abeni
CC: "linux-kernel@vger.kernel.org", "netdev@vger.kernel.org", "virtualization@lists.linux-foundation.org", "kvm@vger.kernel.org", kernel, Krasnov Arseniy, Arseniy Krasnov
Subject: [RFC PATCH v1 0/2] virtio/vsock: fix mutual rx/tx hungup
Date: Sat, 17 Dec 2022 19:42:04 +0000
Message-ID: <39b2e9fd-601b-189d-39a9-914e5574524c@sberdevices.ru>
X-Mailing-List: kvm@vger.kernel.org

Hello, it seems I have found a strange thing (maybe a bug) where the sender ('tx' below) and the receiver ('rx' below) can get stuck forever. A potential fix is in the first patch; the second patch contains a reproducer based on the vsock test suite.

The reproducer is simple: tx just sends data to rx with the 'write()' syscall, rx dequeues it with the 'read()' syscall and waits in 'poll()'. I run the server in the host and the client in the guest.

rx side parameters:
1) SO_VM_SOCKETS_BUFFER_SIZE is 256Kb (i.e. the default).
2) SO_RCVLOWAT is 128Kb.
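
For reference, below is a minimal sketch of the rx side of the reproducer ('fd' is assumed to be an already connected AF_VSOCK stream socket; the connection setup, error handling and names here are simplifications for this letter, the real test lives in the second patch):

#include <sys/socket.h>
#include <linux/vm_sockets.h>
#include <poll.h>
#include <unistd.h>

static void rx_loop(int fd)
{
	unsigned long long buf_size = 256 * 1024;  /* SO_VM_SOCKETS_BUFFER_SIZE (default) */
	int lowat = 128 * 1024;                    /* SO_RCVLOWAT */
	char buf[64 * 1024];                       /* rx reads 64Kb per read() */
	struct pollfd pfd = { .fd = fd, .events = POLLIN };

	setsockopt(fd, AF_VSOCK, SO_VM_SOCKETS_BUFFER_SIZE,
		   &buf_size, sizeof(buf_size));
	setsockopt(fd, SOL_SOCKET, SO_RCVLOWAT, &lowat, sizeof(lowat));

	for (;;) {
		/* At step 10 below rx_bytes < SO_RCVLOWAT and no credit
		 * update was sent to tx, so this poll() sleeps forever. */
		if (poll(&pfd, 1, -1) <= 0)
			break;
		if (read(fd, buf, sizeof(buf)) <= 0)
			break;
	}
}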
What happens in the reproducer, step by step:

1) tx tries to send 256Kb + 1 byte (in a single 'write()').
2) tx sends 256Kb, the data reaches rx (rx_bytes == 256Kb).
3) tx waits for space in 'write()' to send the last 1 byte.
4) rx does poll(): (rx_bytes >= rcvlowat) 256Kb >= 128Kb, POLLIN is set.
5) rx reads 64Kb, a credit update is not sent due to (*).
6) rx does poll(): (rx_bytes >= rcvlowat) 192Kb >= 128Kb, POLLIN is set.
7) rx reads 64Kb, a credit update is not sent due to (*).
8) rx does poll(): (rx_bytes >= rcvlowat) 128Kb >= 128Kb, POLLIN is set.
9) rx reads 64Kb, a credit update is not sent due to (*).
10) rx does poll(): (rx_bytes < rcvlowat) 64Kb < 128Kb, rx sleeps in poll().

(*) is the optimization in 'virtio_transport_stream_do_dequeue()' which sends OP_CREDIT_UPDATE only when the remaining free space is smaller than VIRTIO_VSOCK_MAX_PKT_BUF_SIZE.

Now the tx side waits for space inside write() and the rx side waits in poll() for 'rx_bytes' to reach the SO_RCVLOWAT value. Both sides will wait forever.

I think a possible fix is to send a credit update not only when the free space becomes small, but also when the number of bytes left in the receive queue is smaller than SO_RCVLOWAT and thus not enough to wake up the sleeping reader (a rough sketch of this check is at the end of this letter). I'm not sure about the correctness of this idea, but in any case I think the problem described above exists. What do you think?

The patchset was rebased and tested on the skbuff v7 patch from Bobby Eshleman:
https://lore.kernel.org/netdev/20221213192843.421032-1-bobby.eshleman@bytedance.com/

Arseniy Krasnov (2):
  virtio/vsock: send credit update depending on SO_RCVLOWAT
  vsock_test: mutual hungup reproducer

 net/vmw_vsock/virtio_transport_common.c |  9 +++-
 tools/testing/vsock/vsock_test.c        | 78 +++++++++++++++++++++++++++++++++
 2 files changed, 85 insertions(+), 2 deletions(-)
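
For illustration only, here is the rough sketch of the check mentioned above, as it could look around the existing free space check in 'virtio_transport_stream_do_dequeue()'. The helper name 'need_credit_update' is made up for this letter and this is not the exact diff from the first patch:

#include <linux/virtio_vsock.h>
#include <net/sock.h>

/* Hypothetical helper, only to show the idea: should the rx side send
 * OP_CREDIT_UPDATE after dequeueing data? */
static bool need_credit_update(struct virtio_vsock_sock *vvs, struct sock *sk)
{
	u32 free_space = vvs->buf_alloc - (vvs->fwd_cnt - vvs->last_fwd_cnt);

	/* Existing condition: free space in the rx buffer became small. */
	if (free_space < VIRTIO_VSOCK_MAX_PKT_BUF_SIZE)
		return true;

	/* Proposed addition: the bytes left in the rx queue are below
	 * SO_RCVLOWAT, so poll() will not report POLLIN and the reader
	 * goes to sleep; tx must learn about the freed space now. */
	if (vvs->rx_bytes < sk->sk_rcvlowat)
		return true;

	return false;
}

In the real code the credit update would still go out through the existing virtio_transport_send_credit_update() path; only the condition that triggers it changes.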