From patchwork Tue Jan 12 15:31:23 2021
X-Patchwork-Submitter: Jeff Hostetler
X-Patchwork-Id: 12013783
From: Jeff Hostetler
Date: Tue, 12 Jan 2021 15:31:23 +0000
Subject: [PATCH 01/10] pkt-line: use stack rather than static buffer in
 packet_write_gently()
To: git@vger.kernel.org
Cc: Jeff Hostetler, Jeff Hostetler

Teach packet_write_gently() to use a stack buffer rather than a
static buffer when composing the packet line message.  This helps
get us ready for threaded operations.

Signed-off-by: Jeff Hostetler
---
 pkt-line.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pkt-line.c b/pkt-line.c
index d633005ef74..98439a2fed0 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -196,7 +196,7 @@ int packet_write_fmt_gently(int fd, const char *fmt, ...)
 
 static int packet_write_gently(const int fd_out, const char *buf, size_t size)
 {
-	static char packet_write_buffer[LARGE_PACKET_MAX];
+	char packet_write_buffer[LARGE_PACKET_MAX];
 	size_t packet_size;
 
 	if (size > sizeof(packet_write_buffer) - 4)
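To illustrate why the static buffer is a hazard once pkt-line is
called from multiple threads, here is a minimal standalone sketch.
It is not part of the patch; the helper and the message framing are
invented for illustration only:

#include <pthread.h>
#include <stdio.h>
#include <string.h>

static char shared_buf[64];	/* analogous to the old static buffer */

static void *writer(void *arg)
{
	const char *msg = arg;
	char stack_buf[64];	/* analogous to the new stack buffer */

	/* Racy: both threads format into the SAME storage, so one
	 * thread can clobber the packet another is still sending. */
	snprintf(shared_buf, sizeof(shared_buf), "%04x%s",
		 (unsigned)(4 + strlen(msg)), msg);

	/* Safe: each invocation gets its own storage. */
	snprintf(stack_buf, sizeof(stack_buf), "%04x%s",
		 (unsigned)(4 + strlen(msg)), msg);
	printf("%s\n", stack_buf);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, writer, "hello");
	pthread_create(&t2, NULL, writer, "world");
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}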
From patchwork Tue Jan 12 15:31:24 2021
X-Patchwork-Submitter: Johannes Schindelin
X-Patchwork-Id: 12013789
From: Johannes Schindelin
Date: Tue, 12 Jan 2021 15:31:24 +0000
Subject: [PATCH 02/10] pkt-line: (optionally) libify the packet readers
To: git@vger.kernel.org
Cc: Jeff Hostetler, Johannes Schindelin

So far, the (possibly indirect) callers of `get_packet_data()` can ask
that function to return an error instead of `die()`ing upon
end-of-file.  However, random read errors will still cause the process
to die.  So let's introduce an explicit option to tell the packet
reader machinery to please be nice and only return an error.

This change prepares pkt-line for use by long-running daemon
processes.  Such processes should be able to serve multiple concurrent
clients and survive random IO errors.  If there is an error on one
connection, a daemon should be able to drop that connection and
continue serving existing and future connections.

This ability will be used by a Git-aware "Internal FSMonitor" feature
in a later patch series.

Signed-off-by: Johannes Schindelin
---
 pkt-line.c | 19 +++++++++++++++++--
 pkt-line.h |  4 ++++
 2 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/pkt-line.c b/pkt-line.c
index 98439a2fed0..5c2d86a2f60 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -298,8 +298,11 @@ static int get_packet_data(int fd, char **src_buf, size_t *src_size,
 		*src_size -= ret;
 	} else {
 		ret = read_in_full(fd, dst, size);
-		if (ret < 0)
+		if (ret < 0) {
+			if (options & PACKET_READ_NEVER_DIE)
+				return error_errno(_("read error"));
 			die_errno(_("read error"));
+		}
 	}
 
 	/* And complain if we didn't get enough bytes to satisfy the read. */
@@ -307,6 +310,8 @@ static int get_packet_data(int fd, char **src_buf, size_t *src_size,
 		if (options & PACKET_READ_GENTLE_ON_EOF)
 			return -1;
 
+		if (options & PACKET_READ_NEVER_DIE)
+			return error(_("the remote end hung up unexpectedly"));
 		die(_("the remote end hung up unexpectedly"));
 	}
 
@@ -335,6 +340,9 @@ enum packet_read_status packet_read_with_status(int fd, char **src_buffer,
 
 	len = packet_length(linelen);
 	if (len < 0) {
+		if (options & PACKET_READ_NEVER_DIE)
+			return error(_("protocol error: bad line length "
+				       "character: %.4s"), linelen);
 		die(_("protocol error: bad line length character: %.4s"),
 		    linelen);
 	} else if (!len) {
 		packet_trace("0000", 4, 0);
@@ -349,12 +357,19 @@ enum packet_read_status packet_read_with_status(int fd, char **src_buffer,
 		*pktlen = 0;
 		return PACKET_READ_RESPONSE_END;
 	} else if (len < 4) {
+		if (options & PACKET_READ_NEVER_DIE)
+			return error(_("protocol error: bad line length %d"),
+				     len);
 		die(_("protocol error: bad line length %d"), len);
 	}
 
 	len -= 4;
-	if ((unsigned)len >= size)
+	if ((unsigned)len >= size) {
+		if (options & PACKET_READ_NEVER_DIE)
+			return error(_("protocol error: bad line length %d"),
+				     len);
 		die(_("protocol error: bad line length %d"), len);
+	}
 
 	if (get_packet_data(fd, src_buffer, src_len, buffer, len, options) < 0) {
 		*pktlen = -1;
diff --git a/pkt-line.h b/pkt-line.h
index 8c90daa59ef..c1fa245faf8 100644
--- a/pkt-line.h
+++ b/pkt-line.h
@@ -68,10 +68,14 @@ int write_packetized_from_buf(const char *src_in, size_t len, int fd_out);
  *
  * If options contains PACKET_READ_DIE_ON_ERR_PACKET, it dies when it sees an
  * ERR packet.
+ *
+ * With `PACKET_READ_NEVER_DIE`, no errors are allowed to trigger die() (except
+ * an ERR packet, when `PACKET_READ_DIE_ON_ERR_PACKET` is in effect).
  */
 #define PACKET_READ_GENTLE_ON_EOF (1u<<0)
 #define PACKET_READ_CHOMP_NEWLINE (1u<<1)
 #define PACKET_READ_DIE_ON_ERR_PACKET (1u<<2)
+#define PACKET_READ_NEVER_DIE (1u<<3)
 
 int packet_read(int fd, char **src_buffer, size_t *src_len, char *buffer,
 		unsigned size, int options);
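As a usage illustration of the new flag, a per-connection read loop in
a long-running daemon might look like the following minimal sketch
(the serve_one_client() helper and the fd plumbing are invented for
the example; only the packet_read() call and flags come from the
patch):

#include "pkt-line.h"

static int serve_one_client(int client_fd)
{
	char buf[LARGE_PACKET_MAX];
	int len;

	for (;;) {
		len = packet_read(client_fd, NULL, NULL, buf, sizeof(buf),
				  PACKET_READ_GENTLE_ON_EOF |
				  PACKET_READ_NEVER_DIE);
		if (len < 0)
			return -1; /* drop this client, keep the daemon alive */
		if (!len)
			break; /* flush packet: request is complete */
		/* ... handle `len` bytes of request payload in buf ... */
	}
	return 0;
}

With PACKET_READ_NEVER_DIE, a read error on one connection fails only
that connection instead of die()ing and taking the whole daemon down.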
From patchwork Tue Jan 12 15:31:25 2021
X-Patchwork-Submitter: Johannes Schindelin
X-Patchwork-Id: 12013793
From: Johannes Schindelin
Date: Tue, 12 Jan 2021 15:31:25 +0000
Subject: [PATCH 03/10] pkt-line: optionally skip the flush packet in
 write_packetized_from_buf()
To: git@vger.kernel.org
Cc: Jeff Hostetler, Johannes Schindelin

This function currently has only one caller: `apply_multi_file_filter()`
in `convert.c`.  That caller wants a flush packet to be written after
writing the payload.

However, we are about to introduce a user that wants to write many
packets before a final flush packet, so let's extend this function to
prepare for that scenario.

Signed-off-by: Johannes Schindelin
---
 convert.c  | 2 +-
 pkt-line.c | 5 +++--
 pkt-line.h | 3 ++-
 3 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/convert.c b/convert.c
index ee360c2f07c..3f396a9b288 100644
--- a/convert.c
+++ b/convert.c
@@ -886,7 +886,7 @@ static int apply_multi_file_filter(const char *path, const char *src, size_t len
 	if (fd >= 0)
 		err = write_packetized_from_fd(fd, process->in);
 	else
-		err = write_packetized_from_buf(src, len, process->in);
+		err = write_packetized_from_buf(src, len, process->in, 1);
 	if (err)
 		goto done;
 
diff --git a/pkt-line.c b/pkt-line.c
index 5c2d86a2f60..ef83439b9ee 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -261,7 +261,8 @@ int write_packetized_from_fd(int fd_in, int fd_out)
 	return err;
 }
 
-int write_packetized_from_buf(const char *src_in, size_t len, int fd_out)
+int write_packetized_from_buf(const char *src_in, size_t len, int fd_out,
+			      int flush_at_end)
 {
 	int err = 0;
 	size_t bytes_written = 0;
@@ -277,7 +278,7 @@ int write_packetized_from_buf(const char *src_in, size_t len, int fd_out)
 		err = packet_write_gently(fd_out, src_in + bytes_written, bytes_to_write);
 		bytes_written += bytes_to_write;
 	}
-	if (!err)
+	if (!err && flush_at_end)
 		err = packet_flush_gently(fd_out);
 	return err;
 }
diff --git a/pkt-line.h b/pkt-line.h
index c1fa245faf8..5b7a0fb8510 100644
--- a/pkt-line.h
+++ b/pkt-line.h
@@ -33,7 +33,8 @@ void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len);
 int packet_flush_gently(int fd);
 int packet_write_fmt_gently(int fd, const char *fmt, ...) __attribute__((format (printf, 2, 3)));
 int write_packetized_from_fd(int fd_in, int fd_out);
-int write_packetized_from_buf(const char *src_in, size_t len, int fd_out);
+int write_packetized_from_buf(const char *src_in, size_t len, int fd_out,
+			      int flush_at_end);
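For example, a caller that wants to stream several buffers as packets
and emit only a single terminating flush packet could now do this
(a minimal sketch; the send_chunks() helper is invented for the
example):

#include "pkt-line.h"

static int send_chunks(int fd, const char **chunks, size_t *lens, int n)
{
	int i;

	/* intermediate chunks: suppress the flush packet */
	for (i = 0; i < n - 1; i++)
		if (write_packetized_from_buf(chunks[i], lens[i], fd, 0) < 0)
			return -1;

	/* final chunk: ask for the terminating flush packet */
	return write_packetized_from_buf(chunks[n - 1], lens[n - 1], fd, 1);
}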
From patchwork Tue Jan 12 15:31:26 2021
X-Patchwork-Submitter: Johannes Schindelin
X-Patchwork-Id: 12013787
From: Johannes Schindelin
Date: Tue, 12 Jan 2021 15:31:26 +0000
Subject: [PATCH 04/10] pkt-line: accept additional options in
 read_packetized_to_strbuf()
To: git@vger.kernel.org
Cc: Jeff Hostetler, Johannes Schindelin

The `read_packetized_to_strbuf()` function reads packets into a strbuf
until a flush packet has been received.

So far, it has only one caller: `apply_multi_file_filter()` in
`convert.c`.  This caller really only needs the
`PACKET_READ_GENTLE_ON_EOF` option to be passed to `packet_read()`
(which makes sense in the scenario where packets should be read until a
flush packet is received).

We are about to introduce a caller that wants to pass other options
through to `packet_read()`, so let's extend the function signature
accordingly.

Signed-off-by: Johannes Schindelin
---
 convert.c  | 2 +-
 pkt-line.c | 4 ++--
 pkt-line.h | 6 +++++-
 3 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/convert.c b/convert.c
index 3f396a9b288..175c5cd51d5 100644
--- a/convert.c
+++ b/convert.c
@@ -903,7 +903,7 @@ static int apply_multi_file_filter(const char *path, const char *src, size_t len
 	if (err)
 		goto done;
 
-	err = read_packetized_to_strbuf(process->out, &nbuf) < 0;
+	err = read_packetized_to_strbuf(process->out, &nbuf, 0) < 0;
 	if (err)
 		goto done;
 
diff --git a/pkt-line.c b/pkt-line.c
index ef83439b9ee..615211819cd 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -437,7 +437,7 @@ char *packet_read_line_buf(char **src, size_t *src_len, int *dst_len)
 	return packet_read_line_generic(-1, src, src_len, dst_len);
 }
 
-ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out)
+ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out, int options)
 {
 	int packet_len;
 
@@ -453,7 +453,7 @@ ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out)
 			 * that there is already room for the extra byte.
 			 */
 			sb_out->buf + sb_out->len, LARGE_PACKET_DATA_MAX+1,
-			PACKET_READ_GENTLE_ON_EOF);
+			options | PACKET_READ_GENTLE_ON_EOF);
 		if (packet_len <= 0)
 			break;
 		sb_out->len += packet_len;
diff --git a/pkt-line.h b/pkt-line.h
index 5b7a0fb8510..02554a20a6c 100644
--- a/pkt-line.h
+++ b/pkt-line.h
@@ -135,8 +135,12 @@ char *packet_read_line_buf(char **src_buf, size_t *src_len, int *size);
 
 /*
  * Reads a stream of variable sized packets until a flush packet is detected.
+ *
+ * The options are augmented by PACKET_READ_GENTLE_ON_EOF and passed to
+ * packet_read.
  */
-ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out);
+ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out,
+				  int options);
 
 /*
  * Receive multiplexed output stream over git native protocol.
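Combined with PACKET_READ_NEVER_DIE from patch 02, a daemon can now
collect a complete flush-terminated message without risking die().
A minimal sketch (the read_request() helper is invented for the
example):

#include "pkt-line.h"
#include "strbuf.h"

static int read_request(int fd, struct strbuf *out)
{
	/*
	 * read_packetized_to_strbuf() adds PACKET_READ_GENTLE_ON_EOF
	 * itself; we pass PACKET_READ_NEVER_DIE through to packet_read()
	 * so a misbehaving peer cannot kill the process.
	 */
	if (read_packetized_to_strbuf(fd, out, PACKET_READ_NEVER_DIE) < 0)
		return -1; /* IO/protocol error: drop this connection */
	return 0;
}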
From patchwork Tue Jan 12 15:31:27 2021
X-Patchwork-Submitter: Jeff Hostetler
X-Patchwork-Id: 12013791
From: Jeff Hostetler
Date: Tue, 12 Jan 2021 15:31:27 +0000
Subject: [PATCH 05/10] simple-ipc: design documentation for new IPC mechanism
To: git@vger.kernel.org
Cc: Jeff Hostetler, Jeff Hostetler

Brief design documentation for a new IPC mechanism allowing a
foreground Git client to talk with an existing daemon process at a
known location using a named pipe or unix domain socket.

Signed-off-by: Johannes Schindelin
Signed-off-by: Jeff Hostetler
---
 Documentation/technical/api-simple-ipc.txt | 31 ++++++++++++++++++++++
 1 file changed, 31 insertions(+)
 create mode 100644 Documentation/technical/api-simple-ipc.txt

diff --git a/Documentation/technical/api-simple-ipc.txt b/Documentation/technical/api-simple-ipc.txt
new file mode 100644
index 00000000000..920994a69d3
--- /dev/null
+++ b/Documentation/technical/api-simple-ipc.txt
@@ -0,0 +1,31 @@
+simple-ipc API
+==============
+
+The simple-ipc API is used to send an IPC message and response between
+a (presumably) foreground Git client process and a background server or
+daemon process.  The server process must already be running.  Multiple
+client processes can simultaneously communicate with the server
+process.
+
+Communication occurs over a named pipe on Windows and a Unix domain
+socket on other platforms.  Clients and the server rendezvous at a
+previously agreed-to application-specific pathname (which is outside
+the scope of this design).
+
+This IPC mechanism differs from the existing `sub-process.c` model
+(Documentation/technical/long-running-process-protocol.txt) used by
+applications like Git-LFS, because the server is assumed to be a very
+long-running system service.  In contrast, a "sub-process model process"
+is started with the foreground process and exits when the foreground
+process terminates.  How the server is started is also outside the
+scope of the IPC mechanism.
+
+The IPC protocol consists of a single request message from the client
+and an optional response message from the server.  For simplicity,
+pkt-line routines (Documentation/technical/protocol-common.txt) are
+used to hide chunking and buffering concerns.  Each side terminates
+its message with a flush packet.
+
+The actual format of the client and server messages is application
+specific.  The IPC layer transmits and receives an opaque buffer without
+any concern for the content within.
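Concretely, one simple-ipc round trip at the pkt-line level could look
like this client-side sketch, built only from the pkt-line routines
extended earlier in this series (the ipc_round_trip() helper is
invented here; the real client entry points arrive in the next patch):

#include "pkt-line.h"
#include "strbuf.h"
#include <string.h>

static int ipc_round_trip(int fd, const char *request,
			  struct strbuf *response)
{
	/* opaque request payload plus the terminating flush packet */
	if (write_packetized_from_buf(request, strlen(request), fd, 1) < 0)
		return -1;

	/* collect the (optional) response until the server's flush */
	if (read_packetized_to_strbuf(fd, response,
				      PACKET_READ_NEVER_DIE) < 0)
		return -1;
	return 0;
}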
From patchwork Tue Jan 12 15:31:28 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Hostetler X-Patchwork-Id: 12013803 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 89F67C4332B for ; Tue, 12 Jan 2021 15:33:14 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 4FE4223131 for ; Tue, 12 Jan 2021 15:33:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2391553AbhALPdF (ORCPT ); Tue, 12 Jan 2021 10:33:05 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50472 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2391418AbhALPc4 (ORCPT ); Tue, 12 Jan 2021 10:32:56 -0500 Received: from mail-wm1-x329.google.com (mail-wm1-x329.google.com [IPv6:2a00:1450:4864:20::329]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 98B51C0617A3 for ; Tue, 12 Jan 2021 07:31:41 -0800 (PST) Received: by mail-wm1-x329.google.com with SMTP id y187so2493913wmd.3 for ; Tue, 12 Jan 2021 07:31:41 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=h+PAnlOeSqK84W0ERQY/gCR4c55jIsBF1I+soFiWDhY=; b=fa4H7ri5c++pE+VkYEwUjCHOgwABX1ll5rsi7cQwn1XcZsiwDy4rSLh8v9AbdXHRTn cUbcSlBCayZxDZEtR/1X4heTnQwpo5YCKPlKH/0n/+SJXKgFyYqbGdNLuDQfukoWvNnt eSbUE+MBPb2nkuEx2q33XqZOCT2zZ/Q/11dKGyY1hrJl1tqch9Xo7rsEPp627mMIDD9+ OJHI4dZf/YwhE98qQWowlU7bXLBTkLrD6nu7ASWUTAfVaQV4je0NoBustLAjXKoNISce /0k6L8TpLnTBDYaS6Wf2UiSt3Z5zLe3Wov6ynARXBjjpbIPav8LS1xm0gN/6zj+geTpN whbg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=h+PAnlOeSqK84W0ERQY/gCR4c55jIsBF1I+soFiWDhY=; b=IGG6PB4q033Votu/SiCNvbS+rXHFN3x9Qyk4YWL8ADnoqDdQ1iN95Y0BTPNPCGCl/Z w6YkLdPyPsuZit13fnSIKc16Qi0AI99BmtmO4Qct5ZbjofWcOu6OEollMrwJi9KygA/X 6pqM4P0ESGNmZIPfn96FOaE5238mzvPZEDy+TFyqXHWLmMc6t5912O2nShI0g5q6Or7d 3kUeagR8Vwf9uGg8DxjDiW01bqes0GRHWo+nQkhU80JCEiQgTQDpGPxMQQrZ8OQMTnW7 IBfpnJpWsEoO+c/nUJiIFQ3f1gNmznQWbYayDzl2bkfd6opaxTcKy66BvtLxcE9H041A HX7A== X-Gm-Message-State: AOAM532HqcNZCf1D4Odu++LXd6XCnMQTkXaAN3prQ0AGUd//uvB03m+W YauNVRZowlFJiIj5icCgLGmXfWG7nn4= X-Google-Smtp-Source: ABdhPJzTkc2NAUVh1QpxADR5TerFxYaJ4Fi24T/X9gOSANSgDXKzZSf9EAiv/o7vpOk4EqT6rNwDyA== X-Received: by 2002:a1c:4684:: with SMTP id t126mr4138001wma.165.1610465499574; Tue, 12 Jan 2021 07:31:39 -0800 (PST) Received: from [127.0.0.1] ([13.74.141.28]) by smtp.gmail.com with ESMTPSA id m2sm4220315wml.34.2021.01.12.07.31.38 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 12 Jan 2021 07:31:39 -0800 (PST) Message-Id: <9e27c07d7852c1476f5a2ca0bb0fe0c1f05e2de1.1610465493.git.gitgitgadget@gmail.com> In-Reply-To: References: Date: Tue, 12 Jan 2021 15:31:28 +0000 Subject: [PATCH 06/10] simple-ipc: 
add win32 implementation Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: Jeff Hostetler , Jeff Hostetler Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler From: Jeff Hostetler Create Windows implementation of "simple-ipc" using named pipes. Signed-off-by: Jeff Hostetler --- Makefile | 5 + compat/simple-ipc/ipc-shared.c | 28 ++ compat/simple-ipc/ipc-win32.c | 723 ++++++++++++++++++++++++++++ config.mak.uname | 2 + contrib/buildsystems/CMakeLists.txt | 4 + simple-ipc.h | 216 +++++++++ 6 files changed, 978 insertions(+) create mode 100644 compat/simple-ipc/ipc-shared.c create mode 100644 compat/simple-ipc/ipc-win32.c create mode 100644 simple-ipc.h diff --git a/Makefile b/Makefile index 7b64106930a..c94d5847919 100644 --- a/Makefile +++ b/Makefile @@ -1682,6 +1682,11 @@ else LIB_OBJS += unix-socket.o endif +ifdef USE_WIN32_IPC + LIB_OBJS += compat/simple-ipc/ipc-shared.o + LIB_OBJS += compat/simple-ipc/ipc-win32.o +endif + ifdef NO_ICONV BASIC_CFLAGS += -DNO_ICONV endif diff --git a/compat/simple-ipc/ipc-shared.c b/compat/simple-ipc/ipc-shared.c new file mode 100644 index 00000000000..1edec815953 --- /dev/null +++ b/compat/simple-ipc/ipc-shared.c @@ -0,0 +1,28 @@ +#include "cache.h" +#include "simple-ipc.h" +#include "strbuf.h" +#include "pkt-line.h" +#include "thread-utils.h" + +#ifdef SUPPORTS_SIMPLE_IPC + +int ipc_server_run(const char *path, const struct ipc_server_opts *opts, + ipc_server_application_cb *application_cb, + void *application_data) +{ + struct ipc_server_data *server_data = NULL; + int ret; + + ret = ipc_server_run_async(&server_data, path, opts, + application_cb, application_data); + if (ret) + return ret; + + ret = ipc_server_await(server_data); + + ipc_server_free(server_data); + + return ret; +} + +#endif /* SUPPORTS_SIMPLE_IPC */ diff --git a/compat/simple-ipc/ipc-win32.c b/compat/simple-ipc/ipc-win32.c new file mode 100644 index 00000000000..475d9f02ff6 --- /dev/null +++ b/compat/simple-ipc/ipc-win32.c @@ -0,0 +1,723 @@ +#include "cache.h" +#include "simple-ipc.h" +#include "strbuf.h" +#include "pkt-line.h" +#include "thread-utils.h" + +#ifndef GIT_WINDOWS_NATIVE +#error This file can only be compiled on Windows +#endif + +static int initialize_pipe_name(const char *path, wchar_t *wpath, size_t alloc) +{ + int off = 0; + struct strbuf realpath = STRBUF_INIT; + + if (!strbuf_realpath(&realpath, path, 0)) + return -1; + + off = swprintf(wpath, alloc, L"\\\\.\\pipe\\"); + if (xutftowcs(wpath + off, realpath.buf, alloc - off) < 0) + return -1; + + /* Handle drive prefix */ + if (wpath[off] && wpath[off + 1] == L':') { + wpath[off + 1] = L'_'; + off += 2; + } + + for (; wpath[off]; off++) + if (wpath[off] == L'/') + wpath[off] = L'\\'; + + strbuf_release(&realpath); + return 0; +} + +static enum ipc_active_state get_active_state(wchar_t *pipe_path) +{ + if (WaitNamedPipeW(pipe_path, NMPWAIT_USE_DEFAULT_WAIT)) + return IPC_STATE__LISTENING; + + if (GetLastError() == ERROR_SEM_TIMEOUT) + return IPC_STATE__NOT_LISTENING; + + if (GetLastError() == ERROR_FILE_NOT_FOUND) + return IPC_STATE__PATH_NOT_FOUND; + + return IPC_STATE__OTHER_ERROR; +} + +enum ipc_active_state ipc_get_active_state(const char *path) +{ + wchar_t pipe_path[MAX_PATH]; + + if (initialize_pipe_name(path, pipe_path, ARRAY_SIZE(pipe_path)) < 0) + return IPC_STATE__INVALID_PATH; + + return get_active_state(pipe_path); +} + +#define WAIT_STEP_MS (50) + +static enum ipc_active_state connect_to_server( + const wchar_t *wpath, + DWORD timeout_ms, + const struct 
ipc_client_connect_options *options, + int *pfd) +{ + DWORD t_start_ms, t_waited_ms; + DWORD step_ms; + HANDLE hPipe = INVALID_HANDLE_VALUE; + DWORD mode = PIPE_READMODE_BYTE; + DWORD gle; + + *pfd = -1; + + for (;;) { + hPipe = CreateFileW(wpath, GENERIC_READ | GENERIC_WRITE, + 0, NULL, OPEN_EXISTING, 0, NULL); + if (hPipe != INVALID_HANDLE_VALUE) + break; + + gle = GetLastError(); + + switch (gle) { + case ERROR_FILE_NOT_FOUND: + if (!options->wait_if_not_found) + return IPC_STATE__PATH_NOT_FOUND; + if (!timeout_ms) + return IPC_STATE__PATH_NOT_FOUND; + + step_ms = (timeout_ms < WAIT_STEP_MS) ? + timeout_ms : WAIT_STEP_MS; + sleep_millisec(step_ms); + + timeout_ms -= step_ms; + break; /* try again */ + + case ERROR_PIPE_BUSY: + if (!options->wait_if_busy) + return IPC_STATE__NOT_LISTENING; + if (!timeout_ms) + return IPC_STATE__NOT_LISTENING; + + t_start_ms = (DWORD)(getnanotime() / 1000000); + + if (!WaitNamedPipeW(wpath, timeout_ms)) { + if (GetLastError() == ERROR_SEM_TIMEOUT) + return IPC_STATE__NOT_LISTENING; + + return IPC_STATE__OTHER_ERROR; + } + + /* + * A pipe server instance became available. + * Race other client processes to connect to + * it. + * + * But first decrement our overall timeout so + * that we don't starve if we keep losing the + * race. But also guard against special + * NPMWAIT_ values (0 and -1). + */ + t_waited_ms = (DWORD)(getnanotime() / 1000000) - t_start_ms; + if (t_waited_ms < timeout_ms) + timeout_ms -= t_waited_ms; + else + timeout_ms = 1; + break; /* try again */ + + default: + return IPC_STATE__OTHER_ERROR; + } + } + + if (!SetNamedPipeHandleState(hPipe, &mode, NULL, NULL)) { + CloseHandle(hPipe); + return IPC_STATE__OTHER_ERROR; + } + + *pfd = _open_osfhandle((intptr_t)hPipe, O_RDWR|O_BINARY); + if (*pfd < 0) { + CloseHandle(hPipe); + return IPC_STATE__OTHER_ERROR; + } + + /* fd now owns hPipe */ + + return IPC_STATE__LISTENING; +} + +/* + * The default connection timeout for Windows clients. + * + * This is not currently part of the ipc_ API (nor the config settings) + * because of differences between Windows and other platforms. + * + * This value was chosen at random. 
+ */ +#define WINDOWS_CONNECTION_TIMEOUT_MS (30000) + +enum ipc_active_state ipc_client_try_connect( + const char *path, + const struct ipc_client_connect_options *options, + int *pfd) +{ + wchar_t wpath[MAX_PATH]; + enum ipc_active_state state = IPC_STATE__OTHER_ERROR; + + *pfd = -1; + + trace2_region_enter("ipc-client", "try-connect", NULL); + trace2_data_string("ipc-client", NULL, "try-connect/path", path); + + if (initialize_pipe_name(path, wpath, ARRAY_SIZE(wpath)) < 0) + state = IPC_STATE__INVALID_PATH; + else + state = connect_to_server(wpath, WINDOWS_CONNECTION_TIMEOUT_MS, + options, pfd); + + trace2_data_intmax("ipc-client", NULL, "try-connect/state", + (intmax_t)state); + trace2_region_leave("ipc-client", "try-connect", NULL); + return state; +} + +int ipc_client_send_command_to_fd(int fd, const char *message, + struct strbuf *answer) +{ + int ret = 0; + + strbuf_setlen(answer, 0); + + trace2_region_enter("ipc-client", "send-command", NULL); + + if (write_packetized_from_buf(message, strlen(message), fd, 1) < 0) { + ret = error(_("could not send IPC command")); + goto done; + } + + FlushFileBuffers((HANDLE)_get_osfhandle(fd)); + + if (read_packetized_to_strbuf(fd, answer, PACKET_READ_NEVER_DIE) < 0) { + ret = error(_("could not read IPC response")); + goto done; + } + +done: + trace2_region_leave("ipc-client", "send-command", NULL); + return ret; +} + +int ipc_client_send_command(const char *path, + const struct ipc_client_connect_options *options, + const char *message, struct strbuf *response) +{ + int fd; + int ret = -1; + enum ipc_active_state state; + + state = ipc_client_try_connect(path, options, &fd); + + if (state != IPC_STATE__LISTENING) + return ret; + + ret = ipc_client_send_command_to_fd(fd, message, response); + close(fd); + return ret; +} + +/* + * Duplicate the given pipe handle and wrap it in a file descriptor so + * that we can use pkt-line on it. + */ +static int dup_fd_from_pipe(const HANDLE pipe) +{ + HANDLE process = GetCurrentProcess(); + HANDLE handle; + int fd; + + if (!DuplicateHandle(process, pipe, process, &handle, 0, FALSE, + DUPLICATE_SAME_ACCESS)) { + errno = err_win_to_posix(GetLastError()); + return -1; + } + + fd = _open_osfhandle((intptr_t)handle, O_RDWR|O_BINARY); + if (fd < 0) { + errno = err_win_to_posix(GetLastError()); + CloseHandle(handle); + return -1; + } + + /* + * `handle` is now owned by `fd` and will be automatically closed + * when the descriptor is closed. + */ + + return fd; +} + +/* + * Magic numbers used to annotate callback instance data. + * These are used to help guard against accidentally passing the + * wrong instance data across multiple levels of callbacks (which + * is easy to do if there are `void*` arguments). + */ +enum magic { + MAGIC_SERVER_REPLY_DATA, + MAGIC_SERVER_THREAD_DATA, + MAGIC_SERVER_DATA, +}; + +struct ipc_server_reply_data { + enum magic magic; + int fd; + struct ipc_server_thread_data *server_thread_data; +}; + +struct ipc_server_thread_data { + enum magic magic; + struct ipc_server_thread_data *next_thread; + struct ipc_server_data *server_data; + pthread_t pthread_id; + HANDLE hPipe; +}; + +/* + * On Windows, the conceptual "ipc-server" is implemented as a pool of + * n idential/peer "server-thread" threads. That is, there is no + * hierarchy of threads; and therefore no controller thread managing + * the pool. Each thread has an independent handle to the named pipe, + * receives incoming connections, processes the client, and re-uses + * the pipe for the next client connection. 
+ * + * Therefore, the "ipc-server" only needs to maintain a list of the + * spawned threads for eventual "join" purposes. + * + * A single "stop-event" is visible to all of the server threads to + * tell them to shutdown (when idle). + */ +struct ipc_server_data { + enum magic magic; + ipc_server_application_cb *application_cb; + void *application_data; + struct strbuf buf_path; + wchar_t wpath[MAX_PATH]; + + HANDLE hEventStopRequested; + struct ipc_server_thread_data *thread_list; + int is_stopped; +}; + +enum connect_result { + CR_CONNECTED = 0, + CR_CONNECT_PENDING, + CR_CONNECT_ERROR, + CR_WAIT_ERROR, + CR_SHUTDOWN, +}; + +static enum connect_result queue_overlapped_connect( + struct ipc_server_thread_data *server_thread_data, + OVERLAPPED *lpo) +{ + if (ConnectNamedPipe(server_thread_data->hPipe, lpo)) + goto failed; + + switch (GetLastError()) { + case ERROR_IO_PENDING: + return CR_CONNECT_PENDING; + + case ERROR_PIPE_CONNECTED: + SetEvent(lpo->hEvent); + return CR_CONNECTED; + + default: + break; + } + +failed: + error(_("ConnectNamedPipe failed for '%s' (%lu)"), + server_thread_data->server_data->buf_path.buf, + GetLastError()); + return CR_CONNECT_ERROR; +} + +/* + * Use Windows Overlapped IO to wait for a connection or for our event + * to be signalled. + */ +static enum connect_result wait_for_connection( + struct ipc_server_thread_data *server_thread_data, + OVERLAPPED *lpo) +{ + enum connect_result r; + HANDLE waitHandles[2]; + DWORD dwWaitResult; + + r = queue_overlapped_connect(server_thread_data, lpo); + if (r != CR_CONNECT_PENDING) + return r; + + waitHandles[0] = server_thread_data->server_data->hEventStopRequested; + waitHandles[1] = lpo->hEvent; + + dwWaitResult = WaitForMultipleObjects(2, waitHandles, FALSE, INFINITE); + switch (dwWaitResult) { + case WAIT_OBJECT_0 + 0: + return CR_SHUTDOWN; + + case WAIT_OBJECT_0 + 1: + ResetEvent(lpo->hEvent); + return CR_CONNECTED; + + default: + return CR_WAIT_ERROR; + } +} + +/* + * Forward declare our reply callback function so that any compiler + * errors are reported when we actually define the function (in addition + * to any errors reported when we try to pass this callback function as + * a parameter in a function call). The former are easier to understand. + */ +static ipc_server_reply_cb do_io_reply_callback; + +/* + * Relay application's response message to the client process. + * (We do not flush at this point because we allow the caller + * to chunk data to the client thru us.) + */ +static int do_io_reply_callback(struct ipc_server_reply_data *reply_data, + const char *response, size_t response_len) +{ + if (reply_data->magic != MAGIC_SERVER_REPLY_DATA) + BUG("reply_cb called with wrong instance data"); + + return write_packetized_from_buf(response, response_len, + reply_data->fd, 0); +} + +/* + * Receive the request/command from the client and pass it to the + * registered request-callback. The request-callback will compose + * a response and call our reply-callback to send it to the client. + * + * Simple-IPC only contains one round trip, so we flush and close + * here after the response. 
+ */ +static int do_io(struct ipc_server_thread_data *server_thread_data) +{ + struct strbuf buf = STRBUF_INIT; + struct ipc_server_reply_data reply_data; + int ret = 0; + + reply_data.magic = MAGIC_SERVER_REPLY_DATA; + reply_data.server_thread_data = server_thread_data; + + reply_data.fd = dup_fd_from_pipe(server_thread_data->hPipe); + if (reply_data.fd < 0) + return error(_("could not create fd from pipe for '%s'"), + server_thread_data->server_data->buf_path.buf); + + ret = read_packetized_to_strbuf(reply_data.fd, &buf, + PACKET_READ_NEVER_DIE); + if (ret >= 0) { + ret = server_thread_data->server_data->application_cb( + server_thread_data->server_data->application_data, + buf.buf, do_io_reply_callback, &reply_data); + + packet_flush_gently(reply_data.fd); + + FlushFileBuffers((HANDLE)_get_osfhandle((reply_data.fd))); + } + else { + /* + * The client probably disconnected/shutdown before it + * could send a well-formed message. Ignore it. + */ + } + + strbuf_release(&buf); + close(reply_data.fd); + + return ret; +} + +/* + * Handle IPC request and response with this connected client. And reset + * the pipe to prepare for the next client. + */ +static int use_connection(struct ipc_server_thread_data *server_thread_data) +{ + int ret; + + ret = do_io(server_thread_data); + + FlushFileBuffers(server_thread_data->hPipe); + DisconnectNamedPipe(server_thread_data->hPipe); + + return ret; +} + +/* + * Thread proc for an IPC server worker thread. It handles a series of + * connections from clients. It cleans and reuses the hPipe between each + * client. + */ +static void *server_thread_proc(void *_server_thread_data) +{ + struct ipc_server_thread_data *server_thread_data = _server_thread_data; + HANDLE hEventConnected = INVALID_HANDLE_VALUE; + OVERLAPPED oConnect; + enum connect_result cr; + int ret; + + assert(server_thread_data->hPipe != INVALID_HANDLE_VALUE); + + trace2_thread_start("ipc-server"); + trace2_data_string("ipc-server", NULL, "pipe", + server_thread_data->server_data->buf_path.buf); + + hEventConnected = CreateEventW(NULL, TRUE, FALSE, NULL); + + memset(&oConnect, 0, sizeof(oConnect)); + oConnect.hEvent = hEventConnected; + + for (;;) { + cr = wait_for_connection(server_thread_data, &oConnect); + + switch (cr) { + case CR_SHUTDOWN: + goto finished; + + case CR_CONNECTED: + ret = use_connection(server_thread_data); + if (ret == SIMPLE_IPC_QUIT) { + ipc_server_stop_async( + server_thread_data->server_data); + goto finished; + } + if (ret > 0) { + /* + * Ignore (transient) IO errors with this + * client and reset for the next client. + */ + } + break; + + case CR_CONNECT_PENDING: + /* By construction, this should not happen. */ + BUG("ipc-server[%s]: unexpeced CR_CONNECT_PENDING", + server_thread_data->server_data->buf_path.buf); + + case CR_CONNECT_ERROR: + case CR_WAIT_ERROR: + /* + * Ignore these theoretical errors. 
+ */ + DisconnectNamedPipe(server_thread_data->hPipe); + break; + + default: + BUG("unandled case after wait_for_connection"); + } + } + +finished: + CloseHandle(server_thread_data->hPipe); + CloseHandle(hEventConnected); + + trace2_thread_exit(); + return NULL; +} + +static HANDLE create_new_pipe(wchar_t *wpath, int is_first) +{ + HANDLE hPipe; + DWORD dwOpenMode, dwPipeMode; + LPSECURITY_ATTRIBUTES lpsa = NULL; + + dwOpenMode = PIPE_ACCESS_INBOUND | PIPE_ACCESS_OUTBOUND | + FILE_FLAG_OVERLAPPED; + + dwPipeMode = PIPE_TYPE_MESSAGE | PIPE_READMODE_BYTE | PIPE_WAIT | + PIPE_REJECT_REMOTE_CLIENTS; + + if (is_first) { + dwOpenMode |= FILE_FLAG_FIRST_PIPE_INSTANCE; + + /* + * On Windows, the first server pipe instance gets to + * set the ACL / Security Attributes on the named + * pipe; subsequent instances inherit and cannot + * change them. + * + * TODO Should we allow the application layer to + * specify security attributes, such as `LocalService` + * or `LocalSystem`, when we create the named pipe? + * This question is probably not important when the + * daemon is started by a foreground user process and + * only needs to talk to the current user, but may be + * if the daemon is run via the Control Panel as a + * System Service. + */ + } + + hPipe = CreateNamedPipeW(wpath, dwOpenMode, dwPipeMode, + PIPE_UNLIMITED_INSTANCES, 1024, 1024, 0, lpsa); + + return hPipe; +} + +int ipc_server_run_async(struct ipc_server_data **returned_server_data, + const char *path, const struct ipc_server_opts *opts, + ipc_server_application_cb *application_cb, + void *application_data) +{ + struct ipc_server_data *server_data; + wchar_t wpath[MAX_PATH]; + HANDLE hPipeFirst = INVALID_HANDLE_VALUE; + int k; + int ret = 0; + int nr_threads = opts->nr_threads; + + *returned_server_data = NULL; + + ret = initialize_pipe_name(path, wpath, ARRAY_SIZE(wpath)); + if (ret < 0) + return error( + _("could not create normalized wchar_t path for '%s'"), + path); + + hPipeFirst = create_new_pipe(wpath, 1); + if (hPipeFirst == INVALID_HANDLE_VALUE) + return error(_("IPC server already running on '%s'"), path); + + server_data = xcalloc(1, sizeof(*server_data)); + server_data->magic = MAGIC_SERVER_DATA; + server_data->application_cb = application_cb; + server_data->application_data = application_data; + server_data->hEventStopRequested = CreateEvent(NULL, TRUE, FALSE, NULL); + strbuf_init(&server_data->buf_path, 0); + strbuf_addstr(&server_data->buf_path, path); + wcscpy(server_data->wpath, wpath); + + if (nr_threads < 1) + nr_threads = 1; + + for (k = 0; k < nr_threads; k++) { + struct ipc_server_thread_data *std; + + std = xcalloc(1, sizeof(*std)); + std->magic = MAGIC_SERVER_THREAD_DATA; + std->server_data = server_data; + std->hPipe = INVALID_HANDLE_VALUE; + + std->hPipe = (k == 0) + ? hPipeFirst + : create_new_pipe(server_data->wpath, 0); + + if (std->hPipe == INVALID_HANDLE_VALUE) { + /* + * If we've reached a pipe instance limit for + * this path, just use fewer threads. + */ + free(std); + break; + } + + if (pthread_create(&std->pthread_id, NULL, + server_thread_proc, std)) { + /* + * Likewise, if we're out of threads, just use + * fewer threads than requested. + * + * However, we just give up if we can't even get + * one thread. This should not happen. 
+ */ + if (k == 0) + die(_("could not start thread[0] for '%s'"), + path); + + CloseHandle(std->hPipe); + free(std); + break; + } + + std->next_thread = server_data->thread_list; + server_data->thread_list = std; + } + + *returned_server_data = server_data; + return 0; +} + +int ipc_server_stop_async(struct ipc_server_data *server_data) +{ + if (!server_data) + return 0; + + /* + * Gently tell all of the ipc_server threads to shutdown. + * This will be seen the next time they are idle (and waiting + * for a connection). + * + * We DO NOT attempt to force them to drop an active connection. + */ + SetEvent(server_data->hEventStopRequested); + return 0; +} + +int ipc_server_await(struct ipc_server_data *server_data) +{ + DWORD dwWaitResult; + + if (!server_data) + return 0; + + dwWaitResult = WaitForSingleObject(server_data->hEventStopRequested, INFINITE); + if (dwWaitResult != WAIT_OBJECT_0) + return error(_("wait for hEvent failed for '%s'"), + server_data->buf_path.buf); + + while (server_data->thread_list) { + struct ipc_server_thread_data *std = server_data->thread_list; + + pthread_join(std->pthread_id, NULL); + + server_data->thread_list = std->next_thread; + free(std); + } + + server_data->is_stopped = 1; + + return 0; +} + +void ipc_server_free(struct ipc_server_data *server_data) +{ + if (!server_data) + return; + + if (!server_data->is_stopped) + BUG("cannot free ipc-server while running for '%s'", + server_data->buf_path.buf); + + strbuf_release(&server_data->buf_path); + + if (server_data->hEventStopRequested != INVALID_HANDLE_VALUE) + CloseHandle(server_data->hEventStopRequested); + + while (server_data->thread_list) { + struct ipc_server_thread_data *std = server_data->thread_list; + + server_data->thread_list = std->next_thread; + free(std); + } + + free(server_data); +} diff --git a/config.mak.uname b/config.mak.uname index 198ab1e58f8..76087cff678 100644 --- a/config.mak.uname +++ b/config.mak.uname @@ -421,6 +421,7 @@ ifeq ($(uname_S),Windows) RUNTIME_PREFIX = YesPlease HAVE_WPGMPTR = YesWeDo NO_ST_BLOCKS_IN_STRUCT_STAT = YesPlease + USE_WIN32_IPC = YesPlease USE_WIN32_MMAP = YesPlease MMAP_PREVENTS_DELETE = UnfortunatelyYes # USE_NED_ALLOCATOR = YesPlease @@ -597,6 +598,7 @@ ifneq (,$(findstring MINGW,$(uname_S))) RUNTIME_PREFIX = YesPlease HAVE_WPGMPTR = YesWeDo NO_ST_BLOCKS_IN_STRUCT_STAT = YesPlease + USE_WIN32_IPC = YesPlease USE_WIN32_MMAP = YesPlease MMAP_PREVENTS_DELETE = UnfortunatelyYes USE_NED_ALLOCATOR = YesPlease diff --git a/contrib/buildsystems/CMakeLists.txt b/contrib/buildsystems/CMakeLists.txt index c151dd7257f..4bd41054ee7 100644 --- a/contrib/buildsystems/CMakeLists.txt +++ b/contrib/buildsystems/CMakeLists.txt @@ -246,6 +246,10 @@ elseif(CMAKE_SYSTEM_NAME STREQUAL "Linux") list(APPEND compat_SOURCES unix-socket.c) endif() +if(CMAKE_SYSTEM_NAME STREQUAL "Windows") + list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-win32.c) +endif() + set(EXE_EXTENSION ${CMAKE_EXECUTABLE_SUFFIX}) #header checks diff --git a/simple-ipc.h b/simple-ipc.h new file mode 100644 index 00000000000..cd525e711bd --- /dev/null +++ b/simple-ipc.h @@ -0,0 +1,216 @@ +#ifndef GIT_SIMPLE_IPC_H +#define GIT_SIMPLE_IPC_H + +/* + * See Documentation/technical/api-simple-ipc.txt + */ + +#if defined(GIT_WINDOWS_NATIVE) +#define SUPPORTS_SIMPLE_IPC +#endif + +#ifdef SUPPORTS_SIMPLE_IPC + +/* + * Simple IPC Client Side API. + */ + +enum ipc_active_state { + /* + * The pipe/socket exists and the daemon is waiting for connections. 
+ */ + IPC_STATE__LISTENING = 0, + + /* + * The pipe/socket exists, but the daemon is not listening. + * Perhaps it is very busy. + * Perhaps the daemon died without deleting the path. + * Perhaps it is shutting down and draining existing clients. + * Perhaps it is dead, but other clients are lingering and + * still holding a reference to the pathname. + */ + IPC_STATE__NOT_LISTENING, + + /* + * The requested pathname is bogus and no amount of retries + * will fix that. + */ + IPC_STATE__INVALID_PATH, + + /* + * The requested pathname is not found. This usually means + * that there is no daemon present. + */ + IPC_STATE__PATH_NOT_FOUND, + + IPC_STATE__OTHER_ERROR, +}; + +struct ipc_client_connect_options { + /* + * Spin under timeout if the server is running but can't + * accept our connection yet. This should always be set + * unless you just want to poke the server and see if it + * is alive. + */ + unsigned int wait_if_busy:1; + + /* + * Spin under timeout if the pipe/socket is not yet present + * on the file system. This is useful if we just started + * the service and need to wait for it to become ready. + */ + unsigned int wait_if_not_found:1; +}; + +#define IPC_CLIENT_CONNECT_OPTIONS_INIT { \ + .wait_if_busy = 0, \ + .wait_if_not_found = 0, \ +} + +/* + * Determine if a server is listening on this named pipe or socket using + * platform-specific logic. This might just probe the filesystem or it + * might make a trivial connection to the server using this pathname. + */ +enum ipc_active_state ipc_get_active_state(const char *path); + +/* + * Try to connect to the daemon on the named pipe or socket. + * + * Returns IPC_STATE__LISTENING (and an fd) when connected. + * + * Otherwise, returns info to help decide whether to retry or to + * spawn/respawn the server. + */ +enum ipc_active_state ipc_client_try_connect( + const char *path, + const struct ipc_client_connect_options *options, + int *pfd); + +/* + * Used by the client to synchronously send and receive a message with + * the server on the provided fd. + * + * Returns 0 when successful. + * + * Calls error() and returns non-zero otherwise. + */ +int ipc_client_send_command_to_fd(int fd, const char *message, + struct strbuf *answer); + +/* + * Used by the client to synchronously connect and send and receive a + * message to the server listening at the given path. + * + * Returns 0 when successful. + * + * Calls error() and returns non-zero otherwise. + */ +int ipc_client_send_command(const char *path, + const struct ipc_client_connect_options *options, + const char *message, struct strbuf *answer); + +/* + * Simple IPC Server Side API. + */ + +struct ipc_server_reply_data; + +typedef int (ipc_server_reply_cb)(struct ipc_server_reply_data *, + const char *response, + size_t response_len); + +/* + * Prototype for an application-supplied callback to process incoming + * client IPC messages and compose a reply. The `application_cb` should + * use the provided `reply_cb` and `reply_data` to send an IPC response + * back to the client. The `reply_cb` callback can be called multiple + * times for chunking purposes. A reply message is optional and may be + * omitted if not necessary for the application. + * + * The return value from the application callback is ignored. + * The value `SIMPLE_IPC_QUIT` can be used to shutdown the server. 
+ */ +typedef int (ipc_server_application_cb)(void *application_data, + const char *request, + ipc_server_reply_cb *reply_cb, + struct ipc_server_reply_data *reply_data); + +#define SIMPLE_IPC_QUIT -2 + +/* + * Opaque instance data to represent an IPC server instance. + */ +struct ipc_server_data; + +/* + * Control parameters for the IPC server instance. + * Use this to hide platform-specific settings. + */ +struct ipc_server_opts +{ + int nr_threads; +}; + +/* + * Start an IPC server instance in one or more background threads + * and return a handle to the pool. + * + * Returns 0 if the asynchronous server pool was started successfully. + * Returns -1 if not. + * + * When a client IPC message is received, the `application_cb` will be + * called (possibly on a random thread) to handle the message and + * optionally compose a reply message. + */ +int ipc_server_run_async(struct ipc_server_data **returned_server_data, + const char *path, const struct ipc_server_opts *opts, + ipc_server_application_cb *application_cb, + void *application_data); + +/* + * Gently signal the IPC server pool to shutdown. No new client + * connections will be accepted, but existing connections will be + * allowed to complete. + */ +int ipc_server_stop_async(struct ipc_server_data *server_data); + +/* + * Block the calling thread until all threads in the IPC server pool + * have completed and been joined. + */ +int ipc_server_await(struct ipc_server_data *server_data); + +/* + * Close and free all resource handles associated with the IPC server + * pool. + */ +void ipc_server_free(struct ipc_server_data *server_data); + +/* + * Run an IPC server instance and block the calling thread of the + * current process. It does not return until the IPC server has + * either shutdown or had an unrecoverable error. + * + * The IPC server handles incoming IPC messages from client processes + * and may use one or more background threads as necessary. + * + * Returns 0 after the server has completed successfully. + * Returns -1 if the server cannot be started. + * + * When a client IPC message is received, the `application_cb` will be + * called (possibly on a random thread) to handle the message and + * optionally compose a reply message. + * + * Note that `ipc_server_run()` is a synchronous wrapper around the + * above asynchronous routines. It effectively hides all of the + * server state and thread details from the caller and presents a + * simple synchronous interface. 
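Taken together, the asynchronous entry points follow a start/stop/await/free lifecycle. A minimal sketch, assuming a hypothetical `my_app_cb`/`my_app_data` pair supplied by the application:

	struct ipc_server_data *server_data = NULL;
	struct ipc_server_opts opts = { .nr_threads = 4 };

	if (ipc_server_run_async(&server_data, path, &opts,
				 my_app_cb, my_app_data))
		die(_("could not start ipc-server"));

	/* ... background threads service clients here ... */

	ipc_server_stop_async(server_data);	/* request shutdown */
	ipc_server_await(server_data);		/* join the pool */
	ipc_server_free(server_data);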
+ */ +int ipc_server_run(const char *path, const struct ipc_server_opts *opts, + ipc_server_application_cb *application_cb, + void *application_data); + +#endif /* SUPPORTS_SIMPLE_IPC */ +#endif /* GIT_SIMPLE_IPC_H */ From patchwork Tue Jan 12 15:31:29 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Hostetler X-Patchwork-Id: 12013799 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2C8AAC433E9 for ; Tue, 12 Jan 2021 15:33:14 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 0567B23131 for ; Tue, 12 Jan 2021 15:33:13 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2391498AbhALPc5 (ORCPT ); Tue, 12 Jan 2021 10:32:57 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50470 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2391414AbhALPc4 (ORCPT ); Tue, 12 Jan 2021 10:32:56 -0500 Received: from mail-wr1-x433.google.com (mail-wr1-x433.google.com [IPv6:2a00:1450:4864:20::433]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B816DC0617A4 for ; Tue, 12 Jan 2021 07:31:41 -0800 (PST) Received: by mail-wr1-x433.google.com with SMTP id y17so2898428wrr.10 for ; Tue, 12 Jan 2021 07:31:41 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=YenO7yqnHXBLLCYIiSPYtExaFyt9ZYpIIXNpSVZKIXA=; b=HojPQCOe1SzoR71JAoS4Z6Oh9A17zACQUZqc5ueqCf4b/9MUySLcaeY+QYGzgpY7TB ExTC5Bg6Knil4xNyMEVvoJzBLNv6WE3+z5/ZLQaSgob+vixLnb8Q+bg7Ts00ZuspYeX4 zjpVB8WuOaBGvH9Sp+aG1Ydj9Cfw8oTMsBa4OohX7iihCzt1AyDtLG0jsyW5g+KS8ISK sv6MtYAa0lUc6GkkxAm3pV60U3+mqUh1c/L4JvXTGLCnxyWCNLm6ksDRMvldf39uq5xt 8siWvQp08SAy6hIvcPGIoF6I5tzDN7e/MEihMgfl2uO8LxNyMoLwzS+aXWzXawoUENN+ EBOw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=YenO7yqnHXBLLCYIiSPYtExaFyt9ZYpIIXNpSVZKIXA=; b=l4qpLBIUlGzJPwt26jhhRaaPM24iiwO67krvoiwx4xdfhWTl95eeL0PlndmTmvD1SP u3Rydp5w+MY/fyJAdnFm8TIZJ59AjkV+GAZXr9rbiTqCoP9LaZ7HVs9hZpKJCHApJGfx Tf5wvjchU0TM3e71zLNIm4hQMq0lZhlksB/GS3JEtACGDmFRo/cEvGpkVOhTm6KozldG 9X2zy7+WZr/77rket4TMNF+3xcLIMnJzUk6N6vTKQ2MFERB35TdXT597NzdAUHqXCG6x 9LJCXs35OM2aqcWuQ3YgJdnbBX2j42Z159o5eelhtmzDetRJkplW/nUWPfYRTuCpLFp1 lgMw== X-Gm-Message-State: AOAM532cUdWjf5J7WvwY4enSy9b4sZofYGsTtC0ot9w+CRGNs7AkdMNu /wKGNjS7Q6PIuYdqAZPotV+QWuGY84E= X-Google-Smtp-Source: ABdhPJwo7JjR86xo6DkeQ1+hEVInuKcMOg2u3Ybspd0pIbGC5glLf3fx8Q5+QMPDAjfVz+q4fLc3Rg== X-Received: by 2002:a5d:660c:: with SMTP id n12mr4979258wru.291.1610465500247; Tue, 12 Jan 2021 07:31:40 -0800 (PST) Received: from [127.0.0.1] ([13.74.141.28]) by smtp.gmail.com with ESMTPSA id l7sm4359677wme.4.2021.01.12.07.31.39 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 12 Jan 
2021 07:31:39 -0800 (PST) Message-Id: <96268351ac66371a0998d189db619f357d2b71fa.1610465493.git.gitgitgadget@gmail.com> In-Reply-To: References: Date: Tue, 12 Jan 2021 15:31:29 +0000 Subject: [PATCH 07/10] unix-socket: create gentle version of unix_stream_listen() Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: Jeff Hostetler , Jeff Hostetler Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler From: Jeff Hostetler Create a gentle version of `unix_stream_listen()`. This version does not call `die()` if a socket-fd cannot be created and does not assume that it is safe to `unlink()` an existing socket-inode. `unix_stream_listen()` uses `unix_stream_socket()` helper function to create the socket-fd. Avoid that helper because it calls `die()` on errors. `unix_stream_listen()` always tries to `unlink()` the socket-path before calling `bind()`. If there is an existing server/daemon already bound and listening on that socket-path, our `unlink()` would have the effect of disassociating the existing server's bound-socket-fd from the socket-path without notifying the existing server. The existing server could continue to service existing connections (accepted-socket-fd's), but would not receive any futher new connections (since clients rendezvous via the socket-path). The existing server would effectively be offline but yet appear to be active. Furthermore, `unix_stream_listen()` creates an opportunity for a brief race condition for connecting clients if they try to connect in the interval between the forced `unlink()` and the subsequent `bind()` (which recreates the socket-path that is bound to a new socket-fd in the current process). Signed-off-by: Jeff Hostetler --- unix-socket.c | 39 +++++++++++++++++++++++++++++++++++++++ unix-socket.h | 8 ++++++++ 2 files changed, 47 insertions(+) diff --git a/unix-socket.c b/unix-socket.c index 19ed48be990..3a9ffc32268 100644 --- a/unix-socket.c +++ b/unix-socket.c @@ -121,3 +121,42 @@ int unix_stream_listen(const char *path) errno = saved_errno; return -1; } + +int unix_stream_listen_gently(const char *path, + const struct unix_stream_listen_opts *opts) +{ + int fd = -1; + int bind_successful = 0; + int saved_errno; + struct sockaddr_un sa; + struct unix_sockaddr_context ctx; + + if (unix_sockaddr_init(&sa, path, &ctx) < 0) + goto fail; + + fd = socket(AF_UNIX, SOCK_STREAM, 0); + if (fd < 0) + goto fail; + + if (opts->force_unlink_before_bind) + unlink(path); + + if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) + goto fail; + bind_successful = 1; + + if (listen(fd, opts->listen_backlog_size) < 0) + goto fail; + + unix_sockaddr_cleanup(&ctx); + return fd; + +fail: + saved_errno = errno; + unix_sockaddr_cleanup(&ctx); + close(fd); + if (bind_successful) + unlink(path); + errno = saved_errno; + return -1; +} diff --git a/unix-socket.h b/unix-socket.h index e271aeec5a0..253f579f087 100644 --- a/unix-socket.h +++ b/unix-socket.h @@ -4,4 +4,12 @@ int unix_stream_connect(const char *path); int unix_stream_listen(const char *path); +struct unix_stream_listen_opts { + int listen_backlog_size; + unsigned int force_unlink_before_bind:1; +}; + +int unix_stream_listen_gently(const char *path, + const struct unix_stream_listen_opts *opts); + #endif /* UNIX_SOCKET_H */ From patchwork Tue Jan 12 15:31:30 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Hostetler X-Patchwork-Id: 12013797 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on 
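To make the contrast with `unix_stream_listen()` concrete, here is a minimal sketch of a caller of the gentle variant, using the two options added by the patch above (the backlog value and error strings are illustrative only):

	struct unix_stream_listen_opts opts = {
		.listen_backlog_size = 50,
		.force_unlink_before_bind = 0,	/* never steal the path */
	};
	int fd = unix_stream_listen_gently(path, &opts);

	if (fd < 0) {
		if (errno == EADDRINUSE)
			return error(_("socket already in use '%s'"), path);
		return error_errno(_("could not create socket '%s'"), path);
	}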
aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C3978C433E0 for ; Tue, 12 Jan 2021 15:33:13 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 792D723131 for ; Tue, 12 Jan 2021 15:33:13 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2391542AbhALPc7 (ORCPT ); Tue, 12 Jan 2021 10:32:59 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50480 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2391148AbhALPc5 (ORCPT ); Tue, 12 Jan 2021 10:32:57 -0500 Received: from mail-wr1-x431.google.com (mail-wr1-x431.google.com [IPv6:2a00:1450:4864:20::431]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 799E5C0617A5 for ; Tue, 12 Jan 2021 07:31:42 -0800 (PST) Received: by mail-wr1-x431.google.com with SMTP id a12so2912043wrv.8 for ; Tue, 12 Jan 2021 07:31:42 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=d7ZVkvm1O2Grh2REMhf+Gasv/q3hvRasFYvdKTLwRGY=; b=hv7SW73mWatAC/uygYJ2Konsdc3QgwBFh3OmXEQDiTF6vqhKKTW53ax7fDQit70Ra6 WyxdnqschjRE0vpxrYM+CBMAOp1bwLsjq2YL43pDWpkZewCqautUIHCCKSYs7X/XW9YF dIQgQcmlmCGj6/TGWxHAAMGzklPRbFiFxc8rRN/JJubErM4s90V27gGWXkZQVhNPIpY/ qVomI1itXn8pVBgcg7lFcV0ZKmH5JHl3YpzsGLQejwFrfUWtyGN60utw6OB+Cuoa2TCR fn6sjprmmw3Fr6gTaWO1h78vyFLonrSUPKJ5wv0HfgjjmsEl36WA9nzvfsuhFbJuS4Ll qU8A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=d7ZVkvm1O2Grh2REMhf+Gasv/q3hvRasFYvdKTLwRGY=; b=LJYIY6SDEeTptBO+0ReSx57t8GgHZR9ucKQT/dIYiFIesD8+kTJffb9e13rMD1IML1 6Du17fSlF+54xp+q8li+iaz58nQKe6nFM7H60q4iLYdkLNnAt9I/s3bhntXh9P8vCUdQ wif6LWdAsYbr70NaQk0d/9pLiCEfofncJovPGNKdIyuDC0NeVBox060+ed1831dC1Azv /Ws+ATZJWR1G7Pxe8JWGHsjngRvp/WTPYccm9fuao38VDHQZU8S+PtPnbeBDPg4tIy8L rhqFMa0LZVfU2JDMHF8eztqXByxlD1u9FyOemXxUvTkDI+dtQcRPlW7hhjTB2idq3sZ4 OvJQ== X-Gm-Message-State: AOAM533/SGIn9EERL8xOcCuocNnTzVE4rCmPSiwnupOOkmFYpQP5oMLs PTlDhkYdw0Akglhz747W8vQf38UDMFY= X-Google-Smtp-Source: ABdhPJz/CyCI4JvKTQV4Lq5y6pYNeZ42PB+6+2kp0LVCa3mmpUwK5MYw2/No6m96+dRl5TG3CdC9gg== X-Received: by 2002:a5d:40ce:: with SMTP id b14mr4857562wrq.350.1610465501122; Tue, 12 Jan 2021 07:31:41 -0800 (PST) Received: from [127.0.0.1] ([13.74.141.28]) by smtp.gmail.com with ESMTPSA id a62sm4034548wmf.7.2021.01.12.07.31.40 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 12 Jan 2021 07:31:40 -0800 (PST) Message-Id: <383a9755669d4ed2d7dba348137dddd49ae2f3d5.1610465493.git.gitgitgadget@gmail.com> In-Reply-To: References: Date: Tue, 12 Jan 2021 15:31:30 +0000 Subject: [PATCH 08/10] unix-socket: add no-chdir option to unix_stream_listen_gently() Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: Jeff Hostetler , Jeff Hostetler Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler From: Jeff Hostetler Calls to `chdir()` are 
dangerous in a multi-threaded context. If `unix_stream_listen()` is given a socket pathname that is too big to fit in a `sockaddr_un` structure, it will `chdir()` to the parent directory of the requested socket pathname, create the socket using a relative pathname, and then `chdir()` back. This is not thread-safe. Add `disallow_chdir` flag to `struct unix_sockaddr_context` and change all callers to pass an initialized context structure. Teach `unix_sockaddr_init()` to not allow calls to `chdir()` when flag is set. Extend the public interface to `unix_stream_listen_gently()` to also expose this new flag. Signed-off-by: Jeff Hostetler --- unix-socket.c | 21 +++++++++++++++++---- unix-socket.h | 1 + 2 files changed, 18 insertions(+), 4 deletions(-) diff --git a/unix-socket.c b/unix-socket.c index 3a9ffc32268..f66987261e6 100644 --- a/unix-socket.c +++ b/unix-socket.c @@ -19,8 +19,15 @@ static int chdir_len(const char *orig, int len) struct unix_sockaddr_context { char *orig_dir; + unsigned int disallow_chdir:1; }; +#define UNIX_SOCKADDR_CONTEXT_INIT \ +{ \ + .orig_dir=NULL, \ + .disallow_chdir=0, \ +} + static void unix_sockaddr_cleanup(struct unix_sockaddr_context *ctx) { if (!ctx->orig_dir) @@ -40,7 +47,11 @@ static int unix_sockaddr_init(struct sockaddr_un *sa, const char *path, { int size = strlen(path) + 1; - ctx->orig_dir = NULL; + if (ctx->disallow_chdir && size > sizeof(sa->sun_path)) { + errno = ENAMETOOLONG; + return -1; + } + if (size > sizeof(sa->sun_path)) { const char *slash = find_last_dir_sep(path); const char *dir; @@ -75,7 +86,7 @@ int unix_stream_connect(const char *path) { int fd, saved_errno; struct sockaddr_un sa; - struct unix_sockaddr_context ctx; + struct unix_sockaddr_context ctx = UNIX_SOCKADDR_CONTEXT_INIT; if (unix_sockaddr_init(&sa, path, &ctx) < 0) return -1; @@ -97,7 +108,7 @@ int unix_stream_listen(const char *path) { int fd, saved_errno; struct sockaddr_un sa; - struct unix_sockaddr_context ctx; + struct unix_sockaddr_context ctx = UNIX_SOCKADDR_CONTEXT_INIT; unlink(path); @@ -129,7 +140,9 @@ int unix_stream_listen_gently(const char *path, int bind_successful = 0; int saved_errno; struct sockaddr_un sa; - struct unix_sockaddr_context ctx; + struct unix_sockaddr_context ctx = UNIX_SOCKADDR_CONTEXT_INIT; + + ctx.disallow_chdir = opts->disallow_chdir; if (unix_sockaddr_init(&sa, path, &ctx) < 0) goto fail; diff --git a/unix-socket.h b/unix-socket.h index 253f579f087..08d3d822111 100644 --- a/unix-socket.h +++ b/unix-socket.h @@ -7,6 +7,7 @@ int unix_stream_listen(const char *path); struct unix_stream_listen_opts { int listen_backlog_size; unsigned int force_unlink_before_bind:1; + unsigned int disallow_chdir:1; }; int unix_stream_listen_gently(const char *path, From patchwork Tue Jan 12 15:31:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Hostetler X-Patchwork-Id: 12013801 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E3704C433E6 for ; Tue, 12 Jan 2021 15:33:13 +0000 (UTC) Received: 
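A sketch of how a threaded daemon might combine the new flag with the gentle listener from the previous patch, so that an over-long path fails with ENAMETOOLONG instead of racing on the process-wide working directory (values shown are illustrative):

	struct unix_stream_listen_opts opts = {
		.listen_backlog_size = 50,
		.force_unlink_before_bind = 0,
		.disallow_chdir = 1,	/* no chdir() in a threaded server */
	};
	int fd = unix_stream_listen_gently(path, &opts);

	if (fd < 0 && errno == ENAMETOOLONG)
		return error(_("socket path too long: '%s'"), path);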
from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A01CA23132 for ; Tue, 12 Jan 2021 15:33:13 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2391532AbhALPc6 (ORCPT ); Tue, 12 Jan 2021 10:32:58 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50482 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2391513AbhALPc5 (ORCPT ); Tue, 12 Jan 2021 10:32:57 -0500 Received: from mail-wm1-x333.google.com (mail-wm1-x333.google.com [IPv6:2a00:1450:4864:20::333]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DE26CC0617A6 for ; Tue, 12 Jan 2021 07:31:43 -0800 (PST) Received: by mail-wm1-x333.google.com with SMTP id y187so2494037wmd.3 for ; Tue, 12 Jan 2021 07:31:43 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=YBm/h2Qr2CpzHRGAZM4aCsbcZbWtHUxXCpUESjtEil8=; b=os+3MFQubm4OfFVZnlj/7GFmqKGopfMSO9fAA2k+RYoEl0jJiWpFTLh0fKXqjLwn42 j1QvMggpHvTYIkCYFypmlXjBzXCtCsfA0w0FwaJF3h4QejN/q2sTp65CGkCHEdcUmrW6 oJ+ShtS1GjOaohIOqKCD+2JEaqyf1UUZDEjW2WrwvM5mt5MBiVNeVdlX1h/5Bp7Aukhz lC4H4WOH4tguh5WDUQdDB4CjozYeb149bp1M74l66n9Z1S7DpiAN4GV0pKACeXp5Xs9G mTQI0o+XCAKslXVXAxmXsMkn0XiBxRo2ltRGB9GlyElm2EXXia6+gpXJE0TLFo5FU75s qAzg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=YBm/h2Qr2CpzHRGAZM4aCsbcZbWtHUxXCpUESjtEil8=; b=Tkbr5dzPYZkUhCrtdSp4ePJlUwuG6y3I+NmGhSPrs2lomb0lMhx0/oJZx8YTE77X/H tmvSUIn8ackUH68qPU+gUUGzNN94m3pICDaa05d6UU1Q7WSVL/Y/siYi3A94qzarunEO QRYXGwkCr9ZHtlbp5t8q7Wr3G8TKY+ybghQuPDVIdC/WdZGnrNDBJxwWC05qT9KcsRNb 0Am/cp+EuwTZIUFl+6Ettm47oJ5C6+dJvHvdHW8SkJb+JkMQ8SusrIT+BLk7q4C8KDVV A75n0bYK2p7gAsNENUeJZgu0kWRLEcj0Lgh+4p6C4XGULtbWvim9O6oN7ylNDpKWI9RC SSSA== X-Gm-Message-State: AOAM531GKAuQEqZjUAnWjjfWZnJ6uAtubQCYqCqT7s1onxOom+xoI2gn JHRofQ4+Ql/PIW7ECmk85ne1gMWtHvs= X-Google-Smtp-Source: ABdhPJxDfOSM0QNOa4R+mHOtkFT+PWg2lXcZuM/6lgU637Ou7qgzaCqz9ya0GnaWsrCQ5H7m1+PMtQ== X-Received: by 2002:a05:600c:2219:: with SMTP id z25mr4141964wml.27.1610465501942; Tue, 12 Jan 2021 07:31:41 -0800 (PST) Received: from [127.0.0.1] ([13.74.141.28]) by smtp.gmail.com with ESMTPSA id o8sm5381996wrm.17.2021.01.12.07.31.41 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 12 Jan 2021 07:31:41 -0800 (PST) Message-Id: <69969c2b8d37d11b5e5f886d700d4b6224df9894.1610465493.git.gitgitgadget@gmail.com> In-Reply-To: References: Date: Tue, 12 Jan 2021 15:31:31 +0000 Subject: [PATCH 09/10] simple-ipc: add t/helper/test-simple-ipc and t0052 Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: Jeff Hostetler , Jeff Hostetler Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler From: Jeff Hostetler Create unit tests for "simple-ipc". These are currently only enabled on Windows. 
Signed-off-by: Jeff Hostetler --- Makefile | 1 + t/helper/test-simple-ipc.c | 485 +++++++++++++++++++++++++++++++++++++ t/helper/test-tool.c | 1 + t/helper/test-tool.h | 1 + t/t0052-simple-ipc.sh | 129 ++++++++++ 5 files changed, 617 insertions(+) create mode 100644 t/helper/test-simple-ipc.c create mode 100755 t/t0052-simple-ipc.sh diff --git a/Makefile b/Makefile index c94d5847919..e7ba8853ea6 100644 --- a/Makefile +++ b/Makefile @@ -740,6 +740,7 @@ TEST_BUILTINS_OBJS += test-serve-v2.o TEST_BUILTINS_OBJS += test-sha1.o TEST_BUILTINS_OBJS += test-sha256.o TEST_BUILTINS_OBJS += test-sigchain.o +TEST_BUILTINS_OBJS += test-simple-ipc.o TEST_BUILTINS_OBJS += test-strcmp-offset.o TEST_BUILTINS_OBJS += test-string-list.o TEST_BUILTINS_OBJS += test-submodule-config.o diff --git a/t/helper/test-simple-ipc.c b/t/helper/test-simple-ipc.c new file mode 100644 index 00000000000..4960e79cf18 --- /dev/null +++ b/t/helper/test-simple-ipc.c @@ -0,0 +1,485 @@ +/* + * test-simple-ipc.c: verify that the Inter-Process Communication works. + */ + +#include "test-tool.h" +#include "cache.h" +#include "strbuf.h" +#include "simple-ipc.h" +#include "parse-options.h" +#include "thread-utils.h" + +#ifndef SUPPORTS_SIMPLE_IPC +int cmd__simple_ipc(int argc, const char **argv) +{ + die("simple IPC not available on this platform"); +} +#else + +/* + * The test daemon defines an "application callback" that supports a + * series of commands (see `test_app_cb()`). + * + * Unknown commands are caught here and we send an error message back + * to the client process. + */ +static int app__unhandled_command(const char *command, + ipc_server_reply_cb *reply_cb, + struct ipc_server_reply_data *reply_data) +{ + struct strbuf buf = STRBUF_INIT; + int ret; + + strbuf_addf(&buf, "unhandled command: %s", command); + ret = reply_cb(reply_data, buf.buf, buf.len); + strbuf_release(&buf); + + return ret; +} + +/* + * Reply with a single very large buffer. This is to ensure that + * long response are properly handled -- whether the chunking occurs + * in the kernel or in the (probably pkt-line) layer. + */ +#define BIG_ROWS (10000) +static int app__big_command(ipc_server_reply_cb *reply_cb, + struct ipc_server_reply_data *reply_data) +{ + struct strbuf buf = STRBUF_INIT; + int row; + int ret; + + for (row = 0; row < BIG_ROWS; row++) + strbuf_addf(&buf, "big: %.75d\n", row); + + ret = reply_cb(reply_data, buf.buf, buf.len); + strbuf_release(&buf); + + return ret; +} + +/* + * Reply with a series of lines. This is to ensure that we can incrementally + * compute the response and chunk it to the client. + */ +#define CHUNK_ROWS (10000) +static int app__chunk_command(ipc_server_reply_cb *reply_cb, + struct ipc_server_reply_data *reply_data) +{ + struct strbuf buf = STRBUF_INIT; + int row; + int ret; + + for (row = 0; row < CHUNK_ROWS; row++) { + strbuf_setlen(&buf, 0); + strbuf_addf(&buf, "big: %.75d\n", row); + ret = reply_cb(reply_data, buf.buf, buf.len); + } + + strbuf_release(&buf); + + return ret; +} + +/* + * Slowly reply with a series of lines. This is to model an expensive to + * compute chunked response (which might happen if this callback is running + * in a thread and is fighting for a lock with other threads). 
+ */ +#define SLOW_ROWS (1000) +#define SLOW_DELAY_MS (10) +static int app__slow_command(ipc_server_reply_cb *reply_cb, + struct ipc_server_reply_data *reply_data) +{ + struct strbuf buf = STRBUF_INIT; + int row; + int ret; + + for (row = 0; row < SLOW_ROWS; row++) { + strbuf_setlen(&buf, 0); + strbuf_addf(&buf, "big: %.75d\n", row); + ret = reply_cb(reply_data, buf.buf, buf.len); + sleep_millisec(SLOW_DELAY_MS); + } + + strbuf_release(&buf); + + return ret; +} + +/* + * The client sent a command followed by a (possibly very) large buffer. + */ +static int app__sendbytes_command(const char *received, + ipc_server_reply_cb *reply_cb, + struct ipc_server_reply_data *reply_data) +{ + struct strbuf buf_resp = STRBUF_INIT; + const char *p = "?"; + int len_ballast = 0; + int k; + int errs = 0; + int ret; + + if (skip_prefix(received, "sendbytes ", &p)) + len_ballast = strlen(p); + + /* + * Verify that the ballast is n copies of a single letter. + * And that the multi-threaded IO layer didn't cross the streams. + */ + for (k = 1; k < len_ballast; k++) + if (p[k] != p[0]) + errs++; + + if (errs) + strbuf_addf(&buf_resp, "errs:%d\n", errs); + else + strbuf_addf(&buf_resp, "rcvd:%c%08d\n", p[0], len_ballast); + + ret = reply_cb(reply_data, buf_resp.buf, buf_resp.len); + + strbuf_release(&buf_resp); + + return ret; +} + +/* + * An arbitrary fixed address to verify that the application instance + * data is handled properly. + */ +static int my_app_data = 42; + +static ipc_server_application_cb test_app_cb; + +/* + * This is "application callback" that sits on top of the "ipc-server". + * It completely defines the set of command verbs supported by this + * application. + */ +static int test_app_cb(void *application_data, + const char *command, + ipc_server_reply_cb *reply_cb, + struct ipc_server_reply_data *reply_data) +{ + /* + * Verify that we received the application-data that we passed + * when we started the ipc-server. (We have several layers of + * callbacks calling callbacks and it's easy to get things mixed + * up (especially when some are "void*").) + */ + if (application_data != (void*)&my_app_data) + BUG("application_cb: application_data pointer wrong"); + + if (!strcmp(command, "quit")) { + /* + * Tell ipc-server to hangup with an empty reply. + */ + return SIMPLE_IPC_QUIT; + } + + if (!strcmp(command, "ping")) { + const char *answer = "pong"; + return reply_cb(reply_data, answer, strlen(answer)); + } + + if (!strcmp(command, "big")) + return app__big_command(reply_cb, reply_data); + + if (!strcmp(command, "chunk")) + return app__chunk_command(reply_cb, reply_data); + + if (!strcmp(command, "slow")) + return app__slow_command(reply_cb, reply_data); + + if (starts_with(command, "sendbytes ")) + return app__sendbytes_command(command, reply_cb, reply_data); + + return app__unhandled_command(command, reply_cb, reply_data); +} + +/* + * This process will run as a simple-ipc server and listen for IPC commands + * from client processes. + */ +static int daemon__run_server(const char *path, int argc, const char **argv) +{ + struct ipc_server_opts opts = { + .nr_threads = 5 + }; + + const char * const daemon_usage[] = { + N_("test-helper simple-ipc daemon ["), + NULL + }; + struct option daemon_options[] = { + OPT_INTEGER(0, "threads", &opts.nr_threads, + N_("number of threads in server thread pool")), + OPT_END() + }; + + argc = parse_options(argc, argv, NULL, daemon_options, daemon_usage, 0); + + if (opts.nr_threads < 1) + opts.nr_threads = 1; + + /* + * Synchronously run the ipc-server. 
We don't need any application + * instance data, so pass an arbitrary pointer (that we'll later + * verify made the round trip). + */ + return ipc_server_run(path, &opts, test_app_cb, (void*)&my_app_data); +} + +/* + * This process will run a quick probe to see if a simple-ipc server + * is active on this path. + * + * Returns 0 if the server is alive. + */ +static int client__probe_server(const char *path) +{ + enum ipc_active_state s; + + s = ipc_get_active_state(path); + switch (s) { + case IPC_STATE__LISTENING: + return 0; + + case IPC_STATE__NOT_LISTENING: + return error("no server listening at '%s'", path); + + case IPC_STATE__PATH_NOT_FOUND: + return error("path not found '%s'", path); + + case IPC_STATE__INVALID_PATH: + return error("invalid pipe/socket name '%s'", path); + + case IPC_STATE__OTHER_ERROR: + default: + return error("other error for '%s'", path); + } +} + +/* + * Send an IPC command to an already-running server daemon and print the + * response. + * + * argv[2] contains a simple (1 word) command verb that `test_app_cb()` + * (in the daemon process) will understand. + */ +static int client__send_ipc(int argc, const char **argv, const char *path) +{ + const char *command = argc > 2 ? argv[2] : "(no command)"; + struct strbuf buf = STRBUF_INIT; + struct ipc_client_connect_options options + = IPC_CLIENT_CONNECT_OPTIONS_INIT; + + options.wait_if_busy = 1; + options.wait_if_not_found = 0; + + if (!ipc_client_send_command(path, &options, command, &buf)) { + printf("%s\n", buf.buf); + fflush(stdout); + strbuf_release(&buf); + + return 0; + } + + return error("failed to send '%s' to '%s'", command, path); +} + +/* + * Send an IPC command followed by ballast to confirm that a large + * message can be sent and that the kernel or pkt-line layers will + * properly chunk it and that the daemon receives the entire message. + */ +static int do_sendbytes(int bytecount, char byte, const char *path) +{ + struct strbuf buf_send = STRBUF_INIT; + struct strbuf buf_resp = STRBUF_INIT; + struct ipc_client_connect_options options + = IPC_CLIENT_CONNECT_OPTIONS_INIT; + + options.wait_if_busy = 1; + options.wait_if_not_found = 0; + + strbuf_addstr(&buf_send, "sendbytes "); + strbuf_addchars(&buf_send, byte, bytecount); + + if (!ipc_client_send_command(path, &options, buf_send.buf, &buf_resp)) { + strbuf_rtrim(&buf_resp); + printf("sent:%c%08d %s\n", byte, bytecount, buf_resp.buf); + fflush(stdout); + strbuf_release(&buf_send); + strbuf_release(&buf_resp); + + return 0; + } + + return error("client failed to sendbytes(%d, '%c') to '%s'", + bytecount, byte, path); +} + +/* + * Send an IPC command with ballast to an already-running server daemon. 
+ */ +static int client__sendbytes(int argc, const char **argv, const char *path) +{ + int bytecount = 1024; + char *string = "x"; + const char * const sendbytes_usage[] = { + N_("test-helper simple-ipc sendbytes []"), + NULL + }; + struct option sendbytes_options[] = { + OPT_INTEGER(0, "bytecount", &bytecount, N_("number of bytes")), + OPT_STRING(0, "byte", &string, N_("byte"), N_("ballast")), + OPT_END() + }; + + argc = parse_options(argc, argv, NULL, sendbytes_options, sendbytes_usage, 0); + + return do_sendbytes(bytecount, string[0], path); +} + +struct multiple_thread_data { + pthread_t pthread_id; + struct multiple_thread_data *next; + const char *path; + int bytecount; + int batchsize; + int sum_errors; + int sum_good; + char letter; +}; + +static void *multiple_thread_proc(void *_multiple_thread_data) +{ + struct multiple_thread_data *d = _multiple_thread_data; + int k; + + trace2_thread_start("multiple"); + + for (k = 0; k < d->batchsize; k++) { + if (do_sendbytes(d->bytecount + k, d->letter, d->path)) + d->sum_errors++; + else + d->sum_good++; + } + + trace2_thread_exit(); + return NULL; +} + +/* + * Start a client-side thread pool. Each thread sends a series of + * IPC requests. Each request is on a new connection to the server. + */ +static int client__multiple(int argc, const char **argv, const char *path) +{ + struct multiple_thread_data *list = NULL; + int k; + int nr_threads = 5; + int bytecount = 1; + int batchsize = 10; + int sum_join_errors = 0; + int sum_thread_errors = 0; + int sum_good = 0; + + const char * const multiple_usage[] = { + N_("test-helper simple-ipc multiple []"), + NULL + }; + struct option multiple_options[] = { + OPT_INTEGER(0, "bytecount", &bytecount, N_("number of bytes")), + OPT_INTEGER(0, "threads", &nr_threads, N_("number of threads")), + OPT_INTEGER(0, "batchsize", &batchsize, N_("number of requests per thread")), + OPT_END() + }; + + argc = parse_options(argc, argv, NULL, multiple_options, multiple_usage, 0); + + if (bytecount < 1) + bytecount = 1; + if (nr_threads < 1) + nr_threads = 1; + if (batchsize < 1) + batchsize = 1; + + for (k = 0; k < nr_threads; k++) { + struct multiple_thread_data *d = xcalloc(1, sizeof(*d)); + d->next = list; + d->path = path; + d->bytecount = bytecount + batchsize*(k/26); + d->batchsize = batchsize; + d->sum_errors = 0; + d->sum_good = 0; + d->letter = 'A' + (k % 26); + + if (pthread_create(&d->pthread_id, NULL, multiple_thread_proc, d)) { + warning("failed to create thread[%d] skipping remainder", k); + free(d); + break; + } + + list = d; + } + + while (list) { + struct multiple_thread_data *d = list; + + if (pthread_join(d->pthread_id, NULL)) + sum_join_errors++; + + sum_thread_errors += d->sum_errors; + sum_good += d->sum_good; + + list = d->next; + free(d); + } + + printf("client (good %d) (join %d), (errors %d)\n", + sum_good, sum_join_errors, sum_thread_errors); + + return (sum_join_errors + sum_thread_errors) ? 1 : 0; +} + +int cmd__simple_ipc(int argc, const char **argv) +{ + const char *path = "ipc-test"; + + if (argc == 2 && !strcmp(argv[1], "SUPPORTS_SIMPLE_IPC")) + return 0; + + /* Use '!!' on all dispatch functions to map from `error()` style + * (returns -1) style to `test_must_fail` style (expects 1) and + * get less confusing shell error messages. + */ + + if (argc == 2 && !strcmp(argv[1], "is-active")) + return !!client__probe_server(path); + + if (argc >= 2 && !strcmp(argv[1], "daemon")) + return !!daemon__run_server(path, argc, argv); + + /* + * Client commands follow. 
Ensure a server is running before + * going any further. + */ + if (client__probe_server(path)) + return 1; + + if ((argc == 2 || argc == 3) && !strcmp(argv[1], "send")) + return !!client__send_ipc(argc, argv, path); + + if (argc >= 2 && !strcmp(argv[1], "sendbytes")) + return !!client__sendbytes(argc, argv, path); + + if (argc >= 2 && !strcmp(argv[1], "multiple")) + return !!client__multiple(argc, argv, path); + + die("Unhandled argv[1]: '%s'", argv[1]); +} +#endif diff --git a/t/helper/test-tool.c b/t/helper/test-tool.c index 9d6d14d9293..a409655f03b 100644 --- a/t/helper/test-tool.c +++ b/t/helper/test-tool.c @@ -64,6 +64,7 @@ static struct test_cmd cmds[] = { { "sha1", cmd__sha1 }, { "sha256", cmd__sha256 }, { "sigchain", cmd__sigchain }, + { "simple-ipc", cmd__simple_ipc }, { "strcmp-offset", cmd__strcmp_offset }, { "string-list", cmd__string_list }, { "submodule-config", cmd__submodule_config }, diff --git a/t/helper/test-tool.h b/t/helper/test-tool.h index a6470ff62c4..564eb3c8e91 100644 --- a/t/helper/test-tool.h +++ b/t/helper/test-tool.h @@ -54,6 +54,7 @@ int cmd__sha1(int argc, const char **argv); int cmd__oid_array(int argc, const char **argv); int cmd__sha256(int argc, const char **argv); int cmd__sigchain(int argc, const char **argv); +int cmd__simple_ipc(int argc, const char **argv); int cmd__strcmp_offset(int argc, const char **argv); int cmd__string_list(int argc, const char **argv); int cmd__submodule_config(int argc, const char **argv); diff --git a/t/t0052-simple-ipc.sh b/t/t0052-simple-ipc.sh new file mode 100755 index 00000000000..69588354545 --- /dev/null +++ b/t/t0052-simple-ipc.sh @@ -0,0 +1,129 @@ +#!/bin/sh + +test_description='simple command server' + +. ./test-lib.sh + +test-tool simple-ipc SUPPORTS_SIMPLE_IPC || { + skip_all='simple IPC not supported on this platform' + test_done +} + +stop_simple_IPC_server () { + test -n "$SIMPLE_IPC_PID" || return 0 + + kill "$SIMPLE_IPC_PID" && + SIMPLE_IPC_PID= +} + +test_expect_success 'start simple command server' ' + { test-tool simple-ipc daemon --threads=8 & } && + SIMPLE_IPC_PID=$! && + test_atexit stop_simple_IPC_server && + + sleep 1 && + + test-tool simple-ipc is-active +' + +test_expect_success 'simple command server' ' + test-tool simple-ipc send ping >actual && + echo pong >expect && + test_cmp expect actual +' + +test_expect_success 'servers cannot share the same path' ' + test_must_fail test-tool simple-ipc daemon && + test-tool simple-ipc is-active +' + +test_expect_success 'big response' ' + test-tool simple-ipc send big >actual && + test_line_count -ge 10000 actual && + grep -q "big: [0]*9999\$" actual +' + +test_expect_success 'chunk response' ' + test-tool simple-ipc send chunk >actual && + test_line_count -ge 10000 actual && + grep -q "big: [0]*9999\$" actual +' + +test_expect_success 'slow response' ' + test-tool simple-ipc send slow >actual && + test_line_count -ge 100 actual && + grep -q "big: [0]*99\$" actual +' + +# Send an IPC with n=100,000 bytes of ballast. This should be large enough +# to force both the kernel and the pkt-line layer to chunk the message to the +# daemon and for the daemon to receive it in chunks. +# +test_expect_success 'sendbytes' ' + test-tool simple-ipc sendbytes --bytecount=100000 --byte=A >actual && + grep "sent:A00100000 rcvd:A00100000" actual +' + +# Start a series of client threads that each make +# IPC requests to the server. Each ( * ) request +# will open a new connection to the server and randomly bind to a server +# thread. 
Each client thread exits after completing its batch. So the +# total number of live client threads will be smaller than the total. +# Each request will send a message containing at least bytes +# of ballast. (Responses are small.) +# +# The purpose here is to test threading in the server and responding to +# many concurrent client requests (regardless of whether they come from +# 1 client process or many). And to test that the server side of the +# named pipe/socket is stable. (On Windows this means that the server +# pipe is properly recycled.) +# +# On Windows it also lets us adjust the connection timeout in the +# `ipc_client_send_command()`. +# +# Note it is easy to drive the system into failure by requesting an +# insane number of threads on client or server and/or increasing the +# per-thread batchsize or the per-request bytecount (ballast). +# On Windows these failures look like "pipe is busy" errors. +# So I've chosen fairly conservative values for now. +# +# We expect output of the form "sent: ..." +# With terms (7, 19, 13) we expect: +# in [A-G] +# in [19+0 .. 19+(13-1)] +# and (7 * 13) successful responses. +# +test_expect_success 'stress test threads' ' + test-tool simple-ipc multiple \ + --threads=7 \ + --bytecount=19 \ + --batchsize=13 \ + >actual && + test_line_count = 92 actual && + grep "good 91" actual && + grep "sent:A" actual_a && + cat >expect_a <<-EOF && + sent:A00000019 rcvd:A00000019 + sent:A00000020 rcvd:A00000020 + sent:A00000021 rcvd:A00000021 + sent:A00000022 rcvd:A00000022 + sent:A00000023 rcvd:A00000023 + sent:A00000024 rcvd:A00000024 + sent:A00000025 rcvd:A00000025 + sent:A00000026 rcvd:A00000026 + sent:A00000027 rcvd:A00000027 + sent:A00000028 rcvd:A00000028 + sent:A00000029 rcvd:A00000029 + sent:A00000030 rcvd:A00000030 + sent:A00000031 rcvd:A00000031 + EOF + test_cmp expect_a actual_a +' + +test_expect_success '`quit` works' ' + test-tool simple-ipc send quit && + test_must_fail test-tool simple-ipc is-active && + test_must_fail test-tool simple-ipc send ping +' + +test_done From patchwork Tue Jan 12 15:31:32 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Hostetler X-Patchwork-Id: 12013795 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D3323C4332D for ; Tue, 12 Jan 2021 15:32:35 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 841E423131 for ; Tue, 12 Jan 2021 15:32:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S2391411AbhALPce (ORCPT ); Tue, 12 Jan 2021 10:32:34 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:50330 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S2390853AbhALPce (ORCPT ); Tue, 12 Jan 2021 10:32:34 -0500 Received: from mail-wm1-x32a.google.com (mail-wm1-x32a.google.com [IPv6:2a00:1450:4864:20::32a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C754BC0617A7 for ; Tue, 12 Jan 2021 07:31:44 -0800 (PST) 
Received: by mail-wm1-x32a.google.com with SMTP id g10so754933wmh.2 for ; Tue, 12 Jan 2021 07:31:44 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=9HXFQrxJfp/6g8c3JN4p4DCLEVDp2AADbHfDuC7ui/8=; b=XXg1SiPFyWVB2WcygUfHwtAHtfteRsrXRFKMPXdB8ae2sGkrQMkpdCgW4pK2zWmKVf +MKo2XkvvbwWl2kQ55wFShykfmOe8PyZKTIkMZqA/2tZuL9SNNiGxdY4AiSOWJ7qnDWN im8FHdZO+mpRSBQC9pP1LfdrseO7fwbAfcKriXJJBPWOjLF1FV55vR+Jjc7L0Sl27YSC VhOU3shwMBbsldhnpi/yJiHZaEAY16I6rIBcBHaQrFvyMm+i+olTEb1HTBozO9rZH0Wa jLjcg+rDOLN5GJhoBAeh71f2h173ms7SQTbVIa4TnCVAPnW5k70cT1A4x5bj1NGUFBkK jP+g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=9HXFQrxJfp/6g8c3JN4p4DCLEVDp2AADbHfDuC7ui/8=; b=S0TK6dO83iKFDbRitvVoQ1VtaZEXuduxYjEX/k7sQjK5caMCfOzMT7XxbQIfFanAnu 0bc58v3Fndg2uIB2YqO4BdGMoeTWnDiGo1EU/sPrdcKc8nBZNbGokIRs0V0lPDSqXTXD AxJ7N9jhc426iDBlo1C4AxvGZw6KPcQw9oEhpcTZsQ7wM6DWX1Mc8TbGGb/KGCRkfKAj Qb4GtnVrPSMzF1+ii1ModF+Dy1FGtqtAlteIBEPL24jpsbkf3y19RqoZWj0kzNNdeRPz w/KtEpvqqINvteGJeWcyKi6pnXRwjN/tjBqD6EEBX9sMDUQeiPQMC1UMd2ucTuyXBAlN Y98w== X-Gm-Message-State: AOAM533W9tyTSQ1qkiLq36IHfK5/HhdbfssLUflytyBLe9Ymgv5UO05Y Zsu2yYHHnbach0xDSRtW1aPRFL2JQoY= X-Google-Smtp-Source: ABdhPJwCwxvAICt0wtwRW9zjL8261zDM3dpDNBCaQKC07HQQAewwYwtRmHnuOgo4R9dYLMPWS6o2Hg== X-Received: by 2002:a7b:cc94:: with SMTP id p20mr4096576wma.22.1610465502705; Tue, 12 Jan 2021 07:31:42 -0800 (PST) Received: from [127.0.0.1] ([13.74.141.28]) by smtp.gmail.com with ESMTPSA id s6sm6094397wro.79.2021.01.12.07.31.42 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Tue, 12 Jan 2021 07:31:42 -0800 (PST) Message-Id: In-Reply-To: References: Date: Tue, 12 Jan 2021 15:31:32 +0000 Subject: [PATCH 10/10] simple-ipc: add Unix domain socket implementation Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: Jeff Hostetler , Jeff Hostetler Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler From: Jeff Hostetler Create Unix domain socket based implementation of "simple-ipc". 
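The Unix implementation that follows maps filesystem and connect() failures onto the `enum ipc_active_state` values from simple-ipc.h. A sketch of how a caller might act on the probe (the `spawn_daemon()` helper is hypothetical):

	switch (ipc_get_active_state(path)) {
	case IPC_STATE__LISTENING:
		break;		/* daemon is ready; just connect */
	case IPC_STATE__PATH_NOT_FOUND:
	case IPC_STATE__NOT_LISTENING:
		spawn_daemon();	/* hypothetical respawn helper */
		break;
	default:
		die(_("cannot reach ipc-server at '%s'"), path);
	}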
Signed-off-by: Jeff Hostetler --- Makefile | 2 + compat/simple-ipc/ipc-unix-socket.c | 1093 +++++++++++++++++++++++++++ contrib/buildsystems/CMakeLists.txt | 2 + simple-ipc.h | 7 +- 4 files changed, 1103 insertions(+), 1 deletion(-) create mode 100644 compat/simple-ipc/ipc-unix-socket.c diff --git a/Makefile b/Makefile index e7ba8853ea6..f2524c02ff0 100644 --- a/Makefile +++ b/Makefile @@ -1681,6 +1681,8 @@ ifdef NO_UNIX_SOCKETS BASIC_CFLAGS += -DNO_UNIX_SOCKETS else LIB_OBJS += unix-socket.o + LIB_OBJS += compat/simple-ipc/ipc-shared.o + LIB_OBJS += compat/simple-ipc/ipc-unix-socket.o endif ifdef USE_WIN32_IPC diff --git a/compat/simple-ipc/ipc-unix-socket.c b/compat/simple-ipc/ipc-unix-socket.c new file mode 100644 index 00000000000..be100049e4b --- /dev/null +++ b/compat/simple-ipc/ipc-unix-socket.c @@ -0,0 +1,1093 @@ +#include "cache.h" +#include "simple-ipc.h" +#include "strbuf.h" +#include "pkt-line.h" +#include "thread-utils.h" +#include "unix-socket.h" + +#ifdef NO_UNIX_SOCKETS +#error compat/simple-ipc/ipc-unix-socket.c requires Unix sockets +#endif + +enum ipc_active_state ipc_get_active_state(const char *path) +{ + enum ipc_active_state state = IPC_STATE__OTHER_ERROR; + struct ipc_client_connect_options options + = IPC_CLIENT_CONNECT_OPTIONS_INIT; + struct stat st; + int fd_test = -1; + + options.wait_if_busy = 0; + options.wait_if_not_found = 0; + + if (lstat(path, &st) == -1) { + switch (errno) { + case ENOENT: + case ENOTDIR: + return IPC_STATE__NOT_LISTENING; + default: + return IPC_STATE__INVALID_PATH; + } + } + + /* also complain if a plain file is in the way */ + if ((st.st_mode & S_IFMT) != S_IFSOCK) + return IPC_STATE__INVALID_PATH; + + /* + * Just because the filesystem has an S_IFSOCK type inode + * at `path`, doesn't mean that there is a server listening. + * Ping it to be sure. + */ + state = ipc_client_try_connect(path, &options, &fd_test); + close(fd_test); + + return state; +} + +/* + * This value was chosen at random. + */ +#define WAIT_STEP_MS (50) + +/* + * Try to connect to the server. If the server is just starting up or + * is very busy, we may not get a connection the first time. + */ +static enum ipc_active_state connect_to_server( + const char *path, + int timeout_ms, + const struct ipc_client_connect_options *options, + int *pfd) +{ + int wait_ms = WAIT_STEP_MS; + int k; + + *pfd = -1; + + for (k = 0; k < timeout_ms; k += wait_ms) { + int fd = unix_stream_connect(path); + + if (fd != -1) { + *pfd = fd; + return IPC_STATE__LISTENING; + } + + if (errno == ENOENT) { + if (!options->wait_if_not_found) + return IPC_STATE__PATH_NOT_FOUND; + + goto sleep_and_try_again; + } + + if (errno == ETIMEDOUT) { + if (!options->wait_if_busy) + return IPC_STATE__NOT_LISTENING; + + goto sleep_and_try_again; + } + + if (errno == ECONNREFUSED) { + if (!options->wait_if_busy) + return IPC_STATE__NOT_LISTENING; + + goto sleep_and_try_again; + } + + return IPC_STATE__OTHER_ERROR; + + sleep_and_try_again: + sleep_millisec(wait_ms); + } + + return IPC_STATE__NOT_LISTENING; +} + +/* + * A randomly chosen timeout value.
+ */ +#define MY_CONNECTION_TIMEOUT_MS (1000) + +enum ipc_active_state ipc_client_try_connect( + const char *path, + const struct ipc_client_connect_options *options, + int *pfd) +{ + enum ipc_active_state state = IPC_STATE__OTHER_ERROR; + + *pfd = -1; + + trace2_region_enter("ipc-client", "try-connect", NULL); + trace2_data_string("ipc-client", NULL, "try-connect/path", path); + + state = connect_to_server(path, MY_CONNECTION_TIMEOUT_MS, + options, pfd); + + trace2_data_intmax("ipc-client", NULL, "try-connect/state", + (intmax_t)state); + trace2_region_leave("ipc-client", "try-connect", NULL); + return state; +} + +int ipc_client_send_command_to_fd(int fd, const char *message, + struct strbuf *answer) +{ + int ret = 0; + + strbuf_setlen(answer, 0); + + trace2_region_enter("ipc-client", "send-command", NULL); + + if (write_packetized_from_buf(message, strlen(message), fd, 1) < 0) { + ret = error(_("could not send IPC command")); + goto done; + } + + if (read_packetized_to_strbuf(fd, answer, PACKET_READ_NEVER_DIE) < 0) { + ret = error(_("could not read IPC response")); + goto done; + } + +done: + trace2_region_leave("ipc-client", "send-command", NULL); + return ret; +} + +int ipc_client_send_command(const char *path, + const struct ipc_client_connect_options *options, + const char *message, struct strbuf *answer) +{ + int fd; + int ret = -1; + enum ipc_active_state state; + + state = ipc_client_try_connect(path, options, &fd); + + if (state != IPC_STATE__LISTENING) + return ret; + + ret = ipc_client_send_command_to_fd(fd, message, answer); + close(fd); + return ret; +} + +static int set_socket_blocking_flag(int fd, int make_nonblocking) +{ + int flags; + + flags = fcntl(fd, F_GETFL, NULL); + + if (flags < 0) + return -1; + + if (make_nonblocking) + flags |= O_NONBLOCK; + else + flags &= ~O_NONBLOCK; + + return fcntl(fd, F_SETFL, flags); +} + +/* + * Magic numbers used to annotate callback instance data. + * These are used to help guard against accidentally passing the + * wrong instance data across multiple levels of callbacks (which + * is easy to do if there are `void*` arguments). + */ +enum magic { + MAGIC_SERVER_REPLY_DATA, + MAGIC_WORKER_THREAD_DATA, + MAGIC_ACCEPT_THREAD_DATA, + MAGIC_SERVER_DATA, +}; + +struct ipc_server_reply_data { + enum magic magic; + int fd; + struct ipc_worker_thread_data *worker_thread_data; +}; + +struct ipc_worker_thread_data { + enum magic magic; + struct ipc_worker_thread_data *next_thread; + struct ipc_server_data *server_data; + pthread_t pthread_id; +}; + +struct ipc_accept_thread_data { + enum magic magic; + struct ipc_server_data *server_data; + int fd_listen; + ino_t inode_listen; + int fd_send_shutdown; + int fd_wait_shutdown; + pthread_t pthread_id; +}; + +/* + * With unix-sockets, the conceptual "ipc-server" is implemented as a single + * controller "accept-thread" thread and a pool of "worker-thread" threads. + * The former does the usual `accept()` loop and dispatches connections + * to an idle worker thread. The worker threads wait in an idle loop for + * a new connection, communicate with the client and relay data to/from + * the `application_cb` and then wait for another connection from the + * server thread. This avoids the overhead of constantly creating and + * destroying threads. 
+ */ +struct ipc_server_data { + enum magic magic; + ipc_server_application_cb *application_cb; + void *application_data; + struct strbuf buf_path; + + struct ipc_accept_thread_data *accept_thread; + struct ipc_worker_thread_data *worker_thread_list; + + pthread_mutex_t work_available_mutex; + pthread_cond_t work_available_cond; + + /* + * Accepted but not yet processed client connections are kept + * in a circular buffer FIFO. The queue is empty when the + * positions are equal. + */ + int *fifo_fds; + int queue_size; + int back_pos; + int front_pos; + + int shutdown_requested; + int is_stopped; +}; + +/* + * Remove and return the oldest queued connection. + * + * Returns -1 if empty. + */ +static int fifo_dequeue(struct ipc_server_data *server_data) +{ + /* ASSERT holding mutex */ + + int fd; + + if (server_data->back_pos == server_data->front_pos) + return -1; + + fd = server_data->fifo_fds[server_data->front_pos]; + server_data->fifo_fds[server_data->front_pos] = -1; + + server_data->front_pos++; + if (server_data->front_pos == server_data->queue_size) + server_data->front_pos = 0; + + return fd; +} + +/* + * Push a new fd onto the back of the queue. + * + * Drop it and return -1 if queue is already full. + */ +static int fifo_enqueue(struct ipc_server_data *server_data, int fd) +{ + /* ASSERT holding mutex */ + + int next_back_pos; + + next_back_pos = server_data->back_pos + 1; + if (next_back_pos == server_data->queue_size) + next_back_pos = 0; + + if (next_back_pos == server_data->front_pos) { + /* Queue is full. Just drop it. */ + close(fd); + return -1; + } + + server_data->fifo_fds[server_data->back_pos] = fd; + server_data->back_pos = next_back_pos; + + return fd; +} + +/* + * Wait for a connection to be queued to the FIFO and return it. + * + * Returns -1 if someone has already requested a shutdown. + */ +static int worker_thread__wait_for_connection( + struct ipc_worker_thread_data *worker_thread_data) +{ + /* ASSERT NOT holding mutex */ + + struct ipc_server_data *server_data = worker_thread_data->server_data; + int fd = -1; + + pthread_mutex_lock(&server_data->work_available_mutex); + for (;;) { + if (server_data->shutdown_requested) + break; + + fd = fifo_dequeue(server_data); + if (fd >= 0) + break; + + pthread_cond_wait(&server_data->work_available_cond, + &server_data->work_available_mutex); + } + pthread_mutex_unlock(&server_data->work_available_mutex); + + return fd; +} + +/* + * Forward declare our reply callback function so that any compiler + * errors are reported when we actually define the function (in addition + * to any errors reported when we try to pass this callback function as + * a parameter in a function call). The former are easier to understand. + */ +static ipc_server_reply_cb do_io_reply_callback; + +/* + * Relay application's response message to the client process. + * (We do not flush at this point because we allow the caller + * to chunk data to the client thru us.) + */ +static int do_io_reply_callback(struct ipc_server_reply_data *reply_data, + const char *response, size_t response_len) +{ + if (reply_data->magic != MAGIC_SERVER_REPLY_DATA) + BUG("reply_cb called with wrong instance data"); + + return write_packetized_from_buf(response, response_len, + reply_data->fd, 0); +} + +/* A randomly chosen value. */ +#define MY_WAIT_POLL_TIMEOUT_MS (10) + +/* + * If the client hangs up without sending any data on the wire, just + * quietly close the socket and ignore this client. 
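One subtlety in the circular FIFO above: because `fifo_enqueue()` reports "full" when advancing `back_pos` would collide with `front_pos`, a buffer of `queue_size` slots holds at most `queue_size - 1` fds. A standalone sketch of the same empty/full convention:

	#define QUEUE_SIZE 4			/* holds at most 3 fds */
	static int fifo_fds[QUEUE_SIZE];
	static int front_pos, back_pos;

	static int fifo_is_empty(void)
	{
		return back_pos == front_pos;
	}

	static int fifo_is_full(void)
	{
		/* one slot is sacrificed to distinguish full from empty */
		return (back_pos + 1) % QUEUE_SIZE == front_pos;
	}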
+ * + * This worker thread is committed to reading the IPC request data + * from the client at the other end of this fd. Wait here for the + * client to actually put something on the wire -- because if the + * client just does a ping (connect and hangup without sending any + * data), our use of the pkt-line read routines will spew an error + * message. + * + * Return -1 if the client hung up. + * Return 0 if data (possibly incomplete) is ready. + */ +static int worker_thread__wait_for_io_start( + struct ipc_worker_thread_data *worker_thread_data, + int fd) +{ + struct ipc_server_data *server_data = worker_thread_data->server_data; + struct pollfd pollfd[1]; + int result; + + for (;;) { + pollfd[0].fd = fd; + pollfd[0].events = POLLIN; + + result = poll(pollfd, 1, MY_WAIT_POLL_TIMEOUT_MS); + if (result < 0) { + if (errno == EINTR) + continue; + goto cleanup; + } + + if (result == 0) { + /* a timeout */ + + int in_shutdown; + + pthread_mutex_lock(&server_data->work_available_mutex); + in_shutdown = server_data->shutdown_requested; + pthread_mutex_unlock(&server_data->work_available_mutex); + + /* + * If a shutdown is already in progress and this + * client has not started talking yet, just drop it. + */ + if (in_shutdown) + goto cleanup; + continue; + } + + if (pollfd[0].revents & POLLHUP) + goto cleanup; + + if (pollfd[0].revents & POLLIN) + return 0; + + goto cleanup; + } + +cleanup: + close(fd); + return -1; +} + +/* + * Receive the request/command from the client and pass it to the + * registered request-callback. The request-callback will compose + * a response and call our reply-callback to send it to the client. + */ +static int worker_thread__do_io( + struct ipc_worker_thread_data *worker_thread_data, + int fd) +{ + /* ASSERT NOT holding lock */ + + struct strbuf buf = STRBUF_INIT; + struct ipc_server_reply_data reply_data; + int ret = 0; + + reply_data.magic = MAGIC_SERVER_REPLY_DATA; + reply_data.worker_thread_data = worker_thread_data; + + reply_data.fd = fd; + + ret = read_packetized_to_strbuf(reply_data.fd, &buf, + PACKET_READ_NEVER_DIE); + if (ret >= 0) { + ret = worker_thread_data->server_data->application_cb( + worker_thread_data->server_data->application_data, + buf.buf, do_io_reply_callback, &reply_data); + + packet_flush_gently(reply_data.fd); + } + else { + /* + * The client probably disconnected/shutdown before it + * could send a well-formed message. Ignore it. + */ + } + + strbuf_release(&buf); + close(reply_data.fd); + + return ret; +} + +/* + * Block SIGPIPE on the current thread (so that we get EPIPE from + * write() rather than an actual signal). + * + * Note that using sigchain_push() and _pop() to control SIGPIPE + * around our IO calls is not thread safe: + * [] It uses a global stack of handler frames. + * [] It uses ALLOC_GROW() to resize it. + * [] Finally, according to the `signal(2)` man-page: + * "The effects of `signal()` in a multithreaded process are unspecified." + */ +static void thread_block_sigpipe(sigset_t *old_set) +{ + sigset_t new_set; + + sigemptyset(&new_set); + sigaddset(&new_set, SIGPIPE); + + sigemptyset(old_set); + pthread_sigmask(SIG_BLOCK, &new_set, old_set); +} + +/* + * Thread proc for an IPC worker thread. It handles a series of + * connections from clients. It pulls the next fd from the queue + * processes it, and then waits for the next client. + * + * Block SIGPIPE in this worker thread for the life of the thread. 
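With SIGPIPE blocked via `thread_block_sigpipe()`, a write to a client that hung up fails softly with EPIPE instead of killing the process. A sketch of the pattern the worker threads rely on, assuming `fd`, `buf`, and `len` refer to an in-flight reply:

	sigset_t old_set;

	thread_block_sigpipe(&old_set);

	if (write(fd, buf, len) < 0 && errno == EPIPE) {
		/* peer hung up; no signal was delivered */
		close(fd);
	}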
+ * This avoids stray (and sometimes delayed) SIGPIPE signals caused + * by client errors and/or when we are under extremely heavy IO load. + * + * This means that the application callback will have SIGPIPE blocked. + * The callback should not change it. + */ +static void *worker_thread_proc(void *_worker_thread_data) +{ + struct ipc_worker_thread_data *worker_thread_data = _worker_thread_data; + struct ipc_server_data *server_data = worker_thread_data->server_data; + sigset_t old_set; + int fd, io; + int ret; + + trace2_thread_start("ipc-worker"); + + thread_block_sigpipe(&old_set); + + for (;;) { + fd = worker_thread__wait_for_connection(worker_thread_data); + if (fd == -1) + break; /* in shutdown */ + + io = worker_thread__wait_for_io_start(worker_thread_data, fd); + if (io == -1) + continue; /* client hung up without sending anything */ + + ret = worker_thread__do_io(worker_thread_data, fd); + + if (ret == SIMPLE_IPC_QUIT) { + trace2_data_string("ipc-worker", NULL, "queue_stop_async", + "application_quit"); + /* The application told us to shut down. */ + ipc_server_stop_async(server_data); + break; + } + } + + trace2_thread_exit(); + return NULL; +} + +/* + * Return 1 if someone deleted or stole the on-disk socket from us. + */ +static int socket_was_stolen(struct ipc_accept_thread_data *accept_thread_data) +{ + struct stat st; + + if (lstat(accept_thread_data->server_data->buf_path.buf, &st) == -1) + return 1; + + if (st.st_ino != accept_thread_data->inode_listen) + return 1; + + return 0; +} + +/* A randomly chosen value. */ +#define MY_ACCEPT_POLL_TIMEOUT_MS (60 * 1000) + +/* + * Accept a new client connection on our socket. This uses non-blocking + * IO so that we can also wait for shutdown requests on our socket-pair + * without actually spinning on a fast timeout. (A sketch of the + * socketpair wakeup pattern follows this function.) + */ +static int accept_thread__wait_for_connection( + struct ipc_accept_thread_data *accept_thread_data) +{ + struct pollfd pollfd[2]; + int result; + + for (;;) { + pollfd[0].fd = accept_thread_data->fd_wait_shutdown; + pollfd[0].events = POLLIN; + + pollfd[1].fd = accept_thread_data->fd_listen; + pollfd[1].events = POLLIN; + + result = poll(pollfd, 2, MY_ACCEPT_POLL_TIMEOUT_MS); + if (result < 0) { + if (errno == EINTR) + continue; + return result; + } + + if (result == 0) { + /* a timeout */ + + /* + * If someone deletes or force-creates a new unix + * domain socket at our path, all future clients + * will be routed elsewhere and we silently starve. + * If that happens, just queue a shutdown. + */ + if (socket_was_stolen( + accept_thread_data)) { + trace2_data_string("ipc-accept", NULL, + "queue_stop_async", + "socket_stolen"); + ipc_server_stop_async( + accept_thread_data->server_data); + } + continue; + } + + if (pollfd[0].revents & POLLIN) { + /* shutdown message queued to socketpair */ + return -1; + } + + if (pollfd[1].revents & POLLIN) { + /* a connection is available on fd_listen */ + + int client_fd = accept(accept_thread_data->fd_listen, + NULL, NULL); + if (client_fd >= 0) + return client_fd; + + /* + * An error here is unlikely -- it probably + * indicates that the connecting process has + * already dropped the connection. + */ + continue; + } + + BUG("unhandled poll result errno=%d r[0]=%d r[1]=%d", + errno, pollfd[0].revents, pollfd[1].revents); + } +} +
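The shutdown wakeup polled above comes from the `fd_send_shutdown`/`fd_wait_shutdown` pair in the accept-thread data. A sketch of the underlying socketpair pattern (the wiring shown here is illustrative; the actual setup code is outside this excerpt):

	int sv[2];

	/* at startup: one end for the stopper, one for the poll loop */
	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
		return -1;
	/* fd_wait_shutdown = sv[0], fd_send_shutdown = sv[1] */

	/* to request shutdown: a single byte is enough to wake poll() */
	if (write(sv[1], "Q", 1) < 0)
		warning_errno(_("could not queue shutdown request"));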
+ *
+ * Block SIGPIPE in this thread for the life of the thread.  This
+ * avoids any stray SIGPIPE signals when closing pipe fds under
+ * extremely heavy loads (such as when the fifo queue is full and we
+ * drop incoming connections).
+ */
+static void *accept_thread_proc(void *_accept_thread_data)
+{
+	struct ipc_accept_thread_data *accept_thread_data = _accept_thread_data;
+	struct ipc_server_data *server_data = accept_thread_data->server_data;
+	sigset_t old_set;
+
+	trace2_thread_start("ipc-accept");
+
+	thread_block_sigpipe(&old_set);
+
+	for (;;) {
+		int client_fd = accept_thread__wait_for_connection(
+			accept_thread_data);
+
+		pthread_mutex_lock(&server_data->work_available_mutex);
+		if (server_data->shutdown_requested) {
+			pthread_mutex_unlock(&server_data->work_available_mutex);
+			if (client_fd >= 0)
+				close(client_fd);
+			break;
+		}
+
+		if (client_fd < 0) {
+			/* ignore transient accept() errors */
+		}
+		else {
+			fifo_enqueue(server_data, client_fd);
+			pthread_cond_broadcast(&server_data->work_available_cond);
+		}
+		pthread_mutex_unlock(&server_data->work_available_mutex);
+	}
+
+	trace2_thread_exit();
+	return NULL;
+}
+
+/*
+ * We can't predict the connection arrival rate relative to the worker
+ * processing rate, so we allow the "accept-thread" to queue up a
+ * generous number of connections, since we'd rather have the client
+ * not time out unnecessarily if we can avoid it.  (The assumption is
+ * that this will be used for FSMonitor and a few-second wait on a
+ * connection is better than having the client time out and do the
+ * full computation itself.)
+ *
+ * The FIFO queue size is set to a multiple of the worker pool size.
+ * This value was chosen at random.
+ */
+#define FIFO_SCALE (100)
+
+/*
+ * The backlog value for `listen(2)`.  This doesn't need to be huge,
+ * just large enough for our "accept-thread" to wake up and queue
+ * incoming connections onto the FIFO without the kernel dropping
+ * any.
+ *
+ * This value was chosen at random.
+ */
+#define LISTEN_BACKLOG (50)
+
+/*
+ * Create a unix domain socket at the given path to listen for
+ * client connections.  The resulting socket will then appear
+ * in the filesystem as an inode with S_IFSOCK.  The inode is
+ * itself created as part of the `bind(2)` operation.
+ *
+ * The term "socket" is ambiguous in this context.  We want to open a
+ * "socket-fd" that is bound to a "socket-inode" (path) on disk.  We
+ * listen on "socket-fd" for new connections and clients try to
+ * open/connect using the "socket-inode" pathname.
+ *
+ * Unix domain sockets have a fundamental design flaw because the
+ * "socket-inode" persists until the pathname is deleted; closing the
+ * listening "socket-fd" only closes the socket handle/descriptor and
+ * does not delete the inode/pathname.
+ *
+ * Well-behaved service daemons are expected to also delete the inode
+ * before shutdown.  If a service crashes (or forgets), it can leave
+ * the (now stale) inode in the filesystem.  This behaves like a stale
+ * ".lock" file and may prevent future service instances from starting
+ * up correctly.  (Because they won't be able to bind.)
+ *
+ * When future service instances try to create the listener socket,
+ * `bind(2)` will fail with EADDRINUSE -- because the inode already
+ * exists.  However, the new instance cannot tell if it is a stale
+ * inode *or* another service instance is already running.
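+ *
+ * (This ambiguity is what the code below has to resolve; it probes
+ * the socket with a trivial unix_stream_connect() to find out.)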
+ *
+ * One possible solution is to blindly unlink the inode before
+ * attempting to bind a new socket-fd (and thus create a new
+ * socket-inode).  Then `bind(2)` should always succeed.  However, if
+ * there is an existing service instance, it would be orphaned --
+ * it would still be listening on a socket-fd that is still bound
+ * to an (unlinked) socket-inode, but that socket-inode is no longer
+ * associated with the pathname.  New client connections will arrive
+ * at our new socket-inode and not the existing server's.  (It is up
+ * to the existing server to detect that its socket-inode has been
+ * stolen and shut down.)
+ *
+ * Since this is rather obscure and infrequent, we try to "gently"
+ * create the socket-inode without disturbing an existing service.
+ */
+static int create_listener_socket(const char *path,
+				  const struct ipc_server_opts *ipc_opts)
+{
+	int fd_listen;
+	int fd_client;
+	struct unix_stream_listen_opts uslg_opts = {
+		.listen_backlog_size = LISTEN_BACKLOG,
+		.force_unlink_before_bind = 0,
+		.disallow_chdir = ipc_opts->uds_disallow_chdir
+	};
+
+	trace2_data_string("ipc-server", NULL, "try-listen-gently", path);
+
+	/*
+	 * Assume socket-inode does not exist and try to (gently)
+	 * create a new socket-inode on disk at pathname and bind
+	 * socket-fd to it.
+	 */
+	fd_listen = unix_stream_listen_gently(path, &uslg_opts);
+	if (fd_listen >= 0)
+		return fd_listen;
+
+	if (errno != EADDRINUSE)
+		return error_errno(_("could not create socket '%s'"),
+				   path);
+
+	trace2_data_string("ipc-server", NULL, "try-detect-server", path);
+
+	/*
+	 * A socket-inode at pathname exists on disk, but we don't
+	 * know if a server is using it or if it is a stale inode.
+	 *
+	 * Poke it with a trivial connection to try to find out.
+	 */
+	fd_client = unix_stream_connect(path);
+	if (fd_client >= 0) {
+		/*
+		 * An existing service process is alive and accepted our
+		 * connection.
+		 */
+		close(fd_client);
+
+		/*
+		 * We cannot create a new socket-inode here, so we cannot
+		 * start up a new server on this pathname.
+		 */
+		errno = EADDRINUSE;
+		return error_errno(_("socket already in use '%s'"),
+				   path);
+	}
+
+	trace2_data_string("ipc-server", NULL, "try-listen-force", path);
+
+	/*
+	 * A socket-inode at pathname exists on disk, but we were not
+	 * able to connect to it, so we believe that this is a stale
+	 * socket-inode that a previous server forgot to delete.  Use
+	 * the traditional solution: force unlink it and create a new
+	 * one.
+	 *
+	 * TODO Note that it is possible that another server is
+	 * listening, but is either just starting up and not yet
+	 * responsive or is stuck somehow.  For now, I'm OK with
+	 * stealing the socket-inode from it in this case.
+	 */
+	uslg_opts.force_unlink_before_bind = 1;
+	fd_listen = unix_stream_listen_gently(path, &uslg_opts);
+	if (fd_listen >= 0)
+		return fd_listen;
+
+	return error_errno(_("could not force create socket '%s'"), path);
+}
+
+static int setup_listener_socket(const char *path, ino_t *inode,
+				 const struct ipc_server_opts *ipc_opts)
+{
+	int fd_listen;
+	struct stat st;
+
+	trace2_region_enter("ipc-server", "create-listener_socket", NULL);
+	fd_listen = create_listener_socket(path, ipc_opts);
+	trace2_region_leave("ipc-server", "create-listener_socket", NULL);
+
+	if (fd_listen < 0)
+		return fd_listen;
+
+	/*
+	 * We just bound a socket (descriptor) to a newly created unix
+	 * domain socket in the filesystem.
+	 * Capture the inode number so we can later detect if/when
+	 * someone else force-creates a new socket and effectively
+	 * steals the path from us.  (Which would leave us listening
+	 * to a socket that no client could reach.)
+	 */
+	if (lstat(path, &st) < 0) {
+		int saved_errno = errno;
+
+		close(fd_listen);
+		unlink(path);
+
+		errno = saved_errno;
+		return error_errno(_("could not lstat listener socket '%s'"),
+				   path);
+	}
+
+	if (set_socket_blocking_flag(fd_listen, 1)) {
+		int saved_errno = errno;
+
+		close(fd_listen);
+		unlink(path);
+
+		errno = saved_errno;
+		return error_errno(_("could not make listener socket nonblocking '%s'"),
+				   path);
+	}
+
+	*inode = st.st_ino;
+
+	return fd_listen;
+}
+
+/*
+ * Start the IPC server in a pool of background threads.
+ */
+int ipc_server_run_async(struct ipc_server_data **returned_server_data,
+			 const char *path, const struct ipc_server_opts *opts,
+			 ipc_server_application_cb *application_cb,
+			 void *application_data)
+{
+	struct ipc_server_data *server_data;
+	int fd_listen;
+	ino_t inode_listen;
+	int sv[2];
+	int k;
+	int nr_threads = opts->nr_threads;
+
+	*returned_server_data = NULL;
+
+	/*
+	 * Create a socketpair and set sv[1] to non-blocking.  This
+	 * will be used to send a shutdown message to the accept-thread
+	 * and allows the accept-thread to wait on EITHER a client
+	 * connection or a shutdown request without spinning.
+	 */
+	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
+		return error_errno(_("could not create socketpair for '%s'"),
+				   path);
+
+	if (set_socket_blocking_flag(sv[1], 1)) {
+		int saved_errno = errno;
+		close(sv[0]);
+		close(sv[1]);
+		errno = saved_errno;
+		return error_errno(_("could not make socketpair nonblocking '%s'"),
+				   path);
+	}
+
+	fd_listen = setup_listener_socket(path, &inode_listen, opts);
+	if (fd_listen < 0) {
+		int saved_errno = errno;
+		close(sv[0]);
+		close(sv[1]);
+		errno = saved_errno;
+		return -1;
+	}
+
+	server_data = xcalloc(1, sizeof(*server_data));
+	server_data->magic = MAGIC_SERVER_DATA;
+	server_data->application_cb = application_cb;
+	server_data->application_data = application_data;
+	strbuf_init(&server_data->buf_path, 0);
+	strbuf_addstr(&server_data->buf_path, path);
+
+	if (nr_threads < 1)
+		nr_threads = 1;
+
+	pthread_mutex_init(&server_data->work_available_mutex, NULL);
+	pthread_cond_init(&server_data->work_available_cond, NULL);
+
+	server_data->queue_size = nr_threads * FIFO_SCALE;
+	server_data->fifo_fds = xcalloc(server_data->queue_size,
+					sizeof(*server_data->fifo_fds));
+
+	server_data->accept_thread =
+		xcalloc(1, sizeof(*server_data->accept_thread));
+	server_data->accept_thread->magic = MAGIC_ACCEPT_THREAD_DATA;
+	server_data->accept_thread->server_data = server_data;
+	server_data->accept_thread->fd_listen = fd_listen;
+	server_data->accept_thread->inode_listen = inode_listen;
+	server_data->accept_thread->fd_send_shutdown = sv[0];
+	server_data->accept_thread->fd_wait_shutdown = sv[1];
+
+	if (pthread_create(&server_data->accept_thread->pthread_id, NULL,
+			   accept_thread_proc, server_data->accept_thread))
+		die_errno(_("could not start accept_thread '%s'"), path);
+
+	for (k = 0; k < nr_threads; k++) {
+		struct ipc_worker_thread_data *wtd;
+
+		wtd = xcalloc(1, sizeof(*wtd));
+		wtd->magic = MAGIC_WORKER_THREAD_DATA;
+		wtd->server_data = server_data;
+
+		if (pthread_create(&wtd->pthread_id, NULL, worker_thread_proc,
+				   wtd)) {
+			if (k == 0)
+				die(_("could not start worker[0] for '%s'"),
+				    path);
+			/*
+			 * Limp along with the thread pool that we have.
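+			 * (At least worker[0] was created -- we die
+			 * above otherwise -- so the server can still
+			 * make progress, just with less parallelism.)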
+			 */
+			break;
+		}
+
+		wtd->next_thread = server_data->worker_thread_list;
+		server_data->worker_thread_list = wtd;
+	}
+
+	*returned_server_data = server_data;
+	return 0;
+}
+
+/*
+ * Gently tell the IPC server threads to shut down.
+ * Can be run on any thread.
+ */
+int ipc_server_stop_async(struct ipc_server_data *server_data)
+{
+	/* ASSERT NOT holding mutex */
+
+	int fd;
+
+	if (!server_data)
+		return 0;
+
+	trace2_region_enter("ipc-server", "server-stop-async", NULL);
+
+	pthread_mutex_lock(&server_data->work_available_mutex);
+
+	server_data->shutdown_requested = 1;
+
+	/*
+	 * Write a byte to the shutdown socket pair to wake up the
+	 * accept-thread.
+	 */
+	if (write(server_data->accept_thread->fd_send_shutdown, "Q", 1) < 0)
+		error_errno("could not write to fd_send_shutdown");
+
+	/*
+	 * Drain the queue of existing connections.
+	 */
+	while ((fd = fifo_dequeue(server_data)) != -1)
+		close(fd);
+
+	/*
+	 * Gently tell worker threads to stop processing new connections
+	 * and exit.  (This does not abort in-progress conversations.)
+	 */
+	pthread_cond_broadcast(&server_data->work_available_cond);
+
+	pthread_mutex_unlock(&server_data->work_available_mutex);
+
+	trace2_region_leave("ipc-server", "server-stop-async", NULL);
+
+	return 0;
+}
+
+/*
+ * Wait for all IPC server threads to stop.
+ */
+int ipc_server_await(struct ipc_server_data *server_data)
+{
+	pthread_join(server_data->accept_thread->pthread_id, NULL);
+
+	if (!server_data->shutdown_requested)
+		BUG("ipc-server: accept-thread stopped for '%s'",
+		    server_data->buf_path.buf);
+
+	while (server_data->worker_thread_list) {
+		struct ipc_worker_thread_data *wtd =
+			server_data->worker_thread_list;
+
+		pthread_join(wtd->pthread_id, NULL);
+
+		server_data->worker_thread_list = wtd->next_thread;
+		free(wtd);
+	}
+
+	server_data->is_stopped = 1;
+
+	return 0;
+}
+
+void ipc_server_free(struct ipc_server_data *server_data)
+{
+	struct ipc_accept_thread_data *accept_thread_data;
+
+	if (!server_data)
+		return;
+
+	if (!server_data->is_stopped)
+		BUG("cannot free ipc-server while running for '%s'",
+		    server_data->buf_path.buf);
+
+	accept_thread_data = server_data->accept_thread;
+	if (accept_thread_data) {
+		if (accept_thread_data->fd_listen != -1) {
+			/*
+			 * Only unlink the unix domain socket if we
+			 * created it.  That is, if another daemon
+			 * process force-created a new socket at this
+			 * path and effectively stole our path (which
+			 * prevents us from receiving any future
+			 * clients), we don't want to do the same
+			 * thing to them.
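+			 * (socket_was_stolen() detects this by
+			 * comparing the inode currently at buf_path
+			 * with the one we captured when we bound the
+			 * listener socket.)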
+			 */
+			if (!socket_was_stolen(accept_thread_data))
+				unlink(server_data->buf_path.buf);
+
+			close(accept_thread_data->fd_listen);
+		}
+		if (accept_thread_data->fd_send_shutdown != -1)
+			close(accept_thread_data->fd_send_shutdown);
+		if (accept_thread_data->fd_wait_shutdown != -1)
+			close(accept_thread_data->fd_wait_shutdown);
+
+		free(server_data->accept_thread);
+	}
+
+	while (server_data->worker_thread_list) {
+		struct ipc_worker_thread_data *wtd =
+			server_data->worker_thread_list;
+
+		server_data->worker_thread_list = wtd->next_thread;
+		free(wtd);
+	}
+
+	pthread_cond_destroy(&server_data->work_available_cond);
+	pthread_mutex_destroy(&server_data->work_available_mutex);
+
+	strbuf_release(&server_data->buf_path);
+
+	free(server_data->fifo_fds);
+	free(server_data);
+}
diff --git a/contrib/buildsystems/CMakeLists.txt b/contrib/buildsystems/CMakeLists.txt
index 4bd41054ee7..4c27a373414 100644
--- a/contrib/buildsystems/CMakeLists.txt
+++ b/contrib/buildsystems/CMakeLists.txt
@@ -248,6 +248,8 @@ endif()
 if(CMAKE_SYSTEM_NAME STREQUAL "Windows")
 	list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-win32.c)
+else()
+	list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-unix-socket.c)
 endif()
 
 set(EXE_EXTENSION ${CMAKE_EXECUTABLE_SUFFIX})
diff --git a/simple-ipc.h b/simple-ipc.h
index cd525e711bd..8a6dfc72c83 100644
--- a/simple-ipc.h
+++ b/simple-ipc.h
@@ -5,7 +5,7 @@
  * See Documentation/technical/api-simple-ipc.txt
  */
 
-#if defined(GIT_WINDOWS_NATIVE)
+#if defined(GIT_WINDOWS_NATIVE) || !defined(NO_UNIX_SOCKETS)
 #define SUPPORTS_SIMPLE_IPC
 #endif
 
@@ -151,6 +151,11 @@ struct ipc_server_data;
 
 struct ipc_server_opts {
 	int nr_threads;
+
+	/*
+	 * Disallow chdir() when creating a Unix domain socket.
+	 */
+	unsigned int uds_disallow_chdir:1;
 };
 
 /*