From patchwork Wed Feb 17 21:48:37 2021
X-Patchwork-Submitter: Jeff Hostetler
X-Patchwork-Id: 12092447
Message-Id: <2d6858b1625aa3c96688c6c6a9157c2d2b16f43e.1613598529.git.gitgitgadget@gmail.com>
Date: Wed, 17 Feb 2021 21:48:37 +0000
Subject: [PATCH v4 01/12] pkt-line:
eliminate the need for static buffer in packet_write_gently() Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: Jeff Hostetler , Jeff King , SZEDER =?utf-8?b?R8OhYm9y?= , Johannes Schindelin , Jeff Hostetler , Jeff Hostetler Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler From: Jeff Hostetler Teach `packet_write_gently()` to write the pkt-line header and the actual buffer in 2 separate calls to `write_in_full()` and avoid the need for a static buffer, thread-safe scratch space, or an excessively large stack buffer. Change the API of `write_packetized_from_fd()` to accept a scratch space argument from its caller to avoid similar issues here. These changes are intended to make it easier to use pkt-line routines in a multi-threaded context with multiple concurrent writers writing to different streams. Signed-off-by: Jeff Hostetler --- convert.c | 7 ++++--- pkt-line.c | 28 +++++++++++++++++++--------- pkt-line.h | 12 +++++++++--- 3 files changed, 32 insertions(+), 15 deletions(-) diff --git a/convert.c b/convert.c index ee360c2f07ce..41012c2d301c 100644 --- a/convert.c +++ b/convert.c @@ -883,9 +883,10 @@ static int apply_multi_file_filter(const char *path, const char *src, size_t len if (err) goto done; - if (fd >= 0) - err = write_packetized_from_fd(fd, process->in); - else + if (fd >= 0) { + struct packet_scratch_space scratch; + err = write_packetized_from_fd(fd, process->in, &scratch); + } else err = write_packetized_from_buf(src, len, process->in); if (err) goto done; diff --git a/pkt-line.c b/pkt-line.c index d633005ef746..4cff2f7a68a5 100644 --- a/pkt-line.c +++ b/pkt-line.c @@ -196,17 +196,25 @@ int packet_write_fmt_gently(int fd, const char *fmt, ...) static int packet_write_gently(const int fd_out, const char *buf, size_t size) { - static char packet_write_buffer[LARGE_PACKET_MAX]; + char header[4]; size_t packet_size; - if (size > sizeof(packet_write_buffer) - 4) + if (size > LARGE_PACKET_DATA_MAX) return error(_("packet write failed - data exceeds max packet size")); packet_trace(buf, size, 1); packet_size = size + 4; - set_packet_header(packet_write_buffer, packet_size); - memcpy(packet_write_buffer + 4, buf, size); - if (write_in_full(fd_out, packet_write_buffer, packet_size) < 0) + + set_packet_header(header, packet_size); + + /* + * Write the header and the buffer in 2 parts so that we do not need + * to allocate a buffer or rely on a static buffer. This avoids perf + * and multi-threading issues. 
+ */ + + if (write_in_full(fd_out, header, 4) < 0 || + write_in_full(fd_out, buf, size) < 0) return error(_("packet write failed")); return 0; } @@ -242,19 +250,21 @@ void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len) packet_trace(data, len, 1); } -int write_packetized_from_fd(int fd_in, int fd_out) +int write_packetized_from_fd(int fd_in, int fd_out, + struct packet_scratch_space *scratch) { - static char buf[LARGE_PACKET_DATA_MAX]; int err = 0; ssize_t bytes_to_write; while (!err) { - bytes_to_write = xread(fd_in, buf, sizeof(buf)); + bytes_to_write = xread(fd_in, scratch->buffer, + sizeof(scratch->buffer)); if (bytes_to_write < 0) return COPY_READ_ERROR; if (bytes_to_write == 0) break; - err = packet_write_gently(fd_out, buf, bytes_to_write); + err = packet_write_gently(fd_out, scratch->buffer, + bytes_to_write); } if (!err) err = packet_flush_gently(fd_out); diff --git a/pkt-line.h b/pkt-line.h index 8c90daa59ef0..c0722aefe638 100644 --- a/pkt-line.h +++ b/pkt-line.h @@ -5,6 +5,13 @@ #include "strbuf.h" #include "sideband.h" +#define LARGE_PACKET_MAX 65520 +#define LARGE_PACKET_DATA_MAX (LARGE_PACKET_MAX - 4) + +struct packet_scratch_space { + char buffer[LARGE_PACKET_DATA_MAX]; /* does not include header bytes */ +}; + /* * Write a packetized stream, where each line is preceded by * its length (including the header) as a 4-byte hex number. @@ -32,7 +39,7 @@ void packet_buf_write(struct strbuf *buf, const char *fmt, ...) __attribute__((f void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len); int packet_flush_gently(int fd); int packet_write_fmt_gently(int fd, const char *fmt, ...) __attribute__((format (printf, 2, 3))); -int write_packetized_from_fd(int fd_in, int fd_out); +int write_packetized_from_fd(int fd_in, int fd_out, struct packet_scratch_space *scratch); int write_packetized_from_buf(const char *src_in, size_t len, int fd_out); /* @@ -213,8 +220,7 @@ enum packet_read_status packet_reader_read(struct packet_reader *reader); enum packet_read_status packet_reader_peek(struct packet_reader *reader); #define DEFAULT_PACKET_MAX 1000 -#define LARGE_PACKET_MAX 65520 -#define LARGE_PACKET_DATA_MAX (LARGE_PACKET_MAX - 4) + extern char packet_buffer[LARGE_PACKET_MAX]; struct packet_writer { From patchwork Wed Feb 17 21:48:38 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Johannes Schindelin X-Patchwork-Id: 12092449 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1C4BBC433E0 for ; Wed, 17 Feb 2021 21:49:50 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D763C64E5F for ; Wed, 17 Feb 2021 21:49:49 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231933AbhBQVts (ORCPT ); Wed, 17 Feb 2021 16:49:48 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58150 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232336AbhBQVtd 
Message-Id: <91a9f63d66924d14a22feedf7b1d88fe298b90bc.1613598529.git.gitgitgadget@gmail.com>
Date: Wed, 17 Feb 2021 21:48:38 +0000
Subject: [PATCH v4 02/12] pkt-line: do not issue flush packets in write_packetized_*()
To: git@vger.kernel.org
Cc: Jeff Hostetler, Jeff King, SZEDER Gábor, Johannes Schindelin, Jeff Hostetler, Johannes Schindelin
From: Johannes Schindelin

Remove the `packet_flush_gently()` call in `write_packetized_from_buf()` and
`write_packetized_from_fd()` and require the caller to call it if desired.
Rename both functions to `write_packetized_from_*_no_flush()` to prevent
later merge accidents.

`write_packetized_from_buf()` currently only has one caller:
`apply_multi_file_filter()` in `convert.c`. It always wants a flush packet
to be written after writing the payload.

However, we are about to introduce a caller that wants to write many
packets before a final flush packet, so let's make the caller responsible
for emitting the flush packet.
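To illustrate the combined effect of this patch and the previous one, a
caller written against the reworked API might look like the following
minimal sketch, mirroring what `apply_multi_file_filter()` in `convert.c`
now does (the helper name and file descriptors here are hypothetical, not
part of the series):

#include "cache.h"
#include "pkt-line.h"

/*
 * Hypothetical caller: stream an already-open file descriptor to a
 * filter process as pkt-lines. The scratch space lives on this
 * caller's stack, so concurrent threads can each bring their own,
 * and the caller decides when the terminating flush packet goes out.
 */
static int send_file_as_packets(int fd_in, int fd_filter)
{
	struct packet_scratch_space scratch;
	int err = write_packetized_from_fd_no_flush(fd_in, fd_filter,
						    &scratch);
	if (err)
		return err;

	/* The _no_flush variants leave the flush packet to the caller. */
	return packet_flush_gently(fd_filter);
}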
Signed-off-by: Jeff Hostetler Signed-off-by: Johannes Schindelin --- convert.c | 8 ++++++-- pkt-line.c | 10 +++------- pkt-line.h | 4 ++-- 3 files changed, 11 insertions(+), 11 deletions(-) diff --git a/convert.c b/convert.c index 41012c2d301c..bccf7afa8797 100644 --- a/convert.c +++ b/convert.c @@ -885,9 +885,13 @@ static int apply_multi_file_filter(const char *path, const char *src, size_t len if (fd >= 0) { struct packet_scratch_space scratch; - err = write_packetized_from_fd(fd, process->in, &scratch); + err = write_packetized_from_fd_no_flush(fd, process->in, &scratch); } else - err = write_packetized_from_buf(src, len, process->in); + err = write_packetized_from_buf_no_flush(src, len, process->in); + if (err) + goto done; + + err = packet_flush_gently(process->in); if (err) goto done; diff --git a/pkt-line.c b/pkt-line.c index 4cff2f7a68a5..3602b0d37092 100644 --- a/pkt-line.c +++ b/pkt-line.c @@ -250,8 +250,8 @@ void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len) packet_trace(data, len, 1); } -int write_packetized_from_fd(int fd_in, int fd_out, - struct packet_scratch_space *scratch) +int write_packetized_from_fd_no_flush(int fd_in, int fd_out, + struct packet_scratch_space *scratch) { int err = 0; ssize_t bytes_to_write; @@ -266,12 +266,10 @@ int write_packetized_from_fd(int fd_in, int fd_out, err = packet_write_gently(fd_out, scratch->buffer, bytes_to_write); } - if (!err) - err = packet_flush_gently(fd_out); return err; } -int write_packetized_from_buf(const char *src_in, size_t len, int fd_out) +int write_packetized_from_buf_no_flush(const char *src_in, size_t len, int fd_out) { int err = 0; size_t bytes_written = 0; @@ -287,8 +285,6 @@ int write_packetized_from_buf(const char *src_in, size_t len, int fd_out) err = packet_write_gently(fd_out, src_in + bytes_written, bytes_to_write); bytes_written += bytes_to_write; } - if (!err) - err = packet_flush_gently(fd_out); return err; } diff --git a/pkt-line.h b/pkt-line.h index c0722aefe638..a7149429ac35 100644 --- a/pkt-line.h +++ b/pkt-line.h @@ -39,8 +39,8 @@ void packet_buf_write(struct strbuf *buf, const char *fmt, ...) __attribute__((f void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len); int packet_flush_gently(int fd); int packet_write_fmt_gently(int fd, const char *fmt, ...) 
__attribute__((format (printf, 2, 3))); -int write_packetized_from_fd(int fd_in, int fd_out, struct packet_scratch_space *scratch); -int write_packetized_from_buf(const char *src_in, size_t len, int fd_out); +int write_packetized_from_fd_no_flush(int fd_in, int fd_out, struct packet_scratch_space *scratch); +int write_packetized_from_buf_no_flush(const char *src_in, size_t len, int fd_out); /* * Read a packetized line into the buffer, which must be at least size bytes From patchwork Wed Feb 17 21:48:39 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Johannes Schindelin X-Patchwork-Id: 12092451 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 31475C433DB for ; Wed, 17 Feb 2021 21:49:51 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E9DDE64DEC for ; Wed, 17 Feb 2021 21:49:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231761AbhBQVtu (ORCPT ); Wed, 17 Feb 2021 16:49:50 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58152 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232355AbhBQVte (ORCPT ); Wed, 17 Feb 2021 16:49:34 -0500 Received: from mail-wm1-x332.google.com (mail-wm1-x332.google.com [IPv6:2a00:1450:4864:20::332]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id ADFDEC061786 for ; Wed, 17 Feb 2021 13:48:53 -0800 (PST) Received: by mail-wm1-x332.google.com with SMTP id x4so5457369wmi.3 for ; Wed, 17 Feb 2021 13:48:53 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=ZrqHdrLvGrETy2dSaKQym4tWVcFDSXunor28gito1nY=; b=J8WI1yaAIBzRx2Z1i3cy3b+x0MdJTK+lwD0FclOOV+ehYu/CbYu/SIPs2Pxiut8akD GH8MQDlgkjd1YTs0PTrXRhQ02uRW6FmmSNMaJU+Btmtt7xtkMNrVEjxEq1qo3kf0mJ6P //sfKJ8aOj64H/g6sRbl/av1s1q/xCINBM6PYvoR9SK9IS7yAm48sFUIO6kx7AuyOocn FSHWnxbqkwphpJhNcAZ3d23zAdS1YC8c/FBjncQlKUD2YAEftAfvu7UiHuLIAoMvpxOR g3XVWTDdva3NJ57IyLyCXiiXhrH5bias0eFiAmesg+WtzADTOxf+Ppo7afJG+kLwKv+1 d6+g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=ZrqHdrLvGrETy2dSaKQym4tWVcFDSXunor28gito1nY=; b=nzPxdBcvH5i8c+AHGlJy9O6bBqgqFFs9AJX3gZ+Fwu497ulOclhiVBM/ANFJHD2IQ9 9IjckGpyBu5TqFPtVZK0PvOwN/LPf/9t4ea0ZYzzqoLG0F9utYp3qil7iEhb6lQTilu7 aYWgPDXstKhqsQhR+NDlE+pQjAf/XZ1l18pLAcNgm0iGiwHJLWM+Yk/XV1qv2ye+9GuR yDtsT2OTHDQvzMkcwHaMCcP8riQ3B6GO2iT8H6ipHH614rqgmCFtTHzJj0mLnSZU/4lX KomBdSyinVJSNoPBgm4cdeG6NMuvhSs45/zGYVjw3mtGQ3geJLikh10CsIfO9nRWnsEl 9nmA== X-Gm-Message-State: AOAM531AQLz7yiSvZO4bkTdXVVJ1S7OPB4oi6pau5NP7VHcMyLpbpfc9 pw83PKcEZjXqQPDO9qibfmUIJRIRbp8= X-Google-Smtp-Source: ABdhPJzSsIFw68XuMfUbOAqDKGiuh2UTuTbhLSV6zo1ClxiRpDYL8yppr+iEfWMnx/8JN6UlLgmUdw== X-Received: by 2002:a1c:6802:: with 
Date: Wed, 17 Feb 2021 21:48:39 +0000
Subject: [PATCH v4 03/12] pkt-line: (optionally) libify the packet readers
To: git@vger.kernel.org
Cc: Jeff Hostetler, Jeff King, SZEDER Gábor, Johannes Schindelin, Jeff Hostetler, Johannes Schindelin
From: Johannes Schindelin

So far, the (possibly indirect) callers of `get_packet_data()` can ask that
function to return an error instead of `die()`ing upon end-of-file. However,
random read errors will still cause the process to die. So let's introduce
an explicit option to tell the packet reader machinery to please be nice
and only return an error.

This change prepares pkt-line for use by long-running daemon processes.
Such processes should be able to serve multiple concurrent clients and
survive random IO errors. If there is an error on one connection, a daemon
should be able to drop that connection and continue serving existing and
future connections.

This ability will be used by a Git-aware "Internal FSMonitor" feature in a
later patch series.

Signed-off-by: Johannes Schindelin
---
 pkt-line.c | 19 +++++++++++++++++--
 pkt-line.h |  4 ++++
 2 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/pkt-line.c b/pkt-line.c
index 3602b0d37092..83c46e6b46ee 100644
--- a/pkt-line.c
+++ b/pkt-line.c
@@ -304,8 +304,11 @@ static int get_packet_data(int fd, char **src_buf, size_t *src_size, *src_size -= ret; } else { ret = read_in_full(fd, dst, size); - if (ret < 0) + if (ret < 0) { + if (options & PACKET_READ_NEVER_DIE) + return error_errno(_("read error")); die_errno(_("read error")); + } } /* And complain if we didn't get enough bytes to satisfy the read.
*/ @@ -313,6 +316,8 @@ static int get_packet_data(int fd, char **src_buf, size_t *src_size, if (options & PACKET_READ_GENTLE_ON_EOF) return -1; + if (options & PACKET_READ_NEVER_DIE) + return error(_("the remote end hung up unexpectedly")); die(_("the remote end hung up unexpectedly")); } @@ -341,6 +346,9 @@ enum packet_read_status packet_read_with_status(int fd, char **src_buffer, len = packet_length(linelen); if (len < 0) { + if (options & PACKET_READ_NEVER_DIE) + return error(_("protocol error: bad line length " + "character: %.4s"), linelen); die(_("protocol error: bad line length character: %.4s"), linelen); } else if (!len) { packet_trace("0000", 4, 0); @@ -355,12 +363,19 @@ enum packet_read_status packet_read_with_status(int fd, char **src_buffer, *pktlen = 0; return PACKET_READ_RESPONSE_END; } else if (len < 4) { + if (options & PACKET_READ_NEVER_DIE) + return error(_("protocol error: bad line length %d"), + len); die(_("protocol error: bad line length %d"), len); } len -= 4; - if ((unsigned)len >= size) + if ((unsigned)len >= size) { + if (options & PACKET_READ_NEVER_DIE) + return error(_("protocol error: bad line length %d"), + len); die(_("protocol error: bad line length %d"), len); + } if (get_packet_data(fd, src_buffer, src_len, buffer, len, options) < 0) { *pktlen = -1; diff --git a/pkt-line.h b/pkt-line.h index a7149429ac35..2e472efaf2c5 100644 --- a/pkt-line.h +++ b/pkt-line.h @@ -75,10 +75,14 @@ int write_packetized_from_buf_no_flush(const char *src_in, size_t len, int fd_ou * * If options contains PACKET_READ_DIE_ON_ERR_PACKET, it dies when it sees an * ERR packet. + * + * With `PACKET_READ_NEVER_DIE`, no errors are allowed to trigger die() (except + * an ERR packet, when `PACKET_READ_DIE_ON_ERR_PACKET` is in effect). */ #define PACKET_READ_GENTLE_ON_EOF (1u<<0) #define PACKET_READ_CHOMP_NEWLINE (1u<<1) #define PACKET_READ_DIE_ON_ERR_PACKET (1u<<2) +#define PACKET_READ_NEVER_DIE (1u<<3) int packet_read(int fd, char **src_buffer, size_t *src_len, char *buffer, unsigned size, int options); From patchwork Wed Feb 17 21:48:40 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Johannes Schindelin X-Patchwork-Id: 12092453 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2EE00C433DB for ; Wed, 17 Feb 2021 21:49:53 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 014A564E5F for ; Wed, 17 Feb 2021 21:49:52 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232734AbhBQVtu (ORCPT ); Wed, 17 Feb 2021 16:49:50 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58160 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232598AbhBQVtf (ORCPT ); Wed, 17 Feb 2021 16:49:35 -0500 Received: from mail-wr1-x434.google.com (mail-wr1-x434.google.com [IPv6:2a00:1450:4864:20::434]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7048DC061788 for ; Wed, 17 Feb 2021 13:48:54 
Message-Id: <81e14bed955c6b50e155f6f73cb642d6c9f2fd73.1613598529.git.gitgitgadget@gmail.com>
Date: Wed, 17 Feb 2021 21:48:40 +0000
Subject: [PATCH v4 04/12] pkt-line: add options argument to read_packetized_to_strbuf()
To: git@vger.kernel.org
Cc: Jeff Hostetler, Jeff King, SZEDER Gábor, Johannes Schindelin, Jeff Hostetler, Johannes Schindelin
From: Johannes Schindelin

Update the calling sequence of `read_packetized_to_strbuf()` to take an
options argument and not assume a fixed set of options. Update the only
existing caller accordingly to explicitly pass the formerly-assumed flags.

The `read_packetized_to_strbuf()` function calls `packet_read()` with a
fixed set of assumed options (`PACKET_READ_GENTLE_ON_EOF`). This assumption
has been fine for the single existing caller `apply_multi_file_filter()`
in `convert.c`. In a later commit we would like to add other callers to
`read_packetized_to_strbuf()` that need a different set of options.
Signed-off-by: Johannes Schindelin Signed-off-by: Jeff Hostetler --- convert.c | 3 ++- pkt-line.c | 4 ++-- pkt-line.h | 2 +- 3 files changed, 5 insertions(+), 4 deletions(-) diff --git a/convert.c b/convert.c index bccf7afa8797..9f44f00d841f 100644 --- a/convert.c +++ b/convert.c @@ -908,7 +908,8 @@ static int apply_multi_file_filter(const char *path, const char *src, size_t len if (err) goto done; - err = read_packetized_to_strbuf(process->out, &nbuf) < 0; + err = read_packetized_to_strbuf(process->out, &nbuf, + PACKET_READ_GENTLE_ON_EOF) < 0; if (err) goto done; diff --git a/pkt-line.c b/pkt-line.c index 83c46e6b46ee..18ecad65e08c 100644 --- a/pkt-line.c +++ b/pkt-line.c @@ -442,7 +442,7 @@ char *packet_read_line_buf(char **src, size_t *src_len, int *dst_len) return packet_read_line_generic(-1, src, src_len, dst_len); } -ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out) +ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out, int options) { int packet_len; @@ -458,7 +458,7 @@ ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out) * that there is already room for the extra byte. */ sb_out->buf + sb_out->len, LARGE_PACKET_DATA_MAX+1, - PACKET_READ_GENTLE_ON_EOF); + options); if (packet_len <= 0) break; sb_out->len += packet_len; diff --git a/pkt-line.h b/pkt-line.h index 2e472efaf2c5..e347fe46832a 100644 --- a/pkt-line.h +++ b/pkt-line.h @@ -142,7 +142,7 @@ char *packet_read_line_buf(char **src_buf, size_t *src_len, int *size); /* * Reads a stream of variable sized packets until a flush packet is detected. */ -ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out); +ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out, int options); /* * Receive multiplexed output stream over git native protocol. 
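Together with the previous patch, a caller can now opt into the libified
error behavior explicitly. A minimal sketch of such a caller, under the
assumption that it runs inside a long-lived process (the helper name is
hypothetical; the flags are the ones declared in `pkt-line.h`):

#include "cache.h"
#include "pkt-line.h"
#include "strbuf.h"

/*
 * Hypothetical long-running caller: collect one packetized response
 * into a strbuf. PACKET_READ_NEVER_DIE turns hard die()s into error()
 * returns so a misbehaving peer cannot take the whole process down;
 * PACKET_READ_GENTLE_ON_EOF keeps the pre-existing EOF behavior.
 */
static int read_response(int fd, struct strbuf *answer)
{
	if (read_packetized_to_strbuf(fd, answer,
				      PACKET_READ_GENTLE_ON_EOF |
				      PACKET_READ_NEVER_DIE) < 0)
		return error(_("could not read packetized response"));
	return 0;
}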
From patchwork Wed Feb 17 21:48:41 2021
X-Patchwork-Submitter: Jeff Hostetler
X-Patchwork-Id: 12092455
Message-Id: <22eec60761a88107b2e337ce13eed1020352aa73.1613598529.git.gitgitgadget@gmail.com>
Date: Wed, 17 Feb 2021 21:48:41 +0000
Subject: [PATCH v4 05/12] simple-ipc: design documentation for new IPC mechanism
To: git@vger.kernel.org
Cc: Jeff Hostetler, Jeff King, SZEDER Gábor, Johannes Schindelin, Jeff Hostetler, Jeff Hostetler
From: Jeff Hostetler

Brief design documentation for a new IPC mechanism allowing a foreground
Git client to talk with an existing daemon process at a known location
using a named pipe or unix domain socket.

Signed-off-by: Johannes Schindelin
Signed-off-by: Jeff Hostetler
---
 Documentation/technical/api-simple-ipc.txt | 34 ++++++++++++++++++++++
 1 file changed, 34 insertions(+)
 create mode 100644 Documentation/technical/api-simple-ipc.txt

diff --git a/Documentation/technical/api-simple-ipc.txt b/Documentation/technical/api-simple-ipc.txt
new file mode 100644
index 000000000000..670a5c163e39
--- /dev/null
+++ b/Documentation/technical/api-simple-ipc.txt
@@ -0,0 +1,34 @@
+simple-ipc API
+==============
+
+The simple-ipc API is used to send an IPC message and response between
+a (presumably) foreground Git client process and a background server or
+daemon process. The server process must already be running. Multiple
+client processes can simultaneously communicate with the server
+process.
+
+Communication occurs over a named pipe on Windows and a Unix domain
+socket on other platforms. Clients and the server rendezvous at a
+previously agreed-to application-specific pathname (which is outside
+the scope of this design).
+
+This IPC mechanism differs from the existing `sub-process.c` model
+(Documentation/technical/long-running-process-protocol.txt) that is
+used by applications like Git-LFS. In the simple-ipc model the server
+is assumed to be a very long-running system service. In contrast, in
+the LFS-style sub-process model the helper is started with the
+foreground process and exits when the foreground process terminates.
+
+How the simple-ipc server is started is also outside the scope of the
+IPC mechanism. For example, the server might be started during
+maintenance operations.
+
+The IPC protocol consists of a single request message from the client
+and an optional response message from the server. For simplicity,
+pkt-line routines are used to hide chunking and buffering concerns.
+Each side terminates its message with a flush packet.
+(Documentation/technical/protocol-common.txt)
+
+The actual format of the client and server messages is application
+specific. The IPC layer transmits and receives an opaque buffer without
+any concern for the content within.
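To make the round trip concrete, a client written against the
`ipc_client_send_command()` API that the next patch introduces might look
like the following minimal sketch (the daemon path, request string, and
helper name are hypothetical):

#include "cache.h"
#include "simple-ipc.h"
#include "strbuf.h"

/*
 * Hypothetical client: send one application-defined request to an
 * already-running daemon and print the (opaque) reply. The connect
 * options ask the client to spin briefly if the server is busy or
 * the pipe/socket has not been created yet.
 */
static int query_daemon(const char *path, const char *request)
{
	struct ipc_client_connect_options options =
		IPC_CLIENT_CONNECT_OPTIONS_INIT;
	struct strbuf answer = STRBUF_INIT;
	int ret;

	options.wait_if_busy = 1;
	options.wait_if_not_found = 1;

	ret = ipc_client_send_command(path, &options, request, &answer);
	if (!ret)
		printf("%s\n", answer.buf);

	strbuf_release(&answer);
	return ret;
}

The server side of the same round trip is the `ipc_server_run()` machinery
added in the next patch, which hands each decoded request to an
application-supplied callback and relays the callback's reply back over
the same connection.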
From patchwork Wed Feb 17 21:48:42 2021
X-Patchwork-Submitter: Jeff Hostetler
X-Patchwork-Id: 12092457
Message-Id: <171ec43ecfa45054afc378aca00c13282e438c47.1613598529.git.gitgitgadget@gmail.com>
Date: Wed, 17 Feb 2021 21:48:42 +0000
Subject: [PATCH v4 06/12] simple-ipc:
add win32 implementation Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: Jeff Hostetler , Jeff King , SZEDER =?utf-8?b?R8OhYm9y?= , Johannes Schindelin , Jeff Hostetler , Jeff Hostetler Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler From: Jeff Hostetler Create Windows implementation of "simple-ipc" using named pipes. Signed-off-by: Jeff Hostetler --- Makefile | 5 + compat/simple-ipc/ipc-shared.c | 28 ++ compat/simple-ipc/ipc-win32.c | 749 ++++++++++++++++++++++++++++ config.mak.uname | 2 + contrib/buildsystems/CMakeLists.txt | 4 + simple-ipc.h | 224 +++++++++ 6 files changed, 1012 insertions(+) create mode 100644 compat/simple-ipc/ipc-shared.c create mode 100644 compat/simple-ipc/ipc-win32.c create mode 100644 simple-ipc.h diff --git a/Makefile b/Makefile index 4128b457e14b..40d5cab78d3f 100644 --- a/Makefile +++ b/Makefile @@ -1679,6 +1679,11 @@ else LIB_OBJS += unix-socket.o endif +ifdef USE_WIN32_IPC + LIB_OBJS += compat/simple-ipc/ipc-shared.o + LIB_OBJS += compat/simple-ipc/ipc-win32.o +endif + ifdef NO_ICONV BASIC_CFLAGS += -DNO_ICONV endif diff --git a/compat/simple-ipc/ipc-shared.c b/compat/simple-ipc/ipc-shared.c new file mode 100644 index 000000000000..1edec8159532 --- /dev/null +++ b/compat/simple-ipc/ipc-shared.c @@ -0,0 +1,28 @@ +#include "cache.h" +#include "simple-ipc.h" +#include "strbuf.h" +#include "pkt-line.h" +#include "thread-utils.h" + +#ifdef SUPPORTS_SIMPLE_IPC + +int ipc_server_run(const char *path, const struct ipc_server_opts *opts, + ipc_server_application_cb *application_cb, + void *application_data) +{ + struct ipc_server_data *server_data = NULL; + int ret; + + ret = ipc_server_run_async(&server_data, path, opts, + application_cb, application_data); + if (ret) + return ret; + + ret = ipc_server_await(server_data); + + ipc_server_free(server_data); + + return ret; +} + +#endif /* SUPPORTS_SIMPLE_IPC */ diff --git a/compat/simple-ipc/ipc-win32.c b/compat/simple-ipc/ipc-win32.c new file mode 100644 index 000000000000..f0cfbf9d15c3 --- /dev/null +++ b/compat/simple-ipc/ipc-win32.c @@ -0,0 +1,749 @@ +#include "cache.h" +#include "simple-ipc.h" +#include "strbuf.h" +#include "pkt-line.h" +#include "thread-utils.h" + +#ifndef GIT_WINDOWS_NATIVE +#error This file can only be compiled on Windows +#endif + +static int initialize_pipe_name(const char *path, wchar_t *wpath, size_t alloc) +{ + int off = 0; + struct strbuf realpath = STRBUF_INIT; + + if (!strbuf_realpath(&realpath, path, 0)) + return -1; + + off = swprintf(wpath, alloc, L"\\\\.\\pipe\\"); + if (xutftowcs(wpath + off, realpath.buf, alloc - off) < 0) + return -1; + + /* Handle drive prefix */ + if (wpath[off] && wpath[off + 1] == L':') { + wpath[off + 1] = L'_'; + off += 2; + } + + for (; wpath[off]; off++) + if (wpath[off] == L'/') + wpath[off] = L'\\'; + + strbuf_release(&realpath); + return 0; +} + +static enum ipc_active_state get_active_state(wchar_t *pipe_path) +{ + if (WaitNamedPipeW(pipe_path, NMPWAIT_USE_DEFAULT_WAIT)) + return IPC_STATE__LISTENING; + + if (GetLastError() == ERROR_SEM_TIMEOUT) + return IPC_STATE__NOT_LISTENING; + + if (GetLastError() == ERROR_FILE_NOT_FOUND) + return IPC_STATE__PATH_NOT_FOUND; + + return IPC_STATE__OTHER_ERROR; +} + +enum ipc_active_state ipc_get_active_state(const char *path) +{ + wchar_t pipe_path[MAX_PATH]; + + if (initialize_pipe_name(path, pipe_path, ARRAY_SIZE(pipe_path)) < 0) + return IPC_STATE__INVALID_PATH; + + return get_active_state(pipe_path); +} + +#define WAIT_STEP_MS (50) + +static enum ipc_active_state 
connect_to_server( + const wchar_t *wpath, + DWORD timeout_ms, + const struct ipc_client_connect_options *options, + int *pfd) +{ + DWORD t_start_ms, t_waited_ms; + DWORD step_ms; + HANDLE hPipe = INVALID_HANDLE_VALUE; + DWORD mode = PIPE_READMODE_BYTE; + DWORD gle; + + *pfd = -1; + + for (;;) { + hPipe = CreateFileW(wpath, GENERIC_READ | GENERIC_WRITE, + 0, NULL, OPEN_EXISTING, 0, NULL); + if (hPipe != INVALID_HANDLE_VALUE) + break; + + gle = GetLastError(); + + switch (gle) { + case ERROR_FILE_NOT_FOUND: + if (!options->wait_if_not_found) + return IPC_STATE__PATH_NOT_FOUND; + if (!timeout_ms) + return IPC_STATE__PATH_NOT_FOUND; + + step_ms = (timeout_ms < WAIT_STEP_MS) ? + timeout_ms : WAIT_STEP_MS; + sleep_millisec(step_ms); + + timeout_ms -= step_ms; + break; /* try again */ + + case ERROR_PIPE_BUSY: + if (!options->wait_if_busy) + return IPC_STATE__NOT_LISTENING; + if (!timeout_ms) + return IPC_STATE__NOT_LISTENING; + + t_start_ms = (DWORD)(getnanotime() / 1000000); + + if (!WaitNamedPipeW(wpath, timeout_ms)) { + if (GetLastError() == ERROR_SEM_TIMEOUT) + return IPC_STATE__NOT_LISTENING; + + return IPC_STATE__OTHER_ERROR; + } + + /* + * A pipe server instance became available. + * Race other client processes to connect to + * it. + * + * But first decrement our overall timeout so + * that we don't starve if we keep losing the + * race. But also guard against special + * NPMWAIT_ values (0 and -1). + */ + t_waited_ms = (DWORD)(getnanotime() / 1000000) - t_start_ms; + if (t_waited_ms < timeout_ms) + timeout_ms -= t_waited_ms; + else + timeout_ms = 1; + break; /* try again */ + + default: + return IPC_STATE__OTHER_ERROR; + } + } + + if (!SetNamedPipeHandleState(hPipe, &mode, NULL, NULL)) { + CloseHandle(hPipe); + return IPC_STATE__OTHER_ERROR; + } + + *pfd = _open_osfhandle((intptr_t)hPipe, O_RDWR|O_BINARY); + if (*pfd < 0) { + CloseHandle(hPipe); + return IPC_STATE__OTHER_ERROR; + } + + /* fd now owns hPipe */ + + return IPC_STATE__LISTENING; +} + +/* + * The default connection timeout for Windows clients. + * + * This is not currently part of the ipc_ API (nor the config settings) + * because of differences between Windows and other platforms. + * + * This value was chosen at random. 
+ */ +#define WINDOWS_CONNECTION_TIMEOUT_MS (30000) + +enum ipc_active_state ipc_client_try_connect( + const char *path, + const struct ipc_client_connect_options *options, + struct ipc_client_connection **p_connection) +{ + wchar_t wpath[MAX_PATH]; + enum ipc_active_state state = IPC_STATE__OTHER_ERROR; + int fd = -1; + + *p_connection = NULL; + + trace2_region_enter("ipc-client", "try-connect", NULL); + trace2_data_string("ipc-client", NULL, "try-connect/path", path); + + if (initialize_pipe_name(path, wpath, ARRAY_SIZE(wpath)) < 0) + state = IPC_STATE__INVALID_PATH; + else + state = connect_to_server(wpath, WINDOWS_CONNECTION_TIMEOUT_MS, + options, &fd); + + trace2_data_intmax("ipc-client", NULL, "try-connect/state", + (intmax_t)state); + trace2_region_leave("ipc-client", "try-connect", NULL); + + if (state == IPC_STATE__LISTENING) { + (*p_connection) = xcalloc(1, sizeof(struct ipc_client_connection)); + (*p_connection)->fd = fd; + } + + return state; +} + +void ipc_client_close_connection(struct ipc_client_connection *connection) +{ + if (!connection) + return; + + if (connection->fd != -1) + close(connection->fd); + + free(connection); +} + +int ipc_client_send_command_to_connection( + struct ipc_client_connection *connection, + const char *message, struct strbuf *answer) +{ + int ret = 0; + + strbuf_setlen(answer, 0); + + trace2_region_enter("ipc-client", "send-command", NULL); + + if (write_packetized_from_buf_no_flush(message, strlen(message), + connection->fd) < 0 || + packet_flush_gently(connection->fd) < 0) { + ret = error(_("could not send IPC command")); + goto done; + } + + FlushFileBuffers((HANDLE)_get_osfhandle(connection->fd)); + + if (read_packetized_to_strbuf( + connection->fd, answer, + PACKET_READ_GENTLE_ON_EOF | PACKET_READ_NEVER_DIE) < 0) { + ret = error(_("could not read IPC response")); + goto done; + } + +done: + trace2_region_leave("ipc-client", "send-command", NULL); + return ret; +} + +int ipc_client_send_command(const char *path, + const struct ipc_client_connect_options *options, + const char *message, struct strbuf *response) +{ + int ret = -1; + enum ipc_active_state state; + struct ipc_client_connection *connection = NULL; + + state = ipc_client_try_connect(path, options, &connection); + + if (state != IPC_STATE__LISTENING) + return ret; + + ret = ipc_client_send_command_to_connection(connection, message, response); + + ipc_client_close_connection(connection); + + return ret; +} + +/* + * Duplicate the given pipe handle and wrap it in a file descriptor so + * that we can use pkt-line on it. + */ +static int dup_fd_from_pipe(const HANDLE pipe) +{ + HANDLE process = GetCurrentProcess(); + HANDLE handle; + int fd; + + if (!DuplicateHandle(process, pipe, process, &handle, 0, FALSE, + DUPLICATE_SAME_ACCESS)) { + errno = err_win_to_posix(GetLastError()); + return -1; + } + + fd = _open_osfhandle((intptr_t)handle, O_RDWR|O_BINARY); + if (fd < 0) { + errno = err_win_to_posix(GetLastError()); + CloseHandle(handle); + return -1; + } + + /* + * `handle` is now owned by `fd` and will be automatically closed + * when the descriptor is closed. + */ + + return fd; +} + +/* + * Magic numbers used to annotate callback instance data. + * These are used to help guard against accidentally passing the + * wrong instance data across multiple levels of callbacks (which + * is easy to do if there are `void*` arguments). 
+ */ +enum magic { + MAGIC_SERVER_REPLY_DATA, + MAGIC_SERVER_THREAD_DATA, + MAGIC_SERVER_DATA, +}; + +struct ipc_server_reply_data { + enum magic magic; + int fd; + struct ipc_server_thread_data *server_thread_data; +}; + +struct ipc_server_thread_data { + enum magic magic; + struct ipc_server_thread_data *next_thread; + struct ipc_server_data *server_data; + pthread_t pthread_id; + HANDLE hPipe; +}; + +/* + * On Windows, the conceptual "ipc-server" is implemented as a pool of + * n idential/peer "server-thread" threads. That is, there is no + * hierarchy of threads; and therefore no controller thread managing + * the pool. Each thread has an independent handle to the named pipe, + * receives incoming connections, processes the client, and re-uses + * the pipe for the next client connection. + * + * Therefore, the "ipc-server" only needs to maintain a list of the + * spawned threads for eventual "join" purposes. + * + * A single "stop-event" is visible to all of the server threads to + * tell them to shutdown (when idle). + */ +struct ipc_server_data { + enum magic magic; + ipc_server_application_cb *application_cb; + void *application_data; + struct strbuf buf_path; + wchar_t wpath[MAX_PATH]; + + HANDLE hEventStopRequested; + struct ipc_server_thread_data *thread_list; + int is_stopped; +}; + +enum connect_result { + CR_CONNECTED = 0, + CR_CONNECT_PENDING, + CR_CONNECT_ERROR, + CR_WAIT_ERROR, + CR_SHUTDOWN, +}; + +static enum connect_result queue_overlapped_connect( + struct ipc_server_thread_data *server_thread_data, + OVERLAPPED *lpo) +{ + if (ConnectNamedPipe(server_thread_data->hPipe, lpo)) + goto failed; + + switch (GetLastError()) { + case ERROR_IO_PENDING: + return CR_CONNECT_PENDING; + + case ERROR_PIPE_CONNECTED: + SetEvent(lpo->hEvent); + return CR_CONNECTED; + + default: + break; + } + +failed: + error(_("ConnectNamedPipe failed for '%s' (%lu)"), + server_thread_data->server_data->buf_path.buf, + GetLastError()); + return CR_CONNECT_ERROR; +} + +/* + * Use Windows Overlapped IO to wait for a connection or for our event + * to be signalled. + */ +static enum connect_result wait_for_connection( + struct ipc_server_thread_data *server_thread_data, + OVERLAPPED *lpo) +{ + enum connect_result r; + HANDLE waitHandles[2]; + DWORD dwWaitResult; + + r = queue_overlapped_connect(server_thread_data, lpo); + if (r != CR_CONNECT_PENDING) + return r; + + waitHandles[0] = server_thread_data->server_data->hEventStopRequested; + waitHandles[1] = lpo->hEvent; + + dwWaitResult = WaitForMultipleObjects(2, waitHandles, FALSE, INFINITE); + switch (dwWaitResult) { + case WAIT_OBJECT_0 + 0: + return CR_SHUTDOWN; + + case WAIT_OBJECT_0 + 1: + ResetEvent(lpo->hEvent); + return CR_CONNECTED; + + default: + return CR_WAIT_ERROR; + } +} + +/* + * Forward declare our reply callback function so that any compiler + * errors are reported when we actually define the function (in addition + * to any errors reported when we try to pass this callback function as + * a parameter in a function call). The former are easier to understand. + */ +static ipc_server_reply_cb do_io_reply_callback; + +/* + * Relay application's response message to the client process. + * (We do not flush at this point because we allow the caller + * to chunk data to the client thru us.) 
+ */ +static int do_io_reply_callback(struct ipc_server_reply_data *reply_data, + const char *response, size_t response_len) +{ + if (reply_data->magic != MAGIC_SERVER_REPLY_DATA) + BUG("reply_cb called with wrong instance data"); + + return write_packetized_from_buf_no_flush(response, response_len, + reply_data->fd); +} + +/* + * Receive the request/command from the client and pass it to the + * registered request-callback. The request-callback will compose + * a response and call our reply-callback to send it to the client. + * + * Simple-IPC only contains one round trip, so we flush and close + * here after the response. + */ +static int do_io(struct ipc_server_thread_data *server_thread_data) +{ + struct strbuf buf = STRBUF_INIT; + struct ipc_server_reply_data reply_data; + int ret = 0; + + reply_data.magic = MAGIC_SERVER_REPLY_DATA; + reply_data.server_thread_data = server_thread_data; + + reply_data.fd = dup_fd_from_pipe(server_thread_data->hPipe); + if (reply_data.fd < 0) + return error(_("could not create fd from pipe for '%s'"), + server_thread_data->server_data->buf_path.buf); + + ret = read_packetized_to_strbuf( + reply_data.fd, &buf, + PACKET_READ_GENTLE_ON_EOF | PACKET_READ_NEVER_DIE); + if (ret >= 0) { + ret = server_thread_data->server_data->application_cb( + server_thread_data->server_data->application_data, + buf.buf, do_io_reply_callback, &reply_data); + + packet_flush_gently(reply_data.fd); + + FlushFileBuffers((HANDLE)_get_osfhandle((reply_data.fd))); + } + else { + /* + * The client probably disconnected/shutdown before it + * could send a well-formed message. Ignore it. + */ + } + + strbuf_release(&buf); + close(reply_data.fd); + + return ret; +} + +/* + * Handle IPC request and response with this connected client. And reset + * the pipe to prepare for the next client. + */ +static int use_connection(struct ipc_server_thread_data *server_thread_data) +{ + int ret; + + ret = do_io(server_thread_data); + + FlushFileBuffers(server_thread_data->hPipe); + DisconnectNamedPipe(server_thread_data->hPipe); + + return ret; +} + +/* + * Thread proc for an IPC server worker thread. It handles a series of + * connections from clients. It cleans and reuses the hPipe between each + * client. + */ +static void *server_thread_proc(void *_server_thread_data) +{ + struct ipc_server_thread_data *server_thread_data = _server_thread_data; + HANDLE hEventConnected = INVALID_HANDLE_VALUE; + OVERLAPPED oConnect; + enum connect_result cr; + int ret; + + assert(server_thread_data->hPipe != INVALID_HANDLE_VALUE); + + trace2_thread_start("ipc-server"); + trace2_data_string("ipc-server", NULL, "pipe", + server_thread_data->server_data->buf_path.buf); + + hEventConnected = CreateEventW(NULL, TRUE, FALSE, NULL); + + memset(&oConnect, 0, sizeof(oConnect)); + oConnect.hEvent = hEventConnected; + + for (;;) { + cr = wait_for_connection(server_thread_data, &oConnect); + + switch (cr) { + case CR_SHUTDOWN: + goto finished; + + case CR_CONNECTED: + ret = use_connection(server_thread_data); + if (ret == SIMPLE_IPC_QUIT) { + ipc_server_stop_async( + server_thread_data->server_data); + goto finished; + } + if (ret > 0) { + /* + * Ignore (transient) IO errors with this + * client and reset for the next client. + */ + } + break; + + case CR_CONNECT_PENDING: + /* By construction, this should not happen. */ + BUG("ipc-server[%s]: unexpeced CR_CONNECT_PENDING", + server_thread_data->server_data->buf_path.buf); + + case CR_CONNECT_ERROR: + case CR_WAIT_ERROR: + /* + * Ignore these theoretical errors. 
+ */ + DisconnectNamedPipe(server_thread_data->hPipe); + break; + + default: + BUG("unandled case after wait_for_connection"); + } + } + +finished: + CloseHandle(server_thread_data->hPipe); + CloseHandle(hEventConnected); + + trace2_thread_exit(); + return NULL; +} + +static HANDLE create_new_pipe(wchar_t *wpath, int is_first) +{ + HANDLE hPipe; + DWORD dwOpenMode, dwPipeMode; + LPSECURITY_ATTRIBUTES lpsa = NULL; + + dwOpenMode = PIPE_ACCESS_INBOUND | PIPE_ACCESS_OUTBOUND | + FILE_FLAG_OVERLAPPED; + + dwPipeMode = PIPE_TYPE_MESSAGE | PIPE_READMODE_BYTE | PIPE_WAIT | + PIPE_REJECT_REMOTE_CLIENTS; + + if (is_first) { + dwOpenMode |= FILE_FLAG_FIRST_PIPE_INSTANCE; + + /* + * On Windows, the first server pipe instance gets to + * set the ACL / Security Attributes on the named + * pipe; subsequent instances inherit and cannot + * change them. + * + * TODO Should we allow the application layer to + * specify security attributes, such as `LocalService` + * or `LocalSystem`, when we create the named pipe? + * This question is probably not important when the + * daemon is started by a foreground user process and + * only needs to talk to the current user, but may be + * if the daemon is run via the Control Panel as a + * System Service. + */ + } + + hPipe = CreateNamedPipeW(wpath, dwOpenMode, dwPipeMode, + PIPE_UNLIMITED_INSTANCES, 1024, 1024, 0, lpsa); + + return hPipe; +} + +int ipc_server_run_async(struct ipc_server_data **returned_server_data, + const char *path, const struct ipc_server_opts *opts, + ipc_server_application_cb *application_cb, + void *application_data) +{ + struct ipc_server_data *server_data; + wchar_t wpath[MAX_PATH]; + HANDLE hPipeFirst = INVALID_HANDLE_VALUE; + int k; + int ret = 0; + int nr_threads = opts->nr_threads; + + *returned_server_data = NULL; + + ret = initialize_pipe_name(path, wpath, ARRAY_SIZE(wpath)); + if (ret < 0) + return error( + _("could not create normalized wchar_t path for '%s'"), + path); + + hPipeFirst = create_new_pipe(wpath, 1); + if (hPipeFirst == INVALID_HANDLE_VALUE) + return error(_("IPC server already running on '%s'"), path); + + server_data = xcalloc(1, sizeof(*server_data)); + server_data->magic = MAGIC_SERVER_DATA; + server_data->application_cb = application_cb; + server_data->application_data = application_data; + server_data->hEventStopRequested = CreateEvent(NULL, TRUE, FALSE, NULL); + strbuf_init(&server_data->buf_path, 0); + strbuf_addstr(&server_data->buf_path, path); + wcscpy(server_data->wpath, wpath); + + if (nr_threads < 1) + nr_threads = 1; + + for (k = 0; k < nr_threads; k++) { + struct ipc_server_thread_data *std; + + std = xcalloc(1, sizeof(*std)); + std->magic = MAGIC_SERVER_THREAD_DATA; + std->server_data = server_data; + std->hPipe = INVALID_HANDLE_VALUE; + + std->hPipe = (k == 0) + ? hPipeFirst + : create_new_pipe(server_data->wpath, 0); + + if (std->hPipe == INVALID_HANDLE_VALUE) { + /* + * If we've reached a pipe instance limit for + * this path, just use fewer threads. + */ + free(std); + break; + } + + if (pthread_create(&std->pthread_id, NULL, + server_thread_proc, std)) { + /* + * Likewise, if we're out of threads, just use + * fewer threads than requested. + * + * However, we just give up if we can't even get + * one thread. This should not happen. 
+ */ + if (k == 0) + die(_("could not start thread[0] for '%s'"), + path); + + CloseHandle(std->hPipe); + free(std); + break; + } + + std->next_thread = server_data->thread_list; + server_data->thread_list = std; + } + + *returned_server_data = server_data; + return 0; +} + +int ipc_server_stop_async(struct ipc_server_data *server_data) +{ + if (!server_data) + return 0; + + /* + * Gently tell all of the ipc_server threads to shutdown. + * This will be seen the next time they are idle (and waiting + * for a connection). + * + * We DO NOT attempt to force them to drop an active connection. + */ + SetEvent(server_data->hEventStopRequested); + return 0; +} + +int ipc_server_await(struct ipc_server_data *server_data) +{ + DWORD dwWaitResult; + + if (!server_data) + return 0; + + dwWaitResult = WaitForSingleObject(server_data->hEventStopRequested, INFINITE); + if (dwWaitResult != WAIT_OBJECT_0) + return error(_("wait for hEvent failed for '%s'"), + server_data->buf_path.buf); + + while (server_data->thread_list) { + struct ipc_server_thread_data *std = server_data->thread_list; + + pthread_join(std->pthread_id, NULL); + + server_data->thread_list = std->next_thread; + free(std); + } + + server_data->is_stopped = 1; + + return 0; +} + +void ipc_server_free(struct ipc_server_data *server_data) +{ + if (!server_data) + return; + + if (!server_data->is_stopped) + BUG("cannot free ipc-server while running for '%s'", + server_data->buf_path.buf); + + strbuf_release(&server_data->buf_path); + + if (server_data->hEventStopRequested != INVALID_HANDLE_VALUE) + CloseHandle(server_data->hEventStopRequested); + + while (server_data->thread_list) { + struct ipc_server_thread_data *std = server_data->thread_list; + + server_data->thread_list = std->next_thread; + free(std); + } + + free(server_data); +} diff --git a/config.mak.uname b/config.mak.uname index 198ab1e58f83..76087cff6789 100644 --- a/config.mak.uname +++ b/config.mak.uname @@ -421,6 +421,7 @@ ifeq ($(uname_S),Windows) RUNTIME_PREFIX = YesPlease HAVE_WPGMPTR = YesWeDo NO_ST_BLOCKS_IN_STRUCT_STAT = YesPlease + USE_WIN32_IPC = YesPlease USE_WIN32_MMAP = YesPlease MMAP_PREVENTS_DELETE = UnfortunatelyYes # USE_NED_ALLOCATOR = YesPlease @@ -597,6 +598,7 @@ ifneq (,$(findstring MINGW,$(uname_S))) RUNTIME_PREFIX = YesPlease HAVE_WPGMPTR = YesWeDo NO_ST_BLOCKS_IN_STRUCT_STAT = YesPlease + USE_WIN32_IPC = YesPlease USE_WIN32_MMAP = YesPlease MMAP_PREVENTS_DELETE = UnfortunatelyYes USE_NED_ALLOCATOR = YesPlease diff --git a/contrib/buildsystems/CMakeLists.txt b/contrib/buildsystems/CMakeLists.txt index c151dd7257f3..4bd41054ee70 100644 --- a/contrib/buildsystems/CMakeLists.txt +++ b/contrib/buildsystems/CMakeLists.txt @@ -246,6 +246,10 @@ elseif(CMAKE_SYSTEM_NAME STREQUAL "Linux") list(APPEND compat_SOURCES unix-socket.c) endif() +if(CMAKE_SYSTEM_NAME STREQUAL "Windows") + list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-win32.c) +endif() + set(EXE_EXTENSION ${CMAKE_EXECUTABLE_SUFFIX}) #header checks diff --git a/simple-ipc.h b/simple-ipc.h new file mode 100644 index 000000000000..a3f96b42cca2 --- /dev/null +++ b/simple-ipc.h @@ -0,0 +1,224 @@ +#ifndef GIT_SIMPLE_IPC_H +#define GIT_SIMPLE_IPC_H + +/* + * See Documentation/technical/api-simple-ipc.txt + */ + +#if defined(GIT_WINDOWS_NATIVE) +#define SUPPORTS_SIMPLE_IPC +#endif + +#ifdef SUPPORTS_SIMPLE_IPC +#include "pkt-line.h" + +/* + * Simple IPC Client Side API. + */ + +enum ipc_active_state { + /* + * The pipe/socket exists and the daemon is waiting for connections. 
+ */ + IPC_STATE__LISTENING = 0, + + /* + * The pipe/socket exists, but the daemon is not listening. + * Perhaps it is very busy. + * Perhaps the daemon died without deleting the path. + * Perhaps it is shutting down and draining existing clients. + * Perhaps it is dead, but other clients are lingering and + * still holding a reference to the pathname. + */ + IPC_STATE__NOT_LISTENING, + + /* + * The requested pathname is bogus and no amount of retries + * will fix that. + */ + IPC_STATE__INVALID_PATH, + + /* + * The requested pathname is not found. This usually means + * that there is no daemon present. + */ + IPC_STATE__PATH_NOT_FOUND, + + IPC_STATE__OTHER_ERROR, +}; + +struct ipc_client_connect_options { + /* + * Spin under timeout if the server is running but can't + * accept our connection yet. This should always be set + * unless you just want to poke the server and see if it + * is alive. + */ + unsigned int wait_if_busy:1; + + /* + * Spin under timeout if the pipe/socket is not yet present + * on the file system. This is useful if we just started + * the service and need to wait for it to become ready. + */ + unsigned int wait_if_not_found:1; +}; + +#define IPC_CLIENT_CONNECT_OPTIONS_INIT { \ + .wait_if_busy = 0, \ + .wait_if_not_found = 0, \ +} + +/* + * Determine if a server is listening on this named pipe or socket using + * platform-specific logic. This might just probe the filesystem or it + * might make a trivial connection to the server using this pathname. + */ +enum ipc_active_state ipc_get_active_state(const char *path); + +struct ipc_client_connection { + int fd; +}; + +/* + * Try to connect to the daemon on the named pipe or socket. + * + * Returns IPC_STATE__LISTENING and a connection handle. + * + * Otherwise, returns info to help decide whether to retry or to + * spawn/respawn the server. + */ +enum ipc_active_state ipc_client_try_connect( + const char *path, + const struct ipc_client_connect_options *options, + struct ipc_client_connection **p_connection); + +void ipc_client_close_connection(struct ipc_client_connection *connection); + +/* + * Used by the client to synchronously send and receive a message with + * the server on the provided client connection. + * + * Returns 0 when successful. + * + * Calls error() and returns non-zero otherwise. + */ +int ipc_client_send_command_to_connection( + struct ipc_client_connection *connection, + const char *message, struct strbuf *answer); + +/* + * Used by the client to synchronously connect and send and receive a + * message to the server listening at the given path. + * + * Returns 0 when successful. + * + * Calls error() and returns non-zero otherwise. + */ +int ipc_client_send_command(const char *path, + const struct ipc_client_connect_options *options, + const char *message, struct strbuf *answer); + +/* + * Simple IPC Server Side API. + */ + +struct ipc_server_reply_data; + +typedef int (ipc_server_reply_cb)(struct ipc_server_reply_data *, + const char *response, + size_t response_len); + +/* + * Prototype for an application-supplied callback to process incoming + * client IPC messages and compose a reply. The `application_cb` should + * use the provided `reply_cb` and `reply_data` to send an IPC response + * back to the client. The `reply_cb` callback can be called multiple + * times for chunking purposes. A reply message is optional and may be + * omitted if not necessary for the application. + * + * The return value from the application callback is ignored. 
+ * The value `SIMPLE_IPC_QUIT` can be used to shutdown the server. + */ +typedef int (ipc_server_application_cb)(void *application_data, + const char *request, + ipc_server_reply_cb *reply_cb, + struct ipc_server_reply_data *reply_data); + +#define SIMPLE_IPC_QUIT -2 + +/* + * Opaque instance data to represent an IPC server instance. + */ +struct ipc_server_data; + +/* + * Control parameters for the IPC server instance. + * Use this to hide platform-specific settings. + */ +struct ipc_server_opts +{ + int nr_threads; +}; + +/* + * Start an IPC server instance in one or more background threads + * and return a handle to the pool. + * + * Returns 0 if the asynchronous server pool was started successfully. + * Returns -1 if not. + * + * When a client IPC message is received, the `application_cb` will be + * called (possibly on a random thread) to handle the message and + * optionally compose a reply message. + */ +int ipc_server_run_async(struct ipc_server_data **returned_server_data, + const char *path, const struct ipc_server_opts *opts, + ipc_server_application_cb *application_cb, + void *application_data); + +/* + * Gently signal the IPC server pool to shutdown. No new client + * connections will be accepted, but existing connections will be + * allowed to complete. + */ +int ipc_server_stop_async(struct ipc_server_data *server_data); + +/* + * Block the calling thread until all threads in the IPC server pool + * have completed and been joined. + */ +int ipc_server_await(struct ipc_server_data *server_data); + +/* + * Close and free all resource handles associated with the IPC server + * pool. + */ +void ipc_server_free(struct ipc_server_data *server_data); + +/* + * Run an IPC server instance and block the calling thread of the + * current process. It does not return until the IPC server has + * either shutdown or had an unrecoverable error. + * + * The IPC server handles incoming IPC messages from client processes + * and may use one or more background threads as necessary. + * + * Returns 0 after the server has completed successfully. + * Returns -1 if the server cannot be started. + * + * When a client IPC message is received, the `application_cb` will be + * called (possibly on a random thread) to handle the message and + * optionally compose a reply message. + * + * Note that `ipc_server_run()` is a synchronous wrapper around the + * above asynchronous routines. It effectively hides all of the + * server state and thread details from the caller and presents a + * simple synchronous interface. 
+ */ +int ipc_server_run(const char *path, const struct ipc_server_opts *opts, + ipc_server_application_cb *application_cb, + void *application_data); + +#endif /* SUPPORTS_SIMPLE_IPC */ +#endif /* GIT_SIMPLE_IPC_H */ From patchwork Wed Feb 17 21:48:43 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Hostetler X-Patchwork-Id: 12092459 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 97A5FC433DB for ; Wed, 17 Feb 2021 21:50:34 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 6F4A064E5F for ; Wed, 17 Feb 2021 21:50:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232778AbhBQVub (ORCPT ); Wed, 17 Feb 2021 16:50:31 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58300 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231717AbhBQVuN (ORCPT ); Wed, 17 Feb 2021 16:50:13 -0500 Received: from mail-wm1-x331.google.com (mail-wm1-x331.google.com [IPv6:2a00:1450:4864:20::331]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5FF93C06178C for ; Wed, 17 Feb 2021 13:48:56 -0800 (PST) Received: by mail-wm1-x331.google.com with SMTP id l17so3670688wmq.2 for ; Wed, 17 Feb 2021 13:48:56 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=+bNP+0W/NmpTzg2fcka3X+P2W1SFHzm3DqEPSuLjvIc=; b=V74TFL3N2fRR4EOCXtVc7/TVf5LJXv2dkACr/2JH1A++aIJF0B/B/UwTJSW/G4afps qqzLrKaylvTqQp+cdYEV1jWHigQP0hFVeB6PHX/hkCbJJWC3NKIvcyPyDaO+ltv+u+ob EkHczLQIe+8nMBpZnm+e1/2GySQkoad4ezyrjkdHkr75I/HQHgf5VtmQJfkAdxzn61kY eNv1TXOmWUANWrclfEpPi+0m390tZNrFHKJ/Hzj8F2FGKGGeqqOCp/vBieMiiUbd3A21 RTWeSgvTUmc/jIgXzuyoO8tWPRD2wX0xUlGtdQHwEPPlUvmrdUIx5TsfsjNM8fLX54lR 22uA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=+bNP+0W/NmpTzg2fcka3X+P2W1SFHzm3DqEPSuLjvIc=; b=gcEIE14aCZxU/iI5hOW0pJoKVu+blRAyDXsQ9E0kuYBESf2wDA42pbpRbjafl5Rsca BXdLjzyYiF3P9nj6z/ybnHD8eYrt2ep8WCFB/fIgnIhxnp82rw2iSgmQRpbubKqMaZc5 IbiGObsLetdSf2olUhbHAvSyQ0Qr9Y+/JUQaF3WNiBACndIcAq9mlAi68Z2Xa3wiE98f Jk81D3PlwC+USi56F122rtU0EEba9I+GsbkGhdpjsTA9VH6QG23BhzK8VoDBJVDPQDok dDlgsDqRJya7OrzvWfnu2PwQiWzO3jWSSbi/sSxJDqzmMdSGY16mKEM/ABSS7qZVYA29 /k4g== X-Gm-Message-State: AOAM530DkZQMiFiN9UFKoB974G3q7UcWHJGAx9vCzPY6hAnwqzdDijjQ /+pABlP2Yxq+URxozMqeadJN8KJqhgs= X-Google-Smtp-Source: ABdhPJw0veR2kJXm8kvJJdfLMN9d9ZLsKFfsG4hD4AMFaDzt2TIbrAAxxpl8XuORfBpnGwUBiIjxQw== X-Received: by 2002:a05:600c:4c94:: with SMTP id g20mr714784wmp.41.1613598535103; Wed, 17 Feb 2021 13:48:55 -0800 (PST) Received: from [127.0.0.1] ([13.74.141.28]) by smtp.gmail.com with ESMTPSA id u137sm4961993wmu.20.2021.02.17.13.48.54 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 17 
Feb 2021 13:48:54 -0800 (PST) Message-Id: In-Reply-To: References: Date: Wed, 17 Feb 2021 21:48:43 +0000 Subject: [PATCH v4 07/12] unix-socket: eliminate static unix_stream_socket() helper function Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: Jeff Hostetler , Jeff King , SZEDER =?utf-8?b?R8OhYm9y?= , Johannes Schindelin , Jeff Hostetler , Jeff Hostetler Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler From: Jeff Hostetler The static helper function `unix_stream_socket()` calls `die()`. This is not appropriate for all callers. Eliminate the wrapper function and make the callers propagate the error. Signed-off-by: Jeff Hostetler --- unix-socket.c | 27 +++++++++++++-------------- 1 file changed, 13 insertions(+), 14 deletions(-) diff --git a/unix-socket.c b/unix-socket.c index 19ed48be9902..69f81d64e9d5 100644 --- a/unix-socket.c +++ b/unix-socket.c @@ -1,14 +1,6 @@ #include "cache.h" #include "unix-socket.h" -static int unix_stream_socket(void) -{ - int fd = socket(AF_UNIX, SOCK_STREAM, 0); - if (fd < 0) - die_errno("unable to create socket"); - return fd; -} - static int chdir_len(const char *orig, int len) { char *path = xmemdupz(orig, len); @@ -73,13 +65,16 @@ static int unix_sockaddr_init(struct sockaddr_un *sa, const char *path, int unix_stream_connect(const char *path) { - int fd, saved_errno; + int fd = -1, saved_errno; struct sockaddr_un sa; struct unix_sockaddr_context ctx; if (unix_sockaddr_init(&sa, path, &ctx) < 0) return -1; - fd = unix_stream_socket(); + fd = socket(AF_UNIX, SOCK_STREAM, 0); + if (fd < 0) + goto fail; + if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) goto fail; unix_sockaddr_cleanup(&ctx); @@ -87,15 +82,16 @@ int unix_stream_connect(const char *path) fail: saved_errno = errno; + if (fd != -1) + close(fd); unix_sockaddr_cleanup(&ctx); - close(fd); errno = saved_errno; return -1; } int unix_stream_listen(const char *path) { - int fd, saved_errno; + int fd = -1, saved_errno; struct sockaddr_un sa; struct unix_sockaddr_context ctx; @@ -103,7 +99,9 @@ int unix_stream_listen(const char *path) if (unix_sockaddr_init(&sa, path, &ctx) < 0) return -1; - fd = unix_stream_socket(); + fd = socket(AF_UNIX, SOCK_STREAM, 0); + if (fd < 0) + goto fail; if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) goto fail; @@ -116,8 +114,9 @@ int unix_stream_listen(const char *path) fail: saved_errno = errno; + if (fd != -1) + close(fd); unix_sockaddr_cleanup(&ctx); - close(fd); errno = saved_errno; return -1; } From patchwork Wed Feb 17 21:48:44 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Hostetler X-Patchwork-Id: 12092461 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 24D46C433DB for ; Wed, 17 Feb 2021 21:50:51 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E848564E85 for ; Wed, 17 Feb 2021 21:50:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id
S232844AbhBQVur (ORCPT ); Wed, 17 Feb 2021 16:50:47 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58302 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231982AbhBQVuN (ORCPT ); Wed, 17 Feb 2021 16:50:13 -0500 Received: from mail-wr1-x432.google.com (mail-wr1-x432.google.com [IPv6:2a00:1450:4864:20::432]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E45DEC061793 for ; Wed, 17 Feb 2021 13:48:56 -0800 (PST) Received: by mail-wr1-x432.google.com with SMTP id u14so50339wri.3 for ; Wed, 17 Feb 2021 13:48:56 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=3DE2LGbxZPS99naynGsZAiR5LTnfiDo7m2CNe53QjEk=; b=UBGYZllc82WJvgc0K31fhq0lRuWxybDS9yC8L1ZUNCn1d9VhNc7muHYAlJmtxQEaS0 p5XPL+5MFlKOMrXWuFRlxB9Lasy13Wyekw1JwjkidsUcgfX/xlBOY1eEVnhZ6NIcPhZm 0SPu2xY7fFoN8fj+5f2ML5lu7SZ6wxIzh50G4nFjm/Hazq/RuKGu6abMIZTeGyz1fd/W oiTJZOspHcJ0LaDpG/xmN+3oCgXimun0X/noKRL1vmvrRMXnrQlKSm7AJgmIZDv1WTzl nSVDHhB3ISDpNg1CaEMABeLaJY+5LfcaYOLASv/LgqrQf/WXktGDgZw6ouyusUIwaWeJ cFqQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=3DE2LGbxZPS99naynGsZAiR5LTnfiDo7m2CNe53QjEk=; b=epaP16GjAjlogXyYA3IRGF7+UDYkH/HmgqRAVkDYJUPC7KVKx8YFiRp6xGUO453Kq/ 2wVLYKGABkahvftjmgiuIS62rp7HSM2kQgUkOibxZe/9DzIZHOMnioBS+N08bXIyijXq SSfZn9QOGz4TL+ZAWTqNuA0je643/qHM4jldbAVSe9QGd5vMKrlxz9x3VtRjEqpbglg8 j4T0yfIr03zKdU0ruPiHBhd1gC3VU2pEce2pK+suGMkDhyOa/ry5H3kSprs1Gcxwr1KW jsmvNZobPgkNvwAYHSblZXBM64nu1qD0oA9IFn1EtYG/Z2YDmo8FAypeHqs0c1O722N3 Ki7Q== X-Gm-Message-State: AOAM532iWsSbcNgbKkFW4Dffiq/ERF+JjGEmwHTdfcn+s3TxGmpbnhjU cRcitIpwNjTPAzdwQAApC+dex6dJb7w= X-Google-Smtp-Source: ABdhPJzvCbfKW+1pBTza/1FGXQEwBPF19PNsPghQ7cNWCqE0Ui53R9mEBbS6y/+QHHyI5HvEgz/gqw== X-Received: by 2002:adf:f3c4:: with SMTP id g4mr1141932wrp.61.1613598535697; Wed, 17 Feb 2021 13:48:55 -0800 (PST) Received: from [127.0.0.1] ([13.74.141.28]) by smtp.gmail.com with ESMTPSA id c128sm4186288wma.37.2021.02.17.13.48.55 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 17 Feb 2021 13:48:55 -0800 (PST) Message-Id: <985b2e02b2df7725d70f1365f7cd2e525c9f3ade.1613598529.git.gitgitgadget@gmail.com> In-Reply-To: References: Date: Wed, 17 Feb 2021 21:48:44 +0000 Subject: [PATCH v4 08/12] unix-socket: add backlog size option to unix_stream_listen() Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: Jeff Hostetler , Jeff King , SZEDER =?utf-8?b?R8OhYm9y?= , Johannes Schindelin , Jeff Hostetler , Jeff Hostetler Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler From: Jeff Hostetler Update `unix_stream_listen()` to take an options structure to override default behaviors. This commit includes the size of the `listen()` backlog. 
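
For illustration only (not part of the patch itself): a caller that wants a larger backlog than the default would override just that one field of the options structure; the helper name and the value 128 below are made up for the example.

	#include "cache.h"
	#include "unix-socket.h"

	/*
	 * Hypothetical caller: listen on `socket_path` with a larger
	 * backlog than DEFAULT_UNIX_STREAM_LISTEN_BACKLOG.
	 */
	static int my_open_listener(const char *socket_path)
	{
		struct unix_stream_listen_opts opts = UNIX_STREAM_LISTEN_OPTS_INIT;
		int fd;

		opts.listen_backlog_size = 128; /* illustrative value only */

		fd = unix_stream_listen(socket_path, &opts);
		if (fd < 0)
			return error_errno("unable to listen on '%s'", socket_path);
		return fd;
	}

The hunk in builtin/credential-cache--daemon.c below follows the same pattern, but keeps the default backlog.
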
Signed-off-by: Jeff Hostetler --- builtin/credential-cache--daemon.c | 3 ++- unix-socket.c | 9 +++++++-- unix-socket.h | 14 +++++++++++++- 3 files changed, 22 insertions(+), 4 deletions(-) diff --git a/builtin/credential-cache--daemon.c b/builtin/credential-cache--daemon.c index c61f123a3b81..4c6c89ab0de2 100644 --- a/builtin/credential-cache--daemon.c +++ b/builtin/credential-cache--daemon.c @@ -203,9 +203,10 @@ static int serve_cache_loop(int fd) static void serve_cache(const char *socket_path, int debug) { + struct unix_stream_listen_opts opts = UNIX_STREAM_LISTEN_OPTS_INIT; int fd; - fd = unix_stream_listen(socket_path); + fd = unix_stream_listen(socket_path, &opts); if (fd < 0) die_errno("unable to bind to '%s'", socket_path); diff --git a/unix-socket.c b/unix-socket.c index 69f81d64e9d5..5ac7dafe9828 100644 --- a/unix-socket.c +++ b/unix-socket.c @@ -89,9 +89,11 @@ int unix_stream_connect(const char *path) return -1; } -int unix_stream_listen(const char *path) +int unix_stream_listen(const char *path, + const struct unix_stream_listen_opts *opts) { int fd = -1, saved_errno; + int backlog; struct sockaddr_un sa; struct unix_sockaddr_context ctx; @@ -106,7 +108,10 @@ int unix_stream_listen(const char *path) if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) goto fail; - if (listen(fd, 5) < 0) + backlog = opts->listen_backlog_size; + if (backlog <= 0) + backlog = DEFAULT_UNIX_STREAM_LISTEN_BACKLOG; + if (listen(fd, backlog) < 0) goto fail; unix_sockaddr_cleanup(&ctx); diff --git a/unix-socket.h b/unix-socket.h index e271aeec5a07..06a5a05b03fe 100644 --- a/unix-socket.h +++ b/unix-socket.h @@ -1,7 +1,19 @@ #ifndef UNIX_SOCKET_H #define UNIX_SOCKET_H +struct unix_stream_listen_opts { + int listen_backlog_size; +}; + +#define DEFAULT_UNIX_STREAM_LISTEN_BACKLOG (5) + +#define UNIX_STREAM_LISTEN_OPTS_INIT \ +{ \ + .listen_backlog_size = DEFAULT_UNIX_STREAM_LISTEN_BACKLOG, \ +} + int unix_stream_connect(const char *path); -int unix_stream_listen(const char *path); +int unix_stream_listen(const char *path, + const struct unix_stream_listen_opts *opts); #endif /* UNIX_SOCKET_H */ From patchwork Wed Feb 17 21:48:45 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Hostetler X-Patchwork-Id: 12092463 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1B5A8C433DB for ; Wed, 17 Feb 2021 21:50:55 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id D948664E7A for ; Wed, 17 Feb 2021 21:50:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232801AbhBQVuv (ORCPT ); Wed, 17 Feb 2021 16:50:51 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58310 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232006AbhBQVuP (ORCPT ); Wed, 17 Feb 2021 16:50:15 -0500 Received: from mail-wm1-x32d.google.com (mail-wm1-x32d.google.com [IPv6:2a00:1450:4864:20::32d]) by lindbergh.monkeyblade.net (Postfix) with 
ESMTPS id 9C2B7C061794 for ; Wed, 17 Feb 2021 13:48:57 -0800 (PST) Received: by mail-wm1-x32d.google.com with SMTP id x4so5457956wmi.3 for ; Wed, 17 Feb 2021 13:48:57 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=ASF8+rFWzj6aUvWIsRo10Spu216IfvU7//f0qbC+iF8=; b=NoeVE7hASI9DVqekqwFBILs3qHhuB/Ehrx3MvNW/GUmJTcWMAXunjfjjyxIsxaRwgT XOnWSfB+NVcp6/yBzMBHkJHhYWBRDyJpY8dR8jQC9nRO9ooM5O09+0U99FHSFEIbn70z C+I3BFc4bSS0RMKyQ+VGq4n/8hl+4XlUwt9wRPtwzXNtq4lprbvXK6cf4V7UutDz/Aaa /2heQqV3gZha+357u7ebr8G9TQj7gjo8X6HnbXA1TcEZWtH59V+Cj5qaZc6r3uh+LZYN W+kklZJLqox6Ijxmk9d0fB7mpB4fF9JogMfgyP+EjSZoeEBw53qcCoMO/sXsvHqRPrPB tYGQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=ASF8+rFWzj6aUvWIsRo10Spu216IfvU7//f0qbC+iF8=; b=sjte/eQN/wydgfhlwLstDvRDW74qsXhogO92JTrJ7Cd4+YTociH2Cu5y7K6M2FwNca /9o+wH3A7jaluSS48p4TE5oNy6gJ8yJOBbvI/ykqkxTNq9auPorLylqE5kHWDefpe5q5 Q5B3w91D3xe9aqVPNoR3Wj1MeLLbcNsWa7oezG/PR/0/1gZQo8WgVJk5fRNUQdXRIK/C fWMDP+3eBVXfmYU/RGaEahCBeydKxC59t0RlQQDkgyFVE2ToXp71vcZt5RDUrkCX1HQI /0kKQn7vyeJIEecCKdS/tu/cS1oWUxqduxhpByQbLu1enEqgXXNve53sGa7oMgJ1qBQ0 ZY7g== X-Gm-Message-State: AOAM533ElTN/NMeJ5+0lhkBN43UZ+o2OMYH9pl4Ell0+jXbYhLmpvvBv DWkq3Ib/TRU+T2AmRhARFHgHaKye0YA= X-Google-Smtp-Source: ABdhPJwUv5T4OtXO8BB+eWD76xzrn6RAXqM8aZyYMj/jijKUsCEW8zn8tAqdFWremYZI3L/hOUE40g== X-Received: by 2002:a1c:bb44:: with SMTP id l65mr718999wmf.86.1613598536281; Wed, 17 Feb 2021 13:48:56 -0800 (PST) Received: from [127.0.0.1] ([13.74.141.28]) by smtp.gmail.com with ESMTPSA id u137sm4962049wmu.20.2021.02.17.13.48.55 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 17 Feb 2021 13:48:56 -0800 (PST) Message-Id: <1bfa36409d0706d5e22703f80bf95dfa1a313a83.1613598529.git.gitgitgadget@gmail.com> In-Reply-To: References: Date: Wed, 17 Feb 2021 21:48:45 +0000 Subject: [PATCH v4 09/12] unix-socket: disallow chdir() when creating unix domain sockets Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: Jeff Hostetler , Jeff King , SZEDER =?utf-8?b?R8OhYm9y?= , Johannes Schindelin , Jeff Hostetler , Jeff Hostetler Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler From: Jeff Hostetler Calls to `chdir()` are dangerous in a multi-threaded context. If `unix_stream_listen()` or `unix_stream_connect()` is given a socket pathname that is too long to fit in a `sockaddr_un` structure, it will `chdir()` to the parent directory of the requested socket pathname, create the socket using a relative pathname, and then `chdir()` back. This is not thread-safe. Teach `unix_sockaddr_init()` to not allow calls to `chdir()` when this flag is set. 
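
For illustration only (not part of the patch itself): with the new flag set, a caller on a worker thread gets a clean ENAMETOOLONG failure for an over-long socket path instead of a temporary process-wide chdir(); the helper name below is made up for the example.

	#include "cache.h"
	#include "unix-socket.h"

	/*
	 * Hypothetical helper running on a worker thread, where a
	 * temporary process-wide chdir() would race with other threads.
	 */
	static int connect_threadsafe(const char *socket_path)
	{
		int fd = unix_stream_connect(socket_path, 1 /* disallow_chdir */);

		if (fd < 0 && errno == ENAMETOOLONG)
			return error(_("socket path too long: '%s'"), socket_path);
		if (fd < 0)
			return error_errno(_("could not connect to '%s'"), socket_path);
		return fd;
	}
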
Signed-off-by: Jeff Hostetler --- builtin/credential-cache.c | 2 +- unix-socket.c | 17 ++++++++++++----- unix-socket.h | 4 +++- 3 files changed, 16 insertions(+), 7 deletions(-) diff --git a/builtin/credential-cache.c b/builtin/credential-cache.c index 9b3f70990597..76a6ba37223f 100644 --- a/builtin/credential-cache.c +++ b/builtin/credential-cache.c @@ -14,7 +14,7 @@ static int send_request(const char *socket, const struct strbuf *out) { int got_data = 0; - int fd = unix_stream_connect(socket); + int fd = unix_stream_connect(socket, 0); if (fd < 0) return -1; diff --git a/unix-socket.c b/unix-socket.c index 5ac7dafe9828..1eaa8cf759c0 100644 --- a/unix-socket.c +++ b/unix-socket.c @@ -28,16 +28,23 @@ static void unix_sockaddr_cleanup(struct unix_sockaddr_context *ctx) } static int unix_sockaddr_init(struct sockaddr_un *sa, const char *path, - struct unix_sockaddr_context *ctx) + struct unix_sockaddr_context *ctx, + int disallow_chdir) { int size = strlen(path) + 1; ctx->orig_dir = NULL; if (size > sizeof(sa->sun_path)) { - const char *slash = find_last_dir_sep(path); + const char *slash; const char *dir; struct strbuf cwd = STRBUF_INIT; + if (disallow_chdir) { + errno = ENAMETOOLONG; + return -1; + } + + slash = find_last_dir_sep(path); if (!slash) { errno = ENAMETOOLONG; return -1; @@ -63,13 +70,13 @@ static int unix_sockaddr_init(struct sockaddr_un *sa, const char *path, return 0; } -int unix_stream_connect(const char *path) +int unix_stream_connect(const char *path, int disallow_chdir) { int fd = -1, saved_errno; struct sockaddr_un sa; struct unix_sockaddr_context ctx; - if (unix_sockaddr_init(&sa, path, &ctx) < 0) + if (unix_sockaddr_init(&sa, path, &ctx, disallow_chdir) < 0) return -1; fd = socket(AF_UNIX, SOCK_STREAM, 0); if (fd < 0) @@ -99,7 +106,7 @@ int unix_stream_listen(const char *path, unlink(path); - if (unix_sockaddr_init(&sa, path, &ctx) < 0) + if (unix_sockaddr_init(&sa, path, &ctx, opts->disallow_chdir) < 0) return -1; fd = socket(AF_UNIX, SOCK_STREAM, 0); if (fd < 0) diff --git a/unix-socket.h b/unix-socket.h index 06a5a05b03fe..2c0b2e79d7b3 100644 --- a/unix-socket.h +++ b/unix-socket.h @@ -3,6 +3,7 @@ struct unix_stream_listen_opts { int listen_backlog_size; + unsigned int disallow_chdir:1; }; #define DEFAULT_UNIX_STREAM_LISTEN_BACKLOG (5) @@ -10,9 +11,10 @@ struct unix_stream_listen_opts { #define UNIX_STREAM_LISTEN_OPTS_INIT \ { \ .listen_backlog_size = DEFAULT_UNIX_STREAM_LISTEN_BACKLOG, \ + .disallow_chdir = 0, \ } -int unix_stream_connect(const char *path); +int unix_stream_connect(const char *path, int disallow_chdir); int unix_stream_listen(const char *path, const struct unix_stream_listen_opts *opts); From patchwork Wed Feb 17 21:48:46 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Hostetler X-Patchwork-Id: 12092465 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8EB26C433DB for ; Wed, 17 Feb 2021 21:51:02 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org 
(Postfix) with ESMTP id 6376C64E5F for ; Wed, 17 Feb 2021 21:51:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232840AbhBQVu4 (ORCPT ); Wed, 17 Feb 2021 16:50:56 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58312 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232021AbhBQVuP (ORCPT ); Wed, 17 Feb 2021 16:50:15 -0500 Received: from mail-wm1-x330.google.com (mail-wm1-x330.google.com [IPv6:2a00:1450:4864:20::330]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 41661C061797 for ; Wed, 17 Feb 2021 13:48:58 -0800 (PST) Received: by mail-wm1-x330.google.com with SMTP id x4so5458062wmi.3 for ; Wed, 17 Feb 2021 13:48:58 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=BxZO76/AviWURZU430dd0Mk+fI3Wg10NNXpI5VTJui4=; b=UigOp15jJGovCHVVZNxODbu8Fq1zYixQB18vwvvNFUA3Q0WUAnzWfmHzh/1ZsWd+YK 1qjMe5SAWU9XwXi/kLJ+EYr9W/Hf6v7Qu8LHT3/C9cdP9BJqt7MmEAOf3Hf+pwTVNLQ3 QCYH2iXTW6b0W28cNm4B1JzuzcYMfx797RUuxRHEXe/l8RbG9ce2lrFyRWZT+aqaJOyI 72nl2cG8EjH36VLgAN7j0rQvY2uSa5nE0nZjKX2zmB+o98aVv+2vCGZutVT0N+md1YK5 hMjFctLao8kv9C8I6Kbnts+JCK6cL+clmjjK9FrVHmXLa9bcRUDSTTvQMwSWEx3F498L zjfQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=BxZO76/AviWURZU430dd0Mk+fI3Wg10NNXpI5VTJui4=; b=q7LKX/Goa3DMhpDPhZDATqNZIuVb0BQYTmZUGZ8oMMjjacgjgBZjEiiTIfILANPATg 4MMD22Fv24WEd6tLQhVGUgDuZ+bsbol6V9yvnipjkfjkMHQHIYRLsiGNlYL7HelDWtIJ XAKQYOdsE0xaF0/ow+7g9NzPdKr17RYr6t3V1r6gQH7eip8FTVQTGNES1pzsn7Dv3HLt NuLwTF4bC4t2ukGfkmX2PiyAZz5J91Ia1LpgKwOVjtDRs8co2uf1wSlLDd0hBa9n/VUs vv1PBkBBI36inokFk8Rve5z0Dc1LmfmB9BzqNGuohIBIbjb7QMFbVR7RwcSn1EOu8ahs 7ekw== X-Gm-Message-State: AOAM530kHyeBPVcnG+SqF4B4dRjYmxJTzuVgCBuUtKtqS1zBPJblzm1S 4KEYWqbiHuE55TVnzz3mPEqdWT1zBOg= X-Google-Smtp-Source: ABdhPJylXYpFSygNz13pRo3L5U7XHXSRH6W1GtgPOm2ddsdFsCwD7WOqd/CVoK7qyilHCQ9QgO1bGA== X-Received: by 2002:a1c:3804:: with SMTP id f4mr734908wma.115.1613598536923; Wed, 17 Feb 2021 13:48:56 -0800 (PST) Received: from [127.0.0.1] ([13.74.141.28]) by smtp.gmail.com with ESMTPSA id v11sm6050268wrr.3.2021.02.17.13.48.56 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 17 Feb 2021 13:48:56 -0800 (PST) Message-Id: In-Reply-To: References: Date: Wed, 17 Feb 2021 21:48:46 +0000 Subject: [PATCH v4 10/12] unix-socket: create `unix_stream_server__listen_with_lock()` Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: Jeff Hostetler , Jeff King , SZEDER =?utf-8?b?R8OhYm9y?= , Johannes Schindelin , Jeff Hostetler , Jeff Hostetler Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler From: Jeff Hostetler Create a version of `unix_stream_listen()` that uses a ".lock" lockfile to create the unix domain socket in a race-free manner. Unix domain sockets have a fundamental problem on Unix systems because they persist in the filesystem until they are deleted. This is independent of whether a server is actually listening for connections. Well-behaved servers are expected to delete the socket when they shutdown. A new server cannot easily tell if a found socket is attached to an active server or is leftover cruft from a dead server. 
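
(The liveness test used below boils down to a throwaway connection: a live server accepts it, a leftover socket-inode does not. The fragment here only condenses `is_another_server_alive()` from the diff below; the helper name is made up for the example.)

	#include "cache.h"
	#include "unix-socket.h"

	/* Condensed from is_another_server_alive() in the diff below. */
	static int socket_has_live_server(const char *path)
	{
		struct stat st;
		int fd;

		if (lstat(path, &st) || !S_ISSOCK(st.st_mode))
			return 0;	/* nothing (or not a socket) at path */

		fd = unix_stream_connect(path, 1 /* disallow_chdir */);
		if (fd < 0)
			return 0;	/* socket-inode exists, but nobody answers */

		close(fd);
		return 1;		/* a live server accepted the probe */
	}
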
The traditional solution used by `unix_stream_listen()` is to force delete the socket pathname and then create a new socket. This solves the latter (cruft) problem, but in the case of the former, it orphans the existing server (by stealing the pathname associated with the socket it is listening on). We cannot directly use a .lock lockfile to create the socket because the socket is created by `bind(2)` rather than the `open(2)` mechanism used by `tempfile.c`. As an alternative, we hold a plain lockfile (".lock") as a mutual exclusion device. Under the lock, we test if an existing socket ("") has an active server. If not, create a new socket and begin listening. Then we rollback the lockfile in all cases. Signed-off-by: Jeff Hostetler --- unix-socket.c | 115 ++++++++++++++++++++++++++++++++++++++++++++++++++ unix-socket.h | 29 +++++++++++++ 2 files changed, 144 insertions(+) diff --git a/unix-socket.c b/unix-socket.c index 1eaa8cf759c0..647bbde37f97 100644 --- a/unix-socket.c +++ b/unix-socket.c @@ -1,4 +1,5 @@ #include "cache.h" +#include "lockfile.h" #include "unix-socket.h" static int chdir_len(const char *orig, int len) @@ -132,3 +133,117 @@ int unix_stream_listen(const char *path, errno = saved_errno; return -1; } + +static int is_another_server_alive(const char *path, + const struct unix_stream_listen_opts *opts) +{ + struct stat st; + int fd; + + if (!lstat(path, &st) && S_ISSOCK(st.st_mode)) { + /* + * A socket-inode exists on disk at `path`, but we + * don't know whether it belongs to an active server + * or whether the last server died without cleaning + * up. + * + * Poke it with a trivial connection to try to find + * out. + */ + fd = unix_stream_connect(path, opts->disallow_chdir); + if (fd >= 0) { + close(fd); + return 1; + } + } + + return 0; +} + +struct unix_stream_server_socket *unix_stream_server__listen_with_lock( + const char *path, + const struct unix_stream_listen_opts *opts) +{ + struct lock_file lock = LOCK_INIT; + int fd_socket; + struct unix_stream_server_socket *server_socket; + + /* + * Create a lock at ".lock" if we can. + */ + if (hold_lock_file_for_update_timeout(&lock, path, 0, + opts->timeout_ms) < 0) { + error_errno(_("could not lock listener socket '%s'"), path); + return NULL; + } + + /* + * If another server is listening on "" give up. We do not + * want to create a socket and steal future connections from them. + */ + if (is_another_server_alive(path, opts)) { + errno = EADDRINUSE; + error_errno(_("listener socket already in use '%s'"), path); + rollback_lock_file(&lock); + return NULL; + } + + /* + * Create and bind to a Unix domain socket at "". + */ + fd_socket = unix_stream_listen(path, opts); + if (fd_socket < 0) { + error_errno(_("could not create listener socket '%s'"), path); + rollback_lock_file(&lock); + return NULL; + } + + server_socket = xcalloc(1, sizeof(*server_socket)); + server_socket->path_socket = strdup(path); + server_socket->fd_socket = fd_socket; + lstat(path, &server_socket->st_socket); + + /* + * Always rollback (just delete) ".lock" because we already created + * "" as a socket and do not want to commit_lock to do the atomic + * rename trick.
+ */ + rollback_lock_file(&lock); + + return server_socket; +} + +void unix_stream_server__free( + struct unix_stream_server_socket *server_socket) +{ + if (!server_socket) + return; + + if (server_socket->fd_socket >= 0) { + if (!unix_stream_server__was_stolen(server_socket)) + unlink(server_socket->path_socket); + close(server_socket->fd_socket); + } + + free(server_socket->path_socket); + free(server_socket); +} + +int unix_stream_server__was_stolen( + struct unix_stream_server_socket *server_socket) +{ + struct stat st_now; + + if (!server_socket) + return 0; + + if (lstat(server_socket->path_socket, &st_now) == -1) + return 1; + + if (st_now.st_ino != server_socket->st_socket.st_ino) + return 1; + + /* We might also consider the ctime on some platforms. */ + + return 0; +} diff --git a/unix-socket.h b/unix-socket.h index 2c0b2e79d7b3..8faf5b692f90 100644 --- a/unix-socket.h +++ b/unix-socket.h @@ -2,14 +2,17 @@ #define UNIX_SOCKET_H struct unix_stream_listen_opts { + long timeout_ms; int listen_backlog_size; unsigned int disallow_chdir:1; }; +#define DEFAULT_UNIX_STREAM_LISTEN_TIMEOUT (100) #define DEFAULT_UNIX_STREAM_LISTEN_BACKLOG (5) #define UNIX_STREAM_LISTEN_OPTS_INIT \ { \ + .timeout_ms = DEFAULT_UNIX_STREAM_LISTEN_TIMEOUT, \ .listen_backlog_size = DEFAULT_UNIX_STREAM_LISTEN_BACKLOG, \ .disallow_chdir = 0, \ } @@ -18,4 +21,30 @@ int unix_stream_connect(const char *path, int disallow_chdir); int unix_stream_listen(const char *path, const struct unix_stream_listen_opts *opts); +struct unix_stream_server_socket { + char *path_socket; + struct stat st_socket; + int fd_socket; +}; + +/* + * Create a Unix Domain Socket at the given path under the protection + * of a '.lock' lockfile. + */ +struct unix_stream_server_socket *unix_stream_server__listen_with_lock( + const char *path, + const struct unix_stream_listen_opts *opts); + +/* + * Close and delete the socket. + */ +void unix_stream_server__free( + struct unix_stream_server_socket *server_socket); + +/* + * Return 1 if the inode of the pathname to our socket changes. 
+ */ +int unix_stream_server__was_stolen( + struct unix_stream_server_socket *server_socket); + #endif /* UNIX_SOCKET_H */ From patchwork Wed Feb 17 21:48:47 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Hostetler X-Patchwork-Id: 12092471 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3D5A1C433DB for ; Wed, 17 Feb 2021 21:51:30 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id E787164E7A for ; Wed, 17 Feb 2021 21:51:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232967AbhBQVvL (ORCPT ); Wed, 17 Feb 2021 16:51:11 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58314 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232161AbhBQVuP (ORCPT ); Wed, 17 Feb 2021 16:50:15 -0500 Received: from mail-wr1-x42c.google.com (mail-wr1-x42c.google.com [IPv6:2a00:1450:4864:20::42c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 43423C0617A7 for ; Wed, 17 Feb 2021 13:48:59 -0800 (PST) Received: by mail-wr1-x42c.google.com with SMTP id v1so34396wrd.6 for ; Wed, 17 Feb 2021 13:48:59 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=eZXjHOsZEerzYCjEv6rp9xYsvp5DJvu1L8nA4ANLFUM=; b=WqbkzYtlPva+MGcEVuY5E7fqgmBqxH8VXYGfOrTkmWIIY5gG2WBZx7CmGjADHnri7l 5bmg2ZFgpRmNOV6K2aVEtQdf+OPqyNCZmeOnsbcR4VFTRwd03b+cA0iPEVgEzAkhpg3b duBnzkmhEvlshWUIizAIz+oTp2o55J/pRdXWtg9KHa8F2/6SucQqM4yrVGiBmL1mj3UV 83UbY0HevGLXNQ38LHbV6Dx7INJcWo4nvhoLnjGSodpLYeyH1oU/JLN0sLrzYqu5KLDK BwmaA6nu439xnWEcfpBBEvv+xmjMcwphUMKdQen6VHIh59ctcR37AzZhMe0X8MM0evdS 0LuA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=eZXjHOsZEerzYCjEv6rp9xYsvp5DJvu1L8nA4ANLFUM=; b=ZJhvea7OMYuxqdCtHqB+bDvKeTcqeeBCtCUvy5LcH+QcSBZlcDbhMFvFLem7Howb/5 TAMJu9S6d08nxcWRvvRacsLW6OxqMkDd5oTlEP236EgA5Ja/7jcgl5ijvtxWeBGF2H/f J+8FoxcHnsII+g3U9VGvQtt3NjNsHPkrdTeVTxNUcituCQ1LnmiG3BW0o+tH7uVXPDlx 6lK+dq5GT4iA9aSv+BcRCvTc03N0wjNnxgUIahdTzQJVnxxjVr1im7YpXX48zFVjW/cT n/W2HAeXFcsgOz0CxOt14kUmcYvgDtYBwGnsj03e8XDxZPCE27B+OV+9oqRMon8nemgk Dbvw== X-Gm-Message-State: AOAM531bMuBDKN8jEanyfnOMvdoFojBT4lJM1Kt91s0u2kFWZio1driX ZxafPoP7ah4nlTxLcJT0M9Ca6KCyeJ4= X-Google-Smtp-Source: ABdhPJxxZCwmALzGCESVm1Dgd04A4VfqT9dveLKelBEl3kzNtHIRO8nYz0Um7lDn3NpaRHz5O5xorw== X-Received: by 2002:a5d:570a:: with SMTP id a10mr1213769wrv.70.1613598537658; Wed, 17 Feb 2021 13:48:57 -0800 (PST) Received: from [127.0.0.1] ([13.74.141.28]) by smtp.gmail.com with ESMTPSA id f14sm4610592wmc.32.2021.02.17.13.48.57 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 17 Feb 2021 13:48:57 -0800 (PST) Message-Id: 
<43c8db9a4468c0ca50e8f4efa55ab01a77cafcf6.1613598529.git.gitgitgadget@gmail.com> In-Reply-To: References: Date: Wed, 17 Feb 2021 21:48:47 +0000 Subject: [PATCH v4 11/12] simple-ipc: add Unix domain socket implementation Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: Jeff Hostetler , Jeff King , SZEDER =?utf-8?b?R8OhYm9y?= , Johannes Schindelin , Jeff Hostetler , Jeff Hostetler Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler From: Jeff Hostetler Create a Unix domain socket based implementation of "simple-ipc". A set of `ipc_client` routines implement a client library to connect to an `ipc_server` over a Unix domain socket, send a simple request, and receive a single response. Clients use blocking IO on the socket. A set of `ipc_server` routines implement a thread pool to listen for and concurrently service client connections. The server creates a new Unix domain socket at a known location. If a socket already exists with that name, the server tries to determine if another server is already listening on the socket or if the socket is dead. If the socket is busy, the server exits with an error rather than stealing the socket. If the socket is dead, the server creates a new one and starts up. If, while running, the server detects that its socket has been stolen by another server, it automatically exits. Signed-off-by: Jeff Hostetler --- Makefile | 2 + compat/simple-ipc/ipc-unix-socket.c | 979 ++++++++++++++++++++++++++++ contrib/buildsystems/CMakeLists.txt | 2 + simple-ipc.h | 13 +- 4 files changed, 995 insertions(+), 1 deletion(-) create mode 100644 compat/simple-ipc/ipc-unix-socket.c diff --git a/Makefile b/Makefile index 40d5cab78d3f..08a4c88b92f5 100644 --- a/Makefile +++ b/Makefile @@ -1677,6 +1677,8 @@ ifdef NO_UNIX_SOCKETS BASIC_CFLAGS += -DNO_UNIX_SOCKETS else LIB_OBJS += unix-socket.o + LIB_OBJS += compat/simple-ipc/ipc-shared.o + LIB_OBJS += compat/simple-ipc/ipc-unix-socket.o endif ifdef USE_WIN32_IPC diff --git a/compat/simple-ipc/ipc-unix-socket.c b/compat/simple-ipc/ipc-unix-socket.c new file mode 100644 index 000000000000..b7fd0b34329e --- /dev/null +++ b/compat/simple-ipc/ipc-unix-socket.c @@ -0,0 +1,979 @@ +#include "cache.h" +#include "simple-ipc.h" +#include "strbuf.h" +#include "pkt-line.h" +#include "thread-utils.h" +#include "unix-socket.h" + +#ifdef NO_UNIX_SOCKETS +#error compat/simple-ipc/ipc-unix-socket.c requires Unix sockets +#endif + +enum ipc_active_state ipc_get_active_state(const char *path) +{ + enum ipc_active_state state = IPC_STATE__OTHER_ERROR; + struct ipc_client_connect_options options + = IPC_CLIENT_CONNECT_OPTIONS_INIT; + struct stat st; + struct ipc_client_connection *connection_test = NULL; + + options.wait_if_busy = 0; + options.wait_if_not_found = 0; + + if (lstat(path, &st) == -1) { + switch (errno) { + case ENOENT: + case ENOTDIR: + return IPC_STATE__NOT_LISTENING; + default: + return IPC_STATE__INVALID_PATH; + } + } + + /* also complain if a plain file is in the way */ + if ((st.st_mode & S_IFMT) != S_IFSOCK) + return IPC_STATE__INVALID_PATH; + + /* + * Just because the filesystem has an S_IFSOCK type inode + * at `path`, doesn't mean that there is a server listening. + * Ping it to be sure. + */ + state = ipc_client_try_connect(path, &options, &connection_test); + ipc_client_close_connection(connection_test); + + return state; +} + +/* + * This value was chosen at random. + */ +#define WAIT_STEP_MS (50) + +/* + * Try to connect to the server.
If the server is just starting up or + * is very busy, we may not get a connection the first time. + */ +static enum ipc_active_state connect_to_server( + const char *path, + int timeout_ms, + const struct ipc_client_connect_options *options, + int *pfd) +{ + int wait_ms = 50; + int k; + + *pfd = -1; + + for (k = 0; k < timeout_ms; k += wait_ms) { + int fd = unix_stream_connect(path, options->uds_disallow_chdir); + + if (fd != -1) { + *pfd = fd; + return IPC_STATE__LISTENING; + } + + if (errno == ENOENT) { + if (!options->wait_if_not_found) + return IPC_STATE__PATH_NOT_FOUND; + + goto sleep_and_try_again; + } + + if (errno == ETIMEDOUT) { + if (!options->wait_if_busy) + return IPC_STATE__NOT_LISTENING; + + goto sleep_and_try_again; + } + + if (errno == ECONNREFUSED) { + if (!options->wait_if_busy) + return IPC_STATE__NOT_LISTENING; + + goto sleep_and_try_again; + } + + return IPC_STATE__OTHER_ERROR; + + sleep_and_try_again: + sleep_millisec(wait_ms); + } + + return IPC_STATE__NOT_LISTENING; +} + +/* + * A randomly chosen timeout value. + */ +#define MY_CONNECTION_TIMEOUT_MS (1000) + +enum ipc_active_state ipc_client_try_connect( + const char *path, + const struct ipc_client_connect_options *options, + struct ipc_client_connection **p_connection) +{ + enum ipc_active_state state = IPC_STATE__OTHER_ERROR; + int fd = -1; + + *p_connection = NULL; + + trace2_region_enter("ipc-client", "try-connect", NULL); + trace2_data_string("ipc-client", NULL, "try-connect/path", path); + + state = connect_to_server(path, MY_CONNECTION_TIMEOUT_MS, + options, &fd); + + trace2_data_intmax("ipc-client", NULL, "try-connect/state", + (intmax_t)state); + trace2_region_leave("ipc-client", "try-connect", NULL); + + if (state == IPC_STATE__LISTENING) { + (*p_connection) = xcalloc(1, sizeof(struct ipc_client_connection)); + (*p_connection)->fd = fd; + } + + return state; +} + +void ipc_client_close_connection(struct ipc_client_connection *connection) +{ + if (!connection) + return; + + if (connection->fd != -1) + close(connection->fd); + + free(connection); +} + +int ipc_client_send_command_to_connection( + struct ipc_client_connection *connection, + const char *message, struct strbuf *answer) +{ + int ret = 0; + + strbuf_setlen(answer, 0); + + trace2_region_enter("ipc-client", "send-command", NULL); + + if (write_packetized_from_buf_no_flush(message, strlen(message), + connection->fd) < 0 || + packet_flush_gently(connection->fd) < 0) { + ret = error(_("could not send IPC command")); + goto done; + } + + if (read_packetized_to_strbuf( + connection->fd, answer, + PACKET_READ_GENTLE_ON_EOF | PACKET_READ_NEVER_DIE) < 0) { + ret = error(_("could not read IPC response")); + goto done; + } + +done: + trace2_region_leave("ipc-client", "send-command", NULL); + return ret; +} + +int ipc_client_send_command(const char *path, + const struct ipc_client_connect_options *options, + const char *message, struct strbuf *answer) +{ + int ret = -1; + enum ipc_active_state state; + struct ipc_client_connection *connection = NULL; + + state = ipc_client_try_connect(path, options, &connection); + + if (state != IPC_STATE__LISTENING) + return ret; + + ret = ipc_client_send_command_to_connection(connection, message, answer); + + ipc_client_close_connection(connection); + + return ret; +} + +static int set_socket_blocking_flag(int fd, int make_nonblocking) +{ + int flags; + + flags = fcntl(fd, F_GETFL, NULL); + + if (flags < 0) + return -1; + + if (make_nonblocking) + flags |= O_NONBLOCK; + else + flags &= ~O_NONBLOCK; + + return fcntl(fd, 
F_SETFL, flags); +} + +/* + * Magic numbers used to annotate callback instance data. + * These are used to help guard against accidentally passing the + * wrong instance data across multiple levels of callbacks (which + * is easy to do if there are `void*` arguments). + */ +enum magic { + MAGIC_SERVER_REPLY_DATA, + MAGIC_WORKER_THREAD_DATA, + MAGIC_ACCEPT_THREAD_DATA, + MAGIC_SERVER_DATA, +}; + +struct ipc_server_reply_data { + enum magic magic; + int fd; + struct ipc_worker_thread_data *worker_thread_data; +}; + +struct ipc_worker_thread_data { + enum magic magic; + struct ipc_worker_thread_data *next_thread; + struct ipc_server_data *server_data; + pthread_t pthread_id; +}; + +struct ipc_accept_thread_data { + enum magic magic; + struct ipc_server_data *server_data; + + struct unix_stream_server_socket *server_socket; + + int fd_send_shutdown; + int fd_wait_shutdown; + pthread_t pthread_id; +}; + +/* + * With unix-sockets, the conceptual "ipc-server" is implemented as a single + * controller "accept-thread" thread and a pool of "worker-thread" threads. + * The former does the usual `accept()` loop and dispatches connections + * to an idle worker thread. The worker threads wait in an idle loop for + * a new connection, communicate with the client and relay data to/from + * the `application_cb` and then wait for another connection from the + * server thread. This avoids the overhead of constantly creating and + * destroying threads. + */ +struct ipc_server_data { + enum magic magic; + ipc_server_application_cb *application_cb; + void *application_data; + struct strbuf buf_path; + + struct ipc_accept_thread_data *accept_thread; + struct ipc_worker_thread_data *worker_thread_list; + + pthread_mutex_t work_available_mutex; + pthread_cond_t work_available_cond; + + /* + * Accepted but not yet processed client connections are kept + * in a circular buffer FIFO. The queue is empty when the + * positions are equal. + */ + int *fifo_fds; + int queue_size; + int back_pos; + int front_pos; + + int shutdown_requested; + int is_stopped; +}; + +/* + * Remove and return the oldest queued connection. + * + * Returns -1 if empty. + */ +static int fifo_dequeue(struct ipc_server_data *server_data) +{ + /* ASSERT holding mutex */ + + int fd; + + if (server_data->back_pos == server_data->front_pos) + return -1; + + fd = server_data->fifo_fds[server_data->front_pos]; + server_data->fifo_fds[server_data->front_pos] = -1; + + server_data->front_pos++; + if (server_data->front_pos == server_data->queue_size) + server_data->front_pos = 0; + + return fd; +} + +/* + * Push a new fd onto the back of the queue. + * + * Drop it and return -1 if queue is already full. + */ +static int fifo_enqueue(struct ipc_server_data *server_data, int fd) +{ + /* ASSERT holding mutex */ + + int next_back_pos; + + next_back_pos = server_data->back_pos + 1; + if (next_back_pos == server_data->queue_size) + next_back_pos = 0; + + if (next_back_pos == server_data->front_pos) { + /* Queue is full. Just drop it. */ + close(fd); + return -1; + } + + server_data->fifo_fds[server_data->back_pos] = fd; + server_data->back_pos = next_back_pos; + + return fd; +} + +/* + * Wait for a connection to be queued to the FIFO and return it. + * + * Returns -1 if someone has already requested a shutdown. 
+ */ +static int worker_thread__wait_for_connection( + struct ipc_worker_thread_data *worker_thread_data) +{ + /* ASSERT NOT holding mutex */ + + struct ipc_server_data *server_data = worker_thread_data->server_data; + int fd = -1; + + pthread_mutex_lock(&server_data->work_available_mutex); + for (;;) { + if (server_data->shutdown_requested) + break; + + fd = fifo_dequeue(server_data); + if (fd >= 0) + break; + + pthread_cond_wait(&server_data->work_available_cond, + &server_data->work_available_mutex); + } + pthread_mutex_unlock(&server_data->work_available_mutex); + + return fd; +} + +/* + * Forward declare our reply callback function so that any compiler + * errors are reported when we actually define the function (in addition + * to any errors reported when we try to pass this callback function as + * a parameter in a function call). The former are easier to understand. + */ +static ipc_server_reply_cb do_io_reply_callback; + +/* + * Relay application's response message to the client process. + * (We do not flush at this point because we allow the caller + * to chunk data to the client thru us.) + */ +static int do_io_reply_callback(struct ipc_server_reply_data *reply_data, + const char *response, size_t response_len) +{ + if (reply_data->magic != MAGIC_SERVER_REPLY_DATA) + BUG("reply_cb called with wrong instance data"); + + return write_packetized_from_buf_no_flush(response, response_len, + reply_data->fd); +} + +/* A randomly chosen value. */ +#define MY_WAIT_POLL_TIMEOUT_MS (10) + +/* + * If the client hangs up without sending any data on the wire, just + * quietly close the socket and ignore this client. + * + * This worker thread is committed to reading the IPC request data + * from the client at the other end of this fd. Wait here for the + * client to actually put something on the wire -- because if the + * client just does a ping (connect and hangup without sending any + * data), our use of the pkt-line read routines will spew an error + * message. + * + * Return -1 if the client hung up. + * Return 0 if data (possibly incomplete) is ready. + */ +static int worker_thread__wait_for_io_start( + struct ipc_worker_thread_data *worker_thread_data, + int fd) +{ + struct ipc_server_data *server_data = worker_thread_data->server_data; + struct pollfd pollfd[1]; + int result; + + for (;;) { + pollfd[0].fd = fd; + pollfd[0].events = POLLIN; + + result = poll(pollfd, 1, MY_WAIT_POLL_TIMEOUT_MS); + if (result < 0) { + if (errno == EINTR) + continue; + goto cleanup; + } + + if (result == 0) { + /* a timeout */ + + int in_shutdown; + + pthread_mutex_lock(&server_data->work_available_mutex); + in_shutdown = server_data->shutdown_requested; + pthread_mutex_unlock(&server_data->work_available_mutex); + + /* + * If a shutdown is already in progress and this + * client has not started talking yet, just drop it. + */ + if (in_shutdown) + goto cleanup; + continue; + } + + if (pollfd[0].revents & POLLHUP) + goto cleanup; + + if (pollfd[0].revents & POLLIN) + return 0; + + goto cleanup; + } + +cleanup: + close(fd); + return -1; +} + +/* + * Receive the request/command from the client and pass it to the + * registered request-callback. The request-callback will compose + * a response and call our reply-callback to send it to the client. 
+ */ +static int worker_thread__do_io( + struct ipc_worker_thread_data *worker_thread_data, + int fd) +{ + /* ASSERT NOT holding lock */ + + struct strbuf buf = STRBUF_INIT; + struct ipc_server_reply_data reply_data; + int ret = 0; + + reply_data.magic = MAGIC_SERVER_REPLY_DATA; + reply_data.worker_thread_data = worker_thread_data; + + reply_data.fd = fd; + + ret = read_packetized_to_strbuf( + reply_data.fd, &buf, + PACKET_READ_GENTLE_ON_EOF | PACKET_READ_NEVER_DIE); + if (ret >= 0) { + ret = worker_thread_data->server_data->application_cb( + worker_thread_data->server_data->application_data, + buf.buf, do_io_reply_callback, &reply_data); + + packet_flush_gently(reply_data.fd); + } + else { + /* + * The client probably disconnected/shutdown before it + * could send a well-formed message. Ignore it. + */ + } + + strbuf_release(&buf); + close(reply_data.fd); + + return ret; +} + +/* + * Block SIGPIPE on the current thread (so that we get EPIPE from + * write() rather than an actual signal). + * + * Note that using sigchain_push() and _pop() to control SIGPIPE + * around our IO calls is not thread safe: + * [] It uses a global stack of handler frames. + * [] It uses ALLOC_GROW() to resize it. + * [] Finally, according to the `signal(2)` man-page: + * "The effects of `signal()` in a multithreaded process are unspecified." + */ +static void thread_block_sigpipe(sigset_t *old_set) +{ + sigset_t new_set; + + sigemptyset(&new_set); + sigaddset(&new_set, SIGPIPE); + + sigemptyset(old_set); + pthread_sigmask(SIG_BLOCK, &new_set, old_set); +} + +/* + * Thread proc for an IPC worker thread. It handles a series of + * connections from clients. It pulls the next fd from the queue + * processes it, and then waits for the next client. + * + * Block SIGPIPE in this worker thread for the life of the thread. + * This avoids stray (and sometimes delayed) SIGPIPE signals caused + * by client errors and/or when we are under extremely heavy IO load. + * + * This means that the application callback will have SIGPIPE blocked. + * The callback should not change it. + */ +static void *worker_thread_proc(void *_worker_thread_data) +{ + struct ipc_worker_thread_data *worker_thread_data = _worker_thread_data; + struct ipc_server_data *server_data = worker_thread_data->server_data; + sigset_t old_set; + int fd, io; + int ret; + + trace2_thread_start("ipc-worker"); + + thread_block_sigpipe(&old_set); + + for (;;) { + fd = worker_thread__wait_for_connection(worker_thread_data); + if (fd == -1) + break; /* in shutdown */ + + io = worker_thread__wait_for_io_start(worker_thread_data, fd); + if (io == -1) + continue; /* client hung up without sending anything */ + + ret = worker_thread__do_io(worker_thread_data, fd); + + if (ret == SIMPLE_IPC_QUIT) { + trace2_data_string("ipc-worker", NULL, "queue_stop_async", + "application_quit"); + /* + * The application layer is telling the ipc-server + * layer to shutdown. + * + * We DO NOT have a response to send to the client. + * + * Queue an async stop (to stop the other threads) and + * allow this worker thread to exit now (no sense waiting + * for the thread-pool shutdown signal). + * + * Other non-idle worker threads are allowed to finish + * responding to their current clients. + */ + ipc_server_stop_async(server_data); + break; + } + } + + trace2_thread_exit(); + return NULL; +} + +/* A randomly chosen value. */ +#define MY_ACCEPT_POLL_TIMEOUT_MS (60 * 1000) + +/* + * Accept a new client connection on our socket. 
This uses non-blocking + * IO so that we can also wait for shutdown requests on our socket-pair + * without actually spinning on a fast timeout. + */ +static int accept_thread__wait_for_connection( + struct ipc_accept_thread_data *accept_thread_data) +{ + struct pollfd pollfd[2]; + int result; + + for (;;) { + pollfd[0].fd = accept_thread_data->fd_wait_shutdown; + pollfd[0].events = POLLIN; + + pollfd[1].fd = accept_thread_data->server_socket->fd_socket; + pollfd[1].events = POLLIN; + + result = poll(pollfd, 2, MY_ACCEPT_POLL_TIMEOUT_MS); + if (result < 0) { + if (errno == EINTR) + continue; + return result; + } + + if (result == 0) { + /* a timeout */ + + /* + * If someone deletes or force-creates a new unix + * domain socket at our path, all future clients + * will be routed elsewhere and we silently starve. + * If that happens, just queue a shutdown. + */ + if (unix_stream_server__was_stolen( + accept_thread_data->server_socket)) { + trace2_data_string("ipc-accept", NULL, + "queue_stop_async", + "socket_stolen"); + ipc_server_stop_async( + accept_thread_data->server_data); + } + continue; + } + + if (pollfd[0].revents & POLLIN) { + /* shutdown message queued to socketpair */ + return -1; + } + + if (pollfd[1].revents & POLLIN) { + /* a connection is available on server_socket */ + + int client_fd = + accept(accept_thread_data->server_socket->fd_socket, + NULL, NULL); + if (client_fd >= 0) + return client_fd; + + /* + * An error here is unlikely -- it probably + * indicates that the connecting process has + * already dropped the connection. + */ + continue; + } + + BUG("unhandled poll result errno=%d r[0]=%d r[1]=%d", + errno, pollfd[0].revents, pollfd[1].revents); + } +} + +/* + * Thread proc for the IPC server "accept thread". This waits for + * an incoming socket connection, appends it to the queue of available + * connections, and notifies a worker thread to process it. + * + * Block SIGPIPE in this thread for the life of the thread. This + * avoids any stray SIGPIPE signals when closing pipe fds under + * extremely heavy loads (such as when the fifo queue is full and we + * drop incoming connections). + */ +static void *accept_thread_proc(void *_accept_thread_data) +{ + struct ipc_accept_thread_data *accept_thread_data = _accept_thread_data; + struct ipc_server_data *server_data = accept_thread_data->server_data; + sigset_t old_set; + + trace2_thread_start("ipc-accept"); + + thread_block_sigpipe(&old_set); + + for (;;) { + int client_fd = accept_thread__wait_for_connection( + accept_thread_data); + + pthread_mutex_lock(&server_data->work_available_mutex); + if (server_data->shutdown_requested) { + pthread_mutex_unlock(&server_data->work_available_mutex); + if (client_fd >= 0) + close(client_fd); + break; + } + + if (client_fd < 0) { + /* ignore transient accept() errors */ + } + else { + fifo_enqueue(server_data, client_fd); + pthread_cond_broadcast(&server_data->work_available_cond); + } + pthread_mutex_unlock(&server_data->work_available_mutex); + } + + trace2_thread_exit(); + return NULL; +} + +/* + * We can't predict the connection arrival rate relative to the worker + * processing rate; therefore we allow the "accept-thread" to queue up + * a generous number of connections, since we'd rather the client not + * time out unnecessarily if we can avoid it. (The assumption is + * that this will be used for FSMonitor and a few-second wait on a + * connection is better than having the client time out and do the full + * computation itself.)
+ * + * The FIFO queue size is set to a multiple of the worker pool size. + * This value was chosen at random. + */ +#define FIFO_SCALE (100) + +/* + * The backlog value for `listen(2)`. This doesn't need to be huge, + * just large enough for our "accept-thread" to wake up and + * queue incoming connections onto the FIFO without the kernel + * dropping any. + * + * This value was chosen at random. + */ +#define LISTEN_BACKLOG (50) + +static struct unix_stream_server_socket *create_listener_socket( + const char *path, + const struct ipc_server_opts *ipc_opts) +{ + struct unix_stream_server_socket *server_socket = NULL; + struct unix_stream_listen_opts uslg_opts = UNIX_STREAM_LISTEN_OPTS_INIT; + + uslg_opts.listen_backlog_size = LISTEN_BACKLOG; + uslg_opts.disallow_chdir = ipc_opts->uds_disallow_chdir; + + server_socket = unix_stream_server__listen_with_lock(path, &uslg_opts); + if (!server_socket) + return NULL; + + if (set_socket_blocking_flag(server_socket->fd_socket, 1)) { + int saved_errno = errno; + error_errno(_("could not set listener socket nonblocking '%s'"), + path); + unix_stream_server__free(server_socket); + errno = saved_errno; + return NULL; + } + + trace2_data_string("ipc-server", NULL, "listen-with-lock", path); + return server_socket; +} + +static struct unix_stream_server_socket *setup_listener_socket( + const char *path, + const struct ipc_server_opts *ipc_opts) +{ + struct unix_stream_server_socket *server_socket; + + trace2_region_enter("ipc-server", "create-listener_socket", NULL); + server_socket = create_listener_socket(path, ipc_opts); + trace2_region_leave("ipc-server", "create-listener_socket", NULL); + + return server_socket; +} + +/* + * Start the IPC server in a pool of background threads. + */ +int ipc_server_run_async(struct ipc_server_data **returned_server_data, + const char *path, const struct ipc_server_opts *opts, + ipc_server_application_cb *application_cb, + void *application_data) +{ + struct unix_stream_server_socket *server_socket = NULL; + struct ipc_server_data *server_data; + int sv[2]; + int k; + int nr_threads = opts->nr_threads; + + *returned_server_data = NULL; + + /* + * Create a socketpair and set sv[1] to non-blocking. This + * will be used to send a shutdown message to the accept-thread + * and allows the accept-thread to wait on EITHER a client + * connection or a shutdown request without spinning.
+ */ + if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) + return error_errno(_("could not create socketpair for '%s'"), + path); + + if (set_socket_blocking_flag(sv[1], 1)) { + int saved_errno = errno; + close(sv[0]); + close(sv[1]); + errno = saved_errno; + return error_errno(_("making socketpair nonblocking '%s'"), + path); + } + + server_socket = setup_listener_socket(path, opts); + if (!server_socket) { + int saved_errno = errno; + close(sv[0]); + close(sv[1]); + errno = saved_errno; + return -1; + } + + server_data = xcalloc(1, sizeof(*server_data)); + server_data->magic = MAGIC_SERVER_DATA; + server_data->application_cb = application_cb; + server_data->application_data = application_data; + strbuf_init(&server_data->buf_path, 0); + strbuf_addstr(&server_data->buf_path, path); + + if (nr_threads < 1) + nr_threads = 1; + + pthread_mutex_init(&server_data->work_available_mutex, NULL); + pthread_cond_init(&server_data->work_available_cond, NULL); + + server_data->queue_size = nr_threads * FIFO_SCALE; + server_data->fifo_fds = xcalloc(server_data->queue_size, + sizeof(*server_data->fifo_fds)); + + server_data->accept_thread = + xcalloc(1, sizeof(*server_data->accept_thread)); + server_data->accept_thread->magic = MAGIC_ACCEPT_THREAD_DATA; + server_data->accept_thread->server_data = server_data; + server_data->accept_thread->server_socket = server_socket; + server_data->accept_thread->fd_send_shutdown = sv[0]; + server_data->accept_thread->fd_wait_shutdown = sv[1]; + + if (pthread_create(&server_data->accept_thread->pthread_id, NULL, + accept_thread_proc, server_data->accept_thread)) + die_errno(_("could not start accept_thread '%s'"), path); + + for (k = 0; k < nr_threads; k++) { + struct ipc_worker_thread_data *wtd; + + wtd = xcalloc(1, sizeof(*wtd)); + wtd->magic = MAGIC_WORKER_THREAD_DATA; + wtd->server_data = server_data; + + if (pthread_create(&wtd->pthread_id, NULL, worker_thread_proc, + wtd)) { + if (k == 0) + die(_("could not start worker[0] for '%s'"), + path); + /* + * Limp along with the thread pool that we have. + */ + break; + } + + wtd->next_thread = server_data->worker_thread_list; + server_data->worker_thread_list = wtd; + } + + *returned_server_data = server_data; + return 0; +} + +/* + * Gently tell the IPC server threads to shutdown. + * Can be run on any thread. + */ +int ipc_server_stop_async(struct ipc_server_data *server_data) +{ + /* ASSERT NOT holding mutex */ + + int fd; + + if (!server_data) + return 0; + + trace2_region_enter("ipc-server", "server-stop-async", NULL); + + pthread_mutex_lock(&server_data->work_available_mutex); + + server_data->shutdown_requested = 1; + + /* + * Write a byte to the shutdown socket pair to wake up the + * accept-thread. + */ + if (write(server_data->accept_thread->fd_send_shutdown, "Q", 1) < 0) + error_errno("could not write to fd_send_shutdown"); + + /* + * Drain the queue of existing connections. + */ + while ((fd = fifo_dequeue(server_data)) != -1) + close(fd); + + /* + * Gently tell worker threads to stop processing new connections + * and exit. (This does not abort in-progress conversations.) + */ + pthread_cond_broadcast(&server_data->work_available_cond); + + pthread_mutex_unlock(&server_data->work_available_mutex); + + trace2_region_leave("ipc-server", "server-stop-async", NULL); + + return 0; +} + +/* + * Wait for all IPC server threads to stop.
+ */ +int ipc_server_await(struct ipc_server_data *server_data) +{ + pthread_join(server_data->accept_thread->pthread_id, NULL); + + if (!server_data->shutdown_requested) + BUG("ipc-server: accept-thread stopped for '%s'", + server_data->buf_path.buf); + + while (server_data->worker_thread_list) { + struct ipc_worker_thread_data *wtd = + server_data->worker_thread_list; + + pthread_join(wtd->pthread_id, NULL); + + server_data->worker_thread_list = wtd->next_thread; + free(wtd); + } + + server_data->is_stopped = 1; + + return 0; +} + +void ipc_server_free(struct ipc_server_data *server_data) +{ + struct ipc_accept_thread_data * accept_thread_data; + + if (!server_data) + return; + + if (!server_data->is_stopped) + BUG("cannot free ipc-server while running for '%s'", + server_data->buf_path.buf); + + accept_thread_data = server_data->accept_thread; + if (accept_thread_data) { + unix_stream_server__free(accept_thread_data->server_socket); + + if (accept_thread_data->fd_send_shutdown != -1) + close(accept_thread_data->fd_send_shutdown); + if (accept_thread_data->fd_wait_shutdown != -1) + close(accept_thread_data->fd_wait_shutdown); + + free(server_data->accept_thread); + } + + while (server_data->worker_thread_list) { + struct ipc_worker_thread_data *wtd = + server_data->worker_thread_list; + + server_data->worker_thread_list = wtd->next_thread; + free(wtd); + } + + pthread_cond_destroy(&server_data->work_available_cond); + pthread_mutex_destroy(&server_data->work_available_mutex); + + strbuf_release(&server_data->buf_path); + + free(server_data->fifo_fds); + free(server_data); +} diff --git a/contrib/buildsystems/CMakeLists.txt b/contrib/buildsystems/CMakeLists.txt index 4bd41054ee70..4c27a373414a 100644 --- a/contrib/buildsystems/CMakeLists.txt +++ b/contrib/buildsystems/CMakeLists.txt @@ -248,6 +248,8 @@ endif() if(CMAKE_SYSTEM_NAME STREQUAL "Windows") list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-win32.c) +else() + list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-unix-socket.c) endif() set(EXE_EXTENSION ${CMAKE_EXECUTABLE_SUFFIX}) diff --git a/simple-ipc.h b/simple-ipc.h index a3f96b42cca2..f7e72e966f9a 100644 --- a/simple-ipc.h +++ b/simple-ipc.h @@ -5,7 +5,7 @@ * See Documentation/technical/api-simple-ipc.txt */ -#if defined(GIT_WINDOWS_NATIVE) +#if defined(GIT_WINDOWS_NATIVE) || !defined(NO_UNIX_SOCKETS) #define SUPPORTS_SIMPLE_IPC #endif @@ -62,11 +62,17 @@ struct ipc_client_connect_options { * the service and need to wait for it to become ready. */ unsigned int wait_if_not_found:1; + + /* + * Disallow chdir() when creating a Unix domain socket. + */ + unsigned int uds_disallow_chdir:1; }; #define IPC_CLIENT_CONNECT_OPTIONS_INIT { \ .wait_if_busy = 0, \ .wait_if_not_found = 0, \ + .uds_disallow_chdir = 0, \ } /* @@ -159,6 +165,11 @@ struct ipc_server_data; struct ipc_server_opts { int nr_threads; + + /* + * Disallow chdir() when creating a Unix domain socket. 
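Taken together, the intended lifecycle for the asynchronous form of the server is: run, then (eventually) stop, join, and free. A hedged sketch of a caller, assuming the hypothetical application callback `my_app_cb` from above and a caller-chosen socket path; this is not part of the patch:

    static int run_my_daemon(const char *path)
    {
            struct ipc_server_data *server_data = NULL;
            struct ipc_server_opts opts = {
                    .nr_threads = 5,
            };

            if (ipc_server_run_async(&server_data, path, &opts,
                                     my_app_cb, NULL))
                    return -1;

            /*
             * ... do other work on this thread; a shutdown is triggered
             * either by the callback returning SIMPLE_IPC_QUIT or by an
             * explicit request, as below ...
             */
            ipc_server_stop_async(server_data);

            ipc_server_await(server_data);  /* join accept + worker threads */
            ipc_server_free(server_data);
            return 0;
    }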
+ */ + unsigned int uds_disallow_chdir:1; }; /* From patchwork Wed Feb 17 21:48:48 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Hostetler X-Patchwork-Id: 12092467 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 150AEC433E0 for ; Wed, 17 Feb 2021 21:51:10 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id CABE364E5F for ; Wed, 17 Feb 2021 21:51:09 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232934AbhBQVvE (ORCPT ); Wed, 17 Feb 2021 16:51:04 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58316 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232214AbhBQVuP (ORCPT ); Wed, 17 Feb 2021 16:50:15 -0500 Received: from mail-wr1-x42b.google.com (mail-wr1-x42b.google.com [IPv6:2a00:1450:4864:20::42b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E8EB7C0617A9 for ; Wed, 17 Feb 2021 13:48:59 -0800 (PST) Received: by mail-wr1-x42b.google.com with SMTP id t15so18907911wrx.13 for ; Wed, 17 Feb 2021 13:48:59 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=bw/FGMjh2bblL31w0RRjvMxCLQipoAXky8iZ0Zb16UY=; b=D/UHmhHwn/zChMhhiklIyyz8FN/e5s6Ns6pDDdaASbcV+bi/F1SRWIvWtT59Neszr5 O3eAyoX21WjHg7PZsoIvnO221a/gQDa/SdEb7dY81ZRQbJQ4mL0mQFD39mXfg611s4sW L1AdvDEI3zn2PPcAu9lk3rFX48mS8qxncfU921g0sz1d2Z1RUgPXPemq3PKpTrfr6lls JMqz1iFAhoSpO/sTLRP+VyK7uMTItP4H0k3icFY5zpBoa6+uhKL+SBpdGoZ88WP+QPT3 ugAqBlXM1Gna9uhK+tOr84wQPxkJjyVqxj1AdxrxvrfqENP05CxJK/Z6JNFS4TWRjCRj kpfQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=bw/FGMjh2bblL31w0RRjvMxCLQipoAXky8iZ0Zb16UY=; b=Wp2YKpbq1OP5xohTHgyqSFjxyqXjqBeHCdB2js4AyN7dgaz/8FvTRRsk2VYiw30rGj pRetOutGNapCFakto01Cy9pSh562c9nfOlua9kO5BDhVnPSkA4cbIiPG3TivceFfWykL 8aqbELe+1NumF3Qn7hVIYR5MB9J26MfxTSIJaguoYmwbd0C0LiE0OLjinyE0j7QuPbwZ GYsirVcdqOwcaMjP3pioJtDTepBkiigfMoYEbNuxBPEuCeKh+w3yRKs29qlQkIN9IRk3 bJa++XtSvEKGS2fyWMkrXd/teqfLIRtqGqG/mOBtheDYRBj9Qmfh8WroUFkeSLr7uabs IyMg== X-Gm-Message-State: AOAM530eWYJ66ok8iAIucLKJ3g2hIQFKoDKPLF76xz+tqqKuN6e/+c2p SvSuWEQfiqNCjxU0yz359KJOxjuQ33Y= X-Google-Smtp-Source: ABdhPJyuF8qeQ5Cw1TSZWptIcOVOSW175XwsnDddhGrjwRlyGNWLbD8iCm6mv7snJQ4vsYpG5lFD6A== X-Received: by 2002:a05:6000:104c:: with SMTP id c12mr1143160wrx.261.1613598538382; Wed, 17 Feb 2021 13:48:58 -0800 (PST) Received: from [127.0.0.1] ([13.74.141.28]) by smtp.gmail.com with ESMTPSA id s13sm4585746wmh.34.2021.02.17.13.48.57 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 17 Feb 2021 13:48:58 -0800 (PST) Message-Id: <09568a6500dde4a592a994b661a7beec23af32b4.1613598529.git.gitgitgadget@gmail.com> In-Reply-To: References: Date: Wed, 17 Feb 
2021 21:48:48 +0000 Subject: [PATCH v4 12/12] t0052: add simple-ipc tests and t/helper/test-simple-ipc tool Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: Jeff Hostetler , Jeff King , SZEDER =?utf-8?b?R8OhYm9y?= , Johannes Schindelin , Jeff Hostetler , Jeff Hostetler Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler From: Jeff Hostetler Create t0052-simple-ipc.sh with unit tests for the "simple-ipc" mechanism. Create t/helper/test-simple-ipc test tool to exercise the "simple-ipc" functions. When the tool is invoked with "run-daemon", it runs a server to listen for "simple-ipc" connections on a test socket or named pipe and responds to a set of commands to exercise/stress the communication setup. When the tool is invoked with "start-daemon", it spawns a "run-daemon" command in the background and waits for the server to become ready before exiting. (This helps make unit tests in t0052 more predictable and avoids the need for arbitrary sleeps in the test script.) The tool also has a series of client "send" commands to send commands and data to a server instance. Signed-off-by: Jeff Hostetler --- Makefile | 1 + t/helper/test-simple-ipc.c | 773 +++++++++++++++++++++++++++++++++++++ t/helper/test-tool.c | 1 + t/helper/test-tool.h | 1 + t/t0052-simple-ipc.sh | 122 ++++++ 5 files changed, 898 insertions(+) create mode 100644 t/helper/test-simple-ipc.c create mode 100755 t/t0052-simple-ipc.sh diff --git a/Makefile b/Makefile index 08a4c88b92f5..93f2e7ca9e1f 100644 --- a/Makefile +++ b/Makefile @@ -740,6 +740,7 @@ TEST_BUILTINS_OBJS += test-serve-v2.o TEST_BUILTINS_OBJS += test-sha1.o TEST_BUILTINS_OBJS += test-sha256.o TEST_BUILTINS_OBJS += test-sigchain.o +TEST_BUILTINS_OBJS += test-simple-ipc.o TEST_BUILTINS_OBJS += test-strcmp-offset.o TEST_BUILTINS_OBJS += test-string-list.o TEST_BUILTINS_OBJS += test-submodule-config.o diff --git a/t/helper/test-simple-ipc.c b/t/helper/test-simple-ipc.c new file mode 100644 index 000000000000..d67eaa9a6ecc --- /dev/null +++ b/t/helper/test-simple-ipc.c @@ -0,0 +1,773 @@ +/* + * test-simple-ipc.c: verify that the Inter-Process Communication works. + */ + +#include "test-tool.h" +#include "cache.h" +#include "strbuf.h" +#include "simple-ipc.h" +#include "parse-options.h" +#include "thread-utils.h" +#include "strvec.h" + +#ifndef SUPPORTS_SIMPLE_IPC +int cmd__simple_ipc(int argc, const char **argv) +{ + die("simple IPC not available on this platform"); +} +#else + +/* + * The test daemon defines an "application callback" that supports a + * series of commands (see `test_app_cb()`). + * + * Unknown commands are caught here and we send an error message back + * to the client process. + */ +static int app__unhandled_command(const char *command, + ipc_server_reply_cb *reply_cb, + struct ipc_server_reply_data *reply_data) +{ + struct strbuf buf = STRBUF_INIT; + int ret; + + strbuf_addf(&buf, "unhandled command: %s", command); + ret = reply_cb(reply_data, buf.buf, buf.len); + strbuf_release(&buf); + + return ret; +} + +/* + * Reply with a single very large buffer. This is to ensure that + * long response are properly handled -- whether the chunking occurs + * in the kernel or in the (probably pkt-line) layer. 
+ */ +#define BIG_ROWS (10000) +static int app__big_command(ipc_server_reply_cb *reply_cb, + struct ipc_server_reply_data *reply_data) +{ + struct strbuf buf = STRBUF_INIT; + int row; + int ret; + + for (row = 0; row < BIG_ROWS; row++) + strbuf_addf(&buf, "big: %.75d\n", row); + + ret = reply_cb(reply_data, buf.buf, buf.len); + strbuf_release(&buf); + + return ret; +} + +/* + * Reply with a series of lines. This is to ensure that we can incrementally + * compute the response and chunk it to the client. + */ +#define CHUNK_ROWS (10000) +static int app__chunk_command(ipc_server_reply_cb *reply_cb, + struct ipc_server_reply_data *reply_data) +{ + struct strbuf buf = STRBUF_INIT; + int row; + int ret; + + for (row = 0; row < CHUNK_ROWS; row++) { + strbuf_setlen(&buf, 0); + strbuf_addf(&buf, "big: %.75d\n", row); + ret = reply_cb(reply_data, buf.buf, buf.len); + } + + strbuf_release(&buf); + + return ret; +} + +/* + * Slowly reply with a series of lines. This is to model an expensive to + * compute chunked response (which might happen if this callback is running + * in a thread and is fighting for a lock with other threads). + */ +#define SLOW_ROWS (1000) +#define SLOW_DELAY_MS (10) +static int app__slow_command(ipc_server_reply_cb *reply_cb, + struct ipc_server_reply_data *reply_data) +{ + struct strbuf buf = STRBUF_INIT; + int row; + int ret; + + for (row = 0; row < SLOW_ROWS; row++) { + strbuf_setlen(&buf, 0); + strbuf_addf(&buf, "big: %.75d\n", row); + ret = reply_cb(reply_data, buf.buf, buf.len); + sleep_millisec(SLOW_DELAY_MS); + } + + strbuf_release(&buf); + + return ret; +} + +/* + * The client sent a command followed by a (possibly very) large buffer. + */ +static int app__sendbytes_command(const char *received, + ipc_server_reply_cb *reply_cb, + struct ipc_server_reply_data *reply_data) +{ + struct strbuf buf_resp = STRBUF_INIT; + const char *p = "?"; + int len_ballast = 0; + int k; + int errs = 0; + int ret; + + if (skip_prefix(received, "sendbytes ", &p)) + len_ballast = strlen(p); + + /* + * Verify that the ballast is n copies of a single letter. + * And that the multi-threaded IO layer didn't cross the streams. + */ + for (k = 1; k < len_ballast; k++) + if (p[k] != p[0]) + errs++; + + if (errs) + strbuf_addf(&buf_resp, "errs:%d\n", errs); + else + strbuf_addf(&buf_resp, "rcvd:%c%08d\n", p[0], len_ballast); + + ret = reply_cb(reply_data, buf_resp.buf, buf_resp.len); + + strbuf_release(&buf_resp); + + return ret; +} + +/* + * An arbitrary fixed address to verify that the application instance + * data is handled properly. + */ +static int my_app_data = 42; + +static ipc_server_application_cb test_app_cb; + +/* + * This is the "application callback" that sits on top of the + * "ipc-server". It completely defines the set of commands supported + * by this application. + */ +static int test_app_cb(void *application_data, + const char *command, + ipc_server_reply_cb *reply_cb, + struct ipc_server_reply_data *reply_data) +{ + /* + * Verify that we received the application-data that we passed + * when we started the ipc-server. (We have several layers of + * callbacks calling callbacks and it's easy to get things mixed + * up (especially when some are "void*").) + */ + if (application_data != (void*)&my_app_data) + BUG("application_cb: application_data pointer wrong"); + + if (!strcmp(command, "quit")) { + /* + * The client sent a "quit" command. This is an async + * request for the server to shutdown. 
+ * + * We DO NOT send the client a response message + * (because we have nothing to say and the other + * server threads have not yet stopped). + * + * Tell the ipc-server layer to start shutting down. + * This includes: stop listening for new connections + * on the socket/pipe and telling all worker threads + * to finish/drain their outgoing responses to other + * clients. + * + * This DOES NOT force an immediate sync shutdown. + */ + return SIMPLE_IPC_QUIT; + } + + if (!strcmp(command, "ping")) { + const char *answer = "pong"; + return reply_cb(reply_data, answer, strlen(answer)); + } + + if (!strcmp(command, "big")) + return app__big_command(reply_cb, reply_data); + + if (!strcmp(command, "chunk")) + return app__chunk_command(reply_cb, reply_data); + + if (!strcmp(command, "slow")) + return app__slow_command(reply_cb, reply_data); + + if (starts_with(command, "sendbytes ")) + return app__sendbytes_command(command, reply_cb, reply_data); + + return app__unhandled_command(command, reply_cb, reply_data); +} + +/* + * This process will run as a simple-ipc server and listen for IPC commands + * from client processes. + */ +static int daemon__run_server(const char *path, int argc, const char **argv) +{ + struct ipc_server_opts opts = { + .nr_threads = 5 + }; + + const char * const daemon_usage[] = { + N_("test-helper simple-ipc run-daemon ["), + NULL + }; + struct option daemon_options[] = { + OPT_INTEGER(0, "threads", &opts.nr_threads, + N_("number of threads in server thread pool")), + OPT_END() + }; + + argc = parse_options(argc, argv, NULL, daemon_options, daemon_usage, 0); + + if (opts.nr_threads < 1) + opts.nr_threads = 1; + + /* + * Synchronously run the ipc-server. We don't need any application + * instance data, so pass an arbitrary pointer (that we'll later + * verify made the round trip). + */ + return ipc_server_run(path, &opts, test_app_cb, (void*)&my_app_data); +} + +#ifndef GIT_WINDOWS_NATIVE +/* + * This is adapted from `daemonize()`. Use `fork()` to directly create and + * run the daemon in a child process. + */ +static int spawn_server(const char *path, + const struct ipc_server_opts *opts, + pid_t *pid) +{ + *pid = fork(); + + switch (*pid) { + case 0: + if (setsid() == -1) + error_errno(_("setsid failed")); + close(0); + close(1); + close(2); + sanitize_stdfds(); + + return ipc_server_run(path, opts, test_app_cb, (void*)&my_app_data); + + case -1: + return error_errno(_("could not spawn daemon in the background")); + + default: + return 0; + } +} +#else +/* + * Conceptually like `daemonize()` but different because Windows does not + * have `fork(2)`. Spawn a normal Windows child process but without the + * limitations of `start_command()` and `finish_command()`. + */ +static int spawn_server(const char *path, + const struct ipc_server_opts *opts, + pid_t *pid) +{ + char test_tool_exe[MAX_PATH]; + struct strvec args = STRVEC_INIT; + int in, out; + + GetModuleFileNameA(NULL, test_tool_exe, MAX_PATH); + + in = open("/dev/null", O_RDONLY); + out = open("/dev/null", O_WRONLY); + + strvec_push(&args, test_tool_exe); + strvec_push(&args, "simple-ipc"); + strvec_push(&args, "run-daemon"); + strvec_pushf(&args, "--threads=%d", opts->nr_threads); + + *pid = mingw_spawnvpe(args.v[0], args.v, NULL, NULL, in, out, out); + close(in); + close(out); + + strvec_clear(&args); + + if (*pid < 0) + return error(_("could not spawn daemon in the background")); + + return 0; +} +#endif + +/* + * This is adapted from `wait_or_whine()`. 
Watch the child process and + * let it get started and begin listening for requests on the socket + * before reporting our success. + */ +static int wait_for_server_startup(const char * path, pid_t pid_child, + int max_wait_sec) +{ + int status; + pid_t pid_seen; + enum ipc_active_state s; + time_t time_limit, now; + + time(&time_limit); + time_limit += max_wait_sec; + + for (;;) { + pid_seen = waitpid(pid_child, &status, WNOHANG); + + if (pid_seen == -1) + return error_errno(_("waitpid failed")); + + else if (pid_seen == 0) { + /* + * The child is still running (this should be + * the normal case). Try to connect to it on + * the socket and see if it is ready for + * business. + * + * If there is another daemon already running, + * our child will fail to start (possibly + * after a timeout on the lock), but we don't + * care (who responds) if the socket is live. + */ + s = ipc_get_active_state(path); + if (s == IPC_STATE__LISTENING) + return 0; + + time(&now); + if (now > time_limit) + return error(_("daemon not online yet")); + + continue; + } + + else if (pid_seen == pid_child) { + /* + * The new child daemon process shutdown while + * it was starting up, so it is not listening + * on the socket. + * + * Try to ping the socket in the odd chance + * that another daemon started (or was already + * running) while our child was starting. + * + * Again, we don't care who services the socket. + */ + s = ipc_get_active_state(path); + if (s == IPC_STATE__LISTENING) + return 0; + + /* + * We don't care about the WEXITSTATUS() nor + * any of the WIF*(status) values because + * `cmd__simple_ipc()` does the `!!result` + * trick on all function return values. + * + * So it is sufficient to just report the + * early shutdown as an error. + */ + return error(_("daemon failed to start")); + } + + else + return error(_("waitpid is confused")); + } +} + +/* + * This process will start a simple-ipc server in a background process and + * wait for it to become ready. This is like `daemonize()` but gives us + * more control and better error reporting (and makes it easier to write + * unit tests). + */ +static int daemon__start_server(const char *path, int argc, const char **argv) +{ + pid_t pid_child; + int ret; + int max_wait_sec = 60; + struct ipc_server_opts opts = { + .nr_threads = 5 + }; + + const char * const daemon_usage[] = { + N_("test-helper simple-ipc start-daemon ["), + NULL + }; + + struct option daemon_options[] = { + OPT_INTEGER(0, "max-wait", &max_wait_sec, + N_("seconds to wait for daemon to startup")), + OPT_INTEGER(0, "threads", &opts.nr_threads, + N_("number of threads in server thread pool")), + OPT_END() + }; + + argc = parse_options(argc, argv, NULL, daemon_options, daemon_usage, 0); + + if (max_wait_sec < 0) + max_wait_sec = 0; + if (opts.nr_threads < 1) + opts.nr_threads = 1; + + /* + * Run the actual daemon in a background process. + */ + ret = spawn_server(path, &opts, &pid_child); + if (pid_child <= 0) + return ret; + + /* + * Let the parent wait for the child process to get started + * and begin listening for requests on the socket. + */ + ret = wait_for_server_startup(path, pid_child, max_wait_sec); + + return ret; +} + +/* + * This process will run a quick probe to see if a simple-ipc server + * is active on this path. + * + * Returns 0 if the server is alive. 
+ */ +static int client__probe_server(const char *path) +{ + enum ipc_active_state s; + + s = ipc_get_active_state(path); + switch (s) { + case IPC_STATE__LISTENING: + return 0; + + case IPC_STATE__NOT_LISTENING: + return error("no server listening at '%s'", path); + + case IPC_STATE__PATH_NOT_FOUND: + return error("path not found '%s'", path); + + case IPC_STATE__INVALID_PATH: + return error("invalid pipe/socket name '%s'", path); + + case IPC_STATE__OTHER_ERROR: + default: + return error("other error for '%s'", path); + } +} + +/* + * Send an IPC command to an already-running server daemon and print the + * response. + * + * argv[2] contains a simple (1 word) command that `test_app_cb()` (in + * the daemon process) will understand. + */ +static int client__send_ipc(int argc, const char **argv, const char *path) +{ + const char *command = argc > 2 ? argv[2] : "(no command)"; + struct strbuf buf = STRBUF_INIT; + struct ipc_client_connect_options options + = IPC_CLIENT_CONNECT_OPTIONS_INIT; + + options.wait_if_busy = 1; + options.wait_if_not_found = 0; + + if (!ipc_client_send_command(path, &options, command, &buf)) { + if (buf.len) { + printf("%s\n", buf.buf); + fflush(stdout); + } + strbuf_release(&buf); + + return 0; + } + + return error("failed to send '%s' to '%s'", command, path); +} + +/* + * Send an IPC command to an already-running server and ask it to + * shutdown. "send quit" is an async request and queues a shutdown + * event in the server, so we spin and wait here for it to actually + * shutdown to make the unit tests a little easier to write. + */ +static int client__stop_server(int argc, const char **argv, const char *path) +{ + const char *send_quit[] = { argv[0], "send", "quit", NULL }; + int max_wait_sec = 60; + int ret; + time_t time_limit, now; + enum ipc_active_state s; + + const char * const stop_usage[] = { + N_("test-helper simple-ipc stop-daemon []"), + NULL + }; + + struct option stop_options[] = { + OPT_INTEGER(0, "max-wait", &max_wait_sec, + N_("seconds to wait for daemon to stop")), + OPT_END() + }; + + argc = parse_options(argc, argv, NULL, stop_options, stop_usage, 0); + + if (max_wait_sec < 0) + max_wait_sec = 0; + + time(&time_limit); + time_limit += max_wait_sec; + + ret = client__send_ipc(3, send_quit, path); + if (ret) + return ret; + + for (;;) { + sleep_millisec(100); + + s = ipc_get_active_state(path); + + if (s != IPC_STATE__LISTENING) { + /* + * The socket/pipe is gone and/or has stopped + * responding. Lets assume that the daemon + * process has exited too. + */ + return 0; + } + + time(&now); + if (now > time_limit) + return error(_("daemon has not shutdown yet")); + } +} + +/* + * Send an IPC command followed by ballast to confirm that a large + * message can be sent and that the kernel or pkt-line layers will + * properly chunk it and that the daemon receives the entire message. 
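For completeness, the client half of the API used by these helpers boils down to a single call. A minimal sketch of a one-shot request/response outside the test tool (the "status" command is a placeholder and error handling is elided; this is not part of the patch):

    static int ask_daemon(const char *path)
    {
            struct strbuf answer = STRBUF_INIT;
            struct ipc_client_connect_options options =
                    IPC_CLIENT_CONNECT_OPTIONS_INIT;
            int ret;

            options.wait_if_busy = 1;       /* retry while the server is busy */
            options.wait_if_not_found = 0;  /* fail fast if no daemon is listening */

            ret = ipc_client_send_command(path, &options, "status", &answer);
            if (!ret)
                    printf("%s\n", answer.buf);

            strbuf_release(&answer);
            return ret;
    }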
+ */ +static int do_sendbytes(int bytecount, char byte, const char *path, + const struct ipc_client_connect_options *options) +{ + struct strbuf buf_send = STRBUF_INIT; + struct strbuf buf_resp = STRBUF_INIT; + + strbuf_addstr(&buf_send, "sendbytes "); + strbuf_addchars(&buf_send, byte, bytecount); + + if (!ipc_client_send_command(path, options, buf_send.buf, &buf_resp)) { + strbuf_rtrim(&buf_resp); + printf("sent:%c%08d %s\n", byte, bytecount, buf_resp.buf); + fflush(stdout); + strbuf_release(&buf_send); + strbuf_release(&buf_resp); + + return 0; + } + + return error("client failed to sendbytes(%d, '%c') to '%s'", + bytecount, byte, path); +} + +/* + * Send an IPC command with ballast to an already-running server daemon. + */ +static int client__sendbytes(int argc, const char **argv, const char *path) +{ + int bytecount = 1024; + char *string = "x"; + const char * const sendbytes_usage[] = { + N_("test-helper simple-ipc sendbytes []"), + NULL + }; + struct option sendbytes_options[] = { + OPT_INTEGER(0, "bytecount", &bytecount, N_("number of bytes")), + OPT_STRING(0, "byte", &string, N_("byte"), N_("ballast")), + OPT_END() + }; + struct ipc_client_connect_options options + = IPC_CLIENT_CONNECT_OPTIONS_INIT; + + options.wait_if_busy = 1; + options.wait_if_not_found = 0; + options.uds_disallow_chdir = 0; + + argc = parse_options(argc, argv, NULL, sendbytes_options, sendbytes_usage, 0); + + return do_sendbytes(bytecount, string[0], path, &options); +} + +struct multiple_thread_data { + pthread_t pthread_id; + struct multiple_thread_data *next; + const char *path; + int bytecount; + int batchsize; + int sum_errors; + int sum_good; + char letter; +}; + +static void *multiple_thread_proc(void *_multiple_thread_data) +{ + struct multiple_thread_data *d = _multiple_thread_data; + int k; + struct ipc_client_connect_options options + = IPC_CLIENT_CONNECT_OPTIONS_INIT; + + options.wait_if_busy = 1; + options.wait_if_not_found = 0; + /* + * A multi-threaded client should not be randomly calling chdir(). + * The test will pass without this restriction because the test is + * not otherwise accessing the filesystem, but it makes us honest. + */ + options.uds_disallow_chdir = 1; + + trace2_thread_start("multiple"); + + for (k = 0; k < d->batchsize; k++) { + if (do_sendbytes(d->bytecount + k, d->letter, d->path, &options)) + d->sum_errors++; + else + d->sum_good++; + } + + trace2_thread_exit(); + return NULL; +} + +/* + * Start a client-side thread pool. Each thread sends a series of + * IPC requests. Each request is on a new connection to the server. 
+ */ +static int client__multiple(int argc, const char **argv, const char *path) +{ + struct multiple_thread_data *list = NULL; + int k; + int nr_threads = 5; + int bytecount = 1; + int batchsize = 10; + int sum_join_errors = 0; + int sum_thread_errors = 0; + int sum_good = 0; + + const char * const multiple_usage[] = { + N_("test-helper simple-ipc multiple []"), + NULL + }; + struct option multiple_options[] = { + OPT_INTEGER(0, "bytecount", &bytecount, N_("number of bytes")), + OPT_INTEGER(0, "threads", &nr_threads, N_("number of threads")), + OPT_INTEGER(0, "batchsize", &batchsize, N_("number of requests per thread")), + OPT_END() + }; + + argc = parse_options(argc, argv, NULL, multiple_options, multiple_usage, 0); + + if (bytecount < 1) + bytecount = 1; + if (nr_threads < 1) + nr_threads = 1; + if (batchsize < 1) + batchsize = 1; + + for (k = 0; k < nr_threads; k++) { + struct multiple_thread_data *d = xcalloc(1, sizeof(*d)); + d->next = list; + d->path = path; + d->bytecount = bytecount + batchsize*(k/26); + d->batchsize = batchsize; + d->sum_errors = 0; + d->sum_good = 0; + d->letter = 'A' + (k % 26); + + if (pthread_create(&d->pthread_id, NULL, multiple_thread_proc, d)) { + warning("failed to create thread[%d] skipping remainder", k); + free(d); + break; + } + + list = d; + } + + while (list) { + struct multiple_thread_data *d = list; + + if (pthread_join(d->pthread_id, NULL)) + sum_join_errors++; + + sum_thread_errors += d->sum_errors; + sum_good += d->sum_good; + + list = d->next; + free(d); + } + + printf("client (good %d) (join %d), (errors %d)\n", + sum_good, sum_join_errors, sum_thread_errors); + + return (sum_join_errors + sum_thread_errors) ? 1 : 0; +} + +int cmd__simple_ipc(int argc, const char **argv) +{ + const char *path = "ipc-test"; + + if (argc == 2 && !strcmp(argv[1], "SUPPORTS_SIMPLE_IPC")) + return 0; + + /* + * Use '!!' on all dispatch functions to map from `error()` style + * (returns -1) style to `test_must_fail` style (expects 1). This + * makes shell error messages less confusing. + */ + + if (argc == 2 && !strcmp(argv[1], "is-active")) + return !!client__probe_server(path); + + if (argc >= 2 && !strcmp(argv[1], "run-daemon")) + return !!daemon__run_server(path, argc, argv); + + if (argc >= 2 && !strcmp(argv[1], "start-daemon")) + return !!daemon__start_server(path, argc, argv); + + /* + * Client commands follow. Ensure a server is running before + * going any further. 
+ */ + if (client__probe_server(path)) + return 1; + + if (argc >= 2 && !strcmp(argv[1], "stop-daemon")) + return !!client__stop_server(argc, argv, path); + + if ((argc == 2 || argc == 3) && !strcmp(argv[1], "send")) + return !!client__send_ipc(argc, argv, path); + + if (argc >= 2 && !strcmp(argv[1], "sendbytes")) + return !!client__sendbytes(argc, argv, path); + + if (argc >= 2 && !strcmp(argv[1], "multiple")) + return !!client__multiple(argc, argv, path); + + die("Unhandled argv[1]: '%s'", argv[1]); +} +#endif diff --git a/t/helper/test-tool.c b/t/helper/test-tool.c index 9d6d14d92937..a409655f03b5 100644 --- a/t/helper/test-tool.c +++ b/t/helper/test-tool.c @@ -64,6 +64,7 @@ static struct test_cmd cmds[] = { { "sha1", cmd__sha1 }, { "sha256", cmd__sha256 }, { "sigchain", cmd__sigchain }, + { "simple-ipc", cmd__simple_ipc }, { "strcmp-offset", cmd__strcmp_offset }, { "string-list", cmd__string_list }, { "submodule-config", cmd__submodule_config }, diff --git a/t/helper/test-tool.h b/t/helper/test-tool.h index a6470ff62c42..564eb3c8e911 100644 --- a/t/helper/test-tool.h +++ b/t/helper/test-tool.h @@ -54,6 +54,7 @@ int cmd__sha1(int argc, const char **argv); int cmd__oid_array(int argc, const char **argv); int cmd__sha256(int argc, const char **argv); int cmd__sigchain(int argc, const char **argv); +int cmd__simple_ipc(int argc, const char **argv); int cmd__strcmp_offset(int argc, const char **argv); int cmd__string_list(int argc, const char **argv); int cmd__submodule_config(int argc, const char **argv); diff --git a/t/t0052-simple-ipc.sh b/t/t0052-simple-ipc.sh new file mode 100755 index 000000000000..18dcc8130728 --- /dev/null +++ b/t/t0052-simple-ipc.sh @@ -0,0 +1,122 @@ +#!/bin/sh + +test_description='simple command server' + +. ./test-lib.sh + +test-tool simple-ipc SUPPORTS_SIMPLE_IPC || { + skip_all='simple IPC not supported on this platform' + test_done +} + +stop_simple_IPC_server () { + test-tool simple-ipc stop-daemon +} + +test_expect_success 'start simple command server' ' + test_atexit stop_simple_IPC_server && + test-tool simple-ipc start-daemon --threads=8 && + test-tool simple-ipc is-active +' + +test_expect_success 'simple command server' ' + test-tool simple-ipc send ping >actual && + echo pong >expect && + test_cmp expect actual +' + +test_expect_success 'servers cannot share the same path' ' + test_must_fail test-tool simple-ipc run-daemon && + test-tool simple-ipc is-active +' + +test_expect_success 'big response' ' + test-tool simple-ipc send big >actual && + test_line_count -ge 10000 actual && + grep -q "big: [0]*9999\$" actual +' + +test_expect_success 'chunk response' ' + test-tool simple-ipc send chunk >actual && + test_line_count -ge 10000 actual && + grep -q "big: [0]*9999\$" actual +' + +test_expect_success 'slow response' ' + test-tool simple-ipc send slow >actual && + test_line_count -ge 100 actual && + grep -q "big: [0]*99\$" actual +' + +# Send an IPC with n=100,000 bytes of ballast. This should be large enough +# to force both the kernel and the pkt-line layer to chunk the message to the +# daemon and for the daemon to receive it in chunks. +# +test_expect_success 'sendbytes' ' + test-tool simple-ipc sendbytes --bytecount=100000 --byte=A >actual && + grep "sent:A00100000 rcvd:A00100000" actual +' + +# Start a series of client threads that each make +# IPC requests to the server. Each ( * ) request +# will open a new connection to the server and randomly bind to a server +# thread. Each client thread exits after completing its batch. 
So the +# total number of live client threads will be smaller than the total. +# Each request will send a message containing at least <bytecount> bytes +# of ballast. (Responses are small.) +# +# The purpose here is to test threading in the server and responding to +# many concurrent client requests (regardless of whether they come from +# 1 client process or many). And to test that the server side of the +# named pipe/socket is stable. (On Windows this means that the server +# pipe is properly recycled.) +# +# On Windows it also lets us adjust the connection timeout in +# `ipc_client_send_command()`. +# +# Note it is easy to drive the system into failure by requesting an +# insane number of threads on client or server and/or increasing the +# per-thread batchsize or the per-request bytecount (ballast). +# On Windows these failures look like "pipe is busy" errors. +# So I've chosen fairly conservative values for now. +# +# We expect output of the form "sent:<letter><length> ..." +# With terms (7, 19, 13) we expect: +# <letter> in [A-G] +# <length> in [19+0 .. 19+(13-1)] +# and (7 * 13) successful responses. +# +test_expect_success 'stress test threads' ' + test-tool simple-ipc multiple \ + --threads=7 \ + --bytecount=19 \ + --batchsize=13 \ + >actual && + test_line_count = 92 actual && + grep "good 91" actual && + grep "sent:A" <actual >actual_a && + cat >expect_a <<-EOF && + sent:A00000019 rcvd:A00000019 + sent:A00000020 rcvd:A00000020 + sent:A00000021 rcvd:A00000021 + sent:A00000022 rcvd:A00000022 + sent:A00000023 rcvd:A00000023 + sent:A00000024 rcvd:A00000024 + sent:A00000025 rcvd:A00000025 + sent:A00000026 rcvd:A00000026 + sent:A00000027 rcvd:A00000027 + sent:A00000028 rcvd:A00000028 + sent:A00000029 rcvd:A00000029 + sent:A00000030 rcvd:A00000030 + sent:A00000031 rcvd:A00000031 + EOF + test_cmp expect_a actual_a +' + +test_expect_success 'stop-daemon works' ' + test-tool simple-ipc stop-daemon && + test_must_fail test-tool simple-ipc is-active && + test_must_fail test-tool simple-ipc send ping +' + +test_done