From patchwork Mon Feb 1 19:45:34 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Junio C Hamano X-Patchwork-Id: 12059915 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id CD9D8C433E6 for ; Mon, 1 Feb 2021 19:47:06 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 98CEA64EA9 for ; Mon, 1 Feb 2021 19:47:06 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232490AbhBATq5 (ORCPT ); Mon, 1 Feb 2021 14:46:57 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40560 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230298AbhBATqb (ORCPT ); Mon, 1 Feb 2021 14:46:31 -0500 Received: from mail-wr1-x42d.google.com (mail-wr1-x42d.google.com [IPv6:2a00:1450:4864:20::42d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B3E4CC06174A for ; Mon, 1 Feb 2021 11:45:51 -0800 (PST) Received: by mail-wr1-x42d.google.com with SMTP id l12so17964230wry.2 for ; Mon, 01 Feb 2021 11:45:51 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=+DQeitWBjgmX6Tf3pJzZueR3+qK5XfM+JfwKZw1URY8=; b=cbMgjObXdCymoKeY/WKefb+GJhCLM7HjCgkh3rV4koW5YzRbQF3A9/wa0VqOXgWBF8 6iqROxIgl1PMVPjEBvfBkJs5Hfh9ZXBR/GV2n5iW1szsxCNl7SdEQJgnBE/3haKd5EZO UwWai3myhumHMf+PwBSqC8VVgc/zIF+0I4yk0dQ80603Srtja6an2k2QzEY8y5YgPqxJ 94pdyauLcEzNfhkK6D02R6wICLRHJisxwPAeGLKaI4nYpCRVbwnvQUPXzNPNygE3DdZw EbQ8yvibzVBuwqHH28YZx0RSSE3xD8ApOWi4fOqyTZM360MDep+QcJU6m6mPRAP7PEH2 ivqw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=+DQeitWBjgmX6Tf3pJzZueR3+qK5XfM+JfwKZw1URY8=; b=tvUmY3hduP6Yl0NYgXzXup0NaXulIQgTztz8GB0svqlqiREN8n6zq6W05Inqlj6meq MGrtYLsq5h+dccV2ottO3wtsMswng3NbM7lK3k/G26LoaQQIz5R1IJ5bp0vh/5FkjCH1 mu9ABD5kH8HvJxDJe4WH0iRhFQJaWhBZUL5p1zZRdcT8zqopay+7YZTgeJnu8Nrip6tV 8fm5im/TOFae65PJUY4kA+uZEAINfyWRbj3Wb0G4B80z6owedJ2cXoa+/jhUMMOBzaef 4TWUjJW3r8a62gmv65vzxJwa/gF6QHlSYBTz7pakOfvlbvUWWpNMQWT8E5msVH+FHQ7f Vm1w== X-Gm-Message-State: AOAM532CfWLgw6H7cOCd8OtTXIOwTBtciM/f93+HgALb5PGOKJjOXMGc 0oaDLT0WMGOG5SE6HmwbtEQF+EguvMM= X-Google-Smtp-Source: ABdhPJwYuVoWitz/ftNGQGWjxWBgoJYANsptLi9C04nzYivirKP6ADM4nDCIgrz+Er8wGZGZa6kUFQ== X-Received: by 2002:a05:6000:1043:: with SMTP id c3mr20175062wrx.140.1612208750336; Mon, 01 Feb 2021 11:45:50 -0800 (PST) Received: from [127.0.0.1] ([13.74.141.28]) by smtp.gmail.com with ESMTPSA id f17sm29995991wrv.0.2021.02.01.11.45.49 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 01 Feb 2021 11:45:49 -0800 (PST) Message-Id: <4c6766d41834da9508142d1f420a741dc550806b.1612208747.git.gitgitgadget@gmail.com> In-Reply-To: References: Date: Mon, 01 Feb 2021 19:45:34 +0000 Subject: [PATCH v2 
01/14] ci/install-depends: attempt to fix "brew cask" stuff Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: =?utf-8?b?w4Z2YXIgQXJuZmrDtnLDsA==?= Bjarmason , Jeff Hostetler , Jeff King , Chris Torek , Jeff Hostetler , Junio C Hamano Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Junio C Hamano From: Junio C Hamano We run "git pull" against "$cask_repo"; clarify that we are expecting not to have any of our own modifications and running "git pull" to merely update, by passing "--ff-only" on the command line. Also, the "brew cask install" command line triggers an error message that says: Error: Calling brew cask install is disabled! Use brew install [--cask] instead. In addition, "brew install caskroom/cask/perforce" step triggers an error that says: Error: caskroom/cask was moved. Tap homebrew/cask instead. Attempt to see if blindly following the suggestion in these error messages gets us into a better shape. Signed-off-by: Junio C Hamano --- ci/install-dependencies.sh | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/ci/install-dependencies.sh b/ci/install-dependencies.sh index 0229a77f7d2..0b1184e04ad 100755 --- a/ci/install-dependencies.sh +++ b/ci/install-dependencies.sh @@ -44,13 +44,13 @@ osx-clang|osx-gcc) test -z "$BREW_INSTALL_PACKAGES" || brew install $BREW_INSTALL_PACKAGES brew link --force gettext - brew cask install --no-quarantine perforce || { + brew install --cask --no-quarantine perforce || { # Update the definitions and try again cask_repo="$(brew --repository)"/Library/Taps/homebrew/homebrew-cask && - git -C "$cask_repo" pull --no-stat && - brew cask install --no-quarantine perforce + git -C "$cask_repo" pull --no-stat --ff-only && + brew install --cask --no-quarantine perforce } || - brew install caskroom/cask/perforce + brew install homebrew/cask/perforce case "$jobname" in osx-gcc) brew install gcc@9 From patchwork Mon Feb 1 19:45:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Hostetler X-Patchwork-Id: 12059943 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 93D4CC433E0 for ; Mon, 1 Feb 2021 19:49:54 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 614AD64ECA for ; Mon, 1 Feb 2021 19:49:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232439AbhBATtt (ORCPT ); Mon, 1 Feb 2021 14:49:49 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40568 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232359AbhBATqc (ORCPT ); Mon, 1 Feb 2021 14:46:32 -0500 Received: from mail-wr1-x432.google.com (mail-wr1-x432.google.com [IPv6:2a00:1450:4864:20::432]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E6142C0613D6 for ; Mon, 1 Feb 2021 11:45:52 -0800 (PST) Received: by mail-wr1-x432.google.com with SMTP id c12so17947673wrc.7 for ; Mon, 01 Feb 2021 11:45:52 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; 
c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=pSbgX86qfVOFV+DYuqgTNYqm4NtxhzBOQZKnL/QDvHQ=; b=s0B3P3HvFLOT4xe+ivSV3jbEs6SBNToylxkOuNjYRgOWDZDv/3H+DHFJU0HD/VCk4l Fq1MIRDJieFN8wRxPmTC8QeNIVbkWO+//pidjjnpMokTVuoJv6C3kUKV46Riq9AYfy5v IbKYqi/cXlXd0dF2f2Sz7FeKhAOi2QPnO8JvxwV0f9omHzSK57nwMSgJHi+y6KqN6Ma1 6zIXhvJTPtKOwt/a07jDB+ECpnPT03h8v7pgOGOl7FmrHKpKTzvpcjAkJlU7dm4oF5GW 3ZgXDbHRg7hV8PhFUw6Rga26mjNNYPyU+pGmjhGLrS0s2B2nEuzXManJPgUKo/BBSOjB fyJA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=pSbgX86qfVOFV+DYuqgTNYqm4NtxhzBOQZKnL/QDvHQ=; b=suMWKdpj56pRHmvs9Op8z8DD1mUxD2sZlU+w4erVDXC7XCI+jV6OH1xRsSgud2T6uT Eb0AHf5zKvX/C4MjRuzd7mLONV5CTp6N548v278wV7jpMzFLT74hadAlv0x3YHFud+xH 7b3IiQIN+Lk5pa9KKIaBOsTIea2l64M7Ewa1ursSstMMxYWdt8qkFoX7ZlkgSIgw5erI 9gAWodGqSh6q6U2OmEnZpahXrEtDgzTsMNZbb0Jwx1YcAUnFnfbiuH3m6wdMzZpKxXfd 4/UivRB/IApsXvk+ZxcJHwp0HkfQrPt5hwecdcE79cS2s90QfcmzrsChCcGCgwS3PsTV 7BCA== X-Gm-Message-State: AOAM530Qw3J4aZgcrCoHBB47JCj9SlyfyEmXZlJ/mR9madV6U+iUeOC8 9PQem6WYzMo+hS3dj5+8cYz0UwY2aCk= X-Google-Smtp-Source: ABdhPJwm5mvFDCnMghsa5wRlMJRcYtY0zlTIp8ECCpgeLFNyNg1bSnXy6CXDp2zUEqZ9KqccRlggOw== X-Received: by 2002:adf:e38d:: with SMTP id e13mr19454044wrm.231.1612208751399; Mon, 01 Feb 2021 11:45:51 -0800 (PST) Received: from [127.0.0.1] ([13.74.141.28]) by smtp.gmail.com with ESMTPSA id h207sm320409wme.18.2021.02.01.11.45.50 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 01 Feb 2021 11:45:50 -0800 (PST) Message-Id: <3b03a8ff7a72c101f82a685cc6f34a5dd37a9c4b.1612208747.git.gitgitgadget@gmail.com> In-Reply-To: References: Date: Mon, 01 Feb 2021 19:45:35 +0000 Subject: [PATCH v2 02/14] pkt-line: promote static buffer in packet_write_gently() to callers Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: =?utf-8?b?w4Z2YXIgQXJuZmrDtnLDsA==?= Bjarmason , Jeff Hostetler , Jeff King , Chris Torek , Jeff Hostetler , Jeff Hostetler Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler From: Jeff Hostetler Move the static buffer used in `packet_write_gently()` to its callers. This is a first step to make packet writing more thread-safe. Signed-off-by: Jeff Hostetler --- pkt-line.c | 33 ++++++++++++++++++++++++--------- pkt-line.h | 10 ++++++++-- 2 files changed, 32 insertions(+), 11 deletions(-) diff --git a/pkt-line.c b/pkt-line.c index d633005ef74..14af049cd9c 100644 --- a/pkt-line.c +++ b/pkt-line.c @@ -194,26 +194,34 @@ int packet_write_fmt_gently(int fd, const char *fmt, ...) return status; } -static int packet_write_gently(const int fd_out, const char *buf, size_t size) +/* + * Use the provided scratch space to build a combined buffer + * and write it to the file descriptor (in one write if possible). 
+ */ +static int packet_write_gently(const int fd_out, const char *buf, size_t size, + struct packet_scratch_space *scratch) { - static char packet_write_buffer[LARGE_PACKET_MAX]; size_t packet_size; - if (size > sizeof(packet_write_buffer) - 4) + if (size > sizeof(scratch->buffer) - 4) return error(_("packet write failed - data exceeds max packet size")); packet_trace(buf, size, 1); packet_size = size + 4; - set_packet_header(packet_write_buffer, packet_size); - memcpy(packet_write_buffer + 4, buf, size); - if (write_in_full(fd_out, packet_write_buffer, packet_size) < 0) + + set_packet_header(scratch->buffer, packet_size); + memcpy(scratch->buffer + 4, buf, size); + + if (write_in_full(fd_out, scratch->buffer, packet_size) < 0) return error(_("packet write failed")); return 0; } void packet_write(int fd_out, const char *buf, size_t size) { - if (packet_write_gently(fd_out, buf, size)) + static struct packet_scratch_space scratch; + + if (packet_write_gently(fd_out, buf, size, &scratch)) die_errno(_("packet write failed")); } @@ -244,6 +252,12 @@ void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len) int write_packetized_from_fd(int fd_in, int fd_out) { + /* + * TODO We could save a memcpy() if we essentially inline + * TODO packet_write_gently() here and change the xread() + * TODO to pass &buf[4]. + */ + static struct packet_scratch_space scratch; static char buf[LARGE_PACKET_DATA_MAX]; int err = 0; ssize_t bytes_to_write; @@ -254,7 +268,7 @@ int write_packetized_from_fd(int fd_in, int fd_out) return COPY_READ_ERROR; if (bytes_to_write == 0) break; - err = packet_write_gently(fd_out, buf, bytes_to_write); + err = packet_write_gently(fd_out, buf, bytes_to_write, &scratch); } if (!err) err = packet_flush_gently(fd_out); @@ -263,6 +277,7 @@ int write_packetized_from_fd(int fd_in, int fd_out) int write_packetized_from_buf(const char *src_in, size_t len, int fd_out) { + static struct packet_scratch_space scratch; int err = 0; size_t bytes_written = 0; size_t bytes_to_write; @@ -274,7 +289,7 @@ int write_packetized_from_buf(const char *src_in, size_t len, int fd_out) bytes_to_write = len - bytes_written; if (bytes_to_write == 0) break; - err = packet_write_gently(fd_out, src_in + bytes_written, bytes_to_write); + err = packet_write_gently(fd_out, src_in + bytes_written, bytes_to_write, &scratch); bytes_written += bytes_to_write; } if (!err) diff --git a/pkt-line.h b/pkt-line.h index 8c90daa59ef..4ccd6f88926 100644 --- a/pkt-line.h +++ b/pkt-line.h @@ -5,6 +5,13 @@ #include "strbuf.h" #include "sideband.h" +#define LARGE_PACKET_MAX 65520 +#define LARGE_PACKET_DATA_MAX (LARGE_PACKET_MAX - 4) + +struct packet_scratch_space { + char buffer[LARGE_PACKET_MAX]; +}; + /* * Write a packetized stream, where each line is preceded by * its length (including the header) as a 4-byte hex number. 
@@ -213,8 +220,7 @@ enum packet_read_status packet_reader_read(struct packet_reader *reader); enum packet_read_status packet_reader_peek(struct packet_reader *reader); #define DEFAULT_PACKET_MAX 1000 -#define LARGE_PACKET_MAX 65520 -#define LARGE_PACKET_DATA_MAX (LARGE_PACKET_MAX - 4) + extern char packet_buffer[LARGE_PACKET_MAX]; struct packet_writer { From patchwork Mon Feb 1 19:45:36 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Hostetler X-Patchwork-Id: 12059941 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 76F40C433DB for ; Mon, 1 Feb 2021 19:49:47 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 3316C64ECB for ; Mon, 1 Feb 2021 19:49:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232053AbhBATtl (ORCPT ); Mon, 1 Feb 2021 14:49:41 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40574 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232329AbhBATqd (ORCPT ); Mon, 1 Feb 2021 14:46:33 -0500 Received: from mail-wr1-x42e.google.com (mail-wr1-x42e.google.com [IPv6:2a00:1450:4864:20::42e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id BF44CC0613ED for ; Mon, 1 Feb 2021 11:45:53 -0800 (PST) Received: by mail-wr1-x42e.google.com with SMTP id 7so17971667wrz.0 for ; Mon, 01 Feb 2021 11:45:53 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=m8IpXcH5RcuEGRDPYOKsp8OHYVdfh8TgVVg0bIEweZA=; b=KBhYybb/OVTGy2AUxk8XRdJTadnkgJ/gWBuMDtcycX+oJzJAekuNJgTPsN78OtZu0l r4vyvaPnR4M8x3n2L+YG6MBEyzgRVLOl2xNVQPvlz3mply2kqcO3kf34uEPjeIZ3ruo0 zo7f7GiaAa+VaKqARc7E+d/zSVVPQc5AVgu1RkSEF3ZBU9F7Woc55xM9csKzRi9Nwe2t wtaLxBtlw3W5CSFn+KjWR6DJ5nlWrJ81ihIWVXK1TUvD//vywWiAFN6Zfgyayh13vKWZ ISAHadM5n/2aEf0WYMP9FOKElt+zO+uZY/9s0bZzUrSdmVJEy53DFZNE4R229cv/txzs 6jOw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=m8IpXcH5RcuEGRDPYOKsp8OHYVdfh8TgVVg0bIEweZA=; b=eeh9O6s9b82WZ/mzTCQRGV20I2esI/UDs6q03VBwFDaNhKZqf9+narGtdpDmH/9lNW H2F5EOUW5pGD8PJcNGSSQwkVKmXvtBJ7ty+quo4zz8em4O4P7Y2I/rOYY74TNkPDUNCS uuf0CcBebexIah8u+JB4L368Y4fnNdyLeKbeddTkT0Mv23TLIxUfweNJDGAXHmU3mca8 hofNGOfFHPuYvxLDIAIw+95P3vaxhzkUs5A4GRnlFmyn8kLcS3v5PVWOR1ME9OTinTO/ 3w87IzT6cFT2GHMbHEulFgRxZdr5IAPmFaM+iqR5eF6LVnn/W05gAMhcgQpkd0dkT3E/ G00A== X-Gm-Message-State: AOAM532pxuM4oxNSK+yM9lcp5Z7SbyxKkHKR0s7q+hAdzBA4U/sj6j8g N7inr1qGpz3RMPVv/NBQytSg9EVhxok= X-Google-Smtp-Source: ABdhPJz97bAs9CLqlmOmvmCduyZ6QIFycD2+zRvD8KJT0PyzhC+ydl30EkR0/YNp9Hvfz7BnESq9bQ== X-Received: by 2002:adf:d852:: with SMTP id k18mr20178028wrl.262.1612208752353; Mon, 01 Feb 2021 11:45:52 -0800 (PST) Received: from [127.0.0.1] ([13.74.141.28]) by 
smtp.gmail.com with ESMTPSA id i18sm20111119wrn.29.2021.02.01.11.45.51 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 01 Feb 2021 11:45:51 -0800 (PST) Message-Id: In-Reply-To: References: Date: Mon, 01 Feb 2021 19:45:36 +0000 Subject: [PATCH v2 03/14] pkt-line: add write_packetized_from_buf2() that takes scratch buffer Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: =?utf-8?b?w4Z2YXIgQXJuZmrDtnLDsA==?= Bjarmason , Jeff Hostetler , Jeff King , Chris Torek , Jeff Hostetler , Jeff Hostetler Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler From: Jeff Hostetler Create version of `write_packetized_from_buf()` that takes a scratch buffer argument rather than assuming a static buffer. This will be used later as we make packet-line writing more thread-safe. Signed-off-by: Jeff Hostetler --- pkt-line.c | 9 ++++++++- pkt-line.h | 2 ++ 2 files changed, 10 insertions(+), 1 deletion(-) diff --git a/pkt-line.c b/pkt-line.c index 14af049cd9c..5d86354cbeb 100644 --- a/pkt-line.c +++ b/pkt-line.c @@ -278,6 +278,13 @@ int write_packetized_from_fd(int fd_in, int fd_out) int write_packetized_from_buf(const char *src_in, size_t len, int fd_out) { static struct packet_scratch_space scratch; + + return write_packetized_from_buf2(src_in, len, fd_out, &scratch); +} + +int write_packetized_from_buf2(const char *src_in, size_t len, int fd_out, + struct packet_scratch_space *scratch) +{ int err = 0; size_t bytes_written = 0; size_t bytes_to_write; @@ -289,7 +296,7 @@ int write_packetized_from_buf(const char *src_in, size_t len, int fd_out) bytes_to_write = len - bytes_written; if (bytes_to_write == 0) break; - err = packet_write_gently(fd_out, src_in + bytes_written, bytes_to_write, &scratch); + err = packet_write_gently(fd_out, src_in + bytes_written, bytes_to_write, scratch); bytes_written += bytes_to_write; } if (!err) diff --git a/pkt-line.h b/pkt-line.h index 4ccd6f88926..f1d5625e91f 100644 --- a/pkt-line.h +++ b/pkt-line.h @@ -41,6 +41,8 @@ int packet_flush_gently(int fd); int packet_write_fmt_gently(int fd, const char *fmt, ...) 
__attribute__((format (printf, 2, 3))); int write_packetized_from_fd(int fd_in, int fd_out); int write_packetized_from_buf(const char *src_in, size_t len, int fd_out); +int write_packetized_from_buf2(const char *src_in, size_t len, int fd_out, + struct packet_scratch_space *scratch); /* * Read a packetized line into the buffer, which must be at least size bytes From patchwork Mon Feb 1 19:45:37 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Johannes Schindelin X-Patchwork-Id: 12059919 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id D2630C433DB for ; Mon, 1 Feb 2021 19:47:39 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 9807564EBF for ; Mon, 1 Feb 2021 19:47:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232540AbhBATrg (ORCPT ); Mon, 1 Feb 2021 14:47:36 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40584 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232405AbhBATqf (ORCPT ); Mon, 1 Feb 2021 14:46:35 -0500 Received: from mail-wr1-x430.google.com (mail-wr1-x430.google.com [IPv6:2a00:1450:4864:20::430]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AC386C061786 for ; Mon, 1 Feb 2021 11:45:54 -0800 (PST) Received: by mail-wr1-x430.google.com with SMTP id 7so17971706wrz.0 for ; Mon, 01 Feb 2021 11:45:54 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=IAjoLQ+6HtO0n/chdlaaMSaGY2qzMCuiZTD2k2dLb2E=; b=peVM5rLqC+hn3uFY2y3DyghqPqbwqHNZW6SNSxOEIkyZzfHASkLogTdddk2DRLA7kE SHXjwgUarSq6pzBLrzhSCilJ5Hk2jLMFYTdBgLtp/t2mdvxYNNsUJSn3zC2pMUZHS0Co HybC/YDTtaZS/xTZ94u/yJZi4UcGYhRkW5DRalEOEdq/Oa7mvghtDNwZhFxmgl08krCl Z1ZVk/GlUfhHihuAaIxaoO6Tg3DMqsCGeFnvW5OKsBmxzvz42uqK1Dxnsxk2KRziE5jR BYzovmtjCGLxu08xCMGA1iR8G+GjTeUezE4PslUEw9A0aE/RUrMYcXR6rHuGdZZ3CJpN nbZg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=IAjoLQ+6HtO0n/chdlaaMSaGY2qzMCuiZTD2k2dLb2E=; b=dzWzi3ZsjPNOpCGC9NYCbKEHIC7YRs0Tm/qIYLbg2c2e290zDcuZdBDU1LZvIIxqBK Gla1PzawB6U2BG6wLKA04sRX/Ps5GBLRe0zgeR8WrAAtjMgOw3YcKaSYbNpZHcmlrmy5 wzbj6g9E5z9xu45ahq9qhGunJDbV1ybWjxAjj9a93srg+yGk6HtOuyb2lFW00yxWkbUd pcK6IC9LrkP1VefjGYOLx0eU7dHkl3qdYxErF1ZRYTreAGHtvoZYXPP+1yvmnn4OfHZ0 CxQE3QjmLY2wL0BKh0yqyPRXUhD4IIQXCSnFXtCQhFPtj0jpRfXiyIjWGmagKgJMyx19 irHw== X-Gm-Message-State: AOAM532gVSOSrtlnaMXflP7Gu+MkJLZd+Eqn8d2shH8bvLG5cHDypnqi 56q/EgKPSBYaLiMU2Hj4RxN1UqvofEw= X-Google-Smtp-Source: ABdhPJzEAl3l/kLwyyJAbUple0/fEHIFfa3zFxiy8qaayryZZ1phisBVeyMnFI8kP6urza4LXstSIQ== X-Received: by 2002:a5d:5549:: with SMTP id g9mr21116148wrw.244.1612208753279; Mon, 01 Feb 2021 11:45:53 -0800 (PST) Received: from [127.0.0.1] 
([13.74.141.28]) by smtp.gmail.com with ESMTPSA id u3sm30205701wre.54.2021.02.01.11.45.52 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 01 Feb 2021 11:45:52 -0800 (PST) Message-Id: <0832f7d324da643d7a480111d693ff5559c2b7a7.1612208747.git.gitgitgadget@gmail.com> In-Reply-To: References: Date: Mon, 01 Feb 2021 19:45:37 +0000 Subject: [PATCH v2 04/14] pkt-line: optionally skip the flush packet in write_packetized_from_buf() Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: =?utf-8?b?w4Z2YXIgQXJuZmrDtnLDsA==?= Bjarmason , Jeff Hostetler , Jeff King , Chris Torek , Jeff Hostetler , Johannes Schindelin Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Johannes Schindelin From: Johannes Schindelin This function currently has only one caller: `apply_multi_file_filter()` in `convert.c`. That caller wants a flush packet to be written after writing the payload. However, we are about to introduce a user that wants to write many packets before a final flush packet, so let's extend this function to prepare for that scenario. Signed-off-by: Jeff Hostetler Signed-off-by: Johannes Schindelin --- convert.c | 2 +- pkt-line.c | 9 ++++++--- pkt-line.h | 4 +++- 3 files changed, 10 insertions(+), 5 deletions(-) diff --git a/convert.c b/convert.c index ee360c2f07c..3f396a9b288 100644 --- a/convert.c +++ b/convert.c @@ -886,7 +886,7 @@ static int apply_multi_file_filter(const char *path, const char *src, size_t len if (fd >= 0) err = write_packetized_from_fd(fd, process->in); else - err = write_packetized_from_buf(src, len, process->in); + err = write_packetized_from_buf(src, len, process->in, 1); if (err) goto done; diff --git a/pkt-line.c b/pkt-line.c index 5d86354cbeb..d91a1deda95 100644 --- a/pkt-line.c +++ b/pkt-line.c @@ -275,14 +275,17 @@ int write_packetized_from_fd(int fd_in, int fd_out) return err; } -int write_packetized_from_buf(const char *src_in, size_t len, int fd_out) +int write_packetized_from_buf(const char *src_in, size_t len, int fd_out, + int flush_at_end) { static struct packet_scratch_space scratch; - return write_packetized_from_buf2(src_in, len, fd_out, &scratch); + return write_packetized_from_buf2(src_in, len, fd_out, + flush_at_end, &scratch); } int write_packetized_from_buf2(const char *src_in, size_t len, int fd_out, + int flush_at_end, struct packet_scratch_space *scratch) { int err = 0; @@ -299,7 +302,7 @@ int write_packetized_from_buf2(const char *src_in, size_t len, int fd_out, err = packet_write_gently(fd_out, src_in + bytes_written, bytes_to_write, scratch); bytes_written += bytes_to_write; } - if (!err) + if (!err && flush_at_end) err = packet_flush_gently(fd_out); return err; } diff --git a/pkt-line.h b/pkt-line.h index f1d5625e91f..ccf27549227 100644 --- a/pkt-line.h +++ b/pkt-line.h @@ -40,8 +40,10 @@ void packet_buf_write_len(struct strbuf *buf, const char *data, size_t len); int packet_flush_gently(int fd); int packet_write_fmt_gently(int fd, const char *fmt, ...) 
__attribute__((format (printf, 2, 3))); int write_packetized_from_fd(int fd_in, int fd_out); -int write_packetized_from_buf(const char *src_in, size_t len, int fd_out); +int write_packetized_from_buf(const char *src_in, size_t len, int fd_out, + int flush_at_end); int write_packetized_from_buf2(const char *src_in, size_t len, int fd_out, + int flush_at_end, struct packet_scratch_space *scratch); /* From patchwork Mon Feb 1 19:45:38 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Johannes Schindelin X-Patchwork-Id: 12059921 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 910DFC433DB for ; Mon, 1 Feb 2021 19:48:14 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 5119564EC2 for ; Mon, 1 Feb 2021 19:48:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232570AbhBATrl (ORCPT ); Mon, 1 Feb 2021 14:47:41 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40586 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232372AbhBATqf (ORCPT ); Mon, 1 Feb 2021 14:46:35 -0500 Received: from mail-wr1-x430.google.com (mail-wr1-x430.google.com [IPv6:2a00:1450:4864:20::430]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9D534C061788 for ; Mon, 1 Feb 2021 11:45:55 -0800 (PST) Received: by mail-wr1-x430.google.com with SMTP id a1so17964857wrq.6 for ; Mon, 01 Feb 2021 11:45:55 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=hJMMlnG4K85uP3aMIkdMyhFWAilpMetGJKPcVVcf3AE=; b=TMSO6uXG+J/PEgsfWImC93OhQoPpnLfoPYsDPQthHd3pz1R2+4F6zW62KFm3JTOGDj 4Jdgl5k/7wmDKe3cZkOC6v1Q96dup0GL+Y0NlBERyDwKmsWk9S275n8hEz4BUxeMfArm FlTXWHMU+uftGvt5bVi5ZUohnHXW10YT7JAVrNAZRtafwSDw2yB/zuevfPLHHHJQWPFl rxOaqILdz5w+eXMr7+m9jnx4NkgTbNC2oqwdXbt4G8ngi1f67FCDyR9zLpZVOzLZ2bCz QDkV1M90Z/7Q+BKFPnO4MSN3eplQ77nf/j8dEk9rXwpoRelUU6hBIhafZMvE4YPxSvVh HiCw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=hJMMlnG4K85uP3aMIkdMyhFWAilpMetGJKPcVVcf3AE=; b=LN3CfqxQtrtZZP+50N82eRKoQArEltXb/+nC7xq9ujCLflOBq4xSFvyDgzfK2kPIeY msovgShwsXSw5AwrYQKP5mwny8if61gingA5OwOiiEtmEA8dAQ7SqdogNfjn5aqc8Sg2 aa3Ifst04ZMR746o4YOc3IL5SQhJeI1sObBv0BeRFAVfOsD2HXGJGvxJmDSwIerz5sun eIiz3RJTDunuq/yIrzdT/NAKd41IFdwhDSkn43+/fN3CFQhKiKcHb19n+F9Zxm2t5fFo 87jlDH7GnE6Y8yXxbs38nfThOEF/1i4F2eNeQA8gW/hJTFtLlgF47wlzGwnsmldiPLAh Cvaw== X-Gm-Message-State: AOAM533BgUU/jd0APJSsLTvkpkaAGUzXQRbrP544vCV/BCLEn22vpNOq 3Up5SH2dNo+tbBp/Zn3E8hU3okAx6Zg= X-Google-Smtp-Source: ABdhPJxjBoAWRX7RIxL1QeETVxDJWPzqUfVFnjOBTKOewmN+ZnZyWMnVVm7+5Wf6d6nrMLLNDYNEWw== X-Received: by 2002:adf:f009:: with SMTP id j9mr19842840wro.35.1612208754235; Mon, 01 Feb 2021 11:45:54 -0800 
(PST) Received: from [127.0.0.1] ([13.74.141.28]) by smtp.gmail.com with ESMTPSA id k131sm280991wmb.37.2021.02.01.11.45.53 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 01 Feb 2021 11:45:53 -0800 (PST) Message-Id: <43bc4a26b79038a13d042f4538b467b1af94688b.1612208747.git.gitgitgadget@gmail.com> In-Reply-To: References: Date: Mon, 01 Feb 2021 19:45:38 +0000 Subject: [PATCH v2 05/14] pkt-line: (optionally) libify the packet readers Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: =?utf-8?b?w4Z2YXIgQXJuZmrDtnLDsA==?= Bjarmason , Jeff Hostetler , Jeff King , Chris Torek , Jeff Hostetler , Johannes Schindelin Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Johannes Schindelin From: Johannes Schindelin So far, the (possibly indirect) callers of `get_packet_data()` can ask that function to return an error instead of `die()`ing upon end-of-file. However, random read errors will still cause the process to die. So let's introduce an explicit option to tell the packet reader machinery to please be nice and only return an error. This change prepares pkt-line for use by long-running daemon processes. Such processes should be able to serve multiple concurrent clients and and survive random IO errors. If there is an error on one connection, a daemon should be able to drop that connection and continue serving existing and future connections. This ability will be used by a Git-aware "Internal FSMonitor" feature in a later patch series. Signed-off-by: Johannes Schindelin --- pkt-line.c | 19 +++++++++++++++++-- pkt-line.h | 4 ++++ 2 files changed, 21 insertions(+), 2 deletions(-) diff --git a/pkt-line.c b/pkt-line.c index d91a1deda95..528493bca21 100644 --- a/pkt-line.c +++ b/pkt-line.c @@ -323,8 +323,11 @@ static int get_packet_data(int fd, char **src_buf, size_t *src_size, *src_size -= ret; } else { ret = read_in_full(fd, dst, size); - if (ret < 0) + if (ret < 0) { + if (options & PACKET_READ_NEVER_DIE) + return error_errno(_("read error")); die_errno(_("read error")); + } } /* And complain if we didn't get enough bytes to satisfy the read. 
*/ @@ -332,6 +335,8 @@ static int get_packet_data(int fd, char **src_buf, size_t *src_size, if (options & PACKET_READ_GENTLE_ON_EOF) return -1; + if (options & PACKET_READ_NEVER_DIE) + return error(_("the remote end hung up unexpectedly")); die(_("the remote end hung up unexpectedly")); } @@ -360,6 +365,9 @@ enum packet_read_status packet_read_with_status(int fd, char **src_buffer, len = packet_length(linelen); if (len < 0) { + if (options & PACKET_READ_NEVER_DIE) + return error(_("protocol error: bad line length " + "character: %.4s"), linelen); die(_("protocol error: bad line length character: %.4s"), linelen); } else if (!len) { packet_trace("0000", 4, 0); @@ -374,12 +382,19 @@ enum packet_read_status packet_read_with_status(int fd, char **src_buffer, *pktlen = 0; return PACKET_READ_RESPONSE_END; } else if (len < 4) { + if (options & PACKET_READ_NEVER_DIE) + return error(_("protocol error: bad line length %d"), + len); die(_("protocol error: bad line length %d"), len); } len -= 4; - if ((unsigned)len >= size) + if ((unsigned)len >= size) { + if (options & PACKET_READ_NEVER_DIE) + return error(_("protocol error: bad line length %d"), + len); die(_("protocol error: bad line length %d"), len); + } if (get_packet_data(fd, src_buffer, src_len, buffer, len, options) < 0) { *pktlen = -1; diff --git a/pkt-line.h b/pkt-line.h index ccf27549227..7f31c892165 100644 --- a/pkt-line.h +++ b/pkt-line.h @@ -79,10 +79,14 @@ int write_packetized_from_buf2(const char *src_in, size_t len, int fd_out, * * If options contains PACKET_READ_DIE_ON_ERR_PACKET, it dies when it sees an * ERR packet. + * + * With `PACKET_READ_NEVER_DIE`, no errors are allowed to trigger die() (except + * an ERR packet, when `PACKET_READ_DIE_ON_ERR_PACKET` is in effect). */ #define PACKET_READ_GENTLE_ON_EOF (1u<<0) #define PACKET_READ_CHOMP_NEWLINE (1u<<1) #define PACKET_READ_DIE_ON_ERR_PACKET (1u<<2) +#define PACKET_READ_NEVER_DIE (1u<<3) int packet_read(int fd, char **src_buffer, size_t *src_len, char *buffer, unsigned size, int options); From patchwork Mon Feb 1 19:45:39 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Johannes Schindelin X-Patchwork-Id: 12059923 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id C5DCFC43381 for ; Mon, 1 Feb 2021 19:48:14 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 9083A64EC4 for ; Mon, 1 Feb 2021 19:48:14 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232588AbhBATr6 (ORCPT ); Mon, 1 Feb 2021 14:47:58 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40590 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232432AbhBATqg (ORCPT ); Mon, 1 Feb 2021 14:46:36 -0500 Received: from mail-wm1-x32b.google.com (mail-wm1-x32b.google.com [IPv6:2a00:1450:4864:20::32b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8C705C06178A for ; Mon, 1 Feb 2021 11:45:56 -0800 (PST) 
Received: by mail-wm1-x32b.google.com with SMTP id j18so342870wmi.3 for ; Mon, 01 Feb 2021 11:45:56 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=Pbcu3sqHNuJHcnom8abmYvjSL2+3Cp3JrBKFE8pxB10=; b=QlKnE8jSqPHTdqAsLnUDubeV0IHkpPiEOBrz8fna554HSH3Bsdga1Fr2YY7OMWSqGf ekTZcLYPYED5I/EAOXMKXwqHRPJehL635oATwnyvNg4lVDX+BBDlxRWFs87wZF93pGVx MvMY8Esf+GeIUAmUN9ZL1vO5NuzPIex45XtKQSunk+0M8iHZF9wxAdP7mXQDKJCahT/m 2Jsamu50Lr/5YJzEd+ipCC4TT2V8pr8kGcq8qoWOWmQGrEWTtkZNFENqcoC8BwzE1p3p fcoMcDJBcIvT/w7SzmZo+hNxeqdiLlrXN6henTXDN6tnIdDNu4LElKsFeKc97dYM+ek2 pnEw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=Pbcu3sqHNuJHcnom8abmYvjSL2+3Cp3JrBKFE8pxB10=; b=gBnG9wFun2hOoDcbfcjPJyi+PYBVpb4A3yoqdTDEp5LuIlHjEDgJBtBvXUJ0HzBquH zCDD1PCUhW2JX7DG0s6DHrI+3rhvsosI/ckYdCD/LMPsrpvVKFUWsyeaz4G466SZi4Zl bbu+BpFe/QIxkr3EMRDbi566fZGON10gjI5h9/GvJm1wJ48z2gvd71HZ6YoC8+6xW1ZM IPsKeaxtznSy7jD5Jp1NeJj1goQucXNazr3HCeso1INWOkIRqSYUCbs+R1X5SVWoTvrr m0l7qs05qwmDExh8wINnpKEeCh0GJi++QSlv+pPqRpvlWapotUDtMsMB/0pW5MqjZi4z qt/Q== X-Gm-Message-State: AOAM530mibAkB6OJwEz9S8FOKVFrioqo+aefpbU64hla+RUPsVBryJPB uUFV4tS3dQ4YPbwI/ogCIbpPUHG2e50= X-Google-Smtp-Source: ABdhPJxkhsMLQuJ+NMlrqjI7Ip5OCPCSE8I8U82ErwnAGdy9ZWshso7D/gjrbT58D0lJH/+6Z8UlQA== X-Received: by 2002:a1c:4c01:: with SMTP id z1mr396377wmf.159.1612208755081; Mon, 01 Feb 2021 11:45:55 -0800 (PST) Received: from [127.0.0.1] ([13.74.141.28]) by smtp.gmail.com with ESMTPSA id i7sm3938335wru.49.2021.02.01.11.45.54 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 01 Feb 2021 11:45:54 -0800 (PST) Message-Id: <6a389a3533512acedfa1769c64296c1e19b16221.1612208747.git.gitgitgadget@gmail.com> In-Reply-To: References: Date: Mon, 01 Feb 2021 19:45:39 +0000 Subject: [PATCH v2 06/14] pkt-line: accept additional options in read_packetized_to_strbuf() Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: =?utf-8?b?w4Z2YXIgQXJuZmrDtnLDsA==?= Bjarmason , Jeff Hostetler , Jeff King , Chris Torek , Jeff Hostetler , Johannes Schindelin Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Johannes Schindelin From: Johannes Schindelin The `read_packetized_to_strbuf()` function reads packets into a strbuf until a flush packet has been received. So far, it has only one caller: `apply_multi_file_filter()` in `convert.c`. This caller really only needs the `PACKET_READ_GENTLE_ON_EOF` option to be passed to `packet_read()` (which makes sense in the scenario where packets should be read until a flush packet is received). We are about to introduce a caller that wants to pass other options through to `packet_read()`, so let's extend the function signature accordingly. 
Signed-off-by: Johannes Schindelin --- convert.c | 2 +- pkt-line.c | 4 ++-- pkt-line.h | 6 +++++- 3 files changed, 8 insertions(+), 4 deletions(-) diff --git a/convert.c b/convert.c index 3f396a9b288..175c5cd51d5 100644 --- a/convert.c +++ b/convert.c @@ -903,7 +903,7 @@ static int apply_multi_file_filter(const char *path, const char *src, size_t len if (err) goto done; - err = read_packetized_to_strbuf(process->out, &nbuf) < 0; + err = read_packetized_to_strbuf(process->out, &nbuf, 0) < 0; if (err) goto done; diff --git a/pkt-line.c b/pkt-line.c index 528493bca21..f090fc56eef 100644 --- a/pkt-line.c +++ b/pkt-line.c @@ -461,7 +461,7 @@ char *packet_read_line_buf(char **src, size_t *src_len, int *dst_len) return packet_read_line_generic(-1, src, src_len, dst_len); } -ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out) +ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out, int options) { int packet_len; @@ -477,7 +477,7 @@ ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out) * that there is already room for the extra byte. */ sb_out->buf + sb_out->len, LARGE_PACKET_DATA_MAX+1, - PACKET_READ_GENTLE_ON_EOF); + options | PACKET_READ_GENTLE_ON_EOF); if (packet_len <= 0) break; sb_out->len += packet_len; diff --git a/pkt-line.h b/pkt-line.h index 7f31c892165..150319a6f00 100644 --- a/pkt-line.h +++ b/pkt-line.h @@ -145,8 +145,12 @@ char *packet_read_line_buf(char **src_buf, size_t *src_len, int *size); /* * Reads a stream of variable sized packets until a flush packet is detected. + * + * The options are augmented by PACKET_READ_GENTLE_ON_EOF and passed to + * packet_read. */ -ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out); +ssize_t read_packetized_to_strbuf(int fd_in, struct strbuf *sb_out, + int options); /* * Receive multiplexed output stream over git native protocol. 
From patchwork Mon Feb 1 19:45:40 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Hostetler X-Patchwork-Id: 12059929 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 33461C433E9 for ; Mon, 1 Feb 2021 19:48:44 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id EB8E364ECA for ; Mon, 1 Feb 2021 19:48:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230273AbhBATsS (ORCPT ); Mon, 1 Feb 2021 14:48:18 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40596 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232322AbhBATqr (ORCPT ); Mon, 1 Feb 2021 14:46:47 -0500 Received: from mail-wr1-x432.google.com (mail-wr1-x432.google.com [IPv6:2a00:1450:4864:20::432]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 679D0C06178B for ; Mon, 1 Feb 2021 11:45:57 -0800 (PST) Received: by mail-wr1-x432.google.com with SMTP id a1so17964922wrq.6 for ; Mon, 01 Feb 2021 11:45:57 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=qTI5/L9pM7HLuNqoOeZhRgOhPiAn0tFJz/94G2ot7cM=; b=s50sc0IBjQtHtOnfmZr8MeiNdfKlkzvZhfO6XKnPq1h9whUYE+cXAKgDmv4Im/Bcsh pT/d14Kxg8JMQalUzX6mLIqt4ztLmGre/gp6GInD8wnpzq8CQTEhM8qZs+qESxPijkjp FpmAZCCvQi1rQZKxoF7qMCjOe/d8kkAG88i64LwX/dYxcFqM0NcSYdeA5w5RB0bv2ndA MsF988uDFQiRpmDAUi84QO/0lSX5p7KFPZAvhR4Zhq7djT6t1ZPI+Eo/I3crgPhRv4k2 X68arc69AXa9mUrR4a6F729t5fbbobY3jr5NX+vYFOsQ6GpZSNAqKvR6JM5gOpgtG7EL 84kg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=qTI5/L9pM7HLuNqoOeZhRgOhPiAn0tFJz/94G2ot7cM=; b=fJAHZ8UWdc9oWYSi2tuwWsqugazIGt6L/z7397zICOEjG01w01hgYjd+qqWXfHoYfk +kZANVpy+dNYCROaYavb1atjQmwM0gdXW3eetTXDhmWBoYGXaLqnw2Tj8/IiqdBStO1P Q4wnFeKVrdGGf//sH748VQCLf6hq37bCNOxT1SZL00AV/Xs4qWd8TBqTRJpnuW2n5ZiR tYNkKLDiFIotJU9lpuMqzK9fOQGxEHCVXajNBGuGyw7pG64TKj4g8KF+gbk4ywxJqBoD H/8Lod5sdl9+EKQjboOq0m0aaQOLM/tUF5xBc8mQT084RS/ErK2jmzYz/AqCHB+eYJvM dDKw== X-Gm-Message-State: AOAM5332dAuSM1ZSv+4y8gNitoAA1/N+wl9z8U7EbXFTJBXjcf8UzvJI GkKT5if74i4qLp/XjN9iOl+FOlTcRwQ= X-Google-Smtp-Source: ABdhPJxOxJ1P6MzzEI+x23NzT8aMsOGRluHSaTYxAX0s+RBHO0GZ/L9bJiRtbH7YSHCoiJEktObTmw== X-Received: by 2002:adf:ecc1:: with SMTP id s1mr20802330wro.146.1612208756030; Mon, 01 Feb 2021 11:45:56 -0800 (PST) Received: from [127.0.0.1] ([13.74.141.28]) by smtp.gmail.com with ESMTPSA id j13sm297933wmi.24.2021.02.01.11.45.55 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 01 Feb 2021 11:45:55 -0800 (PST) Message-Id: In-Reply-To: References: Date: Mon, 01 Feb 2021 19:45:40 +0000 Subject: [PATCH v2 07/14] simple-ipc: design documentation for new IPC mechanism Fcc: Sent MIME-Version: 1.0 
To: git@vger.kernel.org Cc: =?utf-8?b?w4Z2YXIgQXJuZmrDtnLDsA==?= Bjarmason , Jeff Hostetler , Jeff King , Chris Torek , Jeff Hostetler , Jeff Hostetler Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler From: Jeff Hostetler Brief design documentation for new IPC mechanism allowing foreground Git client to talk with an existing daemon process at a known location using a named pipe or unix domain socket. Signed-off-by: Johannes Schindelin Signed-off-by: Jeff Hostetler --- Documentation/technical/api-simple-ipc.txt | 34 ++++++++++++++++++++++ 1 file changed, 34 insertions(+) create mode 100644 Documentation/technical/api-simple-ipc.txt diff --git a/Documentation/technical/api-simple-ipc.txt b/Documentation/technical/api-simple-ipc.txt new file mode 100644 index 00000000000..670a5c163e3 --- /dev/null +++ b/Documentation/technical/api-simple-ipc.txt @@ -0,0 +1,34 @@ +simple-ipc API +============== + +The simple-ipc API is used to send an IPC message and response between +a (presumably) foreground Git client process to a background server or +daemon process. The server process must already be running. Multiple +client processes can simultaneously communicate with the server +process. + +Communication occurs over a named pipe on Windows and a Unix domain +socket on other platforms. Clients and the server rendezvous at a +previously agreed-to application-specific pathname (which is outside +the scope of this design). + +This IPC mechanism differs from the existing `sub-process.c` model +(Documentation/technical/long-running-process-protocol.txt) and used +by applications like Git-LFS. In the simple-ipc model the server is +assumed to be a very long-running system service. In contrast, in the +LFS-style sub-process model the helper is started with the foreground +process and exits when the foreground process terminates. + +How the simple-ipc server is started is also outside the scope of the +IPC mechanism. For example, the server might be started during +maintenance operations. + +The IPC protocol consists of a single request message from the client and +an optional request message from the server. For simplicity, pkt-line +routines are used to hide chunking and buffering concerns. Each side +terminates their message with a flush packet. +(Documentation/technical/protocol-common.txt) + +The actual format of the client and server messages is application +specific. The IPC layer transmits and receives an opaque buffer without +any concern for the content within. 
From patchwork Mon Feb 1 19:45:41 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Hostetler X-Patchwork-Id: 12059933 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 76A3AC433E6 for ; Mon, 1 Feb 2021 19:49:11 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 2497564ECB for ; Mon, 1 Feb 2021 19:49:11 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232607AbhBATtG (ORCPT ); Mon, 1 Feb 2021 14:49:06 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40606 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232374AbhBATqr (ORCPT ); Mon, 1 Feb 2021 14:46:47 -0500 Received: from mail-wm1-x332.google.com (mail-wm1-x332.google.com [IPv6:2a00:1450:4864:20::332]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0E25EC06178C for ; Mon, 1 Feb 2021 11:45:59 -0800 (PST) Received: by mail-wm1-x332.google.com with SMTP id o5so350640wmq.2 for ; Mon, 01 Feb 2021 11:45:58 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=u9Io013PnKxWw5Ym5/NS4OZEkV2h4WikON4RkmtrGow=; b=EB9KaN2O38egaow5xeCMWZ5APq0iGAaA3zsqL1xnVBC/9EPO5bmqAiX3pmRrKeD1WY uBbFXf4925YheiNWRkq/wdyLrAHMdkpRncL485dBwDwL9pXB6CU+sK0Tqc+719EjU6in Wyb/FzKA+YIb4srllxwG3Trme46207gL72Iyc607rPNU73ZLMZQt9b1Z9Acar4UN0Krb ckQ7JdsfjBLh9nPQhFkbS/hXyVzXjD5DsGCHVgzqDAsvING3HPoIQ/pExRvJ3C4GB9bk j1jmWLDjx3fMVJhTNtP3JMgUXEbbiBmxqiosekMZt13irya7FtZwq+Mmf+dWGfdoEjaI N4yg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=u9Io013PnKxWw5Ym5/NS4OZEkV2h4WikON4RkmtrGow=; b=LNXg02Muv7JIE/f6eCFuTVWvimt3HHQfGS33bnIW0badEjAbYh7DptZ5rMYNfcMcZv WWFGQ/unXGsnuLAmbxA4mo3iUI7S0hZDR4fXyTumLldrf21Cbk5HJIz3/QOAyGn/DmS7 e575f63N+kY9w+nqFex2eNOXgB300Eyj04LyE4/SGckx9XvfmIQ8UrUcaPW279VJjDde ZaXgPWXEDgFHYDC4cCT9AwGGSTB61kNM0iNLaX1gjGgRs3j8SXiaKaIcivLYso8AwsuK fyCgeYWH1mH914PBDw4Ce/k/IzaRyexS0YawE8UJtqY01ekf0E8QIYAizcgnwWnH8l5Q zM0A== X-Gm-Message-State: AOAM530rSugRIVRMNZibfHgYwHM7mSiowUs/mfQyolmzrLkV6reX/dAg yl9AldqtJ2gY6Wfh4G78x/Rulj47MXE= X-Google-Smtp-Source: ABdhPJyXf9BkmSwnG3nkXSf+3qvUfQvc7M83Ku2ThR2cv2vg0ce+wzQUL+H+FPNdhn5JTd7x+MP+dw== X-Received: by 2002:a1c:ac86:: with SMTP id v128mr429624wme.76.1612208756962; Mon, 01 Feb 2021 11:45:56 -0800 (PST) Received: from [127.0.0.1] ([13.74.141.28]) by smtp.gmail.com with ESMTPSA id m6sm283723wmq.13.2021.02.01.11.45.56 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 01 Feb 2021 11:45:56 -0800 (PST) Message-Id: <388366913d419bbc08f5b7658cf7fb7b9d6a9079.1612208747.git.gitgitgadget@gmail.com> In-Reply-To: References: Date: Mon, 01 Feb 2021 19:45:41 +0000 Subject: [PATCH v2 08/14] simple-ipc: add win32 
implementation Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: =?utf-8?b?w4Z2YXIgQXJuZmrDtnLDsA==?= Bjarmason , Jeff Hostetler , Jeff King , Chris Torek , Jeff Hostetler , Jeff Hostetler Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler From: Jeff Hostetler Create Windows implementation of "simple-ipc" using named pipes. Signed-off-by: Jeff Hostetler --- Makefile | 5 + compat/simple-ipc/ipc-shared.c | 28 ++ compat/simple-ipc/ipc-win32.c | 751 ++++++++++++++++++++++++++++ config.mak.uname | 2 + contrib/buildsystems/CMakeLists.txt | 4 + simple-ipc.h | 225 +++++++++ 6 files changed, 1015 insertions(+) create mode 100644 compat/simple-ipc/ipc-shared.c create mode 100644 compat/simple-ipc/ipc-win32.c create mode 100644 simple-ipc.h diff --git a/Makefile b/Makefile index 7b64106930a..c94d5847919 100644 --- a/Makefile +++ b/Makefile @@ -1682,6 +1682,11 @@ else LIB_OBJS += unix-socket.o endif +ifdef USE_WIN32_IPC + LIB_OBJS += compat/simple-ipc/ipc-shared.o + LIB_OBJS += compat/simple-ipc/ipc-win32.o +endif + ifdef NO_ICONV BASIC_CFLAGS += -DNO_ICONV endif diff --git a/compat/simple-ipc/ipc-shared.c b/compat/simple-ipc/ipc-shared.c new file mode 100644 index 00000000000..1edec815953 --- /dev/null +++ b/compat/simple-ipc/ipc-shared.c @@ -0,0 +1,28 @@ +#include "cache.h" +#include "simple-ipc.h" +#include "strbuf.h" +#include "pkt-line.h" +#include "thread-utils.h" + +#ifdef SUPPORTS_SIMPLE_IPC + +int ipc_server_run(const char *path, const struct ipc_server_opts *opts, + ipc_server_application_cb *application_cb, + void *application_data) +{ + struct ipc_server_data *server_data = NULL; + int ret; + + ret = ipc_server_run_async(&server_data, path, opts, + application_cb, application_data); + if (ret) + return ret; + + ret = ipc_server_await(server_data); + + ipc_server_free(server_data); + + return ret; +} + +#endif /* SUPPORTS_SIMPLE_IPC */ diff --git a/compat/simple-ipc/ipc-win32.c b/compat/simple-ipc/ipc-win32.c new file mode 100644 index 00000000000..7871c9d8527 --- /dev/null +++ b/compat/simple-ipc/ipc-win32.c @@ -0,0 +1,751 @@ +#include "cache.h" +#include "simple-ipc.h" +#include "strbuf.h" +#include "pkt-line.h" +#include "thread-utils.h" + +#ifndef GIT_WINDOWS_NATIVE +#error This file can only be compiled on Windows +#endif + +static int initialize_pipe_name(const char *path, wchar_t *wpath, size_t alloc) +{ + int off = 0; + struct strbuf realpath = STRBUF_INIT; + + if (!strbuf_realpath(&realpath, path, 0)) + return -1; + + off = swprintf(wpath, alloc, L"\\\\.\\pipe\\"); + if (xutftowcs(wpath + off, realpath.buf, alloc - off) < 0) + return -1; + + /* Handle drive prefix */ + if (wpath[off] && wpath[off + 1] == L':') { + wpath[off + 1] = L'_'; + off += 2; + } + + for (; wpath[off]; off++) + if (wpath[off] == L'/') + wpath[off] = L'\\'; + + strbuf_release(&realpath); + return 0; +} + +static enum ipc_active_state get_active_state(wchar_t *pipe_path) +{ + if (WaitNamedPipeW(pipe_path, NMPWAIT_USE_DEFAULT_WAIT)) + return IPC_STATE__LISTENING; + + if (GetLastError() == ERROR_SEM_TIMEOUT) + return IPC_STATE__NOT_LISTENING; + + if (GetLastError() == ERROR_FILE_NOT_FOUND) + return IPC_STATE__PATH_NOT_FOUND; + + return IPC_STATE__OTHER_ERROR; +} + +enum ipc_active_state ipc_get_active_state(const char *path) +{ + wchar_t pipe_path[MAX_PATH]; + + if (initialize_pipe_name(path, pipe_path, ARRAY_SIZE(pipe_path)) < 0) + return IPC_STATE__INVALID_PATH; + + return get_active_state(pipe_path); +} + +#define WAIT_STEP_MS (50) + +static enum ipc_active_state 
connect_to_server( + const wchar_t *wpath, + DWORD timeout_ms, + const struct ipc_client_connect_options *options, + int *pfd) +{ + DWORD t_start_ms, t_waited_ms; + DWORD step_ms; + HANDLE hPipe = INVALID_HANDLE_VALUE; + DWORD mode = PIPE_READMODE_BYTE; + DWORD gle; + + *pfd = -1; + + for (;;) { + hPipe = CreateFileW(wpath, GENERIC_READ | GENERIC_WRITE, + 0, NULL, OPEN_EXISTING, 0, NULL); + if (hPipe != INVALID_HANDLE_VALUE) + break; + + gle = GetLastError(); + + switch (gle) { + case ERROR_FILE_NOT_FOUND: + if (!options->wait_if_not_found) + return IPC_STATE__PATH_NOT_FOUND; + if (!timeout_ms) + return IPC_STATE__PATH_NOT_FOUND; + + step_ms = (timeout_ms < WAIT_STEP_MS) ? + timeout_ms : WAIT_STEP_MS; + sleep_millisec(step_ms); + + timeout_ms -= step_ms; + break; /* try again */ + + case ERROR_PIPE_BUSY: + if (!options->wait_if_busy) + return IPC_STATE__NOT_LISTENING; + if (!timeout_ms) + return IPC_STATE__NOT_LISTENING; + + t_start_ms = (DWORD)(getnanotime() / 1000000); + + if (!WaitNamedPipeW(wpath, timeout_ms)) { + if (GetLastError() == ERROR_SEM_TIMEOUT) + return IPC_STATE__NOT_LISTENING; + + return IPC_STATE__OTHER_ERROR; + } + + /* + * A pipe server instance became available. + * Race other client processes to connect to + * it. + * + * But first decrement our overall timeout so + * that we don't starve if we keep losing the + * race. But also guard against special + * NPMWAIT_ values (0 and -1). + */ + t_waited_ms = (DWORD)(getnanotime() / 1000000) - t_start_ms; + if (t_waited_ms < timeout_ms) + timeout_ms -= t_waited_ms; + else + timeout_ms = 1; + break; /* try again */ + + default: + return IPC_STATE__OTHER_ERROR; + } + } + + if (!SetNamedPipeHandleState(hPipe, &mode, NULL, NULL)) { + CloseHandle(hPipe); + return IPC_STATE__OTHER_ERROR; + } + + *pfd = _open_osfhandle((intptr_t)hPipe, O_RDWR|O_BINARY); + if (*pfd < 0) { + CloseHandle(hPipe); + return IPC_STATE__OTHER_ERROR; + } + + /* fd now owns hPipe */ + + return IPC_STATE__LISTENING; +} + +/* + * The default connection timeout for Windows clients. + * + * This is not currently part of the ipc_ API (nor the config settings) + * because of differences between Windows and other platforms. + * + * This value was chosen at random. 
+ */ +#define WINDOWS_CONNECTION_TIMEOUT_MS (30000) + +enum ipc_active_state ipc_client_try_connect( + const char *path, + const struct ipc_client_connect_options *options, + struct ipc_client_connection **p_connection) +{ + wchar_t wpath[MAX_PATH]; + enum ipc_active_state state = IPC_STATE__OTHER_ERROR; + int fd = -1; + + *p_connection = NULL; + + trace2_region_enter("ipc-client", "try-connect", NULL); + trace2_data_string("ipc-client", NULL, "try-connect/path", path); + + if (initialize_pipe_name(path, wpath, ARRAY_SIZE(wpath)) < 0) + state = IPC_STATE__INVALID_PATH; + else + state = connect_to_server(wpath, WINDOWS_CONNECTION_TIMEOUT_MS, + options, &fd); + + trace2_data_intmax("ipc-client", NULL, "try-connect/state", + (intmax_t)state); + trace2_region_leave("ipc-client", "try-connect", NULL); + + if (state == IPC_STATE__LISTENING) { + (*p_connection) = xcalloc(1, sizeof(struct ipc_client_connection)); + (*p_connection)->fd = fd; + } + + return state; +} + +void ipc_client_close_connection(struct ipc_client_connection *connection) +{ + if (!connection) + return; + + if (connection->fd != -1) + close(connection->fd); + + free(connection); +} + +int ipc_client_send_command_to_connection( + struct ipc_client_connection *connection, + const char *message, struct strbuf *answer) +{ + int ret = 0; + + strbuf_setlen(answer, 0); + + trace2_region_enter("ipc-client", "send-command", NULL); + + if (write_packetized_from_buf2(message, strlen(message), + connection->fd, 1, + &connection->scratch_write_buffer) < 0) { + ret = error(_("could not send IPC command")); + goto done; + } + + FlushFileBuffers((HANDLE)_get_osfhandle(connection->fd)); + + if (read_packetized_to_strbuf(connection->fd, answer, + PACKET_READ_NEVER_DIE) < 0) { + ret = error(_("could not read IPC response")); + goto done; + } + +done: + trace2_region_leave("ipc-client", "send-command", NULL); + return ret; +} + +int ipc_client_send_command(const char *path, + const struct ipc_client_connect_options *options, + const char *message, struct strbuf *response) +{ + int ret = -1; + enum ipc_active_state state; + struct ipc_client_connection *connection = NULL; + + state = ipc_client_try_connect(path, options, &connection); + + if (state != IPC_STATE__LISTENING) + return ret; + + ret = ipc_client_send_command_to_connection(connection, message, response); + + ipc_client_close_connection(connection); + + return ret; +} + +/* + * Duplicate the given pipe handle and wrap it in a file descriptor so + * that we can use pkt-line on it. + */ +static int dup_fd_from_pipe(const HANDLE pipe) +{ + HANDLE process = GetCurrentProcess(); + HANDLE handle; + int fd; + + if (!DuplicateHandle(process, pipe, process, &handle, 0, FALSE, + DUPLICATE_SAME_ACCESS)) { + errno = err_win_to_posix(GetLastError()); + return -1; + } + + fd = _open_osfhandle((intptr_t)handle, O_RDWR|O_BINARY); + if (fd < 0) { + errno = err_win_to_posix(GetLastError()); + CloseHandle(handle); + return -1; + } + + /* + * `handle` is now owned by `fd` and will be automatically closed + * when the descriptor is closed. + */ + + return fd; +} + +/* + * Magic numbers used to annotate callback instance data. + * These are used to help guard against accidentally passing the + * wrong instance data across multiple levels of callbacks (which + * is easy to do if there are `void*` arguments). 
+ */ +enum magic { + MAGIC_SERVER_REPLY_DATA, + MAGIC_SERVER_THREAD_DATA, + MAGIC_SERVER_DATA, +}; + +struct ipc_server_reply_data { + enum magic magic; + int fd; + struct ipc_server_thread_data *server_thread_data; +}; + +struct ipc_server_thread_data { + enum magic magic; + struct ipc_server_thread_data *next_thread; + struct ipc_server_data *server_data; + pthread_t pthread_id; + HANDLE hPipe; + struct packet_scratch_space scratch_write_buffer; +}; + +/* + * On Windows, the conceptual "ipc-server" is implemented as a pool of + * n idential/peer "server-thread" threads. That is, there is no + * hierarchy of threads; and therefore no controller thread managing + * the pool. Each thread has an independent handle to the named pipe, + * receives incoming connections, processes the client, and re-uses + * the pipe for the next client connection. + * + * Therefore, the "ipc-server" only needs to maintain a list of the + * spawned threads for eventual "join" purposes. + * + * A single "stop-event" is visible to all of the server threads to + * tell them to shutdown (when idle). + */ +struct ipc_server_data { + enum magic magic; + ipc_server_application_cb *application_cb; + void *application_data; + struct strbuf buf_path; + wchar_t wpath[MAX_PATH]; + + HANDLE hEventStopRequested; + struct ipc_server_thread_data *thread_list; + int is_stopped; +}; + +enum connect_result { + CR_CONNECTED = 0, + CR_CONNECT_PENDING, + CR_CONNECT_ERROR, + CR_WAIT_ERROR, + CR_SHUTDOWN, +}; + +static enum connect_result queue_overlapped_connect( + struct ipc_server_thread_data *server_thread_data, + OVERLAPPED *lpo) +{ + if (ConnectNamedPipe(server_thread_data->hPipe, lpo)) + goto failed; + + switch (GetLastError()) { + case ERROR_IO_PENDING: + return CR_CONNECT_PENDING; + + case ERROR_PIPE_CONNECTED: + SetEvent(lpo->hEvent); + return CR_CONNECTED; + + default: + break; + } + +failed: + error(_("ConnectNamedPipe failed for '%s' (%lu)"), + server_thread_data->server_data->buf_path.buf, + GetLastError()); + return CR_CONNECT_ERROR; +} + +/* + * Use Windows Overlapped IO to wait for a connection or for our event + * to be signalled. + */ +static enum connect_result wait_for_connection( + struct ipc_server_thread_data *server_thread_data, + OVERLAPPED *lpo) +{ + enum connect_result r; + HANDLE waitHandles[2]; + DWORD dwWaitResult; + + r = queue_overlapped_connect(server_thread_data, lpo); + if (r != CR_CONNECT_PENDING) + return r; + + waitHandles[0] = server_thread_data->server_data->hEventStopRequested; + waitHandles[1] = lpo->hEvent; + + dwWaitResult = WaitForMultipleObjects(2, waitHandles, FALSE, INFINITE); + switch (dwWaitResult) { + case WAIT_OBJECT_0 + 0: + return CR_SHUTDOWN; + + case WAIT_OBJECT_0 + 1: + ResetEvent(lpo->hEvent); + return CR_CONNECTED; + + default: + return CR_WAIT_ERROR; + } +} + +/* + * Forward declare our reply callback function so that any compiler + * errors are reported when we actually define the function (in addition + * to any errors reported when we try to pass this callback function as + * a parameter in a function call). The former are easier to understand. + */ +static ipc_server_reply_cb do_io_reply_callback; + +/* + * Relay application's response message to the client process. + * (We do not flush at this point because we allow the caller + * to chunk data to the client thru us.) 
+ */ +static int do_io_reply_callback(struct ipc_server_reply_data *reply_data, + const char *response, size_t response_len) +{ + struct packet_scratch_space *scratch = + &reply_data->server_thread_data->scratch_write_buffer; + + if (reply_data->magic != MAGIC_SERVER_REPLY_DATA) + BUG("reply_cb called with wrong instance data"); + + return write_packetized_from_buf2(response, response_len, + reply_data->fd, 0, scratch); +} + +/* + * Receive the request/command from the client and pass it to the + * registered request-callback. The request-callback will compose + * a response and call our reply-callback to send it to the client. + * + * Simple-IPC only contains one round trip, so we flush and close + * here after the response. + */ +static int do_io(struct ipc_server_thread_data *server_thread_data) +{ + struct strbuf buf = STRBUF_INIT; + struct ipc_server_reply_data reply_data; + int ret = 0; + + reply_data.magic = MAGIC_SERVER_REPLY_DATA; + reply_data.server_thread_data = server_thread_data; + + reply_data.fd = dup_fd_from_pipe(server_thread_data->hPipe); + if (reply_data.fd < 0) + return error(_("could not create fd from pipe for '%s'"), + server_thread_data->server_data->buf_path.buf); + + ret = read_packetized_to_strbuf(reply_data.fd, &buf, + PACKET_READ_NEVER_DIE); + if (ret >= 0) { + ret = server_thread_data->server_data->application_cb( + server_thread_data->server_data->application_data, + buf.buf, do_io_reply_callback, &reply_data); + + packet_flush_gently(reply_data.fd); + + FlushFileBuffers((HANDLE)_get_osfhandle((reply_data.fd))); + } + else { + /* + * The client probably disconnected/shutdown before it + * could send a well-formed message. Ignore it. + */ + } + + strbuf_release(&buf); + close(reply_data.fd); + + return ret; +} + +/* + * Handle IPC request and response with this connected client. And reset + * the pipe to prepare for the next client. + */ +static int use_connection(struct ipc_server_thread_data *server_thread_data) +{ + int ret; + + ret = do_io(server_thread_data); + + FlushFileBuffers(server_thread_data->hPipe); + DisconnectNamedPipe(server_thread_data->hPipe); + + return ret; +} + +/* + * Thread proc for an IPC server worker thread. It handles a series of + * connections from clients. It cleans and reuses the hPipe between each + * client. + */ +static void *server_thread_proc(void *_server_thread_data) +{ + struct ipc_server_thread_data *server_thread_data = _server_thread_data; + HANDLE hEventConnected = INVALID_HANDLE_VALUE; + OVERLAPPED oConnect; + enum connect_result cr; + int ret; + + assert(server_thread_data->hPipe != INVALID_HANDLE_VALUE); + + trace2_thread_start("ipc-server"); + trace2_data_string("ipc-server", NULL, "pipe", + server_thread_data->server_data->buf_path.buf); + + hEventConnected = CreateEventW(NULL, TRUE, FALSE, NULL); + + memset(&oConnect, 0, sizeof(oConnect)); + oConnect.hEvent = hEventConnected; + + for (;;) { + cr = wait_for_connection(server_thread_data, &oConnect); + + switch (cr) { + case CR_SHUTDOWN: + goto finished; + + case CR_CONNECTED: + ret = use_connection(server_thread_data); + if (ret == SIMPLE_IPC_QUIT) { + ipc_server_stop_async( + server_thread_data->server_data); + goto finished; + } + if (ret > 0) { + /* + * Ignore (transient) IO errors with this + * client and reset for the next client. + */ + } + break; + + case CR_CONNECT_PENDING: + /* By construction, this should not happen. 
*/ + BUG("ipc-server[%s]: unexpeced CR_CONNECT_PENDING", + server_thread_data->server_data->buf_path.buf); + + case CR_CONNECT_ERROR: + case CR_WAIT_ERROR: + /* + * Ignore these theoretical errors. + */ + DisconnectNamedPipe(server_thread_data->hPipe); + break; + + default: + BUG("unandled case after wait_for_connection"); + } + } + +finished: + CloseHandle(server_thread_data->hPipe); + CloseHandle(hEventConnected); + + trace2_thread_exit(); + return NULL; +} + +static HANDLE create_new_pipe(wchar_t *wpath, int is_first) +{ + HANDLE hPipe; + DWORD dwOpenMode, dwPipeMode; + LPSECURITY_ATTRIBUTES lpsa = NULL; + + dwOpenMode = PIPE_ACCESS_INBOUND | PIPE_ACCESS_OUTBOUND | + FILE_FLAG_OVERLAPPED; + + dwPipeMode = PIPE_TYPE_MESSAGE | PIPE_READMODE_BYTE | PIPE_WAIT | + PIPE_REJECT_REMOTE_CLIENTS; + + if (is_first) { + dwOpenMode |= FILE_FLAG_FIRST_PIPE_INSTANCE; + + /* + * On Windows, the first server pipe instance gets to + * set the ACL / Security Attributes on the named + * pipe; subsequent instances inherit and cannot + * change them. + * + * TODO Should we allow the application layer to + * specify security attributes, such as `LocalService` + * or `LocalSystem`, when we create the named pipe? + * This question is probably not important when the + * daemon is started by a foreground user process and + * only needs to talk to the current user, but may be + * if the daemon is run via the Control Panel as a + * System Service. + */ + } + + hPipe = CreateNamedPipeW(wpath, dwOpenMode, dwPipeMode, + PIPE_UNLIMITED_INSTANCES, 1024, 1024, 0, lpsa); + + return hPipe; +} + +int ipc_server_run_async(struct ipc_server_data **returned_server_data, + const char *path, const struct ipc_server_opts *opts, + ipc_server_application_cb *application_cb, + void *application_data) +{ + struct ipc_server_data *server_data; + wchar_t wpath[MAX_PATH]; + HANDLE hPipeFirst = INVALID_HANDLE_VALUE; + int k; + int ret = 0; + int nr_threads = opts->nr_threads; + + *returned_server_data = NULL; + + ret = initialize_pipe_name(path, wpath, ARRAY_SIZE(wpath)); + if (ret < 0) + return error( + _("could not create normalized wchar_t path for '%s'"), + path); + + hPipeFirst = create_new_pipe(wpath, 1); + if (hPipeFirst == INVALID_HANDLE_VALUE) + return error(_("IPC server already running on '%s'"), path); + + server_data = xcalloc(1, sizeof(*server_data)); + server_data->magic = MAGIC_SERVER_DATA; + server_data->application_cb = application_cb; + server_data->application_data = application_data; + server_data->hEventStopRequested = CreateEvent(NULL, TRUE, FALSE, NULL); + strbuf_init(&server_data->buf_path, 0); + strbuf_addstr(&server_data->buf_path, path); + wcscpy(server_data->wpath, wpath); + + if (nr_threads < 1) + nr_threads = 1; + + for (k = 0; k < nr_threads; k++) { + struct ipc_server_thread_data *std; + + std = xcalloc(1, sizeof(*std)); + std->magic = MAGIC_SERVER_THREAD_DATA; + std->server_data = server_data; + std->hPipe = INVALID_HANDLE_VALUE; + + std->hPipe = (k == 0) + ? hPipeFirst + : create_new_pipe(server_data->wpath, 0); + + if (std->hPipe == INVALID_HANDLE_VALUE) { + /* + * If we've reached a pipe instance limit for + * this path, just use fewer threads. + */ + free(std); + break; + } + + if (pthread_create(&std->pthread_id, NULL, + server_thread_proc, std)) { + /* + * Likewise, if we're out of threads, just use + * fewer threads than requested. + * + * However, we just give up if we can't even get + * one thread. This should not happen. 
+ */ + if (k == 0) + die(_("could not start thread[0] for '%s'"), + path); + + CloseHandle(std->hPipe); + free(std); + break; + } + + std->next_thread = server_data->thread_list; + server_data->thread_list = std; + } + + *returned_server_data = server_data; + return 0; +} + +int ipc_server_stop_async(struct ipc_server_data *server_data) +{ + if (!server_data) + return 0; + + /* + * Gently tell all of the ipc_server threads to shutdown. + * This will be seen the next time they are idle (and waiting + * for a connection). + * + * We DO NOT attempt to force them to drop an active connection. + */ + SetEvent(server_data->hEventStopRequested); + return 0; +} + +int ipc_server_await(struct ipc_server_data *server_data) +{ + DWORD dwWaitResult; + + if (!server_data) + return 0; + + dwWaitResult = WaitForSingleObject(server_data->hEventStopRequested, INFINITE); + if (dwWaitResult != WAIT_OBJECT_0) + return error(_("wait for hEvent failed for '%s'"), + server_data->buf_path.buf); + + while (server_data->thread_list) { + struct ipc_server_thread_data *std = server_data->thread_list; + + pthread_join(std->pthread_id, NULL); + + server_data->thread_list = std->next_thread; + free(std); + } + + server_data->is_stopped = 1; + + return 0; +} + +void ipc_server_free(struct ipc_server_data *server_data) +{ + if (!server_data) + return; + + if (!server_data->is_stopped) + BUG("cannot free ipc-server while running for '%s'", + server_data->buf_path.buf); + + strbuf_release(&server_data->buf_path); + + if (server_data->hEventStopRequested != INVALID_HANDLE_VALUE) + CloseHandle(server_data->hEventStopRequested); + + while (server_data->thread_list) { + struct ipc_server_thread_data *std = server_data->thread_list; + + server_data->thread_list = std->next_thread; + free(std); + } + + free(server_data); +} diff --git a/config.mak.uname b/config.mak.uname index 198ab1e58f8..76087cff678 100644 --- a/config.mak.uname +++ b/config.mak.uname @@ -421,6 +421,7 @@ ifeq ($(uname_S),Windows) RUNTIME_PREFIX = YesPlease HAVE_WPGMPTR = YesWeDo NO_ST_BLOCKS_IN_STRUCT_STAT = YesPlease + USE_WIN32_IPC = YesPlease USE_WIN32_MMAP = YesPlease MMAP_PREVENTS_DELETE = UnfortunatelyYes # USE_NED_ALLOCATOR = YesPlease @@ -597,6 +598,7 @@ ifneq (,$(findstring MINGW,$(uname_S))) RUNTIME_PREFIX = YesPlease HAVE_WPGMPTR = YesWeDo NO_ST_BLOCKS_IN_STRUCT_STAT = YesPlease + USE_WIN32_IPC = YesPlease USE_WIN32_MMAP = YesPlease MMAP_PREVENTS_DELETE = UnfortunatelyYes USE_NED_ALLOCATOR = YesPlease diff --git a/contrib/buildsystems/CMakeLists.txt b/contrib/buildsystems/CMakeLists.txt index c151dd7257f..4bd41054ee7 100644 --- a/contrib/buildsystems/CMakeLists.txt +++ b/contrib/buildsystems/CMakeLists.txt @@ -246,6 +246,10 @@ elseif(CMAKE_SYSTEM_NAME STREQUAL "Linux") list(APPEND compat_SOURCES unix-socket.c) endif() +if(CMAKE_SYSTEM_NAME STREQUAL "Windows") + list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-win32.c) +endif() + set(EXE_EXTENSION ${CMAKE_EXECUTABLE_SUFFIX}) #header checks diff --git a/simple-ipc.h b/simple-ipc.h new file mode 100644 index 00000000000..eb19b5da8b1 --- /dev/null +++ b/simple-ipc.h @@ -0,0 +1,225 @@ +#ifndef GIT_SIMPLE_IPC_H +#define GIT_SIMPLE_IPC_H + +/* + * See Documentation/technical/api-simple-ipc.txt + */ + +#if defined(GIT_WINDOWS_NATIVE) +#define SUPPORTS_SIMPLE_IPC +#endif + +#ifdef SUPPORTS_SIMPLE_IPC +#include "pkt-line.h" + +/* + * Simple IPC Client Side API. + */ + +enum ipc_active_state { + /* + * The pipe/socket exists and the daemon is waiting for connections. 
+ */ + IPC_STATE__LISTENING = 0, + + /* + * The pipe/socket exists, but the daemon is not listening. + * Perhaps it is very busy. + * Perhaps the daemon died without deleting the path. + * Perhaps it is shutting down and draining existing clients. + * Perhaps it is dead, but other clients are lingering and + * still holding a reference to the pathname. + */ + IPC_STATE__NOT_LISTENING, + + /* + * The requested pathname is bogus and no amount of retries + * will fix that. + */ + IPC_STATE__INVALID_PATH, + + /* + * The requested pathname is not found. This usually means + * that there is no daemon present. + */ + IPC_STATE__PATH_NOT_FOUND, + + IPC_STATE__OTHER_ERROR, +}; + +struct ipc_client_connect_options { + /* + * Spin under timeout if the server is running but can't + * accept our connection yet. This should always be set + * unless you just want to poke the server and see if it + * is alive. + */ + unsigned int wait_if_busy:1; + + /* + * Spin under timeout if the pipe/socket is not yet present + * on the file system. This is useful if we just started + * the service and need to wait for it to become ready. + */ + unsigned int wait_if_not_found:1; +}; + +#define IPC_CLIENT_CONNECT_OPTIONS_INIT { \ + .wait_if_busy = 0, \ + .wait_if_not_found = 0, \ +} + +/* + * Determine if a server is listening on this named pipe or socket using + * platform-specific logic. This might just probe the filesystem or it + * might make a trivial connection to the server using this pathname. + */ +enum ipc_active_state ipc_get_active_state(const char *path); + +struct ipc_client_connection { + int fd; + struct packet_scratch_space scratch_write_buffer; +}; + +/* + * Try to connect to the daemon on the named pipe or socket. + * + * Returns IPC_STATE__LISTENING and a connection handle. + * + * Otherwise, returns info to help decide whether to retry or to + * spawn/respawn the server. + */ +enum ipc_active_state ipc_client_try_connect( + const char *path, + const struct ipc_client_connect_options *options, + struct ipc_client_connection **p_connection); + +void ipc_client_close_connection(struct ipc_client_connection *connection); + +/* + * Used by the client to synchronously send and receive a message with + * the server on the provided client connection. + * + * Returns 0 when successful. + * + * Calls error() and returns non-zero otherwise. + */ +int ipc_client_send_command_to_connection( + struct ipc_client_connection *connection, + const char *message, struct strbuf *answer); + +/* + * Used by the client to synchronously connect and send and receive a + * message to the server listening at the given path. + * + * Returns 0 when successful. + * + * Calls error() and returns non-zero otherwise. + */ +int ipc_client_send_command(const char *path, + const struct ipc_client_connect_options *options, + const char *message, struct strbuf *answer); + +/* + * Simple IPC Server Side API. + */ + +struct ipc_server_reply_data; + +typedef int (ipc_server_reply_cb)(struct ipc_server_reply_data *, + const char *response, + size_t response_len); + +/* + * Prototype for an application-supplied callback to process incoming + * client IPC messages and compose a reply. The `application_cb` should + * use the provided `reply_cb` and `reply_data` to send an IPC response + * back to the client. The `reply_cb` callback can be called multiple + * times for chunking purposes. A reply message is optional and may be + * omitted if not necessary for the application. + * + * The return value from the application callback is ignored. 
+ * The value `SIMPLE_IPC_QUIT` can be used to shutdown the server. + */ +typedef int (ipc_server_application_cb)(void *application_data, + const char *request, + ipc_server_reply_cb *reply_cb, + struct ipc_server_reply_data *reply_data); + +#define SIMPLE_IPC_QUIT -2 + +/* + * Opaque instance data to represent an IPC server instance. + */ +struct ipc_server_data; + +/* + * Control parameters for the IPC server instance. + * Use this to hide platform-specific settings. + */ +struct ipc_server_opts +{ + int nr_threads; +}; + +/* + * Start an IPC server instance in one or more background threads + * and return a handle to the pool. + * + * Returns 0 if the asynchronous server pool was started successfully. + * Returns -1 if not. + * + * When a client IPC message is received, the `application_cb` will be + * called (possibly on a random thread) to handle the message and + * optionally compose a reply message. + */ +int ipc_server_run_async(struct ipc_server_data **returned_server_data, + const char *path, const struct ipc_server_opts *opts, + ipc_server_application_cb *application_cb, + void *application_data); + +/* + * Gently signal the IPC server pool to shutdown. No new client + * connections will be accepted, but existing connections will be + * allowed to complete. + */ +int ipc_server_stop_async(struct ipc_server_data *server_data); + +/* + * Block the calling thread until all threads in the IPC server pool + * have completed and been joined. + */ +int ipc_server_await(struct ipc_server_data *server_data); + +/* + * Close and free all resource handles associated with the IPC server + * pool. + */ +void ipc_server_free(struct ipc_server_data *server_data); + +/* + * Run an IPC server instance and block the calling thread of the + * current process. It does not return until the IPC server has + * either shutdown or had an unrecoverable error. + * + * The IPC server handles incoming IPC messages from client processes + * and may use one or more background threads as necessary. + * + * Returns 0 after the server has completed successfully. + * Returns -1 if the server cannot be started. + * + * When a client IPC message is received, the `application_cb` will be + * called (possibly on a random thread) to handle the message and + * optionally compose a reply message. + * + * Note that `ipc_server_run()` is a synchronous wrapper around the + * above asynchronous routines. It effectively hides all of the + * server state and thread details from the caller and presents a + * simple synchronous interface. 
+ */ +int ipc_server_run(const char *path, const struct ipc_server_opts *opts, + ipc_server_application_cb *application_cb, + void *application_data); + +#endif /* SUPPORTS_SIMPLE_IPC */ +#endif /* GIT_SIMPLE_IPC_H */ From patchwork Mon Feb 1 19:45:42 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Hostetler X-Patchwork-Id: 12059937 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS,URIBL_BLOCKED autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 619A8C433E0 for ; Mon, 1 Feb 2021 19:49:34 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1A76D64EC8 for ; Mon, 1 Feb 2021 19:49:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230124AbhBATtV (ORCPT ); Mon, 1 Feb 2021 14:49:21 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40610 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232446AbhBATqr (ORCPT ); Mon, 1 Feb 2021 14:46:47 -0500 Received: from mail-wr1-x430.google.com (mail-wr1-x430.google.com [IPv6:2a00:1450:4864:20::430]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id ED258C061793 for ; Mon, 1 Feb 2021 11:45:59 -0800 (PST) Received: by mail-wr1-x430.google.com with SMTP id s7so14953681wru.5 for ; Mon, 01 Feb 2021 11:45:59 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=YBm/h2Qr2CpzHRGAZM4aCsbcZbWtHUxXCpUESjtEil8=; b=CJPDzpq9mfI9g32sGpSQjgHPnsuD4jkZZID8yAjgMAm24WeZWgS2K+NNFErKGlnPjh jH0TMVyriabyyO+TDbc3+9n9Sp56mTnycKLEL9OpetMxe2RqA1FRft6foxA1lwjsC5Db rqI6/CsSsbu6vVG5xYJ3DA4q4zo8wDoNTnom47iSSGkAGO3moUH+7hTR0QKYaVz1O8Kl 07EMM5NCHm4DQYM3P+S+eCs1x/BwYEXTRWHh+fzGLb/LQqNc8bL5FJpC0F0q+bxe3qi5 6QoueKt37pGjlahObQmjPsIbdotKp/4BGizRrnNByYPaAQyOmpM4bbvkh3IUrV+h1gcT DC0g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=YBm/h2Qr2CpzHRGAZM4aCsbcZbWtHUxXCpUESjtEil8=; b=KTyHTtEGooluVtapCuCjGERn7LRBTpoKRnmsX1dR6wxBvUtT2zGJAQZ1d4sLr2NL0q 5v5O7bmoTXMAof0Z6DRwZiIUKwjE+9wY+w5uGnadjN1RCUAo3RhuUXzRAefCIiBDbLvJ wx41nbf9Kfcl7JFzV/XHp5xWddjfE4mBHu9XGOmNQgoMJh11hg8GAP2DPyB9FlyZH5vC i/xKjHxkaMwGl1m5UM+ozp9xeSCIHkR3Y88bnMgCEGF0Ppkz2Q6YOC69Tnbfk1p+G72I Apnko0uXVk8ZsDfqbzxpTkOKwTbpB2ZJ2tJQADzYwNidmi1QAcf8MNHOl8pCr1q/0+od Cggg== X-Gm-Message-State: AOAM533QRQzqzqHd6d9yTVxDeNep1lTmDfr9HZP0DeSush9QrceFrqFr pqwJFUNbEA/eunfU9NlJB1UrpkSpyng= X-Google-Smtp-Source: ABdhPJyqgDOFNOmt2JPOiv4o2j7f57magwhbT4MWb4RCcm+oV9yu0AurQyxg0uDllo3u39uOZEIKNQ== X-Received: by 2002:a05:6000:11c5:: with SMTP id i5mr20177647wrx.302.1612208757979; Mon, 01 Feb 2021 11:45:57 -0800 (PST) Received: from [127.0.0.1] ([13.74.141.28]) by smtp.gmail.com with ESMTPSA id p12sm300920wmq.1.2021.02.01.11.45.57 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); 
Mon, 01 Feb 2021 11:45:57 -0800 (PST) Message-Id: In-Reply-To: References: Date: Mon, 01 Feb 2021 19:45:42 +0000 Subject: [PATCH v2 09/14] simple-ipc: add t/helper/test-simple-ipc and t0052 Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: =?utf-8?b?w4Z2YXIgQXJuZmrDtnLDsA==?= Bjarmason , Jeff Hostetler , Jeff King , Chris Torek , Jeff Hostetler , Jeff Hostetler Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler From: Jeff Hostetler Create unit tests for "simple-ipc". These are currently only enabled on Windows. Signed-off-by: Jeff Hostetler --- Makefile | 1 + t/helper/test-simple-ipc.c | 485 +++++++++++++++++++++++++++++++++++++ t/helper/test-tool.c | 1 + t/helper/test-tool.h | 1 + t/t0052-simple-ipc.sh | 129 ++++++++++ 5 files changed, 617 insertions(+) create mode 100644 t/helper/test-simple-ipc.c create mode 100755 t/t0052-simple-ipc.sh diff --git a/Makefile b/Makefile index c94d5847919..e7ba8853ea6 100644 --- a/Makefile +++ b/Makefile @@ -740,6 +740,7 @@ TEST_BUILTINS_OBJS += test-serve-v2.o TEST_BUILTINS_OBJS += test-sha1.o TEST_BUILTINS_OBJS += test-sha256.o TEST_BUILTINS_OBJS += test-sigchain.o +TEST_BUILTINS_OBJS += test-simple-ipc.o TEST_BUILTINS_OBJS += test-strcmp-offset.o TEST_BUILTINS_OBJS += test-string-list.o TEST_BUILTINS_OBJS += test-submodule-config.o diff --git a/t/helper/test-simple-ipc.c b/t/helper/test-simple-ipc.c new file mode 100644 index 00000000000..4960e79cf18 --- /dev/null +++ b/t/helper/test-simple-ipc.c @@ -0,0 +1,485 @@ +/* + * test-simple-ipc.c: verify that the Inter-Process Communication works. + */ + +#include "test-tool.h" +#include "cache.h" +#include "strbuf.h" +#include "simple-ipc.h" +#include "parse-options.h" +#include "thread-utils.h" + +#ifndef SUPPORTS_SIMPLE_IPC +int cmd__simple_ipc(int argc, const char **argv) +{ + die("simple IPC not available on this platform"); +} +#else + +/* + * The test daemon defines an "application callback" that supports a + * series of commands (see `test_app_cb()`). + * + * Unknown commands are caught here and we send an error message back + * to the client process. + */ +static int app__unhandled_command(const char *command, + ipc_server_reply_cb *reply_cb, + struct ipc_server_reply_data *reply_data) +{ + struct strbuf buf = STRBUF_INIT; + int ret; + + strbuf_addf(&buf, "unhandled command: %s", command); + ret = reply_cb(reply_data, buf.buf, buf.len); + strbuf_release(&buf); + + return ret; +} + +/* + * Reply with a single very large buffer. This is to ensure that + * long response are properly handled -- whether the chunking occurs + * in the kernel or in the (probably pkt-line) layer. + */ +#define BIG_ROWS (10000) +static int app__big_command(ipc_server_reply_cb *reply_cb, + struct ipc_server_reply_data *reply_data) +{ + struct strbuf buf = STRBUF_INIT; + int row; + int ret; + + for (row = 0; row < BIG_ROWS; row++) + strbuf_addf(&buf, "big: %.75d\n", row); + + ret = reply_cb(reply_data, buf.buf, buf.len); + strbuf_release(&buf); + + return ret; +} + +/* + * Reply with a series of lines. This is to ensure that we can incrementally + * compute the response and chunk it to the client. 
+ */ +#define CHUNK_ROWS (10000) +static int app__chunk_command(ipc_server_reply_cb *reply_cb, + struct ipc_server_reply_data *reply_data) +{ + struct strbuf buf = STRBUF_INIT; + int row; + int ret; + + for (row = 0; row < CHUNK_ROWS; row++) { + strbuf_setlen(&buf, 0); + strbuf_addf(&buf, "big: %.75d\n", row); + ret = reply_cb(reply_data, buf.buf, buf.len); + } + + strbuf_release(&buf); + + return ret; +} + +/* + * Slowly reply with a series of lines. This is to model an expensive to + * compute chunked response (which might happen if this callback is running + * in a thread and is fighting for a lock with other threads). + */ +#define SLOW_ROWS (1000) +#define SLOW_DELAY_MS (10) +static int app__slow_command(ipc_server_reply_cb *reply_cb, + struct ipc_server_reply_data *reply_data) +{ + struct strbuf buf = STRBUF_INIT; + int row; + int ret; + + for (row = 0; row < SLOW_ROWS; row++) { + strbuf_setlen(&buf, 0); + strbuf_addf(&buf, "big: %.75d\n", row); + ret = reply_cb(reply_data, buf.buf, buf.len); + sleep_millisec(SLOW_DELAY_MS); + } + + strbuf_release(&buf); + + return ret; +} + +/* + * The client sent a command followed by a (possibly very) large buffer. + */ +static int app__sendbytes_command(const char *received, + ipc_server_reply_cb *reply_cb, + struct ipc_server_reply_data *reply_data) +{ + struct strbuf buf_resp = STRBUF_INIT; + const char *p = "?"; + int len_ballast = 0; + int k; + int errs = 0; + int ret; + + if (skip_prefix(received, "sendbytes ", &p)) + len_ballast = strlen(p); + + /* + * Verify that the ballast is n copies of a single letter. + * And that the multi-threaded IO layer didn't cross the streams. + */ + for (k = 1; k < len_ballast; k++) + if (p[k] != p[0]) + errs++; + + if (errs) + strbuf_addf(&buf_resp, "errs:%d\n", errs); + else + strbuf_addf(&buf_resp, "rcvd:%c%08d\n", p[0], len_ballast); + + ret = reply_cb(reply_data, buf_resp.buf, buf_resp.len); + + strbuf_release(&buf_resp); + + return ret; +} + +/* + * An arbitrary fixed address to verify that the application instance + * data is handled properly. + */ +static int my_app_data = 42; + +static ipc_server_application_cb test_app_cb; + +/* + * This is "application callback" that sits on top of the "ipc-server". + * It completely defines the set of command verbs supported by this + * application. + */ +static int test_app_cb(void *application_data, + const char *command, + ipc_server_reply_cb *reply_cb, + struct ipc_server_reply_data *reply_data) +{ + /* + * Verify that we received the application-data that we passed + * when we started the ipc-server. (We have several layers of + * callbacks calling callbacks and it's easy to get things mixed + * up (especially when some are "void*").) + */ + if (application_data != (void*)&my_app_data) + BUG("application_cb: application_data pointer wrong"); + + if (!strcmp(command, "quit")) { + /* + * Tell ipc-server to hangup with an empty reply. 
+ */ + return SIMPLE_IPC_QUIT; + } + + if (!strcmp(command, "ping")) { + const char *answer = "pong"; + return reply_cb(reply_data, answer, strlen(answer)); + } + + if (!strcmp(command, "big")) + return app__big_command(reply_cb, reply_data); + + if (!strcmp(command, "chunk")) + return app__chunk_command(reply_cb, reply_data); + + if (!strcmp(command, "slow")) + return app__slow_command(reply_cb, reply_data); + + if (starts_with(command, "sendbytes ")) + return app__sendbytes_command(command, reply_cb, reply_data); + + return app__unhandled_command(command, reply_cb, reply_data); +} + +/* + * This process will run as a simple-ipc server and listen for IPC commands + * from client processes. + */ +static int daemon__run_server(const char *path, int argc, const char **argv) +{ + struct ipc_server_opts opts = { + .nr_threads = 5 + }; + + const char * const daemon_usage[] = { + N_("test-helper simple-ipc daemon ["), + NULL + }; + struct option daemon_options[] = { + OPT_INTEGER(0, "threads", &opts.nr_threads, + N_("number of threads in server thread pool")), + OPT_END() + }; + + argc = parse_options(argc, argv, NULL, daemon_options, daemon_usage, 0); + + if (opts.nr_threads < 1) + opts.nr_threads = 1; + + /* + * Synchronously run the ipc-server. We don't need any application + * instance data, so pass an arbitrary pointer (that we'll later + * verify made the round trip). + */ + return ipc_server_run(path, &opts, test_app_cb, (void*)&my_app_data); +} + +/* + * This process will run a quick probe to see if a simple-ipc server + * is active on this path. + * + * Returns 0 if the server is alive. + */ +static int client__probe_server(const char *path) +{ + enum ipc_active_state s; + + s = ipc_get_active_state(path); + switch (s) { + case IPC_STATE__LISTENING: + return 0; + + case IPC_STATE__NOT_LISTENING: + return error("no server listening at '%s'", path); + + case IPC_STATE__PATH_NOT_FOUND: + return error("path not found '%s'", path); + + case IPC_STATE__INVALID_PATH: + return error("invalid pipe/socket name '%s'", path); + + case IPC_STATE__OTHER_ERROR: + default: + return error("other error for '%s'", path); + } +} + +/* + * Send an IPC command to an already-running server daemon and print the + * response. + * + * argv[2] contains a simple (1 word) command verb that `test_app_cb()` + * (in the daemon process) will understand. + */ +static int client__send_ipc(int argc, const char **argv, const char *path) +{ + const char *command = argc > 2 ? argv[2] : "(no command)"; + struct strbuf buf = STRBUF_INIT; + struct ipc_client_connect_options options + = IPC_CLIENT_CONNECT_OPTIONS_INIT; + + options.wait_if_busy = 1; + options.wait_if_not_found = 0; + + if (!ipc_client_send_command(path, &options, command, &buf)) { + printf("%s\n", buf.buf); + fflush(stdout); + strbuf_release(&buf); + + return 0; + } + + return error("failed to send '%s' to '%s'", command, path); +} + +/* + * Send an IPC command followed by ballast to confirm that a large + * message can be sent and that the kernel or pkt-line layers will + * properly chunk it and that the daemon receives the entire message. 
+ */ +static int do_sendbytes(int bytecount, char byte, const char *path) +{ + struct strbuf buf_send = STRBUF_INIT; + struct strbuf buf_resp = STRBUF_INIT; + struct ipc_client_connect_options options + = IPC_CLIENT_CONNECT_OPTIONS_INIT; + + options.wait_if_busy = 1; + options.wait_if_not_found = 0; + + strbuf_addstr(&buf_send, "sendbytes "); + strbuf_addchars(&buf_send, byte, bytecount); + + if (!ipc_client_send_command(path, &options, buf_send.buf, &buf_resp)) { + strbuf_rtrim(&buf_resp); + printf("sent:%c%08d %s\n", byte, bytecount, buf_resp.buf); + fflush(stdout); + strbuf_release(&buf_send); + strbuf_release(&buf_resp); + + return 0; + } + + return error("client failed to sendbytes(%d, '%c') to '%s'", + bytecount, byte, path); +} + +/* + * Send an IPC command with ballast to an already-running server daemon. + */ +static int client__sendbytes(int argc, const char **argv, const char *path) +{ + int bytecount = 1024; + char *string = "x"; + const char * const sendbytes_usage[] = { + N_("test-helper simple-ipc sendbytes []"), + NULL + }; + struct option sendbytes_options[] = { + OPT_INTEGER(0, "bytecount", &bytecount, N_("number of bytes")), + OPT_STRING(0, "byte", &string, N_("byte"), N_("ballast")), + OPT_END() + }; + + argc = parse_options(argc, argv, NULL, sendbytes_options, sendbytes_usage, 0); + + return do_sendbytes(bytecount, string[0], path); +} + +struct multiple_thread_data { + pthread_t pthread_id; + struct multiple_thread_data *next; + const char *path; + int bytecount; + int batchsize; + int sum_errors; + int sum_good; + char letter; +}; + +static void *multiple_thread_proc(void *_multiple_thread_data) +{ + struct multiple_thread_data *d = _multiple_thread_data; + int k; + + trace2_thread_start("multiple"); + + for (k = 0; k < d->batchsize; k++) { + if (do_sendbytes(d->bytecount + k, d->letter, d->path)) + d->sum_errors++; + else + d->sum_good++; + } + + trace2_thread_exit(); + return NULL; +} + +/* + * Start a client-side thread pool. Each thread sends a series of + * IPC requests. Each request is on a new connection to the server. 
+ */ +static int client__multiple(int argc, const char **argv, const char *path) +{ + struct multiple_thread_data *list = NULL; + int k; + int nr_threads = 5; + int bytecount = 1; + int batchsize = 10; + int sum_join_errors = 0; + int sum_thread_errors = 0; + int sum_good = 0; + + const char * const multiple_usage[] = { + N_("test-helper simple-ipc multiple []"), + NULL + }; + struct option multiple_options[] = { + OPT_INTEGER(0, "bytecount", &bytecount, N_("number of bytes")), + OPT_INTEGER(0, "threads", &nr_threads, N_("number of threads")), + OPT_INTEGER(0, "batchsize", &batchsize, N_("number of requests per thread")), + OPT_END() + }; + + argc = parse_options(argc, argv, NULL, multiple_options, multiple_usage, 0); + + if (bytecount < 1) + bytecount = 1; + if (nr_threads < 1) + nr_threads = 1; + if (batchsize < 1) + batchsize = 1; + + for (k = 0; k < nr_threads; k++) { + struct multiple_thread_data *d = xcalloc(1, sizeof(*d)); + d->next = list; + d->path = path; + d->bytecount = bytecount + batchsize*(k/26); + d->batchsize = batchsize; + d->sum_errors = 0; + d->sum_good = 0; + d->letter = 'A' + (k % 26); + + if (pthread_create(&d->pthread_id, NULL, multiple_thread_proc, d)) { + warning("failed to create thread[%d] skipping remainder", k); + free(d); + break; + } + + list = d; + } + + while (list) { + struct multiple_thread_data *d = list; + + if (pthread_join(d->pthread_id, NULL)) + sum_join_errors++; + + sum_thread_errors += d->sum_errors; + sum_good += d->sum_good; + + list = d->next; + free(d); + } + + printf("client (good %d) (join %d), (errors %d)\n", + sum_good, sum_join_errors, sum_thread_errors); + + return (sum_join_errors + sum_thread_errors) ? 1 : 0; +} + +int cmd__simple_ipc(int argc, const char **argv) +{ + const char *path = "ipc-test"; + + if (argc == 2 && !strcmp(argv[1], "SUPPORTS_SIMPLE_IPC")) + return 0; + + /* Use '!!' on all dispatch functions to map from `error()` style + * (returns -1) style to `test_must_fail` style (expects 1) and + * get less confusing shell error messages. + */ + + if (argc == 2 && !strcmp(argv[1], "is-active")) + return !!client__probe_server(path); + + if (argc >= 2 && !strcmp(argv[1], "daemon")) + return !!daemon__run_server(path, argc, argv); + + /* + * Client commands follow. Ensure a server is running before + * going any further. 
+ */ + if (client__probe_server(path)) + return 1; + + if ((argc == 2 || argc == 3) && !strcmp(argv[1], "send")) + return !!client__send_ipc(argc, argv, path); + + if (argc >= 2 && !strcmp(argv[1], "sendbytes")) + return !!client__sendbytes(argc, argv, path); + + if (argc >= 2 && !strcmp(argv[1], "multiple")) + return !!client__multiple(argc, argv, path); + + die("Unhandled argv[1]: '%s'", argv[1]); +} +#endif diff --git a/t/helper/test-tool.c b/t/helper/test-tool.c index 9d6d14d9293..a409655f03b 100644 --- a/t/helper/test-tool.c +++ b/t/helper/test-tool.c @@ -64,6 +64,7 @@ static struct test_cmd cmds[] = { { "sha1", cmd__sha1 }, { "sha256", cmd__sha256 }, { "sigchain", cmd__sigchain }, + { "simple-ipc", cmd__simple_ipc }, { "strcmp-offset", cmd__strcmp_offset }, { "string-list", cmd__string_list }, { "submodule-config", cmd__submodule_config }, diff --git a/t/helper/test-tool.h b/t/helper/test-tool.h index a6470ff62c4..564eb3c8e91 100644 --- a/t/helper/test-tool.h +++ b/t/helper/test-tool.h @@ -54,6 +54,7 @@ int cmd__sha1(int argc, const char **argv); int cmd__oid_array(int argc, const char **argv); int cmd__sha256(int argc, const char **argv); int cmd__sigchain(int argc, const char **argv); +int cmd__simple_ipc(int argc, const char **argv); int cmd__strcmp_offset(int argc, const char **argv); int cmd__string_list(int argc, const char **argv); int cmd__submodule_config(int argc, const char **argv); diff --git a/t/t0052-simple-ipc.sh b/t/t0052-simple-ipc.sh new file mode 100755 index 00000000000..69588354545 --- /dev/null +++ b/t/t0052-simple-ipc.sh @@ -0,0 +1,129 @@ +#!/bin/sh + +test_description='simple command server' + +. ./test-lib.sh + +test-tool simple-ipc SUPPORTS_SIMPLE_IPC || { + skip_all='simple IPC not supported on this platform' + test_done +} + +stop_simple_IPC_server () { + test -n "$SIMPLE_IPC_PID" || return 0 + + kill "$SIMPLE_IPC_PID" && + SIMPLE_IPC_PID= +} + +test_expect_success 'start simple command server' ' + { test-tool simple-ipc daemon --threads=8 & } && + SIMPLE_IPC_PID=$! && + test_atexit stop_simple_IPC_server && + + sleep 1 && + + test-tool simple-ipc is-active +' + +test_expect_success 'simple command server' ' + test-tool simple-ipc send ping >actual && + echo pong >expect && + test_cmp expect actual +' + +test_expect_success 'servers cannot share the same path' ' + test_must_fail test-tool simple-ipc daemon && + test-tool simple-ipc is-active +' + +test_expect_success 'big response' ' + test-tool simple-ipc send big >actual && + test_line_count -ge 10000 actual && + grep -q "big: [0]*9999\$" actual +' + +test_expect_success 'chunk response' ' + test-tool simple-ipc send chunk >actual && + test_line_count -ge 10000 actual && + grep -q "big: [0]*9999\$" actual +' + +test_expect_success 'slow response' ' + test-tool simple-ipc send slow >actual && + test_line_count -ge 100 actual && + grep -q "big: [0]*99\$" actual +' + +# Send an IPC with n=100,000 bytes of ballast. This should be large enough +# to force both the kernel and the pkt-line layer to chunk the message to the +# daemon and for the daemon to receive it in chunks. +# +test_expect_success 'sendbytes' ' + test-tool simple-ipc sendbytes --bytecount=100000 --byte=A >actual && + grep "sent:A00100000 rcvd:A00100000" actual +' + +# Start a series of client threads that each make +# IPC requests to the server. Each ( * ) request +# will open a new connection to the server and randomly bind to a server +# thread. Each client thread exits after completing its batch. 
So the +# total number of live client threads will be smaller than the total. +# Each request will send a message containing at least bytes +# of ballast. (Responses are small.) +# +# The purpose here is to test threading in the server and responding to +# many concurrent client requests (regardless of whether they come from +# 1 client process or many). And to test that the server side of the +# named pipe/socket is stable. (On Windows this means that the server +# pipe is properly recycled.) +# +# On Windows it also lets us adjust the connection timeout in the +# `ipc_client_send_command()`. +# +# Note it is easy to drive the system into failure by requesting an +# insane number of threads on client or server and/or increasing the +# per-thread batchsize or the per-request bytecount (ballast). +# On Windows these failures look like "pipe is busy" errors. +# So I've chosen fairly conservative values for now. +# +# We expect output of the form "sent: ..." +# With terms (7, 19, 13) we expect: +# in [A-G] +# in [19+0 .. 19+(13-1)] +# and (7 * 13) successful responses. +# +test_expect_success 'stress test threads' ' + test-tool simple-ipc multiple \ + --threads=7 \ + --bytecount=19 \ + --batchsize=13 \ + >actual && + test_line_count = 92 actual && + grep "good 91" actual && + grep "sent:A" actual_a && + cat >expect_a <<-EOF && + sent:A00000019 rcvd:A00000019 + sent:A00000020 rcvd:A00000020 + sent:A00000021 rcvd:A00000021 + sent:A00000022 rcvd:A00000022 + sent:A00000023 rcvd:A00000023 + sent:A00000024 rcvd:A00000024 + sent:A00000025 rcvd:A00000025 + sent:A00000026 rcvd:A00000026 + sent:A00000027 rcvd:A00000027 + sent:A00000028 rcvd:A00000028 + sent:A00000029 rcvd:A00000029 + sent:A00000030 rcvd:A00000030 + sent:A00000031 rcvd:A00000031 + EOF + test_cmp expect_a actual_a +' + +test_expect_success '`quit` works' ' + test-tool simple-ipc send quit && + test_must_fail test-tool simple-ipc is-active && + test_must_fail test-tool simple-ipc send ping +' + +test_done From patchwork Mon Feb 1 19:45:43 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Hostetler X-Patchwork-Id: 12059935 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id EE8ACC433DB for ; Mon, 1 Feb 2021 19:49:21 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id AD41D64ECC for ; Mon, 1 Feb 2021 19:49:21 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232374AbhBATtM (ORCPT ); Mon, 1 Feb 2021 14:49:12 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40612 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232464AbhBATqr (ORCPT ); Mon, 1 Feb 2021 14:46:47 -0500 Received: from mail-wm1-x334.google.com (mail-wm1-x334.google.com [IPv6:2a00:1450:4864:20::334]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4BFDBC061794 for ; Mon, 1 Feb 2021 11:46:00 -0800 (PST) Received: by mail-wm1-x334.google.com with SMTP id 
e15so336451wme.0 for ; Mon, 01 Feb 2021 11:46:00 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=d1cHuoV8KkZBLxPSXFGQmyuEQ52j4D1R8rr07+j1FaY=; b=q66JmoZIHBM0ypO5M/rCRGV/X2h2vfWQbO6S8xjzrlggssaHOtzVfKec3PtGwT3uFJ LGXq/YmBBSa77eubNylvfbKB4NMC4oPo3g3/lro7kSGwuRHAsA/RotQ5W1g8iht55kkr r2MZJkmJhiAHBqlfEFCnagS82+3v5x4S4vjG/ILEydihG7Oul1yOUMvD1sHKcreyFcfI gdewIgsJTF47edZ26uaZm7sgBp1vTgo0Eb/aC8DU4Lx2WVTzSiZk4YNy+RklHfYMLh7k NnVN3uzB2aooav/XZPxaOzzodYKVvV/nirGOt2Wn3YfBO0SckaC2MW21I5sjEuuZFnfE Mjow== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=d1cHuoV8KkZBLxPSXFGQmyuEQ52j4D1R8rr07+j1FaY=; b=FOwSHt5xz2Ijb6SKLEDWQS7PQrUr68Zkp8qYbJxt6DtLFGEr1Ct0nRcyCzQJTUWyZ8 TEziONOB4hg1lZJFUyn8Piv6Cj4tOmWXHUyEtDg4DffsavTQFMJytWM/+hxqyWNDg6sF ifgHaoCRdd8ePeOF5YtGvq+/SNVia8JytswPtHx73qG6xl3ubU8j4gF/y1rPS8RtsdN9 JFAPrIvsMRC2H23F6A98V6MJhy6rVfDQn8a53vBfCNMTMFW8Z5aTK2AmK9CGtE12MSCa Fj6nU1N2Ep/jbW8Ocy78hj34rEioJjhMo+wgwInuYigf+6qKREEMrUqrKapr2u2lmmpz egwg== X-Gm-Message-State: AOAM533Mu1ow/LqEGRIQpPvjaFhDAE00iq40oMcbQxBJMK44j6nco0r5 mxXVh8sA/UIhdBtNf7Fgnf28YqrnlcE= X-Google-Smtp-Source: ABdhPJwP6ikpr5+jbCr8/Leduz4oqJgitiVjVekVb5YBZyh9xGVsCYIIEDecYMXCtlYxiNq/lBRtaw== X-Received: by 2002:a1c:4303:: with SMTP id q3mr415201wma.3.1612208758873; Mon, 01 Feb 2021 11:45:58 -0800 (PST) Received: from [127.0.0.1] ([13.74.141.28]) by smtp.gmail.com with ESMTPSA id b3sm320834wme.32.2021.02.01.11.45.58 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 01 Feb 2021 11:45:58 -0800 (PST) Message-Id: In-Reply-To: References: Date: Mon, 01 Feb 2021 19:45:43 +0000 Subject: [PATCH v2 10/14] unix-socket: elimiate static unix_stream_socket() helper function Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: =?utf-8?b?w4Z2YXIgQXJuZmrDtnLDsA==?= Bjarmason , Jeff Hostetler , Jeff King , Chris Torek , Jeff Hostetler , Jeff Hostetler Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler From: Jeff Hostetler The static helper function `unix_stream_socket()` calls `die()`. This is not appropriate for all callers. Eliminate the wrapper function and move the existing error handling to the callers in preparation for adapting specific callers. 
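For illustration only (not part of this patch): a minimal caller-side sketch, assuming the usual git-compat-util.h and <sys/un.h> includes, of the error handling that becomes possible once socket creation reports failure to the caller instead of dying. The helper name try_stream_connect() is hypothetical.

static int try_stream_connect(const char *path)
{
	struct sockaddr_un sa;
	int fd, saved_errno;

	fd = socket(AF_UNIX, SOCK_STREAM, 0);
	if (fd < 0)
		return error_errno(_("unable to create socket"));

	if (strlen(path) >= sizeof(sa.sun_path)) {
		close(fd);
		errno = ENAMETOOLONG;
		return -1;
	}
	memset(&sa, 0, sizeof(sa));
	sa.sun_family = AF_UNIX;
	strcpy(sa.sun_path, path);

	if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
		/* report and let the caller decide, rather than die() */
		saved_errno = errno;
		close(fd);
		errno = saved_errno;
		return -1;
	}

	return fd;	/* caller owns the connected socket */
}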
Signed-off-by: Jeff Hostetler --- unix-socket.c | 17 +++++++---------- 1 file changed, 7 insertions(+), 10 deletions(-) diff --git a/unix-socket.c b/unix-socket.c index 19ed48be990..ef2aeb46bcd 100644 --- a/unix-socket.c +++ b/unix-socket.c @@ -1,14 +1,6 @@ #include "cache.h" #include "unix-socket.h" -static int unix_stream_socket(void) -{ - int fd = socket(AF_UNIX, SOCK_STREAM, 0); - if (fd < 0) - die_errno("unable to create socket"); - return fd; -} - static int chdir_len(const char *orig, int len) { char *path = xmemdupz(orig, len); @@ -79,7 +71,10 @@ int unix_stream_connect(const char *path) if (unix_sockaddr_init(&sa, path, &ctx) < 0) return -1; - fd = unix_stream_socket(); + fd = socket(AF_UNIX, SOCK_STREAM, 0); + if (fd < 0) + die_errno("unable to create socket"); + if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) goto fail; unix_sockaddr_cleanup(&ctx); @@ -103,7 +98,9 @@ int unix_stream_listen(const char *path) if (unix_sockaddr_init(&sa, path, &ctx) < 0) return -1; - fd = unix_stream_socket(); + fd = socket(AF_UNIX, SOCK_STREAM, 0); + if (fd < 0) + die_errno("unable to create socket"); if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) goto fail; From patchwork Mon Feb 1 19:45:44 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Hostetler X-Patchwork-Id: 12059927 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 51494C43381 for ; Mon, 1 Feb 2021 19:48:44 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 1724B64EC6 for ; Mon, 1 Feb 2021 19:48:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232601AbhBATsa (ORCPT ); Mon, 1 Feb 2021 14:48:30 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40618 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232466AbhBATqr (ORCPT ); Mon, 1 Feb 2021 14:46:47 -0500 Received: from mail-wm1-x330.google.com (mail-wm1-x330.google.com [IPv6:2a00:1450:4864:20::330]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1D5A8C061797 for ; Mon, 1 Feb 2021 11:46:01 -0800 (PST) Received: by mail-wm1-x330.google.com with SMTP id u14so333812wml.4 for ; Mon, 01 Feb 2021 11:46:01 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=CA3S0LJqlNcGPOU+UDmTskNqSc6aAfxyACjsPm8c3yk=; b=rdWsIvXTJEJXJBBqRSVlAAyUy475u//j/nsHCrQ5Am7vO8bDH5iswV5L91VlRH1e+p UTKo9m9MULUEWn6LuT4HeKMZpPLFF/BcRlnkshw6TOazDk0BtWTFRnBqBrl3s2e7At3G +rcRMBFgCkm546HQwwpsd+7mIe5k6CkidGBYvrw1nKgeFxqnvl+BU0vVh40jQE17bo7P YRdaiBwHyRJWoMIUNZUVukJ2JUq7INA+a7/h+gHdr7ITq7F+fJngZOvOTeyGkUpylX7+ 2LEOd66QHVjuk4xZl/uEdczWMu+XvqxPoUTniq5KFP1BFeFVdPmMmZBjxYbF25LJFuHc eX3A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date 
:subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=CA3S0LJqlNcGPOU+UDmTskNqSc6aAfxyACjsPm8c3yk=; b=OI44tjXT26WrQSgIWRnpngV+lxL9Ui7Bs49C8mZm0Dt8+Q8dqBD53jtbgKyeNac/+6 ylnkOK1kejRURH2sgVk8Dz79+nMGb94JETDkSO0N9vDcUygGzSHGs7Pr5aLQ0Y0t5U8O swTw9NKWrwBB+mMqG7Rt1iKPEC2DV0OnySgGW1XLNRwfDJEktN4vuD8kT0pB++fujMFL hYdAsdQOG0OCEKfrdyNpt3EfrIXzbeWUSXOiB1hrFyYlKh65oxA/8xDu9EYBfQKyhVP8 RWY4qhtgiaWgAg295rDefEuLqCUNaG5MvjDcf58PVDJJl6YaHrTpP0+cEPXnER1ftE6y T/Qg== X-Gm-Message-State: AOAM530Gf8eY8WtzyJxWavoyoza8H6sNsAAXsdAhpaU/IjJcVX/uCCza eHbqizAfnYAnZePWJ9TAiTPYdKI2PHM= X-Google-Smtp-Source: ABdhPJx7sudddHeooTzYnTUteDfrg78dMwNHwVXcwkOUi6eu01v3sub4mpgRCdqL8ShgEawyaNNbgg== X-Received: by 2002:a1c:2ec2:: with SMTP id u185mr395834wmu.83.1612208759714; Mon, 01 Feb 2021 11:45:59 -0800 (PST) Received: from [127.0.0.1] ([13.74.141.28]) by smtp.gmail.com with ESMTPSA id b3sm320856wme.32.2021.02.01.11.45.59 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 01 Feb 2021 11:45:59 -0800 (PST) Message-Id: <7a6a69dfc20c6ff190cb020931c46bf4d88bab59.1612208747.git.gitgitgadget@gmail.com> In-Reply-To: References: Date: Mon, 01 Feb 2021 19:45:44 +0000 Subject: [PATCH v2 11/14] unix-socket: add options to unix_stream_listen() Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: =?utf-8?b?w4Z2YXIgQXJuZmrDtnLDsA==?= Bjarmason , Jeff Hostetler , Jeff King , Chris Torek , Jeff Hostetler , Jeff Hostetler Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler From: Jeff Hostetler Update `unix_stream_listen()` to take an options structure to override default behaviors. This includes the size of the `listen()` backlog and whether it should always unlink the socket file before trying to create a new one. Also eliminate calls to `die()` if it cannot create a socket. Normally, `unix_stream_listen()` always tries to `unlink()` the socket-path before calling `bind()`. If there is an existing server/daemon already bound and listening on that socket-path, our `unlink()` would have the effect of disassociating the existing server's bound-socket-fd from the socket-path without notifying the existing server. The existing server could continue to service existing connections (accepted-socket-fd's), but would not receive any futher new connections (since clients rendezvous via the socket-path). The existing server would effectively be offline but yet appear to be active. Furthermore, `unix_stream_listen()` creates an opportunity for a brief race condition for connecting clients if they try to connect in the interval between the forced `unlink()` and the subsequent `bind()` (which recreates the socket-path that is bound to a new socket-fd in the current process). 
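For illustration only (not part of this patch): a caller-side sketch of the new interface, using only the fields introduced here. The wrapper name start_listening() is hypothetical.

static int start_listening(const char *socket_path)
{
	struct unix_stream_listen_opts opts = UNIX_STREAM_LISTEN_OPTS_INIT;
	int fd;

	/*
	 * Do not steal the path from an already-running daemon; with
	 * the unlink() suppressed, bind() fails with EADDRINUSE while
	 * another server is still bound to the path.
	 */
	opts.force_unlink_before_bind = 0;

	/* allow a deeper accept queue than the default of 5 */
	opts.listen_backlog_size = 128;

	fd = unix_stream_listen(socket_path, &opts);
	if (fd < 0)
		return error_errno(_("unable to listen on '%s'"), socket_path);

	return fd;
}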
Signed-off-by: Jeff Hostetler --- builtin/credential-cache--daemon.c | 3 ++- unix-socket.c | 28 +++++++++++++++++++++------- unix-socket.h | 14 +++++++++++++- 3 files changed, 36 insertions(+), 9 deletions(-) diff --git a/builtin/credential-cache--daemon.c b/builtin/credential-cache--daemon.c index c61f123a3b8..4c6c89ab0de 100644 --- a/builtin/credential-cache--daemon.c +++ b/builtin/credential-cache--daemon.c @@ -203,9 +203,10 @@ static int serve_cache_loop(int fd) static void serve_cache(const char *socket_path, int debug) { + struct unix_stream_listen_opts opts = UNIX_STREAM_LISTEN_OPTS_INIT; int fd; - fd = unix_stream_listen(socket_path); + fd = unix_stream_listen(socket_path, &opts); if (fd < 0) die_errno("unable to bind to '%s'", socket_path); diff --git a/unix-socket.c b/unix-socket.c index ef2aeb46bcd..8bcef18ea55 100644 --- a/unix-socket.c +++ b/unix-socket.c @@ -88,24 +88,35 @@ int unix_stream_connect(const char *path) return -1; } -int unix_stream_listen(const char *path) +int unix_stream_listen(const char *path, + const struct unix_stream_listen_opts *opts) { - int fd, saved_errno; + int fd = -1; + int saved_errno; + int bind_successful = 0; + int backlog; struct sockaddr_un sa; struct unix_sockaddr_context ctx; - unlink(path); - if (unix_sockaddr_init(&sa, path, &ctx) < 0) return -1; + fd = socket(AF_UNIX, SOCK_STREAM, 0); if (fd < 0) - die_errno("unable to create socket"); + goto fail; + + if (opts->force_unlink_before_bind) + unlink(path); if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) goto fail; + bind_successful = 1; - if (listen(fd, 5) < 0) + if (opts->listen_backlog_size > 0) + backlog = opts->listen_backlog_size; + else + backlog = 5; + if (listen(fd, backlog) < 0) goto fail; unix_sockaddr_cleanup(&ctx); @@ -114,7 +125,10 @@ int unix_stream_listen(const char *path) fail: saved_errno = errno; unix_sockaddr_cleanup(&ctx); - close(fd); + if (fd != -1) + close(fd); + if (bind_successful) + unlink(path); errno = saved_errno; return -1; } diff --git a/unix-socket.h b/unix-socket.h index e271aeec5a0..c28372ef48e 100644 --- a/unix-socket.h +++ b/unix-socket.h @@ -1,7 +1,19 @@ #ifndef UNIX_SOCKET_H #define UNIX_SOCKET_H +struct unix_stream_listen_opts { + int listen_backlog_size; + unsigned int force_unlink_before_bind:1; +}; + +#define UNIX_STREAM_LISTEN_OPTS_INIT \ +{ \ + .listen_backlog_size = 5, \ + .force_unlink_before_bind = 1, \ +} + int unix_stream_connect(const char *path); -int unix_stream_listen(const char *path); +int unix_stream_listen(const char *path, + const struct unix_stream_listen_opts *opts); #endif /* UNIX_SOCKET_H */ From patchwork Mon Feb 1 19:45:45 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Hostetler X-Patchwork-Id: 12059939 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 55BB5C433E0 for ; Mon, 1 Feb 2021 19:49:45 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 136B564ECB for ; Mon, 1 Feb 2021 19:49:45 +0000 (UTC) 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232414AbhBATth (ORCPT ); Mon, 1 Feb 2021 14:49:37 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40476 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232365AbhBATqe (ORCPT ); Mon, 1 Feb 2021 14:46:34 -0500 Received: from mail-wr1-x42d.google.com (mail-wr1-x42d.google.com [IPv6:2a00:1450:4864:20::42d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 19328C0617A9 for ; Mon, 1 Feb 2021 11:46:02 -0800 (PST) Received: by mail-wr1-x42d.google.com with SMTP id q7so17921206wre.13 for ; Mon, 01 Feb 2021 11:46:02 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=message-id:in-reply-to:references:from:date:subject:fcc :content-transfer-encoding:mime-version:to:cc; bh=VzM+PE+3kF6zgcBTgKFXoFbs+d9tx2DOnwjzoMRPDAw=; b=a8Ljt8h0ZbQODOAcuSKdSDu/5mVrdmS8hs+DV+IgXtvrO3HDt3dbDKkYYdzG0IxB9P G5jvAJIBbnpZkQpJ6a7XTYhpmje3YCnKtIoGKlAZjidIGnTbkTm9NgIX0VlEsM2f8zVV U3bpP4wMrddVykn51uluO66HQkUcW8IEO+x7A79ZsUVNgKNj7knYI75+Vk98MJiV0X+r UKPEA6rxV8KLTtZBVtf+pit33UIWKWl20ZGpwUyRFIWGdp44NOHo2hPy07fxpEVqWz3P PG9vnUqgc0cvcX1ahWK3ZzJHhLGmIxn9OZgcsLj4U7GeJDSEUgpZXTm6c6rB8o+deQYe veGg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:message-id:in-reply-to:references:from:date :subject:fcc:content-transfer-encoding:mime-version:to:cc; bh=VzM+PE+3kF6zgcBTgKFXoFbs+d9tx2DOnwjzoMRPDAw=; b=X4z3DF5uVNf2rV9GSLyaX7U4PxcsQ8d/kJPBUxJsSs8zVByuv+KxKuqVQKjTSvdGJV N8Rhi7Q2xwBYDEPvVEZxeH7g35lUSyqy8yE2GPZZl/kAizi7Cn88nh2S4xUwL8sf5VN1 4y5gXzLenteZF9lVAkRdCfXKZ9+RoJyQS8JMaikAbHgi0RQckgdS9qG/IrW7ApkuftqQ Ljr5QEioxYd6UELBdj/KBmwiearLkiYM9n9NgR8fjz6+1WhN4RXwsuk2gKi7J00FH3HP bTckCwmSN0e/ldKpD88PZBbnLErpysFSukzaGdKFcsSE78Rcgf3/K7ZLuvl6RAAeyBF5 ol+A== X-Gm-Message-State: AOAM5326cv1zl338Wt7YCSsNnAdcfqJEUY5UIKT8CSJ3MoR0EAxNT2WA pCM2Lca+tTujtzXUomy0gmFttIEHgvI= X-Google-Smtp-Source: ABdhPJz5CmcvoYN4qy3FXnSY8Bh9QrKHFRBBzkowU0399AJ89CPuXUgEb8Y0xxLI4FDxlN8TkQjLLg== X-Received: by 2002:adf:f7cf:: with SMTP id a15mr20001606wrq.351.1612208760704; Mon, 01 Feb 2021 11:46:00 -0800 (PST) Received: from [127.0.0.1] ([13.74.141.28]) by smtp.gmail.com with ESMTPSA id r13sm351033wmh.9.2021.02.01.11.45.59 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 01 Feb 2021 11:46:00 -0800 (PST) Message-Id: <745b6d5fb74699b7fe7e32080b18779aa4a82547.1612208747.git.gitgitgadget@gmail.com> In-Reply-To: References: Date: Mon, 01 Feb 2021 19:45:45 +0000 Subject: [PATCH v2 12/14] unix-socket: add no-chdir option to unix_stream_listen() Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: =?utf-8?b?w4Z2YXIgQXJuZmrDtnLDsA==?= Bjarmason , Jeff Hostetler , Jeff King , Chris Torek , Jeff Hostetler , Jeff Hostetler Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler From: Jeff Hostetler Calls to `chdir()` are dangerous in a multi-threaded context. If `unix_stream_listen()` is given a socket pathname that is too big to fit in a `sockaddr_un` structure, it will `chdir()` to the parent directory of the requested socket pathname, create the socket using a relative pathname, and then `chdir()` back. This is not thread-safe. Add `disallow_chdir` flag to `struct unix_sockaddr_context` and change all callers to pass an initialized context structure. Teach `unix_sockaddr_init()` to not allow calls to `chdir()` when flag is set. 
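For illustration only (not part of this patch): how a multi-threaded caller might use the new flag. The wrapper name is hypothetical.

static int start_listening_threaded(const char *socket_path)
{
	struct unix_stream_listen_opts opts = UNIX_STREAM_LISTEN_OPTS_INIT;
	int fd;

	/*
	 * Never chdir() behind the backs of the other threads; a path
	 * that does not fit in sun_path now fails with ENAMETOOLONG
	 * instead of being handled via chdir().
	 */
	opts.disallow_chdir = 1;

	fd = unix_stream_listen(socket_path, &opts);
	if (fd < 0) {
		if (errno == ENAMETOOLONG)
			return error(_("socket path too long: '%s'"),
				     socket_path);
		return error_errno(_("unable to listen on '%s'"), socket_path);
	}

	return fd;
}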
Signed-off-by: Jeff Hostetler --- unix-socket.c | 19 ++++++++++++++++--- unix-socket.h | 2 ++ 2 files changed, 18 insertions(+), 3 deletions(-) diff --git a/unix-socket.c b/unix-socket.c index 8bcef18ea55..9726992f276 100644 --- a/unix-socket.c +++ b/unix-socket.c @@ -11,8 +11,15 @@ static int chdir_len(const char *orig, int len) struct unix_sockaddr_context { char *orig_dir; + unsigned int disallow_chdir:1; }; +#define UNIX_SOCKADDR_CONTEXT_INIT \ +{ \ + .orig_dir=NULL, \ + .disallow_chdir=0, \ +} + static void unix_sockaddr_cleanup(struct unix_sockaddr_context *ctx) { if (!ctx->orig_dir) @@ -32,7 +39,11 @@ static int unix_sockaddr_init(struct sockaddr_un *sa, const char *path, { int size = strlen(path) + 1; - ctx->orig_dir = NULL; + if (ctx->disallow_chdir && size > sizeof(sa->sun_path)) { + errno = ENAMETOOLONG; + return -1; + } + if (size > sizeof(sa->sun_path)) { const char *slash = find_last_dir_sep(path); const char *dir; @@ -67,7 +78,7 @@ int unix_stream_connect(const char *path) { int fd, saved_errno; struct sockaddr_un sa; - struct unix_sockaddr_context ctx; + struct unix_sockaddr_context ctx = UNIX_SOCKADDR_CONTEXT_INIT; if (unix_sockaddr_init(&sa, path, &ctx) < 0) return -1; @@ -96,7 +107,9 @@ int unix_stream_listen(const char *path, int bind_successful = 0; int backlog; struct sockaddr_un sa; - struct unix_sockaddr_context ctx; + struct unix_sockaddr_context ctx = UNIX_SOCKADDR_CONTEXT_INIT; + + ctx.disallow_chdir = opts->disallow_chdir; if (unix_sockaddr_init(&sa, path, &ctx) < 0) return -1; diff --git a/unix-socket.h b/unix-socket.h index c28372ef48e..5b0e8ccef10 100644 --- a/unix-socket.h +++ b/unix-socket.h @@ -4,12 +4,14 @@ struct unix_stream_listen_opts { int listen_backlog_size; unsigned int force_unlink_before_bind:1; + unsigned int disallow_chdir:1; }; #define UNIX_STREAM_LISTEN_OPTS_INIT \ { \ .listen_backlog_size = 5, \ .force_unlink_before_bind = 1, \ + .disallow_chdir = 0, \ } int unix_stream_connect(const char *path); From patchwork Mon Feb 1 19:45:46 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Hostetler X-Patchwork-Id: 12059925 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-12.7 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI,SPF_HELO_NONE,SPF_PASS autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6B2EFC4332B for ; Mon, 1 Feb 2021 19:48:44 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id 3FED764ECC for ; Mon, 1 Feb 2021 19:48:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231226AbhBATsh (ORCPT ); Mon, 1 Feb 2021 14:48:37 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40684 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S232503AbhBATq7 (ORCPT ); Mon, 1 Feb 2021 14:46:59 -0500 Received: from mail-wm1-x333.google.com (mail-wm1-x333.google.com [IPv6:2a00:1450:4864:20::333]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E5425C0617A7 for ; Mon, 1 Feb 2021 11:46:02 -0800 (PST) Received: by mail-wm1-x333.google.com with SMTP id u14so333861wml.4 for ; 
Mon, 01 Feb 2021 11:46:02 -0800 (PST)
Message-Id: <2cca15a10ecec321731bf628e1317ff8d244dfd0.1612208747.git.gitgitgadget@gmail.com> In-Reply-To: References: Date: Mon, 01 Feb 2021 19:45:46 +0000 Subject: [PATCH v2 13/14] unix-socket: do not call die in unix_stream_connect() Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler, Jeff King, Chris Torek Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler

Teach `unix_stream_connect()` to return an error rather than calling `die()` when a socket cannot be created.
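With this change a caller can treat a failed `socket()` like any other connection failure. A minimal, hypothetical caller sketch (only `unix_stream_connect()` itself is from this series; the helper name is made up):

	#include "cache.h"
	#include "unix-socket.h"

	/*
	 * Hypothetical caller sketch: after this patch,
	 * unix_stream_connect() reports socket() failures through its
	 * return value and errno rather than die()-ing, so a long-running
	 * client can degrade gracefully.
	 */
	static int try_ping_daemon(const char *path)
	{
		int fd = unix_stream_connect(path);

		if (fd < 0) {
			/* ENOENT, ECONNREFUSED, EMFILE, ... -- no die_errno() here */
			warning_errno(_("could not connect to '%s'"), path);
			return -1;
		}
		close(fd);
		return 0;
	}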
Signed-off-by: Jeff Hostetler --- unix-socket.c | 9 ++++++--- 1 file changed, 6 insertions(+), 3 deletions(-) diff --git a/unix-socket.c b/unix-socket.c index 9726992f276..c7573df56a6 100644 --- a/unix-socket.c +++ b/unix-socket.c @@ -76,15 +76,17 @@ static int unix_sockaddr_init(struct sockaddr_un *sa, const char *path, int unix_stream_connect(const char *path) { - int fd, saved_errno; + int fd = -1; + int saved_errno; struct sockaddr_un sa; struct unix_sockaddr_context ctx = UNIX_SOCKADDR_CONTEXT_INIT; if (unix_sockaddr_init(&sa, path, &ctx) < 0) return -1; + fd = socket(AF_UNIX, SOCK_STREAM, 0); if (fd < 0) - die_errno("unable to create socket"); + goto fail; if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) goto fail; @@ -94,7 +96,8 @@ int unix_stream_connect(const char *path) fail: saved_errno = errno; unix_sockaddr_cleanup(&ctx); - close(fd); + if (fd != -1) + close(fd); errno = saved_errno; return -1; } From patchwork Mon Feb 1 19:45:47 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jeff Hostetler X-Patchwork-Id: 12059931
Message-Id: <72c1c209c380d5232e6356cdc338467288ee0425.1612208747.git.gitgitgadget@gmail.com> In-Reply-To: References: Date: Mon, 01 Feb 2021 19:45:47 +0000 Subject: [PATCH v2 14/14] simple-ipc: add Unix domain socket implementation Fcc: Sent MIME-Version: 1.0 To: git@vger.kernel.org Cc: Ævar Arnfjörð Bjarmason, Jeff Hostetler, Jeff King, Chris Torek Precedence: bulk List-ID: X-Mailing-List: git@vger.kernel.org From: Jeff Hostetler

Create a Unix domain socket based implementation of "simple-ipc".

A set of `ipc_client` routines implement a client library to connect to an `ipc_server` over a Unix domain socket, send a simple request, and receive a single response. Clients use blocking IO on the socket.

A set of `ipc_server` routines implement a thread pool to listen for and concurrently service client connections.

The server creates a new Unix domain socket at a known location. If a socket already exists with that name, the server tries to determine whether another server is already listening on it or whether the socket is dead. If the socket is busy, the server exits with an error rather than stealing it. If the socket is dead, the server creates a new one and starts up. If, while running, the server detects that its socket has been stolen by another server, it automatically exits.
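As a rough illustration of the client side described above, a one-shot request/response exchange might look like the sketch below. `ipc_client_send_command()`, `struct ipc_client_connect_options`, and `IPC_CLIENT_CONNECT_OPTIONS_INIT` all come from this patch; the helper name, socket path, and message handling are made up.

	#include "cache.h"
	#include "simple-ipc.h"
	#include "strbuf.h"

	/*
	 * Hypothetical one-shot client sketch: connect to the daemon's
	 * well-known socket, send one request, and print the reply.
	 * The call blocks while the request is serviced.
	 */
	static int send_one_request(const char *socket_path, const char *request)
	{
		struct ipc_client_connect_options options =
			IPC_CLIENT_CONNECT_OPTIONS_INIT;
		struct strbuf answer = STRBUF_INIT;
		int ret;

		options.wait_if_busy = 1;      /* retry while the server starts up */
		options.wait_if_not_found = 0; /* fail fast if no daemon is running */

		ret = ipc_client_send_command(socket_path, &options,
					      request, &answer);
		if (!ret)
			printf("%s\n", answer.buf);

		strbuf_release(&answer);
		return ret;
	}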
Signed-off-by: Jeff Hostetler --- Makefile | 2 + compat/simple-ipc/ipc-unix-socket.c | 1127 +++++++++++++++++++++++++++ contrib/buildsystems/CMakeLists.txt | 2 + simple-ipc.h | 7 +- 4 files changed, 1137 insertions(+), 1 deletion(-) create mode 100644 compat/simple-ipc/ipc-unix-socket.c diff --git a/Makefile b/Makefile index e7ba8853ea6..f2524c02ff0 100644 --- a/Makefile +++ b/Makefile @@ -1681,6 +1681,8 @@ ifdef NO_UNIX_SOCKETS BASIC_CFLAGS += -DNO_UNIX_SOCKETS else LIB_OBJS += unix-socket.o + LIB_OBJS += compat/simple-ipc/ipc-shared.o + LIB_OBJS += compat/simple-ipc/ipc-unix-socket.o endif ifdef USE_WIN32_IPC diff --git a/compat/simple-ipc/ipc-unix-socket.c b/compat/simple-ipc/ipc-unix-socket.c new file mode 100644 index 00000000000..844906d1af5 --- /dev/null +++ b/compat/simple-ipc/ipc-unix-socket.c @@ -0,0 +1,1127 @@ +#include "cache.h" +#include "simple-ipc.h" +#include "strbuf.h" +#include "pkt-line.h" +#include "thread-utils.h" +#include "unix-socket.h" + +#ifdef NO_UNIX_SOCKETS +#error compat/simple-ipc/ipc-unix-socket.c requires Unix sockets +#endif + +enum ipc_active_state ipc_get_active_state(const char *path) +{ + enum ipc_active_state state = IPC_STATE__OTHER_ERROR; + struct ipc_client_connect_options options + = IPC_CLIENT_CONNECT_OPTIONS_INIT; + struct stat st; + struct ipc_client_connection *connection_test = NULL; + + options.wait_if_busy = 0; + options.wait_if_not_found = 0; + + if (lstat(path, &st) == -1) { + switch (errno) { + case ENOENT: + case ENOTDIR: + return IPC_STATE__NOT_LISTENING; + default: + return IPC_STATE__INVALID_PATH; + } + } + + /* also complain if a plain file is in the way */ + if ((st.st_mode & S_IFMT) != S_IFSOCK) + return IPC_STATE__INVALID_PATH; + + /* + * Just because the filesystem has a S_IFSOCK type inode + * at `path`, doesn't mean it that there is a server listening. + * Ping it to be sure. + */ + state = ipc_client_try_connect(path, &options, &connection_test); + ipc_client_close_connection(connection_test); + + return state; +} + +/* + * This value was chosen at random. + */ +#define WAIT_STEP_MS (50) + +/* + * Try to connect to the server. If the server is just starting up or + * is very busy, we may not get a connection the first time. + */ +static enum ipc_active_state connect_to_server( + const char *path, + int timeout_ms, + const struct ipc_client_connect_options *options, + int *pfd) +{ + int wait_ms = 50; + int k; + + *pfd = -1; + + for (k = 0; k < timeout_ms; k += wait_ms) { + int fd = unix_stream_connect(path); + + if (fd != -1) { + *pfd = fd; + return IPC_STATE__LISTENING; + } + + if (errno == ENOENT) { + if (!options->wait_if_not_found) + return IPC_STATE__PATH_NOT_FOUND; + + goto sleep_and_try_again; + } + + if (errno == ETIMEDOUT) { + if (!options->wait_if_busy) + return IPC_STATE__NOT_LISTENING; + + goto sleep_and_try_again; + } + + if (errno == ECONNREFUSED) { + if (!options->wait_if_busy) + return IPC_STATE__NOT_LISTENING; + + goto sleep_and_try_again; + } + + return IPC_STATE__OTHER_ERROR; + + sleep_and_try_again: + sleep_millisec(wait_ms); + } + + return IPC_STATE__NOT_LISTENING; +} + +/* + * A randomly chosen timeout value. 
+ */ +#define MY_CONNECTION_TIMEOUT_MS (1000) + +enum ipc_active_state ipc_client_try_connect( + const char *path, + const struct ipc_client_connect_options *options, + struct ipc_client_connection **p_connection) +{ + enum ipc_active_state state = IPC_STATE__OTHER_ERROR; + int fd = -1; + + *p_connection = NULL; + + trace2_region_enter("ipc-client", "try-connect", NULL); + trace2_data_string("ipc-client", NULL, "try-connect/path", path); + + state = connect_to_server(path, MY_CONNECTION_TIMEOUT_MS, + options, &fd); + + trace2_data_intmax("ipc-client", NULL, "try-connect/state", + (intmax_t)state); + trace2_region_leave("ipc-client", "try-connect", NULL); + + if (state == IPC_STATE__LISTENING) { + (*p_connection) = xcalloc(1, sizeof(struct ipc_client_connection)); + (*p_connection)->fd = fd; + } + + return state; +} + +void ipc_client_close_connection(struct ipc_client_connection *connection) +{ + if (!connection) + return; + + if (connection->fd != -1) + close(connection->fd); + + free(connection); +} + +int ipc_client_send_command_to_connection( + struct ipc_client_connection *connection, + const char *message, struct strbuf *answer) +{ + int ret = 0; + + strbuf_setlen(answer, 0); + + trace2_region_enter("ipc-client", "send-command", NULL); + + if (write_packetized_from_buf2(message, strlen(message), + connection->fd, 1, + &connection->scratch_write_buffer) < 0) { + ret = error(_("could not send IPC command")); + goto done; + } + + if (read_packetized_to_strbuf(connection->fd, answer, + PACKET_READ_NEVER_DIE) < 0) { + ret = error(_("could not read IPC response")); + goto done; + } + +done: + trace2_region_leave("ipc-client", "send-command", NULL); + return ret; +} + +int ipc_client_send_command(const char *path, + const struct ipc_client_connect_options *options, + const char *message, struct strbuf *answer) +{ + int ret = -1; + enum ipc_active_state state; + struct ipc_client_connection *connection = NULL; + + state = ipc_client_try_connect(path, options, &connection); + + if (state != IPC_STATE__LISTENING) + return ret; + + ret = ipc_client_send_command_to_connection(connection, message, answer); + + ipc_client_close_connection(connection); + + return ret; +} + +static int set_socket_blocking_flag(int fd, int make_nonblocking) +{ + int flags; + + flags = fcntl(fd, F_GETFL, NULL); + + if (flags < 0) + return -1; + + if (make_nonblocking) + flags |= O_NONBLOCK; + else + flags &= ~O_NONBLOCK; + + return fcntl(fd, F_SETFL, flags); +} + +/* + * Magic numbers used to annotate callback instance data. + * These are used to help guard against accidentally passing the + * wrong instance data across multiple levels of callbacks (which + * is easy to do if there are `void*` arguments). 
+ */ +enum magic { + MAGIC_SERVER_REPLY_DATA, + MAGIC_WORKER_THREAD_DATA, + MAGIC_ACCEPT_THREAD_DATA, + MAGIC_SERVER_DATA, +}; + +struct ipc_server_reply_data { + enum magic magic; + int fd; + struct ipc_worker_thread_data *worker_thread_data; +}; + +struct ipc_worker_thread_data { + enum magic magic; + struct ipc_worker_thread_data *next_thread; + struct ipc_server_data *server_data; + pthread_t pthread_id; + struct packet_scratch_space scratch_write_buffer; +}; + +struct ipc_accept_thread_data { + enum magic magic; + struct ipc_server_data *server_data; + + int fd_listen; + struct stat st_listen; + + int fd_send_shutdown; + int fd_wait_shutdown; + pthread_t pthread_id; +}; + +/* + * With unix-sockets, the conceptual "ipc-server" is implemented as a single + * controller "accept-thread" thread and a pool of "worker-thread" threads. + * The former does the usual `accept()` loop and dispatches connections + * to an idle worker thread. The worker threads wait in an idle loop for + * a new connection, communicate with the client and relay data to/from + * the `application_cb` and then wait for another connection from the + * server thread. This avoids the overhead of constantly creating and + * destroying threads. + */ +struct ipc_server_data { + enum magic magic; + ipc_server_application_cb *application_cb; + void *application_data; + struct strbuf buf_path; + + struct ipc_accept_thread_data *accept_thread; + struct ipc_worker_thread_data *worker_thread_list; + + pthread_mutex_t work_available_mutex; + pthread_cond_t work_available_cond; + + /* + * Accepted but not yet processed client connections are kept + * in a circular buffer FIFO. The queue is empty when the + * positions are equal. + */ + int *fifo_fds; + int queue_size; + int back_pos; + int front_pos; + + int shutdown_requested; + int is_stopped; +}; + +/* + * Remove and return the oldest queued connection. + * + * Returns -1 if empty. + */ +static int fifo_dequeue(struct ipc_server_data *server_data) +{ + /* ASSERT holding mutex */ + + int fd; + + if (server_data->back_pos == server_data->front_pos) + return -1; + + fd = server_data->fifo_fds[server_data->front_pos]; + server_data->fifo_fds[server_data->front_pos] = -1; + + server_data->front_pos++; + if (server_data->front_pos == server_data->queue_size) + server_data->front_pos = 0; + + return fd; +} + +/* + * Push a new fd onto the back of the queue. + * + * Drop it and return -1 if queue is already full. + */ +static int fifo_enqueue(struct ipc_server_data *server_data, int fd) +{ + /* ASSERT holding mutex */ + + int next_back_pos; + + next_back_pos = server_data->back_pos + 1; + if (next_back_pos == server_data->queue_size) + next_back_pos = 0; + + if (next_back_pos == server_data->front_pos) { + /* Queue is full. Just drop it. */ + close(fd); + return -1; + } + + server_data->fifo_fds[server_data->back_pos] = fd; + server_data->back_pos = next_back_pos; + + return fd; +} + +/* + * Wait for a connection to be queued to the FIFO and return it. + * + * Returns -1 if someone has already requested a shutdown. 
+ */ +static int worker_thread__wait_for_connection( + struct ipc_worker_thread_data *worker_thread_data) +{ + /* ASSERT NOT holding mutex */ + + struct ipc_server_data *server_data = worker_thread_data->server_data; + int fd = -1; + + pthread_mutex_lock(&server_data->work_available_mutex); + for (;;) { + if (server_data->shutdown_requested) + break; + + fd = fifo_dequeue(server_data); + if (fd >= 0) + break; + + pthread_cond_wait(&server_data->work_available_cond, + &server_data->work_available_mutex); + } + pthread_mutex_unlock(&server_data->work_available_mutex); + + return fd; +} + +/* + * Forward declare our reply callback function so that any compiler + * errors are reported when we actually define the function (in addition + * to any errors reported when we try to pass this callback function as + * a parameter in a function call). The former are easier to understand. + */ +static ipc_server_reply_cb do_io_reply_callback; + +/* + * Relay application's response message to the client process. + * (We do not flush at this point because we allow the caller + * to chunk data to the client thru us.) + */ +static int do_io_reply_callback(struct ipc_server_reply_data *reply_data, + const char *response, size_t response_len) +{ + struct packet_scratch_space *scratch = + &reply_data->worker_thread_data->scratch_write_buffer; + + if (reply_data->magic != MAGIC_SERVER_REPLY_DATA) + BUG("reply_cb called with wrong instance data"); + + return write_packetized_from_buf2(response, response_len, + reply_data->fd, 0, scratch); +} + +/* A randomly chosen value. */ +#define MY_WAIT_POLL_TIMEOUT_MS (10) + +/* + * If the client hangs up without sending any data on the wire, just + * quietly close the socket and ignore this client. + * + * This worker thread is committed to reading the IPC request data + * from the client at the other end of this fd. Wait here for the + * client to actually put something on the wire -- because if the + * client just does a ping (connect and hangup without sending any + * data), our use of the pkt-line read routines will spew an error + * message. + * + * Return -1 if the client hung up. + * Return 0 if data (possibly incomplete) is ready. + */ +static int worker_thread__wait_for_io_start( + struct ipc_worker_thread_data *worker_thread_data, + int fd) +{ + struct ipc_server_data *server_data = worker_thread_data->server_data; + struct pollfd pollfd[1]; + int result; + + for (;;) { + pollfd[0].fd = fd; + pollfd[0].events = POLLIN; + + result = poll(pollfd, 1, MY_WAIT_POLL_TIMEOUT_MS); + if (result < 0) { + if (errno == EINTR) + continue; + goto cleanup; + } + + if (result == 0) { + /* a timeout */ + + int in_shutdown; + + pthread_mutex_lock(&server_data->work_available_mutex); + in_shutdown = server_data->shutdown_requested; + pthread_mutex_unlock(&server_data->work_available_mutex); + + /* + * If a shutdown is already in progress and this + * client has not started talking yet, just drop it. + */ + if (in_shutdown) + goto cleanup; + continue; + } + + if (pollfd[0].revents & POLLHUP) + goto cleanup; + + if (pollfd[0].revents & POLLIN) + return 0; + + goto cleanup; + } + +cleanup: + close(fd); + return -1; +} + +/* + * Receive the request/command from the client and pass it to the + * registered request-callback. The request-callback will compose + * a response and call our reply-callback to send it to the client. 
+ */ +static int worker_thread__do_io( + struct ipc_worker_thread_data *worker_thread_data, + int fd) +{ + /* ASSERT NOT holding lock */ + + struct strbuf buf = STRBUF_INIT; + struct ipc_server_reply_data reply_data; + int ret = 0; + + reply_data.magic = MAGIC_SERVER_REPLY_DATA; + reply_data.worker_thread_data = worker_thread_data; + + reply_data.fd = fd; + + ret = read_packetized_to_strbuf(reply_data.fd, &buf, + PACKET_READ_NEVER_DIE); + if (ret >= 0) { + ret = worker_thread_data->server_data->application_cb( + worker_thread_data->server_data->application_data, + buf.buf, do_io_reply_callback, &reply_data); + + packet_flush_gently(reply_data.fd); + } + else { + /* + * The client probably disconnected/shutdown before it + * could send a well-formed message. Ignore it. + */ + } + + strbuf_release(&buf); + close(reply_data.fd); + + return ret; +} + +/* + * Block SIGPIPE on the current thread (so that we get EPIPE from + * write() rather than an actual signal). + * + * Note that using sigchain_push() and _pop() to control SIGPIPE + * around our IO calls is not thread safe: + * [] It uses a global stack of handler frames. + * [] It uses ALLOC_GROW() to resize it. + * [] Finally, according to the `signal(2)` man-page: + * "The effects of `signal()` in a multithreaded process are unspecified." + */ +static void thread_block_sigpipe(sigset_t *old_set) +{ + sigset_t new_set; + + sigemptyset(&new_set); + sigaddset(&new_set, SIGPIPE); + + sigemptyset(old_set); + pthread_sigmask(SIG_BLOCK, &new_set, old_set); +} + +/* + * Thread proc for an IPC worker thread. It handles a series of + * connections from clients. It pulls the next fd from the queue + * processes it, and then waits for the next client. + * + * Block SIGPIPE in this worker thread for the life of the thread. + * This avoids stray (and sometimes delayed) SIGPIPE signals caused + * by client errors and/or when we are under extremely heavy IO load. + * + * This means that the application callback will have SIGPIPE blocked. + * The callback should not change it. + */ +static void *worker_thread_proc(void *_worker_thread_data) +{ + struct ipc_worker_thread_data *worker_thread_data = _worker_thread_data; + struct ipc_server_data *server_data = worker_thread_data->server_data; + sigset_t old_set; + int fd, io; + int ret; + + trace2_thread_start("ipc-worker"); + + thread_block_sigpipe(&old_set); + + for (;;) { + fd = worker_thread__wait_for_connection(worker_thread_data); + if (fd == -1) + break; /* in shutdown */ + + io = worker_thread__wait_for_io_start(worker_thread_data, fd); + if (io == -1) + continue; /* client hung up without sending anything */ + + ret = worker_thread__do_io(worker_thread_data, fd); + + if (ret == SIMPLE_IPC_QUIT) { + trace2_data_string("ipc-worker", NULL, "queue_stop_async", + "application_quit"); + /* The application told us to shutdown. */ + ipc_server_stop_async(server_data); + break; + } + } + + trace2_thread_exit(); + return NULL; +} + +/* + * Return 1 if someone deleted or stole the on-disk socket from us. + */ +static int socket_was_stolen(struct ipc_accept_thread_data *accept_thread_data) +{ + struct stat st; + struct stat *ref_st = &accept_thread_data->st_listen; + + if (lstat(accept_thread_data->server_data->buf_path.buf, &st) == -1) + return 1; + + if (st.st_ino != ref_st->st_ino) + return 1; + + /* We might also consider the creation time on some platforms. */ + + return 0; +} + +/* A randomly chosen value. 
*/ +#define MY_ACCEPT_POLL_TIMEOUT_MS (60 * 1000) + +/* + * Accept a new client connection on our socket. This uses non-blocking + * IO so that we can also wait for shutdown requests on our socket-pair + * without actually spinning on a fast timeout. + */ +static int accept_thread__wait_for_connection( + struct ipc_accept_thread_data *accept_thread_data) +{ + struct pollfd pollfd[2]; + int result; + + for (;;) { + pollfd[0].fd = accept_thread_data->fd_wait_shutdown; + pollfd[0].events = POLLIN; + + pollfd[1].fd = accept_thread_data->fd_listen; + pollfd[1].events = POLLIN; + + result = poll(pollfd, 2, MY_ACCEPT_POLL_TIMEOUT_MS); + if (result < 0) { + if (errno == EINTR) + continue; + return result; + } + + if (result == 0) { + /* a timeout */ + + /* + * If someone deletes or force-creates a new unix + * domain socket at out path, all future clients + * will be routed elsewhere and we silently starve. + * If that happens, just queue a shutdown. + */ + if (socket_was_stolen( + accept_thread_data)) { + trace2_data_string("ipc-accept", NULL, + "queue_stop_async", + "socket_stolen"); + ipc_server_stop_async( + accept_thread_data->server_data); + } + continue; + } + + if (pollfd[0].revents & POLLIN) { + /* shutdown message queued to socketpair */ + return -1; + } + + if (pollfd[1].revents & POLLIN) { + /* a connection is available on fd_listen */ + + int client_fd = accept(accept_thread_data->fd_listen, + NULL, NULL); + if (client_fd >= 0) + return client_fd; + + /* + * An error here is unlikely -- it probably + * indicates that the connecting process has + * already dropped the connection. + */ + continue; + } + + BUG("unandled poll result errno=%d r[0]=%d r[1]=%d", + errno, pollfd[0].revents, pollfd[1].revents); + } +} + +/* + * Thread proc for the IPC server "accept thread". This waits for + * an incoming socket connection, appends it to the queue of available + * connections, and notifies a worker thread to process it. + * + * Block SIGPIPE in this thread for the life of the thread. This + * avoids any stray SIGPIPE signals when closing pipe fds under + * extremely heavy loads (such as when the fifo queue is full and we + * drop incomming connections). + */ +static void *accept_thread_proc(void *_accept_thread_data) +{ + struct ipc_accept_thread_data *accept_thread_data = _accept_thread_data; + struct ipc_server_data *server_data = accept_thread_data->server_data; + sigset_t old_set; + + trace2_thread_start("ipc-accept"); + + thread_block_sigpipe(&old_set); + + for (;;) { + int client_fd = accept_thread__wait_for_connection( + accept_thread_data); + + pthread_mutex_lock(&server_data->work_available_mutex); + if (server_data->shutdown_requested) { + pthread_mutex_unlock(&server_data->work_available_mutex); + if (client_fd >= 0) + close(client_fd); + break; + } + + if (client_fd < 0) { + /* ignore transient accept() errors */ + } + else { + fifo_enqueue(server_data, client_fd); + pthread_cond_broadcast(&server_data->work_available_cond); + } + pthread_mutex_unlock(&server_data->work_available_mutex); + } + + trace2_thread_exit(); + return NULL; +} + +/* + * We can't predict the connection arrival rate relative to the worker + * processing rate, therefore we allow the "accept-thread" to queue up + * a generous number of connections, since we'd rather have the client + * not unnecessarily timeout if we can avoid it. (The assumption is + * that this will be used for FSMonitor and a few second wait on a + * connection is better than having the client timeout and do the full + * computation itself.) 
+ * + * The FIFO queue size is set to a multiple of the worker pool size. + * This value chosen at random. + */ +#define FIFO_SCALE (100) + +/* + * The backlog value for `listen(2)`. This doesn't need to huge, + * rather just large enough for our "accept-thread" to wake up and + * queue incoming connections onto the FIFO without the kernel + * dropping any. + * + * This value chosen at random. + */ +#define LISTEN_BACKLOG (50) + +/* + * Create a unix domain socket at the given path to listen for + * client connections. The resulting socket will then appear + * in the filesystem as an inode with S_IFSOCK. The inode is + * itself created as part of the `bind(2)` operation. + * + * The term "socket" is ambiguous in this context. We want to open a + * "socket-fd" that is bound to a "socket-inode" (path) on disk. We + * listen on "socket-fd" for new connections and clients try to + * open/connect using the "socket-inode" pathname. + * + * Unix domain sockets have a fundamental design flaw because the + * "socket-inode" persists until the pathname is deleted; closing the + * listening "socket-fd" only closes the socket handle/descriptor, it + * does not delete the inode/pathname. + * + * Well-behaving service daemons are expected to also delete the inode + * before shutdown. If a service crashes (or forgets) it can leave + * the (now stale) inode in the filesystem. This behaves like a stale + * ".lock" file and may prevent future service instances from starting + * up correctly. (Because they won't be able to bind.) + * + * When future service instances try to create the listener socket, + * `bind(2)` will fail with EADDRINUSE -- because the inode already + * exists. However, the new instance cannot tell if it is a stale + * inode *or* another service instance is already running. + * + * One possible solution is to blindly unlink the inode before + * attempting to bind a new socket-fd and thus create a new + * socket-inode. Then `bind(2)` should always succeed. However, if + * there is an existing service instance, it would be orphaned -- it + * would still be listening on a socket-fd that is still bound to an + * (unlinked) socket-inode, but that socket-inode is no longer + * associated with the pathname. New client connections will arrive + * at OUR new socket-inode -- rather than the existing server's + * socket. (I suppose it is up to the existing server to detect that + * its socket-inode has been stolen and shutdown.) + * + * Another possible solution is to try to use the ".lock" trick, but + * bind() does not have a exclusive-create use bit like open() does, + * so we cannot have multiple servers fighting/racing to create the + * same file name without having losers lose without knowing that they + * lost. + * + * We try to avoid such stealing and would rather fail to run than + * steal an existing socket-inode (because we assume that the + * existing server has more context and value to the clients than a + * freshly started server). However, if multiple servers are racing + * to start, we don't care which one wins -- none of them have any + * state information yet worth fighting for. + * + * Create a "unique" socket-inode (with our PID in it (and assume that + * we can force-delete an existing socket with that name)). Stat it + * to get the inode number and ctime -- so that we can identify it as + * the one we created. Then use the atomic-rename trick to install it + * in the real location. 
(This will unlink an existing socket with + * that pathname -- and thereby steal the real socket-inode from an + * existing server.) + * + * Elsewhere, our thread will periodically poll the socket-inode to + * see if someone else steals ours. + */ +static int create_listener_socket(const char *path, + const struct ipc_server_opts *ipc_opts, + struct stat *st_socket) +{ + struct stat st; + struct strbuf buf_uniq = STRBUF_INIT; + int fd_listen; + struct unix_stream_listen_opts uslg_opts = UNIX_STREAM_LISTEN_OPTS_INIT; + + if (!lstat(path, &st) && S_ISSOCK(st.st_mode)) { + int fd_client; + /* + * A socket-inode at `path` exists on disk, but we + * don't know whether it belongs to an active server + * or if the last server died without cleaning up. + * + * Poke it with a trivial connection to try to find out. + */ + trace2_data_string("ipc-server", NULL, "try-detect-server", + path); + fd_client = unix_stream_connect(path); + if (fd_client >= 0) { + close(fd_client); + errno = EADDRINUSE; + return error_errno(_("socket already in use '%s'"), + path); + } + } + + /* + * Create pathname to our "unique" socket and set it up for + * business. + */ + strbuf_addf(&buf_uniq, "%s.%d", path, getpid()); + + uslg_opts.listen_backlog_size = LISTEN_BACKLOG; + uslg_opts.force_unlink_before_bind = 1; + uslg_opts.disallow_chdir = ipc_opts->uds_disallow_chdir; + fd_listen = unix_stream_listen(buf_uniq.buf, &uslg_opts); + if (fd_listen < 0) { + int saved_errno = errno; + error_errno(_("could not create listener socket '%s'"), + buf_uniq.buf); + strbuf_release(&buf_uniq); + errno = saved_errno; + return -1; + } + + if (lstat(buf_uniq.buf, st_socket)) { + int saved_errno = errno; + error_errno(_("could not stat listener socket '%s'"), + buf_uniq.buf); + close(fd_listen); + unlink(buf_uniq.buf); + strbuf_release(&buf_uniq); + errno = saved_errno; + return -1; + } + + if (set_socket_blocking_flag(fd_listen, 1)) { + int saved_errno = errno; + error_errno(_("could not set listener socket nonblocking '%s'"), + buf_uniq.buf); + close(fd_listen); + unlink(buf_uniq.buf); + strbuf_release(&buf_uniq); + errno = saved_errno; + return -1; + } + + /* + * Install it as the "real" socket so that clients will starting + * connecting to our socket. + */ + if (rename(buf_uniq.buf, path)) { + int saved_errno = errno; + error_errno(_("could not create listener socket '%s'"), path); + close(fd_listen); + unlink(buf_uniq.buf); + strbuf_release(&buf_uniq); + errno = saved_errno; + return -1; + } + + strbuf_release(&buf_uniq); + trace2_data_string("ipc-server", NULL, "try-listen", path); + return fd_listen; +} + +static int setup_listener_socket(const char *path, struct stat *st_socket, + const struct ipc_server_opts *ipc_opts) +{ + int fd_listen; + + trace2_region_enter("ipc-server", "create-listener_socket", NULL); + fd_listen = create_listener_socket(path, ipc_opts, st_socket); + trace2_region_leave("ipc-server", "create-listener_socket", NULL); + + return fd_listen; +} + +/* + * Start IPC server in a pool of background threads. + */ +int ipc_server_run_async(struct ipc_server_data **returned_server_data, + const char *path, const struct ipc_server_opts *opts, + ipc_server_application_cb *application_cb, + void *application_data) +{ + struct ipc_server_data *server_data; + int fd_listen; + struct stat st_listen; + int sv[2]; + int k; + int nr_threads = opts->nr_threads; + + *returned_server_data = NULL; + + /* + * Create a socketpair and set sv[1] to non-blocking. 
This + * will used to send a shutdown message to the accept-thread + * and allows the accept-thread to wait on EITHER a client + * connection or a shutdown request without spinning. + */ + if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) + return error_errno(_("could not create socketpair for '%s'"), + path); + + if (set_socket_blocking_flag(sv[1], 1)) { + int saved_errno = errno; + close(sv[0]); + close(sv[1]); + errno = saved_errno; + return error_errno(_("making socketpair nonblocking '%s'"), + path); + } + + fd_listen = setup_listener_socket(path, &st_listen, opts); + if (fd_listen < 0) { + int saved_errno = errno; + close(sv[0]); + close(sv[1]); + errno = saved_errno; + return -1; + } + + server_data = xcalloc(1, sizeof(*server_data)); + server_data->magic = MAGIC_SERVER_DATA; + server_data->application_cb = application_cb; + server_data->application_data = application_data; + strbuf_init(&server_data->buf_path, 0); + strbuf_addstr(&server_data->buf_path, path); + + if (nr_threads < 1) + nr_threads = 1; + + pthread_mutex_init(&server_data->work_available_mutex, NULL); + pthread_cond_init(&server_data->work_available_cond, NULL); + + server_data->queue_size = nr_threads * FIFO_SCALE; + server_data->fifo_fds = xcalloc(server_data->queue_size, + sizeof(*server_data->fifo_fds)); + + server_data->accept_thread = + xcalloc(1, sizeof(*server_data->accept_thread)); + server_data->accept_thread->magic = MAGIC_ACCEPT_THREAD_DATA; + server_data->accept_thread->server_data = server_data; + server_data->accept_thread->fd_listen = fd_listen; + server_data->accept_thread->st_listen = st_listen; + server_data->accept_thread->fd_send_shutdown = sv[0]; + server_data->accept_thread->fd_wait_shutdown = sv[1]; + + if (pthread_create(&server_data->accept_thread->pthread_id, NULL, + accept_thread_proc, server_data->accept_thread)) + die_errno(_("could not start accept_thread '%s'"), path); + + for (k = 0; k < nr_threads; k++) { + struct ipc_worker_thread_data *wtd; + + wtd = xcalloc(1, sizeof(*wtd)); + wtd->magic = MAGIC_WORKER_THREAD_DATA; + wtd->server_data = server_data; + + if (pthread_create(&wtd->pthread_id, NULL, worker_thread_proc, + wtd)) { + if (k == 0) + die(_("could not start worker[0] for '%s'"), + path); + /* + * Limp along with the thread pool that we have. + */ + break; + } + + wtd->next_thread = server_data->worker_thread_list; + server_data->worker_thread_list = wtd; + } + + *returned_server_data = server_data; + return 0; +} + +/* + * Gently tell the IPC server treads to shutdown. + * Can be run on any thread. + */ +int ipc_server_stop_async(struct ipc_server_data *server_data) +{ + /* ASSERT NOT holding mutex */ + + int fd; + + if (!server_data) + return 0; + + trace2_region_enter("ipc-server", "server-stop-async", NULL); + + pthread_mutex_lock(&server_data->work_available_mutex); + + server_data->shutdown_requested = 1; + + /* + * Write a byte to the shutdown socket pair to wake up the + * accept-thread. + */ + if (write(server_data->accept_thread->fd_send_shutdown, "Q", 1) < 0) + error_errno("could not write to fd_send_shutdown"); + + /* + * Drain the queue of existing connections. + */ + while ((fd = fifo_dequeue(server_data)) != -1) + close(fd); + + /* + * Gently tell worker threads to stop processing new connections + * and exit. (This does not abort in-process conversations.) 
+ */ + pthread_cond_broadcast(&server_data->work_available_cond); + + pthread_mutex_unlock(&server_data->work_available_mutex); + + trace2_region_leave("ipc-server", "server-stop-async", NULL); + + return 0; +} + +/* + * Wait for all IPC server threads to stop. + */ +int ipc_server_await(struct ipc_server_data *server_data) +{ + pthread_join(server_data->accept_thread->pthread_id, NULL); + + if (!server_data->shutdown_requested) + BUG("ipc-server: accept-thread stopped for '%s'", + server_data->buf_path.buf); + + while (server_data->worker_thread_list) { + struct ipc_worker_thread_data *wtd = + server_data->worker_thread_list; + + pthread_join(wtd->pthread_id, NULL); + + server_data->worker_thread_list = wtd->next_thread; + free(wtd); + } + + server_data->is_stopped = 1; + + return 0; +} + +void ipc_server_free(struct ipc_server_data *server_data) +{ + struct ipc_accept_thread_data * accept_thread_data; + + if (!server_data) + return; + + if (!server_data->is_stopped) + BUG("cannot free ipc-server while running for '%s'", + server_data->buf_path.buf); + + accept_thread_data = server_data->accept_thread; + if (accept_thread_data) { + if (accept_thread_data->fd_listen != -1) { + /* + * Only unlink the unix domain socket if we + * created it. That is, if another daemon + * process force-created a new socket at this + * path, and effectively steals our path + * (which prevents us from receiving any + * future clients), we don't want to do the + * same thing to them. + */ + if (!socket_was_stolen( + accept_thread_data)) + unlink(server_data->buf_path.buf); + + close(accept_thread_data->fd_listen); + } + if (accept_thread_data->fd_send_shutdown != -1) + close(accept_thread_data->fd_send_shutdown); + if (accept_thread_data->fd_wait_shutdown != -1) + close(accept_thread_data->fd_wait_shutdown); + + free(server_data->accept_thread); + } + + while (server_data->worker_thread_list) { + struct ipc_worker_thread_data *wtd = + server_data->worker_thread_list; + + server_data->worker_thread_list = wtd->next_thread; + free(wtd); + } + + pthread_cond_destroy(&server_data->work_available_cond); + pthread_mutex_destroy(&server_data->work_available_mutex); + + strbuf_release(&server_data->buf_path); + + free(server_data->fifo_fds); + free(server_data); +} diff --git a/contrib/buildsystems/CMakeLists.txt b/contrib/buildsystems/CMakeLists.txt index 4bd41054ee7..4c27a373414 100644 --- a/contrib/buildsystems/CMakeLists.txt +++ b/contrib/buildsystems/CMakeLists.txt @@ -248,6 +248,8 @@ endif() if(CMAKE_SYSTEM_NAME STREQUAL "Windows") list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-win32.c) +else() + list(APPEND compat_SOURCES compat/simple-ipc/ipc-shared.c compat/simple-ipc/ipc-unix-socket.c) endif() set(EXE_EXTENSION ${CMAKE_EXECUTABLE_SUFFIX}) diff --git a/simple-ipc.h b/simple-ipc.h index eb19b5da8b1..17b28bc1f83 100644 --- a/simple-ipc.h +++ b/simple-ipc.h @@ -5,7 +5,7 @@ * See Documentation/technical/api-simple-ipc.txt */ -#if defined(GIT_WINDOWS_NATIVE) +#if defined(GIT_WINDOWS_NATIVE) || !defined(NO_UNIX_SOCKETS) #define SUPPORTS_SIMPLE_IPC #endif @@ -160,6 +160,11 @@ struct ipc_server_data; struct ipc_server_opts { int nr_threads; + + /* + * Disallow chdir() when creating a Unix domain socket. + */ + unsigned int uds_disallow_chdir:1; }; /*
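To close the loop on the server side added by this patch, a daemon might be wired up roughly as in the sketch below. `ipc_server_run_async()`, `ipc_server_await()`, `ipc_server_free()`, `SIMPLE_IPC_QUIT`, and `struct ipc_server_opts` are from this patch; the exact `ipc_server_application_cb` signature lives in `simple-ipc.h` and is only inferred here from how `worker_thread__do_io()` invokes the callback, so treat the callback prototype, the helper names, and the thread count as assumptions rather than the canonical API.

	#include "cache.h"
	#include "simple-ipc.h"

	/*
	 * Hypothetical application callback (signature assumed from the
	 * worker-thread call site): echo the command back and keep
	 * running until a client sends "quit".
	 */
	static int echo_cb(void *data, const char *command,
			   ipc_server_reply_cb *reply_cb,
			   struct ipc_server_reply_data *reply_data)
	{
		if (!strcmp(command, "quit"))
			return SIMPLE_IPC_QUIT; /* ask the server to shut down */

		return reply_cb(reply_data, command, strlen(command));
	}

	static int run_echo_daemon(const char *socket_path)
	{
		struct ipc_server_data *server_data = NULL;
		struct ipc_server_opts opts = {
			.nr_threads = 4,
			.uds_disallow_chdir = 0,
		};

		if (ipc_server_run_async(&server_data, socket_path, &opts,
					 echo_cb, NULL))
			return -1;

		ipc_server_await(server_data); /* blocks until shutdown is requested */
		ipc_server_free(server_data);
		return 0;
	}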