From patchwork Fri Nov 11 23:16:26 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Evan Green X-Patchwork-Id: 13040917 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5BB29C4321E for ; Fri, 11 Nov 2022 23:19:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234374AbiKKXTn (ORCPT ); Fri, 11 Nov 2022 18:19:43 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46790 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234262AbiKKXTl (ORCPT ); Fri, 11 Nov 2022 18:19:41 -0500 Received: from mail-pl1-x62f.google.com (mail-pl1-x62f.google.com [IPv6:2607:f8b0:4864:20::62f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CE97A82912 for ; Fri, 11 Nov 2022 15:19:39 -0800 (PST) Received: by mail-pl1-x62f.google.com with SMTP id 4so5420346pli.0 for ; Fri, 11 Nov 2022 15:19:39 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Fr3KqrR46I1TnqFXZVSBV9KcBJWHB5lW47NuOMIe6Kk=; b=TV4EU4JIzkszLwQNfRm+P5TnhnItkPJOEjY4rAmAajz9txzwm6xt4+NYH2crlM6d97 0XN8wUcgTzH/Rmc4eYujoWf4Iz2lK+XbxAUKbPTbajP/3f3c+JFqX7hI0SUkOTQQCKNz DwkuLEpR4rFKGgke4f4YqBnhJMPZ/H5tKBR4Q= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=Fr3KqrR46I1TnqFXZVSBV9KcBJWHB5lW47NuOMIe6Kk=; b=PvsE+555nxs3jesRdmKVdxLNmpEJKHBEAS7Oy74QE+5HiCwa2UlVjuZZbNDJv6TCS1 cyQY98UUYmRwSWLCGaUjbc4/hZHrq+1SyGyzwN7dqxZdeYdVEq06X3vTG3Uqo3Otx0vI 7YsLLsivoCEUGcD0Inf7l+d/E2izLhO/M4u6CNkrr/NjDTJztUBMmzyh2col+MUqgvoz 1f4tiaB2F5miyZtzmsKPdNwAdkoansQuP0RxlKmqKZOK3KPX3Gv9J7P/p24nYOjttYjg 8N6H8PaA/6TXdK5pjaMH8hmlpThBXfXoouIpRlK4rjRqBkz+q1G4jibs23EPUbe8C4C7 ByPQ== X-Gm-Message-State: ANoB5plSwdnQfrg1Y3+/PI1AKuLNUDK/HboJM9WtRCYqdGg0W6zugOGU SM59rbUQx4YOH84m66+Zz7F+Kg== X-Google-Smtp-Source: AA0mqf7+t7S3vNxBH2mfuS1kAi4QpIdzaJ0IpVcNZ05WpJR5cgfA8ciMrm1rEjyzG+NvDRONRFDvWA== X-Received: by 2002:a17:902:9894:b0:188:9ae7:bb7d with SMTP id s20-20020a170902989400b001889ae7bb7dmr4568122plp.113.1668208779383; Fri, 11 Nov 2022 15:19:39 -0800 (PST) Received: from evgreen-glaptop.lan ([98.45.28.95]) by smtp.gmail.com with ESMTPSA id x128-20020a623186000000b0056da2ad6503sm2106900pfx.39.2022.11.11.15.19.37 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 11 Nov 2022 15:19:39 -0800 (PST) From: Evan Green To: linux-kernel@vger.kernel.org Cc: corbet@lwn.net, linux-integrity@vger.kernel.org, Eric Biggers , gwendal@chromium.org, dianders@chromium.org, apronin@chromium.org, Pavel Machek , Ben Boeckel , rjw@rjwysocki.net, jejb@linux.ibm.com, Kees Cook , dlunev@google.com, zohar@linux.ibm.com, Matthew Garrett , jarkko@kernel.org, linux-pm@vger.kernel.org, Evan Green , Matthew Garrett , Jason Gunthorpe , Peter Huewe , axelj Subject: [PATCH v5 01/11] tpm: Add support for in-kernel resetting of PCRs Date: Fri, 11 Nov 2022 15:16:26 -0800 Message-Id: <20221111151451.v5.1.I776854f47e3340cc2913ed4d8ecdd328048b73c3@changeid> X-Mailer: git-send-email 
2.38.1.431.g37b22c650d-goog In-Reply-To: <20221111231636.3748636-1-evgreen@chromium.org> References: <20221111231636.3748636-1-evgreen@chromium.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-integrity@vger.kernel.org Add an internal command for resetting a PCR. This will be used by the encrypted hibernation code to set PCR23 to a known value. The hibernation code will seal the hibernation key with a policy specifying PCR23 be set to this known value as a mechanism to ensure that the hibernation key is genuine. But to do this repeatedly, resetting the PCR is necessary as well. Link: https://lore.kernel.org/lkml/20210220013255.1083202-2-matthewgarrett@google.com/ Co-developed-by: Matthew Garrett Signed-off-by: Matthew Garrett Signed-off-by: Evan Green Reviewed-by: Jarkko Sakkinen --- Changes in v5: - Change to co-developed by Matthew (Kees) Changes in v4: - Open code tpm2_pcr_reset implementation in tpm-interface.c (Jarkko) - Rename interface symbol to tpm2_pcr_reset, fix kerneldocs (Jarkko) Changes in v3: - Unify tpm1/2_pcr_reset prototypes (Jarkko) - Wait no, remove the TPM1 stuff altogether (Jarkko) - Remove extra From tag and blank in commit msg (Jarkko). drivers/char/tpm/tpm-interface.c | 47 ++++++++++++++++++++++++++++++++ drivers/char/tpm/tpm2-cmd.c | 7 ----- include/linux/tpm.h | 14 ++++++++++ 3 files changed, 61 insertions(+), 7 deletions(-) diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c index 1621ce8187052c..886277b2654e3b 100644 --- a/drivers/char/tpm/tpm-interface.c +++ b/drivers/char/tpm/tpm-interface.c @@ -342,6 +342,53 @@ int tpm_pcr_extend(struct tpm_chip *chip, u32 pcr_idx, } EXPORT_SYMBOL_GPL(tpm_pcr_extend); +/** + * tpm2_pcr_reset - Reset the specified PCR + * @chip: A &struct tpm_chip instance, %NULL for the default chip + * @pcr_idx: The PCR to be reset + * + * Return: Same as with tpm_transmit_cmd(), or ENOTTY for TPM1 devices. 
+ */ +int tpm2_pcr_reset(struct tpm_chip *chip, u32 pcr_idx) +{ + struct tpm2_null_auth_area auth_area; + struct tpm_buf buf; + int rc; + + chip = tpm_find_get_ops(chip); + if (!chip) + return -ENODEV; + + if (!(chip->flags & TPM_CHIP_FLAG_TPM2)) { + rc = -ENOTTY; + goto out; + } + + rc = tpm_buf_init(&buf, TPM2_ST_SESSIONS, TPM2_CC_PCR_RESET); + if (rc) + goto out; + + tpm_buf_append_u32(&buf, pcr_idx); + + auth_area.handle = cpu_to_be32(TPM2_RS_PW); + auth_area.nonce_size = 0; + auth_area.attributes = 0; + auth_area.auth_size = 0; + + tpm_buf_append_u32(&buf, sizeof(struct tpm2_null_auth_area)); + tpm_buf_append(&buf, (const unsigned char *)&auth_area, + sizeof(auth_area)); + + rc = tpm_transmit_cmd(chip, &buf, 0, "attempting to reset a PCR"); + + tpm_buf_destroy(&buf); + +out: + tpm_put_ops(chip); + return rc; +} +EXPORT_SYMBOL_GPL(tpm2_pcr_reset); + /** * tpm_send - send a TPM command * @chip: a &struct tpm_chip instance, %NULL for the default chip diff --git a/drivers/char/tpm/tpm2-cmd.c b/drivers/char/tpm/tpm2-cmd.c index 65d03867e114c5..303ce2ea02a4b0 100644 --- a/drivers/char/tpm/tpm2-cmd.c +++ b/drivers/char/tpm/tpm2-cmd.c @@ -216,13 +216,6 @@ int tpm2_pcr_read(struct tpm_chip *chip, u32 pcr_idx, return rc; } -struct tpm2_null_auth_area { - __be32 handle; - __be16 nonce_size; - u8 attributes; - __be16 auth_size; -} __packed; - /** * tpm2_pcr_extend() - extend a PCR value * diff --git a/include/linux/tpm.h b/include/linux/tpm.h index dfeb25a0362dee..70134e6551745f 100644 --- a/include/linux/tpm.h +++ b/include/linux/tpm.h @@ -219,6 +219,7 @@ enum tpm2_command_codes { TPM2_CC_HIERARCHY_CONTROL = 0x0121, TPM2_CC_HIERARCHY_CHANGE_AUTH = 0x0129, TPM2_CC_CREATE_PRIMARY = 0x0131, + TPM2_CC_PCR_RESET = 0x013D, TPM2_CC_SEQUENCE_COMPLETE = 0x013E, TPM2_CC_SELF_TEST = 0x0143, TPM2_CC_STARTUP = 0x0144, @@ -293,6 +294,13 @@ struct tpm_header { }; } __packed; +struct tpm2_null_auth_area { + __be32 handle; + __be16 nonce_size; + u8 attributes; + __be16 auth_size; +} __packed; + /* A string buffer type for constructing TPM commands. This is based on the * ideas of string buffer code in security/keys/trusted.h but is heap based * in order to keep the stack usage minimal. 
@@ -423,6 +431,7 @@ extern ssize_t tpm_transmit_cmd(struct tpm_chip *chip, struct tpm_buf *buf, size_t min_rsp_body_length, const char *desc); extern int tpm_pcr_read(struct tpm_chip *chip, u32 pcr_idx, struct tpm_digest *digest); +extern int tpm2_pcr_reset(struct tpm_chip *chip, u32 pcr_idx); extern int tpm_pcr_extend(struct tpm_chip *chip, u32 pcr_idx, struct tpm_digest *digests); extern int tpm_send(struct tpm_chip *chip, void *cmd, size_t buflen); @@ -440,6 +449,11 @@ static inline int tpm_pcr_read(struct tpm_chip *chip, int pcr_idx, return -ENODEV; } +static inline int tpm2_pcr_reset(struct tpm_chip *chip, int pcr_idx) +{ + return -ENODEV; +} + static inline int tpm_pcr_extend(struct tpm_chip *chip, u32 pcr_idx, struct tpm_digest *digests) { From patchwork Fri Nov 11 23:16:27 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Evan Green X-Patchwork-Id: 13040918 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2D238C41535 for ; Fri, 11 Nov 2022 23:19:47 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234388AbiKKXTo (ORCPT ); Fri, 11 Nov 2022 18:19:44 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46780 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234228AbiKKXTn (ORCPT ); Fri, 11 Nov 2022 18:19:43 -0500 Received: from mail-pj1-x1034.google.com (mail-pj1-x1034.google.com [IPv6:2607:f8b0:4864:20::1034]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EA471637C for ; Fri, 11 Nov 2022 15:19:41 -0800 (PST) Received: by mail-pj1-x1034.google.com with SMTP id l22-20020a17090a3f1600b00212fbbcfb78so8957620pjc.3 for ; Fri, 11 Nov 2022 15:19:41 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=umKbLV2/EXZSPaYxETTXWSWJawSRXPaw3y+JZwX20lI=; b=caf3ZVx77TUd/TSgsUc0cN1uLK47M4itGZYebR27a13N/Z2JzTfqVYjOC9h+WirTco lKkEQRen0/mapYmtM8GXHzcEaTmhRrTCxcMpwUASU8hDhSzV2yDgUbQQTPnpDC1jYdWX lme/EKkQAtd03mv97kvvwNxMv0POgWVAGbpwA= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=umKbLV2/EXZSPaYxETTXWSWJawSRXPaw3y+JZwX20lI=; b=i/mC9AVgoxhvfC4PYeXxZZRARm24IihFmqxvYKPEtf2KNwpBV72SXPnG70AXvpVUOo dgAv+MzP+29IaB7Wq/eVl/g8ymKmkExzh3fpKiuSTNnopeP3lw3no12qDsoRoakf+/JN SBLRtpTv0tS0EXjlbe8tshDWt0hX3uc3xKwgrT2X8qtGvmAptn7yQhRv7XoFJEzh9DI9 6KMAHgBjoW8fq4X1yinmITfY39tZoHFv/QZ3GIA9NGG6GbYcmv/qloTa+5Hy8MrBtptx Eid3cmdU2CTp0Gs1q5lIcNEPcju9x6bZUM6l3Go57hmcUD1IbyZYc8V1jT4Wqb22R7NV 6QWA== X-Gm-Message-State: ANoB5pnJLKYNTGkvS/0RcY6pmqIj/PPpWayiuFyNBcC8ZgbmzSUtpv2z eaOxR52qPTGFG+g614zVugTgmQ== X-Google-Smtp-Source: AA0mqf7ewcvA0K1J4/tma0HbDwKrHuk9MedjIZ+JGbi/UiJngTEOp2O8W0m61z2UZVKPVkDgT7k7Aw== X-Received: by 2002:a17:902:f608:b0:17d:5e67:c523 with SMTP id n8-20020a170902f60800b0017d5e67c523mr4595938plg.115.1668208781497; Fri, 11 Nov 2022 15:19:41 -0800 (PST) Received: from evgreen-glaptop.lan ([98.45.28.95]) by smtp.gmail.com with ESMTPSA id 
x128-20020a623186000000b0056da2ad6503sm2106900pfx.39.2022.11.11.15.19.39 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 11 Nov 2022 15:19:41 -0800 (PST) From: Evan Green To: linux-kernel@vger.kernel.org Cc: corbet@lwn.net, linux-integrity@vger.kernel.org, Eric Biggers , gwendal@chromium.org, dianders@chromium.org, apronin@chromium.org, Pavel Machek , Ben Boeckel , rjw@rjwysocki.net, jejb@linux.ibm.com, Kees Cook , dlunev@google.com, zohar@linux.ibm.com, Matthew Garrett , jarkko@kernel.org, linux-pm@vger.kernel.org, Evan Green , Jason Gunthorpe , Peter Huewe Subject: [PATCH v5 02/11] tpm: Export and rename tpm2_find_and_validate_cc() Date: Fri, 11 Nov 2022 15:16:27 -0800 Message-Id: <20221111151451.v5.2.I7bbedcf5efd3f1c72c32d6002faed086c5ed31c7@changeid> X-Mailer: git-send-email 2.38.1.431.g37b22c650d-goog In-Reply-To: <20221111231636.3748636-1-evgreen@chromium.org> References: <20221111231636.3748636-1-evgreen@chromium.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-integrity@vger.kernel.org Export tpm_find_and_validate_cc() since it will be needed by an upcoming change allowing access to certain PCRs to be restricted to the kernel. In order to export it consistently, and because it's a tpm2-only function, rename it to tpm2_find_and_validate_cc(). Signed-off-by: Evan Green Reviewed-by: Kees Cook Acked-by: Jarkko Sakkinen --- (no changes since v3) Changes in v3: - Split find_and_validate_cc() export to its own patch (Jarkko) - Rename tpm_find_and_validate_cc() to tpm2_find_and_validate_cc(). drivers/char/tpm/tpm.h | 3 +++ drivers/char/tpm/tpm2-space.c | 8 ++++---- 2 files changed, 7 insertions(+), 4 deletions(-) diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h index 24ee4e1cc452a0..f1e0f490176f01 100644 --- a/drivers/char/tpm/tpm.h +++ b/drivers/char/tpm/tpm.h @@ -231,6 +231,9 @@ int tpm2_find_cc(struct tpm_chip *chip, u32 cc); int tpm2_init_space(struct tpm_space *space, unsigned int buf_size); void tpm2_del_space(struct tpm_chip *chip, struct tpm_space *space); void tpm2_flush_space(struct tpm_chip *chip); +int tpm2_find_and_validate_cc(struct tpm_chip *chip, + struct tpm_space *space, + const void *cmd, size_t len); int tpm2_prepare_space(struct tpm_chip *chip, struct tpm_space *space, u8 *cmd, size_t cmdsiz); int tpm2_commit_space(struct tpm_chip *chip, struct tpm_space *space, void *buf, diff --git a/drivers/char/tpm/tpm2-space.c b/drivers/char/tpm/tpm2-space.c index ffb35f0154c16c..ca34cc006e7f8d 100644 --- a/drivers/char/tpm/tpm2-space.c +++ b/drivers/char/tpm/tpm2-space.c @@ -262,9 +262,9 @@ static int tpm2_map_command(struct tpm_chip *chip, u32 cc, u8 *cmd) return 0; } -static int tpm_find_and_validate_cc(struct tpm_chip *chip, - struct tpm_space *space, - const void *cmd, size_t len) +int tpm2_find_and_validate_cc(struct tpm_chip *chip, + struct tpm_space *space, + const void *cmd, size_t len) { const struct tpm_header *header = (const void *)cmd; int i; @@ -306,7 +306,7 @@ int tpm2_prepare_space(struct tpm_chip *chip, struct tpm_space *space, u8 *cmd, if (!space) return 0; - cc = tpm_find_and_validate_cc(chip, space, cmd, cmdsiz); + cc = tpm2_find_and_validate_cc(chip, space, cmd, cmdsiz); if (cc < 0) return cc; From patchwork Fri Nov 11 23:16:28 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Evan Green X-Patchwork-Id: 13040919 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: 
from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id EC331C4332F for ; Fri, 11 Nov 2022 23:19:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234234AbiKKXTy (ORCPT ); Fri, 11 Nov 2022 18:19:54 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46830 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234320AbiKKXTp (ORCPT ); Fri, 11 Nov 2022 18:19:45 -0500 Received: from mail-pf1-x429.google.com (mail-pf1-x429.google.com [IPv6:2607:f8b0:4864:20::429]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 83AD27BE42 for ; Fri, 11 Nov 2022 15:19:44 -0800 (PST) Received: by mail-pf1-x429.google.com with SMTP id g62so6080440pfb.10 for ; Fri, 11 Nov 2022 15:19:44 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=GWKuj/eVGsjPGfeZMJPQpDngtMSO4dbOJNBV15jTv7M=; b=DG5hhRv0qxpc7vpB6DKVxAhYG849dFoxC467fVtkgj9mQ+rEHe+uxo7VDd5tqqgZ4B syFdacQkQxC7N0D+1/2pJVqQtIVQ3QUgQDLUjhIw2Gb4aOCjP3+C7voHAKPTHl2tVovY 3sclybeIlhlxTczEC23A8WU+cHFp2XaGHWAro= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=GWKuj/eVGsjPGfeZMJPQpDngtMSO4dbOJNBV15jTv7M=; b=rbSqZLeZuNRNRwkPXPKuCQkOrJc5USsGoXtOTynrFOZBRJm5rFFaxWBkXLT6qpDuz3 rkIloGssk5QnCFltPcRz6dKxmWkh0GDJhKDTv5XRUykM69zMwryJD0Chok4JrYW3H1j2 wqnBNd2DBNrhlvIvUGf0FwIXTHqdyJGOBzz8xxcg8EkmLvFOjRKoee4krlh3ucGWaX8D 7d7s3kvaol7HmsRBk99gU8B0JPnW6hT4P/uWAPDkXd1pgBK0MNq4KfoYOF0EP41MCyly 6RshacMC2/F7YkdkfTWOCnrmT0Og3AV4eHcmfvADSvx7OfS+5gIKOo/CEX33vK9Vqhaa 5o9w== X-Gm-Message-State: ANoB5pl5a6HN8w0iyjWg/DmbcKl49PH/dPusAHJtZhpXwLCjPHQYyYHN sVaakIgd/KxvzE2Lug/UPJp4DA== X-Google-Smtp-Source: AA0mqf4apev+7yemozCqhWY2HYZ232GudZiX3kIrKOTN5nVHHyyj9U9lRgP09L/K5/0G3oTIvRNplA== X-Received: by 2002:a63:ce56:0:b0:457:e41:c767 with SMTP id r22-20020a63ce56000000b004570e41c767mr3478159pgi.244.1668208783934; Fri, 11 Nov 2022 15:19:43 -0800 (PST) Received: from evgreen-glaptop.lan ([98.45.28.95]) by smtp.gmail.com with ESMTPSA id x128-20020a623186000000b0056da2ad6503sm2106900pfx.39.2022.11.11.15.19.41 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 11 Nov 2022 15:19:43 -0800 (PST) From: Evan Green To: linux-kernel@vger.kernel.org Cc: corbet@lwn.net, linux-integrity@vger.kernel.org, Eric Biggers , gwendal@chromium.org, dianders@chromium.org, apronin@chromium.org, Pavel Machek , Ben Boeckel , rjw@rjwysocki.net, jejb@linux.ibm.com, Kees Cook , dlunev@google.com, zohar@linux.ibm.com, Matthew Garrett , jarkko@kernel.org, linux-pm@vger.kernel.org, Evan Green , Matthew Garrett , Jason Gunthorpe , Peter Huewe Subject: [PATCH v5 03/11] tpm: Allow PCR 23 to be restricted to kernel-only use Date: Fri, 11 Nov 2022 15:16:28 -0800 Message-Id: <20221111151451.v5.3.I9ded8c8caad27403e9284dfc78ad6cbd845bc98d@changeid> X-Mailer: git-send-email 2.38.1.431.g37b22c650d-goog In-Reply-To: <20221111231636.3748636-1-evgreen@chromium.org> References: <20221111231636.3748636-1-evgreen@chromium.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-integrity@vger.kernel.org Introduce a new Kconfig, TCG_TPM_RESTRICT_PCR, which if enabled restricts usermode's ability to 
extend or reset PCR 23. Under certain circumstances it might be desirable to enable the creation of TPM-backed secrets that are only accessible to the kernel. In an ideal world this could be achieved by using TPM localities, but these don't appear to be available on consumer systems. An alternative is to simply block userland from modifying one of the resettable PCRs, leaving it available to the kernel. If the kernel ensures that no userland can access the TPM while it is carrying out work, it can reset PCR 23, extend it to an arbitrary value, create or load a secret, and then reset the PCR again. Even if userland somehow obtains the sealed material, it will be unable to unseal it since PCR 23 will never be in the appropriate state. This Kconfig is only properly supported for systems with TPM2 devices. For systems with TPM1 devices, having this Kconfig enabled completely restricts usermode's access to the TPM. TPM1 contains support for tunnelled transports, which usermode could use to smuggle commands through that this Kconfig is attempting to restrict. Link: https://lore.kernel.org/lkml/20210220013255.1083202-3-matthewgarrett@google.com/ Co-developed-by: Matthew Garrett Signed-off-by: Matthew Garrett Signed-off-by: Evan Green --- Changes in v5: - Change tags on RESTRICT_PCR patch (Kees) - Rename to TCG_TPM2_RESTRICT_PCR - Do nothing on TPM1.2 devices (Jarkko, Doug) Changes in v4: - Augment the commit message (Jarkko) Changes in v3: - Fix up commit message (Jarkko) - tpm2_find_and_validate_cc() was split (Jarkko) - Simply fully restrict TPM1 since v2 failed to account for tunnelled transport sessions (Stefan and Jarkko). Changes in v2: - Fixed sparse warnings drivers/char/tpm/Kconfig | 12 ++++++++++++ drivers/char/tpm/tpm-dev-common.c | 6 ++++++ drivers/char/tpm/tpm.h | 12 ++++++++++++ drivers/char/tpm/tpm2-cmd.c | 22 ++++++++++++++++++++++ 4 files changed, 52 insertions(+) diff --git a/drivers/char/tpm/Kconfig b/drivers/char/tpm/Kconfig index 927088b2c3d3f2..e6d3aa9f6c694f 100644 --- a/drivers/char/tpm/Kconfig +++ b/drivers/char/tpm/Kconfig @@ -211,4 +211,16 @@ config TCG_FTPM_TEE This driver proxies for firmware TPM running in TEE. source "drivers/char/tpm/st33zp24/Kconfig" + +config TCG_TPM2_RESTRICT_PCR + bool "Restrict userland access to PCR 23 on TPM2 devices" + depends on TCG_TPM + help + If set, block userland from extending or resetting PCR 23 on TPM2.0 + and later systems. This allows the PCR to be restricted to in-kernel + use, preventing userland from being able to make use of data sealed to + the TPM by the kernel. This is required for secure hibernation + support, but should be left disabled if any userland may require + access to PCR23. This is a TPM2-only feature, enabling this on a TPM1 + machine is effectively a no-op. endif # TCG_TPM diff --git a/drivers/char/tpm/tpm-dev-common.c b/drivers/char/tpm/tpm-dev-common.c index dc4c0a0a512903..66d15a2a967443 100644 --- a/drivers/char/tpm/tpm-dev-common.c +++ b/drivers/char/tpm/tpm-dev-common.c @@ -198,6 +198,12 @@ ssize_t tpm_common_write(struct file *file, const char __user *buf, priv->response_read = false; *off = 0; + if (priv->chip->flags & TPM_CHIP_FLAG_TPM2) { + ret = tpm2_cmd_restricted(priv->chip, priv->data_buffer, size); + if (ret) + goto out; + } + /* * If in nonblocking mode schedule an async job to send * the command return the size. 
diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h index f1e0f490176f01..7fb746d210f59d 100644 --- a/drivers/char/tpm/tpm.h +++ b/drivers/char/tpm/tpm.h @@ -245,4 +245,16 @@ void tpm_bios_log_setup(struct tpm_chip *chip); void tpm_bios_log_teardown(struct tpm_chip *chip); int tpm_dev_common_init(void); void tpm_dev_common_exit(void); + +#ifdef CONFIG_TCG_TPM2_RESTRICT_PCR +#define TPM_RESTRICTED_PCR 23 + +int tpm2_cmd_restricted(struct tpm_chip *chip, u8 *buffer, size_t size); +#else +static inline int tpm2_cmd_restricted(struct tpm_chip *chip, u8 *buffer, + size_t size) +{ + return 0; +} +#endif #endif diff --git a/drivers/char/tpm/tpm2-cmd.c b/drivers/char/tpm/tpm2-cmd.c index 303ce2ea02a4b0..3bc5546fddc792 100644 --- a/drivers/char/tpm/tpm2-cmd.c +++ b/drivers/char/tpm/tpm2-cmd.c @@ -778,3 +778,25 @@ int tpm2_find_cc(struct tpm_chip *chip, u32 cc) return -1; } + +#ifdef CONFIG_TCG_TPM2_RESTRICT_PCR +int tpm2_cmd_restricted(struct tpm_chip *chip, u8 *buffer, size_t size) +{ + int cc = tpm2_find_and_validate_cc(chip, NULL, buffer, size); + __be32 *handle; + + switch (cc) { + case TPM2_CC_PCR_EXTEND: + case TPM2_CC_PCR_RESET: + if (size < (TPM_HEADER_SIZE + sizeof(u32))) + return -EINVAL; + + handle = (__be32 *)&buffer[TPM_HEADER_SIZE]; + if (be32_to_cpu(*handle) == TPM_RESTRICTED_PCR) + return -EPERM; + break; + } + + return 0; +} +#endif From patchwork Fri Nov 11 23:16:29 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Evan Green X-Patchwork-Id: 13040920 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3B77AC4332F for ; Fri, 11 Nov 2022 23:20:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234527AbiKKXT4 (ORCPT ); Fri, 11 Nov 2022 18:19:56 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47050 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234454AbiKKXTy (ORCPT ); Fri, 11 Nov 2022 18:19:54 -0500 Received: from mail-pg1-x529.google.com (mail-pg1-x529.google.com [IPv6:2607:f8b0:4864:20::529]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A23D87F556 for ; Fri, 11 Nov 2022 15:19:46 -0800 (PST) Received: by mail-pg1-x529.google.com with SMTP id v3so5512093pgh.4 for ; Fri, 11 Nov 2022 15:19:46 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=f4s2Ct5WtJjQcbMD76vUE/8+7SYUprA4PJX2Fsa+8iU=; b=GZ6Zbh1Vs7Mw1owlByqesif0CGwkiTTLoblLh+3Z+shfK2fButl+p+WzQ+ZuGJS9yu CtWpS+KWD2GAMSHN3h9rSuXHypIj5ETF02SjLU9yqSry6mi/deAJywx3Mbn9MejNeUvN HQAeTbh9jX0gbJZGG5RUxpk6dJb2rUYPB8knE= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=f4s2Ct5WtJjQcbMD76vUE/8+7SYUprA4PJX2Fsa+8iU=; b=arFnrBaBNuIEHMk3yC2l2Yo8jz3JDZp/6gIeskyfjgnjaMw/F/KMkVvWP713XtWaoH 2WCclH99bxC33M0igrzsnl8ZNoposdQp4hgqUc/Bt6qYvE1xgfBOOxM9RzMkGSrAwIdq rCyquzQvfaiUkX2iPCg8n3Jz+c6eeDRGCTnfOtG7kdDV8f852lmyXdtlVfZOZxpa/BkF LNvvZYLubfeLhF71TtgbtK/INPmwPznJIlrcf6jOKT2j6zB8M8Je7I7S7bAA9ow6NxCb 
9ULAEAQ3I5TYKAhrEu/RdwgrmZYnyd3WgozzeDdAY909C1lP6OR6qVamwbi9fUSSmg58 hmVA== X-Gm-Message-State: ANoB5pleeWQXV2FwZb4iDh9Cff3h1pA7zYW7HhgqlCvjRS05FTQZB9/0 /avs+Y/1A0Dy+d34DRedYO2C4Q== X-Google-Smtp-Source: AA0mqf6t3ZA/Jow57kBc6udqyFPJl1GfniBVdpmvbxL+CrjkZl/mljwifHSPTUqxGcu5K9S5OSiI2Q== X-Received: by 2002:a63:5603:0:b0:46f:1e8e:dadc with SMTP id k3-20020a635603000000b0046f1e8edadcmr3412095pgb.561.1668208786288; Fri, 11 Nov 2022 15:19:46 -0800 (PST) Received: from evgreen-glaptop.lan ([98.45.28.95]) by smtp.gmail.com with ESMTPSA id x128-20020a623186000000b0056da2ad6503sm2106900pfx.39.2022.11.11.15.19.44 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 11 Nov 2022 15:19:46 -0800 (PST) From: Evan Green To: linux-kernel@vger.kernel.org Cc: corbet@lwn.net, linux-integrity@vger.kernel.org, Eric Biggers , gwendal@chromium.org, dianders@chromium.org, apronin@chromium.org, Pavel Machek , Ben Boeckel , rjw@rjwysocki.net, jejb@linux.ibm.com, Kees Cook , dlunev@google.com, zohar@linux.ibm.com, Matthew Garrett , jarkko@kernel.org, linux-pm@vger.kernel.org, Evan Green , David Howells , James Morris , Paul Moore , "Serge E. Hallyn" , keyrings@vger.kernel.org, linux-security-module@vger.kernel.org Subject: [PATCH v5 04/11] security: keys: trusted: Include TPM2 creation data Date: Fri, 11 Nov 2022 15:16:29 -0800 Message-Id: <20221111151451.v5.4.Ieb1215f598bc9df56b0e29e5977eae4fcca25e15@changeid> X-Mailer: git-send-email 2.38.1.431.g37b22c650d-goog In-Reply-To: <20221111231636.3748636-1-evgreen@chromium.org> References: <20221111231636.3748636-1-evgreen@chromium.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-integrity@vger.kernel.org In addition to the private key and public key, the TPM2_Create command may also return creation data, a creation hash, and a creation ticket. These fields allow the TPM to attest to the contents of a specified set of PCRs at the time the trusted key was created. Encrypted hibernation will use this to ensure that PCRs settable only by the kernel were set properly at the time of creation, indicating this is an authentic hibernate key. Encode these additional parameters into the ASN.1 created to represent the key blob. The new fields are made optional so that they don't bloat key blobs which don't need them, and to ensure interoperability with old blobs. Signed-off-by: Evan Green --- Changes in v5: - Factored some math out to a helper function (Kees) - Constified src in tpm2_key_encode(). Changes in v3: - Fix SoB and -- note ordering (Kees) - Add comments describing the TPM2 spec type names for the new fields in tpm2key.asn1 (Kees) - Add len buffer checks in tpm2_key_encode() (Kees) This is a replacement for Matthew's original patch here: https://patchwork.kernel.org/patch/12096489/ That patch was written before the exported key format was switched to ASN.1. This patch accomplishes the same thing (saving, loading, and getting pointers to the creation data) while utilizing the new ASN.1 format. 
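For illustration, the TPM2_Create response that tpm2_key_encode() now parses carries its output parameters as consecutive fields, each prefixed with a 2-byte big-endian length: TPM2B_PRIVATE, TPM2B_PUBLIC, TPM2B_CREATION_DATA, the creation hash (TPM2B_DIGEST), and the creation ticket (TPMT_TK_CREATION, a tag and handle followed by a TPM2B_DIGEST). A minimal standalone userspace sketch of that size-prefix walking pattern follows; the helper name and demo buffer are hypothetical and are not code from this patch.

    /*
     * Illustrative sketch: walk consecutive TPM2B-style fields, each a
     * 2-byte big-endian length followed by that many bytes of body.
     */
    #include <stdint.h>
    #include <stdio.h>

    /* Return a pointer past the sized field, or NULL if it overruns. */
    static const uint8_t *skip_sized(const uint8_t *p, const uint8_t *end,
                                     uint32_t *len)
    {
        if (end - p < 2)
            return NULL;
        *len = (((uint32_t)p[0] << 8) | p[1]) + 2;  /* include the prefix */
        if ((uint32_t)(end - p) < *len)
            return NULL;
        return p + *len;
    }

    int main(void)
    {
        /* Hypothetical blob: two sized fields with 3 and 2 body bytes. */
        const uint8_t blob[] = { 0x00, 0x03, 0xaa, 0xbb, 0xcc,
                                 0x00, 0x02, 0x11, 0x22 };
        const uint8_t *end = blob + sizeof(blob);
        const uint8_t *p = blob;
        uint32_t len;

        while (p < end && (p = skip_sized(p, end, &len)) != NULL)
            printf("field of %u bytes (prefix included)\n", (unsigned)len);
        return 0;
    }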
--- include/keys/trusted-type.h | 8 + security/keys/trusted-keys/tpm2key.asn1 | 15 +- security/keys/trusted-keys/trusted_tpm2.c | 253 +++++++++++++++++++--- 3 files changed, 245 insertions(+), 31 deletions(-) diff --git a/include/keys/trusted-type.h b/include/keys/trusted-type.h index 4eb64548a74f1a..209086fed240a5 100644 --- a/include/keys/trusted-type.h +++ b/include/keys/trusted-type.h @@ -22,15 +22,23 @@ #define MAX_BLOB_SIZE 512 #define MAX_PCRINFO_SIZE 64 #define MAX_DIGEST_SIZE 64 +#define MAX_CREATION_DATA 412 +#define MAX_TK 76 struct trusted_key_payload { struct rcu_head rcu; unsigned int key_len; unsigned int blob_len; + unsigned int creation_len; + unsigned int creation_hash_len; + unsigned int tk_len; unsigned char migratable; unsigned char old_format; unsigned char key[MAX_KEY_SIZE + 1]; unsigned char blob[MAX_BLOB_SIZE]; + unsigned char *creation; + unsigned char *creation_hash; + unsigned char *tk; }; struct trusted_key_options { diff --git a/security/keys/trusted-keys/tpm2key.asn1 b/security/keys/trusted-keys/tpm2key.asn1 index f57f869ad60068..608f8d9ca95fa8 100644 --- a/security/keys/trusted-keys/tpm2key.asn1 +++ b/security/keys/trusted-keys/tpm2key.asn1 @@ -7,5 +7,18 @@ TPMKey ::= SEQUENCE { emptyAuth [0] EXPLICIT BOOLEAN OPTIONAL, parent INTEGER ({tpm2_key_parent}), pubkey OCTET STRING ({tpm2_key_pub}), - privkey OCTET STRING ({tpm2_key_priv}) + privkey OCTET STRING ({tpm2_key_priv}), + --- + --- A TPM2B_CREATION_DATA struct as returned from the TPM2_Create command. + --- + creationData [1] EXPLICIT OCTET STRING OPTIONAL ({tpm2_key_creation_data}), + --- + --- A TPM2B_DIGEST of the creationHash as returned from the TPM2_Create + --- command. + --- + creationHash [2] EXPLICIT OCTET STRING OPTIONAL ({tpm2_key_creation_hash}), + --- + --- A TPMT_TK_CREATION ticket as returned from the TPM2_Create command. + --- + creationTk [3] EXPLICIT OCTET STRING OPTIONAL ({tpm2_key_creation_tk}) } diff --git a/security/keys/trusted-keys/trusted_tpm2.c b/security/keys/trusted-keys/trusted_tpm2.c index 2b2c8eb258d5bd..ff2aede8986236 100644 --- a/security/keys/trusted-keys/trusted_tpm2.c +++ b/security/keys/trusted-keys/trusted_tpm2.c @@ -28,24 +28,86 @@ static struct tpm2_hash tpm2_hash_map[] = { static u32 tpm2key_oid[] = { 2, 23, 133, 10, 1, 5 }; +/* Helper function to advance past a __be16 length + buffer safely */ +static const u8 *get_sized_section(const u8 *src, const u8 *end, u16 *len) +{ + u32 length; + + if (src + sizeof(u16) > end) + return NULL; + + /* Include the size field in the returned section length. 
*/ + length = get_unaligned_be16(src) + sizeof(u16); + *len = length; + if (*len != length) + return NULL; + + src += *len; + if (src > end) + return NULL; + + return src; +} + static int tpm2_key_encode(struct trusted_key_payload *payload, struct trusted_key_options *options, - u8 *src, u32 len) + const u8 *src, u32 len) { const int SCRATCH_SIZE = PAGE_SIZE; + const u8 *end = src + len; u8 *scratch = kmalloc(SCRATCH_SIZE, GFP_KERNEL); u8 *work = scratch, *work1; u8 *end_work = scratch + SCRATCH_SIZE; - u8 *priv, *pub; + const u8 *priv, *pub; + const u8 *creation_data = NULL, *creation_hash = NULL, *creation_tk = NULL; + u16 creation_data_len, creation_hash_len = 0, creation_tk_len = 0; u16 priv_len, pub_len; + int rc; - priv_len = get_unaligned_be16(src) + 2; priv = src; + src = get_sized_section(src, end, &priv_len); + if (!src) + return -EINVAL; - src += priv_len; - - pub_len = get_unaligned_be16(src) + 2; pub = src; + src = get_sized_section(src, end, &pub_len); + if (!src) + return -EINVAL; + + creation_data = src; + src = get_sized_section(src, end, &creation_data_len); + if (!src) + return -EINVAL; + + /* + * If the creation data has content, pull out the creation hash and + * ticket as well. Otherwise pretend it doesn't exist. + */ + if (creation_data_len > sizeof(u16)) { + creation_hash = src; + src = get_sized_section(src, end, &creation_hash_len); + if (!src) + return -EINVAL; + + /* + * The creation ticket (TPMT_TK_CREATION) consists of a 2 byte + * tag, 4 byte handle, and then a TPM2B_DIGEST, which is a 2 + * byte length followed by data. + */ + if (src + 8 > end) + return -EINVAL; + + creation_tk = src; + src = get_sized_section(src + 6, end, &creation_tk_len); + if (!src) + return -EINVAL; + + creation_tk_len += 6; + + } else { + creation_data_len = 0; + creation_data = NULL; + } if (!scratch) return -ENOMEM; @@ -63,26 +125,81 @@ static int tpm2_key_encode(struct trusted_key_payload *payload, } /* - * Assume both octet strings will encode to a 2 byte definite length + * Assume each octet string will encode to a 2 byte definite length. + * Each optional octet string consumes one extra byte. 
* - * Note: For a well behaved TPM, this warning should never - * trigger, so if it does there's something nefarious going on + * Note: For a well behaved TPM, this warning should never trigger, so + * if it does there's something nefarious going on */ - if (WARN(work - scratch + pub_len + priv_len + 14 > SCRATCH_SIZE, - "BUG: scratch buffer is too small")) - return -EINVAL; + if (WARN(work - scratch + pub_len + priv_len + creation_data_len + + creation_hash_len + creation_tk_len + (7 * 5) + 3 > + SCRATCH_SIZE, + "BUG: scratch buffer is too small")) { + rc = -EINVAL; + goto err; + } work = asn1_encode_integer(work, end_work, options->keyhandle); work = asn1_encode_octet_string(work, end_work, pub, pub_len); work = asn1_encode_octet_string(work, end_work, priv, priv_len); + if (creation_data_len) { + u8 *scratch2 = kmalloc(SCRATCH_SIZE, GFP_KERNEL); + u8 *work2; + u8 *end_work2 = scratch2 + SCRATCH_SIZE; + + if (!scratch2) { + rc = -ENOMEM; + goto err; + } + + work2 = asn1_encode_octet_string(scratch2, + end_work2, + creation_data, + creation_data_len); + + work = asn1_encode_tag(work, + end_work, + 1, + scratch2, + work2 - scratch2); + + work2 = asn1_encode_octet_string(scratch2, + end_work2, + creation_hash, + creation_hash_len); + + work = asn1_encode_tag(work, + end_work, + 2, + scratch2, + work2 - scratch2); + + work2 = asn1_encode_octet_string(scratch2, + end_work2, + creation_tk, + creation_tk_len); + + work = asn1_encode_tag(work, + end_work, + 3, + scratch2, + work2 - scratch2); + + kfree(scratch2); + } work1 = payload->blob; work1 = asn1_encode_sequence(work1, work1 + sizeof(payload->blob), scratch, work - scratch); - if (WARN(IS_ERR(work1), "BUG: ASN.1 encoder failed")) - return PTR_ERR(work1); + if (WARN(IS_ERR(work1), "BUG: ASN.1 encoder failed")) { + rc = PTR_ERR(work1); + goto err; + } return work1 - payload->blob; +err: + kfree(scratch); + return rc; } struct tpm2_key_context { @@ -91,15 +208,21 @@ struct tpm2_key_context { u32 pub_len; const u8 *priv; u32 priv_len; + const u8 *creation_data; + u32 creation_data_len; + const u8 *creation_hash; + u32 creation_hash_len; + const u8 *creation_tk; + u32 creation_tk_len; }; static int tpm2_key_decode(struct trusted_key_payload *payload, - struct trusted_key_options *options, - u8 **buf) + struct trusted_key_options *options) { + u64 data_len; int ret; struct tpm2_key_context ctx; - u8 *blob; + u8 *blob, *buf; memset(&ctx, 0, sizeof(ctx)); @@ -108,21 +231,57 @@ static int tpm2_key_decode(struct trusted_key_payload *payload, if (ret < 0) return ret; - if (ctx.priv_len + ctx.pub_len > MAX_BLOB_SIZE) + data_len = ctx.priv_len + ctx.pub_len + ctx.creation_data_len + + ctx.creation_hash_len + ctx.creation_tk_len; + + if (data_len > MAX_BLOB_SIZE) return -EINVAL; - blob = kmalloc(ctx.priv_len + ctx.pub_len + 4, GFP_KERNEL); - if (!blob) + buf = kmalloc(data_len + 4, GFP_KERNEL); + if (!buf) return -ENOMEM; - *buf = blob; + blob = buf; options->keyhandle = ctx.parent; memcpy(blob, ctx.priv, ctx.priv_len); blob += ctx.priv_len; memcpy(blob, ctx.pub, ctx.pub_len); + blob += ctx.pub_len; + if (ctx.creation_data_len) { + memcpy(blob, ctx.creation_data, ctx.creation_data_len); + blob += ctx.creation_data_len; + } + if (ctx.creation_hash_len) { + memcpy(blob, ctx.creation_hash, ctx.creation_hash_len); + blob += ctx.creation_hash_len; + } + + if (ctx.creation_tk_len) { + memcpy(blob, ctx.creation_tk, ctx.creation_tk_len); + blob += ctx.creation_tk_len; + } + + /* + * Copy the buffer back into the payload blob since the creation + * info will be 
used after loading. + */ + payload->blob_len = blob - buf; + memcpy(payload->blob, buf, payload->blob_len); + if (ctx.creation_data_len) { + payload->creation = payload->blob + ctx.priv_len + ctx.pub_len; + payload->creation_len = ctx.creation_data_len; + payload->creation_hash = payload->creation + ctx.creation_data_len; + payload->creation_hash_len = ctx.creation_hash_len; + payload->tk = payload->creation_hash + + payload->creation_hash_len; + + payload->tk_len = ctx.creation_tk_len; + } + + kfree(buf); return 0; } @@ -185,6 +344,42 @@ int tpm2_key_priv(void *context, size_t hdrlen, return 0; } +int tpm2_key_creation_data(void *context, size_t hdrlen, + unsigned char tag, + const void *value, size_t vlen) +{ + struct tpm2_key_context *ctx = context; + + ctx->creation_data = value; + ctx->creation_data_len = vlen; + + return 0; +} + +int tpm2_key_creation_hash(void *context, size_t hdrlen, + unsigned char tag, + const void *value, size_t vlen) +{ + struct tpm2_key_context *ctx = context; + + ctx->creation_hash = value; + ctx->creation_hash_len = vlen; + + return 0; +} + +int tpm2_key_creation_tk(void *context, size_t hdrlen, + unsigned char tag, + const void *value, size_t vlen) +{ + struct tpm2_key_context *ctx = context; + + ctx->creation_tk = value; + ctx->creation_tk_len = vlen; + + return 0; +} + /** * tpm_buf_append_auth() - append TPMS_AUTH_COMMAND to the buffer. * @@ -229,6 +424,7 @@ int tpm2_seal_trusted(struct tpm_chip *chip, struct trusted_key_options *options) { int blob_len = 0; + unsigned int offset; struct tpm_buf buf; u32 hash; u32 flags; @@ -317,13 +513,14 @@ int tpm2_seal_trusted(struct tpm_chip *chip, rc = -E2BIG; goto out; } - if (tpm_buf_length(&buf) < TPM_HEADER_SIZE + 4 + blob_len) { + offset = TPM_HEADER_SIZE + 4; + if (tpm_buf_length(&buf) < offset + blob_len) { rc = -EFAULT; goto out; } blob_len = tpm2_key_encode(payload, options, - &buf.data[TPM_HEADER_SIZE + 4], + &buf.data[offset], blob_len); out: @@ -370,13 +567,11 @@ static int tpm2_load_cmd(struct tpm_chip *chip, int rc; u32 attrs; - rc = tpm2_key_decode(payload, options, &blob); - if (rc) { - /* old form */ - blob = payload->blob; + rc = tpm2_key_decode(payload, options); + if (rc) payload->old_format = 1; - } + blob = payload->blob; /* new format carries keyhandle but old format doesn't */ if (!options->keyhandle) return -EINVAL; @@ -433,8 +628,6 @@ static int tpm2_load_cmd(struct tpm_chip *chip, (__be32 *) &buf.data[TPM_HEADER_SIZE]); out: - if (blob != payload->blob) - kfree(blob); tpm_buf_destroy(&buf); if (rc > 0) From patchwork Fri Nov 11 23:16:30 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Evan Green X-Patchwork-Id: 13040921 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5E185C43217 for ; Fri, 11 Nov 2022 23:20:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234595AbiKKXUO (ORCPT ); Fri, 11 Nov 2022 18:20:14 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47050 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234490AbiKKXTz (ORCPT ); Fri, 11 Nov 2022 18:19:55 -0500 Received: from mail-pl1-x635.google.com (mail-pl1-x635.google.com [IPv6:2607:f8b0:4864:20::635]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5710982935 for ; Fri, 11 Nov 2022 
15:19:49 -0800 (PST) Received: by mail-pl1-x635.google.com with SMTP id v17so5394110plo.1 for ; Fri, 11 Nov 2022 15:19:49 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=0i0QEQiN8N6kWJaFnnOPreeJ6mOVw1qdLR4E8EZb574=; b=j0sOuXL3t/5MxHh4ziRJcxsV6WuCyyRXKIUnXkmE9l2HHtMnM2zwez5AC0YhPLvPKr lfjYYzI0IOdaR0Tk+uk0AfLK/Rh1SvJxA29G2dXPRsKFpblpjDjAMSlYidKoY8TIM7vQ 9vu8va++By1TIkdKup+QjSA9y0T72AksG31ws= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=0i0QEQiN8N6kWJaFnnOPreeJ6mOVw1qdLR4E8EZb574=; b=rwFOwW8laKRqkaHnNpzq6RXsuIH5TPDPngRdr4EjAxcu3H90GdmDgolG3xcfXvMilP xak1qKCNAv/D52IFpioqqgDrD6zPlf60a/weghIZ3ZamUwTifOPqFfWsHmT0pOTgn1lJ qPsA6GpBZLpZF4iD56JgSKaJ52pc1Zm8XGOfl5/O5QSLhowf8MwW7hIh47dkZz08APLc HFo4oQKajZSybMwX1wSh38xWWxPC7XVxgmw8+WIwl//4AQZWGPZfJUVomGlpcyDXRpFt 4eD9FpP8HX1/Wz6sskkLFRJM8soswM/XzbQ7YAgV7xEyZ/I5XT58hpE+h6XPZonhc/DY pOgA== X-Gm-Message-State: ANoB5pl8kFH7/p5gJ6ApHpgH0d65XvlBLKDY56xfSiLWv51ewe9J0QzP Px+J86UjchUuUSjCOW+XRhojSQkUWLTh1rQ2 X-Google-Smtp-Source: AA0mqf6KdQhIsONTKfxNTXU4F9A2vRgqSgfniE81X2SnNX8zVkszwxquPC+K1ejg5yLANg8SEckDyw== X-Received: by 2002:a17:90b:1891:b0:210:4438:2d40 with SMTP id mn17-20020a17090b189100b0021044382d40mr4125013pjb.196.1668208788735; Fri, 11 Nov 2022 15:19:48 -0800 (PST) Received: from evgreen-glaptop.lan ([98.45.28.95]) by smtp.gmail.com with ESMTPSA id x128-20020a623186000000b0056da2ad6503sm2106900pfx.39.2022.11.11.15.19.46 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 11 Nov 2022 15:19:48 -0800 (PST) From: Evan Green To: linux-kernel@vger.kernel.org Cc: corbet@lwn.net, linux-integrity@vger.kernel.org, Eric Biggers , gwendal@chromium.org, dianders@chromium.org, apronin@chromium.org, Pavel Machek , Ben Boeckel , rjw@rjwysocki.net, jejb@linux.ibm.com, Kees Cook , dlunev@google.com, zohar@linux.ibm.com, Matthew Garrett , jarkko@kernel.org, linux-pm@vger.kernel.org, Matthew Garrett , Evan Green , Ben Boeckel , David Howells , James Morris , Paul Moore , "Serge E. Hallyn" , keyrings@vger.kernel.org, linux-doc@vger.kernel.org, linux-security-module@vger.kernel.org Subject: [PATCH v5 05/11] security: keys: trusted: Allow storage of PCR values in creation data Date: Fri, 11 Nov 2022 15:16:30 -0800 Message-Id: <20221111151451.v5.5.I32591db064b6cdc91850d777f363c9d05c985b39@changeid> X-Mailer: git-send-email 2.38.1.431.g37b22c650d-goog In-Reply-To: <20221111231636.3748636-1-evgreen@chromium.org> References: <20221111231636.3748636-1-evgreen@chromium.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-integrity@vger.kernel.org From: Matthew Garrett When TPMs generate keys, they can also generate some information describing the state of the PCRs at creation time. This data can then later be certified by the TPM, allowing verification of the PCR values. This allows us to determine the state of the system at the time a key was generated. Add an additional argument to the trusted key creation options, allowing the user to provide the set of PCRs that should have their values incorporated into the creation data. 
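For illustration: in the trusted_tpm2.c hunk below, the creationpcrs mask is encoded as a single-bank PCR selection with a 3-byte pcrSelect bitmap, where bit 0 of the mask selects PCR 0. The standalone sketch here mirrors that translation for a hypothetical mask of 0x800000 (PCR 23 only); it is an illustration, not code from the patch.

    /* Map a creationpcrs mask onto the 3-byte pcrSelect bitmap. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t creation_pcrs = 0x800000;      /* PCR 23 */
        uint8_t select[3] = { 0 };
        int i, j;

        for (i = 0; i < 3; i++)
            for (j = 0; j < 8; j++)
                if (creation_pcrs & (1u << (i * 8 + j)))
                    select[i] |= (uint8_t)(1u << j);

        /* Prints: pcrSelect = 00 00 80 */
        printf("pcrSelect = %02x %02x %02x\n",
               select[0], select[1], select[2]);
        return 0;
    }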
Link: https://lore.kernel.org/lkml/20210220013255.1083202-6-matthewgarrett@google.com/ Signed-off-by: Matthew Garrett Signed-off-by: Evan Green Reviewed-by: Ben Boeckel Reviewed-by: Kees Cook --- Changes in v5: - Make Matthew's tag match author Changes in v3: - Clarified creationpcrs documentation (Ben) .../security/keys/trusted-encrypted.rst | 6 +++++ include/keys/trusted-type.h | 1 + security/keys/trusted-keys/trusted_tpm1.c | 9 +++++++ security/keys/trusted-keys/trusted_tpm2.c | 25 +++++++++++++++++-- 4 files changed, 39 insertions(+), 2 deletions(-) diff --git a/Documentation/security/keys/trusted-encrypted.rst b/Documentation/security/keys/trusted-encrypted.rst index 9bc9db8ec6517c..a1872964fe862f 100644 --- a/Documentation/security/keys/trusted-encrypted.rst +++ b/Documentation/security/keys/trusted-encrypted.rst @@ -199,6 +199,12 @@ Usage:: policyhandle= handle to an authorization policy session that defines the same policy and with the same hash algorithm as was used to seal the key. + creationpcrs= hex integer representing the set of PCRs to be + included in the creation data. For each bit set, the + corresponding PCR will be included in the key creation + data. Bit 0 corresponds to PCR0. Currently only the first + PC standard 24 PCRs are supported on the currently active + bank. Leading zeroes are optional. TPM2 only. "keyctl print" returns an ascii hex copy of the sealed key, which is in standard TPM_STORED_DATA format. The key length for new keys are always in bytes. diff --git a/include/keys/trusted-type.h b/include/keys/trusted-type.h index 209086fed240a5..8523d41507b2a4 100644 --- a/include/keys/trusted-type.h +++ b/include/keys/trusted-type.h @@ -54,6 +54,7 @@ struct trusted_key_options { uint32_t policydigest_len; unsigned char policydigest[MAX_DIGEST_SIZE]; uint32_t policyhandle; + uint32_t creation_pcrs; }; struct trusted_key_ops { diff --git a/security/keys/trusted-keys/trusted_tpm1.c b/security/keys/trusted-keys/trusted_tpm1.c index aa108bea6739b3..2975827c01bec0 100644 --- a/security/keys/trusted-keys/trusted_tpm1.c +++ b/security/keys/trusted-keys/trusted_tpm1.c @@ -713,6 +713,7 @@ enum { Opt_hash, Opt_policydigest, Opt_policyhandle, + Opt_creationpcrs, }; static const match_table_t key_tokens = { @@ -725,6 +726,7 @@ static const match_table_t key_tokens = { {Opt_hash, "hash=%s"}, {Opt_policydigest, "policydigest=%s"}, {Opt_policyhandle, "policyhandle=%s"}, + {Opt_creationpcrs, "creationpcrs=%s"}, {Opt_err, NULL} }; @@ -858,6 +860,13 @@ static int getoptions(char *c, struct trusted_key_payload *pay, return -EINVAL; opt->policyhandle = handle; break; + case Opt_creationpcrs: + if (!tpm2) + return -EINVAL; + res = kstrtoint(args[0].from, 16, &opt->creation_pcrs); + if (res < 0) + return -EINVAL; + break; default: return -EINVAL; } diff --git a/security/keys/trusted-keys/trusted_tpm2.c b/security/keys/trusted-keys/trusted_tpm2.c index ff2aede8986236..3d84c3d41bdee1 100644 --- a/security/keys/trusted-keys/trusted_tpm2.c +++ b/security/keys/trusted-keys/trusted_tpm2.c @@ -428,7 +428,7 @@ int tpm2_seal_trusted(struct tpm_chip *chip, struct tpm_buf buf; u32 hash; u32 flags; - int i; + int i, j; int rc; for (i = 0; i < ARRAY_SIZE(tpm2_hash_map); i++) { @@ -497,7 +497,28 @@ int tpm2_seal_trusted(struct tpm_chip *chip, tpm_buf_append_u16(&buf, 0); /* creation PCR */ - tpm_buf_append_u32(&buf, 0); + if (options->creation_pcrs) { + /* One bank */ + tpm_buf_append_u32(&buf, 1); + /* Which bank to use */ + tpm_buf_append_u16(&buf, hash); + /* Length of the PCR bitmask */ + 
tpm_buf_append_u8(&buf, 3); + /* PCR bitmask */ + for (i = 0; i < 3; i++) { + char tmp = 0; + + for (j = 0; j < 8; j++) { + char bit = (i * 8) + j; + + if (options->creation_pcrs & (1 << bit)) + tmp |= (1 << j); + } + tpm_buf_append_u8(&buf, tmp); + } + } else { + tpm_buf_append_u32(&buf, 0); + } if (buf.flags & TPM_BUF_OVERFLOW) { rc = -E2BIG;
From patchwork Fri Nov 11 23:16:31 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Evan Green X-Patchwork-Id: 13040922
From: Evan Green To: linux-kernel@vger.kernel.org Cc: corbet@lwn.net, linux-integrity@vger.kernel.org, Eric Biggers , gwendal@chromium.org, dianders@chromium.org, apronin@chromium.org, Pavel Machek , Ben Boeckel , rjw@rjwysocki.net, jejb@linux.ibm.com, Kees Cook , dlunev@google.com, zohar@linux.ibm.com, Matthew Garrett ,

jarkko@kernel.org, linux-pm@vger.kernel.org, Evan Green , Matthew Garrett , David Howells , James Morris , Paul Moore , "Serge E. Hallyn" , axelj , keyrings@vger.kernel.org, linux-security-module@vger.kernel.org Subject: [PATCH v5 06/11] security: keys: trusted: Verify creation data Date: Fri, 11 Nov 2022 15:16:31 -0800 Message-Id: <20221111151451.v5.6.I6cdb522cb5ea28fcd1e35b4cd92cbd067f99269a@changeid> X-Mailer: git-send-email 2.38.1.431.g37b22c650d-goog In-Reply-To: <20221111231636.3748636-1-evgreen@chromium.org> References: <20221111231636.3748636-1-evgreen@chromium.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-integrity@vger.kernel.org If a loaded key contains creation data, ask the TPM to verify that creation data. This allows users like encrypted hibernate to know that the loaded and parsed creation data has not been tampered with. Suggested-by: Matthew Garrett Signed-off-by: Evan Green Reviewed-by: Kees Cook --- Source material for this change is at: https://patchwork.kernel.org/project/linux-pm/patch/20210220013255.1083202-9-matthewgarrett@google.com/ (no changes since v3) Changes in v3: - Changed funky tag to suggested-by (Kees). Matthew, holler if you want something different. Changes in v2: - Adjust hash len by 2 due to new ASN.1 storage, and add underflow check. include/linux/tpm.h | 1 + security/keys/trusted-keys/trusted_tpm2.c | 77 ++++++++++++++++++++++- 2 files changed, 77 insertions(+), 1 deletion(-) diff --git a/include/linux/tpm.h b/include/linux/tpm.h index 70134e6551745f..9c2ee3e30ffa5d 100644 --- a/include/linux/tpm.h +++ b/include/linux/tpm.h @@ -224,6 +224,7 @@ enum tpm2_command_codes { TPM2_CC_SELF_TEST = 0x0143, TPM2_CC_STARTUP = 0x0144, TPM2_CC_SHUTDOWN = 0x0145, + TPM2_CC_CERTIFYCREATION = 0x014A, TPM2_CC_NV_READ = 0x014E, TPM2_CC_CREATE = 0x0153, TPM2_CC_LOAD = 0x0157, diff --git a/security/keys/trusted-keys/trusted_tpm2.c b/security/keys/trusted-keys/trusted_tpm2.c index 3d84c3d41bdee1..402933f8c99ede 100644 --- a/security/keys/trusted-keys/trusted_tpm2.c +++ b/security/keys/trusted-keys/trusted_tpm2.c @@ -730,6 +730,74 @@ static int tpm2_unseal_cmd(struct tpm_chip *chip, return rc; } +/** + * tpm2_certify_creation() - execute a TPM2_CertifyCreation command + * + * @chip: TPM chip to use + * @payload: the key data in clear and encrypted form + * @blob_handle: the loaded TPM handle of the key + * + * Return: 0 on success + * -EINVAL on tpm error status + * < 0 error from tpm_send or tpm_buf_init + */ +static int tpm2_certify_creation(struct tpm_chip *chip, + struct trusted_key_payload *payload, + u32 blob_handle) +{ + struct tpm_header *head; + struct tpm_buf buf; + int rc; + + rc = tpm_buf_init(&buf, TPM2_ST_SESSIONS, TPM2_CC_CERTIFYCREATION); + if (rc) + return rc; + + /* Use TPM_RH_NULL for signHandle */ + tpm_buf_append_u32(&buf, 0x40000007); + + /* Object handle */ + tpm_buf_append_u32(&buf, blob_handle); + + /* Auth */ + tpm_buf_append_u32(&buf, 9); + tpm_buf_append_u32(&buf, TPM2_RS_PW); + tpm_buf_append_u16(&buf, 0); + tpm_buf_append_u8(&buf, 0); + tpm_buf_append_u16(&buf, 0); + + /* Qualifying data */ + tpm_buf_append_u16(&buf, 0); + + /* Creation data hash */ + if (payload->creation_hash_len < 2) { + rc = -EINVAL; + goto out; + } + + tpm_buf_append_u16(&buf, payload->creation_hash_len - 2); + tpm_buf_append(&buf, payload->creation_hash + 2, + payload->creation_hash_len - 2); + + /* signature scheme */ + tpm_buf_append_u16(&buf, TPM_ALG_NULL); + + /* creation ticket */ + tpm_buf_append(&buf, payload->tk, payload->tk_len); + + 
rc = tpm_transmit_cmd(chip, &buf, 6, "certifying creation data"); + if (rc) + goto out; + + head = (struct tpm_header *)buf.data; + + if (be32_to_cpu(head->return_code) != TPM2_RC_SUCCESS) + rc = -EINVAL; +out: + tpm_buf_destroy(&buf); + return rc; +} + /** * tpm2_unseal_trusted() - unseal the payload of a trusted key * @@ -755,8 +823,15 @@ int tpm2_unseal_trusted(struct tpm_chip *chip, goto out; rc = tpm2_unseal_cmd(chip, payload, options, blob_handle); - tpm2_flush_context(chip, blob_handle); + if (rc) + goto flush; + + if (payload->creation_len) + rc = tpm2_certify_creation(chip, payload, blob_handle); + +flush: + tpm2_flush_context(chip, blob_handle); out: tpm_put_ops(chip); From patchwork Fri Nov 11 23:16:32 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Evan Green X-Patchwork-Id: 13040923 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9EAA7C433FE for ; Fri, 11 Nov 2022 23:20:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234598AbiKKXUb (ORCPT ); Fri, 11 Nov 2022 18:20:31 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47128 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234601AbiKKXUO (ORCPT ); Fri, 11 Nov 2022 18:20:14 -0500 Received: from mail-pg1-x52d.google.com (mail-pg1-x52d.google.com [IPv6:2607:f8b0:4864:20::52d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F417C86D4A for ; Fri, 11 Nov 2022 15:19:53 -0800 (PST) Received: by mail-pg1-x52d.google.com with SMTP id 6so5504613pgm.6 for ; Fri, 11 Nov 2022 15:19:53 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=WxnZrpKTaaz0ilGFUa5SW+L2hO71gr9FZXElRHyrORM=; b=RdC/PgTrbDOcwvzO9gBJu382WjbdIsXfOIyi8Hd9fTdzEnaQIBw9UJ+yrexnLAonkd q4MJVXGL1szFP656H+8pK0YyKANw8zT104pIOF4uNxgPHj5fkL4x8URJOxyOlcqUQ5nW jTQzIOdqO8bMGhiXfEFiZmjlOOs6y3s2J5kqc= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=WxnZrpKTaaz0ilGFUa5SW+L2hO71gr9FZXElRHyrORM=; b=RXEPMH9Bix1VeKd4Dp0X6Jm2vs5btJqcEe54CGr4rX9Y790JXnxpXK/yrs1UpXBf/n T14m8F8sb0sSIOyBCOHklXjiKLtzAnS+NzKBjVpoS2373EBCPQ6xrpLepXNokdY+FcPR 5n477V+SVtTftEyMsUDnmZhH9MZSXtaWEPQ6Q6UYDIV47YZEU5wH3T/N0WgLG4beZZCm i5w7lvk/I3V88X0JSMNo8wbQBFCk6tciI/874JVZVEVEOxjInYK2EWlhvck+0v2xd+mt VEbA/0CNBwaOiEEoymPbrzixEs5ba98w4/2DrDQDVD8mbDytdFZeH9w4oOMPVCMiTkJy b+Rg== X-Gm-Message-State: ANoB5pmHDoW5Pzofe8xqtry+lPJytOvG+Q9x6/nbHDewnVmgj5uUx202 gsv4qTzvOeLGj5qna/8UZ/7g/A== X-Google-Smtp-Source: AA0mqf6AUo5ehhTICpSEyKeq+lfewPVlP9jb9+iGmR+PgDdIb0EInl7f63HuxSJjYwRlvw0AxJvdcg== X-Received: by 2002:aa7:9009:0:b0:56c:b8c2:ee89 with SMTP id m9-20020aa79009000000b0056cb8c2ee89mr4749085pfo.21.1668208793125; Fri, 11 Nov 2022 15:19:53 -0800 (PST) Received: from evgreen-glaptop.lan ([98.45.28.95]) by smtp.gmail.com with ESMTPSA id x128-20020a623186000000b0056da2ad6503sm2106900pfx.39.2022.11.11.15.19.51 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 11 Nov 2022 15:19:52 
-0800 (PST) From: Evan Green To: linux-kernel@vger.kernel.org Cc: corbet@lwn.net, linux-integrity@vger.kernel.org, Eric Biggers , gwendal@chromium.org, dianders@chromium.org, apronin@chromium.org, Pavel Machek , Ben Boeckel , rjw@rjwysocki.net, jejb@linux.ibm.com, Kees Cook , dlunev@google.com, zohar@linux.ibm.com, Matthew Garrett , jarkko@kernel.org, linux-pm@vger.kernel.org, Evan Green , Len Brown , "Rafael J. Wysocki" Subject: [PATCH v5 07/11] PM: hibernate: Add kernel-based encryption Date: Fri, 11 Nov 2022 15:16:32 -0800 Message-Id: <20221111151451.v5.7.Ifff11e11797a1bde0297577ecb2f7ebb3f9e2b04@changeid> X-Mailer: git-send-email 2.38.1.431.g37b22c650d-goog In-Reply-To: <20221111231636.3748636-1-evgreen@chromium.org> References: <20221111231636.3748636-1-evgreen@chromium.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-integrity@vger.kernel.org Enabling the kernel to do encryption and integrity checks on the hibernate image prevents a malicious userspace from escalating to kernel execution via hibernation resume. As a first step toward this, add the scaffolding needed for the kernel to do AEAD encryption on the hibernate image, giving us both secrecy and integrity. We currently hardwire the encryption to be gcm(aes) in 16-page chunks. This strikes a balance between minimizing the authentication tag overhead on storage and keeping a modest-sized staging buffer. With this chunk size, we'd generate 2MB of authentication tag data on an 8GB hibernation image. The encryption currently sits on top of the core snapshot functionality, wired up only if requested in the user space software suspend (uswsusp) path. This change adds a new ioctl, SNAPSHOT_ENABLE_ENCRYPTION, that is used in both the read (suspend) and write (resume) modes to enable this new encryption functionality. The ioctl also sends or receives the kernel encryption key used to access this specific hibernate image. For now this key is passed in plaintext; a subsequent commit will seal it via the TPM to provide confidentiality and integrity of the key itself. This mechanism could potentially be lowered into the common snapshot code given a mechanism to stitch the key contents into the image itself. To avoid forcing usermode to deal with sequencing the auth tags in with the data, we stitch the auth tags into the snapshot after each chunk of pages. This complicates the read and write functions, as we roll through the flow of (for read) 1) fill the staging buffer with encrypted data, 2) feed the data pages out to user mode, 3) feed the tag out to user mode. To avoid having each syscall return a small and variable amount of data, the encrypted versions of read and write operate in a loop, allowing an arbitrary amount of data through per syscall. One alternative that would simplify things here would be a streaming interface to AEAD. Then we could just stream the entire hibernate image through directly and handle a single tag at the end. However, there is a school of thought that a streaming interface to AEAD is a loaded footgun, since it tempts the caller to act on decrypted but not yet verified data, defeating the purpose of AEAD. With this change alone, we don't actually protect ourselves from malicious userspace at all, since we kindly hand the key to usermode in plaintext. In later changes, we'll seal the key with the TPM before handing it back to usermode, so that usermode can't decrypt or tamper with the key itself.
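For illustration only (not part of this patch), here is a minimal sketch of how a uswsusp-style tool might drive the suspend path with the new ioctl. It assumes the uapi additions in this series (struct uswsusp_key_blob and SNAPSHOT_ENABLE_ENCRYPTION in <linux/suspend_ioctls.h>), the usual /dev/snapshot device node, and a caller-provided out_fd for image storage; the normal SNAPSHOT_FREEZE/SNAPSHOT_CREATE_IMAGE sequencing and real error handling are elided.

/*
 * Suspend-side sketch: fetch the kernel's key blob, then stream the
 * encrypted snapshot (data pages with stitched-in auth tags) to out_fd.
 */
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/suspend_ioctls.h>

int write_encrypted_image(int out_fd)
{
        struct uswsusp_key_blob blob;
        char buf[4096];
        ssize_t n;
        int snap = open("/dev/snapshot", O_RDONLY);

        if (snap < 0)
                return -1;

        /* Ask the kernel to encrypt the image; it hands back the key blob. */
        memset(&blob, 0, sizeof(blob));
        if (ioctl(snap, SNAPSHOT_ENABLE_ENCRYPTION, &blob) < 0)
                goto err;

        /* Persist the blob so it can be passed back at resume time. */
        if (write(out_fd, &blob, sizeof(blob)) != sizeof(blob))
                goto err;

        /* The kernel interleaves auth tags itself; just copy bytes out. */
        while ((n = read(snap, buf, sizeof(buf))) > 0)
                if (write(out_fd, buf, n) != n)
                        goto err;

        close(snap);
        return n < 0 ? -1 : 0;
err:
        close(snap);
        return -1;
}
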
Signed-off-by: Evan Green --- Changes in v5: - Removed default n in Kconfig (Kees) - Expanded commit message (Jarkko) Changes in v4: - Local ordering and whitespace changes (Jarkko) Documentation/power/userland-swsusp.rst | 8 + include/uapi/linux/suspend_ioctls.h | 15 +- kernel/power/Kconfig | 12 + kernel/power/Makefile | 1 + kernel/power/snapenc.c | 492 ++++++++++++++++++++++++ kernel/power/user.c | 40 +- kernel/power/user.h | 103 +++++ 7 files changed, 659 insertions(+), 12 deletions(-) create mode 100644 kernel/power/snapenc.c create mode 100644 kernel/power/user.h diff --git a/Documentation/power/userland-swsusp.rst b/Documentation/power/userland-swsusp.rst index 1cf62d80a9ca10..f759915a78ce98 100644 --- a/Documentation/power/userland-swsusp.rst +++ b/Documentation/power/userland-swsusp.rst @@ -115,6 +115,14 @@ SNAPSHOT_S2RAM to resume the system from RAM if there's enough battery power or restore its state on the basis of the saved suspend image otherwise) +SNAPSHOT_ENABLE_ENCRYPTION + Enables encryption of the hibernate image within the kernel. Upon suspend + (ie when the snapshot device was opened for reading), returns a blob + representing the random encryption key the kernel created to encrypt the + hibernate image with. Upon resume (ie when the snapshot device was opened + for writing), receives a blob from usermode containing the key material + previously returned during hibernate. + The device's read() operation can be used to transfer the snapshot image from the kernel. It has the following limitations: diff --git a/include/uapi/linux/suspend_ioctls.h b/include/uapi/linux/suspend_ioctls.h index bcce04e21c0dce..b73026ef824bb9 100644 --- a/include/uapi/linux/suspend_ioctls.h +++ b/include/uapi/linux/suspend_ioctls.h @@ -13,6 +13,18 @@ struct resume_swap_area { __u32 dev; } __attribute__((packed)); +#define USWSUSP_KEY_NONCE_SIZE 16 + +/* + * This structure is used to pass the kernel's hibernate encryption key in + * either direction. + */ +struct uswsusp_key_blob { + __u32 blob_len; + __u8 blob[512]; + __u8 nonce[USWSUSP_KEY_NONCE_SIZE]; +} __attribute__((packed)); + #define SNAPSHOT_IOC_MAGIC '3' #define SNAPSHOT_FREEZE _IO(SNAPSHOT_IOC_MAGIC, 1) #define SNAPSHOT_UNFREEZE _IO(SNAPSHOT_IOC_MAGIC, 2) @@ -29,6 +41,7 @@ struct resume_swap_area { #define SNAPSHOT_PREF_IMAGE_SIZE _IO(SNAPSHOT_IOC_MAGIC, 18) #define SNAPSHOT_AVAIL_SWAP_SIZE _IOR(SNAPSHOT_IOC_MAGIC, 19, __kernel_loff_t) #define SNAPSHOT_ALLOC_SWAP_PAGE _IOR(SNAPSHOT_IOC_MAGIC, 20, __kernel_loff_t) -#define SNAPSHOT_IOC_MAXNR 20 +#define SNAPSHOT_ENABLE_ENCRYPTION _IOWR(SNAPSHOT_IOC_MAGIC, 21, struct uswsusp_key_blob) +#define SNAPSHOT_IOC_MAXNR 21 #endif /* _LINUX_SUSPEND_IOCTLS_H */ diff --git a/kernel/power/Kconfig b/kernel/power/Kconfig index 60a1d3051cc79a..2bde64bddae403 100644 --- a/kernel/power/Kconfig +++ b/kernel/power/Kconfig @@ -92,6 +92,18 @@ config HIBERNATION_SNAPSHOT_DEV If in doubt, say Y. +config ENCRYPTED_HIBERNATION + bool "Encryption support for userspace snapshots" + depends on HIBERNATION_SNAPSHOT_DEV + depends on CRYPTO_AEAD2=y + help + Enable support for kernel-based encryption of hibernation snapshots + created by uswsusp tools. + + Say N if userspace handles the image encryption. + + If in doubt, say N. 
+ config PM_STD_PARTITION string "Default resume partition" depends on HIBERNATION diff --git a/kernel/power/Makefile b/kernel/power/Makefile index 874ad834dc8daf..7be08f2e0e3b68 100644 --- a/kernel/power/Makefile +++ b/kernel/power/Makefile @@ -16,6 +16,7 @@ obj-$(CONFIG_SUSPEND) += suspend.o obj-$(CONFIG_PM_TEST_SUSPEND) += suspend_test.o obj-$(CONFIG_HIBERNATION) += hibernate.o snapshot.o swap.o obj-$(CONFIG_HIBERNATION_SNAPSHOT_DEV) += user.o +obj-$(CONFIG_ENCRYPTED_HIBERNATION) += snapenc.o obj-$(CONFIG_PM_AUTOSLEEP) += autosleep.o obj-$(CONFIG_PM_WAKELOCKS) += wakelock.o diff --git a/kernel/power/snapenc.c b/kernel/power/snapenc.c new file mode 100644 index 00000000000000..0d055ea6203a5b --- /dev/null +++ b/kernel/power/snapenc.c @@ -0,0 +1,492 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* This file provides encryption support for system snapshots. */ + +#include +#include +#include +#include +#include +#include + +#include "power.h" +#include "user.h" + +/* Encrypt more data from the snapshot into the staging area. */ +static int snapshot_encrypt_refill(struct snapshot_data *data) +{ + struct aead_request *req = data->aead_req; + u8 nonce[GCM_AES_IV_SIZE]; + DECLARE_CRYPTO_WAIT(wait); + size_t total = 0; + int pg_idx; + int res; + + /* + * The first buffer is the associated data, set to the offset to prevent + * attacks that rearrange chunks. + */ + sg_set_buf(&data->sg[0], &data->crypt_total, sizeof(data->crypt_total)); + + /* Load the crypt buffer with snapshot pages. */ + for (pg_idx = 0; pg_idx < CHUNK_SIZE; pg_idx++) { + void *buf = data->crypt_pages[pg_idx]; + + res = snapshot_read_next(&data->handle); + if (res < 0) + return res; + if (res == 0) + break; + + WARN_ON(res != PAGE_SIZE); + + /* + * Copy the page into the staging area. A future optimization + * could potentially skip this copy for lowmem pages. + */ + memcpy(buf, data_of(data->handle), PAGE_SIZE); + sg_set_buf(&data->sg[1 + pg_idx], buf, PAGE_SIZE); + total += PAGE_SIZE; + } + + sg_set_buf(&data->sg[1 + pg_idx], &data->auth_tag, SNAPSHOT_AUTH_TAG_SIZE); + aead_request_set_callback(req, 0, crypto_req_done, &wait); + /* + * Use incrementing nonces for each chunk, since a 64 bit value won't + * roll into re-use for any given hibernate image. + */ + memcpy(&nonce[0], &data->nonce_low, sizeof(data->nonce_low)); + memcpy(&nonce[sizeof(data->nonce_low)], + &data->nonce_high, + sizeof(nonce) - sizeof(data->nonce_low)); + + data->nonce_low += 1; + /* Total does not include AAD or the auth tag. */ + aead_request_set_crypt(req, data->sg, data->sg, total, nonce); + res = crypto_wait_req(crypto_aead_encrypt(req), &wait); + if (res) + return res; + + data->crypt_size = total; + data->crypt_total += total; + return 0; +} + +/* Decrypt data from the staging area and push it to the snapshot. */ +static int snapshot_decrypt_drain(struct snapshot_data *data) +{ + struct aead_request *req = data->aead_req; + u8 nonce[GCM_AES_IV_SIZE]; + DECLARE_CRYPTO_WAIT(wait); + int page_count; + size_t total; + int pg_idx; + int res; + + /* Set up the associated data. */ + sg_set_buf(&data->sg[0], &data->crypt_total, sizeof(data->crypt_total)); + + /* + * Get the number of full pages, which could be short at the end. There + * should also be a tag at the end, so the offset won't be an even page. + */ + page_count = data->crypt_offset >> PAGE_SHIFT; + total = page_count << PAGE_SHIFT; + if ((total == 0) || (total == data->crypt_offset)) + return -EINVAL; + + /* + * Load the sg list with the crypt buffer. 
Inline decrypt back into the + * staging buffer. A future optimization could decrypt directly into + * lowmem pages. + */ + for (pg_idx = 0; pg_idx < page_count; pg_idx++) + sg_set_buf(&data->sg[1 + pg_idx], data->crypt_pages[pg_idx], PAGE_SIZE); + + /* + * It's possible this is the final decrypt, and there are fewer than + * CHUNK_SIZE pages. If this is the case we would have just written the + * auth tag into the first few bytes of a new page. Copy to the tag if + * so. + */ + if ((page_count < CHUNK_SIZE) && + (data->crypt_offset - total) == sizeof(data->auth_tag)) { + + memcpy(data->auth_tag, + data->crypt_pages[pg_idx], + sizeof(data->auth_tag)); + + } else if (data->crypt_offset != + ((CHUNK_SIZE << PAGE_SHIFT) + SNAPSHOT_AUTH_TAG_SIZE)) { + + return -EINVAL; + } + + sg_set_buf(&data->sg[1 + pg_idx], &data->auth_tag, SNAPSHOT_AUTH_TAG_SIZE); + aead_request_set_callback(req, 0, crypto_req_done, &wait); + memcpy(&nonce[0], &data->nonce_low, sizeof(data->nonce_low)); + memcpy(&nonce[sizeof(data->nonce_low)], + &data->nonce_high, + sizeof(nonce) - sizeof(data->nonce_low)); + + data->nonce_low += 1; + aead_request_set_crypt(req, data->sg, data->sg, total + SNAPSHOT_AUTH_TAG_SIZE, nonce); + res = crypto_wait_req(crypto_aead_decrypt(req), &wait); + if (res) + return res; + + data->crypt_size = 0; + data->crypt_offset = 0; + + /* Push the decrypted pages further down the stack. */ + total = 0; + for (pg_idx = 0; pg_idx < page_count; pg_idx++) { + void *buf = data->crypt_pages[pg_idx]; + + res = snapshot_write_next(&data->handle); + if (res < 0) + return res; + if (res == 0) + break; + + if (!data_of(data->handle)) + return -EINVAL; + + WARN_ON(res != PAGE_SIZE); + + /* + * Copy the page into the staging area. A future optimization + * could potentially skip this copy for lowmem pages. + */ + memcpy(data_of(data->handle), buf, PAGE_SIZE); + total += PAGE_SIZE; + } + + data->crypt_total += total; + return 0; +} + +static ssize_t snapshot_read_next_encrypted(struct snapshot_data *data, + void **buf) +{ + size_t tag_off; + + /* Refill the encrypted buffer if it's empty. */ + if ((data->crypt_size == 0) || + (data->crypt_offset >= + (data->crypt_size + SNAPSHOT_AUTH_TAG_SIZE))) { + + int rc; + + data->crypt_size = 0; + data->crypt_offset = 0; + rc = snapshot_encrypt_refill(data); + if (rc < 0) + return rc; + } + + /* Return data pages if the offset is in that region. */ + if (data->crypt_offset < data->crypt_size) { + size_t pg_idx = data->crypt_offset >> PAGE_SHIFT; + size_t pg_off = data->crypt_offset & (PAGE_SIZE - 1); + *buf = data->crypt_pages[pg_idx] + pg_off; + return PAGE_SIZE - pg_off; + } + + /* Use offsets just beyond the size to return the tag. */ + tag_off = data->crypt_offset - data->crypt_size; + if (tag_off > SNAPSHOT_AUTH_TAG_SIZE) + tag_off = SNAPSHOT_AUTH_TAG_SIZE; + + *buf = data->auth_tag + tag_off; + return SNAPSHOT_AUTH_TAG_SIZE - tag_off; +} + +static ssize_t snapshot_write_next_encrypted(struct snapshot_data *data, + void **buf) +{ + size_t tag_off; + + /* Return data pages if the offset is in that region. */ + if (data->crypt_offset < (PAGE_SIZE * CHUNK_SIZE)) { + size_t pg_idx = data->crypt_offset >> PAGE_SHIFT; + size_t pg_off = data->crypt_offset & (PAGE_SIZE - 1); + *buf = data->crypt_pages[pg_idx] + pg_off; + return PAGE_SIZE - pg_off; + } + + /* Use offsets just beyond the size to return the tag. 
*/ + tag_off = data->crypt_offset - (PAGE_SIZE * CHUNK_SIZE); + if (tag_off > SNAPSHOT_AUTH_TAG_SIZE) + tag_off = SNAPSHOT_AUTH_TAG_SIZE; + + *buf = data->auth_tag + tag_off; + return SNAPSHOT_AUTH_TAG_SIZE - tag_off; +} + +ssize_t snapshot_read_encrypted(struct snapshot_data *data, + char __user *buf, size_t count, loff_t *offp) +{ + ssize_t total = 0; + + /* Loop getting buffers of varying sizes and copying to userspace. */ + while (count) { + size_t copy_size; + size_t not_done; + void *src; + ssize_t src_size = snapshot_read_next_encrypted(data, &src); + + if (src_size <= 0) { + if (total == 0) + return src_size; + + break; + } + + copy_size = min(count, (size_t)src_size); + not_done = copy_to_user(buf + total, src, copy_size); + copy_size -= not_done; + total += copy_size; + count -= copy_size; + data->crypt_offset += copy_size; + if (copy_size == 0) { + if (total == 0) + return -EFAULT; + + break; + } + } + + *offp += total; + return total; +} + +ssize_t snapshot_write_encrypted(struct snapshot_data *data, + const char __user *buf, size_t count, + loff_t *offp) +{ + ssize_t total = 0; + + /* Loop getting buffers of varying sizes and copying from. */ + while (count) { + size_t copy_size; + size_t not_done; + void *dst; + ssize_t dst_size = snapshot_write_next_encrypted(data, &dst); + + if (dst_size <= 0) { + if (total == 0) + return dst_size; + + break; + } + + copy_size = min(count, (size_t)dst_size); + not_done = copy_from_user(dst, buf + total, copy_size); + copy_size -= not_done; + total += copy_size; + count -= copy_size; + data->crypt_offset += copy_size; + if (copy_size == 0) { + if (total == 0) + return -EFAULT; + + break; + } + + /* Drain the encrypted buffer if it's full. */ + if ((data->crypt_offset >= + ((PAGE_SIZE * CHUNK_SIZE) + SNAPSHOT_AUTH_TAG_SIZE))) { + + int rc; + + rc = snapshot_decrypt_drain(data); + if (rc < 0) + return rc; + } + } + + *offp += total; + return total; +} + +void snapshot_teardown_encryption(struct snapshot_data *data) +{ + int i; + + if (data->aead_req) { + aead_request_free(data->aead_req); + data->aead_req = NULL; + } + + if (data->aead_tfm) { + crypto_free_aead(data->aead_tfm); + data->aead_tfm = NULL; + } + + for (i = 0; i < CHUNK_SIZE; i++) { + if (data->crypt_pages[i]) { + free_page((unsigned long)data->crypt_pages[i]); + data->crypt_pages[i] = NULL; + } + } +} + +static int snapshot_setup_encryption_common(struct snapshot_data *data) +{ + int i, rc; + + data->crypt_total = 0; + data->crypt_offset = 0; + data->crypt_size = 0; + memset(data->crypt_pages, 0, sizeof(data->crypt_pages)); + /* This only works once per hibernate. */ + if (data->aead_tfm) + return -EINVAL; + + /* Set up the encryption transform */ + data->aead_tfm = crypto_alloc_aead("gcm(aes)", 0, 0); + if (IS_ERR(data->aead_tfm)) { + rc = PTR_ERR(data->aead_tfm); + data->aead_tfm = NULL; + return rc; + } + + rc = -ENOMEM; + data->aead_req = aead_request_alloc(data->aead_tfm, GFP_KERNEL); + if (data->aead_req == NULL) + goto setup_fail; + + /* Allocate the staging area */ + for (i = 0; i < CHUNK_SIZE; i++) { + data->crypt_pages[i] = (void *)__get_free_page(GFP_ATOMIC); + if (data->crypt_pages[i] == NULL) + goto setup_fail; + } + + sg_init_table(data->sg, CHUNK_SIZE + 2); + + /* + * The associated data will be the offset so that blocks can't be + * rearranged. 
+ */ + aead_request_set_ad(data->aead_req, sizeof(data->crypt_total)); + rc = crypto_aead_setauthsize(data->aead_tfm, SNAPSHOT_AUTH_TAG_SIZE); + if (rc) + goto setup_fail; + + return 0; + +setup_fail: + snapshot_teardown_encryption(data); + return rc; +} + +int snapshot_get_encryption_key(struct snapshot_data *data, + struct uswsusp_key_blob __user *key) +{ + u8 aead_key[SNAPSHOT_ENCRYPTION_KEY_SIZE]; + u8 nonce[USWSUSP_KEY_NONCE_SIZE]; + int rc; + + /* Don't pull a random key from a world that can be reset. */ + if (data->ready) + return -EPIPE; + + rc = snapshot_setup_encryption_common(data); + if (rc) + return rc; + + /* Build a random starting nonce. */ + get_random_bytes(nonce, sizeof(nonce)); + memcpy(&data->nonce_low, &nonce[0], sizeof(data->nonce_low)); + memcpy(&data->nonce_high, &nonce[8], sizeof(data->nonce_high)); + /* Build a random key */ + get_random_bytes(aead_key, sizeof(aead_key)); + rc = crypto_aead_setkey(data->aead_tfm, aead_key, sizeof(aead_key)); + if (rc) + goto fail; + + /* Hand the key back to user mode (to be changed!) */ + rc = put_user(sizeof(struct uswsusp_key_blob), &key->blob_len); + if (rc) + goto fail; + + rc = copy_to_user(&key->blob, &aead_key, sizeof(aead_key)); + if (rc) + goto fail; + + rc = copy_to_user(&key->nonce, &nonce, sizeof(nonce)); + if (rc) + goto fail; + + return 0; + +fail: + snapshot_teardown_encryption(data); + return rc; +} + +int snapshot_set_encryption_key(struct snapshot_data *data, + struct uswsusp_key_blob __user *key) +{ + struct uswsusp_key_blob blob; + int rc; + + /* It's too late if data's been pushed in. */ + if (data->handle.cur) + return -EPIPE; + + rc = snapshot_setup_encryption_common(data); + if (rc) + return rc; + + /* Load the key from user mode. */ + rc = copy_from_user(&blob, key, sizeof(struct uswsusp_key_blob)); + if (rc) + goto crypto_setup_fail; + + if (blob.blob_len != sizeof(struct uswsusp_key_blob)) { + rc = -EINVAL; + goto crypto_setup_fail; + } + + rc = crypto_aead_setkey(data->aead_tfm, + blob.blob, + SNAPSHOT_ENCRYPTION_KEY_SIZE); + + if (rc) + goto crypto_setup_fail; + + /* Load the starting nonce. */ + memcpy(&data->nonce_low, &blob.nonce[0], sizeof(data->nonce_low)); + memcpy(&data->nonce_high, &blob.nonce[8], sizeof(data->nonce_high)); + return 0; + +crypto_setup_fail: + snapshot_teardown_encryption(data); + return rc; +} + +loff_t snapshot_get_encrypted_image_size(loff_t raw_size) +{ + loff_t pages = raw_size >> PAGE_SHIFT; + loff_t chunks = (pages + (CHUNK_SIZE - 1)) / CHUNK_SIZE; + /* + * The encrypted size is the normal size, plus a stitched in + * authentication tag for every chunk of pages. 
+ */ + return raw_size + (chunks * SNAPSHOT_AUTH_TAG_SIZE); +} + +int snapshot_finalize_decrypted_image(struct snapshot_data *data) +{ + int rc; + + if (data->crypt_offset != 0) { + rc = snapshot_decrypt_drain(data); + if (rc) + return rc; + } + + return 0; +} diff --git a/kernel/power/user.c b/kernel/power/user.c index 3a4e70366f354c..bba5cdbd2c0239 100644 --- a/kernel/power/user.c +++ b/kernel/power/user.c @@ -25,19 +25,10 @@ #include #include "power.h" +#include "user.h" static bool need_wait; - -static struct snapshot_data { - struct snapshot_handle handle; - int swap; - int mode; - bool frozen; - bool ready; - bool platform_support; - bool free_bitmaps; - dev_t dev; -} snapshot_state; +struct snapshot_data snapshot_state; int is_hibernate_resume_dev(dev_t dev) { @@ -122,6 +113,7 @@ static int snapshot_release(struct inode *inode, struct file *filp) } else if (data->free_bitmaps) { free_basic_memory_bitmaps(); } + snapshot_teardown_encryption(data); pm_notifier_call_chain(data->mode == O_RDONLY ? PM_POST_HIBERNATION : PM_POST_RESTORE); hibernate_release(); @@ -146,6 +138,12 @@ static ssize_t snapshot_read(struct file *filp, char __user *buf, res = -ENODATA; goto Unlock; } + + if (snapshot_encryption_enabled(data)) { + res = snapshot_read_encrypted(data, buf, count, offp); + goto Unlock; + } + if (!pg_offp) { /* on page boundary? */ res = snapshot_read_next(&data->handle); if (res <= 0) @@ -182,6 +180,11 @@ static ssize_t snapshot_write(struct file *filp, const char __user *buf, data = filp->private_data; + if (snapshot_encryption_enabled(data)) { + res = snapshot_write_encrypted(data, buf, count, offp); + goto unlock; + } + if (!pg_offp) { res = snapshot_write_next(&data->handle); if (res <= 0) @@ -317,6 +320,12 @@ static long snapshot_ioctl(struct file *filp, unsigned int cmd, break; case SNAPSHOT_ATOMIC_RESTORE: + if (snapshot_encryption_enabled(data)) { + error = snapshot_finalize_decrypted_image(data); + if (error) + break; + } + snapshot_write_finalize(&data->handle); if (data->mode != O_WRONLY || !data->frozen || !snapshot_image_loaded(&data->handle)) { @@ -352,6 +361,8 @@ static long snapshot_ioctl(struct file *filp, unsigned int cmd, } size = snapshot_get_image_size(); size <<= PAGE_SHIFT; + if (snapshot_encryption_enabled(data)) + size = snapshot_get_encrypted_image_size(size); error = put_user(size, (loff_t __user *)arg); break; @@ -409,6 +420,13 @@ static long snapshot_ioctl(struct file *filp, unsigned int cmd, error = snapshot_set_swap_area(data, (void __user *)arg); break; + case SNAPSHOT_ENABLE_ENCRYPTION: + if (data->mode == O_RDONLY) + error = snapshot_get_encryption_key(data, (void __user *)arg); + else + error = snapshot_set_encryption_key(data, (void __user *)arg); + break; + default: error = -ENOTTY; diff --git a/kernel/power/user.h b/kernel/power/user.h new file mode 100644 index 00000000000000..ac429782abff85 --- /dev/null +++ b/kernel/power/user.h @@ -0,0 +1,103 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#include +#include +#include + +#define SNAPSHOT_ENCRYPTION_KEY_SIZE AES_KEYSIZE_128 +#define SNAPSHOT_AUTH_TAG_SIZE 16 + +/* Define the number of pages in a single AEAD encryption chunk. 
*/ +#define CHUNK_SIZE 16 + +struct snapshot_data { + struct snapshot_handle handle; + int swap; + int mode; + bool frozen; + bool ready; + bool platform_support; + bool free_bitmaps; + dev_t dev; + +#if defined(CONFIG_ENCRYPTED_HIBERNATION) + struct crypto_aead *aead_tfm; + struct aead_request *aead_req; + void *crypt_pages[CHUNK_SIZE]; + u8 auth_tag[SNAPSHOT_AUTH_TAG_SIZE]; + struct scatterlist sg[CHUNK_SIZE + 2]; /* Add room for AD and auth tag. */ + size_t crypt_offset; + size_t crypt_size; + uint64_t crypt_total; + uint64_t nonce_low; + uint64_t nonce_high; +#endif + +}; + +extern struct snapshot_data snapshot_state; + +/* kernel/power/swapenc.c routines */ +#if defined(CONFIG_ENCRYPTED_HIBERNATION) + +ssize_t snapshot_read_encrypted(struct snapshot_data *data, + char __user *buf, size_t count, loff_t *offp); + +ssize_t snapshot_write_encrypted(struct snapshot_data *data, + const char __user *buf, size_t count, + loff_t *offp); + +void snapshot_teardown_encryption(struct snapshot_data *data); +int snapshot_get_encryption_key(struct snapshot_data *data, + struct uswsusp_key_blob __user *key); + +int snapshot_set_encryption_key(struct snapshot_data *data, + struct uswsusp_key_blob __user *key); + +loff_t snapshot_get_encrypted_image_size(loff_t raw_size); + +int snapshot_finalize_decrypted_image(struct snapshot_data *data); + +#define snapshot_encryption_enabled(data) (!!(data)->aead_tfm) + +#else + +ssize_t snapshot_read_encrypted(struct snapshot_data *data, + char __user *buf, size_t count, loff_t *offp) +{ + return -ENOTTY; +} + +ssize_t snapshot_write_encrypted(struct snapshot_data *data, + const char __user *buf, size_t count, + loff_t *offp) +{ + return -ENOTTY; +} + +static void snapshot_teardown_encryption(struct snapshot_data *data) {} +static int snapshot_get_encryption_key(struct snapshot_data *data, + struct uswsusp_key_blob __user *key) +{ + return -ENOTTY; +} + +static int snapshot_set_encryption_key(struct snapshot_data *data, + struct uswsusp_key_blob __user *key) +{ + return -ENOTTY; +} + +static loff_t snapshot_get_encrypted_image_size(loff_t raw_size) +{ + return raw_size; +} + +static int snapshot_finalize_decrypted_image(struct snapshot_data *data) +{ + return -ENOTTY; +} + +#define snapshot_encryption_enabled(data) (0) + +#endif From patchwork Fri Nov 11 23:16:33 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Evan Green X-Patchwork-Id: 13040924 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2587BC433FE for ; Fri, 11 Nov 2022 23:20:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234615AbiKKXUr (ORCPT ); Fri, 11 Nov 2022 18:20:47 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47736 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234606AbiKKXUP (ORCPT ); Fri, 11 Nov 2022 18:20:15 -0500 Received: from mail-pj1-x1033.google.com (mail-pj1-x1033.google.com [IPv6:2607:f8b0:4864:20::1033]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CD816836B0 for ; Fri, 11 Nov 2022 15:19:55 -0800 (PST) Received: by mail-pj1-x1033.google.com with SMTP id l6so5655476pjj.0 for ; Fri, 11 Nov 2022 15:19:55 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; 
h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=NTqq7bj67dKUZy2JWRPDzRoAYcrjvnq9cNYv9+r3eEg=; b=fewZUJmDaI+21iNMyn8KBwBHljNN/o8Ly2dpBGbna92wAfpHrUX7UEEAR6BPqMEKvi dE0KEbKy6y8sAPK3ZTpv2TOyO2MO2gcwUEPuBldL9+AiNcwt477o+bvWx0yLOxxFtnSe R8VtXP7Xywx33YF6XK3cwSYHQjOI2eW8tCXUk= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=NTqq7bj67dKUZy2JWRPDzRoAYcrjvnq9cNYv9+r3eEg=; b=AqDB+LRhZ5goIOmUiABAJQ2imKGVEXhJ7uE22/68Fq91kG6zz4R71QUT4wgcVmYc5r UhxlywiSY2I1X6teMe2ZkOopv0At0DV1rvsqbYVvoYpxzfeN1pjQismjcZXZpujltHhb drB9WJPzs5Vep+I6I7yTdzEeRMVLkMqa3AZjl6feNJqGwIjJMPRSg4Vg0mgzcyccm1rk MaTAO4b0YNGEuVUrme6SFrRlFkcjiF6r83QnkmvLv+Ju3PJpawWpC8gSo9VTUS2l2doS dtiHRW/6XRWyndz3UpDxpKGygRfjqhcL1UqkqhZAP2iwUk6m7vfvBPsDs9E0Ad063pjW GnuQ== X-Gm-Message-State: ANoB5pnseG5nOQovVi8YKPFXETpH86DpF1UdXYthwNeZv3veyqmRrfPF QPOXPhFI4DogkXqTCbEGEp76PQ== X-Google-Smtp-Source: AA0mqf5quuWkoqyZw23YhINKkV90+YrXus3svr+jBa8IvjXKvBw9/E6wtj1s6XEDo82AlCRzjkYNvg== X-Received: by 2002:a17:902:eac6:b0:187:3932:6422 with SMTP id p6-20020a170902eac600b0018739326422mr4471754pld.135.1668208795320; Fri, 11 Nov 2022 15:19:55 -0800 (PST) Received: from evgreen-glaptop.lan ([98.45.28.95]) by smtp.gmail.com with ESMTPSA id x128-20020a623186000000b0056da2ad6503sm2106900pfx.39.2022.11.11.15.19.53 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 11 Nov 2022 15:19:54 -0800 (PST) From: Evan Green To: linux-kernel@vger.kernel.org Cc: corbet@lwn.net, linux-integrity@vger.kernel.org, Eric Biggers , gwendal@chromium.org, dianders@chromium.org, apronin@chromium.org, Pavel Machek , Ben Boeckel , rjw@rjwysocki.net, jejb@linux.ibm.com, Kees Cook , dlunev@google.com, zohar@linux.ibm.com, Matthew Garrett , jarkko@kernel.org, linux-pm@vger.kernel.org, Evan Green , Matthew Garrett , Len Brown , "Rafael J. Wysocki" Subject: [PATCH v5 08/11] PM: hibernate: Use TPM-backed keys to encrypt image Date: Fri, 11 Nov 2022 15:16:33 -0800 Message-Id: <20221111151451.v5.8.Ibd067e73916b9fae268a5824c2dd037416426af8@changeid> X-Mailer: git-send-email 2.38.1.431.g37b22c650d-goog In-Reply-To: <20221111231636.3748636-1-evgreen@chromium.org> References: <20221111231636.3748636-1-evgreen@chromium.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-integrity@vger.kernel.org When using encrypted hibernate images, have the TPM create a key for us and seal it. By handing back a sealed blob instead of the raw key, we prevent usermode from being able to decrypt and tamper with the hibernate image on a different machine. We'll also go through the motions of having PCR23 set to a known value at the time of key creation and unsealing. Currently there's nothing that enforces the contents of PCR23 as a condition to unseal the key blob, that will come in a later change. 
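For illustration only (not part of this patch), a minimal sketch of the resume-side counterpart: the blob saved at suspend time is now a TPM-sealed trusted-key blob, so usermode stores it opaquely and cannot recover the raw AES key; only the kernel, with PCR23 in the state it controls, can unseal it. The /dev/snapshot path and caller-provided in_fd are assumptions, and the normal SNAPSHOT_ATOMIC_RESTORE sequencing and error handling are elided.

/*
 * Resume-side sketch: hand the sealed key blob back to the kernel, then
 * push the encrypted image for the kernel to decrypt and verify.
 */
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/suspend_ioctls.h>

int load_encrypted_image(int in_fd)
{
        struct uswsusp_key_blob blob;
        char buf[4096];
        ssize_t n;
        int snap = open("/dev/snapshot", O_WRONLY);

        if (snap < 0)
                return -1;

        /* Load the blob from storage and let the kernel unseal it via the TPM. */
        if (read(in_fd, &blob, sizeof(blob)) != sizeof(blob) ||
            ioctl(snap, SNAPSHOT_ENABLE_ENCRYPTION, &blob) < 0)
                goto err;

        /* Stream the encrypted image back in; decryption happens in-kernel. */
        while ((n = read(in_fd, buf, sizeof(buf))) > 0)
                if (write(snap, buf, n) != n)
                        goto err;

        close(snap);
        return n < 0 ? -1 : 0;
err:
        close(snap);
        return -1;
}
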
Suggested-by: Matthew Garrett Signed-off-by: Evan Green Reviewed-by: Kees Cook --- Matthew's incarnation of this patch is at: https://patchwork.kernel.org/project/linux-pm/patch/20210220013255.1083202-9-matthewgarrett@google.com/ Changes in v5: - Use Suggested-by tag instead of made up Sourced-from (Kees) - ENCRYPTED_HIBERNATION should depend on TCG_TPM2_RESTRCT_PCR Changes in v4: - s/tpm_pcr_reset/tpm2_pcr_reset/ due to change in other patch - Variable ordering and whitespace fixes (Jarkko) - Add NULL check explanation in teardown (Jarkko) - Change strlen+1 to sizeof for static buffer (Jarkko) - Fix nr_allocated_banks loop overflow (found via KASAN) Changes in v3: - ENCRYPTED_HIBERNATION needs TRUSTED_KEYS builtin for key_type_trusted. - Remove KEYS dependency since it's covered by TRUSTED_KEYS (Kees) Changes in v2: - Rework load/create_kernel_key() to eliminate a label (Andrey) - Call put_device() needed from calling tpm_default_chip(). kernel/power/Kconfig | 2 + kernel/power/snapenc.c | 211 +++++++++++++++++++++++++++++++++++++++-- kernel/power/user.h | 1 + 3 files changed, 205 insertions(+), 9 deletions(-) diff --git a/kernel/power/Kconfig b/kernel/power/Kconfig index 2bde64bddae403..420024f46992b2 100644 --- a/kernel/power/Kconfig +++ b/kernel/power/Kconfig @@ -96,6 +96,8 @@ config ENCRYPTED_HIBERNATION bool "Encryption support for userspace snapshots" depends on HIBERNATION_SNAPSHOT_DEV depends on CRYPTO_AEAD2=y + depends on TCG_TPM2_RESTRICT_PCR + depends on TRUSTED_KEYS=y help Enable support for kernel-based encryption of hibernation snapshots created by uswsusp tools. diff --git a/kernel/power/snapenc.c b/kernel/power/snapenc.c index 0d055ea6203a5b..f1db4eddb3c34c 100644 --- a/kernel/power/snapenc.c +++ b/kernel/power/snapenc.c @@ -4,13 +4,23 @@ #include #include #include +#include +#include #include #include +#include #include #include "power.h" #include "user.h" +/* sha256("To sleep, perchance to dream") */ +static struct tpm_digest known_digest = { .alg_id = TPM_ALG_SHA256, + .digest = {0x92, 0x78, 0x3d, 0x79, 0x2d, 0x00, 0x31, 0xb0, 0x55, 0xf9, + 0x1e, 0x0d, 0xce, 0x83, 0xde, 0x1d, 0xc4, 0xc5, 0x8e, 0x8c, + 0xf1, 0x22, 0x38, 0x6c, 0x33, 0xb1, 0x14, 0xb7, 0xec, 0x05, + 0x5f, 0x49}}; + /* Encrypt more data from the snapshot into the staging area. */ static int snapshot_encrypt_refill(struct snapshot_data *data) { @@ -313,6 +323,16 @@ void snapshot_teardown_encryption(struct snapshot_data *data) { int i; + /* + * Do NULL checks so this function can safely be called from error paths + * and other places where this context may not be fully set up. + */ + if (data->key) { + key_revoke(data->key); + key_put(data->key); + data->key = NULL; + } + if (data->aead_req) { aead_request_free(data->aead_req); data->aead_req = NULL; @@ -381,10 +401,82 @@ static int snapshot_setup_encryption_common(struct snapshot_data *data) return rc; } +static int snapshot_create_kernel_key(struct snapshot_data *data) +{ + /* Create a key sealed by the SRK. 
*/ + char *keyinfo = "new\t32\tkeyhandle=0x81000000"; + const struct cred *cred = current_cred(); + struct tpm_digest *digests = NULL; + struct key *key = NULL; + struct tpm_chip *chip; + int ret, i; + + chip = tpm_default_chip(); + if (!chip) + return -ENODEV; + + if (!(tpm_is_tpm2(chip))) { + ret = -ENODEV; + goto out_dev; + } + + ret = tpm2_pcr_reset(chip, 23); + if (ret) + goto out; + + digests = kcalloc(chip->nr_allocated_banks, sizeof(struct tpm_digest), + GFP_KERNEL); + if (!digests) { + ret = -ENOMEM; + goto out; + } + + for (i = 0; i < chip->nr_allocated_banks; i++) { + digests[i].alg_id = chip->allocated_banks[i].alg_id; + if (digests[i].alg_id == known_digest.alg_id) + memcpy(&digests[i], &known_digest, sizeof(known_digest)); + } + + ret = tpm_pcr_extend(chip, 23, digests); + if (ret != 0) + goto out; + + key = key_alloc(&key_type_trusted, "swsusp", GLOBAL_ROOT_UID, + GLOBAL_ROOT_GID, cred, 0, KEY_ALLOC_NOT_IN_QUOTA, + NULL); + + if (IS_ERR(key)) { + ret = PTR_ERR(key); + key = NULL; + goto out; + } + + ret = key_instantiate_and_link(key, keyinfo, sizeof(keyinfo), NULL, + NULL); + if (ret != 0) + goto out; + + data->key = key; + key = NULL; + +out: + if (key) { + key_revoke(key); + key_put(key); + } + + kfree(digests); + tpm2_pcr_reset(chip, 23); + +out_dev: + put_device(&chip->dev); + return ret; +} + int snapshot_get_encryption_key(struct snapshot_data *data, struct uswsusp_key_blob __user *key) { - u8 aead_key[SNAPSHOT_ENCRYPTION_KEY_SIZE]; + struct trusted_key_payload *payload; u8 nonce[USWSUSP_KEY_NONCE_SIZE]; int rc; @@ -400,21 +492,28 @@ int snapshot_get_encryption_key(struct snapshot_data *data, get_random_bytes(nonce, sizeof(nonce)); memcpy(&data->nonce_low, &nonce[0], sizeof(data->nonce_low)); memcpy(&data->nonce_high, &nonce[8], sizeof(data->nonce_high)); - /* Build a random key */ - get_random_bytes(aead_key, sizeof(aead_key)); - rc = crypto_aead_setkey(data->aead_tfm, aead_key, sizeof(aead_key)); + + /* Create a kernel key, and set it. */ + rc = snapshot_create_kernel_key(data); + if (rc) + goto fail; + + payload = data->key->payload.data[0]; + /* Install the key */ + rc = crypto_aead_setkey(data->aead_tfm, payload->key, SNAPSHOT_ENCRYPTION_KEY_SIZE); if (rc) goto fail; - /* Hand the key back to user mode (to be changed!) */ - rc = put_user(sizeof(struct uswsusp_key_blob), &key->blob_len); + /* Hand the key back to user mode in sealed form. */ + rc = put_user(payload->blob_len, &key->blob_len); if (rc) goto fail; - rc = copy_to_user(&key->blob, &aead_key, sizeof(aead_key)); + rc = copy_to_user(&key->blob, &payload->blob, payload->blob_len); if (rc) goto fail; + /* The nonce just gets handed back in the clear. 
*/ rc = copy_to_user(&key->nonce, &nonce, sizeof(nonce)); if (rc) goto fail; @@ -426,10 +525,99 @@ int snapshot_get_encryption_key(struct snapshot_data *data, return rc; } +static int snapshot_load_kernel_key(struct snapshot_data *data, + struct uswsusp_key_blob *blob) +{ + + char *keytemplate = "load\t%s\tkeyhandle=0x81000000"; + const struct cred *cred = current_cred(); + struct tpm_digest *digests = NULL; + char *blobstring = NULL; + struct key *key = NULL; + struct tpm_chip *chip; + char *keyinfo = NULL; + int i, ret; + + chip = tpm_default_chip(); + if (!chip) + return -ENODEV; + + if (!(tpm_is_tpm2(chip))) { + ret = -ENODEV; + goto out_dev; + } + + ret = tpm2_pcr_reset(chip, 23); + if (ret) + goto out; + + digests = kcalloc(chip->nr_allocated_banks, sizeof(struct tpm_digest), + GFP_KERNEL); + if (!digests) + goto out; + + for (i = 0; i < chip->nr_allocated_banks; i++) { + digests[i].alg_id = chip->allocated_banks[i].alg_id; + if (digests[i].alg_id == known_digest.alg_id) + memcpy(&digests[i], &known_digest, sizeof(known_digest)); + } + + ret = tpm_pcr_extend(chip, 23, digests); + if (ret != 0) + goto out; + + blobstring = kmalloc(blob->blob_len * 2, GFP_KERNEL); + if (!blobstring) { + ret = -ENOMEM; + goto out; + } + + bin2hex(blobstring, blob->blob, blob->blob_len); + keyinfo = kasprintf(GFP_KERNEL, keytemplate, blobstring); + if (!keyinfo) { + ret = -ENOMEM; + goto out; + } + + key = key_alloc(&key_type_trusted, "swsusp", GLOBAL_ROOT_UID, + GLOBAL_ROOT_GID, cred, 0, KEY_ALLOC_NOT_IN_QUOTA, + NULL); + + if (IS_ERR(key)) { + ret = PTR_ERR(key); + key = NULL; + goto out; + } + + ret = key_instantiate_and_link(key, keyinfo, strlen(keyinfo) + 1, NULL, + NULL); + if (ret != 0) + goto out; + + data->key = key; + key = NULL; + +out: + if (key) { + key_revoke(key); + key_put(key); + } + + kfree(keyinfo); + kfree(blobstring); + kfree(digests); + tpm2_pcr_reset(chip, 23); + +out_dev: + put_device(&chip->dev); + return ret; +} + int snapshot_set_encryption_key(struct snapshot_data *data, struct uswsusp_key_blob __user *key) { struct uswsusp_key_blob blob; + struct trusted_key_payload *payload; int rc; /* It's too late if data's been pushed in. 
*/ @@ -445,13 +633,18 @@ int snapshot_set_encryption_key(struct snapshot_data *data, if (rc) goto crypto_setup_fail; - if (blob.blob_len != sizeof(struct uswsusp_key_blob)) { + if (blob.blob_len > sizeof(key->blob)) { rc = -EINVAL; goto crypto_setup_fail; } + rc = snapshot_load_kernel_key(data, &blob); + if (rc) + goto crypto_setup_fail; + + payload = data->key->payload.data[0]; rc = crypto_aead_setkey(data->aead_tfm, - blob.blob, + payload->key, SNAPSHOT_ENCRYPTION_KEY_SIZE); if (rc) diff --git a/kernel/power/user.h b/kernel/power/user.h index ac429782abff85..6c86fb64ebe13e 100644 --- a/kernel/power/user.h +++ b/kernel/power/user.h @@ -31,6 +31,7 @@ struct snapshot_data { uint64_t crypt_total; uint64_t nonce_low; uint64_t nonce_high; + struct key *key; #endif }; From patchwork Fri Nov 11 23:16:34 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Evan Green X-Patchwork-Id: 13040925 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 75053C43219 for ; Fri, 11 Nov 2022 23:21:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234716AbiKKXVA (ORCPT ); Fri, 11 Nov 2022 18:21:00 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47942 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234629AbiKKXUT (ORCPT ); Fri, 11 Nov 2022 18:20:19 -0500 Received: from mail-pg1-x534.google.com (mail-pg1-x534.google.com [IPv6:2607:f8b0:4864:20::534]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D47C383B9D for ; Fri, 11 Nov 2022 15:19:57 -0800 (PST) Received: by mail-pg1-x534.google.com with SMTP id 78so5482668pgb.13 for ; Fri, 11 Nov 2022 15:19:57 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=P7s0lZAjTbi+wTrNPj0k/HGP/itxgpJc8b+wZFHPxSE=; b=eOgNNVyqjwHHnBZxzFQAAdTeO8mUn5bC1xUChvIC3KQbeS0icGO8Jfbp4OinyjCntL KP5HOXMAc4oOUanwT4Jovk7A4WfcFC/I6r865/ESOScCMkNnO6nS0fAhcZxK0LBgtM33 DlX6sR6s49bwaUBqtNoE7phhANmAVkbCTQGaU= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=P7s0lZAjTbi+wTrNPj0k/HGP/itxgpJc8b+wZFHPxSE=; b=BSusnTndOvFCJqeEueLoSWClt5oDaetPiu50sXBSUxPJoVrwwjqg8yT30leBXR27Cj 4kXys4giK7SGJu/+oWkgnx6aYcpzcUs5nSdk8H1UgLZmMpOKDX9cXKHgvZeDPd5r1d3b uIGOsEWb0LBodMWwahdHh9JZWPX/JgzFVnTxsyqP4lwmIPDiDAJg+VIRkZDp9frejnu2 eBwkHYz6KAnv9ynS07tGR1zPLOstW2ayAALMIbfP9SnAM6KKAKqzKUQCkuA2TEIcO8xv bk67vS3p2RQqEiVMlWEbuiT4KU5jIknmTwSHobpHKpy/QS3sNegZQQQwR1Jt1aoxinfd rnZg== X-Gm-Message-State: ANoB5pkYTEKEMr45jUeCxPU6pSRiwpNAp/o7j9Lg9TFcmUH4blNbdzUO 95enTN/vKhTgi5YUUTngDLncEQ== X-Google-Smtp-Source: AA0mqf6QU6MVVFKuneW/LqiP64kddsV+gE62WVMmA2+xuP21CGRjqwpJaJ8hNutwV1g6AQDY8joJiQ== X-Received: by 2002:a05:6a00:1ad2:b0:56c:235:83a9 with SMTP id f18-20020a056a001ad200b0056c023583a9mr4861555pfv.6.1668208797219; Fri, 11 Nov 2022 15:19:57 -0800 (PST) Received: from evgreen-glaptop.lan ([98.45.28.95]) by smtp.gmail.com with ESMTPSA id x128-20020a623186000000b0056da2ad6503sm2106900pfx.39.2022.11.11.15.19.55 
(version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 11 Nov 2022 15:19:56 -0800 (PST) From: Evan Green To: linux-kernel@vger.kernel.org Cc: corbet@lwn.net, linux-integrity@vger.kernel.org, Eric Biggers , gwendal@chromium.org, dianders@chromium.org, apronin@chromium.org, Pavel Machek , Ben Boeckel , rjw@rjwysocki.net, jejb@linux.ibm.com, Kees Cook , dlunev@google.com, zohar@linux.ibm.com, Matthew Garrett , jarkko@kernel.org, linux-pm@vger.kernel.org, Evan Green , Len Brown , "Rafael J. Wysocki" Subject: [PATCH v5 09/11] PM: hibernate: Mix user key in encrypted hibernate Date: Fri, 11 Nov 2022 15:16:34 -0800 Message-Id: <20221111151451.v5.9.I87952411cf83f2199ff7a4cc8c828d357b8c8ce3@changeid> X-Mailer: git-send-email 2.38.1.431.g37b22c650d-goog In-Reply-To: <20221111231636.3748636-1-evgreen@chromium.org> References: <20221111231636.3748636-1-evgreen@chromium.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-integrity@vger.kernel.org Usermode may have their own data protection requirements when it comes to encrypting the hibernate image. For example, users may want a policy where the hibernate image is protected by a key derived both from platform-level security as well as authentication data (such as a password or PIN). This way, even if the platform is compromised (ie a stolen laptop), sensitive data cannot be exfiltrated via the hibernate image without additional data (like the user's password). The kernel is already doing the encryption, but will be protecting its key with the TPM alone. Allow usermode to mix in key content of their own for the data portion of the hibernate image, so that the image encryption key is determined both by a TPM-backed secret and user-defined data. To mix the user key in, we hash the kernel key followed by the user key, and use the resulting hash as the new key. This allows usermode to mix in its key material without giving it too much control over what key is actually driving the encryption (which might be used to attack the secret kernel key). Limiting this to the data portion allows the kernel to receive the page map and prepare its giant allocation even if this user key is not yet available (ie the user has not yet finished typing in their password). Once the user key becomes available, the data portion can be pushed through to the kernel as well. This enables "preloading" scenarios, where the hibernate image is loaded off of disk while the additional key material (eg password) is being collected. One annoyance of the "preloading" scheme is that hibernate image memory is effectively double-allocated: first by the usermode process pulling encrypted contents off of disk and holding it, and second by the kernel in its giant allocation in prepare_image(). An interesting future optimization would be to allow the kernel to accept and store encrypted page data before the user key is available. This would remove the double allocation problem, as usermode could push the encrypted pages loaded from disk immediately without storing them. The kernel could defer decryption of the data until the user key is available, while still knowing the correct page locations to store the encrypted data in. Signed-off-by: Evan Green --- Changes in v5: - Remove pad struct member (Kees) Changes in v2: - Add missing static on snapshot_encrypted_byte_count() - Fold in only the used kernel key bytes to the user key. 
- Make the user key length 32 (Eric) - Use CRYPTO_LIB_SHA256 for less boilerplate (Eric) include/uapi/linux/suspend_ioctls.h | 17 ++- kernel/power/Kconfig | 1 + kernel/power/power.h | 1 + kernel/power/snapenc.c | 166 ++++++++++++++++++++++++++-- kernel/power/snapshot.c | 5 + kernel/power/user.c | 4 + kernel/power/user.h | 13 +++ 7 files changed, 195 insertions(+), 12 deletions(-) diff --git a/include/uapi/linux/suspend_ioctls.h b/include/uapi/linux/suspend_ioctls.h index b73026ef824bb9..7612874608bae4 100644 --- a/include/uapi/linux/suspend_ioctls.h +++ b/include/uapi/linux/suspend_ioctls.h @@ -14,6 +14,7 @@ struct resume_swap_area { } __attribute__((packed)); #define USWSUSP_KEY_NONCE_SIZE 16 +#define USWSUSP_USER_KEY_SIZE 32 /* * This structure is used to pass the kernel's hibernate encryption key in @@ -22,9 +23,20 @@ struct resume_swap_area { struct uswsusp_key_blob { __u32 blob_len; __u8 blob[512]; - __u8 nonce[USWSUSP_KEY_NONCE_SIZE]; + __u8 nonce[USWSUSP_KEY_NONCE_SIZE] __nonstring; } __attribute__((packed)); +/* + * Allow user mode to fold in key material for the data portion of the hibernate + * image. + */ +struct uswsusp_user_key { + /* Kernel returns the metadata size. */ + __kernel_loff_t meta_size; + __u32 key_len; + __u8 key[USWSUSP_USER_KEY_SIZE] __nonstring; +}; + #define SNAPSHOT_IOC_MAGIC '3' #define SNAPSHOT_FREEZE _IO(SNAPSHOT_IOC_MAGIC, 1) #define SNAPSHOT_UNFREEZE _IO(SNAPSHOT_IOC_MAGIC, 2) @@ -42,6 +54,7 @@ struct uswsusp_key_blob { #define SNAPSHOT_AVAIL_SWAP_SIZE _IOR(SNAPSHOT_IOC_MAGIC, 19, __kernel_loff_t) #define SNAPSHOT_ALLOC_SWAP_PAGE _IOR(SNAPSHOT_IOC_MAGIC, 20, __kernel_loff_t) #define SNAPSHOT_ENABLE_ENCRYPTION _IOWR(SNAPSHOT_IOC_MAGIC, 21, struct uswsusp_key_blob) -#define SNAPSHOT_IOC_MAXNR 21 +#define SNAPSHOT_SET_USER_KEY _IOWR(SNAPSHOT_IOC_MAGIC, 22, struct uswsusp_user_key) +#define SNAPSHOT_IOC_MAXNR 22 #endif /* _LINUX_SUSPEND_IOCTLS_H */ diff --git a/kernel/power/Kconfig b/kernel/power/Kconfig index 420024f46992b2..5c1f8f3f7482d7 100644 --- a/kernel/power/Kconfig +++ b/kernel/power/Kconfig @@ -98,6 +98,7 @@ config ENCRYPTED_HIBERNATION depends on CRYPTO_AEAD2=y depends on TCG_TPM2_RESTRICT_PCR depends on TRUSTED_KEYS=y + select CRYPTO_LIB_SHA256 help Enable support for kernel-based encryption of hibernation snapshots created by uswsusp tools. diff --git a/kernel/power/power.h b/kernel/power/power.h index b4f43394320961..5955e5cf692302 100644 --- a/kernel/power/power.h +++ b/kernel/power/power.h @@ -151,6 +151,7 @@ struct snapshot_handle { extern unsigned int snapshot_additional_pages(struct zone *zone); extern unsigned long snapshot_get_image_size(void); +extern unsigned long snapshot_get_meta_page_count(void); extern int snapshot_read_next(struct snapshot_handle *handle); extern int snapshot_write_next(struct snapshot_handle *handle); extern void snapshot_write_finalize(struct snapshot_handle *handle); diff --git a/kernel/power/snapenc.c b/kernel/power/snapenc.c index f1db4eddb3c34c..0b38642628f7ce 100644 --- a/kernel/power/snapenc.c +++ b/kernel/power/snapenc.c @@ -6,6 +6,7 @@ #include #include #include +#include #include #include #include @@ -21,6 +22,44 @@ static struct tpm_digest known_digest = { .alg_id = TPM_ALG_SHA256, 0xf1, 0x22, 0x38, 0x6c, 0x33, 0xb1, 0x14, 0xb7, 0xec, 0x05, 0x5f, 0x49}}; +/* Derive a key from the kernel and user keys for data encryption. 
*/ +static int snapshot_use_user_key(struct snapshot_data *data) +{ + u8 digest[SHA256_DIGEST_SIZE]; + struct trusted_key_payload *payload = data->key->payload.data[0]; + struct sha256_state sha256_state; + + /* + * Hash the kernel key and the user key together. This folds in the user + * key, but not in a way that gives the user mode predictable control + * over the key bits. + */ + sha256_init(&sha256_state); + + BUILD_BUG_ON(sizeof(payload->key) < SNAPSHOT_ENCRYPTION_KEY_SIZE); + + sha256_update(&sha256_state, payload->key, SNAPSHOT_ENCRYPTION_KEY_SIZE); + sha256_update(&sha256_state, data->user_key, sizeof(data->user_key)); + sha256_final(&sha256_state, digest); + + BUILD_BUG_ON(SNAPSHOT_ENCRYPTION_KEY_SIZE > SHA256_DIGEST_SIZE); + + return crypto_aead_setkey(data->aead_tfm, + digest, + SNAPSHOT_ENCRYPTION_KEY_SIZE); +} + +/* Check to see if it's time to switch to the user key, and do it if so. */ +static int snapshot_check_user_key_switch(struct snapshot_data *data) +{ + if (data->user_key_valid && data->meta_size && + data->crypt_total == data->meta_size) { + return snapshot_use_user_key(data); + } + + return 0; +} + /* Encrypt more data from the snapshot into the staging area. */ static int snapshot_encrypt_refill(struct snapshot_data *data) { @@ -31,6 +70,15 @@ static int snapshot_encrypt_refill(struct snapshot_data *data) int pg_idx; int res; + if (data->crypt_total == 0) { + data->meta_size = snapshot_get_meta_page_count() << PAGE_SHIFT; + + } else { + res = snapshot_check_user_key_switch(data); + if (res) + return res; + } + /* * The first buffer is the associated data, set to the offset to prevent * attacks that rearrange chunks. @@ -41,6 +89,11 @@ static int snapshot_encrypt_refill(struct snapshot_data *data) for (pg_idx = 0; pg_idx < CHUNK_SIZE; pg_idx++) { void *buf = data->crypt_pages[pg_idx]; + /* Stop at the meta page boundary to potentially switch keys. */ + if (total && + ((data->crypt_total + total) == data->meta_size)) + break; + res = snapshot_read_next(&data->handle); if (res < 0) return res; @@ -113,10 +166,10 @@ static int snapshot_decrypt_drain(struct snapshot_data *data) sg_set_buf(&data->sg[1 + pg_idx], data->crypt_pages[pg_idx], PAGE_SIZE); /* - * It's possible this is the final decrypt, and there are fewer than - * CHUNK_SIZE pages. If this is the case we would have just written the - * auth tag into the first few bytes of a new page. Copy to the tag if - * so. + * It's possible this is the final decrypt, or the final decrypt of the + * meta region, and there are fewer than CHUNK_SIZE pages. If this is + * the case we would have just written the auth tag into the first few + * bytes of a new page. Copy to the tag if so. 
*/ if ((page_count < CHUNK_SIZE) && (data->crypt_offset - total) == sizeof(data->auth_tag)) { @@ -171,7 +224,14 @@ static int snapshot_decrypt_drain(struct snapshot_data *data) total += PAGE_SIZE; } + if (data->crypt_total == 0) + data->meta_size = snapshot_get_meta_page_count() << PAGE_SHIFT; + data->crypt_total += total; + res = snapshot_check_user_key_switch(data); + if (res) + return res; + return 0; } @@ -220,8 +280,26 @@ static ssize_t snapshot_write_next_encrypted(struct snapshot_data *data, if (data->crypt_offset < (PAGE_SIZE * CHUNK_SIZE)) { size_t pg_idx = data->crypt_offset >> PAGE_SHIFT; size_t pg_off = data->crypt_offset & (PAGE_SIZE - 1); + size_t size_avail = PAGE_SIZE; *buf = data->crypt_pages[pg_idx] + pg_off; - return PAGE_SIZE - pg_off; + + /* + * If this is the boundary where the meta pages end, then just + * return enough for the auth tag. + */ + if (data->meta_size && (data->crypt_total < data->meta_size)) { + uint64_t total_done = + data->crypt_total + data->crypt_offset; + + if ((total_done >= data->meta_size) && + (total_done < + (data->meta_size + SNAPSHOT_AUTH_TAG_SIZE))) { + + size_avail = SNAPSHOT_AUTH_TAG_SIZE; + } + } + + return size_avail - pg_off; } /* Use offsets just beyond the size to return the tag. */ @@ -303,9 +381,15 @@ ssize_t snapshot_write_encrypted(struct snapshot_data *data, break; } - /* Drain the encrypted buffer if it's full. */ + /* + * Drain the encrypted buffer if it's full, or if we hit the end + * of the meta pages and need a key change. + */ if ((data->crypt_offset >= - ((PAGE_SIZE * CHUNK_SIZE) + SNAPSHOT_AUTH_TAG_SIZE))) { + ((PAGE_SIZE * CHUNK_SIZE) + SNAPSHOT_AUTH_TAG_SIZE)) || + (data->meta_size && (data->crypt_total < data->meta_size) && + ((data->crypt_total + data->crypt_offset) == + (data->meta_size + SNAPSHOT_AUTH_TAG_SIZE)))) { int rc; @@ -349,6 +433,8 @@ void snapshot_teardown_encryption(struct snapshot_data *data) data->crypt_pages[i] = NULL; } } + + memset(data->user_key, 0, sizeof(data->user_key)); } static int snapshot_setup_encryption_common(struct snapshot_data *data) @@ -358,6 +444,7 @@ static int snapshot_setup_encryption_common(struct snapshot_data *data) data->crypt_total = 0; data->crypt_offset = 0; data->crypt_size = 0; + data->user_key_valid = false; memset(data->crypt_pages, 0, sizeof(data->crypt_pages)); /* This only works once per hibernate. */ if (data->aead_tfm) @@ -660,15 +747,74 @@ int snapshot_set_encryption_key(struct snapshot_data *data, return rc; } -loff_t snapshot_get_encrypted_image_size(loff_t raw_size) +static loff_t snapshot_encrypted_byte_count(loff_t plain_size) { - loff_t pages = raw_size >> PAGE_SHIFT; + loff_t pages = plain_size >> PAGE_SHIFT; loff_t chunks = (pages + (CHUNK_SIZE - 1)) / CHUNK_SIZE; /* * The encrypted size is the normal size, plus a stitched in * authentication tag for every chunk of pages. */ - return raw_size + (chunks * SNAPSHOT_AUTH_TAG_SIZE); + return plain_size + (chunks * SNAPSHOT_AUTH_TAG_SIZE); +} + +static loff_t snapshot_get_meta_data_size(void) +{ + loff_t pages = snapshot_get_meta_page_count(); + + return snapshot_encrypted_byte_count(pages << PAGE_SHIFT); +} + +int snapshot_set_user_key(struct snapshot_data *data, + struct uswsusp_user_key __user *key) +{ + struct uswsusp_user_key user_key; + unsigned int key_len; + int rc; + loff_t size; + + /* + * Return the metadata size, the number of bytes that can be fed in before + * the user data key is needed at resume time. 
+ */ + size = snapshot_get_meta_data_size(); + rc = put_user(size, &key->meta_size); + if (rc) + return rc; + + rc = copy_from_user(&user_key, key, sizeof(struct uswsusp_user_key)); + if (rc) + return rc; + + BUILD_BUG_ON(sizeof(data->user_key) < sizeof(user_key.key)); + + key_len = min_t(__u32, user_key.key_len, sizeof(data->user_key)); + if (key_len < 8) + return -EINVAL; + + /* Don't allow it if it's too late. */ + if (data->crypt_total > data->meta_size) + return -EBUSY; + + memset(data->user_key, 0, sizeof(data->user_key)); + memcpy(data->user_key, user_key.key, key_len); + data->user_key_valid = true; + /* Install the key if the user is just under the wire. */ + rc = snapshot_check_user_key_switch(data); + if (rc) + return rc; + + return 0; +} + +loff_t snapshot_get_encrypted_image_size(loff_t raw_size) +{ + loff_t pages = raw_size >> PAGE_SHIFT; + loff_t meta_size; + + pages -= snapshot_get_meta_page_count(); + meta_size = snapshot_get_meta_data_size(); + return snapshot_encrypted_byte_count(pages << PAGE_SHIFT) + meta_size; } int snapshot_finalize_decrypted_image(struct snapshot_data *data) diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c index c20ca5fb9adc87..d8a30f3eaaf4c6 100644 --- a/kernel/power/snapshot.c +++ b/kernel/power/snapshot.c @@ -2083,6 +2083,11 @@ unsigned long snapshot_get_image_size(void) return nr_copy_pages + nr_meta_pages + 1; } +unsigned long snapshot_get_meta_page_count(void) +{ + return nr_meta_pages + 1; +} + static int init_header(struct swsusp_info *info) { memset(info, 0, sizeof(struct swsusp_info)); diff --git a/kernel/power/user.c b/kernel/power/user.c index bba5cdbd2c0239..a66e32c9596da8 100644 --- a/kernel/power/user.c +++ b/kernel/power/user.c @@ -427,6 +427,10 @@ static long snapshot_ioctl(struct file *filp, unsigned int cmd, error = snapshot_set_encryption_key(data, (void __user *)arg); break; + case SNAPSHOT_SET_USER_KEY: + error = snapshot_set_user_key(data, (void __user *)arg); + break; + default: error = -ENOTTY; diff --git a/kernel/power/user.h b/kernel/power/user.h index 6c86fb64ebe13e..d75fd287b4c3de 100644 --- a/kernel/power/user.h +++ b/kernel/power/user.h @@ -1,6 +1,7 @@ /* SPDX-License-Identifier: GPL-2.0 */ #include +#include #include #include @@ -32,6 +33,9 @@ struct snapshot_data { uint64_t nonce_low; uint64_t nonce_high; struct key *key; + u8 user_key[USWSUSP_USER_KEY_SIZE] __nonstring; + bool user_key_valid; + uint64_t meta_size; #endif }; @@ -55,6 +59,9 @@ int snapshot_get_encryption_key(struct snapshot_data *data, int snapshot_set_encryption_key(struct snapshot_data *data, struct uswsusp_key_blob __user *key); +int snapshot_set_user_key(struct snapshot_data *data, + struct uswsusp_user_key __user *key); + loff_t snapshot_get_encrypted_image_size(loff_t raw_size); int snapshot_finalize_decrypted_image(struct snapshot_data *data); @@ -89,6 +96,12 @@ static int snapshot_set_encryption_key(struct snapshot_data *data, return -ENOTTY; } +static int snapshot_set_user_key(struct snapshot_data *data, + struct uswsusp_user_key __user *key) +{ + return -ENOTTY; +} + static loff_t snapshot_get_encrypted_image_size(loff_t raw_size) { return raw_size; From patchwork Fri Nov 11 23:16:35 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Evan Green X-Patchwork-Id: 13040926 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by 
smtp.lore.kernel.org (Postfix) with ESMTP id B0015C4167D for ; Fri, 11 Nov 2022 23:21:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234734AbiKKXVE (ORCPT ); Fri, 11 Nov 2022 18:21:04 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47592 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234658AbiKKXU2 (ORCPT ); Fri, 11 Nov 2022 18:20:28 -0500 Received: from mail-pl1-x62f.google.com (mail-pl1-x62f.google.com [IPv6:2607:f8b0:4864:20::62f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7B36D86D66 for ; Fri, 11 Nov 2022 15:19:59 -0800 (PST) Received: by mail-pl1-x62f.google.com with SMTP id 4so5420886pli.0 for ; Fri, 11 Nov 2022 15:19:59 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=fYb7Pc7kk00iZIixiz1ShzY4z/wZpG44t4abSwKlK2k=; b=YsrTyClPvsWKOPUOmdMjFm3cMJd+VRq4mdvYTJHKJR8FbxBBdRBDvmV+VIuyla+V10 4VBd9zqYyPp4bHY3aYNLW86+5FWLUQW+brN4cYdvvrLpxgUQzt0En2KhqKPDgAo+FSS1 ITPaUkDCPa6LT+NLfRfX8fNR63T6F6OkGNraI= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=fYb7Pc7kk00iZIixiz1ShzY4z/wZpG44t4abSwKlK2k=; b=QsK4LTqoAB3brLLdf1Cn5yJ6uvudH/RK7AIMtH/zaZ4rc++nF1q8IO4D0ht+bRpn7m CePUUR/yWufYDUpBRyIZGQWK7vASa+c9G5MjKc38nhYxqg5jCu3ZtVCSsYDa3mpTIJxU z7BkHxez+Fzv41ShXAol+Dp5Q1xW/GzrFBW49BH1jKAyqOiVUf5W43LPgRFQT5xzk37N Z88gyalBQKwupF1cf2F1OwDO9chHi6b97lwBv6V7vdf/mWi5lb/oVcwjzG9b+uSxSFIy lv3179PtPhXEodpp/MfqE8rgQ9y4L3MU/UAaGp3QU7A2/bLUpJONVL+OeorRkmezmalH ng4A== X-Gm-Message-State: ANoB5pk63W8oWLR59d+a52CEaEwgjxGtg9ifM8zoloWGX5W9FwMQOAWR P4kTmlIRLti7D1IOf+cDft4Hdw== X-Google-Smtp-Source: AA0mqf5DEXa06bh3icgkK9RTugU4V7dfjDh1o93t+vywCXKUK2pWdElRBv/y9vBCEVYOM16Tz4oKmg== X-Received: by 2002:a17:902:c115:b0:180:87d7:9be8 with SMTP id 21-20020a170902c11500b0018087d79be8mr4327747pli.85.1668208799133; Fri, 11 Nov 2022 15:19:59 -0800 (PST) Received: from evgreen-glaptop.lan ([98.45.28.95]) by smtp.gmail.com with ESMTPSA id x128-20020a623186000000b0056da2ad6503sm2106900pfx.39.2022.11.11.15.19.57 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 11 Nov 2022 15:19:58 -0800 (PST) From: Evan Green To: linux-kernel@vger.kernel.org Cc: corbet@lwn.net, linux-integrity@vger.kernel.org, Eric Biggers , gwendal@chromium.org, dianders@chromium.org, apronin@chromium.org, Pavel Machek , Ben Boeckel , rjw@rjwysocki.net, jejb@linux.ibm.com, Kees Cook , dlunev@google.com, zohar@linux.ibm.com, Matthew Garrett , jarkko@kernel.org, linux-pm@vger.kernel.org, Evan Green , Matthew Garrett , Len Brown , "Rafael J. Wysocki" Subject: [PATCH v5 10/11] PM: hibernate: Verify the digest encryption key Date: Fri, 11 Nov 2022 15:16:35 -0800 Message-Id: <20221111151451.v5.10.I504d456c7a94ef1aaa7a2c63775ce9690c3ad7ab@changeid> X-Mailer: git-send-email 2.38.1.431.g37b22c650d-goog In-Reply-To: <20221111231636.3748636-1-evgreen@chromium.org> References: <20221111231636.3748636-1-evgreen@chromium.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-integrity@vger.kernel.org We want to ensure that the key used to encrypt the digest was created by the kernel during hibernation. 
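Concretely, because only the kernel can touch PCR 23, the value it should hold at key-creation time is fully determined by the kernel's known digest: a freshly reset SHA-256 PCR is 32 zero bytes, and an extend replaces the PCR contents with SHA-256(old_value || data). The sketch below is not part of the patch; it only assumes the kernel's sha256() helper from <crypto/sha2.h> and recomputes the digest that the creation data should record for PCR 23 (this is the value captured in the expected_digest constant added later in this patch).

#include <crypto/sha2.h>
#include <linux/string.h>

/*
 * Illustrative sketch only: recompute the PCR 23 digest expected in the
 * key's creation data. A reset SHA-256 PCR is all zeroes; extending it
 * with the kernel's known digest yields sha256(zeroes || known_digest),
 * and the creation data stores the digest of the selected PCR values,
 * i.e. sha256() of that result again.
 */
static void expected_pcr23_creation_digest(const u8 known[SHA256_DIGEST_SIZE],
					   u8 out[SHA256_DIGEST_SIZE])
{
	u8 extend_buf[2 * SHA256_DIGEST_SIZE] = { 0 };
	u8 pcr_value[SHA256_DIGEST_SIZE];

	memcpy(extend_buf + SHA256_DIGEST_SIZE, known, SHA256_DIGEST_SIZE);
	sha256(extend_buf, sizeof(extend_buf), pcr_value); /* PCR after extend */
	sha256(pcr_value, SHA256_DIGEST_SIZE, out);        /* creation pcrDigest */
}
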
To do this we request that the TPM include information about the value of PCR 23 at the time of key creation in the sealed blob. On resume, we can make sure that the PCR information in the creation data blob (already certified by the TPM to be accurate) corresponds to the expected value. Since only the kernel can touch PCR 23, if an attacker generates a key themselves the value of PCR 23 will have been different, allowing us to reject the key and boot normally instead of resuming. Co-developed-by: Matthew Garrett Signed-off-by: Matthew Garrett Signed-off-by: Evan Green --- Matthew's original version of this patch is here: https://patchwork.kernel.org/project/linux-pm/patch/20210220013255.1083202-9-matthewgarrett@google.com/ I moved the TPM2_CC_CERTIFYCREATION code into a separate change in the trusted key code because the blob_handle was being flushed and was no longer valid for use in CC_CERTIFYCREATION after the key was loaded. As an added benefit of moving the certification into the trusted keys code, we can drop the other patch from the original series that squirrelled the blob_handle away. Changes in v5: - Use a struct to access creation data (Kees) - Build PCR bitmask programmatically in creation data (Kees) Changes in v4: - Local variable reordering (Jarkko) Changes in v3: - Changed funky tag to Co-developed-by (Kees). Matthew, holler if you want something different. Changes in v2: - Fixed some sparse warnings - Use CRYPTO_LIB_SHA256 to get rid of sha256_data() (Eric) - Adjusted offsets due to new ASN.1 format, and added a creation data length check. kernel/power/snapenc.c | 122 ++++++++++++++++++++++++++++++++++++++++- 1 file changed, 120 insertions(+), 2 deletions(-) diff --git a/kernel/power/snapenc.c b/kernel/power/snapenc.c index 0b38642628f7ce..f32c7347a330a4 100644 --- a/kernel/power/snapenc.c +++ b/kernel/power/snapenc.c @@ -22,6 +22,12 @@ static struct tpm_digest known_digest = { .alg_id = TPM_ALG_SHA256, 0xf1, 0x22, 0x38, 0x6c, 0x33, 0xb1, 0x14, 0xb7, 0xec, 0x05, 0x5f, 0x49}}; +/* sha256(sha256(empty_pcr | known_digest)) */ +static const char expected_digest[] = {0x2f, 0x96, 0xf2, 0x1b, 0x70, 0xa9, 0xe8, + 0x42, 0x25, 0x8e, 0x66, 0x07, 0xbe, 0xbc, 0xe3, 0x1f, 0x2c, 0x84, 0x4a, + 0x3f, 0x85, 0x17, 0x31, 0x47, 0x9a, 0xa5, 0x53, 0xbb, 0x23, 0x0c, 0x32, + 0xf3}; + /* Derive a key from the kernel and user keys for data encryption. */ static int snapshot_use_user_key(struct snapshot_data *data) { @@ -491,7 +497,7 @@ static int snapshot_setup_encryption_common(struct snapshot_data *data) static int snapshot_create_kernel_key(struct snapshot_data *data) { /* Create a key sealed by the SRK. */ - char *keyinfo = "new\t32\tkeyhandle=0x81000000"; + char *keyinfo = "new\t32\tkeyhandle=0x81000000\tcreationpcrs=0x00800000"; const struct cred *cred = current_cred(); struct tpm_digest *digests = NULL; struct key *key = NULL; @@ -612,17 +618,57 @@ int snapshot_get_encryption_key(struct snapshot_data *data, return rc; } +/* Currently only PCR23 is included in the creation data. */ +#define SNAPSHOT_KEY_PCR_COUNT 1 + +/* The standard set of 24 PCRs takes 3 bytes to represent as a bitmask. */ +#define SNAPSHOT_KEY_PCR_SELECTION_BYTES 3 + +/* + * The TPM loves to return variable length structures. This is the form of + * TPM2B_CREATION_DATA expected and verified for the snapshot key. 
+ */ +struct snapshot_key_creation_data { + __be16 size; + /* TPMS_CREATION_DATA, the hashed portion */ + struct { + /* TPML_PCR_SELECTION */ + struct { + __be32 count; + /* TPMS_PCR_SELECTION */ + struct { + __be16 hash_algo; + u8 size; + u8 select[SNAPSHOT_KEY_PCR_SELECTION_BYTES]; + } __packed pcr_selections; + } __packed pcr_select; + + /* TPM2B_DIGEST */ + struct { + __be16 size; + u8 digest[SHA256_DIGEST_SIZE]; + } __packed pcr_digest[SNAPSHOT_KEY_PCR_COUNT]; + + /* ... additional fields not verified ... */ + } creation; +} __packed; + static int snapshot_load_kernel_key(struct snapshot_data *data, struct uswsusp_key_blob *blob) { char *keytemplate = "load\t%s\tkeyhandle=0x81000000"; + struct snapshot_key_creation_data *creation; const struct cred *cred = current_cred(); + struct trusted_key_payload *payload; + char certhash[SHA256_DIGEST_SIZE]; struct tpm_digest *digests = NULL; + unsigned int creation_hash_length; char *blobstring = NULL; struct key *key = NULL; struct tpm_chip *chip; char *keyinfo = NULL; + u32 pcr_selection = 0; int i, ret; chip = tpm_default_chip(); @@ -640,8 +686,10 @@ static int snapshot_load_kernel_key(struct snapshot_data *data, digests = kcalloc(chip->nr_allocated_banks, sizeof(struct tpm_digest), GFP_KERNEL); - if (!digests) + if (!digests) { + ret = -ENOMEM; goto out; + } for (i = 0; i < chip->nr_allocated_banks; i++) { digests[i].alg_id = chip->allocated_banks[i].alg_id; @@ -681,6 +729,76 @@ static int snapshot_load_kernel_key(struct snapshot_data *data, if (ret != 0) goto out; + /* Verify the creation hash matches the creation data. */ + payload = key->payload.data[0]; + creation = (struct snapshot_key_creation_data *)payload->creation; + if (!creation || !payload->creation_hash || + (payload->creation_len < sizeof(*creation)) || + (payload->creation_hash_len - 2 != SHA256_DIGEST_SIZE)) { + ret = -EINVAL; + goto out; + } + + creation_hash_length = + payload->creation_len - + offsetof(struct snapshot_key_creation_data, creation); + + sha256((const u8 *)&creation->creation, creation_hash_length, certhash); + if (memcmp(payload->creation_hash + sizeof(__be16), certhash, SHA256_DIGEST_SIZE) != 0) { + ret = -EINVAL; + goto out; + } + + /* We now know that the creation data is authentic - parse it */ + + /* Verify TPML_PCR_SELECTION.count */ + if (be32_to_cpu(creation->creation.pcr_select.count) != + SNAPSHOT_KEY_PCR_COUNT) { + ret = -EINVAL; + goto out; + } + + /* Verify the PCRs are SHA256. */ + if (be16_to_cpu(creation->creation.pcr_select.pcr_selections.hash_algo) != + TPM_ALG_SHA256) { + ret = -EINVAL; + goto out; + } + + /* Gather the PCR selection bitmask. */ + if (creation->creation.pcr_select.pcr_selections.size != + SNAPSHOT_KEY_PCR_SELECTION_BYTES) { + ret = -EINVAL; + goto out; + } + + for (i = SNAPSHOT_KEY_PCR_SELECTION_BYTES - 1; i >= 0; i--) { + pcr_selection <<= 8; + pcr_selection |= + creation->creation.pcr_select.pcr_selections.select[i]; + } + + /* Verify PCR 23 is selected. */ + if (pcr_selection != (1 << 23)) { + ret = -EINVAL; + goto out; + } + + /* Verify the first and only PCR hash is the expected size. */ + if (be16_to_cpu(creation->creation.pcr_digest[0].size) != + SHA256_DIGEST_SIZE) { + ret = -EINVAL; + goto out; + } + + /* Verify PCR 23 contained the expected value when the key was created. 
*/ + if (memcmp(&creation->creation.pcr_digest[0].digest, expected_digest, + SHA256_DIGEST_SIZE) != 0) { + + ret = -EINVAL; + goto out; + } + data->key = key; key = NULL; From patchwork Fri Nov 11 23:16:36 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Evan Green X-Patchwork-Id: 13040927 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3ED1BC4167B for ; Fri, 11 Nov 2022 23:21:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S234569AbiKKXVH (ORCPT ); Fri, 11 Nov 2022 18:21:07 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47126 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S234669AbiKKXUa (ORCPT ); Fri, 11 Nov 2022 18:20:30 -0500 Received: from mail-pj1-x102a.google.com (mail-pj1-x102a.google.com [IPv6:2607:f8b0:4864:20::102a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EA40087140 for ; Fri, 11 Nov 2022 15:20:01 -0800 (PST) Received: by mail-pj1-x102a.google.com with SMTP id d13-20020a17090a3b0d00b00213519dfe4aso5894044pjc.2 for ; Fri, 11 Nov 2022 15:20:01 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=pyE2tHYjAy2g0LSWaM4wfqP7iC/gfDdA4O7sMGWH2yA=; b=QniXPqvMvelSpdMOj8/GLE1UY/EKWhZT/bALUM3Quh6eWWK0w/0sYl2ScnQCNCKVAh aukm+2XcqJuDdHCg8UzL3LpwNp163IFUj7vjKx0Fq+lHthWxmvdh4W8DVdLgKjJ1eAfc l3kXWO+WCsMRArnQ0o3V6uHfnVSXJ3vhGipIY= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=pyE2tHYjAy2g0LSWaM4wfqP7iC/gfDdA4O7sMGWH2yA=; b=BZOhV/fhU55qhoidoVvmjmwfLEruEIU7Ge9ZS680Vsjy2rjrKaIRkSiBZnR89oxVj/ eZkG4jXesFQjGN/Arvx0bs3CsOMgoTHW36ATxLGKFjyUn5DqI3rhxbElt9KXhV/uqJfR k1XrM8INB8F0qT8KTjr7aS/RrJSDdrdnKC08yRWDVsMPnIV9kELuw4lpQCIJqtC6h0pk 6DJIilILfu3nEe4NCZ7BVvtB9wusUYoOeFEavi8eGzVjiYrFm/pdBV3a/49TRXshGVVu b83zIziYENIoOCwxBGT7eXmLwCcz4fUEYePP5WuXLzcpXzC+kSo+Ia8uLrTXoGMSllG9 ZJXw== X-Gm-Message-State: ANoB5pndt83HnkCdk9sTxW7zh2cjXcoO5MTtMdgHNtPSGBVNntAFZzs+ pU6BitlY8iX5v0LCQxfIV7aiQA== X-Google-Smtp-Source: AA0mqf7HHMySPjnsMxtbN/MAPsHsOfvy/sQbnkfpn8Bg9+grAnPmpqeHb5XWIIMOCcgqxCy1erTyKg== X-Received: by 2002:a17:90b:3704:b0:212:f264:4ee6 with SMTP id mg4-20020a17090b370400b00212f2644ee6mr4128659pjb.189.1668208801465; Fri, 11 Nov 2022 15:20:01 -0800 (PST) Received: from evgreen-glaptop.lan ([98.45.28.95]) by smtp.gmail.com with ESMTPSA id x128-20020a623186000000b0056da2ad6503sm2106900pfx.39.2022.11.11.15.19.59 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 11 Nov 2022 15:20:01 -0800 (PST) From: Evan Green To: linux-kernel@vger.kernel.org Cc: corbet@lwn.net, linux-integrity@vger.kernel.org, Eric Biggers , gwendal@chromium.org, dianders@chromium.org, apronin@chromium.org, Pavel Machek , Ben Boeckel , rjw@rjwysocki.net, jejb@linux.ibm.com, Kees Cook , dlunev@google.com, zohar@linux.ibm.com, Matthew Garrett , jarkko@kernel.org, linux-pm@vger.kernel.org, Evan Green , Matthew Garrett , Len Brown , "Rafael J. 
Wysocki" , axelj Subject: [PATCH v5 11/11] PM: hibernate: seal the encryption key with a PCR policy Date: Fri, 11 Nov 2022 15:16:36 -0800 Message-Id: <20221111151451.v5.11.Ifce072ae1ef1ce39bd681fff55af13a054045d9f@changeid> X-Mailer: git-send-email 2.38.1.431.g37b22c650d-goog In-Reply-To: <20221111231636.3748636-1-evgreen@chromium.org> References: <20221111231636.3748636-1-evgreen@chromium.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-integrity@vger.kernel.org The key blob is not secret, and by default the TPM will happily unseal it regardless of system state. We can protect against that by sealing the secret with a PCR policy - if the current PCR state doesn't match, the TPM will refuse to release the secret. For now let's just seal it to PCR 23. In the long term we may want a more flexible policy around this, such as including PCR 7 for PCs or 0 for Chrome OS. Link: https://lore.kernel.org/all/20210220013255.1083202-10-matthewgarrett@google.com/ Co-developed-by: Matthew Garrett Signed-off-by: Matthew Garrett Signed-off-by: Evan Green --- (no changes since v4) Changes in v4: - Local variable ordering (Jarkko) Changes in v3: - Changed funky tag to Co-developed-by (Kees) Changes in v2: - Fix sparse warnings - Fix session type comment (Andrey) - Eliminate extra label in get/create_kernel_key() (Andrey) - Call tpm_try_get_ops() before calling tpm2_flush_context(). include/linux/tpm.h | 4 + kernel/power/snapenc.c | 166 +++++++++++++++++++++++++++++++++++++++-- 2 files changed, 165 insertions(+), 5 deletions(-) diff --git a/include/linux/tpm.h b/include/linux/tpm.h index 9c2ee3e30ffa5d..252a8a92a7ff5b 100644 --- a/include/linux/tpm.h +++ b/include/linux/tpm.h @@ -233,18 +233,22 @@ enum tpm2_command_codes { TPM2_CC_CONTEXT_LOAD = 0x0161, TPM2_CC_CONTEXT_SAVE = 0x0162, TPM2_CC_FLUSH_CONTEXT = 0x0165, + TPM2_CC_START_AUTH_SESSION = 0x0176, TPM2_CC_VERIFY_SIGNATURE = 0x0177, TPM2_CC_GET_CAPABILITY = 0x017A, TPM2_CC_GET_RANDOM = 0x017B, TPM2_CC_PCR_READ = 0x017E, + TPM2_CC_POLICY_PCR = 0x017F, TPM2_CC_PCR_EXTEND = 0x0182, TPM2_CC_EVENT_SEQUENCE_COMPLETE = 0x0185, TPM2_CC_HASH_SEQUENCE_START = 0x0186, + TPM2_CC_POLICY_GET_DIGEST = 0x0189, TPM2_CC_CREATE_LOADED = 0x0191, TPM2_CC_LAST = 0x0193, /* Spec 1.36 */ }; enum tpm2_permanent_handles { + TPM2_RH_NULL = 0x40000007, TPM2_RS_PW = 0x40000009, }; diff --git a/kernel/power/snapenc.c b/kernel/power/snapenc.c index f32c7347a330a4..d3e1657674aaa1 100644 --- a/kernel/power/snapenc.c +++ b/kernel/power/snapenc.c @@ -443,6 +443,111 @@ void snapshot_teardown_encryption(struct snapshot_data *data) memset(data->user_key, 0, sizeof(data->user_key)); } +static int tpm_setup_policy(struct tpm_chip *chip, int *session_handle) +{ + struct tpm_header *head; + struct tpm_buf buf; + char nonce[32] = {0x00}; + int rc; + + rc = tpm_buf_init(&buf, TPM2_ST_NO_SESSIONS, + TPM2_CC_START_AUTH_SESSION); + if (rc) + return rc; + + /* Decrypt key */ + tpm_buf_append_u32(&buf, TPM2_RH_NULL); + + /* Auth entity */ + tpm_buf_append_u32(&buf, TPM2_RH_NULL); + + /* Nonce - blank is fine here */ + tpm_buf_append_u16(&buf, sizeof(nonce)); + tpm_buf_append(&buf, nonce, sizeof(nonce)); + + /* Encrypted secret - empty */ + tpm_buf_append_u16(&buf, 0); + + /* Session type - policy */ + tpm_buf_append_u8(&buf, 0x01); + + /* Encryption type - NULL */ + tpm_buf_append_u16(&buf, TPM_ALG_NULL); + + /* Hash type - SHA256 */ + tpm_buf_append_u16(&buf, TPM_ALG_SHA256); + + rc = tpm_send(chip, buf.data, tpm_buf_length(&buf)); + if (rc) + goto out; + + head = (struct 
tpm_header *)buf.data; + if (be32_to_cpu(head->length) != sizeof(struct tpm_header) + + sizeof(u32) + sizeof(u16) + sizeof(nonce)) { + rc = -EINVAL; + goto out; + } + + *session_handle = be32_to_cpu(*(__be32 *)&buf.data[10]); + memcpy(nonce, &buf.data[16], sizeof(nonce)); + tpm_buf_destroy(&buf); + rc = tpm_buf_init(&buf, TPM2_ST_NO_SESSIONS, TPM2_CC_POLICY_PCR); + if (rc) + return rc; + + tpm_buf_append_u32(&buf, *session_handle); + + /* PCR digest - read from the PCR, we'll verify creation data later */ + tpm_buf_append_u16(&buf, 0); + + /* One PCR */ + tpm_buf_append_u32(&buf, 1); + + /* SHA256 banks */ + tpm_buf_append_u16(&buf, TPM_ALG_SHA256); + + /* Select PCR 23 */ + tpm_buf_append_u32(&buf, 0x03000080); + rc = tpm_send(chip, buf.data, tpm_buf_length(&buf)); + if (rc) + goto out; + +out: + tpm_buf_destroy(&buf); + return rc; +} + +static int tpm_policy_get_digest(struct tpm_chip *chip, int handle, + char *digest) +{ + struct tpm_header *head; + struct tpm_buf buf; + int rc; + + rc = tpm_buf_init(&buf, TPM2_ST_NO_SESSIONS, TPM2_CC_POLICY_GET_DIGEST); + if (rc) + return rc; + + tpm_buf_append_u32(&buf, handle); + rc = tpm_send(chip, buf.data, tpm_buf_length(&buf)); + + if (rc) + goto out; + + head = (struct tpm_header *)buf.data; + if (be32_to_cpu(head->length) != sizeof(struct tpm_header) + + sizeof(u16) + SHA256_DIGEST_SIZE) { + rc = -EINVAL; + goto out; + } + + memcpy(digest, &buf.data[12], SHA256_DIGEST_SIZE); + +out: + tpm_buf_destroy(&buf); + return rc; +} + static int snapshot_setup_encryption_common(struct snapshot_data *data) { int i, rc; @@ -497,11 +602,16 @@ static int snapshot_setup_encryption_common(struct snapshot_data *data) static int snapshot_create_kernel_key(struct snapshot_data *data) { /* Create a key sealed by the SRK. */ - char *keyinfo = "new\t32\tkeyhandle=0x81000000\tcreationpcrs=0x00800000"; + const char *keytemplate = + "new\t32\tkeyhandle=0x81000000\tcreationpcrs=0x00800000\tpolicydigest=%s"; const struct cred *cred = current_cred(); struct tpm_digest *digests = NULL; + char policy[SHA256_DIGEST_SIZE]; + char *policydigest = NULL; + int session_handle = -1; struct key *key = NULL; struct tpm_chip *chip; + char *keyinfo = NULL; int ret, i; chip = tpm_default_chip(); @@ -534,6 +644,28 @@ static int snapshot_create_kernel_key(struct snapshot_data *data) if (ret != 0) goto out; + policydigest = kmalloc(SHA256_DIGEST_SIZE * 2 + 1, GFP_KERNEL); + if (!policydigest) { + ret = -ENOMEM; + goto out; + } + + ret = tpm_setup_policy(chip, &session_handle); + if (ret != 0) + goto out; + + ret = tpm_policy_get_digest(chip, session_handle, policy); + if (ret != 0) + goto out; + + bin2hex(policydigest, policy, SHA256_DIGEST_SIZE); + policydigest[SHA256_DIGEST_SIZE * 2] = '\0'; + keyinfo = kasprintf(GFP_KERNEL, keytemplate, policydigest); + if (!keyinfo) { + ret = -ENOMEM; + goto out; + } + key = key_alloc(&key_type_trusted, "swsusp", GLOBAL_ROOT_UID, GLOBAL_ROOT_GID, cred, 0, KEY_ALLOC_NOT_IN_QUOTA, NULL); @@ -544,7 +676,7 @@ static int snapshot_create_kernel_key(struct snapshot_data *data) goto out; } - ret = key_instantiate_and_link(key, keyinfo, sizeof(keyinfo), NULL, + ret = key_instantiate_and_link(key, keyinfo, strlen(keyinfo) + 1, NULL, NULL); if (ret != 0) goto out; @@ -558,7 +690,16 @@ static int snapshot_create_kernel_key(struct snapshot_data *data) key_put(key); } + if (session_handle != -1) { + if (tpm_try_get_ops(chip) == 0) { + tpm2_flush_context(chip, session_handle); + tpm_put_ops(chip); + } + } + kfree(digests); + kfree(keyinfo); + kfree(policydigest); 
tpm2_pcr_reset(chip, 23); out_dev: @@ -657,7 +798,7 @@ static int snapshot_load_kernel_key(struct snapshot_data *data, struct uswsusp_key_blob *blob) { - char *keytemplate = "load\t%s\tkeyhandle=0x81000000"; + char *keytemplate = "load\t%s\tkeyhandle=0x81000000\tpolicyhandle=0x%x"; struct snapshot_key_creation_data *creation; const struct cred *cred = current_cred(); struct trusted_key_payload *payload; @@ -665,6 +806,7 @@ static int snapshot_load_kernel_key(struct snapshot_data *data, struct tpm_digest *digests = NULL; unsigned int creation_hash_length; char *blobstring = NULL; + int session_handle = -1; struct key *key = NULL; struct tpm_chip *chip; char *keyinfo = NULL; @@ -701,14 +843,21 @@ static int snapshot_load_kernel_key(struct snapshot_data *data, if (ret != 0) goto out; - blobstring = kmalloc(blob->blob_len * 2, GFP_KERNEL); + ret = tpm_setup_policy(chip, &session_handle); + if (ret != 0) + goto out; + + blobstring = kmalloc(blob->blob_len * 2 + 1, GFP_KERNEL); if (!blobstring) { ret = -ENOMEM; goto out; } bin2hex(blobstring, blob->blob, blob->blob_len); - keyinfo = kasprintf(GFP_KERNEL, keytemplate, blobstring); + blobstring[blob->blob_len * 2] = '\0'; + keyinfo = kasprintf(GFP_KERNEL, keytemplate, blobstring, + session_handle); + if (!keyinfo) { ret = -ENOMEM; goto out; @@ -808,6 +957,13 @@ static int snapshot_load_kernel_key(struct snapshot_data *data, key_put(key); } + if (session_handle != -1) { + if (tpm_try_get_ops(chip) == 0) { + tpm2_flush_context(chip, session_handle); + tpm_put_ops(chip); + } + } + kfree(keyinfo); kfree(blobstring); kfree(digests);
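
A note on the "PCR 23 only" selection that appears in three encodings across these last two patches: as the creationpcrs=0x00800000 bitmask passed in the trusted-key options string, as the raw TPMS_PCR_SELECTION bytes appended for TPM2_CC_POLICY_PCR (the 0x03000080 word is sizeofSelect = 3 followed by select bytes 00 00 80), and as the select[] array walked when the creation data is verified against 1 << 23. The helpers below are purely illustrative, locally defined names that mirror that conversion; they are not part of the patches.

#include <linux/types.h>

#define PCR_SELECT_BYTES 3	/* 24 PCRs -> 3 selection bytes */

/* Convert a PCR bitmask (bit N == PCR N) into TPMS_PCR_SELECTION bytes. */
static void pcr_mask_to_select(u32 mask, u8 select[PCR_SELECT_BYTES])
{
	int i;

	for (i = 0; i < PCR_SELECT_BYTES; i++)
		select[i] = (mask >> (8 * i)) & 0xff;
}

/* Reverse mapping, mirroring the loop that checks the creation data. */
static u32 pcr_select_to_mask(const u8 select[PCR_SELECT_BYTES])
{
	u32 mask = 0;
	int i;

	for (i = PCR_SELECT_BYTES - 1; i >= 0; i--)
		mask = (mask << 8) | select[i];

	return mask;
}

/*
 * For PCR 23: mask = 1 << 23 = 0x00800000, so select[] = { 0x00, 0x00, 0x80 };
 * prefixing sizeofSelect (3) gives the big-endian word 0x03000080 that
 * tpm_setup_policy() appends after the SHA-256 hash algorithm identifier.
 */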