From patchwork Wed May 4 23:20:53 2022
X-Patchwork-Submitter: Evan Green
X-Patchwork-Id: 12838917
From: Evan Green
To: linux-kernel@vger.kernel.org
Cc: Matthew Garrett, dlunev@google.com, zohar@linux.ibm.com,
 jejb@linux.ibm.com, linux-integrity@vger.kernel.org, corbet@lwn.net,
 rjw@rjwysocki.net, gwendal@chromium.org, jarkko@kernel.org,
 linux-pm@vger.kernel.org, Evan Green, Hao Wu, Jason Gunthorpe,
 Peter Huewe, axelj
Subject: [PATCH 01/10] tpm: Add support for in-kernel resetting of PCRs
Date: Wed, 4 May 2022 16:20:53 -0700
Message-Id: <20220504161439.1.I776854f47e3340cc2913ed4d8ecdd328048b73c3@changeid>
In-Reply-To: <20220504232102.469959-1-evgreen@chromium.org>
References: <20220504232102.469959-1-evgreen@chromium.org>

From: Matthew Garrett

Add an internal command for resetting a PCR. This will be used by the
encrypted hibernation code to set PCR23 to a known value. The
hibernation code will seal the hibernation key with a policy
specifying PCR23 be set to this known value as a mechanism to ensure
that the hibernation key is genuine. Doing this repeatedly also
requires the ability to reset the PCR.

Signed-off-by: Matthew Garrett
Signed-off-by: Evan Green
---
Matthew's original version of this patch was at:
https://patchwork.kernel.org/patch/12096487/

 drivers/char/tpm/tpm-interface.c | 28 +++++++++++++++++++++++++
 drivers/char/tpm/tpm.h           |  2 ++
 drivers/char/tpm/tpm1-cmd.c      | 34 ++++++++++++++++++++++++++++++
 drivers/char/tpm/tpm2-cmd.c      | 36 ++++++++++++++++++++++++++++++++
 include/linux/tpm.h              |  7 +++++++
 5 files changed, 107 insertions(+)

diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
index 1621ce8187052c..17b8643ee109c2 100644
--- a/drivers/char/tpm/tpm-interface.c
+++ b/drivers/char/tpm/tpm-interface.c
@@ -342,6 +342,34 @@ int tpm_pcr_extend(struct tpm_chip *chip, u32 pcr_idx,
 }
 EXPORT_SYMBOL_GPL(tpm_pcr_extend);
 
+/**
+ * tpm_pcr_reset - reset the specified PCR
+ * @chip:	a &struct tpm_chip instance, %NULL for the default chip
+ * @pcr_idx:	the PCR to be reset
+ *
+ * Return: same as with tpm_transmit_cmd()
+ */
+int tpm_pcr_reset(struct tpm_chip *chip, u32 pcr_idx)
+{
+	int rc;
+
+	chip = tpm_find_get_ops(chip);
+	if (!chip)
+		return -ENODEV;
+
+	if (chip->flags & TPM_CHIP_FLAG_TPM2) {
+		rc = tpm2_pcr_reset(chip, pcr_idx);
+		goto out;
+	}
+
+	rc = tpm1_pcr_reset(chip, pcr_idx, "attempting to reset a PCR");
+
+out:
+	tpm_put_ops(chip);
+	return rc;
+}
+EXPORT_SYMBOL_GPL(tpm_pcr_reset);
+
 /**
  * tpm_send - send a TPM command
  * @chip:	a &struct tpm_chip instance, %NULL for the default chip
diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h
index 2163c6ee0d364f..72b6c0873852c6 100644
--- a/drivers/char/tpm/tpm.h
+++ b/drivers/char/tpm/tpm.h
@@ -174,6 +174,7 @@ int tpm1_get_timeouts(struct tpm_chip *chip);
 unsigned long tpm1_calc_ordinal_duration(struct tpm_chip *chip, u32 ordinal);
 int tpm1_pcr_extend(struct tpm_chip *chip, u32 pcr_idx, const u8 *hash,
 		    const char *log_msg);
+int tpm1_pcr_reset(struct tpm_chip *chip, u32 pcr_idx, const char *log_msg);
 int tpm1_pcr_read(struct tpm_chip *chip, u32 pcr_idx, u8 *res_buf);
 ssize_t tpm1_getcap(struct tpm_chip *chip, u32 subcap_id, cap_t *cap,
 		    const char *desc, size_t min_cap_length);
@@ -216,6 +217,7 @@ int tpm2_pcr_read(struct tpm_chip *chip, u32 pcr_idx,
 		  struct tpm_digest *digest, u16 *digest_size_ptr);
 int tpm2_pcr_extend(struct tpm_chip *chip, u32 pcr_idx,
 		    struct tpm_digest *digests);
+int tpm2_pcr_reset(struct tpm_chip *chip, u32 pcr_idx);
 int tpm2_get_random(struct tpm_chip *chip, u8 *dest, size_t max);
 ssize_t tpm2_get_tpm_pt(struct tpm_chip *chip, u32 property_id, u32 *value,
 			const char *desc);
diff --git a/drivers/char/tpm/tpm1-cmd.c b/drivers/char/tpm/tpm1-cmd.c
index f7dc986fa4a0a2..9b9ad1fc20ccfd 100644
--- a/drivers/char/tpm/tpm1-cmd.c
+++ b/drivers/char/tpm/tpm1-cmd.c
@@ -478,6 +478,40 @@ int tpm1_pcr_extend(struct tpm_chip *chip, u32 pcr_idx, const u8 *hash,
 	return rc;
 }
 
+struct tpm_pcr_selection {
+	u16 size_of_select;
+	u8 pcr_select[3];
+} __packed;
+
+#define TPM_ORD_PCR_RESET 200
+int tpm1_pcr_reset(struct tpm_chip *chip, u32 pcr_idx, const char *log_msg)
+{
+	struct tpm_pcr_selection selection;
+	struct tpm_buf buf;
+	int i, rc;
+	char tmp;
+
+	rc = tpm_buf_init(&buf, TPM_TAG_RQU_COMMAND, TPM_ORD_PCR_RESET);
+	if (rc)
+		return rc;
+
+	selection.size_of_select = 3;
+
+	for (i = 0; i < selection.size_of_select; i++) {
+		tmp = 0;
+		if (pcr_idx / 8 == i) {
+			pcr_idx -= i * 8;
+			tmp |= 1 << pcr_idx;
+		}
+		selection.pcr_select[i] = tmp;
+	}
+	tpm_buf_append(&buf, (u8 *)&selection, sizeof(selection));
+
+	rc = tpm_transmit_cmd(chip, &buf, sizeof(selection), log_msg);
+	tpm_buf_destroy(&buf);
+	return rc;
+}
+
 #define TPM_ORD_GET_CAP 101
 ssize_t tpm1_getcap(struct tpm_chip *chip, u32 subcap_id, cap_t *cap,
 		    const char *desc, size_t min_cap_length)
diff --git a/drivers/char/tpm/tpm2-cmd.c b/drivers/char/tpm/tpm2-cmd.c
index 4704fa553098b5..c0806b4447c8b2 100644
--- a/drivers/char/tpm/tpm2-cmd.c
+++ b/drivers/char/tpm/tpm2-cmd.c
@@ -269,6 +269,42 @@ int tpm2_pcr_extend(struct tpm_chip *chip, u32 pcr_idx,
 	return rc;
 }
 
+/**
+ * tpm2_pcr_reset() - reset a PCR
+ *
+ * @chip:	TPM chip to use.
+ * @pcr_idx:	index of the PCR.
+ *
+ * Return: Same as with tpm_transmit_cmd.
+ */
+int tpm2_pcr_reset(struct tpm_chip *chip, u32 pcr_idx)
+{
+	struct tpm_buf buf;
+	struct tpm2_null_auth_area auth_area;
+	int rc;
+
+	rc = tpm_buf_init(&buf, TPM2_ST_SESSIONS, TPM2_CC_PCR_RESET);
+	if (rc)
+		return rc;
+
+	tpm_buf_append_u32(&buf, pcr_idx);
+
+	auth_area.handle = cpu_to_be32(TPM2_RS_PW);
+	auth_area.nonce_size = 0;
+	auth_area.attributes = 0;
+	auth_area.auth_size = 0;
+
+	tpm_buf_append_u32(&buf, sizeof(struct tpm2_null_auth_area));
+	tpm_buf_append(&buf, (const unsigned char *)&auth_area,
+		       sizeof(auth_area));
+
+	rc = tpm_transmit_cmd(chip, &buf, 0, "attempting to reset a PCR");
+
+	tpm_buf_destroy(&buf);
+
+	return rc;
+}
+
 struct tpm2_get_random_out {
 	__be16 size;
 	u8 buffer[TPM_MAX_RNG_DATA];
diff --git a/include/linux/tpm.h b/include/linux/tpm.h
index dfeb25a0362dee..8320cbac6f4009 100644
--- a/include/linux/tpm.h
+++ b/include/linux/tpm.h
@@ -219,6 +219,7 @@ enum tpm2_command_codes {
 	TPM2_CC_HIERARCHY_CONTROL	= 0x0121,
 	TPM2_CC_HIERARCHY_CHANGE_AUTH	= 0x0129,
 	TPM2_CC_CREATE_PRIMARY		= 0x0131,
+	TPM2_CC_PCR_RESET		= 0x013D,
 	TPM2_CC_SEQUENCE_COMPLETE	= 0x013E,
 	TPM2_CC_SELF_TEST		= 0x0143,
 	TPM2_CC_STARTUP			= 0x0144,
@@ -423,6 +424,7 @@ extern ssize_t tpm_transmit_cmd(struct tpm_chip *chip, struct tpm_buf *buf,
 				size_t min_rsp_body_length, const char *desc);
 extern int tpm_pcr_read(struct tpm_chip *chip, u32 pcr_idx,
 			struct tpm_digest *digest);
+extern int tpm_pcr_reset(struct tpm_chip *chip, u32 pcr_idx);
 extern int tpm_pcr_extend(struct tpm_chip *chip, u32 pcr_idx,
 			  struct tpm_digest *digests);
 extern int tpm_send(struct tpm_chip *chip, void *cmd, size_t buflen);
@@ -440,6 +442,11 @@ static inline int tpm_pcr_read(struct tpm_chip *chip, int pcr_idx,
 	return -ENODEV;
 }
 
+static inline int tpm_pcr_reset(struct tpm_chip *chip, int pcr_idx)
+{
+	return -ENODEV;
+}
+
 static inline int tpm_pcr_extend(struct tpm_chip *chip, u32 pcr_idx,
 				 struct tpm_digest *digests)
 {
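[Editorial illustration] To make the intended use of this primitive concrete, here is a
minimal sketch of the reset-then-extend flow the commit message describes. It is not part
of the patch: the helper name is made up, and it assumes the mainline tpm_chip fields
nr_allocated_banks and allocated_banks.

#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/tpm.h>

/* Hypothetical caller: put PCR 23 into a known state before sealing. */
static int pcr23_to_known_value(struct tpm_chip *chip,
				const u8 *value, size_t len)
{
	struct tpm_digest *digests;
	int i, rc;

	rc = tpm_pcr_reset(chip, 23);
	if (rc)
		return rc;

	digests = kcalloc(chip->nr_allocated_banks, sizeof(*digests),
			  GFP_KERNEL);
	if (!digests)
		return -ENOMEM;

	/* Extend every allocated bank so PCR 23 is well defined in each. */
	for (i = 0; i < chip->nr_allocated_banks; i++) {
		digests[i].alg_id = chip->allocated_banks[i].alg_id;
		memcpy(digests[i].digest, value,
		       min(len, sizeof(digests[i].digest)));
	}

	rc = tpm_pcr_extend(chip, 23, digests);
	kfree(digests);
	return rc;
}
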
From patchwork Wed May 4 23:20:54 2022
X-Patchwork-Submitter: Evan Green
X-Patchwork-Id: 12838920
From: Evan Green
To: linux-kernel@vger.kernel.org
Cc: Matthew Garrett, dlunev@google.com, zohar@linux.ibm.com,
 jejb@linux.ibm.com, linux-integrity@vger.kernel.org, corbet@lwn.net,
 rjw@rjwysocki.net, gwendal@chromium.org, jarkko@kernel.org,
 linux-pm@vger.kernel.org, Evan Green, Jason Gunthorpe, Peter Huewe
Subject: [PATCH 02/10] tpm: Allow PCR 23 to be restricted to kernel-only use
Date: Wed, 4 May 2022 16:20:54 -0700
Message-Id: <20220504161439.2.I9ded8c8caad27403e9284dfc78ad6cbd845bc98d@changeid>
In-Reply-To: <20220504232102.469959-1-evgreen@chromium.org>
References: <20220504232102.469959-1-evgreen@chromium.org>

From: Matthew Garrett

Under certain circumstances it might be desirable to enable the
creation of TPM-backed secrets that are only accessible to the kernel.
In an ideal world this could be achieved by using TPM localities, but
these don't appear to be available on consumer systems. An alternative
is to simply block userland from modifying one of the resettable PCRs,
leaving it available to the kernel.
If the kernel ensures that no userland can access the TPM while it is
carrying out work, it can reset PCR 23, extend it to an arbitrary
value, create or load a secret, and then reset the PCR again. Even if
userland somehow obtains the sealed material, it will be unable to
unseal it since PCR 23 will never be in the appropriate state.

Signed-off-by: Matthew Garrett
Signed-off-by: Evan Green
---
Matthew's original version of this patch is at:
https://patchwork.kernel.org/patch/12096491/

 drivers/char/tpm/Kconfig          | 10 +++++++++
 drivers/char/tpm/tpm-dev-common.c |  8 +++++++
 drivers/char/tpm/tpm.h            | 21 +++++++++++++++++++
 drivers/char/tpm/tpm1-cmd.c       | 35 +++++++++++++++++++++++++++++++
 drivers/char/tpm/tpm2-cmd.c       | 22 +++++++++++++++++++
 drivers/char/tpm/tpm2-space.c     |  2 +-
 6 files changed, 97 insertions(+), 1 deletion(-)

diff --git a/drivers/char/tpm/Kconfig b/drivers/char/tpm/Kconfig
index 4a5516406c22ed..627d636de327bd 100644
--- a/drivers/char/tpm/Kconfig
+++ b/drivers/char/tpm/Kconfig
@@ -199,4 +199,14 @@ config TCG_FTPM_TEE
 	  This driver proxies for firmware TPM running in TEE.
 
 source "drivers/char/tpm/st33zp24/Kconfig"
+
+config TCG_TPM_RESTRICT_PCR
+	bool "Restrict userland access to PCR 23"
+	depends on TCG_TPM
+	help
+	  If set, block userland from extending or resetting PCR 23. This
+	  allows it to be restricted to in-kernel use, preventing userland
+	  from being able to make use of data sealed to the TPM by the kernel.
+	  This is required for secure hibernation support, but should be left
+	  disabled if any userland may require access to PCR 23.
 endif # TCG_TPM
diff --git a/drivers/char/tpm/tpm-dev-common.c b/drivers/char/tpm/tpm-dev-common.c
index dc4c0a0a512903..7a4e618c7d1942 100644
--- a/drivers/char/tpm/tpm-dev-common.c
+++ b/drivers/char/tpm/tpm-dev-common.c
@@ -198,6 +198,14 @@ ssize_t tpm_common_write(struct file *file, const char __user *buf,
 	priv->response_read = false;
 	*off = 0;
 
+	if (priv->chip->flags & TPM_CHIP_FLAG_TPM2)
+		ret = tpm2_cmd_restricted(priv->chip, priv->data_buffer, size);
+	else
+		ret = tpm1_cmd_restricted(priv->chip, priv->data_buffer, size);
+
+	if (ret)
+		goto out;
+
 	/*
 	 * If in nonblocking mode schedule an async job to send
 	 * the command return the size.
diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h
index 72b6c0873852c6..a2131311fa5447 100644
--- a/drivers/char/tpm/tpm.h
+++ b/drivers/char/tpm/tpm.h
@@ -228,6 +228,8 @@ void tpm2_shutdown(struct tpm_chip *chip, u16 shutdown_type);
 unsigned long tpm2_calc_ordinal_duration(struct tpm_chip *chip, u32 ordinal);
 int tpm2_probe(struct tpm_chip *chip);
 int tpm2_get_cc_attrs_tbl(struct tpm_chip *chip);
+int tpm_find_and_validate_cc(struct tpm_chip *chip, struct tpm_space *space,
+			     const void *buf, size_t bufsiz);
 int tpm2_find_cc(struct tpm_chip *chip, u32 cc);
 int tpm2_init_space(struct tpm_space *space, unsigned int buf_size);
 void tpm2_del_space(struct tpm_chip *chip, struct tpm_space *space);
@@ -243,4 +245,23 @@ void tpm_bios_log_setup(struct tpm_chip *chip);
 void tpm_bios_log_teardown(struct tpm_chip *chip);
 int tpm_dev_common_init(void);
 void tpm_dev_common_exit(void);
+
+#ifdef CONFIG_TCG_TPM_RESTRICT_PCR
+#define TPM_RESTRICTED_PCR 23
+
+int tpm1_cmd_restricted(struct tpm_chip *chip, u8 *buffer, size_t size);
+int tpm2_cmd_restricted(struct tpm_chip *chip, u8 *buffer, size_t size);
+#else
+static inline int tpm1_cmd_restricted(struct tpm_chip *chip, u8 *buffer,
+				      size_t size)
+{
+	return 0;
+}
+
+static inline int tpm2_cmd_restricted(struct tpm_chip *chip, u8 *buffer,
+				      size_t size)
+{
+	return 0;
+}
+#endif
 #endif
diff --git a/drivers/char/tpm/tpm1-cmd.c b/drivers/char/tpm/tpm1-cmd.c
index 9b9ad1fc20ccfd..d404f4092eec8b 100644
--- a/drivers/char/tpm/tpm1-cmd.c
+++ b/drivers/char/tpm/tpm1-cmd.c
@@ -840,3 +840,38 @@ int tpm1_get_pcr_allocation(struct tpm_chip *chip)
 
 	return 0;
 }
+
+#ifdef CONFIG_TCG_TPM_RESTRICT_PCR
+int tpm1_cmd_restricted(struct tpm_chip *chip, u8 *buffer, size_t size)
+{
+	struct tpm_header *header = (struct tpm_header *)buffer;
+	char len, offset;
+	u32 *pcr;
+	int pos;
+
+	switch (be32_to_cpu(header->ordinal)) {
+	case TPM_ORD_PCR_EXTEND:
+		if (size < (TPM_HEADER_SIZE + sizeof(u32)))
+			return -EINVAL;
+		pcr = (u32 *)&buffer[TPM_HEADER_SIZE];
+		if (be32_to_cpu(*pcr) == TPM_RESTRICTED_PCR)
+			return -EPERM;
+		break;
+	case TPM_ORD_PCR_RESET:
+		if (size < (TPM_HEADER_SIZE + 1))
+			return -EINVAL;
+		len = buffer[TPM_HEADER_SIZE];
+		if (size < (TPM_HEADER_SIZE + 1 + len))
+			return -EINVAL;
+		offset = TPM_RESTRICTED_PCR / 8;
+		if (len <= offset)
+			break;
+		pos = TPM_HEADER_SIZE + 1 + offset;
+		if (buffer[pos] & (1 << (TPM_RESTRICTED_PCR - 8 * offset)))
+			return -EPERM;
+		break;
+	}
+
+	return 0;
+}
+#endif
diff --git a/drivers/char/tpm/tpm2-cmd.c b/drivers/char/tpm/tpm2-cmd.c
index c0806b4447c8b2..439e26336923df 100644
--- a/drivers/char/tpm/tpm2-cmd.c
+++ b/drivers/char/tpm/tpm2-cmd.c
@@ -802,3 +802,25 @@ int tpm2_find_cc(struct tpm_chip *chip, u32 cc)
 
 	return -1;
 }
+
+#ifdef CONFIG_TCG_TPM_RESTRICT_PCR
+int tpm2_cmd_restricted(struct tpm_chip *chip, u8 *buffer, size_t size)
+{
+	int cc = tpm_find_and_validate_cc(chip, NULL, buffer, size);
+	u32 *handle;
+
+	switch (cc) {
+	case TPM2_CC_PCR_EXTEND:
+	case TPM2_CC_PCR_RESET:
+		if (size < (TPM_HEADER_SIZE + sizeof(u32)))
+			return -EINVAL;
+
+		handle = (u32 *)&buffer[TPM_HEADER_SIZE];
+		if (be32_to_cpu(*handle) == TPM_RESTRICTED_PCR)
+			return -EPERM;
+		break;
+	}
+
+	return 0;
+}
+#endif
diff --git a/drivers/char/tpm/tpm2-space.c b/drivers/char/tpm/tpm2-space.c
index ffb35f0154c16c..6f51cd92c6400f 100644
--- a/drivers/char/tpm/tpm2-space.c
+++ b/drivers/char/tpm/tpm2-space.c
@@ -262,7 +262,7 @@ static int tpm2_map_command(struct tpm_chip *chip, u32 cc, u8 *cmd)
 	return 0;
 }
 
-static int tpm_find_and_validate_cc(struct tpm_chip *chip,
+int tpm_find_and_validate_cc(struct tpm_chip *chip,
 			     struct tpm_space *space,
 			     const void *cmd, size_t len)
 {
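[Editorial illustration] A quick worked example of the bitmap math in the TPM 1.2 filter
above: each byte of the PCR selection blob covers eight PCRs, so PCR 23 lives in byte
23 / 8 == 2 at bit 23 % 8 == 7. This standalone check is illustrative only, not part of
the patch:

#include <assert.h>
#include <stdint.h>

int main(void)
{
	uint8_t pcr_select[3] = { 0, 0, 0 };
	unsigned int pcr = 23;

	/* Each byte of pcr_select covers eight PCRs. */
	pcr_select[pcr / 8] |= 1u << (pcr % 8);
	assert(pcr_select[2] == 0x80);	/* PCR 23 = byte 2, bit 7 */
	return 0;
}
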
From patchwork Wed May 4 23:20:55 2022
X-Patchwork-Submitter: Evan Green
X-Patchwork-Id: 12838919
From: Evan Green
To: linux-kernel@vger.kernel.org
Cc: Matthew Garrett, dlunev@google.com, zohar@linux.ibm.com,
 jejb@linux.ibm.com, linux-integrity@vger.kernel.org, corbet@lwn.net,
 rjw@rjwysocki.net, gwendal@chromium.org, jarkko@kernel.org,
 linux-pm@vger.kernel.org, Evan Green, David Howells, James Morris,
 "Serge E. Hallyn", keyrings@vger.kernel.org,
 linux-security-module@vger.kernel.org
Subject: [PATCH 03/10] security: keys: trusted: Parse out individual
 components of the key blob
Date: Wed, 4 May 2022 16:20:55 -0700
Message-Id: <20220504161439.3.Id409e0cf397bd98b76df6eb60ff16454679dd23d@changeid>
In-Reply-To: <20220504232102.469959-1-evgreen@chromium.org>
References: <20220504232102.469959-1-evgreen@chromium.org>

From: Matthew Garrett

Performing any sort of state validation of a sealed TPM blob requires
being able to access the individual members in the response. Parse the
blob sufficiently to be able to stash pointers to each member, along
with the length.

Signed-off-by: Matthew Garrett
Signed-off-by: Evan Green
---
Matthew's original version of this patch is at:
https://patchwork.kernel.org/patch/12096489/

 include/keys/trusted-type.h               |  8 +++
 security/keys/trusted-keys/trusted_tpm2.c | 67 ++++++++++++++++++++++-
 2 files changed, 73 insertions(+), 2 deletions(-)

diff --git a/include/keys/trusted-type.h b/include/keys/trusted-type.h
index d89fa2579ac056..8a793ae1ad9f70 100644
--- a/include/keys/trusted-type.h
+++ b/include/keys/trusted-type.h
@@ -22,15 +22,23 @@
 #define MAX_BLOB_SIZE			512
 #define MAX_PCRINFO_SIZE		64
 #define MAX_DIGEST_SIZE			64
+#define MAX_CREATION_DATA		412
+#define MAX_TK				76
 
 struct trusted_key_payload {
 	struct rcu_head rcu;
 	unsigned int key_len;
 	unsigned int blob_len;
+	unsigned int creation_len;
+	unsigned int creation_hash_len;
+	unsigned int tk_len;
 	unsigned char migratable;
 	unsigned char old_format;
 	unsigned char key[MAX_KEY_SIZE + 1];
 	unsigned char blob[MAX_BLOB_SIZE];
+	unsigned char *creation;
+	unsigned char *creation_hash;
+	unsigned char *tk;
 };
 
 struct trusted_key_options {
diff --git a/security/keys/trusted-keys/trusted_tpm2.c b/security/keys/trusted-keys/trusted_tpm2.c
index 0165da386289c3..296a00f872ba40 100644
--- a/security/keys/trusted-keys/trusted_tpm2.c
+++ b/security/keys/trusted-keys/trusted_tpm2.c
@@ -215,6 +215,63 @@ static void tpm2_buf_append_auth(struct tpm_buf *buf, u32 session_handle,
 	tpm_buf_append(buf, hmac, hmac_len);
 }
 
+static int tpm2_unpack_blob(struct trusted_key_payload *payload)
+{
+	int tmp, offset;
+
+	/* Find the length of the private data */
+	tmp = be16_to_cpup((__be16 *)&payload->blob[0]);
+	offset = tmp + 2;
+	if (offset > payload->blob_len)
+		return -EFAULT;
+
+	/* Find the length of the public data */
+	tmp = be16_to_cpup((__be16 *)&payload->blob[offset]);
+	offset += tmp + 2;
+	if (offset > payload->blob_len)
+		return -EFAULT;
+
+	/* Find the length of the creation data and store it */
+	tmp = be16_to_cpup((__be16 *)&payload->blob[offset]);
+	if (tmp > MAX_CREATION_DATA)
+		return -E2BIG;
+
+	if ((offset + tmp + 2) > payload->blob_len)
+		return -EFAULT;
+
+	payload->creation = &payload->blob[offset + 2];
+	payload->creation_len = tmp;
+	offset += tmp + 2;
+
+	/* Find the length of the creation hash and store it */
+	tmp = be16_to_cpup((__be16 *)&payload->blob[offset]);
+	if (tmp > MAX_DIGEST_SIZE)
+		return -E2BIG;
+
+	if ((offset + tmp + 2) > payload->blob_len)
+		return -EFAULT;
+
+	payload->creation_hash = &payload->blob[offset + 2];
+	payload->creation_hash_len = tmp;
+	offset += tmp + 2;
+
+	/*
+	 * Store the creation ticket. TPMT_TK_CREATION is two bytes of tag,
+	 * four bytes of handle, and then the digest length and digest data.
+	 */
+	tmp = be16_to_cpup((__be16 *)&payload->blob[offset + 6]);
+	if (tmp > MAX_TK)
+		return -E2BIG;
+
+	if ((offset + tmp + 8) > payload->blob_len)
+		return -EFAULT;
+
+	payload->tk = &payload->blob[offset];
+	payload->tk_len = tmp + 8;
+
+	return 0;
+}
+
 /**
  * tpm2_seal_trusted() - seal the payload of a trusted key
  *
@@ -229,6 +286,7 @@ int tpm2_seal_trusted(struct tpm_chip *chip,
 		      struct trusted_key_options *options)
 {
 	int blob_len = 0;
+	unsigned int offset;
 	struct tpm_buf buf;
 	u32 hash;
 	u32 flags;
@@ -317,15 +375,17 @@ int tpm2_seal_trusted(struct tpm_chip *chip,
 		rc = -E2BIG;
 		goto out;
 	}
-	if (tpm_buf_length(&buf) < TPM_HEADER_SIZE + 4 + blob_len) {
+	offset = TPM_HEADER_SIZE + 4;
+	if (tpm_buf_length(&buf) < offset + blob_len) {
 		rc = -EFAULT;
 		goto out;
 	}
 
 	blob_len = tpm2_key_encode(payload, options,
-				   &buf.data[TPM_HEADER_SIZE + 4],
+				   &buf.data[offset],
 				   blob_len);
 
+	rc = tpm2_unpack_blob(payload);
 out:
 	tpm_buf_destroy(&buf);
 
@@ -431,7 +491,10 @@ static int tpm2_load_cmd(struct tpm_chip *chip,
 	if (!rc)
 		*blob_handle = be32_to_cpup(
 			(__be32 *) &buf.data[TPM_HEADER_SIZE]);
+	else
+		goto out;
 
+	rc = tpm2_unpack_blob(payload);
 out:
 	if (blob != payload->blob)
 		kfree(blob);
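[Editorial illustration] The blob that tpm2_unpack_blob() walks is a sequence of
big-endian 16-bit length-prefixed fields (TPM2B_PRIVATE, TPM2B_PUBLIC,
TPM2B_CREATION_DATA, TPM2B_DIGEST) followed by a TPMT_TK_CREATION ticket. A minimal
userspace sketch of the same walk, illustrative only and with a hypothetical helper
name:

#include <stddef.h>
#include <stdint.h>

/*
 * Advance past one 16-bit big-endian length-prefixed field, returning the
 * offset of the next field, or -1 if the blob is truncated.
 */
static long skip_tpm2b(const uint8_t *blob, size_t blob_len, size_t offset)
{
	uint16_t len;

	if (offset + 2 > blob_len)
		return -1;
	len = (uint16_t)((blob[offset] << 8) | blob[offset + 1]);
	if (offset + 2 + len > blob_len)
		return -1;
	return (long)(offset + 2 + len);
}
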
From patchwork Wed May 4 23:20:56 2022
X-Patchwork-Submitter: Evan Green
X-Patchwork-Id: 12838916
From: Evan Green
To: linux-kernel@vger.kernel.org
Cc: Matthew Garrett, dlunev@google.com, zohar@linux.ibm.com,
 jejb@linux.ibm.com, linux-integrity@vger.kernel.org, corbet@lwn.net,
 rjw@rjwysocki.net, gwendal@chromium.org, jarkko@kernel.org,
 linux-pm@vger.kernel.org, Evan Green, David Howells, James Morris,
 "Serge E. Hallyn", keyrings@vger.kernel.org, linux-doc@vger.kernel.org,
 linux-security-module@vger.kernel.org
Subject: [PATCH 04/10] security: keys: trusted: Allow storage of PCR values
 in creation data
Date: Wed, 4 May 2022 16:20:56 -0700
Message-Id: <20220504161439.4.I32591db064b6cdc91850d777f363c9d05c985b39@changeid>
In-Reply-To: <20220504232102.469959-1-evgreen@chromium.org>
References: <20220504232102.469959-1-evgreen@chromium.org>

From: Matthew Garrett

When TPMs generate keys, they can also generate some information
describing the state of the PCRs at creation time. This data can then
later be certified by the TPM, allowing verification of the PCR
values. This allows us to determine the state of the system at the
time a key was generated. Add an additional argument to the trusted
key creation options, allowing the user to provide the set of PCRs
that should have their values incorporated into the creation data.

Signed-off-by: Matthew Garrett
Signed-off-by: Evan Green
---
Matthew's original version of this patch is at:
https://patchwork.kernel.org/patch/12096503/

 .../security/keys/trusted-encrypted.rst   |  4 +++
 include/keys/trusted-type.h               |  1 +
 security/keys/trusted-keys/trusted_tpm1.c |  9 +++++++
 security/keys/trusted-keys/trusted_tpm2.c | 25 +++++++++++++++++--
 4 files changed, 37 insertions(+), 2 deletions(-)

diff --git a/Documentation/security/keys/trusted-encrypted.rst b/Documentation/security/keys/trusted-encrypted.rst
index f614dad7de12f9..7215b067bf128f 100644
--- a/Documentation/security/keys/trusted-encrypted.rst
+++ b/Documentation/security/keys/trusted-encrypted.rst
@@ -170,6 +170,10 @@ Usage::
      policyhandle= handle to an authorization policy session that defines the
                    same policy and with the same hash algorithm as was used to
                    seal the key.
+     creationpcrs= hex integer representing the set of PCR values to be
+                   included in the PCR creation data. The bit corresponding
+                   to each PCR should be 1 to be included, 0 to be ignored.
+                   TPM2 only.
 
 "keyctl print" returns an ascii hex copy of the sealed key, which is in standard
 TPM_STORED_DATA format.  The key length for new keys are always in bytes.
diff --git a/include/keys/trusted-type.h b/include/keys/trusted-type.h
index 8a793ae1ad9f70..b3ac4afe8ba987 100644
--- a/include/keys/trusted-type.h
+++ b/include/keys/trusted-type.h
@@ -54,6 +54,7 @@ struct trusted_key_options {
 	uint32_t policydigest_len;
 	unsigned char policydigest[MAX_DIGEST_SIZE];
 	uint32_t policyhandle;
+	uint32_t creation_pcrs;
 };
 
 struct trusted_key_ops {
diff --git a/security/keys/trusted-keys/trusted_tpm1.c b/security/keys/trusted-keys/trusted_tpm1.c
index aa108bea6739b3..2975827c01bec0 100644
--- a/security/keys/trusted-keys/trusted_tpm1.c
+++ b/security/keys/trusted-keys/trusted_tpm1.c
@@ -713,6 +713,7 @@ enum {
 	Opt_hash,
 	Opt_policydigest,
 	Opt_policyhandle,
+	Opt_creationpcrs,
 };
 
 static const match_table_t key_tokens = {
@@ -725,6 +726,7 @@ static const match_table_t key_tokens = {
 	{Opt_hash, "hash=%s"},
 	{Opt_policydigest, "policydigest=%s"},
 	{Opt_policyhandle, "policyhandle=%s"},
+	{Opt_creationpcrs, "creationpcrs=%s"},
 	{Opt_err, NULL}
 };
 
@@ -858,6 +860,13 @@ static int getoptions(char *c, struct trusted_key_payload *pay,
 			return -EINVAL;
 		opt->policyhandle = handle;
 		break;
+	case Opt_creationpcrs:
+		if (!tpm2)
+			return -EINVAL;
+		res = kstrtouint(args[0].from, 16, &opt->creation_pcrs);
+		if (res < 0)
+			return -EINVAL;
+		break;
 	default:
 		return -EINVAL;
 	}
diff --git a/security/keys/trusted-keys/trusted_tpm2.c b/security/keys/trusted-keys/trusted_tpm2.c
index 296a00f872ba40..b7ddb78e644d17 100644
--- a/security/keys/trusted-keys/trusted_tpm2.c
+++ b/security/keys/trusted-keys/trusted_tpm2.c
@@ -290,7 +290,7 @@ int tpm2_seal_trusted(struct tpm_chip *chip,
 	struct tpm_buf buf;
 	u32 hash;
 	u32 flags;
-	int i;
+	int i, j;
 	int rc;
 
 	for (i = 0; i < ARRAY_SIZE(tpm2_hash_map); i++) {
@@ -359,7 +359,28 @@ int tpm2_seal_trusted(struct tpm_chip *chip,
 	tpm_buf_append_u16(&buf, 0);
 
 	/* creation PCR */
-	tpm_buf_append_u32(&buf, 0);
+	if (options->creation_pcrs) {
+		/* One bank */
+		tpm_buf_append_u32(&buf, 1);
+		/* Which bank to use */
+		tpm_buf_append_u16(&buf, hash);
+		/* Length of the PCR bitmask */
+		tpm_buf_append_u8(&buf, 3);
+		/* PCR bitmask */
+		for (i = 0; i < 3; i++) {
+			char tmp = 0;
+
+			for (j = 0; j < 8; j++) {
+				char bit = (i * 8) + j;
+
+				if (options->creation_pcrs & (1 << bit))
+					tmp |= (1 << j);
+			}
+			tpm_buf_append_u8(&buf, tmp);
+		}
+	} else {
+		tpm_buf_append_u32(&buf, 0);
+	}
 
 	if (buf.flags & TPM_BUF_OVERFLOW) {
 		rc = -E2BIG;
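[Editorial illustration] To make the creationpcrs encoding concrete: with
creationpcrs=0x800000 (bit 23, i.e. PCR 23), the loop above emits the three mask bytes
00 00 80, and a key could be created with a command along the lines of
keyctl add trusted kmk "new 32 creationpcrs=0x800000" @u, following the syntax
documented in the rst change. A standalone check of the mask derivation, illustrative
only:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t creation_pcrs = 0x800000;	/* bit 23 => PCR 23 */
	uint8_t mask[3];
	int i, j;

	/* Mirror of the mask-building loop in tpm2_seal_trusted() above. */
	for (i = 0; i < 3; i++) {
		mask[i] = 0;
		for (j = 0; j < 8; j++) {
			if (creation_pcrs & (1u << (i * 8 + j)))
				mask[i] |= 1u << j;
		}
	}

	printf("%02x %02x %02x\n", mask[0], mask[1], mask[2]); /* 00 00 80 */
	return 0;
}
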
From patchwork Wed May 4 23:20:57 2022
X-Patchwork-Submitter: Evan Green
X-Patchwork-Id: 12838914
From: Evan Green
To: linux-kernel@vger.kernel.org
Cc: Matthew Garrett, dlunev@google.com, zohar@linux.ibm.com,
 jejb@linux.ibm.com, linux-integrity@vger.kernel.org, corbet@lwn.net,
 rjw@rjwysocki.net, gwendal@chromium.org, jarkko@kernel.org,
 linux-pm@vger.kernel.org, Evan Green, David Howells, Hao Wu,
 James Morris, "Serge E. Hallyn", axelj, keyrings@vger.kernel.org,
 linux-security-module@vger.kernel.org
Subject: [PATCH 05/10] security: keys: trusted: Verify creation data
Date: Wed, 4 May 2022 16:20:57 -0700
Message-Id: <20220504161439.5.I6cdb522cb5ea28fcd1e35b4cd92cbd067f99269a@changeid>
In-Reply-To: <20220504232102.469959-1-evgreen@chromium.org>
References: <20220504232102.469959-1-evgreen@chromium.org>

If a loaded key contains creation data, ask the TPM to verify that
creation data. This allows users like encrypted hibernate to know that
the loaded and parsed creation data has not been tampered with.
Partially-sourced-from: Matthew Garrett
Signed-off-by: Evan Green
---
Source material for this change is at:
https://patchwork.kernel.org/project/linux-pm/patch/20210220013255.1083202-9-matthewgarrett@google.com/

 include/linux/tpm.h                       |  1 +
 security/keys/trusted-keys/trusted_tpm2.c | 72 ++++++++++++++++++++++-
 2 files changed, 72 insertions(+), 1 deletion(-)

diff --git a/include/linux/tpm.h b/include/linux/tpm.h
index 8320cbac6f4009..438f8bc0a50582 100644
--- a/include/linux/tpm.h
+++ b/include/linux/tpm.h
@@ -224,6 +224,7 @@ enum tpm2_command_codes {
 	TPM2_CC_SELF_TEST	= 0x0143,
 	TPM2_CC_STARTUP		= 0x0144,
 	TPM2_CC_SHUTDOWN	= 0x0145,
+	TPM2_CC_CERTIFYCREATION	= 0x014A,
 	TPM2_CC_NV_READ		= 0x014E,
 	TPM2_CC_CREATE		= 0x0153,
 	TPM2_CC_LOAD		= 0x0157,
diff --git a/security/keys/trusted-keys/trusted_tpm2.c b/security/keys/trusted-keys/trusted_tpm2.c
index b7ddb78e644d17..6db30a5fc320c0 100644
--- a/security/keys/trusted-keys/trusted_tpm2.c
+++ b/security/keys/trusted-keys/trusted_tpm2.c
@@ -600,6 +600,69 @@ static int tpm2_unseal_cmd(struct tpm_chip *chip,
 	return rc;
 }
 
+/**
+ * tpm2_certify_creation() - execute a TPM2_CertifyCreation command
+ *
+ * @chip: TPM chip to use
+ * @payload: the key data in clear and encrypted form
+ * @blob_handle: the loaded TPM handle of the key
+ *
+ * Return: 0 on success
+ *         -EINVAL on tpm error status
+ *         < 0 error from tpm_send or tpm_buf_init
+ */
+static int tpm2_certify_creation(struct tpm_chip *chip,
+				 struct trusted_key_payload *payload,
+				 u32 blob_handle)
+{
+	struct tpm_header *head;
+	struct tpm_buf buf;
+	int rc;
+
+	rc = tpm_buf_init(&buf, TPM2_ST_SESSIONS, TPM2_CC_CERTIFYCREATION);
+	if (rc)
+		return rc;
+
+	/* Use TPM_RH_NULL for signHandle */
+	tpm_buf_append_u32(&buf, 0x40000007);
+
+	/* Object handle */
+	tpm_buf_append_u32(&buf, blob_handle);
+
+	/* Auth */
+	tpm_buf_append_u32(&buf, 9);
+	tpm_buf_append_u32(&buf, TPM2_RS_PW);
+	tpm_buf_append_u16(&buf, 0);
+	tpm_buf_append_u8(&buf, 0);
+	tpm_buf_append_u16(&buf, 0);
+
+	/* Qualifying data */
+	tpm_buf_append_u16(&buf, 0);
+
+	/* Creation data hash */
+	tpm_buf_append_u16(&buf, payload->creation_hash_len);
+	tpm_buf_append(&buf, payload->creation_hash,
+		       payload->creation_hash_len);
+
+	/* signature scheme */
+	tpm_buf_append_u16(&buf, TPM_ALG_NULL);
+
+	/* creation ticket */
+	tpm_buf_append(&buf, payload->tk, payload->tk_len);
+
+	rc = tpm_transmit_cmd(chip, &buf, 6, "certifying creation data");
+	if (rc)
+		goto out;
+
+	head = (struct tpm_header *)buf.data;
+
+	if (head->return_code != 0)
+		rc = -EINVAL;
+out:
+	tpm_buf_destroy(&buf);
+	return rc;
+}
+
 /**
  * tpm2_unseal_trusted() - unseal the payload of a trusted key
  *
@@ -625,8 +688,15 @@ int tpm2_unseal_trusted(struct tpm_chip *chip,
 		goto out;
 
 	rc = tpm2_unseal_cmd(chip, payload, options, blob_handle);
-	tpm2_flush_context(chip, blob_handle);
+	if (rc)
+		goto flush;
+
+	if (payload->creation_len)
+		rc = tpm2_certify_creation(chip, payload, blob_handle);
+
+flush:
+	tpm2_flush_context(chip, blob_handle);
 out:
 	tpm_put_ops(chip);
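[Editorial note] On why this works with no signing key: with signHandle TPM_RH_NULL and
an inScheme of TPM_ALG_NULL, TPM2_CertifyCreation produces no signature, but the TPM
still validates the creation ticket against the supplied creation data hash and fails
the command if they do not match; that failure is what tpm2_certify_creation() maps to
-EINVAL. The caller-side contract might be summarized by a sketch like this
(hypothetical helper, not in the series):

#include <keys/trusted-type.h>
#include <linux/types.h>

/*
 * After tpm2_unseal_trusted() succeeds, any payload that carried creation
 * data (creation_len != 0) has also had that data certified by the TPM,
 * so its parsed PCR values can be trusted by callers such as encrypted
 * hibernation.
 */
static bool creation_data_certified(const struct trusted_key_payload *p)
{
	return p->creation_len != 0;
}
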
From patchwork Wed May 4 23:20:58 2022
X-Patchwork-Submitter: Evan Green
X-Patchwork-Id: 12838921
From: Evan Green
To: linux-kernel@vger.kernel.org
Cc: Matthew Garrett, dlunev@google.com, zohar@linux.ibm.com,
 jejb@linux.ibm.com, linux-integrity@vger.kernel.org, corbet@lwn.net,
 rjw@rjwysocki.net, gwendal@chromium.org, jarkko@kernel.org,
 linux-pm@vger.kernel.org, Evan Green, Len Brown, Pavel Machek,
 "Rafael J. Wysocki"
Subject: [PATCH 06/10] PM: hibernate: Add kernel-based encryption
Date: Wed, 4 May 2022 16:20:58 -0700
Message-Id: <20220504161439.6.Ifff11e11797a1bde0297577ecb2f7ebb3f9e2b04@changeid>
In-Reply-To: <20220504232102.469959-1-evgreen@chromium.org>
References: <20220504232102.469959-1-evgreen@chromium.org>

Enabling the kernel to do encryption and integrity checks on the
hibernate image prevents a malicious userspace from escalating to
kernel execution via hibernation resume. As a first step toward this,
add the scaffolding needed for the kernel to do AEAD encryption on the
hibernate image, giving us both secrecy and integrity.

We currently hardwire the encryption to be gcm(aes) in 16-page chunks.
This strikes a balance between minimizing the authentication tag
overhead on storage and keeping a modest-sized staging buffer. With
this chunk size, we'd generate 2MB of authentication tag data on an
8GB hibernation image.
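[Editorial illustration] The 2MB figure checks out, assuming 4 KiB pages and 16-byte
GCM auth tags; a quick back-of-the-envelope verification, illustrative only:

#include <stdio.h>

int main(void)
{
	unsigned long long image_bytes = 8ULL << 30;	/* 8 GiB image */
	unsigned long long pages = image_bytes >> 12;	/* 4 KiB pages */
	unsigned long long chunks = pages / 16;		/* 16-page chunks */

	/* 131072 chunks x 16-byte auth tag = 2 MiB of tag data. */
	printf("%llu tags, %llu bytes\n", chunks, chunks * 16);
	return 0;
}
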
With this chunk size, we'd generate 2MB of authentication tag data on an 8GB hiberation image. The encryption currently sits on top of the core snapshot functionality, wired up only if requested in the uswsusp path. This could potentially be lowered into the common snapshot code given a mechanism to stitch the key contents into the image itself. To avoid forcing usermode to deal with sequencing the auth tags in with the data, we stitch the auth tags in to the snapshot after each chunk of pages. This complicates the read and write functions, as we roll through the flow of (for read) 1) fill the staging buffer with encrypted data, 2) feed the data pages out to user mode, 3) feed the tag out to user mode. To avoid having each syscall return a small and variable amount of data, the encrypted versions of read and write operate in a loop, allowing an arbitrary amount of data through per syscall. One alternative that would simplify things here would be a streaming interface to AEAD. Then we could just stream the entire hibernate image through directly, and handle a single tag at the end. However there is a school of thought that suggests a streaming interface to AEAD represents a loaded footgun, as it tempts the caller to act on the decrypted but not yet verified data, defeating the purpose of AEAD. With this change alone, we don't actually protect ourselves from malicious userspace at all, since we kindly hand the key in plaintext to usermode. In later changes, we'll seal the key with the TPM before handing it back to usermode, so they can't decrypt or tamper with the key themselves. Signed-off-by: Evan Green --- Documentation/power/userland-swsusp.rst | 8 + include/uapi/linux/suspend_ioctls.h | 15 +- kernel/power/Kconfig | 13 + kernel/power/Makefile | 1 + kernel/power/snapenc.c | 491 ++++++++++++++++++++++++ kernel/power/user.c | 40 +- kernel/power/user.h | 101 +++++ 7 files changed, 657 insertions(+), 12 deletions(-) create mode 100644 kernel/power/snapenc.c create mode 100644 kernel/power/user.h diff --git a/Documentation/power/userland-swsusp.rst b/Documentation/power/userland-swsusp.rst index 1cf62d80a9ca10..f759915a78ce98 100644 --- a/Documentation/power/userland-swsusp.rst +++ b/Documentation/power/userland-swsusp.rst @@ -115,6 +115,14 @@ SNAPSHOT_S2RAM to resume the system from RAM if there's enough battery power or restore its state on the basis of the saved suspend image otherwise) +SNAPSHOT_ENABLE_ENCRYPTION + Enables encryption of the hibernate image within the kernel. Upon suspend + (ie when the snapshot device was opened for reading), returns a blob + representing the random encryption key the kernel created to encrypt the + hibernate image with. Upon resume (ie when the snapshot device was opened + for writing), receives a blob from usermode containing the key material + previously returned during hibernate. + The device's read() operation can be used to transfer the snapshot image from the kernel. It has the following limitations: diff --git a/include/uapi/linux/suspend_ioctls.h b/include/uapi/linux/suspend_ioctls.h index bcce04e21c0dce..b73026ef824bb9 100644 --- a/include/uapi/linux/suspend_ioctls.h +++ b/include/uapi/linux/suspend_ioctls.h @@ -13,6 +13,18 @@ struct resume_swap_area { __u32 dev; } __attribute__((packed)); +#define USWSUSP_KEY_NONCE_SIZE 16 + +/* + * This structure is used to pass the kernel's hibernate encryption key in + * either direction. 
+ */ +struct uswsusp_key_blob { + __u32 blob_len; + __u8 blob[512]; + __u8 nonce[USWSUSP_KEY_NONCE_SIZE]; +} __attribute__((packed)); + #define SNAPSHOT_IOC_MAGIC '3' #define SNAPSHOT_FREEZE _IO(SNAPSHOT_IOC_MAGIC, 1) #define SNAPSHOT_UNFREEZE _IO(SNAPSHOT_IOC_MAGIC, 2) @@ -29,6 +41,7 @@ struct resume_swap_area { #define SNAPSHOT_PREF_IMAGE_SIZE _IO(SNAPSHOT_IOC_MAGIC, 18) #define SNAPSHOT_AVAIL_SWAP_SIZE _IOR(SNAPSHOT_IOC_MAGIC, 19, __kernel_loff_t) #define SNAPSHOT_ALLOC_SWAP_PAGE _IOR(SNAPSHOT_IOC_MAGIC, 20, __kernel_loff_t) -#define SNAPSHOT_IOC_MAXNR 20 +#define SNAPSHOT_ENABLE_ENCRYPTION _IOWR(SNAPSHOT_IOC_MAGIC, 21, struct uswsusp_key_blob) +#define SNAPSHOT_IOC_MAXNR 21 #endif /* _LINUX_SUSPEND_IOCTLS_H */ diff --git a/kernel/power/Kconfig b/kernel/power/Kconfig index a12779650f1529..8249968962bcd5 100644 --- a/kernel/power/Kconfig +++ b/kernel/power/Kconfig @@ -92,6 +92,19 @@ config HIBERNATION_SNAPSHOT_DEV If in doubt, say Y. +config ENCRYPTED_HIBERNATION + bool "Encryption support for userspace snapshots" + depends on HIBERNATION_SNAPSHOT_DEV + depends on CRYPTO_AEAD2=y + default n + help + Enable support for kernel-based encryption of hibernation snapshots + created by uswsusp tools. + + Say N if userspace handles the image encryption. + + If in doubt, say N. + config PM_STD_PARTITION string "Default resume partition" depends on HIBERNATION diff --git a/kernel/power/Makefile b/kernel/power/Makefile index 874ad834dc8daf..7be08f2e0e3b68 100644 --- a/kernel/power/Makefile +++ b/kernel/power/Makefile @@ -16,6 +16,7 @@ obj-$(CONFIG_SUSPEND) += suspend.o obj-$(CONFIG_PM_TEST_SUSPEND) += suspend_test.o obj-$(CONFIG_HIBERNATION) += hibernate.o snapshot.o swap.o obj-$(CONFIG_HIBERNATION_SNAPSHOT_DEV) += user.o +obj-$(CONFIG_ENCRYPTED_HIBERNATION) += snapenc.o obj-$(CONFIG_PM_AUTOSLEEP) += autosleep.o obj-$(CONFIG_PM_WAKELOCKS) += wakelock.o diff --git a/kernel/power/snapenc.c b/kernel/power/snapenc.c new file mode 100644 index 00000000000000..cb90692d6ab83a --- /dev/null +++ b/kernel/power/snapenc.c @@ -0,0 +1,491 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* This file provides encryption support for system snapshots. */ + +#include +#include +#include +#include +#include +#include + +#include "power.h" +#include "user.h" + +/* Encrypt more data from the snapshot into the staging area. */ +static int snapshot_encrypt_refill(struct snapshot_data *data) +{ + + u8 nonce[GCM_AES_IV_SIZE]; + int pg_idx; + int res; + struct aead_request *req = data->aead_req; + DECLARE_CRYPTO_WAIT(wait); + size_t total = 0; + + /* + * The first buffer is the associated data, set to the offset to prevent + * attacks that rearrange chunks. + */ + sg_set_buf(&data->sg[0], &data->crypt_total, sizeof(data->crypt_total)); + + /* Load the crypt buffer with snapshot pages. */ + for (pg_idx = 0; pg_idx < CHUNK_SIZE; pg_idx++) { + void *buf = data->crypt_pages[pg_idx]; + + res = snapshot_read_next(&data->handle); + if (res < 0) + return res; + if (res == 0) + break; + + WARN_ON(res != PAGE_SIZE); + + /* + * Copy the page into the staging area. A future optimization + * could potentially skip this copy for lowmem pages. + */ + memcpy(buf, data_of(data->handle), PAGE_SIZE); + sg_set_buf(&data->sg[1 + pg_idx], buf, PAGE_SIZE); + total += PAGE_SIZE; + } + + sg_set_buf(&data->sg[1 + pg_idx], &data->auth_tag, SNAPSHOT_AUTH_TAG_SIZE); + aead_request_set_callback(req, 0, crypto_req_done, &wait); + /* + * Use incrementing nonces for each chunk, since a 64 bit value won't + * roll into re-use for any given hibernate image. 
+ */ + memcpy(&nonce[0], &data->nonce_low, sizeof(data->nonce_low)); + memcpy(&nonce[sizeof(data->nonce_low)], + &data->nonce_high, + sizeof(nonce) - sizeof(data->nonce_low)); + + data->nonce_low += 1; + /* Total does not include AAD or the auth tag. */ + aead_request_set_crypt(req, data->sg, data->sg, total, nonce); + res = crypto_wait_req(crypto_aead_encrypt(req), &wait); + if (res) + return res; + + data->crypt_size = total; + data->crypt_total += total; + return 0; +} + +/* Decrypt data from the staging area and push it to the snapshot. */ +static int snapshot_decrypt_drain(struct snapshot_data *data) +{ + u8 nonce[GCM_AES_IV_SIZE]; + int page_count; + int pg_idx; + int res; + struct aead_request *req = data->aead_req; + DECLARE_CRYPTO_WAIT(wait); + size_t total; + + /* Set up the associated data. */ + sg_set_buf(&data->sg[0], &data->crypt_total, sizeof(data->crypt_total)); + + /* + * Get the number of full pages, which could be short at the end. There + * should also be a tag at the end, so the offset won't be an even page. + */ + page_count = data->crypt_offset >> PAGE_SHIFT; + total = page_count << PAGE_SHIFT; + if ((total == 0) || (total == data->crypt_offset)) + return -EINVAL; + + /* + * Load the sg list with the crypt buffer. Inline decrypt back into the + * staging buffer. A future optimization could decrypt directly into + * lowmem pages. + */ + for (pg_idx = 0; pg_idx < page_count; pg_idx++) + sg_set_buf(&data->sg[1 + pg_idx], data->crypt_pages[pg_idx], PAGE_SIZE); + + /* + * It's possible this is the final decrypt, and there are fewer than + * CHUNK_SIZE pages. If this is the case we would have just written the + * auth tag into the first few bytes of a new page. Copy to the tag if + * so. + */ + if ((page_count < CHUNK_SIZE) && + (data->crypt_offset - total) == sizeof(data->auth_tag)) { + + memcpy(data->auth_tag, + data->crypt_pages[pg_idx], + sizeof(data->auth_tag)); + + } else if (data->crypt_offset != + ((CHUNK_SIZE << PAGE_SHIFT) + SNAPSHOT_AUTH_TAG_SIZE)) { + + return -EINVAL; + } + + sg_set_buf(&data->sg[1 + pg_idx], &data->auth_tag, SNAPSHOT_AUTH_TAG_SIZE); + aead_request_set_callback(req, 0, crypto_req_done, &wait); + memcpy(&nonce[0], &data->nonce_low, sizeof(data->nonce_low)); + memcpy(&nonce[sizeof(data->nonce_low)], + &data->nonce_high, + sizeof(nonce) - sizeof(data->nonce_low)); + + data->nonce_low += 1; + aead_request_set_crypt(req, data->sg, data->sg, total + SNAPSHOT_AUTH_TAG_SIZE, nonce); + res = crypto_wait_req(crypto_aead_decrypt(req), &wait); + if (res) + return res; + + data->crypt_size = 0; + data->crypt_offset = 0; + + /* Push the decrypted pages further down the stack. */ + total = 0; + for (pg_idx = 0; pg_idx < page_count; pg_idx++) { + void *buf = data->crypt_pages[pg_idx]; + + res = snapshot_write_next(&data->handle); + if (res < 0) + return res; + if (res == 0) + break; + + if (!data_of(data->handle)) + return -EINVAL; + + WARN_ON(res != PAGE_SIZE); + + /* + * Copy the page into the staging area. A future optimization + * could potentially skip this copy for lowmem pages. + */ + memcpy(data_of(data->handle), buf, PAGE_SIZE); + total += PAGE_SIZE; + } + + data->crypt_total += total; + return 0; +} + +static ssize_t snapshot_read_next_encrypted(struct snapshot_data *data, + void **buf) +{ + size_t tag_off; + + /* Refill the encrypted buffer if it's empty. 
*/ + if ((data->crypt_size == 0) || + (data->crypt_offset >= + (data->crypt_size + SNAPSHOT_AUTH_TAG_SIZE))) { + + int rc; + + data->crypt_size = 0; + data->crypt_offset = 0; + rc = snapshot_encrypt_refill(data); + if (rc < 0) + return rc; + } + + /* Return data pages if the offset is in that region. */ + if (data->crypt_offset < data->crypt_size) { + size_t pg_idx = data->crypt_offset >> PAGE_SHIFT; + size_t pg_off = data->crypt_offset & (PAGE_SIZE - 1); + *buf = data->crypt_pages[pg_idx] + pg_off; + return PAGE_SIZE - pg_off; + } + + /* Use offsets just beyond the size to return the tag. */ + tag_off = data->crypt_offset - data->crypt_size; + if (tag_off > SNAPSHOT_AUTH_TAG_SIZE) + tag_off = SNAPSHOT_AUTH_TAG_SIZE; + + *buf = data->auth_tag + tag_off; + return SNAPSHOT_AUTH_TAG_SIZE - tag_off; +} + +static ssize_t snapshot_write_next_encrypted(struct snapshot_data *data, + void **buf) +{ + size_t tag_off; + + /* Return data pages if the offset is in that region. */ + if (data->crypt_offset < (PAGE_SIZE * CHUNK_SIZE)) { + size_t pg_idx = data->crypt_offset >> PAGE_SHIFT; + size_t pg_off = data->crypt_offset & (PAGE_SIZE - 1); + *buf = data->crypt_pages[pg_idx] + pg_off; + return PAGE_SIZE - pg_off; + } + + /* Use offsets just beyond the size to return the tag. */ + tag_off = data->crypt_offset - (PAGE_SIZE * CHUNK_SIZE); + if (tag_off > SNAPSHOT_AUTH_TAG_SIZE) + tag_off = SNAPSHOT_AUTH_TAG_SIZE; + + *buf = data->auth_tag + tag_off; + return SNAPSHOT_AUTH_TAG_SIZE - tag_off; +} + +ssize_t snapshot_read_encrypted(struct snapshot_data *data, + char __user *buf, size_t count, loff_t *offp) +{ + ssize_t total = 0; + + /* Loop getting buffers of varying sizes and copying to userspace. */ + while (count) { + size_t copy_size; + size_t not_done; + void *src; + ssize_t src_size = snapshot_read_next_encrypted(data, &src); + + if (src_size <= 0) { + if (total == 0) + return src_size; + + break; + } + + copy_size = min(count, (size_t)src_size); + not_done = copy_to_user(buf + total, src, copy_size); + copy_size -= not_done; + total += copy_size; + count -= copy_size; + data->crypt_offset += copy_size; + if (copy_size == 0) { + if (total == 0) + return -EFAULT; + + break; + } + } + + *offp += total; + return total; +} + +ssize_t snapshot_write_encrypted(struct snapshot_data *data, + const char __user *buf, size_t count, loff_t *offp) +{ + ssize_t total = 0; + + /* Loop getting buffers of varying sizes and copying from. */ + while (count) { + size_t copy_size; + size_t not_done; + void *dst; + ssize_t dst_size = snapshot_write_next_encrypted(data, &dst); + + if (dst_size <= 0) { + if (total == 0) + return dst_size; + + break; + } + + copy_size = min(count, (size_t)dst_size); + not_done = copy_from_user(dst, buf + total, copy_size); + copy_size -= not_done; + total += copy_size; + count -= copy_size; + data->crypt_offset += copy_size; + if (copy_size == 0) { + if (total == 0) + return -EFAULT; + + break; + } + + /* Drain the encrypted buffer if it's full. 
*/ + if ((data->crypt_offset >= + ((PAGE_SIZE * CHUNK_SIZE) + SNAPSHOT_AUTH_TAG_SIZE))) { + + int rc; + + rc = snapshot_decrypt_drain(data); + if (rc < 0) + return rc; + } + } + + *offp += total; + return total; +} + +void snapshot_teardown_encryption(struct snapshot_data *data) +{ + int i; + + if (data->aead_req) { + aead_request_free(data->aead_req); + data->aead_req = NULL; + } + + if (data->aead_tfm) { + crypto_free_aead(data->aead_tfm); + data->aead_tfm = NULL; + } + + for (i = 0; i < CHUNK_SIZE; i++) { + if (data->crypt_pages[i]) { + free_page((unsigned long)data->crypt_pages[i]); + data->crypt_pages[i] = NULL; + } + } +} + +static int snapshot_setup_encryption_common(struct snapshot_data *data) +{ + int i, rc; + + data->crypt_total = 0; + data->crypt_offset = 0; + data->crypt_size = 0; + memset(data->crypt_pages, 0, sizeof(data->crypt_pages)); + /* This only works once per hibernate. */ + if (data->aead_tfm) + return -EINVAL; + + /* Set up the encryption transform */ + data->aead_tfm = crypto_alloc_aead("gcm(aes)", 0, 0); + if (IS_ERR(data->aead_tfm)) { + rc = PTR_ERR(data->aead_tfm); + data->aead_tfm = NULL; + return rc; + } + + rc = -ENOMEM; + data->aead_req = aead_request_alloc(data->aead_tfm, GFP_KERNEL); + if (data->aead_req == NULL) + goto setup_fail; + + /* Allocate the staging area */ + for (i = 0; i < CHUNK_SIZE; i++) { + data->crypt_pages[i] = (void *)__get_free_page(GFP_ATOMIC); + if (data->crypt_pages[i] == NULL) + goto setup_fail; + } + + sg_init_table(data->sg, CHUNK_SIZE + 2); + + /* + * The associated data will be the offset so that blocks can't be + * rearranged. + */ + aead_request_set_ad(data->aead_req, sizeof(data->crypt_total)); + rc = crypto_aead_setauthsize(data->aead_tfm, SNAPSHOT_AUTH_TAG_SIZE); + if (rc) + goto setup_fail; + + return 0; + +setup_fail: + snapshot_teardown_encryption(data); + return rc; +} + +int snapshot_get_encryption_key(struct snapshot_data *data, + struct uswsusp_key_blob __user *key) +{ + u8 aead_key[SNAPSHOT_ENCRYPTION_KEY_SIZE]; + u8 nonce[USWSUSP_KEY_NONCE_SIZE]; + int rc; + /* Don't pull a random key from a world that can be reset. */ + if (data->ready) + return -EPIPE; + + rc = snapshot_setup_encryption_common(data); + if (rc) + return rc; + + /* Build a random starting nonce. */ + get_random_bytes(nonce, sizeof(nonce)); + memcpy(&data->nonce_low, &nonce[0], sizeof(data->nonce_low)); + memcpy(&data->nonce_high, &nonce[8], sizeof(data->nonce_high)); + /* Build a random key */ + get_random_bytes(aead_key, sizeof(aead_key)); + rc = crypto_aead_setkey(data->aead_tfm, aead_key, sizeof(aead_key)); + if (rc) + goto fail; + + /* Hand the key back to user mode (to be changed!) */ + rc = put_user(sizeof(struct uswsusp_key_blob), &key->blob_len); + if (rc) + goto fail; + + rc = copy_to_user(&key->blob, &aead_key, sizeof(aead_key)); + if (rc) + goto fail; + + rc = copy_to_user(&key->nonce, &nonce, sizeof(nonce)); + if (rc) + goto fail; + + return 0; + +fail: + snapshot_teardown_encryption(data); + return rc; +} + +int snapshot_set_encryption_key(struct snapshot_data *data, + struct uswsusp_key_blob __user *key) +{ + struct uswsusp_key_blob blob; + int rc; + + /* It's too late if data's been pushed in. */ + if (data->handle.cur) + return -EPIPE; + + rc = snapshot_setup_encryption_common(data); + if (rc) + return rc; + + /* Load the key from user mode. 
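+	 * (This is the same uswsusp_key_blob structure that
+	 * SNAPSHOT_ENABLE_ENCRYPTION handed out at hibernate time.)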
*/ + rc = copy_from_user(&blob, key, sizeof(struct uswsusp_key_blob)); + if (rc) + goto crypto_setup_fail; + + if (blob.blob_len != sizeof(struct uswsusp_key_blob)) { + rc = -EINVAL; + goto crypto_setup_fail; + } + + rc = crypto_aead_setkey(data->aead_tfm, + blob.blob, + SNAPSHOT_ENCRYPTION_KEY_SIZE); + + if (rc) + goto crypto_setup_fail; + + /* Load the starting nonce. */ + memcpy(&data->nonce_low, &blob.nonce[0], sizeof(data->nonce_low)); + memcpy(&data->nonce_high, &blob.nonce[8], sizeof(data->nonce_high)); + return 0; + +crypto_setup_fail: + snapshot_teardown_encryption(data); + return rc; +} + +loff_t snapshot_get_encrypted_image_size(loff_t raw_size) +{ + loff_t pages = raw_size >> PAGE_SHIFT; + loff_t chunks = (pages + (CHUNK_SIZE - 1)) / CHUNK_SIZE; + /* + * The encrypted size is the normal size, plus a stitched in + * authentication tag for every chunk of pages. + */ + return raw_size + (chunks * SNAPSHOT_AUTH_TAG_SIZE); +} + +int snapshot_finalize_decrypted_image(struct snapshot_data *data) +{ + int rc; + + if (data->crypt_offset != 0) { + rc = snapshot_decrypt_drain(data); + if (rc) + return rc; + } + + return 0; +} diff --git a/kernel/power/user.c b/kernel/power/user.c index ad241b4ff64c58..52ad25df4518dc 100644 --- a/kernel/power/user.c +++ b/kernel/power/user.c @@ -25,18 +25,9 @@ #include #include "power.h" +#include "user.h" - -static struct snapshot_data { - struct snapshot_handle handle; - int swap; - int mode; - bool frozen; - bool ready; - bool platform_support; - bool free_bitmaps; - dev_t dev; -} snapshot_state; +struct snapshot_data snapshot_state; int is_hibernate_resume_dev(dev_t dev) { @@ -119,6 +110,7 @@ static int snapshot_release(struct inode *inode, struct file *filp) } else if (data->free_bitmaps) { free_basic_memory_bitmaps(); } + snapshot_teardown_encryption(data); pm_notifier_call_chain(data->mode == O_RDONLY ? PM_POST_HIBERNATION : PM_POST_RESTORE); hibernate_release(); @@ -142,6 +134,12 @@ static ssize_t snapshot_read(struct file *filp, char __user *buf, res = -ENODATA; goto Unlock; } + + if (snapshot_encryption_enabled(data)) { + res = snapshot_read_encrypted(data, buf, count, offp); + goto Unlock; + } + if (!pg_offp) { /* on page boundary? 
*/ res = snapshot_read_next(&data->handle); if (res <= 0) @@ -172,6 +170,11 @@ static ssize_t snapshot_write(struct file *filp, const char __user *buf, data = filp->private_data; + if (snapshot_encryption_enabled(data)) { + res = snapshot_write_encrypted(data, buf, count, offp); + goto unlock; + } + if (!pg_offp) { res = snapshot_write_next(&data->handle); if (res <= 0) @@ -302,6 +305,12 @@ static long snapshot_ioctl(struct file *filp, unsigned int cmd, break; case SNAPSHOT_ATOMIC_RESTORE: + if (snapshot_encryption_enabled(data)) { + error = snapshot_finalize_decrypted_image(data); + if (error) + break; + } + snapshot_write_finalize(&data->handle); if (data->mode != O_WRONLY || !data->frozen || !snapshot_image_loaded(&data->handle)) { @@ -337,6 +346,8 @@ static long snapshot_ioctl(struct file *filp, unsigned int cmd, } size = snapshot_get_image_size(); size <<= PAGE_SHIFT; + if (snapshot_encryption_enabled(data)) + size = snapshot_get_encrypted_image_size(size); error = put_user(size, (loff_t __user *)arg); break; @@ -394,6 +405,13 @@ static long snapshot_ioctl(struct file *filp, unsigned int cmd, error = snapshot_set_swap_area(data, (void __user *)arg); break; + case SNAPSHOT_ENABLE_ENCRYPTION: + if (data->mode == O_RDONLY) + error = snapshot_get_encryption_key(data, (void __user *)arg); + else + error = snapshot_set_encryption_key(data, (void __user *)arg); + break; + default: error = -ENOTTY; diff --git a/kernel/power/user.h b/kernel/power/user.h new file mode 100644 index 00000000000000..6823e2eba7ec53 --- /dev/null +++ b/kernel/power/user.h @@ -0,0 +1,101 @@ +/* SPDX-License-Identifier: GPL-2.0 */ + +#include +#include +#include + +#define SNAPSHOT_ENCRYPTION_KEY_SIZE AES_KEYSIZE_128 +#define SNAPSHOT_AUTH_TAG_SIZE 16 + +/* Define the number of pages in a single AEAD encryption chunk. */ +#define CHUNK_SIZE 16 + +struct snapshot_data { + struct snapshot_handle handle; + int swap; + int mode; + bool frozen; + bool ready; + bool platform_support; + bool free_bitmaps; + dev_t dev; + +#if defined(CONFIG_ENCRYPTED_HIBERNATION) + struct crypto_aead *aead_tfm; + struct aead_request *aead_req; + void *crypt_pages[CHUNK_SIZE]; + u8 auth_tag[SNAPSHOT_AUTH_TAG_SIZE]; + struct scatterlist sg[CHUNK_SIZE + 2]; /* Add room for AD and auth tag. 
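+					 * sg[0] carries the AAD, the next
+					 * CHUNK_SIZE entries the data pages,
+					 * and the final entry the auth tag.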
 */
+	size_t crypt_offset;
+	size_t crypt_size;
+	uint64_t crypt_total;
+	uint64_t nonce_low;
+	uint64_t nonce_high;
+#endif
+
+};
+
+extern struct snapshot_data snapshot_state;
+
+/* kernel/power/snapenc.c routines */
+#if defined(CONFIG_ENCRYPTED_HIBERNATION)
+
+ssize_t snapshot_read_encrypted(struct snapshot_data *data,
+	char __user *buf, size_t count, loff_t *offp);
+
+ssize_t snapshot_write_encrypted(struct snapshot_data *data,
+	const char __user *buf, size_t count, loff_t *offp);
+
+void snapshot_teardown_encryption(struct snapshot_data *data);
+int snapshot_get_encryption_key(struct snapshot_data *data,
+	struct uswsusp_key_blob __user *key);
+
+int snapshot_set_encryption_key(struct snapshot_data *data,
+	struct uswsusp_key_blob __user *key);
+
+loff_t snapshot_get_encrypted_image_size(loff_t raw_size);
+
+int snapshot_finalize_decrypted_image(struct snapshot_data *data);
+
+#define snapshot_encryption_enabled(data) (!!(data)->aead_tfm)
+
+#else
+
+static ssize_t snapshot_read_encrypted(struct snapshot_data *data,
+	char __user *buf, size_t count, loff_t *offp)
+{
+	return -ENOTTY;
+}
+
+static ssize_t snapshot_write_encrypted(struct snapshot_data *data,
+	const char __user *buf, size_t count, loff_t *offp)
+{
+	return -ENOTTY;
+}
+
+static void snapshot_teardown_encryption(struct snapshot_data *data) {}
+static int snapshot_get_encryption_key(struct snapshot_data *data,
+	struct uswsusp_key_blob __user *key)
+{
+	return -ENOTTY;
+}
+
+static int snapshot_set_encryption_key(struct snapshot_data *data,
+	struct uswsusp_key_blob __user *key)
+{
+	return -ENOTTY;
+}
+
+static loff_t snapshot_get_encrypted_image_size(loff_t raw_size)
+{
+	return raw_size;
+}
+
+static int snapshot_finalize_decrypted_image(struct snapshot_data *data)
+{
+	return -ENOTTY;
+}
+
+#define snapshot_encryption_enabled(data) (0)
+
+#endif
From patchwork Wed May 4 23:20:59 2022
X-Patchwork-Submitter: Evan Green
X-Patchwork-Id: 12838915
From: Evan Green
To: linux-kernel@vger.kernel.org
Cc: Matthew Garrett, dlunev@google.com, zohar@linux.ibm.com, jejb@linux.ibm.com, linux-integrity@vger.kernel.org, corbet@lwn.net, rjw@rjwysocki.net, gwendal@chromium.org, jarkko@kernel.org, linux-pm@vger.kernel.org, Evan Green, Len Brown, Pavel Machek, "Rafael J. Wysocki"
Subject: [PATCH 07/10] PM: hibernate: Use TPM-backed keys to encrypt image
Date: Wed, 4 May 2022 16:20:59 -0700
Message-Id: <20220504161439.7.Ibd067e73916b9fae268a5824c2dd037416426af8@changeid>
In-Reply-To: <20220504232102.469959-1-evgreen@chromium.org>
References: <20220504232102.469959-1-evgreen@chromium.org>

When using encrypted hibernate images, have the TPM create a key for us and seal it. By handing back a sealed blob instead of the raw key, we prevent usermode from being able to decrypt and tamper with the hibernate image on a different machine.

We'll also go through the motions of having PCR23 set to a known value at the time of key creation and unsealing. Currently there's nothing that enforces the contents of PCR23 as a condition to unseal the key blob; that enforcement will come in a later change.
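As an illustration only (an editor's sketch, not part of the patch; save_blob() and load_blob() are hypothetical helpers, and error handling is elided), the resulting usermode flow looks roughly like:

	/* Hibernate: opening /dev/snapshot read-only returns a fresh sealed key. */
	int fd = open("/dev/snapshot", O_RDONLY);
	struct uswsusp_key_blob blob = {0};
	if (ioctl(fd, SNAPSHOT_ENABLE_ENCRYPTION, &blob) == 0)
		save_blob(&blob);	/* sealed, so safe to store next to the image */

	/* Resume, in a later boot: write-only mode unseals the saved blob. */
	int rfd = open("/dev/snapshot", O_WRONLY);
	load_blob(&blob);
	ioctl(rfd, SNAPSHOT_ENABLE_ENCRYPTION, &blob);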
Sourced-from: Matthew Garrett Signed-off-by: Evan Green --- Matthew's incarnation of this patch is at: https://patchwork.kernel.org/project/linux-pm/patch/20210220013255.1083202-9-matthewgarrett@google.com/ kernel/power/Kconfig | 2 + kernel/power/snapenc.c | 195 +++++++++++++++++++++++++++++++++++++++-- kernel/power/user.h | 1 + 3 files changed, 189 insertions(+), 9 deletions(-) diff --git a/kernel/power/Kconfig b/kernel/power/Kconfig index 8249968962bcd5..d8077391feb290 100644 --- a/kernel/power/Kconfig +++ b/kernel/power/Kconfig @@ -96,6 +96,8 @@ config ENCRYPTED_HIBERNATION bool "Encryption support for userspace snapshots" depends on HIBERNATION_SNAPSHOT_DEV depends on CRYPTO_AEAD2=y + depends on KEYS + depends on TRUSTED_KEYS default n help Enable support for kernel-based encryption of hibernation snapshots diff --git a/kernel/power/snapenc.c b/kernel/power/snapenc.c index cb90692d6ab83a..2bd5fe05a321e7 100644 --- a/kernel/power/snapenc.c +++ b/kernel/power/snapenc.c @@ -4,13 +4,23 @@ #include #include #include +#include +#include #include #include +#include #include #include "power.h" #include "user.h" +/* sha256("To sleep, perchance to dream") */ +static struct tpm_digest known_digest = { .alg_id = TPM_ALG_SHA256, + .digest = {0x92, 0x78, 0x3d, 0x79, 0x2d, 0x00, 0x31, 0xb0, 0x55, 0xf9, + 0x1e, 0x0d, 0xce, 0x83, 0xde, 0x1d, 0xc4, 0xc5, 0x8e, 0x8c, + 0xf1, 0x22, 0x38, 0x6c, 0x33, 0xb1, 0x14, 0xb7, 0xec, 0x05, + 0x5f, 0x49}}; + /* Encrypt more data from the snapshot into the staging area. */ static int snapshot_encrypt_refill(struct snapshot_data *data) { @@ -313,6 +323,12 @@ void snapshot_teardown_encryption(struct snapshot_data *data) { int i; + if (data->key) { + key_revoke(data->key); + key_put(data->key); + data->key = NULL; + } + if (data->aead_req) { aead_request_free(data->aead_req); data->aead_req = NULL; @@ -381,11 +397,77 @@ static int snapshot_setup_encryption_common(struct snapshot_data *data) return rc; } +static int snapshot_create_kernel_key(struct snapshot_data *data) +{ + const struct cred *cred = current_cred(); + struct tpm_digest *digests = NULL; + struct tpm_chip *chip; + struct key *key; + int ret, i; + /* Create a key sealed by the SRK. 
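+	 * (keyhandle=0x81000000 assumes a storage primary key has already
+	 * been made persistent at that handle on this TPM.)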
*/ + char *keyinfo = "new\t32\tkeyhandle=0x81000000"; + + chip = tpm_default_chip(); + if (!chip) + return -ENODEV; + + if (!(tpm_is_tpm2(chip))) + return -ENODEV; + + ret = tpm_pcr_reset(chip, 23); + if (ret != 0) + return ret; + + digests = kcalloc(chip->nr_allocated_banks, sizeof(struct tpm_digest), + GFP_KERNEL); + if (!digests) { + ret = -ENOMEM; + goto reset; + } + + for (i = 0; i <= chip->nr_allocated_banks; i++) { + digests[i].alg_id = chip->allocated_banks[i].alg_id; + if (digests[i].alg_id == known_digest.alg_id) + memcpy(&digests[i], &known_digest, sizeof(known_digest)); + } + + ret = tpm_pcr_extend(chip, 23, digests); + if (ret != 0) + goto reset; + + key = key_alloc(&key_type_trusted, "swsusp", GLOBAL_ROOT_UID, + GLOBAL_ROOT_GID, cred, 0, KEY_ALLOC_NOT_IN_QUOTA, + NULL); + + if (IS_ERR(key)) { + ret = PTR_ERR(key); + goto reset; + } + + ret = key_instantiate_and_link(key, keyinfo, strlen(keyinfo) + 1, NULL, + NULL); + if (ret < 0) + goto out; + + data->key = key; + key = NULL; + +out: + if (key) { + key_revoke(key); + key_put(key); + } +reset: + kfree(digests); + tpm_pcr_reset(chip, 23); + return ret; +} + int snapshot_get_encryption_key(struct snapshot_data *data, struct uswsusp_key_blob __user *key) { - u8 aead_key[SNAPSHOT_ENCRYPTION_KEY_SIZE]; u8 nonce[USWSUSP_KEY_NONCE_SIZE]; + struct trusted_key_payload *payload; int rc; /* Don't pull a random key from a world that can be reset. */ if (data->ready) @@ -399,21 +481,28 @@ int snapshot_get_encryption_key(struct snapshot_data *data, get_random_bytes(nonce, sizeof(nonce)); memcpy(&data->nonce_low, &nonce[0], sizeof(data->nonce_low)); memcpy(&data->nonce_high, &nonce[8], sizeof(data->nonce_high)); - /* Build a random key */ - get_random_bytes(aead_key, sizeof(aead_key)); - rc = crypto_aead_setkey(data->aead_tfm, aead_key, sizeof(aead_key)); + + /* Create a kernel key, and set it. */ + rc = snapshot_create_kernel_key(data); + if (rc) + goto fail; + + payload = data->key->payload.data[0]; + /* Install the key */ + rc = crypto_aead_setkey(data->aead_tfm, payload->key, SNAPSHOT_ENCRYPTION_KEY_SIZE); if (rc) goto fail; - /* Hand the key back to user mode (to be changed!) */ - rc = put_user(sizeof(struct uswsusp_key_blob), &key->blob_len); + /* Hand the key back to user mode in sealed form. */ + rc = put_user(payload->blob_len, &key->blob_len); if (rc) goto fail; - rc = copy_to_user(&key->blob, &aead_key, sizeof(aead_key)); + rc = copy_to_user(&key->blob, &payload->blob, payload->blob_len); if (rc) goto fail; + /* The nonce just gets handed back in the clear. 
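+	 * A GCM nonce must be unique per key but need not be secret, so
+	 * returning it in the clear gives nothing away.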
*/ rc = copy_to_user(&key->nonce, &nonce, sizeof(nonce)); if (rc) goto fail; @@ -425,10 +514,93 @@ int snapshot_get_encryption_key(struct snapshot_data *data, return rc; } +static int snapshot_load_kernel_key(struct snapshot_data *data, + struct uswsusp_key_blob *blob) +{ + + const struct cred *cred = current_cred(); + char *keytemplate = "load\t%s\tkeyhandle=0x81000000"; + struct tpm_digest *digests = NULL; + char *blobstring = NULL; + char *keyinfo = NULL; + struct tpm_chip *chip; + struct key *key; + int i, ret; + + chip = tpm_default_chip(); + if (!chip) + return -ENODEV; + + if (!(tpm_is_tpm2(chip))) + return -ENODEV; + + ret = tpm_pcr_reset(chip, 23); + if (ret != 0) + return ret; + + digests = kcalloc(chip->nr_allocated_banks, sizeof(struct tpm_digest), + GFP_KERNEL); + if (!digests) + goto reset; + + for (i = 0; i <= chip->nr_allocated_banks; i++) { + digests[i].alg_id = chip->allocated_banks[i].alg_id; + if (digests[i].alg_id == known_digest.alg_id) + memcpy(&digests[i], &known_digest, sizeof(known_digest)); + } + + ret = tpm_pcr_extend(chip, 23, digests); + if (ret != 0) + goto reset; + + blobstring = kmalloc(blob->blob_len * 2, GFP_KERNEL); + if (!blobstring) { + ret = -ENOMEM; + goto reset; + } + + bin2hex(blobstring, blob->blob, blob->blob_len); + keyinfo = kasprintf(GFP_KERNEL, keytemplate, blobstring); + if (!keyinfo) { + ret = -ENOMEM; + goto reset; + } + + key = key_alloc(&key_type_trusted, "swsusp", GLOBAL_ROOT_UID, + GLOBAL_ROOT_GID, cred, 0, KEY_ALLOC_NOT_IN_QUOTA, + NULL); + + if (IS_ERR(key)) { + ret = PTR_ERR(key); + goto out; + } + + ret = key_instantiate_and_link(key, keyinfo, strlen(keyinfo) + 1, NULL, + NULL); + if (ret < 0) + goto out; + + data->key = key; + key = NULL; + +out: + if (key) { + key_revoke(key); + key_put(key); + } +reset: + kfree(keyinfo); + kfree(blobstring); + kfree(digests); + tpm_pcr_reset(chip, 23); + return ret; +} + int snapshot_set_encryption_key(struct snapshot_data *data, struct uswsusp_key_blob __user *key) { struct uswsusp_key_blob blob; + struct trusted_key_payload *payload; int rc; /* It's too late if data's been pushed in. 
*/ @@ -444,13 +616,18 @@ int snapshot_set_encryption_key(struct snapshot_data *data, if (rc) goto crypto_setup_fail; - if (blob.blob_len != sizeof(struct uswsusp_key_blob)) { + if (blob.blob_len > sizeof(key->blob)) { rc = -EINVAL; goto crypto_setup_fail; } + rc = snapshot_load_kernel_key(data, &blob); + if (rc) + goto crypto_setup_fail; + + payload = data->key->payload.data[0]; rc = crypto_aead_setkey(data->aead_tfm, - blob.blob, + payload->key, SNAPSHOT_ENCRYPTION_KEY_SIZE); if (rc) diff --git a/kernel/power/user.h b/kernel/power/user.h index 6823e2eba7ec53..591b30bb213349 100644 --- a/kernel/power/user.h +++ b/kernel/power/user.h @@ -31,6 +31,7 @@ struct snapshot_data { uint64_t crypt_total; uint64_t nonce_low; uint64_t nonce_high; + struct key *key; #endif }; From patchwork Wed May 4 23:21:00 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Evan Green X-Patchwork-Id: 12838918 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id AF984C4321E for ; Wed, 4 May 2022 23:59:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S241894AbiEEAA1 (ORCPT ); Wed, 4 May 2022 20:00:27 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42136 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1385450AbiEDXZY (ORCPT ); Wed, 4 May 2022 19:25:24 -0400 Received: from mail-pl1-x632.google.com (mail-pl1-x632.google.com [IPv6:2607:f8b0:4864:20::632]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E6E1A4D9FD for ; Wed, 4 May 2022 16:21:46 -0700 (PDT) Received: by mail-pl1-x632.google.com with SMTP id j14so2833751plx.3 for ; Wed, 04 May 2022 16:21:46 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=mNIKKnjFfbyO6GIKZxv1cwlMsvlm06g9nZabLDog+Hw=; b=P2L3QfO0rVtWtrwK+DWAeDMoaee7pJPv+20By6GZRva1qmNW7T9ZTrdgxsnUvOYRTT F8ncmZ0p3mqxBKjNK/tEKSpKn3YeFp0iJ9iQ2dzUUeXszA4crqLMqt1pmqgiRTgRz3jC X9z45rJ2qsLfDFEU5xnc8m7JZo19xCo/KvjJs= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=mNIKKnjFfbyO6GIKZxv1cwlMsvlm06g9nZabLDog+Hw=; b=ynE1p6towbHKs9oYr02NiqsSfJBex2CfHS4+jseYpqtD2BI2D9SdwS3pgnWa0nLo/9 PEEcLW7DtwXt05IEG8B/JbE6bpymz+ad7oCewTyS7XWAN2xdg52OT0Y5DKUOzYiyTC5i V/a8RPt6ykRk0dBI0Wo0OJ5LgBJbR5c0c4KNCnCT6A9ZEqhR559exA7TY2lPpp8pjyvq tv4jHp2S0XcNG1G+IQ2SBOD33UhssDzOkLkEZuT+48cvnlXG7aHKKOJd30ksBC4nuObv FzLKQy4LvOL+aMwglHH/z5W1/QvP5AG/FHeR4+VzUWmlaztHDvQ7iLjY5/t8Mh5AI0nk 5nYA== X-Gm-Message-State: AOAM5303i3xYzHt8hWEuNMzL2+LbO84IJ/8kEhUXmcWeYWtqBdUCW59Q FHWdXjHBQsE9xlC/4h68a+Evlg== X-Google-Smtp-Source: ABdhPJwKLS4W00NPlD53XayMBHEvEdGrWE4ak49ahAnNtY1iHBPvHHPZWIU2O7GwCTCvpT2bFSKfEA== X-Received: by 2002:a17:902:f54a:b0:15e:a95a:c0a7 with SMTP id h10-20020a170902f54a00b0015ea95ac0a7mr15919068plf.134.1651706506376; Wed, 04 May 2022 16:21:46 -0700 (PDT) Received: from evgreen-glaptop.lan ([98.47.98.87]) by smtp.gmail.com with ESMTPSA id q12-20020a170902f78c00b0015e8d4eb2d6sm1901pln.288.2022.05.04.16.21.45 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 04 May 2022 16:21:46 -0700 
From: Evan Green
To: linux-kernel@vger.kernel.org
Cc: Matthew Garrett, dlunev@google.com, zohar@linux.ibm.com, jejb@linux.ibm.com, linux-integrity@vger.kernel.org, corbet@lwn.net, rjw@rjwysocki.net, gwendal@chromium.org, jarkko@kernel.org, linux-pm@vger.kernel.org, Evan Green, Len Brown, Pavel Machek, "Rafael J. Wysocki"
Subject: [PATCH 08/10] PM: hibernate: Mix user key in encrypted hibernate
Date: Wed, 4 May 2022 16:21:00 -0700
Message-Id: <20220504161439.8.I87952411cf83f2199ff7a4cc8c828d357b8c8ce3@changeid>
In-Reply-To: <20220504232102.469959-1-evgreen@chromium.org>
References: <20220504232102.469959-1-evgreen@chromium.org>

Usermode may have its own data protection requirements when it comes to encrypting the hibernate image. For example, users may want a policy where the hibernate image is protected by a key derived both from platform-level security as well as authentication data (such as a password or PIN). This way, even if the platform is compromised (i.e., a stolen laptop), sensitive data cannot be exfiltrated via the hibernate image without additional data (like the user's password).

The kernel is already doing the encryption, but will be protecting its key with the TPM alone. Allow usermode to mix in key content of its own for the data portion of the hibernate image, so that the image encryption key is determined both by a TPM-backed secret and user-defined data.

To mix the user key in, we hash the kernel key followed by the user key, and use the resulting hash as the new key. This allows usermode to mix in its key material without giving it too much control over what key is actually driving the encryption (which might be used to attack the secret kernel key).

Limiting this to the data portion allows the kernel to receive the page map and prepare its giant allocation even if this user key is not yet available (i.e., the user has not yet finished typing in their password). Once the user key becomes available, the data portion can be pushed through to the kernel as well. This enables "preloading" scenarios, where the hibernate image is loaded off of disk while the additional key material (e.g., a password) is being collected.

One annoyance of the "preloading" scheme is that hibernate image memory is effectively double-allocated: first by the usermode process pulling encrypted contents off of disk and holding it, and second by the kernel in its giant allocation in prepare_image(). An interesting future optimization would be to allow the kernel to accept and store encrypted page data before the user key is available. This would remove the double allocation problem, as usermode could push the encrypted pages loaded from disk immediately without storing them. The kernel could defer decryption of the data until the user key is available, while still knowing the correct page locations to store the encrypted data in.
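As an illustration only (an editor's sketch, not part of the patch; fd is the already-open /dev/snapshot descriptor and derive_key_from_passphrase() is a hypothetical KDF), mixing in a passphrase-derived key looks roughly like:

	struct uswsusp_user_key ukey = { .key_len = 16 };

	/* Only the derived bytes are mixed in; the KDF choice is up to usermode. */
	derive_key_from_passphrase(passphrase, ukey.key, sizeof(ukey.key));
	if (ioctl(fd, SNAPSHOT_SET_USER_KEY, &ukey) == 0) {
		/*
		 * ukey.meta_size now reports how many image bytes can be
		 * transferred before the mixed data key becomes necessary.
		 */
	}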
Signed-off-by: Evan Green --- include/uapi/linux/suspend_ioctls.h | 15 ++- kernel/power/power.h | 1 + kernel/power/snapenc.c | 187 ++++++++++++++++++++++++++-- kernel/power/snapshot.c | 5 + kernel/power/user.c | 4 + kernel/power/user.h | 12 ++ 6 files changed, 213 insertions(+), 11 deletions(-) diff --git a/include/uapi/linux/suspend_ioctls.h b/include/uapi/linux/suspend_ioctls.h index b73026ef824bb9..c60b84cbb33ae1 100644 --- a/include/uapi/linux/suspend_ioctls.h +++ b/include/uapi/linux/suspend_ioctls.h @@ -25,6 +25,18 @@ struct uswsusp_key_blob { __u8 nonce[USWSUSP_KEY_NONCE_SIZE]; } __attribute__((packed)); +/* + * Allow user mode to fold in key material for the data portion of the hibernate + * image. + */ +struct uswsusp_user_key { + /* Kernel returns the metadata size. */ + __kernel_loff_t meta_size; + __u32 key_len; + __u8 key[16]; + __u32 pad; +}; + #define SNAPSHOT_IOC_MAGIC '3' #define SNAPSHOT_FREEZE _IO(SNAPSHOT_IOC_MAGIC, 1) #define SNAPSHOT_UNFREEZE _IO(SNAPSHOT_IOC_MAGIC, 2) @@ -42,6 +54,7 @@ struct uswsusp_key_blob { #define SNAPSHOT_AVAIL_SWAP_SIZE _IOR(SNAPSHOT_IOC_MAGIC, 19, __kernel_loff_t) #define SNAPSHOT_ALLOC_SWAP_PAGE _IOR(SNAPSHOT_IOC_MAGIC, 20, __kernel_loff_t) #define SNAPSHOT_ENABLE_ENCRYPTION _IOWR(SNAPSHOT_IOC_MAGIC, 21, struct uswsusp_key_blob) -#define SNAPSHOT_IOC_MAXNR 21 +#define SNAPSHOT_SET_USER_KEY _IOWR(SNAPSHOT_IOC_MAGIC, 22, struct uswsusp_user_key) +#define SNAPSHOT_IOC_MAXNR 22 #endif /* _LINUX_SUSPEND_IOCTLS_H */ diff --git a/kernel/power/power.h b/kernel/power/power.h index b4f43394320961..5955e5cf692302 100644 --- a/kernel/power/power.h +++ b/kernel/power/power.h @@ -151,6 +151,7 @@ struct snapshot_handle { extern unsigned int snapshot_additional_pages(struct zone *zone); extern unsigned long snapshot_get_image_size(void); +extern unsigned long snapshot_get_meta_page_count(void); extern int snapshot_read_next(struct snapshot_handle *handle); extern int snapshot_write_next(struct snapshot_handle *handle); extern void snapshot_write_finalize(struct snapshot_handle *handle); diff --git a/kernel/power/snapenc.c b/kernel/power/snapenc.c index 2bd5fe05a321e7..067f49c05a4d54 100644 --- a/kernel/power/snapenc.c +++ b/kernel/power/snapenc.c @@ -6,6 +6,8 @@ #include #include #include +#include +#include #include #include #include @@ -21,6 +23,66 @@ static struct tpm_digest known_digest = { .alg_id = TPM_ALG_SHA256, 0xf1, 0x22, 0x38, 0x6c, 0x33, 0xb1, 0x14, 0xb7, 0xec, 0x05, 0x5f, 0x49}}; +/* Derive a key from the kernel and user keys for data encryption. */ +static int snapshot_use_user_key(struct snapshot_data *data) +{ + struct shash_desc *desc; + u8 digest[SHA256_DIGEST_SIZE]; + struct trusted_key_payload *payload; + struct crypto_shash *tfm; + int ret; + + tfm = crypto_alloc_shash("sha256", 0, 0); + if (IS_ERR(tfm)) { + ret = -EINVAL; + goto err_rel; + } + + desc = kmalloc(sizeof(struct shash_desc) + + crypto_shash_descsize(tfm), GFP_KERNEL); + if (!desc) { + ret = -ENOMEM; + goto err_rel; + } + + desc->tfm = tfm; + ret = crypto_shash_init(desc); + if (ret != 0) + goto err_free; + + /* + * Hash the kernel key and the user key together. This folds in the user + * key, but not in a way that gives the user mode predictable control + * over the key bits. Hash in all 32 bytes of the key even though only 16 + * are in active use as extra salt. 
+ */ + payload = data->key->payload.data[0]; + crypto_shash_update(desc, payload->key, MIN_KEY_SIZE); + crypto_shash_update(desc, data->user_key, sizeof(data->user_key)); + crypto_shash_final(desc, digest); + ret = crypto_aead_setkey(data->aead_tfm, + digest, + SNAPSHOT_ENCRYPTION_KEY_SIZE); + +err_free: + kfree(desc); + +err_rel: + crypto_free_shash(tfm); + return ret; +} + +/* Check to see if it's time to switch to the user key, and do it if so. */ +static int snapshot_check_user_key_switch(struct snapshot_data *data) +{ + if (data->user_key_valid && data->meta_size && + data->crypt_total == data->meta_size) { + return snapshot_use_user_key(data); + } + + return 0; +} + /* Encrypt more data from the snapshot into the staging area. */ static int snapshot_encrypt_refill(struct snapshot_data *data) { @@ -32,6 +94,15 @@ static int snapshot_encrypt_refill(struct snapshot_data *data) DECLARE_CRYPTO_WAIT(wait); size_t total = 0; + if (data->crypt_total == 0) { + data->meta_size = snapshot_get_meta_page_count() << PAGE_SHIFT; + + } else { + res = snapshot_check_user_key_switch(data); + if (res) + return res; + } + /* * The first buffer is the associated data, set to the offset to prevent * attacks that rearrange chunks. @@ -42,6 +113,11 @@ static int snapshot_encrypt_refill(struct snapshot_data *data) for (pg_idx = 0; pg_idx < CHUNK_SIZE; pg_idx++) { void *buf = data->crypt_pages[pg_idx]; + /* Stop at the meta page boundary to potentially switch keys. */ + if (total && + ((data->crypt_total + total) == data->meta_size)) + break; + res = snapshot_read_next(&data->handle); if (res < 0) return res; @@ -114,10 +190,10 @@ static int snapshot_decrypt_drain(struct snapshot_data *data) sg_set_buf(&data->sg[1 + pg_idx], data->crypt_pages[pg_idx], PAGE_SIZE); /* - * It's possible this is the final decrypt, and there are fewer than - * CHUNK_SIZE pages. If this is the case we would have just written the - * auth tag into the first few bytes of a new page. Copy to the tag if - * so. + * It's possible this is the final decrypt, or the final decrypt of the + * meta region, and there are fewer than CHUNK_SIZE pages. If this is + * the case we would have just written the auth tag into the first few + * bytes of a new page. Copy to the tag if so. */ if ((page_count < CHUNK_SIZE) && (data->crypt_offset - total) == sizeof(data->auth_tag)) { @@ -172,7 +248,14 @@ static int snapshot_decrypt_drain(struct snapshot_data *data) total += PAGE_SIZE; } + if (data->crypt_total == 0) + data->meta_size = snapshot_get_meta_page_count() << PAGE_SHIFT; + data->crypt_total += total; + res = snapshot_check_user_key_switch(data); + if (res) + return res; + return 0; } @@ -221,8 +304,26 @@ static ssize_t snapshot_write_next_encrypted(struct snapshot_data *data, if (data->crypt_offset < (PAGE_SIZE * CHUNK_SIZE)) { size_t pg_idx = data->crypt_offset >> PAGE_SHIFT; size_t pg_off = data->crypt_offset & (PAGE_SIZE - 1); + size_t size_avail = PAGE_SIZE; *buf = data->crypt_pages[pg_idx] + pg_off; - return PAGE_SIZE - pg_off; + + /* + * If this is the boundary where the meta pages end, then just + * return enough for the auth tag. + */ + if (data->meta_size && (data->crypt_total < data->meta_size)) { + uint64_t total_done = + data->crypt_total + data->crypt_offset; + + if ((total_done >= data->meta_size) && + (total_done < + (data->meta_size + SNAPSHOT_AUTH_TAG_SIZE))) { + + size_avail = SNAPSHOT_AUTH_TAG_SIZE; + } + } + + return size_avail - pg_off; } /* Use offsets just beyond the size to return the tag. 
*/ @@ -303,9 +404,15 @@ ssize_t snapshot_write_encrypted(struct snapshot_data *data, break; } - /* Drain the encrypted buffer if it's full. */ + /* + * Drain the encrypted buffer if it's full, or if we hit the end + * of the meta pages and need a key change. + */ if ((data->crypt_offset >= - ((PAGE_SIZE * CHUNK_SIZE) + SNAPSHOT_AUTH_TAG_SIZE))) { + ((PAGE_SIZE * CHUNK_SIZE) + SNAPSHOT_AUTH_TAG_SIZE)) || + (data->meta_size && (data->crypt_total < data->meta_size) && + ((data->crypt_total + data->crypt_offset) == + (data->meta_size + SNAPSHOT_AUTH_TAG_SIZE)))) { int rc; @@ -345,6 +452,8 @@ void snapshot_teardown_encryption(struct snapshot_data *data) data->crypt_pages[i] = NULL; } } + + memset(data->user_key, 0, sizeof(data->user_key)); } static int snapshot_setup_encryption_common(struct snapshot_data *data) @@ -354,6 +463,7 @@ static int snapshot_setup_encryption_common(struct snapshot_data *data) data->crypt_total = 0; data->crypt_offset = 0; data->crypt_size = 0; + data->user_key_valid = false; memset(data->crypt_pages, 0, sizeof(data->crypt_pages)); /* This only works once per hibernate. */ if (data->aead_tfm) @@ -643,15 +753,72 @@ int snapshot_set_encryption_key(struct snapshot_data *data, return rc; } -loff_t snapshot_get_encrypted_image_size(loff_t raw_size) +loff_t snapshot_encrypted_byte_count(loff_t plain_size) { - loff_t pages = raw_size >> PAGE_SHIFT; + loff_t pages = plain_size >> PAGE_SHIFT; loff_t chunks = (pages + (CHUNK_SIZE - 1)) / CHUNK_SIZE; /* * The encrypted size is the normal size, plus a stitched in * authentication tag for every chunk of pages. */ - return raw_size + (chunks * SNAPSHOT_AUTH_TAG_SIZE); + return plain_size + (chunks * SNAPSHOT_AUTH_TAG_SIZE); +} + +static loff_t snapshot_get_meta_data_size(void) +{ + loff_t pages = snapshot_get_meta_page_count(); + + return snapshot_encrypted_byte_count(pages << PAGE_SHIFT); +} + +int snapshot_set_user_key(struct snapshot_data *data, + struct uswsusp_user_key __user *key) +{ + struct uswsusp_user_key user_key; + unsigned int key_len; + int rc; + loff_t size; + + /* + * Return the metadata size, the number of bytes that can be fed in before + * the user data key is needed at resume time. + */ + size = snapshot_get_meta_data_size(); + rc = put_user(size, &key->meta_size); + if (rc) + return rc; + + rc = copy_from_user(&user_key, key, sizeof(struct uswsusp_user_key)); + if (rc) + return rc; + + key_len = min_t(__u32, user_key.key_len, sizeof(data->user_key)); + if (key_len < 8) + return -EINVAL; + + /* Don't allow it if it's too late. */ + if (data->crypt_total > data->meta_size) + return -EBUSY; + + memset(data->user_key, 0, sizeof(data->user_key)); + memcpy(data->user_key, user_key.key, key_len); + data->user_key_valid = true; + /* Install the key if the user is just under the wire. 
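+	 * If the metadata pages have already streamed through by the time
+	 * the user key shows up, perform the key switch here rather than in
+	 * the read/write paths.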
*/ + rc = snapshot_check_user_key_switch(data); + if (rc) + return rc; + + return 0; +} + +loff_t snapshot_get_encrypted_image_size(loff_t raw_size) +{ + loff_t pages = raw_size >> PAGE_SHIFT; + loff_t meta_size; + + pages -= snapshot_get_meta_page_count(); + meta_size = snapshot_get_meta_data_size(); + return snapshot_encrypted_byte_count(pages << PAGE_SHIFT) + meta_size; } int snapshot_finalize_decrypted_image(struct snapshot_data *data) diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c index 2a406753af9049..026ee511633bc9 100644 --- a/kernel/power/snapshot.c +++ b/kernel/power/snapshot.c @@ -2083,6 +2083,11 @@ unsigned long snapshot_get_image_size(void) return nr_copy_pages + nr_meta_pages + 1; } +unsigned long snapshot_get_meta_page_count(void) +{ + return nr_meta_pages + 1; +} + static int init_header(struct swsusp_info *info) { memset(info, 0, sizeof(struct swsusp_info)); diff --git a/kernel/power/user.c b/kernel/power/user.c index 52ad25df4518dc..f35263e6724975 100644 --- a/kernel/power/user.c +++ b/kernel/power/user.c @@ -412,6 +412,10 @@ static long snapshot_ioctl(struct file *filp, unsigned int cmd, error = snapshot_set_encryption_key(data, (void __user *)arg); break; + case SNAPSHOT_SET_USER_KEY: + error = snapshot_set_user_key(data, (void __user *)arg); + break; + default: error = -ENOTTY; diff --git a/kernel/power/user.h b/kernel/power/user.h index 591b30bb213349..1b0743b36eee14 100644 --- a/kernel/power/user.h +++ b/kernel/power/user.h @@ -32,6 +32,9 @@ struct snapshot_data { uint64_t nonce_low; uint64_t nonce_high; struct key *key; + u8 user_key[SNAPSHOT_ENCRYPTION_KEY_SIZE]; + bool user_key_valid; + uint64_t meta_size; #endif }; @@ -54,6 +57,9 @@ int snapshot_get_encryption_key(struct snapshot_data *data, int snapshot_set_encryption_key(struct snapshot_data *data, struct uswsusp_key_blob __user *key); +int snapshot_set_user_key(struct snapshot_data *data, + struct uswsusp_user_key __user *key); + loff_t snapshot_get_encrypted_image_size(loff_t raw_size); int snapshot_finalize_decrypted_image(struct snapshot_data *data); @@ -87,6 +93,12 @@ static int snapshot_set_encryption_key(struct snapshot_data *data, return -ENOTTY; } +static int snapshot_set_user_key(struct snapshot_data *data, + struct uswsusp_user_key __user *key) +{ + return -ENOTTY; +} + static loff_t snapshot_get_encrypted_image_size(loff_t raw_size) { return raw_size; From patchwork Wed May 4 23:21:01 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Evan Green X-Patchwork-Id: 12838912 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5B64BC433EF for ; Wed, 4 May 2022 23:59:27 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1383732AbiEEAA0 (ORCPT ); Wed, 4 May 2022 20:00:26 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42230 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1385501AbiEDXZ1 (ORCPT ); Wed, 4 May 2022 19:25:27 -0400 Received: from mail-pl1-x633.google.com (mail-pl1-x633.google.com [IPv6:2607:f8b0:4864:20::633]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 897414D9FD for ; Wed, 4 May 2022 16:21:49 -0700 (PDT) Received: by mail-pl1-x633.google.com with SMTP id p6so2298367plr.12 for ; Wed, 04 May 2022 16:21:49 -0700 (PDT) DKIM-Signature: 
From: Evan Green
To: linux-kernel@vger.kernel.org
Cc: Matthew Garrett, dlunev@google.com, zohar@linux.ibm.com, jejb@linux.ibm.com, linux-integrity@vger.kernel.org, corbet@lwn.net, rjw@rjwysocki.net, gwendal@chromium.org, jarkko@kernel.org, linux-pm@vger.kernel.org, Evan Green, Len Brown, Pavel Machek, "Rafael J. Wysocki"
Subject: [PATCH 09/10] PM: hibernate: Verify the digest encryption key
Date: Wed, 4 May 2022 16:21:01 -0700
Message-Id: <20220504161439.9.I504d456c7a94ef1aaa7a2c63775ce9690c3ad7ab@changeid>
In-Reply-To: <20220504232102.469959-1-evgreen@chromium.org>
References: <20220504232102.469959-1-evgreen@chromium.org>

We want to ensure that the key used to encrypt the digest was created by the kernel during hibernation. To do this, we request that the TPM include information about the value of PCR 23 at the time of key creation in the sealed blob. On resume, we can make sure that the PCR information in the creation data blob (already certified by the TPM to be accurate) corresponds to the expected value. Since only the kernel can touch PCR 23, if an attacker generates a key themselves, the value of PCR 23 will have been different, allowing us to reject the key and boot normally instead of resuming.

Sourced-from: Matthew Garrett
Signed-off-by: Evan Green
---
Matthew's original version of this patch is here:
https://patchwork.kernel.org/project/linux-pm/patch/20210220013255.1083202-9-matthewgarrett@google.com/

I moved the TPM2_CC_CERTIFYCREATION code into a separate change in the trusted key code because the blob_handle was being flushed and was no longer valid for use in CC_CERTIFYCREATION after the key was loaded.
As an added benefit of moving the certification into the trusted keys code, we can drop the other patch from the original series that squirrelled the blob_handle away. kernel/power/snapenc.c | 96 +++++++++++++++++++++++++++++++++++++++++- 1 file changed, 94 insertions(+), 2 deletions(-) diff --git a/kernel/power/snapenc.c b/kernel/power/snapenc.c index 067f49c05a4d54..38bc820f780d8b 100644 --- a/kernel/power/snapenc.c +++ b/kernel/power/snapenc.c @@ -23,6 +23,45 @@ static struct tpm_digest known_digest = { .alg_id = TPM_ALG_SHA256, 0xf1, 0x22, 0x38, 0x6c, 0x33, 0xb1, 0x14, 0xb7, 0xec, 0x05, 0x5f, 0x49}}; +/* sha256(sha256(empty_pcr | known_digest)) */ +static const char expected_digest[] = {0x2f, 0x96, 0xf2, 0x1b, 0x70, 0xa9, 0xe8, + 0x42, 0x25, 0x8e, 0x66, 0x07, 0xbe, 0xbc, 0xe3, 0x1f, 0x2c, 0x84, 0x4a, + 0x3f, 0x85, 0x17, 0x31, 0x47, 0x9a, 0xa5, 0x53, 0xbb, 0x23, 0x0c, 0x32, + 0xf3}; + +static int sha256_data(char *buf, int size, char *output) +{ + struct crypto_shash *tfm; + struct shash_desc *desc; + int ret; + + tfm = crypto_alloc_shash("sha256", 0, 0); + if (IS_ERR(tfm)) + return PTR_ERR(tfm); + + desc = kmalloc(sizeof(struct shash_desc) + + crypto_shash_descsize(tfm), GFP_KERNEL); + if (!desc) { + crypto_free_shash(tfm); + return -ENOMEM; + } + + desc->tfm = tfm; + ret = crypto_shash_init(desc); + if (ret != 0) { + crypto_free_shash(tfm); + kfree(desc); + return ret; + } + + crypto_shash_update(desc, buf, size); + crypto_shash_final(desc, output); + crypto_free_shash(desc->tfm); + kfree(desc); + + return 0; +} + /* Derive a key from the kernel and user keys for data encryption. */ static int snapshot_use_user_key(struct snapshot_data *data) { @@ -515,7 +554,7 @@ static int snapshot_create_kernel_key(struct snapshot_data *data) struct key *key; int ret, i; /* Create a key sealed by the SRK. */ - char *keyinfo = "new\t32\tkeyhandle=0x81000000"; + char *keyinfo = "new\t32\tkeyhandle=0x81000000\tcreationpcrs=0x00800000"; chip = tpm_default_chip(); if (!chip) @@ -628,6 +667,7 @@ static int snapshot_load_kernel_key(struct snapshot_data *data, struct uswsusp_key_blob *blob) { + char certhash[SHA256_DIGEST_SIZE]; const struct cred *cred = current_cred(); char *keytemplate = "load\t%s\tkeyhandle=0x81000000"; struct tpm_digest *digests = NULL; @@ -635,6 +675,7 @@ static int snapshot_load_kernel_key(struct snapshot_data *data, char *keyinfo = NULL; struct tpm_chip *chip; struct key *key; + struct trusted_key_payload *payload; int i, ret; chip = tpm_default_chip(); @@ -650,8 +691,10 @@ static int snapshot_load_kernel_key(struct snapshot_data *data, digests = kcalloc(chip->nr_allocated_banks, sizeof(struct tpm_digest), GFP_KERNEL); - if (!digests) + if (!digests) { + ret = -ENOMEM; goto reset; + } for (i = 0; i <= chip->nr_allocated_banks; i++) { digests[i].alg_id = chip->allocated_banks[i].alg_id; @@ -690,6 +733,55 @@ static int snapshot_load_kernel_key(struct snapshot_data *data, if (ret < 0) goto out; + /* Verify the creation hash matches the creation data. 
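+	 * The TPM certified creation_hash when the key was created, so
+	 * recomputing SHA-256 over the returned creation data and comparing
+	 * proves the creation data was not substituted after the fact.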
*/ + payload = key->payload.data[0]; + ret = sha256_data(payload->creation, payload->creation_len, certhash); + if (ret < 0) + goto out; + + if (memcmp(payload->creation_hash, certhash, SHA256_DIGEST_SIZE) != 0) { + ret = -EINVAL; + goto out; + } + + /* We now know that the creation data is authentic - parse it */ + + /* TPML_PCR_SELECTION.count */ + if (be32_to_cpu(*(int *)payload->creation) != 1) { + ret = -EINVAL; + goto out; + } + + if (be16_to_cpu(*(u16 *)&payload->creation[4]) != TPM_ALG_SHA256) { + ret = -EINVAL; + goto out; + } + + if (*(char *)&payload->creation[6] != 3) { + ret = -EINVAL; + goto out; + } + + /* PCR 23 selected */ + if (be32_to_cpu(*(int *)&payload->creation[6]) != 0x03000080) { + ret = -EINVAL; + goto out; + } + + if (be16_to_cpu(*(u16 *)&payload->creation[10]) != + SHA256_DIGEST_SIZE) { + ret = -EINVAL; + goto out; + } + + /* Verify PCR 23 contained the expected value when the key was created. */ + if (memcmp(&payload->creation[12], expected_digest, + SHA256_DIGEST_SIZE) != 0) { + + ret = -EINVAL; + goto out; + } + data->key = key; key = NULL; From patchwork Wed May 4 23:21:02 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Evan Green X-Patchwork-Id: 12838894 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 03424C433EF for ; Wed, 4 May 2022 23:54:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230361AbiEDX53 (ORCPT ); Wed, 4 May 2022 19:57:29 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:42268 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1385524AbiEDXZ3 (ORCPT ); Wed, 4 May 2022 19:25:29 -0400 Received: from mail-pf1-x42d.google.com (mail-pf1-x42d.google.com [IPv6:2607:f8b0:4864:20::42d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id F06BD4D9FD for ; Wed, 4 May 2022 16:21:51 -0700 (PDT) Received: by mail-pf1-x42d.google.com with SMTP id v11so2296946pff.6 for ; Wed, 04 May 2022 16:21:51 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=chromium.org; s=google; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=JZes23V8D29o9o+3/8Wg3E/cTc0rQTuNxnYi5rLsric=; b=XrelHwAQ5nZclhDVNUhijM04rz+AIMdWor66LM+U6mT7FdbzNPxCB1jEHkolFtO4pA MhjNNy3YXQuvcea24SJHqpTqMomdy1jVehXiIhPxQ32FHx6DMgjDu5+wKs0EavqHS1ao 4OTz4nqw6vO7eIYGX95TlNSpXeIVZrvpANGY8= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=JZes23V8D29o9o+3/8Wg3E/cTc0rQTuNxnYi5rLsric=; b=xKk4e2mfPyA25QTRf0mUg1supO1vQcblmlbIs27LPP/mdFFUFTsil4ubTjWNSfIs2Y 5ufzEQJkGKpF/5brETrwLAIkrGun+NNKejbDW3n9N6Oqou8giJBJ90rjN0MGsIWFbsN+ OK4ead+f4YsrwpZfMvI34yJteixTCEbAMNLkdOIjysDEwdgVQ/7gMIM5ZeTYbF1uDDxM 7sqjCycCRad9GTqGsY7hRu5guq0bManbYylml/kdUs2/z5z1mqIHWXq4og76+sSMg1CI xBE2OxEw0gZpoX8qhqgeQpVqXKca9uEoVkEN3Bg/6Fe1ET3mtDi2FrmZCeoQDBAW6YbG pERw== X-Gm-Message-State: AOAM532w+vXbqJe+d8Cxpxj3QMN1HstxUsKhc8MgGQ2zJMSyCbvoC6qm 8Xf1j3h8HVYXRsWgnF9ll43Dfw== X-Google-Smtp-Source: ABdhPJxkwEzrHWGeAldGbUw4MtrMMRq8pNSoFW6ViiQ7UOM/5tnP8ce/RBsuczT6aEuLXMgf7vqdQA== X-Received: by 2002:a63:91c4:0:b0:3c1:d47f:1a4c with SMTP id 
l187-20020a6391c4000000b003c1d47f1a4cmr18349062pge.396.1651706511488; Wed, 04 May 2022 16:21:51 -0700 (PDT) Received: from evgreen-glaptop.lan ([98.47.98.87]) by smtp.gmail.com with ESMTPSA id q12-20020a170902f78c00b0015e8d4eb2d6sm1901pln.288.2022.05.04.16.21.50 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 04 May 2022 16:21:51 -0700 (PDT) From: Evan Green To: linux-kernel@vger.kernel.org Cc: Matthew Garrett , dlunev@google.com, zohar@linux.ibm.com, jejb@linux.ibm.com, linux-integrity@vger.kernel.org, corbet@lwn.net, rjw@rjwysocki.net, gwendal@chromium.org, jarkko@kernel.org, linux-pm@vger.kernel.org, Evan Green , Hao Wu , Len Brown , Matthew Garrett , Pavel Machek , "Rafael J. Wysocki" , axelj Subject: [PATCH 10/10] PM: hibernate: seal the encryption key with a PCR policy Date: Wed, 4 May 2022 16:21:02 -0700 Message-Id: <20220504161439.10.Ifce072ae1ef1ce39bd681fff55af13a054045d9f@changeid> X-Mailer: git-send-email 2.31.0 In-Reply-To: <20220504232102.469959-1-evgreen@chromium.org> References: <20220504232102.469959-1-evgreen@chromium.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org The key blob is not secret, and by default the TPM will happily unseal it regardless of system state. We can protect against that by sealing the secret with a PCR policy - if the current PCR state doesn't match, the TPM will refuse to release the secret. For now let's just seal it to PCR 23. In the long term we may want a more flexible policy around this, such as including PCR 7 for PCs or 0 for Chrome OS. Sourced-from: Matthew Garrett Signed-off-by: Evan Green --- The original version of this patch is here: https://patchwork.kernel.org/project/linux-pm/patch/20210220013255.1083202-10-matthewgarrett@google.com/ include/linux/tpm.h | 4 + kernel/power/snapenc.c | 163 +++++++++++++++++++++++++++++++++++++++-- 2 files changed, 160 insertions(+), 7 deletions(-) diff --git a/include/linux/tpm.h b/include/linux/tpm.h index 438f8bc0a50582..cd520efc515bca 100644 --- a/include/linux/tpm.h +++ b/include/linux/tpm.h @@ -233,18 +233,22 @@ enum tpm2_command_codes { TPM2_CC_CONTEXT_LOAD = 0x0161, TPM2_CC_CONTEXT_SAVE = 0x0162, TPM2_CC_FLUSH_CONTEXT = 0x0165, + TPM2_CC_START_AUTH_SESSION = 0x0176, TPM2_CC_VERIFY_SIGNATURE = 0x0177, TPM2_CC_GET_CAPABILITY = 0x017A, TPM2_CC_GET_RANDOM = 0x017B, TPM2_CC_PCR_READ = 0x017E, + TPM2_CC_POLICY_PCR = 0x017F, TPM2_CC_PCR_EXTEND = 0x0182, TPM2_CC_EVENT_SEQUENCE_COMPLETE = 0x0185, TPM2_CC_HASH_SEQUENCE_START = 0x0186, + TPM2_CC_POLICY_GET_DIGEST = 0x0189, TPM2_CC_CREATE_LOADED = 0x0191, TPM2_CC_LAST = 0x0193, /* Spec 1.36 */ }; enum tpm2_permanent_handles { + TPM2_RH_NULL = 0x40000007, TPM2_RS_PW = 0x40000009, }; diff --git a/kernel/power/snapenc.c b/kernel/power/snapenc.c index 38bc820f780d8b..9d140c62b49db1 100644 --- a/kernel/power/snapenc.c +++ b/kernel/power/snapenc.c @@ -495,6 +495,111 @@ void snapshot_teardown_encryption(struct snapshot_data *data) memset(data->user_key, 0, sizeof(data->user_key)); } +static int tpm_setup_policy(struct tpm_chip *chip, int *session_handle) +{ + struct tpm_header *head; + struct tpm_buf buf; + char nonce[32] = {0x00}; + int rc; + + rc = tpm_buf_init(&buf, TPM2_ST_NO_SESSIONS, + TPM2_CC_START_AUTH_SESSION); + if (rc) + return rc; + + /* Decrypt key */ + tpm_buf_append_u32(&buf, TPM2_RH_NULL); + + /* Auth entity */ + tpm_buf_append_u32(&buf, TPM2_RH_NULL); + + /* Nonce - blank is fine here */ + tpm_buf_append_u16(&buf, sizeof(nonce)); + tpm_buf_append(&buf, nonce, sizeof(nonce)); + + /* 
Encrypted secret - empty */ + tpm_buf_append_u16(&buf, 0); + + /* Policy type - session */ + tpm_buf_append_u8(&buf, 0x01); + + /* Encryption type - NULL */ + tpm_buf_append_u16(&buf, TPM_ALG_NULL); + + /* Hash type - SHA256 */ + tpm_buf_append_u16(&buf, TPM_ALG_SHA256); + + rc = tpm_send(chip, buf.data, tpm_buf_length(&buf)); + if (rc) + goto out; + + head = (struct tpm_header *)buf.data; + if (be32_to_cpu(head->length) != sizeof(struct tpm_header) + + sizeof(int) + sizeof(u16) + sizeof(nonce)) { + rc = -EINVAL; + goto out; + } + + *session_handle = be32_to_cpu(*(int *)&buf.data[10]); + memcpy(nonce, &buf.data[16], sizeof(nonce)); + tpm_buf_destroy(&buf); + rc = tpm_buf_init(&buf, TPM2_ST_NO_SESSIONS, TPM2_CC_POLICY_PCR); + if (rc) + return rc; + + tpm_buf_append_u32(&buf, *session_handle); + + /* PCR digest - read from the PCR, we'll verify creation data later */ + tpm_buf_append_u16(&buf, 0); + + /* One PCR */ + tpm_buf_append_u32(&buf, 1); + + /* SHA256 banks */ + tpm_buf_append_u16(&buf, TPM_ALG_SHA256); + + /* Select PCR 23 */ + tpm_buf_append_u32(&buf, 0x03000080); + rc = tpm_send(chip, buf.data, tpm_buf_length(&buf)); + if (rc) + goto out; + +out: + tpm_buf_destroy(&buf); + return rc; +} + +static int tpm_policy_get_digest(struct tpm_chip *chip, int handle, + char *digest) +{ + struct tpm_header *head; + struct tpm_buf buf; + int rc; + + rc = tpm_buf_init(&buf, TPM2_ST_NO_SESSIONS, TPM2_CC_POLICY_GET_DIGEST); + if (rc) + return rc; + + tpm_buf_append_u32(&buf, handle); + rc = tpm_send(chip, buf.data, tpm_buf_length(&buf)); + + if (rc) + goto out; + + head = (struct tpm_header *)buf.data; + if (be32_to_cpu(head->length) != sizeof(struct tpm_header) + + sizeof(u16) + SHA256_DIGEST_SIZE) { + rc = -EINVAL; + goto out; + } + + memcpy(digest, &buf.data[12], SHA256_DIGEST_SIZE); + +out: + tpm_buf_destroy(&buf); + return rc; +} + static int snapshot_setup_encryption_common(struct snapshot_data *data) { int i, rc; @@ -554,7 +659,11 @@ static int snapshot_create_kernel_key(struct snapshot_data *data) struct key *key; int ret, i; /* Create a key sealed by the SRK. 
*/ - char *keyinfo = "new\t32\tkeyhandle=0x81000000\tcreationpcrs=0x00800000"; + char *keyinfo = NULL; + const char *keytemplate = "new\t32\tkeyhandle=0x81000000\tcreationpcrs=0x00800000\tpolicydigest=%s"; + char policy[SHA256_DIGEST_SIZE]; + char *policydigest = NULL; + int session_handle = -1; chip = tpm_default_chip(); if (!chip) @@ -584,13 +693,35 @@ static int snapshot_create_kernel_key(struct snapshot_data *data) if (ret != 0) goto reset; + policydigest = kmalloc(SHA256_DIGEST_SIZE * 2 + 1, GFP_KERNEL); + if (!policydigest) { + ret = -ENOMEM; + goto reset; + } + + ret = tpm_setup_policy(chip, &session_handle); + if (ret != 0) + goto reset; + + ret = tpm_policy_get_digest(chip, session_handle, policy); + if (ret != 0) + goto flush; + + bin2hex(policydigest, policy, SHA256_DIGEST_SIZE); + policydigest[SHA256_DIGEST_SIZE * 2] = '\0'; + keyinfo = kasprintf(GFP_KERNEL, keytemplate, policydigest); + if (!keyinfo) { + ret = -ENOMEM; + goto flush; + } + key = key_alloc(&key_type_trusted, "swsusp", GLOBAL_ROOT_UID, GLOBAL_ROOT_GID, cred, 0, KEY_ALLOC_NOT_IN_QUOTA, NULL); if (IS_ERR(key)) { ret = PTR_ERR(key); - goto reset; + goto flush; } ret = key_instantiate_and_link(key, keyinfo, strlen(keyinfo) + 1, NULL, @@ -606,8 +737,14 @@ static int snapshot_create_kernel_key(struct snapshot_data *data) key_revoke(key); key_put(key); } + +flush: + tpm2_flush_context(chip, session_handle); + reset: kfree(digests); + kfree(keyinfo); + kfree(policydigest); tpm_pcr_reset(chip, 23); return ret; } @@ -669,13 +806,14 @@ static int snapshot_load_kernel_key(struct snapshot_data *data, char certhash[SHA256_DIGEST_SIZE]; const struct cred *cred = current_cred(); - char *keytemplate = "load\t%s\tkeyhandle=0x81000000"; + char *keytemplate = "load\t%s\tkeyhandle=0x81000000\tpolicyhandle=0x%x"; struct tpm_digest *digests = NULL; char *blobstring = NULL; char *keyinfo = NULL; struct tpm_chip *chip; struct key *key; struct trusted_key_payload *payload; + int session_handle = -1; int i, ret; chip = tpm_default_chip(); @@ -706,17 +844,24 @@ static int snapshot_load_kernel_key(struct snapshot_data *data, if (ret != 0) goto reset; - blobstring = kmalloc(blob->blob_len * 2, GFP_KERNEL); + ret = tpm_setup_policy(chip, &session_handle); + if (ret != 0) + goto reset; + + blobstring = kmalloc(blob->blob_len * 2 + 1, GFP_KERNEL); if (!blobstring) { ret = -ENOMEM; - goto reset; + goto flush; } bin2hex(blobstring, blob->blob, blob->blob_len); - keyinfo = kasprintf(GFP_KERNEL, keytemplate, blobstring); + blobstring[blob->blob_len * 2] = '\0'; + keyinfo = kasprintf(GFP_KERNEL, keytemplate, blobstring, + session_handle); + if (!keyinfo) { ret = -ENOMEM; - goto reset; + goto flush; } key = key_alloc(&key_type_trusted, "swsusp", GLOBAL_ROOT_UID, @@ -790,6 +935,10 @@ static int snapshot_load_kernel_key(struct snapshot_data *data, key_revoke(key); key_put(key); } + +flush: + tpm2_flush_context(chip, session_handle); + reset: kfree(keyinfo); kfree(blobstring);