From patchwork Wed Mar 25 10:47:06 2020
From: Roberto Sassu
Subject: [PATCH v4 1/7] ima: Switch to ima_hash_algo for boot aggregate
Date: Wed, 25 Mar 2020 11:47:06 +0100
Message-ID: <20200325104712.25694-2-roberto.sassu@huawei.com>

boot_aggregate is the first entry in the IMA measurement list. Its purpose
is to link pre-boot measurements to IMA measurements. As IMA was designed
to work with a TPM 1.2, the SHA1 PCR bank was always selected, even if a
TPM 2.0 with support for stronger hash algorithms is available.

This patch first tries to find a PCR bank with the IMA default hash
algorithm. If it does not find one, it selects the SHA256 PCR bank for
TPM 2.0 and SHA1 for TPM 1.2. Ultimately, it also selects SHA1 for TPM 2.0
if the SHA256 PCR bank is not found.

If none of the PCR banks above can be found, the boot_aggregate file digest
is filled with zeros, as in the TPM bypass case, making it impossible to
perform a remote attestation of the system.
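For reference, the selection order described above can be sketched as a
small stand-alone C program (the enum values and the bank structure below
are simplified stand-ins for illustration, not the kernel's definitions):

#include <stdio.h>

enum hash_algo { HASH_ALGO_SHA1, HASH_ALGO_SHA256, HASH_ALGO_SHA512 };

struct bank { enum hash_algo crypto_id; };

/*
 * Preference order: the IMA default hash algorithm wins, otherwise SHA256,
 * otherwise SHA1. A return value of -1 means no suitable bank was found,
 * in which case the boot_aggregate digest stays filled with zeros.
 */
static int select_boot_aggregate_bank(const struct bank *banks, int nr_banks,
                                      enum hash_algo ima_hash_algo)
{
        int i, bank_idx = -1;

        for (i = 0; i < nr_banks; i++) {
                if (banks[i].crypto_id == ima_hash_algo)
                        return i;
                if (banks[i].crypto_id == HASH_ALGO_SHA256)
                        bank_idx = i;
                if (bank_idx == -1 && banks[i].crypto_id == HASH_ALGO_SHA1)
                        bank_idx = i;
        }
        return bank_idx;
}

int main(void)
{
        /* TPM 2.0 with SHA1 and SHA256 banks, IMA default set to SHA512. */
        struct bank banks[] = { { HASH_ALGO_SHA1 }, { HASH_ALGO_SHA256 } };

        /* Prints 1: the SHA256 bank is preferred over SHA1. */
        printf("selected bank index: %d\n",
               select_boot_aggregate_bank(banks, 2, HASH_ALGO_SHA512));
        return 0;
}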
Changelog v3: - Remove option to select the first PCR bank and select SHA1 as fallback choice also for TPM 2.0 (suggested by Mimi) v1: - add Mimi's comments - if there is no PCR bank for the IMA default hash algorithm use SHA256 (suggested by James Bottomley) Cc: stable@vger.kernel.org # 5.1.x Fixes: 879b589210a9 ("tpm: retrieve digest size of unknown algorithms with PCR read") Reported-by: Jerry Snitselaar Suggested-by: James Bottomley Signed-off-by: Roberto Sassu Tested-by: Jerry Snitselaar --- security/integrity/ima/ima_crypto.c | 47 +++++++++++++++++++++++++---- security/integrity/ima/ima_init.c | 22 +++++++++++--- 2 files changed, 58 insertions(+), 11 deletions(-) diff --git a/security/integrity/ima/ima_crypto.c b/security/integrity/ima/ima_crypto.c index 423c84f95a14..8e445a671225 100644 --- a/security/integrity/ima/ima_crypto.c +++ b/security/integrity/ima/ima_crypto.c @@ -655,18 +655,29 @@ static void __init ima_pcrread(u32 idx, struct tpm_digest *d) } /* - * Calculate the boot aggregate hash + * The boot_aggregate is a cumulative hash over TPM registers 0 - 7. With + * TPM 1.2 the boot_aggregate was based on reading the SHA1 PCRs, but with + * TPM 2.0 hash agility, TPM chips could support multiple TPM PCR banks, + * allowing firmware to configure and enable different banks. + * + * Knowing which TPM bank is read to calculate the boot_aggregate digest + * needs to be conveyed to a verifier. For this reason, use the same + * hash algorithm for reading the TPM PCRs as for calculating the boot + * aggregate digest as stored in the measurement list. */ -static int __init ima_calc_boot_aggregate_tfm(char *digest, +static int __init ima_calc_boot_aggregate_tfm(char *digest, u16 alg_id, struct crypto_shash *tfm) { - struct tpm_digest d = { .alg_id = TPM_ALG_SHA1, .digest = {0} }; + struct tpm_digest d = { .alg_id = alg_id, .digest = {0} }; int rc; u32 i; SHASH_DESC_ON_STACK(shash, tfm); shash->tfm = tfm; + pr_devel("calculating the boot-aggregate based on TPM bank: %04x\n", + d.alg_id); + rc = crypto_shash_init(shash); if (rc != 0) return rc; @@ -675,7 +686,8 @@ static int __init ima_calc_boot_aggregate_tfm(char *digest, for (i = TPM_PCR0; i < TPM_PCR8; i++) { ima_pcrread(i, &d); /* now accumulate with current aggregate */ - rc = crypto_shash_update(shash, d.digest, TPM_DIGEST_SIZE); + rc = crypto_shash_update(shash, d.digest, + crypto_shash_digestsize(tfm)); } if (!rc) crypto_shash_final(shash, digest); @@ -685,14 +697,37 @@ static int __init ima_calc_boot_aggregate_tfm(char *digest, int __init ima_calc_boot_aggregate(struct ima_digest_data *hash) { struct crypto_shash *tfm; - int rc; + u16 crypto_id, alg_id; + int rc, i, bank_idx = -1; + + for (i = 0; i < ima_tpm_chip->nr_allocated_banks; i++) { + crypto_id = ima_tpm_chip->allocated_banks[i].crypto_id; + if (crypto_id == hash->algo) { + bank_idx = i; + break; + } + + if (crypto_id == HASH_ALGO_SHA256) + bank_idx = i; + + if (bank_idx == -1 && crypto_id == HASH_ALGO_SHA1) + bank_idx = i; + } + + if (bank_idx == -1) { + pr_err("No suitable TPM algorithm for boot aggregate\n"); + return 0; + } + + hash->algo = ima_tpm_chip->allocated_banks[bank_idx].crypto_id; tfm = ima_alloc_tfm(hash->algo); if (IS_ERR(tfm)) return PTR_ERR(tfm); hash->length = crypto_shash_digestsize(tfm); - rc = ima_calc_boot_aggregate_tfm(hash->digest, tfm); + alg_id = ima_tpm_chip->allocated_banks[bank_idx].alg_id; + rc = ima_calc_boot_aggregate_tfm(hash->digest, alg_id, tfm); ima_free_tfm(tfm); diff --git a/security/integrity/ima/ima_init.c 
b/security/integrity/ima/ima_init.c
index 567468188a61..fc1e1002b48d 100644
--- a/security/integrity/ima/ima_init.c
+++ b/security/integrity/ima/ima_init.c
@@ -25,7 +25,7 @@ struct tpm_chip *ima_tpm_chip;
 /* Add the boot aggregate to the IMA measurement list and extend
  * the PCR register.
  *
- * Calculate the boot aggregate, a SHA1 over tpm registers 0-7,
+ * Calculate the boot aggregate, a hash over tpm registers 0-7,
  * assuming a TPM chip exists, and zeroes if the TPM chip does not
  * exist. Add the boot aggregate measurement to the measurement
  * list and extend the PCR register.
@@ -49,15 +49,27 @@ static int __init ima_add_boot_aggregate(void)
 	int violation = 0;
 	struct {
 		struct ima_digest_data hdr;
-		char digest[TPM_DIGEST_SIZE];
+		char digest[TPM_MAX_DIGEST_SIZE];
 	} hash;

 	memset(iint, 0, sizeof(*iint));
 	memset(&hash, 0, sizeof(hash));
 	iint->ima_hash = &hash.hdr;
-	iint->ima_hash->algo = HASH_ALGO_SHA1;
-	iint->ima_hash->length = SHA1_DIGEST_SIZE;
-
+	iint->ima_hash->algo = ima_hash_algo;
+	iint->ima_hash->length = hash_digest_size[ima_hash_algo];
+
+	/*
+	 * With TPM 2.0 hash agility, TPM chips could support multiple TPM
+	 * PCR banks, allowing firmware to configure and enable different
+	 * banks. The SHA1 bank is not necessarily enabled.
+	 *
+	 * Use the same hash algorithm for reading the TPM PCRs as for
+	 * calculating the boot aggregate digest. Preference is given to
+	 * the configured IMA default hash algorithm. Otherwise, use the
+	 * TCG required banks - SHA256 for TPM 2.0, SHA1 for TPM 1.2.
+	 * Ultimately select SHA1 also for TPM 2.0 if the SHA256 PCR bank
+	 * is not found.
+	 */
 	if (ima_tpm_chip) {
 		result = ima_calc_boot_aggregate(&hash.hdr);
 		if (result < 0) {

From patchwork Wed Mar 25 10:47:07 2020
From: Roberto Sassu
Subject: [PATCH v4 2/7] ima: Evaluate error in init_ima()
Date: Wed, 25 Mar 2020 11:47:07 +0100
Message-ID: <20200325104712.25694-3-roberto.sassu@huawei.com>
Evaluate the error in init_ima() before calling
register_blocking_lsm_notifier() and return if it is not zero.

Cc: stable@vger.kernel.org # 5.3.x
Fixes: b16942455193 ("ima: use the lsm policy update notifier")
Signed-off-by: Roberto Sassu
Reviewed-by: James Morris
---
 security/integrity/ima/ima_main.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/security/integrity/ima/ima_main.c b/security/integrity/ima/ima_main.c
index 9d0abedeae77..f96f151294e6 100644
--- a/security/integrity/ima/ima_main.c
+++ b/security/integrity/ima/ima_main.c
@@ -792,6 +792,9 @@ static int __init init_ima(void)
 		error = ima_init();
 	}

+	if (error)
+		return error;
+
 	error = register_blocking_lsm_notifier(&ima_lsm_policy_notifier);
 	if (error)
 		pr_warn("Couldn't register LSM notifier, error %d\n", error);

From patchwork Wed Mar 25 10:47:08 2020
From: Roberto Sassu
Subject: [PATCH v4 3/7] ima: Store template digest directly in ima_template_entry
Date: Wed, 25 Mar 2020 11:47:08 +0100
Message-ID: <20200325104712.25694-4-roberto.sassu@huawei.com>

In preparation for the patch that calculates a digest for each allocated
PCR bank, this patch passes the ima_template_entry structure to
ima_calc_field_array_hash(), so that digests can be stored directly in
that structure instead of in ima_digest_data.
Signed-off-by: Roberto Sassu --- security/integrity/ima/ima.h | 3 +-- security/integrity/ima/ima_api.c | 12 +----------- security/integrity/ima/ima_crypto.c | 18 +++++++----------- 3 files changed, 9 insertions(+), 24 deletions(-) diff --git a/security/integrity/ima/ima.h b/security/integrity/ima/ima.h index 64317d95363e..a2dfe24e04c7 100644 --- a/security/integrity/ima/ima.h +++ b/security/integrity/ima/ima.h @@ -138,8 +138,7 @@ int ima_calc_file_hash(struct file *file, struct ima_digest_data *hash); int ima_calc_buffer_hash(const void *buf, loff_t len, struct ima_digest_data *hash); int ima_calc_field_array_hash(struct ima_field_data *field_data, - struct ima_template_desc *desc, int num_fields, - struct ima_digest_data *hash); + struct ima_template_entry *entry); int __init ima_calc_boot_aggregate(struct ima_digest_data *hash); void ima_add_violation(struct file *file, const unsigned char *filename, struct integrity_iint_cache *iint, diff --git a/security/integrity/ima/ima_api.c b/security/integrity/ima/ima_api.c index f6bc00914aa5..2ef5a40c7ca5 100644 --- a/security/integrity/ima/ima_api.c +++ b/security/integrity/ima/ima_api.c @@ -96,26 +96,16 @@ int ima_store_template(struct ima_template_entry *entry, static const char audit_cause[] = "hashing_error"; char *template_name = entry->template_desc->name; int result; - struct { - struct ima_digest_data hdr; - char digest[TPM_DIGEST_SIZE]; - } hash; if (!violation) { - int num_fields = entry->template_desc->num_fields; - - /* this function uses default algo */ - hash.hdr.algo = HASH_ALGO_SHA1; result = ima_calc_field_array_hash(&entry->template_data[0], - entry->template_desc, - num_fields, &hash.hdr); + entry); if (result < 0) { integrity_audit_msg(AUDIT_INTEGRITY_PCR, inode, template_name, op, audit_cause, result, 0); return result; } - memcpy(entry->digest, hash.hdr.digest, hash.hdr.length); } entry->pcr = pcr; result = ima_add_template_entry(entry, violation, op, inode, filename); diff --git a/security/integrity/ima/ima_crypto.c b/security/integrity/ima/ima_crypto.c index 8e445a671225..03d73a4009ab 100644 --- a/security/integrity/ima/ima_crypto.c +++ b/security/integrity/ima/ima_crypto.c @@ -464,18 +464,16 @@ int ima_calc_file_hash(struct file *file, struct ima_digest_data *hash) * Calculate the hash of template data */ static int ima_calc_field_array_hash_tfm(struct ima_field_data *field_data, - struct ima_template_desc *td, - int num_fields, - struct ima_digest_data *hash, + struct ima_template_entry *entry, struct crypto_shash *tfm) { SHASH_DESC_ON_STACK(shash, tfm); + struct ima_template_desc *td = entry->template_desc; + int num_fields = entry->template_desc->num_fields; int rc, i; shash->tfm = tfm; - hash->length = crypto_shash_digestsize(tfm); - rc = crypto_shash_init(shash); if (rc != 0) return rc; @@ -504,24 +502,22 @@ static int ima_calc_field_array_hash_tfm(struct ima_field_data *field_data, } if (!rc) - rc = crypto_shash_final(shash, hash->digest); + rc = crypto_shash_final(shash, entry->digest); return rc; } int ima_calc_field_array_hash(struct ima_field_data *field_data, - struct ima_template_desc *desc, int num_fields, - struct ima_digest_data *hash) + struct ima_template_entry *entry) { struct crypto_shash *tfm; int rc; - tfm = ima_alloc_tfm(hash->algo); + tfm = ima_alloc_tfm(HASH_ALGO_SHA1); if (IS_ERR(tfm)) return PTR_ERR(tfm); - rc = ima_calc_field_array_hash_tfm(field_data, desc, num_fields, - hash, tfm); + rc = ima_calc_field_array_hash_tfm(field_data, entry, tfm); ima_free_tfm(tfm); From patchwork Wed Mar 25 
10:47:09 2020
From: Roberto Sassu
Subject: [PATCH v4 4/7] ima: Switch to dynamically allocated buffer for template digests
Date: Wed, 25 Mar 2020 11:47:09 +0100
Message-ID: <20200325104712.25694-5-roberto.sassu@huawei.com>

This patch dynamically allocates the array of tpm_digest structures in
ima_alloc_init_template() and ima_restore_template_data(). The size of the
array is equal to the number of PCR banks plus ima_extra_slots, to make
room for SHA1 and the IMA default hash algorithm when PCR banks with those
algorithms are not allocated.

Calculating the SHA1 digest is mandatory, as SHA1 remains the default hash
algorithm for the measurement list. Once IMA supports the Crypto Agile
format, the remaining digests will also be provided.

The position of the SHA1 digest in the measurement entry array is stored
in the ima_sha1_idx global variable and is determined at IMA
initialization time.
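As an illustration of the sizing described above (a hypothetical bank
configuration, not one taken from the patch): on a TPM 2.0 whose firmware
enabled only the SHA256 and SHA384 banks, with ima_hash_algo also set to
SHA256, each measurement entry would carry

    digests[0]  SHA256 bank digest (also the IMA default algorithm)
    digests[1]  SHA384 bank digest
    digests[2]  SHA1 digest (extra slot, ima_sha1_idx == 2)

that is, nr_allocated_banks = 2 plus ima_extra_slots = 1, for a total of
three tpm_digest slots per entry.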
Changelog v3: - improve comment for ima_extra_slots (suggested by Mimi) - declare local variable digests in ima_alloc_init_template() and ima_restore_template_data() (suggested by Mimi) v2: - add NR_BANKS macro to return zero if ima_tpm_chip is NULL - replace ima_num_template_digests with NR_BANKS(ima_tpm_chip) + ima_extra_slots (suggested by Mimi) - add __ro_after_init to declaration of ima_sha1_idx and ima_extra_slots (suggested by Mimi) v1: - move ima_sha1_idx to ima_crypto.c - introduce ima_num_template_digests (suggested by Mimi) Signed-off-by: Roberto Sassu --- security/integrity/ima/ima.h | 6 +++++- security/integrity/ima/ima_api.c | 10 ++++++++++ security/integrity/ima/ima_crypto.c | 10 +++++++++- security/integrity/ima/ima_fs.c | 4 ++-- security/integrity/ima/ima_queue.c | 10 ++++++---- security/integrity/ima/ima_template.c | 15 +++++++++++++-- 6 files changed, 45 insertions(+), 10 deletions(-) diff --git a/security/integrity/ima/ima.h b/security/integrity/ima/ima.h index a2dfe24e04c7..2a7ed68e6414 100644 --- a/security/integrity/ima/ima.h +++ b/security/integrity/ima/ima.h @@ -45,11 +45,15 @@ enum tpm_pcrs { TPM_PCR0 = 0, TPM_PCR8 = 8 }; #define IMA_TEMPLATE_IMA_NAME "ima" #define IMA_TEMPLATE_IMA_FMT "d|n" +#define NR_BANKS(chip) ((chip != NULL) ? chip->nr_allocated_banks : 0) + /* current content of the policy */ extern int ima_policy_flag; /* set during initialization */ extern int ima_hash_algo; +extern int ima_sha1_idx __ro_after_init; +extern int ima_extra_slots __ro_after_init; extern int ima_appraise; extern struct tpm_chip *ima_tpm_chip; @@ -92,7 +96,7 @@ struct ima_template_desc { struct ima_template_entry { int pcr; - u8 digest[TPM_DIGEST_SIZE]; /* sha1 or md5 measurement hash */ + struct tpm_digest *digests; struct ima_template_desc *template_desc; /* template descriptor */ u32 template_data_len; struct ima_field_data template_data[0]; /* template related data */ diff --git a/security/integrity/ima/ima_api.c b/security/integrity/ima/ima_api.c index 2ef5a40c7ca5..78e0b0a7723e 100644 --- a/security/integrity/ima/ima_api.c +++ b/security/integrity/ima/ima_api.c @@ -27,6 +27,7 @@ void ima_free_template_entry(struct ima_template_entry *entry) for (i = 0; i < entry->template_desc->num_fields; i++) kfree(entry->template_data[i].data); + kfree(entry->digests); kfree(entry); } @@ -38,6 +39,7 @@ int ima_alloc_init_template(struct ima_event_data *event_data, struct ima_template_desc *desc) { struct ima_template_desc *template_desc; + struct tpm_digest *digests; int i, result = 0; if (desc) @@ -50,6 +52,14 @@ int ima_alloc_init_template(struct ima_event_data *event_data, if (!*entry) return -ENOMEM; + digests = kcalloc(NR_BANKS(ima_tpm_chip) + ima_extra_slots, + sizeof(*digests), GFP_NOFS); + if (!digests) { + result = -ENOMEM; + goto out; + } + + (*entry)->digests = digests; (*entry)->template_desc = template_desc; for (i = 0; i < template_desc->num_fields; i++) { const struct ima_template_field *field = diff --git a/security/integrity/ima/ima_crypto.c b/security/integrity/ima/ima_crypto.c index 03d73a4009ab..fe02eb28b32b 100644 --- a/security/integrity/ima/ima_crypto.c +++ b/security/integrity/ima/ima_crypto.c @@ -57,6 +57,13 @@ MODULE_PARM_DESC(ahash_bufsize, "Maximum ahash buffer size"); static struct crypto_shash *ima_shash_tfm; static struct crypto_ahash *ima_ahash_tfm; +int ima_sha1_idx __ro_after_init; +/* + * Additional number of slots reserved, as needed, for SHA1 + * and IMA default algo. 
+ */ +int ima_extra_slots __ro_after_init = 1; + int __init ima_init_crypto(void) { long rc; @@ -502,7 +509,8 @@ static int ima_calc_field_array_hash_tfm(struct ima_field_data *field_data, } if (!rc) - rc = crypto_shash_final(shash, entry->digest); + rc = crypto_shash_final(shash, + entry->digests[ima_sha1_idx].digest); return rc; } diff --git a/security/integrity/ima/ima_fs.c b/security/integrity/ima/ima_fs.c index a71e822a6e92..8b030a1c5e0d 100644 --- a/security/integrity/ima/ima_fs.c +++ b/security/integrity/ima/ima_fs.c @@ -150,7 +150,7 @@ int ima_measurements_show(struct seq_file *m, void *v) ima_putc(m, &pcr, sizeof(e->pcr)); /* 2nd: template digest */ - ima_putc(m, e->digest, TPM_DIGEST_SIZE); + ima_putc(m, e->digests[ima_sha1_idx].digest, TPM_DIGEST_SIZE); /* 3rd: template name size */ namelen = !ima_canonical_fmt ? strlen(template_name) : @@ -233,7 +233,7 @@ static int ima_ascii_measurements_show(struct seq_file *m, void *v) seq_printf(m, "%2d ", e->pcr); /* 2nd: SHA1 template hash */ - ima_print_digest(m, e->digest, TPM_DIGEST_SIZE); + ima_print_digest(m, e->digests[ima_sha1_idx].digest, TPM_DIGEST_SIZE); /* 3th: template name */ seq_printf(m, " %s", template_name); diff --git a/security/integrity/ima/ima_queue.c b/security/integrity/ima/ima_queue.c index 8753212ddb18..49db71c200b4 100644 --- a/security/integrity/ima/ima_queue.c +++ b/security/integrity/ima/ima_queue.c @@ -55,7 +55,8 @@ static struct ima_queue_entry *ima_lookup_digest_entry(u8 *digest_value, key = ima_hash_key(digest_value); rcu_read_lock(); hlist_for_each_entry_rcu(qe, &ima_htable.queue[key], hnext) { - rc = memcmp(qe->entry->digest, digest_value, TPM_DIGEST_SIZE); + rc = memcmp(qe->entry->digests[ima_sha1_idx].digest, + digest_value, TPM_DIGEST_SIZE); if ((rc == 0) && (qe->entry->pcr == pcr)) { ret = qe; break; @@ -75,7 +76,7 @@ static int get_binary_runtime_size(struct ima_template_entry *entry) int size = 0; size += sizeof(u32); /* pcr */ - size += sizeof(entry->digest); + size += TPM_DIGEST_SIZE; size += sizeof(int); /* template name size field */ size += strlen(entry->template_desc->name); size += sizeof(entry->template_data_len); @@ -107,7 +108,7 @@ static int ima_add_digest_entry(struct ima_template_entry *entry, atomic_long_inc(&ima_htable.len); if (update_htable) { - key = ima_hash_key(entry->digest); + key = ima_hash_key(entry->digests[ima_sha1_idx].digest); hlist_add_head_rcu(&qe->hnext, &ima_htable.queue[key]); } @@ -171,7 +172,8 @@ int ima_add_template_entry(struct ima_template_entry *entry, int violation, mutex_lock(&ima_extend_list_mutex); if (!violation) { - memcpy(digest, entry->digest, sizeof(digest)); + memcpy(digest, entry->digests[ima_sha1_idx].digest, + sizeof(digest)); if (ima_lookup_digest_entry(digest, entry->pcr)) { audit_cause = "hash_exists"; result = -EEXIST; diff --git a/security/integrity/ima/ima_template.c b/security/integrity/ima/ima_template.c index 062d9ad49afb..de84252e65e9 100644 --- a/security/integrity/ima/ima_template.c +++ b/security/integrity/ima/ima_template.c @@ -301,6 +301,7 @@ static int ima_restore_template_data(struct ima_template_desc *template_desc, int template_data_size, struct ima_template_entry **entry) { + struct tpm_digest *digests; int ret = 0; int i; @@ -309,11 +310,21 @@ static int ima_restore_template_data(struct ima_template_desc *template_desc, if (!*entry) return -ENOMEM; + digests = kcalloc(NR_BANKS(ima_tpm_chip) + ima_extra_slots, + sizeof(*digests), GFP_NOFS); + if (!digests) { + kfree(*entry); + return -ENOMEM; + } + + (*entry)->digests = 
digests; + ret = ima_parse_buf(template_data, template_data + template_data_size, NULL, template_desc->num_fields, (*entry)->template_data, NULL, NULL, ENFORCE_FIELDS | ENFORCE_BUFEND, "template data"); if (ret < 0) { + kfree((*entry)->digests); kfree(*entry); return ret; } @@ -445,8 +456,8 @@ int ima_restore_measurement_list(loff_t size, void *buf) if (ret < 0) break; - memcpy(entry->digest, hdr[HDR_DIGEST].data, - hdr[HDR_DIGEST].len); + memcpy(entry->digests[ima_sha1_idx].digest, + hdr[HDR_DIGEST].data, hdr[HDR_DIGEST].len); entry->pcr = !ima_canonical_fmt ? *(hdr[HDR_PCR].data) : le32_to_cpu(*(hdr[HDR_PCR].data)); ret = ima_restore_measurement_entry(entry); From patchwork Wed Mar 25 10:52:48 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Roberto Sassu X-Patchwork-Id: 11457461 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 3EC041668 for ; Wed, 25 Mar 2020 10:55:09 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 1E7102077D for ; Wed, 25 Mar 2020 10:55:09 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726658AbgCYKzI (ORCPT ); Wed, 25 Mar 2020 06:55:08 -0400 Received: from lhrrgout.huawei.com ([185.176.76.210]:2592 "EHLO huawei.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726139AbgCYKzI (ORCPT ); Wed, 25 Mar 2020 06:55:08 -0400 Received: from LHREML711-CAH.china.huawei.com (unknown [172.18.7.106]) by Forcepoint Email with ESMTP id 34CB0544EB9BE59D97D0; Wed, 25 Mar 2020 10:55:06 +0000 (GMT) Received: from roberto-HP-EliteDesk-800-G2-DM-65W.huawei.com (10.204.65.160) by smtpsuk.huawei.com (10.201.108.34) with Microsoft SMTP Server (TLS) id 14.3.408.0; Wed, 25 Mar 2020 10:54:59 +0000 From: Roberto Sassu To: , CC: , , , , Roberto Sassu Subject: [PATCH v4 5/7] ima: Allocate and initialize tfm for each PCR bank Date: Wed, 25 Mar 2020 11:52:48 +0100 Message-ID: <20200325105248.26388-1-roberto.sassu@huawei.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200325104712.25694-1-roberto.sassu@huawei.com> References: <20200325104712.25694-1-roberto.sassu@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.204.65.160] X-CFilter-Loop: Reflected Sender: linux-integrity-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-integrity@vger.kernel.org This patch creates a crypto_shash structure for each allocated PCR bank and for SHA1 if a bank with that algorithm is not currently allocated. 
Changelog v2: - declare ima_init_ima_crypto() as static - replace ima_num_template_digests with NR_BANKS(ima_tpm_chip) + ima_extra_slots (suggested by Mimi) - use ima_sha1_idx to access ima_algo_array elements in ima_init_crypto() v1: - determine ima_num_template_digests before allocating ima_algo_array (suggested by Mimi) - replace kmalloc_array() with kcalloc() in ima_init_crypto() (suggested by Mimi) - check tfm first in ima_alloc_tfm() - check if ima_tpm_chip is NULL Reported-by: kbuild test robot Signed-off-by: Roberto Sassu --- security/integrity/ima/ima_crypto.c | 145 +++++++++++++++++++++++----- 1 file changed, 119 insertions(+), 26 deletions(-) diff --git a/security/integrity/ima/ima_crypto.c b/security/integrity/ima/ima_crypto.c index fe02eb28b32b..ab1c05ad1314 100644 --- a/security/integrity/ima/ima_crypto.c +++ b/security/integrity/ima/ima_crypto.c @@ -57,14 +57,21 @@ MODULE_PARM_DESC(ahash_bufsize, "Maximum ahash buffer size"); static struct crypto_shash *ima_shash_tfm; static struct crypto_ahash *ima_ahash_tfm; +struct ima_algo_desc { + struct crypto_shash *tfm; + enum hash_algo algo; +}; + int ima_sha1_idx __ro_after_init; /* * Additional number of slots reserved, as needed, for SHA1 * and IMA default algo. */ -int ima_extra_slots __ro_after_init = 1; +int ima_extra_slots __ro_after_init; -int __init ima_init_crypto(void) +static struct ima_algo_desc *ima_algo_array; + +static int __init ima_init_ima_crypto(void) { long rc; @@ -83,26 +90,121 @@ int __init ima_init_crypto(void) static struct crypto_shash *ima_alloc_tfm(enum hash_algo algo) { struct crypto_shash *tfm = ima_shash_tfm; - int rc; + int rc, i; if (algo < 0 || algo >= HASH_ALGO__LAST) algo = ima_hash_algo; - if (algo != ima_hash_algo) { - tfm = crypto_alloc_shash(hash_algo_name[algo], 0, 0); - if (IS_ERR(tfm)) { - rc = PTR_ERR(tfm); - pr_err("Can not allocate %s (reason: %d)\n", - hash_algo_name[algo], rc); - } + if (algo == ima_hash_algo) + return tfm; + + for (i = 0; i < NR_BANKS(ima_tpm_chip) + ima_extra_slots; i++) + if (ima_algo_array[i].tfm && ima_algo_array[i].algo == algo) + return ima_algo_array[i].tfm; + + tfm = crypto_alloc_shash(hash_algo_name[algo], 0, 0); + if (IS_ERR(tfm)) { + rc = PTR_ERR(tfm); + pr_err("Can not allocate %s (reason: %d)\n", + hash_algo_name[algo], rc); } return tfm; } +int __init ima_init_crypto(void) +{ + enum hash_algo algo; + long rc; + int i; + + rc = ima_init_ima_crypto(); + if (rc) + return rc; + + ima_sha1_idx = -1; + + for (i = 0; i < NR_BANKS(ima_tpm_chip); i++) { + algo = ima_tpm_chip->allocated_banks[i].crypto_id; + if (algo == HASH_ALGO_SHA1) + ima_sha1_idx = i; + } + + if (ima_sha1_idx < 0) + ima_sha1_idx = NR_BANKS(ima_tpm_chip) + ima_extra_slots++; + + ima_algo_array = kcalloc(NR_BANKS(ima_tpm_chip) + ima_extra_slots, + sizeof(*ima_algo_array), GFP_KERNEL); + if (!ima_algo_array) { + rc = -ENOMEM; + goto out; + } + + for (i = 0; i < NR_BANKS(ima_tpm_chip); i++) { + algo = ima_tpm_chip->allocated_banks[i].crypto_id; + ima_algo_array[i].algo = algo; + + /* unknown TPM algorithm */ + if (algo == HASH_ALGO__LAST) + continue; + + if (algo == ima_hash_algo) { + ima_algo_array[i].tfm = ima_shash_tfm; + continue; + } + + ima_algo_array[i].tfm = ima_alloc_tfm(algo); + if (IS_ERR(ima_algo_array[i].tfm)) { + if (algo == HASH_ALGO_SHA1) { + rc = PTR_ERR(ima_algo_array[i].tfm); + ima_algo_array[i].tfm = NULL; + goto out_array; + } + + ima_algo_array[i].tfm = NULL; + } + } + + if (ima_sha1_idx >= NR_BANKS(ima_tpm_chip)) { + if (ima_hash_algo == HASH_ALGO_SHA1) { + 
ima_algo_array[ima_sha1_idx].tfm = ima_shash_tfm; + } else { + ima_algo_array[ima_sha1_idx].tfm = + ima_alloc_tfm(HASH_ALGO_SHA1); + if (IS_ERR(ima_algo_array[ima_sha1_idx].tfm)) { + rc = PTR_ERR(ima_algo_array[ima_sha1_idx].tfm); + goto out_array; + } + } + + ima_algo_array[ima_sha1_idx].algo = HASH_ALGO_SHA1; + } + + return 0; +out_array: + for (i = 0; i < NR_BANKS(ima_tpm_chip) + ima_extra_slots; i++) { + if (!ima_algo_array[i].tfm || + ima_algo_array[i].tfm == ima_shash_tfm) + continue; + + crypto_free_shash(ima_algo_array[i].tfm); + } +out: + crypto_free_shash(ima_shash_tfm); + return rc; +} + static void ima_free_tfm(struct crypto_shash *tfm) { - if (tfm != ima_shash_tfm) - crypto_free_shash(tfm); + int i; + + if (tfm == ima_shash_tfm) + return; + + for (i = 0; i < NR_BANKS(ima_tpm_chip) + ima_extra_slots; i++) + if (ima_algo_array[i].tfm == tfm) + return; + + crypto_free_shash(tfm); } /** @@ -472,14 +574,14 @@ int ima_calc_file_hash(struct file *file, struct ima_digest_data *hash) */ static int ima_calc_field_array_hash_tfm(struct ima_field_data *field_data, struct ima_template_entry *entry, - struct crypto_shash *tfm) + int tfm_idx) { - SHASH_DESC_ON_STACK(shash, tfm); + SHASH_DESC_ON_STACK(shash, ima_algo_array[tfm_idx].tfm); struct ima_template_desc *td = entry->template_desc; int num_fields = entry->template_desc->num_fields; int rc, i; - shash->tfm = tfm; + shash->tfm = ima_algo_array[tfm_idx].tfm; rc = crypto_shash_init(shash); if (rc != 0) @@ -509,8 +611,7 @@ static int ima_calc_field_array_hash_tfm(struct ima_field_data *field_data, } if (!rc) - rc = crypto_shash_final(shash, - entry->digests[ima_sha1_idx].digest); + rc = crypto_shash_final(shash, entry->digests[tfm_idx].digest); return rc; } @@ -518,17 +619,9 @@ static int ima_calc_field_array_hash_tfm(struct ima_field_data *field_data, int ima_calc_field_array_hash(struct ima_field_data *field_data, struct ima_template_entry *entry) { - struct crypto_shash *tfm; int rc; - tfm = ima_alloc_tfm(HASH_ALGO_SHA1); - if (IS_ERR(tfm)) - return PTR_ERR(tfm); - - rc = ima_calc_field_array_hash_tfm(field_data, entry, tfm); - - ima_free_tfm(tfm); - + rc = ima_calc_field_array_hash_tfm(field_data, entry, ima_sha1_idx); return rc; } From patchwork Wed Mar 25 10:53:50 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Roberto Sassu X-Patchwork-Id: 11457465 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 3DDCE92A for ; Wed, 25 Mar 2020 10:55:40 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 1EC6020722 for ; Wed, 25 Mar 2020 10:55:40 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726206AbgCYKzj (ORCPT ); Wed, 25 Mar 2020 06:55:39 -0400 Received: from lhrrgout.huawei.com ([185.176.76.210]:2593 "EHLO huawei.com" rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726139AbgCYKzj (ORCPT ); Wed, 25 Mar 2020 06:55:39 -0400 Received: from lhreml707-cah.china.huawei.com (unknown [172.18.7.107]) by Forcepoint Email with ESMTP id E65CBD8BB181EA3BD277; Wed, 25 Mar 2020 10:55:36 +0000 (GMT) Received: from roberto-HP-EliteDesk-800-G2-DM-65W.huawei.com (10.204.65.160) by smtpsuk.huawei.com (10.201.108.48) with Microsoft SMTP Server (TLS) id 14.3.408.0; Wed, 25 Mar 2020 10:55:27 +0000 From: Roberto Sassu To: , CC: , , , , Roberto Sassu Subject: 
[PATCH v4 6/7] ima: Calculate and extend PCR with digests in ima_template_entry Date: Wed, 25 Mar 2020 11:53:50 +0100 Message-ID: <20200325105350.26534-1-roberto.sassu@huawei.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200325104712.25694-1-roberto.sassu@huawei.com> References: <20200325104712.25694-1-roberto.sassu@huawei.com> MIME-Version: 1.0 X-Originating-IP: [10.204.65.160] X-CFilter-Loop: Reflected Sender: linux-integrity-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-integrity@vger.kernel.org This patch modifies ima_calc_field_array_hash() to calculate a template digest for each allocated PCR bank and SHA1. It also passes the tpm_digest array of the template entry to ima_pcr_extend() or in case of a violation, the pre-initialized digests array filled with 0xff. Padding with zeros is still done if the mapping between TPM algorithm ID and crypto ID is unknown. This patch calculates again the template digest when a measurement list is restored. Copying only the SHA1 digest (due to the limitation of the current measurement list format) is not sufficient, as hash collision detection will be done on the digest calculated with the IMA default hash algorithm. Changelog v2: - replace ima_num_template_digests with NR_BANKS(ima_tpm_chip) + ima_extra_slots (suggested by Mimi) v1: - replace ima_tpm_chip->nr_allocated_banks with ima_num_template_digests (suggested by Mimi) - retrieve alg_id only if i < ima_tpm_chip->nr_allocated_banks - check if ima_tpm_chip is NULL Signed-off-by: Roberto Sassu --- security/integrity/ima/ima_crypto.c | 29 +++++++++++++++++++++++++- security/integrity/ima/ima_queue.c | 30 ++++++++++++++++----------- security/integrity/ima/ima_template.c | 14 +++++++++++-- 3 files changed, 58 insertions(+), 15 deletions(-) diff --git a/security/integrity/ima/ima_crypto.c b/security/integrity/ima/ima_crypto.c index ab1c05ad1314..a94972d3f929 100644 --- a/security/integrity/ima/ima_crypto.c +++ b/security/integrity/ima/ima_crypto.c @@ -619,9 +619,36 @@ static int ima_calc_field_array_hash_tfm(struct ima_field_data *field_data, int ima_calc_field_array_hash(struct ima_field_data *field_data, struct ima_template_entry *entry) { - int rc; + u16 alg_id; + int rc, i; rc = ima_calc_field_array_hash_tfm(field_data, entry, ima_sha1_idx); + if (rc) + return rc; + + entry->digests[ima_sha1_idx].alg_id = TPM_ALG_SHA1; + + for (i = 0; i < NR_BANKS(ima_tpm_chip) + ima_extra_slots; i++) { + if (i == ima_sha1_idx) + continue; + + if (i < NR_BANKS(ima_tpm_chip)) { + alg_id = ima_tpm_chip->allocated_banks[i].alg_id; + entry->digests[i].alg_id = alg_id; + } + + /* for unmapped TPM algorithms digest is still a padded SHA1 */ + if (!ima_algo_array[i].tfm) { + memcpy(entry->digests[i].digest, + entry->digests[ima_sha1_idx].digest, + TPM_DIGEST_SIZE); + continue; + } + + rc = ima_calc_field_array_hash_tfm(field_data, entry, i); + if (rc) + return rc; + } return rc; } diff --git a/security/integrity/ima/ima_queue.c b/security/integrity/ima/ima_queue.c index 49db71c200b4..82a9ca43b989 100644 --- a/security/integrity/ima/ima_queue.c +++ b/security/integrity/ima/ima_queue.c @@ -135,18 +135,14 @@ unsigned long ima_get_binary_runtime_size(void) return binary_runtime_size + sizeof(struct ima_kexec_hdr); }; -static int ima_pcr_extend(const u8 *hash, int pcr) +static int ima_pcr_extend(struct tpm_digest *digests_arg, int pcr) { int result = 0; - int i; if (!ima_tpm_chip) return result; - for (i = 0; i < ima_tpm_chip->nr_allocated_banks; i++) - memcpy(digests[i].digest, hash, 
TPM_DIGEST_SIZE); - - result = tpm_pcr_extend(ima_tpm_chip, pcr, digests); + result = tpm_pcr_extend(ima_tpm_chip, pcr, digests_arg); if (result != 0) pr_err("Error Communicating to TPM chip, result: %d\n", result); return result; @@ -164,7 +160,8 @@ int ima_add_template_entry(struct ima_template_entry *entry, int violation, const char *op, struct inode *inode, const unsigned char *filename) { - u8 digest[TPM_DIGEST_SIZE]; + u8 *digest = entry->digests[ima_sha1_idx].digest; + struct tpm_digest *digests_arg = entry->digests; const char *audit_cause = "hash_added"; char tpm_audit_cause[AUDIT_CAUSE_LEN_MAX]; int audit_info = 1; @@ -172,8 +169,6 @@ int ima_add_template_entry(struct ima_template_entry *entry, int violation, mutex_lock(&ima_extend_list_mutex); if (!violation) { - memcpy(digest, entry->digests[ima_sha1_idx].digest, - sizeof(digest)); if (ima_lookup_digest_entry(digest, entry->pcr)) { audit_cause = "hash_exists"; result = -EEXIST; @@ -189,9 +184,9 @@ int ima_add_template_entry(struct ima_template_entry *entry, int violation, } if (violation) /* invalidate pcr */ - memset(digest, 0xff, sizeof(digest)); + digests_arg = digests; - tpmresult = ima_pcr_extend(digest, entry->pcr); + tpmresult = ima_pcr_extend(digests_arg, entry->pcr); if (tpmresult != 0) { snprintf(tpm_audit_cause, AUDIT_CAUSE_LEN_MAX, "TPM_error(%d)", tpmresult); @@ -217,6 +212,8 @@ int ima_restore_measurement_entry(struct ima_template_entry *entry) int __init ima_init_digests(void) { + u16 digest_size; + u16 crypto_id; int i; if (!ima_tpm_chip) @@ -227,8 +224,17 @@ int __init ima_init_digests(void) if (!digests) return -ENOMEM; - for (i = 0; i < ima_tpm_chip->nr_allocated_banks; i++) + for (i = 0; i < ima_tpm_chip->nr_allocated_banks; i++) { digests[i].alg_id = ima_tpm_chip->allocated_banks[i].alg_id; + digest_size = ima_tpm_chip->allocated_banks[i].digest_size; + crypto_id = ima_tpm_chip->allocated_banks[i].crypto_id; + + /* for unmapped TPM algorithms digest is still a padded SHA1 */ + if (crypto_id == HASH_ALGO__LAST) + digest_size = SHA1_DIGEST_SIZE; + + memset(digests[i].digest, 0xff, digest_size); + } return 0; } diff --git a/security/integrity/ima/ima_template.c b/security/integrity/ima/ima_template.c index de84252e65e9..5a2def40a733 100644 --- a/security/integrity/ima/ima_template.c +++ b/security/integrity/ima/ima_template.c @@ -357,6 +357,7 @@ static int ima_restore_template_data(struct ima_template_desc *template_desc, int ima_restore_measurement_list(loff_t size, void *buf) { char template_name[MAX_TEMPLATE_NAME_LEN]; + unsigned char zero[TPM_DIGEST_SIZE] = { 0 }; struct ima_kexec_hdr *khdr = buf; struct ima_field_data hdr[HDR__LAST] = { @@ -456,8 +457,17 @@ int ima_restore_measurement_list(loff_t size, void *buf) if (ret < 0) break; - memcpy(entry->digests[ima_sha1_idx].digest, - hdr[HDR_DIGEST].data, hdr[HDR_DIGEST].len); + if (memcmp(hdr[HDR_DIGEST].data, zero, sizeof(zero))) { + ret = ima_calc_field_array_hash( + &entry->template_data[0], + entry); + if (ret < 0) { + pr_err("cannot calculate template digest\n"); + ret = -EINVAL; + break; + } + } + entry->pcr = !ima_canonical_fmt ? 
*(hdr[HDR_PCR].data) : le32_to_cpu(*(hdr[HDR_PCR].data));
 		ret = ima_restore_measurement_entry(entry);

From patchwork Wed Mar 25 10:54:24 2020
From: Roberto Sassu
Subject: [PATCH v4 7/7] ima: Use ima_hash_algo for collision detection in the measurement list
Date: Wed, 25 Mar 2020 11:54:24 +0100
Message-ID: <20200325105424.26665-1-roberto.sassu@huawei.com>

Before calculating a digest for each PCR bank, collisions were detected
with a SHA1 digest. This patch includes ima_hash_algo among the algorithms
used to calculate the template digest and checks for collisions on that
digest.

The position in the measurement entry array of the template digest
calculated with the IMA default hash algorithm is stored in the
ima_hash_algo_idx global variable and is determined at IMA initialization
time.
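To illustrate how the two indices and ima_extra_slots relate, here is a
condensed stand-alone sketch of the slot assignment described above and in
patch 5 (the enum and the globals are simplified stand-ins; error handling
and tfm allocation are omitted):

#include <stdio.h>

enum hash_algo { HASH_ALGO_SHA1, HASH_ALGO_SHA256, HASH_ALGO_SHA512 };

static int ima_sha1_idx, ima_hash_algo_idx, ima_extra_slots;

static void assign_slots(const enum hash_algo *banks, int nr_banks,
                         enum hash_algo ima_hash_algo)
{
        int i;

        ima_sha1_idx = -1;
        ima_hash_algo_idx = -1;

        for (i = 0; i < nr_banks; i++) {
                if (banks[i] == HASH_ALGO_SHA1)
                        ima_sha1_idx = i;
                if (banks[i] == ima_hash_algo)
                        ima_hash_algo_idx = i;
        }

        /* SHA1 always gets a slot: the measurement list format needs it. */
        if (ima_sha1_idx < 0) {
                ima_sha1_idx = nr_banks + ima_extra_slots++;
                if (ima_hash_algo == HASH_ALGO_SHA1)
                        ima_hash_algo_idx = ima_sha1_idx;
        }

        /* The IMA default algorithm gets its own slot if no bank has it. */
        if (ima_hash_algo_idx < 0)
                ima_hash_algo_idx = nr_banks + ima_extra_slots++;
}

int main(void)
{
        /* A single SHA256 bank, with the IMA default set to SHA512. */
        enum hash_algo banks[] = { HASH_ALGO_SHA256 };

        assign_slots(banks, 1, HASH_ALGO_SHA512);

        /* Prints: sha1_idx=1 hash_algo_idx=2 extra_slots=2 */
        printf("sha1_idx=%d hash_algo_idx=%d extra_slots=%d\n",
               ima_sha1_idx, ima_hash_algo_idx, ima_extra_slots);
        return 0;
}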
Changelog v2: - add __ro_after_init to declaration of ima_hash_algo_idx (suggested by Mimi) - replace ima_num_template_digests with NR_BANKS(ima_tpm_chip) + ima_extra_slots(suggested by Mimi) - use ima_hash_algo_idx to access ima_algo_array elements in ima_init_crypto() v1: - increment ima_num_template_digests before kcalloc() (suggested by Mimi) - check if ima_tpm_chip is NULL Signed-off-by: Roberto Sassu --- security/integrity/ima/ima.h | 1 + security/integrity/ima/ima_crypto.c | 19 ++++++++++++++++++- security/integrity/ima/ima_queue.c | 8 ++++---- 3 files changed, 23 insertions(+), 5 deletions(-) diff --git a/security/integrity/ima/ima.h b/security/integrity/ima/ima.h index 2a7ed68e6414..467dfdbea25c 100644 --- a/security/integrity/ima/ima.h +++ b/security/integrity/ima/ima.h @@ -53,6 +53,7 @@ extern int ima_policy_flag; /* set during initialization */ extern int ima_hash_algo; extern int ima_sha1_idx __ro_after_init; +extern int ima_hash_algo_idx __ro_after_init; extern int ima_extra_slots __ro_after_init; extern int ima_appraise; extern struct tpm_chip *ima_tpm_chip; diff --git a/security/integrity/ima/ima_crypto.c b/security/integrity/ima/ima_crypto.c index a94972d3f929..5201f5ec2ce4 100644 --- a/security/integrity/ima/ima_crypto.c +++ b/security/integrity/ima/ima_crypto.c @@ -63,6 +63,7 @@ struct ima_algo_desc { }; int ima_sha1_idx __ro_after_init; +int ima_hash_algo_idx __ro_after_init; /* * Additional number of slots reserved, as needed, for SHA1 * and IMA default algo. @@ -122,15 +123,25 @@ int __init ima_init_crypto(void) return rc; ima_sha1_idx = -1; + ima_hash_algo_idx = -1; for (i = 0; i < NR_BANKS(ima_tpm_chip); i++) { algo = ima_tpm_chip->allocated_banks[i].crypto_id; if (algo == HASH_ALGO_SHA1) ima_sha1_idx = i; + + if (algo == ima_hash_algo) + ima_hash_algo_idx = i; } - if (ima_sha1_idx < 0) + if (ima_sha1_idx < 0) { ima_sha1_idx = NR_BANKS(ima_tpm_chip) + ima_extra_slots++; + if (ima_hash_algo == HASH_ALGO_SHA1) + ima_hash_algo_idx = ima_sha1_idx; + } + + if (ima_hash_algo_idx < 0) + ima_hash_algo_idx = NR_BANKS(ima_tpm_chip) + ima_extra_slots++; ima_algo_array = kcalloc(NR_BANKS(ima_tpm_chip) + ima_extra_slots, sizeof(*ima_algo_array), GFP_KERNEL); @@ -179,6 +190,12 @@ int __init ima_init_crypto(void) ima_algo_array[ima_sha1_idx].algo = HASH_ALGO_SHA1; } + if (ima_hash_algo_idx >= NR_BANKS(ima_tpm_chip) && + ima_hash_algo_idx != ima_sha1_idx) { + ima_algo_array[ima_hash_algo_idx].tfm = ima_shash_tfm; + ima_algo_array[ima_hash_algo_idx].algo = ima_hash_algo; + } + return 0; out_array: for (i = 0; i < NR_BANKS(ima_tpm_chip) + ima_extra_slots; i++) { diff --git a/security/integrity/ima/ima_queue.c b/security/integrity/ima/ima_queue.c index 82a9ca43b989..fb4ec270f620 100644 --- a/security/integrity/ima/ima_queue.c +++ b/security/integrity/ima/ima_queue.c @@ -55,8 +55,8 @@ static struct ima_queue_entry *ima_lookup_digest_entry(u8 *digest_value, key = ima_hash_key(digest_value); rcu_read_lock(); hlist_for_each_entry_rcu(qe, &ima_htable.queue[key], hnext) { - rc = memcmp(qe->entry->digests[ima_sha1_idx].digest, - digest_value, TPM_DIGEST_SIZE); + rc = memcmp(qe->entry->digests[ima_hash_algo_idx].digest, + digest_value, hash_digest_size[ima_hash_algo]); if ((rc == 0) && (qe->entry->pcr == pcr)) { ret = qe; break; @@ -108,7 +108,7 @@ static int ima_add_digest_entry(struct ima_template_entry *entry, atomic_long_inc(&ima_htable.len); if (update_htable) { - key = ima_hash_key(entry->digests[ima_sha1_idx].digest); + key = ima_hash_key(entry->digests[ima_hash_algo_idx].digest); 
hlist_add_head_rcu(&qe->hnext, &ima_htable.queue[key]); } @@ -160,7 +160,7 @@ int ima_add_template_entry(struct ima_template_entry *entry, int violation, const char *op, struct inode *inode, const unsigned char *filename) { - u8 *digest = entry->digests[ima_sha1_idx].digest; + u8 *digest = entry->digests[ima_hash_algo_idx].digest; struct tpm_digest *digests_arg = entry->digests; const char *audit_cause = "hash_added"; char tpm_audit_cause[AUDIT_CAUSE_LEN_MAX];