From patchwork Tue Nov 7 10:37:00 2017
X-Patchwork-Submitter: Roberto Sassu
X-Patchwork-Id: 10046239
From: Roberto Sassu
Subject: [PATCH v2 05/15] ima: add functions to manage digest lists
Date: Tue, 7 Nov 2017 11:37:00 +0100
Message-ID: <20171107103710.10883-6-roberto.sassu@huawei.com>
In-Reply-To: <20171107103710.10883-1-roberto.sassu@huawei.com>
References: <20171107103710.10883-1-roberto.sassu@huawei.com>
X-Mailer: git-send-email 2.11.0

This patch introduces a new structure, ima_digest, which holds a digest
parsed from a digest list. A dedicated structure is preferred over the
existing ima_queue_entry, because the latter includes an additional
member (a list head) that is not needed for digest lookup. The new
structure also carries an is_mutable field, which indicates whether a
file with the given digest may be updated.

Finally, this patch introduces functions to look up a digest in, and add
a digest to, the new ima_digests_htable hash table.
Changelog v1:
- added support for immutable/mutable files

Signed-off-by: Roberto Sassu
---
 security/integrity/ima/ima.h       |  9 ++++++++
 security/integrity/ima/ima_queue.c | 42 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 51 insertions(+)

diff --git a/security/integrity/ima/ima.h b/security/integrity/ima/ima.h
index d52b487ad259..1f6591a57fea 100644
--- a/security/integrity/ima/ima.h
+++ b/security/integrity/ima/ima.h
@@ -107,6 +107,12 @@ struct ima_queue_entry {
 };
 extern struct list_head ima_measurements;	/* list of all measurements */
 
+struct ima_digest {
+	struct hlist_node hnext;
+	u8 is_mutable;
+	u8 digest[0];
+};
+
 /* Some details preceding the binary serialized measurement list */
 struct ima_kexec_hdr {
 	u16 version;
@@ -150,6 +156,8 @@ void ima_print_digest(struct seq_file *m, u8 *digest, u32 size);
 struct ima_template_desc *ima_template_desc_current(void);
 int ima_restore_measurement_entry(struct ima_template_entry *entry);
 int ima_restore_measurement_list(loff_t bufsize, void *buf);
+struct ima_digest *ima_lookup_loaded_digest(u8 *digest);
+int ima_add_digest_data_entry(u8 *digest, u8 is_mutable);
 int ima_measurements_show(struct seq_file *m, void *v);
 unsigned long ima_get_binary_runtime_size(void);
 int ima_init_template(void);
@@ -166,6 +174,7 @@ struct ima_h_table {
 	struct hlist_head queue[IMA_MEASURE_HTABLE_SIZE];
 };
 extern struct ima_h_table ima_htable;
+extern struct ima_h_table ima_digests_htable;
 
 static inline unsigned long ima_hash_key(u8 *digest)
 {
diff --git a/security/integrity/ima/ima_queue.c b/security/integrity/ima/ima_queue.c
index a02a86d51102..96c91c413430 100644
--- a/security/integrity/ima/ima_queue.c
+++ b/security/integrity/ima/ima_queue.c
@@ -42,6 +42,11 @@ struct ima_h_table ima_htable = {
 	.queue[0 ... IMA_MEASURE_HTABLE_SIZE - 1] = HLIST_HEAD_INIT
 };
 
+struct ima_h_table ima_digests_htable = {
+	.len = ATOMIC_LONG_INIT(0),
+	.queue[0 ... IMA_MEASURE_HTABLE_SIZE - 1] = HLIST_HEAD_INIT
+};
+
 /* mutex protects atomicity of extending measurement list
  * and extending the TPM PCR aggregate. Since tpm_extend can take
  * long (and the tpm driver uses a mutex), we can't use the spinlock.
@@ -212,3 +217,40 @@ int ima_restore_measurement_entry(struct ima_template_entry *entry)
 	mutex_unlock(&ima_extend_list_mutex);
 	return result;
 }
+
+struct ima_digest *ima_lookup_loaded_digest(u8 *digest)
+{
+	struct ima_digest *d = NULL;
+	int digest_len = hash_digest_size[ima_hash_algo];
+	unsigned int key = ima_hash_key(digest);
+
+	rcu_read_lock();
+	hlist_for_each_entry_rcu(d, &ima_digests_htable.queue[key], hnext) {
+		if (memcmp(d->digest, digest, digest_len) == 0)
+			break;
+	}
+	rcu_read_unlock();
+	return d;
+}
+
+int ima_add_digest_data_entry(u8 *digest, u8 is_mutable)
+{
+	struct ima_digest *d = ima_lookup_loaded_digest(digest);
+	int digest_len = hash_digest_size[ima_hash_algo];
+	unsigned int key = ima_hash_key(digest);
+
+	if (d) {
+		d->is_mutable = is_mutable;
+		return -EEXIST;
+	}
+
+	d = kmalloc(sizeof(*d) + digest_len, GFP_KERNEL);
+	if (d == NULL)
+		return -ENOMEM;
+
+	d->is_mutable = is_mutable;
+	memcpy(d->digest, digest, digest_len);
+	hlist_add_head_rcu(&d->hnext, &ima_digests_htable.queue[key]);
+	atomic_long_inc(&ima_digests_htable.len);
+	return 0;
+}
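
For context only (not part of this patch): a minimal sketch of how the two new
helpers might be used. The function ima_example_handle_digest() and the policy
it applies are hypothetical; the real callers (the digest list parser and the
measurement/appraisal paths) are introduced later in this series. The sketch
assumes the declarations from security/integrity/ima/ima.h added above.

	/*
	 * Hypothetical caller, for illustration only.
	 *
	 * While parsing a digest list, each digest would be registered with
	 * ima_add_digest_data_entry(); -EEXIST only means the entry was
	 * already present and its is_mutable flag has been refreshed.
	 *
	 * At measurement/appraisal time, a file digest found in
	 * ima_digests_htable could be treated as already known, and
	 * is_mutable consulted before allowing the file to change.
	 */
	static int ima_example_handle_digest(u8 *digest, u8 is_mutable,
					     bool file_will_change)
	{
		struct ima_digest *found;
		int ret;

		/* digest list loading side (assumed usage) */
		ret = ima_add_digest_data_entry(digest, is_mutable);
		if (ret < 0 && ret != -EEXIST)
			return ret;

		/* lookup side (assumed policy) */
		found = ima_lookup_loaded_digest(digest);
		if (!found)
			return -ENOENT;		/* digest not in any loaded list */
		if (file_will_change && !found->is_mutable)
			return -EACCES;		/* immutable digest: no updates */
		return 0;			/* digest known to the table */
	}

The lookup runs under rcu_read_lock(), matching the hlist_add_head_rcu()
insertion in ima_add_digest_data_entry(), so readers never need the
ima_extend_list_mutex used for the measurement list itself.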