From patchwork Mon May 15 16:13:28 2023
X-Patchwork-Submitter: "Konstantin Meskhidze (A)"
X-Patchwork-Id: 13241769
From: Konstantin Meskhidze
Subject: [PATCH v11 01/12] landlock: Make ruleset's access masks more generic
Date: Tue, 16 May 2023 00:13:28 +0800
Message-ID: <20230515161339.631577-2-konstantin.meskhidze@huawei.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230515161339.631577-1-konstantin.meskhidze@huawei.com>
References: <20230515161339.631577-1-konstantin.meskhidze@huawei.com>
X-Mailing-List: netdev@vger.kernel.org

To support network rule types, rename the ruleset's fs_access_masks[]
field to access_masks[] and change its element type to the new
access_masks_t. Add filesystem helper functions to add and get the
filesystem access mask.

Signed-off-by: Konstantin Meskhidze
---
Changes since v10:
* Squashes the landlock_get_fs_access_mask() part from commit 2.
Changes since v9:
* None.
Changes since v8:
* Minor fixes.
Changes since v7:
* Refactors commit message.
Changes since v6:
* Adds a new access_masks_t for struct ruleset.
* Renames landlock_set_fs_access_mask() to landlock_add_fs_access_mask()
  because it ORs values.
* Makes landlock_add_fs_access_mask() more resilient to incorrect values.
* Refactors landlock_get_fs_access_mask().
Changes since v5:
* Changes access_mask_t to u32.
* Formats code with clang-format-14.
Changes since v4:
* Deletes struct landlock_access_mask.
Changes since v3:
* Splits commit.
* Adds get_mask, set_mask helpers for filesystem.
* Adds new struct landlock_access_mask.
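For reference, the packing scheme introduced below can be summarized with a
small standalone sketch (userspace C, not the kernel code): each per-layer
access_masks_t word stores the filesystem rights shifted by
LANDLOCK_SHIFT_ACCESS_FS, leaving room for other rule types at other shifts
later in the series. The numeric values, the array size and the unprefixed
helper names here are illustrative only; the real definitions are in the
limits.h and ruleset.h hunks of this patch.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

typedef uint16_t access_mask_t;  /* one bit per filesystem right */
typedef uint16_t access_masks_t; /* packed per-layer word */

/* Illustrative values: the real ones come from limits.h. */
#define LANDLOCK_LAST_ACCESS_FS  (1U << 14)
#define LANDLOCK_MASK_ACCESS_FS  ((LANDLOCK_LAST_ACCESS_FS << 1) - 1)
#define LANDLOCK_SHIFT_ACCESS_FS 0

static access_masks_t access_masks[4]; /* one packed word per layer */

/*
 * Mirrors landlock_add_fs_access_mask(): keeps only known filesystem bits,
 * then ORs them into the layer's word (the kernel helper additionally
 * WARNs on unknown bits, already rejected by sys_landlock_create_ruleset()).
 */
static void add_fs_access_mask(const access_mask_t fs_access_mask,
			       const unsigned int layer_level)
{
	access_masks[layer_level] |=
		(fs_access_mask & LANDLOCK_MASK_ACCESS_FS)
		<< LANDLOCK_SHIFT_ACCESS_FS;
}

/* Mirrors landlock_get_fs_access_mask(): shifts back and masks. */
static access_mask_t get_fs_access_mask(const unsigned int layer_level)
{
	return (access_masks[layer_level] >> LANDLOCK_SHIFT_ACCESS_FS) &
	       LANDLOCK_MASK_ACCESS_FS;
}

int main(void)
{
	add_fs_access_mask(0x3, 0); /* e.g. two filesystem rights */
	add_fs_access_mask(0x4, 0); /* adding ORs in more bits, never clears */
	assert(get_fs_access_mask(0) == 0x7);
	printf("layer 0 fs mask: %#x\n", (unsigned int)get_fs_access_mask(0));
	return 0;
}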
--- security/landlock/fs.c | 10 +++++----- security/landlock/limits.h | 1 + security/landlock/ruleset.c | 17 +++++++++-------- security/landlock/ruleset.h | 34 ++++++++++++++++++++++++++++++---- security/landlock/syscalls.c | 7 ++++--- 5 files changed, 49 insertions(+), 20 deletions(-) -- 2.25.1 diff --git a/security/landlock/fs.c b/security/landlock/fs.c index adcea0fe7e68..0d57c6479d29 100644 --- a/security/landlock/fs.c +++ b/security/landlock/fs.c @@ -178,9 +178,9 @@ int landlock_append_fs_rule(struct landlock_ruleset *const ruleset, return -EINVAL; /* Transforms relative access rights to absolute ones. */ - access_rights |= - LANDLOCK_MASK_ACCESS_FS & - ~(ruleset->fs_access_masks[0] | ACCESS_INITIALLY_DENIED); + access_rights |= LANDLOCK_MASK_ACCESS_FS & + ~(landlock_get_fs_access_mask(ruleset, 0) | + ACCESS_INITIALLY_DENIED); object = get_inode_object(d_backing_inode(path->dentry)); if (IS_ERR(object)) return PTR_ERR(object); @@ -294,7 +294,7 @@ get_handled_accesses(const struct landlock_ruleset *const domain) size_t layer_level; for (layer_level = 0; layer_level < domain->num_layers; layer_level++) - access_dom |= domain->fs_access_masks[layer_level]; + access_dom |= landlock_get_fs_access_mask(domain, layer_level); return access_dom & LANDLOCK_MASK_ACCESS_FS; } @@ -336,7 +336,7 @@ init_layer_masks(const struct landlock_ruleset *const domain, * access rights. */ if (BIT_ULL(access_bit) & - (domain->fs_access_masks[layer_level] | + (landlock_get_fs_access_mask(domain, layer_level) | ACCESS_INITIALLY_DENIED)) { (*layer_masks)[access_bit] |= BIT_ULL(layer_level); diff --git a/security/landlock/limits.h b/security/landlock/limits.h index 82288f0e9e5e..bafb3b8dc677 100644 --- a/security/landlock/limits.h +++ b/security/landlock/limits.h @@ -21,6 +21,7 @@ #define LANDLOCK_LAST_ACCESS_FS LANDLOCK_ACCESS_FS_TRUNCATE #define LANDLOCK_MASK_ACCESS_FS ((LANDLOCK_LAST_ACCESS_FS << 1) - 1) #define LANDLOCK_NUM_ACCESS_FS __const_hweight64(LANDLOCK_MASK_ACCESS_FS) +#define LANDLOCK_SHIFT_ACCESS_FS 0 /* clang-format on */ diff --git a/security/landlock/ruleset.c b/security/landlock/ruleset.c index 996484f98bfd..1f3188b4e313 100644 --- a/security/landlock/ruleset.c +++ b/security/landlock/ruleset.c @@ -29,7 +29,7 @@ static struct landlock_ruleset *create_ruleset(const u32 num_layers) struct landlock_ruleset *new_ruleset; new_ruleset = - kzalloc(struct_size(new_ruleset, fs_access_masks, num_layers), + kzalloc(struct_size(new_ruleset, access_masks, num_layers), GFP_KERNEL_ACCOUNT); if (!new_ruleset) return ERR_PTR(-ENOMEM); @@ -40,7 +40,7 @@ static struct landlock_ruleset *create_ruleset(const u32 num_layers) /* * hierarchy = NULL * num_rules = 0 - * fs_access_masks[] = 0 + * access_masks[] = 0 */ return new_ruleset; } @@ -55,7 +55,7 @@ landlock_create_ruleset(const access_mask_t fs_access_mask) return ERR_PTR(-ENOMSG); new_ruleset = create_ruleset(1); if (!IS_ERR(new_ruleset)) - new_ruleset->fs_access_masks[0] = fs_access_mask; + landlock_add_fs_access_mask(new_ruleset, fs_access_mask, 0); return new_ruleset; } @@ -117,11 +117,12 @@ static void build_check_ruleset(void) .num_rules = ~0, .num_layers = ~0, }; - typeof(ruleset.fs_access_masks[0]) fs_access_mask = ~0; + typeof(ruleset.access_masks[0]) access_masks = ~0; BUILD_BUG_ON(ruleset.num_rules < LANDLOCK_MAX_NUM_RULES); BUILD_BUG_ON(ruleset.num_layers < LANDLOCK_MAX_NUM_LAYERS); - BUILD_BUG_ON(fs_access_mask < LANDLOCK_MASK_ACCESS_FS); + BUILD_BUG_ON(access_masks < + (LANDLOCK_MASK_ACCESS_FS << LANDLOCK_SHIFT_ACCESS_FS)); } /** @@ -281,7 +282,7 @@ 
static int merge_ruleset(struct landlock_ruleset *const dst, err = -EINVAL; goto out_unlock; } - dst->fs_access_masks[dst->num_layers - 1] = src->fs_access_masks[0]; + dst->access_masks[dst->num_layers - 1] = src->access_masks[0]; /* Merges the @src tree. */ rbtree_postorder_for_each_entry_safe(walker_rule, next_rule, &src->root, @@ -340,8 +341,8 @@ static int inherit_ruleset(struct landlock_ruleset *const parent, goto out_unlock; } /* Copies the parent layer stack and leaves a space for the new layer. */ - memcpy(child->fs_access_masks, parent->fs_access_masks, - flex_array_size(parent, fs_access_masks, parent->num_layers)); + memcpy(child->access_masks, parent->access_masks, + flex_array_size(parent, access_masks, parent->num_layers)); if (WARN_ON_ONCE(!parent->hierarchy)) { err = -EINVAL; diff --git a/security/landlock/ruleset.h b/security/landlock/ruleset.h index d43231b783e4..e900b84d915f 100644 --- a/security/landlock/ruleset.h +++ b/security/landlock/ruleset.h @@ -25,6 +25,11 @@ static_assert(BITS_PER_TYPE(access_mask_t) >= LANDLOCK_NUM_ACCESS_FS); /* Makes sure for_each_set_bit() and for_each_clear_bit() calls are OK. */ static_assert(sizeof(unsigned long) >= sizeof(access_mask_t)); +/* Ruleset access masks. */ +typedef u16 access_masks_t; +/* Makes sure all ruleset access rights can be stored. */ +static_assert(BITS_PER_TYPE(access_masks_t) >= LANDLOCK_NUM_ACCESS_FS); + typedef u16 layer_mask_t; /* Makes sure all layers can be checked. */ static_assert(BITS_PER_TYPE(layer_mask_t) >= LANDLOCK_MAX_NUM_LAYERS); @@ -110,7 +115,7 @@ struct landlock_ruleset { * section. This is only used by * landlock_put_ruleset_deferred() when @usage reaches zero. * The fields @lock, @usage, @num_rules, @num_layers and - * @fs_access_masks are then unused. + * @access_masks are then unused. */ struct work_struct work_free; struct { @@ -137,7 +142,7 @@ struct landlock_ruleset { */ u32 num_layers; /** - * @fs_access_masks: Contains the subset of filesystem + * @access_masks: Contains the subset of filesystem * actions that are restricted by a ruleset. A domain * saves all layers of merged rulesets in a stack * (FAM), starting from the first layer to the last @@ -148,13 +153,13 @@ struct landlock_ruleset { * layers are set once and never changed for the * lifetime of the ruleset. */ - access_mask_t fs_access_masks[]; + access_masks_t access_masks[]; }; }; }; struct landlock_ruleset * -landlock_create_ruleset(const access_mask_t fs_access_mask); +landlock_create_ruleset(const access_mask_t access_mask); void landlock_put_ruleset(struct landlock_ruleset *const ruleset); void landlock_put_ruleset_deferred(struct landlock_ruleset *const ruleset); @@ -177,4 +182,25 @@ static inline void landlock_get_ruleset(struct landlock_ruleset *const ruleset) refcount_inc(&ruleset->usage); } +static inline void +landlock_add_fs_access_mask(struct landlock_ruleset *const ruleset, + const access_mask_t fs_access_mask, + const u16 layer_level) +{ + access_mask_t fs_mask = fs_access_mask & LANDLOCK_MASK_ACCESS_FS; + + /* Should already be checked in sys_landlock_create_ruleset(). 
*/ + WARN_ON_ONCE(fs_access_mask != fs_mask); + ruleset->access_masks[layer_level] |= + (fs_mask << LANDLOCK_SHIFT_ACCESS_FS); +} + +static inline access_mask_t +landlock_get_fs_access_mask(const struct landlock_ruleset *const ruleset, + const u16 layer_level) +{ + return (ruleset->access_masks[layer_level] >> + LANDLOCK_SHIFT_ACCESS_FS) & + LANDLOCK_MASK_ACCESS_FS; +} #endif /* _SECURITY_LANDLOCK_RULESET_H */ diff --git a/security/landlock/syscalls.c b/security/landlock/syscalls.c index 245cc650a4dc..7ec6bbed7117 100644 --- a/security/landlock/syscalls.c +++ b/security/landlock/syscalls.c @@ -310,6 +310,7 @@ SYSCALL_DEFINE4(landlock_add_rule, const int, ruleset_fd, struct path path; struct landlock_ruleset *ruleset; int res, err; + access_mask_t mask; if (!landlock_initialized) return -EOPNOTSUPP; @@ -346,10 +347,10 @@ SYSCALL_DEFINE4(landlock_add_rule, const int, ruleset_fd, } /* * Checks that allowed_access matches the @ruleset constraints - * (ruleset->fs_access_masks[0] is automatically upgraded to 64-bits). + * (ruleset->access_masks[0] is automatically upgraded to 64-bits). */ - if ((path_beneath_attr.allowed_access | ruleset->fs_access_masks[0]) != - ruleset->fs_access_masks[0]) { + mask = landlock_get_fs_access_mask(ruleset, 0); + if ((path_beneath_attr.allowed_access | mask) != mask) { err = -EINVAL; goto out_put_ruleset; } From patchwork Mon May 15 16:13:29 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: "Konstantin Meskhidze (A)" X-Patchwork-Id: 13241770 Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id C2C05E543 for ; Mon, 15 May 2023 16:14:12 +0000 (UTC) Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E4366199; Mon, 15 May 2023 09:14:08 -0700 (PDT) Received: from lhrpeml500004.china.huawei.com (unknown [172.18.147.206]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4QKksd1LFYz67nGS; Tue, 16 May 2023 00:12:21 +0800 (CST) Received: from mscphis00759.huawei.com (10.123.66.134) by lhrpeml500004.china.huawei.com (7.191.163.9) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.23; Mon, 15 May 2023 17:14:06 +0100 From: Konstantin Meskhidze To: CC: , , , , , , Subject: [PATCH v11 02/12] landlock: Allow filesystem layout changes for domains without such rule type Date: Tue, 16 May 2023 00:13:29 +0800 Message-ID: <20230515161339.631577-3-konstantin.meskhidze@huawei.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230515161339.631577-1-konstantin.meskhidze@huawei.com> References: <20230515161339.631577-1-konstantin.meskhidze@huawei.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Originating-IP: [10.123.66.134] X-ClientProxiedBy: mscpeml500002.china.huawei.com (7.188.26.138) To lhrpeml500004.china.huawei.com (7.191.163.9) X-CFilter-Loop: Reflected X-Spam-Status: No, score=-4.2 required=5.0 tests=BAYES_00,RCVD_IN_DNSWL_MED, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net From: Mickaël Salaün Allow mount point and root directory changes when there is no filesystem 
rule tied to the current Landlock domain. This doesn't change anything for now because a domain must have at least a (filesystem) rule, but this will change when other rule types will come. For instance, a domain only restricting the network should have no impact on filesystem restrictions. Add a new get_current_fs_domain() helper to quickly check filesystem rule existence for all filesystem LSM hooks. Remove unnecessary inlining. Signed-off-by: Mickaël Salaün --- Changes since v10: * Squashes landlock_get_fs_access_mask() part into commit 1. * Opportunistically removes inline get_raw_handled_fs_accesses function with changing it's signature. Changes since v9: * Refactors documentaion landlock.rst. * Changes ACCESS_FS_INITIALLY_DENIED constant to LANDLOCK_ACCESS_FS_INITIALLY_DENIED. * Gets rid of unnecessary masking of access_dom in get_raw_handled_fs_accesses() function. Changes since v8: * Refactors get_handled_fs_accesses(). * Adds landlock_get_raw_fs_access_mask() helper. --- Documentation/userspace-api/landlock.rst | 6 +- security/landlock/fs.c | 74 ++++++++++++------------ security/landlock/ruleset.h | 25 +++++++- security/landlock/syscalls.c | 2 +- 4 files changed, 64 insertions(+), 43 deletions(-) -- 2.25.1 diff --git a/Documentation/userspace-api/landlock.rst b/Documentation/userspace-api/landlock.rst index d8cd8cd9ce25..f6a7da21708a 100644 --- a/Documentation/userspace-api/landlock.rst +++ b/Documentation/userspace-api/landlock.rst @@ -387,9 +387,9 @@ Current limitations Filesystem topology modification -------------------------------- -As for file renaming and linking, a sandboxed thread cannot modify its -filesystem topology, whether via :manpage:`mount(2)` or -:manpage:`pivot_root(2)`. However, :manpage:`chroot(2)` calls are not denied. +Threads sandboxed with filesystem restrictions cannot modify filesystem +topology, whether via :manpage:`mount(2)` or :manpage:`pivot_root(2)`. +However, :manpage:`chroot(2)` calls are not denied. Special filesystems ------------------- diff --git a/security/landlock/fs.c b/security/landlock/fs.c index 0d57c6479d29..a0c9c927fdf9 100644 --- a/security/landlock/fs.c +++ b/security/landlock/fs.c @@ -150,16 +150,6 @@ static struct landlock_object *get_inode_object(struct inode *const inode) LANDLOCK_ACCESS_FS_TRUNCATE) /* clang-format on */ -/* - * All access rights that are denied by default whether they are handled or not - * by a ruleset/layer. This must be ORed with all ruleset->fs_access_masks[] - * entries when we need to get the absolute handled access masks. - */ -/* clang-format off */ -#define ACCESS_INITIALLY_DENIED ( \ - LANDLOCK_ACCESS_FS_REFER) -/* clang-format on */ - /* * @path: Should have been checked by get_path_from_fd(). */ @@ -179,8 +169,7 @@ int landlock_append_fs_rule(struct landlock_ruleset *const ruleset, /* Transforms relative access rights to absolute ones. 
*/ access_rights |= LANDLOCK_MASK_ACCESS_FS & - ~(landlock_get_fs_access_mask(ruleset, 0) | - ACCESS_INITIALLY_DENIED); + ~landlock_get_fs_access_mask(ruleset, 0); object = get_inode_object(d_backing_inode(path->dentry)); if (IS_ERR(object)) return PTR_ERR(object); @@ -287,15 +276,16 @@ static inline bool is_nouser_or_private(const struct dentry *dentry) unlikely(IS_PRIVATE(d_backing_inode(dentry)))); } -static inline access_mask_t -get_handled_accesses(const struct landlock_ruleset *const domain) +static access_mask_t +get_raw_handled_fs_accesses(const struct landlock_ruleset *const domain) { - access_mask_t access_dom = ACCESS_INITIALLY_DENIED; + access_mask_t access_dom = 0; size_t layer_level; for (layer_level = 0; layer_level < domain->num_layers; layer_level++) - access_dom |= landlock_get_fs_access_mask(domain, layer_level); - return access_dom & LANDLOCK_MASK_ACCESS_FS; + access_dom |= + landlock_get_raw_fs_access_mask(domain, layer_level); + return access_dom; } /** @@ -331,13 +321,8 @@ init_layer_masks(const struct landlock_ruleset *const domain, for_each_set_bit(access_bit, &access_req, ARRAY_SIZE(*layer_masks)) { - /* - * Artificially handles all initially denied by default - * access rights. - */ if (BIT_ULL(access_bit) & - (landlock_get_fs_access_mask(domain, layer_level) | - ACCESS_INITIALLY_DENIED)) { + landlock_get_fs_access_mask(domain, layer_level)) { (*layer_masks)[access_bit] |= BIT_ULL(layer_level); handled_accesses |= BIT_ULL(access_bit); @@ -347,6 +332,25 @@ init_layer_masks(const struct landlock_ruleset *const domain, return handled_accesses; } +static access_mask_t +get_handled_fs_accesses(const struct landlock_ruleset *const domain) +{ + /* Handles all initially denied by default access rights. */ + return get_raw_handled_fs_accesses(domain) | + LANDLOCK_ACCESS_FS_INITIALLY_DENIED; +} + +static const struct landlock_ruleset *get_current_fs_domain(void) +{ + const struct landlock_ruleset *const dom = + landlock_get_current_domain(); + + if (!dom || !get_raw_handled_fs_accesses(dom)) + return NULL; + + return dom; +} + /* * Check that a destination file hierarchy has more restrictions than a source * file hierarchy. This is only used for link and rename actions. @@ -519,7 +523,7 @@ static bool is_access_to_paths_allowed( * a superset of the meaningful requested accesses). 
*/ access_masked_parent1 = access_masked_parent2 = - get_handled_accesses(domain); + get_handled_fs_accesses(domain); is_dom_check = true; } else { if (WARN_ON_ONCE(dentry_child1 || dentry_child2)) @@ -651,8 +655,7 @@ static inline int check_access_path(const struct landlock_ruleset *const domain, static inline int current_check_access_path(const struct path *const path, const access_mask_t access_request) { - const struct landlock_ruleset *const dom = - landlock_get_current_domain(); + const struct landlock_ruleset *const dom = get_current_fs_domain(); if (!dom) return 0; @@ -815,8 +818,7 @@ static int current_check_refer_path(struct dentry *const old_dentry, struct dentry *const new_dentry, const bool removable, const bool exchange) { - const struct landlock_ruleset *const dom = - landlock_get_current_domain(); + const struct landlock_ruleset *const dom = get_current_fs_domain(); bool allow_parent1, allow_parent2; access_mask_t access_request_parent1, access_request_parent2; struct path mnt_dir; @@ -1050,7 +1052,7 @@ static int hook_sb_mount(const char *const dev_name, const struct path *const path, const char *const type, const unsigned long flags, void *const data) { - if (!landlock_get_current_domain()) + if (!get_current_fs_domain()) return 0; return -EPERM; } @@ -1058,7 +1060,7 @@ static int hook_sb_mount(const char *const dev_name, static int hook_move_mount(const struct path *const from_path, const struct path *const to_path) { - if (!landlock_get_current_domain()) + if (!get_current_fs_domain()) return 0; return -EPERM; } @@ -1069,14 +1071,14 @@ static int hook_move_mount(const struct path *const from_path, */ static int hook_sb_umount(struct vfsmount *const mnt, const int flags) { - if (!landlock_get_current_domain()) + if (!get_current_fs_domain()) return 0; return -EPERM; } static int hook_sb_remount(struct super_block *const sb, void *const mnt_opts) { - if (!landlock_get_current_domain()) + if (!get_current_fs_domain()) return 0; return -EPERM; } @@ -1092,7 +1094,7 @@ static int hook_sb_remount(struct super_block *const sb, void *const mnt_opts) static int hook_sb_pivotroot(const struct path *const old_path, const struct path *const new_path) { - if (!landlock_get_current_domain()) + if (!get_current_fs_domain()) return 0; return -EPERM; } @@ -1128,8 +1130,7 @@ static int hook_path_mknod(const struct path *const dir, struct dentry *const dentry, const umode_t mode, const unsigned int dev) { - const struct landlock_ruleset *const dom = - landlock_get_current_domain(); + const struct landlock_ruleset *const dom = get_current_fs_domain(); if (!dom) return 0; @@ -1208,8 +1209,7 @@ static int hook_file_open(struct file *const file) layer_mask_t layer_masks[LANDLOCK_NUM_ACCESS_FS] = {}; access_mask_t open_access_request, full_access_request, allowed_access; const access_mask_t optional_access = LANDLOCK_ACCESS_FS_TRUNCATE; - const struct landlock_ruleset *const dom = - landlock_get_current_domain(); + const struct landlock_ruleset *const dom = get_current_fs_domain(); if (!dom) return 0; diff --git a/security/landlock/ruleset.h b/security/landlock/ruleset.h index e900b84d915f..baef84071f37 100644 --- a/security/landlock/ruleset.h +++ b/security/landlock/ruleset.h @@ -15,10 +15,21 @@ #include #include #include +#include #include "limits.h" #include "object.h" +/* + * All access rights that are denied by default whether they are handled or not + * by a ruleset/layer. This must be ORed with all ruleset->access_masks[] + * entries when we need to get the absolute handled access masks. 
+ */ +/* clang-format off */ +#define LANDLOCK_ACCESS_FS_INITIALLY_DENIED ( \ + LANDLOCK_ACCESS_FS_REFER) +/* clang-format on */ + typedef u16 access_mask_t; /* Makes sure all filesystem access rights can be stored. */ static_assert(BITS_PER_TYPE(access_mask_t) >= LANDLOCK_NUM_ACCESS_FS); @@ -196,11 +207,21 @@ landlock_add_fs_access_mask(struct landlock_ruleset *const ruleset, } static inline access_mask_t -landlock_get_fs_access_mask(const struct landlock_ruleset *const ruleset, - const u16 layer_level) +landlock_get_raw_fs_access_mask(const struct landlock_ruleset *const ruleset, + const u16 layer_level) { return (ruleset->access_masks[layer_level] >> LANDLOCK_SHIFT_ACCESS_FS) & LANDLOCK_MASK_ACCESS_FS; } + +static inline access_mask_t +landlock_get_fs_access_mask(const struct landlock_ruleset *const ruleset, + const u16 layer_level) +{ + /* Handles all initially denied by default access rights. */ + return landlock_get_raw_fs_access_mask(ruleset, layer_level) | + LANDLOCK_ACCESS_FS_INITIALLY_DENIED; +} + #endif /* _SECURITY_LANDLOCK_RULESET_H */ diff --git a/security/landlock/syscalls.c b/security/landlock/syscalls.c index 7ec6bbed7117..d35cd5d304db 100644 --- a/security/landlock/syscalls.c +++ b/security/landlock/syscalls.c @@ -349,7 +349,7 @@ SYSCALL_DEFINE4(landlock_add_rule, const int, ruleset_fd, * Checks that allowed_access matches the @ruleset constraints * (ruleset->access_masks[0] is automatically upgraded to 64-bits). */ - mask = landlock_get_fs_access_mask(ruleset, 0); + mask = landlock_get_raw_fs_access_mask(ruleset, 0); if ((path_beneath_attr.allowed_access | mask) != mask) { err = -EINVAL; goto out_put_ruleset; From patchwork Mon May 15 16:13:30 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: "Konstantin Meskhidze (A)" X-Patchwork-Id: 13241771 Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DFF7CFBF1 for ; Mon, 15 May 2023 16:14:15 +0000 (UTC) Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2033C19BA; Mon, 15 May 2023 09:14:12 -0700 (PDT) Received: from lhrpeml500004.china.huawei.com (unknown [172.18.147.226]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4QKkpy1yKkz6J73P; Tue, 16 May 2023 00:10:02 +0800 (CST) Received: from mscphis00759.huawei.com (10.123.66.134) by lhrpeml500004.china.huawei.com (7.191.163.9) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.23; Mon, 15 May 2023 17:14:09 +0100 From: Konstantin Meskhidze To: CC: , , , , , , Subject: [PATCH v11 03/12] landlock: Refactor landlock_find_rule/insert_rule Date: Tue, 16 May 2023 00:13:30 +0800 Message-ID: <20230515161339.631577-4-konstantin.meskhidze@huawei.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230515161339.631577-1-konstantin.meskhidze@huawei.com> References: <20230515161339.631577-1-konstantin.meskhidze@huawei.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Originating-IP: [10.123.66.134] X-ClientProxiedBy: mscpeml500002.china.huawei.com (7.188.26.138) To lhrpeml500004.china.huawei.com (7.191.163.9) X-CFilter-Loop: Reflected X-Spam-Status: No, score=-4.2 required=5.0 tests=BAYES_00,RCVD_IN_DNSWL_MED, 
RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Add a new landlock_key union and landlock_id structure to support a socket port rule type. A struct landlock_id identifies a unique entry in a ruleset: either a kernel object (e.g inode) or typed data (e.g TCP port). There is one red-black tree per key type. This patch also adds is_object_pointer() and get_root() helpers. is_object_pointer() returns true if key type is LANDLOCK_KEY_INODE. get_root() helper returns a red_black tree root pointer according to a key type. Refactor landlock_insert_rule() and landlock_find_rule() to support coming network modifications. Adding or searching a rule in ruleset can now be done thanks to a Landlock ID argument passed to these helpers. Co-developed-by: Mickaël Salaün Signed-off-by: Mickaël Salaün Signed-off-by: Konstantin Meskhidze --- Changes since v10: * None. Changes since v9: * Splits commit. * Refactors commit message. * Minor fixes. Changes since v8: * Refactors commit message. * Removes inlining. * Minor fixes. Changes since v7: * Completes all the new field descriptions landlock_key, landlock_key_type, landlock_id. * Refactors commit message, adds a co-developer. Changes since v6: * Adds union landlock_key, enum landlock_key_type, and struct landlock_id. * Refactors ruleset functions and improves switch/cases: create_rule(), insert_rule(), get_root(), is_object_pointer(), free_rule(), landlock_find_rule(). * Refactors landlock_append_fs_rule() functions to support new landlock_id type. Changes since v5: * Formats code with clang-format-14. Changes since v4: * Refactors insert_rule() and create_rule() functions by deleting rule_type from their arguments list, it helps to reduce useless code. Changes since v3: * Splits commit. * Refactors landlock_insert_rule and landlock_find_rule functions. * Rename new_ruleset->root_inode. --- security/landlock/fs.c | 21 +++--- security/landlock/ruleset.c | 134 ++++++++++++++++++++++++++---------- security/landlock/ruleset.h | 65 ++++++++++++++--- 3 files changed, 166 insertions(+), 54 deletions(-) -- 2.25.1 diff --git a/security/landlock/fs.c b/security/landlock/fs.c index a0c9c927fdf9..9a8e70f65a88 100644 --- a/security/landlock/fs.c +++ b/security/landlock/fs.c @@ -158,7 +158,9 @@ int landlock_append_fs_rule(struct landlock_ruleset *const ruleset, access_mask_t access_rights) { int err; - struct landlock_object *object; + struct landlock_id id = { + .type = LANDLOCK_KEY_INODE, + }; /* Files only get access rights that make sense. */ if (!d_is_dir(path->dentry) && @@ -170,17 +172,17 @@ int landlock_append_fs_rule(struct landlock_ruleset *const ruleset, /* Transforms relative access rights to absolute ones. */ access_rights |= LANDLOCK_MASK_ACCESS_FS & ~landlock_get_fs_access_mask(ruleset, 0); - object = get_inode_object(d_backing_inode(path->dentry)); - if (IS_ERR(object)) - return PTR_ERR(object); + id.key.object = get_inode_object(d_backing_inode(path->dentry)); + if (IS_ERR(id.key.object)) + return PTR_ERR(id.key.object); mutex_lock(&ruleset->lock); - err = landlock_insert_rule(ruleset, object, access_rights); + err = landlock_insert_rule(ruleset, id, access_rights); mutex_unlock(&ruleset->lock); /* * No need to check for an error because landlock_insert_rule() * increments the refcount for the new object if needed. 
*/ - landlock_put_object(object); + landlock_put_object(id.key.object); return err; } @@ -197,6 +199,9 @@ find_rule(const struct landlock_ruleset *const domain, { const struct landlock_rule *rule; const struct inode *inode; + struct landlock_id id = { + .type = LANDLOCK_KEY_INODE, + }; /* Ignores nonexistent leafs. */ if (d_is_negative(dentry)) @@ -204,8 +209,8 @@ find_rule(const struct landlock_ruleset *const domain, inode = d_backing_inode(dentry); rcu_read_lock(); - rule = landlock_find_rule( - domain, rcu_dereference(landlock_inode(inode)->object)); + id.key.object = rcu_dereference(landlock_inode(inode)->object); + rule = landlock_find_rule(domain, id); rcu_read_unlock(); return rule; } diff --git a/security/landlock/ruleset.c b/security/landlock/ruleset.c index 1f3188b4e313..deab37838f5b 100644 --- a/security/landlock/ruleset.c +++ b/security/landlock/ruleset.c @@ -35,7 +35,7 @@ static struct landlock_ruleset *create_ruleset(const u32 num_layers) return ERR_PTR(-ENOMEM); refcount_set(&new_ruleset->usage, 1); mutex_init(&new_ruleset->lock); - new_ruleset->root = RB_ROOT; + new_ruleset->root_inode = RB_ROOT; new_ruleset->num_layers = num_layers; /* * hierarchy = NULL @@ -68,8 +68,18 @@ static void build_check_rule(void) BUILD_BUG_ON(rule.num_layers < LANDLOCK_MAX_NUM_LAYERS); } +static bool is_object_pointer(const enum landlock_key_type key_type) +{ + switch (key_type) { + case LANDLOCK_KEY_INODE: + return true; + } + WARN_ON_ONCE(1); + return false; +} + static struct landlock_rule * -create_rule(struct landlock_object *const object, +create_rule(const struct landlock_id id, const struct landlock_layer (*const layers)[], const u32 num_layers, const struct landlock_layer *const new_layer) { @@ -90,8 +100,13 @@ create_rule(struct landlock_object *const object, if (!new_rule) return ERR_PTR(-ENOMEM); RB_CLEAR_NODE(&new_rule->node); - landlock_get_object(object); - new_rule->object = object; + if (is_object_pointer(id.type)) { + /* This should be catched by insert_rule(). */ + WARN_ON_ONCE(!id.key.object); + landlock_get_object(id.key.object); + } + + new_rule->key = id.key; new_rule->num_layers = new_num_layers; /* Copies the original layer stack. */ memcpy(new_rule->layers, layers, @@ -102,12 +117,29 @@ create_rule(struct landlock_object *const object, return new_rule; } -static void free_rule(struct landlock_rule *const rule) +static struct rb_root *get_root(struct landlock_ruleset *const ruleset, + const enum landlock_key_type key_type) +{ + struct rb_root *root = NULL; + + switch (key_type) { + case LANDLOCK_KEY_INODE: + root = &ruleset->root_inode; + break; + } + if (WARN_ON_ONCE(!root)) + return ERR_PTR(-EINVAL); + return root; +} + +static void free_rule(struct landlock_rule *const rule, + const enum landlock_key_type key_type) { might_sleep(); if (!rule) return; - landlock_put_object(rule->object); + if (is_object_pointer(key_type)) + landlock_put_object(rule->key.object); kfree(rule); } @@ -129,8 +161,8 @@ static void build_check_ruleset(void) * insert_rule - Create and insert a rule in a ruleset * * @ruleset: The ruleset to be updated. - * @object: The object to build the new rule with. The underlying kernel - * object must be held by the caller. + * @id: The ID to build the new rule with. The underlying kernel object, if + * any, must be held by the caller. * @layers: One or multiple layers to be copied into the new rule. * @num_layers: The number of @layers entries. * @@ -144,26 +176,37 @@ static void build_check_ruleset(void) * access rights. 
*/ static int insert_rule(struct landlock_ruleset *const ruleset, - struct landlock_object *const object, + const struct landlock_id id, const struct landlock_layer (*const layers)[], - size_t num_layers) + const size_t num_layers) { struct rb_node **walker_node; struct rb_node *parent_node = NULL; struct landlock_rule *new_rule; + struct rb_root *root; might_sleep(); lockdep_assert_held(&ruleset->lock); - if (WARN_ON_ONCE(!object || !layers)) + if (WARN_ON_ONCE(!layers)) return -ENOENT; - walker_node = &(ruleset->root.rb_node); + + if (is_object_pointer(id.type)) { + if (WARN_ON_ONCE(!id.key.object)) + return -ENOENT; + } + + root = get_root(ruleset, id.type); + if (IS_ERR(root)) + return PTR_ERR(root); + + walker_node = &root->rb_node; while (*walker_node) { struct landlock_rule *const this = rb_entry(*walker_node, struct landlock_rule, node); - if (this->object != object) { + if (this->key.data != id.key.data) { parent_node = *walker_node; - if (this->object < object) + if (this->key.data < id.key.data) walker_node = &((*walker_node)->rb_right); else walker_node = &((*walker_node)->rb_left); @@ -195,24 +238,24 @@ static int insert_rule(struct landlock_ruleset *const ruleset, * Intersects access rights when it is a merge between a * ruleset and a domain. */ - new_rule = create_rule(object, &this->layers, this->num_layers, + new_rule = create_rule(id, &this->layers, this->num_layers, &(*layers)[0]); if (IS_ERR(new_rule)) return PTR_ERR(new_rule); - rb_replace_node(&this->node, &new_rule->node, &ruleset->root); - free_rule(this); + rb_replace_node(&this->node, &new_rule->node, root); + free_rule(this, id.type); return 0; } - /* There is no match for @object. */ + /* There is no match for @id. */ build_check_ruleset(); if (ruleset->num_rules >= LANDLOCK_MAX_NUM_RULES) return -E2BIG; - new_rule = create_rule(object, layers, num_layers, NULL); + new_rule = create_rule(id, layers, num_layers, NULL); if (IS_ERR(new_rule)) return PTR_ERR(new_rule); rb_link_node(&new_rule->node, parent_node, walker_node); - rb_insert_color(&new_rule->node, &ruleset->root); + rb_insert_color(&new_rule->node, root); ruleset->num_rules++; return 0; } @@ -230,7 +273,7 @@ static void build_check_layer(void) /* @ruleset must be locked by the caller. */ int landlock_insert_rule(struct landlock_ruleset *const ruleset, - struct landlock_object *const object, + const struct landlock_id id, const access_mask_t access) { struct landlock_layer layers[] = { { @@ -240,7 +283,7 @@ int landlock_insert_rule(struct landlock_ruleset *const ruleset, } }; build_check_layer(); - return insert_rule(ruleset, object, &layers, ARRAY_SIZE(layers)); + return insert_rule(ruleset, id, &layers, ARRAY_SIZE(layers)); } static inline void get_hierarchy(struct landlock_hierarchy *const hierarchy) @@ -263,6 +306,7 @@ static int merge_ruleset(struct landlock_ruleset *const dst, struct landlock_ruleset *const src) { struct landlock_rule *walker_rule, *next_rule; + struct rb_root *src_root; int err = 0; might_sleep(); @@ -273,6 +317,10 @@ static int merge_ruleset(struct landlock_ruleset *const dst, if (WARN_ON_ONCE(!dst || !dst->hierarchy)) return -EINVAL; + src_root = get_root(src, LANDLOCK_KEY_INODE); + if (IS_ERR(src_root)) + return PTR_ERR(src_root); + /* Locks @dst first because we are its only owner. */ mutex_lock(&dst->lock); mutex_lock_nested(&src->lock, SINGLE_DEPTH_NESTING); @@ -285,11 +333,15 @@ static int merge_ruleset(struct landlock_ruleset *const dst, dst->access_masks[dst->num_layers - 1] = src->access_masks[0]; /* Merges the @src tree. 
*/ - rbtree_postorder_for_each_entry_safe(walker_rule, next_rule, &src->root, + rbtree_postorder_for_each_entry_safe(walker_rule, next_rule, src_root, node) { struct landlock_layer layers[] = { { .level = dst->num_layers, } }; + const struct landlock_id id = { + .key = walker_rule->key, + .type = LANDLOCK_KEY_INODE, + }; if (WARN_ON_ONCE(walker_rule->num_layers != 1)) { err = -EINVAL; @@ -300,8 +352,8 @@ static int merge_ruleset(struct landlock_ruleset *const dst, goto out_unlock; } layers[0].access = walker_rule->layers[0].access; - err = insert_rule(dst, walker_rule->object, &layers, - ARRAY_SIZE(layers)); + + err = insert_rule(dst, id, &layers, ARRAY_SIZE(layers)); if (err) goto out_unlock; } @@ -316,21 +368,29 @@ static int inherit_ruleset(struct landlock_ruleset *const parent, struct landlock_ruleset *const child) { struct landlock_rule *walker_rule, *next_rule; + struct rb_root *parent_root; int err = 0; might_sleep(); if (!parent) return 0; + parent_root = get_root(parent, LANDLOCK_KEY_INODE); + if (IS_ERR(parent_root)) + return PTR_ERR(parent_root); + /* Locks @child first because we are its only owner. */ mutex_lock(&child->lock); mutex_lock_nested(&parent->lock, SINGLE_DEPTH_NESTING); /* Copies the @parent tree. */ rbtree_postorder_for_each_entry_safe(walker_rule, next_rule, - &parent->root, node) { - err = insert_rule(child, walker_rule->object, - &walker_rule->layers, + parent_root, node) { + const struct landlock_id id = { + .key = walker_rule->key, + .type = LANDLOCK_KEY_INODE, + }; + err = insert_rule(child, id, &walker_rule->layers, walker_rule->num_layers); if (err) goto out_unlock; @@ -362,8 +422,9 @@ static void free_ruleset(struct landlock_ruleset *const ruleset) struct landlock_rule *freeme, *next; might_sleep(); - rbtree_postorder_for_each_entry_safe(freeme, next, &ruleset->root, node) - free_rule(freeme); + rbtree_postorder_for_each_entry_safe(freeme, next, &ruleset->root_inode, + node) + free_rule(freeme, LANDLOCK_KEY_INODE); put_hierarchy(ruleset->hierarchy); kfree(ruleset); } @@ -454,20 +515,23 @@ landlock_merge_ruleset(struct landlock_ruleset *const parent, */ const struct landlock_rule * landlock_find_rule(const struct landlock_ruleset *const ruleset, - const struct landlock_object *const object) + const struct landlock_id id) { + const struct rb_root *root; const struct rb_node *node; - if (!object) + root = get_root((struct landlock_ruleset *)ruleset, id.type); + if (IS_ERR(root)) return NULL; - node = ruleset->root.rb_node; + node = root->rb_node; + while (node) { struct landlock_rule *this = rb_entry(node, struct landlock_rule, node); - if (this->object == object) + if (this->key.data == id.key.data) return this; - if (this->object < object) + if (this->key.data < id.key.data) node = node->rb_right; else node = node->rb_left; diff --git a/security/landlock/ruleset.h b/security/landlock/ruleset.h index baef84071f37..5e1b1b25def0 100644 --- a/security/landlock/ruleset.h +++ b/security/landlock/ruleset.h @@ -60,6 +60,47 @@ struct landlock_layer { access_mask_t access; }; +/** + * union landlock_key - Key of a ruleset's red-black tree + */ +union landlock_key { + /** + * @object: Pointer to identify a kernel object (e.g. an inode). + */ + struct landlock_object *object; + /** + * @data: Raw data to identify an arbitrary 32-bit value + * (e.g. a TCP port). + */ + uintptr_t data; +}; + +/** + * enum landlock_key_type - Type of &union landlock_key + */ +enum landlock_key_type { + /** + * @LANDLOCK_KEY_INODE: Type of &landlock_ruleset.root_inode's node + * keys. 
+ */ + LANDLOCK_KEY_INODE = 1, +}; + +/** + * struct landlock_id - Unique rule identifier for a ruleset + */ +struct landlock_id { + /** + * @key: Identifies either a kernel object (e.g. an inode) or + * a raw value (e.g. a TCP port). + */ + union landlock_key key; + /** + * @type: Type of a landlock_ruleset's root tree. + */ + const enum landlock_key_type type; +}; + /** * struct landlock_rule - Access rights tied to an object */ @@ -69,12 +110,13 @@ struct landlock_rule { */ struct rb_node node; /** - * @object: Pointer to identify a kernel object (e.g. an inode). This - * is used as a key for this ruleset element. This pointer is set once - * and never modified. It always points to an allocated object because - * each rule increments the refcount of its object. + * @key: A union to identify either a kernel object (e.g. an inode) or + * a raw data value (e.g. a network socket port). This is used as a key + * for this ruleset element. The pointer is set once and never + * modified. It always points to an allocated object because each rule + * increments the refcount of its object. */ - struct landlock_object *object; + union landlock_key key; /** * @num_layers: Number of entries in @layers. */ @@ -110,11 +152,12 @@ struct landlock_hierarchy { */ struct landlock_ruleset { /** - * @root: Root of a red-black tree containing &struct landlock_rule - * nodes. Once a ruleset is tied to a process (i.e. as a domain), this - * tree is immutable until @usage reaches zero. + * @root_inode: Root of a red-black tree containing &struct + * landlock_rule nodes with inode object. Once a ruleset is tied to a + * process (i.e. as a domain), this tree is immutable until @usage + * reaches zero. */ - struct rb_root root; + struct rb_root root_inode; /** * @hierarchy: Enables hierarchy identification even when a parent * domain vanishes. This is needed for the ptrace protection. 
@@ -176,7 +219,7 @@ void landlock_put_ruleset(struct landlock_ruleset *const ruleset); void landlock_put_ruleset_deferred(struct landlock_ruleset *const ruleset); int landlock_insert_rule(struct landlock_ruleset *const ruleset, - struct landlock_object *const object, + const struct landlock_id id, const access_mask_t access); struct landlock_ruleset * @@ -185,7 +228,7 @@ landlock_merge_ruleset(struct landlock_ruleset *const parent, const struct landlock_rule * landlock_find_rule(const struct landlock_ruleset *const ruleset, - const struct landlock_object *const object); + const struct landlock_id id); static inline void landlock_get_ruleset(struct landlock_ruleset *const ruleset) { From patchwork Mon May 15 16:13:31 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Konstantin Meskhidze (A)" X-Patchwork-Id: 13241772 Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 00B29FBF1 for ; Mon, 15 May 2023 16:14:16 +0000 (UTC) Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1223C2685; Mon, 15 May 2023 09:14:14 -0700 (PDT) Received: from lhrpeml500004.china.huawei.com (unknown [172.18.147.200]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4QKktg25Hgz67nTp; Tue, 16 May 2023 00:13:15 +0800 (CST) Received: from mscphis00759.huawei.com (10.123.66.134) by lhrpeml500004.china.huawei.com (7.191.163.9) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.23; Mon, 15 May 2023 17:14:11 +0100 From: Konstantin Meskhidze To: CC: , , , , , , Subject: [PATCH v11 04/12] landlock: Refactor merge/inherit_ruleset functions Date: Tue, 16 May 2023 00:13:31 +0800 Message-ID: <20230515161339.631577-5-konstantin.meskhidze@huawei.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230515161339.631577-1-konstantin.meskhidze@huawei.com> References: <20230515161339.631577-1-konstantin.meskhidze@huawei.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Originating-IP: [10.123.66.134] X-ClientProxiedBy: mscpeml500002.china.huawei.com (7.188.26.138) To lhrpeml500004.china.huawei.com (7.191.163.9) X-CFilter-Loop: Reflected X-Spam-Status: No, score=-4.2 required=5.0 tests=BAYES_00,RCVD_IN_DNSWL_MED, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Refactor merge_ruleset() and inherit_ruleset() functions to support new rule types. This patch adds merge_tree() and inherit_tree() helpers. They use a specific ruleset's red-black tree according to a key type argument. Signed-off-by: Konstantin Meskhidze --- Changes since v10: * Refactors merge_tree() function. Changes since v9: * None Changes since v8: * Refactors commit message. * Minor fixes. Changes since v7: * Adds missed lockdep_assert_held it inherit_tree() and merge_tree(). * Fixes comment. Changes since v6: * Refactors merge_ruleset() and inherit_ruleset() functions to support new rule types. * Renames tree_merge() to merge_tree() (and reorder arguments), and tree_copy() to inherit_tree(). Changes since v5: * Refactors some logic errors. 
* Formats code with clang-format-14. Changes since v4: * None --- security/landlock/ruleset.c | 122 +++++++++++++++++++++++------------- 1 file changed, 79 insertions(+), 43 deletions(-) -- 2.25.1 diff --git a/security/landlock/ruleset.c b/security/landlock/ruleset.c index deab37838f5b..e4f449fdd6dd 100644 --- a/security/landlock/ruleset.c +++ b/security/landlock/ruleset.c @@ -302,36 +302,22 @@ static void put_hierarchy(struct landlock_hierarchy *hierarchy) } } -static int merge_ruleset(struct landlock_ruleset *const dst, - struct landlock_ruleset *const src) +static int merge_tree(struct landlock_ruleset *const dst, + struct landlock_ruleset *const src, + const enum landlock_key_type key_type) { struct landlock_rule *walker_rule, *next_rule; struct rb_root *src_root; int err = 0; might_sleep(); - /* Should already be checked by landlock_merge_ruleset() */ - if (WARN_ON_ONCE(!src)) - return 0; - /* Only merge into a domain. */ - if (WARN_ON_ONCE(!dst || !dst->hierarchy)) - return -EINVAL; + lockdep_assert_held(&dst->lock); + lockdep_assert_held(&src->lock); - src_root = get_root(src, LANDLOCK_KEY_INODE); + src_root = get_root(src, key_type); if (IS_ERR(src_root)) return PTR_ERR(src_root); - /* Locks @dst first because we are its only owner. */ - mutex_lock(&dst->lock); - mutex_lock_nested(&src->lock, SINGLE_DEPTH_NESTING); - - /* Stacks the new layer. */ - if (WARN_ON_ONCE(src->num_layers != 1 || dst->num_layers < 1)) { - err = -EINVAL; - goto out_unlock; - } - dst->access_masks[dst->num_layers - 1] = src->access_masks[0]; - /* Merges the @src tree. */ rbtree_postorder_for_each_entry_safe(walker_rule, next_rule, src_root, node) { @@ -340,23 +326,52 @@ static int merge_ruleset(struct landlock_ruleset *const dst, } }; const struct landlock_id id = { .key = walker_rule->key, - .type = LANDLOCK_KEY_INODE, + .type = key_type, }; - if (WARN_ON_ONCE(walker_rule->num_layers != 1)) { - err = -EINVAL; - goto out_unlock; - } - if (WARN_ON_ONCE(walker_rule->layers[0].level != 0)) { - err = -EINVAL; - goto out_unlock; - } + if (WARN_ON_ONCE(walker_rule->num_layers != 1)) + return -EINVAL; + + if (WARN_ON_ONCE(walker_rule->layers[0].level != 0)) + return -EINVAL; + layers[0].access = walker_rule->layers[0].access; err = insert_rule(dst, id, &layers, ARRAY_SIZE(layers)); if (err) - goto out_unlock; + return err; + } + return err; +} + +static int merge_ruleset(struct landlock_ruleset *const dst, + struct landlock_ruleset *const src) +{ + int err = 0; + + might_sleep(); + /* Should already be checked by landlock_merge_ruleset() */ + if (WARN_ON_ONCE(!src)) + return 0; + /* Only merge into a domain. */ + if (WARN_ON_ONCE(!dst || !dst->hierarchy)) + return -EINVAL; + + /* Locks @dst first because we are its only owner. */ + mutex_lock(&dst->lock); + mutex_lock_nested(&src->lock, SINGLE_DEPTH_NESTING); + + /* Stacks the new layer. */ + if (WARN_ON_ONCE(src->num_layers != 1 || dst->num_layers < 1)) { + err = -EINVAL; + goto out_unlock; } + dst->access_masks[dst->num_layers - 1] = src->access_masks[0]; + + /* Merges the @src inode tree. 
*/ + err = merge_tree(dst, src, LANDLOCK_KEY_INODE); + if (err) + goto out_unlock; out_unlock: mutex_unlock(&src->lock); @@ -364,43 +379,64 @@ static int merge_ruleset(struct landlock_ruleset *const dst, return err; } -static int inherit_ruleset(struct landlock_ruleset *const parent, - struct landlock_ruleset *const child) +static int inherit_tree(struct landlock_ruleset *const parent, + struct landlock_ruleset *const child, + const enum landlock_key_type key_type) { struct landlock_rule *walker_rule, *next_rule; struct rb_root *parent_root; int err = 0; might_sleep(); - if (!parent) - return 0; + lockdep_assert_held(&parent->lock); + lockdep_assert_held(&child->lock); - parent_root = get_root(parent, LANDLOCK_KEY_INODE); + parent_root = get_root(parent, key_type); if (IS_ERR(parent_root)) return PTR_ERR(parent_root); - /* Locks @child first because we are its only owner. */ - mutex_lock(&child->lock); - mutex_lock_nested(&parent->lock, SINGLE_DEPTH_NESTING); - - /* Copies the @parent tree. */ + /* Copies the @parent inode or network tree. */ rbtree_postorder_for_each_entry_safe(walker_rule, next_rule, parent_root, node) { const struct landlock_id id = { .key = walker_rule->key, - .type = LANDLOCK_KEY_INODE, + .type = key_type, }; + err = insert_rule(child, id, &walker_rule->layers, walker_rule->num_layers); if (err) - goto out_unlock; + return err; } + return err; +} + +static int inherit_ruleset(struct landlock_ruleset *const parent, + struct landlock_ruleset *const child) +{ + int err = 0; + + might_sleep(); + if (!parent) + return 0; + + /* Locks @child first because we are its only owner. */ + mutex_lock(&child->lock); + mutex_lock_nested(&parent->lock, SINGLE_DEPTH_NESTING); + + /* Copies the @parent inode tree. */ + err = inherit_tree(parent, child, LANDLOCK_KEY_INODE); + if (err) + goto out_unlock; if (WARN_ON_ONCE(child->num_layers <= parent->num_layers)) { err = -EINVAL; goto out_unlock; } - /* Copies the parent layer stack and leaves a space for the new layer. */ + /* + * Copies the parent layer stack and leaves a space + * for the new layer. 
+ */ memcpy(child->access_masks, parent->access_masks, flex_array_size(parent, access_masks, parent->num_layers)); From patchwork Mon May 15 16:13:32 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Konstantin Meskhidze (A)" X-Patchwork-Id: 13241773 Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 39647FBF1 for ; Mon, 15 May 2023 16:14:19 +0000 (UTC) Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 796E02D49; Mon, 15 May 2023 09:14:16 -0700 (PDT) Received: from lhrpeml500004.china.huawei.com (unknown [172.18.147.226]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4QKksm5m01z67LHp; Tue, 16 May 2023 00:12:28 +0800 (CST) Received: from mscphis00759.huawei.com (10.123.66.134) by lhrpeml500004.china.huawei.com (7.191.163.9) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.23; Mon, 15 May 2023 17:14:13 +0100 From: Konstantin Meskhidze To: CC: , , , , , , Subject: [PATCH v11 05/12] landlock: Move and rename layer helpers Date: Tue, 16 May 2023 00:13:32 +0800 Message-ID: <20230515161339.631577-6-konstantin.meskhidze@huawei.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230515161339.631577-1-konstantin.meskhidze@huawei.com> References: <20230515161339.631577-1-konstantin.meskhidze@huawei.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Originating-IP: [10.123.66.134] X-ClientProxiedBy: mscpeml500002.china.huawei.com (7.188.26.138) To lhrpeml500004.china.huawei.com (7.191.163.9) X-CFilter-Loop: Reflected X-Spam-Status: No, score=-4.2 required=5.0 tests=BAYES_00,RCVD_IN_DNSWL_MED, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net This patch renames and moves landlock_unmask_layers() and landlock_init_layer_masks() helpers to ruleset.c to share them with Landlock network implementation in following commits. Signed-off-by: Konstantin Meskhidze --- Changes since v10: * None. Changes since v9: * Minor fixes. Changes since v8: * Refactors commit message. * Adds "landlock_" prefix for moved helpers. Changes since v7: * Refactors commit message. Changes since v6: * Moves get_handled_accesses() helper from ruleset.c back to fs.c, cause it's not used in coming network commits. Changes since v5: * Splits commit. * Moves init_layer_masks() and get_handled_accesses() helpers to ruleset.c and makes then non-static. * Formats code with clang-format-14. --- security/landlock/fs.c | 136 ++++++------------------------------ security/landlock/ruleset.c | 98 ++++++++++++++++++++++++++ security/landlock/ruleset.h | 9 +++ 3 files changed, 128 insertions(+), 115 deletions(-) -- 2.25.1 diff --git a/security/landlock/fs.c b/security/landlock/fs.c index 9a8e70f65a88..db833c995003 100644 --- a/security/landlock/fs.c +++ b/security/landlock/fs.c @@ -215,60 +215,6 @@ find_rule(const struct landlock_ruleset *const domain, return rule; } -/* - * @layer_masks is read and may be updated according to the access request and - * the matching rule. - * - * Returns true if the request is allowed (i.e. 
relevant layer masks for the - * request are empty). - */ -static inline bool -unmask_layers(const struct landlock_rule *const rule, - const access_mask_t access_request, - layer_mask_t (*const layer_masks)[LANDLOCK_NUM_ACCESS_FS]) -{ - size_t layer_level; - - if (!access_request || !layer_masks) - return true; - if (!rule) - return false; - - /* - * An access is granted if, for each policy layer, at least one rule - * encountered on the pathwalk grants the requested access, - * regardless of its position in the layer stack. We must then check - * the remaining layers for each inode, from the first added layer to - * the last one. When there is multiple requested accesses, for each - * policy layer, the full set of requested accesses may not be granted - * by only one rule, but by the union (binary OR) of multiple rules. - * E.g. /a/b + /a => /a/b - */ - for (layer_level = 0; layer_level < rule->num_layers; layer_level++) { - const struct landlock_layer *const layer = - &rule->layers[layer_level]; - const layer_mask_t layer_bit = BIT_ULL(layer->level - 1); - const unsigned long access_req = access_request; - unsigned long access_bit; - bool is_empty; - - /* - * Records in @layer_masks which layer grants access to each - * requested access. - */ - is_empty = true; - for_each_set_bit(access_bit, &access_req, - ARRAY_SIZE(*layer_masks)) { - if (layer->access & BIT_ULL(access_bit)) - (*layer_masks)[access_bit] &= ~layer_bit; - is_empty = is_empty && !(*layer_masks)[access_bit]; - } - if (is_empty) - return true; - } - return false; -} - /* * Allows access to pseudo filesystems that will never be mountable (e.g. * sockfs, pipefs), but can still be reachable through @@ -293,50 +239,6 @@ get_raw_handled_fs_accesses(const struct landlock_ruleset *const domain) return access_dom; } -/** - * init_layer_masks - Initialize layer masks from an access request - * - * Populates @layer_masks such that for each access right in @access_request, - * the bits for all the layers are set where this access right is handled. - * - * @domain: The domain that defines the current restrictions. - * @access_request: The requested access rights to check. - * @layer_masks: The layer masks to populate. - * - * Returns: An access mask where each access right bit is set which is handled - * in any of the active layers in @domain. - */ -static inline access_mask_t -init_layer_masks(const struct landlock_ruleset *const domain, - const access_mask_t access_request, - layer_mask_t (*const layer_masks)[LANDLOCK_NUM_ACCESS_FS]) -{ - access_mask_t handled_accesses = 0; - size_t layer_level; - - memset(layer_masks, 0, sizeof(*layer_masks)); - /* An empty access request can happen because of O_WRONLY | O_RDWR. */ - if (!access_request) - return 0; - - /* Saves all handled accesses per layer. 
*/ - for (layer_level = 0; layer_level < domain->num_layers; layer_level++) { - const unsigned long access_req = access_request; - unsigned long access_bit; - - for_each_set_bit(access_bit, &access_req, - ARRAY_SIZE(*layer_masks)) { - if (BIT_ULL(access_bit) & - landlock_get_fs_access_mask(domain, layer_level)) { - (*layer_masks)[access_bit] |= - BIT_ULL(layer_level); - handled_accesses |= BIT_ULL(access_bit); - } - } - } - return handled_accesses; -} - static access_mask_t get_handled_fs_accesses(const struct landlock_ruleset *const domain) { @@ -540,18 +442,20 @@ static bool is_access_to_paths_allowed( } if (unlikely(dentry_child1)) { - unmask_layers(find_rule(domain, dentry_child1), - init_layer_masks(domain, LANDLOCK_MASK_ACCESS_FS, + landlock_unmask_layers(find_rule(domain, dentry_child1), + landlock_init_layer_masks( + domain, LANDLOCK_MASK_ACCESS_FS, &_layer_masks_child1), - &_layer_masks_child1); + &_layer_masks_child1); layer_masks_child1 = &_layer_masks_child1; child1_is_directory = d_is_dir(dentry_child1); } if (unlikely(dentry_child2)) { - unmask_layers(find_rule(domain, dentry_child2), - init_layer_masks(domain, LANDLOCK_MASK_ACCESS_FS, + landlock_unmask_layers(find_rule(domain, dentry_child2), + landlock_init_layer_masks( + domain, LANDLOCK_MASK_ACCESS_FS, &_layer_masks_child2), - &_layer_masks_child2); + &_layer_masks_child2); layer_masks_child2 = &_layer_masks_child2; child2_is_directory = d_is_dir(dentry_child2); } @@ -603,10 +507,10 @@ static bool is_access_to_paths_allowed( } rule = find_rule(domain, walker_path.dentry); - allowed_parent1 = unmask_layers(rule, access_masked_parent1, - layer_masks_parent1); - allowed_parent2 = unmask_layers(rule, access_masked_parent2, - layer_masks_parent2); + allowed_parent1 = landlock_unmask_layers( + rule, access_masked_parent1, layer_masks_parent1); + allowed_parent2 = landlock_unmask_layers( + rule, access_masked_parent2, layer_masks_parent2); /* Stops when a rule from each layer grants access. */ if (allowed_parent1 && allowed_parent2) @@ -650,7 +554,8 @@ static inline int check_access_path(const struct landlock_ruleset *const domain, { layer_mask_t layer_masks[LANDLOCK_NUM_ACCESS_FS] = {}; - access_request = init_layer_masks(domain, access_request, &layer_masks); + access_request = + landlock_init_layer_masks(domain, access_request, &layer_masks); if (is_access_to_paths_allowed(domain, path, access_request, &layer_masks, NULL, 0, NULL, NULL)) return 0; @@ -735,16 +640,16 @@ static bool collect_domain_accesses( if (is_nouser_or_private(dir)) return true; - access_dom = init_layer_masks(domain, LANDLOCK_MASK_ACCESS_FS, - layer_masks_dom); + access_dom = landlock_init_layer_masks(domain, LANDLOCK_MASK_ACCESS_FS, + layer_masks_dom); dget(dir); while (true) { struct dentry *parent_dentry; /* Gets all layers allowing all domain accesses. */ - if (unmask_layers(find_rule(domain, dir), access_dom, - layer_masks_dom)) { + if (landlock_unmask_layers(find_rule(domain, dir), access_dom, + layer_masks_dom)) { /* * Stops when all handled accesses are allowed by at * least one rule in each layer. @@ -857,7 +762,7 @@ static int current_check_refer_path(struct dentry *const old_dentry, * The LANDLOCK_ACCESS_FS_REFER access right is not required * for same-directory referer (i.e. no reparenting). 
*/ - access_request_parent1 = init_layer_masks( + access_request_parent1 = landlock_init_layer_masks( dom, access_request_parent1 | access_request_parent2, &layer_masks_parent1); if (is_access_to_paths_allowed( @@ -1234,7 +1139,8 @@ static int hook_file_open(struct file *const file) if (is_access_to_paths_allowed( dom, &file->f_path, - init_layer_masks(dom, full_access_request, &layer_masks), + landlock_init_layer_masks(dom, full_access_request, + &layer_masks), &layer_masks, NULL, 0, NULL, NULL)) { allowed_access = full_access_request; } else { diff --git a/security/landlock/ruleset.c b/security/landlock/ruleset.c index e4f449fdd6dd..88b69fa0d7f3 100644 --- a/security/landlock/ruleset.c +++ b/security/landlock/ruleset.c @@ -574,3 +574,101 @@ landlock_find_rule(const struct landlock_ruleset *const ruleset, } return NULL; } + +/* + * @layer_masks is read and may be updated according to the access request and + * the matching rule. + * + * Returns true if the request is allowed (i.e. relevant layer masks for the + * request are empty). + */ +bool landlock_unmask_layers( + const struct landlock_rule *const rule, + const access_mask_t access_request, + layer_mask_t (*const layer_masks)[LANDLOCK_NUM_ACCESS_FS]) +{ + size_t layer_level; + + if (!access_request || !layer_masks) + return true; + if (!rule) + return false; + + /* + * An access is granted if, for each policy layer, at least one rule + * encountered on the pathwalk grants the requested access, + * regardless of its position in the layer stack. We must then check + * the remaining layers for each inode, from the first added layer to + * the last one. When there is multiple requested accesses, for each + * policy layer, the full set of requested accesses may not be granted + * by only one rule, but by the union (binary OR) of multiple rules. + * E.g. /a/b + /a => /a/b + */ + for (layer_level = 0; layer_level < rule->num_layers; layer_level++) { + const struct landlock_layer *const layer = + &rule->layers[layer_level]; + const layer_mask_t layer_bit = BIT_ULL(layer->level - 1); + const unsigned long access_req = access_request; + unsigned long access_bit; + bool is_empty; + + /* + * Records in @layer_masks which layer grants access to each + * requested access. + */ + is_empty = true; + for_each_set_bit(access_bit, &access_req, + ARRAY_SIZE(*layer_masks)) { + if (layer->access & BIT_ULL(access_bit)) + (*layer_masks)[access_bit] &= ~layer_bit; + is_empty = is_empty && !(*layer_masks)[access_bit]; + } + if (is_empty) + return true; + } + return false; +} + +/** + * landlock_init_layer_masks - Initialize layer masks from an access request + * + * Populates @layer_masks such that for each access right in @access_request, + * the bits for all the layers are set where this access right is handled. + * + * @domain: The domain that defines the current restrictions. + * @access_request: The requested access rights to check. + * @layer_masks: The layer masks to populate. + * + * Returns: An access mask where each access right bit is set which is handled + * in any of the active layers in @domain. + */ +access_mask_t landlock_init_layer_masks( + const struct landlock_ruleset *const domain, + const access_mask_t access_request, + layer_mask_t (*const layer_masks)[LANDLOCK_NUM_ACCESS_FS]) +{ + access_mask_t handled_accesses = 0; + size_t layer_level; + + memset(layer_masks, 0, sizeof(*layer_masks)); + /* An empty access request can happen because of O_WRONLY | O_RDWR. */ + if (!access_request) + return 0; + + /* Saves all handled accesses per layer. 
*/ + for (layer_level = 0; layer_level < domain->num_layers; layer_level++) { + const unsigned long access_req = access_request; + unsigned long access_bit; + + for_each_set_bit(access_bit, &access_req, + ARRAY_SIZE(*layer_masks)) { + if (BIT_ULL(access_bit) & + landlock_get_fs_access_mask(domain, layer_level)) { + (*layer_masks)[access_bit] |= + BIT_ULL(layer_level); + handled_accesses |= BIT_ULL(access_bit); + } + } + } + return handled_accesses; +} diff --git a/security/landlock/ruleset.h b/security/landlock/ruleset.h index 5e1b1b25def0..3635e709b20d 100644 --- a/security/landlock/ruleset.h +++ b/security/landlock/ruleset.h @@ -266,5 +266,14 @@ landlock_get_fs_access_mask(const struct landlock_ruleset *const ruleset, return landlock_get_raw_fs_access_mask(ruleset, layer_level) | LANDLOCK_ACCESS_FS_INITIALLY_DENIED; } +bool landlock_unmask_layers( + const struct landlock_rule *const rule, + const access_mask_t access_request, + layer_mask_t (*const layer_masks)[LANDLOCK_NUM_ACCESS_FS]); + +access_mask_t landlock_init_layer_masks( + const struct landlock_ruleset *const domain, + const access_mask_t access_request, + layer_mask_t (*const layer_masks)[LANDLOCK_NUM_ACCESS_FS]); #endif /* _SECURITY_LANDLOCK_RULESET_H */ From patchwork Mon May 15 16:13:33 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: "Konstantin Meskhidze (A)" X-Patchwork-Id: 13241774 Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 69E9BFBF1 for ; Mon, 15 May 2023 16:14:21 +0000 (UTC) Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 1759DE4; Mon, 15 May 2023 09:14:18 -0700 (PDT) Received: from lhrpeml500004.china.huawei.com (unknown [172.18.147.200]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4QKkq40zKpz6J6wV; Tue, 16 May 2023 00:10:08 +0800 (CST) Received: from mscphis00759.huawei.com (10.123.66.134) by lhrpeml500004.china.huawei.com (7.191.163.9) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.23; Mon, 15 May 2023 17:14:15 +0100 From: Konstantin Meskhidze To: CC: , , , , , , Subject: [PATCH v11 06/12] landlock: Refactor layer helpers Date: Tue, 16 May 2023 00:13:33 +0800 Message-ID: <20230515161339.631577-7-konstantin.meskhidze@huawei.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230515161339.631577-1-konstantin.meskhidze@huawei.com> References: <20230515161339.631577-1-konstantin.meskhidze@huawei.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Originating-IP: [10.123.66.134] X-ClientProxiedBy: mscpeml500002.china.huawei.com (7.188.26.138) To lhrpeml500004.china.huawei.com (7.191.163.9) X-CFilter-Loop: Reflected X-Spam-Status: No, score=-4.2 required=5.0 tests=BAYES_00,RCVD_IN_DNSWL_MED, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Add new key_type argument to the landlock_init_layer_masks() helper. Add a masks_array_size argument to the landlock_unmask_layers() helper. These modifications support implementing new rule types in the next Landlock versions. 
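For example, with both helpers updated, a filesystem call site takes roughly the
following shape (a sketch distilled from the fs.c hunks below; the local names
and the assumption that domain, rule and access_request are in scope are only
illustrative):

    layer_mask_t layer_masks[LANDLOCK_NUM_ACCESS_FS] = {};
    access_mask_t handled;
    bool allowed;

    /* The caller now names the key type instead of the helper assuming inodes. */
    handled = landlock_init_layer_masks(domain, access_request, &layer_masks,
                                        LANDLOCK_KEY_INODE);
    /* The caller also passes the mask array size, since the parameter became
     * an unsized array to fit future rule types. */
    allowed = landlock_unmask_layers(rule, handled, &layer_masks,
                                     ARRAY_SIZE(layer_masks));
    /* allowed is true once every relevant layer mask has been cleared. */
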
Signed-off-by: Mickaël Salaün Signed-off-by: Konstantin Meskhidze --- Changes since v10: * Minor fix. Changes since v9: * Refactors commit message. Changes since v8: * None. Changes since v7: * Refactors commit message, adds a co-developer. * Minor fixes. Changes since v6: * Removes masks_size attribute from init_layer_masks(). * Refactors init_layer_masks() with new landlock_key_type. Changes since v5: * Splits commit. * Formats code with clang-format-14. Changes since v4: * Refactors init_layer_masks(), get_handled_accesses() and unmask_layers() functions to support multiple rule types. * Refactors landlock_get_fs_access_mask() function with LANDLOCK_MASK_ACCESS_FS mask. Changes since v3: * Splits commit. * Refactors landlock_unmask_layers functions. --- security/landlock/fs.c | 43 ++++++++++++++++++--------------- security/landlock/ruleset.c | 48 +++++++++++++++++++++++++------------ security/landlock/ruleset.h | 17 ++++++------- 3 files changed, 66 insertions(+), 42 deletions(-) -- 2.25.1 diff --git a/security/landlock/fs.c b/security/landlock/fs.c index db833c995003..81098cb54837 100644 --- a/security/landlock/fs.c +++ b/security/landlock/fs.c @@ -442,20 +442,22 @@ static bool is_access_to_paths_allowed( } if (unlikely(dentry_child1)) { - landlock_unmask_layers(find_rule(domain, dentry_child1), - landlock_init_layer_masks( - domain, LANDLOCK_MASK_ACCESS_FS, - &_layer_masks_child1), - &_layer_masks_child1); + landlock_unmask_layers( + find_rule(domain, dentry_child1), + landlock_init_layer_masks( + domain, LANDLOCK_MASK_ACCESS_FS, + &_layer_masks_child1, LANDLOCK_KEY_INODE), + &_layer_masks_child1, ARRAY_SIZE(_layer_masks_child1)); layer_masks_child1 = &_layer_masks_child1; child1_is_directory = d_is_dir(dentry_child1); } if (unlikely(dentry_child2)) { - landlock_unmask_layers(find_rule(domain, dentry_child2), - landlock_init_layer_masks( - domain, LANDLOCK_MASK_ACCESS_FS, - &_layer_masks_child2), - &_layer_masks_child2); + landlock_unmask_layers( + find_rule(domain, dentry_child2), + landlock_init_layer_masks( + domain, LANDLOCK_MASK_ACCESS_FS, + &_layer_masks_child2, LANDLOCK_KEY_INODE), + &_layer_masks_child2, ARRAY_SIZE(_layer_masks_child2)); layer_masks_child2 = &_layer_masks_child2; child2_is_directory = d_is_dir(dentry_child2); } @@ -508,14 +510,15 @@ static bool is_access_to_paths_allowed( rule = find_rule(domain, walker_path.dentry); allowed_parent1 = landlock_unmask_layers( - rule, access_masked_parent1, layer_masks_parent1); + rule, access_masked_parent1, layer_masks_parent1, + ARRAY_SIZE(*layer_masks_parent1)); allowed_parent2 = landlock_unmask_layers( - rule, access_masked_parent2, layer_masks_parent2); + rule, access_masked_parent2, layer_masks_parent2, + ARRAY_SIZE(*layer_masks_parent2)); /* Stops when a rule from each layer grants access. 
*/ if (allowed_parent1 && allowed_parent2) break; - jump_up: if (walker_path.dentry == walker_path.mnt->mnt_root) { if (follow_up(&walker_path)) { @@ -554,8 +557,8 @@ static inline int check_access_path(const struct landlock_ruleset *const domain, { layer_mask_t layer_masks[LANDLOCK_NUM_ACCESS_FS] = {}; - access_request = - landlock_init_layer_masks(domain, access_request, &layer_masks); + access_request = landlock_init_layer_masks( + domain, access_request, &layer_masks, LANDLOCK_KEY_INODE); if (is_access_to_paths_allowed(domain, path, access_request, &layer_masks, NULL, 0, NULL, NULL)) return 0; @@ -641,7 +644,8 @@ static bool collect_domain_accesses( return true; access_dom = landlock_init_layer_masks(domain, LANDLOCK_MASK_ACCESS_FS, - layer_masks_dom); + layer_masks_dom, + LANDLOCK_KEY_INODE); dget(dir); while (true) { @@ -649,7 +653,8 @@ static bool collect_domain_accesses( /* Gets all layers allowing all domain accesses. */ if (landlock_unmask_layers(find_rule(domain, dir), access_dom, - layer_masks_dom)) { + layer_masks_dom, + ARRAY_SIZE(*layer_masks_dom))) { /* * Stops when all handled accesses are allowed by at * least one rule in each layer. @@ -764,7 +769,7 @@ static int current_check_refer_path(struct dentry *const old_dentry, */ access_request_parent1 = landlock_init_layer_masks( dom, access_request_parent1 | access_request_parent2, - &layer_masks_parent1); + &layer_masks_parent1, LANDLOCK_KEY_INODE); if (is_access_to_paths_allowed( dom, new_dir, access_request_parent1, &layer_masks_parent1, NULL, 0, NULL, NULL)) @@ -1140,7 +1145,7 @@ static int hook_file_open(struct file *const file) if (is_access_to_paths_allowed( dom, &file->f_path, landlock_init_layer_masks(dom, full_access_request, - &layer_masks), + &layer_masks, LANDLOCK_KEY_INODE), &layer_masks, NULL, 0, NULL, NULL)) { allowed_access = full_access_request; } else { diff --git a/security/landlock/ruleset.c b/security/landlock/ruleset.c index 88b69fa0d7f3..b07ad57ee40a 100644 --- a/security/landlock/ruleset.c +++ b/security/landlock/ruleset.c @@ -578,14 +578,15 @@ landlock_find_rule(const struct landlock_ruleset *const ruleset, /* * @layer_masks is read and may be updated according to the access request and * the matching rule. + * @masks_array_size must be equal to ARRAY_SIZE(*layer_masks). * * Returns true if the request is allowed (i.e. relevant layer masks for the * request are empty). */ -bool landlock_unmask_layers( - const struct landlock_rule *const rule, - const access_mask_t access_request, - layer_mask_t (*const layer_masks)[LANDLOCK_NUM_ACCESS_FS]) +bool landlock_unmask_layers(const struct landlock_rule *const rule, + const access_mask_t access_request, + layer_mask_t (*const layer_masks)[], + const size_t masks_array_size) { size_t layer_level; @@ -617,8 +618,7 @@ bool landlock_unmask_layers( * requested access. */ is_empty = true; - for_each_set_bit(access_bit, &access_req, - ARRAY_SIZE(*layer_masks)) { + for_each_set_bit(access_bit, &access_req, masks_array_size) { if (layer->access & BIT_ULL(access_bit)) (*layer_masks)[access_bit] &= ~layer_bit; is_empty = is_empty && !(*layer_masks)[access_bit]; @@ -629,6 +629,10 @@ bool landlock_unmask_layers( return false; } +typedef access_mask_t +get_access_mask_t(const struct landlock_ruleset *const ruleset, + const u16 layer_level); + /** * landlock_init_layer_masks - Initialize layer masks from an access request * @@ -638,19 +642,34 @@ bool landlock_unmask_layers( * @domain: The domain that defines the current restrictions. 
* @access_request: The requested access rights to check. * @layer_masks: The layer masks to populate. + * @key_type: The key type to switch between access masks of different types. * * Returns: An access mask where each access right bit is set which is handled * in any of the active layers in @domain. */ -access_mask_t landlock_init_layer_masks( - const struct landlock_ruleset *const domain, - const access_mask_t access_request, - layer_mask_t (*const layer_masks)[LANDLOCK_NUM_ACCESS_FS]) +access_mask_t +landlock_init_layer_masks(const struct landlock_ruleset *const domain, + const access_mask_t access_request, + layer_mask_t (*const layer_masks)[], + const enum landlock_key_type key_type) { access_mask_t handled_accesses = 0; - size_t layer_level; + size_t layer_level, num_access; + get_access_mask_t *get_access_mask; + + switch (key_type) { + case LANDLOCK_KEY_INODE: + get_access_mask = landlock_get_fs_access_mask; + num_access = LANDLOCK_NUM_ACCESS_FS; + break; + default: + WARN_ON_ONCE(1); + return 0; + } + + memset(layer_masks, 0, + array_size(sizeof((*layer_masks)[0]), num_access)); - memset(layer_masks, 0, sizeof(*layer_masks)); /* An empty access request can happen because of O_WRONLY | O_RDWR. */ if (!access_request) return 0; @@ -660,10 +679,9 @@ access_mask_t landlock_init_layer_masks( const unsigned long access_req = access_request; unsigned long access_bit; - for_each_set_bit(access_bit, &access_req, - ARRAY_SIZE(*layer_masks)) { + for_each_set_bit(access_bit, &access_req, num_access) { if (BIT_ULL(access_bit) & - landlock_get_fs_access_mask(domain, layer_level)) { + get_access_mask(domain, layer_level)) { (*layer_masks)[access_bit] |= BIT_ULL(layer_level); handled_accesses |= BIT_ULL(access_bit); diff --git a/security/landlock/ruleset.h b/security/landlock/ruleset.h index 3635e709b20d..2251e6048ccf 100644 --- a/security/landlock/ruleset.h +++ b/security/landlock/ruleset.h @@ -266,14 +266,15 @@ landlock_get_fs_access_mask(const struct landlock_ruleset *const ruleset, return landlock_get_raw_fs_access_mask(ruleset, layer_level) | LANDLOCK_ACCESS_FS_INITIALLY_DENIED; } -bool landlock_unmask_layers( - const struct landlock_rule *const rule, - const access_mask_t access_request, - layer_mask_t (*const layer_masks)[LANDLOCK_NUM_ACCESS_FS]); +bool landlock_unmask_layers(const struct landlock_rule *const rule, + const access_mask_t access_request, + layer_mask_t (*const layer_masks)[], + const size_t masks_array_size); -access_mask_t landlock_init_layer_masks( - const struct landlock_ruleset *const domain, - const access_mask_t access_request, - layer_mask_t (*const layer_masks)[LANDLOCK_NUM_ACCESS_FS]); +access_mask_t +landlock_init_layer_masks(const struct landlock_ruleset *const domain, + const access_mask_t access_request, + layer_mask_t (*const layer_masks)[], + const enum landlock_key_type key_type); #endif /* _SECURITY_LANDLOCK_RULESET_H */ From patchwork Mon May 15 16:13:34 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Konstantin Meskhidze (A)" X-Patchwork-Id: 13241776 Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 1BE5EE57B for ; Mon, 15 May 2023 16:14:32 +0000 (UTC) Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AFB022D7B; 
Mon, 15 May 2023 09:14:26 -0700 (PDT) Received: from lhrpeml500004.china.huawei.com (unknown [172.18.147.206]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4QKktv6NKQz67n3K; Tue, 16 May 2023 00:13:27 +0800 (CST) Received: from mscphis00759.huawei.com (10.123.66.134) by lhrpeml500004.china.huawei.com (7.191.163.9) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.23; Mon, 15 May 2023 17:14:18 +0100 From: Konstantin Meskhidze To: CC: , , , , , , Subject: [PATCH v11 07/12] landlock: Refactor landlock_add_rule() syscall Date: Tue, 16 May 2023 00:13:34 +0800 Message-ID: <20230515161339.631577-8-konstantin.meskhidze@huawei.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230515161339.631577-1-konstantin.meskhidze@huawei.com> References: <20230515161339.631577-1-konstantin.meskhidze@huawei.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Originating-IP: [10.123.66.134] X-ClientProxiedBy: mscpeml500002.china.huawei.com (7.188.26.138) To lhrpeml500004.china.huawei.com (7.191.163.9) X-CFilter-Loop: Reflected X-Spam-Status: No, score=-4.2 required=5.0 tests=BAYES_00,RCVD_IN_DNSWL_MED, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Change the landlock_add_rule() syscall to support new rule types in future Landlock versions. Add the add_rule_path_beneath() helper to support current filesystem rules. Signed-off-by: Konstantin Meskhidze --- Changes since v10: * None. Changes since v9: * Minor fixes: - deletes unnecessary curly braces. - deletes unnecessary empty line. Changes since v8: * Refactors commit message. * Minor fixes. Changes since v7: * None Changes since v6: * None Changes since v5: * Refactors syscall landlock_add_rule() and add_rule_path_beneath() helper to make argument check ordering consistent and get rid of partial revertings in following patches. * Rolls back refactoring base_test.c seltest. * Formats code with clang-format-14. Changes since v4: * Refactors add_rule_path_beneath() and landlock_add_rule() functions to optimize code usage. * Refactors base_test.c seltest: adds LANDLOCK_RULE_PATH_BENEATH rule type in landlock_add_rule() call. Changes since v3: * Split commit. * Refactors landlock_add_rule syscall. --- security/landlock/syscalls.c | 92 +++++++++++++++++++----------------- 1 file changed, 48 insertions(+), 44 deletions(-) -- 2.25.1 diff --git a/security/landlock/syscalls.c b/security/landlock/syscalls.c index d35cd5d304db..8a54e87dbb17 100644 --- a/security/landlock/syscalls.c +++ b/security/landlock/syscalls.c @@ -274,6 +274,47 @@ static int get_path_from_fd(const s32 fd, struct path *const path) return err; } +static int add_rule_path_beneath(struct landlock_ruleset *const ruleset, + const void __user *const rule_attr) +{ + struct landlock_path_beneath_attr path_beneath_attr; + struct path path; + int res, err; + access_mask_t mask; + + /* Copies raw user space buffer, only one type for now. */ + res = copy_from_user(&path_beneath_attr, rule_attr, + sizeof(path_beneath_attr)); + if (res) + return -EFAULT; + + /* + * Informs about useless rule: empty allowed_access (i.e. deny rules) + * are ignored in path walks. 
+ */ + if (!path_beneath_attr.allowed_access) + return -ENOMSG; + + /* + * Checks that allowed_access matches the @ruleset constraints + * (ruleset->access_masks[0] is automatically upgraded to 64-bits). + */ + mask = landlock_get_raw_fs_access_mask(ruleset, 0); + if ((path_beneath_attr.allowed_access | mask) != mask) + return -EINVAL; + + /* Gets and checks the new rule. */ + err = get_path_from_fd(path_beneath_attr.parent_fd, &path); + if (err) + return err; + + /* Imports the new rule. */ + err = landlock_append_fs_rule(ruleset, &path, + path_beneath_attr.allowed_access); + path_put(&path); + return err; +} + /** * sys_landlock_add_rule - Add a new rule to a ruleset * @@ -306,11 +347,8 @@ SYSCALL_DEFINE4(landlock_add_rule, const int, ruleset_fd, const enum landlock_rule_type, rule_type, const void __user *const, rule_attr, const __u32, flags) { - struct landlock_path_beneath_attr path_beneath_attr; - struct path path; struct landlock_ruleset *ruleset; - int res, err; - access_mask_t mask; + int err; if (!landlock_initialized) return -EOPNOTSUPP; @@ -324,48 +362,14 @@ SYSCALL_DEFINE4(landlock_add_rule, const int, ruleset_fd, if (IS_ERR(ruleset)) return PTR_ERR(ruleset); - if (rule_type != LANDLOCK_RULE_PATH_BENEATH) { + switch (rule_type) { + case LANDLOCK_RULE_PATH_BENEATH: + err = add_rule_path_beneath(ruleset, rule_attr); + break; + default: err = -EINVAL; - goto out_put_ruleset; - } - - /* Copies raw user space buffer, only one type for now. */ - res = copy_from_user(&path_beneath_attr, rule_attr, - sizeof(path_beneath_attr)); - if (res) { - err = -EFAULT; - goto out_put_ruleset; + break; } - - /* - * Informs about useless rule: empty allowed_access (i.e. deny rules) - * are ignored in path walks. - */ - if (!path_beneath_attr.allowed_access) { - err = -ENOMSG; - goto out_put_ruleset; - } - /* - * Checks that allowed_access matches the @ruleset constraints - * (ruleset->access_masks[0] is automatically upgraded to 64-bits). - */ - mask = landlock_get_raw_fs_access_mask(ruleset, 0); - if ((path_beneath_attr.allowed_access | mask) != mask) { - err = -EINVAL; - goto out_put_ruleset; - } - - /* Gets and checks the new rule. */ - err = get_path_from_fd(path_beneath_attr.parent_fd, &path); - if (err) - goto out_put_ruleset; - - /* Imports the new rule. 
*/ - err = landlock_append_fs_rule(ruleset, &path, - path_beneath_attr.allowed_access); - path_put(&path); - -out_put_ruleset: landlock_put_ruleset(ruleset); return err; } From patchwork Mon May 15 16:13:35 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: "Konstantin Meskhidze (A)" X-Patchwork-Id: 13241777 Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B7B21101C8 for ; Mon, 15 May 2023 16:14:34 +0000 (UTC) Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 136972D7E; Mon, 15 May 2023 09:14:27 -0700 (PDT) Received: from lhrpeml500004.china.huawei.com (unknown [172.18.147.206]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4QKktw1Xl4z67qPC; Tue, 16 May 2023 00:13:28 +0800 (CST) Received: from mscphis00759.huawei.com (10.123.66.134) by lhrpeml500004.china.huawei.com (7.191.163.9) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.23; Mon, 15 May 2023 17:14:19 +0100 From: Konstantin Meskhidze To: CC: , , , , , , Subject: [PATCH v11 08/12] landlock: Add network rules and TCP hooks support Date: Tue, 16 May 2023 00:13:35 +0800 Message-ID: <20230515161339.631577-9-konstantin.meskhidze@huawei.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230515161339.631577-1-konstantin.meskhidze@huawei.com> References: <20230515161339.631577-1-konstantin.meskhidze@huawei.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Originating-IP: [10.123.66.134] X-ClientProxiedBy: mscpeml500002.china.huawei.com (7.188.26.138) To lhrpeml500004.china.huawei.com (7.191.163.9) X-CFilter-Loop: Reflected X-Spam-Status: No, score=-4.2 required=5.0 tests=BAYES_00,RCVD_IN_DNSWL_MED, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net This commit adds network rules support in the ruleset management helpers and the landlock_create_ruleset syscall. Refactor user space API to support network actions. Add new network access flags, network rule and network attributes. Increment Landlock ABI version. Expand access_masks_t to u32 to be sure network access rights can be stored. Implement socket_bind() and socket_connect() LSM hooks, which enables to restrict TCP socket binding and connection to specific ports. Co-developed-by: Mickaël Salaün Signed-off-by: Konstantin Meskhidze Signed-off-by: Mickaël Salaün --- Changes since v10: * Removes "packed" attribute. * Applies Mickaёl's patch with some refactoring. * Deletes get_port() and check_addrlen() helpers. * Refactors check_socket_access() by squashing get_port() and check_addrlen() helpers into it. * Fixes commit message. Changes since v9: * Changes UAPI port field to __u64. * Moves shared code into check_socket_access(). * Adds get_raw_handled_net_accesses() and get_current_net_domain() helpers. * Minor fixes. Changes since v8: * Squashes commits. * Refactors commit message. * Changes UAPI port field to __be16. * Changes logic of bind/connect hooks with AF_UNSPEC families. * Adds address length checking. * Minor fixes. Changes since v7: * Squashes commits. 
* Increments ABI version to 4. * Refactors commit message. * Minor fixes. Changes since v6: * Renames landlock_set_net_access_mask() to landlock_add_net_access_mask() because it OR values. * Makes landlock_add_net_access_mask() more resilient incorrect values. * Refactors landlock_get_net_access_mask(). * Renames LANDLOCK_MASK_SHIFT_NET to LANDLOCK_SHIFT_ACCESS_NET and use LANDLOCK_NUM_ACCESS_FS as value. * Updates access_masks_t to u32 to support network access actions. * Refactors landlock internal functions to support network actions with landlock_key/key_type/id types. Changes since v5: * Gets rid of partial revert from landlock_add_rule syscall. * Formats code with clang-format-14. Changes since v4: * Refactors landlock_create_ruleset() - splits ruleset and masks checks. * Refactors landlock_create_ruleset() and landlock mask setters/getters to support two rule types. * Refactors landlock_add_rule syscall add_rule_path_beneath function by factoring out get_ruleset_from_fd() and landlock_put_ruleset(). Changes since v3: * Splits commit. * Adds network rule support for internal landlock functions. * Adds set_mask and get_mask for network. * Adds rb_root root_net_port. --- include/uapi/linux/landlock.h | 48 +++++ security/landlock/Kconfig | 1 + security/landlock/Makefile | 2 + security/landlock/limits.h | 6 +- security/landlock/net.c | 174 +++++++++++++++++++ security/landlock/net.h | 26 +++ security/landlock/ruleset.c | 52 +++++- security/landlock/ruleset.h | 63 +++++-- security/landlock/setup.c | 2 + security/landlock/syscalls.c | 72 +++++++- tools/testing/selftests/landlock/base_test.c | 2 +- 11 files changed, 425 insertions(+), 23 deletions(-) create mode 100644 security/landlock/net.c create mode 100644 security/landlock/net.h -- 2.25.1 diff --git a/include/uapi/linux/landlock.h b/include/uapi/linux/landlock.h index 81d09ef9aa50..93794759dad4 100644 --- a/include/uapi/linux/landlock.h +++ b/include/uapi/linux/landlock.h @@ -31,6 +31,13 @@ struct landlock_ruleset_attr { * this access right. */ __u64 handled_access_fs; + + /** + * @handled_access_net: Bitmask of actions (cf. `Network flags`_) + * that is handled by this ruleset and should then be forbidden if no + * rule explicitly allow them. + */ + __u64 handled_access_net; }; /* @@ -54,6 +61,11 @@ enum landlock_rule_type { * landlock_path_beneath_attr . */ LANDLOCK_RULE_PATH_BENEATH = 1, + /** + * @LANDLOCK_RULE_NET_SERVICE: Type of a &struct + * landlock_net_service_attr . + */ + LANDLOCK_RULE_NET_SERVICE = 2, }; /** @@ -79,6 +91,23 @@ struct landlock_path_beneath_attr { */ } __attribute__((packed)); +/** + * struct landlock_net_service_attr - TCP subnet definition + * + * Argument of sys_landlock_add_rule(). + */ +struct landlock_net_service_attr { + /** + * @allowed_access: Bitmask of allowed access network for services + * (cf. `Network flags`_). + */ + __u64 allowed_access; + /** + * @port: Network port. + */ + __u64 port; +}; + /** * DOC: fs_access * @@ -189,4 +218,23 @@ struct landlock_path_beneath_attr { #define LANDLOCK_ACCESS_FS_TRUNCATE (1ULL << 14) /* clang-format on */ +/** + * DOC: net_access + * + * Network flags + * ~~~~~~~~~~~~~~~~ + * + * These flags enable to restrict a sandboxed process to a set of network + * actions. + * + * TCP sockets with allowed actions: + * + * - %LANDLOCK_ACCESS_NET_BIND_TCP: Bind a TCP socket to a local port. + * - %LANDLOCK_ACCESS_NET_CONNECT_TCP: Connect an active TCP socket to + * a remote port. 
+ */ +/* clang-format off */ +#define LANDLOCK_ACCESS_NET_BIND_TCP (1ULL << 0) +#define LANDLOCK_ACCESS_NET_CONNECT_TCP (1ULL << 1) +/* clang-format on */ #endif /* _UAPI_LINUX_LANDLOCK_H */ diff --git a/security/landlock/Kconfig b/security/landlock/Kconfig index 8e33c4e8ffb8..10c099097533 100644 --- a/security/landlock/Kconfig +++ b/security/landlock/Kconfig @@ -3,6 +3,7 @@ config SECURITY_LANDLOCK bool "Landlock support" depends on SECURITY && !ARCH_EPHEMERAL_INODES + select SECURITY_NETWORK select SECURITY_PATH help Landlock is a sandboxing mechanism that enables processes to restrict diff --git a/security/landlock/Makefile b/security/landlock/Makefile index 7bbd2f413b3e..53d3c92ae22e 100644 --- a/security/landlock/Makefile +++ b/security/landlock/Makefile @@ -2,3 +2,5 @@ obj-$(CONFIG_SECURITY_LANDLOCK) := landlock.o landlock-y := setup.o syscalls.o object.o ruleset.o \ cred.o ptrace.o fs.o + +landlock-$(CONFIG_INET) += net.o \ No newline at end of file diff --git a/security/landlock/limits.h b/security/landlock/limits.h index bafb3b8dc677..8a1a6463c64e 100644 --- a/security/landlock/limits.h +++ b/security/landlock/limits.h @@ -23,6 +23,10 @@ #define LANDLOCK_NUM_ACCESS_FS __const_hweight64(LANDLOCK_MASK_ACCESS_FS) #define LANDLOCK_SHIFT_ACCESS_FS 0 -/* clang-format on */ +#define LANDLOCK_LAST_ACCESS_NET LANDLOCK_ACCESS_NET_CONNECT_TCP +#define LANDLOCK_MASK_ACCESS_NET ((LANDLOCK_LAST_ACCESS_NET << 1) - 1) +#define LANDLOCK_NUM_ACCESS_NET __const_hweight64(LANDLOCK_MASK_ACCESS_NET) +#define LANDLOCK_SHIFT_ACCESS_NET LANDLOCK_NUM_ACCESS_FS +/* clang-format on */ #endif /* _SECURITY_LANDLOCK_LIMITS_H */ diff --git a/security/landlock/net.c b/security/landlock/net.c new file mode 100644 index 000000000000..f8d2be53ac0d --- /dev/null +++ b/security/landlock/net.c @@ -0,0 +1,174 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Landlock LSM - Network management and hooks + * + * Copyright © 2022 Huawei Tech. Co., Ltd. + * Copyright © 2022 Microsoft Corporation + */ + +#include +#include +#include +#include + +#include "common.h" +#include "cred.h" +#include "limits.h" +#include "net.h" +#include "ruleset.h" + +int landlock_append_net_rule(struct landlock_ruleset *const ruleset, + const u16 port, access_mask_t access_rights) +{ + int err; + const struct landlock_id id = { + .key.data = (__force uintptr_t)htons(port), + .type = LANDLOCK_KEY_NET_PORT, + }; + + BUILD_BUG_ON(sizeof(port) > sizeof(id.key.data)); + + /* Transforms relative access rights to absolute ones. 
*/ + access_rights |= LANDLOCK_MASK_ACCESS_NET & + ~landlock_get_net_access_mask(ruleset, 0); + + mutex_lock(&ruleset->lock); + err = landlock_insert_rule(ruleset, id, access_rights); + mutex_unlock(&ruleset->lock); + + return err; +} + +static access_mask_t +get_raw_handled_net_accesses(const struct landlock_ruleset *const domain) +{ + access_mask_t access_dom = 0; + size_t layer_level; + + for (layer_level = 0; layer_level < domain->num_layers; layer_level++) + access_dom |= landlock_get_net_access_mask(domain, layer_level); + return access_dom; +} + +static const struct landlock_ruleset *get_current_net_domain(void) +{ + const struct landlock_ruleset *const dom = + landlock_get_current_domain(); + + if (!dom || !get_raw_handled_net_accesses(dom)) + return NULL; + + return dom; +} + +static int check_socket_access(struct socket *const sock, + struct sockaddr *const address, + const int addrlen, + const access_mask_t access_request) +{ + __be16 port; + layer_mask_t layer_masks[LANDLOCK_NUM_ACCESS_NET] = {}; + const struct landlock_rule *rule; + access_mask_t handled_access; + struct landlock_id id = { + .type = LANDLOCK_KEY_NET_PORT, + }; + const struct landlock_ruleset *const domain = get_current_net_domain(); + + if (WARN_ON_ONCE(!domain)) + return 0; + if (WARN_ON_ONCE(domain->num_layers < 1)) + return -EACCES; + + /* Checks if it's a TCP socket. */ + if (sock->type != SOCK_STREAM) + return 0; + + /* Checks for minimal header length. */ + if (addrlen < offsetofend(struct sockaddr, sa_family)) + return -EINVAL; + + switch (address->sa_family) { + case AF_UNSPEC: + case AF_INET: + if (addrlen < sizeof(struct sockaddr_in)) + return -EINVAL; + port = ((struct sockaddr_in *)address)->sin_port; + break; +#if IS_ENABLED(CONFIG_IPV6) + case AF_INET6: + if (addrlen < SIN6_LEN_RFC2133) + return -EINVAL; + port = ((struct sockaddr_in6 *)address)->sin6_port; + break; +#endif + default: + return 0; + } + + /* Specific AF_UNSPEC handling. */ + if (address->sa_family == AF_UNSPEC) { + /* + * Connecting to an address with AF_UNSPEC dissolves the TCP + * association, which have the same effect as closing the + * connection while retaining the socket object (i.e., the file + * descriptor). As for dropping privileges, closing + * connections is always allowed. + */ + if (access_request == LANDLOCK_ACCESS_NET_CONNECT_TCP) + return 0; + + /* + * For compatibility reason, accept AF_UNSPEC for bind + * accesses (mapped to AF_INET) only if the address is + * INADDR_ANY (cf. __inet_bind). Checking the address is + * required to not wrongfully return -EACCES instead of + * -EAFNOSUPPORT. 
+ */ + if (access_request == LANDLOCK_ACCESS_NET_BIND_TCP) { + const struct sockaddr_in *const sockaddr = + (struct sockaddr_in *)address; + + if (sockaddr->sin_addr.s_addr != htonl(INADDR_ANY)) + return -EAFNOSUPPORT; + } + } + + id.key.data = (__force uintptr_t)port; + BUILD_BUG_ON(sizeof(port) > sizeof(id.key.data)); + + rule = landlock_find_rule(domain, id); + handled_access = landlock_init_layer_masks( + domain, access_request, &layer_masks, LANDLOCK_KEY_NET_PORT); + if (landlock_unmask_layers(rule, handled_access, &layer_masks, + ARRAY_SIZE(layer_masks))) + return 0; + + return -EACCES; +} + +static int hook_socket_bind(struct socket *const sock, + struct sockaddr *const address, const int addrlen) +{ + return check_socket_access(sock, address, addrlen, + LANDLOCK_ACCESS_NET_BIND_TCP); +} + +static int hook_socket_connect(struct socket *const sock, + struct sockaddr *const address, + const int addrlen) +{ + return check_socket_access(sock, address, addrlen, + LANDLOCK_ACCESS_NET_CONNECT_TCP); +} + +static struct security_hook_list landlock_hooks[] __lsm_ro_after_init = { + LSM_HOOK_INIT(socket_bind, hook_socket_bind), + LSM_HOOK_INIT(socket_connect, hook_socket_connect), +}; + +__init void landlock_add_net_hooks(void) +{ + security_add_hooks(landlock_hooks, ARRAY_SIZE(landlock_hooks), + LANDLOCK_NAME); +} diff --git a/security/landlock/net.h b/security/landlock/net.h new file mode 100644 index 000000000000..0da1d9dff5ab --- /dev/null +++ b/security/landlock/net.h @@ -0,0 +1,26 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* + * Landlock LSM - Network management and hooks + * + * Copyright © 2022 Huawei Tech. Co., Ltd. + */ + +#ifndef _SECURITY_LANDLOCK_NET_H +#define _SECURITY_LANDLOCK_NET_H + +#include "common.h" +#include "ruleset.h" +#include "setup.h" + +#if IS_ENABLED(CONFIG_INET) +__init void landlock_add_net_hooks(void); + +int landlock_append_net_rule(struct landlock_ruleset *const ruleset, + const u16 port, access_mask_t access_rights); +#else /* IS_ENABLED(CONFIG_INET) */ +static inline void landlock_add_net_hooks(void) +{ +} +#endif /* IS_ENABLED(CONFIG_INET) */ + +#endif /* _SECURITY_LANDLOCK_NET_H */ diff --git a/security/landlock/ruleset.c b/security/landlock/ruleset.c index b07ad57ee40a..fa1f45587830 100644 --- a/security/landlock/ruleset.c +++ b/security/landlock/ruleset.c @@ -36,6 +36,9 @@ static struct landlock_ruleset *create_ruleset(const u32 num_layers) refcount_set(&new_ruleset->usage, 1); mutex_init(&new_ruleset->lock); new_ruleset->root_inode = RB_ROOT; +#if IS_ENABLED(CONFIG_INET) + new_ruleset->root_net_port = RB_ROOT; +#endif /* IS_ENABLED(CONFIG_INET) */ new_ruleset->num_layers = num_layers; /* * hierarchy = NULL @@ -46,16 +49,21 @@ static struct landlock_ruleset *create_ruleset(const u32 num_layers) } struct landlock_ruleset * -landlock_create_ruleset(const access_mask_t fs_access_mask) +landlock_create_ruleset(const access_mask_t fs_access_mask, + const access_mask_t net_access_mask) { struct landlock_ruleset *new_ruleset; /* Informs about useless ruleset. 
*/ - if (!fs_access_mask) + if (!fs_access_mask && !net_access_mask) return ERR_PTR(-ENOMSG); new_ruleset = create_ruleset(1); - if (!IS_ERR(new_ruleset)) + if (IS_ERR(new_ruleset)) + return new_ruleset; + if (fs_access_mask) landlock_add_fs_access_mask(new_ruleset, fs_access_mask, 0); + if (net_access_mask) + landlock_add_net_access_mask(new_ruleset, net_access_mask, 0); return new_ruleset; } @@ -73,6 +81,10 @@ static bool is_object_pointer(const enum landlock_key_type key_type) switch (key_type) { case LANDLOCK_KEY_INODE: return true; +#if IS_ENABLED(CONFIG_INET) + case LANDLOCK_KEY_NET_PORT: + return false; +#endif /* IS_ENABLED(CONFIG_INET) */ } WARN_ON_ONCE(1); return false; @@ -126,6 +138,11 @@ static struct rb_root *get_root(struct landlock_ruleset *const ruleset, case LANDLOCK_KEY_INODE: root = &ruleset->root_inode; break; +#if IS_ENABLED(CONFIG_INET) + case LANDLOCK_KEY_NET_PORT: + root = &ruleset->root_net_port; + break; +#endif /* IS_ENABLED(CONFIG_INET) */ } if (WARN_ON_ONCE(!root)) return ERR_PTR(-EINVAL); @@ -154,7 +171,8 @@ static void build_check_ruleset(void) BUILD_BUG_ON(ruleset.num_rules < LANDLOCK_MAX_NUM_RULES); BUILD_BUG_ON(ruleset.num_layers < LANDLOCK_MAX_NUM_LAYERS); BUILD_BUG_ON(access_masks < - (LANDLOCK_MASK_ACCESS_FS << LANDLOCK_SHIFT_ACCESS_FS)); + ((LANDLOCK_MASK_ACCESS_FS << LANDLOCK_SHIFT_ACCESS_FS) | + (LANDLOCK_MASK_ACCESS_NET << LANDLOCK_SHIFT_ACCESS_NET))); } /** @@ -373,6 +391,12 @@ static int merge_ruleset(struct landlock_ruleset *const dst, if (err) goto out_unlock; +#if IS_ENABLED(CONFIG_INET) + /* Merges the @src network port tree. */ + err = merge_tree(dst, src, LANDLOCK_KEY_NET_PORT); + if (err) + goto out_unlock; +#endif /* IS_ENABLED(CONFIG_INET) */ out_unlock: mutex_unlock(&src->lock); mutex_unlock(&dst->lock); @@ -429,6 +453,12 @@ static int inherit_ruleset(struct landlock_ruleset *const parent, if (err) goto out_unlock; +#if IS_ENABLED(CONFIG_INET) + /* Copies the @parent network port tree. */ + err = inherit_tree(parent, child, LANDLOCK_KEY_NET_PORT); + if (err) + goto out_unlock; +#endif /* IS_ENABLED(CONFIG_INET) */ if (WARN_ON_ONCE(child->num_layers <= parent->num_layers)) { err = -EINVAL; goto out_unlock; @@ -461,6 +491,11 @@ static void free_ruleset(struct landlock_ruleset *const ruleset) rbtree_postorder_for_each_entry_safe(freeme, next, &ruleset->root_inode, node) free_rule(freeme, LANDLOCK_KEY_INODE); +#if IS_ENABLED(CONFIG_INET) + rbtree_postorder_for_each_entry_safe(freeme, next, + &ruleset->root_net_port, node) + free_rule(freeme, LANDLOCK_KEY_NET_PORT); +#endif /* IS_ENABLED(CONFIG_INET) */ put_hierarchy(ruleset->hierarchy); kfree(ruleset); } @@ -641,7 +676,8 @@ get_access_mask_t(const struct landlock_ruleset *const ruleset, * * @domain: The domain that defines the current restrictions. * @access_request: The requested access rights to check. - * @layer_masks: The layer masks to populate. + * @layer_masks: It must contain LANDLOCK_NUM_ACCESS_FS or LANDLOCK_NUM_ACCESS_NET + * elements according to @key_type. * @key_type: The key type to switch between access masks of different types. 
* * Returns: An access mask where each access right bit is set which is handled @@ -662,6 +698,12 @@ landlock_init_layer_masks(const struct landlock_ruleset *const domain, get_access_mask = landlock_get_fs_access_mask; num_access = LANDLOCK_NUM_ACCESS_FS; break; +#if IS_ENABLED(CONFIG_INET) + case LANDLOCK_KEY_NET_PORT: + get_access_mask = landlock_get_net_access_mask; + num_access = LANDLOCK_NUM_ACCESS_NET; + break; +#endif /* IS_ENABLED(CONFIG_INET) */ default: WARN_ON_ONCE(1); return 0; diff --git a/security/landlock/ruleset.h b/security/landlock/ruleset.h index 2251e6048ccf..dcf7fbac8367 100644 --- a/security/landlock/ruleset.h +++ b/security/landlock/ruleset.h @@ -33,13 +33,16 @@ typedef u16 access_mask_t; /* Makes sure all filesystem access rights can be stored. */ static_assert(BITS_PER_TYPE(access_mask_t) >= LANDLOCK_NUM_ACCESS_FS); +/* Makes sure all network access rights can be stored. */ +static_assert(BITS_PER_TYPE(access_mask_t) >= LANDLOCK_NUM_ACCESS_NET); /* Makes sure for_each_set_bit() and for_each_clear_bit() calls are OK. */ static_assert(sizeof(unsigned long) >= sizeof(access_mask_t)); /* Ruleset access masks. */ -typedef u16 access_masks_t; +typedef u32 access_masks_t; /* Makes sure all ruleset access rights can be stored. */ -static_assert(BITS_PER_TYPE(access_masks_t) >= LANDLOCK_NUM_ACCESS_FS); +static_assert(BITS_PER_TYPE(access_masks_t) >= + LANDLOCK_NUM_ACCESS_FS + LANDLOCK_NUM_ACCESS_NET); typedef u16 layer_mask_t; /* Makes sure all layers can be checked. */ @@ -84,6 +87,13 @@ enum landlock_key_type { * keys. */ LANDLOCK_KEY_INODE = 1, +#if IS_ENABLED(CONFIG_INET) + /** + * @LANDLOCK_KEY_NET_PORT: Type of &landlock_ruleset.root_net_port's + * node keys. + */ + LANDLOCK_KEY_NET_PORT = 2, +#endif /* IS_ENABLED(CONFIG_INET) */ }; /** @@ -158,6 +168,15 @@ struct landlock_ruleset { * reaches zero. */ struct rb_root root_inode; +#if IS_ENABLED(CONFIG_INET) + /** + * @root_net_port: Root of a red-black tree containing &struct + * landlock_rule nodes with network port. Once a ruleset is tied to a + * process (i.e. as a domain), this tree is immutable until @usage + * reaches zero. + */ + struct rb_root root_net_port; +#endif /* IS_ENABLED(CONFIG_INET) */ /** * @hierarchy: Enables hierarchy identification even when a parent * domain vanishes. This is needed for the ptrace protection. @@ -196,13 +215,13 @@ struct landlock_ruleset { */ u32 num_layers; /** - * @access_masks: Contains the subset of filesystem - * actions that are restricted by a ruleset. A domain - * saves all layers of merged rulesets in a stack - * (FAM), starting from the first layer to the last - * one. These layers are used when merging rulesets, - * for user space backward compatibility (i.e. - * future-proof), and to properly handle merged + * @access_masks: Contains the subset of filesystem and + * network actions that are restricted by a ruleset. + * A domain saves all layers of merged rulesets in a + * stack (FAM), starting from the first layer to the + * last one. These layers are used when merging + * rulesets, for user space backward compatibility + * (i.e. future-proof), and to properly handle merged * rulesets without overlapping access rights. These * layers are set once and never changed for the * lifetime of the ruleset. 
@@ -213,7 +232,8 @@ struct landlock_ruleset { }; struct landlock_ruleset * -landlock_create_ruleset(const access_mask_t access_mask); +landlock_create_ruleset(const access_mask_t access_mask_fs, + const access_mask_t access_mask_net); void landlock_put_ruleset(struct landlock_ruleset *const ruleset); void landlock_put_ruleset_deferred(struct landlock_ruleset *const ruleset); @@ -249,6 +269,19 @@ landlock_add_fs_access_mask(struct landlock_ruleset *const ruleset, (fs_mask << LANDLOCK_SHIFT_ACCESS_FS); } +static inline void +landlock_add_net_access_mask(struct landlock_ruleset *const ruleset, + const access_mask_t net_access_mask, + const u16 layer_level) +{ + access_mask_t net_mask = net_access_mask & LANDLOCK_MASK_ACCESS_NET; + + /* Should already be checked in sys_landlock_create_ruleset(). */ + WARN_ON_ONCE(net_access_mask != net_mask); + ruleset->access_masks[layer_level] |= + (net_mask << LANDLOCK_SHIFT_ACCESS_NET); +} + static inline access_mask_t landlock_get_raw_fs_access_mask(const struct landlock_ruleset *const ruleset, const u16 layer_level) @@ -266,6 +299,16 @@ landlock_get_fs_access_mask(const struct landlock_ruleset *const ruleset, return landlock_get_raw_fs_access_mask(ruleset, layer_level) | LANDLOCK_ACCESS_FS_INITIALLY_DENIED; } + +static inline access_mask_t +landlock_get_net_access_mask(const struct landlock_ruleset *const ruleset, + const u16 layer_level) +{ + return (ruleset->access_masks[layer_level] >> + LANDLOCK_SHIFT_ACCESS_NET) & + LANDLOCK_MASK_ACCESS_NET; +} + bool landlock_unmask_layers(const struct landlock_rule *const rule, const access_mask_t access_request, layer_mask_t (*const layer_masks)[], diff --git a/security/landlock/setup.c b/security/landlock/setup.c index 3f196d2ce4f9..7e4a598177b8 100644 --- a/security/landlock/setup.c +++ b/security/landlock/setup.c @@ -14,6 +14,7 @@ #include "fs.h" #include "ptrace.h" #include "setup.h" +#include "net.h" bool landlock_initialized __lsm_ro_after_init = false; @@ -29,6 +30,7 @@ static int __init landlock_init(void) landlock_add_cred_hooks(); landlock_add_ptrace_hooks(); landlock_add_fs_hooks(); + landlock_add_net_hooks(); landlock_initialized = true; pr_info("Up and running.\n"); return 0; diff --git a/security/landlock/syscalls.c b/security/landlock/syscalls.c index 8a54e87dbb17..5cb0a1bc6ec0 100644 --- a/security/landlock/syscalls.c +++ b/security/landlock/syscalls.c @@ -29,6 +29,7 @@ #include "cred.h" #include "fs.h" #include "limits.h" +#include "net.h" #include "ruleset.h" #include "setup.h" @@ -74,7 +75,8 @@ static void build_check_abi(void) { struct landlock_ruleset_attr ruleset_attr; struct landlock_path_beneath_attr path_beneath_attr; - size_t ruleset_size, path_beneath_size; + struct landlock_net_service_attr net_service_attr; + size_t ruleset_size, path_beneath_size, net_service_size; /* * For each user space ABI structures, first checks that there is no @@ -82,13 +84,19 @@ static void build_check_abi(void) * struct size. 
*/ ruleset_size = sizeof(ruleset_attr.handled_access_fs); + ruleset_size += sizeof(ruleset_attr.handled_access_net); BUILD_BUG_ON(sizeof(ruleset_attr) != ruleset_size); - BUILD_BUG_ON(sizeof(ruleset_attr) != 8); + BUILD_BUG_ON(sizeof(ruleset_attr) != 16); path_beneath_size = sizeof(path_beneath_attr.allowed_access); path_beneath_size += sizeof(path_beneath_attr.parent_fd); BUILD_BUG_ON(sizeof(path_beneath_attr) != path_beneath_size); BUILD_BUG_ON(sizeof(path_beneath_attr) != 12); + + net_service_size = sizeof(net_service_attr.allowed_access); + net_service_size += sizeof(net_service_attr.port); + BUILD_BUG_ON(sizeof(net_service_attr) != net_service_size); + BUILD_BUG_ON(sizeof(net_service_attr) != 16); } /* Ruleset handling */ @@ -129,7 +137,7 @@ static const struct file_operations ruleset_fops = { .write = fop_dummy_write, }; -#define LANDLOCK_ABI_VERSION 3 +#define LANDLOCK_ABI_VERSION 4 /** * sys_landlock_create_ruleset - Create a new ruleset @@ -188,8 +196,14 @@ SYSCALL_DEFINE3(landlock_create_ruleset, LANDLOCK_MASK_ACCESS_FS) return -EINVAL; + /* Checks network content (and 32-bits cast). */ + if ((ruleset_attr.handled_access_net | LANDLOCK_MASK_ACCESS_NET) != + LANDLOCK_MASK_ACCESS_NET) + return -EINVAL; + /* Checks arguments and transforms to kernel struct. */ - ruleset = landlock_create_ruleset(ruleset_attr.handled_access_fs); + ruleset = landlock_create_ruleset(ruleset_attr.handled_access_fs, + ruleset_attr.handled_access_net); if (IS_ERR(ruleset)) return PTR_ERR(ruleset); @@ -315,13 +329,54 @@ static int add_rule_path_beneath(struct landlock_ruleset *const ruleset, return err; } +static int add_rule_net_service(struct landlock_ruleset *ruleset, + const void __user *const rule_attr) +{ +#if IS_ENABLED(CONFIG_INET) + struct landlock_net_service_attr net_service_attr; + int res; + access_mask_t mask; + + /* Copies raw user space buffer, only one type for now. */ + res = copy_from_user(&net_service_attr, rule_attr, + sizeof(net_service_attr)); + if (res) + return -EFAULT; + + /* + * Informs about useless rule: empty allowed_access (i.e. deny rules) + * are ignored by network actions. + */ + if (!net_service_attr.allowed_access) + return -ENOMSG; + + /* + * Checks that allowed_access matches the @ruleset constraints + * (ruleset->access_masks[0] is automatically upgraded to 64-bits). + */ + mask = landlock_get_net_access_mask(ruleset, 0); + if ((net_service_attr.allowed_access | mask) != mask) + return -EINVAL; + + /* Denies inserting a rule with port 0 or higher than 65535. */ + if ((net_service_attr.port == 0) || (net_service_attr.port > U16_MAX)) + return -EINVAL; + + /* Imports the new rule. */ + return landlock_append_net_rule(ruleset, net_service_attr.port, + net_service_attr.allowed_access); +#else /* IS_ENABLED(CONFIG_INET) */ + return -EAFNOSUPPORT; +#endif /* IS_ENABLED(CONFIG_INET) */ +} + /** * sys_landlock_add_rule - Add a new rule to a ruleset * * @ruleset_fd: File descriptor tied to the ruleset that should be extended * with the new rule. - * @rule_type: Identify the structure type pointed to by @rule_attr (only - * %LANDLOCK_RULE_PATH_BENEATH for now). + * @rule_type: Identify the structure type pointed to by @rule_attr: + * %LANDLOCK_RULE_PATH_BENEATH or %LANDLOCK_RULE_NET_SERVICE. * @rule_attr: Pointer to a rule (only of type &struct * landlock_path_beneath_attr for now). * @flags: Must be 0. 
@@ -332,6 +387,8 @@ static int add_rule_path_beneath(struct landlock_ruleset *const ruleset, * Possible returned errors are: * * - %EOPNOTSUPP: Landlock is supported by the kernel but disabled at boot time; + * - %EAFNOSUPPORT: @rule_type is LANDLOCK_RULE_NET_SERVICE but TCP/IP is not + * supported by the running kernel; * - %EINVAL: @flags is not 0, or inconsistent access in the rule (i.e. * &landlock_path_beneath_attr.allowed_access is not a subset of the * ruleset handled accesses); @@ -366,6 +423,9 @@ SYSCALL_DEFINE4(landlock_add_rule, const int, ruleset_fd, case LANDLOCK_RULE_PATH_BENEATH: err = add_rule_path_beneath(ruleset, rule_attr); break; + case LANDLOCK_RULE_NET_SERVICE: + err = add_rule_net_service(ruleset, rule_attr); + break; default: err = -EINVAL; break; diff --git a/tools/testing/selftests/landlock/base_test.c b/tools/testing/selftests/landlock/base_test.c index 792c3f0a59b4..646f778dfb1e 100644 --- a/tools/testing/selftests/landlock/base_test.c +++ b/tools/testing/selftests/landlock/base_test.c @@ -75,7 +75,7 @@ TEST(abi_version) const struct landlock_ruleset_attr ruleset_attr = { .handled_access_fs = LANDLOCK_ACCESS_FS_READ_FILE, }; - ASSERT_EQ(3, landlock_create_ruleset(NULL, 0, + ASSERT_EQ(4, landlock_create_ruleset(NULL, 0, LANDLOCK_CREATE_RULESET_VERSION)); ASSERT_EQ(-1, landlock_create_ruleset(&ruleset_attr, 0, From patchwork Mon May 15 16:13:36 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Konstantin Meskhidze (A)" X-Patchwork-Id: 13241775 Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 49D12C8ED for ; Mon, 15 May 2023 16:14:28 +0000 (UTC) Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B85F6213A; Mon, 15 May 2023 09:14:23 -0700 (PDT) Received: from lhrpeml500004.china.huawei.com (unknown [172.18.147.207]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4QKkts02QTz67lVg; Tue, 16 May 2023 00:13:24 +0800 (CST) Received: from mscphis00759.huawei.com (10.123.66.134) by lhrpeml500004.china.huawei.com (7.191.163.9) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.23; Mon, 15 May 2023 17:14:21 +0100 From: Konstantin Meskhidze To: CC: , , , , , , Subject: [PATCH v11 09/12] selftests/landlock: Share enforce_ruleset() Date: Tue, 16 May 2023 00:13:36 +0800 Message-ID: <20230515161339.631577-10-konstantin.meskhidze@huawei.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230515161339.631577-1-konstantin.meskhidze@huawei.com> References: <20230515161339.631577-1-konstantin.meskhidze@huawei.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Originating-IP: [10.123.66.134] X-ClientProxiedBy: mscpeml500002.china.huawei.com (7.188.26.138) To lhrpeml500004.china.huawei.com (7.191.163.9) X-CFilter-Loop: Reflected X-Spam-Status: No, score=-4.2 required=5.0 tests=BAYES_00,RCVD_IN_DNSWL_MED, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net This commit moves enforce_ruleset() helper function to common.h so that it can be used both by filesystem tests and network 
ones. Signed-off-by: Konstantin Meskhidze --- Changes since v10: * Refactors commit message. Changes since v9: * None. Changes since v8: * Adds __maybe_unused attribute for enforce_ruleset() helper. Changes since v7: * Refactors commit message. Changes since v6: * None. Changes since v5: * Splits commit. * Moves enforce_ruleset helper into common.h * Formats code with clang-format-14. --- tools/testing/selftests/landlock/common.h | 10 ++++++++++ tools/testing/selftests/landlock/fs_test.c | 10 ---------- 2 files changed, 10 insertions(+), 10 deletions(-) -- 2.25.1 diff --git a/tools/testing/selftests/landlock/common.h b/tools/testing/selftests/landlock/common.h index d7987ae8d7fc..0fd6c4cf5e6f 100644 --- a/tools/testing/selftests/landlock/common.h +++ b/tools/testing/selftests/landlock/common.h @@ -256,3 +256,13 @@ static int __maybe_unused send_fd(int usock, int fd_tx) return -errno; return 0; } + +static void __maybe_unused +enforce_ruleset(struct __test_metadata *const _metadata, const int ruleset_fd) +{ + ASSERT_EQ(0, prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)); + ASSERT_EQ(0, landlock_restrict_self(ruleset_fd, 0)) + { + TH_LOG("Failed to enforce ruleset: %s", strerror(errno)); + } +} diff --git a/tools/testing/selftests/landlock/fs_test.c b/tools/testing/selftests/landlock/fs_test.c index b6c4be3faf7a..b762b5419a89 100644 --- a/tools/testing/selftests/landlock/fs_test.c +++ b/tools/testing/selftests/landlock/fs_test.c @@ -598,16 +598,6 @@ static int create_ruleset(struct __test_metadata *const _metadata, return ruleset_fd; } -static void enforce_ruleset(struct __test_metadata *const _metadata, - const int ruleset_fd) -{ - ASSERT_EQ(0, prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)); - ASSERT_EQ(0, landlock_restrict_self(ruleset_fd, 0)) - { - TH_LOG("Failed to enforce ruleset: %s", strerror(errno)); - } -} - TEST_F_FORK(layout1, proc_nsfs) { const struct rule rules[] = { From patchwork Mon May 15 16:13:37 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Konstantin Meskhidze (A)" X-Patchwork-Id: 13241779 Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 56048101C8 for ; Mon, 15 May 2023 16:14:36 +0000 (UTC) Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D083F199; Mon, 15 May 2023 09:14:26 -0700 (PDT) Received: from lhrpeml500004.china.huawei.com (unknown [172.18.147.206]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4QKktw0Kzrz67nTp; Tue, 16 May 2023 00:13:28 +0800 (CST) Received: from mscphis00759.huawei.com (10.123.66.134) by lhrpeml500004.china.huawei.com (7.191.163.9) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.23; Mon, 15 May 2023 17:14:24 +0100 From: Konstantin Meskhidze To: CC: , , , , , , Subject: [PATCH v11 10/12] selftests/landlock: Add 11 new test suites dedicated to network Date: Tue, 16 May 2023 00:13:37 +0800 Message-ID: <20230515161339.631577-11-konstantin.meskhidze@huawei.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230515161339.631577-1-konstantin.meskhidze@huawei.com> References: <20230515161339.631577-1-konstantin.meskhidze@huawei.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 
X-Originating-IP: [10.123.66.134] X-ClientProxiedBy: mscpeml500002.china.huawei.com (7.188.26.138) To lhrpeml500004.china.huawei.com (7.191.163.9) X-CFilter-Loop: Reflected X-Spam-Status: No, score=-4.2 required=5.0 tests=BAYES_00,RCVD_IN_DNSWL_MED, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net These test suites try to check edge cases for TCP sockets bind() and connect() actions. inet: * bind: Tests with non-landlocked/landlocked ipv4 and ipv6 sockets. * connect: Tests with non-landlocked/landlocked ipv4 and ipv6 sockets. * bind_afunspec: Tests with non-landlocked/landlocked restrictions for bind action with AF_UNSPEC socket family. * connect_afunspec: Tests with non-landlocked/landlocked restrictions for connect action with AF_UNSPEC socket family. * ruleset_overlap: Tests with overlapping rules for one port. * ruleset_expanding: Tests with expanding rulesets in which rules are gradually added one by one, restricting sockets' connections. * inval_port_format: Tests with wrong port format for ipv4/ipv6 sockets and with port values more than U16_MAX. port: * inval: Tests with invalid user space supplied data: - out of range ruleset attribute; - unhandled allowed access; - zero port value; - zero access value; - legitimate access values; * bind_connect_inval_addrlen: Tests with invalid address length. * bind_connect_unix_*_socket: Tests to make sure unix sockets' actions are not restricted by Landlock rules applied to TCP ones. layout1: * with_net: Tests with network bind() socket action within filesystem directory access test. Test coverage for security/landlock is 94.8% of 934 lines according to gcc/gcov-11. Signed-off-by: Konstantin Meskhidze --- Changes since v10: * Replaces FIXTURE_VARIANT() with struct _fixture_variant_ . * Changes tests names socket -> inet, standalone -> port. * Gets rid of some DEFINEs. * Changes names and groups tests' variables. * Changes create_socket_variant() helper name to socket_variant(). * Refactors FIXTURE_SETUP(port) logic. * Changes TEST_F_FORK -> TEST_F since there no teardown. * Refactors some tests' logic. * Minor fixes. * Refactors commit message. Changes since v9: * Fixes mixing code declaration and code. * Refactors FIXTURE_TEARDOWN() with clang-format. * Replaces struct _fixture_variant_socket with FIXTURE_VARIANT(socket). * Deletes useless condition if (variant->is_sandboxed) in multiple locations. * Deletes zero_size argument in bind_variant() and connect_variant(). * Adds tests for port values exceeding U16_MAX. Changes since v8: * Adds is_sandboxed const for FIXTURE_VARIANT(socket). * Refactors AF_UNSPEC tests. * Adds address length checking tests. * Convert ports in all tests to __be16. * Adds invalid port values tests. * Minor fixes. Changes since v7: * Squashes all selftest commits. * Adds fs test with network bind() socket action. * Minor fixes. 
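
For reference, the calling pattern shared by these new suites can be condensed into a standalone sketch: handle the TCP access rights in a ruleset, add one LANDLOCK_RULE_NET_SERVICE rule, enforce it, then observe that bind() is allowed while connect() is denied on the same port. The sketch below assumes a kernel and UAPI header with this series applied (struct landlock_net_service_attr and the LANDLOCK_ACCESS_NET_* rights do not exist otherwise); the raw syscall() calls stand in for the kselftest harness helpers, and port 3470 plus the loopback address are illustrative only.

#define _GNU_SOURCE
#include <arpa/inet.h>
#include <linux/landlock.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <sys/socket.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	const struct landlock_ruleset_attr ruleset_attr = {
		.handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP |
				      LANDLOCK_ACCESS_NET_CONNECT_TCP,
	};
	const struct landlock_net_service_attr tcp_bind = {
		.allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP,
		.port = 3470, /* Arbitrary example port, host byte order. */
	};
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(3470),
		.sin_addr.s_addr = inet_addr("127.0.0.1"),
	};
	int ruleset_fd, sockfd;

	ruleset_fd = syscall(__NR_landlock_create_ruleset, &ruleset_attr,
			     sizeof(ruleset_attr), 0);
	if (ruleset_fd < 0)
		return 1;
	/* Only bind(2) to port 3470 is granted; connect(2) stays denied. */
	if (syscall(__NR_landlock_add_rule, ruleset_fd,
		    LANDLOCK_RULE_NET_SERVICE, &tcp_bind, 0))
		return 1;
	if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
		return 1;
	if (syscall(__NR_landlock_restrict_self, ruleset_fd, 0))
		return 1;
	close(ruleset_fd);

	sockfd = socket(AF_INET, SOCK_STREAM | SOCK_CLOEXEC, 0);
	if (sockfd < 0)
		return 1;
	/* Expected to succeed: the port is covered by a bind rule. */
	if (bind(sockfd, (struct sockaddr *)&addr, sizeof(addr)))
		perror("bind");
	/* Expected to fail with EACCES: no connect rule covers this port. */
	if (connect(sockfd, (struct sockaddr *)&addr, sizeof(addr)))
		perror("connect");
	return close(sockfd);
}
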
--- tools/testing/selftests/landlock/config | 4 + tools/testing/selftests/landlock/fs_test.c | 64 + tools/testing/selftests/landlock/net_test.c | 1317 +++++++++++++++++++ 3 files changed, 1385 insertions(+) create mode 100644 tools/testing/selftests/landlock/net_test.c -- 2.25.1 diff --git a/tools/testing/selftests/landlock/config b/tools/testing/selftests/landlock/config index 0f0a65287bac..71f7e9a8a64c 100644 --- a/tools/testing/selftests/landlock/config +++ b/tools/testing/selftests/landlock/config @@ -1,3 +1,7 @@ +CONFIG_INET=y +CONFIG_IPV6=y +CONFIG_NET=y +CONFIG_NET_NS=y CONFIG_OVERLAY_FS=y CONFIG_SECURITY_LANDLOCK=y CONFIG_SECURITY_PATH=y diff --git a/tools/testing/selftests/landlock/fs_test.c b/tools/testing/selftests/landlock/fs_test.c index b762b5419a89..9175ee8adf51 100644 --- a/tools/testing/selftests/landlock/fs_test.c +++ b/tools/testing/selftests/landlock/fs_test.c @@ -8,8 +8,10 @@ */ #define _GNU_SOURCE +#include #include #include +#include #include #include #include @@ -17,6 +19,7 @@ #include #include #include +#include #include #include #include @@ -4413,4 +4416,65 @@ TEST_F_FORK(layout2_overlay, same_content_different_file) } } +static const char loopback_ipv4[] = "127.0.0.1"; +const unsigned short sock_port = 15000; + +TEST_F_FORK(layout1, with_net) +{ + const struct rule rules[] = { + { + .path = dir_s1d2, + .access = ACCESS_RO, + }, + {}, + }; + struct landlock_ruleset_attr ruleset_attr_net = { + .handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, + }; + struct landlock_net_service_attr tcp_bind = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP, + + .port = sock_port, + }; + int sockfd, ruleset_fd, ruleset_fd_net; + struct sockaddr_in addr4; + + addr4.sin_family = AF_INET; + addr4.sin_port = htons(sock_port); + addr4.sin_addr.s_addr = inet_addr(loopback_ipv4); + memset(&addr4.sin_zero, '\0', 8); + + /* Creates ruleset for network access. */ + ruleset_fd_net = landlock_create_ruleset(&ruleset_attr_net, + sizeof(ruleset_attr_net), 0); + ASSERT_LE(0, ruleset_fd_net); + + /* Adds a network rule. */ + ASSERT_EQ(0, + landlock_add_rule(ruleset_fd_net, LANDLOCK_RULE_NET_SERVICE, + &tcp_bind, 0)); + + enforce_ruleset(_metadata, ruleset_fd_net); + ASSERT_EQ(0, close(ruleset_fd_net)); + + ruleset_fd = create_ruleset(_metadata, ACCESS_RW, rules); + + ASSERT_LE(0, ruleset_fd); + enforce_ruleset(_metadata, ruleset_fd); + ASSERT_EQ(0, close(ruleset_fd)); + + /* Tests on a directory with the network rule loaded. */ + ASSERT_EQ(0, test_open(dir_s1d2, O_RDONLY)); + ASSERT_EQ(0, test_open(file1_s1d2, O_RDONLY)); + + sockfd = socket(AF_INET, SOCK_STREAM | SOCK_CLOEXEC, 0); + ASSERT_LE(0, sockfd); + /* Binds a socket to port 15000. */ + ASSERT_EQ(0, bind(sockfd, &addr4, sizeof(addr4))); + + /* Closes bounded socket. */ + ASSERT_EQ(0, close(sockfd)); +} + TEST_HARNESS_MAIN diff --git a/tools/testing/selftests/landlock/net_test.c b/tools/testing/selftests/landlock/net_test.c new file mode 100644 index 000000000000..d4e219e42bba --- /dev/null +++ b/tools/testing/selftests/landlock/net_test.c @@ -0,0 +1,1317 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * Landlock tests - Network + * + * Copyright (C) 2022 Huawei Tech. Co., Ltd. 
+ */ + +#define _GNU_SOURCE +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "common.h" + +#define MAX_SOCKET_NUM 10 + +const short sock_port_start = 3470; +const short sock_port_add = 10; + +static const char loopback_ipv4[] = "127.0.0.1"; +static const char loopback_ipv6[] = "::1"; +static const char unix_address_path[] = "/tmp/unix_addr"; + +/* Number pending connections queue to be hold. */ +const short backlog = 10; + +/* Invalid attribute, out of landlock network access range. */ +const short landlock_inval_attr = 7; + +FIXTURE(inet) +{ + unsigned short port[MAX_SOCKET_NUM]; + struct sockaddr_in addr4[MAX_SOCKET_NUM]; + struct sockaddr_in6 addr6[MAX_SOCKET_NUM]; +}; + +/* struct _fixture_variant_inet */ +FIXTURE_VARIANT(inet) +{ + const bool is_ipv4; + const bool is_sandboxed; +}; + +/* clang-format off */ +FIXTURE_VARIANT_ADD(inet, ipv4) { + /* clang-format on */ + .is_ipv4 = true, + .is_sandboxed = false, +}; + +/* clang-format off */ +FIXTURE_VARIANT_ADD(inet, ipv4_sandboxed) { + /* clang-format on */ + .is_ipv4 = true, + .is_sandboxed = true, +}; + +/* clang-format off */ +FIXTURE_VARIANT_ADD(inet, ipv6) { + /* clang-format on */ + .is_ipv4 = false, + .is_sandboxed = false, +}; + +/* clang-format off */ +FIXTURE_VARIANT_ADD(inet, ipv6_sandboxed) { + /* clang-format on */ + .is_ipv4 = false, + .is_sandboxed = true, +}; + +static int socket_variant(const struct _fixture_variant_inet *const variant, + const int type) +{ + if (variant->is_ipv4) + return socket(AF_INET, type | SOCK_CLOEXEC, 0); + else + return socket(AF_INET6, type | SOCK_CLOEXEC, 0); +} + +static int bind_variant(const struct _fixture_variant_inet *const variant, + const int sockfd, + const struct _test_data_inet *const self, + const size_t index) +{ + if (variant->is_ipv4) + return bind(sockfd, &self->addr4[index], + sizeof(self->addr4[index])); + else + return bind(sockfd, &self->addr6[index], + sizeof(self->addr6[index])); +} + +static int connect_variant(const struct _fixture_variant_inet *const variant, + const int sockfd, + const struct _test_data_inet *const self, + const size_t index) +{ + if (variant->is_ipv4) + return connect(sockfd, &self->addr4[index], + sizeof(self->addr4[index])); + else + return connect(sockfd, &self->addr6[index], + sizeof(self->addr6[index])); +} + +FIXTURE_SETUP(inet) +{ + int i; + + for (i = 0; i < MAX_SOCKET_NUM; i++) { + /* Initializes socket ports . */ + self->port[i] = sock_port_start + sock_port_add * i; + + /* Initializes and IPv4 socket addresses. */ + self->addr4[i].sin_family = AF_INET; + self->addr4[i].sin_port = htons(self->port[i]); + self->addr4[i].sin_addr.s_addr = inet_addr(loopback_ipv4); + memset(&(self->addr4[i].sin_zero), '\0', 8); + + /* Initializes IPv6 socket addresses. 
*/ + self->addr6[i].sin6_family = AF_INET6; + self->addr6[i].sin6_port = htons(self->port[i]); + inet_pton(AF_INET6, loopback_ipv6, &(self->addr6[i].sin6_addr)); + } + + set_cap(_metadata, CAP_SYS_ADMIN); + ASSERT_EQ(0, unshare(CLONE_NEWNET)); + ASSERT_EQ(0, system("ip link set dev lo up")); + clear_cap(_metadata, CAP_SYS_ADMIN); +}; + +FIXTURE_TEARDOWN(inet) +{ +} + +FIXTURE(port) +{ + unsigned short port[MAX_SOCKET_NUM]; +}; + +/* struct _fixture_variant_port */ +FIXTURE_VARIANT(port) +{ + const bool is_sandboxed; +}; + +/* clang-format off */ +FIXTURE_VARIANT_ADD(port, none_sandboxed) { + /* clang-format on */ + .is_sandboxed = false, +}; + +/* clang-format off */ +FIXTURE_VARIANT_ADD(port, sandboxed) { + /* clang-format on */ + .is_sandboxed = true, +}; + +FIXTURE_SETUP(port) +{ + int i; + + /* Initializes socket ports . */ + for (i = 0; i < MAX_SOCKET_NUM; i++) + self->port[i] = sock_port_start + sock_port_add * i; + + set_cap(_metadata, CAP_SYS_ADMIN); + ASSERT_EQ(0, unshare(CLONE_NEWNET)); + ASSERT_EQ(0, system("ip link set dev lo up")); + clear_cap(_metadata, CAP_SYS_ADMIN); +}; + +FIXTURE_TEARDOWN(port) +{ +} + +TEST_F(inet, bind) +{ + struct landlock_ruleset_attr ruleset_attr = { + .handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, + }; + struct landlock_net_service_attr tcp_bind_connect = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, + .port = self->port[0], + }; + struct landlock_net_service_attr tcp_connect = { + .allowed_access = LANDLOCK_ACCESS_NET_CONNECT_TCP, + .port = self->port[1], + }; + struct landlock_net_service_attr tcp_denied = { + .allowed_access = 0, + .port = self->port[2], + }; + int ruleset_fd, sockfd, ret; + + if (variant->is_sandboxed) { + ruleset_fd = landlock_create_ruleset(&ruleset_attr, + sizeof(ruleset_attr), 0); + ASSERT_LE(0, ruleset_fd); + + /* + * Allows connect and bind operations to the port[0] + * socket. + */ + ASSERT_EQ(0, landlock_add_rule(ruleset_fd, + LANDLOCK_RULE_NET_SERVICE, + &tcp_bind_connect, 0)); + /* + * Allows connect and denies bind operations to the port[1] + * socket. + */ + ASSERT_EQ(0, landlock_add_rule(ruleset_fd, + LANDLOCK_RULE_NET_SERVICE, + &tcp_connect, 0)); + /* + * Empty allowed_access (i.e. deny rules) are ignored in + * network actions for port[2] socket. + */ + ASSERT_EQ(-1, landlock_add_rule(ruleset_fd, + LANDLOCK_RULE_NET_SERVICE, + &tcp_denied, 0)); + ASSERT_EQ(ENOMSG, errno); + + /* Enforces the ruleset. */ + enforce_ruleset(_metadata, ruleset_fd); + } + + sockfd = socket_variant(variant, SOCK_STREAM); + ASSERT_LE(0, sockfd); + /* Binds a socket to port[0]. */ + ret = bind_variant(variant, sockfd, self, 0); + ASSERT_EQ(0, ret); + + /* Closes bounded socket. */ + ASSERT_EQ(0, close(sockfd)); + + sockfd = socket_variant(variant, SOCK_STREAM); + ASSERT_LE(0, sockfd); + /* Binds a socket to port[1]. */ + ret = bind_variant(variant, sockfd, self, 1); + if (variant->is_sandboxed) { + ASSERT_EQ(-1, ret); + ASSERT_EQ(EACCES, errno); + } else { + ASSERT_EQ(0, ret); + } + + sockfd = socket_variant(variant, SOCK_STREAM); + ASSERT_LE(0, sockfd); + /* Binds a socket to port[2]. 
*/ + ret = bind_variant(variant, sockfd, self, 2); + if (variant->is_sandboxed) { + ASSERT_EQ(-1, ret); + ASSERT_EQ(EACCES, errno); + } else { + ASSERT_EQ(0, ret); + } +} + +TEST_F(inet, connect) +{ + struct landlock_ruleset_attr ruleset_attr = { + .handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, + }; + struct landlock_net_service_attr tcp_bind_connect = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, + .port = self->port[0], + }; + struct landlock_net_service_attr tcp_bind = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP, + .port = self->port[1], + }; + int accept_fd, ruleset_fd, sockfd_1, sockfd_2, status, ret; + pid_t child_1, child_2; + + if (variant->is_sandboxed) { + ruleset_fd = landlock_create_ruleset(&ruleset_attr, + sizeof(ruleset_attr), 0); + ASSERT_LE(0, ruleset_fd); + + /* + * Allows connect and bind operations to the port[0] + * socket. + */ + ASSERT_EQ(0, landlock_add_rule(ruleset_fd, + LANDLOCK_RULE_NET_SERVICE, + &tcp_bind_connect, 0)); + /* + * Allows bind and denies connect operations to the port[1] + * socket. + */ + ASSERT_EQ(0, landlock_add_rule(ruleset_fd, + LANDLOCK_RULE_NET_SERVICE, + &tcp_bind, 0)); + + /* Enforces the ruleset. */ + enforce_ruleset(_metadata, ruleset_fd); + } + + /* Creates a server socket 1. */ + sockfd_1 = socket_variant(variant, SOCK_STREAM); + ASSERT_LE(0, sockfd_1); + + /* Binds the socket 1 to address with port[0]. */ + ret = bind_variant(variant, sockfd_1, self, 0); + ASSERT_EQ(0, ret); + + /* Makes listening socket 1. */ + ret = listen(sockfd_1, backlog); + ASSERT_EQ(0, ret); + + child_1 = fork(); + ASSERT_LE(0, child_1); + if (child_1 == 0) { + int child_sockfd, ret; + + /* Closes listening socket for the child. */ + ASSERT_EQ(0, close(sockfd_1)); + /* Creates a stream client socket. */ + child_sockfd = socket_variant(variant, SOCK_STREAM); + ASSERT_LE(0, child_sockfd); + + /* Makes connection to the listening socket with port[0]. */ + ret = connect_variant(variant, child_sockfd, self, 0); + ASSERT_EQ(0, ret); + + _exit(_metadata->passed ? EXIT_SUCCESS : EXIT_FAILURE); + return; + } + /* Accepts connection from the child 1. */ + accept_fd = accept(sockfd_1, NULL, 0); + ASSERT_LE(0, accept_fd); + + /* Closes connection. */ + ASSERT_EQ(0, close(accept_fd)); + + /* Closes listening socket 1 for the parent. */ + ASSERT_EQ(0, close(sockfd_1)); + + ASSERT_EQ(child_1, waitpid(child_1, &status, 0)); + ASSERT_EQ(1, WIFEXITED(status)); + ASSERT_EQ(EXIT_SUCCESS, WEXITSTATUS(status)); + + /* Creates a server socket 2. */ + sockfd_2 = socket_variant(variant, SOCK_STREAM); + ASSERT_LE(0, sockfd_2); + + /* Binds the socket 2 to address with port[1]. */ + ret = bind_variant(variant, sockfd_2, self, 1); + ASSERT_EQ(0, ret); + + /* Makes listening socket 2. */ + ret = listen(sockfd_2, backlog); + ASSERT_EQ(0, ret); + + child_2 = fork(); + ASSERT_LE(0, child_2); + if (child_2 == 0) { + int child_sockfd, ret; + + /* Closes listening socket for the child. */ + ASSERT_EQ(0, close(sockfd_2)); + /* Creates a stream client socket. */ + child_sockfd = socket_variant(variant, SOCK_STREAM); + ASSERT_LE(0, child_sockfd); + + /* Makes connection to the listening socket with port[1]. */ + ret = connect_variant(variant, child_sockfd, self, 1); + if (variant->is_sandboxed) { + ASSERT_EQ(-1, ret); + ASSERT_EQ(EACCES, errno); + } else { + ASSERT_EQ(0, ret); + } + _exit(_metadata->passed ? 
EXIT_SUCCESS : EXIT_FAILURE); + return; + } + + if (!variant->is_sandboxed) { + /* Accepts connection from the child 2. */ + accept_fd = accept(sockfd_1, NULL, 0); + ASSERT_LE(0, accept_fd); + + /* Closes connection. */ + ASSERT_EQ(0, close(accept_fd)); + } + + /* Closes listening socket 2 for the parent. */ + ASSERT_EQ(0, close(sockfd_2)); + + ASSERT_EQ(child_2, waitpid(child_2, &status, 0)); + ASSERT_EQ(1, WIFEXITED(status)); + ASSERT_EQ(EXIT_SUCCESS, WEXITSTATUS(status)); +} + +TEST_F(inet, bind_afunspec) +{ + struct landlock_ruleset_attr ruleset_attr_net = { + .handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, + }; + struct landlock_net_service_attr tcp_bind = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP, + + .port = self->port[0], + }; + int ruleset_fd_net, sockfd_unspec, ret; + + if (variant->is_ipv4) { + self->addr4[0].sin_family = AF_UNSPEC; + self->addr4[0].sin_addr.s_addr = htonl(INADDR_ANY); + } + + if (variant->is_sandboxed) { + /* Creates ruleset for network access. */ + ruleset_fd_net = landlock_create_ruleset( + &ruleset_attr_net, sizeof(ruleset_attr_net), 0); + ASSERT_LE(0, ruleset_fd_net); + + /* Adds a network rule. */ + ASSERT_EQ(0, landlock_add_rule(ruleset_fd_net, + LANDLOCK_RULE_NET_SERVICE, + &tcp_bind, 0)); + + enforce_ruleset(_metadata, ruleset_fd_net); + ASSERT_EQ(0, close(ruleset_fd_net)); + } + + sockfd_unspec = socket_variant(variant, SOCK_STREAM); + ASSERT_LE(0, sockfd_unspec); + + /* Binds a socket to the port[0] with INADDR_ANY address. */ + ret = bind_variant(variant, sockfd_unspec, self, 0); + ASSERT_EQ(0, ret); + + /* Closes bounded socket. */ + ASSERT_EQ(0, close(sockfd_unspec)); + + if (variant->is_ipv4) { + /* Changes to a specific address. */ + self->addr4[0].sin_addr.s_addr = inet_addr(loopback_ipv4); + + sockfd_unspec = socket_variant(variant, SOCK_STREAM); + ASSERT_LE(0, sockfd_unspec); + + /* Binds a socket to the port[0] with the specific address. */ + ret = bind_variant(variant, sockfd_unspec, self, 0); + ASSERT_EQ(-1, ret); + ASSERT_EQ(EAFNOSUPPORT, errno); + + /* Closes bounded socket. */ + ASSERT_EQ(0, close(sockfd_unspec)); + } +} + +TEST_F(inet, connect_afunspec) +{ + struct landlock_ruleset_attr ruleset_attr_bind = { + .handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP, + }; + struct landlock_net_service_attr tcp_bind = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP, + + .port = self->port[0], + }; + struct landlock_ruleset_attr ruleset_attr_bind_connect = { + .handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, + }; + struct landlock_net_service_attr tcp_bind_connect = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, + + .port = self->port[0], + }; + int sockfd, ruleset_fd_1, ruleset_fd_2, status, ret; + struct sockaddr addr_unspec = { .sa_family = AF_UNSPEC }; + pid_t child; + + if (variant->is_sandboxed) { + ruleset_fd_1 = landlock_create_ruleset( + &ruleset_attr_bind, sizeof(ruleset_attr_bind), 0); + ASSERT_LE(0, ruleset_fd_1); + + /* Allows bind operations to the port[0] socket. */ + ASSERT_EQ(0, landlock_add_rule(ruleset_fd_1, + LANDLOCK_RULE_NET_SERVICE, + &tcp_bind, 0)); + + /* Enforces the ruleset. */ + enforce_ruleset(_metadata, ruleset_fd_1); + } + + /* Creates a server socket 1. */ + sockfd = socket_variant(variant, SOCK_STREAM); + ASSERT_LE(0, sockfd); + + /* Binds the socket 1 to address with port[0]. 
*/ + ret = bind_variant(variant, sockfd, self, 0); + ASSERT_EQ(0, ret); + + /* Makes connection to socket with port[0]. */ + ret = connect_variant(variant, sockfd, self, 0); + ASSERT_EQ(0, ret); + + if (variant->is_sandboxed) { + ruleset_fd_2 = landlock_create_ruleset( + &ruleset_attr_bind_connect, + sizeof(ruleset_attr_bind_connect), 0); + ASSERT_LE(0, ruleset_fd_2); + + /* Allows connect and bind operations to the port[0] socket. */ + ASSERT_EQ(0, landlock_add_rule(ruleset_fd_2, + LANDLOCK_RULE_NET_SERVICE, + &tcp_bind_connect, 0)); + + /* Enforces the ruleset. */ + enforce_ruleset(_metadata, ruleset_fd_2); + } + child = fork(); + ASSERT_LE(0, child); + if (child == 0) { + int ret; + + /* Child tries to disconnect already connected socket. */ + ret = connect(sockfd, (struct sockaddr *)&addr_unspec, + sizeof(addr_unspec)); + ASSERT_EQ(0, ret); + + _exit(_metadata->passed ? EXIT_SUCCESS : EXIT_FAILURE); + return; + } + /* Closes listening socket 1 for the parent. */ + ASSERT_EQ(0, close(sockfd)); + + ASSERT_EQ(child, waitpid(child, &status, 0)); + ASSERT_EQ(1, WIFEXITED(status)); + ASSERT_EQ(EXIT_SUCCESS, WEXITSTATUS(status)); +} + +TEST_F(inet, ruleset_overlap) +{ + struct landlock_ruleset_attr ruleset_attr = { + .handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, + }; + struct landlock_net_service_attr tcp_bind = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP, + + .port = self->port[0], + }; + struct landlock_net_service_attr tcp_bind_connect = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, + + .port = self->port[0], + }; + int ruleset_fd, sockfd; + int one = 1; + + ruleset_fd = + landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0); + ASSERT_LE(0, ruleset_fd); + + /* Allows bind operations to the port[0] socket. */ + ASSERT_EQ(0, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_SERVICE, + &tcp_bind, 0)); + /* Allows connect and bind operations to the port[0] socket. */ + ASSERT_EQ(0, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_SERVICE, + &tcp_bind_connect, 0)); + + /* Enforces the ruleset. */ + enforce_ruleset(_metadata, ruleset_fd); + + /* Creates a server socket. */ + sockfd = socket_variant(variant, SOCK_STREAM); + ASSERT_LE(0, sockfd); + /* Allows to reuse of local address. */ + ASSERT_EQ(0, setsockopt(sockfd, SOL_SOCKET, SO_REUSEADDR, &one, + sizeof(one))); + + /* Binds the socket to address with port[0]. */ + ASSERT_EQ(0, bind_variant(variant, sockfd, self, 0)); + + /* Makes connection to socket with port[0]. */ + ASSERT_EQ(0, connect_variant(variant, sockfd, self, 0)); + + /* Closes socket. */ + ASSERT_EQ(0, close(sockfd)); + + /* Creates another ruleset layer. */ + ruleset_fd = + landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0); + ASSERT_LE(0, ruleset_fd); + + /* + * Allows bind operations to the port[0] socket in + * the new ruleset layer. + */ + ASSERT_EQ(0, landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_SERVICE, + &tcp_bind, 0)); + + /* Enforces the new ruleset. */ + enforce_ruleset(_metadata, ruleset_fd); + + /* Creates a server socket. */ + sockfd = socket_variant(variant, SOCK_STREAM); + ASSERT_LE(0, sockfd); + /* Allows to reuse of local address. */ + ASSERT_EQ(0, setsockopt(sockfd, SOL_SOCKET, SO_REUSEADDR, &one, + sizeof(one))); + + /* Binds the socket to address with port[0]. */ + ASSERT_EQ(0, bind_variant(variant, sockfd, self, 0)); + + /* + * Forbids to connect the socket to address with port[0], + * as just one ruleset layer has connect() access rule. 
+ */ + ASSERT_EQ(-1, connect_variant(variant, sockfd, self, 0)); + ASSERT_EQ(EACCES, errno); + + /* Closes socket. */ + ASSERT_EQ(0, close(sockfd)); +} + +TEST_F(inet, ruleset_expanding) +{ + struct landlock_ruleset_attr ruleset_attr_1 = { + .handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP, + }; + struct landlock_net_service_attr net_service_1 = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP, + + .port = self->port[0], + }; + int sockfd_1, sockfd_2; + int one = 1; + + const int ruleset_fd_1 = landlock_create_ruleset( + &ruleset_attr_1, sizeof(ruleset_attr_1), 0); + ASSERT_LE(0, ruleset_fd_1); + + /* Adds rule to port[0] socket. */ + ASSERT_EQ(0, landlock_add_rule(ruleset_fd_1, LANDLOCK_RULE_NET_SERVICE, + &net_service_1, 0)); + + /* Enforces the ruleset. */ + enforce_ruleset(_metadata, ruleset_fd_1); + ASSERT_EQ(0, close(ruleset_fd_1)); + + /* Creates a socket 1. */ + sockfd_1 = socket_variant(variant, SOCK_STREAM); + ASSERT_LE(0, sockfd_1); + /* Allows to reuse of local address. */ + ASSERT_EQ(0, setsockopt(sockfd_1, SOL_SOCKET, SO_REUSEADDR, &one, + sizeof(one))); + + /* Binds the socket 1 to address with port[0]. */ + ASSERT_EQ(0, bind_variant(variant, sockfd_1, self, 0)); + + /* Makes connection to socket 1 with port[0]. */ + ASSERT_EQ(0, connect_variant(variant, sockfd_1, self, 0)); + + /* Closes socket 1. */ + ASSERT_EQ(0, close(sockfd_1)); + + /* Creates a socket 2. */ + sockfd_2 = socket_variant(variant, SOCK_STREAM); + ASSERT_LE(0, sockfd_2); + /* Allows to reuse of local address. */ + ASSERT_EQ(0, setsockopt(sockfd_2, SOL_SOCKET, SO_REUSEADDR, &one, + sizeof(one))); + + /* + * Forbids to bind the socket 2 to address with port[1], + * since there is no rule with bind() access for port[1]. + */ + ASSERT_EQ(-1, bind_variant(variant, sockfd_2, self, 1)); + ASSERT_EQ(EACCES, errno); + + /* Expands network mask. */ + struct landlock_ruleset_attr ruleset_attr_2 = { + .handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, + }; + + /* Adds connect() access to port[0]. */ + struct landlock_net_service_attr net_service_2 = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, + + .port = self->port[0], + }; + /* Adds bind() access to port[1]. */ + struct landlock_net_service_attr net_service_3 = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP, + + .port = self->port[1], + }; + + const int ruleset_fd_2 = landlock_create_ruleset( + &ruleset_attr_2, sizeof(ruleset_attr_2), 0); + ASSERT_LE(0, ruleset_fd_2); + + /* Adds rule to port[0] socket. */ + ASSERT_EQ(0, landlock_add_rule(ruleset_fd_2, LANDLOCK_RULE_NET_SERVICE, + &net_service_2, 0)); + /* Adds rule to port[1] socket. */ + ASSERT_EQ(0, landlock_add_rule(ruleset_fd_2, LANDLOCK_RULE_NET_SERVICE, + &net_service_3, 0)); + + /* Enforces the ruleset. */ + enforce_ruleset(_metadata, ruleset_fd_2); + ASSERT_EQ(0, close(ruleset_fd_2)); + + /* Creates a socket 1. */ + sockfd_1 = socket_variant(variant, SOCK_STREAM); + ASSERT_LE(0, sockfd_1); + /* Allows to reuse of local address. */ + ASSERT_EQ(0, setsockopt(sockfd_1, SOL_SOCKET, SO_REUSEADDR, &one, + sizeof(one))); + + /* Binds the socket 1 to address with port[0]. */ + ASSERT_EQ(0, bind_variant(variant, sockfd_1, self, 0)); + + /* Makes connection to socket 1 with port[0]. */ + ASSERT_EQ(0, connect_variant(variant, sockfd_1, self, 0)); + + /* Closes socket 1. */ + ASSERT_EQ(0, close(sockfd_1)); + + /* Creates a socket 2. 
*/ + sockfd_2 = socket_variant(variant, SOCK_STREAM); + ASSERT_LE(0, sockfd_2); + /* Allows to reuse of local address. */ + ASSERT_EQ(0, setsockopt(sockfd_2, SOL_SOCKET, SO_REUSEADDR, &one, + sizeof(one))); + + /* + * Forbids to bind the socket 2 to address with port[1], + * because just one layer has bind() access rule. + */ + ASSERT_EQ(-1, bind_variant(variant, sockfd_1, self, 1)); + ASSERT_EQ(EACCES, errno); + + /* Expands network mask. */ + struct landlock_ruleset_attr ruleset_attr_3 = { + .handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, + }; + + /* Restricts connect() access to port[0]. */ + struct landlock_net_service_attr net_service_4 = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP, + + .port = self->port[0], + }; + + const int ruleset_fd_3 = landlock_create_ruleset( + &ruleset_attr_3, sizeof(ruleset_attr_3), 0); + ASSERT_LE(0, ruleset_fd_3); + + /* Adds rule to port[0] socket. */ + ASSERT_EQ(0, landlock_add_rule(ruleset_fd_3, LANDLOCK_RULE_NET_SERVICE, + &net_service_4, 0)); + + /* Enforces the ruleset. */ + enforce_ruleset(_metadata, ruleset_fd_3); + ASSERT_EQ(0, close(ruleset_fd_3)); + + /* Creates a socket 1. */ + sockfd_1 = socket_variant(variant, SOCK_STREAM); + ASSERT_LE(0, sockfd_1); + /* Allows to reuse of local address. */ + ASSERT_EQ(0, setsockopt(sockfd_1, SOL_SOCKET, SO_REUSEADDR, &one, + sizeof(one))); + + /* Binds the socket 1 to address with port[0]. */ + ASSERT_EQ(0, bind_variant(variant, sockfd_1, self, 0)); + + /* + * Forbids to connect the socket 1 to address with port[0], + * as just one layer has connect() access rule. + */ + ASSERT_EQ(-1, connect_variant(variant, sockfd_1, self, 0)); + ASSERT_EQ(EACCES, errno); + + /* Closes socket 1. */ + ASSERT_EQ(0, close(sockfd_1)); +} + +/* clang-format off */ + +#define ACCESS_LAST LANDLOCK_ACCESS_NET_CONNECT_TCP + +#define ACCESS_ALL ( \ + LANDLOCK_ACCESS_NET_BIND_TCP | \ + LANDLOCK_ACCESS_NET_CONNECT_TCP) + +/* clang-format on */ + +TEST_F(port, inval) +{ + struct landlock_ruleset_attr ruleset_attr = { + .handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP + }; + struct landlock_ruleset_attr ruleset_attr_inval = { + .handled_access_net = landlock_inval_attr + }; + struct landlock_ruleset_attr ruleset_attr_all = { .handled_access_net = + ACCESS_ALL }; + struct landlock_net_service_attr tcp_bind_connect = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, + .port = self->port[0], + }; + struct landlock_net_service_attr tcp_bind_port_zero = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP, + .port = 0, + }; + struct landlock_net_service_attr tcp_denied = { + .allowed_access = 0, + .port = self->port[1], + }; + struct landlock_net_service_attr tcp_bind = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP, + .port = self->port[2], + }; + struct landlock_net_service_attr tcp_all_rules = {}; + __u64 access; + + if (variant->is_sandboxed) { + /* Checks invalid ruleset attribute. */ + const int ruleset_fd_inv = landlock_create_ruleset( + &ruleset_attr_inval, sizeof(ruleset_attr_inval), 0); + ASSERT_EQ(-1, ruleset_fd_inv); + ASSERT_EQ(EINVAL, errno); + + /* Gets ruleset. */ + const int ruleset_fd = landlock_create_ruleset( + &ruleset_attr, sizeof(ruleset_attr), 0); + ASSERT_LE(0, ruleset_fd); + + /* Checks unhandled allowed_access. */ + ASSERT_EQ(-1, landlock_add_rule(ruleset_fd, + LANDLOCK_RULE_NET_SERVICE, + &tcp_bind_connect, 0)); + ASSERT_EQ(EINVAL, errno); + + /* Checks zero port value. 
*/ + ASSERT_EQ(-1, landlock_add_rule(ruleset_fd, + LANDLOCK_RULE_NET_SERVICE, + &tcp_bind_port_zero, 0)); + ASSERT_EQ(EINVAL, errno); + + /* Checks zero access value. */ + ASSERT_EQ(-1, landlock_add_rule(ruleset_fd, + LANDLOCK_RULE_NET_SERVICE, + &tcp_denied, 0)); + ASSERT_EQ(ENOMSG, errno); + + /* Adds with legitimate values. */ + ASSERT_EQ(0, landlock_add_rule(ruleset_fd, + LANDLOCK_RULE_NET_SERVICE, + &tcp_bind, 0)); + + const int ruleset_fd_all = landlock_create_ruleset( + &ruleset_attr_all, sizeof(ruleset_attr_all), 0); + + ASSERT_LE(0, ruleset_fd_all); + + /* Tests access rights for all network rules */ + for (access = 1; access <= ACCESS_LAST; access <<= 1) { + tcp_all_rules.allowed_access = access; + tcp_all_rules.port = self->port[3]; + ASSERT_EQ(0, + landlock_add_rule(ruleset_fd_all, + LANDLOCK_RULE_NET_SERVICE, + &tcp_all_rules, 0)); + } + + /* Enforces the ruleset. */ + enforce_ruleset(_metadata, ruleset_fd); + ASSERT_EQ(0, close(ruleset_fd)); + + enforce_ruleset(_metadata, ruleset_fd_all); + ASSERT_EQ(0, close(ruleset_fd_all)); + } +} + +TEST_F(port, bind_connect_inval_addrlen) +{ + struct landlock_ruleset_attr ruleset_attr = { + .handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, + }; + struct landlock_net_service_attr tcp_bind_connect = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, + + .port = self->port[0], + }; + int sockfd, ruleset_fd, ret; + struct sockaddr_in addr4; + int one = 1; + + addr4.sin_family = AF_INET; + addr4.sin_port = htons(self->port[0]); + addr4.sin_addr.s_addr = htonl(INADDR_ANY); + memset(&addr4.sin_zero, '\0', 8); + + if (variant->is_sandboxed) { + ruleset_fd = landlock_create_ruleset(&ruleset_attr, + sizeof(ruleset_attr), 0); + ASSERT_LE(0, ruleset_fd); + + /* Allows bind/connect actions for socket with self->port[0]. */ + ASSERT_EQ(0, landlock_add_rule(ruleset_fd, + LANDLOCK_RULE_NET_SERVICE, + &tcp_bind_connect, 0)); + + /* Enforces the ruleset. */ + enforce_ruleset(_metadata, ruleset_fd); + } + + /* Creates a socket 1. */ + sockfd = socket(AF_INET, SOCK_STREAM | SOCK_CLOEXEC, 0); + ASSERT_LE(0, sockfd); + /* Allows to reuse of local address. */ + ASSERT_EQ(0, setsockopt(sockfd, SOL_SOCKET, SO_REUSEADDR, &one, + sizeof(one))); + + /* Binds the socket to self->port[0] with zero addrlen. */ + ret = bind(sockfd, &addr4, 0); + ASSERT_EQ(-1, ret); + ASSERT_EQ(EINVAL, errno); + + /* Connects the socket to the listening port with zero addrlen. */ + ret = connect(sockfd, &addr4, 0); + ASSERT_EQ(-1, ret); + ASSERT_EQ(EINVAL, errno); + + /* Binds the socket to self->port[0] with correct addrlen. */ + ret = bind(sockfd, &addr4, sizeof(addr4)); + ASSERT_EQ(0, ret); + + /* Connects the socket to the listening port with correct addrlen. 
*/ + ret = connect(sockfd, &addr4, sizeof(addr4)); + ASSERT_EQ(0, ret); + + /* Closes the connection*/ + ASSERT_EQ(0, close(sockfd)); +} + +TEST_F(port, bind_connect_unix_stream_socket) +{ + struct landlock_ruleset_attr ruleset_attr = { + .handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, + }; + struct landlock_net_service_attr tcp_bind_connect = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, + .port = self->port[0], + }; + int sockfd, accept_fd, ruleset_fd, status, ret; + struct sockaddr_un addr_unix; + pid_t child; + + if (variant->is_sandboxed) { + ruleset_fd = landlock_create_ruleset(&ruleset_attr, + sizeof(ruleset_attr), 0); + ASSERT_LE(0, ruleset_fd); + + /* + * Allows connect and bind operations to the port[0] + * socket. + */ + ASSERT_EQ(0, landlock_add_rule(ruleset_fd, + LANDLOCK_RULE_NET_SERVICE, + &tcp_bind_connect, 0)); + + /* Enforces the ruleset. */ + enforce_ruleset(_metadata, ruleset_fd); + } + + /* + * Deletes address full path link from previous server launching + * if was any. + */ + unlink(unix_address_path); + + /* Creates a server stream unix socket. */ + sockfd = socket(AF_UNIX, SOCK_STREAM, 0); + ASSERT_LE(0, sockfd); + + /* Sets unix socket address parameters */ + memset(&addr_unix, 0, sizeof(addr_unix)); + addr_unix.sun_family = AF_UNIX; + strcpy(addr_unix.sun_path, unix_address_path); + + /* Binds the socket to unix address */ + ret = bind(sockfd, (struct sockaddr *)&addr_unix, SUN_LEN(&addr_unix)); + ASSERT_EQ(0, ret); + + /* Makes listening socket. */ + ret = listen(sockfd, backlog); + ASSERT_EQ(0, ret); + + child = fork(); + ASSERT_LE(0, child); + if (child == 0) { + int child_sockfd, ret; + struct sockaddr_un connect_addr; + + /* Closes listening socket for the child. */ + ASSERT_EQ(0, close(sockfd)); + + /* Creates a client stream unix socket. */ + child_sockfd = socket(AF_UNIX, SOCK_STREAM, 0); + ASSERT_LE(0, child_sockfd); + + /* Sets unix socket address parameters */ + memset(&connect_addr, 0, sizeof(connect_addr)); + connect_addr.sun_family = AF_UNIX; + strcpy(connect_addr.sun_path, unix_address_path); + + /* Makes connection to the listening unix socket. */ + ret = connect(child_sockfd, (struct sockaddr *)&connect_addr, + SUN_LEN(&connect_addr)); + ASSERT_EQ(0, ret); + + _exit(_metadata->passed ? EXIT_SUCCESS : EXIT_FAILURE); + return; + } + /* Accepts connection from the child. */ + accept_fd = accept(sockfd, NULL, 0); + ASSERT_LE(0, accept_fd); + + /* Closes connection. */ + ASSERT_EQ(0, close(accept_fd)); + + /* Closes listening socket for the parent. */ + ASSERT_EQ(0, close(sockfd)); + + /* Deletes address full path link. */ + unlink(unix_address_path); + + ASSERT_EQ(child, waitpid(child, &status, 0)); + ASSERT_EQ(1, WIFEXITED(status)); + ASSERT_EQ(EXIT_SUCCESS, WEXITSTATUS(status)); +} + +TEST_F(port, bind_connect_unix_dgram_socket) +{ + struct landlock_ruleset_attr ruleset_attr = { + .handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, + }; + struct landlock_net_service_attr tcp_bind_connect = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, + .port = self->port[0], + }; + int sockfd, ruleset_fd, status, ret; + struct sockaddr_un addr_unix; + pid_t child; + + if (variant->is_sandboxed) { + ruleset_fd = landlock_create_ruleset(&ruleset_attr, + sizeof(ruleset_attr), 0); + ASSERT_LE(0, ruleset_fd); + + /* + * Allows connect and bind operations to the self->port[0] + * socket. 
+ */ + ASSERT_EQ(0, landlock_add_rule(ruleset_fd, + LANDLOCK_RULE_NET_SERVICE, + &tcp_bind_connect, 0)); + + /* Enforces the ruleset. */ + enforce_ruleset(_metadata, ruleset_fd); + } + + /* + * Deletes address full path link from previous server launching + * if was any. + */ + unlink(unix_address_path); + + /* Creates a server datagram unix socket. */ + sockfd = socket(AF_UNIX, SOCK_DGRAM, 0); + ASSERT_LE(0, sockfd); + + /* Sets unix socket address parameters */ + memset(&addr_unix, 0, sizeof(addr_unix)); + addr_unix.sun_family = AF_UNIX; + strcpy(addr_unix.sun_path, unix_address_path); + + /* Binds the socket to unix address */ + ret = bind(sockfd, (struct sockaddr *)&addr_unix, SUN_LEN(&addr_unix)); + ASSERT_EQ(0, ret); + + child = fork(); + ASSERT_LE(0, child); + if (child == 0) { + int child_sockfd, ret; + struct sockaddr_un connect_addr; + + /* Closes listening socket for the child. */ + ASSERT_EQ(0, close(sockfd)); + + /* Creates a client datagram unix socket. */ + child_sockfd = socket(AF_UNIX, SOCK_DGRAM, 0); + ASSERT_LE(0, child_sockfd); + + /* Sets unix socket address parameters */ + memset(&connect_addr, 0, sizeof(connect_addr)); + connect_addr.sun_family = AF_UNIX; + strcpy(connect_addr.sun_path, unix_address_path); + + /* Makes connection to the server unix socket. */ + ret = connect(child_sockfd, (struct sockaddr *)&connect_addr, + SUN_LEN(&connect_addr)); + ASSERT_EQ(0, ret); + + _exit(_metadata->passed ? EXIT_SUCCESS : EXIT_FAILURE); + return; + } + ASSERT_EQ(child, waitpid(child, &status, 0)); + ASSERT_EQ(1, WIFEXITED(status)); + ASSERT_EQ(EXIT_SUCCESS, WEXITSTATUS(status)); + + /* Closes socket for the parent. */ + ASSERT_EQ(0, close(sockfd)); + + /* Deletes address full path link. */ + unlink(unix_address_path); +} + +TEST_F(inet, inval_port_format) +{ + struct landlock_ruleset_attr ruleset_attr = { + .handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, + }; + struct landlock_net_service_attr net_service_1 = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP, + /* Wrong port format. */ + .port = htons(self->port[0]), + }; + struct landlock_net_service_attr net_service_2 = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP, + /* Correct port format. */ + .port = self->port[1], + }; + struct landlock_net_service_attr net_service_3 = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP, + .port = UINT16_MAX, + }; + struct landlock_net_service_attr net_service_4 = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP, + .port = UINT16_MAX + 1, + }; + struct landlock_net_service_attr net_service_5 = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP, + .port = UINT16_MAX + 2, + }; + struct landlock_net_service_attr net_service_6 = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP, + .port = UINT32_MAX + 1UL, + }; + struct landlock_net_service_attr net_service_7 = { + .allowed_access = LANDLOCK_ACCESS_NET_BIND_TCP, + .port = UINT32_MAX + 2UL, + }; + int sockfd, ruleset_fd, ret; + bool little_endian = false; + unsigned int i = 1; + int one = 1; + char *c; + + if (variant->is_sandboxed) { + ruleset_fd = landlock_create_ruleset(&ruleset_attr, + sizeof(ruleset_attr), 0); + ASSERT_LE(0, ruleset_fd); + + /* Allows bind action for socket with wrong port format. */ + ASSERT_EQ(0, landlock_add_rule(ruleset_fd, + LANDLOCK_RULE_NET_SERVICE, + &net_service_1, 0)); + + /* Allows bind action for socket with correct port format. 
*/ + ASSERT_EQ(0, landlock_add_rule(ruleset_fd, + LANDLOCK_RULE_NET_SERVICE, + &net_service_2, 0)); + + /* Allows bind action for socket with port U16_MAX. */ + ASSERT_EQ(0, landlock_add_rule(ruleset_fd, + LANDLOCK_RULE_NET_SERVICE, + &net_service_3, 0)); + + /* Denies bind action for socket with port U16_MAX + 1. */ + ASSERT_EQ(-1, landlock_add_rule(ruleset_fd, + LANDLOCK_RULE_NET_SERVICE, + &net_service_4, 0)); + ASSERT_EQ(EINVAL, errno); + + /* Denies bind action for socket with port U16_MAX + 2. */ + ASSERT_EQ(-1, landlock_add_rule(ruleset_fd, + LANDLOCK_RULE_NET_SERVICE, + &net_service_5, 0)); + ASSERT_EQ(EINVAL, errno); + + /* Denies bind action for socket with port U32_MAX + 1. */ + ASSERT_EQ(-1, landlock_add_rule(ruleset_fd, + LANDLOCK_RULE_NET_SERVICE, + &net_service_6, 0)); + ASSERT_EQ(EINVAL, errno); + + /* Denies bind action for socket with port U32_MAX + 2. */ + ASSERT_EQ(-1, landlock_add_rule(ruleset_fd, + LANDLOCK_RULE_NET_SERVICE, + &net_service_7, 0)); + ASSERT_EQ(EINVAL, errno); + + /* Enforces the ruleset. */ + enforce_ruleset(_metadata, ruleset_fd); + } + + /* Checks endianness. */ + c = (char *)&i; + if (*c) + little_endian = true; + + /* Creates a socket. */ + sockfd = socket_variant(variant, SOCK_STREAM); + ASSERT_LE(0, sockfd); + /* Allows to reuse of local address. */ + ASSERT_EQ(0, setsockopt(sockfd, SOL_SOCKET, SO_REUSEADDR, &one, + sizeof(one))); + + /* Binds the socket to port[0] with wrong format . */ + ret = bind_variant(variant, sockfd, self, 0); + if (variant->is_sandboxed) { + if (little_endian) { + ASSERT_EQ(-1, ret); + ASSERT_EQ(EACCES, errno); + } else { + /* No error for big-endinan cpu by default. */ + ASSERT_EQ(0, ret); + } + } else { + ASSERT_EQ(0, ret); + } + + /* Closes the connection*/ + ASSERT_EQ(0, close(sockfd)); + + sockfd = socket_variant(variant, SOCK_STREAM); + ASSERT_LE(0, sockfd); + /* Allows to reuse of local address. */ + ASSERT_EQ(0, setsockopt(sockfd, SOL_SOCKET, SO_REUSEADDR, &one, + sizeof(one))); + + /* Binds the socket to port[1] with correct format. */ + ret = bind_variant(variant, sockfd, self, 1); + if (variant->is_sandboxed) { + if (little_endian) { + ASSERT_EQ(0, ret); + } else { + /* No error for big-endinan cpu by default. */ + ASSERT_EQ(0, ret); + } + } else { + ASSERT_EQ(0, ret); + } + + /* Closes the connection*/ + ASSERT_EQ(0, close(sockfd)); + + if (variant->is_ipv4) + self->addr4[0].sin_port = htons(UINT16_MAX); + else + self->addr6[0].sin6_port = htons(UINT16_MAX); + + /* Creates a socket. */ + sockfd = socket_variant(variant, SOCK_STREAM); + ASSERT_LE(0, sockfd); + /* Allows to reuse of local address. */ + ASSERT_EQ(0, setsockopt(sockfd, SOL_SOCKET, SO_REUSEADDR, &one, + sizeof(one))); + + /* Binds the socket to port[0] UINT16_MAX. 
*/ + ret = bind_variant(variant, sockfd, self, 0); + ASSERT_EQ(0, ret); + + /* Closes the connection*/ + ASSERT_EQ(0, close(sockfd)); +} + +TEST_HARNESS_MAIN From patchwork Mon May 15 16:13:38 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Konstantin Meskhidze (A)" X-Patchwork-Id: 13241778 Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 58871101D1 for ; Mon, 15 May 2023 16:14:36 +0000 (UTC) Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 58BB730FB; Mon, 15 May 2023 09:14:30 -0700 (PDT) Received: from lhrpeml500004.china.huawei.com (unknown [172.18.147.200]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4QKkt22Ytwz67cSL; Tue, 16 May 2023 00:12:42 +0800 (CST) Received: from mscphis00759.huawei.com (10.123.66.134) by lhrpeml500004.china.huawei.com (7.191.163.9) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.23; Mon, 15 May 2023 17:14:27 +0100 From: Konstantin Meskhidze To: CC: , , , , , , Subject: [PATCH v11 11/12] samples/landlock: Add network demo Date: Tue, 16 May 2023 00:13:38 +0800 Message-ID: <20230515161339.631577-12-konstantin.meskhidze@huawei.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230515161339.631577-1-konstantin.meskhidze@huawei.com> References: <20230515161339.631577-1-konstantin.meskhidze@huawei.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Originating-IP: [10.123.66.134] X-ClientProxiedBy: mscpeml500002.china.huawei.com (7.188.26.138) To lhrpeml500004.china.huawei.com (7.191.163.9) X-CFilter-Loop: Reflected X-Spam-Status: No, score=-4.2 required=5.0 tests=BAYES_00,RCVD_IN_DNSWL_MED, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net This commit adds network demo. It's possible to allow a sandboxer to bind/connect to a list of particular ports restricting network actions to the rest of ports. Signed-off-by: Konstantin Meskhidze --- Changes since v10: * Refactors populate_ruleset_net() helper. * Code style minor fix. Changes since v9: * Deletes ports converting. * Minor fixes. Changes since v8: * Convert ports to __be16. * Minor fixes. Changes since v7: * Removes network support if ABI < 4. * Removes network support if not set by a user. Changes since v6: * Removes network support if ABI < 3. Changes since v5: * Makes network ports sandboxing optional. * Fixes some logic errors. * Formats code with clang-format-14. Changes since v4: * Adds ENV_TCP_BIND_NAME "LL_TCP_BIND" and ENV_TCP_CONNECT_NAME "LL_TCP_CONNECT" variables to insert TCP ports. * Renames populate_ruleset() to populate_ruleset_fs(). * Adds populate_ruleset_net() and parse_port_num() helpers. * Refactors main() to support network sandboxing. 
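
As a usage illustration of the interface described above, the small launcher below sets the new LL_TCP_BIND / LL_TCP_CONNECT variables next to the existing LL_FS_RO / LL_FS_RW path lists and then executes the sandboxer. The path lists, port numbers, and the ./sandboxer binary location are assumptions for the example, not part of the patch.

#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	/* Read-only and read-write path lists, as before this patch. */
	setenv("LL_FS_RO", "/bin:/lib:/usr:/proc:/etc:/dev/urandom", 1);
	setenv("LL_FS_RW", "/dev/null:/dev/full:/dev/zero:/dev/pts:/tmp", 1);
	/* New: allow binding to 9418 and connecting to 80 and 443 only. */
	setenv("LL_TCP_BIND", "9418", 1);
	setenv("LL_TCP_CONNECT", "80:443", 1);

	execlp("./sandboxer", "sandboxer", "bash", "-i", (char *)NULL);
	return 1; /* Only reached if exec failed. */
}
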
--- samples/landlock/sandboxer.c | 128 +++++++++++++++++++++++++++++++---- 1 file changed, 116 insertions(+), 12 deletions(-) -- 2.25.1 diff --git a/samples/landlock/sandboxer.c b/samples/landlock/sandboxer.c index e2056c8b902c..b0250edb6ccb 100644 --- a/samples/landlock/sandboxer.c +++ b/samples/landlock/sandboxer.c @@ -8,6 +8,7 @@ */ #define _GNU_SOURCE +#include #include #include #include @@ -51,6 +52,8 @@ static inline int landlock_restrict_self(const int ruleset_fd, #define ENV_FS_RO_NAME "LL_FS_RO" #define ENV_FS_RW_NAME "LL_FS_RW" +#define ENV_TCP_BIND_NAME "LL_TCP_BIND" +#define ENV_TCP_CONNECT_NAME "LL_TCP_CONNECT" #define ENV_PATH_TOKEN ":" static int parse_path(char *env_path, const char ***const path_list) @@ -71,6 +74,20 @@ static int parse_path(char *env_path, const char ***const path_list) return num_paths; } +static int parse_port_num(char *env_port) +{ + int i, num_ports = 0; + + if (env_port) { + num_ports++; + for (i = 0; env_port[i]; i++) { + if (env_port[i] == ENV_PATH_TOKEN[0]) + num_ports++; + } + } + return num_ports; +} + /* clang-format off */ #define ACCESS_FILE ( \ @@ -81,8 +98,8 @@ static int parse_path(char *env_path, const char ***const path_list) /* clang-format on */ -static int populate_ruleset(const char *const env_var, const int ruleset_fd, - const __u64 allowed_access) +static int populate_ruleset_fs(const char *const env_var, const int ruleset_fd, + const __u64 allowed_access) { int num_paths, i, ret = 1; char *env_path_name; @@ -143,6 +160,45 @@ static int populate_ruleset(const char *const env_var, const int ruleset_fd, return ret; } +static int populate_ruleset_net(const char *const env_var, const int ruleset_fd, + const __u64 allowed_access) +{ + int num_ports, i, ret = 1; + char *env_port_name; + struct landlock_net_service_attr net_service = { + .allowed_access = allowed_access, + .port = 0, + }; + + env_port_name = getenv(env_var); + if (!env_port_name) + return 0; + env_port_name = strdup(env_port_name); + unsetenv(env_var); + num_ports = parse_port_num(env_port_name); + + if (num_ports == 1 && (strtok(env_port_name, ENV_PATH_TOKEN) == NULL)) { + ret = 0; + goto out_free_name; + } + + for (i = 0; i < num_ports; i++) { + net_service.port = atoi(strsep(&env_port_name, ENV_PATH_TOKEN)); + if (landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_SERVICE, + &net_service, 0)) { + fprintf(stderr, + "Failed to update the ruleset with port \"%lld\": %s\n", + net_service.port, strerror(errno)); + goto out_free_name; + } + } + ret = 0; + +out_free_name: + free(env_port_name); + return ret; +} + /* clang-format off */ #define ACCESS_FS_ROUGHLY_READ ( \ @@ -166,39 +222,58 @@ static int populate_ruleset(const char *const env_var, const int ruleset_fd, /* clang-format on */ -#define LANDLOCK_ABI_LAST 3 +#define LANDLOCK_ABI_LAST 4 int main(const int argc, char *const argv[], char *const *const envp) { const char *cmd_path; char *const *cmd_argv; int ruleset_fd, abi; + char *env_port_name; __u64 access_fs_ro = ACCESS_FS_ROUGHLY_READ, access_fs_rw = ACCESS_FS_ROUGHLY_READ | ACCESS_FS_ROUGHLY_WRITE; + struct landlock_ruleset_attr ruleset_attr = { .handled_access_fs = access_fs_rw, + .handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, }; if (argc < 2) { fprintf(stderr, - "usage: %s=\"...\" %s=\"...\" %s [args]...\n\n", - ENV_FS_RO_NAME, ENV_FS_RW_NAME, argv[0]); + "usage: %s=\"...\" %s=\"...\" %s=\"...\" %s=\"...\"%s " + " [args]...\n\n", + ENV_FS_RO_NAME, ENV_FS_RW_NAME, ENV_TCP_BIND_NAME, + ENV_TCP_CONNECT_NAME, argv[0]); 
fprintf(stderr, "Launch a command in a restricted environment.\n\n"); - fprintf(stderr, "Environment variables containing paths, " - "each separated by a colon:\n"); + fprintf(stderr, + "Environment variables containing paths and ports " + "each separated by a colon:\n"); fprintf(stderr, "* %s: list of paths allowed to be used in a read-only way.\n", ENV_FS_RO_NAME); fprintf(stderr, - "* %s: list of paths allowed to be used in a read-write way.\n", + "* %s: list of paths allowed to be used in a read-write way.\n\n", ENV_FS_RW_NAME); + fprintf(stderr, + "Environment variables containing ports are optional " + "and could be skipped.\n"); + fprintf(stderr, + "* %s: list of ports allowed to bind (server).\n", + ENV_TCP_BIND_NAME); + fprintf(stderr, + "* %s: list of ports allowed to connect (client).\n", + ENV_TCP_CONNECT_NAME); fprintf(stderr, "\nexample:\n" "%s=\"/bin:/lib:/usr:/proc:/etc:/dev/urandom\" " "%s=\"/dev/null:/dev/full:/dev/zero:/dev/pts:/tmp\" " + "%s=\"9418\" " + "%s=\"80:443\" " "%s bash -i\n\n", - ENV_FS_RO_NAME, ENV_FS_RW_NAME, argv[0]); + ENV_FS_RO_NAME, ENV_FS_RW_NAME, ENV_TCP_BIND_NAME, + ENV_TCP_CONNECT_NAME, argv[0]); fprintf(stderr, "This sandboxer can use Landlock features " "up to ABI version %d.\n", @@ -255,7 +330,12 @@ int main(const int argc, char *const argv[], char *const *const envp) case 2: /* Removes LANDLOCK_ACCESS_FS_TRUNCATE for ABI < 3 */ ruleset_attr.handled_access_fs &= ~LANDLOCK_ACCESS_FS_TRUNCATE; - + __attribute__((fallthrough)); + case 3: + /* Removes network support for ABI < 4 */ + ruleset_attr.handled_access_net &= + ~(LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP); fprintf(stderr, "Hint: You should update the running kernel " "to leverage Landlock features " @@ -274,18 +354,42 @@ int main(const int argc, char *const argv[], char *const *const envp) access_fs_ro &= ruleset_attr.handled_access_fs; access_fs_rw &= ruleset_attr.handled_access_fs; + /* Removes bind access attribute if not supported by a user. */ + env_port_name = getenv(ENV_TCP_BIND_NAME); + if (!env_port_name) { + ruleset_attr.handled_access_net &= + ~LANDLOCK_ACCESS_NET_BIND_TCP; + } + /* Removes connect access attribute if not supported by a user. 
*/ + env_port_name = getenv(ENV_TCP_CONNECT_NAME); + if (!env_port_name) { + ruleset_attr.handled_access_net &= + ~LANDLOCK_ACCESS_NET_CONNECT_TCP; + } + ruleset_fd = landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr), 0); if (ruleset_fd < 0) { perror("Failed to create a ruleset"); return 1; } - if (populate_ruleset(ENV_FS_RO_NAME, ruleset_fd, access_fs_ro)) { + + if (populate_ruleset_fs(ENV_FS_RO_NAME, ruleset_fd, access_fs_ro)) { + goto err_close_ruleset; + } + if (populate_ruleset_fs(ENV_FS_RW_NAME, ruleset_fd, access_fs_rw)) { goto err_close_ruleset; } - if (populate_ruleset(ENV_FS_RW_NAME, ruleset_fd, access_fs_rw)) { + + if (populate_ruleset_net(ENV_TCP_BIND_NAME, ruleset_fd, + LANDLOCK_ACCESS_NET_BIND_TCP)) { goto err_close_ruleset; } + if (populate_ruleset_net(ENV_TCP_CONNECT_NAME, ruleset_fd, + LANDLOCK_ACCESS_NET_CONNECT_TCP)) { + goto err_close_ruleset; + } + if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)) { perror("Failed to restrict privileges"); goto err_close_ruleset; From patchwork Mon May 15 16:13:39 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: "Konstantin Meskhidze (A)" X-Patchwork-Id: 13241787 Received: from lindbergh.monkeyblade.net (lindbergh.monkeyblade.net [23.128.96.19]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AAA64FBE7 for ; Mon, 15 May 2023 16:14:49 +0000 (UTC) Received: from frasgout.his.huawei.com (frasgout.his.huawei.com [185.176.79.56]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 573173594; Mon, 15 May 2023 09:14:38 -0700 (PDT) Received: from lhrpeml500004.china.huawei.com (unknown [172.18.147.200]) by frasgout.his.huawei.com (SkyGuard) with ESMTP id 4QKkqS3F2Gz6J74N; Tue, 16 May 2023 00:10:28 +0800 (CST) Received: from mscphis00759.huawei.com (10.123.66.134) by lhrpeml500004.china.huawei.com (7.191.163.9) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.23; Mon, 15 May 2023 17:14:35 +0100 From: Konstantin Meskhidze To: CC: , , , , , , Subject: [PATCH v11 12/12] landlock: Document Landlock's network support Date: Tue, 16 May 2023 00:13:39 +0800 Message-ID: <20230515161339.631577-13-konstantin.meskhidze@huawei.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230515161339.631577-1-konstantin.meskhidze@huawei.com> References: <20230515161339.631577-1-konstantin.meskhidze@huawei.com> Precedence: bulk X-Mailing-List: netdev@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 X-Originating-IP: [10.123.66.134] X-ClientProxiedBy: mscpeml500002.china.huawei.com (7.188.26.138) To lhrpeml500004.china.huawei.com (7.191.163.9) X-CFilter-Loop: Reflected X-Spam-Status: No, score=-4.2 required=5.0 tests=BAYES_00,RCVD_IN_DNSWL_MED, RCVD_IN_MSPIKE_H2,SPF_HELO_NONE,SPF_PASS,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Describe network access rules for TCP sockets. Add network access example in the tutorial. Add kernel configuration support for network. Signed-off-by: Konstantin Meskhidze --- Changes since v10: * Fixes documentaion as Mickaёl suggested: https://lore.kernel.org/linux-security-module/ec23be77-566e-c8fd-179e-f50e025ac2cf@digikod.net/ Changes since v9: * Minor refactoring. Changes since v8: * Minor refactoring. 
Changes since v7: * Fixes documentaion logic errors and typos as Mickaёl suggested: https://lore.kernel.org/netdev/9f354862-2bc3-39ea-92fd-53803d9bbc21@digikod.net/ Changes since v6: * Adds network support documentaion. --- Documentation/userspace-api/landlock.rst | 83 ++++++++++++++++++------ 1 file changed, 62 insertions(+), 21 deletions(-) -- 2.25.1 diff --git a/Documentation/userspace-api/landlock.rst b/Documentation/userspace-api/landlock.rst index f6a7da21708a..f185dbaa726a 100644 --- a/Documentation/userspace-api/landlock.rst +++ b/Documentation/userspace-api/landlock.rst @@ -11,10 +11,10 @@ Landlock: unprivileged access control :Date: October 2022 The goal of Landlock is to enable to restrict ambient rights (e.g. global -filesystem access) for a set of processes. Because Landlock is a stackable -LSM, it makes possible to create safe security sandboxes as new security layers -in addition to the existing system-wide access-controls. This kind of sandbox -is expected to help mitigate the security impact of bugs or +filesystem or network access) for a set of processes. Because Landlock +is a stackable LSM, it makes possible to create safe security sandboxes as new +security layers in addition to the existing system-wide access-controls. This +kind of sandbox is expected to help mitigate the security impact of bugs or unexpected/malicious behaviors in user space applications. Landlock empowers any process, including unprivileged ones, to securely restrict themselves. @@ -28,20 +28,24 @@ appropriately `. Landlock rules ============== -A Landlock rule describes an action on an object. An object is currently a -file hierarchy, and the related filesystem actions are defined with `access -rights`_. A set of rules is aggregated in a ruleset, which can then restrict -the thread enforcing it, and its future children. +A Landlock rule describes an action on a kernel object. Filesystem +objects can be defined with a file hierarchy. Since the fourth ABI +version, TCP ports enable to identify inbound or outbound connections. +Actions on these kernel objects are defined according to `access +rights`_. A set of rules is aggregated in a ruleset, which +can then restrict the thread enforcing it, and its future children. Defining and enforcing a security policy ---------------------------------------- We first need to define the ruleset that will contain our rules. For this -example, the ruleset will contain rules that only allow read actions, but write -actions will be denied. The ruleset then needs to handle both of these kind of -actions. This is required for backward and forward compatibility (i.e. the -kernel and user space may not know each other's supported restrictions), hence -the need to be explicit about the denied-by-default access rights. +example, the ruleset will contain rules that only allow filesystem read actions +and establish a specific TCP connection, but filesystem write actions +and other TCP actions will be denied. The ruleset then needs to handle both of +these kind of actions. This is required for backward and forward compatibility +(i.e. the kernel and user space may not know each other's supported +restrictions), hence the need to be explicit about the denied-by-default access +rights. .. code-block:: c @@ -62,6 +66,9 @@ the need to be explicit about the denied-by-default access rights. 
LANDLOCK_ACCESS_FS_MAKE_SYM | LANDLOCK_ACCESS_FS_REFER | LANDLOCK_ACCESS_FS_TRUNCATE, + .handled_access_net = + LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP, }; Because we may not know on which kernel version an application will be @@ -70,14 +77,18 @@ should try to protect users as much as possible whatever the kernel they are using. To avoid binary enforcement (i.e. either all security features or none), we can leverage a dedicated Landlock command to get the current version of the Landlock ABI and adapt the handled accesses. Let's check if we should -remove the ``LANDLOCK_ACCESS_FS_REFER`` or ``LANDLOCK_ACCESS_FS_TRUNCATE`` -access rights, which are only supported starting with the second and third -version of the ABI. +remove the ``LANDLOCK_ACCESS_FS_REFER`` or ``LANDLOCK_ACCESS_FS_TRUNCATE`` or +network access rights, which are only supported starting with the second, +third and fourth versions of the ABI. .. code-block:: c int abi; + #define ACCESS_NET_BIND_CONNECT ( \ + LANDLOCK_ACCESS_NET_BIND_TCP | \ + LANDLOCK_ACCESS_NET_CONNECT_TCP) + abi = landlock_create_ruleset(NULL, 0, LANDLOCK_CREATE_RULESET_VERSION); if (abi < 0) { /* Degrades gracefully if Landlock is not handled. */ @@ -92,6 +103,11 @@ version of the ABI. case 2: /* Removes LANDLOCK_ACCESS_FS_TRUNCATE for ABI < 3 */ ruleset_attr.handled_access_fs &= ~LANDLOCK_ACCESS_FS_TRUNCATE; + case 3: + /* Removes network support for ABI < 4 */ + ruleset_attr.handled_access_net &= + ~(LANDLOCK_ACCESS_NET_BIND_TCP | + LANDLOCK_ACCESS_NET_CONNECT_TCP); } This enables to create an inclusive ruleset that will contain our rules. @@ -143,10 +159,23 @@ for the ruleset creation, by filtering access rights according to the Landlock ABI version. In this example, this is not required because all of the requested ``allowed_access`` rights are already available in ABI 1. -We now have a ruleset with one rule allowing read access to ``/usr`` while -denying all other handled accesses for the filesystem. The next step is to -restrict the current thread from gaining more privileges (e.g. thanks to a SUID -binary). +For network access-control, we can add a set of rules that allow the use of a port +number for a specific action: HTTPS connections. + +.. code-block:: c + + struct landlock_net_service_attr net_service = { + .allowed_access = LANDLOCK_ACCESS_NET_CONNECT_TCP, + .port = 443, + }; + + err = landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_SERVICE, + &net_service, 0); + +We now have a ruleset with the first rule allowing read access to ``/usr`` while +denying all other handled accesses for the filesystem, and a second rule allowing +HTTPS connections. The next step is to restrict the current thread from gaining +more privileges (e.g. through a SUID binary). .. code-block:: c @@ -355,7 +384,7 @@ Access rights ------------- .. kernel-doc:: include/uapi/linux/landlock.h - :identifiers: fs_access + :identifiers: fs_access net_access Creating a new ruleset ---------------------- @@ -374,6 +403,7 @@ Extending a ruleset .. kernel-doc:: include/uapi/linux/landlock.h :identifiers: landlock_rule_type landlock_path_beneath_attr + landlock_net_service_attr Enforcing a ruleset ------------------- @@ -451,6 +481,12 @@ always allowed when using a kernel that only supports the first or second ABI. Starting with the Landlock ABI version 3, it is now possible to securely control truncation thanks to the new ``LANDLOCK_ACCESS_FS_TRUNCATE`` access right. 
+Network support (ABI < 4) +------------------------- + +Starting with the Landlock ABI version 4, it is now possible to restrict TCP +bind and connect actions to only a set of allowed ports. + .. _kernel_support: Kernel support ============== @@ -469,6 +505,11 @@ still enable it by adding ``lsm=landlock,[...]`` to Documentation/admin-guide/kernel-parameters.rst thanks to the bootloader configuration. +To be able to explicitly allow TCP operations (e.g., adding a network rule with +``LANDLOCK_ACCESS_NET_BIND_TCP``), the kernel must support TCP (``CONFIG_INET=y``). +Otherwise, sys_landlock_add_rule() returns an ``EAFNOSUPPORT`` error, which can +safely be ignored because this kind of TCP operation is already impossible. + Questions and answers =====================
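[Editor's sketch, not part of the patch] To tie the documented steps together (ABI probing, creating a ruleset that handles TCP bind/connect, adding a single port rule, then enforcement), here is a minimal, self-contained C program. It is an illustration written against the UAPI proposed by this series: struct landlock_net_service_attr, LANDLOCK_RULE_NET_SERVICE and the LANDLOCK_ACCESS_NET_* rights are assumed to be present in <linux/landlock.h>, which is only true with these patches applied. It defines small syscall() wrappers, guarded with #ifndef in case a libc ever provides them.

#define _GNU_SOURCE
#include <linux/landlock.h>
#include <linux/prctl.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef landlock_create_ruleset
static inline int
landlock_create_ruleset(const struct landlock_ruleset_attr *const attr,
			const size_t size, const __u32 flags)
{
	return syscall(__NR_landlock_create_ruleset, attr, size, flags);
}
#endif

#ifndef landlock_add_rule
static inline int landlock_add_rule(const int ruleset_fd,
				    const enum landlock_rule_type rule_type,
				    const void *const rule_attr,
				    const __u32 flags)
{
	return syscall(__NR_landlock_add_rule, ruleset_fd, rule_type,
		       rule_attr, flags);
}
#endif

#ifndef landlock_restrict_self
static inline int landlock_restrict_self(const int ruleset_fd,
					 const __u32 flags)
{
	return syscall(__NR_landlock_restrict_self, ruleset_fd, flags);
}
#endif

int main(void)
{
	int abi, ruleset_fd;
	/* Handles both TCP actions so that anything not allowed is denied. */
	struct landlock_ruleset_attr ruleset_attr = {
		.handled_access_net = LANDLOCK_ACCESS_NET_BIND_TCP |
				      LANDLOCK_ACCESS_NET_CONNECT_TCP,
	};
	/* Single rule: allow outbound HTTPS connections (TCP port 443). */
	struct landlock_net_service_attr net_service = {
		.allowed_access = LANDLOCK_ACCESS_NET_CONNECT_TCP,
		.port = 443,
	};

	/* Network rules require at least the fourth Landlock ABI version. */
	abi = landlock_create_ruleset(NULL, 0, LANDLOCK_CREATE_RULESET_VERSION);
	if (abi < 4) {
		fprintf(stderr, "No Landlock network support, running unsandboxed\n");
		return 0;
	}

	ruleset_fd = landlock_create_ruleset(&ruleset_attr,
					     sizeof(ruleset_attr), 0);
	if (ruleset_fd < 0) {
		perror("Failed to create a ruleset");
		return 1;
	}

	if (landlock_add_rule(ruleset_fd, LANDLOCK_RULE_NET_SERVICE,
			      &net_service, 0)) {
		/* An EAFNOSUPPORT error (kernel without CONFIG_INET) could be ignored. */
		perror("Failed to add a network rule");
		goto err_close_ruleset;
	}

	if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)) {
		perror("Failed to restrict privileges");
		goto err_close_ruleset;
	}
	if (landlock_restrict_self(ruleset_fd, 0)) {
		perror("Failed to enforce ruleset");
		goto err_close_ruleset;
	}
	close(ruleset_fd);

	/* TCP bind is now denied; only TCP connections to port 443 succeed. */
	return 0;

err_close_ruleset:
	close(ruleset_fd);
	return 1;
}

Filesystem accesses are deliberately left untouched here because handled_access_fs is not set; a real sandboxer would normally combine this with the filesystem rules shown earlier in the series.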