Message ID | 20230420080359.2551150-7-cem@kernel.org (mailing list archive)
---|---
State | New
Series | shmem: Add user and group quota support for tmpfs
On Thu 20-04-23 10:03:59, cem@kernel.org wrote:
> From: Lukas Czerner <lczerner@redhat.com>
>
> Allow system administrator to set default global quota limits at tmpfs
> mount time.
>
> Signed-off-by: Lukas Czerner <lczerner@redhat.com>
> Signed-off-by: Carlos Maiolino <cmaiolino@redhat.com>
> ---
>  Documentation/filesystems/tmpfs.rst | 34 ++++++++++----
>  include/linux/shmem_fs.h            |  8 ++++
>  mm/shmem.c                          | 69 +++++++++++++++++++++++++++++
>  mm/shmem_quota.c                    |  9 ++++
>  4 files changed, 111 insertions(+), 9 deletions(-)
>
> diff --git a/Documentation/filesystems/tmpfs.rst b/Documentation/filesystems/tmpfs.rst
> index 1d4ef4f7cca7e..241c11f86cd73 100644
> --- a/Documentation/filesystems/tmpfs.rst
> +++ b/Documentation/filesystems/tmpfs.rst
> @@ -88,15 +88,31 @@ that instance in a system with many CPUs making intensive use of it.
>
>  tmpfs also supports quota with the following mount options
>
> -======== =============================================================
> -quota    User and group quota accounting and enforcement is enabled on
> -         the mount. Tmpfs is using hidden system quota files that are
> -         initialized on mount.
> -usrquota User quota accounting and enforcement is enabled on the
> -         mount.
> -grpquota Group quota accounting and enforcement is enabled on the
> -         mount.
> -======== =============================================================
> +======================== =================================================
> +quota                    User and group quota accounting and enforcement
> +                         is enabled on the mount. Tmpfs is using hidden
> +                         system quota files that are initialized on mount.
> +usrquota                 User quota accounting and enforcement is enabled
> +                         on the mount.
> +grpquota                 Group quota accounting and enforcement is enabled
> +                         on the mount.
> +usrquota_block_hardlimit Set global user quota block hard limit.
> +usrquota_inode_hardlimit Set global user quota inode hard limit.
> +grpquota_block_hardlimit Set global group quota block hard limit.
> +grpquota_inode_hardlimit Set global group quota inode hard limit.
> +======================== =================================================
> +
> +None of the quota related mount options can be set or changed on remount.
> +
> +Quota limit parameters accept a suffix k, m or g for kilo, mega and giga
> +and can't be changed on remount. Default global quota limits are taking
> +effect for any and all user/group/project except root the first time the
> +quota entry for user/group/project id is being accessed - typically the
> +first time an inode with a particular id ownership is being created after
> +the mount. In other words, instead of the limits being initialized to zero,
> +they are initialized with the particular value provided with these mount
> +options. The limits can be changed for any user/group id at any time as it
                                                                          ^^ they
> +normally can.
               ^^^ can be

> @@ -3714,6 +3723,50 @@ static int shmem_parse_one(struct fs_context *fc, struct fs_parameter *param)
>  		ctx->seen |= SHMEM_SEEN_QUOTA;
>  		ctx->quota_types |= QTYPE_MASK_GRP;
>  		break;
> +	case Opt_usrquota_block_hardlimit:
> +		size = memparse(param->string, &rest);
> +		if (*rest || !size)
> +			goto bad_value;
> +		if (size > SHMEM_QUOTA_MAX_SPC_LIMIT)
> +			return invalfc(fc,
> +				       "User quota block hardlimit too large.");
> +		ctx->qlimits.usrquota_bhardlimit = size;
> +		ctx->seen |= SHMEM_SEEN_QUOTA;
> +		ctx->quota_types |= QTYPE_MASK_USR;

So if I get it right, the intention here is that if
usrquota_block_hardlimit=value option is used, it automatically enables
user quota accounting and enforcement. I guess it is logical but it is not
documented and I'd prefer to require explicit usrquota mount option to
enable accounting & enforcement - it is then e.g. easier to parse mount
options (in userspace) for finding out whether enforcement is enabled or
not. Also I can imagine we would allow changing the default limits on
remount but it isn't easy to enable quota accounting on remount etc.

> diff --git a/mm/shmem_quota.c b/mm/shmem_quota.c
> index c0b531e2ef688..3cc53f2c35e2c 100644
> --- a/mm/shmem_quota.c
> +++ b/mm/shmem_quota.c
> @@ -166,6 +166,7 @@ static int shmem_acquire_dquot(struct dquot *dquot)
>  {
>  	struct mem_dqinfo *info = sb_dqinfo(dquot->dq_sb, dquot->dq_id.type);
>  	struct rb_node **n = &((struct rb_root *)info->dqi_priv)->rb_node;
> +	struct shmem_sb_info *sbinfo = dquot->dq_sb->s_fs_info;
>  	struct rb_node *parent = NULL, *new_node = NULL;
>  	struct quota_id *new_entry, *entry;
>  	qid_t id = from_kqid(&init_user_ns, dquot->dq_id);
> @@ -195,6 +196,14 @@ static int shmem_acquire_dquot(struct dquot *dquot)
>  	}
>
>  	new_entry->id = id;
> +	if (dquot->dq_id.type == USRQUOTA) {
> +		new_entry->bhardlimit = sbinfo->qlimits.usrquota_bhardlimit;
> +		new_entry->ihardlimit = sbinfo->qlimits.usrquota_ihardlimit;
> +	} else if (dquot->dq_id.type == GRPQUOTA) {
> +		new_entry->bhardlimit = sbinfo->qlimits.grpquota_bhardlimit;
> +		new_entry->ihardlimit = sbinfo->qlimits.grpquota_ihardlimit;
> +	}
> +
>  	new_node = &new_entry->node;
>  	rb_link_node(new_node, parent, n);
>  	rb_insert_color(new_node, (struct rb_root *)info->dqi_priv);

Maybe in shmem_dquot_release() when usage is 0 and limits are at default
limits, we can free the structure?

								Honza
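The option parsing discussed above relies on memparse(), which (per the quoted documentation) accepts a k, m or g suffix for kilo, mega and giga. A rough Python model of that suffix handling, for readers following the review; `parse_limit` is a hypothetical name used here for illustration, not a kernel function:

```python
def parse_limit(s: str) -> int:
    """Model of memparse()-style parsing for tmpfs quota limit options.

    Accepts a non-negative integer with an optional k/m/g suffix
    (binary multiples), mirroring the suffixes documented for
    usrquota_block_hardlimit and friends.
    """
    multipliers = {"k": 1 << 10, "m": 1 << 20, "g": 1 << 30}
    s = s.strip().lower()
    if s and s[-1] in multipliers:
        # Split off the suffix and scale the numeric part.
        return int(s[:-1]) * multipliers[s[-1]]
    return int(s)
```

Note that, as in the quoted `Opt_usrquota_block_hardlimit` case, the kernel additionally rejects a zero value and anything above SHMEM_QUOTA_MAX_SPC_LIMIT / SHMEM_QUOTA_MAX_INO_LIMIT.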
On Thu, Apr 20, 2023 at 04:39:54PM +0200, Jan Kara wrote:
> On Thu 20-04-23 10:03:59, cem@kernel.org wrote:
> > From: Lukas Czerner <lczerner@redhat.com>
> >
> > Allow system administrator to set default global quota limits at tmpfs
> > mount time.
> >
> > Signed-off-by: Lukas Czerner <lczerner@redhat.com>
> > Signed-off-by: Carlos Maiolino <cmaiolino@redhat.com>
> > ---
> >  Documentation/filesystems/tmpfs.rst | 34 ++++++++++----
> >  include/linux/shmem_fs.h            |  8 ++++
> >  mm/shmem.c                          | 69 +++++++++++++++++++++++++++++
> >  mm/shmem_quota.c                    |  9 ++++
> >  4 files changed, 111 insertions(+), 9 deletions(-)
> >
> > diff --git a/Documentation/filesystems/tmpfs.rst b/Documentation/filesystems/tmpfs.rst
> > index 1d4ef4f7cca7e..241c11f86cd73 100644
> > --- a/Documentation/filesystems/tmpfs.rst
> > +++ b/Documentation/filesystems/tmpfs.rst
> > @@ -88,15 +88,31 @@ that instance in a system with many CPUs making intensive use of it.
> >
> >  tmpfs also supports quota with the following mount options
> >
> > -======== =============================================================
> > -quota    User and group quota accounting and enforcement is enabled on
> > -         the mount. Tmpfs is using hidden system quota files that are
> > -         initialized on mount.
> > -usrquota User quota accounting and enforcement is enabled on the
> > -         mount.
> > -grpquota Group quota accounting and enforcement is enabled on the
> > -         mount.
> > -======== =============================================================
> > +======================== =================================================
> > +quota                    User and group quota accounting and enforcement
> > +                         is enabled on the mount. Tmpfs is using hidden
> > +                         system quota files that are initialized on mount.
> > +usrquota                 User quota accounting and enforcement is enabled
> > +                         on the mount.
> > +grpquota                 Group quota accounting and enforcement is enabled
> > +                         on the mount.
> > +usrquota_block_hardlimit Set global user quota block hard limit.
> > +usrquota_inode_hardlimit Set global user quota inode hard limit.
> > +grpquota_block_hardlimit Set global group quota block hard limit.
> > +grpquota_inode_hardlimit Set global group quota inode hard limit.
> > +======================== =================================================
> > +
> > +None of the quota related mount options can be set or changed on remount.
> > +
> > +Quota limit parameters accept a suffix k, m or g for kilo, mega and giga
> > +and can't be changed on remount. Default global quota limits are taking
> > +effect for any and all user/group/project except root the first time the
> > +quota entry for user/group/project id is being accessed - typically the
> > +first time an inode with a particular id ownership is being created after
> > +the mount. In other words, instead of the limits being initialized to zero,
> > +they are initialized with the particular value provided with these mount
> > +options. The limits can be changed for any user/group id at any time as it
>                                                                           ^^ they
> > +normally can.
>                ^^^ can be
>

Thanks!

> > @@ -3714,6 +3723,50 @@ static int shmem_parse_one(struct fs_context *fc, struct fs_parameter *param)
> >  		ctx->seen |= SHMEM_SEEN_QUOTA;
> >  		ctx->quota_types |= QTYPE_MASK_GRP;
> >  		break;
> > +	case Opt_usrquota_block_hardlimit:
> > +		size = memparse(param->string, &rest);
> > +		if (*rest || !size)
> > +			goto bad_value;
> > +		if (size > SHMEM_QUOTA_MAX_SPC_LIMIT)
> > +			return invalfc(fc,
> > +				       "User quota block hardlimit too large.");
> > +		ctx->qlimits.usrquota_bhardlimit = size;
> > +		ctx->seen |= SHMEM_SEEN_QUOTA;
> > +		ctx->quota_types |= QTYPE_MASK_USR;
>
> So if I get it right, the intention here is that if
> usrquota_block_hardlimit=value option is used, it automatically enables
> user quota accounting and enforcement. I guess it is logical but it is not
> documented and I'd prefer to require explicit usrquota mount option to
> enable accounting & enforcement - it is then e.g. easier to parse mount
> options (in userspace) for finding out whether enforcement is enabled or
> not.

Hmmm, I think I see what you mean. I can make usrquota_block_hardlimit options
to not actually set the quota flag on quota_types, so this should be explicitly
set by usrquota/grpquota options. Does that work for you?

> Also I can imagine we would allow changing the default limits on
> remount but it isn't easy to enable quota accounting on remount etc.
>

hmm, yes, maybe enabling default limits to be changed on remount isn't a big
deal, once the quota is already enabled, so everything is already in place.

> > diff --git a/mm/shmem_quota.c b/mm/shmem_quota.c
> > index c0b531e2ef688..3cc53f2c35e2c 100644
> > --- a/mm/shmem_quota.c
> > +++ b/mm/shmem_quota.c
> > @@ -166,6 +166,7 @@ static int shmem_acquire_dquot(struct dquot *dquot)
> >  {
> >  	struct mem_dqinfo *info = sb_dqinfo(dquot->dq_sb, dquot->dq_id.type);
> >  	struct rb_node **n = &((struct rb_root *)info->dqi_priv)->rb_node;
> > +	struct shmem_sb_info *sbinfo = dquot->dq_sb->s_fs_info;
> >  	struct rb_node *parent = NULL, *new_node = NULL;
> >  	struct quota_id *new_entry, *entry;
> >  	qid_t id = from_kqid(&init_user_ns, dquot->dq_id);
> > @@ -195,6 +196,14 @@ static int shmem_acquire_dquot(struct dquot *dquot)
> >  	}
> >
> >  	new_entry->id = id;
> > +	if (dquot->dq_id.type == USRQUOTA) {
> > +		new_entry->bhardlimit = sbinfo->qlimits.usrquota_bhardlimit;
> > +		new_entry->ihardlimit = sbinfo->qlimits.usrquota_ihardlimit;
> > +	} else if (dquot->dq_id.type == GRPQUOTA) {
> > +		new_entry->bhardlimit = sbinfo->qlimits.grpquota_bhardlimit;
> > +		new_entry->ihardlimit = sbinfo->qlimits.grpquota_ihardlimit;
> > +	}
> > +
> >  	new_node = &new_entry->node;
> >  	rb_link_node(new_node, parent, n);
> >  	rb_insert_color(new_node, (struct rb_root *)info->dqi_priv);
>
> Maybe in shmem_dquot_release() when usage is 0 and limits are at default
> limits, we can free the structure?

hmmm, which struct are you talking about? quota_id? As we do for DQ_FAKE?

>
> 								Honza
> --
> Jan Kara <jack@suse.com>
> SUSE Labs, CR
On Fri 21-04-23 12:20:42, Carlos Maiolino wrote:
> On Thu, Apr 20, 2023 at 04:39:54PM +0200, Jan Kara wrote:
> > On Thu 20-04-23 10:03:59, cem@kernel.org wrote:
> > > @@ -3714,6 +3723,50 @@ static int shmem_parse_one(struct fs_context *fc, struct fs_parameter *param)
> > >  		ctx->seen |= SHMEM_SEEN_QUOTA;
> > >  		ctx->quota_types |= QTYPE_MASK_GRP;
> > >  		break;
> > > +	case Opt_usrquota_block_hardlimit:
> > > +		size = memparse(param->string, &rest);
> > > +		if (*rest || !size)
> > > +			goto bad_value;
> > > +		if (size > SHMEM_QUOTA_MAX_SPC_LIMIT)
> > > +			return invalfc(fc,
> > > +				       "User quota block hardlimit too large.");
> > > +		ctx->qlimits.usrquota_bhardlimit = size;
> > > +		ctx->seen |= SHMEM_SEEN_QUOTA;
> > > +		ctx->quota_types |= QTYPE_MASK_USR;
> >
> > So if I get it right, the intention here is that if
> > usrquota_block_hardlimit=value option is used, it automatically enables
> > user quota accounting and enforcement. I guess it is logical but it is not
> > documented and I'd prefer to require explicit usrquota mount option to
> > enable accounting & enforcement - it is then e.g. easier to parse mount
> > options (in userspace) for finding out whether enforcement is enabled or
> > not.
>
> Hmmm, I think I see what you mean. I can make usrquota_block_hardlimit options
> to not actually set the quota flag on quota_types, so this should be explicitly
> set by usrquota/grpquota options. Does that work for you?

Yes, works for me!

> > Also I can imagine we would allow changing the default limits on
> > remount but it isn't easy to enable quota accounting on remount etc.
> >
>
> hmm, yes, maybe enabling default limits to be changed on remount isn't a big
> deal, once the quota is already enabled, so everything is already in place.

Exactly. I don't say you have to do it now as I don't think that is super
useful. But if there's a demand we can easily do it.

> > > diff --git a/mm/shmem_quota.c b/mm/shmem_quota.c
> > > index c0b531e2ef688..3cc53f2c35e2c 100644
> > > --- a/mm/shmem_quota.c
> > > +++ b/mm/shmem_quota.c
> > > @@ -166,6 +166,7 @@ static int shmem_acquire_dquot(struct dquot *dquot)
> > >  {
> > >  	struct mem_dqinfo *info = sb_dqinfo(dquot->dq_sb, dquot->dq_id.type);
> > >  	struct rb_node **n = &((struct rb_root *)info->dqi_priv)->rb_node;
> > > +	struct shmem_sb_info *sbinfo = dquot->dq_sb->s_fs_info;
> > >  	struct rb_node *parent = NULL, *new_node = NULL;
> > >  	struct quota_id *new_entry, *entry;
> > >  	qid_t id = from_kqid(&init_user_ns, dquot->dq_id);
> > > @@ -195,6 +196,14 @@ static int shmem_acquire_dquot(struct dquot *dquot)
> > >  	}
> > >
> > >  	new_entry->id = id;
> > > +	if (dquot->dq_id.type == USRQUOTA) {
> > > +		new_entry->bhardlimit = sbinfo->qlimits.usrquota_bhardlimit;
> > > +		new_entry->ihardlimit = sbinfo->qlimits.usrquota_ihardlimit;
> > > +	} else if (dquot->dq_id.type == GRPQUOTA) {
> > > +		new_entry->bhardlimit = sbinfo->qlimits.grpquota_bhardlimit;
> > > +		new_entry->ihardlimit = sbinfo->qlimits.grpquota_ihardlimit;
> > > +	}
> > > +
> > >  	new_node = &new_entry->node;
> > >  	rb_link_node(new_node, parent, n);
> > >  	rb_insert_color(new_node, (struct rb_root *)info->dqi_priv);
> >
> > Maybe in shmem_dquot_release() when usage is 0 and limits are at default
> > limits, we can free the structure?
>
> hmmm, which struct are you talking about? quota_id? As we do for DQ_FAKE?

Yes.

								Honza
On Fri, Apr 21, 2023 at 12:47:43PM +0200, Jan Kara wrote:
> On Fri 21-04-23 12:20:42, Carlos Maiolino wrote:
> > On Thu, Apr 20, 2023 at 04:39:54PM +0200, Jan Kara wrote:
> > > On Thu 20-04-23 10:03:59, cem@kernel.org wrote:
> > > > @@ -3714,6 +3723,50 @@ static int shmem_parse_one(struct fs_context *fc, struct fs_parameter *param)
> > > >  		ctx->seen |= SHMEM_SEEN_QUOTA;
> > > >  		ctx->quota_types |= QTYPE_MASK_GRP;
> > > >  		break;
> > > > +	case Opt_usrquota_block_hardlimit:
> > > > +		size = memparse(param->string, &rest);
> > > > +		if (*rest || !size)
> > > > +			goto bad_value;
> > > > +		if (size > SHMEM_QUOTA_MAX_SPC_LIMIT)
> > > > +			return invalfc(fc,
> > > > +				       "User quota block hardlimit too large.");
> > > > +		ctx->qlimits.usrquota_bhardlimit = size;
> > > > +		ctx->seen |= SHMEM_SEEN_QUOTA;
> > > > +		ctx->quota_types |= QTYPE_MASK_USR;
> > >
> > > So if I get it right, the intention here is that if
> > > usrquota_block_hardlimit=value option is used, it automatically enables
> > > user quota accounting and enforcement. I guess it is logical but it is not
> > > documented and I'd prefer to require explicit usrquota mount option to
> > > enable accounting & enforcement - it is then e.g. easier to parse mount
> > > options (in userspace) for finding out whether enforcement is enabled or
> > > not.
> >
> > Hmmm, I think I see what you mean. I can make usrquota_block_hardlimit options
> > to not actually set the quota flag on quota_types, so this should be explicitly
> > set by usrquota/grpquota options. Does that work for you?
>
> Yes, works for me!

Great!

> > > Also I can imagine we would allow changing the default limits on
> > > remount but it isn't easy to enable quota accounting on remount etc.
> > >
> >
> > hmm, yes, maybe enabling default limits to be changed on remount isn't a big
> > deal, once the quota is already enabled, so everything is already in place.
>
> Exactly. I don't say you have to do it now as I don't think that is super
> useful. But if there's a demand we can easily do it.

Sounds good :)
If you don't mind, I'd postpone it, as I am planning to add prjquotas to it
later, I can add it to the same series, just to avoid adding more review
overhead to this one.

> > > > diff --git a/mm/shmem_quota.c b/mm/shmem_quota.c
> > > > index c0b531e2ef688..3cc53f2c35e2c 100644
> > > > --- a/mm/shmem_quota.c
> > > > +++ b/mm/shmem_quota.c
> > > > @@ -166,6 +166,7 @@ static int shmem_acquire_dquot(struct dquot *dquot)
> > > >  {
> > > >  	struct mem_dqinfo *info = sb_dqinfo(dquot->dq_sb, dquot->dq_id.type);
> > > >  	struct rb_node **n = &((struct rb_root *)info->dqi_priv)->rb_node;
> > > > +	struct shmem_sb_info *sbinfo = dquot->dq_sb->s_fs_info;
> > > >  	struct rb_node *parent = NULL, *new_node = NULL;
> > > >  	struct quota_id *new_entry, *entry;
> > > >  	qid_t id = from_kqid(&init_user_ns, dquot->dq_id);
> > > > @@ -195,6 +196,14 @@ static int shmem_acquire_dquot(struct dquot *dquot)
> > > >  	}
> > > >
> > > >  	new_entry->id = id;
> > > > +	if (dquot->dq_id.type == USRQUOTA) {
> > > > +		new_entry->bhardlimit = sbinfo->qlimits.usrquota_bhardlimit;
> > > > +		new_entry->ihardlimit = sbinfo->qlimits.usrquota_ihardlimit;
> > > > +	} else if (dquot->dq_id.type == GRPQUOTA) {
> > > > +		new_entry->bhardlimit = sbinfo->qlimits.grpquota_bhardlimit;
> > > > +		new_entry->ihardlimit = sbinfo->qlimits.grpquota_ihardlimit;
> > > > +	}
> > > > +
> > > >  	new_node = &new_entry->node;
> > > >  	rb_link_node(new_node, parent, n);
> > > >  	rb_insert_color(new_node, (struct rb_root *)info->dqi_priv);
> > >
> > > Maybe in shmem_dquot_release() when usage is 0 and limits are at default
> > > limits, we can free the structure?
> >
> > hmmm, which struct are you talking about? quota_id? As we do for DQ_FAKE?
>
> Yes.
>
> 								Honza
> --
> Jan Kara <jack@suse.com>
> SUSE Labs, CR
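For reference, using the option names introduced by this patch, a tmpfs mount with default limits could be configured like the following fstab entry. The values are purely illustrative, and the explicit usrquota/grpquota flags reflect the behavior agreed in the thread (limit options alone would not enable enforcement):

```
# /etc/fstab -- illustrative entry; limit values accept k/m/g suffixes
tmpfs  /mnt/test  tmpfs  usrquota,grpquota,usrquota_block_hardlimit=1g,grpquota_inode_hardlimit=10k  0  0
```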
diff --git a/Documentation/filesystems/tmpfs.rst b/Documentation/filesystems/tmpfs.rst
index 1d4ef4f7cca7e..241c11f86cd73 100644
--- a/Documentation/filesystems/tmpfs.rst
+++ b/Documentation/filesystems/tmpfs.rst
@@ -88,15 +88,31 @@ that instance in a system with many CPUs making intensive use of it.
 
 tmpfs also supports quota with the following mount options
 
-======== =============================================================
-quota    User and group quota accounting and enforcement is enabled on
-         the mount. Tmpfs is using hidden system quota files that are
-         initialized on mount.
-usrquota User quota accounting and enforcement is enabled on the
-         mount.
-grpquota Group quota accounting and enforcement is enabled on the
-         mount.
-======== =============================================================
+======================== =================================================
+quota                    User and group quota accounting and enforcement
+                         is enabled on the mount. Tmpfs is using hidden
+                         system quota files that are initialized on mount.
+usrquota                 User quota accounting and enforcement is enabled
+                         on the mount.
+grpquota                 Group quota accounting and enforcement is enabled
+                         on the mount.
+usrquota_block_hardlimit Set global user quota block hard limit.
+usrquota_inode_hardlimit Set global user quota inode hard limit.
+grpquota_block_hardlimit Set global group quota block hard limit.
+grpquota_inode_hardlimit Set global group quota inode hard limit.
+======================== =================================================
+
+None of the quota related mount options can be set or changed on remount.
+
+Quota limit parameters accept a suffix k, m or g for kilo, mega and giga
+and can't be changed on remount. Default global quota limits are taking
+effect for any and all user/group/project except root the first time the
+quota entry for user/group/project id is being accessed - typically the
+first time an inode with a particular id ownership is being created after
+the mount. In other words, instead of the limits being initialized to zero,
+they are initialized with the particular value provided with these mount
+options. The limits can be changed for any user/group id at any time as it
+normally can.
 
 Note that tmpfs quotas do not support user namespaces so no uid/gid
 translation is done if quotas are enable inside user namespaces.
diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index b8e421e349868..8ca5e969f00fc 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -41,6 +41,13 @@ struct shmem_inode_info {
 	(FS_IMMUTABLE_FL | FS_APPEND_FL | FS_NODUMP_FL | FS_NOATIME_FL)
 #define SHMEM_FL_INHERITED (FS_NODUMP_FL | FS_NOATIME_FL)
 
+struct shmem_quota_limits {
+	qsize_t usrquota_bhardlimit; /* Default user quota block hard limit */
+	qsize_t usrquota_ihardlimit; /* Default user quota inode hard limit */
+	qsize_t grpquota_bhardlimit; /* Default group quota block hard limit */
+	qsize_t grpquota_ihardlimit; /* Default group quota inode hard limit */
+};
+
 struct shmem_sb_info {
 	unsigned long max_blocks;   /* How many blocks are allowed */
 	struct percpu_counter used_blocks;  /* How many are allocated */
@@ -58,6 +65,7 @@ struct shmem_sb_info {
 	spinlock_t shrinklist_lock;   /* Protects shrinklist */
 	struct list_head shrinklist;  /* List of shinkable inodes */
 	unsigned long shrinklist_len; /* Length of shrinklist */
+	struct shmem_quota_limits qlimits; /* Default quota limits */
 };
 
 static inline struct shmem_inode_info *SHMEM_I(struct inode *inode)
diff --git a/mm/shmem.c b/mm/shmem.c
index dd9faf2c5c875..2832ace81515f 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -118,6 +118,7 @@ struct shmem_options {
 	int huge;
 	int seen;
 	unsigned short quota_types;
+	struct shmem_quota_limits qlimits;
 #define SHMEM_SEEN_BLOCKS 1
 #define SHMEM_SEEN_INODES 2
 #define SHMEM_SEEN_HUGE 4
@@ -3593,6 +3594,10 @@ enum shmem_param {
 	Opt_quota,
 	Opt_usrquota,
 	Opt_grpquota,
+	Opt_usrquota_block_hardlimit,
+	Opt_usrquota_inode_hardlimit,
+	Opt_grpquota_block_hardlimit,
+	Opt_grpquota_inode_hardlimit,
 };
 
 static const struct constant_table shmem_param_enums_huge[] = {
@@ -3618,6 +3623,10 @@ const struct fs_parameter_spec shmem_fs_parameters[] = {
 	fsparam_flag  ("quota",		Opt_quota),
 	fsparam_flag  ("usrquota",	Opt_usrquota),
 	fsparam_flag  ("grpquota",	Opt_grpquota),
+	fsparam_string("usrquota_block_hardlimit", Opt_usrquota_block_hardlimit),
+	fsparam_string("usrquota_inode_hardlimit", Opt_usrquota_inode_hardlimit),
+	fsparam_string("grpquota_block_hardlimit", Opt_grpquota_block_hardlimit),
+	fsparam_string("grpquota_inode_hardlimit", Opt_grpquota_inode_hardlimit),
 #endif
 	{}
 };
@@ -3714,6 +3723,50 @@ static int shmem_parse_one(struct fs_context *fc, struct fs_parameter *param)
 		ctx->seen |= SHMEM_SEEN_QUOTA;
 		ctx->quota_types |= QTYPE_MASK_GRP;
 		break;
+	case Opt_usrquota_block_hardlimit:
+		size = memparse(param->string, &rest);
+		if (*rest || !size)
+			goto bad_value;
+		if (size > SHMEM_QUOTA_MAX_SPC_LIMIT)
+			return invalfc(fc,
+				       "User quota block hardlimit too large.");
+		ctx->qlimits.usrquota_bhardlimit = size;
+		ctx->seen |= SHMEM_SEEN_QUOTA;
+		ctx->quota_types |= QTYPE_MASK_USR;
+		break;
+	case Opt_grpquota_block_hardlimit:
+		size = memparse(param->string, &rest);
+		if (*rest || !size)
+			goto bad_value;
+		if (size > SHMEM_QUOTA_MAX_SPC_LIMIT)
+			return invalfc(fc,
+				       "Group quota block hardlimit too large.");
+		ctx->qlimits.grpquota_bhardlimit = size;
+		ctx->seen |= SHMEM_SEEN_QUOTA;
+		ctx->quota_types |= QTYPE_MASK_GRP;
+		break;
+	case Opt_usrquota_inode_hardlimit:
+		size = memparse(param->string, &rest);
+		if (*rest || !size)
+			goto bad_value;
+		if (size > SHMEM_QUOTA_MAX_INO_LIMIT)
+			return invalfc(fc,
+				       "User quota inode hardlimit too large.");
+		ctx->qlimits.usrquota_ihardlimit = size;
+		ctx->seen |= SHMEM_SEEN_QUOTA;
+		ctx->quota_types |= QTYPE_MASK_USR;
+		break;
+	case Opt_grpquota_inode_hardlimit:
+		size = memparse(param->string, &rest);
+		if (*rest || !size)
+			goto bad_value;
+		if (size > SHMEM_QUOTA_MAX_INO_LIMIT)
+			return invalfc(fc,
+				       "Group quota inode hardlimit too large.");
+		ctx->qlimits.grpquota_ihardlimit = size;
+		ctx->seen |= SHMEM_SEEN_QUOTA;
+		ctx->quota_types |= QTYPE_MASK_GRP;
+		break;
 	}
 	return 0;
 
@@ -3819,6 +3872,18 @@ static int shmem_reconfigure(struct fs_context *fc)
 		goto out;
 	}
 
+#ifdef CONFIG_TMPFS_QUOTA
+#define CHANGED_LIMIT(name)						\
+	(ctx->qlimits.name## hardlimit &&				\
+	(ctx->qlimits.name## hardlimit != sbinfo->qlimits.name## hardlimit))
+
+	if (CHANGED_LIMIT(usrquota_b) || CHANGED_LIMIT(usrquota_i) ||
+	    CHANGED_LIMIT(grpquota_b) || CHANGED_LIMIT(grpquota_i)) {
+		err = "Cannot change global quota limit on remount";
+		goto out;
+	}
+#endif /* CONFIG_TMPFS_QUOTA */
+
 	if (ctx->seen & SHMEM_SEEN_HUGE)
 		sbinfo->huge = ctx->huge;
 	if (ctx->seen & SHMEM_SEEN_INUMS)
@@ -3988,6 +4053,10 @@ static int shmem_fill_super(struct super_block *sb, struct fs_context *fc)
 		sb->s_qcop = &dquot_quotactl_sysfile_ops;
 		sb->s_quota_types = QTYPE_MASK_USR | QTYPE_MASK_GRP;
 
+		/* Copy the default limits from ctx into sbinfo */
+		memcpy(&sbinfo->qlimits, &ctx->qlimits,
+		       sizeof(struct shmem_quota_limits));
+
 		if (shmem_enable_quotas(sb, ctx->quota_types))
 			goto failed;
 	}
diff --git a/mm/shmem_quota.c b/mm/shmem_quota.c
index c0b531e2ef688..3cc53f2c35e2c 100644
--- a/mm/shmem_quota.c
+++ b/mm/shmem_quota.c
@@ -166,6 +166,7 @@ static int shmem_acquire_dquot(struct dquot *dquot)
 {
 	struct mem_dqinfo *info = sb_dqinfo(dquot->dq_sb, dquot->dq_id.type);
 	struct rb_node **n = &((struct rb_root *)info->dqi_priv)->rb_node;
+	struct shmem_sb_info *sbinfo = dquot->dq_sb->s_fs_info;
 	struct rb_node *parent = NULL, *new_node = NULL;
 	struct quota_id *new_entry, *entry;
 	qid_t id = from_kqid(&init_user_ns, dquot->dq_id);
@@ -195,6 +196,14 @@ static int shmem_acquire_dquot(struct dquot *dquot)
 	}
 
 	new_entry->id = id;
+	if (dquot->dq_id.type == USRQUOTA) {
+		new_entry->bhardlimit = sbinfo->qlimits.usrquota_bhardlimit;
+		new_entry->ihardlimit = sbinfo->qlimits.usrquota_ihardlimit;
+	} else if (dquot->dq_id.type == GRPQUOTA) {
+		new_entry->bhardlimit = sbinfo->qlimits.grpquota_bhardlimit;
+		new_entry->ihardlimit = sbinfo->qlimits.grpquota_ihardlimit;
+	}
+
 	new_node = &new_entry->node;
 	rb_link_node(new_node, parent, n);
 	rb_insert_color(new_node, (struct rb_root *)info->dqi_priv);
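The shmem_acquire_dquot() change above seeds each newly created quota_id entry with the mount-time defaults instead of zero, so limits "take effect" lazily on first access per id. A small Python sketch of that behavior, substituting a dict for the kernel's rb-tree; the class and method names here are illustrative only, not kernel APIs:

```python
from dataclasses import dataclass, field


@dataclass
class Limits:
    bhardlimit: int = 0  # block hard limit (bytes)
    ihardlimit: int = 0  # inode hard limit


@dataclass
class QuotaTable:
    """Per-quota-type table of id -> limits, created lazily.

    On first access an entry is seeded from the superblock's default
    limits (as the patch does for USRQUOTA/GRPQUOTA), rather than zero.
    """
    default: Limits
    entries: dict = field(default_factory=dict)

    def acquire(self, qid: int) -> Limits:
        if qid not in self.entries:
            # First access for this id: copy the global defaults.
            self.entries[qid] = Limits(self.default.bhardlimit,
                                       self.default.ihardlimit)
        return self.entries[qid]
```

Per-id limits set later (e.g. via quotactl in the real filesystem) persist in the entry and are not overwritten by subsequent lookups, mirroring the rb-tree behavior.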