From patchwork Tue Nov 8 13:30:09 2022
X-Patchwork-Submitter: Lukas Czerner
X-Patchwork-Id: 13036305
From: Lukas Czerner
To: Hugh Dickins
Cc: Jan Kara, Eric Sandeen, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH 1/2] shmem: implement user/group quota support for tmpfs
Date: Tue, 8 Nov 2022 14:30:09 +0100
Message-Id: <20221108133010.75226-2-lczerner@redhat.com>
In-Reply-To: <20221108133010.75226-1-lczerner@redhat.com>
References: <20221108133010.75226-1-lczerner@redhat.com>
Implement user and group quota support for tmpfs, using a system quota
file in the vfsv0 quota format. Because everything in tmpfs is temporary
and as a result lost on umount, the quota files are initialized on every
mount. The same goes for quota limits, which need to be set up after
every mount.

The quota support in tmpfs is well separated from the rest of the
filesystem and is only enabled with the mount option -o quota (and
usrquota and grpquota for compatibility reasons). Only quota accounting
is enabled this way; enforcement needs to be enabled separately by
regular quota tools (using the Q_QUOTAON quotactl).

Signed-off-by: Lukas Czerner
---
 Documentation/filesystems/tmpfs.rst |  12 +
 include/linux/shmem_fs.h            |   3 +
 mm/shmem.c                          | 361 ++++++++++++++++++++++++++--
 3 files changed, 353 insertions(+), 23 deletions(-)

diff --git a/Documentation/filesystems/tmpfs.rst b/Documentation/filesystems/tmpfs.rst
index 0408c245785e..9c4f228ef4f3 100644
--- a/Documentation/filesystems/tmpfs.rst
+++ b/Documentation/filesystems/tmpfs.rst
@@ -86,6 +86,18 @@ use up all the memory on the machine; but enhances the scalability of that
 instance in a system with many CPUs making intensive use of it.
 
+tmpfs also supports quota with the following mount options
+
+======== =============================================================
+quota    Quota accounting is enabled on the mount. Tmpfs is using
+         hidden system quota files that are initialized on mount.
+         Quota limits and quota enforcement can be enabled using
+         standard quota tools.
+usrquota Same as quota option. Exists for compatibility reasons.
+grpquota Same as quota option. Exists for compatibility reasons.
+======== =============================================================
+
+
 tmpfs has a mount option to set the NUMA memory allocation policy for
 all files in that instance (if CONFIG_NUMA is enabled) - which can be
 adjusted on the fly via 'mount -o remount ...'
diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h index d500ea967dc7..02a328c98d3a 100644 --- a/include/linux/shmem_fs.h +++ b/include/linux/shmem_fs.h @@ -26,6 +26,9 @@ struct shmem_inode_info { atomic_t stop_eviction; /* hold when working on inode */ struct timespec64 i_crtime; /* file creation time */ unsigned int fsflags; /* flags for FS_IOC_[SG]ETFLAGS */ +#ifdef CONFIG_QUOTA + struct dquot *i_dquot[MAXQUOTAS]; +#endif struct inode vfs_inode; }; diff --git a/mm/shmem.c b/mm/shmem.c index c1d8b8a1aa3b..ec16659c2255 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -79,6 +79,12 @@ static struct vfsmount *shm_mnt; #include #include #include +#include +/* + * Required for structure definitions and macros for quota file + * initialization. + */ +#include <../fs/quota/quotaio_v2.h> #include @@ -120,8 +126,13 @@ struct shmem_options { #define SHMEM_SEEN_INODES 2 #define SHMEM_SEEN_HUGE 4 #define SHMEM_SEEN_INUMS 8 +#define SHMEM_SEEN_QUOTA 16 }; +static void shmem_set_inode_flags(struct inode *, unsigned int); +static struct inode *shmem_get_inode_noquota(struct super_block *, + struct inode *, umode_t, dev_t, unsigned long); + #ifdef CONFIG_TMPFS static unsigned long shmem_default_max_blocks(void) { @@ -136,6 +147,10 @@ static unsigned long shmem_default_max_inodes(void) } #endif +#if defined(CONFIG_TMPFS) && defined(CONFIG_QUOTA) +#define SHMEM_QUOTA_TMPFS +#endif + static int shmem_swapin_folio(struct inode *inode, pgoff_t index, struct folio **foliop, enum sgp_type sgp, gfp_t gfp, struct vm_area_struct *vma, @@ -198,26 +213,34 @@ static inline void shmem_unacct_blocks(unsigned long flags, long pages) vm_unacct_memory(pages * VM_ACCT(PAGE_SIZE)); } -static inline bool shmem_inode_acct_block(struct inode *inode, long pages) +static inline int shmem_inode_acct_block(struct inode *inode, long pages) { struct shmem_inode_info *info = SHMEM_I(inode); struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb); + int err = -ENOSPC; if (shmem_acct_block(info->flags, pages)) - return false; + return err; if (sbinfo->max_blocks) { if (percpu_counter_compare(&sbinfo->used_blocks, sbinfo->max_blocks - pages) > 0) goto unacct; + if (dquot_alloc_block_nodirty(inode, pages)) { + err = -EDQUOT; + goto unacct; + } percpu_counter_add(&sbinfo->used_blocks, pages); + } else if (dquot_alloc_block_nodirty(inode, pages)) { + err = -EDQUOT; + goto unacct; } - return true; + return 0; unacct: shmem_unacct_blocks(info->flags, pages); - return false; + return err; } static inline void shmem_inode_unacct_blocks(struct inode *inode, long pages) @@ -225,6 +248,8 @@ static inline void shmem_inode_unacct_blocks(struct inode *inode, long pages) { struct shmem_inode_info *info = SHMEM_I(inode); struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb); + dquot_free_block_nodirty(inode, pages); + if (sbinfo->max_blocks) percpu_counter_sub(&sbinfo->used_blocks, pages); shmem_unacct_blocks(info->flags, pages); @@ -247,6 +272,218 @@ bool vma_is_shmem(struct vm_area_struct *vma) static LIST_HEAD(shmem_swaplist); static DEFINE_MUTEX(shmem_swaplist_mutex); +#ifdef SHMEM_QUOTA_TMPFS + +#define SHMEM_MAXQUOTAS 2 + +/* + * Define macros needed for quota file initialization.
+ */ +#define MAX_IQ_TIME 604800 /* (7*24*60*60) 1 week */ +#define MAX_DQ_TIME 604800 /* (7*24*60*60) 1 week */ +#define QT_TREEOFF 1 /* Offset of tree in file in blocks */ +#define QUOTABLOCK_BITS 10 +#define QUOTABLOCK_SIZE (1 << QUOTABLOCK_BITS) + +static ssize_t shmem_quota_write_inode(struct inode *inode, int type, + const char *data, size_t len, loff_t off) +{ + loff_t i_size = i_size_read(inode); + struct page *page = NULL; + unsigned long offset; + struct folio *folio; + int err = 0; + pgoff_t index; + + index = off >> PAGE_SHIFT; + offset = off & ~PAGE_MASK; + + /* + * We expect the write to fit into a single page + */ + if (PAGE_SIZE - offset < len) { + pr_warn("tmpfs: quota write (off=%llu, len=%llu) doesn't fit into a single page\n", + (unsigned long long)off, (unsigned long long)len); + return -EIO; + } + + err = shmem_get_folio(inode, index, &folio, SGP_WRITE); + if (err) + return err; + + page = folio_file_page(folio, index); + if (PageHWPoison(page)) { + folio_unlock(folio); + folio_put(folio); + return -EIO; + } + + /* Write data, or zeroout the portion of the page */ + if (data) + memcpy(page_address(page) + offset, data, len); + else + memset(page_address(page) + offset, 0, len); + + SetPageUptodate(page); + flush_dcache_folio(folio); + folio_mark_dirty(folio); + folio_unlock(folio); + folio_put(folio); + + if (i_size < off + len) + i_size_write(inode, off + len); + return err ? err : len; +} + +static ssize_t shmem_quota_write(struct super_block *sb, int type, + const char *data, size_t len, loff_t off) +{ + struct inode *inode = sb_dqopt(sb)->files[type]; + + return shmem_quota_write_inode(inode, type, data, len, off); +} + +static int shmem_enable_quotas(struct super_block *sb) +{ + int type, err = 0; + struct inode *inode; + struct v2_disk_dqheader qheader; + struct v2_disk_dqinfo qinfo; + static const uint quota_magics[] = V2_INITQMAGICS; + static const uint quota_versions[] = V2_INITQVERSIONS; + + sb_dqopt(sb)->flags |= DQUOT_QUOTA_SYS_FILE | DQUOT_NOLIST_DIRTY; + for (type = 0; type < SHMEM_MAXQUOTAS; type++) { + inode = shmem_get_inode_noquota(sb, NULL, S_IFREG | 0777, 0, + VM_NORESERVE); + if (IS_ERR_OR_NULL(inode)) { + err = PTR_ERR(inode); + goto out_err; + } + inode->i_flags |= S_NOQUOTA; + + /* Initialize generic quota file header */ + qheader.dqh_magic = cpu_to_le32(quota_magics[type]); + qheader.dqh_version = cpu_to_le32(quota_versions[type]); + + /* Initialize the quota file info structure */ + qinfo.dqi_bgrace = cpu_to_le32(MAX_DQ_TIME); + qinfo.dqi_igrace = cpu_to_le32(MAX_IQ_TIME); + qinfo.dqi_flags = cpu_to_le32(0); + qinfo.dqi_blocks = cpu_to_le32(QT_TREEOFF + 1); + qinfo.dqi_free_blk = cpu_to_le32(0); + qinfo.dqi_free_entry = cpu_to_le32(0); + + /* + * Write out generic quota header, quota info structure and + * zeroout first tree block. 
+ */ + shmem_quota_write_inode(inode, type, (const char *)&qheader, + sizeof(qheader), 0); + shmem_quota_write_inode(inode, type, (const char *)&qinfo, + sizeof(qinfo), sizeof(qheader)); + shmem_quota_write_inode(inode, type, 0, + QT_TREEOFF * QUOTABLOCK_SIZE, + QUOTABLOCK_SIZE); + + shmem_set_inode_flags(inode, FS_NOATIME_FL | FS_IMMUTABLE_FL); + + err = dquot_load_quota_inode(inode, type, QFMT_VFS_V1, + DQUOT_USAGE_ENABLED); + iput(inode); + if (err) + goto out_err; + } + return 0; + +out_err: + pr_warn("tmpfs: failed to enable quota tracking (type=%d, err=%d)\n", + type, err); + for (type--; type >= 0; type--) { + inode = sb_dqopt(sb)->files[type]; + if (inode) + inode = igrab(inode); + dquot_quota_off(sb, type); + if (inode) + iput(inode); + } + return err; +} + +static void shmem_disable_quotas(struct super_block *sb) +{ + struct inode *inode = NULL; + int type; + + for (type = 0; type < SHMEM_MAXQUOTAS; type++) { + inode = sb_dqopt(sb)->files[type]; + if (inode) + inode = igrab(inode); + dquot_quota_off(sb, type); + if (inode) + iput(inode); + } +} + +static ssize_t shmem_quota_read(struct super_block *sb, int type, char *data, + size_t len, loff_t off) +{ + struct inode *inode = sb_dqopt(sb)->files[type]; + loff_t i_size = i_size_read(inode); + struct folio *folio = NULL; + struct page *page = NULL; + unsigned long offset; + int tocopy, err = 0; + pgoff_t index; + size_t toread; + + index = off >> PAGE_SHIFT; + offset = off & ~PAGE_MASK; + + if (off > i_size) + return 0; + if (off+len > i_size) + len = i_size - off; + toread = len; + while (toread > 0) { + tocopy = PAGE_SIZE - offset < toread ? + PAGE_SIZE - offset : toread; + + err = shmem_get_folio(inode, index, &folio, SGP_READ); + if (err) { + if (err == -EINVAL) + err = 0; + return err; + } + + if (folio) { + folio_unlock(folio); + page = folio_file_page(folio, index); + if (PageHWPoison(page)) { + folio_put(folio); + return -EIO; + } + memcpy(data, page_address(page) + offset, tocopy); + folio_put(folio); + } else { /* A hole */ + memset(data, 0, tocopy); + } + + offset = 0; + toread -= tocopy; + data += tocopy; + index++; + cond_resched(); + } + return len; +} + +static struct dquot **shmem_get_dquots(struct inode *inode) +{ + return SHMEM_I(inode)->i_dquot; +} +#endif /* SHMEM_QUOTA_TMPFS */ + /* * shmem_reserve_inode() performs bookkeeping to reserve a shmem inode, and * produces a novel ino for the newly allocated inode. 
@@ -353,7 +590,6 @@ static void shmem_recalc_inode(struct inode *inode) freed = info->alloced - info->swapped - inode->i_mapping->nrpages; if (freed > 0) { info->alloced -= freed; - inode->i_blocks -= freed * BLOCKS_PER_PAGE; shmem_inode_unacct_blocks(inode, freed); } } @@ -363,7 +599,7 @@ bool shmem_charge(struct inode *inode, long pages) struct shmem_inode_info *info = SHMEM_I(inode); unsigned long flags; - if (!shmem_inode_acct_block(inode, pages)) + if (shmem_inode_acct_block(inode, pages)) return false; /* nrpages adjustment first, then shmem_recalc_inode() when balanced */ @@ -371,7 +607,6 @@ bool shmem_charge(struct inode *inode, long pages) spin_lock_irqsave(&info->lock, flags); info->alloced += pages; - inode->i_blocks += pages * BLOCKS_PER_PAGE; shmem_recalc_inode(inode); spin_unlock_irqrestore(&info->lock, flags); @@ -387,7 +622,6 @@ void shmem_uncharge(struct inode *inode, long pages) spin_lock_irqsave(&info->lock, flags); info->alloced -= pages; - inode->i_blocks -= pages * BLOCKS_PER_PAGE; shmem_recalc_inode(inode); spin_unlock_irqrestore(&info->lock, flags); @@ -1119,6 +1353,13 @@ static int shmem_setattr(struct user_namespace *mnt_userns, } } + /* Transfer quota accounting */ + if (i_uid_needs_update(mnt_userns, attr, inode) || + i_gid_needs_update(mnt_userns, attr, inode)) + error = dquot_transfer(mnt_userns, inode, attr); + if (error) + return error; + setattr_copy(&init_user_ns, inode, attr); if (attr->ia_valid & ATTR_MODE) error = posix_acl_chmod(&init_user_ns, inode, inode->i_mode); @@ -1164,7 +1405,9 @@ static void shmem_evict_inode(struct inode *inode) simple_xattrs_free(&info->xattrs); WARN_ON(inode->i_blocks); shmem_free_inode(inode->i_sb); + dquot_free_inode(inode); clear_inode(inode); + dquot_drop(inode); } static int shmem_find_swap_entries(struct address_space *mapping, @@ -1569,14 +1812,14 @@ static struct folio *shmem_alloc_and_acct_folio(gfp_t gfp, struct inode *inode, { struct shmem_inode_info *info = SHMEM_I(inode); struct folio *folio; - int nr; - int err = -ENOSPC; + int nr, err; if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) huge = false; nr = huge ? 
HPAGE_PMD_NR : 1; - if (!shmem_inode_acct_block(inode, nr)) + err = shmem_inode_acct_block(inode, nr); + if (err) goto failed; if (huge) @@ -1949,7 +2192,6 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index, spin_lock_irq(&info->lock); info->alloced += folio_nr_pages(folio); - inode->i_blocks += (blkcnt_t)BLOCKS_PER_PAGE << folio_order(folio); shmem_recalc_inode(inode); spin_unlock_irq(&info->lock); alloced = true; @@ -2315,8 +2557,10 @@ static void shmem_set_inode_flags(struct inode *inode, unsigned int fsflags) #define shmem_initxattrs NULL #endif -static struct inode *shmem_get_inode(struct super_block *sb, struct inode *dir, - umode_t mode, dev_t dev, unsigned long flags) +static struct inode *shmem_get_inode_noquota(struct super_block *sb, + struct inode *dir, + umode_t mode, dev_t dev, + unsigned long flags) { struct inode *inode; struct shmem_inode_info *info; @@ -2384,6 +2628,35 @@ static struct inode *shmem_get_inode(struct super_block *sb, struct inode *dir, return inode; } +static struct inode *shmem_get_inode(struct super_block *sb, struct inode *dir, + umode_t mode, dev_t dev, unsigned long flags) +{ + int err; + struct inode *inode; + + inode = shmem_get_inode_noquota(sb, dir, mode, dev, flags); + if (inode) { + err = dquot_initialize(inode); + if (err) + goto errout; + + err = dquot_alloc_inode(inode); + if (err) { + dquot_drop(inode); + goto errout; + } + } + return inode; + +errout: + inode->i_flags |= S_NOQUOTA; + iput(inode); + shmem_free_inode(sb); + if (err) + return ERR_PTR(err); + return NULL; +} + #ifdef CONFIG_USERFAULTFD int shmem_mfill_atomic_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd, @@ -2403,7 +2676,7 @@ int shmem_mfill_atomic_pte(struct mm_struct *dst_mm, int ret; pgoff_t max_off; - if (!shmem_inode_acct_block(inode, 1)) { + if (shmem_inode_acct_block(inode, 1)) { /* * We may have got a page, returned -ENOENT triggering a retry, * and now we find ourselves with -ENOMEM. 
Release the page, to @@ -2487,7 +2760,6 @@ int shmem_mfill_atomic_pte(struct mm_struct *dst_mm, spin_lock_irq(&info->lock); info->alloced++; - inode->i_blocks += BLOCKS_PER_PAGE; shmem_recalc_inode(inode); spin_unlock_irq(&info->lock); @@ -2908,7 +3180,7 @@ shmem_mknod(struct user_namespace *mnt_userns, struct inode *dir, int error = -ENOSPC; inode = shmem_get_inode(dir->i_sb, dir, mode, dev, VM_NORESERVE); - if (inode) { + if (!IS_ERR_OR_NULL(inode)) { error = simple_acl_create(dir, inode); if (error) goto out_iput; @@ -2924,7 +3196,8 @@ shmem_mknod(struct user_namespace *mnt_userns, struct inode *dir, inode_inc_iversion(dir); d_instantiate(dentry, inode); dget(dentry); /* Extra count - pin the dentry in core */ - } + } else if (IS_ERR(inode)) + error = PTR_ERR(inode); return error; out_iput: iput(inode); @@ -2939,7 +3212,7 @@ shmem_tmpfile(struct user_namespace *mnt_userns, struct inode *dir, int error = -ENOSPC; inode = shmem_get_inode(dir->i_sb, dir, mode, 0, VM_NORESERVE); - if (inode) { + if (!IS_ERR_OR_NULL(inode)) { error = security_inode_init_security(inode, dir, NULL, shmem_initxattrs, NULL); @@ -2949,7 +3222,8 @@ shmem_tmpfile(struct user_namespace *mnt_userns, struct inode *dir, if (error) goto out_iput; d_tmpfile(file, inode); - } + } else if (IS_ERR(inode)) + error = PTR_ERR(inode); return finish_open_simple(file, error); out_iput: iput(inode); @@ -3126,6 +3400,8 @@ static int shmem_symlink(struct user_namespace *mnt_userns, struct inode *dir, VM_NORESERVE); if (!inode) return -ENOSPC; + else if (IS_ERR(inode)) + return PTR_ERR(inode); error = security_inode_init_security(inode, dir, &dentry->d_name, shmem_initxattrs, NULL); @@ -3443,6 +3719,7 @@ enum shmem_param { Opt_uid, Opt_inode32, Opt_inode64, + Opt_quota, }; static const struct constant_table shmem_param_enums_huge[] = { @@ -3464,6 +3741,9 @@ const struct fs_parameter_spec shmem_fs_parameters[] = { fsparam_u32 ("uid", Opt_uid), fsparam_flag ("inode32", Opt_inode32), fsparam_flag ("inode64", Opt_inode64), + fsparam_flag ("quota", Opt_quota), + fsparam_flag ("usrquota", Opt_quota), + fsparam_flag ("grpquota", Opt_quota), {} }; @@ -3547,6 +3827,13 @@ static int shmem_parse_one(struct fs_context *fc, struct fs_parameter *param) ctx->full_inums = true; ctx->seen |= SHMEM_SEEN_INUMS; break; + case Opt_quota: +#ifdef CONFIG_QUOTA + ctx->seen |= SHMEM_SEEN_QUOTA; +#else + goto unsupported_parameter; +#endif + break; } return 0; @@ -3646,6 +3933,12 @@ static int shmem_reconfigure(struct fs_context *fc) goto out; } + if (ctx->seen & SHMEM_SEEN_QUOTA && + !sb_any_quota_loaded(fc->root->d_sb)) { + err = "Cannot enable quota on remount"; + goto out; + } + if (ctx->seen & SHMEM_SEEN_HUGE) sbinfo->huge = ctx->huge; if (ctx->seen & SHMEM_SEEN_INUMS) @@ -3728,6 +4021,9 @@ static void shmem_put_super(struct super_block *sb) { struct shmem_sb_info *sbinfo = SHMEM_SB(sb); +#ifdef SHMEM_QUOTA_TMPFS + shmem_disable_quotas(sb); +#endif free_percpu(sbinfo->ino_batch); percpu_counter_destroy(&sbinfo->used_blocks); mpol_put(sbinfo->mpol); @@ -3805,14 +4101,26 @@ static int shmem_fill_super(struct super_block *sb, struct fs_context *fc) #endif uuid_gen(&sb->s_uuid); +#ifdef SHMEM_QUOTA_TMPFS + if (ctx->seen & SHMEM_SEEN_QUOTA) { + sb->dq_op = &dquot_operations; + sb->s_qcop = &dquot_quotactl_sysfile_ops; + sb->s_quota_types = QTYPE_MASK_USR | QTYPE_MASK_GRP; + + if (shmem_enable_quotas(sb)) + goto failed; + } +#endif /* SHMEM_QUOTA_TMPFS */ + inode = shmem_get_inode(sb, NULL, S_IFDIR | sbinfo->mode, 0, VM_NORESERVE); - if (!inode) + if 
(IS_ERR_OR_NULL(inode)) goto failed; inode->i_uid = sbinfo->uid; inode->i_gid = sbinfo->gid; sb->s_root = d_make_root(inode); if (!sb->s_root) goto failed; + return 0; failed: @@ -3976,7 +4284,12 @@ static const struct super_operations shmem_ops = { #ifdef CONFIG_TMPFS .statfs = shmem_statfs, .show_options = shmem_show_options, -#endif +#ifdef CONFIG_QUOTA + .quota_read = shmem_quota_read, + .quota_write = shmem_quota_write, + .get_dquots = shmem_get_dquots, +#endif /* CONFIG_QUOTA */ +#endif /* CONFIG_TMPFS */ .evict_inode = shmem_evict_inode, .drop_inode = generic_delete_inode, .put_super = shmem_put_super, @@ -4196,8 +4509,10 @@ static struct file *__shmem_file_setup(struct vfsmount *mnt, const char *name, l inode = shmem_get_inode(mnt->mnt_sb, NULL, S_IFREG | S_IRWXUGO, 0, flags); - if (unlikely(!inode)) { + if (IS_ERR_OR_NULL(inode)) { shmem_unacct_size(flags, size); + if (IS_ERR(inode)) + return (struct file *)inode; return ERR_PTR(-ENOSPC); } inode->i_flags |= i_flags;

From patchwork Tue Nov 8 13:30:10 2022
X-Patchwork-Submitter: Lukas Czerner
X-Patchwork-Id: 13036304
From: Lukas Czerner
To: Hugh Dickins
Cc: Jan Kara, Eric Sandeen, linux-mm@kvack.org, linux-fsdevel@vger.kernel.org
Subject: [PATCH 2/2] shmem: implement mount options for global quota limits
Date: Tue, 8 Nov 2022 14:30:10 +0100
Message-Id: <20221108133010.75226-3-lczerner@redhat.com>
In-Reply-To: <20221108133010.75226-1-lczerner@redhat.com>
References: <20221108133010.75226-1-lczerner@redhat.com>

Implement a set of mount options for setting global quota limits on
tmpfs:

quota_ubh_limit - global user quota block hard limit
quota_uih_limit - global user quota inode hard limit
quota_gbh_limit - global group quota block hard limit
quota_gih_limit - global group quota inode hard limit

All of the above mount options take effect for any and all users/groups
except root, and the limits can later be changed using the standard ways
of setting quota limits. Along with setting the limits, quota
enforcement is enabled as well.

None of the mount options can be set or changed on remount.
Signed-off-by: Lukas Czerner
---
 Documentation/filesystems/tmpfs.rst |  23 ++--
 include/linux/shmem_fs.h            |   4 +
 mm/shmem.c                          | 166 +++++++++++++++++++++++++---
 3 files changed, 171 insertions(+), 22 deletions(-)

diff --git a/Documentation/filesystems/tmpfs.rst b/Documentation/filesystems/tmpfs.rst
index 9c4f228ef4f3..be4aa964863d 100644
--- a/Documentation/filesystems/tmpfs.rst
+++ b/Documentation/filesystems/tmpfs.rst
@@ -88,14 +88,21 @@ that instance in a system with many CPUs making intensive use of it.
 
 tmpfs also supports quota with the following mount options
 
-======== =============================================================
-quota    Quota accounting is enabled on the mount. Tmpfs is using
-         hidden system quota files that are initialized on mount.
-         Quota limits and quota enforcement can be enabled using
-         standard quota tools.
-usrquota Same as quota option. Exists for compatibility reasons.
-grpquota Same as quota option. Exists for compatibility reasons.
-======== =============================================================
+=============== ======================================================
+quota           Quota accounting is enabled on the mount. Tmpfs is
+                using hidden system quota files that are initialized
+                on mount. Quota limits and quota enforcement can be
+                enabled using standard quota tools.
+usrquota        Same as quota option. Exists for compatibility.
+grpquota        Same as quota option. Exists for compatibility.
+quota_ubh_limit Set global user quota block hard limit.
+quota_uih_limit Set global user quota inode hard limit.
+quota_gbh_limit Set global group quota block hard limit.
+quota_gih_limit Set global group quota inode hard limit.
+=============== ======================================================
+
+Quota limit parameters accept a suffix k, m or g for kilo, mega and
+giga and can't be changed on remount.
 
tmpfs has a mount option to set the NUMA memory allocation policy for diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h index 02a328c98d3a..eb5e2dc2dc4c 100644 --- a/include/linux/shmem_fs.h +++ b/include/linux/shmem_fs.h @@ -39,6 +39,10 @@ struct shmem_inode_info { struct shmem_sb_info { unsigned long max_blocks; /* How many blocks are allowed */ + unsigned long quota_ubh_limit; /* Default user quota block hard limit */ + unsigned long quota_uih_limit; /* Default user quota inode hard limit */ + unsigned long quota_gbh_limit; /* Default group quota block hard limit */ + unsigned long quota_gih_limit; /* Default group quota inode hard limit */ struct percpu_counter used_blocks; /* How many are allocated */ unsigned long max_inodes; /* How many inodes are allowed */ unsigned long free_inodes; /* How many are left for allocation */ diff --git a/mm/shmem.c b/mm/shmem.c index ec16659c2255..f1d6a3931b0a 100644 --- a/mm/shmem.c +++ b/mm/shmem.c @@ -99,6 +99,10 @@ static struct vfsmount *shm_mnt; /* Symlink up to this size is kmalloc'ed instead of using a swappable page */ #define SHORT_SYMLINK_LEN 128 +#if defined(CONFIG_TMPFS) && defined(CONFIG_QUOTA) +#define SHMEM_QUOTA_TMPFS +#endif + /* * shmem_fallocate communicates with shmem_fault or shmem_writepage via * inode->i_private (with i_rwsem making sure that it has only one user at @@ -115,6 +119,12 @@ struct shmem_falloc { struct shmem_options { unsigned long long blocks; unsigned long long inodes; +#ifdef SHMEM_QUOTA_TMPFS + unsigned long long quota_ubh_limit; + unsigned long long quota_uih_limit; + unsigned long long quota_gbh_limit; + unsigned long long quota_gih_limit; +#endif struct mempolicy *mpol; kuid_t uid; kgid_t gid; @@ -147,10 +157,6 @@ static unsigned long shmem_default_max_inodes(void) } #endif -#if defined(CONFIG_TMPFS) && defined(CONFIG_QUOTA) -#define SHMEM_QUOTA_TMPFS -#endif - static int shmem_swapin_folio(struct inode *inode, pgoff_t index, struct folio **foliop, enum sgp_type sgp, gfp_t gfp, struct vm_area_struct *vma, @@ -285,6 +291,54 @@ static DEFINE_MUTEX(shmem_swaplist_mutex); #define QUOTABLOCK_BITS 10 #define QUOTABLOCK_SIZE (1 << QUOTABLOCK_BITS) +struct kmem_cache *shmem_dquot_cachep; + +struct dquot *shmem_dquot_alloc(struct super_block *sb, int type) +{ + struct shmem_sb_info *sbinfo = SHMEM_SB(sb); + struct dquot *dquot = NULL; + + dquot = kmem_cache_zalloc(shmem_dquot_cachep, GFP_NOFS); + if (!dquot) + return NULL; + + if (type == USRQUOTA) { + dquot->dq_dqb.dqb_bhardlimit = + (sbinfo->quota_ubh_limit << PAGE_SHIFT); + dquot->dq_dqb.dqb_ihardlimit = sbinfo->quota_uih_limit; + } else if (type == GRPQUOTA) { + dquot->dq_dqb.dqb_bhardlimit = + (sbinfo->quota_gbh_limit << PAGE_SHIFT); + dquot->dq_dqb.dqb_ihardlimit = sbinfo->quota_gih_limit; + } + /* + * This is a bit of a hack to allow setting global default + * limits on new files, by setting the limits here and preventing + * quota from initializing everything to zero. It won't ever be + * read from the quota file because existing inodes in tmpfs are always + * kept in memory (or swap) so we know we're getting a dquot for a + * new inode with no pre-existing dquot.
+ */ + set_bit(DQ_READ_B, &dquot->dq_flags); + return dquot; +} + +static void shmem_dquot_destroy(struct dquot *dquot) +{ + kmem_cache_free(shmem_dquot_cachep, dquot); +} + +const struct dquot_operations shmem_dquot_operations = { + .write_dquot = dquot_commit, + .acquire_dquot = dquot_acquire, + .release_dquot = dquot_release, + .mark_dirty = dquot_mark_dquot_dirty, + .write_info = dquot_commit_info, + .alloc_dquot = shmem_dquot_alloc, + .destroy_dquot = shmem_dquot_destroy, + .get_next_id = dquot_get_next_id, +}; + static ssize_t shmem_quota_write_inode(struct inode *inode, int type, const char *data, size_t len, loff_t off) { @@ -343,7 +397,7 @@ static ssize_t shmem_quota_write(struct super_block *sb, int type, return shmem_quota_write_inode(inode, type, data, len, off); } -static int shmem_enable_quotas(struct super_block *sb) +static int shmem_enable_quotas(struct super_block *sb, unsigned int dquot_flags) { int type, err = 0; struct inode *inode; @@ -389,7 +443,7 @@ static int shmem_enable_quotas(struct super_block *sb) shmem_set_inode_flags(inode, FS_NOATIME_FL | FS_IMMUTABLE_FL); err = dquot_load_quota_inode(inode, type, QFMT_VFS_V1, - DQUOT_USAGE_ENABLED); + dquot_flags); iput(inode); if (err) goto out_err; @@ -3720,6 +3774,10 @@ enum shmem_param { Opt_inode32, Opt_inode64, Opt_quota, + Opt_quota_ubh_limit, + Opt_quota_uih_limit, + Opt_quota_gbh_limit, + Opt_quota_gih_limit, }; static const struct constant_table shmem_param_enums_huge[] = { @@ -3744,6 +3802,10 @@ const struct fs_parameter_spec shmem_fs_parameters[] = { fsparam_flag ("quota", Opt_quota), fsparam_flag ("usrquota", Opt_quota), fsparam_flag ("grpquota", Opt_quota), + fsparam_string("quota_ubh_limit", Opt_quota_ubh_limit), + fsparam_string("quota_uih_limit", Opt_quota_uih_limit), + fsparam_string("quota_gbh_limit", Opt_quota_gbh_limit), + fsparam_string("quota_gih_limit", Opt_quota_gih_limit), {} }; @@ -3827,13 +3889,44 @@ static int shmem_parse_one(struct fs_context *fc, struct fs_parameter *param) ctx->full_inums = true; ctx->seen |= SHMEM_SEEN_INUMS; break; - case Opt_quota: #ifdef CONFIG_QUOTA + case Opt_quota: + ctx->seen |= SHMEM_SEEN_QUOTA; + break; + case Opt_quota_ubh_limit: + size = memparse(param->string, &rest); + if (*rest || !size) + goto bad_value; + ctx->quota_ubh_limit = DIV_ROUND_UP(size, PAGE_SIZE); + ctx->seen |= SHMEM_SEEN_QUOTA; + break; + case Opt_quota_gbh_limit: + size = memparse(param->string, &rest); + if (*rest || !size) + goto bad_value; + ctx->quota_gbh_limit = DIV_ROUND_UP(size, PAGE_SIZE); + ctx->seen |= SHMEM_SEEN_QUOTA; + break; + case Opt_quota_uih_limit: + ctx->quota_uih_limit = memparse(param->string, &rest); + if (*rest || !ctx->quota_uih_limit) + goto bad_value; + ctx->seen |= SHMEM_SEEN_QUOTA; + break; + case Opt_quota_gih_limit: + ctx->quota_gih_limit = memparse(param->string, &rest); + if (*rest || !ctx->quota_gih_limit) + goto bad_value; ctx->seen |= SHMEM_SEEN_QUOTA; + break; #else + case Opt_quota: + case Opt_quota_ubh_limit: + case Opt_quota_gbh_limit: + case Opt_quota_uih_limit: + case Opt_quota_gih_limit: goto unsupported_parameter; #endif - break; } return 0; @@ -3933,12 +4026,24 @@ static int shmem_reconfigure(struct fs_context *fc) goto out; } +#ifdef CONFIG_QUOTA if (ctx->seen & SHMEM_SEEN_QUOTA && !sb_any_quota_loaded(fc->root->d_sb)) { err = "Cannot enable quota on remount"; goto out; } +#define CHANGED_LIMIT(name) \ + (ctx->quota_##name##_limit && \ + (ctx->quota_##name##_limit != sbinfo->quota_ ##name##_limit)) + + if (CHANGED_LIMIT(ubh) || CHANGED_LIMIT(uih) || 
+ CHANGED_LIMIT(gbh) || CHANGED_LIMIT(gih)) { + err = "Cannot change global quota limit on remount"; + goto out; + } +#endif /* CONFIG_QUOTA */ + if (ctx->seen & SHMEM_SEEN_HUGE) sbinfo->huge = ctx->huge; if (ctx->seen & SHMEM_SEEN_INUMS) @@ -4103,11 +4208,22 @@ static int shmem_fill_super(struct super_block *sb, struct fs_context *fc) #ifdef SHMEM_QUOTA_TMPFS if (ctx->seen & SHMEM_SEEN_QUOTA) { - sb->dq_op = &dquot_operations; + unsigned int dquot_flags; + + sb->dq_op = &shmem_dquot_operations; sb->s_qcop = &dquot_quotactl_sysfile_ops; sb->s_quota_types = QTYPE_MASK_USR | QTYPE_MASK_GRP; - if (shmem_enable_quotas(sb)) + dquot_flags = DQUOT_USAGE_ENABLED; + /* + * If any of the global quota limits are set, enable + * quota enforcement + */ + if (ctx->quota_ubh_limit || ctx->quota_uih_limit || + ctx->quota_gbh_limit || ctx->quota_gih_limit) + dquot_flags |= DQUOT_LIMITS_ENABLED; + + if (shmem_enable_quotas(sb, dquot_flags)) goto failed; } #endif /* SHMEM_QUOTA_TMPFS */ @@ -4121,6 +4237,17 @@ static int shmem_fill_super(struct super_block *sb, struct fs_context *fc) if (!sb->s_root) goto failed; +#ifdef SHMEM_QUOTA_TMPFS + /* + * Set quota hard limits after shmem_get_inode() to avoid setting + * it for root + */ + sbinfo->quota_ubh_limit = ctx->quota_ubh_limit; + sbinfo->quota_uih_limit = ctx->quota_uih_limit; + sbinfo->quota_gbh_limit = ctx->quota_gbh_limit; + sbinfo->quota_gih_limit = ctx->quota_gih_limit; +#endif /* SHMEM_QUOTA_TMPFS */ + return 0; failed: @@ -4183,16 +4310,27 @@ static void shmem_init_inode(void *foo) inode_init_once(&info->vfs_inode); } -static void shmem_init_inodecache(void) +static void shmem_init_mem_caches(void) { shmem_inode_cachep = kmem_cache_create("shmem_inode_cache", sizeof(struct shmem_inode_info), 0, SLAB_PANIC|SLAB_ACCOUNT, shmem_init_inode); + +#ifdef SHMEM_QUOTA_TMPFS + shmem_dquot_cachep = kmem_cache_create("shmem_dquot", + sizeof(struct dquot), sizeof(unsigned long) * 4, + (SLAB_HWCACHE_ALIGN|SLAB_RECLAIM_ACCOUNT| + SLAB_MEM_SPREAD|SLAB_PANIC), + NULL); +#endif } -static void shmem_destroy_inodecache(void) +static void shmem_destroy_mem_caches(void) { kmem_cache_destroy(shmem_inode_cachep); +#ifdef SHMEM_QUOTA_TMPFS + kmem_cache_destroy(shmem_dquot_cachep); +#endif } /* Keep the page in page cache instead of truncating it */ @@ -4340,7 +4478,7 @@ void __init shmem_init(void) { int error; - shmem_init_inodecache(); + shmem_init_mem_caches(); error = register_filesystem(&shmem_fs_type); if (error) { @@ -4366,7 +4504,7 @@ void __init shmem_init(void) out1: unregister_filesystem(&shmem_fs_type); out2: - shmem_destroy_inodecache(); + shmem_destroy_mem_caches(); shm_mnt = ERR_PTR(error); }
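
As a usage illustration of the workflow the commit messages describe (mount with quota accounting, set limits, then switch on enforcement from userspace), here is a hypothetical sketch; it is not part of the series. It assumes the quotactl_fd(2) syscall (Linux 5.14+), which takes a file descriptor instead of the block device path that classic quotactl(2) expects and that tmpfs does not have; the mount point /mnt/tmp and uid 1000 are made-up examples.

/*
 * tmpfs-quota.c - hypothetical sketch, not part of the patches.
 *
 * Assumed setup (patch 1/2):
 *     mount -t tmpfs -o quota tmpfs /mnt/tmp
 * or, with patch 2/2, global defaults plus enforcement at mount time:
 *     mount -t tmpfs -o quota,quota_ubh_limit=1g,quota_uih_limit=10k tmpfs /mnt/tmp
 *
 * Needs kernel/glibc headers recent enough to provide SYS_quotactl_fd
 * and struct if_dqblk.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/quota.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/mnt/tmp", O_RDONLY | O_DIRECTORY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* Give uid 1000 a 64 MiB block hard limit (if_dqblk counts 1 KiB blocks). */
	struct if_dqblk dq;
	memset(&dq, 0, sizeof(dq));
	dq.dqb_bhardlimit = 64 * 1024;
	dq.dqb_valid = QIF_BLIMITS;
	if (syscall(SYS_quotactl_fd, fd, QCMD(Q_SETQUOTA, USRQUOTA), 1000, &dq) < 0)
		perror("Q_SETQUOTA");

	/*
	 * Flip user quota from pure accounting to enforcement, i.e. what
	 * the commit message calls enabling enforcement "by regular quota
	 * tools (using Q_QUOTAON)".
	 */
	if (syscall(SYS_quotactl_fd, fd, QCMD(Q_QUOTAON, USRQUOTA), QFMT_VFS_V1, NULL) < 0)
		perror("Q_QUOTAON");

	close(fd);
	return 0;
}

Once enforcement is on, writes by uid 1000 that would exceed the hard limit should fail with EDQUOT, matching the -EDQUOT path added to shmem_inode_acct_block() in patch 1/2.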