From patchwork Mon Aug 19 22:09:14 2019
X-Patchwork-Submitter: Hugh Dickins <hughd@google.com>
X-Patchwork-Id: 11102113
Date: Mon, 19 Aug 2019 15:09:14 -0700 (PDT)
From: Hugh Dickins <hughd@google.com>
To: David Howells <dhowells@redhat.com>
cc: Al Viro <viro@zeniv.linux.org.uk>,
    Andrew Morton <akpm@linux-foundation.org>,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
Subject: tmpfs: fixups to use of the new mount API

Several fixups to shmem_parse_param() and to tmpfs's use of the new mount API:

mm/shmem.c manages the filesystem named "tmpfs": revert "shmem" to "tmpfs"
in its mount error messages.

/sys/kernel/mm/transparent_hugepage/shmem_enabled has valid options "deny"
and "force", but they are not valid as tmpfs "huge" options.

The "size" param is an alternative to "nr_blocks", and needs to be
recognized as changing max_blocks.  And where there's ambiguity, it's
better to mention "size" than "nr_blocks" in messages, since "size" is the
variant shown in /proc/mounts.

shmem_apply_options() left ctx->mpol pointing to the new mpol, which was
then freed in shmem_free_fc(), so the filesystem went on to use the mpol
after it had been freed.

Make shmem_parse_param() issue "tmpfs: Bad value for '%s'" messages just as
fs_parse() would, instead of a different wording.  Where config disables
"mpol" or "huge", say "tmpfs: Unsupported parameter '%s'".

Signed-off-by: Hugh Dickins <hughd@google.com>
---
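[Aside for readers not steeped in the fs_context lifecycle: the sketch below
is a plain userspace analogue of the ctx->mpol fix, with invented names
(struct sbi/fcx, apply_mpol(), free_fc()) and malloc/free standing in for
the mempolicy refcounting -- it only illustrates why the pointer is swapped
rather than copied, and is not part of the patch.]

/*
 * Userspace analogue of the ctx->mpol ownership hand-off (invented names,
 * plain malloc/free, no locking) -- not the kernel code itself.  "sbi"
 * stands in for the long-lived superblock info, "fcx" for the mount
 * context whose teardown (like shmem_free_fc()) frees whatever policy
 * pointer it still holds.
 */
#include <stdio.h>
#include <stdlib.h>

struct policy { const char *name; };

struct sbi { struct policy *mpol; };    /* long-lived filesystem state */
struct fcx { struct policy *mpol; };    /* short-lived mount context   */

/*
 * The buggy pattern copied the pointer ("sb->mpol = fc->mpol;") and freed
 * only the old policy: sb and fc then shared the new policy, so freeing
 * the context left sb->mpol dangling.  Swapping instead hands the old
 * policy to the context, whose teardown frees that one, while the
 * filesystem keeps the new one alive.
 */
static void apply_mpol(struct sbi *sb, struct fcx *fc)
{
        struct policy *tmp = sb->mpol;

        sb->mpol = fc->mpol;
        fc->mpol = tmp;
}

static void free_fc(struct fcx *fc)     /* models shmem_free_fc() */
{
        free(fc->mpol);
        fc->mpol = NULL;
}

int main(void)
{
        struct policy *oldpol = malloc(sizeof(*oldpol));
        struct policy *newpol = malloc(sizeof(*newpol));
        struct sbi sb = { .mpol = oldpol };
        struct fcx fc = { .mpol = newpol };

        oldpol->name = "old policy";
        newpol->name = "new policy";

        apply_mpol(&sb, &fc);   /* filesystem now uses the new policy */
        free_fc(&fc);           /* frees the old policy, not the live one */

        printf("filesystem still safely using: %s\n", sb.mpol->name);
        free(sb.mpol);
        return 0;
}
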
 mm/shmem.c | 80 ++++++++++++++++++++++++++-------------------------
 1 file changed, 42 insertions(+), 38 deletions(-)

--- mmotm/mm/shmem.c    2019-08-17 11:33:16.557900238 -0700
+++ linux/mm/shmem.c    2019-08-19 13:37:29.184001050 -0700
@@ -3432,13 +3432,11 @@ static const struct fs_parameter_enum sh
         { Opt_huge,     "always",       SHMEM_HUGE_ALWAYS },
         { Opt_huge,     "within_size",  SHMEM_HUGE_WITHIN_SIZE },
         { Opt_huge,     "advise",       SHMEM_HUGE_ADVISE },
-        { Opt_huge,     "deny",         SHMEM_HUGE_DENY },
-        { Opt_huge,     "force",        SHMEM_HUGE_FORCE },
         {}
 };
 
 const struct fs_parameter_description shmem_fs_parameters = {
-        .name           = "shmem",
+        .name           = "tmpfs",
         .specs          = shmem_param_specs,
         .enums          = shmem_param_enums,
 };
@@ -3448,9 +3446,9 @@ static void shmem_apply_options(struct s
                                 unsigned long inodes_in_use)
 {
         struct shmem_fs_context *ctx = fc->fs_private;
-        struct mempolicy *old = NULL;
 
-        if (test_bit(Opt_nr_blocks, &ctx->changes))
+        if (test_bit(Opt_nr_blocks, &ctx->changes) ||
+            test_bit(Opt_size, &ctx->changes))
                 sbinfo->max_blocks = ctx->max_blocks;
         if (test_bit(Opt_nr_inodes, &ctx->changes)) {
                 sbinfo->max_inodes = ctx->max_inodes;
@@ -3459,8 +3457,11 @@ static void shmem_apply_options(struct s
         if (test_bit(Opt_huge, &ctx->changes))
                 sbinfo->huge = ctx->huge;
         if (test_bit(Opt_mpol, &ctx->changes)) {
-                old = sbinfo->mpol;
-                sbinfo->mpol = ctx->mpol;
+                /*
+                 * Update sbinfo->mpol now while stat_lock is held.
+                 * Leave shmem_free_fc() to free the old mpol if any.
+                 */
+                swap(sbinfo->mpol, ctx->mpol);
         }
 
         if (fc->purpose != FS_CONTEXT_FOR_RECONFIGURE) {
@@ -3471,8 +3472,6 @@ static void shmem_apply_options(struct s
                 if (test_bit(Opt_mode, &ctx->changes))
                         sbinfo->mode = ctx->mode;
         }
-
-        mpol_put(old);
 }
 
 static int shmem_parse_param(struct fs_context *fc, struct fs_parameter *param)
@@ -3498,7 +3497,7 @@ static int shmem_parse_param(struct fs_c
                         rest++;
                 }
                 if (*rest)
-                        return invalf(fc, "shmem: Invalid size");
+                        goto bad_value;
                 ctx->max_blocks = DIV_ROUND_UP(size, PAGE_SIZE);
                 break;
 
@@ -3506,55 +3505,59 @@ static int shmem_parse_param(struct fs_c
                 rest = param->string;
                 ctx->max_blocks = memparse(param->string, &rest);
                 if (*rest)
-                        return invalf(fc, "shmem: Invalid nr_blocks");
+                        goto bad_value;
                 break;
+
         case Opt_nr_inodes:
                 rest = param->string;
                 ctx->max_inodes = memparse(param->string, &rest);
                 if (*rest)
-                        return invalf(fc, "shmem: Invalid nr_inodes");
+                        goto bad_value;
                 break;
+
         case Opt_mode:
                 ctx->mode = result.uint_32 & 07777;
                 break;
+
         case Opt_uid:
                 ctx->uid = make_kuid(current_user_ns(), result.uint_32);
                 if (!uid_valid(ctx->uid))
-                        return invalf(fc, "shmem: Invalid uid");
+                        goto bad_value;
                 break;
 
         case Opt_gid:
                 ctx->gid = make_kgid(current_user_ns(), result.uint_32);
                 if (!gid_valid(ctx->gid))
-                        return invalf(fc, "shmem: Invalid gid");
+                        goto bad_value;
                 break;
 
         case Opt_huge:
-#ifdef CONFIG_TRANSPARENT_HUGE_PAGECACHE
-                if (!has_transparent_hugepage() &&
-                    result.uint_32 != SHMEM_HUGE_NEVER)
-                        return invalf(fc, "shmem: Huge pages disabled");
-
                 ctx->huge = result.uint_32;
+                if (ctx->huge != SHMEM_HUGE_NEVER &&
+                    !(IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE) &&
+                      has_transparent_hugepage()))
+                        goto unsupported_parameter;
                 break;
-#else
-                return invalf(fc, "shmem: huge= option disabled");
-#endif
-
-        case Opt_mpol: {
-#ifdef CONFIG_NUMA
-                struct mempolicy *mpol;
-                if (mpol_parse_str(param->string, &mpol))
-                        return invalf(fc, "shmem: Invalid mpol=");
-                mpol_put(ctx->mpol);
-                ctx->mpol = mpol;
-#endif
-                break;
-        }
+
+        case Opt_mpol:
+                if (IS_ENABLED(CONFIG_NUMA)) {
+                        struct mempolicy *mpol;
+                        if (mpol_parse_str(param->string, &mpol))
+                                goto bad_value;
+                        mpol_put(ctx->mpol);
+                        ctx->mpol = mpol;
+                        break;
+                }
+                goto unsupported_parameter;
         }
 
         __set_bit(opt, &ctx->changes);
         return 0;
+
+unsupported_parameter:
+        return invalf(fc, "tmpfs: Unsupported parameter '%s'", param->key);
+bad_value:
+        return invalf(fc, "tmpfs: Bad value for '%s'", param->key);
 }
 
 /*
@@ -3572,14 +3575,15 @@ static int shmem_reconfigure(struct fs_c
         unsigned long inodes_in_use;
 
         spin_lock(&sbinfo->stat_lock);
-        if (test_bit(Opt_nr_blocks, &ctx->changes)) {
+        if (test_bit(Opt_nr_blocks, &ctx->changes) ||
+            test_bit(Opt_size, &ctx->changes)) {
                 if (ctx->max_blocks && !sbinfo->max_blocks) {
                         spin_unlock(&sbinfo->stat_lock);
-                        return invalf(fc, "shmem: Can't retroactively limit nr_blocks");
+                        return invalf(fc, "tmpfs: Cannot retroactively limit size");
                 }
                 if (percpu_counter_compare(&sbinfo->used_blocks,
                                            ctx->max_blocks) > 0) {
                         spin_unlock(&sbinfo->stat_lock);
-                        return invalf(fc, "shmem: Too few blocks for current use");
+                        return invalf(fc, "tmpfs: Too small a size for current use");
                 }
         }
@@ -3587,11 +3591,11 @@ static int shmem_reconfigure(struct fs_c
         if (test_bit(Opt_nr_inodes, &ctx->changes)) {
                 if (ctx->max_inodes && !sbinfo->max_inodes) {
                         spin_unlock(&sbinfo->stat_lock);
-                        return invalf(fc, "shmem: Can't retroactively limit nr_inodes");
+                        return invalf(fc, "tmpfs: Cannot retroactively limit inodes");
                 }
                 if (ctx->max_inodes < inodes_in_use) {
                         spin_unlock(&sbinfo->stat_lock);
-                        return invalf(fc, "shmem: Too few inodes for current use");
+                        return invalf(fc, "tmpfs: Too few inodes for current use");
                 }
         }
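
[A second aside, again a userspace stand-in rather than kernel code: the
size=/nr_blocks equivalence handled above boils down to rounding a byte
count up to PAGE_SIZE units.  parse_size() below is a deliberately
simplified substitute for memparse() (K/M/G suffixes only, no '%'
handling), and PAGE_SIZE is assumed to be 4096 here.]

#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE 4096UL
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Simplified, hypothetical stand-in for memparse(): number plus optional
 * K/M/G suffix; anything left over is reported via *rest, mirroring the
 * "if (*rest) goto bad_value;" checks in the patch. */
static unsigned long parse_size(const char *s, const char **rest)
{
        char *end;
        unsigned long val = strtoul(s, &end, 0);

        switch (*end) {
        case 'G': case 'g': val <<= 10;  /* fall through */
        case 'M': case 'm': val <<= 10;  /* fall through */
        case 'K': case 'k': val <<= 10; end++; break;
        }
        *rest = end;
        return val;
}

int main(void)
{
        const char *args[] = { "1G", "10M", "4097", "1x" };

        for (int i = 0; i < 4; i++) {
                const char *rest;
                unsigned long size = parse_size(args[i], &rest);
                unsigned long max_blocks = DIV_ROUND_UP(size, PAGE_SIZE);

                /* size=1G and nr_blocks=262144 name the same limit (4K pages) */
                printf("size=%-5s -> max_blocks=%lu%s\n", args[i],
                       max_blocks, *rest ? "   (bad value)" : "");
        }
        return 0;
}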