From patchwork Thu Mar 3 01:35:30 2022
X-Patchwork-Submitter: Hugh Dickins
X-Patchwork-Id: 12766944
Date: Wed, 2 Mar 2022 17:35:30 -0800 (PST)
From: Hugh Dickins <hughd@google.com>
To: Andrew Morton
Cc: Matthew Wilcox, Vlastimil Babka, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH mmotm] mm/munlock: mlock_vma_folio() check against VM_SPECIAL
Message-ID: <9b95d366-1719-f8e2-a5a3-429f9e808288@google.com>

Although mmap_region() and mlock_fixup() take care that VM_LOCKED
is never left set on a VM_SPECIAL vma, there is an interval while
file->f_op->mmap() is using vm_insert_page(s), when VM_LOCKED may
still be set while VM_SPECIAL bits are added: so mlock_vma_folio()
should ignore VM_LOCKED while any VM_SPECIAL bits are set.

This showed up as a "Bad page" still mlocked, when vfree()ing pages
which had been vm_inserted by remap_vmalloc_range_partial(): while
release_pages() and __page_cache_release(), and so put_page(), catch
pages still mlocked when freeing (and clear_page_mlock() caught them
when unmapping), the vfree() path is unprepared for them: fix it?
but these pages should not have been mlocked in the first place.
I assume that an mlockall(MCL_FUTURE) had been done in the past; or
maybe the user got to specify MAP_LOCKED on a vmalloc'ing driver mmap.

Signed-off-by: Hugh Dickins <hughd@google.com>
---
Diffed against top of next-20220301 or mmotm 2022-02-28-14-45.
This patch really belongs as a fix to the mm/munlock series in
Matthew's tree, so he might like to take it in there (but the patch
here is the foliated version, so easiest to place it after foliation).

 mm/internal.h | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

--- a/mm/internal.h
+++ b/mm/internal.h
@@ -421,8 +421,15 @@ extern int mlock_future_check(struct mm_struct *mm, unsigned long flags,
 static inline void mlock_vma_folio(struct folio *folio,
 		struct vm_area_struct *vma, bool compound)
 {
-	/* VM_IO check prevents migration from double-counting during mlock */
-	if (unlikely((vma->vm_flags & (VM_LOCKED|VM_IO)) == VM_LOCKED) &&
+	/*
+	 * The VM_SPECIAL check here serves two purposes.
+	 * 1) VM_IO check prevents migration from double-counting during mlock.
+	 * 2) Although mmap_region() and mlock_fixup() take care that VM_LOCKED
+	 *    is never left set on a VM_SPECIAL vma, there is an interval while
+	 *    file->f_op->mmap() is using vm_insert_page(s), when VM_LOCKED may
+	 *    still be set while VM_SPECIAL bits are added: so ignore it then.
+	 */
+	if (unlikely((vma->vm_flags & (VM_LOCKED|VM_SPECIAL)) == VM_LOCKED) &&
 	    (compound || !folio_test_large(folio)))
 		mlock_folio(folio);
 }