From patchwork Sun Sep 26 16:12:52 2021
X-Patchwork-Submitter: Nadav Amit
X-Patchwork-Id: 12518937
From: Nadav Amit
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Peter Xu,
    Nadav Amit, Andrea Arcangeli, Minchan Kim, Colin Cross,
    Suren Baghdasaryan, Mike Rapoport
Subject: [RFC PATCH 1/8] mm/madvise: propagate vma->vm_end changes
Date: Sun, 26 Sep 2021 09:12:52 -0700
Message-Id: <20210926161259.238054-2-namit@vmware.com>
In-Reply-To: <20210926161259.238054-1-namit@vmware.com>
References: <20210926161259.238054-1-namit@vmware.com>

From: Nadav Amit

The comment in madvise_dontneed_free() says that vma splits that occur
while the mmap-lock is dropped, during userfaultfd_remove(), should be
handled correctly, but nothing in the code indicates that this is the
case: prev is invalidated, and do_madvise() will therefore continue to
update VMAs from the "obsolete" end (i.e., the one before the split).

Propagate the changes to end from madvise_dontneed_free() back to
do_madvise() and continue the updates from the new end accordingly.
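For readers skimming the thread, the propagation idiom can be modeled
outside the kernel. The sketch below uses an invented name
(fake_dontneed_free()) and made-up addresses, and models the behavior
the changelog describes — it is not mm/madvise.c code:

```c
/*
 * Standalone model of the fix described above: the callee receives the
 * range end through a pointer; when it has to clamp the range (here,
 * because a concurrent split truncated the "VMA" at `split`), it writes
 * the clamped value back so the caller's cursor resumes from the new
 * end instead of the obsolete one.
 *
 * fake_dontneed_free() is a made-up name for illustration only.
 */
long fake_dontneed_free(unsigned long start, unsigned long *pend,
			unsigned long split)
{
	unsigned long end = *pend;

	if (end > split) {
		end = split;
		*pend = end;	/* propagate the vm_end change to the caller */
	}
	/* ... the [start, end) range would be zapped here ... */
	(void)start;
	return 0;
}
```

A caller that does `start = tmp;` after such a call then naturally
re-walks [split, old_end) on its next iteration, which is exactly the
"continue the updates from the new end" behavior the changelog asks
for.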
Cc: Andrea Arcangeli
Cc: Andrew Morton
Cc: Minchan Kim
Cc: Colin Cross
Cc: Suren Baghdasaryan
Cc: Mike Rapoport
Fixes: 70ccb92fdd90 ("userfaultfd: non-cooperative: userfaultfd_remove revalidate vma in MADV_DONTNEED")
Signed-off-by: Nadav Amit
---
 mm/madvise.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

diff --git a/mm/madvise.c b/mm/madvise.c
index 0734db8d53a7..a2b05352ebfe 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -768,10 +768,11 @@ static long madvise_dontneed_single_vma(struct vm_area_struct *vma,
 
 static long madvise_dontneed_free(struct vm_area_struct *vma,
 				  struct vm_area_struct **prev,
-				  unsigned long start, unsigned long end,
+				  unsigned long start, unsigned long *pend,
 				  int behavior)
 {
 	struct mm_struct *mm = vma->vm_mm;
+	unsigned long end = *pend;
 
 	*prev = vma;
 	if (!can_madv_lru_vma(vma))
@@ -811,6 +812,7 @@ static long madvise_dontneed_free(struct vm_area_struct *vma,
 			 * end-vma->vm_end range, but the manager can
 			 * handle a repetition fine.
 			 */
+			*pend = end;
 			end = vma->vm_end;
 		}
 		VM_WARN_ON(start >= end);
@@ -980,8 +982,10 @@ static int madvise_inject_error(int behavior,
 
 static long
 madvise_vma(struct vm_area_struct *vma, struct vm_area_struct **prev,
-	    unsigned long start, unsigned long end, int behavior)
+	    unsigned long start, unsigned long *pend, int behavior)
 {
+	unsigned long end = *pend;
+
 	switch (behavior) {
 	case MADV_REMOVE:
 		return madvise_remove(vma, prev, start, end);
@@ -993,7 +997,7 @@ madvise_vma(struct vm_area_struct *vma, struct vm_area_struct **prev,
 		return madvise_pageout(vma, prev, start, end);
 	case MADV_FREE:
 	case MADV_DONTNEED:
-		return madvise_dontneed_free(vma, prev, start, end, behavior);
+		return madvise_dontneed_free(vma, prev, start, pend, behavior);
 	case MADV_POPULATE_READ:
 	case MADV_POPULATE_WRITE:
 		return madvise_populate(vma, prev, start, end, behavior);
@@ -1199,7 +1203,7 @@ int do_madvise(struct mm_struct *mm, unsigned long start, size_t len_in, int behavior)
 			tmp = end;
 
 		/* Here vma->vm_start <= start < tmp <= (end|vma->vm_end). */
-		error = madvise_vma(vma, &prev, start, tmp, behavior);
+		error = madvise_vma(vma, &prev, start, &tmp, behavior);
 		if (error)
 			goto out;
 		start = tmp;
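(Not part of the patch.) The end-to-end effect of passing tmp by
pointer can be sanity-checked with a userspace toy model. All names
(toy_madvise_vma(), toy_do_madvise()) and the fixed "VMA" boundaries
are invented, and the clamping models the behavior the changelog
describes rather than the kernel walk itself:

```c
#include <stddef.h>

/* Toy "VMA list": [0x1000, 0x3000) and [0x3000, 0x6000). */
static const unsigned long vm_ends[] = { 0x3000, 0x6000 };

/*
 * Invented stand-in for madvise_vma(): processes [start, *pend) but,
 * on the first call only, simulates a concurrent split that truncates
 * the range at `split` and propagates the new end via *pend.
 */
long toy_madvise_vma(unsigned long start, unsigned long *pend,
		     int *first_call, unsigned long split,
		     unsigned long *processed)
{
	unsigned long end = *pend;

	if (*first_call && end > split) {
		end = split;
		*pend = end;	/* caller's cursor resumes here */
	}
	*first_call = 0;
	*processed += end - start;	/* bytes "zapped" this call */
	return 0;
}

/*
 * Invented stand-in for the do_madvise() loop: walk [start, end) one
 * VMA-sized chunk at a time, resuming from tmp after each call, as the
 * patched code does.  Returns the total number of bytes processed.
 */
unsigned long toy_do_madvise(unsigned long start, unsigned long end)
{
	unsigned long processed = 0;
	int first = 1;
	size_t i = 0;

	while (start < end) {
		unsigned long tmp = vm_ends[i] < end ? vm_ends[i] : end;

		toy_madvise_vma(start, &tmp, &first, 0x2000, &processed);
		start = tmp;	/* resumes at the propagated end */
		if (start >= vm_ends[i] && i + 1 < 2)
			i++;
	}
	return processed;
}
```

With propagation, a split at 0x2000 makes the loop re-walk
[0x2000, 0x3000) on the next iteration, so the whole 0x5000-byte range
is covered; with the old by-value interface the cursor would have
jumped straight to the pre-split end and skipped that sub-range.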