From patchwork Mon Mar 1 02:55:43 2021
X-Patchwork-Submitter: OGAWA Hirofumi
X-Patchwork-Id: 12108669
From: OGAWA Hirofumi
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Andrew Morton, Matthew Wilcox
Subject: [PATCH v2] Fix zero_user_segments() with start > end
Date: Mon, 01 Mar 2021 11:55:43 +0900
Message-ID: <87v9ab60r4.fsf@mail.parknet.co.jp>

zero_user_segments() is used from __block_write_begin_int(), for example
with a call like the following:

	zero_user_segments(page, 4096, 1024, 512, 918)

But the new zero_user_segments() implementation for HIGHMEM +
TRANSPARENT_HUGEPAGE doesn't handle the "start > end" case
correctly, and hits BUG_ON(). (We could fix __block_write_begin_int()
instead, but it is old code with multiple callers.) The new
implementation also calls kmap_atomic() unnecessarily when
start == end == 0.

Fixes: 0060ef3b4e6d ("mm: support THPs in zero_user_segments")
Cc: 
Signed-off-by: OGAWA Hirofumi
---
 mm/highmem.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/mm/highmem.c b/mm/highmem.c
index 874b732..86f2b94 100644
--- a/mm/highmem.c	2021-02-20 12:56:49.037165666 +0900
+++ b/mm/highmem.c	2021-02-20 22:03:08.369361223 +0900
@@ -368,20 +368,24 @@ void zero_user_segments(struct page *pag
 
 	BUG_ON(end1 > page_size(page) || end2 > page_size(page));
 
+	if (start1 >= end1)
+		start1 = end1 = 0;
+	if (start2 >= end2)
+		start2 = end2 = 0;
+
 	for (i = 0; i < compound_nr(page); i++) {
 		void *kaddr = NULL;
 
-		if (start1 < PAGE_SIZE || start2 < PAGE_SIZE)
-			kaddr = kmap_atomic(page + i);
-
 		if (start1 >= PAGE_SIZE) {
 			start1 -= PAGE_SIZE;
 			end1 -= PAGE_SIZE;
 		} else {
 			unsigned this_end = min_t(unsigned, end1, PAGE_SIZE);
 
-			if (end1 > start1)
+			if (end1 > start1) {
+				kaddr = kmap_atomic(page + i);
 				memset(kaddr + start1, 0, this_end - start1);
+			}
 			end1 -= this_end;
 			start1 = 0;
 		}
@@ -392,8 +396,11 @@ void zero_user_segments(struct page *pag
 		} else {
 			unsigned this_end = min_t(unsigned, end2, PAGE_SIZE);
 
-			if (end2 > start2)
+			if (end2 > start2) {
+				if (!kaddr)
+					kaddr = kmap_atomic(page + i);
 				memset(kaddr + start2, 0, this_end - start2);
+			}
 			end2 -= this_end;
 			start2 = 0;
 		}