From patchwork Tue Aug 18 01:58:15 2020
X-Patchwork-Submitter: Chinwen Chang
X-Patchwork-Id: 11719649
From: Chinwen Chang <chinwen.chang@mediatek.com>
To: Matthias Brugger, Michel Lespinasse, Andrew Morton, Vlastimil Babka,
    Daniel Jordan, Davidlohr Bueso, Alexey Dobriyan,
    "Matthew Wilcox (Oracle)", Jason Gunthorpe, Steven Price, Song Liu,
    Jimmy Assarsson, Huang Ying, Daniel Kiss, Laurent Dufour
Subject: [PATCH v4 0/3] Try to release mmap_lock temporarily in smaps_rollup
Date: Tue, 18 Aug 2020 09:58:15 +0800
Message-ID: <1597715898-3854-1-git-send-email-chinwen.chang@mediatek.com>
List-ID: linux-fsdevel@vger.kernel.org

Recently, we have observed jank caused by unpleasantly long contention
on mmap_lock, which is held by smaps_rollup when probing large
processes. To address the problem, we let smaps_rollup detect whether
anyone is waiting to acquire mmap_lock for write. If so, it releases
the lock temporarily to ease the contention.

smaps_rollup is a procfs interface that lets users summarize a
process's memory usage without the overhead of per-VMA seq_* calls.
Android uses it to sample the memory usage of various processes in
order to balance its memory pool sizes. If no one is waiting to take
the lock for write, smaps_rollup with this series behaves like the
original.

Although there are ongoing mmap_lock optimizations such as range-based
locks, smaps_rollup would still take the coarse-grained lock, so those
optimizations alone cannot avoid the aforementioned issues. Detecting
write attempts on mmap_lock and temporarily releasing it in
smaps_rollup is therefore still necessary.

Changes since v1:
- If the current VMA is freed after dropping the lock, smaps_rollup
  will return an incomplete result.
  To fix this issue, refine the code flow as suggested by Steve. [1]

Changes since v2:
- After getting back the mmap lock, the address where the walk stopped
  last time could now be in the middle of a VMA. Add one more check to
  handle this case, as suggested by Michel. [2]

Changes since v3:
- last_stopped is easily confused with last_vma_end. Replace it with a
  direct call to smap_gather_stats(vma, &mss, last_vma_end), as
  suggested by Steve. [3]

[1] https://lore.kernel.org/lkml/bf40676e-b14b-44cd-75ce-419c70194783@arm.com/
[2] https://lore.kernel.org/lkml/CANN689FtCsC71cjAjs0GPspOhgo_HRj+diWsoU1wr98YPktgWg@mail.gmail.com/
[3] https://lore.kernel.org/lkml/db0d40e2-72f3-09d5-c162-9c49218f128f@arm.com/

Chinwen Chang (3):
  mmap locking API: add mmap_lock_is_contended()
  mm: smaps*: extend smap_gather_stats to support specified beginning
  mm: proc: smaps_rollup: do not stall write attempts on mmap_lock

 fs/proc/task_mmu.c        | 96 +++++++++++++++++++++++++++++++++++----
 include/linux/mmap_lock.h |  5 ++
 2 files changed, 92 insertions(+), 9 deletions(-)