From patchwork Tue Nov 3 19:32:39 2020
X-Patchwork-Submitter: Dongli Zhang
X-Patchwork-Id: 11878935
From: Dongli Zhang
To: linux-mm@kvack.org, netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, akpm@linux-foundation.org,
    davem@davemloft.net, kuba@kernel.org, dongli.zhang@oracle.com,
    aruna.ramakrishna@oracle.com, bert.barbe@oracle.com,
    rama.nichanamatlu@oracle.com, venkat.x.venkatsubra@oracle.com,
    manjunath.b.patil@oracle.com, joe.jin@oracle.com,
    srinivas.eeda@oracle.com
Subject: [PATCH 1/1] mm: avoid re-using pfmemalloc page in page_frag_alloc()
Date: Tue, 3 Nov 2020 11:32:39 -0800
Message-Id: <20201103193239.1807-1-dongli.zhang@oracle.com>
X-Mailer: git-send-email 2.17.1

The Ethernet driver may allocate an skb (and
skb->data) via napi_alloc_skb(). This ends up in page_frag_alloc(), which
allocates skb->data from page_frag_cache->va. Under memory pressure,
page_frag_cache->va may be backed by a pfmemalloc page. As a result,
skb->pfmemalloc is always true because skb->data comes from
page_frag_cache->va. The skb will be dropped if the sock (receiver) does
not have SOCK_MEMALLOC set. This is the expected behaviour under memory
pressure.

However, once the kernel is no longer under memory pressure (suppose a
large number of pages have just been reclaimed), page_frag_alloc() may
still re-use the prior pfmemalloc page_frag_cache->va to allocate
skb->data. As a result, skb->pfmemalloc stays true until
page_frag_cache->va is re-allocated, even though the kernel is no longer
under memory pressure.

Here is how the kernel runs into the issue:

1. The kernel is under memory pressure and the allocation of order
   PAGE_FRAG_CACHE_MAX_ORDER in __page_frag_cache_refill() fails. A
   pfmemalloc page is allocated for page_frag_cache->va instead.

2. Every skb->data taken from page_frag_cache->va (pfmemalloc) has
   skb->pfmemalloc=true. Such skbs are always dropped by sockets without
   SOCK_MEMALLOC. This is the expected behaviour.

3. Suppose a large number of pages are reclaimed and the kernel is no
   longer under memory pressure. We expect the skb->pfmemalloc drops to
   stop.

4. Unfortunately, page_frag_alloc() does not proactively re-allocate
   page_frag_cache->va and keeps re-using the prior pfmemalloc page.
   skb->pfmemalloc remains true even though the kernel is no longer under
   memory pressure.

Therefore, this patch checks for and avoids re-using a pfmemalloc page
for page_frag_cache->va.
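The stickiness described in steps 2-4, and the effect of the drain-and-refill
check this patch adds, can be sketched in user space. This is only an
illustration, not kernel code: frag_cache, refill(), frag_alloc(), and
frag_alloc_fixed() are hypothetical stand-ins for page_frag_cache,
__page_frag_cache_refill(), page_frag_alloc(), and the patched path, and the
under_pressure flag stands in for the allocator falling back to pfmemalloc
reserves.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Stand-in for page_frag_cache: one cached "page" plus its taint flag. */
struct frag_cache {
	char *va;		/* cached page, NULL if none */
	bool pfmemalloc;	/* was the cached page an emergency allocation? */
	size_t offset;
	char backing[4096];
};

/* Stand-in for system-wide memory pressure at refill time. */
static bool under_pressure;

/* Models __page_frag_cache_refill(): the taint is recorded once, at refill. */
static void refill(struct frag_cache *nc)
{
	nc->va = nc->backing;
	nc->offset = 0;
	nc->pfmemalloc = under_pressure;
}

/* Unpatched behaviour: a pfmemalloc page is re-used until it runs out,
 * so every fragment it hands out carries the taint (-> skb->pfmemalloc). */
static char *frag_alloc(struct frag_cache *nc, size_t sz, bool *tainted)
{
	if (!nc->va)
		refill(nc);
	*tainted = nc->pfmemalloc;
	nc->offset += sz;
	return nc->va + nc->offset - sz;
}

/* Patched behaviour: drop a cached pfmemalloc page up front (the patch does
 * this via __page_frag_cache_drain()), forcing a clean refill. */
static char *frag_alloc_fixed(struct frag_cache *nc, size_t sz, bool *tainted)
{
	if (nc->va && nc->pfmemalloc)
		nc->va = NULL;
	return frag_alloc(nc, sz, tainted);
}
```

With frag_alloc(), a fragment allocated after the pressure has passed is still
tainted because the cached page is old; frag_alloc_fixed() refills first and
hands out a clean fragment.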
Cc: Aruna Ramakrishna
Cc: Bert Barbe
Cc: Rama Nichanamatlu
Cc: Venkat Venkatsubra
Cc: Manjunath Patil
Cc: Joe Jin
Cc: Srinivas Eeda
Signed-off-by: Dongli Zhang
---
 mm/page_alloc.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 23f5066bd4a5..291df2f9f8f3 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5075,6 +5075,16 @@ void *page_frag_alloc(struct page_frag_cache *nc,
 	struct page *page;
 	int offset;
 
+	/*
+	 * Try to avoid re-using a pfmemalloc page because the kernel may
+	 * already have moved out of the memory pressure situation.
+	 */
+	if (unlikely(nc->va && nc->pfmemalloc)) {
+		page = virt_to_page(nc->va);
+		__page_frag_cache_drain(page, nc->pagecnt_bias);
+		nc->va = NULL;
+	}
+
 	if (unlikely(!nc->va)) {
 refill:
 		page = __page_frag_cache_refill(nc, gfp_mask);