From patchwork Wed Mar 15 11:31:24 2023
From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: Andrew Morton, Mel Gorman, Vlastimil Babka, David Hildenbrand
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov", sparclinux@vger.kernel.org, David Miller
Subject: [PATCH 01/10] sparc/mm: Fix MAX_ORDER usage in tsb_grow()
Date: Wed, 15 Mar 2023 14:31:24 +0300
Message-Id: <20230315113133.11326-2-kirill.shutemov@linux.intel.com>
In-Reply-To: <20230315113133.11326-1-kirill.shutemov@linux.intel.com>
References: <20230315113133.11326-1-kirill.shutemov@linux.intel.com>

MAX_ORDER is not inclusive: the maximum allocation order the buddy
allocator can deliver is MAX_ORDER-1.

Fix MAX_ORDER usage in tsb_grow().

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: sparclinux@vger.kernel.org
Cc: David Miller
Reviewed-by: Mike Kravetz
Acked-by: Mike Rapoport (IBM)
Acked-by: Vlastimil Babka
---
 arch/sparc/mm/tsb.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/sparc/mm/tsb.c b/arch/sparc/mm/tsb.c
index 912205787161..dba8dffe2113 100644
--- a/arch/sparc/mm/tsb.c
+++ b/arch/sparc/mm/tsb.c
@@ -402,8 +402,8 @@ void tsb_grow(struct mm_struct *mm, unsigned long tsb_index, unsigned long rss)
 	unsigned long new_rss_limit;
 	gfp_t gfp_flags;
 
-	if (max_tsb_size > (PAGE_SIZE << MAX_ORDER))
-		max_tsb_size = (PAGE_SIZE << MAX_ORDER);
+	if (max_tsb_size > (PAGE_SIZE << (MAX_ORDER - 1)))
+		max_tsb_size = (PAGE_SIZE << (MAX_ORDER - 1));
 
 	new_cache_index = 0;
 	for (new_size = 8192; new_size < max_tsb_size; new_size <<= 1UL) {
From patchwork Wed Mar 15 11:31:25 2023
From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: Andrew Morton, Mel Gorman, Vlastimil Babka, David Hildenbrand
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov", Richard Weinberger, Anton Ivanov, Johannes Berg
Subject: [PATCH 02/10] um: Fix MAX_ORDER usage in linux_main()
Date: Wed, 15 Mar 2023 14:31:25 +0300
Message-Id: <20230315113133.11326-3-kirill.shutemov@linux.intel.com>
In-Reply-To: <20230315113133.11326-1-kirill.shutemov@linux.intel.com>
References: <20230315113133.11326-1-kirill.shutemov@linux.intel.com>
MAX_ORDER is not inclusive: the maximum allocation order the buddy
allocator can deliver is MAX_ORDER-1.

Fix MAX_ORDER usage in linux_main().

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Richard Weinberger
Cc: Anton Ivanov
Cc: Johannes Berg
Acked-by: Mike Rapoport (IBM)
Acked-by: Vlastimil Babka
---
 arch/um/kernel/um_arch.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/um/kernel/um_arch.c b/arch/um/kernel/um_arch.c
index 8dcda617b8bf..5e5a9c8e0e5d 100644
--- a/arch/um/kernel/um_arch.c
+++ b/arch/um/kernel/um_arch.c
@@ -368,10 +368,10 @@ int __init linux_main(int argc, char **argv)
 	max_physmem = TASK_SIZE - uml_physmem - iomem_size - MIN_VMALLOC;
 
 	/*
-	 * Zones have to begin on a 1 << MAX_ORDER page boundary,
+	 * Zones have to begin on a 1 << MAX_ORDER-1 page boundary,
 	 * so this makes sure that's true for highmem
 	 */
-	max_physmem &= ~((1 << (PAGE_SHIFT + MAX_ORDER)) - 1);
+	max_physmem &= ~((1 << (PAGE_SHIFT + MAX_ORDER - 1)) - 1);
 	if (physmem_size + iomem_size > max_physmem) {
 		highmem = physmem_size + iomem_size - max_physmem;
 		physmem_size -= highmem;
From patchwork Wed Mar 15 11:31:26 2023
From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: Andrew Morton, Mel Gorman, Vlastimil Babka, David Hildenbrand
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov", Denis Efremov
Subject: [PATCH 03/10] floppy: Fix MAX_ORDER usage
Date: Wed, 15 Mar 2023 14:31:26 +0300
Message-Id: <20230315113133.11326-4-kirill.shutemov@linux.intel.com>
In-Reply-To: <20230315113133.11326-1-kirill.shutemov@linux.intel.com>
References: <20230315113133.11326-1-kirill.shutemov@linux.intel.com>
MAX_ORDER is not inclusive: the maximum allocation order the buddy
allocator can deliver is MAX_ORDER-1.

Fix MAX_ORDER usage in the floppy code. Also, allocating a buffer of
exactly MAX_LEN bytes is okay: fix the MAX_LEN check accordingly.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Denis Efremov
Acked-by: Mike Rapoport (IBM)
Acked-by: Vlastimil Babka
---
 drivers/block/floppy.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
index 487840e3564d..90d2dfb6448e 100644
--- a/drivers/block/floppy.c
+++ b/drivers/block/floppy.c
@@ -3079,7 +3079,7 @@ static void raw_cmd_free(struct floppy_raw_cmd **ptr)
 	}
 }
 
-#define MAX_LEN (1UL << MAX_ORDER << PAGE_SHIFT)
+#define MAX_LEN (1UL << (MAX_ORDER - 1) << PAGE_SHIFT)
 
 static int raw_cmd_copyin(int cmd, void __user *param,
 			  struct floppy_raw_cmd **rcmd)
@@ -3108,7 +3108,7 @@ static int raw_cmd_copyin(int cmd, void __user *param,
 	ptr->resultcode = 0;
 
 	if (ptr->flags & (FD_RAW_READ | FD_RAW_WRITE)) {
-		if (ptr->length <= 0 || ptr->length >= MAX_LEN)
+		if (ptr->length <= 0 || ptr->length > MAX_LEN)
			return -EINVAL;
		ptr->kernel_data = (char *)fd_dma_mem_alloc(ptr->length);
		fallback_on_nodma_alloc(&ptr->kernel_data, ptr->length);
From patchwork Wed Mar 15 11:31:27 2023
From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
To: Andrew Morton, Mel Gorman, Vlastimil Babka, David Hildenbrand
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov", Jani Nikula, Joonas Lahtinen, Rodrigo Vivi, Tvrtko Ursulin
Subject: [PATCH 04/10] drm/i915: Fix MAX_ORDER usage in i915_gem_object_get_pages_internal()
Date: Wed, 15 Mar 2023 14:31:27 +0300
Message-Id: <20230315113133.11326-5-kirill.shutemov@linux.intel.com>
In-Reply-To: <20230315113133.11326-1-kirill.shutemov@linux.intel.com>
References: <20230315113133.11326-1-kirill.shutemov@linux.intel.com>
MAX_ORDER is not inclusive: the maximum allocation order the buddy
allocator can deliver is MAX_ORDER-1.

Fix MAX_ORDER usage in i915_gem_object_get_pages_internal().

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Jani Nikula
Cc: Joonas Lahtinen
Cc: Rodrigo Vivi
Cc: Tvrtko Ursulin
Acked-by: Tvrtko Ursulin
Acked-by: Vlastimil Babka
---
 drivers/gpu/drm/i915/gem/i915_gem_internal.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
index 6bc26b4b06b8..eae9e9f6d3bf 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
@@ -36,7 +36,7 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
 	struct sg_table *st;
 	struct scatterlist *sg;
 	unsigned int npages; /* restricted by sg_alloc_table */
-	int max_order = MAX_ORDER;
+	int max_order = MAX_ORDER - 1;
 	unsigned int max_segment;
 	gfp_t gfp;
Shutemov" X-Patchwork-Id: 13175675 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 53B15C7618B for ; Wed, 15 Mar 2023 11:31:53 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 6813C6B0081; Wed, 15 Mar 2023 07:31:50 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 631246B0082; Wed, 15 Mar 2023 07:31:50 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 4AADB6B0083; Wed, 15 Mar 2023 07:31:50 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0015.hostedemail.com [216.40.44.15]) by kanga.kvack.org (Postfix) with ESMTP id 398296B0081 for ; Wed, 15 Mar 2023 07:31:50 -0400 (EDT) Received: from smtpin09.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay03.hostedemail.com (Postfix) with ESMTP id 075D7A0A02 for ; Wed, 15 Mar 2023 11:31:49 +0000 (UTC) X-FDA: 80570917980.09.0AFD3FD Received: from mga03.intel.com (mga03.intel.com [134.134.136.65]) by imf12.hostedemail.com (Postfix) with ESMTP id DB82A40019 for ; Wed, 15 Mar 2023 11:31:47 +0000 (UTC) Authentication-Results: imf12.hostedemail.com; dkim=pass header.d=intel.com header.s=Intel header.b=F056L0ij; spf=none (imf12.hostedemail.com: domain of kirill.shutemov@linux.intel.com has no SPF policy when checking 134.134.136.65) smtp.mailfrom=kirill.shutemov@linux.intel.com; dmarc=pass (policy=none) header.from=intel.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1678879908; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; 
From: "Kirill A. Shutemov"
To: Andrew Morton, Mel Gorman, Vlastimil Babka, David Hildenbrand
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov", Frank Haverkamp
Subject: [PATCH 05/10] genwqe: Fix MAX_ORDER usage
Date: Wed, 15 Mar 2023 14:31:28 +0300
Message-Id: <20230315113133.11326-6-kirill.shutemov@linux.intel.com>
In-Reply-To: <20230315113133.11326-1-kirill.shutemov@linux.intel.com>
References: <20230315113133.11326-1-kirill.shutemov@linux.intel.com>

MAX_ORDER is not inclusive: the maximum allocation order the buddy
allocator can deliver is MAX_ORDER-1.

Fix MAX_ORDER usage in the genwqe driver.

Signed-off-by: Kirill A. Shutemov
Cc: Frank Haverkamp
Acked-by: Vlastimil Babka
---
 drivers/misc/genwqe/card_dev.c   | 2 +-
 drivers/misc/genwqe/card_utils.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/misc/genwqe/card_dev.c b/drivers/misc/genwqe/card_dev.c
index 55fc5b80e649..d0e27438a73c 100644
--- a/drivers/misc/genwqe/card_dev.c
+++ b/drivers/misc/genwqe/card_dev.c
@@ -443,7 +443,7 @@ static int genwqe_mmap(struct file *filp, struct vm_area_struct *vma)
 	if (vsize == 0)
 		return -EINVAL;

-	if (get_order(vsize) > MAX_ORDER)
+	if (get_order(vsize) >= MAX_ORDER)
 		return -ENOMEM;

 	dma_map = kzalloc(sizeof(struct dma_mapping), GFP_KERNEL);

diff --git a/drivers/misc/genwqe/card_utils.c b/drivers/misc/genwqe/card_utils.c
index f778e11237a6..ac29698d085a 100644
--- a/drivers/misc/genwqe/card_utils.c
+++ b/drivers/misc/genwqe/card_utils.c
@@ -308,7 +308,7 @@ int genwqe_alloc_sync_sgl(struct genwqe_dev *cd, struct genwqe_sgl *sgl,
 	sgl->write = write;
 	sgl->sgl_size = genwqe_sgl_size(sgl->nr_pages);

-	if (get_order(sgl->sgl_size) > MAX_ORDER) {
+	if (get_order(sgl->sgl_size) >= MAX_ORDER) {
 		dev_err(&pci_dev->dev,
 			"[%s] err: too much memory requested!\n", __func__);
 		return ret;

From patchwork Wed Mar 15 11:31:29 2023
X-Patchwork-Submitter: "Kirill A.
Shutemov"
X-Patchwork-Id: 13175680
From: "Kirill A. Shutemov"
To: Andrew Morton, Mel Gorman, Vlastimil Babka, David Hildenbrand
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov", Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers, Adrian Hunter
Subject: [PATCH 06/10] perf/core: Fix MAX_ORDER usage in rb_alloc_aux_page()
Date: Wed, 15 Mar 2023 14:31:29 +0300
Message-Id: <20230315113133.11326-7-kirill.shutemov@linux.intel.com>
In-Reply-To: <20230315113133.11326-1-kirill.shutemov@linux.intel.com>
References: <20230315113133.11326-1-kirill.shutemov@linux.intel.com>

MAX_ORDER is not inclusive: the maximum allocation order the buddy
allocator can deliver is MAX_ORDER-1.

Fix MAX_ORDER usage in rb_alloc_aux_page().

Signed-off-by: Kirill A. Shutemov
Cc: Peter Zijlstra
Cc: Ingo Molnar
Cc: Arnaldo Carvalho de Melo
Cc: Mark Rutland
Cc: Alexander Shishkin
Cc: Jiri Olsa
Cc: Namhyung Kim
Cc: Ian Rogers
Cc: Adrian Hunter
Acked-by: Vlastimil Babka
---
 kernel/events/ring_buffer.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
index 273a0fe7910a..d6bbdb7830b2 100644
--- a/kernel/events/ring_buffer.c
+++ b/kernel/events/ring_buffer.c
@@ -609,8 +609,8 @@ static struct page *rb_alloc_aux_page(int node, int order)
 {
 	struct page *page;

-	if (order > MAX_ORDER)
-		order = MAX_ORDER;
+	if (order >= MAX_ORDER)
+		order = MAX_ORDER - 1;

 	do {
 		page = alloc_pages_node(node, PERF_AUX_GFP, order);

From patchwork Wed Mar 15 11:31:30 2023
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 13175679
From: "Kirill A. Shutemov"
To: Andrew Morton, Mel Gorman, Vlastimil Babka, David Hildenbrand
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, "Kirill A.
Shutemov", Alexander Duyck
Subject: [PATCH 07/10] mm/page_reporting: Fix MAX_ORDER usage in page_reporting_register()
Date: Wed, 15 Mar 2023 14:31:30 +0300
Message-Id: <20230315113133.11326-8-kirill.shutemov@linux.intel.com>
In-Reply-To: <20230315113133.11326-1-kirill.shutemov@linux.intel.com>
References: <20230315113133.11326-1-kirill.shutemov@linux.intel.com>

MAX_ORDER is not inclusive: the
maximum allocation order the buddy allocator can deliver is MAX_ORDER-1.

Fix MAX_ORDER usage in page_reporting_register().

Signed-off-by: Kirill A. Shutemov
Cc: Alexander Duyck
Reviewed-by: Vlastimil Babka
---
 mm/page_reporting.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/page_reporting.c b/mm/page_reporting.c
index c65813a9dc78..275b466de37b 100644
--- a/mm/page_reporting.c
+++ b/mm/page_reporting.c
@@ -370,7 +370,7 @@ int page_reporting_register(struct page_reporting_dev_info *prdev)
 	 */
 	if (page_reporting_order == -1) {
-		if (prdev->order > 0 && prdev->order <= MAX_ORDER)
+		if (prdev->order > 0 && prdev->order < MAX_ORDER)
 			page_reporting_order = prdev->order;
 		else
 			page_reporting_order = pageblock_order;

From patchwork Wed Mar 15 11:31:31 2023
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 13175678
From: "Kirill A. Shutemov"
To: Andrew Morton, Mel Gorman, Vlastimil Babka, David Hildenbrand
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, "Kirill A.
Shutemov" , Christoph Lameter , Pekka Enberg Subject: [PATCH 08/10] mm/slub: Fix MAX_ORDER usage in calculate_order() Date: Wed, 15 Mar 2023 14:31:31 +0300 Message-Id: <20230315113133.11326-9-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230315113133.11326-1-kirill.shutemov@linux.intel.com> References: <20230315113133.11326-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 X-Rspam-User: X-Rspamd-Queue-Id: 70C4C180011 X-Rspamd-Server: rspam01 X-Stat-Signature: 11579fctjs7j4rxoxp49374xheuirmj6 X-HE-Tag: 1678879908-811830 X-HE-Meta: U2FsdGVkX19FAlfKLM6nE4peY1wFUp9yM7UZ0j3sOHv0R9Lwxlks5QlUnMUp9ydN09M0hvFwxTyn2sX93k07owsTqFtaybD6PiS5o743tqjJpMewiNocf+FnjC7hozjC/NjjxwNUGU7ksCYz5QwvkxF4K1IqQ7GFf+gGDz2BsWmM7349J75toVUzEkc30yyIZXUUUHhD1LhsJsI8J5+WZjlPz8HbLEDK0HR+DiTZSBA0nzZQZnvFlG7ApHUeGWISXIudoUps7cnOh3uiDIICsLXstZL0PqzIO9M36zAPYHL5AAbGlQxBkIiEfQSXaUnqvhwW39jeGbXkntvYE/zAg1fPpJMwsKRtQS9XitJmuZeiRjd7HS73UQVACIgEFecx2kg8q52UGp/NJ8I3HQxYbz0cfK7cAn3FaN6/Zbusdi4RX9Fm0rX3tgte8RwZDibQ+oUtTliVBBz0c5XNZzRyR3oJOYE+62NwA+jEOgAh/OUuJS3c+WkY2zbkTssW4HMEPzffDYaYtpILNjbPThLXkRDuj5O1wQCJMzeZAx/fBD1Zqa3zNqXjH8J6OhQV67lovJ2L+k6sR+n8OadZfeivcZZ4BU7EchgSuubsUj2H03DKEJRf9ztqkykFt6S9iA92p1VUIBZAoctgHiuLWSaplHftpqipLKp+SCoJxFGKhuVWxePMcEDn4tblME4EP3BBw/4DoO3ldMXstsn/yIBx1l8L+2jp4xRw3W7eTzjdjJww301sgL8J1abE6HlPzVWGF6iGowoSDGrlJxnoN4QwKF5BkppVdl5/63LQUnnB4f1ZgOkgQ5EZ/ouJBJwyRziJea2+fvYWQGACuksyl+XRp3aI8hJLv11I/812UiXROSjmwhs5dPmfJl+r736zQvJ6b0p5r1xm0WJBmUecrCrbWgsWl+SntPI9X+f4ObMVj71YarPXGDtltgwz++tSowlb/WYNgvjmSSXqUE7r72s Ahpi5aqO iBI6LoZUGwIYdsDEXw6L1Lq17P03VYvy9ALihzQ797FUt4bD88fIsXrAUM7oV8LV30GDaHdJsk7jVp1vSl/333/RBtx1bImqbHP8fawORfvfK84V17G4SVL/2jyoBI+g4blFpqEtMK1Ch9lCqegyklc8beftQ2u4LUFMiDGr8KkDtg5WIruesuwiVgfcZ80du1c58Hgu/BMPOsYCsvOV04wcoxNTmPhKXmY3gDe/OYJR7Ng80lOE4KKTJN7IWpMlYEPn6xarvg1HPAxgBNCX5c4tZkzn6GVPoa7PEJKl9WdXs7JNw8nwRsLk5CypWIFTmgMEk X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org 

MAX_ORDER is not inclusive: the maximum allocation order the buddy
allocator can deliver is MAX_ORDER-1.

Fix MAX_ORDER usage in calculate_order().

Signed-off-by: Kirill A. Shutemov
Cc: Vlastimil Babka
Cc: Christoph Lameter
Cc: Pekka Enberg
Reviewed-by: Vlastimil Babka
---
 mm/slub.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index 39327e98fce3..32eb6b50fe18 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4171,7 +4171,7 @@ static inline int calculate_order(unsigned int size)
 	/*
 	 * Doh this slab cannot be placed using slub_max_order.
 	 */
-	order = calc_slab_order(size, 1, MAX_ORDER, 1);
+	order = calc_slab_order(size, 1, MAX_ORDER - 1, 1);
 	if (order < MAX_ORDER)
 		return order;
 	return -ENOSYS;

From patchwork Wed Mar 15 11:31:32 2023
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 13175677
From: "Kirill A. Shutemov"
To: Andrew Morton, Mel Gorman, Vlastimil Babka, David Hildenbrand
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, "Kirill A.
Shutemov" , Robin Murphy , Jacob Pan Subject: [PATCH 09/10] iommu: Fix MAX_ORDER usage in __iommu_dma_alloc_pages() Date: Wed, 15 Mar 2023 14:31:32 +0300 Message-Id: <20230315113133.11326-10-kirill.shutemov@linux.intel.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230315113133.11326-1-kirill.shutemov@linux.intel.com> References: <20230315113133.11326-1-kirill.shutemov@linux.intel.com> MIME-Version: 1.0 X-Rspam-User: X-Rspamd-Server: rspam02 X-Rspamd-Queue-Id: 4D6E5A0003 X-Stat-Signature: jbrr85s1xd3rogfbw1t7su9319uwcwah X-HE-Tag: 1678879909-367448 X-HE-Meta: U2FsdGVkX19ajibGzJDX2gj0KCKqIO+BiuyNXStV3OtDCzNJplr16B/Qi3AYqty65DA95FLlGiVOrWUusVAn+oUMBPmz1EIKJovrPoHeYcJbh84MFRRPY5XPcjUuCmVR5OCnG2mDM1lf+VNCr5L6+H8lcKhfjgtY9pNt5v4Xewt4sH2ObVbi24oZjqBGRUZXjxgoZ1j1i+nfda+sziXRt3PVo/QecNuQmD78Grrx1aLWNnefbjrd6nl7kUtIU9LwsQMJcjj4d6ocEeT2sFiJjgxg+3GhdHyuqfwLAMXm5DhQSPVd72qR73B4pKQ2GHF4pDgwRZ4WyzP1Pc0yrMmToo/m+4ZMIi4aItOOi9tzW6HKDYg06dYuqYDnqLKyz4o67tXkLdlPzJlQdWufhtnxR3EHv5IOiAONzOPAYdJFEvs2M7aEWqIQk6TlZETEShHN/4BrGqMqf6Z1oKhv0HvjOC/6PRhK70ITIHQbMECgzeXQej0KYeAcyMPX3/aM58N8aVc/cFZGCwMAnXqCbj07+ZKl++nHwQ4m1dvaMGPGA1HA+5rHsBswiKv+zsqf71MMkTFq7kJzaDwYOlp5U5w8iHCIMzTZitPYYl3gUH10wwMfJI3a8RsVKZyzssKlMyiGMCpND/3qcHYg4HbybNqU0vx8JrqbzAvPB8Vy6LsqLLYqrDCer4n/hHtotbxTTXix6UGEssG2WAgrJX59x+Qq/xFzuoyTvbVlwlfY3+TjyfHrSOhkJ/i/9OcTaG4NRvWO6YNQvfBWxEqm0N47ZxWEsK/iCOMxd4CW+AbfPbu8Q5DEF0DjM8+j3c6cKW4eeIffg0HPESUxiL6DyLyb74czelayw8K3YRasD6IiF+DDcRLsa/TasxhoSq2ks7y172d+QiFV5r9RUj1xz9mnhTevVmiuMAxCyW1OVhZNA8tODgFYIieAFkuxSgFEf1TkE6io+KbmmRJOAaZYRAZEmBD 6P6TqJcN pjlyexI/bGMpoaJ3DqkSJQys3i8+WV8zJ/NldOStEiMJVzfJzzNX8Rvoe1KdMqQQiwky/8tLEG6QraCtpBVdZAMlYbA+dRv1Ptybhi2U+SWLJvItuRsbMfRs3PHS+tV9MBHraq38HqhfNzznXY9tk6P8lyETGH6mEPZeK7a1ZAksErQ87qvJyCn4muPFEp8Qf6ZFY5CM2jHgXwDe2A8ckNeS2sR27iJEFQRWLbTLxLsMlQHusEeciBOjVAdp3R8So3iY9HOTPRZyqylM= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: 
MAX_ORDER is not inclusive: the maximum allocation order the buddy allocator can deliver is MAX_ORDER-1.

Fix the MAX_ORDER usage in __iommu_dma_alloc_pages().

Also use GENMASK() instead of the hard-to-read "(2U << order) - 1" magic.

Signed-off-by: Kirill A. Shutemov
Cc: Robin Murphy
Cc: Jacob Pan
Acked-by: Robin Murphy
Reviewed-by: Jacob Pan
Reviewed-by: Vlastimil Babka
Acked-by: Joerg Roedel
---
 drivers/iommu/dma-iommu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 99b2646cb5c7..ac996fd6bd9c 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -736,7 +736,7 @@ static struct page **__iommu_dma_alloc_pages(struct device *dev,
 	struct page **pages;
 	unsigned int i = 0, nid = dev_to_node(dev);
 
-	order_mask &= (2U << MAX_ORDER) - 1;
+	order_mask &= GENMASK(MAX_ORDER - 1, 0);
 	if (!order_mask)
 		return NULL;
@@ -756,7 +756,7 @@ static struct page **__iommu_dma_alloc_pages(struct device *dev,
 	 * than a necessity, hence using __GFP_NORETRY until
 	 * falling back to minimum-order allocations.
 	 */
-	for (order_mask &= (2U << __fls(count)) - 1;
+	for (order_mask &= GENMASK(__fls(count), 0);
 	     order_mask; order_mask &= ~order_size) {
 		unsigned int order = __fls(order_mask);
 		gfp_t alloc_flags = gfp;

From patchwork Wed Mar 15 11:31:33 2023
From: "Kirill A. Shutemov"
To: Andrew Morton, Mel Gorman, Vlastimil Babka, David Hildenbrand
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, "Kirill A. Shutemov"
Subject: [PATCH 10/10] mm, treewide: Redefine MAX_ORDER sanely
Date: Wed, 15 Mar 2023 14:31:33 +0300
Message-Id: <20230315113133.11326-11-kirill.shutemov@linux.intel.com>
In-Reply-To: <20230315113133.11326-1-kirill.shutemov@linux.intel.com>
References: <20230315113133.11326-1-kirill.shutemov@linux.intel.com>

MAX_ORDER is currently defined as the number of orders the page allocator supports: a user can ask the buddy allocator for page orders between 0 and MAX_ORDER-1.

This definition is counter-intuitive and has led to a number of bugs all over the kernel.

Change the definition of MAX_ORDER to be inclusive: the range of orders a user can ask from the buddy allocator is now 0..MAX_ORDER.

Signed-off-by: Kirill A. Shutemov
Reviewed-by: Vlastimil Babka
Reviewed-by: Michael Ellerman (powerpc)
---
 .../admin-guide/kdump/vmcoreinfo.rst          |  2 +-
 .../admin-guide/kernel-parameters.txt         |  2 +-
 arch/arc/Kconfig                              |  4 +-
 arch/arm/Kconfig                              |  9 ++---
 arch/arm/configs/imx_v6_v7_defconfig          |  2 +-
 arch/arm/configs/milbeaut_m10v_defconfig      |  2 +-
 arch/arm/configs/oxnas_v6_defconfig           |  2 +-
 arch/arm/configs/pxa_defconfig                |  2 +-
 arch/arm/configs/sama7_defconfig              |  2 +-
 arch/arm/configs/sp7021_defconfig             |  2 +-
 arch/arm64/Kconfig                            | 27 ++++++-------
 arch/arm64/include/asm/sparsemem.h            |  2 +-
 arch/arm64/kvm/hyp/include/nvhe/gfp.h         |  2 +-
 arch/arm64/kvm/hyp/nvhe/page_alloc.c          | 10 ++---
 arch/csky/Kconfig                             |  2 +-
 arch/ia64/Kconfig                             |  8 ++--
 arch/ia64/include/asm/sparsemem.h             |  4 +-
 arch/ia64/mm/hugetlbpage.c                    |  2 +-
 arch/loongarch/Kconfig                        | 15 +++----
 arch/m68k/Kconfig.cpu                         |  5 +--
 arch/mips/Kconfig                             | 19 ++++------
 arch/nios2/Kconfig                            |  7 +---
 arch/powerpc/Kconfig                          | 27 ++++++-------
 arch/powerpc/configs/85xx/ge_imp3a_defconfig  |  2 +-
 arch/powerpc/configs/fsl-emb-nonhw.config     |  2 +-
 arch/powerpc/mm/book3s64/iommu_api.c          |  2 +-
 arch/powerpc/mm/hugetlbpage.c                 |  2 +-
 arch/powerpc/platforms/powernv/pci-ioda.c     |  2 +-
 arch/sh/configs/ecovec24_defconfig            |  2 +-
 arch/sh/mm/Kconfig                            | 17 ++++-----
 arch/sparc/Kconfig                            |  5 +--
 arch/sparc/kernel/pci_sun4v.c                 |  2 +-
 arch/sparc/kernel/traps_64.c                  |  2 +-
 arch/sparc/mm/tsb.c                           |  4 +-
 arch/um/kernel/um_arch.c                      |  4 +-
 arch/xtensa/Kconfig                           |  5 +--
 drivers/base/regmap/regmap-debugfs.c          |  8 ++--
 drivers/block/floppy.c                        |  2 +-
 drivers/crypto/ccp/sev-dev.c                  |  2 +-
 drivers/crypto/hisilicon/sgl.c                |  6 +--
 drivers/gpu/drm/i915/gem/i915_gem_internal.c  |  2 +-
 .../gpu/drm/i915/gem/selftests/huge_pages.c   |  2 +-
 drivers/gpu/drm/ttm/ttm_pool.c                | 22 +++++------
 drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h   |  2 +-
 drivers/iommu/dma-iommu.c                     |  2 +-
 drivers/irqchip/irq-gic-v3-its.c              |  4 +-
 drivers/md/dm-bufio.c                         |  2 +-
 drivers/misc/genwqe/card_dev.c                |  2 +-
 drivers/misc/genwqe/card_utils.c              |  4 +-
 .../net/ethernet/hisilicon/hns3/hns3_enet.c   |  2 +-
 drivers/net/ethernet/ibm/ibmvnic.h            |  2 +-
 drivers/video/fbdev/hyperv_fb.c               |  4 +-
 drivers/video/fbdev/vermilion/vermilion.c     |  2 +-
 drivers/virtio/virtio_balloon.c               |  2 +-
 drivers/virtio/virtio_mem.c                   | 12 +++---
 fs/ramfs/file-nommu.c                         |  2 +-
 include/drm/ttm/ttm_pool.h                    |  2 +-
 include/linux/hugetlb.h                       |  2 +-
 include/linux/mmzone.h                        | 10 ++---
 include/linux/pageblock-flags.h               |  4 +-
 include/linux/slab.h                          |  6 +--
 kernel/crash_core.c                           |  2 +-
 kernel/dma/pool.c                             |  6 +--
 kernel/events/ring_buffer.c                   |  4 +-
 mm/Kconfig                                    |  6 +--
 mm/compaction.c                               |  8 ++--
 mm/debug_vm_pgtable.c                         |  4 +-
 mm/huge_memory.c                              |  2 +-
 mm/hugetlb.c                                  |  4 +-
 mm/kmsan/init.c                               |  6 +--
 mm/memblock.c                                 |  2 +-
 mm/memory_hotplug.c                           |  4 +-
 mm/page_alloc.c                               | 38 +++++++++----------
 mm/page_isolation.c                           | 12 +++---
 mm/page_owner.c                               |  6 +--
 mm/page_reporting.c                           |  6 +--
 mm/shuffle.h                                  |  2 +-
 mm/slab.c                                     |  2 +-
 mm/slub.c                                     |  6 +--
 mm/vmscan.c                                   |  2 +-
 mm/vmstat.c                                   | 14 +++---
 net/smc/smc_ib.c                              |  2 +-
 security/integrity/ima/ima_crypto.c           |  2 +-
 tools/testing/memblock/linux/mmzone.h         |  6 +--
 84 files changed, 218 insertions(+), 248 deletions(-)

diff --git a/Documentation/admin-guide/kdump/vmcoreinfo.rst b/Documentation/admin-guide/kdump/vmcoreinfo.rst
index 86fd88492870..c267b8c61e97 100644
--- a/Documentation/admin-guide/kdump/vmcoreinfo.rst
+++ b/Documentation/admin-guide/kdump/vmcoreinfo.rst
@@ -172,7 +172,7 @@ variables.
 Offset of the free_list's member. This value is used to compute the number
 of free pages.
 
-Each zone has a free_area structure array called free_area[MAX_ORDER].
+Each zone has a free_area structure array called free_area[MAX_ORDER + 1].
 The free_list represents a linked list of free page blocks.
 
 (list_head, next|prev)
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 6221a1d057dd..50da4f26fad5 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -3969,7 +3969,7 @@
 			[KNL] Minimal page reporting order
 			Format:
 			Adjust the minimal page reporting order. The page
-			reporting is disabled when it exceeds (MAX_ORDER-1).
+			reporting is disabled when it exceeds MAX_ORDER.
 
 	panic=		[KNL] Kernel behaviour on panic: delay
 			timeout > 0: seconds before rebooting
diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig
index d9a13ccf89a3..ab6d701365bb 100644
--- a/arch/arc/Kconfig
+++ b/arch/arc/Kconfig
@@ -556,7 +556,7 @@ endmenu	 # "ARC Architecture Configuration"
 
 config ARCH_FORCE_MAX_ORDER
 	int "Maximum zone order"
-	default "12" if ARC_HUGEPAGE_16M
-	default "11"
+	default "11" if ARC_HUGEPAGE_16M
+	default "10"
 
 source "kernel/power/Kconfig"
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index e24a9820e12f..929e646e84b9 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1355,9 +1355,9 @@ config ARM_MODULE_PLTS
 
 config ARCH_FORCE_MAX_ORDER
 	int "Maximum zone order"
-	default "12" if SOC_AM33XX
-	default "9" if SA1111
-	default "11"
+	default "11" if SOC_AM33XX
+	default "8" if SA1111
+	default "10"
 	help
 	  The kernel memory allocator divides physically contiguous memory
 	  blocks into "zones", where each zone is a power of two number of
@@ -1366,9 +1366,6 @@ config ARCH_FORCE_MAX_ORDER
 	  blocks of physically contiguous memory, then you may need to
 	  increase this value.
 
-	  This config option is actually maximum order plus one. For example,
-	  a value of 11 means that the largest free memory block is 2^10 pages.
-
 config ALIGNMENT_TRAP
 	def_bool CPU_CP15_MMU
 	select HAVE_PROC_CPU if PROC_FS
diff --git a/arch/arm/configs/imx_v6_v7_defconfig b/arch/arm/configs/imx_v6_v7_defconfig
index 6dc6fed12af8..345a67e67dbd 100644
--- a/arch/arm/configs/imx_v6_v7_defconfig
+++ b/arch/arm/configs/imx_v6_v7_defconfig
@@ -31,7 +31,7 @@ CONFIG_SOC_VF610=y
 CONFIG_SMP=y
 CONFIG_ARM_PSCI=y
 CONFIG_HIGHMEM=y
-CONFIG_ARCH_FORCE_MAX_ORDER=14
+CONFIG_ARCH_FORCE_MAX_ORDER=13
 CONFIG_CMDLINE="noinitrd console=ttymxc0,115200"
 CONFIG_KEXEC=y
 CONFIG_CPU_FREQ=y
diff --git a/arch/arm/configs/milbeaut_m10v_defconfig b/arch/arm/configs/milbeaut_m10v_defconfig
index bd29e5012cb0..385ad0f391a8 100644
--- a/arch/arm/configs/milbeaut_m10v_defconfig
+++ b/arch/arm/configs/milbeaut_m10v_defconfig
@@ -26,7 +26,7 @@ CONFIG_THUMB2_KERNEL=y
 # CONFIG_THUMB2_AVOID_R_ARM_THM_JUMP11 is not set
 # CONFIG_ARM_PATCH_IDIV is not set
 CONFIG_HIGHMEM=y
-CONFIG_ARCH_FORCE_MAX_ORDER=12
+CONFIG_ARCH_FORCE_MAX_ORDER=11
 CONFIG_SECCOMP=y
 CONFIG_KEXEC=y
 CONFIG_EFI=y
diff --git a/arch/arm/configs/oxnas_v6_defconfig b/arch/arm/configs/oxnas_v6_defconfig
index 70a67b3fc91b..90779812c6dd 100644
--- a/arch/arm/configs/oxnas_v6_defconfig
+++ b/arch/arm/configs/oxnas_v6_defconfig
@@ -12,7 +12,7 @@ CONFIG_ARCH_OXNAS=y
 CONFIG_MACH_OX820=y
 CONFIG_SMP=y
 CONFIG_NR_CPUS=16
-CONFIG_ARCH_FORCE_MAX_ORDER=12
+CONFIG_ARCH_FORCE_MAX_ORDER=11
 CONFIG_SECCOMP=y
 CONFIG_ARM_APPENDED_DTB=y
 CONFIG_ARM_ATAG_DTB_COMPAT=y
diff --git a/arch/arm/configs/pxa_defconfig b/arch/arm/configs/pxa_defconfig
index e656d3af2266..b46e39369dbb 100644
--- a/arch/arm/configs/pxa_defconfig
+++ b/arch/arm/configs/pxa_defconfig
@@ -20,7 +20,7 @@ CONFIG_PXA_SHARPSL=y
 CONFIG_MACH_AKITA=y
 CONFIG_MACH_BORZOI=y
 CONFIG_AEABI=y
-CONFIG_ARCH_FORCE_MAX_ORDER=9
+CONFIG_ARCH_FORCE_MAX_ORDER=8
 CONFIG_CMDLINE="root=/dev/ram0 ro"
 CONFIG_KEXEC=y
 CONFIG_CPU_FREQ=y
diff --git a/arch/arm/configs/sama7_defconfig b/arch/arm/configs/sama7_defconfig
index 0d964c613d71..954112041403 100644
--- a/arch/arm/configs/sama7_defconfig
+++ b/arch/arm/configs/sama7_defconfig
@@ -19,7 +19,7 @@ CONFIG_ATMEL_CLOCKSOURCE_TCB=y
 # CONFIG_CACHE_L2X0 is not set
 # CONFIG_ARM_PATCH_IDIV is not set
 # CONFIG_CPU_SW_DOMAIN_PAN is not set
-CONFIG_ARCH_FORCE_MAX_ORDER=15
+CONFIG_ARCH_FORCE_MAX_ORDER=14
 CONFIG_UACCESS_WITH_MEMCPY=y
 # CONFIG_ATAGS is not set
 CONFIG_CMDLINE="console=ttyS0,115200 earlyprintk ignore_loglevel"
diff --git a/arch/arm/configs/sp7021_defconfig b/arch/arm/configs/sp7021_defconfig
index 5bca2eb59b86..c6448ac860b6 100644
--- a/arch/arm/configs/sp7021_defconfig
+++ b/arch/arm/configs/sp7021_defconfig
@@ -17,7 +17,7 @@ CONFIG_ARCH_SUNPLUS=y
 # CONFIG_VDSO is not set
 CONFIG_SMP=y
 CONFIG_THUMB2_KERNEL=y
-CONFIG_ARCH_FORCE_MAX_ORDER=12
+CONFIG_ARCH_FORCE_MAX_ORDER=11
 CONFIG_VFP=y
 CONFIG_NEON=y
 CONFIG_MODULES=y
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 1023e896d46b..cb5c6aa3254e 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1476,22 +1476,22 @@ config XEN
 
 # include/linux/mmzone.h requires the following to be true:
 #
-# MAX_ORDER - 1 + PAGE_SHIFT <= SECTION_SIZE_BITS
+# MAX_ORDER + PAGE_SHIFT <= SECTION_SIZE_BITS
 #
-# so the maximum value of MAX_ORDER is SECTION_SIZE_BITS + 1 - PAGE_SHIFT:
+# so the maximum value of MAX_ORDER is SECTION_SIZE_BITS - PAGE_SHIFT:
 #
 #     | SECTION_SIZE_BITS | PAGE_SHIFT | max MAX_ORDER | default MAX_ORDER |
 # ----+-------------------+--------------+-----------------+--------------------+
-# 4K  |        27         |     12     |      16       |        11         |
-# 16K |        27         |     14     |      14       |        12         |
-# 64K |        29         |     16     |      14       |        14         |
+# 4K  |        27         |     12     |      15       |        10         |
+# 16K |        27         |     14     |      13       |        11         |
+# 64K |        29         |     16     |      13       |        13         |
 config ARCH_FORCE_MAX_ORDER
 	int "Maximum zone order" if ARM64_4K_PAGES || ARM64_16K_PAGES
-	default "14" if ARM64_64K_PAGES
-	range 12 14 if ARM64_16K_PAGES
-	default "12" if ARM64_16K_PAGES
-	range 11 16 if ARM64_4K_PAGES
-	default "11"
+	default "13" if ARM64_64K_PAGES
+	range 11 13 if ARM64_16K_PAGES
+	default "11" if ARM64_16K_PAGES
+	range 10 15 if ARM64_4K_PAGES
+	default "10"
 	help
 	  The kernel memory allocator divides physically contiguous memory
 	  blocks into "zones", where each zone is a power of two number of
@@ -1500,14 +1500,11 @@ config ARCH_FORCE_MAX_ORDER
 	  blocks of physically contiguous memory, then you may need to
 	  increase this value.
 
-	  This config option is actually maximum order plus one. For example,
-	  a value of 11 means that the largest free memory block is 2^10 pages.
-
 	  We make sure that we can allocate up to a HugePage size for each configuration.
 	  Hence we have :
-		MAX_ORDER = (PMD_SHIFT - PAGE_SHIFT) + 1 => PAGE_SHIFT - 2
+		MAX_ORDER = PMD_SHIFT - PAGE_SHIFT => PAGE_SHIFT - 3
-	  However for 4K, we choose a higher default value, 11 as opposed to 10, giving us
+	  However for 4K, we choose a higher default value, 10 as opposed to 9, giving us
 	  4M allocations matching the default size used by generic code.
 
 config UNMAP_KERNEL_AT_EL0
diff --git a/arch/arm64/include/asm/sparsemem.h b/arch/arm64/include/asm/sparsemem.h
index 4b73463423c3..5f5437621029 100644
--- a/arch/arm64/include/asm/sparsemem.h
+++ b/arch/arm64/include/asm/sparsemem.h
@@ -10,7 +10,7 @@
 /*
  * Section size must be at least 512MB for 64K base
  * page size config. Otherwise it will be less than
- * (MAX_ORDER - 1) and the build process will fail.
+ * MAX_ORDER and the build process will fail.
  */
 #ifdef CONFIG_ARM64_64K_PAGES
 #define SECTION_SIZE_BITS 29
diff --git a/arch/arm64/kvm/hyp/include/nvhe/gfp.h b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
index 0a048dc06a7d..fe5472a184a3 100644
--- a/arch/arm64/kvm/hyp/include/nvhe/gfp.h
+++ b/arch/arm64/kvm/hyp/include/nvhe/gfp.h
@@ -16,7 +16,7 @@ struct hyp_pool {
 	 * API at EL2.
 	 */
 	hyp_spinlock_t lock;
-	struct list_head free_area[MAX_ORDER];
+	struct list_head free_area[MAX_ORDER + 1];
 	phys_addr_t range_start;
 	phys_addr_t range_end;
 	unsigned short max_order;
diff --git a/arch/arm64/kvm/hyp/nvhe/page_alloc.c b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
index 803ba3222e75..b1e392186a0f 100644
--- a/arch/arm64/kvm/hyp/nvhe/page_alloc.c
+++ b/arch/arm64/kvm/hyp/nvhe/page_alloc.c
@@ -110,7 +110,7 @@ static void __hyp_attach_page(struct hyp_pool *pool,
 	 * after coalescing, so make sure to mark it HYP_NO_ORDER proactively.
 	 */
 	p->order = HYP_NO_ORDER;
-	for (; (order + 1) < pool->max_order; order++) {
+	for (; (order + 1) <= pool->max_order; order++) {
 		buddy = __find_buddy_avail(pool, p, order);
 		if (!buddy)
 			break;
@@ -203,9 +203,9 @@ void *hyp_alloc_pages(struct hyp_pool *pool, unsigned short order)
 	hyp_spin_lock(&pool->lock);
 
 	/* Look for a high-enough-order page */
-	while (i < pool->max_order && list_empty(&pool->free_area[i]))
+	while (i <= pool->max_order && list_empty(&pool->free_area[i]))
 		i++;
-	if (i >= pool->max_order) {
+	if (i > pool->max_order) {
 		hyp_spin_unlock(&pool->lock);
 		return NULL;
 	}
@@ -228,8 +228,8 @@ int hyp_pool_init(struct hyp_pool *pool, u64 pfn, unsigned int nr_pages,
 	int i;
 
 	hyp_spin_lock_init(&pool->lock);
-	pool->max_order = min(MAX_ORDER, get_order((nr_pages + 1) << PAGE_SHIFT));
-	for (i = 0; i < pool->max_order; i++)
+	pool->max_order = min(MAX_ORDER, get_order(nr_pages << PAGE_SHIFT));
+	for (i = 0; i <= pool->max_order; i++)
 		INIT_LIST_HEAD(&pool->free_area[i]);
 	pool->range_start = phys;
 	pool->range_end = phys + (nr_pages << PAGE_SHIFT);
diff --git a/arch/csky/Kconfig b/arch/csky/Kconfig
index dba02da6fa34..c694fac43bed 100644
--- a/arch/csky/Kconfig
+++ b/arch/csky/Kconfig
@@ -334,7 +334,7 @@ config HIGHMEM
 
 config ARCH_FORCE_MAX_ORDER
 	int "Maximum zone order"
-	default "11"
+	default "10"
 
 config DRAM_BASE
 	hex "DRAM start addr (the same with memory-section in dts)"
diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
index d7e4a24e8644..0d2f41fa56ee 100644
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -202,10 +202,10 @@ config IA64_CYCLONE
 	  If you're unsure, answer N.
 
 config ARCH_FORCE_MAX_ORDER
-	int "MAX_ORDER (11 - 17)" if !HUGETLB_PAGE
-	range 11 17 if !HUGETLB_PAGE
-	default "17" if HUGETLB_PAGE
-	default "11"
+	int "MAX_ORDER (10 - 16)" if !HUGETLB_PAGE
+	range 10 16 if !HUGETLB_PAGE
+	default "16" if HUGETLB_PAGE
+	default "10"
 
 config SMP
 	bool "Symmetric multi-processing support"
diff --git a/arch/ia64/include/asm/sparsemem.h b/arch/ia64/include/asm/sparsemem.h
index 84e8ce387b69..a58f8b466d96 100644
--- a/arch/ia64/include/asm/sparsemem.h
+++ b/arch/ia64/include/asm/sparsemem.h
@@ -12,9 +12,9 @@
 #define SECTION_SIZE_BITS	(30)
 #define MAX_PHYSMEM_BITS	(50)
 #ifdef CONFIG_ARCH_FORCE_MAX_ORDER
-#if ((CONFIG_ARCH_FORCE_MAX_ORDER - 1 + PAGE_SHIFT) > SECTION_SIZE_BITS)
+#if (CONFIG_ARCH_FORCE_MAX_ORDER + PAGE_SHIFT > SECTION_SIZE_BITS)
 #undef SECTION_SIZE_BITS
-#define SECTION_SIZE_BITS (CONFIG_ARCH_FORCE_MAX_ORDER - 1 + PAGE_SHIFT)
+#define SECTION_SIZE_BITS (CONFIG_ARCH_FORCE_MAX_ORDER + PAGE_SHIFT)
 #endif
 #endif
diff --git a/arch/ia64/mm/hugetlbpage.c b/arch/ia64/mm/hugetlbpage.c
index 380d2f3966c9..e8dd4323fb86 100644
--- a/arch/ia64/mm/hugetlbpage.c
+++ b/arch/ia64/mm/hugetlbpage.c
@@ -170,7 +170,7 @@ static int __init hugetlb_setup_sz(char *str)
 	size = memparse(str, &str);
 	if (*str || !is_power_of_2(size) || !(tr_pages & size) ||
 		size <= PAGE_SIZE ||
-		size >= (1UL << PAGE_SHIFT << MAX_ORDER)) {
+		size > (1UL << PAGE_SHIFT << MAX_ORDER)) {
 		printk(KERN_WARNING "Invalid huge page size specified\n");
 		return 1;
 	}
diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
index 7fd51257e0ed..272a3a12c98d 100644
--- a/arch/loongarch/Kconfig
+++ b/arch/loongarch/Kconfig
@@ -420,12 +420,12 @@ config NODES_SHIFT
 
 config ARCH_FORCE_MAX_ORDER
 	int "Maximum zone order"
-	range 14 64 if PAGE_SIZE_64KB
-	default "14" if PAGE_SIZE_64KB
-	range 12 64 if PAGE_SIZE_16KB
-	default "12" if PAGE_SIZE_16KB
-	range 11 64
-	default "11"
+	range 13 63 if PAGE_SIZE_64KB
+	default "13" if PAGE_SIZE_64KB
+	range 11 63 if PAGE_SIZE_16KB
+	default "11" if PAGE_SIZE_16KB
+	range 10 63
+	default "10"
 	help
 	  The kernel memory allocator divides physically contiguous memory
 	  blocks into "zones", where each zone is a power of two number of
@@ -434,9 +434,6 @@ config ARCH_FORCE_MAX_ORDER
 	  blocks of physically contiguous memory, then you may need to
 	  increase this value.
 
-	  This config option is actually maximum order plus one. For example,
-	  a value of 11 means that the largest free memory block is 2^10 pages.
-
 	  The page size is not necessarily 4KB.  Keep this in mind when
 	  choosing a value for this option.
diff --git a/arch/m68k/Kconfig.cpu b/arch/m68k/Kconfig.cpu
index 9380f6e3bb66..c9df6572133f 100644
--- a/arch/m68k/Kconfig.cpu
+++ b/arch/m68k/Kconfig.cpu
@@ -400,7 +400,7 @@ config SINGLE_MEMORY_CHUNK
 config ARCH_FORCE_MAX_ORDER
 	int "Maximum zone order" if ADVANCED
 	depends on !SINGLE_MEMORY_CHUNK
-	default "11"
+	default "10"
 	help
 	  The kernel memory allocator divides physically contiguous memory
 	  blocks into "zones", where each zone is a power of two number of
@@ -413,9 +413,6 @@ config ARCH_FORCE_MAX_ORDER
 	  value also defines the minimal size of the hole that allows
 	  freeing unused memory map.
 
-	  This config option is actually maximum order plus one. For example,
-	  a value of 11 means that the largest free memory block is 2^10 pages.
-
 config 060_WRITETHROUGH
 	bool "Use write-through caching for 68060 supervisor accesses"
 	depends on ADVANCED && M68060
diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index e2f3ca73f40d..3e8b765b8c7b 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -2137,14 +2137,14 @@ endchoice
 
 config ARCH_FORCE_MAX_ORDER
 	int "Maximum zone order"
-	range 14 64 if MIPS_HUGE_TLB_SUPPORT && PAGE_SIZE_64KB
-	default "14" if MIPS_HUGE_TLB_SUPPORT && PAGE_SIZE_64KB
-	range 13 64 if MIPS_HUGE_TLB_SUPPORT && PAGE_SIZE_32KB
-	default "13" if MIPS_HUGE_TLB_SUPPORT && PAGE_SIZE_32KB
-	range 12 64 if MIPS_HUGE_TLB_SUPPORT && PAGE_SIZE_16KB
-	default "12" if MIPS_HUGE_TLB_SUPPORT && PAGE_SIZE_16KB
-	range 0 64
-	default "11"
+	range 13 63 if MIPS_HUGE_TLB_SUPPORT && PAGE_SIZE_64KB
+	default "13" if MIPS_HUGE_TLB_SUPPORT && PAGE_SIZE_64KB
+	range 12 63 if MIPS_HUGE_TLB_SUPPORT && PAGE_SIZE_32KB
+	default "12" if MIPS_HUGE_TLB_SUPPORT && PAGE_SIZE_32KB
+	range 11 63 if MIPS_HUGE_TLB_SUPPORT && PAGE_SIZE_16KB
+	default "11" if MIPS_HUGE_TLB_SUPPORT && PAGE_SIZE_16KB
+	range 0 63
+	default "10"
 	help
 	  The kernel memory allocator divides physically contiguous memory
 	  blocks into "zones", where each zone is a power of two number of
@@ -2153,9 +2153,6 @@ config ARCH_FORCE_MAX_ORDER
 	  blocks of physically contiguous memory, then you may need to
 	  increase this value.
 
-	  This config option is actually maximum order plus one. For example,
-	  a value of 11 means that the largest free memory block is 2^10 pages.
-
 	  The page size is not necessarily 4KB.  Keep this in mind
 	  when choosing a value for this option.
diff --git a/arch/nios2/Kconfig b/arch/nios2/Kconfig
index a582f72104f3..89708b95978c 100644
--- a/arch/nios2/Kconfig
+++ b/arch/nios2/Kconfig
@@ -46,8 +46,8 @@ source "kernel/Kconfig.hz"
 
 config ARCH_FORCE_MAX_ORDER
 	int "Maximum zone order"
-	range 9 20
-	default "11"
+	range 8 19
+	default "10"
 	help
 	  The kernel memory allocator divides physically contiguous memory
 	  blocks into "zones", where each zone is a power of two number of
@@ -56,9 +56,6 @@ config ARCH_FORCE_MAX_ORDER
 	  blocks of physically contiguous memory, then you may need to
 	  increase this value.
 
-	  This config option is actually maximum order plus one. For example,
-	  a value of 11 means that the largest free memory block is 2^10 pages.
-
 endmenu
 
 source "arch/nios2/platform/Kconfig.platform"
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index a6c4407d3ec8..90bc0c7f2728 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -896,18 +896,18 @@ config DATA_SHIFT
 
 config ARCH_FORCE_MAX_ORDER
 	int "Maximum zone order"
-	range 8 9 if PPC64 && PPC_64K_PAGES
-	default "9" if PPC64 && PPC_64K_PAGES
-	range 13 13 if PPC64 && !PPC_64K_PAGES
-	default "13" if PPC64 && !PPC_64K_PAGES
-	range 9 64 if PPC32 && PPC_16K_PAGES
-	default "9" if PPC32 && PPC_16K_PAGES
-	range 7 64 if PPC32 && PPC_64K_PAGES
-	default "7" if PPC32 && PPC_64K_PAGES
-	range 5 64 if PPC32 && PPC_256K_PAGES
-	default "5" if PPC32 && PPC_256K_PAGES
-	range 11 64
-	default "11"
+	range 7 8 if PPC64 && PPC_64K_PAGES
+	default "8" if PPC64 && PPC_64K_PAGES
+	range 12 12 if PPC64 && !PPC_64K_PAGES
+	default "12" if PPC64 && !PPC_64K_PAGES
+	range 8 63 if PPC32 && PPC_16K_PAGES
+	default "8" if PPC32 && PPC_16K_PAGES
+	range 6 63 if PPC32 && PPC_64K_PAGES
+	default "6" if PPC32 && PPC_64K_PAGES
+	range 4 63 if PPC32 && PPC_256K_PAGES
+	default "4" if PPC32 && PPC_256K_PAGES
+	range 10 63
+	default "10"
 	help
 	  The kernel memory allocator divides physically contiguous memory
 	  blocks into "zones", where each zone is a power of two number of
@@ -916,9 +916,6 @@ config ARCH_FORCE_MAX_ORDER
 	  blocks of physically contiguous memory, then you may need to
 	  increase this value.
 
-	  This config option is actually maximum order plus one. For example,
-	  a value of 11 means that the largest free memory block is 2^10 pages.
-
 	  The page size is not necessarily 4KB. For example, on 64-bit
 	  systems, 64KB pages can be enabled via CONFIG_PPC_64K_PAGES. Keep
 	  this in mind when choosing a value for this option.
diff --git a/arch/powerpc/configs/85xx/ge_imp3a_defconfig b/arch/powerpc/configs/85xx/ge_imp3a_defconfig
index ea719898b581..6cb7e90d52c1 100644
--- a/arch/powerpc/configs/85xx/ge_imp3a_defconfig
+++ b/arch/powerpc/configs/85xx/ge_imp3a_defconfig
@@ -30,7 +30,7 @@ CONFIG_PREEMPT=y
 # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
 CONFIG_BINFMT_MISC=m
 CONFIG_MATH_EMULATION=y
-CONFIG_ARCH_FORCE_MAX_ORDER=17
+CONFIG_ARCH_FORCE_MAX_ORDER=16
 CONFIG_PCI=y
 CONFIG_PCIEPORTBUS=y
 CONFIG_PCI_MSI=y
diff --git a/arch/powerpc/configs/fsl-emb-nonhw.config b/arch/powerpc/configs/fsl-emb-nonhw.config
index ab8a8c4530d9..3009b0efaf34 100644
--- a/arch/powerpc/configs/fsl-emb-nonhw.config
+++ b/arch/powerpc/configs/fsl-emb-nonhw.config
@@ -41,7 +41,7 @@ CONFIG_FIXED_PHY=y
 CONFIG_FONT_8x16=y
 CONFIG_FONT_8x8=y
 CONFIG_FONTS=y
-CONFIG_ARCH_FORCE_MAX_ORDER=13
+CONFIG_ARCH_FORCE_MAX_ORDER=12
 CONFIG_FRAMEBUFFER_CONSOLE=y
 CONFIG_FRAME_WARN=1024
 CONFIG_FTL=y
diff --git a/arch/powerpc/mm/book3s64/iommu_api.c b/arch/powerpc/mm/book3s64/iommu_api.c
index 7fcfba162e0d..81d7185e2ae8 100644
--- a/arch/powerpc/mm/book3s64/iommu_api.c
+++ b/arch/powerpc/mm/book3s64/iommu_api.c
@@ -97,7 +97,7 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
 	}
 
 	mmap_read_lock(mm);
-	chunk = (1UL << (PAGE_SHIFT + MAX_ORDER - 1)) /
+	chunk = (1UL << (PAGE_SHIFT + MAX_ORDER)) /
 			sizeof(struct vm_area_struct *);
 	chunk = min(chunk, entries);
 	for (entry = 0; entry < entries; entry += chunk) {
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index f1ba8d1e8c1a..b900933507da 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -615,7 +615,7 @@ void __init gigantic_hugetlb_cma_reserve(void)
 	order = mmu_psize_to_shift(MMU_PAGE_16G) - PAGE_SHIFT;
 
 	if (order) {
-		VM_WARN_ON(order < MAX_ORDER);
+		VM_WARN_ON(order <= MAX_ORDER);
 		hugetlb_cma_reserve(order);
 	}
 }
diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index 4f6e20a35aa1..5a81f106068e 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
@@ -1740,7 +1740,7 @@ static long pnv_pci_ioda2_setup_default_config(struct pnv_ioda_pe *pe)
 	 * DMA window can be larger than available memory, which will
 	 * cause errors later.
 	 */
-	const u64 maxblock = 1UL << (PAGE_SHIFT + MAX_ORDER - 1);
+	const u64 maxblock = 1UL << (PAGE_SHIFT + MAX_ORDER);
 
 	/*
 	 * We create the default window as big as we can. The constraint is
diff --git a/arch/sh/configs/ecovec24_defconfig b/arch/sh/configs/ecovec24_defconfig
index b52e14ccb450..4d655e8d4d74 100644
--- a/arch/sh/configs/ecovec24_defconfig
+++ b/arch/sh/configs/ecovec24_defconfig
@@ -8,7 +8,7 @@ CONFIG_MODULES=y
 CONFIG_MODULE_UNLOAD=y
 # CONFIG_BLK_DEV_BSG is not set
 CONFIG_CPU_SUBTYPE_SH7724=y
-CONFIG_ARCH_FORCE_MAX_ORDER=12
+CONFIG_ARCH_FORCE_MAX_ORDER=11
 CONFIG_MEMORY_SIZE=0x10000000
 CONFIG_FLATMEM_MANUAL=y
 CONFIG_SH_ECOVEC=y
diff --git a/arch/sh/mm/Kconfig b/arch/sh/mm/Kconfig
index 411fdc0901f7..40271090bd7d 100644
--- a/arch/sh/mm/Kconfig
+++ b/arch/sh/mm/Kconfig
@@ -20,13 +20,13 @@ config PAGE_OFFSET
 
 config ARCH_FORCE_MAX_ORDER
 	int "Maximum zone order"
-	range 9 64 if PAGE_SIZE_16KB
-	default "9" if PAGE_SIZE_16KB
-	range 7 64 if PAGE_SIZE_64KB
-	default "7" if PAGE_SIZE_64KB
-	range 11 64
-	default "14" if !MMU
-	default "11"
+	range 8 63 if PAGE_SIZE_16KB
+	default "8" if PAGE_SIZE_16KB
+	range 6 63 if PAGE_SIZE_64KB
+	default "6" if PAGE_SIZE_64KB
+	range 10 63
+	default "13" if !MMU
+	default "10"
 	help
 	  The kernel memory allocator divides physically contiguous memory
 	  blocks into "zones", where each zone is a power of two number of
@@ -35,9 +35,6 @@ config ARCH_FORCE_MAX_ORDER
 	  blocks of physically contiguous memory, then you may need to
 	  increase this value.
 
-	  This config option is actually maximum order plus one. For example,
-	  a value of 11 means that the largest free memory block is 2^10 pages.
-
 	  The page size is not necessarily 4KB.  Keep this in mind
 	  when choosing a value for this option.
diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index 84437a4c6545..e3242bf5a8df 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -271,7 +271,7 @@ config ARCH_SPARSEMEM_DEFAULT
 
 config ARCH_FORCE_MAX_ORDER
 	int "Maximum zone order"
-	default "13"
+	default "12"
 	help
 	  The kernel memory allocator divides physically contiguous memory
 	  blocks into "zones", where each zone is a power of two number of
@@ -280,9 +280,6 @@ config ARCH_FORCE_MAX_ORDER
 	  blocks of physically contiguous memory, then you may need to
 	  increase this value.
 
-	  This config option is actually maximum order plus one. For example,
-	  a value of 13 means that the largest free memory block is 2^12 pages.
-
 if SPARC64 || COMPILE_TEST
 source "kernel/power/Kconfig"
 endif
diff --git a/arch/sparc/kernel/pci_sun4v.c b/arch/sparc/kernel/pci_sun4v.c
index 384480971805..7d91ca6aa675 100644
--- a/arch/sparc/kernel/pci_sun4v.c
+++ b/arch/sparc/kernel/pci_sun4v.c
@@ -193,7 +193,7 @@ static void *dma_4v_alloc_coherent(struct device *dev, size_t size,
 
 	size = IO_PAGE_ALIGN(size);
 	order = get_order(size);
-	if (unlikely(order >= MAX_ORDER))
+	if (unlikely(order > MAX_ORDER))
 		return NULL;
 
 	npages = size >> IO_PAGE_SHIFT;
diff --git a/arch/sparc/kernel/traps_64.c b/arch/sparc/kernel/traps_64.c
index 5b4de4a89dec..08ffd17d5ec3 100644
--- a/arch/sparc/kernel/traps_64.c
+++ b/arch/sparc/kernel/traps_64.c
@@ -897,7 +897,7 @@ void __init cheetah_ecache_flush_init(void)
 
 	/* Now allocate error trap reporting scoreboard. */
 	sz = NR_CPUS * (2 * sizeof(struct cheetah_err_info));
-	for (order = 0; order < MAX_ORDER; order++) {
+	for (order = 0; order <= MAX_ORDER; order++) {
 		if ((PAGE_SIZE << order) >= sz)
 			break;
 	}
diff --git a/arch/sparc/mm/tsb.c b/arch/sparc/mm/tsb.c
index dba8dffe2113..5e2931a18409 100644
--- a/arch/sparc/mm/tsb.c
+++ b/arch/sparc/mm/tsb.c
@@ -402,8 +402,8 @@ void tsb_grow(struct mm_struct *mm, unsigned long tsb_index, unsigned long rss)
 	unsigned long new_rss_limit;
 	gfp_t gfp_flags;
 
-	if (max_tsb_size > (PAGE_SIZE << (MAX_ORDER - 1)))
-		max_tsb_size = (PAGE_SIZE << (MAX_ORDER - 1));
+	if (max_tsb_size > PAGE_SIZE << MAX_ORDER)
+		max_tsb_size = PAGE_SIZE << MAX_ORDER;
 
 	new_cache_index = 0;
 	for (new_size = 8192; new_size < max_tsb_size; new_size <<= 1UL) {
diff --git a/arch/um/kernel/um_arch.c b/arch/um/kernel/um_arch.c
index 5e5a9c8e0e5d..8dcda617b8bf 100644
--- a/arch/um/kernel/um_arch.c
+++ b/arch/um/kernel/um_arch.c
@@ -368,10 +368,10 @@ int __init linux_main(int argc, char **argv)
 	max_physmem = TASK_SIZE - uml_physmem - iomem_size - MIN_VMALLOC;
 
 	/*
-	 * Zones have to begin on a 1 << MAX_ORDER-1 page boundary,
+	 * Zones have to begin on a 1 << MAX_ORDER page boundary,
 	 * so this makes sure that's true for highmem
 	 */
-	max_physmem &= ~((1 << (PAGE_SHIFT + MAX_ORDER - 1)) - 1);
+	max_physmem &= ~((1 << (PAGE_SHIFT + MAX_ORDER)) - 1);
 	if (physmem_size + iomem_size > max_physmem) {
 		highmem = physmem_size + iomem_size - max_physmem;
 		physmem_size -= highmem;
diff --git a/arch/xtensa/Kconfig b/arch/xtensa/Kconfig
index bcb0c5d2abc2..3eee334ba873 100644
--- a/arch/xtensa/Kconfig
+++ b/arch/xtensa/Kconfig
@@ -773,7 +773,7 @@ config HIGHMEM
 
 config ARCH_FORCE_MAX_ORDER
 	int "Maximum zone order"
-	default "11"
+	default "10"
 	help
 	  The kernel memory allocator divides physically contiguous memory
 	  blocks into "zones", where each zone is a power of two number of
@@ -782,9 +782,6 @@ config ARCH_FORCE_MAX_ORDER
 	  blocks of physically contiguous memory, then you may need to
 	  increase this value.
 
-	  This config option is actually maximum order plus one. For example,
-	  a value of 11 means that the largest free memory block is 2^10 pages.
-
 endmenu
 
 menu "Power management options"
diff --git a/drivers/base/regmap/regmap-debugfs.c b/drivers/base/regmap/regmap-debugfs.c
index 817eda2075aa..c491fabe3617 100644
--- a/drivers/base/regmap/regmap-debugfs.c
+++ b/drivers/base/regmap/regmap-debugfs.c
@@ -226,8 +226,8 @@ static ssize_t regmap_read_debugfs(struct regmap *map, unsigned int from,
 	if (*ppos < 0 || !count)
 		return -EINVAL;
 
-	if (count > (PAGE_SIZE << (MAX_ORDER - 1)))
-		count = PAGE_SIZE << (MAX_ORDER - 1);
+	if (count > (PAGE_SIZE << MAX_ORDER))
+		count = PAGE_SIZE << MAX_ORDER;
 
 	buf = kmalloc(count, GFP_KERNEL);
 	if (!buf)
@@ -373,8 +373,8 @@ static ssize_t regmap_reg_ranges_read_file(struct file *file,
 	if (*ppos < 0 || !count)
 		return -EINVAL;
 
-	if (count > (PAGE_SIZE << (MAX_ORDER - 1)))
-		count = PAGE_SIZE << (MAX_ORDER - 1);
+	if (count > (PAGE_SIZE << MAX_ORDER))
+		count = PAGE_SIZE << MAX_ORDER;
 
 	buf = kmalloc(count, GFP_KERNEL);
 	if (!buf)
diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
index 90d2dfb6448e..cec2c20f5e59 100644
--- a/drivers/block/floppy.c
+++ b/drivers/block/floppy.c
@@ -3079,7 +3079,7 @@ static void raw_cmd_free(struct floppy_raw_cmd **ptr)
 	}
 }
 
-#define MAX_LEN (1UL << (MAX_ORDER - 1) << PAGE_SHIFT)
+#define MAX_LEN (1UL << MAX_ORDER << PAGE_SHIFT)
 
 static int raw_cmd_copyin(int cmd, void __user *param,
 			  struct floppy_raw_cmd **rcmd)
diff --git a/drivers/crypto/ccp/sev-dev.c b/drivers/crypto/ccp/sev-dev.c
index e2f25926eb51..bf095baca244 100644
--- a/drivers/crypto/ccp/sev-dev.c
+++ b/drivers/crypto/ccp/sev-dev.c
@@ -886,7 +886,7 @@ static int sev_ioctl_do_get_id2(struct sev_issue_cmd *argp)
 	/*
 	 * The length of the ID shouldn't be assumed by software since
 	 * it may change in the future. The allocation size is limited
-	 * to 1 << (PAGE_SHIFT + MAX_ORDER - 1) by the page allocator.
+	 * to 1 << (PAGE_SHIFT + MAX_ORDER) by the page allocator.
 	 * If the allocation fails, simply return ENOMEM rather than
 	 * warning in the kernel log.
 	 */
diff --git a/drivers/crypto/hisilicon/sgl.c b/drivers/crypto/hisilicon/sgl.c
index 09586a837b1e..3df7a256e919 100644
--- a/drivers/crypto/hisilicon/sgl.c
+++ b/drivers/crypto/hisilicon/sgl.c
@@ -70,11 +70,11 @@ struct hisi_acc_sgl_pool *hisi_acc_create_sgl_pool(struct device *dev,
 			   HISI_ACC_SGL_ALIGN_SIZE);
 
 	/*
-	 * the pool may allocate a block of memory of size PAGE_SIZE * 2^(MAX_ORDER - 1),
+	 * the pool may allocate a block of memory of size PAGE_SIZE * 2^MAX_ORDER,
 	 * block size may exceed 2^31 on ia64, so the max of block size is 2^31
 	 */
-	block_size = 1 << (PAGE_SHIFT + MAX_ORDER <= 32 ?
-			   PAGE_SHIFT + MAX_ORDER - 1 : 31);
+	block_size = 1 << (PAGE_SHIFT + MAX_ORDER < 32 ?
+			   PAGE_SHIFT + MAX_ORDER : 31);
 	sgl_num_per_block = block_size / sgl_size;
 	block_num = count / sgl_num_per_block;
 	remain_sgl = count % sgl_num_per_block;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
index eae9e9f6d3bf..6bc26b4b06b8 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
@@ -36,7 +36,7 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
 	struct sg_table *st;
 	struct scatterlist *sg;
 	unsigned int npages; /* restricted by sg_alloc_table */
-	int max_order = MAX_ORDER - 1;
+	int max_order = MAX_ORDER;
 	unsigned int max_segment;
 	gfp_t gfp;
diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
index defece0bcb81..99f39a5feca1 100644
--- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
+++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
@@ -115,7 +115,7 @@ static int get_huge_pages(struct drm_i915_gem_object *obj)
 	do {
 		struct page *page;
 
-		GEM_BUG_ON(order >= MAX_ORDER);
+		GEM_BUG_ON(order > MAX_ORDER);
 		page = alloc_pages(GFP | __GFP_ZERO, order);
 		if (!page)
 			goto err;
diff --git a/drivers/gpu/drm/ttm/ttm_pool.c b/drivers/gpu/drm/ttm/ttm_pool.c
index aa116a7bbae3..6c8585abe08d 100644
--- a/drivers/gpu/drm/ttm/ttm_pool.c
+++ b/drivers/gpu/drm/ttm/ttm_pool.c
@@ -65,11 +65,11 @@ module_param(page_pool_size, ulong, 0644);
 
 static atomic_long_t allocated_pages;
 
-static struct ttm_pool_type global_write_combined[MAX_ORDER];
-static struct ttm_pool_type global_uncached[MAX_ORDER];
+static struct ttm_pool_type global_write_combined[MAX_ORDER + 1];
+static struct ttm_pool_type global_uncached[MAX_ORDER + 1];
 
-static struct ttm_pool_type global_dma32_write_combined[MAX_ORDER];
-static struct ttm_pool_type global_dma32_uncached[MAX_ORDER];
+static struct ttm_pool_type global_dma32_write_combined[MAX_ORDER + 1];
+static struct ttm_pool_type global_dma32_uncached[MAX_ORDER + 1];
 
 static spinlock_t shrinker_lock;
 static struct list_head shrinker_list;
@@ -405,7 +405,7 @@ int ttm_pool_alloc(struct ttm_pool *pool, struct ttm_tt *tt,
 	else
 		gfp_flags |= GFP_HIGHUSER;
 
-	for (order = min_t(unsigned int, MAX_ORDER - 1, __fls(num_pages));
+	for (order = min_t(unsigned int, MAX_ORDER, __fls(num_pages));
 	     num_pages;
 	     order = min_t(unsigned int, order, __fls(num_pages))) {
 		struct ttm_pool_type *pt;
@@ -542,7 +542,7 @@ void ttm_pool_init(struct ttm_pool *pool, struct device *dev,
 
 	if (use_dma_alloc) {
 		for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i)
-			for (j = 0; j < MAX_ORDER; ++j)
+			for (j = 0; j <= MAX_ORDER; ++j)
 				ttm_pool_type_init(&pool->caching[i].orders[j],
 						   pool, i, j);
 	}
@@ -562,7 +562,7 @@ void ttm_pool_fini(struct ttm_pool *pool)
 
 	if (pool->use_dma_alloc) {
 		for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i)
-			for (j = 0; j < MAX_ORDER; ++j)
+			for (j = 0; j <= MAX_ORDER; ++j)
 				ttm_pool_type_fini(&pool->caching[i].orders[j]);
 	}
 
@@ -616,7 +616,7 @@ static void ttm_pool_debugfs_header(struct seq_file *m)
 	unsigned int i;
 
 	seq_puts(m, "\t ");
-	for (i = 0; i < MAX_ORDER; ++i)
+	for (i = 0; i <= MAX_ORDER; ++i)
 		seq_printf(m, " ---%2u---", i);
 	seq_puts(m, "\n");
 }
@@ -627,7 +627,7 @@ static void ttm_pool_debugfs_orders(struct ttm_pool_type *pt,
 {
 	unsigned int i;
 
-	for (i = 0; i < MAX_ORDER; ++i)
+	for (i = 0; i <= MAX_ORDER; ++i)
 		seq_printf(m, " %8u", ttm_pool_type_count(&pt[i]));
 	seq_puts(m, "\n");
 }
@@ -736,7 +736,7 @@ int ttm_pool_mgr_init(unsigned long num_pages)
 	spin_lock_init(&shrinker_lock);
 	INIT_LIST_HEAD(&shrinker_list);
 
-	for (i = 0; i < MAX_ORDER; ++i) {
+	for (i = 0; i <= MAX_ORDER; ++i) {
 		ttm_pool_type_init(&global_write_combined[i], NULL,
 				   ttm_write_combined, i);
 		ttm_pool_type_init(&global_uncached[i], NULL, ttm_uncached, i);
@@ -769,7 +769,7 @@ void ttm_pool_mgr_fini(void)
 {
 	unsigned int i;
 
-	for (i = 0; i < MAX_ORDER; ++i) {
+	for (i = 0; i <= MAX_ORDER; ++i) {
 		ttm_pool_type_fini(&global_write_combined[i]);
 		ttm_pool_type_fini(&global_uncached[i]);
diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h index 8d772ea8a583..b574c58a3487 100644 --- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h +++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.h @@ -182,7 +182,7 @@ #ifdef CONFIG_CMA_ALIGNMENT #define Q_MAX_SZ_SHIFT (PAGE_SHIFT + CONFIG_CMA_ALIGNMENT) #else -#define Q_MAX_SZ_SHIFT (PAGE_SHIFT + MAX_ORDER - 1) +#define Q_MAX_SZ_SHIFT (PAGE_SHIFT + MAX_ORDER) #endif /* diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index ac996fd6bd9c..7a9f0b0bddbd 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -736,7 +736,7 @@ static struct page **__iommu_dma_alloc_pages(struct device *dev, struct page **pages; unsigned int i = 0, nid = dev_to_node(dev); - order_mask &= GENMASK(MAX_ORDER - 1, 0); + order_mask &= GENMASK(MAX_ORDER, 0); if (!order_mask) return NULL; diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c index 586271b8aa39..85790b870877 100644 --- a/drivers/irqchip/irq-gic-v3-its.c +++ b/drivers/irqchip/irq-gic-v3-its.c @@ -2440,8 +2440,8 @@ static bool its_parse_indirect_baser(struct its_node *its, * feature is not supported by hardware. */ new_order = max_t(u32, get_order(esz << ids), new_order); - if (new_order >= MAX_ORDER) { - new_order = MAX_ORDER - 1; + if (new_order > MAX_ORDER) { + new_order = MAX_ORDER; ids = ilog2(PAGE_ORDER_TO_SIZE(new_order) / (int)esz); pr_warn("ITS@%pa: %s Table too large, reduce ids %llu->%u\n", &its->phys_base, its_base_type_string[type], diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c index cf077f9b30c3..733053c2eaa0 100644 --- a/drivers/md/dm-bufio.c +++ b/drivers/md/dm-bufio.c @@ -408,7 +408,7 @@ static void __cache_size_refresh(void) * If the allocation may fail we use __get_free_pages. Memory fragmentation * won't have a fatal effect here, but it just causes flushes of some other * buffers and more I/O will be performed. 
Don't use __get_free_pages if it - * always fails (i.e. order >= MAX_ORDER). + * always fails (i.e. order > MAX_ORDER). * * If the allocation shouldn't fail we use __vmalloc. This is only for the * initial reserve allocation, so there's no risk of wasting all vmalloc diff --git a/drivers/misc/genwqe/card_dev.c b/drivers/misc/genwqe/card_dev.c index d0e27438a73c..55fc5b80e649 100644 --- a/drivers/misc/genwqe/card_dev.c +++ b/drivers/misc/genwqe/card_dev.c @@ -443,7 +443,7 @@ static int genwqe_mmap(struct file *filp, struct vm_area_struct *vma) if (vsize == 0) return -EINVAL; - if (get_order(vsize) >= MAX_ORDER) + if (get_order(vsize) > MAX_ORDER) return -ENOMEM; dma_map = kzalloc(sizeof(struct dma_mapping), GFP_KERNEL); diff --git a/drivers/misc/genwqe/card_utils.c b/drivers/misc/genwqe/card_utils.c index ac29698d085a..1c798d6b2dfb 100644 --- a/drivers/misc/genwqe/card_utils.c +++ b/drivers/misc/genwqe/card_utils.c @@ -210,7 +210,7 @@ u32 genwqe_crc32(u8 *buff, size_t len, u32 init) void *__genwqe_alloc_consistent(struct genwqe_dev *cd, size_t size, dma_addr_t *dma_handle) { - if (get_order(size) >= MAX_ORDER) + if (get_order(size) > MAX_ORDER) return NULL; return dma_alloc_coherent(&cd->pci_dev->dev, size, dma_handle, @@ -308,7 +308,7 @@ int genwqe_alloc_sync_sgl(struct genwqe_dev *cd, struct genwqe_sgl *sgl, sgl->write = write; sgl->sgl_size = genwqe_sgl_size(sgl->nr_pages); - if (get_order(sgl->sgl_size) >= MAX_ORDER) { + if (get_order(sgl->sgl_size) > MAX_ORDER) { dev_err(&pci_dev->dev, "[%s] err: too much memory requested!\n", __func__); return ret; diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c index 25be7f8ac7cd..3973ca6adf4c 100644 --- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c +++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c @@ -1041,7 +1041,7 @@ static void hns3_init_tx_spare_buffer(struct hns3_enet_ring *ring) return; order = get_order(alloc_size); - if (order >= MAX_ORDER) { + if 
(order > MAX_ORDER) { if (net_ratelimit()) dev_warn(ring_to_dev(ring), "failed to allocate tx spare buffer, exceed to max order\n"); return; diff --git a/drivers/net/ethernet/ibm/ibmvnic.h b/drivers/net/ethernet/ibm/ibmvnic.h index b35c9b6f913b..4e18b4cefa97 100644 --- a/drivers/net/ethernet/ibm/ibmvnic.h +++ b/drivers/net/ethernet/ibm/ibmvnic.h @@ -75,7 +75,7 @@ * pool for the 4MB. Thus the 16 Rx and Tx queues require 32 * 5 = 160 * plus 16 for the TSO pools for a total of 176 LTB mappings per VNIC. */ -#define IBMVNIC_ONE_LTB_MAX ((u32)((1 << (MAX_ORDER - 1)) * PAGE_SIZE)) +#define IBMVNIC_ONE_LTB_MAX ((u32)((1 << MAX_ORDER) * PAGE_SIZE)) #define IBMVNIC_ONE_LTB_SIZE min((u32)(8 << 20), IBMVNIC_ONE_LTB_MAX) #define IBMVNIC_LTB_SET_SIZE (38 << 20) diff --git a/drivers/video/fbdev/hyperv_fb.c b/drivers/video/fbdev/hyperv_fb.c index ec3f6cf05f8c..34781dec3856 100644 --- a/drivers/video/fbdev/hyperv_fb.c +++ b/drivers/video/fbdev/hyperv_fb.c @@ -946,7 +946,7 @@ static phys_addr_t hvfb_get_phymem(struct hv_device *hdev, if (request_size == 0) return -1; - if (order < MAX_ORDER) { + if (order <= MAX_ORDER) { /* Call alloc_pages if the size is less than 2^MAX_ORDER */ page = alloc_pages(GFP_KERNEL | __GFP_ZERO, order); if (!page) @@ -977,7 +977,7 @@ static void hvfb_release_phymem(struct hv_device *hdev, { unsigned int order = get_order(size); - if (order < MAX_ORDER) + if (order <= MAX_ORDER) __free_pages(pfn_to_page(paddr >> PAGE_SHIFT), order); else dma_free_coherent(&hdev->device, diff --git a/drivers/video/fbdev/vermilion/vermilion.c b/drivers/video/fbdev/vermilion/vermilion.c index 0374ee6b6d03..32e74e02a02f 100644 --- a/drivers/video/fbdev/vermilion/vermilion.c +++ b/drivers/video/fbdev/vermilion/vermilion.c @@ -197,7 +197,7 @@ static int vmlfb_alloc_vram(struct vml_info *vinfo, va = &vinfo->vram[i]; order = 0; - while (requested > (PAGE_SIZE << order) && order < MAX_ORDER) + while (requested > (PAGE_SIZE << order) && order <= MAX_ORDER) order++; err = 
vmlfb_alloc_vram_area(va, order, 0); diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c index 3f78a3a1eb75..5b15936a5214 100644 --- a/drivers/virtio/virtio_balloon.c +++ b/drivers/virtio/virtio_balloon.c @@ -33,7 +33,7 @@ #define VIRTIO_BALLOON_FREE_PAGE_ALLOC_FLAG (__GFP_NORETRY | __GFP_NOWARN | \ __GFP_NOMEMALLOC) /* The order of free page blocks to report to host */ -#define VIRTIO_BALLOON_HINT_BLOCK_ORDER (MAX_ORDER - 1) +#define VIRTIO_BALLOON_HINT_BLOCK_ORDER MAX_ORDER /* The size of a free page block in bytes */ #define VIRTIO_BALLOON_HINT_BLOCK_BYTES \ (1 << (VIRTIO_BALLOON_HINT_BLOCK_ORDER + PAGE_SHIFT)) diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c index 0c2892ec6817..835f6cc2fb66 100644 --- a/drivers/virtio/virtio_mem.c +++ b/drivers/virtio/virtio_mem.c @@ -1120,13 +1120,13 @@ static void virtio_mem_clear_fake_offline(unsigned long pfn, */ static void virtio_mem_fake_online(unsigned long pfn, unsigned long nr_pages) { - unsigned long order = MAX_ORDER - 1; + unsigned long order = MAX_ORDER; unsigned long i; /* * We might get called for ranges that don't cover properly aligned - * MAX_ORDER - 1 pages; however, we can only online properly aligned - * pages with an order of MAX_ORDER - 1 at maximum. + * MAX_ORDER pages; however, we can only online properly aligned + * pages with an order of MAX_ORDER at maximum. */ while (!IS_ALIGNED(pfn | nr_pages, 1 << order)) order--; @@ -1237,9 +1237,9 @@ static void virtio_mem_online_page(struct virtio_mem *vm, bool do_online; /* - * We can get called with any order up to MAX_ORDER - 1. If our - * subblock size is smaller than that and we have a mixture of plugged - * and unplugged subblocks within such a page, we have to process in + * We can get called with any order up to MAX_ORDER. If our subblock + * size is smaller than that and we have a mixture of plugged and + * unplugged subblocks within such a page, we have to process in * smaller granularity. 
In that case we'll adjust the order exactly once * within the loop. */ diff --git a/fs/ramfs/file-nommu.c b/fs/ramfs/file-nommu.c index 2f67516bb9bf..9fbb9b5256f7 100644 --- a/fs/ramfs/file-nommu.c +++ b/fs/ramfs/file-nommu.c @@ -70,7 +70,7 @@ int ramfs_nommu_expand_for_mapping(struct inode *inode, size_t newsize) /* make various checks */ order = get_order(newsize); - if (unlikely(order >= MAX_ORDER)) + if (unlikely(order > MAX_ORDER)) return -EFBIG; ret = inode_newsize_ok(inode, newsize); diff --git a/include/drm/ttm/ttm_pool.h b/include/drm/ttm/ttm_pool.h index ef09b23d29e3..8ce14f9d202a 100644 --- a/include/drm/ttm/ttm_pool.h +++ b/include/drm/ttm/ttm_pool.h @@ -72,7 +72,7 @@ struct ttm_pool { bool use_dma32; struct { - struct ttm_pool_type orders[MAX_ORDER]; + struct ttm_pool_type orders[MAX_ORDER + 1]; } caching[TTM_NUM_CACHING_TYPES]; }; diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h index 7c977d234aba..8fb7d91cd0b1 100644 --- a/include/linux/hugetlb.h +++ b/include/linux/hugetlb.h @@ -818,7 +818,7 @@ static inline unsigned huge_page_shift(struct hstate *h) static inline bool hstate_is_gigantic(struct hstate *h) { - return huge_page_order(h) >= MAX_ORDER; + return huge_page_order(h) > MAX_ORDER; } static inline unsigned int pages_per_huge_page(const struct hstate *h) diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index 9fb1b03b83b2..54a07b8862b9 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -26,11 +26,11 @@ /* Free memory management - zoned buddy allocator. 
*/ #ifndef CONFIG_ARCH_FORCE_MAX_ORDER -#define MAX_ORDER 11 +#define MAX_ORDER 10 #else #define MAX_ORDER CONFIG_ARCH_FORCE_MAX_ORDER #endif -#define MAX_ORDER_NR_PAGES (1 << (MAX_ORDER - 1)) +#define MAX_ORDER_NR_PAGES (1 << MAX_ORDER) /* * PAGE_ALLOC_COSTLY_ORDER is the order at which allocations are deemed @@ -93,7 +93,7 @@ static inline bool migratetype_is_mergeable(int mt) } #define for_each_migratetype_order(order, type) \ - for (order = 0; order < MAX_ORDER; order++) \ + for (order = 0; order <= MAX_ORDER; order++) \ for (type = 0; type < MIGRATE_TYPES; type++) extern int page_group_by_mobility_disabled; @@ -922,7 +922,7 @@ struct zone { CACHELINE_PADDING(_pad1_); /* free areas of different sizes */ - struct free_area free_area[MAX_ORDER]; + struct free_area free_area[MAX_ORDER + 1]; /* zone flags, see below */ unsigned long flags; @@ -1745,7 +1745,7 @@ static inline bool movable_only_nodes(nodemask_t *nodes) #define SECTION_BLOCKFLAGS_BITS \ ((1UL << (PFN_SECTION_SHIFT - pageblock_order)) * NR_PAGEBLOCK_BITS) -#if (MAX_ORDER - 1 + PAGE_SHIFT) > SECTION_SIZE_BITS +#if (MAX_ORDER + PAGE_SHIFT) > SECTION_SIZE_BITS #error Allocator MAX_ORDER exceeds SECTION_SIZE #endif diff --git a/include/linux/pageblock-flags.h b/include/linux/pageblock-flags.h index 5f1ae07d724b..e83c4c095041 100644 --- a/include/linux/pageblock-flags.h +++ b/include/linux/pageblock-flags.h @@ -41,14 +41,14 @@ extern unsigned int pageblock_order; * Huge pages are a constant size, but don't exceed the maximum allocation * granularity. 
*/ -#define pageblock_order min_t(unsigned int, HUGETLB_PAGE_ORDER, MAX_ORDER - 1) +#define pageblock_order min_t(unsigned int, HUGETLB_PAGE_ORDER, MAX_ORDER) #endif /* CONFIG_HUGETLB_PAGE_SIZE_VARIABLE */ #else /* CONFIG_HUGETLB_PAGE */ /* If huge pages are not used, group by MAX_ORDER_NR_PAGES */ -#define pageblock_order (MAX_ORDER-1) +#define pageblock_order MAX_ORDER #endif /* CONFIG_HUGETLB_PAGE */ diff --git a/include/linux/slab.h b/include/linux/slab.h index 45af70315a94..aa4575ef2965 100644 --- a/include/linux/slab.h +++ b/include/linux/slab.h @@ -284,7 +284,7 @@ static inline unsigned int arch_slab_minalign(void) * (PAGE_SIZE*2). Larger requests are passed to the page allocator. */ #define KMALLOC_SHIFT_HIGH (PAGE_SHIFT + 1) -#define KMALLOC_SHIFT_MAX (MAX_ORDER + PAGE_SHIFT - 1) +#define KMALLOC_SHIFT_MAX (MAX_ORDER + PAGE_SHIFT) #ifndef KMALLOC_SHIFT_LOW #define KMALLOC_SHIFT_LOW 5 #endif @@ -292,7 +292,7 @@ static inline unsigned int arch_slab_minalign(void) #ifdef CONFIG_SLUB #define KMALLOC_SHIFT_HIGH (PAGE_SHIFT + 1) -#define KMALLOC_SHIFT_MAX (MAX_ORDER + PAGE_SHIFT - 1) +#define KMALLOC_SHIFT_MAX (MAX_ORDER + PAGE_SHIFT) #ifndef KMALLOC_SHIFT_LOW #define KMALLOC_SHIFT_LOW 3 #endif @@ -305,7 +305,7 @@ static inline unsigned int arch_slab_minalign(void) * be allocated from the same page. 
*/ #define KMALLOC_SHIFT_HIGH PAGE_SHIFT -#define KMALLOC_SHIFT_MAX (MAX_ORDER + PAGE_SHIFT - 1) +#define KMALLOC_SHIFT_MAX (MAX_ORDER + PAGE_SHIFT) #ifndef KMALLOC_SHIFT_LOW #define KMALLOC_SHIFT_LOW 3 #endif diff --git a/kernel/crash_core.c b/kernel/crash_core.c index 755f5f08ab38..90ce1dfd591c 100644 --- a/kernel/crash_core.c +++ b/kernel/crash_core.c @@ -474,7 +474,7 @@ static int __init crash_save_vmcoreinfo_init(void) VMCOREINFO_OFFSET(list_head, prev); VMCOREINFO_OFFSET(vmap_area, va_start); VMCOREINFO_OFFSET(vmap_area, list); - VMCOREINFO_LENGTH(zone.free_area, MAX_ORDER); + VMCOREINFO_LENGTH(zone.free_area, MAX_ORDER + 1); log_buf_vmcoreinfo_setup(); VMCOREINFO_LENGTH(free_area.free_list, MIGRATE_TYPES); VMCOREINFO_NUMBER(NR_FREE_PAGES); diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c index 4d40dcce7604..1acec2e22827 100644 --- a/kernel/dma/pool.c +++ b/kernel/dma/pool.c @@ -84,8 +84,8 @@ static int atomic_pool_expand(struct gen_pool *pool, size_t pool_size, void *addr; int ret = -ENOMEM; - /* Cannot allocate larger than MAX_ORDER-1 */ - order = min(get_order(pool_size), MAX_ORDER-1); + /* Cannot allocate larger than MAX_ORDER */ + order = min(get_order(pool_size), MAX_ORDER); do { pool_size = 1 << (PAGE_SHIFT + order); @@ -190,7 +190,7 @@ static int __init dma_atomic_pool_init(void) /* * If coherent_pool was not used on the command line, default the pool - * sizes to 128KB per 1GB of memory, min 128KB, max MAX_ORDER-1. + * sizes to 128KB per 1GB of memory, min 128KB, max MAX_ORDER. 
*/ if (!atomic_pool_size) { unsigned long pages = totalram_pages() / (SZ_1G / SZ_128K); diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c index d6bbdb7830b2..273a0fe7910a 100644 --- a/kernel/events/ring_buffer.c +++ b/kernel/events/ring_buffer.c @@ -609,8 +609,8 @@ static struct page *rb_alloc_aux_page(int node, int order) { struct page *page; - if (order >= MAX_ORDER) - order = MAX_ORDER - 1; + if (order > MAX_ORDER) + order = MAX_ORDER; do { page = alloc_pages_node(node, PERF_AUX_GFP, order); diff --git a/mm/Kconfig b/mm/Kconfig index 4751031f3f05..fc059969d7ba 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -346,9 +346,9 @@ config SHUFFLE_PAGE_ALLOCATOR the presence of a memory-side-cache. There are also incidental security benefits as it reduces the predictability of page allocations to compliment SLAB_FREELIST_RANDOM, but the - default granularity of shuffling on the "MAX_ORDER - 1" i.e, - 10th order of pages is selected based on cache utilization - benefits on x86. + default granularity of shuffling on the MAX_ORDER i.e, 10th + order of pages is selected based on cache utilization benefits + on x86. While the randomization improves cache utilization it may negatively impact workloads on platforms without a cache. For diff --git a/mm/compaction.c b/mm/compaction.c index 5a9501e0ae01..709136556b9e 100644 --- a/mm/compaction.c +++ b/mm/compaction.c @@ -583,7 +583,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc, if (PageCompound(page)) { const unsigned int order = compound_order(page); - if (likely(order < MAX_ORDER)) { + if (likely(order <= MAX_ORDER)) { blockpfn += (1UL << order) - 1; cursor += (1UL << order) - 1; } @@ -938,7 +938,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn, * a valid page order. Consider only values in the * valid order range to prevent low_pfn overflow. 
*/ - if (freepage_order > 0 && freepage_order < MAX_ORDER) + if (freepage_order > 0 && freepage_order <= MAX_ORDER) low_pfn += (1UL << freepage_order) - 1; continue; } @@ -954,7 +954,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn, if (PageCompound(page) && !cc->alloc_contig) { const unsigned int order = compound_order(page); - if (likely(order < MAX_ORDER)) + if (likely(order <= MAX_ORDER)) low_pfn += (1UL << order) - 1; goto isolate_fail; } @@ -2124,7 +2124,7 @@ static enum compact_result __compact_finished(struct compact_control *cc) /* Direct compactor: Is a suitable page free? */ ret = COMPACT_NO_SUITABLE_PAGE; - for (order = cc->order; order < MAX_ORDER; order++) { + for (order = cc->order; order <= MAX_ORDER; order++) { struct free_area *area = &cc->zone->free_area[order]; bool can_steal; diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c index af59cc7bd307..c9eb007fedcc 100644 --- a/mm/debug_vm_pgtable.c +++ b/mm/debug_vm_pgtable.c @@ -1086,7 +1086,7 @@ debug_vm_pgtable_alloc_huge_page(struct pgtable_debug_args *args, int order) struct page *page = NULL; #ifdef CONFIG_CONTIG_ALLOC - if (order >= MAX_ORDER) { + if (order > MAX_ORDER) { page = alloc_contig_pages((1 << order), GFP_KERNEL, first_online_node, NULL); if (page) { @@ -1096,7 +1096,7 @@ debug_vm_pgtable_alloc_huge_page(struct pgtable_debug_args *args, int order) } #endif - if (order < MAX_ORDER) + if (order <= MAX_ORDER) page = alloc_pages(GFP_KERNEL, order); return page; diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 4fc43859e59a..1c03cab29d22 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -471,7 +471,7 @@ static int __init hugepage_init(void) /* * hugepages can't be allocated by the buddy allocator */ - MAYBE_BUILD_BUG_ON(HPAGE_PMD_ORDER >= MAX_ORDER); + MAYBE_BUILD_BUG_ON(HPAGE_PMD_ORDER > MAX_ORDER); /* * we use page->mapping and page->index in second tail page * as list_head: assuming THP order >= 2 diff --git a/mm/hugetlb.c 
b/mm/hugetlb.c index 07abcb6eb203..9525bced1e82 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -2090,7 +2090,7 @@ pgoff_t hugetlb_basepage_index(struct page *page) pgoff_t index = page_index(page_head); unsigned long compound_idx; - if (compound_order(page_head) >= MAX_ORDER) + if (compound_order(page_head) > MAX_ORDER) compound_idx = page_to_pfn(page) - page_to_pfn(page_head); else compound_idx = page - page_head; @@ -4497,7 +4497,7 @@ static int __init default_hugepagesz_setup(char *s) * The number of default huge pages (for this size) could have been * specified as the first hugetlb parameter: hugepages=X. If so, * then default_hstate_max_huge_pages is set. If the default huge - * page size is gigantic (>= MAX_ORDER), then the pages must be + * page size is gigantic (> MAX_ORDER), then the pages must be * allocated here from bootmem allocator. */ if (default_hstate_max_huge_pages) { diff --git a/mm/kmsan/init.c b/mm/kmsan/init.c index 7fb794242fad..ffedf4dbc49d 100644 --- a/mm/kmsan/init.c +++ b/mm/kmsan/init.c @@ -96,7 +96,7 @@ void __init kmsan_init_shadow(void) struct metadata_page_pair { struct page *shadow, *origin; }; -static struct metadata_page_pair held_back[MAX_ORDER] __initdata; +static struct metadata_page_pair held_back[MAX_ORDER + 1] __initdata; /* * Eager metadata allocation. When the memblock allocator is freeing pages to @@ -211,8 +211,8 @@ static void kmsan_memblock_discard(void) * order=N-1, * - repeat. 
 	 */
-	collect.order = MAX_ORDER - 1;
-	for (int i = MAX_ORDER - 1; i >= 0; i--) {
+	collect.order = MAX_ORDER;
+	for (int i = MAX_ORDER; i >= 0; i--) {
 		if (held_back[i].shadow)
 			smallstack_push(&collect, held_back[i].shadow);
 		if (held_back[i].origin)
diff --git a/mm/memblock.c b/mm/memblock.c
index 25fd0626a9e7..338b8cb0793e 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -2043,7 +2043,7 @@ static void __init __free_pages_memory(unsigned long start, unsigned long end)
 	int order;
 	while (start < end) {
-		order = min(MAX_ORDER - 1UL, __ffs(start));
+		order = min(MAX_ORDER, __ffs(start));
 		while (start + (1UL << order) > end)
 			order--;
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index db3b270254f1..86291c79a764 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -596,7 +596,7 @@ static void online_pages_range(unsigned long start_pfn, unsigned long nr_pages)
 	unsigned long pfn;
 	/*
-	 * Online the pages in MAX_ORDER - 1 aligned chunks. The callback might
+	 * Online the pages in MAX_ORDER aligned chunks. The callback might
 	 * decide to not expose all pages to the buddy (e.g., expose them
 	 * later). We account all pages as being online and belonging to this
 	 * zone ("present").
@@ -605,7 +605,7 @@ static void online_pages_range(unsigned long start_pfn, unsigned long nr_pages)
 	 * this and the first chunk to online will be pageblock_nr_pages.
 	 */
 	for (pfn = start_pfn; pfn < end_pfn;) {
-		int order = min(MAX_ORDER - 1UL, __ffs(pfn));
+		int order = min(MAX_ORDER, __ffs(pfn));
 		(*online_page_callback)(pfn_to_page(pfn), order);
 		pfn += (1UL << order);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ac1fc986af44..66700f27b4c6 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1059,7 +1059,7 @@ buddy_merge_likely(unsigned long pfn, unsigned long buddy_pfn,
 	unsigned long higher_page_pfn;
 	struct page *higher_page;
-	if (order >= MAX_ORDER - 2)
+	if (order >= MAX_ORDER - 1)
 		return false;
 	higher_page_pfn = buddy_pfn & pfn;
@@ -1114,7 +1114,7 @@ static inline void __free_one_page(struct page *page,
 	VM_BUG_ON_PAGE(pfn & ((1 << order) - 1), page);
 	VM_BUG_ON_PAGE(bad_range(zone, page), page);
-	while (order < MAX_ORDER - 1) {
+	while (order < MAX_ORDER) {
 		if (compaction_capture(capc, page, order, migratetype)) {
 			__mod_zone_freepage_state(zone, -(1 << order),
 								migratetype);
@@ -2579,7 +2579,7 @@ struct page *__rmqueue_smallest(struct zone *zone, unsigned int order,
 	struct page *page;
 	/* Find a page of the appropriate size in the preferred list */
-	for (current_order = order; current_order < MAX_ORDER; ++current_order) {
+	for (current_order = order; current_order <= MAX_ORDER; ++current_order) {
 		area = &(zone->free_area[current_order]);
 		page = get_page_from_free_area(area, migratetype);
 		if (!page)
@@ -2951,7 +2951,7 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 			continue;
 		spin_lock_irqsave(&zone->lock, flags);
-		for (order = 0; order < MAX_ORDER; order++) {
+		for (order = 0; order <= MAX_ORDER; order++) {
 			struct free_area *area = &(zone->free_area[order]);
 			page = get_page_from_free_area(area, MIGRATE_HIGHATOMIC);
@@ -3035,7 +3035,7 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
 	 * approximates finding the pageblock with the most free pages, which
 	 * would be too costly to do exactly.
 	 */
-	for (current_order = MAX_ORDER - 1; current_order >= min_order;
+	for (current_order = MAX_ORDER; current_order >= min_order;
 				--current_order) {
 		area = &(zone->free_area[current_order]);
 		fallback_mt = find_suitable_fallback(area, current_order,
@@ -3061,7 +3061,7 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
 	return false;
 find_smallest:
-	for (current_order = order; current_order < MAX_ORDER;
+	for (current_order = order; current_order <= MAX_ORDER;
 							current_order++) {
 		area = &(zone->free_area[current_order]);
 		fallback_mt = find_suitable_fallback(area, current_order,
@@ -3074,7 +3074,7 @@ __rmqueue_fallback(struct zone *zone, int order, int start_migratetype,
 	 * This should not happen - we already found a suitable fallback
 	 * when looking for the largest page.
 	 */
-	VM_BUG_ON(current_order == MAX_ORDER);
+	VM_BUG_ON(current_order > MAX_ORDER);
 do_steal:
 	page = get_page_from_free_area(area, fallback_mt);
@@ -4044,7 +4044,7 @@ bool __zone_watermark_ok(struct zone *z, unsigned int order, unsigned long mark,
 		return true;
 	/* For a high-order request, check at least one suitable page is free */
-	for (o = order; o < MAX_ORDER; o++) {
+	for (o = order; o <= MAX_ORDER; o++) {
 		struct free_area *area = &z->free_area[o];
 		int mt;
@@ -5564,7 +5564,7 @@ struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid,
 	 * There are several places where we assume that the order value is sane
 	 * so bail out early if the request is out of bound.
 	 */
-	if (WARN_ON_ONCE_GFP(order >= MAX_ORDER, gfp))
+	if (WARN_ON_ONCE_GFP(order > MAX_ORDER, gfp))
 		return NULL;
 	gfp &= gfp_allowed_mask;
@@ -6294,8 +6294,8 @@ void __show_free_areas(unsigned int filter, nodemask_t *nodemask, int max_zone_i
 	for_each_populated_zone(zone) {
 		unsigned int order;
-		unsigned long nr[MAX_ORDER], flags, total = 0;
-		unsigned char types[MAX_ORDER];
+		unsigned long nr[MAX_ORDER + 1], flags, total = 0;
+		unsigned char types[MAX_ORDER + 1];
 		if (zone_idx(zone) > max_zone_idx)
 			continue;
@@ -6305,7 +6305,7 @@ void __show_free_areas(unsigned int filter, nodemask_t *nodemask, int max_zone_i
 		printk(KERN_CONT "%s: ", zone->name);
 		spin_lock_irqsave(&zone->lock, flags);
-		for (order = 0; order < MAX_ORDER; order++) {
+		for (order = 0; order <= MAX_ORDER; order++) {
 			struct free_area *area = &zone->free_area[order];
 			int type;
@@ -6319,7 +6319,7 @@ void __show_free_areas(unsigned int filter, nodemask_t *nodemask, int max_zone_i
 			}
 		}
 		spin_unlock_irqrestore(&zone->lock, flags);
-		for (order = 0; order < MAX_ORDER; order++) {
+		for (order = 0; order <= MAX_ORDER; order++) {
 			printk(KERN_CONT "%lu*%lukB ",
 			       nr[order], K(1UL) << order);
 			if (nr[order])
@@ -7670,7 +7670,7 @@ static inline void setup_usemap(struct zone *zone) {}
 /* Initialise the number of pages represented by NR_PAGEBLOCK_BITS */
 void __init set_pageblock_order(void)
 {
-	unsigned int order = MAX_ORDER - 1;
+	unsigned int order = MAX_ORDER;
 	/* Check that pageblock_nr_pages has not already been setup */
 	if (pageblock_order)
@@ -9165,7 +9165,7 @@ void *__init alloc_large_system_hash(const char *tablename,
 		else
 			table = memblock_alloc_raw(size, SMP_CACHE_BYTES);
-	} else if (get_order(size) >= MAX_ORDER || hashdist) {
+	} else if (get_order(size) > MAX_ORDER || hashdist) {
 		table = vmalloc_huge(size, gfp_flags);
 		virt = true;
 		if (table)
@@ -9379,7 +9379,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
 	order = 0;
 	outer_start = start;
 	while (!PageBuddy(pfn_to_page(outer_start))) {
-		if (++order >= MAX_ORDER) {
+		if (++order > MAX_ORDER) {
 			outer_start = start;
 			break;
 		}
@@ -9629,7 +9629,7 @@ bool is_free_buddy_page(struct page *page)
 	unsigned long pfn = page_to_pfn(page);
 	unsigned int order;
-	for (order = 0; order < MAX_ORDER; order++) {
+	for (order = 0; order <= MAX_ORDER; order++) {
 		struct page *page_head = page - (pfn & ((1 << order) - 1));
 		if (PageBuddy(page_head) &&
@@ -9637,7 +9637,7 @@ bool is_free_buddy_page(struct page *page)
 			break;
 	}
-	return order < MAX_ORDER;
+	return order <= MAX_ORDER;
 }
 EXPORT_SYMBOL(is_free_buddy_page);
@@ -9688,7 +9688,7 @@ bool take_page_off_buddy(struct page *page)
 	bool ret = false;
 	spin_lock_irqsave(&zone->lock, flags);
-	for (order = 0; order < MAX_ORDER; order++) {
+	for (order = 0; order <= MAX_ORDER; order++) {
 		struct page *page_head = page - (pfn & ((1 << order) - 1));
 		int page_order = buddy_order(page_head);
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index 47fbc1696466..c6f3605e37ab 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -226,7 +226,7 @@ static void unset_migratetype_isolate(struct page *page, int migratetype)
 	 */
 	if (PageBuddy(page)) {
 		order = buddy_order(page);
-		if (order >= pageblock_order && order < MAX_ORDER - 1) {
+		if (order >= pageblock_order && order < MAX_ORDER) {
 			buddy = find_buddy_page_pfn(page, page_to_pfn(page),
 						    order, NULL);
 			if (buddy && !is_migrate_isolate_page(buddy)) {
@@ -290,11 +290,11 @@ __first_valid_page(unsigned long pfn, unsigned long nr_pages)
  * isolate_single_pageblock()
  * @migratetype:	migrate type to set in error recovery.
  *
- * Free and in-use pages can be as big as MAX_ORDER-1 and contain more than one
+ * Free and in-use pages can be as big as MAX_ORDER and contain more than one
  * pageblock. When not all pageblocks within a page are isolated at the same
  * time, free page accounting can go wrong. For example, in the case of
- * MAX_ORDER-1 = pageblock_order + 1, a MAX_ORDER-1 page has two pagelbocks.
- * [         MAX_ORDER-1         ]
+ * MAX_ORDER = pageblock_order + 1, a MAX_ORDER page has two pageblocks.
+ * [         MAX_ORDER           ]
  * [  pageblock0  |  pageblock1  ]
  * When either pageblock is isolated, if it is a free page, the page is not
  * split into separate migratetype lists, which is supposed to; if it is an
@@ -451,7 +451,7 @@ static int isolate_single_pageblock(unsigned long boundary_pfn, int flags,
 			 * the free page to the right migratetype list.
 			 *
 			 * head_pfn is not used here as a hugetlb page order
-			 * can be bigger than MAX_ORDER-1, but after it is
+			 * can be bigger than MAX_ORDER, but after it is
 			 * freed, the free page order is not. Use pfn within
 			 * the range to find the head of the free page.
 			 */
@@ -459,7 +459,7 @@ static int isolate_single_pageblock(unsigned long boundary_pfn, int flags,
 			outer_pfn = pfn;
 			while (!PageBuddy(pfn_to_page(outer_pfn))) {
 				/* stop if we cannot find the free page */
-				if (++order >= MAX_ORDER)
+				if (++order > MAX_ORDER)
 					goto failed;
 				outer_pfn &= ~0UL << order;
 			}
diff --git a/mm/page_owner.c b/mm/page_owner.c
index 220cdeddc295..31169b3e7f06 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -315,7 +315,7 @@ void pagetypeinfo_showmixedcount_print(struct seq_file *m,
 			unsigned long freepage_order;
 			freepage_order = buddy_order_unsafe(page);
-			if (freepage_order < MAX_ORDER)
+			if (freepage_order <= MAX_ORDER)
 				pfn += (1UL << freepage_order) - 1;
 			continue;
 		}
@@ -549,7 +549,7 @@ read_page_owner(struct file *file, char __user *buf, size_t count, loff_t *ppos)
 		if (PageBuddy(page)) {
 			unsigned long freepage_order = buddy_order_unsafe(page);
-			if (freepage_order < MAX_ORDER)
+			if (freepage_order <= MAX_ORDER)
 				pfn += (1UL << freepage_order) - 1;
 			continue;
 		}
@@ -657,7 +657,7 @@ static void init_pages_in_zone(pg_data_t *pgdat, struct zone *zone)
 		if (PageBuddy(page)) {
 			unsigned long order = buddy_order_unsafe(page);
-			if (order > 0 && order < MAX_ORDER)
+			if (order > 0 && order <= MAX_ORDER)
 				pfn += (1UL << order) - 1;
 			continue;
 		}
diff --git a/mm/page_reporting.c b/mm/page_reporting.c
index 275b466de37b..b021f482a4cb 100644
--- a/mm/page_reporting.c
+++ b/mm/page_reporting.c
@@ -20,7 +20,7 @@ static int page_order_update_notify(const char *val, const struct kernel_param *
 	 * If param is set beyond this limit, order is set to default
 	 * pageblock_order value
 	 */
-	return param_set_uint_minmax(val, kp, 0, MAX_ORDER-1);
+	return param_set_uint_minmax(val, kp, 0, MAX_ORDER);
 }
 static const struct kernel_param_ops page_reporting_param_ops = {
@@ -276,7 +276,7 @@ page_reporting_process_zone(struct page_reporting_dev_info *prdev,
 		return err;
 	/* Process each free list starting from lowest order/mt */
-	for (order = page_reporting_order; order < MAX_ORDER; order++) {
+	for (order = page_reporting_order; order <= MAX_ORDER; order++) {
 		for (mt = 0; mt < MIGRATE_TYPES; mt++) {
 			/* We do not pull pages from the isolate free list */
 			if (is_migrate_isolate(mt))
@@ -370,7 +370,7 @@ int page_reporting_register(struct page_reporting_dev_info *prdev)
 	 */
 	if (page_reporting_order == -1) {
-		if (prdev->order > 0 && prdev->order < MAX_ORDER)
+		if (prdev->order > 0 && prdev->order <= MAX_ORDER)
 			page_reporting_order = prdev->order;
 		else
 			page_reporting_order = pageblock_order;
diff --git a/mm/shuffle.h b/mm/shuffle.h
index cec62984f7d3..a6bdf54f96f1 100644
--- a/mm/shuffle.h
+++ b/mm/shuffle.h
@@ -4,7 +4,7 @@
 #define _MM_SHUFFLE_H
 #include <linux/jump_label.h>
-#define SHUFFLE_ORDER (MAX_ORDER-1)
+#define SHUFFLE_ORDER MAX_ORDER
 #ifdef CONFIG_SHUFFLE_PAGE_ALLOCATOR
 DECLARE_STATIC_KEY_FALSE(page_alloc_shuffle_key);
diff --git a/mm/slab.c b/mm/slab.c
index dabc2a671fc6..dea1d580a053 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -465,7 +465,7 @@ static int __init slab_max_order_setup(char *str)
 {
 	get_option(&str, &slab_max_order);
 	slab_max_order = slab_max_order < 0 ? 0 :
-				min(slab_max_order, MAX_ORDER - 1);
+				min(slab_max_order, MAX_ORDER);
 	slab_max_order_set = true;
 	return 1;
diff --git a/mm/slub.c b/mm/slub.c
index 32eb6b50fe18..0e19c0d647e6 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4171,8 +4171,8 @@ static inline int calculate_order(unsigned int size)
 	/*
 	 * Doh this slab cannot be placed using slub_max_order.
 	 */
-	order = calc_slab_order(size, 1, MAX_ORDER - 1, 1);
-	if (order < MAX_ORDER)
+	order = calc_slab_order(size, 1, MAX_ORDER, 1);
+	if (order <= MAX_ORDER)
 		return order;
 	return -ENOSYS;
 }
@@ -4697,7 +4697,7 @@ __setup("slub_min_order=", setup_slub_min_order);
 static int __init setup_slub_max_order(char *str)
 {
 	get_option(&str, (int *)&slub_max_order);
-	slub_max_order = min(slub_max_order, (unsigned int)MAX_ORDER - 1);
+	slub_max_order = min(slub_max_order, (unsigned int)MAX_ORDER);
 	return 1;
 }
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9c1c5e8b24b8..0b611d4c16f1 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -6990,7 +6990,7 @@ unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
 	 * scan_control uses s8 fields for order, priority, and reclaim_idx.
 	 * Confirm they are large enough for max values.
 	 */
-	BUILD_BUG_ON(MAX_ORDER > S8_MAX);
+	BUILD_BUG_ON(MAX_ORDER >= S8_MAX);
 	BUILD_BUG_ON(DEF_PRIORITY > S8_MAX);
 	BUILD_BUG_ON(MAX_NR_ZONES > S8_MAX);
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 1ea6a5ce1c41..b7307627772d 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1055,7 +1055,7 @@ static void fill_contig_page_info(struct zone *zone,
 	info->free_blocks_total = 0;
 	info->free_blocks_suitable = 0;
-	for (order = 0; order < MAX_ORDER; order++) {
+	for (order = 0; order <= MAX_ORDER; order++) {
 		unsigned long blocks;
 		/*
@@ -1088,7 +1088,7 @@ static int __fragmentation_index(unsigned int order, struct contig_page_info *in
 {
 	unsigned long requested = 1UL << order;
-	if (WARN_ON_ONCE(order >= MAX_ORDER))
+	if (WARN_ON_ONCE(order > MAX_ORDER))
 		return 0;
 	if (!info->free_blocks_total)
@@ -1462,7 +1462,7 @@ static void frag_show_print(struct seq_file *m, pg_data_t *pgdat,
 	int order;
 	seq_printf(m, "Node %d, zone %8s ", pgdat->node_id, zone->name);
-	for (order = 0; order < MAX_ORDER; ++order)
+	for (order = 0; order <= MAX_ORDER; ++order)
 		/*
 		 * Access to nr_free is lockless as nr_free is used only for
 		 * printing purposes. Use data_race to avoid KCSAN warning.
@@ -1491,7 +1491,7 @@ static void pagetypeinfo_showfree_print(struct seq_file *m,
 					pgdat->node_id,
 					zone->name,
 					migratetype_names[mtype]);
-		for (order = 0; order < MAX_ORDER; ++order) {
+		for (order = 0; order <= MAX_ORDER; ++order) {
 			unsigned long freecount = 0;
 			struct free_area *area;
 			struct list_head *curr;
@@ -1531,7 +1531,7 @@ static void pagetypeinfo_showfree(struct seq_file *m, void *arg)
 	/* Print header */
 	seq_printf(m, "%-43s ", "Free pages count per migrate type at order");
-	for (order = 0; order < MAX_ORDER; ++order)
+	for (order = 0; order <= MAX_ORDER; ++order)
 		seq_printf(m, "%6d ", order);
 	seq_putc(m, '\n');
@@ -2153,7 +2153,7 @@ static void unusable_show_print(struct seq_file *m,
 	seq_printf(m, "Node %d, zone %8s ",
 				pgdat->node_id,
 				zone->name);
-	for (order = 0; order < MAX_ORDER; ++order) {
+	for (order = 0; order <= MAX_ORDER; ++order) {
 		fill_contig_page_info(zone, order, &info);
 		index = unusable_free_index(order, &info);
 		seq_printf(m, "%d.%03d ", index / 1000, index % 1000);
@@ -2205,7 +2205,7 @@ static void extfrag_show_print(struct seq_file *m,
 	seq_printf(m, "Node %d, zone %8s ",
 				pgdat->node_id,
 				zone->name);
-	for (order = 0; order < MAX_ORDER; ++order) {
+	for (order = 0; order <= MAX_ORDER; ++order) {
 		fill_contig_page_info(zone, order, &info);
 		index = __fragmentation_index(order, &info);
 		seq_printf(m, "%2d.%03d ", index / 1000, index % 1000);
diff --git a/net/smc/smc_ib.c b/net/smc/smc_ib.c
index 854772dd52fd..9b66d6aeeb1a 100644
--- a/net/smc/smc_ib.c
+++ b/net/smc/smc_ib.c
@@ -843,7 +843,7 @@ long smc_ib_setup_per_ibdev(struct smc_ib_device *smcibdev)
 		goto out;
 	/* the calculated number of cq entries fits to mlx5 cq allocation */
 	cqe_size_order = cache_line_size() == 128 ? 7 : 6;
-	smc_order = MAX_ORDER - cqe_size_order - 1;
+	smc_order = MAX_ORDER - cqe_size_order;
 	if (SMC_MAX_CQE + 2 > (0x00000001 << smc_order) * PAGE_SIZE)
 		cqattr.cqe = (0x00000001 << smc_order) * PAGE_SIZE - 2;
 	smcibdev->roce_cq_send = ib_create_cq(smcibdev->ibdev,
diff --git a/security/integrity/ima/ima_crypto.c b/security/integrity/ima/ima_crypto.c
index 64499056648a..51ad29940f05 100644
--- a/security/integrity/ima/ima_crypto.c
+++ b/security/integrity/ima/ima_crypto.c
@@ -38,7 +38,7 @@ static int param_set_bufsize(const char *val, const struct kernel_param *kp)
 	size = memparse(val, NULL);
 	order = get_order(size);
-	if (order >= MAX_ORDER)
+	if (order > MAX_ORDER)
 		return -EINVAL;
 	ima_maxorder = order;
 	ima_bufsize = PAGE_SIZE << order;
diff --git a/tools/testing/memblock/linux/mmzone.h b/tools/testing/memblock/linux/mmzone.h
index e65f89b12f1c..134f8eab0768 100644
--- a/tools/testing/memblock/linux/mmzone.h
+++ b/tools/testing/memblock/linux/mmzone.h
@@ -17,10 +17,10 @@ enum zone_type {
 };
 #define MAX_NR_ZONES __MAX_NR_ZONES
-#define MAX_ORDER 11
-#define MAX_ORDER_NR_PAGES (1 << (MAX_ORDER - 1))
+#define MAX_ORDER 10
+#define MAX_ORDER_NR_PAGES (1 << MAX_ORDER)
-#define pageblock_order (MAX_ORDER - 1)
+#define pageblock_order MAX_ORDER
 #define pageblock_nr_pages BIT(pageblock_order)
 #define pageblock_align(pfn) ALIGN((pfn), pageblock_nr_pages)
 #define pageblock_start_pfn(pfn) ALIGN_DOWN((pfn), pageblock_nr_pages)