From patchwork Tue Apr 14 13:13:31 2020
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 11487535
From: Christoph Hellwig
To: Andrew Morton, "K. Y. Srinivasan", Haiyang Zhang, Stephen Hemminger,
 Wei Liu, x86@kernel.org, David Airlie, Daniel Vetter, Laura Abbott,
 Sumit Semwal, Sakari Ailus, Minchan Kim, Nitin Gupta
Cc: Robin Murphy, Christophe Leroy, Peter Zijlstra,
 linuxppc-dev@lists.ozlabs.org, linux-hyperv@vger.kernel.org,
 dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org,
 linux-arch@vger.kernel.org, linux-mm@kvack.org,
 iommu@lists.linux-foundation.org, linux-arm-kernel@lists.infradead.org,
 linux-s390@vger.kernel.org, bpf@vger.kernel.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH 12/29] mm: pass addr as unsigned long to vb_free
Date: Tue, 14 Apr 2020 15:13:31 +0200
Message-Id: <20200414131348.444715-13-hch@lst.de>
In-Reply-To: <20200414131348.444715-1-hch@lst.de>
References: <20200414131348.444715-1-hch@lst.de>

Every use of addr in vb_free casts to unsigned long first, and the
caller has an unsigned long version of the address available anyway.
Just pass that and avoid all the casts.

Signed-off-by: Christoph Hellwig
Acked-by: Peter Zijlstra (Intel)
---
 mm/vmalloc.c | 16 +++++++---------
 1 file changed, 7 insertions(+), 9 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 9183fc0d365a..aada9e9144bd 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -1664,7 +1664,7 @@ static void *vb_alloc(unsigned long size, gfp_t gfp_mask)
 	return vaddr;
 }
 
-static void vb_free(const void *addr, unsigned long size)
+static void vb_free(unsigned long addr, unsigned long size)
 {
 	unsigned long offset;
 	unsigned long vb_idx;
@@ -1674,24 +1674,22 @@ static void vb_free(const void *addr, unsigned long size)
 	BUG_ON(offset_in_page(size));
 	BUG_ON(size > PAGE_SIZE*VMAP_MAX_ALLOC);
 
-	flush_cache_vunmap((unsigned long)addr, (unsigned long)addr + size);
+	flush_cache_vunmap(addr, addr + size);
 
 	order = get_order(size);
 
-	offset = (unsigned long)addr & (VMAP_BLOCK_SIZE - 1);
-	offset >>= PAGE_SHIFT;
+	offset = (addr & (VMAP_BLOCK_SIZE - 1)) >> PAGE_SHIFT;
 
-	vb_idx = addr_to_vb_idx((unsigned long)addr);
+	vb_idx = addr_to_vb_idx(addr);
 	rcu_read_lock();
 	vb = radix_tree_lookup(&vmap_block_tree, vb_idx);
 	rcu_read_unlock();
 	BUG_ON(!vb);
 
-	vunmap_page_range((unsigned long)addr, (unsigned long)addr + size);
+	vunmap_page_range(addr, addr + size);
 
 	if (debug_pagealloc_enabled_static())
-		flush_tlb_kernel_range((unsigned long)addr,
-					(unsigned long)addr + size);
+		flush_tlb_kernel_range(addr, addr + size);
 
 	spin_lock(&vb->lock);
@@ -1791,7 +1789,7 @@ void vm_unmap_ram(const void *mem, unsigned int count)
 
 	if (likely(count <= VMAP_MAX_ALLOC)) {
 		debug_check_no_locks_freed(mem, size);
-		vb_free(mem, size);
+		vb_free(addr, size);
 		return;
 	}
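
For readers outside mm/, here is a minimal user-space sketch of the
pattern the patch applies: when every statement in a function needs the
integer form of a pointer, and the caller already holds that integer,
taking unsigned long directly removes the repeated casts. All names
below (range_free_old, range_free_new, BLOCK_SIZE) are hypothetical
stand-ins for illustration, not the kernel's vmalloc internals.

#include <stdio.h>

#define BLOCK_SIZE  4096UL	/* hypothetical stand-in for VMAP_BLOCK_SIZE */
#define PAGE_SHIFT  12		/* stand-in; assumes 4 KiB pages */

/* Before: the address arrives as const void *, so every use casts. */
static void range_free_old(const void *addr, unsigned long size)
{
	unsigned long offset = ((unsigned long)addr & (BLOCK_SIZE - 1))
				>> PAGE_SHIFT;

	printf("flush [%#lx, %#lx), page offset %lu in block\n",
	       (unsigned long)addr, (unsigned long)addr + size, offset);
}

/* After: the caller passes the unsigned long it already computed. */
static void range_free_new(unsigned long addr, unsigned long size)
{
	unsigned long offset = (addr & (BLOCK_SIZE - 1)) >> PAGE_SHIFT;

	printf("flush [%#lx, %#lx), page offset %lu in block\n",
	       addr, addr + size, offset);
}

int main(void)
{
	static char buf[2 * BLOCK_SIZE];
	const void *mem = buf;
	/* Callers like vm_unmap_ram already hold the integer form. */
	unsigned long addr = (unsigned long)mem;

	range_free_old(mem, 64);	/* casts repeated inside */
	range_free_new(addr, 64);	/* no casts needed inside */
	return 0;
}

The kernel represents virtual addresses as unsigned long by
long-standing convention, which is what the patch follows; in portable
user-space code uintptr_t would be the stricter choice for holding a
pointer's integer value.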