From patchwork Tue Nov 24 00:29:16 2020
X-Patchwork-Submitter: Dmitry Safonov
X-Patchwork-Id: 11926909
From: Dmitry Safonov
To: linux-kernel@vger.kernel.org
Cc: Thomas Bogendoerfer, Arnd Bergmann, Dmitry Safonov, Catalin Marinas,
 x86@kernel.org, Dmitry Safonov <0x7f454c46@gmail.com>, Oleg Nesterov,
 Christophe Leroy, Russell King, Ingo Molnar, Borislav Petkov,
 Alexander Viro, Andy Lutomirski, "H. Peter Anvin", Guo Ren,
 Andrew Morton, Vincenzo Frascino, Will Deacon, Thomas Gleixner,
 linux-arm-kernel@lists.infradead.org
Subject: [PATCH v2 03/19] arm64: Use is_compat_task() in arch_setup_additional_pages()
Date: Tue, 24 Nov 2020 00:29:16 +0000
Message-Id: <20201124002932.1220517-4-dima@arista.com>
In-Reply-To: <20201124002932.1220517-1-dima@arista.com>
References: <20201124002932.1220517-1-dima@arista.com>
X-Mailer: git-send-email 2.29.2

Instead of providing compat_arch_setup_additional_pages(), check whether
the task is compat from its personality, which is set earlier in
load_elf_binary(). This aligns the code with powerpc and sparc, and it
will also allow removing the compat_arch_setup_additional_pages() macro
entirely once x86 does the same, simplifying the binfmt code in the end.
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Dmitry Safonov
---
 arch/arm64/include/asm/elf.h |  5 -----
 arch/arm64/kernel/vdso.c     | 21 ++++++++++-----------
 2 files changed, 10 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/elf.h b/arch/arm64/include/asm/elf.h
index d1073ffa7f24..a81953bcc1cf 100644
--- a/arch/arm64/include/asm/elf.h
+++ b/arch/arm64/include/asm/elf.h
@@ -237,11 +237,6 @@ do {							\
 #else
 #define COMPAT_ARCH_DLINFO
 #endif
-struct linux_binprm;
-extern int aarch32_setup_additional_pages(struct linux_binprm *bprm,
-					  int uses_interp);
-#define compat_arch_setup_additional_pages \
-					aarch32_setup_additional_pages
 
 #endif /* CONFIG_COMPAT */
 
diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index cee5d04ea9ad..1b710deb84d6 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -401,29 +401,24 @@ static int aarch32_sigreturn_setup(struct mm_struct *mm)
 	return PTR_ERR_OR_ZERO(ret);
 }
 
-int aarch32_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
+static int aarch32_setup_additional_pages(struct linux_binprm *bprm,
+					  int uses_interp)
 {
 	struct mm_struct *mm = current->mm;
 	int ret;
 
-	if (mmap_write_lock_killable(mm))
-		return -EINTR;
-
 	ret = aarch32_kuser_helpers_setup(mm);
 	if (ret)
-		goto out;
+		return ret;
 
 	if (IS_ENABLED(CONFIG_COMPAT_VDSO)) {
 		ret = __setup_additional_pages(VDSO_ABI_AA32, mm, bprm,
 					       uses_interp);
 		if (ret)
-			goto out;
+			return ret;
 	}
 
-	ret = aarch32_sigreturn_setup(mm);
-out:
-	mmap_write_unlock(mm);
-	return ret;
+	return aarch32_sigreturn_setup(mm);
 }
 
 #endif /* CONFIG_COMPAT */
@@ -460,7 +455,11 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
 	if (mmap_write_lock_killable(mm))
 		return -EINTR;
 
-	ret = __setup_additional_pages(VDSO_ABI_AA64, mm, bprm, uses_interp);
+	if (is_compat_task())
+		ret = aarch32_setup_additional_pages(bprm, uses_interp);
+	else
+		ret = __setup_additional_pages(VDSO_ABI_AA64, mm, bprm, uses_interp);
+
 	mmap_write_unlock(mm);
 	return ret;
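
The locking shape is the subtle part of the change above:
aarch32_setup_additional_pages() used to take mmap_lock itself, whereas now
the single caller arch_setup_additional_pages() holds it across both the
native and the compat path, which is why the goto out / unlock error handling
collapses into plain returns. On arm64, is_compat_task() boils down to testing
the TIF_32BIT thread flag, which COMPAT_SET_PERSONALITY() sets from
load_elf_binary() well before arch_setup_additional_pages() runs, so the check
is already valid at this point. Below is a minimal user-space sketch of that
caller-holds-the-lock dispatch, with made-up names and a pthread mutex
standing in for mmap_lock; it is an illustration, not kernel code:

/*
 * User-space sketch only (hypothetical names; a pthread mutex stands in
 * for mm->mmap_lock).  One entry point takes the lock and dispatches on
 * a compat predicate; the per-ABI helpers assume the lock is held and
 * simply return on error.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t fake_mmap_lock = PTHREAD_MUTEX_INITIALIZER;
static bool fake_compat_task;	/* stands in for is_compat_task() */

/* Called with fake_mmap_lock held: map the 64-bit vDSO. */
static int setup_native_pages(void)
{
	printf("mapping AArch64 vDSO\n");
	return 0;
}

/* Called with fake_mmap_lock held: kuser helpers, AArch32 vDSO, sigreturn. */
static int setup_compat_pages(void)
{
	printf("mapping kuser helpers\n");
	printf("mapping AArch32 vDSO\n");
	printf("mapping sigreturn page\n");
	return 0;
}

/* Single caller: lock once, branch on the compat check, unlock once. */
static int setup_additional_pages(void)
{
	int ret;

	pthread_mutex_lock(&fake_mmap_lock);
	if (fake_compat_task)
		ret = setup_compat_pages();
	else
		ret = setup_native_pages();
	pthread_mutex_unlock(&fake_mmap_lock);

	return ret;
}

int main(void)
{
	fake_compat_task = true;	/* pretend the ELF loader marked us 32-bit */
	return setup_additional_pages();
}

Compile with cc -pthread; flipping fake_compat_task selects the path, and
either helper can fail with a plain return because it never owns the lock,
mirroring why the patch can drop the goto-based cleanup in
aarch32_setup_additional_pages().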