From patchwork Mon Jul 20 02:29:37 2015
X-Patchwork-Submitter: Olof Johansson
X-Patchwork-Id: 6824961
From: Olof Johansson
To: will.deacon@arm.com, catalin.marinas@arm.com
Cc: Olof Johansson, Dave Hansen, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, Ingo Molnar
Subject: [PATCH] arm64: Minor refactoring of cpu_switch_to() to fix build breakage
Date: Sun, 19 Jul 2015 19:29:37 -0700
Message-Id: <1437359377-39932-1-git-send-email-olof@lixom.net>

Commit 0c8c0f03e3a2 ("x86/fpu, sched: Dynamically allocate 'struct fpu'")
moved the thread_struct to the bottom of task_struct. As a result, the
offset is now too large to be used in an immediate add on arm64 with some
kernel configs:

arch/arm64/kernel/entry.S: Assembler messages:
arch/arm64/kernel/entry.S:588: Error: immediate out of range
arch/arm64/kernel/entry.S:597: Error: immediate out of range

There's really no reason for cpu_switch_to() to take a task_struct pointer
in the first place, since all it does is access the thread.cpu_context
member. So, just pass that in directly.
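(Illustration only, not part of the patch: the AArch64 ADD-immediate
encoding takes a 12-bit value, optionally shifted left by 12 bits, so only
offsets up to 4095, or 4 KiB multiples up to 0xfff000, fit in a single
instruction. A hypothetical stand-alone snippet, assembled with plain GNU
as for aarch64, that reproduces the same assembler error:

	// hypothetical example, not from the patch
	add	x8, x0, #4095		// ok: fits in imm12
	add	x8, x0, #8192		// ok: encoded as #2, LSL #12
	add	x8, x0, #4352		// rejected: "Error: immediate out of range"
)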
Fixes: 0c8c0f03e3a2 ("x86/fpu, sched: Dynamically allocate 'struct fpu'")
Cc: Dave Hansen
Signed-off-by: Olof Johansson
---
 arch/arm64/include/asm/processor.h |  4 ++--
 arch/arm64/kernel/asm-offsets.c    |  2 --
 arch/arm64/kernel/entry.S          | 34 ++++++++++++++++------------------
 arch/arm64/kernel/process.c        |  3 ++-
 4 files changed, 20 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index e4c893e..ba90764 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -152,8 +152,8 @@ static inline void cpu_relax(void)
 #define cpu_relax_lowlatency()	cpu_relax()
 
 /* Thread switching */
-extern struct task_struct *cpu_switch_to(struct task_struct *prev,
-					 struct task_struct *next);
+extern struct task_struct *cpu_switch_to(struct cpu_context *prev,
+					 struct cpu_context *next);
 
 #define task_pt_regs(p) \
 	((struct pt_regs *)(THREAD_START_SP + task_stack_page(p)) - 1)
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index c99701a..c9e13f6 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -39,8 +39,6 @@ int main(void)
   DEFINE(TI_TASK,		offsetof(struct thread_info, task));
   DEFINE(TI_CPU,		offsetof(struct thread_info, cpu));
   BLANK();
-  DEFINE(THREAD_CPU_CONTEXT,	offsetof(struct task_struct, thread.cpu_context));
-  BLANK();
   DEFINE(S_X0,			offsetof(struct pt_regs, regs[0]));
   DEFINE(S_X1,			offsetof(struct pt_regs, regs[1]));
   DEFINE(S_X2,			offsetof(struct pt_regs, regs[2]));
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index f860bfd..2216326 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -579,29 +579,27 @@ ENDPROC(el0_irq)
 /*
  * Register switch for AArch64. The callee-saved registers need to be saved
  * and restored. On entry:
- * x0 = previous task_struct (must be preserved across the switch)
- * x1 = next task_struct
+ * x0 = previous cpu_context (must be preserved across the switch)
+ * x1 = next cpu_context
  * Previous and next are guaranteed not to be the same.
  *
  */
 ENTRY(cpu_switch_to)
-	add	x8, x0, #THREAD_CPU_CONTEXT
 	mov	x9, sp
-	stp	x19, x20, [x8], #16		// store callee-saved registers
-	stp	x21, x22, [x8], #16
-	stp	x23, x24, [x8], #16
-	stp	x25, x26, [x8], #16
-	stp	x27, x28, [x8], #16
-	stp	x29, x9, [x8], #16
-	str	lr, [x8]
-	add	x8, x1, #THREAD_CPU_CONTEXT
-	ldp	x19, x20, [x8], #16		// restore callee-saved registers
-	ldp	x21, x22, [x8], #16
-	ldp	x23, x24, [x8], #16
-	ldp	x25, x26, [x8], #16
-	ldp	x27, x28, [x8], #16
-	ldp	x29, x9, [x8], #16
-	ldr	lr, [x8]
+	stp	x19, x20, [x0], #16		// store callee-saved registers
+	stp	x21, x22, [x0], #16
+	stp	x23, x24, [x0], #16
+	stp	x25, x26, [x0], #16
+	stp	x27, x28, [x0], #16
+	stp	x29, x9, [x0], #16
+	str	lr, [x0]
+	ldp	x19, x20, [x1], #16		// restore callee-saved registers
+	ldp	x21, x22, [x1], #16
+	ldp	x23, x24, [x1], #16
+	ldp	x25, x26, [x1], #16
+	ldp	x27, x28, [x1], #16
+	ldp	x29, x9, [x1], #16
+	ldr	lr, [x1]
 	mov	sp, x9
 	ret
 ENDPROC(cpu_switch_to)
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 223b093..6b9a09c 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -325,7 +325,8 @@ struct task_struct *__switch_to(struct task_struct *prev,
 	dsb(ish);
 
 	/* the actual thread switch */
-	last = cpu_switch_to(prev, next);
+	last = cpu_switch_to(&prev->thread.cpu_context,
+			     &next->thread.cpu_context);
 
 	return last;
 }