From patchwork Thu Jun 23 17:52:58 2016
X-Patchwork-Submitter: Linus Torvalds
X-Patchwork-Id: 9195665
From: Linus Torvalds
Date: Thu, 23 Jun 2016 10:52:58 -0700
To: Oleg Nesterov, Peter Zijlstra
Cc: Andy Lutomirski, Andy Lutomirski, the arch/x86 maintainers,
 Linux Kernel Mailing List, linux-arch@vger.kernel.org, Borislav Petkov,
 Nadav Amit, Kees Cook, Brian Gerst, kernel-hardening@lists.openwall.com,
 Josh Poimboeuf, Jann Horn, Heiko Carstens
References: <20160623143126.GA16664@redhat.com> <20160623170352.GA17372@redhat.com>
Subject: [kernel-hardening] Re: [PATCH v3 00/13] Virtually mapped stacks with
 guard pages (x86, core)
On Thu, Jun 23, 2016 at 10:44 AM, Linus Torvalds wrote:
>
> The thread_info->tsk pointer, that was one of the most critical issues
> and the main raison d'être of the thread_info, has been replaced on
> x86 by just using the per-cpu "current_task". Yes, there are probably
> more than a few "ti->task" users left for legacy reasons, harking back
> to when the thread-info was cheaper to access, but it shouldn't be a
> big deal.

Ugh. Looking around at this, it turns out that a great example of this
kind of legacy issue is the debug_mutex stuff.

It uses "struct thread_info *" as the owner pointer, and there is _no_
existing reason for it. In fact, in every single place it actually wants
the task_struct, and it does task_thread_info(task) just to convert it
to the thread-info, and then converts it back with "ti->task".

So the attached patch seems to be the right thing to do regardless of
this whole discussion.

              Linus

 kernel/locking/mutex-debug.c | 12 ++++++------
 kernel/locking/mutex-debug.h |  4 ++--
 kernel/locking/mutex.c       |  6 +++---
 kernel/locking/mutex.h       |  2 +-
 4 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/kernel/locking/mutex-debug.c b/kernel/locking/mutex-debug.c
index 3ef3736002d8..9c951fade415 100644
--- a/kernel/locking/mutex-debug.c
+++ b/kernel/locking/mutex-debug.c
@@ -49,21 +49,21 @@ void debug_mutex_free_waiter(struct mutex_waiter *waiter)
 }
 
 void debug_mutex_add_waiter(struct mutex *lock, struct mutex_waiter *waiter,
-			    struct thread_info *ti)
+			    struct task_struct *task)
 {
 	SMP_DEBUG_LOCKS_WARN_ON(!spin_is_locked(&lock->wait_lock));
 
 	/* Mark the current thread as blocked on the lock: */
-	ti->task->blocked_on = waiter;
+	task->blocked_on = waiter;
 }
 
 void mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter,
-			 struct thread_info *ti)
+			 struct task_struct *task)
 {
 	DEBUG_LOCKS_WARN_ON(list_empty(&waiter->list));
-	DEBUG_LOCKS_WARN_ON(waiter->task != ti->task);
-	DEBUG_LOCKS_WARN_ON(ti->task->blocked_on != waiter);
-	ti->task->blocked_on = NULL;
+	DEBUG_LOCKS_WARN_ON(waiter->task != task);
+	DEBUG_LOCKS_WARN_ON(task->blocked_on != waiter);
+	task->blocked_on = NULL;
 
 	list_del_init(&waiter->list);
 	waiter->task = NULL;
diff --git a/kernel/locking/mutex-debug.h b/kernel/locking/mutex-debug.h
index 0799fd3e4cfa..d06ae3bb46c5 100644
--- a/kernel/locking/mutex-debug.h
+++ b/kernel/locking/mutex-debug.h
@@ -20,9 +20,9 @@ extern void debug_mutex_wake_waiter(struct mutex *lock,
 extern void debug_mutex_free_waiter(struct mutex_waiter *waiter);
 extern void debug_mutex_add_waiter(struct mutex *lock,
 				   struct mutex_waiter *waiter,
-				   struct thread_info *ti);
+				   struct task_struct *task);
 extern void mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter,
-				struct thread_info *ti);
+				struct task_struct *task);
 extern void debug_mutex_unlock(struct mutex *lock);
 extern void debug_mutex_init(struct mutex *lock, const char *name,
 			     struct lock_class_key *key);
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 79d2d765a75f..a70b90db3909 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -537,7 +537,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 		goto skip_wait;
 
 	debug_mutex_lock_common(lock, &waiter);
-	debug_mutex_add_waiter(lock, &waiter, task_thread_info(task));
+	debug_mutex_add_waiter(lock, &waiter, task);
 
 	/* add waiting tasks to the end of the waitqueue (FIFO): */
 	list_add_tail(&waiter.list, &lock->wait_list);
@@ -584,7 +584,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 	}
 	__set_task_state(task, TASK_RUNNING);
 
-	mutex_remove_waiter(lock, &waiter, current_thread_info());
+	mutex_remove_waiter(lock, &waiter, task);
 	/* set it to 0 if there are no waiters left: */
 	if (likely(list_empty(&lock->wait_list)))
 		atomic_set(&lock->count, 0);
@@ -605,7 +605,7 @@ skip_wait:
 	return 0;
 
 err:
-	mutex_remove_waiter(lock, &waiter, task_thread_info(task));
+	mutex_remove_waiter(lock, &waiter, task);
 	spin_unlock_mutex(&lock->wait_lock, flags);
 	debug_mutex_free_waiter(&waiter);
 	mutex_release(&lock->dep_map, 1, ip);
diff --git a/kernel/locking/mutex.h b/kernel/locking/mutex.h
index 5cda397607f2..a68bae5e852a 100644
--- a/kernel/locking/mutex.h
+++ b/kernel/locking/mutex.h
@@ -13,7 +13,7 @@
 		do { spin_lock(lock); (void)(flags); } while (0)
 #define spin_unlock_mutex(lock, flags) \
 		do { spin_unlock(lock); (void)(flags); } while (0)
-#define mutex_remove_waiter(lock, waiter, ti) \
+#define mutex_remove_waiter(lock, waiter, task) \
 		__list_del((waiter)->list.prev, (waiter)->list.next)
 
 #ifdef CONFIG_MUTEX_SPIN_ON_OWNER
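
For readers following along, the pointless round trip the mail describes can
be boiled down to the sketch below. This is a simplified, self-contained
mock-up for illustration only, not kernel source: the struct layouts, the
task_thread_info() helper, and the blocked_on field here are pared-down
stand-ins for the real definitions in the scheduler headers and the mutex
debug code.

/*
 * Illustrative sketch only (not kernel code): why handing a
 * "struct thread_info *" to the mutex debug helpers was a detour.
 * Struct layouts and task_thread_info() are simplified stand-ins.
 */
#include <assert.h>
#include <stddef.h>

struct task_struct;

struct thread_info {
	struct task_struct *task;	/* legacy back-pointer to the task */
};

struct task_struct {
	struct thread_info *stack;	/* thread_info lives at the stack base */
	void *blocked_on;		/* stand-in for the mutex_waiter pointer */
};

/* Simplified analogue of the kernel's task_thread_info() accessor. */
static struct thread_info *task_thread_info(struct task_struct *task)
{
	return task->stack;
}

int main(void)
{
	struct task_struct task;
	struct thread_info ti = { .task = &task };

	task.stack = &ti;
	task.blocked_on = NULL;

	/*
	 * Old debug_mutex pattern: start from a task_struct, convert to
	 * thread_info with task_thread_info(), then immediately convert
	 * back through ti->task inside the helper.
	 */
	struct thread_info *arg = task_thread_info(&task);
	assert(arg->task == &task);	/* the detour buys nothing */

	/* After the patch: the helpers just take the task_struct directly. */
	return 0;
}

The patch above simply drops that detour, which also removes one of the
remaining reasons for the debug code to care about thread_info at all.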