From patchwork Thu Mar 31 08:33:36 2016
X-Patchwork-Submitter: Michal Hocko
X-Patchwork-Id: 8708361
Date: Thu, 31 Mar 2016 10:33:36 +0200
From: Michal Hocko
To: Peter Zijlstra
Cc: LKML, Ingo Molnar, Thomas Gleixner, "H. Peter Anvin", "David S. Miller",
 Tony Luck, Andrew Morton, Chris Zankel, Max Filippov, x86@kernel.org,
 linux-alpha@vger.kernel.org, linux-ia64@vger.kernel.org,
 linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
 sparclinux@vger.kernel.org, linux-xtensa@linux-xtensa.org,
 linux-arch@vger.kernel.org
Subject: Re: [PATCH 03/11] locking, rwsem: introduce basis for down_write_killable
Message-ID: <20160331083336.GA27831@dhcp22.suse.cz>
References: <1456750705-7141-1-git-send-email-mhocko@kernel.org>
 <1456750705-7141-4-git-send-email-mhocko@kernel.org>
 <20160330132549.GU3408@twins.programming.kicks-ass.net>
In-Reply-To: <20160330132549.GU3408@twins.programming.kicks-ass.net>
User-Agent: Mutt/1.5.24 (2015-08-30)

On Wed 30-03-16 15:25:49, Peter Zijlstra wrote:
[...]
> Why is the signal_pending_state() test _after_ the call to schedule()
> and before the 'trylock'.

No special reason. I guess I was too focused on the wake-by-signal path
and overlooked the trylock path.

> __mutex_lock_common() has it before the call to schedule and after the
> 'trylock'.
>
> The difference is that rwsem will now respond to the KILL and return
> -EINTR even if the lock is available, whereas mutex will acquire it and
> ignore the signal (for a little while longer).
>
> Neither is wrong per se, but I feel all the locking primitives should
> behave in a consistent manner in this regard.

Agreed! What about the following on top? I will repost the full patch if
it looks OK. Thanks!

diff --git a/kernel/locking/rwsem-spinlock.c b/kernel/locking/rwsem-spinlock.c
index d1d04ca10d0e..fb2db7b408f0 100644
--- a/kernel/locking/rwsem-spinlock.c
+++ b/kernel/locking/rwsem-spinlock.c
@@ -216,14 +216,13 @@ int __sched __down_write_state(struct rw_semaphore *sem, int state)
 		 */
 		if (sem->count == 0)
 			break;
-		set_task_state(tsk, state);
-		raw_spin_unlock_irqrestore(&sem->wait_lock, flags);
-		schedule();
 		if (signal_pending_state(state, current)) {
 			ret = -EINTR;
-			raw_spin_lock_irqsave(&sem->wait_lock, flags);
 			goto out;
 		}
+		set_task_state(tsk, state);
+		raw_spin_unlock_irqrestore(&sem->wait_lock, flags);
+		schedule();
 		raw_spin_lock_irqsave(&sem->wait_lock, flags);
 	}
 	/* got the lock */
diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index 5cec34f1ad6f..781b2628e41b 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -487,19 +487,19 @@ __rwsem_down_write_failed_state(struct rw_semaphore *sem, int state)

 		/* Block until there are no active lockers. */
 		do {
-			schedule();
 			if (signal_pending_state(state, current)) {
 				raw_spin_lock_irq(&sem->wait_lock);
 				ret = ERR_PTR(-EINTR);
 				goto out;
 			}
+			schedule();
 			set_current_state(state);
 		} while ((count = sem->count) & RWSEM_ACTIVE_MASK);

 		raw_spin_lock_irq(&sem->wait_lock);
 	}
-	__set_current_state(TASK_RUNNING);
 out:
+	__set_current_state(TASK_RUNNING);
 	list_del(&waiter.list);
 	raw_spin_unlock_irq(&sem->wait_lock);