From patchwork Wed Aug 1 15:19:55 2018
X-Patchwork-Submitter: Johannes Weiner
X-Patchwork-Id: 10552467
From: Johannes Weiner <hannes@cmpxchg.org>
To: Ingo Molnar, Peter Zijlstra, Andrew Morton, Linus Torvalds
Cc: Tejun Heo, Suren Baghdasaryan, Daniel Drake, Vinayak Menon,
 Christopher Lameter, Mike Galbraith, Shakeel Butt, Peter Enderborg,
 linux-mm@kvack.org, cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
 kernel-team@fb.com
Subject: [PATCH 6/9] sched: sched.h: make rq locking and clock functions available in stats.h
Date: Wed, 1 Aug 2018 11:19:55 -0400
Message-Id: <20180801151958.32590-7-hannes@cmpxchg.org>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20180801151958.32590-1-hannes@cmpxchg.org>
References: <20180801151958.32590-1-hannes@cmpxchg.org>

kernel/sched/sched.h includes "stats.h" half-way through the file. The
next patch introduces users of sched.h's rq locking functions and
update_rq_clock() in kernel/sched/stats.h. Move those definitions up in
the file so they are available in stats.h.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 kernel/sched/sched.h | 164 +++++++++++++++++++++----------------------
 1 file changed, 82 insertions(+), 82 deletions(-)

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index cb467c221b15..b8f038497240 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -919,6 +919,8 @@ DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
 #define cpu_curr(cpu)		(cpu_rq(cpu)->curr)
 #define raw_rq()		raw_cpu_ptr(&runqueues)
 
+extern void update_rq_clock(struct rq *rq);
+
 static inline u64 __rq_clock_broken(struct rq *rq)
 {
 	return READ_ONCE(rq->clock);
@@ -1037,6 +1039,86 @@ static inline void rq_repin_lock(struct rq *rq, struct rq_flags *rf)
 #endif
 }
 
+struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
+	__acquires(rq->lock);
+
+struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
+	__acquires(p->pi_lock)
+	__acquires(rq->lock);
+
+static inline void __task_rq_unlock(struct rq *rq, struct rq_flags *rf)
+	__releases(rq->lock)
+{
+	rq_unpin_lock(rq, rf);
+	raw_spin_unlock(&rq->lock);
+}
+
+static inline void
+task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
+	__releases(rq->lock)
+	__releases(p->pi_lock)
+{
+	rq_unpin_lock(rq, rf);
+	raw_spin_unlock(&rq->lock);
+	raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
+}
+
+static inline void
+rq_lock_irqsave(struct rq *rq, struct rq_flags *rf)
+	__acquires(rq->lock)
+{
+	raw_spin_lock_irqsave(&rq->lock, rf->flags);
+	rq_pin_lock(rq, rf);
+}
+
+static inline void
+rq_lock_irq(struct rq *rq, struct rq_flags *rf)
+	__acquires(rq->lock)
+{
+	raw_spin_lock_irq(&rq->lock);
+	rq_pin_lock(rq, rf);
+}
+
+static inline void
+rq_lock(struct rq *rq, struct rq_flags *rf)
+	__acquires(rq->lock)
+{
+	raw_spin_lock(&rq->lock);
+	rq_pin_lock(rq, rf);
+}
+
+static inline void
+rq_relock(struct rq *rq, struct rq_flags *rf)
+	__acquires(rq->lock)
+{
+	raw_spin_lock(&rq->lock);
+	rq_repin_lock(rq, rf);
+}
+
+static inline void
+rq_unlock_irqrestore(struct rq *rq, struct rq_flags *rf)
+	__releases(rq->lock)
+{
+	rq_unpin_lock(rq, rf);
+	raw_spin_unlock_irqrestore(&rq->lock, rf->flags);
+}
+
+static inline void
+rq_unlock_irq(struct rq *rq, struct rq_flags *rf)
+	__releases(rq->lock)
+{
+	rq_unpin_lock(rq, rf);
+	raw_spin_unlock_irq(&rq->lock);
+}
+
+static inline void
+rq_unlock(struct rq *rq, struct rq_flags *rf)
+	__releases(rq->lock)
+{
+	rq_unpin_lock(rq, rf);
+	raw_spin_unlock(&rq->lock);
+}
+
 #ifdef CONFIG_NUMA
 enum numa_topology_type {
 	NUMA_DIRECT,
@@ -1670,8 +1752,6 @@ static inline void sub_nr_running(struct rq *rq, unsigned count)
 	sched_update_tick_dependency(rq);
 }
 
-extern void update_rq_clock(struct rq *rq);
-
 extern void activate_task(struct rq *rq, struct task_struct *p, int flags);
 extern void deactivate_task(struct rq *rq, struct task_struct *p, int flags);
 
@@ -1752,86 +1832,6 @@ static inline void sched_rt_avg_update(struct rq *rq, u64 rt_delta) { }
 static inline void sched_avg_update(struct rq *rq) { }
 #endif
 
-struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
-	__acquires(rq->lock);
-
-struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
-	__acquires(p->pi_lock)
-	__acquires(rq->lock);
-
-static inline void __task_rq_unlock(struct rq *rq, struct rq_flags *rf)
-	__releases(rq->lock)
-{
-	rq_unpin_lock(rq, rf);
-	raw_spin_unlock(&rq->lock);
-}
-
-static inline void
-task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
-	__releases(rq->lock)
-	__releases(p->pi_lock)
-{
-	rq_unpin_lock(rq, rf);
-	raw_spin_unlock(&rq->lock);
-	raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
-}
-
-static inline void
-rq_lock_irqsave(struct rq *rq, struct rq_flags *rf)
-	__acquires(rq->lock)
-{
-	raw_spin_lock_irqsave(&rq->lock, rf->flags);
-	rq_pin_lock(rq, rf);
-}
-
-static inline void
-rq_lock_irq(struct rq *rq, struct rq_flags *rf)
-	__acquires(rq->lock)
-{
-	raw_spin_lock_irq(&rq->lock);
-	rq_pin_lock(rq, rf);
-}
-
-static inline void
-rq_lock(struct rq *rq, struct rq_flags *rf)
-	__acquires(rq->lock)
-{
-	raw_spin_lock(&rq->lock);
-	rq_pin_lock(rq, rf);
-}
-
-static inline void
-rq_relock(struct rq *rq, struct rq_flags *rf)
-	__acquires(rq->lock)
-{
-	raw_spin_lock(&rq->lock);
-	rq_repin_lock(rq, rf);
-}
-
-static inline void
-rq_unlock_irqrestore(struct rq *rq, struct rq_flags *rf)
-	__releases(rq->lock)
-{
-	rq_unpin_lock(rq, rf);
-	raw_spin_unlock_irqrestore(&rq->lock, rf->flags);
-}
-
-static inline void
-rq_unlock_irq(struct rq *rq, struct rq_flags *rf)
-	__releases(rq->lock)
-{
-	rq_unpin_lock(rq, rf);
-	raw_spin_unlock_irq(&rq->lock);
-}
-
-static inline void
-rq_unlock(struct rq *rq, struct rq_flags *rf)
-	__releases(rq->lock)
-{
-	rq_unpin_lock(rq, rf);
-	raw_spin_unlock(&rq->lock);
-}
-
 #ifdef CONFIG_SMP
 
 #ifdef CONFIG_PREEMPT
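
Not part of the patch, just for illustration: a minimal sketch of the kind of
"stats.h" helper the next patch needs, using a made-up example_account_sleep()
name. Because sched.h pulls in "stats.h" half-way through the file, such a
helper can only use the rq locking wrappers and update_rq_clock() if their
definitions already appear earlier in sched.h, which is what the code movement
above arranges:

/*
 * Illustrative sketch only -- example_account_sleep() is a hypothetical
 * name, not a function from this series.  It only compiles from within
 * "stats.h" if the declarations moved above precede the include.
 */
static inline void example_account_sleep(struct task_struct *p)
{
	struct rq_flags rf;
	struct rq *rq;

	/* Take p's runqueue lock (and pi_lock) via the moved helpers. */
	rq = task_rq_lock(p, &rf);

	/* Make sure rq->clock is current before reading timestamps. */
	update_rq_clock(rq);

	/* ... per-task / per-rq accounting would go here ... */

	task_rq_unlock(rq, p, &rf);
}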