From patchwork Tue Oct 27 16:49:48 2020
From: Ben Gardon <bgardon@google.com>
Date: Tue, 27 Oct 2020 09:49:48 -0700
Message-Id: <20201027164950.1057601-1-bgardon@google.com>
Subject: [PATCH 1/3] locking/rwlocks: Add contention detection for rwlocks
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Peter Shier, Peter Zijlstra,
    Ingo Molnar, Will Deacon, Waiman Long, Davidlohr Bueso, Ben Gardon

rwlocks do not currently have any facility to detect contention like
spinlocks do. In order to allow users of rwlocks to better manage
latency, add contention detection for queued rwlocks.

Signed-off-by: Ben Gardon <bgardon@google.com>
---
 include/asm-generic/qrwlock.h | 23 +++++++++++++++++------
 include/linux/rwlock.h        |  7 +++++++
 2 files changed, 24 insertions(+), 6 deletions(-)

diff --git a/include/asm-generic/qrwlock.h b/include/asm-generic/qrwlock.h
index 3aefde23dceab..c4be4673a6ab2 100644
--- a/include/asm-generic/qrwlock.h
+++ b/include/asm-generic/qrwlock.h
@@ -116,15 +116,26 @@ static inline void queued_write_unlock(struct qrwlock *lock)
 	smp_store_release(&lock->wlocked, 0);
 }
 
+/**
+ * queued_rwlock_is_contended - check if the lock is contended
+ * @lock : Pointer to queue rwlock structure
+ * Return: 1 if lock contended, 0 otherwise
+ */
+static inline int queued_rwlock_is_contended(struct qrwlock *lock)
+{
+	return arch_spin_is_locked(&lock->wait_lock);
+}
+
 /*
  * Remapping rwlock architecture specific functions to the corresponding
  * queue rwlock functions.
  */
-#define arch_read_lock(l)		queued_read_lock(l)
-#define arch_write_lock(l)		queued_write_lock(l)
-#define arch_read_trylock(l)		queued_read_trylock(l)
-#define arch_write_trylock(l)		queued_write_trylock(l)
-#define arch_read_unlock(l)		queued_read_unlock(l)
-#define arch_write_unlock(l)		queued_write_unlock(l)
+#define arch_read_lock(l)			queued_read_lock(l)
+#define arch_write_lock(l)			queued_write_lock(l)
+#define arch_read_trylock(l)			queued_read_trylock(l)
+#define arch_write_trylock(l)			queued_write_trylock(l)
+#define arch_read_unlock(l)			queued_read_unlock(l)
+#define arch_write_unlock(l)			queued_write_unlock(l)
+#define arch_rwlock_is_contended(l)		queued_rwlock_is_contended(l)
 
 #endif /* __ASM_GENERIC_QRWLOCK_H */
diff --git a/include/linux/rwlock.h b/include/linux/rwlock.h
index 3dcd617e65ae9..7ce9a51ae5c04 100644
--- a/include/linux/rwlock.h
+++ b/include/linux/rwlock.h
@@ -128,4 +128,11 @@ do {								\
 	1 : ({ local_irq_restore(flags); 0; });	\
 })
 
+#ifdef arch_rwlock_is_contended
+#define rwlock_is_contended(lock) \
+	 arch_rwlock_is_contended(&(lock)->raw_lock)
+#else
+#define rwlock_is_contended(lock)	((void)(lock), 0)
+#endif /* arch_rwlock_is_contended */
+
 #endif /* __LINUX_RWLOCK_H */
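
[Editor's note] The check in queued_rwlock_is_contended() works because
any CPU entering the qrwlock slowpath must first acquire the lock's
internal wait_lock, so arch_spin_is_locked() on that spinlock is a
cheap probe for queued waiters. Below is a minimal, hedged sketch (not
part of the patch) of how a lock holder might consume the new
rwlock_is_contended() wrapper; my_data_lock and do_one_item() are
invented names, not kernel APIs:

#include <linux/spinlock.h>

static DEFINE_RWLOCK(my_data_lock);	/* illustrative, not from the patch */

/* Process up to @count items, bailing out early if anyone is queued. */
static int process_items(int count)
{
	int i;

	write_lock(&my_data_lock);
	for (i = 0; i < count; i++) {
		do_one_item(i);		/* hypothetical per-item work */
		/* Expands to queued_rwlock_is_contended() on qrwlock
		 * architectures, and to a constant 0 elsewhere. */
		if (rwlock_is_contended(&my_data_lock))
			break;
	}
	write_unlock(&my_data_lock);
	return i;	/* caller can resume the batch from here */
}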
From patchwork Tue Oct 27 16:49:49 2020
From: Ben Gardon <bgardon@google.com>
Date: Tue, 27 Oct 2020 09:49:49 -0700
Message-Id: <20201027164950.1057601-2-bgardon@google.com>
In-Reply-To: <20201027164950.1057601-1-bgardon@google.com>
References: <20201027164950.1057601-1-bgardon@google.com>
Subject: [PATCH 2/3] sched: Add needbreak for rwlocks
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Peter Shier, Peter Zijlstra,
    Ingo Molnar, Will Deacon, Waiman Long, Davidlohr Bueso, Ben Gardon

Contention awareness while holding a spin lock is essential for
reducing latency when long running kernel operations can hold that
lock. Add the same contention detection interface for read/write spin
locks.

Signed-off-by: Ben Gardon <bgardon@google.com>
---
 include/linux/sched.h | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 063cd120b4593..77179160ec3ab 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1870,6 +1870,23 @@ static inline int spin_needbreak(spinlock_t *lock)
 #endif
 }
 
+/*
+ * Check if a rwlock is contended.
+ * Returns non-zero if there is another task waiting on the rwlock.
+ * Returns zero if the lock is not contended or the system / underlying
+ * rwlock implementation does not support contention detection.
+ * Technically does not depend on CONFIG_PREEMPTION, but a general need
+ * for low latency.
+ */
+static inline int rwlock_needbreak(rwlock_t *lock)
+{
+#ifdef CONFIG_PREEMPTION
+	return rwlock_is_contended(lock);
+#else
+	return 0;
+#endif
+}
+
 static __always_inline bool need_resched(void)
 {
 	return unlikely(tif_need_resched());
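
[Editor's note] rwlock_needbreak() deliberately mirrors spin_needbreak()
directly above it: under CONFIG_PREEMPTION it reports queued waiters via
rwlock_is_contended(), otherwise it compiles away to 0. Here is a hedged
sketch of the manual lock-break pattern this enables (the same pattern
spin_needbreak() callers use, and the one patch 3 packages into
cond_resched_rwlock_read()), with my_lock and scan_one() as invented
stand-ins:

#include <linux/spinlock.h>
#include <linux/sched.h>

static DEFINE_RWLOCK(my_lock);		/* stand-in, not a kernel symbol */

static void scan_table(int nr_entries)
{
	int i;

	read_lock(&my_lock);
	for (i = 0; i < nr_entries; i++) {
		scan_one(i);		/* hypothetical helper */
		if (rwlock_needbreak(&my_lock)) {
			/* Someone is queued on my_lock: back off briefly
			 * so they can make progress, then reacquire. */
			read_unlock(&my_lock);
			cpu_relax();
			read_lock(&my_lock);
		}
	}
	read_unlock(&my_lock);
}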
From patchwork Tue Oct 27 16:49:50 2020
From: Ben Gardon <bgardon@google.com>
Date: Tue, 27 Oct 2020 09:49:50 -0700
Message-Id: <20201027164950.1057601-3-bgardon@google.com>
In-Reply-To: <20201027164950.1057601-1-bgardon@google.com>
References: <20201027164950.1057601-1-bgardon@google.com>
Subject: [PATCH 3/3] sched: Add cond_resched_rwlock
To: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Paolo Bonzini, Sean Christopherson, Peter Shier, Peter Zijlstra,
    Ingo Molnar, Will Deacon, Waiman Long, Davidlohr Bueso, Ben Gardon

Rescheduling while holding a spin lock is essential for keeping long
running kernel operations running smoothly. Add the facility to
cond_resched rwlocks.

Signed-off-by: Ben Gardon <bgardon@google.com>
Acked-by: Davidlohr Bueso
Acked-by: Waiman Long
---
 include/linux/sched.h | 12 ++++++++++++
 kernel/sched/core.c   | 40 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 52 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 77179160ec3ab..2eb0c53fce115 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1841,12 +1841,24 @@ static inline int _cond_resched(void) { return 0; }
 })
 
 extern int __cond_resched_lock(spinlock_t *lock);
+extern int __cond_resched_rwlock_read(rwlock_t *lock);
+extern int __cond_resched_rwlock_write(rwlock_t *lock);
 
 #define cond_resched_lock(lock) ({				\
 	___might_sleep(__FILE__, __LINE__, PREEMPT_LOCK_OFFSET);\
 	__cond_resched_lock(lock);				\
 })
 
+#define cond_resched_rwlock_read(lock) ({			\
+	__might_sleep(__FILE__, __LINE__, PREEMPT_LOCK_OFFSET);	\
+	__cond_resched_rwlock_read(lock);			\
+})
+
+#define cond_resched_rwlock_write(lock) ({			\
+	__might_sleep(__FILE__, __LINE__, PREEMPT_LOCK_OFFSET);	\
+	__cond_resched_rwlock_write(lock);			\
+})
+
 static inline void cond_resched_rcu(void)
 {
 #if defined(CONFIG_DEBUG_ATOMIC_SLEEP) || !defined(CONFIG_PREEMPT_RCU)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d2003a7d5ab55..ac58e7829a063 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6152,6 +6152,46 @@ int __cond_resched_lock(spinlock_t *lock)
 }
 EXPORT_SYMBOL(__cond_resched_lock);
 
+int __cond_resched_rwlock_read(rwlock_t *lock)
+{
+	int resched = should_resched(PREEMPT_LOCK_OFFSET);
+	int ret = 0;
+
+	lockdep_assert_held(lock);
+
+	if (rwlock_needbreak(lock) || resched) {
+		read_unlock(lock);
+		if (resched)
+			preempt_schedule_common();
+		else
+			cpu_relax();
+		ret = 1;
+		read_lock(lock);
+	}
+	return ret;
+}
+EXPORT_SYMBOL(__cond_resched_rwlock_read);
+
+int __cond_resched_rwlock_write(rwlock_t *lock)
+{
+	int resched = should_resched(PREEMPT_LOCK_OFFSET);
+	int ret = 0;
+
+	lockdep_assert_held(lock);
+
+	if (rwlock_needbreak(lock) || resched) {
+		write_unlock(lock);
+		if (resched)
+			preempt_schedule_common();
+		else
+			cpu_relax();
+		ret = 1;
+		write_lock(lock);
+	}
+	return ret;
+}
+EXPORT_SYMBOL(__cond_resched_rwlock_write);
+
 /**
  * yield - yield the current processor to other threads.
  *
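
[Editor's note] These helpers return 1 when the lock was dropped and
re-taken, which callers must honor: any state derived under the lock is
stale after a break. Below is a hedged usage sketch in the style of the
long-running page table walks that motivated this series (KVM is the
Cc'd consumer); mmu_pages_lock, touch_page(), and nr_pages are invented
for illustration:

#include <linux/spinlock.h>
#include <linux/sched.h>

static DEFINE_RWLOCK(mmu_pages_lock);	/* invented for this sketch */

static void walk_pages(unsigned long nr_pages)
{
	unsigned long i;

	read_lock(&mmu_pages_lock);
	for (i = 0; i < nr_pages; i++) {
		touch_page(i);		/* hypothetical per-page work */
		/*
		 * Yields to both the scheduler and queued lock waiters.
		 * Returns 1 if mmu_pages_lock was dropped and re-taken,
		 * in which case anything cached under the lock must be
		 * revalidated before it is trusted again.
		 */
		if (cond_resched_rwlock_read(&mmu_pages_lock)) {
			/* lock was dropped: revalidate cached state here */
		}
	}
	read_unlock(&mmu_pages_lock);
}

cond_resched_rwlock_write() is used the same way around write-side
critical sections. Note that when the break is for contention alone,
both variants spin with cpu_relax() rather than sleeping, mirroring
__cond_resched_lock().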