From patchwork Fri Dec 2 17:16:16 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Brian Foster
X-Patchwork-Id: 13062997
From: Brian Foster <bfoster@redhat.com>
To: linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
    linux-kernel@vger.kernel.org
Cc: ikent@redhat.com, onestero@redhat.com, willy@infradead.org,
    ebiederm@redhat.com
Subject: [PATCH v3 1/5] pid: replace pidmap_lock with xarray lock
Date: Fri, 2 Dec 2022 12:16:16 -0500
Message-Id: <20221202171620.509140-2-bfoster@redhat.com>
In-Reply-To: <20221202171620.509140-1-bfoster@redhat.com>
References: <20221202171620.509140-1-bfoster@redhat.com>
MIME-Version: 1.0

As a first step to changing the struct pid tracking code from the idr
over to the xarray, replace the custom pidmap_lock spinlock with the
internal lock associated with the underlying xarray. This is
effectively equivalent to using idr_lock() and friends, but since the
goal is to disentangle from the idr, move directly to the underlying
xarray API.

Signed-off-by: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Ian Kent <ikent@redhat.com>
---
 kernel/pid.c | 79 ++++++++++++++++++++++++++++++++++--------------------
 1 file changed, 40 insertions(+), 39 deletions(-)

diff --git a/kernel/pid.c b/kernel/pid.c
index 3fbc5e46b721..3622f8b13143 100644
--- a/kernel/pid.c
+++ b/kernel/pid.c
@@ -86,22 +86,6 @@ struct pid_namespace init_pid_ns = {
 };
 EXPORT_SYMBOL_GPL(init_pid_ns);
 
-/*
- * Note: disable interrupts while the pidmap_lock is held as an
- * interrupt might come in and do read_lock(&tasklist_lock).
- *
- * If we don't disable interrupts there is a nasty deadlock between
- * detach_pid()->free_pid() and another cpu that does
- * spin_lock(&pidmap_lock) followed by an interrupt routine that does
- * read_lock(&tasklist_lock);
- *
- * After we clean up the tasklist_lock and know there are no
- * irq handlers that take it we can leave the interrupts enabled.
- * For now it is easier to be safe than to prove it can't happen.
- */
-
-static __cacheline_aligned_in_smp DEFINE_SPINLOCK(pidmap_lock);
-
 void put_pid(struct pid *pid)
 {
 	struct pid_namespace *ns;
@@ -129,10 +113,11 @@ void free_pid(struct pid *pid)
 	int i;
 	unsigned long flags;
 
-	spin_lock_irqsave(&pidmap_lock, flags);
 	for (i = 0; i <= pid->level; i++) {
 		struct upid *upid = pid->numbers + i;
 		struct pid_namespace *ns = upid->ns;
+
+		xa_lock_irqsave(&ns->idr.idr_rt, flags);
 		switch (--ns->pid_allocated) {
 		case 2:
 		case 1:
@@ -150,8 +135,8 @@ void free_pid(struct pid *pid)
 		}
 
 		idr_remove(&ns->idr, upid->nr);
+		xa_unlock_irqrestore(&ns->idr.idr_rt, flags);
 	}
-	spin_unlock_irqrestore(&pidmap_lock, flags);
 
 	call_rcu(&pid->rcu, delayed_put_pid);
 }
@@ -206,7 +191,7 @@ struct pid *alloc_pid(struct pid_namespace *ns, pid_t *set_tid,
 	}
 
 	idr_preload(GFP_KERNEL);
-	spin_lock_irq(&pidmap_lock);
+	xa_lock_irq(&tmp->idr.idr_rt);
 
 	if (tid) {
 		nr = idr_alloc(&tmp->idr, NULL, tid,
@@ -233,7 +218,7 @@ struct pid *alloc_pid(struct pid_namespace *ns, pid_t *set_tid,
 			nr = idr_alloc_cyclic(&tmp->idr, NULL, pid_min, pid_max,
 					      GFP_ATOMIC);
 		}
-		spin_unlock_irq(&pidmap_lock);
+		xa_unlock_irq(&tmp->idr.idr_rt);
 		idr_preload_end();
 
 		if (nr < 0) {
@@ -266,34 +251,38 @@ struct pid *alloc_pid(struct pid_namespace *ns, pid_t *set_tid,
 	INIT_HLIST_HEAD(&pid->inodes);
 
 	upid = pid->numbers + ns->level;
-	spin_lock_irq(&pidmap_lock);
-	if (!(ns->pid_allocated & PIDNS_ADDING))
-		goto out_unlock;
 	for ( ; upid >= pid->numbers; --upid) {
+		tmp = upid->ns;
+
+		xa_lock_irq(&tmp->idr.idr_rt);
+		if (tmp == ns && !(tmp->pid_allocated & PIDNS_ADDING)) {
+			xa_unlock_irq(&tmp->idr.idr_rt);
+			put_pid_ns(ns);
+			goto out_free;
+		}
+
 		/* Make the PID visible to find_pid_ns. */
-		idr_replace(&upid->ns->idr, pid, upid->nr);
-		upid->ns->pid_allocated++;
+		idr_replace(&tmp->idr, pid, upid->nr);
+		tmp->pid_allocated++;
+		xa_unlock_irq(&tmp->idr.idr_rt);
 	}
-	spin_unlock_irq(&pidmap_lock);
 
 	return pid;
 
-out_unlock:
-	spin_unlock_irq(&pidmap_lock);
-	put_pid_ns(ns);
-
 out_free:
-	spin_lock_irq(&pidmap_lock);
 	while (++i <= ns->level) {
 		upid = pid->numbers + i;
-		idr_remove(&upid->ns->idr, upid->nr);
-	}
+		tmp = upid->ns;
 
-	/* On failure to allocate the first pid, reset the state */
-	if (ns->pid_allocated == PIDNS_ADDING)
-		idr_set_cursor(&ns->idr, 0);
+		xa_lock_irq(&tmp->idr.idr_rt);
 
-	spin_unlock_irq(&pidmap_lock);
+		/* On failure to allocate the first pid, reset the state */
+		if (tmp == ns && tmp->pid_allocated == PIDNS_ADDING)
+			idr_set_cursor(&ns->idr, 0);
+
+		idr_remove(&tmp->idr, upid->nr);
+		xa_unlock_irq(&tmp->idr.idr_rt);
+	}
 
 	kmem_cache_free(ns->pid_cachep, pid);
 	return ERR_PTR(retval);
@@ -301,9 +290,9 @@ struct pid *alloc_pid(struct pid_namespace *ns, pid_t *set_tid,
 
 void disable_pid_allocation(struct pid_namespace *ns)
 {
-	spin_lock_irq(&pidmap_lock);
+	xa_lock_irq(&ns->idr.idr_rt);
 	ns->pid_allocated &= ~PIDNS_ADDING;
-	spin_unlock_irq(&pidmap_lock);
+	xa_unlock_irq(&ns->idr.idr_rt);
 }
 
 struct pid *find_pid_ns(int nr, struct pid_namespace *ns)
@@ -647,6 +636,18 @@ SYSCALL_DEFINE2(pidfd_open, pid_t, pid, unsigned int, flags)
 	return fd;
 }
 
+/*
+ * Note: disable interrupts while the xarray lock is held as an interrupt might
+ * come in and do read_lock(&tasklist_lock).
+ *
+ * If we don't disable interrupts there is a nasty deadlock between
+ * detach_pid()->free_pid() and another cpu that does xa_lock() followed by an
+ * interrupt routine that does read_lock(&tasklist_lock);
+ *
+ * After we clean up the tasklist_lock and know there are no irq handlers that
+ * take it we can leave the interrupts enabled. For now it is easier to be safe
+ * than to prove it can't happen.
+ */
 void __init pid_idr_init(void)
 {
 	/* Verify no one has done anything silly: */
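
For readers following along: the conversion above relies on the fact that a
struct idr is backed by an xarray (its idr_rt member), so the xarray's
embedded lock can be taken directly with the xa_lock_*() helpers in place of
an external spinlock. The snippet below is a minimal, self-contained sketch of
that pattern, not part of this patch; example_idr and example_store() are
hypothetical names used only for illustration.

	#include <linux/idr.h>
	#include <linux/xarray.h>

	static DEFINE_IDR(example_idr);

	static int example_store(void *ptr)
	{
		unsigned long flags;
		int id;

		/* Preallocate outside the lock so the allocation itself
		 * can run atomically while the lock is held. */
		idr_preload(GFP_KERNEL);
		/* idr_rt is the xarray backing the IDR; take its lock
		 * directly, with interrupts disabled as discussed above. */
		xa_lock_irqsave(&example_idr.idr_rt, flags);
		id = idr_alloc(&example_idr, ptr, 0, 0, GFP_ATOMIC);
		xa_unlock_irqrestore(&example_idr.idr_rt, flags);
		idr_preload_end();

		return id;
	}

This mirrors the shape of the alloc_pid() hunks above: preload with
GFP_KERNEL outside the lock, allocate with GFP_ATOMIC under the xarray lock
with interrupts disabled, then unlock and end the preload section.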