From patchwork Mon Nov 18 17:52:25 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Andrey Grodzovsky
X-Patchwork-Id: 11250085
From: Andrey Grodzovsky
Subject: [PATCH v2] drm/scheduler: Avoid accessing freed bad job.
Date: Mon, 18 Nov 2019 12:52:25 -0500
Message-ID: <1574099545-20430-1-git-send-email-andrey.grodzovsky@amd.com>
X-Mailer: git-send-email 2.7.4
Cc: Emily.Deng@amd.com, amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, Christian.Koenig@amd.com

Problem:
Due to a race between drm_sched_cleanup_jobs in the sched thread and
drm_sched_job_timedout in the timeout work there is a possibility that the
bad job was already freed while still being accessed from the timeout
thread.

Fix:
Instead of just peeking at the bad job in the mirror list, remove it from
the list under lock and then put it back later, when we are guaranteed that
no race with the main sched thread is possible, which is after the thread
is parked.

v2: Lock around processing ring_mirror_list in drm_sched_cleanup_jobs.

Signed-off-by: Andrey Grodzovsky
Tested-by: Emily Deng
---
 drivers/gpu/drm/scheduler/sched_main.c | 44 +++++++++++++++++++++++++++++-----
 1 file changed, 38 insertions(+), 6 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 80ddbdf..b05b210 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -287,10 +287,24 @@ static void drm_sched_job_timedout(struct work_struct *work)
 	unsigned long flags;
 
 	sched = container_of(work, struct drm_gpu_scheduler, work_tdr.work);
+
+	/*
+	 * Protects against concurrent deletion in drm_sched_cleanup_jobs that
+	 * is already in progress.
+	 */
+	spin_lock_irqsave(&sched->job_list_lock, flags);
 	job = list_first_entry_or_null(&sched->ring_mirror_list,
 				       struct drm_sched_job, node);
 
 	if (job) {
+		/*
+		 * Remove the bad job so it cannot be freed by already in progress
+		 * drm_sched_cleanup_jobs. It will be reinserted back after sched->thread
+		 * is parked at which point it's safe.
+		 */
+		list_del_init(&job->node);
+		spin_unlock_irqrestore(&sched->job_list_lock, flags);
+
 		job->sched->ops->timedout_job(job);
 
 		/*
@@ -302,6 +316,8 @@ static void drm_sched_job_timedout(struct work_struct *work)
 			sched->free_guilty = false;
 		}
 	}
+	else
+		spin_unlock_irqrestore(&sched->job_list_lock, flags);
 
 	spin_lock_irqsave(&sched->job_list_lock, flags);
 	drm_sched_start_timeout(sched);
@@ -373,6 +389,19 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad)
 	kthread_park(sched->thread);
 
 	/*
+	 * Reinsert back the bad job here - now it's safe as drm_sched_cleanup_jobs
+	 * cannot race against us and release the bad job at this point - we parked
+	 * (waited for) any in progress (earlier) cleanups and any later ones will
+	 * bail out due to sched->thread being parked.
+	 */
+	if (bad && bad->sched == sched)
+		/*
+		 * Add at the head of the queue to reflect it was the earliest
+		 * job extracted.
+		 */
+		list_add(&bad->node, &sched->ring_mirror_list);
+
+	/*
 	 * Iterate the job list from later to earlier one and either deactive
 	 * their HW callbacks or remove them from mirror list if they already
 	 * signaled.
@@ -656,16 +685,19 @@ static void drm_sched_cleanup_jobs(struct drm_gpu_scheduler *sched)
 	    __kthread_should_park(sched->thread))
 		return;
 
-
-	while (!list_empty(&sched->ring_mirror_list)) {
+	/* See drm_sched_job_timedout for why the locking is here */
+	while (true) {
 		struct drm_sched_job *job;
 
-		job = list_first_entry(&sched->ring_mirror_list,
-				       struct drm_sched_job, node);
-		if (!dma_fence_is_signaled(&job->s_fence->finished))
+		spin_lock_irqsave(&sched->job_list_lock, flags);
+		job = list_first_entry_or_null(&sched->ring_mirror_list,
+				       struct drm_sched_job, node);
+
+		if (!job || !dma_fence_is_signaled(&job->s_fence->finished)) {
+			spin_unlock_irqrestore(&sched->job_list_lock, flags);
 			break;
+		}
 
-		spin_lock_irqsave(&sched->job_list_lock, flags);
 		/* remove job from ring_mirror_list */
 		list_del_init(&job->node);
 		spin_unlock_irqrestore(&sched->job_list_lock, flags);
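As a reading aid, the core of the fix is an ordering guarantee rather than any new data structure: the suspect job is unlinked from the shared list while the lock is held, the thread that could free it is parked, and only then is the job linked back in. Below is a minimal, self-contained sketch of that ordering in plain C with pthreads; it is illustration only, not part of the patch, and every name in it (job, job_list, job_lock, park_cleanup_thread, handle_timeout) is a made-up stand-in for ring_mirror_list, job_list_lock and kthread_park(), not the scheduler's actual API.

/*
 * Illustration only -- not part of the patch. Sketch of the ordering the
 * fix relies on: pop the suspect job off the shared list under the lock,
 * park the thread that could free it, then make it visible again.
 */
#include <pthread.h>
#include <stddef.h>

struct job {
	struct job *prev, *next;		/* stands in for struct list_head node */
};

static struct job job_list = { &job_list, &job_list };		/* ring_mirror_list */
static pthread_mutex_t job_lock = PTHREAD_MUTEX_INITIALIZER;	/* job_list_lock */

static void list_remove(struct job *j)
{
	j->prev->next = j->next;
	j->next->prev = j->prev;
	j->prev = j->next = j;			/* like list_del_init() */
}

static void list_add_head(struct job *j, struct job *head)
{
	j->next = head->next;
	j->prev = head;
	head->next->prev = j;
	head->next = j;
}

/* Stand-in for kthread_park(sched->thread): once this returns, the cleanup
 * thread is guaranteed not to be walking or freeing entries of job_list. */
static void park_cleanup_thread(void)
{
}

static void handle_timeout(void)
{
	struct job *bad;

	pthread_mutex_lock(&job_lock);
	bad = (job_list.next != &job_list) ? job_list.next : NULL;
	if (bad)
		list_remove(bad);		/* cleanup can no longer free it */
	pthread_mutex_unlock(&job_lock);

	if (!bad)
		return;

	/* ... handle the timed out job ... */

	park_cleanup_thread();			/* no concurrent cleanup past this point */
	list_add_head(bad, &job_list);		/* safe to put it back at the head */
}

int main(void)
{
	handle_timeout();			/* with an empty list this is a no-op */
	return 0;
}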