From patchwork Thu Dec 21 09:42:50 2017
X-Patchwork-Submitter: "He, Hongbo"
X-Patchwork-Id: 10127239
From: Roger He
To: dri-devel@lists.freedesktop.org
Subject: [PATCH 2/5] drm/ttm: use an operation ctx for ttm_tt_populate in
 ttm_bo_driver
Date: Thu, 21 Dec 2017 17:42:50 +0800
Message-ID: <1513849373-7970-2-git-send-email-Hongbo.He@amd.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1513849373-7970-1-git-send-email-Hongbo.He@amd.com>
References: <1513849373-7970-1-git-send-email-Hongbo.He@amd.com>
Cc: Roger He

Forward the operation context to ttm_tt_populate as well; the ultimate
goal is to enable swapout for per-VM BOs.

Change-Id: If8dfa0f500429d1420e0da67eb6901f0bfbca57b
Reviewed-by: Christian König
Signed-off-by: Roger He
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c     |  7 ++++---
 drivers/gpu/drm/ast/ast_ttm.c               |  5 +++--
 drivers/gpu/drm/cirrus/cirrus_ttm.c         |  5 +++--
 drivers/gpu/drm/hisilicon/hibmc/hibmc_ttm.c |  5 +++--
 drivers/gpu/drm/mgag200/mgag200_ttm.c       |  5 +++--
 drivers/gpu/drm/nouveau/nouveau_bo.c        |  8 ++++----
 drivers/gpu/drm/qxl/qxl_ttm.c               |  5 +++--
 drivers/gpu/drm/radeon/radeon_ttm.c         |  9 +++++----
 drivers/gpu/drm/ttm/ttm_agp_backend.c       |  4 ++--
 drivers/gpu/drm/ttm/ttm_bo_util.c           | 11 ++++++++---
 drivers/gpu/drm/ttm/ttm_bo_vm.c             |  7 ++++++-
 drivers/gpu/drm/ttm/ttm_page_alloc.c        | 13 +++++--------
 drivers/gpu/drm/ttm/ttm_page_alloc_dma.c    | 11 ++++-------
 drivers/gpu/drm/ttm/ttm_tt.c                |  6 +++++-
 drivers/gpu/drm/virtio/virtgpu_object.c     |  6 +++++-
 drivers/gpu/drm/virtio/virtgpu_ttm.c        |  5 +++--
 drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c      | 13 +++++--------
 drivers/gpu/drm/vmwgfx/vmwgfx_mob.c         | 13 +++++++++++--
 include/drm/ttm/ttm_bo_driver.h             |  5 +++--
 include/drm/ttm/ttm_page_alloc.h            | 11 +++++++----
 20 files changed, 92 insertions(+), 62 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index f1b7d98..044f5b5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -990,7 +990,8 @@ static struct ttm_tt *amdgpu_ttm_tt_create(struct ttm_bo_device *bdev,
         return &gtt->ttm.ttm;
 }
 
-static int amdgpu_ttm_tt_populate(struct ttm_tt *ttm)
+static int amdgpu_ttm_tt_populate(struct ttm_tt *ttm,
+                        struct ttm_operation_ctx *ctx)
 {
         struct amdgpu_device *adev = amdgpu_ttm_adev(ttm->bdev);
         struct amdgpu_ttm_tt *gtt = (void *)ttm;
@@ -1018,11 +1019,11 @@ static int amdgpu_ttm_tt_populate(struct ttm_tt *ttm)
 
 #ifdef CONFIG_SWIOTLB
         if (swiotlb_nr_tbl()) {
-                return ttm_dma_populate(&gtt->ttm, adev->dev);
+                return ttm_dma_populate(&gtt->ttm, adev->dev, ctx);
         }
 #endif
 
-        return ttm_populate_and_map_pages(adev->dev, &gtt->ttm);
+        return ttm_populate_and_map_pages(adev->dev, &gtt->ttm, ctx);
 }
 
 static void amdgpu_ttm_tt_unpopulate(struct ttm_tt *ttm)
diff --git a/drivers/gpu/drm/ast/ast_ttm.c b/drivers/gpu/drm/ast/ast_ttm.c
index 28da7c2..1413e94 100644
--- a/drivers/gpu/drm/ast/ast_ttm.c
+++ b/drivers/gpu/drm/ast/ast_ttm.c
@@ -216,9 +216,10 @@ static struct ttm_tt *ast_ttm_tt_create(struct ttm_bo_device *bdev,
         return tt;
 }
 
-static int ast_ttm_tt_populate(struct ttm_tt *ttm)
+static int ast_ttm_tt_populate(struct ttm_tt *ttm,
+                        struct ttm_operation_ctx *ctx)
 {
-        return ttm_pool_populate(ttm);
+        return ttm_pool_populate(ttm, ctx);
 }
 
 static void ast_ttm_tt_unpopulate(struct ttm_tt *ttm)
diff --git a/drivers/gpu/drm/cirrus/cirrus_ttm.c b/drivers/gpu/drm/cirrus/cirrus_ttm.c
index 2a5b54d..65ae960 100644
--- a/drivers/gpu/drm/cirrus/cirrus_ttm.c
+++ b/drivers/gpu/drm/cirrus/cirrus_ttm.c
@@ -216,9 +216,10 @@ static struct ttm_tt *cirrus_ttm_tt_create(struct ttm_bo_device *bdev,
         return tt;
 }
 
-static int cirrus_ttm_tt_populate(struct ttm_tt *ttm)
+static int cirrus_ttm_tt_populate(struct ttm_tt *ttm,
+                        struct ttm_operation_ctx *ctx)
 {
-        return ttm_pool_populate(ttm);
+        return ttm_pool_populate(ttm, ctx);
 }
 
 static void cirrus_ttm_tt_unpopulate(struct ttm_tt *ttm)
diff --git a/drivers/gpu/drm/hisilicon/hibmc/hibmc_ttm.c b/drivers/gpu/drm/hisilicon/hibmc/hibmc_ttm.c
index ab4ee59..8516e00 100644
--- a/drivers/gpu/drm/hisilicon/hibmc/hibmc_ttm.c
+++ b/drivers/gpu/drm/hisilicon/hibmc/hibmc_ttm.c
@@ -223,9 +223,10 @@ static struct ttm_tt *hibmc_ttm_tt_create(struct ttm_bo_device *bdev,
         return tt;
 }
 
-static int hibmc_ttm_tt_populate(struct ttm_tt *ttm)
+static int hibmc_ttm_tt_populate(struct ttm_tt *ttm,
+                        struct ttm_operation_ctx *ctx)
 {
-        return ttm_pool_populate(ttm);
+        return ttm_pool_populate(ttm, ctx);
 }
 
 static void hibmc_ttm_tt_unpopulate(struct ttm_tt *ttm)
diff --git a/drivers/gpu/drm/mgag200/mgag200_ttm.c b/drivers/gpu/drm/mgag200/mgag200_ttm.c
index f03da63..6fa8076 100644
--- a/drivers/gpu/drm/mgag200/mgag200_ttm.c
+++ b/drivers/gpu/drm/mgag200/mgag200_ttm.c
@@ -216,9 +216,10 @@ static struct ttm_tt *mgag200_ttm_tt_create(struct ttm_bo_device *bdev,
         return tt;
 }
 
-static int mgag200_ttm_tt_populate(struct ttm_tt *ttm)
+static int mgag200_ttm_tt_populate(struct ttm_tt *ttm,
+                        struct ttm_operation_ctx *ctx)
 {
-        return ttm_pool_populate(ttm);
+        return ttm_pool_populate(ttm, ctx);
 }
 
 static void mgag200_ttm_tt_unpopulate(struct ttm_tt *ttm)
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
index 6b6fb20..b141c27 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -1547,7 +1547,7 @@ nouveau_ttm_fault_reserve_notify(struct ttm_buffer_object *bo)
 }
 
 static int
-nouveau_ttm_tt_populate(struct ttm_tt *ttm)
+nouveau_ttm_tt_populate(struct ttm_tt *ttm, struct ttm_operation_ctx *ctx)
 {
         struct ttm_dma_tt *ttm_dma = (void *)ttm;
         struct nouveau_drm *drm;
@@ -1572,17 +1572,17 @@ nouveau_ttm_tt_populate(struct ttm_tt *ttm)
 
 #if IS_ENABLED(CONFIG_AGP)
         if (drm->agp.bridge) {
-                return ttm_agp_tt_populate(ttm);
+                return ttm_agp_tt_populate(ttm, ctx);
         }
 #endif
 
 #if IS_ENABLED(CONFIG_SWIOTLB) && IS_ENABLED(CONFIG_X86)
         if (swiotlb_nr_tbl()) {
-                return ttm_dma_populate((void *)ttm, dev);
+                return ttm_dma_populate((void *)ttm, dev, ctx);
         }
 #endif
 
-        r = ttm_pool_populate(ttm);
+        r = ttm_pool_populate(ttm, ctx);
         if (r) {
                 return r;
         }
diff --git a/drivers/gpu/drm/qxl/qxl_ttm.c b/drivers/gpu/drm/qxl/qxl_ttm.c
index 78ce118..f72ce3b 100644
--- a/drivers/gpu/drm/qxl/qxl_ttm.c
+++ b/drivers/gpu/drm/qxl/qxl_ttm.c
@@ -291,14 +291,15 @@ static struct ttm_backend_func qxl_backend_func = {
         .destroy = &qxl_ttm_backend_destroy,
 };
 
-static int qxl_ttm_tt_populate(struct ttm_tt *ttm)
+static int qxl_ttm_tt_populate(struct ttm_tt *ttm,
+                        struct ttm_operation_ctx *ctx)
 {
         int r;
 
         if (ttm->state != tt_unpopulated)
                 return 0;
 
-        r = ttm_pool_populate(ttm);
+        r = ttm_pool_populate(ttm, ctx);
         if (r)
                 return r;
diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
index 557fd79..9424b81 100644
--- a/drivers/gpu/drm/radeon/radeon_ttm.c
+++ b/drivers/gpu/drm/radeon/radeon_ttm.c
@@ -721,7 +721,8 @@ static struct radeon_ttm_tt *radeon_ttm_tt_to_gtt(struct ttm_tt *ttm)
         return (struct radeon_ttm_tt *)ttm;
 }
 
-static int radeon_ttm_tt_populate(struct ttm_tt *ttm)
+static int radeon_ttm_tt_populate(struct ttm_tt *ttm,
+                        struct ttm_operation_ctx *ctx)
 {
         struct radeon_ttm_tt *gtt = radeon_ttm_tt_to_gtt(ttm);
         struct radeon_device *rdev;
@@ -750,17 +751,17 @@ static int radeon_ttm_tt_populate(struct ttm_tt *ttm)
 
         rdev = radeon_get_rdev(ttm->bdev);
 #if IS_ENABLED(CONFIG_AGP)
         if (rdev->flags & RADEON_IS_AGP) {
-                return ttm_agp_tt_populate(ttm);
+                return ttm_agp_tt_populate(ttm, ctx);
         }
 #endif
 
 #ifdef CONFIG_SWIOTLB
         if (swiotlb_nr_tbl()) {
-                return ttm_dma_populate(&gtt->ttm, rdev->dev);
+                return ttm_dma_populate(&gtt->ttm, rdev->dev, ctx);
         }
 #endif
 
-        return ttm_populate_and_map_pages(rdev->dev, &gtt->ttm);
+        return ttm_populate_and_map_pages(rdev->dev, &gtt->ttm, ctx);
 }
 
 static void radeon_ttm_tt_unpopulate(struct ttm_tt *ttm)
diff --git a/drivers/gpu/drm/ttm/ttm_agp_backend.c b/drivers/gpu/drm/ttm/ttm_agp_backend.c
index 028ab60..3e795a0 100644
--- a/drivers/gpu/drm/ttm/ttm_agp_backend.c
+++ b/drivers/gpu/drm/ttm/ttm_agp_backend.c
@@ -133,12 +133,12 @@ struct ttm_tt *ttm_agp_tt_create(struct ttm_bo_device *bdev,
 }
 EXPORT_SYMBOL(ttm_agp_tt_create);
 
-int ttm_agp_tt_populate(struct ttm_tt *ttm)
+int ttm_agp_tt_populate(struct ttm_tt *ttm, struct ttm_operation_ctx *ctx)
 {
         if (ttm->state != tt_unpopulated)
                 return 0;
 
-        return ttm_pool_populate(ttm);
+        return ttm_pool_populate(ttm, ctx);
 }
 EXPORT_SYMBOL(ttm_agp_tt_populate);
 
diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
index 6e353df..b7eb507 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
@@ -376,7 +376,7 @@ int ttm_bo_move_memcpy(struct ttm_buffer_object *bo,
          * TTM might be null for moves within the same region.
          */
         if (ttm && ttm->state == tt_unpopulated) {
-                ret = ttm->bdev->driver->ttm_tt_populate(ttm);
+                ret = ttm->bdev->driver->ttm_tt_populate(ttm, ctx);
                 if (ret)
                         goto out1;
         }
@@ -545,14 +545,19 @@ static int ttm_bo_kmap_ttm(struct ttm_buffer_object *bo,
                            unsigned long num_pages,
                            struct ttm_bo_kmap_obj *map)
 {
-        struct ttm_mem_reg *mem = &bo->mem; pgprot_t prot;
+        struct ttm_mem_reg *mem = &bo->mem;
+        struct ttm_operation_ctx ctx = {
+                .interruptible = false,
+                .no_wait_gpu = false
+        };
         struct ttm_tt *ttm = bo->ttm;
+        pgprot_t prot;
         int ret;
 
         BUG_ON(!ttm);
 
         if (ttm->state == tt_unpopulated) {
-                ret = ttm->bdev->driver->ttm_tt_populate(ttm);
+                ret = ttm->bdev->driver->ttm_tt_populate(ttm, &ctx);
                 if (ret)
                         return ret;
         }
diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
index c8ebb75..65dfcdd 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
@@ -215,12 +215,17 @@ static int ttm_bo_vm_fault(struct vm_fault *vmf)
                 cvma.vm_page_prot = ttm_io_prot(bo->mem.placement,
                                                 cvma.vm_page_prot);
         } else {
+                struct ttm_operation_ctx ctx = {
+                        .interruptible = false,
+                        .no_wait_gpu = false
+                };
+
                 ttm = bo->ttm;
                 cvma.vm_page_prot = ttm_io_prot(bo->mem.placement,
                                                 cvma.vm_page_prot);
 
                 /* Allocate all page at once, most common usage */
-                if (ttm->bdev->driver->ttm_tt_populate(ttm)) {
+                if (ttm->bdev->driver->ttm_tt_populate(ttm, &ctx)) {
                         retval = VM_FAULT_OOM;
                         goto out_io_unlock;
                 }
diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc.c b/drivers/gpu/drm/ttm/ttm_page_alloc.c
index 8f93ff3..f1a3d55 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc.c
@@ -1058,13 +1058,9 @@ void ttm_page_alloc_fini(void)
         _manager = NULL;
 }
 
-int ttm_pool_populate(struct ttm_tt *ttm)
+int ttm_pool_populate(struct ttm_tt *ttm, struct ttm_operation_ctx *ctx)
 {
         struct ttm_mem_global *mem_glob = ttm->glob->mem_glob;
-        struct ttm_operation_ctx ctx = {
-                .interruptible = false,
-                .no_wait_gpu = false
-        };
         unsigned i;
         int ret;
 
@@ -1080,7 +1076,7 @@ int ttm_pool_populate(struct ttm_tt *ttm)
 
         for (i = 0; i < ttm->num_pages; ++i) {
                 ret = ttm_mem_global_alloc_page(mem_glob, ttm->pages[i],
-                                                PAGE_SIZE, &ctx);
+                                                PAGE_SIZE, ctx);
                 if (unlikely(ret != 0)) {
                         ttm_pool_unpopulate(ttm);
                         return -ENOMEM;
@@ -1117,12 +1113,13 @@ void ttm_pool_unpopulate(struct ttm_tt *ttm)
 }
 EXPORT_SYMBOL(ttm_pool_unpopulate);
 
-int ttm_populate_and_map_pages(struct device *dev, struct ttm_dma_tt *tt)
+int ttm_populate_and_map_pages(struct device *dev, struct ttm_dma_tt *tt,
+                        struct ttm_operation_ctx *ctx)
 {
         unsigned i, j;
         int r;
 
-        r = ttm_pool_populate(&tt->ttm);
+        r = ttm_pool_populate(&tt->ttm, ctx);
         if (r)
                 return r;
diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
index 8aac86a..3ac5391 100644
--- a/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
+++ b/drivers/gpu/drm/ttm/ttm_page_alloc_dma.c
@@ -923,14 +923,11 @@ static gfp_t ttm_dma_pool_gfp_flags(struct ttm_dma_tt *ttm_dma, bool huge)
  * On success pages list will hold count number of correctly
  * cached pages. On failure will hold the negative return value (-ENOMEM, etc).
  */
-int ttm_dma_populate(struct ttm_dma_tt *ttm_dma, struct device *dev)
+int ttm_dma_populate(struct ttm_dma_tt *ttm_dma, struct device *dev,
+                        struct ttm_operation_ctx *ctx)
 {
         struct ttm_tt *ttm = &ttm_dma->ttm;
         struct ttm_mem_global *mem_glob = ttm->glob->mem_glob;
-        struct ttm_operation_ctx ctx = {
-                .interruptible = false,
-                .no_wait_gpu = false
-        };
         unsigned long num_pages = ttm->num_pages;
         struct dma_pool *pool;
         enum pool_type type;
@@ -966,7 +963,7 @@ int ttm_dma_populate(struct ttm_dma_tt *ttm_dma, struct device *dev)
                         break;
 
                 ret = ttm_mem_global_alloc_page(mem_glob, ttm->pages[i],
-                                                pool->size, &ctx);
+                                                pool->size, ctx);
                 if (unlikely(ret != 0)) {
                         ttm_dma_unpopulate(ttm_dma, dev);
                         return -ENOMEM;
@@ -1002,7 +999,7 @@ int ttm_dma_populate(struct ttm_dma_tt *ttm_dma, struct device *dev)
                 }
 
                 ret = ttm_mem_global_alloc_page(mem_glob, ttm->pages[i],
-                                                pool->size, &ctx);
+                                                pool->size, ctx);
                 if (unlikely(ret != 0)) {
                         ttm_dma_unpopulate(ttm_dma, dev);
                         return -ENOMEM;
diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c
index 8ebc8d3..b48d7a0 100644
--- a/drivers/gpu/drm/ttm/ttm_tt.c
+++ b/drivers/gpu/drm/ttm/ttm_tt.c
@@ -263,6 +263,10 @@ void ttm_tt_unbind(struct ttm_tt *ttm)
 
 int ttm_tt_bind(struct ttm_tt *ttm, struct ttm_mem_reg *bo_mem)
 {
+        struct ttm_operation_ctx ctx = {
+                .interruptible = false,
+                .no_wait_gpu = false
+        };
         int ret = 0;
 
         if (!ttm)
@@ -271,7 +275,7 @@ int ttm_tt_bind(struct ttm_tt *ttm, struct ttm_mem_reg *bo_mem)
         if (ttm->state == tt_bound)
                 return 0;
 
-        ret = ttm->bdev->driver->ttm_tt_populate(ttm);
+        ret = ttm->bdev->driver->ttm_tt_populate(ttm, &ctx);
         if (ret)
                 return ret;
diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c b/drivers/gpu/drm/virtio/virtgpu_object.c
index 6f66b73..0b90cdb 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -124,13 +124,17 @@ int virtio_gpu_object_get_sg_table(struct virtio_gpu_device *qdev,
         int ret;
         struct page **pages = bo->tbo.ttm->pages;
         int nr_pages = bo->tbo.num_pages;
+        struct ttm_operation_ctx ctx = {
+                .interruptible = false,
+                .no_wait_gpu = false
+        };
 
         /* wtf swapping */
         if (bo->pages)
                 return 0;
 
         if (bo->tbo.ttm->state == tt_unpopulated)
-                bo->tbo.ttm->bdev->driver->ttm_tt_populate(bo->tbo.ttm);
+                bo->tbo.ttm->bdev->driver->ttm_tt_populate(bo->tbo.ttm, &ctx);
         bo->pages = kmalloc(sizeof(struct sg_table), GFP_KERNEL);
         if (!bo->pages)
                 goto out;
diff --git a/drivers/gpu/drm/virtio/virtgpu_ttm.c b/drivers/gpu/drm/virtio/virtgpu_ttm.c
index 488c6bd..f6118f5 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ttm.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ttm.c
@@ -324,12 +324,13 @@ static struct ttm_backend_func virtio_gpu_backend_func = {
         .destroy = &virtio_gpu_ttm_backend_destroy,
 };
 
-static int virtio_gpu_ttm_tt_populate(struct ttm_tt *ttm)
+static int virtio_gpu_ttm_tt_populate(struct ttm_tt *ttm,
+                        struct ttm_operation_ctx *ctx)
 {
         if (ttm->state != tt_unpopulated)
                 return 0;
 
-        return ttm_pool_populate(ttm);
+        return ttm_pool_populate(ttm, ctx);
 }
 
 static void virtio_gpu_ttm_tt_unpopulate(struct ttm_tt *ttm)
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c b/drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c
index ef97542..90b0d6b 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_buffer.c
@@ -635,16 +635,12 @@ static void vmw_ttm_destroy(struct ttm_tt *ttm)
 }
 
 
-static int vmw_ttm_populate(struct ttm_tt *ttm)
+static int vmw_ttm_populate(struct ttm_tt *ttm, struct ttm_operation_ctx *ctx)
 {
         struct vmw_ttm_tt *vmw_tt =
                 container_of(ttm, struct vmw_ttm_tt, dma_ttm.ttm);
         struct vmw_private *dev_priv = vmw_tt->dev_priv;
         struct ttm_mem_global *glob = vmw_mem_glob(dev_priv);
-        struct ttm_operation_ctx ctx = {
-                .interruptible = true,
-                .no_wait_gpu = false
-        };
         int ret;
 
         if (ttm->state != tt_unpopulated)
@@ -653,15 +649,16 @@ static int vmw_ttm_populate(struct ttm_tt *ttm)
         if (dev_priv->map_mode == vmw_dma_alloc_coherent) {
                 size_t size =
                         ttm_round_pot(ttm->num_pages * sizeof(dma_addr_t));
-                ret = ttm_mem_global_alloc(glob, size, &ctx);
+                ret = ttm_mem_global_alloc(glob, size, ctx);
                 if (unlikely(ret != 0))
                         return ret;
 
-                ret = ttm_dma_populate(&vmw_tt->dma_ttm, dev_priv->dev->dev);
+                ret = ttm_dma_populate(&vmw_tt->dma_ttm, dev_priv->dev->dev,
+                                       ctx);
                 if (unlikely(ret != 0))
                         ttm_mem_global_free(glob, size);
         } else
-                ret = ttm_pool_populate(ttm);
+                ret = ttm_pool_populate(ttm, ctx);
 
         return ret;
 }
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_mob.c b/drivers/gpu/drm/vmwgfx/vmwgfx_mob.c
index b17f08f..736ca47 100644
--- a/drivers/gpu/drm/vmwgfx/vmwgfx_mob.c
+++ b/drivers/gpu/drm/vmwgfx/vmwgfx_mob.c
@@ -240,6 +240,10 @@ static int vmw_otable_batch_setup(struct vmw_private *dev_priv,
         unsigned long offset;
         unsigned long bo_size;
         struct vmw_otable *otables = batch->otables;
+        struct ttm_operation_ctx ctx = {
+                .interruptible = false,
+                .no_wait_gpu = false
+        };
         SVGAOTableType i;
         int ret;
 
@@ -264,7 +268,7 @@ static int vmw_otable_batch_setup(struct vmw_private *dev_priv,
         ret = ttm_bo_reserve(batch->otable_bo, false, true, NULL);
         BUG_ON(ret != 0);
 
-        ret = vmw_bo_driver.ttm_tt_populate(batch->otable_bo->ttm);
+        ret = vmw_bo_driver.ttm_tt_populate(batch->otable_bo->ttm, &ctx);
         if (unlikely(ret != 0))
                 goto out_unreserve;
         ret = vmw_bo_map_dma(batch->otable_bo);
@@ -430,6 +434,11 @@ static int vmw_mob_pt_populate(struct vmw_private *dev_priv,
                                struct vmw_mob *mob)
 {
         int ret;
+        struct ttm_operation_ctx ctx = {
+                .interruptible = false,
+                .no_wait_gpu = false
+        };
+
         BUG_ON(mob->pt_bo != NULL);
 
         ret = ttm_bo_create(&dev_priv->bdev, mob->num_pages * PAGE_SIZE,
@@ -442,7 +451,7 @@ static int vmw_mob_pt_populate(struct vmw_private *dev_priv,
         ret = ttm_bo_reserve(mob->pt_bo, false, true, NULL);
         BUG_ON(ret != 0);
 
-        ret = vmw_bo_driver.ttm_tt_populate(mob->pt_bo->ttm);
+        ret = vmw_bo_driver.ttm_tt_populate(mob->pt_bo->ttm, &ctx);
         if (unlikely(ret != 0))
                 goto out_unreserve;
         ret = vmw_bo_map_dma(mob->pt_bo);
diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h
index 934fecf..84860ec 100644
--- a/include/drm/ttm/ttm_bo_driver.h
+++ b/include/drm/ttm/ttm_bo_driver.h
@@ -352,7 +352,8 @@ struct ttm_bo_driver {
          * Returns:
          * -ENOMEM: Out of memory.
          */
-        int (*ttm_tt_populate)(struct ttm_tt *ttm);
+        int (*ttm_tt_populate)(struct ttm_tt *ttm,
+                        struct ttm_operation_ctx *ctx);
 
         /**
          * ttm_tt_unpopulate
@@ -1077,7 +1078,7 @@ struct ttm_tt *ttm_agp_tt_create(struct ttm_bo_device *bdev,
                                  struct agp_bridge_data *bridge,
                                  unsigned long size, uint32_t page_flags,
                                  struct page *dummy_read_page);
-int ttm_agp_tt_populate(struct ttm_tt *ttm);
+int ttm_agp_tt_populate(struct ttm_tt *ttm, struct ttm_operation_ctx *ctx);
 void ttm_agp_tt_unpopulate(struct ttm_tt *ttm);
 #endif
diff --git a/include/drm/ttm/ttm_page_alloc.h b/include/drm/ttm/ttm_page_alloc.h
index 5938113..4d9b019 100644
--- a/include/drm/ttm/ttm_page_alloc.h
+++ b/include/drm/ttm/ttm_page_alloc.h
@@ -47,7 +47,7 @@ void ttm_page_alloc_fini(void);
  *
  * Add backing pages to all of @ttm
  */
-int ttm_pool_populate(struct ttm_tt *ttm);
+int ttm_pool_populate(struct ttm_tt *ttm, struct ttm_operation_ctx *ctx);
 
 /**
  * ttm_pool_unpopulate:
@@ -61,7 +61,8 @@ void ttm_pool_unpopulate(struct ttm_tt *ttm);
 /**
  * Populates and DMA maps pages to fullfil a ttm_dma_populate() request
  */
-int ttm_populate_and_map_pages(struct device *dev, struct ttm_dma_tt *tt);
+int ttm_populate_and_map_pages(struct device *dev, struct ttm_dma_tt *tt,
+                                struct ttm_operation_ctx *ctx);
 
 /**
  * Unpopulates and DMA unmaps pages as part of a
@@ -89,7 +90,8 @@ void ttm_dma_page_alloc_fini(void);
  */
 int ttm_dma_page_alloc_debugfs(struct seq_file *m, void *data);
 
-int ttm_dma_populate(struct ttm_dma_tt *ttm_dma, struct device *dev);
+int ttm_dma_populate(struct ttm_dma_tt *ttm_dma, struct device *dev,
+                        struct ttm_operation_ctx *ctx);
 void ttm_dma_unpopulate(struct ttm_dma_tt *ttm_dma, struct device *dev);
 
 #else
@@ -106,7 +108,8 @@ static inline int ttm_dma_page_alloc_debugfs(struct seq_file *m, void *data)
         return 0;
 }
 static inline int ttm_dma_populate(struct ttm_dma_tt *ttm_dma,
-                                   struct device *dev)
+                                   struct device *dev,
+                                   struct ttm_operation_ctx *ctx)
 {
         return -ENOMEM;
 }
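
A note for readers following the series: after this change a driver's populate
hook no longer builds a default ttm_operation_ctx internally; it forwards
whatever context the caller passed in, so the interruptible/no_wait_gpu policy
(and, later in the series, per-VM swapout decisions) stays with the caller.
A minimal sketch of a backend using the new signature is below; "foo" is a
hypothetical driver used only for illustration and is not part of this patch,
it simply mirrors the ast/cirrus/virtio pattern above.

/* Illustrative sketch only -- "foo" is a hypothetical driver. */
static int foo_ttm_tt_populate(struct ttm_tt *ttm,
                               struct ttm_operation_ctx *ctx)
{
        if (ttm->state != tt_unpopulated)
                return 0;

        /* Forward the caller's context instead of a locally built default. */
        return ttm_pool_populate(ttm, ctx);
}

static struct ttm_bo_driver foo_bo_driver = {
        .ttm_tt_populate = &foo_ttm_tt_populate,
        /* ... other callbacks ... */
};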