From patchwork Tue Jan 13 11:19:15 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Oded Gabbay <oded.gabbay@amd.com>
X-Patchwork-Id: 5619381
From: Oded Gabbay <oded.gabbay@amd.com>
To: dri-devel@lists.freedesktop.org
Subject: [PATCH 1/4] drm/amdkfd: Encapsulate DQM functions in ops structure
Date: Tue, 13 Jan 2015 13:19:15 +0200
Message-ID: <1421147958-15283-2-git-send-email-oded.gabbay@amd.com>
In-Reply-To: <1421147958-15283-1-git-send-email-oded.gabbay@amd.com>
References: <1421147958-15283-1-git-send-email-oded.gabbay@amd.com>
X-Mailer: git-send-email 1.9.1

This patch does some re-org on the device_queue_manager structure. It
takes all the function pointers out of that structure and puts them in
a new structure, called device_queue_manager_ops, and then embeds an
instance of that structure inside device_queue_manager.

This re-org is done to prepare the DQM module to support more than one
AMD APU (currently only Kaveri is supported).

Signed-off-by: Oded Gabbay <oded.gabbay@amd.com>
---
 drivers/gpu/drm/amd/amdkfd/kfd_chardev.c            |  2 +-
 drivers/gpu/drm/amd/amdkfd/kfd_device.c             |  6 +-
 .../gpu/drm/amd/amdkfd/kfd_device_queue_manager.c   | 68 +++++++++++-----------
 .../gpu/drm/amd/amdkfd/kfd_device_queue_manager.h   | 25 +++++---
 drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.c       |  2 +-
 .../gpu/drm/amd/amdkfd/kfd_process_queue_manager.c  | 16 ++---
 6 files changed, 65 insertions(+), 54 deletions(-)
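Note: the change itself is mechanical. For readers who want the shape
of it in isolation, below is a minimal, compilable sketch of the same
ops-table pattern; the names (my_dqm, my_dqm_ops, start_hws, ...) are
illustrative stand-ins, not the real kfd types.

	#include <stdio.h>

	struct my_dqm;	/* forward declaration so the ops can take a dqm pointer */

	/* All function pointers are grouped into one ops table... */
	struct my_dqm_ops {
		int (*start)(struct my_dqm *dqm);
		int (*stop)(struct my_dqm *dqm);
	};

	/* ...and the manager embeds the table instead of loose pointers. */
	struct my_dqm {
		struct my_dqm_ops ops;
		int started;
	};

	static int start_hws(struct my_dqm *dqm)
	{
		dqm->started = 1;
		return 0;
	}

	static int stop_hws(struct my_dqm *dqm)
	{
		dqm->started = 0;
		return 0;
	}

	int main(void)
	{
		struct my_dqm dqm = {
			.ops = { .start = start_hws, .stop = stop_hws },
		};

		/* Callers change from dqm.start(&dqm) to dqm.ops.start(&dqm);
		 * that rename is the bulk of this diff. */
		dqm.ops.start(&dqm);
		printf("started = %d\n", dqm.started);
		dqm.ops.stop(&dqm);
		return 0;
	}

Grouping the pointers this way lets a future ASIC-specific init routine
fill or swap the whole table in one place, which is the groundwork for
supporting more than one APU.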
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
index b008fd6..38b6150 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
@@ -439,7 +439,7 @@ static long kfd_ioctl_set_memory_policy(struct file *filep,
 		(args.alternate_policy == KFD_IOC_CACHE_POLICY_COHERENT) ?
 		   cache_policy_coherent : cache_policy_noncoherent;
 
-	if (!dev->dqm->set_cache_memory_policy(dev->dqm,
+	if (!dev->dqm->ops.set_cache_memory_policy(dev->dqm,
 				&pdd->qpd,
 				default_policy,
 				alternate_policy,
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
index a23ed24..a770ec6 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
@@ -253,7 +253,7 @@ bool kgd2kfd_device_init(struct kfd_dev *kfd,
 		goto device_queue_manager_error;
 	}
 
-	if (kfd->dqm->start(kfd->dqm) != 0) {
+	if (kfd->dqm->ops.start(kfd->dqm) != 0) {
 		dev_err(kfd_device,
 			"Error starting queuen manager for device (%x:%x)\n",
 			kfd->pdev->vendor, kfd->pdev->device);
@@ -307,7 +307,7 @@ void kgd2kfd_suspend(struct kfd_dev *kfd)
 	BUG_ON(kfd == NULL);
 
 	if (kfd->init_complete) {
-		kfd->dqm->stop(kfd->dqm);
+		kfd->dqm->ops.stop(kfd->dqm);
 		amd_iommu_set_invalidate_ctx_cb(kfd->pdev, NULL);
 		amd_iommu_free_device(kfd->pdev);
 	}
@@ -328,7 +328,7 @@ int kgd2kfd_resume(struct kfd_dev *kfd)
 			return -ENXIO;
 		amd_iommu_set_invalidate_ctx_cb(kfd->pdev,
 						iommu_pasid_shutdown_callback);
-		kfd->dqm->start(kfd->dqm);
+		kfd->dqm->ops.start(kfd->dqm);
 	}
 
 	return 0;
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
index c83f011..12c8448 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
@@ -271,7 +271,7 @@ static int create_compute_queue_nocpsch(struct device_queue_manager *dqm,
 
 	BUG_ON(!dqm || !q || !qpd);
 
-	mqd = dqm->get_mqd_manager(dqm, KFD_MQD_TYPE_COMPUTE);
+	mqd = dqm->ops.get_mqd_manager(dqm, KFD_MQD_TYPE_COMPUTE);
 	if (mqd == NULL)
 		return -ENOMEM;
 
@@ -305,14 +305,14 @@ static int destroy_queue_nocpsch(struct device_queue_manager *dqm,
 	mutex_lock(&dqm->lock);
 
 	if (q->properties.type == KFD_QUEUE_TYPE_COMPUTE) {
-		mqd = dqm->get_mqd_manager(dqm, KFD_MQD_TYPE_COMPUTE);
+		mqd = dqm->ops.get_mqd_manager(dqm, KFD_MQD_TYPE_COMPUTE);
 		if (mqd == NULL) {
 			retval = -ENOMEM;
 			goto out;
 		}
 		deallocate_hqd(dqm, q);
 	} else if (q->properties.type == KFD_QUEUE_TYPE_SDMA) {
-		mqd = dqm->get_mqd_manager(dqm, KFD_MQD_TYPE_SDMA);
+		mqd = dqm->ops.get_mqd_manager(dqm, KFD_MQD_TYPE_SDMA);
 		if (mqd == NULL) {
 			retval = -ENOMEM;
 			goto out;
@@ -348,7 +348,7 @@ static int update_queue(struct device_queue_manager *dqm, struct queue *q)
 	BUG_ON(!dqm || !q || !q->mqd);
 
 	mutex_lock(&dqm->lock);
-	mqd = dqm->get_mqd_manager(dqm, q->properties.type);
+	mqd = dqm->ops.get_mqd_manager(dqm, q->properties.type);
 	if (mqd == NULL) {
 		mutex_unlock(&dqm->lock);
 		return -ENOMEM;
@@ -515,7 +515,7 @@ static int init_pipelines(struct device_queue_manager *dqm,
 
 	memset(hpdptr, 0, CIK_HPD_EOP_BYTES * pipes_num);
 
-	mqd = dqm->get_mqd_manager(dqm, KFD_MQD_TYPE_COMPUTE);
+	mqd = dqm->ops.get_mqd_manager(dqm, KFD_MQD_TYPE_COMPUTE);
 	if (mqd == NULL) {
 		kfd_gtt_sa_free(dqm->dev, dqm->pipeline_mem);
 		return -ENOMEM;
@@ -646,7 +646,7 @@ static int create_sdma_queue_nocpsch(struct device_queue_manager *dqm,
 	struct mqd_manager *mqd;
 	int retval;
 
-	mqd = dqm->get_mqd_manager(dqm, KFD_MQD_TYPE_SDMA);
+	mqd = dqm->ops.get_mqd_manager(dqm, KFD_MQD_TYPE_SDMA);
 	if (!mqd)
 		return -ENOMEM;
 
@@ -849,7 +849,7 @@ static int create_queue_cpsch(struct device_queue_manager *dqm, struct queue *q,
 	if (q->properties.type == KFD_QUEUE_TYPE_SDMA)
 		select_sdma_engine_id(q);
 
-	mqd = dqm->get_mqd_manager(dqm,
+	mqd = dqm->ops.get_mqd_manager(dqm,
 			get_mqd_type_from_queue_type(q->properties.type));
 
 	if (mqd == NULL) {
@@ -994,7 +994,7 @@ static int destroy_queue_cpsch(struct device_queue_manager *dqm,
 
 	/* remove queue from list to prevent rescheduling after preemption */
 	mutex_lock(&dqm->lock);
-	mqd = dqm->get_mqd_manager(dqm,
+	mqd = dqm->ops.get_mqd_manager(dqm,
 			get_mqd_type_from_queue_type(q->properties.type));
 	if (!mqd) {
 		retval = -ENOMEM;
@@ -1116,40 +1116,40 @@ struct device_queue_manager *device_queue_manager_init(struct kfd_dev *dev)
 	case KFD_SCHED_POLICY_HWS:
 	case KFD_SCHED_POLICY_HWS_NO_OVERSUBSCRIPTION:
 		/* initialize dqm for cp scheduling */
-		dqm->create_queue = create_queue_cpsch;
-		dqm->initialize = initialize_cpsch;
-		dqm->start = start_cpsch;
-		dqm->stop = stop_cpsch;
-		dqm->destroy_queue = destroy_queue_cpsch;
-		dqm->update_queue = update_queue;
-		dqm->get_mqd_manager = get_mqd_manager_nocpsch;
-		dqm->register_process = register_process_nocpsch;
-		dqm->unregister_process = unregister_process_nocpsch;
-		dqm->uninitialize = uninitialize_nocpsch;
-		dqm->create_kernel_queue = create_kernel_queue_cpsch;
-		dqm->destroy_kernel_queue = destroy_kernel_queue_cpsch;
-		dqm->set_cache_memory_policy = set_cache_memory_policy;
+		dqm->ops.create_queue = create_queue_cpsch;
+		dqm->ops.initialize = initialize_cpsch;
+		dqm->ops.start = start_cpsch;
+		dqm->ops.stop = stop_cpsch;
+		dqm->ops.destroy_queue = destroy_queue_cpsch;
+		dqm->ops.update_queue = update_queue;
+		dqm->ops.get_mqd_manager = get_mqd_manager_nocpsch;
+		dqm->ops.register_process = register_process_nocpsch;
+		dqm->ops.unregister_process = unregister_process_nocpsch;
+		dqm->ops.uninitialize = uninitialize_nocpsch;
+		dqm->ops.create_kernel_queue = create_kernel_queue_cpsch;
+		dqm->ops.destroy_kernel_queue = destroy_kernel_queue_cpsch;
+		dqm->ops.set_cache_memory_policy = set_cache_memory_policy;
 		break;
 	case KFD_SCHED_POLICY_NO_HWS:
 		/* initialize dqm for no cp scheduling */
-		dqm->start = start_nocpsch;
-		dqm->stop = stop_nocpsch;
-		dqm->create_queue = create_queue_nocpsch;
-		dqm->destroy_queue = destroy_queue_nocpsch;
-		dqm->update_queue = update_queue;
-		dqm->get_mqd_manager = get_mqd_manager_nocpsch;
-		dqm->register_process = register_process_nocpsch;
-		dqm->unregister_process = unregister_process_nocpsch;
-		dqm->initialize = initialize_nocpsch;
-		dqm->uninitialize = uninitialize_nocpsch;
-		dqm->set_cache_memory_policy = set_cache_memory_policy;
+		dqm->ops.start = start_nocpsch;
+		dqm->ops.stop = stop_nocpsch;
+		dqm->ops.create_queue = create_queue_nocpsch;
+		dqm->ops.destroy_queue = destroy_queue_nocpsch;
+		dqm->ops.update_queue = update_queue;
+		dqm->ops.get_mqd_manager = get_mqd_manager_nocpsch;
+		dqm->ops.register_process = register_process_nocpsch;
+		dqm->ops.unregister_process = unregister_process_nocpsch;
+		dqm->ops.initialize = initialize_nocpsch;
+		dqm->ops.uninitialize = uninitialize_nocpsch;
+		dqm->ops.set_cache_memory_policy = set_cache_memory_policy;
 		break;
 	default:
 		BUG();
 		break;
 	}
 
-	if (dqm->initialize(dqm) != 0) {
+	if (dqm->ops.initialize(dqm) != 0) {
 		kfree(dqm);
 		return NULL;
 	}
@@ -1161,7 +1161,7 @@ void device_queue_manager_uninit(struct device_queue_manager *dqm)
 {
 	BUG_ON(!dqm);
 
-	dqm->uninitialize(dqm);
+	dqm->ops.uninitialize(dqm);
 	kfree(dqm);
 }
 
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h
index 554c06e..72d2ca0 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h
@@ -46,7 +46,7 @@ struct device_process_node {
 };
 
 /**
- * struct device_queue_manager
+ * struct device_queue_manager_ops
  *
  * @create_queue: Queue creation routine.
  *
@@ -81,15 +81,9 @@ struct device_process_node {
  * @set_cache_memory_policy: Sets memory policy (cached/ non cached) for the
  * memory apertures.
  *
- * This struct is a base class for the kfd queues scheduler in the
- * device level. The device base class should expose the basic operations
- * for queue creation and queue destruction. This base class hides the
- * scheduling mode of the driver and the specific implementation of the
- * concrete device. This class is the only class in the queues scheduler
- * that configures the H/W.
- *
  */
-struct device_queue_manager {
+struct device_queue_manager_ops {
 	int	(*create_queue)(struct device_queue_manager *dqm,
 				struct queue *q,
 				struct qcm_process_device *qpd,
@@ -124,7 +118,22 @@ struct device_queue_manager {
 				enum cache_policy alternate_policy,
 				void __user *alternate_aperture_base,
 				uint64_t alternate_aperture_size);
+};
+
+/**
+ * struct device_queue_manager
+ *
+ * This struct is a base class for the kfd queues scheduler in the
+ * device level. The device base class should expose the basic operations
+ * for queue creation and queue destruction. This base class hides the
+ * scheduling mode of the driver and the specific implementation of the
+ * concrete device. This class is the only class in the queues scheduler
+ * that configures the H/W.
+ *
+ */
+struct device_queue_manager {
+	struct device_queue_manager_ops ops;
 
 	struct mqd_manager	*mqds[KFD_MQD_TYPE_MAX];
 	struct packet_manager	packets;
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.c b/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.c
index 773c213..add0fb4 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.c
@@ -56,7 +56,7 @@ static bool initialize(struct kernel_queue *kq, struct kfd_dev *dev,
 	switch (type) {
 	case KFD_QUEUE_TYPE_DIQ:
 	case KFD_QUEUE_TYPE_HIQ:
-		kq->mqd = dev->dqm->get_mqd_manager(dev->dqm,
+		kq->mqd = dev->dqm->ops.get_mqd_manager(dev->dqm,
 						KFD_MQD_TYPE_HIQ);
 		break;
 	default:
diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
index 948b1ca..513eeb6 100644
--- a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+++ b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
@@ -178,7 +178,7 @@ int pqm_create_queue(struct process_queue_manager *pqm,
 
 	if (list_empty(&pqm->queues)) {
 		pdd->qpd.pqm = pqm;
-		dev->dqm->register_process(dev->dqm, &pdd->qpd);
+		dev->dqm->ops.register_process(dev->dqm, &pdd->qpd);
 	}
 
 	pqn = kzalloc(sizeof(struct process_queue_node), GFP_KERNEL);
@@ -204,7 +204,7 @@ int pqm_create_queue(struct process_queue_manager *pqm,
 			goto err_create_queue;
 		pqn->q = q;
 		pqn->kq = NULL;
-		retval = dev->dqm->create_queue(dev->dqm, q, &pdd->qpd,
+		retval = dev->dqm->ops.create_queue(dev->dqm, q, &pdd->qpd,
 						&q->properties.vmid);
 		print_queue(q);
 		break;
@@ -217,7 +217,8 @@ int pqm_create_queue(struct process_queue_manager *pqm,
 		kq->queue->properties.queue_id = *qid;
 		pqn->kq = kq;
 		pqn->q = NULL;
-		retval = dev->dqm->create_kernel_queue(dev->dqm, kq, &pdd->qpd);
+		retval = dev->dqm->ops.create_kernel_queue(dev->dqm,
+							kq, &pdd->qpd);
 		break;
 	default:
 		BUG();
@@ -285,13 +286,13 @@ int pqm_destroy_queue(struct process_queue_manager *pqm, unsigned int qid)
 	if (pqn->kq) {
 		/* destroy kernel queue (DIQ) */
 		dqm = pqn->kq->dev->dqm;
-		dqm->destroy_kernel_queue(dqm, pqn->kq, &pdd->qpd);
+		dqm->ops.destroy_kernel_queue(dqm, pqn->kq, &pdd->qpd);
 		kernel_queue_uninit(pqn->kq);
 	}
 
 	if (pqn->q) {
 		dqm = pqn->q->device->dqm;
-		retval = dqm->destroy_queue(dqm, &pdd->qpd, pqn->q);
+		retval = dqm->ops.destroy_queue(dqm, &pdd->qpd, pqn->q);
 		if (retval != 0)
 			return retval;
 
@@ -303,7 +304,7 @@ int pqm_destroy_queue(struct process_queue_manager *pqm, unsigned int qid)
 	clear_bit(qid, pqm->queue_slot_bitmap);
 
 	if (list_empty(&pqm->queues))
-		dqm->unregister_process(dqm, &pdd->qpd);
+		dqm->ops.unregister_process(dqm, &pdd->qpd);
 
 	return retval;
 }
@@ -324,7 +325,8 @@ int pqm_update_queue(struct process_queue_manager *pqm, unsigned int qid,
 	pqn->q->properties.queue_percent = p->queue_percent;
 	pqn->q->properties.priority = p->priority;
 
-	retval = pqn->q->device->dqm->update_queue(pqn->q->device->dqm, pqn->q);
+	retval = pqn->q->device->dqm->ops.update_queue(pqn->q->device->dqm,
+							pqn->q);
 	if (retval != 0)
 		return retval;
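
A closing note on device_queue_manager_init(): the switch on the
scheduling policy now fills dqm->ops rather than individual members.
Once the callbacks live in one table, a policy's whole callback set can
also be assigned as a unit. A toy sketch of that direction (hws_ops and
dqm_init are hypothetical, not part of this patch, which still assigns
the members one by one; it builds on the my_dqm/my_dqm_ops sketch
above):

	/* One pre-built table per scheduling mode. */
	static const struct my_dqm_ops hws_ops = {
		.start = start_hws,
		.stop  = stop_hws,
	};

	static int dqm_init(struct my_dqm *dqm, int policy)
	{
		switch (policy) {
		case 0:				/* HW (CP) scheduling */
			dqm->ops = hws_ops;	/* one assignment, whole table */
			break;
		default:
			return -1;		/* unknown policy */
		}
		return dqm->ops.start(dqm);
	}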