From patchwork Thu Jun 16 21:22:29 2016
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 9181551
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: Archit Taneja, freedreno@lists.freedesktop.org,
 linux-arm-msm@vger.kernel.org, Rob Clark
Subject: [PATCH 04/10] drm/msm: shrinker support
Date: Thu, 16 Jun 2016 17:22:29 -0400
Message-Id: <1466112155-25061-5-git-send-email-robdclark@gmail.com>
X-Mailer: git-send-email 2.5.5
In-Reply-To: <1466112155-25061-1-git-send-email-robdclark@gmail.com>
References: <1466112155-25061-1-git-send-email-robdclark@gmail.com>
X-Mailing-List: linux-arm-msm@vger.kernel.org

As a first step, only purge obj->madv==DONTNEED objects. We could be
more aggressive and next try unpinning inactive objects, but that is
only useful if you have swap.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/Makefile           |   1 +
 drivers/gpu/drm/msm/msm_drv.c          |   5 ++
 drivers/gpu/drm/msm/msm_drv.h          |   8 +++
 drivers/gpu/drm/msm/msm_gem.c          |  32 +++++++++
 drivers/gpu/drm/msm/msm_gem.h          |   6 ++
 drivers/gpu/drm/msm/msm_gem_shrinker.c | 128 +++++++++++++++++++++++++++++++++
 6 files changed, 180 insertions(+)
 create mode 100644 drivers/gpu/drm/msm/msm_gem_shrinker.c

diff --git a/drivers/gpu/drm/msm/Makefile b/drivers/gpu/drm/msm/Makefile
index 60cb026..2bb2c4b 100644
--- a/drivers/gpu/drm/msm/Makefile
+++ b/drivers/gpu/drm/msm/Makefile
@@ -45,6 +45,7 @@ msm-y := \
 	msm_fence.o \
 	msm_gem.o \
 	msm_gem_prime.o \
+	msm_gem_shrinker.o \
 	msm_gem_submit.o \
 	msm_gpu.o \
 	msm_iommu.o \
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index ddf6878..3e15a50 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -195,6 +195,8 @@ static int msm_drm_uninit(struct device *dev)
 		kfree(vbl_ev);
 	}
 
+	msm_gem_shrinker_cleanup(ddev);
+
 	drm_kms_helper_poll_fini(ddev);
 
 	drm_connector_unregister_all(ddev);
@@ -352,6 +354,7 @@ static int msm_drm_init(struct device *dev, struct drm_driver *drv)
 	}
 
 	ddev->dev_private = priv;
+	priv->dev = ddev;
 
 	priv->wq = alloc_ordered_workqueue("msm", 0);
 	priv->atomic_wq = alloc_ordered_workqueue("msm:atomic", 0);
@@ -376,6 +379,8 @@ static int msm_drm_init(struct device *dev, struct drm_driver *drv)
 	if (ret)
 		goto fail;
 
+	msm_gem_shrinker_init(ddev);
+
 	switch (get_mdp_ver(pdev)) {
 	case 4:
 		kms = mdp4_kms_init(ddev);
diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index f757ade..bcf33cf 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -77,6 +77,8 @@ struct msm_vblank_ctrl {
 
 struct msm_drm_private {
 
+	struct drm_device *dev;
+
 	struct msm_kms *kms;
 
 	/* subordinate devices, if present: */
@@ -147,6 +149,8 @@ struct msm_drm_private {
 		struct drm_mm mm;
 	} vram;
 
+	struct shrinker shrinker;
+
 	struct msm_vblank_ctrl vblank_ctrl;
 };
 
@@ -165,6 +169,9 @@ void msm_gem_submit_free(struct msm_gem_submit *submit);
 int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 		struct drm_file *file);
 
+void msm_gem_shrinker_init(struct drm_device *dev);
+void msm_gem_shrinker_cleanup(struct drm_device *dev);
+
 int msm_gem_mmap_obj(struct drm_gem_object *obj,
 		struct vm_area_struct *vma);
 int msm_gem_mmap(struct file *filp, struct vm_area_struct *vma);
@@ -192,6 +199,7 @@ void msm_gem_prime_unpin(struct drm_gem_object *obj);
 void *msm_gem_vaddr_locked(struct drm_gem_object *obj);
 void *msm_gem_vaddr(struct drm_gem_object *obj);
 int msm_gem_madvise(struct drm_gem_object *obj, unsigned madv);
+void msm_gem_purge(struct drm_gem_object *obj);
 int msm_gem_sync_object(struct drm_gem_object *obj,
 		struct msm_fence_context *fctx, bool exclusive);
 void msm_gem_move_to_active(struct drm_gem_object *obj,
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 2636c27..444d0b5 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -448,6 +448,38 @@ int msm_gem_madvise(struct drm_gem_object *obj, unsigned madv)
 	return (msm_obj->madv != __MSM_MADV_PURGED);
 }
 
+void msm_gem_purge(struct drm_gem_object *obj)
+{
+	struct drm_device *dev = obj->dev;
+	struct msm_gem_object *msm_obj = to_msm_bo(obj);
+
+	WARN_ON(!mutex_is_locked(&dev->struct_mutex));
+	WARN_ON(!is_purgeable(msm_obj));
+	WARN_ON(obj->import_attach);
+
+	put_iova(obj);
+
+	vunmap(msm_obj->vaddr);
+	msm_obj->vaddr = NULL;
+
+	put_pages(obj);
+
+	msm_obj->madv = __MSM_MADV_PURGED;
+
+	drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping);
+	drm_gem_free_mmap_offset(obj);
+
+	/* Our goal here is to return as much of the memory as
+	 * is possible back to the system as we are called from OOM.
+	 * To do this we must instruct the shmfs to drop all of its
+	 * backing pages, *now*.
+	 */
+	shmem_truncate_range(file_inode(obj->filp), 0, (loff_t)-1);
+
+	invalidate_mapping_pages(file_inode(obj->filp)->i_mapping,
+			0, (loff_t)-1);
+}
+
 /* must be called before _move_to_active().. */
 int msm_gem_sync_object(struct drm_gem_object *obj,
 		struct msm_fence_context *fctx, bool exclusive)
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index fa8e1f1..631dab5 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -77,6 +77,12 @@ static inline bool is_active(struct msm_gem_object *msm_obj)
 	return msm_obj->gpu != NULL;
 }
 
+static inline bool is_purgeable(struct msm_gem_object *msm_obj)
+{
+	return (msm_obj->madv == MSM_MADV_DONTNEED) && msm_obj->sgt &&
+			!msm_obj->base.dma_buf && !msm_obj->base.import_attach;
+}
+
 #define MAX_CMDS 4
 
 /* Created per submit-ioctl, to track bo's and cmdstream bufs, etc,
diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c
new file mode 100644
index 0000000..2bbd6d0
--- /dev/null
+++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c
@@ -0,0 +1,128 @@
+/*
+ * Copyright (C) 2016 Red Hat
+ * Author: Rob Clark <robdclark@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "msm_drv.h"
+#include "msm_gem.h"
+
+static bool mutex_is_locked_by(struct mutex *mutex, struct task_struct *task)
+{
+	if (!mutex_is_locked(mutex))
+		return false;
+
+#if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_MUTEXES)
+	return mutex->owner == task;
+#else
+	/* Since UP may be pre-empted, we cannot assume that we own the lock */
+	return false;
+#endif
+}
+
+static bool msm_gem_shrinker_lock(struct drm_device *dev, bool *unlock)
+{
+	if (!mutex_trylock(&dev->struct_mutex)) {
+		if (!mutex_is_locked_by(&dev->struct_mutex, current))
+			return false;
+		*unlock = false;
+	} else {
+		*unlock = true;
+	}
+
+	return true;
+}
+
+
+static unsigned long
+msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
+{
+	struct msm_drm_private *priv =
+		container_of(shrinker, struct msm_drm_private, shrinker);
+	struct drm_device *dev = priv->dev;
+	struct msm_gem_object *msm_obj;
+	unsigned long count = 0;
+	bool unlock;
+
+	if (!msm_gem_shrinker_lock(dev, &unlock))
+		return 0;
+
+	list_for_each_entry(msm_obj, &priv->inactive_list, mm_list) {
+		if (is_purgeable(msm_obj))
+			count += msm_obj->base.size >> PAGE_SHIFT;
+	}
+
+	if (unlock)
+		mutex_unlock(&dev->struct_mutex);
+
+	return count;
+}
+
+static unsigned long
+msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
+{
+	struct msm_drm_private *priv =
+		container_of(shrinker, struct msm_drm_private, shrinker);
+	struct drm_device *dev = priv->dev;
+	struct msm_gem_object *msm_obj;
+	unsigned long freed = 0;
+	bool unlock;
+
+	if (!msm_gem_shrinker_lock(dev, &unlock))
+		return SHRINK_STOP;
+
+	list_for_each_entry(msm_obj, &priv->inactive_list, mm_list) {
+		if (freed >= sc->nr_to_scan)
+			break;
+		if (is_purgeable(msm_obj)) {
+			msm_gem_purge(&msm_obj->base);
+			freed += msm_obj->base.size >> PAGE_SHIFT;
+		}
+	}
+
+	if (unlock)
+		mutex_unlock(&dev->struct_mutex);
+
+	if (freed > 0)
+		pr_info("Purging %lu bytes\n", freed << PAGE_SHIFT);
+
+	return freed;
+}
+
+/**
+ * msm_gem_shrinker_init - Initialize msm shrinker
+ * @dev_priv: msm device
+ *
+ * This function registers and sets up the msm shrinker.
+ */
+void msm_gem_shrinker_init(struct drm_device *dev)
+{
+	struct msm_drm_private *priv = dev->dev_private;
+	priv->shrinker.count_objects = msm_gem_shrinker_count;
+	priv->shrinker.scan_objects = msm_gem_shrinker_scan;
+	priv->shrinker.seeks = DEFAULT_SEEKS;
+	WARN_ON(register_shrinker(&priv->shrinker));
+}
+
+/**
+ * msm_gem_shrinker_cleanup - Clean up msm shrinker
+ * @dev_priv: msm device
+ *
+ * This function unregisters the msm shrinker.
+ */
+void msm_gem_shrinker_cleanup(struct drm_device *dev)
+{
+	struct msm_drm_private *priv = dev->dev_private;
+	unregister_shrinker(&priv->shrinker);
+}
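
For reference, the shrinker above only reclaims buffers that userspace has
already marked MSM_MADV_DONTNEED. Below is a minimal userspace sketch of that
flow, assuming the madvise ioctl added earlier in this series (struct
drm_msm_gem_madvise / DRM_IOCTL_MSM_GEM_MADVISE from the msm UAPI header); the
fd and handle values are hypothetical placeholders, not part of this patch.

/* Illustrative sketch, not part of the patch: mark an idle GEM buffer
 * purgeable (or needed again) and report whether its backing pages were
 * retained.  Assumes the MSM madvise UAPI from earlier in this series.
 */
#include <stdint.h>
#include <sys/ioctl.h>
#include <drm/msm_drm.h>	/* include path depends on installed headers */

static int msm_bo_madvise(int fd, uint32_t handle, uint32_t madv,
		uint32_t *retained)
{
	struct drm_msm_gem_madvise req = {
		.handle = handle,	/* GEM handle, e.g. from DRM_IOCTL_MSM_GEM_NEW */
		.madv = madv,		/* MSM_MADV_DONTNEED or MSM_MADV_WILLNEED */
	};
	int ret = ioctl(fd, DRM_IOCTL_MSM_GEM_MADVISE, &req);

	if (ret == 0 && retained)
		*retained = req.retained;	/* 0 means the backing pages are gone */
	return ret;
}

A BO cache would call this with MSM_MADV_DONTNEED when a buffer goes idle and
MSM_MADV_WILLNEED before reuse; if retained comes back as 0, msm_gem_purge()
has already dropped the backing pages and the contents must be regenerated.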