From patchwork Wed Jul 12 23:01:37 2023
X-Patchwork-Submitter: Haitao Huang
X-Patchwork-Id: 13310962
From: Haitao Huang
To: jarkko@kernel.org, dave.hansen@linux.intel.com, tj@kernel.org,
    linux-kernel@vger.kernel.org, linux-sgx@vger.kernel.org,
    cgroups@vger.kernel.org, Thomas Gleixner, Ingo Molnar,
    Borislav Petkov, x86@kernel.org, "H. Peter Anvin"
Cc: kai.huang@intel.com, reinette.chatre@intel.com,
    Kristen Carlson Accardi, zhiquan1.li@intel.com, seanjc@google.com
Subject: [PATCH v3 03/28] x86/sgx: Add 'struct sgx_epc_lru_lists' to
 encapsulate lru list(s)
Date: Wed, 12 Jul 2023 16:01:37 -0700
Message-Id: <20230712230202.47929-4-haitao.huang@linux.intel.com>
In-Reply-To: <20230712230202.47929-1-haitao.huang@linux.intel.com>
References: <20230712230202.47929-1-haitao.huang@linux.intel.com>

From: Kristen Carlson Accardi

Wrap the existing reclaimable list and its spinlock in a new struct to
minimize the code changes needed to handle multiple LRUs as well as
reclaimable and non-reclaimable lists. The new structure will be used in
a following set of patches to implement SGX EPC cgroups. The changes to
the structure needed for unreclaimable lists will be added in later
patches.
Signed-off-by: Sean Christopherson
Signed-off-by: Kristen Carlson Accardi
Signed-off-by: Haitao Huang
Cc: Sean Christopherson

V3: Removed the helper functions and revised commit messages.
---
 arch/x86/kernel/cpu/sgx/sgx.h | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/arch/x86/kernel/cpu/sgx/sgx.h b/arch/x86/kernel/cpu/sgx/sgx.h
index f6e3c5810eef..77fceba73a25 100644
--- a/arch/x86/kernel/cpu/sgx/sgx.h
+++ b/arch/x86/kernel/cpu/sgx/sgx.h
@@ -92,6 +92,23 @@ static inline void *sgx_get_epc_virt_addr(struct sgx_epc_page *page)
 	return section->virt_addr + index * PAGE_SIZE;
 }
 
+/*
+ * This data structure wraps a list of reclaimable EPC pages, and a list of
+ * non-reclaimable EPC pages and is used to implement a LRU policy during
+ * reclamation.
+ */
+struct sgx_epc_lru_lists {
+	/* Must acquire this lock to access */
+	spinlock_t lock;
+	struct list_head reclaimable;
+};
+
+static inline void sgx_lru_init(struct sgx_epc_lru_lists *lrus)
+{
+	spin_lock_init(&lrus->lock);
+	INIT_LIST_HEAD(&lrus->reclaimable);
+}
+
 struct sgx_epc_page *__sgx_alloc_epc_page(void);
 void sgx_free_epc_page(struct sgx_epc_page *page);
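
For illustration only (not part of this patch): a minimal sketch of how a
caller might use the new structure once the global reclaimable list is
converted over. The sgx_global_lru instance, the example_* helper names,
and the use of the sgx_epc_page.list member are assumptions made for this
sketch, not code from this series.

#include <linux/list.h>
#include <linux/spinlock.h>

/* Hypothetical global instance; set up once via sgx_lru_init(). */
static struct sgx_epc_lru_lists sgx_global_lru;

/* Mark a page reclaimable: append it at the LRU tail under the lock. */
static void example_mark_reclaimable(struct sgx_epc_page *page)
{
	spin_lock(&sgx_global_lru.lock);
	list_add_tail(&page->list, &sgx_global_lru.reclaimable);
	spin_unlock(&sgx_global_lru.lock);
}

/* Reclaim path: pop the least-recently-added page from the LRU head. */
static struct sgx_epc_page *example_pop_reclaimable(void)
{
	struct sgx_epc_page *page;

	spin_lock(&sgx_global_lru.lock);
	page = list_first_entry_or_null(&sgx_global_lru.reclaimable,
					struct sgx_epc_page, list);
	if (page)
		list_del_init(&page->list);
	spin_unlock(&sgx_global_lru.lock);

	return page;
}

Because every access goes through the per-instance lrus->lock, multiple
such LRU instances (e.g., one per cgroup, as later patches in this series
add) can be manipulated independently, without a single global lock.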