From patchwork Mon Mar 2 14:24:50 2015
From: Wang Nan <wangnan0@huawei.com>
Subject: [RFC PATCH v4 12/34] early kprobes: allow __alloc_insn_slot() to allocate from early kprobes slots
Date: Mon, 2 Mar 2015 22:24:50 +0800
Message-ID: <1425306312-3437-13-git-send-email-wangnan0@huawei.com>
In-Reply-To: <1425306312-3437-1-git-send-email-wangnan0@huawei.com>
References: <1425306312-3437-1-git-send-email-wangnan0@huawei.com>
Cc: x86@kernel.org, lizefan@huawei.com, linux-kernel@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org

Introduce early_slots_start/end pointers and a slot bitmap in struct
kprobe_insn_cache, then use the previously introduced macro to generate
the allocator. This patch makes get/free_insn_slot() and
get/free_optinsn_slot() transparent to early kprobes.
Signed-off-by: Wang Nan <wangnan0@huawei.com>
---
 include/linux/kprobes.h | 40 ++++++++++++++++++++++++++++++++++++++++
 kernel/kprobes.c        | 14 ++++++++++++++
 2 files changed, 54 insertions(+)

diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
index 3d721eb..bb2b2c6 100644
--- a/include/linux/kprobes.h
+++ b/include/linux/kprobes.h
@@ -317,6 +317,17 @@ static inline int ek_free_##__name(__t *__s)			\
 			(__ek_##__name##_bitmap));			\
 }
 
+/*
+ * Start and end of early kprobes area, including code area and
+ * insn_slot area.
+ */
+extern char __early_kprobes_start[];
+extern char __early_kprobes_end[];
+
+extern kprobe_opcode_t __early_kprobes_code_area_start[];
+extern kprobe_opcode_t __early_kprobes_code_area_end[];
+extern kprobe_opcode_t __early_kprobes_insn_slot_start[];
+extern kprobe_opcode_t __early_kprobes_insn_slot_end[];
 #else
 
 #define __DEFINE_EKPROBE_ALLOC_OPS(__t, __name)			\
@@ -346,6 +357,8 @@ static inline int ek_free_##__name(__t *__s)			\
 
 #endif
 
+__DEFINE_EKPROBE_ALLOC_OPS(kprobe_opcode_t, opcode)
+
 struct kprobe_insn_cache {
 	struct mutex mutex;
 	void *(*alloc)(void);	/* allocate insn page */
@@ -353,8 +366,35 @@ struct kprobe_insn_cache {
 	struct list_head pages; /* list of kprobe_insn_page */
 	size_t insn_size;	/* size of instruction slot */
 	int nr_garbage;
+#ifdef CONFIG_EARLY_KPROBES
+# define slots_start(c)		((c)->early_slots_start)
+# define slots_end(c)		((c)->early_slots_end)
+# define slots_bitmap(c)	((c)->early_slots_bitmap)
+	kprobe_opcode_t *early_slots_start;
+	kprobe_opcode_t *early_slots_end;
+	unsigned long early_slots_bitmap[EARLY_KPROBES_BITMAP_SZ];
+#else
+# define slots_start(c)		NULL
+# define slots_end(c)		NULL
+# define slots_bitmap(c)	NULL
+#endif
 };
 
+static inline kprobe_opcode_t *
+__get_insn_slot_early(struct kprobe_insn_cache *c)
+{
+	return __ek_alloc_opcode(slots_start(c),
+			slots_end(c), slots_bitmap(c));
+}
+
+static inline int
+__free_insn_slot_early(struct kprobe_insn_cache *c,
+		kprobe_opcode_t *slot)
+{
+	return __ek_free_opcode(slot, slots_start(c),
+			slots_end(c), slots_bitmap(c));
+}
+
 extern kprobe_opcode_t *__get_insn_slot(struct kprobe_insn_cache *c);
 extern void __free_insn_slot(struct kprobe_insn_cache *c,
 			     kprobe_opcode_t *slot, int dirty);
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 614138c..1eb3000 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -144,6 +144,10 @@ struct kprobe_insn_cache kprobe_insn_slots = {
 	.pages = LIST_HEAD_INIT(kprobe_insn_slots.pages),
 	.insn_size = MAX_INSN_SIZE,
 	.nr_garbage = 0,
+#ifdef CONFIG_EARLY_KPROBES
+	.early_slots_start = __early_kprobes_insn_slot_start,
+	.early_slots_end = __early_kprobes_insn_slot_end,
+#endif
 };
 static int collect_garbage_slots(struct kprobe_insn_cache *c);
 
@@ -156,6 +160,9 @@ kprobe_opcode_t *__get_insn_slot(struct kprobe_insn_cache *c)
 	struct kprobe_insn_page *kip;
 	kprobe_opcode_t *slot = NULL;
 
+	if (kprobes_is_early())
+		return __get_insn_slot_early(c);
+
 	mutex_lock(&c->mutex);
  retry:
 	list_for_each_entry(kip, &c->pages, list) {
@@ -256,6 +263,9 @@ void __free_insn_slot(struct kprobe_insn_cache *c,
 {
 	struct kprobe_insn_page *kip;
 
+	if (unlikely(__free_insn_slot_early(c, slot)))
+		return;
+
 	mutex_lock(&c->mutex);
 	list_for_each_entry(kip, &c->pages, list) {
 		long idx = ((long)slot - (long)kip->insns) /
@@ -287,6 +297,10 @@ struct kprobe_insn_cache kprobe_optinsn_slots = {
 	.pages = LIST_HEAD_INIT(kprobe_optinsn_slots.pages),
 	/* .insn_size is initialized later */
 	.nr_garbage = 0,
+#ifdef CONFIG_EARLY_KPROBES
+	.early_slots_start = __early_kprobes_code_area_start,
+	.early_slots_end = __early_kprobes_code_area_end,
+#endif
 };
 #endif
 #endif
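
For readers without the earlier patch of the series at hand: the early path above leans on the bitmap allocator that __DEFINE_EKPROBE_ALLOC_OPS() generates over the linker-provided __early_kprobes_* areas. What follows is only a minimal, self-contained sketch of that idea, not the kernel code; the names early_slot_alloc()/early_slot_free(), early_area, early_bitmap, EARLY_SLOTS and SLOT_WORDS are illustrative assumptions standing in for the macro-generated __ek_alloc_opcode()/__ek_free_opcode() and the real EARLY_KPROBES_* sizing.

/*
 * Illustrative sketch only -- not the kernel implementation.
 * A statically reserved slot area plus a bitmap: allocation hands out
 * the first free slot; free only succeeds (returns 1) when the pointer
 * lies inside the early area, so the caller knows whether to fall back
 * to the normal slot cache.
 */
#include <stddef.h>

#define EARLY_SLOTS	16	/* assumed number of reserved slots */
#define SLOT_WORDS	8	/* assumed words per slot */
#define BITS_PER_LONG	(8 * sizeof(unsigned long))

typedef unsigned long opcode_t;

static opcode_t early_area[EARLY_SLOTS][SLOT_WORDS];	/* stand-in for the linker area */
static unsigned long early_bitmap[(EARLY_SLOTS + BITS_PER_LONG - 1) / BITS_PER_LONG];

static opcode_t *early_slot_alloc(void)
{
	size_t i;

	for (i = 0; i < EARLY_SLOTS; i++) {
		unsigned long *word = &early_bitmap[i / BITS_PER_LONG];
		unsigned long mask = 1UL << (i % BITS_PER_LONG);

		if (!(*word & mask)) {
			*word |= mask;		/* mark slot busy */
			return early_area[i];
		}
	}
	return NULL;				/* early area exhausted */
}

static int early_slot_free(opcode_t *slot)
{
	opcode_t *start = &early_area[0][0];
	opcode_t *end = start + (size_t)EARLY_SLOTS * SLOT_WORDS;
	size_t i;

	if (slot < start || slot >= end)
		return 0;			/* not an early slot: use the normal path */

	i = (size_t)(slot - start) / SLOT_WORDS;
	early_bitmap[i / BITS_PER_LONG] &= ~(1UL << (i % BITS_PER_LONG));
	return 1;
}

With that shape, __get_insn_slot() simply diverts to the early allocator while kprobes_is_early() is true, and __free_insn_slot() returns as soon as the early free path reports that the slot was its own, which is exactly what the two kernel/kprobes.c hunks above do.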