From patchwork Mon Aug 6 16:19:22 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yordan Karadzhov X-Patchwork-Id: 10758813 Return-Path: Received: from mail-wm0-f65.google.com ([74.125.82.65]:37438 "EHLO mail-wm0-f65.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730876AbeHFS34 (ORCPT ); Mon, 6 Aug 2018 14:29:56 -0400 Received: by mail-wm0-f65.google.com with SMTP id n11-v6so14906509wmc.2 for ; Mon, 06 Aug 2018 09:20:05 -0700 (PDT) From: "Yordan Karadzhov (VMware)" To: rostedt@goodmis.org Cc: linux-trace-devel@vger.kernel.org, "Yordan Karadzhov (VMware)" Subject: [PATCH v4 1/6] kernel-shark-qt: Add generic instruments for searching inside the trace data Date: Mon, 6 Aug 2018 19:19:22 +0300 Message-Id: <20180806161927.11206-2-y.karadz@gmail.com> In-Reply-To: <20180806161927.11206-1-y.karadz@gmail.com> References: <20180806161927.11206-1-y.karadz@gmail.com> Sender: linux-trace-devel-owner@vger.kernel.org List-ID: Content-Length: 12861 This patch introduces the instrumentation for data extraction used by the visualization model of the Qt-based KernelShark. The efficiency of these search instruments has a dominant effect on the performance of the model, so let's spend some time and explain this in detail. The first type of instrument provides binary search inside time-sorted arrays of kshark_entries or trace_records. The search returns the first element of the array having a timestamp greater than or equal to a reference time value. The time complexity of these searches is log(n). The second type of instrument provides a search for the first (in time) entry satisfying an abstract Matching condition. Since the array is sorted in time but we search for an abstract property, the array is effectively unsorted with respect to this search, so we have to iterate over the elements of the array and check them one by one.
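[Editor's illustration, not part of the patch.] The log(n) search described above can be sketched in isolation. The following is a minimal standalone sketch (hypothetical names; a plain uint64_t array stands in for the kshark_entry pointers) of the invariant the patch's BSEARCH macro maintains: "l" stays on an element with a timestamp smaller than the reference time, "h" on one greater than or equal to it, and each iteration halves the interval.

```c
#include <stdint.h>
#include <stddef.h>
#include <sys/types.h>
#include <assert.h>

#define BSEARCH_ALL_GREATER	-1
#define BSEARCH_ALL_SMALLER	-2

/* Demo data: a time-sorted array standing in for the trace records. */
static const uint64_t demo_ts[] = {5, 10, 15, 40, 41};

/*
 * Return the index of the first element in (l, h] with ts[i] >= time,
 * assuming ts[l] < time <= ts[h]. The two guards handle reference times
 * that fall entirely outside the range, as in the patch.
 */
static ssize_t find_by_time(uint64_t time, const uint64_t *ts,
			    size_t l, size_t h)
{
	size_t mid;

	if (ts[l] > time)
		return BSEARCH_ALL_GREATER;

	if (ts[h] < time)
		return BSEARCH_ALL_SMALLER;

	while (h - l > 1) {
		mid = (l + h) / 2;
		if (ts[mid] < time)
			l = mid;
		else
			h = mid;
	}

	return h;
}
```

For example, find_by_time(11, demo_ts, 0, 4) yields 2 (the first timestamp >= 11), while a reference time outside the range yields one of the negative identifiers.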
If we search for a type of entry that is well represented in the array, the time complexity of the search is constant, because no matter how big the array is, the search only goes through a small number of entries at the beginning of the array (or at the end, if we search backwards) before it finds the first match. However, if we search for sparse or even nonexistent entries, the time complexity becomes linear. These explanations will start making more sense with the following patches. Signed-off-by: Yordan Karadzhov (VMware) --- kernel-shark-qt/src/libkshark.c | 262 ++++++++++++++++++++++++++++++++ kernel-shark-qt/src/libkshark.h | 93 ++++++++++++ 2 files changed, 355 insertions(+) diff --git a/kernel-shark-qt/src/libkshark.c b/kernel-shark-qt/src/libkshark.c index c829aa9..879946b 100644 --- a/kernel-shark-qt/src/libkshark.c +++ b/kernel-shark-qt/src/libkshark.c @@ -925,3 +925,265 @@ char* kshark_dump_entry(const struct kshark_entry *entry) return NULL; } + +/** + * @brief Binary search inside a time-sorted array of kshark_entries. + * + * @param time: The value of time to search for. + * @param data: Input location for the trace data. + * @param l: Array index specifying the lower edge of the range to search in. + * @param h: Array index specifying the upper edge of the range to search in. + * + * @returns On success, the first kshark_entry inside the range, having a + timestamp greater than or equal to "time". + If all entries inside the range have timestamps greater than "time" + the function returns BSEARCH_ALL_GREATER (negative value). + If all entries inside the range have timestamps smaller than "time" + the function returns BSEARCH_ALL_SMALLER (negative value). 
+ */ +ssize_t kshark_find_entry_by_time(uint64_t time, + struct kshark_entry **data, + size_t l, size_t h) +{ + size_t mid; + + if (data[l]->ts > time) + return BSEARCH_ALL_GREATER; + + if (data[h]->ts < time) + return BSEARCH_ALL_SMALLER; + + /* + * After executing the BSEARCH macro, "l" will be the index of the last + * entry having timestamp < time and "h" will be the index of the first + * entry having timestamp >= time. + */ + BSEARCH(h, l, data[mid]->ts < time); + return h; +} + +/** + * @brief Binary search inside a time-sorted array of pevent_records. + * + * @param time: The value of time to search for. + * @param data: Input location for the trace data. + * @param l: Array index specifying the lower edge of the range to search in. + * @param h: Array index specifying the upper edge of the range to search in. + * + * @returns On success, the first pevent_record inside the range, having a + timestamp greater than or equal to "time". + If all entries inside the range have timestamps greater than "time" + the function returns BSEARCH_ALL_GREATER (negative value). + If all entries inside the range have timestamps smaller than "time" + the function returns BSEARCH_ALL_SMALLER (negative value). + */ +ssize_t kshark_find_record_by_time(uint64_t time, + struct pevent_record **data, + size_t l, size_t h) +{ + size_t mid; + + if (data[l]->ts > time) + return BSEARCH_ALL_GREATER; + + if (data[h]->ts < time) + return BSEARCH_ALL_SMALLER; + + /* + * After executing the BSEARCH macro, "l" will be the index of the last + * record having timestamp < time and "h" will be the index of the + * first record having timestamp >= time. + */ + BSEARCH(h, l, data[mid]->ts < time); + return h; +} + +/** + * @brief Simple Pid matching function to be used for data requests. + * + * @param kshark_ctx: Input location for the session context pointer. + * @param e: kshark_entry to be checked. + * @param pid: Matching condition value. 
+ * + * @returns True if the Pid of the entry matches the value of "pid". + * Else false. + */ +bool kshark_match_pid(struct kshark_context *kshark_ctx, + struct kshark_entry *e, int pid) +{ + if (e->pid == pid) + return true; + + return false; +} + +/** + * @brief Simple Cpu matching function to be used for data requests. + * + * @param kshark_ctx: Input location for the session context pointer. + * @param e: kshark_entry to be checked. + * @param cpu: Matching condition value. + * + * @returns True if the Cpu of the entry matches the value of "cpu". + * Else false. + */ +bool kshark_match_cpu(struct kshark_context *kshark_ctx, + struct kshark_entry *e, int cpu) +{ + if (e->cpu == cpu) + return true; + + return false; +} + +/** + * @brief Create Data request. The request defines the properties of the + * requested kshark_entry. + * + * @param first: Array index specifying the position inside the array from + * where the search starts. + * @param n: Number of array elements to search in. + * @param cond: Matching condition function. + * @param val: Matching condition value, used by the Matching condition + * function. + * @param vis_only: If true, a visible entry is requested. + * @param vis_mask: If "vis_only" is true, use this mask to specify the level + * of visibility of the requested entry. + * + * @returns Pointer to kshark_entry_request on success, or NULL on failure. + * The user is responsible for freeing the returned + * kshark_entry_request. 
+ */ +struct kshark_entry_request * +kshark_entry_request_alloc(size_t first, size_t n, + matching_condition_func cond, int val, + bool vis_only, int vis_mask) +{ + struct kshark_entry_request *req = malloc(sizeof(*req)); + + if (!req) { + fprintf(stderr, + "Failed to allocate memory for entry request.\n"); + return NULL; + } + + req->first = first; + req->n = n; + req->cond = cond; + req->val = val; + req->vis_only = vis_only; + req->vis_mask = vis_mask; + + return req; +} + +/** Dummy entry, used to indicate the existence of filtered entries. */ +const struct kshark_entry dummy_entry = { + .next = NULL, + .visible = 0x00, + .cpu = KS_FILTERED_BIN, + .pid = KS_FILTERED_BIN, + .event_id = -1, + .offset = 0, + .ts = 0 +}; + +static const struct kshark_entry * +get_entry(const struct kshark_entry_request *req, + struct kshark_entry **data, + ssize_t *index, size_t start, ssize_t end, int inc) +{ + struct kshark_context *kshark_ctx = NULL; + const struct kshark_entry *e = NULL; + ssize_t i; + + if (index) + *index = KS_EMPTY_BIN; + + if (!kshark_instance(&kshark_ctx)) + return e; + + for (i = start; i != end; i += inc) { + if (req->cond(kshark_ctx, data[i], req->val)) { + /* + * Data satisfying the condition has been found. + */ + if (req->vis_only && + !(data[i]->visible & req->vis_mask)) { + /* This data entry has been filtered. */ + e = &dummy_entry; + } else { + e = data[i]; + break; + } + } + } + + if (index) { + if (e) + *index = (e->event_id >= 0)? i : KS_FILTERED_BIN; + else + *index = KS_EMPTY_BIN; + } + + return e; +} + +/** + * @brief Search for an entry satisfying the requirements of a given Data + * request. Start from the position provided by the request and go + * searching in the direction of the increasing timestamps (front). + * + * @param req: Input location for Data request. + * @param data: Input location for the trace data. + * @param index: Optional output location for the index of the returned + * entry inside the array. 
+ * + * @returns Pointer to the first entry satisfying the matching condition on + * success, or NULL on failure. + * In the special case when some entries, satisfying the Matching + * condition function have been found, but all these entries have + * been discarded because of the visibility criteria (filtered + * entries), the function returns a pointer to a special + * "Dummy entry". + */ +const struct kshark_entry * +kshark_get_entry_front(const struct kshark_entry_request *req, + struct kshark_entry **data, + ssize_t *index) +{ + ssize_t end = req->first + req->n; + + return get_entry(req, data, index, req->first, end, +1); +} + +/** + * @brief Search for an entry satisfying the requirements of a given Data + * request. Start from the position provided by the request and go + * searching in the direction of the decreasing timestamps (back). + * + * @param req: Input location for Data request. + * @param data: Input location for the trace data. + * @param index: Optional output location for the index of the returned + * entry inside the array. + * + * @returns Pointer to the first entry satisfying the matching condition on + * success, or NULL on failure. + * In the special case when some entries, satisfying the Matching + * condition function have been found, but all these entries have + * been discarded because of the visibility criteria (filtered + * entries), the function returns a pointer to a special + * "Dummy entry". 
+ */ +const struct kshark_entry * +kshark_get_entry_back(const struct kshark_entry_request *req, + struct kshark_entry **data, + ssize_t *index) +{ + ssize_t end = req->first - req->n; + + if (end < 0) + end = -1; + + return get_entry(req, data, index, req->first, end, -1); +} diff --git a/kernel-shark-qt/src/libkshark.h b/kernel-shark-qt/src/libkshark.h index eda0a83..4860e74 100644 --- a/kernel-shark-qt/src/libkshark.h +++ b/kernel-shark-qt/src/libkshark.h @@ -190,6 +190,99 @@ void kshark_filter_entries(struct kshark_context *kshark_ctx, struct kshark_entry **data, size_t n_entries); +/** Search failed identifiers. */ +enum kshark_search_failed { + /** All entries have greater timestamps. */ + BSEARCH_ALL_GREATER = -1, + + /** All entries have smaller timestamps. */ + BSEARCH_ALL_SMALLER = -2, +}; + +/** General purpose Binary search macro. */ +#define BSEARCH(h, l, cond) \ + ({ \ + while (h - l > 1) { \ + mid = (l + h) / 2; \ + if (cond) \ + l = mid; \ + else \ + h = mid; \ + } \ + }) + +ssize_t kshark_find_entry_by_time(uint64_t time, + struct kshark_entry **data_rows, + size_t l, size_t h); + +ssize_t kshark_find_record_by_time(uint64_t time, + struct pevent_record **data_rows, + size_t l, size_t h); + +bool kshark_match_pid(struct kshark_context *kshark_ctx, + struct kshark_entry *e, int pid); + +bool kshark_match_cpu(struct kshark_context *kshark_ctx, + struct kshark_entry *e, int cpu); + +/** Empty bin identifier. */ +#define KS_EMPTY_BIN -1 + +/** Filtered bin identifier. */ +#define KS_FILTERED_BIN -2 + +/** Matching condition function type. To be used for data requests. */ +typedef bool (matching_condition_func)(struct kshark_context*, + struct kshark_entry*, + int); + +/** + * Data request structure, defining the properties of the required + * kshark_entry. + */ +struct kshark_entry_request { + /** + * Array index specifying the position inside the array from where + * the search starts. 
+ */ + size_t first; + + /** Number of array elements to search in. */ + size_t n; + + /** Matching condition function. */ + matching_condition_func *cond; + + /** + * Matching condition value, used by the Matching condition function. + */ + int val; + + /** If true, a visible entry is requested. */ + bool vis_only; + + /** + * If "vis_only" is true, use this mask to specify the level of + * visibility of the requested entry. + */ + uint8_t vis_mask; +}; + +struct kshark_entry_request * +kshark_entry_request_alloc(size_t first, size_t n, + matching_condition_func cond, int val, + bool vis_only, int vis_mask); + +const struct kshark_entry * +kshark_get_entry_front(const struct kshark_entry_request *req, + struct kshark_entry **data, + ssize_t *index); + +const struct kshark_entry * +kshark_get_entry_back(const struct kshark_entry_request *req, + struct kshark_entry **data, + ssize_t *index); + #ifdef __cplusplus } #endif From patchwork Mon Aug 6 16:19:23 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yordan Karadzhov X-Patchwork-Id: 10758817 Return-Path: Received: from mail-wm0-f66.google.com ([74.125.82.66]:55649 "EHLO mail-wm0-f66.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730876AbeHFSaB (ORCPT ); Mon, 6 Aug 2018 14:30:01 -0400 Received: by mail-wm0-f66.google.com with SMTP id f21-v6so14403382wmc.5 for ; Mon, 06 Aug 2018 09:20:09 -0700 (PDT) From: "Yordan Karadzhov (VMware)" To: rostedt@goodmis.org Cc: linux-trace-devel@vger.kernel.org, "Yordan Karadzhov (VMware)" Subject: [PATCH v4 2/6] kernel-shark-qt: Introduce the visualization model used by the Qt-based KS Date: Mon, 6 Aug 2018 19:19:23 +0300 Message-Id: <20180806161927.11206-3-y.karadz@gmail.com> In-Reply-To: <20180806161927.11206-1-y.karadz@gmail.com> References: <20180806161927.11206-1-y.karadz@gmail.com> Sender: linux-trace-devel-owner@vger.kernel.org List-ID: Content-Length: 41292 The model, used by the 
Qt-based KernelShark for visualization of trace data, is built around the concept of "Data Bins". When visualizing a large data-set of trace records, we are limited by the number of screen pixels available for drawing. The model divides the data-set into data-units, also called Bins. A Bin has to be defined in such a way that the entire content of one Bin can be summarized and visualized by a single graphical element. This model uses the timestamp of the trace records as a criterion for forming Bins. When the Model has to visualize all records inside a given time-window, it divides this time-window into N smaller, uniformly sized subintervals and then defines that one Bin contains all trace records having timestamps falling into one of these subintervals. Because the model operates over an array of trace records sorted in time, the content of each Bin can be retrieved by simply knowing the index of the first element inside this Bin and the index of the first element of the next Bin. This means that knowing the index of the first element in each Bin is enough to determine the State of the model. The State of the model can be modified by its five basic operations: Zoom-In, Zoom-Out, Shift-Forward, Shift-Backward and Jump-To. After each of these operations, the new State of the model is retrieved by using binary search to find the index of the first element in each Bin. This means that each one of the five basic operations of the model has log(n) time complexity (see previous change log). In order to keep the visualization of the new state of the model as efficient as possible, the model needs a way to summarize and visualize the content of the Bins in constant time. This is achieved by limiting ourselves to only checking the content of the records at the beginning and at the end of the Bin. As explained in the previous change log, this approach has the very counter-intuitive effect of making the update of the sparse (or empty) Graphs much slower. 
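[Editor's illustration, not part of the patch.] The "index of the first element in each Bin" state described above can be sketched with a toy function (hypothetical names, not the patch's ksmodel_fill). For clarity it walks the array linearly; the model itself locates each bin edge with the log(n) binary search from the previous patch.

```c
#include <stdint.h>
#include <stddef.h>
#include <sys/types.h>
#include <assert.h>

#define EMPTY_BIN	-1

/*
 * Toy sketch of the model state: map[i] holds the index of the first
 * record in bin i, or EMPTY_BIN if no record falls there. Bin i covers
 * the subinterval [min + i * bin_size, min + (i + 1) * bin_size).
 */
static void fill_bin_map(const uint64_t *ts, size_t n_rows,
			 uint64_t min, uint64_t bin_size,
			 size_t n_bins, ssize_t *map)
{
	size_t row = 0, bin;

	for (bin = 0; bin < n_bins; ++bin) {
		uint64_t edge = min + bin * bin_size;

		/* Advance to the first record at or after this bin's edge. */
		while (row < n_rows && ts[row] < edge)
			++row;

		if (row < n_rows && ts[row] < edge + bin_size)
			map[bin] = row;		/* first record in the bin */
		else
			map[bin] = EMPTY_BIN;	/* the bin is empty */
	}
}
```

With such a map, the content of bin i is simply the rows from map[i] up to the next non-empty map entry, which is what makes the state of the model cheap to store and to recompute.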
The problem of the Sparse Graphs will be addressed in another patch, where "Data Collections" will be introduced. Signed-off-by: Yordan Karadzhov (VMware) --- kernel-shark-qt/src/CMakeLists.txt | 3 +- kernel-shark-qt/src/libkshark-model.c | 1183 +++++++++++++++++++++++++ kernel-shark-qt/src/libkshark-model.h | 157 ++++ kernel-shark-qt/src/libkshark.h | 6 +- 4 files changed, 1347 insertions(+), 2 deletions(-) create mode 100644 kernel-shark-qt/src/libkshark-model.c create mode 100644 kernel-shark-qt/src/libkshark-model.h diff --git a/kernel-shark-qt/src/CMakeLists.txt b/kernel-shark-qt/src/CMakeLists.txt index ed3c60e..ec22f63 100644 --- a/kernel-shark-qt/src/CMakeLists.txt +++ b/kernel-shark-qt/src/CMakeLists.txt @@ -1,7 +1,8 @@ message("\n src ...") message(STATUS "libkshark") -add_library(kshark SHARED libkshark.c) +add_library(kshark SHARED libkshark.c + libkshark-model.c) target_link_libraries(kshark ${CMAKE_DL_LIBS} ${TRACEEVENT_LIBRARY} diff --git a/kernel-shark-qt/src/libkshark-model.c b/kernel-shark-qt/src/libkshark-model.c new file mode 100644 index 0000000..bedeb69 --- /dev/null +++ b/kernel-shark-qt/src/libkshark-model.c @@ -0,0 +1,1183 @@ +// SPDX-License-Identifier: LGPL-2.1 + +/* + * Copyright (C) 2017 VMware Inc, Yordan Karadzhov + */ + + /** + * @file libkshark-model.c + * @brief Visualization model for FTRACE (trace-cmd) data. + */ + +// C +#include +#include + +// KernelShark +#include "libkshark-model.h" + +/* The index of the Upper Overflow bin. */ +#define UOB(histo) (histo->n_bins) + +/* The index of the Lower Overflow bin. */ +#define LOB(histo) (histo->n_bins + 1) + +/* For all bins */ +# define ALLB(histo) LOB(histo) + +/** + * @brief Initialize the Visualization model. + * + * @param histo: Input location for the model descriptor. + */ +void ksmodel_init(struct kshark_trace_histo *histo) +{ + /* + * Initialize an empty histo. The histo will have no bins and will + * contain no data. 
+ */ + histo->bin_size = 0; + histo->min = 0; + histo->max = 0; + histo->n_bins = 0; + + histo->bin_count = NULL; + histo->map = NULL; +} + +/** + * @brief Clear (reset) the Visualization model. + * + * @param histo: Input location for the model descriptor. + */ +void ksmodel_clear(struct kshark_trace_histo *histo) +{ + /* Reset the histo. It will have no bins and will contain no data. */ + free(histo->map); + free(histo->bin_count); + ksmodel_init(histo); +} + +static void ksmodel_reset_bins(struct kshark_trace_histo *histo, + size_t first, size_t last) +{ + /* + * Reset the content of the bins. + * Be careful here! Resetting the entire array of signed integers with + * memset() will work only for values of "0" and "-1". Hence + * KS_EMPTY_BIN is expected to be "-1". + */ + memset(&histo->map[first], KS_EMPTY_BIN, + (last - first + 1) * sizeof(histo->map[0])); + + memset(&histo->bin_count[first], 0, + (last - first + 1) * sizeof(histo->bin_count[0])); +} + +static bool ksmodel_histo_alloc(struct kshark_trace_histo *histo, size_t n) +{ + free(histo->bin_count); + free(histo->map); + + /* Create bins. Two overflow bins are added. */ + histo->map = calloc(n + 2, sizeof(*histo->map)); + histo->bin_count = calloc(n + 2, sizeof(*histo->bin_count)); + + if (!histo->map || !histo->bin_count) { + ksmodel_clear(histo); + fprintf(stderr, "Failed to allocate memory for a histo.\n"); + return false; + } + + histo->n_bins = n; + + return true; +} + +static void ksmodel_set_in_range_bining(struct kshark_trace_histo *histo, + size_t n, uint64_t min, uint64_t max, + bool force_in_range) +{ + uint64_t corrected_range, delta_range, range = max - min; + struct kshark_entry *last; + + /* The size of the bin must be >= 1, hence the range must be >= n. */ + if (n == 0 || range < n) + return; + + /* + * If the number of bins changes, allocate memory for the descriptor of + * the model. 
+ */ + if (n != histo->n_bins) { + if (!ksmodel_histo_alloc(histo, n)) { + ksmodel_clear(histo); + return; + } + } + + /* Reset the content of all bins (including overflow bins) to zero. */ + ksmodel_reset_bins(histo, 0, ALLB(histo)); + + if (range % n == 0) { + /* + * The range is a multiple of the number of bins and needs no + * adjustment. This is very unlikely to happen but still ... + */ + histo->min = min; + histo->max = max; + histo->bin_size = range / n; + } else { + /* + * The range needs adjustment. The new range will be slightly + * bigger, compared to the requested one. + */ + histo->bin_size = range / n + 1; + corrected_range = histo->bin_size * n; + delta_range = corrected_range - range; + histo->min = min - delta_range / 2; + histo->max = histo->min + corrected_range; + + if (!force_in_range) + return; + + /* + * Make sure that the new range doesn't go outside of the time + * interval of the dataset. + */ + last = histo->data[histo->data_size - 1]; + if (histo->min < histo->data[0]->ts) { + histo->min = histo->data[0]->ts; + histo->max = histo->min + corrected_range; + } else if (histo->max > last->ts) { + histo->max = last->ts; + histo->min = histo->max - corrected_range; + } + } +} + +/** + * @brief Prepare the binning of the Visualization model. + * + * @param histo: Input location for the model descriptor. + * @param n: Number of bins. + * @param min: Lower edge of the time-window to be visualized. + * @param max: Upper edge of the time-window to be visualized. + */ +void ksmodel_set_bining(struct kshark_trace_histo *histo, + size_t n, uint64_t min, uint64_t max) +{ + ksmodel_set_in_range_bining(histo, n, min, max, false); +} + +static size_t ksmodel_set_lower_edge(struct kshark_trace_histo *histo) +{ + /* + * Find the index of the first entry inside the range + * (timestamp >= min). Note that the value of "min" is considered + * inside the range. 
+ */ + ssize_t row = kshark_find_entry_by_time(histo->min, + histo->data, + 0, + histo->data_size - 1); + + assert(row != BSEARCH_ALL_SMALLER); + + if (row == BSEARCH_ALL_GREATER || row == 0) { + /* Lower Overflow bin is empty. */ + histo->map[LOB(histo)] = KS_EMPTY_BIN; + histo->bin_count[LOB(histo)] = 0; + row = 0; + } else { + /* + * The first entry inside the range is not the first entry of + * the dataset. This means that the Lower Overflow bin contains + * data. + */ + + /* Lower Overflow bin starts at "0". */ + histo->map[LOB(histo)] = 0; + + /* + * The number of entries inside the Lower Overflow bin is equal + * to the index of the first entry inside the range. + */ + histo->bin_count[LOB(histo)] = row; + } + + /* + * Now check if the first entry inside the range falls into the first + * bin. + */ + if (histo->data[row]->ts < histo->min + histo->bin_size) { + /* + * It is inside the first bin. Set the beginning + * of the first bin. + */ + histo->map[0] = row; + } else { + /* The first bin is empty. */ + histo->map[0] = KS_EMPTY_BIN; + } + + return row; +} + +static size_t ksmodel_set_upper_edge(struct kshark_trace_histo *histo) +{ + /* + * Find the index of the first entry outside the range + * (timestamp > max). Note that the value of "max" is considered inside + * the range. Remember that kshark_find_entry_by_time returns the first + * entry which is equal or greater than the reference time. + */ + ssize_t row = kshark_find_entry_by_time(histo->max + 1, + histo->data, + 0, + histo->data_size - 1); + + assert(row != BSEARCH_ALL_GREATER); + + if (row == BSEARCH_ALL_SMALLER) { + /* Upper Overflow bin is empty. */ + histo->map[UOB(histo)] = KS_EMPTY_BIN; + histo->bin_count[UOB(histo)] = 0; + } else { + /* + * The Upper Overflow bin contains data. Set its beginning and + * the number of entries. 
+ */ + histo->map[UOB(histo)] = row; + histo->bin_count[UOB(histo)] = histo->data_size - row; + } + + return row; +} + +static void ksmodel_set_next_bin_edge(struct kshark_trace_histo *histo, + size_t bin, size_t last_row) +{ + size_t time, next_bin = bin + 1; + ssize_t row; + + /* Calculate the beginning of the next bin. */ + time = histo->min + next_bin * histo->bin_size; + + /* + * The timestamp of the very last entry of the dataset can be exactly + * equal to the value of the upper edge of the range. This is very + * likely to happen when we use ksmodel_set_in_range_bining(). In this + * case we have to increase the size of the very last bin in order to + * make sure that the last entry of the dataset will fall into it. + */ + if (next_bin == histo->n_bins - 1) + ++time; + /* + * Find the index of the first entry inside + * the next bin (timestamp > time). + */ + row = kshark_find_entry_by_time(time, histo->data, last_row, + histo->data_size - 1); + + if (row < 0 || histo->data[row]->ts >= time + histo->bin_size) { + /* The bin is empty. */ + histo->map[next_bin] = KS_EMPTY_BIN; + return; + } + + /* Set the index of the first entry. */ + histo->map[next_bin] = row; +} + +/* + * Fill in the bin_count array, which maps the number of entries within each + * bin. + */ +static void ksmodel_set_bin_counts(struct kshark_trace_histo *histo) +{ + int i = 0, prev_not_empty; + + memset(&histo->bin_count[0], 0, + (histo->n_bins) * sizeof(histo->bin_count[0])); + /* + * Find the first bin which contains data. Start by checking the Lower + * Overflow bin. + */ + if (histo->map[LOB(histo)] != KS_EMPTY_BIN) { + prev_not_empty = LOB(histo); + } else { + /* Loop till the first non-empty bin. */ + while (histo->map[i] < 0) { + ++i; + } + + prev_not_empty = i++; + } + + /* + * Starting from the first not empty bin, loop over all bins and fill + * in the bin_count array to hold the number of entries in each bin. 
+ */ + for (; i < histo->n_bins; ++i) { + if (histo->map[i] != KS_EMPTY_BIN) { + /* The current bin is not empty, take its data row and + * subtract it from the data row of the previous not + * empty bin, which will give us the number of data + * rows in the "prev_not_empty" bin. + */ + histo->bin_count[prev_not_empty] = + histo->map[i] - histo->map[prev_not_empty]; + + prev_not_empty = i; + } + } + + /* Check if the Upper Overflow bin contains data. */ + if (histo->map[UOB(histo)] == KS_EMPTY_BIN) { + /* + * The Upper Overflow bin is empty. Use the size of the dataset + * to calculate the content of the previous not empty bin. + */ + histo->bin_count[prev_not_empty] = histo->data_size - + histo->map[prev_not_empty]; + } else { + /* + * Use the index of the first entry inside the Upper Overflow + * bin to calculate the content of the previous not empty + * bin. + */ + histo->bin_count[prev_not_empty] = histo->map[UOB(histo)] - + histo->map[prev_not_empty]; + } +} + +/** + * @brief Provide the Visualization model with data. Calculate the current + * state of the model. + * + * @param histo: Input location for the model descriptor. + * @param data: Input location for the trace data. + * @param n: Number of bins. + */ +void ksmodel_fill(struct kshark_trace_histo *histo, + struct kshark_entry **data, size_t n) +{ + size_t last_row = 0; + int bin; + + histo->data_size = n; + histo->data = data; + + if (histo->n_bins == 0 || + histo->bin_size == 0 || + histo->data_size == 0) { + /* + * Something is wrong with this histo. + * Most likely the binning is not set. + */ + ksmodel_clear(histo); + fprintf(stderr, + "Unable to fill the model with data.\n"); + fprintf(stderr, + "Try to set the binning of the model first.\n"); + + return; + } + + /* Set the Lower Overflow bin */ + ksmodel_set_lower_edge(histo); + + /* + * Loop over the dataset and set the beginning of all individual bins. 
+ */ + for (bin = 0; bin < histo->n_bins; ++bin) { + ksmodel_set_next_bin_edge(histo, bin, last_row); + if (histo->map[bin + 1] > 0) + last_row = histo->map[bin + 1]; + } + + /* Set the Upper Overflow bin. */ + ksmodel_set_upper_edge(histo); + + /* Calculate the number of entries in each bin. */ + ksmodel_set_bin_counts(histo); +} + +/** + * @brief Get the total number of entries in a given bin. + * + * @param histo: Input location for the model descriptor. + * @param bin: Bin id. + * + * @returns The number of entries in this bin. + */ +size_t ksmodel_bin_count(struct kshark_trace_histo *histo, int bin) +{ + if (bin >= 0 && bin < histo->n_bins) + return histo->bin_count[bin]; + + if (bin == UPPER_OVERFLOW_BIN) + return histo->bin_count[UOB(histo)]; + + if (bin == LOWER_OVERFLOW_BIN) + return histo->bin_count[LOB(histo)]; + + return 0; +} + +/** + * @brief Shift the time-window of the model forward. Recalculate the current + * state of the model. + * + * @param histo: Input location for the model descriptor. + * @param n: Number of bins to shift. + */ +void ksmodel_shift_forward(struct kshark_trace_histo *histo, size_t n) +{ + size_t last_row = 0; + int bin; + + if (!histo->data_size) + return; + + if (histo->map[UOB(histo)] == KS_EMPTY_BIN) { + /* + * The Upper Overflow bin is empty. This means that we are at + * the upper edge of the dataset already. Do nothing in this + * case. + */ + return; + } + + histo->min += n * histo->bin_size; + histo->max += n * histo->bin_size; + + if (n >= histo->n_bins) { + /* + * No overlap between the new and the old ranges. Recalculate + * all bins from scratch. First calculate the new range. + */ + ksmodel_set_bining(histo, histo->n_bins, histo->min, + histo->max); + + ksmodel_fill(histo, histo->data, histo->data_size); + return; + } + + /* Set the new Lower Overflow bin. */ + ksmodel_set_lower_edge(histo); + + /* + * Copy the mapping indexes of all overlapping bins starting from + * bin "0" of the new histo. 
Note that the number of overlapping bins + * is histo->n_bins - n. + * We will do a sanity check. ksmodel_set_lower_edge() sets map[0] + * index of the new histo. This index should then be equal to map[n] + * index of the old histo. + */ + assert (histo->map[0] == histo->map[n]); + memmove(&histo->map[0], &histo->map[n], + sizeof(histo->map[0]) * (histo->n_bins - n)); + + /* + * The mapping index of the old Upper Overflow bin is now index of the + * first new bin. + */ + bin = UOB(histo) - n; + histo->map[bin] = histo->map[UOB(histo)]; + + /* Calculate only the content of the new (non-overlapping) bins. */ + for (; bin < histo->n_bins; ++bin) { + ksmodel_set_next_bin_edge(histo, bin, last_row); + if (histo->map[bin + 1] > 0) + last_row = histo->map[bin + 1]; + } + + /* + * Set the new Upper Overflow bin and calculate the number of entries + * in each bin. + */ + ksmodel_set_upper_edge(histo); + ksmodel_set_bin_counts(histo); +} + +/** + * @brief Shift the time-window of the model backward. Recalculate the current + * state of the model. + * + * @param histo: Input location for the model descriptor. + * @param n: Number of bins to shift. + */ +void ksmodel_shift_backward(struct kshark_trace_histo *histo, size_t n) +{ + size_t last_row = 0; + int bin; + + if (!histo->data_size) + return; + + if (histo->map[LOB(histo)] == KS_EMPTY_BIN) { + /* + * The Lower Overflow bin is empty. This means that we are at + * the Lower edge of the dataset already. Do nothing in this + * case. + */ + return; + } + + histo->min -= n * histo->bin_size; + histo->max -= n * histo->bin_size; + + if (n >= histo->n_bins) { + /* + * No overlap between the new and the old range. Recalculate + * all bins from scratch. First calculate the new range. + */ + ksmodel_set_bining(histo, histo->n_bins, histo->min, + histo->max); + + ksmodel_fill(histo, histo->data, histo->data_size); + return; + } + /* Set the new Lower Overflow bin. 
*/ + ksmodel_set_lower_edge(histo); + + /* + * Copy the mapping indexes of all overlapping bins starting from + * bin "0" of the old histo. Note that the number of overlapping bins + * is histo->n_bins - n. + */ + memmove(&histo->map[n], &histo->map[0], + sizeof(histo->map[0]) * (histo->n_bins - n)); + + /* Calculate only the content of the new (non-overlapping) bins. */ + for (bin = 0; bin < n; ++bin) { + ksmodel_set_next_bin_edge(histo, bin, last_row); + if (histo->map[bin + 1] > 0) + last_row = histo->map[bin + 1]; + } + + /* + * Set the new Upper Overflow bin and calculate the number of entries + * in each bin. + */ + ksmodel_set_upper_edge(histo); + ksmodel_set_bin_counts(histo); +} + +/** + * @brief Move the time-window of the model to a given location. Recalculate + * the current state of the model. + * + * @param histo: Input location for the model descriptor. + * @param ts: position in time to be visualized. + */ +void ksmodel_jump_to(struct kshark_trace_histo *histo, size_t ts) +{ + size_t min, max, range_min; + + if (ts > histo->min && ts < histo->max) { + /* + * The new position is already inside the range. + * Do nothing in this case. + */ + return; + } + + /* + * Calculate the new range without changing the size and the number + * of bins. + */ + min = ts - histo->n_bins * histo->bin_size / 2; + + /* Make sure that the range does not go outside of the dataset. */ + if (min < histo->data[0]->ts) { + min = histo->data[0]->ts; + } else { + range_min = histo->data[histo->data_size - 1]->ts - + histo->n_bins * histo->bin_size; + + if (min > range_min) + min = range_min; + } + + max = min + histo->n_bins * histo->bin_size; + + /* Use the new range to recalculate all bins from scratch. 
*/ + ksmodel_set_bining(histo, histo->n_bins, min, max); + ksmodel_fill(histo, histo->data, histo->data_size); +} + +static void ksmodel_zoom(struct kshark_trace_histo *histo, + double r, int mark, bool zoom_in) +{ + size_t range, min, max, delta_min; + double delta_tot; + + if (!histo->data_size) + return; + + /* + * If the marker is not set, assume that the focal point of the zoom + * is the center of the range. + */ + if (mark < 0) + mark = histo->n_bins / 2; + + range = histo->max - histo->min; + + /* + * Avoid overzooming. If needed, adjust the Scale factor to the value + * which provides bin_size >= 5. + */ + if (zoom_in && range * (1 - r) < histo->n_bins * 5) + r = 1 - (histo->n_bins * 5) / range; + + /* + * Now calculate the new range of the histo. Use the bin of the marker + * as a focal point for the zoom. This way the marker will stay + * inside the same bin in the new histo. + * + * First we set delta_tot to the requested percentage (r) of the + * range. Then we make delta_min equal to a percentage of delta_tot + * based on the position of the mark. After this we add / subtract + * delta_min to / from the original min, and subtract / add + * delta_tot - delta_min to / from the original max. + */ + delta_tot = range * r; + + if (mark == (int)histo->n_bins - 1) + delta_min = delta_tot; + else + delta_min = delta_tot * mark / histo->n_bins; + + min = zoom_in ? histo->min + delta_min : + histo->min - delta_min; + + max = zoom_in ? histo->max - (size_t) delta_tot + delta_min : + histo->max + (size_t) delta_tot - delta_min; + + + /* Make sure the new range doesn't go outside of the dataset. */ + if (min < histo->data[0]->ts) + min = histo->data[0]->ts; + + if (max > histo->data[histo->data_size - 1]->ts) + max = histo->data[histo->data_size - 1]->ts; + + /* + * Use the new range to recalculate all bins from scratch.
Enforce + * "In Range" adjustment of the range of the model, in order to avoid + * slowly drifting outside of the data-set in the case when the very + * first or the very last entry is used as a focal point. + */ + ksmodel_set_in_range_bining(histo, histo->n_bins, min, max, true); + ksmodel_fill(histo, histo->data, histo->data_size); +} + +/** + * @brief Extend the time-window of the model. Recalculate the current state + * of the model. + * + * @param histo: Input location for the model descriptor. + * @param r: Scale factor of the zoom-out. + * @param mark: Focus point of the zoom-out. + */ +void ksmodel_zoom_out(struct kshark_trace_histo *histo, + double r, int mark) +{ + ksmodel_zoom(histo, r, mark, false); +} + +/** + * @brief Shrink the time-window of the model. Recalculate the current state + * of the model. + * + * @param histo: Input location for the model descriptor. + * @param r: Scale factor of the zoom-in. + * @param mark: Focus point of the zoom-in. + */ +void ksmodel_zoom_in(struct kshark_trace_histo *histo, + double r, int mark) +{ + ksmodel_zoom(histo, r, mark, true); +} + +/** + * @brief Get the index of the first entry in a given bin. + * + * @param histo: Input location for the model descriptor. + * @param bin: Bin id. + * + * @returns Index of the first entry in this bin. If the bin is empty the + * function returns negative error identifier (KS_EMPTY_BIN). + */ +ssize_t ksmodel_first_index_at_bin(struct kshark_trace_histo *histo, int bin) +{ + if (bin >= 0 && bin < (int) histo->n_bins) + return histo->map[bin]; + + if (bin == UPPER_OVERFLOW_BIN) + return histo->map[UOB(histo)]; + + if (bin == LOWER_OVERFLOW_BIN) + return histo->map[LOB(histo)]; + + return KS_EMPTY_BIN; +} + +/** + * @brief Get the index of the last entry in a given bin. + * + * @param histo: Input location for the model descriptor. + * @param bin: Bin id. + * + * @returns Index of the last entry in this bin. 
If the bin is empty the + * function returns negative error identifier (KS_EMPTY_BIN). + */ +ssize_t ksmodel_last_index_at_bin(struct kshark_trace_histo *histo, int bin) +{ + ssize_t index = ksmodel_first_index_at_bin(histo, bin); + size_t count = ksmodel_bin_count(histo, bin); + + if (index >= 0 && count) + index += count - 1; + + return index; +} + +static bool ksmodel_is_visible(struct kshark_entry *e) +{ + if ((e->visible & KS_GRAPH_VIEW_FILTER_MASK) && + (e->visible & KS_EVENT_VIEW_FILTER_MASK)) + return true; + + return false; +} + +static struct kshark_entry_request * +ksmodel_entry_front_request_alloc(struct kshark_trace_histo *histo, + int bin, bool vis_only, + matching_condition_func func, int val) +{ + size_t first, n; + + /* Get the number of entries in this bin. */ + n = ksmodel_bin_count(histo, bin); + if (!n) + return NULL; + + first = ksmodel_first_index_at_bin(histo, bin); + + return kshark_entry_request_alloc(first, n, + func, val, + vis_only, KS_GRAPH_VIEW_FILTER_MASK); +} + +static struct kshark_entry_request * +ksmodel_entry_back_request_alloc(struct kshark_trace_histo *histo, + int bin, bool vis_only, + matching_condition_func func, int val) +{ + size_t first, n; + + /* Get the number of entries in this bin. */ + n = ksmodel_bin_count(histo, bin); + if (!n) + return NULL; + + first = ksmodel_last_index_at_bin(histo, bin); + + return kshark_entry_request_alloc(first, n, + func, val, + vis_only, KS_GRAPH_VIEW_FILTER_MASK); +} + +/** + * @brief Get the index of the first entry from a given Cpu in a given bin. + * + * @param histo: Input location for the model descriptor. + * @param bin: Bin id. + * @param cpu: Cpu Id. + * + * @returns Index of the first entry from a given Cpu in this bin. 
+ */ +ssize_t ksmodel_first_index_at_cpu(struct kshark_trace_histo *histo, + int bin, int cpu) +{ + size_t i, n, first, not_found = KS_EMPTY_BIN; + + n = ksmodel_bin_count(histo, bin); + if (!n) + return not_found; + + first = ksmodel_first_index_at_bin(histo, bin); + + for (i = first; i < first + n; ++i) { + if (histo->data[i]->cpu == cpu) { + if (ksmodel_is_visible(histo->data[i])) + return i; + else + not_found = KS_FILTERED_BIN; + } + } + + return not_found; +} + +/** + * @brief Get the index of the first entry from a given Task in a given bin. + * + * @param histo: Input location for the model descriptor. + * @param bin: Bin id. + * @param pid: Process Id of a task. + * + * @returns Index of the first entry from a given Task in this bin. + */ +ssize_t ksmodel_first_index_at_pid(struct kshark_trace_histo *histo, + int bin, int pid) +{ + size_t i, n, first, not_found = KS_EMPTY_BIN; + + n = ksmodel_bin_count(histo, bin); + if (!n) + return not_found; + + first = ksmodel_first_index_at_bin(histo, bin); + + for (i = first; i < first + n; ++i) { + if (histo->data[i]->pid == pid) { + if (ksmodel_is_visible(histo->data[i])) + return i; + else + not_found = KS_FILTERED_BIN; + } + } + + return not_found; +} + +/** + * @brief In a given bin, start from the front end of the bin and go towards + * the back end, searching for an entry satisfying the Matching + * condition defined by a Matching condition function. + * + * @param histo: Input location for the model descriptor. + * @param bin: Bin id. + * @param vis_only: If true, a visible entry is requested. + * @param func: Matching condition function. + * @param val: Matching condition value, used by the Matching condition + * function. + * @param index: Optional output location for the index of the requested + * entry inside the array. + * + * @returns Pointer to a kshark_entry, if an entry has been found. Else NULL.
+ */ +const struct kshark_entry * +ksmodel_get_entry_front(struct kshark_trace_histo *histo, + int bin, bool vis_only, + matching_condition_func func, int val, + ssize_t *index) +{ + struct kshark_entry_request *req; + const struct kshark_entry *entry; + + if (index) + *index = KS_EMPTY_BIN; + + /* Set the position at the beginning of the bin and go forward. */ + req = ksmodel_entry_front_request_alloc(histo, bin, vis_only, + func, val); + if (!req) + return NULL; + + entry = kshark_get_entry_front(req, histo->data, index); + free(req); + + return entry; +} + +/** + * @brief In a given bin, start from the back end of the bin and go towards + * the front end, searching for an entry satisfying the Matching + * condition defined by a Matching condition function. + * + * @param histo: Input location for the model descriptor. + * @param bin: Bin id. + * @param vis_only: If true, a visible entry is requested. + * @param func: Matching condition function. + * @param val: Matching condition value, used by the Matching condition + * function. + * @param index: Optional output location for the index of the requested + * entry inside the array. + * + * @returns Pointer to a kshark_entry, if an entry has been found. Else NULL. + */ +const struct kshark_entry * +ksmodel_get_entry_back(struct kshark_trace_histo *histo, + int bin, bool vis_only, + matching_condition_func func, int val, + ssize_t *index) +{ + struct kshark_entry_request *req; + const struct kshark_entry *entry; + + if (index) + *index = KS_EMPTY_BIN; + + /* Set the position at the end of the bin and go backwards. */ + req = ksmodel_entry_back_request_alloc(histo, bin, vis_only, + func, val); + if (!req) + return NULL; + + entry = kshark_get_entry_back(req, histo->data, index); + free(req); + + return entry; +} + +static int ksmodel_get_entry_pid(const struct kshark_entry *entry) +{ + if (!entry) { + /* No data has been found.
*/ + return KS_EMPTY_BIN; + } + + /* + * Note that if some data has been found, but this data is + * filtered-out, the Dummy entry is returned. The PID of the Dummy + * entry is KS_FILTERED_BIN. + */ + + return entry->pid; +} + +/** + * @brief In a given bin, start from the front end of the bin and go towards + * the back end, searching for an entry from a given CPU. Return + * the Process Id of the task of the entry found. + * + * @param histo: Input location for the model descriptor. + * @param bin: Bin id. + * @param cpu: CPU Id. + * @param vis_only: If true, a visible entry is requested. + * @param index: Optional output location for the index of the requested + * entry inside the array. + * + * @returns Process Id of the task if an entry has been found. Else a negative + * Identifier (KS_EMPTY_BIN or KS_FILTERED_BIN). + */ +int ksmodel_get_pid_front(struct kshark_trace_histo *histo, + int bin, int cpu, bool vis_only, + ssize_t *index) +{ + const struct kshark_entry *entry; + + if (cpu < 0) + return KS_EMPTY_BIN; + + entry = ksmodel_get_entry_front(histo, bin, vis_only, + kshark_match_cpu, cpu, + index); + return ksmodel_get_entry_pid(entry); +} + +/** + * @brief In a given bin, start from the back end of the bin and go towards + * the front end, searching for an entry from a given CPU. Return + * the Process Id of the task of the entry found. + * + * @param histo: Input location for the model descriptor. + * @param bin: Bin id. + * @param cpu: CPU Id. + * @param vis_only: If true, a visible entry is requested. + * @param index: Optional output location for the index of the requested + * entry inside the array. + * + * @returns Process Id of the task if an entry has been found. Else a negative + * Identifier (KS_EMPTY_BIN or KS_FILTERED_BIN).
+ */ +int ksmodel_get_pid_back(struct kshark_trace_histo *histo, + int bin, int cpu, bool vis_only, + ssize_t *index) +{ + const struct kshark_entry *entry; + + if (cpu < 0) + return KS_EMPTY_BIN; + + entry = ksmodel_get_entry_back(histo, bin, vis_only, + kshark_match_cpu, cpu, + index); + + return ksmodel_get_entry_pid(entry); +} + +static int ksmodel_get_entry_cpu(const struct kshark_entry *entry) +{ + if (!entry) { + /* No data has been found. */ + return KS_EMPTY_BIN; + } + + /* + * Note that if some data has been found, but this data is + * filtered-out, the Dummy entry is returned. The CPU Id of the Dummy + * entry is KS_FILTERED_BIN. + */ + + return entry->cpu; +} + +/** + * @brief In a given bin, start from the front end of the bin and go towards + * the back end, searching for an entry from a given PID. Return + * the CPU Id of the entry found. + * + * @param histo: Input location for the model descriptor. + * @param bin: Bin id. + * @param pid: Process Id. + * @param vis_only: If true, a visible entry is requested. + * @param index: Optional output location for the index of the requested + * entry inside the array. + * + * @returns CPU Id of the entry, if an entry has been found. Else a negative + * Identifier (KS_EMPTY_BIN or KS_FILTERED_BIN). + */ +int ksmodel_get_cpu_front(struct kshark_trace_histo *histo, + int bin, int pid, bool vis_only, + ssize_t *index) +{ + const struct kshark_entry *entry; + + if (pid < 0) + return KS_EMPTY_BIN; + + entry = ksmodel_get_entry_front(histo, bin, vis_only, + kshark_match_pid, pid, + index); + return ksmodel_get_entry_cpu(entry); +} + +/** + * @brief In a given bin, start from the back end of the bin and go towards + * the front end, searching for an entry from a given PID. Return + * the CPU Id of the entry found. + * + * @param histo: Input location for the model descriptor. + * @param bin: Bin id. + * @param pid: Process Id. + * @param vis_only: If true, a visible entry is requested.
+ * @param index: Optional output location for the index of the requested + * entry inside the array. + * + * @returns CPU Id of the entry, if an entry has been found. Else a negative + * Identifier (KS_EMPTY_BIN or KS_FILTERED_BIN). + */ +int ksmodel_get_cpu_back(struct kshark_trace_histo *histo, + int bin, int pid, bool vis_only, + ssize_t *index) +{ + const struct kshark_entry *entry; + + if (pid < 0) + return KS_EMPTY_BIN; + + entry = ksmodel_get_entry_back(histo, bin, vis_only, + kshark_match_pid, pid, + index); + + return ksmodel_get_entry_cpu(entry); +} + +/** + * @brief Check if a visible trace event from a given Cpu exists in this bin. + * + * @param histo: Input location for the model descriptor. + * @param bin: Bin id. + * @param cpu: Cpu Id. + * @param index: Optional output location for the index of the requested + * entry inside the array. + * + * @returns True, if a visible entry exists in this bin. Else false. + */ +bool ksmodel_cpu_visible_event_exist(struct kshark_trace_histo *histo, + int bin, int cpu, ssize_t *index) +{ + struct kshark_entry_request *req; + const struct kshark_entry *entry; + + if (index) + *index = KS_EMPTY_BIN; + + /* Set the position at the beginning of the bin and go forward. */ + req = ksmodel_entry_front_request_alloc(histo, + bin, true, + kshark_match_cpu, cpu); + if (!req) + return false; + + /* + * The default visibility mask of the Model Data request is + * KS_GRAPH_VIEW_FILTER_MASK. Change the mask to + * KS_EVENT_VIEW_FILTER_MASK because we want to find a visible event. + */ + req->vis_mask = KS_EVENT_VIEW_FILTER_MASK; + + entry = kshark_get_entry_front(req, histo->data, index); + free(req); + + if (!entry || !entry->visible) { + /* No visible entry has been found. */ + return false; + } + + return true; +} + +/** + * @brief Check if a visible trace event from a given Task exists in this bin. + * + * @param histo: Input location for the model descriptor. + * @param bin: Bin id.
+ * @param pid: Process Id of the task. + * @param index: Optional output location for the index of the requested + * entry inside the array. + * + * @returns True, if a visible entry exists in this bin. Else false. + */ +bool ksmodel_task_visible_event_exist(struct kshark_trace_histo *histo, + int bin, int pid, ssize_t *index) +{ + struct kshark_entry_request *req; + const struct kshark_entry *entry; + + if (index) + *index = KS_EMPTY_BIN; + + /* Set the position at the beginning of the bin and go forward. */ + req = ksmodel_entry_front_request_alloc(histo, + bin, true, + kshark_match_pid, pid); + if (!req) + return false; + + /* + * The default visibility mask of the Model Data request is + * KS_GRAPH_VIEW_FILTER_MASK. Change the mask to + * KS_EVENT_VIEW_FILTER_MASK because we want to find a visible event. + */ + req->vis_mask = KS_EVENT_VIEW_FILTER_MASK; + + entry = kshark_get_entry_front(req, histo->data, index); + free(req); + + if (!entry || !entry->visible) { + /* No visible entry has been found. */ + return false; + } + + return true; +} diff --git a/kernel-shark-qt/src/libkshark-model.h b/kernel-shark-qt/src/libkshark-model.h new file mode 100644 index 0000000..9c80458 --- /dev/null +++ b/kernel-shark-qt/src/libkshark-model.h @@ -0,0 +1,157 @@ +/* SPDX-License-Identifier: LGPL-2.1 */ + +/* + * Copyright (C) 2017 VMware Inc, Yordan Karadzhov + */ + + /** + * @file libkshark-model.h + * @brief Visualization model for FTRACE (trace-cmd) data. + */ + +#ifndef _LIB_KSHARK_MODEL_H +#define _LIB_KSHARK_MODEL_H + +// KernelShark +#include "libkshark.h" + +#ifdef __cplusplus +extern "C" { +#endif // __cplusplus + +/** + * Overflow Bin identifiers. The two overflow bins are used to hold the data + * outside the visualized range. + */ +enum OverflowBin { + /** + * Identifier of the Upper Overflow Bin. This bin is used to hold the + * data after (in time) the end of the visualized range. 
+ */ + UPPER_OVERFLOW_BIN = -1, + + /** + * Identifier of the Lower Overflow Bin. This bin is used to hold the + * data before (in time) the beginning of the visualized range. + */ + LOWER_OVERFLOW_BIN = -2, +}; + +/** Structure describing the current state of the visualization model. */ +struct kshark_trace_histo { + /** Trace data array. */ + struct kshark_entry **data; + + /** The size of the data array. */ + size_t data_size; + + /** The first entry (index of data array) in each bin. */ + ssize_t *map; + + /** Number of entries in each bin. */ + size_t *bin_count; + + /** + * Lower edge of the time-window to be visualized. Only entries having + * timestamp >= min will be visualized. + */ + uint64_t min; + + /** + * Upper edge of the time-window to be visualized. Only entries having + * timestamp <= max will be visualized. + */ + uint64_t max; + + /** The size in time for each bin. */ + uint64_t bin_size; + + /** Number of bins. */ + int n_bins; +}; + +void ksmodel_init(struct kshark_trace_histo *histo); + +void ksmodel_clear(struct kshark_trace_histo *histo); + +void ksmodel_set_bining(struct kshark_trace_histo *histo, + size_t n, uint64_t min, uint64_t max); + +void ksmodel_fill(struct kshark_trace_histo *histo, + struct kshark_entry **data, size_t n); + +size_t ksmodel_bin_count(struct kshark_trace_histo *histo, int bin); + +void ksmodel_shift_forward(struct kshark_trace_histo *histo, size_t n); + +void ksmodel_shift_backward(struct kshark_trace_histo *histo, size_t n); + +void ksmodel_jump_to(struct kshark_trace_histo *histo, size_t ts); + +void ksmodel_zoom_out(struct kshark_trace_histo *histo, + double r, int mark); + +void ksmodel_zoom_in(struct kshark_trace_histo *histo, + double r, int mark); + +ssize_t ksmodel_first_index_at_bin(struct kshark_trace_histo *histo, int bin); + +ssize_t ksmodel_last_index_at_bin(struct kshark_trace_histo *histo, int bin); + +ssize_t ksmodel_first_index_at_cpu(struct kshark_trace_histo *histo, + int bin, int cpu); + 
+ssize_t ksmodel_first_index_at_pid(struct kshark_trace_histo *histo, + int bin, int pid); + +const struct kshark_entry * +ksmodel_get_entry_front(struct kshark_trace_histo *histo, + int bin, bool vis_only, + matching_condition_func func, int val, + ssize_t *index); + +const struct kshark_entry * +ksmodel_get_entry_back(struct kshark_trace_histo *histo, + int bin, bool vis_only, + matching_condition_func func, int val, + ssize_t *index); + +int ksmodel_get_pid_front(struct kshark_trace_histo *histo, + int bin, int cpu, bool vis_only, + ssize_t *index); + +int ksmodel_get_pid_back(struct kshark_trace_histo *histo, + int bin, int cpu, bool vis_only, + ssize_t *index); + +int ksmodel_get_cpu_front(struct kshark_trace_histo *histo, + int bin, int pid, bool vis_only, + ssize_t *index); + +int ksmodel_get_cpu_back(struct kshark_trace_histo *histo, + int bin, int pid, bool vis_only, + ssize_t *index); + +bool ksmodel_cpu_visible_event_exist(struct kshark_trace_histo *histo, + int bin, int cpu, ssize_t *index); + +bool ksmodel_task_visible_event_exist(struct kshark_trace_histo *histo, + int bin, int pid, ssize_t *index); + +static inline double ksmodel_bin_time(struct kshark_trace_histo *histo, + int bin) +{ + return (histo->min + bin*histo->bin_size) * 1e-9; +} + +static inline uint64_t ksmodel_bin_ts(struct kshark_trace_histo *histo, + int bin) +{ + return (histo->min + bin*histo->bin_size); +} + +#ifdef __cplusplus +} +#endif // __cplusplus + +#endif diff --git a/kernel-shark-qt/src/libkshark.h b/kernel-shark-qt/src/libkshark.h index 4860e74..b0e1bc8 100644 --- a/kernel-shark-qt/src/libkshark.h +++ b/kernel-shark-qt/src/libkshark.h @@ -225,7 +225,11 @@ bool kshark_match_pid(struct kshark_context *kshark_ctx, bool kshark_match_cpu(struct kshark_context *kshark_ctx, struct kshark_entry *e, int cpu); -/** Empty bin identifier. */ +/** + * Empty bin identifier. + * KS_EMPTY_BIN is used to reset entire arrays to empty with memset(), thus it + * must be -1 for that to work. 
+ */ #define KS_EMPTY_BIN -1 /** Filtered bin identifier. */ From patchwork Mon Aug 6 16:19:24 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yordan Karadzhov X-Patchwork-Id: 10758815 Return-Path: Received: from mail-wm0-f67.google.com ([74.125.82.67]:33659 "EHLO mail-wm0-f67.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1733159AbeHFSaA (ORCPT ); Mon, 6 Aug 2018 14:30:00 -0400 Received: by mail-wm0-f67.google.com with SMTP id r24-v6so11953845wmh.0 for ; Mon, 06 Aug 2018 09:20:10 -0700 (PDT) From: "Yordan Karadzhov (VMware)" To: rostedt@goodmis.org Cc: linux-trace-devel@vger.kernel.org, "Yordan Karadzhov (VMware)" Subject: [PATCH v4 3/6] kernel-shark-qt: Add an example showing how to manipulate the Vis. model. Date: Mon, 6 Aug 2018 19:19:24 +0300 Message-Id: <20180806161927.11206-4-y.karadz@gmail.com> In-Reply-To: <20180806161927.11206-1-y.karadz@gmail.com> References: <20180806161927.11206-1-y.karadz@gmail.com> Sender: linux-trace-devel-owner@vger.kernel.org List-ID: Content-Length: 4756 This patch introduces a basic example, showing how to initialize the Visualization model and to use the API to perform some of the basic operations. 
Signed-off-by: Yordan Karadzhov (VMware) --- kernel-shark-qt/examples/CMakeLists.txt | 4 + kernel-shark-qt/examples/datahisto.c | 155 ++++++++++++++++++++++++ 2 files changed, 159 insertions(+) create mode 100644 kernel-shark-qt/examples/datahisto.c diff --git a/kernel-shark-qt/examples/CMakeLists.txt b/kernel-shark-qt/examples/CMakeLists.txt index 009fd1e..6906eba 100644 --- a/kernel-shark-qt/examples/CMakeLists.txt +++ b/kernel-shark-qt/examples/CMakeLists.txt @@ -7,3 +7,7 @@ target_link_libraries(dload kshark) message(STATUS "datafilter") add_executable(dfilter datafilter.c) target_link_libraries(dfilter kshark) + +message(STATUS "datahisto") +add_executable(dhisto datahisto.c) +target_link_libraries(dhisto kshark) diff --git a/kernel-shark-qt/examples/datahisto.c b/kernel-shark-qt/examples/datahisto.c new file mode 100644 index 0000000..3f19870 --- /dev/null +++ b/kernel-shark-qt/examples/datahisto.c @@ -0,0 +1,155 @@ +// SPDX-License-Identifier: GPL-2.0 + +/* + * Copyright (C) 2018 VMware Inc, Yordan Karadzhov + */ + +// C +#include +#include + +// KernelShark +#include "libkshark.h" +#include "libkshark-model.h" + +#define N_BINS 5 + +const char *default_file = "trace.dat"; + +void dump_bin(struct kshark_trace_histo *histo, int bin, + const char *type, int val) +{ + const struct kshark_entry *e_front, *e_back; + char *entry_str; + ssize_t i_front, i_back; + + printf("bin %i {\n", bin); + if (strcmp(type, "cpu") == 0) { + e_front = ksmodel_get_entry_front(histo, bin, true, + kshark_match_cpu, val, + &i_front); + + e_back = ksmodel_get_entry_back(histo, bin, true, + kshark_match_cpu, val, + &i_back); + } else if (strcmp(type, "task") == 0) { + e_front = ksmodel_get_entry_front(histo, bin, true, + kshark_match_pid, val, + &i_front); + + e_back = ksmodel_get_entry_back(histo, bin, true, + kshark_match_pid, val, + &i_back); + } else { + i_front = ksmodel_first_index_at_bin(histo, bin); + e_front = histo->data[i_front]; + + i_back = ksmodel_last_index_at_bin(histo, 
bin); + e_back = histo->data[i_back]; + } + + if (i_front == KS_EMPTY_BIN) { + puts ("EMPTY BIN"); + } else { + entry_str = kshark_dump_entry(e_front); + printf("%li -> %s\n", i_front, entry_str); + free(entry_str); + + entry_str = kshark_dump_entry(e_back); + printf("%li -> %s\n", i_back, entry_str); + free(entry_str); + } + + puts("}\n"); +} + +void dump_histo(struct kshark_trace_histo *histo, const char *type, int val) +{ + size_t bin; + + for (bin = 0; bin < histo->n_bins; ++bin) + dump_bin(histo, bin, type, val); +} + +int main(int argc, char **argv) +{ + struct kshark_context *kshark_ctx; + struct kshark_entry **data = NULL; + struct kshark_trace_histo histo; + size_t i, n_rows, n_tasks; + bool status; + int *pids; + + /* Create a new kshark session. */ + kshark_ctx = NULL; + if (!kshark_instance(&kshark_ctx)) + return 1; + + /* Open a trace data file produced by trace-cmd. */ + if (argc > 1) + status = kshark_open(kshark_ctx, argv[1]); + else + status = kshark_open(kshark_ctx, default_file); + + if (!status) { + kshark_free(kshark_ctx); + return 1; + } + + /* Load the content of the file into an array of entries. */ + n_rows = kshark_load_data_entries(kshark_ctx, &data); + + /* Get a list of all tasks. */ + n_tasks = kshark_get_task_pids(kshark_ctx, &pids); + + /* Initialize the Visualization Model. */ + ksmodel_init(&histo); + ksmodel_set_bining(&histo, N_BINS, data[0]->ts, + data[n_rows - 1]->ts); + + /* Fill the model with data and calculate its state. */ + ksmodel_fill(&histo, data, n_rows); + + /* Dump the raw bins. */ + dump_histo(&histo, "", 0); + + puts("\n...\n\n"); + + /* + * Change the state of the model. Do 50% Zoom-In and dump only CPU 0. + */ + ksmodel_zoom_in(&histo, .50, -1); + dump_histo(&histo, "cpu", 0); + + puts("\n...\n\n"); + + /* Shift forward by two bins and this time dump only CPU 1. 
*/ + ksmodel_shift_forward(&histo, 2); + dump_histo(&histo, "cpu", 1); + + puts("\n...\n\n"); + + /* + * Do 10% Zoom-Out, using the last bin as a focal point. Dump the last + * Task. + */ + ksmodel_zoom_out(&histo, .10, N_BINS - 1); + dump_histo(&histo, "task", pids[n_tasks - 1]); + + /* Reset (clear) the model. */ + ksmodel_clear(&histo); + + /* Free the memory. */ + for (i = 0; i < n_rows; ++i) + free(data[i]); + + free(data); + + /* Close the file. */ + kshark_close(kshark_ctx); + + /* Close the session. */ + kshark_free(kshark_ctx); + + return 0; +} From patchwork Mon Aug 6 16:19:25 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yordan Karadzhov X-Patchwork-Id: 10758823 Return-Path: Received: from mail-wm0-f46.google.com ([74.125.82.46]:37300 "EHLO mail-wm0-f46.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1733159AbeHFSaE (ORCPT ); Mon, 6 Aug 2018 14:30:04 -0400 Received: by mail-wm0-f46.google.com with SMTP id n11-v6so14906897wmc.2 for ; Mon, 06 Aug 2018 09:20:12 -0700 (PDT) From: "Yordan Karadzhov (VMware)" To: rostedt@goodmis.org Cc: linux-trace-devel@vger.kernel.org, "Yordan Karadzhov (VMware)" Subject: [PATCH v4 4/6] kernel-shark-qt: Define Data collections Date: Mon, 6 Aug 2018 19:19:25 +0300 Message-Id: <20180806161927.11206-5-y.karadz@gmail.com> In-Reply-To: <20180806161927.11206-1-y.karadz@gmail.com> References: <20180806161927.11206-1-y.karadz@gmail.com> Sender: linux-trace-devel-owner@vger.kernel.org List-ID: Content-Length: 29548 Data collections are used to optimize the search for an entry having an abstract property, defined by a Matching condition function and a value. When a collection is processed, the data which is relevant for the collection is enclosed in "Data intervals", defined by pairs of "Resume" and "Break" points. It is guaranteed that the data outside of the intervals contains no entries satisfying the abstract matching condition. 
At the same time, an interval may (and usually will) contain data that does not satisfy the matching condition. Keep in mind that the Collection is defined over an array of kshark_entries, sorted in time. Each interval starts (resumes) at the index of the first appearance of an entry satisfying the matching condition. The definition of the Break point of the interval is a bit more complicated: the interval is closed when we reach an entry which still satisfies the condition, but the next entry on the same CPU no longer satisfies it. Once defined, the Data collection can be used when searching for an entry having the same (or a related) abstract property. The collection makes it possible to ignore the irrelevant data, thus eliminating the linear worst-case time complexity of the search. The user has the possibility to inflate each of the intervals by appending additional data which does not satisfy the matching condition (margin data). This data is added to each interval at the beginning and at the end of the data which is relevant for the collection, as well as at the beginning and at the end of the data-set. The margin data can be useful when the user wants to see what is happening just before and just after the appearance of the data of interest.
Signed-off-by: Yordan Karadzhov (VMware) --- kernel-shark-qt/src/CMakeLists.txt | 3 +- kernel-shark-qt/src/libkshark-collection.c | 818 +++++++++++++++++++++ kernel-shark-qt/src/libkshark.c | 16 + kernel-shark-qt/src/libkshark.h | 80 ++ 4 files changed, 916 insertions(+), 1 deletion(-) create mode 100644 kernel-shark-qt/src/libkshark-collection.c diff --git a/kernel-shark-qt/src/CMakeLists.txt b/kernel-shark-qt/src/CMakeLists.txt index ec22f63..cd42920 100644 --- a/kernel-shark-qt/src/CMakeLists.txt +++ b/kernel-shark-qt/src/CMakeLists.txt @@ -2,7 +2,8 @@ message("\n src ...") message(STATUS "libkshark") add_library(kshark SHARED libkshark.c - libkshark-model.c) + libkshark-model.c + libkshark-collection.c) target_link_libraries(kshark ${CMAKE_DL_LIBS} ${TRACEEVENT_LIBRARY} diff --git a/kernel-shark-qt/src/libkshark-collection.c b/kernel-shark-qt/src/libkshark-collection.c new file mode 100644 index 0000000..aea0015 --- /dev/null +++ b/kernel-shark-qt/src/libkshark-collection.c @@ -0,0 +1,818 @@ +// SPDX-License-Identifier: LGPL-2.1 + +/* + * Copyright (C) 2018 VMware Inc, Yordan Karadzhov + */ + + /** + * @file libkshark-collection.c + * @brief Data Collections. + */ + +// C +#include +#include +#include +#include + +// KernelShark +#include "libkshark.h" + +/* Quiet warnings over documenting simple structures */ +//! @cond Doxygen_Suppress + +enum collection_point_type { + COLLECTION_IGNORE = 0, + COLLECTION_RESUME, + COLLECTION_BREAK, +}; + +#define LAST_BIN -3 + +struct entry_list { + struct entry_list *next; + size_t index; + uint8_t type; +}; + + +enum map_flags { + COLLECTION_BEFORE = -1, + COLLECTION_INSIDE = 0, + COLLECTION_AFTER = 1, +}; + +//! @endcond + +/* + * If the type of the last added entry is COLLECTION_IGNORE, overwrite this + * entry (ignore the old entry values). Else add a new entry to the list. 
+ */ +static bool collection_add_entry(struct entry_list **list, + size_t i, uint8_t type) +{ + struct entry_list *entry = *list; + + if (entry->type != COLLECTION_IGNORE) { + entry->next = malloc(sizeof(*entry)); + if (!entry->next) + return false; + + entry = entry->next; + *list = entry; + } + + entry->index = i; + entry->type = type; + + return true; +} + +static struct kshark_entry_collection * +kshark_data_collection_alloc(struct kshark_context *kshark_ctx, + struct kshark_entry **data, + size_t first, + size_t n_rows, + matching_condition_func cond, + int val, + size_t margin) +{ + struct kshark_entry_collection *col_ptr = NULL; + struct kshark_entry *last_vis_entry = NULL; + struct entry_list *col_list, *temp; + size_t resume_count = 0, break_count = 0; + size_t i, j, end, last_added = 0; + bool good_data = false; + + col_list = malloc(sizeof(*col_list)); + if (!col_list) + goto fail; + + temp = col_list; + + if (margin != 0) { + /* + * If this collection includes margin data, add a margin data + * interval at the very beginning of the data-set. + */ + temp->index = first; + temp->type = COLLECTION_RESUME; + ++resume_count; + + collection_add_entry(&temp, first + margin - 1, + COLLECTION_BREAK); + ++break_count; + } else { + temp->type = COLLECTION_IGNORE; + } + + end = first + n_rows - margin; + for (i = first + margin; i < end; ++i) { + if (!cond(kshark_ctx, data[i], val)) { + /* + * The entry is irrelevant for this collection. + * Do nothing. + */ + continue; + } + + /* The Matching condition is satisfied. */ + if (!good_data) { + /* + * Resume the collection here. Add some margin data + * in front of the data of interest. + */ + good_data = true; + if (last_added == 0 || last_added < i - margin) { + collection_add_entry(&temp, i - margin, + COLLECTION_RESUME); + ++resume_count; + } else { + /* + * Ignore the last collection Break point. + * Continue extending the previous data + * interval.
+ */ + temp->type = COLLECTION_IGNORE; + --break_count; + } + } else if (good_data && + data[i]->next && + !cond(kshark_ctx, data[i]->next, val)) { + /* + * Break the collection here. Add some margin data + * after the data of interest. + */ + good_data = false; + last_vis_entry = data[i]; + + /* Keep adding entries until the "next" record. */ + for (j = i + 1; + j != end && last_vis_entry->next != data[j]; + j++) + ; + + /* + * If the number of added entries is smaller than the + * number of margin entries requested, keep adding + * until you fill the margin. + */ + if (i + margin < j) + i = j; + else + i += margin; + + last_added = i; + collection_add_entry(&temp, i, COLLECTION_BREAK); + ++break_count; + } + } + + if (good_data) { + collection_add_entry(&temp, end - 1, COLLECTION_BREAK); + ++break_count; + } + + if (margin != 0) { + /* + * If this collection includes margin data, add a margin data + * interval at the very end of the data-set. + */ + collection_add_entry(&temp, first + n_rows - margin, + COLLECTION_RESUME); + ++resume_count; + + collection_add_entry(&temp, first + n_rows - 1, + COLLECTION_BREAK); + ++break_count; + } + + /* + * If everything is OK, we must have pairs of COLLECTION_RESUME + * and COLLECTION_BREAK points. + */ + assert(break_count == resume_count); + + /* Create the collection. 
*/ + col_ptr = malloc(sizeof(*col_ptr)); + if (!col_ptr) + goto fail; + + col_ptr->next = NULL; + + col_ptr->resume_points = calloc(resume_count, + sizeof(*col_ptr->resume_points)); + if (!col_ptr->resume_points) + goto fail; + + col_ptr->break_points = calloc(break_count, + sizeof(*col_ptr->break_points)); + if (!col_ptr->break_points) { + free(col_ptr->resume_points); + goto fail; + } + + col_ptr->cond = cond; + col_ptr->val = val; + + col_ptr->size = resume_count; + for (i = 0; i < col_ptr->size; ++i) { + assert(col_list->type == COLLECTION_RESUME); + col_ptr->resume_points[i] = col_list->index; + temp = col_list; + col_list = col_list->next; + free(temp); + + assert(col_list->type == COLLECTION_BREAK); + col_ptr->break_points[i] = col_list->index; + temp = col_list; + col_list = col_list->next; + free(temp); + } + + return col_ptr; + +fail: + fprintf(stderr, "Failed to allocate memory for Data collection.\n"); + + free(col_ptr); + for (i = 0; i < resume_count + break_count; ++i) { + temp = col_list; + col_list = col_list->next; + free(temp); + } + + return NULL; +} + +/* + * This function provides mapping between the index inside the data-set and + * the index of the collection interval. Additional output flag is used to + * resolve the ambiguity of the mapping. If the value of the flag is + * COLLECTION_INSIDE, the "source_index" is inside the returned interval. If + * the value of the flag is COLLECTION_BEFORE, the "source_index" is inside + * the gap before the returned interval. If the value of the flag is + * COLLECTION_AFTER, the "source_index" is inside the gap after the returned + * interval. 
+ */ +static ssize_t +map_collection_index_from_source(const struct kshark_entry_collection *col, + size_t source_index, int *flag) +{ + size_t l, h, mid; + + if (!col->size) + return KS_EMPTY_BIN; + + l = 0; + h = col->size - 1; + + if (source_index < col->resume_points[l]) { + *flag = COLLECTION_BEFORE; + return l; + } + + if (source_index >= col->resume_points[h]) { + if (source_index < col->break_points[h]) + *flag = COLLECTION_INSIDE; + else + *flag = COLLECTION_AFTER; + + return h; + } + + BSEARCH(h, l, source_index > col->resume_points[mid]); + + if (source_index <= col->break_points[l]) + *flag = COLLECTION_INSIDE; + else + *flag = COLLECTION_AFTER; + + return l; +} + +static ssize_t +map_collection_request_init(const struct kshark_entry_collection *col, + struct kshark_entry_request **req, + bool front, size_t *end) +{ + struct kshark_entry_request *req_tmp = *req; + int col_index_flag; + ssize_t col_index; + size_t req_end; + + if (req_tmp->next || col->size == 0) { + fprintf(stderr, "Unexpected input in "); + fprintf(stderr, "map_collection_request_init()\n"); + goto do_nothing; + } + + req_end = front ? req_tmp->first + req_tmp->n - 1 : + req_tmp->first - req_tmp->n + 1; + + /* + * Find the first Resume Point of the collection which is equal or + * greater than the first index of this request. + */ + col_index = map_collection_index_from_source(col, + req_tmp->first, + &col_index_flag); + + /* + * The value of "col_index" is ambiguous. Use the "col_index_flag" to + * deal with all possible cases. + */ + if (col_index == KS_EMPTY_BIN) { + /* Empty collection. */ + goto do_nothing; + } + + if (col_index_flag == COLLECTION_AFTER) { + /* + * This request starts after the end of interval "col_index". + */ + if (front && (col_index == col->size - 1 || + req_end < col->resume_points[col_index + 1])) { + /* + * No overlap between the collection and this front + * request. Do nothing. 
+ */ + goto do_nothing; + } else if (!front && req_end > col->break_points[col_index]) { + /* + * No overlap between the collection and this back + * request. Do nothing. + */ + goto do_nothing; + } + + if (front) + ++col_index; + + req_tmp->first = front ? col->resume_points[col_index] : + col->break_points[col_index]; + } + + if (col_index_flag == COLLECTION_BEFORE) { + /* + * This request starts before the beginning of interval + * "col_index". + */ + if (!front && (col_index == 0 || + req_end > col->break_points[col_index - 1])) { + /* + * No overlap between the collection and this back + * request. Do nothing. + */ + goto do_nothing; + } else if (front && req_end < col->resume_points[col_index]) { + /* + * No overlap between the collection and this front + * request. Do nothing. + */ + goto do_nothing; + } + + if (!front) + --col_index; + + req_tmp->first = front ? col->resume_points[col_index] : + col->break_points[col_index]; + } + + *end = req_end; + + return col_index; + +do_nothing: + kshark_free_entry_request(*req); + *req = NULL; + *end = KS_EMPTY_BIN; + + return KS_EMPTY_BIN; +} + +/* + * This function uses the intervals of the Data collection to transform the + * inputted single data request into a list of data requests. The new list of + * requests will ignore the data outside of the intervals of the collection. + */ +static int +map_collection_back_request(const struct kshark_entry_collection *col, + struct kshark_entry_request **req) +{ + struct kshark_entry_request *req_tmp; + size_t req_first, req_end; + ssize_t col_index; + int req_count; + + col_index = map_collection_request_init(col, req, false, &req_end); + if (col_index == KS_EMPTY_BIN) + return 0; + + /* + * Now loop over the intervals of the collection going backwards till + * the end of the inputted request and create a separate request for + * each of these intervals. 
+ */ + req_tmp = *req; + req_count = 1; + while (col_index >= 0 && req_end <= col->break_points[col_index]) { + if (req_end >= col->resume_points[col_index]) { + /* + * The last entry of the original request is inside + * the "col_index" collection interval. Close the + * collection request here and return. + */ + req_tmp->n = req_tmp->first - req_end + 1; + break; + } + + /* + * The last entry of the original request is outside of the + * "col_index" interval. Close the collection request at the + * end of this interval and move to the next one. Try to make + * another request there. + */ + req_tmp->n = req_tmp->first - + col->resume_points[col_index] + 1; + + --col_index; + + if (req_end > col->break_points[col_index]) { + /* + * The last entry of the original request comes before + * the end of the next collection interval. Stop here. + */ + break; + } + + if (col_index > 0) { + /* Make a new request. */ + req_first = col->break_points[col_index]; + + req_tmp->next = + kshark_entry_request_alloc(req_first, + 0, + req_tmp->cond, + req_tmp->val, + req_tmp->vis_only, + req_tmp->vis_mask); + + if (!req_tmp->next) + goto fail; + + req_tmp = req_tmp->next; + ++req_count; + } + } + + return req_count; + +fail: + fprintf(stderr, "Failed to allocate memory for "); + fprintf(stderr, "Collection data request.\n"); + kshark_free_entry_request(*req); + *req = NULL; + return -ENOMEM; +} + +/* + * This function uses the intervals of the Data collection to transform the + * inputted single data request into a list of data requests. The new list of + * requests will ignore the data outside of the intervals of the collection. 
+ */ +static int +map_collection_front_request(const struct kshark_entry_collection *col, + struct kshark_entry_request **req) +{ + struct kshark_entry_request *req_tmp; + size_t req_first, req_end; + ssize_t col_index; + int req_count; + + col_index = map_collection_request_init(col, req, true, &req_end); + if (col_index == KS_EMPTY_BIN) + return 0; + + /* + * Now loop over the intervals of the collection going forwards till + * the end of the inputted request and create a separate request for + * each of these intervals. + */ + req_count = 1; + req_tmp = *req; + while (col_index < col->size && + req_end >= col->resume_points[col_index]) { + if (req_end <= col->break_points[col_index]) { + /* + * The last entry of the original request is inside + * the "col_index" collection interval. + * Close the collection request here and return. + */ + req_tmp->n = req_end - req_tmp->first + 1; + break; + } + + /* + * The last entry of the original request is outside this + * collection interval (col_index). Close the collection + * request at the end of the interval and move to the next + * interval. Try to make another request there. + */ + req_tmp->n = col->break_points[col_index] - + req_tmp->first + 1; + + ++col_index; + + if (req_end < col->resume_points[col_index]) { + /* + * The last entry of the original request comes before + * the beginning of the next collection interval. + * Stop here. + */ + break; + } + + if (col_index < col->size) { + /* Make a new request. 
*/ + req_first = col->resume_points[col_index]; + + req_tmp->next = + kshark_entry_request_alloc(req_first, + 0, + req_tmp->cond, + req_tmp->val, + req_tmp->vis_only, + req_tmp->vis_mask); + + if (!req_tmp->next) + goto fail; + + req_tmp = req_tmp->next; + ++req_count; + } + } + + return req_count; + +fail: + fprintf(stderr, "Failed to allocate memory for "); + fprintf(stderr, "Collection data request.\n"); + kshark_free_entry_request(*req); + *req = NULL; + return -ENOMEM; +} + +/** + * @brief Search for an entry satisfying the requirements of a given Data + * request. Start from the position provided by the request and go + * searching in the direction of the increasing timestamps (front). + * The search is performed only inside the intervals, defined by + * the data collection. + * + * @param req: Input location for a single Data request. The inputted request + * will be transformed into a list of requests. This new list of + * requests will ignore the data outside of the intervals of the + * collection. + * @param data: Input location for the trace data. + * @param col: Input location for the Data collection. + * @param index: Optional output location for the index of the returned + * entry inside the array. + * + * @returns Pointer to the first entry satisfying the matching condition on + * success, or NULL on failure. + * In the special case when some entries, satisfying the Matching + * condition function have been found, but all these entries have + * been discarded because of the visibility criteria (filtered + * entries), the function returns a pointer to a special + * "Dummy entry". 
+ */ +const struct kshark_entry * +kshark_get_collection_entry_front(struct kshark_entry_request **req, + struct kshark_entry **data, + const struct kshark_entry_collection *col, + ssize_t *index) +{ + const struct kshark_entry *entry = NULL; + int req_count; + + /* + * Use the intervals of the Data collection to redefine the data + * request in a way which will ignore the data outside of the + * intervals of the collection. + */ + req_count = map_collection_front_request(col, req); + + if (index && !req_count) + *index = KS_EMPTY_BIN; + + /* + * Loop over the list of redefined requests and search until you find + * the first matching entry. + */ + while (*req) { + entry = kshark_get_entry_front(*req, data, index); + if (entry) + break; + + *req = (*req)->next; + } + + return entry; +} + +/** + * @brief Search for an entry satisfying the requirements of a given Data + * request. Start from the position provided by the request and go + * searching in the direction of the decreasing timestamps (back). + * The search is performed only inside the intervals, defined by + * the data collection. + * + * @param req: Input location for a single Data request. The inputted request + * will be transformed into a list of requests. This new list of + * requests will ignore the data outside of the intervals of the + * collection. + * @param data: Input location for the trace data. + * @param col: Input location for the Data collection. + * @param index: Optional output location for the index of the returned + * entry inside the array. + * + * @returns Pointer to the first entry satisfying the matching condition on + * success, or NULL on failure. + * In the special case when some entries, satisfying the Matching + * condition function have been found, but all these entries have + * been discarded because of the visibility criteria (filtered + * entries), the function returns a pointer to a special + * "Dummy entry". 
+ */ +const struct kshark_entry * +kshark_get_collection_entry_back(struct kshark_entry_request **req, + struct kshark_entry **data, + const struct kshark_entry_collection *col, + ssize_t *index) +{ + const struct kshark_entry *entry = NULL; + int req_count; + + /* + * Use the intervals of the Data collection to redefine the data + * request in a way which will ignore the data outside of the + * intervals of the collection. + */ + req_count = map_collection_back_request(col, req); + if (index && !req_count) + *index = KS_EMPTY_BIN; + + /* + * Loop over the list of redefined requests and search until you find + * the first matching entry. + */ + while (*req) { + entry = kshark_get_entry_back(*req, data, index); + if (entry) + break; + + *req = (*req)->next; + } + + return entry; +} + +/** + * @brief Search the list of Data collections and find the collection defined + * with a given Matching condition function and value. + * + * @param col: Input location for the Data collection list. + * @param cond: Matching condition function. + * @param val: Matching condition value, used by the Matching condition + * function. + * + * @returns Pointer to a Data collection on success, or NULL on failure. + */ +struct kshark_entry_collection * +kshark_find_data_collection(struct kshark_entry_collection *col, + matching_condition_func cond, + int val) +{ + while (col) { + if (col->cond == cond && col->val == val) + return col; + + col = col->next; + } + + return NULL; +} + +/** + * @brief Clear all data intervals of the given Data collection. + * + * @param col: Input location for the Data collection. 
+ */ +void kshark_reset_data_collection(struct kshark_entry_collection *col) +{ + free(col->resume_points); + col->resume_points = NULL; + + free(col->break_points); + col->break_points = NULL; + + col->size = 0; +} + +static void kshark_free_data_collection(struct kshark_entry_collection *col) +{ + free(col->resume_points); + free(col->break_points); + free(col); +} + +/** + * @brief Allocate and process a data collection, defined with a given Matching + * condition function and value. Add this collection to the list of + * collections used by the session. + * + * @param kshark_ctx: Input location for the session context pointer. + * @param data: Input location for the trace data. + * @param n_rows: The size of the inputted data. + * @param cond: Matching condition function for the collection to be + * registered. + * @param val: Matching condition value for the collection to be registered. + * @param margin: The size of the additional (margin) data which does not + * satisfy the matching condition, but is added at the + * beginning and at the end of each interval of the collection + * as well as at the beginning and at the end of the data-set. + * If "0", no margin data is added. + * + * @returns Pointer to the registered Data collection on success, or NULL + * on failure. + */ +struct kshark_entry_collection * +kshark_register_data_collection(struct kshark_context *kshark_ctx, + struct kshark_entry **data, + size_t n_rows, + matching_condition_func cond, + int val, + size_t margin) +{ + struct kshark_entry_collection *col; + + col = kshark_data_collection_alloc(kshark_ctx, data, + 0, n_rows, + cond, val, + margin); + + if (col) { + col->next = kshark_ctx->collections; + kshark_ctx->collections = col; + } + + return col; +} + +/** + * @brief Search the list of Data collections for a collection defined + * with a given Matching condition function and value. If such a + * collection exists, unregister (remove and free) this collection + * from the list. 
+ * + * @param col: Input location for the Data collection list. + * @param cond: Matching condition function of the collection to be + * unregistered. + * + * @param val: Matching condition value of the collection to be unregistered. + */ +void kshark_unregister_data_collection(struct kshark_entry_collection **col, + matching_condition_func cond, + int val) +{ + struct kshark_entry_collection **last = col; + struct kshark_entry_collection *list; + + for (list = *col; list; list = list->next) { + if (list->cond == cond && list->val == val) { + *last = list->next; + kshark_free_data_collection(list); + return; + } + + last = &list->next; + } +} + +/** + * @brief Free all Data collections in a given list. + * + * @param col: Input location for the Data collection list. + */ +void kshark_free_collection_list(struct kshark_entry_collection *col) +{ + struct kshark_entry_collection *last; + + while (col) { + last = col; + col = col->next; + kshark_free_data_collection(last); + } +} diff --git a/kernel-shark-qt/src/libkshark.c b/kernel-shark-qt/src/libkshark.c index 879946b..58287e9 100644 --- a/kernel-shark-qt/src/libkshark.c +++ b/kernel-shark-qt/src/libkshark.c @@ -1067,6 +1067,7 @@ kshark_entry_request_alloc(size_t first, size_t n, return NULL; } + req->next = NULL; req->first = first; req->n = n; req->cond = cond; @@ -1077,6 +1078,21 @@ kshark_entry_request_alloc(size_t first, size_t n, return req; } +/** + * @brief Free all Data requests in a given list. + * @param req: Input location for the Data request list. + */ +void kshark_free_entry_request(struct kshark_entry_request *req) +{ + struct kshark_entry_request *last; + + while (req) { + last = req; + req = req->next; + free(last); + } +} + /** Dummy entry, used to indicate the existence of filtered entries. 
*/ const struct kshark_entry dummy_entry = { .next = NULL, diff --git a/kernel-shark-qt/src/libkshark.h b/kernel-shark-qt/src/libkshark.h index b0e1bc8..ff09da3 100644 --- a/kernel-shark-qt/src/libkshark.h +++ b/kernel-shark-qt/src/libkshark.h @@ -115,6 +115,9 @@ struct kshark_context { * the event. */ struct event_filter *advanced_event_filter; + + /** List of Data collections. */ + struct kshark_entry_collection *collections; }; bool kshark_instance(struct kshark_context **kshark_ctx); @@ -245,6 +248,9 @@ typedef bool (matching_condition_func)(struct kshark_context*, * kshark_entry. */ struct kshark_entry_request { + /** Pointer to the next Data request. */ + struct kshark_entry_request *next; + /** * Array index specifying the position inside the array from where * the search starts. @@ -277,6 +283,8 @@ kshark_entry_request_alloc(size_t first, size_t n, matching_condition_func cond, int val, bool vis_only, int vis_mask); +void kshark_free_entry_request(struct kshark_entry_request *req); + const struct kshark_entry * kshark_get_entry_front(const struct kshark_entry_request *req, struct kshark_entry **data, @@ -287,6 +295,78 @@ kshark_get_entry_back(const struct kshark_entry_request *req, struct kshark_entry **data, ssize_t *index); +/** + * Data collections are used to optimize the search for an entry having an + * abstract property, defined by a Matching condition function and a value. + * When a collection is processed, the data which is relevant for the + * collection is enclosed in "Data intervals", defined by pairs of "Resume" and + * "Break" points. It is guaranteed that the data outside of the intervals + * contains no entries satisfying the abstract matching condition. However, the + * intervals may (will) contain data that do not satisfy the matching condition. + * Once defined, the Data collection can be used when searching for an entry + * having the same (or related) abstract property. 
The collection allows us to + * ignore the irrelevant data, thus eliminating the linear worst-case time + * complexity of the search. + */ +struct kshark_entry_collection { + /** Pointer to the next Data collection. */ + struct kshark_entry_collection *next; + + /** Matching condition function, used to define the collections. */ + matching_condition_func *cond; + + /** + * Matching condition value, used by the Matching condition function + * to define the collections. + */ + int val; + + /** + * Array of indexes defining the beginning of each individual data + * interval. + */ + size_t *resume_points; + + /** + * Array of indexes defining the end of each individual data interval. + */ + size_t *break_points; + + /** Number of data intervals in this collection. */ + size_t size; +}; + +struct kshark_entry_collection * +kshark_register_data_collection(struct kshark_context *kshark_ctx, + struct kshark_entry **data, size_t n_rows, + matching_condition_func cond, int val, + size_t margin); + +void kshark_unregister_data_collection(struct kshark_entry_collection **col, + matching_condition_func cond, + int val); + +struct kshark_entry_collection * +kshark_find_data_collection(struct kshark_entry_collection *col, + matching_condition_func cond, + int val); + +void kshark_reset_data_collection(struct kshark_entry_collection *col); + +void kshark_free_collection_list(struct kshark_entry_collection *col); + +const struct kshark_entry * +kshark_get_collection_entry_front(struct kshark_entry_request **req, + struct kshark_entry **data, + const struct kshark_entry_collection *col, + ssize_t *index); + +const struct kshark_entry * +kshark_get_collection_entry_back(struct kshark_entry_request **req, + struct kshark_entry **data, + const struct kshark_entry_collection *col, + ssize_t *index); + #ifdef __cplusplus } #endif From patchwork Mon Aug 6 16:19:26 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yordan 
Karadzhov X-Patchwork-Id: 10758819 Return-Path: Received: from mail-wr1-f66.google.com ([209.85.221.66]:47079 "EHLO mail-wr1-f66.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730876AbeHFSaE (ORCPT ); Mon, 6 Aug 2018 14:30:04 -0400 Received: by mail-wr1-f66.google.com with SMTP id h14-v6so12893714wrw.13 for ; Mon, 06 Aug 2018 09:20:13 -0700 (PDT) From: "Yordan Karadzhov (VMware)" To: rostedt@goodmis.org Cc: linux-trace-devel@vger.kernel.org, "Yordan Karadzhov (VMware)" Subject: [PATCH v4 5/6] kernel-shark-qt: Make the Vis. model use Data collections. Date: Mon, 6 Aug 2018 19:19:26 +0300 Message-Id: <20180806161927.11206-6-y.karadz@gmail.com> In-Reply-To: <20180806161927.11206-1-y.karadz@gmail.com> References: <20180806161927.11206-1-y.karadz@gmail.com> Sender: linux-trace-devel-owner@vger.kernel.org List-ID: Content-Length: 11556 This patch optimizes the search instruments of the model by adding the possibility of using Data collections. Signed-off-by: Yordan Karadzhov (VMware) --- kernel-shark-qt/examples/datahisto.c | 4 ++ kernel-shark-qt/src/libkshark-model.c | 57 +++++++++++++++++++++++---- kernel-shark-qt/src/libkshark-model.h | 14 ++++++- 3 files changed, 65 insertions(+), 10 deletions(-) diff --git a/kernel-shark-qt/examples/datahisto.c b/kernel-shark-qt/examples/datahisto.c index 3f19870..99ac495 100644 --- a/kernel-shark-qt/examples/datahisto.c +++ b/kernel-shark-qt/examples/datahisto.c @@ -27,18 +27,22 @@ void dump_bin(struct kshark_trace_histo *histo, int bin, if (strcmp(type, "cpu") == 0) { e_front = ksmodel_get_entry_front(histo, bin, true, kshark_match_cpu, val, + NULL, &i_front); e_back = ksmodel_get_entry_back(histo, bin, true, kshark_match_cpu, val, + NULL, &i_back); } else if (strcmp(type, "task") == 0) { e_front = ksmodel_get_entry_front(histo, bin, true, kshark_match_pid, val, + NULL, &i_front); e_back = ksmodel_get_entry_back(histo, bin, true, kshark_match_pid, val, + NULL, &i_back); } else { i_front = 
ksmodel_first_index_at_bin(histo, bin); diff --git a/kernel-shark-qt/src/libkshark-model.c b/kernel-shark-qt/src/libkshark-model.c index bedeb69..3138257 100644 --- a/kernel-shark-qt/src/libkshark-model.c +++ b/kernel-shark-qt/src/libkshark-model.c @@ -869,6 +869,7 @@ ssize_t ksmodel_first_index_at_pid(struct kshark_trace_histo *histo, * @param func: Matching condition function. * @param val: Matching condition value, used by the Matching condition * function. + * @param col: Optional input location for Data collection. * @param index: Optional output location for the index of the requested * entry inside the array. * @@ -878,6 +879,7 @@ const struct kshark_entry * ksmodel_get_entry_front(struct kshark_trace_histo *histo, int bin, bool vis_only, matching_condition_func func, int val, + struct kshark_entry_collection *col, ssize_t *index) { struct kshark_entry_request *req; @@ -892,7 +894,12 @@ ksmodel_get_entry_front(struct kshark_trace_histo *histo, if (!req) return NULL; - entry = kshark_get_entry_front(req, histo->data, index); + if (col && col->size) + entry = kshark_get_collection_entry_front(&req, histo->data, + col, index); + else + entry = kshark_get_entry_front(req, histo->data, index); + free(req); return entry; @@ -909,6 +916,7 @@ ksmodel_get_entry_front(struct kshark_trace_histo *histo, * @param func: Matching condition function. * @param val: Matching condition value, used by the Matching condition * function. + * @param col: Optional input location for Data collection. * @param index: Optional output location for the index of the requested * entry inside the array. 
* @@ -918,6 +926,7 @@ const struct kshark_entry * ksmodel_get_entry_back(struct kshark_trace_histo *histo, int bin, bool vis_only, matching_condition_func func, int val, + struct kshark_entry_collection *col, ssize_t *index) { struct kshark_entry_request *req; @@ -932,7 +941,12 @@ ksmodel_get_entry_back(struct kshark_trace_histo *histo, if (!req) return NULL; - entry = kshark_get_entry_back(req, histo->data, index); + if (col && col->size) + entry = kshark_get_collection_entry_back(&req, histo->data, + col, index); + else + entry = kshark_get_entry_back(req, histo->data, index); + free(req); return entry; @@ -963,6 +977,7 @@ static int ksmodel_get_entry_pid(const struct kshark_entry *entry) * @param bin: Bin id. * @param cpu: CPU Id. * @param vis_only: If true, a visible entry is requested. + * @param col: Optional input location for Data collection. * @param index: Optional output location for the index of the requested * entry inside the array. * @@ -971,6 +986,7 @@ static int ksmodel_get_entry_pid(const struct kshark_entry *entry) */ int ksmodel_get_pid_front(struct kshark_trace_histo *histo, int bin, int cpu, bool vis_only, + struct kshark_entry_collection *col, ssize_t *index) { const struct kshark_entry *entry; @@ -980,7 +996,8 @@ int ksmodel_get_pid_front(struct kshark_trace_histo *histo, entry = ksmodel_get_entry_front(histo, bin, vis_only, kshark_match_cpu, cpu, - index); + col, index); + return ksmodel_get_entry_pid(entry); } @@ -993,6 +1010,7 @@ int ksmodel_get_pid_front(struct kshark_trace_histo *histo, * @param bin: Bin id. * @param cpu: CPU Id. * @param vis_only: If true, a visible entry is requested. + * @param col: Optional input location for Data collection. * @param index: Optional output location for the index of the requested * entry inside the array. 
* @@ -1001,6 +1019,7 @@ int ksmodel_get_pid_front(struct kshark_trace_histo *histo, */ int ksmodel_get_pid_back(struct kshark_trace_histo *histo, int bin, int cpu, bool vis_only, + struct kshark_entry_collection *col, ssize_t *index) { const struct kshark_entry *entry; @@ -1010,7 +1029,7 @@ int ksmodel_get_pid_back(struct kshark_trace_histo *histo, entry = ksmodel_get_entry_back(histo, bin, vis_only, kshark_match_cpu, cpu, - index); + col, index); return ksmodel_get_entry_pid(entry); } @@ -1040,6 +1059,7 @@ static int ksmodel_get_entry_cpu(const struct kshark_entry *entry) * @param bin: Bin id. * @param pid: Process Id. * @param vis_only: If true, a visible entry is requested. + * @param col: Optional input location for Data collection. * @param index: Optional output location for the index of the requested * entry inside the array. * @@ -1048,6 +1068,7 @@ static int ksmodel_get_entry_cpu(const struct kshark_entry *entry) */ int ksmodel_get_cpu_front(struct kshark_trace_histo *histo, int bin, int pid, bool vis_only, + struct kshark_entry_collection *col, ssize_t *index) { const struct kshark_entry *entry; @@ -1057,6 +1078,7 @@ int ksmodel_get_cpu_front(struct kshark_trace_histo *histo, entry = ksmodel_get_entry_front(histo, bin, vis_only, kshark_match_pid, pid, + col, index); return ksmodel_get_entry_cpu(entry); } @@ -1070,6 +1092,7 @@ int ksmodel_get_cpu_front(struct kshark_trace_histo *histo, * @param bin: Bin id. * @param pid: Process Id. * @param vis_only: If true, a visible entry is requested. + * @param col: Optional input location for Data collection. * @param index: Optional output location for the index of the requested * entry inside the array. 
* @@ -1078,6 +1101,7 @@ int ksmodel_get_cpu_front(struct kshark_trace_histo *histo, */ int ksmodel_get_cpu_back(struct kshark_trace_histo *histo, int bin, int pid, bool vis_only, + struct kshark_entry_collection *col, ssize_t *index) { const struct kshark_entry *entry; @@ -1087,6 +1111,7 @@ int ksmodel_get_cpu_back(struct kshark_trace_histo *histo, entry = ksmodel_get_entry_back(histo, bin, vis_only, kshark_match_pid, pid, + col, index); return ksmodel_get_entry_cpu(entry); @@ -1098,13 +1123,16 @@ int ksmodel_get_cpu_back(struct kshark_trace_histo *histo, * @param histo: Input location for the model descriptor. * @param bin: Bin id. * @param cpu: Cpu Id. + * @param col: Optional input location for Data collection. * @param index: Optional output location for the index of the requested * entry inside the array. * * @returns True, if a visible entry exists in this bin. Else false. */ bool ksmodel_cpu_visible_event_exist(struct kshark_trace_histo *histo, - int bin, int cpu, ssize_t *index) + int bin, int cpu, + struct kshark_entry_collection *col, + ssize_t *index) { struct kshark_entry_request *req; const struct kshark_entry *entry; @@ -1126,7 +1154,12 @@ bool ksmodel_cpu_visible_event_exist(struct kshark_trace_histo *histo, */ req->vis_mask = KS_EVENT_VIEW_FILTER_MASK; - entry = kshark_get_entry_front(req, histo->data, index); + if (col && col->size) + entry = kshark_get_collection_entry_front(&req, histo->data, + col, index); + else + entry = kshark_get_entry_front(req, histo->data, index); + free(req); if (!entry || !entry->visible) { @@ -1143,13 +1176,16 @@ bool ksmodel_cpu_visible_event_exist(struct kshark_trace_histo *histo, * @param histo: Input location for the model descriptor. * @param bin: Bin id. * @param pid: Process Id of the task. + * @param col: Optional input location for Data collection. * @param index: Optional output location for the index of the requested * entry inside the array. * * @returns True, if a visible entry exists in this bin. 
Else false. */ bool ksmodel_task_visible_event_exist(struct kshark_trace_histo *histo, - int bin, int pid, ssize_t *index) + int bin, int pid, + struct kshark_entry_collection *col, + ssize_t *index) { struct kshark_entry_request *req; const struct kshark_entry *entry; @@ -1171,7 +1207,12 @@ bool ksmodel_task_visible_event_exist(struct kshark_trace_histo *histo, */ req->vis_mask = KS_EVENT_VIEW_FILTER_MASK; - entry = kshark_get_entry_front(req, histo->data, index); + if (col && col->size) + entry = kshark_get_collection_entry_front(&req, histo->data, + col, index); + else + entry = kshark_get_entry_front(req, histo->data, index); + free(req); if (!entry || !entry->visible) { diff --git a/kernel-shark-qt/src/libkshark-model.h b/kernel-shark-qt/src/libkshark-model.h index 9c80458..1cf68da 100644 --- a/kernel-shark-qt/src/libkshark-model.h +++ b/kernel-shark-qt/src/libkshark-model.h @@ -108,35 +108,45 @@ const struct kshark_entry * ksmodel_get_entry_front(struct kshark_trace_histo *histo, int bin, bool vis_only, matching_condition_func func, int val, + struct kshark_entry_collection *col, ssize_t *index); const struct kshark_entry * ksmodel_get_entry_back(struct kshark_trace_histo *histo, int bin, bool vis_only, matching_condition_func func, int val, + struct kshark_entry_collection *col, ssize_t *index); int ksmodel_get_pid_front(struct kshark_trace_histo *histo, int bin, int cpu, bool vis_only, + struct kshark_entry_collection *col, ssize_t *index); int ksmodel_get_pid_back(struct kshark_trace_histo *histo, int bin, int cpu, bool vis_only, + struct kshark_entry_collection *col, ssize_t *index); int ksmodel_get_cpu_front(struct kshark_trace_histo *histo, int bin, int pid, bool vis_only, + struct kshark_entry_collection *col, ssize_t *index); int ksmodel_get_cpu_back(struct kshark_trace_histo *histo, int bin, int pid, bool vis_only, + struct kshark_entry_collection *col, ssize_t *index); bool ksmodel_cpu_visible_event_exist(struct kshark_trace_histo *histo, - int bin, 
int cpu, ssize_t *index); + int bin, int cpu, + struct kshark_entry_collection *col, + ssize_t *index); bool ksmodel_task_visible_event_exist(struct kshark_trace_histo *histo, - int bin, int pid, ssize_t *index); + int bin, int pid, + struct kshark_entry_collection *col, + ssize_t *index); static inline double ksmodel_bin_time(struct kshark_trace_histo *histo, int bin) From patchwork Mon Aug 6 16:19:27 2018 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Yordan Karadzhov X-Patchwork-Id: 10758821 Return-Path: Received: from mail-wr1-f68.google.com ([209.85.221.68]:44161 "EHLO mail-wr1-f68.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1730876AbeHFSaF (ORCPT ); Mon, 6 Aug 2018 14:30:05 -0400 Received: by mail-wr1-f68.google.com with SMTP id r16-v6so12901375wrt.11 for ; Mon, 06 Aug 2018 09:20:15 -0700 (PDT) From: "Yordan Karadzhov (VMware)" To: rostedt@goodmis.org Cc: linux-trace-devel@vger.kernel.org, "Yordan Karadzhov (VMware)" Subject: [PATCH v4 6/6] kernel-shark-qt: Changed the KernelShark version identifier. Date: Mon, 6 Aug 2018 19:19:27 +0300 Message-Id: <20180806161927.11206-7-y.karadz@gmail.com> In-Reply-To: <20180806161927.11206-1-y.karadz@gmail.com> References: <20180806161927.11206-1-y.karadz@gmail.com> Sender: linux-trace-devel-owner@vger.kernel.org List-ID: (Patch Id) ++ Signed-off-by: Yordan Karadzhov (VMware) --- kernel-shark-qt/CMakeLists.txt | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel-shark-qt/CMakeLists.txt b/kernel-shark-qt/CMakeLists.txt index 9ff12a1..7a802cd 100644 --- a/kernel-shark-qt/CMakeLists.txt +++ b/kernel-shark-qt/CMakeLists.txt @@ -6,7 +6,7 @@ project(kernel-shark-qt) set(KS_VERSION_MAJOR 0) set(KS_VERSION_MINOR 7) -set(KS_VERSION_PATCH 0) +set(KS_VERSION_PATCH 1) set(KS_VERSION_STRING ${KS_VERSION_MAJOR}.${KS_VERSION_MINOR}.${KS_VERSION_PATCH}) message("\n project: Kernel Shark: (version: ${KS_VERSION_STRING})\n")