| Message ID | alpine.LFD.2.02.1104191117290.22119@i5.linux-foundation.org (mailing list archive) |
|---|---|
| State | Mainlined, archived |
diff --git a/pre-process.c b/pre-process.c
index 603cc00c999f..6d12f173146d 100644
--- a/pre-process.c
+++ b/pre-process.c
@@ -655,10 +655,12 @@ static const char *token_name_sequence(struct token *token, int endop, struct to
 static int already_tokenized(const char *path)
 {
-	int i;
-	struct stream *s = input_streams;
+	int stream, next;
+
+	for (stream = *hash_stream(path); stream >= 0 ; stream = next) {
+		struct stream *s = input_streams + stream;
 
-	for (i = input_stream_nr; --i >= 0; s++) {
+		next = s->next_stream;
 		if (s->constant != CONSTANT_FILE_YES)
 			continue;
 		if (strcmp(path, s->name))
This replaces the "loop over all streams" with a simple hash lookup. It
makes the cost of checking for already tokenized streams basically go
away (it could be up to 5% of CPU time, almost entirely due to the
"strcmp()" of the name).

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
---
That "up to 5%" is probably debatable, and will depend on just how many
includes you have etc etc. And on the CPU and library issues. But 3-4%
is definitely the case for my kernel C=2 build on my current machine.
So it's real, and worth it.

 pre-process.c |    8 +++++---
 1 files changed, 5 insertions(+), 3 deletions(-)