
Fix `make -jN trace-cmd gui`

Message ID 160564780533.18208.2518938894299815863.stgit@toolbox-dario-user-work (mailing list archive)
State Accepted
Commit 6a3494dbae748d4649cd21a818aa226b1073015b
Series Fix `make -jN trace-cmd gui`

Commit Message

Dario Faggioli Nov. 17, 2020, 9:16 p.m. UTC
Doing `make -j8 trace-cmd gui` fails like this:

  CMake Error at build/FindTraceCmd.cmake:110 (MESSAGE):

    Could not find libtraceevent!

  Call Stack (most recent call first):
    CMakeLists.txt:24 (include)

Or like this:

  CMake Error at build/FindTraceCmd.cmake:64 (MESSAGE):

    Could not find trace-cmd!

  Call Stack (most recent call first):
    CMakeLists.txt:20 (include)

That's because we need `trace-cmd` to have finished
building before starting to build `gui`.

Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
---
 Makefile |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

Comments

Steven Rostedt Nov. 20, 2020, 3:09 a.m. UTC | #1
On Tue, 17 Nov 2020 21:16:45 +0000
Dario Faggioli <dfaggioli@suse.com> wrote:

> Doing `make -j8 trace-cmd gui` fails like this:
> 
>   CMake Error at build/FindTraceCmd.cmake:110 (MESSAGE):
> 
>     Could not find libtraceevent!
> 
>   Call Stack (most recent call first):
>     CMakeLists.txt:24 (include)
> 
> Or like this:
> 
>   CMake Error at build/FindTraceCmd.cmake:64 (MESSAGE):
> 
>     Could not find trace-cmd!
> 
>   Call Stack (most recent call first):
>     CMakeLists.txt:20 (include)
> 
> That's because we need `trace-cmd` to have finished
> building before starting to build `gui`.
> 
> Signed-off-by: Dario Faggioli <dfaggioli@suse.com>

Thanks Dario,

I applied it. But note, we will be releasing kernelshark soon in its
own repository (a new and improved version!)

Stay tuned!

-- Steve
Dario Faggioli Nov. 20, 2020, 12:06 p.m. UTC | #2
On Thu, 2020-11-19 at 22:09 -0500, Steven Rostedt wrote:
> On Tue, 17 Nov 2020 21:16:45 +0000
> Dario Faggioli <dfaggioli@suse.com> wrote:
> > 
> > Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
> 
> Thanks Dario,
> 
Hi!

> I applied it. 
>
Great! Thanks... This will help me, because I want to do:

  %build
  make %{?_smp_mflags} prefix=%{_prefix} trace-cmd gui

in the openSUSE package for KernelShark (see the spec file here:
https://build.opensuse.org/package/show/openSUSE:Factory/kernelshark )
which is now packaging version 1.2

> But note, we will be releasing kernelshark soon in its
> own repository (a new and improved version!)
> 
Yep, sure! I followed the talks you've been giving about it at
events, and I even played a little with what you have here:

https://github.com/yordan-karadzhov/kernel-shark-2.alpha

The host-guest tracing part, as I think you can guess.

> Stay tuned!
> 
Looking forward to it!

Thanks again and Regards
Yordan Karadzhov Nov. 20, 2020, 12:32 p.m. UTC | #3
On 20.11.20 г. 14:06 ч., Dario Faggioli wrote:
> On Thu, 2020-11-19 at 22:09 -0500, Steven Rostedt wrote:
>> On Tue, 17 Nov 2020 21:16:45 +0000
>> Dario Faggioli <dfaggioli@suse.com> wrote:
>>>
>>> Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
>>
>> Thanks Dario,
>>
> Hi!
> 
>> I applied it.
>>
> Great! Thanks... This will help me, because I want to do:
> 
>    %build
>    make %{?_smp_mflags} prefix=%{_prefix} trace-cmd gui
> 
> in the openSUSE package for KernelShark (see the spec file here:
> https://build.opensuse.org/package/show/openSUSE:Factory/kernelshark )
> which is now packaging version 1.2
> 
>> But note, we will be releasing kernelshark soon in its
>> own repository (a new and improved version!)
>>
> Yep, sure! I followed the talks you've been giving about it at
> events, and I even played a little with what you have here:
> 
> https://github.com/yordan-karadzhov/kernel-shark-2.alpha
> 
> The host-guest tracing part, as I think you can guess.

Ciao Dario,
I am very happy to hear that. The timestamp synchronization patches for 
trace-cmd are almost ready to go upstream.

And we are currently reviewing a beta version of KernelShark_2 that will 
include the guest/host visualization.
As Steven said: stay tuned!

cheers,
Yordan

> 
>> Stay tuned!
>>
> Looking forward to it!
> 
> Thanks again and Regards
>
Dario Faggioli Nov. 20, 2020, 1:43 p.m. UTC | #4
On Fri, 2020-11-20 at 14:32 +0200, Yordan Karadzhov (VMware) wrote:
> On 20.11.20 г. 14:06 ч., Dario Faggioli wrote:
> > > 
> > Yep, sure! I followed the talks you've been giving about it at
> > events, and I even played a little with what you have here:
> > 
> > https://github.com/yordan-karadzhov/kernel-shark-2.alpha
> > 
> > The host-guest tracing part, as I think you can guess.
> 
> Ciao Dario,
>
Hey!

> I am very happy to hear that. The timestamp synchronization patches
> for 
> trace-cmd are almost ready to go upstream.
> 
Yes, I tried those patches!

In fact, now that I have you here, do you mind if I change the subject
and ask a quick question about them?

So, you often say that "the accuracy of the synchronization protocol is
XX ms". Now, I guess that means that an event in the guest and the
corresponding event in the host (or vice versa) are XX ms apart. And
that's even after the synchronization of the two traces, is that right?

Question is, how do you measure that? Sure, I can look manually for an
occurrence of the pattern that I described above: i.e., an event in the
guest, then the corresponding one in the host and compute the
difference between the timestamps.

But do you have a way to do so automatically, or with a script/program,
etc?

I saw that the series included a patch meant for debugging and
profiling PTP, but even with that one applied and having it generate
the graphs, I have not been able to get that info without manual
inspection.

> And we are currently reviewing a beta version of KernelShark_2 that
> will 
> include the guest/host visualization.
> As Steven said: stay tuned!
> 
You bet I will. :-P

Thanks and Regards
Steven Rostedt Nov. 20, 2020, 2:08 p.m. UTC | #5
On Fri, 20 Nov 2020 14:43:21 +0100
Dario Faggioli <dfaggioli@suse.com> wrote:

> On Fri, 2020-11-20 at 14:32 +0200, Yordan Karadzhov (VMware) wrote:
> > On 20.11.20 г. 14:06 ч., Dario Faggioli wrote:  
> > > >   
> > > Yep, sure! I followed the talks you've been giving about it at
> > > events, and I even played a little with what you have here:
> > > 
> > > https://github.com/yordan-karadzhov/kernel-shark-2.alpha
> > > 
> > > The host-guest tracing part, as I think you can guess.  
> > 
> > Ciao Dario,
> >  
> Hey!
> 
> > I am very happy to hear that. The timestamp synchronization patches
> > for 
> > trace-cmd are almost ready to go upstream.
> >   
> Yes, I tried those patches!
> 
> In fact, now that I have you here, do you mind if I change the subject
> and ask a quick question about them?
> 
> So, you often say that "the accuracy of the synchronization protocol is
> XX ms". Now, I guess that means that an event in the guest and the

Note, we are usually microsecond (us) apart, not millisecond (ms) ;-)

> corresponding event in the host (or vice versa) are XX ms apart. And
> that's even after the synchronization of the two traces, is that right?

At plumbers we talked with Thomas Gleixner and he suggested ideas of how to
get to the actual shifts used in the hardware that should give us exact
timestamp offsets. We are currently working on that. But in the mean time,
the P2P is giving us somewhere between 5 and 10 us accuracy. And that's
simply because the jitter of the vsock connection (which is used for the
synchronization at start and end of the traces) has a 5 to 10 us jitter,
and it's not possible to be more accurate than the medium that is being
used.

> 
> Question is, how do you measure that? Sure, I can look manually for an
> occurrence of the pattern that I described above: i.e., an event in the
> guest, then the corresponding one in the host and compute the
> difference between the timestamps.

You mean, how we measure the accuracy? It's usually done by seeing when we
have events from the guest showing up when we should be in the host (it's
like seeing events from userspace when you are in the kernel).
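A possible sketch of such a scan (illustrative only, not trace-cmd or
kernelshark code; names and numbers are made up): take the host's
vCPU-running windows, e.g. from kvm_entry/kvm_exit pairs, and flag any
synchronized guest timestamp that falls outside all of them:

```python
import bisect

def misplaced_guest_events(run_intervals, guest_ts):
    """run_intervals: sorted list of (kvm_entry_ts, kvm_exit_ts) pairs
    from the host trace. guest_ts: synchronized guest event timestamps.
    Returns the guest timestamps that land outside every window in
    which the host was actually running the vCPU."""
    starts = [s for s, _ in run_intervals]
    bad = []
    for ts in guest_ts:
        # Find the last running-window that starts at or before ts.
        i = bisect.bisect_right(starts, ts) - 1
        if i < 0 or ts > run_intervals[i][1]:
            bad.append(ts)
    return bad

intervals = [(100, 150), (200, 260)]
print(misplaced_guest_events(intervals, [120, 180, 250, 300]))  # [180, 300]
```

The flagged residuals give a rough upper bound on the synchronization
error, in the spirit of the "guest events showing up in the host"
observation above.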

> 
> But do you have a way to do so automatically, or with a script/program,
> etc?

We have talked about having something scan to find cases where the guest
event happens in kernel and do some post processing shifting, but haven't
gotten there yet. If the hardware settings can work, then there will be no
need to do so.

-- Steve

> 
> I saw that the series included a patch meant for debugging and
> profiling PTP, but even with that one applied and having it generate
> the graphs, I have not been able to get that info without manual
> inspection.
> 
> > And we are currently reviewing a beta version of KernelShark_2 that
> > will 
> > include the guest/host visualization.
> > As Steven said: stay tuned!
> >   
> You bet I will. :-P
> 
> Thanks and Regards
Dario Faggioli Nov. 26, 2020, 12:15 a.m. UTC | #6
On Fri, 2020-11-20 at 09:08 -0500, Steven Rostedt wrote:
> On Fri, 20 Nov 2020 14:43:21 +0100
> Dario Faggioli <dfaggioli@suse.com> wrote:
> > 
> > So, you often say that "the accuracy of the synchronization
> > protocol is
> > XX ms". Now, I guess that means that an event in the guest and the
> 
> Note, we are usually microsecond (us) apart, not millisecond (ms) ;-)
> 
Ah, yes, sure... And sorry about that! I know it's us; I'm not sure how I
ended up writing ms. That would be quite terrible indeed! :-D

> > corresponding event in the host (or vice versa) are XX ms apart.
> > And
> > that's even after the synchronization of the two traces, is that
> > right?
> 
> At plumbers we talked with Thomas Gleixner and he suggested ideas of
> how to
> get to the actual shifts used in the hardware that should give us
> exact
> timestamp offsets. We are currently working on that. 
>
Yes, I remember that, I attended the BoF.

> But in the mean time,
> the P2P is giving us somewhere between 5 and 10 us accuracy. And
> that's
> simply because the jitter of the vsock connection (which is used for
> the
> synchronization at start and end of the traces) has a 5 to 10 us
> jitter,
> and it's not possible to be more accurate than the medium that is
> being
> used.
> 
Yes, with a student that I was helping with his thesis, we applied one
debug patch to trace-cmd that you have on this list, and we tried the
different synchronization strategies, frequency, etc.

> > Question is, how do you measure that? Sure, I can look manually for
> > an
> > occurrence of the pattern that I described above: i.e., an event in
> > the
> > guest, then the corresponding one in the host and compute the
> > difference between the timestamps.
> 
> You mean, how we measure the accuracy? It's usually done by seeing
> when we
> have events from the guest showing up when we should be in the host
> (it's
> like seeing events from userspace when you are in the kernel).
> 
Ok, makes sense. I need to try it first hand to make sure I've properly
understood it, though. I'll collect some more traces and look for
situations like these.

Thanks!

> > But do you have a way to do so automatically, or with a
> > script/program,
> > etc?
> 
> We have talked about having something scan to find cases where the
> guest
> event happens in kernel and do some post processing shifting, but
> haven't
> gotten there yet. 
>
Yep, as said, I was thinking of it as a way to measure how accurately
the traces are synched, but indeed once one has it, one could even use
it to actually synch them better.

But I understand how it's rather tricky.

> If the hardware settings can work, then there will be no
> need to do so.
> 
Indeed. Well, perhaps it could still be useful as a way to test/check
whether things are working? :-)

Regards
Tzvetomir Stoyanov (VMware) Nov. 26, 2020, 3:53 a.m. UTC | #7
On Thu, Nov 26, 2020 at 2:15 AM Dario Faggioli <dfaggioli@suse.com> wrote:
>
> On Fri, 2020-11-20 at 09:08 -0500, Steven Rostedt wrote:
> > On Fri, 20 Nov 2020 14:43:21 +0100
> > Dario Faggioli <dfaggioli@suse.com> wrote:
> > >
> > > So, you often say that "the accuracy of the synchronization
> > > protocol is
> > > XX ms". Now, I guess that means that an event in the guest and the
> >
> > Note, we are usually microsecond (us) apart, not millisecond (ms) ;-)
> >
> Ah, yes, sure... And sorry about that! I know it's us; I'm not sure how I
> ended up writing ms. That would be quite terrible indeed! :-D
>
> > > corresponding event in the host (or vice versa) are XX ms apart.
> > > And
> > > that's even after the synchronization of the two traces, is that
> > > right?
> >
> > At plumbers we talked with Thomas Gleixner and he suggested ideas of
> > how to
> > get to the actual shifts used in the hardware that should give us
> > exact
> > timestamp offsets. We are currently working on that.
> >
> Yes, I remember that, I attended the BoF.
>
> > But in the mean time,
> > the P2P is giving us somewhere between 5 and 10 us accuracy. And
> > that's
> > simply because the jitter of the vsock connection (which is used for
> > the
> > synchronization at start and end of the traces) has a 5 to 10 us
> > jitter,
> > and it's not possible to be more accurate than the medium that is
> > being
> > used.
> >
> Yes, with a student that I was helping with his thesis, we applied one
> debug patch to trace-cmd that you have on this list, and we tried the
> different synchronization strategies, frequency, etc.
>
> > > Question is, how do you measure that? Sure, I can look manually for
> > > an
> > > occurrence of the pattern that I described above: i.e., an event in
> > > the
> > > guest, then the corresponding one in the host and compute the
> > > difference between the timestamps.
> >
> > You mean, how we measure the accuracy? It's usually done by seeing
> > when we
> > have events from the guest showing up when we should be in the host
> > (it's
> > like seeing events from userspace when you are in the kernel).
> >
> Ok, makes sense. I need to try it first hand to make sure I've properly
> understood it, though. I'll collect some more traces and look for
> situations like these.
>
> Thanks!
>
> > > But do you have a way to do so automatically, or with a
> > > script/program,
> > > etc?
> >
> > We have talked about having something scan to find cases where the
> > guest
> > event happens in kernel and do some post processing shifting, but
> > haven't
> > gotten there yet.
> >
> Yep, as said, I was thinking of it as a way to measure how accurately
> the traces are synched, but indeed once one has it, one could even use
> it to actually synch them better.

Hi Dario
There is one approach to measure the accuracy of the synchronisation.
Maybe you remember the suggestions proposed in the BoF - KVM exposes
the clock offset and scaling factor in debugfs. In the last version of
the time sync patch set, v25, I've implemented such logic - when x86-tsc
is used as the trace clock source, the offset is read from the KVM debug
FS. In theory this should give the best synchronisation, but I still see
some guest events in the wrong order relative to the host events.
I use the offset exposed by KVM as a reference - run the PTP algorithm
using the x86-tsc trace clock and compare the calculated offset with the
one from the KVM debug FS.
We still need to find a way to improve the synchronisation accuracy for
cases where other hypervisors and trace clocks are used.
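The comparison could be sketched roughly like this (a hedged
illustration, not the actual patch-set code; the debugfs directory
layout and all values are assumptions):

```python
from pathlib import Path

def kvm_tsc_offset(vm_dir, vcpu=0):
    """Read the per-vCPU tsc-offset that KVM exposes under its debugfs
    directory (vm_dir would be something like
    /sys/kernel/debug/kvm/<vm>; path layout assumed here)."""
    return int((Path(vm_dir) / f"vcpu{vcpu}" / "tsc-offset").read_text())

def offset_error(ptp_offset, kvm_offset):
    """Disagreement, in TSC ticks, between the offset computed by the
    PTP-like algorithm and the one KVM reports."""
    return abs(ptp_offset - kvm_offset)

print(offset_error(-5_000_123, -5_000_000))  # 123 ticks of disagreement
```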


>
> But I understand how it's rather tricky.
>
> > If the hardware settings can work, then there will be no
> > need to do so.
> >
> Indeed. Well, perhaps it could still be useful as a way to test/check whether
> things are working? :-)
>
> Regards
> --
> Dario Faggioli, Ph.D
> http://about.me/dario.faggioli
> Virtualization Software Engineer
> SUSE Labs, SUSE https://www.suse.com/
> -------------------------------------------------------------------
> <<This happens because _I_ choose it to happen!>> (Raistlin Majere)



--
Tzvetomir (Ceco) Stoyanov
VMware Open Source Technology Center

Patch

diff --git a/Makefile b/Makefile
index b034042..c8d1e02 100644
--- a/Makefile
+++ b/Makefile
@@ -301,7 +301,9 @@  BUILD_TYPE ?= RelWithDebInfo
 $(kshark-dir)/build/Makefile: $(kshark-dir)/CMakeLists.txt
 	$(Q) cd $(kshark-dir)/build && $(CMAKE_COMMAND) -DCMAKE_BUILD_TYPE=$(BUILD_TYPE) -D_INSTALL_PREFIX=$(prefix) -D_LIBDIR=$(libdir) ..
 
-gui: force $(CMD_TARGETS) $(kshark-dir)/build/Makefile
+gui: force
+	$(MAKE) $(CMD_TARGETS)
+	$(MAKE) $(kshark-dir)/build/Makefile
 	$(Q)$(MAKE) $(S) -C $(kshark-dir)/build
 	@echo "gui build complete"
 	@echo "  kernelshark located at $(kshark-dir)/bin"
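The race the patch removes can be shown with a minimal standalone
Makefile (target names here are illustrative, not from the tree):
sibling prerequisites may build concurrently under -jN, while
successive recursive $(MAKE) steps inside a recipe are ordered.

```make
# Illustrative only. With
#
#   app: lib configure        <- racy under make -jN
#
# `lib` and `configure` are siblings, so -jN may run them in parallel
# even though `configure` needs lib.out to already exist -- the same
# shape as the CMake Find scripts running before the libraries were
# built. Running them as successive recursive $(MAKE) steps inside the
# recipe forces the order, which is what the patch does for `gui`.
lib:
	@touch lib.out

configure:
	@test -f lib.out && echo "found lib"

app:
	$(MAKE) lib
	$(MAKE) configure
```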