| Message ID | 20190905113955.24870-4-jgross@suse.com (mailing list archive) |
|---|---|
| State | Superseded |
| Series | xen: debugtrace cleanup and per-cpu buffer support |
On 05.09.2019 13:39, Juergen Gross wrote:
> As a preparation for per-cpu buffers do a little refactoring of the
> debugtrace data: put the needed buffer admin data into the buffer as
> it will be needed for each buffer. In order not to limit buffer size
> switch the related fields from unsigned int to unsigned long, as on
> huge machines with RAM in the TB range it might be interesting to
> support buffers >4GB.

Just as a further remark in this regard:

> void debugtrace_printk(const char *fmt, ...)
> {
>     static char buf[DEBUG_TRACE_ENTRY_SIZE];
> -    static unsigned int count, last_count, last_prd;
> +    static unsigned int count, last_count;

How long do we think it will take until their wrapping will become an
issue with such huge buffers?

Jan
On 05.09.19 14:01, Jan Beulich wrote:
> On 05.09.2019 13:39, Juergen Gross wrote:
>> [...]
>
> Just as a further remark in this regard:
>
>> void debugtrace_printk(const char *fmt, ...)
>> {
>>     static char buf[DEBUG_TRACE_ENTRY_SIZE];
>> -    static unsigned int count, last_count, last_prd;
>> +    static unsigned int count, last_count;
>
> How long do we think it will take until their wrapping will become an
> issue with such huge buffers?

Count wrapping will not result in any misbehavior of tracing. With
per-cpu buffers it might result in ambiguity regarding sorting the
entries, but I guess chances are rather low this being a real issue.

BTW: wrapping of count is not related to buffer size, but to the
amount of trace data written.

Juergen
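For a sense of scale (an editorial aside, not part of the thread): since wrapping depends only on the amount of trace data written, a 32-bit count wraps after 2^32 entries regardless of buffer size. A standalone C sketch, with purely illustrative trace rates:

```c
#include <stdio.h>

int main(void)
{
    /* Hypothetical trace rates in entries per second -- illustrative only. */
    const double rates[] = { 1e3, 1e5, 1e6 };
    const double wrap = 4294967296.0;          /* 2^32 count values */
    unsigned int i;

    for ( i = 0; i < sizeof(rates) / sizeof(rates[0]); i++ )
        printf("%8.0f entries/s -> count wraps after %6.1f hours\n",
               rates[i], wrap / rates[i] / 3600);
    return 0;
}
```

At 1000 entries per second the counter lasts roughly 50 days; at a million per second it wraps in just over an hour, which is where the sorting ambiguity Juergen mentions could start to matter.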
On 05.09.2019 13:39, Juergen Gross wrote:
> --- a/xen/common/debugtrace.c
> +++ b/xen/common/debugtrace.c
> @@ -17,34 +17,40 @@
>  #define DEBUG_TRACE_ENTRY_SIZE 1024
>  
>  /* Send output direct to console, or buffer it? */
> -static volatile int debugtrace_send_to_console;
> +static volatile bool debugtrace_send_to_console;
>  
> -static char *debugtrace_buf; /* Debug-trace buffer */
> -static unsigned int debugtrace_prd; /* Producer index */
> -static unsigned int debugtrace_kilobytes = 128, debugtrace_bytes;
> -static unsigned int debugtrace_used;
> +struct debugtrace_data {
> +    unsigned long bytes; /* Size of buffer. */

Hmm, I'm sorry for recognizing this only now, but why does this field
need replicating? It's the same in all instances of the structure
afaict.

Jan
On 05.09.19 14:13, Jan Beulich wrote:
> On 05.09.2019 13:39, Juergen Gross wrote:
>> [...]
>> +struct debugtrace_data {
>> +    unsigned long bytes; /* Size of buffer. */
>
> Hmm, I'm sorry for recognizing this only now, but why does this field
> need replicating? It's the same in all instances of the structure
> afaict.

Oh, right. In the beginning I had plans to support modifying the
buffer size at runtime.

Okay, I'll change it.

Juergen
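A minimal sketch of the change Juergen agrees to here — hoisting the common size out of the per-buffer struct into a single global. The names follow the patch's style, but the exact shape of the follow-up is an assumption, not the actual revised patch:

```c
#include <string.h>

/* Sketch only: per-buffer admin data without the replicated size field. */
struct debugtrace_data {
    unsigned long prd;      /* Producer index. */
    char buf[];             /* The ring data itself. */
};

/* The size is identical for all buffers, so keep it exactly once. */
static unsigned long debugtrace_size;

static void debugtrace_reset(struct debugtrace_data *data)
{
    /* Every instance uses the one global size. */
    memset(data->buf, '\0', debugtrace_size);
    data->prd = 0;
}
```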
On 05.09.2019 14:12, Juergen Gross wrote:
> On 05.09.19 14:01, Jan Beulich wrote:
>> [...]
>> How long do we think it will take until their wrapping will become an
>> issue with such huge buffers?
>
> Count wrapping will not result in any misbehavior of tracing. With
> per-cpu buffers it might result in ambiguity regarding sorting the
> entries, but I guess chances are rather low this being a real issue.
>
> BTW: wrapping of count is not related to buffer size, but to the
> amount of trace data written.

Sure, but the chance of ambiguity increases with larger buffer sizes.

Jan
On 05.09.2019 14:19, Juergen Gross wrote:
> On 05.09.19 14:13, Jan Beulich wrote:
>> [...]
>> Hmm, I'm sorry for recognizing this only now, but why does this field
>> need replicating? It's the same in all instances of the structure
>> afaict.
>
> Oh, right. In the beginning I had plans to support modifying the
> buffer size at runtime.
>
> Okay, I'll change it.

Thanks. FAOD this is not going to invalidate any of my acks.

Jan
On 05.09.19 14:22, Jan Beulich wrote:
> On 05.09.2019 14:12, Juergen Gross wrote:
>> [...]
>> BTW: wrapping of count is not related to buffer size, but to the
>> amount of trace data written.
>
> Sure, but the chance of ambiguity increases with larger buffer sizes.

Well, better safe than sorry. Switching to unsigned long will rarely
hurt, so I'm going to do just that. The only downside will be some
waste of buffer space for very long-running traces with huge amounts
of entries.

Juergen
On 05.09.2019 14:27, Juergen Gross wrote:
> [...]
> Well, better safe than sorry. Switching to unsigned long will rarely
> hurt, so I'm going to do just that. The only downside will be some
> waste of buffer space for very long-running traces with huge amounts
> of entries.

Hmm, true. Maybe we could get someone else's opinion on this - anyone?

Jan
On 05.09.19 14:37, Jan Beulich wrote:
> On 05.09.2019 14:27, Juergen Gross wrote:
>> [...]
>> Well, better safe than sorry. Switching to unsigned long will rarely
>> hurt, so I'm going to do just that. The only downside will be some
>> waste of buffer space for very long-running traces with huge amounts
>> of entries.
>
> Hmm, true. Maybe we could get someone else's opinion on this - anyone?

TBH, I wouldn't be concerned about the buffer space. In case there are
really billions of entries, the difference between a 10-digit count
value and maybe a 15-digit one (and that is already a massive amount)
isn't that large. The average printed size of count with about
4 billion entries is 9.75 digits (and not just 5 :-) ).

Juergen
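That 9.75 figure checks out (an editorial verification, not from the thread): averaged over all 2^32 possible count values, the expected printed width is about 9.74 digits, because more than three quarters of them are 10 digits long. A small standalone check:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t n = 1ULL << 32;     /* all 32-bit count values, 0 .. 2^32-1 */
    uint64_t total = 0, lo = 1, digits;

    /* Add up the printed widths of each d-digit range below 2^32. */
    for ( digits = 1; lo < n; digits++, lo *= 10 )
    {
        uint64_t hi = (lo * 10 < n) ? lo * 10 : n;
        total += digits * (hi - (digits == 1 ? 0 : lo));
    }

    printf("average printed digits: %.2f\n", (double)total / n); /* ~9.74 */
    return 0;
}
```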
On 05.09.19 14:46, Juergen Gross wrote:
> [...]
> TBH, I wouldn't be concerned about the buffer space. In case there are
> really billions of entries, the difference between a 10-digit count
> value and maybe a 15-digit one (and that is already a massive amount)
> isn't that large. The average printed size of count with about
> 4 billion entries is 9.75 digits (and not just 5 :-) ).

Another option would be to let count wrap at e.g. 4 billion (or even
1 million) and add a wrap count. Each buffer struct would contain the
wrap count of the last entry, and when hitting a higher wrap count a
new entry like ("wrap %u->%u", old_wrap, new_wrap) would be printed.
This saves buffer space without losing information.

Juergen
On 05.09.2019 16:36, Juergen Gross wrote:
> [...]
> Another option would be to let count wrap at e.g. 4 billion (or even
> 1 million) and add a wrap count. Each buffer struct would contain the
> wrap count of the last entry, and when hitting a higher wrap count a
> new entry like ("wrap %u->%u", old_wrap, new_wrap) would be printed.
> This saves buffer space without losing information.

This sounds quite neat.

Jan
On 05.09.19 16:43, Jan Beulich wrote:
> On 05.09.2019 16:36, Juergen Gross wrote:
>> [...]
>> This saves buffer space without losing information.
>
> This sounds quite neat.

I'm adding a new patch for that purpose, as it is adding a new feature.

Any preferences for the max value of count? I'm in favor of limiting
it to 6-digit numbers, as those are much easier to compare by just
looking at them.

Juergen
On 06.09.2019 10:49, Juergen Gross wrote:
> [...]
> I'm adding a new patch for that purpose, as it is adding a new feature.
>
> Any preferences for the max value of count? I'm in favor of limiting
> it to 6-digit numbers, as those are much easier to compare by just
> looking at them.

I'm not overly fussed; personally I'd probably wrap at 30 bits, making
it generally up to 8 digits, just very slightly going into the 9-digit
range.

Jan
On 06.09.19 11:10, Jan Beulich wrote:
> [...]
> I'm not overly fussed; personally I'd probably wrap at 30 bits, making
> it generally up to 8 digits, just very slightly going into the 9-digit
> range.

2^30 is a 10-digit number. :-)

But wrapping at 100.000.000 is fine, too, as it is just

    if ( count == 100000000 )

and that is not required to be a nice binary number.

Juergen
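A minimal sketch of the wrap-count scheme the two converge on, wrapping at 100000000 as suggested above. The helper name debugtrace_add_to_buf() is borrowed from the patch (with a per-buffer parameter added for the per-cpu case); everything else here — field names, the marker format, the function shape — is an assumption about the follow-up patch, not its actual content:

```c
#include <stdio.h>

#define DEBUGTRACE_COUNT_WRAP 100000000U   /* keep printed counts at 8 digits */

struct debugtrace_data {
    unsigned long prd;         /* Producer index. */
    unsigned int wrap;         /* Wrap count of the buffer's last entry. */
    char buf[];
};

static unsigned int count;     /* Global entry counter, wraps early... */
static unsigned int wrap_cnt;  /* ...and this records how often it did. */

/* Hypothetical per-buffer variant of the patch's helper. */
void debugtrace_add_to_buf(struct debugtrace_data *data, const char *s);

void debugtrace_stamp(struct debugtrace_data *data)
{
    char tag[32];

    if ( ++count == DEBUGTRACE_COUNT_WRAP )
    {
        count = 0;
        wrap_cnt++;
    }

    /*
     * A buffer lagging behind the global wrap count gets a marker entry,
     * so entries from different per-cpu buffers can still be ordered.
     */
    if ( data->wrap != wrap_cnt )
    {
        snprintf(tag, sizeof(tag), "wrap %u->%u\n", data->wrap, wrap_cnt);
        debugtrace_add_to_buf(data, tag);
        data->wrap = wrap_cnt;
    }

    snprintf(tag, sizeof(tag), "%u ", count);
    debugtrace_add_to_buf(data, tag);
}
```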
```diff
diff --git a/xen/common/debugtrace.c b/xen/common/debugtrace.c
index c1ee3f45b9..0eeb1a77c5 100644
--- a/xen/common/debugtrace.c
+++ b/xen/common/debugtrace.c
@@ -17,34 +17,40 @@
 #define DEBUG_TRACE_ENTRY_SIZE 1024
 
 /* Send output direct to console, or buffer it? */
-static volatile int debugtrace_send_to_console;
+static volatile bool debugtrace_send_to_console;
 
-static char *debugtrace_buf; /* Debug-trace buffer */
-static unsigned int debugtrace_prd; /* Producer index */
-static unsigned int debugtrace_kilobytes = 128, debugtrace_bytes;
-static unsigned int debugtrace_used;
+struct debugtrace_data {
+    unsigned long bytes; /* Size of buffer. */
+    unsigned long prd;   /* Producer index. */
+    char buf[];
+};
+
+static struct debugtrace_data *dt_data;
+
+static unsigned int debugtrace_kilobytes = 128;
+static bool debugtrace_used;
 static char debugtrace_last_entry_buf[DEBUG_TRACE_ENTRY_SIZE];
 static DEFINE_SPINLOCK(debugtrace_lock);
 integer_param("debugtrace", debugtrace_kilobytes);
 
 static void debugtrace_dump_worker(void)
 {
-    if ( (debugtrace_bytes == 0) || !debugtrace_used )
+    if ( !debugtrace_used )
         return;
 
     printk("debugtrace_dump() starting\n");
 
     /* Print oldest portion of the ring. */
-    if ( debugtrace_buf[debugtrace_prd] != '\0' )
-        console_serial_puts(&debugtrace_buf[debugtrace_prd],
-                            debugtrace_bytes - debugtrace_prd);
+    if ( dt_data->buf[dt_data->prd] != '\0' )
+        console_serial_puts(&dt_data->buf[dt_data->prd],
+                            dt_data->bytes - dt_data->prd);
 
     /* Print youngest portion of the ring. */
-    debugtrace_buf[debugtrace_prd] = '\0';
-    console_serial_puts(&debugtrace_buf[0], debugtrace_prd);
+    dt_data->buf[dt_data->prd] = '\0';
+    console_serial_puts(&dt_data->buf[0], dt_data->prd);
 
-    memset(debugtrace_buf, '\0', debugtrace_bytes);
-    debugtrace_prd = 0;
+    memset(dt_data->buf, '\0', dt_data->bytes);
+    dt_data->prd = 0;
     debugtrace_last_entry_buf[0] = 0;
 
     printk("debugtrace_dump() finished\n");
@@ -70,7 +76,6 @@ static void debugtrace_toggle(void)
     spin_unlock_irqrestore(&debugtrace_lock, flags);
 
     watchdog_enable();
-
 }
 
 void debugtrace_dump(void)
@@ -92,26 +97,27 @@ static void debugtrace_add_to_buf(char *buf)
 
     for ( p = buf; *p != '\0'; p++ )
     {
-        debugtrace_buf[debugtrace_prd++] = *p;
-        if ( debugtrace_prd == debugtrace_bytes )
-            debugtrace_prd = 0;
+        dt_data->buf[dt_data->prd++] = *p;
+        if ( dt_data->prd == dt_data->bytes )
+            dt_data->prd = 0;
     }
 }
 
 void debugtrace_printk(const char *fmt, ...)
 {
     static char buf[DEBUG_TRACE_ENTRY_SIZE];
-    static unsigned int count, last_count, last_prd;
+    static unsigned int count, last_count;
+    static unsigned long last_prd;
 
     char cntbuf[24];
     va_list args;
     unsigned long flags;
     unsigned int nr;
 
-    if ( debugtrace_bytes == 0 )
+    if ( !dt_data )
         return;
 
-    debugtrace_used = 1;
+    debugtrace_used = true;
 
     spin_lock_irqsave(&debugtrace_lock, flags);
 
@@ -130,14 +136,14 @@ void debugtrace_printk(const char *fmt, ...)
     {
         if ( strcmp(buf, debugtrace_last_entry_buf) )
         {
-            last_prd = debugtrace_prd;
+            last_prd = dt_data->prd;
             last_count = ++count;
             safe_strcpy(debugtrace_last_entry_buf, buf);
             snprintf(cntbuf, sizeof(cntbuf), "%u ", count);
         }
         else
         {
-            debugtrace_prd = last_prd;
+            dt_data->prd = last_prd;
             snprintf(cntbuf, sizeof(cntbuf), "%u-%u ", last_count, ++count);
         }
         debugtrace_add_to_buf(cntbuf);
@@ -155,7 +161,8 @@ static void debugtrace_key(unsigned char key)
 static int __init debugtrace_init(void)
 {
     int order;
-    unsigned int kbytes, bytes;
+    unsigned long kbytes, bytes;
+    struct debugtrace_data *data;
 
     /* Round size down to next power of two. */
     while ( (kbytes = (debugtrace_kilobytes & (debugtrace_kilobytes-1))) != 0 )
@@ -166,12 +173,15 @@ static int __init debugtrace_init(void)
         return 0;
 
     order = get_order_from_bytes(bytes);
-    debugtrace_buf = alloc_xenheap_pages(order, 0);
-    ASSERT(debugtrace_buf != NULL);
+    data = alloc_xenheap_pages(order, 0);
+    if ( !data )
+        return -ENOMEM;
 
-    memset(debugtrace_buf, '\0', bytes);
+    bytes = PAGE_SIZE << order;
+    memset(data, '\0', bytes);
 
-    debugtrace_bytes = bytes;
+    data->bytes = bytes - sizeof(*data);
+    dt_data = data;
 
     register_keyhandler('T', debugtrace_key,
                         "toggle debugtrace to console/buffer", 0);
```
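One detail of the final hunk worth spelling out: the admin structure now sits at the head of the page allocation, and `data->bytes = bytes - sizeof(*data)` reserves exactly the remainder for the ring. The same flexible-array-member pattern in standalone form (plain malloc() stands in for alloc_xenheap_pages(); sizes and names here are illustrative):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct debugtrace_data {
    unsigned long bytes;   /* Usable ring bytes, excluding this header. */
    unsigned long prd;     /* Producer index. */
    char buf[];            /* Flexible array member: the ring itself. */
};

int main(void)
{
    size_t alloc = 128 * 1024;   /* stand-in for PAGE_SIZE << order */
    struct debugtrace_data *data = malloc(alloc);

    if ( !data )
        return 1;

    memset(data, 0, alloc);
    data->bytes = alloc - sizeof(*data);   /* mirrors the hunk above */
    printf("header: %zu bytes, ring: %lu bytes\n",
           sizeof(*data), data->bytes);
    free(data);
    return 0;
}
```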