
[5/7] KVM-GST: KVM Steal time accounting

Message ID 1308007897-17013-6-git-send-email-glommer@redhat.com (mailing list archive)
State New, archived
Headers show

Commit Message

Glauber Costa June 13, 2011, 11:31 p.m. UTC
This patch accounts steal time in kernel/sched.c.
I kept it from the last proposal because I still see advantages
in doing it here: it gives us easier access to scheduler
variables such as the cpu rq. The next patch shows an example of
its usage.

Since functions like account_idle_time() can be called from
multiple places, not only account_process_tick(), steal time
grabbing is repeated in each account function separately.

Signed-off-by: Glauber Costa <glommer@redhat.com>
CC: Rik van Riel <riel@redhat.com>
CC: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
CC: Peter Zijlstra <peterz@infradead.org>
CC: Avi Kivity <avi@redhat.com>
CC: Anthony Liguori <aliguori@us.ibm.com>
CC: Eric B Munson <emunson@mgebm.net>
---
 kernel/sched.c |   45 +++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 45 insertions(+), 0 deletions(-)

Comments

Eric B Munson June 14, 2011, 1:21 a.m. UTC | #1
On Mon, 13 Jun 2011, Glauber Costa wrote:

> This patch accounts steal time in kernel/sched.c.
> I kept it from the last proposal because I still see advantages
> in doing it here: it gives us easier access to scheduler
> variables such as the cpu rq. The next patch shows an example of
> its usage.
> 
> Since functions like account_idle_time() can be called from
> multiple places, not only account_process_tick(), steal time
> grabbing is repeated in each account function separately.
> 
> Signed-off-by: Glauber Costa <glommer@redhat.com>
> CC: Rik van Riel <riel@redhat.com>
> CC: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
> CC: Peter Zijlstra <peterz@infradead.org>
> CC: Avi Kivity <avi@redhat.com>
> CC: Anthony Liguori <aliguori@us.ibm.com>
> CC: Eric B Munson <emunson@mgebm.net>

Tested-by: Eric B Munson <emunson@mgebm.net>
Peter Zijlstra June 14, 2011, 10:10 a.m. UTC | #2
On Mon, 2011-06-13 at 19:31 -0400, Glauber Costa wrote:
> +static inline int touch_steal_time(int is_idle)
> +{
> +       u64 steal, st = 0;
> +
> +       if (static_branch(&paravirt_steal_enabled)) {
> +
> +               steal = paravirt_steal_clock(smp_processor_id());
> +
> +               steal -= this_rq()->prev_steal_time;
> +               this_rq()->prev_steal_time += steal;

If you move this addition below this test:

> +               if (is_idle || (steal < TICK_NSEC))
> +                       return 0;

that is, right here, then you don't lose tiny steal deltas and
subsequent ticks accumulate their steal time until you really
have a full steal tick to account.

I guess you want something different for the idle case though.

> +               while (steal > TICK_NSEC) {

			/* really, if we wanted a division we'd have written one */
			asm("" : "+rm" (steal));

> +                       steal -= TICK_NSEC;
> +                       st++;
> +               }
> +
> +               account_steal_time(st);
> +               return 1;
> +       }
> +       return 0;
> +} 


--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Glauber Costa June 15, 2011, 1:08 a.m. UTC | #3
On 06/14/2011 07:10 AM, Peter Zijlstra wrote:
> On Mon, 2011-06-13 at 19:31 -0400, Glauber Costa wrote:
>> +static inline int touch_steal_time(int is_idle)
>> +{
>> +       u64 steal, st = 0;
>> +
>> +       if (static_branch(&paravirt_steal_enabled)) {
>> +
>> +               steal = paravirt_steal_clock(smp_processor_id());
>> +
>> +               steal -= this_rq()->prev_steal_time;
>> +               this_rq()->prev_steal_time += steal;
>
> If you move this addition below this test:
>
>> +               if (is_idle || (steal < TICK_NSEC))
>> +                       return 0;
>
> that is, right here, then you don't lose tiny steal deltas and
> subsequent ticks accumulate their steal time until you really
> have a full steal tick to account.

true
> I guess you want something different for the idle case though.

definitely.

>> +               while (steal > TICK_NSEC) {
>
> 			/* really, if we wanted a division we'd have written one */
> 			asm("" : "+rm" (steal));

Out of curiosity, have we seen any compiler de-optimize it into a 
division, or are you just being careful?

>> +                       steal -= TICK_NSEC;
>> +                       st++;
>> +               }
>> +
>> +               account_steal_time(st);
>> +               return 1;
>> +       }
>> +       return 0;
>> +}
>
>

Peter Zijlstra June 15, 2011, 9:28 a.m. UTC | #4
On Tue, 2011-06-14 at 22:08 -0300, Glauber Costa wrote:
> >> +               while (steal > TICK_NSEC) {
> >
> >                       /* really, if we wanted a division we'd have written one */
> >                       asm("" : "+rm" (steal));
> 
> Out of curiosity, have we seen any compiler de-optimize it into a 
> division, or are you just being careful?
> 
> >> +                       steal -= TICK_NSEC;
> >> +                       st++;
> >> +               } 

No, that really happened a number of times; there's one in
sched_avg_period() that actually triggered, and __iter_div_u64_rem() is
what started it all, iirc.

Patch

diff --git a/kernel/sched.c b/kernel/sched.c
index 3f2e502..154cb14 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -75,6 +75,7 @@ 
 #include <asm/tlb.h>
 #include <asm/irq_regs.h>
 #include <asm/mutex.h>
+#include <asm/paravirt.h>
 
 #include "sched_cpupri.h"
 #include "workqueue_sched.h"
@@ -528,6 +529,7 @@  struct rq {
 #ifdef CONFIG_IRQ_TIME_ACCOUNTING
 	u64 prev_irq_time;
 #endif
+	u64 prev_steal_time;
 
 	/* calc_load related fields */
 	unsigned long calc_load_update;
@@ -3705,6 +3707,41 @@  unsigned long long thread_group_sched_runtime(struct task_struct *p)
 }
 
 /*
+ * We have to flush steal time information every time something else
+ * is accounted. Since the accounting functions are all visible to the rest
+ * of the kernel, it is tricky to do this in one place, so this helper
+ * function does it for us.
+ *
+ * When the system is idle, the concept of steal time does not apply. We just
+ * tell the underlying hypervisor that we grabbed the data, but skip steal time
+ * accounting.
+ */
+static inline int touch_steal_time(int is_idle)
+{
+	u64 steal, st = 0;
+
+	if (static_branch(&paravirt_steal_enabled)) {
+
+		steal = paravirt_steal_clock(smp_processor_id());
+
+		steal -= this_rq()->prev_steal_time;
+		this_rq()->prev_steal_time += steal;
+
+		if (is_idle || (steal < TICK_NSEC))
+			return 0;
+
+		while (steal > TICK_NSEC) {
+			steal -= TICK_NSEC;
+			st++;
+		}
+
+		account_steal_time(st);
+		return 1;
+	}
+	return 0;
+}
+
+/*
  * Account user cpu time to a process.
  * @p: the process that the cpu time gets accounted to
  * @cputime: the cpu time spent in user space since the last update
@@ -3716,6 +3753,9 @@  void account_user_time(struct task_struct *p, cputime_t cputime,
 	struct cpu_usage_stat *cpustat = &kstat_this_cpu.cpustat;
 	cputime64_t tmp;
 
+	if (touch_steal_time(0))
+		return;
+
 	/* Add user time to process. */
 	p->utime = cputime_add(p->utime, cputime);
 	p->utimescaled = cputime_add(p->utimescaled, cputime_scaled);
@@ -3802,6 +3842,9 @@  void account_system_time(struct task_struct *p, int hardirq_offset,
 	struct cpu_usage_stat *cpustat = &kstat_this_cpu.cpustat;
 	cputime64_t *target_cputime64;
 
+	if (touch_steal_time(0))
+		return;
+
 	if ((p->flags & PF_VCPU) && (irq_count() - hardirq_offset == 0)) {
 		account_guest_time(p, cputime, cputime_scaled);
 		return;
@@ -3839,6 +3882,8 @@  void account_idle_time(cputime_t cputime)
 	cputime64_t cputime64 = cputime_to_cputime64(cputime);
 	struct rq *rq = this_rq();
 
+	touch_steal_time(1);
+
 	if (atomic_read(&rq->nr_iowait) > 0)
 		cpustat->iowait = cputime64_add(cpustat->iowait, cputime64);
 	else