
[v3,4/4] fs/dcache: Eliminate branches in nr_dentry_negative accounting

Message ID 1536693506-11949-5-git-send-email-longman@redhat.com (mailing list archive)
State New, archived
Series fs/dcache: Track # of negative dentries

Commit Message

Waiman Long Sept. 11, 2018, 7:18 p.m. UTC
Because the accounting of nr_dentry_negative depends on whether a dentry
is negative or not, branch instructions are introduced to handle the
accounting conditionally. That may slow down the task by a noticeable
amount if it introduces a sizeable number of additional branch
mispredictions.

To avoid that, the accounting code is now modified to use conditional
move instructions instead, if supported by the architecture.

Signed-off-by: Waiman Long <longman@redhat.com>
---
 fs/dcache.c | 41 +++++++++++++++++++++++++++++------------
 1 file changed, 29 insertions(+), 12 deletions(-)
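
A minimal sketch of the two code shapes under discussion (hypothetical
userspace names, not the kernel code): the first form leaves the increment
behind a conditional test, while the second turns the condition into a 0-or-1
value and adds it unconditionally, which a compiler can usually lower without
a branch. The exact output varies by compiler and architecture.

	/* Userspace analogue; counter and LRU_FLAG are stand-ins. */
	#define LRU_FLAG 0x80u

	unsigned long counter;

	void inc_branchy(unsigned int flags)
	{
		if (flags & LRU_FLAG)		/* typically a test + conditional jump */
			counter++;
	}

	void inc_branchless(unsigned int flags)
	{
		counter += !!(flags & LRU_FLAG);	/* adds 0 or 1; no control flow */
	}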

Comments

Dave Chinner Sept. 11, 2018, 10:13 p.m. UTC | #1
On Tue, Sep 11, 2018 at 03:18:26PM -0400, Waiman Long wrote:
> Because the accounting of nr_dentry_negative depends on whether a dentry
> is negative or not, branch instructions are introduced to handle the
> accounting conditionally. That may slow down the task by a noticeable
> amount if it introduces a sizeable number of additional branch
> mispredictions.
> 
> To avoid that, the accounting code is now modified to use conditional
> move instructions instead, if supported by the architecture.

I think this is a case of over-optimisation. It makes the code
harder to read for extremely marginal benefit, and if we ever need
to add any more code for negative dentries in these paths the first
thing we'll have to do is revert this change.

Unless you have numbers demonstrating that it's a clear performance
improvement, then NACK for this patch.

Cheers,

Dave.
Matthew Wilcox Sept. 12, 2018, 2:36 a.m. UTC | #2
On Tue, Sep 11, 2018 at 03:18:26PM -0400, Waiman Long wrote:
> Because the accounting of nr_dentry_negative depends on whether a dentry
> is negative or not, branch instructions are introduced to handle the
> accounting conditionally. That may slow down the task by a noticeable
> amount if it introduces a sizeable number of additional branch
> mispredictions.
> 
> To avoid that, the accounting code is now modified to use conditional
> move instructions instead, if supported by the architecture.

You're substituting your judgement here for the compiler's.  I don't
see a reason why the compiler couldn't choose to use a cmov in order
to do this:

	if (dentry->d_flags & DCACHE_LRU_LIST)
		this_cpu_inc(nr_dentry_negative);

unless our macrology has got too clever for the compiler to see through
it.  In which case, the right answer is to simplify the percpu code,
not to force the compiler to optimise the code in the way that makes
sense for your current microarchitecture.
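
A minimal sketch of that concern, with hypothetical names: if the per-CPU
increment expands to opaque inline asm or an out-of-line call, the compiler
has to keep the branch, since it cannot prove the operation is safe to
execute unconditionally.

	/* opaque_inc() stands in for an asm-based accessor the optimizer
	 * cannot look inside. */
	extern void opaque_inc(unsigned long *p);

	unsigned long counter;

	void inc_cond(unsigned int flags)
	{
		if (flags & 0x80)		/* stand-in for DCACHE_LRU_LIST */
			opaque_inc(&counter);	/* side effects unknown: branch stays */
	}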
Waiman Long Sept. 12, 2018, 3:44 p.m. UTC | #3
On 09/11/2018 06:13 PM, Dave Chinner wrote:
> On Tue, Sep 11, 2018 at 03:18:26PM -0400, Waiman Long wrote:
>> Because the accounting of nr_dentry_negative depends on whether a dentry
>> is negative or not, branch instructions are introduced to handle the
>> accounting conditionally. That may slow down the task by a noticeable
>> amount if it introduces a sizeable number of additional branch
>> mispredictions.
>>
>> To avoid that, the accounting code is now modified to use conditional
>> move instructions instead, if supported by the architecture.
> I think this is a case of over-optimisation. It makes the code
> harder to read for extremely marginal benefit, and if we ever need
> to add any more code for negative dentries in these paths the first
> thing we'll have to do is revert this change.
>
> Unless you have numbers demonstrating that it's a clear performance
> improvement, then NACK for this patch.
>
> Cheers,
>
> Dave.

Yes, this is an optimization.

Unfortunately I don't have any performance numbers, as I had not seen any
significant performance difference outside of the noise range with this
set of changes. I am fine with not taking this patch.

Cheers,
Longman
Waiman Long Sept. 12, 2018, 3:49 p.m. UTC | #4
On 09/11/2018 10:36 PM, Matthew Wilcox wrote:
> On Tue, Sep 11, 2018 at 03:18:26PM -0400, Waiman Long wrote:
>> Because the accounting of nr_dentry_negative depends on whether a dentry
>> is negative or not, branch instructions are introduced to handle the
>> accounting conditionally. That may slow down the task by a noticeable
>> amount if it introduces a sizeable number of additional branch
>> mispredictions.
>>
>> To avoid that, the accounting code is now modified to use conditional
>> move instructions instead, if supported by the architecture.
> You're substituting your judgement here for the compiler's.  I don't
> see a reason why the compiler couldn't choose to use a cmov in order
> to do this:
>
> 	if (dentry->d_flags & DCACHE_LRU_LIST)
> 		this_cpu_inc(nr_dentry_negative);
>
> unless our macrology has got too clever for the compiler to see through
> it.  In which case, the right answer is to simplify the percpu code,
> not to force the compiler to optimise the code in the way that makes
> sense for your current microarchitecture.
>
I had actually looked at the x86 object file generated to verify that it
did use cmov with the patch and a branch without. It is possible that
there are other twists needed to make that happen with the above
expression. I will need to run some experiments to figure it out. In the
meantime, I am fine with dropping this patch, as it is a
micro-optimization that doesn't change the behavior at all.

Cheers,
Longman
Matthew Wilcox Sept. 12, 2018, 3:55 p.m. UTC | #5
On Wed, Sep 12, 2018 at 11:49:22AM -0400, Waiman Long wrote:
> > unless our macrology has got too clever for the compiler to see through
> > it.  In which case, the right answer is to simplify the percpu code,
> > not to force the compiler to optimise the code in the way that makes
> > sense for your current microarchitecture.
> >
> I had actually looked at the x86 object file generated to verify that it
> did use cmov with the patch and a branch without. It is possible that
> there are other twists needed to make that happen with the above
> expression. I will need to run some experiments to figure it out. In the
> meantime, I am fine with dropping this patch, as it is a
> micro-optimization that doesn't change the behavior at all.

I don't understand why you included it, to be honest.  But it did get
me looking at the percpu code to see if it was too clever.  And that
led to the resubmission of rth's patch from two years ago that I cc'd
you on earlier.

With that patch applied, gcc should be able to choose to use the
cmov if it feels that would be a better optimisation.  It already
makes one different decision in dcache.o, namely that it uses addq
$0x1,%gs:0x0(%rip) instead of incq %gs:0x0(%rip).  Apparently this
performs better on some CPUs.

So I wouldn't spend any more time on this patch.
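
A minimal sketch of why that helps, assuming the simplified percpu op is
ordinary C the optimizer can inline: the conditional increment becomes a
plain load-modify-store, and gcc is free to lower it with or without a
branch as it sees fit.

	/* simple_inc() is a hypothetical stand-in for a percpu op the
	 * optimizer can see through. */
	static inline void simple_inc(unsigned long *p)
	{
		*p += 1;
	}

	unsigned long counter;

	void inc_cond_visible(unsigned int flags)
	{
		if (flags & 0x80)
			simple_inc(&counter);	/* inlined; lowering is the compiler's choice */
	}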
Waiman Long Sept. 12, 2018, 4:11 p.m. UTC | #6
On 09/12/2018 11:55 AM, Matthew Wilcox wrote:
> On Wed, Sep 12, 2018 at 11:49:22AM -0400, Waiman Long wrote:
>>> unless our macrology has got too clever for the compiler to see through
>>> it.  In which case, the right answer is to simplify the percpu code,
>>> not to force the compiler to optimise the code in the way that makes
>>> sense for your current microarchitecture.
>>>
>> I had actually looked at the x86 object file generated to verify that it
>> did use cmov with the patch and a branch without. It is possible that
>> there are other twists needed to make that happen with the above
>> expression. I will need to run some experiments to figure it out. In the
>> meantime, I am fine with dropping this patch, as it is a
>> micro-optimization that doesn't change the behavior at all.
> I don't understand why you included it, to be honest.  But it did get
> me looking at the percpu code to see if it was too clever.  And that
> led to the resubmission of rth's patch from two years ago that I cc'd
> you on earlier.
>
> With that patch applied, gcc should be able to choose to use the
> cmov if it feels that would be a better optimisation.  It already
> makes one different decision in dcache.o, namely that it uses addq
> $0x1,%gs:0x0(%rip) instead of incq %gs:0x0(%rip).  Apparently this
> performs better on some CPUs.
>
> So I wouldn't spend any more time on this patch.

Thanks for looking into that. Well, I am not going to look further into
this unless I have nothing else to do, which is unlikely.

Cheers,
Longman

Patch

diff --git a/fs/dcache.c b/fs/dcache.c
index c1cc956..dfd5628 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -171,6 +171,29 @@  int proc_nr_dentry(struct ctl_table *table, int write, void __user *buffer,
 	dentry_stat.nr_negative = get_nr_dentry_negative();
 	return proc_doulongvec_minmax(table, write, buffer, lenp, ppos);
 }
+
+/*
+ * Increment/Decrement nr_dentry_negative if the condition is true.
+ * For architectures that support some kind of conditional move, the compiler
+ * should be able to generate code to inc/dec the negative dentry counter
+ * without any branch instruction.
+ */
+static inline void cond_negative_dentry_inc(bool cond)
+{
+	int val = !!cond;
+
+	this_cpu_add(nr_dentry_negative, val);
+}
+
+static inline void cond_negative_dentry_dec(bool cond)
+{
+	int val = !!cond;
+
+	this_cpu_sub(nr_dentry_negative, val);
+}
+#else
+static inline void cond_negative_dentry_inc(bool cond) { }
+static inline void cond_negative_dentry_dec(bool cond) { }
 #endif
 
 /*
@@ -343,8 +366,7 @@  static inline void __d_clear_type_and_inode(struct dentry *dentry)
 	flags &= ~(DCACHE_ENTRY_TYPE | DCACHE_FALLTHRU);
 	WRITE_ONCE(dentry->d_flags, flags);
 	dentry->d_inode = NULL;
-	if (dentry->d_flags & DCACHE_LRU_LIST)
-		this_cpu_inc(nr_dentry_negative);
+	cond_negative_dentry_inc(dentry->d_flags & DCACHE_LRU_LIST);
 }
 
 static void dentry_free(struct dentry *dentry)
@@ -412,8 +434,7 @@  static void d_lru_add(struct dentry *dentry)
 	D_FLAG_VERIFY(dentry, 0);
 	dentry->d_flags |= DCACHE_LRU_LIST;
 	this_cpu_inc(nr_dentry_unused);
-	if (d_is_negative(dentry))
-		this_cpu_inc(nr_dentry_negative);
+	cond_negative_dentry_inc(d_is_negative(dentry));
 	WARN_ON_ONCE(!list_lru_add(&dentry->d_sb->s_dentry_lru, &dentry->d_lru));
 }
 
@@ -422,8 +443,7 @@  static void d_lru_del(struct dentry *dentry)
 	D_FLAG_VERIFY(dentry, DCACHE_LRU_LIST);
 	dentry->d_flags &= ~DCACHE_LRU_LIST;
 	this_cpu_dec(nr_dentry_unused);
-	if (d_is_negative(dentry))
-		this_cpu_dec(nr_dentry_negative);
+	cond_negative_dentry_dec(d_is_negative(dentry));
 	WARN_ON_ONCE(!list_lru_del(&dentry->d_sb->s_dentry_lru, &dentry->d_lru));
 }
 
@@ -454,8 +474,7 @@  static void d_lru_isolate(struct list_lru_one *lru, struct dentry *dentry)
 	D_FLAG_VERIFY(dentry, DCACHE_LRU_LIST);
 	dentry->d_flags &= ~DCACHE_LRU_LIST;
 	this_cpu_dec(nr_dentry_unused);
-	if (d_is_negative(dentry))
-		this_cpu_dec(nr_dentry_negative);
+	cond_negative_dentry_dec(d_is_negative(dentry));
 	list_lru_isolate(lru, &dentry->d_lru);
 }
 
@@ -464,8 +483,7 @@  static void d_lru_shrink_move(struct list_lru_one *lru, struct dentry *dentry,
 {
 	D_FLAG_VERIFY(dentry, DCACHE_LRU_LIST);
 	dentry->d_flags |= DCACHE_SHRINK_LIST;
-	if (d_is_negative(dentry))
-		this_cpu_dec(nr_dentry_negative);
+	cond_negative_dentry_dec(d_is_negative(dentry));
 	list_lru_isolate_move(lru, &dentry->d_lru, list);
 }
 
@@ -1865,8 +1883,7 @@  static void __d_instantiate(struct dentry *dentry, struct inode *inode)
 	/*
 	 * Decrement negative dentry count if it was in the LRU list.
 	 */
-	if (dentry->d_flags & DCACHE_LRU_LIST)
-		this_cpu_dec(nr_dentry_negative);
+	cond_negative_dentry_dec(dentry->d_flags & DCACHE_LRU_LIST);
 	hlist_add_head(&dentry->d_u.d_alias, &inode->i_dentry);
 	raw_write_seqcount_begin(&dentry->d_seq);
 	__d_set_inode_and_type(dentry, inode, add_flags);