| Message ID | 20231004165317.1061855-10-willy@infradead.org |
|---|---|
| State | New, archived |
| Series | Add folio_end_read |
```
On 5/10/23 02:53, Matthew Wilcox (Oracle) wrote:
> Using EOR to clear the guaranteed-to-be-set lock bit will test the
> negative flag just like the x86 implementation. This should be
> more efficient than the generic implementation in filemap.c. It
> would be better if m68k had __GCC_ASM_FLAG_OUTPUTS__.
>
> Coldfire doesn't have a byte-sized EOR, so we test bit 7 after the
> EOR, which is a second memory access, but it's slightly better than
> the current C code.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Acked-by: Greg Ungerer <gerg@linux-m68k.org>

> ---
>  arch/m68k/include/asm/bitops.h | 22 ++++++++++++++++++++++
>  1 file changed, 22 insertions(+)
>
> diff --git a/arch/m68k/include/asm/bitops.h b/arch/m68k/include/asm/bitops.h
> index e984af71df6b..80ee36095905 100644
> --- a/arch/m68k/include/asm/bitops.h
> +++ b/arch/m68k/include/asm/bitops.h
> @@ -319,6 +319,28 @@ arch___test_and_change_bit(unsigned long nr, volatile unsigned long *addr)
>  	return test_and_change_bit(nr, addr);
>  }
>
> +static inline bool xor_unlock_is_negative_byte(unsigned long mask,
> +		volatile unsigned long *p)
> +{
> +#ifdef CONFIG_COLDFIRE
> +	__asm__ __volatile__ ("eorl %1, %0"
> +		: "+m" (*p)
> +		: "d" (mask)
> +		: "memory");
> +	return *p & (1 << 7);
> +#else
> +	char result;
> +	char *cp = (char *)p + 3;	/* m68k is big-endian */
> +
> +	__asm__ __volatile__ ("eor.b %1, %2; smi %0"
> +		: "=d" (result)
> +		: "di" (mask), "o" (*cp)
> +		: "memory");
> +	return result;
> +#endif
> +}
> +#define xor_unlock_is_negative_byte xor_unlock_is_negative_byte
> +
>  /*
>   * The true 68020 and more advanced processors support the "bfffo"
>   * instruction for finding bits. ColdFire and simple 68000 parts
```
```
On Wed, Oct 4, 2023 at 6:53 PM Matthew Wilcox (Oracle) <willy@infradead.org> wrote:
> Using EOR to clear the guaranteed-to-be-set lock bit will test the
> negative flag just like the x86 implementation. This should be
> more efficient than the generic implementation in filemap.c. It
> would be better if m68k had __GCC_ASM_FLAG_OUTPUTS__.
>
> Coldfire doesn't have a byte-sized EOR, so we test bit 7 after the
> EOR, which is a second memory access, but it's slightly better than
> the current C code.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>

Gr{oetje,eeting}s,

                        Geert
```
```diff
diff --git a/arch/m68k/include/asm/bitops.h b/arch/m68k/include/asm/bitops.h
index e984af71df6b..80ee36095905 100644
--- a/arch/m68k/include/asm/bitops.h
+++ b/arch/m68k/include/asm/bitops.h
@@ -319,6 +319,28 @@ arch___test_and_change_bit(unsigned long nr, volatile unsigned long *addr)
 	return test_and_change_bit(nr, addr);
 }
 
+static inline bool xor_unlock_is_negative_byte(unsigned long mask,
+		volatile unsigned long *p)
+{
+#ifdef CONFIG_COLDFIRE
+	__asm__ __volatile__ ("eorl %1, %0"
+		: "+m" (*p)
+		: "d" (mask)
+		: "memory");
+	return *p & (1 << 7);
+#else
+	char result;
+	char *cp = (char *)p + 3;	/* m68k is big-endian */
+
+	__asm__ __volatile__ ("eor.b %1, %2; smi %0"
+		: "=d" (result)
+		: "di" (mask), "o" (*cp)
+		: "memory");
+	return result;
+#endif
+}
+#define xor_unlock_is_negative_byte xor_unlock_is_negative_byte
+
 /*
  * The true 68020 and more advanced processors support the "bfffo"
  * instruction for finding bits. ColdFire and simple 68000 parts
```
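To make the mechanics concrete, here is a minimal, host-portable C model of the byte-level semantics that the non-ColdFire branch gets from a single `eor.b`/`smi` pair. This is an illustrative sketch, not the kernel code: it operates on a lone byte (standing in for the low byte of the lock word, which sits at offset 3 on 32-bit big-endian m68k) and makes no atomicity guarantees.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Sketch only: models what eor.b computes and what smi captures. */
static bool xor_is_negative_byte(uint8_t mask, uint8_t *byte)
{
	uint8_t result = *byte ^ mask;	/* eor.b %1, %2 */

	*byte = result;
	/* smi %0 sets the output iff the negative (N) flag is set,
	 * i.e. iff bit 7 of the XOR result is set -- no second
	 * memory access, unlike the ColdFire branch. */
	return (int8_t)result < 0;
}

int main(void)
{
	uint8_t b = 0x81;	/* bit 0: lock (set); bit 7: e.g. a waiters flag */

	printf("%d\n", xor_is_negative_byte(0x01, &b));	/* 1: bit 7 was set */
	b = 0x01;
	printf("%d\n", xor_is_negative_byte(0x01, &b));	/* 0: bit 7 clear */
	return 0;
}
```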
```
Using EOR to clear the guaranteed-to-be-set lock bit will test the
negative flag just like the x86 implementation. This should be
more efficient than the generic implementation in filemap.c. It
would be better if m68k had __GCC_ASM_FLAG_OUTPUTS__.

Coldfire doesn't have a byte-sized EOR, so we test bit 7 after the
EOR, which is a second memory access, but it's slightly better than
the current C code.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 arch/m68k/include/asm/bitops.h | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)
```
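As a hedged sketch of why the return value matters to a caller: an unlock path can clear the lock bit and learn, in the same read-modify-write, whether a waiter bit in the same byte is set. The names below (`MY_LOCK_BIT`, `my_wake_up_waiters`, `my_unlock`) are hypothetical, invented for illustration; the series title suggests the real user is `folio_end_read` in the page cache, but that code is not shown here.

```c
/* Hypothetical caller sketch; none of these names are kernel API. */
#define MY_LOCK_BIT	(1UL << 0)	/* guaranteed set before unlock */

extern bool xor_unlock_is_negative_byte(unsigned long mask,
					volatile unsigned long *p);
extern void my_wake_up_waiters(volatile unsigned long *word); /* hypothetical */

static void my_unlock(volatile unsigned long *word)
{
	/* One operation clears the lock bit and, via the negative
	 * flag, reports whether bit 7 (a waiters flag in this
	 * sketch) is set, so the wakeup slow path runs only when
	 * someone is actually waiting. */
	if (xor_unlock_is_negative_byte(MY_LOCK_BIT, word))
		my_wake_up_waiters(word);
}
```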