Message ID | 20190219155728.19163-1-tbogendoerfer@suse.de (mailing list archive)
---|---
Series | MIPS: SGI-IP27 rework
Hi Thomas,

On Tue, Feb 19, 2019 at 04:57:14PM +0100, Thomas Bogendoerfer wrote:
> SGI IP27 (Origin/Onyx2) and SGI IP30 (Octane) have a similar
> architecture and share some hardware (ioc3/bridge). To share
> the software parts this patchset reworks SGI IP27 interrupt
> and pci bridge code. By using features Linux gained during the
> many years since SGI IP27 code was integrated this even results
> in code reduction and IMHO cleaner code.
>
> Tests have been done on a two module O200 (4 CPUs) and an
> Origin 2000 (8 CPUs).
>
> My next step in integrating SGI IP30 support is splitting ioc3eth
> into an MFD and subdevice drivers. Prototype is working, but still
> needs more clean ups.
>
> Changes in v2:
>
> - replaced HUB_L/HUB_S by __raw_readq/__raw_writeq
> - removed union bridge_ate
> - replaced remaining fields in slice_data by per_cpu data
> - use generic_handle_irq instead of do_IRQ
> - use hierarchy irq domain for stacking bridge and hub interrupt
> - moved __dma_to_phys/__phy_to_dma to mach-ip27/dma-direct.h
> - use dev_to_node() for pcibus_to_node() implementation
>
> Thomas Bogendoerfer (10):
>   MIPS: SGI-IP27: get rid of volatile and hubreg_t
>   MIPS: SGI-IP27: clean up bridge access and header files
>   MIPS: SGI-IP27: use pr_info/pr_emerg and pr_cont to fix output
>   MIPS: SGI-IP27: do xtalk scanning later
>   MIPS: SGI-IP27: do boot CPU init later
>   MIPS: SGI-IP27: rework HUB interrupts
>   PCI: call add_bus method also for root bus
>   MIPS: SGI-IP27: use generic PCI driver
>   genirq/irqdomain: fall back to default domain when creating hierarchy
>     domain
>   MIPS: SGI-IP27: abstract chipset irq from bridge

I have patches 1-6 applied locally - would you be happy for those to be
pushed to mips-next without 7-10? I know ip27_defconfig still builds
but have no idea whether it still works :)

I don't appear to have received patches 7 or 9 but I see from archives
there'll be a v3 of patch 9 at least.

So I can either apply 1-6 for v5.1 or defer the whole series. Let me
know - I'm happy either way.

Thanks,
    Paul
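For readers unfamiliar with the APIs named in the changelog above, here is a
minimal sketch of the general shape of two of the conversions: replacing the
HUB_L/HUB_S macros with __raw_readq/__raw_writeq, and expressing
pcibus_to_node() via dev_to_node(). This is not the actual IP27 code from the
series; hub_read_reg(), hub_write_reg() and my_pcibus_to_node() are made-up
names used purely for illustration.

    /*
     * Illustrative sketch only -- not the actual IP27 patches.
     * Uses generic kernel APIs; all function names are placeholders.
     */
    #include <linux/io.h>
    #include <linux/pci.h>

    /* Old style: val = HUB_L((u64 *)reg); -- a volatile 64-bit load macro. */
    static inline u64 hub_read_reg(void __iomem *reg)
    {
            return __raw_readq(reg);
    }

    /* Old style: HUB_S((u64 *)reg, val); -- a volatile 64-bit store macro. */
    static inline void hub_write_reg(void __iomem *reg, u64 val)
    {
            __raw_writeq(val, reg);
    }

    /*
     * pcibus_to_node() expressed via dev_to_node(): return the NUMA node
     * of the struct device embedded in the PCI bus, as set up by the
     * generic PCI host code.
     */
    static inline int my_pcibus_to_node(struct pci_bus *bus)
    {
            return dev_to_node(&bus->dev);
    }

The __raw_* accessors perform native-endian accesses without implied barriers,
which matches the semantics of the old direct volatile dereference the HUB
macros presumably used.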
On Thu, 21 Feb 2019 20:50:39 +0000
Paul Burton <paul.burton@mips.com> wrote:

> I don't appear to have received patches 7 or 9 but I see from archives
> there'll be a v3 of patch 9 at least.

I'm still fighting with git send-email to do proper collecting of addresses...
And the mails to your @mips.com address bounced yesterday.

> So I can either apply 1-6 for v5.1 or defer the whole series. Let me
> know - I'm happy either way.

All changes will happen in patches 7-10, so I'd appreciate it if you
could apply 1-6 :-)

Thank you,
	Thomas.
On Fri, Feb 22, 2019 at 09:14:18AM +0100, Thomas Bogendoerfer wrote:
> I'm still fighting with git send-email to do proper collecting of addresses...
> And the mails to your @mips.com address bounced yesterday.

FYI, it also bounced for me yesterday (or the day before, depending on
the time zone).
Hi Thomas,

On Fri, Feb 22, 2019 at 09:14:18AM +0100, Thomas Bogendoerfer wrote:
> On Thu, 21 Feb 2019 20:50:39 +0000
> Paul Burton <paul.burton@mips.com> wrote:
>
> > I don't appear to have received patches 7 or 9 but I see from archives
> > there'll be a v3 of patch 9 at least.
>
> I'm still fighting with git send-email to do proper collecting of addresses...
> And the mails to your @mips.com address bounced yesterday.

Interesting - thanks for letting me know. Some of our IT infrastructure
has been physically moved over the weekend, so hopefully it's just
fallout from that, and I seem to be able to email myself from gmail at
least. Will keep an eye on it though!

> > So I can either apply 1-6 for v5.1 or defer the whole series. Let me
> > know - I'm happy either way.
>
> All changes will happen in patches 7-10, so I'd appreciate it if you
> could apply 1-6 :-)

Great, I've pushed out mips-next with patches 1-6 applied.

My ack email script isn't yet smart enough to respond to the cover
letter for a series that's only partially applied, so apologies for
the 6 separate acks :)

Thanks,
    Paul