Message ID: 20250227162823.3585810-1-david@protonic.nl (mailing list archive)
Series: Add Linux Motion Control subsystem
Hello David and others,

On Thursday 27 of February 2025 17:28:16 David Jander wrote:
> Request for comments on: adding the Linux Motion Control subsystem to the
> kernel.

I have noticed on Phoronix that the new subsystem is emerging. This is an
area where I have a lot (more than 30 years) of experience at my company,
and I have done a lot with my students at university as well. I have a big
interest in this interface fitting our use needs and offering future
integration of our already open-source systems/components.

This is a preliminary reply; I want to find time for more discussion and
analysis (which is quite hard during the summer term, when I have a lot of
teaching and an ongoing project as well).

I would like to discuss future subsystem evolution as well, which would
allow creation of coordinated axis groups, incremental attachment of smooth
segments based on N-th order splines, path planning and re-planning if the
target changes in reaction to camera or other sensor input, etc.

At this moment I am interested in whether there is a site which would start
to collect these ideas and where some references can be added. I think that
I have quite some stuff to offer.

To give an idea of my direction of thinking and interface needs, I will
provide some references to our projects, which are often sold commercially
but mostly conceived as hobby projects.

Coordinated axis-group movement with incremental spline segment addition
into a command queue (the COORDMV component of our PXMC library) is
demonstrated on an old BOSCH SR 450 SCARA system. The robot never fully
worked at Skoda Auto with the original BOSCH control unit, but when it was
donated to the Czech Technical University, we built a control unit at my
company based on a Motorola 68376 MCU around the year 2000. I later paid a
student to prepare a demo in Python to demonstrate the system.
You can see the videos "MARS 8 BigBot" and "Robot Bosch SR 450 Drawing
Roses" at

http://pikron.com/pages/products/motion_control.html

The related Python application is at

https://github.com/cvut/pyrocon

In the far future, I can imagine that it could connect to the proposed LMC
API and achieve the same results. The related MARS 8 control unit page:

http://pikron.com/pages/products/motion_control/mars_8.html

The CPU board, for a museum or out of curiosity:

http://pikron.com/pages/products/cpu_boards/mo_cpu1.html

The firmware main application

https://gitlab.com/pikron/projects/mo_cpu1/mars-mo_cpu1

uses our PXMC motion control library

https://gitlab.com/pikron/sw-base/pxmc

There is basic documentation for it on its site:

https://pxmc.org/
https://pxmc.org/files/pxmc.pdf

It is used in a system-less environment on the MARS 8 system, and the
actual control at a fixed sampling frequency is done in a timer interrupt
at 1 kHz. More such units have served our students to control CRS A465
robots for more than 20 years already; their original control units broke
with age...
The same library has been used in our design of HW and SW for infusion
systems (MSP430 + iMX1 with RTEMS)

https://pikron.com/pages/devel/medinst.html

HPLC systems (LPC1768 HW)

http://pikron.com/pages/products/hplc/lcp_5024.html

and on a newer system-less LPC4088 + Xilinx XC6SLX9 system used, for
example, for several ESA and ADS projects:

https://www.esa.int/ESA_Multimedia/Images/2023/06/W-band_on_the_run
https://github.com/esa/lxrmount
https://gitlab.com/pikron/projects/lx_cpu/rocon-commander/-/wikis/lxr-lisa-com

The LX_RoCoN is based on an FPGA design with up to 8 IRC inputs, 16
arbitrarily assignable PWM H-bridge outputs, and a TUMBL (open-source
MicroBlaze variant) co-processor for up to four electronic commutations
for PMSM, stepper, or IRC-equipped stepper motors:

https://gitlab.com/pikron/projects/lx_cpu/lx-rocon

The commutation ((forward + inverse) x (Park + Clarke)) by the
co-processor runs at the PWM frequency (20 kHz); the D+Q current PI,
position PID, and COORDMV run at 4 kHz. The FPGA design was started in the
frame of this thesis:

https://dspace.cvut.cz/bitstream/handle/10467/23347/F3-DP-2014-Meloun-Martin-prace.pdf

More Linux, RTEMS, NuttX, etc. theses led by me:

https://gitlab.fel.cvut.cz/otrees/org/-/wikis/theses-defend

More information, often about RT and motion control:

https://gitlab.fel.cvut.cz/otrees/org/-/wikis/knowbase

Back to GNU/Linux: an experiment to run our PXMC library on Linux,
demonstrated on Raspberry Pi, AM4300, and Xilinx Zynq with DC and PMSM
motors:

https://gitlab.com/pikron/projects/pxmc-linux

The HW with a small FPGA implementing IRC, 3x PWM, and current ADC
commanding and collection, connected to a Raspberry Pi by SPI:

https://gitlab.com/pikron/projects/rpi/rpi-mc-1

It is intended for demonstration to enthusiasts, not for industry. (I am
not happy to see H2 filling stations controlled by an RPi today...)
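[Editorial aside: the commutation math mentioned above ((forward +
inverse) x (Park + Clarke)) can be illustrated with a short floating-point
sketch. The function names are illustrative only; the real TUMBL/PXMC code
works in fixed-point arithmetic.]

```python
import math

def clarke(i_a, i_b):
    """Clarke transform: phase currents (a, b) -> stator frame (alpha, beta).
    Assumes i_a + i_b + i_c == 0, so the c phase is implicit."""
    return i_a, (i_a + 2.0 * i_b) / math.sqrt(3.0)

def park(i_alpha, i_beta, theta):
    """Park transform: rotate stator-frame (alpha, beta) into rotor frame (d, q)."""
    c, s = math.cos(theta), math.sin(theta)
    return c * i_alpha + s * i_beta, -s * i_alpha + c * i_beta

def inverse_park(v_d, v_q, theta):
    """Inverse Park: rotate rotor-frame (d, q) voltages back to (alpha, beta)."""
    c, s = math.cos(theta), math.sin(theta)
    return c * v_d - s * v_q, s * v_d + c * v_q
```

The forward pair runs on the measured currents and the inverse pair on the
current controller's d/q voltage outputs, once per PWM period (20 kHz in
the LX_RoCoN case).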
But the same code can be run on Xilinx Zynq with a DC motor peripheral

https://gitlab.fel.cvut.cz/canbus/zynq/zynq-can-sja1000-top/-/tree/master/system/ip/dcsimpledrv_1.0

and a PMSM peripheral

https://gitlab.fel.cvut.cz/canbus/zynq/zynq-can-sja1000-top/-/tree/master/system/ip/pmsm_3pmdrv1_to_pins

but there are even more advanced options, even for Linux. The TUMBL
coprocessor has been replaced by a small RISC-V core developed in the
frame of our Advanced Computer Architectures course by my students

https://gitlab.fel.cvut.cz/otrees/fpga/rvapo-vhdl

and the 3-phase motor peripheral has been combined with this coprocessor
on Zynq. So PREEMPT_RT Linux (or even RTEMS) can deliver D and Q PWM
values to shared memory, and the coprocessor takes care of the commutation
at 20 kHz; it then collects the A, B, C currents, converts them at 20 kHz
to D and Q, and filters them to deliver a cumulative sum and accumulated
sample count to the slower Linux control loop. The ARM core can access the
peripherals directly as well, for debugging purposes etc. The Linux/RTEMS
application source:

https://gitlab.fel.cvut.cz/otrees/fpga/rvapo-apps/-/tree/master/apps/rvapo-pmsm

The co-processor firmware source:

https://gitlab.fel.cvut.cz/otrees/fpga/rvapo-vhdl/-/blob/main/software/c/firmware_fresh/firmware.c

The 3-phase peripheral can be synthesized even by a fully open-source
toolchain for iCE40, and PMSM motor control has been demonstrated even on
the cheap ICE-V Wireless (ESP32C3 + iCE40) with SW running NuttX:

https://gitlab.fel.cvut.cz/otrees/risc-v-esp32/ice-v-pmsm

We have targets for most of these peripherals under Linux and NuttX for
pysimCoder

https://github.com/robertobucher/pysimCoder

Some examples of how pysimCoder is used by an independent company:

https://www.youtube.com/@robots5/videos

That is on NuttX, but on RPi and Zynq it works even better on GNU/Linux.
So in general, I think that we have a large portfolio of building blocks
which would allow building motion and robotic controllers, communications,
etc.,
and I would be happy if they were reused, and if some projects were even
conceived together with others to join forces. It would be ideal if all
the motion-control-related resources and links could be collected
somewhere so that the wheel is not reinvented unnecessarily.

Most of my code is Mozilla, GPL, etc... I have the right to relicense my
company's stuff if the license does not fit. On the other hand, I do not
intend to follow offers such as the one from a well-funded chip-related
association, which offered us to relicense everything to them while
retaining no control or additional rights, and they would not take care of
the valuable project at all: no looking for funding, no promise of
development, etc... So there are some limits.

Best wishes,

Pavel

Pavel Pisa
    phone:      +420 603531357
    e-mail:     pisa@cmp.felk.cvut.cz
    Department of Control Engineering FEE CVUT
    Karlovo namesti 13, 121 35, Prague 2
    university: http://control.fel.cvut.cz/
    personal:   http://cmp.felk.cvut.cz/~pisa
    company:    https://pikron.com/ PiKRON s.r.o.
                Kankovskeho 1235, 182 00 Praha 8, Czech Republic
    projects:   https://www.openhub.net/accounts/ppisa
    social:     https://social.kernel.org/ppisa
    CAN related: http://canbus.pages.fel.cvut.cz/
    RISC-V education: https://comparch.edu.cvut.cz/
    Open Technologies Research Education and Exchange Services
    https://gitlab.fel.cvut.cz/otrees/org/-/wikis/home
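[Editorial aside: the fast/slow loop hand-off described above, where the
coprocessor accumulates d/q current samples and delivers a cumulative sum
plus sample count to the slower Linux control loop, can be sketched as
follows. The class and method names are illustrative; the real interface
is a shared-memory region written by the RISC-V firmware, not a Python
class.]

```python
class DQAccumulator:
    """Decimation by averaging: the fast loop (e.g. 20 kHz) accumulates
    rotor-frame current samples; the slow loop (e.g. 4 kHz) reads the sum
    and count, computes the window mean, and resets the accumulator."""

    def __init__(self):
        self.sum_d = 0.0
        self.sum_q = 0.0
        self.count = 0

    def fast_loop_sample(self, i_d, i_q):
        # Called once per PWM period by the commutation coprocessor.
        self.sum_d += i_d
        self.sum_q += i_q
        self.count += 1

    def slow_loop_read(self):
        # Called once per control period by the Linux current loop.
        n = self.count
        if n == 0:
            return 0.0, 0.0
        mean = (self.sum_d / n, self.sum_q / n)
        self.sum_d = self.sum_q = 0.0
        self.count = 0
        return mean
```

Averaging over the accumulated window also acts as a simple anti-aliasing
filter for the slower current controller.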
Dear Pavel,

Thanks a lot for starting the discussion...

On Fri, 28 Feb 2025 10:35:57 +0100 Pavel Pisa <ppisa@pikron.com> wrote:
> Hello David and others,
>
> On Thursday 27 of February 2025 17:28:16 David Jander wrote:
> > Request for comments on: adding the Linux Motion Control subsystem to
> > the kernel.
>
> I have noticed on Phoronix that the new subsystem is emerging.

Being featured on Phoronix on day one wasn't on my bingo card for this
year, to be honest... :-)

> This is an area where I have a lot (more than 30 years) of experience
> at my company, and I have done a lot with my students at university as
> well. I have a big interest in this interface fitting our use needs
> and offering future integration of our already open-source
> systems/components.

This is very impressive and I am honored to have gotten your attention. I
am looking forward to discussing this, although I am not sure whether all
of this should happen here on LKML?

> This is a preliminary reply; I want to find time for more discussion
> and analysis (which is quite hard during the summer term, when I have
> a lot of teaching and an ongoing project as well).
>
> I would like to discuss future subsystem evolution as well, which
> would allow creation of coordinated axis groups, incremental
> attachment of smooth segments based on N-th order splines, path
> planning and re-planning if the target changes in reaction to camera
> or other sensor input, etc.

Right now LMC should be able to support hardware that has multiple
channels (axes) per device. Its UAPI can describe position-based movements
and time-based movements along any arbitrary combination of those channels
using a pre-defined speed/acceleration profile.

The profiles can be specified as an arbitrary number of speed and
acceleration values. The idea is to describe a segmented profile with
different acceleration values for segments between two different speed
values.
Simple examples are trapezoidal (accelerate from (near-)zero to Vmax with
A1, and decelerate from Vmax back to zero with D1), dual-slope or S-curve,
but the UAPI in theory permits an arbitrary number of segments if the
underlying hardware supports it.

I have some ideas for future extensions to the API that make coordinated
multi-channel movements a bit easier, but that will not be in the initial
push of LMC: for example, torque-limit profiles for controlled-torque
movements, usable for example in sliding-door controllers with AC machines
or BLDC motors; or an ioctl to send a distance vector to a selected number
of channels at once and apply a motion profile to the whole coordinated
movement. In the current version you have to set up the distances and
profiles for the individual channels and then trigger the start of the
motion, which is more cumbersome. You can already use the finish event of
a preceding motion to trigger the next one, though.

Another idea that has been floating around in my head is to make a
"virtual" motion device driver that combines individual "real"
single-channel hardware drivers into one multi-channel device, but I am
unsure whether it is really needed. Whether there is something to gain
depends on the latency limit differences between kernel space and user
space.

I think it is best to keep this initial version more limited in scope
though, as long as the needed extensions are possible in the future
without breaking existing UAPI. So I propose: let's craft a draft UAPI (in
a different place, not on LKML) that can do everything we can come up with
and then reduce it to the basics for the first version. Otherwise it will
get too complex to review, I'm afraid.

> At this moment I am interested in whether there is a site which would
> start to collect these ideas and where some references can be added.

I may put this on GitHub and create a wiki there if you think that's a
good enough place to discuss?

> I think that I have quite some stuff to offer.
That would be great! Looking forward to it :-)

> To give an idea of my direction of thinking and interface needs, I
> will provide some references to our projects, which are often sold
> commercially but mostly conceived as hobby projects.

I'll have to take some time to look into those more closely. My own
experience as far as FOSS or OSHW is concerned includes the reprap Kamaq
project:

https://reprap.org/wiki/Kamaq

TL;DR: It is a 3D printer running only Linux, and the whole controller
software is entirely written in Python (except for very little Cython/C
code). This is still my 3D printer, on which I satisfy all of my 3D print
needs. I will need to port it to LMC one day.

> Coordinated axis-group movement with incremental spline segment
> addition into a command queue (the COORDMV component of our PXMC
> library) is demonstrated on an old BOSCH SR 450 SCARA system. The
> robot never fully worked at Skoda Auto with the original BOSCH
> control unit, but when it was donated to the Czech Technical
> University, we built a control unit at my company based on a Motorola
> 68376 MCU around the year 2000. I later paid a student to prepare a
> demo in Python to demonstrate the system.
>
> You can see the videos
>
> MARS 8 BigBot and Robot Bosch SR 450 Drawing Roses
> http://pikron.com/pages/products/motion_control.html

Very impressive! Can you explain how the spline-segment information could
be conveyed to the controller? Does the controller really do an
infinitesimal spline interpolation, or does it create many small linear
vectors? LMC will try to limit math operations in kernel space as much as
possible, so hopefully all the calculations can be done in user space (or
on the controller, if that is the case).

Right now, my way of thinking was that of regular 3D printers, which
usually only implement the G0/G1 G-codes (linear interpolation). G2/G3
(circular interpolation) doesn't sound practically very useful, since it
is a special case but not very flexible.
Something like generalized spline interpolation sounds more valuable, but
I hadn't seen any hardware that can do it.

> The related Python application is there
>
> https://github.com/cvut/pyrocon
>
> In the far future, I can imagine that it could connect to the
> proposed LMC API and achieve the same results.

Let's make it so!

> [...]
> which uses our PXMC motion control library
>
> https://gitlab.com/pikron/sw-base/pxmc
>
> There is basic documentation for it on its site
>
> https://pxmc.org/
> https://pxmc.org/files/pxmc.pdf

At first glance, this looks like a piece of hardware that would fit as an
LMC device. What needs to be determined is where the boundaries lie
between controller firmware, kernel-space code, and user-space code.
Generally speaking, as a rough guideline: microsecond work is better done
in the controller firmware if possible, millisecond work can be done in
the kernel, and work on the order of tens of milliseconds or more can be
done in user space, notwithstanding latency requirements of course.

> [...]
> So in general, I think that we have a large portfolio of building
> blocks which would allow building motion and robotic controllers,
> communications, etc., and I would be happy if they were reused, and
> if some projects were even conceived together with others to join
> forces.

This sounds very interesting. Ideally one would end up with LMC capable of
interfacing with all of those devices.

> It would be ideal if all the motion-control-related resources and
> links could be collected somewhere so that the wheel is not
> reinvented unnecessarily.

I completely agree.

> Most of my code is Mozilla, GPL, etc... I have the right to relicense
> my company's stuff if the license does not fit. On the other hand, I
> do not intend to follow offers such as the one from a well-funded
> chip-related association, which offered us to relicense everything to
> them while retaining no control or additional rights, and they would
> not take care of the valuable project at all: no looking for funding,
> no promise of development, etc...
> So there are some limits.

Understandable.

Best regards,
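[Editorial aside on the spline question above: one common scheme for
segment-based generators like those in PXMC is to evaluate an N-th order
polynomial per axis from a normalized segment parameter, so no
infinitesimal interpolation or pre-chopping into small linear vectors is
needed, only one polynomial evaluation per control sample. A hypothetical
floating-point sketch, not the actual PXMC code:]

```python
def eval_spline_segment(coefs, u):
    """Evaluate one polynomial spline segment at parameter u in [0, 1].
    coefs is a per-axis list of polynomial coefficients, highest order
    first; Horner's rule keeps this to N multiply-adds per axis."""
    positions = []
    for axis_coefs in coefs:
        acc = 0.0
        for c in axis_coefs:
            acc = acc * u + c
        positions.append(acc)
    return positions
```

For example, a cubic segment with coefficients [a3, a2, a1, a0] returns a0
(the segment start point) at u = 0 and a3 + a2 + a1 + a0 at u = 1.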
Hello David,

On Friday 28 of February 2025 12:57:27 David Jander wrote:
> Dear Pavel,
>
> Thanks a lot for starting the discussion...
>
> > I have noticed on Phoronix that the new subsystem is emerging.
>
> Being featured on Phoronix on day one wasn't on my bingo card for this
> year, to be honest... :-)
>
> > This is an area where I have a lot (more than 30 years) of
> > experience at my company, and I have done a lot with my students at
> > university as well. I have a big interest in this interface fitting
> > our use needs and offering future integration of our already
> > open-source systems/components.
>
> This is very impressive and I am honored to have gotten your
> attention. I am looking forward to discussing this, although I am not
> sure whether all of this should happen here on LKML?

We should move somewhere else and invite people from LinuxCNC etc... A
GitHub project would work well if there is no reluctance toward a
commercially controlled and closed environment. GitLab, even in the
gitlab.com case, has the option to move to one's own infrastructure in the
future. We have GitLab at the university, and the companies I work with
have GitLab instances. But I think that we should stay on neutral ground.
The critical thing is some central hub where links to a specific mailing
list etc. can be placed.

Maybe we can ask the Linux Foundation to provide wiki space for the Linux
Motion Control subsystem, the same as it does for RT:

https://wiki.linuxfoundation.org/realtime/start

We can ask OSADL.org as a likely neutral space... Or a wiki at kernel.org,
maybe the most neutral of all:

https://www.wiki.kernel.org/

I am not in the core teams, but maybe somebody there on LKML would suggest
the direction. I can ask people from OSADL, CIP, RT projects etc.
directly...
But I am not the authority, and I would be happy if somebody steered us.
To not burden others, if there is no response, then I suggest limiting
follow-up e-mails to linux-iio and those who have already communicated in
this thread.

> > This is a preliminary reply; I want to find time for more
> > discussion and analysis (which is quite hard during the summer
> > term, when I have a lot of teaching and an ongoing project as
> > well).
> >
> > I would like to discuss future subsystem evolution as well, which
> > would allow creation of coordinated axis groups, incremental
> > attachment of smooth segments based on N-th order splines, path
> > planning and re-planning if the target changes in reaction to
> > camera or other sensor input, etc.
>
> Right now LMC should be able to support hardware that has multiple
> channels (axes) per device. Its UAPI can describe position-based
> movements and time-based movements along any arbitrary combination of
> those channels using a pre-defined speed/acceleration profile.
>
> The profiles can be specified as an arbitrary number of speed and
> acceleration values. The idea is to describe a segmented profile with
> different acceleration values for segments between two different
> speed values. Simple examples are trapezoidal (accelerate from
> (near-)zero to Vmax with A1, and decelerate from Vmax back to zero
> with D1), dual-slope or S-curve, but the UAPI in theory permits an
> arbitrary number of segments if the underlying hardware supports it.

Yes, when I read that, it reminded me of my API between the non-RT and RT
control tasks on Linux, and the IRQs in the system-less case of our
system.
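[Editorial aside: the segmented-profile API quoted above can be
illustrated with the trapezoidal case (accelerate to Vmax with A1,
decelerate with D1). A minimal floating-point sketch under assumed names,
not the proposed LMC UAPI:]

```python
def trapezoid_times(distance, v_max, accel, decel):
    """Phase durations (t_acc, t_cruise, t_dec) of a trapezoidal move from
    rest to rest. Falls back to a triangular profile when the distance is
    too short to reach v_max."""
    d_ramps = v_max * v_max * 0.5 * (1.0 / accel + 1.0 / decel)
    if d_ramps > distance:
        # Triangular: peak speed limited by the available distance.
        v_peak = (2.0 * distance * accel * decel / (accel + decel)) ** 0.5
    else:
        v_peak = v_max
    t_acc = v_peak / accel
    t_dec = v_peak / decel
    d_cruise = distance - v_peak * v_peak * 0.5 * (1.0 / accel + 1.0 / decel)
    t_cruise = d_cruise / v_peak
    return t_acc, t_cruise, t_dec
```

Dual-slope or S-curve profiles extend the same idea: more (speed,
acceleration) breakpoints, i.e. more segments in the profile table.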
> I have some ideas for future extensions to the API that make
> coordinated multi-channel movements a bit easier, but that will not
> be in the initial push of LMC: for example, torque-limit profiles for
> controlled-torque movements, usable for example in sliding-door
> controllers with AC machines or BLDC motors; or an ioctl to send a
> distance vector to a selected number of channels at once and apply a
> motion profile to the whole coordinated movement. In the current
> version you have to set up the distances and profiles for the
> individual channels and then trigger the start of the motion, which
> is more cumbersome. You can already use the finish event of a
> preceding motion to trigger the next one, though.

It would be worth having some command queue for a specified
(preconfigured/set-up) axis group. Our system allows adding segments to
the group queue, but the timing for a segment only specifies the shortest
time in which it can be executed. A three-pass optimization follows.

The first pass is performed at insertion time. It checks and finds the
normalized change of speeds (divided by the maximal accel/decel of the
given axis) at the vertex and finds the limiting axes: the one which
accelerates the most and the one which needs to decelerate the most. Then
it computes the speed discontinuity at the given sample period and limits
the maximal final speed of the preceding segment in such a way that the
speed change is kept under a COORDDISCONT multiple of the accel/decel
step. This way, if straight segments are almost in line, the small change
of direction does not limit the speed. The discontinuity is computed the
same way for the vertex between two N-th order spline segments, but
optimally computed spline segments can be joined with such a discontinuity
near zero.

This non-RT computation, as well as all the non-RT and RT computation on
the control unit side, is done in fixed-point math (mostly 32-bit,
sometimes 64-bit).
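[Editorial aside: the first pass described above can be sketched as a
scalar floating-point simplification; the names, the unit direction
vectors, and the COORDDISCONT-style factor are illustrative, while the
real pxmc_coordmv_seg_add() code works in fixed-point.]

```python
def junction_speed_limit(dir_prev, dir_next, axis_accel, dt, discont_factor):
    """Largest path speed allowed at the vertex between two line segments
    so that the per-axis speed step in one sample period stays below
    discont_factor times that axis's acceleration step.
    dir_prev/dir_next: unit direction vectors of the two segments;
    axis_accel: per-axis acceleration limits; dt: sample period."""
    limit = float("inf")
    for dp, dn, a in zip(dir_prev, dir_next, axis_accel):
        dv = abs(dn - dp)   # per-axis speed change per unit of path speed
        if dv > 0.0:
            limit = min(limit, discont_factor * a * dt / dv)
    return limit
```

For nearly collinear segments dv is tiny on every axis, so the limit is
large and the small direction change does not throttle the speed, matching
the behavior described above.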
The actual code of this first pass consists of the functions
pxmc_coordmv_seg_add(), pxmc_coordmv_seg_add_line() and
pxmc_coordmv_seg_add_spline():

https://gitlab.com/pikron/sw-base/pxmc/-/blob/master/libs4c/pxmc_coordmv/coordmv_base.c?ref_type=heads#L394

The new segment's final vertex limiting speed and planned speed are set to
zero.

In the second pass, the queue of segments is examined starting from the
last added one, and each segment's initial planned vertex/edge speed is
updated: increased up to the minimum of the limit given by the
discontinuity and the speed from which the segment can still decelerate to
its planned final speed. If the start vertex limit is increased, the
algorithm proceeds with the previous segment:

https://gitlab.com/pikron/sw-base/pxmc/-/blob/master/libs4c/pxmc_coordmv/coordmv_base.c?ref_type=heads#L682

There are some rules and tricks to do this lock-free with respect to the
IRQ executor etc...

Then the actual execution at the sampling frequency advances the
normalized parameter going through the segment from 0 to 2^32 in modulo
2^32 arithmetic. The increase is limited by the smallest maximal speed of
the axes, recomputed over the distance to a parameter change, and by the
maximal allowed accelerations, likewise recomputed to a parameter change.
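[Editorial aside: the backward planning pass and the per-sample parameter
advance described here can be sketched together. This is a floating-point
scalar simplification with the parameter in [0, 1] instead of the real
2^32 fixed-point range, and with illustrative names rather than the actual
pxmc_coordmv code.]

```python
def backward_speed_plan(segments):
    """Second pass: walk the queue from the last added segment and raise
    the planned speed at each segment's start vertex up to the minimum of
    its discontinuity limit and the speed from which the segment can still
    decelerate to its planned final speed (v_in^2 <= v_out^2 + 2*d*L)."""
    for i in range(len(segments) - 1, -1, -1):
        seg = segments[i]
        reachable = (seg["final_speed"] ** 2
                     + 2.0 * seg["decel"] * seg["length"]) ** 0.5
        seg["start_speed"] = min(seg["vertex_limit"], reachable)
        if i > 0:  # propagate to the preceding segment's final speed
            segments[i - 1]["final_speed"] = seg["start_speed"]
    return segments

def advance_parameter(u, du, du_max, ddu_max, u_end=1.0):
    """One sample of the executor: advance the normalized parameter u.
    du_max: speed cap recomputed from the slowest axis over this segment;
    ddu_max: acceleration cap likewise. The brake term guarantees du can
    still be ramped to zero at u_end, so a starved queue stops safely."""
    du = min(du + ddu_max, du_max)
    brake = (2.0 * ddu_max * max(u_end - u, 0.0)) ** 0.5
    du = min(du, brake)
    return min(u + du, u_end), du

def line_position(u, start, end):
    """Per-axis positions on a linear segment for parameter u in [0, 1]."""
    return [s + u * (e - s) for s, e in zip(start, end)]
```

Because the executor never exceeds the deceleration needed to stop at the
end of the last planned segment, a slow commander only slows or stops the
motion along the path, which is the safety property described below.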
Then the final speed is limited by the final deceleration toward the end
of the segment in pxmc_coordmv_upda():

https://gitlab.com/pikron/sw-base/pxmc/-/blob/master/libs4c/pxmc_coordmv/coordmv_base.c?ref_type=heads#L351

Then the actual positions of the axes are computed based on the parameter;
see pxmc_coordmv_line_gen() or pxmc_coordmv_spline_gen():

https://gitlab.com/pikron/sw-base/pxmc/-/blob/master/libs4c/pxmc_coordmv/coordmv_base.c?ref_type=heads#L87
https://gitlab.com/pikron/sw-base/pxmc/-/blob/master/libs4c/pxmc_coordmv/coordmv_base.c?ref_type=heads#L151

This approach ensures that if the non-RT part, or even some commander (in
the demo case, Python sending segments to be added over a 19200 bit/s
serial line), does not keep up with segment execution, then the robot
stops at the final position without exceeding the deceleration limit on
any axis. So it is safe; pushback on some axis can even control a
slow-down, or a slight move back of the parameter, etc., while the
system/actuator/tool keeps the path.

If there is interest, I can find a more detailed description of the
optimizations and computations. I even have code for testing it and
checking its correctness on the command line.

> Another idea that has been floating around in my head is to make a
> "virtual" motion device driver that combines individual "real"
> single-channel hardware drivers into one multi-channel device, but I
> am unsure whether it is really needed. Whether there is something to
> gain depends on the latency limit differences between kernel space
> and user space.

In fact, a similar idea has been programmed and in use for 25 years
already. All computation is in 32-bit units specific to each axis, and
only in fixed-point arithmetic. One problem at the time was a fast 64-bit
square root. Everything was computed on a 21 MHz CPU with a 16-bit bus, no
caches, and instructions taking from 2 to many cycles.
It was capable of doing that for all eight axes, plus some other control tasks, such as flashing lights at specific positions, for example for gem inspection by a camera connected to a PC, and then controlling their sorting into the right pocket by air streams from the MARS 8 control unit when it advanced to a given location, etc. So some parts of the code have been rewritten in assembly. But today, even on the not-so-fast LPC4088, it can do this plus D-Q PMSM motor control without the need for assembly optimizations.

> I think it is best to keep this initial version more limited in scope
> though, as long as the needed extensions are possible in the future without
> breaking existing UAPI.

Yes, but I would be happy if the API is designed in such a way that it is not an obstacle when it is extended later, and I have an idea of what such further goals would be for the applications I have already solved and have had running for decades at an industrial level.

> So I propose: Let's craft a draft UAPI (in a different place, not on LKML)
> that can do everything we can come up with and then reduce it to the basics
> for the first version. Otherwise it will get too complex to review, I'm
> afraid.

Yes, I agree.

To have some idea of the command set of our system, there is documentation in English for our older system, which was capable of controlling three axes with an 8-bit 80552

http://cmp.felk.cvut.cz/~pisa/mars/mars_man_en.pdf

Unfortunately, I still need to translate the coordinated-movement manual to English, but the command set is listed (by the unit itself) there

https://cmp.felk.cvut.cz/~pisa/mars8/mo_help.txt

PXMC is even documented in Konrad Skup's thesis

https://wiki.control.fel.cvut.cz/mediawiki/images/8/83/Dp_2007_skup_konrad.pdf

I hoped, and had received a promise from my former colleague leading the thesis (based on my company's documentation and code), that the text would become a base for the open documentation on the https://www.pxmc.org/ site. I have fulfilled my part, bought the domain, and opened the PXMC code.
However, he and his student did not mention that the source of the code was my company; instead, they sold the text for commercial (paid-only) publication and access. So my hopes of building a community faded under the stream of real projects that needed to be solved, and a broader introduction to the community has been postponed... by 18 years...

> > At this moment I have interrest if there is site which
> > would start to collect these ideas and where can be
> > some references added.
>
> I may put this on github and create a wiki there if you think that's a good
> enough place to discuss?

If there is no feedback with a better place than my initial list of options, I am OK with a project group on GitHub, where the initial project will be located with a wiki and issue tracker to start the communication, and I would be happy to participate (as my time, teaching and projects allow).

> > I think that I have quite some stuff to offer.
>
> That would be great! Looking forward to it :-)
>
> > To have idea about my direction of thinking and needs
> > of interface I would provide some references even
> > to our often commercially sold but mostly conceived
> > as hobby projects.
>
> I'll have to take some time to look into those more closely. My own
> experience as far as FOSS or OSHW concerns includes the reprap Kamaq
> project:
>
> https://reprap.org/wiki/Kamaq

OK, a nice target, but I would like to support 5-axis CNCs with precise machining, haptic human-machine systems etc... with DC, stepper and PMSM motors, vector control, high-resolution positional feedback etc. We control, for example, the positioning of up to 30 kg spherical lenses in an interferometric system with sub-micrometre precision.
The system allows inspection which, thanks to multiple angles, achieves lens surface reconstruction at the level of nanometres

https://toptec.eu/export/sites/toptec/.content/projects/finished/measuring-instrument.pdf

We use optical linear sensors combined with 10k-per-revolution incremental sensors on the cheap stepper motor actuators, and precise mechanics... and more tricks... And I can see the use of some cheap Linux board, e.g. a Zynq or Beagle-V Fire, which I have on my table, there in the future...

> TL;DR: It is a 3D printer running only Linux and the whole controller
> software is entirely written in python (except for very little Cython/C
> code). This is still my 3D printer on which I satisfy all of my 3D print
> needs. I will need to port it to LMC one day.
>
> > Coordinated axes groups movement with incremental spline
> > segment addition into command queue (our COORDMV componet
> > of PXMC library) is demonstrated on old BOSCH SR 450 SCARA
> > system. The robot has never fully worked at Skoda Auto
> > with original BOSH control unit. But when it has been donated
> > to Czech Technical University, we have build control
> > unit at my copany based on Motorola 68376 MCU in around
> > 2000 year. I have later paid one student to prepare
> > demo in Python to demonstrate the system.
> >
> > You can click on video
> >
> > MARS 8 BigBot and Robot Bosch SR 450 Drawing Roses
> > http://pikron.com/pages/products/motion_control.html
>
> Very impressive! Can you explain how the spline-segment information could
> be conveyed to the controller? Does the controller really do an
> infinitesimal spline interpolation, or does it create many small linear
> vectors?

As I referenced above, the splines are interpreted at the sampling frequency; all parameters are represented as 32-bit signed numbers etc... So there is no conversion to straight segments.
The precise positions are recomputed, even with high resolution, over the IKT; then some intervals of these points are interpolated by spline segments with controlled error, and these segments are sent to the unit so as to keep the command FIFO full without overflowing it. The unit reports how much space is left...

> LMC will try to limit math operations in kernel space as much as possible,
> so hopefully all the calculations can be done in user-space (or on the
> controller if that is the case).

All computation is fixed-point only, so it is no problem for the kernel, even in interrupt context. But yes, on a fully preemptive kernel where a user-space task can be guaranteed to achieve 5 kHz sampling even on cheaper ARM hardware, the interface to the kernel can consist of only the D-Q PWM command and the readback of the actual IRC position and currents.

But if you have an API for more intelligent controllers, then there is no problem putting a "SoftMAC" there to represent dumb ones to userspace in the same way.

> Right now, my way of thinking was that of regular 3D printers which usually
> only implement G0/G1 G-codes (linear interpolation). G2/G3 (circular
> interpolation) doesn't sound practically very useful since it is special
> but not very flexible. Something like generalized spline interpolation
> sounds more valuable, but I hadn't seen any hardware that can do it.
>
> > The related python application is there
> >
> > https://github.com/cvut/pyrocon
> >
> > In the far future, I can imagine that it can connect
> > to proposed LMC API and achieve the same results.
>
> Let's make it so!
>
> >[...]
> > which uses our PXMC motion control library
> >
> > https://gitlab.com/pikron/sw-base/pxmc
> >
> > There is basic documentation for it on its site
> >
> > https://pxmc.org/
> > https://pxmc.org/files/pxmc.pdf
>
> At first glance, this looks like a piece of hardware that would fit as a
> LMC device. What needs to be determined is where the boundaries lie between
> controller firmware, kernel-space and user-space code.
I propose to make that boundary fully configurable. So everything can be implemented by software emulation in the kernel if the sampling is under 5 or 10 kHz; the interface from the GNU/Linux system to the hardware is then: set PWM A, B, C; read actual IRC position and currents. Or some part can be moved to an external controller or an FPGA with a coprocessor (the commutation fits in 2 kB of RISC-V firmware programmed in C), i.e. 20 kHz or even faster Park and Clarke transformations. In that case the 4 to 10 kHz command port would carry D-Q PWM or current set points, with IRC position and D-Q currents coming back. Or your proposed LMC interface can be delivered almost directly to a more complex controller which would realize whole-segment processing.

> Generally speaking, as a rough guideline, microsecond work is better done
> in the controller firmware if possible. millisecond work can be done in the
> kernel and 10's or more millisecond work can be done in user-space,
> notwithstanding latency limit requirements of course.

OK, I agree; PREEMPT_RT is a requirement, along with no broken SMI on the x86 HW. 1 kHz is enough for DC-motor-controlled robots to run entirely on Linux. 5 kHz for PMSM D-Q control can be done in the kernel, or even in userspace, on a platform suitable for PREEMPT_RT without issues. Above 10 kHz it should go to an FPGA or external HW.

> >[...]
> > So in general, I think that we have large portfolio
> > of building blocks which would allow to build motion,
> > robotic controllers, communications etc. and I would be happy
> > if they are reused and even some project conceived
> > together with others to join the forces.
>
> This sounds very interesting. Ideally one would end up with LMC capable of
> interfacing all of those devices.

Yes.

> > It would be ideal if the all motion control related
> > resources and links could be somehow collected
> > that wheel is not reinvented unnecessarily.
>
> I completely agree.
>
> > The most of my code is Mozilla, GPL etc...
I have > > right to relicence my company stuff if the license does > > not fit. On the other hand, I do not intend to follow > > such offers as of one well funded chip related association, > > which offered us to relicense all to them with no retain > > of any control and additional right and they would not > > take care about the valuable project at all no looking > > for funding etc... no promise for developmet etc... > > So there are some limits. > > Understandable. Best wishes, Pavel
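The "always able to stop" property Pavel describes in this message (the robot halts at the final queued position without exceeding any axis's deceleration when the non-RT side stops feeding segments) comes down to clamping each sample's parameter increment to what the remaining distance still allows. A minimal fixed-point sketch; the names and scalings are invented for illustration and are not the actual PXMC symbols:

```c
#include <stdint.h>
#include <assert.h>

/* Largest du (speed in parameter units per sample) from which the
 * remaining distance rem still suffices to decelerate to zero at
 * du_dec per sample: du <= sqrt(2 * du_dec * rem). The integer
 * square root keeps everything in fixed-point arithmetic. */
static uint32_t limit_to_stop(uint32_t du, uint64_t rem, uint32_t du_dec)
{
    uint64_t max2 = 2ull * du_dec * rem;  /* assumed not to overflow 64 bits */
    uint64_t lo = 0, hi = max2 < UINT32_MAX ? max2 : UINT32_MAX;

    while (lo < hi) {                     /* binary-search integer sqrt */
        uint64_t mid = (lo + hi + 1) / 2;
        if (mid * mid <= max2)
            lo = mid;
        else
            hi = mid - 1;
    }
    return du < lo ? du : (uint32_t)lo;
}
```

Applying this clamp every sample means the speed can never exceed what the distance left in the queue can absorb, which is exactly why a starved queue degrades into a clean stop rather than an overrun.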
On 2/27/25 10:28 AM, David Jander wrote: > Request for comments on: adding the Linux Motion Control subsystem to the > kernel. > > The Linux Motion Control subsystem (LMC) is a new kernel subsystem and > associated device drivers for hardware devices that control mechanical > motion. Most often these are different types of motors, but can also be > linear actuators for example. This is something that I played around with when I first got into Linux kernel hacking as a hobbyist. It's something I've always wanted to see get upstreamed, so feel free to cc me on any future revisions of this series. I'm very interested. :-) We made drivers for basic DC motors driven by an H-bridge both with and without position feedback and also a driver for hobby-type servo motors. For those interested, there is code [1] and docs [2]. One thing we would do different if doing it over again is use a character device instead of sysfs attributes as the interface for starting/stopping/adjusting actuation. [1]: https://github.com/ev3dev/lego-linux-drivers/tree/ev3dev-stretch/motors [2]: http://docs.ev3dev.org/projects/lego-linux-drivers/en/ev3dev-stretch/motors.html > > This subsystem defines a new UAPI for motion devices on the user-space > side, as well as common functionality for hardware device drivers on the > driver side. > > The UAPI is based on a ioctl() interface on character devices representing > a specific hardware device. The hardware device can control one or more > actuators (motors), which are identified as channels in the UAPI. It is > possible to execute motions on individual channels, or combined > affecting several selected (or all) channels simutaneously. Examples of > coordinated movements of several channels could be the individual axes > of a 3D printer or CNC machine for example. > > On the hardware side, this initial set of patches also includes two drivers > for two different kinds of motors. 
One is a stepper motor controller > device that containes a ramp generator capable of autonomously executing > controlled motions following a multi-point acceleration profile > (TMC5240), as well as a simple DC motor controller driver that can control > DC motors via a half-bridge or full H-bridge driver such as the TI DRV8873 > for example. > > Towards the IIO subsystem, LMC supports generating iio trigger events that > fire at certain motion events, such as passing a pre-programmed position or > when reaching the motion target position, depending on the capabilities of > the hardware device. This enables for example triggering an ADC measurement > at a certain position during a movement. I would expect to be using the counter subsystem for position, at least in cases where there is something like a quadrature encoder involved. > > In the future, making use of PREEMPT_RT, even dumb STEP/DIR type stepper > motor controller drivers may be implemented entirely in the kernel, > depending on some characteristics of the hardware (latency jittter, > interrupt latency and CPU speed mainly). > > The existence of this subsystem may affect other projects, such as > Linux-CNC and Klipper for example. > > This code is already in use controlling machines with up to 16 stepper > motors and up to 4 DC motors simutaneously. Up to this point the UAPI > has shown to be adequate and sufficient. Careful thought has gone into > the UAPI design to make sure it coveres as many use-cases as possible, > while being versioned and extensible in the future, with backwards > compatibility in mind. > > David Jander (7): > drivers: Add motion control subsystem Would it be too broad to call this an actuation subsystem instead where motion is just one kind of actuation? 
> motion: Add ADI/Trinamic TMC5240 stepper motor controller
> motion: Add simple-pwm.c PWM based DC motor controller driver
> Documentation: Add Linux Motion Control documentation
> dt-bindings: motion: Add common motion device properties
> dt-bindings: motion: Add adi,tmc5240 bindings
> dt-bindings: motion: Add motion-simple-pwm bindings
>
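On the quadrature encoders mentioned above: the counter subsystem (or dedicated hardware) normally handles the decoding, but the core of it is just a 16-entry transition table. A minimal software sketch, not taken from any of the drivers referenced in this thread:

```c
#include <stdint.h>
#include <assert.h>

/* Transition table indexed by (prev_state << 2) | curr_state, where a
 * state is the 2-bit (A, B) pair. Valid Gray-code steps count +1/-1;
 * illegal double transitions are counted as 0 here (real drivers may
 * flag them as errors instead). */
static const int8_t qdec_table[16] = {
     0, +1, -1,  0,
    -1,  0,  0, +1,
    +1,  0,  0, -1,
     0, -1, +1,  0,
};

/* Advance the position count by one sampled A/B state transition. */
static int32_t qdec_update(int32_t count, unsigned prev, unsigned curr)
{
    return count + qdec_table[((prev & 3) << 2) | (curr & 3)];
}
```

One full electrical cycle (00 → 01 → 11 → 10 → 00) yields four counts, which is where the "4x" resolution of quadrature decoding comes from.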
Dear David, Thanks for reviewing! On Fri, 28 Feb 2025 16:36:31 -0600 David Lechner <dlechner@baylibre.com> wrote: > On 2/27/25 10:28 AM, David Jander wrote: > > Request for comments on: adding the Linux Motion Control subsystem to the > > kernel. > > > > The Linux Motion Control subsystem (LMC) is a new kernel subsystem and > > associated device drivers for hardware devices that control mechanical > > motion. Most often these are different types of motors, but can also be > > linear actuators for example. > > This is something that I played around with when I first got into Linux > kernel hacking as a hobbyist. It's something I've always wanted to see get > upstreamed, so feel free to cc me on any future revisions of this series. > I'm very interested. :-) Cool! Will keep you posted. > We made drivers for basic DC motors driven by an H-bridge both with and without > position feedback and also a driver for hobby-type servo motors. For those > interested, there is code [1] and docs [2]. One thing we would do different > if doing it over again is use a character device instead of sysfs attributes > as the interface for starting/stopping/adjusting actuation. > > [1]: https://github.com/ev3dev/lego-linux-drivers/tree/ev3dev-stretch/motors > [2]: http://docs.ev3dev.org/projects/lego-linux-drivers/en/ev3dev-stretch/motors.html > > > > > This subsystem defines a new UAPI for motion devices on the user-space > > side, as well as common functionality for hardware device drivers on the > > driver side. > > > > The UAPI is based on a ioctl() interface on character devices representing > > a specific hardware device. The hardware device can control one or more > > actuators (motors), which are identified as channels in the UAPI. It is > > possible to execute motions on individual channels, or combined > > affecting several selected (or all) channels simutaneously. 
Examples of
> > coordinated movements of several channels could be the individual axes
> > of a 3D printer or CNC machine for example.
> >
> > On the hardware side, this initial set of patches also includes two drivers
> > for two different kinds of motors. One is a stepper motor controller
> > device that containes a ramp generator capable of autonomously executing
> > controlled motions following a multi-point acceleration profile
> > (TMC5240), as well as a simple DC motor controller driver that can control
> > DC motors via a half-bridge or full H-bridge driver such as the TI DRV8873
> > for example.
> >
> > Towards the IIO subsystem, LMC supports generating iio trigger events that
> > fire at certain motion events, such as passing a pre-programmed position or
> > when reaching the motion target position, depending on the capabilities of
> > the hardware device. This enables for example triggering an ADC measurement
> > at a certain position during a movement.
>
> I would expect to be using the counter subsystem for position, at least in
> cases where there is something like a quadrature encoder involved.

I will have to think about it. Since there are some Linux-capable SoCs that have counter inputs able to do quadrature decoding, it might make sense to support that.

For now, the TMC5240 controller for example has support for encoders, and while in this patch-set support for it is minimal, the idea was that a motion controller that supports an encoder would just offer the option to use the encoder as the authoritative source for position information.

So let's say you have a DC motor, for example. Without an encoder or any other means of speed/position feedback, the best one can do is establish a 1:1 relationship between duty-cycle and speed, accepting all the inaccuracies of doing so. So a motion controller using a DC motor would just do that if it has no encoder.
OTOH, if there is an encoder as a source of position and speed information, the driver could work with more accurate data. It all depends, but in the end the interface towards the user is the same: move with some speed towards some position or for some amount of time. > > In the future, making use of PREEMPT_RT, even dumb STEP/DIR type stepper > > motor controller drivers may be implemented entirely in the kernel, > > depending on some characteristics of the hardware (latency jittter, > > interrupt latency and CPU speed mainly). > > > > The existence of this subsystem may affect other projects, such as > > Linux-CNC and Klipper for example. > > > > This code is already in use controlling machines with up to 16 stepper > > motors and up to 4 DC motors simutaneously. Up to this point the UAPI > > has shown to be adequate and sufficient. Careful thought has gone into > > the UAPI design to make sure it coveres as many use-cases as possible, > > while being versioned and extensible in the future, with backwards > > compatibility in mind. > > > > David Jander (7): > > drivers: Add motion control subsystem > > Would it be too broad to call this an actuation subsystem instead where motion > is just one kind of actuation? I think it is hard enough already to make a UAPI for motion that is general enough to encompass all types of different motors and motion actuators. Generalizing even further without a serious risk of shortcomings seems almost impossible, but I am open to suggestions. Best regards,
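The open-loop vs. encoder distinction discussed in this message can be made concrete: without feedback, the driver can only map target speed to duty cycle 1:1; with an encoder, it can close a simple PI loop on the measured speed. A toy fixed-point sketch; the gains, scalings and clamp are invented for the example and are not LMC code:

```c
#include <stdint.h>
#include <assert.h>

/* One sample of a toy fixed-point PI speed loop. The integral term
 * eliminates the steady-state error an open-loop duty-cycle mapping
 * would leave behind (no anti-windup shown, for brevity). */
static int32_t pi_step(int32_t target, int32_t measured, int32_t *integ)
{
    int32_t err = target - measured;

    *integ += err;                        /* accumulate the error */
    int32_t duty = err / 2 + *integ / 16; /* kp = 1/2, ki = 1/16 */
    if (duty > 4095) duty = 4095;         /* 12-bit PWM clamp */
    if (duty < 0)    duty = 0;
    return duty;
}
```

Without the encoder, the same driver would degrade to `duty = k * target` and simply live with load-dependent speed error, which is the trade-off described above.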
Hi Pavel, On Fri, 28 Feb 2025 16:23:33 +0100 Pavel Pisa <ppisa@pikron.com> wrote: > Hello David, > > On Friday 28 of February 2025 12:57:27 David Jander wrote: > > Dear Pavel, > > > > Thanks a lot for starting the discussion... > > > > On Fri, 28 Feb 2025 10:35:57 +0100 > > > > Pavel Pisa <ppisa@pikron.com> wrote: > > > Hello David and others > > > > > > On Thursday 27 of February 2025 17:28:16 David Jander wrote: > > > > Request for comments on: adding the Linux Motion Control subsystem to > > > > the kernel. > > > > > > I have noticed on Phoronix, that the new system is emerging. > > > > Being featured on Phoronix on day one wasn't on my bingo card for this > > year, to be honest... :-) > > > > > This is area where I have lot (more than 30 years) of experience > > > at my company and I have done even lot with my studnets at university. > > > I have big interest that this interface fits our use neeeds > > > and offers for future integration of our already open-source > > > systems/components. > > > > This is very impressive and I am honored to have gotten your attention. I > > am looking forward to discussing this, although I am not sure whether all > > of this should happen here on LKML? > > We should move somewhere else and invite people from Linux > CNC etc... I agree. > GitHub project would work well if there is not some reluctance > to commercially controlled and closed environment. I am open to suggestions. I just happen to have a github account and also have my code there: https://github.com/yope/linux/tree/mainline-lmc-preparation > Gitlab even in Gitlab.com case has option to move to own > infrastructure in the future. We have Gitlab at university, > companies I work with has Gitlab instances. But I think that > we should stay on neutral ground. > > The critical is some central hub where links to specific > mailinglist etc. can be placed. 
May it be we can ask
> Linux foundation to provide wiki space for Linux Motion Control
> Subsystem same as it is for RT
>
> https://wiki.linuxfoundation.org/realtime/start
>
> We can ask OSADL.org as likely neutral space...

That sounds really great. We were bronze members of OSADL, so maybe that's a good idea. I see you added Carsten Emde in CC ;-)

> Or wiki at kernel.org, may it the most neutral...
>
> https://www.wiki.kernel.org/

Yes, that may be even a better place than OSADL.

> I am not in the core teams but may it be there on LKLM
> somebody would suggest the direction. I can ask people
> from OSADL, CIPS, RT projects etc. directly...
>
> But I am not the authority and would be happy if somebody
> steers us.
>
> To not load others, if there is no response then I suggest
> to limit followup e-mails only to linux-iio and those
> who already communicated in this thread.

Agree. This will probably be my last reply to this thread with LKML in CC. Is there anybody here willing to help with contact information?
The idea is to describe a segmented profile with > > different acceleration values for segments between two different speed > > values. Simple examples are trapezoidal (accelerate from (near-)zero to > > Vmax with A1, and decelerate from Vmax back to zero with D1), dual-slope or > > S-curve, but the UAPI in theory permits an arbitrary number of segments if > > the underlying hardware supports it. > > Yes, when I have read that it reminded me my API between non-RT > and RT control task in Linux and IRQs in sysless case of our system. > > > I have some ideas for future extensions to the API that make coordinated > > multi-channel movements a bit easier, but that will not be in the initial > > push of LMC: For example torque-limit profiles for controlled torque > > movements, usable for example in sliding door controllers with AC machines > > or BLDC motors; or an ioctl to send a distance vector to a selected number > > of channels at once and apply a motion profile to the whole coordinated > > movement. In the current version you have to set up the distances and > > profiles for the individual channels and then trigger the start of the > > motion, which is more cumbersome. You can already use the finish event of a > > preceding motion to trigger the next one though. > > It would worth to have some commands queue for specified > (prefigured/setup) xis group. I thought about this, and while queuing commands for a 3D printer seems like a great idea, since it is strictly feed-forward for the most part, queuing commands in the kernel is complicating things a lot when you also want to be able to react to real-time events in user-space, like end-stop switches and such. I think the current GPIO UAPI with support for epoll events is fantastic, and people should use it. 
:-)

OTOH, I think that the speed and timing accuracy with which one would send individual movement commands (vectors or splines) to a motion controller is perfectly adequate for user-space, especially if you have the option of a 1-deep queue like this mechanism of triggering the next movement when the current one finishes, which basically gives you the time the current movement takes as latency-slack for user-space. I think that is enough, but let me know if you disagree. Maybe it is possible to make the N-dimensional vector interface (optionally) queued?

> Our system allows to add segments to the group queue but the
> timing for segment only specifies shortest time in which it can
> be executed.
>
> Then there is three passes optimization then.
>
> The first pass is performed at the insertion time. It checks and
> finds normalized change of speeds (divided by maximal accel/deccel
> of given axis) at vertex and finds limiting exes, one which accelerates
> the most and another which needs to decelerate the most. Then it
> computes speed discontinuity at the given sample period and it limits
> maximal final speed of the preceding segment such way, that the speed
> change is kept under COORDDISCONT multiple of accel/decel step. This
> way, if the straight segments are almost in line, the small change
> of the direction is not limiting the speed. The discontinuity is
> computed same way for the vertex between two N-order spline segments,
> but optimally computed spline segments can be joint with such
> discontinuity near to zero. This non RT computation as well as all
> non-RT a RT one on the control unit side is done in the fixed
> math (the most 32-bits, sometime 64-bits). Actual code of this
> pass are the functions pxmc_coordmv_seg_add(), pxmc_coordmv_seg_add_line()
> and pxmc_coordmv_seg_add_spline()

Yes, this maps very well with what I had in mind when designing LMC.
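The first-pass vertex speed limit quoted above can be sketched as follows: crossing the junction between two segments at speed v causes a per-axis velocity jump proportional to the change in direction, and keeping that jump below a multiple of each axis's per-sample acceleration step bounds the junction speed. Shown in floating point for clarity (PXMC does the equivalent in fixed-point); the names are invented for the sketch:

```c
#include <math.h>
#include <assert.h>

/* d0, d1: unit direction vectors of the segments before and after the
 * vertex; amax: per-axis acceleration limits; k: allowed discontinuity
 * as a multiple of the accel step a*dt (cf. COORDDISCONT); dt: sample
 * period. The per-axis jump at speed v is v * |d1[i] - d0[i]|, so
 * requiring it to stay below k * amax[i] * dt for every axis bounds v. */
static double junction_speed_limit(const double *d0, const double *d1,
                                   const double *amax, int n,
                                   double k, double dt)
{
    double v = INFINITY; /* collinear segments impose no limit */

    for (int i = 0; i < n; i++) {
        double jump = fabs(d1[i] - d0[i]);
        if (jump > 1e-12) {
            double vi = k * amax[i] * dt / jump;
            if (vi < v)
                v = vi;
        }
    }
    return v;
}
```

This reproduces the behaviour described in the quote: nearly collinear segments barely limit the speed, while a sharp corner forces a slow-down proportional to the allowed per-sample acceleration step.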
I haven't thought about supporting engines capable of real-time spline interpolation because I hadn't seen one before. I thought that just dividing a spline into (many) linear segments would be good enough, but if there are motion engines that can handle spline parameters, I guess we should try to support that.

The motion profiles LMC supports have 2 extra parameters for limiting speed discontinuities which can be found in many common motion engines: tvmax and tvzero.

https://github.com/yope/linux/blob/mainline-lmc-preparation/include/uapi/linux/motion.h#L146

Tvmax is important for situations where the maximum speed of a given profile is not reached because the distance is too short. It will make sure there is at least some period of constant speed before decelerating again. Tvzero is important for motions that start accelerating in the direction opposite to a preceding motion, to insert a minimum time at zero velocity. But I guess you are more than familiar with these, since they are pretty common concepts. ;-)

> https://gitlab.com/pikron/sw-base/pxmc/-/blob/master/libs4c/pxmc_coordmv/coordmv_base.c?ref_type=heads#L394
>
> The new segment final vertex limiting speed and planned speed are
> set to zero.
>
> In the second pass, the queue of segments is examined from the last
> added one and its initial planned vertex/edge speed is updated,
> increased up to the minimum of limit give by discontinuity and
> the limit to decelerate over segment to the planned final speed
> of the segment. If the start vertex limit is increased then
> the algorithm proceeds with previous segment

AFAICS, these are all motion planning tasks that should be done in user-space, right?

> https://gitlab.com/pikron/sw-base/pxmc/-/blob/master/libs4c/pxmc_coordmv/coordmv_base.c?ref_type=heads#L682
>
> There are some rules and trick to do that long free in the respect
> to the IRQ executor etc...
>
> Then the actual execution at the sampling frequency advances
> the normalized parameter going through segment from 0 to 2^32
> in 2^32 modulo arithmetic. The increase is limited by smallest
> maximal speed of the axes recomputed over distance to parameter
> change and maximal allowed accelerations recomputed to the parameter
> change. Then the final speed is limited by to final deceleration
> to the end of the segment the pxmc_coordmv_upda() in
>
> https://gitlab.com/pikron/sw-base/pxmc/-/blob/master/libs4c/pxmc_coordmv/coordmv_base.c?ref_type=heads#L351

AFAICS, this is probably better done in the controller itself, right?

The most complex math I feel comfortable doing in kernel space is converting a distance-based motion given a trapezoidal acceleration profile (with its limiting factors tvzero and tvmax, see above) into a time-based motion:

https://github.com/yope/linux/blob/mainline-lmc-preparation/drivers/motion/motion-helpers.c#L515

This is a helper function for drivers that want to use the internal time-based ramp generator.

>[...]
> > Another idea that has been floating in my head is to make a "virtual"
> > motion device driver that combines individual "real" single-channel
> > hardware drivers into one multi-channel device, but I am unsure whether it
> > is really needed. It all depends on the latency limit differences between
> > kernel-space and user-space whether there is something to gain.
>
> In the fact this is the similar idea programmed and in use 25 years
> already. All computation is in 32-bit units specific for the each axis
> and only in fixed arithmetic. Some problem was fast 64-bit root square
> at that time. All has been computed on 21 MHz CPU with 16-bit bus with
> no caches with instrauctions taking from 2 to many cycles.
It was capable
> to do that for all eight axes plus some other control tasks as flashing
> lights at specific positions for example for gems inspections by camera
> connected to PC and then cotrolling their sorting to the right pocket by air
> streams by MARS 8 control unit when it advanced to given location etc.
> So some parts of the code has been rewritten to assembly. But today,
> even on not so fast LPC4088 it can do this and D-Q PMSM motors
> control without need of assembly optimizations.

I think that if we support different kinds of profiles in N dimensions, with spline parameters where the hardware supports them, we could cover any use-case without much complexity in the kernel.

> > I think it is best to keep this initial version more limited in scope
> > though, as long as the needed extensions are possible in the future without
> > breaking existing UAPI.
>
> Yes, but I would be happy if the API is designed such way, that
> it is not obstacle when it would be extended and I have idea
> what would be such further goal for these applications
> I have already solved and running for decades at industry
> level.

That's great. I am confident that with your help, we can make this API as universally usable as possible, while still keeping it simple and generic.

> > So I propose: Let's craft a draft UAPI (in a different place, not on LKML)
> > that can do everything we can come up with and then reduce it to the basics
> > for the first version. Otherwise it will get too complex to review, I'm
> > afraid.
>
> Yes, I agree.
>
> To have some idea of the command set of our system, there is documentation
> in English for our older system which was capable to control three
> axes by 8-bit 80552
>
> http://cmp.felk.cvut.cz/~pisa/mars/mars_man_en.pdf

This API looks pretty straightforward, and should be easy to implement with LMC. Controller-specific settings in LMC are set using a sysfs attributes interface.
An example of the settings for the TMC5240: https://github.com/yope/linux/blob/mainline-lmc-preparation/drivers/motion/tmc5240.c#L311 > > > At this moment I am interested in whether there is a site which > > > would start to collect these ideas and where some > > > references can be added. > > > > I may put this on github and create a wiki there if you think that's a good > > enough place to discuss? > > If no better place comes up in feedback to my initial > list of options, I am OK with a project group on GitHub > where the initial project will be located, with a wiki > and issue tracker to start the communication, > and I would be happy to participate (as my time, teaching > and projects allow). Sounds good. Let's see if we can get some attention from OSADL or Linux Foundation. If you have some contacts there, it'd be great if you could help get something set up. If not, we'll just use GitHub or maybe even GitLab for now. > > > I think that I have quite some stuff to offer. > > > > That would be great! Looking forward to it :-) > > > > > To give an idea of my direction of thinking and > > > interface needs, I will provide some references, even > > > to our often commercially sold but mostly > > > hobby-conceived projects. > > > > I'll have to take some time to look into those more closely. My own > > experience as far as FOSS or OSHW concerns includes the reprap Kamaq > > project: > > > > https://reprap.org/wiki/Kamaq > > OK, a nice target, but I would like to support 5D CNCs with > precise machining, haptic human-machine systems, etc... Sure! > with DC, stepper and PMSM motors with vector control, > high-resolution positional feedback, etc. We control, for > example, the positioning of spherical lenses of up to 30 kg in > an interferometric system with sub-micrometre precision.
> The system allows inspection which, thanks to multiple > angles, reaches lens surface reconstruction at the level of > nanometres > > https://toptec.eu/export/sites/toptec/.content/projects/finished/measuring-instrument.pdf > > We use optical linear sensors combined with 10k-per-revolution > incremental sensors on cheap stepper motor actuators and > precise mechanics... and more tricks... And I can see the use > of some cheap Linux board, e.g. Zynq or Beagle-V Fire, > which I have on my table, there in the future... Yes, this sounds really awesome. It sounds like a great challenge for getting LMC into a good enough shape for that sort of application. It is exactly what I had in mind. >[...] > As I referenced above, the splines are interpreted at the sampling frequency; > all parameters are represented as 32-bit signed numbers, etc... > So no conversion to straight segments. The precise positions > are recomputed at even higher resolution over the IKT, then some > intervals of these points are interpolated by spline segments > with controlled error, and these segments are sent to the unit > to keep the command FIFO full but not overflow it. The unit reports > how much space is left... > > LMC will try to limit math operations in kernel space as much as possible, > so hopefully all the calculations can be done in user-space (or on the > controller if that is the case). > > All computation is fixed point only, so no problem for the kernel, > even in interrupt context. But yes, on a fully preemptive kernel where > a user-space task can be guaranteed to achieve 5 kHz sampling even > on cheaper ARM hardware, the interface to the kernel can be > only the D-Q PWM command and actual IRC and currents readback. > > But if you have an API for more intelligent controllers, then there > is no problem putting a "SoftMAC" there to represent dumb ones the > same way to userspace. That's exactly what I thought of.
Thanks for the analogy, I am going to shamelessly steal it from you if you don't mind ;-) That's also why I included 2 different drivers as examples for LMC: one that does all the fast computations in hardware, and one that uses a "SoftMAC" in motion-helpers.c to generate time-based speed ramps from acceleration profiles. But I think we should limit the "SoftMAC" device capabilities to basic trapezoidal motion profiles, since it is not the main purpose of LMC to convert the Linux kernel into a high-resolution, hard-RT motion-planning engine... even if it is a tempting technical challenge to do so ;-) > > Right now, my way of thinking was that of regular 3D printers which usually > > only implement G0/G1 G-codes (linear interpolation). G2/G3 (circular > > interpolation) doesn't sound very useful in practice since it is special > > but not very flexible. Something like generalized spline interpolation > > sounds more valuable, but I hadn't seen any hardware that can do it. > > > > > The related python application is there > > > > > > https://github.com/cvut/pyrocon > > > > > > In the far future, I can imagine that it can connect > > > to the proposed LMC API and achieve the same results. > > > > Let's make it so! > > > > >[...] > > > which uses our PXMC motion control library > > > > > > https://gitlab.com/pikron/sw-base/pxmc > > > > > > There is basic documentation for it on its site > > > > > > https://pxmc.org/ > > > https://pxmc.org/files/pxmc.pdf > > > > At first glance, this looks like a piece of hardware that would fit as an > > LMC device. What needs to be determined is where the boundaries lie between > > controller firmware, kernel-space and user-space code. > > I propose to have that boundary fully configurable. > Everything can be implemented by software emulation > in the kernel if the sampling is under 5 or 10 kHz. > The interface from the GNU/Linux system to the hardware > is: set PWM A, B, C; read actual IRC and currents.
5-10 kHz in the kernel is quite demanding already, although I agree that it is possible on many modern SoCs. The question is whether we really want to go that far. It is starting to get to levels of stress where a small microcontroller or FPGA would really be more adequate, don't you agree? And also, for what purpose do you want to read currents in real-time in the kernel? Isn't that something for closed-loop control inside a uC or FPGA? Or do you mean just to report to user-space as a filtered average? IRC (encoder feedback) could be implemented with timers that support quadrature decoding, and I can certainly envision reading them out in the kernel in order to have a simple PID controller adjust the duty-cycle setpoint to match a motion profile at a lower sample rate (1 kHz or lower), but isn't that more something for the controller hardware to do? Especially if done at even higher sample rates? > Or some part can be moved to an external controller > or an FPGA with a coprocessor (the commutation fits > in 2 kB of C-programmed RISC-V firmware), > i.e. 20 kHz or even faster Park and Clarke > transformations. In this case the 4 to 10 kHz > command port would carry D-Q PWM or current set points, > and back, IRC position and D-Q currents. > > Or your proposed LMC interface can be delivered > almost directly to a more complex controller > which would realize the whole segment processing. I think the latter is more suitable for Linux. Although, given the fact that many embedded Linux SoCs nowadays incorporate small microcontroller cores that support the Linux remoteproc interface, maybe some drivers could make use of that for the hard-RT part. On a STM32MP15x SoC for example there are advanced timers and ADCs that are very well suited for motor-control applications. They can be used directly from Linux for not-so-hard-and-fast RT applications, but potentially also for microsecond work in the M4 core. Let's first focus on the UAPI, and make the interface able to deal with these kinds of engines.
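To illustrate the kind of low-rate in-kernel loop meant here, below is a sketch of a fixed-point PID step that corrects a duty-cycle setpoint from encoder feedback at, say, 1 kHz. The names and the Q16.16 gain scaling are my own assumptions for illustration, not LMC code; the point is that the whole computation stays in 32/64-bit integer arithmetic, so it would be safe in interrupt context:

```c
#include <stdint.h>

/* Hypothetical fixed-point PID state; gains are in Q16.16. */
struct pid_fx {
	int32_t kp, ki, kd;   /* Q16.16 gains */
	int32_t integ;        /* integrator accumulator */
	int32_t prev_err;     /* previous position error */
};

/*
 * One control step: compare the encoder position against the
 * profile setpoint and return a duty-cycle correction. All math
 * is integer-only; the 64-bit intermediate avoids overflow of
 * the Q16.16 multiplications before scaling back down.
 */
static int32_t pid_fx_step(struct pid_fx *p, int32_t setpoint, int32_t pos)
{
	int32_t err = setpoint - pos;
	int64_t out;

	p->integ += err;
	out  = (int64_t)p->kp * err;
	out += (int64_t)p->ki * p->integ;
	out += (int64_t)p->kd * (err - p->prev_err);
	p->prev_err = err;

	return (int32_t)(out >> 16);   /* back from Q16.16 */
}
```

Whether such a loop belongs in a driver, in a remoteproc-hosted firmware, or in the controller itself is exactly the boundary question discussed above; the code itself is small enough to live in any of the three.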
> > Generally speaking, as a rough guideline, microsecond work is better done > in the controller firmware if possible. Millisecond work can be done in the > kernel and 10's or more millisecond work can be done in user-space, > notwithstanding latency limit requirements of course. > > OK, I agree; PREEMPT_RT is a requirement, and no broken > SMI on x86 HW. 1 kHz is enough for DC-motor-controlled > robots to run entirely on Linux. 5 kHz for PMSM D-Q can be > done in the kernel, or even in userspace on a platform > suitable for PREEMPT_RT, if it is without issues. > > Above 10 kHz should go to an FPGA or external HW. Yes, I agree. Although I'd lower the limits a bit to not make the drivers too dependent on very specific hardware platforms. > > >[...] > > > So in general, I think that we have a large portfolio > > > of building blocks which would allow building motion > > > and robotic controllers, communications, etc., and I would be happy > > > if they are reused and even some project conceived > > > together with others to join forces. > > > > This sounds very interesting. Ideally one would end up with LMC capable of > > interfacing all of those devices. > > Yes. Good. Let's do it ;-) Best regards,