Message ID: 6123f62ac44e6513a498d15034a4b6b22abe5f5b.1637061794.git.matti.vaittinen@fi.rohmeurope.com (mailing list archive)
State: Not Applicable, archived
Series: power: supply: Add some fuel-gauge logic
On Tue, Nov 16, 2021 at 1:26 PM Matti Vaittinen <matti.vaittinen@fi.rohmeurope.com> wrote:

> Support obtaining the "capacity degradation by temperature" - tables
> from device-tree to batinfo.
>
> Signed-off-by: Matti Vaittinen <matti.vaittinen@fi.rohmeurope.com>

Same questions as on the binding patch.

If we already support different degradation by temperature tables, why do we need a second mechanism for the same thing? I'd just calculate a few tables per temperature and be done with it.

At least documentation needs to be updated to reflect that the two methods are exclusive and you can only use one of them.

+ * Usually temperature impacts on battery capacity. For systems where it is
+ * sufficient to describe capacity change as a series of temperature ranges
+ * where the change is linear (Eg delta cap = temperature_change * constant +
+ * offset) can be described by this structure.

But what chemistry has this property? This seems to not be coming from the real physical world. I would perhaps accept differential equations but *linear* battery characteristics? If the intent is only for emulation of something that doesn't exist in reality I doubt how useful it is, all battery technologies I have seen have been nonlinear and hence we have the tables.

If you want to simulate a linear discharge, then just write a few tables with linear dissipation progression, it's easier I think.

Yours,
Linus Walleij
Hi Linus,

On 11/18/21 04:10, Linus Walleij wrote:
> On Tue, Nov 16, 2021 at 1:26 PM Matti Vaittinen
> <matti.vaittinen@fi.rohmeurope.com> wrote:
>
>> Support obtaining the "capacity degradation by temperature" - tables
>> from device-tree to batinfo.
>>
>> Signed-off-by: Matti Vaittinen <matti.vaittinen@fi.rohmeurope.com>
>
> Same questions as on the binding patch.
>
> If we already support different degradation by temperature tables,
> why do we need a second mechanism for the same thing?

Thanks for bringing this up. As I said, I didn't notice that we could indeed use the CAP-OCV tables for different temperatures to bring in this information :) I see a certain benefit from not requiring the OCV to be measured at different temperatures - but it may not be meaningful. As I replied to your patch 1/9 review - I need to (try to) do some more research...

> I'd just calculate a few tables per temperature and be done with
> it.
>
> At least documentation needs to be updated to reflect that the two methods
> are exclusive and you can only use one of them.
>
> + * Usually temperature impacts on battery capacity. For systems where it is
> + * sufficient to describe capacity change as a series of temperature ranges
> + * where the change is linear (Eg delta cap = temperature_change * constant +
> + * offset) can be described by this structure.
>
> But what chemistry has this property? This seems to not be coming from
> the real physical world. I would perhaps accept differential equations
> but *linear* battery characteristics?

Linear *on a given small temperature range*. I think this is an improvement over the situation where we don't do temperature compensation at all. And, by shortening the temperature range on which the CAP change is assumed linear, we can approach any non-linear behaviour, right?
With the current support of 100 values, you can divide the temperature range into 100 pieces with linear CAP degradation impact - I think you get reasonably good estimates there even if your device is expected to be operational over a min...max temperature span of 100C (that would allow us to divide the range into 1C slots and assume linear degradation in each 1C range). Or divide the range unevenly, so that in the temperature areas where the change is more constant we use bigger slots, and in the areas where the change is faster we use smaller slots. I don't see linear interpolation as such a big problem there?

> If the intent is only for emulation of something that doesn't exist in
> reality I doubt how useful it is, all battery technologies I have seen
> have been nonlinear and hence we have the tables.
>
> If you want to simulate a linear discharge, then just write a few tables
> with linear dissipation progression, it's easier I think.

"linear dissipation progression" - that should be my next favourite sentence to sound professional ;) I need to first google what it is in Finnish though XD

Best Regards
Matti
Hi dee Ho again,

On 11/18/21 08:11, Matti Vaittinen wrote:
> Hi Linus,
>
> On 11/18/21 04:10, Linus Walleij wrote:
>> On Tue, Nov 16, 2021 at 1:26 PM Matti Vaittinen
>> <matti.vaittinen@fi.rohmeurope.com> wrote:
>>
>>> Support obtaining the "capacity degradation by temperature" - tables
>>> from device-tree to batinfo.
>>>
>>> Signed-off-by: Matti Vaittinen <matti.vaittinen@fi.rohmeurope.com>
>>
>> Same questions as on the binding patch.
>>
>> If we already support different degradation by temperature tables,
>> why do we need a second mechanism for the same thing?
>
> Thanks for bringing this up. As I said, I didn't notice that we could
> indeed use the CAP-OCV tables for different temperatures to bring in
> this information :) I see certain benefit from the possibility of not
> requiring to measure the OCV at different temperatures - but it may not
> be meaningful. As I replied to your patch 1/9 review - I need to (try
> to) do some more research...

I tried doing some pondering here today. Unfortunately, Friday afternoon is probably the worst time to try this - my brain ceases operating in the afternoon - and doubly so on a Friday. Friday afternoons are good for babbling via email though ;)

I don't see how providing OCV tables at different temperatures gives the degradation of battery capacity. Whoah. A big thought for a Friday.

We get the OCV => SOC correspondence at different temperatures. I however don't see how this gives the OCV => energy relation. As far as I know, both the OCV and the 'amount of uAhs the battery is able to store' are impacted by temperature change. This means that seeing the OCV => SOC at different temperatures does not tell us what the impact of temperature on the OCV is, and what the impact on the SOC is. For cases like the ROHM chargers, we are interested in how much the 'ability to store uAhs' has changed due to the temperature.
When we know the amount of uAhs we can store, we can use the coulomb counter value to estimate what we still have left in the battery.

In addition to this we use the OCV information for the "nearly depleted battery" case - to improve the estimation with a zero-correction algorithm. I must admit Friday afternoon is not the time I can quite recap this part. I think it was something like:

1. Measure VBat with system load (VBAT)
2. Find the OCV corresponding to the current SOC estimate (SOC based on the coulomb counter value) - OCV_NOW
3. Compute the VDROP caused by the load (OCV_NOW - VBAT)
4. Assume VDROP stays constant (or use the ROHM VDR parameters if provided)
5. Using VDROP, compute the OCV_MIN which matches the minimum battery voltage where the system is still operational
6. Use the difference between OCV_MIN and the "OCV at SOC0 from calibration data" to adjust the battery capacity.

(Explanation done at Friday-afternoon accuracy here).

>> I'd just calculate a few tables per temperature and be done with
>> it.
>>
>> At least documentation needs to be updated to reflect that the two
>> methods
>> are exclusive and you can only use one of them.

I don't see these as exclusive (on a Friday afternoon at least). I think they can complement each other. The temp_degradation table gives us the temperature impact on the <energy storing ability>, e.g. how much the battery capacity has changed from the designed one due to the temperature. OCV-SOC tables at various temperatures tell us what the OCV looks like when we have X% of battery left at different temperatures. Estimation of how much the X% is in absolute uAhs can be done by taking into account the designed_cap, aging degradation and the temperature degradation (and the position of the moon, the amount of muons created by cosmic rays hitting the atmosphere in the knee energy region, and so on...)

Or am I just getting something terribly wrong (again)? :)
(I still for example like internal functions named as __foo() )

Yours
--Matti
On 11/26/21 13:56, Vaittinen, Matti wrote:
> Hi dee Ho again,
>
> On 11/18/21 08:11, Matti Vaittinen wrote:
>> Hi Linus,
>>
>> On 11/18/21 04:10, Linus Walleij wrote:
>>> On Tue, Nov 16, 2021 at 1:26 PM Matti Vaittinen
>>> <matti.vaittinen@fi.rohmeurope.com> wrote:
>>>
>>>> Support obtaining the "capacity degradation by temperature" - tables
>>>> from device-tree to batinfo.
>>>>
>>>> Signed-off-by: Matti Vaittinen <matti.vaittinen@fi.rohmeurope.com>
>>>
>>> Same questions as on the binding patch.
>>>
>>> If we already support different degradation by temperature tables,
>>> why do we need a second mechanism for the same thing?
>>
>> Thanks for bringing this up. As I said, I didn't notice that we could
>> indeed use the CAP-OCV tables for different temperatures to bring in
>> this information :) I see certain benefit from the possibility of not
>> requiring to measure the OCV at different temperatures - but it may not
>> be meaningful. As I replied to your patch 1/9 review - I need to (try
>> to) do some more research...
>
> I don't see providing OCV tables at different temperature gives the
> degradation of battery capacity. Whoah. A big thought for Friday.
>
> We get the OCV => SOC correspondance at different temperatures. I
> however don't see how this gives the OCV => energy relation.

After reading what I wrote even I didn't know what I tried to say. Well, I think I tried to explain that I don't see how we can use this information to do any estimation of what the Coulomb Counter reading represents at the given temperature. This is what the temperature-degradation tables aim to give us.

> As far as I
> know both the OCV and the 'amount of uAhs battery is able to store' are
> impacted by temperature change. This means, seeing the OCV => SOC at
> different temperatures does not tell us what is the impact of
> temperature to the OCV, and what is the impact to SOC.
I think I tried to say that these curves don't help us tell how many uAhs we have in the battery at different temperatures when the battery is empty, or half full, or ... Again, what we would like to know is what SOC our CC value represents - and use the OCV to just adjust this further (or, in some cases, to correct the CC value using the OCV - if we can trust the battery to be properly relaxed).

Hope this did clarify. Afraid it didn't :)

Best Regards
-- Matti Vaittinen
Hi Matti,

don't worry, you are probably right. You are the domain expert working on stuff like this inside a company that actually makes charger ICs. I am just some guy on the street. So I certainly trust you on this.

On Fri, Nov 26, 2021 at 12:56 PM Vaittinen, Matti <Matti.Vaittinen@fi.rohmeurope.com> wrote:

> I don't see providing OCV tables at different temperature gives the
> degradation of battery capacity. Whoah. A big thought for Friday.

I think we are getting close to academic definitions of the problem, so we'd need input from people who actually work on charging and know how this works? Or we need to read up on it :/

> We get the OCV => SOC correspondance at different temperatures.

I read:
OCV = Open Circuit Voltage (which we can't measure but must calculate)
and
SOC = State of Charge (a percentage of the capacity)

And we agree that what the bindings and code give today is (temp, OCV) -> SoC (%).

And this assumption is:

A) wrong as in the earth is flat, or it makes the same sense to take a random number and use it as capacity

B) wrong as in not good enough and a bit rough around the edges, and you have an idea how to improve it

I assume (B), and what we are seeing on the battery indicator on most of the world's mobile phones etc is sometimes a bit so-so because they have only used the above. (I think some people can attest to experiencing this problem.)

And now we want something better, think medical equipment.

> I
> however don't see how this gives the OCV => energy relation. As far as I
> know both the OCV and the 'amount of uAhs battery is able to store' are
> impacted by temperature change. This means, seeing the OCV => SOC at
> different temperatures does not tell us what is the impact of
> temperature to the OCV, and what is the impact to SOC.

It is definitely true that both the OCV and SOC change according to temperature.
But it is also true that these tables for a certain temperature are written with an OCV measured at this temperature, so the OCV used in the table is already compensated for the temperature, right?

> For cases like the ROHM Chargers, we are interested on how much has the
> 'ability to store uAhs' changed due to the temperature. When we know the
> amount of uAhs we can store, we can use the coulomb counter value to
> estimate what we still have left in the battery.
>
> In addition to this we do use the OCV information for the "nearly
> depleted battery" - to improve the estimation by zero-correction
> algorithm. I must admit Friday afternoon is not the time I can quite
> recap this part. I think it was something like:
>
> 1. Measure VBat with system load (VBAT)
> 2. Find OCV corresponding the current SOC estimate (SOC based on coulomb
> counter value) - OCV_NOW
> 3. Compute VDROP caused by the load (OCV_NOW - VBAT)
> 4. Assume VDROP stays constant (or use ROHM VDR parameters if provided)
> 5. Using VDROP compute the OCV_MIN which matches the minimum battery
> voltage where system is still operational
> 6. Use the OCV_MIN and "OCV at SOC0 from calibration data" difference to
> adjust the battery capacity.

That's a neat trick!

If you look at drivers/power/supply/ab8500_fg.c function ab8500_fg_load_comp_volt_to_capacity() you find how someone else chose to do this with a bit of averaging etc.

>>>> I'd just calculate a few tables per temperature and be done with
>>>> it.
>>>>
>>>> At least documentation needs to be updated to reflect that the two
>>>> methods
>>>> are exclusive and you can only use one of them.
>>
>> I don't see these exclusive (at Friday afternoon at least). I think they
>> can complement each-others. The temp_degradation table gives us the
>> temperature impact on <energy storing ability>, eg, how much the battery
>> capacity has changed from designed one due to the temperature.
>> OCV-SOC tables at various temperatures tell us how OCV looks like when
>> we have X% of battery left at different temperatures. Estimation of how
>> much the X% is in absolute uAhs can be done by taking into account the
>> designed_cap, aging degradation and the temperature degradation (and the
>> position of moon, amount of muons created by cosmic rays hitting
>> athmosphere at knee energy region and so on...)
>>
>> Or am I just getting something terribly wrong (again)? :)
>> (I still for example like internal functions named as __foo() )

OK so yeah, I think you are onto something here. Which is generic.

The battery indicator in my Tesla in Swedish winter times looks like this:

+-------------------+---+
| 25%               | * |
+-------------------+---+

So the star * indicates some extra capacity that is taken away because of the low temperature.

This must be because the system is aware of the impact of the temperature on the battery's available uAh. As you use the battery it will get warmer, the capacity will increase, and the little star goes away.

Current random mobile phones are not this great and do not estimate the capacity fall because of the temperature; they just show a percentage of the full capacity at the temperature right now, whatever that capacity may be. So it is a relative scale and we can never show anything as nice as what the Tesla does with this.

Then the question is: is the method used by Rohm universal and well-known and something many chargers will do exactly this way, so it should be in the core, or is it a particularity that should be in your driver?

Yours,
Linus Walleij
On Fri, Nov 26, 2021 at 1:35 PM Matti Vaittinen
<mazziesaccount@gmail.com> wrote:
> Hope this did clarify. Afraid it didn't :)
I got it first time, I think. Or I just think I got it but didn't ;)
Yours,
Linus Walleij
Morning Linus & All,

First of all - thanks. I've no words to say how much I appreciate this discussion. (There was zero sarcasm - I don't get much discussion on these topics elsewhere...)

On 11/27/21 02:54, Linus Walleij wrote:
> Hi Matti,
>
> don't worry you are probably right. You are the domain expert working
> on stuff like this inside a company that actually makes charger ICs.
> I am just some guy on the street. So I certainly trust you on this.

Please don't XD

I am working in a company which makes pretty much _all_ frigging components from simple resistors to some CPU cores. There are PMICs, chargers, touch screens, SERDES, audio ICs - pretty much every semiconductor IC one can think of. As for me - I have _very_ limited knowledge of any of these. I just do the drivers for whatever component they need one for. And for that purpose I do some research and a lot of "guesswork" - almost as if I was just a random guy on the street ;) I _rarely_ have any proper documentation about how things are supposed to work - and if I do, I know how things are supposed to work at hardware level, not how software should be orchestrating things :/ This discussion with guys like you helps _a lot_ with all the guesswork :)

> On Fri, Nov 26, 2021 at 12:56 PM Vaittinen, Matti
> <Matti.Vaittinen@fi.rohmeurope.com> wrote:
>
>> I don't see providing OCV tables at different temperature gives the
>> degradation of battery capacity. Whoah. A big thought for Friday.
>
> I think we are getting close to academic definitions of the problem,
> so we'd need input from people who actually work on charging and
> knows how this works? Or we need to read up on it :/

Studying is hard work. I am so glad I get to chat with someone who has obviously done some of it ;) And just to stress the point - that someone is You :]

>> We get the OCV => SOC correspondance at different temperatures.
>
> I read
> OCV = Open Circuit Voltage (which we can't measure but must calculate)

Me too.
Although we can measure this when designing the system in order to extract those OCV tables, right?

> and
> SOC = State of Charge (a percentage of the capacity)

Yes.

> And we agree what the bindings and code gives today is
> (temp, OCV) -> SoC (%)
>
> And this assumption is:
>
> A) wrong as in the earth is flat or it makes the same sens to
> take a random number and use as capacity
>
> B) wrong as in not good enough and a bit rough around the
> edges and you have an idea how to improve it
>
> I assume (B) and what we are seeing on the battery indicator on
> most of the worlds mobile phones etc is sometimes a bit so-so
> because they have only used the above. (I think some people can
> attest to experiencing this problem.)
>
> And now we want something better, think medical equipment.

I'd vote for (B) too. In theory the OCV => SOC is correct. (Part of the) problem is that we can't measure the OCV reliably/accurately when the system is running (because there is load connected). Also, there is probably some change here caused by battery aging. So for proper state-of-charge information we can't rely on the OCV only. (This is where the coulomb counter comes in.)

Furthermore, I would not say it is medical only. I think that even consumer electronics can benefit (a lot) from any accuracy improvement we can provide. I guess the improved accuracy can prolong the battery life and help avoid unnecessary charging.

>> I
>> however don't see how this gives the OCV => energy relation. As far as I
>> know both the OCV and the 'amount of uAhs battery is able to store' are
>> impacted by temperature change. This means, seeing the OCV => SOC at
>> different temperatures does not tell us what is the impact of
>> temperature to the OCV, and what is the impact to SOC.
>
> It is definitely true that both the OCV and SOC changes according to
> temperature.
> But it is also true that these tables for a certain temperature are written
> with an OCV measured at this temperature, so the OCV used in the
> table is already compensated for the temperature, right?

Yes. The OCV is fitted, but we typically don't have an accurate OCV at all times. This means that we need to use the coulomb-counter value - which makes it mandatory for us to find out the absolute capacity at a given temperature.

>> For cases like the ROHM Chargers, we are interested on how much has the
>> 'ability to store uAhs' changed due to the temperature. When we know the
>> amount of uAhs we can store, we can use the coulomb counter value to
>> estimate what we still have left in the battery.
>>
>> In addition to this we do use the OCV information for the "nearly
>> depleted battery" - to improve the estimation by zero-correction
>> algorithm. I must admit Friday afternoon is not the time I can quite
>> recap this part. I think it was something like:
>>
>> 1. Measure VBat with system load (VBAT)
>> 2. Find OCV corresponding the current SOC estimate (SOC based on coulomb
>> counter value) - OCV_NOW
>> 3. Compute VDROP caused by the load (OCV_NOW - VBAT)
>> 4. Assume VDROP stays constant (or use ROHM VDR parameters if provided)
>> 5. Using VDROP compute the OCV_MIN which matches the minimum battery
>> voltage where system is still operational
>> 6. Use the OCV_MIN and "OCV at SOC0 from calibration data" difference to
>> adjust the battery capacity.
>
> That's a neat trick!
> If you look at drivers/power/supply/ab8500_fg.c function
> ab8500_fg_load_comp_volt_to_capacity() you find how
> someone else chose to do this with a bit of averaging etc.

Thanks for the pointer. I haven't read that yet but I will. Although, I'll probably postpone that reading to next week.

>>>> I'd just calculate a few tables per temperature and be done with
>>>> it.
>>>>
>>>> At least documentation needs to be updated to reflect that the two
>>>> methods
>>>> are exclusive and you can only use one of them.
>>
>> I don't see these exclusive (at Friday afternoon at least). I think they
>> can complement each-others. The temp_degradation table gives us the
>> temperature impact on <energy storing ability>, eg, how much the battery
>> capacity has changed from designed one due to the temperature.
>>
>> OCV-SOC tables at various temperatures tell us how OCV looks like when
>> we have X% of battery left at different temperatures. Estimation of how
>> much the X% is in absolute uAhs can be done by taking into account the
>> designed_cap, aging degradation and the temperature degradation (and the
>> position of moon, amount of muons created by cosmic rays hitting
>> athmosphere at knee energy region and so on...)
>>
>> Or am I just getting something terribly wrong (again)? :)
>> (I still for example like internal functions named as __foo() )
>
> OK so yeah I think you are at something here. Which is generic.
>
> The battery indicator in my Tesla in Swedish winter times looks
> like this:

Oh, you're driving a Tesla. I've wondered how they work in winter time. Here in Oulu, Finland we can also have quite freezing cold winters, so I wouldn't dare to buy a Tesla (even if they weren't quite that expensive). I must admit that I do like the acceleration though. Well, on that front my old motorbike must suffice, and for me my car is purely for transportation and not for fun :/

> +-------------------+---+
> | 25%               | * |
> +-------------------+---+
>
> So the star * indicates some extra capacity that is taken away
> because of the low temperature.
>
> This must be because the system is aware about the impact on
> the battery available uAh of the temperature. As you use the
> battery it will get warmer and the capacity will increase and the
> little star goes away.

Getting back to the topic - I think this is a good example.
The coulomb counter can tell us the amount of uAh we have drawn from the battery (with the drawback of, for example, the ADC offset causing drifting). The temperature-degradation tables (and some figures explaining the impact of aging) can tell us how much absolute capacity we have lost (the star thingy) - some of which can be recovered when the battery is warming - and finally we get the SOC in %.

> Current random mobile phones are not this great and do not
> estimate the capacity fall because of the temperature, just shows
> a percentage of the full capacity at the temperature right now
> whatever that capacity may be, so it is a relative scale and we
> can never show anything as nice as what the Tesla does with
> this.

Yes. What I was after here is using the temperature-corrected capacity as the base for computing the relative SOC. I don't think we should tell the user the battery is only half-full when it actually can't hold more charge in cold weather (due to reduced capacity) - because this instructs people to start charging. OTOH, I do _really_ like your Tesla example of telling the user that even though your battery is as full as it can be, it does not mean it lasts the same <XX> hours it usually does when you are indoors. I am pretty sure the current power-supply framework allows us to expose the current full_cap to userland - so building your Tesla example with the star should be doable - if the drivers can somehow get the information about the absolute capacity drop.

> Then the question is: is the method used by Rohm universal and
> well-known and something many chargers will do exactly this
> way, so it should be in the core, or is it a particularity that should
> be in your driver?

I am not sure this is the correct question. I'd rephrase it to: would it be beneficial for drivers to come if the core supported this functionality - or is the feature unnecessary, or are there better alternatives?
If the core does not provide the support - then it is a high mountain to climb for anyone who wants to use such an improvement.

I think the agreement we can make is that the OCV+temperature => SOC tables do not provide quite the same information as the suggested temperature => capacity-drop tables would. Whether there are better alternatives - or whether this is generally useful - remains to be discussed, and this is something where I _do_ appreciate your (and everyone else's) input!

Best Regards
-- Matti Vaittinen
Hi Matti,

not so much time, so trying to answer the central question here!

On Sun, Nov 28, 2021 at 9:51 AM Vaittinen, Matti <Matti.Vaittinen@fi.rohmeurope.com> wrote:

> I am pretty
> sure the current power-supply framework allows us to expose the current
> full_cap to userland - so building your Tesla example with the star
> should be doable - if the drivers can somehow get the information about
> the absolute capacity drop.

To do this we would need to export a capacity at the current temperature and a capacity at nominal temperature (which is usually 25 deg C) so you can scale to something. This isn't in sysfs today but we could probably add it (and then the world of UIs' battery icons needs to change in response).

>> Then the question is: is the method used by Rohm universal and
>> well-known and something many chargers will do exactly this
>> way, so it should be in the core, or is it a particularity that should
>> be in your driver?
>
> I am not sure this is the correct question. I'd rephrase it to: Would it
> be beneficial for drivers to come if the core did support this
> functionality - or is the feature unnecessary, or are there better
> alternatives. If core does not provide the support - then it is a high
> mountain for one to climb if he wants to use such improvement.

I think we need this.

> I think that the agreement we can do is that the OCV+temperature => SOC
> tables do not provide quite same information as the suggested
> temperature => capacity-drop tables would. Whether there are better
> alternatives - or if this is generally useful remains to be discussed -
> and this is something where I _do_ appreciate your (and everyone else's)
> input!

temperature + OCV => SOC isn't enough, I think.

We probably need something to tell us what the total usable capacity will be under different temperatures. I suspect an interpolated table is best though, this is going to be quite nonlinear in practice.

Yours,
Linus Walleij
On 11/30/21 03:34, Linus Walleij wrote:
> Hi Matti,
>
> not so much time so trying to answer the central question here!
>
> On Sun, Nov 28, 2021 at 9:51 AM Vaittinen, Matti
> <Matti.Vaittinen@fi.rohmeurope.com> wrote:
>
>> I am pretty
>> sure the current power-supply framework allows us to expose the current
>> full_cap to userland - so building your Tesla example with the star
>> should be doable - if the drivers can somehow get the information about
>> the absolute capacity drop.
>
> To do this we would need to export a capacity at current temperature
> and capacity at nominal temperature (which is usually 25 deg C)
> so you can scale to something.

Hmm. I wonder if we need this. Perhaps the existing:

CHARGE_FULL_DESIGN, CHARGE_EMPTY_DESIGN - design charge values, when battery is considered full/empty.

ENERGY_FULL_DESIGN, ENERGY_EMPTY_DESIGN - same as above but for energy.

CHARGE_FULL, CHARGE_EMPTY - these attributes mean "last remembered value of charge when battery became full/empty". It can also mean "value of charge when battery is considered full/empty at given conditions (temperature, age)". I.e. these attributes represent real thresholds, not design values.

are enough? I am unsure the user is interested in knowing which part of the lost battery cap is caused by the temperature, and which is caused by some other phenomenon. Or do you think that showing the "non-recoverable" capacity loss (like the loss caused by aging) is something that should not be displayed...? (It'd be nice to see that when buying a second-hand Tesla - and that would for sure be a subject to 'reprogramming'... :rolleyes:)

> This isn't in sysfs today but we could
> probably add it (and then the world of UI:s battery icons need to change
> in response).

Yes. I'd really like userland to display the difference between the designed charge and the full charge already now.
I could almost consider sending a patch if:
a) I could draw icons / design UIs - which I really can't
b) I knew which userland software to patch...

>>> Then the question is: is the method used by Rohm universal and
>>> well-known and something many chargers will do exactly this
>>> way, so it should be in the core, or is it a particularity that should
>>> be in your driver?
>>
>> I am not sure this is the correct question. I'd rephrase it to: Would it
>> be beneficial for drivers to come if the core did support this
>> functionality - or is the feature unnecessary, or are there better
>> alternatives. If core does not provide the support - then it is a high
>> mountain for one to climb if he wants to use such improvement.
>
> I think we need this.
>
>> I think that the agreement we can do is that the OCV+temperature => SOC
>> tables do not provide quite same information as the suggested
>> temperature => capacity-drop tables would. Whether there are better
>> alternatives - or if this is generally useful remains to be discussed -
>> and this is something where I _do_ appreciate your (and everyone else's)
>> input!
>
> temperature + OCV => SOC isn't enough I think.
>
> We probably need something to tell us what the total usable
> capacity will be under different temperatures. I suspect an
> interpolated table is best though, this is going to be quite
> nonlinear in practice.

Hmm. Fair enough. Maybe instead of providing a 'temperature range where degradation is constant' we should simply support providing the data points. E.g. an array of known temperature-[degradation/change]-from-[designed/full]-capacity pairs, leaving the selection of the best-fitting model to the software. Linear interpolation is simple, and may suffice for cases where we have enough data points - but you are correct that there probably are better alternatives.
The nice thing about software is that it can be changed over time - so
even implementing it with a linear approach leaves room for further
improvements ;)

Well, I don't know how constant such degradation is over time. I just
guess it is not constant but might be proportional to the age-compensated
capacity rather than the designed capacity. It'd be nice to use a correct
approximation of reality in the device-tree... So perhaps the data points
should not be absolute uAh values but values relative to the
age-corrected battery capacity (or, if age correction is not available,
values relative to the designed capacity). Sigh, so many things to
improve, so little time :)

By the way, I was reading the ab8500 fuel-gauge driver as you suggested.
So, if I am correct, you used the battery internal resistance + current
to compute the voltage drop caused by the battery internal resistance.
This for sure improves the accuracy when we assume VBAT can be used as
OCV.

Epilogue:
In general, I see a bunch of power-supply drivers scheduling work for
running some battery measurements. Some do this periodically (I think the
ab8500 did this, although I lost track when I tried to see in which case
the periodic work was not scheduled - and maybe for fuel-gauging) or
after an IRQ is triggered (for example to see if a change indication
should be sent). I think we could simplify a few drivers if the core
provided some helper thread (like the simple-gauge) which could be woken
by drivers (to do fuel-gauge operations, or to just conditionally send
the change notification).

Best Regards
	--Matti
On Tue, Nov 30, 2021 at 7:33 AM Vaittinen, Matti
<Matti.Vaittinen@fi.rohmeurope.com> wrote:

> Hmm. Fair enough. Maybe instead of providing a 'temperature range where
> degradation is constant' we should simply support providing the data
> points. E.g., an array of known
> temperature-[degradation/change]-from-[designed/full]-capacity pairs,
> and leave selecting the best fitting model to the software. Linear
> interpolation is simple, and may suffice for cases where we have enough
> data points - but you are correct that there probably are better
> alternatives. The nice thing about software is that it can be changed
> over time - so even implementing it with a linear approach leaves room
> for further improvements ;)

Yeah someone will implement spline interpolation in the kernel one day
I bet...

> Well, I don't know how constant such degradation is over time. I just
> guess it is not constant but might be proportional to the
> age-compensated capacity rather than the designed capacity. It'd be
> nice to use a correct approximation of reality in the device-tree...

IIUC the degradation of a battery is related to the number of full charge
cycles, i.e. the number of times the battery has been emptied and
recharged fully. This of course never happens exactly in practice, so
e.g. two recharge cycles from 50% to 100% count as one full charge cycle.
So you integrate this over time (it needs to be saved in a file system or
flash if the battery does not report it itself).

This measures how much the lithium ions have moved around in the
electrolyte and thus how much chemical interaction the battery has seen.

Then the relationship between complete charge cycles and capacity
degradation is certainly also going to be something nonlinear, so it
needs manufacturer data for the battery.

> By the way, I was reading the ab8500 fuel-gauge driver as you suggested.
> So, if I am correct, you used the battery internal resistance + current
> to compute the voltage drop caused by the battery internal resistance.
> This for sure improves the accuracy when we assume VBAT can be used
> as OCV.

Yes this is how it is done. With a few measurements averaged to
iron out the noise.

> Epilogue:
> In general, I see a bunch of power-supply drivers scheduling work for
> running some battery measurements. Some do this periodically (I think
> the ab8500 did this, although I lost track when I tried to see in which
> case the periodic work was not scheduled - and maybe for fuel-gauging)
> or after an IRQ is triggered (for example to see if a change indication
> should be sent).

Yes there is some tight community of electronic engineers who read the
right articles and design these things. We don't know them :(

> I think we could simplify a few drivers if the core provided some
> helper thread (like the simple-gauge) which could be woken by drivers
> (to do fuel-gauge operations, or to just conditionally send the change
> notification).

I think so too, I don't think it is rocket science once the
right abstractions fall out.

Yours,
Linus Walleij
On 12/2/21 03:57, Linus Walleij wrote:
> On Tue, Nov 30, 2021 at 7:33 AM Vaittinen, Matti
> <Matti.Vaittinen@fi.rohmeurope.com> wrote:
>
>> Well, I don't know how constant such degradation is over time. I just
>> guess it is not constant but might be proportional to the
>> age-compensated capacity rather than the designed capacity. It'd be
>> nice to use a correct approximation of reality in the device-tree...
>
> IIUC the degradation of a battery is related to the number of full
> charge cycles, i.e. the number of times the battery has been emptied
> and recharged fully. This of course never happens exactly in practice,
> so e.g. two recharge cycles from 50% to 100% count as one full charge
> cycle. So you integrate this over time (it needs to be saved in a file
> system or flash if the battery does not report it itself).

Yes.

> This measures how much the lithium ions have moved around in the
> electrolyte and thus how much chemical interaction the battery has
> seen.
>
> Then the relationship between complete charge cycles and capacity
> degradation is certainly also going to be something nonlinear, so it
> needs manufacturer data for the battery.

Right. As far as I understand, at least part of the 'aging degradation'
comes from the fact that battery materials are 'vaporizing' when the
battery is charged. And as far as I understand, the temperature at which
charging occurs has a big impact on this. E.g., the higher the
temperature during charging, the worse the degradation. Which means that
the cycle count should actually be weighted by the charging temperature.

But this kind of missed my point :) I was thinking of how to give the
absolute (uAh) value of capacity drop caused by the temperature. My
original RFC patch gave this as a linear change of absolute uAhs over a
temperature range. As you pointed out, we should not include the
linearity in the DT model.
So the next step would be to just give measured value pairs (provided by
the battery vendor or computed on some theoretical basis) of absolute
uAh/temperature - and leave the fitting of the data points to be done by
SW.

What I was now considering is that maybe the capacity drop (in uAh)
caused by the temperature change is not the same for a new and an old
battery. It sounds more logical to me that the capacity drop caused by
the temperature is proportional to the maximum capacity the battery has
at that point of its life. E.g., if a new battery can hold 80 units of
energy, and drops 20 units of energy when the temperature changes from
T0 => T1 - a badly aged battery which now can hold only 40 units would
lose only 10 units at that same temperature drop T0 => T1. I was
wondering if such an assumption is closer to the truth than saying that
both of the batteries would lose the same 20 units - meaning that the new
battery would lose 25% of energy at the temperature drop T0 => T1 but the
old one would lose 50% of the capacity. I somehow think both of the
batteries, old and new, would lose the same % of capacity at the
temperature change.

So, if this assumption is correct, then we should give the temperature
impact as a proportion of the full capacity taking the aging into
account. So if we happen to know the aging impact on the capacity, then
software should apply the aging compensation prior to computing the
temperature impact. If the aging information or impact is not known, then
the designed capacity can be used as a fall-back, even though it means we
will probably be somewhat off for old batteries.

My problem here is that I just assume the impact of temperature is
proportional to the full capacity which takes the aging into account.
Knowing how this really is would be cool so we could get the temperature
impact modelled correctly in DT.

>> By the way, I was reading the ab8500 fuel-gauge driver as you
>> suggested.
>> So, if I am correct, you used the battery internal resistance +
>> current to compute the voltage drop caused by the battery internal
>> resistance. This for sure improves the accuracy when we assume VBAT
>> can be used as OCV.
>
> Yes this is how it is done. With a few measurements averaged to
> iron out the noise.
>
>> Epilogue:
>> In general, I see a bunch of power-supply drivers scheduling work for
>> running some battery measurements. Some do this periodically (I think
>> the ab8500 did this, although I lost track when I tried to see in
>> which case the periodic work was not scheduled - and maybe for
>> fuel-gauging) or after an IRQ is triggered (for example to see if a
>> change indication should be sent).
>
> Yes there is some tight community of electronic engineers who read the
> right articles and design these things. We don't know them :(

Right. By the way, I heard that TI has a patent protecting some type of
battery internal resistance usage here. OTOH, ROHM has a patent over some
of the VDROP value table stuff. Occasionally it feels like the ice is
getting thinner at each step here. :/

Best Regards
	Matti Vaittinen
Hi Matti,

On Thu, Dec 2, 2021 at 7:29 AM Vaittinen, Matti
<Matti.Vaittinen@fi.rohmeurope.com> wrote:

(fast forward the stuff where we are in violent agreement)

> What I was now considering is that maybe the capacity drop (in uAh)
> caused by the temperature change is not the same for a new and an old
> battery. It sounds more logical to me that the capacity drop caused by
> the temperature is proportional to the maximum capacity the battery has
> at that point of its life. E.g., if a new battery can hold 80 units of
> energy, and drops 20 units of energy when the temperature changes from
> T0 => T1 - a badly aged battery which now can hold only 40 units would
> lose only 10 units at that same temperature drop T0 => T1. I was
> wondering if such an assumption is closer to the truth than saying that
> both of the batteries would lose the same 20 units - meaning that the
> new battery would lose 25% of energy at the temperature drop T0 => T1
> but the old one would lose 50% of the capacity. I somehow think both of
> the batteries, old and new, would lose the same % of capacity at the
> temperature change.
>
> So, if this assumption is correct, then we should give the temperature
> impact as a proportion of the full capacity taking the aging into
> account.

This looks plausible.

> My problem here is that I just assume the impact of temperature is
> proportional to the full capacity which takes the aging into account.
> Knowing how this really is would be cool so we could get the
> temperature impact modelled correctly in DT.

I suppose we should check some IEEE articles to verify that this is the
case before assuming. I have access to them but no time to read :(

>> Yes there is some tight community of electronic engineers who read the
>> right articles and design these things. We don't know them :(
>
> Right. By the way, I heard that TI has a patent protecting some type of
> battery internal resistance usage here. OTOH, ROHM has a patent over
> some of the VDROP value table stuff.
> Occasionally it feels like the ice is
> getting thinner at each step here. :/

This is none of our concern. Patents are concerns for people shipping
devices, not for open source code. Also patents are only valid for 20
years and we are looking at longer times anyway. If we define generic DT
properties for this they will be used more than 20 years from now.

We even have patented code in the kernel, see:
Documentation/RCU/rcu.rst

Yours,
Linus Walleij
diff --git a/drivers/power/supply/power_supply_core.c b/drivers/power/supply/power_supply_core.c
index 295672165836..1a21f692ab81 100644
--- a/drivers/power/supply/power_supply_core.c
+++ b/drivers/power/supply/power_supply_core.c
@@ -562,10 +562,12 @@ struct power_supply *devm_power_supply_get_by_phandle(struct device *dev,
 EXPORT_SYMBOL_GPL(devm_power_supply_get_by_phandle);
 #endif /* CONFIG_OF */
 
+#define POWER_SUPPLY_TEMP_DGRD_MAX_VALUES 100
 int power_supply_get_battery_info(struct power_supply *psy,
 				  struct power_supply_battery_info *info)
 {
 	struct power_supply_resistance_temp_table *resist_table;
+	u32 *dgrd_table;
 	struct device_node *battery_np;
 	const char *value;
 	int err, len, index;
@@ -588,6 +590,8 @@ int power_supply_get_battery_info(struct power_supply *psy,
 	info->temp_max = INT_MAX;
 	info->factory_internal_resistance_uohm = -EINVAL;
 	info->resist_table = NULL;
+	info->temp_dgrd_values = 0;
+	info->temp_dgrd = NULL;
 
 	for (index = 0; index < POWER_SUPPLY_OCV_TEMP_MAX; index++) {
 		info->ocv_table[index] = NULL;
@@ -677,6 +681,55 @@ int power_supply_get_battery_info(struct power_supply *psy,
 	of_property_read_u32_index(battery_np, "operating-range-celsius",
 				   1, &info->temp_max);
 
+	len = of_property_count_u32_elems(battery_np, "temp-degrade-table");
+	if (len == -EINVAL)
+		len = 0;
+	if (len < 0) {
+		err = len;
+		goto out_put_node;
+	}
+	/* table should consist of value triplets - maximum of 100 triplets */
+	if (len % 3 || len / 3 > POWER_SUPPLY_TEMP_DGRD_MAX_VALUES) {
+		dev_warn(&psy->dev,
+			 "bad amount of temperature-capacity degrade values\n");
+		err = -EINVAL;
+		goto out_put_node;
+	}
+	info->temp_dgrd_values = len / 3;
+	if (info->temp_dgrd_values) {
+		info->temp_dgrd = devm_kcalloc(&psy->dev,
+					       info->temp_dgrd_values,
+					       sizeof(*info->temp_dgrd),
+					       GFP_KERNEL);
+		if (!info->temp_dgrd) {
+			err = -ENOMEM;
+			goto out_put_node;
+		}
+		dgrd_table = kcalloc(len, sizeof(*dgrd_table), GFP_KERNEL);
+		if (!dgrd_table) {
+			err = -ENOMEM;
+			goto out_put_node;
+		}
+		err = of_property_read_u32_array(battery_np,
+						 "temp-degrade-table",
+						 dgrd_table, len);
+		if (err) {
+			dev_warn(&psy->dev,
+				 "bad temperature - capacity degrade values %d\n",
+				 err);
+			kfree(dgrd_table);
+			info->temp_dgrd_values = 0;
+			goto out_put_node;
+		}
+		for (index = 0; index < info->temp_dgrd_values; index++) {
+			struct power_supply_temp_degr *d = &info->temp_dgrd[index];
+
+			d->temp_degrade_1C = dgrd_table[index * 3];
+			d->degrade_at_set = dgrd_table[index * 3 + 1];
+			d->temp_set_point = dgrd_table[index * 3 + 2];
+		}
+		kfree(dgrd_table);
+	}
+
 	len = of_property_count_u32_elems(battery_np, "ocv-capacity-celsius");
 	if (len < 0 && len != -EINVAL) {
 		err = len;
diff --git a/include/linux/power_supply.h b/include/linux/power_supply.h
index fa8cf434f7e3..fbc07d403f41 100644
--- a/include/linux/power_supply.h
+++ b/include/linux/power_supply.h
@@ -214,6 +214,30 @@ union power_supply_propval {
 struct device_node;
 struct power_supply;
 
+/**
+ * struct power_supply_temp_degr - impact of temperature on battery capacity
+ *
+ * Temperature usually impacts battery capacity. Systems where it is
+ * sufficient to describe the capacity change as a series of temperature
+ * ranges where the change is linear (e.g. delta cap = temperature_change *
+ * constant + offset) can be described by this structure.
+ *
+ * Please note - in order to avoid unnecessary rounding errors, the change
+ * of capacity (uAh) is per change of one degree C while the temperature
+ * range floor is in tenths of a degree C.
+ *
+ * @temp_set_point:	Temperature where the capacity change is as given in
+ *			degrade_at_set. Units are 0.1 degree C
+ * @degrade_at_set:	Capacity difference (from ideal) at temp_set_point
+ *			temperature
+ * @temp_degrade_1C:	Capacity change / temperature change (uAh / degree C)
+ */
+struct power_supply_temp_degr {
+	int temp_set_point;
+	int degrade_at_set;
+	int temp_degrade_1C;
+};
+
 /* Run-time specific power supply configuration */
 struct power_supply_config {
 	struct device_node *of_node;
@@ -377,6 +401,8 @@ struct power_supply_battery_info {
 	int ocv_table_size[POWER_SUPPLY_OCV_TEMP_MAX];
 	struct power_supply_resistance_temp_table *resist_table;
 	int resist_table_size;
+	int temp_dgrd_values;
+	struct power_supply_temp_degr *temp_dgrd;
 };
 
 extern struct atomic_notifier_head power_supply_notifier;
Support obtaining the "capacity degradation by temperature" tables from
device-tree to batinfo.

Signed-off-by: Matti Vaittinen <matti.vaittinen@fi.rohmeurope.com>
---
RFCv3:
  - rename simple_gauge_temp_degr to power_supply_temp_degr
---
 drivers/power/supply/power_supply_core.c | 53 ++++++++++++++++++++++++
 include/linux/power_supply.h             | 26 ++++++++++++
 2 files changed, 79 insertions(+)