GM Volt Forum banner

1 - 19 of 19 Posts

Registered · 218 Posts · Discussion Starter #1
I have a question for the Volt designers ... not holding my breath for them to answer in this forum, but let's try to guess ourselves.

I've heard that the Volt battery will charge from 30% to 80% state of charge (SOC). I've always wondered where the 80% SOC charge number came from. I'm guessing it is so that regenerative braking would be maintained even when you plugged in at the top of a mountain and then used regen all the way down.

But, how is 80% SOC measured? If there's an electrochemical way of directly measuring anode/cathode charge state, then I haven't heard of it. I always thought that the SOC would just be the integral of power. But the problem here is that you're always integrating an error term, since you don't know the battery impedance exactly. Does this mean that the 80%-30% number might drift upwards over time to 100-50%? Or does the ICE only kick in at 30% based on a small drop in cell voltage? It doesn't seem easy to infer SOC based on cell voltage for a large portion of the discharge curve:

http://www.a123systems.com/#/technology/life/lchart4/

Furthermore, if the battery loses capacity over life, it becomes even harder to calculate how long the charger should run before you are at 80% SOC.
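The drift worry can be made concrete with a toy coulomb counter. This is only a sketch of the concern raised above, not anything GM has published; the 16 Ah capacity and the 1% current-sensor bias are made-up numbers.

```python
# Minimal coulomb-counting sketch: SOC is tracked by integrating current,
# so any constant sensor bias integrates into ever-growing SOC error.
# All values are illustrative, not Volt specifications.

def coulomb_count(soc_start, capacity_ah, currents_a, dt_s, bias_a=0.0):
    """Integrate measured current over time to estimate SOC.

    currents_a: true battery current samples (+ = discharge), in amps
    bias_a:     constant measurement error added to every sample
    """
    soc = soc_start
    for i_true in currents_a:
        i_measured = i_true + bias_a  # the sensor error integrates too
        soc -= (i_measured * dt_s) / 3600.0 / capacity_ah
    return soc

# One hour of a steady 10 A discharge on a hypothetical 16 Ah pack,
# sampled once per second, with and without a 0.1 A sensor bias.
samples = [10.0] * 3600
ideal = coulomb_count(0.80, 16.0, samples, 1.0)
drifted = coulomb_count(0.80, 16.0, samples, 1.0, bias_a=0.1)
```

After just one hour the biased estimate is already about 0.6% of capacity off, and the error only grows until something (a full charge, a voltage knee) resets it.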

I don't have a solution here - anyone?
 

Registered · 251 Posts
Fractional Discharge Model

Joshua,


I am confident that the folks at the battery manufacturer or GM, along with their electrochemists and battery experts, have created a detailed Fractional Discharge Model for the battery and then measured and determined the statistics (distributions) of these measured/modeled parameters. A model based on Peukert’s equation (which relates the capacity of a battery to the rate and duration of discharge), with correction factors for variables such as battery temperature, pressure, average current, charge conditions, and number of charge cycles, would be the basis for their charging algorithm.
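Peukert's equation in its usual form gives runtime as t = H · (C / (I·H))^k, where C is the capacity rated over H hours, I is the actual discharge current, and k is the Peukert exponent. A quick sketch, with made-up pack numbers (for Li-ion chemistries k is typically close to 1, i.e. capacity is nearly rate-independent):

```python
# Sketch of Peukert's law: runtime shrinks faster than linearly as
# discharge current rises. Constants are illustrative; a real pack's
# exponent would be fit from measured discharge data.

def peukert_runtime_hours(capacity_ah, rated_time_h, current_a, k):
    """Runtime t = H * (C / (I * H))**k  (Peukert's law)."""
    return rated_time_h * (capacity_ah / (current_a * rated_time_h)) ** k

# Hypothetical 16 Ah pack rated over 1 hour, exponent 1.05:
# at 2C (32 A) the pack runs a bit less than the ideal half hour,
# so the effective capacity delivered is slightly under 16 Ah.
t = peukert_runtime_hours(16.0, 1.0, 32.0, 1.05)
effective_capacity_ah = t * 32.0
```

A charging algorithm could store a table of such constants, corrected for temperature and cycle count, rather than a single k.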

There are other considerations:

Given the weight of reactants in the battery, one can determine the maximum theoretical charge (~1.8 × 10^5 C) in a new, fully charged battery. The actual value is the charge measured from full charge to total discharge. The ratio of integrated discharge current to full charge then gives the fractional state of charge, and we can plot this fraction versus voltage as a gauge of fractional discharge.

Every time we fully charge a battery, we can measure the starting charged (80%) voltage and thus re-initialize our starting conditions. We then just need to measure/determine the discharge from this point.

There is no need for great accuracy in the DOD determination; ±10% is sufficient.
 

Registered · 218 Posts · Discussion Starter #3 (Edited)
Every time we fully charge a battery, we can measure the starting charged (80%) voltage and thus re-initialize our starting conditions. We then just need to measure/determine the discharge from this point.
It seems to me that you could only "re-initialize" if you charge to near 100%, where the voltage has a noticeable peak. You said "charge to 80%" and then re-initialize, but the point is that you don't know when you have charged to 80%. From the discharge curve previously cited, it looks like it's pretty flat up to ~95%.

Maybe there is just enough slope that 80% charge can be determined with temperature correction.
 

Registered · 251 Posts
Charge until Vopen circuit < Verp

I don't have the data taken over every possible condition that GM and the vendor have. Given that we know the chemical process and the associated laws of thermodynamics, we should be able to model the results. As a first order approximation, we can just pump in 50% of the current and make sure we don't overcharge, i.e. that the open circuit voltage is less than the Li electrochemical reduction potential (3.045 V).
 

Registered · 218 Posts · Discussion Starter #5
Maybe that's how it could work - kick in the ICE when the voltage starts to go down, turn it off when the voltage goes back up to maintain SOC at the ~15% SOC knee, and then use the charger to dispense exactly 70% SOC. This puts you up to 85% SOC, which leaves 15% left over for regen capacity should it be needed.

Still leaves open the issue of what happens near EOL.
 

Registered · 251 Posts
GM et al. have to do a cost analysis to determine how detailed the control and measurement electronics need to be. The cells are not perfectly matched. There will be phenomena such as dendritic growth in cells over time. At one extreme they would have shunts/switches across all series groups of cells. This would give the capability to measure voltage and shunt current across serial cells to accommodate cell differences and to differentially charge or discharge them. There is some optimum cost/performance/risk number of temperature sensors (one sensor per cell at one extreme).

I would think that they would model each discharge cycle by breaking it into a large number of discrete time steps with an average current for each time step. There would then be a matrix of Peukert’s equation constants for a range of different possible variables.
 

Registered · 251 Posts
Operate within flat band end points - reduced stress.

GM promises a long warranty life. The key is to minimize stress or changes within the electrolyte. The graph below shows the 80% and 30% SOC points. We see that these are within the flat voltage portion and are considerably away from the knees. There is very little voltage change as a signal for SOC. The only measurable seems to be capacity (mAh = total current drain). They have to integrate the current from the charged state to get SOC. After a large number of cycles the voltage may rise or fall out of the flat-band 80-to-30 range, at which point they may then compensate.
 

Attachments: [graph of the A123 discharge curve marking the 80% and 30% SOC points]

Registered · 256 Posts
Hmmmm... well, if they can successfully stop the charging process at 80% (calculated from voltage or the rate of charge), it might be an easy thing to just measure how much energy has been used and, by measuring the current drawn, to calculate when 30% has been reached.
 

Registered · 492 Posts
knees

SOC is a great conceptual quantity, but there is no way to measure it in a particular battery at a particular time. That's one of the reasons that experts have historically been pessimistic toward BEVs --- you really just don't know how long until the charge runs out. The EREV is so much better a design, as it just responds, rather than predicts. To do that, GM must be planning on sensing the changes in voltage toward the ends of the curve, even if that is not 80%-20%. Doing some kind of complex integration of current versus time is going to be such a rough and changing approximation, filled with cumulative errors, that it won't work reliably.
 

Registered · 1,728 Posts
State of charge vs. state of tank

Hey, it's not like the gas gauges in my vehicles have been that great. They are highly non-linear and don't accurately represent the actual level of fuel in the tank. Is it bad design, or is there some cleverness that recognizes human nature to "push" driving on empty for "just a couple more miles", so the designers left some buffer space?
 

Registered · 492 Posts
Price v precision

Altazi --- I agree, my gas gauges have been the same way. The thing is that a gas gauge really is measuring the amount of gas in the tank. A low-cost version of a gas gauge measures only approximately, but it does measure the amount of gas in the tank. As a backup, a driver can estimate how much gas is left by knowing the mileage since the last full tank, and I've done that when my gas gauge wasn't working.

With battery energy, there's no way to measure SOC directly, as the better the battery is, the less the voltage varies except at extremes of charge/discharge. So all that can be done is work from the mileage or some other indirect thing. If the battery is all that is in the car, such guesswork incurs a substantial risk of stopping beside the road. In the Volt, rough guesswork based on mileage may work well enough most of the time, since the ICE/generator can come on if the battery runs out.
 

Registered · 1,728 Posts
Reasonably accurate SOC estimate

I bet it's possible to create a SOC measurement algorithm based on coulomb counting (keeping track of the charge going in and out of the battery), battery temperature, number and depth of charge cycles over the battery's lifetime, and as a last resort, battery voltage. Certainly, this should be as good as a typical "gas" gauge.

I have used a similar algorithm to track the charge on a NiMH battery pack used in a product I designed, and it worked quite well. I didn't have to deal with regenerative braking, though.
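A hedged sketch of what such a multi-input estimator might look like: coulomb counting as the primary signal, capacity derated for temperature and cycle count, and voltage used only as a clamp near the knees of the discharge curve. Every threshold and derating factor below is invented for illustration.

```python
# Sketch of a multi-input SOC estimator (not GM's actual algorithm):
# coulomb counting against a temperature- and age-corrected capacity,
# with voltage overrides only where the curve has usable slope.

def effective_capacity(rated_ah, temp_c, cycles):
    """Derate rated capacity for cold temperature and cycle aging."""
    temp_factor = 1.0 - 0.005 * max(0.0, 25.0 - temp_c)  # cold penalty
    age_factor = 1.0 - 0.00002 * cycles                  # slow fade
    return rated_ah * temp_factor * age_factor

def update_soc(soc, current_a, dt_s, rated_ah, temp_c, cycles, cell_v):
    """One estimator step: integrate current, then clamp at the knees."""
    cap = effective_capacity(rated_ah, temp_c, cycles)
    soc -= (current_a * dt_s) / 3600.0 / cap
    # Voltage is only trustworthy near the knees, so use it as a bound.
    if cell_v < 3.0:       # lower knee: SOC can't still be high
        soc = min(soc, 0.10)
    elif cell_v > 3.55:    # upper knee: SOC can't still be low
        soc = max(soc, 0.90)
    return soc
```

The clamps act as the "last resort" voltage term: between the knees they never fire, and the coulomb count carries the estimate.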
 

Registered · 492 Posts
useful or not

Altazi
No doubt you are right that an algorithm that works from several inputs can be created. The question is whether the algorithm will be accurate enough to be useful, especially because error seems to accumulate.
Maybe the algorithm will be useful, but it's also reasonable that over time the algorithm may fail in a way that lets the battery run down completely. I really think the algorithm might work better if it is kept simple, and just estimates SOC from the number of miles driven since the last grid charge.
However, the EREV design can handle even battery failure by just turning on the ICE/generator when the battery voltage starts to drop. Likely the driver will never notice. Similarly, EREV control can turn off charging if voltage starts to rise. So operation will not be optimal, but it may be functional.
If the consequence over time of non-optimal operation is too many battery failures, then if the system keeps records (one hopes) all can be studied with the benefit of more data and hindsight.
That is, I don't think one can estimate SOC at all well in automotive operation, because SOC is more a concept than an experimentally measurable physical quantity. Still, it may not matter very much.
 

Registered · 1,728 Posts
Recalibrate SOC?

Well, I suppose it would be possible to recalibrate the SOC measurement at the knee of the discharge curve (going well beyond the 30% limit discussed for the Volt), but at that point, the battery has almost no useful energy remaining - that's the point where you'd coast to a stop without the ICE. If you run RC cars on NiCd or NiMH cells, you know what I mean - it goes from zap to crap.

It should also be possible to recalibrate the SOC by fully charging the battery. At some point (determined by the cell chemistry), the battery is considered fully charged (e.g., CV mode current drops to 0.1C or something). If regenerative braking can charge the battery to 100%, maybe that could be used to reset the SOC.

I think it most appropriate to include the other terms I listed in my previous post, though, for improved accuracy.

Bottom line: I believe that, once the battery is fully characterized, a reasonable SOC "gas gauge" can be created.
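The full-charge recalibration idea above can be sketched in a few lines: when the charger is in constant-voltage mode and the current has tapered below a termination threshold (the 0.1C figure is the hypothetical one from this post), the pack is at a known reference point and the drifted coulomb count can be snapped back to 100%.

```python
# Sketch of SOC recalibration at full charge. The 0.1C taper threshold
# and 16 Ah pack size are illustrative assumptions, not Volt numbers.

def maybe_recalibrate(soc_estimate, charging_cv_mode, charge_current_a,
                      rated_ah, taper_c_rate=0.1):
    """Return a corrected SOC, snapping to 1.0 at full-charge detection."""
    if charging_cv_mode and charge_current_a <= taper_c_rate * rated_ah:
        return 1.0  # known reference point: the battery is full
    return soc_estimate

# A drifted estimate says 93%, but the charger has tapered to 1.2 A on a
# 16 Ah pack (0.075C, below the 0.1C threshold), so we reset to 100%.
soc = maybe_recalibrate(0.93, True, 1.2, 16.0)
```

The same pattern would work at the discharge knee, resetting to a low reference instead, at the cost of running the pack well below the normal 30% floor.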
 

Registered · 251 Posts
Example SOC Calculation

Li batteries have a coulombic efficiency, CE, of about 94% that decreases with each charge cycle, as established by a data curve of CE vs. number of charge cycles. When CE falls to 80%, the system should set a battery end-of-life alarm.

The graph (approximated from the A123 website) attached to the "Operate within flat band end points - reduced stress" post shows the A123 discharge curve (which needs to be recalibrated as discussed in the 06-27-2008, 11:20 PM post). Assume (for a given averaged discharge rate, corrected CE, corrected depolarization, and cell-cell charge balancing) that we get that graph. Then the SOC in %, between 90% and 10% SOC, as a function of Vcell in volts for those particular conditions is approximated by:

SOC%(Vcell) = 100*[0.303 + RE{ [0.442 – 19.331*(3.45 - Vcell )]^0.55 }].

The fit of this to the above discharge data, and the data range is shown in the attachment. This is a crude example because the A123 curves were estimated. It is just a demonstration of a general technique to represent SOC vs Vcell. Batteries from other manufacturers have a broader Vcell range than A123 and SOC% can be more accurately measured.
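The posted curve fit can be implemented directly. The base of the 0.55 power goes negative for cell voltages below roughly 3.43 V, which is what the RE{...} (real part) in the formula handles; in Python that requires an explicit complex() conversion, since a negative float raised to a fractional power raises an error. This is just the fit from the post, with the same caveat that the underlying A123 curves were estimated.

```python
# Direct implementation of the posted approximation:
# SOC%(Vcell) = 100 * [0.303 + Re{ [0.442 - 19.331*(3.45 - Vcell)]**0.55 }]

def soc_percent(v_cell):
    """Approximate SOC in percent from cell voltage (valid ~10-90% SOC)."""
    base = 0.442 - 19.331 * (3.45 - v_cell)
    # complex() lets the fractional power of a negative base be taken;
    # .real then extracts RE{...} as in the formula above.
    return 100.0 * (0.303 + (complex(base) ** 0.55).real)
```

Note how steep the fit is: a few hundredths of a volt span most of the usable SOC range, which is exactly the flat-curve problem discussed throughout this thread.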

Note: Mileage, which varies under a wide range of conditions, is not an accurate way to approximate SOC. Integrated charge (e.g. IR voltage drop to frequency, pulses counted over time) is a better way.
 

Attachments: [plot of the SOC%(Vcell) fit against the estimated discharge data, with its valid range]

Registered · 256 Posts
If voltage can be used only outside of the desired range (80-30), then it would be a simple matter to use a calculated measurement to designate when we THINK the SOC is at 80 or 30. Over time, as errors accumulate, eventually the 94% and 5% limits will be hit, at which time the SOC can be recalibrated to that known point, and it will be another month or two before the battery gets out of range and needs to be recalibrated again.

How about we just ask GM?
 

Registered · 1 Post
This is a pretty old topic, but I can answer how they probably manage the accurate SOC of their battery pack. GM may or may not have developed their own battery management system. There are many off-the-shelf fuel gauge ICs that can be used by battery engineers instead of developing their own solution in-house.

You have all hit on the answer, so let me just summarize it. Voltage can be used to determine accurate state of charge of any lithium-ion battery cell. The problem is that once a load is placed on the battery, the internal impedance of the battery cell causes a voltage drop that prevents proper measurement of the battery cell voltage. So under load (during use), coulomb counting is generally used to keep track of the state of charge. In order to increase the accuracy of coulomb counting, the fuel gauge can also track the battery cell impedance and use that to compensate for the cell losses which otherwise would not be measured by the gauge. The losses change as the load on the battery changes (with different driving habits), so the losses would be continuously calculated on a regular basis as varying current is sampled and integrated over time.

When there is zero current through a resistance, voltage drop across the resistance is zero. So when the battery is left alone and the current falls to zero (or close to 0), the internal impedance will not cause a voltage drop and direct battery cell voltage can be measured. There is another problem though, because after a battery cell is charged/discharged, the cell chemistry is not stable and the voltage changes a little for a while until the chemistry has time to stabilize. According to Texas Instruments, it takes between 30 minutes and 5 hours for li-ion chemistries to stabilize enough to get an accurate SOC reading (in my testing, the stability period is generally pretty close to 30 or 45 minutes, but probably changes with cell chemistry and age of the cell under test). Every li-ion battery chemistry has a different voltage vs SOC curve, so that must be characterized for the particular chemistry, but once you have a table that correlates open-circuit voltage (no load voltage) with SOC, it's simply a matter of the fuel gauge waiting for long rest periods to update what SOC it's at, and this table remains accurate as the capacity degrades and the internal impedance increases over time and cycles. The Texas Instruments fuel gauges monitor cell voltage and wait for the voltage change to be less than 4uV/sec before allowing an SOC update. Once a load is placed on the battery, coulomb counting takes over and some small error will accumulate until another rest period occurs and again resets the SOC tracking.
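The rest-period behavior described above can be sketched as a gate in front of an OCV-to-SOC table lookup: coulomb counting carries the estimate under load, and the table correction is only allowed once current is near zero and the voltage has stopped moving (the ~4 uV/s figure is the TI one cited above). The OCV table below is invented for illustration and does not correspond to any real chemistry.

```python
# Sketch of rest-gated OCV correction, in the spirit of the fuel-gauge
# behavior described above. The table values are made up.

OCV_TABLE = [(3.00, 0.0), (3.20, 0.10), (3.30, 0.30),
             (3.33, 0.50), (3.36, 0.70), (3.42, 0.90), (3.60, 1.00)]

def soc_from_ocv(v):
    """Linear interpolation into the (open-circuit voltage, SOC) table."""
    if v <= OCV_TABLE[0][0]:
        return 0.0
    for (v0, s0), (v1, s1) in zip(OCV_TABLE, OCV_TABLE[1:]):
        if v <= v1:
            return s0 + (s1 - s0) * (v - v0) / (v1 - v0)
    return 1.0

def try_ocv_correction(soc_estimate, current_a, dv_dt_uv_per_s, cell_v):
    """Replace the coulomb-counted SOC with an OCV reading when resting."""
    resting = abs(current_a) < 0.01 and abs(dv_dt_uv_per_s) < 4.0
    return soc_from_ocv(cell_v) if resting else soc_estimate
```

Under load the function just passes the coulomb-counted estimate through; only a long, quiet rest lets the table reset the accumulated error.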

Now, stopping charge at 80% can be achieved by two different means that I'm aware of. The normal way a li-ion cell is charged is at a constant current of whatever the cell is designed to handle, then waiting for the cell voltage to rise to the upper limit of the cell (usually 4.2V per cell). Then when the charge voltage rises to the cell's upper limit, the charge voltage is then limited and the current begins to fall naturally until eventually it falls so low that the cell is considered fully charged. This is called the taper current or charge termination threshold and is usually around 30 to 100 mA in small cells such as those used in power drills.

A reduced charge voltage that equates to 80% could be chosen and used instead of the typical 4.2V and that would automatically cause the cells to only charge to 80% each time. This charge voltage is slightly different for each cell chemistry, but can easily be determined through experimentation. The problem with the reduced charge voltage method is that once the charge enters constant voltage mode, it takes a long time for the charge to finish, so by lowering the charge voltage, you would still have to wait through the long period during constant-voltage charging. I would suspect that GM would instead use the max charge voltage of the cell and simply terminate charge when the fuel gauge reports 80%. That way the cell is being charged for a longer period of time in constant-current mode and the charge time would be greatly reduced because you don't have to sit through the time-consuming constant-voltage charge mode.
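The two stop-at-80% strategies can be contrasted in a toy CC/CV charge loop. Everything here is illustrative (a crude linear voltage model, made-up currents and pack size); the point is only the shape of the argument: terminating at a gauge-reported 80% while still in constant-current mode finishes sooner than riding out the constant-voltage taper.

```python
# Toy CC/CV charger comparing a full taper-terminated charge against a
# gauge-terminated stop at 80% SOC. All constants are illustrative.

def charge(pack_soc, gauge_stop_soc=None, cv_limit_v=4.2,
           cc_amps=8.0, rated_ah=16.0, taper_a=0.5, dt_s=60.0):
    """Return (final_soc, minutes) for a simple CC/CV charge simulation."""
    minutes = 0.0
    current = cc_amps
    while current > taper_a:
        # Gauge-terminated strategy: stop as soon as the gauge says 80%.
        if gauge_stop_soc is not None and pack_soc >= gauge_stop_soc:
            break
        # Crude cell model: voltage rises linearly with SOC; once the
        # CV limit is hit, the current tapers (stand-in for CV mode).
        cell_v = 3.0 + 1.3 * pack_soc
        if cell_v >= cv_limit_v:
            current *= 0.8
        pack_soc = min(1.0, pack_soc + current * dt_s / 3600.0 / rated_ah)
        minutes += dt_s / 60.0
    return pack_soc, minutes

full_soc, slow_min = charge(0.30)                     # ride out the taper
fast_soc, fast_min = charge(0.30, gauge_stop_soc=0.80)  # stop at gauge 80%
```

In this toy model the gauge-terminated charge never enters the slow CV phase at all, which matches the reasoning above for why GM would likely prefer it.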
 