We need a complete rethink of how the automotive industry measures fuel economy. The laboratory-based test methods used globally today just aren’t fit for purpose: they’re riddled with loopholes that automakers exploit to post better results.
It’s easy to understand why they do it: the financial incentives to quote low CO2 figures are enormous. But so are the financial penalties for using inaccurate data. Mitsubishi sold a controlling stake in the company after admitting it had cheated on Japanese fuel economy tests. Suzuki also ran into trouble in Japan after admitting its economy tests weren’t conducted in line with official regulations.
Both cases hinged on the quality of the data fed into the laboratory computer to replicate the effects of aerodynamic drag and rolling resistance while economy is measured on the rolling road. Mitsubishi said it had over-inflated the tires to reduce rolling resistance, while Suzuki said it had aggregated many different data sources instead of using a single figure obtained from a coastdown test.
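To see why that single coastdown figure matters, here is a minimal sketch of how a coastdown test is typically reduced to road-load data: the car coasts in neutral, deceleration is recorded against speed, and a quadratic road-load curve is fitted to the resulting forces. All masses, coefficients, and speeds below are illustrative assumptions, not figures from any real certification.

```python
import numpy as np

# Hypothetical road-load model: F(v) = f0 + f1*v + f2*v^2
# (f0 ~ rolling resistance, f2 ~ aerodynamic drag; values are invented)
f0, f1, f2 = 120.0, 1.5, 0.35   # N, N/(m/s), N/(m/s)^2

# Speeds sampled during a (simulated) coastdown, in m/s
v = np.linspace(5, 35, 50)

# During a coastdown the only retarding force is road load, so in a real
# test F would come from measured deceleration: F = -mass * dv/dt.
# Here we evaluate the model directly to stand in for those measurements.
force = f0 + f1 * v + f2 * v**2

# Fit the standard quadratic road-load curve to the "measured" points.
# These three coefficients are what gets fed into the dyno computer.
c2, c1, c0 = np.polyfit(v, force, 2)
print(round(c0, 2), round(c1, 2), round(c2, 2))
```

Because the dyno simply replays whatever coefficients it is given, understating f0 (for example by over-inflating the tires during the coastdown) directly lowers the measured fuel consumption.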
In theory, laboratory testing offers a level playing field. In reality, it doesn’t: as in motorsport, automakers interpret the regulations as broadly as possible.
One response has been to introduce a Real Driving Emissions (RDE) element in Europe. This measures tailpipe emissions on public roads using a portable emissions measurement system (PEMS), and there are calls to extend it to measure CO2, and therefore fuel economy, as well.
We think this idea would introduce problems of its own. PEMS devices are so bulky that they degrade a car’s aerodynamic efficiency and distort the results. We believe the solution is to audit real-world fuel consumption and allow automakers to share those results.
Regulators could review this real-world fuel consumption by measuring it after sale, with the results gathered from a survey of owners over a period of time.
Automakers who meet or exceed expected real-world fuel economy should receive additional credits for their robust designs and be allowed to advertise their results. Conversely, as with existing compliance programs, overly optimistic certification figures should trigger an investigation into the reasons for the differences. This visibility would reward precision and accuracy when the initial figures are submitted.
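The audit step described above can be sketched in a few lines: aggregate owner-reported consumption, compute the gap against the certified figure, and flag the model for investigation when the gap exceeds a tolerance. The certified figure, survey values, and threshold are all invented for illustration; a real program would define them in regulation.

```python
# Hypothetical post-sale audit of one vehicle model (all numbers illustrative).
certified_l_per_100km = 5.0

# Owner-reported fuel consumption in L/100 km, e.g. from fuel-log surveys
survey = [6.4, 6.8, 6.1, 7.0, 6.5, 6.7, 6.3]

# Aggregate the survey into a real-world figure
real_world = sum(survey) / len(survey)

# Gap between real-world use and the certified laboratory figure, in percent
gap_pct = (real_world - certified_l_per_100km) / certified_l_per_100km * 100

# Assumed regulatory tolerance before an investigation is opened
INVESTIGATION_THRESHOLD_PCT = 10.0
flagged = gap_pct > INVESTIGATION_THRESHOLD_PCT

print(f"real-world {real_world:.2f} L/100km, gap {gap_pct:.1f}%, flagged={flagged}")
```

A robust scheme would also need a minimum sample size and outlier handling, since individual fuel logs vary with climate, traffic, and driving style; the point here is only that the comparison itself is cheap once the survey data exists.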
Right now, we’ve got a system that underreports real-world fuel consumption by 30-40 percent. Introducing this real-world measurement would dramatically reduce that difference, and would go a long way towards maintaining consumer trust.