Electric vehicle (EV) development is increasingly defined by software complexity, data volume, and shorter validation timelines. As a result, artificial intelligence (AI) is being introduced across development and testing workflows to accelerate engineering cycles, improve diagnostic depth, and reduce reliance on late-stage physical validation.
For EV engineers, the challenge is not simply adopting AI, but understanding where it delivers measurable reliability and efficiency gains, where its limitations remain, and how it can be safely applied within safety-critical systems.
In this Q&A, Steve Stoddard, Product Manager for AI at Sonatus, shares a practical engineering perspective on how AI is being used in EV development and validation today. Drawing on real-world experience with vehicle data, diagnostics, and software-defined architectures, he discusses where AI fits into modern workflows, from model training and observability to platform scalability and lifecycle integration.
Here’s what he has to say…
How are AI tools being integrated to make EV development and testing more efficient and reliable?
AI is being woven into development and testing cycles in several complementary ways. OEMs are accelerating early-stage engineering using digital twins, engineering simulations, and virtual testing environments that reduce reliance on physical prototypes.
On the software side, coding copilots and AI code generators are speeding up both development and QA testing. During testing and validation, AI agents are increasingly used to troubleshoot issues, analyze data, and support root-cause diagnosis. Together, these approaches greatly accelerate development and test cycles and help teams identify problems earlier, when they are less costly to resolve.

Digital twins and virtual vehicle models are used alongside AI tools to accelerate simulation, testing, and early-stage engineering.
What limitations and challenges arise when applying AI to EV development and test workflows?
There are two main challenges. First, AI outputs are only as good as the data used to produce them. EV programs generate massive volumes of data, but collecting, cleaning, and preparing that data for training AI systems remains a bottleneck. Second, humans still have to interpret and act on the outputs of these systems, so observability and explainability become critical for people to understand and trust the AI's findings.
What methods ensure that data-collection tools capture sufficient variation in torque, thermal loads, and drive cycles to support reliable model training?
Reliable model training requires combining known drive-cycle test plans with flexible, granular vehicle data-collection capabilities. Teams need vehicle data-collection policies that can be applied at the individual signal or message level, across vehicle networks and buses, and support a variety of trigger conditions to capture system behavior.
Additional data from external sensors often needs to be collected and carefully timestamped to align with vehicle-generated data.
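As a rough illustration of the signal-level granularity and trigger conditions described above, a collection policy might be modeled like the following sketch. All class names, signal names, and thresholds here are hypothetical and do not represent a Sonatus API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical sketch: a per-signal collection policy with a trigger condition.
@dataclass
class SignalPolicy:
    signal: str                        # e.g. "battery.pack_temp"
    bus: str                           # vehicle network/bus carrying the signal
    sample_hz: float                   # collection rate once triggered
    trigger: Callable[[float], bool]   # start collecting when this returns True

@dataclass
class CollectionPlan:
    policies: List[SignalPolicy] = field(default_factory=list)

    def active_signals(self, readings: Dict[str, float]) -> List[str]:
        """Return the signals whose trigger fires for the current readings."""
        return [p.signal for p in self.policies
                if p.signal in readings and p.trigger(readings[p.signal])]

# Example: capture pack temperature above 45 °C and torque above 200 Nm.
plan = CollectionPlan([
    SignalPolicy("battery.pack_temp", "CAN-powertrain", 10.0, lambda t: t > 45.0),
    SignalPolicy("motor.torque_nm", "CAN-powertrain", 100.0, lambda nm: abs(nm) > 200.0),
])
print(plan.active_signals({"battery.pack_temp": 48.2, "motor.torque_nm": 150.0}))
# → ['battery.pack_temp']
```

In a real system, each triggered capture would also carry a synchronized timestamp so that external sensor data can be aligned with vehicle-generated data, as noted above.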
How do engineers confirm diagnostic model performance after software updates change signals or control strategies?
AI-based diagnostic tools need ongoing access to updated vehicle signal definitions. One approach is a pipelined connection to the vehicle definition files, so the model can reference the latest versions at inference time. Alternatively, engineers can supply the updated definitions alongside their queries for the AI to reference.
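The idea of referencing current definitions at inference time can be sketched as follows: raw signal values are decoded with whatever scale and offset the latest definition file specifies, whether that file arrives via a pipeline or is attached to the query. The function, signal names, and scaling values below are illustrative assumptions, not a real vehicle definition format.

```python
# Hypothetical sketch: decode a raw signal value using the current definition,
# so that a software update changing scale/offset is picked up at inference time.
def decode_signal(name: str, raw: int, definitions: dict) -> float:
    """Apply the scale and offset from the signal's latest definition."""
    d = definitions[name]
    return raw * d["scale"] + d["offset"]

# Definitions could come from a pipelined feed or be supplied with the query.
defs_v2 = {"battery.pack_voltage": {"scale": 0.1, "offset": 0.0}}
voltage = decode_signal("battery.pack_voltage", 4003, defs_v2)  # ≈ 400.3 V
```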
Which parts of EV development and testing truly benefit from AI, and which still require traditional methods?
In work with model partners, we've seen AI-based models for battery and cell-level health deliver better predictive accuracy than traditional rule-based methods. These improvements allow engineering teams to spot issues earlier and make more informed decisions about long-term performance.

AI-based diagnostics used in safety-critical contexts require human oversight and ongoing performance monitoring.
What safeguards are needed for AI tools when diagnostic outputs influence safety-critical controls?
One important safeguard is keeping humans in the loop during initial implementation. As confidence in the solution grows, KPIs based on the specific use case can be monitored to consciously choose whether and when to hand off decision-making and action-taking control to the AI.
Once automation is introduced, monitoring for model performance drift becomes critical to ensure the system remains reliable over time.
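One simple form of the drift monitoring described above compares a rolling window of recent prediction errors against a baseline established at deployment. This is a minimal sketch under assumed thresholds; the class name and parameters are illustrative, not any specific tool's API.

```python
from collections import deque

# Minimal drift-monitoring sketch: alert when the rolling mean absolute error
# (MAE) of recent predictions exceeds the deployment baseline by a set factor.
class DriftMonitor:
    def __init__(self, baseline_mae: float, window: int = 100, factor: float = 1.5):
        self.baseline = baseline_mae
        self.errors = deque(maxlen=window)
        self.factor = factor

    def record(self, predicted: float, actual: float) -> None:
        self.errors.append(abs(predicted - actual))

    def drifted(self) -> bool:
        if len(self.errors) < self.errors.maxlen:
            return False  # not enough observations yet
        return sum(self.errors) / len(self.errors) > self.baseline * self.factor

# Example: baseline MAE of 0.5; recent errors of 2.0, 1.5, 1.8 trip the alert.
monitor = DriftMonitor(baseline_mae=0.5, window=3)
for pred, actual in [(3.0, 1.0), (4.0, 2.5), (2.0, 0.2)]:
    monitor.record(pred, actual)
print(monitor.drifted())  # → True
```

In practice, a drift alert like this would trigger review and possible handback of decision-making to human engineers, consistent with the KPI-based handoff approach described above.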
How can broader machine-learning models generalize across multiple EV platforms without overfitting to specific hardware?
With traditional machine-learning models, some amount of calibration or retraining may always be necessary. To make this more flexible, model calibration or training can be moved to the edge. Federated learning can enable the tuning of portions of the model across different vehicle endpoints. This way, each vehicle sees only an incremental increase in required compute: rather than sending the raw data to a centralized cloud location, modified weights, biases, and parameters are sent back instead. The fully updated model is then assembled in the cloud and pushed back to each edge device.
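The federated flow above can be sketched in a few lines: each vehicle performs a local update and sends back modified weights rather than raw data, and the cloud averages them into a new global model (a FedAvg-style aggregation). The gradients and learning rate below are made-up illustrative values.

```python
# FedAvg-style sketch: vehicles train locally; the cloud averages their
# returned weight vectors and redistributes the assembled model.
def local_update(weights, gradient, lr=0.1):
    """One local gradient step on a vehicle; returns updated weights."""
    return [w - lr * g for w, g in zip(weights, gradient)]

def federated_average(updates):
    """Cloud-side aggregation: element-wise mean of vehicle weight vectors."""
    n = len(updates)
    return [sum(ws) / n for ws in zip(*updates)]

global_weights = [1.0, 2.0]
vehicle_updates = [
    local_update(global_weights, [0.5, -0.5]),  # vehicle A's local gradient
    local_update(global_weights, [0.1, 0.3]),   # vehicle B's local gradient
]
new_global = federated_average(vehicle_updates)  # ≈ [0.97, 2.01]
```

Only the updated weights travel to the cloud, which matches the bandwidth and privacy rationale above: raw drive data stays on the vehicle.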
Additionally, incorporating multiple data source types and using record trust scores or confidence levels can expand model capability beyond traditional model-fitting to time-series data. As a result, assembling task-specific AI agents into an orchestrated AI platform using LLMs can avoid the need for data-specific model training or expensive LLM fine-tuning, while still delivering accurate, context-aware results.
What does the future likely look like when it comes to incorporating AI models into vehicle development?
Today, most AI models and tools are task- or domain-specific. The next stage of development will be deeper integration of these solutions with one another, so that more steps can be automated across entire workflows, vehicle domains, and product-lifecycle phases. As these systems mature, AI will move from task-specific applications to a more connected, orchestrated role across the full vehicle lifecycle.