Most people believe that artificial intelligence in cars is a future story. Something that will arrive one day in the form of fully self-driving vehicles, robotic taxis, and science-fiction dashboards that talk back to you. This belief is comforting, because it places the disruption safely in the future. It is also completely wrong.
Artificial intelligence has already been inside your car for years. Not in a dramatic way, and not under a single label. It entered quietly, subsystem by subsystem, feature by feature, until modern cars crossed a line that most buyers still do not realize exists: today’s vehicles are no longer primarily mechanical systems. They are computational systems that happen to move physical mass through space.
This distinction is not philosophical. It is economic, technical, and practical. It changes how cars are designed, how they fail, how they are repaired, how they are updated, how they depreciate, and ultimately how much control you, the owner, actually have over the machine you paid for.
Before 2027, this shift will become impossible to ignore. Not because cars will suddenly become autonomous, but because even ordinary, non-autonomous vehicles will become increasingly dependent on machine-learning systems for core functions: perception, safety, energy management, and driver monitoring. The result will be cars that behave less like predictable machines and more like evolving software platforms, whose behavior can change over time without a single physical part being replaced.
The average driver is not prepared for this. Not intellectually, not financially, and not legally. Most people still think in terms of engines, gearboxes, and “options.” They do not yet think in terms of compute stacks, model updates, sensor calibration, or data pipelines. But the industry already does.
To understand what is coming, and to make rational buying decisions in the second half of the 2020s, you must first unlearn a dangerous misconception: that “AI in cars” is a single feature. It is not. It is an architectural transformation.

One of the biggest obstacles to understanding modern vehicles is language. The term “artificial intelligence” is used so loosely in marketing and media that it has almost lost technical meaning. In the context of cars, it does not refer to a single system, a single computer, or a single capability. It refers to a collection of mathematical models, trained on large amounts of data, embedded into dozens of separate control systems throughout the vehicle.
These systems do not think. They do not understand. They do not reason. They recognize patterns and calculate probabilities.
When a modern car “detects” a pedestrian, what it is really doing is running a neural network that has been trained on millions of labeled images and asking a statistical question: how likely is it that the pixels in front of me belong to the category “human being”? When it decides to brake, it is not making a moral judgment or a conceptual decision. It is executing an optimization function under uncertainty.
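To make that concrete, here is a minimal sketch of the shape of such a decision. The model, thresholds, and numbers are all invented; real systems involve far more inputs, but the structure, a probability compared against a cost-weighted threshold, is essentially this:

```python
import math

# Minimal sketch: probabilistic detection feeding a braking decision.
# All numbers and thresholds here are illustrative, not taken from
# any real vehicle.

def pedestrian_probability(feature_score: float) -> float:
    """Stand-in for a trained neural network's output layer.

    A real system runs a deep network over camera pixels; here a
    sigmoid over a single made-up feature score plays that role.
    """
    return 1.0 / (1.0 + math.exp(-feature_score))

def should_brake(p_pedestrian, time_to_collision_s,
                 threshold=0.7, ttc_limit_s=1.5):
    # The system never "knows" a pedestrian is present. It brakes when
    # the estimated probability, combined with the physics, makes the
    # expected cost of inaction too high.
    return p_pedestrian >= threshold and time_to_collision_s <= ttc_limit_s

print(should_brake(pedestrian_probability(2.0), 1.2))   # True: likely pedestrian, close
print(should_brake(pedestrian_probability(-1.0), 1.2))  # False: probably a shadow
```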
This distinction matters, because it explains both the power and the fragility of AI-based systems. They can outperform humans in narrow, well-defined tasks, and at the same time fail spectacularly in situations that any human child would understand instantly.
Technically, what the automotive industry calls “AI” today is almost entirely composed of machine-learning models, particularly deep neural networks. These models are trained, not programmed. Engineers do not write explicit rules for how to recognize a cyclist or predict another driver’s behavior. Instead, they feed the system enormous datasets and allow it to adjust millions or billions of internal parameters until it becomes statistically good at producing the right outputs.
A clear, non-hyped overview of this approach is provided here:
https://en.wikipedia.org/wiki/Machine_learning
And its application in transportation:
https://en.wikipedia.org/wiki/Artificial_intelligence_in_transportation
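To see what “trained, not programmed” means at the smallest possible scale, consider this toy sketch: a single parameter is adjusted by gradient descent until the model fits the data. Nothing about the answer is ever written down as a rule. Production networks do this with millions or billions of parameters, but the principle is the same:

```python
# Toy illustration of "trained, not programmed": no rule for the
# answer is ever written down; a parameter is nudged until the
# model's outputs statistically match the data.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, label) pairs, y ≈ 2x

w = 0.0    # one parameter; real networks have millions or billions
lr = 0.01  # learning rate

for step in range(1000):
    grad = 0.0
    for x, y in data:
        grad += 2 * (w * x - y) * x   # d/dw of the squared error (w*x - y)^2
    w -= lr * grad / len(data)

print(round(w, 2))  # ≈ 2.04: learned from examples, never programmed
```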
The consequence of this training-based approach is profound: the behavior of these systems is not fully predictable, even to their creators. You can test them, validate them, and bound their failure rates, but you cannot reason about them in the same way you reason about mechanical systems or classical software.
A brake pedal either works or it doesn’t. A neural network sometimes misclassifies a shadow as a wall.
This is not a bug in the usual sense. It is a fundamental property of statistical systems.
In modern vehicles, these models are used in three broad domains. First, perception: turning raw sensor data from cameras, radar, and sometimes lidar into meaningful representations of the world. Second, decision support: evaluating risks, predicting trajectories, and choosing actions. Third, optimization: continuously adjusting complex systems such as engines, batteries, transmissions, and thermal management to achieve competing goals like efficiency, performance, emissions, and longevity.
Suppliers such as Bosch, Continental, NVIDIA, and Mobileye have been building entire product lines around these ideas for years. Bosch’s own overview is revealingly blunt about the direction of travel:
https://www.bosch-mobility.com/en/solutions/software-and-ai/artificial-intelligence/
What is critical to understand is that there is no single “AI computer” in a car. There are many electronic control units, increasingly centralized into high-performance computing platforms, each running multiple models for different tasks. Some are safety-critical. Some are comfort-related. Some exist primarily to collect data.
This is why the question “does this car have AI?” is already meaningless. The real question is how deeply AI has been allowed to penetrate the control stack of the vehicle, and which functions are still governed by deterministic logic rather than learned behavior.

When most drivers think about artificial intelligence in cars, they imagine experimental features: self-driving modes, futuristic dashboards, or prototype vehicles that appear in technology conferences and promotional videos. This mental model is dangerously outdated. The more important story is not where AI will appear, but where it already has.
The modern car is no longer controlled primarily by mechanical linkages or simple software logic. It is governed by a dense network of electronic control units and centralized computers that continuously evaluate sensor data, predict outcomes, and adjust behavior in real time. In many of these systems, machine-learning models have quietly replaced rule-based logic because the problems they solve have become too complex, too variable, and too dependent on uncertain real-world conditions.
This transformation did not happen all at once. It happened subsystem by subsystem, in places where traditional engineering approaches started to reach their limits.
A generation ago, an engine control unit followed relatively straightforward maps. Throttle position, engine speed, air temperature, and a handful of other variables were combined using tables created by engineers. The behavior of the engine was, in principle, fully predictable.
That world no longer exists.
Modern powertrains, especially in turbocharged, hybrid, and plug-in hybrid vehicles, operate in a space of constraints that is too complex to manage optimally with fixed rules. Emissions regulations, fuel consumption targets, thermal limits, battery health, drivability expectations, and component longevity all compete with each other in real time. The result is a control problem that increasingly resembles an optimization problem rather than a mechanical one.
Manufacturers now use machine-learning techniques in several areas of powertrain control, including the following (a simplified sketch of predictive energy management appears after this list):
Adaptive shift strategies that change based on driver behavior, road type, and traffic patterns
Predictive energy management in hybrids that decides when to use the engine versus the electric motor
Combustion optimization systems that adapt to fuel quality, wear, and environmental conditions
Thermal management systems that anticipate heat loads rather than merely reacting to them
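As a concrete illustration of the second item above, here is a deliberately simplified sketch of predictive energy management. Every rule, threshold, and number in it is invented; real systems solve this as a constrained optimization over a predicted route:

```python
# Simplified sketch of predictive hybrid energy management. Real
# systems solve a constrained optimization over a predicted route;
# the rules and thresholds here are purely illustrative.

def choose_power_source(soc: float,
                        predicted_avg_speed_kmh: float,
                        km_to_charger: float) -> str:
    """Pick engine vs. electric for the next route segment."""
    if soc < 0.15:
        return "engine"       # protect battery health at low charge
    # Assume a (hypothetical) 300 km full-charge electric range.
    if predicted_avg_speed_kmh < 50 and soc * 300 > km_to_charger:
        return "electric"     # city driving, enough predicted range
    return "engine"           # highway: engine near peak efficiency

# Two identical cars diverge because their inputs (driver, routes)
# differ; this is the limited sense in which the car "learns" its usage.
print(choose_power_source(0.6, 35, 20))   # electric
print(choose_power_source(0.6, 110, 20))  # engine
```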
Bosch describes this transition very openly in its automotive AI documentation:
https://www.bosch-mobility.com/en/solutions/software-and-ai/artificial-intelligence/
What is important to understand is not just that these systems use AI, but that they no longer behave in a strictly pre-defined way. Two identical cars, driven by two different people, can gradually develop measurably different control behaviors over time. The vehicle is, in a limited but real sense, learning how it is being used.
This is one of the first places where the traditional idea of “the car as a fixed machine” begins to break down.
Nowhere is the role of AI more visible, and more consequential, than in modern safety systems.
Early driver assistance systems were based on relatively simple logic. If a radar sensor detected an object within a certain distance and the relative speed exceeded a threshold, a warning would sound. If the driver did not react, the system might brake.
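That generation of logic was simple enough to write out in full. The following sketch, with made-up thresholds, captures roughly the entire decision process of such a system:

```python
# Roughly the entire decision logic of an early collision-warning
# system: explicit, deterministic rules. Thresholds are made up.

def collision_warning(distance_m: float, closing_speed_ms: float) -> str:
    if closing_speed_ms <= 0:
        return "none"              # object not approaching
    time_to_collision = distance_m / closing_speed_ms
    if time_to_collision < 1.0:
        return "brake"             # automatic emergency braking
    if time_to_collision < 2.5:
        return "warn"              # audible warning
    return "none"

print(collision_warning(30.0, 20.0))  # 1.5 s to impact -> "warn"
```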
This approach worked, but only in a narrow range of scenarios. As soon as engineers tried to expand these systems to deal with pedestrians, cyclists, complex urban environments, or ambiguous situations, rule-based logic collapsed under its own complexity.
You cannot write a list of rules for “what a pedestrian looks like” in the real world. You have to train a system to recognize them.
This is why today’s safety systems increasingly rely on neural networks for perception and classification. They are used in:
Automatic emergency braking systems that distinguish between vehicles, pedestrians, cyclists, and background objects
Lane keeping systems that must interpret poorly marked, curved, or partially occluded lane boundaries
Traffic sign recognition systems that operate across countries, styles, and weather conditions
Driver monitoring systems that attempt to infer attention, drowsiness, or distraction from camera images
Euro NCAP’s overview of modern ADAS systems makes it clear how central these technologies have become:
https://www.euroncap.com/en/vehicle-safety/the-ratings-explained/adas/
The critical point is that these systems no longer “measure and react.” They interpret and infer. And interpretation, in machines, is always probabilistic.
This is why such systems sometimes brake for shadows, fail to recognize unusual vehicles, or become confused in construction zones. They are not malfunctioning in a mechanical sense. They are making the best statistical guess they can, and sometimes that guess is wrong.
MIT’s explanation of why deep learning systems behave this way is worth reading carefully:
https://news.mit.edu/2021/deep-learning-real-world-0621

Another domain where AI has become deeply embedded, often invisibly, is navigation and route planning.
Modern navigation systems no longer merely compute the shortest path. They attempt to predict traffic evolution, understand typical congestion patterns, and sometimes even anticipate the driver’s intentions. This requires large-scale machine-learning models trained on massive amounts of real-world mobility data.
Google, for example, has been using AI in route prediction and traffic modeling for years:
https://ai.googleblog.com/2020/03/ai-and-transportation.html
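The core idea can be sketched compactly: a shortest-path search in which each road segment’s cost depends on predicted traffic at the moment you enter it. The graph, travel times, and congestion model below are all invented:

```python
import heapq

# Minimal sketch of time-dependent routing: Dijkstra, but each road
# segment's cost depends on the predicted traffic at the moment you
# enter it. Graph, times, and the congestion model are all invented.

graph = {  # node -> [(neighbor, base_minutes, affected_by_rush_hour)]
    "home": [("ramp", 10, True), ("village", 20, False)],
    "ramp": [("office", 15, True)],
    "village": [("office", 12, False)],
    "office": [],
}

def travel_time(base_min, rush_sensitive, depart_min):
    # Stand-in for a learned traffic model: 8:00-10:00 am (minutes
    # 480-600) slows rush-sensitive segments by a factor of 2.5.
    rush = 480 <= depart_min % 1440 <= 600
    return base_min * (2.5 if rush and rush_sensitive else 1.0)

def fastest_arrival(start, goal, depart_min):
    pq = [(depart_min, start)]
    best = {start: depart_min}
    while pq:
        t, node = heapq.heappop(pq)
        if node == goal:
            return t
        for nxt, base, sensitive in graph[node]:
            arrive = t + travel_time(base, sensitive, t)
            if arrive < best.get(nxt, float("inf")):
                best[nxt] = arrive
                heapq.heappush(pq, (arrive, nxt))
    return None

print(fastest_arrival("home", "office", 480))  # 512.0: congestion predicted, village route
print(fastest_arrival("home", "office", 700))  # 725.0: off-peak, highway route
```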
In some vehicles, this goes a step further. The car begins to build a model of the driver:
Typical departure times
Frequently visited locations
Preferred routes
Driving style and aggressiveness
Willingness to accept rerouting or suggestions
This information is used to adjust everything from navigation suggestions to energy management strategies in electric vehicles. The vehicle is no longer merely reacting to commands. It is forming expectations.
This is a subtle but profound shift. A system that forms expectations about the user is no longer just a tool. It is an adaptive agent.
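Forming expectations does not require anything exotic. A sketch as trivial as the following, a running average of departure times that could feed cabin preconditioning or battery warm-up, already qualifies. Everything in it is hypothetical:

```python
# Deliberately trivial sketch of a car "forming expectations": an
# exponential moving average of weekday departure times, which could
# feed cabin preconditioning or battery thermal preparation.

class DepartureModel:
    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha
        self.expected = {}   # weekday -> expected departure (minutes)

    def observe(self, weekday: int, minutes: float):
        prev = self.expected.get(weekday, minutes)
        self.expected[weekday] = (1 - self.alpha) * prev + self.alpha * minutes

    def predict(self, weekday: int):
        return self.expected.get(weekday)

m = DepartureModel()
for t in (482, 478, 490, 475):   # four Mondays, all around 8:00 am
    m.observe(0, t)
print(m.predict(0))              # ~481: the car now "expects" 8:00
```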
Even systems that seem purely mechanical, such as suspension and stability control, are increasingly influenced by machine-learning techniques.
Adaptive dampers, active anti-roll systems, and torque vectoring differentials must operate across an enormous range of conditions: smooth highways, broken pavement, gravel, rain, snow, aggressive driving, calm commuting. Writing explicit rules for all these cases is possible, but inefficient and brittle.
As a result, manufacturers are increasingly using data-driven approaches to tune and, in some cases, continuously adapt these systems. The car learns, within defined safety boundaries, how to best combine comfort, stability, and performance for a given situation and driver.
Across all these domains, the same pattern repeats itself. Control systems are moving:
From fixed maps to adaptive models
From explicit rules to learned behavior
From predictable responses to probabilistic decisions
This does not make cars worse. In many cases, it makes them dramatically better. But it does make them fundamentally different.
A probabilistic machine cannot be understood, diagnosed, or trusted in the same way as a deterministic one. It must be validated statistically, monitored continuously, and updated regularly. Its failures are not binary. They are matters of likelihood.
And this brings us to an uncomfortable but unavoidable conclusion: your car is already, in many important ways, a rolling AI system. You just haven’t been told to think of it that way.

No matter how sophisticated an artificial intelligence system may be, it is ultimately limited by one fundamental constraint: it cannot reason about what it cannot perceive. In a car, there is no direct access to “the road,” “other vehicles,” or “danger.” There are only streams of numbers coming from sensors. Everything the vehicle believes about the world must be inferred from those signals.
This is why the true foundation of AI-driven vehicles is not software, but perception hardware.
Modern cars do not have one way of sensing the environment. They have a stack of complementary sensors, each with different strengths and weaknesses, whose outputs are combined by complex algorithms in a process known as sensor fusion. The purpose of this stack is not redundancy for its own sake. It is to compensate for the fundamental limitations of each sensing modality.
A good technical overview of automotive sensors is provided here:
https://www.synopsys.com/automotive/autonomous-driving/sensors.html
Cameras are the most information-rich sensors on a car. They capture color, texture, text, shapes, and fine detail. They are also relatively cheap, compact, and easy to integrate into vehicle designs. For these reasons, they have become the backbone of almost all modern driver assistance and perception systems.
From the point of view of an AI system, a camera does not produce “an image.” It produces a grid of numbers representing light intensities. Neural networks are then used to transform these numbers into higher-level concepts: lanes, vehicles, pedestrians, traffic lights, road edges, and dozens of other categories.
These tasks, known as object detection and semantic segmentation, are among the most computationally demanding in modern vehicles.
A clear explanation of how this works in practice can be found here:
https://www.nvidia.com/en-us/self-driving-cars/ai-for-autonomous-vehicles/
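One concrete piece of this pipeline is the post-processing that turns a network’s raw output, thousands of candidate boxes with confidence scores, into a final list of detected objects. The standard technique is non-maximum suppression, sketched here with invented boxes and scores:

```python
# Simplified non-maximum suppression: the standard post-processing
# step that turns a detection network's raw candidate boxes into a
# final list of objects. Boxes and scores below are invented.

def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, score_thresh=0.5, iou_thresh=0.5):
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if scores[i] < score_thresh:
            continue   # too uncertain to report at all
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)   # not a duplicate of a stronger detection
    return keep

boxes = [(10, 10, 50, 90), (12, 8, 52, 88), (200, 40, 260, 90)]
scores = [0.92, 0.81, 0.77]   # two overlapping "pedestrian" boxes + one "car"
print(nms(boxes, scores))     # [0, 2]: the duplicate is suppressed
```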
However, cameras have a fatal weakness: they see the world the way humans do, and therefore they fail in many of the same ways. They struggle with:
Low sun angles and glare
Heavy rain, fog, or snow
Darkness and low contrast
Sudden transitions between light and shadow
To a neural network, a badly lit scene is not “difficult.” It is statistically unfamiliar. And unfamiliar inputs are exactly where machine-learning systems are most likely to make confident mistakes.
Radar works on a completely different physical principle. Instead of measuring light, it emits radio waves and measures their reflections. This makes radar largely immune to darkness, glare, and many weather conditions.
Radar excels at two things: measuring distance and measuring relative speed. It can tell very precisely how far away an object is and how fast it is approaching or receding. This is why radar has been used for decades in adaptive cruise control and collision warning systems.
What radar cannot do well is understand shape or category. A radar return does not tell you whether an object is a car, a motorcycle, or a metal sign. It tells you that something reflective is there.
Modern high-resolution automotive radar is becoming much more capable, but it still does not provide the kind of rich semantic information that cameras do.
A good overview of automotive radar technology:
https://www.ti.com/applications/automotive/adas/radar.html
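The two measurements radar is good at follow directly from physics. A worked sketch of the standard relations, with illustrative numbers (77 GHz is a typical automotive carrier frequency):

```python
# The two things radar is good at, from first principles. Numbers are
# illustrative; real automotive radar layers considerable signal
# processing on top of these basic relations.

C = 3.0e8          # speed of light, m/s
F_CARRIER = 77e9   # typical automotive radar carrier frequency, Hz

def range_from_delay(round_trip_s: float) -> float:
    # Distance: the echo travels out and back, hence the factor of 2.
    return C * round_trip_s / 2

def speed_from_doppler(doppler_shift_hz: float) -> float:
    # Relative speed from the Doppler shift of the reflected wave.
    return doppler_shift_hz * C / (2 * F_CARRIER)

print(range_from_delay(400e-9))    # 60.0 m
print(speed_from_doppler(10_000))  # ~19.5 m/s closing speed (~70 km/h)
```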
Lidar (Light Detection and Ranging) systems measure distance by emitting laser pulses and timing their reflections. The result is a highly accurate three-dimensional point cloud of the environment.
In pure geometric terms, lidar is the most precise perception sensor currently available for vehicles. It can produce extremely accurate 3D models of surrounding objects and road geometry.
This is why many companies developing higher levels of automation rely heavily on lidar:
https://www.intel.com/content/www/us/en/automotive/driving-safety/technology/lidar.html
However, lidar systems are expensive, mechanically complex (in many designs), and visually intrusive. They also struggle in heavy rain, fog, and snow, and they provide little semantic information by themselves. A lidar point cloud tells you where surfaces are, not what they mean.
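It is worth seeing what a point cloud literally is: for each laser return, a measured distance plus the beam’s known direction, converted into an (x, y, z) coordinate. A minimal sketch:

```python
import math

# What a lidar point cloud actually is: each laser return is a
# measured distance plus the beam's known direction, converted to an
# (x, y, z) point. Pure geometry; no meaning attached.

def to_xyz(distance_m, azimuth_deg, elevation_deg):
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    x = distance_m * math.cos(el) * math.cos(az)   # forward
    y = distance_m * math.cos(el) * math.sin(az)   # left
    z = distance_m * math.sin(el)                  # up
    return (x, y, z)

# One return: a surface 25 m ahead, slightly right of and below the sensor.
print(to_xyz(25.0, -5.0, -2.0))
# The sensor knows *where* this surface is, not *what* it is.
```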

Ultrasonic sensors, commonly used for parking and low-speed maneuvering, operate at very short ranges. They are cheap and reliable for detecting nearby obstacles, curbs, and walls, but they are useless for high-speed perception or long-range planning.
They are included here not because they are glamorous, but because they illustrate an important point: the sensor stack is layered, with different technologies covering different spatial and temporal scales.
Each of these sensors sees the world in a different way, with different strengths and blind spots. Cameras capture appearance but judge distance poorly. Radar measures distance and speed but not identity. Lidar maps shape but not meaning.
Sensor fusion is the process of combining these partial, imperfect views into a single, more reliable internal model of the world.
This is not a simple averaging process. It involves complex probabilistic algorithms that must decide:
Which sensors to trust in which conditions
How to reconcile conflicting information
How to track objects over time
How to maintain a stable world model even when data is noisy or missing
A technical but accessible overview of sensor fusion in autonomous systems:
https://www.mathworks.com/discovery/sensor-fusion.html
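The essence of fusion can be shown in one dimension: two noisy estimates of the same distance, combined by weighting each sensor according to its expected noise. This inverse-variance weighting is the heart of a Kalman update; all numbers below are invented:

```python
# One-dimensional illustration of probabilistic sensor fusion: two
# noisy estimates of the same distance, combined by inverse-variance
# weighting (the core of a Kalman update). Numbers are invented.

def fuse(mean_a, var_a, mean_b, var_b):
    # The less noisy sensor automatically gets the larger weight.
    w_a = var_b / (var_a + var_b)
    fused_mean = w_a * mean_a + (1 - w_a) * mean_b
    fused_var = (var_a * var_b) / (var_a + var_b)
    return fused_mean, fused_var

radar = (42.0, 0.25)   # radar: 42.0 m, low variance (good at range)
camera = (45.0, 4.0)   # camera: 45.0 m, high variance (poor at range)

mean, var = fuse(*radar, *camera)
print(round(mean, 2), round(var, 3))  # 42.18 0.235: pulled toward radar,
                                      # and more certain than either alone
```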
This fused model is what the rest of the vehicle’s AI systems actually operate on. They do not “see” raw camera images or radar echoes. They see an abstract, continuously updated reconstruction of the world.
At this point, it is impossible to avoid one of the most important and controversial debates in the automotive industry: whether a car should rely primarily on cameras, or whether it should use a richer sensor stack including radar and lidar.
Tesla is the most prominent advocate of a camera-dominant (and recently, camera-only) approach. The argument is that humans drive using vision, so a sufficiently advanced vision system should be enough.
Other manufacturers and most of the robotics and autonomous driving research community strongly disagree. They argue that:
Redundancy is essential for safety
Different physical principles fail in different ways
No single sensor modality is reliable in all conditions
This debate is not philosophical. It is about risk management.
The choice a manufacturer makes here will determine:
How the car behaves in bad weather
How it handles unusual situations
How much it must rely on statistical inference versus physical measurement
How much compute power it needs
How expensive it is to build and maintain
By the time sensor data has passed through detection networks, tracking systems, fusion algorithms, and prediction models, what the car is reacting to is no longer the world itself. It is an internal simulation of the world.
In most cases, that simulation is good enough. In edge cases, it can be dangerously wrong.
Understanding this is essential, because it explains both the impressive capabilities and the sometimes baffling failures of modern driver assistance systems. The car is not “seeing.” It is guessing, very fast, with very expensive hardware.

After reading about perception systems, sensor stacks, and the immense computational machinery inside modern vehicles, it is tempting to assume that fully autonomous cars are simply a matter of time and incremental improvement. This belief is widespread, heavily promoted, and deeply misleading.
The central problem is not that today’s systems are insufficiently powerful. It is that they are built on a class of technologies that have structural, unavoidable limitations when deployed in open-ended, adversarial, real-world environments like public roads.
To understand why, one must first confront an uncomfortable fact: machine-learning systems do not understand the world in the way humans do. They model correlations, not causation. They recognize patterns, not concepts.
A neural network trained to recognize pedestrians does not know what a pedestrian is. It knows only that certain patterns of pixels are statistically associated with the label “pedestrian” in its training data. When the environment matches that data, performance is excellent. When it does not, the system is flying blind.
This is not a temporary weakness that will disappear with more data. It is a fundamental property of how these systems work.
MIT’s explanation of this limitation is one of the clearest:
https://news.mit.edu/2021/deep-learning-real-world-0621
In controlled environments, such as chess, Go, or even highway lane keeping, statistical intelligence works extremely well. In open-ended environments with unpredictable human behavior, unusual objects, strange lighting, and infinite variation, it runs into what researchers call the long tail problem.
Most driving situations are repetitive and predictable. That is why driver assistance systems can appear almost magical in normal conditions. But the real world is not defined by normal conditions. It is defined by the rare, the weird, and the unexpected.
Examples of “long tail” situations include:
A pedestrian in an unusual costume
A vehicle carrying an irregular load
Temporary construction with non-standard markings
A fallen tree partially blocking a road
A traffic officer giving hand signals instead of using lights
Flooded roads reflecting the sky like mirrors
A truck carrying a billboard that looks like a wall
No dataset can exhaustively cover these cases. And because machine-learning systems interpolate from past data rather than reason from first principles, they can fail in ways that seem absurd to human observers.
This is not speculation. It is observed behavior in every large-scale deployment of such systems.
A good overview of the “edge case” problem in autonomous driving:
https://spectrum.ieee.org/why-self-driving-cars-are-so-hard
Because the entire decision-making process depends on perception, small errors early in the pipeline can cascade into large errors in behavior.
A misclassified shadow becomes a phantom obstacle.
A missed sign becomes a traffic violation.
A confused lane marking becomes a dangerous steering decision.
What makes this especially dangerous is that these systems often fail silently. They do not know that they are confused. They continue to produce confident outputs even when the input is far outside their training distribution.
This phenomenon, known as “out-of-distribution failure,” is well documented in the research literature:
https://arxiv.org/abs/2102.12164
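A toy demonstration makes the danger vivid: a classifier’s confidence can saturate toward certainty precisely when the input is furthest from anything it was trained on. The model below is entirely synthetic, but real networks show the same qualitative effect:

```python
import math

# Toy demonstration of out-of-distribution overconfidence. A logistic
# model fitted on inputs near 0 is shown an input far outside that
# range: the sigmoid saturates, so confidence goes UP, not down.
# Entirely synthetic; real networks show the same qualitative effect.

def confidence(x, w=1.5, b=0.0):
    # P(class = 1) for a logistic model; the training region was |x| < 3.
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

for x in (1.0, 3.0, 50.0):   # the last input is far out of distribution
    print(x, round(confidence(x), 6))
# 1.0  -> 0.817574   (in distribution, moderate confidence)
# 3.0  -> 0.989013   (edge of the training data)
# 50.0 -> 1.0        (nonsense input, maximal confidence)
```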
One of the most common arguments made by proponents of rapid full autonomy is that more data, more compute, and bigger models will eventually solve these problems.
This belief is based on real successes in language models and image recognition. But driving is not a closed-world problem. It is an embedded physical problem involving other intelligent agents, social norms, legal ambiguity, and moral judgment.
Scaling helps. It shrinks the long tail, but it does not eliminate the category of problem.
Even companies with effectively unlimited data and compute have begun to quietly acknowledge this. Timelines for full autonomy have been repeatedly pushed back, often by a decade at a time.
A sober industry analysis:
https://www.rand.org/pubs/research_reports/RRA194-1.html
Traditional safety-critical systems can be mathematically analyzed and verified. You can prove that, under certain assumptions, they will not enter unsafe states.
You cannot do this with large neural networks.
At best, you can test them statistically. But public roads contain effectively infinite scenarios. This means you can never be certain you have tested the one that will cause a catastrophic failure.
This is not an academic concern. It is one of the main reasons regulators are extremely cautious about high levels of autonomy.
Overview of the verification problem:
https://www.nature.com/articles/s42256-020-00262-9
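The arithmetic behind this caution is worth doing once. A standard statistical rule of thumb, the “rule of three,” says that after N trials with zero observed failures, the 95% upper confidence bound on the failure rate is roughly 3/N. Applied to driving, with an illustrative failure rate of the same order as the human fatal-crash rate:

```python
# The "rule of three": after N trials with zero failures, the 95%
# upper confidence bound on the failure rate is roughly 3/N. Applied
# to driving, it shows why purely statistical validation is brutal.
# The rate below is the rough order of the US human fatal-crash rate.

human_fatal_rate = 1 / 100_000_000   # ~1 fatality per 100 million miles

# Miles needed, with ZERO failures observed, to claim with 95%
# confidence that a system is at least as safe as that rate:
miles_needed = 3 / human_fatal_rate
print(f"{miles_needed:,.0f} miles")  # 300,000,000 miles

# And that demonstrates only parity, for one software version.
# Every significant model update resets the count.
```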
Ironically, partial automation may be more dangerous than no automation at all.
When systems work most of the time, humans begin to trust them. When they fail rarely but suddenly, humans are often out of the loop and unable to react in time.
This phenomenon is well studied in aviation and increasingly in automotive contexts:
https://www.nhtsa.gov/road-safety/driver-assistance-technologies
It is one of the reasons so many crashes involving “semi-autonomous” systems feature inattentive drivers who believed the car was more capable than it actually was.
The persistent delays in achieving reliable, general-purpose self-driving are not due to lack of effort or lack of money. They are due to the fact that the problem is qualitatively harder than early optimism assumed.
This does not mean progress will stop. It means progress will be:
Slower
More domain-limited
More geofenced
More supervised
More constrained by weather and infrastructure
In other words, the future before 2027 will be dominated not by robot chauffeurs, but by increasingly capable, increasingly complex driver assistance systems that still require human supervision.
And this brings us to the part that affects you most directly as a buyer and owner.

Up to this point, we have been talking about technology in abstract terms: sensors, models, perception systems, and limitations. But the real impact of artificial intelligence in cars is not theoretical. It is economic, legal, and practical. It changes what it means to own a vehicle, how long that vehicle remains usable, who is allowed to fix it, and who ultimately controls its behavior.
In the mechanical era of automobiles, ownership was a relatively simple concept. You bought a machine. You maintained it. You repaired it. Its behavior was largely fixed at the time it left the factory. Software existed, but it was peripheral. The identity of the car was defined by its hardware.
That world is ending.
Modern AI-driven cars are no longer static products. They are software platforms on wheels.
This means several things at once:
The car you buy is not the car you will have in three years
Core behaviors can change through updates
Features can be added, removed, or restricted remotely
Performance, efficiency, and even safety characteristics can be modified without touching a single physical component
Over-the-air updates, once limited to infotainment systems, are now increasingly used to modify powertrain behavior, battery management, driver assistance logic, and even braking and steering characteristics.
An overview of OTA updates in modern vehicles:
https://www.synopsys.com/automotive/what-is/over-the-air-updates.html
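Mechanically, an OTA pipeline is a chain of trust: the vehicle must verify that an update genuinely comes from the manufacturer before installing it. The sketch below is heavily simplified, using a shared-secret signature where real vehicles use asymmetric cryptography and layered manifests; every name and value in it is illustrative:

```python
import hashlib
import hmac

# Heavily simplified sketch of OTA update verification. Real vehicles
# use asymmetric signatures, version manifests, and rollback protection
# (e.g. the Uptane framework); a shared-secret HMAC stands in here.

VEHICLE_KEY = b"provisioned-at-factory"   # illustrative secret

def sign(firmware: bytes, key: bytes) -> str:
    return hmac.new(key, firmware, hashlib.sha256).hexdigest()

def install(firmware: bytes, signature: str) -> str:
    expected = sign(firmware, VEHICLE_KEY)
    if not hmac.compare_digest(expected, signature):
        return "rejected: signature mismatch"
    # A real system would also check version numbers to prevent
    # rollback attacks, then write to an inactive storage slot so a
    # failed update cannot brick the car.
    return "installed"

update = b"new battery-management parameters"
print(install(update, sign(update, VEHICLE_KEY)))   # installed
print(install(update, "forged"))                    # rejected
```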
This is not necessarily bad. In some cases, it allows manufacturers to fix serious flaws or improve efficiency long after a car has been sold. But it also means that you no longer fully control the behavior of your own vehicle.
Your car becomes, in part, a service.
Once a car becomes a software platform, a new business model becomes irresistible to manufacturers: selling functionality separately from hardware.
We are already seeing:
Heated seats and steering wheels locked behind software subscriptions
Driver assistance features that require ongoing payments
Performance upgrades that exist in software but are artificially restricted
Battery capacity or charging speed limited by software
A widely discussed example:
https://www.theverge.com/2022/7/12/23205258/bmw-heated-seats-subscription
From an engineering point of view, this is efficient. From an ownership point of view, it is a fundamental redefinition of what “buying a car” means.
You are no longer buying a fully defined machine. You are buying access to a configurable software-defined system, some of whose capabilities may be withheld unless you continue to pay.
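In software terms, such a feature is usually just an entitlement check: the hardware is present and functional, and a remotely synchronized flag decides whether you may use it. A minimal, entirely hypothetical sketch:

```python
# Minimal sketch of subscription-gated functionality: the hardware
# works either way; a remotely controlled flag decides whether the
# owner may use it. All names and features here are invented.

entitlements = {                  # synced from the manufacturer's servers
    "heated_seats": False,        # hardware installed, not paid for
    "adaptive_cruise": True,
}

def activate(feature: str) -> str:
    if not entitlements.get(feature, False):
        return f"{feature}: present in hardware, disabled in software"
    return f"{feature}: enabled"

print(activate("heated_seats"))
print(activate("adaptive_cruise"))
```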
In a traditional car, replacing a part was mostly a mechanical operation. In a modern AI-driven car, many components are part of a tightly integrated, calibrated system.
Examples:
A windshield replacement may require camera recalibration
A bumper replacement may require radar realignment
A steering component replacement may require software pairing and security authorization
A battery or control module replacement may require cryptographic unlocking
This is not a conspiracy. It is a direct consequence of how tightly coupled the perception and control systems have become.
However, it has enormous consequences for independent repair and long-term ownership.
The Right to Repair movement exists largely because of this shift:
https://www.repair.org/stand-up
And in the automotive context:
https://www.ifixit.com/Right-to-Repair
The practical result is that many modern cars are becoming:
More expensive to repair
More dependent on dealer or manufacturer access
More difficult to keep running outside official service networks
In software-defined, AI-heavy vehicles, many failures are no longer physical.
They are:
Sensor miscalibrations
Software bugs
Model regressions
Update incompatibilities
Data corruption or configuration errors
These failures can be intermittent, difficult to diagnose, and sometimes impossible for the owner to reproduce.
Worse, because behavior is probabilistic, a system may “mostly work” while occasionally doing something dangerously wrong. This is a very different failure mode from a broken pump or a worn bearing.
This is one of the reasons modern automotive debugging increasingly resembles IT operations rather than mechanical repair.
An AI-driven car is, by necessity, a data-collecting machine.
It records, depending on manufacturer and region:
Driving behavior
Locations and routes
Sensor recordings
Driver attention and gaze direction
Usage patterns and habits
This data is used for:
Improving models
Diagnosing problems
Training future systems
Sometimes, monetization
A good overview of automotive data privacy issues:
https://foundation.mozilla.org/en/privacynotincluded/categories/cars/
The uncomfortable reality is that many modern cars collect more behavioral data than smartphones, and they are often subject to weaker privacy controls and less transparent policies.
In the mechanical era, a car became obsolete when its hardware wore out or its emissions standards became unacceptable.
In the software era, a car can become obsolete because:
The manufacturer stops providing updates
Online services are shut down
Security certificates expire
New infrastructure is no longer supported
Features are remotely disabled
We have already seen this happen in other industries. There is no technical reason it will not happen to cars.
When you buy a modern vehicle, you are implicitly betting not just on its physical durability, but on the long-term business decisions of the company that controls its software ecosystem.
All of this leads to a question that every buyer in the second half of the 2020s will have to confront, whether they realize it or not:
Are you buying a machine, or are you entering into a long-term software relationship?
Different manufacturers are answering this question in different ways. Some are aggressively pushing toward closed, subscription-driven, tightly controlled platforms. Others are, at least for now, more conservative.
But the direction of travel is clear.

Artificial intelligence in cars is not primarily about self-driving. It is about control shifting from hardware to software, and from owner to manufacturer.
Before 2027, you will see:
More AI in core vehicle functions
More behavior defined by updates
More features sold as software
More dependence on sensors and calibration
More data collection
More opaque failure modes
More lock-in
This does not mean modern cars are bad. Many of them are astonishingly capable. But they are a different category of product than the cars most people grew up with.
If you approach them with a mechanical-era mindset, you will make bad decisions.
If you understand that you are buying into a software ecosystem, you can at least choose your compromises consciously.
Artificial intelligence in transportation:
https://en.wikipedia.org/wiki/Artificial_intelligence_in_transportation
Bosch on AI in mobility:
https://www.bosch-mobility.com/en/solutions/software-and-ai/artificial-intelligence/
ADAS and safety systems:
https://www.euroncap.com/en/vehicle-safety/the-ratings-explained/adas/
Automotive sensors overview:
https://www.synopsys.com/automotive/autonomous-driving/sensors.html
NVIDIA automotive AI:
https://www.nvidia.com/en-us/self-driving-cars/ai-for-autonomous-vehicles/
Radar in ADAS:
https://www.ti.com/applications/automotive/adas/radar.html
Lidar in autonomous vehicles:
https://www.intel.com/content/www/us/en/automotive/driving-safety/technology/lidar.html
Sensor fusion explained:
https://www.mathworks.com/discovery/sensor-fusion.html
Why deep learning fails in the real world:
https://news.mit.edu/2021/deep-learning-real-world-0621
Why self-driving is hard (IEEE):
https://spectrum.ieee.org/why-self-driving-cars-are-so-hard
Verification and safety of AI systems:
https://www.nature.com/articles/s42256-020-00262-9
NHTSA on driver assistance:
https://www.nhtsa.gov/road-safety/driver-assistance-technologies
Over-the-air updates in vehicles:
https://www.synopsys.com/automotive/what-is/over-the-air-updates.html
Right to Repair:
https://www.repair.org/stand-up
https://www.ifixit.com/Right-to-Repair
Car privacy issues:
https://foundation.mozilla.org/en/privacynotincluded/categories/cars/