
Data Privacy and Security in Connected and Autonomous Vehicles

January 24, 2026
The Car Has Become a Data Device With a Steering Wheel Attached

A connected vehicle is no longer just transportation; it is a sensor platform that continuously produces data about where you go, how you drive, who rides with you, what you listen to, and—through inference—what kind of person you are likely to be. The industry sometimes describes this as “enhanced user experience,” but that framing is incomplete. When a product can generate precise location histories and behavioral profiles, it becomes a privacy product whether the manufacturer admits it or not, and the failure modes extend far beyond annoying ads into physical safety, discrimination, coercion, and identity risk. The European Data Protection Board has explicitly treated connected vehicles and mobility applications as a context where personal data processing raises distinctive risks that must be assessed with care, not treated as ordinary telemetry.*1

The second uncomfortable truth is that privacy and security are now inseparable. Privacy is what happens when systems behave as promised and data is handled with legitimate purpose and consent. Security is what happens when the world is hostile—when attackers, insiders, or partners try to use your data in ways you never agreed to. A connected vehicle that is “secure” but still monetizes your data without meaningful control is still a privacy problem. And a vehicle that claims to respect privacy but cannot protect its identity systems, APIs, or update pipeline will eventually become a privacy problem through breach. Regulators increasingly treat cybersecurity as a safety issue because vulnerabilities can translate into physical harm, which reinforces that these are not merely consumer preference topics but public-risk topics.*2

What Data Cars Collect, and Why the Category Keeps Expanding

The first category is obvious: location. Vehicles capture GPS location for navigation, roadside assistance, theft recovery, usage-based insurance, and fleet operations. But the modern location dataset is not just “where the car is.” It is time-stamped, continuous, and linkable to identity through account systems, device pairing, and subscription services. Under the GDPR, “location data” is explicitly named as a type of identifier that can make a person identifiable, which is why automakers cannot pretend location is “technical data” and outside privacy law.*3

The second category is driving behavior. Speed profiles, harsh braking, acceleration patterns, cornering, mileage, and trip timing form a behavioral signature. This data is often justified as safety improvement or insurance personalization, but it also becomes a behavioral scoring tool. In early 2026, the U.S. Federal Trade Commission finalized an order alleging that a major automaker collected and sold precise geolocation and driving behavior data without clear consent, illustrating how easily “driver improvement features” can become an opaque data pipeline with real financial consequences for consumers.*4
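To make the “behavioral signature” concrete, here is a deliberately simplified sketch of how raw trip telemetry collapses into a score. The thresholds, penalty weights, and field names are invented for illustration; no vendor’s actual scoring model is implied.

```python
# Toy "driver score": invented thresholds and weights, for illustration only.
from dataclasses import dataclass

@dataclass
class TripSample:
    speed_kph: float   # instantaneous speed
    accel_ms2: float   # longitudinal acceleration (negative = braking)

def count_harsh_events(samples: list[TripSample],
                       brake_threshold: float = -3.0,
                       accel_threshold: float = 2.5) -> tuple[int, int]:
    """Count harsh-braking and rapid-acceleration events in one trip."""
    harsh_brakes = sum(1 for s in samples if s.accel_ms2 <= brake_threshold)
    rapid_accels = sum(1 for s in samples if s.accel_ms2 >= accel_threshold)
    return harsh_brakes, rapid_accels

def driver_score(samples: list[TripSample]) -> float:
    """Start at 100 and subtract weighted penalties per harsh event."""
    brakes, accels = count_harsh_events(samples)
    return max(0.0, 100.0 - 5.0 * brakes - 3.0 * accels)

trip = [TripSample(62, 0.4), TripSample(55, -3.8), TripSample(70, 2.9)]
print(driver_score(trip))  # 92.0 (one harsh brake, one rapid acceleration)
```

A few dozen lines like these are all it takes to turn “safety telemetry” into a number an insurer can price against, which is exactly why the consent and disclosure questions matter.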

The third category is biometric and in-cabin data. Driver monitoring systems, face recognition for personalization, voice prints for assistants, heart-rate or fatigue detection through sensors, and occupant detection for safety all push vehicles toward sensitive-data territory. Article 9 of the GDPR places special restrictions on processing sensitive categories such as biometric data used for uniquely identifying a person, which is a direct warning to any vehicle program that treats face or voice identity as a casual convenience feature.*5

The fourth category is inferred data. Even when a dataset does not explicitly include protected traits, inference can produce them. A vehicle that knows where you stop regularly can infer your workplace and your home. It can infer religious attendance patterns, medical visits, political participation, and personal relationships. Privacy risk is not only what you collect, but what can reasonably be derived from it, and connected vehicle ecosystems are inference engines by design.*1
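The inference claim is easy to demonstrate. The sketch below uses invented coordinates and a crude grid-rounding step as a stand-in for real spatial clustering: the most frequent overnight stop cell is labeled “home” and the most frequent weekday-daytime cell “work.”

```python
# Minimal location-inference sketch: frequent overnight stops suggest home,
# frequent weekday daytime stops suggest work. Coordinates and the ~1 km
# grid rounding are illustrative assumptions.
from collections import Counter
from datetime import datetime

def cell(lat: float, lon: float) -> tuple:
    """Snap coordinates to a coarse grid cell (~1 km) to group nearby stops."""
    return (round(lat, 2), round(lon, 2))

def infer_home_and_work(stops):
    night, day = Counter(), Counter()
    for ts, lat, lon in stops:
        if ts.hour >= 22 or ts.hour < 6:                # overnight stop
            night[cell(lat, lon)] += 1
        elif ts.weekday() < 5 and 9 <= ts.hour < 17:    # weekday working hours
            day[cell(lat, lon)] += 1
    home = night.most_common(1)[0][0] if night else None
    work = day.most_common(1)[0][0] if day else None
    return home, work

stops = [
    (datetime(2026, 1, 5, 23, 10), 40.7415, -73.9871),  # overnight
    (datetime(2026, 1, 6, 23, 40), 40.7412, -73.9869),  # overnight, same cell
    (datetime(2026, 1, 6, 10, 15), 40.7128, -74.0060),  # weekday daytime
]
print(infer_home_and_work(stops))  # ((40.74, -73.99), (40.71, -74.01))
```

The same pattern extends trivially to places of worship, clinics, or political offices; the sensitivity lives in the derivation, not in any single GPS fix.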

The Ownership Illusion: “Your Data” Is Not a Legal Concept by Default

Consumers often speak about “owning” their car data, but most legal systems do not treat personal data ownership like property ownership. They treat it as a rights framework: rights to know, to access, to delete, to restrict processing, to object, and to portability depending on the jurisdiction. Under the GDPR model, the focus is on lawful basis, transparency, purpose limitation, and data minimization, not on a simple ownership claim.*3

In the United States, privacy is largely sectoral and state-driven, which creates fragmentation and ambiguity. California’s privacy framework gives consumers rights to know, delete, and opt out of sale or sharing, and it explicitly positions itself around consumer control rather than “ownership.” The California Attorney General’s materials describe these rights and the enforcement environment in a way that should make automakers and connected service providers cautious about any casual data-sharing practice that assumes consumers will not notice or act.*6

Europe is moving further by addressing access to data generated by connected products. The EU Data Act targets fairness in access and use of data from connected products and related services, which has implications for vehicle-generated data and the ability of users and third parties to obtain it under certain conditions. If you assumed that automakers would permanently control vehicle data as a private asset, this regulation is a sign that policymakers disagree and are willing to intervene in how data value is distributed.*7

Consent in Cars: The “OK Button” Is Not Meaningful Choice

The auto industry’s default consent ritual is the in-vehicle prompt: a dense policy summary and a large “OK” button that stands between the driver and the features they need to use the product safely. This is not meaningful consent; it is compliance theater. When consent is bundled into essential functionality, people click without understanding, and companies know that. The result is a consent regime that looks legalistic but behaves coercively, especially when drivers are pressured at delivery time or at the dealership.*1

Regulators are increasingly skeptical of this pattern. The FTC’s action involving connected vehicle data explicitly alleged misleading enrollment and inadequate disclosure around data collection and data sales, reinforcing that “you clicked something once” is not a reliable defense when the design nudges consumers into agreement and hides material facts.*4

Under GDPR principles, consent must be freely given, specific, informed, and unambiguous, and withdrawal must be as easy as giving consent. That standard is difficult to reconcile with user interfaces that pressure acceptance and bury opt-outs. The EDPB connected vehicles guidelines discuss the need for choice and minimization, and they also make clear that some processing should not rely on consent if consent cannot be freely given due to power imbalance or bundling.*1
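One way to honor the “withdrawal as easy as consent” standard is to make consent a purpose-specific, deny-by-default record with a single-call withdrawal path. The registry below is a minimal sketch under those assumptions; the class name and purpose strings are hypothetical.

```python
# Sketch: purpose-specific consent, deny by default, one-call withdrawal.
from datetime import datetime, timezone

class ConsentRegistry:
    def __init__(self):
        self._grants = {}   # (subject, purpose) -> timestamp of the grant

    def grant(self, subject: str, purpose: str) -> None:
        self._grants[(subject, purpose)] = datetime.now(timezone.utc)

    def withdraw(self, subject: str, purpose: str) -> None:
        # Withdrawal is one call, as easy as granting, by construction.
        self._grants.pop((subject, purpose), None)

    def is_permitted(self, subject: str, purpose: str) -> bool:
        # No recorded, specific consent means no processing.
        return (subject, purpose) in self._grants

registry = ConsentRegistry()
registry.grant("driver-1", "usage_based_insurance")
print(registry.is_permitted("driver-1", "usage_based_insurance"))  # True
registry.withdraw("driver-1", "usage_based_insurance")
print(registry.is_permitted("driver-1", "usage_based_insurance"))  # False
```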

The Commercial Pressure: Turning Mobility Data Into Revenue

Connected vehicle data is financially seductive. It can power personalized services, predictive maintenance, targeted offers, and partnerships with mapping, charging, and insurance ecosystems. But the most dangerous business model is the invisible one: selling or disclosing driving behavior and geolocation to third parties that influence consumer outcomes, such as insurance pricing or consumer scoring. Once that happens, the car is no longer just collecting data; it is participating in economic sorting.

Recent enforcement and public reporting have demonstrated this risk in plain terms. The FTC’s finalized order and complaint against a major OEM described allegations that precise geolocation and driving behavior data were sold to third parties without clear consumer consent, and that such data could be used to affect insurance outcomes. This is not a hypothetical “privacy harm.” It is measurable financial harm tied to opaque data sharing.*4

If you are an automaker or mobility service provider, the strategic question is not “can we monetize data?” The strategic question is “what kind of company do we become if we monetize data invisibly?” Once consumers believe your car is a surveillance device, trust collapses, regulators intensify scrutiny, and your brand becomes fragile. Even if your engineers build world-class ADAS, your business team can destroy the brand by behaving like a data broker.

Connected Vehicle Ecosystems: Your Risk Is Often in the Cloud, Not the Car

A modern connected vehicle is a cloud product. Remote lock/unlock, vehicle location, diagnostics, OTA updates, and subscription features are mediated by backend systems and APIs. This means that personal data security depends as much on cloud identity, access control, and API authorization as it does on in-vehicle defenses.
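Because remote commands ride on backend APIs, the authorization check is where privacy is actually enforced. Below is a hedged sketch of scope-based, deny-by-default authorization; the scope names, token fields, and command list are assumptions, and a real backend would verify cryptographically signed tokens (e.g., OAuth/JWT) rather than trust a plain object.

```python
# Deny-by-default authorization for remote vehicle commands.
# Scope names and token structure are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AccessToken:
    subject: str                       # account the token was issued to
    vehicle_id: str                    # vehicle the token is bound to
    scopes: set = field(default_factory=set)

REQUIRED_SCOPE = {
    "remote_unlock": "vehicle:commands:unlock",
    "locate": "vehicle:location:read",
    "read_diagnostics": "vehicle:diagnostics:read",
}

def authorize(token: AccessToken, vehicle_id: str, command: str) -> bool:
    """The token must be bound to this vehicle AND carry the exact scope."""
    if token.vehicle_id != vehicle_id:
        return False                   # token issued for a different car
    needed = REQUIRED_SCOPE.get(command)
    return needed is not None and needed in token.scopes

token = AccessToken("user-42", "VIN123", {"vehicle:location:read"})
print(authorize(token, "VIN123", "locate"))          # True
print(authorize(token, "VIN123", "remote_unlock"))   # False: missing scope
print(authorize(token, "VIN999", "locate"))          # False: wrong vehicle
```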

This is why privacy debates increasingly overlap with cybersecurity governance. NHTSA’s cybersecurity best practices emphasize lifecycle security and the reality that vehicles are cyber-physical systems whose vulnerabilities can affect safety. In connected vehicle ecosystems, a weak portal or API can translate into unauthorized access to location and account functions, which becomes both a security incident and a privacy incident at once.*2

Security frameworks help only if implemented honestly. Many breaches are not caused by cryptographic failure but by poor systems design, missing monitoring, weak credential governance, and inconsistent supplier controls. Auto-ISAC best practice guidance exists precisely because the ecosystem is too complex for informal security, and because a shared baseline is needed for governance, detection, and response.*8

Autonomous Vehicles Raise the Stakes: More Sensors, More Data, More Liability

Autonomous and highly automated systems require extensive sensing and logging. They collect and process high-fidelity data from cameras, radar, lidar, ultrasonic sensors, GPS, and IMUs, and they often store event data to reconstruct incidents and improve models. This creates tension: safety demands evidence, but privacy demands restraint.
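One way to hold that tension is architectural: keep high-fidelity data in a short rolling buffer and persist it only when a safety-relevant event fires. The sketch below assumes a 30-second window, which is an invented parameter, not a figure from any regulation.

```python
# Rolling event buffer: high-fidelity frames live only briefly unless a
# safety event triggers a snapshot. The 30 s window is an assumption.
from collections import deque
from time import time

class EventBuffer:
    def __init__(self, retention_seconds: float = 30.0):
        self.retention = retention_seconds
        self.samples = deque()              # (timestamp, sensor_frame)

    def record(self, frame: dict) -> None:
        now = time()
        self.samples.append((now, frame))
        # Age out anything beyond the retention window.
        while self.samples and now - self.samples[0][0] > self.retention:
            self.samples.popleft()

    def snapshot_on_event(self) -> list:
        """Persist only the short pre-event window, not the whole drive."""
        return list(self.samples)
```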

Regulatory models for automated driving increasingly involve data storage requirements that function like an aviation black box, designed to determine when the automated system was engaged and what occurred during relevant periods. UNECE’s UN Regulation No. 157 on automated lane keeping systems, which mandates a Data Storage System for Automated Driving (DSSAD), illustrates how autonomy governance pulls data collection into the legal framework.*9

If you believe “privacy will be solved later,” autonomy makes that belief dangerous. The data collected by autonomy systems can include images of pedestrians, license plates, faces, and bystanders, often processed in real time and sometimes retained for debugging. That creates privacy obligations not only to the vehicle owner but potentially to third parties whose data is captured incidentally.

Biometrics and In-Cabin Surveillance: The Next Flashpoint

Driver monitoring is becoming common because it helps prevent misuse of assistance systems and supports safety. But it also introduces a new kind of sensitivity: the cabin is an intimate space. Cameras can capture faces, emotional states, and passenger behavior. Microphones can capture conversations. Sensors can detect presence and physiological indicators. If these are stored, transmitted, or used beyond immediate safety needs, the cabin becomes a surveillance environment.

European rules on special category data, including biometrics used for identification, place high burdens on processing and require clear justification and safeguards. This matters because many in-cabin features are being marketed as personalization, convenience, or anti-theft, which do not automatically justify sensitive data processing at scale.*5

Even outside the EU, regulators and watchdogs increasingly treat biometric processing as high risk. The UK ICO guidance frames biometric data used for unique identification as special category data requiring strict conditions, which signals that vehicles using face or voice identity must approach it as regulated risk, not product flair.*10

The deeper question is cultural: are we designing vehicles to be safer, or are we normalizing surveillance as a condition of mobility? If the industry does not answer that question responsibly, lawmakers will answer it for them.

The Privacy-Security Interface: Why “We’re Secure” Is Not Enough

A vehicle can be technically secure and still be ethically wrong. If you collect excessive data, retain it indefinitely, and share it broadly, you may never be “breached” and still cause harm. This is why privacy frameworks emphasize data minimization, purpose limitation, and governance as core principles rather than treating privacy as a subset of cybersecurity.

The NIST Privacy Framework is useful here because it treats privacy risk as an enterprise risk management discipline, structured to align with cybersecurity governance and operational practices. It emphasizes inventory and mapping of data processing, which is exactly what connected vehicle companies often lack when multiple suppliers and business units operate overlapping data pipelines.*11

At the same time, cybersecurity standards like ISO/SAE 21434 focus on engineering risk management across the lifecycle of vehicle E/E systems. Even though the standard is framed around cybersecurity, its lifecycle mentality supports privacy outcomes because security controls prevent unauthorized access to personal data and functions. The message is that privacy and security share infrastructure, and weaknesses in either can destroy trust.*12

Over-the-Air Updates: Privacy and Security Depend on Patchability

Connected vehicles are not static. Vulnerabilities are discovered after sale, and privacy failures can arise from software design choices that evolve with services. The ability to update software securely and responsibly is therefore central to both safety and privacy.

UNECE Regulation No. 156 formalizes requirements for software update management systems, emphasizing organizational governance and control over updates. This matters because a privacy promise made at purchase can be broken later by a software update that changes data collection practices, adds telemetry, or modifies consent flows. Update governance must therefore include privacy governance, not only software quality.*13

ISO 24089 provides organizational and project-level requirements and recommendations for software update engineering for road vehicles, reinforcing that update processes must be designed as engineering systems, not improvised releases. If a company cannot patch quickly and safely, vulnerabilities persist. If it patches recklessly, it creates new safety and privacy risks. Mature update engineering is a trust requirement, not a convenience feature.*14
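At minimum, the vehicle side of that engineering system refuses any image whose signature does not verify against a key pinned at build time. The sketch below shows only that integrity gate, using the Ed25519 API from the Python `cryptography` package; key management, rollback protection, and staged rollout are deliberately omitted.

```python
# OTA integrity gate: install only if the signature verifies against a
# pinned manufacturer key. Everything else in a real pipeline is omitted.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Manufacturer side (signing infrastructure) ---
signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()     # pinned in the vehicle at build time

firmware = b"ecu-firmware-v2.4.1"
signature = signing_key.sign(firmware)

# --- Vehicle side (before installing anything) ---
def verify_update(blob: bytes, sig: bytes) -> bool:
    try:
        verify_key.verify(sig, blob)
        return True
    except InvalidSignature:
        return False

print(verify_update(firmware, signature))               # True: install
print(verify_update(b"tampered-firmware", signature))   # False: reject
```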

Data Sharing With Third Parties: The Fastest Way to Lose Legitimacy

Most privacy disasters in connected vehicles follow a familiar arc: a feature is marketed as optional or beneficial, data collection is broader than consumers expect, third-party sharing is not clearly disclosed, and the consequences surface when consumers face higher costs or unexpected tracking.

The FTC’s case involving alleged sale of geolocation and driving behavior data is a strong signal that regulators are now willing to treat connected car data practices as major consumer protection issues, with long-term compliance obligations and limits on sharing. If your business model depends on opaque sharing, you should assume you are building on sand.*4

California’s privacy regime also places pressure on the sale or sharing of personal information, with consumer rights to opt out, delete, and access. This is not merely theoretical. It is enforceable, and it is designed to give consumers leverage against exactly the kinds of hidden data economies that have grown around mobility data.*6

The strongest strategic move for automakers is to adopt a “default no” posture for third-party sharing, permitting it only when it is transparent, opt-in, and demonstrably beneficial to the consumer. Anything else will eventually be treated as exploitation.

Interoperability and “Extended Vehicle” Platforms: Privacy as Architecture

As vehicles become platforms, they expose resources through web services. The ISO “extended vehicle” standards explicitly define resource categories, including personal resources, and establish interoperable web service structures. This is not a minor technical detail. It is an architectural moment: the vehicle becomes an API surface, and the privacy model becomes the authorization model.*15

If you expose vehicle resources to insurers, fleet managers, service providers, or app developers, you need fine-grained control over who can access what, under what conditions, and with what auditability. Otherwise, privacy becomes impossible to guarantee because the ecosystem becomes too large and too porous.
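Concretely, “the privacy model becomes the authorization model” means every third-party read is checked against an explicit grant and every decision is logged. The sketch below is a minimal illustration; the partner names, resource labels, and grant structure are hypothetical.

```python
# Explicit grants plus an append-only audit trail for an extended-vehicle API.
# Partner and resource names are hypothetical.
import json
from datetime import datetime, timezone

GRANTS = {
    # (partner, resource) -> grant metadata
    ("insurer-a", "odometer"): {"expires": "2026-12-31"},
    ("fleet-mgr", "location"): {"expires": "2026-06-30"},
}

AUDIT_LOG = []

def access(partner: str, resource: str) -> bool:
    allowed = (partner, resource) in GRANTS
    AUDIT_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "partner": partner,
        "resource": resource,
        "decision": "allow" if allowed else "deny",
    })
    return allowed

access("insurer-a", "odometer")   # allowed under an explicit grant
access("insurer-a", "location")   # denied: never granted
print(json.dumps(AUDIT_LOG, indent=2))
```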

The EU Data Act adds another layer by shaping who may have access to data generated by connected products, which will pressure platform architectures to support lawful access and portability. If your architecture is built for exclusive control, you will face friction as legal frameworks push toward shared access and fairness.*7

The Regulatory Landscape: Fragmented Today, Converging Tomorrow

Europe has a relatively coherent privacy baseline through GDPR, sector-specific rules like ePrivacy, and emerging data governance laws like the Data Act. In this environment, a connected car company is expected to justify processing, minimize data, and give individuals control while ensuring security and accountability.*3 *16

In the United States, the story is more fragmented. Consumer privacy is shaped by state laws like California’s, by enforcement actions from agencies like the FTC, and by sectoral rules. That fragmentation creates a temptation for companies to arbitrage: to adopt minimal controls where enforcement seems weaker. The GM-related enforcement shows why that strategy is short-sighted: high-profile cases create precedent and accelerate broader scrutiny.*4

For autonomous systems, Europe’s AI Act adds another dimension: certain AI practices are prohibited and others are regulated by risk category, with obligations that can apply to high-risk systems. As vehicles deploy AI for monitoring, decision-making, and autonomy, this legal environment will shape what companies can do with data and how they must document and control risk.*17

The Blind Spot: Consumers Underestimate the Long Tail of Harm

Most consumers think privacy harm means spam or targeted ads. In vehicle contexts, privacy harm can mean stalking, domestic abuse escalation, targeted burglary, discriminatory pricing, or insurance penalties driven by opaque scoring. When vehicles collect precise location and behavior and link it to identity, the dataset becomes actionable in ways ordinary consumers do not anticipate.

Watchdog research has highlighted how poorly many car brands perform on privacy and security expectations, and how data collection can be broad, difficult to control, and difficult to opt out of. Even if you disagree with every conclusion, the existence of this critique matters because it shapes public perception and raises the reputational cost of weak privacy practices.*18

The mature response is not to dismiss consumer concerns as ignorance. It is to treat them as a warning signal that the social license for connected mobility is fragile. If the industry fails to build credible control and transparency, consumers will push back through politics, litigation, and purchasing behavior.

Threat Models: Privacy Abuses Are Not Only Hacker Stories

When executives hear “privacy risk,” they often imagine hackers. That is only one threat model. Privacy abuse can come from insiders, from vendors, from “partners” who reuse data, from overly broad employee access, and from systems designed to monetize without oversight.

Cybersecurity best practice guidance emphasizes threat detection, monitoring, and governance because risk is not confined to external attackers. Connected vehicle ecosystems must assume that misuse can occur through legitimate access pathways if controls are weak. Auto-ISAC guidance exists to help organizations implement risk management, collaboration, and lifecycle security practices precisely because the ecosystem includes many parties and many incentives.*8

NHTSA’s cybersecurity guidance similarly emphasizes the lifecycle approach and the reality that vulnerabilities can affect safety. If you are building connected features, you should assume your security posture will eventually be tested. Privacy resilience depends on preparing for that test, not hoping it never comes.*2

What “Good” Looks Like: Principles That Hold Under Scrutiny

A professional privacy program in connected and autonomous vehicles begins with data minimization. If you do not need a data element to provide a service safely and reliably, you should not collect it. Minimization is not only ethical; it is pragmatic. Every extra data element increases breach impact and governance cost. The EDPB guidelines emphasize limiting processing to what is necessary and avoiding default collection beyond core purposes.*1
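Minimization is most credible when it is enforced in code at the point of collection rather than promised in a policy. The sketch below keeps only the fields a declared purpose justifies and coarsens location where precision is not needed; the field names, purposes, and ~1 km rounding are illustrative assumptions.

```python
# Purpose-scoped collection: keep only justified fields, coarsen the rest.
RAW_TELEMETRY = {
    "vin": "VIN123",
    "lat": 40.741512, "lon": -73.987134,   # ~1 m precision
    "speed_kph": 63.2,
    "cabin_audio_level": 0.41,             # justified by no purpose below
    "contacts_synced": True,               # justified by no purpose below
}

PURPOSE_FIELDS = {
    "navigation": {"lat", "lon", "speed_kph"},   # needs precise position
    "fleet_analytics": {"lat", "lon"},           # coarse position suffices
}

def minimize(record: dict, purpose: str) -> dict:
    kept = {k: v for k, v in record.items() if k in PURPOSE_FIELDS[purpose]}
    if purpose == "fleet_analytics":             # blur to ~1 km cells
        kept["lat"], kept["lon"] = round(kept["lat"], 2), round(kept["lon"], 2)
    return kept

print(minimize(RAW_TELEMETRY, "fleet_analytics"))
# {'lat': 40.74, 'lon': -73.99}
```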

The second principle is purpose limitation with real enforcement. Many companies declare purposes broadly so they can do anything later. That approach may be legally risky and is always trust-destructive. A purpose that includes “improving services” can become a loophole that swallows meaningful consent. Frameworks like NIST’s encourage clear inventory and mapping, which forces organizations to be honest about what they collect and why.*11

The third principle is genuine user control. If users cannot easily disable data collection, access their data, delete it, and understand sharing, the system is not respectful. California privacy rights and enforcement structures reflect a growing expectation that consumers must have real control, not symbolic control.*6

The fourth principle is security by design across the lifecycle. ISO/SAE 21434 exists because vehicle cybersecurity must be treated as engineering risk management, not ad hoc patching. Privacy cannot survive without security because unauthorized access to location and identity functions is catastrophic.*12

What Automakers Should Do: A Harder Standard Than Marketing Teams Prefer

If you are an OEM, your first move should be to treat privacy as a product safety and brand integrity issue, not as a legal checklist. If your privacy posture depends on consumers not reading terms, you are building future enforcement and reputation crises. The FTC’s connected vehicle enforcement is evidence that regulators will punish deceptive or unclear enrollment and undisclosed data sales, and that punishment can include long-term obligations, not just fines.*4

Your second move should be architecture-level governance. Identify every data flow from vehicle to cloud to partner, map it, restrict it, and audit it. Use privacy risk management frameworks to make this a repeatable practice rather than a one-time project. NIST’s framework is designed to help organizations build such repeatability and align privacy governance with operational realities.*11
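A data-flow map only disciplines anyone if it is machine-checkable. One minimal pattern, sketched below with entirely hypothetical entries, is to declare each flow’s data, purpose, and legal basis, then flag any external disclosure of personal data that lacks a documented basis.

```python
# Data-flow inventory as a checkable artifact, not a slide. Entries invented.
FLOWS = [
    {"source": "telematics_ecu", "dest": "oem_cloud",
     "data": ["location", "speed"], "purpose": "diagnostics",
     "basis": "contract"},
    {"source": "oem_cloud", "dest": "insurer-a",
     "data": ["driving_score"], "purpose": "pricing",
     "basis": None},
]

INTERNAL = {"telematics_ecu", "oem_cloud"}

def flag_undocumented_sharing(flows: list) -> list:
    """Flag external disclosures that lack a recorded legal basis."""
    return [f for f in flows
            if f["dest"] not in INTERNAL and f["basis"] is None]

for flow in flag_undocumented_sharing(FLOWS):
    print("UNDOCUMENTED SHARING:", flow["dest"], flow["data"])
# UNDOCUMENTED SHARING: insurer-a ['driving_score']
```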

Your third move should be radical transparency. Publish plain-language explanations of what data is collected, how often, where it goes, and how to turn it off. Make deletion and access requests workable. Avoid burying controls in menus designed to discourage use. California’s regulatory environment and consumer rights model will punish companies that hide choice, and consumers will punish them long before regulators do.*6

Your fourth move should be restraint with third-party sharing. Treat any sale or disclosure of location or driving behavior as high-risk and presumptively unacceptable unless the consumer has clearly opted in and can easily opt out. The real-world enforcement trajectory shows that invisible sharing is becoming politically toxic.*4

What Suppliers and Mobility Platforms Should Do: Stop Assuming OEMs Will Cover You

Suppliers build telematics units, infotainment systems, voice assistants, analytics pipelines, identity integrations, and OTA platforms. If you handle personal data, you are not shielded by the OEM brand. Your software can become the weak link that turns a privacy program into a breach story.

Adopt automotive cybersecurity engineering requirements and integrate them into your development lifecycle. ISO/SAE 21434’s lifecycle framing is relevant not only to OEMs but to any supplier building components that shape the security posture of the vehicle ecosystem.*12

Treat update engineering as a core discipline. Vehicles live long lives, and vulnerabilities appear long after sale. Standards like ISO 24089 emphasize organizational and project-level controls for update engineering, which suppliers must support if they want to be credible in safety-critical ecosystems.*14

And be honest about data usage. If your platform’s business model depends on reusing vehicle data for analytics, training, or resale, disclose that clearly and give OEMs and consumers real control. Anything else is a time bomb.

What Regulators Should Do: Close the Gaps That Enable Exploitation

If policymakers want connected mobility without abuse, they must reduce fragmentation and increase enforceability. The EU has demonstrated a model where baseline privacy rights exist through GDPR, sector-specific communications privacy rules exist through ePrivacy, and emerging data governance laws reshape market power over connected product data.*3 *16 *7

In the U.S., a patchwork system encourages inconsistency and makes it easier for bad practices to persist until exposed by journalists or enforcement. Strong enforcement actions help, but they arrive after harm. A more coherent baseline would reduce the incentives for opaque data markets to flourish.*4

For autonomous vehicles, lawmakers should anticipate that event data recorders, automated driving logs, and sensor retention will be normalized for safety and liability. Regulation should clarify retention limits, access conditions, and protections for bystanders whose data may be captured. UNECE frameworks for automated systems demonstrate how data logging becomes embedded in vehicle approval and safety governance.*9

What Consumers Can Do: Practical Control Without Illusions

You cannot personally audit a vehicle’s backend systems, but you can stop treating privacy as an abstract principle and start treating it as a purchasing and configuration decision. Favor manufacturers that offer clear opt-outs, meaningful controls, and transparency. Exercise access and deletion rights when available, especially in jurisdictions like California where these rights are explicit and enforceable.*6

Be skeptical of “driver improvement” or “smart driver” programs that promise benefits but do not clearly state what data is collected and whether it is shared. Recent enforcement shows that these programs can become pipelines into insurance and consumer scoring ecosystems.*4

Also recognize that privacy is not only about the data you knowingly provide. It is also about passive collection. A vehicle that collects location continuously may expose patterns that matter more than any single data point. In this domain, the best privacy strategy is often minimization: turning off features you do not need and resisting default opt-in designs that trade your long-term privacy for short-term convenience.*1

The Future Trend: Privacy Will Define the Social License of Autonomy

Connected and autonomous vehicles are not only engineering projects; they are legitimacy projects. They require society to accept constant sensing, algorithmic decision-making, and networked control. If the industry cannot demonstrate restraint, transparency, and real user rights, the public will interpret autonomy as surveillance with wheels.

The legal trajectory suggests that privacy and AI governance will tighten, not loosen. The EU AI Act and the broader European governance ecosystem are signals that automated systems and data-intensive products will be regulated with increasing specificity and enforcement ambition.*17

The companies that win in this era will not be the ones that collect the most data. They will be the ones that can prove they deserve it—through minimization, strong security, meaningful control, and a business model that does not depend on quietly extracting value from people’s movements. If you are betting that consumers “won’t care,” you are betting against reality, enforcement, and the direction of law.



References

*1 European Data Protection Board. (2021, March 9). Guidelines 01/2020 on processing personal data in the context of connected vehicles and mobility related applications. https://www.edpb.europa.eu/our-work-tools/our-documents/guidelines/guidelines-012020-processing-personal-data-context_en

*2 National Highway Traffic Safety Administration. (2022). Cybersecurity best practices for the safety of modern vehicles. https://www.nhtsa.gov/sites/nhtsa.gov/files/2022-09/cybersecurity-best-practices-safety-modern-vehicles-2022-pre-final-tag_0_0.pdf

*3 General Data Protection Regulation. (2016). Article 4: Definitions. https://gdpr-info.eu/art-4-gdpr/

*4 Federal Trade Commission. (2026, January 14). FTC finalizes order settling allegations that GM and OnStar collected and sold geolocation data without consumers’ consent (press release). https://www.ftc.gov/news-events/news/press-releases/2026/01/ftc-finalizes-order-settling-allegations-gm-onstar-collected-sold-geolocation-data-without-consumers

*5 General Data Protection Regulation. (2016). Article 9: Processing of special categories of personal data. https://gdpr-info.eu/art-9-gdpr/

*6 California Department of Justice. (2024, March 13). California Consumer Privacy Act (CCPA). https://oag.ca.gov/privacy/ccpa

*7 European Union. (2023, December 13). Regulation (EU) 2023/2854 on harmonised rules on fair access to and use of data (Data Act). https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ%3AL_202302854

*8 Automotive Information Sharing and Analysis Center. (n.d.). Best Practice Guides (BPGs). https://automotiveisac.com/best-practice-guides

*9 United Nations Economic Commission for Europe. (2021). UN Regulation No. 157: Automated Lane Keeping Systems (R157e). https://unece.org/sites/default/files/2023-12/R157e.pdf

*10 Information Commissioner’s Office. (2024, April 9). What is special category data? https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/lawful-basis/special-category-data/what-is-special-category-data/

*11 National Institute of Standards and Technology. (2020). NIST Privacy Framework: A tool for improving privacy through enterprise risk management (Version 1.0). https://nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.01162020.pdf

*12 International Organization for Standardization. (2021). ISO/SAE 21434: Road vehicles — Cybersecurity engineering. https://www.iso.org/standard/70918.html

*13 United Nations Economic Commission for Europe. (2021). UN Regulation No. 156: Software update and software update management system (R156e). https://unece.org/sites/default/files/2024-03/R156e%20%282%29.pdf

*14 International Organization for Standardization. (2023). ISO 24089: Road vehicles — Software update engineering. https://www.iso.org/standard/77796.html

*15 International Organization for Standardization. (2021). ISO 20078-1: Road vehicles — Extended vehicle (ExVe) web services — Part 1: Content and definitions. https://www.iso.org/standard/80183.html

*16 European Union. (2002, July 12). Directive 2002/58/EC (ePrivacy Directive). https://eur-lex.europa.eu/eli/dir/2002/58/oj/eng

*17 European Union. (2024, June 13). Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=OJ%3AL_202401689

*18 Mozilla Foundation. (2023, September 6). It’s official: Cars are the worst product category we have ever reviewed for privacy. https://www.mozillafoundation.org/en/privacynotincluded/articles/its-official-cars-are-the-worst-product-category-we-have-ever-reviewed-for-privacy/
