Clean Data Alliance- Electronic Bill of Rights (EBOR)


Clean Data Alliance

The Clean Data Alliance (CDA) is a nonprofit organization dedicated to education and to advancing practical, immediately adoptable solutions for businesses operating within the data-driven advertising model known as "Targeted Advertising".


Composed of advertising and technology professionals, CDA advocates for clean data-collection standards grounded in the ethical framework of the Electronic Bill of Rights (EBOR)—a proposed foundation for future congressional policy.


Its mission is not to restrict AI, apps, social media, or chatbots, but to educate leaders across government, education, and industry about the risks and harms associated with the targeted advertising business model—particularly the role it plays in fueling technology addiction and related harms through AI-infused apps, social media platforms, chatbots, and other digital services. 


CDA promotes responsible alternatives based on informed consent, non-addictive design, and ethical clean-data practices.


 Through this work, the Clean Data Alliance seeks to help organizations adopt sustainable advertising approaches that reduce future liability associated with targeted advertising linked to technology addiction and related harms, while restoring consumer data and financial agency and protecting privacy, security, and safety in an AI-driven digital environment. 


Clean data business practices are not only safer but also support long-term profitability while protecting brand integrity and reputation, particularly as the harms associated with addictive technologies continue to make headlines. 


 Advertisers, business leaders, and elected officials must recognize the harms caused by addictive technologies, which have been well documented in news reporting, congressional hearings, and documentaries such as The Social Dilemma. 


To date, primary accountability has focused on the technology industry. However, as society enters an era shaped by AI and emerging quantum-driven systems, broader shared responsibility is likely to extend to advertisers, whose targeted advertising models financially sustain the addiction and harms associated with AI-infused apps, social media platforms, and related digital services. 


While many harms remain unseen, families who have lost teens and children to technology addiction, injury, and even death understand the urgency of reform.


After all, what companies—including advertisers and media networks—want to be associated with addictive products and services that cause harm, especially to vulnerable populations such as teens and children? 


Together, the Clean Data Alliance and the Electronic Bill of Rights provide a safe, ethical path forward—one that does not rely on additional government regulation or the banning of AI, apps, social media, chatbots, or other essential technologies needed for the future. 


By advancing the ethical framework of EBOR, CDA provides companies, advertisers, policymakers, and technology leaders with a safer, forward-looking path for the advertising industry as digital technologies continue to evolve through AI and emerging quantum-driven systems.

    


Learn More at CDA

Documented Categories of Harm: Targeted Advertising


There are many documented harms—including loss of life—associated with highly addictive and manipulative AI-infused apps, social media platforms, and emerging AI chatbots that can induce the ELIZA effect: the human tendency to attribute human emotion, trust, and authority to AI-driven apps, chatbots, and evolving connected products and services.


Outlined below are the most urgent categories of harm addressed by the Electronic Bill of Rights (EBOR) framework:

 

  • Tech Addiction – Intentional design patterns engineered to maximize engagement can create compulsive use, dependency, and loss of healthy behavioral control, resulting in harm.
     
  • Psychological Harm – Prolonged exposure to addictive and manipulative digital environments is linked to online bullying, violence, anxiety, depression, self-harm, loneliness, reduced emotional well-being, suicidal thoughts, and, in some cases, suicidal actions, a dynamic explored in documentaries such as The Social Dilemma.
     
  • Cognitive Manipulation – Algorithmic targeting and persuasive design can influence perception, decision-making, and behavior without meaningful awareness or consent, opening the door to consumer, political, and ideological indoctrination by multinational corporations, including those that collude with governments, some from adversarial nations.
     
  • Tech-Based Hybrid Warfare – Governments, political actors, militaries, and intelligence agencies have infiltrated consumer-grade social media platforms, using addictive AI-infused apps, platforms, and chatbots to wage psychological and cognitive warfare against everyone connected to the internet, including teens and children.
     
  • Government Surveillance by Proxy – Concerns have been raised that close alignment between government institutions and Alphabet/Google, Microsoft, Apple, Amazon, Meta, and other developers has contributed to the erosion of privacy, security, safety, data and financial agency, and biological agency (biometric and physical DNA), while weakening civil liberties and human rights at a global scale.


  • Weaponized Consumer Products and Services – Smartphones, tablets, connected products, and PCs running Android, iOS, and Windows—now essential to modern life—have evolved into continuous data-collection and surveillance nodes capable of audio, video, and physical surveillance, as well as always-on data extraction for profit, without compensating the product owner or the end user. Never before in the history of consumerism have consumer product manufacturers and developers weaponized their products and services against their paying customers and end users, including teens and children, whether directly or by government proxy. Governments around the world are conducting tech-based hybrid warfare, in real time, against their adversaries, including their own populations, while working to eliminate privacy, security, safety, civil liberties, and human rights by way of essential consumer telecom and tech products and services paid for by the product owner and subscriber.
     
  • Forced Participation by Way of Predatory Agreements – Essential consumer products such as smartphones and connected devices supported by AI-infused apps, social media platforms, and chatbots require acceptance of predatory, non-negotiable contracts of adhesion (terms of service) as a condition of use, resulting in forced participation in data-driven advertising ecosystems in ways that compromise privacy, security, and safety through the addictive technologies embedded in AI-infused app and social media ecosystems globally. Adults, business professionals, corporations, government agencies, and teens ages 13–17 must accept these predatory terms of service; if they refuse, they cannot use the products and services they paid for. This is the definition of consumer oppression and cyber-enslavement, as product owners and users are turned into uncompensated information producers exploited for profit 24x7, 365 days per year.


  • Deceptive Trade Practices and Illegal Consumer Agreements – Ultimately, neither the product owner nor the end user may be able to provide fully informed and legally meaningful consent, particularly when the agreements they are asked to accept are difficult to read, digest, or reasonably understand prior to the purchase or use of a product or service, as required by law. The collective terms and conditions, privacy policies, end-user license agreements, and in-device legal disclosures can total more than 3,000 pages of complex legal language at the moment the product owner clicks “I Agree.” Developers and manufacturers often provide limited transparency regarding their business practices, coding, algorithms, operating-system behaviors, and the functioning of potentially intrusive or addictive AI-infused apps and platforms, while participation is effectively required through non-negotiable terms of service. By separating the collective terms of service into numerous categories—including online terms and conditions, privacy policies, and end-user license agreements—while disclosing other portions of the agreement within the device itself, such as application permissions, app-specific warnings, and product notices, meaningful transparency is undermined and confusion is created. This structure functions as an intentional legal strategy that makes the agreement tortuous, complex, and exhausting to read by design, constituting a deceptive trade practice.


  • Covert Piggyback Coding and Cross-Platform Surveillance and Data Mining – It is common in the technology industry for developers to gain access to paying customers and end users through preinstalled app agreements, enabling the marketing and distribution of intrusive, AI-infused apps and platforms across the Android, iOS, and Windows ecosystems—an arrangement that is transparent to the product owner. What is not transparent is that multinational corporations, including those based in adversarial nations, can enter into cross-platform surveillance and data-mining agreements by embedding code within each other’s branded applications through a methodology known as “piggybacking.” Under this structure, two or more companies or entities—including governments—can monitor, track, and data-mine an individual through a single intrusive AI-infused app without meaningful transparency to the product owner, constituting a deceptive trade practice. This dynamic was highlighted in the 2024 Google antitrust litigation, which exposed agreements between Apple and Alphabet/Google in which Apple’s Safari browser defaulted to a Google-developed search engine that was not overtly branded as Google—effectively misleading Apple device owners who had moved away from Android-based smartphones in an effort to separate from Google’s surveillance- and data-extraction-driven business practices.
     
  • Financial Exploitation & The Rise of Cyber-Enslavement – Behavioral targeting, digital DNA profiling, surveillance-based pricing models, device-level data extraction, and downstream data monetization practices can generate disproportionate economic value from users without meaningful transparency, compensation, or direct benefit. In most cases, neither the product owner nor the end user is compensated for the valuable information produced through their use of digital devices—data that may contribute to substantial revenues within the global data-brokerage and advertising ecosystem.   Compounding these concerns, contractual terms governing many AI-infused apps and digital services place the costs of connectivity, data transmission, and platform access on the user, even as those same interactions enable ongoing data collection and monetization tied to voice, data, and internet usage plans—meaning that product owners and subscribers ultimately pay for the data being extracted from their own products and services that is later exploited for billions in profits.
     
  • Digital Discrimination and Identifiable Digital DNA Profiles – In some data-driven ecosystems, developers and data brokers may assemble highly detailed, identifiable digital DNA profiles (user profiles) derived from over 5,000 highly confidential data points extracted from the product owner's devices and PCs. Identifiable digital DNA profiles include personal, business, medical, legal, biometric, health and fitness, employment, location, and sensitive user data. Sensitive user data (personal and business) includes ID, messaging, emails, attachments (PDFs, photos, documents, etc.), calendar events, contacts, account information, and other confidential and protected information, as the surveillance and data mining are indiscriminate.


  • Weaponized Identifiable Digital DNA Profiles – Digital DNA profiles can be weaponized against the product owner or end user by developers, manufacturers, multinational corporations, and governments by proxy. These identifiable profiles can be used to influence social scoring, limiting financial or career opportunities. They can also be used to cause financial harm through higher insurance pricing, higher interest rates on loans, and other surveillance-based pricing models in which individuals are offered different terms or costs for the same products and services. Concerns have also been raised that large-scale data analytics could enable population-level segmentation by socioeconomic status or other demographic factors, potentially exposing vulnerable groups to heightened targeting, marginalization, or virtual genocide through economic warfare. In some cases, real-world violence—including acts described as genocide—has been documented, such as events in Myanmar involving the role of Meta’s Facebook and Instagram platforms and the Myanmar government, as reported by Amnesty International. More broadly, algorithmic profiling has been shown in some cases to reinforce bias or unequal treatment across areas such as employment, finance, housing, and access to essential services.
     
  • Loss of Privacy, Civil Liberties, and Human Rights by Proxy – Concerns have been raised that close alignment between government institutions and major technology platforms has contributed to a significant erosion of privacy, civil liberties, and human rights through the widespread use of essential consumer technologies. Continuous monitoring, location tracking, behavioral profiling, and large-scale data extraction can blur personal boundaries and confidentiality across both online and offline environments—at home, in vehicles, workplaces, medical settings, legal consultations, social spaces, and everyday public life. Particular concern has been expressed regarding children, who increasingly enter a fully surveilled, data-extraction-driven digital environment from birth and may experience limited practical privacy throughout their lifetimes, born into digital servitude. For these reasons, some policymakers, advocates, and industry leaders argue that updated legal and ethical safeguards—such as those proposed in an Electronic Bill of Rights—are necessary to protect individual autonomy, dignity, civil liberties, and fundamental human rights in a society where daily life is deeply integrated with digital systems.
     
  • Loss of Data and Financial Agency Due to Internet Centralization – Alphabet/Google, Microsoft, and Apple control access to global internet trade and commerce by monopolizing the operating-system market (Android, iOS, Windows) while controlling the global development and distribution of AI-infused apps, chatbots, social media, streaming services, cloud storage platforms, medical platforms, banking platforms, retail purchasing platforms, digital currency platforms, and other essential platforms. AI-infused apps and platforms—and future AI- and quantum-driven applications—must be developed for Android, iOS, or Windows in order to achieve global distribution through Alphabet/Google, Apple, and Microsoft, either via their app stores or through multibillion-dollar preinstalled app agreements. These distribution controls are used to limit business competition and innovation while eliminating much of the user's data and financial agency, and they may also violate antitrust and existing business-competition fairness laws in numerous ways.

 

Ironically, those who design, protect, and profit from the system may also be affected by its consequences—along with their own families—as future generations are increasingly born into oppressive, intrusive, exploitative, and harmful AI- and quantum-driven digital environments supported by essential consumer products and services. In this context, individuals may unintentionally fund their own oppression and exploitation, at the expense of their privacy, security, and safety, by way of essential consumer products and services they pay for.


There is a better way forward for everyone.


Together, these harms underscore the urgent need for ethical, clean-data standards and practices by way of the EBOR framework.



CDA EBOR Framework

  The Electronic Bill of Rights Framework for Clean Data Business Practices is available upon request.

  


Learn More by Contacting Rex M. Lee


Copyright © 2026 Electronic Bill of Rights - All Rights Reserved.
