Clean Data Alliance- Electronic Bill of Rights (EBOR)

Empower Your Legal Rights Today
Clean Data Alliance Adopts EBOR!

Clean Data Alliance

About the Clean Data Alliance

The Clean Data Alliance (CDA) is a nonprofit organization dedicated to education and to advancing practical, immediately adoptable solutions for businesses operating within the data-driven advertising model known as "Targeted Advertising".


Composed of advertising and technology professionals, CDA advocates for clean data-collection standards grounded in the ethical framework of the Electronic Bill of Rights (EBOR)—a proposed foundation for future congressional policy.


Its mission is not to restrict AI, apps, social media, or chatbots, but to educate leaders across government, education, and industry about the risks and harms associated with the targeted advertising business model—particularly the role it plays in fueling technology addiction and related harms through AI-infused apps, social media platforms, chatbots, and other digital services. 


CDA promotes responsible alternatives based on informed consent, non-addictive design, and ethical clean-data practices.


 Through this work, the Clean Data Alliance seeks to help organizations adopt sustainable advertising approaches that reduce future liability associated with targeted advertising linked to technology addiction and related harms, while restoring consumer data and financial agency and protecting privacy, security, and safety in an AI-driven digital environment. 


Clean data business practices are not only safer but also support long-term profitability while protecting brand integrity and reputation, particularly as the harms associated with addictive technologies continue to make headlines. 


 Advertisers, business leaders, and elected officials must recognize the harms caused by addictive technologies, which have been well documented in news reporting, congressional hearings, and documentaries such as The Social Dilemma. 


To date, primary accountability has focused on the technology industry. However, as society enters an era shaped by AI and emerging quantum-driven systems, broader shared responsibility is likely to extend to advertisers, whose targeted advertising models financially sustain the addiction and harms associated with AI-infused apps, social media platforms, and related digital services. 


While many harms remain unseen, families who have lost teens and children to technology addiction, injury, and even death understand the urgency of reform.


After all, which companies—including advertisers and media networks—want to be associated with addictive products and services that cause harm, especially to vulnerable populations such as teens and children?


Together, the Clean Data Alliance and the Electronic Bill of Rights provide a safe, ethical path forward—one that does not rely on additional government regulation or the banning of AI, apps, social media, chatbots, or other essential technologies needed for the future. 


By advancing the ethical framework of EBOR, CDA provides companies, advertisers, policymakers, and technology leaders with a safer, forward-looking path for the advertising industry as digital technologies continue to evolve through AI and emerging quantum-driven systems.

    


Learn More at CDA

Documented Categories of Harm- Targeted Advertising


There are many documented harms—including loss of life—associated with highly addictive and manipulative AI-infused apps, social media platforms, and emerging AI chatbots that can induce the ELIZA effect: the human tendency to attribute human emotion, trust, and authority to AI-driven apps, chatbots, and evolving connected products and services.


Outlined below are the most urgent categories of harm addressed by the Electronic Bill of Rights (EBOR) framework:

 

  • Tech Addiction – Intentional design patterns engineered to maximize engagement can create compulsive use, dependency, and loss of healthy behavioral control, resulting in harm.
     
  • Psychological Harm – Prolonged exposure to addictive and manipulative digital environments is linked to online bullying, violence, anxiety, depression, self-harm, loneliness, reduced emotional well-being, suicidal thoughts, and, in some cases, suicidal actions—a dynamic explored in documentaries such as The Social Dilemma.
     
  • Cognitive Manipulation – Algorithmic targeting and persuasive design can influence perception, decision-making, and behavior without meaningful awareness or consent, opening the door to consumer, political, and ideological indoctrination by multinational corporations—including those that collude with governments, some of them in adversarial nations.
     
  • Tech-Based Hybrid Warfare – Governments, political actors, militaries, and intelligence agencies have infiltrated consumer-grade social media platforms, using addictive AI-infused apps, platforms, and chatbots to wage psychological and cognitive warfare against everyone connected to the internet, including teens and children.
     
  • Government Surveillance by Proxy – Concerns have been raised that close alignment between government institutions and Alphabet/Google, Microsoft, Apple, Amazon, Meta, and other developers has contributed to the erosion of privacy, security, safety, data and financial agency, and biological agency (biometric and physical DNA), while weakening civil liberties and human rights at a global scale.


  • Weaponized Consumer Products and Services – Smartphones, tablets, connected products, and PCs supported by Android, iOS, and Windows—now essential to modern life—have evolved into continuous data-collection and surveillance nodes capable of audio, video, and physical surveillance, as well as always-on data extraction for profit, without compensating the product owner or the end user. Never before in the history of consumerism have consumer product manufacturers and developers weaponized their products and services against their paying customers and end users—including teens and children—whether directly or by government proxy. Governments around the world are conducting tech-based hybrid warfare, in real time, against their adversaries, including their own populations, while working to eliminate privacy, security, safety, civil liberties, and human rights by way of essential consumer telecom and tech products and services paid for by the product owner and subscriber.
     
  • Forced Participation by Way of Predatory Agreements – Essential consumer products such as smartphones and connected products supported by AI-infused apps, social media platforms, and chatbots require acceptance of predatory, non-negotiable contracts of adhesion (terms of service) as a condition of use, resulting in forced participation in data-driven advertising ecosystems in ways that compromise privacy, security, and safety through the addictive technologies embedded in AI-infused app and social media ecosystems globally. Adults, business professionals, corporations, government agencies, and teens ages 13–17 must accept these predatory terms of service; if they refuse, they cannot use the products and services they paid for. This is the definition of consumer oppression and cyber-enslavement, as product owners and users are turned into uncompensated information producers exploited for profit 24x7, 365 days per year.


  • Deceptive Trade Practices- Illegal Consumer Agreements – Ultimately, neither the product owner nor the end user may be able to provide fully informed and legally meaningful consent, particularly when the agreements they are asked to accept are difficult to read, digest, or reasonably understand prior to the purchase or use of a product or service, as required by law. The collective terms and conditions, privacy policies, end-user license agreements, and in-device legal disclosures can total more than 3,000 pages of complex legal language at the moment the product owner clicks “I Agree.” Developers and manufacturers often provide limited transparency regarding their business practices, coding, algorithms, operating-system behaviors, and the functioning of potentially intrusive or addictive AI-infused apps and platforms, while participation is effectively required through non-negotiable terms of service. By splitting the collective terms of service into numerous categories—including online terms and conditions, privacy policies, and end-user license agreements—while disclosing other portions of the agreement within the device itself, such as application permissions, app-specific warnings, and product notices, meaningful transparency is undermined and confusion is created. This structure functions as an intentional legal strategy that makes the agreement torturous, complex, and exhausting to read by design, constituting a deceptive trade practice.


  • Covert Piggyback Coding and Cross Platform Surveillance and Data Mining-  It is common in the technology industry for developers to gain access to paying customers and end users through preinstalled app agreements, enabling the marketing and distribution of intrusive, AI-infused apps and platforms across the Android, iOS, and Windows ecosystems—an arrangement that is transparent to the product owner.  However, what is not transparent is that multinational corporations, including those based in adversarial nations, can enter into piggybacking and cross-platform surveillance and data-mining agreements by embedding code within each other’s branded applications through a methodology known as “piggybacking.”  Under this structure, two or more companies or entities—including governments—can monitor, track, and data-mine an individual through a single intrusive AI-infused app without meaningful transparency to the product owner, constituting a deceptive trade practice.   This dynamic was highlighted in the 2024 Google antitrust litigation, which exposed agreements between Apple and Alphabet/Google in which Apple’s Safari browser defaulted to a Google-developed search engine that was not overtly branded as Google—effectively misleading Apple device owners who had moved away from Android-based smartphones in an effort to separate from Google’s surveillance- and data-extraction-driven business practices.
     
  • Financial Exploitation & The Rise of Cyber-Enslavement – Behavioral targeting, digital DNA profiling, surveillance-based pricing models, device-level data extraction, and downstream data monetization practices can generate disproportionate economic value from users without meaningful transparency, compensation, or direct benefit. In most cases, neither the product owner nor the end user is compensated for the valuable information produced through their use of digital devices—data that may contribute to substantial revenues within the global data-brokerage and advertising ecosystem.   Compounding these concerns, contractual terms governing many AI-infused apps and digital services place the costs of connectivity, data transmission, and platform access on the user, even as those same interactions enable ongoing data collection and monetization tied to voice, data, and internet usage plans—meaning that product owners and subscribers ultimately pay for the data being extracted from their own products and services that is later exploited for billions in profits.
     
  • Digital Discrimination and Identifiable Digital DNA Profiles – In some data-driven ecosystems, developers and data brokers may assemble highly detailed identifiable Digital DNA profiles (user profiles)—derived from over 5,000 highly confidential data points extracted from the product owner's devices and PCs.  Identifiable digital DNA profiles include personal, business, medical, legal, biometric, health and fitness, employment, location, and sensitive user data. Sensitive user data (personal and business) includes ID, messaging, emails, attachments (PDFs, photos, documents, etc.), calendar events, contacts, account information, and other confidential and protected information as the surveillance and data mining are indiscriminate.


  • Weaponized Identifiable Digital DNA Profiles – Digital DNA profiles can be weaponized against the product owner or end user by developers, manufacturers, multinational corporations, and governments by proxy. These identifiable profiles can be used to influence social scoring, limiting financial or career opportunities. They can also be used to cause financial harm through higher insurance pricing, higher interest rates on loans, and other surveillance-based pricing models in which individuals are offered different terms or costs for the same products and services. Concerns have also been raised that large-scale data analytics could enable population-level segmentation by socioeconomic status or other demographic factors, potentially exposing vulnerable groups to heightened targeting, marginalization, or virtual genocide through economic warfare. In some cases, real-world violence—including acts described as genocide—has been documented, such as events in Myanmar involving the role of Meta’s Facebook and Instagram platforms and the Myanmar government, as reported by Amnesty International. More broadly, algorithmic profiling has been shown in some cases to reinforce bias or unequal treatment across areas such as employment, finance, housing, and access to essential services.
     
  • Loss of Privacy, Civil Liberties, and Human Rights by Proxy – Concerns have been raised that close alignment between government institutions and major technology platforms has contributed to a significant erosion of privacy, civil liberties, and human rights through the widespread use of essential consumer technologies. Continuous monitoring, location tracking, behavioral profiling, and large-scale data extraction can blur personal boundaries and confidentiality across both online and offline environments—at home, in vehicles, workplaces, medical settings, legal consultations, social spaces, and everyday public life. Particular concern has been expressed regarding children, who increasingly enter a fully surveilled, data-extraction-driven digital environment from birth and may experience limited practical privacy throughout their lifetime, as they are born into digital servitude. For these reasons, some policymakers, advocates, and industry leaders argue that updated legal and ethical safeguards—such as those proposed in an Electronic Bill of Rights—are necessary to protect individual autonomy, dignity, civil liberties, and fundamental human rights in a society where daily life is deeply integrated with digital systems.
     
  • Loss of Data and Financial Agency Due to Internet Centralization – Alphabet/Google, Microsoft, and Apple control access to global internet trade and commerce by monopolizing the operating-system market (Android, iOS, Windows) while controlling the global development and distribution of AI-infused apps, chatbots, social media, streaming services, cloud storage platforms, medical platforms, banking platforms, retail purchasing platforms, digital currency platforms, and other essential platforms. AI-infused apps and platforms—and future AI- and quantum-driven applications—must be developed for Android, iOS, or Windows in order to achieve global distribution through Alphabet/Google, Apple, and Microsoft, either via their app stores or through multibillion-dollar preinstalled app agreements. These distribution controls are used to limit business competition and innovation while eliminating much of the user’s data and financial agency, and they may also violate antitrust and existing business-competition fairness laws in numerous ways.

 

Ironically, those who design, protect, and profit from the system may also be affected by its consequences—along with their own families—as future generations are increasingly born into oppressive, intrusive, exploitative, and harmful AI- and quantum-driven digital environments supported by essential consumer products and services. In this context, individuals may unintentionally fund their own oppression and exploitation, at the expense of their privacy, security, and safety, through the essential consumer products and services they pay for.


There is a better way forward for everyone.


Together, these harms underscore the urgent need for ethical, clean-data standards and practices by way of the EBOR framework.



EBOR- Executive Summary

   

The Need for an Electronic Bill of Rights


For more than 30 years, Big Tech and government have sat at the table—while the paying customer and taxpayer has been excluded. 


Citizens fund the system through purchases and taxes, and empower lawmakers through their votes, only to see those same lawmakers enact and support policies that erode privacy, security, safety, civil liberties, and human rights—while shielding powerful technology interests instead of protecting the public.


This imbalance of power is no longer acceptable. It is time to restore accountability, transparency, and representation.


It is time for an Electronic Bill of Rights.


The rise of global tech addiction, digital colonialism, and cyber-enslavement—driven by exploitative business models rooted in Surveillance Capitalism—has created an urgent need for an enforceable Electronic Bill of Rights (EBOR).


In today’s connected world, personal data is routinely extracted, privacy is systematically eroded, and technology is increasingly weaponized against individuals by the very companies entrusted with their loyalty, patronage, and hard-earned money. What were once tools of empowerment have become instruments of behavioral manipulation, surveillance, and control.

Living “off the grid” is no longer a viable option. 


Smartphones, tablets, PCs, vehicles, wearables, and other connected products are now essential for participation in modern society. These products operate atop global telecommunications and internet infrastructure overseen by government agencies such as the Federal Communications Commission (FCC), the Federal Trade Commission (FTC), and the Department of Homeland Security (DHS)—institutions charged with protecting citizens, consumers, and national security.


Yet these protections have proven insufficient.


Everyday consumer products of necessity have been transformed into continuous surveillance and data-mining systems by multinational corporations—some operating in or aligned with adversarial nations such as China and Russia. These products rely on leaky operating systems—Android, iOS, and Windows—that enable addictive, manipulative, and psychologically invasive technologies. AI-infused applications and chatbots, often designed to exploit human trust through behavioral techniques such as the “Eliza Effect,” further magnify these risks—posing serious harm to adults, professionals, teens, and children alike.


At the same time, governments too often shield these business models through regulatory capture, lobbying, and inaction—prioritizing profits and geopolitical convenience over citizen safety, privacy, and civil liberties. As a result, individuals are no longer treated as citizens or customers, but as expendable data commodities—monetized, manipulated, and exposed to harm.


This symbiotic relationship between governments and surveillance-driven technology corporations must end.


Never before in history have technology developers, platform operators, OEMs, and AI innovators weaponized paid consumer products against their own users at such scale—forcing participation through coercive contracts of adhesion while stripping away data sovereignty, privacy, security, and autonomy.


Without enforceable digital rights, individuals will remain subject to predatory business practices, unchecked surveillance, monopolistic control, and foreign exploitation. The consequences extend far beyond privacy—they represent clear risks to public safety, cybersecurity, national security, and data sovereignty.


The Electronic Bill of Rights establishes a clear, enforceable framework to restore balance in the digital age. It affirms the right of individuals to control their data, devices, and digital lives—free from coercion, exploitation, and opaque surveillance systems. EBOR is designed to protect human dignity while enabling innovation that respects civil liberties and consumer rights.

Governments have a duty to defend their citizens—not merely from physical harm, but from digital exploitation embedded in the very infrastructure of modern life.

This is not an anti-technology movement. 


It is a pro-human, pro-freedom, and pro-security framework for the 21st century.

Only through enforceable digital rights can personal freedom, trust, and sovereignty be preserved in the age of AI, automation, and global surveillance.

EBOR- Framework

  Electronic Bill of Rights Congressional Policy Change Framework

  

Article I: Right to Data Privacy


  • Individuals shall have the right to control their personal data, including collection, storage, processing, and distribution.
  • Consent must be explicit, informed, and revocable at any time.
  • Companies and organizations must disclose how user data is collected, shared, and monetized in clear, understandable language—free from technical jargon, written in plain English (or the applicable primary language of the user), and easily accessible without requiring excessive navigation or legal expertise.

  

Article II: Right to Data Security


  • Individuals have the right to expect robust security measures to protect their personal information.
  • Companies must implement end-to-end encryption, multi-factor authentication, and other security protocols to prevent unauthorized access.
  • Any data breach must be immediately disclosed to affected individuals and regulatory bodies.

  

Article III: Right to Digital Anonymity


  • No individual shall be compelled to disclose personal information beyond what is necessary for a specific service.
  • Users shall have the right to browse the internet, communicate, and conduct transactions anonymously.

  

Article IV: Right to Be Forgotten


  • Individuals have the right to request the permanent deletion of their data from online platforms and databases.
  • Companies must honor deletion requests promptly, barring exceptions for legal or regulatory obligations.

  

Article V: Right to Opt-Out of Data Monetization


  • Users shall have the right to opt out of targeted advertising and data-sharing agreements without being penalized or denied service.
  • Alternative business models that do not rely on invasive data mining must be available.

  

Article VIII: Right to Digital Freedom and Free Speech


  • No entity, public or private, shall unlawfully censor or restrict lawful digital expression.
  • Content moderation policies must be transparent, consistently applied, and subject to independent appeals processes.

  

Article IX: Right to Own and Control Digital Identity


  • Individuals have the right to control their digital identities, including usernames, biometric data, and online personas.
  • No government or corporation shall claim ownership over an individual’s digital identity.

  

Article X: Right to Decentralized and Open Internet


  • Users have the right to access a free, open, and decentralized internet without undue restrictions.
  • Net neutrality must be upheld to prevent ISPs from prioritizing certain content over others.
  • Decentralized technologies must be legally protected to ensure alternatives to centralized control.

  

Article XI: Right to Protection from Corporate and Foreign Surveillance


  • Companies, app developers, and multinational corporations shall be banned from conducting surveillance and data mining on any smartphone, tablet PC, connected product, or PC supported by Android, iOS, or Windows.

  

Article XII: Right to National and Consumer Security and Safety


  • It shall be illegal for tech giants to form symbiotic relationships with foreign tech companies beholden to oppressive governments or adversarial nations.
  • No corporation shall be permitted to distribute intrusive, addictive, or dangerous operating systems, apps, or AI-infused products developed under the control of oppressive governments.
  • It shall be illegal to share developer tools, AI development tools, or AI-driven chips (GPUs) with companies under the control of oppressive governments or adversarial nations.

  

Article XIII: The Abolishment of Web Scraping, Web Crawling, and Web Tracking


  • Web scraping is data theft and shall be illegal without explicit consent of the website owner or content creator.
  • It shall be illegal to train AI on scraped copyrighted or original works without consent.
  • AI shall not impersonate any individual (likeness, biometric data, or voice print) without explicit consent.
  • Individuals shall retain the right to sell access to their likeness.
  • Web tracking shall be illegal—no individual shall be tracked by websites, crawlers, bots, or any tracking technology.

  

Article XIV: The Right to Accountability from Tech Giants


  • Developers, executives, and board members shall be held accountable for harm, addiction, or death caused by their products.
  • Section 230 protections shall be abolished.
  • Tech companies shall be accountable as editors if they censor or suppress legitimate news and press.

  

Article XV: The Right to Safe, Secure, and Private Preinstalled Apps & Technology


  • No operating system may include uncontrollable preinstalled surveillance or data mining technology.

  

Article XVI: The Right to Safe Technology


  • No app, social media platform, or AI product may contain addictive or manipulative technologies designed to exploit users.
  • Platforms must disclose bot use, and no platform may use bots to deceive.
  • Governments and intelligence agencies shall be banned from creating consumer accounts.
  • Consumer protection laws must be enforced; tech lobbying must be transparent.

  

Article XVII: The Right to Influencer and Bot Transparency


  • Influencers, corporations, and agencies must disclose use of bots.
  • Deceptive marketing via automation shall be illegal.
  • Malicious bot use for disinformation, election interference, or propaganda shall be illegal.

  

Article XIX: Freedom from Addictive, Divisive, and Manipulative Technology


  • Developers shall not provide technology designed to hijack user behavior or induce manipulation.
  • Such technologies, deemed more harmful than subliminal advertising, shall be banned.

  

Article XX: Freedom from Government & Tech Collusion


  • Governments are banned from colluding with tech firms to suppress rights, liberties, free speech, press, or privacy.
  • Developers are banned from hiring former government officials for political influence.
  • Governments may only hire former tech employees for legitimate national security purposes—not for political suppression.

  

Article XXI: Right to Data Collection Transparency


  • Companies must disclose all data collection, sharing, and monetization practices.
  • Users may request a copy of their data and demand deletion within 7 business days.
  • Third-party data collection must be disclosed, including from data brokers.

  

Article XXII: Freedom from Indiscriminate Surveillance and Data Mining


  • Indiscriminate surveillance and non-essential data collection shall be banned.
  • Collection of confidential, legal, medical, biometric, or classified information shall be prohibited.

  

Article XXIII: Freedom from Forced Participation by Way of Legal Agreements


  • Contracts of adhesion forcing consumers to accept surveillance or data mining to use purchased products shall be banned.

  

Article XXIX: Freedom to Control Technology and Connected Products


  • Consumers must have full control over their devices and be able to delete unwanted software or apps.

  

Article XXX: The Right to Transparent Legal Language & App Permissions


  • All legal language must be concise and transparent.
  • Consumers must have full control over app permissions.
  • All data collection and app permissions must be disclosed in one place.

  

Article XXXI: Ban on Teen Acceptance of Legal Agreements


  • No minor under 18 shall be permitted to accept legal agreements for digital services.

  

Article XXXII: Right to Transparent AI and Algorithmic Accountability


  • AI and algorithms must be transparent, explainable, and free from discrimination.
  • Users must be able to contest AI-driven decisions.
  • Dangerous, addictive, manipulative, or exploitative algorithms shall be illegal.

  

Article XXXIII: Right to Fair Terms and Conditions


  • End-user agreements must be concise, plain-language, and overseen independently.
  • One-way contracts of adhesion shall be banned.
  • Consumers rejecting terms must still be able to use purchased products.

  

Article XXXIV: Anti-Trust Protections & Internet Centralization

  • Surveillance for competitive advantage shall be banned.
  • Tech companies shall not monopolize OS, apps, media, or AI via app stores or preinstalled software.
  • Tech lobbying by former government officials shall be banned.

  

Article XXXV: National, Internet, and Technology Safety


  • Symbiotic relationships between domestic tech firms and adversarial nation-controlled companies shall be banned.
  • Sharing of developer tools, AI tools, or chips with adversarial nations shall be prohibited.

  

Article XXXVI: The Right to Sue and Hold Tech Giants Accountable


  • Consumers shall have the right to sue for predatory or dangerous products.
  • Tech firms and their executives may be held criminally and civilly liable.
  • Section 230 protections shall be abolished.

Electronic Bill of Rights

Copyright © 2026 Electronic Bill of Rights - All Rights Reserved.
