My name is Rex M. Lee, and I am a security advisor to governments, an OTA app and platform developer, and a former advisor to Congress for tech hearings (2017–2021) involving Meta, Google, ByteDance, and other social media platform developers.
I helped launch the world’s largest legal hacking firm, Houdinisoft, which was adopted by Verizon, AT&T, T-Mobile, and other global mobile network operators, as well as used for digital forensics.
I specialize in threats posed by Surveillance Capitalism, tech addiction, AI indoctrination, tech-based hybrid warfare, and civil-military fusion programs in Russia, China, and the United States.
In 2017, after advising Congress regarding the Facebook-Cambridge Analytica scandal, I was asked by Senator Ted Cruz's office to author a congressional policy change proposal for digital rights from a developer's perspective rather than from the perspective of tech lobbyists working for Big Tech.
My initial digital rights proposal targeted the root cause of tech addiction, harm, and loss of privacy. It centered on banning Surveillance Capitalism, which would restore privacy, security, and safety to consumers of smartphones, computers, and connected products. Yet, my proposal was rejected by the U.S. Senate because there is no profit in solving the problem—especially by addressing targeted advertising, which fuels tech addiction and the loss of privacy.
I stopped advising Congress after the Instagram-Facebook whistleblower hearing involving Meta product designer Frances Haugen because the hearings became little more than puppet shows for grandstanding lawmakers who publicly claim they want to solve the problem while simultaneously taking money from the U.S.-China tech lobby. After each hearing, it became business as usual in Silicon Valley.
Aside from the harms caused by tech addiction, there are numerous other harms driven by targeted advertising rooted in Surveillance Capitalism (see harms below).
Addressing tech addiction alone will not restore privacy, security, or safety to consumers of smartphones, computers, and connected products powered by Android, iOS, or Windows.
Advertisers are equally—if not more—responsible for fueling tech addiction, manipulation, harm, and death among end users of AI-infused apps, social media platforms, and AI chatbots, including adults, teens, and children.
As a matter of fact, an entire multi-billion-dollar privacy industry has emerged in which so-called tech safety and privacy advocates make millions of dollars through books, documentaries, paid network TV and podcast appearances, interviews, and speaking fees.
While these efforts may raise awareness, many of these advocates refuse to fully endorse banning Surveillance Capitalism because this predatory, exploitative, and harmful business model fuels the very industry from which they profit.
In fact, many of these so-called advocates refuse to hold advertisers responsible for fueling tech addiction, harm, and death, while also refusing to hold Google, Apple, or Microsoft responsible for distributing highly addictive AI-infused apps, social media platforms, gaming platforms, and AI chatbots that induce AI indoctrination, known as The Eliza Effect.
Without the continuation of Surveillance Capitalism, many of these paid advocates would no longer have careers built around addressing the symptoms of a problem they refuse to eliminate at its root cause: Surveillance Capitalism.
Restoring Privacy, Security, and Consumer Protection Under Existing Law
I believe that tech addiction and the loss of privacy can be significantly reduced by enforcing existing consumer and child protection laws regulated by the Federal Communications Commission (FCC), Federal Trade Commission (FTC), and State AGs, along with banning Surveillance Capitalism.
AI-infused tech addiction and the loss of privacy are not being solved through new laws, lawsuits, congressional hearings, books, or documentaries.
Books and documentaries are great for awareness, but addressing addictive design alone will not resolve the root cause: targeted advertising rooted in Surveillance Capitalism.
For more than 15 years, we have been trapped in the same feedback loop: hold a congressional hearing, file lawsuits, and create new laws. Yet AI-infused tech addiction, exploitation, surveillance, manipulation, AI indoctrination, and the erosion of privacy have only accelerated, while more adults, teens, and children are harmed or killed.
Advertisers need to be held accountable for funding tech addiction; without advertising money, there is no need for addictive and manipulative technologies.
The Electronic Bill of Rights Framework for Clean Data Business Practices is available upon request.
The Electronic Bill of Rights (EBOR) is not a theoretical framework—it is grounded in existing federal law.
Before individuals can understand how they are harmed, they must first understand how the surveillance capitalism business model operates through essential telecom and technology products—such as smartphones—that are necessary for modern life and come at a cost.
It is also critical to understand which devices and platforms—including those supported by Android, iOS, and Windows—serve as the primary vectors for surveillance, as well as the scale and scope of data extraction from those devices and services.
The following section explains how this process works.
It is not just privacy that is at stake—it is the systematic exploitation of individuals for profit at the expense of their privacy, security, and safety, as well as their data and financial sovereignty, biological autonomy (including biometric and genetic information), civil liberties, and fundamental human rights by way of products and services that cost money.
Modern connected devices—including smartphones, tablet PCs, connected home products, personal computers, servers, wearable technologies, security and environmental systems, connected vehicles, AI-enabled toys, and other digital services—form a pervasive and continuous data-collection environment.
These devices and platforms operate as always-on data collection nodes, enabled by operating systems such as Android, iOS, and Windows, as well as AI-infused applications, social media platforms, and chatbots.
Within this ecosystem, the prevailing model is supported primarily by data-driven business practices centered on targeted advertising, often described as surveillance-based monetization.
Participation in this ecosystem is frequently governed by non-negotiable terms of service (contracts of adhesion), where access to essential products and services is conditioned on acceptance of broad data collection and use policies.
Data collected from individuals—including teens, children, and business professionals—is often facilitated through what can be described as “leaky operating systems” (Android, iOS, and Windows). These systems support a broad ecosystem of AI-infused applications, social media platforms, and AI chatbots that are designed to drive engagement and data collection at scale.
Within this framework, many of these applications and platforms function in ways that may be characterized as “legal malware”—software that operates within the bounds of the predatory contract of adhesion (terms of service) while enabling extensive data extraction, behavioral tracking, and monetization.
These operating systems and their associated ecosystems underpin essential connected telecom and technology products—including smartphones, computers, and other digital services—that are now required for participation in modern life.
The surveillance and data mining conducted by Alphabet (Google), Apple, Microsoft, Meta, TikTok USDS JV, ByteDance (China), Wildberries (Russia), and others is "indiscriminate," meaning these companies collect from individuals, businesses, and government agencies highly confidential information spanning more than 5,000 data points.
All data collected from individuals—including teens and children—is aggregated into identifiable digital profiles, often referred to as Digital DNA. These profiles represent detailed behavioral, personal, and usage patterns tied to a specific individual.
Operating system providers and developers—including those behind Android, iOS, and Windows—enable the creation of these identifiable user profiles through system-level data access, application activity, and integrated services. These profiles are then leveraged for monetization, primarily through targeted advertising, as well as other commercial uses, including data sharing and resale.
In addition to data collected directly from device owners and end users, operating system providers, app developers, and AI platforms may also acquire and combine user profile data from third-party sources. This results in a layered aggregation model, where multiple entities contribute to and exploit increasingly comprehensive user profiles.
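The layered aggregation model described above can be sketched in simplified form. This is an illustrative example only; all source names, field names, and values below are hypothetical and do not represent any actual platform's data schema.

```python
# Illustrative sketch of layered profile aggregation ("Digital DNA").
# All sources and fields are hypothetical examples, not real vendor data.

def merge_profiles(user_id: str, sources: list[dict]) -> dict:
    """Combine partial profiles keyed to one user into a single record.

    Each layer (OS telemetry, app activity, third-party broker data)
    contributes fields; later sources enrich, rather than replace,
    earlier ones.
    """
    profile: dict = {"user_id": user_id}
    for partial in sources:
        for field, value in partial.items():
            # Accumulate multi-valued fields (e.g. interests) as sets
            if isinstance(value, (list, set)):
                profile.setdefault(field, set()).update(value)
            else:
                profile.setdefault(field, value)
    return profile

# Hypothetical partial profiles from three layers of the ecosystem
os_layer = {"device_model": "PhoneX", "location_history": ["city:A"]}
app_layer = {"interests": ["fitness", "finance"], "contacts_count": 212}
broker_layer = {"interests": ["travel"], "household_income_band": "upper"}

digital_dna = merge_profiles("user-123", [os_layer, app_layer, broker_layer])
```

The point of the sketch is the one-way accumulation: each additional source can only make the profile more comprehensive, which is why multiple entities contributing to the same identity produces increasingly complete records.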
The outcome is a data ecosystem in which:
The widespread use of highly addictive and manipulative technologies embedded in AI-infused applications, social media platforms, and AI chatbots has contributed to a growing global technology addiction crisis. This crisis affects adults, teens, and children alike and has been associated with increases in anxiety, depression, violence, online bullying, self-harm, public discord, political polarization, and suicide.
At some point, the question needs to be asked:
"If addiction, harm, mental health decline, exploitation, and the loss of life among teens and children are not the line at which government, advertisers, and Big Tech change their business models, then what is?"
Individuals who access the internet through devices supported by Android, iOS, or Windows operating systems are subject to large-scale data collection and profiling within Google's global advertising ecosystems supported by the AdTech AI-Quantum Control Platform (Core), which distributes targeted ads to billions of people around the world 24/7, 365 days per year.
Google's ecosystem—often described as the AdTech Core and its ecosystem of micro cores—aggregates user data to enable the delivery of targeted advertising across international markets. Through a network of over 35,000 data brokers, developers, advertisers, PR agencies, and platform providers, user information can be distributed and utilized across multiple jurisdictions worldwide, including China and Russia.
This includes regions with varying regulatory standards, raising important questions about:
These platforms operate through interconnected advertising infrastructures—sometimes referred to as “micro AdTech ecosystems”—that support real-time bidding, audience targeting, and behavioral profiling at planetary scale.
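The real-time bidding flow at the heart of these infrastructures can be sketched in highly simplified form. The field names below loosely echo the OpenRTB convention but are hypothetical; no real exchange API or bidder is represented.

```python
# Simplified sketch of a real-time bidding (RTB) auction.
# Field names loosely echo the OpenRTB style; all values are hypothetical.

import json

def build_bid_request(user_profile: dict, ad_slot: str) -> str:
    """Package user/device data into a bid request broadcast to bidders."""
    request = {
        "id": "req-001",
        "imp": [{"id": ad_slot}],       # the ad impression being auctioned
        "user": user_profile,           # behavioral profile shared outward
        "device": {"os": "Android"},    # hypothetical device metadata
    }
    return json.dumps(request)

def run_auction(request_json: str, bids: dict[str, float]) -> str:
    """Every bidder receives the request's profile data; highest bid wins."""
    # In a real exchange, each bidder sees the profile whether or not it
    # wins the auction -- one reason RTB disperses user data so widely.
    return max(bids, key=bids.get)

req = build_bid_request({"interests": ["travel"]}, "banner-1")
winner = run_auction(req, {"adco": 0.42, "bidcorp": 0.57, "dsp3": 0.31})
```

Note the design consequence the sketch makes visible: the profile travels to every participating bidder, not just the winner, which is how a single page load can propagate behavioral data across many jurisdictions at once.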
Through biometric data, individuals can potentially be identified and located—even without carrying their personal device—using technologies such as facial recognition and voice pattern analysis. These capabilities can operate through nearby connected devices, cameras, or other networked products in close proximity via Google's AdTech Core and global ecosystem. For advertisers, this is the greatest delivery system ever created; for military use, it is the greatest targeting system on the planet.
Global advertising technology (AdTech) infrastructures—originally developed for commercial targeting—are increasingly being examined in the context of national security and information operations.
Large-scale AdTech platforms and their surrounding ecosystems enable capabilities such as:
These same capabilities may be leveraged beyond commercial use cases. Governments and state-aligned actors have been documented utilizing data-driven platforms and digital ecosystems for purposes including:
This convergence of commercial data infrastructure and government use cases is often described as civil–military fusion, where technologies developed in the private sector are adapted for national security and strategic operations.
Chinese and Russian civil-military programs, including those in the U.S. involving Palantir Technologies' Gotham Intelligence and Military Core, pose massive threats to the privacy, security, and safety of everyone, including teens and children, connected to the internet by way of any device or service supported by Android, iOS, or Windows.
For decades, telecommunications, consumer protection, child safety, and privacy laws have established clear legal obligations regarding the protection of user data, confidentiality, and the lawful use of communications infrastructure.
Yet today, consumers—including minors—are routinely subjected to pervasive surveillance and data mining through smartphones, applications, operating systems, and connected products, despite the existence of these established legal safeguards.
These practices are often justified through terms of service and privacy policies. However, EBOR asserts that contractual consent does not override statutory law, particularly when services are essential to modern life.
“You cannot contract your way out of federal law.”
Telecommunications infrastructure—including wireless, landline, fiber, and satellite—is regulated under a public interest framework, not a purely commercial one.
The use of this infrastructure for surveillance-based business models raises serious legal and constitutional questions.
47 U.S.C. § 222
Telecommunications carriers are legally required to protect the confidentiality of customer data, including:
Carriers do not own this data—they are custodians with a duty to protect it.
47 C.F.R. §§ 64.2001–64.2011
These rules implement Section 222 and require:
These are enforceable regulatory obligations, not optional guidelines.
Unjust and Unreasonable Practices
47 U.S.C. § 201(b)
It is unlawful for telecommunications providers to engage in practices that are unjust or unreasonable.
The misuse of customer data—especially at scale—may fall within this prohibition.
47 U.S.C. § 202(a)
This statute prohibits discriminatory or coercive practices in telecommunications services, including conditions that unfairly burden consumers.
Telecommunications carriers operate under a legal requirement to serve the:
“Public interest, convenience, and necessity.”
This establishes a public trust obligation—meaning infrastructure cannot be used solely for exploitative commercial purposes at the expense of consumers.
FTC Act – 15 U.S.C. § 45
Prohibits:
This applies directly to digital platforms, apps, and data-driven services.
This law is designed to address national security risks posed by applications and platforms that are owned, controlled, or influenced by foreign adversaries.
Key Provisions:
Consumers are told: “You agreed to it.”
But in reality:
This raises a fundamental question:
Is consent valid when participation in modern society requires acceptance?
At both the state and federal level, a consistent pattern has emerged:
The result:
Incomplete legislation + weak enforcement = systemic backdoors for exploitation
This issue extends far beyond privacy.
It impacts:
Key concerns include:
The Electronic Bill of Rights is based on three fundamental legal principles:
Terms of service cannot waive federally protected rights.
Coerced or conditional consent is not valid consent.
Licensed spectrum and regulated networks are not private data-extraction systems.
The issue is not the absence of law.
The issue is:
EBOR exists to:
The Electronic Bill of Rights calls on:
to recognize and act on the following reality:
Privacy, security, and safety are not optional features. They are legal rights.
This material is provided for informational and educational purposes only and does not constitute legal advice. Readers should consult qualified legal counsel for advice regarding specific legal matters.
Electronic Bill of Rights Framework for Clean Data Business Practices
Article I: Right to Data Privacy
Article II: Right to Data Security
Article III: Right to Digital Anonymity
Article IV: Right to Be Forgotten
Article V: Right to Opt-Out of Data Monetization
Article VIII: Right to Digital Freedom and Free Speech
Article IX: Right to Own and Control Digital Identity
Article X: Right to Decentralized and Open Internet
Article XI: Right to Protection from Corporate and Foreign Surveillance
Article XII: Right to National and Consumer Security and Safety
Article XIII: The Abolishment of Web Scraping, Web Crawling, and Web Tracking
Article XIV: The Right to Accountability from Tech Giants
Article XV: The Right to Safe, Secure, and Private Preinstalled Apps & Technology
Article XVI: The Right to Safe Technology
Article XVII: The Right to Influencer and Bot Transparency
Article XIX: Freedom from Addictive, Divisive, and Manipulative Technology
Article XX: Freedom from Government & Tech Collusion
Article XXI: Right to Data Collection Transparency
Article XXII: Freedom from Indiscriminate Surveillance and Data Mining
Article XXIII: Freedom from Forced Participation by Way of Legal Agreements
Article XXIX: Freedom to Control Technology and Connected Products
Article XXX: The Right to Transparent Legal Language & App Permissions
Article XXXI: Ban on Teen Acceptance of Legal Agreements
Article XXXII: Right to Transparent AI and Algorithmic Accountability
Article XXXIII: Right to Fair Terms and Conditions
Article XXXIV: Anti-Trust Protections & Internet Centralization
Article XXXV: National, Internet, and Technology Safety
Article XXXVI: The Right to Sue and Hold Tech Giants Accountable
Article XXXVII: Prohibition on AdTech Surveillance Infrastructure in Government and Civil-Military Programs, Such as Those Involving the U.S. Government and Palantir Technologies
Article XXXVIII: Equal Protection for Digital Currency, Financial, Biological, Medical, Legal, and AI–Quantum Platforms
All rights, protections, and enforcement mechanisms established under the Electronic Bill of Rights shall apply equally to all essential digital services and emerging technologies, including but not limited to:
These protections shall extend across all layers of the digital ecosystem, ensuring that no platform, technology, or service is exempt from accountability due to its classification as “emerging,” “experimental,” or “innovative.”
Universal Applicability to AI-Infused Digital Services
The combined Articles of the Electronic Bill of Rights shall apply in full to all AI-infused applications, platforms, chatbots, and digital services, regardless of delivery model, including:
No developer, platform provider, or technology company shall bypass or dilute these protections through terms of service, contracts of adhesion, or technical architecture.
Future-Proofing Consumer Protection
As technology evolves, all new categories of essential digital services shall automatically fall under the protections of the Electronic Bill of Rights without requiring additional legislation.
This ensures that innovation does not outpace consumer protection, and that human rights, data sovereignty, and individual autonomy remain preserved in the age of AI and quantum computing.

Documented Categories of Harm: Targeted Advertising
There are many documented harms—including loss of life—associated with highly addictive and manipulative AI-infused apps, social media platforms, and emerging AI chatbots that can induce the ELIZA effect: the human tendency to attribute human emotion, trust, and authority to AI-driven apps, chatbots, and evolving connected products and services.
Outlined below are the most urgent categories of harm addressed by the Electronic Bill of Rights (EBOR) framework:
Ironically, those who design, protect, and profit from the system may also be affected by its consequences—along with their own families—as future generations are increasingly born into an oppressive, intrusive, exploitative, and harmful AI- and quantum-driven digital environment supported by essential consumer products and services. In this context, individuals may unintentionally fund their own oppression and exploitation, at the expense of their privacy, security, and safety, through the very products and services they pay for.
There is a better way forward for everyone.
Together, these harms underscore the urgent need for ethical, clean-data standards and practices by way of the EBOR framework.
Rex M. Lee: Rlee@ElectronicBillofRights.com