
The Dirty 30:
Tech’s Top Privacy Polluters
AI and Privacy
Welcome to the Dirty 30, a “worst of” list inspired by the EWG’s Dirty Dozen most pesticide-laden foods. The Dirty 30 is our curated list of privacy polluters: the companies, software, platforms, and apps doing the most to exploit your data and undermine your privacy.
In the age of AI, data and privacy concerns take on heightened importance. The large language models (LLMs) driving the AI boom and fueling chatbots, AI agents, and business insight tools are desperately thirsty for training data to help them achieve more human-like capabilities.
Data is the new oil.
You need only look as far as your inbox. Have you received any notices of late from your SaaS providers regarding changes to terms of service? Companies are routinely expanding the types of data they collect and broadening the purposes for which they can use that data, including to train their AI models.
Add to this the concern that your confidential data may no longer be confidential. Once data is input and absorbed into the LLM, that data becomes fodder for answers delivered to other users who are not you.
AI models are also capable of more sophisticated profiling, which raises concerns around bias and discrimination, as well as increased risks of fraud or identity theft as data aggregators create digital clones that can pass verification tests. In the best case, maybe your Netflix recommendations are more relevant. In the worst case, you’re accused of fraud, arrested, and forced to file for bankruptcy.
Note that despite FTC warnings that surreptitiously changing one’s terms of service could constitute unfair or deceptive trade practices, not all companies are proactively notifying users of their policy changes. In fact, many companies state in their terms of use that they can change their policies any time and that your continued use of their service or website constitutes your consent. This may or may not be legal, but it is definitely not done with your best interests in mind.
To help you on your awareness journey, we’ve compiled this list of the worst data offenders. While some of these players are truly and objectively bad, any list will necessarily involve some subjectivity. We’ve listed the factors we considered most important in our assessment, along with our general methodology and reasoning, so you can make your own informed decisions.
Factors We Considered
🔹 Trustworthiness. What is the company’s track record around data collection, handling, and security? Is it forthcoming and above-board or sneaky and opaque? Does it have a trail of fines, lawsuits or breaches stemming from its data or security practices?
🔹 Data Minimization. Does the company overreach, collecting more data than is reasonably necessary to provide the service? Does it collect information about non-users or collect data from third parties to build a dossier on users?
🔹 Data Sensitivity. Does the company collect sensitive data, such as name, address, phone number, social security numbers, health information, financial or credit information, biometric data, location information, or behavioral or sentiment data?
🔹 Data Usage and Sharing. Does the company use your data for profiling or inferencing, share your information with third parties or an unknown number of “marketing affiliates,” or allow its employees access to review your data? Many companies like to brag that they never “sell” your data, but that doesn’t mean they aren’t sharing it.
🔹 Data Protection and Security. Does the company employ data security best practices, such as end-to-end encryption, multi-factor authentication, role-based access, and strong internal controls?
🔹 Privacy Impact. How many users are impacted by the company’s data and security practices? Our analysis is biased towards services that are widely used, especially in the United States.
🔹 Data Control. How much control do users have over their data via opt-out features and setting permissions and controls? How easy is it to opt out and toggle privacy features?
🔹 Consent and Transparency. How transparent is the company about what data it collects and how it uses that data?
Methodology and Logic
Our methodology prioritizes the first six factors, in that order. Here’s why:
🔹Trustworthiness ranks first because when a company has a history of shady practices and a trail of regulatory violations, fines, lawsuits, and whistleblowers, suffice it to say you are introducing a particular type of counterparty risk when you use its services. A company’s track record tells us what it does versus what it says. It also speaks to how privacy-forward the company is, how seriously it takes its role as a steward of your data, how meticulous and competent it is in performing that role, and whether it sees you as the customer or the product.
🔹Next, we look at the scope of a company’s data collection: if they don’t collect it in the first place, there’s not a problem!
🔹Even if a company does collect it, if it’s not particularly sensitive data or prone to misuse, not a problem.
🔹Even if it is a problem (in terms of the potential for unauthorized access or misuse), if the company is not actually using the data for an unreasonable purpose, perhaps we have only a small, contained, or even a theoretical problem.
🔹And if the problem remains small, contained, or theoretical, we can probably deal. But if the company amplifies the problem by indiscriminately sharing data (whether for $ or not) or having security lapses, now we get twitchy.
🔹And, finally, if the company has a massive customer base, this speaks to impact.
Our model assigns lower priority to the final two factors:
🔹We give points for data control features, but we’d rather not need to control our data at all. We’d prefer a privacy-first approach that requires proactively opting in to any shenanigans.
🔹We also give points for transparency because it plays a role in informed consent, but a company telling us it plans to surveil us and manhandle our data before it actually does so is cold comfort.
With that behind us, let’s dive into the dumpster.
Rankings
Trustworthiness is our highest-ranked factor, so let’s start with the least trustworthy of the bunch. These companies engage in practices such as installing spyware/malware on your devices, conducting warrantless searches, lying to your face, and employing questionable practices in general.
How Did They Earn Their Spot?
#1 Temu
First out of the gate, we have Temu, a popular Chinese e-commerce site that has been downloaded by 185.6 million users in the U.S., which equates to roughly 78% of the adult population. Temu has been the subject of multiple lawsuits, with allegations including installing spyware and malware on users’ devices, failing to comply with security standards (potentially compromising users’ financial information), and misleading users about how it collects their data.
The Temu app is alleged to be able to access other data and apps on a user’s device, based on the extensive access and permissions it requires. This means an employee who has downloaded Temu on a work phone has potentially exposed the company’s data. And if Temu is downloaded on an employee’s personal phone and the employee accesses company data from that phone, such as checking email, the company is also exposed. This should be very concerning for companies and individuals alike.
Maybe that air fryer isn’t such a bargain.
#2 Clearview AI
Haven’t heard of these guys? Allow us to introduce you. Clearview AI describes itself thus:
“We help law enforcement and governments in disrupting and solving crime, while also enabling financial institutions, transportation, and other commercial enterprises to verify identities, prevent financial fraud, and combat identity theft.”
Sounds good, right? And in the right hands, it probably is. But we are talking about the government and profit-driven enterprises. You can read about the law enforcement concerns here, but we’d also like to highlight that if you have innocently shopped at Macy’s, Kohl’s, Target, Walmart, BestBuy, Albertsons, or Kroger, your face and other biometric data, along with other data Clearview has collected about you from myriad third parties, are likely in their database.
Clearview has been fined multiple times and has defended a number of lawsuits for alleged privacy violations in the U.S. and abroad arising from the use of its software. Macy’s also faced a class action lawsuit for spying on its shoppers with the Clearview AI software, which recently reached a proposed settlement. Clearview AI has now been permanently banned nationwide from making its database available to most private actors, including most businesses.
Is this really the AI future we’re envisioning?