How They Track You: Anatomy of a Market-Driven Surveillance System

written by Beatrice Galluzzo

When we think about surveillance, we still tend to picture the classic 1970s crime thriller scenario: wiretaps, search warrants, city cameras, tailing suspects, trench-coated informants hunched in the dark corners of underground parking garages. It is becoming increasingly clear that things no longer work that way—that they are far more scalable and far less visible. What does that mean? It means that while surveillance used to be the sum of targeted, somewhat costly operations limited to a handful of individuals, it has now become a full-fledged system that can extend, with relatively little effort, to millions of mostly unaware people. Data is collected continuously and automatically, analyzed by silent algorithms, and stored in infrastructures that are already operational because they are embedded in the technological structures we use every day (and we’ll come back to that). To be clear: moving from ten to ten million subjects does not change the nature of the mechanism, only its scope. It is less theatrical surveillance, but infinitely more insidious.

In practice, for years we have been immersed up to our necks in what is known as the attention economy—a model in which the most valuable resource is not money or information, but people’s attention. That famous line from the 2020 documentary The Social Dilemma—“if you’re not paying for the product, then you are the product”—captures it well. But here comes the plot twist: the very tools created to sustain the attention economy (advertising technology, data brokers, geolocation systems, data-fusion platforms, and so on) are increasingly being used not only by those trying to sell us something, but also by law enforcement and border agencies, often enhanced by artificial intelligence. Note this carefully: AI does not generate new data. It makes existing data analyzable at scale in real time and identifies patterns invisible to the human eye.

To understand how this new “market-driven” surveillance works—organized and fueled by commercial actors and infrastructures rather than directly by the state—we need to start with the mechanisms that make it possible, the technologies that accelerate it, and the regulatory responses emerging in the United States and Europe. Keeping one point firmly in mind: the boundary between “commercial” data and “police” data is alarmingly porous.

Let’s begin with the evidence. Very recently, in Donald Trump’s United States, this paradigm found a natural laboratory: ICE (Immigration and Customs Enforcement) and the DHS/CBP ecosystem (Department of Homeland Security/Customs and Border Protection). These two U.S. federal bodies have explicitly asked major tech companies to share “Big Data and Ad Tech” for use in investigations and operational activities. We are talking about a genuine shortcut through purchased data. What is needed now is no longer a (formal and legal) warrant, but a supplier. In this context, a key concept that has emerged in recent American debate is CAI—Commercially Acquired Information: personal information purchased on the market, which may include location data, identity profiles, app usage, travel records, telemetry, and so on. In the absence of a clear prohibition—which currently does not exist—or uniform case law, law enforcement no longer obtains data by “searching,” but by buying it. And yes, it sounds like the plot of a dystopian thriller set in some vague near-future, but it is, unfortunately, everyday reality for many people.

This chilling data market creates a gray area between constitutional protections and procurement—the procedures through which public or private entities purchase goods and services. This is where AI lends a hand, fusing heterogeneous datasets and generating leads (in marketing, a potential customer; here, a trail to follow): what once required weeks of manual analysis can now be industrialized in moments.

The most intuitive example is location data. In a recent investigation by 404 Media, ICE was found to have purchased access to a tool updated daily with billions of location “pings” from hundreds of millions of phones. This changes the very nature of surveillance: you no longer follow someone already suspected; you explore an ocean of traces and then decide who might be potentially suspect. AI does the rest, dramatically accelerating the clustering of our habits, the estimation of routines (home/work patterns), and the inference of relationships and contacts. The underlying issue was at the heart of the 2018 Supreme Court case Carpenter v. United States, in which the Court ruled that access to a person’s historical location data reveals “the privacies of life”—literally, and almost poetically. It is what traditional privacy feared most: not the single snapshot or one-off location, but the reconstruction of an entire life.
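To make concrete how little machinery this kind of inference requires, here is a minimal sketch, in Python, of turning raw location pings into a home/work routine. Everything in it is hypothetical: the ping format, the coordinate rounding (a crude stand-in for spatial clustering), and the hour thresholds are illustrative assumptions, not any vendor’s actual pipeline.

```python
from collections import Counter

# Hypothetical pings: (device_id, hour_of_day, lat, lon), with coordinates
# already rounded to a coarse grid cell -- a crude stand-in for clustering.
pings = [
    ("dev1", 2,  45.464, 9.190),   # night-time pings -> candidate "home"
    ("dev1", 3,  45.464, 9.190),
    ("dev1", 23, 45.464, 9.190),
    ("dev1", 10, 45.478, 9.227),   # working-hours pings -> candidate "work"
    ("dev1", 11, 45.478, 9.227),
    ("dev1", 15, 45.478, 9.227),
]

def infer_routine(pings, device_id):
    """Guess home/work cells from the most frequent night and daytime locations."""
    night, day = Counter(), Counter()
    for dev, hour, lat, lon in pings:
        if dev != device_id:
            continue
        cell = (lat, lon)
        if hour >= 22 or hour < 6:      # illustrative "sleeping hours" window
            night[cell] += 1
        elif 9 <= hour < 18:            # illustrative "office hours" window
            day[cell] += 1
    return {
        "home": night.most_common(1)[0][0] if night else None,
        "work": day.most_common(1)[0][0] if day else None,
    }

print(infer_routine(pings, "dev1"))
```

Six pings and a frequency count are enough to label two addresses; the commercial tools described above do the same thing with billions of pings, real clustering, and co-location analysis to infer who meets whom.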

And if you thought your phone’s location data falling into unknown hands was annoying enough, you may be delighted to learn that a 2025 investigation by Wired revealed that Airlines Reporting Corporation, a data broker linked to airlines (one of many, needless to say), sold DHS/CBP access to details drawn from vast numbers of flight bookings, with contractual clauses even limiting transparency about the source of the data itself. The Electronic Frontier Foundation (EFF), the leading nonprofit defending civil liberties in the digital world, amplified the case to inform the public that everything—absolutely everything—can become material for investigation and control, bypassing public perception of what something as banal as “booking a flight” really entails.

If, like me, you hoped it ended there, I have to disappoint you. You love those hyper-technological cars that feel more like flight simulators than vehicles, right? Technically, they are connected cars—true IoT (Internet of Things) devices, meaning physical objects connected to the internet—that collect and transmit enormous amounts of data. Beyond your real-time location and travel history, they offer anyone with access a detailed overview of your schedules, your driving style, the devices you connect via Bluetooth, your voice (used for commands like “call Honey”), and much, much more. Where does all this harvested data go? It is indeed sent to automakers’ servers to run subscription services (like finding your car via app when you forget where you parked), to provide remote assistance, or to update software. But it also creates an enormous informational pool that law enforcement can access through requests whose legal basis is often anything but straightforward. The car, too, has been quietly transformed into a continuous source of behavioral data, raising serious questions about transparency, consent, and privacy protection.
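To see what “a continuous source of behavioral data” looks like in practice, here is a sketch of the kind of telemetry record a connected car might periodically upload. The field names and structure are purely illustrative assumptions for this article—no manufacturer’s actual schema is implied.

```python
import json
from datetime import datetime, timezone

def build_telemetry(vin, lat, lon, speed_kmh, paired_devices):
    """Assemble one hypothetical telemetry record for upload to a carmaker's server."""
    return {
        "vin": vin,                              # vehicle identifier
        "ts": datetime.now(timezone.utc).isoformat(),
        "location": {"lat": lat, "lon": lon},    # real-time position
        "speed_kmh": speed_kmh,                  # driving-style signal
        "bluetooth_devices": paired_devices,     # device names can reveal contacts
        "harsh_braking_events": 0,               # another behavioral signal
    }

payload = json.dumps(build_telemetry("WDB1234", 45.46, 9.19, 52, ["Honey's iPhone"]))
```

Each record is innocuous on its own; accumulated on a server over months, the same fields reconstruct schedules, routes, driving habits, and social ties.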

Now, if it is clear that data is the raw material, it is equally clear that on its own it is of little use: it must be collected, organized, connected to other data, and transformed into something actionable. In other words, it must be refined. This is where platforms like Palantir and various data-fusion systems come into play. These are software tools used by government agencies and police authorities to integrate vast—truly vast—amounts of information from diverse sources (public databases, investigative archives, administrative records, commercially purchased data from private brokers, and so on), and then relate them to one another. Data fusion means exactly this: merging heterogeneous data to reconstruct a comprehensive picture. On these platforms, operators can search for a name or license plate and visualize connections, contact networks, movements over time, and correlations between events. These technologies do not create new data, but they do connect isolated fragments and transform them into relational and, above all, predictive maps. And it hardly needs saying that the issue is not only what these tools can do, but how difficult it is to verify what they actually do.
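The fusion step described above can be sketched in a few lines: start from one identifier, link records across otherwise unrelated sources through shared fields, and a profile emerges. The three “sources” below (a vehicle registry, an ad-tech broker, a license-plate-reader feed) and all their fields are invented for illustration; real platforms do this across hundreds of databases.

```python
# Hypothetical records from three separate sources, keyed on different fields.
dmv = [{"plate": "AB123", "name": "J. Doe"}]                                  # vehicle registry
adtech = [{"ad_id": "x9", "name": "J. Doe", "home_cell": (45.46, 9.19)}]      # purchased broker data
lpr = [{"plate": "AB123", "seen_at": "2024-05-01T08:12", "cam": "cam-7"}]     # plate-reader feed

def fuse(name):
    """Link records across sources via shared fields (name, then plate)."""
    profile = {"name": name, "plates": set(), "home_cell": None, "sightings": []}
    for r in dmv:                      # name -> registered plates
        if r["name"] == name:
            profile["plates"].add(r["plate"])
    for r in adtech:                   # name -> inferred home location
        if r["name"] == name:
            profile["home_cell"] = r["home_cell"]
    for r in lpr:                      # plate -> physical sightings
        if r["plate"] in profile["plates"]:
            profile["sightings"].append((r["seen_at"], r["cam"]))
    return profile

print(fuse("J. Doe"))
```

Note that no single source here knows where J. Doe lives *and* where the car was seen; only the fused profile does—which is exactly why the auditability of these platforms matters.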

Quick recap: the data already exists, circulates on the market, is purchased (in legally gray areas), integrated, and made operational through systems capable of fitting isolated elements into coherent architectures. Data has always been collected; now, however, that massive informational body can be queried in real time, very, very quickly. At this point, a new question arises: what happens when the key to accessing this system is no longer your phone number, your car’s license plate, or an advertising ID, but your face? When it is no longer something you own, but who you are. When you are no longer tracked through objects and codes, but directly through your body. If the interface between the individual and the surveillance infrastructure coincides with the body itself, then biometrics must enter the conversation—not as an isolated technology, but as the natural extension of a system already built to make every piece of data searchable, correlatable, and actionable.

In Europe, the institutional response follows a different trajectory from that of the United States, at least on the regulatory level. With the AI Act, the European Union has introduced a risk-classification system distinguishing between prohibited uses, high-risk uses, and applications permitted with specific safeguards. Real-time remote biometric recognition in public spaces, in particular, is subject to rather strict restrictions: as a general rule it is prohibited, but it may be authorized for law enforcement purposes in exceptional circumstances, such as preventing terrorist threats or searching for missing persons, subject to judicial authorization and within precise temporal and geographic limits. This means the EU does not legitimize indiscriminate use of biometrics, but allows it within defined procedural frameworks, introducing transparency obligations, fundamental-rights impact assessments, and independent oversight. At the same time, however, the Union is strengthening interoperability among police information systems. The so-called Prüm II updates and expands the previous cooperation system among Member States, enabling automated exchange of data such as DNA profiles, fingerprints, and vehicle information. The declared goal is to speed up cross-border investigations by making comparisons between national databases more immediate. The outcome of these moves, however, is at the very least contradictory: on the one hand, stricter rules on AI and biometric use; on the other, a system that increases the speed and scale of information circulation among authorities.

At this point, you might ask: so what? Well, the macro-problem taking shape is no longer simply one of privacy versus security, but rather one of the data market versus democracy. It is by now almost a truism that technology brings with it the capacity to collect data and analyze it at inhuman scale; the question is whether our democratic society—while it lasts—is willing to accept that a system already toxic in itself and born to sell advertising becomes, in effect, a surveillance network ready to be tapped on demand. Whether this society considers it normal for the boundary between commercial data and police data to be drawn not by clear and verifiable limits, but by contracts between public administrations and private suppliers. Whether it finds it sustainable for increasingly powerful enforcement systems to rely on platforms and databases capable of exposing the habits, movements, and relationships of millions of people. We need transparency. We need mandatory and verifiable standards. And above all, perhaps, a mature political discussion about which technologies are compatible with public spaces that wish to remain free. Because when an algorithmic interpretation translates into a stop, an interrogation, a revocation of status, an expulsion, or an arrest, the problem is no longer technical. It is political.
