HUMINT After UTS: Tradecraft in a World of Total Telemetry

How sources, covers, and meets adapt when phones, cars, and cities are sensors


By the Security Nexus

1. From Street Surveillance to Total Telemetry

The podcast framed the problem bluntly: Ubiquitous Technical Surveillance means the environment itself is hostile. Your phone, your car, your workplace badge, your city’s cameras and Wi-Fi all collaborate, usually unintentionally, to build a machine-readable biography of your life.

Work on city-scale “patterns of life” shows how mature this is becoming. Tenzer, Rasheed, and Shafique take raw GPS trajectory data and use a biologically inspired neural network (“Grow-When-Required” episodic memory) to learn normal citywide behavior and flag anomalies in real time.

They formally define pattern-of-life analysis as not just spotting odd events, but continuously extracting “normal patterns over time” from streams like taxi GPS traces. The system builds a memory of typical flows—commutes, rush hours, festival surges—and then treats any deviation as a candidate anomaly.

Now map that back to the podcast’s simple example: the weekly Monday coffee run. Once your commute, errands, gym, and coffee rhythms are learned, any deviation is informational. A diverted route to a park, a new café before a meet, a one-off afternoon in a “random” neighborhood: each of those is a spike in the telemetry.
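To make the mechanism concrete, here is a minimal sketch of the grow-when-required idea behind such episodic memories. This is not the cited system, and the coordinates, feature scaling, and thresholds are all invented; it only illustrates the logic: prototypes of normal (day, hour, place) behavior are added whenever an observation is far from everything already remembered, and the distance to the nearest prototype then serves as an anomaly score.

```python
# Toy grow-when-required (GWR-style) pattern-of-life memory.
# Purely illustrative: invented coordinates, arbitrary scaling and thresholds.
import math

def features(day, hour, lat, lon):
    """Encode one GPS ping so that ~1 hour of time and ~1 km of position
    count roughly equally (an arbitrary, illustrative choice)."""
    return (day * 2.0, hour * 1.0, lat * 100.0, lon * 100.0)

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class GrowingMemory:
    """Prototypes of 'normal' behavior grow when an input is far from all of
    them; otherwise the nearest prototype is nudged toward the input."""
    def __init__(self, grow_threshold=1.0, learn_rate=0.1):
        self.prototypes = []
        self.grow_threshold = grow_threshold
        self.learn_rate = learn_rate

    def nearest(self, x):
        return min(self.prototypes, key=lambda p: dist(p, x)) if self.prototypes else None

    def learn(self, x):
        p = self.nearest(x)
        if p is None or dist(p, x) > self.grow_threshold:
            self.prototypes.append(list(x))          # grow: genuinely new behavior
        else:
            for i in range(len(p)):                  # familiar: refine the match
                p[i] += self.learn_rate * (x[i] - p[i])

    def anomaly_score(self, x):
        p = self.nearest(x)
        return float("inf") if p is None else dist(p, x)

# Learn a fictional routine over six weeks: Monday coffee run, then the office.
memory = GrowingMemory()
for _ in range(6):
    memory.learn(features(day=0, hour=8, lat=51.5010, lon=-0.1250))   # coffee
    memory.learn(features(day=0, hour=9, lat=51.5035, lon=-0.1200))   # office

routine = features(day=0, hour=8, lat=51.5012, lon=-0.1251)   # the usual café
detour  = features(day=0, hour=8, lat=51.5150, lon=-0.0900)   # one-off neighborhood
print("routine score:", round(memory.anomaly_score(routine), 2))
print("detour score: ", round(memory.anomaly_score(detour), 2))
```

In this toy run the familiar café scores close to zero while the one-off detour stands out by more than two orders of magnitude; a real system would operate on far richer features and city-scale data, but the scoring logic has the same shape.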

Research on situation recognition pushes this even further. Sarkheyli and Soffker show that feature selection over sensor data can significantly improve detection rates and cut false alarms by discarding irrelevant signals and focusing on the right subset of features.
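The effect is easy to reproduce with standard tooling. The snippet below is not the cited method, just a generic scikit-learn baseline on synthetic data (invented channel counts and thresholds): a nearest-neighbor detector that must weigh forty sensor channels equally misses the “situation” signal carried by only two of them, and selecting the informative subset first restores detection.

```python
# Illustrative only: synthetic data, generic feature selection + k-NN detector.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 40))               # 600 samples, 40 "sensor channels"
y = ((X[:, 0] + X[:, 3]) > 1.2).astype(int)  # only channels 0 and 3 carry the situation

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

def report(label, Xtr, Xte):
    """Fit a simple nearest-neighbor detector and report hits and false alarms."""
    pred = KNeighborsClassifier(5).fit(Xtr, y_tr).predict(Xte)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    print(f"{label}: detection rate {tp / (tp + fn):.2f}, false alarms {fp}")

# Baseline: the detector sees all 40 channels, irrelevant noise included.
report("all channels", X_tr, X_te)

# Feature selection: keep only the channels most informative about the label.
selector = SelectKBest(mutual_info_classif, k=2).fit(X_tr, y_tr)
report("2 selected  ", selector.transform(X_tr), selector.transform(X_te))
```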

For HUMINT, that means the machines watching you are:
• Learning what “normal” looks like for your micro-environment; and
• Getting better at telling “this is just traffic noise” from “this is a real situation change.”

The old KGB watcher on the corner is now an unsupervised learning system sitting on cloud compute.



2. The Tradecraft Paradox in a Sensor-Saturated World

Kyle Cunliffe describes contemporary espionage in Russia and China as trapped in a tradecraft paradox: cyberspace is a game-changer for finding and approaching targets, but the risk of cyber-enabled tradecraft in those environments is so high that it demands more trust up front, trust that you can’t safely build without tradecraft.

In hard targets like Moscow and Beijing, physical meetings were always the most dangerous phase of an operation. Now that danger is multiplied by dense, integrated surveillance:
• Biometric checkpoints, CCTV, and smart-city infrastructure;
• AI-driven video analytics and license-plate recognition;
• Comprehensive data retention that stitches together your movements retroactively.

The Deep Dive episode captured the operational implication: classic Cold War tactics (dead drops, brush passes, casual walk-by meets) are close to suicidal when every public square is effectively a sensor grid.

Cunliffe’s conclusion is stark:
• To justify any risky tradecraft in a hard-target city, you need a very high level of confidence that the prospective spy is genuine.
• But the same UTS conditions that make tradecraft risky also deny you the opportunity to build that confidence safely.
• As a result, access to the highest-value targets—senior elites who can’t travel and live inside dense counterintelligence architectures—is likely to remain sharply constrained.

In other words: UTS doesn’t just increase risk; it reallocates who can realistically be recruited and where.



3. AI as Intelligence, Not Just Autonomy

The podcast framed AI’s most important military job today as intelligence fusion and targeting, not killer robots. Anthony King’s work supports that: he documents how British forces used data and AI to manage Liverpool’s COVID mass testing campaign and how similar architectures underpin precision targeting in Ukraine.

In Liverpool, the Combined Intelligence for Population Health Action (CIPHA) platform linked health, mobility, and testing data into a real-time picture of viral spread. That’s a population-level pattern-of-life system trained on disease rather than insurgents, but the logic is the same: use sensors + AI to find the problem quickly in a crowded urban environment.

King argues this is exactly how AI is being used for targeting in Ukraine: fusing multi-source data (SIGINT, GEOINT, OSINT) into a targeting cycle that located senior Russian officers, such as General Gerasimov, with remarkable precision.

For HUMINT, that has two consequences:
1. The surveillance layer is increasingly intelligent. It’s not just logging; it’s interpreting, ranking, and prioritizing anomalies.
2. HUMINT is pulled upstream. Human sources become critical at the seams of automated systems—explaining anomalies, validating data, and revealing blind spots in what the sensors think they see.

In that sense, HUMINT after UTS is less about “outwitting cameras” and more about understanding, and sometimes manipulating, the targeting architectures built on those cameras.



4. Data Brokers, Legal Loopholes, and the Norms Vacuum

The Deep Dive episode emphasized a particularly uncomfortable fact: U.S. agencies exploit gaps in the Electronic Communications Privacy Act (ECPA) by buying commercially available location data from brokers instead of getting a warrant, sidestepping Fourth Amendment safeguards that Carpenter extended to cell-site data.

The CDT report Legal Loopholes and Data for Dollars explains the mechanism in detail:
• ECPA’s protections are built around “electronic communication services” and “remote computing services,” neither of which cleanly covers modern data brokers.
• Data brokers collect precise location, advertising IDs, IPs, and device metadata, then sell them to law enforcement and intelligence customers—sometimes with explicit carve-outs for “federal law enforcement” and “national security,” and permission to resell to third parties.

Reviglio’s work situates those brokers within surveillance capitalism: a transnational industry that infers thousands of behavioral attributes per person, maintains vast behavioral profiles (Oracle claims data on over two billion people with tens of thousands of attributes each), and is largely under-regulated.

From a HUMINT perspective, this matters because:
• A case officer no longer needs a wiretap; a purchase order can deliver a target’s pattern-of-life.
• The same brokers that sell marketing data can be quietly incorporated into the counterintelligence kill chain.
• “Open source” and “publicly available” become euphemisms for data that is commercially available but deeply intimate.

At the international level, Dennis Broeders argues that cyber intelligence operates in a legal and diplomatic grey zone, protected by a long-standing “silence is golden” dogma about espionage. States rarely address intelligence directly in international law; instead, peacetime cyber espionage is treated as a class of operations “not per se regulated by international law.”

Combined, you get a powerful, permissive environment:
• Technically: cheap, scalable access to extremely sensitive pattern-of-life data.
• Legally: a thin, outdated framework that doesn’t clearly capture new actors and practices.
• Normatively: a shared, self-serving understanding that “everybody spies, so nobody wants strict rules.”

The podcast’s line that “cyber intelligence will simply be what cyber intelligence does” is not a metaphor; it’s a description of a norms vacuum being filled by practice.



5. HUMINT–OSINT Hybrids in the Social Layer

If UTS makes streetcraft hazardous, one response is to move early phases of recruitment and assessment into digital space.

Macêdo, Peotta, and Gomes propose exactly that: a framework that combines OSINT and HUMINT concepts to profile malicious users and select potential collaborators online, with explicit attention to measuring source reliability.

Their starting point is the “social perspective” of cybersecurity—the fact that threats emerge at the intersection of human behavior and digital platforms, particularly social networks. In that environment:
• Digital behavior becomes a rich observable for spotting dispositions, grievances, access, and influence;
• HUMINT techniques (elicitation, rapport, probing for consistency) can be adapted to virtual environments;
• The line between “source” and “user” blurs—people are constantly generating exploitable information simply by living online.

In a UTS world, it is easy to imagine recruitment pipelines where:
• OSINT and brokered data identify potential sources through behavioral profiling and access patterns;
• Digital interactions are used to test reliability and risk tolerance;
• Only at a late stage—if ever—do operations shift into physical contact.

That doesn’t solve the tradecraft paradox in hard targets, but it changes where and how trust is built, and it intensifies the fusion of HUMINT with other INTs.



6. So What Actually Changes for Sources, Covers, and Meets?

Given all of this, what does “post-UTS” tradecraft look like at a conceptual level?

6.1 Sources: Who You Target and Where

• Hard-target elites stay hard. As Cunliffe stresses, senior officials in Moscow and Beijing are heavily restricted in travel and deeply embedded in counterintelligence states. Cyberspace helps find them but doesn’t magically make them recruitable.
• Peripheral and infrastructure sources gain value. Engineers in data platforms, telecom operators, data-broker staff, and vendors to the security services all sit next to the surveillance machinery. Recruiting them may be easier and sometimes more valuable than chasing a minister.
• Off-grid and “low-tech” elites become strategic assets. Targets who deliberately shed devices and limit digital exhaust are harder to find, but once identified, they operate in a relatively sensor-sparse space. That’s a different kind of challenge, not necessarily a worse one.

6.2 Covers: Managing Digital Exhaust, Not Just Paper Legends

Traditional cover focused on documents, backstopping, and biographical consistency. After UTS, the decisive factor is whether your cover produces a plausible, machine-legible pattern of life.
• A legend without credit history, social media, ride-hailing records, or location traces is almost as suspicious as a bad passport.
• Conversely, a legend with rich but incoherent digital exhaust will stand out under pattern-of-life analytics long before a human investigator ever asks a question.

So the problem becomes: how do you design and maintain a cover whose telemetry looks “boringly normal” under AI scrutiny, across years?

That’s not an operational how-to question; it’s a strategic design problem for services. It implies:
• New joint HUMINT-SIGINT-CYBER teams tasked with cover-exhaust engineering;
• Continuous red-teaming of legends against the same analytics used by adversaries (see the sketch after this list);
• Policy questions about how far democratic services should go in fabricating fake identities and data in commercial systems.
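As a purely illustrative sketch of what that red-teaming could mean in the simplest case (invented features, thresholds, and data; not any service’s actual tooling), the snippet below reduces a week of telemetry to a few behavioral features and asks whether a legend is a statistical outlier against a reference population. A legend with almost no exhaust and one with rich but incoherent exhaust are both flagged; one built to mimic the population passes.

```python
# Hypothetical legend red-teaming: is a cover identity's digital exhaust a
# statistical outlier against a "boring" reference population? All features,
# thresholds, and data are invented for illustration.
import numpy as np

def exhaust_features(pings):
    """pings: list of (day, hour, place_id) telemetry events for one week."""
    hours = np.array([h for _, h, _ in pings])
    places = {p for _, _, p in pings}
    return np.array([
        len(pings),      # how much exhaust exists at all
        len(places),     # how many distinct places appear
        hours.std(),     # how spread out the day is
    ])

def flag_outlier(population, candidate, threshold=3.5):
    """Flag the candidate if any feature sits more than `threshold` standard
    deviations from the reference population's mean."""
    feats = np.array([exhaust_features(p) for p in population])
    mu, sigma = feats.mean(axis=0), feats.std(axis=0) + 1e-9
    z = np.abs((exhaust_features(candidate) - mu) / sigma)
    return bool(z.max() > threshold), z.round(1)

rng = np.random.default_rng(7)

def boring_week():
    """One synthetic commuter-week: office, home, gym, the odd errand."""
    week = [(d, float(rng.normal(9, 1)), "office") for d in range(5)]
    week += [(d, float(rng.normal(19, 1)), "home") for d in range(7)]
    week += [(5, float(rng.normal(11, 1)), "gym")]
    week += [(int(rng.integers(0, 7)), float(rng.uniform(8, 21)), "errand")
             for _ in range(int(rng.integers(0, 4)))]
    return week

population = [boring_week() for _ in range(200)]

legends = {
    "thin (almost no exhaust)   ": [(0, 9.0, "office")],
    "noisy (rich but incoherent)": [(d, float(rng.uniform(0, 24)), f"p{i}")
                                    for d in range(7) for i in range(12)],
    "engineered (mimics normal) ": boring_week(),
}
for name, legend in legends.items():
    flagged, z = flag_outlier(population, legend)
    print(f"{name} flagged: {flagged}  z per feature: {z}")
```

A real adversary’s analytics would look at far richer features over years, not weeks; the point of the sketch is only that a cover’s exhaust has to survive the same kind of pattern-of-life scrutiny described earlier.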

6.3 Meets: Riding the Surveillance Layer Instead of Hiding From It

Meets do not disappear, but the logic of the meet changes:
• In dense UTS environments, the goal may be less “avoid surveillance entirely” and more “ensure surveillance sees something innocuous and pre-patterned.”
• Third-country or “safe city” meetings gain renewed importance, as Cunliffe notes, but that pushes some of the most sensitive HUMINT work away from the very environments it aims to penetrate.
• For some operations, the real “meet” is not between humans but between data traces: a timed pattern in app usage, a sequence of micro-transactions, or a deliberately crafted anomaly injected into the telemetry that only a specific partner knows how to read.

The podcast closed on a key provocation: if your agents’ communications channels are saturated with surveillance and your highest-value targets have already adapted by de-teching their lives, does the center of effort move from penetrating people to penetrating the surveillance layer itself?

That doesn’t eliminate classic HUMINT. It repositions it as one component of a broader contest over who controls, understands, and exploits the sensing infrastructure that now blankets everyday life.



7. Policy and Governance: Guardrails or Drift?

On the policy side, three threads from the literature converge:
1. Cyber intelligence operates in a legal grey zone. Broeders argues that the “intelligence is not discussed” norm no longer fits a world where intelligence agencies, not militaries, conduct many of the cyber operations that threaten international stability. He calls for explicit “guardrails” on cyber intelligence activities.
2. Data brokers are structurally incompatible with robust privacy and, potentially, democratic oversight. Reviglio makes the case that their opacity, transnational nature, and economic incentives all drive toward ever-greater datafication and surveillance, with national-security implications built in.
3. Domestic law is being outpaced by commercial practice. CDT’s analysis of ECPA shows how easily agencies can end-run constitutional protections by treating data purchases as “commercial transactions” rather than searches.

For democratic states trying to run effective HUMINT in a UTS world, the strategic question is not just “Can we do this?” but “What does it do to our own system if we normalize these tools?”

If HUMINT is rebuilt on top of opaque data markets, AI-driven targeting, and legal silence about cyber intelligence, the risk isn’t just operational blowback. It’s that the same infrastructure built to penetrate hard-target adversaries becomes domestically tempting, for everything from ordinary policing to political monitoring.

That is where law, policy, and professional ethics need to catch up to tradecraft.