Signals and Silence: When Cyberattacks are Meant to Be Noticed
By The Security Nexus.
Classic coercion theory, as articulated by Thomas Schelling, relies on the clear communication of threats to change adversary behavior (Schelling 1966). But cyber operations frequently fall short of that standard. Many lack explicit demands, are launched without attribution, and fail to create clear thresholds for compliance. In response, scholars have identified a distinct behavioral pattern they call “cyber swaggering”—actions taken to signal capability or resolve rather than to directly compel (Burton 2019; Hodgson 2018).
Cyber swaggering is not about direct outcomes. Rather, it serves to create long-term influence, shift bargaining dynamics, and shape adversary perceptions. In practice, this might involve demonstrative attacks on infrastructure, highly publicized data leaks, or the calculated insertion of malware that is meant to be discovered. These acts are less about immediate coercion than about reinforcing power narratives in an ongoing rivalry.
Strategic Attribution: The Utility of Being Blamed
In cyberspace, where deniability is often the default, claiming responsibility might seem counterproductive. Yet we have numerous cases where attribution—whether overt or tacit—serves strategic ends.
Consider North Korea’s 2014 cyberattack on Sony Pictures. Though the DPRK initially denied involvement, it had already issued public threats against the film “The Interview.” The attack and subsequent data leaks effectively signaled resolve and capability while reinforcing fear of retaliation among media producers (Chen and Taw 2023, 68–70). Similarly, Russian cyber operations against Ukraine and Western targets often include signature tools or domain infrastructures that suggest intentional signaling (Nye 2017, 48).
This trend reveals a paradox: cyber deterrence, though undermined by attribution problems in theory, can be enhanced by deliberate visibility in practice. States sometimes want to be blamed—just not too clearly, and not too soon.
The Logic of Visible Coercion
For a cyberattack to achieve its coercive aims, it must fulfill three conditions: (1) the attack must be attributable, (2) the threat must be credible, and (3) the desired behavioral change must be clear (Hodgson 2018, 75). In the digital domain, however, these conditions are rarely satisfied simultaneously.
Instead, we see hybrid forms of influence emerge. Operations signal capability without formal demands. They create anxiety in target states, especially when aimed at critical infrastructure (e.g., power grids, transport hubs, or financial systems), and induce decision-makers to reconsider their risk calculations. In the case of Ukraine, multiple cyberattacks against its electrical grid between 2015 and 2022 demonstrated not only Russian offensive capacity but also a willingness to operate below the threshold of war while still inflicting strategic costs (Chen and Taw 2023, 71; Nye 2017, 49).
This kind of signaling does not require verbal threats. Malware serves as the medium; its presence is the message.
Dissuasion Through Digital Theater
Cyber deterrence is often discussed in the context of punishment—imposing costs to dissuade adversaries. But in cyberspace, dissuasion may be more performative than punitive. What scholars call “deterrence by denial” or “normative dissuasion” plays a larger role (Nye 2017, 52).
Joseph Nye argues that cyber deterrence encompasses four elements: punishment, denial, entanglement, and norms. When states openly signal their cyber capabilities, they not only raise the cost of targeting themselves but also shape the expectations and fears of others. The act of “being seen” thus plays into both reputational and deterrent logics.
Misreading the Message
Yet, signaling carries risks. Cyber operations are easily misinterpreted. A display intended to warn can instead provoke. Ambiguity in cyberspace cuts both ways: it allows attackers to act without retribution, but also increases the chance that a message is misunderstood or attributed to the wrong actor (Hodgson 2018, 76; Nye 2017, 51). In the crowded and covert world of cyber conflict, misperception remains one of the greatest escalation risks.
Why Now?
Several recent attacks illustrate this turn toward visibility. The 2020 SolarWinds breach, the 2021 Colonial Pipeline ransomware attack, and the 2023 MOVEit exploit were not stealthy in their aftermath—they became headline news. Though often attributed to criminal groups or proxies, their sophistication and targets suggest strategic intent. In each case, attribution (and the reaction it provoked) became part of the operation’s impact.
Cyberwarfare is no longer the domain of silent, hidden actors. Increasingly, it is about spectacle—designed not only to disrupt, but to intimidate, discredit, and deter. The question we must now ask is not whether a cyberattack was visible, but whether it was meant to be.
—
Bibliography
Burton, Joe. 2019. “Cyber-Attacks and Freedom of Expression: Coercion, Intimidation and Virtual Occupation.” Baltic Journal of European Studies 9 (3): 117–132. https://doi.org/10.1515/bjes-2019-0025.
Chen, Sarah, and Jennifer Taw. 2023. “Conventional Retaliation and Cyber Attacks.” Cyber Defense Review 8 (1): 67–92.
Hodgson, Quentin E. 2018. “Understanding and Countering Cyber Coercion.” In 10th International Conference on Cyber Conflict, edited by T. Minárik, R. Jakschis, and L. Lindström. Tallinn: NATO CCD COE Publications.
Nye, Joseph S., Jr. 2017. “Deterrence and Dissuasion in Cyberspace.” International Security 41 (3): 44–71. https://doi.org/10.1162/isec_a_00266.
Schelling, Thomas C. 1966. Arms and Influence. New Haven: Yale University Press.