Anonymity is hard to maintain over time against an aggressive adversary.
There are many ways you can accidentally leak your identity or tie your activity to some other identity, and a single slip in hiding any one of these identifiers can ruin an entire mission and potentially invite retribution. Over the course of months, this adds up to potentially thousands of opportunities for error. Unfortunately, humans are not very good at achieving that level of consistency with their operational security (OPSEC).
The hacker who compromised the DNC during the 2016 elections goes by the name Guccifer 2.0. Almost immediately, people suspected that the Russian government was behind the attack, but convincing attribution has proved difficult. New information strongly links Guccifer 2.0 to the Russian GRU, and all because of simple human error.
Guccifer 2.0 took great care to cover their tracks by using a VPN service called Elite VPN to mask their real IP address. Whenever analysts tried to find Guccifer 2.0, they hit a dead end at Elite VPN’s servers in France. But Guccifer 2.0 slipped up. On one occasion, they apparently forgot to turn on the VPN before starting their activities, revealing a source IP address pointing back to GRU headquarters in Moscow. Out of all the times this identity was used, the operators only made one OPSEC mistake; but that’s all it takes.
The same thing happened with Hector Xavier Monsegur (Sabu) of LulzSec and Ross William Ulbricht (The Dread Pirate Roberts, or DPR) of the Silk Road. Both were caught through OPSEC errors that allowed investigators to connect their aliases with their true identities. Like Guccifer 2.0, Sabu failed to use a VPN just once, exposing his real IP address in a chat room. Early on, DPR used accounts tied to both his real identity and his alias. One problem with maintaining perfect OPSEC is that you need to start doing it long before you know whether it will matter for a given activity. These kinds of security mistakes happen all the time.
Mistakes are inevitable.
The only way to prevent this kind of error is to get humans out of the process. If users have to remember to take a series of steps before launching their browser, then mistakes are inevitable. However, if their platform is designed to launch a browser only after all the required settings and protections are in place, then the user can't accidentally browse the web exposed. This leverages the idea of fail-safe design: systems should simply not operate unless the safety features are functional and engaged. Just as you can't start most cars unless they are in park or the brake is engaged, you should not be able to access the internet at all unless your IP hiding, browser fingerprint masking, and system isolation protections are in place, to name a few.
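To make the fail-safe idea concrete, here is a minimal sketch of a gated launcher in Python. Everything in it is a hypothetical illustration, not a real tool: the check functions, the interface names, and the idea of comparing the egress IP against a list of addresses that must stay hidden are all assumptions for the sake of the example. The one property that matters is that the browser launch is unreachable unless every protection check passes.

```python
# Hypothetical fail-safe launcher sketch: the browser can only start after
# every protection check passes. All names here are illustrative.

def vpn_tunnel_present(interfaces):
    """Fail closed: require an explicit tunnel interface (e.g. tun0, wg0)."""
    return any(name.startswith(("tun", "wg")) for name in interfaces)

def egress_ip_is_masked(egress_ip, real_ips):
    """Fail closed: refuse if the visible egress IP is one we must hide."""
    return egress_ip is not None and egress_ip not in real_ips

def launch_browser(interfaces, egress_ip, real_ips):
    """Run every check first; any single failure blocks the launch."""
    checks = [
        vpn_tunnel_present(interfaces),
        egress_ip_is_masked(egress_ip, real_ips),
    ]
    if not all(checks):
        return "refused: protections not engaged"
    # Only reachable when all protections are verified.
    return "browser launched"

# Forgetting the VPN (no tunnel interface) blocks the launch entirely:
print(launch_browser(["eth0"], "203.0.113.7", {"203.0.113.7"}))
# With a tunnel up and a masked egress address, the launch proceeds:
print(launch_browser(["eth0", "tun0"], "198.51.100.4", {"203.0.113.7"}))
```

The design point is that the user never makes the safety decision: the unsafe path does not exist, so the one-time lapse that undid Guccifer 2.0 and Sabu simply cannot occur.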
Many operators, particularly the more technically sophisticated, think that such tools are for amateurs and "script kiddies." They think they know what they are doing and are falsely confident that they can stay safe. People with real experience are more humble: they know that eventual human error is unavoidable. A proper set of fail-safe tools is the only way to ensure effective misattribution over an extended period against sophisticated and aggressive foes.