Protecting Against Malware and a New Breed of Hacker




Ntrepid Podcast 2: Protecting Against Malware and a New Breed of Hacker

The nature and purpose of malware have changed a lot in the last few years, but on the whole, our countermeasures have not kept pace. Historically, malware was developed by individuals or small groups of hackers looking to make a name for themselves. It could be about reputation, revenge, curiosity, or even counting coup against a huge organization. These days, the real problems come from criminal hackers and state or pseudo-state-sponsored hackers. All of these groups share a few characteristics. They are interested in specific results, not reputation. They try to avoid detection rather than advertising their actions, and they have the resources and skills to discover and exploit new vulnerabilities.

Once a computer is compromised, the payloads that are delivered have also become much more sophisticated. They can monitor activity and capture passwords, credit card numbers, and other credentials. They can even capture tokens from multi-factor authentication to allow session hijacking.

Hacking activities come in two main flavors: mass attacks and targeted attacks. Mass attacks are designed to capture as many computers as possible. They spread indiscriminately and try to infect any computer that appears vulnerable. While they are often very sophisticated, the sheer scale of the activity makes detection very likely, which in turn allows for the development of anti-malware rules and fingerprints, although these do take time to create and disseminate.

Targeted attacks are very different. The malware is typically targeted by hand. It does not spread automatically or does so only within very tightly constrained limits. Because only a small number of computers are compromised, detection is much more difficult, and even heuristic and pattern-based detection is going to have a difficult time with the very low level of activity required to infect machines and deploy these tools.

Attackers have built tools that allow them to test their malware against all known anti-malware tools. This basically ensures that any new malware created will not be detected by any of the commercial anti-malware tools.

Spear phishing and waterhole attacks have become the preferred techniques for these targeted kinds of attacks. In both cases, the victim is lured into executing the malware by couching it in a context that feels safe and meets the user’s expectations. The links or documents look real, seem to come from a trusted source, and generally make sense in context.

With spear phishing, the attack generally comes through email, while waterhole attacks are centered around websites frequented by the target population. One particularly effective waterhole attack is to plant malware on an internal server of the target company. Placing the payload in an update to the HR time-keeping system, for example, is very likely to catch almost everyone in the company.

Between the time lag to detect new mass malware and to create and deploy new rules, and the difficulty of discovering targeted malware at all, computers are very vulnerable to attack. Most security experts feel that any computer or network that is not completely isolated can be compromised by a resourceful and capable attacker.

Of course, even air gaps aren’t perfect. It’s really difficult to make a system or network completely isolated. The Iranian nuclear centrifuges attacked by Stuxnet were controlled by systems with no outside network connectivity, but targeted malware was able to get in through removable storage media.

We think that virtualization is a key technology to help protect you against these new breeds of attacks. It provides two critical capabilities: system isolation and rollback. System isolation is the separation of your high-risk, high-probability-of-compromise systems from your core network and valuable data. Conducting your high-risk activities in a virtualized environment with no access to internal networks or servers helps prevent the loss of data and makes it extremely difficult for an attacker to use an initial breach of an isolated system as a beachhead from which to attack the rest of your network.
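
As a concrete illustration of that separation, here is a minimal sketch assuming a libvirt/KVM host with the virsh command-line tool available. It defines an isolated guest network with no route onto the internal LAN; the network name and address range are placeholders rather than anything from an Ntrepid product, and any outbound access the risky guests need would come from a separate, dedicated egress path rather than the corporate network.

import subprocess
import tempfile

# Isolated libvirt network: with no <forward> element, guests attached to this
# network can reach each other and the virtualization host, but nothing on the
# internal LAN. The name and addresses are illustrative placeholders.
ISOLATED_NET_XML = """
<network>
  <name>risky-isolated</name>
  <ip address="192.168.150.1" netmask="255.255.255.0">
    <dhcp>
      <range start="192.168.150.10" end="192.168.150.100"/>
    </dhcp>
  </ip>
</network>
"""

def create_isolated_network() -> None:
    """Register and start the network that the high-risk guests attach to."""
    with tempfile.NamedTemporaryFile("w", suffix=".xml", delete=False) as f:
        f.write(ISOLATED_NET_XML)
        xml_path = f.name
    subprocess.run(["virsh", "net-define", xml_path], check=True)
    subprocess.run(["virsh", "net-start", "risky-isolated"], check=True)

if __name__ == "__main__":
    create_isolated_network()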

The best implementations of system isolation place the servers running the virtual machines completely outside the sensitive network environment. This is superior to virtualization on the desktop because even if the virtual container is breached, it still does not give access to sensitive data. Either one may be an effective solution depending on your resources and the threat level under which you’re operating.

When you use virtualized servers in an isolated environment, those servers are accessed using remote desktop protocols, generally over a secure VPN. The only connection, then, between the desktop and the virtual environment is this remote desktop session, which may only be initiated over the VPN from the user’s end. We have never seen attacks back across such a path. If a virtual machine is compromised, that malware only has access to that single virtual machine and cannot access any of the other servers, networks, data, or storage.
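
The one-way nature of that connection can be shown with a toy sketch; this illustrates the policy rather than Ntrepid’s implementation, and the VPN subnet and port below are made-up examples. A small broker in front of the virtual environment accepts remote desktop sessions only when they originate from the user-side VPN subnet, and nothing listens on the user network for the virtual environment to call back into.

import ipaddress
import socket

VPN_SUBNET = ipaddress.ip_network("10.8.0.0/24")  # hypothetical user-side VPN range
LISTEN_PORT = 3389                                # remote desktop port, for illustration

def session_allowed(source_ip: str) -> bool:
    """Allow a session only if it was initiated from the user-side VPN subnet."""
    return ipaddress.ip_address(source_ip) in VPN_SUBNET

def run_broker() -> None:
    """Accept sessions from the VPN side; refuse anything else."""
    with socket.create_server(("0.0.0.0", LISTEN_PORT)) as server:
        while True:
            conn, (source_ip, _port) = server.accept()
            if not session_allowed(source_ip):
                conn.close()  # connections may only be initiated from the user's end
                continue
            # ...hand the connection off to the real remote desktop service here...
            conn.close()

if __name__ == "__main__":
    # Exercise the policy check without opening a socket.
    assert session_allowed("10.8.0.42")
    assert not session_allowed("192.168.1.10")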

This is also where the rollback capability comes in, because it may be impossible to detect a compromise of your virtual machines. You must assume that they have been compromised, even after a fairly limited amount of use. With virtualization, you can revert to a known good and clean version of your computer and file system daily, or even after each session. This gets around the problem of detecting and surgically removing malware by basically burning your virtual computer to the ground and effectively dropping in a new one. The one twist is that you will be destroying any data you might have created or stored on that machine. If you want to keep that data, it can be done, but only with great care. Any residual information kept around could be a vector for reinfecting your virtual computer. Which information you choose to persist between rollbacks of your virtual environment, and how you store that information, is critical.
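
As a minimal sketch of what a scheduled rollback can look like, assuming a libvirt/KVM guest managed with the virsh command-line tool (the domain and snapshot names are placeholders): take a snapshot while the guest is known to be clean, then revert to it after every session or on a nightly schedule. Anything you want to persist has to be exported to separately controlled storage before the revert runs, because the revert destroys it.

import subprocess

VM_NAME = "risky-browsing-vm"      # hypothetical guest name
CLEAN_SNAPSHOT = "clean-baseline"  # snapshot taken while the guest was known good

def create_baseline() -> None:
    """Record the known-good state of the guest (run once, after a fresh build)."""
    subprocess.run(["virsh", "snapshot-create-as", VM_NAME, CLEAN_SNAPSHOT], check=True)

def revert_to_clean() -> None:
    """Discard everything that happened in the guest since the clean snapshot."""
    subprocess.run(["virsh", "snapshot-revert", VM_NAME, CLEAN_SNAPSHOT], check=True)

if __name__ == "__main__":
    # Run once per session (or nightly from cron/systemd) after the user disconnects.
    revert_to_clean()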

Similarly, how you export data and documents from the virtual environment back to your work computer and network is critical. That, too, can be a path for infection, in this case of your core IT infrastructure. Such data needs to be heavily tested and quarantined. Best practice is to never open such documents directly on your internal computers, but always to view them in virtualized environments.
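
A quarantine step might look something like the following sketch. It assumes ClamAV’s clamscan tool is installed, the quarantine directory is an illustrative placeholder, and a real pipeline would layer several checks rather than rely on a single antivirus engine; even files that pass should still be opened only inside a virtualized viewer.

import hashlib
import subprocess
from pathlib import Path

QUARANTINE_DIR = Path("/srv/quarantine/inbox")  # hypothetical drop location for exported files

def sha256(path: Path) -> str:
    """Fingerprint the file so its later handling can be audited."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def scan_is_clean(path: Path) -> bool:
    """clamscan exits 0 for clean files and 1 when it flags something."""
    result = subprocess.run(["clamscan", "--no-summary", str(path)], capture_output=True)
    return result.returncode == 0

def process_quarantine() -> None:
    for item in sorted(QUARANTINE_DIR.iterdir()):
        if not item.is_file():
            continue
        print(f"{item.name} sha256={sha256(item)}")
        if scan_is_clean(item):
            print("  scan clean; release, but open only in a virtualized viewer")
        else:
            print("  flagged by clamscan; keep in quarantine")

if __name__ == "__main__":
    process_quarantine()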

Ntrepid offers a line of products specifically designed for this purpose. They automate the whole process of managing virtual machines, keeping them properly isolated from your network, persisting key information, and safely moving documents between the virtual environment and your desktop.

One solution, Nfusion, is a full virtualized desktop which runs the virtual machines in an isolated and dedicated server cluster, either hosted in Ntrepid’s secure cloud infrastructure, or in your data centers outside your firewall.

Passages is a secure web browsing platform. It runs in a virtual machine on your local desktop and uses VPNs to keep all traffic segregated from your internal traffic until it’s well outside of your security perimeter.

Both of these are designed to be used by non-technical users. Because human error is the single most common cause of security breaches, we have built the systems to be extremely user-friendly and to protect against accidental compromise through carelessness or oversight. Whatever the reason for your excursions beyond the firewall, let us help you ensure that you’re not bringing back anything dangerous or contagious.

 

Transcript

Welcome to the Ntrepid Podcast: Episode 2. My name is Lance Cottrell, Chief Scientist for Ntrepid Corporation. In this episode, I will be talking about the threats from the new breed of hackers and malware and how virtualization can be used to protect yourself.

The nature and purpose of malware have changed a lot in the last few years, but on the whole, our countermeasures have not kept pace. Historically, malware was developed by individuals or small groups of hackers looking to make a name for themselves. It could be about reputation, revenge, curiosity, or even counting coup against a huge organization. These days, the real problems come from criminal hackers and state or pseudo-state-sponsored hackers. All of these groups share a few characteristics. They are interested in specific results, not reputation. They try to avoid detection rather than advertising their actions, and they have the resources and skills to discover and exploit new vulnerabilities.

Once a computer is compromised, the payloads that are delivered have also become much more sophisticated. They can monitor activity and capture passwords, credit card numbers, and other credentials. They can even capture tokens from multi-factor authentication to allow session hijacking.

Hacking activities come in two main flavors: mass attacks and targeted attacks. Mass attacks are designed to capture as many computers as possible. They spread indiscriminately and try to infect any computer that appears vulnerable. While they are often very sophisticated, the sheer scale of the activity makes detection very likely, which in turn allows for the development of anti-malware rules and fingerprints, although these do take time to create and disseminate.

Targeted attacks are very different. The malware is typically targeted by hand. It does not spread automatically or does so only within very tightly constrained limits. Because only a small number of computers are compromised, detection is much more difficult, and even heuristic and pattern-based detection is going to have a difficult time with the very low level of activity required to infect machines and deploy these tools.

Attackers have built tools that allow them to test their malware against all known anti-malware tools. This basically ensures that any new malware created will not be detected by any of the commercial anti-malware tools.

Spear phishing and waterhole attacks have become the preferred techniques for these targeted kinds of attacks. In both cases, the victim is lured into executing the malware by couching it in a context that feels safe and meets the user’s expectations. The links or documents look real, seem to come from a trusted source, and generally make sense in context.

With spear phishing, the attack generally comes through email, while waterhole attacks are centered around websites frequented by the target population. One particularly effective waterhole attack is to plant malware on an internal server of the target company. Placing the payload in an update to the HR time-keeping system, for example, is very likely to catch almost everyone in the company.

Between the time lag to detect new mass malware and to create and deploy new rules, and the difficulty of discovering targeted malware at all, computers are very vulnerable to attack. Most security experts feel that any computer or network that is not completely isolated can be compromised by a resourceful and capable attacker.

Of course, even air gaps aren’t perfect. It’s really difficult to make a system or network completely isolated. The Iranian nuclear centrifuges attacked by Stuxnet were controlled by systems with no outside network connectivity, but targeted malware was able to get in through removable storage media.

We think that virtualization is a key technology to help protect you against these new breeds of attacks. It provides two critical capabilities: system isolation and rollback. System isolation is the separation of your high-risk, high-probability-of-compromise systems from your core network and valuable data. Conducting your high-risk activities in a virtualized environment with no access to internal networks or servers helps prevent the loss of data and makes it extremely difficult for an attacker to use an initial breach of an isolated system as a beachhead from which to attack the rest of your network.

The best implementations of system isolation place the servers running the virtual machines completely outside the sensitive network environment. This is superior to virtualization on the desktop because even if the virtual container is breached, it still does not give access to sensitive data. Either one may be an effective solution depending on your resources and the threat level under which you’re operating.

When you use virtualized servers in an isolated environment, those servers are accessed using remote desktop protocols, generally over a secure VPN. The only connection, then, between the desktop and the virtual environment is this remote desktop session, which may only be initiated over the VPN from the user’s end. We have never seen attacks back across such a path. If a virtual machine is compromised, that malware only has access to that single virtual machine and cannot access any of the other servers, networks, data, or storage.

This is also where the rollback capability comes in, because it may be impossible to detect a compromise of your virtual machines. You must assume that they have been compromised, even after a fairly limited amount of use. With virtualization, you can revert to a known good and clean version of your computer and file system daily, or even after each session. This gets around the problem of detecting and surgically removing malware by basically burning your virtual computer to the ground and effectively dropping in a new one. The one twist is that you will be destroying any data you might have created or stored on that machine. If you want to keep that data, it can be done, but only with great care. Any residual information kept around could be a vector for reinfecting your virtual computer. Which information you choose to persist between rollbacks of your virtual environment, and how you store that information, is critical.

Similarly, how you export data and documents from the virtual environment back to your work computer and network is critical. That, too, can be a path for infection, in this case of your core IT infrastructure. Such data needs to be heavily tested and quarantined. Best practice is to never open such documents directly on your internal computers, but always to view them in virtualized environments.

Ntrepid offers a line of products called Nfusion, specifically designed for this purpose. They automate the whole process of managing virtual machines, keeping them properly isolated from your network, persisting key information, and safely moving documents between the virtual Nfusion environment and your desktop.

The full version of Nfusion runs the virtual machines in an isolated and dedicated server cluster, either hosted in Ntrepid’s secure cloud infrastructure, or in your data centers outside your firewall.

Nfusion Web is a lightweight, rapidly deployable solution for web surfing only. It runs in a virtual machine on your local desktop and uses VPNs to keep all traffic segregated from your internal traffic until it’s well outside of your security perimeter.

Both of these are designed to be used by non-technical users. Because human error is the single most common cause of security breaches, we have built the systems to be extremely user-friendly and to protect against accidental compromise through carelessness or oversight. Whatever the reason for your excursions beyond the firewall, let us help you ensure that you’re not bringing back anything dangerous or contagious.

For more information about this, and any other Ntrepid products, please visit us on the web at ntrepidcorp.com. And follow us on Facebook and on Twitter @ntrepidcorp.

You can also reach me directly with any questions or suggestions for future topics at lance.cottrell@ntrepidcorp.com.

Thanks for listening.