December 28, 2015

Predictions for 2016

1. Growth in the democratization of malware and the rise of the defender

In 2014 and 2015 there was significant growth in the number of high-profile breaches globally. Along with these, we have seen a growth in the malware “ecosystem” that is available “as-a-service.” There are those sourcing zero-day vulnerabilities, and those who serve as channels and distributors for them. There are those providing the infrastructure to package the vulnerabilities into a variety of payloads, including testing and validating them against threat databases to ensure they are not detectable, as well as those providing front- and back-end infrastructure such as delivery, billing, and support. There are command and control networks available to tap into. What all of this means is that, in order to carry out a cyber-attack, an individual or team no longer needs to be especially skilled, nor do they have to engineer the entire attack chain. All they need to do is piece together components they can now access quite easily. Next year will bring a further broadening and cementing of this ecosystem and, consequently, a broader set of actors who can take advantage of it.

Fortunately, with the democratization of malware also comes a certain downward normalization in sophistication. So while we will continue to see some very sophisticated attacks by those who have the means, resources and talent, many cyber-attacks will also be easier to detect, contain and prevent, because the relatively larger populace of attackers will prefer to simply plug into existing systems with little additional obscurity or obfuscation.

2. The rise of Predictive Information Security

Along with the democratization of malware will come an increase in the number of threat actors, cyber-breach attempts and incidents. Many of these attempts will exhibit polymorphic variations, i.e. variations that on the surface make each instance of the malware look and behave differently, while its essence remains largely the same. Consequently, many of these variants may make it past traditional cyber defense systems such as firewalls or anti-virus solutions. To better equip organizations against these polymorphic threats, big data solutions will increasingly be deployed to focus on what can be expected after organizations have been breached. These solutions will evaluate massive data sets from a variety of sources across the infrastructure (users, devices and applications, as well as physical, virtual and cloud instances) and will triangulate them against massive data sets of known bad behavior to predict the next steps in the attacker lifecycle. These types of predictive solutions will provide key indicators of threats within the organization, enabling faster detection of malware footprint and activity and, in turn, faster containment.
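To make the idea concrete, here is a minimal sketch, in Python, of the kind of triangulation such a solution performs: grouping telemetry by host and matching observed behaviors against a catalog of known-bad indicators to flag the likely stage of the attacker lifecycle. The indicator names, event fields and sample data are illustrative assumptions, not a real threat feed or product API.

```python
# Hypothetical sketch: triangulate host telemetry against known-bad
# indicators to flag likely stages of the attacker lifecycle.
# Indicator values and event fields are illustrative, not a real feed.

from collections import defaultdict

# Illustrative mapping of observed behaviors to attacker-lifecycle stages.
KNOWN_BAD = {
    "dns_to_known_c2_domain": "command-and-control",
    "smb_scan_of_internal_subnet": "lateral-movement",
    "large_upload_to_unknown_host": "exfiltration",
}

def score_hosts(events):
    """Group events by host and tag each host with the lifecycle
    stages its behavior matches, so analysts can prioritize triage."""
    findings = defaultdict(set)
    for event in events:
        stage = KNOWN_BAD.get(event["behavior"])
        if stage:
            findings[event["host"]].add(stage)
    # Hosts matching multiple stages are the most urgent predictions.
    return sorted(findings.items(), key=lambda kv: len(kv[1]), reverse=True)

if __name__ == "__main__":
    sample = [
        {"host": "10.0.0.12", "behavior": "dns_to_known_c2_domain"},
        {"host": "10.0.0.12", "behavior": "smb_scan_of_internal_subnet"},
        {"host": "10.0.0.40", "behavior": "normal_web_browsing"},
    ]
    for host, stages in score_hosts(sample):
        print(host, sorted(stages))
```

Real deployments would operate at far larger scale and with statistical or machine-learned scoring, but the principle is the same: correlate many weak signals against known bad behavior to predict the attacker’s next step.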

3. SDN will become a victim of its own hype

2016 will be a “coming to terms” year for software defined networking (SDN). With many trials and proofs of concept (POCs) now wrapping up, many SDN technologies will prove not to be ready, years into the SDN hype cycle, and will simply start losing luster, perhaps even picking up downward momentum. A much smaller subset will emerge as viable and valid technologies. Interestingly, some commercial technologies that took advantage of the SDN hype to get seeded into customer accounts may actually bubble to the top even though they may not be vertically disaggregated, or truly open- or standards-based. And that’s OK. Ultimately it’s about the problem being solved and not how purist the approach is. It does mean, though, that some of the original SDN technologies may look relatively immature in comparison, and may fall by the wayside in favor of the more commercial variants: unfortunately, a victim of their own hype.

4. NFV will move from POCs to deployments and hit a speed bump

2016 should see NFV move from POC to deployment in many cases, and that’s when we’ll start to see the challenges. The first challenges will pertain to speed and performance. Moving from dedicated hardware, ASICs and FPGAs to commercial, off-the-shelf x86-based systems will entail taking a performance hit. The answer to that in NFV has been to “scale out” the solution. After all, cheap x86 servers are easy to add and roll out. But scaling out will bring its own set of challenges, particularly when it entails maintaining state and dealing with distribution and load balancing of traffic across virtualized but replicated network functions, as sketched below. Consequently, NFV will have initial success in areas without very high bandwidth, performance or scaling needs, which is a good start. But the need for more bandwidth is like the entropy of an open system: it always increases. As such, software systems that can carefully load balance, manage state and help extract every ounce of performance in NFV environments will be on the rise.
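One common way to keep per-flow state intact while scaling out is to give each flow a stable affinity to a single instance of the virtualized function. The sketch below is a hypothetical Python example, not any particular vendor’s implementation: it uses consistent hashing so that adding or removing an instance only remaps a small fraction of flows. The instance names and flow-key format are assumptions for illustration.

```python
# Minimal sketch (not a production load balancer) of preserving flow-to-instance
# affinity when a virtualized network function is scaled out.

import bisect
import hashlib

class ConsistentHashRing:
    """Map flows to VNF instances so that adding an instance only
    remaps a small fraction of flows, preserving per-flow state."""

    def __init__(self, instances, vnodes=100):
        self.ring = []  # sorted list of (hash, instance) virtual nodes
        for inst in instances:
            for v in range(vnodes):
                self.ring.append((self._hash(f"{inst}#{v}"), inst))
        self.ring.sort()

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def instance_for(self, flow_key):
        # Walk clockwise on the ring to the first virtual node >= the flow's hash.
        idx = bisect.bisect(self.ring, (self._hash(flow_key),)) % len(self.ring)
        return self.ring[idx][1]

if __name__ == "__main__":
    ring = ConsistentHashRing(["vnf-1", "vnf-2", "vnf-3"])
    # Hashing the 5-tuple keeps every packet of a flow on the same instance.
    flow = "10.0.0.5:44321->198.51.100.7:443/tcp"
    print(ring.instance_for(flow))
```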

But that begs another question: who will provide those software stacks? If they come from commercial vendors, will that lead to another type of vendor lock-in? Ultimately, if the organizations deploying NFV-based solutions don’t use this opportunity to build in-house teams that can write, manage, debug and maintain that software, then the first hurdle they are going to have to overcome will ultimately be a speed bump. Not a wall, but something they will have to slow down for and watch out for.

5. Rationality will replace the herd in the move to the public cloud

The last several years have seen many organizations follow a herd-type movement towards adopting the public cloud. CIOs are being told to adopt and commit dollars towards the public cloud without much thought or regard to whether that means SaaS, PaaS or IaaS. When it comes to IaaS specifically, the allure of elasticity, burst capacity and ease of provisioning has led many organizations to commit dollars and frame policies with little analysis of the cost over extended periods of time or the associated security considerations. Heading into 2016, however, more organizations are starting to cost out their public cloud strategy before actually committing or making the move. And that’s a good thing.

Having burst capacity in the public cloud is one thing. But keeping critical applications always-on for years, with massive volumes of data in the public cloud, can actually be very expensive. And moving back from the public cloud to an on-premises solution can be more expensive still, because moving data out of the public cloud is not cheap. What we will see in 2016 is a smarter, more informed CIO who can weigh the models and find the right balance between what the public cloud has to offer and a true hybrid model, with key applications and data residing on-premises and true burst capacity in the public cloud.
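A rough model of that trade-off can be as simple as adding up compute, storage and data-egress charges over the planning horizon and comparing them with on-premises capital and operating costs. The sketch below does exactly that; every rate and figure in it is a placeholder assumption to be replaced with actual provider pricing and internal cost data.

```python
# Back-of-the-envelope sketch for costing an always-on workload in the public
# cloud versus on-premises over a multi-year horizon. Every rate below is a
# placeholder assumption; substitute your provider's actual pricing.

def cloud_cost(months, compute_per_month, storage_tb, storage_per_tb_month,
               egress_tb_per_month, egress_per_tb):
    compute = compute_per_month * months
    storage = storage_tb * storage_per_tb_month * months
    egress = egress_tb_per_month * egress_per_tb * months  # data leaving the cloud
    return compute + storage + egress

def on_prem_cost(months, hardware_capex, opex_per_month):
    return hardware_capex + opex_per_month * months

if __name__ == "__main__":
    months = 36  # three-year planning horizon (assumption)
    cloud = cloud_cost(months, compute_per_month=1200, storage_tb=50,
                       storage_per_tb_month=25, egress_tb_per_month=10,
                       egress_per_tb=90)
    onprem = on_prem_cost(months, hardware_capex=60000, opex_per_month=900)
    print(f"Public cloud over {months} months: ${cloud:,.0f}")
    print(f"On-premises over {months} months: ${onprem:,.0f}")
```

Even a back-of-the-envelope model like this makes the egress line item visible, which is often the cost that surprises organizations trying to move back out of the public cloud.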
