Sensage Blogs


Posts Tagged ‘InfoSec’

Offensive Defense in the Enterprise

Posted: May 2, 2013 at 10:11 am | by Joe Gottlieb

Recently, the idea of Offensive Defense has become a hot topic in the security industry. In theory, the notion of going after an attacker that targeted your organization seems like a logical plan. However, there are a host of legal and ethical concerns with this approach.

For one, current legislation is vague when it comes to an organization’s ability to go after cyber attackers off-premises. Second, what if you unintentionally go after an innocent bystander and cause irrevocable harm to their infrastructure? Are you then legally liable for any damage they incur?

Because of this ambiguity, we urge our customers to focus on what they own in their own enterprise, where hidden intrusions and malicious codes can have long-term security ramifications.

According to Verizon’s 2013 Data Breach Investigations Report, 66 percent of attacks take two months or longer to discover. That is a considerable jump from 2010, when 41 percent of attacks went undetected for that long. This further supports the need for SIEM analytics, which sharply reduce the time that a threat can “hide” within enterprise infrastructure.

By leveraging advanced SIEM solutions, organizations can define the context of threats and enable an automated, active defense. With a deeper, richer understanding of the context of patterns and anomalies via the analytical capabilities which advanced SIEM solutions deliver, you strengthen the deployment of policy-driven controls that balance enterprise defense with corporate responsibility.
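As a concrete, hedged illustration of what analyzing the context of patterns and anomalies can look like, here is a minimal Python sketch that flags possible beaconing – a compromised host calling out to an unfamiliar destination at suspiciously regular intervals. It assumes connection events have already been exported from your SIEM or event store; the field names, allowlist and jitter threshold are illustrative assumptions, not a description of any particular product.

```python
# Minimal sketch: flag possible beaconing (regular-interval callbacks) in
# connection events exported from a SIEM. Field names, the allowlist and the
# jitter threshold are illustrative assumptions.
from datetime import datetime
from collections import defaultdict
from statistics import mean, pstdev

KNOWN_DESTINATIONS = {"10.0.0.5", "10.0.0.6"}   # assumed allowlist

def group_by_pair(events):
    """events: iterable of (iso_timestamp, src_ip, dst_ip) tuples."""
    by_pair = defaultdict(list)
    for ts, src, dst in events:
        by_pair[(src, dst)].append(datetime.fromisoformat(ts))
    return by_pair

def beacon_candidates(by_pair, min_events=6, max_jitter=0.1):
    """Return (src, dst, avg_interval) pairs whose call-outs look suspiciously regular."""
    findings = []
    for (src, dst), stamps in by_pair.items():
        if dst in KNOWN_DESTINATIONS or len(stamps) < min_events:
            continue
        stamps.sort()
        gaps = [(b - a).total_seconds() for a, b in zip(stamps, stamps[1:])]
        avg = mean(gaps)
        if avg > 0 and pstdev(gaps) / avg < max_jitter:   # low jitter => periodic
            findings.append((src, dst, avg))
    return findings

if __name__ == "__main__":
    sample = [
        ("2013-04-01T02:00:00", "192.168.1.20", "203.0.113.9"),
        ("2013-04-01T03:00:05", "192.168.1.20", "203.0.113.9"),
        ("2013-04-01T04:00:02", "192.168.1.20", "203.0.113.9"),
        ("2013-04-01T05:00:04", "192.168.1.20", "203.0.113.9"),
        ("2013-04-01T06:00:01", "192.168.1.20", "203.0.113.9"),
        ("2013-04-01T07:00:03", "192.168.1.20", "203.0.113.9"),
    ]
    for src, dst, period in beacon_candidates(group_by_pair(sample)):
        print(f"possible beacon: {src} -> {dst} roughly every {period:.0f}s")
```

Run as a scheduled job over the full retention window, this kind of check is exactly what shrinks the time a threat can hide.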

In a recent piece I did on this topic, I discuss the risks, the debates and the future of sharing information about cyber attacks.

While the idea of going off-premises to attack your attacker may sound appealing, the risks clearly outweigh the benefits. There’s a better way to keep your enterprise secure, while staying out of trouble—and an advanced SIEM will get you there.



Operation High Roller: Lessons Learned

Posted: July 24, 2012 at 8:54 am | by Joe Gottlieb

Cybercrime has definitely become more profitable than old-fashioned crime. Compare the $38 million stolen through physical bank burglaries in 2011* with the recent report of the Operation High Roller attack, where up to $78 million may have been stolen by hackers targeting high-balance accounts at 60 or more mid-sized banks.

We spent a little time researching this attack and, while initial reports were vague, we now understand this to be an interesting layering of multiple cyber methods in an attack that spanned many months.

Operation High Roller started with a basic phishing ploy, then leveraged several unique maneuvers once successfully inside systems. What was unique about this attack was not only the level of automation used to momentarily distract victims with a fake screen while hijacking their funds, but also the ability to compromise two-factor authentication for the first time – in this case a short-lived, one-time-use password. With these new developments in bypassing two-factor authentication through automated code, we can expect to see the technique reused by cybercriminals in future attacks.

There was some good news about this attack (and most others): the operation left behind many logs which ultimately gave security experts insights on the anatomy of the attack, its migration pattern and which customers were compromised.

This is also another wakeup call that there needs to be more consistency in three key areas:

· Education – much like other attacks before this, High Roller started with a simple phishing scam. Employees need ongoing education about the new, innovative ways that attacks can take shape.

· Automation – when employees know that anti-virus updates happen without any action on their part, for example, they will distrust a pop-up from a supposed anti-virus application.

· Monitoring – it is not enough to do “spot checks” or rely on real-time alerts; clearly, cyber-criminals have figured out how to fool those processes. Develop a consistent monitoring process that looks for suspicious events, like connections to an unknown outside server at unusual times of day. There are many basic metrics you can establish baselines for and then watch for unexplained variances – see the sketch after this list.
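A minimal Python sketch of that baseline-and-variance idea, assuming connection logs with a timestamp, host and destination (the field names and sample data are illustrative, not a specific log format):

```python
# Minimal sketch: learn each host's usual outbound-connection hours and
# destinations from historical logs, then flag new connections at unusual
# hours or to destinations never seen before. Fields are illustrative.
from collections import defaultdict
from datetime import datetime

def build_baseline(history):
    """history: iterable of (iso_timestamp, host, dst_ip) from past logs."""
    hours_seen = defaultdict(set)
    dests_seen = defaultdict(set)
    for ts, host, dst in history:
        hours_seen[host].add(datetime.fromisoformat(ts).hour)
        dests_seen[host].add(dst)
    return hours_seen, dests_seen

def review(new_events, hours_seen, dests_seen):
    """Yield (event, reason) for connections that deviate from the baseline."""
    for ts, host, dst in new_events:
        hour = datetime.fromisoformat(ts).hour
        if dst not in dests_seen[host]:
            yield (ts, host, dst), "destination never seen for this host"
        elif hour not in hours_seen[host]:
            yield (ts, host, dst), f"unusual hour ({hour}:00) for this host"

if __name__ == "__main__":
    past = [("2012-07-01T09:15:00", "web01", "198.51.100.7"),
            ("2012-07-02T10:40:00", "web01", "198.51.100.7")]
    new = [("2012-07-20T03:05:00", "web01", "198.51.100.7"),   # odd hour
           ("2012-07-20T10:00:00", "web01", "203.0.113.44")]   # new destination
    hours, dests = build_baseline(past)
    for event, reason in review(new, hours, dests):
        print(event, "->", reason)
```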

* Source: FBI



The Growing Trouble with Trust

Posted: November 4, 2011 at 5:25 am | by Joe Gottlieb

The recent news about Socialbot attacks confirms what most of us feared: that social networks are drunk with growth and are not maintaining security practices commensurate with the attractiveness of their assets. At the heart of this threat, sadly, lies the human desire to trust – which has become “semi-automated” in social networks.

Social fabrics boil relationships down to simple transactions. By simply “liking” something, or “friending” someone, you create automated associations that lead to interactions – both good and bad. Social communities work on the notion of “automated trust” and the assumption that, by taking those actions, you are prepared for all of the related consequences. What’s more, in your social environment your guard is down (for the most part). You trust that the information you are receiving is relevant and safe (sent by a friend, or because you “liked” something).

These trust mechanisms will make it very easy for the new frontier: mass-customized cyber-crime.

Social media vendors have enjoyed serving very indulgent communities and have only recently begun to worry about increased controls and security. It will be critical for these proprietors to take a continuous-improvement approach to their security practices. They will need to protect social networks with the same technology, people and processes used by enterprises and government agencies, particularly large-scale event collection, filtering and analysis.

It will also be their responsibility to educate users on the increasingly granular controls that are available and to enforce safe techniques in their communities. I suspect that the average participant does not fully understand or leverage the granular controls available in many online services. Users need to decide how much trust they want to extend and then recognize that, outside the sphere they create, everything else they share is available to the public.

We will be hearing more about Socialbots in 2012…



SEC Order to Report Potential Data Breaches

Posted: October 14, 2011 at 11:23 am | by Joe Gottlieb

Is this realistic guidance or just the illusion of concern?

The US Securities and Exchange Commission just announced an order for organizations to disclose even “potential” data breaches.

We have always known that a large percentage of breaches are NOT disclosed. A letter sent to the US Securities and Exchange Commission by several US Senators in May 2011 cites a study by Hiscox showing that 38% of Fortune 500 companies made a significant oversight by failing to report a breach of some sort.

We believe the number is much higher - and there are several reasons for this:

  • In many cases, an organization is not even aware of a breach until it is contacted by a third party
  • In some cases, the organization impacted can’t track back how the information was placed at risk, or the events that took place which caused the breach
  • Overall, the language governing breach notification requirements is vague - leaving each organization to interpret it for itself

That’s what makes the SEC order requiring companies to disclose even “potential” breaches even more interesting. It intends to hold organizations accountable not just when a breach occurs, but even in cases where one is suspected. This will be tricky: if a large percentage of organizations don’t disclose SUCCESSFUL breaches, how realistic is expanding the requirement to breaches that may have happened?

It requires the SEC to provide very specific guidance and a stronger definition of breach reporting requirements, several layers deeper than has been defined in the past. Here are just a few of the challenges they will need to address:

  • What is the definition of a breach? Is it when information is taken, or is it the nature of the information?
  • How does one recognize a potential loss? As we have seen in several recent cases, a server may have been accessed by unauthorized users, but no data was stolen. Does that still count?
  • What, if any, checks and balances can be put in place with external organizations to identify unreported risks? Will those external organizations perform solid diligence before raising a red flag? Will they report the potential breach to the impacted organization first or directly to a governing body?
  • What guidelines will be used to ensure companies are adequately assessing and mitigating these risks?
  • How will the increased reporting be resourced? Who takes the hit for false alarms?
  • What are the penalties if a “potential” risk is not reported, but discovered through other means?

Again, a very interesting move - and one that needs to be taken seriously by proactive security practitioners who don’t want to be whiplashed by false alarms and panic. Collecting, storing and analyzing massive amounts of data is one piece of the equation. The other is implementing a methodical approach to security event management. Here are just a few things to consider:

  • Implement a combination of real time and long-range security information and event management - neither of these alone will catch every potential risk
  • Collect data consistently - across the entire threat landscape. Focusing on just the network or endpoint will not be useful since breaches today impact multiple vectors
  • Correlate/analyze that data methodically. Stove-piped analysis will not catch clever insiders or external attackers. Set thresholds, then look for variances and outliers
  • Keep doing everything you are already doing in the way of security monitoring - just record it all so you can prove a non-event later (a minimal sketch of the thresholds-and-audit-trail idea follows this list)
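To make that last point concrete, here is a minimal Python sketch under stated assumptions: daily counts for a few metrics gathered consistently across sources, a simple three-sigma outlier test against each metric’s own history, and a record of every check so that a quiet day can later be shown to be a non-event. The metric names, sample numbers and threshold are illustrative.

```python
# Minimal sketch: flag metrics whose daily count is a statistical outlier
# against that metric's own history, and log every check (outlier or not)
# so a non-event can be proven later. Names and numbers are illustrative.
import json
from statistics import mean, stdev

HISTORY = {  # daily counts per metric, collected across network/endpoint/app sources
    "firewall.outbound_denies": [110, 95, 102, 99, 105, 98, 101],
    "endpoint.av_disabled":     [0, 0, 1, 0, 0, 0, 0],
    "app.failed_logins":        [40, 35, 44, 38, 41, 37, 39],
}

def check_day(today, history, sigma=3.0, audit_path="audit_log.jsonl"):
    """Return metrics whose value deviates more than `sigma` stdevs from baseline."""
    findings = []
    with open(audit_path, "a") as audit:
        for metric, value in today.items():
            baseline = history.get(metric, [])
            if len(baseline) < 2:
                continue                         # not enough history to judge
            m, s = mean(baseline), (stdev(baseline) or 1.0)
            outlier = abs(value - m) > sigma * s
            # Record the check itself, outlier or not, so non-events are provable.
            audit.write(json.dumps({"metric": metric, "value": value,
                                    "baseline_mean": m, "outlier": outlier}) + "\n")
            if outlier:
                findings.append((metric, value, m))
    return findings

if __name__ == "__main__":
    today = {"firewall.outbound_denies": 250,    # well above baseline
             "endpoint.av_disabled": 0,
             "app.failed_logins": 42}
    for metric, value, baseline_mean in check_day(today, HISTORY):
        print(f"outlier: {metric}={value} (baseline mean {baseline_mean:.1f})")
```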

These are just a few of the points to think about…and again, I don’t think of this SEC guidance as game-changing. It just puts additional tension on already resource-constrained security teams to be more diligent in security event management. Read a few more best practices you should consider to make this easier…



Executive Order on Insider Threats

Posted: October 12, 2011 at 2:00 am | by Joe Gottlieb

Real Progress Being Made…With Big Data Challenges Ahead

President Obama issued an executive order which establishes an Insider Threat Task Force to prevent potentially damaging and embarrassing exposure of government secrets or classified information, such as those made public by WikiLeaks. In my opinion, this is a huge step in the right direction – providing both a framework for building out agency programs and specifics for cross-agency, centralized guidance and assessment of progress being made to address this threat.

Key takeaways:

1) Major responsibility lies with agencies, who will have to build and maintain a solid program for protecting against Insider threats…of course, within the boundaries of the usual data policies and privacy regulations. Two major requirements include:

  • The identification of a senior official who is accountable for the program and compliance
  • A requirement that agencies complete self-assessments to ensure compliance

2) An Insider Threat Task Force will be put in place to identify necessary and standard technologies required to achieve progress in detecting Insider Threats

3) A Steering Committee will be put in place for information sharing and safeguarding

4) The DOD and NSA are named as the key executive agents for the Executive Order – providing oversight, developing processes for auditing progress, and establishing policies for compliance

This is all encouraging – but let’s peel back the technology involved in situational awareness and the detection of an insider attack. In order to watch where people are going, what information they are accessing and what they are doing with it, you have to collect lots of data. For a single government agency, this is no small task: thousands of government personnel carry incredibly complex access rules and permissions based on the missions or programs they are assigned to.

What makes matters even more complex is that typical insider attacks occur over extended periods of time: whether it is because the insider moves slowly to avoid detection all along, or because they stumble on an opportunity and gain confidence to capitalize on it over time. In order to effectively detect these long-lead attacks, data has to be collected and retained for longer periods of time.

How can security teams identify which user profiles and activities, buried in this vast landscape of event data, are worth noting, isolating and investigating?
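One hedged starting point is to compare each user’s recent activity against that same user’s long-range baseline rather than against a single global rule. The Python sketch below assumes access events carrying a timestamp, user and resource; the window sizes and the 4x ratio are illustrative assumptions.

```python
# Minimal sketch: flag users whose recent access rate far exceeds their own
# historical rate. Windows, ratio and event fields are illustrative.
from collections import Counter
from datetime import datetime, timedelta

def access_counts(events, start, end):
    """Count access events per user in [start, end). events: (iso_ts, user, resource)."""
    counts = Counter()
    for ts, user, _resource in events:
        when = datetime.fromisoformat(ts)
        if start <= when < end:
            counts[user] += 1
    return counts

def unusual_users(events, now, window_days=30, history_days=180, ratio=4.0):
    """Users whose recent daily access rate far exceeds their historical rate."""
    recent_start = now - timedelta(days=window_days)
    history_start = now - timedelta(days=history_days)
    recent = access_counts(events, recent_start, now)
    history = access_counts(events, history_start, recent_start)
    history_span = history_days - window_days
    flagged = []
    for user, recent_count in recent.items():
        hist_rate = history.get(user, 0) / history_span
        recent_rate = recent_count / window_days
        if hist_rate > 0 and recent_rate > ratio * hist_rate:
            flagged.append((user, recent_rate, hist_rate))
    return flagged

if __name__ == "__main__":
    now = datetime(2011, 10, 1)
    events = []
    # "alice" historically touches ~1 resource a day, then spikes in the last month.
    for day in range(150):
        events.append(((now - timedelta(days=180 - day)).isoformat(), "alice", "doc"))
    for day in range(30):
        for _ in range(8):
            events.append(((now - timedelta(days=30 - day)).isoformat(), "alice", "doc"))
    for user, recent_rate, hist_rate in unusual_users(events, now):
        print(f"{user}: {recent_rate:.1f}/day recently vs {hist_rate:.1f}/day historically")
```

The per-user baseline is the design point that matters here, and it only works if months of event data are retained and queryable – which is exactly the big-data challenge described above.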

Two things are clear:

  • A combination of both real-time incident alerting AND longer-range forensic technologies are required
  • An open approach to sharing security intelligence will accelerate learning across the board

Both of these validate the Sensage architecture and approach. Our event data warehouse was built to deal with massive data collection and processing requirements, and our open access to the data allows for sophisticated and rapid analysis of suspicious events.

Sensage is already providing key technologies to government agencies and contractors, and this Executive Order should catalyze further focus on this critical problem. Be sure to check out SensageTV for my current perspective, and look for more as we watch this important new development!


