Mor Ahuvia, September 9, 2014, 09:08 am EDT
The recent unleashing of hacked, private celebrity photos to the web is driving home the message of two-factor authentication (2FA), not only to users, but to online service providers, as well.
While decidedly smaller in scale, the recent iCloud hack is doing for 2FA what the Snowden affair did for online encryption: bringing the message into public discourse for everyday users rather than solely the infosec-initiated.
So what actually happened? And how did celebrities lose control over who can view their most intimate photos?
In its statement of September 2nd, Apple asserted that following its initial investigation, it found that certain celebrities’ accounts were accessed using “a very targeted attack on user names, passwords and security questions, a practice that has become all too common on the Internet.”
Correlating with Apple’s statement, an elaborate comment posted on the MacRumors forum suggests that hackers attempted to reset iCloud account passwords using password-reset security questions, which often consist of biographical information available online, including on social networks. Once in, perpetrators could sync their own devices with a target celebrity’s account in order to obtain additional celebrities’ iCloud account addresses, as well as download their Photo Stream (which syncs photos across Apple devices, when enabled).
In addition to this method of operation, one cannot rule out simple socially engineered phishing attacks, which could have asked VIPs to ‘update’ their iCloud accounts and thus led them to divulge their credentials.
Following breaches of their respective services a few short months ago, eBay decided to implement 2FA internally, in addition to already offering it externally, and Bitly announced it would offer 2FA to both its employees and users. Apple, Google, Facebook, Twitter, and Yahoo have all rolled out 2FA to their users over the past several years, reflecting heightened awareness among service providers of the importance of adding an extra layer of security to their users’ content.
So whether the information we’re accessing is corporate, financial, or personal, two-factor authentication can overcome the weaknesses of passwords, which are ultimately vulnerable to social engineering, database hacks, brute-forcing, and guessing.
To learn more about SafeNet’s cloud-based multi-factor authentication solution, go to http://www.safenet-inc.com/multi-factor-authentication/authentication-as-a-service/.
Michal Cohen, August 29, 2014, 10:15 am EDT
In our day-to-day consumer life we can see a shift from commercial, industrial goods to artisanal ones. Usually, artisanal goods are perceived to be high quality and unique while industrial ones are more often considered to be run-of-the-mill.
While some might argue that notion holds true in the retail market, it could not be further from the truth in the enterprise domain, specifically the enterprise authentication market. There, choosing a niche authentication solution over an established enterprise solution could be detrimental to achieving the goal of securing access to corporate resources.
So what key considerations should you take into account when choosing the authentication vendor best suited to your organization’s needs?
- Scalability: You might have a defined number of users and resources you need to support but that does not mean it is set in stone. Market shifts can affect the growth of your organization and you might need to scale your solution to support that growth. It is recommended to check if the solution is flexible enough to accommodate those shifts.
- TCO Over Time: When you are purchasing an authentication solution, the total cost of ownership is rarely just the price of the purchased product. Plan for the long term and ask yourself what hidden fees, integration efforts, and day-to-day management costs you will encounter.
- Native Identity Federation: With the rapid increase in cloud application and platform adoption, your authentication solution should accommodate the ability to natively extend identities to the cloud as well as local resources.
- Multi-Tier/Multi-Tenant Environment: An organization is a living organism, and its structure is always changing. The ability to support shared services can easily help organizations support different clients, regions, and departments.
- Broad Use Case Support: One size fits all? In reality every organization is unique and the authentication solution you choose will most probably require fine tuning and tailoring to your environment. But this fine tuning can be reduced to a minimum by choosing a solution that offers out of the box support for a broad IT ecosystem.
- Technological Innovation: Your vendor of choice should be able to adopt new technologies especially given the rapid changes in IT over the past few years.
- Support: Since the authentication solution is a critical component of everyday operations, make sure the vendor you choose has the infrastructure and resources to offer international 24×7 support and high availability. A good indicator is whether the service or vendor carries ISO 27001 certification, has passed an SSAE 16 assessment (which covers high availability and robustness), and holds SOC 2 certification.
- Security: Above all, an authentication solution must itself be secure. Make sure to check whether the solution has been through security and industry-standard certifications, such as ISO 27001:2005, or FIPS and Common Criteria certification for tokens.
- Credibility: When selecting a vendor, it is a good idea to consult the experts. Gartner’s Magic Quadrant reports, for example, offer “…a culmination of research in a specific market, giving you a wide-angle view of the relative positions of the market’s competitors.” Check the Gartner Magic Quadrant for User Authentication to learn more about the authentication vendor landscape.
Typically, niche authentication providers can do one thing, or a small subset of things, quite well. However, when choosing an authentication solution, in addition to your current requirements, it’s worthwhile to bear in mind the requirements that the not-so-distant future may hold.
To learn how SafeNet can help you reach an enterprise-grade authentication solution, visit our Multi-Factor Authentication page.
Prakash Panjwani, August 28, 2014, 12:37 pm UTC
Malcolm Gladwell, famous for his New Yorker articles and now iconic book The Tipping Point, released a book last fall titled David and Goliath. The book is primarily about underdogs and why they succeed in business and other walks of life. It is a great read and one I highly recommend. A key lesson from the book, which Gladwell outlines in a TED talk, is how David triumphs over Goliath. He does so because he is a planner and has gleaned a wealth of knowledge from his experiences as a shepherd, which he uses against the brute strength of his giant opponent.
There are several lessons to be learned by looking at the past in the story of David and Goliath when it comes to information security today. Let’s go back in time just a few years and look at the checklist for any enterprise IT security professional. It might have looked something like this:
- Install anti-virus software on all client machines
- For remote access employees, deploy a VPN to provide a secure tunnel for users (in most cases, a username and password was sufficient for identifying the users)
- Install firewalls to protect the company network from intrusions
- Install content and URL filtering software to make sure employees don’t visit websites that could be potentially harmful to the company’s network or machines
- Install disk encryption on client machines because employees can’t always be fully trusted to take care of their laptops
What about data centers? They were physically secured, so there was no need to worry about them. Plus, they were protected by the same security infrastructure as the rest of the enterprise anyway (firewalls, etc.).
This approach made a lot of sense at the time. What else could go wrong?! Heck, even security vendors didn’t help matters by conversing in the lingo above, and industry analysts created a nomenclature to fit nicely with each of the security categories outlined above.
Well, a lot of things have changed since then, and so have the hackers. Blunted by the strength of Goliath-like security measures of corporations, they decided to adopt a David-like approach. They got smart and started targeting the information they wanted rather than worrying about anything else. If the goal is to get a file containing credit card information, then why not go specifically after that file rather than mounting a generic attack? It was the perfect response. And now it’s time for the enterprise IT security manager to take steps not too dissimilar from those of the hackers – apply Intelligent Security.
What does Intelligent Security entail? It requires a thoughtful approach. First, what are people most after? The data. Second, where does most sensitive data sit? Data centers, of course. What should your Intelligent Security be built around? Protecting data in the data centers, whether it is stored on dedicated servers or in a virtualized cloud configuration. Third, who is accessing the data? Employees, contractors, vendors, and partners. The list goes on and on. An Intelligent Security strategy involves quite a few of these things, and they are all interconnected.
Let’s review each aspect of this strategy.
Threat Intelligence:
Generic phishing attacks have given way to Advanced Persistent Threats (APTs), in which attackers use many techniques, including social engineering, to mount extremely targeted attacks against the very individuals who have access to the sensitive data they are after. Deploying solutions that detect such attacks is an essential part of a security strategy, but an Intelligent Security strategy requires very clear actions to be defined based on such detection. Look at one of the most famous breaches of all time, the Target breach: the company had systems that detected the attack but failed to react to it. Threat intelligence is only as good as the actionable intelligence that can be derived from it. Rick Holland of Forrester has a very good blog post on this subject.
Data Value Intelligence:
This may be the hardest part of the Intelligent Security strategy. It is critical that IT security professionals understand what the enterprise considers to be its most sensitive data, as well as the policies that govern its access, usage, and storage. Threat intelligence can also be used to further improve data value intelligence: if certain applications or data within the enterprise are constantly under attack, chances are they are sensitive and important.
User Access Intelligence:
Once you know what data is important, the next element of an Intelligent Security strategy is deploying solutions that tightly govern its access. This will involve strong, multi-factor authentication techniques, special management of privileged users (such as system admins, executives, etc.), and audit trails to show who accesses data. A great example of intelligent access security is the deployment of context-based authentication, which involves the ability to detect who is accessing information from which systems, when, and under what circumstances. Creating this intelligence would, for example, detect insider threats. If a system admin is accessing financial data from an ERP system on an unauthorized public machine, would your access policy detect that as a threat?
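To make the idea concrete, here is a minimal, hypothetical sketch of a context-based access policy in Python. It is not any vendor's actual product logic; the role names, network labels, and resource names are invented for illustration, and the rules mirror the insider-threat example above:

```python
from dataclasses import dataclass

@dataclass
class AccessContext:
    user_role: str        # e.g. "sysadmin" (illustrative)
    device_managed: bool  # is this a corporate-managed machine?
    network: str          # "corporate", "vpn", or "public"
    resource: str         # e.g. "erp_financials" (illustrative)

# Hypothetical set of resources classified as sensitive
SENSITIVE = {"erp_financials", "hr_records"}

def access_decision(ctx: AccessContext) -> str:
    """Return 'allow', 'step_up' (demand a second factor), or 'deny'."""
    # Sensitive data from an unmanaged public machine: block outright.
    if ctx.resource in SENSITIVE and ctx.network == "public" and not ctx.device_managed:
        return "deny"
    # Sensitive data from anywhere off the corporate network, or from an
    # unmanaged device: require strong (multi-factor) authentication.
    if ctx.resource in SENSITIVE and (ctx.network != "corporate" or not ctx.device_managed):
        return "step_up"
    return "allow"
```

A policy engine like this answers the question posed above: the system admin reaching ERP financial data from an unauthorized public machine is denied, while the same admin on the VPN is merely challenged for a second factor.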
Encryption Intelligence:
This is at the core of a solid Intelligent Security strategy. You might even call it the last line of defense. Persistent encryption of sensitive data, whether at rest or in motion, is important. If threat and user access intelligence solutions fail, and someone unauthorized gets hold of the data, you can mitigate the damage by making sure it’s encrypted. Intelligent encryption, however, requires more than just understanding data encryption solutions. It requires comprehensive Crypto Management, a collective term for the policies that govern encryption keys: key generation and distribution, as well as the vaulting of root keys. Many encryption solutions fail for lack of strong Crypto Management. A good example is the recent OpenSSL Heartbleed vulnerability, in which a code error exposed keys used in encrypted SSL sessions. While little would likely have prevented exploitation of the vulnerability itself, strong Crypto Management with hardware security modules (HSMs) for root key generation and storage would have limited the damage to just a single exposed session.
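The key-hierarchy idea behind Crypto Management can be sketched in a few lines of Python. This is an illustrative HMAC-based derivation, not a production key-management scheme: a root key (which in practice would be generated and held inside an HSM, never in application memory) derives independent per-session keys, so the compromise of one session key exposes only that session:

```python
import hashlib
import hmac
import os

# Illustrative only: in a real deployment the root key lives inside an HSM.
ROOT_KEY = os.urandom(32)

def derive_session_key(session_id: str) -> bytes:
    """Derive a per-session data key from the root key.

    HMAC is one-way, so a leaked session key reveals neither the
    root key nor any sibling session's key.
    """
    return hmac.new(ROOT_KEY, b"session:" + session_id.encode(), hashlib.sha256).digest()
```

Each session gets its own 256-bit key, and the root key never has to touch the systems doing bulk encryption.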
The last and most important aspect of Intelligent Security to remember is that all of the above security measures are interconnected. One feeds the other and one without the other means the entire system is weakened.
You may have noticed that nowhere above in the Intelligent Security strategy do I mention antivirus, VPNs, web filtering, and firewalls. That is not to say that these are not important, despite Symantec announcing the end of antivirus. Those are all needed, but they alone are not sufficient. An Intelligent Security strategy will use them smartly by, for example, using VPNs for file sharing that then also require strong authentication based on the user profile. This is something many enterprises now use to protect access and retrieval of highly sensitive data.
David vs. Goliath ended with intelligence winning over raw strength. In the world of security, hackers have done the same—outsmarting the large behemoth corporations. An Intelligent Security strategy, backed by solid execution, will help the Goliaths stay a step ahead of David.
Mor Ahuvia, August 6, 2014, 01:56 pm UTC
With a new world record set for the largest stockpile of breached records, there has been no more obvious time than yesterday to adopt a “Secure the Breach” mindset—positioning organizations and retailers for successful breach mitigation and breach resiliency. Data that is strongly encrypted and protected with two-factor authentication renders data breaches ineffectual and raises the bar for hackers.
In 2012, Korean artist Psy was the first to reach over 1 billion views with his Gangnam Style video clip. Now another gang has crossed the 1 billion threshold, this time not with an entertaining video clip, but through the nefarious deeds of an anonymous Russian-based group of hackers and fraudsters stealing account records. As reported by The New York Times, using SQL injection, a common class of web application attack, the group siphoned 1.2 BILLION username-and-password combinations and 500 MILLION email addresses.
Two important questions come to mind:
1) Assuming that the siphoned 1.2 billion passwords were stored in plaintext, why were these records not secured using strong encryption and key management? That would have left the hackers with unintelligible ciphertext, as they would not have had access to securely stored encryption keys, such as those living in hardware security modules (HSMs).
2) How many of the pilfered 1.2 billion password-protected accounts—be they financial, webmail, or government-issued—are also protected with strong two-factor authentication (2FA)? In an ideal scenario, the siphoned 1.2 billion compromised accounts would all be protected with two-factor authentication, rendering the hacker’s trove useless. Without having the user’s second authentication factor available—be it a mobile token, hardware token, or software token—hackers would perhaps win the breach-making battle, but would lose the data-exploitation one.
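On the first question: for stored passwords specifically, the standard safeguard is a salted, deliberately slow one-way hash rather than reversible encryption, so a stolen database yields nothing directly usable. A minimal sketch using only Python's standard library (the iteration count and salt length here are illustrative choices, not a recommendation for any particular system):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None, iterations: int = 200_000):
    """Return (salt, iterations, digest) for storage; never store the plaintext."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password: str, salt: bytes, iterations: int, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time to resist timing attacks."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)
```

A unique salt per record means identical passwords produce different digests, and the high iteration count makes brute-forcing a 1.2-billion-record trove prohibitively expensive.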
The silver lining in the ongoing saga of data breaches is this:
- First, 2FA protects against illicit access to online accounts, such as illicit access gained following a database hack. 2FA also reduces the risk of compromise to begin with, as fraudsters opt for the ‘low-hanging fruit,’ targeting sites, organizations and individuals who have yet to elevate their access security.
- Second, account data gained through such breaches, be it personally identifiable information (PII) or financial information, can be rendered useless by using the appropriate encryption and key management methods.
In addition to the above instruments, the information technology industry, SafeNet included, is working under the FIDO Alliance roof to perfect a universal specification for authentication that will allow us as consumers to use a single strong authentication method of choice to secure all our accounts, whether they are enterprise-issued, webmail, e-banking, or e-government.
As with any ongoing adversity, evolution is only a matter of time. Be sure to stay ahead of hackers by being prepared for a breach, and avoid contending with a breach aftermath. Check out the latest breaches, breach trends, and further analysis at www.breachlevelindex.com.
Mor Ahuvia, August 4, 2014, 09:30 am EDT
As you have read so far in this series, the evolution of online threats has led to a new approach to data security. This new strategy requires organizations to accept the ‘Secure the Breach’ message—that a data breach is not a matter of ‘if’, but ‘when.’ By assuming a breach will occur, organizations are encouraged to place safeguards around the data and keys and who has access to them. To round out this series, we will conclude with step 3 of this approach and discuss the best way to control access and authentication of users.
Data access points, such as network or application login pages, should be protected by two-factor authentication, while the data itself should be encrypted (both at rest and in motion) to ensure it remains confidential—even in the event that a hacker gains access to it.
While this may seem straightforward, the identity and access management landscape has been warped in recent years by sweeping changes in the IT environment. No longer confined to the boundaries of on-premises IT, data now resides in the data center, as well as in public and private clouds. So how can organizations control access to data throughout the new IT ecosystem? And how can they ensure that a leaked or hacked password doesn’t lead to a full-blown breach? (One such incident is recounted here – Bitly’s 2FA for Employees Brings Secure Cloud Access to the Fore.)
The first part of access control is ensuring that resources are only accessible by those users who require them to do their job. Applications used by the CFO may not be required by SysAdmins, for example. To simplify matters, group-based policies can be easily created, where applicable, to leverage existing user repositories, such as Active Directory or MySQL.
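In its simplest form, a group-based policy is just a mapping from directory groups to permitted applications. A toy Python sketch of the idea (the group and application names are invented for illustration; a real deployment would read groups from Active Directory or a database):

```python
# Hypothetical mapping from directory groups to permitted applications.
GROUP_APPS = {
    "finance":   {"erp", "payroll"},
    "sysadmins": {"monitoring", "config_mgmt"},
    "all_staff": {"email", "intranet"},
}

def allowed_apps(user_groups):
    """Union of the applications permitted to each of the user's groups."""
    apps = set()
    for group in user_groups:
        apps |= GROUP_APPS.get(group, set())
    return apps
```

Because the policy hangs off groups rather than individuals, moving a user between departments in the directory automatically updates what they can reach.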
After an access policy is instituted, the next step is to elevate trust by ensuring that users are who they claim to be. This is accomplished by adding strong authentication, which adds a ‘something you have’ factor to the ‘something you know’ factor. Single-factor authentication, which relies on static passwords, does not protect against guessing, phishing, database hacking, or traffic sniffing. Two-factor authentication, however, offers dramatically improved security, and can be achieved using multiple technologies:
- One-time password (OTP) authentication
- Out-of-band authentication
- Certificate-based authentication (based on X.509 PKI certificates)
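To illustrate the first item, here is a minimal TOTP (time-based one-time password) generator following RFC 6238, using only Python's standard library. Real deployments would of course use a vetted implementation and a securely provisioned secret:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the current time-step counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)
```

Because the code changes every 30 seconds and is derived from a shared secret the server can verify, a phished or sniffed password alone is no longer enough to log in. Against the RFC 6238 test vector (the ASCII secret ‘12345678901234567890’, time 59 seconds, 8 digits), this yields the code 94287082.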
Other technologies, while not comprising ‘true’ two-factor authentication, also improve security dramatically. These include:
- Pattern-based authentication (see GrIDsure)
- Context-based authentication
The tricky part here is that strong authentication must be extended to ALL data belonging to an organization, not just the data residing within the enterprise perimeter. Such data may reside in:
- Cloud applications like Salesforce.com, Office 365, and Dropbox
- VDI applications such as VMware, Citrix XenApp, and AWS EC2
- Web portals, such as OWA
A good access control strategy requires strong authentication to all these resources, in addition to the local network. To eliminate the hassle of using a different password for each resource, technologies such as Identity Federation can be deployed.
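Conceptually, Identity Federation works by having one trusted identity provider authenticate the user once and then vouch for them to each service. A heavily simplified, hypothetical sketch of that handshake (real federation uses SAML or OpenID Connect with public-key signatures, not a shared secret, and the token layout below is invented for illustration):

```python
import base64
import hashlib
import hmac
import json
import time

SHARED_KEY = b"demo-secret-shared-by-idp-and-sp"  # illustrative only

def issue_assertion(user, audience, ttl=300):
    """Identity-provider side: sign a short-lived assertion for one service."""
    claims = {"sub": user, "aud": audience, "exp": time.time() + ttl}
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest().encode()
    return payload + b"." + sig

def verify_assertion(token, audience):
    """Service-provider side: check signature, intended audience, and expiry."""
    payload, sig = token.rsplit(b".", 1)
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(expected, sig):
        return None
    claims = json.loads(base64.urlsafe_b64decode(payload))
    if claims["aud"] != audience or claims["exp"] < time.time():
        return None
    return claims["sub"]
```

The user authenticates strongly to the identity provider once; every downstream resource then accepts the provider's signed assertion instead of maintaining its own password.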
To learn more about how an advanced two-factor authentication solution can help you control access to your data—wherever it may reside—see our Business Drivers for Next Generation Authentication blogs.
Throughout this series, we have uncovered a multitude of reasons why organizations can no longer solely rely on a strategy of prevention through network perimeter security, as provided by IPSs, WAFs, and firewalls. Rather, they need to adopt a strategy of breach management, which requires them to ask:
- “Where is my data?”
- “Where are my keys?”
- “Who has access to my data?”
By addressing each area and incorporating these three steps into your data protection strategy, you can be sure your most sensitive data is safe in the event a breach does occur. To learn more, visit www.securethebreach.com.
Miss parts 1-5 of this series? Catch up on what you need to know about preparing for a data breach:
- Securing the Breach, Part 1 – Accept It, Then Protect It
- Securing the Breach, Part 2 – A Three Step Strategy to Breach Bliss
- Securing the Breach, Part 3 – No Rest for Data at Rest
- Securing the Breach, Part 4 – Risk in the Fast Lane for Data in Motion
- Securing the Breach, Part 5 – Cryptographic Keys: Why is Key Security So Important?