The Art of Data Protection
Jennifer Dean April 18, 2016, 09:24 am EDT
For many people, the line between work and personal life is blurred. When it comes to mobile devices, 80 percent of the workforce admits to using their devices for both business and private use, which means personal data and company data naturally converge onto a single device. What’s becoming increasingly important is how to protect the data stored on, and accessed from, these devices to ensure mobile workforce security.
So how much are those corporate emails or family photos worth to the average person? A recent study, How Much Is the Data on Your Mobile Device Worth?, conducted by the Ponemon Institute, took a look at the value of our mobile devices and the risks involved with bringing personal devices onto the corporate network. The study asked participants to estimate the value of their devices, including replacement costs and/or value of the data. The average value assessed by participants was $14,000, with photos being the highest valued asset at $3,074. Next on the list were contact lists (personal and business) at $2,654 and personal apps at $2,096.
But how safe is this data we deem so valuable? The research found an increasing amount of sensitive and confidential information stored on mobile devices, yet both personal and enterprise security practices are not providing adequate protection. Fifty-five percent of respondents said they are concerned about the work-related data they access and store on their mobile devices, yet 50 percent do nothing to secure it. Plus, a shocking 68 percent of respondents admitted to sharing passwords across personal and work accounts.
The discrepancy between corporate IT’s and employees’ perceptions of the level of access available from personal devices was also concerning. While the IT department believed only 19 percent of employees have access to customer records, 43 percent of polled employees said they have access.
The bottom line is that corporate IT should take a long hard look at mobile security before it’s too late. A proactive approach is always preferred, and adding a second layer of authentication will help ensure users are fully identified and authenticated before they are granted access to your most valuable assets.
The most common two-factor authentication solutions used today are one-time passwords (OTP) and PKI authentication. With OTP authentication, the user is only granted access if a passcode is simultaneously generated in two places: on the authentication server and on the hardware or software token (OTP app) in the user’s possession. Digital identity certificates – or PKI – further raise the mobile security bar and enable other applications such as digital signature and file encryption.
But because most mobile devices don’t have USB slots or embedded smart card readers, it can be challenging to use smart cards on the go. Bluetooth, a connectivity channel implemented across different endpoints, can tackle this problem, making authentication compatible with any mobile device. For example, Gemalto MobilePKI solutions enable providers to choose either a Bluetooth-enabled badge holder or a USB token. See how it works in the video Enterprise Mobile Security.
Whatever solution best fits your organization’s needs, two-factor authentication is imperative to secure any enterprise that supports an on-the-go mobile workforce.
Mor Ahuvia April 14, 2016, 10:00 am EDT
As promised, here are a few more quick facts on the subject of strong authentication for those new to the topic, or those looking to introduce the topic to the uninitiated. How are one-time passwords generated? What is the difference between an Assurance Level and a FIPS Security Level? And what is OATH authentication?
How is a one-time password generated?
In OTP authentication, one-time passwords, or OTPs, are generated using four main inputs:
- A secret token seed, consisting of a randomly-generated string usually 256 bits or 512 bits long
- A time-synched or event-based parameter, such as a timestamp for time-based OTPs, or a counter for event-based OTPs
- Other variables, which add entropy
- A hashing algorithm, which combines the above inputs to produce a single OTP value
OTP = HASH(token seed, time- or event-based value, other variables)
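This formula can be made concrete with a short, self-contained Python sketch of the two OATH realizations discussed later in this post: event-based HOTP (RFC 4226) and time-based TOTP (RFC 6238). The seed here is the RFC’s published test value, not a production secret:

```python
import hashlib
import hmac
import struct
import time

def hotp(seed: bytes, counter: int, digits: int = 6) -> str:
    """Event-based OTP per RFC 4226: HMAC the moving counter with the seed."""
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(seed, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(seed: bytes, step: int = 30, digits: int = 6) -> str:
    """Time-based OTP per RFC 6238: the 'counter' is the current time window."""
    return hotp(seed, int(time.time()) // step, digits)

seed = b"12345678901234567890"    # RFC 4226 test seed (production seeds are longer)
print(hotp(seed, 0))              # -> 755224, the RFC 4226 Appendix D test vector
```

Because the server and the token share the same seed and the same moving factor, both sides derive the same value independently, which is what makes the comparison possible.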
What is a Level of Assurance?
In terms of authentication, a level of assurance denotes the level of certainty that a user is who they claim to be. Different authentication methods provide different levels of assurance, for example, a static password provides a low level of assurance, whereas two-factor or multi-factor authentication provide a higher level of assurance.
When determining the level of assurance required to secure access to a specific resource, IT and security professionals take into consideration the risk and value of the ‘information asset’ in question—for example, the enterprise VPN vs. an attendance web application. As well, higher privilege accounts, such as those belonging to network administrators or C-level personnel, generally require a higher level of assurance than a standard account (since unauthorized access to these accounts could result in much greater losses or damage).
What is a NIST Assurance Level?
In its Electronic Authentication Guideline, the US National Institute of Standards and Technology (NIST) has laid out a system that anyone can use to calibrate the level of assurance provided by a specific authentication method or a combination of methods, represented in ascending order of security from Assurance Level 1 to Assurance Level 4. (For details, see pages 51 through 55 of the guideline.) Here are some examples:
- Assurance Level 1 – A password or PIN
- Assurance Level 2 – An OTP, generated by a soft token
- Assurance Level 3 – A PIN-protected OTP, generated by a soft token
- Assurance Level 4 – A PIN-protected OTP, generated by a hardware token
Note that Assurance Levels 2 and higher require that the cryptographic module be FIPS validated. More on this below.
What is a FIPS Security Level?
Not to be confused with NIST Assurance Levels, FIPS 140-2 is a US Federal Information Processing Standard (FIPS) that rates the security of a cryptographic module, in ascending order of security from Security Level 1 to Security Level 4:
- FIPS 140-2 Security Level 1 – Applies to the cryptographic module, or software component, of a cryptographic system. An OTP app, for example, can be FIPS 140-2 Level 1 validated when it incorporates FIPS-validated crypto libraries.
- FIPS 140-2 Security Level 2 – Applies to the physical casing of a cryptographic module (e.g. a token) and requires that it be tamper evident, meaning that visible signs of manipulation appear in the event of physical access, in order to protect the plaintext encryption keys from manipulation or duplication.
- FIPS 140-2 Security Level 3 – Stipulates the zeroing (‘self-destruction’) of encryption keys in the event of tampering or physical access to a token.
- FIPS 140-2 Security Level 4 – Requires complete protection of the encryption keys from extreme physical or environmental conditions (e.g. space or lab settings), so that even manipulation of such conditions does not reveal the encryption keys.
In the EU and APAC regions, the Evaluation Assurance Levels (EAL) of the Common Criteria (CC) standard, comparable to the FIPS 140-2 standard used in the US and Canada, are more widely referenced.
What is Identity Federation?
Identity federation means using an identity from one security domain to access another security domain. An example would be using the identity Jill@abc.org not only to access the abc.org network, but to access third-party applications as well, such as Office 365, Salesforce.com or AWS. When Jill’s enterprise identity is extended to the cloud, or ‘federated,’ she can access all her cloud applications with her familiar enterprise identity.
Identity federation can help eliminate the help desk overhead and password fatigue that results from having 10 or 20 disparate username-and-password sets for different cloud applications.
Federation is achieved using different protocols, for example Kerberos for on-premises applications, and SAML for cloud-based applications. (For more on SAML-based federation, watch this webinar on Securing access with SafeNet Authentication Service.)
What is OATH Authentication?
OATH Authentication is an open standard for implementing strong authentication. Produced by an industry-wide collaboration of security vendors, the OATH architecture can be used by IT and security professionals as a template for integrating strong authentication into their organization’s current infrastructure. OATH’s open standards create more freedom for enterprises by preventing ‘vendor lock-in,’ thereby offering a broader choice of vendors and enabling the use of an OATH-based token across different vendors’ platforms: token seeds can be exported from one OATH platform and imported into another.
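In practice, seed portability is often achieved with the de facto ‘otpauth://’ key URI format that most OATH-compatible authenticator apps can import. A minimal sketch (the account and issuer names are invented for illustration):

```python
import base64
from urllib.parse import quote

def provisioning_uri(seed: bytes, account: str, issuer: str) -> str:
    """Build an otpauth:// key URI carrying a Base32-encoded TOTP seed."""
    secret = base64.b32encode(seed).decode().rstrip("=")  # Base32, padding stripped
    return (f"otpauth://totp/{quote(issuer)}:{quote(account)}"
            f"?secret={secret}&issuer={quote(issuer)}"
            f"&algorithm=SHA1&digits=6&period=30")

print(provisioning_uri(b"12345678901234567890", "jill@abc.org", "abc.org"))
```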
Proprietary vs. Open Authentication Standards
Authentication technology, like other technologies, may be either open or proprietary. SAML 2.0, OATH and OpenID Connect are all open standards that are available to the public and to developers free of charge. WS-Federation, conversely, is a proprietary identity federation protocol created by Microsoft, which also supports SAML. Similarly, the algorithms used in 2FA may be either proprietary or open, with examples of the latter being TOTP and HOTP (both OATH 2FA protocols). Proprietary methods are often more lucrative for vendors, whereas open standards that have undergone peer review and public scrutiny tend to enjoy greater industry-wide support.
Discover more about strong authentication in part 1 of the series and read A Security Survey of Strong Authentication Technologies – Whitepaper.
Stephen Helm April 7, 2016, 12:33 pm EDT
Cyber threats to our critical infrastructure are nothing new. Since the early 1980s, hackers, vandals and government agencies have exploited the sensitive systems at the heart of oil pipelines, power plants and dams, with varying degrees of success.
Although these attacks were rare, they were highly targeted and exposed serious flaws in the security of the Industrial Control Systems (ICS) on which the utilities rely.
Revealed in 2010, Stuxnet was one of the most devastating cyber-attacks in history, and is considered a game changer in how the world viewed the security of industrial systems.
A highly sophisticated, state-sponsored cyber weapon designed to attack industrial control systems, Stuxnet made headlines as it wreaked havoc on the Iranian nuclear program, reportedly leading to serious accidents, and even loss of life, at an Iranian nuclear facility.
In the years following Stuxnet, utilities have come under attack more frequently, with some public power providers indicating that they were under a “constant state of ‘attack’ from malware and entities seeking to gain access to internal systems,” as documented in the Electric Grid Vulnerability report created by U.S. Congressmen Edward J. Markey and Henry A. Waxman.
In one extreme example a “utility reported that it was the target of approximately 10,000 attempted cyberattacks each month.”
On December 23, 2015, Kyivoblenergo, an electricity distribution company in Ukraine, experienced a power outage as the result of a sophisticated cyber-attack. The attack was notable because it was the first against a public utility designed to disrupt the distribution of electricity.
The attack highlighted the flaws in five commonly held smart grid cyber security myths, namely:
Industrial Control Systems are isolated. The electricity industry comprises a highly complex ecosystem of players spanning generation, transmission, distribution operations and markets. All of these links in the chain must be connected to some degree, and modern industrial control systems rely on more connectivity than ever before. “Isolation” is often achieved with a series of firewalls designed to prevent outside intrusion into sensitive systems. These defenses can be bypassed, as in the 2003 attack on the Davis-Besse power plant, in which an attacker penetrated the network of an unnamed Davis-Besse contractor and navigated from there to the Davis-Besse network, introducing malware that the plant’s firewall would otherwise have blocked.
Isolation in a utility environment involves more than just connectivity to the larger internet. Removable media, USB tokens, and even laptops are relied on for maintenance at different points of the infrastructure. All of these tools could be used to introduce malware and other security vulnerabilities.
Nobody will want to attack us. To be sure, the majority of hackers choose targets that present some opportunity for monetary gain, and very few of these adversaries would wish to cause physical harm to people or property. However, we live in a time where vandals, disgruntled employees, terrorist organizations, and even nation states have interest in attacking our critical infrastructure. These attacks occur all too frequently, and threaten to increase as our adversaries become more skilled and our systems more open.
Utilities only use obscure protocols/systems. In the past this may have been true, but today utilities rely on a multitude of commercial technologies. From communication protocols and operating systems like Microsoft Windows and Linux to common databases, utilities have turned to common software and hardware tools to save money and create efficiencies. Unfortunately, these systems are often well understood by hackers and provide an easier point of entry than a truly proprietary system.
Social engineering is not an issue. People are more aware of social engineering than in the past, and utilities certainly train their personnel to spot such threats, but the threat is still significant. All it takes is one employee clicking on the wrong link, or opening an attachment in a lapse of judgement, to introduce malware. Such was the case in the Kyivoblenergo attack.
It’s Encrypted: It’s protected. Encryption and cryptography are essential tools of protection for utilities, used for data security, integrity and non-repudiation. Cryptography essentially removes risk from the data and systems and places it on the sensitive cryptographic keys used to sign, encrypt and decrypt. This means the security of cryptographic keys is of utmost importance. Failure to secure these keys means they could be used against the utility, either to decrypt sensitive data or to sign malware so that it appears trustworthy.
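A hedged sketch of that last point, using Python’s standard hmac module (the key and payloads are invented for illustration): whoever holds a signing key can make arbitrary data verify as trusted, which is why key custody matters more than the algorithm itself.

```python
import hashlib
import hmac

SIGNING_KEY = b"example-key-normally-kept-in-an-HSM"   # illustrative only

def sign(payload: bytes, key: bytes) -> bytes:
    """HMAC-SHA256 tag over the payload."""
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify(payload: bytes, tag: bytes, key: bytes) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(sign(payload, key), tag)

firmware = b"legitimate firmware image"
tag = sign(firmware, SIGNING_KEY)
print(verify(firmware, tag, SIGNING_KEY))              # True: trusted

# If the key leaks, an attacker can sign anything and it verifies identically:
malware = b"malicious firmware image"
forged = sign(malware, SIGNING_KEY)                    # attacker holds the key
print(verify(malware, forged, SIGNING_KEY))            # also True
```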
In the next blog, we will talk about how utilities can establish security objectives around availability, integrity, confidentiality, and accountability to build trust into their smart grid deployments.
Want to learn more? Check out our on demand webinar, Building the Trusted Smart Grid: Threats, Challenges, and Compliance!
Chris Owen March 23, 2016, 10:30 am EDT
Senetas and Gemalto announced NATO approval of the latest SafeNet High Speed Encryptors for NATO Restricted use by all 28 NATO member states, further extending our high-assurance capabilities to provide maximum data protection for security-conscious organizations. The NATO approval – and the subsequent inclusion of the company’s products in the NATO Information Assurance Product Catalogue – allows the encryptors to be supplied to agencies of all 28 NATO member states for government and defence use.
Access the NIAPC site to see a list of approved encryptors, listed under Senetas. Senetas and Gemalto partner to deliver the world’s best High Speed Encryption appliances. Gemalto and Senetas have an extended global distribution agreement in which Gemalto distributes Senetas’ high speed network encryption solutions across the globe.
NATO Approval? Why?
Simply put, the NATO approval further extends Gemalto’s SafeNet High Speed Encryptors’ high-assurance capabilities to provide maximum data protection for security-conscious organizations. In addition to the NATO approval, SafeNet High Speed Encryptors also hold certifications such as FIPS (US), Common Criteria (international) and CAPS (UK).
Security hardware certification by the various international independent and government certification organisations is a strict requirement of many government agencies and defence organisations for the protection of sensitive data around the world.
These security product certifications involve intensive and rigorous testing procedures, which often take years to complete. Certification is not a ‘one-time’ process; rather, it is an ongoing process in which any minute change to the product requires recertification.
In simple terms, the approval states that the products are ‘…certified as suitable for government and defence use…’ The specific certification classification determines the level of data sensitivity for which the product is suitable – e.g. ‘up to secret’ classification.
Why encrypt data in motion?
We all know that sensitive data needs to be protected, especially in the public sector where citizen information is extremely sensitive. But what happens to data in motion when it’s transmitted to other locations? Once it’s in motion, you’re no longer in control of it, and, if unencrypted, it can be ‘tapped’ with relative ease by cyber-criminals, or misdirected unintentionally either by human or machine error.
Why SafeNet High Speed Encryption?
Gemalto provides the world’s leading certified Layer 2 high speed encryptors, which are CAPS certified and fully assured for UK public sector use. These encryptors ensure the most secure data-in-motion protection, maximum performance, near-zero overhead with “set and forget” management, and the lowest total cost of ownership.
SafeNet High Speed Encryptors mitigate the risk of communication interception (sniffing), traffic analysis and fibre tapping. Among the solutions Gemalto offers are triple-certified appliances (CAPS, FIPS 140-2 Level 3 and Common Criteria) that are listed in the NATO Information Assurance Product Catalogue for the protection of restricted information.
Maximum Performance & Efficiency
SafeNet High Speed Encryptors enable public sector organizations to make the most of their expensive 10 Gbps pipes by encrypting sensitive, often compliance-bound data. They encrypt 10 Gbps pipes at line speed with almost zero latency and zero impact on network bandwidth or other network assets.
Lowest Total Cost of Ownership
SafeNet High Speed Encryptors provide best-in-class enterprise high speed encryption that can reduce network costs by as much as 50 percent compared to solutions such as IPsec that encrypt at Layer 3.
To secure your data in motion, you need to encrypt it. By encrypting the data, you can be assured that however it is accessed by an unauthorized party, it remains protected. The simplest and best approach is to provide protection that stays with the data wherever it is sent. High speed encryption does exactly that.
For more information on high speed encryption download our high speed encryption overview.
Why is strong authentication used? How does it work? And why choose one form of authentication over another? If you’re inundated with information, and are trying to drill down to the bare basics, this cheat sheet will help you make sense of it all. For the sake of brevity, the Cheat Sheet will be rolled out over two or more blog entries, so stay tuned.
Why is Strong Authentication Used?
Strong authentication is used because static username and password combinations can be easily compromised by malicious actors seeking to take over your account or system (be it on an online social, corporate or retail platform). While passwords may have been sufficient back in the 1960s, they have become increasingly easy to compromise over the past 20 years with the evolution of the internet and the threat vectors that can be disseminated over it. According to the Verizon Data Breach Investigations Report, the majority of breaches are known to involve the use of compromised credentials.
Common threats that jeopardize the confidentiality of your password include phishing attacks, brute-force attacks, generic malware (aka SSL stealers), credential-database hacking and even password-guessing.
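Some back-of-envelope arithmetic illustrates why static passwords fall so quickly to brute force. The keyspace is alphabet_size ** length; the guess rate below is an assumed figure for an offline attack against fast, unsalted hashes, and real-world rates vary widely:

```python
GUESSES_PER_SECOND = 10_000_000_000   # 10 billion/s, an assumed attack rate

def worst_case_seconds(alphabet_size: int, length: int) -> float:
    """Time to exhaust the full keyspace at the assumed guess rate."""
    return alphabet_size ** length / GUESSES_PER_SECOND

# Eight lowercase letters: the entire keyspace falls in about 21 seconds.
print(worst_case_seconds(26, 8))
# Twelve mixed-case letters and digits: on the order of 10,000 years,
# but still a fixed, finite target with no second factor behind it.
print(worst_case_seconds(62, 12) / (3600 * 24 * 365))
```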
How does Two-Factor Authentication Work?
Two of the most common second-factor methods used today are one-time passcodes and PKI certificate-based authentication.
- One-time passcodes are a form of ‘symmetric’ authentication, where a one-time passcode is simultaneously generated in two places: on the authentication server and on the hardware token or software token (OTP app) in the user’s possession. If the OTP generated by your token matches the OTP generated by the authentication server, then authentication is successful and you’re granted access. Both proprietary and open-source protocols are used to generate an OTP. More on that in Part 2 of the Ultimate Strong Authentication Cheat Sheet.
- PKI authentication is a form of ‘asymmetric’ authentication as it relies on a pair of dissimilar encryption keys—namely, a private encryption key and a public encryption key. Hardware PKI certificate-based tokens, such as smart cards and USB tokens, are used to store your secret private encryption key securely. When authenticating to your enterprise network server, for example, the server issues a numeric ‘challenge.’ That challenge is signed using your private encryption key. If there’s a mathematical correlation, or ‘match,’ between the signed challenge and your public encryption key (known to your network server), then authentication is successful and you’re granted access to the network. (This is an oversimplification. For more details, watch these Science of Secrecy videos by Simon Singh, PhD.)
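The challenge-signing flow in the PKI bullet can be sketched with deliberately toy-sized textbook RSA numbers (these parameters are for illustration only and would be trivially breakable in practice):

```python
# Textbook RSA with toy parameters: n = 61 * 53 = 3233, e = 17, d = 2753.
p, q = 61, 53
n = p * q          # public modulus, known to both server and token
e = 17             # public exponent (server side)
d = 2753           # private exponent (kept on the user's token)

def sign(challenge: int, priv: int, modulus: int) -> int:
    """The token signs the server's numeric challenge with the private key."""
    return pow(challenge, priv, modulus)

def verify(challenge: int, signature: int, pub: int, modulus: int) -> bool:
    """The server checks the signature against the user's public key."""
    return pow(signature, pub, modulus) == challenge

challenge = 1234                            # server-issued challenge
signature = sign(challenge, d, n)
print(verify(challenge, signature, e, n))   # True -> access granted
```

Real deployments sign a hash of the challenge using padded keys of 2048 bits or more; the ‘mathematical correlation’ mentioned above is exactly this modular-exponentiation check.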
What is the best strong authentication method to use?
When it comes to authentication, one size does not fit all.
- Appropriate Level of Security – While OTP apps may provide sufficient protection for most enterprise use cases, verticals that require high assurance, such as e-government and e-health, may be mandated to use PKI security by law. Broadly speaking, OTP authentication is the 2FA method of choice in North America, whereas PKI is far more popular in other regions of the world, especially in highly regulated sectors. (For details on different methods and the threats they counter, read the Survey of Authentication Technologies White Paper.)
- Cost – OTP authentication has traditionally been more affordable, as well as easier and quicker to deploy, as it does not require setting up a PKI infrastructure or purchasing PKI digital certificates from a Certificate Authority for each user. Plus, with OTP authentication, OTP apps can be installed on users’ mobile devices and desktops and used in place of hardware tokens, unlike PKI authentication, where a hardware token must be procured for each user to keep their private encryption key safe. However, with advances in technology such as embedded ‘secure elements’ in mobile devices and Bluetooth Smart PKI readers, PKI is becoming increasingly affordable as well as user- and deployment-friendly.
- Regional Security Standards – Depending on regulations relevant to your industry, the hardware or software token you deploy may need to comply with the FIPS standard in the US or Common Criteria in Europe. More on these standards in Part 2.
- Usability – Organizations that require greater mobility for their workers may seek increasingly transparent authentication methods for their employees. Software and mobile-based tokens, as well as tokenless solutions, provide a more convenient authentication journey that facilitates the implementation of secure mobility initiatives. To learn more, download our Mobile Employee eBook.
Stay tuned for Part 2 of the Ultimate Cheat Sheet on Strong Authentication with more quick facts to help you secure access across your IT ecosystem.