As the threat landscape evolves and adversaries find new ways to exfiltrate and manipulate data, more organizations are adopting a zero trust strategy. However, many are only focusing attention on endpoints, leaving the database vulnerable to malicious attacks. Databases are the last line of defense against data exfiltration by cybercriminals. To combat this, it’s essential that zero-trust security controls are applied to critical database assets.

The zero trust information security model denies access to data and applications by default. Threat prevention is achieved by granting access to networks and data only under policy informed by continuous, contextual, risk-based verification of users and their associated devices. Zero trust advocates three core principles: 1) all entities are untrusted by default, 2) least privilege access is enforced, and 3) comprehensive security monitoring is implemented.
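In practice, the deny-by-default and least-privilege principles amount to checking every request against an explicit allowlist, with contextual device checks layered on top. The sketch below illustrates the idea; the class and rule names are illustrative assumptions, not any specific product's API:

```python
from dataclasses import dataclass, field

@dataclass
class AccessRequest:
    user: str
    device_compliant: bool
    resource: str
    action: str

@dataclass
class ZeroTrustPolicy:
    # Explicit allow rules as (user, resource, action) tuples.
    # Anything not listed is denied -- trust nothing by default.
    allow_rules: set = field(default_factory=set)

    def evaluate(self, req: AccessRequest) -> bool:
        # Contextual check: a non-compliant device fails regardless of rules.
        if not req.device_compliant:
            return False
        # Least privilege: only explicitly granted combinations pass.
        return (req.user, req.resource, req.action) in self.allow_rules

policy = ZeroTrustPolicy(allow_rules={("alice", "payroll-db", "read")})
print(policy.evaluate(AccessRequest("alice", True, "payroll-db", "read")))   # explicitly allowed
print(policy.evaluate(AccessRequest("alice", True, "payroll-db", "write"))) # denied by default
```

Note that there is no "deny list": the absence of a rule is itself a denial, which is what distinguishes this model from perimeter-style filtering.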

The traditional scope of cybersecurity was once considered to be perimeter protection of the enterprise network and associated data and applications. This castle-and-moat security model extends trust to all users and devices within the perimeter, allowing extensive or even unlimited access to assets within the castle. Despite massive investments in perimeter security defenses, cyber attackers can still access sensitive data. Zero trust is an evolution of security that no longer relies on castle-and-moat security to protect data environments. It moves enterprise cybersecurity away from over-reliance on perimeter-based security, including firewalls and other gating technologies, to create a barrier around an organization’s IT environment. 

The 2022 IBM Cost of a Data Breach Report, conducted by the Ponemon Institute, found the average total cost of a data breach reached an all-time high of $4.35 million. Implementing zero trust has a direct impact on potentially lowering the cost of a breach by limiting the risk of unauthorized access, insider threats, and malicious attacks. Just 41 percent of organizations in the study said they deployed a zero trust security framework. The 59 percent that didn’t deploy zero trust incurred an average of $1 million more in breach costs than those that did.

While the initial goal of zero trust is to prevent data breaches, the core goal is data protection. Zero Trust Data Protection (ZTDP) is a new and evolving term for an approach to data protection based on the zero trust security model. Achieving ZTDP requires an effective data security and governance solution that can implement the zero trust model within the data environment. Privacera’s approach is built on three pillars:

Least privilege access control: Most cyber attacks occur when an attacker exploits privileged credentials. By imposing least privilege access-control restrictions on software and systems access, attackers cannot use higher-privilege or administrator accounts to install malware or damage the system.

Strong user authentication and authorization: Providing a granular level of data access control across systems for different users by the client, partner, business unit, sub-contractor, customer, franchise, department, or by contractual terms requires unified authentication and authorization controls capable of scaling across large, distributed hybrid and multi-cloud environments.

Data obfuscation, using encryption and/or masking: Organizations must be able to granularly encrypt or mask data at the table, column, row, field, and attribute level, not just the entire data set. This enables data science and analytics teams to use more data to build models and extract insights, drive new business opportunities, garner increased customer satisfaction, and optimize business efficiency.
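As a minimal illustration of the third pillar, the sketch below masks individual columns while leaving the rest of a data set usable for analytics. It is a generic example, not Privacera's implementation; the masking rule (keep the last two characters) is an illustrative choice:

```python
def mask_value(value: str) -> str:
    """Replace all but the last two characters with '*'."""
    if len(value) <= 2:
        return "*" * len(value)
    return "*" * (len(value) - 2) + value[-2:]

def mask_columns(rows, sensitive_columns):
    """Return a copy of the row dicts with sensitive columns masked."""
    return [
        {k: mask_value(v) if k in sensitive_columns else v for k, v in row.items()}
        for row in rows
    ]

rows = [{"name": "Alice Smith", "ssn": "123-45-6789", "region": "EMEA"}]
masked = mask_columns(rows, sensitive_columns={"ssn"})
print(masked[0]["ssn"])     # masked: *********89
print(masked[0]["region"])  # untouched, still usable for analytics
```

The key property is granularity: only the `ssn` column is obfuscated, so analysts can still aggregate by `region` or join on non-sensitive fields.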

The Cost of a Data Breach Report also noted security automation made the single biggest difference in the total cost of a data breach, making it more likely security best practices will be followed without fail. Zero trust should inform both what is protected and how access is controlled, while security automation can more efficiently put those zero trust principles into practice. The powerful combination of zero trust and Privacera security and governance automation helps your security team to more effectively apply data security controls as well as remediate incidents as quickly as possible — ensuring you maintain a stronger and more resilient security posture while reducing your cybersecurity risks.

Learn more about the emergence of data security governance for evolving zero trust strategies and get your roadmap to business success here.


By Anand Oswal, Senior Vice President and GM at cybersecurity leader Palo Alto Networks

Critical infrastructure forms the fabric of our society, providing power for our homes and businesses, fuel for our vehicles, and medical services that preserve human health.

With the acceleration of digital transformation spurred by the pandemic, larger and larger volumes of critical infrastructure and services have become increasingly connected. Operational technology (OT) serves a critical role as sensors in power plants, water treatment facilities, and a broad range of industrial environments.

Digital transformation has also led to a growing convergence between OT and information technology (IT). All of this connection brings accessibility benefits, but it also introduces a host of potential security risks.

Cyberattacks on critical infrastructure threaten many aspects of our lives

It’s a hard fact that there isn’t an aspect of life today free from cyberthreat. Ransomware and phishing attacks continue to proliferate, and in recent years, we’ve also seen an increasing number of attacks against critical infrastructure targets. Even where OT and IT have traditionally been segmented or even air-gapped, these environments have largely converged, presenting attackers with the ability to find an initial foothold and then escalate their activities to more serious pursuits, such as disrupting operations.

Examples are all around us. Among the most far-reaching attacks against critical infrastructure in recent years was the Colonial Pipeline incident, which triggered resource supply fears across the US as the pipeline was temporarily shut down. Automobile manufacturer Toyota was forced to shut down briefly after a critical supplier was hit by a cyberattack. Meat processing vendor JBS USA Holding experienced a ransomware cyberattack that impacted the food supply chain. The Oldsmar water treatment plant in Florida was the victim of a cyberattack that could have potentially poisoned the water supply. Hospitals have suffered cyberattacks and ransomware that threaten patients’ lives, with the FBI warning that North Korea is actively targeting the US healthcare sector. The list goes on and on.

Global instability complicates this situation further as attacks against critical infrastructure around the world spiked following Russia’s invasion of Ukraine, with the deployment of Industroyer2 malware that is specifically designed to target and cripple critical industrial infrastructure.

Today’s challenges place an increasing focus on operational resiliency

With all of these significant challenges to critical infrastructure environments, it’s not surprising that there is a growing focus on operational resiliency within the sector. Simply put, failure is not an option. You can’t have your water or your power go down or have food supplies disrupted, because an outage of critical infrastructure has a direct impact on human health and safety. So, the stakes are very high, and there is almost zero tolerance for something going wrong.

Being operationally resilient in an era of increasing threats and changing work habits is an ongoing challenge for many organizations. This is doubly true for the organizations, agencies, and companies that comprise our critical infrastructure.

Digital transformation is fundamentally changing the way this sector must approach cybersecurity. With the emerging hybrid workforce and accelerating cloud migration, applications and users are now everywhere, with users expecting access from any location on any device. The implied trust of years past, where being physically present in an office provided some measure of user authenticity, simply no longer exists. This level of complexity requires a higher level of security, applied consistently across all environments and interactions.

Overcoming cybersecurity challenges in critical infrastructure

To get to a state of resiliency, there are a number of common challenges in critical infrastructure environments that need to be overcome because they negatively impact security outcomes. These include:

Legacy systems: Critical infrastructure often uses legacy systems far beyond their reasonable lifespan from a security standpoint. This means many systems are running older, unsupported operating systems, which often cannot be easily patched or upgraded due to operational, compliance, or warranty concerns.

IT/OT convergence: As IT and OT systems converge, OT systems that were previously isolated are now accessible, making them more available and, inherently, more at risk of being attacked.

A lack of skilled resources: In general, there is a lack of dedicated security personnel and security skills in this sector. There has also been a shift in recent years toward remote operations, which has put further pressure on resources.

Regulatory compliance: There are rules and regulations across many critical infrastructure verticals that create complexity concerning what is or isn’t allowed.

Getting insights from data: With a growing number of devices, it’s often a challenge for organizations to get insights and analytics from usage data that can help to steer business and operational outcomes.

The importance of Zero Trust in critical infrastructure

A Zero Trust approach can help to remediate a number of the security challenges that face critical infrastructure environments and also provide the level of cyber resilience that critical infrastructure needs now.

How come? The concept of Zero Trust, at its most basic level, is all about eliminating implied trust. Every user needs to be authenticated, every access request needs to be validated, and all activities must be continuously monitored. With Zero Trust, authentication and access become a continuous process that helps to limit risk.
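That shift from one-time login to continuous verification can be sketched as a per-request check. The session fields, posture checks, and timeout value below are illustrative assumptions, not a specific product's behavior:

```python
SESSION_MAX_AGE = 300  # seconds before forced re-authentication (illustrative value)

def verify_request(session, request_time, device_posture):
    """Re-validate every request instead of trusting a one-time login.

    Returns True only if the session is authenticated and fresh, and
    the device currently passes posture checks.
    """
    if not session.get("authenticated"):
        return False
    if request_time - session["issued_at"] > SESSION_MAX_AGE:
        return False  # stale session: force re-authentication
    if not device_posture.get("disk_encrypted") or device_posture.get("os_outdated"):
        return False  # device posture has drifted since login
    return True

session = {"authenticated": True, "issued_at": 1000}
posture = {"disk_encrypted": True, "os_outdated": False}
print(verify_request(session, request_time=1100, device_posture=posture))  # fresh: allowed
print(verify_request(session, request_time=2000, device_posture=posture))  # stale: denied
```

The point of the sketch is that a request denied at time 2000 was allowed at time 1100 with no change in credentials: trust decays, and posture is rechecked on every access.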

Zero Trust isn’t just about locking things down; it’s also about providing consistent security and a common experience for users, wherever they are. So, whether a user is at home or in the office, they get treated the same from a security and risk perspective. Just because a user walked into an office doesn’t mean they should automatically be granted access privileges.

Zero Trust isn’t only about users: the same principles apply to cloud workloads and infrastructure components like OT devices or network nodes. Devices still need to be authenticated, and their access authorized against what they are actually trying to do, and that control is what the Zero Trust model can provide.

All of these aspects of Zero Trust enable the heightened security posture that critical infrastructure demands.

Zero Trust is a strategic initiative that helps prevent successful data breaches by eliminating the concept of implicit trust from an organization’s network architecture. The most important objectives in critical infrastructure cybersecurity are preventing damaging cyber-physical effects on assets, avoiding the loss of critical services, and preserving human health and safety. Critical infrastructure’s purpose-built nature, its correspondingly predictable network traffic, and its challenges with patching make it an ideal environment for Zero Trust.

Applying a Zero Trust approach that fits critical infrastructure

It’s important to realize that Zero Trust is not a single product; it’s a journey that organizations will need to take.

Going from a traditional network architecture to Zero Trust, especially in critical infrastructure, is not going to be a “one-and-done” effort that can be achieved with the flip of a switch. Rather, the approach we recommend is a phased model that can be broken down into several key steps:

1. Identifying the crown jewels. A foundational step is to first identify what critical infrastructure IT and OT assets are in place.

2. Visibility and risk assessment of all assets. You can’t secure what you can’t see. Broad visibility that includes behavioral and transaction flow understanding is an important step, not only to evaluate risk but also to inform the creation of Zero Trust policies.

3. OT-IT network segmentation. It is imperative to separate IT from OT networks to limit risk and minimize the attack surface.

4. Application of Zero Trust policies. This includes:

Least-privileged access and continuous trust verification, which is a key security control that greatly limits the impact of a security incident

Continuous security inspection that ensures transactions are safe by stopping threats — both known and unknown, including zero-day threats — without affecting user productivity
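The OT-IT segmentation step (step 3) and deny-by-default policy application (step 4) can be illustrated together with a small flow-allowlist sketch. The zone names, ports, and protocols below are illustrative assumptions, not a recommended architecture:

```python
# Explicit allowlist of permitted flows as (source zone, destination zone, port).
# Any flow not listed is blocked -- direct IT-to-OT traffic never appears here.
ALLOWED_FLOWS = {
    ("it", "dmz", 443),    # IT reaches the DMZ over HTTPS
    ("dmz", "ot", 4840),   # only the DMZ jump host reaches OT (OPC UA port)
}

def flow_permitted(src_zone, dst_zone, port):
    """Deny-by-default segmentation: only explicitly allowed flows pass."""
    return (src_zone, dst_zone, port) in ALLOWED_FLOWS

print(flow_permitted("it", "dmz", 443))   # permitted hop toward the DMZ
print(flow_permitted("it", "ot", 4840))   # blocked: no direct IT -> OT path
```

Forcing all IT-to-OT traffic through an intermediate zone is what keeps a compromised office workstation from touching industrial controllers directly.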

By definition, critical infrastructure is vital. It needs to be operationally resilient, be able to reduce the potential attack surface, and minimize the new or expanding risks created by digital transformation. When applied correctly, a Zero Trust approach to security within critical infrastructure can play a central role in all of this — ensuring resilience and the availability of services that society depends on every day.

Learn more about our Zero Trust approach.

About Anand Oswal:

Anand serves as Senior Vice President and GM at cybersecurity leader Palo Alto Networks. Prior to this, Anand was Senior Vice President of Engineering for Cisco’s Intent-Based Networking Group. At Cisco, he was responsible for building the complete set of platforms and solutions for the Cisco enterprise networking portfolio. The portfolio spans enterprise products across routing, access switching, IoT connectivity, wireless, and network and cloud services deployed for customers worldwide.

Anand is a dynamic leader, building strong, diverse, and motivated teams that continually excel through a relentless focus on execution. He holds more than 50 U.S. patents and is focused on innovation and inspiring his team to build awesome products and solutions.


One of the most important components of data privacy and security is being compliant with the regulations that call for the protection of information.

Regulators want to see transparency and controllability within organizations, because that is what makes them trustworthy from a data privacy and security standpoint. Ideally, organizations will deploy systems that provide compelling evidence to support their claims that they are meeting their requirements to deliver the protection and performance needed by stakeholders.

Protecting data from theft and improper use has long been the domain of cybersecurity and IT executives. But today, this is really a concern for the entire C-suite and, in many cases, the board of directors, all of whom are well aware of the repercussions of a data breach and failing to comply with regulations.

There is simply too much at risk when companies don’t ensure a level of control and trust in how they handle data. This is the case because of several converging trends:

The ongoing growth in the volume of business data, including a huge amount of information about customers and employees — much of it personal and personally identifiable.

The importance this data holds from a strategic standpoint. Companies rely on the insights they gain from analyzing market data to provide a competitive advantage.

An ever-expanding threat landscape, with increasingly sophisticated and well-financed cybercriminals going after this data for profit.

A disappearing enterprise “perimeter” with the increase in cloud services, remote work and mobile devices used by employees in various locations. The idea of a fixed perimeter protected by a firewall no longer applies to most organizations.

In the midst of all this is the increase in government regulations designed to hold organizations accountable for how they gather, store, share and use data. An organization that fails to comply with such regulations can face stiff fines and other penalties, as well as negative publicity and damage to its brand.

Gaining trust and control

One of the challenges with establishing control and trust with data is a lack of visibility regarding the data: where it resides, who has access to it, how it is being used, etc. Organizations need to know their level of risk and how risk can be mitigated, as well as their level of progress in enhancing data security and privacy.

Endpoint devices present a particularly high level of cyber risk, because of the challenges of managing a large and growing number of mobile devices and apps in the workplace, as well as desktops and laptops used for remote work. Many threat actors target corporate data for theft and extortion, and endpoint devices present potential entry points into an organization.

The endpoint attack surface has expanded quickly over the past few years, thanks in large part to the growth of remote and hybrid work. For many organizations, there is a sense that the attack surface is spiraling out of control, because of the challenge of gaining visibility and control of this environment. They realize that just a single compromised endpoint could result in an attack that causes significant financial and reputational damage.

Unfortunately, few tools on the market are designed specifically to monitor and manage cyber risk on a unified basis. Organizations have had to stitch together point solutions to get by. And in many cases, they lack data that is current, accurate, comprehensive, and contextual.

In addition, many organizations lack the ability to measure and compare corporate risk scores with industry peers; quickly take action after risk is scored; set goals for vulnerability remediation; and prioritize which areas to spend limited security resources on.

In order to build trust and gain better control of data, organizations need to leverage technology that gives them the ability to know how vulnerable their critical assets are, whether they are achieving their goals to improve security posture, how they measure up against industry peers, and what they should be doing to become more secure.

Ideally, technology tools should be able to provide organizations with real-time comparisons with industry peers in areas such as systems vulnerability, outstanding patches and lateral movement risk.

From a visibility standpoint, tools should identify vulnerability and compliance gaps across all endpoints used in an organization, enabling organizations to prioritize those issues that represent the highest risk, visualize complex relationships between assets and collect real-time feedback. They should be able to track each asset by collecting comprehensive data on all endpoints in real time.

In terms of control, security tools need to help organizations greatly reduce the attack surface by managing patches, software updates and configurations. Metrics should provide a clear sense of progress over time and indicate where improvements are needed.

From a trust perspective, tools should provide a single, accurate view of risks, enabling risk scoring and dashboards that give executives a clear sense of the level of risks and how they can be mitigated.
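As an illustration of such risk scoring, the sketch below computes a composite endpoint risk score from a few common signals. The weights, caps, and field names are illustrative assumptions, not any vendor's actual scoring model:

```python
def endpoint_risk_score(endpoint):
    """Weighted composite risk score from 0 (low) to 100 (high).

    Each signal contributes a capped amount so no single factor
    dominates; weights here are illustrative only.
    """
    score = 0
    score += min(endpoint["critical_vulns"] * 15, 45)        # unpatched critical CVEs
    score += 20 if endpoint["os_end_of_life"] else 0          # unsupported OS
    score += min(endpoint["days_since_patch"] // 30 * 5, 20)  # patch staleness
    score += 15 if endpoint["admin_rights"] else 0            # local admin exposure
    return min(score, 100)

fleet = [
    {"critical_vulns": 2, "os_end_of_life": False, "days_since_patch": 10, "admin_rights": False},
    {"critical_vulns": 4, "os_end_of_life": True, "days_since_patch": 200, "admin_rights": True},
]
scores = [endpoint_risk_score(e) for e in fleet]
print(scores)       # per-endpoint scores for the dashboard
print(max(scores))  # the worst endpoint drives remediation priority
```

Scores like these become useful for the peer comparisons discussed above only once they are computed consistently across the whole fleet, which is why a single, unified data source matters.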

When it comes to ensuring compliance with data privacy regulations, IT and security leaders need to establish trust and control within their organizations’ environments. That’s the only way to demonstrate to regulators — as well as to customers, employees and business partners — that they are taking data privacy seriously and taking the necessary steps.

The most effective ways to be compliant and at the same time enhance data security are to gain greater visibility into the organization’s infrastructure, including every endpoint device, evaluate the effectiveness of security solutions and make needed improvements, and compare risk metrics with those of comparable organizations.

Assess the risk of your organization with the Tanium Risk Assessment. Your customized risk report will include your risk score, proposed implementation plan, how you compare to industry peers, and more.



Businesses are always in need of the most robust security possible. As the remote workforce expanded during and post-COVID, so did the attack surface for cybercriminals—forcing security teams to pivot their strategy to effectively protect company resources. Furthermore, the rise of organisations moving to the cloud, increasing complexity of IT environments, and legacy technical debts means tighter security mechanisms are vital.

During this time of change, the hype around Zero Trust increased, but with several different interpretations of what it is and how it helps. Zero Trust means — as the name suggests — to trust nothing by default.

Zero Trust isn’t software in itself, but a strategy. Meeting the mandate will mean using a number of approaches, techniques and software types. The challenge only grows for those working piecemeal, without an overarching plan for using software and platforms that work together. In this article, I’ll discuss whether Zero Trust is a strategy all businesses should strive towards, the growing shift towards a holistic security approach and how XDR aligns with Zero Trust.

Is Zero Trust an achievable goal for all businesses?

Zero Trust is an approach, not something that can be purchased. Just like a company will never be “100% secure”, it will never likely have “achieved Zero Trust.” That doesn’t mean security and Zero Trust are abandoned, but instead they are goals that are continuously strived for.

At Trend Micro, we leverage the terminology and concept of “Zero Trust” to help our own employees gain awareness of cybersecurity, while focusing on enhancements of foundational cybersecurity maturity through people, process and technology:

People – Enhancing awareness; turning the weakest link into the strongest link in defending against cyber threats.

Process – Developing, communicating and enforcing cybersecurity policy with alignment to enterprise risk management prioritisation and remediation.

Technology – Leveraging telemetry data integration and machine learning to gain full cyber risk visibility for action.

It is extremely costly to achieve the highest maturity of Zero Trust in an IT environment and in most cases, it is not economically feasible nor practical to do so. The maturity level should depend on the enterprise’s risk management framework and approaches as well as its data classification.

Shifting towards a holistic approach

Organisations often begin their Zero Trust journey when faced with new security considerations as they move to the cloud. Migrating on-premises resources to the cloud entails monitoring a growing digital attack surface: all possible entry points for unauthorised access into a system that is typically complex, massive, and constantly evolving.

Since the cloud doesn’t have a perimeter like on-premises environments, IT teams are struggling to keep up. A recent global study by Trend Micro found that SecOps teams lack confidence in their ability to prioritise or respond to alerts, with 54% of respondents saying they were “drowning in alerts”. With many enterprises using a hybrid cloud environment, operating several siloed point products to catch cyberthreats can be extremely challenging.

Organisations should look towards a holistic approach, adopting defence-in-depth security with multiple layers of protection. A unified cybersecurity platform, like Trend Micro One, provides enterprise-wide visibility, detection, and response combined with the security capabilities you need throughout the attack surface risk lifecycle. Our platform enables SecOps teams by providing a single point of truth across the entire infrastructure, gathering telemetry from all environments and correlating threat data to deliver fewer, but highly relevant, alerts to manage.

How XDR creates a solid foundation for Zero Trust

To properly assess the trustworthiness of any devices or applications, you need comprehensive visibility across your environment. A well-implemented XDR solution provides full cyber risk visibility into an IT environment; used in tandem with the Zero Trust approach, it further enhances an organisation’s security.

Monitoring and managing behaviour patterns of user access and data access are critical parts of Zero Trust. Trend Micro’s XDR solution offers automated detection and responses through machine learning and big data analysis. XDR automated response enforces consistent security policy while aligning to enterprise risk management.

Since XDR is constantly collecting and correlating data, it establishes a continuous assessment pillar of the Zero Trust strategy. This means that even after you’ve approved initial access for an endpoint, that asset will continually be reviewed and reassessed to ensure it remains uncompromised.
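A toy model of that continuous reassessment might look like the following. The telemetry fields, scoring rule, and threshold are illustrative assumptions, not Trend Micro's actual implementation:

```python
def reassess(asset, new_telemetry, threshold=70):
    """Fold fresh telemetry into an asset's risk and revoke access if it drifts.

    Risk only ratchets upward here for simplicity; a real system would
    also decay risk as suspicious signals age out.
    """
    asset["risk"] = max(asset["risk"], new_telemetry["anomaly_score"])
    asset["access_granted"] = asset["risk"] < threshold
    return asset

laptop = {"id": "ep-42", "risk": 10, "access_granted": True}
reassess(laptop, {"anomaly_score": 25})
print(laptop["access_granted"])  # still under threshold: access retained
reassess(laptop, {"anomaly_score": 90})
print(laptop["access_granted"])  # suspicious activity observed: access revoked
```

The key behavior is the second call: the endpoint was already approved, yet new telemetry alone is enough to revoke its access, which is exactly the continuous-assessment pillar described above.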

All businesses should strive for a foundational level of Zero Trust. To address the complexity of risk, the process needs to be treated like a lifecycle, in which continuous visibility and assessment are used to discover an organisation’s attack surface, assess the risk, and then mitigate the risk. At Trend Micro, we advise our customers to take Zero Trust implementation one step at a time.


The discovery of the Log4j vulnerability in December 2021 is one of the more recent and prominent reminders of why cybersecurity teams need to implement a zero-trust security architecture.

Not that they should need reminders. Incidents are happening every day, and some of them—such as ransomware attacks that impact entire supply chains—make the headlines. In the case of Log4j, a Java-based logging utility that’s part of the Apache Logging Services, security researchers found a zero-day security vulnerability involving arbitrary code execution.

This was no garden-variety vulnerability. Security experts described the flaw as one of the biggest and most critical discovered in recent years, and it provides a glaring example of how at risk organizations can be. New software vulnerabilities are uncovered all the time, some of them leading to serious security breaches and lost data.

As cybersecurity and IT leaders know all too well, the complexities of security have increased significantly in recent years. Not only are attacks getting increasingly sophisticated, but cybercriminals are more organized than before, in some cases well-financed by nation-states.

In addition, the attack surface has broadened considerably in recent years. Hybrid and remote work models mean more people are working remotely and, in many cases, using their own devices and networks to access critical business data.

Furthermore, the use of cloud services and multi-cloud strategies continues to increase. Sometimes cloud deployments are not even on the radar of central IT and therefore not managed as other IT assets might be. Given the rise of cloud services, remote work, and mobile environments, the concept of perimeter defense has been obliterated. There really is no such thing as a perimeter, or perimeter defense, anymore.

The need for zero trust

All of these developments provide good reasons for organizations to shift to a zero-trust model of cybersecurity. The idea of zero trust is fairly simple: trust no user or device, and always verify. A successful zero-trust approach considers three things: a user’s credentials, the data the user is trying to access, and the device the individual is using.

By combining the principle of least privilege with a modern approach built on contextual access, multi-factor authentication (MFA), and network access controls, organizations can maintain a more agile security model that is well suited to a cloud-heavy and mobile-centric environment.

The result of the zero-trust approach is that organizations can reduce their attack surface and ensure that sensitive data can only be accessed by users who need it, under an approved and validated context. This greatly reduces risk.

Traditional zero-trust practices have typically focused on network access and identity and access management (IAM) through single sign-on (SSO). With remote work now encompassing such a large portion of end-user access, however, device posture is increasingly important as devices act as the new perimeter in a perimeter-less world.

By adding device validation to their security protocol, enterprises can defend against criminals who steal credentials or devices and use them to gain access to networks and data, even when MFA is in place.

Even when a network environment is monitored for non-compliance and critical vulnerabilities, securing the device itself is the last line of defense against the compromise of sensitive data. This is why it’s so important to adopt a converged endpoint management solution as part of the zero-trust approach.

Here are some of the key components of a zero trust practice organizations should consider:

Device compliance monitoring and enforcement. This confirms the security posture of devices and gives security teams the control to take action if something is not right.

IAM. This provides authentication checks to confirm an individual’s identity and compares the user’s access against role-based rules.

Network access. Organizations can control access to resources and network segments based on a user’s persona and the device being used.
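In code, these three checks combine into a single default-deny access decision. The sketch below is illustrative only; the roles, resource names, and policy rules are hypothetical assumptions, not drawn from any particular product:

```python
from dataclasses import dataclass

@dataclass
class Device:
    os_patched: bool
    disk_encrypted: bool

@dataclass
class User:
    id: str
    roles: set
    mfa_verified: bool

# Hypothetical role-based rules: which roles may reach which resources.
# An unlisted resource maps to an empty set, i.e. deny by default.
ROLE_RULES = {
    "hr-database": {"hr-admin"},
    "build-server": {"engineer"},
}

def device_compliant(device: Device) -> bool:
    """Device compliance check: posture must meet policy before any access."""
    return device.os_patched and device.disk_encrypted

def authorize(user: User, device: Device, resource: str) -> bool:
    """Grant access only when identity, device, and role checks all pass."""
    if not user.mfa_verified:          # IAM: authentication check
        return False
    if not device_compliant(device):   # device compliance enforcement
        return False
    allowed = ROLE_RULES.get(resource, set())  # network/resource access rules
    return bool(user.roles & allowed)          # least privilege by role

laptop = Device(os_patched=True, disk_encrypted=True)
alice = User(id="alice", roles={"hr-admin"}, mfa_verified=True)
print(authorize(alice, laptop, "hr-database"))   # True
print(authorize(alice, laptop, "build-server"))  # False: role not permitted
```

The key design choice is that every branch falls through to denial: access is granted only when all three checks pass, mirroring the "untrusted by default" principle.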

Laying the security fundamentals

Along with deploying the zero-trust approach, organizations should be sure to pay heed to security fundamentals. For example, they need to patch vulnerabilities as soon as they are identified. The Log4j development showed why that is important.

Patches should be installed and updated, but not in a haphazard way. A comprehensive patch-management program should encompass every device in the organization that connects to the internet or corporate networks.

Another good practice is to reassess all endpoints where systems are vulnerable to attacks. This includes conducting an audit of all those systems and devices that have administrative access to network systems, and an evaluation of the security protections on any sensors or other internet of things (IoT) devices tied to networks.

On a longer-term basis, companies need to reassess how they gather, store, and categorize the growing volumes of data they are managing. That might mean segmenting data so that more stringent security controls are placed on access to the most sensitive data such as personal information or intellectual property.

In addition, organizations need to be vigilant about using MFA and strong passwords. Networks have been compromised because hackers guessed users’ passwords, which suggests a need for policies that require more complex passwords or the use of MFA.

Users can be careless when it comes to cybersecurity practices, so providing good training programs and running awareness campaigns are also good ideas to educate everyone in the organization. These programs should cover signs to look for that indicate phishing and other attacks as well as social engineering techniques frequently used by bad actors to gain sensitive information or network access.

By deploying a zero-trust model and taking care of the cybersecurity “basics,” organizations can put themselves in a position to defend against the latest threats, including ransomware. 

Security today requires more than simply managing identities and authenticating users. It needs to assume that anyone or anything trying to get into the network is an intruder until proven otherwise.

Embracing the age of zero-trust security 

It’s a perfect confluence of events for zero trust to take center stage in the world of cyber security: the rise of hybrid and remote work, the ongoing shift to cloud services, the continuing growth of mobile devices in the workplace, and an onslaught of sophisticated attacks that can impact entire supply chains.

Never have organizations faced so many challenges in protecting their data resources, and never have they needed to be more suspicious of users and devices trying to access their networks. The zero-trust model, with its core principle that users, devices, applications, and even networks should not be trusted by default, even if they are connected to a verified network and even if they were previously verified, is well suited to today’s typical IT environment.

There is simply too much risk that an outside entity trying to gain access actually has nefarious intent. There is too much at stake to trust anyone or anything. One of the more notable effects of the shift to zero trust is the realization that traditional virtual private networks (VPNs) are no longer fully capable of securing remote access to corporate networks.

The distributed workforce at an organization might have access to highly regulated customer data through on-premises or cloud-based customer relationship management and enterprise resource planning systems. They might also need to access commercially sensitive intellectual property—all of this from personal devices.

Organizations need an effective way to secure and authenticate these users, and unfortunately, traditional VPNs have struggled to keep up with the traffic workloads that work-from-home generates.

Research by Tanium has found that overtaxed VPNs were the second biggest security challenge for organizations transitioning to a distributed workforce. The problems with legacy VPNs have not only imperiled the security of traffic flows but are also contributing to a growing risk of security threats related to endpoints.

When the pandemic hit and organizations were forced to allow many employees to work from home, they relied on VPNs to support their distributed workforces, but with less than stellar results. While VPNs are familiar to many users and already in use for remote access, they are not the ideal tools to provide secure access for so many users relying on devices that in many cases are not as secure as they should be.

VPNs will not provide adequate defense against threats aimed at the home networks many users rely on when working remotely. In addition, the sheer number of VPNs a company might need to support an enormous mobile or hybrid workforce means the management and maintenance burdens could be overwhelming.

Zeroing in on zero trust

To truly provide secure access for a large number of remote workers, organizations need to think beyond VPNs and fully adopt the zero-trust model of cybersecurity.

With a zero-trust strategy and tools, it’s easier for security teams to provide secure access to applications, because they have more granular access controls and users do not get blanket permissions. Access rights are very specific and require continuous verification.

Device validation is also a key tenet of a successful zero-trust strategy, and with remote work making up such a large portion of end-user access today, device posture is extremely important. Devices are in many cases the new “perimeter” within organizations, and device validation enables organizations to protect against stolen credentials or even stolen devices that cybercriminals can use to gain access to networks.

This is why practicing strong endpoint management is such an important part of a zero-trust approach. Without real-time and accurate endpoint management, organizations can’t enforce compliance or validate device posture as a prerequisite for access. Authentication alone can’t ensure that a device is secured.

The right tools can allow security teams to continuously check device posture against policies, to ensure that the zero-trust approach really does trust no one, even after identity and access policies are in place. Ideally, organizations should be able to integrate new zero-trust solutions with the tools they already use, so they don’t have to start from scratch.
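The continuous posture check described above can be sketched in a few lines. The policy attributes and session structure here are hypothetical, purely to illustrate the idea that trust is withdrawn as soon as a device drifts out of compliance:

```python
# Hypothetical posture policy: attributes every device must keep satisfying
# for its session to remain trusted after initial access is granted.
POLICY = {"firewall_on": True, "os_patched": True}

def posture_ok(device: dict) -> bool:
    """A device stays trusted only while it meets every policy attribute."""
    return all(device.get(key) == value for key, value in POLICY.items())

def reassess(sessions: dict) -> list:
    """Revoke any session whose device posture has drifted out of compliance."""
    revoked = [sid for sid, dev in sessions.items() if not posture_ok(dev)]
    for sid in revoked:
        del sessions[sid]   # approval is continuous, never permanent
    return revoked

sessions = {
    "s1": {"firewall_on": True, "os_patched": True},
    "s2": {"firewall_on": False, "os_patched": True},  # drifted after approval
}
print(reassess(sessions))  # ['s2']
print(list(sessions))      # ['s1']
```

In a real deployment, `reassess` would run on a schedule or on telemetry events from an endpoint management agent; the point is simply that the check repeats for the lifetime of the session.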

The concept of zero trust might come across as negative—even paranoid: Don’t trust anything, whether it’s devices and other endpoints, applications, networks or individuals. But what the model really indicates is that organizations are operating in uniquely challenging times, and much is at stake when a data breach or ransomware attack occurs.

More people are working remotely, in many cases using their own devices and networks. Companies are relying on cloud services more than ever. Attacks have become more sophisticated and can impact entire supply chains.

Organizations need to take the initiative to ensure that valuable data resources are always protected and to be certain that the users and devices trying to access their networks will not do harm. Implementing a zero-trust strategy is a truly effective way to achieve this level of security.

Learn how to migrate to a zero-trust architecture with real-time visibility and control of your endpoints here.


In business, data science and artificial intelligence are usually geared towards powerful efficiencies and growth. User trust is often overlooked. This can quickly morph into a major problem, particularly when AI is introduced to support strategic choices.

Data science and AI teams focus constantly on methodology and accuracy. This is critical, ensuring algorithms deliver valuable insights, analytics and support increased automation.

Nevertheless, most organizations face growing problems around users’ trust in algorithms. On the one hand, the quality of automated analysis is not clearly understood, and on the other, there is a perceived threat of machines making people’s own expertise redundant. This has become a particular difficulty in a crucial area of AI: decision support.

“The moment that models start guiding strategic decisions, there is a shift in requirements,” explains René Traue, senior data scientist at the market intelligence and consultancy firm GfK. “Users must be able to deeply trust the applications. They have to find them indispensable when making major choices. If not, they can end up walking away from them.”

Building confidence

In order to overcome this issue, the applications running AI algorithms must be designed to build confidence in the outcomes. “Think of a decision support system as being like an assisted driving car. That car might automatically brake if you get too close to the car in front, or correct the steering if you drift out of your lane. However, many people would not be happy to go straight into trusting the automation to take control in this way: first they need to gain confidence in the quality of the support system,” Traue explains.

Carmakers have acted by adding warnings when their cars are about to self-brake, or ensuring drivers keep ultimate control through the steering wheel when any correction is being made.

“Drivers can then increasingly trust the car to make the right decisions. They can stop instinctively ‘fighting it’ and allow the automation to work,” Traue says. “It’s the same idea in business. Decision support must be applied in a very transparent way, allowing the user to keep a key level of control at first, while the system proves itself to be consistently good and helpful.” There is an additional key requirement: company strategists expect to receive clear evidence from the system to back up any actions advised.

Respecting limits

GfK’s own decision support system, gfknewron, informs decisions in contexts including forecasting sales, setting prices, making brand decisions, and scenario testing, to name just a few. “We remain acutely aware of the importance of getting our solutions right, so we are completely focused on what works and what the limitations are,” Traue explains. This includes ensuring any analytical conclusions are not only built on extensive data, but also run through a rigorous quality assurance process. GfK’s system examines all results, and flags or even suppresses any that have possible quality problems – allowing GfK’s human experts to review and accept or correct, as necessary. This is a critical area of investment, to avoid any risk of sending out potentially misleading guidance.
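The flag-or-suppress quality gate described above can be illustrated with a simple confidence triage. The thresholds and result names below are assumptions for illustration, not details of GfK's actual pipeline:

```python
# Hypothetical confidence thresholds for a QA gate over model outputs.
REVIEW_THRESHOLD = 0.8    # below this, route to a human expert first
SUPPRESS_THRESHOLD = 0.5  # below this, never show the result at all

def triage(results):
    """Split (name, confidence) results into passed, flagged, and suppressed."""
    passed, flagged, suppressed = [], [], []
    for name, confidence in results:
        if confidence < SUPPRESS_THRESHOLD:
            suppressed.append(name)   # withheld: possible quality problem
        elif confidence < REVIEW_THRESHOLD:
            flagged.append(name)      # human expert reviews, accepts, or corrects
        else:
            passed.append(name)       # delivered to the decision maker
    return passed, flagged, suppressed

results = [("price-forecast", 0.93), ("brand-lift", 0.65), ("promo-uplift", 0.4)]
print(triage(results))  # (['price-forecast'], ['brand-lift'], ['promo-uplift'])
```

The design point is that the system never silently ships a doubtful result: anything below the review bar is either checked by a person or withheld entirely, which is what sustains user trust over time.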

“gfknewron is designed so that people can understand the rationale for the recommendations it gives them. We constantly assess the algorithms, using not only our data scientists but also – and increasingly – our specialist MLOps analysts, who continuously monitor the validity and accuracy of our models,” he says. “We want to help decision makers trust the utter reliability of gfknewron in accelerating good choices and freeing up their time.” In addition, the company encourages radically transparent feedback from users.

Eliminating complexity

Just as one negative experience can make people avoid an AI-powered decision support system altogether, a beneficial experience tends to result in increased trust. There is enormous potential for AI to support more and more decision areas, when users see it working well. Traue concludes: “The world is becoming so complex. Tech and consumer brands may be managing multiple products, distribution channels, promotion campaigns, and marketing channels at any one time. When decision-makers have trustworthy AI to cut through this complexity and data, they can focus their time on identifying the best option from the recommendations, to develop a competitive advantage in the market.”

To find out more about gfknewron, visit www.gfk.com/products/gfknewron
