Technology mergers and acquisitions are on the rise, and any one of them could throw a wrench into your IT operations.

After all, many of the software vendors you rely on for point solutions likely offer cross-platform or multiplatform products, linking into your chosen ERP and its main competitors, for example, or to your preferred hyperscaler, as well as other cloud services and components of your IT estate.

What’s going to happen, then, if that point solution is acquired by another vendor — perhaps not your preferred supplier — and integrated into its stack?

The question is topical: Hyperconverged infrastructure vendor Nutanix, used by many enterprises to unify their private and public clouds, has been the subject of takeover talk ever since Bain Capital invested $750 million in it in August 2020. Rumored buyers have included IBM, Cisco, and Bain itself, and in December 2022 reports named HPE as a potential acquirer of Nutanix.

We’ve already seen what happened when HPE bought hyperconverged infrastructure vendor SimpliVity back in January 2017. Buying another vendor in the same space isn’t out of the question, as Nutanix and SimpliVity target enterprises of different sizes.

Prior to its acquisition by HPE, SimpliVity supported its hardware accelerator and software on servers from a variety of vendors. It also offered a hardware appliance, OmniCube, built on OEM servers from Dell. HPE, though, now sells SimpliVity only as an appliance, built on its own ProLiant servers.

Customers of Nutanix who aren’t customers of HPE might justifiably be concerned — but they could just as easily worry about the prospects of an acquisition by IBM, the focus of earlier Nutanix rumors. IBM no longer makes its own servers, but it might focus on integrating the software with its Red Hat Virtualization platform and IBM Cloud, to the detriment of other customers relying on other integrations.

What to ask

The question CIOs need to ask themselves is not who will buy Nutanix, but what to do if a key vendor is acquired or otherwise changes direction — a fundamental facet of any vendor management plan.

“If your software vendor is independent then the immediate question is: Is the company buying this one that I’m using? If that’s true, then you’re in a better position. If not, then you immediately have to start figuring out your exit strategy,” says Tony Harvey, a senior director and analyst at Gartner who advises on vendor selection.

A first step, he says, is to figure out the strategy of the acquirer: “Are they going to continue to support it as a pure-play piece of software that can be installed on any server, much like Dell did with VMware? Or is it going to be more like HPE with SimpliVity, where effectively all non-HPE hardware was shut down fairly rapidly?” CIOs should also be looking at what the support structure will be, and the likely timescale for any changes.

Harvey’s focus is on data center infrastructure but, he says, whether the acquirer is a server vendor, a hyperscaler, or a bigger software vendor, “It’s a similar calculation.” There’s more at stake if you’re not already a customer of the acquirer.

A hyperscaler buying a popular software package will most likely be looking to use it as an on-ramp to its infrastructure, moving the management plane to the cloud but allowing existing customers to continue running the software on premises on generic hardware for a while, he says: “You’ve got a few years of runway, but now you need to start thinking about your exit plan.”

It’s all in the timing

The best time to plant a tree, they say, is 20 years ago, and the second best is right now. You won’t want your vendor exit plans hanging around quite as long, but now is also a great time to make or refresh them.

“The first thing to do is look at your existing contract. Migrating off this stuff is not a short-term project, so if you’ve got a renewal coming up, the first thing is to get the renewal done before anything like this happens,” says Harvey. If you just renewed, you’ll already have plenty of runway.

Then, talk to the vendor to understand their product roadmap — and tell them you’re going to hold them to it. “If that roadmap meets your needs, maybe you stay with that vendor,” he says. If it doesn’t, “You know where you need to go.”

Harvey points to Broadcom’s acquisition of Symantec’s enterprise security business in 2019 — and the subsequent price hikes for Symantec products — as an example of why it’s helpful to get those contract terms locked in early. Customer backlash from those price changes also explains why Broadcom is so keen to talk about its plans for VMware following its May 2022 offer to buy the company from Dell.

The risks that could affect vendors go far beyond acquisitions or other strategic changes: There’s also their general financial health, their ability to deliver, how they manage cybersecurity, regulatory or legislative changes, and other geopolitical factors.

Weigh the benefits

“You need to be keeping an eye on these things, but obviously you can’t war-game every event, every single software vendor,” he says.

Rather than weigh yourself down with plans for every eventuality, rank the software you use according to how significant it is to your business and how difficult it is to replace, and have a pre-planned procedure in case a key application is targeted for acquisition.

“You don’t need to do that for every piece of software, but moving from SAP HANA to Oracle ERP or vice versa is a major project, and you’d really want to think about that.”
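
As a rough illustration of that ranking, here is a minimal Python sketch that scores each vendor by business significance and switching difficulty, then flags which ones warrant a written exit plan. The vendors, 1–5 scores, and threshold are hypothetical examples, not a Gartner method.

```python
# Illustrative triage: score vendors by how significant they are to the
# business and how hard they are to replace; only the highest-exposure
# ones get a pre-planned acquisition playbook. All values are hypothetical.
vendors = [
    {"name": "ERP suite",        "significance": 5, "switching_cost": 5},
    {"name": "HCI platform",     "significance": 4, "switching_cost": 4},
    {"name": "Diagramming tool", "significance": 2, "switching_cost": 1},
]

for v in sorted(vendors,
                key=lambda v: -(v["significance"] * v["switching_cost"])):
    exposure = v["significance"] * v["switching_cost"]
    needs_plan = exposure >= 12  # assumed threshold for a written exit plan
    print(f'{v["name"]}: exposure={exposure}, exit plan needed={needs_plan}')
```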

There is one factor in CIOs’ favor when it comes to such important applications, he says, citing the example of Broadcom’s planned acquisition of VMware: “It’s the kind of acquisition that does get ramped up to the Federal Trade Commission and the European Commission, and gets delayed for six months as they go through all the legal obligations, so it really does give you some time to plan.”

It’s also important to avoid analysis paralysis, he says. If you’re using a particular application, it’s possible that the business value it delivers now outweighs the consequences of the vendor being acquired at some point in the future. Or perhaps the functionality it provides is really just a feature that will one day be rolled into the larger application it augments, in which case it can be treated as a short-term purchase.

“You certainly should look at your suppliers and how likely they are to be bought, but there’s always that trade-off,” he concludes.

Mergers and Acquisitions, Risk Management, Vendor Management

Merger and acquisition (M&A) activity hit a record high in 2021 of more than $5 trillion in global volume. While the market has certainly slowed this year, it remains on par with pre-pandemic levels — quite a feat at a time of business uncertainty and inflation. But when it comes to corporate deal-making, risk lurks around every corner. The potential for overpaying, miscalculating synergies and missing potentially serious deficiencies in a target company is high.

With so much at stake, information is power. But while plenty of focus is centered on gathering financials, reviewing contracts, picking through insurance details and more, insight into IT risk may be harder to come by. Acquiring organizations need a rapid, accurate way to assess and map all of the endpoint assets in a target company, and then work quickly post-completion to assess and manage cyber risk.

The need for visibility

M&A deal volume may have fallen 12% year on year in early 2022, but the market remains bullish, driven by cash-rich private equity firms that are sitting on trillions of dollars, according to McKinsey. Still, security and IT operations are a growing concern for those with money to spend. It’s extremely rare for both sides of a deal to have similar standards for cybersecurity, asset management and key IT policies. That disconnect can cause major problems down the road.

Due diligence is therefore a critical step, enabling acquiring firms to spot potential opportunities for cost savings and synergies while also understanding how risky a purchase the company may be. It benefits both sides. If an acquirer is unable to gain assurances around risk levels, it could call the deal off or lower the offered acquisition price. Should it press on regardless, the organization may experience significant unforeseen problems trying to merge IT systems. Or it might unwittingly take on risk that erodes deal value over time – such as an undiscovered security breach that leads to customer class action suits, regulatory fines and reputational damage.

These concerns are far from theoretical. After the discovery of historic data breaches at Yahoo, Verizon’s purchase price of the internet pioneer was adjusted down by $350m, or around 7% of deal size, back in 2017.  Marriott International was not so lucky when it bought hotel giant Starwood. It wasn’t until September 2018, two years after the acquisition and four years after the initial security breach, that an unauthorized intrusion was finally discovered. The breach turned out to be one of the biggest to date, impacting over 380 million customers, and led to an £18.4m ($21m) fine from the UK’s data protection regulator.

Getting due diligence right

In an ideal world, CIOs would be involved in M&A activity from the very start, asking the right questions and providing counsel to the CEO and senior leadership team on whether to proceed with a target. However, the truth is that this isn’t always the case. Such is the secrecy of deal-making that negotiations are usually limited to a small handful of executives, leaving some bosses on the outside. 

The best way CIOs can rectify this is to proactively educate senior executives about the importance of information security due diligence during M&A. If they succeed in embedding a security-by-design culture at the very top of the organization, those executives should be able to ask the right questions of targeted companies, to judge their level of risk exposure early on. They may even be inclined to invite the CIO in to help.

For most organizations, however, the first critical point at which due diligence can be applied is after an acquisition has been announced. This is where the acquiring company must gather as much information as possible to better understand risk levels and opportunities for cost reduction and efficiencies. SOC 2 compliance would make things run much more smoothly, providing useful insight into the level of security maturity at an acquired firm. But more likely than not, the acquiring company’s CIO will need to rely on their own processes.

Visibility is everything. They need accurate, current data on every single endpoint in the corporate environment, plus granular detail on what software is running on each asset and where there are unpatched vulnerabilities and misconfigurations. That’s easier said than done: most current tools on the market struggle to provide answers to these questions across the virtual machines, containers, cloud servers, home-working laptops and office-based equipment that run the modern enterprise. Even if they are able to provide full coverage, these tools may take days or weeks to deliver results, by which time the information is out of date.
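
As a small illustration of that freshness problem, the Python sketch below flags inventory records whose last check-in is older than an assumed 24-hour tolerance; the hostnames and window are hypothetical.

```python
# Sketch: point-in-time inventories go stale quickly, so flag endpoint
# records whose last check-in exceeds an assumed freshness window.
from datetime import datetime, timedelta, timezone

FRESHNESS = timedelta(hours=24)  # assumed tolerance for "current" data
now = datetime.now(timezone.utc)

endpoints = [
    {"host": "nyc-lt-0381", "last_seen": now - timedelta(hours=2)},
    {"host": "ldn-vm-1142", "last_seen": now - timedelta(days=9)},
]

stale = [e["host"] for e in endpoints if now - e["last_seen"] > FRESHNESS]
print("stale inventory records:", stale)  # -> ['ldn-vm-1142']
```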

Managing post-deal risk

The second opportunity for the CIO is once contracts are signed. Now it’s time to use a unified endpoint management platform to deliver a fast, accurate risk assessment of the acquired company’s IT environment. By inventorying all hardware and software assets, they can develop a machine and license consolidation strategy, eliminating redundant or duplicated software. The same tools should also enable CIOs to distribute new applications to the acquired company, scan for unmanaged endpoints, find and remediate any problems, and enhance IT hygiene across the board.
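
A toy Python sketch of that consolidation pass: diff the two software inventories to surface products licensed on both sides, plus pairs of different products assumed to serve the same function. The product names and role pairing are hypothetical.

```python
# Sketch: after contracts are signed, diff the two inventories to find
# consolidation candidates. Product names and role pairings are hypothetical.
acquirer_sw = {"CrowdStrike", "Zoom", "Jira", "Tableau"}
acquired_sw = {"SentinelOne", "Zoom", "Confluence", "Tableau"}

duplicated = acquirer_sw & acquired_sw  # same product licensed twice
# Different products assumed to fill the same role; keep one of each pair.
overlapping_roles = [("CrowdStrike", "SentinelOne")]

print("consolidate duplicate licenses:", sorted(duplicated))
print("choose one per overlapping role:", overlapping_roles)
```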

M&A is a high-risk, high-pressure world. By prioritizing endpoint visibility and control at every stage of a deal, organizations stand the best chance of preserving business value, reducing cyber risk and optimizing ROI.

Learn more about how Tanium can help manage risk and increase business value during mergers and acquisitions.

Risk Management

Cybersecurity threats and their resulting breaches are top of mind for CIOs today. Managing such risks, however, is just one aspect of the entire IT risk management landscape that CIOs must address.

Equally important is reliability risk – the risks inherent in IT’s essential fragility. Issues might occur at any time, anywhere across the complex hybrid IT landscape, potentially slowing or bringing down services.

Addressing such cybersecurity and reliability risks in separate silos is a recipe for failure. Collaboration across the respective responsible teams is essential for effective risk management.

Such collaboration is both an organizational and a technological challenge – and the organizational aspects depend upon the right technology.

The key to solving complex IT ops problems collaboratively, in fact, is to build a common engineering approach to managing risk across the concerns of the security and operations (ops) teams – in other words, a holistic approach to managing risk. 

Risk management starting point: site reliability engineering

By engineering, we mean a formal, quantitative approach to measuring and managing operational risks that can lead to reliability issues. The starting point for such an approach is site reliability engineering (SRE). 

SRE is a modern technique for managing the risks inherent in running complex, dynamic software deployments – risks like downtime, slowdowns, and the like that might have root causes anywhere, including the network, the software infrastructure, or deployed applications.

The practice of SRE requires dealing with ongoing tradeoffs. The ops team must be able to make fact-based judgments about whether to increase a service’s reliability (and hence, its cost), or lower its reliability and cost to increase the speed of development of the applications providing the service.

Error budgets: the key to site reliability engineering

Instead of targeting perfection – technology that never fails – the real question is how far short of perfect reliability an organization should aim. We call this quantity the error budget.

The error budget represents the total number of errors a particular service can accumulate over time before users become dissatisfied with the service.

Most importantly, the error budget should never equal zero. The operator’s goal should never be to eliminate reliability issues entirely, because such an approach would be both too costly and too slow – impairing the organization’s ability to deploy software quickly and run dynamic software at scale.

Instead, the operator should maintain an optimal balance among cost, speed, and reliability. Error budgets quantify this balance.
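
To see the arithmetic, here is a minimal Python sketch of an error budget for a request-based availability SLO; the 99.9% target and traffic figures are illustrative assumptions, not a recommendation.

```python
# Illustrative error-budget arithmetic for an availability SLO.
SLO_TARGET = 0.999            # assumed target: 99.9% of requests succeed
WINDOW_REQUESTS = 10_000_000  # assumed traffic over a 30-day window

# The budget is everything short of perfection that the SLO permits.
ERROR_BUDGET = (1 - SLO_TARGET) * WINDOW_REQUESTS  # 10,000 failed requests

def budget_remaining(failed_so_far: int) -> float:
    """Fraction of the window's error budget still unspent."""
    return max(0.0, 1 - failed_so_far / ERROR_BUDGET)

# With 4,000 failures mid-window, 60% of the budget remains, so the team
# can still trade some reliability for release velocity.
print(f"{budget_remaining(4_000):.0%} of error budget left")
```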

Bringing SRE to cybersecurity        

To bring the SRE approach to mitigating reliability risks into the cybersecurity team’s work, the team must calculate risk scores for every observed event that might be relevant to the cybersecurity engineer.

Risk scoring is an essential aspect of cybersecurity risk management. “Risk management… involves identifying all the IT resources and processes involved in creating and managing department records, identifying all the risks associated with these resources and processes, identifying the likelihood of each risk, and then applying people, processes, and technology to address those risks,” according to Jennifer Pittman-Leeper, Customer Engagement Manager for Tanium.

Risk scoring combined with cybersecurity-centric observability gives the cybersecurity engineer the raw data they need to make informed threat mitigation decisions, just as reliability-centric observability provides the SRE with the data they need to mitigate reliability issues.
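
As an illustration only, the Python sketch below scores events with a common likelihood-times-impact model and rolls the scores up per asset; the fields, weights, and events are assumptions for the sketch, not Tanium’s scoring method.

```python
# Illustrative risk scoring: likelihood x impact per event, rolled up
# per asset so mitigation effort can follow the scores. The event data
# and weighting are assumptions, not any vendor's model.
from collections import defaultdict

events = [
    {"asset": "db-01",     "likelihood": 0.7, "impact": 9},  # unpatched CVE
    {"asset": "db-01",     "likelihood": 0.2, "impact": 4},  # weak config
    {"asset": "laptop-42", "likelihood": 0.5, "impact": 3},  # stale agent
]

asset_risk = defaultdict(float)
for event in events:
    asset_risk[event["asset"]] += event["likelihood"] * event["impact"]

for asset, score in sorted(asset_risk.items(), key=lambda kv: -kv[1]):
    print(asset, round(score, 2))  # highest-risk assets first
```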

Introducing the threat budget

Once we have a quantifiable, real-time measure of threats, then we can create an analogue to SRE for cybersecurity engineers.

We can posit the notion of a threat budget, which would represent the total number of unmitigated threats a particular service can accumulate over time before a corresponding compromise adversely impacts the users of the service.

The essential insight here is that threat budgets should never be zero, since eliminating threats entirely would be too expensive and would slow the software effort down, just as error budgets of zero would. “Even the most comprehensive… cybersecurity program can’t afford to protect every IT asset and IT process to the greatest extent possible,” Pittman-Leeper continued. “IT investments will have to be prioritized.”

Some threat budget greater than zero, therefore, would reflect the optimal balance among cost, time, and the risk of compromise.
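
Carrying the analogy into code, here is a minimal Python sketch of a threat-budget policy modeled on error-budget policy; the budget size, burn thresholds, and actions are assumptions for illustration.

```python
# Sketch of a "threat budget" enforced like an error budget: the service
# tolerates a bounded number of unmitigated threats per window. The
# ceiling, thresholds, and actions are illustrative assumptions.
THREAT_BUDGET = 25  # assumed ceiling of open, unmitigated threats

def threat_policy(open_unmitigated: int) -> str:
    """Decide, as error-budget policy does for reliability, whether to
    keep shipping or divert engineering effort to mitigation."""
    burn = open_unmitigated / THREAT_BUDGET
    if burn >= 1.0:
        return "freeze releases: budget exhausted, mitigate first"
    if burn >= 0.75:
        return "slow down: prioritise mitigation next sprint"
    return "within budget: continue normal delivery"

print(threat_policy(19))  # -> "slow down: prioritise mitigation next sprint"
```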

We might call this approach to threat budgets Service Threat Engineering, analogous to Site Reliability Engineering.

What Service Threat Engineering really means is that based upon risk scoring, cybersecurity engineers now have a quantifiable approach to achieving optimal threat mitigation that takes into account all of the relevant parameters, instead of relying upon personal expertise, tribal knowledge, and irrational expectations for cybersecurity effectiveness.

Holistic engineering for better collaboration

Even though risk scoring uses the word risk, I’ve used the word threat to differentiate Service Threat Engineering from SRE. After all, SRE is also about quantifying and managing risks – except with SRE, the risks are reliability-related rather than threat-related.

As a result, Service Threat Engineering is more than analogous to SRE. Rather, they are both approaches to managing two different, but related kinds of risks.

Cybersecurity compromises can certainly lead to reliability issues (ransomware and denial of service being two familiar examples). But there is more to this story.

Ops and security teams have always had a strained relationship, as they work on the same systems while having different priorities. Bringing threat management to the same level as SRE, however, may very well help these two teams align over similar approaches to managing risk.

Service Threat Engineering, therefore, targets the organizational challenges that continue to plague IT organizations – a strategic benefit that many organizations should welcome.

Learn how Tanium is bringing together teams, tools, and workflows with a Converged Endpoint Management platform.

Risk Management

Cybersecurity breaches can result in millions of dollars in losses for global enterprises and they can even represent an existential threat for smaller companies. For boards of directors not to get seriously involved in protecting the information assets of their organizations is not just risky — it’s negligent.

Boards need to be on top of the latest threats and vulnerabilities their companies might be facing, and they need to ensure that cybersecurity programs are getting the funding, resources and support they need.

Lack of cybersecurity oversight

In recent years boards have become much more engaged in security-related issues, thanks in large part to high-profile data breaches and other incidents that brought home the real dangers of insufficient security. But much work remains to be done. The fact is, at many organizations, board oversight of cybersecurity remains inadequate.

Research has shown that many boards are not prepared to deal with a cyberattack, with no plans or strategies in place for cybersecurity response. Few have a board-level cybersecurity committee in place.

More CIOs are joining boards

On a positive note, more technology leaders, including CIOs, are being named to boards, and that might soon extend to security executives as well. Earlier this year the Securities and Exchange Commission (SEC) proposed amendments to its rules to enhance and standardize disclosures regarding cybersecurity risk management, strategy, governance, and incident reporting by public companies.

This includes requirements for public companies to report any board member’s cybersecurity expertise, reflecting a growing understanding that the disclosure of cybersecurity expertise on boards is important when potential investors consider investment opportunities and shareholders elect directors. This could lead to more CISOs and other security leaders being named to boards.

Greater involvement of IT and security executives on boards is a favorable development in terms of better protecting information resources. But in general, boards need to become savvier when it comes to cybersecurity and be prepared to take the proper actions.

Asking the right questions

The best way to gain knowledge about security is to ask the right questions. One of the most important: which IT assets is the organization securing? Answering it requires the ability to monitor the organization’s endpoints at any time, identify which systems are connecting to the corporate network, and determine which software is running on devices.

Deploying reliable asset discovery and inventory systems is a key part of gaining a high level of visibility to ensure the assets are secure.

Another important question to ask is how the organization is protecting its most vital resources. These might include financial data, customer records, source code for key products, encryption keys and other security tools, and other assets.

Not all data is equal from a security, privacy and regulatory perspective, and board members need to fully understand the controls in place to secure access to this and other highly sensitive data. Part of the process for safeguarding the most vital resources within the organization is managing access to these assets, so boards should be up to speed on what kinds of access controls are in place.

Board members also need to ask which entities pose the greatest security risks to the business at any point in time. The challenge here is that threat vectors are constantly changing. But that doesn’t mean boards should settle for a generic response.

Assessing threats from the inside out

A good assessment of the threat landscape includes looking not just at external sources of attacks but within the organization itself. Many security incidents originate via employee negligence and other insider threats. So, a proper follow-up question would be to ask what kind of training programs and policies the company has in place to ensure that employees are practicing good security hygiene and know how to identify possible attacks such as phishing.

Part of analyzing the threat vector also includes inquiring about what the company looks like to attackers and how they might carry out attacks. This can help in determining whether the organization is adequately protected against a variety of known tactics and techniques employed by bad actors.

In addition, board members should ask IT and security executives about the level of confidence they have in the organization’s risk-mitigation strategy and its ability to quickly respond to an attack. This is a good way to determine whether the security program thinks it has adequate resources and support to meet cybersecurity needs, and what needs to be done to enhance security via specific investments.

It’s most effective when the executives come prepared with specific data about security shortfalls, such as the number of critical vulnerabilities the company has faced, how long it takes on average to remediate them, the number and extent of outages due to security issues, security skills gaps, etc.
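
As a small illustration of preparing such figures, the Python sketch below derives two board-ready metrics, open critical vulnerabilities and mean days to remediate, from illustrative vulnerability records; the record fields and dates are assumptions.

```python
# Sketch: derive board-ready figures from vulnerability records.
# The record fields and dates are illustrative assumptions.
from datetime import date

vulns = [
    {"severity": "critical", "opened": date(2022, 8, 1),  "closed": date(2022, 8, 15)},
    {"severity": "critical", "opened": date(2022, 9, 3),  "closed": None},
    {"severity": "high",     "opened": date(2022, 9, 10), "closed": date(2022, 9, 20)},
]

open_criticals = sum(
    1 for v in vulns if v["severity"] == "critical" and v["closed"] is None)
days_to_fix = [(v["closed"] - v["opened"]).days for v in vulns if v["closed"]]
mean_days = sum(days_to_fix) / len(days_to_fix)

print(f"open critical vulnerabilities: {open_criticals}")  # -> 1
print(f"mean days to remediate: {mean_days:.1f}")          # -> 12.0
```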

In the event of an emergency

Finally, board members should ask what the board’s role should be in the event of a security incident. This includes the board’s role in determining whether to pay a ransom following a ransomware attack, how board members will communicate with each other if corporate networks are down, or how they will handle public relations after a breach, for example.

It has never been more important for boards to take a proactive, vigilant approach to cybersecurity at their organizations. Cyberattacks such as ransomware and distributed denial of service are not to be taken lightly in today’s digital business environment where an outage of even a few hours can be extremely costly.

Boards that are well informed about the latest security threats, vulnerabilities, solutions and strategies will be best equipped to help their organizations protect their valuable data resources as well as the devices, systems and networks that keep business processes running every day.

Want to learn more? Check out this Cybersecurity Readiness Checklist for Board Members.

Risk Management

Since the pandemic began, 60 million people in Southeast Asia have become digital consumers. The staggering opportunities Asia’s burgeoning digital economy presents are reason enough to spur you into rethinking the way you do business.

This means one thing: digital transformation. Cloud adoption empowers organisations to adapt quickly to sudden market disruptions. Back when the pandemic was at its peak, hybrid work and enterprise mobile apps ensured critical operations were able to maintain business-as-usual despite lockdowns and border closures. Today, they are empowering an increasingly mobile workforce to stay productive—on their terms.

This transformation saw organisations dismantle legacy infrastructures and adopt decentralised networks, cloud-based services, and the widespread use of employees’ personal devices.

But with this new cloud-enabled environment of mobile devices and apps, remote workspaces, and edge-computing components came substantial information gaps. Ask yourself if you have complete visibility of all your IT assets; there’s a good chance you’d answer no. This shouldn’t come as a surprise: 94% of organisations find 20% or more of their endpoints undiscovered and therefore unprotected.

Why you can’t ignore your undiscovered (and unprotected) endpoints

The rapid proliferation of endpoints, which increases the complexity of today’s IT environments and introduces a broader attack surface for cyber criminals to exploit, only serves to underscore the importance of knowing all your endpoints. Here’s what will happen if you don’t.

Exposure to security risk. You need to keep your doors and windows locked if you want to secure your home. But what if you don’t know how many you have or where they are located? It’s the same with endpoints: you can’t protect what you can’t see. Knowing your endpoints and getting real-time updates on their status will go a long way to proactively keeping cyber threats at bay and responding to an incident rapidly—and at scale.

Poor decision-making. Access to real-time data relies on instantaneous communication with all your IT assets, and that data enables your teams to make better-informed decisions. Yet current endpoint practices work with data collected at an earlier point in time, which means that by the time your team uses the data, it is already outdated. This, in turn, renders the insights derived from it inaccurate and, in some instances, unusable.

Inefficient operations. Despite IT assets being constantly added to or decommissioned from the environment due to workforce shifts and new requirements, many enterprises still track their inventory manually with Excel spreadsheets. You can imagine their struggle to get a complete and accurate inventory of every single asset and the resulting guessing games IT teams need to play to figure out what to manage and patch without that inventory.

Getting a better handle on ever-present security threats 

Having a bird’s-eye view of your endpoints requires the right tools to manage them, no matter the size or complexity of your digital environment. These tools should help you regain real-time visibility and complete control by:

- Identifying unknown endpoints that are yet to be discovered, evaluated, and monitored
- Finding issues by comparing installations and versions of your software for each endpoint against defined software bundles and updates (see the sketch after this list)
- Standardising your environment by instantly applying updates to out-of-date installations and installing software missing from endpoints that require them
- Enabling automation of software management to further reduce reliance on IT teams by governing end-user self-service
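
To illustrate the comparison step, here is a minimal Python sketch that checks each endpoint’s installed software against a defined baseline bundle and flags missing or off-baseline packages. The package names, versions, and hostnames are hypothetical; real endpoint management platforms do this continuously and at scale rather than over a hard-coded dictionary.

```python
# Sketch: compare each endpoint's installed software against a defined
# baseline bundle, flagging packages that are missing or not at the
# baseline version. All names and versions are hypothetical.
BASELINE = {"edr-agent": "7.2", "vpn-client": "5.1", "browser": "108"}

endpoints = {
    "sg-lt-204": {"edr-agent": "7.2", "vpn-client": "4.9", "browser": "108"},
    "kl-lt-117": {"edr-agent": "7.2", "browser": "107"},
}

for host, installed in endpoints.items():
    missing = [pkg for pkg in BASELINE if pkg not in installed]
    off_baseline = [pkg for pkg, version in BASELINE.items()
                    if pkg in installed and installed[pkg] != version]
    print(host, "missing:", missing, "off baseline:", off_baseline)
```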

You only stand to gain when you truly understand the importance of real-time visibility and complete control over your endpoints—and commit to it. In the case of insurer Zurich, having a high-resolution view over an environment with over 100,000 endpoints worldwide meant greater cyber resilience, savings of up to 100 resource hours a month, and deeper collaboration between cybersecurity and operations.

Secure your business with real-time visibility and complete control over your endpoints. Learn how with Tanium.

Endpoint Protection

While pandemic-driven digital transformation has enabled the media and entertainment industry to stream awesome content 24/7, digital technology is also safeguarding visitors, performing artists, and crew at the Eurovision Song Contest by monitoring their Covid-19 exposure levels in real time.

The Eurovision Song Contest, by the way, is the world’s largest live music event, organized each year in May by the local organizer and the European Broadcasting Union.

A New Normal: Bubble-Up for Safety at Live Events with Flockey

Knowing your risk level as you navigate a large venue can help you avoid crowds and stay safely within your bubble – all of which empowers you to enjoy the experience all the more.

That’s why the local organizer of the Eurovision Song Contest last year in Rotterdam, the Netherlands, reached out to Unlimited Solutions for their newly released app Flockey – a powerful social distancing app that is bringing live audiences and live music back together again with Covid risk assurance at large-scale events. Venue organizers can use the app to safeguard employees and visitors through proactive crowd management.

This “new normal” has been helping people return to live events with an unobtrusive app that helps them avoid high-risk levels as they move around the venue in their Bluetooth bubble.

Live at Eurovision: a Bluetooth App to Navigate Covid Risk

The Eurovision Song Contest partnered with Unlimited Solutions to help them overcome restrictions and fear at large-scale events. This was accomplished by using data to give the organizer tangible real-time insight as to what is happening in the venue concerning social distancing and risky behavior.

The solution – based on EY AgilityWorks’ patented EY Proximity Monitor technology – was white-labelled as Flockey by Unlimited Solutions for the events industry. The social distancing app gives employees, delegations, and visitors at the venue real-time insight into their Covid-19 exposure risk levels.

Flockey made its debut at the Eurovision Song Contest at Rotterdam Ahoy in May 2021.

“Our industry has been at a standstill for more than a year,” says Olivier Monod de Froideville of Unlimited Solutions. “It is therefore great that we can contribute to the safer organization of large-scale events with Flockey.”

Richard van Vught, head of Security at the Eurovision Song Contest in Rotterdam, says the solution, “brings insight and peace of mind, with or without ever-changing [Covid safety] measures.” Flockey provides his team with a dashboard for real-time visualization of crowd movement and risks.

Social Distancing App Shows Transmission Rates in Real Time

So, how does it work? Flockey measures the distance between visitors using anonymized Bluetooth low-energy data from mobile devices such as a smartphone or a lanyard tag. The solution gives event organizers instant insight into visitor flows by providing the location of visitors and employees in real time via beacons (or sensors) pre-installed in the venue.

If you are an artist, crew, or audience member, you wear a tag on a lanyard, a wristband, or simply download the Flockey app on your smartphone. As soon as it’s activated, you are in your own Bluetooth bubble while the social distancing app monitors your proximity to others. Every few seconds, the app uses your smartphone to send and receive Bluetooth signals coming from other nearby users’ smartphones or from their battery-powered tags. Neat, huh?

Flockey sends this anonymous data to a central system with a tailor-made dashboard that enables event managers in the control room to monitor and log crowd movements. From the dashboard they can see employees and visitors’ proximity, the location of the interactions, and the time of the interaction – with risk levels registered as low, medium, or high – and take appropriate action. 
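
To make the mechanism concrete, here is a minimal Python sketch of how contact events might be bucketed into the low, medium, and high risk levels the dashboard displays. The distance and duration thresholds are assumptions for illustration, not Flockey’s actual rules, and real deployments estimate distance from Bluetooth signal strength rather than receiving it directly.

```python
# Illustrative classification of anonymised contact events into the
# low/medium/high risk levels shown on the dashboard. Thresholds are
# assumptions, not Flockey's actual rules.
def risk_level(distance_m: float, duration_s: int) -> str:
    if distance_m < 1.5 and duration_s > 60:
        return "high"
    if distance_m < 3.0 and duration_s > 30:
        return "medium"
    return "low"

# Contact events as (estimated distance in metres, duration in seconds).
events = [(0.8, 120), (2.5, 45), (4.0, 300)]
print([risk_level(d, t) for d, t in events])  # -> ['high', 'medium', 'low']
```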

Eurovision Safeguards a Spectacular Experience with Flockey

According to Tom Valema, EY AgilityWorks, “The EY Proximity Monitor [Flockey] enables event venues and organizers to responsibly receive audiences. Based on the data, they can demonstrate that the social distancing measures work.”  

Ensuring safeguards also has a positive impact on employee productivity and mental health as well as venue operations by helping to lower infection rates and, subsequently, insurance costs. “These safety measures encourage event goers to return to venues more comfortably—as part of a new normal,” adds Bernd Kramer, EY AgilityWorks.

Flockey is based on the EY Proximity Monitor solution which applies Bluetooth technology from Scandinavia-based Forkbeard at the front end and data analytics from the SAP Business Technology Platform (BTP) at the backend with SAP Analytics Cloud for reporting. ESRI mapping adds powerful geospatial analysis on top of 3D mapping for full visualization of what’s happening in real time.

“EY AgilityWorks and Flockey truly helped to keep Eurovision Song Contest covid safe and contributed tremendously to the success of this major world event,” says van Vught.

As a result, EY AgilityWorks was named a Finalist at the SAP Innovation Awards for 2022. You can read about their innovative solution in their Innovation Awards pitch deck.

Data Management

Work has changed dramatically thanks to the global COVID pandemic. Workers across every market sector in Australia are now spending their workdays alternating between offices and other locations such as their homes. It’s a hybrid work model that is certainly here to stay.

But moving workers outside the network perimeter presents cyber security challenges for every organisation. It expands the attack surface as enterprises ramp up their use of cloud services and enable staff to access key systems and applications from just about anywhere.

Senior technology leaders gathered in Melbourne recently to discuss the cyber security implications of a more permanently distributed workforce as their organisations move more services to the cloud. The conversation was sponsored by Palo Alto Networks.

Sean Duca, vice-president, regional chief security officer, Asia-Pacific & Japan at Palo Alto Networks, says with the primary focus now on safety and securely delivering work to staff, irrespective of where they are, organisations need to think about where data resides, how it is protected, who has access to it and how it is accessed.

“With many applications consumed ‘as a service’ or running outside the traditional network perimeter, the need to do access, authorisation and inspection is paramount,” Duca says.

“Attackers target the employee’s laptops and applications they use, which means we need to inspect the traffic for each application. The attack surface will continue to grow and also be a target for cybercriminals. This means that we must stay vigilant and have the ability to continuously identify when changes to our workforce happen, while watching our cloud estates at all times,” he says.

Brenden Smyth from Palo Alto Networks adds the main impact of this more flexible workforce on organisations is that they no longer have one or two points of entry that are well controlled and managed.

“Since 2020, organisations have created many hundreds if not tens of thousands of points of entry with the forced introduction of remote working,” he says.

“On top of that, company boards need to consider the personal and financial impacts [of a breach] that they are responsible for in the business they run. They need to make sure users are protected within the office, as well as those users connecting from any location,” he says.

Gus D’Onofrio, chief information technology officer at the United Workers Union, believes that there will come a time when physical devices will be distributed among the workforce to ensure their secure connectivity.

“This will be the new standard,” he says.

Iain Lyon, executive director, information technology at IFM Investors, says the key to securing distributed workforces is to ensure the home environment is suitably secure so the employee can do the work they need to do.

“It may be that for certain classifications of data or user activity, we will need to set up additional technology in the home to ensure compliance with security policy. That challenge is both technical and requires careful human resource thought,” he says.

Meeting the demands of remote workers

During the discussion, attendees were asked whether security capabilities are adequate to meet the new demands of connecting remote workers to on-premises systems, infrastructure-as-a-service, and software-as-a-service applications.

Palo Alto Networks’ Duca says existing cyber capabilities are only adequate if they do more than connectivity (access and authorisation).

“It’s analogous to an airport; we check where passengers go based on their ID and boarding pass and inspect their person and belongings. If the crown jewel in an airport is the planes, we do everything to protect what and who gets on.

“Why should organisations do anything less?” he asks. “If you can’t do continuous validation and enforcement, what is the security efficacy of the security capability?”

Meanwhile, Suhel Khan, data practice manager at superannuation organisation Cbus, adds that distributed workforces need stronger perimeter security and edge security systems, fine-grained ‘joiner-mover-leaver’ access control and entitlements, as well as geography-sensitive content management and distribution paradigms.

“We have reached a certain baseline in regard to the cyber security capabilities that are available in the market. The bigger challenge is procuring and integrating the right suite of applications that work across respective ecosystems,” he says.

Held back by legacy systems

Many enterprises are still running legacy systems and applications that can’t meet the demands of a borderless workforce.

Palo Alto Networks’ Smyth says the cyber impacts of sticking with older systems and applications are endless.

“Directly connected to SaaS and IaaS apps without security, patch management, vendor support – the list goes on – means organisations will not have full control of their environment,” he says.

Duca adds that organisations running legacy platforms could see an impact on productivity from their employees, and the solution may not be able to deal with modern-day threats.

“Every organisation should use this as a point in time to reassess and rearchitect what the world looks like today and what it may look like tomorrow. In a dynamic and ever-changing world, businesses should look to a software-driven model as it will allow them to pivot and change according to their needs,” he says.

Like most enterprises that have built technical systems for core business functions over the past 10 years, Cbus has challenges around optimally integrating software suites for seamless end-to-end process flow, says Khan.

“There are several app modernisation transformation programs to help us move forward. I believe that there will always be ‘heritage systems’ to take care of and transition away from.

“The only difference is that in the near future, these older systems will be built on the cloud rather than [run] on-premise and we would be replacing such cloud-native legacy applications with autonomous intelligent apps,” Khan says.

Meanwhile, IFM Investors’ Lyon says that, like every firm, IFM has several key applications that are mature and do an excellent job.

“We are not being held back. Our use of the Citrix platform to encapsulate the stable and resilient core applications has allowed us to be agnostic to the borderless nature of work,” he says.

Centralising security in the cloud

The advent of secure access service edge (SASE) and SD-WAN technologies has seen many organisations centralise security services in the cloud rather than keep them at remote sites.

Palo Alto Networks’ Duca says that for many years, gaps will continue to appear from inconsistent policies and enforcement. With the majority of apps and data sitting in the cloud, centralising cyber services allows for consistent security close to the crown jewels.

“There’s no point sending the traffic back to the corporate HQ to send it back out again,” he says.

The decision about whether or not to centralise security services in the cloud or keep them at remote sites is based on the risk appetite of the organisation.

“In superannuation, a good proportion of cyber security programs are geared towards being compliant and dealing with threats due to an uncertain global political outlook. Organisations that can afford to run their own backup/failsafe system on premise should consider [moving this function] to the cloud. Cloud-first is the dominant approach in a very dynamic market,” he says.

United Workers Union’s D’Onofrio adds that the advantage of keeping security services at remote sites is faster access and response times, which is ideal for geographically distributed workforces and customer bases. A con, he says, is that a distributed footprint implies stretched security domains.

On the flipside, security domains are easier to manage if they are centralised in the cloud but will deliver slower response times for customers and staff who are based geographically afar, he says.

Cyberattacks