Since the pandemic began, 60 million people in Southeast Asia have become digital consumers. The staggering opportunities of Asia’s burgeoning digital economy are reason enough to rethink the way you do business.

This means one thing: digital transformation. Cloud adoption empowers organisations to adapt quickly to sudden market disruptions. Back when the pandemic was at its peak, hybrid work and enterprise mobile apps ensured critical operations were able to maintain business-as-usual despite lockdowns and border closures. Today, they are empowering an increasingly mobile workforce to stay productive—on their terms.

To facilitate this transformation, organisations dismantled legacy infrastructures and adopted decentralised networks, cloud-based services, and the widespread use of employees’ personal devices.

But with this new cloud-enabled environment of mobile devices and apps, remote workspaces, and edge-computing components came substantial information gaps. Ask yourself if you have complete visibility of all your IT assets; there’s a good chance you’d answer no. This shouldn’t come as a surprise: 94% of organisations find that 20% or more of their endpoints are undiscovered and therefore unprotected.

Why you can’t ignore your undiscovered (and unprotected) endpoints

The rapid proliferation of endpoints increases the complexity of today’s IT environments and broadens the attack surface for cybercriminals to exploit, which only underscores the importance of knowing all your endpoints. Here’s what will happen if you don’t.

Exposure to security risk. You need to keep your doors and windows locked if you want to secure your home. But what if you don’t know how many you have or where they are located? It’s the same with endpoints: you can’t protect what you can’t see. Knowing your endpoints and getting real-time updates on their status will go a long way to proactively keeping cyber threats at bay and responding to an incident rapidly—and at scale.

Poor decision-making. Access to real-time data relies on instantaneous communication with all your IT assets, and that data enables your teams to make better-informed decisions. Yet current endpoint practices work with data collected at an earlier point in time. By the time your team utilises the data, it’s already outdated. This, in turn, renders the insights derived from it inaccurate and, in some instances, unusable.

Inefficient operations. Even though IT assets are constantly added to or decommissioned from the environment due to workforce shifts and new requirements, many enterprises still track their inventory manually in Excel spreadsheets. Compiling a complete and accurate inventory of every single asset this way is a struggle, and without that inventory, IT teams are left playing guessing games about what to manage and patch.

Getting a better handle on ever-present security threats 

Having a bird’s-eye view of your endpoints requires the right tools to manage them, no matter the size or complexity of your digital environment. These should help you regain real-time visibility and complete control by:

- Identifying unknown endpoints that are yet to be discovered, evaluated, and monitored
- Finding issues by comparing the software installations and versions on each endpoint against defined software bundles and updates (a minimal sketch of this check follows the list)
- Standardising your environment by instantly applying updates to out-of-date installations and installing missing software on endpoints that require it
- Enabling automation of software management to further reduce reliance on IT teams by governing end-user self-service
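To make the bundle-comparison step concrete, here is a minimal sketch in Python. The endpoint inventory, hostnames, package names, and versions are invented for illustration; a real endpoint management platform collects this data automatically rather than from a hand-written dictionary.

```python
# Minimal sketch: flag endpoints whose installed software deviates from a
# defined bundle. All data below is hypothetical.

REQUIRED_BUNDLE = {"browser": "102.0", "vpn-client": "5.2", "edr-agent": "3.1"}

endpoints = {
    "laptop-0431": {"browser": "102.0", "vpn-client": "5.1"},  # outdated VPN, missing EDR
    "laptop-0587": {"browser": "102.0", "vpn-client": "5.2", "edr-agent": "3.1"},
}

for host, installed in endpoints.items():
    for package, required in REQUIRED_BUNDLE.items():
        current = installed.get(package)
        if current is None:
            print(f"{host}: {package} missing -- install {required}")
        elif current != required:
            print(f"{host}: {package} {current} is out of date -- update to {required}")
```

The same comparison, run continuously against live inventory, is what turns a static spreadsheet into the real-time visibility described above.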

You only stand to gain when you truly understand the importance of real-time visibility and complete control over your endpoints—and commit to it. In the case of insurer Zurich, having a high-resolution view over an environment with over 100,000 endpoints worldwide meant greater cyber resilience, savings of up to 100 resource hours a month, and deeper collaboration between cybersecurity and operations.

Secure your business with real-time visibility and complete control over your endpoints. Learn how with Tanium.

Endpoint Protection

While pandemic-driven digital transformation has enabled the media and entertainment industry to stream awesome content 24/7, digital technology is also safeguarding visitors, performing artists, and crew at the Eurovision Song Contest by monitoring their Covid-19 exposure levels in real time.

The Eurovision Song Contest, by the way, is the world’s largest live music event, organized each year in May by the local organizer and the European Broadcasting Union.

A New Normal: Bubble-Up for Safety at Live Events with Flockey

Knowing your risk level as you navigate a large venue can help you avoid crowds and stay safely within your bubble – all of which empowers you to enjoy the experience all the more.

That’s why the local organizer of the Eurovision Song Contest last year in Rotterdam, the Netherlands, reached out to Unlimited Solutions for their newly released app Flockey – a powerful social distancing app that is bringing live audiences and live music back together again with Covid risk assurance at large-scale events. Venue organizers can use the app to safeguard employees and visitors through proactive crowd management.

This “new normal” has been helping people return to live events with an unobtrusive app that helps them avoid high-risk levels as they move around the venue in their Bluetooth bubble.

Live at Eurovision: a Bluetooth App to Navigate Covid Risk

The Eurovision Song Contest partnered with Unlimited Solutions to help them overcome restrictions and fear at large-scale events. This was accomplished by using data to give the organizer tangible real-time insight as to what is happening in the venue concerning social distancing and risky behavior.

The solution – based on EY AgilityWorks’ patented EY Proximity Monitor technology – was white-labelled as Flockey by Unlimited Solutions for the events industry. The social distancing app gives employees, delegations, and visitors at the venue real-time insight into their Covid-19 exposure risk levels.

Flockey made its debut at the Eurovision Song Contest at Rotterdam Ahoy in May 2021.

“Our industry has been at a standstill for more than a year,” says Olivier Monod de Froideville of Unlimited Solutions. “It is therefore great that we can contribute to the safer organization of large-scale events with Flockey.”

Richard van Vught, head of Security at the Eurovision Song Contest in Rotterdam, says the solution, “brings insight and peace of mind, with or without ever-changing [Covid safety] measures.” Flockey provides his team with a dashboard for real-time visualization of crowd movement and risks.

Social Distancing App Shows Transmission Rates in Real Time

So, how does it work? Flockey measures the distance between visitors using anonymized Bluetooth low-energy data from devices such as smartphones or lanyard tags. The solution gives event organizers instant insight into visitor flows by providing the location of visitors and employees in real time via beacons (or sensors) pre-installed in the venue.

If you are an artist, crew member, or audience member, you wear a tag on a lanyard or wristband, or simply download the Flockey app on your smartphone. As soon as it’s activated, you are in your own Bluetooth bubble while the social distancing app monitors your proximity to others. Every few seconds, the app uses your smartphone to send and receive Bluetooth signals from other nearby users’ smartphones or from their battery-powered tags. Neat, huh?

Flockey sends this anonymized data to a central system with a tailor-made dashboard that enables event managers in the control room to monitor and log crowd movements. From the dashboard they can see employees’ and visitors’ proximity, the location of the interactions, and the time of the interactions – with risk levels registered as low, medium, or high – and take appropriate action.
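As a rough illustration of that low/medium/high grading, here is a minimal sketch in Python. The thresholds, field names, and the idea of a pre-computed distance estimate are assumptions for illustration only, not Flockey’s actual algorithm.

```python
# Hypothetical sketch: grade one proximity event into low/medium/high risk.
# Closer and longer contact means higher risk.

from dataclasses import dataclass

@dataclass
class Interaction:
    device_a: str        # anonymized tag or app identifier
    device_b: str
    distance_m: float    # distance estimated from Bluetooth signal strength
    duration_s: int      # how long the two devices stayed in range

def risk_level(event: Interaction) -> str:
    """Grade a contact event using illustrative thresholds."""
    close = event.distance_m < 1.5
    long_contact = event.duration_s > 60
    if close and long_contact:
        return "high"
    if close or long_contact:
        return "medium"
    return "low"

print(risk_level(Interaction("tag-017", "tag-042", 1.0, 120)))  # -> high
```

A dashboard like the one described above would aggregate thousands of such graded events to show where in the venue risky behaviour is clustering.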

Eurovision Safeguards a Spectacular Experience with Flockey

According to Tom Valema, EY AgilityWorks, “The EY Proximity Monitor [Flockey] enables event venues and organizers to responsibly receive audiences. Based on the data, they can demonstrate that the social distancing measures work.”  

Ensuring safeguards also has a positive impact on employee productivity and mental health as well as venue operations by helping to lower infection rates and, subsequently, insurance costs. “These safety measures encourage event goers to return to venues more comfortably—as part of a new normal,” adds Bernd Kramer, EY AgilityWorks.

Flockey is based on the EY Proximity Monitor solution which applies Bluetooth technology from Scandinavia-based Forkbeard at the front end and data analytics from the SAP Business Technology Platform (BTP) at the backend with SAP Analytics Cloud for reporting. ESRI mapping adds powerful geospatial analysis on top of 3D mapping for full visualization of what’s happening in real time.

“EY AgilityWorks and Flockey truly helped to keep Eurovision Song Contest covid safe and contributed tremendously to the success of this major world event,” says van Vught.

As a result, EY AgilityWorks was named a Finalist at the SAP Innovation Awards for 2022. You can read about their innovative solution in their Innovation Awards pitch deck.

Data Management

Work has changed dramatically thanks to the global COVID pandemic. Workers across every market sector in Australia are now spending their workdays alternating between offices and other locations such as their homes. It’s a hybrid work model that is certainly here to stay.

But moving workers outside the network perimeter presents cyber security challenges for every organisation. It expands the attack surface as enterprises ramp up their use of cloud services and enable staff to access key systems and applications from just about anywhere.

Senior technology leaders gathered in Melbourne recently to discuss the cyber security implications of a more permanently distributed workforce as their organisations move more services to the cloud. The conversation was sponsored by Palo Alto Networks.

Sean Duca, vice-president, regional chief security officer, Asia-Pacific & Japan at Palo Alto Networks, says that with the primary focus now on safely and securely delivering work to staff, irrespective of where they are, organisations need to think about where data resides, how it is protected, who has access to it, and how it is accessed.

“With many applications consumed ‘as a service’ or running outside the traditional network perimeter, the need to do access, authorisation and inspection is paramount,” Duca says.

“Attackers target employees’ laptops and the applications they use, which means we need to inspect the traffic for each application. The attack surface will continue to grow and also be a target for cybercriminals. This means that we must stay vigilant and have the ability to continuously identify when changes to our workforce happen, while watching our cloud estates at all times,” he says.

Brenden Smyth from Palo Alto Networks adds the main impact of this more flexible workforce on organisations is that they no longer have one or two points of entry that are well controlled and managed.

“Since 2020, organisations have created many hundreds if not tens of thousands of points of entry with the forced introduction of remote working,” he says.

“On top of that, company boards need to consider the personal and financial impacts [of a breach] that they are responsible for in the business they run. They need to make sure users are protected within the office, as well as those users connecting from any location,” he says.

Gus D’Onofrio, chief information technology officer at the United Workers Union, believes that there will come a time when physical devices will be distributed among the workforce to ensure their secure connectivity.

“This will be the new standard,” he says.

Iain Lyon, executive director, information technology at IFM Investors, says the key to securing distributed workforces is to ensure the home environment is suitably secure so the employee can do the work they need to do.

“It may be that for certain classifications of data or user activity, we will need to set up additional technology in the home to ensure compliance with security policy. That challenge is both technical and requires careful human resource thought,” he says.

Meeting the demands of remote workers

During the discussion, attendees were asked if security capabilities are adequate to meet the new demands of connecting remote workers to onsite premises, infrastructure-as-a-service and software-as-a-service applications.

Palo Alto Networks’ Duca says existing cyber capabilities are only adequate if they do more than connectivity (access and authorisation).

“It’s analogous to an airport; we check where passengers go based on their ID and boarding pass and inspect their person and belongings. If the crown jewel in an airport is the planes, we do everything to protect what and who gets on.

“Why should organisations do anything less?” he asks. “If you can’t do continuous validation and enforcement, what is the security efficacy of the security capability?”

Meanwhile, Suhel Khan, data practice manager at superannuation organisation Cbus, adds that distributed workforces need stronger perimeter security and edge security systems, fine-grained ‘joiner-mover-leaver’ access control and entitlements, as well as geography-sensitive content management and distribution paradigms.

“We have reached a certain baseline in regard to the cyber security capabilities that are available in the market. The bigger challenge is procuring and integrating the right suite of applications that work across respective ecosystems,” he says.

Held back by legacy systems

Many enterprises are still running legacy systems and applications that can’t meet the demands of a borderless workforce.

Palo Alto Networks’ Smyth says the cyber impacts of sticking with older systems and applications are endless.

“[Being] directly connected to SaaS and IaaS apps without security, patch management, vendor support – the list goes on – means organisations will not have full control of their environment,” he says.

Duca adds that organisations running legacy platforms could see an impact on productivity from their employees, and the solution may not be able to deal with modern-day threats.

“Every organisation should use this as a point in time to reassess and rearchitect what the world looks like today and what it may look like tomorrow. In a dynamic and ever-changing world, businesses should look to a software-driven model as it will allow them to pivot and change according to their needs,” he says.

Like most enterprises that have built technical systems for core business functions over the past 10 years, Cbus has challenges around optimally integrating software suites for seamless end-to-end process flow, says Khan.

“There are several app modernisation transformation programs to help us move forward. I believe that there will always be ‘heritage systems’ to take care of and transition away from.

“The only difference is that in the near future, these older systems will be built on the cloud rather than [run] on-premise and we would be replacing such cloud-native legacy applications with autonomous intelligent apps,” Khan says.

Meanwhile, IFM Investors’ Lyon says that, like every firm, IFM has several key applications that are mature and do an excellent job.

“We are not being held back. Our use of the Citrix platform to encapsulate the stable and resilient core applications has allowed us to be agnostic to the borderless nature of work,” he says.

Centralising security in the cloud

The advent of secure access service edge (SASE) and SD-WAN technologies has seen many organisations centralise security services in the cloud rather than keep them at remote sites.

Palo Alto Networks’ Duca says that for many years, gaps will continue to appear from inconsistent policies and enforcement. With the majority of apps and data now sitting in the cloud, centralising cyber services allows for consistent security close to the crown jewels.

“There’s no point sending the traffic back to the corporate HQ to send it back out again,” he says.

The decision about whether or not to centralise security services in the cloud or keep them at remote sites is based on the risk appetite of the organisation.

“In superannuation, a good proportion of cyber security programs are geared towards being compliant and dealing with threats due to an uncertain global political outlook. Organisations that can afford to run their own backup/failsafe system on premise should consider [moving this function] to the cloud. Cloud-first is the dominant approach in a very dynamic market,” he says.

United Workers Union’s D’Onofrio adds that the pros of keeping security services at remote sites are faster access and response times, which is ideal for geographically distributed workforces and customer bases. A con, he says, is that a distributed footprint implies stretched security domains.

On the flipside, security domains are easier to manage if they are centralised in the cloud, but they will deliver slower response times for customers and staff located geographically farther away, he says.

Cyberattacks

Any organization that fails to take into consideration supply chain cybersecurity threats is putting itself at great risk. The fact is, a company can be impacted by incidents that emerge from virtually anywhere within the chain. That’s why third-party risk management has become so important.

Managing third-party risk involves identifying and mitigating risks to an organization from external business partners, including suppliers, vendors, service providers, consultants, and contractors. As part of the process, organizations need to understand the risk profile not only of their direct business partners but also of the companies those partners are doing business with.

In many cases, third-party vendors outsource portions of their business to service providers, and each of those companies’ security postures can ultimately have an impact on the organization that’s contracting with the third party. Each additional third-party relationship magnifies an organization’s risk.

Needless to say, third-party risk management can be a challenging undertaking, especially for enterprises with complex supply chains. But it is necessary to implement the safeguards required to reduce or eliminate risk. The stakes are too high to ignore potential threats among business partners. This includes the possibility of business disruption because of a security breach.

For example, many companies have come to rely on outsourced services for payroll, IT infrastructure, web hosting, and application development, among many other functions. If a third-party provider fails to deliver its services because of a disruption of any kind, there can be significant consequences for its clients.

In addition, third-party access to an organization’s physical facilities and IT systems can open it up to an increased level of risk. If the company’s customer data is exposed because of a third party’s security vulnerability, for instance, the company is still liable for the breach. Industry research has shown that a large percentage of companies have experienced a cybersecurity breach because of weaknesses in their supply chains or third-party vendors.

Managing the risk

As part of managing third-party risk, it is essential that organizations vet all the parties they partner with. This helps them identify and assess the risks third parties create so they can work with them either to control those risks or find more secure alternatives.

In order to assess their third-party associates, organizations need to first take a complete inventory of these companies. Third-party partners might include suppliers and contractors, IT management services, software vendors, cloud service providers, staffing agencies, payroll service providers, fidelity management services, and tax professionals to name a few.

They might include everything from large, global organizations to individual contractors, as well as the companies those businesses subcontract services to. Needless to say, this can take time, but it’s worth the effort to ensure effective third-party risk management.

Part of the vetting process also includes identifying and documenting the systems, applications, and data each of the third-party entities can access. Because many of these partners can access and process an organization’s highly sensitive data, it’s vital to ensure they meet proper security standards.

It’s also important to evaluate third parties individually to determine the likelihood and possible results of data breaches and other security incidents, then classify the parties according to the level of risk they pose. That way, security teams can focus their mitigation efforts on the highest-risk parties first.
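As a minimal illustration of that classification step, the sketch below tiers vendors by the classic likelihood-times-impact score. The vendor names, 1-to-5 scales, and thresholds are invented assumptions, not a standard; real programs typically derive these scores from questionnaire answers and external ratings.

```python
# Sketch: classify third parties by risk so the highest-risk ones get
# mitigation attention first. All data and thresholds are hypothetical.

vendors = {
    "payroll-provider": {"likelihood": 2, "impact": 5},
    "web-host":         {"likelihood": 4, "impact": 4},
    "staffing-agency":  {"likelihood": 3, "impact": 2},
}

def tier(likelihood: int, impact: int) -> str:
    """Classic risk scoring: risk = likelihood x impact."""
    score = likelihood * impact
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Highest-risk vendors first.
for name, v in sorted(vendors.items(),
                      key=lambda kv: kv[1]["likelihood"] * kv[1]["impact"],
                      reverse=True):
    print(name, tier(v["likelihood"], v["impact"]))
```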

There is really no limit to the potential security shortfalls partners can have. Third parties might not have an acceptable level of visibility into endpoint devices used by their employees or sufficient access controls to ensure the protection of their systems and networks. That can leave them exposed to a number of security issues.

One way to assess the security posture of business partners is to send each one a questionnaire gathering information about its organizational structure and governance, the cybersecurity practices it has in place, the networks and other digital assets it needs to access, and the security measures it takes when accessing resources.

Based on the security assessment, mitigation might require taking measures such as restricting access to certain assets, calling on the partner to address vulnerabilities, or implementing stronger security controls. If an organization determines that risks can’t be reasonably reduced, it should consider ending the relationship.

Beyond security

Evaluating third parties is not limited to cybersecurity. Organizations also need to ensure that partners are meeting regulatory compliance requirements, because a lack of third-party controls can result in data loss and subsequent regulatory fines.

In addition, companies need to ensure that proper operational controls are in place with third parties, because failures can cause businesses to shut down for extended periods.

Third-party failures in any of these areas can result in lost business, financial damage, and a negative impact on the brand and reputation of any company that deals with the business that experienced a breach or disruption.

It’s important to remember that third-party risk management is not a “set it and forget it” proposition. Because third-party behavior and the threat landscape change over time, organizations need to perform regular assessments of their business partners. They can monitor continuously, in real time, by deploying tools such as vendor risk-management platforms.

The assessment process needs to be repeated for each new third-party partner an organization hires. By being constantly vigilant, organizations can ensure that the third-party companies they do business with present as little risk as possible.

Third-party risk management takes a lot of effort, but the potential advantages are clear. It leads to greater visibility into relationships with partners, which in turn enables companies to better understand the interconnectivity among supply chain parties and the potential risks.

In addition, due diligence allows executives to make more informed decisions about the organizations they do business with. Risks can be identified and controlled. Third-party risk management can also lead to better regulatory compliance because it’s a requirement of many regulations.

Perhaps most important, managing the risk of third-party relationships will help keep an organization’s IT resources protected against a variety of threats, and its supply chain operating efficiently.

Take control of third-party risk management. Learn how.

Risk Management

Managing risk is one of the top responsibilities of any leadership team. But leaders can manage only the risks they know about. Effective leadership, it turns out, depends on risk reporting. Reporting risks to your company’s executive team and board of directors will help your organization make the right decisions about reducing risks. 

This article focuses on the reporting of risk itself. That means finding the right information to share with your company’s leadership team and sharing it so it can be acted on effectively. 

Reporting risks that matter to a company’s leadership

Risk means a lot of things to a lot of different people. If you talk to IT people about risks, you’ll hear about the risk of server outages or data breaches or software vulnerabilities that could lead to data breaches. 

You might also hear about unauthorized devices, bring-your-own-device (BYOD) policies, and how difficult it is to monitor what employees are doing with the company’s data on their home networks now that they’re working remotely.

All those things, from server outages to remote employees, represent risks of one form or another. But if you’re in charge of reporting risk to your company’s executive team and the board, do you really want to give them a list of unpatched systems or an estimate of how many employees are using BYOD devices? 

What risks does your company’s leadership team ultimately care about?

To answer that, let’s ask about risk itself. Fortunately, there’s a generally agreed-upon definition of risk, at least among IT professionals. ISO 31000, the International Organization for Standardization’s guidelines for risk management, defines risk as “the effect of uncertainty on objectives.”

“Uncertainty” seems straightforward enough. If something is certain, there’s no risk involved. If we know absolutely that our servers will never crash, there’s no risk of them crashing.

But what about “objectives?” Every employee, team, department, and business has objectives. When reporting risk to the executive team and the board, you need to ask yourself which objectives they care about. It’s not that they’re indifferent to the goals of individual teams and projects. Rather, it is the job of a company’s leadership to focus on the big picture. 

Here are three objectives you can be sure your company’s leaders care about:

- Data confidentiality, integrity, and availability
- Business continuity
- Regulatory compliance

There may be other objectives, such as a certain percentage of revenue growth or a good reputation in the marketplace. But you can be sure that your company’s leadership cares about managing and protecting its important data, avoiding IT outages that bring business to a halt, and ensuring that the company never makes headlines about regulatory fines.

Each of these objectives will likely require detailed reporting to support the objective’s overall risk assessment. For example, the data the board cares about encompasses everything from customer data to employee data to financial records to intellectual capital such as product designs and patents. All those need to be managed and secured. 

Different types of data may be facing different types of risks of varying severity. The board will need to know how much this objective is at risk overall, as well as what specific types of data might require new investments in security or personnel training. 
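As an illustration of that rollup, here is a minimal sketch that groups individual risks under the strategic objectives listed earlier and reports the worst exposure for each. The risk entries and the likelihood-times-impact scoring are invented for illustration, not a prescribed methodology.

```python
# Sketch: roll individual risks up to board-level objectives and report
# the worst exposure per objective. All entries are hypothetical.

from collections import defaultdict

risks = [
    {"name": "Unpatched CRM servers", "objective": "Data protection",
     "likelihood": 4, "impact": 5},
    {"name": "Single-region hosting", "objective": "Business continuity",
     "likelihood": 2, "impact": 5},
    {"name": "Incomplete retention policy", "objective": "Regulatory compliance",
     "likelihood": 3, "impact": 3},
]

by_objective = defaultdict(list)
for r in risks:
    by_objective[r["objective"]].append(r["likelihood"] * r["impact"])

# The worst score per objective is the level of detail a board can act on.
for objective, scores in by_objective.items():
    print(f"{objective}: max risk score {max(scores)} across {len(scores)} risks")
```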

Before you prepare a report about risk in your organization, make sure you understand your leadership team’s objectives. Some of those might be posted on your company’s website. Others might be listed in an internal, long-term strategic plan. One way or another, you need to know what those objectives are because you’re going to use them to frame your discussion of risk.

Your risk report should provide the leadership team with the information they need to make smart decisions about which actions to take to mitigate risks related to the company’s strategic objectives.

Identifying risks helps you think like an attacker

There’s an added benefit to framing your risk reports this way. When you’ve identified risks to your data and to the company’s business continuity, you’ve also identified the weak points that criminal syndicates and hostile nation-states might attack.

After all, when a cybercriminal tries to break into your company’s IT systems, what are they doing? Most likely they’re trying to get to your data to steal it or leak it, or trying to get to the systems that process your data and disrupt them, possibly through ransomware or some other form of attack.

Because you’re now measuring and reporting risk based on strategic objectives, you have a detailed, weighted report on the weaknesses and vulnerabilities related to your data and the systems that store, process, and present it. You know what’s most likely to be targeted and how to go about protecting it, based on your detailed knowledge of vulnerabilities and probabilities.

All this supporting information makes the risk assessment you’re presenting to the board much more credible and useful. The board sees how data and business continuity are at risk, what controls are in place to mitigate those risks, and how those controls could be improved or broadened to reduce risks further in keeping with the company’s overall strategy.

Risk reporting is an ongoing practice

Risk reporting should be an ongoing practice. Risks are continually changing, whether they’re arising from new business initiatives or new types of cyber threats. Automating data collection and risk assessment helps provide your company’s leadership team with the vital information they need to make the right decisions to mitigate risk and advance the company’s objectives.

Not sure about your risk levels? Get your risk report here.

Risk Management

By Dr. May Wang, CTO of IoT Security at Palo Alto Networks

At the foundation of cybersecurity is the need to understand your risks and how to minimize them. Individuals and organizations often think about risk in terms of what they’re trying to protect, and in the IT world, that mainly means data, with terms like data privacy, data leakage and data loss. But there is more to cybersecurity risk than just protecting data. So, what should our security risk management strategies consider? Protecting data and blocking known vulnerabilities are good tactics, but they are not the only things CISOs should be considering and doing. What’s often missing is a comprehensive approach to risk management, and a strategy that considers more than just data.

The modern IT enterprise certainly consumes and generates data, but it also has myriad devices, including IoT devices, which are often not under the direct supervision or control of central IT operations. While data loss is a risk, so too are service interruptions, especially as IoT and OT devices continue to play critical roles across society. In a healthcare operation, for example, the failure of a medical device could have life-or-death consequences.

Challenges of Security Risk Management

Attacks are changing all the time, and device configurations can often be in flux. Just as IT itself is always in motion, it’s important to emphasize that risk management is not static.

In fact, risk management is a very dynamic thing, so thinking about risk as a point-in-time exercise is missing the mark. There is a need to consider multiple dimensions of the IT and IoT landscape when evaluating risk. There are different users, applications, deployment locations and usage patterns that organizations need to manage risk for, and those things can and will change often and regularly.

There are a number of challenges with security risk management, not the least of which is the sheer size and complexity of the IT and IoT estate. CISOs today can easily be overwhelmed by information and data coming from an increasing volume of devices. Alongside the volume is a large variety of device types, each with its own particular attack surface. Maintaining awareness of all IT and IoT assets, and the particular risk each one represents, is not something a human can easily document accurately. The complexity of managing a diverse array of policies, devices and access controls across a distributed enterprise, in an approach that minimizes risk, is not a trivial task.

A Better Strategy to Manage Security Risks

Security risk management is not a single task or a single tool. It’s a strategy that involves several key components that can help CISOs eliminate gaps and lay the groundwork for positive outcomes.

Establishing visibility. To eliminate gaps, organizations need to first know what they have. IT and IoT asset management isn’t just about knowing what managed devices are present; it’s also about knowing the unmanaged IoT devices and understanding what operating systems and application versions are present at all times.

Ensuring continuous monitoring. Risk is not static, and monitoring shouldn’t be either. Continuous monitoring of all the changes, including who is accessing the network, where devices are connecting and what applications are doing, is critical to managing risk.

Focusing on network segmentation. Reducing risk in the event of a security incident can often be achieved by reducing the “blast radius” of a threat. With network segmentation, where different services and devices run only on specific segments of a network, the attack surface can be minimized, and unseen, unmanaged IoT devices can be prevented from becoming springboards for attacks on other areas of the network. So, instead of an exploit in one system impacting an entire organization, the impact can be limited to the network segment that was attacked. (A minimal sketch of this idea follows the fourth component below.)

Prioritizing threat prevention. Threat prevention technologies such as endpoint and network protection are also foundational components of an effective security risk management strategy. Equally important for threat prevention is having the right policy configuration and least-privileged access in place on endpoints including IoT devices and network protection technologies to prevent potential attacks from happening.
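To illustrate the segmentation idea referenced above, here is a minimal default-deny policy check in Python. The segment names and allowed flows are invented for illustration; in practice this logic lives in firewalls, VLAN ACLs, and zero-trust policy engines rather than application code.

```python
# Sketch: default-deny segmentation. A flow between segments is allowed
# only if explicitly listed. Segment names and rules are hypothetical.

ALLOWED_FLOWS = {
    ("guest_wifi", "internet"),
    ("corp_lan", "internet"),
    ("corp_lan", "data_center"),
    ("iot_devices", "iot_controller"),  # IoT talks only to its controller
}

def flow_permitted(src_segment: str, dst_segment: str) -> bool:
    """Default-deny: permit a flow only if it is explicitly allowlisted."""
    return (src_segment, dst_segment) in ALLOWED_FLOWS

# A compromised IoT device cannot reach the data center directly,
# limiting the blast radius to its own segment.
assert not flow_permitted("iot_devices", "data_center")
assert flow_permitted("iot_devices", "iot_controller")
```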

Executing the strategic components above at scale can be optimally achieved with machine learning and automation. With the growing volume of data, network traffic and devices, it’s just not possible for any one human, or even group of humans to keep up. By making use of machine learning-based automation, it’s possible to rapidly identify all IT, IoT, OT and BYOD devices to improve visibility, correlate activity in continuous monitoring, recommend the right policies for least-privileged access, suggest optimized configuration for network segmentation and add an additional layer of security with proactive threat prevention.

About Dr. May Wang:

Dr. May Wang is the CTO of IoT Security at Palo Alto Networks and the co-founder, chief technology officer (CTO), and board member of Zingbox, which was acquired by Palo Alto Networks in 2019 for its Internet of Things (IoT) security solutions.

IT Leadership, Security

These are challenging times to be a CIO. A few months ago, it was all talk about digital transformation to drive post-pandemic business recovery. Now, the goalposts have shifted thanks to rising inflation, geopolitical uncertainty and the Great Resignation. Meeting these challenges requires IT leaders to ruthlessly prioritize: taking action to mitigate escalating cyber and compliance risks by managing their attack surface more effectively amid continued skills shortages.

For many, the key lies in choosing the right platform to drive visibility and control across the endpoint estate.

The ever-growing attack surface 

That pandemic-era digital spending was certainly necessary to support hybrid working, drive process efficiencies and create new customer experiences. But it also left behind an unwelcome legacy as corporate attack surfaces expanded significantly.

An explosion in potentially unmanaged home-working endpoints and distributed cloud assets has added opacity at a time when CIOs desperately need visibility. Two-fifths of global organizations admit that their digital attack surface is “spiraling out of control.” Some organizations also exacerbate their challenges by rushing products to market, incurring heavy technical debt in the process.

Attack surface challenges are especially acute in industries like manufacturing, which became the most targeted sector in 2021. The convergence of IT and OT in smart factories is helping these organizations to become more efficient and productive, but it’s also exposing them to increased risk as legacy equipment is made to be connected. 

Nearly half (47%) of all attacks on the sector last year were caused by vulnerabilities that the victim had yet to or could not patch. Like their counterparts in almost every sector, manufacturing CIOs are also kept awake at night by supply chain risk. An October 2021 report claimed that 93% of global organizations had suffered a direct breach due to weaknesses in their supply chains over the previous year.

Managing this risk effectively will require rigorous and continuous third-party auditing based on asset visibility and best practice cyber hygiene checks. The same approach can also help drive visibility at a time when supply chains are still under tremendous strain from the continued impact of COVID-19 in Asia and new geopolitical uncertainty.

Threat actors are ruthlessly exploiting visibility and control gaps wherever they can find them, most notably via ransomware. The average ransom payment rose 78% year-on-year in 2021, with some vendors detecting a record-breaking volume of attacks. Most are down to a combination of phishing, exploited software vulnerabilities, and misconfigured endpoints, particularly RDP servers left exposed without strong authentication.

Missing talent

In fact, misconfiguration is one of the biggest sources of cyber risk today, perpetuated by talent shortages and digital transformation, the latter creating new and complex IT environments that are more challenging to manage securely. The talent shortfall cuts across multiple sectors and is most acute in cyber, with a gap of over 2.7 million professionals globally, including 402,000 in North America. The Great Resignation and workplace stress continue to take their toll: nearly two-thirds (64%) of SOC analysts say they’ll change jobs in the next year.

With talent in such short supply and commanding such a high price, it becomes even more important to deploy it as efficiently as possible. Technology should be the CIO’s friend, yet a proliferation of IT and security point solutions is undermining productivity, not enhancing it. Our research shows that the average organization runs over 40 discrete IT security and management tools. These not only add licensing costs and significant administrative overhead but can also create visibility gaps that threat actors are primed to exploit.

Tool bloat is even more likely in the public sector, where CIOs often lack a common security governance framework to guide purchasing strategies. Government IT leaders are also weighed down by the significant financial burden of license underutilization, as they often lack the ability to discover, manage and measure their software assets.

The regulatory landscape continues to evolve

As if these challenges weren’t enough, CIOs must also prioritize compliance risk management. The EU’s GDPR set in motion a domino effect of copycat legislation around the world, which has raised the stakes for corporate data protection and privacy. But the landscape is also shifting in other ways. 

No longer is regulation solely for large organizations in the healthcare, manufacturing or financial services sectors. New rules and policies are being drawn up and older ones are expanding in scope. Once the preserve of financial institutions, the FTC’s Safeguards Rule will apply to all businesses that extend consumer credit, beginning in December 2022. That means organizations as diverse as car dealerships, furniture sellers and retail stores will need to get compliant or face potentially significant financial consequences.

Start with visibility and control

As CIOs look to prioritize while economic headwinds gather strength, managing IT risk becomes even more critical. This is where best practice cyber hygiene can play an important role. It sounds simple in theory but can be challenging to achieve in practice.

Cyber hygiene is built on comprehensive visibility of the endpoint IT estate. That means knowing every endpoint the organization is running and what is running on those endpoints at all times—whether it’s an on-prem server, a cloud container, a virtual machine or a home-working laptop.

It’s especially challenging, and critical, in dynamic and ephemeral cloud environments, which change second by second. Once this visibility has been achieved, organizations need technology that empowers them to run continuous scans and automated remediation activities to find and fix any vulnerabilities or misconfigurations—and to rapidly detect and investigate emerging threats.

This endpoint insight will not just help to mitigate risk but also optimize software license utilization and enhance regulatory compliance. Delivered from a single platform, it should help stretched IT teams do more with less and maximize their productivity. 

The hard work starts now.

Learn how to get complete endpoint visibility and control here.

IT Leadership

The software supply chain is, as most of us know by now, both a blessing and a curse.

It’s an amazing, labyrinthine, complex (some would call it messy) network of components that, when it works as designed and intended, delivers the magical conveniences and advantages of modern life: Information and connections from around the world plus unlimited music, videos, and other entertainment, all in our pockets. Vehicles with lane assist and accident avoidance.

Home security systems. Smart traffic systems. And on and on.

But when one or more of those components has defects that criminals can exploit, the entire chain is put in jeopardy. You know — the weakest-link syndrome. Software vulnerabilities can be exploited to disrupt the distribution of fuel or food. They can be leveraged to steal identities, empty bank accounts, loot intellectual property, spy on a nation, and even attack a nation.

So the security of every link in the software supply chain is important — important enough to have made it into a portion of President Joe Biden’s May 2021 executive order, “Improving the Nation’s Cybersecurity” (also known as EO 14028).

It’s also important enough to have been one of the primary topics of discussion at the 2022 RSA Conference in San Francisco. Among dozens of presentations on the topic was “Software supply chain: The challenges, risks, and strategies for success” by Tim Mackey, principal security strategist within the Synopsys Cybersecurity Research Center (CyRC).

Challenges and risks

The challenges and risks are abundant. For starters, too many organizations fail to consistently vet the software components they buy or pull from the internet. Mackey noted that while some companies do a thorough background check on vendors before they buy — covering everything from the executive team, financials, ethics, and product quality to other factors that generate a vendor risk-assessment score — that isn’t the norm.

“The rest of the world is coming through, effectively, an unmanaged procurement process,” he said. “In fact, developers love that they can just download anything from the internet and bring it into their code.”

While there may be some regulatory or compliance requirements on those developers, “they typically aren’t there from the security perspective,” Mackey said. “So once you’ve decided that, say, an Apache license is an appropriate thing to use within an organization, whether there are any unpatched CVEs [Common Vulnerabilities and Exposures] associated with anything with an Apache license, that’s somebody else’s problem. There’s a lot of things that fall into the category of somebody else’s problem.”

Then there’s the fact that the large majority of the software in use today — nearly 80% — is open source, as documented by the annual “Open Source Security and Risk Analysis” (OSSRA) report by the Synopsys CyRC.

Open source software is no more or less secure than commercial or proprietary software and is hugely popular for good reasons — it’s usually free and can be customized to do whatever a user wants, within certain licensing restrictions.

But, as Mackey noted, open source software is generally made by volunteer communities — sometimes very small communities — and those involved may eventually lose interest or be unable to maintain a project. That means if vulnerabilities get discovered, they won’t necessarily get fixed.

And even when patches are created to fix vulnerabilities, they don’t get “pushed” to users. Users must “pull” them from a repository. So if they don’t know they’re using a vulnerable component in their software supply chain, they won’t know they need to pull in a patch, leaving them exposed. The infamous Log4Shell group of vulnerabilities in the open source Apache logging library Log4j is one of the most recent examples of that.

Keeping track isn’t enough

To manage that risk requires some serious effort. Simply keeping track of the components in a software product can get very complicated very quickly. Mackey told of a simple app he created that had eight declared “dependencies” — components necessary to make the app do what the developer wants it to do. But one of those eight had 15 dependencies of its own. And one of those 15 had another 30. By the time he got several levels deep, there were 133 — for just one relatively simple app.
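To see why the numbers balloon like this, here is a minimal sketch of a transitive dependency walk over a made-up dependency graph. The package names are hypothetical; real package managers resolve these graphs from manifests and lockfiles.

```python
# Sketch: count everything an app transitively pulls in, not just what
# it declares. All package names are hypothetical.

from collections import deque

DEPENDS_ON = {
    "my-app":      ["http-lib", "json-lib"],
    "http-lib":    ["tls-lib", "dns-lib"],
    "json-lib":    ["unicode-lib"],
    "tls-lib":     ["crypto-lib"],
    "dns-lib":     [],
    "unicode-lib": [],
    "crypto-lib":  [],
}

def transitive_dependencies(root):
    """Breadth-first walk collecting every package reachable from root."""
    seen, queue = set(), deque(DEPENDS_ON.get(root, []))
    while queue:
        dep = queue.popleft()
        if dep not in seen:
            seen.add(dep)
            queue.extend(DEPENDS_ON.get(dep, []))
    return seen

# Two declared dependencies, six shipped -- the same effect that turned
# eight declared dependencies into 133 in Mackey's example.
print(len(DEPENDS_ON["my-app"]), "declared,",
      len(transitive_dependencies("my-app")), "transitive")
```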

Also, within those 133 dependencies were “multiple instances of code that had explicit end-of-life statements associated with them,” he said. That means it was no longer going to be maintained or updated.

And simply keeping track of components is not enough. There are other questions organizations should be asking themselves, according to Mackey. They include: Do you have secure development environments? Are you able to bring your supply chain back to integrity? Do you regularly test for vulnerabilities and remediate them?

“This is very detailed stuff,” he said, adding still more questions. Do you understand your code provenance and what the controls are? Are you providing a software Bill of Materials (SBOM) for every single product you’re creating? “I can all but guarantee that the majority of people on this [conference] show floor are not doing that today,” he said.
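For readers unfamiliar with the term, here is a minimal sketch of the kind of inventory an SBOM records, shaped like a CycloneDX-style document. The component names and versions are invented, and real SBOMs are generated by build tooling rather than written by hand.

```python
# Sketch: the shape of a minimal CycloneDX-style SBOM. Components and
# versions below are illustrative only.

import json

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "components": [
        {"type": "library", "name": "log4j-core", "version": "2.17.2",
         "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.17.2"},
        {"type": "library", "name": "jquery", "version": "3.6.0",
         "purl": "pkg:npm/jquery@3.6.0"},
    ],
}

# With an inventory like this, a consumer can match components against
# newly published CVEs -- the "pull" problem described earlier.
print(json.dumps(sbom, indent=2))
```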

But if organizations want to sell software products to the U.S. government, these are things they need to start doing. “The contract clauses for the U.S. government are in the process of being rewritten,” he said. “That means any of you who are producing software that is going to be consumed by the government need to pay attention to this. And it’s a moving target — you may not be able to sell to the U.S. government the way that you’re used to doing it.”

Even SBOMs, while useful and necessary — and a hot topic in software supply chain security — are not enough, Mackey said.

Coordinated efforts

“Supply chain risk management (SCRM) is really about a set of coordinated efforts within an organization to identify, monitor, and detect what’s going on. And it includes the software you create as well as acquire, because even though it might be free, it still needs to go through the same process,” he said.

Among those coordinated efforts is the need to deal with code components such as libraries within the supply chain that are deprecated — no longer being maintained. Mackey said developers who aren’t aware of that will frequently send “pull requests” asking when the next update on a library is coming.

And if there is a reply at all, it’s that the component is end-of-life, has been for some time, and that the only thing to do is move to another library.

“But what if everything depends on it?” he said. “This is a perfect example of the types of problems we’re going to run into as we start managing software supply chains.”

Another problem is that developers don’t even know about some dependencies they’re pulling into a software project, and whether those might have vulnerabilities.

“The OSSRA report found that the top framework with vulnerabilities last year was jQuery [a JavaScript library]. Nobody decides to use jQuery; it comes along for the ride,” he said, adding that the same is true of others, including Lodash (a JavaScript library) and Spring Framework (an application framework and inversion-of-control container for the Java platform). “They all come along for the ride,” he said. “They’re not part of any monitoring. They’re not getting patched because people simply don’t know about them.”

Building trust

There are multiple other necessary activities within SCRM that, collectively, are intended to make it much more likely that a software product can be trusted. Many of them are contained in the guidance on software supply chain security issued in early May by the National Institute of Standards and Technology in response to the Biden EO.

Mackey said this means that organizations will need their “procurement teams to be working with the government’s team to define what the security requirements are. Those requirements are then going to inform what the IT team is going to do — what a secure deployment means. So when somebody buys something you have that information going into procurement for validation.”

“A provider needs to be able to explain what their SBOM is and where they got their code because that’s where the patches need to come from,” he said.

Finally, Mackey said the biggest threat is the tendency to assume that if something is secure at one point in time, it will always be secure.

“We love to put check boxes beside things — move them to the done column and leave them there,” he said. “The biggest threat we have is that someone’s going to exploit the fact that we have a check mark on something that is in fact a dynamic something — not a static something that deserves a check mark. That’s the real world. It’s messy — really messy.”

How prepared are software vendors to implement the security measures that will eventually be required of them? Mackey said he has seen reports showing that for some of those measures, the percentage is as high as 44%. “But around 18% is more typical,” he said. “People are getting a little bit of the message, but we’re not quite there yet.”

So for those who want to sell to the government, it’s time to up their SCRM game. “The clock is ticking,” Mackey said.

Click here to find more Synopsys content about securing your software supply chain.

Security

‘Mind the gap’ is an automated announcement used by London Underground for more than 50 years to warn passengers about the gap between the train and the platform edge.

It’s a message that would resonate well in IT operations. Enterprises increasingly rely on “work from anywhere” (WFA) infrastructure, software as a service (SaaS), and public cloud networks. In this complex platform mix, visibility gaps can quickly surface in the performance of ISP and cloud networks, along with remote work environments.

Gaps are also inherent in today’s IT standard operating procedures. Network teams follow a certain set of rules to begin troubleshooting and ultimately isolate and fix issues. If these standardized workflows are missing core features, or teams need multiple tools to run these troubleshooting procedures, this can quickly result in delayed remediation and potential business disruption.

Dimensional Research, for example, reveals that 97% of network and operations professionals report network challenges and 81% confirm network blind spots. Complete outages (37%) are the worst problem, although network issues have also delayed new projects (36%).

So how can IT operations close the gap? The enterprise needs network monitoring software that reaches beyond the data center infrastructure, providing end-to-end network delivery insights that correspond with users’ digital experience.

It’s time to rethink network monitoring. Here are four key capabilities network professionals should look for in a modern network monitoring platform.

- User experience: Moving business applications to multi-cloud platforms and co-located data centers makes third-party networks a performance dependency. Digital experience monitoring along the network path between the end user and the cloud deployment becomes a necessity to ensure seamless user experiences.
- Scale: Demand for SaaS, unified communications as a service (UCaaS), and contact center as a service (CCaaS), together with the WFA culture, is rapidly expanding the network edge. Network professionals need to harness the complexity and dynamic nature of these deployments.
- Security: The modern WAN infrastructure involves technologies such as software-defined WAN (SD-WAN), next-generation firewalls (NGFW), and much more. Misconfigurations can easily be missed, resulting in performance issues or security breaches.
- Visibility: The remotely connected workplace introduces a new, uncharted network ecosystem. Visibility into remote networks such as home WiFi/LAN is at best patchy, making issue resolution a guessing game. (A minimal sketch of one such measurement follows this list.)
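As one small example of the visibility item above, the sketch below measures a basic user-experience metric: TCP connection latency from a remote worker’s machine to the services they rely on. The service names are placeholders, and this is an assumption-laden illustration rather than how any particular monitoring product works; real platforms collect far richer telemetry.

```python
# Sketch: time the TCP handshake to key services as a crude
# user-experience metric. Hostnames are placeholders.

import socket
import time
from typing import Optional

SERVICES = {"saas-app": ("example.com", 443), "vpn-gateway": ("example.net", 443)}

def tcp_connect_ms(host: str, port: int, timeout: float = 3.0) -> Optional[float]:
    """Time a TCP connection attempt; None means the service was unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000
    except OSError:
        return None

for name, (host, port) in SERVICES.items():
    latency = tcp_connect_ms(host, port)
    print(name, "unreachable" if latency is None else f"{latency:.0f} ms")
```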

The bottom line? IT teams need a complete, efficient view of their network infrastructure, including all applications, users, and locations. Without it, IT risks losing control of operations, ultimately eroding confidence in IT, and potentially forcing decision-makers to reallocate or reduce IT budgets.

Now is the time to rethink network operations and evolve traditional NetOps into Experience-Driven NetOps. With Experience-Driven NetOps, network teams can proactively identify the root cause of problems and isolate issues within a single tool that enables one-click access to all their standard operating procedures through out-of-the-box workflows and user-experience metrics. This industry-first approach delivers digital experience and network performance insights across the edge infrastructure, internet connections, and cloud services, allowing teams to plan for network support where it matters most.

Maybe it’s time for that “mind the gap” announcement to be broadcast in IT departments, with a slight change to “mind the growing void”, to ensure networks are experience-proven and network operations teams are experience-driven.

Tackle the new challenges of network monitoring in this eBook, 4 Imperatives for Monitoring Modern Networks. Read now and discover how organizations can plan their monitoring strategy for the next-generation network technologies.

Networking