Cybersecurity breaches can result in millions of dollars in losses for global enterprises and they can even represent an existential threat for smaller companies. For boards of directors not to get seriously involved in protecting the information assets of their organizations is not just risky — it’s negligent.

Boards need to be on top of the latest threats and vulnerabilities their companies might be facing, and they need to ensure that cybersecurity programs are getting the funding, resources and support they need.

Lack of cybersecurity oversight

In recent years boards have become much more engaged in security-related issues, thanks in large part to high-profile data breaches and other incidents that brought home the real dangers of having insufficient security. But much work remains to be done. The fact is, at many organizations, board oversight of cybersecurity remains inadequate.

Research has shown that many boards are not prepared to deal with a cyberattack, with no plans or strategies for cybersecurity response. Few have a board-level cybersecurity committee in place.

More CIOs are joining boards

On a positive note, more technology leaders, including CIOs, are being named to boards, and that might soon extend to security executives as well. Earlier this year the US Securities and Exchange Commission (SEC) proposed amendments to its rules to enhance and standardize disclosures regarding cybersecurity risk management, strategy, governance, and incident reporting by public companies.

This includes requirements for public companies to report any board member’s cybersecurity expertise, reflecting a growing understanding that the disclosure of cybersecurity expertise on boards is important when potential investors consider investment opportunities and shareholders elect directors. This could lead to more CISOs and other security leaders being named to boards.

Greater involvement of IT and security executives on boards is a favorable development in terms of better protecting information resources. But in general, boards need to become savvier when it comes to cybersecurity and be prepared to take the proper actions.

Asking the right questions

The best way to gain knowledge about security is to ask the right questions. One of the most important: Which IT assets is the organization securing? Answering this requires the ability to monitor the organization’s endpoints at any time, identify which systems are connecting to the corporate network, determine which software is running on devices, and so on.

Deploying reliable asset discovery and inventory systems is a key part of gaining a high level of visibility to ensure the assets are secure.
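For illustration, here is a minimal sketch of the kind of inventory reconciliation this implies: comparing devices observed on the network against a known asset register to flag unknown or unmanaged endpoints. The data structures, hostnames, and fields are hypothetical placeholders, not any particular product’s API.

```python
# Hypothetical illustration: reconcile observed endpoints against a known asset register.
# Field names and data sources are placeholders, not a specific product's API.

known_assets = {
    "laptop-042": {"owner": "finance", "os": "Windows 11", "managed": True},
    "srv-db-01":  {"owner": "it",      "os": "Ubuntu 22.04", "managed": True},
}

# Endpoints observed connecting to the corporate network (e.g., from DHCP or NAC logs).
observed_endpoints = ["laptop-042", "srv-db-01", "raspberrypi-kiosk"]

unknown = [host for host in observed_endpoints if host not in known_assets]
unmanaged = [name for name, meta in known_assets.items() if not meta["managed"]]

print(f"Unknown devices on the network: {unknown}")
print(f"Known but unmanaged assets: {unmanaged}")
```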

Another important question: How is the organization protecting its most vital resources? These might include financial data, customer records, source code for key products, encryption keys and other security tools, and other assets.

Not all data is equal from a security, privacy and regulatory perspective, and board members need to fully understand the controls in place to secure access to this and other highly sensitive data. Part of the process for safeguarding the most vital resources within the organization is managing access to these assets, so boards should be up to speed on what kinds of access controls are in place.

Board members also need to ask which entities pose the greatest security risks to the business at any point in time. The challenge here is that threat vectors are constantly changing. But that doesn’t mean boards should settle for a generic response.

Assessing threats from the inside out

A good assessment of the threat landscape includes looking not just at external sources of attacks but within the organization itself. Many security incidents originate via employee negligence and other insider threats. So, a proper follow-up question would be to ask what kind of training programs and policies the company has in place to ensure that employees are practicing good security hygiene and know how to identify possible attacks such as phishing.

Part of analyzing the threat landscape also includes asking what the company looks like to attackers and how they might carry out attacks. This can help in determining whether the organization is adequately protected against a variety of known tactics and techniques employed by bad actors.

In addition, board members should ask IT and security executives about the level of confidence they have in the organization’s risk-mitigation strategy and its ability to quickly respond to an attack. This is a good way to determine whether the security team believes it has adequate resources and support to meet cybersecurity needs, and what specific investments would enhance security.

It’s most effective when the executives come prepared with specific data about security shortfalls, such as the number of critical vulnerabilities the company has faced, how long it takes on average to remediate them, the number and extent of outages due to security issues, security skills gaps, etc.
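As a hedged example of the kind of data that supports this conversation, the sketch below computes two of the metrics mentioned above (open critical vulnerabilities and average time to remediate) from a handful of hypothetical records; the dates and severities are invented for illustration.

```python
# Hypothetical sketch: summarizing remediation metrics a security team might bring to the board.
from datetime import date

# Each record: (severity, date_discovered, date_remediated or None if still open)
vulns = [
    ("critical", date(2022, 9, 1), date(2022, 9, 12)),
    ("critical", date(2022, 9, 20), date(2022, 10, 1)),
    ("critical", date(2022, 10, 15), None),  # still open
]

closed = [(d, r) for _, d, r in vulns if r is not None]
open_count = sum(1 for _, _, r in vulns if r is None)
avg_days_to_fix = sum((r - d).days for d, r in closed) / len(closed)

print(f"Critical vulnerabilities still open: {open_count}")
print(f"Average days to remediate (closed criticals): {avg_days_to_fix:.1f}")
```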

In the event of an emergency

Finally, board members should ask what the board’s role should be in the event of a security incident. This includes the board’s role in determining whether to pay a ransom following a ransomware attack, how board members will communicate with each other if corporate networks are down, or how they will handle public relations after a breach, for example.

It has never been more important for boards to take a proactive, vigilant approach to cybersecurity at their organizations. Cyberattacks such as ransomware and distributed denial of service are not to be taken lightly in today’s digital business environment where an outage of even a few hours can be extremely costly.

Boards that are well informed about the latest security threats, vulnerabilities, solutions and strategies will be best equipped to help their organizations protect their valuable data resources as well as the devices, systems and networks that keep business processes running every day.

Want to learn more? Check out this Cybersecurity Readiness Checklist for Board Members.

Risk Management


In today’s fast-paced business world, where companies must constantly innovate to keep up with competitors, depending on fully custom software solutions created through traditional programming and manual coding is insufficient.

Instead, enterprises increasingly are pursuing no-code and low-code solutions for application development. No-code and low-code development entails creating software applications by using a user-friendly graphical interface that often includes drag-and-drop functionality. These solutions require less coding expertise, making application development accessible to a larger swath of workers. That accessibility is critical, especially as companies continue to face a shortage of highly skilled IT workers. In fact, IDC has identified low-code/no-code mobile applications as a driver of the future of work.

“The key difference between traditional and no-code and low-code solutions is just how easy and flexible the user experience can be with no-code and low-code,” says Alex Zhong, director of product marketing at GEP. “Speed has become more and more important in the business environment today. You need to get things done in a rapid way when you’re responding to the disruptive environment and your customers.”

The traditional application development process is both complicated and multilayered. It entails zeroing in on the business need, evaluating and assessing the idea, submitting the application development request to IT, getting evaluations and approvals to secure funding, designing, creating and producing, and doing user testing.

“Traditionally it’s a lengthy process with many people involved,” Zhong says. “This can take quite a few weeks and often longer.” Not only does the time workers spend accrue but various costs also quickly add up. “The new way of application development reduces complexity, tremendously shortens the process, and puts application development more in users’ hands.”

Here are some other benefits of no-code/low-code solutions over the traditional approach:

Projects are more malleable. “With low-code solutions, you can make changes quicker,” says Kelli Smith, GEP’s head of product management for platform. With fewer levels of approval and cooks in the kitchen, it’s easy to tweak ideas on the fly and make improvements to applications as you go.

Ideas are less likely to get lost in translation. With traditional development, sometimes ideas aren’t perfectly translated into a product. With the user at the helm working closely with IT, ideas are more likely to be accurately executed.

IT and the business work better together. No-code and low-code solutions are typically driven by someone close to the business, but IT is still involved in an advisory role — especially in initial stages. The relationship becomes more of a collaborative one. “The business is developing together with IT,” Smith says.

Developers are freed up for more complex work. With the business more involved in application development, IT workers’ time is freed up to dedicate to more complicated tasks and projects rather than an excess of manual or administrative work.

Often, moving away from traditional application development is a process for enterprises. Companies may start with low-code solutions and gradually shift toward no-code solutions. The evolution requires a culture change, vision from leadership, and endorsement from IT.

Importantly, employees also need to be empowered to participate.

GEP believes that no-code/low-code is the way of the future. The company is leading efforts in no-code and low-code solutions through partners and investments in solutions. “In today’s environment,” Zhong says, “no-code/low-code is simply key to giving enterprises more flexibility.”

At GEP we help companies with transformative, holistic supply chain solutions so they can become more agile and resilient. Our end-to-end comprehensive, unified solutions harness technology to change organizations for the better. To find out more, visit GEP.

Supply Chain

Organizations that have embraced a cloud-first model are seeing myriad benefits. The elasticity of the cloud allows enterprises to easily scale up and down as needed. In practice, with organizations more distributed since Covid-19, many enterprises prefer not to commit to just one cloud service and instead source multiple cloud solutions from a variety of vendors.

The cloud also helps enhance security, improve insight into data, and aid disaster recovery and cost savings. Cloud has become a utility for successful businesses. Gartner predicted that around 75% of enterprise customers using cloud infrastructure as a service (IaaS) would adopt a deliberate multi-cloud strategy by 2022, up from 49% in 2017.

“Businesses don’t want to be locked into one particular cloud,” says Tejpal Chadha, Global Head, Digitate SaaS Cloud & Cyber Security. “They want to run their applications on different clouds so they’re not dependent on one in case it were to temporarily shut down. Multi-cloud has really become a must-have for organizations.”

Yet, at the same time, companies that tap into these multi-cloud solutions are opening themselves up to additional, and potentially significant, security risks. They become increasingly vulnerable in an age of more sophisticated, active cyberhackers.

To address security risks, cloud services have their own monitoring processes and tools that are designed to keep data secure. Many offer customers basic monitoring tools for free. But if companies want a particularly robust monitoring service, they often must pay added fees. With multiple clouds, this added expense can be significant.

“The cost goes up when you have to have specific monitoring tools for each cloud,” Chadha says. “Monitoring also needs to be instantaneous or real-time to be effective.”

Organizations using multi-cloud solutions are also susceptible to cloud sprawl, which happens when an organization lacks visibility into or control over its cloud computing resources. The organization therefore ends up with excess, unused servers or paying higher rates than necessary.

For enterprises safeguarding their multi-cloud solutions, a better tactic is to use just one third-party overarching tool for all clouds – one that monitors everything instantaneously. ignio™, the award-winning enterprise automation platform from AIOps vendor Digitate, does just that.

ignio AIOps, Digitate’s flagship product, facilitates autonomous cloud IT operations by tapping into AI and machine learning to provide a closed-loop solution for Azure and AWS, with versions for Google Cloud (GCP) and private clouds also coming soon. With visibility and intelligence across layers of cloud services, ignio AIOps provides multi-cloud support by leveraging cloud-native technologies and APIs. It also provides actionable insights to better manage your cloud technology stack.

ignio is unique in that it cuts across multiple data centers, both private and public clouds, and seamlessly handles everything in a single window. It provides a bird’s-eye view of the health of a company’s data centers and clouds. Then, ignio continuously monitors, predicts, and takes corrective action across clouds while also automating previously manual tasks, which Digitate calls “closed-loop remediation.” Closed-loop remediation enables companies to automate actions for remediation, compliance, and other essential CloudOps tasks.
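To make the closed-loop pattern concrete, here is a conceptual sketch of a monitor–detect–remediate–verify cycle across a small multi-cloud fleet. The health checks, remediation action, and resource fields are invented for illustration and are not ignio’s actual API or logic.

```python
# Conceptual sketch of a closed-loop remediation cycle (monitor -> detect -> act -> verify).
# Health checks and remediation below are placeholders, not ignio's actual implementation.

def check_health(resource):
    # Placeholder: in practice this would query cloud-provider or monitoring APIs.
    return resource["cpu_util"] < 0.9 and resource["disk_free_gb"] > 5

def remediate(resource):
    # Placeholder corrective action, e.g., clearing temp files or scaling out the service.
    resource["disk_free_gb"] += 20
    return resource

fleet = [
    {"name": "vm-azure-web-01", "cpu_util": 0.42, "disk_free_gb": 3},
    {"name": "vm-aws-app-02",   "cpu_util": 0.55, "disk_free_gb": 40},
]

for resource in fleet:
    if not check_health(resource):
        remediate(resource)
        status = "remediated" if check_health(resource) else "escalate to operator"
        print(f"{resource['name']}: {status}")
    else:
        print(f"{resource['name']}: healthy")
```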

“The ignio AIOps software first comes in and, in the blueprinting process, gives a holistic view of what companies have in their universe,” Chadha says. “We call that blueprinting or discovery. Then, we help automate tasks. We’re completely agnostic when it comes to monitoring or taking corrective action, or helping increase automation across all of these clouds.”

As Digitate ignio customers automate processes and reduce manual IT efforts, they’re finding they’re saving money, in some cases millions of dollars a year. For many companies, tasks that once took three days now take only an hour.

“The biggest benefits are that less manual work is ultimately needed, and then there’s also the cost savings,” Chadha says. “Enterprises using this tool are managing their multi-cloud estate much more efficiently.”

To learn more about Digitate ignio and how Digitate’s products can help you thread the multi-cloud needle, visit Digitate.

IT Leadership

CRM software provider Zendesk has decided to lay off 300 employees from its global workforce of 5,450 employees to reduce operating expenses, a recent filing with the US Securities and Exchange Commission (SEC) showed.

The decision comes just months after the company was acquired by a consortium of private equity firms for $10.2 billion. “This decision (layoffs) was based on cost-reduction initiatives intended to reduce operating expenses and sharpen Zendesk’s focus on key growth priorities,” the company wrote in the SEC filing.

In a separate press statement, Zendesk’s executive team took responsibility for the job cuts and said the company is pulling back on the aggressive hiring it had previously invested in, which ran well ahead of its business growth.

“…we grew our team much faster than we should have based on revenue growth expectations that were not pragmatic. As an executive team we take responsibility for that,” the company said.

The statement outlined how Zendesk’s top management took various measures, such as closing over 100 positions, to address its bottom line but was unable to resolve the issue.

The roles impacted by the layoffs were decided based on five strategic priorities, the company said, including “optimizing our processes and systems, reducing duplication of effort, increasing our spans of control and rebalancing our roles towards Go to Market to build on our enterprise opportunity while continuing to build and deliver compelling products for our customers.”

Layoffs to cost Zendesk $28 million

The layoffs are estimated to set Zendesk back by about $28 million, primarily due to costs incurred on severance payments and employee benefits, the SEC filing showed.

Out of the total estimated cost, the company expects to incur $8 million in the fourth quarter of 2022.

As part of the layoffs, Zendesk said it will provide outgoing employees three months of base salary along with one week’s pay for each year of full service.

Other benefits include a prorated portion of the employee’s annual bonus payable at target, two months of equity award vesting, health insurance benefit coverage and job search support resources.
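As an illustrative back-of-envelope calculation only, the sketch below applies the cash portion of the terms described above (three months of base salary plus one week’s pay per year of full service) to a hypothetical employee. The salary and tenure figures are invented; actual packages will vary.

```python
# Illustrative arithmetic only: cash severance under the terms described above,
# for a hypothetical employee. Salary and tenure are assumptions for the example.

annual_base_salary = 120_000      # hypothetical figure
years_of_service = 4

three_months_base = annual_base_salary / 4
one_week_per_year = (annual_base_salary / 52) * years_of_service

print(f"Estimated cash severance: ${three_months_base + one_week_per_year:,.0f}")
```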

Tech firms continue to see layoffs

The CRM software provider’s decision to lay off almost 5% of its workforce comes at a time when other tech firms such as Salesforce, Meta, Twitter, Microsoft and Oracle have announced job cuts in the wake of economic headwinds.

On Wednesday, Meta, the parent company of Facebook, Instagram and WhatsApp, said it is preparing to cut thousands of jobs, impacting 13% of its global workforce.

Salesforce, another CRM software provider, also announced mass layoffs this week, cutting at least several hundred jobs from its 73,000-person workforce.

Last month, Microsoft said it would lay off close to 1,000 employees, and cloud service provider Oracle has continued to lay off staff globally over the past few months.

Tech industry prepares for more layoffs

Announcements of thousands of job cuts in the past couple of weeks may not be the end of the trouble for the technology sector. Analysts expect the worst is yet to come.

“It’s a good bet that tech companies that haven’t yet laid off employees are carefully considering whether or not to do so. It wouldn’t be surprising to see more layoffs in the next few months, particularly among firms whose fiscal year ends on December 31st,” JP Gownder, principal analyst at Forrester said in a statement.

Gownder said the job cuts were a result of these companies trying to set up finances for success in 2023. “Widespread economic concerns—some prompted by rising interest rates, others by the war in Ukraine, high fuel costs, and supply chain issues—are prompting these moves in anticipation of lower demand.”

The layoffs, according to the analyst, also point to skills challenges facing many employees.

Workers in positions that don’t require “enough” IT skills will find it more difficult to land jobs than people who are considered top talent in the technology sector, said Gownder. “Many of the laid-off tech workers have skills that will be valuable in other sectors. Nearly every company, regardless of industry is now a “technology firm” that relies on software developers, engineers, and IT talent. So top tech talent who lose their jobs will find other positions, most likely.”

IT Jobs

As 2022 wraps up, many IT leaders are re-evaluating their current infrastructure to understand how they can continue to modernize, reduce complexity at scale and — most importantly — protect their organization. Common pain points include management overhead and rising costs, with their overall impact on budget becoming a larger and larger concern.

But it’s not just the price tag. Ransomware attacks, natural disasters, and other unplanned outages continue to rise, requiring more attention and highlighting business risk. To reduce the impact of these outages, enterprises require simple, automated responses that cover 100% of business requirements while minimizing resources and improving processes. Planned downtime, for IT migrations and moving to the cloud, also consumes valuable organizational time. IT leaders know they need to invest more heavily in modern data protection services to streamline operations and avoid disruptions.

They also know that without a reliable disaster recovery (DR) solution to protect business-critical applications, all their modernization efforts could be rendered moot in a matter of seconds.

Downtime and data loss are common – and expensive

An IDC survey across North America and Western Europe highlights the need for effective disaster recovery. Titled  “The State of Ransomware and Disaster Preparedness 2022,” the study shows that “79% of respondents indicated they had activated a disaster response within the past 12 months, with 61% of those responses triggered by ransomware or other malware. Indeed, 60% of respondents said they had experienced unrecoverable data during that same time, substantially more than the 43% response rate to the same question a year earlier.”

According to these numbers, C-suites now equate ransomware to a disaster event. They also recognize that these attacks and outages are no longer a question of if, but of when, how often, and at what cost. In that context, an agile and reliable DR solution is a must-have in today’s digital world.

The cost of downtime and data loss

In its report, IDC states that downtime costs around $250,000 per hour, on average, across all industries. Since this average includes small businesses, the per-hour cost to mid-size and enterprise businesses actually surpasses $1 million.

Financial losses represent only part of the problem. Data loss, productivity, and brand reputation are also at stake. A disruption can lead to even more significant losses for your organization, including loss of employees, users, and customers. If the breach leads to exposure of personal information, just a few minutes of downtime can result in years of reputational damage and loss of trust. On top of that, the average recovery time from a ransomware attack is 21 days, so administrative drain and long-term cost will weigh heavily on your organization for weeks, if not longer. 
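To put the per-hour figure cited above into perspective, here is a simple back-of-envelope calculation. The outage duration and the exact enterprise hourly rate are assumptions chosen for the example, not figures from the IDC study.

```python
# Back-of-envelope illustration using the per-hour figure cited above.
# The outage duration and exact enterprise rate are assumptions for this example.

hourly_cost_enterprise = 1_000_000   # "surpasses $1 million" per hour for larger businesses
outage_hours = 6                     # hypothetical outage duration

direct_cost = hourly_cost_enterprise * outage_hours
print(f"Estimated direct cost of a {outage_hours}-hour outage: ${direct_cost:,.0f}")
```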

If ransomware attacks are inevitable, then IT decision makers must prioritize disaster recovery. To do so effectively, they need to understand two factors that define a truly robust DR solution.

You need enterprise-grade, continuous protection

Today’s organizations can’t afford to lose data. To protect data, productivity, and revenue, companies need to increase the granularity of recovery while maintaining performance. To accomplish that, they need true Continuous Data Protection (CDP), which provides granular recovery to within seconds, as well as the option to recover to many more points in time. This capability dramatically reduces the impact of outages and disruptions to your organization.

CDP is the best way to protect your business and achieve business continuity. Compare it to traditional backup and snapshots, which entail scheduling, agents, and impacts to your production environment. With true CDP, your recovery point objective (RPO) is reduced to minutes, with just a few seconds of data loss.
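The difference comes down to the worst-case data-loss window each approach allows. The sketch below compares that window for a nightly backup, periodic snapshots, and journal-based CDP; the specific intervals are assumptions chosen for illustration, not vendor specifications.

```python
# Simple illustration of worst-case data loss (RPO) under different protection schemes.
# Backup, snapshot, and checkpoint intervals below are assumptions for the example.

nightly_backup_interval_s = 24 * 60 * 60    # one backup per day
snapshot_interval_s = 4 * 60 * 60           # snapshots every four hours
cdp_checkpoint_interval_s = 5               # journal checkpoints every few seconds

for name, interval in [("Nightly backup", nightly_backup_interval_s),
                       ("4-hour snapshots", snapshot_interval_s),
                       ("Continuous data protection", cdp_checkpoint_interval_s)]:
    print(f"{name}: worst-case data loss of roughly {interval} seconds ({interval / 3600:.2f} hours)")
```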

Picture this scenario: A cyberattack suddenly hits your organization at 10 p.m., but you’re able to recover immediately to a state seconds before the attack. As a result, within just a few minutes, it’s as if the attack never happened. With true CDP, that kind of disaster recovery can be your reality.

The right DR can elevate operational efficiency and reduce costs

As you begin to review and refine your day-to-day operations, it’s critical to measure how much downtime would cost your business. From that calculation, you can provide a compelling argument as to why your organization needs further investments in data protection. The 3-2-1 backup rule is no longer enough, as businesses cannot afford hours of downtime and data loss. Your backup service needs to work hand in hand with your disaster recovery solution to ensure you’re well secured.

Adding true CDP technology to your disaster recovery plans enables you to increase your operational efficiency in five ways:

1. Leverage automation and orchestration to reduce the need for disparate tools and the amount of administrative drain while protecting your workloads.
2. Increase productivity and drive innovation while allowing your team to focus on innovation or clearing ticket backlogs.
3. Modernize and scale your infrastructure as needed with a DR solution that fits you at every stage, regardless of upgrades or expansions.
4. Save money with granular recovery to minimize data loss and resources dedicated to managing an unplanned disruption.
5. Monitor, measure, and report accurately on your SLAs, RPOs, and RTOs to meet compliance requirements.

SaaS-based disaster recovery delivers even more

A little over a year ago, Hewlett Packard Enterprise (HPE) acquired Zerto, the industry-leading disaster recovery solution. Zerto pioneered the DR industry with true CDP, and it continues to deliver the fastest recovery experience due to three key capabilities:

Near-synchronous replication: Data replication that leverages the speed of synchronous replication without impacting your production environment and the efficiency and restore capabilities of asynchronous. Zerto uses block-level replication, so each change is copied at the hypervisor level without a need for agents, snapshots, or scheduled maintenance.
Unique journaling technology: Checkpoints of data are stored in a journal every few seconds for up to 30 days. Granular restore points enable you to recover a wide array of objects directly from the journal, so you can recover whole sites, individual applications, and even single files.
Application-centric recovery: Ensure write-order fidelity across all VMs, datastores, and hosts. This allows an application to recover as a single cohesive unit to an exact point in time.
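To illustrate the journaling idea described above in the abstract, here is a conceptual sketch of a recovery journal: periodic checkpoints kept within a bounded retention window, from which a recovery point can be selected. The class, intervals, and block references are invented for illustration and are not Zerto’s actual implementation.

```python
# Conceptual sketch of a recovery journal: periodic checkpoints retained for a bounded
# window, from which a recovery point is selected. Not Zerto's actual implementation.
from collections import deque

RETENTION_SECONDS = 30 * 24 * 60 * 60   # keep up to 30 days of checkpoints
CHECKPOINT_INTERVAL = 5                  # a checkpoint every few seconds

class Journal:
    def __init__(self):
        self.checkpoints = deque()       # (timestamp, reference to replicated blocks)

    def add_checkpoint(self, timestamp, block_ref):
        self.checkpoints.append((timestamp, block_ref))
        # Drop checkpoints that fall outside the retention window.
        while self.checkpoints and timestamp - self.checkpoints[0][0] > RETENTION_SECONDS:
            self.checkpoints.popleft()

    def recovery_point_before(self, timestamp):
        # Return the latest checkpoint at or before the requested point in time.
        candidates = [cp for cp in self.checkpoints if cp[0] <= timestamp]
        return candidates[-1] if candidates else None

journal = Journal()
for t in range(0, 60, CHECKPOINT_INTERVAL):
    journal.add_checkpoint(t, f"blocks@{t}")
print(journal.recovery_point_before(22))   # -> (20, 'blocks@20')
```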

Today, this underlying technology is available on the globally accessible HPE GreenLake edge-to-cloud platform. HPE GreenLake for Disaster Recovery is a SaaS-based DR solution that provides true CDP and a seamless cloud operational experience. With HPE GreenLake, you can manage everything from storage to compute to networking — and now disaster recovery — from a single, unified platform on any device.

Radically reduce data loss and downtime through continuous data protection on a global, scalable platform with HPE GreenLake for Disaster Recovery and start boosting your organization’s confidence in recovery from any outage.

About Kyleigh Fitzgerald

Kyleigh Fitzgerald is Senior Product Marketing Manager at Zerto, a Hewlett Packard Enterprise Company. She joined Zerto with over 10 years of tech marketing experience, with a background spanning the web industry, programmatic advertising, and IT consulting and services.

Disaster Recovery, HPE, IT Leadership

Many view today’s supply chains as true marvels of modern existence — push a button and a desired object is delivered to one’s doorstep. Others see modern supply chains disrupting local economies and damaging the environment.

Massively complex, interdependent, and subject to disruptions, supply chains were, for the most part just a few years ago, the purview of midlevel executives operating out of sight of newsrooms and boardrooms. The pandemic, escalating geopolitical tensions, cyberattacks, and severe weather events have made the supply chain a universal issue subject to boardroom and even White House scrutiny.

Supply chain disruptions and irregularities leading to shortages, delays, and escalating price increases have become defining realities of modern business today. So too is the fallout of an ever-expanding knowledge set that sees modern enterprises filled with black boxes of “we-know-it’s-important-but-we-don’t-really-understand-it” specialty areas. Supply chain used to be one of those black boxes. But CEOs and boards of directors are now demanding that the supply chain black box be opened and fully explained. This is not a trivial exercise — and it is one that CIOs need to undertake strategically.

The CIO as transparency and data delivery champion

Prior to the pandemic, most people — even businesses — took supply chains for granted. You wanted something, or needed a part to produce a product, and you simply ordered it and it would be delivered — quickly, affordably, and with forecastable precision. This is no longer the case. Supply chain realities are changing how organizations operate, and how they design and deliver new products and services.

But the first step to making supply chains more resilient is transparency. For IT, this means mapping the total end-to-end flow of material, tasks, and costs from product/service design to ultimate customer delivery. This exercise will surface high-risk areas of the supply chain such as the auto industry’s overdependence on a few semiconductor factories in Taiwan, or the global pharmaceutical sector’s reliance on Chinese supplies for foundational life science ingredients.

One life sciences organization had secured the raw materials needed to manufacture its end product but failed to account for supply issues with the packaging of that medicine. Shortages in the ink used to print expiration dates on the packaging made shipping the product impossible. The adequate supply of ink for labeling, not raw materials for production, had become the bottleneck in the supply chain. Companies must pay attention to all aspects of their supply chain.
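Once the end-to-end flow is mapped, single-source dependencies like the labeling ink above can be surfaced programmatically. The sketch below shows the idea against a toy bill of materials; the components and supplier names are hypothetical.

```python
# Illustrative sketch: flag single-source dependencies in a mapped bill of materials.
# Components and suppliers are hypothetical placeholders.

# component -> list of qualified suppliers
bill_of_materials = {
    "active ingredient": ["supplier-a", "supplier-b"],
    "blister packaging": ["supplier-c", "supplier-d"],
    "expiration-date ink": ["supplier-e"],          # single source: a hidden bottleneck
}

single_source = [part for part, suppliers in bill_of_materials.items() if len(suppliers) < 2]
print(f"Single-source components needing attention: {single_source}")
```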

Of course, history tells us that management teams have a tendency to overcorrect in response to crises. Yes, we have learned that existing supply chains are not as resilient as we thought. But before rearchitecting the entire supply chain, CIOs and their C-suite colleagues need to collect estimates of how much more resilient supply chains will actually cost.

Scholars at the DHL Initiative on Globalization at the NYU Stern Center for the Future of Management remind us that attitudes regarding supply chain strategies are not etched in stone: “In an April 2020 survey, 83% of executives said their companies planned on nearshoring to regionalize their supply chains. When the same survey was repeated in March-April 2021, only 23% still said they were planning on nearshoring.”

Historically the CIO and the IT organization have delivered and managed the transactional and information systems that drive the supply chain. In most organizations, however, IT and the CIO have not taken responsibility for aggregating and making sense of the end-to-end data that supply chain systems generate. They should assist the data analytics team in implementing digital dashboards for end-to-end supply chain visibility.

Supply chain analytics are the key way CIOs can help address this central business issue — and help ensure the strategic response on the part of the business to supply issues is measured, realistic, and impactful.

As for customers’ concerns about the impact of supply chains on the environment, analytics can play a part here too, as can messaging.

Research at MIT’s Sustainable Supply Chain Lab shows that with the proper messaging, “70% of the consumers surveyed were willing to delay home deliveries by approximately five days if given an environmental incentive to do so at the time of purchase.” Furthermore, the words used to describe the eco-benefit mattered as well: “Around 90% of respondents accepted slower deliveries when they were told about the number of trees saved, compared with 40% of those who were told about reduced emissions.”

So, in addition to helping establish ESG-related metrics around the impact of their companies’ supply chains, CIOs can also help establish channels for open and honest communication with customers regarding supply chain realities through customer engagement initiatives aimed at putting data to work to assuage their concerns.

Supply Chain

The data center has traditionally been the central spine of your IT strategy: the core hub and home for applications, routing, firewalls, processing, and more. However, trends such as the cloud, mobility, and pandemic-induced homeworking are upending everything.

Now, the enterprise relies on distributed workplaces and cloud-based resources that generate traffic beyond the corporate network, from home offices to cloud platforms. Conventional networking models that backhaul traffic to the data center are seen as slow, resource-intensive, and inefficient. Ultimately, the Internet is the new enterprise network.

If the core data center is the spine, then the wide-area network (WAN) has to be the arms, right? During the pandemic, a survey revealed that 52% of U.S. businesses have adopted some form of SD-WAN technology. Larger enterprises, like national (79%) and global (77%) businesses, have adopted SD-WAN at much higher rates than smaller firms.

But operational visibility is an essential component of an SD-WAN implementation because, unlike MPLS links, the internet is a diverse and unpredictable transport. SD-WAN orchestrator application policies and automated routing decisions make day-to-day operations easier but can also degrade overall end-to-end performance. As a result, applications can run slower than before a corrective action, making these issues very difficult to troubleshoot without additional insight or validation.

Visibility beyond the edge

Just think about the number of possible paths data can take to be delivered end-to-end. If you take the example of an organization having 100 branch offices, two data centers, two cloud providers, 15 SaaS applications, and using four ISPs, there are more than 7,000 possible network paths in use at any time. If the network team sticks to traditional network monitoring, limited to branch offices and data centers, overall visibility is reduced to less than 2% of the estate (102 paths out of more than 7,000). The lack of visibility beyond the edge of the enterprise network can leave network operations entirely out of control.
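One plausible way to arrive at the figures cited above is sketched below. The article does not spell out the exact formula, so the combinatorics here are an assumption: each branch reaching each destination over each ISP, versus traditional monitoring that covers only branches and data centers.

```python
# Assumed reconstruction of the path-count arithmetic cited above (not stated in the article).

branches = 100
data_centers = 2
cloud_providers = 2
saas_apps = 15
isps = 4

destinations = data_centers + cloud_providers + saas_apps      # 19 possible destinations
total_paths = branches * destinations * isps                    # 7,600 possible paths
monitored_sites = branches + data_centers                       # 102 with traditional monitoring

print(f"Possible end-to-end paths: {total_paths}")
print(f"Visibility with traditional monitoring: {monitored_sites / total_paths:.1%}")
```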

Additionally, most SD-WAN vendors only measure and provide visibility from customer-edge to customer-edge – basically, the edge network devices and the secure tunnels that connect data centers to branch offices, banks, retail stores, etc. In order to deliver a reliable and secure user experience over this new and complex network architecture, network professionals need end-to-end visibility; not just edge-to-edge.

Experience-Driven NetOps is an approach that extends visibility beyond the edge of the data center and into the branch site, remote locations, ISP and cloud networks, and remote users to provide visibility from an end-user perspective (where they connect to in the enterprise) rather than from the controller-only edge perspective. Furthermore, there are thousands more network devices behind the edge of an SD-WAN deployment. Do you really want another tool to manage those devices too?

Make no mistake, if you’re deploying new software-defined technologies but still lack visibility into the end-user experience delivered by these architectures, you are only solving half of the problem to deliver the network support your business expects. Today, reliable networks need to be experience-proven. And network operations teams have to become experience-driven.

You can learn more about how to tackle the new challenges of user experience in this eBook, Guide to Visibility Anywhere. Read now and discover how organizations can create network visibility across the network edge and beyond.

Networking

‘Mind the gap’ is an automated announcement used by London Underground for more than 50 years to warn passengers about the gap between the train and the platform edge.

It’s a message that would resonate well in IT operations. Enterprises increasingly rely on “work from anywhere” (WFA) infrastructure, software as a service (SaaS), and public cloud networks. In this complex platform mix, visibility gaps can quickly surface in the performance of ISP and cloud networks, along with remote work environments.

Gaps are also inherent in today’s IT standard operating procedures. Network teams follow a certain set of rules to begin troubleshooting and ultimately isolate and fix issues. If these standardized workflows are missing core features, or teams need multiple tools to run these troubleshooting procedures, this can quickly result in delayed remediation and potential business disruption.

Dimensional Research, for example, reveals that 97% of network and operations professionals report network challenges and 81% confirm network blind spots. Complete outages (37%) are the worst problem, although network issues have also delayed new projects (36%).

So how can IT operations close the gap? The enterprise needs network monitoring software that reaches beyond the data center infrastructure, providing end-to-end network delivery insights that correspond with users’ digital experience.

It’s time to re-think network monitoring. These are four key capabilities network professionals should consider for a modern network monitoring platform.

User experience: Moving business applications to multi-cloud platforms and co-located data centers makes third-party networks a performance dependency. Digital experience monitoring along the network path between the end user and the cloud deployment becomes a necessity to ensure seamless user experiences.
Scale: Demand for SaaS, unified communications as a service (UCaaS), contact center as a service (CCaaS), and the WFA culture is rapidly expanding the network edge. Network professionals need to harness the complexity and dynamic nature of these deployments.
Security: The modern WAN infrastructure involves technologies such as software-defined WAN (SD-WAN), next-generation firewalls (NGFW), and much more. Misconfigurations can easily be missed, resulting in performance issues or security breaches.
Visibility: The remotely connected workplace introduces a new, uncharted network ecosystem. Visibility into these remote networks such as home WiFi/LAN is at best patchy, making issue resolution a guessing game.

The bottom line? IT teams need a complete, efficient view of their network infrastructure, including all applications, users, and locations. Without it, IT risks losing control of operations, ultimately eroding confidence in IT, and potentially forcing decision-makers to reallocate or reduce IT budgets.

Now is the time to rethink network operations and evolve traditional NetOps into Experience-Driven NetOps. With Experience-Driven NetOps, network teams can proactively identify the root cause of problems and isolate issues within a single tool that enables one-click access to all their standard operating procedures through out-of-the-box workflows and user-experience metrics. This industry-first approach delivers digital experience and network performance insights across the edge infrastructure, internet connections, and cloud services, allowing teams to plan for network support where it matters most.

Maybe it’s time for that “mind the gap” announcement to be broadcast in IT departments? With a possible slight change to “mind the growing void” to ensure networks are experience-proven and network operations teams are experience-driven.

Tackle the new challenges of network monitoring in this eBook, 4 Imperatives for Monitoring Modern Networks. Read now and discover how organizations can plan their monitoring strategy for the next-generation network technologies.

Networking