Cybersecurity vendor CrowdStrike triggered a wave of computer system outages across the world on Friday, July 19, disrupting nearly every industry and sowing chaos at airports, financial institutions, and healthcare systems, among others.

At issue was a flawed update to CrowdStrike Falcon, the company’s popular endpoint detection and response (EDR) platform, which crashed Windows machines and sent them into an endless reboot cycle, taking down servers and rendering ‘blue screens of death’ on displays across the world.

How did the CrowdStrike outage unfold?

Australian businesses were among the first to report problems on Friday morning, with some continuing to experience issues throughout the day. Travelers at Sydney Airport faced delays and cancellations. At 6pm Australian Eastern Standard Time (08:00 UTC), Bank Australia posted an announcement to its home page saying that its contact center services were still experiencing problems.

Businesses across the globe followed suit as their business days began. Travelers at airports in Hong Kong, India, Berlin, and Amsterdam encountered delays and cancellations. According to the New York Times, the Federal Aviation Administration reported that US airlines grounded all flights for a period of time.

What has been the impact of the CrowdStrike outage?

CrowdStrike is one of the largest cybersecurity companies, and its software is widely used by businesses across the globe. More than half of Fortune 500 companies use security products from CrowdStrike, which CSO ranks No. 6 on its list of the most powerful cybersecurity companies.

Because of this, fallout from the flawed update has been widespread and substantial, with some calling it the “largest IT outage in history.”

To put the scale in perspective, more than 3,000 flights within, into, or out of the US were canceled on July 19, with more than 11,000 delayed. Disruptions continued in the days that followed: three days after the outage, nearly 2,500 flights within, into, or out of the US had been canceled and more than 38,000 delayed.

The outage also significantly impacted the healthcare industry, with some healthcare systems and hospitals postponing all or most procedures and clinicians resorting to pen and paper, unable to access electronic health records (EHRs).

Given the nature of the fix for many enterprises, and the popularity of CrowdStrike’s software, IT organizations have been working around the clock to restore their systems, with many still mired in the effort days after the faulty update was first served up by CrowdStrike.

On July 20, Microsoft reported that an estimated 8.5 million Windows devices had been impacted by the outage. On July 27, Microsoft clarified that its estimates are based on crash reports, which are “sampled and collected only from customers who choose to upload their crashes to Microsoft.”

What caused the CrowdStrike outage?

In a blog post on July 19, CrowdStrike CEO George Kurtz apologized to the company’s customers and partners for crashing their Windows systems. Separately, the company provided initial details about what caused the disaster.

According to CrowdStrike, a defective content update to its Falcon EDR platform was pushed to Windows machines at 04:09 UTC (00:09 ET) on Friday, July 19. CrowdStrike typically pushes updates to configuration files (called “Channel Files”) for Falcon endpoint sensors several times a day.

The defect that triggered the outage was in Channel File 291, which is stored in “C:\Windows\System32\drivers\CrowdStrike” with a filename beginning “C-00000291-” and ending “.sys”. Channel File 291 passes information to the Falcon sensor about how to evaluate “named pipe” execution, which Windows systems use for interprocess or intersystem communication. Named pipes are not inherently malicious but can be misused.
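For administrators taking inventory, here is a minimal sketch (not an official CrowdStrike tool) of how one might list Channel File 291 instances on a Windows host along with their modification times, for comparison against the 04:09 UTC push described above:

```python
# Illustrative sketch only (not CrowdStrike tooling): list Channel File 291
# instances on a Windows host with their modification times (UTC), so they can
# be compared against the 04:09 UTC push time of the defective update.
from datetime import datetime, timezone
from pathlib import Path

CHANNEL_DIR = Path(r"C:\Windows\System32\drivers\CrowdStrike")

def list_channel_file_291(directory: Path = CHANNEL_DIR):
    """Return (path, modified time in UTC) pairs for files named C-00000291*.sys."""
    results = []
    for f in directory.glob("C-00000291*.sys"):
        mtime = datetime.fromtimestamp(f.stat().st_mtime, tz=timezone.utc)
        results.append((f, mtime))
    return results

if __name__ == "__main__":
    for path, mtime in list_channel_file_291():
        print(f"{path} modified {mtime:%Y-%m-%d %H:%M} UTC")
```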

“The update that occurred at 04:09 UTC was designed to target newly observed, malicious named pipes being used by common C2 [command and control] frameworks in cyberattacks,” the technical post explained.

However, according to CrowdStrike, “The configuration update triggered a logic error that resulted in an operating system crash.”

Upon automatic reboot, the Windows systems with the defective Channel File 291 installed would crash again, causing an endless reboot cycle.

In a follow-up post on July 24, CrowdStrike provided further details on the logic error: “When received by the sensor and loaded into the Content Interpreter, problematic content in Channel File 291 resulted in an out-of-bounds memory read triggering an exception. This unexpected exception could not be gracefully handled, resulting in a Windows operating system crash (BSOD).”
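As a loose analogy only (Falcon’s sensor is a kernel-mode driver, not Python, and this is not CrowdStrike’s code), the difference between an out-of-bounds read whose exception is handled gracefully and one that is not looks like this:

```python
# Loose analogy only: Falcon's sensor is a kernel-mode driver, not Python.
# The point is the difference between an out-of-bounds read whose exception is
# handled gracefully and one that is not.

channel_entries = ["rule-1", "rule-2", "rule-3"]  # hypothetical parsed content

def read_entry_handled(index):
    try:
        return channel_entries[index]
    except IndexError:
        return None  # graceful handling: fall back instead of failing

def read_entry_unhandled(index):
    # No guard: an out-of-bounds index raises an exception that propagates.
    # In user space that ends one process; for a kernel-mode component, the
    # equivalent fault brings down the whole operating system (the BSOD).
    return channel_entries[index]

print(read_entry_handled(10))    # prints None
print(read_entry_unhandled(10))  # raises IndexError -- the "unhandled" path
```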

The defective update, which included new exploit signatures, was part of CrowdStrike’s Rapid Response Content program, which the company says goes through less rigorous testing than do updates to Falcon’s software agents. Whereas customers have the option of operating with the latest version of Falcon’s Sensor Content, or with either of the two previous versions if they prefer reliability over coverage of the most recent attacks, Rapid Response Content is deployed automatically to compatible sensor versions.

The flawed update impacted only machines running Windows. Linux and macOS machines using CrowdStrike were unaffected, according to the company.

How has CrowdStrike responded?

According to the company, CrowdStrike pushed out a fix removing the defective content in Channel File 291 just 79 minutes after the initial flawed update was sent. Machines that had not yet updated to the faulty Channel File 291 update would not be impacted by the flaw. But those machines that had already downloaded the defective content weren’t so lucky.

To remediate those systems caught up in endless reboot, CrowdStrike published another blog post with a far longer set of actions to perform. Included were suggestions for remotely detecting and automatically recovering affected systems, with detailed sets of instructions for temporary workarounds for affected physical machines or virtual servers, including manual reboots.

On July 24, CrowdStrike reported on the testing process lapses that led to the flawed update being pushed out to customer systems. In its post-mortem, the company blamed a gap in its testing process that allowed its Content Validator tool to miss the flaw in the defective Channel File 291 content update. The company has pledged to improve its testing processes by ensuring updates are tested locally before being sent to clients, adding additional stability and content interface testing, improving error handling procedures, and introducing a staggered deployment strategy for Rapid Response Content.

CrowdStrike has also sent $10 in Uber Eats credits to IT staff for the “additional work” they put in helping CrowdStrike clients recover, TechCrunch reported. The email, sent by CrowdStrike Chief Business Officer Daniel Bernard, said in part, “To express our gratitude, your next cup of coffee or late night snack is on us!” A CrowdStrike representative confirmed to TechCrunch that the Uber Eats coupons were flagged as fraud by Uber due to high usage rates.

On July 25, CrowdStrike CEO Kurtz took to LinkedIn to assure customers that the company “will not rest until we achieve full recovery.”

“Our recovery efforts have been enhanced thanks to the development of automatic recovery techniques and by mobilizing all our resources to support our customers,” he wrote.

What went wrong with CrowdStrike testing?

CrowdStrike’s review of its testing shortcomings noted that, whereas rigorous testing processes are applied to new versions of its Sensor Content, Rapid Response Content, which is delivered as a configuration update to Falcon sensors, goes through less-rigorous validation.

In developing Rapid Response Content, CrowdStrike uses its Content Configuration System to create Template Instances that describe the hallmarks of malicious activity to be detected, storing them in Channel Files that it then tests with a tool called the Content Validator.

According to the company, disaster struck when two Template Instances were deployed on July 19. “Due to a bug in the Content Validator, one of the two Template Instances passed validation despite containing problematic content data,” CrowdStrike said in its review.

Industry experts and analysts have since come out to say that the practice of rushing through patches and pushing them directly to global environments has become mainstream, making it likely that another vendor could fall prey to this issue in the future.

How has recovery from the outage fared?

For many organizations, recovering from the outage is an ongoing issue. One suggested fix is to reboot each machine manually into Safe Mode, delete the defective file, and restart the computer; doing so at scale remains a challenge.
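A minimal sketch of the delete step in that widely reported workaround, assuming it is run from Safe Mode with administrator rights; it is illustrative only, defaults to a dry run, and CrowdStrike’s published guidance should be followed on real systems:

```python
# Sketch of the delete step in the widely reported manual workaround, intended
# to be run from Safe Mode with administrator rights. Illustrative only and
# defaults to a dry run; follow CrowdStrike's published guidance on real systems.
from pathlib import Path

CHANNEL_DIR = Path(r"C:\Windows\System32\drivers\CrowdStrike")

def delete_channel_file_291(dry_run: bool = True) -> list:
    """Delete (or, with dry_run=True, just report) Channel File 291 instances."""
    touched = []
    for f in CHANNEL_DIR.glob("C-00000291*.sys"):
        if not dry_run:
            f.unlink()  # remove the defective channel file
        touched.append(f)
    return touched

if __name__ == "__main__":
    for path in delete_channel_file_291(dry_run=True):
        print(f"Would delete {path}")
```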

Some organizations with hardware refresh plans in place are reportedly considering accelerating those plans, replacing affected machines rather than committing the resources needed to apply the manual fix across their fleets.

On July 25, CrowdStrike CEO Kurtz posted to LinkedIn that “over 97% of Windows sensors are back online as of July 25.”

What is CrowdStrike Falcon?

CrowdStrike Falcon is endpoint detection and response (EDR) software that monitors end-user hardware devices across a network for suspicious activities and behavior, reacting automatically to block perceived threats and saving forensics data for further investigation.

Like all EDR platforms, CrowdStrike has deep visibility into everything happening on an endpoint device — processes, changes to registry settings, file and network activity — which it combines with data aggregation and analytics capabilities to recognize and counter threats by either automated processes or human intervention. 

To do this, Falcon runs as privileged software with deep administrative access to the systems it monitors. It is tightly integrated with the core operating system and can shut down activity it deems malicious. That tight integration proved to be a weakness for IT organizations in this instance, as the flawed Falcon update rendered Windows machines inoperable.

CrowdStrike has also introduced AI-powered automation capabilities into Falcon for IT, which it says help bridge the gap between IT and security operations.

What has been the fallout of CrowdStrike’s failure?

In addition to dealing with fixing their Windows machines, IT leaders and their teams are evaluating lessons that can be gleaned from the incident, with many looking at ways to avoid single points of failure, re-evaluating their cloud strategies, and reassessing response and recovery plans. Industry thought leaders are also questioning the viability of administrative software with privileged access, like CrowdStrike’s. And as recovery nears completion, CISOs have cause to reflect and rethink key strategies.

As for CrowdStrike, members of the US Congress have called on CEO Kurtz to testify at a hearing about the tech outage. According to the New York Times, Kurtz was sent a letter by Representative Mark Green (R-Tenn.), chairman of the Homeland Security Committee, and Representative Andrew Garbarino (R-NY).

Americans “deserve to know in detail how this incident happened and the mitigation steps CrowdStrike is taking,” they wrote in their letter to Kurtz. Kurtz was involved in a similar situation when he was CTO of McAfee and that company pushed out a faulty antivirus update that impacted thousands of customers, triggering BSODs and creating the effect of a denial-of-service attack.

Financial impacts of the outage have yet to be fully tallied, but Derek Kilmer, a professional liability broker at Burns & Wilcox, said he expects insured losses to reach $1 billion or go “much higher,” according to The Financial Times. Insurer Parametrix pegs losses at $5.4 billion for US Fortune 500 companies alone, excluding Microsoft, Reuters reported.

Based on Microsoft’s initial estimate of 8.5 million Windows devices impacted, research firm J. Gold Associates has projected IT remediation costs at $701 million, based on the 12.75 million resource-hours it estimates internal technical support teams will need to repair the machines. Couple that with Parametrix’s finding that “loss covered under cyber insurance policies is likely to be no more than 10% to 20%, due to many companies’ large risk retentions,” and the financial hit from the CrowdStrike outage is likely to be enormous.
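For context, a quick back-of-the-envelope check of the averages implied by those published totals (this is not J. Gold Associates’ own methodology, just the arithmetic the figures imply):

```python
# Averages implied by the published totals (not the research firm's own model).
devices = 8_500_000          # Windows devices Microsoft estimated were affected
resource_hours = 12_750_000  # remediation hours in the J. Gold projection
total_cost = 701_000_000     # projected IT remediation cost in USD

hours_per_device = resource_hours / devices   # = 1.5 hours per machine
cost_per_hour = total_cost / resource_hours   # ~ $55 per resource-hour

print(f"{hours_per_device:.1f} h per device, about ${cost_per_hour:.0f} per hour")
```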

In response to concerns around privileged access, Microsoft announced it is now prioritizing the reduction of kernel-level access for software applications, a move designed to enhance the overall security and resilience of the Windows operating system.

Questions have also been raised about suppliers’ responsibilities to provide quality assurance for their products, including warranties.

Delta Air Lines, which canceled nearly 7,000 flights, resulting in more than 175,000 refund requests, has hired lawyer David Boies to pursue damages from CrowdStrike and Microsoft, according to CNBC. The news outlet reports Delta’s estimated costs as a result of the outage at $500 million. Boies led the US government’s antitrust case against Microsoft in 2001. Delta CEO Ed Bastian told CNBC that the airline had to manually reset 40,000 servers and will “rethink Microsoft” for Delta’s future.

Meanwhile, CrowdStrike shareholders filed a class-action lawsuit against the company, arguing that CrowdStrike defrauded them by not revealing that its software validation process was faulty, resulting in the outage and a subsequent 32% decline in market value, totaling $25 billion.

Ongoing coverage of the CrowdStrike failure

News

July 19: Blue screen of death strikes crowd of CrowdStrike servers 

July 20: CrowdStrike CEO apologizes for crashing IT systems around the world, details fix 

July 22: CrowdStrike incident has CIOs rethinking their cloud strategies 

July 22: Microsoft pins Windows outage on EU-enforced ‘interoperability’ deal 

July 24: CrowdStrike blames testing shortcomings for Windows meltdown

July 26: 97 per cent of CrowdStrike Windows sensors back online

July 26: Counting the cost of CrowdStrike: the bug that bit billions

July 29: CrowdStrike was not the only security vendor vulnerable to hasty testing

July 29: Microsoft shifts focus to kernel-level security after CrowdStrike incident

Aug. 1: Delta Airlines to ‘rethink Microsoft’ in wake of CrowdStrike outage

Analysis

July 20: Put not your trust in Windows — or CrowdStrike 

July 22: Early IT takeaways from the CrowdStrike outage 

July 24: CrowdStrike meltdown highlights IT’s weakest link: Too much administration

July 25: CIOs must reassess cloud concentration risk post-CrowdStrike

July 29: CrowdStrike debacle underscores importance of having a plan

July 30: CrowdStrike crisis gives CISOs opportunity to rethink key strategies

Originally published on July 23, 2024, this article has been updated to reflect evolving developments.

Access to artificial intelligence (AI) and the drive for adoption by organizations are greater now than ever, yet many companies are struggling with how to manage data and the overall process. As companies open this “Pandora’s box” of new capabilities, they must be prepared to manage data inputs and outputs in secure ways or risk allowing their private data to be consumed by public AI models.

Through this evolution, it is critical that companies consider that ChatGPT is a public model built to grow and expand through advanced learning models. Private instances will soon be available in which the model answers prompted questions solely from selected internal data. As such, it’s important that companies determine where public use cases are appropriate (e.g., non-sensitive information) versus what mandates a private instance (e.g., company financial information and other data sets that are internal and/or confidential).

All in . . . but what about the data?

The popularity of recently released AI platforms such as OpenAI’s ChatGPT and Google Bard has led to a mad rush for AI use cases. Organizations are envisioning a future in which AI platforms consume company-specific data in a closed environment rather than the global ecosystem common today. AI relies on large sets of data fed into it to help create output but is limited by the quality of the data the model consumes. This was on display during the initial test releases of Google Bard, which provided a factually inaccurate answer about the James Webb Space Telescope based on reference data it ingested. Often, individuals want to drive toward the end goal first (implementing automation of data practices) without going through the necessary steps to discover, ingest, transform, sanitize, label, annotate, and join key data sets together. Without these steps, AI may produce inconsistent or inaccurate output, leaving an organization in the risky position of relying on insights that have not been vetted.

Through data governance practices, such as accurately labeled metadata and trusted parameters for ownership, definitions, calculations, and use, organizations can organize and maintain their data in a way that is usable for AI initiatives. Understanding this challenge, many organizations are now focusing on how to curate their most useful data so that it can be readily retrieved, interpreted, and utilized to support business operations.

Storing and retrieving governed data

Technology like natural language processing (NLP) allows responses to be retrieved from questions asked conversationally or as a standard business request. This process parses a request into meaningful components and ensures that the right context is applied within a response. As the technology evolves, it will allow a company’s specific lexicon to be accounted for and processed through an AI platform. One application of this is defining company-specific attributes for particular phrases (e.g., how a ‘customer’ is defined for an organization vs. the broader definition of a ‘customer’) to ensure that organizationally agreed nomenclature and meaning are applied through AI responses. For instance, an individual may ask to “create a report that highlights the latest revenue by division for the past two years” that applies all the necessary business metadata an analyst and management would expect.

Historically, such a request required individuals to convert the ask into a query run against a standard database. AI and NLP technology is now capable of processing both the request and the underlying results, enabling data to be interpreted and applied to business needs. The main challenge, however, is that many organizations do not have their data in a form that can be stored, retrieved, and utilized by AI, generally because individuals have taken non-standard approaches to obtaining data and made assumptions about how to use data sets.
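To make that concrete, here is a minimal sketch, using a hypothetical metadata catalog and table names rather than any specific product’s API, of how an NLP layer might resolve the “latest revenue by division for the past two years” request into a governed SQL query:

```python
# Hypothetical illustration: resolve a conversational request into a governed
# SQL query using organization-approved definitions from a metadata catalog.
# All names (catalog entries, table, columns) are made up for this sketch.

METADATA_CATALOG = {
    "revenue": {
        "definition": "Recognized revenue per the finance-approved calculation",
        "source_table": "finance.revenue_by_division",
        "column": "recognized_revenue_usd",
        "owner": "Finance Data Office",
    },
    "division": {
        "definition": "Reporting division per the corporate hierarchy",
        "source_table": "finance.revenue_by_division",
        "column": "division_name",
        "owner": "Finance Data Office",
    },
}

def build_report_query(metric: str, dimension: str, years_back: int) -> str:
    """Translate parsed intent ('revenue by division, past two years') to SQL."""
    m, d = METADATA_CATALOG[metric], METADATA_CATALOG[dimension]
    return (
        f"SELECT {d['column']}, fiscal_year, SUM({m['column']}) AS {metric}\n"
        f"FROM {m['source_table']}\n"
        f"WHERE fiscal_year >= EXTRACT(YEAR FROM CURRENT_DATE) - {years_back}\n"
        f"GROUP BY {d['column']}, fiscal_year\n"
        f"ORDER BY fiscal_year, {d['column']};"
    )

# The NLP front end would parse the request into this structured intent:
print(build_report_query(metric="revenue", dimension="division", years_back=2))
```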

Setting and defining key terms

A critical step for quality outputs is having data organized in a way that can be properly interpreted by an AI model. The first step in this process is to ensure the right technical and business metadata is in place. The following aspects of data should be recorded and available:

Term definition

Calculation criteria (as applicable)

Lineage of the underlying data sources (upstream/downstream)

Quality parameters

Uses/affinity mentions within the business

Ownership

The above criteria should be used as a starting point for enhancing the fields and tables captured to enable proper business use and application. Accurate metadata is critical to ensure that private algorithms can be trained to emphasize the most important data sets with reliable and relevant information.

A metadata dictionary that has appropriate processes in place for updates to the data and verification practices will support the drive for consistent data usage and maintain a clean, usable data set for transformation initiatives.
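As an illustration of what such a dictionary entry might capture (the field names and sample record below are hypothetical, not a prescribed standard), consider:

```python
# Illustrative only: one way to record the metadata fields listed above so an
# AI or analytics layer can retrieve trusted definitions. The field names and
# sample entry are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MetadataEntry:
    term: str
    definition: str
    calculation: Optional[str]   # calculation criteria, if applicable
    lineage: list                # upstream/downstream data sources
    quality_parameters: dict
    business_uses: list          # uses/affinity mentions within the business
    owner: str
    last_verified: str           # supports the update/verification process

net_revenue = MetadataEntry(
    term="net_revenue",
    definition="Gross revenue less returns, discounts, and allowances",
    calculation="gross_revenue - returns - discounts - allowances",
    lineage=["erp.sales_orders", "finance.revenue_reporting"],
    quality_parameters={"completeness": ">= 99%", "refresh": "daily"},
    business_uses=["quarterly earnings reporting", "sales commission model"],
    owner="Finance Data Office",
    last_verified="2023-03-01",
)
print(net_revenue.term, "is owned by", net_revenue.owner)
```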

Understanding the use case and application

Once the right information is recorded about the foundation of the underlying data set, it is critical to understand how the data is ultimately used and applied to a business need. Key considerations include documenting the sensitivity of the information recorded (data classification); organizing data sets into a logical data domain structure (data labeling); applying boundaries around how data is shared and stored (data retention); and defining protocols for destroying data that is no longer essential or that is legally required to be removed (data deletion).

An understanding of the correct use and application of underlying data sets allows for proper decision-making about other ways data can be used, and about which activities an organization should avoid based on strategic direction and legal or regulatory guidance. Furthermore, storing and maintaining business and technical metadata allows AI platforms to customize the content and responses they generate, giving organizations both tailored question handling and relevant response parsing, and ultimately enabling company-specific language processing capabilities.

Prepare now for what’s coming next

It is now more critical than ever that the right parameters are placed around how and where data is stored, ensuring the right data sets are retrieved by human users while allowing for growth and enablement of AI use cases going forward. AI model training relies on clean data, which can be enforced through governance of the underlying data sets. This further escalates the demand for appropriate data governance to ensure that valuable data sets can be leveraged.

This shift has greatly accelerated the need for data governance, turning what some may have seen as a ‘nice to have’ or even an afterthought into a ‘must have’ capability that allows organizations to remain competitive and be truly transformative in how they use data, their most valuable asset, both internally for operations and with their customers in an advanced data landscape. AI is putting the age-old adage of ‘garbage in, garbage out’ on steroids: any data defects flowing into the model can become part of the output, further highlighting the importance of tightening your data governance controls.

Read the results of Protiviti’s Global Technology Executive Survey: Innovation vs. Technical Debt Tug of War 

Connect with the Author

Will Shuman
Director, Technology Consulting

Data Management

The electricity supply in Australia, New Zealand and Singapore is very reliable, much more so than in many countries in East Asia, but outages do occur, and the shift to renewables is increasing the risk, as are more extreme weather events.

Also, there can be other problems about which the average user would be unaware: brownouts (sudden voltage drops), spikes (sudden voltage surges), and noise (high-frequency signals on the supply line).

Loss of power can bring a business to an immediate halt. The other problems can temporarily or permanently disable vital and costly equipment.

An uninterruptible power supply (UPS) can ensure continued business operation and protect business-critical equipment against failure or irregularity of mains electricity supply, but a smart, remotely managed UPS can do much more to protect vital IT equipment. And it can reduce, or eliminate, the need for on-site IT expertise.

This brandpost will provide an overview of these functions and detail a new offering from Schneider Electric that makes them more accessible than ever.

Smart, Remotely Managed UPS: More than just Backup Power

At its most basic, a UPS ensures clean power to connected equipment and a seamless transition to battery power when mains power fails, until its battery is discharged. For many organisations and applications these basic capabilities are insufficient. A smart UPS can provide a number of other important functions.

If the length of a power outage exceeds the capacity of the UPS battery, all connected equipment can be properly shut down before power is lost, and equipment can be correctly rebooted when power is restored. As the battery becomes depleted, non-critical equipment can be shut down first so that vital equipment stays powered for longer.
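As a rough illustration of that prioritization logic (a generic sketch, not Schneider Electric’s PowerChute software or its API, with hypothetical devices and thresholds):

```python
# Generic illustration of priority-based shutdown during an extended outage.
# Not Schneider Electric's PowerChute software; device names and thresholds
# are hypothetical. Lower threshold = stays up longer on battery.
SHUTDOWN_PLAN = [
    {"device": "digital-signage", "shutdown_below_pct": 60},  # non-critical
    {"device": "branch-wifi",     "shutdown_below_pct": 40},
    {"device": "file-server",     "shutdown_below_pct": 25},
    {"device": "core-switch",     "shutdown_below_pct": 10},  # most critical
]

def devices_to_shut_down(battery_pct: float) -> list:
    """Return devices to shut down gracefully at the given battery level."""
    return [d["device"] for d in SHUTDOWN_PLAN
            if battery_pct < d["shutdown_below_pct"]]

# As the battery drains, non-critical loads are shed first.
for level in (70, 45, 20, 5):
    print(f"battery {level}%: shut down {devices_to_shut_down(level)}")
```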

When IT equipment malfunctions, powering off and rebooting can often restore normal operation; a smart UPS can do this.

Large organisations have many facilities, some large, some small. Many might be so small as to make the presence of on-site IT expertise prohibitively expensive. In addition to providing power supply security, a smart, remotely managed UPS can enable an offsite technician to perform many critical functions that would normally require them to be on site. These include:

Rebooting IT equipment remotely (this can solve many IT problems).

Viewing and downloading equipment log data to identify issues before they result in failure.

Scheduling shutdown and rebooting of connected equipment to save power and increase security.

Providing immediate notification of critical issues to a remote operator.

Providing data to an oversight expert monitoring system that integrates other information sources such as surveillance video and creates a unified view of a complex IT environment that is accessible from anywhere.

Schneider Electric Smart UPS Products and Monitoring Software/Services

Schneider Electric offers a range of Smart UPS products that support remote management and monitoring of the UPS and connected equipment. They deliver all the benefits described above, and more. There are multiple models with the features and capacities to suit a wide range of requirements.

Schneider Electric’s Smart-UPS On-Line products are available to support loads from 1kVA to 20kVA and can be rack or tower mounted. They can be configured with multiple battery packs to provide backup power for mission-critical systems during long power outages.

And when backup batteries are depleted, the inbuilt PowerChute™ Network Shutdown management software will gracefully shut down the operating systems of supported equipment.

They can be programmed to shut down non-critical equipment to conserve battery power for critical systems in the event of a prolonged power outage.

The Schneider Electric EcoStruxure™ IT SmartConnect cloud service provides remote power monitoring, customisable email notifications, remote diagnostics, and UPS firmware updates – all via a web portal.

Get Remote Systems Monitoring via SE Smart UPS

Schneider Electric’s Smart-UPS models for loads greater than 5kVA come equipped with a network management card for remote monitoring and control of the UPS and connected equipment, but a licence for the EcoStruxure IT Expert software is required to enable all the features described above.

However, as a special offer for a limited time only, Schneider Electric is offering a bundle comprising the UPS, a network management card and a one-year licence to the EcoStruxure IT Expert software, for every model in the range, including sub 5kVA products.

Contact Schneider Electric today to learn how you can get power protection and monitoring and control of critical IT systems – all from a remote location.

Remote Access

Over 90 wildfires ravaged Spain’s Asturias principality in March this year. Though not as cold and wet as northern Europe, northwest Spain in March is still at the tail end of winter, and the region is not typically considered a tinder box. But the climate emergency is steadily changing that.

Spain’s predicament isn’t unique, however. Across the world, climate change has bitten hard into the economy of tech-centric California, again due to wildfires. Australia and Pakistan have seen communities wrecked by large-scale flooding and continual rain, while in 2022, Europe had its hottest summer on record.

The business world increasingly recognizes the need to become more environmentally sustainable, as organizations see climate change hit the bottom line directly. The CIO, the technologies they deploy, and the partnerships they form are therefore essential to the future of a more environmentally sustainable way of doing business.

A question of time

Thomas Kiessling, CTO with Siemens Smart Infrastructure, part of the German engineering and technology conglomerate that makes trains, electrical equipment, traffic control systems, and more, understands that time is running out. His concerns are backed up by the Intergovernmental Panel on Climate Change (IPCC), which on March 20, 2023, said it’s unlikely the world will keep to its Paris Climate Accord promises.

And if the world’s temperatures rise by or above 1.5 degrees Celsius, businesses will feel further impacts to their bottom line, including increased supply-chain issues on a network already overstretched and fragile. Food and water insecurity will increase, and energy systems, housing stock, insurance, and currency markets will all become more volatile—a worrying set of scenarios for business leaders and boards.

CIO enablement

Historically, CIOs have been vital enablers during times of major change, championing e-commerce, digital transformation or agile ways of working. Organizations responding to the climate emergency are, therefore, calling on those enablement skills to mitigate the environmental impact of the business.

Key to this is a greater understanding of business operations and their production of CO2, or use of unsustainable practices and resources. As with most business challenges, data is instrumental. “Like anything, the hard work is the initial assessment,” says CGI director of business consulting and CIO advisor Sean Sadler. “From a technology perspective, you need to look at the infrastructure, where it’s applied, how much energy it draws, and then how it fits into the overall sustainability scheme.” 

CIOs who create data cultures across organizations enable not only sustainable business processes but also reduce reliance on consultancies, according to IDC. “Organizations with the most mature environmental, social, and governance (ESG) strategies are increasingly turning to software platforms to meet their data management and reporting needs,” says Amy Cravens, IDC research manager, ESG Reporting and Management Technologies. “This represents an important transition toward independent ESG program management and away from dependence on ESG consultants and service providers. Software platforms will also play an essential role in an organization’s ESG maturity journey. These platforms will support organizations from early-stage data gathering and materiality assessments through sustainable business strategy enablement and every step in between.”

Sadler, who has led technology in healthcare, veterinary services, media firms, and technology suppliers, says consultancies and systems integrators should be considered as part of a CIO’s sustainability plans. Their deep connections to a variety of vendors, skills, experience and templates will be highly useful. “It can often help with the collaboration with other parts of the business, like finance and procurement as you have a more holistic approach,” he says.

The IDC survey further finds that the manufacturing sector leads in the maturity of ESG strategies, followed by the services sector, suggesting, perhaps, that industries with the most challenging sustainability demands are getting on the front foot.

CIOs in organizations with ESG maturity are already adopting data management, ESG reporting, and risk tools. In the 2022 Digital Leadership Report by international staffing and CIO recruitment firm Nash Squared, 70% of business technology leaders said that technology plays a crucial part in sustainability.

“CIOs are in a great position to demonstrate their business acumen,” says Sadler. “They can cut costs and generate additional revenue streams.” And DXC Technology director and GM Carl Kinson says IT is now central to cost reduction, while high inflation and rising energy costs make CIOs and organizations assess their energy spending in a level of detail not seen for a long time. This will have a knock-on environmental benefit. Kinson says CIOs are looking to extract greater value from enterprise cloud computing estates, application workloads, system code, and even the use or return of on-premise technology in order to reduce energy costs.

“We’re working with clients to set carbon budgets for each stakeholder to make them accountable, which is a great way to make sure all areas of the business are doing their bit to be more sustainable,” says Sadler.

Great expectations

Falling short of corporate sustainability goals will not only upset the board but also exacerbate the skills shortage CIOs face, which, in turn, complicates strategies to digitize the business.

Becoming an environmentally sustainable business is core to the purpose of a modern organization and its ability to recruit and retain today’s technology talent.

Climate urgency also affects CIOs themselves in their own employment decisions. “I would need to understand the sustainability angles of an organization,” says James Holmes, CIO with The North of England P&I Association, a shipping insurance firm. Business advisory firm McKinsey also finds that 83% of C-suite executives and investment professionals believe that organizational ESG programs will contribute to an increase in shareholder value in the next five years. And the Nash Squared Digital Leadership Report adds that, due to the urgent global move to integrate sustainability into core business operations and the customer proposition, it’s important that digital leaders have what it calls a dual lens on sustainability.

Part of that increased shareholder value will be to ensure the business is able to meet the evolving regulations surrounding environmental sustainability. For CIOs in Europe, the EU Sustainable Finance Disclosure Regulation was adopted in April 2022, and the Corporate Sustainability Reporting Directive (CSRD) secured a majority in the European Parliament in November 2022. California also introduced environmental regulations in September 2022, and other US states are likely to follow.

“Regulation can be pro-growth,” Chi Onwurah, shadow business minister in the UK Parliament and a former technologist, recently said at an open-source technology conference. “Good regulations create a virtuous circle as more people trust the system.”

CIOs and IT leadership, whether in the UK or not, are integral to making organizations more environmentally sustainable in order to help stave off environmental collapse. No vertical market can operate effectively during an ongoing environmental emergency unless a technological response based on collated data is enacted and supported across the organization.

During the Covid-19 pandemic, CIOs and IT leaders enabled new ways of adapting to change, and these need to continue as environmentally sustainable business processes become greater priorities.

CIO, Green IT, IT Leadership

At Choice Hotels, cloud is a tool to help the hospitality giant achieve corporate goals. That can include making progress on immediate objectives, such as environmental sustainability, while keeping an eye on trendy topics such as the metaverse and ChatGPT.

“We’re investing in technology, we’re investing in leveraging the cloud to do meaningful things while we figure out what does tomorrow look like?” said CIO Brian Kirkland.

Kirkland will describe key points on how cloud is enabling business value, including its sustainability initiatives, at CIO’s Future of Cloud & Data Summit, taking place virtually on April 12.

The day-long conference will drill into key areas of balancing data security and innovation, emerging technologies, and leading major initiatives.

The program kicks off with a big-picture view of how the cloud will change the way we live, work, play, and innovate from futurist and Delphi Group Chairman and Founder Tom Koulopoulos. Afterward, he will answer questions in a lively discussion with attendees.  

Before organizations map an architectural approach to data, the first thing that they should understand is data intelligence. Stewart Bond, IDC’s vice president for data integration and intelligence software, will dissect this foundational element and how it drives strategy as well as answer audience questions about governance, ownership, security, privacy, and more.

With that foundation, CIOs can move on to considering emerging best practices and options for cloud architecture and cloud solution optimization. David Linthicum, chief cloud strategy officer at Deloitte Consulting and a contributor to InfoWorld, will delve into strategies that deliver real business value – a mandate that every IT leader is facing now.

Want to know how top-performing companies are approaching aspects of cloud strategy? Hear how Novanta Inc. CIO Sarah Betadam led a three-year journey to becoming a fully functional data-driven enterprise. Later, learn how Tapestry – home to luxury consumer brands such as Coach and Kate Spade – developed a cloud-first operating model in a conversation between CIO Ashish Parmar and Vice President of Data Science and Engineering Fabio Luzzi.

Another top trend is AI. Phil Perkins, the co-author of The Day Before Digital Transformation, will discuss the most effective applications of AI being used today and what to expect next.

At some organizations, data can be a matter of life and death. Learn about a data-focused death investigations case management system used to influence public safety in a conversation between Gina Skagos, executive officer, and Sandra Parker, provincial nurse manager, at the Province of Ontario’s Office of the Chief Coroner.

Throughout the summit, sponsors including IBM, CoreStack, VMware, and Palo Alto Networks will offer thought leadership and solutions on subjects such as new models of IT consumption, cloud security, and optimizing hybrid multi-cloud infrastructures.

Check out the full summit agenda here. The event is free to attend for qualified attendees. Don’t miss out – register today.

Cloud Management, Hybrid Cloud, IT Leadership, IT Strategy

With five state-of-the-art data centers located in the Sydney and Canberra metropolitan areas, including a facility created to manage cloud applications and data that require PROTECTED, SECRET and higher classifications, Macquarie Government, part of the ASX-listed Macquarie Telecom Group, was one of the first companies to provide sovereign IT services to Australia’s government agencies. It’s a journey that began nearly a decade ago when it became the first Australian cloud to be certified by the Australian Signals Directorate (ASD). Today, 42% of the nation’s federal agencies rely on Macquarie Government’s cloud solutions and services to address the most stringent security and sovereignty requirements.

We recently connected with Aidan Tudehope, managing director of Macquarie Government, to learn what he believes is driving the demand for more sovereign cloud services and what it means for the company to have earned the VMware Sovereign Cloud distinction. We also took the opportunity to ask what he sees as the greatest misconception about data sovereignty.

“We have been championing the importance of sovereign clouds for more than a decade,” says Tudehope. “Earning the VMware Sovereign Cloud distinction is an important validation of our message, particularly given our close partnership with VMware for so many years.”

Tudehope notes that in addition to offering a wide range of VMware Cloud Verified services, the company’s private cloud offerings, including its OFFICIAL Cloud, a robust private cloud designed for non-classified workloads, and its PROTECTED Cloud – a high-security cloud built for the Australian government in Macquarie Government’s secure gateway – are all built on VMware technologies.

“Our customers use and trust VMware,” he says. “When they use our clouds built on VMware technologies they can still use the tools they are familiar with to safely deploy workloads containing sensitive information for Australia’s government agencies and citizens into the cloud without losing sovereign control. Another benefit of being based on VMware technology is that it is far easier for agencies to migrate, deploy or extend workloads into the cloud, or alternatively to move data back-and-forth with consistent information security controls already applied. This reduces the time and effort that otherwise would be required to have their cloud deployments assessed by the Infosec Registered Assessors Program.”

In Australia, government agencies are required to host data classified as “PROTECTED” or above in a facility with the highest level of certification. It’s a requirement that also applies to all “whole of government systems” provided or used by numerous agencies. Macquarie Government was one of the first companies to be certified “strategic”, the highest level of certification, and the only company to hold this certification for both its cloud and data centre offerings.

Macquarie Government also created Australia’s first purpose-built cloud exchange designed specifically for federal agencies. Coupled with a security layer, it enables government agencies to implement a multi, hybrid-cloud strategy through Amazon Web Services, Microsoft Azure and other cloud providers when appropriate.

Tudehope stresses though that data sovereignty is not just about where data resides. It also means that agencies maintain authority and control over the data at all times.

“The greatest misconception that my colleagues and I encounter is that data residency and data sovereignty are synonymous and can be used interchangeably. Data residency of course refers to where data is located. That’s important, but data sovereignty enables government not only to ensure that data remains in its jurisdiction, but that it cannot at any point be accessed by foreign contractors, support teams, or any individuals that do not possess required security clearances,” says Tudehope. “Data sovereignty is crucially important for regulatory and data security purposes.”

He points to the Australian Government Information Security Manual to convey just how important it is. The manual states, “outsourced cloud services may be located offshore and subject to lawful and covert data collection without their customers’ knowledge. Additionally, use of offshore services introduces jurisdictional risks as foreign countries’ laws could change with little warning. Finally, foreign owned suppliers operating in Australia may be subject to a foreign government’s lawful access to data belonging to their customers.”

It’s an important reality that Tudehope believes can’t be stressed enough, and one of the reasons Macquarie Government employs more than 200 Australian citizens with the proper security certifications to oversee not only its cloud solutions and services, but also its many security offerings for federal agencies. These include a security operations center with threat monitoring and proactive threat hunting, and Security Incident and Event Management-as-a-Service – offerings that enable the company’s security experts to analyze more than 7 billion security events each day.

“Digital services make government more accessible and more effective, but citizens will only use them if their personal information is safeguarded with exceptional vigilance here in Australia,” says Tudehope. “That requires the right cloud, the right employees to oversee it, and robust security services to keep it all safe.”

Learn more about Macquarie Government and its partnership with VMware here.

Data and Information Security, IT Leadership

Businesses are feeling growing pressure to act on climate change from all angles. However, despite data centres and transmission networks being responsible for nearly 1 per cent of energy-related greenhouse gas emissions, a new Deloitte study reports that a little over half (54 per cent) of businesses have converted to energy-efficient technologies.

This number is concerning given emerging digital technologies such as blockchain, IoT, artificial intelligence, and machine learning are increasing demand for data centre services further, as workloads are no longer confined to the core data centre and can run anywhere, including the edge. Australian businesses need to transition to sustainable IT solutions to support these emerging technologies while staying in line with Australia’s new commitment to an emissions reduction target of 43 per cent and net zero emissions by 2050.

New servers form the foundation of sustainable infrastructure, offering greater performance while taking up less space and consuming less energy – driving sustainability goals while enabling industry innovation.

Sustainable IT infrastructure is no longer just a nice-to-have

In the past, businesses sought IT systems that delivered the most ROI or the highest efficiency – however, with new local and global emissions reduction targets in place, this is no longer enough. IT infrastructure must run at the smallest possible carbon footprint with minimum environmental impact to meet Environmental, Social and Governance (ESG) goals and comply with government demands for sustainable innovation.

It’s not just the public sector pushing companies to change. A Google Trends search reveals Australians and New Zealanders are the third and fourth most interested in sustainability worldwide, with eight out of ten Australian consumers now expecting businesses to operate sustainably. Four in ten say they’ll stop purchasing from brands that don’t. Consumers want more from companies than they have in the past, and the right IT infrastructure is essential to meeting these expectations. Recent research commissioned by Dell Technologies focused on Gen Z adults aged 18 to 26 confirms this sentiment: nearly two-thirds of Gen Z adults in Australia believe technology will play an important role in overcoming the biggest societal challenges, such as the climate crisis.

Transitioning to newer servers can form the basis of a modern, sustainable IT set-up, appeasing customers and keeping pace with government legislation. For example, Dell’s edge servers can operate at up to 55 degrees Celsius. This allows the technology to run at warmer temperatures, so there’s no need to cool the room down to keep the servers operational, as was the case with older server models. The result is advanced power management control and reduced power consumption, which is not just a nice to have; it’s essential.

Enabling emerging tech at the edge

The infrastructure must also support emerging technologies. This is critical in Australia to meet the continuing growth in demand for data and connectivity from industries like agriculture and healthcare that are relying on new tech to operate efficiently over vast swaths of land in remote locations. These industries are embracing emerging technologies, with data processed at the edge, to overcome ongoing supply chain issues in the unique and often harsh Australian climate and landscape.

In rural locations, latency matters, and technology must be brought closer to improve efficiency. However, the most significant opportunity for edge computing in Australia is its ability to support AI and automation, which will support and grow these industries.

For example, TPG Telecom trialed AI-enabled image processing, computer vision and edge computing technologies to enable multiple high-quality 4K video streams to count sheep at a regional livestock exchange, automating the process and removing human error.

In Australian healthcare, individuals seeking services can travel hours to receive critical care. Reports say that in deeply remote locations it can take up to 14 hours to reach a fully equipped hospital. Edge computing, together with emerging tech, enables rural access to digital health services and improves operations in major regional hospitals.

Townsville University Hospital in North Queensland is leading by example, harnessing low-latency and high-input/output operations per second (IOPS) storage at the edge to deliver better regional care. The new servers support emerging technologies, including AI, to improve ward management and patient flow reporting systems in a location cut off from cloud computing services available in metropolitan cities. Staff can now perform near real-time reporting, improving efficiency and access to current information to improve outcomes in the remote and indigenous communities it services. 

Innovative solutions like these are only possible with efficient servers that can handle high bandwidth and low latency workloads close to the data source. Next-generation technology architectures must support and accelerate modern workloads and serve the industries our economy relies on, whether on-premises in data centres or at the edge in remote locations – and they need to do it while being sustainable.

Supporting sustainable innovation

Dell Technologies’ latest generation of PowerEdge servers support sustainable innovation, providing the foundation for an energy-efficient IT system while enabling emerging tech.

Designed with a focus on environmental sustainability, they’re providing customers with triple the performance over the previous generations of servers. This means more powerful and efficient technology with less floor space required. They’re built with the Dell Smart Cooling suite, which increases airflow and reduces fan power by up to 52 per cent compared to previous generations, delivering performance with less power needed to cool the server.

To further reduce the carbon footprint, the servers use up to 35 per cent recycled plastic and are designed so components can be repaired, replaced, or easily recycled. Customers can also monitor carbon emissions and better manage their sustainability targets using the Dell OpenManage Enterprise Power Manager software.

The new PowerEdge servers are built to excel in demanding tasks, from AI and analytics to massive databases, supporting modern workloads and industry innovation – even in remote Australian locations. The servers can be used as a subscription via Dell APEX. Customers can adopt a flexible approach to avoid the expense of having more computing resources than they need, which is beneficial for increasingly tight budgets and sustainability efforts, reducing unnecessary energy consumption.

With new tech, we can have our cake and eat it too  

It seems like asking for a lot: powerful infrastructure that can enable the latest advancements in tech, improve efficiency, and support Australian industries operating in remote locations over large geographic areas. We’re asking tech to deliver this while meeting ESG goals and aligning with Australia’s new carbon emissions targets. But the new reality is that IT infrastructure must be sustainable while maintaining high performance.

It’s not just a wish list; the tech is available. Adopting next-generation servers that can handle it all will enable Australia to meet its carbon goals while driving the innovation our industries need to thrive.

Infrastructure Management

Are you overthinking your cloud model? If so, you’re likely in need of a well-defined cloud strategy. 

Companies with a clear cloud strategy position themselves to achieve more from cloud computing than those without. A well-defined cloud strategy provides a playbook inclusive of principles, baselines, services, financial models, and prioritization guidelines that enable companies to make informed decisions that support their goals.

In addition, a cloud strategy that is concise, actionable, and reviewed continuously allows organizations to assess current and future states in alignment with security, architecture, governance, compliance, human resources, quote-to-cash processes, and business objectives.

Unfortunately, many organizations mistakenly take their cloud adoption or migration plan as a cloud strategy. Whereas a cloud adoption or migration plan is focused on the “how,” a true cloud strategy is focused on answering the “what” and “why.” A cloud strategy provides a clear decision framework that directly supports business goals and outcomes.

In this article, I’ll share what a cloud strategy looks like, how it helps companies make better decisions, and how to get started.

What a cloud strategy is (and isn’t)

Unfortunately, many misperceptions surround the cloud, as outlined in Gartner’s Top 10 Mistakes in Building a Cloud Strategy. One such misperception is that a cloud strategy is a cloud adoption or migration plan. It’s not. A cloud strategy is not a data center strategy, a cloud-first strategy, or an IT-only strategy. Furthermore, it’s not the execution-level implementation plan many organizations think of. Take a minute to review the top 10 misperceptions that lead to mistakes in building a cloud strategy, as defined per Gartner:

[Graphic: Gartner’s top 10 mistakes in building a cloud strategy. Source: GDT]

In The Cloud Strategy Cookbook, Gartner defines a cloud strategy as “…a concise viewpoint on the role of cloud computing in your organization.” A true cloud strategy describes the role of the cloud as a business accelerant. It exists to align stakeholders and establish guardrails for decision-making. It includes leaders across the business, including C-level executives. Most of all, it directly supports whatever your business is trying to achieve.

A formal cloud strategy streamlines decision-making, drives better ROI, and reduces risks in strategy execution. A clearly defined cloud strategy helps businesses maximize their cloud investments and align with other synergetic domains of the business. Ultimately, a cloud strategy reduces frustration, disappointment, overspending, and low-value creation.

Getting started with your cloud strategy

If you don’t have a cloud strategy, you’re not alone. Most organizations don’t start with a strategy. Instead, they may begin with a business driver, such as closing a data center or merging with or acquiring another company. They may start with experimentation or heavier adoption in one silo of the organization. In most cases, it’s not until later that they circle back and create a cloud strategy that is not intermingled with adoption or execution planning.

A cloud strategy playbook does not need to be long or written in stone. In fact, it should be relatively short (10-20 pages) and considered a living document, updated regularly to reflect the shifting needs of your business, changes in the market, or any significant organizational change.

For example, if your company merges with or acquires another company, that will impact your principles and R-Lane prioritizations. Or, if a new market opportunity comes along, you may need to revisit your cloud strategy to ensure it will help you drive competitive advantage.

The steps for creating a cloud strategy are simple:

1. Secure executive sponsorship.

2. Gather the right people and set up a cloud strategy office, business office, or steering committee to help ensure your cloud strategy aligns with other domains in the business, such as security, architecture, data center, compliance, finance, IT service management, HR, and legal.

3. Host an iterative assessment series with workshop sessions to define or refine your cloud strategy, including principles and alignment to other domains (architecture, security, procurement, HR, etc.). Establish a cadence to review it regularly.

10 questions your cloud strategy should answer

As part of your cloud strategy definition, you should consider assessment and hands-on workshop sessions to define guiding principles. To do so, be sure that you answer the following 10 foundationally strategic questions:

1. What is your delivery and operational model? Align your workload and business objectives to a public, hybrid, private, multi-cloud, distributed, or smart cloud (a workload-by-workload decision).

2. What is your service model prioritization guideline (for SaaS, PaaS, IaaS, or XaaS)? Use guiding principles to define your service models based on defined guidelines.

3. What is your consumption/development model? An example might be buying SaaS before building.

4. What is your cloud deployment model? Will it be hybrid, multi-cloud, cloud-native, or distributed cloud?

5. What is your R-Lane prioritization model? For example, define as a principle whether lift-and-shift is the preferred option or a last resort. Should a long-term and a short-term R-Lane vision be defined for each workload? For instance, one workload can be a rehost in the short term but a re-architect/refactor in the long term.

6. What is your FinOps model? Chargeback or showback? Proportional or even split? (A simple illustration follows this list.)

7. What is your workload analysis model? Is it a workload-by-workload analysis or a big-bang analysis? How sophisticated are the inventory capabilities for workload-by-workload exercises (done outside of the strategy workstream itself)?

8. What is your data center (or off-cloud) strategy?

9. What is your cloud exit strategy?

10. What is your level of strategic alignment with other parts of the business?
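To illustrate the FinOps question above, here is a minimal sketch, with hypothetical teams and figures, of how proportional and even-split allocation of a shared cloud bill differ:

```python
# Hypothetical illustration of two FinOps allocation models for a shared
# monthly cloud bill: proportional (by metered usage) versus an even split.

MONTHLY_BILL = 120_000.0  # shared cloud spend in USD (made-up figure)
USAGE_HOURS = {"retail-apps": 5_000, "data-platform": 3_000, "internal-it": 2_000}

def proportional_split(bill: float, usage: dict) -> dict:
    """Allocate the bill in proportion to each team's metered usage."""
    total = sum(usage.values())
    return {team: bill * hours / total for team, hours in usage.items()}

def even_split(bill: float, usage: dict) -> dict:
    """Allocate the bill equally across teams, regardless of usage."""
    share = bill / len(usage)
    return {team: share for team in usage}

print("proportional:", proportional_split(MONTHLY_BILL, USAGE_HOURS))
print("even split:  ", even_split(MONTHLY_BILL, USAGE_HOURS))
```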

Define your cloud strategy with help from GDT

A solid cloud strategy helps ensure your organization makes suitable investments to support your business and streamlines decision-making around your cloud model.

With decades of experience, GDT can help you define your cloud strategy and create a cloud strategy playbook based on Gartner’s principles and best practices. To learn more, contact the cloud experts at GDT today.

Cloud Computing

In today’s connected, always-on world, unplanned downtime caused by a disaster can exact substantial tolls on your business from a cost, productivity, and customer experience perspective. Investing in a robust disaster recovery program upfront can save considerable costs down the road.

Unfortunately, many businesses learn this lesson the hard way. According to FEMA, nearly a quarter of businesses never re-open following a major disaster—a sobering statistic.[i]

Fortunately, it doesn’t have to be that way. Disaster recovery-as-a-service (DRaaS) eliminates hefty capital expenditures and additional staff needed to maintain traditional, owned disaster recovery infrastructure. Instead, this cloud-based, scalable solution helps businesses quickly resume critical operations following a disaster—often within mere seconds.

The many virtues of DRaaS

Disasters come in many forms: cyber-attacks, equipment failures, fires, power outages—basically anything that can take down your systems. Without a robust disaster recovery plan in place, it can take days, weeks, or even months to recover.

Unfortunately, time and budgetary constraints often mean disaster recovery efforts get put on the back burner, where they languish. Many companies have not defined their recovery point objectives (RPOs) or recovery time objectives (RTOs), and data classification has fallen by the wayside. When a disaster strikes, recovery efforts take far longer, and in some cases businesses may never fully recover.

DRaaS uses the cloud to back up and safeguard applications and data from a disaster. DRaaS takes a tiered approach to disaster recovery, using pre-defined or customized RPOs and RTOs to provide the right level of backup and recovery from edge to cloud. This ensures business-critical applications and data get recovered quickly. DRaaS also accommodates your required service levels based on data classification, mapping them to the most appropriate recovery strategy.
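As an illustration of that tiered approach, the sketch below maps hypothetical data classifications to RPO/RTO targets and a matching recovery strategy. The tier names, targets, and strategies are invented examples, not the SLAs of any specific DRaaS offering.

```python
from dataclasses import dataclass

@dataclass
class RecoveryTier:
    name: str
    rpo_seconds: int   # maximum tolerable data loss, measured backward from the disaster
    rto_seconds: int   # maximum tolerable time to restore service
    strategy: str

# Invented tiers for illustration; real offerings define their own SLAs.
TIERS = {
    "mission-critical":   RecoveryTier("mission-critical", 10, 300,
                                       "continuous replication with near-instant failover"),
    "business-important": RecoveryTier("business-important", 3_600, 14_400,
                                       "hourly snapshots replicated to cloud"),
    "archival":           RecoveryTier("archival", 86_400, 172_800,
                                       "daily backup to object storage"),
}

def recovery_plan(workload: str, classification: str) -> str:
    tier = TIERS[classification]
    return (f"{workload}: {tier.strategy} "
            f"(RPO <= {tier.rpo_seconds}s, RTO <= {tier.rto_seconds}s)")

print(recovery_plan("order-processing", "mission-critical"))
print(recovery_plan("marketing-analytics", "business-important"))
```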

DRaaS streamlines disaster recovery planning and support, freeing staff to support your core business. It can grow and scale with your business. Furthermore, DRaaS saves money over the long term, providing a more cost-effective alternative to in-house disaster recovery programs with owned and self-managed equipment. Ultimately, DRaaS minimizes data loss and downtime, simplifies operations, and reduces risk in a cost-effective, customizable, and scalable way.

Get started with DRaaS

Protect your mission-critical data and applications with DRaaS. GDT can help you deploy DRaaS from edge to cloud using Zerto on HPE GreenLake. This DRaaS solution leverages journal-based continuous data protection and ultra-fast recovery for your applications and data. Scalable, automated data management capabilities simplify workload and data mobility across clouds. Zerto features down-to-the-second (or synchronous) RPOs, industry-leading RTOs, and edge-to-cloud flexibility.

Whether you need file-level, app-level, or site-level recovery, GDT can simplify data classification and match it with the proper disaster recovery level or service. GDT not only handles the technology but also helps you determine the best approach based on your business needs, turning technology conversations into business conversations that help ensure the continuity of your business when disaster strikes.

To learn more about implementing DRaaS, talk to one of GDT’s disaster recovery specialists.

[i] FEMA, “Stay in business after a disaster by planning ahead,” found at: https://www.fema.gov/press-release/20210318/stay-business-after-disaster-planning-ahead (accessed Nov. 21, 2022)

Disaster Recovery

Technology mergers and acquisitions are on the rise, and any one of them could throw a wrench into your IT operations.

After all, many of the software vendors you rely on for point solutions likely offer cross-platform or multiplatform products, linking into your chosen ERP and its main competitors, for example, or to your preferred hyperscaler, as well as other cloud services and components of your IT estate.

What’s going to happen, then, if that point solution is acquired by another vendor — perhaps not your preferred supplier — and integrated into its stack?

The question is topical: Hyperconverged infrastructure vendor Nutanix, used by many enterprises to unify their private and public clouds, has been the subject of takeover talk ever since Bain Capital invested $750 million in it in August 2020. Rumored buyers have included IBM, Cisco, and Bain itself, and in December 2022 reports named HPE as a potential acquirer of Nutanix.

We’ve already seen what happened when HPE bought hyperconverged infrastructure vendor SimpliVity back in January 2017. Buying another vendor in the same space isn’t out of the question, as Nutanix and SimpliVity target enterprises of different sizes.

Prior to its acquisition by HPE, SimpliVity supported its hardware accelerator and software on servers from a variety of vendors. It also offered a hardware appliance, OmniCube, built on OEM servers from Dell. Now, though, HPE only sells SimpliVity as an appliance, built on its own ProLiant servers.

Customers of Nutanix who aren’t customers of HPE might justifiably be concerned — but they could just as easily worry about the prospects of an acquisition by IBM, the focus of earlier Nutanix rumors. IBM no longer makes its own servers, but it might focus on integrating the software with its Red Hat Virtualization platform and IBM Cloud, to the detriment of other customers relying on other integrations.

What to ask

The question CIOs need to ask themselves is not who will buy Nutanix, but what to do if a key vendor is acquired or otherwise changes direction — a fundamental facet of any vendor management plan.

“If your software vendor is independent then the immediate question is: Is the company buying this one that I’m using? If that’s true, then you’re in a better position. If not, then you immediately have to start figuring out your exit strategy,” says Tony Harvey, a senior director and analyst at Gartner who advises on vendor selection.

A first step, he says, is to figure out the strategy of the acquirer: “Are they going to continue to support it as a pure-play piece of software that can be installed on any server, much like Dell did with VMware? Or is it going to be more like HPE with SimpliVity, where effectively all non-HPE hardware was shut down fairly rapidly?” CIOs should also be looking at what the support structure will be, and the likely timescale for any changes.

Harvey’s focus is on data center infrastructure but, he says, whether the acquirer is a server vendor, a hyperscaler, or a bigger software vendor, “It’s a similar calculation.” There’s more at stake if you’re not already a customer of the acquirer.

A hyperscaler buying a popular software package will most likely be looking to use it as an on-ramp to its infrastructure, moving the management plane to the cloud but allowing existing customers to continue running the software on premises on generic hardware for a while, he says: “You’ve got a few years of runway, but now you need to start thinking about your exit plan.”

It’s all in the timing

The best time to plant a tree, they say, is 20 years ago, and the second best is right now. You won’t want your vendor exit plans hanging around quite as long, but now is also a great time to make or refresh them.

“The first thing to do is look at your existing contract. Migrating off this stuff is not a short-term project, so if you’ve got a renewal coming up, the first thing is to get the renewal done before anything like this happens,” says Harvey. If you just renewed, you’ll already have plenty of runway.

Then, talk to the vendor to understand their product roadmap — and tell them you’re going to hold them to it. “If that roadmap meets your needs, maybe you stay with that vendor,” he says. If it doesn’t, “You know where you need to go.”

Harvey points to Broadcom’s acquisition of Symantec’s enterprise security business in 2019, and the subsequent price hikes for Symantec products, as an example of why it’s helpful to get contract terms locked in early. Customer backlash from those price changes also explains why Broadcom has been so keen to talk about its plans for VMware since its May 2022 offer to buy the company from Dell.

The risks that could affect vendors go far beyond acquisitions or other strategic changes: There’s also their general financial health, their ability to deliver, how they manage cybersecurity, regulatory or legislative changes, and other geopolitical factors.

Weigh the benefits

“You need to be keeping an eye on these things, but obviously you can’t war-game every event, every single software vendor,” he says.

Rather than weigh yourself down with plans for every eventuality, rank the software you use according to how significant it is to your business, and how difficult it is to replace, and have a pre-planned procedure in case it is targeted for acquisition.
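One lightweight way to operationalize that ranking, sketched below, is to score each product on business significance and difficulty of replacement, then flag only the highest-scoring ones for a pre-planned exit procedure. The product names, scores, and threshold are invented for illustration.

```python
# Invented products and scores; adjust the scales and threshold to your portfolio.
vendors = [
    # (product,                       significance 1-5, replacement difficulty 1-5)
    ("core ERP",                      5, 5),
    ("hyperconverged infrastructure", 4, 4),
    ("team chat",                     3, 2),
    ("niche reporting add-on",        2, 1),
]

EXIT_PLAN_THRESHOLD = 16  # pre-plan when significance * difficulty meets or exceeds this

for product, significance, difficulty in sorted(
        vendors, key=lambda v: v[1] * v[2], reverse=True):
    score = significance * difficulty
    action = "pre-plan exit strategy" if score >= EXIT_PLAN_THRESHOLD else "monitor only"
    print(f"{product:32s} score={score:2d} -> {action}")
```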

“You don’t need to do that for every piece of software, but moving from SAP HANA to Oracle ERP or vice versa is a major project, and you’d really want to think about that.”

There is one factor in CIOs’ favor when it comes to such important applications, he says, citing the example of Broadcom’s planned acquisition of VMware: “It’s the kind of acquisition that does get ramped up to the Federal Trade Commission and the European Commission, and gets delayed for six months as they go through all the legal obligations, so it really does give you some time to plan.”

It’s also important to avoid analysis paralysis, he says. If you’re using a particular application, it’s possible that the business value it delivers now outweighs the consequences of the vendor being acquired at some point in the future. Or perhaps the functionality it provides is really just a feature that will one day be rolled into the larger application it augments, in which case it can be treated as a short-term purchase.

“You certainly should look at your suppliers and how likely they are to be bought, but there’s always that trade off,” he concludes.

Mergers and Acquisitions, Risk Management, Vendor Management