In today’s connected, always-on world, unplanned downtime caused by a disaster can exact a substantial toll on your business in terms of cost, productivity, and customer experience. Investing in a robust disaster recovery program upfront can save considerable costs down the road.

Unfortunately, many businesses learn this lesson the hard way. According to FEMA, nearly a quarter of businesses never re-open following a major disaster—a sobering statistic.[i]

Fortunately, it doesn’t have to be that way. Disaster recovery-as-a-service (DRaaS) eliminates hefty capital expenditures and additional staff needed to maintain traditional, owned disaster recovery infrastructure. Instead, this cloud-based, scalable solution helps businesses quickly resume critical operations following a disaster—often within mere seconds.

The many virtues of DRaaS

Disasters come in many forms: cyber-attacks, equipment failures, fires, power outages—basically anything that can take down your systems. Without a robust disaster recovery plan in place, it can take days, weeks, or even months to recover.

Unfortunately, time and budgetary constraints often mean disaster recovery efforts get put on the back burner, where they languish. Many companies have not defined their recovery point objectives (RPOs) and recovery time objectives (RTOs), and data classification has fallen by the wayside. When disaster strikes, recovery takes far longer, and in some cases businesses never fully recover.

DRaaS uses the cloud to back up and safeguard applications and data from a disaster. DRaaS takes a tiered approach to disaster recovery, using pre-defined or customized RPOs and RTOs to provide the right level of backup and recovery from edge to cloud. This ensures business-critical applications and data get recovered quickly. DRaaS also accommodates your required service levels based on data classification, mapping them to the most appropriate recovery strategy.
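The tiering logic described above can be sketched in a few lines of code. This is purely illustrative, not any vendor's API: the tier names and the RPO/RTO values are hypothetical placeholders, since real objectives are set per contract and service level.

```python
# Illustrative sketch: mapping data-classification tiers to hypothetical
# recovery objectives. Tier names and values are made up for the example.
from dataclasses import dataclass

@dataclass
class RecoveryTier:
    name: str
    rpo_seconds: int   # maximum tolerable data loss, in seconds
    rto_seconds: int   # maximum tolerable downtime, in seconds

# Hypothetical tiers, from most to least critical
TIERS = {
    "mission_critical": RecoveryTier("mission_critical", rpo_seconds=5, rto_seconds=60),
    "business_important": RecoveryTier("business_important", rpo_seconds=900, rto_seconds=3600),
    "archival": RecoveryTier("archival", rpo_seconds=86400, rto_seconds=172800),
}

def tier_for(classification: str) -> RecoveryTier:
    """Map a workload's data classification to its recovery tier."""
    # Unclassified workloads fall back to the least aggressive tier
    return TIERS.get(classification, TIERS["archival"])

print(tier_for("mission_critical").rto_seconds)
```

The point of the mapping is that the most critical workloads get the tightest objectives, while everything else is protected at a cost-appropriate level rather than the most expensive one.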

DRaaS streamlines disaster recovery planning and support, freeing staff to support your core business. It can grow and scale with your business. Furthermore, DRaaS saves money over the long term, providing a more cost-effective alternative to in-house disaster recovery programs with owned and self-managed equipment. Ultimately, DRaaS minimizes data loss and downtime, simplifies operations, and reduces risk in a cost-effective, customizable, and scalable way.

Get started with DRaaS

Protect your mission-critical data and applications with DRaaS. GDT can help you deploy DRaaS from edge to cloud using Zerto on HPE GreenLake. This DRaaS solution leverages journal-based continuous data protection and ultra-fast recovery for your applications and data. Scalable, automated data management capabilities simplify workload and data mobility across clouds. Zerto features down-to-the-second (or synchronous) RPOs, industry-leading RTOs, and edge-to-cloud flexibility.

Whether you need file-level, app-level, or site-level recovery, GDT can simplify data classification and match it with the proper disaster recovery level or service. GDT not only handles the technology, but we also help you determine the best approach based on your business needs, turning technology conversations into business conversations that help ensure the continuity of your business when a disaster happens.

To learn more about implementing DRaaS, talk to one of GDT’s disaster recovery specialists.

[i] FEMA, “Stay in business after a disaster by planning ahead,” found at: (accessed Nov. 21, 2022)


Technology mergers and acquisitions are on the rise, and any one of them could throw a wrench into your IT operations.

After all, many of the software vendors you rely on for point solutions likely offer cross-platform or multiplatform products, linking into your chosen ERP and its main competitors, for example, or to your preferred hyperscaler, as well as other cloud services and components of your IT estate.

What’s going to happen, then, if that point solution is acquired by another vendor — perhaps not your preferred supplier — and integrated into its stack?

The question is topical: Hyperconverged infrastructure vendor Nutanix, used by many enterprises to unify their private and public clouds, has been the subject of takeover talk ever since Bain Capital invested $750 million in it in August 2020. Rumored buyers have included IBM, Cisco, and Bain itself, and in December 2022 reports named HPE as a potential acquirer of Nutanix.

We’ve already seen what happened when HPE bought hyperconverged infrastructure vendor SimpliVity back in January 2017. Buying another vendor in the same space isn’t out of the question, as Nutanix and SimpliVity target enterprises of different sizes.

Prior to its acquisition by HPE, SimpliVity supported its hardware accelerator and software on servers from a variety of vendors. It also offered a hardware appliance, OmniCube, built on OEM servers from Dell. Now, though, HPE only sells SimpliVity as an appliance, built on its own ProLiant servers.

Customers of Nutanix who aren’t customers of HPE might justifiably be concerned — but they could just as easily worry about the prospects of an acquisition by IBM, the focus of earlier Nutanix rumors. IBM no longer makes its own servers, but it might focus on integrating the software with its Red Hat Virtualization platform and IBM Cloud, to the detriment of other customers relying on other integrations.

What to ask

The question CIOs need to ask themselves is not who will buy Nutanix, but what to do if a key vendor is acquired or otherwise changes direction — a fundamental facet of any vendor management plan.

“If your software vendor is independent then the immediate question is: Is the company buying this one that I’m using? If that’s true, then you’re in a better position. If not, then you immediately have to start figuring out your exit strategy,” says Tony Harvey, a senior director and analyst at Gartner who advises on vendor selection.

A first step, he says, is to figure out the strategy of the acquirer: “Are they going to continue to support it as a pure-play piece of software that can be installed on any server, much like Dell did with VMware? Or is it going to be more like HPE with SimpliVity, where effectively all non-HPE hardware was shut down fairly rapidly?” CIOs should also be looking at what the support structure will be, and the likely timescale for any changes.

Harvey’s focus is on data center infrastructure but, he says, whether the acquirer is a server vendor, a hyperscaler, or a bigger software vendor, “It’s a similar calculation.” There’s more at stake if you’re not already a customer of the acquirer.

A hyperscaler buying a popular software package will most likely be looking to use it as an on-ramp to its infrastructure, moving the management plane to the cloud but allowing existing customers to continue running the software on premises on generic hardware for a while, he says: “You’ve got a few years of runway, but now you need to start thinking about your exit plan.”

It’s all in the timing

The best time to plant a tree, they say, is 20 years ago, and the second best is right now. You won’t want your vendor exit plans hanging around quite as long, but now is also a great time to make or refresh them.

“The first thing to do is look at your existing contract. Migrating off this stuff is not a short-term project, so if you’ve got a renewal coming up, the first thing is to get the renewal done before anything like this happens,” says Harvey. If you just renewed, you’ll already have plenty of runway.

Then, talk to the vendor to understand their product roadmap — and tell them you’re going to hold them to it. “If that roadmap meets your needs, maybe you stay with that vendor,” he says. If it doesn’t, “You know where you need to go.”

Harvey pointed to Broadcom’s acquisition of Symantec’s enterprise security business in 2019 — and the subsequent price hikes for Symantec products — as an example of why it’s helpful to get those contract terms locked in early. Customer backlash from those price changes also explains why Broadcom is so keen to talk about its plans for VMware following its May 2022 offer to buy the company from Dell.

The risks that could affect vendors go far beyond acquisitions or other strategic changes: There’s also their general financial health, their ability to deliver, how they manage cybersecurity, regulatory or legislative changes, and other geopolitical factors.

Weigh the benefits

“You need to be keeping an eye on these things, but obviously you can’t war-game every event, every single software vendor,” he says.

Rather than weigh yourself down with plans for every eventuality, rank the software you use according to how significant it is to your business, and how difficult it is to replace, and have a pre-planned procedure in case it is targeted for acquisition.

“You don’t need to do that for every piece of software, but moving from SAP HANA to Oracle ERP or vice versa is a major project, and you’d really want to think about that.”

There is one factor in CIOs’ favor when it comes to such important applications, he says, citing the example of Broadcom’s planned acquisition of VMware: “It’s the kind of acquisition that does get ramped up to the Federal Trade Commission and the European Commission, and gets delayed for six months as they go through all the legal obligations, so it really does give you some time to plan.”

It’s also important to avoid analysis paralysis, he says. If you’re using a particular application, it’s possible that the business value it delivers now outweighs the consequences of the vendor perhaps being acquired at some time in the future. Or perhaps the functionality it provides is really just a feature that will one day be rolled into the larger application it augments, in which case it can be treated as a short-term purchase.

“You certainly should look at your suppliers and how likely they are to be bought, but there’s always that trade-off,” he concludes.


By Jeff Carpenter

You might have heard of Apache Cassandra, the open-source NoSQL database. And you might know that some big, very successful companies rely on it, including LinkedIn, Netflix, The Home Depot, and Apple.

But did you know that Cassandra is used by a huge range of companies — including small, cloud-native application builders, financial firms, and broadcasters?

Here, I’ll give you an overview of Cassandra, along with a few reasons why this database might just be the right way to persist data at your organization and ensure your data and the apps that your developers build on it are infinitely scalable, secure, and fast.

A (very abridged) look at the database landscape

Many people in technology first became familiar with relational databases like Oracle DB or MySQL. They’re very powerful because they ensure data consistency and availability at the same time, and they’re effective and relatively easy to use — as long as your databases are running on the same machine.


But if you need to run more transactions or need more space to store your data, you’ll run into upper limits pretty quickly, as relational databases can’t scale efficiently.

The solution? Split the data among multiple machines and create a distributed system. NoSQL (“Not only SQL”) databases were invented to cope with these new requirements of volume (capacity), velocity (throughput), and variety (format) of big data.
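The core idea of splitting data among machines can be sketched in a few lines: hash each record's partition key and assign it to one of N nodes. This is a deliberately minimal illustration, not Cassandra's actual implementation; real systems, Cassandra included, use consistent hashing on a token ring so that adding a node relocates only a fraction of the data.

```python
# Minimal sketch of hash partitioning: deterministically place each
# partition key on one of N nodes, with no central index needed.
import hashlib

NODES = ["node-a", "node-b", "node-c"]

def node_for_key(partition_key: str) -> str:
    # A stable hash gives the same placement on every machine and restart
    digest = hashlib.md5(partition_key.encode("utf-8")).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

# Any node can compute where a key lives without asking a coordinator
print(node_for_key("user:42"))
```

Because placement is a pure function of the key, reads and writes route directly to the owning node, which is what lets these systems scale capacity and throughput by adding machines.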

NoSQL was born out of necessity: the rise of Big Tech over the past decade has driven the global data sphere to skyrocket 15-fold, and relational databases simply can’t cope with the new data volumes or new performance requirements. Huge global operations like Google, Facebook, and LinkedIn created NoSQL databases to enable them to scale efficiently, go global, and achieve zero downtime.

Cassandra’s early days

In the mid-2000s, engineers at young, fast-growing Facebook had a problem: how could they store and access the mushrooming data created by Messenger, the platform that enabled users of the social networking site to communicate with one another? Nothing on the market could handle the hundreds of millions of users on the platform at peak times, spread across tens of thousands of servers in data centers around the world.

So, Facebook’s team built their own database to enable users to search their Messenger inboxes. It replicated data across geographies to keep latencies down, handled billions of writes per day, and could scale as the number of users grew. (You can geek out on the original Facebook Cassandra paper, authored by its creators, here).

As it became clear that this technology was suitable for other purposes, the company gave Cassandra to the Apache Software Foundation (ASF), where it became an open-source project (it was voted into a top-level project in 2010).

Cassandra’s scalability was impressive, but its reliability also set it apart among databases. Because of its geographic distribution and the fact that data is replicated across multiple data centers, Cassandra’s uptime and disaster recovery capabilities are unparalleled. This quickly caught the eye of other rising web stars, like Netflix. The company launched its streaming service in 2007 using an Oracle database housed in a single data center. The company’s rapid growth quickly highlighted the danger of managing data at a single point of failure. By 2013, most of Netflix’s data was housed in Cassandra.

Cassandra has become the de facto standard database for high-growth applications that need reliability, high performance, and scalability: it’s used by approximately 90% of the Fortune 100, and a bunch of relatively recent developments are making it even more accessible to a wider range of organizations.

Why Cassandra?

Let’s quickly recap some of the unique capabilities of Cassandra:

Scalability – There are essentially no limitations on volume and velocity. Because it’s partitioned over a distributed architecture, Cassandra is capable of handling various data types at petabyte scale.

Speed – Read-write performance is unmatched, thanks in part to Cassandra’s distributed nature — it can operate across multiple instances called “nodes.” A single node is very performant, but a cluster with multiple nodes and data centers brings throughput to the next level. Decentralization means that every node can deal with any request, read, or write.

Availability – Theoretically, organizations can achieve 100% uptime thanks to data replication, decentralization, and a topology-aware placement strategy that replicates to multiple data centers, eliminating the waste associated with the traditional practice of maintaining duplicative infrastructure for disaster recovery.

Geographically distributed – Multi-data center deployments provide exceptional disaster tolerance while keeping data close to clients around the globe, reducing latency (learn more about global data distribution here).

Platform and vendor agnostic – Cassandra isn’t bound to any platform or service provider, which enables organizations to build hybrid- and multi-cloud solutions. It also doesn’t belong to any commercial vendor; the fact that it’s offered by the open-source, non-profit ASF means it’s openly available and continuously improving.
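The availability claim comes down to simple replica arithmetic: with a replication factor of RF copies per partition, a quorum (majority) read or write still succeeds as long as only a minority of those replicas is down. A quick sketch of that arithmetic, in plain Python rather than Cassandra code:

```python
# Quorum arithmetic for replicated data: a quorum operation needs a
# majority of replicas, so rf - quorum(rf) replicas can fail and the
# cluster keeps serving reads and writes for that partition.

def quorum(rf: int) -> int:
    """Smallest majority of rf replicas."""
    return rf // 2 + 1

def tolerable_failures(rf: int) -> int:
    """Replicas of a partition that can be down while quorum still holds."""
    return rf - quorum(rf)

for rf in (3, 5):
    print(f"RF={rf}: quorum={quorum(rf)}, survives {tolerable_failures(rf)} down replica(s)")
```

This is why a replication factor of three is such a common choice: it tolerates the loss of one replica of every partition while keeping quorum reads and writes available.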

For more details, see this excellent Cassandra overview provided by the ASF.

Why Cassandra for your organization?

Online banking services, airline booking systems, and popular retail apps. These modern applications and workloads — many of which operate at huge, distributed scale — should never go down. Cassandra’s seamless and consistent ability to scale to hundreds of terabytes, along with its exceptional performance under heavy loads, has made it a key part of the data infrastructures of companies that operate these kinds of applications.

For instance, Best Buy, the world’s biggest multichannel consumer electronics retailer, describes Cassandra as “flawless” in how it handles huge spikes in holiday shopping traffic.

But Cassandra isn’t just for big, established sector leaders like Best Buy or Bloomberg. It’s a powerful data store for developers and architects who build high-growth applications at organizations of all sizes. Consider Praveen Viswanath, a cofounder of Alpha Ori Technologies, which offers an IoT platform for acquiring data from ships and for processing and analyzing it on behalf of their operators.

Having experienced the power of the NoSQL database in earlier roles, Viswanath again turned to Cassandra — delivered via DataStax’s Astra DB cloud service — for its distributed reliability and high throughput, as Alpha Ori’s platform required the constant gathering of thousands of data points from the 40 or so major systems aboard the over 260 ships that it served.

Because of his team’s need to focus on development rather than database operation, Viswanath chose the Astra DB managed service, a serverless solution that scales up and down when needed.

A flourishing ecosystem

The availability of Cassandra as a managed service is one way that this powerful database is reaching more organizations. But there’s also an ecosystem of complementary open-source technologies that have sprung up around Cassandra to make it simpler for developers to build apps with it.

Stargate is an open-source data gateway that provides a pluggable API layer that greatly simplifies developer interaction with any Cassandra database. REST, GraphQL, Document, and gRPC APIs make it easy to just start coding with Cassandra without having to learn the complexities of CQL and Cassandra data modeling.

K8ssandra is another open-source project that demonstrates this approachability, making it possible to deploy Cassandra on any Kubernetes engine, from the public cloud providers to VMware and OpenStack. K8ssandra extends the Kubernetes promise of application portability to the data tier, making it easier to avoid vendor lock-in.
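To give a sense of what this looks like in practice, here is a minimal sketch of a K8ssandra cluster manifest. It is illustrative only: the field names follow the K8ssandra operator's v1alpha1 API as documented at the time of writing, and the cluster name, datacenter name, node count, and storage class are placeholder values, so check the current K8ssandra documentation before applying anything like it.

```yaml
# Hypothetical, minimal K8ssandraCluster manifest (placeholder values)
apiVersion: k8ssandra.io/v1alpha1
kind: K8ssandraCluster
metadata:
  name: demo-cluster
spec:
  cassandra:
    serverVersion: "4.0.1"
    datacenters:
      - metadata:
          name: dc1
        size: 3                      # three Cassandra nodes in this DC
        storageConfig:
          cassandraDataVolumeClaimSpec:
            storageClassName: standard
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 5Gi
```

The appeal is that the operator turns this declarative spec into a running, replicated cluster, so the same manifest pattern works on any conformant Kubernetes engine.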

A vibrant future

As a highly active open source project, Cassandra is always being updated and extended by a vibrant community of very smart people at companies like Apple, Netflix, and my employer, DataStax. Indeed, the Apache Software Foundation today announced the general availability of Cassandra 4.1. Through exciting innovations like ACID transaction support (long a holy grail of distributed NoSQL databases) and improved indexing, we are working to make Cassandra more powerful, easy to use, and ready for the future.

Want to learn more about Apache Cassandra? Register now for the Cassandra Summit, which takes place in San Jose, Calif., March 13-14, 2023.

About Jeff Carpenter:


Jeff has worked as a software engineer and architect in multiple industries and as a developer advocate helping engineers succeed with Apache Cassandra. He’s involved in multiple open source projects in the Cassandra and Kubernetes ecosystems including Stargate and K8ssandra. Jeff is coauthor of the O’Reilly books Cassandra: The Definitive Guide and Managing Cloud Native Data on Kubernetes.


For many enterprises, the pandemic involved rapidly deployed ways of enabling remote working. Today, the need for long-term solutions means that hybrid working is one of the top three trends driving network modernization – as reflected in the 2022-23 Global Network Report published by NTT.

According to the survey data for this report, 93% of CEOs agree that even if their staff return to the physical workplace, they will provide an expanded remote or hybrid-working policy.

But even though hybrid working is here to stay, organizations may still lack the cybersecurity controls and business-grade internet connections, like SD-WAN, that are required to support remote and hybrid workers. The burden on the network grows even as some employees start returning to the office.

The report flags three ways in which CIOs and CTOs are reshaping the physical workplace to meet these new demands:

Increasing Wi-Fi density and speed for seamless connectivity and high-bandwidth applications
Converging information technology (IT) and operational technology (OT) networks
Providing modern meeting rooms and devices with high-resolution video

However, the impact of these increased burdens on enterprise networks isn’t always widely appreciated. Amit Dhingra, Executive Vice President of Enterprise Networking at NTT, says: “Nobody expected it. But the requirement on the network is increasing because even when we go into the office, we spend much of the day on high-definition video calls.

“The network was never built for that. It was built for in-person collaboration within the office, not virtual collaboration. But in the workplace, we have all become guzzlers of bandwidth in a way we were not before the pandemic.”

In the Global Network Report survey, 97% of CIOs and CTOs say that hybrid working leads to a higher demand for network connectivity, generated by both home and remote working.

The result: network managers now need to ensure that networks are both fit for remote working and able to cope with the demands of high bandwidth consumption in the workplace.

The survey also shows that 93% of CIOs and CTOs believe the campus network is the most critical element to enabling a resilient hybrid workplace.

On top of these challenges, Matthew Allen, Vice President, Service Offer Management – Networking at NTT, identifies a further difficulty. “Once you’ve got distributed employees all over the place, how does IT get visibility? How are those users accessing their systems? Are they performing to standard? If employees are not able to access key business systems no matter where they are, you have an issue. Lack of visibility is really one of the key problems that we have encountered.”

NTT’s recipe for hybrid working begins with zero trust network architecture, identity management and multifactor authentication. There’s also the requirement for a seamless transition between different work environments, says Allen: “Everyone returns to the office, carries their company laptop if they’ve got one, carries their phone, carries their watch, carries whatever. If I’m connecting all of these things at home, I want to be able to walk into my office and connect in a similar way without too much drama. If the underpinning technology enables that experience, this becomes a way to start bringing people back into offices.”

Finally, there’s the need to consider access technologies. Dhingra says: “It’s wireless-first if you ask us, perhaps wireless-only campus connectivity. Gone are the days when you used to have lots of fixed LAN cables. This is true in offices, and it’s true in factories too. This, combined with wireless access, enables manufacturers to rejig assembly lines within days, rather than months.”

Enabling productivity and effective collaboration amid hybrid working has become one of the top five business objectives for organizations, the Global Network Report shows. After what was – in many cases – a frantic transition at the start of the pandemic, it’s clear that a good deal more work remains to be done before enterprises can claim to have laid the foundations for a long-term hybrid-working strategy.

NTT’s Global Network Report takes stock of how networks are evolving, organizations’ preparedness for these changes and how they will adapt their networks to these new demands.

Download the 2022–23 Global Network Report


This article was co-authored by Duke Dyksterhouse, an Associate at Metis Strategy.

A lobby television isn’t all that uncommon or remarkable for a $4.5 billion company, but what’s on the 85-inch screen in the lobby of Generac’s headquarters certainly is. Rather than the predictable advertisements or staged photos featuring happy employees, it’s a demo of the energy management firm’s latest innovation, called PowerINSIGHTS.

It’s an interactive platform. Zip and click and zoom about a map of North America bespeckled with glowing, Generac-orange dots, and as you dance about, watch the handful of key metrics in the UI change to reflect the region examined: UtilityScore, OpportunityScore, PowerScore. Simple metrics, but dense with information, telling not only of any one region’s energy landscape but of the entire energy market’s trajectory. 


“Every day that I come into the office,” explains Tim Dickson, CIO of Generac, “I see people I’ve never met, people I’ve never even seen, standing around the demo screen in the lobby. And ideas for how to improve it are pouring in. Other business units, like our subsidiary Ecobee, have already gotten involved. They’ve added their assets to the platform.” 

In the world of energy management, Generac’s PowerINSIGHTS platform is a riveting achievement in the race to extract an unprecedented level of intelligence from power grids, which have become more difficult to manage with the rise of Distributed Energy Resources (DERs) like solar, EVs, and, of course, Generac generators. DERs are hard to visualize as they come in many forms and run on unpredictable schedules. PowerINSIGHTS changes that. Its glowing orange dots represent the once “hidden” DERs, and its accompanying metrics reveal how such energy in a geography is managed, used, distributed, and so on. 

“This platform brings an incredible amount of unseen energy into play,” says Amod Goyal, one of Generac’s development experts and the manager of the PowerINSIGHTS implementation. “We can see where there’s idle power that a customer might want to sell and where we can redistribute it to help people in need, like after a hurricane. We can do this all without providing any external access to customer data, and we never disclose any personally identifiable information.”

PowerINSIGHTS’ value and novelty may make you think the platform is the premeditated outcome of an arduous program. Its display in the Generac lobby encourages that suspicion. But PowerINSIGHTS is the unexpected outcome of a hackathon led by Tim and his IT organization. Even more notably, the hackathon was one of Tim’s first initiatives after taking the helm as CIO in August of 2020. 

Conventional wisdom suggests CIOs should master IT fundamentals before they get innovative. The helpdesk must run like a German train station, the Wi-Fi can’t drop (ever), and the conference room must be easier to navigate than an iPhone. While getting the basics right is table stakes for any CIO, if you wait to innovate until your peers commend you for doing so, bring a comfy chair because you’re going to be waiting for a while. Additionally, the master-the-rules-before-you-break-them philosophy is exceedingly narrow. Who made Wi-Fi or conference-room navigation the rule? The CIO is meant to enable the business, and there are many ways to do that beyond ensuring network uptime.  

The best CIOs want to rattle their departments, change their organizations’ stars, and lunge at the big ideas white-boarded in a frenzy of inspiration. But, as is often the case, what if they don’t have the resources, the time, the money, or the mandate?  

Do it anyway, Tim says. You might surprise yourself. On the heels of the successful hackathon and PowerINSIGHTS development, he offered three points of advice and encouragement for technology leaders who want to drive innovation, even if they aren’t sure they are ready: You have more at your disposal than you think, your people are more talented than you know, and you will be known for what you do. 

You have more at your disposal than you think 

Despite what some IT leaders think, innovation is not reserved only for the Googles and the Teslas of the world. Additionally, not all innovative organizations need to be built from scratch. You don’t have to invest in a new kitchen to cook something new; sometimes you need only to step back and consider how you might differently combine the ingredients you already have.  

PowerINSIGHTS is a perfect example of this. No element of the platform is all that novel, Tim says, and Generac had the underlying data for years. What’s more, the geospatial visualization of that data was made possible by a feature of Microsoft Azure that had been hiding in plain sight. The innovation came from a new combination of these elements.  

There may also be significant change agents in your broader ecosystem. For example, to build momentum behind his hackathon, Tim recruited vendors to sponsor it. Microsoft, Databricks, and others sent in experts a month ahead of time to upskill Generac’s workforce. Suddenly, IT employees found themselves learning the things that interested them and developing the skills they wanted to develop. Other departments, feeling the excitement, jumped into the mix and IT employees found themselves solving problems alongside their peers from Connectivity and Engineering, a demonstration of the business partnership CIOs dream of. 

Often, the best inventions seem obvious in retrospect. Keep that in mind when you think your department lacks the resources to build something new. Tim recruited partners to support the hackathon, yes, but what made the difference was Tim’s push to give employees the chance to innovate with what they had. Without that push, it’s likely that the PowerINSIGHTS idea would not have seen the light of day.  

Your people are more talented than you know 

As corporate IT departments evolve, so too do the qualities their leaders seek in candidates. Where nuts-and-bolts, black-and-white problem-solving once may have sufficed, skills like ownership, autonomy, creativity, big-picture thinking, and continuous learning are quickly becoming essential. Because many IT leaders have yet to see their current employees exhibit these traits, they tend to think they lack them altogether. Therefore, they decide they cannot transform their department or make it innovative until they first hire the “right” people. Since that often requires a budget they don’t have, it’s a good excuse to stand still.

Oftentimes, however, employees already have the autonomy, creativity, and all the attributes that companies covet; they just lack an avenue to showcase those attributes. As Tim predicted it would, the hackathon opened that avenue to Generac’s employees. He elaborated on this insight last year in Metis Strategy’s Digital Symposium: “We had 16 teams participate, 70 people, and we’ve implemented over half of [their] ideas in production deployment. What that showed me is that there was a significant amount of pent-up demand…a significant desire for folks who aspired to do more…and show and present their ideas…in a form that they didn’t necessarily have before.” 

The hackathon revealed such an explosive appetite for innovation that, in its wake, Tim and his colleagues configured a digital center of excellence (COE) as a central muscle for nurturing that appetite on an ongoing basis. The COE helps anyone in the organization, regardless of their position or business unit, develop their ideas with emerging technologies. “It allows those people with the ideas an avenue to bring them to light,” explained Tim. “When you have that type of engagement from team members, where they feel their voices are being heard, that’s a model that can scale…so we’ve embraced that here at Generac.”

You don’t always need better talent to innovate. Sometimes, you need to innovate to find out how good your talent is. That’s the paradox that drove Tim to host his hackathon in the first place. He wanted to learn who and what he was working with. Dickson likens it to karaoke: “You just don’t know who’s going to hop up, grab the mic, and just wail it out,” he says. “It’s one of the most inspiring things to witness. But you have to play a tune worth singing to.” 

You are known for what you do 

Aristotle once wrote, “We are what we repeatedly do.” Tim’s rendition is, “You will be known for what you do.” In either case, the emphasis is on the “do.” Tim’s gentle reminder to his employees, and his advice to CIOs, is that the most eloquent memos and best-laid plans are meaningless if there’s no action behind them. Don’t try to convince anyone that you or your department are innovators or wait for permission to become innovators. Be innovators. 

The key is to get to something real, however rough. If the idea is even halfway decent, says Tim, that will change everything. And PowerINSIGHTS is the perfect example. Prior to the hackathon, Tim and his team could have frittered away time around the water cooler, spitballing the merits of such an innovation to anyone who would listen. But they didn’t. Instead, they built it, crude as the first iteration may have been. At first, the user interface was spartan, the user experience clunky, but no matter.

“Once we had something people could see and touch, the whole mood shifted,” Tim said. “The CEO actually… proposed some of the first use cases for PowerINSIGHTS and has remained very involved in the project since.”  

That initial action on the innovation front led to real transformation for legacy processes and technologies as well. In Generac’s case, one of the biggest shifts has been an embrace of cloud infrastructure. “Everything was on-prem when I started. But cloud will be essential to supporting PowerINSIGHTS in the long run, so we’ve stood up a cloud-first infrastructure. And of course, the benefits of that have reached beyond PowerINSIGHTS.”  

We preach often in this column that you don’t have to have all the answers before embarking on an innovation initiative. Tim and PowerINSIGHTS are clear evidence of that. His team had a plan, of course, but they didn’t wait for anyone’s permission to leap. CIOs hoping to reposition their organizations need not wait. By engaging teams across the organization and acting quickly, you will likely discover new opportunities for innovation, energize a team of talented and passionate people, and win respect, quickly, from your peers. 

CIO, Innovation

Don’t miss CIO’s Future of Digital Innovation Summit and Awards Canada, happening on November 29-30, produced by IDC and CIO in partnership with TECHNATION. Registration is complimentary, and attendees will have the opportunity to gain the latest knowledge in innovation from experts in a broad range of industries.

The conference will kick off on November 29 with a keynote from Lee-Anne McAlear, Program Director, the Centre of Excellence in Innovation Management, York University. McAlear will focus on digital leadership in a time of continuous change. Kelley Irwin, Chief Information Officer, Electrical Safety Authority, Kalyan Chakravarthy, Chief Information Officer, the Regional Municipality of Durham, and Kyla Lougheed, Digital Transformation Lead, United Way Greater Toronto, will participate in the CIO Panel: Jumpstarting Innovation for Customer & Employee Experience. They will discuss developing new innovative capabilities to improve the customer and employee experience. In this interactive group session, you’ll have the opportunity to ask questions, share your thoughts, and dive into some of the lessons learned when implementing innovative projects.

The afternoon sessions include collaborative solutions for hybrid work environments presented by Aruna Ravichandran, SVP and Chief Marketing Officer, Webex by Cisco, and Culture, Growth, and the Modern Digital Enterprise, in which Sabina Schneider, Chief Solutions Officer – North America, Globant, will focus on current and future business environments. The day will end with a highly anticipated session on Transforming the Technology Foundations for Business Enablement and Agility with CIO Awards Canada Winners CIBC, represented by Richard Jardim, Executive Vice-President and CIO, and Bradley Fedosoff, Senior Vice-President, Architecture, Data and Analytics.

Day one offers a full day of insights and discussions with Canadian CIOs and senior technology leaders who are building digital innovation and transforming into digital businesses. Check out the full agenda here.

Day two, November 30, kicks off with a presentation on The End Game: How to Deliver Sustained Digital Innovation, led by Nancy Gohring, Research Director, Future of Digital Innovation, IDC. Immediately following her presentation, you’ll be able to ask questions about the future of digital business. The final session before the double awards ceremony will be a fireside chat with Shaifa Kanji, Assistant Deputy Minister, Chief Digital Officer of DTSS, Innovation, Science and Economic Development Canada, interviewed by Angela Mondou, President and CEO of TECHNATION, who will discuss accelerated digitization in Canada. The summit will cap off with the best of the best, with the unveiling of TECHNATION’s Ingenious Awards, and then the CIO Awards Ceremony where we celebrate Canadian organizations that are using technology to innovate and deliver business value. To attend the summit and access the full agenda, register today.


It’s hard to imagine where today’s businesses would be without conversational AI. This technology, which powers both chatbots and conversational IVR systems, proved essential for navigating a changing service economy through a global pandemic.

Even before COVID-19, Gartner predicted that 70% of white-collar workers would interact with conversational AI platforms every day by 2022. The market for this technology is now expected to grow at a compound annual growth rate (CAGR) of 21.8%, reaching $18.4 billion by 2026.
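Those projections compound annually. As a quick sanity check on the cited figures, here is a minimal sketch (the five-year window ending in 2026 is an assumption; the report may use a different base year):

```python
def project(value, cagr, years):
    """Compound a starting value forward at a constant annual growth rate."""
    return value * (1 + cagr) ** years

# Working backwards from the cited figures: a market reaching $18.4B in
# 2026 at a 21.8% CAGR implies, over an assumed five-year window, a base of
base = 18.4 / (1.218 ** 5)   # roughly $6.9B
print(round(project(base, 0.218, 5), 1))  # compounds back to 18.4
```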

This is thanks, in no small part, to how much this technology has improved in recent years. Chatbots, in particular, can now support the customer experience in many ways, enabling more customer self-service and reducing the demand on human agents.

Nonetheless, success is not a given when contact centers deploy chatbots and other conversational AI solutions. A chatbot comes with powerful AI capabilities, but it still hasn’t been tailored to fit your needs or tested in your business. Before contact centers take the plunge, they must consider what it really takes to ensure their conversational AI solutions will support and enhance the customer experience.

The growing demand for chatbots in the contact center

In large part, contact center executives don’t need to be convinced that they should adopt conversational AI in the form of either chatbots or intelligent voice assistants. Most are already eager to bring these solutions into the mix. According to Canam Research, 78% of contact centers planned to deploy AI by 2023, with the largest portion (55%) pointing to chatbots as their primary AI solution. The chatbot market is expected to grow even faster than conversational AI overall, at a CAGR of 30.29% from 2022–2027.

There are good reasons for this, too. Across the board, contact center executives see the fruits of deploying chatbot solutions. A recent survey of Fast Company Executive Board members noted that adding a chatbot solution to their website enhanced customer engagement, accelerated service, improved personalized support, and increased customer satisfaction — just to name a few outcomes.

These positive results are encouraging, but that doesn’t mean chatbots and other conversational AI technologies are now flawless. They still fall short in many ways, from misinterpreted customer intents to delayed handoffs and security failures. And the resulting poor customer experiences can lead to customer churn and other negative impacts on a brand. These possibilities should make any contact center executive pause before jumping on the chatbot bandwagon unprepared.

The chatbot testing conundrum

That’s not to say contact center leaders shouldn’t embrace this technology — only that they should do it in the right way. As responsive and smart as AI is, it’s still limited by its programming. Ultimately, chatbot misfires still occur because bots can’t possibly account for all potential human interactions. The nuances and quirks of human communication are so vast and varied that there’s no way to prepare a chatbot for all possibilities out of the box.

Consider, for instance, how many possible ways someone could ask a chatbot to order a vegetarian pizza.  They may ask for a “veggie pizza,” a “pizza with no meat,” a “meatless pizza,” or use one of any number of other phrases. On top of that, any given person might bring their own quirks, like spelling errors, colloquial ways of saying something, limited tech capabilities — you name it. How do you know if your bot is capable of handling all these variations and nuances? You need to test it.

But truly testing for all these and the many other options for how someone could order pizza is an extensive job. Doing it manually would require many hours, or possibly even days, first to come up with the types of tests to run and then to run them. To do it efficiently, you need a solution that can accomplish all the necessary steps for you — a testing platform that allows you to quickly and efficiently expose these limitations so you can send the bot back to development and teach it new skills.
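The core of such a harness can be sketched in a few lines. This is a minimal illustration, not any vendor’s API: the classifier, intent names, and test cases below are all hypothetical stand-ins for a real bot’s NLP layer.

```python
def classify_intent(utterance: str) -> str:
    """Stand-in for a real chatbot's NLP layer (illustrative only)."""
    text = utterance.lower()
    if "pizza" in text and any(w in text for w in ("veggie", "vegetarian", "no meat", "meatless")):
        return "order_vegetarian_pizza"
    return "unknown"

def run_intent_tests(bot, cases):
    """Feed many phrasings to the bot; return the ones it misclassified."""
    return [(utt, expected, got)
            for utt, expected in cases
            if (got := bot(utt)) != expected]

cases = [
    ("Can I have a vegetarian pizza?", "order_vegetarian_pizza"),
    ("I'd like a veggie pizza", "order_vegetarian_pizza"),
    ("One pizza with no meat, please", "order_vegetarian_pizza"),
    ("A meat-free pizza for me", "order_vegetarian_pizza"),  # a phrasing the bot misses
]
failures = run_intent_tests(classify_intent, cases)
print(failures)
```

Each failure the harness surfaces is exactly the kind of gap that gets sent back to development so the bot can be taught the missing phrasing.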

AI testing AI: the true path to flawless CX

Fundamentally, this kind of testing must cover the entire process so your testers don’t have to test your chatbots manually or spend hours developing test cases.

It means testing from end to end with automated natural language processing (NLP) score testing, conversational flow testing, security testing, performance testing, and chatbot monitoring. Ideally, the testing process should be simple and intuitive, with no coding, scripting, or programming involved.

Let’s return to the veggie pizza example. It would take a person (or a team of people) an incredibly long time to come up with all the ways someone could order their veggie pizza; and even then, they’d probably miss some. The only way to effectively come up with all possibilities would be to leverage AI to generate the test data. AI could select a question, such as “Can I have a vegetarian pizza,” and then automatically generate a list of ways to say the same thing. It could then automatically test the chatbot with those variations to see how it responds.

Going a step further, how many different ways could a person actually say each of those variations? AI can be used to further drill into the unique human quirks that different customers might bring to an interaction. For instance, AI could add layers to testing for customers who type sloppily, type in all caps, misuse homophones, add extra spaces or emojis, and more. “Pizza with no meat” could then become “pizza with no meet,” “PIZZA NO MEAT,” and any number of other possibilities.
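These quirks are mechanical enough to generate automatically. A minimal sketch, with the perturbations and homophone list chosen here purely as illustrative assumptions:

```python
# Illustrative perturbations mirroring common typing quirks;
# the homophone list is a small assumed sample, not exhaustive.
HOMOPHONES = {"meat": "meet", "no": "know", "for": "four"}

def shout(text):
    """Customers who type in all caps."""
    return text.upper()

def pad(text):
    """Stray extra spaces between words."""
    return "  ".join(text.split())

def swap_homophones(text):
    """Misused homophones, swapped word by word."""
    return " ".join(HOMOPHONES.get(w, w) for w in text.split())

def variants(text):
    """Generate noisy variants of one seed utterance for bot testing."""
    return [f(text) for f in (shout, pad, swap_homophones)]

print(variants("pizza with no meat"))
```

Feeding each generated variant back through the intent tests multiplies coverage without anyone writing cases by hand.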

These are just examples, but what’s important is that your testers don’t have to come up with all these options or run the tests themselves. You need a testing solution that will do it for them, with minimal manual effort. What you want is, effectively, AI testing AI so you can run these kinds of comprehensive, detailed tests much more quickly and frequently. This allows your testers to expose more chatbot weaknesses so your developers can teach and improve your bots more often and with greater precision, ultimately providing a better-quality experience for your chatbot-using customers.

Contact center executives’ instincts are right: Investing in chatbots is a smart move. But doing so without adequate testing support could lead to more harm than good. Cyara Botium does exactly what we have described here and can provide the testing support your contact center’s chatbot technology needs. Learn more and try a demo to see for yourself.

Artificial Intelligence, Machine Learning

Imagine booking a room at a small, charming, off-the-beaten-path hotel on the Hawaiian island of Kauai using a popular mobile travel app, only to discover that the room is… haunted!  That’s what happened to my friend Dana. As Dana told it, she went to bed at midnight after a long travel day. But that didn’t go as planned. She struggled to sleep. Then, at 3 A.M., Dana claims she saw a pale, vaporous head of an older fellow floating above her bed, mouthing words she could not hear. (Believe me, Dana’s rather unique customer adventure is worth a blog all its own.)

I haven’t had the chance to travel all that much lately, but when I get back to it, I sure hope I don’t experience what Dana went through. In fact, I’d want to know in advance things about where I am staying that may be a bit off center, like, is the place haunted? 

Wouldn’t it be practical, then, if your favorite travel app could not only give a “your hotel may be haunted” notification, but also provide a bit of the history behind the why – without you having to hunt for some random Top Ten blog or TikTok video that may not even include your particular hotel?

Does such a travel app exist? Nope, not at the moment.

And yet, perhaps there IS a ghost of a chance that this kind of app will be materializing soon.

When that happens, the reveal could be at the SAP Innovation Awards!  Even now, an SAP customer or partner could be developing such a tool. If so, I can’t wait for this app to be celebrated at a future Innovation Awards show. Honestly, think of how much fun that would be!

In keeping with the Halloween spirit

This is Halloween after all, so indulge me as I share stories about a couple of my hometown hotels with their own unique haunted histories – information not found on any travel app that I’m aware of.

Room 33

In San Francisco’s North Beach district, there exists a small, quaint hotel that was built amidst the ruins of the 1906 earthquake by Bank of America founder A.P. Giannini. It’s one that I used to walk past every day when I worked in the area. Today, it’s a popular place to stay for budget-minded travelers who want to enjoy the neighborhood’s Italian restaurants near Fisherman’s Wharf.

But what is not commonly known is that this family-owned hotel once thrived as a busy brothel during the City’s wild Barbary Coast days. Its former madam, famous for her boisterous larger-than-life personality, still roams the halls of her former establishment, knocking on doors with Room 33 being her favorite haunt. But she’s not alone. A sad little ghost girl has occasionally been seen in the hallways — always reaching for the doorknob of one room in particular. Reason? Unknown.

Room 207

If you’re into classic art deco décor but demand all the comforts of a modern hotel, then there’s a century-old hotel off Union Square that is right for you. But, be warned. Room 207 is where you might encounter one hotel guest who doesn’t want to leave — even though she has long since departed this mortal plane. Reports of doors mysteriously opening and closing, and small objects appearing or disappearing have been ongoing for years. It is thought that the disruptive spirit haunting the room is that of famous playwright Lillian Hellman, who had regular liaisons there in the 1920s with writer Dashiell Hammett, author of The Maltese Falcon. Perhaps the ghost of Miss Hellman is still searching for the elusive jewel-encrusted blackbird, much like Brigid O’Shaughnessy, the fictional femme fatale from the book written by her lover — which, coincidentally, was set in San Francisco.

Across from the Fairmont

Arguably, San Francisco’s most famous ghost concerns one Flora Sommerton, a comely 18-year-old debutante who disappeared in 1876. Legend has it that she ran away to escape a pre-arranged marriage to a rich but much older gentleman. So she bailed from her grand engagement party, held at her home across the street from the historic Fairmont Hotel, and was never seen again. That is, until 50 years later. In 1926, the withered body of an old woman was discovered in a cheap hotel room in Butte, Montana — reportedly wearing the same 19th century white ball gown and jewelry that Flora fled in. There were old, brittle newspaper clippings of Flora’s disappearance pinned to the walls of the small, dank flophouse room. It was her. Flora had finally been found. Her body was brought back to San Francisco where she was buried in the family plot. But Flora’s story does not end there. Today, as you approach the Fairmont Hotel on any given sunny afternoon, keep an eye out for what many people have seen throughout the years: the ghostly figure of fair, young Flora, parasol in hand, quickly walking down California Street, then vanishing as she rounds the corner to where her home once stood — and always in that flowing, white, ballroom gown.

There are many more stories to be had about haunted hotels and their spooky history here in the San Francisco Bay Area. And in your city, too, no doubt. But I will have to wait patiently for some future travel app to clue me in on which ones. Maybe I will get my wish at the upcoming 10th Anniversary SAP Innovation Awards 2023, spirits willing.

Happy Halloween!

Mari Kure

Devops, Software Development

National Trust CIO Jon Townsend is laying down some home truths on sustainability. Five years ago, he notes, the message in the IT industry was not about reaching net zero, meeting ESG commitments, or becoming more sustainable, but about businesses becoming “bigger, better, faster.”

“We need to change that conversation,” he says, “and get [sustainability] higher on the agenda.”

Townsend says the COP26 agreement in Glasgow, Scotland, last year, visible changes in weather conditions, and increased awareness of climate change are turning the tide and, as CIO of the UK’s biggest conservation charity, he acknowledges that the 125-year-old non-profit has no choice but to be an “example for others to follow”.

After all, if the National Trust, formed on the principle that humans want ‘quiet, beauty and space,’ can’t get this right, what chance do other institutions have?

“It’s not easy for anybody,” says Townsend, “but it starts for me with transparency around what you’re trying to achieve, and the scale of the problem you’re trying to overcome. You can either be proactive and see [sustainability] as a net positive for your organisation, or you can wait for your consumers, supporters, members, and customers to tell you they care about this.”

Sustainability starts with visibility and storytelling

National Trust oversees almost 800 miles of coastline, 250,000 hectares (620,000 acres) of land and one million pieces of art across 500 historic buildings.

Given this real estate, Townsend admits that the charity is better placed than others to respond to the climate crisis, yet he maintains that it faces a significant challenge around data and reporting, especially when attempting to review scope 3 emissions, indirect emissions that occur in the company’s supply chain. Scope 1 emissions are direct emissions from a company’s owned or controlled sources; scope 2 covers indirect emissions generated through purchased electricity, steam, heating or cooling; and scope 3 covers waste disposal, employee commuting and business travel, and purchased goods and services.


“The first thing is to make sure we have the data so we understand scope one, two and three emissions,” says Townsend, who formerly held senior technology and security roles within the Department of Work and Pensions (DWP) and the Ministry of Defence (MoD), prior to becoming director of technology and information security at the National Trust in October 2015.

“It’s not just about carbon capture, it’s also about reducing emissions because the simple equation is you may capture more, but if you’re still emitting that same level of carbon, you’re not going to solve the problem,” he says. “It’s important for us, but also for IT organisations everywhere, to think about scope three emissions, who you’re working with, and how seriously they’re taking the issue.”

For Townsend, who’s also the non-profit’s CSO, getting results comes down to storytelling, much in the same way a message to the CFO might be framed in terms of cost savings. In other words, it means conveying the impact of sustainability in ways the organisation’s employees can easily understand.

“It’s identifying the areas where you can make a difference, and telling that story to people,” he says.

He does, however, have a warning about how such storytelling plays out.

“We make sure we don’t sound too preachy, or like it’s some sort of parent-child thing,” he says. “It’s storytelling in a way that people get it, it’s easy for them to consume, and something they can relate to one another on.”

IT sustainability projects in the works

As part of its 2025 business strategy, the National Trust has committed to reducing its conservation backlog, reducing energy use by 15%, and sourcing 50% of energy from renewables by 2020-21, against its 2008 usage as a baseline.

It’s also made commitments to reach carbon net zero in operations by 2030, including ambitions to plant 20 million trees and create 25,000 hectares (62,000 acres) of new wildlife habitats.

Townsend’s IT department has its own role to play in reaching these objectives, from making better technology decisions and scrutinising IT suppliers’ own green credentials, to educating the workplace on their carbon impact.

For example, with suppliers, the team is working with hardware partners to reduce plastic packaging, making sustainability a key decision factor in partner and system selection during the RFP process, and looking for greater transparency from suppliers on scope 1, 2, and 3 emissions.

Systems no longer being properly utilised, such as data analytics platforms not in use, are swiftly decommissioned as well, says Townsend.

The National Trust is also migrating away from old, energy-intensive data centres, cutting back on content on its new website, and introducing smart IoT sensors in buildings to monitor emissions.

The communication to, and education of, the workforce is just as important. Townsend says the non-profit is attempting to cut down on excess emails and PowerPoint presentations — even advising employees of the trees saved by each virtual meeting.

Plus, within the IT department, the team uses technology to monitor emissions and to reduce data storage through more efficient archiving.

Measuring effectiveness, of course, is vital to see what works and what doesn’t.

“We are helping the organisation understand its overall emissions, and measure total emissions caused by technology—both within the organisation and within our supply chain,” says Townsend of sustainability metrics, while still calling for suppliers to be as transparent as possible about emissions from their own supply chains. He adds that the National Trust measures its progress in this area by tracking everything from carbon capture, water management and use of materials, through to how its activities, projects and supply chains impact nature and people. The organisation is also measuring its sustainability impact in accordance with frameworks, such as BREEAM, LEED and WELL.

The CIO’s role in sustainability

CIOs have developed a greater role and responsibility in prioritising sustainability within organisations, from inputting into strategy and becoming executive sponsors on sustainability programs, to actively contributing to ESG goals through their IT strategies. And yet previous studies have suggested that sustainability is way down their priority list.

It’s not always straightforward to pin down the sustainability objectives within the CIO’s role, but Townsend believes CIOs should support, rather than lead, the sustainability initiative outright.

“It needs to be somebody who understands the impact on nature, farming, and our let estate,” he says of National Trust’s position, pointing to the non-profit’s land and nature director as the most obvious candidate to lead the sustainability charge. “We have many thousands of properties and holiday cottages that we rent out. That isn’t a CIO function to me.”

At the National Trust, which has almost six million members, Townsend has regular conversations with investment boards on the topic, and invites the charity’s sustainability team to meet the IT department and join his quarterly leadership meeting so he can understand what more he and his team can do as an IT organisation.

He does, however, admit that this role may vary. For example, a smaller financial services firm, one leading with digital technologies and with little real estate, may take a different view. Ultimately, he says, sustainability should be every executive’s priority.

Cultivating cloud-based organisational regrowth

The National Trust reported losing £200m in operating income through the COVID-19 pandemic, with Townsend admitting it was a tricky time for an organisation reliant on supporter funding.

“Most organisations have had a tough time in the last couple of years and it’s no different for us,” he says. “However, one of the lessons I think we learnt is that technology underpins everything we do.”

Digital channels have become another way of reaching supporters and customers, with one such fundraising campaign raising £1 million in 21 days for the National Trust to buy 700,000 square metres of land around the White Cliffs of Dover. More recently through COVID-19, the National Trust utilised digital channels to offer virtual tours of places and gardens, as well as baking recipes.

With the Trust now in its second wave of digital transformation, it’s delivering a new membership and fundraising platform, built on Salesforce’s Service Cloud, Marketing Cloud and Experience Cloud, and a new enterprise data platform built on Snowflake and hosted on Microsoft’s Azure, which is used along with Alteryx and Tableau to provide data analytics, insights and reporting.

In addition, the Trust has delivered a new Enterprise Integration platform using Microsoft Azure Integration Services, and is building a digital platform for National Trust’s website, utilising the Bloomreach CMS and hosted on AWS. Townsend says the website will improve accessibility features and should reduce the carbon impact of the website by 50%.

The hope is to join up digital and physical experiences so the National Trust can continue to help people through a difficult time.

“We want people to be able to experience nature, beauty and history,” says Townsend, “and to find some relief from all the pressures they experience in the rest of their lives.”

CIO, Data Management

Modernizing and future-proofing your analytics

Executive-level commitment to a broad data governance strategy, one that balances technology, people, and processes, is gaining momentum. In a recent Gartner survey, 78% of CFOs said they will increase or maintain enterprise digital investments. And a Gartner forecast states worldwide IT spending will grow 3% in 2022.

The counterbalance to this positive trend comes from NewVantage Partners’ Data and AI Leadership Executive Survey 2022, in which only 26% of respondents claim to have reached their data goals. The gap between data winners and stragglers is widening.

Technology balance

One look at the Andreessen Horowitz framework for the modern data infrastructure and you see that data ecosystem complexity is becoming a nightmare to manage. The need to properly secure this new smorgasbord of data platform choices compounds the management challenge.

Andreessen-Horowitz framework for the modern data infrastructure

People balance

Until recently, data management and analysis was almost solely an IT function. Today, the business ranks are filled with data stewards, data analysts, and data scientists tasked with building a data security governance platform. Meanwhile, CISOs, CIOs, and CDOs are thinking about compliance requirements and implementation. While the expansion of data-related roles has many positives, it has also meant dwindling IT resources directly dedicated to data consumers, even as IT is tasked with servicing a growing data landscape.

Process balance

On-premises technologies have moved to the cloud, often in an à la carte, buy-as-you-go style, without significant forward-looking strategy. In addition, a stream of new regulations demands new processes to regulate and assure the responsible discovery, access, and use of data. Add to this the federation of our data expertise into the business functions, and organizations now require a scalable approach to data governance processes.

The growing cost of getting it wrong

While many proof points exist for the value of data and the positive impact, the cost of doing nothing or getting it wrong has gone somewhat unnoticed. Key considerations include:

- The average cost of a security breach in 2022 is around $4.35m, compared to $3.8m two years ago (Source: IBM’s Cost of a Data Breach Report 2022).
- Regulatory fines, such as those under GDPR, are becoming real, with companies such as Amazon and WhatsApp reporting multi-hundred-million-dollar fines.
- Analyst, data engineer, and data scientist productivity remains a major challenge: they continue to report that 80% of their time is spent finding and getting access to the right data, as well as cleaning that data.
- The intangible cost of delayed business decisions when projects are put on hold or severely impacted and delayed.
- Loss of consumer trust once confidence is broken by mishandling of data, causing lasting damage to a company’s brand as well as severe financial repercussions.

Modernizing your data security governance thinking

Modernization starts with thinking differently about the approach to people, processes, and technology.

Modernizing data security governance technology: Security and data governance need to exist across every part of the data lifecycle. Maintaining that security posture on a point-by-point basis is simply not viable. What’s required is a broad-based data security platform that provides a centralized data security control plane across your entire hybrid data estate.

Modernizing the roles of your data stakeholders: Key stakeholders have expanded beyond the traditional experts employed by IT. Data experts live in the business. Data scientists in the business team are embraced, but data governance stakeholders have yet to receive the formal recognition they deserve. The data owners are business people. Formalize security and data governance objectives early. Empower your business data stakeholders to perform those objectives in a scalable and automated manner.

Modernizing your data governance processes: Gartner speaks extensively of the evolution of data governance from dictated (IT command and control) to distributed (everything left to be performed at the edges of the process). Implement a blended model where the system is based on federated responsibilities with centralized control and auditability.

Unified data security governance

AWS, Snowflake, Databricks, Azure and Google continue to deliver more choices on their ecosystems, which offer more opportunities for your business. But more choices inherently increase the difficulty of enforcing security across this increasingly diverse landscape. The only way to future-proof your analytics along with your security and privacy posture is through a unified data security governance approach. Privacera was co-founded by the innovators who led the charge in creating Apache Ranger™, one of the most widely used open source data security and access control frameworks. As the only scalable data policy management solution based on open standards, Privacera offers a proven way to future-proof, while preventing vendor lock-in. Read more about the immense benefits of a data security platform based on open standards.
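The idea of a centralized control plane can be sketched in miniature. The example below is a simplified illustration of Ranger-style, default-deny policy evaluation; the policy shape, resource names, and group names are assumptions for illustration, not the Apache Ranger API. The point is that one set of policies governs who may perform which action on which resource, regardless of the underlying platform.

```python
# Each policy grants a set of groups a set of actions on one resource
# (all names below are hypothetical).
POLICIES = [
    {"resource": "sales.orders", "groups": {"analysts"}, "actions": {"select"}},
    {"resource": "hr.salaries",  "groups": {"hr"},       "actions": {"select", "update"}},
]

def is_allowed(user_groups, resource, action):
    """Default-deny check: allow only if some policy grants the action."""
    return any(resource == p["resource"]
               and action in p["actions"]
               and user_groups & p["groups"]
               for p in POLICIES)

print(is_allowed({"analysts"}, "sales.orders", "select"))  # True
print(is_allowed({"analysts"}, "hr.salaries", "select"))   # False
```

Because every platform consults the same policy store, adding a new engine to the estate means writing no new per-platform access rules, only pointing it at the control plane.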

Data and Information Security