Continuing with current cloud adoption plans is a risky strategy because the challenges of managing and securing sensitive data are growing. Businesses cannot afford to maintain this status quo amid rising sovereignty concerns.

Some 90% of organisations in Europe and 88% in the Middle East, Turkey, and Africa (META) now use cloud technology, a keystone of digital transformation, according to an IDC InfoBrief sponsored by VMware. As the cloud becomes a dominant IT operating model, critical data is finding its way into it: almost 50% of European companies are putting classified data in the public cloud.

While private on-prem cloud remains an organisation’s primary cloud environment for storing high-sensitivity data, 23% of those surveyed chose public cloud for this data class. Some 32% of companies use global public cloud providers to store confidential data.

Rising volumes of sensitive data in public cloud make sovereignty an imperative

Organisations are starting to value strategic autonomy to ensure resilience amid growing geopolitical and economic uncertainties. Digital sovereignty starts with data sovereignty, which forms the legal basis for organisations to ensure regulatory compliance. Data sovereignty means ensuring that data is subject to the laws and governance structures of the country it belongs to. With a large amount of sensitive data now hosted in the cloud, sovereignty should influence an organisation’s future cloud strategy, and it is becoming a priority as sensitive data volumes grow exponentially.

The importance of sovereignty for EMEA organisations

The only option for customers to get sovereign cloud security is to engage with cloud providers that are well positioned in local markets.

Drivers for considering sovereignty:

Relevance of data sovereignty cited as “very important” or “extremely important” by 88% of very large organisations (5,000 FTEs) and 63% of all EMEA organisations.

In Europe, organisations are driven by the need for continuous compliance, regulations, and legal obligations.

In META, organisations are driven by the introduction of internal/corporate policies.

Business drivers for data sovereignty:

Customer expectations about privacy and confidentiality

Need to protect future investments in data

Continued macroeconomic volatility, ambiguity, and uncertainties are heightening interest in sovereign solutions

Protection against future EU rulings that could impact the business

How VMware can help

Sovereign Cloud is all about choice and control. VMware’s offering addresses the strategic imperatives of data sovereignty across data security, protection, residency, interoperability, and portability:

Leveraging the VMware Multicloud Foundation 

Innovating on sovereign capabilities (Tanzu, Aria, open ecosystem solutions) 

Leveraging a broad ecosystem of sovereign cloud providers 

VMware is well recognised for trust and for several capabilities that address data sovereignty needs: flexibility and choice, data security and privacy, control of data access, multicloud, and data residency. It is already deployed with more than 20 Sovereign Cloud Providers.

Laurent Allard, Head of Sovereign Cloud, VMware, says: “To ensure success in their sovereign journey, organisations must work with partners they trust and that are capable of hosting authentic and autonomous sovereign cloud platforms. VMware Cloud Providers recognised within the VMware Sovereign Cloud initiative commit to designing and operating cloud solutions based on modern, software defined architectures that embody key principles and best practices for data sovereignty. More than 36 global and 14 EU VMware Sovereign Cloud Partners can deliver to customers cloud services in alignment with security and local regulations, while enabling sovereign innovation.”

To read the full InfoBrief click here. Find out more about VMware’s Sovereign Cloud here.

Cloud Management, Cloud Security, Data Management, Data Privacy

Nearly all organizations are struggling with how to stay in control as their data migrates to the cloud and users connect from anywhere. The answer, they’ve been told, is zero trust. Zero trust starts from the premise that an organization is going to be breached, so the focus shifts to minimizing any potential harm. Although zero trust is well defined as an architecture and a philosophy, its principles are difficult to apply across real-world infrastructure. While you can’t buy a complete, packaged zero trust solution, you can build a solid defense strategy around zero trust concepts to secure sensitive data and enable the business and users to proceed in a safe manner.

Over the past few years, the environment to which zero trust is being applied has changed dramatically. Users are working outside the safety of traditional security perimeters, using devices and networks the organization doesn’t control. Cloud and remotely accessible infrastructure enables anyone to work and collaborate from anywhere on any device, but it is critical to ensure access is secure and centrally managed.

There are numerous elements to zero trust, notably user (identity), endpoint, data, and risk. Rather than a ready-made solution or platform, zero trust represents a mindset, a philosophy, and ultimately a cybersecurity architecture. Aaron Cockerill, Chief Strategy Officer with endpoint and cloud security solutions provider Lookout, urges organizations to focus on what matters most: sensitive data.

“What’s happened over the last couple of years with digital transformation is that users, apps and data have left the building and are no longer within that traditional security perimeter,” says Cockerill. “Rather than simply providing remote access via virtual private networks, take the services that you had in that perimeter and put them in the cloud so that you can work in this new hybrid environment where apps are in the cloud or cloud-accessible.”

That’s the aim of Security Service Edge (SSE) solutions that provide cloud-based security components, including Cloud Access Security Broker (CASB), Zero Trust Network Access (ZTNA), Secure Web Gateway (SWG) and Firewall as a Service (FWaaS). 

“When users were in the building, the internet was filtered for them by their company to make sure that they were not clicking on malicious links and performing other non-secure activities, so part of SSE moves those services into the cloud.” 

Most organizations have now adopted hundreds of SaaS apps, and each one handles authorization and access control differently. To avoid having IT become an expert in every SaaS app, centralized policy management across all cloud and SaaS apps through a CASB solution should also be top of the list for most organizations. “By centralizing data access policies, IT teams can minimize workloads, simplify administration, and avoid misconfiguration that can introduce vulnerabilities,” Cockerill adds. 

Finally, in the wrong hands VPNs expose large parts of your infrastructure to attack. “That’s basically a tunnel through the firewall into the soft, gooey center of any organization’s IT infrastructure, which is a nightmare from a security standpoint. Once someone connects via VPN they typically have unfettered access to adjacent apps and data, and this is where lateral movement comes into play. You need to segregate your infrastructure to prevent lateral movement.” Bad actors use lateral movement to search for systems and data that can be leveraged to extort their target. Zero trust proposes microsegmentation to address this, but ZTNA is a simpler and more modern approach. 
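The contrast with VPN-style access can be shown in a few lines. The Python sketch below is purely illustrative, using a hypothetical policy model rather than any vendor's API: where a VPN grants network-level reach, a ZTNA-style check authorizes each request against one named application, the user's entitlements, and device posture, so a stolen credential cannot roam laterally.

```python
# Illustrative ZTNA-style access decision (hypothetical policy model,
# not any vendor's actual API). A VPN grants network-level access;
# ZTNA authorizes each request to one named application.

from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    groups: set[str]          # from the identity provider
    device_compliant: bool    # endpoint posture check
    app: str                  # the one app being requested

# Per-app policy: which groups may reach which application.
APP_POLICIES = {
    "payroll": {"finance"},
    "crm":     {"sales", "support"},
}

def authorize(req: AccessRequest) -> bool:
    """Deny by default; allow only a compliant device whose user
    belongs to a group entitled to this specific application."""
    allowed_groups = APP_POLICIES.get(req.app, set())
    return req.device_compliant and bool(req.groups & allowed_groups)

# A stolen credential on a non-compliant device gets nothing, and even
# a valid finance user cannot "see" the CRM app, let alone move to it.
print(authorize(AccessRequest("ana", {"finance"}, True, "payroll")))  # True
print(authorize(AccessRequest("ana", {"finance"}, True, "crm")))      # False
```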

The noise level around zero trust can be confusing for organizations trying to chart the safest course. Cockerill warns against falling for misleading claims about so-called “zero trust solutions,” and instead recommends assessing your current and desired state against an established zero trust security model, such as one drafted by the Cybersecurity & Infrastructure Security Agency (CISA).

“Implementing zero trust is a never-ending journey and the best way of establishing the right elements of technology for you to embrace in that journey is comparing yourself back to those maturity models,” says Cockerill. “There’s no silver bullet so don’t be misled by vendors telling you there is because you can’t buy it off the shelf. You need to look for vendors that acknowledge that integration with your existing infrastructure is the right approach.”

The CISA model aligns the zero-trust security model to five pillars:

Identity
Device
Network/Environment
Application Workload
Data

According to CISA, each pillar can progress at its own pace and may be farther along than others, until cross-pillar coordination is required, allowing for a gradual evolution to zero trust.
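A simple way to operationalize that self-assessment is to score current and target maturity per pillar and let the gaps order the roadmap. The sketch below uses a simplified ordinal scale for illustration; it is not CISA's official assessment tooling, and the scores are invented.

```python
# Simplified zero trust gap assessment across the five CISA pillars.
# Maturity is scored on a plain ordinal scale here (0 = traditional
# ... 3 = optimal); an illustration only, not CISA's official tool.

PILLARS = ["Identity", "Device", "Network/Environment",
           "Application Workload", "Data"]

current = {"Identity": 2, "Device": 1, "Network/Environment": 1,
           "Application Workload": 0, "Data": 1}
target = {p: 3 for p in PILLARS}

# Largest gaps first: a crude but honest way to prioritize the journey.
for p in sorted(PILLARS, key=lambda p: target[p] - current[p], reverse=True):
    print(f"{p}: current={current[p]}, target={target[p]}, "
          f"gap={target[p] - current[p]}")
```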

There are endless ways to apply zero trust, so it’s important to start out with a well-thought-out plan. Cockerill recommends that organizations prioritize their implementation efforts according to their risk registers. “I would prioritize, maybe even over-correct towards the protection of data to stop your data from being stolen,” he adds. “Zero trust represents our best approach to the battle against cyber attackers, but it shouldn’t be considered a panacea. It’s virtually impossible to deploy controls across everything, so it’s critical to assess the risks involving the organization’s most sensitive data and start the zero-trust implementation there.”

For more information on the Lookout Cloud Security Platform, visit us here.

Zero Trust

Choice Hotels International’s early and big bet on the cloud has allowed it to glean the many benefits of its digital transformation and devote more energies to a key corporate value — sustainability, its CIO maintains.

That is largely due to the 80-year-old hotel chain’s tight partnership with Amazon Web Services, says Choice CIO Brian Kirkland, who claims his company is enjoying the cost benefits and energy efficiencies of the cloud while exploiting many of the 225 related services AWS offers, such as analytics and AI, to keep advancing its digital reservations and pricing platform.

Kirkland, a founding member of SustainabilityIT.org, an organization to drive global sustainability through technology leadership, says Choice was the first hospitality company to make a strategic commitment to developing a cloud-native and sustainable platform on AWS.

The cloud platform helped Choice pivot quickly during the pandemic, enabling it to scale on demand as hotel traffic went down. It also helped reduce energy consumption and costs. More importantly, perhaps, going all in on the cloud has freed up Choice to be not just a hospitality company and franchisor but, to Kirkland’s considerable pride, a technology company as well, he says.

“I’m not in the business of managing infrastructure,” says Kirkland, whose previous stints at GoDaddy and Intel helped build the technology acumen he parlays in a new type of industry he joined in 2015. “I am in the business of hospitality. Our goal is to deliver business value for our franchisees and our guests by leveraging AWS.” 

With Amazon taking care of infrastructure, patching, and security, Choice’s 650-member Scottsdale, Ariz.-based IT team can focus on building business value using a plethora of AWS services, including Amazon Aurora, Amazon SageMaker, and Amazon Elastic Kubernetes Service, as well as other SaaS tools such as Automation Anywhere and IDeaS for the cloud-based revenue management system Choice built, called Choice Max, also on AWS.

This cloud-native strategy is essential to building unique value for Choice’s core customers, owners of franchises ranging from Comfort Inn to EconoLodge, Quality Inn, and the upscale Cambria Inn. With its recent acquisition of the Radisson chain, Choice now operates 22 brands and more than 650,000 hotel rooms in 46 nations, including 1 in every 10 hotel rooms in the US.

And his team can do this more sustainably, another “benefit that’s important to us,” Kirkland says. “If you look at Amazon’s journey, and the way they run their data centers, they claim to be five times more energy efficient than an average data center.” 

Choice’s all-in cloud journey

Choice got its digital start building an iPhone app for its franchise customers about a decade ago. By 2015, it was redesigning its property management system, Choice Advantage, for the cloud; the system is licensed to third-party companies, including rivals.

In 2017, Choice, one of the largest franchisors globally, representing 7,500 franchisees that collectively generate roughly $10 billion in annual revenues, debuted the crown jewel of its digital transformation: a cloud-native central reservation system called ChoiceEdge. The system was built from the ground up using a variety of microservices on AWS to ensure hotel rooms booked by travel agents, SaaS partners such as Travelocity, and wholesalers are queued and synchronized properly — no easy task given that, because of the finite number of hotel rooms available, reservations have to be handled perfectly, Kirkland says.

“It is the brains of everything — our core system. It is a very complex system, and we were the first company to completely rewrite our reservation system for the cloud from scratch,” Kirkland says. “We are using Java applications to build our logic but also AWS middleware technology in many, many ways across our ecosystem to build Choice Edge. All the logic is still in Java hosted on Amazon’s infrastructure.”

Aside from the core cloud services, Choice also uses Amazon Redshift as a front end to its cloud data warehouse, Amazon SageMaker to build machine learning models, and Amazon Kinesis to collect, process, and analyze real-time data.

To ensure more sustainable operations, the company’s tech staff also relies on Amazon Lambda’s serverless, event-driven compute services to run code without provisioning servers. It is a significant energy saver that enables Choice to pay for only what it uses.
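To make the serverless model concrete, here is a minimal AWS Lambda handler in Python. The event fields below are hypothetical, and this is not Choice's code; the point is simply that the function runs only when an event arrives and is billed per invocation, with no server provisioned or idling in between.

```python
# Minimal event-driven AWS Lambda handler (illustrative only; the
# event fields below are hypothetical, not Choice's actual schema).
# AWS invokes lambda_handler once per event; nothing runs, and
# nothing is billed, between invocations.

import json

def lambda_handler(event, context):
    # e.g. an event emitted when a reservation record changes
    booking = event.get("booking_id", "unknown")
    status = event.get("status", "unknown")
    print(f"Processing booking {booking}: {status}")
    return {"statusCode": 200,
            "body": json.dumps({"processed": booking})}
```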

Choice Edge has been operational for more than five years and the technology team is able to make changes and slipstream in new AWS services with no interruption, a requirement for an always-on reservation system, Kirkland says. 

“It’s been out there for a long time on the cloud and a big part of our migration journey to get everything out of the data center and into the cloud,” he says, adding that he plans to migrate the Radisson system to Edge this year as well.

Choice closed one data center last year and plans to close its second data center in 2023. The company is on schedule to be 100% in the cloud by year’s end, the CIO says.

Making sustainability a key priority

At Amazon’s recent re:Invent conference,  Steven M. Elinson, managing director of travel and hospitality at AWS, highlighted not only Choice’s ability to deliver to its franchisees the cost and scalability benefits of the cloud but also the company’s leadership in making sustainability a unique value for the hotel chain, applauding Kirkland and Choice Distinguished Cloud Platform Architect Shawn Seaton for “working to build a more sustainable future for generations that are yet to come.”

Kirkland, whose work with SustainabilityIT.org underscores his passion about driving more sustainable IT operations across the industry, believes IT is entering a critical moment in time when advances in the cloud and services such as Lambda can help make a difference. But he still knows his core job as an IT leader is delivering business value and insists that CIOs and technologists must try to make use of these services — if not for the environment’s survival, then for their own company’s survival.

“The world has never experienced as much data that is available and readily accessible as it is right now with the cloud and with the technology that exists in today’s world. If you are sitting there doing nothing with it, you’re leaving an untapped opportunity on the table,” Kirkland says.

“I don’t have hundreds or thousands of employees out there building the technology. Amazon does,” he says. “I can use all that data and the cloud technology from Amazon to do something that drives business value where in the past you couldn’t. To me, that’s the untapped opportunity for every business and every company in the world. That really is going to be a differentiator for the ones that do it well versus the ones that don’t.”

Cloud Computing, Green IT, Travel and Hospitality Industry

Journey Beyond, a part of Hornblower Group, is Australia’s leading experiential tourism group. Headquartered in Adelaide, it operates 13 brands and experiences spanning the country. The company’s overall strategy is to “have a customer experience that’s second-to-none — from the moment they first engage with the company to plan their experience, to when they return home at the end of their travels — regardless of what Journey Beyond adventure you are booking.”

However, the company’s disparate technology systems were proving to be a hindrance in its commitment to consistently deliver unmatched services and experiences to customers. As its business diversified, and with its acquisition by Hornblower Group in early 2022, Journey Beyond inherited a range of disparate technology systems, including six different phone systems and an outdated contact center that was only servicing Journey Beyond’s rail journeys. The remaining brands in the company’s portfolio were using basic phone functionality for customer enquiries and reservations.


“The different communication solutions were unable to provide an integrated 360-degree customer view, which made it difficult to ensure a consistent, unrivalled customer experience across all 13 tourism ventures, and any other brands Journey Beyond may add to its portfolio in the future. The absence of advanced contact center features and analytics further prevented us from driving exceptional customer experience. Besides, we couldn’t enable work-from-anywhere, on any device capability, for employees,” says Madhumita Mazumdar, GM of information and communications technology at Journey Beyond.  

These challenges forced the company to transition to a modern cloud-based communication platform.

Multiple communication solutions cause multiple challenges

Because Journey Beyond operates in the experiential tourism market, providing a personalized, seamless customer experience is essential, something its previous communications systems lacked, Mazumdar says.

“For instance, our train journeys get sold out a year prior to their launch. Therefore, when we launch a new season, there is a huge volume of calls from our customers and agents. The existing system lacked a callback mechanism, leading to callers waiting in queue for as long as 40 minutes, which adversely impacted their experience,” she says, adding that there was also no way to prioritize certain calls over others.

The existing system also lacked the analytical capability to provide customer insights, and it wasn’t integrated with Journey Beyond’s CRM. As a result, representatives interacting with a customer didn’t know whether the customer had traveled with the company before. “The communication between us and the customer was transactional instead of being personalized,” Mazumdar says.

Since the existing systems were very old, they couldn’t be managed remotely. In case of an outage, the company had to send a local person to rectify the on-site phone system, which could take a couple of hours. During this time, customers were unable to call Journey Beyond.

“The IVR was also not standardized across the company. As the IVRs were recorded in voices of employees from different business units, a caller had no idea they were part of the same business,” says Mazumdar.

Incoming calls to Journey Beyond’s toll-free numbers were also adding to the operational cost. “We paid per minute on the calls received to our toll-free numbers. The high call volumes meant huge costs for us. Even if the call was hanging in the queue, it was costing us every minute,” she says.

Implementing a consolidated communications platform

To overcome the bottlenecks and drive customer engagement to the next level, Journey Beyond launched a contact center transformation, the first step of which was to establish a common unified communications (UC) platform across the business and integrate it with a new contact center (CC) solution. After evaluating several UC and CC solutions, Journey Beyond chose RingCentral’s integrated UCaaS and CCaaS platforms — RingCentral MVP and Contact Center.

“We started evaluating multiple vendors in the first quarter of 2021. The software evaluation process took three to five months after which the implementation started in August 2021. We went live in October 2021,” Mazumdar says. The entire SaaS solution was hosted on AWS.

The company took this opportunity to shift to softphones and headsets by getting rid of all physical phones. “We purchased good-quality noise-cancelling headsets, which was the only hardware we invested in significantly,” says Mazumdar. “Although we had premium support from RingCentral, we decided to learn everything about the solution and take full control over it. So, while the integration and prebuild were completely done by RingCentral, over time we trained multiple people in the team on the solution. In hindsight, this was the best thing we did,” adds Mazumdar, who brought in two dedicated IT resources with phone system backgrounds for the new solution.

“Different business units within the company work differently. For instance, the peak hours for one business could be different from those of another business, which impacts how you set up the call flows. It’s not one basic standard rule that could be set up for all businesses across the company. With in-house understanding of the solution, we had full control over the solution and were able to make changes, refinements, and complex prioritization rules to it ourselves without depending on the solution provider,” she says.
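A rough sketch of what such per-business-unit call-flow rules could look like in code. This is a generic illustration with invented fields and values, not RingCentral's actual configuration model; the point is that each brand keeps its own hours, priorities, and overflow targets inside one common structure.

```python
# Illustrative per-business-unit call-flow configuration (invented
# schema; not RingCentral's actual configuration model).

CALL_FLOWS = {
    "rail": {
        "peak_hours": (8, 18),          # local opening hours
        "offer_callback_after_s": 120,  # queue time before callback offer
        "priority_callers": {"travel_agent", "repeat_guest"},
        "overflow_to": "rottnest_express",  # team with secondary expertise
    },
    "rottnest_express": {
        "peak_hours": (7, 16),
        "offer_callback_after_s": 90,
        "priority_callers": {"repeat_guest"},
        "overflow_to": "rail",
    },
}

def route(brand: str, caller_type: str, queue_time_s: int,
          agents_free: int) -> str:
    """Per-brand routing: priority first, then cross-brand overflow,
    then a callback offer once the queue gets long."""
    flow = CALL_FLOWS[brand]
    if caller_type in flow["priority_callers"]:
        return "front_of_queue"
    if agents_free == 0:
        return f"overflow:{flow['overflow_to']}"
    if queue_time_s >= flow["offer_callback_after_s"]:
        return "offer_callback"
    return "hold"

print(route("rail", "travel_agent", 0, 5))   # front_of_queue
print(route("rail", "guest", 150, 2))        # offer_callback
```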

Cloud-based solution delivers customer visibility and value

Connecting multiple businesses with a common communications platform to deliver consistent customer service across the group has yielded compelling business benefits to Journey Beyond.

A key advantage of the tight integration between UC and CC is the customer service operation’s accessibility for the entire Journey Beyond team.

“At a national integrated level, we now have subject matter experts in each of our experiences available to deliver unrivalled customer experience, with economies of scale. So, if one team is under duress in terms of call volumes, the call can be overflowed and picked up quickly by a consultant with secondary expertise in that brand,” says Mazumdar.

Journey Beyond is supporting its customer experience drive by integrating the CC solution with its CRM to develop omni-channel CX capabilities and build towards a 360-degree view of the customer.

“We are building up our ‘Know Your Customer’ strategy, which starts with our customer service agents knowing who you are when you call any of our Journey Beyond brands,” says Mazumdar. “Callers who have travelled with us before have their phone number in our CRM. When they call, their records pop up. The executive can look at the customer’s history with the company and the communication between them becomes a lot more personalized. The integrated view of the customer also helps to cross-sell. For instance, if a person is booking a train journey from Adelaide but our executive knows that he is coming from Sydney, he can sell him another trip in Sydney.”

The other major advantage is the scalability and remote capability of the cloud-based platform. The solution allows Journey Beyond to run operations 24×7 with centralized administration and distributed users, working from anywhere, on any device. This has also given Journey Beyond the opportunity to recruit talent in locations beyond the market around its Adelaide office.

Journey Beyond has also rolled out the solution’s workforce management functionality to better align agent availability with customer demand. The advanced feedback capabilities allow Journey Beyond to measure customer net promoter scores (NPS) right down to the consultant level. That NPS functionality will then be integrated into Salesforce, enhancing the 360-degree view of the customer experience.

The solution’s quality management functionality is providing Journey Beyond with a level of automation to ensure the contact basics are being completed, allowing leaders to focus on scoring the more complex or intangible components of customer engagements — delivering a recording of both the call and what is happening on screen at the same time. “Quality analytics completes the picture in terms of everything we need to see from a skills gap perspective,” says Mazumdar. Journey Beyond has deployed the UC solution to all businesses nationally. The CC solution has been rolled out at the company’s rail division and Rottnest Express, while onboarding for the other businesses is in progress.

Unified Communications

In a 2021 survey, 95% of respondents agreed that a hybrid cloud is critical for success, and 86% plan to invest more in hybrid multicloud.

Hybrid multicloud has emerged as the new design center for organizations of all sizes. Rather than purchasing costly infrastructure upfront to accommodate future growth, the hybrid multicloud helps you scale up and down as needed and right-size your environment. Deploying data and workloads in this model offers the potential for incredible value, including improved agility, functionality, cost savings, performance, cloud security, compliance, sustainability, disaster recovery—the list goes on. 

However, enjoying the benefits of hybrid multicloud requires organizations to first overcome a variety of challenges. I’ll share some of these challenges as well as practices and recommendations to help your organization realize the full value of your investment.

Challenge 1: Mindset

The cloud isn’t as much a place to go as it is a way of operating. When organizations move from on-premises to hybrid multicloud, it requires a shift in mindset and protocols—an important concept for organizations to embrace. Many of the tools, skillsets, and processes used on-premises must evolve to those used in the cloud. Your applications may need to be refactored. In a word, your organization must adapt its way of operating to maximize the value of hybrid multicloud.

Challenge 2: Compliance

Compliance poses another challenge. Wherever your organization puts data, it must comply with industry regulations, and moving data later can rack up expensive egress charges. Your organization must therefore decide in advance where data needs to reside physically, and how it will ensure compliance, maintain visibility, and report on its compliance posture.
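As a trivial illustration of deciding up front, data classes can be tagged with allowed regions and checked before deployment, when a violation costs nothing, rather than after data has landed in the wrong place. The tags and regions below are hypothetical.

```python
# Toy pre-deployment residency check (hypothetical tags and regions).
# Catching a violation here is free; moving data later incurs egress
# charges on top of the compliance exposure.

RESIDENCY_RULES = {
    "pii_eu": {"eu-west-1", "eu-central-1"},
    "unrestricted": None,   # None = any region permitted
}

def placement_ok(data_class: str, region: str) -> bool:
    allowed = RESIDENCY_RULES[data_class]
    return allowed is None or region in allowed

assert placement_ok("pii_eu", "eu-west-1")
assert not placement_ok("pii_eu", "us-east-1")
```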

Challenge 3: Security

The same is true for cloud security, which is always top of mind for organizations. Your organization must make security as robust as possible across storage, network, compute, and people—essentially every layer. This means that if you’re operating under zero-trust policies, you need to understand how that impacts your hybrid multicloud model.

Challenge 4: Cost optimization

While hybrid multicloud can be incredibly cost-effective, understanding and managing costs across providers and usage can prove incredibly complex. Make sure that, by design, you’re addressing cloud cost optimization challenges upfront, narrowing the focus to minimize complexity while ensuring interoperability. Implement cloud FinOps tools and processes to maximize your investments by enabling broad visibility and cost control across the hybrid multicloud. And when evaluating cloud provider lock-in, tread carefully to make sure any commitment supports your business strategy.
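Broad visibility often starts with something unglamorous: normalizing every provider's billing export into one schema so spend can be compared at all. A minimal sketch, with invented record fields:

```python
# Minimal FinOps-style normalization of per-provider billing records
# into one schema (the record fields are invented for illustration).

from collections import defaultdict

records = [
    {"provider": "aws",    "service": "ec2",     "usd": 1240.50},
    {"provider": "azure",  "service": "vm",      "usd": 980.00},
    {"provider": "onprem", "service": "storage", "usd": 410.25},
]

spend = defaultdict(float)
for r in records:
    spend[r["provider"]] += r["usd"]   # roll up by provider

for provider, total in sorted(spend.items(), key=lambda kv: -kv[1]):
    print(f"{provider}: ${total:,.2f}")
```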

Challenge 5: Disaster recovery

Organizations often see disaster recovery as the low-hanging fruit of the hybrid multicloud journey because it eliminates a second data center full of depreciating and idle equipment. Because the way your organization handles disaster recovery will change, you may choose to extend the products you already have, or you might add new approaches and tooling. Regardless, you need a plan in place before you make this transition.

Challenge 6: Dependencies

Understanding and addressing workloads and dependencies across your infrastructure is fundamental to minimizing the risk of issues and outages. Previous methodologies may not apply in hybrid multicloud, especially when it comes to common cloud attributes such as services and self-service automation. That means you must complete application services dependency mapping as part of assessment and planning activities. This work includes determining which applications need to be refactored or modernized to achieve performance objectives and operate efficiently.
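Dependency mapping lends itself naturally to a directed graph. The sketch below, with hypothetical service names, answers the planning question directly: if this component fails or moves, what else is affected?

```python
# Application dependency mapping as a directed graph (hypothetical
# services). blast_radius() answers: if this component fails or
# moves, what else is transitively affected?

DEPENDS_ON = {
    "web-frontend": ["orders-api", "auth"],
    "orders-api":   ["postgres", "auth"],
    "reports":      ["postgres"],
}

def blast_radius(component: str) -> set[str]:
    """Everything that (transitively) depends on `component`."""
    affected, frontier = set(), {component}
    while frontier:
        nxt = {app for app, deps in DEPENDS_ON.items()
               if frontier & set(deps)} - affected
        affected |= nxt
        frontier = nxt
    return affected

print(sorted(blast_radius("postgres")))
# ['orders-api', 'reports', 'web-frontend']
```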

Challenge 7: Skillsets

Not surprisingly, the skillsets required to support hybrid multicloud differ from those needed to support a traditional on-premises environment. Ensuring your organization has the right skillsets to support this work can be challenging. Therefore, it’s essential to understand the necessary toolsets and skills so you can put a plan in place for addressing training gaps and potentially supplementing staff.

Accelerate your hybrid multicloud journey

Moving to hybrid multicloud is a highly complex endeavor that, when done well, can pay off in spades for your organization. A successful journey requires careful, detailed planning that takes these and other challenges into account. The more challenges you solve on the front end, the faster and more effective your transition will be on the back end.

GDT has been accelerating customer success for more than 26 years, helping countless customers streamline their hybrid cloud journeys. Our experts provide architecture, advisory, design, deployment, and management services, all customized to your specific needs, providing you a secure and cost-effective infrastructure that can flex and scale as business requirements change.

Contact the experts at GDT to see how we can help your business streamline your hybrid multicloud journey.

Multi Cloud

By: Nav Chander, Head of Service Provider SD-WAN/SASE Product Marketing at Aruba, a Hewlett Packard Enterprise company.

Today, enterprise IT leaders are facing the reality that a hybrid work environment is the new normal as we transition to a post-pandemic world. This has meant updating cloud, networking, and security infrastructure to adapt to the new realities of hybrid work, where employees need to connect to and access business applications from anywhere, on any device, in a secure manner. In fact, most applications are now cloud-hosted, presenting additional IT challenges in ensuring a high-quality end-user experience for the remote worker, home office worker, or branch office.

Network security policies based on the legacy data-center environment, where applications are backhauled to the data center, negatively affect application performance and user experience in a cloud-first environment. These policies also don’t function end-to-end in an environment where there are BYOD or IoT devices. And when networking and network security requirements are managed independently and in parallel by separate IT teams, do you really achieve the best architecture for digital transformation?

So, does implementing a SASE architecture based on a single vendor solve all of these challenges?

SASE, in itself, is not its own technology or service: the term describes a suite of services that combine advanced SD-WAN with Security Service Edge (SSE) to connect and protect the company from web-based attacks and unauthorized access to the network and applications. By integrating SD-WAN and cloud security into a common framework, SASE implementations can both improve network performance and reduce security risks. But, because SASE is a collection of capabilities, organizations need to have a good understanding of which components they require to best fit their needs.

A key component of a SASE framework is SD-WAN. Because of SD-WAN’s rapid adoption to support direct internet access, organizations can leverage existing products to serve as a foundation for their SASE implementations. This is true for both do-it-yourself and managed services implementations. Enterprises today operate a hybrid access networking environment of legacy MPLS, business and broadband internet, 4G/5G, and even satellite.

Today, enterprises can start their SASE implementation by adopting a secure SD-WAN solution with integrated software security functions such as NGFW, IDS/IPS, DDoS detection, and protection. Organizations can retire branch firewalls to simplify WAN architecture and eliminate the cost and complexity associated with the ongoing management of dedicated branch firewalls. The Aruba EdgeConnect SD-WAN platform provides comprehensive edge-to-cloud security by integrating with leading cloud-delivered security providers to enable a best-of-breed SASE architecture. Moreover, the Aruba EdgeConnect SD-WAN platform was recently awarded an industry-first Secure SD-WAN certification from ICSA Labs.

When it comes to SASE and SD-WAN transformations, enterprises may have different requirements. Some enterprises, particularly retail, retail banking, and distributed sales offices, require essential SD-WAN capabilities plus Aruba’s EdgeConnect advanced application performance features; these are covered by a single Foundation software license that includes a full advanced NGFW with fine-grained segmentation, a Layer 7 firewall, DDoS protection, and anti-spoofing. EdgeConnect SD-WAN is an all-in-one WAN edge branch platform that is simpler to deploy and support for enterprises with lean IT teams, and it can replace existing branch routers and firewalls with a combination of SD-WAN, routing, multi-cloud on-ramps, and advanced security. Optional software licenses add Boost WAN Optimization, IDS/IPS with the Dynamic Threat Defense license, and automated SASE integration with leading cloud security providers, enabling a flexible SD-WAN and integrated SASE journey.

Then there are other enterprises that require more advanced SD-WAN features to address complex WAN topologies and use cases. An Advanced EdgeConnect SD-WAN software license includes the flexibility to support any WAN topology, including full mesh and network segments/VRFs to account for merger and acquisition scenarios that require multi-VRF/overlapping IP address capability. The Advanced license supports seven business-intent overlays that allow enterprises to apply comprehensive application prioritization and granular security policies to a wide range of traffic types. Like the Foundation license, the Advanced license also supports the same optional software licenses: Boost WAN Optimization, IDS/IPS with the Dynamic Threat Defense license, and automated SASE integration with leading cloud security providers.
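Conceptually, a business-intent overlay maps each class of traffic to its own bundle of transport, QoS, and security handling. The sketch below illustrates the idea only; the schema, class names, and values are invented, not Aruba's actual overlay configuration.

```python
# Generic illustration of business-intent overlays: each traffic
# class carries its own transport, QoS marking, and security handling
# (invented schema; not Aruba's configuration model).

OVERLAYS = {
    "voice":        {"paths": ["mpls", "internet"], "qos": "EF",
                     "security": "direct"},
    "trusted-saas": {"paths": ["internet"],         "qos": "AF31",
                     "security": "local-breakout"},   # straight to SaaS
    "default":      {"paths": ["internet"],         "qos": "BE",
                     "security": "sse-inspection"},   # via cloud SSE
}

def classify(app: str) -> str:
    """Map an application to a business-intent traffic class."""
    if app in ("sip", "teams-voice"):
        return "voice"
    if app in ("salesforce", "o365"):
        return "trusted-saas"
    return "default"

print(OVERLAYS[classify("salesforce")])
```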

Many enterprises will benefit from a secure SD-WAN solution that can retire branch firewalls, simplify WAN architecture, and deliver the freedom and flexibility of an integrated best-of-breed SASE architecture. Aruba’s new Foundation and Advanced licenses for Aruba EdgeConnect SD-WAN enable customers to transform both their WAN and security architectures with a secure SD-WAN solution that offers advanced NGFW capabilities and seamless integration with public cloud providers (AWS, Azure, GCP) and industry-leading SSE providers. This robust, multi-vendor, best-of-breed approach to SASE adoption mitigates the risk of relying on a single technology vendor for all the necessary components, while enabling a secure, cloud-first digital transformation and letting enterprises embark on their own SASE journey.

SASE

In spite of long-term investments in such disciplines as agile, lean, and DevOps, many teams still encounter significant product challenges. In fact, one survey found teams in 92% of organizations are struggling with delivery inefficiency and a lack of visibility into the product lifecycle.[1] To take the next step in their evolutions, many teams are pursuing Value Stream Management (VSM). Through VSM, teams can establish the capabilities needed to better focus on customer value and optimize their ability to deliver that value.

While the benefits can be significant, there are a number of pitfalls that teams can encounter in their move to harness VSM. These obstacles can stymie progress and erode the potential rewards of a VSM initiative. In this post, I’ll take a look at four common pitfalls we see teams encounter and provide some insights for avoiding them.

Pitfall #1: Missing the value

Very often, we see teams establish value streams that are doomed from inception. Why? Because they’re not centered on the right definition of value.

Too often, teams start with an incomplete or erroneous definition of value. For example, it is common to confuse new application capabilities with value. However, it may be that the features identified aren’t really wanted by customers. They may prefer fewer features, or even an experience in which their needs are addressed seamlessly, so they don’t even have to use the app. The key is to ensure you understand who the customer is and how they define value.

In defining value, teams need to identify the tangible, concrete outcomes that customers can realize. (It is important to note in this context, customers can be employees within the enterprise, as well as external audiences, such as customers and partners.) Benefits can include financial gains, such as improved sales or heightened profitability; enhanced or streamlined capabilities for meeting compliance and regulatory mandates; and improved competitive differentiation. When it comes to crystalizing and pursuing value, objectives and key results (OKRs) can be indispensable. OKRs can help teams gain improved visibility and alignment around value and the outcomes that need to be achieved.
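One lightweight way to keep OKRs honest is to store each key result with its metric, baseline, and target so progress is computed rather than asserted. A minimal sketch, with invented figures:

```python
# Minimal OKR structure: each key result carries a metric, baseline,
# and target so progress is computed, not asserted (figures invented).

objective = "Improve customer onboarding experience"
key_results = [
    {"metric": "time_to_first_value_days", "baseline": 14, "target": 3,  "current": 7},
    {"metric": "onboarding_nps",           "baseline": 20, "target": 50, "current": 35},
]

for kr in key_results:
    # Fraction of the distance covered from baseline to target.
    progress = (kr["current"] - kr["baseline"]) / (kr["target"] - kr["baseline"])
    print(f"{kr['metric']}: {progress:.0%} of the way to target")
```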

Pitfall #2: Misidentifying value streams

Once teams have established a solid definition of value, it’s critical to gain a holistic perspective on all the people and teams that are needed to deliver that value. Too often, teams are too narrow in their value stream definitions.

Generally, value streams must include teams upstream from product and development, such as marketing and sales, as well as downstream, including support and professional services. The key here is that all value streams are built with customers at the center.


Pitfall #3: Focusing on the wrong metrics

While it’s a saying you hear a lot, it is absolutely true: what gets measured gets managed. That’s why it’s so critical to establish effective measurements. In order to do so, focus on these principles:

Prioritize customer value to ensure you’re investing in the right activities.
Connect value to execution to ensure you’re building the right things.
Align the execution of teams in order to ensure things are built right.

It is important to recognize that data is a foundational element to getting all these efforts right.

It is vital that this data is a natural outcome of value streams — not a separate initiative. Too often, teams spend massive amounts of money and time in aggregating data from disparate resources, and manually cobbling together data in spreadsheets and slides. Further, these manual efforts mean different teams end up looking at different data and findings are out of date. By contrast, when data is generated as a natural output of ongoing work, everyone can be working from current data, and even more importantly, everyone will be working from the same data. This is essential in getting all VSM participants and stakeholders on the same page.
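In practice, "data as a natural outcome" can be as simple as computing flow metrics directly from the timestamps a work tracker already records, instead of re-keying numbers into spreadsheets. A sketch with invented work items:

```python
# Flow metrics computed directly from work-item timestamps (invented
# data), so every team reads the same, current numbers instead of
# hand-built spreadsheets and slides.

from datetime import date
from statistics import mean

items = [
    {"id": "VSM-1", "started": date(2022, 3, 1), "done": date(2022, 3, 9)},
    {"id": "VSM-2", "started": date(2022, 3, 4), "done": date(2022, 3, 6)},
    {"id": "VSM-3", "started": date(2022, 3, 7), "done": date(2022, 3, 18)},
]

cycle_times = [(i["done"] - i["started"]).days for i in items]
print(f"mean cycle time: {mean(cycle_times):.1f} days")   # 7.0 days
```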

Pitfall #4: Missing the big picture

Often, teams start with too narrow a scope for their value streams. In reality, these narrow efforts are typically single-process business process management (BPM) endeavors. By contrast, value streams represent an end-to-end system for the flow of value, from initial concepts through to the customer’s realization of value. While BPM can be considered a tactical improvement plan, VSM is a strategic improvement plan. Value streams need to be high-level, but defined in such a way that metrics can be associated with them so progress can be objectively monitored.

Tips for navigating the four pitfalls

 Put your clients at the heart of your value streams and strategize around demonstrable and measurable business outcomes.

Value streams are often larger than we think. Have you remembered to include sales, HR, marketing, legal, customer service and professional services in your value stream?

Measure what matters and forget about the rest. We could spend our days elbow deep in measuring the stuff that just doesn’t help move the needle.

 Learn More

To learn more about these pitfalls, and get in-depth insights for architecting an optimized VSM approach in your organization, be sure to check out our webinar, “Four Pitfalls of Value Stream Management and How to Avoid Them.”

[1] Dimensional Research, sponsored by Broadcom, “Value Streams are Accelerating Digital Transformation: A Global Survey of Executives and IT Leaders,” October 2021

Devops, Software Development

It was only a few years ago when ‘digital transformation’ was on every CIO’s agenda, and businesses started to understand how cloud could deliver real value. They stopped asking whether they should move to the cloud and started asking what they needed to do to get there.

This was the case for Murray & Roberts’ CIO Hilton Currie in 2016, when the cloud services market in South Africa, already worth $140 million and growing rapidly, was booming. It was around this time that the engineering and mining contractor undertook its migration to the cloud. But things didn’t go according to plan, a story not unique to South Africa or this industry. So Currie made the difficult decision to repatriate Murray & Roberts’ IT stack, one that required selling the business on reversing an IT strategy he had previously sold as M&R’s future.

Currie recently spoke to CIO.com about the mining company’s bumpy cloud journey, what motivated them to move and, ultimately, what drove them to change back. 

CIO.com: What motivated Murray & Roberts to embark on its original cloud journey?

Hilton Currie: Up until 2015, we were 100% on-premises and we had no major issues. I think the first red flag was the age of the equipment in our primary and secondary data centers. We pretty much ran on a major production environment, and then we had a hot standby disaster recovery environment, which was far out of support and had come to end of life. This brings its own risks to the table. Our production environment still had a bit of life in it, but not much and decisions had to be made.

Around this time, we were in discussions with our outsource partner to outsource our IT support and they proposed that they could make cloud affordable for us. Honestly, in 2016, cloud wasn’t really affordable. If we were willing to look at outsourcing the whole lot to them — all the way from server application right down to the technician support level — and adopt their managed public cloud, they were confident they could make it work for us. If we looked at the cost of new infrastructure we needed to renew, it actually looked very attractive financially. So we opted to go ahead.

Once you signed on, how then did the migration progress?

We started with a serious migration from the first quarter of 2017 because a lot of our equipment was end of life, so it was an all-or-nothing approach. We used a locally hosted cloud vendor that was connected to our outsource partner, but also with affiliation to a big global company. It took a bit longer than anticipated but by November 2017, we were almost fully across and functioning. We had everything from our big ERP systems to smaller, bespoke systems running in the cloud.

We had third-party independent consultants come in to analyze certain systems and licensing. But in early 2018, the first major hiccup hit us as the independent party who did the audit missed the terms of use on some of our licensing. For example, Microsoft has quite heavy restrictions on using perpetual license software in the cloud, specifically SQL Server. Some licenses aren’t valid if you don’t own the rights to the lower-level equipment. Unfortunately, our licensing expert missed that. To adjust this, we’d have to move from perpetual licenses to subscription licenses, which would have been a grudge purchase because some systems lag in terms of the versions they’re certified to run on. We would have had to purchase a current version of SQL Server and then downgrade it like four or five versions because that’s the version our ERP and other systems run on. This would have been hellishly expensive, so we brought all services that were impacted by licensing restrictions back on-prem and pulled all our SQL servers back, which was a costly exercise because we had to purchase new equipment, and the rest of the applications ran in the cloud. We ended up with a split or hybrid setup, which brought some challenges and became quite a nightmare to manage. We did eventually get it working and ran this way for about six to eight months. During that time, there was a buzz around cloud and I think the outsourced vendor was getting a few new clients on their managed cloud platform, which started taking a toll on us because it wasn’t long after that they started imposing rate limitations.


What prompted you to realize the shift to the cloud wasn’t working, and how hard was it to decide to move back?

Historically, we could get full performance out of the cloud platform. There were no restrictions, and then suddenly they started imposing limitations, with an additional charge if we wanted more. Suddenly, the commercial model fell to pieces. We tried to make do with the limitations but it got to a point where it just wasn’t working. We were only fully cloud for about a year and a half, and about a year in, the business was brought to its knees. Applications started failing, email and phones didn’t work, and our ERP became unusable. In some of the worst-case scenarios, it took our finance teams up to 15 minutes to open Excel files. We complained and they lifted some of the rate limitations while we made alternative arrangements.

Around March 2019, we decided to move back on-premises and by mid-2020 we were fully on-premises again. For me, the decision was crystal clear. At a point, it felt like I was walking around the building with a target on my back because this affected everyone and there was an air that our IT was falling to pieces.

In Murray & Roberts, IT reports to the financial director of the group, so I sat down with him to explain that it’s not all bad. I built a detailed roadmap highlighting that we were in a bad space, but outlined that getting out of this situation was possible. It came down to the numbers in terms of spend on new kit. I showed that it would cost less over three years to move to new kit than it would to stay where we were. It was a no-brainer for him to accept, but it was a difficult conversation. We sold them the cloud journey back in 2016 and they backed us and jumped on board. Then, a year and a half later, we wanted to jump ship. But I think the results of moving back to a private cloud spoke for themselves.

So what systems are in place now in M&R’s on-premises data center? Have the issues you identified been resolved?

Instead of keeping things as is when we moved back on-prem, we did a refresh by making a list of all our systems to get a better view on how important they were to the business and where they sat. As part of this rationalization process, a couple of the systems were upgraded since we had a chance to rebuild from scratch, so we took advantage of that and got things running the way we wanted. We did a lot of consolidation as well. When we went to cloud, at one point we had over 300 servers and the end goal when we moved back on-prem was to get this down to about 180.

Given how the cloud market has changed, would you consider another cloud migration?

We’re not against cloud. We understand it can add value and it has a place. In fact, we’re starting a complete Office 365 migration. But we’ll only lift certain systems and we’re taking a more selective approach. Cloud is a big buzzword, but you need to ask what it promises and what it’s going to give your business in terms of value. If you’re going to cloud for commercial reasons, it’s a big mistake because it’s not cheaper. And if you’re going for performance reasons, it’s an even bigger mistake, and there are many reasons for this.

In South Africa specifically, there are a lot of issues of bandwidth and throughput to international vendors, because stuff still sits in Europe or in the Americas. With the kind of flexibility that virtual environments can offer on a private cloud, do you really need a public cloud if you don’t need to be agile or scale drastically? We found that a well-managed, fully redundant virtual environment, hosted in the private cloud on our own kit in a tier-four data center, was the ideal scenario for us. We’ve been running this way since mid-2020 and have not looked back.

Any learnings from this experience that you’d like to share with other CIOs?

Looking back, I don’t think we made a bad decision. The biggest learning is about focusing on the big picture. Make sure you understand your long-term roadmap very clearly so there are no surprises. Often people will be blinded by the commercials, but be very careful about licensing and terms of use because many vendors have restrictions in place. Do your due diligence. All vendors write clauses into their contracts that it’s subject to change over time. But make sure you’ve got a backup or a rollback plan because it’s very difficult to bridge the gap when the company is on its knees and you need to buy equipment and do a full migration. I would also never recommend any full lift-and-shift approach. There are just too many variables.

What advice would you give aspirant CIOs, having gone through this?

One of the aspects that’s lacking in many CIOs is interaction with the business. The CIO role is not just an IT role. Gaining an understanding of your business and building relationships with important stakeholders is key to a successful CIO career. You’re looking at governance, compliance, processes, and things like that, but if you’re not aligned with what the business requires, you’re sitting in no man’s land, because there’s a mismatch between what IT offers and what the business needs. When you’re looking at technology for all the bells and whistles, you’re missing the point. It should be about adopting the right technology to boost productivity and to facilitate how the business operates.

CIO

Heading down the path of systems thinking for the hybrid cloud is the equivalent of taking the road less traveled in the storage industry. It is much more common to hear vendor noise about direct cloud integration features, such as a mechanism to move data on a storage array to public cloud services or run separate instances of the core vendor software inside public cloud environments. This is because of a narrow way of thinking that is centered on a storage array mentality. While there is value in those capabilities, practitioners need to consider a broader vision.

When my Infinidat colleagues and I talk to CIOs and other senior leaders at large enterprise organizations, we speak much more about the bigger picture of all the different aspects of the enterprise environment. The CIO needs it to be as simple as possible, especially if the desired state is a low investment in traditional data centers, which is the direction the IT pendulum continues to swing.

Applying systems thinking to the hybrid cloud is changing the way CIOs and IT teams are approaching their cloud journey. Systems thinking takes into consideration the end-to-end environment and the operational realities associated with that environment. There are several components with different values across the environment, which ultimately supports an overall cloud transformation. Storage is a critical part of the overall corporate cloud strategy.

Savvy IT leaders have come to realize the benefits of both the public cloud and private cloud, culminating in hybrid cloud implementations. Escalating costs on the public cloud will likely reinforce hybrid approaches to storage and cause the pendulum to swing back toward private cloud in the future, but besides serving as a transitional path today, the main reasons for using a private cloud today are about control and cybersecurity.

Being able to create a system that can accommodate both of those elements at the right scale for a large enterprise environment is not an easy task. And it goes far beyond the kind of individual array type services that are baked into point solutions within a typical storage environment.

What exactly is hybrid cloud?

Hybrid cloud is simply a world where you have workloads running in at least one public cloud component, plus a data center-based component. It could be traditionally-owned data centers or a co-location facility, but it’s something where the customer is responsible for control of the physical infrastructure, not a vendor.

To support that deployment scenario, you need workload mobility. You need the ability to quickly provision and manage the underlying infrastructure. You need visibility into the entire stack. Those are the biggest rocks among many factors that determine hybrid cloud success.

For typical enterprises, using larger building blocks on the infrastructure side makes the journey to hybrid cloud easier. There are fewer potential points of failure, fewer “moving pieces,” and increased simplification of the existing hybrid or existing physical infrastructure, whether it is deployed in a data center or in a co-location type of environment. This deployment model also can dramatically reduce overall storage estate CAPEX and OPEX.

But what happens when the building blocks for storage are small – under a petabyte or so each? There is inherently more orchestration overhead, and now vendors are increasingly dependent on an extra “glue” layer to put all these smaller pieces together.

Working with bigger pieces (petabytes) from the beginning can eliminate a significant amount of that complexity in a hybrid cloud. It’s a question of how much investment a CIO wants to put into different pieces of “glue” between different systems vs. getting larger building blocks conducive to a systems thinking approach.

The right places in the stack

A number of storage array vendors highlight an ability to snap data to public clouds, and there is value in this capability, but it’s less valuable than you might think when you’re thinking at a systems level. That is because large enterprises will most likely want backup software with routine, specific schedules across their entire infrastructure and coordination with their application stacks. IT managers are not going to want an array to move data when the application doesn’t know about it.

A common problem is that many storage array vendors focus on doing it within their piece of the stack. Yet the right answer most likely lives at the backup software layer, somewhere higher than the individual arrays in the stack. It’s about upleveling the overall thought process to systems thinking: which SLAs you want to achieve across your on-prem and public cloud environments. The right backup software can integrate with the underlying infrastructure pieces to provide that.
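What that coordination looks like, in sequencing terms: the backup layer quiesces the application before the array snapshot fires, so the copy is one the application actually knows about. The sketch below uses placeholder functions, not any vendor's API.

```python
# Backup-software-level orchestration (placeholder functions, not a
# vendor API): the application is quiesced before the array snapshot
# fires, so the copy is application-consistent, not just a surprise
# array-side snapshot the app never heard about.

def quiesce(app): print(f"{app}: flushing and pausing writes")
def resume(app): print(f"{app}: resuming writes")
def array_snapshot(volume):
    print(f"{volume}: snapshot taken")
    return f"snap-{volume}"
def catalog(snap, app): print(f"recorded {snap} for {app} in backup catalog")

def backup(app, volume):
    quiesce(app)                 # coordinate with the application stack
    try:
        snap = array_snapshot(volume)
    finally:
        resume(app)              # keep the pause as short as possible
    catalog(snap, app)           # SLA tracking lives at the backup layer

backup("erp-db", "vol-erp-01")
```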

Hybrid cloud needs to be thought of holistically, not as a “spec checkbox” type activity. And you need to think about where the right places are in this stack to provide the functionality.

Paying twice for the same storage

Solutions that involve deploying another vendor’s software on top of storage you already have to pay the hyperscaler for mean paying twice for the same storage, which makes little sense in the long term.

Sure, it may be an okay transitional solution. Or if you’re really baked into the vendor’s APIs or way of doing things, then maybe that’s a good accommodation. But the end state is almost never going to be a situation where the CIO is signing off on a check to two different vendors for the same bits of data. It simply doesn’t make sense.

Thinking at the systems level

Tactical issues get resolved when you apply systems thinking to enterprise storage. Keep in mind:

Consider where data resiliency needs to be orchestrated, and whether that belongs within individual arrays or is better positioned as part of an overall backup or data protection strategy

Beware of simply running the same storage software in the public cloud, because it’s a transitional solution at best

Cost management is critical

On the last point, take a hard look at the true economic profile your organization is getting on-premises. Vendors such as Infinidat now offer cloud-like business models with OPEX-style consumption, lowering costs compared to traditional storage infrastructure.

Almost all storage decisions are fundamentally economic decisions, whether it’s the direct price per GB, the overall operational costs, or cost avoidance and opportunity costs. It all comes back to costs at some level, but an important part of that is questioning the assumptions of the existing architectures.

If you’re coming from a world of 50 mid-range arrays and you have the potential to reduce the number of moving pieces in that infrastructure, the consolidation and simplification alone could translate into significant cost benefits: OPEX, CAPEX, and operational manpower. And that’s before you even start talking about moving data outside of more traditional data center environments.

Leveraging technologies such as Infinidat’s enterprise storage solutions makes it more straightforward to simplify and consolidate the on-prem side of the hybrid cloud environment, potentially freeing up incremental investment for the public cloud side, if that’s the direction for your particular enterprise.

How much are you spending to maintain your incumbent solutions in standard maintenance and support subscription fees? Those fees add up quite significantly. Then factor in the staff time and productivity it takes to support 50 arrays when you could be supporting three systems, or one. Look holistically at the real costs, not just what you’re paying the vendors. What are the opportunity costs of maintaining a more complex traditional infrastructure?
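A back-of-the-envelope model, with every figure an assumption for illustration rather than a benchmark, shows how support fees and staff time scale with the number of systems:

```python
def annual_run_cost(systems: int, support_fee: float,
                    admin_hours_per_week: float, hourly_rate: float) -> float:
    """Support contracts plus staff time for the whole estate, per year."""
    support = systems * support_fee
    staff = systems * admin_hours_per_week * 52 * hourly_rate
    return support + staff

# Every figure below is an assumption for illustration, not a benchmark.
before = annual_run_cost(systems=50, support_fee=25_000,
                         admin_hours_per_week=4, hourly_rate=120)
after = annual_run_cost(systems=3, support_fee=60_000,
                        admin_hours_per_week=6, hourly_rate=120)
print(f"50 mid-range arrays:    ${before:,.0f}/year")
print(f"3 consolidated systems: ${after:,.0f}/year")
```

On those assumptions, the consolidated estate runs at roughly an eighth of the annual cost, before any hardware CAPEX differences are counted.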

On the public cloud side, cloud cost management tools have attracted over a billion dollars of VC investment, yet many companies are not taking full advantage of them, particularly enterprises early in their cloud transformation. The cost management aspect and the automation around it, the degree of work you can put in for real, meaningful financial results, are not always the highest priority when you’re just getting started. The challenge with not baking it in from the beginning is that it’s harder to graft on once processes become entrenched.
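One concrete way to “bake it in from the beginning” is to enforce cost-allocation hygiene before habits form. The sketch below, using hypothetical resource records rather than a real cloud API, flags anything that cannot be attributed to a cost center:

```python
# Flag cloud resources missing the tags needed for cost allocation.
# The resource records are hypothetical, not pulled from a real cloud API.
REQUIRED_TAGS = {"cost-center", "owner"}

resources = [
    {"id": "vm-001", "tags": {"cost-center": "finance", "owner": "alice"}},
    {"id": "vm-002", "tags": {"owner": "bob"}},
    {"id": "bucket-07", "tags": {}},
]

untagged = [r["id"] for r in resources
            if not REQUIRED_TAGS.issubset(r["tags"])]
print("resources missing cost-allocation tags:", untagged)
```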

For more information, visit Infinidat here.


Johnny Serrano, CIO of Australian mine safety specialist GroundProbe, has always had a fascination with how things worked, from his first Sony Walkman to the growing number of games that had suddenly become available to kids.

“I really knew that I wanted to get paid to make games. That was my whole motivation,” Serrano tells CIO Australia.

But it wasn’t until the now CIO50 alumnus finished high school that he started seriously contemplating a professional career in technology.

A year later, having taken the first step of acquiring a diploma in software engineering, Serrano found the inspiration to enrol in a Bachelor of Information Technology, Electronic Commerce at Queensland University of Technology.

“I was really fortunate to study that business IT degree. I would be coding in one class and then studying business and economics in another; it really provided me with a holistic view of business,” Serrano says. His first real job in tech would soon expose him to that and more.

The start of a career in IT

While working for Brisbane-based Budget Databases in 2006, he was despatched to Innisfail, 260km north of his hometown of Townsville, which had been devastated by Cyclone Larry, then the strongest cyclone ever to hit Australia, causing $1.5 billion worth of damage.

With thousands of people trying to make insurance claims for damaged or destroyed homes, cars and other assets, Serrano led a team to stand up a new network and systems to help manage the surge. That experience taught him about technology’s potential to genuinely improve lives.

“I got to be on the ground, really helping people and helping rebuild the town, which looking back was really meaningful,” Serrano says. Back in Brisbane, Serrano, who is also chief data officer at GroundProbe, resumed his side-hustle in the nightclub scene, helping DJs sort out their music files and systems for shows.

Next, he landed at US defence company Raytheon, which had recently established an Australian office in Brisbane. And for the perennial gadget junkie, this opened a whole new world of technological wonder.

“I visited military bases all over Australia and was exposed to some pretty cool stuff,” Serrano says. In particular, he worked on creating — and decommissioning — supporting infrastructure for Super Hornets and F-111 fighters. He also helped to support flight simulators, inadvertently realising his dream of being paid to make games.

“During my five years at Raytheon I can say my technology learnings — and career in general — really accelerated. I worked with highly technical individuals who provided their time and experience for free to a savvy young worker like myself who was willing to take advantage and listen,” Serrano says.

These early mentors also helped him adopt a “calmer, big picture perspective of how to deal with incidents that was both reassuring and confidence-building”.

Often, he was involved in highly sensitive operations, which included setting up critical projects in normal civilian buildings, disguised so as not to appear out of place. “The defence industry knowledge I gained, and security controls implemented are still part of my thinking to this day,” he says.

Saving lives with technology

Arriving in Australia as a refugee from war-torn El Salvador in the 1980s, Serrano has always had a strong sense of social justice. With many years of senior technology experience under his belt, he embarked on a major career pivot, sitting the GAMSAT medical entry exam with the aim of one day joining Médecins Sans Frontières.

Needing to supplement his income as a mature-aged student and new father, Serrano took a job as a technical business analyst in the IT department at GroundProbe, expecting to be there for about a year.

Suffice to say, life got in the way, as it was around this time he and his wife welcomed their first child into the world, now the eldest of four.

Fast forward to today: Serrano has now spent five years as both CIO and CDO of the company, where he oversees a team of 11 that has helped establish it as a genuine digital transformation leader in the mining industry.

While there have been significant advancements in safety over the years, few would deny there’s still much room for improvement, especially in developing economies where mine accidents remain tragically common, often resulting in loss of life and serious injury, devastating families and communities.

Since it was founded in 2001, GroundProbe has been developing software and digital sensors designed to help mine operators — and workers — be more alert and responsive to the many dangers that can present themselves, while collecting site data to inform more intelligent project design.

Harnessing augmented reality during the pandemic

Like so many technology leaders, Serrano and his team were thrown many a curveball throughout the Coronavirus pandemic, not least of which was the inability to have GroundProbe engineers physically visit clients due to travel and other COVID-19 restrictions.

This was brought into sharp relief when a customer in Bolivia had one of its radars fail at a mine site. In response, Serrano and his team worked quickly to create a solution combining augmented reality (AR), smart glasses and video that has proved transformational.

“Using AR we were able to get everything back up and running in under an hour,” he says.

Mine operators were able to plug into the system and receive detailed, real-time instructions from GroundProbe’s service teams via AR headsets, helping them maintain operation of the company’s products for detecting dangerous wall movement.

With that test case proven, he and his team quickly mobilised to make the technology available across all 30 countries GroundProbe operates in, spanning Asia, Africa, Europe, South America and the US.

Serrano boasts that while many of GroundProbe’s competitors were scrambling to figure out ways to maintain service levels, the company confidently launched a marketing campaign with the tagline ‘We are still operational’. “Our machines do save lives all over the world and we pride ourselves on being able to keep them up and running using augmented reality,” he says.

In many ways it was an opportunity for him to draw on all his past professional experiences, bringing together crisis management and hands-on problem solving.

Serrano shares leadership lessons

Perhaps even more importantly, it helped him hone valuable leadership skills, both in the management of his own team, as well as working with senior executives in a mission-critical capacity.

Serrano had previously led the global deployment of a new ERP system for GroundProbe and commenced a major program to retire technical debt, but this was different.

He notes that leading his team in developing a successful technology-led response to COVID-19 has helped reshape entrenched “defensive” perceptions among GroundProbe management that IT is merely an “overhead”. There is now increased respect throughout the company for what technology can do, a sense that it’s an essential part of the business, as well as greater appreciation of the people involved in its deployment.

“Implementing tech is a journey, but it’s really all about the people at the end of the day,” he says.

Serrano feels that gaining executive support for this vision is the biggest challenge facing many CIOs today, especially as they come under increased pressure to conjure and deploy strategies for digital innovation that deliver tangible business outcomes.

Since the onset of the pandemic, GroundProbe has developed a laser-like focus on improving customer experience, which Serrano says further underscores the importance of giving his tech team as much freedom as possible.

But this also means ensuring they’re free to fail.

“We get told we’re not allowed to fail but it’s going to happen when you’re a manager. But it’s about de-risking that, which comes back to trust,” Serrano says.

Standing up emerging technology like AR on a global scale was a risky move for him and his team, and he admits it could have gone either way.

Yet he’s careful to encourage his younger team members to have confidence in their ideas, and to not fear failure, invoking the words of American actress Stella Adler: “You will only fail to learn if you do not learn from failing”.
