By: Nav Chander, Head of Service Provider SD-WAN/SASE Product Marketing at Aruba, a Hewlett Packard Enterprise company.

Today, enterprise IT leaders are facing the reality that hybrid work is the new normal as we transition to a post-pandemic world. That has meant updating cloud, networking, and security infrastructure to adapt to a world where employees need to connect to and access business applications securely, from anywhere and from any device. In fact, most applications are now cloud-hosted, which presents additional IT challenges in ensuring a high-quality end-user experience for remote, home office, and branch office workers.

Network security policies built for the legacy data-center environment, where application traffic is backhauled to the data center, degrade application performance and user experience in a cloud-first environment. They also fail to function end-to-end in environments with BYOD and IoT devices. And when networking and network security requirements are managed by separate IT teams, independently and in parallel, do you really achieve the best architecture for digital transformation?

So, does implementing a SASE architecture based on a single vendor solve all of these challenges?

SASE is not a single technology or service: the term describes a suite of services that combine advanced SD-WAN with Security Service Edge (SSE) to connect users and protect the company from web-based attacks and unauthorized access to the network and applications. By integrating SD-WAN and cloud security into a common framework, SASE implementations can both improve network performance and reduce security risk. But because SASE is a collection of capabilities, organizations need a good understanding of which components best fit their needs.
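To make that combination more concrete, the sketch below models a simplified SASE-style steering decision: internal applications stay on the SD-WAN overlay, trusted SaaS traffic breaks out locally, and everything else is handed to a cloud-delivered SSE point of presence for inspection. The application categories, tunnel names, and function are illustrative assumptions, not any vendor's configuration model.

```python
# Minimal sketch of a SASE-style steering decision at an SD-WAN edge.
# Categories, destinations, and actions are hypothetical.

TRUSTED_SAAS = {"office365", "salesforce", "zoom"}   # break out locally
INTERNAL_APPS = {"erp", "hr-portal"}                 # keep on the overlay

def steer(app: str, destination: str) -> str:
    """Return where the SD-WAN edge should send a flow (illustrative only)."""
    if app in INTERNAL_APPS:
        return f"overlay tunnel -> data center ({destination})"
    if app in TRUSTED_SAAS:
        return f"local internet breakout -> {destination}"
    # Everything else is untrusted web traffic: hand it to the SSE provider
    return f"IPsec/GRE tunnel -> SSE PoP for SWG/ZTNA inspection ({destination})"

for flow in [("office365", "outlook.office.com"),
             ("erp", "erp.internal.example"),
             ("unknown-web", "example.org")]:
    print(steer(*flow))
```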

A key component of a SASE framework is SD-WAN. Because SD-WAN has been adopted rapidly to support direct internet access, organizations can leverage existing products as a foundation for their SASE implementations, whether they choose do-it-yourself or managed services. Most enterprises operate a hybrid access network of legacy MPLS, business and broadband internet, 4G/5G, and even satellite links.

Today, enterprises can start their SASE implementation by adopting a secure SD-WAN solution with integrated security functions such as NGFW, IDS/IPS, and DDoS detection and protection. Organizations can then retire branch firewalls to simplify the WAN architecture and eliminate the cost and complexity of managing dedicated branch firewalls. The Aruba EdgeConnect SD-WAN platform provides comprehensive edge-to-cloud security by integrating with leading cloud-delivered security providers to enable a best-of-breed SASE architecture. The platform was also recently awarded an industry-first Secure SD-WAN certification from ICSA Labs.
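The integrated security functions described above ultimately come down to policy: which zones may talk to which destinations, and what inspection applies. As a rough, vendor-neutral illustration (not the EdgeConnect configuration model), a zone-based segmentation table with a default-deny posture might be expressed like this; the zone names and rules are invented for the sketch.

```python
# Illustrative zone-based segmentation rules for a secure SD-WAN branch.
# Zone names, rules, and the matching logic are assumptions for this sketch.

RULES = [
    # (source zone, destination zone, action)
    ("iot",        "internet",    "allow+ips"),   # IoT may reach the internet, with IDS/IPS
    ("iot",        "corporate",   "deny"),        # but never the corporate zone
    ("guest-byod", "internet",    "allow"),
    ("guest-byod", "corporate",   "deny"),
    ("corporate",  "data-center", "allow+ngfw"),  # Layer 7 firewall inspection
]

def evaluate(src_zone: str, dst_zone: str) -> str:
    for src, dst, action in RULES:
        if (src, dst) == (src_zone, dst_zone):
            return action
    return "deny"  # default-deny keeps segmentation fine-grained

print(evaluate("iot", "corporate"))          # deny
print(evaluate("corporate", "data-center"))  # allow+ngfw
```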

When it comes to SASE and SD-WAN transformations, enterprises have different requirements. Some, particularly retail, retail banking, and distributed sales offices, need essential SD-WAN capabilities plus Aruba's EdgeConnect advanced application performance features. These are covered by a single Foundation software license that includes a full advanced NGFW, fine-grained segmentation, a Layer 7 firewall, DDoS protection, and anti-spoofing. EdgeConnect is an all-in-one WAN edge branch platform that is simpler to deploy and support for enterprises with lean IT teams, and it can replace existing branch routers and firewalls with a combination of SD-WAN, routing, multi-cloud on-ramps, and advanced security. Optional software licenses add Boost WAN Optimization, IDS/IPS via the Dynamic Threat Defense license, and automated SASE integration with leading cloud security providers, providing a flexible SD-WAN and integrated SASE journey.

Other enterprises require more advanced SD-WAN features to address complex WAN topologies and use cases. The Advanced EdgeConnect SD-WAN software license supports any WAN topology, including full mesh, and network segments (VRFs) to handle merger and acquisition scenarios that require multi-VRF and overlapping IP address capability. It also supports seven business-intent overlays that let enterprises apply comprehensive application prioritization and granular security policies to a wide range of traffic types. Like the Foundation license, the Advanced license supports the same optional licenses for Boost WAN Optimization, IDS/IPS with Dynamic Threat Defense, and automated SASE integration with leading cloud security providers.

Many enterprises will benefit from a secure SD-WAN solution that lets them retire branch firewalls, simplify the WAN architecture, and gain the freedom and flexibility of an integrated best-of-breed SASE architecture. Aruba's new Foundation and Advanced licenses for EdgeConnect SD-WAN enable customers to transform both their WAN and security architectures with a secure SD-WAN solution that offers advanced NGFW capabilities and seamless integration with public cloud providers (AWS, Azure, GCP) and industry-leading SSE providers. This multi-vendor, best-of-breed approach to SASE adoption mitigates the risk of relying on a single technology vendor for all the necessary components, while enabling a secure, cloud-first digital transformation and letting enterprises embark on their own SASE journey.

SASE

In spite of long-term investments in such disciplines as agile, lean, and DevOps, many teams still encounter significant product challenges. In fact, one survey found teams in 92% of organizations are struggling with delivery inefficiency and a lack of visibility into the product lifecycle.[1] To take the next step in their evolutions, many teams are pursuing Value Stream Management (VSM). Through VSM, teams can establish the capabilities needed to better focus on customer value and optimize their ability to deliver that value.

While the benefits can be significant, there are a number of pitfalls that teams can encounter in their move to harness VSM. These obstacles can stymie progress and erode the potential rewards of a VSM initiative. In this post, I'll look at four common pitfalls we see teams encounter and provide some insights for avoiding them.

Pitfall #1: Missing the value

Very often, we see teams establish value streams that are doomed from inception. Why? Because they’re not centered on the right definition of value.

Too often, teams start with an incomplete or erroneous definition of value. For example, it is common to confuse new application capabilities with value. However, it may be that the features identified aren’t really wanted by customers. They may prefer fewer features, or even an experience in which their needs are addressed seamlessly, so they don’t even have to use the app. The key is to ensure you understand who the customer is and how they define value.

In defining value, teams need to identify the tangible, concrete outcomes that customers can realize. (It is important to note that, in this context, customers can be employees within the enterprise as well as external audiences, such as paying customers and partners.) Benefits can include financial gains, such as improved sales or heightened profitability; enhanced or streamlined capabilities for meeting compliance and regulatory mandates; and improved competitive differentiation. When it comes to crystallizing and pursuing value, objectives and key results (OKRs) can be indispensable. OKRs help teams gain visibility and alignment around value and the outcomes that need to be achieved.
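One lightweight way to make value measurable is to express each objective through a small number of quantified key results. The sketch below is purely illustrative; the objective, key results, and all figures are made up.

```python
# Minimal sketch of an OKR tied to customer value (all numbers are invented).
from dataclasses import dataclass

@dataclass
class KeyResult:
    description: str
    baseline: float
    target: float
    current: float

    def progress(self) -> float:
        """Fraction of the distance covered from baseline to target."""
        span = self.target - self.baseline
        return 0.0 if span == 0 else (self.current - self.baseline) / span

objective = "Reduce customer effort in the claims process"
key_results = [
    KeyResult("Cut median time-to-resolution (days)", baseline=12, target=4, current=7),
    KeyResult("Raise self-service completion rate (%)", baseline=35, target=70, current=52),
]

print(objective)
for kr in key_results:
    print(f"  {kr.description}: {kr.progress():.0%} of the way to target")
```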

Pitfall #2: Misidentifying value streams

Once teams have established a solid definition of value, it’s critical to gain a holistic perspective on all the people and teams that are needed to deliver that value. Too often, teams are too narrow in their value stream definitions.

Generally, value streams must include teams upstream from product and development, such as marketing and sales, as well as downstream, including support and professional services. The key here is that all value streams are built with customers at the center.


Pitfall #3: Focusing on the wrong metrics

While it’s a saying you hear a lot, it is absolutely true: what gets measured gets managed. That’s why it’s so critical to establish effective measurements. In order to do so, focus on these principles:

Prioritize customer value to ensure you're investing in the right activities.

Connect value to execution to ensure you're building the right things.

Align the execution of teams in order to ensure things are built right.

It is important to recognize that data is a foundational element to getting all these efforts right.

It is vital that this data is a natural outcome of value streams, not a separate initiative. Too often, teams spend massive amounts of money and time aggregating data from disparate sources and manually cobbling it together in spreadsheets and slides. Further, these manual efforts mean different teams end up looking at different data, and findings are out of date. By contrast, when data is generated as a natural output of ongoing work, everyone can work from current data and, even more importantly, everyone works from the same data. This is essential to getting all VSM participants and stakeholders on the same page.
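As a rough illustration of data being a natural outcome of the value stream, flow metrics such as lead time and throughput can be computed directly from the timestamps that work-tracking tools already record, rather than assembled by hand in spreadsheets. The record format, field names, and dates below are assumptions for the sketch.

```python
# Sketch: derive flow metrics directly from work-item timestamps instead of
# hand-built spreadsheets. Field names and dates are illustrative assumptions.
from datetime import date
from statistics import median

work_items = [
    {"id": "VSM-101", "started": date(2022, 3, 1), "delivered": date(2022, 3, 9)},
    {"id": "VSM-102", "started": date(2022, 3, 2), "delivered": date(2022, 3, 18)},
    {"id": "VSM-103", "started": date(2022, 3, 7), "delivered": date(2022, 3, 11)},
]

lead_times = [(wi["delivered"] - wi["started"]).days for wi in work_items]
print(f"Median lead time: {median(lead_times)} days")
print(f"Throughput this period: {len(work_items)} items")
```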

Pitfall #4: Missing the big picture

Often, teams start with too narrow a scope for their value streams. In reality, these narrow efforts are really just single-process business process management (BPM) endeavors. By contrast, value streams represent an end-to-end system for the flow of value, from initial concept through to the customer's realization of value. While BPM is a tactical improvement effort, VSM is a strategic one. Value streams need to be high-level, but defined in such a way that metrics can be associated with them so progress can be objectively monitored.

Tips for navigating the four pitfalls

Put your clients at the heart of your value streams and strategize around demonstrable and measurable business outcomes.

Value streams are often larger than we think. Have you remembered to include sales, HR, marketing, legal, customer service and professional services in your value stream?

Measure what matters and forget about the rest. We could spend our days elbow deep in measuring the stuff that just doesn’t help move the needle.

Learn More

To learn more about these pitfalls, and get in-depth insights for architecting an optimized VSM approach in your organization, be sure to check out our webinar, “Four Pitfalls of Value Stream Management and How to Avoid Them.”

[1] Dimensional Research, sponsored by Broadcom, “Value Streams are Accelerating Digital Transformation: A Global Survey of Executives and IT Leaders,” October 2021

Devops, Software Development

It was only a few years ago when ‘digital transformation’ was on every CIO’s agenda, and businesses started to understand how cloud could deliver real value. They stopped asking whether they should move to the cloud and started asking what they needed to do to get there.

This was the case for Murray & Roberts CIO Hilton Currie in 2016, when the cloud services market in South Africa was booming. The market was already worth $140 million and growing rapidly, and it was around this time that the engineering and mining contractor undertook its migration to the cloud. But things didn't go according to plan, a story not unique to South Africa or this industry. So Currie made the difficult decision to repatriate Murray & Roberts' IT stack, one that required selling the business on reversing an IT strategy he had previously sold as M&R's future.

Currie recently spoke to CIO.com about the mining company’s bumpy cloud journey, what motivated them to move and, ultimately, what drove them to change back. 

CIO.com: What motivated Murray & Roberts to embark on its original cloud journey?

Hilton Currie: Up until 2015, we were 100% on-premises and we had no major issues. I think the first red flag was the age of the equipment in our primary and secondary data centers. We pretty much ran a major production environment alongside a hot-standby disaster recovery environment, which was far out of support and had reached end of life. That brings its own risks to the table. Our production environment still had a bit of life in it, but not much, and decisions had to be made.

Around this time, we were in discussions with our outsource partner about outsourcing our IT support, and they proposed that they could make cloud affordable for us. Honestly, in 2016, cloud wasn't really affordable. But if we were willing to outsource the whole lot to them, all the way from the server and application layer right down to technician support, and adopt their managed public cloud, they were confident they could make it work for us. Compared with the cost of the new infrastructure we needed to renew, it actually looked very attractive financially. So we opted to go ahead.

Once you signed on, how then did the migration progress?

We started a serious migration in the first quarter of 2017 because a lot of our equipment was end of life, so it was an all-or-nothing approach. We used a locally hosted cloud vendor that was connected to our outsource partner and affiliated with a big global company. It took a bit longer than anticipated, but by November 2017 we were almost fully across and functioning. We had everything from our big ERP systems to smaller, bespoke systems running in the cloud.

We had third-party independent consultants come in to analyze certain systems and licensing. But in early 2018, the first major hiccup hit us: the independent party who did the audit missed the terms of use on some of our licensing. For example, Microsoft has quite heavy restrictions on using perpetual-license software in the cloud, specifically SQL Server. Some licenses aren't valid if you don't own the rights to the underlying equipment. Unfortunately, our licensing expert missed that.

To fix this, we would have had to move from perpetual licenses to subscription licenses, which would have been a grudge purchase because some systems lag in terms of the versions they're certified to run on. We would have had to purchase a current version of SQL Server and then downgrade it four or five versions, because that's the version our ERP and other systems run on. That would have been hellishly expensive, so we brought all the services impacted by licensing restrictions back on-prem and pulled all our SQL servers back, which was a costly exercise because we had to purchase new equipment, while the rest of the applications stayed in the cloud.

We ended up with a split, or hybrid, setup, which brought some challenges and became quite a nightmare to manage. We did eventually get it working and ran this way for about six to eight months. During that time, there was a buzz around cloud and I think the outsourced vendor was getting a few new clients onto their managed cloud platform, which started taking a toll on us, because it wasn't long after that they started imposing rate limitations.


What prompted you to realize the shift to the cloud wasn’t working, and how hard was it to decide to move back?

Historically, we could get full performance out of the cloud platform. There were no restrictions, and then suddenly they started imposing limitations, with an additional charge if we wanted more. Suddenly, the commercial model fell to pieces. We tried to make do with the limitations, but it got to a point where it just wasn't working. We were only fully in the cloud for about a year and a half, and about a year in, the business was brought to its knees. Applications started failing, email and phones didn't work, and our ERP became unusable. In some of the worst cases, it took our finance teams up to 15 minutes to open Excel files. We complained and they lifted some of the rate limitations while we made alternative arrangements.

Around March 2019, we decided to move back on-premises and by mid-2020 we were fully on-premises again. For me, the decision was crystal clear. At a point, it felt like I was walking around the building with a target on my back because this affected everyone and there was an air that our IT was falling to pieces.

In Murray & Roberts, IT reports to the financial director of the group, so I sat down with him to explain that it wasn't all bad. I built a detailed roadmap acknowledging that we were in a bad space, but showing that getting out of the situation was possible. It came down to the numbers in terms of spend on new kit: I showed that it would cost less over three years to move to new kit than to stay where we were. It was a no-brainer for him to accept, but it was a difficult conversation. We had sold them the cloud journey back in 2016 and they backed us and jumped on board. Then, a year and a half later, we wanted to jump ship. But I think the results of moving back to a private cloud spoke for themselves.

So what systems are in place now in M&R’s on-premises data center? Have the issues you identified been resolved?

Instead of keeping things as they were when we moved back on-prem, we did a refresh, making a list of all our systems to get a better view of how important they were to the business and where they sat. As part of this rationalization process, a couple of systems were upgraded; since we had the chance to rebuild from scratch, we took advantage of it and got things running the way we wanted. We did a lot of consolidation as well. When we went to the cloud, at one point we had over 300 servers, and the goal when we moved back on-prem was to get this down to about 180.

Given how the cloud market has changed, would you consider another cloud migration?

We’re not against cloud. We understand it can add value and it has a place. In fact, we’re starting a complete Office365 migration. But we’ll only lift certain systems and we’re taking a more selective approach. Cloud is a big buzzword, but you need to ask what it promises and what it’s going to give your business in terms of value. If you’re going to cloud for commercial reasons, it’s a big mistake because it’s not cheaper. And if you’re going for performance reasons, it’s an even bigger mistake and there are many reasons for this.

In South Africa specifically, there are a lot of issues with bandwidth and throughput to international vendors, because a lot of infrastructure still sits in Europe or the Americas. With the kind of flexibility that virtual environments can offer on a private cloud, do you really need a public cloud if you don't need to be agile or scale drastically? We found that a well-managed, fully redundant virtual environment, hosted in a private cloud on our own kit in a tier-four data center, was the ideal scenario for us. We've been running this way since mid-2020 and have not looked back.

Any learnings from this experience that you’d like to share with other CIOs?

Looking back, I don't think we made a bad decision. The biggest learning is about focusing on the big picture. Make sure you understand your long-term roadmap very clearly so there are no surprises. People are often blinded by the commercials, so be very careful about licensing and terms of use, because many vendors have restrictions in place. Do your due diligence. All vendors write clauses into their contracts saying terms are subject to change over time. And make sure you've got a backup or rollback plan, because it's very difficult to bridge the gap when the company is on its knees and you need to buy equipment and do a full migration. I would also never recommend a full lift-and-shift approach. There are just too many variables.

What advice would you give aspirant CIOs, having gone through this?

One of the aspects that's lacking in many CIOs is interaction with the business. The CIO role is not just an IT role. Gaining an understanding of your business and building relationships with important stakeholders is key to a successful CIO career. You're looking at governance, compliance, processes and things like that, but if you're not aligned with what the business requires, you're sitting in no man's land because there's a mismatch between what IT offers and what the business needs. If you're looking at technology for all the bells and whistles, you're missing the point. It should be about adopting the right technology to boost productivity and to facilitate how the business operates.

CIO

Heading down the path of systems thinking for the hybrid cloud is the equivalent of taking the road less traveled in the storage industry. It is much more common to hear vendor noise about direct cloud integration features, such as a mechanism to move data on a storage array to public cloud services or run separate instances of the core vendor software inside public cloud environments. This is because of a narrow way of thinking that is centered on a storage array mentality. While there is value in those capabilities, practitioners need to consider a broader vision.

When my Infinidat colleagues and I talk to CIOs and other senior leaders at large enterprise organizations, we talk much more about the bigger picture: all the different aspects of the enterprise environment. The CIO needs that environment to be as simple as possible, especially if the desired end state is a low investment in traditional data centers, which is the direction the IT pendulum continues to swing.

Applying systems thinking to the hybrid cloud is changing the way CIOs and IT teams approach their cloud journeys. Systems thinking takes into consideration the end-to-end environment and the operational realities associated with it. Several components, each delivering different value across the environment, ultimately support the overall cloud transformation, and storage is a critical part of the overall corporate cloud strategy.

Savvy IT leaders have come to realize the benefits of both the public cloud and the private cloud, culminating in hybrid cloud implementations. Escalating public cloud costs will likely reinforce hybrid approaches to storage and swing the pendulum back toward private cloud in the future, but beyond serving as a transitional path, the main reasons for using a private cloud today are control and cybersecurity.

Being able to create a system that can accommodate both of those elements at the right scale for a large enterprise environment is not an easy task. And it goes far beyond the kind of individual array type services that are baked into point solutions within a typical storage environment.

What exactly is hybrid cloud?

Hybrid cloud is simply a world where you have workloads running in at least one public cloud component, plus a data center-based component. That component could be a traditionally owned data center or a co-location facility, but it is something where the customer, not a vendor, is responsible for control of the physical infrastructure.

To support that deployment scenario, you need workload mobility. You need the ability to quickly provision and manage the underlying infrastructure. You need visibility into the entire stack. Those are the biggest rocks among many factors that determine hybrid cloud success.

For typical enterprises, using larger building blocks on the infrastructure side makes the journey to hybrid cloud easier. There are fewer potential points of failure, fewer "moving pieces," and a simpler existing physical infrastructure, whether it is deployed in a data center or in a co-location environment. This deployment model can also dramatically reduce overall storage estate CAPEX and OPEX.

But what happens when the building blocks for storage are small – under a petabyte or so each? There is inherently more orchestration overhead, and now vendors are increasingly dependent on an extra “glue” layer to put all these smaller pieces together.

Working with bigger pieces (petabytes) from the beginning removes a significant amount of that complexity in a hybrid cloud. It's a question of how much investment a CIO wants to put into different pieces of "glue" between systems versus acquiring larger building blocks conducive to a systems thinking approach.
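A back-of-the-envelope way to see the orchestration overhead: the number of management endpoints and integration points grows with the number of building blocks, so fewer, larger blocks mean less "glue." The formula and all figures below are purely illustrative assumptions, not measurements.

```python
# Rough, illustrative comparison of orchestration overhead for the same capacity
# delivered as many small building blocks vs. a few large ones.
import math

def overhead(total_pb: float, block_size_pb: float, links_per_block: int = 3) -> dict:
    """Count blocks and the 'glue' integrations each one drags in (assumed 3 per block)."""
    blocks = math.ceil(total_pb / block_size_pb)
    return {
        "building blocks": blocks,
        "management endpoints": blocks,
        "glue integrations (monitoring, backup, automation)": blocks * links_per_block,
    }

print("Small blocks (0.5 PB):", overhead(total_pb=10, block_size_pb=0.5))
print("Large blocks (5 PB):  ", overhead(total_pb=10, block_size_pb=5))
```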

The right places in the stack

A number of storage array vendors highlight an ability to snap data to public clouds, and there is value in this capability, but it’s less valuable than you might think when you’re thinking at a systems level. That is because large enterprises will most likely want backup software with routine, specific schedules across their entire infrastructure and coordination with their application stacks. IT managers are not going to want an array to move data when the application doesn’t know about it.

A common problem is that many storage array vendors focus on doing this within their own piece of the stack. In fact, the right answer most likely sits at the backup software layer, somewhere higher than the individual arrays in the stack. It's about upleveling the thought process to systems thinking: deciding what SLAs you want to achieve across your on-prem and public cloud environments. The right backup software can then integrate with the underlying infrastructure pieces to deliver them.
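The point about coordinating with the application stack can be sketched as follows: the protection layer quiesces the application before asking the underlying storage, on-prem or in the cloud, for a snapshot, so data never moves behind the application's back. All class and function names here are hypothetical stand-ins, not any backup product's API.

```python
# Sketch of SLA-driven, application-consistent protection orchestrated above
# the array layer. Names and behavior are illustrative assumptions.

class Application:
    def __init__(self, name): self.name = name
    def quiesce(self): print(f"{self.name}: flush buffers, pause writes")
    def resume(self):  print(f"{self.name}: resume writes")

class StorageTarget:
    def __init__(self, location): self.location = location
    def snapshot(self, tag): print(f"{self.location}: snapshot taken ({tag})")

def protect(app: Application, targets: list, tag: str):
    """Coordinate an application-consistent snapshot across all targets."""
    app.quiesce()
    try:
        for t in targets:
            t.snapshot(tag)      # on-prem array and public cloud copy
    finally:
        app.resume()             # never leave the application paused

protect(Application("ERP database"),
        [StorageTarget("on-prem array"), StorageTarget("public cloud tier")],
        tag="daily-0200-rpo4h")
```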

Hybrid cloud needs to be thought of holistically, not as a “spec checkbox” type activity. And you need to think about where the right places are in this stack to provide the functionality.

Paying twice for the same storage

Solutions that involve deploying another vendor's software on top of storage you already pay the hyperscaler for mean paying twice for the same storage, and that makes little sense in the long term.

Sure, it may be an acceptable transitional solution. Or if you're deeply invested in the vendor's APIs or way of doing things, then maybe it's a reasonable accommodation. But the end state is almost never going to be a situation where the CIO is signing off on checks to two different vendors for the same bits of data. It simply doesn't make sense.

Thinking at the systems level

Tactical issues get resolved when you apply systems thinking to enterprise storage. Keep in mind:

Consider where data resiliency needs to be orchestrated, and whether that belongs within individual arrays or is better positioned as part of an overall backup or other data protection strategy.

Beware of simply running the same storage software in the public cloud, because it is a transitional solution at best.

Cost management is critical.

On the last point, take a good look at the true economic profile your organization is getting on-premises. You can get cloud-like business models and OPEX benefits from vendors such as Infinidat, lowering costs compared to traditional storage infrastructure.

Almost all storage decisions are fundamentally economic decisions, whether it’s a direct price per GB cost, the overall operational costs, or cost avoidance/opportunity costs. It all comes back to costs at some level, but an important part of that is questioning the assumptions of the existing architectures.

If you're coming from a world where you have 50 mid-range arrays and you have the potential to reduce the number of moving pieces in that infrastructure, the consolidation and simplification alone could translate into significant cost benefits: OPEX, CAPEX, and operational manpower. And that's before you even start talking about moving data outside of more traditional data center environments.
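As a purely illustrative version of that consolidation math (every figure below is invented), compare the annual support and staffing cost of many mid-range arrays with a handful of larger systems:

```python
# Purely illustrative consolidation arithmetic; every figure is an assumption.
def annual_cost(systems: int, support_fee_per_system: float,
                admin_hours_per_system_per_week: float, loaded_hourly_rate: float) -> float:
    support = systems * support_fee_per_system
    labor = systems * admin_hours_per_system_per_week * 52 * loaded_hourly_rate
    return support + labor

before = annual_cost(systems=50, support_fee_per_system=20_000,
                     admin_hours_per_system_per_week=3, loaded_hourly_rate=90)
after = annual_cost(systems=3, support_fee_per_system=120_000,
                    admin_hours_per_system_per_week=5, loaded_hourly_rate=90)

print(f"Before consolidation: ${before:,.0f}/year")
print(f"After consolidation:  ${after:,.0f}/year")
print(f"Illustrative annual savings: ${before - after:,.0f}")
```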

Leveraging technologies, such as Infinidat’s enterprise storage solutions, makes it more straightforward to simplify and consolidate on the on-prem side of the hybrid cloud environment, potentially allowing for incremental investment in the public cloud side, if that’s the direction for your particular enterprise.

How much are you spending to maintain the incumbent solutions, both in standard maintenance and support subscription fees and in staff time and productivity? Those fees add up quite significantly. And if your team is supporting 50 arrays when it could be supporting three systems, or one, you should look holistically at the real costs, not just what you're paying the vendors. What are the opportunity costs of maintaining a more complex traditional infrastructure?

On the public cloud side, more than a billion dollars of VC money has gone into cloud cost management tools, yet many companies are not taking full advantage of them, particularly enterprises that are early in their cloud transformation. The cost management aspect and the automation around it (the degree of work you can put into it for real, meaningful financial results) are not always the highest priority when you're just getting started. The challenge with not baking it in from the beginning is that it's harder to graft it in once processes become entrenched.

For more information, visit Infinidat here

Hybrid Cloud

Johnny Serrano, CIO of Australian mine safety specialist GroundProbe, always had a fascination with how things worked, from his first Sony Walkman to the growing number of games that had suddenly become available to kids.

“I really knew that I wanted to get paid to make games. That was my whole motivation,” Serrano tells CIO Australia.

But it wasn’t until the now CIO50 alumnus finished high school that he started seriously contemplating a professional career in technology.

A year later, having taken the first step of acquiring a diploma in software engineering, Serrano found the inspiration to enrol in a Bachelor of Information Technology, Electronic Commerce at Queensland University of Technology.

“I was really fortunate to study that business IT degree. I would be coding in one class and then studying business and economics in another; it really provided me with a holistic view of business,” Serrano says. His first real job in tech would soon expose him to that and more.

The start of a career in IT

While working for Brisbane-based Budget Databases in 2006, he was despatched to Innisfail, 260km north of his hometown of Townsville. The town had been devastated by Cyclone Larry, then the strongest cyclone ever to hit Australia, which caused $1.5 billion worth of damage.

With thousands of people trying to make insurance claims for damaged or destroyed homes, cars and other assets, Serrano led a team to stand up a new network and systems to help manage the surge. That experience taught him about technology's potential to genuinely improve lives.

“I got to be on the ground, really helping people and helping rebuild the town, which looking back was really meaningful,” Serrano says. Returning to work in Brisbane, Serrano, who is also chief data officer at GroundProbe, resumed his side hustle in the nightclub scene, helping DJs sort out their music files and systems for shows.

Next, he landed at US defence company Raytheon, which had recently established an Australian office in Brisbane. And for the perennial gadget junkie, this opened a whole new world of technological wonder.

“I visited military bases all over Australia and was exposed to some pretty cool stuff,” Serrano says. In particular, he worked on creating — and decommissioning — supporting infrastructure for Super Hornets and F-111 fighters. He also helped to support flight simulators, inadvertently realising his dream of being paid to make games.

“During my five years at Raytheon I can say my technology learnings — and career in general — really accelerated. I worked with highly technical individuals who provided their time and experience for free to a savvy young worker like myself who was willing to take advantage and listen,” Serrano says.

These early mentors further influenced him in helping to adopt a “calmer, big picture perspective of how to deal with incidents that was both reassuring and confidence-building”.

Often, he was involved in highly sensitive operations, which included setting up critical projects in ordinary civilian buildings, disguised so as not to appear out of place. “The defence industry knowledge I gained, and the security controls implemented, are still part of my thinking to this day,” he says.

Saving lives with technology

Arriving in Australia as a refugee from war-torn El Salvador in the 1980s, Serrano has always had a strong sense of social justice. And later, with many years of high-level and senior technology experience under his belt, he embarked on a major career pivot, sitting the GAMSAT medical entry exam with the aim of one day joining Médecins Sans Frontières.

Needing to supplement his income as a mature-aged student and new father, Serrano took a job as a technical business analyst in the IT department at GroundProbe, expecting to be there for about a year.

Suffice to say, life got in the way, as it was around this time he and his wife welcomed their first child into the world, now the eldest of four.

Fast forward to today, Serrano has now spent five years as both CIO and CDO of the company where he oversees a team of 11 that have helped establish it as a genuine digital transformation leader in the mining industry.

While there have been significant advancements in safety over the years, few would argue there’s still much room for improvement, especially in developing economies where mine accidents remain tragically common, often resulting in loss of life and serious injury, devastating families and communities.

Since it was founded in 2001, GroundProbe has been developing software and digital sensors designed to help mine operators — and workers — be more alert and responsive to the many dangers that can present themselves, while collecting site data to inform more intelligent project design.

Harnessing augmented reality during the pandemic

Like so many technology leaders, Serrano and his team were thrown many a curveball throughout the Coronavirus pandemic, not least of which was the inability to have GroundProbe engineers physically visit clients due to travel and other COVID-19 restrictions.

This was brought into sharp relief when a customer in Bolivia had one of its radars stop working at a mine site. In response, Serrano and his team worked quickly to create a solution combining augmented reality (AR), smart glasses, and video that has proved transformational.

“Using AR we were able to get everything back up and running in under an hour,” he says.

Mine operators were able to plug into the system and receive detailed, real-time instructions from GroundProbe’s service teams via AR headsets, helping them maintain operation of the company’s products for detecting dangerous wall movement.

With that test case proven, he and his team quickly mobilised to make the technology available across all 30 countries GroundProbe operates in, spanning Asia, Africa, Europe, South America and the US.

Serrano boasts that while many of GroundProbe’s competitors were scrambling to figure out ways to maintain service levels, the company confidently launched a marketing campaign with the tagline ‘We are still operational’. “Our machines do save lives all over the world and we pride ourselves on being able to keep them up and running using augmented reality,” he says.

In many ways it was an opportunity for him to draw on all his past professional experiences, bringing together crisis management and hands-on problem solving.

Serrano shares lessons to lead

Perhaps even more importantly, it helped him hone valuable leadership skills, both in the management of his own team, as well as working with senior executives in a mission-critical capacity.

Serrano had previously led the global deployment of a new ERP system for GroundProbe and commenced a major program to retire technical debt, but this was different.

He notes that leading his team in developing a successful technology-led response to COVID-19 has helped reshape entrenched “defensive” perceptions amongst the GroundProbe management that IT is merely an “overhead”. There is now increased respect throughout the company for what technology can do, a sense that it’s an essential part of the business, as well as greater appreciation of the people that are involved in its deployment.

“Implementing tech is a journey, but it’s really all about the people at the end of the day,” he says.

Serrano feels that gaining executive support for this vision is the biggest challenge facing many CIOs today, especially as they come under increased pressure to conjure and deploy strategies for digital innovation that deliver tangible business outcomes.

Since the onset of the pandemic, GroundProbe has developed a laser-like focus on improving customer experience, which Serrano says further underscores the importance of giving his tech team as much freedom as possible.

But this also means ensuring they’re free to fail.

“We get told we’re not allowed to fail but it’s going to happen when you’re a manager. But it’s about de-risking that, which comes back to trust,” Serrano says.

Standing up emerging technology like AR on a global scale was a risky move for him and his team, and he admits it could have gone either way.

Yet he’s careful to encourage his younger team members to have confidence in their ideas, and to not fear failure, invoking the words of American actress Stella Adler: “You will only fail to learn if you do not learn from failing”.

IT Leadership

The ANWR Group, a Mainhausen-based community of financial services providers and retailers in the footwear, sporting goods, and leather goods industries, had, until 2018, used the ERP system of its banking subsidiary DZB Bank. As a result, banking-sector regulations for financial accounting and controlling also applied to the retail side of the company.

Over time, these regulations became more restrictive, and the flexibility needed in the trading business was no longer available. “We had already started separating the IT systems a few years earlier in order to better prepare both the bank and the trading companies for their respective requirements,” recalls ANWR Group CIO Sven Kulikowsky. The ERP software was the last shared system.

Together in the greenfield

ANWR follows a cloud-first strategy for new IT projects, and in 2018 the IT department tackled the migration to SAP S/4HANA together with the financial accounting and controlling departments. Knowledge of the Walldorf-based software company's solutions was already in-house, since the previous core system was a heavily modified on-premises SAP R/3. Precisely because of those modifications, the new environment really had to be based on a greenfield approach in the public cloud set up by SAP.

“It was extremely important to get the departments on board from the start,” says Kulikowsky. Together, they determined what the new solution had to be able to do. In joint workshops, mixed teams from the business departments and IT evaluated the capabilities and maturity of the cloud platform.

Agile with purpose

To organize the change, a steering committee was formed as the highest control body. Beneath it, a project board made up of Kulikowsky and his counterparts in financial accounting and controlling served as the control team, coordinating with the project manager of the external partner Camelot ITLab for two hours each week. The team received input from cross-functional working groups made up of staff and external consultants, who worked through problems with specific processes. “We were able to quickly compare different opinions and make decisions,” says Kulikowsky. As a result, the departments and IT always pulled together.

He set a goal of migrating all systems to the new environment by the end of 2021 and of producing the 2021 annual financial statements with S/4HANA. The 2022 financial year was to start without the old environment, and to get there, Kulikowsky defined nine migration waves.

Cloud Management, SAP