John McCaffrey, CIO at H2M architects + engineers, joins host Maryfran Johnson for this CIO Leadership Live interview, jointly produced by CIO.com and the CIO Executive Council. They discuss infrastructure lifecycles, 3D scanning and design, and more.

Careers, CIO, CIO Leadership Live

By Anand Oswal, Senior Vice President and GM at cybersecurity leader Palo Alto Networks

Critical infrastructure forms the fabric of our society, providing power for our homes and businesses, fuel for our vehicles, and medical services that preserve human health.

With the acceleration of digital transformation spurred by the pandemic, larger and larger volumes of critical infrastructure and services have become increasingly connected. Operational technology (OT) serves a critical role as sensors in power plants, water treatment facilities, and a broad range of industrial environments.

Digital transformation has also led to a growing convergence between OT and information technology (IT). All of this connection brings accessibility benefits, but it also introduces a host of potential security risks.

Cyberattacks on critical infrastructure threaten many aspects of our lives

It’s a hard fact that no aspect of life today is free from cyberthreats. Ransomware and phishing attacks continue to proliferate, and in recent years we’ve also seen an increasing number of attacks against critical infrastructure targets. Even environments where OT and IT were traditionally segmented or even air-gapped have largely converged, giving attackers the ability to find an initial foothold and then escalate their activities to more serious pursuits, such as disrupting operations.

Examples are all around us. Among the most far-reaching attacks against critical infrastructure in recent years was the Colonial Pipeline incident, which triggered resource supply fears across the US as the pipeline was temporarily shut down. Automobile manufacturer Toyota was forced to shut down briefly after a critical supplier was hit by a cyberattack. Meat processing vendor JBS USA Holding experienced a ransomware cyberattack that impacted the food supply chain. The Oldsmar water treatment plant in Florida was the victim of a cyberattack that could have potentially poisoned the water supply. Hospitals have suffered cyberattacks and ransomware that threaten patients’ lives, with the FBI warning that North Korea is actively targeting the US healthcare sector. The list goes on and on.

Global instability complicates this situation further: attacks against critical infrastructure around the world spiked following Russia’s invasion of Ukraine, including the deployment of Industroyer2, malware specifically designed to target and cripple critical industrial infrastructure.

Today’s challenges place an increasing focus on operational resiliency

With all of these significant challenges to critical infrastructure environments, it’s not surprising that there is a growing focus on operational resiliency within the sector. Simply put, failure is not an option. You can’t have your water or your power go down or have food supplies disrupted because an outage of critical infrastructure has a direct impact on human health and safety. So, the stakes are very high, and there is almost zero tolerance for something going the wrong way.

Being operationally resilient in an era of increasing threats and changing work habits is an ongoing challenge for many organizations. This is doubly true for the organizations, agencies, and companies that comprise our critical infrastructure.

Digital transformation is fundamentally changing the way this sector must approach cybersecurity. With the emerging hybrid workforce and accelerating cloud migration, applications and users are now everywhere, and users expect access from any location on any device. The implied trust of years past, where being physically present in an office provided some measure of user authenticity, simply no longer exists. This level of complexity requires a higher level of security, applied consistently across all environments and interactions.

Overcoming cybersecurity challenges in critical infrastructure

To get to a state of resiliency, there are a number of common challenges in critical infrastructure environments that need to be overcome because they negatively impact security outcomes. These include:

Legacy systems: Critical infrastructure often uses legacy systems far beyond their reasonable lifespan from a security standpoint. This means many systems are running older, unsupported operating systems, which often cannot be easily patched or upgraded due to operational, compliance, or warranty concerns.

IT/OT convergence: As IT and OT systems converge, OT systems that were previously isolated are now accessible, making them more available and, inherently, more at risk of being attacked.

A lack of skilled resources: In general, there is a lack of dedicated security personnel and security skills in this sector. There has also been a shift in recent years toward remote operations, which has put further pressure on resources.

Regulatory compliance: There are rules and regulations across many critical infrastructure verticals that create complexity concerning what is or isn’t allowed.

Getting insights from data: With a growing number of devices, it’s often a challenge for organizations to get insights and analytics from usage data that can help to steer business and operational outcomes.

The importance of Zero Trust in critical infrastructure

A Zero Trust approach can help to remediate a number of the security challenges that face critical infrastructure environments and also provide the level of cyber resilience that critical infrastructure needs now.

How come? The concept of Zero Trust, at its most basic level, is all about eliminating implied trust. Every user needs to be authenticated, every access request needs to be validated, and all activities must be continuously monitored. With Zero Trust, authentication and access become a continuous process that helps to limit risk.

Zero Trust isn’t just about locking things down; it’s also about providing consistent security and a common experience for users, wherever they are. So, whether a user is at home or in the office, they get treated the same from a security and risk perspective. Just because a user walked into an office doesn’t mean they should automatically be granted access privileges.

Zero Trust isn’t only about users: the same principles apply to cloud workloads and infrastructure components such as OT devices and network nodes. Devices still need to be authenticated and their access authorized so you can control what each device is trying to do, and that’s what the Zero Trust model provides.

All of these aspects of Zero Trust enable the heightened security posture that critical infrastructure demands.

Zero Trust is a strategic initiative that helps prevent successful data breaches by eliminating the concept of implicit trust from an organization’s network architecture. The most important objectives in critical infrastructure cybersecurity are preventing damaging cyber-physical effects on assets, avoiding the loss of critical services, and preserving human health and safety. Critical infrastructure’s purpose-built nature, with its correspondingly predictable network traffic and its challenges with patching, makes it an ideal environment for Zero Trust.

Applying a Zero Trust approach that fits critical infrastructure

It’s important to realize that Zero Trust is not a single product; it’s a journey that organizations will need to take.

Going from a traditional network architecture to Zero Trust, especially in critical infrastructure, is not going to be a “one-and-done” effort that can be achieved with the flip of a switch. Rather, the approach we recommend is a phased model that can be broken down into several key steps:

1. Identifying the crown jewels. A foundational step is to first identify what critical infrastructure IT and OT assets are in place.

2. Visibility and risk assessment of all assets. You can’t secure what you can’t see. Broad visibility that includes behavioral and transaction flow understanding is an important step in order to not only evaluate risk but also to inform the creation of Zero Trust policies.

3. OT-IT network segmentation. It is imperative to separate IT from OT networks to limit risk and minimize the attack surface.

4. Application of Zero Trust policies (see the policy sketch after this list). This includes:

Least-privileged access and continuous trust verification, which is a key security control that greatly limits the impact of a security incident
Continuous security inspection that ensures the transactions are safe by stopping threats — both known and unknown, including zero-day threats — without affecting user productivity
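To make this concrete, here is a minimal, hypothetical sketch in Python of a Zero Trust policy decision point that combines least-privileged access with continuous trust verification. The roles, resources, thresholds, and field names are illustrative assumptions, not a description of any specific product’s policy model.

```python
# Hypothetical sketch of a Zero Trust policy decision point.
# Every request is evaluated against identity, device posture, and context,
# and trust is re-verified continuously rather than granted once at login.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class AccessRequest:
    user_id: str
    device_id: str
    resource: str
    action: str
    mfa_verified_at: datetime   # when the user last completed strong authentication
    device_compliant: bool      # e.g., patched OS, healthy endpoint agent
    risk_score: float           # fed by continuous monitoring (0 = low, 1 = high)

# Least-privileged access: each role is allowed only the specific
# resource/action pairs it needs, nothing more.
ROLE_PERMISSIONS = {
    "plant-operator": {("scada/hmi", "read"), ("scada/hmi", "write")},
    "maintenance": {("scada/hmi", "read")},
}

def authorize(req: AccessRequest, role: str) -> bool:
    if (req.resource, req.action) not in ROLE_PERMISSIONS.get(role, set()):
        return False  # not explicitly allowed, so deny by default
    if not req.device_compliant:
        return False  # device posture fails inspection
    if datetime.utcnow() - req.mfa_verified_at > timedelta(minutes=15):
        return False  # authentication is stale, force re-verification
    if req.risk_score > 0.7:
        return False  # continuous monitoring has flagged elevated risk
    return True

# Example: a maintenance technician trying to write to the HMI is denied,
# even from a healthy device, because the role never grants that privilege.
request = AccessRequest("u123", "d456", "scada/hmi", "write",
                        datetime.utcnow(), True, 0.1)
print(authorize(request, "maintenance"))  # False
```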

By definition, critical infrastructure is vital. It needs to be operationally resilient, be able to reduce the potential attack surface, and minimize the new or expanding risks created by digital transformation. When applied correctly, a Zero Trust approach to security within critical infrastructure can play a central role in all of this — ensuring resilience and the availability of services that society depends on every day.

Learn more about our Zero Trust approach.

About Anand Oswal:

Anand serves as Senior Vice President and GM at cybersecurity leader Palo Alto Networks. Prior to this, Anand was Senior Vice President of Engineering for Cisco’s Intent-Based Networking Group. At Cisco, he was responsible for building the complete set of platforms and solutions for the Cisco enterprise networking portfolio, which spans enterprise products across routing, access switching, IoT connectivity, wireless, and network and cloud services deployed for customers worldwide.

Anand is a dynamic leader, building strong, diverse, and motivated teams that continually excel through a relentless focus on execution. He holds more than 50 U.S. patents and is focused on innovation and inspiring his team to build awesome products and solutions.

Data and Information Security, IT Leadership

With 190 participating countries and 24 million visitors, Expo 2020 Dubai was one of the world’s largest events, connecting everyone to innovative and inspiring ideas for a brighter future. But what does it take to support an event on such a grand scale? The answer is a robust cloud and modern IT infrastructure, one that allowed 1,200 employees to collaborate with one another amid nationwide lockdowns and supply chain disruptions.

As part of the World Expos, Expo 2020 Dubai sought to create one of the smartest and most connected places, one that would leave its participants with a lasting impression beyond the event. Beyond the knowledge and connections visitors gained from the Expo, the organization also wanted to impart the rich cultural heritage of the United Arab Emirates (UAE). That meant delivering a deeply personalized and hyper-relevant experience, from a smooth ticketing journey to chatbots that offered real-time assistance in multiple languages.

To bring this ambition to life, a robust foundation of technology was necessary, one that could support the seamless integration of systems and apps, deliver a myriad of digital services, and meet numerous, diverse IT requirements. It was with these in mind that Expo 2020 Dubai decided on a multi-cloud infrastructure that was hyper-flexible, scalable, secure, and reliable enough to support the event’s operations while serving as a platform to manage the build process for the event.

Behind The Winning Cloud Partnership

Expo 2020 Dubai was built from the ground up: a 4.38 km² site comprising sprawling parks, a shopping mall, and the Expo Village. In the same vein, its cloud journey underpinned the various stages of its development, including civil infrastructure, building construction, crowd management, smart city operations, and marketing. Key to this multi-cloud infrastructure were flexibility, scalability, and security, upon which its integrated, intelligent systems were built. This enabled the Expo teams, vendors, suppliers, and volunteers across nations to work seamlessly together.

Through the collaborative effort of e& and Accenture, Etisalat OneCloud and Amazon Web Services (AWS) were successfully integrated to make Expo 2020 Dubai one of the first and largest true multi-cloud infrastructures in the region. Etisalat OneCloud provided the resilient, reliable, and secure environment the event needed for its localized business-critical apps, whereas AWS delivered the structure necessary to support global digital services and apps, such as websites, participant portals, and eCommerce platforms.

But what brought both solutions together was the Accenture Service Delivery Platform, which offered the interconnectivity needed to enable several layers of integration at the app and security levels.

As the technological groundwork of Expo 2020 Dubai consisted of over 90 applications, Accenture Service Delivery Platform delivered the integration the multi-cloud infrastructure required without any external systems while meeting the stringent app requirements around scalability, security, and hyper-reliability. This was done across six months of development and throughout the entire customer lifecycle spanning awareness, discovery, purchase, and post-sales.

Delivering An Unprecedented Experience

Through this sprawling multi-cloud infrastructure, Expo 2020 Dubai could host all the Pavilion designs, themes, and content from over 190 participating countries while integrating authorizations, supply chain management, and workforce licensing functions. At the same time, the event realized seamless and highly personalized experiences for its visitors with a suite of visitor-facing digital channels, including the official Expo 2020 mobile app, a virtual assistant, and the official website.

Expo 2020 Dubai also incorporated a central information hub and a best-in-class ticketing journey alongside digital services tailored to a visitor’s personal preferences in real time and in their preferred language. Then there was AMAL, a chatbot powered by artificial intelligence, which was instrumental in gathering critical information on the Expo shows and attractions while giving live feedback as the event took place.

It is clear that behind this global gathering of nations designed around enhancing our collective knowledge, aspirations, and progress, a large-scale digital transformation took place: one which enabled the multi-cloud environment for Expo 2020 Dubai and was instrumental to the success of this life-changing event.

The Expo’s key themes of opportunity, mobility, and sustainability were succinctly captured in its infrastructure, demonstrating the potential of cloud in unlocking intelligent operations and business agility. As evident in the successes of Expo 2020 Dubai and other businesses, such as leading transport fuels provider Ampol, cloud has become an indispensable cornerstone to succeed in today’s digital-first economy. And it’s this very cloud continuum that will continue to bring businesses one step closer to innovation, aiding them in delivering truly transformative services and experiences.

Read the full story here:  https://www.accenture.com/ae-en/case-studies/applied-intelligence/expo-2020-dubai

Hybrid Cloud, Infrastructure Management, Multi Cloud

Kyndryl claims to be the world’s largest IT infrastructure provider. A division of IBM until November 2021, it is now a separate company. Initially, little changed for customers — except perhaps the logo on their invoice — but with time, Kyndryl is taking advantage of its freedom from IBM to introduce new services and work with new partners.

What does Kyndryl do?

Essentially, Kyndryl does exactly what the managed infrastructure services unit of IBM’s Global Technology Services segment did: outsource the management of enterprises’ IT infrastructure, whether it came from IBM or another vendor.

Under IBM’s stewardship, the activities since moved to Kyndryl were in slow decline, from $21.8 billion in annual revenue in 2018 down 7% to $20.28 billion in 2019, and down 4.6% to $19.35 billion in 2020, according to IBM filings with the SEC. That hasn’t changed since the split: Kyndryl’s first full-year filing as an independent company, barely two months after the separation, showed 2021 revenue down a further 4%, to $18.66 billion. The decline continued into 2022, with first quarter revenue down 7% year on year, and the second quarter down 10%.

However, Kyndryl is beginning to develop new services, and is forming partnerships in a bid to grow its revenue. It estimates that the $415 billion market opportunity it addresses is growing at 7% a year, with some areas it is targeting (including security, intelligent automation and public cloud managed services) growing even faster.

Kyndryl has organized itself into six global managed services practices, each of which manages a different aspect of technology. These are:

Applications, data and AI
Cloud
Core enterprise and zCloud, IBM’s mainframe-as-a-service offering
Digital workplace
Network and edge
Security and resiliency

There is also a customer advisory practice that combines managed services, advisory services, and implementation.

In September 2022, Kyndryl also launched two new branded services, Bridge and Vital. The company calls Kyndryl Bridge an open integration platform: an operational monitoring system, somewhat like HPE GreenLake or VMware vCenter, that Kyndryl staff connect to an enterprise’s existing IT infrastructure to help CIOs keep ahead of problems. Kyndryl Vital is essentially a design workshop, during which Kyndryl consultants work alongside an enterprise’s employees to prototype applications.

Who are Kyndryl’s partners?

At the moment of their split, Kyndryl and IBM were one another’s biggest suppliers, and that will remain true for the time being. But Kyndryl is free to independently explore, with no preference for IBM’s software and services.

Kyndryl named Microsoft its first cloud infrastructure partner in November 2021, announcing a similar partnership with Google the following month. But it took until February 2022 to form a pact with Amazon Web Services.

IBM had partnerships with numerous software providers, and Kyndryl inherited or expanded some of those, including with Elastic, Lenovo, SAP, ServiceNow, and VMware.

Kyndryl has also formed new partnerships, including with Cisco Systems, Citrix, Cloudera, Dynatrace, EY, Field Safe Solutions, NetApp, Nokia, Oracle, Pure Storage, IBM subsidiary Red Hat, and Veritas Technologies. These partnerships expand Kyndryl’s repertoire when it comes to integrating products and services into Bridge, or incorporating them into co-creations with Vital.

How big is Kyndryl?

Kyndryl started with 4,600 customers (including 75 of the Fortune 100), over a quarter of IBM’s 350,000 staff, activities generating around $19 billion in annual revenue, and an order backlog (or long-term maintenance contracts from all those customers) of around $62 billion. Where that puts Kyndryl in the rankings depends on what you’re measuring. Kyndryl says it’s the world’s largest IT infrastructure provider, although IT channel publication CRN ranks it only fifth among solutions providers, a much broader category, behind Accenture, what’s left of IBM, DXC Technology, and Tata Consultancy Services.

Is Kyndryl hiring?

Like crazy! Kyndryl hired over a dozen top executives in 2021, and by the end of the year had 88,683 employees. Although its hiring in the US has slowed, it had 1,141 lower-level job openings posted at press time, over half of them in the EU, with other significant concentrations in India and Japan. Half the openings are for technical specialists, with more than 100 openings in systems architecture and an emphasis on automation.

Who works at Kyndryl?

Most staff at Kyndryl simply changed email addresses, carrying on doing the same work for clients as they did at IBM before the split. Indeed, Kyndryl went out of its way to reassure customers that their key points of contact and support, and the other team members they work with, would not change, and that the company continues to work with experts in other divisions of IBM as it did before.

But the company brought in new blood for many of the most senior roles, either hiring in from other companies, or poaching from other divisions of IBM. CEO Martin Schroeter is ex-IBM, in fact. He left the company in June 2020, before the spin-off was announced, and came back to lead Kyndryl, then known as NewCo, in January 2021. He was previously SVP of global markets at IBM, and before that its CFO.

The next senior appointments, in March 2021, were chief marketing officer Maria Bartolome Winans, who came to the spin-off directly from her role as CMO for IBM Americas, and group president Elly Keinan, another former IBMer who took time out to work in venture capital after 33 years at the company.

Global head of corporate affairs Una Pulizzi was also a new hire in April 2021, previously in a similar role at GE, while general counsel Edward Sebold was chief legal officer for IBM’s Watson Health division.

Poaching of more senior IBMers continued in early May 2021. Chief transformation officer Nelly Akoth was previously with IBM Global Business Services; Leigh Price moved from one leadership role in strategy and corporate development to another; and Vineet Khurana became controller at Kyndryl after five years in three different CFO roles at IBM. Kyndryl’s global alliances and partnerships leader Stephen Leonard held a number of positions at IBM, most recently as general manager of the Power Systems division.

It wasn’t until the second half of May 2021 that Kyndryl began to name its top technical staff: CIO Michael Bradshaw came from outside IBM, having previously served as CIO at NBCUniversal and as CIO for Mission Systems and Training at Lockheed Martin. CTO Antoine Shagoury is a former CIO of US bank State Street and of stock exchanges in London and the US. Most recently, he worked at strategic advisory partnership Ridge-Lane.

Other senior Kyndryl hires from outside IBM include Vic Bhagat, a former CIO for Verizon Enterprise Solutions, EMC, and several units of GE, who heads its customer advisory practice, and COO Harsh Chugh, most recently CFO at SaaS provider PlanSource.

Who is on Kyndryl’s board?

To provide the new company with more stability, Kyndryl’s board of directors will serve overlapping three-year terms through 2027, so it’ll take at least two elections for an outside group to take control of the board.

Kyndryl’s first 10 directors are:

CEO Martin Schroeter, board chairman
Stephen Hester, lead independent director. He was CEO of RSA Insurance Group until June 2021, and is chairman of easyJet
Dominic Caruso, retired Johnson & Johnson CFO
John Harris, former VP of business development for Raytheon and board member at Cisco Systems
Shirley Ann Jackson, president of Rensselaer Polytechnic Institute
Janina Kugel, former CHRO and member of the managing board of German industrial conglomerate Siemens
Denis Machuel, CEO of temporary staffing firm Adecco
Rahul Merchant, former head of technology at retirement fund TIAA, Fannie Mae, and Merrill Lynch, and current board member at Convergint Technologies, Global Cloud Exchange, Juniper Networks, and Emulex
Jana Schreuder, retired COO of Northern Trust and current board member at Entrust Datacard and Blucora
Howard Ungerleider, president and CFO of commodity chemicals company Dow

What does Kyndryl’s split mean for IBM?

IBM is still one of the biggest technology businesses in the world. Its separation from Kyndryl freed it from a legacy business that wasn’t growing, and enabled it to reorganize into three main operating segments now called Software, Consulting (formerly Global Business Services), and Infrastructure. It’s doing well post-split: For the full year 2021, revenue from Software rose 5.3% to $24.1 billion, and Consulting made $17.8 billion, up 9.8%, although revenue from Infrastructure, the segment Kyndryl was spun out of, fell 2.4% to $14.2 billion. Those trends, both positive and negative, continued through the first half of 2022.

Customer needs for application services and infrastructure services are diverging, and so spinning off Kyndryl will allow IBM to focus on growing its open hybrid cloud platform and AI capabilities, IBM CEO Arvind Krishna said in October 2020. The split turns IBM from a services-led company to one making more than half its revenue from software and solutions.

But until that growth takes hold, Kyndryl and IBM remain close, as they began their separate lives as one another’s largest customers.

IBM, IBM Global Services, Managed IT Services, Managed Service Providers, Outsourcing, Technology Industry

IT leaders in EMEA are increasingly leading digital transformation efforts, with 84% of CIOs saying they’re responsible for these efforts, according to the Foundry 2022 State of the CIO study.

But significant challenges remain. Their most pressing issue: the need for technology integration/implementation skills to support digital business initiatives, according to the survey. Many organizations are also struggling to modernize their IT architecture to accommodate digitization. The talent gap affects these efforts, as do a lack of strategy and little sense of urgency.

Yet, there is risk in sidelining these issues and not moving quickly. Competitors may overcome challenges and gain an advantage, for example. Fast-acting companies may be winning the race to envision and realize their digital transformation objectives. That’s why CEOs are urging their IT leaders to upgrade IT and push digital initiatives forward.

How to avoid stagnation

It’s time for enterprises to take a fresh look at their transformation projects and set a new tone to accelerate their digital initiatives. That should start with examining their IT architecture and asking whether/how it enables the business to:

Use cloud services to achieve improved business outcomes such as efficiency, cost optimization, and speed to market;

Pivot to new ways of working, as needed. Evolving market forces combined with the hybrid workplace require business agility;

Modernize by adopting new technologies. Automation, artificial intelligence, and 5G require an open, flexible IT infrastructure.

In other words, does your IT infrastructure allow for easy integration of data sources to speed business decision-making? Does it allow for easy application migration to the cloud? Do your IT teams have automated processes and platforms to ease application modernization and development?

For example, the City of Madrid was facing an urgent need to deliver digital services to its citizens. However, it was challenged by the limits of its IT architecture, complex data regulation requirements, and the ever-increasing need to improve cybersecurity efforts.

Working with Kyndryl, the City deployed a hybrid cloud IT architecture, including a new data center with backup and disaster recovery capabilities. The results:

Accelerated digital services, with enhanced security measures, for citizens
Faster processing of large data volumes
Enhanced data protection
An optimized IT infrastructure with room to scale and grow as needed

The next step

Organizations facing skillset and strategy challenges can benefit from working with an expert managed services and implementation partner. Kyndryl consults with its customers to understand existing resources and business needs, then helps define and chart the digital transformation journey.

For example, Kyndryl’s advisory and implementation services have helped customers unlock business value with a pragmatic, building-block approach. Its integrated portfolio of solutions and services addresses use cases ranging from cloud to the digital workplace and from security to the network and edge. And Kyndryl’s infrastructure practice has deep expertise in designing, building, and implementing all types of IT environments to accelerate digital outcomes.

Accelerate your digital transformation by starting here.

Cloud Management

A growing number of organizations are looking to the cloud and cloud services to help evolve and accelerate their digital transformation plans. But why is this? What benefits do cloud services offer over self-hosted and self-managed infrastructure?

1. Faster time to market

As digital transformation is rapidly altering the competitive landscape across organizations and industries, customers expect more, better, and faster applications and services. Organizations have to be agile and respond to new requirements and opportunities as quickly as possible. Time to market is increasingly of the essence.

Cloud services can help boost developer velocity and allow you to deliver new innovative applications and services faster. Through instantly available self-serve setup options, developers are able to dive into new projects immediately, as well as build, debug and deploy the resulting applications more quickly and more frequently.

2. Reduces operational complexity

Working with a trusted cloud service provider can help reduce operational complexity by avoiding the need to build, manage and maintain your own IT infrastructure. Additionally, instead of dedicating in-house resources to installing, configuring, updating, maintaining, and managing that infrastructure, your IT teams are able to focus on more innovative and higher-value projects.

3. Fills skills gaps

With an increasing number of digital transformation leaders moving to a cloud strategy, engineering and IT teams often find themselves facing a skills gap that can be costly and time consuming to fill, and can draw resources away from more strategic projects.

Cloud services quickly fill skills gaps for teams that don’t have the time, knowledge, or experience to manage those functions on their own. And even if those skills aren’t lacking, internal teams may be better off focusing on innovation and speed instead of spending time on the rote, day-to-day tasks of building and maintaining their own platform or infrastructure.

4. Ability to focus on core competencies

Managed cloud services allow your teams to take advantage of fully hosted, integrated tools to adopt modern development methods that streamline the process of creating, testing, deploying, and iterating on new application projects.

This not only increases developer velocity and effectiveness, it has also been shown to improve employee satisfaction, helping organizations retain skilled developer and IT talent.

5. On-demand scalability

Your organization’s technology needs are constantly evolving as new applications and services are created, user demand increases, and new tools, technologies and processes emerge.

Managed cloud services are able to scale up as needed, increasing available compute resources, storage and bandwidth to meet consumer demand, day or night. Similarly, those resources can be scaled back when they aren’t needed to reduce costs.

Additionally, with cloud services, your organization doesn’t have to build or maintain its own development infrastructure, technologies and tools at all, eliminating the need for (and delays caused by) complicated and inefficient infrastructure upgrades when increased capacity is required.
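For illustration, here is a minimal, hypothetical sketch of the kind of scale-up and scale-down decision a managed cloud service automates on your behalf. The target utilization, bounds, and function name are assumptions for illustration; the formula mirrors a common utilization-based autoscaling rule rather than any specific provider’s implementation.

```python
# Hypothetical sketch of a utilization-based autoscaling rule.
def desired_instances(current: int, cpu_utilization: float,
                      target: float = 0.60, min_n: int = 2, max_n: int = 20) -> int:
    """Scale the instance count so average CPU utilization approaches the target."""
    if cpu_utilization <= 0:
        return min_n
    desired = round(current * (cpu_utilization / target))
    return max(min_n, min(max_n, desired))

# Example: demand spikes, then drops off overnight.
print(desired_instances(current=4, cpu_utilization=0.90))  # -> 6 (scale up to meet demand)
print(desired_instances(current=6, cpu_utilization=0.20))  # -> 2 (scale back to reduce cost)
```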

6. Improved reliability and performance

Since cloud service providers are responsible for service availability and service level agreements (SLAs), you can worry less about potential outages or other disruptions. Regardless of how or when issues may occur, the service provider has experts on duty to deal with any problems that arise.

Working with a managed service provider also provides the opportunity to simplify disaster recovery efforts. Since most providers have multiple regions, it’s possible to architect systems and applications to have a disaster recovery plan that fails over to a different region, or even another cloud provider. Site Reliability Engineering (SRE) and security teams are able to respond quickly, and have reliable and tested data backups and recovery plans in place.

7. Reduced security risks

The cost of cybersecurity incidents is not just measured in downtime and dollars, but also in terms of their impact on an organization’s reputation. With security threats seemingly on a continuous upswing, making sure your network, data, and applications are as secure as possible is both important and difficult.

Many cloud service providers employ security professionals. These experts help to take care of some of the security infrastructure for customers, providing features such as strong identity management, fine-grained access control, proper data encryption, advanced threat monitoring and detection, clearly defined security response plans, and more.

8. A developer-first experience

Cloud services can help provide a developer-first experience. Developers are able to choose the languages and tools they work with, while being freed from the day-to-day drudgery of managing their own infrastructure and application development platform.

Developers can also make use of the cloud services on a self-serve basis, so they can kick off the process of developing, building, testing, and deploying new applications and experiments without IT or other teams’ involvement.

9. Cost reductions

Cloud services reduce the need to build and maintain your own in-house IT infrastructure. And with the ability to scale up and down on demand, cloud services optimize your costs so you pay for only what you use, when you use it. Your cloud service provider’s SLA also shields you from unexpected costs and expensive service outages.

Staff and training are also significant expenses, of course, particularly when you’re talking about highly specialized roles such as SREs, senior IT staff, and experienced security professionals.

With the increasing demand for top tech talent, staffing a full-time, in-house IT department is growing more expensive by the day. Cloud services reduce the need for you to hire these roles yourself, letting you use your staffing and training budgets more strategically.

Red Hat Cloud Services

Red Hat Cloud Services include hosted and managed platform, application, and data services that accelerate time to value and reduce the operational cost and complexity of delivering cloud-native applications. Organizations can confidently build and scale applications with a streamlined experience across services and across clouds while Red Hat manages the rest.

Red Hat’s cloud services comprise a platform for developing, deploying, and scaling cloud-native applications in open hybrid-cloud environments. The combination of enterprise-grade Kubernetes, cloud-native approach to application delivery, and managed operations allows enterprise development teams to increase application velocity and focus on core competencies.

To learn more about Red Hat Cloud Services, visit us here.

Cloud Computing

One type of infrastructure that has gained popularity is hyperconverged infrastructure (HCI). Interest in HCI and other hybrid technologies such as Azure Arc is growing as enterprise organizations embrace hybrid and multi-cloud environments as part of their digital transformation initiatives. Survey data from IDC shows broad HCI adoption among enterprises of all sizes, with more than 80% of the organizations surveyed planning to move toward HCI for their core infrastructure going forward.

“Hyperconverged infrastructure has matured considerably in the past decade, giving enterprises a chance to simplify the way they deploy, manage, and maintain IT infrastructure,” Carol Sliwa, Research Director with IDC’s Infrastructure Platforms and Technologies Group, said on a recent webinar sponsored by Microsoft and Intel.

“Enterprises need to simplify deployment and management to stay agile to gain greater business benefit from the data they’re collecting,” Sliwa said. “They also need infrastructure that can deploy flexibly and unify management across hybrid cloud environments. Software-defined HCI is well suited to meet their hybrid cloud needs.”

IDC research shows that most enterprises currently use HCI in core data centers and co-location sites, often for mission-critical workloads. Sliwa also expects usage to grow in edge locations as enterprises modernize their IT infrastructure to simplify deployment, management, and maintenance of new IoT, analytics, and business applications.

Sliwa was joined on the webinar by speakers from Microsoft and Intel, who discussed the benefits of HCI for managing and optimizing both hybrid/multi-cloud and edge computing environments.

Jeff Woolsey, Principal Program Manager for Azure Edge & Platform at Microsoft, explained how Microsoft’s Azure Stack HCI and Azure Arc enable consistent cloud management across cloud and on-premises environments.

“Azure Stack HCI provides central monitoring and comprehensive configuration management, built into the box, so that your cloud and on-premises HCI infrastructure are the same,” Woolsey said. “That ultimately means lower OPEX because instead of training and retraining on bespoke solutions, you’re using and managing the same solution across cloud and on-prem.”

Azure Arc provides a bridge for the Azure ecosystem of services and applications to run on a variety of hardware and IoT devices across Azure, multi-cloud, data centers, and edge environments, Woolsey said. The service provides a consistent and flexible development, operations, and security model for both new and existing applications, allowing customers “to innovate anywhere,” he added.

Christine McMonigal, Director of Hyperconverged Marketing at Intel, explained how the Intel-Microsoft partnership has resulted in consistent, secure, end-to-end infrastructure that delivers a number of price/performance benefits to customers.

“We see how customers are demanding a more scalable and flexible compute infrastructure to support their increasing and changing workload demands,” said McMonigal. “Our Intel Select Solutions for Microsoft Azure Stack HCI have optimized configurations for the edge and for the data center. These reduce your time to evaluate, select, and purchase, streamlining the time to deploy new infrastructure.”

For more information on how HCI use is growing for mission-critical workloads, read the IDC Spotlight paper.

Edge Computing, Hybrid Cloud

Elaborating on some points from my previous post on building innovation ecosystems, here’s a look at how digital twins, which serve as a bridge between the physical and digital domains, rely on historical and real-time data, as well as machine learning models, to provide a virtual representation of physical objects, processes, and systems.

Keith Bentley of software developer Bentley Systems describes digital twins as the biggest opportunity for IT value contribution to the physical infrastructure industry since the personal computer. Digital twins are used in a wide variety of industries, lending enterprises insights into maintenance and ways to optimize manufacturing supply chains.

By 2026, the global digital twin market is expected to reach $48.2 billion, according to a report by MarketsAndMarkets.com, and the infrastructure and architectural engineering and construction (AEC) industries are integral to this growth. Everything from buildings, bridges, and parking structures, to water and sewer lines, roadways and entire cities are ripe for reaping the value of digital twins.

Here’s a look at how digital twins are disrupting the status quo in the infrastructure industry — and why IT and innovation leaders at infrastructure and AEC enterprises would be wise to capitalize on them.

Redrafting the business model

For decades in the AEC industry, work has been performed on a project-by-project basis using computer-aided design (CAD) and more recently building information modeling (BIM) software to create specific 2D and 3D deliverables. The industry is now moving toward integrated suites of tools and industry clouds, which open the door to new business models, industry ecosystems, and more collaborative ways of working.

As the use of digital twins advances, new possibilities for annuity revenues are opening up as well for AEC firms to manage and maintain infrastructural digital twins for their clients.

These new business models are disrupting the infrastructure industry and reconfiguring opportunities as the industry adjusts to new ways of working. Digital twins will likely do for the infrastructure space what various platform models have already done for music, books, retail, and gig economy services.

Due to the cloud-based, platform business model, possibilities will open up not only for operations and maintenance services around core digital twin models, but for value-added digital services wrapped around these twins such as visualization, collaboration, physical and cybersecurity, data analytics, and AI-enabled preventative maintenance.

Plus, infrastructure developers can partner with digital twin providers and the surrounding ecosystem of service providers to benefit from the sale of the physical asset as well as the provisioning of ongoing digital services via digital twin models. Over time, these subscription-based services could add a significant amount to the original sale price. For example, a real estate project of 100,000 square feet could net $1 million in add-on revenues over five years from digital twin-related services, and nearly 80% of an asset’s lifetime value is realized in operations.

Digital twin use cases and ROI

The full suite of digital twin use cases encompasses many areas, but one of the largest is in helping infrastructure become more efficient, resilient, and sustainable. With 70% of the world’s carbon emissions having some link to the way infrastructure is planned, designed, built, or operated, digital twins can help with visibility and insights for real-time decisions. Using our earlier example, if a 100,000-square-foot building has $200,000 in annual maintenance costs, the digital twin may save 25% of that and add additional value of $160,000 in terms of environmental, security, and usability benefits like booking of meeting rooms, space utilization analytics, and process visibility.
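To make that arithmetic explicit, here is a quick back-of-the-envelope sketch using the illustrative figures above; the numbers come from the example in this article, not from a formal ROI model.

```python
# Back-of-the-envelope sketch of the digital twin value cited above
# for a 100,000 sq ft building (illustrative figures only).
annual_maintenance = 200_000                        # USD per year
maintenance_savings = 0.25 * annual_maintenance     # digital twin may save ~25%
added_value = 160_000                               # environmental, security, usability benefits

annual_benefit = maintenance_savings + added_value
print(f"Annual benefit: ${annual_benefit:,.0f}")    # $210,000 per year

# Add-on digital services from the earlier example: $1M over five years
services_revenue_per_year = 1_000_000 / 5
print(f"Digital services revenue: ${services_revenue_per_year:,.0f} per year")  # $200,000
```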

Another use case relates to worker safety. Bridge inspectors, for instance, often still suspend themselves from ropes, but with drone-based bridge inspections, such as those by Manam that capture photogrammetry used to assemble a 3D digital twin, they can now move much of the inspection process into the office. This saves time and greatly reduces injury risk. With each US state often having tens of thousands of bridges to inspect, the ROI for state Departments of Transportation becomes highly significant. Inspectors still need to go out into the field with tools, but the 3D model provides an additional technique for rapid visual inspection, detailed analysis, and even AI-detected defects.

And from a security perspective, a digital twin for the Capital One Arena in Washington D.C., for instance, acts as a proving ground for the latest innovations in intelligent building sensor suites to help first responders rapidly prioritize search and rescue areas when emergencies occur.

A real-time system of record

By addressing the full lifecycle from construction to operations and maintenance, infrastructure digital twins provide a system of record and a single source of truth for all parties involved. The former BIM approach was the system of record during the plan, design, and build phases of a project, but it typically stopped once delivery was made to building operators.

As a living system of record, the digital twin merges the visual and geometric representation of the asset, process, or system with the engineering data, IT data, and operational data (such as IoT and SCADA) all in a real-time representation of the physical asset.
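As a rough illustration of what such a living record can look like, here is a minimal, hypothetical sketch of a digital twin data structure that merges a link to the geometric model with engineering, IT, and operational data. The class and field names are assumptions for illustration, not any vendor’s schema.

```python
# Hypothetical sketch of a digital twin as a living system of record.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Any, Dict, List, Optional

@dataclass
class SensorReading:
    sensor_id: str        # e.g., an IoT or SCADA tag
    value: float
    unit: str
    timestamp: datetime

@dataclass
class DigitalTwin:
    asset_id: str
    geometry_uri: str                   # link to the 3D/BIM model
    engineering_data: Dict[str, Any]    # design specs, materials, tolerances
    it_data: Dict[str, Any]             # asset records, work orders
    telemetry: List[SensorReading] = field(default_factory=list)

    def ingest(self, reading: SensorReading) -> None:
        """Keep the twin current with real-time operational data."""
        self.telemetry.append(reading)

    def latest(self, sensor_id: str) -> Optional[SensorReading]:
        """Most recent reading for a sensor, e.g., for dashboards or alerts."""
        matches = [r for r in self.telemetry if r.sensor_id == sensor_id]
        return max(matches, key=lambda r: r.timestamp) if matches else None

# Example: a pump in a water treatment plant reports vibration data to its twin.
twin = DigitalTwin("pump-07", "https://example.com/models/pump-07.ifc",
                   {"rated_flow_m3h": 120}, {"last_work_order": "WO-2210"})
twin.ingest(SensorReading("vib-01", 4.2, "mm/s", datetime.utcnow()))
print(twin.latest("vib-01"))
```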

Without digital twins, architects often have no visibility into the operational side of their designs, something that could be valuable for feedback and continuous improvement in order to modify and refine designs over time.

For owners and operators, the digital twin provides an up-to-date virtual model they can view anytime from anywhere. They also have visibility into how these assets are performing including past, present, and future indicators.

Visualization and the metaverse

For complex systems such as buildings, visualization — including renderings, videos, and AR/VR/XR — is an indispensable element to clearly unlock the benefits of digital twins by communicating plans and ideas. AR inspection in particular helps site managers immediately flag mistakes for time and cost savings. They can also scan QR codes onsite to inspect the digital twin data associated with any physical equipment in the facility, such as HVAC systems or mechanical, electrical, and plumbing (MEP) equipment. And in VR mode, they can perform remote inspections of all data layers built into the digital twin model via fly throughs.

“We’ve seen an uptake in live digital twins in recent months,” says Martin Rapos, CEO of 3D BIM developer Akular. “In addition to the master integration of building data to break IoT and other building systems silos, there’s increased need for advanced visualization, where the data needs to be geolocated and accurately tagged on 2D or 3D files. The use of VR, MR and mobile devices in working with the digital twin is on the rise as well, allowing builders and asset operators to bring the digital twin from the office to the site, which is what the industry has been trying to achieve for years.”

As also discussed in my previous post, integrating visualization tools and capabilities into digital twin solutions is key to the technology stack and overall ecosystem so customers can better visualize and collaborate around design or operational decisions regarding their physical assets. Compared to other industries, infrastructure has been slow to digitally transform. But over the next two years, the shift to digital twins will likely move to early mainstream and propel the industry forward, so CIOs and executives working in the industry should watch these developments closely and structure their own digital twin strategies for how best to unlock their potential.

Digital Transformation, Infrastructure Management

Many companies that begin their AI projects in the cloud often reach a point when cost and time variables become issues. That’s typically due to the exponential growth in dataset size and complexity of AI models.

“In an early phase, you might submit a job to the cloud where a training run would execute and the AI model would converge quickly,” says Tony Paikeday, senior director of AI systems at NVIDIA. “But as models and datasets grow, there’s a stifling effect associated with the escalating compute cost and time. Developers find that a training job now takes many hours or even days, and in the case of some language models, it could take many weeks. What used to be fast, iterative model prototyping, grinds to a halt and creative exploration starts to get stifled.”

This inflection point related to the increasing amount of time needed for AI model training — as well as increasing costs around data gravity and compute cycles — spurs many companies to adopt a hybridized approach and move their AI projects from the cloud back to an on-premises infrastructure or one that’s colocated with their data lake.

But there’s an additional trap that many companies might encounter. Paikeday says it occurs if they choose to build such infrastructure themselves or repurpose existing IT infrastructure instead of going to a purpose-built architecture designed specifically for AI.

“The IT team might say, ‘We have lots of servers, let’s just configure them with GPUs and throw these jobs at them’,” he says. “But then they realize it’s not the same as a system that is designed specifically to train AI models at scale, across a cluster that’s optimized to deliver results in minutes instead of weeks.”

With AI development, companies need fast ROI, by ensuring data scientists are working on the right things. “You’re paying a lot of money for data-science talent,” Paikeday says. “The more time they spend not doing data science — like waiting on a training run, troubleshooting software, or talking to network, storage or server vendors to solve an issue — that’s lost money and a lot of sweat equity that has nothing to do with creating models that deliver business value.”

That’s a significant benefit of a purpose-built appliance for AI models that can be installed on premises or in a colocation facility. For example, NVIDIA’s DGX A100 is meant to be unpacked, plugged in, and powered up, enabling data scientists to be productive within hours instead of weeks. The DGX system offers companies five key benefits to scale AI development:

A hardware design that is optimized for AI, along with parallelism throughout the architecture to efficiently distribute computational work across all the GPUs and DGX systems connected together (see the sketch after this list). It’s not just a system; it’s an infrastructure that scales to any size problem.
A field-proven, fully integrated AI software stack including drivers, libraries, and AI frameworks that are optimized to work seamlessly together.
A turnkey, integrated data center solution that companies can buy from their favorite value-added reseller, bringing together compute, storage, networking, software, and consultants to get things up and running quickly.
The DGX system is a platform, not just a box, from a company that specializes in AI and has already created state-of-the-art models, including natural language processing, recommender systems, autonomous systems, and more — all of which are continually being improved by the NVIDIA team and made available to every DGX customer.
“DGXperts” bring AI fluency and know-how, giving guidance on the best way to build a model, solve a challenge, or just assist a customer that is working on an AI project.
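For reference, the data-parallel pattern described in the first item above looks roughly like the following minimal, hypothetical sketch using PyTorch’s DistributedDataParallel. It illustrates the general technique of distributing a training job across multiple GPUs; it is not NVIDIA’s DGX software stack, and the model and data are stand-ins.

```python
# Minimal sketch of data-parallel training across GPUs with PyTorch DDP.
# Launch with: torchrun --nproc_per_node=<num_gpus> train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")       # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 10).cuda(local_rank)   # stand-in model
    model = DDP(model, device_ids=[local_rank])          # synchronizes gradients across GPUs
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    for step in range(100):                              # stand-in training loop
        x = torch.randn(32, 1024, device=local_rank)
        y = torch.randint(0, 10, (32,), device=local_rank)
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()                                  # gradient all-reduce happens here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```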

When it’s time to move an AI project from exploration to a production application, the right choice can speed and scale the ROI of your AI investment.

Discover how NVIDIA DGX A100, powered by NVIDIA A100 Tensor Core GPUs and AMD EPYC CPUs, meets the unique demands of AI.

Artificial Intelligence, IT Leadership

Fueled by enterprise demand for data analytics, machine learning, data center consolidation and cloud-native app development, spending on cloud infrastructure services jumped 33% year on year to $62.3 billion in the second quarter, according to Canalys.

The Singapore-based market research firm said its latest cloud spending research, released Tuesday, shows that demand for cloud services remains strong despite a global economy suffering from inflation, rising interest rates and recession.

Google, Microsoft, and Amazon collectively accounted for almost two of every three dollars spent on cloud infrastructure around the world last quarter, Canalys noted. The firm defines cloud infrastructure services as those that provide IaaS (infrastructure-as-a-service) and PaaS (platform-as-a-service), either via private or public hosting environments. It excludes direct sales of SaaS (software-as-a-service) applications, but includes revenue from the infrastructure services used to host and operate them.

According to Canalys’ figures, AWS alone accounted for about a third of global cloud infrastructure revenue in the second quarter of 2022, or $19.3 billion out of $62.3 billion overall, representing a 33% year-on-year increase for Amazon. Azure was in second place with 24% of the market after 40% annual growth, and Google Cloud took third, its 45% growth accounting for 8% of total market share.
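As a quick sanity check of those figures, a simple sketch using the numbers as reported shows how they add up to the roughly two-thirds share cited above.

```python
# Reported Q2 2022 figures (USD billions and market shares as cited above).
total_spend = 62.3
aws_revenue = 19.3

aws_share = aws_revenue / total_spend
azure_share = 0.24
google_share = 0.08

print(f"AWS share: {aws_share:.0%}")                                         # ~31%, roughly a third
print(f"Top three combined: {aws_share + azure_share + google_share:.0%}")   # ~63%, almost two in three dollars
```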

Azure’s growth rate means that Microsoft has continued to close in on Amazon for primacy in this market, according to Canalys. A vice president at the research firm, Alex Smith, said that Microsoft’s record number of major deals in the $100 million and $1 billion ranges is the product of a wide product portfolio and tight integration with software partners.

“While opportunities abound for providers large and small, the interesting battle remains right at the top between AWS and Microsoft,” he said in a statement announcing Canalys’ results. “The race to invest in infrastructure to keep pace with demand will be intense and test the nerves of the companies’ CFOs as both inflation and rising interest rates create cost headwinds.”

Cloud providers build out infrastructure

Despite those headwinds, however, both Amazon and Microsoft have continued to aggressively build out capacity, according to the researchers: Microsoft has announced 10 new cloud regions to become available in the next year, and Amazon has announced eight, divided into 24 new availability zones, in the same time frame.

According to Canalys research analyst Yi Zhang, demand is likely to continue to increase as companies move more and more core parts of their infrastructure into the cloud. “Most companies have gone beyond the initial step of moving a portion of their workloads to the cloud and are looking at migrating key services,” he said in a statement. “The top cloud vendors are accelerating their partnerships with a variety of software companies to demonstrate a differentiated value proposition. Recently, Microsoft pointed to expanded services to migrate more Oracle workloads to Azure, which in turn are connected to databases running in Oracle Cloud.”

Cloud Computing, Technology Industry