If you’ve been reading a lot about quantum computing recently, you likely have a few questions.

Some of those questions may be about how quantum computing works. After all, it is very different from other kinds of computing. (You can learn a little about the basics in the recent CIO article Are you ready for quantum computing?)

You probably have one other very important question: What can quantum computing do for my business?

Until recently, most of the conversation about quantum computing has been academic. Researchers have been focused on getting the technology to work and engineers have been building systems with more qubits.

Now the emphasis is starting to shift to actual use cases as organizations take a closer look at quantum.

In a recent conversation, Victor Fong, Distinguished Engineer at Dell, and Michael Robillard, Sr. Distinguished Engineer at Dell, offered their thoughts on what quantum computing can do for businesses. The short answer is that quantum acts as an accelerator, allowing computers to complete some kinds of processing much more quickly than has ever been possible before.

To understand what that means, Fong and Robillard recommend that organizations get started by examining the technology. They laid out three steps they believe companies should be taking today:

1. Discover the potential of quantum computing

You may not have quantum computing experts on your staff today. That’s okay, because your competitors almost certainly don’t have any quantum experts either. Only a small group of people currently qualify as true experts in quantum computing.

Fortunately, you don’t have to be an expert to get started.

The first stage of preparing your organization for quantum computing is to do some foundational research. Look up some introductory guides. Read some articles. If you don’t know where to begin, Dell has a Quantum Computing Resource Center with white papers, analyst reports and recent news about quantum.

Be prepared to be a little confused at first.

Quantum computing is fundamentally different from the classical computers you use every day. Quantum computers rely on the principles of quantum mechanics, including entanglement, which Albert Einstein once dismissed as “spooky action at a distance.” Quantum computing might seem strange — maybe even spooky — at first. But it operates by some basic rules that you can understand.

The computer hardware is also quite a bit different than what you’re used to. Quantum computers store information in qubits. Qubits operate at atomic scale, which means they are very, very small. They are also quite delicate. Small changes in the environment, referred to as “noise,” can easily disrupt the system enough that it cannot function as intended.

Developing and deploying systems built around these minuscule, sensitive parts is both difficult and expensive. But engineers are also developing simulators that mimic how quantum computers work. These simulators can be used to experiment with quantum.

2. Identify some quantum use cases for your organization

Once you understand the basics of quantum computing, you’ll be ready to start brainstorming ways that your organization can use the technology. 

Not every computing problem is well suited to quantum processing. You wouldn’t use a quantum computer for a routine, exact calculation with a single straightforward answer. For example, you shouldn’t use a quantum computer to calculate your tax bill or process your payroll.

On the other hand, quantum computers can be very good at solving optimization problems. If you need to choose the best answer from a group of possible right answers, quantum computing may be ideal.
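To get a feel for why these optimization problems explode in difficulty, consider a toy routing example. The short Python sketch below uses made-up distances, purely for illustration: it brute-forces the best ordering of three delivery stops, then shows how quickly the number of possible orderings grows. That combinatorial explosion is exactly the kind of search space where quantum optimization approaches are being explored.

```python
from itertools import permutations
from math import factorial

# Hypothetical distances (in km) between a depot and three delivery stops.
distances = {
    ("depot", "A"): 12, ("depot", "B"): 18, ("depot", "C"): 9,
    ("A", "B"): 7, ("A", "C"): 14, ("B", "C"): 11,
}

def leg(a, b):
    """Look up the distance between two points in either direction."""
    return distances.get((a, b), distances.get((b, a)))

def route_length(stops):
    """Total length of a round trip that starts and ends at the depot."""
    path = ("depot",) + stops + ("depot",)
    return sum(leg(a, b) for a, b in zip(path, path[1:]))

# A classical computer can simply try every ordering of three stops...
best = min(permutations(["A", "B", "C"]), key=route_length)
print("Best route:", best, "->", route_length(best), "km")

# ...but the number of orderings grows factorially with the number of stops,
# which is where exhaustive classical search becomes impractical.
for n in (3, 10, 20):
    print(f"{n} stops -> {factorial(n):,} possible routes")
```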

Some organizations are already experimenting with quantum computing for a variety of use cases:

Logistics and transportation firms are trying quantum computing as a way to find the most fuel-efficient, fastest and safest routes to deliver cargo and passengers while accounting for weather and traffic.

Financial firms want quantum computing to optimize their portfolios and maximize returns while mitigating risk.

Chemical and materials manufacturers seek to harness the power of quantum computing to come up with new formulas and model how a material’s properties will change under various conditions.

Drug companies want quantum computing to develop new treatments and vaccines for debilitating illnesses.

Auto manufacturers are testing quantum computing to help optimize the large batteries necessary for electric vehicles.

Technology companies of all kinds are experimenting to help develop new products and services or optimize those they already offer.

Even if you aren’t in one of these industries, you probably have similar use cases where quantum computing would be helpful. The key is to look for situations that are difficult to model because of a large number of variables. You also want use cases that are intrinsic to your business, where improving operations would have a large impact on your bottom line.

3. Deploy a test case

Believe it or not, it’s not too early to start experimenting with quantum computing.

Anyone can download the open source Qiskit software development kit (SDK) that allows you to write code that will run on quantum systems.

A few vendors already offer access to quantum systems, although using these systems for experimentation can become expensive quickly.

Many people find it more affordable to begin by testing on a simulator. Quantum simulators use classical computing hardware to simulate the operation of quantum systems. They allow engineers to keep costs low while perfecting the code that they want to run on the quantum system. Simulators can also alleviate some data privacy concerns, and they eliminate the previously mentioned problem of quantum noise.
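To make that concrete, here is a minimal Qiskit sketch that runs on a local simulator instead of real hardware. It assumes the qiskit and qiskit-aer packages are installed, and the exact API can vary a little between Qiskit releases, so treat it as an illustration rather than a recipe.

```python
# A minimal Qiskit experiment run on a local simulator rather than real
# quantum hardware. Assumes qiskit and qiskit-aer are installed; exact
# APIs vary slightly between Qiskit versions.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

# Put one qubit into superposition with a Hadamard gate, then measure it.
circuit = QuantumCircuit(1, 1)
circuit.h(0)
circuit.measure(0, 0)

# The simulator mimics a quantum processor on classical hardware,
# so you can iterate on the code cheaply before paying for real qubits.
simulator = AerSimulator()
result = simulator.run(circuit, shots=1000).result()

# Expect roughly half '0' and half '1' outcomes, e.g. {'0': 503, '1': 497}.
print(result.get_counts())
```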

Finding the right tool for the job

Different kinds of computers are a little like the different kinds of saws you might use for woodworking. You can do most kinds of cutting with a standard circular saw. In the same way, a classical computer can do most kinds of calculations.

But some kinds of woodworking — like intricate scrollwork — are almost impossible to do with a circular saw. For that, you would want a jigsaw or even a scroll saw. And while you can do miter cuts with a circular saw, it’s a lot easier with a miter saw. A quantum computer should be thought of as a specialized tool. It won’t ever replace classical computing, but it makes some specialized tasks a whole lot faster and easier.

While engineers have made a lot of progress designing and building quantum computers, we’re still in the early days of the quantum era. Right now, quantum computing isn’t right for a lot of situations.

But as time goes on and the technology improves, quantum computing will become a better choice more often. And organizations that have already begun experimenting with the technology will have a head start.

That’s why now is the time to get started — learn more about quantum computing, identify test cases and begin to experiment.

***

Read more about Dell Technologies Quantum Computing here.

Read more about Intel Quantum Computing here.

Digital Transformation

The retail industry is transforming rapidly. Modern retailers rely heavily on automation for managing inventory, shelf design, customer service, and logistics. Video cameras and sensors that allow for unique store design help to enhance the customer experience. Technology is truly powering retail transformation, setting modern stores apart from traditional brick-and-mortar ones.

It is no easy feat sending all these video streams and sensor data to the cloud for real-time analysis. High bandwidth is required to move heavy data streams. So is low latency for quick data processing and decision making, especially when robotics is involved. 

This is where edge computing and edge-native applications become relevant for retail stores. They allow computing to occur closer to the source of the data, right inside the store. Coupled with a private 5G communication network, edge computing lets retailers deploy cost-effective, high-performing edge-native applications.
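As a rough illustration of that pattern, the sketch below uses plain Python with simulated stand-ins for the camera feed, the vision model and the cloud endpoint (none of these names are a real retail API): the heavy video data stays in the store, and only small, actionable events are sent upstream.

```python
import random
import time

LOW_STOCK_THRESHOLD = 5

def count_items_on_shelf(frame):
    """Stand-in for a local computer-vision model running on the edge node."""
    return random.randint(0, 12)   # simulated shelf count

def send_to_cloud(event):
    """Stand-in for a lightweight message to a hypothetical cloud endpoint."""
    print("-> cloud:", event)

def process_in_store(cycles=10):
    for _ in range(cycles):
        frame = b"...simulated camera frame..."      # heavy data stays in the store
        shelf_count = count_items_on_shelf(frame)    # inference happens at the edge
        # Only a few bytes of actionable insight cross the network,
        # instead of a continuous high-bandwidth video stream.
        if shelf_count < LOW_STOCK_THRESHOLD:
            send_to_cloud({"store": "042", "aisle": 7,
                           "event": "restock_needed", "count": shelf_count,
                           "ts": time.time()})
        time.sleep(0.1)

if __name__ == "__main__":
    process_in_store()
```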

At the same time, companies must maintain secure environments and prevent fraud. According to a recent Microsoft blog, organizations can use security and compliance solutions in Microsoft 365 E5 to have visibility into their threat landscape and leverage built-in AI and machine learning in Microsoft Sentinel and Microsoft Defender for Cloud to proactively manage threats and reduce alert fatigue.

Read the full blog post to learn more.

Cloud Computing, Retail Industry

Private 5G is the next evolution of networking for mission-critical applications used in factories, logistics centers and hospitals. In fact, it suits any environment that needs the reliability, security and speed of a wired connection combined with the movement of people, things and data.

The element of movement is often a factor in Industry 4.0 digital transformation – and that’s where private 5G shines.

Private 5G is deployed as an extension of an organization’s WAN. It’s fast, secure, reliable and has low latency. You can rely on it to transmit data. But if you don’t have a computing resource at the edge where the data is collected to create actionable intelligence in real time, you’re missing out on revolutionary possibilities.

Edge computing brings out the real potential of private 5G

Bringing managed private 5G together with managed edge computing enables businesses to analyze situations in the moment – no more waiting for data to be collected (often a slow process) and sent to a data center for processing first.

In manufacturing, this combined-platform approach quickly delivers the right information to where decisions have to be made: the factory floor. This has implications for everything from an evolutionary increase in productivity and quality to greater flexibility and customization.

Organizations also have to control data sovereignty, ownership and location. Private 5G can protect data by ensuring that all traffic remains on-premises.

While private 5G is a powerful tool, use cases make it exciting

Switching to private 5G helps avoid Wi-Fi access-point proliferation and blind spots in monitoring: asset-based sensors can collect and transmit huge volumes of data quickly, and indoor-positioning accuracy of less than one meter becomes achievable.

It’s also a much simpler exercise to reconfigure connectivity between devices and improve the timing and synchronization of data feeds from sensors.

Last year, Cisco’s Strategic Execution Office ran a study on private 5G in collaboration with Deloitte, titled “Vertical Use Cases Offer Development”, which delves into the main applications of private 5G through use cases.

They found that the highest demand for private 5G is in the manufacturing, logistics and government industries. Their findings match our experience, as these are the sectors in which NTT’s Private 5G and Edge as a Service are most in demand.

Moving from broad themes to specific applications

The study identified four themes: enabling hybrid connectivity; activation and policy setup for varied sensor profiles; advanced intelligence with private 5G and the edge-computing stack; and integrated app and infrastructure to enable business outcomes.

NTT’s experience has taught us that these themes can be translated into five main areas of application:

Group wireless communications (push-to-talk) enable workers to communicate across locations, with real-time location tracking.

Private 5G supports augmented reality and virtual reality, allowing for self-assist, work-assist, and remote-assist capabilities.

Private 5G makes real-time connectivity and control possible for autonomous guided vehicles.

Computer vision for automatic video surveillance, inspection and guidance is faster and more efficient on a private 5G network.

Connected devices can remain reliably and securely connected to the enterprise network throughout the work shift without relying on Wi-Fi or portable hot spots.

Exploring the difference 5G will make in manufacturing

The study also explores how private 5G can optimize assets and processes in manufacturing, assembly, testing, and storage facilities. Private 5G allows for faster and more precise asset tracking, system monitoring, and real-time schedule and process optimization using location and event data from sensors and factory systems.

The research provides two examples of private 5G use cases in factories:

Factory asset intelligence: Traceability from parts to product, with increased sensor enablement across manufacturing, assembly and testing sites.

Dynamic factory scheduling: Closed-loop control and safety applications enabled by real-time actuation, sensor fusion and dynamic process schedules.

As we continue to explore the potential of private 5G, it is clear that this technology has the power to transform the manufacturing industry and pave the way for a more efficient and effective future.

To find out more about the use cases private 5G unlocks and how they can offer business benefits, download NTT’s white paper: Smart manufacturing: accelerating digital transformation with private 5G networks and edge computing.

Edge Computing, Manufacturing Industry, Manufacturing Systems, Private 5G

Many people associate high-performance computing (HPC), also known as supercomputing, with far-reaching government-funded research or consortia-led efforts to map the human genome or to pursue the latest cancer cure. 

But HPC can also be used to advance more traditional business outcomes — from fraud detection and intelligent operations to digital transformation. The challenge: making complex compute-intensive technology accessible for mainstream use.

As companies digitally transform and steer toward becoming data-driven businesses, there is a need for increased computing horsepower to manage and extract business intelligence and drive data-intensive workloads at scale. The rise of artificial intelligence (AI), machine learning (ML), and real-time analytics applications, often deployed at the edge, can utilize HPC resources to unlock insights from data and efficiently run increasingly large and more complex models and simulations. 

The convergence of HPC with AI-based analytics is impacting nearly every industry and across a wide range of applications, including space exploration, drug discovery, financial modeling, automotive design, and systems engineering.

“HPC is becoming a utility in our lives — people aren’t thinking about what it takes to design this tire, validate a chip design, parse and analyze customer preferences, do risk management, or build a 3D structure of the COVID-19 virus,” notes Max Alt, distinguished technologist and director of Hybrid HPC at HPE. “HPC is everywhere, but you don’t think about it, because it’s hidden at the core.”

HPC’s scalable architecture is particularly well suited for AI applications, given the nature of computation required and the unpredictable growth of data associated with these workflows. HPC’s use of graphics-processing-unit (GPU) parallel processing power — coupled with its simultaneous processing of compute, storage, interconnects, and software — raises the bar on AI efficiencies. At the same time, such applications and workflows can operate and scale more readily.

Even with widespread usage, there is more opportunity to leverage HPC for better and faster outcomes and insights. HPC architecture — typically clusters of CPUs and GPUs working in parallel, connected to a high-speed network and data storage system — is expensive, requiring a significant capital investment. HPC workloads also tend to involve vast data sets, which can make public cloud an expensive option once latency and performance requirements are factored in. In addition, data security and data gravity concerns often rule out public cloud.

Another major barrier to more widespread deployment: a lack of in-house specialized expertise and talent. HPC infrastructure is far more complex than traditional IT infrastructure, requiring specialized skills for managing, scheduling, and monitoring workloads. “You have tightly coupled computing with HPC, so all of the servers need to be well synchronized and performing operations in parallel together,” Alt explains. “With HPC, everything needs to be in sync, and if one node goes down, it can fail a large, expensive job. So, you need to make sure there is support for fault tolerance.”
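Checkpointing is one of the standard ways long-running jobs survive a lost node. The toy Python sketch below is only a sketch of that idea (real HPC schedulers and MPI libraries handle this far more robustly): work is processed in chunks and progress is saved periodically, so a rerun resumes from the last checkpoint instead of starting the whole expensive job over.

```python
# A toy illustration of checkpointing, one common fault-tolerance technique
# for long-running jobs. This is a sketch of the concept, not a real
# scheduler integration.
import json
import os

CHECKPOINT = "job_checkpoint.json"

def load_checkpoint():
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"next_chunk": 0, "partial_sum": 0}

def save_checkpoint(state):
    with open(CHECKPOINT, "w") as f:
        json.dump(state, f)

def run_job(total_chunks=1000):
    state = load_checkpoint()              # resume where we left off
    for chunk in range(state["next_chunk"], total_chunks):
        state["partial_sum"] += sum(range(chunk * 100, (chunk + 1) * 100))
        state["next_chunk"] = chunk + 1
        if chunk % 50 == 0:
            save_checkpoint(state)         # periodic checkpoint to shared storage
    save_checkpoint(state)
    return state["partial_sum"]

if __name__ == "__main__":
    # If a node dies mid-run, rerunning the script picks up from the
    # last checkpoint instead of redoing all of the completed work.
    print(run_job())
```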

HPE GreenLake for HPC Is a Game Changer

An as-a-service approach can address many of these challenges and unlock the power of HPC for digital transformation. HPE GreenLake for HPC enables companies to unleash the power of HPC without having to make big up-front investments on their own. This as-a-service-based delivery model enables enterprises to pay for HPC resources based on the capacity they use. At the same time, it provides access to third-party experts who can manage and maintain the environment in a company-owned data center or colocation facility while freeing up internal IT departments.

“The trend of consuming what used to be a boutique computing environment now as-a-service is growing exponentially,” Alt says. 

HPE GreenLake for HPC bundles the core components of an HPC solution (high-speed storage, parallel file systems, low-latency interconnect, and high-bandwidth networking) in an integrated software stack that can be assembled to meet an organization’s specific workload needs. 

As part of the HPE GreenLake edge-to-cloud platform, HPE GreenLake for HPC gives organizations access to turnkey and easily scalable HPC capabilities through a cloud service consumption model that’s available on-premises. The HPE GreenLake platform experience provides transparency for HPC usage and costs and delivers self-service capabilities; users pay only for the HPC resources they consume, and built-in buffer capacity allows for scalability, including unexpected spikes in demand. HPE experts also manage the HPC environment, freeing up IT resources and delivering access to specialized performance tuning, capacity planning, and life cycle management skills.

To meet the needs of the most demanding compute and data-intensive workloads, including AI and ML initiatives, HPE has turbocharged HPE GreenLake for HPC with purpose-built HPC capabilities. Among the more notable features are expanded GPU capabilities, including NVIDIA Tensor Core models; support for high-performance HPE Parallel File System Storage; multicloud connector APIs; and HPE Slingshot, a high-performance Ethernet fabric designed to meet the needs of data-intensive AI workloads. HPE also released lower entry points to HPC to make the capabilities more accessible for customers looking to test and scale workloads.

As organizations pursue HPC capabilities, they should consider the following:

Stop thinking of HPC in terms of a specialized boutique technology; think of it more as a common utility used to drive business outcomes.

Look for HPC options that are supported by a rich ecosystem of complementary tools and services to drive better results and deliver customer excellence.

Evaluate the HPE GreenLake for HPC model. Organizations can dial capabilities up and down, depending on need, while simplifying access and lowering costs.

HPC horsepower is critical, as data-intensive workloads, including AI, take center stage. An as-a-service model democratizes what’s traditionally been out of reach for most, delivering an accessible path to HPC while accelerating data-first business.

For more information, visit https://www.hpe.com/us/en/hpe-greenlake-compute.html

High-Performance Computing

We’ve entered another year where current economic conditions are pressuring organizations to do more with less, all while still executing against digital transformation imperatives to keep the business running and competitive. To understand how organizations may be approaching their cloud strategies and tech investments in 2023, members of VMware’s Tanzu Vanguard community shared their insights on what trends will take shape.

Tanzu Vanguards, which includes leaders, engineers, and developers from DATEV, Dell, GAIG, and TeraSky, provided their perspectives on analyst predictions and industry data that point to larger trends impacting cloud computing, application development, and technology decisions.

Trend #1: More organizations will take on a cloud-native first strategy, accelerating the shift to containers and Kubernetes as the backbone for current and new applications.

According to Forrester, forty percent of firms will take a cloud-native first strategy. Forrester’s Infrastructure Cloud Survey 2022 reveals that cloud decision-makers have implemented containerized applications that account for half of the total workloads in their organizations. Kubernetes will propel application modernization with DevOps automation, low-code capabilities, and site reliability engineering (SRE), and organizations should accelerate investment in this area as their distributed compute backbone.

“I agree on the cloud-native first strategy [prediction] since Kubernetes is the base for modern infrastructure. But you have to take into account that cloud native first does not mean public cloud first. Especially in regulated environments, public clouds or the big hyperscalers won’t always be an option,” says Juergen Sussner, cloud platform engineer and developer at DATEV. “If you look into the startup world, they start in public clouds, but as they grow to a certain scale, cloud costs will become a big problem and the need for more control might come up to bring things back into their own infrastructures or sovereign clouds. So cloud-native first, yes but maybe not public cloud first to the same degree.”

While Scott Rosenberg, practice leader of cloud technologies and automation at TeraSky, agrees with Forrester’s prediction, he notes that there is nuance in the details. “The growth of Kubernetes, and the benefits it brings to organizations, is not something that is going away. Kubernetes and containerized environments are here to stay, and their footprint will continue to grow. As Kubernetes is becoming more mature, and the ecosystem around it as well is stabilizing, I believe that the challenges we are experiencing around knowledge gaps, and technical difficulties are going to get smaller over the next few years. With that being said, due to the maturity of Kubernetes, I believe that over the next year, the industry will understand which types of workloads are fit for Kubernetes and which types of workloads, truly should not be run in a containerized environment. I believe VM-based and container-based workloads will live together and in harmony for many more years, however, I see the management layers of the 2 unifying in the near future, as is evident by the rise of ecosystem tooling like Crossplane, VMware Tanzu VM Operator, KubeVirt and more.”

Even if organizations decide to take a containerized approach to their applications,  Jim Kohl, application and developer consultant at GAIG, says “there still is heavy lifting in moving the company project portfolio over to the new system. Even then, companies will have a blend of VM-centered workloads alongside containerized workloads.”

Similarly, Thomas Rudrof, cloud platform engineer at DATEV eG, agrees that we won’t necessarily see the end of VM-based workloads. “Our organization, as well as the majority of the industry, is already adopting a cloud-native-first or a Kubernetes-native-first strategy and will increase their investment in technologies like Kubernetes and containers in the coming years. Especially for new apps or when modernizing existing apps. However, it is also important to note that there are still many apps that run on virtual machines and do not work natively in containers, especially in the case of third-party software. Therefore, I think there will still be a need for VM workloads in the coming years,” says Rudrof.       

“This year, companies will focus on cost optimization and better use of existing hardware resources. Using containerization will allow you to better control application environments along with their lifecycle. It will also allow for more effective and faster delivery of the application to the customer. IT departments should reorganize some IT processes that use a VM-based approach rather than containers,” says Lukasz Zasko, principal engineer at Dell.

Trend #2: Optimizing costs and operational efficiency will be a focus for organizations looking to improve their financial position amidst an economic downturn and skills shortages. IT leaders and executives must use AI and cloud platforms, and adopt platform engineering, to improve costs, operations, and software delivery.

Gartner’s Top Strategic Technology Trends for 2023 advises that this year is an opportunity for organizations to optimize IT systems and costs through a “digital immune system” that combines software engineering strategies like observability, AI/automation, and design and testing, to deliver resilient systems that mitigate operational and security risks. Additionally, with ongoing supply chain issues and skills shortages, organizations can scale productivity by using industry cloud platforms and platform engineering to empower agile teams with self-service capabilities to increase the pace of product delivery. Lastly, as organizations look to control cloud costs, Gartner states that investments in sustainable technology will have the potential to create greater operational resiliency and financial performance, while also improving environmental and social ecosystems.

“Eliminating cognitive load from your developers by using platform engineering techniques makes them more productive and therefore more efficient. There’s always a discussion about what can be centralized, and what should and should not be centralized as it can cause too much process overhead when not giving this specific control to your developer teams,” Sussner says. “The rise of AI in this case can’t be overlooked, like GitHub Copilot and many intelligent tools for managing security and many other aspects of supply chains.”

However, cost savings isn’t necessarily a new prediction or trend for organizations in 2023, according to Martin Zimmer, technology lead for modern application platforms at Bechtle GmbH. “I have heard this for 10 or more years. Also, AI will not help with [cost savings] because the initial costs are way too high at the moment.”

On the other hand, Rudrof says, “AI has the potential to significantly improve the efficiency, productivity, and effectiveness of IT professionals and organizations, and is likely to play an increasingly important role in the industry in the coming years.” He is also optimistic about platform engineering as a trend that will impact enterprise strategies. “I believe that platform teams are essential in helping DevOps teams focus on creating business value and in providing golden paths to enhance the overall developer experience,” says Rudrof.

Trend #3: Infrastructure and operations leaders will need to rethink their methods for growing skills to keep pace with the rapid changes in technology and ways of working.

Gartner predicts that through 2025, 80% of the operational tasks will require skills that less than half the workforce is trained in today. Gartner recommends that leaders implement a prioritized set of methods to change the skills portfolio of the infrastructure and operations organization by creating a skills roadmap that emphasizes connected learning, digital dexterity, collaboration, and problem-solving.

“The main problem in 2023 will be how can we learn new skills fast and stay on top of all the new tools and technologies in every area. If you implement a toolchain today, tomorrow it’s old,” Zimmer says. He adds that implementing a skills portfolio is nothing new. “Connected learning, digital dexterity, collaboration, and problem-solving should be the ‘normal’ skills of everyone who works inside the IT organization. The days where an IT ‘guru’ sits in his dark room and runs away when you try to talk with him are long gone.”

While developing digital and human skills will always be important for current and future workforces as hybrid work and digital transformation initiatives take hold, organizations must also look inward to evolve company culture. Sussner believes that being able to react and adapt to change is a skill in itself that an organization has to develop. “Not only do DevOps teams have to adapt to changing requirements, but also company structures. If you take Conway’s law seriously, this means being able to develop software in an agile way, would also raise the necessity to be able to change company structures accordingly.” Conway’s law states that organizations design systems that mirror their own communication structure.

“This huge step in company culture requires brave managers adopting agile principles. So in my opinion, it’s not only about technology transformation, it’s also about company culture that has to evolve. If neither technology nor culture does not take part in this game, all will fail,” Sussner adds.

At a time when budgets and margins are tightening, leaders should take this time to re-evaluate investments and prioritize the technologies and skills that build a resilient business. As business success increasingly relies on the organization’s ability to deliver software and services quickly and securely, building a company culture that prioritizes the developer experience and removes infrastructure complexity to drive productivity and efficiency will be critical for 2023 and beyond.

To learn more, visit us here.

Cloud Computing

Supply chain disruptions have impacted businesses across all industries this year. To help ease the transport portion of that equation, Danish shipping giant Maersk is undertaking a transformation that provides a prime example of the power of computing at the edge.

Gavin Laybourne, global CIO of Maersk’s APM Terminals business, is embracing cutting-edge technologies to accelerate and fortify the global supply chain, working with technology giants to implement edge computing, private 5G networks, and thousands of IoT devices at its terminals to elevate the efficiency, quality, and visibility of the container ships Maersk uses to transport cargo across the oceans.

Laybourne, who is based in The Hague, Netherlands, oversees 67 terminals, which collectively handle roughly 15 million containers shipped from thousands of ports. He joined Maersk three years ago from the oil and gas industry and since then has been overseeing public and private clouds, applying data analytics to all processes, and preparing for what he calls the next-generation “smartport,” based on a switch to edge computing for real-time processing.

“Edge provides processing of real-time computation — computer vision and real-time computation of algorithms for decision making,” Laybourne says. “I send data back to the cloud where I can afford a 5-10 millisecond delay of processing.”

Bringing computing power to the edge enables data to be analyzed in near real-time — a necessity in the supply chain — and that is not possible with the cloud alone, he says.

Laybourne has been working closely with Microsoft on the evolving edge infrastructure, which will be key in many industries requiring fast access to data, such as industrial and manufacturing sectors. Some in his company focus on moving the containers. Laybourne is one who moves the electrons.

Digitizing the port of the future

Maersk’s move to edge computing follows a major cloud migration performed just a few years ago. Most enterprises that shift to the cloud are likely to stay there, but Laybourne predicts many industrial conglomerates and manufacturers will follow Maersk to the edge.

“Two to three years ago, we put everything on the cloud, but what we’re doing now is different,” Laybourne says. “The cloud, for me, is not the North Star. We must have the edge. We need real-time instruction sets for machines [container handling equipment at container terminals in ports] and then we’ll use cloud technologies where the data is not time-sensitive.”

Laybourne’s IT team is working with Microsoft to move cloud data to the edge, where containers are removed from ships by automated cranes and transferred to predefined locations in the port. To date, Laybourne and his team have migrated about 40% of APM Terminals’ cloud data to the edge, with a target to hit 80% by the end of 2023 at all operated terminals.

As Laybourne sees it, the move positions Maersk to capitalize on a forthcoming sea change for the global supply chain, one that will be fueled by enhanced data analytics, improved connectivity via 5G/6G private networks, and satellite connectivity and industry standards to enable the interoperability between ports. To date, Maersk controls about 19% of the overall capacity in its market.

As part of Maersk’s edge infrastructure, container contents can be examined by myriad IoT sensors immediately upon arrival at the terminals. RFIDs can also be checked in promptly and entered into the manifest before being moved robotically to their temporary locations. In some terminals, such operations are still performed by people, with cargo recorded on paper and data not accessible in the cloud for hours or longer, Laybourne says.

Cybersecurity, of course, is another major initiative for Maersk, as is data interoperability. Laybourne represents the company on the Digital Container Shipping Association committee, which is creating interoperability standards “because our customers don’t want to deal with paper. They want to have a digital experience,” he says.

The work to digitize is well under way. Maersk uses real-time digital tools such as Track & Trace and Container Status Notifications, APIs, and Terminal Alerts to keep customers informed about cargo. Automated cranes and robotics have removed most of the dangerous, manual work done in the past, and have improved the company’s sustainability and decarbonization efforts, Laybourne notes.

“Robotic automation has been in play in this industry for many years,” he says, adding that the pandemic has shifted the mindset of business-as-usual to upskilling laborers and making the supply chain far more efficient.

“We have automated assets such as cranes and berth and then there’s [the challenge of] how to make them more autonomous. After the pandemic, customers are now starting to reconfigure their supply chains,” he says, adding that autonomous, next-generation robotics is a key goal. “If you think of the energy crisis, the Ukraine situation, inflation, and more, companies are coming to a new view of business continuity and future sustainability compliance.”

Top vendors such as Microsoft and Amazon are looking at edge computing use cases for all industries, not just transport and logistics. According to IDC, more than 50% of new IT infrastructure will be deployed at the edge in 2023.

Gartner calls implementations like Maersk’s the “cloud-out edge” model. “It is not as much about moving from the cloud to edge as it is about bringing the cloud capabilities closer to the end users,” says Sid Nag, vice president and analyst at Gartner. “This also allows for a much more pervasive and distributed model.”

Next-gen connectivity and AI on deck

Aside from its partnership with Microsoft on edge computing, Maersk is collaborating with Nokia and Verizon on building private 5G networks at its terminals and recently demonstrated a blueprint of its plans at the Verizon Innovation Center in Boston. The ongoing work is among the first steps toward a breakthrough in connectivity and security, Laybourne maintains.

“It’s technology that opens up a lot more in terms of its connectivity, and in some of our terminals, where we have mission-critical systems platforms, the latency that 5G can offer is fantastic,” he says, noting that it will allow the cargo to “call home” data every 10 milliseconds as opposed to weeks. “But the real breakthrough on 5G and LTE is that I can secure my own spectrum. I own that port — nobody else. That’s the real breakthrough.”

Gartner’s Nag agrees that private 5G and edge computing provide meaningful synergies. “Private 5G can guarantee high-speed connectivity and low latencies needed in industries where use cases usually involve the deployment of hundreds of IoT devices, which then in turn require interconnectivity between each other,” Nag says.

For Maersk, installing IoT sensors and devices is also revolutionizing terminal operations. In the past, the cargo in containers had to be inspected and recorded on paper. Looking forward, Laybourne says, the process will all be automated and data will be digitized quickly.

His data science team, for example, has written algorithms for computer vision devices that are installed within the container to get around-the-clock electronic eyes on the cargo and identify and possibly prevent damage or spoilage.

Edge computing with IoT sensors that incorporate computer vision and AI will also give customers what they’ve long wanted, most pointedly during the pandemic: almost instant access to cargo data upon arrival, as well as automated repairs or fixes.

“It can then decide whether there’s an intervention needed, such as maintenance or repair, and that information is released to the customer,” the CIO says, adding that cameras and data collection devices will be installed throughout terminals to monitor for anything, be it theft, lost cargo, or potentially unsafe conditions.

Maersk has also been working with AI pioneer Databricks to develop algorithms to make its IoT devices and automated processes smarter. The company’s data scientists have built machine learning models in-house to improve safety and identify cargo. Data scientists will some day up the ante with advanced models to make all processes autonomous.

And this, Laybourne maintains, is the holy grail: changing the character of the company and the industry.

“We’ve been a company with a culture of configurators. So now we’ve become a culture of builders,” the digital leader says. “We’re building a lot of the software ourselves. This is where the data scientists sit and work on machine learning algorithms.”

For example, his data scientists are working on advanced ML models to handle exceptions or variations in data. They are also working on advanced planning and forecasting algorithms that will have an unprecedented impact on efficiencies. “Traditionally, this industry thinks about the next day,” the CIO says. “What we’re looking at actually is the next week, or the next three weeks.”

The core mission won’t change. But everything else will, he notes.

“We’re still going to have the job of lifting a box from a vessel into something else. Are we going to have autonomous floating containers and underseas hyperloops? I don’t think so,” Laybourne says, claiming the container industry is well behind others in its digital transformation but that is changing at lightning-fast speed. “Loading and unloading will still be part of the operation. But the technologies we put around it and in it will change everything.”

Cloud Computing, Edge Computing, Internet of Things, Supply Chain

For years, quantum computing has seemed like the stuff of science fiction. But the truth is that quantum computing is here and it’s more accessible to organizations than you think. And while the technology is still in its infancy, it is advancing fast.

In a recent interview, Ken Durazzo, vice president of Dell Technologies’ OCTO Research Office, explained, “There is a race toward quantum.” In the very near future, the computational capabilities of quantum will be widely available to accelerate applications and reveal new forms of business value.

A Dell Technologies white paper titled “5 Things You Should Be Doing Now to Prepare for Quantum Computing” takes a deeper look at the technology and explains why companies should get started with quantum today.

What is quantum computing?

The first thing you need to know is that quantum computers are not just faster versions of the computers we use today. “Quantum systems fundamentally behave and compute much, much differently than our normal systems, our classical systems, do,” said Durazzo.

Traditional, or classical, computers rely on transistors that can be either on or off, a 1 or 0. And they store this information as bits.

Quantum computers harness the principles of quantum mechanics to consider multiple possibilities — not just 1 or 0. They use qubits to compute all the possibilities simultaneously. This makes quantum computers very, very good at exploring the possibilities when a problem has multiple possible outcomes. And for these specialized problems, they are very, very fast.

Will quantum computers replace classical computers?

You probably won’t ever replace all your classical computers with quantum computers. For one thing, quantum computers aren’t very good at finding definitive answers to very precise questions, such as “What is the current balance in my bank account?”

For another, quantum computers need classical computers to function. “You can’t have quantum computing without classical computing,” said Durazzo. “It is highly likely that we’ll see hybrid classical-quantum computing as the way forward through the era of fault-tolerant quantum systems.” In most of the current systems, a classical computer (complete with storage, processors, and networking) provides the input that then goes to a quantum computing layer. The quantum layer does the processing and then transmits the output back to a classical system, which could eventually be a high-performance computing (HPC) system as the number of qubits grows.

This setup makes it possible for anyone to experiment with quantum computing today. You can download the development tools and then interact with a virtual quantum processor (vQPU) or a physical quantum machine in the cloud. Both approaches provide an identical experience as they are programmed the same and are indistinguishable in terms of application or algorithm experimentation.
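The hybrid pattern Durazzo describes can be seen in a few lines of Qiskit run against a local simulator, the “virtual QPU” mentioned above. This is a simplified sketch and API details shift between Qiskit versions, but the shape holds: classical code chooses parameters and interprets results, the (simulated) quantum layer executes the circuit, and swapping the simulator for a cloud-hosted quantum backend would leave the circuit code unchanged.

```python
# A small illustration of the hybrid classical-quantum loop described above,
# run entirely on a local simulator (a "virtual QPU"). Treat the API details
# as version-dependent.
import numpy as np
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

simulator = AerSimulator()

def run_circuit(theta, shots=512):
    """Classical code builds a circuit; the (simulated) quantum layer runs it."""
    qc = QuantumCircuit(1, 1)
    qc.ry(theta, 0)          # rotate the qubit by a classically chosen angle
    qc.measure(0, 0)
    counts = simulator.run(qc, shots=shots).result().get_counts()
    return counts.get("1", 0) / shots   # estimated probability of measuring 1

# The classical side sweeps the parameter and keeps the best result,
# the same pattern used (with smarter optimizers) in variational algorithms.
angles = np.linspace(0, np.pi, 9)
best_angle = max(angles, key=run_circuit)
print(f"best angle ~ {best_angle:.2f} rad, P(1) ~ {run_circuit(best_angle):.2f}")
```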

What should you be doing today?

Right now, very few people have experience with quantum computers. For this reason, Durazzo recommends that organizations start by learning as much as they can. You can download tools like Qiskit and start experimenting with quantum simulators. “At the end of the day, [quantum computing] really does take quite a bit of mental refactoring and a different way of thinking about computing and getting hands on keyboard is a wonderful way of starting to learn how this all works,” he advised.

Enterprises should also choose a partner who has experience with quantum computing to help guide them on their quantum journey. Dell Technologies has been working with quantum computing since 2016 and has resources that can help customers get up to speed with the new technology quickly. In fact, “We can now have a customer take our software and our Dell hardware and have a working quantum system in an hour,” Durazzo explained. “We have built significant automation in the system to enable a fast and frictionless deployment.”

He added, “Everything that we do is to try and make it easier for customers to adopt and take rapid advantage of new technology to accelerate their business. That’s our primary goal as a research organization. We take some of the friction out for customers because we’re learning ahead of them.”

For more information about quantum computing and details about how to get started with your quantum journey, check out “5 Things You Should Be Doing Now to Prepare for Quantum Computing.”

***

Intel® Technologies Move Analytics Forward

Data analytics is the key to unlocking the most value you can extract from data across your organization. To create a productive, cost-effective analytics strategy that gets results, you need high performance hardware that’s optimized to work with the software you use.

Modern data analytics spans a range of technologies, from dedicated analytics platforms and databases to deep learning and artificial intelligence (AI). Just starting out with analytics? Ready to evolve your analytics strategy or improve your data quality? There’s always room to grow, and Intel is ready to help. With a deep ecosystem of analytics technologies and partners, Intel accelerates the efforts of data scientists, analysts, and developers in every industry. Find out more about Intel advanced analytics.

IT Leadership

Companies are capturing more data, and deploying more compute capacity, at the edge. At the same time, they are laying the groundwork for a distributed enterprise that can capitalize on a multiplier effect to maximize intended business outcomes.

The number of edge sites — factory floors, retail shops, hospitals, and countless other locations — is growing. This gives businesses more opportunity to gain insights and make better decisions across the distributed enterprise. Data follows the activities of customers, employees, patients, and processes. Pushing computing power to the distributed edge ensures that data can be analyzed in near real time — a model not possible with centralized cloud computing.

With centralized cloud computing, bandwidth constraints mean it takes too long to move large data sets and analyze them. This introduces unwanted decision latency, which, in turn, destroys the business value of the data. Edge computing addresses the need for immediate processing by leaving the data where it is created and instead moving compute resources next to those data streams. This strategy enables real-time analysis of data as it is being captured and eliminates decision delays. Now the next level of operational efficiency can be realized with real-time decision-making and automation. At the edge: where activity takes place.
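A minimal sketch of that pattern, using simulated sensor values and a made-up threshold, looks something like this: the time-critical decision is made locally, and only a compact summary ever travels to the cloud.

```python
# Minimal sketch of edge decision-making: act locally in real time,
# forward only a small summary upstream. Sensor values and the threshold
# are simulated placeholders.
import random
import statistics

VIBRATION_LIMIT = 0.8   # hypothetical threshold for stopping a machine

def read_sensor():
    return random.uniform(0.0, 1.0)   # simulated vibration reading

def stop_machine():
    print("EDGE DECISION: stop machine immediately")   # no cloud round trip

readings = []
for _ in range(600):                  # e.g. one reading per 100 ms for a minute
    value = read_sensor()
    readings.append(value)
    if value > VIBRATION_LIMIT:       # real-time decision, made locally
        stop_machine()
        break

# Only a small summary leaves the site, not hundreds of raw readings.
summary = {"count": len(readings),
           "mean": round(statistics.mean(readings), 3),
           "max": round(max(readings), 3)}
print("-> cloud:", summary)
```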

Industry experts are projecting that 50 billion devices will be connected worldwide this year, with the amount of data being generated at the edge slated to increase by over 500% between 2019 and 2025, amounting to a whopping 175 zettabytes worldwide. The tipping point comes in 2025, when, experts project, roughly half of all data will be generated and processed at the edge, soon overtaking the amount of data and applications addressed by centralized cloud and data center computing.

The deluge of edge data opens opportunities for all kinds of actionable insights, whether it’s to correct a factory floor glitch impacting product quality or serving up a product recommendation based on customers’ past buying behavior. On its own, such individual action can have genuine business impact. But when you multiply the possible effects across thousands of locations processing thousands of transactions, there is a huge opportunity to parlay insights into revenue growth, cost reduction, and even business risk mitigation.

“Compute and sensors are doing new things in real time that they couldn’t do before, which gives you new degrees of freedom in running businesses,” explains Denis Vilfort, director of Edge Marketing at HPE. “For every dollar increasing revenue or decreasing costs, you can multiply it by the number of times you’re taking that action at a factory or a retail store — you’re basically building a money-making machine … and improving operations.”

The multiplier effect at work

The rise of edge computing essentially transforms the conventional notion of a large, centralized data center into having more data centers that are much smaller and located everywhere, Vilfort says. “Today we can package compute power for the edge in less than 2% of the space the same firepower took up 25 years ago. So you don’t want to centralize computing — that’s mainframe thinking,” he explains. “You want to democratize compute power and give everyone access to small — but powerful — distributed compute clusters. Compute needs to be where the data is: at the edge.”

Each location leverages its own insights and can share them with others. These small insights can optimize operation of one location. Spread across all sites, these seemingly small gains can add up quickly when new learnings are replicated and repeated. The following examples showcase the power of the multiplier effect in action:

Foxconn, a large global electronics manufacturer, moved from a cloud implementation to high-resolution cameras and artificial intelligence (AI) enabled at the edge for a quality assurance application. The shift reduced pass/fail time from 21 seconds down to one second; when this reduction is multiplied across a monthly production of thousands of servers, the company benefits from a 33% increase in unit capacity, representing millions more in revenue per month.

A supermarket chain tapped in-store AI and real-time video analytics to reduce shrinkage at self-checkout stations. That same edge-based application, implemented across hundreds of stores, prevents millions of dollars of theft per year.

Texmark, an oil refinery, was pouring more than $1 million a year into a manual inspection process, counting on workers to visually inspect 133 pumps and miles of pipeline on a regular basis. Having switched to an intelligent edge compute model, including the installation of networked sensors throughout the refinery, Texmark is now able to catch potential problems before anyone is endangered, not to mention benefit from doubled output while cutting maintenance costs in half.

A big box retailer implemented an AI-based recommendation engine to help customers find what they need without having to rely on in-store experts. Automating that process increased revenue per store. Multiplied across its thousands of sites, the edge-enabled recommendation process has the potential to translate into revenue upside of more than $350 million for every 1% revenue increase per store.

The HPE GreenLake Advantage

The HPE GreenLake platform brings an optimized operating model, consistent and secure data governance practices, and a cloudlike platform experience to edge environments — creating a robust foundation upon which to execute the multiplier effect across sites. For many organizations, the preponderance of data needs to remain at the edge, for a variety of reasons, including data gravity issues or because there’s a need for autonomy and resilience in case a weather event or a power outage threatens to shut down operations.

HPE GreenLake’s consumption-based as-a-service model ensures that organizations can more effectively manage costs with pay-per-use predictability, also providing access to buffer capacity to ensure ease of scalability. This means that organizations don’t have to foot the bill to build out costly IT infrastructure at each edge location but can, rather, contract for capabilities according to specific business needs. HPE also manages the day-to-day responsibilities associated with each environment, ensuring robust security and systems performance while creating opportunity for internal IT organizations to focus on higher-value activities.

As the benefits of edge computing get multiplied across processes and locations, the advantages are clear. For example, suppose a per-location HPE GreenLake compute service costing, say, $800 per month generates an additional $2,000 in bottom-line profit at that location each month. The net gain, then, is $1,200 per location per month. Multiplied across 1,000 locations, that is an aggregated profit of an additional $1.2 million per month — or $14.4 million per year. Small positive changes across a distributed enterprise quickly multiply, and tangible results are now within reach.
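Spelled out as a quick calculation with the same illustrative figures:

```python
# The arithmetic behind the multiplier effect above, spelled out.
added_profit_per_site = 2000      # extra monthly profit per location ($)
service_cost_per_site = 800       # illustrative per-location service cost ($/month)
locations = 1000

net_per_site = added_profit_per_site - service_cost_per_site   # $1,200
monthly_total = net_per_site * locations                        # $1,200,000
annual_total = monthly_total * 12                               # $14,400,000

print(f"Net per location:  ${net_per_site:,}/month")
print(f"Across {locations:,} sites: ${monthly_total:,}/month "
      f"(${annual_total:,}/year)")
```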

As companies build out their edge capabilities and sow the seeds to benefit from a multiplier effect, they should remember to:

Evaluate what decisions can benefit from being made and acted upon in real time as well as what data is critical to delivering on those insights so the edge environments can be built out accordingly.

Consider scalability — how many sites could benefit from a similar setup and how hard it will be to deploy and operate those distributed environments.

Identify the success factors that lead to revenue gains or cost reductions in a specific edge site and replicate that setup and those workflows at other sites.

In the end, the multiplier effect is all about maximizing the potential of edge computing to achieve more efficient operations and maximize overall business success. “We’re in the middle of shifting from an older way of doing things to a new and exciting way of doing things,” Vilfort says. “At HPE we are helping customers find a better way to use distributed technology in their distributed sites to enable their distributed enterprise to run more efficiently.”

For more information, click here.

Cloud Computing

The benefits of analyzing vast amounts of data, whether long-term or in real time, have captured the attention of businesses of all sizes. Big data analytics has moved beyond the rarefied domain of government and university research environments equipped with supercomputers to include businesses of all kinds that are using modern high performance computing (HPC) solutions to get their analytics jobs done. It’s big data meets HPC ― otherwise known as high performance data analytics.

Bigger, Faster, More Compute-intensive Data Analytics

Big data analytics has relied on HPC infrastructure for many years to handle data mining processes. Today, parallel processing solutions handle massive amounts of data and run powerful analytics software that uses artificial intelligence (AI) and machine learning (ML) for highly demanding jobs.

A report by Intersect360 Research found that “Traditionally, most HPC applications have been deterministic; given a set of inputs, the computer program performs calculations to determine an answer. Machine learning represents another type of applications that is experiential; the application makes predictions about new or current data based on patterns seen in the past.”

This shift to AI, ML, large data sets, and more compute-intensive analytical calculations has contributed to the growth of the global high performance data analytics market, which was valued at $48.28 billion in 2020 and is projected to grow to $187.57 billion in 2026, according to research by Mordor Intelligence. “Analytics and AI require immensely powerful processes across compute, networking and storage,” the report explained. “As a result, more companies are increasingly using HPC solutions for AI-enabled innovation and productivity.”

Benefits and ROI

Millions of businesses need to deploy advanced analytics at the speed of events. A subset of these organizations will require high performance data analytics solutions. Those HPC solutions and architectures will benefit from the integration of diverse datasets from on-premise to edge to cloud. The use of new sources of data from the Internet of Things to empower customer interactions and other departments will provide a further competitive advantage to many businesses. Simplified analytics platforms that are user-friendly resources open to every employee, customer, and partner will change the responsibilities and roles of countless professions.

How does a business calculate the return on investment (ROI) of high performance data analytics? It varies with different use cases.

For analytics used to help increase operational efficiency, key performance indicators (KPIs) contributing to ROI may include downtime, cost savings, time-to-market, and production volume. For sales and marketing, KPIs may include sales volume, average deal size, revenue by campaign, and churn rate. For analytics used to detect fraud, KPIs may include number of fraud attempts, chargebacks, and order approval rates. In a healthcare environment, analytics used to improve patient outcomes might include key performance indicators that track cost of care, emergency room wait times, hospital readmissions, and billing errors.
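A highly simplified sketch of that calculation, using placeholder figures rather than real benchmarks, might look like this: sum the annual value of the relevant KPI improvements and compare it with the cost of the analytics investment.

```python
# A simplified, hypothetical ROI sketch for an analytics investment.
# All figures are placeholders; real calculations would pull these KPIs
# from finance and operations systems.
annual_platform_cost = 1_200_000          # HPC + analytics spend ($/yr)

kpi_gains = {
    "downtime avoided":        450_000,   # $/yr
    "fraud losses prevented":  600_000,   # $/yr
    "faster time-to-market":   300_000,   # $/yr
}

total_gain = sum(kpi_gains.values())
roi = (total_gain - annual_platform_cost) / annual_platform_cost

print(f"Total annual benefit: ${total_gain:,}")
print(f"ROI: {roi:.0%}")   # roughly 12% with these made-up figures
```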

Customer Success Stories

Combining data analytics with HPC:

A technology firm applies AI, machine learning, and data analytics to client drug diversion data from acute, specialty, and long-term care facilities and delivers insights within five minutes of receiving new data while maintaining an HPC environment with 99.99% uptime to comply with service level agreements (SLAs).

A research university was able to tap into 2 petabytes of data across two HPC clusters with 13,080 cores to create a mathematical model to predict behavior during the COVID-19 pandemic.

A technology services provider is able to inspect 124 moving railcars ― a 120% reduction in inspection time ― and transmit results in eight minutes, based on processing and analyzing 1.31 terabytes of data per day.

A race car designer is able to process and analyze 100,000 data points per second per car ― one billion in a two-hour race ― that are used by digital twins running hundreds of different race scenarios to inform design modifications and racing strategy.

Scientists at a university research center are able to utilize hundreds of terabytes of data, processed at I/O speeds of 200 Gbps, to conduct cosmological research into the origins of the universe.

Data Scientists are Part of the Equation

High performance data analytics is gaining stature as more and more data is being collected.  Beyond the data and HPC systems, it takes expertise to recognize and champion the value of this data. According to Datamation, “The rise of chief data officers and chief analytics officers is the clearest indication that analytics has moved from the backroom to the boardroom, and more and more often it’s data experts that are setting strategy.” 

No wonder skilled data analysts continue to be among the most in-demand professionals in the world. The U.S. Bureau of Labor Statistics predicts that the field will be among the fastest-growing occupations for the next decade, with 11.5 million new jobs by 2026. 

For more information read “Unleash data-driven insights and opportunities with analytics: How organizations are unlocking the value of their data capital from edge to core to cloud” from Dell Technologies. 

***

Intel® Technologies Move Analytics Forward

Data analytics is the key to unlocking the most value you can extract from data across your organization. To create a productive, cost-effective analytics strategy that gets results, you need high performance hardware that’s optimized to work with the software you use.

Modern data analytics spans a range of technologies, from dedicated analytics platforms and databases to deep learning and artificial intelligence (AI). Just starting out with analytics? Ready to evolve your analytics strategy or improve your data quality? There’s always room to grow, and Intel is ready to help. With a deep ecosystem of analytics technologies and partners, Intel accelerates the efforts of data scientists, analysts, and developers in every industry. Find out more about Intel advanced analytics.

Data Management

From telecommunications networks to the manufacturing floor, through financial services to autonomous vehicles and beyond, computers are everywhere these days, generating a growing tsunami of data that needs to be captured, stored, processed, and analyzed.

At Red Hat, we see edge computing as an opportunity to extend the open hybrid cloud all the way to data sources and end users. While data has traditionally lived in the datacenter or cloud, there are benefits and innovations to be realized by processing the data these devices generate closer to where it is produced.

This is where edge computing comes in.

What is edge computing?

Edge computing is a distributed computing model in which data is captured, stored, processed, and analyzed at or near the physical location where it is created. By pushing computing out closer to these locations, users benefit from faster, more reliable services while companies benefit from the flexibility and scalability of hybrid cloud computing.

Edge computing vs. cloud computing

A cloud is an IT environment that abstracts, pools, and shares IT resources across a network. An edge is a computing location at the edge of a network, along with the hardware and software at those physical locations. Cloud computing is the act of running workloads within clouds, while edge computing is the act of running workloads on edge devices.

You can read more about cloud versus edge here.

4 benefits of edge computing

As the number of computing devices has grown, our networks simply haven’t kept pace with the demand, causing applications to be slower and/or more expensive to host centrally.

Pushing computing out to the edge helps reduce many of the issues and costs related to network latency and bandwidth, while also enabling new types of applications that were previously impractical or impossible. 

1. Improve performance

When applications and data are hosted in centralized datacenters and accessed over the internet, speed and performance can suffer from slow network connections. Moving applications and data out to the edge reduces network-related performance and availability issues, though it does not eliminate them entirely.

2. Place applications where they make the most sense

By processing data closer to where it is generated, insights can be gained more quickly and response times reduced drastically. This is particularly true for locations that may have intermittent connectivity, including geographically remote offices and vehicles such as ships, trains, and airplanes.

3. Simplify meeting regulatory and compliance requirements

Different situations and locations often have different privacy, data residency, and localization requirements, which can be extremely complicated to manage through centralized data processing and storage, such as in datacenters or the cloud.

With edge computing, however, data can be collected, stored, processed, managed, and even scrubbed in place, making it much easier to meet different locales' regulatory and compliance requirements. For example, edge computing can be used to strip personally identifiable information (PII) or faces from video before the data is sent back to the datacenter.
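A minimal sketch of that pattern is shown below, assuming JSON-like records collected at an edge site; the field names, the scrub function, and the pseudonymization choice are hypothetical stand-ins, not part of any Red Hat product or API.

```python
# Minimal sketch: scrub PII from edge-collected records before they leave the site.
# Field names and the pseudonymization scheme are hypothetical placeholders.
import hashlib

PII_FIELDS = {"name", "email", "license_plate"}  # fields to remove or mask locally

def scrub(record):
    """Drop or pseudonymize PII fields so only non-identifying data is transmitted."""
    clean = {k: v for k, v in record.items() if k not in PII_FIELDS}
    # Keep a one-way pseudonym so records can still be correlated downstream.
    if "license_plate" in record:
        digest = hashlib.sha256(record["license_plate"].encode()).hexdigest()
        clean["vehicle_id"] = digest[:12]
    return clean

edge_record = {
    "name": "J. Doe",
    "email": "jdoe@example.com",
    "license_plate": "ABC-1234",
    "speed_kph": 72,
    "timestamp": "2023-05-01T10:00:00Z",
}

print(scrub(edge_record))  # only non-PII fields (plus a pseudonym) are forwarded
```

The key design point is that the raw, identifying data never crosses the network boundary; only the scrubbed record reaches the central datacenter or cloud.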

4. Enable AI/ML applications

Artificial intelligence and machine learning (AI/ML) are growing in importance and popularity since computers are often able to respond to rapidly changing situations much more quickly and accurately than humans.

But AI/ML applications often need to process, analyze, and respond to enormous quantities of data, which cannot reasonably be done centrally because of network latency and bandwidth constraints. Edge computing allows AI/ML applications to be deployed close to where data is collected so analytical results can be obtained in near real time.
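As a minimal sketch of that deployment pattern, the loop below scores sensor readings locally and only forwards results upstream. It assumes a pre-trained model has already been pushed to the device; load_model, read_sensor, send_upstream, and the threshold are hypothetical stand-ins for site-specific components, not a Red Hat API.

```python
# Minimal sketch of an edge inference loop: score sensor data locally and only
# send alerts upstream. All functions below are hypothetical placeholders.
import json
import random
import time

def load_model():
    # Stand-in for loading a pre-trained model deployed to the edge device.
    threshold = 0.8
    return lambda features: 1.0 if features["vibration"] > threshold else 0.0

def read_sensor():
    # Stand-in for reading from local equipment.
    return {"vibration": random.random(), "ts": time.time()}

def send_upstream(payload):
    # Stand-in for forwarding only the result (not raw data) to the datacenter.
    print("ALERT ->", json.dumps(payload))

model = load_model()
for _ in range(10):               # in practice this loop runs continuously
    reading = read_sensor()
    score = model(reading)        # inference happens on the edge device
    if score >= 1.0:              # raw readings never leave the site
        send_upstream({"ts": reading["ts"], "anomaly_score": score})
    time.sleep(0.1)
```

Because inference runs next to the sensor, the decision latency is bounded by local compute rather than by the round trip to a central datacenter.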

Red Hat’s approach to edge computing

Of course, the many benefits of edge computing come with some additional complexity in terms of scale, interoperability, and manageability.

Edge deployments often extend to a large number of locations that have minimal (or no) IT staff, or that vary widely in physical and environmental conditions. Edge stacks also often combine hardware and software elements from different vendors, and highly distributed edge architectures can become difficult to manage as infrastructure scales out to hundreds or even thousands of locations.

The Red Hat Edge portfolio addresses these challenges by helping organizations standardize on a modern hybrid cloud infrastructure, providing an interoperable, scalable and modern edge computing platform that combines the flexibility and extensibility of open source with the power of a rapidly growing partner ecosystem.

The Red Hat Edge portfolio includes:

- Red Hat Enterprise Linux and Red Hat OpenShift, which are designed to be the common platform for all of an organization's infrastructure, from core datacenters out to edge environments.
- Red Hat Advanced Cluster Management for Kubernetes and Red Hat Ansible Automation Platform, which provide the management and automation platforms needed to drive visibility and consistency across the organization's entire domain.
- The Red Hat Application Services portfolio, which provides critical integration for enterprise applications while also building a robust data pipeline.

The Red Hat Edge portfolio allows organizations to build and manage applications across hybrid, multi-cloud, and edge locations, accelerating application innovation, speeding up deployment and updates, and improving overall DevSecOps efficiency.

To learn more, visit Red Hat here.

Edge Computing