Private 5G is the next evolution of networking for mission-critical applications in factories, logistics centers and hospitals – in fact, any environment that needs the reliability, security and speed of a wired connection combined with the movement of people, things and data.

The element of movement is often a factor in Industry 4.0 digital transformation – and that’s where private 5G shines.

Private 5G is deployed as an extension of an organization’s WAN. It’s fast, secure, reliable and has low latency. You can rely on it to transmit data. But if you don’t have a computing resource at the edge where the data is collected to create actionable intelligence in real time, you’re missing out on revolutionary possibilities.

Edge computing brings out the real potential of private 5G

Bringing managed private 5G together with managed edge computing enables businesses to analyze situations in the now – no more waiting for data to be collected (often a slow process) and sent to a data center to be processed first.

In manufacturing, this combined-platform approach quickly delivers the right information to where decisions have to be made: the factory floor. This has implications for everything from an evolutionary increase in productivity and quality, to greater flexibility and customization.

Organizations also have to control data sovereignty, ownership and location. Private 5G can protect data by ensuring that all traffic remains on-premises.

While private 5G is a powerful tool, use cases make it exciting

Switching to private 5G helps organizations avoid Wi-Fi access-point proliferation and blind spots in monitoring: asset-based sensors can collect and transmit huge volumes of data quickly, and indoor-positioning accuracy of less than one meter becomes achievable.

It’s also a much simpler exercise to reconfigure connectivity between devices and improve the timing and synchronization of data feeds from sensors.

Last year, Cisco’s Strategic Execution Office ran a study on private 5G in collaboration with Deloitte, titled “Vertical Use Cases Offer Development”, which delves into the main applications of private 5G through use cases.

They found that the highest demand for private 5G is in the manufacturing, logistics and government industries. Their findings match our experience, as these are the sectors in which NTT’s Private 5G and Edge as a Service are most in demand.

Moving from broad themes to specific applications

The study identified four themes: enabling hybrid connectivity; activation and policy setup for varied sensor profiles; advanced intelligence with private 5G and the edge-computing stack; and integrated app and infrastructure to enable business outcomes.

NTT’s experience has taught us that these themes can be translated into five main areas of application:

Group wireless communications (push-to-talk) enable workers to communicate across locations, with real-time location tracking.

Private 5G supports augmented reality and virtual reality, allowing for self-assist, work-assist and remote-assist capabilities.

Private 5G makes real-time connectivity and control possible for autonomous guided vehicles.

Computer vision for automatic video surveillance, inspection and guidance is faster and more efficient on a private 5G network.

Connected devices can remain reliably and securely connected to the enterprise network throughout the work shift without relying on Wi-Fi or portable hot spots.

Exploring the difference 5G will make in manufacturing

The study also explores how private 5G can optimize assets and processes in manufacturing, assembly, testing, and storage facilities. Private 5G allows for faster and more precise asset tracking, system monitoring, and real-time schedule and process optimization using location and event data from sensors and factory systems.

The research provides two examples of private 5G use cases in factories:

Factory asset intelligence: Traceability from parts to product, with increased sensor enablement across manufacturing, assembly and testing sites

Dynamic factory scheduling: Closed-loop control and safety applications enabled by real-time actuation, sensor fusion and dynamic process schedules.

As we continue to explore the potential of private 5G, it is clear that this technology has the power to transform the manufacturing industry and pave the way for a more efficient and effective future.

To find out more about the use cases private 5G unlocks and how they can offer business benefits, download NTT’s white paper: Smart manufacturing: accelerating digital transformation with private 5G networks and edge computing.

Edge Computing, Manufacturing Industry, Manufacturing Systems, Private 5G

Enterprises driving toward data-first modernization need to determine the optimal multicloud strategy, starting with which applications and data are best suited to migrate to cloud and what should remain in the core and at the edge.

A hybrid approach is clearly established as the operating model of choice. A Flexera report found overwhelming support for the shift to hybrid infrastructure, with 89% of survey respondents opting for a multicloud strategy and 80% taking a hybrid approach that combines public and private clouds.

The shift toward hybrid IT has clear upsides, enabling organizations to choose the right solution for each task and workload, depending on criteria such as performance, security, compliance, and cost, among other factors. The challenge is that CIOs must apply a rigorous process and holistic assessment to determine the optimal data modernization strategy, given that there is no one-size-fits-all answer.

Many organizations set out on the modernization journey guided by the premise that cloud-first or cloud-only is the ultimate destination, only to find that the path is not appropriate for all data and workloads. “Directionally correct CIOs and the C-suite looked at the public cloud and liked the operating model: the pay-as-you-go, predefined services, the automation and orchestration, and the partner ecosystem all available to you,” says Rocco Lavista, worldwide vice president for HPE GreenLake sales and go-to-market. “Many tried to move their whole estate into public cloud, and what they found is that that doesn’t work for everything. It’s less about what application and data should go on public cloud and more about a continuum from the edge to core [in colocated or private data centers] to public cloud.”

Close to the Edge

There are several reasons why certain data and workloads need to remain at the edge, as opposed to transitioning to public cloud. Data gravity is perhaps the most significant arbiter of where to deploy workloads, particularly when there is a need to analyze massive amounts of data quickly — for example, with X-ray or MRI machines in a hospital setting, for quality assurance data from a manufacturing line, and even with data collected at point-of-sale systems in a retail setting. 

Artificial intelligence (AI) projects are another useful example. “Where I’ve seen AI projects fail is in trying to bring the massive amounts of data from where it’s created to the training model [in some public cloud] and get timely insights, versus taking the model and bringing it closer to where the data is created,” Lavista explains. “Here, there is a synergistic need between what is happening at the edge and the processing power required in real time to facilitate your business objectives.” 
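To make that pattern concrete, here is a minimal sketch of what Lavista describes – scoring data with a model at the edge and sending only compact summaries to the cloud. The baseline, tolerance and upload hook are illustrative stand-ins, not a specific HPE implementation.

```python
# Sketch: score sensor readings at the edge and ship only compact results to the cloud.
# The baseline, tolerance and send_to_cloud() hook are illustrative stand-ins.
from statistics import mean

WINDOW = 60  # readings per scoring window

def score_window(readings):
    """Toy 'model': flag a window whose average drifts far from an assumed baseline."""
    baseline, tolerance = 72.0, 5.0  # assumed calibration values
    avg = mean(readings)
    return {"avg": round(avg, 2), "anomaly": abs(avg - baseline) > tolerance}

def send_to_cloud(summary):
    print("uploading summary:", summary)  # placeholder for an HTTPS/MQTT uplink

def run(sensor_stream):
    window = []
    for reading in sensor_stream:
        window.append(reading)
        if len(window) == WINDOW:
            send_to_cloud(score_window(window))  # kilobytes travel, not the raw stream
            window.clear()

if __name__ == "__main__":
    import random
    run(random.gauss(72, 2) for _ in range(300))
```

The design choice is the point Lavista makes: the model moves to where the data is created, and only the insights make the trip back.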

Application entanglement presents another barrier to migrating some applications and data to cloud. Some legacy applications are architected in ways that don’t allow pieces of functionality and data to be migrated to cloud easily; in other cases, a wholesale migration is out of the question for reasons of cost and complexity. There are also workloads that don’t make economic sense to refactor from a fixed environment to a variable cost-based architecture, and others with regulatory or industry obligations tied to data sovereignty or privacy that rule out a wholesale move to public cloud.

The HPE GreenLake Advantage

Given the importance of the edge in the data modernization strategy, HPE seeks to remove any uncertainty regarding where to deploy applications and data. The HPE GreenLake edge-to-cloud platform brings the desired cloud-based operating model and platform experience, but with consistent and secure data governance practices, starting at the edge and running all the way to public cloud. This can be applied across any industry — such as retail, banking, manufacturing, or healthcare — and regardless of where the workload resides.

HPE GreenLake’s managed service offering is inclusive of all major public clouds, ensuring a consistent experience whether data and applications are deployed on AWS, Microsoft Azure, or Google Cloud Platform as part of a hybrid mix that combines public cloud with on-premises infrastructure in an internal data center or colocation facility.

“IT teams want a unified solution they can use to manage all technology needs, from infrastructure as a service (IaaS) to platform as a service (PaaS) and container as a service (CaaS), that drive automation and orchestration that are not snowflakes,” says Lavista. “HPE GreenLake provides that standard operating model from edge to core and all the way through to the public cloud.”

By aligning with HPE GreenLake solutions, IT organizations also free themselves of the day-to-day operations of running infrastructure to focus on delivering core capabilities for business users as well as DevOps teams. The HPE GreenLake team works with organizations to assess which workloads are a better fit for cloud or edge, by evaluating a variety of factors, including technical complexity, system dependencies, service-level agreement (SLA) requirements, and latency demands. For example, a quality control system on a manufacturing line might be better suited for an edge solution, due to the need to analyze data in volume and in near real time. But an AI application that could benefit from a facial recognition service might be better served by public cloud, given the broad ecosystem of available third-party services that eliminate the need to reinvent the wheel for every innovation.

To ensure top performance, Lavista counsels companies to fully understand their core business objectives and to be pragmatic about their cloud migration goals so they avoid the trap of moving data and workloads simply because it’s the latest technology trend. “Understand your options based on where you are coming from,” he says. “If what you are looking for is to optimize the IT operating model, you can still get that without moving applications and data.”

For more information, visit https://www.hpe.com/us/en/solutions/edge.html

Hybrid Cloud

Companies are capturing more data and deploying more compute capacity at the edge. At the same time, they are laying the groundwork for a distributed enterprise that can capitalize on a multiplier effect to maximize intended business outcomes.

The number of edge sites — factory floors, retail shops, hospitals, and countless other locations — is growing. This gives businesses more opportunity to gain insights and make better decisions across the distributed enterprise. Data follows the activities of customers, employees, patients, and processes. Pushing computing power to the distributed edge ensures that data can be analyzed in near real time — a model not possible with cloud computing. 

With centralized cloud computing, bandwidth constraints mean it takes too long to move large data sets and analyze them. This introduces unwanted decision latency, which, in turn, destroys the business value of the data. Edge computing addresses the need for immediate processing by leaving the data where it is created and instead moving compute resources next to those data streams. This strategy enables real-time analysis of data as it is being captured and eliminates decision delays. Now the next level of operational efficiency can be realized with real-time decision-making and automation at the edge: where the activity takes place.

Industry experts are projecting that 50 billion devices will be connected worldwide this year, with the amount of data being generated at the edge slated to increase by over 500% between 2019 and 2025, amounting to a whopping 175 zettabytes worldwide. The tipping point comes in 2025, when, experts project, roughly half of all data will be generated and processed at the edge, soon overtaking the amount of data and applications addressed by centralized cloud and data center computing.

The deluge of edge data opens opportunities for all kinds of actionable insights, whether that’s correcting a factory floor glitch impacting product quality or serving up a product recommendation based on customers’ past buying behavior. On its own, each individual action can have genuine business impact. But when you multiply the possible effects across thousands of locations processing thousands of transactions, there is a huge opportunity to parlay insights into revenue growth, cost reduction, and even business risk mitigation.

“Compute and sensors are doing new things in real time that they couldn’t do before, which gives you new degrees of freedom in running businesses,” explains Denis Vilfort, director of Edge Marketing at HPE. “For every dollar increasing revenue or decreasing costs, you can multiply it by the number of times you’re taking that action at a factory or a retail store — you’re basically building a money-making machine … and improving operations.”

The multiplier effect at work

The rise of edge computing essentially transforms the conventional notion of a large, centralized data center into having more data centers that are much smaller and located everywhere, Vilfort says. “Today we can package compute power for the edge in less than 2% of the space the same firepower took up 25 years ago. So, you don’t want to centralize computing — that’s mainframe thinking,” he explains. “You want to democratize compute power and give everyone access to small — but powerful — distributed compute clusters. Compute needs to be where the data is: at the edge.”

Each location leverages its own insights and can share them with others. These small insights can optimize operation of one location. Spread across all sites, these seemingly small gains can add up quickly when new learnings are replicated and repeated. The following examples showcase the power of the multiplier effect in action:

Foxconn, a large global electronics manufacturer, moved from a cloud implementation to high-resolution cameras and artificial intelligence (AI) enabled at the edge for a quality assurance application. The shift reduced pass/fail time from 21 seconds down to one second; when this reduction is multiplied across a monthly production of thousands of servers, the company benefits from a 33% increase in unit capacity, representing millions more in revenue per month.

A supermarket chain tapped in-store AI and real-time video analytics to reduce shrinkage at self-checkout stations. That same edge-based application, implemented across hundreds of stores, prevents millions of dollars of theft per year.

Texmark, an oil refinery, was pouring more than $1 million a year into a manual inspection process, counting on workers to visually inspect 133 pumps and miles of pipeline on a regular basis. Having switched to an intelligent edge compute model, including the installation of networked sensors throughout the refinery, Texmark is now able to catch potential problems before anyone is endangered, not to mention benefit from doubled output while cutting maintenance costs in half.

A big box retailer implemented an AI-based recommendation engine to help customers find what they need without having to rely on in-store experts. Automating that process increased revenue per store. Multiplied across its thousands of sites, the edge-enabled recommendation process has the potential to translate into revenue upside of more than $350 million for every 1% revenue increase per store. 

The HPE GreenLake Advantage

The HPE GreenLake platform brings an optimized operating model, consistent and secure data governance practices, and a cloudlike platform experience to edge environments — creating a robust foundation upon which to execute the multiplier effect across sites. For many organizations, the preponderance of data needs to remain at the edge, for a variety of reasons, including data gravity issues or because there’s a need for autonomy and resilience in case a weather event or a power outage threatens to shut down operations.

HPE GreenLake’s consumption-based as-a-service model ensures that organizations can more effectively manage costs with pay-per-use predictability, also providing access to buffer capacity to ensure ease of scalability. This means that organizations don’t have to foot the bill to build out costly IT infrastructure at each edge location but can, rather, contract for capabilities according to specific business needs. HPE also manages the day-to-day responsibilities associated with each environment, ensuring robust security and systems performance while creating opportunity for internal IT organizations to focus on higher-value activities.

As benefits of edge computing get multiplied across processes and locations, the advantages are clear. For example, an additional $2,000 in bottom-line profit per location per month is easily obtained with a per-location HPE GreenLake compute service at, say, $800 per location per month. The net profit, then, is $1,200 per location. When that is multiplied across 1,000 locations, the result is an additional $1.2 million in aggregated profit per month – or $14.4 million per year. Small positive changes across a distributed enterprise quickly multiply, and tangible results are now within reach.
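The arithmetic behind that multiplier effect is simple enough to verify directly with the figures quoted above:

```python
# Reproduce the per-location profit multiplier described above.
gain_per_location = 2000   # added monthly profit per location, USD
service_cost = 800         # assumed per-location HPE GreenLake compute service, USD/month
locations = 1000

net_per_location = gain_per_location - service_cost    # 1,200
monthly_total = net_per_location * locations            # 1,200,000
annual_total = monthly_total * 12                       # 14,400,000

print(f"Net per location/month: ${net_per_location:,}")
print(f"Across {locations:,} locations: ${monthly_total:,}/month, ${annual_total:,}/year")
```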

As companies build out their edge capabilities and sow the seeds to benefit from a multiplier effect, they should remember to:

Evaluate what decisions can benefit from being made and acted upon in real time as well as what data is critical to delivering on those insights so the edge environments can be built out accordingly

Consider scalability — how many sites could benefit from a similar setup and how hard it will be to deploy and operate those distributed environments

Identify the success factors that lead to revenue gains or cost reductions in a specific edge site and replicate that setup and those workflows at other sites

In the end, the multiplier effect is all about maximizing the potential of edge computing to achieve more efficient operations and maximize overall business success. “We’re in the middle of shifting from an older way of doing things to a new and exciting way of doing things,” Vilfort says. “At HPE we are helping customers find a better way to use distributed technology in their distributed sites to enable their distributed enterprise to run more efficiently.”

For more information, visit https://www.hpe.com/us/en/solutions/edge.html

Edge Computing

When you store and deliver data at Shutterstock’s scale, the flexibility and elasticity of the cloud is a huge win, freeing you from the burden of costly, high-maintenance data centers. But for the New York-based provider of stock photography, footage, and music, it’s the innovation edge that makes the cloud picture perfect for its business.

“The speed of innovation is really starting to accelerate,” says Jefferson Frazer, director of edge compute, delivery, and storage at Shutterstock, which is headquartered in the Empire State Building. “If you’re not keeping up, you’re getting left behind.”

Advancements in analytics and AI as well as support for unstructured data in centralized data lakes are key benefits of doing business in the cloud, and Shutterstock is capitalizing on its cloud foundation, creating new revenue streams and business models using the cloud and data lakes as key components of its innovation platform.

The company customizes, sells, and licenses more than one billion images, videos, and music clips from its mammoth catalog – stored on AWS and Snowflake – to media and marketing companies and any other customer requiring digital content. It currently stores more than 60 petabytes of objects, assets, and descriptors across its distributed data store.

But it’s the ability to tap sophisticated analytics and AI in the cloud, combined with the “democratization of data” enabled by data lakes, that is not only accelerating innovation at Shutterstock but also facilitating new products and services, Frazer says.

“The expectation from developers is that they can go faster than they’ve ever gone before and that every part of the lifecycle around this data needs to be elastic, scalable,” he says. “Nothing can be held back from giving everyone in your business democratized equal access to this information so they can leverage it to do their part of the job.”

Jefferson Frazer, director of edge compute, delivery, and storage, Shutterstock

The challenge for any enterprise, he says, is finding a centralized path to access disparate stores.

“We think we found a good balance there. We use Snowflake very heavily as our primary data querying engine to cross all of our distributed boundaries because we pull in from structured and non-structured data stores and flat objects that have no structure,” Frazer says. “Then coupling with AWS’ strong authentication mechanisms, we can say with certainty that we have security and restrictions around who can access data.”
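For illustration only – the credentials, warehouse and ASSET_METADATA table below are hypothetical placeholders rather than Shutterstock’s environment – a query against a central Snowflake warehouse from Python looks roughly like this:

```python
# Sketch: query a central Snowflake warehouse from Python.
# Connection details and the ASSET_METADATA table are illustrative placeholders.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ANALYTICS_WH",
    database="MEDIA",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    cur.execute(
        "SELECT asset_type, COUNT(*) AS assets "
        "FROM ASSET_METADATA GROUP BY asset_type ORDER BY assets DESC"
    )
    for asset_type, assets in cur.fetchall():
        print(asset_type, assets)
finally:
    conn.close()
```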

This level of development is very complex and only possible with a skilled CIO who has a deep understanding of all business processes and adopts new cloud technologies as soon as they become available, Frazer says.

Cloud-first, cloud-fast

Frazer believes Shutterstock CIO Hugues Hervouet has just the right blend of tech know-how and business acumen to pinpoint which parts of the company are using data to its full potential, which could capitalize more, and where opportunities for expansion and cross-functional use reside.  

“Our CIO is particularly invested in pushing forward data consumption,” Frazer says. “As soon as new cloud features come out, they are immediately consumed.”

Hervouet himself says he is driving his developers to innovate faster and develop new classes of applications as soon as new cloud capabilities are released. Shutterstock’s current focus, for instance, is generative AI — considered by many to be a bleeding-edge application.

Hugues Hervouet, CIO, Shutterstock

“We have been able to reallocate engineers to work on value-add activities, such as implementing a generative AI solution that enables our customers to create compelling images using the platform by describing what they are looking for in just a couple of sentences,” Hervouet says, noting this enables customers to find and create content they need much faster.

Frazer says Shutterstock has a dedicated team building AI algorithms and new machine learning models that are integrated into all aspects of the customer lifecycle, such as an engine that learns from customer consumption patterns and makes recommendations. To do so, the team leverages tools from AWS and Databricks, as well as custom Jupyter notebooks.

For Shutterstock, the benefits of AI have been immediately apparent. Storage intelligence, for example, has reduced the duplication of images, an issue that occurs after acquisitions. And generative AI has helped reduce the time required to prepare custom images for customers.

“What we’ve seen from the cloud is being able to adapt to the complexities of different data structures much faster,” Frazer points out. “Previous tasks such as changing a watermark on an image or changing metadata tagging would take months of preparation for the storage and compute we’d need. Now that’s down to a number of hours.”

Shutterstock is also working with OpenAI, using their “models to generate content now trained off of our datasets,” Frazer says.

Optimizing for innovation

Analytics in cloud is also proving key to Shutterstock operations. The company relies on Amazon QuickSight and Athena to add visualizations and perform deep queries on its data to ensure optimal performance across the application lifecycle, Frazer says.

“Analytics doesn’t just stop at performance,” he says. “We want to understand everything that the customer is doing on our website. Why didn’t they click on this button? The customer hovered for two seconds and didn’t click – that type of data is invaluable when optimizing your site.”
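As a hedged sketch of the kind of deep query Frazer describes, here is how such a clickstream question might be posed to Athena with boto3; the database, table and output bucket are assumptions for illustration, not Shutterstock’s actual schema.

```python
# Sketch: run a clickstream query on Athena and wait for the result.
# Database, table and output bucket are hypothetical placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

QUERY = """
SELECT page, COUNT(*) AS hovers_without_click
FROM clickstream_events
WHERE event_type = 'hover' AND followed_by_click = false
GROUP BY page
ORDER BY hovers_without_click DESC
LIMIT 20
"""

run = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "web_analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = run["QueryExecutionId"]

# Poll until the query finishes, then print the result rows (skipping the header).
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    for row in athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"][1:]:
        print([col.get("VarCharValue") for col in row["Data"]])
```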

Other services such as Amazon CloudFront enable Shutterstock customers to enhance their content-on-demand networks, and Lambda — a serverless compute service that runs code without having to provision or manage servers — benefits Shutterstock customers wherever they are in the world, he says.

For Shutterstock, the cloud has led to faster innovation, but few enterprises are capable of exploiting sophisticated features out of the gate and ought to proceed cautiously with advanced cloud services, says IDC analyst Dave McCarthy.

While “the cloud gives enterprises access to the latest technologies with the ability to provision new resources in minutes, cloud providers are releasing new capabilities faster than enterprises can consume them,” McCarthy says.

Gartner analyst Arun Chandrasekaran adds that accelerated innovation in the cloud offers a high risk/reward ratio “disruptors can leverage” and creates a dynamic work environment to attract top talent. But there are pitfalls to innovating too quickly, particularly if the enterprise lacks a cohesive strategy and direction, he says.

“It can lead to too much experimentation and lack of clear business value from such projects,” Chandrasekaran says, as well as “potentially lower reliability and more firefighting than true innovation.”

Even those organizations with the talent to tackle cutting-edge technologies in the cloud can be slowed by the nature of their business environments, McCarthy says. “Many companies find themselves in a hybrid architecture where they have one foot in the old world and one in the new,” he says. “That creates some unique challenges in how to manage both environments consistently.”

Still, the drumbeat for innovation marches on. “CIOs need to think of digital transformation in the context of continuous innovation,” McCarthy says. “It should not be considered a one-time exercise, but rather an ongoing process where new technology becomes embedded into the business as it becomes available.”

For Shutterstock, that process is a facet of the company’s culture, thanks to strong IT leadership, a robust cloud infrastructure, a diverse toolset, and talent, Frazer says.

Artificial Intelligence, Cloud Computing, Media and Entertainment Industry

Imagine an airport that uses computer vision to track errant luggage in real time, or a commercial kitchen able to detect refrigeration conditions and prevent spoilage. Imagine an amusement park outfitting its rides with sensors that can talk directly to operations for upgraded safety and better guest experiences. Imagine a factory or a chain of retailers reducing energy and cutting equipment downtime. 

These scenarios are not imaginary. They are playing out across industries with the help of edge computing, Internet of Things (IoT) devices and an innovative approach known as Business Outcomes-as-a-Service.[1]

In each case, the company has volumes of streaming data and needs a way to quickly analyze it for outcomes such as greater asset availability, improved site safety and enhanced sustainability. In each case, they are taking strategic advantage of data generated at the edge, using artificial intelligence and cloud architecture. And they’re achieving significant wins.[2]

Here, we explore the demands and opportunities of edge computing and how an approach to Business Outcomes-as-a-Service can provide end-to-end analytics with lowered operational risk.

From the Edge to the Cloud and Back

Computing at the edge and the far edge allows data to be processed near the point where it’s generated. The speed and volume of data flowing, often in real time, from sensors and other IoT devices come with the potential for enormous gains in business and operational intelligence. But this advancement also adds complexity.

Most organizations still need methods for analyzing data at the point of creation so it can be acted upon immediately. Some have managed to derive meaningful, rapid and repeatable business outcomes from their IoT data streams and analytics using Business Outcomes-as-a-Service (Atos BOaaS), developed by Atos, an international leader in digital transformation. Already, Atos customers have reported positive experiences.

“For a retail customer, we’re talking about 66,000 hours saved in maintenance and compliance for maintaining the edge environment, which translates into about 480 metric tons of CO2 saved every year — thanks to automation and end-to-end monitoring,” said Arnaud Langer, Global Edge and IoT Senior Product Director at Atos.

Four Key Benefits of an End-to-End Analytics Service

As many tech and industry leaders are noting,[3] businesses are now prioritizing value and speed to deployment. Outcome-based solutions delivered in an as-a-service model allow companies to realize this rapid time-to-value. 

Those using a turnkey, scalable BOaaS platform are quickly able to manage an entire AI and IoT ecosystem from one dashboard, across the cloud, edge and far edge.[4] The solution allows them to generate value from real-time data using a platform for ingesting, storing and analyzing continuously streaming data. It’s bringing advanced analytics and AI capabilities where they’re needed most – the edge. Already deployed in commercial kitchens and retail chains, on factory floors and at amusement parks, the solution has shown the following benefits.

Value: Increased uptime of critical assets with predictive maintenance

Sustainability: Reduced onsite support and carbon footprint with touchless operations

Safety: AI-enhanced computer vision for safer, efficient operations

Cost-effectiveness: Full operational expense (OpEx) as-a-service pricing and operational model

For a manufacturer or retailer, an equipment or IT interruption would typically impact employees, customers and revenue because of a traditionally painful restoration process. The BOaaS monitoring system reduces downtime by detecting signs of trouble while equipment is still running, so remediation can happen before a failure occurs – and the problem can often be resolved remotely. If an immediate remedy is not possible, the system alerts staff, then procures and ships a replacement part to the site. Employees can securely connect to the platform and deploy the applications they need via the cloud, minimizing the impact on business operations.
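To make the idea concrete, here is a minimal, hypothetical sketch of the kind of early-warning check such a monitoring system might run; the thresholds and telemetry fields are assumptions, not Atos’s implementation.

```python
# Sketch: flag equipment for remediation before it fails.
# Thresholds and telemetry fields are illustrative assumptions.
from collections import deque

VIBRATION_LIMIT = 7.0      # mm/s, assumed "degrading" threshold
TREND_WINDOW = 30          # recent samples to examine

def needs_intervention(history: deque) -> bool:
    """Alert when vibration is trending up toward the limit, not after failure."""
    if len(history) < TREND_WINDOW:
        return False
    recent = list(history)[-TREND_WINDOW:]
    rising = recent[-1] > recent[0]
    near_limit = max(recent) > 0.8 * VIBRATION_LIMIT
    return rising and near_limit

def monitor(samples):
    history = deque(maxlen=500)
    for i, value in enumerate(samples):
        history.append(value)
        if needs_intervention(history):
            print(f"sample {i}: schedule maintenance / ship replacement part")
            break

if __name__ == "__main__":
    monitor([4.0 + i * 0.02 for i in range(200)])  # slowly rising vibration
```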

Across industries, data streams often outpace an organization’s ability to capture and analyze the information. By drawing on hundreds of previously untapped endpoints and millions of underutilized data points, the Atos system enables real-time innovations such as AI-based predictive maintenance and computer vision that monitor all hardware – lowering support costs, decreasing IT complexity and driving decarbonization.

The Technology Behind Business Outcomes

It was a tall order for Atos: Harness the power of data by bringing together hardware, software, and AI in one OpEx solution. 

To most effectively develop BOaaS as a touchless, end-to-end managed service, Atos leveraged the compute and storage power of Dell Technologies. Atos chose Dell Technologies’ Streaming Data Platform[5] for its ability to deliver reliable, fast and secure data pipelines from edge to cloud.

“Using Dell Technologies solutions, we’ve already achieved a 10% reduction in downtime. This can save up to millions of dollars annually,” Langer said. “In the future, we expect to triple that to 30% lower downtime, saving untold millions per customer, per location.”

Watch this video to learn more about how Atos and Dell are enabling game-changing innovation at the edge. 

[1] https://atos.net/en/2022/press-release_2022_05_04/atos-launches-innovative-edge-to-cloud-5g-and-ai-enabled-solution

[2] https://atos.net/en/portfolio/accelerate-decisions-with-data-for-better-business-outcomes

[3] https://www.enterprisetimes.co.uk/2022/12/30/business-and-technology-trends-for-professional-services-in-2023/

[4] https://www.engineering.com/story/bring-ai-and-automation-to-the-far-edge-of-the-factory-floor

[5] https://www.dell.com/en-us/dt/storage/streaming-data-platform.htm

IT Leadership

Is the move to cloud a modern gold rush?

This seems to be the case for many organizations as they embark on a cloud strategy to support their business goals. But there are pitfalls along the way: the cloud is, after all, simply an enabling technology and not a solution in itself.

Organizations are increasingly taking a considered approach to the adoption of Amazon Web Services, Microsoft Azure or Google Cloud Platform.

They want to innovate to create new applications, get things to market faster and be more competitive. Yet security demands, skills shortages and cost challenges, along with high levels of application complexity, tend to hold them back.

When multiple applications are deployed on one or more cloud platforms, it’s difficult to keep track of the volume of resources and frequency of change within the organization’s environment securely and cost-effectively.

How do they keep track to avoid rogue spending? Who’s got their hand on the switch to turn off the services they don’t need at specific times?

Another common challenge is maintaining the specialist skills needed to manage multiple cloud technologies along with legacy corporate data centers and traditional applications.

In a 451 Research report on enterprise transformation, 30% of respondent organizations agreed that they lacked the expertise needed to manage cloud platforms.

A new platform for charting the way forward

At NTT, we advise our clients on bringing together and managing all the components – cloud platforms, infrastructure and software – that they need to deliver their desired outcomes.

But we also support the optimal execution of their strategy with our Adaptive Cloud to Edge Platform. In short, we help our clients to choose the best execution venue for their workload and then deploy, operate and optimize their applications in the cloud.

The platform brings together our 20 years of cloud-management experience and our ambition to innovate without compromise.

By enabling AIOps, it delivers real-time analytics, automation, observability, security and service-delivery integration across the multicloud environment. It allows us to orchestrate and automate activities that drive business outcomes.

Built to control costs and meet compliance requirements

The platform makes our services more efficient, cost-effective, automated and secure across disparate cloud technologies, which in turn enables us to better meet our clients’ compliance and cost-control needs.

It also comes with built-in guardrails, allowing our clients a level of flexibility or self-service without introducing risk into what they’re doing or bypassing their governance and security requirements.

Efficient delivery using infrastructure as code

Our clients want a more streamlined, automated and reliable approach to delivering their solutions using infrastructure as code, and our platform supports that.

It’s not simply adding a layer of abstraction; it’s designed to let clients access their resources as efficiently as possible.

They can release their code using our platform, to be deployed through a managed process.

This means they can free up valuable resources to focus on development while increasing their velocity of software delivery.

Visibility, control and governance across the board

The platform is the heart of our Multicloud as a Service offering because it provides visibility, control and governance across all clouds and for all workloads.

It enhances the cloud providers’ native control planes with AI-backed insights for anomaly detection, correlation forecasting, automated operations, agile deployments and more, without limiting direct access to the cloud.

These elements give organizations more comfort in consuming these services in a way that is closely aligned with their needs.
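As a rough illustration of the anomaly-detection idea mentioned above – not NTT’s actual algorithms – a rolling z-score over an operational metric stream is a common starting point:

```python
# Sketch: simple rolling z-score anomaly detection on an operational metric.
# Window size and threshold are illustrative, not NTT's implementation.
from collections import deque
from statistics import mean, pstdev

def detect_anomalies(metric_stream, window=60, threshold=3.0):
    history = deque(maxlen=window)
    for t, value in enumerate(metric_stream):
        if len(history) == window:
            mu, sigma = mean(history), pstdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield t, value  # candidate anomaly for correlation and automated response
        history.append(value)

if __name__ == "__main__":
    import random
    stream = [random.gauss(100, 5) for _ in range(300)]
    stream[200] = 180  # injected spike
    for t, v in detect_anomalies(stream):
        print(f"anomaly at t={t}: {v:.1f}")
```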

The value of a managed service

Essentially, we enable our clients to innovate by deploying, operating and monitoring applications with speed and efficiency across their choice of cloud technologies.

This can be difficult for many clients to do themselves because most have managed their technology in a particular way for years and now have to make a step change into the cloud paradigm. But NTT has operated cloud platforms and delivered managed services across multiple industries and technologies for more than two decades, so we’re perfectly placed to help them make the leap.

Some of the components of our platform may be familiar, but how we bring them together is unique. Our many years of operating experience have been baked into this platform to make it a true differentiator.

That’s the value of a platform-enabled managed service: you’ll get things done quicker with high proficiency – and at a lower cost – because you’re getting access to a robust product driven by proven expertise and tailored to accelerate your organization’s digital transformation.

Read more about our Adaptive Cloud to Edge Platform.

George Rigby is Vice President of Go-to-Market: Managed Cloud and Infrastructure Services at NTT

Multi Cloud

Supply chain disruptions have impacted businesses across all industries this year. To help ease the transport portion of that equation, Danish shipping giant Maersk is undertaking a transformation that provides a prime example of the power of computing at the edge.

Gavin Laybourne, global CIO of Maersk’s APM Terminals business, is embracing cutting-edge technologies to accelerate and fortify the global supply chain, working with technology giants to implement edge computing, private 5G networks, and thousands of IoT devices at its terminals to elevate the efficiency, quality, and visibility of the container ships Maersk uses to transport cargo across the oceans.

Laybourne, who is based in The Hague, Netherlands, oversees 67 terminals, which collectively handle roughly 15 million containers shipped from thousands of ports. He joined Maersk three years ago from the oil and gas industry and since then has been overseeing public and private clouds, applying data analytics to all processes, and preparing for what he calls the next-generation “smartport” based on a switch to edge computing in real-time processing.

“Edge provides processing of real-time computation — computer vision and real-time computation of algorithms for decision making,” Laybourne says. “I send data back to the cloud where I can afford a 5-10 millisecond delay of processing.”

Bringing computing power to the edge enables data to be analyzed in near real-time — a necessity in the supply chain — and that is not possible with the cloud alone, he says.

Laybourne has been working closely with Microsoft on the evolving edge infrastructure, which will be key in many industries requiring fast access to data, such as industrial and manufacturing sectors. Some in his company focus on moving the containers. Laybourne is one who moves the electrons.

Digitizing the port of the future

Maersk’s move to edge computing follows a major cloud migration performed just a few years ago. Most enterprises that shift to the cloud are likely to stay there, but Laybourne predicts many industrial conglomerates and manufacturers will follow Maersk to the edge.

“Two to three years ago, we put everything on the cloud, but what we’re doing now is different,” Laybourne says. “The cloud, for me, is not the North Star. We must have the edge. We need real-time instruction sets for machines [container handling equipment at container terminals in ports] and then we’ll use cloud technologies where the data is not time-sensitive.”

Laybourne’s IT team is working with Microsoft to move cloud data to the edge, where containers are removed from ships by automated cranes and transferred to predefined locations in the port. To date, Laybourne and his team have migrated about 40% of APM Terminals’ cloud data to the edge, with a target to hit 80% by the end of 2023 at all operated terminals.

As Laybourne sees it, the move positions Maersk to capitalize on a forthcoming sea change for the global supply chain, one that will be fueled by enhanced data analytics, improved connectivity via 5G/6G private networks, and satellite connectivity and industry standards to enable the interoperability between ports. To date, Maersk controls about 19% of the overall capacity in its market.

As part of Maersk’s edge infrastructure, container contents can be examined by myriad IoT sensors immediately upon arrival at the terminals. RFIDs can also be checked in promptly and entered into the manifest before being moved robotically to their temporary locations. In some terminals, such operations are still performed by people, with cargo recorded on paper and data not accessible in the cloud for hours or longer, Laybourne says.

Cybersecurity, of course, is another major initiative for Maersk, as is data interoperability. Laybourne represents the company on the Digital Container Shipping Association committee, which is creating interoperability standards “because our customers don’t want to deal with paper. They want to have a digital experience,” he says.

The work to digitize is well under way. Maersk uses real-time digital tools such as Track & Trace and Container Status Notifications, APIs, and Terminal Alerts to keep customers informed about cargo. Automated cranes and robotics have removed most of the dangerous, manual work done in the past, and have improved the company’s sustainability and decarbonization efforts, Laybourne notes.

“Robotic automation has been in play in this industry for many years,” he says, adding that the pandemic has shifted the mindset of business-as-usual to upskilling laborers and making the supply chain far more efficient.

“We have automated assets such as cranes and berth and then there’s [the challenge of] how to make them more autonomous. After the pandemic, customers are now starting to reconfigure their supply chains,” he says, adding that autonomous, next-generation robotics is a key goal. “If you think of the energy crisis, the Ukraine situation, inflation, and more, companies are coming to a new view of business continuity and future sustainability compliance.”

Top vendors such as Microsoft and Amazon are looking at edge computing use cases for all industries, not just transport and logistics. According to IDC, more than 50% of new IT infrastructure will be deployed at the edge in 2023.

Gartner calls implementations like Maersk’s the “cloud-out edge” model. “It is not as much about moving from the cloud to edge as it is about bringing the cloud capabilities closer to the end users,” says Sid Nag, vice president and analyst at Gartner. “This also allows for a much more pervasive and distributed model.”

Next-gen connectivity and AI on deck

Aside from its partnership with Microsoft on edge computing, Maersk is collaborating with Nokia and Verizon on building private 5G networks at its terminals and recently demonstrated a blueprint of its plans at the Verizon Innovation Center in Boston. The ongoing work is among the first steps toward a breakthrough in connectivity and security, Laybourne maintains.

“It’s technology that opens up a lot more in terms of its connectivity, and in some of our terminals, where we have mission-critical systems platforms, the latency that 5G can offer is fantastic,” he says, noting that it will allow the cargo to “call home” data every 10 milliseconds as opposed to weeks. “But the real breakthrough on 5G and LTE is that I can secure my own spectrum. I own that port — nobody else. That’s the real breakthrough.”

Gartner’s Nag agrees that private 5G and edge computing provide meaningful synergies. “Private 5G can guarantee high-speed connectivity and low latencies needed in industries where use cases usually involve the deployment of hundreds of IoT devices, which then in turn require interconnectivity between each other,” Nag says.

For Maersk, installing IoT sensors and devices is also revolutionizing terminal operations. In the past, the cargo in containers had to be inspected and recorded on paper. Looking forward, Laybourne says, the process will all be automated and data will be digitized quickly.

His data science team, for example, has written algorithms for computer vision devices that are installed within the container to get around-the-clock electronic eyes on the cargo and identify and possibly prevent damage or spoilage.
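A hedged sketch of that pattern – sample frames on the device, score them locally and transmit only flagged events – might look like the following; the capture source and the classify() scoring function are placeholders, not Maersk’s code.

```python
# Sketch: sample frames from an in-container camera, score them locally,
# and report only flagged events upstream. classify() is a stand-in model.
import time
import cv2

def classify(frame) -> float:
    """Placeholder damage/spoilage score in [0, 1]; a real model would run here."""
    return float(frame.mean()) / 255.0  # toy proxy: overall brightness

def report(event):
    print("alert:", event)  # placeholder for an uplink to the terminal platform

def monitor(camera_index=0, interval_s=10, threshold=0.8):
    cap = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cap.read()
            if ok:
                score = classify(frame)
                if score > threshold:
                    report({"ts": time.time(), "score": round(score, 3)})
            time.sleep(interval_s)
    finally:
        cap.release()

if __name__ == "__main__":
    monitor()
```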

Edge computing with IoT sensors that incorporate computer vision and AI will also give customers what they have long wanted, and most pointedly during the pandemic: almost instant access to cargo data upon arrival, as well as automated repairs or fixes.

“It can then decide whether there’s an intervention needed, such as maintenance or repair, and that information is released to the customer,” the CIO says, adding that cameras and data collection devices will be installed throughout terminals to monitor for anything, be it theft, lost cargo, or potentially unsafe conditions.

Maersk has also been working with AI pioneer Databricks to develop algorithms to make its IoT devices and automated processes smarter. The company’s data scientists have built machine learning models in-house to improve safety and identify cargo. Data scientists will someday up the ante with advanced models to make all processes autonomous.

And this, Laybourne maintains, is the holy grail: changing the character of the company and the industry.

“We’ve been a company with a culture of configurators. So now we’ve become a culture of builders,” the digital leader says. “We’re building a lot of the software ourselves. This is where the data scientists sit and work on machine learning algorithms.”

For example, his data scientists are working on advanced ML models to handle exceptions or variations in data. They are also working on advanced planning and forecasting algorithms that will have an unprecedented impact on efficiencies. “Traditionally, this industry thinks about the next day,” the CIO says. “What we’re looking at actually is the next week, or the next three weeks.”

The core mission won’t change. But everything else will, he notes.

“We’re still going to have the job of lifting a box from a vessel into something else. Are we going to have autonomous floating containers and underseas hyperloops? I don’t think so,” Laybourne says, claiming the container industry is well behind others in its digital transformation but that is changing at lightning-fast speed. “Loading and unloading will still be part of the operation. But the technologies we put around it and in it will change everything.”

Cloud Computing, Edge Computing, Internet of Things, Supply Chain

The Tour de France is many things. It’s the world’s largest cycling event, attracting 150 million TV viewers in Europe alone and 10 million fans across social media platforms. It’s also a huge logistical challenge, requiring a complex network of road closures as well as ensuring millions of spectators enjoy the race safely.

Amaury Sport Organisation (A.S.O.), the organizers of the Tour de France, also need to ensure they can tell the story of the race to a vast audience of fans, something that hasn’t been easy over the years. Race officials stationed in remote areas have long had to contend with poor connectivity, and an ever-growing legion of viewers has stretched external-facing applications to their limits. 

NTT, which has been the official technology partner of the Tour de France for the past eight years, has used its Edge as a Service offerings to help the Tour de France retain its status as the world’s premier cycling event. By embracing technologies such as artificial intelligence (AI), the Internet of Things (IoT) and digital twins, A.S.O. have expanded the reach of the race to a new generation of fans and ensured they’re able to continually optimize race operations. 

“We started working on Tour de France in 2015 and, when we began, the digital capability of the event was very limited,” says Peter Gray, Senior Vice President, Advanced Technology Group (Sport) at NTT. “Lots of information was captured manually and communicated over race radio. There was limited information available on television and very limited information available on digital.

“Today, you see a television broadcast that’s full of live, rich data about rider speeds and time gaps, and you’ve got second-screen apps like Race Center that allow you to follow every moment of the race.”

For example, by leveraging a digital fabric consisting of IoT sensors and real-time analytics, NTT created a digital twin of the Tour de France last year, turning the roads of France into the world’s largest connected stadium. This provides race organizers with an unprecedented view of the race, allowing them to deliver new and enhanced digital experiences to engage fans around the world.

NTT has also created a digital human, an interactive kiosk featuring a realistic AI-generated human avatar which functions as a digital Tour de France guide. The avatar was located at the Grand Depart in Copenhagen and in NTT’s Tech Truck, which follows the entire tour.

“It’s hooked into all of the knowledge that we have about the race and can talk with you authoritatively about what’s going on,” says Gray. 

NTT has partnered with other sports organizers to transform the digital experience of sporting events such as the Absa Cape Epic, the Open Championship and the iconic NTT INDYCAR SERIES, where digital twin and predictive analytics technologies help to put fans behind the wheel of race cars. 

However, beyond the sporting arena, there are lessons to be learned for all organizations looking to embrace technologies such as edge computing to digitally transform their businesses.  

“What we’re doing with the Tour de France is a microcosm of the digital transformation that many businesses are going through. When I described the race being highly manual in 2015, that’s analogous to a business that’s running on manual operating processes, and continuing to use lots of paper, and having disconnected processes.

“Organizations are looking to use things like IoT to capture and measure different parts of their business. They’re looking to use things like digital twins to give them holistic visibility across an entire landscape. 

“That landscape might be a race traveling across France, or that might be a factory or retail site.”

Learn more about the revolutionary fan experience in the world’s largest connected stadium.

Edge Computing

A shift toward hybrid IT infrastructure has accelerated as a result of the pandemic, along with an increased demand for ultra-low latency, high-bandwidth networks and, by extension, edge computing.

However, many organizations simply don’t have the resources or the expertise to build or manage the complex distributed systems required to deliver edge computing effectively – a distributed computing paradigm that brings computation and data storage closer to the sources of data.

The open architecture, which is sometimes referred to as fog computing, drives storage and data processing towards a location where it’s needed. Next-generation technologies such as private 5G enable this edge connectivity, while IoT technologies deliver connected devices.

For these companies, an edge-as-a-service (EaaS) solution — which combines hardware, edge connectivity services, and cloud platforms — provides a one-stop solution to accelerate their path to effective edge computing. It offloads the complexities associated with moving applications to the edge and helps businesses confronting a lack of skills within their internal IT teams to achieve greater operational efficiency, security and growth.

For certain industries, such as manufacturing, healthcare and logistics, innovations at the edge such as private 5G and IoT are delivering an even more seismic shift, enabling them to embark on transformation journeys that were not possible before.

NTT’s Edge as a Service is the first globally available, fully managed hyper-converged edge computing infrastructure, IoT and private 5G network offering, delivering near-zero latency for enterprise applications at the network edge, boosting user experiences in a secure environment, optimizing costs, and enabling organizations to get closer to their sustainability goals.

“NTT’s managed edge computing enables applications and data to be placed closer to the sources and users of that data and content without the need for dedicated on-site IT resources,” said Parm Sandhu, Vice President of Enterprise 5G Products and Services at NTT. “NTT manages the hardware, application deployment, security and software patching. By processing the data on-site, companies can save on expensive backhaul transport costs required to deliver large amounts of data for traditional cloud processes.”

Embracing EaaS also enables organizations to overcome other challenges they may be facing. For some, latency is a problem due to the absence of on-premises centralized processing. Edge computing enables processing at greater speeds and volumes, leading to more actionable results in real time.

For others, security may be even more of an issue, because data at the edge can include personal details – such as facial recognition and personal health data – that are subject to greater regulatory scrutiny. Edge computing helps protect data stored at the edge and can help organizations meet growing regulatory requirements.

By embracing NTT’s unique EaaS solution, organizations can also expand their reach and pursue opportunities enabled by next-generation technology. For example, industrial firms can benefit from smart factories, precision monitoring and control, and predictive maintenance enabled by computer vision. The healthcare industry can streamline operations with remote patient monitoring, virtual consultations and robotic surgery.

NTT’s EaaS solution is also key to enabling next-generation technologies such as digital twin models, autonomous mobile robots (AMRs) and autonomous vehicles.

“These technologies require features that are enabled through NTT’s Edge as a Service,” says Sandhu. “For example, the adaptive control of operational assets through AI/ML-enabled applications that learn operational patterns and enact automated self-correction, and mass data virtualization and analysis that bring together disparate data streams into one comprehensive enterprise view.”

Delivering distributed systems from the edge is highly complex and unlikely to be subsumed as a core business capability except within the very largest of enterprises. Using EaaS can help organizations stay hyperfocused on their core business while recognizing new use cases to help them scale.

NTT is working with the City of Las Vegas to transform it with Private 5G. With over 40 million visitors each year and 600,000 residents, the city faces immense pressure on its infrastructure as it strives to deliver high-quality services. Watch the NTT keynote address at Mobile World Congress 2022.

Edge Computing

As the chief engineer and head of the department for digital transformation of manufacturing technologies at the Laboratory for Machine Tools and Production Engineering (WZL) within RWTH Aachen University, I’ve seen a lot of technological advancements in the manufacturing industry over my tenure. I hope to help other manufacturers struggling with the complexities of AI in manufacturing by summarizing my findings and sharing some key themes.

The WZL has been synonymous with pioneering research and successful innovations in the field of production technology for more than a hundred years, and we publish over a hundred scientific and technical papers on our research activities every year. The WZL is focused on a holistic approach to production engineering, covering the specifics of manufacturing technologies, machine tools, production metrology and production management, helping manufacturers test and refine advanced technology solutions before putting them into production at the manufacturing edge. In my team, we have a mix of computer scientists, like me, working together with mathematicians and mechanical engineers to help manufacturers use advanced technologies to gain new insights from machine, product, and manufacturing data.

Closing the edge AI insight gap starts and ends with people 

Manufacturers of all sizes are looking to develop AI models they can use at the edge to translate their data into something that’s helpful to engineers and adds value to the business. Most of our AI efforts are focused on creating a more transparent shop floor, with automated, AI-driven insights that can:

Enable faster and more accurate quality assessment
Reduce the time it takes to find and address process problems
Deliver predictive maintenance capabilities that reduce downtime (see the sketch after this list)
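
To give a toy illustration of the predictive-maintenance item above (not a method we use in production; the readings, units and threshold are invented for the example), a simple approach is to fit a linear trend to a wear indicator and extrapolate when it will cross a service limit:

```python
"""Toy predictive-maintenance sketch: extrapolate a wear indicator to
estimate remaining useful life. All numbers are hypothetical."""
from statistics import linear_regression  # Python 3.10+

# Hourly wear-indicator readings from a hypothetical spindle bearing
hours = [0, 1, 2, 3, 4, 5, 6, 7]
wear = [0.10, 0.12, 0.15, 0.17, 0.21, 0.24, 0.26, 0.30]

SERVICE_THRESHOLD = 0.80  # wear level at which maintenance is scheduled

slope, intercept = linear_regression(hours, wear)
if slope > 0:
    hours_to_threshold = (SERVICE_THRESHOLD - intercept) / slope
    remaining = hours_to_threshold - hours[-1]
    print(f"Estimated remaining useful life: {remaining:.1f} hours")
else:
    print("No upward wear trend detected; no maintenance predicted")
```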

However, AI at the manufacturing edge introduces some unique challenges. IT teams are used to deploying solutions that work for many general use cases, while operational technology (OT) teams usually need a specific solution for a unique problem. For example, the same architecture and technologies can enable AI at the manufacturing edge across various use cases, but more often than not, the way data is extracted from edge OT devices and systems and moved into IT systems is unique to each case.

Unfortunately, when we start a project, there usually isn't an existing interface for getting data out of OT devices and into the IT system that will process it, and each OT device manufacturer has its own systems and protocols. To take a general IT solution and turn it into something that answers specific OT needs, IT and OT teams must work together at the device level to extract meaningful data for the AI model. That requires IT to start speaking the language of OT and to develop a deep understanding of the challenges OT faces daily. In particular, it requires a clear division of responsibilities between the two domains and a commitment to common goals.
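
To make the shape of such a device-level bridge concrete, here is a deliberately simplified sketch. The asset name, signal and ingest endpoint are hypothetical, and the vendor-specific protocol read is simulated, since in practice it differs for every OT device:

```python
"""Minimal OT-to-IT bridge sketch; all names, URLs and values are hypothetical.

The vendor-specific read is simulated because every OT device manufacturer
exposes data through its own systems and protocols (OPC UA, Modbus,
proprietary fieldbuses, ...). Only the shape of the bridge is shown:
poll the device, normalize the reading, forward it to an IT-side
endpoint that feeds the AI model.
"""
import json
import random
import time
import urllib.request

IT_INGEST_URL = "http://edge-gateway.local/ingest"  # hypothetical IT-side endpoint


def read_spindle_temperature() -> float:
    """Placeholder for the vendor-specific protocol read; simulated here
    so the sketch runs end to end."""
    return random.gauss(45.0, 0.8)  # degrees Celsius, hypothetical signal


def normalize(raw_value: float) -> dict:
    """Map the raw reading onto the schema the IT-side AI pipeline expects."""
    return {
        "asset_id": "milling-cell-07",       # hypothetical asset name
        "signal": "spindle_temperature_c",
        "value": raw_value,
        "timestamp": time.time(),
    }


def forward(sample: dict) -> None:
    """Push one normalized sample to the IT ingest endpoint as JSON."""
    request = urllib.request.Request(
        IT_INGEST_URL,
        data=json.dumps(sample).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=5):
        pass  # a real bridge would check the response and retry on failure


if __name__ == "__main__":
    while True:
        forward(normalize(read_spindle_temperature()))
        time.sleep(1.0)  # polling interval agreed between the OT and IT teams
```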

Simplifying data insights at the manufacturing edge

Getting data from OT systems into the IT systems that run the AI models is only the beginning. A challenge I see a lot in the industry is that organizations still use multiple use-case-specific architectures and pipelines to build their AI foundation. The IT systems themselves often need to be upgraded, too, because legacy systems can't handle the transmission needs of these very large data sets.

Many of the companies we work with throughout our various research communities, industry consortia or conferences, such as WBAICNAP or AWK2023 — especially the small to medium manufacturers — ask us specifically for technologies that don’t require highly specialized data scientists to operate. That’s because manufacturers can have a hard time justifying the ROI if a project requires adding one or more data scientists to the payroll. 

To answer these needs, we develop solutions that manufacturers can use to get results at the edge as simply as possible. As a mechanical engineering institute, we’d rather not spend a lot of time doing research about infrastructure and managing IT systems, so we often seek out partners like Dell Technologies, who have the solutions and expertise to help reduce some of the barriers to entry for AI at the edge.

For example, when we did a project that involved high-frequency sensors, there was no product available at the time that could deal with our volume and type of data. We were working with a variety of open-source technologies to get what we needed, but securing, scaling, and troubleshooting each component led to a lot of management overhead.

We presented our use case to Dell Technologies, and they suggested their Streaming Data Platform. This platform reminds me of the way the smartphone revolutionized usability in 2007. When the smartphone came out, it had a very simple and intuitive user interface so anyone could just turn it on and use it without having to read a manual. 

The Streaming Data Platform is like that. It reduces friction so that people who are not computer scientists can capture the data flow from an edge device without deep expertise in the underlying systems. The platform also makes it easy to visualize the data at a glance, so engineers can quickly get to insights.

When we applied it to our use case, we found that it handles these data streams naturally and efficiently, and it cut the time required to manage the solution. Developers can now focus on writing code rather than wrestling with infrastructure, and the time saved on management goes into working with the data and getting better insights.
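
To show the general pattern without implying anything about the Streaming Data Platform's actual API, here is a minimal, generic sketch of a streaming-inference loop like the one described above: high-frequency readings are grouped into short windows and checked for anomalies in near real time. The window size, threshold and simulated sensor feed are all illustrative.

```python
"""Generic near-real-time streaming-inference sketch (not the Streaming
Data Platform API; all names, sizes and thresholds are illustrative)."""
import math
import random
import time
from collections import deque
from typing import Iterator

WINDOW_SIZE = 200        # samples per scoring window (hypothetical)
Z_SCORE_THRESHOLD = 4.0  # flag windows containing extreme readings


def sensor_stream() -> Iterator[float]:
    """Stand-in for a high-frequency sensor feed arriving from the edge."""
    while True:
        yield random.gauss(20.0, 0.5)  # e.g. a vibration or temperature signal
        time.sleep(0.001)


def score_window(window: deque) -> bool:
    """Very simple anomaly check: does any sample deviate strongly from
    the window mean? A real deployment would call an AI model here."""
    mean = sum(window) / len(window)
    variance = sum((x - mean) ** 2 for x in window) / len(window)
    std = math.sqrt(variance) or 1e-9
    return any(abs(x - mean) / std > Z_SCORE_THRESHOLD for x in window)


def run() -> None:
    window: deque = deque(maxlen=WINDOW_SIZE)
    for sample in sensor_stream():
        window.append(sample)
        if len(window) == WINDOW_SIZE and score_window(window):
            print("anomaly detected - route this window to engineers")
            window.clear()


if __name__ == "__main__":
    run()
```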

The future of AI at the manufacturing edge

With all of this said, one of the biggest challenges I see with AI at the manufacturing edge is recognizing that AI insights augment people and their knowledge rather than replace them, and that it is far more important for people to work together in managing and analyzing the data to ensure the end goal of producing business insights that serve a particular problem is met.

When manufacturers piece together many different solutions to find insights, it might work, but it's unnecessarily difficult. There are technologies available today that can remedy these challenges; it's just a matter of finding and evaluating them. We've found that the Dell Streaming Data Platform can capture data from edge devices, analyze the data using AI models in near real time, and feed insights back to the business in a way that benefits both IT and OT teams.

Learn more

If you are interested in current challenges, trends and solutions for sustainable production, find out more at AWK2023, where more than a thousand participants from production companies around the world come together to discuss solutions for green production.

Find out more about solutions for AI at the manufacturing edge from Dell Technologies and Intel.

***

Intel® Technologies Move Analytics Forward

Data analytics is the key to unlocking the most value you can extract from data across your organization. To create a productive, cost-effective analytics strategy that gets results, you need high-performance hardware that's optimized to work with the software you use.

Modern data analytics spans a range of technologies, from dedicated analytics platforms and databases to deep learning and artificial intelligence (AI). Just starting out with analytics? Ready to evolve your analytics strategy or improve your data quality? There’s always room to grow, and Intel is ready to help. With a deep ecosystem of analytics technologies and partners, Intel accelerates the efforts of data scientists, analysts, and developers in every industry. Find out more about Intel advanced analytics.
