Across the manufacturing industry, innovation is happening at the edge. Edge computing allows manufacturers to process data closer to the source where it is being generated, rather than sending it offsite to a cloud or data center for analysis and response. 

For an industry defined by machinery and supply chains, this comes as no surprise. The proliferation of smart equipment, robotics and AI-powered devices designed for the manufacturing sector underscores the value edge presents to manufacturers. 

Yet, when surveyed, a significant gap appears between organizations that recognize the value of edge computing (94%) and those that are currently running mature edge strategies (10%). Running edge devices and smart-manufacturing machines does not always mean there is a fully functioning edge strategy in place. 

Why the gap? 

What is holding back successful edge implementation in an industry that clearly recognizes its benefits?

The very same survey suggests that complexity is to blame, with 85% of respondents saying that a simpler path to edge operations is needed. 

What specifically do these complexities consist of? Top among them are: 

Data security constraints: managing large volumes of data generated at the edge, maintaining adequate risk protections, and adhering to regulatory compliance policies creates edge uncertainty.
Infrastructure decisions: choosing, deploying, and testing edge infrastructure solutions can be a complex, costly proposition. Components and configuration options vary significantly based on manufacturing environments and desired use cases.
Overcoming the IT/OT divide: barriers between OT (operational technology) devices on the factory floor and enterprise applications (IT) in the cloud limit data integration and time to value for edge initiatives. Seamless implementation of edge computing solutions is difficult to achieve without solid IT/OT collaboration in place.
Lack of edge expertise: a scarcity of edge experience limits the implementation of effective edge strategies. The move to real-time streaming data, data management, and mission-critical automation has a steep learning curve.

Combined, these challenges are holding back the manufacturing sector today, limiting edge ROI (return on investment), time to market and competitiveness across a critical economic sector. 

As organizations aspire toward transformation, they must find a holistic approach to simplifying — and reaping the benefits of — smart factory initiatives at the edge.

Build a Simpler Edge 

What does a holistic approach to manufacturing edge initiatives look like? It begins with these best practices: 

Start with proven technologies to overcome infrastructure guesswork and obtain a scalable, unified edge architecture that ingests, stores, and analyzes data from disparate sources in near-real time and is ready to run advanced smart-factory applications in a matter of days, not weeks (a simple sketch of this kind of ingest-and-analyze loop follows this list).
Deliver IT and OT convergence by eliminating data silos between edge devices on the factory floor (OT) and enterprise applications in the cloud (IT), rapidly integrating diverse data types for faster time to value.
Streamline the adoption of edge use cases with easy and quick deployment of new applications, such as machine vision for improved production quality and digital twin composition for situational modeling, monitoring, and simulation.
Scale securely using proven security solutions that protect the entire edge estate, from IT to OT. Strengthen industrial cybersecurity using threat detection, vulnerability alerts, network segmentation, and remote incident management.
Establish a foundation for future innovation with edge technologies that scale with your business and are easily configured to adopt new use cases — like artificial intelligence, machine learning, and private 5G — minimizing the complexity that holds manufacturers back from operating in the data age.
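As a rough illustration of what such an ingest-and-analyze loop can look like, the sketch below uses simulated sensor readings in place of real OT telemetry (which would typically arrive over OPC UA or MQTT) and a local SQLite table standing in for the edge data layer; the machine names, window size, and anomaly rule are all illustrative assumptions rather than a reference design.

```python
import random
import sqlite3
import statistics
import time

# Local SQLite file standing in for the edge data layer on the IT side.
db = sqlite3.connect("edge_events.db")
db.execute("CREATE TABLE IF NOT EXISTS anomalies (ts REAL, machine TEXT, temp_c REAL)")

def read_sensor(machine: str) -> float:
    """Simulated OT telemetry; real deployments would read OPC UA or MQTT feeds."""
    spike = 8.0 if random.random() < 0.02 else 0.0
    return random.gauss(70.0, 2.0) + spike

windows: dict[str, list[float]] = {"press-01": [], "press-02": []}  # hypothetical machines

for _ in range(200):                          # near-real-time polling loop
    for machine, history in windows.items():
        temp = read_sensor(machine)
        history.append(temp)
        del history[:-50]                     # keep a rolling 50-sample window
        if len(history) >= 10:
            mean = statistics.fmean(history)
            spread = statistics.pstdev(history)
            if spread and abs(temp - mean) > 3 * spread:   # simple 3-sigma rule
                db.execute("INSERT INTO anomalies VALUES (?, ?, ?)",
                           (time.time(), machine, temp))
                db.commit()
    time.sleep(0.01)

print(db.execute("SELECT COUNT(*) FROM anomalies").fetchone()[0], "anomalies recorded")
```

The same pattern extends naturally to forwarding flagged events to enterprise systems in the cloud for longer-term analysis, which is where the IT/OT convergence described above pays off.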

Don’t go it alone

The best way to apply these practices is to start with a tested solution designed specifically for manufacturing edge applications. Let your solution partner provide much of the edge expertise your organization may not possess internally. A partner who has successfully developed, tested and deployed edge manufacturing solutions for a wide variety of use cases will help you avoid costly mistakes and reduce time to value along the way. 

You don’t need to be an industry expert to know that the manufacturing sector is highly competitive and data-driven. Every bit of information, every insight matters and can mean the difference between success and failure. 

Product design and quality, plant performance and safety, team productivity and retention, customer preferences and satisfaction are all contained in your edge data. Your ability to access and understand that data depends entirely on the practices you adopt today. 

Digitally transforming edge operations is essential to maintaining and growing your competitive advantage moving forward.

A trusted advisor at the edge

Dell has been designing and testing edge manufacturing solutions for over a decade, with customers that include Ericsson, McLaren, Linde, and the Laboratory for Machine Tools at Aachen University.

You can learn more about our approach to edge solutions for the manufacturing sector, featuring Intel® Xeon® processors, at Dell Manufacturing Solutions. The latest 4th Gen Intel® Xeon® Scalable processors have built-in AI acceleration for edge workloads – with up to 10x higher PyTorch real-time inference performance with built-in Intel® Advanced Matrix Extensions (Intel® AMX) (BF16) vs. the prior generation (FP32)1.

See [A17] at intel.com/processorclaims: 4th Gen Intel® Xeon® Scalable processors. Results may vary.
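As a hedged illustration of what BF16 inference looks like in code (not a reproduction of the benchmark above), the sketch below runs an off-the-shelf model under PyTorch CPU autocast with bfloat16; the model, input shape, and any speedup are assumptions that depend on your hardware and on recent PyTorch and torchvision versions.

```python
import torch
import torchvision.models as models

# Illustrative model and input only; the footnoted benchmark used its own workloads.
model = models.resnet50(weights=None).eval()
x = torch.randn(1, 3, 224, 224)

with torch.inference_mode():
    fp32_out = model(x)                                   # FP32 baseline

    # BF16 path: on CPUs with AMX support and a recent PyTorch build, autocasting
    # to bfloat16 lets the oneDNN backend dispatch AMX-accelerated kernels.
    with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        bf16_out = model(x)

print(fp32_out.dtype, bf16_out.dtype)   # torch.float32, torch.bfloat16
```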


Where are you right now, as you read this? Our educated guess would be a city. According to current figures from the World Bank, around half of the world’s population — 56% to be precise — call cities their home. However, if we were to ask you the same question in 2050, those odds would have increased significantly.

Estimates from the same report suggest that in less than 30 years, 70% of the population will live in cities. In countries like the US, this figure is set to exceed 80%. It’s clear that urbanization is growing at a rapid rate. Cities are devouring a greater proportion of rural spaces — and an increasing number of people see them as providing the best opportunity for work and quality of life.

This growth places cities at the forefront of economic, social, and global concerns about energy and water use, traffic management, sanitation, and sustainability. To address those concerns, municipalities are increasingly turning to smart solutions that promise to improve infrastructure and governance. But how does a city know which vendors to trust? Which partners are most capable of bringing a city’s smart ambitions to fruition?

As with every high-growth market, regulation and certification often have to play catch-up. There are hundreds of companies promising the latest smart technology, the brightest and best innovations. Only relatively recently have organisations begun to evaluate and make efforts to agree upon the criteria for what qualifies a city as smart.

The AWS Smart City Competency partnership 

A smart city requires the proper mix of data, technology, infrastructure, and services to deliver sustainable and citizen-centric solutions. It’s important for city authorities to work with the right partners, ones that enable smart cities and help them thrive.

Amazon — more specifically Amazon Web Services (AWS) — has emerged as a leader in smart city certification with its Smart City Competency partnership. The initiative is designed to “support public sector customers’ innovations to quickly deliver smarter and more efficient citizen services.” As a trusted presence in the digital space, AWS is well positioned to deliver world-class recommendations to customers looking to build and deploy innovative smart city solutions.

The premise is fairly straightforward. The AWS Smart City Competency “will differentiate highly specialised AWS Partners with a demonstrated deep technical expertise and proven track record of customer success within the Smart City use cases.” The idea is that, through the AWS Smart City Competency, customers will be able to quickly and confidently identify approved partners to help them address smart city challenges.

The benefits are clear. When working with a certified AWS partner like Interact, you can feel secure knowing that the system has met and exceeded a high competence threshold. The partnership offers a host of additional benefits, including partner opportunity acceleration funding, discounted AWS training, and ongoing support and networking opportunities.

Opportunities from the World Bank, the UN, and elsewhere

The AWS Smart City Competency is just one example of an initiative designed to define smart city standards. The World Bank, a voice of authority in the smart city space, has launched the Global Smart City Partnership Program (GSCP).

The Global Smart City Partnership Program was established in 2018 to help World Bank Group teams and clients make the best use of data, technologies, and available resources. It is built on the understanding that technology- and data-driven innovations can improve city planning, management, and service delivery, better engage citizens, and enhance governmental accountability. Like the AWS program, the goal of the World Bank is to work closely with prominent smart city experts from all around the world and match them with certified partners they can trust.

The United Nations Development Programme (UNDP) for Smart Cities shares a similar desire for aligning smart city customers and dependable vendors. The UNDP cites a number of reasons why smart city projects fail, including organisational culture, difficulties in achieving behaviour change, lack of technical expertise and leadership, and a singular focus on technology. Too often, the actual needs and realities of customers are overlooked; only by matching those customers with genuine smart city experts can a greater level of success be achieved.

Emerging smart city standards

For a city to truly become a smart city, it needs to integrate data-driven solutions across numerous application areas, from transportation and mobility to utility planning, waste management, and emergency response. This means it’s likely that decision makers will turn to numerous vendors to carry out individual projects.

But cities are not silos. They’re living, breathing entities — ecosystems in which each element impacts and interacts with the next. This makes the issue of interoperability a pertinent one.

According to Smart Cities World, “Public tenders for various smart city applications globally more and more include the requests for compliance to international standards . . . [V]endors want to make sure that their systems are future-proof and allow interoperability with other market players.”

Not only does this highlight the benefits of being recognised and certified by the programs we’ve discussed, but it places a greater onus on providers to ensure that their products adopt an open systems approach.

Emerging standards, best practices, and coordinated initiatives, along with a general increase in experience and expertise, have made it easier to recognise what a smart city is—and, crucially, what it is not. For cities with smart aspirations, choosing the right partner is integral to success. Certification programs like the ones mentioned here make it far easier to judge who those partners are. To find out more about Interact, click here.


The digital transformation bandwagon is a crowded one, with enterprises of all kinds heeding the call to modernize. The pace has only quickened in a post-pandemic age of enhanced digital collaboration and remote work. Nonetheless, 70% of digital transformation projects fall short of their goals, as organizations struggle to implement complex new technologies across the enterprise.

Fortunately, businesses can leverage AI and automation to better manage the speed, scale, and complexity of the changes that come with digital transformation. In particular, artificial intelligence for IT operations (or AIOps) platforms can be a game changer. AIOps solutions use machine learning to connect and contextualize operational data for decision support or even auto-resolution of issues. This simplifies and streamlines the transformation journey, especially as the enterprise scales up to larger and larger operations.
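As a rough, vendor-neutral illustration of the "connect and contextualize" idea, the sketch below groups duplicate alerts into a single incident by host and check within a time window; the alert fields and the five-minute window are illustrative assumptions, not any particular AIOps product's method.

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Alert:
    ts: datetime
    host: str
    check: str
    message: str

def correlate(alerts, window=timedelta(minutes=5)):
    """Collapse repeated alerts into incidents keyed by (host, check)."""
    incidents = defaultdict(list)
    for alert in sorted(alerts, key=lambda a: a.ts):
        key = (alert.host, alert.check)
        bucket = incidents[key]
        if bucket and alert.ts - bucket[-1].ts > window:
            yield key, bucket                 # close the previous incident
            incidents[key] = bucket = []
        bucket.append(alert)
    for key, bucket in incidents.items():     # flush whatever is still open
        if bucket:
            yield key, bucket

now = datetime.now()
noise = [Alert(now + timedelta(seconds=i), "db-01", "disk_full", "92% used")
         for i in range(50)]
for (host, check), related in correlate(noise):
    print(f"{host}/{check}: 1 incident covering {len(related)} raw alerts")
```

In a real platform this kind of correlation feeds decision support or auto-resolution; the point here is simply how contextualizing raw events shrinks the noise an operations team has to handle.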

The benefits of automation and AIOps can only be realized, however, if companies choose solutions that put the power within reach – ones that package up the complexities and make AIOps accessible to users. And even then, teams must decide which business challenges to target with these solutions.  Let’s take a closer look at how to navigate these decisions about the solutions and use cases that can best leverage AI for maximum impact in the digital transformation journey.

Finding the right automation approach

Thousands of organizations in every part of the world see the advantages of AI-driven applications to streamline their IT and business operations. A “machine-first” approach frees staff from large portions of tedious, manual tasks while reducing risk and boosting output.

AIOps for decision support and automated issue resolution in the IT department can further add to the value derived from AI in an organization’s digital transformation.

Yet conversations with customers and prospects invariably touch on a shared complaint: Enterprise leaders know AI is a powerful ally in the digital transformation journey, but the technology can seem overwhelming, and scoping and shopping for all the components takes too long. They’re looking for vendors to offer easier “on-ramps” to digital transformation. They want SaaS options and quick-install packages that feature just the functions needed for a specific need or use case, so they can leap into their intelligent automation journey.

Ultimately, a highly effective approach for leveraging AI in digital transformation involves so-called Out of the Box (OOTB) solutions that package up the complexity as pre-built knowledge that’s tailored for specific kinds of use cases that matter most to the organization.

Choosing the right use cases

Digital transformations are paradoxical in that you’re modernizing the whole organization over the course of time, but it’s impossible to “boil the ocean” and do it all at once. That’s why it’s so important to choose highly strategic and impactful use cases to get the ball rolling, demonstrate early wins, and then expand more broadly across the enterprise over time. 

OOTB solutions can help pare down the complexity. But it is just as important to choose the right use cases to apply such solutions. Even companies that know automation and AIOps are necessary to optimize and scale their systems can struggle with exactly where to apply them in the enterprise to reap the most value.

By way of a cheat sheet, here are four key areas that are ripe for transformation with AI, and where the value of AIOps solutions will shine through most clearly in the form of operational and revenue gains:

IT incident and event management – A robust AIOps solution can prevent outages and enhance event governance via predictive intelligence and autonomous event management. Once implemented, such a solution can render a 360° view of all alerts across all enterprise technology stacks – leveraging machine learning to remove unwanted event noise and autonomously resolve business-critical issues.
Business health monitoring – A proactive AI-driven monitoring solution can manage the health of critical processes and business transactions, such as for the retail industry, for enhanced business continuity and revenue assurance. AI-powered diagnosis techniques can continually check the health of retail stores and e-commerce sites and automatically diagnose and resolve unhealthy components.
Business SLA predictions – AI can be used to predict delays in business processes, give ahead-of-time notifications, and provide recommendations to prevent outages and Service Level Agreement (SLA) violations (a simple sketch of this idea follows the list). Such a platform can be configured for automated monitoring, with timely anomaly detection and alerts across the entire workload ecosystem.
IDoc management for SAP – Intermediate Document (IDoc) management breakdowns can slow progress in transferring data or information from SAP to other systems and vice versa. An AI platform with intelligent automation techniques can identify, prioritize, and then autonomously resolve issues across the entire IDoc landscape – thereby minimizing risk, optimizing supply chain performance, and enhancing business continuity.
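The SLA-prediction item above can be approximated with even a very simple trend model. The sketch below fits a plain linear regression to recent batch-job runtimes and raises an early warning when the forecast crosses a deadline; the runtimes and the 60-minute SLA are invented for illustration and bear no relation to any specific AIOps product.

```python
from statistics import linear_regression   # Python 3.10+

runs = [42, 44, 47, 49, 52, 55, 58]        # recent nightly batch runtimes in minutes (illustrative)
sla_minutes = 60                           # hypothetical SLA for the batch window

slope, intercept = linear_regression(range(len(runs)), runs)
forecast = slope * len(runs) + intercept   # naive one-step-ahead prediction

if forecast > sla_minutes:
    print(f"Predicted runtime {forecast:.0f} min exceeds the {sla_minutes} min SLA -- notify ahead of time")
else:
    print(f"Predicted runtime {forecast:.0f} min is within SLA")
```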

Conclusion

Organizations pursuing digital transformation are increasingly benefiting from enhanced AI-driven capabilities like AIOps that bring new levels of IT and business operations agility to advanced, multi-cloud environments.  As these options become more widespread, enterprises at all stages of the digital journey are learning the basic formula for maximizing the return on these technology investments: They’re solving the complexity problem with SaaS-based, pre-packaged solutions; and they’re becoming more strategic in selecting use cases ideally suited for AIOps and the power of machine learning.

To get up and running fast at any stage of your digital journey, visit Digitate to learn more.


Modernization is on the minds of IT decision makers, and with good reason — legacy systems cannot keep up with the realities of today’s business environment. Additionally, many organizations are discovering their modernization advantage: their developer teams, and the databases that underpin their applications.

“Legacy modernization is really a strategic initiative that enables you to apply the latest innovations in development methodologies and technology to refresh your portfolio of applications,” says Frederic Favelin, EMEA Technical Director, Partner Presales at MongoDB.

His remarks came during an episode of Google Cloud’s podcast series “The Principles of a Cloud Data Strategy.”

“This is much more than just lift and shift,” Favelin continues. “Moving your existing application and databases to faster hardware or onto the cloud may get you slightly higher performance and marginally reduce cost, but without modernizing the whole infrastructure you will fail to realize transformational business agility, scale, and development freedom.”

The ‘Innovation Tax’

For many organizations, databases have proliferated, leading to a complex ecosystem of resources — cloud, on-premises, NoSQL, non-relational, traditional. The problem, Favelin says, is organizations have deployed non-relational or NoSQL databases as “band aids to compensate for the shortcomings of legacy databases.”

“So they quickly find that most non-relational databases excel at just a few specific things — niche things — and they have really limited capabilities otherwise, such as limited query capabilities or lack of data consistency,” says Favelin.

“So it’s at this point that organizations start to really feel the burden of learning, maintaining and trying to figure out how to integrate the data between a growing set of technologies. This often means that separate search technologies are added to the data infrastructure, which require teams to move and transform data from database to dedicated search engine.”

Add the need to integrate increasingly strategic mobile capabilities, and the environment gets even more complex, quickly. In addition, as organizations are striving to deliver a richer application experience through analytics, they sometimes need to use complex extract, transform, and load (ETL) operations to move the operational data to a separate analytical database.

This adds even more time, people and money to the day-to-day operations. “So at MongoDB, we give this a name: innovation tax,” Favelin says.

Toward a modern ecosystem

Favelin says a modern database solution must address three critical needs:

It should offer the fastest way to innovate, with flexibility and a consistent developer experience. It must be highly secure, have database encryption, and be fully auditable.
Next is the freedom and flexibility to be deployed on any infrastructure, from laptops to the cloud, integrating with Kubernetes. It must be scalable, resilient, and mission-critical, with auto-scaling.
Finally, to offer a unified modern application experience, the developer data platform needs to include full-text search capabilities and span both transactional and analytical workloads, bringing the freshness of transactional data to analytics so it can serve the best possible experience to users (a brief sketch of this unified experience follows the list).
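As a hedged sketch of that unified developer experience, the example below uses the pymongo driver against a MongoDB Atlas cluster to mix a transactional write, an operational query, and a full-text search in one API; the connection string, database name, and the Atlas Search index (assumed here to be named "default") are placeholders you would replace with your own.

```python
from pymongo import MongoClient

# Placeholder connection string; substitute your own Atlas cluster URI.
client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")
orders = client["shop"]["orders"]

# Transactional write
orders.insert_one({"customer": "ACME", "status": "shipped", "total": 125.0})

# Operational query
shipped = orders.count_documents({"status": "shipped"})

# Full-text search through the same driver, via the aggregation pipeline.
# Requires an Atlas Search index on this collection, assumed to be named "default".
results = orders.aggregate([
    {"$search": {"index": "default", "text": {"query": "ACME", "path": "customer"}}},
    {"$limit": 5},
])

print(shipped, list(results))
```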

“The MongoDB developer data platform helps ensure a unified developer experience,” Favelin says, “not just across different operational database workloads, but across data workloads, including search, mobile data, real-time analytics, and more.”

Check out “The Principles of a Cloud Data Strategy” podcast series from Google Cloud on Google Podcasts, Apple Podcasts, Spotify, or wherever you get your podcasts. Get started today with MongoDB Atlas on Google Cloud on Google Marketplace.


The Keys to Become a Data-Driven Organization

Only 26.5% of organizations say they’ve reached their goals of becoming a data-driven organization, according to NewVantage Partners’ Data and AI Leadership Executive Survey 2022. Astonishingly, this leaves nearly three-quarters of those surveyed indicating they’ve not met their goals in this area.

Fortunately, there are some bright points on the horizon. A recent Gartner survey says 78% of CFOs will increase or maintain enterprise digital investments. And Gartner forecasts worldwide IT spending will grow 3% in 2022. Further, a Gartner CDO survey indicated top-performing CDOs are significantly more likely to have projects with the CEO, and they engage in value delivery rather than enablement.

While this is great progress, perhaps one of the most important points of agreement is how to balance business value creation with risk and compliance mandates.

Business value creation vs. risk, security, privacy, and compliance

For many organizations, the conversation about balancing business value and security is cast through industry regulations. But it remains important for organizations to truly understand and agree on how and where they define their stance.

The crux is there’s no one size that fits all. But there are universal ways to mitigate risks and meet compliance mandates. For instance, if the use of PII data in certain analytical scenarios isn’t allowed, that doesn’t imply you should scrap the analytical project. You can mask or remove PII-related information and continue with your analytical projects.
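As a minimal sketch of that masking step, the example below pseudonymizes direct identifiers and drops free-text fields before records reach an analytics pipeline; the field names, salt handling, and hashing choice are illustrative assumptions, not a compliance recommendation.

```python
import hashlib

PII_FIELDS = {"name", "email", "phone"}   # assumed direct identifiers
DROP_FIELDS = {"support_notes"}           # assumed free-text field

def pseudonymize(value: str, salt: str = "rotate-me") -> str:
    """One-way hash keeps records joinable for analytics without exposing raw PII."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

def mask_record(record: dict) -> dict:
    cleaned = {}
    for key, value in record.items():
        if key in DROP_FIELDS:
            continue                       # remove fields analytics never needs
        cleaned[key] = pseudonymize(str(value)) if key in PII_FIELDS else value
    return cleaned

raw = {"name": "Jane Doe", "email": "jane@example.com", "phone": "555-0100",
       "region": "EMEA", "purchase_total": 310.5, "support_notes": "called twice"}
print(mask_record(raw))
```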

Defining the value created from data is fairly nuanced. Many companies struggle to agree on a unified lens through which to view their data’s value. To simplify, your organization can view your data through the filter of four categories:

Direct attributed revenue
Indirect attributed revenue
Cost savings and optimization
Risk and compliance failure avoidance

Balance data democratization and security at scale

Once your organization has defined guidelines and policies on how to treat regulated data, the biggest challenge is to enforce those policies at scale. A comprehensive data security and access governance framework can go a long way to help you frame your approach.

Perimeter-based security: In an on-premises world, your network is the gateway to the kingdom. If you lock that down, you may have the pretense of safety from the outside world. Internally, though, there’s still full access. The challenge is even larger in the cloud.
Application security: The next level of defense is to provide authentication for accessing applications. In this model, getting access to the network only gets you so far unless your credentials allow you access to the application you’re trying to use.
Data security: The last mile of your defense is data security. Even if someone gets through all the other layers, access is still controlled at the data level: privacy is defined there, so only authorized data is visible. Making sure fine-grained data access policies—as well as data masking and encryption—are applied down to the file, column, row, and cell is one of the most powerful ways to strengthen your security posture (a simplified sketch follows this list).
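To make the fine-grained idea concrete, here is a deliberately simplified sketch of row- and column-level filtering driven by a role-based policy; the roles, columns, and predicates are invented for illustration and are far cruder than what a governance platform actually enforces.

```python
# Hypothetical role-based policy: visible columns plus a row-level predicate.
POLICIES = {
    "analyst":  {"columns": {"region", "revenue"},            "rows": lambda row: True},
    "emea_rep": {"columns": {"account", "region", "revenue"}, "rows": lambda row: row["region"] == "EMEA"},
}

def apply_policy(rows, role):
    policy = POLICIES[role]
    visible = []
    for row in rows:
        if policy["rows"](row):                                    # row-level filter
            visible.append({k: v for k, v in row.items()
                            if k in policy["columns"]})            # column-level filter
    return visible

data = [
    {"account": "ACME",   "region": "EMEA", "revenue": 120_000, "ssn": "redacted"},
    {"account": "Globex", "region": "AMER", "revenue": 95_000,  "ssn": "redacted"},
]
print(apply_policy(data, "analyst"))
print(apply_policy(data, "emea_rep"))
```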

Enforcing these security protocols across your entire data estate is a massive challenge. And you can’t scale if enforcement is executed in a siloed, piecemeal fashion.

Universal data security platform

One of the emerging patterns for modern data infrastructure is that data governance and security processes need to become a horizontal competency across your entire data estate. This requires one of the most important C-suite dialogues between the CISO, CDAO, and CIO, since co-ownership across these groups is essential.

Universal data security platforms provide a central policy control plane that natively synchronizes policies into each distinct data service. Policies are created once and deployed everywhere. In addition, such a platform provides a single view into your data estate, sensitive data locations, policies applied, and access events. Privacera works with Fortune 100 and Fortune 500 companies, federal agencies, and myriad other types of enterprises across sectors to help them reach their data security goals. For example, Sun Life Financial teamed up with Privacera to secure and streamline their cloud-migration process, while seamlessly leveraging existing investments thanks to the open-standards framework. For more information on how to start or continue your data security journey, contact Privacera’s Center of Excellence.


Many people associate high-performance computing (HPC), also known as supercomputing, with far-reaching government-funded research or consortia-led efforts to map the human genome or to pursue the latest cancer cure.

But HPC can also be tapped to advance more traditional business outcomes — from fraud detection and intelligent operations to helping advance digital transformation. The challenge: making complex compute-intensive technology accessible for mainstream use.

As companies digitally transform and steer toward becoming data-driven businesses, there is a need for increased computing horsepower to manage and extract business intelligence and drive data-intensive workloads at scale. The rise of artificial intelligence (AI), machine learning (ML), and real-time analytics applications, often deployed at the edge, can utilize HPC resources to unlock insights from data and efficiently run increasingly large and more complex models and simulations.

The convergence of HPC with AI-based analytics is impacting nearly every industry and across a wide range of applications, including space exploration, drug discovery, financial modeling, automotive design, and systems engineering.

“HPC is becoming a utility in our lives — people aren’t thinking about what it takes to design this tire, validate a chip design, parse and analyze customer preferences, do risk management, or build a 3D structure of the COVID-19 virus,” notes Max Alt, distinguished technologist and director of Hybrid HPC at HPE. “HPC is everywhere, but you don’t think about it, because it’s hidden at the core.”

HPC’s scalable architecture is particularly well suited for AI applications, given the nature of computation required and the unpredictable growth of data associated with these workflows. HPC’s use of graphics-processing-unit (GPU) parallel processing power — coupled with its simultaneous processing of compute, storage, interconnects, and software — raises the bar on AI efficiencies. At the same time, such applications and workflows can operate and scale more readily.

Even with widespread usage, there is more opportunity to leverage HPC for better and faster outcomes and insights. HPC architecture — typically clusters of CPUs and GPUs working in parallel and connected to a high-speed network and data storage system — is expensive, requiring a significant capital investment. HPC workloads are typically associated with vast data sets, which means that public cloud can be an expensive option once latency and performance requirements are factored in. In addition, data security and data gravity concerns often rule out public cloud.

Another major barrier to more widespread deployment: a lack of in-house specialized expertise and talent. HPC infrastructure is far more complex than traditional IT infrastructure, requiring specialized skills for managing, scheduling, and monitoring workloads. “You have tightly coupled computing with HPC, so all of the servers need to be well synchronized and performing operations in parallel together,” Alt explains. “With HPC, everything needs to be in sync, and if one node goes down, it can fail a large, expensive job. So you need to make sure there is support for fault tolerance.”
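To make "tightly coupled" concrete, the hedged sketch below (assuming an MPI installation and the mpi4py package) splits a Monte Carlo estimate across ranks and combines the partial results with a collective allreduce; every rank must reach that call for the job to complete, which is exactly why a single failed node can take down a large, expensive job.

```python
# Run with, for example: mpirun -n 4 python pi_estimate.py
import random
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank computes an equal share of a Monte Carlo estimate of pi.
samples_per_rank = 1_000_000
random.seed(rank)
hits = sum(1 for _ in range(samples_per_rank)
           if random.random() ** 2 + random.random() ** 2 <= 1.0)

# Collective operation: every rank must reach this call for the job to finish.
total_hits = comm.allreduce(hits, op=MPI.SUM)

if rank == 0:
    print("pi ~=", 4.0 * total_hits / (samples_per_rank * size))
```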

HPE GreenLake for HPC Is a Game Changer

An as-a-service approach can address many of these challenges and unlock the power of HPC for digital transformation. HPE GreenLake for HPC enables companies to unleash the power of HPC without having to make big up-front investments on their own. This as-a-service-based delivery model enables enterprises to pay for HPC resources based on the capacity they use. At the same time, it provides access to third-party experts who can manage and maintain the environment in a company-owned data center or colocation facility while freeing up internal IT departments.

“The trend of consuming what used to be a boutique computing environment now as-a-service is growing exponentially,” Alt says.

HPE GreenLake for HPC bundles the core components of an HPC solution (high-speed storage, parallel file systems, low-latency interconnect, and high-bandwidth networking) in an integrated software stack that can be assembled to meet an organization’s specific workload needs.

As part of the HPE GreenLake edge-to-cloud platform, HPE GreenLake for HPC gives organizations access to turnkey and easily scalable HPC capabilities through a cloud service consumption model that’s available on-premises. The HPE GreenLake platform experience provides transparency for HPC usage and costs and delivers self-service capabilities; users pay only for the HPC resources they consume, and built-in buffer capacity allows for scalability, including unexpected spikes in demand. HPE experts also manage the HPC environment, freeing up IT resources and delivering access to specialized performance tuning, capacity planning, and life cycle management skills.

To meet the needs of the most demanding compute and data-intensive workloads, including AI and ML initiatives, HPE has turbocharged HPE GreenLake for HPC with purpose-built HPC capabilities. Among the more notable features are expanded GPU capabilities, including NVIDIA Tensor Core models; support for high-performance HPE Parallel File System Storage; multicloud connector APIs; and HPE Slingshot, a high-performance Ethernet fabric designed to meet the needs of data-intensive AI workloads. HPE also released lower entry points to HPC to make the capabilities more accessible for customers looking to test and scale workloads.

As organizations pursue HPC capabilities, they should consider the following:

Stop thinking of HPC in terms of a specialized boutique technology; think of it more as a common utility used to drive business outcomes.
Look for HPC options that are supported by a rich ecosystem of complementary tools and services to drive better results and deliver customer excellence.
Evaluate the HPE GreenLake for HPC model. Organizations can dial capabilities up and down, depending on need, while simplifying access and lowering costs.

HPC horsepower is critical, as data-intensive workloads, including AI, take center stage. An as-a-service model democratizes what’s traditionally been out of reach for most, delivering an accessible path to HPC while accelerating data-first business.

For more information, visit https://www.hpe.com/us/en/greenlake/high-performance-compute.html
