For years, quantum computing has seemed like the stuff of science fiction. But the truth is that quantum computing is here and it’s more accessible to organizations than you think. And while the technology is still in its infancy, it is advancing fast.

In a recent interview, Ken Durazzo, vice president of Dell Technologies’ OCTO Research Office, explained, “There is a race toward quantum.” In the very near future, the computational capabilities of quantum will be widely available to accelerate applications and reveal new forms of business value.

A Dell Technologies white paper titled “5 Things You Should Be Doing Now to Prepare for Quantum Computing” takes a deeper look at the technology and explains why companies should get started with quantum today.

What is quantum computing?

The first thing you need to know is that quantum computers are not just faster versions of the computers we use today. “Quantum systems fundamentally behave and compute much, much differently than our normal systems, our classical systems, do,” said Durazzo.

Traditional, or classical, computers rely on transistors that can be either on or off, a 1 or 0. And they store this information as bits.

Quantum computers harness the principles of quantum mechanics to consider multiple possibilities — not just 1 or 0. They use qubits to compute all the possibilities simultaneously. This makes quantum computers very, very good at calculating the possibilities when a problem has multiple possible outcomes. And for these specialized problems, they are very, very fast.

Will quantum computers replace classical computers?

You probably won’t ever replace all your classical computers with quantum computers. For one thing, quantum computers aren’t very good at finding definitive answers to very precise questions, such as “What is the current balance in my bank account?”

For another, quantum computers need classical computers to function. “You can’t have quantum computing without classical computing,” said Durazzo. “It is highly likely that we’ll see hybrid classical-quantum computing as the way forward through the era of fault-tolerant quantum systems.” In most current systems, a classical computer (complete with storage, processors, and networking) provides the input, which then goes to a quantum computing layer. The quantum layer does the processing and transmits the output back to a classical system, which could eventually be a high-performance computing (HPC) system as the number of qubits grows.
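
To make that hybrid pattern concrete, here is a minimal sketch, assuming the open-source Qiskit SDK and its Aer simulator are installed (exact import paths and APIs vary by version): a classical routine supplies the input, a simulated quantum layer runs the circuit, and the classical side post-processes the measurement counts.

```python
# Hypothetical sketch of the hybrid classical-quantum loop described above.
# Assumes Qiskit and its Aer simulator are installed (pip install qiskit qiskit-aer);
# exact APIs vary by Qiskit version.
import numpy as np
from qiskit import QuantumCircuit
from qiskit.circuit import Parameter
from qiskit_aer import AerSimulator

theta = Parameter("theta")
circuit = QuantumCircuit(1, 1)
circuit.ry(theta, 0)       # rotation angle supplied by the classical side
circuit.measure(0, 0)

simulator = AerSimulator()

def quantum_layer(angle, shots=1024):
    """Classical input in, quantum processing, classical output back."""
    bound = circuit.assign_parameters({theta: angle})
    counts = simulator.run(bound, shots=shots).result().get_counts()
    return counts.get("1", 0) / shots   # probability of measuring |1>

# Classical outer loop: sweep the input and post-process the quantum output.
best_angle = max(np.linspace(0, np.pi, 21), key=quantum_layer)
print(f"angle maximizing P(1): {best_angle:.2f} rad")
```

The same loop structure carries over when the simulator is swapped for a cloud-hosted quantum processor; only the backend changes.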

This setup makes it possible for anyone to experiment with quantum computing today. You can download the development tools and then interact with a virtual quantum processor (vQPU) or a physical quantum machine in the cloud. Both approaches provide an identical experience: they are programmed the same way and are indistinguishable for application or algorithm experimentation.

What should you be doing today?

Right now, very few people have experience with quantum computers. For this reason, Durazzo recommends that organizations start by learning as much as they can. You can download tools like Qiskit and start experimenting with quantum simulators. “At the end of the day, [quantum computing] really does take quite a bit of mental refactoring and a different way of thinking about computing and getting hands on keyboard is a wonderful way of starting to learn how this all works,” he advised.
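
As a first hands-on experiment of the kind Durazzo describes, the sketch below, assuming Qiskit and the qiskit-aer package are installed, entangles two qubits and samples the outcome distribution on a local simulator.

```python
# A first hands-on experiment: entangle two qubits and sample the results
# on a local simulator. Assumes qiskit and qiskit-aer are installed;
# exact import paths vary by version.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

bell = QuantumCircuit(2, 2)
bell.h(0)                 # put qubit 0 into superposition
bell.cx(0, 1)             # entangle qubit 1 with qubit 0
bell.measure([0, 1], [0, 1])

counts = AerSimulator().run(bell, shots=1000).result().get_counts()
print(counts)             # roughly half '00' and half '11', never '01' or '10'
```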

Enterprises should also choose a partner who has experience with quantum computing to help guide them on their quantum journey. Dell Technologies has been working with quantum computing since 2016 and has resources that can help customers get up to speed with the new technology quickly. In fact, “We can now have a customer take our software and our Dell hardware and have a working quantum system in an hour,” Durazzo explained. “We have built significant automation in the system to enable a fast and frictionless deployment.”

He added, “Everything that we do is to try and make it easier for customers to adopt and take rapid advantage of new technology to accelerate their business. That’s our primary goal as a research organization. We take some of the friction out for customers because we’re learning ahead of them.”

For more information about quantum computing and details about how to get started with your quantum journey, check out “5 Things You Should Be Doing Now to Prepare for Quantum Computing.”

***

Intel® Technologies Move Analytics Forward

Data analytics is the key to unlocking the most value you can extract from data across your organization. To create a productive, cost-effective analytics strategy that gets results, you need high performance hardware that’s optimized to work with the software you use.

Modern data analytics spans a range of technologies, from dedicated analytics platforms and databases to deep learning and artificial intelligence (AI). Just starting out with analytics? Ready to evolve your analytics strategy or improve your data quality? There’s always room to grow, and Intel is ready to help. With a deep ecosystem of analytics technologies and partners, Intel accelerates the efforts of data scientists, analysts, and developers in every industry. Find out more about Intel advanced analytics.

IT Leadership

Companies are capturing more data and deploying more compute capacity at the edge. At the same time, they are laying the groundwork for a distributed enterprise that can capitalize on a multiplier effect to maximize intended business outcomes.

The number of edge sites — factory floors, retail shops, hospitals, and countless other locations — is growing. This gives businesses more opportunity to gain insights and make better decisions across the distributed enterprise. Data follows the activities of customers, employees, patients, and processes. Pushing computing power to the distributed edge ensures that data can be analyzed in near real time — a model not possible with cloud computing.

With centralized cloud computing, bandwidth constraints mean it takes too long to move large data sets and analyze them. This introduces unwanted decision latency, which, in turn, destroys the business value of the data. Edge computing addresses the need for immediate processing by leaving data where it is created and instead moving compute resources next to those data streams. This strategy enables real-time analysis of data as it is being captured and eliminates decision delays. The next level of operational efficiency can then be realized through real-time decision-making and automation at the edge, where the activity takes place.
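
As an illustration of this pattern (not a specific HPE offering), the sketch below processes a batch of hypothetical sensor readings where they are produced and forwards only a compact summary and an alert flag, rather than the raw stream.

```python
# Illustrative sketch of the edge pattern described above: analyze readings
# where they are produced and forward only a compact summary or alert,
# instead of shipping the raw stream to a central cloud.
from statistics import mean

TEMP_LIMIT_C = 85.0   # hypothetical threshold for this example

def process_locally(readings):
    """Run at the edge site: raw data in, small actionable result out."""
    summary = {
        "count": len(readings),
        "avg_c": round(mean(readings), 2),
        "max_c": max(readings),
    }
    summary["alert"] = summary["max_c"] > TEMP_LIMIT_C
    return summary

raw_stream = [71.2, 73.9, 90.4, 72.1]   # e.g., one second of sensor data
result = process_locally(raw_stream)
if result["alert"]:
    pass                                # act immediately, at the edge
print(result)                           # only this summary leaves the site
```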

Industry experts are projecting that 50 billion devices will be connected worldwide this year, with the amount of data being generated at the edge slated to increase by over 500% between 2019 and 2025, amounting to a whopping 175 zettabytes worldwide. The tipping point comes in 2025, when, experts project, roughly half of all data will be generated and processed at the edge, soon overtaking the amount of data and applications addressed by centralized cloud and data center computing.

The deluge of edge data opens opportunities for all kinds of actionable insights, whether it’s to correct a factory floor glitch impacting product quality or serving up a product recommendation based on customers’ past buying behavior. On its own, such individual action can have genuine business impact. But when you multiply the possible effects across thousands of locations processing thousands of transactions, there is a huge opportunity to parlay insights into revenue growth, cost reduction, and even business risk mitigation.

“Compute and sensors are doing new things in real time that they couldn’t do before, which gives you new degrees of freedom in running businesses,” explains Denis Vilfort, director of Edge Marketing at HPE. “For every dollar increasing revenue or decreasing costs, you can multiply it by the number of times you’re taking that action at a factory or a retail store — you’re basically building a money-making machine … and improving operations.”

The multiplier effect at work

The rise of edge computing essentially transforms the conventional notion of a large, centralized data center into many much smaller data centers located everywhere, Vilfort says. “Today we can package compute power for the edge in less than 2% of the space the same firepower took up 25 years ago. So you don’t want to centralize computing — that’s mainframe thinking,” he explains. “You want to democratize compute power and give everyone access to small — but powerful — distributed compute clusters. Compute needs to be where the data is: at the edge.”

Each location leverages its own insights and can share them with others. These small insights can optimize operation of one location. Spread across all sites, these seemingly small gains can add up quickly when new learnings are replicated and repeated. The following examples showcase the power of the multiplier effect in action:

- Foxconn, a large global electronics manufacturer, moved from a cloud implementation to high-resolution cameras and artificial intelligence (AI) enabled at the edge for a quality assurance application. The shift reduced pass/fail time from 21 seconds down to one second; when this reduction is multiplied across a monthly production of thousands of servers, the company benefits from a 33% increase in unit capacity, representing millions more in revenue per month.
- A supermarket chain tapped in-store AI and real-time video analytics to reduce shrinkage at self-checkout stations. That same edge-based application, implemented across hundreds of stores, prevents millions of dollars of theft per year.
- Texmark, an oil refinery, was pouring more than $1 million a year into a manual inspection process, counting on workers to visually inspect 133 pumps and miles of pipeline on a regular basis. Having switched to an intelligent edge compute model, including the installation of networked sensors throughout the refinery, Texmark is now able to catch potential problems before anyone is endangered, not to mention benefit from doubled output while cutting maintenance costs in half.
- A big box retailer implemented an AI-based recommendation engine to help customers find what they need without having to rely on in-store experts. Automating that process increased revenue per store. Multiplied across its thousands of sites, the edge-enabled recommendation process has the potential to translate into revenue upside of more than $350 million for every 1% revenue increase per store.

The HPE GreenLake Advantage

The HPE GreenLake platform brings an optimized operating model, consistent and secure data governance practices, and a cloudlike platform experience to edge environments — creating a robust foundation upon which to execute the multiplier effect across sites. For many organizations, the preponderance of data needs to remain at the edge, for a variety of reasons, including data gravity issues or because there’s a need for autonomy and resilience in case a weather event or a power outage threatens to shut down operations.

HPE GreenLake’s consumption-based as-a-service model ensures that organizations can more effectively manage costs with pay-per-use predictability, also providing access to buffer capacity to ensure ease of scalability. This means that organizations don’t have to foot the bill to build out costly IT infrastructure at each edge location but can, rather, contract for capabilities according to specific business needs. HPE also manages the day-to-day responsibilities associated with each environment, ensuring robust security and systems performance while creating opportunity for internal IT organizations to focus on higher-value activities.

As the benefits of edge computing are multiplied across processes and locations, the advantages are clear. For example, suppose an HPE GreenLake compute service costing, say, $800 per location per month helps a location add $2,000 per month in bottom-line profit. The net gain is $1,200 per location per month. Multiplied across 1,000 locations, that is an additional $1.2 million in profit per month — or $14.4 million per year. Small positive changes across a distributed enterprise quickly multiply, and tangible results are now within reach.
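
The arithmetic behind that multiplier effect is simple enough to express directly; the figures below are the ones used in the example above.

```python
# The arithmetic behind the multiplier effect described above.
gain_per_site = 2000      # added monthly profit per location ($)
service_cost = 800        # edge compute service per location ($/month)
sites = 1000

net_per_site = gain_per_site - service_cost    # $1,200 per location per month
monthly_total = net_per_site * sites           # $1,200,000 per month
annual_total = monthly_total * 12              # $14,400,000 per year
print(net_per_site, monthly_total, annual_total)
```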

As companies build out their edge capabilities and sow the seeds to benefit from a multiplier effect, they should remember to:

- Evaluate what decisions can benefit from being made and acted upon in real time, as well as what data is critical to delivering on those insights, so the edge environments can be built out accordingly
- Consider scalability: how many sites could benefit from a similar setup, and how hard it will be to deploy and operate those distributed environments
- Identify the success factors that lead to revenue gains or cost reductions at a specific edge site, and replicate that setup and those workflows at other sites

In the end, the multiplier effect is all about maximizing the potential of edge computing to achieve more efficient operations and maximize overall business success. “We’re in the middle of shifting from an older way of doing things to a new and exciting way of doing things,” Vilfort says. “At HPE we are helping customers find a better way to use distributed technology in their distributed sites to enable their distributed enterprise to run more efficiently.”

For more information, click here.

Cloud Computing

The benefits of analyzing vast amounts of data, long-term or in real time, have captured the attention of businesses of all sizes. Big data analytics has moved beyond the rarified domain of government and university research environments equipped with supercomputers to include businesses of all kinds that are using modern high performance computing (HPC) solutions to get their analytics jobs done. It’s big data meets HPC ― otherwise known as high performance data analytics.

Bigger, Faster, More Compute-intensive Data Analytics

Big data analytics has relied on HPC infrastructure for many years to handle data mining processes. Today, parallel processing solutions handle massive amounts of data and run powerful analytics software that uses artificial intelligence (AI) and machine learning (ML) for highly demanding jobs.

A report by Intersect360 Research found that “Traditionally, most HPC applications have been deterministic; given a set of inputs, the computer program performs calculations to determine an answer. Machine learning represents another type of applications that is experiential; the application makes predictions about new or current data based on patterns seen in the past.”

This shift to AI, ML, large data sets, and more compute-intensive analytical calculations has contributed to the growth of the global high performance data analytics market, which was valued at $48.28 billion in 2020 and is projected to grow to $187.57 billion in 2026, according to research by Mordor Intelligence. “Analytics and AI require immensely powerful processes across compute, networking and storage,” the report explained. “As a result, more companies are increasingly using HPC solutions for AI-enabled innovation and productivity.”

Benefits and ROI

Millions of businesses need to deploy advanced analytics at the speed of events. A subset of these organizations will require high performance data analytics solutions. Those HPC solutions and architectures will benefit from the integration of diverse datasets from on-premise to edge to cloud. The use of new sources of data from the Internet of Things to empower customer interactions and other departments will provide a further competitive advantage to many businesses. Simplified analytics platforms that are user-friendly resources open to every employee, customer, and partner will change the responsibilities and roles of countless professions.

How does a business calculate the return on investment (ROI) of high performance data analytics? It varies with different use cases.

For analytics used to help increase operational efficiency, key performance indicators (KPIs) contributing to ROI may include downtime, cost savings, time-to-market, and production volume. For sales and marketing, KPIs may include sales volume, average deal size, revenue by campaign, and churn rate. For analytics used to detect fraud, KPIs may include number of fraud attempts, chargebacks, and order approval rates. In a healthcare environment, analytics used to improve patient outcomes might include key performance indicators that track cost of care, emergency room wait times, hospital readmissions, and billing errors.
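
As a generic illustration of how such KPI improvements might roll up into an ROI figure, the sketch below uses the classic ROI formula with hypothetical numbers; the KPI names and values are not drawn from any case in this article.

```python
# A generic sketch of how KPI deltas might roll up into an ROI figure.
# The KPI names and dollar values below are hypothetical.
def roi(total_benefit, total_cost):
    """Classic ROI: net gain relative to what was spent."""
    return (total_benefit - total_cost) / total_cost

# Hypothetical annualized benefits attributed to an analytics deployment.
kpi_benefits = {
    "downtime_avoided": 400_000,
    "fraud_losses_prevented": 250_000,
    "faster_time_to_market": 150_000,
}
analytics_cost = 500_000

print(f"ROI: {roi(sum(kpi_benefits.values()), analytics_cost):.0%}")  # 60%
```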

Customer Success Stories

Combining data analytics with HPC:

- A technology firm applies AI, machine learning, and data analytics to client drug diversion data from acute, specialty, and long-term care facilities and delivers insights within five minutes of receiving new data, while maintaining an HPC environment with 99.99% uptime to comply with service level agreements (SLAs).
- A research university was able to tap into 2 petabytes of data across two HPC clusters with 13,080 cores to create a mathematical model to predict behavior during the COVID-19 pandemic.
- A technology services provider is able to inspect 124 moving railcars ― a 120% reduction in inspection time ― and transmit results in eight minutes, based on processing and analyzing 1.31 terabytes of data per day.
- A race car designer is able to process and analyze 100,000 data points per second per car ― one billion in a two-hour race ― that are used by digital twins running hundreds of different race scenarios to inform design modifications and racing strategy.
- Scientists at a university research center are able to utilize hundreds of terabytes of data, processed at I/O speeds of 200 Gbps, to conduct cosmological research into the origins of the universe.

Data Scientists are Part of the Equation

High performance data analytics is gaining stature as more and more data is being collected.  Beyond the data and HPC systems, it takes expertise to recognize and champion the value of this data. According to Datamation, “The rise of chief data officers and chief analytics officers is the clearest indication that analytics has moved from the backroom to the boardroom, and more and more often it’s data experts that are setting strategy.” 

No wonder skilled data analysts continue to be among the most in-demand professionals in the world. The U.S. Bureau of Labor Statistics predicts that the field will be among the fastest-growing occupations for the next decade, with 11.5 million new jobs by 2026. 

For more information read “Unleash data-driven insights and opportunities with analytics: How organizations are unlocking the value of their data capital from edge to core to cloud” from Dell Technologies. 


Data Management

From telecommunications networks to the manufacturing floor, through financial services to autonomous vehicles and beyond, computers are everywhere these days, generating a growing tsunami of data that needs to be captured, stored, processed, and analyzed.

At Red Hat, we see edge computing as an opportunity to extend the open hybrid cloud all the way to data sources and end users. Where data has traditionally lived in the datacenter or cloud, there are benefits and innovations that can be realized by processing the data these devices generate closer to where it is produced.

This is where edge computing comes in.

What is edge computing?

Edge computing is a distributed computing model in which data is captured, stored, processed, and analyzed at or near the physical location where it is created. By pushing computing out closer to these locations, users benefit from faster, more reliable services while companies benefit from the flexibility and scalability of hybrid cloud computing.

Edge computing vs. cloud computing

A cloud is an IT environment that abstracts, pools, and shares IT resources across a network. An edge is a computing location at the edge of a network, along with the hardware and software at those physical locations. Cloud computing is the act of running workloads within clouds, while edge computing is the act of running workloads on edge devices.

You can read more about cloud versus edge here.

4 benefits of edge computing

As the number of computing devices has grown, our networks simply haven’t kept pace with the demand, causing applications to be slower and/or more expensive to host centrally.

Pushing computing out to the edge helps reduce many of the issues and costs related to network latency and bandwidth, while also enabling new types of applications that were previously impractical or impossible. 

1.    Improve performance

When applications and data are hosted on centralized datacenters and accessed via the internet, speed and performance can suffer from slow network connections. By moving things out to the edge, network-related performance and availability issues are reduced, although not entirely eliminated.

2. Place applications where they make the most sense

By processing data closer to where it’s generated, insights can be gained more quickly and response times reduced drastically. This is particularly true for locations that may have intermittent connectivity, including geographically remote offices and on vehicles such as ships, trains, and airplanes.

3. Simplify meeting regulatory and compliance requirements

Different situations and locations often have different privacy, data residency, and localization requirements, which can be extremely complicated to manage through centralized data processing and storage, such as in datacenters or the cloud.

With edge computing, however, data can be collected, stored, processed, managed, and even scrubbed in-place, making it much easier to meet different locales’ regulatory and compliance requirements. For example, edge computing can be used to strip personally identifiable information (PII) or faces from video before being sent back to the datacenter.
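
A minimal sketch of that scrubbing idea, with hypothetical field names, might look like the following: records are filtered at the edge so only non-identifying fields leave the site.

```python
# Illustrative sketch of scrubbing at the edge: drop or withhold personally
# identifiable fields before records leave the site. Field names are hypothetical.
PII_FIELDS = {"name", "email", "face_crop"}

def scrub(record):
    """Return a copy of the record that is safe to send back to the datacenter."""
    return {k: v for k, v in record.items() if k not in PII_FIELDS}

edge_record = {
    "camera_id": "dock-04",
    "timestamp": "2022-03-01T10:15:00Z",
    "name": "Jane Doe",          # stays on-site
    "face_crop": b"...",         # stays on-site
    "people_count": 3,           # aggregate measurement leaves the site
}
print(scrub(edge_record))
```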

4.    Enable AI/ML applications

Artificial intelligence and machine learning (AI/ML) are growing in importance and popularity since computers are often able to respond to rapidly changing situations much more quickly and accurately than humans.

But AI/ML applications often require processing, analyzing, and responding to enormous quantities of data which can’t reasonably be achieved with centralized processing due to network latency and bandwidth issues. Edge computing allows AI/ML applications to be deployed close to where data is collected so analytical results can be obtained in near real-time.

Red Hat’s approach to edge computing

Of course, the many benefits of edge computing come with some additional complexity in terms of scale, interoperability, and manageability.

Edge deployments often extend to a large number of locations that have minimal (or no) IT staff, or that vary in physical and environmental conditions. Edge stacks also often mix and match a combination of hardware and software elements from different vendors, and highly distributed edge architectures can become difficult to manage as infrastructure scales out to hundreds or even thousands of locations.

The Red Hat Edge portfolio addresses these challenges by helping organizations standardize on a modern hybrid cloud infrastructure, providing an interoperable, scalable and modern edge computing platform that combines the flexibility and extensibility of open source with the power of a rapidly growing partner ecosystem.

The Red Hat Edge portfolio includes:

- Red Hat Enterprise Linux and Red Hat OpenShift, which are designed to be the common platform for all of an organization’s infrastructure, from core datacenters out to edge environments.
- Red Hat Advanced Cluster Management for Kubernetes and Red Hat Ansible Automation Platform, which provide the management and automation platforms needed to drive visibility and consistency across the organization’s entire domain.
- The Red Hat Application Services portfolio, which provides critical integration for enterprise applications while also building a robust data pipeline.

The Red Hat Edge portfolio allows organizations to build and manage applications across hybrid, multi-cloud, and edge locations, increasing app innovation, speeding up deployment and updates, and improving overall DevSecOps efficiency.

To learn more, visit Red Hat here.

Edge Computing

2022 could be a turning point for pairing edge computing and 5G in the enterprise. Let’s examine trends to watch.

The distributed, granular nature of edge computing – where an “edge device” could mean anything from an iPhone to a hyper-specialized IoT sensor on an oil rig in the middle of an ocean – is reflected in the variety of its enterprise use cases.

There are some visible common denominators powering edge implementations: Containers and other cloud-native technologies come to mind, as does machine learning. But the specific applications of edge built on top of those foundations quickly diversify.

“Telco applications often have little in common with industrial IoT use cases, which in turn differ from those in the automotive industry,” says Gordon Haff, technology evangelist, Red Hat.

This reflects the diversity of broader edge computing trends he sees expanding in 2022.

When you pair maturing edge technologies and the expansion of 5G networks, the enterprise strategies and goals could become even more specific.

Simply put, “the 5G and edge combination varies by the type of enterprise business,” says Yugal Joshi, partner at Everest Group, where he leads the firm’s digital, cloud, and application services research practices.

Broadly speaking, the 5G-edge tandem is poised to drive the next phases of digital transformations already underway in many companies. As Joshi sees it, there will be a new wave of high-value production assets (including the copious amounts of data that edge devices and applications produce) becoming mainstream pieces of the IT portfolio – and subsequently creating business impact.

“Enterprises combine 5G to edge locations and create a chain of smart devices that can communicate with each other and back-end systems, unlike earlier times where network transformation didn’t touch the last-mile device,” Joshi says.

 

Edge computing’s turning-point year

The 5G-edge pairing is a long-tail event for enterprises. But there are plenty of reasons – including, of course, the expansion of telco-operated 5G networks – to think 2022 will be a turning-point year.

“We’ll see the transition from many smaller, early-stage deployments to wide-scale, global deployments of production 5G networks, following cloud-native design principles,” says Red Hat CTO Chris Wright. “As we provide a cloud-native platform for 5G, we have great visibility into this transition.”

“In 2022, 5G and edge will unify as a common platform to deliver ultra-reliable and low latency applications,” says Shamik Mishra, CTO for connectivity, Capgemini Engineering. A confluence of broader factors is feeding this belief, including, of course, more widely available 5G networks.

“Edge use cases have a potential to go mainstream in 2022 with the development of edge-to-cloud architecture patterns and the rollout of 5G,” says Saurabh Mishra, senior manager of IoT at SAS.

The “last mile” concept is key. From a consumer standpoint, the only thing most people really care about when it comes to 5G is: “This makes my phone faster.”

The enterprise POV is more complex. At its core, though, the 5G-edge relationship also boils down to speed, but it’s usually expressed in two related terms more familiar to the world of IT: latency and performance. The relentless pursuit of low latency and high performance is embedded in the DNA of IT leaders and telco operators alike.

New horizons, familiar challenges

Consumer adoption of 5G and edge is enviably straightforward: Do I live in a coverage area, and do I need a new phone?

Obviously, there’s a little more to it from both the operator and broader enterprise perspective. While the potential of 5G-enabled edge architectures and applications is vast – and potentially lucrative – there will be some challenges for IT and business leaders along the way. Many of them may seem familiar.

For one, the 5G-edge combo in an enterprise context invariably means deploying and managing not just IT but OT (operational technology), and lots of it. As with other major initiatives, there will be a lot of moving parts and pieces to manage.

“Governance and scale will continue to be a challenge given the disparate people and systems involved – OT versus IT,” says Mishra from SAS. “Decision-making around what workloads live in the cloud versus the edge and a lack of understanding about the security profile for an edge-focused application will also be a challenge.”

Scale may be the biggest mountain to climb. It will require pinpoint planning, according to Kris Murphy, senior principal software engineer at Red Hat.

“Standardize ruthlessly, minimize operational ‘surface area,’ pull whenever possible over push, and automate the small things,” Murphy says.
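
To illustrate the “pull whenever possible over push” advice, here is a minimal, hypothetical sketch of a pull-based agent: each edge site periodically fetches its desired state and reconciles locally, so a central system never has to push changes to thousands of endpoints. The fetch step is a hard-coded placeholder so the sketch runs standalone.

```python
# A minimal sketch of a pull-based reconciliation agent for edge fleets.
# Names and the fetch step are hypothetical placeholders.
import json
import time

def fetch_desired_state():
    # Placeholder: in practice this would pull from a git repo or registry
    # over HTTPS; hard-coded here so the sketch runs standalone.
    return {"app_version": "1.4.2", "sampling_hz": 10}

def read_current_state(path="state.json"):
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}

def reconcile(current, desired, path="state.json"):
    if current != desired:
        # Apply the change locally (restart services, update configs, etc.).
        with open(path, "w") as f:
            json.dump(desired, f)
        print("drift detected, converged to desired state")

while True:
    reconcile(read_current_state(), fetch_desired_state())
    time.sleep(300)   # small, automated, repeatable check every 5 minutes
```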

5G and edge will also breed another familiar issue for CIOs – the occasional gap between what a vendor or provider says it can do and what it can actually do in your organization. Joshi says this is one important area that enterprise leaders can prepare for now, while the underlying technologies advance and mature.

“What will be more important for enterprise IT is to enhance its business understanding of operational technology, as well as be comfortable working with a variety of network equipment providers, cloud vendors, and IT service providers,” Joshi says.

Lock-in could be another familiar challenge for enterprise IT, Joshi says, underlining the need for rigorous evaluation of potential platforms and providers.

“Open source adoption and openness of the value chain, [including] RAN software, towers, base stations, cloud compute, and storage” will be an important consideration, Joshi says, as well as a nose for finding substance amid the hype.

That brings us back to use cases. If you’re unsure about what’s next for 5G and edge in your organization, then start with the potential business applications. That should ultimately guide any further strategic development. Joshi sees growing adoption of remote training using digital twins, remote health consultations, media streaming, and real-time asset monitoring, among other uses.

“Any enabling factors in 5G such as small cells and low latency, strongly align to an edge architecture,” Joshi says. “However, the intention should not be to enable 5G, but to have a suitable business scenario where 5G adoption can enhance impact.”

To learn more, visit Red Hat here.

Edge Computing

Volume gets a lot of the press when it comes to data. Size is right there in the once-ubiquitous term “Big Data.”

This isn’t a new thing. Back when I was an IT industry analyst, I once observed in a research note that marketing copy placed way too much emphasis on the bandwidth numbers associated with big server designs, and not enough on the time that elapses between a request for data and its initial arrival – which is to say the latency.
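
A rough back-of-the-envelope calculation makes the point: for small requests, round-trip latency dominates response time no matter how large the bandwidth number on the spec sheet. The figures below are illustrative, not measurements.

```python
# Rough, illustrative calculation: for small requests, round-trip latency,
# not headline bandwidth, dominates response time.
def response_time_ms(payload_bytes, bandwidth_gbps, rtt_ms):
    transfer_ms = payload_bytes * 8 / (bandwidth_gbps * 1e9) * 1000
    return rtt_ms + transfer_ms

small_request = 10_000  # a 10 KB sensor reading
print(response_time_ms(small_request, bandwidth_gbps=10, rtt_ms=40))   # ~40.01 ms
print(response_time_ms(small_request, bandwidth_gbps=100, rtt_ms=40))  # still ~40 ms
print(response_time_ms(small_request, bandwidth_gbps=10, rtt_ms=2))    # ~2.01 ms nearby
```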

We’ve seen a similar dynamic with respect to IoT and edge computing. With ever-increasing quantities of data collected by ever-increasing numbers of sensors, surely there’s a need to filter or aggregate that data rather than shipping it all over the network to a centralized data center for analysis.

Indeed there is. Red Hat recently had Frost and Sullivan conduct 40 interviews with line-of-business executives (along with a few in IT roles) from organizations with more than 1,000 employees globally. They represented companies in manufacturing, energy, and utilities split between North America, Germany, China, and India. When asked about their main triggers to implement edge computing, bandwidth issues did come up, as did issues around having too much data at a central data center.

 

Latency, connectivity top the list

However, our interview subjects placed significantly more emphasis on latency, and more broadly, their dependence on network connectivity. Triggers such as the need to improve connectivity, increase computing speed, process data faster and on-site, and avoid data latency resulting from transferring data to the cloud and back were common themes.

For example, a decision-maker in the oil and gas industry told us that moving compute out to the edge “improves your ability to react to any occasional situation because you no longer have to take everything in a centralized manner. You can take the local data, run it through your edge computing framework or models, and make real-time decisions. The other is in terms of the overall security. Now that your data is not leaving, and it is both produced and consumed locally, the risk of somebody intercepting the data while it is traversing on the network pretty much goes away.”

For another data point, a Red Hat and Pulse.qa IT community poll found that 45% of 239 respondents said that lower latency was the biggest advantage of deploying workloads to the edge. (And the number-two result was optimized data performance, which is at least related.) Reduced bandwidth? That was down in the single digits (8%).

Latency also loomed large when we asked our interview subjects what they saw as the top benefits of edge computing.

The top benefits cited were related to immediate access to data: having data accessible in real time so it can be processed and analyzed immediately on-site, eliminating delays caused by data transfers, and having 24/7 access to reliable data, which raises the possibility of constant analysis and quick results. A common theme was actionable local analysis.

Cost as a benefit of edge computing did pop up here and there – especially in the context of reducing cloud usage and related costs. However, consistent with other research we’ve done, cost wasn’t cited as a primary driver or benefit of edge computing. Rather, the drivers are mostly data access and related gains.

Hybrid cloud, data are drivers

Why are we seeing this increased emphasis on edge computing and associated local data processing? Our interviews and other research suggest that two reasons are probably particularly important.

The first is that, 15 years after the first public cloud rollout, IT organizations have increasingly adopted an explicit hybrid cloud strategy. Red Hat’s 2022 Global Tech Outlook survey found it was the most common cloud strategy among the more than 1,300 IT decision-maker respondents.

Public cloud-first was the least common cloud strategy and was down a tick from the previous year’s survey. This is consistent with data we’ve seen in other surveys.

None of this is to say that public clouds are in any way a passing fad. But edge computing has helped to focus attention on computing (and storage) out at the various edges of the network rather than totally centralized at a handful of large public cloud providers. Edge computing has added a rationale for why public clouds will not be the only place where computing will happen.

The second reason is that we’re doing more complex and more data-intensive tasks out at the edge. Our interviewees told us that one main trigger for implementing edge computing is the need to embrace digital transformation and implement solutions such as IoT, AI, connected cars, machine learning, and robotics. These applications often have a cloud component as well. For example, it’s common to train machine-learning models in a cloud environment but then run them at the edge.
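
A minimal sketch of that train-in-the-cloud, run-at-the-edge pattern, using scikit-learn and joblib purely as stand-ins for whatever stack an organization actually uses, with toy data:

```python
# Sketch of the train-in-cloud, run-at-edge pattern described above.
# scikit-learn and joblib are stand-ins; data and model are toys.
import joblib
import numpy as np
from sklearn.linear_model import LogisticRegression

# --- In the cloud: train on historical sensor data and export the model. ---
X = np.array([[0.2, 1.1], [0.9, 3.0], [0.4, 1.3], [1.2, 3.5]])  # toy features
y = np.array([0, 1, 0, 1])                                      # toy labels
model = LogisticRegression().fit(X, y)
joblib.dump(model, "anomaly_model.joblib")

# --- At the edge: load the exported model and score data locally. ---
edge_model = joblib.load("anomaly_model.joblib")
new_reading = np.array([[1.0, 3.2]])
print(edge_model.predict(new_reading))   # inference without a cloud round trip
```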

We’re even starting to see Kubernetes-based cluster deployments on the edge using a product such as Red Hat OpenShift. Doing so not only provides scalability and flexibility for edge deployments but also provides a consistent set of tools and processes from the data center to the edge.

It’s not surprising that data locality and latency are important characteristics of a hybrid cloud of which an edge deployment may be a part. Observability and monitoring matter too. So do provisioning and other aspects of management. And yes, bandwidth – and the reliability of links – plays into the mix. That’s because a hybrid cloud is a form of a distributed system, so if something matters in any other computer system, it probably matters in a distributed system too. Maybe even more so.

To learn more, visit Red Hat here.

Edge Computing

The successful journey to cloud adoption for Banking, Financial Services, and Insurance (BFSI) enterprises cannot be completed without addressing the complexities of core business systems. Many businesses have been able to migrate corporate support systems – such as ERP and CRM – as well as IT security and infrastructure systems to the public cloud. However, security concerns, legacy architecture, country-specific regulations, latency requirements, and transition challenges continue to keep core systems from cloud adoption.

BFSI enterprises will be unable to realize the full potential of cloud until their core business systems use cloud platforms and services. Firms are looking for solutions that allow them to continue operating out of their own data centers while also gaining access to shared cloud infrastructure delivered in those data centers.

To address these challenges, leading cloud service providers have launched hybrid integrated solution offerings that allow enterprises to access cloud services from their own data centers via shared infrastructure provided by the cloud providers. These offerings let enterprises deploy their applications either on the shared cloud infrastructure or in their own data centers without having to rewrite code.

Enterprises have two options: run applications directly on the cloud or run computing and storage on-premises using the same APIs. To provide a consistent experience across on-premises and cloud environments, the on-premises cloud solution is linked to the nearest cloud service provider region. Cloud infrastructure, services, and updates, like public cloud services, are managed by cloud service providers.
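
To illustrate the “same APIs” point, the sketch below (an illustration, not a tested deployment) uses the AWS SDK for Python (boto3): launching an instance looks the same whether the target subnet is in an AWS Region or on an on-premises Outposts rack; only the subnet changes. The IDs shown are placeholders, not real resources.

```python
# Illustrative sketch: the same boto3 call launches capacity in a Region or on
# an on-premises Outposts rack; only the subnet/placement differs.
# All IDs below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

def launch(subnet_id):
    return ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI
        InstanceType="m5.large",
        SubnetId=subnet_id,                # regional subnet or Outpost subnet
        MinCount=1,
        MaxCount=1,
    )

# launch("subnet-in-region")     # lands in the AWS Region
# launch("subnet-on-outpost")    # lands on the on-premises Outposts rack
```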

AWS Outposts is a leading hybrid integrated solution that provides enterprises with seamless public cloud services in their data centers. Outposts is a managed AWS service that includes compute and storage. It gives enterprises an option to stay close to the data center, since many BFSI core systems require very low latency, as well as to the ecosystem of business applications residing in on-premises data centers.

AWS Outposts will deliver value to BFSI enterprises

Several BFSI enterprises use appliance-based databases for high performance and high availability computing. In the short and medium term, it is unlikely that these enterprises will migrate their appliance-based databases to the cloud; however, AWS provides an option to run these systems on Outposts while keeping the databases on the appliances. Outposts also assists in the migration of databases from proprietary, expensive operating systems and hardware to more cost-effective hardware options.

Other use cases, such as commercial off-the-shelf BFSI products that require high-end servers, can be easily moved to AWS Outposts, lowering the total cost of ownership. As a strategy, legacy monolithic core applications that require reengineering can be easily moved to AWS Outposts first and then modernized incrementally onto the public cloud.

A unified hybrid cloud system is the way forward for BFSI enterprises

AWS Outposts offers BFSI enterprises a solution that combines public and private infrastructure, consistent service APIs, and centralized management interfaces. The AWS Outposts service can help BFSI enterprises deal with the many expensive appliance-based core systems that run on proprietary, vendor-provided operating systems and hardware.

AWS Outposts will allow BFSI enterprises to gradually migrate to the public cloud while maintaining core application dependencies. AWS Outposts enables a true hybrid cloud for BFSI enterprises.

Author Bio

TCS

Ph: +91 9841412619

E-mail: asim.kar@tcs.com

Asim Kar has more than 25 years of IT experience spanning large-scale transformation programs and running technology organizations in the BFSI migration and reengineering space. He currently heads the cloud technology focus group in BFSI and leads complex transformation projects in traditional technologies across telecom, insurance, banking, and financial services programs.

To learn more, visit us here.

Hybrid Cloud

Many people associate high-performance computing (HPC), also known as supercomputing, with far-reaching government-funded research or consortia-led efforts to map the human genome or to pursue the latest cancer cure.

But HPC can also be tapped to advance more traditional business outcomes — from fraud detection and intelligent operations to helping advance digital transformation. The challenge: making complex compute-intensive technology accessible for mainstream use.

As companies digitally transform and steer toward becoming data-driven businesses, there is a need for increased computing horsepower to manage and extract business intelligence and drive data-intensive workloads at scale. The rise of artificial intelligence (AI), machine learning (ML), and real-time analytics applications, often deployed at the edge, can utilize HPC resources to unlock insights from data and efficiently run increasingly large and more complex models and simulations.

The convergence of HPC with AI-based analytics is impacting nearly every industry and across a wide range of applications, including space exploration, drug discovery, financial modeling, automotive design, and systems engineering.

“HPC is becoming a utility in our lives — people aren’t thinking about what it takes to design this tire, validate a chip design, parse and analyze customer preferences, do risk management, or build a 3D structure of the COVID-19 virus,” notes Max Alt, distinguished technologist and director of Hybrid HPC at HPE. “HPC is everywhere, but you don’t think about it, because it’s hidden at the core.”

HPC’s scalable architecture is particularly well suited for AI applications, given the nature of computation required and the unpredictable growth of data associated with these workflows. HPC’s use of graphics-processing-unit (GPU) parallel processing power — coupled with its simultaneous processing of compute, storage, interconnects, and software — raises the bar on AI efficiencies. At the same time, such applications and workflows can operate and scale more readily.

Even with widespread usage, there is more opportunity to leverage HPC for better and faster outcomes and insights. HPC architecture — typically clusters of CPUs and GPUs working in parallel and connected to a high-speed network and data storage system — is expensive, requiring a significant capital investment. HPC workloads are typically associated with vast data sets, which means that public cloud can be an expensive option given latency and performance requirements. In addition, data security and data gravity concerns often rule out public cloud.

Another major barrier to more widespread deployment: a lack of in-house specialized expertise and talent. HPC infrastructure is far more complex than traditional IT infrastructure, requiring specialized skills for managing, scheduling, and monitoring workloads. “You have tightly coupled computing with HPC, so all of the servers need to be well synchronized and performing operations in parallel together,” Alt explains. “With HPC, everything needs to be in sync, and if one node goes down, it can fail a large, expensive job. So you need to make sure there is support for fault tolerance.”
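
The tight coupling Alt describes shows up in even the smallest distributed job. The sketch below, assuming an MPI installation and the mpi4py bindings, has every rank compute a partial result and then meet at a collective reduction; if any rank disappears, the whole job stalls or fails.

```python
# Minimal sketch of tightly coupled HPC work using the mpi4py bindings for MPI:
# every rank computes a piece, then all ranks synchronize at a collective step.
# Run with, e.g.:  mpirun -n 4 python coupled_job.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

local_sum = sum(range(rank * 1000, (rank + 1) * 1000))  # this rank's share

# Collective synchronization point: no rank proceeds until all contribute.
global_sum = comm.allreduce(local_sum, op=MPI.SUM)

if rank == 0:
    print(f"{size} ranks combined result: {global_sum}")
```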

HPE GreenLake for HPC Is a Game Changer

An as-a-service approach can address many of these challenges and unlock the power of HPC for digital transformation. HPE GreenLake for HPC enables companies to unleash the power of HPC without having to make big up-front investments on their own. This as-a-service-based delivery model enables enterprises to pay for HPC resources based on the capacity they use. At the same time, it provides access to third-party experts who can manage and maintain the environment in a company-owned data center or colocation facility while freeing up internal IT departments.

“The trend of consuming what used to be a boutique computing environment now as-a-service is growing exponentially,” Alt says.

HPE GreenLake for HPC bundles the core components of an HPC solution (high-speed storage, parallel file systems, low-latency interconnect, and high-bandwidth networking) in an integrated software stack that can be assembled to meet an organization’s specific workload needs.

As part of the HPE GreenLake edge-to-cloud platform, HPE GreenLake for HPC gives organizations access to turnkey and easily scalable HPC capabilities through a cloud service consumption model that’s available on-premises. The HPE GreenLake platform experience provides transparency for HPC usage and costs and delivers self-service capabilities; users pay only for the HPC resources they consume, and built-in buffer capacity allows for scalability, including unexpected spikes in demand. HPE experts also manage the HPC environment, freeing up IT resources and delivering access to specialized performance tuning, capacity planning, and life cycle management skills.

To meet the needs of the most demanding compute and data-intensive workloads, including AI and ML initiatives, HPE has turbocharged HPE GreenLake for HPC with purpose-built HPC capabilities. Among the more notable features are expanded GPU capabilities, including NVIDIA Tensor Core models; support for high-performance HPE Parallel File System Storage; multicloud connector APIs; and HPE Slingshot, a high-performance Ethernet fabric designed to meet the needs of data-intensive AI workloads. HPE also released lower entry points to HPC to make the capabilities more accessible for customers looking to test and scale workloads.

As organizations pursue HPC capabilities, they should consider the following:

- Stop thinking of HPC in terms of a specialized boutique technology; think of it more as a common utility used to drive business outcomes.
- Look for HPC options that are supported by a rich ecosystem of complementary tools and services to drive better results and deliver customer excellence.
- Evaluate the HPE GreenLake for HPC model. Organizations can dial capabilities up and down, depending on need, while simplifying access and lowering costs.

HPC horsepower is critical, as data-intensive workloads, including AI, take center stage. An as-a-service model democratizes what’s traditionally been out of reach for most, delivering an accessible path to HPC while accelerating data-first business.

For more information, visit https://www.hpe.com/us/en/greenlake/high-performance-compute.html

High-Performance Computing

One type of infrastructure that has gained popularity is hyperconverged infrastructure (HCI). Interest in HCI and other hybrid technologies such as Azure Arc is growing as enterprise organizations embrace hybrid and multi-cloud environments as part of their digital transformation initiatives. Survey data from IDC shows broad HCI adoption among enterprises of all sizes, with more than 80% of the organizations surveyed planning to move toward HCI for their core infrastructure going forward.

“Hyperconverged infrastructure has matured considerably in the past decade, giving enterprises a chance to simplify the way they deploy, manage, and maintain IT infrastructure,” Carol Sliwa, Research Director with IDC’s Infrastructure Platforms and Technologies Group, said on a recent webinar sponsored by Microsoft and Intel.

“Enterprises need to simplify deployment and management to stay agile to gain greater business benefit from the data they’re collecting,” Sliwa said. “They also need infrastructure that can deploy flexibly and unify management across hybrid cloud environments. Software-defined HCI is well suited to meet their hybrid cloud needs.”

IDC research shows that most enterprises currently use HCI in core data centers and co-location sites, often for mission-critical workloads. Sliwa also expects usage to grow in edge locations as enterprises modernize their IT infrastructure to simplify deployment, management, and maintenance of new IoT, analytics, and business applications.

Sliwa was joined on the webinar by speakers from Microsoft and Intel, who discussed the benefits of HCI for managing and optimizing both hybrid/multi-cloud and edge computing environments.

Jeff Woolsey, Principal Program Manager for Azure Edge & Platform at Microsoft, explained how Microsoft’s Azure Stack HCI and Azure Arc enable consistent cloud management across cloud and on-premises environments.

“Azure Stack HCI provides central monitoring and comprehensive configuration management, built into the box, so that your cloud and on-premises HCI infrastructure are the same,” Woolsey said. “That ultimately means lower OPEX because instead of training and retraining on bespoke solutions, you’re using and managing the same solution across cloud and on-prem.”

Azure Arc provides a bridge for the Azure ecosystem of services and applications to run on a variety of hardware and IoT devices across Azure, multi-cloud, data centers, and edge environments, Woolsey said. The service provides a consistent and flexible development, operations, and security model for both new and existing applications, allowing customers “to innovate anywhere,” he added.

Christine McMonigal, Director of Hyperconverged Marketing at Intel, explained how the Intel-Microsoft partnership has resulted in consistent, secure, end-to-end infrastructure that delivers a number of price/performance benefits to customers.

“We see how customers are demanding a more scalable and flexible compute infrastructure to support their increasing and changing workload demands,” said McMonigal. “Our Intel Select Solutions for Microsoft Azure Stack HCI have optimized configurations for the edge and for the data center. These reduce your time to evaluate, select, and purchase, streamlining the time to deploy new infrastructure.”

Watch the full webinar here.

For more information on how HCI use is growing for mission-critical workloads, read the IDC Spotlight paper.

Edge Computing, Hybrid Cloud

For decades, organizations have tried to unlock the collective knowledge contained within their people and systems. And the challenge is getting harder, since every year, massive amounts of additional information are created for people to share. We’ve reached a point at which individuals are unable to consume, understand, or even find half the information that is available to them.